Integrating Volterra series model and deep neural networks to equalize nonlinear power amplifiers

Information

  • Patent Grant
  • Patent Number
    11,855,813
  • Date Filed
    Monday, September 19, 2022
  • Date Issued
    Tuesday, December 26, 2023
Abstract
The nonlinearity of power amplifiers (PAs) has been a severe constraint on the performance of modern wireless transceivers. This problem is even more challenging for the fifth generation (5G) cellular system, since 5G signals have an extremely high peak to average power ratio. Nonlinear equalizers that exploit both deep neural networks (DNNs) and Volterra series models are provided to mitigate PA nonlinear distortions. The DNN equalizer architecture consists of multiple convolutional layers. The input features are designed according to the Volterra series model of nonlinear PAs. This enables the DNN equalizer to effectively mitigate nonlinear PA distortions while avoiding over-fitting under limited training data. The nonlinear equalizers demonstrate superior performance over conventional nonlinear equalization approaches.
Description
FIELD OF THE INVENTION

The present invention relates to the field of equalization of nonlinear radio frequency power amplifiers, and more particularly to a neural network implementation of radio frequency power amplifier equalization.


BACKGROUND OF THE INVENTION

Most modern wireless communication systems, including the fifth generation (5G) cellular systems, use multi-carrier or OFDM (orthogonal frequency division multiplexing) modulations whose signals have an extremely high peak to average power ratio (PAPR). This makes it challenging to enhance the efficiency of power amplifiers (PAs). Signals with high PAPR need a linear power amplifier response in order to reduce distortion. Nevertheless, PAs have optimal power efficiency only in the nonlinear saturated response region. In practice, the PAs in wireless transceivers have to work with high output backoff (OBO) in order to suppress nonlinear distortions, which unfortunately results in a severe reduction of power efficiency [1]. This problem, which originates from the nonlinearity of PAs, has been one of the major constraints on enhancing the power efficiency of modern communication systems.


Various strategies have been investigated to mitigate this problem. The first strategy is to reduce the PAPR of the transmitted signals. Many techniques have been developed for this purpose, such as signal clipping, peak cancellation, and error waveform subtraction [2]. For OFDM signals, pilot tones and unmodulated subcarriers can be exploited to reduce PAPR with some special pre-coding techniques [3].


The second strategy is to linearize the PAs at the transmitters. One of the dominating practices today is to insert a digital pre-distorter (DPD) before the PA, which distorts the signals appropriately so as to compensate for the nonlinear PA response [4] [5] [6]. DPD has been applied widely in many modern transmitters [2].


The third strategy is to mitigate the nonlinear PA distortions at the receivers via post-distorter equalization [7] [8] [9]. The solution presented in [10] develops a Bayesian signal detection algorithm based on the nonlinear response of the PAs. However, this method works for the simple “AM-AM AM-PM” nonlinear PA model only. Alternatively, as a powerful nonlinear modeling tool, artificial neural networks have also been studied for both nonlinear modeling of PAs [11] [12] and nonlinear equalization [13] [14] [15].


One of the major design goals for the 5G systems is to make the communication systems more power efficient. This needs efficient PAs, which is unfortunately more challenging since 5G signals have much higher PAPR and wider bandwidth [16] [17]. This is an especially severe problem for cost and battery limited devices in massive machine-type communications or the internet of things (IoT). Existing nonlinear PA mitigation strategies may not be sufficient. PAPR can be reduced only to some extent. DPD is too complex and costly for small and low-cost 5G devices. Existing DPD and equalization techniques have only moderate nonlinear distortion compensation capabilities.


As a matter of fact, the nonlinear equalization strategy is more attractive to massive MIMO and millimeter wave transmissions due to the large number of PAs needed [18] [19] [20]. Millimeter wave transmissions require much higher transmission power to compensate for severe signal attenuation. Considering the extremely high requirement on PA power efficiency and the large number of PAs in a transmitter, the current practice of using DPD may not be appropriate due to implementation complexity and cost.


There are various types of intermodulation that can be found in radio systems (see Rec. ITU-R SM.1446):

    • Type 1, single channel intermodulation: the wanted signal is distorted by virtue of non-linearities in the transmitter.
    • Type 2, multichannel intermodulation: the wanted signals of multiple channels are distorted by virtue of non-linearities in the same transmitter.
    • Type 3, inter-transmitter intermodulation: one or more transmitters on a site intermodulate, either within the transmitters themselves or within a non-linear component on site, to produce intermodulation products.
    • Type 4, intermodulation due to active antennas: the multicarrier operating mode of an active antenna, along with the non-linearity of its amplifiers, originates spurious emissions in the form of intermodulation signals.
    • Type 5, intermodulation due to passive circuits: transmitters share the same radiating element and intermodulation occurs due to non-linearities of passive circuits.

See, Rep. ITU-R SM.2021.


An amplifier can be characterized by a Taylor series of the generalized transfer function [32]

i_0 + k_1e_IN + k_2e_IN^2 + k_3e_IN^3 + k_4e_IN^4 + k_5e_IN^5 + . . .


where i_0 is the quiescent output current, k_1, k_2, etc. are coefficients, and e_IN represents the input signal. When two sinusoidal frequencies ω_1=2πƒ_1 and ω_2=2πƒ_2 of amplitudes a_1 and a_2 are applied to the input of the amplifier, the input signal is:

e_IN = a_1 cos ω_1t + a_2 cos ω_2t


and the output i_OUT may be shown to be the sum of the DC components:

i_OUT = i_0 + (k_2/2)(a_1^2 + a_2^2) + (k_4/8)(3a_1^4 + 12a_1^2a_2^2 + 3a_2^4)

fundamental components:

+ [k_1a_1 + (3/4)k_3a_1^3 + (3/2)k_3a_1a_2^2 + (5/8)k_5a_1^5 + (15/4)k_5a_1^3a_2^2 + (15/8)k_5a_1a_2^4] cos ω_1t
+ [k_1a_2 + (3/4)k_3a_2^3 + (3/2)k_3a_1^2a_2 + (5/8)k_5a_2^5 + (15/4)k_5a_1^2a_2^3 + (15/8)k_5a_1^4a_2] cos ω_2t

2nd order components:

+ [(1/2)k_2a_1^2 + (1/2)k_3a_1^4 + (3/2)k_4a_1^2a_2^2] cos 2ω_1t
+ [(1/2)k_2a_2^2 + (1/2)k_3a_2^4 + (3/2)k_4a_1^2a_2^2] cos 2ω_2t
+ [k_2a_1a_2 + (3/2)k_4a_1^3a_2 + (3/2)k_4a_1a_2^3] cos(ω_1 ± ω_2)t

3rd order components:

+ [(1/4)k_3a_1^3 + (5/16)k_5a_1^5 + (5/4)k_5a_1^3a_2^2] cos 3ω_1t
+ [(1/4)k_3a_2^3 + (5/16)k_5a_2^5 + (5/4)k_5a_1^2a_2^3] cos 3ω_2t
+ [(3/4)k_3a_1^2a_2 + (5/4)k_5a_1^4a_2 + (15/8)k_5a_1^2a_2^3] cos(ω_1 ± 2ω_2)t
+ [(3/4)k_3a_1a_2^2 + (5/4)k_5a_1a_2^4 + (15/8)k_5a_1^3a_2^2] cos(ω_2 ± 2ω_1)t

4th order components:

+ (1/8)k_4a_1^4 cos 4ω_1t + (1/8)k_4a_2^4 cos 4ω_2t + (1/2)k_4a_1^3a_2 cos(3ω_1 ± ω_2)t + (3/4)k_4a_1^2a_2^2 cos(2ω_1 ± 2ω_2)t + (1/2)k_4a_1a_2^3 cos(ω_1 ± 3ω_2)t

and 5th order components:

+ (1/16)k_5a_1^5 cos 5ω_1t + (1/16)k_5a_2^5 cos 5ω_2t + (5/16)k_5a_1^4a_2 cos(4ω_1 ± ω_2)t + (5/8)k_5a_1^3a_2^2 cos(3ω_1 ± 2ω_2)t + (5/8)k_5a_1^2a_2^3 cos(2ω_1 ± 3ω_2)t + (5/16)k_5a_1a_2^4 cos(ω_1 ± 4ω_2)t

The series may be expanded further for terms in k_6e_IN^6 etc. if desired. All the even order terms produce outputs at harmonics of the input signal, and the sum and difference products are well removed in frequency far from the input signal. The odd order products, however, produce signals near the input frequencies, at ƒ_1±2ƒ_2 and ƒ_2±2ƒ_1. Therefore, the odd order intermodulation products cannot be removed by filtering, only by improvement in linearity.


Assuming class A operation, a_1=a_2=a, and that k_4 and k_5 are very small, the 3rd order intermodulation product IM3 becomes proportional to a^3, that is, to the cube of the input amplitude; the graph of the intermodulation products will therefore have a slope of 3 on a logarithmic scale while the wanted signal will have a slope of 1. Second order products IM2 can be similarly calculated, and the graph for these has a slope of two. The points where these graphs cross the extrapolated fundamental are called the 3rd order intercept point IP3 and the 2nd order intercept point IP2, respectively. IP3 is the point where the intermodulation product is equal to the fundamental signal. This is a purely theoretical consideration, but gives a very convenient method of comparing devices. For example, a device with intermodulation products of −40 dBm at 0 dBm input power is to be compared with one having intermodulation products of −70 dBm for −10 dBm input. By reference to the intercept point, it can be seen that the two devices are equal.
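
To make the comparison in this example explicit, the input-referred third-order intercept point can be computed from a single measurement using the ideal 3:1 slope relationship (a standard calculation, added here only as a worked illustration):

IIP3 = P_in + (P_in − P_IM3)/2

Device 1: IIP3 = 0 + (0 − (−40))/2 = +20 dBm
Device 2: IIP3 = −10 + (−10 − (−70))/2 = +20 dBm

Both devices share the same +20 dBm intercept point, which is the sense in which they are equal.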


The classical description of intermodulation of analogue radio systems deals with a two-frequency input model to a memoryless non-linear device. This non-linear characteristic can be described by a function ƒ(x), which yields the input-output relation of the element device. The function, ƒ, is usually expanded in a Taylor-series and thus produces the harmonics as well as the linear combinations of the input frequencies. This classical model is well suited to analogue modulation schemes with dedicated frequency lines at the carrier frequencies. The system performance of analogue systems is usually measured in terms of signal-to-noise (S/N) ratio, and the distorting intermodulation signal can adequately be described by a reduction of S/N.


With digital modulation methods, the situation is changed completely. Most digital modulation schemes have a continuous signal spectrum without preferred lines at the carrier frequencies. The system degradation due to intermodulation is measured in terms of bit error ratio (BER) and depends on a variety of system parameters, e.g. the special modulation scheme which is employed. For estimation of the system performance in terms of BER a rigorous analysis of non-linear systems is required. There are two classical methods for the analysis and synthesis of non-linear systems: the first one carries out the expansion of the signal in a Volterra series [27]. The second due to Wiener uses special base functionals for the expansion.


Both methods lead to a description of the non-linear system by higher order transfer functions having n input variables, depending on the order of the non-linearity. Two data signals x_1(t) and x_2(t), originated from x(t), are linearly filtered by the devices with the impulse responses h_a(t) and h_b(t) in adjacent frequency bands. The composite summed signal y is thereafter distorted by an imperfect square-law device which might model a transmit amplifier. The input-output relation of the non-linear device is given by: z(t) = y(t) + a·y^2(t)


The intermodulation noise falling into the signal's own frequency band, however, is caused by non-linearities of third order. For this reason, the imperfect square-law device is now replaced by an imperfect cubic device with the input-output relation: z(t) = y(t) + a·y^3(t)


There are several contributions of the intermodulation noise falling into the used channels near ƒ0.


Linearization of a transmitter system may be accomplished by a number of methods:

    • Feedforward linearization: This technique compares the amplified signal with an appropriately delayed version of the input signal and derives a difference signal, representing the amplifier distortions. This difference signal is in turn amplified, and subtracted from the final HPA output. The main drawback of the method is the requirement for a 2nd amplifier—the technique can, however, deliver an increase in output power of some 3 dB when used with a TWT.
    • Feedback linearization: In audio amplifiers, linearization may readily be achieved by the use of feedback, but this is less straightforward at high RF frequencies due to limitations in the available open-loop amplifier gain. It is possible, however, to feed back a demodulated form of the output to generate adaptive pre-distortion in the modulator. It is clearly not possible to apply such an approach in a bent-pipe transponder, however, where the modulator and HPA are rather widely separated.
    • Predistortion: Rather than using a method that responds to the actual instantaneous characteristics of the HPA, it is common to pre-distort the input signal to the amplifier, based on a priori knowledge of the transfer function. Such pre-distortion may be implemented at RF, IF or at baseband. Baseband linearizers, often based on the use of look-up tables held in firmware memory are becoming more common with the ready availability of VLSI techniques, and can offer a compact solution. Until recently, however, it has been easier to generate the appropriate pre-distortion function with RF or IF circuitry.


RF amplifier linearization techniques can be broadly divided into two main categories:

    • Open-loop techniques, which have the advantage of being unconditionally stable, but have the drawback of being unable to compensate for changes in the amplifier characteristics.
    • Closed-loop techniques, which are inherently self-adapting to changes in the amplifier, but can suffer from stability problems.


Predistortion involves placing a compensating non-linearity into the signal path, ahead of the amplifier to be linearized. The signal is thus predistorted before being applied to the amplifier. If the predistorter has a non-linearity which is the exact inverse of the amplifier non-linearity, then the distortion introduced by the amplifier will exactly cancel the predistortion, leaving a distortionless output. In its simplest analogue implementation, a practical predistorter can be a network of resistors and non-linear elements such as diodes or transistors. Although adaptive predistortion schemes have been reported, where the non-linearity is implemented in digital signal processing (DSP), they tend to be very computationally or memory intensive, and power hungry.


Feedforward [28] is a distortion cancellation technique for power amplifiers. The error signal generated in the power amplifier is obtained by summing the loosely coupled signal and a delayed inverted input signal, so that the input signal component is cancelled. This circuit is called the signal cancelling loop. The error signal is amplified by an auxiliary amplifier, and is then subtracted from the delayed output signal of the power amplifier, so that the distortion at the output is cancelled. This circuit is called the error cancelling loop. It is necessary to attenuate the input signal component lower than the error signal at the input of the auxiliary amplifier, so that the residual main signal does not cause overloading of the auxiliary amplifier, or does not cancel the main signal itself at the equipment output.


Negative feedback [29] is a well-known linearization technique and is widely used in low frequency amplifiers, where stability of the feedback loop is easy to maintain. With multi-stage RF amplifiers however, it is usually only possible to apply a few dB of overall feedback before stability problems become intractable [30]. This is mainly due to the fact that, whereas at low frequency it can be ensured that the open-loop amplifier has a dominant pole in its frequency response (guaranteeing stability), this is not feasible with RF amplifiers because their individual stages generally have similar bandwidths. Of course, local feedback applied to a single RF stage is often used, but since the distortion reduction is equal to the gain reduction, the improvement obtained is necessarily small because there is rarely a large excess of open loop gain available.


At a given center frequency, a signal may be completely defined by its amplitude and phase modulation. Modulation feedback exploits this fact by applying negative feedback to the modulation of the signal, rather than to the signal itself. Since the modulation can be represented by baseband signals, we can successfully apply very large amounts of feedback to these signals without the stability problems that beset direct RF feedback. Early applications of modulation feedback used amplitude (or envelope) feedback only, applied to valve amplifiers [31], where amplitude distortion is the dominant form of non-linearity. With solid-state amplifiers however, phase distortion is highly significant and must be corrected in addition to the amplitude errors.


For estimation of the system performance in terms of BER a rigorous analysis of non-linear systems is required. There are two classical methods for the analysis and synthesis of non-linear systems: the first one carries out the expansion of the signal in a Volterra series [27]. The second due to Wiener uses special base functionals for the expansion. These are the Wiener G-functionals which are orthogonal if white Gaussian noise excites the system. It is the special autocorrelation property of the white Gaussian noise which makes it so attractive for the analysis of non-linear systems. The filtered version of AWGN, the Brownian movement or the Wiener process, has special features of its autocorrelation which are governed by the rules for mean values of the products of jointly normal random variables.


The non-linear system output signal y(t) can be expressed by a Volterra series:

y(t) = H_0 + H_1 + H_2 + . . .

where H_i is the abbreviated notation of the Volterra operator operating on the input x(t) of the system. The first three operators are given in the following. The convolution integrals are integrated from −∞ to +∞.

H_0[x(t)] = h_0
H_1[x(t)] = ∫ h_1(τ) x(t−τ) dτ
H_2[x(t)] = ∫∫ h_2(τ_1, τ_2) x(t−τ_1) x(t−τ_2) dτ_1 dτ_2


The kernels of the integral operator can be measured by a variation of the excitation times of input pulses, e.g., for the second order kernel h_2(τ_1, τ_2): x(t) = δ(t−τ_1) + δ(t−τ_2). A better method is the measurement of the kernel by the cross-correlation of exciting white Gaussian noise n(t) as input signal with the system output y_i(t). These equations hold, if:

Φ_nn(τ) = A·δ(τ)


is the autocorrelation function of the input signal x(t)=n(t) (white Gaussian noise) where A is the noise power spectral density. The first three kernels are given then by:

h_0 = \overline{y_0(t)}

h_1(σ) = (1/A) \overline{y_1(t) n(t−σ)}

h_2(σ_1, σ_2) = (1/(2A^2)) \overline{y_2(t) n(t−σ_1) n(t−σ_2)}
The overline denotes the expected value, or temporal mean value for ergodic systems.
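
As a concrete illustration of this cross-correlation method, the following Python sketch, which is not part of the original disclosure and uses an arbitrarily chosen test system, excites a weakly nonlinear system with white Gaussian noise and recovers h_0 and the first-order kernel h_1(σ) by time averaging:

import numpy as np

rng = np.random.default_rng(0)
A = 1.0                      # noise power spectral density (unit-variance white noise)
N = 200_000                  # number of samples
n = rng.normal(0.0, np.sqrt(A), N)

# Hypothetical system: linear FIR filter followed by a mild square-law term.
h_true = np.array([1.0, 0.5, 0.25])          # true first-order kernel
lin = np.convolve(n, h_true, mode="full")[:N]
y = lin + 0.1 * lin**2                        # weakly nonlinear output

# h0: mean value of the output over the record
h0_est = y.mean()

# h1(sigma): (1/A) * time-average of y(t) n(t - sigma)
max_lag = 5
h1_est = np.array([np.mean(y[sigma:] * n[:N - sigma]) / A
                   for sigma in range(max_lag)])

print("h0 estimate:", round(h0_est, 3))
print("h1 estimates:", np.round(h1_est, 3))   # approximately [1.0, 0.5, 0.25, 0, 0]

Higher-order kernels can be estimated in the same way from the formulas above, at the cost of longer noise records.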


The method can be expanded to higher order systems by using higher order Volterra operators Hn. However, the Volterra operators of different order are not orthogonal and, therefore, some difficulties arise at the expansion of an unknown system in a Volterra series.


These difficulties are circumvented by the Wiener G-functionals, which are orthogonal to all Volterra operators with lower order, if white Gaussian noise excites the system.











TABLE 1. Volterra kernels, their direct Fourier transforms, and Laplace transforms

Linear (1st order):
  H_1 = |H_1(ω)| = |∫_{−∞}^{+∞} h_1(τ_1)·exp(−jωτ_1)·dτ_1|,   H_1(p) = k_1·L_1(p)

Quadratic (2nd order):
  H_2 = |H_2(ω)| = |∫_{−∞}^{+∞}∫_{−∞}^{+∞} h_2(τ_1, τ_2)·exp[−jω(τ_1+τ_2)]·dτ_1 dτ_2|,   H_2(p) = k_2·L_1(2p)

Cubic (3rd order):
  H_3 = |H_3(ω)| = |∫_{−∞}^{+∞}∫_{−∞}^{+∞}∫_{−∞}^{+∞} h_3(τ_1, τ_2, τ_3)·exp[−jω(τ_1+τ_2+τ_3)]·dτ_1 dτ_2 dτ_3|,   H_3(p) = k_3·L_1(3p)



See, Panagiev, Oleg. “Adaptive compensation of the nonlinear distortions in optical transmitters using predistortion.” Radioengineering 17, no. 4 (2008): 55.


The first four Wiener G-functionals are:

G_0[x(t)] = k_0
G_1[x(t)] = ∫ k_1(τ_1) x(t−τ_1) dτ_1
G_2[x(t)] = ∫∫ k_2(τ_1, τ_2) x(t−τ_1) x(t−τ_2) dτ_1 dτ_2 − A ∫ k_2(τ_1, τ_1) dτ_1
G_3[x(t)] = ∫∫∫ k_3(τ_1, τ_2, τ_3) x(t−τ_1) x(t−τ_2) x(t−τ_3) dτ_1 dτ_2 dτ_3 − 3A ∫∫ k_3(τ_1, τ_2, τ_2) x(t−τ_1) dτ_1 dτ_2


For these functionals, the following holds:

\overline{H_m[n(t)] G_n[n(t)]} = 0 for m < n


if the input signal n(t) is white Gaussian noise.


The two data signals x1(t) and x2(t), from a single signal x(t), are linearly filtered by the devices with the impulse responses ha(t) and hb(t) in adjacent frequency bands. The composite summed signal y is hereafter distorted by an imperfect square-law device which might model a transmit-amplifier. The input-output relation of the non-linear device is given by:

z(t) = y(t) + a·y^2(t)


The output signal z(t) is therefore determined by:

z(t) = ∫ [h_a(τ) + h_b(τ)] x(t−τ) dτ + a {∫ [h_a(τ) + h_b(τ)] x(t−τ) dτ}^2


The first and second order Volterra-operators H1 and H2 for this example are accordingly determined by the kernels:

h_1(τ) = h_a(τ) + h_b(τ)


and

h_2(τ_1, τ_2) = h_a(τ_1)[h_a(τ_2) + h_b(τ_2)] + h_b(τ_1)[h_a(τ_2) + h_b(τ_2)]


This kernel h_2(τ_1, τ_2) is symmetric, so that:

h_2(τ_1, τ_2) = h_2(τ_2, τ_1)


The second order kernel transform H_2(ω_1, ω_2) is obtained by the two-dimensional Fourier transform with respect to τ_1 and τ_2, and can be obtained as:

H_2(ω_1, ω_2) = H_a(ω_1)[H_a(ω_2) + H_b(ω_2)] + H_b(ω_1)[H_a(ω_2) + H_b(ω_2)]

by elementary manipulations. H_a(ω) and H_b(ω) are the Fourier transforms of h_a(t) and h_b(t). With the transform X(ω) of the input signal x(t), an artificial two-dimensional transform Z^(2)(ω_1, ω_2) is obtained:

Z^(2)(ω_1, ω_2) = H_2(ω_1, ω_2) X(ω_1) X(ω_2)


with the two-dimensional inverse z^(2)(t_1, t_2). The output signal z(t) is:

z(t) = z^(2)(t, t)


The transform Z(ω) of z(t) can be obtained by convolution:

Z(ω) = (1/(2π)) ∫ Z^(2)(ω_1, ω − ω_1) dω_1
where the integration is carried out from −∞ to +∞.


The output z(t) can be as well represented by use of the Wiener G-functionals:

z(t) = G_0 + G_1 + G_2 + . . .


where G_i is the simplified notation of G_i[x(t)]. The first two operators are:

G_0[x(t)] = −A ∫ [h_a(τ) + h_b(τ)]^2 dτ = const
G_1[x(t)] = ∫ [h_a(τ) + h_b(τ)] x(t−τ) dτ

The operator G_i equals H_i in this example. For x(t) equal to white Gaussian noise, x(t) = n(t):

\overline{G_1[n(t)] h_0} = 0 holds for all h_0, especially:

\overline{G_1 G_0} = 0.

G_2[x(t)] = ∫∫ [h_a(τ_1)h_a(τ_2) + h_a(τ_1)h_b(τ_2) + h_b(τ_1)h_a(τ_2) + h_b(τ_1)h_b(τ_2)] x(t−τ_1) x(t−τ_2) dτ_1 dτ_2 − A ∫ [h_a(τ_1) + h_b(τ_1)]^2 dτ_1


The consequence is:

\overline{G_2 h_0} = h_0 ∫∫ [h_a(τ_1)h_a(τ_2) + h_a(τ_1)h_b(τ_2) + h_b(τ_1)h_a(τ_2) + h_b(τ_1)h_b(τ_2)] \overline{n(t−τ_1) n(t−τ_2)} dτ_1 dτ_2 − h_0 A ∫ [h_a(τ_1) + h_b(τ_1)]^2 dτ_1

and

\overline{G_2 h_0} = 0 because of \overline{n(t−τ_1) n(t−τ_2)} = A δ(τ_1 − τ_2)

and similarly:

\overline{G_2 H_1} = 0 for all H_1


This equation involves the mean of the product of three zero mean jointly Gaussian random variables, which is zero.


The Wiener kernels can be determined by exciting the system with white Gaussian noise and taking the average of some products of the system output and the exciting noise process n(t):

k_0 = \overline{z(t)}

k_1(τ) = (1/A) \overline{z(t) n(t−τ)}

and

k_2(τ_1, τ_2) = (1/(2A^2)) \overline{z(t) n(t−τ_1) n(t−τ_2)}

For RF-modulated signals the intermodulation distortion in the proper frequency band is caused by non-linearities of third order. For this reason, the imperfect square-law device is now replaced by an imperfect cubic device with the input-output relation:

z(t) = y(t) + a·y^3(t)


If only the intermodulation term which distorts the signal in its own frequency band is considered, the kernel transform of the third-order Volterra operator Z^(3)(ω_1, ω_2, ω_3) becomes then:

Z^(3)(ω_1, ω_2, ω_3) = a ∏_{i=1}^{3} [H_a(ω_i) + H_b(ω_i)] X(ω_i)
The intermodulation part in the spectrum of z(t) is now given by:

Z(ω) = (1/(2π)^2) ∫∫ Z^(3)(ω − μ_1, μ_1 − μ_2, μ_2) dμ_1 dμ_2

For a cubic device replacing the squarer, however, there are several contributions of the intermodulation noise falling into the used channels near ƒ0.
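
The in-band third-order products can be demonstrated numerically. The short Python sketch below is only an illustration; the tone spacing, sample rate and cubic coefficient a are assumed values, not taken from the text. It passes two adjacent-channel tones through the imperfect cubic device z(t) = y(t) + a·y^3(t) and lists the spectral lines that appear:

import numpy as np

fs = 1_000.0                       # sample rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)
f1, f2 = 100.0, 110.0              # two tones in adjacent channels
y = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

a = 0.2                            # strength of the cubic imperfection (assumed)
z = y + a * y**3                   # imperfect cubic device

# Locate the spectral lines produced by the nonlinearity.
Z = np.abs(np.fft.rfft(z)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[Z > 0.01]
print(sorted(set(np.round(peaks, 1))))
# Expect lines at 90 and 120 Hz (2*f1-f2 and 2*f2-f1) falling near the wanted
# channels, besides f1, f2 and far-away products at 3*f1, 2*f1+f2, etc.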


See, Amplifier References, infra.


The Volterra series is a general technique, and subject to different expressions of analysis, application, and simplifying presumptions. Below is further discussion of the technique.


A system may have hidden states of input-state-output models. The state and output equations of any analytic dynamical system are

ẋ(t) = ƒ(x, u, θ)
y(t) = g(x, u, θ) + ε


ẋ(t) is an ordinary differential equation and expresses the rate of change of the states as a parameterized function of the states and input. Typically, the inputs u(t) correspond to designed experimental effects. There is a fundamental and causal relationship (Fliess et al 1983) between the outputs and the history of the inputs. This relationship conforms to a Volterra series, which expresses the output y(t) as a generalized convolution of the input u(t), critically without reference to the hidden states x(t). This series is simply a functional Taylor expansion of the outputs with respect to the inputs (Bendat 1990). The reason it is a functional expansion is that the inputs are a function of time.

y(t) = Σ_i ∫_0^t ⋯ ∫_0^t κ_i(σ_1, . . . , σ_i) u(t−σ_1) ⋯ u(t−σ_i) dσ_1 ⋯ dσ_i

κ_i(σ_1, . . . , σ_i) = ∂^i y(t) / [∂u(t−σ_1) ⋯ ∂u(t−σ_i)]
where κi1, . . . σi) is the ith order kernel, and the integrals are restricted to the past (i.e., integrals starting at zero), rendering the equation causal. This equation is simply a convolution and can be expressed as a GLM. This means that we can take a realistic model of responses and use it as an observation model to estimate parameters using observed data. Here the model is parameterized in terms of kernels that have a direct analytic relation to the original parameters θ of the physical system. The first-order kernel is simply the conventional HRF. High-order kernels correspond to high-order HRFs and can be estimated using basis functions as described above. In fact, by choosing basis function according to








A

(
σ
)

i

=





κ

(
σ
)

1





θ
i








one can estimate the physical parameters because, to a first order approximation, βii. The critical step is to start with a causal dynamic model of how responses are generated and construct a general linear observation model that allows estimation and inference about the parameters of that model. This is in contrast to the conventional use of the GLM with design matrices that are not informed by a forward model of how data are caused.


Dynamic causal models assume the responses are driven by designed changes in inputs. An important conceptual aspect of dynamic causal models pertains to how the experimental inputs enter the model and cause responses. Experimental variables can elicit responses in one of two ways. First, they can elicit responses through direct influences on elements. The second class of input exerts its effect through a modulation of the coupling among elements. These sorts of experimental variables would normally be more enduring. These distinctions are seen most clearly in relation to particular forms of causal models used for estimation, for example the bilinear approximation

ẋ(t) = f(x, u) = Ax + uBx + Cu
y = g(x) + ε

A = ∂f/∂x,   B = ∂^2f/(∂x ∂u),   C = ∂f/∂u
This is an approximation to any model of how changes in one element x_i(t) are caused by activity of other elements. Here the output function g(x) embodies a model. The matrix A represents the connectivity among the regions in the absence of input u(t). Effective connectivity is the influence that one system exerts over another in terms of inducing a response ∂ẋ/∂x. This latent connectivity can be thought of as the intrinsic coupling in the absence of experimental perturbations. The matrix B is effectively the change in latent coupling induced by the input. It encodes the input-sensitive changes in A or, equivalently, the modulation of effective connectivity by experimental manipulations. Because B is a second-order derivative it is referred to as bilinear. Finally, the matrix C embodies the extrinsic influences of inputs on activity. The parameters θ={A,B,C} are the connectivity or coupling matrices that we wish to identify and define the functional architecture and interactions among elements. We can express this as a GLM and estimate the parameters using EM in the usual way (see Friston et al 2003). Generally, estimation in the context of highly parameterized models like DCMs requires constraints in the form of priors. These priors enable conditional inference about the connectivity estimates.
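
For concreteness, the bilinear approximation above can be integrated numerically; the following minimal Python sketch (with arbitrary illustrative matrices, not values from the text) shows how the input both drives the states through C and modulates the coupling through B:

import numpy as np

# Hypothetical two-element bilinear system: x_dot = A x + u B x + C u
A = np.array([[-1.0, 0.0],
              [ 0.5, -1.0]])      # latent (fixed) coupling
B = np.array([[ 0.0, 0.0],
              [ 0.4, 0.0]])       # input-dependent change in coupling
C = np.array([[1.0],
              [0.0]])             # direct influence of the input

dt, T = 0.01, 10.0
steps = int(T / dt)
x = np.zeros((2, 1))
u_series = (np.arange(steps) * dt > 2.0).astype(float)  # step input at t = 2 s

trajectory = []
for k in range(steps):
    u = u_series[k]
    x_dot = A @ x + u * (B @ x) + C * u   # bilinear state equation
    x = x + dt * x_dot                    # forward-Euler update
    trajectory.append(x.ravel().copy())

print(np.round(trajectory[-1], 3))        # steady-state response to the step

Here B changes the effective coupling from the first element to the second only while the input is active, which is the sense in which the input modulates connectivity.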


The central idea behind dynamic causal modelling (DCM) is to model a physical system as a deterministic nonlinear dynamic system that is subject to inputs and produces outputs. Effective connectivity is parameterized in terms of coupling among unobserved states. The objective is to estimate these parameters by perturbing the system and measuring the response. In these models, there is no designed perturbation and the inputs are treated as unknown and stochastic. Furthermore, the inputs are often assumed to express themselves instantaneously such that, at the point of observation, the change in states will be zero. In the absence of bilinear effects we have

ẋ = 0 = Ax + Cu

x = −A^{−1} C u

This is the regression equation used in SEM where A=A′−I and A′ contains the off-diagonal connections among regions. The key point here is that A is estimated by assuming u is some random innovation with known covariance. This is not really tenable for designed experiments when u represent carefully structured experimental inputs. Although SEM and related autoregressive techniques are useful for establishing dependence among responses, they are not surrogates for informed causal models based on the underlying dynamics of these responses.


The Fourier transform pair relates the spectral and temporal domains. We use the same symbol F, although F(t) and F(ω) are different functions:

F(t) = (1/(2π)) ∫_{−∞}^{+∞} dω F(ω) e^{−iωt},   F(ω) = ∫_{−∞}^{+∞} dt F(t) e^{iωt}

Accordingly, a convolution integral is derived:

D(t) = ∫_{−∞}^{+∞} dt_1 ε(t_1) E(t − t_1)
where D(t), ε(t), and E(t) are related to D(ω), ε(−iω), and E(ω), respectively. Note that D(t), viewed as an integral operation acting on E(t), is the simplest form of a Volterra Function Series (VFS). This can also be expressed in the VDO representation

D(t) = ε(∂_t) E(t) = ε(∂_τ) E(τ)|_{τ⇒t}


The instruction τ⇒t is superfluous in a linear case, but becomes important for non-linear systems. For example, consider a harmonic signal clarifying the role of the VDO:

E(t) = E_0 e^{−iωt}

D(t) = E_0 e^{−iωt} ∫_{−∞}^{+∞} dt_1 ε(t_1) e^{iωt_1} = ε(−iω) E_0 e^{−iωt} = ε(∂_t) E_0 e^{−iωt}
In nonlinear systems, the material relations involve powers and products of fields, and ẋ(t) can be replaced by a series involving powers of E(ω), but this leads to inconsistencies.


However, the convolution can be replaced by a “super convolution”, the Volterra function series (VFS), which can be considered a Taylor expansion series with memory, given by:

D(t) = Σ_m D^(m)(t)

D^(m)(t) = ∫_{−∞}^{+∞} dt_1 ⋯ ∫_{−∞}^{+∞} dt_m ε^(m)(t_1, . . . , t_m) E(t−t_1) ⋯ E(t−t_m)
Typically, the VFS contains the products of fields expected for nonlinear systems, combined with the convolution structure. Various orders of nonlinear interaction are indicated by m. Theoretically all the orders co-exist (in practice the series will have to be truncated within some approximation), and therefore we cannot readily inject a time harmonic signal. If instead a periodic signal,

E(t) = Σ_n E_n e^{−inωt}

is provided, we find

D^(m)(t) = Σ_{n_1, . . . , n_m} ε^(m)(−in_1ω, . . . , −in_mω) E_{n_1} ⋯ E_{n_m} e^{−iNωt} = Σ_N D_N e^{−iNωt}

N = n_1 + ⋯ + n_m
displaying the essential features of a nonlinear system, namely, the dependence on a product of amplitudes, and the creation of new frequencies as sums (including differences and harmonic multiples) of the interacting signals frequencies. This function contains the weighting function ε(m)(−in1ω, . . . , −inmω) for each interaction mode.


The extension to the nonlinear VDO is given by

D^(m)(t) = ε^(m)(∂_{t_1}, . . . , ∂_{t_m}) E(t_1) ⋯ E(t_m)|_{t_1, . . . , t_m ⇒ t}


In which the instruction t_1, . . . , t_m ⇒ t guarantees the separation of the differential operators, and finally renders both sides of the equation functions of t.


The VFS, including the convolution integral, is a global expression describing D(t) as affected by integration times extending from −∞ to ∞. Physically this raises questions about causality, i.e., how can future times affect past events. In the full-fledged four-dimensional generalization causality is associated with the so called “light cone” (Bohm, 1965). It is noted that the VDO representation is local, with the various time variables just serving for book keeping of the operators, and where this representation is justified, causality problems are not invoked. In a power amplifier the physical correlate of this feature is that all past activity leads to a present state of the system, e.g., temperature, while the current inputs affect future states. In general, the frequency constraint is obtained from the Fourier transform of the VFS, having the form












D^(m)(ω) = (1/(2π)^{m−1}) ∫_{−∞}^{+∞} dω_1 ⋯ ∫_{−∞}^{+∞} dω_{m−1} ε^(m)(−iω_1, . . . , −iω_m) E(ω_1) ⋯ E(ω_m)

ω = ω_1 + ⋯ + ω_m
In which we have m−1 integrations, one less than in the VFS form. Consequently, the left and right sides of the Fourier transform are functions of ω, ωm, respectively. The additional constraint ω=ω1+ . . . +ωm completes the equation and renders it self-consistent.


See, Volterra Series References, infra.


An alternate analysis of the VFS is as follows. Let x[n] and y[n] represent the input and output signals, respectively, of a discrete-time and causal nonlinear system. The Volterra series expansion for y[n] using x[n] is given by:

y[n] = h_0 + Σ_{m_1=0}^{∞} h_1[m_1] x[n−m_1] + Σ_{m_1=0}^{∞} Σ_{m_2=0}^{∞} h_2[m_1, m_2] x[n−m_1] x[n−m_2] + ⋯ + Σ_{m_1=0}^{∞} Σ_{m_2=0}^{∞} ⋯ Σ_{m_p=0}^{∞} h_p[m_1, m_2, . . . , m_p] x[n−m_1] x[n−m_2] ⋯ x[n−m_p] + ⋯
h_p[m_1, m_2, . . . , m_p] is known as the p-th order Volterra kernel of the system. Without any loss of generality, one can assume that the Volterra kernels are symmetric, i.e., h_p[m_1, m_2, . . . , m_p] is left unchanged for any of the possible p! permutations of the indices m_1, m_2, . . . , m_p. One can think of the Volterra series expansion as a Taylor series expansion with memory. The limitations of the Volterra series expansion are similar to those of the Taylor series expansion, and both expansions do not do well when there are discontinuities in the system description. Volterra series expansion exists for systems involving such type of nonlinearity. Even though clearly not applicable in all situations, Volterra system models have been successfully employed in a wide variety of applications.
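
As an illustration of a truncated expansion of this form, the following Python sketch (with an illustrative memory span and arbitrary kernel values) evaluates a second-order Volterra filter directly from the double sum above:

import numpy as np

N = 3                                   # memory span (illustrative)
h0 = 0.05
h1 = np.array([1.0, 0.4, 0.1])          # first-order kernel h1[m1]
h2 = np.zeros((N, N))                   # second-order kernel h2[m1, m2]
h2[0, 0], h2[0, 1], h2[1, 1] = 0.20, 0.05, 0.02
h2 = (h2 + h2.T) / 2                    # make the kernel symmetric

def volterra2(x, h0, h1, h2):
    # Truncated 2nd-order Volterra filter: y[n] = h0 + sum h1*x + sum sum h2*x*x
    mem = len(h1)
    y = np.full(len(x), h0)
    for n in range(len(x)):
        past = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(mem)])
        y[n] += h1 @ past + past @ h2 @ past
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(50))
y = volterra2(x, h0, h1, h2)
print(np.round(y[:5], 4))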


Among the early works on nonlinear system analysis is a very important contribution by Wiener. His analysis technique involved white Gaussian input signals and used “G-functionals” to characterize nonlinear system behavior. Following his work, several researchers employed Volterra series expansion and related representations for estimation and time-invariant or time variant nonlinear system identification. Since an infinite series expansion is not useful in filtering applications, one must work with truncated Volterra series expansions.


The discrete time impulse response of a first order (linear) system with memory span is aggregate of all the N most recent inputs and their nonlinear combinations into one expanded input vector as

Xe(n)=[x(n),x(n−1), . . . ,x(n−N+1),x2(n)x(n)x(n−1), . . . ,xQ(n−N+1)]T


Similarly, the expanded filter coefficients vector H(n) is given by

H(n) = [h_1(0), h_1(1), . . . , h_1(N−1), h_2(0,0), h_2(0,1), . . . , h_Q(N−1, . . . , N−1)]^T


The Volterra filter input and output can be compactly rewritten as

y(n) = H^T(n) X_e(n)

The error signal e(n) is formed by subtracting y(n) from the noisy desired response d(n), i.e.,

e(n) = d(n) − y(n) = d(n) − H^T(n) X_e(n)

For the LMS algorithm, the cost function to be minimized is

E[e^2(n)] = E[(d(n) − H^T(n) X_e(n))^2]

The LMS update equation for a first order filter is

H(n+1) = H(n) + μ e(n) X_e(n)

where μ is a small positive constant (referred to as the step size) that determines the speed of convergence and also affects the final error of the filter output. The extension of the LMS algorithm to higher order (nonlinear) Volterra filters involves a few simple changes. Firstly, the vector of impulse response coefficients becomes the vector of Volterra kernel coefficients. Also, the input vector, which for the linear case contained only the delayed input samples, now also contains their nonlinear combinations, which complicates the treatment of nonlinear time-varying Volterra filters.
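
A minimal sketch of this LMS adaptation for a second-order Volterra filter is given below; the expanded-vector construction, the step size and the unknown reference system are illustrative assumptions rather than anything specified in the text:

import numpy as np

rng = np.random.default_rng(1)
N = 3                                    # memory span

def expanded_vector(xbuf):
    # X_e(n): linear terms plus all 2nd-order products of the last N inputs
    quad = [xbuf[i] * xbuf[j] for i in range(N) for j in range(i, N)]
    return np.concatenate([xbuf, quad])

# Unknown (to the adapter) 2nd-order Volterra system used to generate d(n).
H_true = rng.normal(0.0, 0.5, size=N + N * (N + 1) // 2)

mu = 0.02                                # LMS step size
H = np.zeros_like(H_true)                # adaptive coefficient vector
xbuf = np.zeros(N)

for n in range(20_000):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = rng.normal()                 # new input sample x(n)
    Xe = expanded_vector(xbuf)
    d = H_true @ Xe + 0.01 * rng.normal()  # noisy desired response d(n)
    e = d - H @ Xe                         # error signal e(n)
    H = H + mu * e * Xe                    # LMS update

print("max coefficient error:", np.round(np.max(np.abs(H - H_true)), 3))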


The RLS (recursive least squares) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector.


To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. The Volterra filter of a fixed order and a fixed memory adapts to the unknown nonlinear system using one of the various adaptive algorithms. The use of adaptive techniques for Volterra kernel estimation has been well studied. Most of the previous research considers 2nd order Volterra filters and some consider the 3rd order case.


A simple and commonly used algorithm is based on the LMS adaptation criterion. Adaptive Volterra filters based on the LMS adaptation algorithm are computationally simple but suffer from slow and input-signal-dependent convergence behavior and hence are not useful in many applications. As in the linear case, the adaptive nonlinear system minimizes the following cost function at each time:

J[n] = Σ_{k=0}^{n} λ^{n−k} (d[k] − H^T[n] X[k])^2
where H[n] and X[n] are the coefficient and input signal vectors, respectively, λ is a factor that controls the memory span of the adaptive filter, and d[k] represents the desired output. The solution can be obtained by differentiating J[n] with respect to H[n], setting the derivative to zero, and solving for H[n]. The optimal solution at time n is given by

H[n] = C^{−1}[n] P[n]


where,

C[n] = Σ_{k=0}^{n} λ^{n−k} X[k] X^T[k]

and

P[n] = Σ_{k=0}^{n} λ^{n−k} d[k] X[k]

H[n] can be recursively updated by realizing that

C[n] = λ C[n−1] + X[n] X^T[n]   and   P[n] = λ P[n−1] + d[n] X[n]


The computational complexity may be simplified by making use of the matrix inversion lemma for inverting C[n]. The derivation is similar to that for the RLS linear adaptive filter.

C^{−1}[n] = λ^{−1} C^{−1}[n−1] − λ^{−1} k[n] X^T[n] C^{−1}[n−1], where k[n] = λ^{−1} C^{−1}[n−1] X[n] / (1 + λ^{−1} X^T[n] C^{−1}[n−1] X[n]) is the gain vector.
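
A compact sketch of this recursion is shown below; the forgetting factor, the initialization of C^{−1} and the use of an expanded Volterra input vector X_e are assumptions for illustration:

import numpy as np

def rls_update(H, C_inv, Xe, d, lam=0.99):
    # One RLS step for the Volterra coefficient vector H given expanded input Xe.
    Cx = C_inv @ Xe
    k = Cx / (lam + Xe @ Cx)                          # gain vector k[n]
    e = d - H @ Xe                                    # a priori error
    H = H + k * e                                     # coefficient update
    C_inv = (C_inv - np.outer(k, Xe @ C_inv)) / lam   # inverse autocorrelation update
    return H, C_inv

# Usage: start with H = zeros(L) and C_inv = (1/delta) * identity(L) for a small
# delta, then call rls_update once per sample with the expanded input vector Xe.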


There are a few simple models for basic amplifier non-linear behavior. A more rigorous model could include the Volterra series expansion which can model complex non-linearities such as memory effects. Among the simpler models are the Rapp model, Saleh model and the Ghorbani model. Combinations of pure polynomial models and filter models are also often referred to as fairly simple models, e.g., the Hammerstein model.


The advantage of the simpler models is that they usually need very few parameters to model the non-linear behavior. The drawback is that such a model can only be used in conjunction with simple architecture amplifiers such as the basic Class A, AB and C amplifiers. Amplifiers such as the high efficiency Doherty amplifier can in general not be modelled by one of these simple models. In addition, to properly capture the PA behavior for the envisaged large NR bandwidths, it is essential to use PA models capturing the memory effects. Such models would require an extensive set of empirical measurements for proper parameterization.


The Rapp model has basically two parameters by which the general envelop distortion may be described. It mimics the general saturation behavior of an amplifier and lets the designer set a smoothness of the transition by a P-factor. By extending this also to model phase distortion, one has in total six parameters available. The basic simple model may be found as:







V_out = V_in / (1 + (|V_in| / V_sat)^{2P})^{1/(2P)}
This model produces a smooth transition for the envelope characteristic as the input amplitude approaches saturation. In the more general model, both AM-AM and AM-PM distortion can be modelled. In general terms, the model describes the saturation behavior of a radio amplifier in a good way.









F_{AM-AM} = G·x / (1 + |G·x / V_sat|^{2P})^{1/(2P)}

F_{AM-PM} = A·x^q / (1 + (x/B)^q)
where “x” is the envelope of the complex input signal. If signal measurements are at hand of the input/output relationship, the parameters of the model may be readily found for a particular amplifier by for example regression techniques. The strength of the Rapp model is lies in its simple and compact formulation, and that it gives an estimation of the saturation characteristics of an amplifier. The drawback of this simple model is of course that it cannot model higher order classes of amplifiers such as the Doherty amplifier. It also lacks the ability to model memory effects of an amplifier.


The Saleh model is a similar model to the Rapp model. It also gives an approximation to the AM-AM and AM-PM characteristics of an amplifier. It offers slightly fewer parameters (four) that one can use to mimic the input/output relationship of the amplifier. The AM-AM distortion relation and AM-PM distortion relation are found to be as:










g(r)_{AM-AM} = α_a r / (1 + β_a r^2)

f(r)_{AM-PM} = α_φ r^2 / (1 + β_φ r^2)
where “r” is the envelope of the complex signal fed into the amplifier, and α/β are real-valued parameters that can be used to tune the model to fit a particular amplifier.


The Ghorbani model also gives expressions similar to the Saleh model, where AM-AM and AM-PM distortion is modeled. Following Ghorbani, the expressions are symmetrically presented:







g(r) = x_1 r^{x_2} / (1 + x_3 r^{x_2}) + x_4 r

f(r) = y_1 r^{y_2} / (1 + y_3 r^{y_2}) + y_4 r
In the expressions above, g(r) corresponds to AM-AM distortion, while ƒ(r) corresponds to AM-PM distortion. The actual scalars x1-4 and y1-4 have to be extracted from measurements by curve fitting or some sort of regression analysis.
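
Both characteristics can be coded in a few lines. The sketch below (with coefficient values chosen only for illustration) evaluates the Saleh and Ghorbani AM-AM/AM-PM expressions for a given envelope r:

import numpy as np

def saleh(r, alpha_a=2.0, beta_a=1.0, alpha_p=1.0, beta_p=0.5):
    # Saleh model: returns (AM-AM curve, AM-PM curve)
    am_am = alpha_a * r / (1.0 + beta_a * r**2)
    am_pm = alpha_p * r**2 / (1.0 + beta_p * r**2)
    return am_am, am_pm

def ghorbani(r, x=(1.0, 1.5, 0.5, 0.1), y=(0.3, 2.0, 1.0, 0.05)):
    # Ghorbani model: x1..x4 shape AM-AM, y1..y4 shape AM-PM
    am_am = x[0] * r ** x[1] / (1.0 + x[2] * r ** x[1]) + x[3] * r
    am_pm = y[0] * r ** y[1] / (1.0 + y[2] * r ** y[1]) + y[3] * r
    return am_am, am_pm

r = np.linspace(0.0, 2.0, 5)
print([np.round(v, 3) for v in saleh(r)])
print([np.round(v, 3) for v in ghorbani(r)])

In practice the α/β and x/y scalars are fitted to measured AM-AM and AM-PM data by curve fitting or regression, as the text notes.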


The next step in the more complex description of the non-linear behavior of an amplifier is to view the characterization as a simple polynomial expansion. This model has the advantage of being mathematically transparent, in that each coefficient reflects a particular order of intermodulation: not only can it model third order intermodulation, but also fifth, seventh, ninth, etc. Mathematically it can model the even order intermodulation products as well; it is merely a matter of discussion whether these actually occur in a real RF application or not.







y(t) = a_0 + a_1 x(t) + a_2 x(t)^2 + a_3 x(t)^3 + a_4 x(t)^4 + ⋯

A_{IP3} = √(4a_1 / (3|a_3|))

Coefficients may be readily expressed in terms of Third Order Intercept point IP3 and gain, as described above. This feature makes this model especially suitable in low level signal simulations, since it relates to quantities that usually are readily available and easily understood amongst RF engineers.


The Hammerstein model consists of a combination of a Linear+Non-Linear block that is capable of mimicking a limited set of a Volterra Series. As the general Volterra series models a nested series of memory and polynomial representations, the Hammerstein model separates these two defining blocks that can in theory be separately identified with limited effort. The linear part is often modelled as a linear filter in the form of a FIR-filter.







s(n) = Σ_{k=0}^{K−1} h(k) x(n−k)
The non-linear part, on the other hand, is simply modelled as a polynomial in the envelope domain.

y(t) = a_0 + a_1 x(t) + a_2 x(t)^2 + a_3 x(t)^3 + a_4 x(t)^4 + ⋯


The advantage of using a Hammerstein model instead of the simpler models like Rapp, Saleh or Ghorbani is that it can, in a fairly simple way, also model memory effects to a certain degree. The model does not, however, benefit from a clear relationship to, for example, IIP3 or gain; one has to employ some sort of regression technique to derive the polynomial coefficients and the FIR filter tap coefficients.
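
A minimal sketch of the two cascaded blocks is given below, with illustrative FIR taps and polynomial coefficients; the two structures differ only in the order in which the blocks are composed, following the description in the text for the Hammerstein model here and the Wiener model in the next paragraph:

import numpy as np

def fir_block(x, taps):
    # Linear part: FIR filter s(n) = sum_k h(k) x(n-k)
    return np.convolve(x, taps, mode="full")[: len(x)]

def poly_block(x, coeffs):
    # Non-linear part: polynomial a0 + a1 x + a2 x^2 + ... in the envelope domain
    return sum(a * x**k for k, a in enumerate(coeffs))

taps = [1.0, 0.35, 0.1]                 # illustrative FIR taps (memory)
coeffs = [0.0, 1.0, 0.0, -0.15]         # illustrative polynomial coefficients

x = np.sin(2 * np.pi * 0.02 * np.arange(200))
y_hammerstein = poly_block(fir_block(x, taps), coeffs)  # linear block, then polynomial (as described above)
y_wiener = fir_block(poly_block(x, coeffs), taps)       # reversed order (Wiener structure, below)
print(np.round(y_hammerstein[:3], 4), np.round(y_wiener[:3], 4))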


The Wiener model, like the Hammerstein model, describes a combination of non-linear and linear parts that are cascaded after each other. The difference from the Hammerstein model lies in the reverse order of the non-linear and linear blocks. The initial non-linear block is preferably modelled as a polynomial in the envelope of the complex input signal. This block is the last one in the Hammerstein model as described above. The polynomial coefficients may themselves be complex, depending on what fits measured data best. See expressions for non-linear and linear parts under the Hammerstein section. The second block, which is linear, may be modelled as an FIR filter with a number of taps that describes the memory depth of the amplifier.


The state-of-the-art approaches consider the so-called Volterra series, which is able to model all weak non-linearities with fading memory. Common models, like for example the memory polynomial, can also be seen as a subset of the full Volterra series, and the model can be designed very flexibly by simply adding or subtracting kernels from the full series.


The discrete-time Volterra series, limited to causal systems with symmetrical kernels (which is most commonly used for power amplifier modelling) is written as







y[n] = β_0 + Σ_{p=1}^{P} Σ_{τ_1=0}^{M} Σ_{τ_2=τ_1}^{M} ⋯ Σ_{τ_{2p−1}=τ_{2p−2}}^{M} β_{p,τ_1,τ_2,…,τ_{2p−1}} ∏_{j_1=1}^{p} x[n−τ_{j_1}] ∏_{j_2=p+1}^{2p−1} x̄[n−τ_{j_2}]
in which P is the non-linear order and M is the memory depth. There are benefits which the Volterra series holds over other modelling approaches, including:

    • It is linear in parameters, meaning that the optimal parameters may be found through simple linear regression analysis from measured data. It further captures frequency dependencies through the inclusion of memory effects which is a necessity for wideband communication.
    • The set of kernels, or basis functions, best suited for modelling a particular power amplifier may be selected using methods which rely on physical insight. This makes the model scalable for any device technology and amplifier operation class.
    • It can be extended into a multivariate series expansion in order to include the effects of mutual coupling through antenna arrays. This enables the studies on more advanced algorithms for distortion mitigation and pre-coding.


It may be observed that other models such as static polynomials, memory polynomials and combinations of the Wiener and Hammerstein models are all subsets of the full Volterra description. As previously stated, empirical measurements are needed to parameterize a PA model based on the Volterra series expansion.


A subset of the Volterra Series is the memory polynomial with polynomial representations in several delay levels. This is a simpler form of the general Volterra series. The advantage of this amplifier model is its simple form still taking account of memory effects. The disadvantage is that the parameters have to be empirically solved for the specific amplifier in use.

PA_memory = x(t)·[a_0 + a_1·|x(t)| + a_2·|x(t)|^2 + ⋯] + x(t−t_0)·[b_0 + b_1·|x(t−t_0)| + b_2·|x(t−t_0)|^2 + ⋯] + x(t−t_1)·[c_0 + c_1·|x(t−t_1)| + c_2·|x(t−t_1)|^2 + ⋯] + ⋯


The equation above shows an expression for a memory polynomial representation of an amplifier involving two memory depth layers. Each delayed version of the signal is associated with its own polynomial expressing the non-linear behavior.
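
The expression above translates directly into code. The sketch below, with assumed delays and coefficients, evaluates a memory polynomial with two delayed branches in addition to the instantaneous one:

import numpy as np

def memory_polynomial(x, branches):
    # branches: list of (delay, [a0, a1, a2, ...]) pairs; x is the complex baseband input
    y = np.zeros_like(x, dtype=complex)
    for delay, coeffs in branches:
        xd = np.roll(x, delay)
        xd[:delay] = 0.0                       # causal: no wrap-around before the start
        y += xd * sum(a * np.abs(xd) ** k for k, a in enumerate(coeffs))
    return y

n = np.arange(256)
x = np.exp(1j * 2 * np.pi * 0.03 * n) * (0.5 + 0.4 * np.cos(2 * np.pi * 0.01 * n))
branches = [(0, [1.0, 0.0, -0.2]),             # x(t)     . [a0 + a2|x|^2]
            (1, [0.1, 0.0, -0.05]),            # x(t-t0)  . [b0 + b2|x|^2]
            (2, [0.05, 0.0, -0.02])]           # x(t-t1)  . [c0 + c2|x|^2]
y = memory_polynomial(x, branches)
print(np.round(y[:3], 4))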


See Filter References, infra.


The purpose of a PA behavioral model is to describe the input-to-output relationship as accurately as possible. State-of-the-art approaches build on the so-called Volterra series, consisting of a sum of multidimensional convolutions. Volterra series are able to model all weak nonlinearities with fading memory and thus are feasible to model conventional PAs aimed for linear modulation schemes.


The generalized memory polynomial (GMP) model is given by








y_GMP(n) = Σ_{k∈K_a} Σ_{l∈L_a} a_{kl} x(n−l) |x(n−l)|^{2k} + Σ_{k∈K_b} Σ_{l∈L_b} Σ_{m∈M} b_{klm} x(n−l) |x(n−l−m)|^{2k}
where y_GMP(n) and x(n) represent the complex baseband equivalent output and input, respectively, of the model. The first term represents the double sum of so-called diagonal terms where the input signal at time shift l, x(n−l); l∈L_a, is multiplied by different orders of the time-aligned input signal envelope |x(n−l)|^{2k}; k∈K_a. The triple sum represents cross terms, i.e. the input signal at each time shift is multiplied by different orders of the input signal envelope at different time shifts. The GMP is linear in the coefficients, a_kl and b_klm, which provides robust estimation based on input and output signal waveforms of the PAs to be characterized. As a complement to the above, memoryless polynomial models have also been derived based on:








y_P(n) = Σ_{k∈K_p} a_k x(n) |x(n)|^{2k}

It is thus seen that, while the Volterra series has been considered generally in a variety of contexts, and for power amplifier linearization, the particular implementation does not necessarily follow from broad prescriptions.
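
Because the GMP and the memoryless polynomial above are linear in their coefficients, they can be identified by ordinary least squares from measured input and output waveforms. The sketch below uses assumed basis orders, delays and a synthetic stand-in for the measured PA output, purely for illustration:

import numpy as np

def gmp_basis(x, K=(0, 1, 2), L=(0, 1), M=(1,)):
    # Columns x(n-l)|x(n-l)|^(2k) plus cross terms x(n-l)|x(n-l-m)|^(2k)
    cols = []
    for l in L:
        xl = np.roll(x, l); xl[:l] = 0
        for k in K:
            cols.append(xl * np.abs(xl) ** (2 * k))           # diagonal terms
            if k > 0:                                         # cross terms (skip k=0 duplicates)
                for m in M:
                    xlm = np.roll(x, l + m); xlm[:l + m] = 0
                    cols.append(xl * np.abs(xlm) ** (2 * k))
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = (rng.normal(size=2048) + 1j * rng.normal(size=2048)) / np.sqrt(2)
y = x - 0.1 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)       # stand-in "measured" PA output

Phi = gmp_basis(x)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # linear-in-parameters fit
y_model = Phi @ coeffs
nmse_db = 10 * np.log10(np.mean(np.abs(y - y_model) ** 2) / np.mean(np.abs(y) ** 2))
print("model fit NMSE (dB):", round(float(nmse_db), 1))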


See, Volterra Series Patents, infra.


SUMMARY OF THE INVENTION

A deep neural network (DNN)-based equalizer is provided to equalize the PA-distorted signals at a radio frequency receiver. This DNN equalizer exploits the Volterra series nonlinearity modeling of PAs to construct the input features of the DNN, which can help the DNN converge rapidly to the desired nonlinear response under limited training data and training.


Conventionally, Volterra series and neural networks are studied as two separate techniques for nonlinear PAs [2]. The Volterra series has been a popular choice for constructing models of nonlinear power amplifiers, and many digital predistorters or nonlinear equalizers have been developed based on such modeling. Similarly, artificial neural networks have also been applied to model or equalize nonlinear PAs. By integrating these two techniques, equalizers may be more efficient and lower-cost to implement than conventional digital pre-distorters, and may achieve high performance in mitigating power amplifiers with even severe nonlinearity.


In particular, conventional shallow feedforward neural networks using time-delayed inputs have only limited performance. The present DNN equalizer has much better performance and does not need excessive training data.


Nonlinear Power Amplifier Models

The nonlinear response of a power amplifier is usually described by the baseband discrete model y(n) = ƒ(x(n)), where x(n) is the input signal and y(n) is the output signal. The function ƒ(x(n)) is some nonlinear function.


Consider the baseband discrete model of the PA y(n) = ƒ(x(n), x(n−1), . . . ), where x(n) is the input signal, y(n) is the output signal, and ƒ(⋅) is some nonlinear function. The simplest nonlinear PA model is the “AM-AM AM-PM” model. Let the amplitude of the input signal be V_x = E[|x(n)|], where E[⋅] denotes short-term expectation or average. The output sample y(n)'s amplitude V_y = E[|y(n)|] and additional phase change ψ_y = E[∠y(n)] depend on V_x in nonlinear ways as











V_y = g V_x / (1 + (g V_x / c)^{2σ})^{1/(2σ)},   ψ_y = α V_x^p / (1 + (V_x / β)^q)   (1)
where g is the linear gain, σ the smoothness factor, and c denotes the saturation magnitude of the PA. Typical examples of these parameters are g=4.65, σ=0.81, c=0.58, α=2560, β=0.114, p=2.4, and q=2.3, which are used in the PA models regulated by the IEEE 802.11ad task group (TG) [10].
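
A direct Python transcription of the model in (1) with the quoted parameter values is given below; treating ψ_y as a phase offset in degrees is an assumption, since the text does not state the unit:

import numpy as np

# Parameters quoted in the text for the IEEE 802.11ad TG PA model
g, sigma, c = 4.65, 0.81, 0.58                 # AM-AM: gain, smoothness, saturation
alpha, beta, p, q = 2560.0, 0.114, 2.4, 2.3    # AM-PM

def am_am(v_x):
    # Output amplitude V_y as a function of input amplitude V_x, per Eq. (1)
    return g * v_x / (1.0 + (g * v_x / c) ** (2.0 * sigma)) ** (1.0 / (2.0 * sigma))

def am_pm(v_x):
    # Additional phase change psi_y (assumed here to be in degrees), per Eq. (1)
    return alpha * v_x**p / (1.0 + (v_x / beta) ** q)

v = np.array([0.01, 0.05, 0.1, 0.2, 0.5])
print(np.round(am_am(v), 3))                   # compresses toward c = 0.58
print(np.round(am_pm(v), 2))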


More accurate models should take into consideration the fact that nonlinearity leads to memory effects. In this case, Volterra series are typically used to model PAs [4] [21]. A general model is [5]










y(n) = Σ_{d=0}^{D} Σ_{k=0}^{P} b_{kd} x(n−d) |x(n−d)|^{k−1}   (2)
with up to Pth order nonlinearity and up to D step memory.
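
The abstract states that the input features of the DNN equalizer are designed according to this Volterra series model. One plausible realization, sketched below as an assumption rather than as the patented implementation, is to feed the network the basis terms x(n−d)|x(n−d)|^{k−1} of Eq. (2), restricted to odd orders, instead of raw delayed samples:

import numpy as np

def volterra_features(x, D=4, orders=(1, 3, 5)):
    # Feature map per sample n: x(n-d)*|x(n-d)|^(k-1) for d = 0..D and the listed
    # (odd) orders k, with real and imaginary parts split so the features can be
    # fed to a real-valued convolutional network.
    feats = []
    for d in range(D + 1):
        xd = np.roll(x, d)
        xd[:d] = 0                                # causal zero padding
        for k in orders:
            term = xd * np.abs(xd) ** (k - 1)     # Volterra-model basis term
            feats.extend([term.real, term.imag])
    return np.stack(feats, axis=1)

rng = np.random.default_rng(3)
x = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
F = volterra_features(x)
print(F.shape)        # (1024, 30) for D = 4 and three odd orders

Restricting the features to odd orders follows the observation, made below, that only odd-order nonlinearity survives the receiver's bandpass filtering.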


Because higher order nonlinearity usually has smaller magnitudes, in order to simplify models, many papers have considered smaller P only, e.g.,







y(n) = Σ_{d=0}^{D} (β_d x(n−d) + α_d x(n−d) |x(n−d)|^2)
with only the third-order nonlinearity.


It can be shown that only odd-order nonlinearity (i.e., odd k) is necessary because even-order nonlinearity falls outside of the passband and will be filtered out by the receiver bandpass filters [2]. To illustrate this phenomenon, we can consider some simple examples where the input signal x(n) consists of a few single-frequency components only. Omitting the memory effects, suppose x(n) is a single-frequency signal, i.e., x(n) = V_0 cos(a_0 + ϕ), where a_0 = 2πƒ_0 n. Then the output signal can be written as










y(n) = c_1 V_0 \cos(a_0 + \phi + \psi_1) + \left(\tfrac{3}{4} c_3 V_0^{3} + \tfrac{5}{8} c_5 V_0^{5}\right) \cos(a_0 + \phi + \psi_3 + \psi_5)    (3)
     + \tfrac{1}{2} c_2 V_0^{2} + \tfrac{3}{8} c_4 V_0^{4}    (4)
     + \left(\tfrac{1}{2} c_2 V_0^{2} + \tfrac{1}{2} c_4 V_0^{4}\right) \cos(2 a_0 + 2\phi + 2\psi_2 + 2\psi_4) + \cdots    (5)

where the first line (3) is the in-band response with the AM-AM and AM-PM nonlinear effects, the second line (4) is the DC bias, and the third line (5) collects the higher-frequency harmonics. At the receiving side, only (3) remains because all the other terms are removed by bandpass filtering.


If x(n) consists of two frequencies, i.e., x(n)=V_1 cos(a_1+ϕ_1)+V_2 cos(a_2+ϕ_2), where a_i=2πf_i n, then the in-band response includes many more terms, such as the first-order terms c_1 V_i cos(a_i+ϕ_i+ψ_i), the third-order terms c_3(V_i^3+V_i V_j^2) cos(a_i+ϕ_i+ψ_i), and the fifth-order terms c_5(V_i^5+V_i V_j^4+V_i^3 V_j^2) cos(a_i+ϕ_i+ψ_i), for i,j∈{1,2}. There are also intermodulation terms at frequencies n a_i ± m a_j, as long as they fall within the passband of the bandpass filter, such as (V_i^2 V_j + V_i^2 V_j^3 + V_i^4 V_j) cos(2a_i−a_j+2ϕ_i−ϕ_j+2ψ_i−ψ_j).


There are many other higher-order terms, at frequencies n a_i, n(a_i ± a_j), or n a_i ± m a_j, that cannot pass the bandpass filter. An important observation is that the components that pass the bandpass filter arise from odd-order nonlinearity only.
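
This odd-order observation can be checked numerically. The short sketch below is an illustrative check, not part of the original analysis: it evaluates the spectrum of x^p for a single tone and confirms that a component at the tone frequency appears only for odd powers p.

```python
import numpy as np

N, k0 = 4096, 205                       # FFT size and integer bin of the test tone
n = np.arange(N)
x = np.cos(2 * np.pi * k0 * n / N)      # single tone placed exactly on an FFT bin

for p in range(1, 6):
    X = np.fft.rfft(x ** p) / N
    # A component survives at the fundamental only for odd p (1, 3, 5).
    print(p, "in-band" if abs(X[k0]) > 1e-6 else "out-of-band only")
```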


If x(n) consists of three or more frequencies, we can make similar observations, although the expressions are more complex. Let the input signal x(n) be











x(n) = \sum_{i=1}^{3} V_i \cos(a_i), \qquad a_i = 2\pi f_i n    (6)







Based on [22], the nonlinearly distorted output response y(n)=ƒ(x(n)) can be written as










y(n) = \sum_{i \ge 1} k_i\, x^{i}(n)    (7)

where k_i denotes the gain coefficient of the i-th order component. The 1st-order component is simply k_1 x(n). The 2nd-order component includes the DC component, the sum/difference beat components, and the second-order harmonic components. Specifically,

k_2 x^{2}(n) = g_{2,0} + g_{2,1}(n) + g_{2,2}(n),    (8)


where







g_{2,0} = \sum_{i=1}^{3} V_i^{2}/2
g_{2,1}(n) = \sum_{i=1}^{3} \sum_{j \ne i} V_i V_j \cos(a_i \pm a_j)
g_{2,2}(n) = \sum_{i=1}^{3} V_i^{2} \cos(2 a_i)/2.






The 3rd order component includes the third-order harmonic components g3,1(n), the third intermodulation beat components g3,2(n), the triple beat components g3,3(n), the self-compression/expansion components g3,4(n), and the cross-compression/expansion components g3,5(n).


This gives








k_3 x^{3}(n) = \sum_{i=1}^{5} g_{3,i}(n)

where







g_{3,1}(n) = \tfrac{1}{4} \sum_{i=1}^{3} A_i^{3} \cos(3 a_i)
g_{3,2}(n) = \tfrac{3}{4} \sum_{i=1}^{3} \sum_{j \ne i} A_i^{2} A_j \cos(2 a_i \pm a_j)
g_{3,3}(n) = \tfrac{3}{2} \left( \prod_{i=1}^{3} A_i \right) \cos\!\left( \sum_{i=1}^{3} (\pm a_i) \right)
g_{3,4}(n) = \tfrac{3}{4} \sum_{i=1}^{3} A_i^{3} \cos(a_i)
g_{3,5}(n) = \tfrac{3}{2} \sum_{i=1}^{3} \sum_{j \ne i} A_i A_j^{2} \cos(a_i)









The 4th-order component includes the DC component g_{4,0}, the fourth-order harmonic components g_{4,1}(n), the fourth intermodulation beat components g_{4,2}(n), the sum/difference beat components g_{4,3}(n), and the second-harmonic components g_{4,4}(n). This gives








k_4 x^{4}(n) = \sum_{i=0}^{4} g_{4,i}(n)

where









g_{4,0} = \tfrac{3}{8} \sum_{i=1}^{3} A_i^{4} + \tfrac{3}{4} \sum_{i=1}^{3} \sum_{j \ne i} A_i^{2} A_j^{2}
g_{4,1}(n) = \tfrac{1}{8} \sum_{i=1}^{3} A_i^{4} \cos(4 a_i)
g_{4,2}(n) = \tfrac{4}{8} \sum_{i=1}^{3} \sum_{j \ne i} A_i^{3} A_j \cos(3 a_i \pm a_j) + \tfrac{12}{8} \sum_{i=1}^{3} A_i^{2} \left( \prod_{j \ne i} A_j \right) \cos\!\left( 2 a_i + \sum_{j \ne i} (\pm a_j) \right)
g_{4,3}(n) = \tfrac{6}{4} \sum_{i=1}^{3} \cos\!\left( a_i \pm a_{\mathrm{mod}(i+1,3)} \right) \left( A_i^{2} A_{\mathrm{mod}(i+1,3)}^{2} + A_i A_{\mathrm{mod}(i+1,3)}^{3} + A_i^{3} A_{\mathrm{mod}(i+1,3)} + \prod_{j=1}^{3} A_j \right)
g_{4,4}(n) = \tfrac{3}{2} \sum_{i=1}^{3} \cos(2 a_i) \left( A_i^{2} \sum_{j=1}^{3} A_j^{2} \right)









The 5th order component includes the fifth-order harmonic components g5,1(n), the fifth intermodulation beat components g5,2(n), the self-compression/expansion components g5,3(n), the cross-compression/expansion components g5,4(n), the third harmonic components g5,5(n), the third intermodulation beat components g5,6(n), and the triple beat components g5,7(n). This gives








k_5 x^{5}(n) = \sum_{i=1}^{7} g_{5,i}(n)

where










g_{5,1}(n) = \tfrac{1}{5} \sum_{i=1}^{3} A_i^{5} \cos(5 a_i)
g_{5,2}(n) = \tfrac{5}{8} \sum_{i=1}^{3} \sum_{j \ne i} \left( A_i A_j^{4} \cos(a_i \pm 4 a_j) + A_i^{2} A_j^{3} \cos(2 a_i \pm 3 a_j) + A_i^{3} A_j^{2} \cos(3 a_i \pm 2 a_j) + A_i^{4} A_j \cos(4 a_i \pm a_j) \right)
g_{5,3}(n) = \tfrac{5}{8} \sum_{i=1}^{3} A_i^{5} \cos(a_i)
g_{5,4}(n) = \tfrac{15}{4} \sum_{i=1}^{3} \cos(a_i) \left( \sum_{j \ne i} \left( A_i^{3} A_j^{2} + A_i A_j^{4} \right) + A_i \prod_{j \ne i} A_j^{2} \right)
g_{5,5}(n) = \tfrac{5}{4} \sum_{i=1}^{3} \cos(3 a_i) \left( A_i^{3} \sum_{j=1}^{3} A_j^{2} \right)
g_{5,6}(n) = \tfrac{15}{4} \sum_{i=1}^{3} \sum_{j \ne i} \cos(2 a_i \pm a_j) \left( A_i^{3} A_j^{2} + A_i^{4} A_j + A_i^{2} A_j \sum_{k \ne i,j} A_k^{2} \right)
g_{5,7}(n) = \tfrac{15}{8} \cos\!\left( \sum_{i=1}^{3} (\pm a_i) \right) \left( \sum_{i=1}^{3} A_i \sum_{j \ne i} A_j^{2} + \sum_{i=1}^{3} A_i^{3} \prod_{j \ne i} A_j \right)







These nonlinear spectrum growth expressions can be similarly applied if the signal x(n) is a QAM or OFDM signal. In particular, the harmonic structure suggests a way to design the input feature vectors for DNN equalizers. Note that spectral components that deviate too far from the transmitted signal bandwidth will be attenuated by the receiver bandpass filters.


DNN-Based Nonlinear Equalization
A. Nonlinear Equalizer Models

To mitigate the PA nonlinear distortions, nonlinear equalizers can be applied at the receivers. The Volterra series model can still be used to analyze the response of nonlinear equalizers. One difference from (2) is that the even-order nonlinearity may still be included and may increase the nonlinearity mitigation effect [5].


Consider the system block diagram of nonlinear equalization shown in FIG. 1, which shows a signal x(n) entering a nonlinear power amplifier, to produce a distorted signal y(n), which passes through a channel h1, which produces a response r(n), which is fed to a neural network equalizer to produce a corrected output z(n).


Let the received signal be











r(n) = \sum_{\ell=0}^{L} h_{\ell}\, y(n-\ell) + v(n),    (9)

where h_ℓ are the finite-impulse-response (FIR) channel coefficients and v(n) is additive white Gaussian noise (AWGN). With the received sample sequence r(n), a nonlinear equalizer generates z(n) as the estimated symbols.
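
A minimal sketch of this received-signal model, with illustrative channel taps and noise level (not taken from the source), is:

```python
import numpy as np

# r(n) = sum_l h_l y(n - l) + v(n), per (9); the taps and SNR are illustrative.
def propagate(y, h, snr_db=30.0):
    r = np.convolve(y, h)[:len(y)]                     # FIR channel h_0..h_L
    sig_pow = np.mean(np.abs(r) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    v = np.sqrt(noise_pow / 2) * (np.random.randn(len(r)) + 1j * np.random.randn(len(r)))
    return r + v                                       # received samples r(n)

h = np.array([1.0, 0.1 - 0.05j, 0.02j])                # example channel coefficients
# r = propagate(y, h)   # y would be the PA output, e.g., from the sketches above
```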


If the PA has only slight nonlinearity, as modeled by the simple "AM-AM AM-PM" model (1), the received samples r(n) may be stacked into (M+1)-dimensional vectors r(n)=[r(n), . . . , r(n−M)]^T, where (⋅)^T denotes transpose, so that the received samples can be written in vector form as

r(n) = H G(n) x(n) + v(n)    (10)

where H is an (M+1)×(M+L+1) dimensional channel matrix









H = \begin{bmatrix} h_0 & \cdots & h_L & & \\ & \ddots & & \ddots & \\ & & h_0 & \cdots & h_L \end{bmatrix}    (11)








and






G(n) = \mathrm{diag}\left\{ V_y(n)\, e^{j \psi_y(n)},\ \ldots,\ V_y(n-M-L)\, e^{j \psi_y(n-M-L)} \right\}

is an (M+L+1)×(M+L+1) dimensional diagonal matrix consisting of the nonlinear PA responses, x(n)=[x(n), . . . , x(n−M−L)]^T, and v(n)=[v(n), . . . , v(n−M)]^T. To equalize the received signal, we apply a nonlinear equalizer of the form

f^T = G'(n)\, [f_0, \ldots, f_M]    (12)

where [f_0, . . . , f_M] H ≈ [0, . . . , 1, . . . , 0] is to equalize the propagation channel, and








G'(n) = \mathrm{diag}\left\{ \frac{1}{V_y(n-d)}\, e^{-j \psi_y(n-d)} \right\}

is to equalize the nonlinear PA response. Let r̂(n) be the output of the first, linear equalization step. The second, nonlinear equalization step can be implemented as a maximum likelihood estimation problem, i.e.,







z(n) = \arg\min_{x(n)} \left| \hat{r}(n) - V_y\, e^{j \psi_y}\, x(n) \right|^{2}.







This gives the output

z(n) = f^T r(n) \approx x(n-d)    (13)


with certain equalization delay d.


Both the channel coefficients h_ℓ and the nonlinear PA responses V_y, ψ_y can be estimated via training, as can the channel equalizer f^T. Because the PA nonlinearity is significant only for large signal amplitudes, we can first apply small-amplitude training signals x(n) to estimate the channel h_ℓ and the channel equalizer [f_0, . . . , f_M]. We can then remove the channel H from (10) with the first-step linear channel equalization. Because the matrix G(n) is diagonal, we can easily estimate G(n) with regular training and then estimate the transmitted symbols as outlined in (13).
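
As an illustration of the second, nonlinear step for the memoryless "AM-AM AM-PM" case, the sketch below performs the per-sample maximum-likelihood symbol decision by comparing each channel-equalized sample against the PA response of every candidate constellation point. The constellation normalization and the reuse of the am_am_am_pm() sketch above are assumptions made for illustration.

```python
import numpy as np

# Per-sample ML decision: z(n) = argmin_x | r_hat(n) - V_y e^{j psi_y} x |^2,
# where the distorted image of each candidate symbol is produced by a PA model
# such as the am_am_am_pm() sketch above (assumed available).
def ml_detect(r_hat, constellation, pa_model):
    distorted = pa_model(constellation)                # PA image of each candidate symbol
    idx = np.argmin(np.abs(r_hat[:, None] - distorted[None, :]) ** 2, axis=1)
    return constellation[idx]

# Illustrative 16-QAM constellation with unit average power.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = (levels[:, None] + 1j * levels[None, :]).ravel() / np.sqrt(10)
# z = ml_detect(r_hat, qam16, am_am_am_pm)
```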


For more complex nonlinear PA responses, such as (2), we can conduct channel equalization similarly to (12). First, we can still apply small-amplitude training signals to estimate [f_0, . . . , f_M] so as to equalize the channel h_ℓ. This linear channel equalization step gives r̂(n)≈y(n). We can then focus on equalizing the nonlinear distortion of the PA, which can in general be conducted with the maximum likelihood method,











\{\hat{x}(n): n = 0, \ldots, N\} = \arg\min_{\{x(n)\}} \sum_{n=0}^{N} \left| \hat{r}(n) - \hat{y}(n) \right|^{2},    (14)

where r̂(n) is the sequence after the linear channel equalization, ŷ(n) is the sequence reconstructed from x(n) and the nonlinear PA response parameters b_{kd} based on (2), and N is the total number of symbols. The optimization problem (14) can be solved with the Viterbi sequence estimation algorithm if the memory length of the PA is small enough and the PA nonlinear response is known to the receiver.


When the PA nonlinear response cannot be estimated, equalization of the nonlinear PA response is challenging. In this case, one approach is the conventional Volterra series equalizer, which approximates G′(n) with a Volterra series model. Similar to (2), this gives










z(n) = \sum_{d=0}^{D} \sum_{k=0}^{P} g_{kd}\, \hat{r}(n-d)\, |\hat{r}(n-d)|^{k-1}.    (15)

The objective of the Volterra series equalizer design is to choose the coefficients g_{kd} such that z(n) ≈ x(n−ℓ) for some equalization delay ℓ.


Similarly to the DPD design of [5], based on the Volterra series model (15), we can estimate the coefficients g_{kd} by casting the estimation as a least squares problem











\min_{\{g_{kd}\}} \sum_{n=L}^{N} \left| x(n-L) - \sum_{d=0}^{D} \sum_{k=1}^{P} g_{kd}\, \hat{r}(n-d)\, |\hat{r}(n-d)|^{k-1} \right|^{2},    (16)

with training symbols x(n) and received samples r̂(n). Note that only the coefficients g_{kd} need to be estimated, and the objective is linear in these coefficients given r̂(n) and x(n).


Define the vector a=[g_{00}, g_{01}, . . . , g_{PD}]^T and the vector x=[x(0), . . . , x(N−L)]^T. Define the (N−L+1)×DP data matrix









B = \begin{bmatrix}
\hat{r}(L) & \hat{r}(L)\,|\hat{r}(L)| & \cdots & \hat{r}(L-D)\,|\hat{r}(L-D)|^{P-1} \\
\vdots & & & \vdots \\
\hat{r}(N) & \hat{r}(N)\,|\hat{r}(N)| & \cdots & \hat{r}(N-D)\,|\hat{r}(N-D)|^{P-1}
\end{bmatrix}    (17)







Then, (16) becomes

\min_{a} \| x - B a \|^{2}    (18)

The solution to (18) is

a = B^{+} x,    (19)

where B^+ = (B^H B)^{−1} B^H is the pseudo-inverse of the matrix B. From (19), we can obtain the Volterra series equalizer coefficients g_{kd}. One of the major problems for the Volterra series equalizer is that it is hard to determine the order sizes, i.e., the values of D and P. Even for a nonlinear PA with only slight nonlinear effects (i.e., small D and P in (2)), the values of D and P for the Volterra series equalizer may have to be extremely large in order for (15) to have sufficient nonlinearity mitigation capability.
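
A minimal Python/NumPy sketch of this least-squares fit, following (16)-(19) but using a numerically robust solver in place of the explicit pseudo-inverse, is shown below. Variable names mirror the text; the code itself is an illustrative sketch rather than the patented implementation.

```python
import numpy as np

# Build the data matrix B of (17) and solve min_a ||x - Ba||^2 as in (18)-(19).
# Assumes L >= D so every delayed sample r_hat(n-d) exists for n >= L.
def fit_volterra_equalizer(r_hat, x, D, P, L):
    rows = [[r_hat[n - d] * np.abs(r_hat[n - d]) ** (k - 1)
             for d in range(D + 1) for k in range(1, P + 1)]
            for n in range(L, len(r_hat))]
    B = np.array(rows)
    target = x[:len(rows)]                            # row built at time n targets x(n - L)
    a, *_ = np.linalg.lstsq(B, target, rcond=None)    # a = B^+ x, solved robustly
    return a

def apply_volterra_equalizer(r_hat, a, D, P, L):
    z = [np.dot([r_hat[n - d] * np.abs(r_hat[n - d]) ** (k - 1)
                 for d in range(D + 1) for k in range(1, P + 1)], a)
         for n in range(L, len(r_hat))]
    return np.array(z)                                # z(n) of (15), approximating x(n - L)
```

Here L plays the role of the equalization delay in (16), so the equalized output approximates x(n−L).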


A potential way to resolve this problem is to apply artificial neural networks to fit the nonlinear equalizer response (15). Neural networks can fit arbitrary nonlinearities, and can do so with potentially small network sizes. Nevertheless, in conventional neural network equalizers such as [14] [15], the input (features) to the neural network was simply a time-delayed vector [r(n), . . . , r(n−M)]. Although neural networks may have the capability to learn the nonlinear effects specified in (15), in practice the training may not converge to the desirable solution due to local minima and limited training data. In addition, conventional neural network equalizers were all feed-forward networks with fully connected layers only, which often suffer from problems such as shallow network architecture and over-fitting.


It is therefore an object to provide a radio receiver, comprising: an input configured to receive a transmitted radio frequency signal representing a set of symbols communicated through a communication channel; a Volterra series processor configured to decompose the transmitted radio frequency signal as a Volterra series expansion; an equalizer, comprising a deep neural network trained with respect to channel distortion, receiving the Volterra series expansion; and an output, configured to present data corresponding to a reduced distortion of the received distorted transmitted radio frequency signal.


It is also an object to provide a radio reception method, comprising: receiving a transmitted radio frequency signal representing a set of symbols communicated through a communication channel; decomposing the transmitted radio frequency signal as a Volterra series expansion; equalizing the Volterra series expansion with a deep neural network trained with respect to channel distortion, receiving the Volterra series expansion; and presenting data corresponding to a reduced distortion of the received transmitted radio frequency signal.


It is a further object to provide an equalization method for a radio signal, comprising: storing parameters for decomposition of a received radio frequency signal as a Volterra series expansion; processing the Volterra series expansion in a deep neural network comprising a plurality of neural network hidden layers and at least one fully connected neural network layer, trained with respect to radio frequency channel distortion; and presenting an output of the deep neural network. The method may further comprise demodulating the output of the deep neural network, wherein a bit error rate of the demodulator is reduced with respect to an input of the received radio frequency signal to the demodulator.


It is another object to provide an equalizer for a radio receiver, comprising: a memory configured to store parameters for decomposition of a received radio frequency signal as a Volterra series expansion; a deep neural network comprising a plurality of neural network hidden layers and at least one fully connected neural network layer, trained with respect to radio frequency channel distortion, receiving the Volterra series expansion of the received radio frequency signal; and an output configured to present an output of the deep neural network. The system may further comprise a demodulator, configured to demodulate the output, wherein a bit error rate of the demodulator is reduced with respect to an input of the received radio frequency signal to the demodulator.


The Volterra series expansion may comprise at least third or fifth order terms.


The deep neural network may comprise at least two or three convolutional network layers. The deep neural network may comprise at least three one-dimensional convolutional network layers. The convolutional layers may be hidden layers. The deep neural network may comprise at least three one-dimensional layers, each layer having at least 10 feature maps. The radio receiver may further comprise a fully connected layer subsequent to the at least three layers.


The distorted transmitted radio frequency signal may comprise an orthogonal frequency division multiplexed (OFDM) signal, a quadrature amplitude modulated (QAM) signal, a QAM-16 signal, a QAM-64 signal, a QAM-256 signal, a quadrature phase shift keying (QPSK) signal, a 3G signal, a 4G signal, a 5G signal, a WiFi (IEEE-802.11 standard family) signal, a Bluetooth signal, a cable broadcast signal, an optical transmission signal, a satellite radio signal, etc.


The radio receiver may further comprise a demodulator, configured to demodulate output as the set of symbols.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a system block diagram with nonlinear power amplifier and deep neural network equalizer.



FIG. 2 shows a block diagram of DNN equalizer.



FIGS. 3A-3D show constellations of 16 QAM over a simulated PA. FIG. 3A: received signal. FIG. 3B: Volterra equalizer output. FIG. 3C: time-delayed NN output. FIG. 3D: Volterra+NN output.



FIGS. 4A-4D show constellation of 16 QAM over a real PA. FIG. 4A: received signal. FIG. 4B: Volterra equalizer output. FIG. 4C: time-delayed NN output. FIG. 4D: Volterra+NN output.



FIG. 5 shows a comparison of three equalization methods for 16-QAM under various NLD levels.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Volterra-Based DNN Equalizer

The present technology therefore employs deep neural networks to implement the nonlinear equalizer in the receiver, which can mitigate the nonlinear effects of the received signals due to not only PAs but also nonlinear channels and propagations. The architecture of the DNN equalizer is shown in FIG. 2, which shows an input X, which undergoes a series of three 1-d convolutions, and FC (fully-connected) dropout, to produce the output Y.


Different from [10], multi-layer convolutional neural networks (CNNs) are employed. Different from the conventional neural network predistorters proposed in [6], the neural networks are used as equalizers at the receivers. Different from conventional neural network equalizers such as those proposed in [14] [15], the present DNN equalizer uses not only the linearly delayed samples r(n) but also a CNN architecture and nonlinear input features in X, which are created from the Volterra series model.


We can assume that the linear channel H has already been equalized by a linear equalizer, whose output signal is r(n). In fact, this equalization is not required, but simplifies the presentation of the analysis.


According to Volterra series representation of nonlinear functions, the input-output response of the nonlinear equalizer can be written as










z(n) = \sum_{k=1}^{P} \sum_{d_1=0}^{D} \cdots \sum_{d_k=0}^{D} f_{d_1, \ldots, d_k} \prod_{i=1}^{k} r(n-d_i).    (20)

One of the major problems is that the number of coefficients f_{d_1, . . . , d_k} increases exponentially with the memory length D and the nonlinearity order P. There are many ways to develop more efficient Volterra series representations with a reduced number of coefficients. For example, [23] exploits the fact that higher-order terms do not contribute significantly to the memory effects of PAs, reducing the memory depth d as the nonlinearity order k increases.
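
To make the growth concrete, the short sketch below counts the coefficients of the full model (20) for a few (D, P) pairs. The raw kernel of order k has (D+1)^k entries; under the common symmetric-kernel convention (an assumption, not stated in the text) the number of distinct coefficients is the number of multisets, C(D+k, k).

```python
from math import comb

# Count coefficients of the full Volterra model (20) for memory D and order P.
def volterra_coeff_count(D, P, symmetric=True):
    if symmetric:
        return sum(comb(D + k, k) for k in range(1, P + 1))   # distinct coefficients
    return sum((D + 1) ** k for k in range(1, P + 1))         # raw kernel entries

for D, P in [(4, 3), (4, 5), (8, 5)]:
    print(D, P, volterra_coeff_count(D, P), volterra_coeff_count(D, P, symmetric=False))
```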


This technique can drastically reduce the total number of coefficients. In [24] [25] and [26], a dynamic deviation model was developed to reduce the full Volterra series model (20) to the following simplified one:







z(n) = z_s(n) + z_d(n) = \sum_{k=1}^{P} f_{k,0}\, r^{k}(n) + \sum_{k=1}^{P} \sum_{j=1}^{k} r^{k-j}(n) \sum_{d_1=0}^{D} \cdots \sum_{d_j=d_{j-1}}^{D} f_{k,j} \prod_{i=1}^{j} r(n-d_i)    (21)

where z_s(n) is the static term and z_d(n) is the dynamic term that includes all the memory effects. The total number of coefficients can be greatly reduced by controlling the dynamic order j, which is a selectable parameter.


We construct the input features of the DNN based on the model (21). Corresponding to the static term z_s(n), we change it to:









\hat{z}_s(n) = \sum_{1 \le k \le P} f_{k,0}\, r(n)\, |r(n)|^{k-1}.    (22)







The reason that (22) changes r^k(n) to r(n)|r(n)|^{k−1} is that only the signal frequencies within the valid passband are of interest. This means the input feature vector X should include the terms r(n)|r(n)|^{k−1}. Similarly, corresponding to the dynamic term z_d(n), we need to supply r^{k−j}(n)·Π_{i=1}^{j} r(n−d_i) in the features, where half of the terms r(n) and r(n−d_i) should be conjugated. For simplicity, in the DNN equalizer, the vector X includes r(n−q)|r(n−q)|^{k−1} for some q and k.


By applying Volterra series components directly as features of the input X, the DNN can realize more complex nonlinear functions with fewer hidden layers and fewer neurons. This also makes the training procedure converge much faster with much less training data.
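
A sketch of this feature construction is given below; the particular delays q and orders k are illustrative choices, not values fixed by the text.

```python
import numpy as np

# Build the DNN input features: real/imag parts of r(n-q)|r(n-q)|^{k-1}
# for chosen delays q and nonlinearity orders k (illustrative choices).
def build_features(r, delays=(0, 1, 2, 3), orders=(1, 3, 5)):
    start = max(delays)
    samples = []
    for n in range(start, len(r)):
        feats = []
        for q in delays:
            for k in orders:
                v = r[n - q] * np.abs(r[n - q]) ** (k - 1)
                feats.extend([v.real, v.imag])
        samples.append(feats)
    return np.array(samples, dtype=np.float32)   # shape: (samples, 2*len(delays)*len(orders))

# X = build_features(r_hat)   # r_hat: channel-equalized received samples
```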


In FIG. 2, the input X is a tensor formed by the real and imaginary parts of r(n−q)|r(n−q)|^{k−1} with an appropriate number of delays q and nonlinearity orders k. There are three one-dimensional convolutional layers, each with 20 or 10 feature maps. After a dropout layer for regularization, this is followed by a fully-connected layer with 20 neurons. Finally, a fully-connected layer forms the output tensor Y, which has two dimensions. The output Y is used to construct the complex z(n), where z(n)=x̂(n−d) for some appropriate delay d. All the convolutional layers and the first fully connected layer use the sigmoid activation function, while the output layer uses the linear activation function. The mean square error loss function L_loss=E[|x(n−d)−z(n)|^2] is used, where z(n) is replaced by Y and x(n−d) is replaced by the training data labels.
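
A minimal PyTorch sketch of this architecture is shown below. The kernel sizes, the dropout rate, and the split of feature maps across the three convolutional layers (20, 20, 10 here) are assumptions; the text only specifies three 1-D convolutional layers with 20 or 10 feature maps, sigmoid activations, dropout, a 20-neuron fully connected layer, and a linear two-dimensional output trained with the MSE loss.

```python
import torch
import torch.nn as nn

class VolterraDNNEqualizer(nn.Module):
    def __init__(self, in_len, kernel=3):
        super().__init__()
        # Three 1-D convolutional layers with sigmoid activations (feature-map
        # counts 20/20/10 and the kernel size are illustrative assumptions).
        self.conv = nn.Sequential(
            nn.Conv1d(1, 20, kernel), nn.Sigmoid(),
            nn.Conv1d(20, 20, kernel), nn.Sigmoid(),
            nn.Conv1d(20, 10, kernel), nn.Sigmoid(),
        )
        conv_out = 10 * (in_len - 3 * (kernel - 1))   # flattened size after three valid convs
        self.head = nn.Sequential(
            nn.Dropout(0.2),                          # dropout rate is an assumption
            nn.Flatten(),
            nn.Linear(conv_out, 20), nn.Sigmoid(),    # fully connected layer with 20 neurons
            nn.Linear(20, 2),                         # linear output: Re{z(n)}, Im{z(n)}
        )

    def forward(self, x):                             # x: (batch, 1, in_len) feature tensor
        return self.head(self.conv(x))

# Training uses the mean-square-error loss E[|x(n-d) - z(n)|^2] described above:
# model = VolterraDNNEqualizer(in_len=24)
# loss_fn = nn.MSELoss()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```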


Experiment Evaluations

Experiments are presented on applying the Volterra series based DNN equalizer (Volterra+NN) for nonlinear PA equalization. The Volterra+NN scheme is compared with the following equalization methods: a Volterra series-based equalizer (Volterra) and a conventional time-delay neural network equalizer (NN). The performance metrics are the normalized mean square error (MSE)

\sqrt{ E[\,|z(n) - x(n-d)|^{2}\,] / E[\,|x(n-d)|^{2}\,] }

and the symbol error rate (SER).
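
For reference, this metric can be computed as in the following small helper (an illustrative utility, not from the source):

```python
import numpy as np

def normalized_mse(z, x_ref):
    # sqrt( E[|z(n) - x(n-d)|^2] / E[|x(n-d)|^2] ), with z and x_ref time-aligned
    return np.sqrt(np.mean(np.abs(z - x_ref) ** 2) / np.mean(np.abs(x_ref) ** 2))
```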


Both simulated signals and real measurement signals were employed. To generate the simulated signals, a Doherty nonlinear PA model consisting of 3rd- and 5th-order nonlinearities was employed. Referring to (2), the coefficients b_{k,d} were

b_{0,0:2} = {1.0513+0.0904j, −0.068−0.0023j, 0.0289−0.0054j}
b_{2,0:2} = {−0.0542−0.29j, 0.2234+0.2317j, −0.0621−0.0932j}
b_{4,0:2} = {−0.9657−0.7028j, −0.2451−0.3735j, 0.1229+0.1508j},

which were used in [5] to simulate a 5th-order-dominant nonlinear distortion derived from PA devices used in the satellite industry. For the real measurements, the signals were obtained from PA devices used in the cable TV (CATV) industry, which are typically dominated by 3rd-order nonlinear distortion (NLD). Various levels of nonlinear distortion, in terms of dBc, were generated by adjusting the PAs.


For the Volterra equalizer, the approximate response of the nonlinear equalizer included delays of 8 pre- and post-main taps and both even- and odd-order nonlinearities up to the 5th order. To determine the values of the Volterra coefficients, N=4,096 training symbols were transmitted through the PA, and the noisy received samples r(n) were then collected.


For the conventional time-delay NN equalizer, a feedforward neural network with an 80-dimensional input vector X and 5 fully-connected hidden layers with 20, 20, 10, 10, 10 neurons, respectively, was applied.



FIG. 3 shows the constellations and MSE of the equalizers' outputs. It can be seen that the proposed scheme provides the best performance.



FIG. 4 shows the constellations of 16-QAM equalization over the real PA. The corresponding SERs were 0.0067, 0.0027, and 0.00025, respectively. It can be seen that the Volterra+NN scheme has the best performance.



FIG. 5 provides MSE measurements for 16-QAM under various nonlinear distortion levels (dBc). For each 1 dB increase in NLD, the resultant MSE is shown for the "Measured", "Volterra", "NN", and the proposed "Volterra+NN" cases. The MSE reduction diminishes appreciably as the modulation order increases from QPSK to 64-QAM, but small improvements in MSE have been observed to lead to appreciable SER improvements, especially for higher modulation orders. The 4,096-symbol sample size limits the measurements to a minimum measurable SER of 0.000244, which represents 1 symbol error out of 4,096 symbols.


Table 2 summarizes the equalization performance, showing the averaged percentage reduction/improvement in MSE and SER relative to the NLD-impaired data for multiple modulation orders. Note that the 0% SER improvement for QPSK is because the received signal's SER was already very low.









TABLE 2
EQ-MSE/SER improvement in percentage over measured NLD

              Volterra           NN           Volterra + NN
              MSE     SER     MSE     SER     MSE     SER
  64-QAM      16%     26%     10%     25%     42%     44%
  16-QAM      41%      2%     35%      6%     85%     28%
  QPSK        57%      0%    100%      0%    100%      0%
  Average     38%      9%     48%     10%     76%     24%









The nonlinear equalization scheme presented here, which integrates the Volterra series nonlinear model with deep neural networks, yields superior results over conventional nonlinear equalization approaches in mitigating nonlinear power amplifier distortions. It finds application in many 5G communication scenarios.


The technology may be implemented as an additional component in a receiver, or within the digital signal processing chain of a modern radio. A radio is described in US 20180262217, expressly incorporated herein by reference.


In an implementation, a base station may include a SDR receiver configured to allow the base station to operate as an auxiliary receiver. In an example implementation, the base station may include a wideband receiver bank and a digital physical/media access control (PHY/MAC) layer receiver. In this example, the SDR receiver may use a protocol analyzer to determine the protocol used by the source device on the uplink to the primary base station, and then configure the digital PHY/MAC layer receiver for that protocol when operating as an auxiliary receiver. Also, the digital PHY/MAC layer receiver may be configured to operate according to another protocol when operating as a primary base station. In another example, the base station may include a receiver bank for a wireless system, for example, a fifth Generation (5G) receiver bank, and include an additional receiver having SDR configurable capability. The additional receiver may be, for example, a digital Wi-Fi receiver configurable to operate according to various Wi-Fi protocols. The base station may use a protocol analyzer to determine the particular Wi-Fi protocol used by the source device on the uplink to the primary base station. The base station may then configure the additional receiver as the auxiliary receiver for that Wi-Fi protocol.


Depending on the hardware configuration, a receiver may be used to flexibly provide uplink support in systems operating according to one or more protocols such as the various IEEE 802.11 Wi-Fi protocols, 3rd Generation Cellular (3G), 4th Generation Cellular (4G) wide band code division multiple access (WCDMA), Long Term Evolution (LTE) Cellular, and 5th generation cellular (5G).


See, 5G References, infra.


Processing unit may comprise one or more processors, or other control circuitry, or any combination of processors and control circuitry that provide overall control according to the disclosed embodiments. Memory may be implemented as any type of computer readable storage media, including non-volatile and volatile memory.


The example embodiments disclosed herein may be described in the general context of processor-executable code or instructions stored on memory that may comprise one or more computer readable storage media (e.g., tangible non-transitory computer-readable storage media such as memory). As should be readily understood, the terms “computer-readable storage media” or “non-transitory computer-readable media” include the media for storing of data, code and program instructions, such as memory, and do not include portions of the media for storing transitory propagated or modulated data communication signals.


While the functionality disclosed herein has been described by illustrative example using descriptions of the various components and devices of embodiments by referring to functional blocks and processors or processing units, controllers, and memory including instructions and code, the functions and processes of the embodiments may be implemented and performed using any type of processor, circuit, circuitry or combinations of processors and/or circuitry and code. This may include, at least in part, one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. Use of the term processor or processing unit in this disclosure is meant to include all such implementations.


The disclosed implementations include a receiver, one or more processors in communication with the receiver, and memory in communication with the one or more processors, the memory comprising code that, when executed, causes the one or more processors to control the receiver to implement various features and methods according to the present technology.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example embodiments, implementations, and forms of implementing the claims, and these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Moreover, although the example embodiments have been illustrated with reference to particular elements and operations that facilitate the processes, these elements and operations may be combined with, or be replaced by, any suitable devices, components, architecture or process that achieves the intended functionality of the embodiment. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications falling within the scope of the appended claims.


REFERENCES



  • [1] J.-A. Lucciardi, P. Potier, G. Buscarlet, F. Barrami, and G. Mesnager, “Non-linearized amplifier and advanced mitigation techniques: Dvbs-2× spectral efficiency improvement,” in GLOBECOM 2017-2017 IEEE Global Communications Conference. IEEE, 2017, pp. 1-7.

  • [2] J. Wood, Behavioral modeling and linearization of RF power amplifiers. Artech House, 2014.

  • [3] C.-L. Wang and Y. Ouyang, “Low-complexity selected mapping schemes for peak-to-average power ratio reduction in ofdm systems,” IEEE Transactions on signal processing, vol. 53, no. 12, pp. 4652-4660, 2005.

  • [4] J. Kim and K. Konstantinou, “Digital predistortion of wideband signals based on power amplifier model with memory,” Electronics Letters, vol. 37, no. 23, pp. 1417-1418, 2001.

  • [5] L. Ding, G. T. Zhou, D. R. Morgan, Z. Ma, J. S. Kenney, J. Kim, and C. R. Giardina, “A robust digital baseband predistorter constructed using memory polynomials,” IEEE Transactions on communications, vol. 52, no. 1, pp. 159-165, 2004.

  • [6] M. Rawat, K. Rawat, and F. M. Ghannouchi, “Adaptive digital predistortion of wireless power amplifiers/transmitters using dynamic realvalued focused time-delay line neural networks,” IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 1, pp. 95-104, 2010.

  • [7] S. Dimitrov, “Non-linear distortion cancellation and symbol-based equalization in satellite forward links,” IEEE Trans Wireless Commun, vol. 16, no. 7, pp. 4489-4502, 2017.

  • [8] D. J. Sebald and J. A. Bucklew, “Support vector machine techniques for nonlinear equalization,” IEEE Transactions on Signal Processing, vol. 48, no. 11, pp. 3217-3226, 2000.

  • [9] S. Chen, B. Mulgrew, and P. M. Grant, “A clustering technique for digital communications channel equalization using radial basis function networks,” IEEE Transactions on neural networks, vol. 4, no. 4, pp. 570-590, 1993.

  • [10] B. Li, C. Zhao, M. Sun, H. Zhang, Z. Zhou, and A. Nallanathan, “A bayesian approach for nonlinear equalization and signal detection in millimeter-wave communications,” IEEE Transactions on Wireless Communications, vol. 14, no. 7, pp. 3794-3809, 2015.

  • [11] F. Mkadem and S. Boumaiza, “Physically inspired neural network model for rf power amplifier behavioral modeling and digital predistortion,” IEEE Transactions on Microwave Theory and Techniques, vol. 59, no. 4, pp. 913-923, 2011.

  • [12] T. Liu, S. Boumaiza, and F. M. Ghannouchi, “Dynamic behavioral modeling of 3 g power amplifiers using real-valued time-delay neural networks,” IEEE Transactions on Microwave Theory and Techniques, vol. 52, no. 3, pp. 1025-1033, 2004.

  • [13] M. Ibnkahla, “Applications of neural networks to digital communications-a survey,” Signal processing, vol. 80, no. 7, pp. 1185-1215, 2000.

  • [14] D.-C. Park and T.-K. J. Jeong, “Complex-bilinear recurrent neural network for equalization of a digital satellite channel,” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 711-725, 2002.

  • [15] A. Uncini, L. Vecci, P. Campolucci, and F. Piazza, “Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization,” IEEE Transactions on Signal Processing, vol. 47, no. 2, pp. 505-514, 1999.

  • [16] M. S. Sim, M. Chung, D. Kim, J. Chung, D. K. Kim, and C.-B. Chae, “Nonlinear self-interference cancellation for full-duplex radios: From link-level and system-level performance perspectives,” IEEE Communications Magazine, vol. 55, no. 9, pp. 158-167, 2017.

  • [17] I. Yoffe and D. Wulich, “Predistorter for mimo system with nonlinear power amplifiers,” IEEE Transactions on Communications, vol. 65, no. 8, pp. 3288-3301, 2017.

  • [18] M. Abdelaziz, L. Anttila, and M. Valkama, “Reduced-complexity digital predistortion for massive mimo,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 6478-6482.

  • [19] H. Yan and D. Cabric, “Digital predistortion for hybrid precoding architecture in millimeter-wave massive mimo systems,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 3479-3483.

  • [20] C. Mollén, E. G. Larsson, and T. Eriksson, “Waveforms for the massive mimo downlink: Amplifier efficiency, distortion, and performance,” IEEE Transactions on Communications, vol. 64, no. 12, pp. 5050-5063, 2016.

  • [21] A. Cheaito, M. Crussière, J.-F. Helard, and Y. Louët, “Quantifying the memory effects of power amplifiers: Evm closed-form derivations of multicarrier signals.” IEEE Wireless Commun. Letters, vol. 6, no. 1, pp. 34-37, 2017.

  • [22] K. Simons, Technical Handbook for CATV Systems, 3rd Edition. Jerrod Publication No. 436-001-01, 1968.

  • [23] J. Staudinger, J.-C. Nanan, and J. Wood, “Memory fading volterra series model for high power infrastructure amplifiers,” in Radio and Wireless Symposium (RWS), 2010 IEEE. IEEE, 2010, pp. 184-187.

  • [24] A. Zhu, J. C. Pedro, and T. J. Brazil, “Dynamic deviation reduction-based volterra behavioral modeling of rf power amplifiers,” IEEE Transactions on microwave theory and techniques, vol. 54, no. 12, pp. 4323-4332, 2006.

  • [25] A. Zhu, P. J. Draxler, J. J. Yan, T. J. Brazil, D. F. Kimball, and P. M. Asbeck, “Open-loop digital predistorter for rf power amplifiers using dynamic deviation reduction-based volterra series,” IEEE Transactions on Microwave Theory and Techniques, vol. 56, no. 7, pp. 1524-1534, 2008.

  • [26] L. Guan and A. Zhu, “Simplified dynamic deviation reduction-based volterra model for doherty power amplifiers,” in Integrated Nonlinear Microwave and Millimetre-Wave Circuits (INMMIC), 2011 Workshop on. IEEE, 2011, pp. 1-4.

  • [27] Schetzen, M. The Volterra and Wiener Theories of Non-linear Systems. (1980) Wiley & Sons.

  • [28] Black, H. S. [October, 1928] Translating system. U.S. Pat. No. 1,686,792.

  • [29] Black, H. S. [December, 1937] Wave translating system. U.S. Pat. No. 2,102,671.

  • [30] Mitchell, A. F. [November, 1979] A 135 MHz feedback amplifier. IEEE Colloq. Broadband High Frequency Amplifiers.

  • [31] Arthanayake, T. and Wood, H. B. [8 Apr. 1971] Linear amplification using envelope feedback. Elec. Lett.

  • [32] Chadwick, P. [1986] Wideband Amplifier Applications Book, Edition 2, Plessey Semiconductor.














5G References

U.S. Pat. Nos. 6,675,125; 6,778,966; 7,027,981; 7,190,292; 7,206,420; 7,212,640; 7,558,391; 7,865,177; 8,085,943; 8,599,014; 8,725,706; 8,776,625; 8,898,567; 8,989,762; 9,160,579; 9,203,654; 9,235,268; 9,401,823; 9,432,564; 9,460,617; 9,531,427; 9,544,006; 9,564,927; 9,565,045; 9,613,408; 9,621,387; 9,660,851; 9,680,670; 9,686,112; 9,712,238; 9,712,350; 9,712,354; 9,713,019; 9,722,318; 9,729,281; 9,729,378; 9,742,521; 9,749,083; 9,774,476; 9,859,981; 9,871,679; 9,876,530; 9,877,206; 9,882,608; 9,893,919; 9,899,182; 9,900,048; 9,900,122; 9,900,123; 9,900,190; 9,912,436; 9,929,755; 9,942,074; 9,998,172; 9,998,187; 10,003,364; 10,027,397; 10,027,427; 10,027,523; 10,033,107; 10,033,108; 10,050,815; 10,051,483; 10,051,488; 10,062,970; 10,063,354; 10,069,467; 10,069,535; 10,079,652; 10,084,562; 10,090,594; 10,096,883; 10,103,777; 10,123,217; 10,129,057; 10,135,145; 10,148,016; 10,148,360; 10,168,501; 10,170,840; 10,171,158; 10,191,376; 10,198,582; 10,200,106; 10,205,212; 10,205,231; 10,205,482; 10,205,655; 10,211,855; 10,212,014; 10,218,405; 10,224,634; 20020051546; 20020085725; 20020103619; 20020172374; 20020172376; 20020172378; 20030035549; 20030055635; 20030098805; 20030112088; 20090221257; 20110238690; 20110249024; 20110252320; 20110288457; 20120112908; 20130110974; 20130201316; 20140226035; 20150146805; 20150146806; 20150230105; 20150280945; 20150310739; 20160093029; 20160149665; 20160149731; 20160197642; 20160218891; 20160226681; 20160352361; 20160352362; 20160352419; 20170012862; 20170018851; 20170018852; 20170019131; 20170026095; 20170032129; 20170033465; 20170033466; 20170033953; 20170033954; 20170063430; 20170078400; 20170085003; 20170085336; 20170093693; 20170104617; 20170110795; 20170110804; 20170111805; 20170134205; 20170201288; 20170229782; 20170230083; 20170245157; 20170269481; 20170271117; 20170288917; 20170295048; 20170311307; 20170317781; 20170317782; 20170317783; 20170317858; 20170318482; 20170331899; 20180013452; 20180034912; 20180048497; 20180054232; 20180054233; 20180054234; 20180054268; 20180062886; 20180069594; 20180069731; 20180076947; 20180076979; 20180076982; 20180076988; 20180091195; 20180115040; 20180115058; 20180123256; 20180123257; 20180123749; 20180123836; 20180123856; 20180123897; 20180124181; 20180131406; 20180131541; 20180145411; 20180145412; 20180145414; 20180145415; 20180151957; 20180152262; 20180152330; 20180152925; 20180159195; 20180159196; 20180159197; 20180159228; 20180159229; 20180159230; 20180159232; 20180159240; 20180159243; 20180159615; 20180166761; 20180166784; 20180166785; 20180166787; 20180167105; 20180167148; 20180175892; 20180175978; 20180198668; 20180205399; 20180205481; 20180227158; 20180248592; 20180254754; 20180254924; 20180262243; 20180278693; 20180278694; 20180294897; 20180301812; 20180302145; 20180309206; 20180323826; 20180324005; 20180324006; 20180324021; 20180324601; 20180331413; 20180331720; 20180331721; 20180331871; 20180343304; 20180351687; 20180358678; 20180359126; 20180375940; 20190013577; 20190013837; 20190013838; 20190020530; 20190036222; 20190052505; 20190074563; 20190074564; 20190074565; 20190074568; 20190074580; 20190074584; 20190074597; 20190074598; 20190074864; 20190074865; and 20190074878.

Claims
  • 1. A distortion-compensating processor, comprising: at least one automated processor configured to decompose a non-linearly distorted signal derived from an information signal, received from a channel having a channel non-linear distortion, into a truncated series expansion of at least third order with memory comprising a series of terms, each term representing incremental non-linearity order and associated delay; an adaptive multi-layer feedforward deep neural network comprising a plurality of hidden layers, and at least one dropout layer, receiving as inputs the series of terms, and producing an equalized output signal; and an output port configured to present the equalized output signal, the multi-layer feedforward deep neural network being trained with respect to the channel non-linear distortion associated with communication of a series of symbols using training data comprising the series of terms, to equalize the signal, the multi-layer feedforward deep neural network being configured to receive the respective terms associated with incremental non-linearity orders and associated delay values, and to selectively produce the equalized output signal, representing the information signal wherein the channel non-linear distortion is reduced.
  • 2. The distortion-compensating processor according to claim 1, wherein the training data comprises a set of small amplitude training signals to estimate a channel response and a set of large amplitude training signals to estimate a power amplifier non-linearity.
  • 3. The distortion-compensating processor according to claim 1, wherein the information signal is distorted by amplification by a radio frequency power amplifier and transmission through a radio frequency communication channel, wherein the non-linearly distorted signal is received by the at least one automated processor from a radio receiver.
  • 4. The distortion-compensating processor according to claim 1, wherein the series expansion of at least third order with memory comprises a Volterra series expansion.
  • 5. The distortion-compensating processor according to claim 4, wherein the terms of the Volterra series expansion are defined by:
  • 6. The distortion-compensating processor according to claim 1, wherein: x(n) is a signal sequence representing information in the non-linearly distorted signal y(n) distorted by an analog process within a channel h;
  • 7. The distortion-compensating processor according to claim 6, wherein:
  • 8. The distortion-compensating processor according to claim 1, wherein the truncated series expansion of at least third order with memory comprises at least fifth order terms, and the deep multi-layer feedforward neural network has at least two convolutional network layers.
  • 9. The distortion-compensating processor according to claim 1, wherein the deep multi-layer feedforward neural network comprises at least three hidden layers, each hidden layer comprising at least 10 feature maps, and a fully connected layer subsequent to the at least three hidden layers.
  • 10. The distortion-compensating processor according to claim 1, wherein the non-linearly distorted signal comprises a frequency division multiplexed radio frequency modulated set of signals representing the information signal distorted by a radio frequency power amplifier; further comprising a frequency division multiplexed radio frequency signal demodulator configured to demodulate the equalized output signal as the set of symbols.
  • 11. A method of compensating for a distortion, comprising: decomposing a non-linearly distorted signal received from a channel having a channel non-linear distortion into a truncated series expansion of at least third order with memory, based on an information signal communicated through the channel, the decomposition comprising a series of terms, each term representing incremental non-linearity order and associated delay, using at least one automated processor; equalizing the non-linearly distorted signal with an automated equalizer comprising a multi-layer feedforward deep neural network comprising a plurality of hidden layers and at least one dropout layer, by receiving coefficients of respective terms associated with respective incremental non-linearity orders and associated delay by the automated equalizer, and producing selectively from the equalizer an output signal representing the information signal wherein the channel non-linear distortion is reduced; and updating the multi-layer feedforward deep neural network to reduce error of the output signal with respect to the information signal.
  • 12. The method according to claim 11, wherein the non-linearly distorted signal comprises a frequency division multiplexed signal amplified and distorted by a radio frequency power amplifier, received through a radio frequency receiver, further comprising demodulating information contained within the frequency division multiplexed signal from the output signal.
  • 13. The method according to claim 11, further comprising outputting the output signal, wherein the truncated series expansion of at least third order with memory comprises at least fifth order terms and the multi-layer feedforward deep neural network comprises at least two hidden layers.
  • 14. The method according to claim 11, wherein the series expansion of at least third order with memory comprises a Volterra series expansion having terms defined by:
  • 15. The method according to claim 11, wherein: x(n) is a signal sequence representing information in the signal y(n) distorted by an analog process within a channel h;
  • 16. The method according to claim 11, wherein the multi-layer feedforward deep neural network comprises at least one convolutional layer, followed by a fully connected layer with dropout comprising the at least one dropout layer, and an output layer.
  • 17. The method according to claim 11, wherein the multi-layer feedforward deep neural network comprises at least three one-dimensional convolutional layers each comprising at least 10 feature maps, followed by a first fully-connected layer with the at least one dropout layer provided for regularization, and a second fully-connected layer which produces the output with respect to a delay, the multi-layer feedforward deep neural network producing an output tensor having two dimensions, wherein the at least three one-dimensional convolutional layers and the first fully connected layer use a sigmoid activation function, and the fully connected output layer uses a linear activation function.
  • 18. A non-linear distortion-compensating processor, comprising: an input signal processor configured to decompose a received non-linearly distorted signal based on an information signal communicated through a channel, into a truncated Volterra series expansion, the truncated Volterra series expansion comprising a series of terms, each term comprising a sum of multidimensional convolutions of at least third order each with an associated time delay component; and a multi-layer feedforward deep neural network comprising a plurality of hidden neural network layers comprising at least one convolutional neural network layer and at least one dropout layer, trained with respect to a non-linear distortion of the information signal represented in the non-linearly distorted signal, and updated dependent on equalizer error, to receive the series of terms of the truncated Volterra series expansion, and to selectively produce an output signal representing the non-linearly distorted signal at least partially compensated for the non-linear distortion.
  • 19. The non-linear distortion-compensating processor according to claim 18, further comprising a demodulator configured to demodulate information modulated in the non-linearly distorted signal, wherein the non-linearly distorted signal comprises a frequency division multiplexed signal distorted by a power amplifier and a communication channel.
  • 20. The non-linear distortion-compensating processor according to claim 18, wherein the multi-layer feedforward deep neural network comprises a plurality of convolutional layers, each convolutional layer comprising at least 10 feature maps, followed by a first fully connected layer with the at least one dropout layer provided for regularization, and a second fully connected layer for output, wherein the plurality of convolutional layers and the first fully connected layer use a non-linear activation function, and the second fully connected layer uses a linear activation function.
  • 21. A method of compensating for a distortion, comprising: decomposing a signal received from a channel having a channel non-linear distortion into a truncated series expansion of at least third order with memory, using at least one automated processor; and equalizing the signal with an automated equalizer comprising a multi-layer feedforward deep neural network having at least one convolutional layer, followed by a fully connected layer with dropout, and an output layer, trained with respect to the channel non-linear distortion, by receiving terms of the truncated series expansion of at least third order with memory by the automated equalizer, and producing selectively from the equalizer an output representing the signal wherein the channel non-linear distortion is reduced.
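
For illustration only, and not as a description of the claimed or issued subject matter, the following is a minimal sketch of how the claimed arrangement might be realized, assuming a PyTorch environment, a memory-polynomial style truncation as a stand-in for the Volterra terms recited in claims 5 and 14 (whose defining equations are not reproduced in this text), and illustrative choices of memory depth, non-linearity orders, window length, feature-map count, hidden width, and dropout rate. The helper names volterra_features and VolterraCNNEqualizer are hypothetical.

```python
# Illustrative sketch only, not the patented implementation. The truncation below uses
# memory-polynomial style terms |y(n-m)|^(k-1) * y(n-m) as a stand-in for the Volterra
# terms recited in claims 5 and 14 (whose equations are not reproduced in this text).
# Memory depth, orders, window length, layer widths, dropout rate, and all names
# (volterra_features, VolterraCNNEqualizer) are assumptions, not taken from the patent.
import torch
import torch.nn as nn


def volterra_features(y, memory=4, orders=(1, 3, 5)):
    """Map a 1-D complex received sequence y to real-valued feature channels,
    one real/imaginary pair per (order, delay) term."""
    feats = []
    for k in orders:
        for m in range(memory + 1):
            shifted = torch.roll(y, m)
            if m > 0:
                shifted[:m] = 0  # zero-fill samples that have no past
            term = (shifted.abs() ** (k - 1)) * shifted
            feats.append(torch.stack((term.real, term.imag)))
    return torch.cat(feats, dim=0)  # shape: (2 * len(orders) * (memory + 1), N)


class VolterraCNNEqualizer(nn.Module):
    """Three 1-D convolutional layers (>= 10 feature maps, sigmoid), a fully connected
    layer with dropout (sigmoid), and a linear fully connected output layer emitting
    the real and imaginary parts of the equalized sample (claims 17 and 20 style)."""

    def __init__(self, in_channels, window=16, feature_maps=16, hidden=64, p_drop=0.5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, feature_maps, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.Conv1d(feature_maps, feature_maps, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.Conv1d(feature_maps, feature_maps, kernel_size=3, padding=1), nn.Sigmoid(),
        )
        self.fc1 = nn.Sequential(
            nn.Linear(feature_maps * window, hidden), nn.Sigmoid(), nn.Dropout(p_drop)
        )
        self.out = nn.Linear(hidden, 2)  # linear activation on the output layer

    def forward(self, x):  # x: (batch, channels, window)
        h = self.conv(x).flatten(start_dim=1)
        return self.out(self.fc1(h))


if __name__ == "__main__":
    # One illustrative supervised update (a claim 11 style error-reducing step),
    # with random placeholders standing in for the received and training signals.
    N, window = 1024, 16
    y = torch.randn(N, dtype=torch.complex64)  # non-linearly distorted received signal
    x = torch.randn(N, dtype=torch.complex64)  # known training symbols
    feats = volterra_features(y)                                 # (channels, N)
    windows = feats.unfold(1, window, 1).permute(1, 0, 2)        # (N - window + 1, channels, window)
    targets = torch.stack((x.real, x.imag), dim=1)[window - 1:]  # align targets with window ends
    model = VolterraCNNEqualizer(in_channels=feats.shape[0], window=window)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.MSELoss()(model(windows), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The sigmoid activations on the convolutional and first fully connected layers and the linear activation on the output layer track claims 17 and 20; the remaining hyperparameters are placeholders that would in practice be selected against training data such as the small- and large-amplitude training signals of claim 2.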
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation of U.S. patent application Ser. No. 17/234,102, filed Apr. 19, 2021, now U.S. Pat. No. 11,451,419, issued Sep. 20, 2022, which is a Continuation of U.S. patent application Ser. No. 16/812,229, filed Mar. 6, 2020, now U.S. Pat. No. 10,985,951, issued Apr. 20, 2021, which claims benefit of priority under 35 U.S.C. § 119(e) from, and is a non-provisional of, U.S. Provisional Patent Application No. 62/819,054, filed Mar. 15, 2019, the entirety of which is expressly incorporated herein by reference.

US Referenced Citations (1217)
Number Name Date Kind
4615038 Lim et al. Sep 1986 A
4669116 Agazzi et al. May 1987 A
4870371 Gottwald et al. Sep 1989 A
5038187 Zhou Aug 1991 A
5309481 Viviano et al. May 1994 A
5329586 Agazzi Jul 1994 A
5424680 Nazarathy et al. Jun 1995 A
5438625 Klippel Aug 1995 A
5539774 Nobakht et al. Jul 1996 A
5647023 Agazzi et al. Jul 1997 A
5692011 Nobakht et al. Nov 1997 A
5694476 Klippel Dec 1997 A
5744969 Grochowski et al. Apr 1998 A
5745597 Agazzi et al. Apr 1998 A
5790692 Price et al. Aug 1998 A
5792062 Poon et al. Aug 1998 A
5815585 Klippel Sep 1998 A
5889823 Agazzi et al. Mar 1999 A
5924086 Mathur et al. Jul 1999 A
5938594 Poon et al. Aug 1999 A
5991023 Morawski et al. Nov 1999 A
6002479 Barwicz et al. Dec 1999 A
6005952 Klippel Dec 1999 A
6064265 Yun et al. May 2000 A
6166599 Aparin et al. Dec 2000 A
6181754 Chen Jan 2001 B1
6201455 Kusunoki Mar 2001 B1
6201839 Kavcic et al. Mar 2001 B1
6236837 Midya May 2001 B1
6240278 Midya et al. May 2001 B1
6288610 Miyashita Sep 2001 B1
6335767 Twitchell et al. Jan 2002 B1
6351740 Rabinowitz Feb 2002 B1
6381212 Larkin Apr 2002 B1
6393259 Kusunoki May 2002 B1
6406438 Thornton Jun 2002 B1
6408079 Katayama et al. Jun 2002 B1
6438180 Kavcic et al. Aug 2002 B1
6453308 Zhao et al. Sep 2002 B1
6504885 Chen Jan 2003 B1
6510257 Barwicz et al. Jan 2003 B1
6512417 Booth et al. Jan 2003 B2
6532272 Ryan et al. Mar 2003 B1
6563870 Schenk May 2003 B1
6600794 Agarossi et al. Jul 2003 B1
6633208 Salkola et al. Oct 2003 B2
6636115 Kim et al. Oct 2003 B2
6668256 Lynch Dec 2003 B1
6675125 Bizjak Jan 2004 B2
6687235 Chu Feb 2004 B1
6690693 Crowder Feb 2004 B1
6697768 Jones et al. Feb 2004 B2
6711094 Katz et al. Mar 2004 B1
6714481 Katz et al. Mar 2004 B1
6718087 Choa Apr 2004 B2
6775646 Tufillaro et al. Aug 2004 B1
6778966 Bizjak Aug 2004 B2
6788719 Crowder Sep 2004 B2
6812792 Mattsson et al. Nov 2004 B2
6826331 Barwicz et al. Nov 2004 B2
6839657 Verbeyst et al. Jan 2005 B2
6850871 Barford et al. Feb 2005 B1
6868380 Kroeker Mar 2005 B2
6885954 Jones et al. Apr 2005 B2
6895262 Cortes et al. May 2005 B2
6922552 Noori Jul 2005 B2
6934655 Jones et al. Aug 2005 B2
6940790 Powelson et al. Sep 2005 B1
6947857 Jones et al. Sep 2005 B2
6951540 Ebbini et al. Oct 2005 B2
6954476 Coldren et al. Oct 2005 B2
6956433 Kim et al. Oct 2005 B2
6982939 Powelson et al. Jan 2006 B2
6992519 Vilander et al. Jan 2006 B2
6999201 Shimizu Feb 2006 B1
6999510 Batruni Feb 2006 B2
7007253 Gullapalli et al. Feb 2006 B2
7016823 Yang Mar 2006 B2
7027981 Bizjak Apr 2006 B2
7061943 Coldren et al. Jun 2006 B2
7065511 Zhao et al. Jun 2006 B2
7071797 Ye Jul 2006 B2
7084974 Barwicz et al. Aug 2006 B1
7092043 Vorenkamp et al. Aug 2006 B2
7113037 Nezami Sep 2006 B2
7123663 De Gaudenzi et al. Oct 2006 B2
7151405 Nezami Dec 2006 B2
7176757 Nakatani Feb 2007 B2
7190292 Bizjak Mar 2007 B2
7206420 Bizjak Apr 2007 B2
7209566 Griniasty Apr 2007 B2
7212640 Bizjak May 2007 B2
7212933 Kouri et al. May 2007 B2
7236156 Liberty et al. Jun 2007 B2
7236212 Carr et al. Jun 2007 B2
7239301 Liberty et al. Jul 2007 B2
7239668 De Gaudenzi et al. Jul 2007 B2
7251297 Agazzi Jul 2007 B2
7268620 Nygren et al. Sep 2007 B2
7272594 Lynch et al. Sep 2007 B1
7286009 Andersen et al. Oct 2007 B2
7295961 Root et al. Nov 2007 B2
7304591 Raphaeli Dec 2007 B2
7305639 Floyd et al. Dec 2007 B2
7308032 Capofreddi Dec 2007 B2
7333559 Song et al. Feb 2008 B2
7348844 Jaenecke Mar 2008 B2
7400807 Minelly et al. Jul 2008 B2
7403884 Hemmett Jul 2008 B2
7412469 Dalipi Aug 2008 B2
7423699 Vorenkamp et al. Sep 2008 B2
7436883 Batruni Oct 2008 B2
7443326 Raphaeli Oct 2008 B2
7489298 Liberty et al. Feb 2009 B2
7512900 Lynch et al. Mar 2009 B2
7542518 Kim et al. Jun 2009 B2
7551668 Higashino et al. Jun 2009 B2
7558391 Bizjak Jul 2009 B2
7570856 Minelly et al. Aug 2009 B1
7571401 Rao et al. Aug 2009 B1
7576606 Andersen et al. Aug 2009 B2
7589725 Snyder et al. Sep 2009 B2
7590518 Phillips Sep 2009 B2
7602240 Gao et al. Oct 2009 B2
7606539 Singerl et al. Oct 2009 B2
7610183 Danko Oct 2009 B2
7657405 Singerl et al. Feb 2010 B2
7720232 Oxford May 2010 B2
7720236 Oxford May 2010 B2
7728658 Andersen et al. Jun 2010 B2
7729446 Copeland Jun 2010 B2
7733177 Borkar et al. Jun 2010 B1
7746955 Rexberg Jun 2010 B2
7755425 Klingberg et al. Jul 2010 B2
7760887 Oxford Jul 2010 B2
7773692 Copeland et al. Aug 2010 B2
7774176 Rao et al. Aug 2010 B2
7795858 Tufillaro et al. Sep 2010 B2
7796960 Rashev et al. Sep 2010 B1
7808315 Goodman et al. Oct 2010 B2
7812666 Chieng et al. Oct 2010 B2
7821337 Yamanouchi et al. Oct 2010 B2
7821581 Vorenkamp et al. Oct 2010 B2
7822146 Copeland Oct 2010 B2
7826624 Oxford Nov 2010 B2
7847631 Jiang et al. Dec 2010 B2
7852913 Agazzi et al. Dec 2010 B2
7853443 Hemmett Dec 2010 B2
7864881 Hori et al. Jan 2011 B2
7865177 Sorrells et al. Jan 2011 B2
7873172 Lashkari Jan 2011 B2
7885025 Eppler et al. Feb 2011 B2
7885797 Koppl et al. Feb 2011 B2
7889007 Kim et al. Feb 2011 B2
7894788 Keehr et al. Feb 2011 B2
7895006 Thompson Feb 2011 B2
7899416 McCallister et al. Mar 2011 B2
7902925 Kim et al. Mar 2011 B2
7903137 Oxford et al. Mar 2011 B2
7924942 Rexberg Apr 2011 B2
7929375 Nuttall et al. Apr 2011 B2
7932782 Lau et al. Apr 2011 B2
7970150 Oxford Jun 2011 B2
7970151 Oxford et al. Jun 2011 B2
7979837 Rao et al. Jul 2011 B1
7991073 Utsunomiya et al. Aug 2011 B2
7991167 Oxford Aug 2011 B2
7995674 Hori et al. Aug 2011 B2
8005858 Lynch et al. Aug 2011 B1
8023668 Pfaffinger Sep 2011 B2
8031882 Ding Oct 2011 B2
8039871 Nogami Oct 2011 B2
8045066 Vorenkamp et al. Oct 2011 B2
8046199 Copeland Oct 2011 B2
8065060 Danko Nov 2011 B2
8085943 Bizjak Dec 2011 B2
8089689 Savage-Leuchs Jan 2012 B1
8105270 Hunter Jan 2012 B2
8139630 Agazzi et al. Mar 2012 B2
8148983 Biber et al. Apr 2012 B2
8149950 Kim et al. Apr 2012 B2
8160191 Row et al. Apr 2012 B2
8165854 Wei Apr 2012 B1
8170508 Davies May 2012 B2
8185853 Kim et al. May 2012 B2
8193566 Nogami Jun 2012 B2
8195103 Waheed et al. Jun 2012 B2
8199399 Savage-Leuchs Jun 2012 B1
8213880 van Zelm et al. Jul 2012 B2
8244787 Principe et al. Aug 2012 B2
8260732 Al-Duwaish et al. Sep 2012 B2
8265583 Venkataraman Sep 2012 B1
8270530 Hamada et al. Sep 2012 B2
8294605 Pagnanelli Oct 2012 B1
8295790 Koren et al. Oct 2012 B2
8306488 Rashev et al. Nov 2012 B2
8310312 Lee et al. Nov 2012 B2
8315970 Zalay et al. Nov 2012 B2
8331511 Beidas et al. Dec 2012 B2
8331879 van Zelm et al. Dec 2012 B2
8345348 Savage-Leuchs Jan 2013 B1
8346692 Rouat et al. Jan 2013 B2
8346693 Al-Duwaish et al. Jan 2013 B2
8346711 Al-Duwaish et al. Jan 2013 B2
8346712 Rizvi et al. Jan 2013 B2
8351876 McCallister et al. Jan 2013 B2
8354884 Braithwaite Jan 2013 B2
8355684 Yu et al. Jan 2013 B2
8358169 Sen et al. Jan 2013 B2
8364095 van Zelm et al. Jan 2013 B2
8369447 Fuller et al. Feb 2013 B2
8369595 Derakhshani et al. Feb 2013 B1
8380773 Batruni Feb 2013 B2
8390375 Miyashita Mar 2013 B2
8390376 Bai Mar 2013 B2
8396693 Danko Mar 2013 B2
8410843 Goodman et al. Apr 2013 B2
8410850 Mazzucco et al. Apr 2013 B2
8412133 Davies Apr 2013 B2
8421534 Kim et al. Apr 2013 B2
8432220 Peyresoubes et al. Apr 2013 B2
8437513 Derakhshani et al. May 2013 B1
8463582 Song et al. Jun 2013 B2
8467438 Beidas Jun 2013 B2
8477581 Liu et al. Jul 2013 B1
8483343 Agazzi et al. Jul 2013 B2
8483450 Derakhshani et al. Jul 2013 B1
8487706 Li et al. Jul 2013 B2
8489047 McCallister et al. Jul 2013 B2
8494463 Davies Jul 2013 B2
8498369 Forrester et al. Jul 2013 B2
8509347 Kim et al. Aug 2013 B2
8509712 van Zelm et al. Aug 2013 B2
8519440 Nogami Aug 2013 B2
8532215 Huang et al. Sep 2013 B2
8532964 Wei Sep 2013 B2
8538039 Pfaffinger Sep 2013 B2
8564368 Bai Oct 2013 B1
8565343 Husted et al. Oct 2013 B1
8577311 Wolf et al. Nov 2013 B2
8587375 Kim et al. Nov 2013 B2
8599014 Prykari et al. Dec 2013 B2
8599050 Rachid et al. Dec 2013 B2
8605814 McCallister et al. Dec 2013 B2
8605819 Lozhkin Dec 2013 B2
8611190 Hughes et al. Dec 2013 B1
8611459 McCallister Dec 2013 B2
8611820 Gilmore Dec 2013 B2
8615208 McCallister et al. Dec 2013 B2
8619905 Ishikawa et al. Dec 2013 B2
8620631 Al-Duwaish Dec 2013 B2
8626089 Singerl et al. Jan 2014 B2
8649743 McCallister et al. Feb 2014 B2
8675925 Derakhshani et al. Mar 2014 B2
8704595 Kim et al. Apr 2014 B2
8705166 Savage-Leuchs Apr 2014 B1
8712345 Ishikawa et al. Apr 2014 B2
8718178 Carbone et al. May 2014 B1
8718209 Lozhkin May 2014 B2
8724857 Derakhshani et al. May 2014 B2
8725706 Arrasvuori et al. May 2014 B2
8737937 Ishikawa et al. May 2014 B2
8737938 Rashev et al. May 2014 B2
8744141 Derakhshani et al. Jun 2014 B2
8744377 Rimini et al. Jun 2014 B2
8758271 Hunter et al. Jun 2014 B2
8761409 Pfaffinger Jun 2014 B2
8766917 Liberty et al. Jul 2014 B2
8767869 Rimini et al. Jul 2014 B2
8776625 Questo et al. Jul 2014 B2
8780693 Kim et al. Jul 2014 B2
8787628 Derakhshani et al. Jul 2014 B1
8798559 Kilambi et al. Aug 2014 B2
8804807 Liu et al. Aug 2014 B2
8804871 Rimini et al. Aug 2014 B2
8811532 Bai Aug 2014 B2
8823452 Chen et al. Sep 2014 B2
8831074 Agazzi et al. Sep 2014 B2
8831133 Azadet et al. Sep 2014 B2
8831135 Utsunomiya et al. Sep 2014 B2
8838218 Khair Sep 2014 B2
8843088 van Zelm et al. Sep 2014 B2
8843089 Davies Sep 2014 B2
8849611 Haviland et al. Sep 2014 B2
8855175 Wyville et al. Oct 2014 B2
8855234 Kim et al. Oct 2014 B2
8867601 Beidas Oct 2014 B2
8874411 Ishikawa et al. Oct 2014 B2
8885765 Zhang et al. Nov 2014 B2
8886341 van Zelm et al. Nov 2014 B1
8891701 Eliaz et al. Nov 2014 B1
8896471 Pagnanelli Nov 2014 B1
8897351 Kravtsov Nov 2014 B2
8898567 Arrasvuori et al. Nov 2014 B2
8903192 Malik et al. Dec 2014 B2
8909176 van Zelm et al. Dec 2014 B2
8909328 Chon Dec 2014 B2
8933752 Nagatani et al. Jan 2015 B2
8934573 McCallister et al. Jan 2015 B2
8958470 Bolstad et al. Feb 2015 B2
8964901 Kim et al. Feb 2015 B2
8964996 Klippel et al. Feb 2015 B2
8971834 Keehr et al. Mar 2015 B2
8976896 McCallister et al. Mar 2015 B2
8989762 Negus et al. Mar 2015 B1
8994657 Liberty et al. Mar 2015 B2
8995571 Chen Mar 2015 B2
8995835 Yan et al. Mar 2015 B2
9008153 Pang et al. Apr 2015 B2
9014299 Teterwak Apr 2015 B2
9019643 Medard et al. Apr 2015 B2
9020454 Waheed et al. Apr 2015 B2
9025607 Zeger et al. May 2015 B2
9031168 Liu May 2015 B1
9036734 Mauer et al. May 2015 B1
9048865 Pagnanelli Jun 2015 B2
9048900 Pratt et al. Jun 2015 B2
9071313 Monsen Jun 2015 B2
9077508 Koike-Akino et al. Jul 2015 B2
9088472 Jain et al. Jul 2015 B1
9094036 Rachid et al. Jul 2015 B2
9094151 Alonso et al. Jul 2015 B2
9104921 Derakhshani et al. Aug 2015 B2
9106304 Row et al. Aug 2015 B2
9130628 Mittal et al. Sep 2015 B1
9137082 Ali Sep 2015 B1
9137492 Lima et al. Sep 2015 B2
9143274 Zeger et al. Sep 2015 B2
9160280 Abdelhafiz et al. Oct 2015 B1
9160310 Velazquez et al. Oct 2015 B2
9160579 Tan et al. Oct 2015 B1
9160687 Haeupler et al. Oct 2015 B2
9166610 Klippel Oct 2015 B2
9166635 Carbone et al. Oct 2015 B2
9166698 Bae et al. Oct 2015 B2
9171534 Tronchin et al. Oct 2015 B2
9184784 Ding et al. Nov 2015 B2
9185529 Medard et al. Nov 2015 B2
9189458 Langer et al. Nov 2015 B1
9191041 Mkadem et al. Nov 2015 B2
9191049 Mikhemar et al. Nov 2015 B2
9199860 MacArthur Dec 2015 B2
9203654 Terry Dec 2015 B2
9209753 Xiao et al. Dec 2015 B2
9209841 Yu et al. Dec 2015 B2
9214968 Wang Dec 2015 B2
9214969 Hammi Dec 2015 B2
9225295 Ishikawa et al. Dec 2015 B2
9225501 Azadet Dec 2015 B2
9231530 Kaushik et al. Jan 2016 B1
9231647 Polydoros et al. Jan 2016 B2
9231801 Rimini et al. Jan 2016 B2
9235268 Arrasvuori et al. Jan 2016 B2
9236996 Khandani Jan 2016 B2
9246525 Breynaert et al. Jan 2016 B2
9246731 Kim et al. Jan 2016 B2
9252798 Rachid et al. Feb 2016 B2
9252821 Shor et al. Feb 2016 B2
9253608 Medard et al. Feb 2016 B2
9257943 Onishi Feb 2016 B2
9258156 Wloczysiak Feb 2016 B2
9261978 Liberty et al. Feb 2016 B2
9264153 Kim et al. Feb 2016 B2
9265461 Hunter et al. Feb 2016 B2
9270304 Monsen Feb 2016 B2
9270512 Eliaz et al. Feb 2016 B2
9271123 Medard et al. Feb 2016 B2
9276602 Pagnanelli Mar 2016 B1
9294113 Feizi-Khankandi et al. Mar 2016 B2
9304501 Danko Apr 2016 B2
9306606 Zhang Apr 2016 B2
9311535 Derakhshani et al. Apr 2016 B2
9312892 Chang Apr 2016 B2
9314623 Bardakjian et al. Apr 2016 B2
9322906 Seppa et al. Apr 2016 B2
9337781 Hammi May 2016 B2
9337783 Matsubara et al. May 2016 B2
9352155 Moehlis et al. May 2016 B2
9361681 Derakhshani et al. Jun 2016 B2
9361936 Medard et al. Jun 2016 B2
9362869 Dechen et al. Jun 2016 B2
9362942 Hammler Jun 2016 B1
9363068 Azadet et al. Jun 2016 B2
9369093 Gustavsson Jun 2016 B2
9369255 Medard et al. Jun 2016 B2
9369541 Medard et al. Jun 2016 B2
9397516 Hunter et al. Jul 2016 B2
9401823 Terry Jul 2016 B2
9404950 Lafontaine et al. Aug 2016 B2
9413516 Khandani Aug 2016 B2
9419722 Winzer et al. Aug 2016 B2
9431972 Xu Aug 2016 B1
9432564 Nurmenniemi Aug 2016 B2
9438178 Sulimarski et al. Sep 2016 B1
9438356 Kim et al. Sep 2016 B2
9439597 Warrick et al. Sep 2016 B2
9451920 Khair Sep 2016 B2
9460246 Williams Oct 2016 B2
9460617 Beaurepaire et al. Oct 2016 B2
9461597 Abdelrahman et al. Oct 2016 B2
9461676 Santucci et al. Oct 2016 B2
9473077 Feldman et al. Oct 2016 B2
9479322 Khandani Oct 2016 B2
9509331 Pagnanelli Nov 2016 B1
9509350 Magesacher et al. Nov 2016 B1
9517030 Hunter et al. Dec 2016 B2
9531427 Henry et al. Dec 2016 B2
9531475 Agazzi et al. Dec 2016 B2
9536539 Chang et al. Jan 2017 B2
9537759 Calmon et al. Jan 2017 B2
9544006 Henry et al. Jan 2017 B2
9544126 Zeger et al. Jan 2017 B2
9559831 Zeger et al. Jan 2017 B2
9564876 Kim et al. Feb 2017 B2
9564927 Fonseka et al. Feb 2017 B2
9565045 Terry Feb 2017 B2
9571312 Brandt-Pearce et al. Feb 2017 B2
9575570 Liberty et al. Feb 2017 B2
9590664 Rexberg et al. Mar 2017 B2
9590668 Kim et al. Mar 2017 B1
9595920 Li et al. Mar 2017 B2
9595982 Weber et al. Mar 2017 B2
9607003 Medard et al. Mar 2017 B2
9607628 Gautama Mar 2017 B2
9608676 Chen Mar 2017 B2
9608718 Monsen et al. Mar 2017 B2
9613408 Micovic et al. Apr 2017 B2
9614554 Beidas et al. Apr 2017 B2
9621387 Magers Apr 2017 B1
9628119 Gal et al. Apr 2017 B2
9628120 Yu et al. Apr 2017 B2
9646116 Liu et al. May 2017 B2
9647717 Xiong et al. May 2017 B2
9654211 Zhao et al. May 2017 B2
9654216 Nakashima et al. May 2017 B2
9659120 Fehri et al. May 2017 B2
9660593 Abdelrahman et al. May 2017 B2
9660730 Rope May 2017 B1
9660851 Hadani et al. May 2017 B2
9665510 Kaushik et al. May 2017 B2
9667292 Xue et al. May 2017 B2
9674368 Kechichian et al. Jun 2017 B2
9680423 Chen Jun 2017 B2
9680497 Pagnanelli Jun 2017 B2
9680670 Henry et al. Jun 2017 B2
9686112 Terry Jun 2017 B2
9697845 Hammarqvist Jul 2017 B2
9705477 Velazquez Jul 2017 B2
9706296 Uhle et al. Jul 2017 B2
9712179 Rachid et al. Jul 2017 B2
9712233 Deng et al. Jul 2017 B1
9712238 Ashrafi et al. Jul 2017 B2
9712350 Henry et al. Jul 2017 B2
9712354 Hadani et al. Jul 2017 B2
9713010 Khandani Jul 2017 B2
9713019 Negus et al. Jul 2017 B2
9722318 Adriazola et al. Aug 2017 B2
9722646 Matthews et al. Aug 2017 B1
9722691 Colavolpe et al. Aug 2017 B2
9726701 Lafontaine et al. Aug 2017 B2
9727677 Fehri et al. Aug 2017 B2
9729281 Hadani et al. Aug 2017 B2
9729378 Ahmed Aug 2017 B1
9735741 Pratt et al. Aug 2017 B2
9735800 Pagnanelli Aug 2017 B2
9735811 Dalipi et al. Aug 2017 B2
9735876 Duthel Aug 2017 B2
9737258 Poon et al. Aug 2017 B2
9742521 Henry et al. Aug 2017 B2
9742599 Iyer Seshadri et al. Aug 2017 B2
9746506 Lafontaine et al. Aug 2017 B2
9749083 Henry et al. Aug 2017 B2
9749161 Gal et al. Aug 2017 B1
9755691 Kim et al. Sep 2017 B2
9762268 Yang et al. Sep 2017 B2
9768891 Ishikawa et al. Sep 2017 B2
9774476 Nammi et al. Sep 2017 B2
9778902 Azadet et al. Oct 2017 B2
9780869 Rope Oct 2017 B2
9780881 Rope et al. Oct 2017 B1
9787351 Martineau et al. Oct 2017 B2
9787459 Azadet Oct 2017 B2
9794000 Ling Oct 2017 B2
9800437 Kingery et al. Oct 2017 B2
9800734 Kechichian et al. Oct 2017 B2
9820311 Khandani Nov 2017 B2
9831899 Boghrat et al. Nov 2017 B1
9837970 Hammi et al. Dec 2017 B2
9843346 Mundarath et al. Dec 2017 B1
9859845 Sarbishaei et al. Jan 2018 B2
9859981 Ashrafi et al. Jan 2018 B2
9866183 Magesacher et al. Jan 2018 B2
9871679 Henry et al. Jan 2018 B2
9876530 Negus et al. Jan 2018 B2
9877206 Henry et al. Jan 2018 B2
9877265 Kim et al. Jan 2018 B2
9882608 Adriazola et al. Jan 2018 B2
9882648 Agazzi et al. Jan 2018 B2
9887862 Zhou et al. Feb 2018 B2
9893919 Kim et al. Feb 2018 B2
9899182 Borodovsky Feb 2018 B2
9900048 Hadani et al. Feb 2018 B2
9900088 Rope Feb 2018 B2
9900122 Henry et al. Feb 2018 B2
9900123 Henry et al. Feb 2018 B2
9900190 Henry et al. Feb 2018 B2
9912435 Kim et al. Mar 2018 B2
9912436 Henry et al. Mar 2018 B2
9913194 Kim et al. Mar 2018 B2
9923524 Carbone et al. Mar 2018 B2
9923640 Okabe et al. Mar 2018 B2
9923714 Lima et al. Mar 2018 B2
9928212 Agee Mar 2018 B2
9929755 Henry et al. Mar 2018 B2
9935590 Sulimarski et al. Apr 2018 B2
9935645 Tangudu et al. Apr 2018 B1
9935715 Duthel Apr 2018 B2
9935761 Azadet Apr 2018 B2
9940938 Dick et al. Apr 2018 B2
9941963 Magri et al. Apr 2018 B2
9942074 Luo et al. Apr 2018 B1
9953656 Dick et al. Apr 2018 B2
9954384 Hunter et al. Apr 2018 B2
9960794 Weissman et al. May 2018 B2
9960804 Kim et al. May 2018 B2
9960900 Azadet May 2018 B2
9971920 Derakhshani et al. May 2018 B2
9973279 Shen May 2018 B2
9974957 Choi et al. May 2018 B2
9983243 Lafontaine et al. May 2018 B2
9998172 Barzegar et al. Jun 2018 B1
9998187 Ashrafi et al. Jun 2018 B2
9998223 Zhao et al. Jun 2018 B2
9998406 Haeupler et al. Jun 2018 B2
9999780 Weyh et al. Jun 2018 B2
11451419 Li Sep 2022 B2
20010051871 Kroeker Mar 2001 A1
20010036334 Choa Nov 2001 A1
20020041210 Booth et al. Apr 2002 A1
20020051546 Bizjak May 2002 A1
20020060827 Agazzi May 2002 A1
20020075918 Crowder Jun 2002 A1
20020085725 Bizjak Jul 2002 A1
20020103619 Bizjak Aug 2002 A1
20020126604 Powelson et al. Sep 2002 A1
20020146993 Persico et al. Oct 2002 A1
20020161539 Jones et al. Oct 2002 A1
20020161542 Jones et al. Oct 2002 A1
20020169585 Jones et al. Nov 2002 A1
20020172374 Bizjak Nov 2002 A1
20020172376 Bizjak Nov 2002 A1
20020172378 Bizjak Nov 2002 A1
20020178133 Zhao et al. Nov 2002 A1
20020181521 Crowder et al. Dec 2002 A1
20020186874 Price et al. Dec 2002 A1
20030035549 Bizjak et al. Feb 2003 A1
20030046045 Pileggi et al. Mar 2003 A1
20030055635 Bizjak Mar 2003 A1
20030057963 Verbeyst et al. Mar 2003 A1
20030063854 Barwicz et al. Apr 2003 A1
20030071684 Noori Apr 2003 A1
20030098805 Bizjak May 2003 A1
20030112088 Bizjak Jun 2003 A1
20030142832 Meerkoetter et al. Jul 2003 A1
20030195706 Korenberg Oct 2003 A1
20030223507 De Gaudenzi et al. Dec 2003 A1
20040019443 Jones et al. Jan 2004 A1
20040044489 Jones et al. Mar 2004 A1
20040130394 Mattsson et al. Jul 2004 A1
20040136423 Coldren et al. Jul 2004 A1
20040155707 Kim et al. Aug 2004 A1
20040179629 Song et al. Sep 2004 A1
20040208242 Batruni Oct 2004 A1
20040258176 Mattsson et al. Dec 2004 A1
20050021266 Kouri et al. Jan 2005 A1
20050021319 Li et al. Jan 2005 A1
20050031117 Browning et al. Feb 2005 A1
20050031131 Browning et al. Feb 2005 A1
20050031132 Browning et al. Feb 2005 A1
20050031133 Browning et al. Feb 2005 A1
20050031134 Leske Feb 2005 A1
20050031137 Browning et al. Feb 2005 A1
20050031138 Browning et al. Feb 2005 A1
20050031139 Browning et al. Feb 2005 A1
20050031140 Browning Feb 2005 A1
20050049838 Danko Mar 2005 A1
20050100065 Coldren et al. May 2005 A1
20050141637 Domokos Jun 2005 A1
20050141659 Capofreddi Jun 2005 A1
20050174167 Vilander et al. Aug 2005 A1
20050177805 Lynch et al. Aug 2005 A1
20050180526 Kim et al. Aug 2005 A1
20050226316 Higashino et al. Oct 2005 A1
20050237111 Nygren et al. Oct 2005 A1
20050243061 Liberty et al. Nov 2005 A1
20050253806 Liberty et al. Nov 2005 A1
20050270094 Nakatani Dec 2005 A1
20050271216 Lashkari Dec 2005 A1
20050273188 Barwicz et al. Dec 2005 A1
20060039498 de Figueiredo et al. Feb 2006 A1
20060052988 Farahani et al. Mar 2006 A1
20060083389 Oxford et al. Apr 2006 A1
20060093128 Oxford May 2006 A1
20060095236 Phillips May 2006 A1
20060104395 Batruni May 2006 A1
20060104451 Browning et al. May 2006 A1
20060133536 Rexberg Jun 2006 A1
20060209982 De Gaudenzi et al. Sep 2006 A1
20060222128 Raphaeli Oct 2006 A1
20060239443 Oxford et al. Oct 2006 A1
20060256974 Oxford Nov 2006 A1
20060262942 Oxford Nov 2006 A1
20060262943 Oxford Nov 2006 A1
20060264187 Singerl et al. Nov 2006 A1
20060269074 Oxford Nov 2006 A1
20060269080 Oxford et al. Nov 2006 A1
20060274904 Lashkari Dec 2006 A1
20070005326 Koppl et al. Jan 2007 A1
20070018722 Jaenecke Jan 2007 A1
20070030076 Kim et al. Feb 2007 A1
20070033000 Farahani et al. Feb 2007 A1
20070063770 Rexberg Mar 2007 A1
20070080841 Raphaeli Apr 2007 A1
20070133713 Dalipi Jun 2007 A1
20070133719 Agazzi et al. Jun 2007 A1
20070136018 Fernandez et al. Jun 2007 A1
20070136045 Rao et al. Jun 2007 A1
20070152750 Andersen et al. Jul 2007 A1
20070160221 Pfaffinger Jul 2007 A1
20070168100 Danko Jul 2007 A1
20070190952 Waheed et al. Aug 2007 A1
20070229154 Kim et al. Oct 2007 A1
20070237260 Hori et al. Oct 2007 A1
20070247425 Liberty et al. Oct 2007 A1
20070252651 Gao et al. Nov 2007 A1
20070252813 Liberty et al. Nov 2007 A1
20070276610 Korenberg Nov 2007 A1
20080001947 Snyder et al. Jan 2008 A1
20080032642 Singerl et al. Feb 2008 A1
20080129379 Copeland Jun 2008 A1
20080130787 Copeland Jun 2008 A1
20080130788 Copeland Jun 2008 A1
20080130789 Copeland et al. Jun 2008 A1
20080152037 Kim et al. Jun 2008 A1
20080158154 Liberty et al. Jul 2008 A1
20080158155 Liberty et al. Jul 2008 A1
20080180178 Gao et al. Jul 2008 A1
20080240325 Agazzi et al. Oct 2008 A1
20080261541 Langer Oct 2008 A1
20080283882 Nogami Nov 2008 A1
20080285640 McCallister et al. Nov 2008 A1
20080293372 Principe et al. Nov 2008 A1
20090003134 Nuttall et al. Jan 2009 A1
20090027117 Andersen et al. Jan 2009 A1
20090027118 Andersen et al. Jan 2009 A1
20090058521 Fernandez Mar 2009 A1
20090067643 Ding Mar 2009 A1
20090072901 Yamanouchi et al. Mar 2009 A1
20090075610 Keehr et al. Mar 2009 A1
20090094304 Batruni Apr 2009 A1
20090146740 Lau et al. Jun 2009 A1
20090153132 Tufillaro et al. Jun 2009 A1
20090185613 Agazzi et al. Jul 2009 A1
20090221257 Sorrells et al. Sep 2009 A1
20090256632 Klingberg et al. Oct 2009 A1
20090287624 Rouat et al. Nov 2009 A1
20090289706 Goodman et al. Nov 2009 A1
20090291650 Singerl et al. Nov 2009 A1
20090302938 Andersen et al. Dec 2009 A1
20090302940 Fuller et al. Dec 2009 A1
20090318983 Armoundas et al. Dec 2009 A1
20100007489 Misra et al. Jan 2010 A1
20100033180 Biber et al. Feb 2010 A1
20100060355 Hamada et al. Mar 2010 A1
20100090762 van Zelm et al. Apr 2010 A1
20100093290 van Zelm et al. Apr 2010 A1
20100094603 Danko Apr 2010 A1
20100097714 Eppler et al. Apr 2010 A1
20100114813 Zalay et al. May 2010 A1
20100135449 Row et al. Jun 2010 A1
20100148865 Borkar et al. Jun 2010 A1
20100152547 Sterling et al. Jun 2010 A1
20100156530 Utsunomiya et al. Jun 2010 A1
20100183106 Beidas et al. Jul 2010 A1
20100194474 Ishikawa et al. Aug 2010 A1
20100199237 Kim et al. Aug 2010 A1
20100254450 Narroschke et al. Oct 2010 A1
20100283540 Davies Nov 2010 A1
20100292602 Worrell et al. Nov 2010 A1
20100292752 Bardakjian et al. Nov 2010 A1
20100311361 Rashev et al. Dec 2010 A1
20100312495 Haviland et al. Dec 2010 A1
20110003570 Yu et al. Jan 2011 A1
20110025414 Wolf et al. Feb 2011 A1
20110028859 Chian Feb 2011 A1
20110037518 Lee et al. Feb 2011 A1
20110054354 Hunter et al. Mar 2011 A1
20110054355 Hunter et al. Mar 2011 A1
20110064171 Huang et al. Mar 2011 A1
20110069749 Forrester et al. Mar 2011 A1
20110081152 Agazzi et al. Apr 2011 A1
20110085678 Pfaffinger Apr 2011 A1
20110087341 Pfaffinger Apr 2011 A1
20110096865 Hori et al. Apr 2011 A1
20110102080 Chatterjee et al. May 2011 A1
20110103455 Forrester et al. May 2011 A1
20110110473 Keehr et al. May 2011 A1
20110121897 Braithwaite May 2011 A1
20110125684 Al-Duwaish et al. May 2011 A1
20110125685 Rizvi et al. May 2011 A1
20110125686 Al-Duwaish et al. May 2011 A1
20110125687 Al-Duwaish et al. May 2011 A1
20110140779 Koren et al. Jun 2011 A1
20110144961 Ishikawa et al. Jun 2011 A1
20110149714 Rimini et al. Jun 2011 A1
20110177956 Korenberg Jul 2011 A1
20110181360 Li et al. Jul 2011 A1
20110204975 Miyashita Aug 2011 A1
20110211842 Agazzi et al. Sep 2011 A1
20110238690 Arrasvuori et al. Sep 2011 A1
20110249024 Arrasvuori et al. Oct 2011 A1
20110252320 Arrasvuori et al. Oct 2011 A1
20110268226 Lozhkin Nov 2011 A1
20110270590 Aparin Nov 2011 A1
20110288457 Questo et al. Nov 2011 A1
20110293051 Lozhkin Dec 2011 A1
20120007153 Nogami Jan 2012 A1
20120007672 Peyresoubes et al. Jan 2012 A1
20120027070 Beidas Feb 2012 A1
20120029663 Danko Feb 2012 A1
20120086504 Tsukamoto Apr 2012 A1
20120086507 Kim et al. Apr 2012 A1
20120093376 Malik et al. Apr 2012 A1
20120098481 Hunter et al. Apr 2012 A1
20120098596 Nagatani et al. Apr 2012 A1
20120112908 Prykari et al. May 2012 A1
20120119810 Bai May 2012 A1
20120140860 Rimini et al. Jun 2012 A1
20120147993 Kim et al. Jun 2012 A1
20120154040 Kim et al. Jun 2012 A1
20120154041 Kim et al. Jun 2012 A1
20120158384 Wei Jun 2012 A1
20120165633 Khair Jun 2012 A1
20120176190 Goodman et al. Jul 2012 A1
20120176609 Seppa et al. Jul 2012 A1
20120217557 Nogami Aug 2012 A1
20120229206 van Zelm et al. Sep 2012 A1
20120256687 Davies Oct 2012 A1
20120259600 Al-Duwaish Oct 2012 A1
20120263256 Waheed et al. Oct 2012 A1
20120306573 Mazzucco et al. Dec 2012 A1
20120328128 Tronchin et al. Dec 2012 A1
20130005283 van Zelm et al. Jan 2013 A1
20130009702 Davies Jan 2013 A1
20130015917 Ishikawa et al. Jan 2013 A1
20130030239 Weyh et al. Jan 2013 A1
20130034188 Rashev et al. Feb 2013 A1
20130040587 Ishikawa et al. Feb 2013 A1
20130044791 Rimini et al. Feb 2013 A1
20130044836 Koren et al. Feb 2013 A1
20130093676 Liberty et al. Apr 2013 A1
20130110974 Arrasvuori et al. May 2013 A1
20130113559 Koren et al. May 2013 A1
20130114762 Azadet et al. May 2013 A1
20130166259 Weber et al. Jun 2013 A1
20130170842 Koike-Akino et al. Jul 2013 A1
20130176153 Rachid et al. Jul 2013 A1
20130201316 Binder et al. Aug 2013 A1
20130207723 Chen et al. Aug 2013 A1
20130222059 Kilambi et al. Aug 2013 A1
20130243119 Dalipi et al. Sep 2013 A1
20130243122 Bai Sep 2013 A1
20130243135 Row et al. Sep 2013 A1
20130257530 Davies Oct 2013 A1
20130271212 Bai Oct 2013 A1
20130272367 Beidas Oct 2013 A1
20130285742 van Zelm et al. Oct 2013 A1
20130301487 Khandani Nov 2013 A1
20130303103 Mikhemar et al. Nov 2013 A1
20130315291 Kim et al. Nov 2013 A1
20130321078 Ishikawa et al. Dec 2013 A1
20130330082 Alonso et al. Dec 2013 A1
20130336377 Liu et al. Dec 2013 A1
20140009224 van Zelm et al. Jan 2014 A1
20140029658 Kim et al. Jan 2014 A1
20140029660 Bolstad et al. Jan 2014 A1
20140030995 Kim et al. Jan 2014 A1
20140031651 Chon Jan 2014 A1
20140036969 Wyville et al. Feb 2014 A1
20140044318 Derakhshani et al. Feb 2014 A1
20140044319 Derakhshani et al. Feb 2014 A1
20140044320 Derakhshani et al. Feb 2014 A1
20140044321 Derakhshani et al. Feb 2014 A1
20140072074 Utsunomiya et al. Mar 2014 A1
20140077981 Rachid et al. Mar 2014 A1
20140081157 Joeken Mar 2014 A1
20140086356 Azadet et al. Mar 2014 A1
20140086361 Azadet et al. Mar 2014 A1
20140095129 Liu et al. Apr 2014 A1
20140107832 Danko Apr 2014 A1
20140126670 Monsen May 2014 A1
20140126675 Monsen May 2014 A1
20140133848 Koike-Akino et al. May 2014 A1
20140140250 Kim et al. May 2014 A1
20140161207 Teterwak Jun 2014 A1
20140167704 Lafontaine et al. Jun 2014 A1
20140172338 Lafontaine et al. Jun 2014 A1
20140198959 Derakhshani et al. Jul 2014 A1
20140213919 Poon et al. Jul 2014 A1
20140225451 Lafontaine et al. Aug 2014 A1
20140226035 Nurmenniemi Aug 2014 A1
20140226828 Klippel et al. Aug 2014 A1
20140229132 Lafontaine et al. Aug 2014 A1
20140247906 Pang et al. Sep 2014 A1
20140266431 Chen Sep 2014 A1
20140269857 Rimini et al. Sep 2014 A1
20140269970 Pratt et al. Sep 2014 A1
20140269989 Santucci et al. Sep 2014 A1
20140269990 Chen Sep 2014 A1
20140270405 Derakhshani et al. Sep 2014 A1
20140278303 Larimore Sep 2014 A1
20140279778 Lazar et al. Sep 2014 A1
20140292406 Dechen et al. Oct 2014 A1
20140292412 Feldman et al. Oct 2014 A1
20140294119 Sochacki Oct 2014 A1
20140294252 Derakhshani et al. Oct 2014 A1
20140313946 Azadet Oct 2014 A1
20140314176 Azadet Oct 2014 A1
20140314181 Azadet Oct 2014 A1
20140314182 Azadet Oct 2014 A1
20140317163 Azadet et al. Oct 2014 A1
20140323891 Sterling et al. Oct 2014 A1
20140333376 Hammi Nov 2014 A1
20140372091 Larimore Dec 2014 A1
20150003625 Uhle et al. Jan 2015 A1
20150005902 van Zelm et al. Jan 2015 A1
20150016567 Chen Jan 2015 A1
20150018632 Khair Jan 2015 A1
20150025328 Khair Jan 2015 A1
20150031317 Wang Jan 2015 A1
20150031969 Khair Jan 2015 A1
20150032788 Velazquez et al. Jan 2015 A1
20150043678 Hammi Feb 2015 A1
20150051513 Hunter et al. Feb 2015 A1
20150061911 Pagnanelli Mar 2015 A1
20150070089 Eliaz et al. Mar 2015 A1
20150077180 Matsubara et al. Mar 2015 A1
20150078484 Xiao et al. Mar 2015 A1
20150092830 Kim et al. Apr 2015 A1
20150098710 Agazzi et al. Apr 2015 A1
20150104196 Bae et al. Apr 2015 A1
20150131757 Carbone et al. May 2015 A1
20150146805 Terry May 2015 A1
20150146806 Terry May 2015 A1
20150156003 Khandani Jun 2015 A1
20150156004 Khandani Jun 2015 A1
20150162881 Hammi Jun 2015 A1
20150172081 Wloczysiak Jun 2015 A1
20150180495 Klippel Jun 2015 A1
20150193565 Kim et al. Jul 2015 A1
20150193666 Derakhshani et al. Jul 2015 A1
20150194989 Mkadem et al. Jul 2015 A1
20150202440 Moehlis et al. Jul 2015 A1
20150214987 Yu et al. Jul 2015 A1
20150215937 Khandani Jul 2015 A1
20150223748 Warrick et al. Aug 2015 A1
20150230105 Negus et al. Aug 2015 A1
20150241996 Liberty et al. Aug 2015 A1
20150249889 Iyer et al. Sep 2015 A1
20150256216 Ding et al. Sep 2015 A1
20150270856 Breynaert et al. Sep 2015 A1
20150270865 Polydoros et al. Sep 2015 A1
20150280945 Tan et al. Oct 2015 A1
20150288375 Rachid et al. Oct 2015 A1
20150295643 Zhao et al. Oct 2015 A1
20150310739 Beaurepaire et al. Oct 2015 A1
20150311927 Beidas et al. Oct 2015 A1
20150311973 Colavolpe et al. Oct 2015 A1
20150311985 Kim et al. Oct 2015 A1
20150322647 Danko Nov 2015 A1
20150326190 Gustavsson Nov 2015 A1
20150333781 Alon et al. Nov 2015 A1
20150357975 Avniel et al. Dec 2015 A1
20150358042 Zhang Dec 2015 A1
20150358191 Eliaz et al. Dec 2015 A1
20150381216 Shor et al. Dec 2015 A1
20150381220 Gal et al. Dec 2015 A1
20150381821 Kechichian et al. Dec 2015 A1
20160005419 Chang et al. Jan 2016 A1
20160022161 Khair Jan 2016 A1
20160028433 Ding et al. Jan 2016 A1
20160034421 Magesacher et al. Feb 2016 A1
20160036472 Chang Feb 2016 A1
20160036528 Zhao et al. Feb 2016 A1
20160065311 Winzer et al. Mar 2016 A1
20160079933 Fehri et al. Mar 2016 A1
20160087604 Kim et al. Mar 2016 A1
20160087657 Yu et al. Mar 2016 A1
20160093029 Micovic et al. Mar 2016 A1
20160099776 Nakashima et al. Apr 2016 A1
20160111110 Gautama Apr 2016 A1
20160117430 Fehri et al. Apr 2016 A1
20160124903 Agee May 2016 A1
20160126903 Abdelrahman et al. May 2016 A1
20160127113 Khandani May 2016 A1
20160132735 Derakhshani et al. May 2016 A1
20160134380 Kim et al. May 2016 A1
20160149665 Henry et al. May 2016 A1
20160149731 Henry et al. May 2016 A1
20160156375 Yang et al. Jun 2016 A1
20160162042 Liberty et al. Jun 2016 A1
20160173117 Rachid et al. Jun 2016 A1
20160191020 Velazquez Jun 2016 A1
20160197642 Henry et al. Jul 2016 A1
20160218752 Liu Jul 2016 A1
20160218891 Nammi et al. Jul 2016 A1
20160225385 Hammarqvist Aug 2016 A1
20160226681 Henry et al. Aug 2016 A1
20160241277 Rexberg et al. Aug 2016 A1
20160248531 Eliaz et al. Aug 2016 A1
20160259960 Derakhshani et al. Sep 2016 A1
20160261241 Hammi et al. Sep 2016 A1
20160269210 Kim et al. Sep 2016 A1
20160287871 Bardakjian et al. Oct 2016 A1
20160308619 Ling Oct 2016 A1
20160309042 Kechichian et al. Oct 2016 A1
20160316283 Kim et al. Oct 2016 A1
20160317096 Adams Nov 2016 A1
20160329927 Xiong et al. Nov 2016 A1
20160334466 Rivoir Nov 2016 A1
20160336762 Hunter et al. Nov 2016 A1
20160352361 Fonseka et al. Dec 2016 A1
20160352362 Fonseka et al. Dec 2016 A1
20160352419 Fonseka et al. Dec 2016 A1
20160352427 Anandakumar et al. Dec 2016 A1
20160359552 Monsen et al. Dec 2016 A1
20160373212 Ling et al. Dec 2016 A1
20160380661 Xue et al. Dec 2016 A1
20160380700 Shen Dec 2016 A1
20170012585 Sarbishaei et al. Jan 2017 A1
20170012709 Duthel Jan 2017 A1
20170012862 Terry Jan 2017 A1
20170014032 Khair Jan 2017 A1
20170018851 Henry et al. Jan 2017 A1
20170018852 Adriazola et al. Jan 2017 A1
20170019131 Henry et al. Jan 2017 A1
20170026095 Ashrafi et al. Jan 2017 A1
20170032129 Linde et al. Feb 2017 A1
20170032184 Derakhshani et al. Feb 2017 A1
20170033465 Henry et al. Feb 2017 A1
20170033466 Henry et al. Feb 2017 A1
20170033809 Liu Feb 2017 A1
20170033953 Henry et al. Feb 2017 A1
20170033954 Henry et al. Feb 2017 A1
20170041124 Khandani Feb 2017 A1
20170043166 Choi et al. Feb 2017 A1
20170047899 Abdelrahman et al. Feb 2017 A1
20170061045 Okuyama et al. Mar 2017 A1
20170063312 Sulimarski et al. Mar 2017 A1
20170063430 Henry et al. Mar 2017 A1
20170077944 Pagnanelli Mar 2017 A1
20170077945 Pagnanelli Mar 2017 A1
20170078023 Magri et al. Mar 2017 A1
20170078027 Okabe et al. Mar 2017 A1
20170078400 Binder et al. Mar 2017 A1
20170085003 Johnson et al. Mar 2017 A1
20170085336 Henry et al. Mar 2017 A1
20170093497 Ling et al. Mar 2017 A1
20170093693 Barzegar et al. Mar 2017 A1
20170095195 Hunter et al. Apr 2017 A1
20170104503 Pratt et al. Apr 2017 A1
20170104617 Magers Apr 2017 A1
20170108943 Liberty et al. Apr 2017 A1
20170110795 Henry et al. Apr 2017 A1
20170110804 Henry et al. Apr 2017 A1
20170111805 Barzegar et al. Apr 2017 A1
20170117854 Ben Smida et al. Apr 2017 A1
20170134205 Kim et al. May 2017 A1
20170141807 Chen et al. May 2017 A1
20170141938 Chen et al. May 2017 A1
20170163465 Piazza et al. Jun 2017 A1
20170170999 Zhou et al. Jun 2017 A1
20170180061 Ishikawa et al. Jun 2017 A1
20170195053 Duthel Jul 2017 A1
20170201288 Magers Jul 2017 A1
20170207934 Iyer Seshadri et al. Jul 2017 A1
20170214468 Agazzi et al. Jul 2017 A1
20170214470 Nishihara et al. Jul 2017 A1
20170222717 Rope Aug 2017 A1
20170229782 Adriazola et al. Aug 2017 A1
20170230083 Adriazola et al. Aug 2017 A1
20170244582 Gal et al. Aug 2017 A1
20170245054 Sheen et al. Aug 2017 A1
20170245079 Sheen et al. Aug 2017 A1
20170245157 Henry et al. Aug 2017 A1
20170255593 Agee Sep 2017 A9
20170269481 Borodovsky Sep 2017 A1
20170271117 Borodovsky Sep 2017 A1
20170272283 Kingery et al. Sep 2017 A1
20170288917 Henry et al. Oct 2017 A1
20170295048 Terry Oct 2017 A1
20170304625 Buschman Oct 2017 A1
20170311307 Negus et al. Oct 2017 A1
20170317781 Henry et al. Nov 2017 A1
20170317782 Henry et al. Nov 2017 A1
20170317783 Henry et al. Nov 2017 A1
20170317858 Henry et al. Nov 2017 A1
20170318482 Negus et al. Nov 2017 A1
20170322243 Lafontaine et al. Nov 2017 A1
20170324421 Tangudu et al. Nov 2017 A1
20170331899 Binder et al. Nov 2017 A1
20170338841 Pratt Nov 2017 A1
20170338842 Pratt Nov 2017 A1
20170339569 Khandani Nov 2017 A1
20170346510 Chen et al. Nov 2017 A1
20170366209 Weissman et al. Dec 2017 A1
20170366259 Rope Dec 2017 A1
20170373647 Chang et al. Dec 2017 A1
20170373759 Rope et al. Dec 2017 A1
20180013452 Henry et al. Jan 2018 A9
20180013456 Miyazaki et al. Jan 2018 A1
20180013495 Ling Jan 2018 A1
20180026586 Carbone et al. Jan 2018 A1
20180026673 Kim et al. Jan 2018 A1
20180034912 Binder et al. Feb 2018 A1
20180041219 Rachid et al. Feb 2018 A1
20180048497 Henry et al. Feb 2018 A1
20180054232 Henry et al. Feb 2018 A1
20180054233 Henry et al. Feb 2018 A1
20180054234 Stuckman et al. Feb 2018 A1
20180054268 Abdoli et al. Feb 2018 A1
20180062674 Boghrat et al. Mar 2018 A1
20180062886 Henry et al. Mar 2018 A1
20180069594 Henry et al. Mar 2018 A1
20180069731 Henry et al. Mar 2018 A1
20180070394 Khandani Mar 2018 A1
20180076947 Kazakevich et al. Mar 2018 A1
20180076979 Lomayev et al. Mar 2018 A1
20180076982 Henry et al. Mar 2018 A1
20180076988 Willis et al. Mar 2018 A1
20180091195 Carvalho et al. Mar 2018 A1
20180102850 Agazzi et al. Apr 2018 A1
20180115040 Bennett et al. Apr 2018 A1
20180115058 Henry et al. Apr 2018 A1
20180123256 Henry et al. May 2018 A1
20180123257 Henry et al. May 2018 A1
20180123749 Azizi et al. May 2018 A1
20180123836 Henry et al. May 2018 A1
20180123856 Qu et al. May 2018 A1
20180123897 Lee et al. May 2018 A1
20180124181 Binder et al. May 2018 A1
20180131406 Adriazola et al. May 2018 A1
20180131502 Askar et al. May 2018 A1
20180131541 Henry et al. May 2018 A1
20180145411 Henry et al. May 2018 A1
20180145412 Henry et al. May 2018 A1
20180145414 Henry et al. May 2018 A1
20180145415 Henry et al. May 2018 A1
20180151957 Bennett et al. May 2018 A1
20180152262 Henry et al. May 2018 A1
20180152330 Chritz et al. May 2018 A1
20180152925 Nammi et al. May 2018 A1
20180159195 Henry et al. Jun 2018 A1
20180159196 Henry et al. Jun 2018 A1
20180159197 Henry et al. Jun 2018 A1
20180159228 Britz et al. Jun 2018 A1
20180159229 Britz Jun 2018 A1
20180159230 Henry et al. Jun 2018 A1
20180159232 Henry et al. Jun 2018 A1
20180159240 Henry et al. Jun 2018 A1
20180159243 Britz et al. Jun 2018 A1
20180159615 Kim Jun 2018 A1
20180166761 Henry et al. Jun 2018 A1
20180166784 Johnson et al. Jun 2018 A1
20180166785 Henry et al. Jun 2018 A1
20180166787 Johnson et al. Jun 2018 A1
20180167042 Nagasaku Jun 2018 A1
20180167092 Hausmair et al. Jun 2018 A1
20180167093 Miyazaki et al. Jun 2018 A1
20180167105 Vannucci et al. Jun 2018 A1
20180167148 Vannucci et al. Jun 2018 A1
20180175892 Henry et al. Jun 2018 A1
20180175978 Beidas et al. Jun 2018 A1
20180180420 Korenberg Jun 2018 A1
20180191448 Kaneda et al. Jul 2018 A1
20180198668 Kim et al. Jul 2018 A1
20180205399 Baringer et al. Jul 2018 A1
20180205481 Shlomo et al. Jul 2018 A1
20180219566 Weissman et al. Aug 2018 A1
20180227158 Luo et al. Aug 2018 A1
20180248592 Ashrafi Aug 2018 A1
20180254754 Carvalho et al. Sep 2018 A1
20180254769 Alic et al. Sep 2018 A1
20180254924 Berardinelli et al. Sep 2018 A1
20180262217 Hassan Sep 2018 A1
20180262243 Ashrafi et al. Sep 2018 A1
20180262370 Al-Mufti et al. Sep 2018 A1
20180269988 Yu et al. Sep 2018 A1
20180278693 Binder et al. Sep 2018 A1
20180278694 Binder et al. Sep 2018 A1
20180279197 Kim et al. Sep 2018 A1
20180294879 Anandakumar et al. Oct 2018 A1
20180294884 Rope et al. Oct 2018 A1
20180294897 Vannucci et al. Oct 2018 A1
20180301812 Henry et al. Oct 2018 A1
20180302111 Chen et al. Oct 2018 A1
20180302145 Kim Oct 2018 A1
20180309206 Henry et al. Oct 2018 A1
20180309465 Pratt Oct 2018 A1
20180316320 Soubercaze-Pun et al. Nov 2018 A1
20180323826 Vannucci et al. Nov 2018 A1
20180324005 Kim et al. Nov 2018 A1
20180324006 Henry et al. Nov 2018 A1
20180324021 Chritz et al. Nov 2018 A1
20180324601 Barzegar et al. Nov 2018 A1
20180331413 Adriazola et al. Nov 2018 A1
20180331720 Adriazola et al. Nov 2018 A1
20180331721 Adriazola et al. Nov 2018 A1
20180331814 Khandani Nov 2018 A1
20180331871 Martinez Nov 2018 A1
20180333580 Choi et al. Nov 2018 A1
20180343304 Binder et al. Nov 2018 A1
20180351687 Henry et al. Dec 2018 A1
20180358678 Henry et al. Dec 2018 A1
20180359126 Wang Dec 2018 A1
20180367219 Dar et al. Dec 2018 A1
20180375940 Binder et al. Dec 2018 A1
20190007075 Kim et al. Jan 2019 A1
20190013577 Henry et al. Jan 2019 A1
20190013837 Henry et al. Jan 2019 A1
20190013838 Henry et al. Jan 2019 A1
20190013867 Ling et al. Jan 2019 A1
20190013874 Ling Jan 2019 A1
20190013991 Duyck et al. Jan 2019 A1
20190020415 Agazzi et al. Jan 2019 A1
20190020530 Au et al. Jan 2019 A1
20190028131 Wang Jan 2019 A1
20190030334 Lerman et al. Jan 2019 A1
20190036222 Henry et al. Jan 2019 A1
20190036622 Lagoy et al. Jan 2019 A1
20190042536 Agee Feb 2019 A1
20190052505 Baldemair et al. Feb 2019 A1
20190074563 Henry et al. Mar 2019 A1
20190074564 Henry et al. Mar 2019 A1
20190074565 Henry et al. Mar 2019 A1
20190074568 Henry et al. Mar 2019 A1
20190074580 Henry et al. Mar 2019 A1
20190074584 Henry et al. Mar 2019 A1
20190074597 Vannucci et al. Mar 2019 A1
20190074598 Henry et al. Mar 2019 A1
20190074864 Henry et al. Mar 2019 A1
20190074865 Henry et al. Mar 2019 A1
20190074878 Henry et al. Mar 2019 A1
20190109568 Ben Smida et al. Apr 2019 A1
20190115877 Ben Smida et al. Apr 2019 A1
20190150774 Brinkmann et al. May 2019 A1
20190199365 Rachid et al. Jun 2019 A1
20190204369 Lafontaine et al. Jul 2019 A1
20190207589 Alic et al. Jul 2019 A1
20190215073 Schmogrow et al. Jul 2019 A1
20190222179 Doi Jul 2019 A1
20190222326 Dunworth et al. Jul 2019 A1
20190260401 Megretski et al. Aug 2019 A1
20190260402 Chuang et al. Aug 2019 A1
20190268026 Liu Aug 2019 A1
20190280730 Zhang et al. Sep 2019 A1
20190280778 Zhu et al. Sep 2019 A1
20190312552 Chan et al. Oct 2019 A1
20190312571 Hovakimyan et al. Oct 2019 A1
20190312648 Cavaliere et al. Oct 2019 A1
20190326942 Spring et al. Oct 2019 A1
20190348956 Megretski et al. Nov 2019 A1
20190386621 Alon et al. Dec 2019 A1
20190386750 Wang et al. Dec 2019 A1
20200000355 Khair Jan 2020 A1
20200008686 Khair Jan 2020 A1
20200012754 Larimore Jan 2020 A1
20200028476 Kim et al. Jan 2020 A1
20200064406 Ritzberger et al. Feb 2020 A1
20200067543 Kim et al. Feb 2020 A1
20200067600 Agazzi et al. Feb 2020 A1
20200076379 Tanio et al. Mar 2020 A1
20200091608 Alpman et al. Mar 2020 A1
20200110474 Liberty et al. Apr 2020 A1
20200119756 Boghrat et al. Apr 2020 A1
20200145033 Rafique May 2020 A1
20200145112 Wang et al. May 2020 A1
20200162035 Gorbachov May 2020 A1
20200169227 Megretski et al. May 2020 A1
20200169334 Li et al. May 2020 A1
20200169937 Kim et al. May 2020 A1
20200174514 Amiralizadeh Asl et al. Jun 2020 A1
20200186103 Weber et al. Jun 2020 A1
20200195296 Sarkas et al. Jun 2020 A1
20200225308 Dosenbach et al. Jul 2020 A1
20200244232 Cope et al. Jul 2020 A1
20200252032 Faig et al. Aug 2020 A1
20200259465 Wu et al. Aug 2020 A1
20200266768 Chen et al. Aug 2020 A1
20200274560 Pratt Aug 2020 A1
20200279546 Chase Sep 2020 A1
20200295975 Li et al. Sep 2020 A1
20200313763 Wang et al. Oct 2020 A1
20200321920 Chiron Oct 2020 A1
20200333295 Schiffres et al. Oct 2020 A1
20200351072 Khandani Nov 2020 A1
20200366253 Megretski et al. Nov 2020 A1
20200382147 Menkhoff et al. Dec 2020 A1
20200389207 Taniguchi et al. Dec 2020 A1
20200395662 Tervo et al. Dec 2020 A1
20210006207 Carvalho et al. Jan 2021 A1
20210013843 Cohen et al. Jan 2021 A1
20210021238 Chan et al. Jan 2021 A1
20210044461 Groen et al. Feb 2021 A1
20210075464 Luo Mar 2021 A1
20210075649 Mahmood et al. Mar 2021 A1
20210097128 Murase Apr 2021 A1
Non-Patent Literature Citations (341)
Entry
U.S. Appl. No. 10/003,364, filed Jun. 19, 2018, Willis III et al.
U.S. Appl. No. 10/008,218, filed Jun. 26, 2018, Wu et al.
U.S. Appl. No. 10/009,050, filed Jun. 26, 2018, Chen et al.
U.S. Appl. No. 10/009,109, filed Jun. 26, 2018, Rope et al.
U.S. Appl. No. 10/009,259, filed Jun. 26, 2018, Calmon et al.
U.S. Appl. No. 10/013,515, filed Jul. 3, 2018, Williams.
U.S. Appl. No. 10/015,593, filed Jul. 3, 2018, Iyer et al.
U.S. Appl. No. 10/027,397, filed Jul. 17, 2018, Kim.
U.S. Appl. No. 10/027,427, filed Jul. 17, 2018, Vannucci et al.
U.S. Appl. No. 10/027,523, filed Jul. 17, 2018, Chritz et al.
U.S. Appl. No. 10/033,107, filed Jul. 24, 2018, Henry et al.
U.S. Appl. No. 10/033,108, filed Jul. 24, 2018, Henry et al.
U.S. Appl. No. 10/033,413, filed Jul. 24, 2018, Pratt.
U.S. Appl. No. 10/033,568, filed Jul. 24, 2018, Piazza et al.
U.S. Appl. No. 10/050,636, filed Aug. 14, 2018, Tangudu et al.
U.S. Appl. No. 10/050,710, filed Aug. 14, 2018, Anandakumar et al.
U.S. Appl. No. 10/050,714, filed Aug. 14, 2018, Nishihara et al.
U.S. Appl. No. 10/050,815, filed Aug. 14, 2018, Henry et al.
U.S. Appl. No. 10/051,483, filed Aug. 14, 2018, Barzegar et al.
U.S. Appl. No. 10/051,488, filed Aug. 14, 2018, Vannucci et al.
U.S. Appl. No. 10/062,970, filed Aug. 28, 2018, Vannucci et al.
U.S. Appl. No. 10/063,265, filed Aug. 28, 2018, Pratt et al.
U.S. Appl. No. 10/063,354, filed Aug. 28, 2018, Hadani et al.
U.S. Appl. No. 10/063,364, filed Aug. 28, 2018, Khandani.
U.S. Appl. No. 10/069,467, filed Sep. 4, 2018, Carvalho et al.
U.S. Appl. No. 10/069,535, filed Sep. 4, 2018, Vannucci et al.
U.S. Appl. No. 10/075,201, filed Sep. 11, 2018, Gazneli et al.
U.S. Appl. No. 10/079,652, filed Sep. 18, 2018, Henry et al.
U.S. Appl. No. 10/084,562, filed Sep. 25, 2018, Abdoli et al.
U.S. Appl. No. 10/090,594, filed Oct. 2, 2018, Henry et al.
U.S. Appl. No. 10/095,927, filed Oct. 9, 2018, Derakhshani et al.
U.S. Appl. No. 10/096,883, filed Oct. 9, 2018, Henry et al.
U.S. Appl. No. 10/097,273, filed Oct. 9, 2018, Agazzi et al.
U.S. Appl. No. 10/097,939, filed Oct. 9, 2018, Sheen et al.
U.S. Appl. No. 10/101,370, filed Oct. 16, 2018, Lafontaine et al.
U.S. Appl. No. 10/103,777, filed Oct. 16, 2018, Henry et al.
U.S. Appl. No. 10/108,858, filed Oct. 23, 2018, Derakhshani et al.
U.S. Appl. No. 10/110,315, filed Oct. 23, 2018, Ling.
U.S. Appl. No. 10/116,390, filed Oct. 30, 2018, Ling et al.
U.S. Appl. No. 10/123,217, filed Nov. 6, 2018, Barzegar et al.
U.S. Appl. No. 10/128,955, filed Nov. 13, 2018, Rope et al.
U.S. Appl. No. 10/129,057, filed Nov. 13, 2018, Willis III et al.
U.S. Appl. No. 10/135,145, filed Nov. 20, 2018, Henry et al.
U.S. Appl. No. 10/141,944, filed Nov. 27, 2018, Rachid et al.
U.S. Appl. No. 10/142,754, filed Nov. 27, 2018, Sheen et al.
U.S. Appl. No. 10/147,431, filed Dec. 4, 2018, Dick et al.
U.S. Appl. No. 10/148,016, filed Dec. 4, 2018, Johnson et al.
U.S. Appl. No. 10/148,360, filed Dec. 4, 2018, Ashrafi.
U.S. Appl. No. 10/148,417, filed Dec. 4, 2018, Ling et al.
U.S. Appl. No. 10/153,793, filed Dec. 11, 2018, Hausmair et al.
U.S. Appl. No. 10/168,501, filed Jan. 1, 2019, Ashrafi.
U.S. Appl. No. 10/170,840, filed Jan. 1, 2019, Henry et al.
U.S. Appl. No. 10/171,158, filed Jan. 1, 2019, Barzegar et al.
U.S. Appl. No. 10/181,825, filed Jan. 15, 2019, Ben Smida et al.
U.S. Appl. No. 10/191,376, filed Jan. 29, 2019, Borodovsky.
U.S. Appl. No. 10/198,582, filed Feb. 5, 2019, Linde et al.
U.S. Appl. No. 10/200,106, filed Feb. 5, 2019, Barzegar et al.
U.S. Appl. No. 10/205,212, filed Feb. 12, 2019, Henry et al.
U.S. Appl. No. 10/205,231, filed Feb. 12, 2019, Henry et al.
U.S. Appl. No. 10/205,482, filed Feb. 12, 2019, Barzegar et al.
U.S. Appl. No. 10/205,655, filed Feb. 12, 2019, Barzegar et al.
U.S. Appl. No. 10/211,855, filed Feb. 19, 2019, Baringer et al.
U.S. Appl. No. 10/212,014, filed Feb. 19, 2019, Qu et al.
U.S. Appl. No. 10/218,405, filed Feb. 26, 2019, Magers.
U.S. Appl. No. 10/224,634, filed Mar. 5, 2019, Henry et al.
U.S. Appl. No. 10/224,970, filed Mar. 5, 2019, Pratt.
U.S. Appl. No. 10/230,353, filed Mar. 12, 2019, Alic et al.
U.S. Appl. No. 10/230,550, filed Mar. 12, 2019, Al-Mufti et al.
U.S. Appl. No. 10/237,114, filed Mar. 19, 2019, Duyck et al.
U.S. Appl. No. 10/243,525, filed Mar. 26, 2019, Nagasaku.
U.S. Appl. No. 10/243,596, filed Mar. 26, 2019, Kerhuel et al.
U.S. Appl. No. 10/268,169, filed Apr. 23, 2019, van Zelm et al.
U.S. Appl. No. 10/270,478, filed Apr. 23, 2019, Liu.
U.S. Appl. No. 10/291,384, filed May 14, 2019, Askar et al.
U.S. Appl. No. 10/305,432, filed May 28, 2019, Trayling et al.
U.S. Appl. No. 10/305,435, filed May 28, 2019, Murugesu et al.
U.S. Appl. No. 10/311,243, filed Jun. 4, 2019, Calmon et al.
U.S. Appl. No. 10/320,340, filed Jun. 11, 2019, Pratt et al.
U.S. Appl. No. 10/333,474, filed Jun. 25, 2019, Alon et al.
U.S. Appl. No. 10/333,561, filed Jun. 25, 2019, Liu.
U.S. Appl. No. 10/333,764, filed Jun. 25, 2019, Arditti Ilitzky.
U.S. Appl. No. 10/334,637, filed Jun. 25, 2019, Khandani.
U.S. Appl. No. 10/348,341, filed Jul. 9, 2019, Weissman et al.
U.S. Appl. No. 10/348,345, filed Jul. 9, 2019, Guyton et al.
U.S. Appl. No. 10/374,781, filed Aug. 6, 2019, Khandani.
U.S. Appl. No. 10/384,064, filed Aug. 20, 2019, Choi et al.
U.S. Appl. No. 10/389,449, filed Aug. 20, 2019, Ling et al.
U.S. Appl. No. 10/396,723, filed Aug. 27, 2019, Haas et al.
U.S. Appl. No. 10/404,296, filed Sep. 3, 2019, Kim et al.
U.S. Appl. No. 10/404,376, filed Sep. 3, 2019, Schmogrow et al.
U.S. Appl. No. 10/417,353, filed Sep. 17, 2019, Larimore.
U.S. Appl. No. 10/419,046, filed Sep. 17, 2019, Chen et al.
U.S. Appl. No. 10/419,126, filed Sep. 17, 2019, Ling.
U.S. Appl. No. 10/447,211, filed Oct. 15, 2019, Rollins et al.
U.S. Appl. No. 10/447,244, filed Oct. 15, 2019, Zhou et al.
U.S. Appl. No. 10/452,621, filed Oct. 22, 2019, Medard et al.
U.S. Appl. No. 10/463,276, filed Nov. 5, 2019, Hunter et al.
U.S. Appl. No. 10/474,775, filed Nov. 12, 2019, Okuyama et al.
U.S. Appl. No. 10/491,170, filed Nov. 26, 2019, Ben Smida et al.
U.S. Appl. No. 10/491,171, filed Nov. 26, 2019, Ben Smida et al.
U.S. Appl. No. 10/498,372, filed Dec. 3, 2019, Pratt.
U.S. Appl. No. 10/505,638, filed Dec. 10, 2019, Agazzi et al.
U.S. Appl. No. 10/511,337, filed Dec. 17, 2019, Boghrat et al.
U.S. Appl. No. 10/514,776, filed Dec. 24, 2019, Liberty et al.
U.S. Appl. No. 10/516,434, filed Dec. 24, 2019, Chang et al.
U.S. Appl. No. 10/523,159, filed Dec. 31, 2019, Megretski et al.
U.S. Appl. No. 10/530,574, filed Jan. 7, 2020, Shi et al.
U.S. Appl. No. 10/534,325, filed Jan. 14, 2020, Dash et al.
U.S. Appl. No. 10/536,221, filed Jan. 14, 2020, Wang et al.
U.S. Appl. No. 10/540,984, filed Jan. 21, 2020, Malik et al.
U.S. Appl. No. 10/545,482, filed Jan. 28, 2020, Dash et al.
U.S. Appl. No. 10/554,183, filed Feb. 4, 2020, Doi.
U.S. Appl. No. 10/560,140, filed Feb. 11, 2020, Wu et al.
U.S. Appl. No. 10/581,470, filed Mar. 3, 2020, Megretski et al.
U.S. Appl. No. 10/594,406, filed Mar. 17, 2020, Zhu et al.
U.S. Appl. No. 10/608,751, filed Mar. 31, 2020, Yu et al.
U.S. Appl. No. 10/622,951, filed Apr. 14, 2020, Chen et al.
U.S. Appl. No. 10/623,049, filed Apr. 14, 2020, Zhang et al.
U.S. Appl. No. 10/623,118, filed Apr. 14, 2020, Lagoy et al.
U.S. Appl. No. 10/630,323, filed Apr. 21, 2020, Spring et al.
U.S. Appl. No. 10/630,391, filed Apr. 21, 2020, LaGasse et al.
U.S. Appl. No. 10/644,657, filed May 5, 2020, Megretski et al.
U.S. Appl. No. 10/666,307, filed May 26, 2020, Wang.
U.S. Appl. No. 10/735,032, filed Aug. 4, 2020, Liu.
U.S. Appl. No. 10/741,188, filed Aug. 11, 2020, Dick et al.
U.S. Appl. No. 10/742,388, filed Aug. 11, 2020, Khandani.
U.S. Appl. No. 10/749,480, filed Aug. 18, 2020, Tanio et al.
U.S. Appl. No. 10/756,682, filed Aug. 25, 2020, Faig et al.
U.S. Appl. No. 10/756,774, filed Aug. 25, 2020, Sarkas et al.
U.S. Appl. No. 10/756,822, filed Aug. 25, 2020, Marsella et al.
U.S. Appl. No. 10/763,904, filed Sep. 1, 2020, Megretski et al.
U.S. Appl. No. 10/770,080, filed Sep. 8, 2020, Dick et al.
U.S. Appl. No. 10/775,437, filed Sep. 15, 2020, Rivoir.
U.S. Appl. No. 10/812,166, filed Oct. 20, 2020, Kim et al.
U.S. Appl. No. 10/826,444, filed Nov. 3, 2020, Soubercaze-Pun et al.
U.S. Appl. No. 10/826,615, filed Nov. 3, 2020, Cavaliere et al.
U.S. Appl. No. 10/833,634, filed Nov. 10, 2020, Chan et al.
U.S. Appl. No. 10/841,013, filed Nov. 17, 2020, Agazzi et al.
U.S. Appl. No. 10/862,517, filed Dec. 8, 2020, Kim et al.
U.S. Appl. No. 10/873,301, filed Dec. 22, 2020, Chiron.
U.S. Appl. No. 10/887,022, filed Jan. 5, 2021, Dar et al.
U.S. Appl. No. 10/892,786, filed Jan. 12, 2021, Pratt et al.
U.S. Appl. No. 10/897,276, filed Jan. 19, 2021, Megretski et al.
U.S. Appl. No. 10/902,086, filed Jan. 26, 2021, Agee.
U.S. Appl. No. 10/905,342, filed Feb. 2, 2021, Sterling et al.
U.S. Appl. No. 10/911,029, filed Feb. 2, 2021, Velazquez et al.
U.S. Appl. No. 10/931,238, filed Feb. 23, 2021, Megretski et al.
U.S. Appl. No. 10/931,318, filed Feb. 23, 2021, Mahmood et al.
U.S. Appl. No. 10/931,320, filed Feb. 23, 2021, Megretski et al.
U.S. Appl. No. 10/931,366, filed Feb. 23, 2021, Wang et al.
U.S. Appl. No. 10/965,380, filed Mar. 30, 2021, Kim et al.
U.S. Appl. No. 10/972,139, filed Apr. 6, 2021, Luo.
U.S. Appl. No. 10/979,090, filed Apr. 13, 2021, Rafique.
U.S. Appl. No. 10/979,097, filed Apr. 13, 2021, Luo.
U.S. Appl. No. 10/985,951, filed Apr. 20, 2021, Li et al.
Abdelaziz, M., L. Anttila, and M. Valkama, “Reduced-complexity digital predistortion for massive mimo,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 6478-6482.
Abrudan, Traian. Volterra Series and Nonlinear Adaptive Filters. Technical report, Helsinki University of Technology, 2003.
Aghvami, A. H. and Robertson, I. D. [Apr. 1993] Power limitation and high-power amplifier nonlinearities in on-board satellite communications systems. Electron. and Comm. Engin. J.
Akaiwa, Y. Introduction to Digital Mobile Communication. New York: Wiley, 1997.
Alexander, N. A., A. A. Chanerley, A. J. Crewe, and S. Bhattacharya. “Obtaining spectrum matching time series using a Reweighted Volterra Series Algorithm (RVSA).” Bulletin of the Seismological Society of America 104, No. 4 (2014): 1663-1673.
Altera Corporation. Digital Predistortion Reference Design. Application Note 314, 2003.
Ahluwalia, Sehej Swaran. “An FPGA Implementation Of Adaptive Linearization Of Power Amplifiers.” PhD diss., Georgia Institute of Technology, 2019.
Akaiwa, Yoshihiko. Introduction to digital mobile communication. John Wiley & Sons, 2015.
Annabestani, Mohsen, and Nadia Naghavi. “Practical realization of discrete-time Volterra series for high-order nonlinearities.” Nonlinear Dynamics 98, No. 3 (2019): 2309-2325.
Arthanayake, T. and Wood, H. B. [Apr. 8, 1971] Linear amplification using envelope feedback. Elec. Lett.
Aysal, Tuncer C., and Kenneth E. Barner, “Myriad-Type Polynomial Filtering”, IEEE Transactions on Signal Processing, vol. 55, No. 2, Feb. 2007.
Barner, Kenneth E., and Tuncer Can Aysal, “Polynomial Weighted Median Filtering”, IEEE Transactions on Signal Processing, vol. 54, No. 2, Feb. 2006.
Barner, Kenneth E., Gonzalo R. Arce, Giovanni L. Sicuranza, and Ilya Shmulevich, “Nonlinear Signal and Image Processing—Part I”, EURASIP J. Applied Signal Processing (2002).
Barner, Kenneth E., Gonzalo R. Arce, Giovanni L. Sicuranza, and Ilya Shmulevich, “Nonlinear Signal and Image Processing—Part II”, EURASIP J. Applied Signal Processing (2002).
Bedrosian, Edward, and Stephen O. Rice. “The output properties of Volterra systems (nonlinear systems with memory) driven by harmonic and Gaussian inputs.” Proceedings of the IEEE 59, No. 12 (1971): 1688-1707.
Bennet, T. J. and Clements, R. F. [May 1974] Feedforward—An alternative approach to amplifier linearisation. Radio and Electron. Engin.
Bhargava, V. K. et al. [1981] Digital Communications by Satellite, John Wiley and Sons.
Biglieri, Ezio, Sergio Barberis, and Maurizio Catena, “Analysis and Compensation of Nonlinearities in Digital Transmission Systems”, IEEE Journal on selected areas in Communications, vol. 6, No. 1, Jan. 1988.
Li, Bin, Chenglin Zhao, Mengwei Sun, Haijun Zhang, Zheng Zhou, and Arumugam Nallanathan. “A Bayesian approach for nonlinear equalization and signal detection in millimeter-wave communications.” IEEE Transactions on Wireless Communications 14, No. 7 (2015): 3794-3809.
Birpoutsoukis, Georgios, Anna Marconato, John Lataire, and Johan Schoukens. “Regularized nonparametric Volterra kernel estimation.” Automatica 82 (2017): 324-327.
Black, H. S. [Dec. 1937] Wave translating system. U.S. Pat. No. 2,102,671.
Black, H. S. [Oct. 1928] Translating system. U.S. Pat. No. 1,686,792.
Bohm, D. The Special Theory of Relativity, Benjamin, 1965.
Bond F. E. and Meyer, H. F. [Apr. 1970] Intermodulation effects in limiter amplifier repeaters. IEEE Trans. Comm., vol. COM-18, p. 127-135.
Bouzrara, Kais, Abdelkader Mbarek, and Tarek Garna. “Non-linear predictive controller for uncertain process modelled by GOBF-Volterra models.” International Journal of Modelling, Identification and Control 19, No. 4 (2013): 307-322.
Boyd, Stephen, Leon O. Chua, and Charles A. Desoer. “Analytical foundations of Volterra series.” IMA Journal of Mathematical Control and Information 1, No. 3 (1984): 243-282.
Budura, Georgeta, and Corina Botoca, “Efficient Implementation of the Third Order RLS Adaptive Volterra Filter”, FACTA Universitatis (NIS) Ser.: Elec. Energ. vol. 19, No. 1, Apr. 2006.
Budura, Georgeta, and Corina Botoca. “Nonlinearities identification using the LMS Volterra filter.” Communications Department Faculty of Electronics and Telecommunications Timisoara, Bd. V. Parvan 2 (2005).
Campello, Ricardo JGB, Gérard Favier, and Wagner C. Do Amaral. “Optimal expansions of discrete-time Volterra models using Laguerre functions.” Automatica 40, No. 5 (2004): 815-822.
Campello, Ricardo JGB, Wagner C. Do Amaral, and Gérard Favier. “A note on the optimal expansion of Volterra models using Laguerre functions.” Automatica 42, No. 4 (2006): 689-693.
Carassale, Luigi, and Ahsan Kareem. “Modeling nonlinear systems by Volterra series.” Journal of engineering mechanics 136, No. 6 (2010): 801-818.
Censor, D., & Melamed, T., 2002, Volterra differential constitutive operators and locality considerations in electromagnetic theory, PIER—Progress in Electromagnetic Research, 36: 121-137.
Censor, D., 2000, A quest for systematic constitutive formulations for general field and wave systems based on the Volterra differential operators, PIER—Progress In Electromagnetics Research, (25): 261-284.
Censor, D., 2001, Constitutive relations in inhomogeneous systems and the particle-field conundrum, PIER—Progress In Electromagnetics Research, (30): 305-335.
Censor, D., “Volterra Series and Operators” (2003).
Cheaito, A., M. Crussière, J.-F. Hélard, and Y. Louët, “Quantifying the memory effects of power amplifiers: EVM closed-form derivations of multicarrier signals,” IEEE Wireless Commun. Letters, vol. 6, No. 1, pp. 34-37, 2017.
Chen, S., B. Mulgrew, and P. M. Grant, “A clustering technique for digital communications channel equalization using radial basis function networks,” IEEE Transactions on neural networks, vol. 4, No. 4, pp. 570-590, 1993.
Chen, Hai-Wen. “Modeling and identification of parallel and feedback nonlinear systems.” In Proceedings of 1994 33rd IEEE Conference on Decision and Control, vol. 3, pp. 2267-2272. IEEE, 1994.
Da Rosa, Alex, Ricardo JGB Campello, and Wagner C. Amaral. “Choice of free parameters in expansions of discrete-time Volterra models using Kautz functions.” Automatica 43, No. 6 (2007): 1084-1091.
Deng, Yongjun, and Zhixing Yang. “Comments on ‘Complex-bilinear recurrent neural network for equalization of a digital satellite channel’.” IEEE Transactions on Neural Networks 17, No. 1 (2006): 268.
Dimitrov, S., “Non-linear distortion cancellation and symbol-based equalization in satellite forward links,” IEEE Trans Wireless Commun, vol. 16, No. 7, pp. 4489-4502, 2017.
Ding, L., G. T. Zhou, D. R. Morgan, Z. Ma, J. S. Kenney, J. Kim, and C. R. Giardina, “A robust digital baseband predistorter constructed using memory polynomials,” IEEE Transactions on communications, vol. 52, No. 1, pp. 159-165, 2004.
Doyle III, Francis J., Babatunde A. Ogunnaike, and Ronald K. Pearson. “Nonlinear model-based control using second-order Volterra models.” Automatica 31, No. 5 (1995): 697-714.
ETSI [Aug. 1994] Standard ETR 132. Radio broadcasting systems; Code of practice for site engineering VHF FM sound broadcasting transmitters. European Telecommunications Standards Institute, Sophia Antipolis, F-06291, Valbonne Cedex, France.
ETSI [Jan. 1995] European Standard ETS 300 384. Radio broadcasting systems; Very high frequency (VHF), frequency modulated, sound broadcasting transmitters. European Telecommunications Standards Institute, Sophia Antipolis, F-06291, Valbonne Cedex, France.
ETSI [Jun. 1998] Standard ETR 053 Ed 3—Radio site engineering for equipment and systems in the mobile service. European Telecommunications Standards Institute, Sophia Antipolis, F-06291, Valbonne Cedex, France.
ETSI [Mar. 1997] European Standard ETS 300 113. Radio equipment and systems (RES); Land mobile service; Technical characteristics and test conditions for radio equipment intended for the transmission of data (and speech) and having an antenna connector. European Telecommunications Standards Institute, Sophia Antipolis, F-06291, Valbonne Cedex, France.
Eun, Changsoo, and Edward J. Powers. “A new Volterra predistorter based on the indirect learning architecture.” IEEE transactions on signal processing 45, No. 1 (1997): 223-227.
Evans, Ceri, David Rees, Lee Jones, and Michael Weiss. “Periodic signals for measuring nonlinear Volterra kernels.” IEEE transactions on instrumentation and measurement 45, No. 2 (1996): 362-371.
Fang, Yang-Wang, Li-Cheng Jiao, Xian-Da Zhang and Jin Pan, “On the Convergence of Volterra Filter Equalizers Using a Pth-Order Inverse Approach”, IEEE Transactions on Signal Processing, vol. 49, No. 8, Aug. 2001.
Fang, Xi, Yixin Fu, Xin Sui, Lei Zhang, Xianwei Gao, Ding Ding, and Lifang Liu. “Volterra-based fiber nonlinearity impairment modeling for OFDM/OQAM systems.” In 2019 18th International Conference on Optical Communications and Networks (ICOCN), pp. 1-3. IEEE, 2019.
Fang, Yang-Wang, Li-Cheng Jiao, Xian-Da Zhang, and Jin Pan. “On the convergence of Volterra filter equalizers using a pth-order inverse approach.” IEEE transactions on signal processing 49, No. 8 (2001): 1734-1744.
Frank, Walter A. “An efficient approximation to the quadratic Volterra filter and its application in real-time loudspeaker linearization.” Signal Processing 45, No. 1 (1995): 97-113.
Frank, Walter A. “Sampling requirements for Volterra system identification.” IEEE Signal Processing Letters 3, No. 9 (1996): 266-268.
Franz, Matthias O., and Bernhard Schölkopf. “A unifying view of Wiener and Volterra theory and polynomial kernel regression.” Neural computation 18, No. 12 (2006): 3097-3118.
Franz, Matthias O., Volterra and Wiener Series, Scholarpedia, 6(10):11307, doi: 10.4249/scholarpedia.11307.
Gardner, William A., and Teri L. Archer. “Exploitation of cyclostationarity for identifying the Volterra kernels of nonlinear systems.” IEEE transactions on Information Theory 39, No. 2 (1993): 535-542.
Gardner, William A., and Teri L. Archer. “Simplified methods for identifying the Volterra kernels of nonlinear systems.” In [1991] Proceedings of the 34th Midwest Symposium on Circuits and Systems, pp. 98-101. IEEE, 1992.
Genecili, Hasmet, and Michael Nikolaou. “Design of robust constrained model-predictive controllers with volterra series.” AIChE Journal 41, No. 9 (1995): 2098-2107.
Ghannouchi, Fadhel M., Oualid Hammi, and Mohamed Helaoui. Behavioral modeling and predistortion of wideband wireless transmitters. John Wiley & Sons, 2015.
Gilbert, Elmer. “Functional expansions for the response of nonlinear differential systems.” IEEE transactions on Automatic Control 22, No. 6 (1977): 909-921.
Gray, L. F. [1980] Application of broadband linearisers to satellite transponders. IEEE Conf. Proc. ICC'80.
Guan, L., and A. Zhu, “Simplified dynamic deviation reduction-based volterra model for doherty power amplifiers,” in Integrated Nonlinear Microwave and Millimetre-Wave Circuits (INMMIC), 2011 Workshop on. IEEE, 2011, pp. 1-4.
Guan, Lei, and Anding Zhu. “Low-cost FPGA implementation of Volterra series-based digital predistorter for RF power amplifiers.” IEEE Transactions on Microwave Theory and Techniques 58, No. 4 (2010): 866-872.
Guérin, Alexandre, Gérard Faucon, and Régine Le Bouquin-Jeannès, “Nonlinear Acoustic Echo Cancellation Based on Volterra Filters”, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 6, Nov. 2003.
Guiomar, Fernando P., Jacklyn D. Reis, António L. Teixeira, and Armando N. Pinto. “Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.” Optics express 20, No. 2 (2012): 1360-1369.
Hanson, Joshua. “Learning Volterra Series via RKHS Methods.” (2019).
Haykin, Simon, “Adaptive Filter Theory”, Fourth Edition, Pearson Education, 2008.
Heathman, A. C. [1989] Methods for intermodulation prediction in communication systems. Ph. D. Thesis, University of Bradford, United Kingdom.
Heiskanen, Antti, Janne Aikio, and Timo Rahkonen. “A 5th order Volterra study of a 30W LDMOS power amplifier.” In Proceedings of the 2003 International Symposium on Circuits and Systems, 2003. ISCAS'03., vol. 4, pp. IV-IV. IEEE, 2003.
Hélie, Thomas, “Introduction to Volterra series and applications to physical audio signal processing”, IRCAM-CNRS UMR9912-UPMC, Paris, France, DAFx, 2011.
Hélie, Thomas, “Systèmes entrée-sortie non linéaires et applications en audio-acoustique” [Nonlinear input-output systems and applications in audio acoustics], Séries de Volterra, École Thématique “Théorie du Contrôle en Mécanique” (2019).
Hermann, Robert. “Volterra modeling of digital magnetic saturation recording channels.” IEEE Transactions on Magnetics 26, No. 5 (1990): 2125-2127.
Hoerr, Ethan, and Robert C. Maher. “Using Volterra series modeling techniques to classify black-box audio effects.” In Audio Engineering Society Convention 147. Audio Engineering Society, 2019.
Ibnkahla, M., “Applications of neural networks to digital communications—a survey,” Signal processing, vol. 80, No. 7, pp. 1185-1215, 2000.
IESS Intelsat Earth Station Standards Document IESS-101 (Rev. 61) 2005.
IESS [Nov. 1996] IESS-401 (Rev. 4). Performance requirements for intermodulation products transmitted from INTELSAT earth stations. Intelsat Earth Station Standard (IESS).
Israelsen, Brett W., and Dale A. Smith. “Generalized Laguerre reduction of the Volterra kernel for practical identification of nonlinear dynamic systems.” arXiv preprint arXiv:1410.0741 (2014).
Kafadar, K. [Dec. 1986] Gaussian white-noise generation for digital signal synthesis. IEEE Trans. Inst. and Meas., vol. IM-35, No. 4.
Kafadar, Karen. “Gaussian white-noise generation for digital signal synthesis.” IEEE transactions on instrumentation and measurement 4 (1986): 492-495.
Kahn, L. R. [Jul. 1952] SSB transmission by envelope elimination and restoration. Proc. IRE.
Kamiya, N., and F. Maehara. “Nonlinear Distortion Avoidance Employing Symbol-wise Transmit Power Control for OFDM Transmission,” Proc. of Int'l. OFDM Workshop, Hamburg, 2009.
Khan, A. A., and N. S. Vyas. “Application of Volterra and Wiener theories for nonlinear parameter estimation in a rotor-bearing system.” Nonlinear Dynamics 24, No. 3 (2001): 285-304.
Khan, A. A., and N. S. Vyas. “Non-linear parameter estimation using Volterra and Wiener theories.” Journal of Sound and Vibration 221, No. 5 (1999): 805-821.
Kim, J., and K. Konstantinou, “Digital predistortion of wideband signals based on power amplifier model with memory,” Electronics Letters, vol. 37, No. 23, pp. 1417-1418, 2001.
Koh, Taiho, and E. Powers. “Second-order Volterra filtering and its application to nonlinear system identification.” IEEE Transactions on acoustics, speech, and signal processing 33, No. 6 (1985): 1445-1455.
Kohli, Amit Kumar, and Amrita Rai. “Numeric variable forgetting factor RLS algorithm for second-order Volterra filtering.” Circuits, Systems, and Signal Processing 32, No. 1 (2013): 223-232.
Korenberg, Michael J., and Ian W. Hunter. “The identification of nonlinear biological systems: Volterra kernel approaches.” Annals of biomedical engineering 24, No. 2 (1996): 250-268.
Krall, Christoph, Klaus Witrisal, Geert Leus and Heinz Koeppl, “Minimum Mean-Square Error Equalization for Second-Order Volterra Systems”, IEEE Transactions on Signal Processing, vol. 56, No. 10, Oct. 2008.
Ku, Y. H., and Alfred A. Wolf. “Volterra-Wiener functionals for the analysis of nonlinear systems.” Journal of the Franklin Institute 281, No. 1 (1966): 9-26.
Leis, John, “Adaptive Filter Lecture Notes & Examples”, Nov. 1, 2008 www.usq.edu.au/users/leis/notes/sigproc/adfilt.pdf.
Li, B., C. Zhao, M. Sun, H. Zhang, Z. Zhou, and A. Nallanathan, “A bayesian approach for nonlinear equalization and signal detection in millimeter-wave communications,” IEEE Transactions on Wireless Communications, vol. 14, No. 7, pp. 3794-3809, 2015.
Li, Jian, and Jacek Ilow. “Adaptive Volterra predistorters for compensation of non-linear effects with memory in OFDM transmitters.” In 4th Annual Communication Networks and Services Research Conference (CNSR'06), 4 pp. IEEE, 2006.
Liu, T., S. Boumaiza, and F. M. Ghannouchi, “Dynamic behavioral modeling of 3g power amplifiers using real-valued time-delay neural networks,” IEEE Transactions on Microwave Theory and Techniques, vol. 52, No. 3, pp. 1025-1033, 2004.
Lopez-Bueno, David, Teng Wang, Pere L. Gilabert, and Gabriel Montoro. “Amping up, saving power: Digital predistortion linearization strategies for power amplifiers under wideband 4G/5G burst-like waveform operation.” IEEE Microwave Magazine 17, No. 1 (2015): 79-87.
Álvarez-López, Luis, and Juan A. Becerra. “Application of deep learning methods to the mitigation of nonlinear effects in communication systems.”
López-Valcarce, Roberto, and Soura Dasgupta, “Second-Order Statistical Properties of Nonlinearly Distorted Phase-Shift Keyed (PSK) Signals”, IEEE Communications Letters, vol. 7, No. 7, Jul. 2003.
Lozhkin, Alexander N. “Turbo Linearizer for High Power Amplifier.” In 2011 IEEE 73rd Vehicular Technology Conference (VTC Spring), pp. 1-5. IEEE, 2011.
Lucciardi, J.-A., P. Potier, G. Buscarlet, F. Barrami, and G. Mesnager, “Non-linearized amplifier and advanced mitigation techniques: Dvbs-2x spectral efficiency improvement,” in GLOBECOM 2017-2017 IEEE Global Communications Conference. IEEE, 2017, pp. 1-7.
Maass, Wolfgang, and Eduardo D. Sontag. “Neural systems as nonlinear filters.” Neural computation 12, No. 8 (2000): 1743-1772.
Marmarelis, Vasilis Z., and Xiao Zhao. “Volterra models and three-layer perceptrons.” IEEE Transactions on Neural Networks 8, No. 6 (1997): 1421-1433.
Mathews, V. John, “Adaptive Polynomial Filters,” IEEE Signal Processing Magazine, vol. 8, No. 3, Jul. 1991.
Mathews, V. John. “Orthogonalization of correlated Gaussian signals for Volterra system identification.” IEEE Signal Processing Letters 2, No. 10 (1995): 188-190.
Mirri, Domenico, G. Luculano, Fabio Filicori, Gaetano Pasini, Giorgio Vannini, and G. P. Gabriella. “A modified Volterra series approach for nonlinear dynamic systems modeling.” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 49, No. 8 (2002): 1118-1128.
Mkadem, F., and S. Boumaiza, “Physically inspired neural network model for rf power amplifier behavioral modeling and digital predistortion,” IEEE Transactions on Microwave Theory and Techniques, vol. 59, No. 4, pp. 913-923, 2011.
Mollén, C., E. G. Larsson, and T. Eriksson, “Waveforms for the massive mimo downlink: Amplifier efficiency, distortion, and performance,” IEEE Transactions on Communications, vol. 64, No. 12, pp. 5050-5063, 2016.
Mzyk, Grzegorz, Zygmunt Hasiewicz, and Paweł Mielcarek. “Kernel Identification of Non-Linear Systems with General Structure.” Algorithms 13, No. 12 (2020): 328.
Niknejad, Ali M., “Volterra/Wiener Representation of Non-Linear Systems”, U.C. Berkeley (2014).
Nowak, Robert D., and Barry D. Van Veen. “Volterra filter equalization: A fixed point approach.” IEEE Transactions on signal processing 45, No. 2 (1997): 377-388.
Nowak, Robert D., and Barry D. Van Veen. “Random and pseudorandom inputs for Volterra filter identification.” IEEE Transactions on Signal Processing 42, No. 8 (1994): 2124-2135.
Ogunfunmi, Tokunbo. Adaptive nonlinear system identification: The Volterra and Wiener model approaches. Springer Science & Business Media, 2007.
Orcioni, Simone. “Improving the approximation ability of Volterra series identified with a cross-correlation method.” Nonlinear Dynamics 78, No. 4 (2014): 2861-2869.
Orcioni, Simone, Alessandro Terenzi, Stefania Cecchi, Francesco Piazza, and Alberto Carini. “Identification of Volterra models of tube audio devices using multiple-variance method.” Journal of the Audio Engineering Society 66, No. 10 (2018): 823-838.
Orcioni, Simone, Massimiliano Pirani, Claudio Turchetti, and Massimo Conti. “Practical notes on two Volterra filter identification direct methods.” In 2002 IEEE International Symposium on Circuits and Systems. Proceedings (Cat. No. 02CH37353), vol. 3, pp. III-III. IEEE, 2002.
Palm, G., and T. Poggio. “The Volterra representation and the Wiener expansion: validity and pitfalls.” SIAM Journal on Applied Mathematics 33, No. 2 (1977): 195-216.
Park, D.-C., and T.-K. J. Jeong, “Complex-bilinear recurrent neural network for equalization of a digital satellite channel,” IEEE Transactions on Neural Networks, vol. 13, No. 3, pp. 711-725, 2002.
Perev, K. “Orthogonal Approximation of Volterra Series and Wiener G-functionals Descriptions for Nonlinear Systems.” Information Technologies and Control (2019), Online ISSN: 2367-5357, DOI: 10.7546/itc-2019-0003.
Petrovic, V. and Gosling, W. [May 10, 1979] Polar loop transmitter. Elec. Lett.
Pirani, Massimiliano, Simone Orcioni, and Claudio Turchetti. “Diagonal Kernel Point Estimation of n-th-Order Discrete Volterra-Wiener Systems.” EURASIP Journal on Advances in Signal Processing 2004, No. 12 (2004): 1-10.
Prazenica, Richard J., and Andrew J. Kurdila. “Multiwavelet constructions and Volterra kernel identification.” Nonlinear Dynamics 43, No. 3 (2006): 277-310.
Radiocommunications Agency [Apr. 1987] Code of practice for radio site engineering. MPT 1331. Radiocommunications Agency (RA), Flyde Microsystems Ltd. United Kingdom.
Rai, Amrita, and Amit Kumar Kohli. “Analysis of Adaptive Volterra Filters with LMS and RLS Algorithms.” AKGEC Journal of Technology 2, No. 1 (2011).
Rawat, M., K. Rawat, and F. M. Ghannouchi, “Adaptive digital predistortion of wireless power amplifiers/transmitters using dynamic realvalued focused time-delay line neural networks,” IEEE Transactions on Microwave Theory and Techniques, vol. 58, No. 1, pp. 95-104, 2010.
Roheda, Siddharth, and Hamid Krim. “Conquering the CNN Over-Parameterization Dilemma: A Volterra Filtering Approach for Action Recognition.” In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, No. 07, p. 11948-11956. 2020.
Root, D. E., J. Xu, T. Nielsen, C. Gillease, R. Biernacki, and J. Verspecht. “Measurement-based Nonlinear Behavioral Modeling of Transistors and Components.”
Rugh, Wilson J., Nonlinear System Theory: The Volterra/Wiener Approach. Johns Hopkins University Press, 1981.
Rupp, Markus, “Pre-Distortion Algorithm For Power Amplifiers”, Ph.D. Dissertation, submitted to Technische Universität Wien.
Saadoon, Mohammed Ayad. “Linearization Of Power Amplifiers With Memory Effects Using Memory Polynomial And Linearly Interpolated Look-Up Table Predistorter.” (2017).
Saleh, A. M. [May 1982] Intermodulation analysis of FDMA satellite systems employing compensated and uncompensated TWT's'. IEEE Trans. Comm., vol. COM-30, 5.
Sáez, Raúl Gracia, and Nicolás Medrano Marqués. “RF power amplifier linearization in professional mobile radio communications using artificial neural networks.” IEEE Transactions on Industrial Electronics 66, No. 4 (2018): 3060-3070.
Saleh, Adel AM. “Intermodulation analysis of FDMA satellite systems employing compensated and uncompensated TWT's.” IEEE Transactions on Communications 30, No. 5 (1982): 1233-1242.
Sarti, Augusto, and Silvano Pupolin. “Recursive techniques for the synthesis of a pth-order inverse of a volterra system.” European Transactions on Telecommunications 3, No. 4 (1992): 315-322.
Schetzen, M., 1980, The Volterra and Wiener Theories of Nonlinear Systems, New York, Chichester, Brisbane and Toronto: John Wiley and Sons.
Schetzen, Martin. “Synthesis of a class of non-linear systems.” International Journal of Control 1, No. 5 (1965): 401-414.
Schetzen, Martin. “Nonlinear system modeling based on the Wiener theory.” Proceedings of the IEEE 69, No. 12 (1981): 1557-1573.
Schetzen, Martin. “Multilinear theory of nonlinear networks.” Journal of the Franklin Institute 320, No. 5 (1985): 221-247.
Schetzen, Martin. “Nonlinear system modelling and analysis from the Volterra and Wiener perspective.” In Block-oriented Nonlinear System Identification, pp. 13-24. Springer, London, 2010.
Sebald, D. J., and J. A. Bucklew, “Support vector machine techniques for nonlinear equalization,” IEEE Transactions on Signal Processing, vol. 48, No. 11, pp. 3217-3226, 2000.
Shafi, Fahim. “Prediction of harmonic and intermodulation performance of frequency dependent nonlinear circuits and systems.” PhD diss., King Fahd University of Petroleum and Minerals, 1994.
Shimbo, O. [Feb. 1971] Effects of intermodulation, AM-PM conversion, and additive noise in multicarrier TWT systems. Proc. IEEE, vol. 59, p. 230-238.
Silva, Walter. “Reduced-order models based on linear and nonlinear aerodynamic impulse responses.” In 40th Structures, Structural Dynamics, and Materials Conference and Exhibit, p. 1262. 1999.
Sim, M. S., M. Chung, D. Kim, J. Chung, D. K. Kim, and C.-B. Chae, “Nonlinear self-interference cancellation for full-duplex radios: From link-level and system-level performance perspectives,” IEEE Communications Magazine, vol. 55, No. 9, pp. 158-167, 2017.
Simons, K., Technical Handbook for CATV Systems, 3rd Edition. Jerrold Publication No. 436-001-01, 1968.
Smith, C. N. Application of the Polar Loop Technique to UHF SSB transmitters. University of Bath (United Kingdom), 1986.
Sonnenschein, M., & Censor, D., 1998, Simulation of Hamiltonian light beam propagation in nonlinear media, JOSA—Journal of the Optical Society of America B, (15): 1335-1345.
Staudinger, J., J.-C. Nanan, and J. Wood, “Memory fading volterra series model for high power infrastructure amplifiers,” in Radio and Wireless Symposium (RWS), 2010 IEEE. IEEE, 2010, pp. 184-187.
Stenger, Alexander, and Rudolf Rabenstein. “Adaptive Volterra filters for nonlinear acoustic echo cancellation.” In NSIP, pp. 679-683. 1999.
Stenger, Alexander, Lutz Trautmann, and Rudolf Rabenstein. “Nonlinear acoustic echo cancellation with 2nd order adaptive Volterra filters.” In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), vol. 2, pp. 877-880. IEEE, 1999.
Stepashko, Volodymyr, 2nd International Conference on Inductive Modelling (ICIM'2008) Kyiv, Sep. 15-19, 2008.
Swiss Federal Institute of Technology, Lausanne, “Advantages of nonlinear polynomial predictors” (2010).
Tang, Hao, Y. H. Liao, J. Y. Cao, and Hang Xie. “Fault diagnosis approach based on Volterra models.” Mechanical Systems and Signal Processing 24, No. 4 (2010): 1099-1113.
Thapar, H. K., and B. J. Leon. “Lumped nonlinear system analysis with Volterra series [Final Phase Report, Nov. 1978-Nov. 1979].” (1980).
Therrien, Charles W., W. Kenneth Jenkins, and Xiaohui Li, “Optimizing the Performance of Polynomial Adaptive Filters: Making Quadratic Filters Converge Like Linear Filters”, IEEE Transactions on Signal Processing, vol. 47, No. 4, Apr. 1999.
Thomas, Erik GF, J. Leo van Hemmen, and Werner M. Kistler. “Calculation of Volterra kernels for solutions of nonlinear differential equations.” SIAM Journal on Applied Mathematics 61, No. 1 (2000): 1-21.
Tsimbinos, John, and Langford B. White, “Error Propagation and Recovery in Decision-Feedback Equalizers for Nonlinear Channels”, IEEE Transactions on Communications, vol. 49, No. 2, Feb. 2001.
Tsimbinos, John, and Kenneth V. Lever. “Applications of higher-order statistics to modelling, identification and cancellation of nonlinear distortion in high-speed samplers and analogue-to-digital converters using the Volterra and Wiener models.” In [1993 Proceedings] IEEE Signal Processing Workshop on Higher-Order Statistics, pp. 379-383. IEEE, 1993.
Tsimbinos, John, and Langford B. White. “Error propagation and recovery in decision-feedback equalizers for nonlinear channels.” IEEE Transactions on Communications 49, No. 2 (2001): 239-242.
Tsimbinos, John, and Kenneth V. Lever. “The computational complexity of nonlinear compensators based on the Volterra inverse.” In Proceedings of 8th Workshop on Statistical Signal and Array Processing, pp. 387-390. IEEE, 1996.
Tsimbinos, John, and Kenneth V. Lever. “Computational complexity of Volterra based nonlinear compensators.” Electronics Letters 32, No. 9 (1996): 852-854.
Uncini, A., L. Vecci, P. Campolucci, and F. Piazza, “Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization,” IEEE Transactions on Signal Processing, vol. 47, No. 2, pp. 505-514, 1999.
Vazquez, Rafael, and Miroslav Krstic. “Volterra boundary control laws for a class of nonlinear parabolic partial differential equations.” IFAC Proceedings Volumes 37, No. 13 (2004): 1253-1258.
Wang, C.-L. and Y. Ouyang, “Low-complexity selected mapping schemes for peak-to-average power ratio reduction in ofdm systems,” IEEE Transactions on signal processing, vol. 53, No. 12, pp. 4652-4660, 2005.
Wang, Chin-Liang, and Yuan Ouyang. “Low-complexity selected mapping schemes for peak-to-average power ratio reduction in OFDM systems.” IEEE Transactions on signal processing 53, No. 12 (2005): 4652-4660.
Watanabe, Atsushi. “The volterra series expansion of functionals defined on the finite-dimensional vector space and its application to saving of computational effort for volterra kernels.” Electronics and Communications in Japan (Part I: Communications) 69, No. 4 (1986): 37-46.
Wei, Wentao, Peng Ye, Jinpeng Song, Hao Zeng, Jian Gao, and Yu Zhao. “A behavioral dynamic nonlinear model for time-interleaved ADC based on Volterra series.” IEEE Access 7 (2019): 41860-41873.
Woo, Young Yun, et al. “Adaptive Digital Feedback Predistortion Technique for Linearizing Power Amplifiers,” IEEE Transactions on Microwave Theory and Techniques, vol. 55, No. 5, May 2007.
Wood, J., Behavioral modeling and linearization of RF power amplifiers. Artech House, 2014.
Wray, Jonathan, and Gary GR Green. “Calculation of the Volterra kernels of non-linear dynamic systems using an artificial neural network.” Biological cybernetics 71, No. 3 (1994): 187-195.
Yan, H., and D. Cabric, “Digital predistortion for hybrid precoding architecture in millimeter-wave massive mimo systems,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 3479-3483.
Yasui, Syozo. “Stochastic functional Fourier series, Volterra series, and nonlinear systems analysis.” IEEE Transactions on Automatic Control 24, No. 2 (1979): 230-242.
Yoffe, I., and D. Wulich, “Predistorter for mimo system with nonlinear power amplifiers,” IEEE Transactions on Communications, vol. 65, No. 8, pp. 3288-3301, 2017.
Yu, Chao, Lei Guan, Erni Zhu, and Anding Zhu. “Band-limited Volterra series-based digital predistortion for wideband RF power amplifiers.” IEEE Transactions on Microwave Theory and Techniques 60, No. 12 (2012): 4198-4208.
Zaknich, A., “Principles of Adaptive Filters and Self-learning Systems”, Springer, 2005.
Zhang, Haitao. On the design techniques to improve self-heating, linearity, efficiency and mismatch protection in broadband HBT power amplifiers. University of California, Irvine, 2006.
Zhu, Anding. “Behavioral modeling for digital predistortion of RF power amplifiers: from Volterra series to CPWL functions.” In 2016 IEEE Topical Conference on Power Amplifiers for Wireless and Radio Applications (PAWR), pp. 1-4. IEEE, 2016.
Zhu, A., J. C. Pedro, and T. J. Brazil, “Dynamic deviation reduction-based volterra behavioral modeling of rf power amplifiers,” IEEE Transactions on microwave theory and techniques, vol. 54, No. 12, pp. 4323-4332, 2006.
Zhu, A., P. J. Draxler, J. J. Yan, T. J. Brazil, D. F. Kimball, and P. M. Asbeck, “Open-loop digital predistorter for rf power amplifiers using dynamic deviation reduction-based volterra series,” IEEE Transactions on Microwave Theory and Techniques, vol. 56, No. 7, pp. 1524-1534, 2008.
Zhu, Anding, and Thomas J. Brazil. “Behavioral modeling of RF power amplifiers based on pruned Volterra series.” IEEE Microwave and Wireless components letters 14, No. 12 (2004): 563-565.
Zhu, Anding, and Thomas J. Brazil. “An adaptive Volterra predistorter for the linearization of RF high power amplifiers.” In 2002 IEEE MTT-S International Microwave Symposium Digest (Cat. No. 02CH37278), vol. 1, pp. 461-464. IEEE, 2002.
Zhu, Anding, Michael Wren, and Thomas J. Brazil. “An efficient Volterra-based behavioral model for wideband RF power amplifiers.” In IEEE MTT-S International Microwave Symposium Digest, 2003, vol. 2, pp. 787-790. IEEE, 2003.
Zhu, Anding, and Thomas J. Brazil. “RF power amplifier behavioral modeling using Volterra expansion with Laguerre functions.” In IEEE MTT-S International Microwave Symposium Digest, 2005, 4 pp. IEEE, 2005.
Zhu, Anding, Paul J. Draxler, Jonmei J. Yan, Thomas J. Brazil, Donald F. Kimball, and Peter M. Asbeck. “Open-loop digital predistorter for RF power amplifiers using dynamic deviation reduction-based Volterra series.” IEEE Transactions on Microwave Theory and Techniques 56, No. 7 (2008): 1524-1534.
Zhu, Anding, Jos Carlos Pedro, and Telmo Reis Cunha. “Pruning the Volterra series for behavioral modeling of power amplifiers using physical knowledge.” IEEE Transactions on Microwave Theory and Techniques 55, No. 5 (2007): 813-821.
Zhu, Anding, Paul J. Draxler, Chin Hsia, Thomas J. Brazil, Donald F. Kimball, and Peter M. Asbeck. “Digital predistortion for envelope-tracking power amplifiers using decomposed piecewise Volterra series.” IEEE Transactions on Microwave Theory and Techniques 56, No. 10 (2008): 2237-2247.
Zhu, Anding, and Thomas J. Brazil. “An overview of Volterra series based behavioral modeling of RF/microwave power amplifiers.” In 2006 IEEE annual wireless and microwave technology conference, pp. 1-5. IEEE, 2006.
Zhu, Yun-Peng, and Zi-Qiang Lang. “A new convergence analysis for the Volterra series representation of nonlinear systems.” Automatica 111 (2020): 108599.
Related Publications (1)
Number Date Country
20230021633 A1 Jan 2023 US
Provisional Applications (1)
Number Date Country
62819054 Mar 2019 US
Continuations (2)
Number Date Country
Parent 17234102 Apr 2021 US
Child 17947577 US
Parent 16812229 Mar 2020 US
Child 17234102 US