This invention relates to digital compensation of a non-linear circuit or system, for instance linearizing a non-linear power amplifier and radio transmitter chain, and in particular to effective parameterization of a digital pre-distorter used for digital compensation.
One method for compensation of such a non-linear circuit is to “pre-distort” (or “pre-invert”) the input. For example, an ideal circuit outputs a desired signal u[.] unchanged, such that y[.]=u[.], while the actual non-linear circuit has an input-output transformation y[.]=F(u[.]), where the notation y[.] denotes a discrete time signal. A compensation component is introduced before the non-linear circuit that transforms the input u[.], which represents the desired output, to a predistorted input v[.] according to a transformation v[.]=C(u[.]). Then this predistorted input is passed through the non-linear circuit, yielding y[.]=F(v[.]). The functional form and selectable parameter values that specify the transformation C( ) are chosen such that y[.]≈u[.] as closely as possible in a particular sense (e.g., minimizing mean squared error), thereby linearizing the operation of the tandem arrangement of the pre-distorter and the non-linear circuit as well as possible.
In some examples, the DPD performs the transformation of the desired signal u[.] to the input v[.] by using delay elements to form a set of delayed versions of the desired signal, and then using a non-linear polynomial function of those delayed inputs. In some examples, the non-linear function is a Volterra series:
v[n]=x0+Σp Στ1, . . . ,τp xp(τ1, . . . ,τp) Πj=1 . . . p u[n−τj]
or
v[n]=x0+Σp Στ1, . . . ,τ2p−1 xp(τ1, . . . ,τ2p−1) Πj=1 . . . p u[n−τj] Πj=p+1 . . . 2p−1 u[n−τj]*
In some examples, the non-linear function uses a reduced set of Volterra terms or a delay polynomial:
v[n]=x0+Σp Στ xp(τ) u[n−τ] |u[n−τ]|^(p−1).
In these cases, the particular compensation function C is determined by the values of the numerical configuration parameters xp.
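As a concrete illustration of the delay-polynomial form above, the following Python sketch evaluates v[n]=x0+Σp Στ xp(τ) u[n−τ] |u[n−τ]|^(p−1) for a small set of (p, τ) pairs; the coefficient values, the test signal, and the helper name delay_polynomial_dpd are illustrative assumptions, not taken from this document.

```python
import numpy as np

def delay_polynomial_dpd(u, x0, coeffs):
    """Evaluate v[n] = x0 + sum_p sum_tau x_p(tau) * u[n-tau] * |u[n-tau]|**(p-1).

    coeffs[(p, tau)] holds the complex coefficient x_p(tau); samples before
    n = 0 are treated as zero.
    """
    v = np.full(len(u), x0, dtype=complex)
    for (p, tau), x in coeffs.items():
        delayed = np.concatenate([np.zeros(tau, dtype=complex), u[:len(u) - tau]])
        v += x * delayed * np.abs(delayed) ** (p - 1)
    return v

# Illustrative use with arbitrary (hypothetical) coefficient values.
u = np.exp(2j * np.pi * 0.01 * np.arange(64))                   # toy baseband tone
coeffs = {(1, 0): 1.0 + 0.0j, (3, 0): -0.05 + 0.01j, (3, 1): 0.02j}
v = delay_polynomial_dpd(u, 0.0, coeffs)
```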
In the case of a radio transmitter, the desired input u[.] may be a complex discrete time baseband signal of a transmit band, and y[.] may represent that transmit band as modulated to the carrier frequency of the radio transmitter by the function F( ) that represents the radio transmit chain. That is, the radio transmitter may modulate and amplify the input v[.] to a (real continuous-time) radio frequency signal p(.) which when demodulated back to baseband, limited to the transmit band and sampled, is represented by y[.].
There is a need for a pre-distorter with a form that accurately compensates for the non-linearities of the transmit chain while imposing as small a computation requirement as possible, both in terms of the arithmetic operations performed to pre-distort a signal and in terms of the storage required for the values of the configuration parameters. There is also a need for the form of the pre-distorter to be robust to variation in the parameter values and/or to variation of the characteristics of the transmit chain, so that performance degradation of the pre-distortion does not exceed that which is commensurate with the degree of such variation.
In one aspect, in general, a pre-distorter that accurately compensates for the non-linearities of a radio frequency transmit chain, and that imposes low computation requirements in terms of arithmetic operations and storage, uses a diverse set of real-valued signals that are derived from the input signal, as well as optional input envelope and other relevant measurements of the system. The derived real signals are passed through configurable non-linear transformations, which may be adapted during operation based on sensed output of the transmit chain, and which may be efficiently implemented using lookup tables. The outputs of the non-linear transformations serve as gain terms for a set of complex signals, which are transformations of the input. The gain-adjusted complex signals are summed to compute the pre-distorted signal, which is passed to the transmit chain. A small set of the complex signals and derived real signals may be selected for a particular system to match the non-linearities exhibited by the system, thereby providing further computational savings, and reducing the complexity of adapting the pre-distortion through adapting of the non-linear transformations.
In another aspect, in general, a method of signal predistortion linearizes a non-linear circuit. An input signal (u) is processed to produce multiple transformed signals (w). The transformed signals are processed to produce multiple phase-invariant derived signals (r). These phase-invariant derived signals (r) are determined such that each derived signal (rj) is equal to a non-linear function of one or more of the transformed signals. The derived signals are phase-invariant in the sense that a change in the phase of a transformed signal does not change the value of the derived signal. At least some of the derived signals are equal to functions of different ones of the transformed signals. Each derived signal (rj) of the phase-invariant derived signals is processed according to a parametric non-linear transformation to produce a time-varying gain component (gi) of multiple gain components (g). A distortion term is then formed by accumulating multiple terms. Each term is a product of a transformed signal of the transformed signals and a time-varying gain. The time-varying gain is a function (Φ) of one or more of the phase-invariant derived signals (i.e., a gain signal determined from the derived signals), and the function of the one or more of the phase-invariant derived signals is decomposable into a combination of one or more parametric functions (ϕ) of a corresponding single one of the phase-invariant derived signals (rj), yielding a corresponding one of the time-varying gain components (i.e., component gain signals). An output signal (v) is determined from the distortion term and provided for application to the non-linear circuit.
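For orientation only, the following Python sketch walks through the signal flow described in this aspect (input u, transformed signals w, phase-invariant derived signals r, time-varying gains g, distortion term, output v); the specific choices of delays for w, the use of |w|^2 for r, and the toy gain tables are assumptions for illustration, not the claimed parameterization.

```python
import numpy as np

def toy_predistort(u, phi_tables):
    """Illustrate the flow u -> w -> r -> g -> distortion term -> v.

    w: delayed copies of u; r: |w|**2 (phase-invariant); g: complex gains obtained
    by piecewise-linear interpolation of per-term tables phi_tables[k] over [0, 1].
    """
    n_terms = len(phi_tables)
    w = [np.roll(u, k) for k in range(n_terms)]            # transformed signals
    r = [np.clip(np.abs(wk) ** 2, 0.0, 1.0) for wk in w]   # derived real signals
    grid = np.linspace(0.0, 1.0, len(phi_tables[0]))
    delta = np.zeros_like(u, dtype=complex)
    for k in range(n_terms):
        # time-varying complex gain g_k[t] = phi_k(r_k[t])
        g = np.interp(r[k], grid, phi_tables[k].real) + \
            1j * np.interp(r[k], grid, phi_tables[k].imag)
        delta += g * w[k]
    return u + delta                                        # v = u + distortion term

u = 0.8 * np.exp(2j * np.pi * 0.02 * np.arange(128))
tables = [np.linspace(0, -0.05 - 0.02j, 33) for _ in range(3)]  # toy phi_k tables
v = toy_predistort(u, tables)
```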
Aspects may include one or more of the following features.
The non-linear circuit includes a radio-frequency section including a radio-frequency modulator configured to modulate the output signal to a carrier frequency to form a modulated signal and an amplifier for amplifying the modulated signal.
The input signal (u) includes quadrature components of a baseband signal for transmission via the radio-frequency section.
The input signal (u) and the transformed signals (w) are complex-valued signals.
Processing the input signal (u) to produce the transformed signals (w) includes forming at least one of the transformed signals as a linear combination of the input signal (u) and one or more delayed versions of the input signal.
Forming at least one of the transformed signals as a linear combination includes forming a linear combination with at least one imaginary or complex multiple of the input signal or of a delayed version of the input signal.
Forming at least one of the transformed signals, wk, includes forming it to be a multiple of D^α wa+j^d wb, where wa and wb are others of the transformed signals, D^α represents a delay by α, and d is an integer between 0 and 3.
Forming the at least one of the transformed signals includes time filtering the input signal to form said transformed signal. The time filtering of the input signal can include applying a finite-impulse-response (FIR) filter to the input signal, or applying an infinite-impulse-response (IIR) filter to the input signal.
The transformed signals (w) include non-linear functions of the input signal (u).
The non-linear functions of the input signal (u) include at least one function of a form u[n−τ]|u[n−τ]|^p for a delay τ and an integer power p, or Πj=1 . . . p u[n−τj] Πj=p+1 . . . 2p−1 u[n−τj]* for a set of integer delays τ1 to τ2p−1, where * indicates a complex conjugate operation.
Determining a plurality of phase-invariant derived signals (r) comprises determining real-valued derived signals.
Determining the phase-invariant derived signals (r) comprises processing the transformed signals (w) to produce a plurality of phase-invariant derived signals r.
Each of the derived signals is equal to a function of a single one of the transformed signals.
Processing the transformed signals (w) to produce the phase-invariant derived signals includes, for at least one derived signal (rp), computing said derived signal by first computing a phase-invariant non-linear function of one of the transformed signals (wk) to produce a first derived signal, and then computing a linear combination of the first derived signal and delayed versions of the first derived signal to determine the at least one derived signal.
Computing a phase-invariant non-linear function of one of the transformed signals (wk) comprises computing a power of a magnitude of the one of the transformed signals (|wk|^p) for an integer power p≥1. For example, p=1 or p=2.
Computing the linear combination of the first derived signal and delayed versions of the first derived signal comprises time filtering the first derived signal. Time filtering the first derived signal can include applying a finite-impulse-response (FIR) filter to the first derived signal or applying an infinite-impulse-response (IIR) filter to the first derived signal.
Processing the transformed signals (w) to produce the phase-invariant derived signals includes computing a first signal as a phase-invariant non-linear function of a first of the transformed signals, computing a second signal as a phase-invariant non-linear function of a second of the transformed signals, and then computing a combination of the first signal and the second signal to form at least one of the phase-invariant derived signals.
At least one of the phase-invariant derived signals is equal to a function of two of the transformed signals, wa and wb, with a form |wa[t]|^α |wb[t−τ]|^β for positive integer powers α and β.
The transformed signals (w) are processed to produce the phase-invariant derived signals by computing a derived signal rk[t] using at least one of the following transformations (a brief sketch of these transformations follows the list):
rk[t]=|wa[t]|^α,
where α>0 for a transformed signal wa[t];
rk[t]=0.5(1−θ+ra[t−α]+θrb[t]),
where θ∈{1,−1}, a,b∈{1, . . . , k−1}, and α is an integer and ra[t] and rb[t] are other of the derived signals;
rk[t]=ra[t−α]rb[t],
where a,b∈{1, . . . , k−1} and α is an integer and ra[t] and rb[t] are other of the derived signals; and
rk[t]=rk[t−1]+2^−d(ra[t]−rk[t−1]),
where a∈{1, . . . , k−1} and d is an integer d>0.
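A minimal Python sketch of the four transformations listed above follows; the index choices (a, b, α, θ, d) and the test signal are arbitrary illustrations, not values taken from this document.

```python
import numpy as np

def delay(x, a):
    """x[t - a] with zero padding for t < a."""
    return np.concatenate([np.zeros(a, dtype=x.dtype), x[:len(x) - a]]) if a > 0 else x.copy()

# w_a: some previously formed complex transformed signal; the r signals below
# are built from it.  All index choices are purely illustrative.
t = np.arange(256)
w_a = 0.9 * np.exp(2j * np.pi * 0.01 * t)

r1 = np.abs(w_a) ** 2                                    # r_k = |w_a[t]|^alpha with alpha = 2
theta, alpha = -1, 1
r2 = 0.5 * (1 - theta + delay(r1, alpha) + theta * r1)   # signed combination of two derived signals
r3 = delay(r1, 2) * r2                                   # product of two derived signals
d = 3
r4 = np.zeros_like(r1)                                   # running average r_k[t] = r_k[t-1] + 2**-d (r_a[t] - r_k[t-1])
for n in range(1, len(r1)):
    r4[n] = r4[n - 1] + 2.0 ** -d * (r1[n] - r4[n - 1])
```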
The time-varying gain components comprise complex-valued gain components.
The method includes transforming a first derived signal (rj) of the plurality of phase-invariant derived signals according to one or more different parametric non-linear transformations to produce corresponding time-varying gain components.
The one or more different parametric non-linear transformations comprises multiple different non-linear transformations producing corresponding time-varying gain components.
Each of the corresponding time-varying gain components forms a part of a different term of the plurality of terms of the sum forming the distortion term.
Forming the distortion term comprises forming a first sum of products, each term in the first sum being a product of a delayed version of the transformed signal and a second sum of a corresponding subset of the gain components.
The distortion term δ[t] has a form
δ[t]=Σk wak[t−dk] (Σj∈Λk gj[t]),
wherein for each term indexed by k, ak selects the transformed signal, dk determines the delay of said transformed signal, and Λk determines the subset of the gain components.
Transforming a first derived signal of the derived signals according to a parametric non-linear transformation comprises performing a table lookup in a data table corresponding to said transformation according to the first derived signal to determine a result of the transforming.
The parametric non-linear transformation comprises a plurality of segments, each segment corresponding to a different range of values of the first derived signal, and wherein transforming the first derived signal according to the parametric non-linear transformation comprises determining a segment of the parametric non-linear transformation from the first derived signal and accessing data from the data table corresponding to said segment.
The parametric non-linear transformation comprises a piecewise linear or a piecewise constant transformation, and the data from the data table corresponding to the segment characterizes endpoints of said segment.
The non-linear transformation comprises a piecewise linear transformation, and transforming the first derived signal comprises interpolating a value on a linear segment of said transformation.
The method further includes adapting configuration parameters of the parametric non-linear transformation according to sensed output of the non-linear circuit.
The method further includes acquiring a sensing signal (y) dependent on an output of the non-linear circuit, and wherein adapting the configuration parameters includes adjusting said parameters according to a relationship of the sensing signal (y) and at least one of the input signal (u) and the output signal (v).
Adjusting said parameters includes reducing a mean squared value of a signal computed from the sensing signal (y) and at least one of the input signal (u) and the output signal (v) according to said parameters.
Reducing the mean squared value includes applying a stochastic gradient procedure to incrementally update the configuration parameters.
Reducing the mean squared value includes processing a time interval of the sensing signal (y) and a corresponding time interval of at least one of the input signal (u) and the output signal (v).
The method includes performing a matrix inverse of a Gramian matrix determined from the time interval of the sensing signal and a corresponding time interval of at least one of the input signal (u) and the output signal (v).
The method includes forming the Gramian matrix as a time average Gramian.
The method includes performing a coordinate descent procedure based on the time interval of the sensing signal and a corresponding time interval of at least one of the input signal (u) and the output signal (v).
Transforming a first derived signal of the plurality of derived signals according to a parametric non-linear transformation comprises performing a table lookup in a data table corresponding to said transformation according to the first derived signal to determine a result of the transforming, and wherein adapting the configuration parameters comprises updating values in the data table.
The parametric non-linear transformation comprises a greater number of piecewise linear segments than adjustable parameters characterizing said transformation.
The non-linear transformation represents a function that is a sum of scaled kernels, a magnitude scaling each kernel being determined by a different one of the adjustable parameters characterizing said transformation.
Each kernel comprises a piecewise linear function.
Each kernel is zero for at least some range of values of the derived signal.
In another aspect, in general, a digital predistorter circuit is configured to perform all the steps of any of the methods set forth above.
In another aspect, in general, a design structure is encoded on a non-transitory machine-readable medium. The design structure comprises elements that, when processed in a computer-aided design system, generate a machine-executable representation of the digital predistortion circuit that is configured to perform all the steps of any of the methods set forth above.
In another aspect, in general, a non-transitory computer readable media is programmed with a set of computer instructions executable on a processor. When these instructions are executed, they cause operations including all the steps of any of the methods set forth above.
Referring to
The structure of the radio transmitter 100 shown in
The baseband section 110 has a predistorter 130, which implements the transformation from the baseband input u[.] to the input v[.] to the RF section 140. This predistorter is configured with the values of the configuration parameters x provided by the adaptation section 160 if such adaptation is provided. Alternatively, the parameter values are set when the transmitter is initially tested, or may be selected based on operating conditions, for example, as generally described in U.S. Pat. No. 9,590,668, “Digital Compensator.”
In examples that include an envelope-tracking aspect, the baseband section 110 includes an envelope tracker 120, which generates the envelope signal e[.]. For example, this signal tracks the magnitude of the input baseband signal, possibly filtered in the time domain to smooth the envelope. In particular, the values of the envelope signal may be in the range [0,1], representing the fraction of a full range. In some examples, there are NE such components of the signal (i.e., e[.]=(e1[.], . . . , eNE[.])).
Turning to the RF section 140, the predistorted baseband signal v[.] passes through an RF signal generator 142, which modulates the signal to the target radio frequency band at a center frequency fc. This radio frequency signal passes through a power amplifier (PA) 148 to produce the antenna driving signal p(.). In the illustrated example, the power amplifier is powered at a supply voltage determined by an envelope conditioner 122, which receives the envelope signal e[.] and outputs a time-varying supply voltage Vc to the power amplifier.
As introduced above, the predistorter 130 is configured with a set of fixed parameters z, and values of a set of adaptation parameters x, which in the illustrated embodiment are determined by the adaptation section 160. Very generally, the fixed parameters determine the family of compensation functions that may be implemented by the predistorter, and the adaptation parameters determine the particular function that is used. The adaptation section 160 receives a sensing of the signal passing between the power amplifier 148 and the antenna 150, for example, with a signal sensor 152 preferably near the antenna (i.e., after the RF signal path between the power amplifier and the antenna, in order to capture non-linear characteristics of the passive signal path). RF sensor circuitry 164 demodulates the sensed signal to produce a representation of the signal band y[.], which is passed to an adapter 162. The adapter 162 essentially uses the inputs to the RF section, namely v[.] and/or the input to the predistorter u[.] (e.g., according to the adaptation approach implemented) and optionally e[.], and the representation of sensed output of the RF section, namely y[.]. In the analysis below, the RF section is treated as implementing a generally non-linear transformation represented as y[.]=F(v[.], e[.]) in the baseband domain, with a sampling rate sufficiently large to capture not only the bandwidth of the original signal u[.] but also a somewhat extended bandwidth to include significant non-linear components that may have frequencies outside the desired transmission band. In later discussions below, the sampling rate of the discrete time signals in the baseband section 110 is denoted as fs.
In the adapter 162 is illustrated in
Although various structures for the transformation implemented by the predistorter 130 may be used, in one or more embodiments described below, the functional form implemented is
v[.]=u[.]+δ[.]
where
δ[.]=Δ(u[.],e[.])
and Δ(,), which may be referred to as the distortion term, is effectively parameterized by the parameters x. Rather than using a set of terms as outlined above for the Volterra or delay polynomial approaches, the present approach makes use of a multiple stage approach in which a diverse set of targeted distortion terms are combined in a manner that satisfies the requirements of low computation requirement, low storage requirement, and robustness, while achieving a high degree of linearization.
Very generally, the structure of the function Δ(,) is motivated by application of the Kolmogorov Superposition Theorem (KST). One statement of KST is that a non-linear function of d arguments x1, . . . , xd∈[0,1]^d may be expressed as
ƒ(x1, . . . ,xd)=Σi=1 . . . 2d+1 gi(Σj=1 . . . d hij(xj))
for some functions gi and hij. Proofs of the existence of such functions may concentrate on particular types of non-linear functions, for example, fixing the hij and proving the existence of suitable gi. In application to approaches described in this document, this motivation yields a class of non-linear functions defined by constituent non-linear functions somewhat analogous to the gi and/or the hij in the KST formulation above.
Referring to
Note that as illustrated in
In one implementation, the set of complex baseband signals includes the input itself, w1=u, as well as various delays of that signal, for example, wk=u[t−k+1] for k=1, . . . , NW. In another implementation, the complex signals output from the complex layer are arithmetic functions of the input, for example
(u[t]+u[t−1])/2;
(u[t]+ju[t−1])/2; and
((u[t]+u[t−1])/2+u[t−2])/2.
In at least some examples, these arithmetic functions are selected to limit the computational resources by having primarily additive operations and multiplicative operations by constants that may be implemented efficiently (e.g., division by 2). In another implementation, a set of relatively short finite-impulse-response (FIR) filters modify the input u[t] to yield wk[t], where the coefficients may be selected according to time constants and resonance frequencies of the RF section.
In yet another implementation, the set of complex baseband signals includes the input itself, w1=u, as well as various combinations, for example, of the form
wk=0.5(D^α wa+j^d wb)
where D^α represents a delay of a signal by an integer number α of samples, and d is an integer, generally with d∈{0, 1, 2, 3}, which may depend on k, and k>a, b (i.e., each signal wk may be defined in terms of previously defined signals), such that
wk[t]=0.5(wa[t−α]+j^d wb[t])
There are various ways of choosing which combinations of signals (e.g., the a, b, d values) determine the signals constructed. One way is essentially by trial and error, for example, adding signals from a set of values in a predetermined range that most improve performance in a greedy manner (e.g., by a directed search) one by one.
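The construction of complex signals wk from u and previously defined signals can be sketched as follows in Python; the (a, b, α, d) tuples below are arbitrary examples of the kind of combinations that a trial-and-error or directed search might select, not combinations specified in this document.

```python
import numpy as np

def delayed(x, alpha):
    """x[t - alpha] with zero padding for t < alpha."""
    return np.concatenate([np.zeros(alpha, dtype=x.dtype), x[:len(x) - alpha]]) if alpha > 0 else x

def extend_w(w_list, a, b, alpha, d):
    """Append w_k[t] = 0.5 * (w_a[t - alpha] + j**d * w_b[t]) to the list."""
    w_list.append(0.5 * (delayed(w_list[a], alpha) + (1j ** d) * w_list[b]))

u = np.exp(2j * np.pi * 0.015 * np.arange(200))   # toy input; w_0 = u
w = [u.astype(complex)]
# (a, b, alpha, d) tuples chosen here only for illustration; in practice they
# might be added one by one, e.g. by a greedy/directed search.
for a, b, alpha, d in [(0, 0, 1, 0), (0, 1, 1, 1), (1, 2, 2, 3)]:
    extend_w(w, a, b, alpha, d)
```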
Continuing to refer to
In one implementation, each of the complex signals wk passes to one or more corresponding non-linear functions ƒ(w), which accepts a complex value and outputs a real value r that does not depend on the phase of its input (i.e., the function is phase-invariant). Examples of these non-linear functions, with an input w=wre+jwim, include the following:
|w|=|wre+jwim|=(wre^2+wim^2)^1/2;
ww*=|w|^2;
log(a+ww*); and
|w|^1/2.
In at least some examples, the non-linear function is monotone or non-decreasing in norm (e.g., an increase in |w| corresponds to an increase in r=ƒ(w)).
In some implementations, the output of a non-linear, phase-invariant function may be filtered, for example, with a real linear time-invariant filter. In some examples, each of these filters is an Infinite Impulse-Response (IIR) filter implemented as having a rational polynomial Laplace or Z Transform (i.e., characterized by the locations of the poles and zeros of the transfer function). An example is an IIR filter whose Z Transform is parameterized by constants p and q, where, for example, p=0.7105 and q=0.8018. In other examples, a Finite Impulse-Response (FIR) filter is used. An example is a short FIR filter with input x and output y, parameterized by an integer delay k, for example with k=1 or k=4.
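The filtering of a derived real signal can be sketched as below; because the specific transfer functions are not reproduced here, the one-pole IIR smoother and the two-tap FIR average are assumed illustrative forms that only reuse the constants quoted above.

```python
import numpy as np

def one_pole_smooth(r, p):
    """Simple IIR low-pass y[t] = p*y[t-1] + (1-p)*r[t] applied to a derived real signal."""
    y = np.zeros_like(r)
    for n in range(len(r)):
        y[n] = p * y[n - 1] + (1.0 - p) * r[n] if n else (1.0 - p) * r[n]
    return y

def short_fir(r, k):
    """Simple two-tap FIR average y[t] = 0.5*(r[t] + r[t-k]) (illustrative form only)."""
    rk = np.concatenate([np.zeros(k), r[:len(r) - k]])
    return 0.5 * (r + rk)

r = np.abs(np.exp(2j * np.pi * 0.01 * np.arange(128)) * np.hanning(128)) ** 2
r_iir = one_pole_smooth(r, p=0.7105)   # reuses the constant p quoted above (illustrative)
r_fir = short_fir(r, k=4)
```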
In yet another implementation, the particular signals are chosen (e.g., by trial and error, in a directed search, iterative optimization, etc.) from one or more of the following families of signals:
As illustrated in
According to (a), the components of e are automatically treated as real signals (i.e., the components of r). Option (b) presents a convenient way of converting complex signals to real ones while assuring that scaling the input u by a complex constant with unit absolute value does not change the outcome (i.e., phase-invariance). Options (c) and (d) allow addition, subtraction, and (if needed) multiplication of real signals. Option (e) allows averaging (i.e., cheaply implemented low-pass filtering) of real signals and option (f) offers more advanced spectral shaping, which is needed for some real-world power amplifiers 148, which may exhibit a second order resonance behavior. Note that more generally, the transformations producing the r components are phase invariant in the original baseband input u, that is, multiplication of u[t] by exp(jθ) or exp(jωt) does not change rp[t].
Constructing the signals w and r can provide a diversity of signals from which the distortion term may be formed using a parameterized transformation. In some implementations, the form of the transformation is as follows:
δ[t]=Σk wak[t−dk] Φk(x)(r[t])
The function Φ(x)(r) takes as an argument the NR components of r, and maps those values to a complex number according to the parameter values of x. That is, each function Φk(x)(r) essentially provides a time-varying complex gain for the kth term in the summation forming the distortion term. With up to D delays (i.e., 0≤dk<D) and NW different w[t] functions, there are up to NW·D terms in the sum. The selection of the particular terms (i.e., the values of ak and dk) is represented in the fixed parameters z that configure the system.
Rather than configuring functions of NR arguments, some embodiments structure the Φk(x)(r) functions as a summation of functions of single arguments as follows:
Φk(x)(r)=Σj ϕk,j(rj)
where the summation over j may include all NR terms, or may omit certain terms. Overall, the distortion term is therefore computed to result in the following:
δ[t]=Σk wak[t−dk] Σj ϕk,j(rj[t])
Again, the summation over j may omit certain terms, for example, as chosen by the designer according to their know-how and other experience or experimental measurements. This transformation is implemented by the combination stage 230, labelled LR in
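A sketch of the combination stage computing δ[t]=Σk wak[t−dk] Σj ϕk,j(rj[t]) is given below; the term list, the stand-in ϕk,j functions, and the helper name distortion_term are illustrative assumptions.

```python
import numpy as np

def distortion_term(w, r, terms, phi):
    """delta[t] = sum_k w[a_k][t - d_k] * sum_{j in Lambda_k} phi[(k, j)](r[j][t]).

    `terms` is a list of (a_k, d_k, Lambda_k); `phi[(k, j)]` is a callable
    returning a complex gain component for a real argument in [0, 1].
    """
    delta = np.zeros_like(w[0], dtype=complex)
    for k, (a, d, lam) in enumerate(terms):
        w_del = np.concatenate([np.zeros(d, dtype=complex), w[a][:len(w[a]) - d]]) if d else w[a]
        gain = sum(phi[(k, j)](r[j]) for j in lam)        # time-varying complex gain
        delta += w_del * gain
    return delta

u = 0.7 * np.exp(2j * np.pi * 0.02 * np.arange(100))
w = [u, np.roll(u, 1)]
r = [np.clip(np.abs(u) ** 2, 0, 1)]
terms = [(0, 0, [0]), (1, 1, [0])]                        # illustrative (a_k, d_k, Lambda_k)
phi = {(0, 0): lambda x: -0.05 * x + 0.01j * x ** 2,      # toy stand-ins for the
       (1, 0): lambda x: 0.02j * x}                       # adapted phi_{k,j} functions
v = u + distortion_term(w, r, terms, phi)
```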
As an example of one term in the summation that yields the distortion term, consider w1=u and r=|u|^2 (i.e., applying transformation (b) with a=1 and α=2), which together yield a term of the form u ϕ(|u|^2), where ϕ( ) is one of the parameterized scalar functions. Note the contrast of such a term as compared to a simple scalar weighting of a term u|u|^2, which lacks the larger number of degrees of freedom obtainable through the parameterization of ϕ( ).
Each function ϕk,j(rj) implements a parameterized mapping from the real argument rj, which is in the range [0,1], to a complex number, optionally limited to complex numbers with magnitudes less than or equal to one. These functions are essentially parameterized by the parameters x, which are determined by the adaptation section 160 (see
In practice, a selection of a subset of these terms is used, being selected for instance by trial-and-error or greedy selection. In an example of a greedy iterative selection procedure, a number of possible terms (e.g., w and r combinations) are evaluated according to their usefulness in reducing a measure of distortion (e.g., peak or average RMS error, impact on EVM, etc. on a sample data set) at an iteration, and one or possibly more best terms are retained before proceeding to the next iteration where further terms may be selected, with a stopping rule, such as a maximum number of terms or a threshold on the reduction of the distortion measure. A result is that for any term k in the sum, only a subset of the NR components of r are generally used. For a highly nonlinear device, a design generally works better employing a variety of rk signals. For nonlinear systems with strong memory effect (i.e., poor harmonic frequency response), the design tends to require more shifts in the wk signals. In an alternative selection approach, the selection of the best choices of wk and rk under given constraints starts with a universal compensator model which has a rich selection of wk and rk, and then an L1 trimming is used to restrict the terms.
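The greedy iterative selection described above can be sketched as follows; this particular sketch uses least-squares residual reduction on synthetic candidate terms as the distortion measure, which is only one possible choice of criterion.

```python
import numpy as np

def greedy_select(B, e, max_terms=6):
    """Greedily pick columns of the candidate-term matrix B (time samples x
    candidate terms) that most reduce the least-squares residual against e."""
    chosen, residual, x = [], e.copy(), None
    for _ in range(max_terms):
        # score each unused candidate by how much of the residual it explains
        scores = [np.abs(np.vdot(B[:, j], residual)) ** 2 / np.vdot(B[:, j], B[:, j]).real
                  if j not in chosen else -np.inf for j in range(B.shape[1])]
        chosen.append(int(np.argmax(scores)))
        A = B[:, chosen]
        x, *_ = np.linalg.lstsq(A, e, rcond=None)     # refit retained terms
        residual = e - A @ x
    return chosen, x

rng = np.random.default_rng(0)
B = rng.standard_normal((400, 20)) + 1j * rng.standard_normal((400, 20))  # candidate terms
e = B[:, [2, 7, 11]] @ np.array([0.5, -0.2j, 0.1]) + 0.01 * rng.standard_normal(400)
chosen, x = greedy_select(B, e, max_terms=3)
```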
Referring to
Referring to
The function 420 is then effectively defined by the weighted sum of these kernels as:
ϕ(r)=Σl xl bl(r)
where the xl are the values at the endpoints of the linear segments.
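A sketch of such a kernel representation follows, assuming simple triangular ("hat") kernels on a uniform grid so that the weights xl coincide with the endpoint values of the piecewise linear function; other piecewise-linear kernel shapes are equally possible.

```python
import numpy as np

def hat_kernels(r, num_segments):
    """Evaluate triangular ("hat") kernels b_l(r) on a uniform grid over [0, 1].

    Returns an array of shape (len(r), num_segments + 1); with these kernels
    phi(r) = sum_l x_l * b_l(r) is piecewise linear and x_l is the value of
    phi at grid point l.
    """
    grid = np.linspace(0.0, 1.0, num_segments + 1)
    width = 1.0 / num_segments
    return np.maximum(0.0, 1.0 - np.abs(r[:, None] - grid[None, :]) / width)

r = np.linspace(0.0, 1.0, 200)
x = np.array([0, -0.01 + 0.005j, -0.03 + 0.01j, -0.02 + 0.02j, 0.01j], dtype=complex)  # toy weights
phi_of_r = hat_kernels(r, num_segments=len(x) - 1) @ x    # piecewise-linear phi(r)
```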
Referring to
Referring to
Referring to
It should also be understood that the approximation shown in
Referring to
This summation is implemented in the modulation stage 340 shown in
Note that the parameterization of the predistorter 130 (see
One efficient approach to implementing the lookup table stage 330 is to restrict each of the functions ϕk,j( ) to have a piecewise constant or piecewise linear form. Because the argument to each of these functions is one of the components of r, which is restricted to the range [0,1], the range can be divided into 2^s sections, for example, 2^s equal-sized sections with boundaries at i·2^−s for i∈{0, 1, . . . , 2^s}. In the case of a piecewise constant function, the function can be represented in a table with 2^s complex values, such that evaluating the function for a particular value of rj involves retrieving one of the values. In the case of piecewise linear functions, a table with 1+2^s values can represent the function, such that evaluating the function for a particular value of rj involves retrieving two values from the table for the boundaries of the section that rj falls within, and appropriately linearly interpolating the retrieved values.
Referring to
The lookup table approach can be applied to a piecewise linear function, as illustrated in
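A sketch of the piecewise linear lookup follows; the table contents are toy values, and only the 2^s-section indexing and two-point interpolation described above are intended to be representative.

```python
import numpy as np

def lut_eval(table, r, s):
    """Piecewise-linear lookup: `table` holds 2**s + 1 complex endpoint values on a
    uniform grid over [0, 1]; r is a real argument (array) in [0, 1]."""
    scaled = np.clip(r, 0.0, 1.0) * (2 ** s)
    idx = np.minimum(scaled.astype(int), 2 ** s - 1)   # section index
    frac = scaled - idx                                # position within the section
    return (1.0 - frac) * table[idx] + frac * table[idx + 1]

s = 5                                                  # 2**5 = 32 sections, 33 endpoints
table = np.linspace(0, -0.05 + 0.02j, 2 ** s + 1)      # toy phi values at the endpoints
g = lut_eval(table, np.array([0.0, 0.31, 0.99, 1.0]), s)
```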
As introduced above, the particular constructions used to assemble the complex signals wk and real signals rk may be based on trial-and-error, analytical prediction of the impact of various terms, heuristics, and/or a search or combinatorial optimization to select the subset for a particular situation (e.g., for a particular power amplifier, transmission band, etc.). One possible optimization approach may make use of greedy selection of productions to add to a set of wk and rk signals according to their impact on an overall distortion measure.
Very generally, the parameters x of the predistorter 130 (see
Therefore, the adaptation section 160 essentially determines the parameters used to compute the distortion term as δ[t]=Δ(u[t−τ], . . . , u[t−1]) in the case that τ delayed values of the input u are used. More generally, τd delayed values of the input and τf look-ahead values of the input are used. This range of inputs is defined for notational convenience as qu[t]=(u[t−τd], . . . , u[t+τf]). (Note that with the optional use of the terms e[t], these values are also included in the qu[t] term.) This term is parameterized by values of a set of complex parameters x, and therefore the function of the predistorter can be expressed as
v[t]=C(qu[t])=u[t]+Δ(qu[t])
One or more approaches to determining the values of the parameters x that define the function Δ( ) are discussed below.
The distortion term can be viewed as being a summation
δ[t]=Δ(qu[t])=Σb αb Bb(qu[t])
where the αb are complex scalars, and Bb( ) can be considered to be basis functions evaluated with the argument qu[t]. The quality of the distortion term generally relies on there being sufficient diversity in the basis functions to capture the non-linear effects that may be observed. However, unlike some conventional approaches in which the basis functions are fixed, and the terms αb are estimated directly, or possibly are represented as functions of relatively simple arguments such as |u[t]|, in approaches described below, the equivalents of the basis functions Bb( ) are themselves parameterized and estimated based on training data. Furthermore, the structure of this parameterization provides both a great deal of diversity that permits capturing a wide variety of non-linear effects, and efficient runtime and estimation approaches using the structure.
As discussed above, the complex input u[t] is processed to produce a set of complex signals wk[t] using operations such as complex conjugation and multiplication of delayed versions of u[t] or other wk[t]. These complex signals are then processed to form a set of phase-invariant real signals rp[t] using operations such as magnitude, real, or imaginary parts of various wk[t], or arithmetic combinations of other rp[t] signals. In some examples, these real values are in the range [0,1.0] or [−1.0,1.0], or in some other predetermined bounded range. The result is that the real signals have a great deal of diversity and depend on a history of u[t], at least by virtue of at least some of the wk[t] depending on multiple delays of u[t]. Note that computation of the wk[t] and rp[t] can be performed efficiently. Furthermore, various procedures may be used to retain only the most important of these terms for any particular use case, thereby further increasing efficiency.
Before turning to the parameter estimation approach, recall that the distortion term can be represented as
δ[t]=Σk wak[t−dk] Φk(r[t])
where r[t] represents the entire set of the rp[t] real quantities, and Φ( ) is a parameterized complex function. For efficiency of computation, this non-linear function is separated into terms that each depend on a single real value as
Φk(r)=Σp ϕk,p(rp)
For parameter estimation purposes, each of the non-linear functions ϕ( ) may be considered to be made up of a weighted sum of the fixed kernels bl(r), as discussed above.
Introducing the kernel form of the non-linear functions into the definition of the distortion term yields
δ[t]=Σk wak[t−dk] Σp Σl xk,p,l bl(rp[t])
In this form, representing the triple (k, p, l) as b, the distortion term can be expressed as
δ[t]=Σb xb Bb[t]
where
Bb[t]≙Bb(qu[t])=wak[t−dk] bl(rp[t])
It should be recognized that for each time t, the complex value Bb[t] depends on the fixed parameters z and the input u over a range of times, but does not depend on the adaptation parameters x. Therefore the complex values Bb[t] for all the combinations b=(k, p, l) can be used in place of the input in the adaptation procedure.
An optional approach extends the form of the distortion term to introduce linear dependence on a set of parameter values, p1[t], . . . , pd[t], which may, for example be obtained by monitoring temperature, power level, modulation center frequency, etc. In some cases, the envelope signal e[t] may be introduced as a parameter. Generally, the approach is to augment the set of non-linear functions according to a set of environmental parameters p1[t], . . . , pd[t] so that essentially each function
ϕk,p(r)
is replaced with d linear multiples to form d+1 functions
ϕk,p(r),ϕk,p(r)p1[t], . . . ,ϕk,p(r)pd[t]
This essentially forms the set of basis functions
Bb(qu[t])≙wak[t−dk] bl(rp[t]) pd[t]
where b represents the tuple (k, p, l, d) and p0=1.
What should be evident is that this form achieves a high degree of diversity in the functions Bb( ) without incurring the runtime computational cost that may be associated with conventional techniques that have a comparably diverse set of basis functions. Determination of the parameter values xb generally can be implemented in one of two ways: direct and indirect estimation. In direct estimation, the goal is to adjust the parameters x according to the minimization:
minx Σt |v[t]−y[t]−Δ(qu[t])|^2
where the minimization varies the function Δ(qu[t]) while the terms qu[t], v[t], and y[t] are fixed and known. In indirect estimation, the goal is to determine the parameters x according to the minimization
minx Σt |v[t]−y[t]−Δ(qy[t])|^2
where qy[t] is defined in the same manner as qu[t], except using y rather than u. Solutions to both the direct and indirect approaches are similar, and the indirect approach is described in detail below.
Adding a regularization term, an objective function for minimization in the indirect adaptation case may be expressed as
Σt |e[t]−Σb xb Bb(qy[t])|^2+ρ Σb |xb|^2
where e[t]=v[t]−y[t]. This can be expressed in vector/matrix form as
Σt |e[t]−a[t]x|^2+ρ x′x
where
a[t]=[B1(qy[t]),B2(qy[t]), . . . ,Bn(qy[t])].
Using this form, the following matrices can be computed:
G=Σt a[t]′a[t]
and
L=Σt a[t]′e[t]
From these, one approach to updating the parameters x is by a solution
x←(ρIn+G)^−1 L
where In denotes an n×n identity.
In some examples, the Gramian, G, and related terms above, are accumulated over a sampling interval T, and then the matrix inverse is computed. In some examples, the terms are updated in a continual decaying average using a “memory Gramian” approach. In some such examples, rather than computing the inverse at each step, a coordinate descent procedure is used in which at each iteration, only one of the components of x is updated, thereby avoiding the need to perform a full matrix inverse, which may not be computationally feasible in some applications.
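A block least-squares sketch of this update follows, accumulating the Gramian G and the vector L over a sampling interval and solving x←(ρIn+G)^−1 L; the synthetic basis samples stand in for the Bb(qy[t]) values and are for illustration only.

```python
import numpy as np

def solve_block(B, e, rho):
    """Accumulate G = sum_t a[t]' a[t] and L = sum_t a[t]' e[t] over a block of
    samples and solve x = (rho*I + G)^{-1} L.

    B has one row a[t] = [B_1(q_y[t]), ..., B_n(q_y[t])] per time sample and
    e[t] = v[t] - y[t].
    """
    G = B.conj().T @ B                     # n x n Gramian
    L = B.conj().T @ e                     # length-n vector
    n = G.shape[0]
    return np.linalg.solve(rho * np.eye(n) + G, L)

rng = np.random.default_rng(1)
B = rng.standard_normal((1000, 8)) + 1j * rng.standard_normal((1000, 8))   # synthetic basis samples
x_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)
e = B @ x_true + 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
x = solve_block(B, e, rho=1e-3)
```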
As an alternative to the solution above, a stochastic gradient approach may be used implementing:
x←x−ζ(a[t]′(a[t]x−e[t])+ρx)
where ζ is a step size that is selected adaptively. For example, one or more samples of the signals are selected at fixed or random intervals to update the parameters, or a buffer of past pairs (qy[t],v[t]) is maintained, for example, by periodic updating, and random samples from the buffer are selected to update the parameter values using the gradient update equation above.
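The stochastic gradient update can be sketched as follows on synthetic data; the step size, regularization weight, and sampling scheme are illustrative choices rather than values prescribed by this document.

```python
import numpy as np

def sgd_step(x, a, e, zeta, rho):
    """One update x <- x - zeta * (a' (a x - e) + rho * x) for a single sample,
    where a is the row of basis values at time t and e = v[t] - y[t]."""
    return x - zeta * (a.conj() * (a @ x - e) + rho * x)

rng = np.random.default_rng(2)
n = 8
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.zeros(n, dtype=complex)
for _ in range(5000):                          # samples drawn e.g. from a buffer of past pairs
    a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    e = a @ x_true
    x = sgd_step(x, a, e, zeta=0.01, rho=1e-4)
```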
Yet other adaptation procedures that may be used in conjunction with the approaches presented in this document are described in co-pending U.S. application Ser. No. 16/004,594, titled “Linearization System,” filed on Jun. 11, 2018, which is incorporated herein by reference.
Returning to the selection of the particular terms to be used for a device to be linearized, which are represented in the fixed parameters z (that is, the selection of the particular wk terms to generate, then the particular rp to generate from the wk, and then the particular subset of rp to use to weight each of the wk in the sum yielding the distortion term), a systematic methodology may be used. One such methodology is performed when a new device (a “device under test”, DUT) is evaluated for linearization. For this evaluation, recorded data sequences (u[.], y[.]) and/or (v[.], y[.]) are collected. A predistorter structure that includes a large number of terms, possibly an exhaustive set of terms within a constraint on delays, number of wk and rp terms, etc., is constructed. The least mean squared (LMS) criterion discussed above is used to determine the values of the exhaustive set of parameters x. Then, a variable selection procedure is used and this set of parameters is reduced, essentially, by omitting terms that have relatively little impact on the distortion term δ[.]. One way to make this selection uses the LASSO (least absolute shrinkage and selection operator) technique, which is a regression analysis method that performs both variable selection and regularization, to determine which terms to retain for use in the runtime system. In some implementations, the runtime system is configured with the parameter values x determined at this stage. Note that it should be understood that there are some uses of the techniques described above that omit the adapter completely (i.e., the adapter is a non-essential part of the system), and the parameters are set once (e.g., at manufacturing time) and not adapted during operation, or may be updated from time to time using an offline parameter estimation procedure.
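An L1-based trimming of candidate terms can be sketched as below; this uses a generic proximal-gradient (ISTA) solver with complex soft-thresholding rather than any particular LASSO implementation, and the candidate matrix, regularization weight, and retention threshold are synthetic illustrations.

```python
import numpy as np

def complex_lasso(B, e, lam, iters=500):
    """Minimal proximal-gradient (ISTA) sketch of L1-regularized fitting of
    e ~ B x with complex data; small |x_b| identify terms that can be dropped."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(B.shape[1], dtype=complex)
    for _ in range(iters):
        grad = B.conj().T @ (B @ x - e)
        z = x - step * grad
        mag = np.abs(z)
        # complex soft-thresholding: shrink magnitudes, keep phases
        x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0.0)
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((500, 30)) + 1j * rng.standard_normal((500, 30))   # candidate terms
x_true = np.zeros(30, dtype=complex)
x_true[[1, 4, 9]] = [0.5, -0.3j, 0.2 + 0.1j]
e = B @ x_true
x = complex_lasso(B, e, lam=25.0)
kept = np.flatnonzero(np.abs(x) > 1e-3)            # retained terms
```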
An example of applying the techniques described above starts with the general description of the distortion term
The complex signals derived from the input, and the real signals derived from the complex signals, are chosen so that the full distortion term has the following full form:
This form creates a total of 198 (=121+22+55) terms. In an experimental example, this set of terms is reduced from 198 terms to 6 terms using a LASSO procedure. These remaining 6 terms result in the distortion term having the form:
This form is computationally efficient because only 6 complex signals wk and 6 real signals rp must be computed at each time step. If each non-linear transformation is represented by 32 linear segments, then the lookup tables have a total of 6 times 33, or 198, complex values. If each non-linear function is represented by 32 piecewise segments defined by 6 kernels, then there are only 36 complex parameter values that need to be adapted (i.e., 6 scale factors for the kernels of each non-linear function, and 6 such non-linear functions).
The techniques described above may be applied in a wide range of radio-frequency communication systems. For example, the approach illustrated in
A summary of a typical use case of the approaches described above is as follows. First, initial data sequences (u[.], y[.]) and/or (v[.], y[.]), as well as corresponding sequences e[.] and p[.] in implementations that make use of these optional inputs, are obtained for a new type of device, for example, for a new cellular base station or a smartphone handset. Using this data, a set of complex signals wk and real signals rp are selected for the runtime system, for example, based on an ad hoc selection approach, or an optimization such as using the LASSO approach. In this selection stage, computational constraints for the runtime system are taken into account so that the computational limitations are not exceeded and/or performance requirements are met. Such computational requirements may be expressed, for example, in terms of computational operations per second, storage requirements, and/or for hardware implementations in terms of circuit area or power requirements. Note that there may be separate limits on the computational constraints for the predistorter 130, which operates on every input value, and on the adapter, which may operate only from time to time to update the parameters of the system. Having determined the terms to be used in the runtime system, a specification of that system is produced. In some implementations, that specification includes code that will execute on a processor, for example, an embedded processor for the system. In some implementations, the specification includes a design structure that specifies a hardware implementation of the predistorter and/or the adapter. For example, the design structure may include configuration data for a field-programmable gate array (FPGA), or may include a hardware description language specification of an application-specific integrated circuit (ASIC). In such hardware implementations, the hardware device includes input and output ports for the inputs and outputs shown in
In some implementations, a computer accessible non-transitory storage medium includes instructions for causing a digital processor to execute instructions implementing procedures described above. The digital processor may be a general-purpose processor, a special purpose processor, such as an embedded processor or a controller, and may be a processor core integrated in a hardware device that implements at least some of the functions in dedicated circuitry (e.g., with dedicated arithmetic units, storage registers, etc.). In some implementations, a computer accessible non-transitory storage medium includes a database representative of a system including some or all of the components of the linearization system. Generally speaking, a computer accessible storage medium may include any non-transitory storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical disks and semiconductor memories. Generally, the database representative of the system may be a database or other data structure which can be read by a program and used, directly or indirectly, to fabricate the hardware comprising the system. For example, the database may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high-level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool which may synthesize the description to produce a netlist comprising a list of gates from a synthesis library. The netlist comprises a set of gates that also represent the functionality of the hardware comprising the system. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. In other examples, the database may itself be the netlist (with or without the synthesis library) or the data set.
It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the appended claims. Reference signs, including drawing reference numerals and/or algebraic symbols, in parentheses in the claims should not be seen as limiting the extent of the matter protected by the claims; their sole function is to make claims easier to understand by providing a connection between the features mentioned in the claims and one or more embodiments disclosed in the Description and Drawings. Other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 62/670,315, filed on May 11, 2018, and U.S. Provisional Application No. 62/747,994, filed Oct. 19, 2018, each of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4979126 | Pao et al. | Dec 1990 | A |
5819165 | Hulkko et al. | Oct 1998 | A |
5980457 | Averkiou | Nov 1999 | A |
6052412 | Ruether et al. | Apr 2000 | A |
6240278 | Midya et al. | May 2001 | B1 |
6985704 | Yang et al. | Jan 2006 | B2 |
7149257 | Braithwaite | Dec 2006 | B2 |
7295815 | Wright et al. | Nov 2007 | B1 |
7342976 | McCallister | Mar 2008 | B2 |
7418056 | Suzuki et al. | Aug 2008 | B2 |
7469491 | McCallister et al. | Dec 2008 | B2 |
7529652 | Gahinet et al. | May 2009 | B1 |
7539464 | Suzuki et al. | May 2009 | B2 |
7561857 | Singerl et al. | Jul 2009 | B2 |
7577211 | Braithwaite | Aug 2009 | B2 |
7599431 | Anderson et al. | Oct 2009 | B1 |
7729668 | Leffel | Jun 2010 | B2 |
7904033 | Wright et al. | Mar 2011 | B1 |
7953378 | Saed | May 2011 | B2 |
8014263 | Tao et al. | Sep 2011 | B2 |
8030997 | Brown et al. | Oct 2011 | B2 |
8064850 | Yang et al. | Nov 2011 | B2 |
8185066 | Camp, Jr. et al. | May 2012 | B2 |
8203386 | Van Der Heijden et al. | Jun 2012 | B2 |
8290086 | Bose et al. | Oct 2012 | B2 |
8334723 | Hongo | Dec 2012 | B2 |
8368466 | Bai | Feb 2013 | B2 |
8390376 | Bai | Mar 2013 | B2 |
8412132 | Tang et al. | Apr 2013 | B2 |
8446979 | Yee | May 2013 | B1 |
8451053 | Perreault et al. | May 2013 | B2 |
8467747 | Kim et al. | Jun 2013 | B2 |
8489047 | McCallister et al. | Jul 2013 | B2 |
8519789 | Hawkes | Aug 2013 | B2 |
8537041 | Chandrasekaran et al. | Sep 2013 | B2 |
8576941 | Bai et al. | Nov 2013 | B2 |
8659353 | Dawson et al. | Feb 2014 | B2 |
8686791 | Lozhkin | Apr 2014 | B2 |
8711976 | Chandrasekaran | Apr 2014 | B2 |
8718582 | See et al. | May 2014 | B2 |
8731005 | Schlee | May 2014 | B2 |
8731105 | Bai | May 2014 | B2 |
8737938 | Rashev et al. | May 2014 | B2 |
8766717 | Sorrells et al. | Jul 2014 | B2 |
8787494 | Bai | Jul 2014 | B2 |
8817859 | Ghannouchi et al. | Aug 2014 | B2 |
8976896 | McCallister et al. | Mar 2015 | B2 |
8989307 | Zhou et al. | Mar 2015 | B2 |
9026064 | Wang et al. | May 2015 | B2 |
9130628 | Mittal et al. | Sep 2015 | B1 |
9184710 | Braithwaite | Nov 2015 | B2 |
9214969 | Hammi | Dec 2015 | B2 |
9215120 | Rexberg et al. | Dec 2015 | B2 |
9226189 | Kularatna et al. | Dec 2015 | B1 |
9252821 | Shor et al. | Feb 2016 | B2 |
9337782 | Mauer et al. | May 2016 | B1 |
9374196 | Yang et al. | Jun 2016 | B2 |
9450621 | Xiong et al. | Sep 2016 | B2 |
9590567 | Zhao et al. | Mar 2017 | B2 |
9590668 | Kim et al. | Mar 2017 | B1 |
9628119 | Gal et al. | Apr 2017 | B2 |
9660593 | Abdelrahman et al. | May 2017 | B2 |
9680434 | Yan et al. | Jun 2017 | B2 |
9893748 | Ye et al. | Feb 2018 | B2 |
9923595 | Molina et al. | Mar 2018 | B2 |
9973370 | Langer et al. | May 2018 | B1 |
10080178 | Stapleton et al. | Sep 2018 | B2 |
10141896 | Huang | Nov 2018 | B2 |
10148230 | Xu et al. | Dec 2018 | B2 |
10153793 | Hausmair et al. | Dec 2018 | B2 |
10224970 | Pratt | Mar 2019 | B2 |
20020080891 | Ahn et al. | Jun 2002 | A1 |
20030184374 | Huang et al. | Oct 2003 | A1 |
20040076247 | Barak et al. | Apr 2004 | A1 |
20040116083 | Suzuki et al. | Jun 2004 | A1 |
20040142667 | Lochhead | Jul 2004 | A1 |
20060022751 | Fuller et al. | Feb 2006 | A1 |
20060154622 | Piirainen | Jul 2006 | A1 |
20060229036 | Muller et al. | Oct 2006 | A1 |
20070091992 | Dowling | Apr 2007 | A1 |
20070230557 | Balasubramonian et al. | Oct 2007 | A1 |
20080019453 | Zhao et al. | Jan 2008 | A1 |
20080039045 | Filipovic et al. | Feb 2008 | A1 |
20080101502 | Navidpour et al. | May 2008 | A1 |
20080247487 | Cai et al. | Oct 2008 | A1 |
20080285640 | McCallister et al. | Nov 2008 | A1 |
20100026354 | Utsunomiya et al. | Feb 2010 | A1 |
20110098011 | Camp, Jr. et al. | Apr 2011 | A1 |
20110128992 | Maeda et al. | Jun 2011 | A1 |
20110150130 | Kenington | Jun 2011 | A1 |
20110255627 | Gotman et al. | Oct 2011 | A1 |
20110273236 | Heijden et al. | Nov 2011 | A1 |
20120093210 | Schmidt et al. | Apr 2012 | A1 |
20120219048 | Camuffo et al. | Aug 2012 | A1 |
20130044791 | Rimini et al. | Feb 2013 | A1 |
20140161159 | Black et al. | Jun 2014 | A1 |
20140161207 | Teterwak | Jun 2014 | A1 |
20140177695 | Cha et al. | Jun 2014 | A1 |
20140274105 | Wang | Sep 2014 | A1 |
20140347126 | Laporte et al. | Nov 2014 | A1 |
20150043323 | Choi et al. | Feb 2015 | A1 |
20150049841 | Laporte et al. | Feb 2015 | A1 |
20150061761 | Wills et al. | Mar 2015 | A1 |
20150103952 | Wang et al. | Apr 2015 | A1 |
20150123735 | Wimpenny | May 2015 | A1 |
20150171768 | Perreault | Jun 2015 | A1 |
20150325913 | Vagman | Nov 2015 | A1 |
20150333781 | Alon et al. | Nov 2015 | A1 |
20160028433 | Ding et al. | Jan 2016 | A1 |
20160087604 | Kim et al. | Mar 2016 | A1 |
20160094253 | Weber et al. | Mar 2016 | A1 |
20160095110 | Li et al. | Mar 2016 | A1 |
20160100180 | Oh | Apr 2016 | A1 |
20160112222 | Pashay-Kojouri et al. | Apr 2016 | A1 |
20160191020 | Velazquez | Jun 2016 | A1 |
20160241277 | Rexberg et al. | Aug 2016 | A1 |
20160249300 | Tsai et al. | Aug 2016 | A1 |
20160285485 | Fehri et al. | Sep 2016 | A1 |
20160373072 | Magesacher et al. | Dec 2016 | A1 |
20170033969 | Yang et al. | Feb 2017 | A1 |
20170041124 | Khandani | Feb 2017 | A1 |
20170077981 | Tobisu et al. | Mar 2017 | A1 |
20170176507 | O'Keeffe et al. | Jun 2017 | A1 |
20170244582 | Gal et al. | Aug 2017 | A1 |
20180097530 | Yang et al. | Apr 2018 | A1 |
20180337700 | Huang et al. | Nov 2018 | A1 |
20190238204 | Kim et al. | Aug 2019 | A1 |
20190260401 | Megretski et al. | Aug 2019 | A1 |
20190260402 | Chuang et al. | Aug 2019 | A1 |
Number | Date | Country |
---|---|---|
1560329 | Aug 2005 | EP |
20120154430 | Nov 2012 | WO |
WO2018156932 | Aug 2018 | WO |
WO2018227093 | Dec 2018 | WO |
WO2018227111 | Dec 2018 | WO |
WO2019014422 | Jan 2019 | WO |
WO2019070573 | Apr 2019 | WO |
WO2019094713 | May 2019 | WO |
WO2019094720 | May 2019 | WO |
Entry |
---|
Aguirre, et al., “On the Interpretation and Practice of Dynamical Differences Between Hammerstein and Wiener Models”, IEEE Proceedings on Control Theory Appl; vol. 152, No. 4, Jul. 2005, pp. 349-356. |
Barradas, et al. “Polynomials and LUTs in PA Behavioral Modeling: A Fair Theoretical Comparison”, IEEE Transactions on Microwave Theory and Techniques; vol. 62, No. 12, Dec. 2014, pp. 3274-3285. |
Bosch et al. “Measurement and Simulation of Memory Effects in Predistortion Linearizers, ” IEEE Transactions on Mircrowave Theory and Techniques; vol. 37. No. 12; Dec. 1989, pp. 1885-1890. |
Braithwaite, et al. “Closed-Loop Digital Predistortion (DPD) Using an Observation Path with Limited Bandwidth” IEEE Transactions on Microwave Theory and Techniques; vol. 63, No. 2; Feb. 2015, pp. 726-736. |
Cavers, “Amplifier Linearization Using a Digital Predistorter with Fast Adaption and Low Memory Requirements;” IEEE Transactions on Vehicular Technology; vol. 39; No. 4; Nov. 1990, pp. 374-382. |
D'Andrea et al., “Nonlinear Predistortion of OFDM Signals over Frequency-Selective Fading Channels,” IEEE Transactions on Communications; vol. 49; No. 5, May 2001; pp. 837-843. |
Guan, et al. “Optimized Low-Complexity Implementation of Least Squares Based Model Extraction of Digital Predistortion of RF Power Amplifiers”, IEEE Transactions on Microwave Theory and Techniques; vol. 60, No. 3, Mar. 2012; pp. 594-603. |
Henrie, et al., “Cancellation of Passive Intermodulation Distortion in Microwave Networks”, Proceedings of the 38th European Microwave Conference, Oct. 2008, Amsterdam, The Netherlands, pp. 1153-1156. |
Hong et al., “Weighted Polynomial Digital Predistortion for Low Memory Effect Doherty Power Amplifier,” IEEE Transactions on Microwave Theory and Techniques; vol. 55; No. 5, May 2007, pp. 925-931. |
Kwan, et al., “Concurrent Multi-Band Envelope Modulated Power Amplifier Linearized Using Extended Phase-Aligned DPD”, IEEE Transactions on Microwave Theory and Techniques; vol. 62, No. 12, Dec. 2014, pp. 3298-3308. |
Lajoinie, et al. “Efficient Simulation of NPR for the Optimum Design of Satellite Transponders SSPAs”, EEE MTT-S International; vol. 2; Jun. 1998; pp. 741-744. |
Li et al. “High-Throughput Signal Component Separator for Asymmetric Multi-Level Outphasing Power Amplifiers”, IEEE Journal of Solid-State Circuits; vol. 48; No. 2; Feb. 2013; pp. 369-380. |
Liang, et al. “A Quadratic-Interpolated LUT-Based Digital Predistortion Techniques for Cellular Power Amplifiers”, IEEE Transactions on Circuits and Systems; II: Express Briefs, vol. 61, No. 3, Mar. 2014; pp. 133-137. |
Liu, et al. “Digital Predistortion for Concurrent Dual-Band Transmitters Using 2-D Modified Memory Polynomials”, IEEE Transactions on Microwave Theory and Techniques, vol. 61, No. 1, Jan. 2013, pp. 281-290. |
Molina, et al. “Digital Predistortion Using Lookup Tables with Linear Interpolation and Extrapolation: Direct Least Squares Coefficient Adaptation”, IEEE Transactions on Microwave Theory and Techniques, vol. 65, No. 3, Mar. 2017; pp. 980-987. |
Morgan, et al. “A Generalized Memory Polynomial Model for Digital Predistortion of RF Power Amplifiers”, IEEE Transactions of Signal Processing; vol. 54; No. 10; Oct. 2006; pp. 3852-3860. |
Naraharisetti, et a., “2D Cubic Spline Implementation for Concurrent Dual-Band System”, IEEE, 2013, pp. 1-4. |
Naraharisetti, et al. “Efficient Least-Squares 2-D-Cubic Spline for Concurrent Dual-Band Systems”, IEEE Transactions on Microwave Theory and Techniques, vol. 63; No. 7, Jul. 2015; pp. 2199-2210. |
Osamu Muta et al., “Adaptive predistortion linearization based on orthogonal polynomial expansion for nonlinear power amplifiers in OFDM systems”, Communications and Signal Processing (ICCP), International Conference On, IEEE, pp. 512-516, 2011. |
Panigada, et al. “A 130 mW 100 MS/s Pipelined ADC with 69 SNDR Enabled by Digital Harmonic Distortion Correction”, IEEE Journal of Solid-State Circuits; vol. 44; No. 12; Dec. 2009, pp. 3314-3328. |
Peng, et al. “Digital Predistortion for Power Amplifier Based on Sparse Bayesian Learning”, IEEE Transactions on Circuits and Systems, II: Express Briefs; 2015, pp. 1-5. |
Quindroit et al. “FPGA Implementation of Orthogonal 2D Digital Predistortion System for Concurrent Dual-Band Power Amplifiers Based on Time-Division Multiplexing”, IEEE Transactions on Microwave Theory and Techniques; vol. 61; No. 12, Dec. 2013, pp. 4591-4599. |
Rawat, et al. “Adaptive Digital Predistortion of Wireless Power Amplifiers/Transmitters Using Dynamic Real-Valued Focused Time-Delay Line Neural Networks”, IEEE Transactions on Microwave Theory and Techniques; vol. 58, No. 1; Jan. 2010; pp. 95-104. |
Safari, et al. “Spline-Based Model for Digital Predistortion of Wide-Band Signals for High Power Amplifier Linearization”, IEEE; 2007, pp. 1441-1444. |
Sevic, et al. “A Novel Envelope-Termination Load-Pull Method of ACPR Optimization of RF/Microwave Power Amplifiers”, IEEE MTT-S International; vol. 2, Jun. 1998; pp. 723-726. |
Tai, “Efficient Watt-Level Power Amplifiers in Deeply Scaled CMOS”, Ph.D. Dissertation; Carnegie Mellon University; May 2011; 129 pages. |
Tehrani, et al. “Modeling of Long Term Memory Effects in RF Power Amplifiers with Dynamic Parameters”, IEEE; 2012, pp. 1-3. |
Yu, et al. “A Generalized Model Based on Canonical Piecewise Linear Functions for Digital Predistortion”, Proceedings of the Asia-Pacific Microwave Conference; 2016 |
Yu, et al. “Band-Limited Volterra Series-Based Digital Predistortion for Wideband RF Power Amplifiers”, IEEE Transactions of Microwave Theory and Techniques; vol. 60; No. 12; Dec. 2012, pp. 4198-4208. |
Yu, et al. “Digital Predistortion Using Adaptive Basis Functions”, IEEE Transations on Circuits and Systems—I. Regular Papers; vol. 60, No. 12; Dec. 2013, pp. 3317-3327. |
Zhang et al. “Linearity Performance of Outphasing Power Amplifier Systems,” Design of Linear Outphasing Power Amplifiers; Google e-book; 2003; Retrieved on Jun. 13, 2014; Retrieved from internet http:www.artechhouse.com/uploads/public/documents/chapters/Zhang-Larson. |
Zhu et al. “Digital Predistortion for Envelope-Tracking Power Amplifiers Using Decomposed Piecewise Volterra Sereis,” IEEE Transactions on Microwave Theory and Techniques; vol. 56; No. 10; Oct. 2008; pp. 2237-2247. |
Guan, Lei, and Anding Zhu. “Green communications: Digital predistortion for wideband RF power amplifiers.” IEEE Microwave Magazine 15, No. 7 (2014): 84-99. |
Zhu, Anding, Jos C. Pedro, and Thomas J. Brazil. “Dynamic deviation reduction-based Volterra behavioral modeling of RF power amplifiers.” IEEE Transactions on microwave theory and techniques 54, No. 12 (2006): 4323-4332. |
Zhu, Anding, Paul J. Draxler, Jonmei J. Yan, Thomas J. Brazil, Donald F. Kimball, and Peter M. Asbeck. “Open-loop digital predistorter for RF power amplifiers using dynamic deviation reduction-based Volterra series.” IEEE Transactions on Microwave Theory and Techniques 56, No. 7 (2008): 1524-1534. |
Cao, Haiying, Hossein Mashad Nemati, Ali Soltani Tehrani, Thomas Eriksson, and Christian Fager. “Digital predistortion for high efficiency power amplifier architectures using a dual-input modeling approach.” IEEE Transactions on Microwave Theory and Techniques 60, No. 2 (2012): 361-369. |
Naraharisetti, Naveen. “Linearization of Concurrent Dual-Band Power Amplifier Using Digital Predistortion.” PhD diss., The Ohio State University, 2014. |
Tehrani, Ali Soltani. Behavioral modeling of radio frequency transmitters. Chalmers University of Technology, 2009. |
PCT International Search Report, PCT/US2019/031714, dated Aug. 13, 2019, 15 pgs. |
Number | Date | Country | |
---|---|---|---|
20190348956 A1 | Nov 2019 | US |
Number | Date | Country | |
---|---|---|---|
62670315 | May 2018 | US | |
62747994 | Oct 2018 | US |