A portion of the disclosure of this patent document contains material which is subject to copyright protection. This patent document may show and/or describe matter which is or may become trade dress of the owner. The copyright and trade dress owner has no objection to the facsimile reproduction by anyone of the patent disclosure as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright and trade dress rights whatsoever.
This disclosure relates to spectral analysis of electronic circuits.
Conventional electronic circuit spectral analysis calculates an exterior node admittance spectrum one frequency at a time. The circuit admittance is the sum of admittances of component parts connected to interior and exterior nodes. Interior node elimination (or Kron reduction) yields the exterior node admittance, from which the circuit behavior may be predicted. This process is repeated by a loop over frequency points to create an admittance spectrum. Central processing unit (“CPU”) time scales linearly in the number of frequency points.
A fast method and system are described for spectral analysis of electronic circuits and equivalent linear time invariant systems. Symbolic algebra can be used to transform equations into a form that is more efficient for computation or that provides greater insight. An algebraic node transformation of the admittance can improve the scaling of CPU time with the number of frequency points. The nodal analysis of the admittance is transformed by a process of algebraic node expansion and interior node reduction. The transformed form for the exterior node admittance is analogous to a Green's, or impulse response, function. This simplifies the frequency dependence of the admittance, which enables orders of magnitude faster computation of spectral properties such as the scattering matrix and pole zero analysis.
The algebraic node transformation method is applicable to lumped component circuits assembled from resistors, inductors and capacitors. It is also applicable to many other linear time invariant systems which have equivalent circuit analogues. It may be adapted to distributed electronic components such as calculated by electromagnetic simulation.
The design and characterization of circuits such as filters, duplexers, and multiplexers can involve processes of synthesis, optimization and survey over extremely large numbers of possible circuit designs. Spectral analysis over thousands of frequency points and pole zero analysis can be rate limiting steps for this design process. The methods described herein accelerate the circuit design process by orders of magnitude, which can enable discovery of new types of circuits.
The method introduces algebraic node analysis, and matrix transformations comprised, in part, of successive nodal expansions and interior node reductions, to isolate the frequency dependence from the remaining computation of the spectral response of a circuit. The transformation from interior to algebraic node representations of the admittance enables most of the calculation to be done one time only for all frequencies, thereby accelerating calculation for many frequency points.
According to this method, a frequency dependent external node admittance matrix is transformed into the Green's function form. The time Fourier transform propagates as the exponential of time multiplied by a complex symmetric stable Hurwitz matrix. Frequency independent matrices required to form the Green's function are calculated before a final vectorized calculation of the frequency spectrum. Efficient linear algebra methods developed for Green's function evaluation in many disciplines, such as physics and chemistry, may be adapted to the spectral analysis of circuits. The central processing unit (“CPU”) time scales sublinearly in the number of frequency points. It approaches linear scaling at large numbers of frequency points, but with a much smaller coefficient than the conventional method. Detailed comparisons depend on the particular circuit and the method for Green's function evaluation.
For many applications, the method is orders of magnitude faster than the conventional spectral analysis method, especially if large numbers of frequency points are desired. Examples drawn from surface acoustic wave device design demonstrate approximately two orders of magnitude speed increase for spectral analysis and pole zero analysis.
The algebraic node transformation method also provides a new characterization of the dynamical behavior of circuits that is complementary to other established methods. Features of analyticity, causality, positivity and sum rules may transcend particular applications. They can be used as powerful constraints on circuit data analysis and modeling.
At step 101, a parts list for a circuit is received. The admittance matrix of a circuit is assembled from the admittance matrices of its parts by connecting parts together at nodes.
At step 102, an admittance matrix is determined for each part at one frequency using a library of frequency-dependent functions. This method includes a library of functions to calculate an admittance matrix for each type of part. Simple parts such as resistors, inductors and capacitors have only exterior nodes. Parts that are subcircuits have interior nodes as well.
At step 103, a circuit admittance matrix can be determined at one frequency by attaching exterior nodes of parts to circuit nodes.
At step 104, the circuit interior nodes are then eliminated (or Kron reduced) to determine a circuit exterior node admittance matrix at one frequency, where the submatrices are functions of frequency.
At step 105, the calculations of step 102, step 103, and step 104 are looped over an indicated frequency spectrum, such that a circuit exterior node admittance is determined at all frequencies of interest at step 106.
At step 107, the circuit exterior node admittance is transformed to a circuit scattering matrix at all frequencies of interest. The CPU time for the conventional algorithm scales linearly in the number of frequency points.
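For illustration, a minimal sketch of this conventional loop (in Python with NumPy; the tee circuit, node numbering, and component values are illustrative examples, not taken from the figures):

```python
import numpy as np

# Illustrative tee circuit: port 0 -- L1 -- interior node 1 -- L2 -- port 2,
# with a shunt capacitor C at the interior node.  Values are arbitrary examples.
L1, L2, C = 8e-9, 8e-9, 1e-12          # henries, farads
Z0 = 50.0                               # reference impedance, ohms
ext, intr = [0, 2], 1

def circuit_admittance(w):
    """Assemble the 3-node circuit admittance at one angular frequency w."""
    Y = np.zeros((3, 3), dtype=complex)
    for i, j, y in [(0, 1, 1 / (1j * w * L1)), (1, 2, 1 / (1j * w * L2))]:
        Y[i, i] += y; Y[j, j] += y; Y[i, j] -= y; Y[j, i] -= y
    Y[1, 1] += 1j * w * C               # shunt capacitor at the interior node
    return Y

freqs = 2 * np.pi * np.linspace(0.1e9, 2.0e9, 2001)
S = np.empty((len(freqs), 2, 2), dtype=complex)
I2 = np.eye(2)
for k, w in enumerate(freqs):           # CPU time grows linearly with len(freqs)
    Y = circuit_admittance(w)
    # Kron-reduce the single interior node, then convert Y to S (equal Z0 ports).
    Yr = Y[np.ix_(ext, ext)] - np.outer(Y[ext, intr], Y[intr, ext]) / Y[intr, intr]
    S[k] = (I2 - Z0 * Yr) @ np.linalg.inv(I2 + Z0 * Yr)
```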
In the algebraic node transformation method shown in
At step 201, a parts list for a circuit is received.
At step 202, admittance matrices with algebraic nodes are determined for each part in the parts list using a library of frequency-independent functions. Here, an expanded admittance matrix is calculated for each type of part, which includes exterior and interior circuit nodes, as well as algebraic nodes. The diagonal elements of the admittance for algebraic nodes have a linear frequency dependence. All other admittance matrix elements are frequency independent.
At step 203, the expanded circuit admittance with exterior, interior and algebraic nodes is assembled by connecting exterior nodes of parts to exterior and interior circuit nodes. The expanded admittance for the complete circuit retains all the algebraic nodes of the simple parts and the subcircuits that comprise the whole. The only frequency dependence is linear in the diagonal elements of algebraic nodes. All other elements of the expanded circuit admittance are frequency independent.
At step 204, the expanded circuit admittance is transformed into a reduced circuit admittance matrix for only exterior and algebraic nodes. The interior circuit nodes are eliminated by Kron reduction. The reduced matrix is the sum of a frequency-dependent diagonal matrix and a frequency-independent complex symmetric matrix. The algebraic node submatrix of the frequency independent matrix satisfies the stability properties of a Hurwitz matrix. Eigenvalues are either positive real or complex conjugate pairs with positive real parts. They describe the internal dynamics of the system. External coupling is provided by the matrix elements between algebraic and exterior nodes.
At step 205, this admittance is further transformed by reduction of the algebraic nodes. The resulting circuit admittance matrix for exterior circuit nodes has a Green's Function form with a spectral dependence on the Hurwitz matrix. This form is similar mathematically to impulse or linear response functions in other disciplines.
At step 206, the Green's Function is evaluated to determine a circuit exterior node admittance at all frequencies of interest. The submatrices in this expression are independent of frequency, so they only need to be calculated once. For modest numbers of frequency points, the CPU time required to evaluate the submatrices for all frequency points in the fast circuit spectral analysis method is comparable to the time required for evaluation of submatrices for a single frequency point in the conventional method. A variety of existing numerical methods developed in other disciplines may be adapted to evaluate Green's functions.
At step 207, the circuit exterior node admittance is transformed to a circuit scattering matrix at all frequencies of interest.
In nodal analysis, a circuit is represented by a set of nodes with electrical components connected between them. Each circuit node is associated with a current I and voltage V. A set of nodes is associated with vectors of currents I and voltages V. By definition, the admittance matrix Y is the relation between currents and voltages, I=YV. The circuit obeys Kirchhoff's Current Law. Currents for ‘interior’ nodes are zero. Currents for ‘exterior’ nodes may be non-zero. Nodal analysis using exterior and interior nodes is prior art.
Algebraic nodes may be introduced by considering the following problem. Assume there are fundamental units of resistance R0 and capacitance C0. And assume there are elemental shunt components with these values shown in
The admittances for unit components are as follows.
For a unit shunt capacitor shown in
Y1C=jωCu.
For a unit shunt resistor shown in
For a unit J inverter in
Shunt means the component is connected between an exterior node and ground. J inverters are represented by a double line between two exterior nodes.
The choice of scales is arbitrary. The unit of resistance may be an ohm Ω, or it may be the transmission line standard of 50Ω. The unit of capacitance may be a picofarad. Ru=1 and Cu=1 in the selected units. If required, the admittance units can be reintroduced at the end of a calculation using dimensional analysis.
There are two interior nodes.
Node elimination, or Kron reduction, is important to all that follows. To clarify, suppose there are two types of nodes, a and b. In block submatrix form
To eliminate all the b nodes, invoke Kirchhoff's Current Law, which sets Ib=0, and solve. The result is
$$Y_{aa}^{r} = Y_{aa} - Y_{ab}\,Y_{bb}^{-1}\,Y_{ba}.$$
Here, the superscript r denotes reduced.
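A small numerical check of this identity (synthetic symmetric matrices, not a particular circuit), confirming that the reduced matrix reproduces the exterior currents when the b-node currents are forced to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
na, nb = 2, 3
Y = rng.normal(size=(na + nb, na + nb)) + 1j * rng.normal(size=(na + nb, na + nb))
Y = Y + Y.T                                   # symmetric, like an admittance matrix

Yaa, Yab = Y[:na, :na], Y[:na, na:]
Yba, Ybb = Y[na:, :na], Y[na:, na:]
Yr = Yaa - Yab @ np.linalg.solve(Ybb, Yba)    # Kron reduction of the b nodes

# Kirchhoff's Current Law: with Ib = 0, Vb = -Ybb^-1 Yba Va, and the a-node
# currents Ia = Yaa Va + Yab Vb must equal Yr Va.
Va = rng.normal(size=na) + 1j * rng.normal(size=na)
Vb = -np.linalg.solve(Ybb, Yba @ Va)
assert np.allclose(Yaa @ Va + Yab @ Vb, Yr @ Va)
```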
For the shunt resistor example, the first node is exterior and the second two are interior. Elimination of the third node yields
There is an impedance R on the second node. Reduce that to yield the familiar one node admittance, Y1R=1/R.
The two and three node admittances are obtained from the one node admittance by ‘expansion’. The reverse process is ‘reduction’. All three forms produce the same electrical behavior, so they are ‘equivalent circuits’.
The unit shunt capacitor is the only frequency ω dependent elemental component. Use of it requires complex variable analysis. An equivalent circuit is needed to control mathematical singularities in the complex ω plane, especially how to close contour integrals. The solution shown in
The infinitesimal defines a limit, as in
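the Sokhotski–Plemelj identity, stated here with the −jε convention adopted below:

$$\lim_{\varepsilon \to 0^{+}} \frac{1}{x - j\varepsilon} \;=\; P\,\frac{1}{x} \;+\; j\pi\,\delta(x).$$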
Here, δ(x) is a Dirac delta function, an infinitely sharp positive function with an integral equal to one. P indicates take the Cauchy principal value in integrals. It relates real and imaginary parts of quantities to be calculated. In ω's Fourier conjugate time variable t, it guarantees Fourier transforms are causal, which means admittances for passive circuits decay with increasing time. Lossless components need this limiting procedure. For lossy components, resistors may replace ε to define complex ω plane contour integrals.
In the following, a unit shunt capacitor is implicitly in parallel with an infinitesimal conductance ε. Equivalently, ω implicitly has an infinitesimal imaginary part −jε. It is made explicit only where needed for emphasis.
An equivalent circuit for a shunt inductor can be assembled from these components as shown
Elimination, or Kron reduction, of the algebraic node results in the usual one exterior node admittance for an inductor, Y1L=1/jωL. That proves the expanded two node circuit is an equivalent circuit for an inductor.
The two node admittance puts the frequency ω and the inductor L variables on separate matrix elements. As shall be demonstrated, this separation enables linear algebra transformations that are advantageous for calculating frequency sweeps, or the spectral response, of a circuit. For that reason, the new interior node of the shunt capacitor is labeled an ‘algebraic’ node.
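A brief numerical illustration (assuming a specific two-node form: a J inverter of value 1/√L coupling the exterior node to an algebraic node carrying the unit capacitor, consistent with the reduction result stated above):

```python
import numpy as np

# Assumed two-node expansion of a shunt inductor L: exterior node 0 coupled by
# a J inverter of value 1/sqrt(L) to algebraic node 1 carrying a unit capacitor.
L = 3.3e-9
w = 2 * np.pi * 1.0e9
J = 1.0 / np.sqrt(L)
Y2L = np.array([[0.0, 1j * J],
                [1j * J, 1j * w]])     # only the algebraic diagonal depends on w

# Kron reduction of the algebraic node recovers the familiar inductor admittance.
Yr = Y2L[0, 0] - Y2L[0, 1] * Y2L[1, 0] / Y2L[1, 1]
assert np.isclose(Yr, 1.0 / (1j * w * L))
```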
An equivalent circuit for a shunt capacitor can be assembled as shown in
The first node is an exterior node, the second node is interior, and the third node is algebraic. Elimination of the algebraic and interior nodes yields the familiar one exterior node admittance Y1C=jωC. The two J inverters have enabled a scale transformation of the unit capacitor.
An equivalent circuit for a series component can be assembled as shown in
The first two nodes are exterior, and the second two are interior. Elimination of the interior nodes results in the standard two exterior node admittance for a series resistor,
For the applications considered in this disclosure, the interior nodes may be eliminated. However, there can be zeros on the diagonal elements for the interior nodes, which may result in division by zero during node elimination. This problem may be regularized by adding an infinitesimal shunt resistance or conductance at the node to be eliminated.
For example, elimination of the interior node for the shunt capacitor Y3C yields
This is a two node expanded admittance, with one exterior node and one algebraic node.
Two equivalent ways of representing the one node admittance are
The purpose of infinitesimals is to track how to take a limit in complex variable analysis. So the rules for infinitesimal math differ from normal math, in the same sense that rules for infinity math differ. That is, ε multiplied by something is ε times the sign of something, ε+ε=ε, ε×ε=ε, and so on. In the first equation, ω is implicitly ω−jε. The two equations are equivalent in the sense of infinitesimal math.
The fully expanded admittance for a series inductor has five nodes, two exterior, two interior and one algebraic. To regularize, add an infinitesimal shunt conductance at the shunt inductor node, and eliminate interior nodes. The reduced admittance has two exterior nodes and one algebraic node,
The ε elements are negligible for numerical purposes and the ε=0 limit may be taken.
The use of tiny resistors and conductances as regularization in complex variable analysis differs from their use to regularize numerical calculations. A computer has limited precision. Regularization by adding tiny resistances or conductances is helpful to dampen the effects of random machine errors. The scale for regularization corresponds to the square root of machine precision, approximately 10⁻⁸ Ω for typical computers. Such numerical regularization is not taken to a zero limit. Too small a regularization can cause numerical calculations to become unstable, resulting in, e.g., division by zero errors.
Equivalent circuits for series inductors and capacitors with internal nodes eliminated are illustrated in
As shown above, equivalent circuits for any RLC circuit can be assembled from the simple components in
After reduction of all interior nodes, all RLC circuits have an admittance expanded with algebraic nodes in the block matrix form
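Consistent with the unit-component expansions above, this block form may be written as

$$Y(\omega) \;=\; \begin{bmatrix} Y_{ee} & Y_{ea} \\ Y_{ea}^{T} & (j\omega + \varepsilon)\,1 + Y_{aa} \end{bmatrix}.$$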
A subscript ‘e’ denotes an exterior node, and a subscript ‘a’ denotes an algebraic node. Here 1 denotes a unit diagonal matrix in the space of algebraic nodes. All of the ω dependence is made explicit by the expansion to algebraic nodes. The number of such nodes equals the total number of capacitors and inductors in the circuit.
Expansion of the admittance to algebraic nodes is a great simplification of the frequency dependence of RLC circuits. All of the block matrices need to be calculated only once for all frequencies. As shall be shown, this dramatically accelerates the spectral analysis of circuits.
To compute the admittance spectrum for exterior nodes, reduce all algebraic nodes
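Applying the Kron reduction formula above to the algebraic nodes of this block form gives

$$Y_{ee}^{r}(\omega) \;=\; Y_{ee} \;-\; Y_{ea}\bigl[(j\omega + \varepsilon)\,1 + Y_{aa}\bigr]^{-1} Y_{ea}^{T}.$$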
Unless a matrix has an explicit (ω) dependence, it is frequency independent. Such matrices need to be calculated only once for a complete spectral analysis. Here the 1 is a diagonal unit matrix in the space of algebraic nodes.
Define the Laplace transform variable s≡jω. In order for the RLC circuit to be stable, the characteristic polynomial,
$$P(s) \equiv \det\bigl[(s+\varepsilon)\,1 + Y_{aa}\bigr],$$
must satisfy the Hurwitz criterion. It must have real coefficients of powers of s. The roots are in the left hand complex s-plane. The ε ensures this condition is met even in the lossless circuit limit. Roots are either negative real, or they come in complex conjugate pairs with negative real parts. The roots are dynamical modes of the system.
To address spectral properties, it is better to work with H≡jYaa. The Fourier transform of the admittance evolves with time by multiplying by e^{jHt}. H is a square symmetric matrix. It is real for lossless circuits and complex for lossy circuits. A square complex matrix H may be diagonalized with left and right eigenvector matrices W′ and V, respectively
HV=VE; W′H=EW′.
E is a diagonal matrix of eigenvalues corresponding to resonant frequencies of the circuit. V is a matrix of eigenvectors, which are linear superpositions of the algebraic nodes.
Symmetric means that the non-conjugate transpose satisfies H^T=H. A diagonal matrix satisfies E^T=E. Then a sequence of substitutions leads to

$$V^{T}H^{T} = E^{T}V^{T}, \qquad V^{T}H = E\,V^{T}, \qquad W' = V^{T}.$$
Multiply HV=VE on the left by V^T, and multiply the second equation above on the right by V. The result is

$$V^{T}V\,E = V^{T}H\,V = E\,V^{T}V.$$
Inasmuch as this shows K≡V^TV commutes with a non-unit diagonal matrix, it must also be diagonal. It follows that
In the limit of lossless circuits, H is real symmetric, K→1, eigenvectors are orthonormal, and eigenvalues are real.
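A short numerical check of these statements for a synthetic complex symmetric matrix (NumPy; the matrix is a random stand-in, not a particular circuit):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = A + A.T                           # complex symmetric: H.T == H, not Hermitian

E, V = np.linalg.eig(H)               # right eigenvectors, H V = V diag(E)
K = V.T @ V                           # non-conjugate inner products

# For distinct eigenvalues, the rows of V.T are left eigenvectors (W' = V^T),
# and K is diagonal but, for a lossy (complex) H, not the unit matrix.
assert np.allclose(V.T @ H, np.diag(E) @ V.T)
assert np.allclose(K, np.diag(np.diag(K)))
```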
The Hurwitz stability property requires the real parts of eigenvalues to come in ± pairs, Ek=±ωk+jΓk, or the eigenvalues to be pure imaginary, Ek=jΓk. This symmetry helps ensure the admittance obeys Y(−ω)=Y(ω)*; that is, the admittance Fourier transforms to a real causal convolution function of time.
The inverse matrix has a representation in terms of eigenvalues and eigenvectors,
The real part is positive definite and satisfies a frequency integral sum rule,
Here 1 is a diagonal unit matrix in the space of algebraic nodes.
Real and imaginary parts are not independent. They are related by Cauchy principal value integrals, as in
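a dispersion relation that, assuming all poles of G lie in the upper half ω plane as described next, takes the form

$$\operatorname{Re} G(\omega) \;=\; -\frac{1}{\pi}\, P\!\int_{-\infty}^{\infty} \frac{\operatorname{Im} G(\omega')}{\omega' - \omega}\, d\omega'.$$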
Here P indicates to take the principal value integral. These relations are a consequence of analyticity in the lower half ω plane. That is, all eigenvalues, and therefore poles of G, are in the upper half plane, corresponding to decaying modes in the Fourier transform time variable.
Then the exterior node spectral admittance is,
$$Y_{ee}^{r}(\omega) = Y_{ee} - Y_{ea}\,G(\omega)\,Y_{ea}^{T}.$$
It is the sum of a real frequency independent term and a projection of G(ω) from algebraic nodes to exterior nodes using a non-conjugate transpose inner product.
Connecting an MBVD model as a subcircuit in a filter circuit uses only the exterior (e) circuit nodes. Calculating the frequency dependence of the MBVD circuit uses only the algebraic nodes. The interior circuit nodes 603, 604, and 605 and the algebraic nodes 607 and 609 of the two capacitors C and C0, respectively, may be reduced. The Kron reduction procedure is to set the currents for these nodes to zero and to solve for the relation of currents and voltages of the remaining nodes. The calculation proceeds by eliminating nodes in an order such that the node being eliminated does not have a zero on the diagonal. The result is a five-node reduced admittance matrix for the MBVD that has only two exterior circuit nodes and three frequency-dependent algebraic nodes, represented as:
The reduced admittance of the MBVD has the characteristic form of the sum of frequency times a diagonal unit matrix and a frequency independent complex symmetric matrix.
The H is j times the algebraic node submatrix of the frequency-independent part of the reduced admittance. The eigenvalues of H are
The first two eigenvalues are the resonant frequencies of the motional resonator damped by the resistor. The third eigenvalue is the exponential decay rate for the static capacitance R0, C0 part of the circuit. The right eigenvectors for an MBVD can be expressed in terms of loss angle θ:
The eigennodes are linear superpositions of the original algebraic nodes. For a lossless MBVD, the angle θ is zero, and the eigennodes are orthonormal. For lossy circuits, the eigennodes in general are not orthonormal.
The MBVD is only one example of the nodal analysis of the algebraic node transformation method. An infinite number of other circuits comprised of R, L, and C parts may be analyzed using algebraic node expansion and Kron reduction. They will show similar characteristics.
At step 901, the Hurwitz matrix H, which as discussed above is j times the algebraic node submatrix of admittance, is diagonalized using efficient linear algebra techniques. Most of the CPU time is spent in the diagonalization. The matrix dimension N is the sum of the number of inductors and the number of capacitors. The CPU time for diagonalizing non-sparse matrices scales as N³. The memory scales as N². Diagonalization is fast for small circuits. The exterior node admittance at one frequency is calculated as a sum over eigennodes of the Hurwitz matrix.
At step 902, the calculation over many frequency points is vectorized, i.e. a style of computer programming where operations are applied to whole arrays instead of individual elements. Multiple frequencies are calculated almost as quickly as a single frequency. A sweep of, for example, 1000 frequency points using the fast circuit spectral analysis method requires CPU time comparable to calculating a single frequency point in the conventional method.
Here, the Hurwitz matrix is diagonalized once independent of the number of frequency points. The sweep over frequencies may be vectorized and requires negligible CPU time.
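A minimal sketch of steps 901 and 902 (Python with NumPy; the frequency-independent blocks are synthetic stand-ins for a reduced circuit, and all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ext, n_alg, n_freq = 2, 6, 2001

# Synthetic frequency-independent blocks standing in for a reduced circuit.
Yee = rng.normal(size=(n_ext, n_ext)); Yee = Yee + Yee.T
Yea = rng.normal(size=(n_ext, n_alg))
A = rng.normal(size=(n_alg, n_alg)) + 1j * rng.normal(size=(n_alg, n_alg))
Yaa = A + A.T                                   # complex symmetric algebraic block

omega = np.linspace(0.5, 5.0, n_freq)

# Step 901: one-time diagonalization of the Hurwitz matrix H = j*Yaa.
H = 1j * Yaa
E, V = np.linalg.eig(H)                         # H V = V diag(E)
K = np.diag(V.T @ V)                            # diagonal for distinct eigenvalues

# Step 902: vectorized sweep.  The algebraic block jw*1 + Yaa equals j*(w*1 - H),
# so its inverse is -j * V diag(1/(K*(w - E))) V^T, projected to exterior nodes.
P = Yea @ V                                     # computed once for all frequencies
denom = (omega[:, None] - E[None, :]) * K[None, :]
Yr = Yee[None, :, :] + 1j * np.einsum('ik,fk,jk->fij', P, 1.0 / denom, P)

# Cross-check one frequency against a direct solve of the algebraic block.
w0 = omega[100]
Yr0 = Yee - Yea @ np.linalg.solve(1j * w0 * np.eye(n_alg) + Yaa, Yea.T)
assert np.allclose(Yr[100], Yr0)
```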
At step 207, the method continues, as in
For large circuits, the CPU time and memory required to diagonalize the Hurwitz matrix may be too large.
At step 1001, Green's Function is evaluated by linear algebra one frequency at a time over a frequency spectrum of interest. Linear algebra methods, such as LU decomposition, are used to calculate the effect of (ω−H̃)^{-1}. The linear algebra is done one frequency point at a time, rather than array processing (or vectorizing) the frequency dependence. This will be more efficient than the conventional circuit spectral analysis method, because the work of calculating the required submatrices (such as the Hurwitz matrix and the exterior to algebraic node admittance) need only be done once regardless of the number of frequency points. However, this frequency loop is not easily vectorized, and the CPU time will scale linearly in the number of frequency points.
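A sketch of this per-frequency evaluation (NumPy; the function name and arguments are illustrative), in which the frequency-independent blocks are computed once and only the linear solve repeats:

```python
import numpy as np

def spectrum_by_solve(Yee, Yea, Yaa, omegas):
    """Reduced exterior admittance, one LU-backed linear solve per frequency.
    Yee, Yea, Yaa are the frequency-independent blocks, computed once."""
    n_alg = Yaa.shape[0]
    I = np.eye(n_alg)
    out = np.empty((len(omegas),) + Yee.shape, dtype=complex)
    for k, w in enumerate(omegas):                    # CPU time ~ number of points
        X = np.linalg.solve(1j * w * I + Yaa, Yea.T)  # algebraic-block solve
        out[k] = Yee - Yea @ X
    return out
```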
At step 1101, the Green's Function is expanded in Chebyshev Polynomial Moments. A Chebyshev moment expansion is analogous to a Fourier expansion.
At step 1102, a finite number of Chebyshev Polynomial Moments of Green's Functions are created by Hurwitz matrix on vector multiplication. This provides a truncated moment expansion. The frequency resolution improves with increasing the number of moments.
At step 1103, the truncated Chebyshev Polynomial Moment Expansion of the Green's Function is evaluated by Fast Fourier Transform. The ‘Kernel’ in the name Kernel Polynomial Method (KPM) comes from the use of apodization (or reweighting) of the moments in the expansion to minimize the Gibbs phenomenon.
The details of the KPM are as follows. It is a method developed in quantum physics and chemistry to analyze spectra by evaluating Green's functions. Here KPM is adapted to fast circuit spectral analysis. For large sparse Hurwitz matrices it scales linearly in N for both CPU requirements and memory requirements. The analogue of the Hurwitz matrix for Green's function evaluation in quantum systems is a Hamiltonian matrix, which is Hermitian. KPM is routinely applied to physical systems whose Hamiltonian matrices have dimension of more than a billion.
KPM uses expansions of the Green's function in a polynomial series. If the eigenspectrum of the Hurwitz matrix is bounded above and below, all eigenvalues may be scaled to the range between −1 and +1. Let a hat (^) symbolize scaling of variables to the range of support of the polynomial. Then, there is an operator identity for the Dirac delta function,
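which, in a standard form for variables scaled to (−1, 1), reads

$$\delta(\hat{x} - \hat{H}) \;=\; \frac{1}{\pi\sqrt{1-\hat{x}^{2}}}\left[\,T_{0}(\hat{H}) \;+\; 2\sum_{n=1}^{\infty} T_{n}(\hat{x})\,T_{n}(\hat{H})\right].$$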
Here, the Tn are Chebyshev polynomials of the first kind. This identity may be applied to generate a truncated Chebyshev polynomial expansion of the imaginary part of the Green's function (or real part of the admittance)
The real part can be calculated from the imaginary part by the analyticity relations. The gn are apodization weights used to minimize Gibbs oscillations in truncation of a series. The Chebyshev moments μn are calculated by the Hurwitz matrix on vector multiplies using the Chebyshev polynomial recursion, which is a numerically stable operation. If the multiplies can be reduced to rules, the storage requirement may be only three vectors having the dimension of the Hurwitz matrix. The number of moments needed scales linearly with the inverse frequency resolution desired. Since Chebyshev polynomial expansions are truncated Fourier series, evaluation of spectra may be accomplished by fast Fourier transforms.
If the eigenvalue spectrum is unbounded, then other choices of polynomial with infinite or semi-infinite support may be preferred, such as Hermite, Laguerre and Legendre. KPM can be applied with minimal modification to the real symmetric Hurwitz matrices appropriate for large lossless circuits. The KPM needs further development to be applicable to lossy circuits.
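A compact sketch of such a KPM evaluation for the real symmetric (lossless) case, following the standard Jackson-kernel recipe (the scaling of the matrix into (−1, 1) is assumed to have been done beforehand, and a fast cosine transform could replace the direct evaluation shown here):

```python
import numpy as np

def kpm_spectral_density(H, r, n_moments=256, n_omega=1024):
    """KPM estimate of sum_k |r . v_k|^2 * delta(w - E_k) for real symmetric H
    whose eigenvalues lie in (-1, 1); r is a real coupling vector."""
    # Chebyshev moments mu_n = r^T T_n(H) r via the three-term recursion.
    t_prev, t_curr = r.copy(), H @ r
    mu = np.empty(n_moments)
    mu[0], mu[1] = r @ t_prev, r @ t_curr
    for n in range(2, n_moments):
        t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
        mu[n] = r @ t_curr
    # Jackson apodization weights damp the Gibbs oscillations of the truncation.
    N = n_moments
    n = np.arange(N)
    g = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
         + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    # Evaluate the truncated expansion on Chebyshev nodes w = cos(theta).
    w = np.cos(np.pi * (np.arange(n_omega) + 0.5) / n_omega)
    T = np.cos(np.outer(np.arccos(w), n))                 # T_n(w)
    density = (g[0] * mu[0] + 2.0 * (T[:, 1:] @ (g[1:] * mu[1:]))) / (np.pi * np.sqrt(1.0 - w**2))
    return w, density
```

With the −jε convention above, the imaginary part of the corresponding Green's function matrix element r^T G(ω) r is π times this density; the real part then follows from the analyticity relations.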
In addition to the three (diagonalization, linear algebra, kernel polynomial) methods detailed above, there are other Green's function evaluation methods widely used in physics that may be adapted to circuit spectral analysis. A Hamiltonian matrix for quantum physics plays a similar time evolution role as the Hurwitz matrix for circuits. For example, a time series of the admittance can be generated by equating time evolution as multiplication by the exponential of the time multiplied by the Hamiltonian, as discussed above for a causal Green's function. This time series may be analyzed by Fourier methods or by Filter Diagonalization methods to calculate the exterior circuit admittance spectrum. The most appropriate numerical method will depend on the particular circuit being analyzed and the software implementation of the method.
The MBVD part values are taken from fits to finite element simulations of the electroacoustic properties of LiTaO3 SAW resonators, which in turn have been validated by experiment. The Y matrix is sparse. Graph 1200 shows three terms of the S matrix (in dB) of the reconfigurable SAW filter circuit vs. frequency (in MHz) calculated by the algebraic node transformation method. S(1,2) is the transfer function from port 1 to port 2 of the reconfigurable SAW filter circuit. S(1,1) is the reflection at port 1 and S(2,2) is the reflection at port 2 of the reconfigurable SAW filter circuit.
‘Analysis’ starts with a circuit and predicts its response. ‘Synthesis’ is the inverse process of starting with a desired response and finding circuits that can reproduce it. For example, the response function may be a scattering matrix. For each channel, the response F(s) is specified as a rational function of frequency s=jω, a ratio of a finite order numerator polynomial P(s) and an equal or higher order denominator polynomial Q(s):
In order for the electrical response of the circuit to be stable, the numerator and denominator polynomials must both be ‘Hurwitz’. This is a set of constraints on the type of polynomials. Monic polynomials may be completely specified by their roots, so the Hurwitz conditions may be restated in terms of the roots. Roots of numerator polynomials are called ‘zeros’. Roots of the denominator polynomial are ‘poles’.
Characterization of a circuit response by its poles and zeros is termed ‘pole zero analysis’. This is an important tool for network synthesis and characterization. Ideal components in a ladder topology circuit may be synthesized by matching poles and zeros to a continued fraction expansion of the driving point admittance. In principle, for ideal components equivalent circuit transformations enable infinite variations of circuits to be found that meet pole zero objectives.
In practice, real components are lossy and deviate from the ideal model. Realizing the pole zero objectives may be difficult. Real circuits need to be characterized. Real component parameters may need to be tuned, searched and optimized to recover pole zero objectives while maintaining realizability. The poles and zeros may also need to be optimized to minimize circuit complexity and loss with available components.
Efficient methods for extracting poles and zeros are needed for synthesis and characterization. The poles and zeros are distributed in the complex ω-plane. They are found by extending scattering matrix calculations to complex frequencies. Minima are zeros of numerator polynomials and maxima are poles of denominator polynomials. For the surface acoustic wave device examples considered here, the search covers a two dimensional space of real frequencies over a several hundred MHz range and imaginary frequencies in the ±100 MHz range. Typical searches at high resolution require S matrix evaluations at up to a million complex frequencies.
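As a toy illustration of such a grid search (the rational function below stands in for an S-matrix element evaluated at complex frequencies, with poles and zeros chosen in advance so the result can be checked):

```python
import numpy as np

# Stand-in response with known zeros and poles (illustrative values only).
zeros = np.array([1.93j, -1.93j])
poles = np.array([-0.2037 + 1.0641j, -0.2037 - 1.0641j])

def F(s):
    s = np.asarray(s, dtype=complex)[..., None]
    return np.prod(s - zeros, axis=-1) / np.prod(s - poles, axis=-1)

# Grid of complex frequencies s = sigma + j*omega.
sigma = np.linspace(-1.0, 1.0, 401)
omega = np.linspace(-3.0, 3.0, 1201)
S = sigma[None, :] + 1j * omega[:, None]
mag = np.abs(F(S))

# Minima of |F| locate zeros; maxima locate poles.  Only the global extrema are
# reported here; a full search would collect and refine all local extrema.
i_zero = np.unravel_index(np.argmin(mag), mag.shape)
i_pole = np.unravel_index(np.argmax(mag), mag.shape)
print("zero near", S[i_zero], "  pole near", S[i_pole])
```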
For example,
Pole zero analysis can be done either using the conventional spectral analysis method depicted in
For large numbers of frequency points, these examples demonstrate an orders of magnitude improvement in speed for the algebraic node transformation method compared to the conventional spectral analysis method. The details of the performance comparison between methods will depend on the specific circuit, the specific software and hardware implementation, and the specific needs of the application.
Applications of the algebraic node transformation method are not limited to the spectral analysis of circuits, and indeed they are not limited to circuits. They are generally applicable to linear time invariant (LTI) systems, which are important in such diverse areas as control systems, seismology, mechanical filters, acoustics, etc.
The method expands nodal analysis to ‘algebraic nodes’ which separate the frequency variable from component variables. Algebraic nodes are created by assembling standard system components from more elemental unit components. Then interior nodes connecting standard components are reduced. The result is a transformation of nodal analysis from interior nodes to algebraic nodes, which is better suited for fast computation over many frequency points.
The method is most straightforwardly applied to any system that may be modeled as an equivalent electrical circuit. Such analogies are well developed for mechanical, magnetic, acoustic, optical, and hydraulic networks. They work where the network can be assembled by connecting discrete and linear components, analogous to lumped elements. Particularly useful is the ability of network analysis to synthesize a network to meet a prescribed frequency response, such as a filter. When multiple types of components are assembled into a network, electrical analogies enable the entire system to be represented in a unified, coherent way. Equivalent circuits are useful to model transducers, sensors and actuators that cross between energy types. Electrical analogies also exist in mathematics, for example, the solution of inhomogeneous differential equations.
The analogies are developed by finding relationships between variables in one domain that have a mathematical form identical to variables in the electrical domain. For example, two analogies are widely used for mechanical systems: the impedance analogy and the mobility analogy. The impedance analogy makes force and voltage analogous while the mobility analogy makes force and current analogous. A second variable must be chosen to make pairs of power conjugate variables analogous. When multiplied together they have units of power. In the impedance analogy this results in force and velocity being analogous to voltage and current respectively. The ratio of power conjugate variables, such as force/velocity, is analogous to electrical impedance. The mobility analogy preserves the topology of networks instead of impedances. A mechanical network diagram can have the same topology as its analogous electrical network diagram.
The node expansion and reduction process uses concepts that have analogies in other disciplines. An electrical engineer may recognize that G has features characteristic of an impulse response function. In other disciplines, labels for similar formulas are causal Green's function, transfer function, propagator, linear response, and resolvent operator. A system with interior degrees of freedom is responding linearly to an exterior interaction. In the RLC circuit example, exterior interaction is represented by Yea with the frequency variable ω replaced by algebraic nodes ‘a’.
The Cauchy principal value integral relation between real and imaginary parts also appears in a variety of disciplines using different names. The labels include dispersion relations, the Kramers Kronig relations, the Hilbert transform, and the Sokhotski-Plemelj theorem. These analyticity properties are a consequence of requirements that Fourier transforms to time be real and the time evolution be causal. Sum rules, analyticity and positive definite properties have proven to be helpful constraints in the analysis of incomplete experimental data. They may hold even if the data come from a poorly characterized system, and thus they are useful for statistical inference.
Node expansion and reduction is not limited to spectral analysis and the frequency variable ω. For example, a different ordering of nodal expansions and reductions may be applied to efficiently analyze loss in an RLC circuit. Expand from elementary circuit components as before. Then reduce all internal nodes except the shunt resistor nodes, labeled ‘g’. The resulting block matrix form of the admittance is
Here G is the diagonal conductance matrix for shunt resistors. All submatrices are ω dependent but not loss dependent. They are purely imaginary and reactive. They can be calculated once as lossless circuits regardless of the loss distribution. Such calculations may use the very efficient spectral analysis methods with algebraic nodes introduced here. The final step is reduction of shunt resistor nodes to yield the exterior admittance with loss. Such node reordering may accelerate calculation of loss effects in circuit design.
The node expansion and reduction method may be applied to many other combinations of circuit nodes. The appropriate choice of node reduction order depends on the circuit features of interest. The method is equally applicable to impedance analysis of circuits. There should also be analogues for mesh analysis, which is the complement of nodal analysis.
Analogies suggest further extensions of the node expansion and reduction method. For example, spatial variation is important to distributed components such as a transmission line. There is a Fourier transform wavenumber variable k that is conjugate to space. Calculation of the wavenumber dependence of the admittance for distributed components may be accelerated by similar node expansion and reduction techniques. However, no causality properties are required for wavenumber, so the matrices will not necessarily be Hurwitz.
However, the analogies are imperfect, and therefore potentially misleading. Consider quantum mechanics. The systems are lossless. The norm in Hilbert space is a complex conjugate transpose inner product. The time evolution matrix H is called the Hamiltonian. H is Hermitian. Its eigenvalues are real. They are related to energy by Planck's constant, E=ℏω.
Circuits have profoundly unique features. Circuits are typically lossy. The norm in nodal space is a non-conjugate transpose inner product. The time evolution matrix Yaa is complex symmetric and satisfies Hurwitz stability conditions. Its eigenvalues are complex conjugate pairs or real. There is no analogue of Planck's constant for circuits.
Coupling matrix methods for microwave filter design use a mathematical form analogous to the algebraic node expansion of the admittance matrix (See, e.g., Cameron, et al, Microwave Filters for Communication Systems). The coupling matrix model captures the frequency dependence near bandpass of resonators in a lossless filter circuit. It is an important tool for network synthesis, especially in microwave engineering. The analogue to ω is a low pass frequency variable Ω ∝ ω/ω0−ω0/ω where ω0 is a bandpass center frequency. Algebraic nodes are analogous to shunt resonators. Interior nodes are analogous to non-resonant nodes.
However, coupling matrix methods apply only to resonators. They approximate lossless inductors and capacitors LC as Frequency Independent Reactances (FIRs). This incorrect frequency dependence violates complex variable analysis properties of LC components. FIR approximation can be matched exactly to LC circuits at only one frequency. Fidelity between the coupling matrix model and its LC realization degrades away from the matching frequency. FIR approximation has not been extended to systems with loss due to resistors R.
The node expansion and reduction method captures the correct frequency dependence of RLC circuits.
This patent claims priority from provisional application No. 62/339,445 filed on May 20, 2016, entitled “SPECTRAL ANALYSIS OF ELECTRONIC CIRCUITS” which is incorporated by reference in its entirety.