Not Applicable. All research contained herein was conducted independently and did not receive funding or support of any kind from any agency, organization, or individual.
Not Applicable. No agency, organization, or individual besides the author was involved in the research contained herein.
Let V be a vector space and T: V→V a linear operator. A scalar λ is an eigenvalue for T exactly when there is a vector v≠0 such that T(v)=λv. For each such λ, such a v is an eigenvector for λ, and the set of v for which T(v)=λv is a subspace of V, the λ-eigenspace. The ideal situation is that there exists a basis for V consisting exclusively of eigenvectors—in this case, the operator T behaves like a diagonal matrix, and knowledge of the eigenvalues and eigenvectors drastically simplifies analysis of virtually anything involving T (cf. Chapter 5 of [2]). It is not always true that there is such a basis, but it is true for many important classes of operators (cf. § 5.6 of [2]), and is almost certain to be true for a random matrix. Regardless, eigenvalues and eigenvectors are desired throughout the sciences.
The standard way to express the concept of “eigenvalue” in finite terms, taught in all introductory courses on Linear Algebra, is via the “Characteristic Polynomial”. For any matrix A representing T, and with I the identity matrix and x a variable, the determinant det(xI−A) is the same polynomial regardless of the choice of A; it is called the Characteristic Polynomial of T, and its roots are precisely the eigenvalues of T. Despite its conceptual value, it is comically difficult to calculate eigenvalues in this way unless T is very special or dim(V) is extremely small (cf. preface to Chapter 15 of [1]). In practice, eigenvalues are approximated to desired accuracy by using one of a suite of iterative methods, which are mostly descendants of Power Method.
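For a concrete (hypothetical) illustration in Python: the roots of det(xI−A), computed here with numpy, agree with a direct eigenvalue computation for a small example matrix.

```python
import numpy as np

# Hypothetical 3x3 symmetric matrix for illustration; eigenvalues {1, 2, 4}.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Coefficients of det(xI - A), highest degree first.
coeffs = np.poly(A)

# The roots of the Characteristic Polynomial are the eigenvalues.
roots = np.sort(np.roots(coeffs).real)
direct = np.sort(np.linalg.eigvalsh(A))
```

For matrices of this size the two computations agree; for large matrices the root-finding route is numerically hopeless, which is the point of the passage above.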
The famous Power Method is one of the oldest and simplest methods for approximating eigenvalues. Suppose that A is a square matrix which has a unique eigenvalue λ1 of largest magnitude. The iterates A(v), A^2(v), A^3(v), … of a random vector v increasingly point in the direction of an eigenvector for λ1. Comparing two sufficiently accurate consecutive iterates yields an approximation of λ1, and either iterate is an approximation of an eigenvector for λ1. For extremely large applications, it is the only practical method to approximate eigenvalues and eigenvectors, due to the relatively low amount of arithmetic that it requires (cf. § 12.3 & § 15.2 of [1]).
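As a concrete illustration (in Python; the specification's own code is in Octave), here is a minimal Power Method sketch. The 2×2 example matrix, starting vector, and tolerance are assumptions chosen for illustration; its dominant eigenvalue is (7+√5)/2.

```python
import numpy as np

def power_method(A, v, tol=1e-10, max_iter=10000):
    # Classical Power Method, normalizing with the max-norm.
    u = v / np.linalg.norm(v, np.inf)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ u
        i = int(np.argmax(np.abs(u)))   # compare iterates at a dominant entry
        lam_new = w[i] / u[i]           # approximates lambda_1
        w = w / np.linalg.norm(w, np.inf)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, w
        lam, u = lam_new, w
    return lam, u

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # eigenvalues (7 +/- sqrt(5))/2
lam1, v1 = power_method(A, np.array([1.0, 0.7]))
```

Note that the only per-iteration work is one matrix–vector product plus a few length-n operations, which is the low arithmetic cost referred to above.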
In the remainder of this specification, I describe and examine a useful improvement of Power Method: Introspective Power Method.
Despite Power Method's age and widespread use, it seems (judging from books, articles, internet, conversations) to be unknown that one can extract more information from Power Method with very little additional computational cost or code: by carefully implementing a simple idea, one can also acquire an approximation of the second-largest eigenvalue λ2. As is well-known, |λ2/λ1| is an indicator of the speed at which the iterates Ak(v) converge (in Projective Space) to an eigenvector. The origin of the improvement, which is also the reason for the term “Introspective”, is the simple observation that Power Method can learn about λ2 by analyzing the speed at which its own iteration is converging (cf. § 2 & § 3 of the Detailed Description).
A specific example of a situation in which the matrix A can be very large and both λ1, λ2 are desired comes from Network Analysis, in which the underlying object is what mathematicians usually call a graph. With any graph G are associated several important matrices, such as the Adjacency Matrix A, and Spectral Graph Theory is the study of graphs via the eigenvalues and eigenvectors of these special matrices. There are many uses for λ1 (cf. Chapter 3 of [1]), but lower eigenvalues like λ2 also appear. For example, the smallest number of cliques (subnetworks in which every pair of nodes is connected) into which a network can be subdivided is bounded by a simple expression involving λ1, λ2 (cf. no. 12 in § 3.6 of [1]). This λ2 also has significance for Chromatic Numbers, in part because of the well-known duality between colorings and clique partitions. (It is important to remember that when Graph Theorists say “largest” and “second-largest”, they really mean “rightmost” and “second-rightmost”. Luckily, it is easy to shift an adjacency matrix so that the second-rightmost is indeed the second-largest, and to undo the shift after eigenvalues are approximated.)
Although the accuracy of the approximations of λ2 provided by Introspective Power Method can vary and may be unsatisfactory in some circumstances, it is at least possible to state with fairly high confidence what is the minimum accuracy of the approximation (cf. § 6 of the Detailed Description). In any case, the cost of Introspective Power Method (cf. § 7 of the Detailed Description) is so modest and the successes so frequent that it should be used in essentially all situations in which λ2 is valuable and Power Method is already the method of choice; in the cases where it fails to satisfy, one can resort to Deflated Power Method (next) with only a little waste.
The customary way to approximate λ2 is to Deflate, which “deletes” λ1 from A, and then apply Power Method again to approximate the now-largest λ2 (cf. § 4.2 of [2]). The computational cost of this is significantly higher than what is proposed here. That said, it must be admitted that a second use of Power Method would provide an approximation of a λ2-eigenvector and it seems difficult to extend this functionality to Introspective Power Method.
Working implementations are provided (cf. § 5 of the Detailed Description) in the form of Octave functions, which can be easily converted to MATLAB if desired. These implementations were used to produce all figures and data below.
Denote by ℝ the field of Real Numbers, and by ℂ the field of Complex Numbers.
For any coordinate vector v, denote by vi its ith coordinate. For two coordinate vectors v, w of the same size, denote by v·w their Dot Product. Fix a typical norm ∥ ∥, such as the 2-norm ∥ ∥2 or the max-norm ∥ ∥∞. The choice of ∥ ∥ is deliberately omitted to allow flexibility. For any matrix A, denote by A^T its ordinary transpose.
Given vectors v, w such that w≈cv for some scalar c, denote by w//v an approximation of c. Although this seems ill-defined, there are many concrete definitions of the // operation. For example, one can define w//v to be the Rayleigh Quotient v·w/v·v. One can also define w//v to be the ratio wi/vi for fixed i, or random i, or i which maximizes |vi|, etc. The implementation of // is deliberately omitted to allow flexibility.
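A minimal Python illustration (names hypothetical) of the Rayleigh-Quotient choice, which recovers the scalar c exactly when w = cv:

```python
import numpy as np

def par(w, v):
    # One concrete realization of w // v: the Rayleigh Quotient (v.w)/(v.v).
    return np.dot(v, w) / np.dot(v, v)

v = np.array([1.0, -2.0, 4.0])
w = 3.0 * v                      # w = c v with c = 3
c = par(w, v)
```

When w is only approximately proportional to v, the various realizations of // return slightly different values; any of them is acceptable in what follows.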
Let A be a Real square matrix. Assume that A is diagonalizable over ℝ, although this is a stronger hypothesis than is truly necessary (cf. Theorem 4.1 of [1]; in fact, even the hypothesis of semisimplicity there is not logically necessary, but convergence is so slow in the non-semisimple case that the extra generality is utterly useless in practice). Let λi be the distinct eigenvalues, numbered so that |λ1|≥|λ2|≥|λi| for all other i. Assume that |λ1|>|λ2|>|λi| for all other i. Set ρ ≔ |λ2/λ1|, which is a major predictor of Power Method's speed for A.
To simplify the notation, assume that A has only three distinct eigenvalues. For any vector v, there are scalars ai and unit eigenvectors vi such that v=a1v1+a2v2+a3v3. Fix a v such that ai≠0 for all i, which is satisfied in practice by using a “random” v. Denote by uk the kth normalized iterate of v by A, i.e. uk ≔ A^k(v)/∥A^k(v)∥.
By definition of “eigenvector”,

A^k(v) = a1 λ1^k v1 + a2 λ2^k v2 + a3 λ3^k v3.
Since the v1-term of A^k(v) dominates (in a relative sense) as k increases, the first thing to note is that

uk ≈ ϵk v1

for large k, where ϵk = sign(a1 λ1^k).
DEFINITION (depth-2 aggregates). Define

δk ≔ uk ± uk−1, where ± = sign(−λ1).

Observe that, since ϵk is constant if λ1>0 and alternating if λ1<0, the v1-terms cancel and

δk ≈ c (λ2/|λ1|)^k v2  (1)

for large k, for some constant c ≠ 0.
The imprecise idea explained in the Brief Summary is made precise by observing that ∥δk∥/∥δk−1∥→ρ as k→∞ and, provided that (1) is reasonably accurate, sign(λ2) can be deduced by checking whether the signs of the entries of δk are constant or alternating.
A superior viewpoint is that

δk // δk−1 → λ2/|λ1|

as k→∞.
At the conclusion of Power Method, a good approximation of λ1 is known. Monitoring the sequence δk provides an approximation of λ2/|λ1|. Thus, λ2 is approximated.
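This convergence can be checked numerically. The following Python sketch (the specification's implementations are in Octave; the diagonal matrix here is a hypothetical example with known eigenvalues) realizes // as an entrywise ratio at the dominant index:

```python
import numpy as np

A = np.diag([5.0, -3.0, 1.0])   # lambda_1 = 5, lambda_2 = -3, so lambda_2/|lambda_1| = -0.6
v = np.array([0.9, 0.8, 0.7])

u = v / np.linalg.norm(v, np.inf)
sign_l1 = 1.0                   # sign(lambda_1); in practice read off the iterates
d_prev, est = None, None
for k in range(40):
    w = A @ u
    w = w / np.linalg.norm(w, np.inf)
    d = w - sign_l1 * u         # depth-2 aggregate delta_k
    if d_prev is not None:
        j = int(np.argmax(np.abs(d_prev)))
        est = d[j] / d_prev[j]  # delta_k // delta_{k-1}
    d_prev, u = d, w
```

After a few dozen iterations, est sits very close to λ2/|λ1| = −0.6; the alternating signs of the entries of δk also reveal that λ2 < 0.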
However, all of this is theoretical and there are real obstacles to implementation. I discuss these next.
Implementing § 2 requires two opposing demands to be satisfied simultaneously: the iteration must run long enough for the asymptotic behavior of δk to emerge, yet the candidate ratios must be harvested while δk is still large enough to be numerically meaningful. In general, there is a “sweet spot” in which the approximation of λ2/|λ1| should be extracted, and it is almost a tautology that this occurs strictly after iteration begins but strictly before the desired accuracy for λ1 is attained.
REFER TO FIGURES.
The basic idea to extract subsets like those depicted in the Figures is to successively round the candidate values and watch for a “spike” in the frequency of the mode.
The point is that very few of the values in the desired subset are literally equal, and so do not exhibit their “true” multiplicity. However, after some rounding they coalesce into a single value which becomes the mode and has much larger frequency than previous modes. All subsequent roundings will increase frequency by smaller amounts.
The preceding description is an oversimplification of what actually happens, but conveys the main idea. In reality, what exactly “spike” should mean is a bit tricky. At present, the best approximations seem to result from selecting the mode whose frequency increased the most as a percentage of the previous mode's frequency:
DEFINITION (pseudomode). Let X be a sequence of Floating-Point Numbers, denote by ms the mode of the sequence obtained by rounding the elements of X to s significant digits, and by fs the frequency of ms. Define the pseudomode of X to be mS, where S is the s maximizing (fs−fs+1)/fs+1. If there are multiple such s, the largest is chosen.
The pseudomode of the sequence δk//δk−1 is taken as the approximation of λ2/|λ1|. A straightforward implementation, in Octave, of the above definition is given in § 5 below. The computational cost of this function, called pseudomode, is discussed in § 7 below.
REFER TO FIGURES.
The reader may naturally ask if a simpler approach is just as good as pseudomode. For example, one could simply wait until consecutive terms in the sequence δk+1//δk are sufficiently close and choose one as the approximation of λ2/|λ1|. It is plausible from the examples (see Figures) that such an approach is vulnerable to “coincidental closeness”, and this is indeed the case.
REFER TO FIGURES.
A more important advantage of pseudomode over the simpler approaches is that it can provide some level of confidence regarding the accuracy of the approximation of λ2; see § 6 for details.
Of course, Introspective Power Method can fail for the same reasons that Power Method can fail. If |λ2|≈|λ3| then the candidates for λ2/|λ1| can fail to stabilize near the true value before the underlying Power Method terminates.
REFER TO FIGURES.
Conceptually, there are three scenarios:
Scenario (3) is responsible for the vast majority of cases in which Introspective Power Method does not return a reasonably correct approximation of λ2.
I further address the reliability of Introspective Power Method in § 6 below.
This section provides a complete implementation, in Octave, of Introspective Power Method. More specifically, this section provides an implementation of the // operation, of the pseudomode function, and of the main iteration.
Since the underlying Power Method below uses ∥ ∥∞ to measure errors and normalize (cf. § 4.1.1 of [1]), the following seems to be a good implementation of the // operation:
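The Octave listing is not reproduced at this point in the text. The following Python sketch (an assumption, not the author's exact code) shows one natural choice compatible with ∥ ∥∞ normalization:

```python
import numpy as np

def par_inf(w, v):
    # w // v adapted to max-norm normalization: divide at the entry where
    # |v| attains its maximum. After normalization by || ||_inf that entry
    # of v is +/-1, so the division is well-conditioned.
    i = int(np.argmax(np.abs(v)))
    return w[i] / v[i]

v = np.array([0.25, -1.0, 0.5])   # max-norm 1, attained at index 1
w = -2.5 * v
c = par_inf(w, v)
```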
To extract a good approximation of λ2/|λ1|, an implementation of the idea from § 3 is needed:
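The Octave listing is not reproduced at this point in the text. The following Python sketch is a hypothetical reconstruction directly from the Definition in § 3; it returns both the pseudomode and the number of significant digits retained when the spike occurred:

```python
from collections import Counter
from math import floor, log10

def round_sig(x, s):
    # Round x to s significant decimal digits.
    return 0.0 if x == 0.0 else round(x, s - 1 - floor(log10(abs(x))))

def pseudomode(seq, max_sig=17):
    # For each s, the mode m_s and frequency f_s of seq rounded to s digits.
    modes, freqs = {}, {}
    for s in range(1, max_sig + 1):
        counts = Counter(round_sig(x, s) for x in seq)
        modes[s], freqs[s] = counts.most_common(1)[0]
    # Pick S maximizing (f_s - f_{s+1}) / f_{s+1}; ties go to the largest s.
    best_s, best = 1, float("-inf")
    for s in range(1, max_sig):
        ratio = (freqs[s] - freqs[s + 1]) / freqs[s + 1]
        if ratio >= best:
            best, best_s = ratio, s
    return modes[best_s], best_s   # (pseudomode, sig)

# Toy example: values clustered near 0.73 plus outliers.
vals = [0.7301, 0.7299, 0.73004, 0.72996, 0.6, 0.9, 0.1]
pm, sig = pseudomode(vals)
```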
REMARK. The additional return value sig is the number of significant digits not yet rounded when the pseudomode occurred. Its purpose is explained in § 6 below.
Finally, the following is an implementation of Introspective Power Method, using ∥ ∥∞ for all purposes:
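The Octave listing is not reproduced at this point in the text. The following Python sketch is a hypothetical reconstruction of the whole method: it includes the pseudomode helper of § 3 so as to be self-contained and marks the introspection steps with comments; the example matrix (built from a known spectrum 5, −3, 1) is an assumption for illustration.

```python
import numpy as np
from collections import Counter
from math import floor, log10

def round_sig(x, s):
    return 0.0 if x == 0.0 else round(x, s - 1 - floor(log10(abs(x))))

def pseudomode(seq, max_sig=17):
    modes, freqs = {}, {}
    for s in range(1, max_sig + 1):
        counts = Counter(round_sig(x, s) for x in seq)
        modes[s], freqs[s] = counts.most_common(1)[0]
    best_s, best = 1, float("-inf")
    for s in range(1, max_sig):
        ratio = (freqs[s] - freqs[s + 1]) / freqs[s + 1]
        if ratio >= best:
            best, best_s = ratio, s
    return modes[best_s], best_s

def introspective_power_method(A, v, tol=1e-12, max_iter=5000):
    u = v / np.linalg.norm(v, np.inf)
    lam, cands, d_prev = 0.0, [], None
    for _ in range(max_iter):
        w = A @ u
        i = int(np.argmax(np.abs(u)))
        lam_new = w[i] / u[i]                    # running estimate of lambda_1
        w = w / np.linalg.norm(w, np.inf)
        # --- introspection: the only additions to plain Power Method ---
        d = w - np.sign(lam_new) * u             # depth-2 aggregate delta_k
        if d_prev is not None and np.max(np.abs(d_prev)) > 1e-14:
            j = int(np.argmax(np.abs(d_prev)))
            cands.append(d[j] / d_prev[j])       # candidate for lambda_2/|lambda_1|
        d_prev = d
        # ----------------------------------------------------------------
        converged = abs(lam_new - lam) < tol * max(1.0, abs(lam_new))
        lam, u = lam_new, w
        if converged:
            break
    mu, sig = pseudomode(cands)
    return lam, u, mu * abs(lam), sig            # lambda_2 ~ pseudomode * |lambda_1|

# Example: a dense symmetric matrix with known eigenvalues 5, -3, 1.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([5.0, -3.0, 1.0]) @ Q.T
lam1, v1, lam2, sig = introspective_power_method(A, np.array([0.9, 0.8, 0.7]))
```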
REMARK. The required code (highlighted above) can be inserted into any implementation of Power Method and does not interfere with its standard functionality.
A central, and somewhat paradoxical, challenge for any approximation algorithm is to certify the accuracy of its approximation relative to a true value that is unknown.
Power Method can promise something, because it approximates both the eigenvalue and an eigenvector: if λ′1 and v′1 are the approximations, then the absolute residual error ∥A(v′1)−λ′1v′1∥, or its relative version, can be checked.
REMARK. Strictly speaking, neither residual error implies anything definite about |λ′1−λ1| or |λ′1−λ1|/|λ1| (cf. (53.5) in Chapter 3 of [2] and § 3.2.1 of [1]).
What can Introspective Power Method promise? Although it is true, mathematically, that δk converges to a λ2-eigenvector v2 as k→∞, it seems difficult to get a good approximation of v2 in practice without squandering much of the efficiency that makes the method advantageous. So, it is not possible at present to promise a residual error.
However, pseudomode's return value sig seems to be a reliable indicator of accuracy:
HEURISTIC (confidence). The relative error in the approximation of λ2 is likely to be 10^(1−sig) or better, and extremely likely to be 10^(2−sig) or better.
REFER TO FIGURES.
If less accuracy for λ1 is demanded then, for reasons discussed in § 3, one should expect the approximation of λ2 to degrade somewhat on average. A batch of 10000 randomized trials similar to that shown in the Figures was performed.
REMARK. For details about how the above trials were randomized, see Appendix B.
In this section, I show that the extra computational cost when Introspection is added to Power Method is considerably smaller than the cost to subsequently run Deflated Power Method.
Let n×n be the size of A. Let N be the number of iterations performed during Introspective Power Method, which depends only on the underlying Power Method and is unaffected by the improvement. Let M be the number of successive roundings to be performed by the pseudomode function (cf. § 5 above). Since M is bounded by a very small number dependent only on the floating-point system and the granularity of rounding (e.g. M≤17 in Double Precision with Decimal rounding), it will be treated as a constant in the asymptotic θ-notation.
Here is a list of the “new” operations added to Power Method:
(a) forming the aggregate δk from the current and previous normalized iterates;
(b) evaluating the candidate ratio δk//δk−1;
(c) rounding each candidate to s significant digits, for each s;
(d) computing the mode and its frequency for each rounded copy of the N candidates;
(e) selecting the pseudomode from the resulting modes and frequencies.
The cost of the procedure in (a) is linear in n and so complexity (a) is, in asymptotic notation, Nθ(n). The cost of any reasonable implementation of // is also linear in n and so complexity (b) is Nθ(n). By nature of M and because the cost of rounding a single Floating-Point Number can be assumed fixed, complexity (c) is Nθ(1). One way to efficiently accomplish (d) is to sort the initial N-array first, observe that rounding does not cause the array to become unsorted, and note that the mode of any sorted array can be easily calculated during a single traversal of the array. Expressed in asymptotic notation, complexity (d) is θ(N log(N))+θ(N). By nature of M and because the procedure required by (e) is very simple, cost (e) can be safely ignored.
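The single-traversal mode computation mentioned in the analysis of (d) can be sketched as follows (Python, illustrative):

```python
def mode_of_sorted(a):
    # Mode of a sorted array in one pass: equal values are adjacent,
    # so it suffices to track the longest run.
    best, best_run, run = a[0], 1, 1
    for prev, x in zip(a, a[1:]):
        run = run + 1 if x == prev else 1
        if run > best_run:
            best, best_run = x, run
    return best, best_run

m, f = mode_of_sorted(sorted([3.1, 1.0, 2.2, 2.2, 2.2, 9.4]))
```

Since rounding a sorted array leaves it sorted, one sort of the N candidates suffices for all M roundings, giving the θ(N log(N))+θ(N) bound stated above.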
SUMMARY. In terms of routine low-level operations like Floating-Point Arithmetic/Comparison, the extra computational cost required to run Power Method when Introspection is included is, in asymptotic notation, θ(Nn+N log(N)); the leading coefficients are modest.
On the other hand, the components of Deflated Power Method are as follows:
(A) performing the Deflation, i.e. accounting for the correction −λ1U;
(B) the matrix–vector products v ↦ A(v);
(C) evaluation of the // operation (or a similar comparison of iterates);
(D) normalization of the iterates.
I assume that the number of iterations needed for the second use of Power Method is the same N, because no other assumption seems more reasonable.
REMARK. The assumption in the previous paragraph simplifies the analysis of complexity, but it should be noted there are meaningful relationships between the performance of Introspection and that of Deflated Power Method. For example, the situation in which Introspective Power Method is most likely to produce a poor approximation is precisely the situation in which Deflated Power Method is likely to require a large number of iterations (cf. § 4).
The cost of the procedure in (B) is quadratic in n and so complexity (B) is, in asymptotic notation, Nθ(n^2). As before, complexity (C) is Nθ(n). The cost of the procedure in (D) is linear in n and so complexity (D) is Nθ(n). Depending on whether Deflation is accomplished inside or outside of the main iteration, complexity (A) will be either Nθ(n) or θ(n^2), respectively.
SUMMARY. In terms of routine low-level operations like Floating-Point Arithmetic (including square roots), the computational cost required to run Deflated Power Method is, in asymptotic notation, θ(Nn^2); the leading coefficients are modest.
There is no rigorous relationship between n and N, and I prefer not to impose any such assumption. Nonetheless, for the computational cost of Deflated Power Method to be competitive with Introspection, the number N of iterations would need to be unrealistically larger than the size n of the matrix. To get a rough idea of the scale, note that if A is merely 10×10 then the smallest N for which Nn^2 < N log_2(N) is roughly N ≈ 10^30.
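The arithmetic behind that scale estimate can be checked directly (Python, illustrative): with n = 10, the condition n^2 < log_2(N) forces N > 2^100.

```python
from math import log2

n = 10
# Smallest scale at which n*n < log2(N) can hold: N must exceed 2**(n*n).
N = 2.0 ** (n * n)
```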
Introspective Power Method's other costs, like storage/reads/writes, are moderate. For example, the extra memory required is θ(n)+θ(N): a few auxiliary vectors of length n (the previous iterate and the aggregates) together with the length-N array of candidate ratios.
By arranging deeper cancellation, one can approximate smaller eigenvalues:
DEFINITION (depth-3 aggregates). Define

ηk ≔ δk − (λ2/|λ1|) δk−1,

where λ2, ρ, δk are as before (note that λ2/|λ1| = ±ρ, with ± = sign(λ2)).
Observe that

ηk ≈ C (λ3/|λ1|)^k v3

for a certain constant C, and therefore

ηk // ηk−1 → λ3/|λ1|

for large k. This requires storage and update of δk−2 in addition to δk and δk−1.
A general definition is:
DEFINITION (depth-i aggregates). Define δk(i) recursively by

δk(1) ≔ uk,  δk(i+1) ≔ δk(i) − (λi/|λ1|) δk−1(i).
Note that δk(2)=δk and δk(3)=ηk.
Observe that

δk(i) // δk−1(i) → λi/|λ1|

for large k.
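The depth-3 case can be checked numerically. In the following Python sketch (a hypothetical diagonal example whose eigenvalue ratios are exact binary fractions), the ratio ηk // ηk−1 is realized entrywise at the dominant index and tends to λ3/|λ1| = −0.25:

```python
import numpy as np

A = np.diag([8.0, 4.0, -2.0, 0.5])   # l1 = 8, l2 = 4, l3 = -2
v = np.array([1.0, 0.9, 0.8, 0.7])

u = v / np.linalg.norm(v, np.inf)
r1 = 1.0                              # sign(lambda_1)
r2 = 0.5                              # lambda_2/|lambda_1|
d_prev, e_prev, est = None, None, None
for k in range(20):
    w = A @ u
    w = w / np.linalg.norm(w, np.inf)
    d = w - r1 * u                    # depth-2 aggregate delta_k
    if d_prev is not None:
        e = d - r2 * d_prev           # depth-3 aggregate eta_k
        if e_prev is not None:
            j = int(np.argmax(np.abs(e_prev)))
            est = e[j] / e_prev[j]    # eta_k // eta_{k-1}
        e_prev = e
    d_prev = d
    u = w
```

In practice r2 is itself only an approximation (the output of the depth-2 stage), which is one source of the compounding of errors discussed in the REMARK below.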
REMARK. Of course, errors become compounded as i increases, so one cannot go very deep before the approximations become mostly garbage. On the other hand, this is somewhat of an obstacle for Deflated Power Method too (cf. § 4.2.5 of [1]).
Finally, Introspective Power Method can be used in the Complex setting with no significant changes. Given an implementation of Power Method that itself operates in the Complex setting, simply observe that cancellation of dominant terms in the sequence of normalized iterates uk is arranged by forming uk − ζ uk−1, where now ζ ≔ λ1/|λ1| is a “complex sign”, i.e. root of unity.
Let A and λi be as above.
Let u1 be a unit eigenvector for λ1. Let u be any vector such that u·u1=1; one choice for u is u1 but there are other choices (cf. § 4.2.5 of [1]). Define U to be the outer product u1u^T of u1 and u.
For any number λ, the eigenvalues of A−λU are λ1−λ, λ2, . . . (cf. Theorem 4.2 of [1]). Thus, if λ is chosen to be λ1 then the largest eigenvalue of A−λU is λ2. Deflated Power Method for A is simply the application of Power Method to A−λ1U. There is a non-trivial correspondence between eigenvectors of A and eigenvectors of A−λU (cf. § 4.2.1 of [1]). Note that Deflated Power Method is effective even if λ is merely an approximation of λ1, since then λ1−λ≈0 and λ2 is probably still the largest eigenvalue of A−λU.
A key point is that Power Method can be used on A−λU without ever actually modifying A, since outer products transform vectors in a predictable and simple way:
If Z is the outer product of x and y then Z(v)=(y·v)x. Thus, (A−λU)(v) can be determined by separately calculating A(v) and the comparatively inexpensive λU(v). This is significant when sparseness is important, since A−λU is usually extremely dense (cf. § 4.2.5 of [1]).
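A Python sketch of this matrix-free evaluation (the 2×2 example and the use of numpy's eigh to furnish the dominant eigenpair, with u = u1, are illustrative assumptions):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
evals, evecs = np.linalg.eigh(A)          # eigenvalues in ascending order
lam1, u1 = evals[-1], evecs[:, -1]        # dominant eigenpair; take u = u1

def deflated_apply(v):
    # (A - lam1*U)(v) with U = u1 u^T, evaluated as A(v) minus the
    # inexpensive rank-one correction lam1 (u . v) u1; A is never modified.
    return A @ v - lam1 * np.dot(u1, v) * u1

# Power Method on the deflated operator converges to lambda_2.
v = np.array([1.0, 0.3])
for _ in range(100):
    w = deflated_apply(v)
    v = w / np.linalg.norm(w, np.inf)
i = int(np.argmax(np.abs(v)))
lam2 = deflated_apply(v)[i] / v[i]
```

For this symmetric example the true values are λ1 = (7+√5)/2 and λ2 = (7−√5)/2, and the sparsity of A (had it any) would be fully preserved.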
For all randomized trials above, the matrix A was generated as follows: