Computing interval parameter bounds from fallible measurements using systems of nonlinear equations

Information

  • Patent Application
  • Publication Number
    20040015531
  • Date Filed
    July 15, 2003
  • Date Published
    January 22, 2004
Abstract
One embodiment of the present invention provides a system that computes interval parameter bounds from fallible measurements. During operation, the system receives a set of measurements z1, . . . , zn, wherein an observation model describes each zi as a function of a p-element vector parameter x=(x1, . . . , xp). Next, the system forms a system of nonlinear equations zi−h(x)=0 (i=1, . . . , n) based on the observation model. Finally, the system solves the system of nonlinear equations to determine interval parameter bounds on x.
Description


BACKGROUND

[0002] 1. Field of the Invention


[0003] The present invention relates to techniques for performing arithmetic operations involving interval operands within a computer system. More specifically, the present invention relates to a method and an apparatus for computing interval parameter bounds from fallible measurements using systems of nonlinear equations.


[0004] 2. Related Art


[0005] Rapid advances in computing technology make it possible to perform trillions of computational operations each second. This tremendous computational speed makes it practical to perform computationally intensive tasks as diverse as predicting the weather and optimizing the design of an aircraft engine. Such computational tasks are typically performed using machine-representable floating-point numbers to approximate values of real numbers. (For example, see the Institute of Electrical and Electronics Engineers (IEEE) standard 754 for binary floating-point numbers.)


[0006] In spite of their limitations, floating-point numbers are generally used to perform most computational tasks.


[0007] One limitation is that machine-representable floating-point numbers have a fixed-size word length, which limits their accuracy. Note that a floating-point number is typically encoded using a 32, 64 or 128-bit binary number, which means that there are only 2^32, 2^64 or 2^128 possible symbols that can be used to specify a floating-point number. Hence, most real number values can only be approximated with a corresponding floating-point number. This creates estimation errors that can be magnified through even a few computations, thereby adversely affecting the accuracy of a computation.
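
As a minimal illustration (a hypothetical Python snippet, not part of the disclosed system), the decimal value 0.1 has no exact binary floating-point representation, so repeated additions accumulate a small approximation error:

```python
# 0.1 cannot be represented exactly in binary floating point, so small
# representation errors accumulate across repeated additions.
total = sum(0.1 for _ in range(10))
print(total)          # 0.9999999999999999 rather than 1.0
print(total == 1.0)   # False
```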


[0008] A related limitation is that floating-point numbers contain no information about their accuracy. Most measured data values include some amount of error that arises from the measurement process itself. This error can often be quantified as an accuracy parameter, which can subsequently be used to determine the accuracy of a computation. However, floating-point numbers are not designed to keep track of accuracy information, whether from input data measurement errors or machine rounding errors. Hence, it is not possible to determine the accuracy of a computation by merely examining the floating-point number that results from the computation.


[0009] Interval arithmetic has been developed to solve the above-described problems. Interval arithmetic represents numbers as intervals specified by a first (left) endpoint and a second (right) endpoint. For example, the interval [a, b], where a<b, is a closed, bounded subset of the real numbers, R, which includes a and b as well as all real numbers between a and b. Arithmetic operations on interval operands (interval arithmetic) are defined so that interval results always contain the entire set of possible values. The result is a mathematical system for rigorously bounding numerical errors from all sources, including measurement data errors, machine rounding errors and their interactions. (Note that the first endpoint normally contains the “infimum”, which is the largest number that is less than or equal to each of a given set of real numbers. Similarly, the second endpoint normally contains the “supremum”, which is the smallest number that is greater than or equal to each of the given set of real numbers. Also note that the infimum and the supremum can be represented by floating point numbers.)


[0010] One commonly performed operation is to compute bounds on nonlinear parameters from a set of fallible measurements. Using the traditionally accepted methodology to compute approximate parameter values from nonlinear models of observable data requires a number of questionable assumptions. In the best case, if all assumptions are satisfied, the final result is a less than 100% statistical confidence interval rather than a containing interval bound. For example, the method of least squares produces a solution approximation even when the data on which it is based are inconsistent.


[0011] Hence, what is needed is a method and an apparatus that uses interval techniques to compute bounds on nonlinear parameters from fallible measurements.



SUMMARY

[0012] One embodiment of the present invention provides a system that computes interval parameter bounds from fallible measurements. During operation, the system receives a set of measurements z1, . . . , zn wherein an observation model describes each zi as a function of a p-element vector parameter x=(x1, . . . , xp). Next, the system forms a system of nonlinear equations zi−h(x)=0 (i=1, . . . , n) based on the observation model. Finally, the system solves the system of nonlinear equations to determine interval parameter bounds on x.


[0013] In a variation on this embodiment, the system of nonlinear equations is an “overdetermined system” in which there are more equations than unknowns.


[0014] In a variation on this embodiment, each measurement zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, and h is actually a q-element vector of functions h=(h1, . . . , hq)T.


[0015] In a variation on this embodiment, receiving the set of measurements involves receiving values for a set of conditions c1, . . . , cn under which the corresponding observations zi were made. In this variation, the system of nonlinear equations is of the form zi−h(x|ci)=0 (i=1, . . . , n).


[0016] In a further variation, each condition ci is actually an r-element vector of conditions ci=(cil, . . . , cir)T.


[0017] In a further variation, each condition ci is not known precisely but is contained within an interval cIi.


[0018] In a variation on this embodiment, equations in the system of nonlinear equations are of the form zi−h(x|ci)+εI(x, ci)=0 (i=1, . . . , n), which includes an error model εI(x, ci) that provides interval bounds on measurement errors for zi.


[0019] In a further variation, if zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, then εI is actually a q-element vector εI=(ε1, . . . , εq)T.


[0020] In a further variation, if there exists no solution to the system of nonlinear equations, the system determines that at least one of the following is true: (1) at least one of the set of measurements z1, . . . , zn is faulty; (2) the observation model h(x|ci) is false; (3) the error model εI(x, ci) is false; and (4) the computational system used to compute interval bounds on elements of x is flawed.


[0021] In a variation on this embodiment, solving the system of nonlinear equations involves first linearizing the system of nonlinear equations to form a corresponding system of linear equations, and then solving the system of linear equations through Gaussian elimination.







BRIEF DESCRIPTION OF THE FIGURES

[0022]
FIG. 1 illustrates a computer system in accordance with an embodiment of the present invention.


[0023]
FIG. 2 illustrates the process of compiling and using code for interval computations in accordance with an embodiment of the present invention.


[0024]
FIG. 3 illustrates an arithmetic unit for interval computations in accordance with an embodiment of the present invention.


[0025]
FIG. 4 is a flow chart illustrating the process of performing an interval computation in accordance with an embodiment of the present invention.


[0026]
FIG. 5 illustrates four different interval operations in accordance with an embodiment of the present invention.


[0027]
FIG. 6 illustrates the process of performing a Gaussian Elimination operation on an overdetermined interval system of linear equations in accordance with an embodiment of the present invention.


[0028]
FIG. 7 illustrates the process of generating a preconditioning matrix in accordance with an embodiment of the present invention.


[0029]
FIG. 8 presents a flow chart illustrating the process of computing interval parameter bounds from fallible measurements in accordance with an embodiment of the present invention.







[0030] Table 1 (located near the end of the specification—not with the figures) illustrates a correspondence between parameter estimation and nonlinear equations in accordance with an embodiment of the present invention.


DETAILED DESCRIPTION

[0031] The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.


[0032] The data structures and code described in this detailed description are typically stored on a computer readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs) and DVDs (digital versatile discs or digital video discs), and computer instruction signals embodied in a transmission medium (with or without a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communications network, such as the Internet.


[0033] Computer System


[0034]
FIG. 1 illustrates a computer system 100 in accordance with an embodiment of the present invention. As illustrated in FIG. 1, computer system 100 includes processor 102, which is coupled to a memory 112 and to a peripheral bus 110 through bridge 106. Bridge 106 can generally include any type of circuitry for coupling components of computer system 100 together.


[0035] Processor 102 can include any type of processor, including, but not limited to, a microprocessor, a mainframe computer, a digital signal processor, a personal organizer, a device controller and a computational engine within an appliance. Processor 102 includes an arithmetic unit 104, which is capable of performing computational operations using floating-point numbers. Processor 102 communicates with storage device 108 through bridge 106 and peripheral bus 110. Storage device 108 can include any type of non-volatile storage device that can be coupled to a computer system. This includes, but is not limited to, magnetic, optical, and magneto-optical storage devices, as well as storage devices based on flash memory and/or battery-backed up memory.


[0036] Processor 102 communicates with memory 112 through bridge 106. Memory 112 can include any type of memory that can store code and data for execution by processor 102. As illustrated in FIG. 1, memory 112 contains computational code for intervals 114. Computational code 114 contains instructions for the interval operations to be performed on individual operands, or interval values 115, which are also stored within memory 112. This computational code 114 and these interval values 115 are described in more detail below with reference to FIGS. 2-5.


[0037] Note that although the present invention is described in the context of computer system 100 illustrated in FIG. 1, the present invention can generally operate on any type of computing device that can perform computations involving floating-point numbers. Hence, the present invention is not limited to the computer system 100 illustrated in FIG. 1.


[0038] Compiling and Using Interval Code


[0039]
FIG. 2 illustrates the process of compiling and using code for interval computations in accordance with an embodiment of the present invention. The system starts with source code 202, which specifies a number of computational operations involving intervals. Source code 202 passes through compiler 204, which converts source code 202 into executable code form 206 for interval computations. Processor 102 retrieves executable code 206 and uses it to control the operation of arithmetic unit 104.


[0040] Processor 102 also retrieves interval values 115 from memory 112 and passes these interval values 115 through arithmetic unit 104 to produce results 212. Results 212 can also include interval values.


[0041] Note that the term “compilation” as used in this specification is to be construed broadly to include pre-compilation and just-in-time compilation, as well as use of an interpreter that interprets instructions at run-time. Hence, the term “compiler” as used in the specification and the claims refers to pre-compilers, just-in-time compilers and interpreters.


[0042] Arithmetic Unit for Intervals


[0043]
FIG. 3 illustrates arithmetic unit 104 for interval computations in more detail in accordance with an embodiment of the present invention. Details regarding the construction of such an arithmetic unit are well known in the art. For example, see U.S. Pat. Nos. 5,687,106 and 6,044,454. Arithmetic unit 104 receives intervals 302 and 312 as inputs and produces interval 322 as an output.


[0044] In the embodiment illustrated in FIG. 3, interval 302 includes a first floating-point number 304 representing a first endpoint of interval 302, and a second floating-point number 306 representing a second endpoint of interval 302. Similarly, interval 312 includes a first floating-point number 314 representing a first endpoint of interval 312, and a second floating-point number 316 representing a second endpoint of interval 312. Also, the resulting interval 322 includes a first floating-point number 324 representing a first endpoint of interval 322, and a second floating-point number 326 representing a second endpoint of interval 322.


[0045] Note that arithmetic unit 104 includes circuitry for performing the interval operations that are outlined in FIG. 5. This circuitry enables the interval operations to be performed efficiently.


[0046] However, note that the present invention can also be applied to computing devices that do not include special-purpose hardware for performing interval operations. In such computing devices, compiler 204 converts interval operations into executable code that can be executed using standard computational hardware that is not specially designed for interval operations.


[0047]
FIG. 4 is a flow chart illustrating the process of performing an interval computation in accordance with an embodiment of the present invention. The system starts by receiving a representation of an interval, such as first floating-point number 304 and second floating-point number 306 (step 402). Next, the system performs an arithmetic operation using the representation of the interval to produce a result (step 404). The possibilities for this arithmetic operation are described in more detail below with reference to FIG. 5.


[0048] Interval Operations


[0049]
FIG. 5 illustrates four different interval operations in accordance with an embodiment of the present invention. These interval operations operate on the intervals X and Y. The interval X includes two endpoints:


[0050] {underscore (x)} denotes the lower bound of X, and


[0051] {overscore (x)} denotes the upper bound of X.


[0052] The interval X is a closed subset of the extended (including −∞ and +∞) system of real numbers R* (see line 1 of FIG. 5). Similarly the interval Y also has two endpoints and is a closed subset of the extended real numbers R* (see line 2 of FIG. 5).


[0053] Note that an interval is a point or degenerate interval if X=[x, x]. Also note that the left endpoint of an interior interval is always less than or equal to the right endpoint. The set of extended real numbers, R* is the set of real numbers, R, extended with the two ideal points negative infinity and positive infinity:




R*=(R∪{−∞})∪{+∞}=[−∞,+∞].



[0054] We also define R** by replacing the unsigned zero, {0}, from R* with the interval [−0,+0].




R**=(R*−{0})∪[−0,+0]=[−∞,+∞], because 0=[−0,+0].



[0055] In the equations that appear in FIG. 5, the up arrows and down arrows indicate the direction of rounding in the next and subsequent operations. Directed rounding (up or down) is applied if the result of a floating-point operation is not machine-representable.


[0056] The addition operation X+Y adds the left endpoint of X to the left endpoint of Y and rounds down to the nearest floating-point number to produce a resulting left endpoint, and adds the right endpoint of X to the right endpoint of Y and rounds up to the nearest floating-point number to produce a resulting right endpoint.


[0057] Similarly, the subtraction operation X−Y subtracts the right endpoint of Y from the left endpoint of X and rounds down to produce a resulting left endpoint, and subtracts the left endpoint of Y from the right endpoint of X and rounds up to produce a resulting right endpoint.


[0058] The multiplication operation selects the minimum value of four different terms (rounded down) to produce the resulting left endpoint. These terms are: the left endpoint of X multiplied by the left endpoint of Y; the left endpoint of X multiplied by the right endpoint of Y; the right endpoint of X multiplied by the left endpoint of Y; and the right endpoint of X multiplied by the right endpoint of Y. This multiplication operation additionally selects the maximum of the same four terms (rounded up) to produce the resulting right endpoint.


[0059] Similarly, the division operation selects the minimum of four different terms (rounded down) to produce the resulting left endpoint. These terms are: the left endpoint of X divided by the left endpoint of Y; the left endpoint of X divided by the right endpoint of Y; the right endpoint of X divided by the left endpoint of Y; and the right endpoint of X divided by the right endpoint of Y. This division operation additionally selects the maximum of the same four terms (rounded up) to produce the resulting right endpoint. For the special case where the interval Y includes zero, X/Y is an exterior interval that is nevertheless contained in the interval R*.
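
When no special-purpose interval hardware is available, the four operations described above can be sketched in software. The following Python sketch (an illustration only, with assumed helper names) represents an interval as a (left, right) pair of floats and, in place of true directed rounding, steps each computed endpoint one unit in the last place outward using math.nextafter (Python 3.9 or later); the zero-containing divisor (exterior interval) case is not handled:

```python
import math

def _down(x):
    # One ulp toward -infinity: a stand-in for rounding the left endpoint down.
    return math.nextafter(x, -math.inf)

def _up(x):
    # One ulp toward +infinity: a stand-in for rounding the right endpoint up.
    return math.nextafter(x, math.inf)

def iadd(X, Y):
    (a, b), (c, d) = X, Y
    return (_down(a + c), _up(b + d))

def isub(X, Y):
    (a, b), (c, d) = X, Y
    return (_down(a - d), _up(b - c))

def imul(X, Y):
    (a, b), (c, d) = X, Y
    p = (a * c, a * d, b * c, b * d)
    return (_down(min(p)), _up(max(p)))

def idiv(X, Y):
    (a, b), (c, d) = X, Y
    if c <= 0.0 <= d:
        raise ValueError("divisor contains zero; exterior intervals are not handled here")
    q = (a / c, a / d, b / c, b / d)
    return (_down(min(q)), _up(max(q)))

# Example: [1, 2] * [3, 4] encloses [3, 8], widened outward by one ulp per endpoint.
print(imul((1.0, 2.0), (3.0, 4.0)))
```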


[0060] Note that the result of any of these interval operations is the empty interval if either of the intervals, X or Y, is the empty interval. Also note that, in one embodiment of the present invention, extended interval operations never cause undefined outcomes, which are referred to as "exceptions" in the IEEE 754 standard.


[0061] Solving an Overdetermined System of Interval Linear Equations


[0062] In order to solve a system of interval nonlinear equations, we first describe a technique for solving a system of interval linear equations. We can subsequently use this technique in solving a corresponding system of interval nonlinear equations.


[0063] Given the real (n×n) matrix A and the (n×1) column vector b, the linear system of equations


Ax=b  (1)


[0064] is consistent if there is a unique (n×1) vector x for which the system in (1) is satisfied. If the number of rows in A and elements in b is m≠n, then the system is said to be either under- or overdetermined, depending on whether m<n or n<m. In the overdetermined case, if m−n equations are not linearly dependent on the remaining equations, there is no solution vector x that satisfies the system. In the underdetermined case, there is no unique solution.


[0065] In the point (non-interval) case, there is no generally reliable way to decide if an overdetermined system based on fallible observations is consistent or not. Instead a least squares solution is generally sought. In the interval case, if the system of equations is sufficiently inconsistent, the computed interval solution set will be empty. If there are at least some parameter values that are consistent with all the observations, it is possible to delete inconsistent parameter values and bound the consistent ones.


[0066] We now consider the problem of solving overdetermined systems of equations in which the coefficients are intervals. That is, we consider a system of the form


AIx=bI  (2)


[0067] where AI is an interval matrix of m rows and n columns with m>n. The interval vector bI has m components. Such a system might arise directly or by linearizing an overdetermined system of nonlinear equations. (Note that within this specification and in the following claims, we sometimes drop the superscript "I" when referring to interval matrices or vectors.)


[0068] The solution set of (2) is the set of vectors x for which there exists a real matrix A ∈ AI and a real vector b ∈ bI such that (1) is satisfied. In general, the system in (2) is inconsistent if its solution set is empty. First, we assume that there exists at least one A ∈ AI and b ∈ bI such that (1) is consistent. Later, we consider the inconsistent case. Moreover, we also assume that the data in AI and bI are fallible. That is, there exists at least one A ∈ AI and b ∈ bI such that (1) is inconsistent. Our goal is to implicitly exclude values of x that are inconsistent with all A ∈ AI and b ∈ bI. For example, the redundancy resulting from the fact that there are more equations than variables might be deliberately introduced to sharpen the interval bound on the set of solutions to (2). In a following section, we show how this sharpening is accomplished.


[0069] We shall simplify the system using Gaussian elimination. In the point case, it is good practice to avoid forming normal equations from the original system. Instead, one performs elimination using normal operation matrices to zero all elements of the coefficient matrix except for an upper triangle. After this first phase, the normal equations of this simpler system can be formed and solved. Our procedure begins with a phase similar to the first phase just described. However, we do not quite complete the usual elimination procedure. We have no motivation to use normal operations because we do not form the normal equations. This is just as well because interval normal matrices do not exist.


[0070] When using interval Gaussian elimination, it is generally necessary to precondition the system to avoid excessive widening of intervals due to dependence. In the following section, we show how preconditioning can be done in the present case where AI is not square.


[0071] Preconditioning


[0072] Preconditioning can be done in the same way it is done when AI is square. Let Ac denote the center of the interval matrix AI. Partition Ac as
Ac=[A′c; A″c]  (3)


[0073] where A′c is an n by n matrix and A″c is an (m−n) by n matrix. Note that Ac need only be an approximation for the center of AI. Define the partitioned square matrix
C=[A′c 0; A″c I]  (4)


[0074] where I denotes the identity matrix of order m−n, and the block denoted by 0 is an n×(m−n) matrix of zeros.


[0075] Define the preconditioning matrix B to be an approximation for the inverse
[(A′c)−1 0; −A″c(A′c)−1 I]


[0076] of C.


[0077] To precondition (2) we multiply by B. We obtain


MIx=rI  (5)


[0078] where MI=BAI is an m by n interval matrix and rI=BbI is an interval vector of m components. When computing MI and rI, we use interval arithmetic to bound rounding errors.
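
A possible sketch of this step (a Python illustration under assumptions, not the disclosed implementation): each entry of MI is the dot product of a real row of B with an interval column of AI, and each entry of rI is the dot product of a row of B with bI; intervals are (left, right) pairs, and each endpoint is stepped one ulp outward with math.nextafter as a stand-in for directed rounding:

```python
import math

def idot(row, col):
    """Enclose the dot product of a real row (of B) with a vector of intervals
    (a column of A^I, or the vector b^I), each given as a (left, right) pair."""
    lo = hi = 0.0
    for a, (l, h) in zip(row, col):
        p_lo = math.nextafter(min(a * l, a * h), -math.inf)
        p_hi = math.nextafter(max(a * l, a * h), math.inf)
        lo = math.nextafter(lo + p_lo, -math.inf)
        hi = math.nextafter(hi + p_hi, math.inf)
    return (lo, hi)

# M^I[i][j] = idot(B[i], j-th interval column of A^I);  r^I[i] = idot(B[i], b^I).
```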


[0079] Elimination


[0080] We now perform elimination. We apply an interval version of Gaussian elimination to the system MIx=rI thereby transforming MI into almost (see below) upper trapezoidal form. We assume that this procedure only fails when all possible pivot elements contain zero. Note that after preconditioning, no pivot selection is performed during the elimination to obtain a result with the form
[TI; WI]x=[uI; vI]  (6)


[0081] where TI is a square upper triangular interval matrix of order n, and both uI and vI are interval vectors of n and m−n components, respectively. The submatrix WI is a matrix of m−n rows and n columns. It is zero except in the last column. Therefore, we can represent it in the form




WI=[0 zI]



[0082] where 0 denotes an (m−n) by (n−1) block of zeros, and zI is a vector of m−n intervals. From (6), we now have a set of equations




zixn=vi (i=1, . . . , m−n).  (7)



[0083] Also,


Tnnxn=un.  (8)


[0084] Therefore, the unknown value xn is contained in the interval
xn=(un/Tnn)∩(v1/z1)∩ . . . ∩(vm−n/zm−n).  (9)


[0085] Taking this intersection is what implicitly eliminates fallible data from AI and bI. It is this operation that allows us to get a sharper bound on the set of solutions to the original system (2) than might otherwise be obtained.


[0086] If the original system contains at least one consistent set of equations, the intersection in (9) will not be empty. Knowing xn, we can backsolve (6) for xn−1, . . . , x1. From (6), this takes the standard form of backsolving a triangular system TIx=uI. Sharpening xn using (9) also produces sharper bounds xI on the other components of x when we backsolve.
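
A Python sketch of the intersection in (9) (illustrative names, a simplified (left, right) interval representation, divisors assumed not to contain zero, and directed rounding omitted, so this is not a rigorous enclosure): the bound un/Tnn from (8) is intersected with every quotient vi/zi from (7), and an empty result signals inconsistency:

```python
def idiv(X, Y):
    # Interval quotient X / Y for a divisor Y that does not contain zero.
    (a, b), (c, d) = X, Y
    q = (a / c, a / d, b / c, b / d)
    return (min(q), max(q))

def iintersect(X, Y):
    # Intersection of two intervals; None represents the empty interval.
    lo, hi = max(X[0], Y[0]), min(X[1], Y[1])
    return (lo, hi) if lo <= hi else None

def sharpen_xn(u_n, T_nn, v, z):
    """Equation (9): x_n = (u_n / T_nn) intersected with v_i / z_i for i = 1..m-n."""
    xn = idiv(u_n, T_nn)
    for vi, zi in zip(v, z):
        xn = iintersect(xn, idiv(vi, zi))
        if xn is None:
            return None   # empty intersection: the original system (2) is inconsistent
    return xn
```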


[0087] Inconsistency


[0088] Now suppose the initial equations (2) are not consistent. Then the preconditioned equations (7) might or might not be consistent. Widening of intervals due to dependence and roundoff can cause the intersection in (9) to be non-empty.


[0089] Nevertheless, suppose we find that the intersection in (9) is empty. This event proves that the original equations (2) are inconsistent. Proving inconsistency might be the signal that a theory is measurably false, which might be an extremely enlightening event. On the other hand, inconsistency might only mean that invalid measurements have been made.


[0090] If invalid measurements are suspected, it might be important to discover which equation(s) in (2) are inconsistent. We might know which equation(s) in the transformed system (6) must be eliminated to obtain consistency. However, an equation in (6) is generally a linear combination of all the original equations in (2). Therefore, to establish consistency in the original system, we generally cannot determine which of its equation(s) to remove.


[0091] We might be able to determine a likely removal candidate by using the following steps:


[0092] 1. Remove enough equations from (6) that the intersection in (9) is not empty.


[0093] 2. Solve (6) for xn−1, . . . , x1. This process cannot fail because we assume the elimination process to obtain (6) does not fail.


[0094] 3. Substitute the solution into the original system (2). Any equation(s) in (2) whose left and right members do not intersect can be discarded.


[0095] Summary of the Gaussian Elimination Operation


[0096]
FIG. 6 illustrates the process of performing a Gaussian Elimination operation on an overdetermined interval system of linear equations in accordance with an embodiment of the present invention. The system starts by receiving a representation of the overdetermined system of linear equations Ax=b (step 602). In this representation, A is a matrix with m rows corresponding to m equations and n columns corresponding to n variables, x includes n variable components, b includes m scalar components, and m>n. The system then stores this representation in memory (step 604).


[0097] Next, the system preconditions Ax=b to generate a modified system BAx=Bb that can be solved with reduced growth of interval widths (step 606). This preconditioning process is described in more detail below with reference to FIG. 7.


[0098] The system then performs an interval Gaussian elimination operation on BAx=Bb to form
[T; W]x=[u; v],


[0099] wherein T is a square upper triangular matrix of order n, u is an interval vector with n components, v is an interval vector with m−n components, and W is a matrix with m−n rows and n columns, and wherein W is zero except in the last column, which is represented as a column vector z with m−n components (step 608).


[0100] Note that interval Gaussian elimination can fail. If so, the system simply terminates (step 609).


[0101] If Gaussian elimination does not fail, the system performs an interval intersection operation based on the equations zixn=vi(i=1, . . . ,m−n) and Tnnxn=un to solve for
xn=(un/Tnn)∩(v1/z1)∩ . . . ∩(vm−n/zm−n)


[0102] (step 610).


[0103] Finally, if xn is not the empty interval, the system performs a back substitution operation using xn and Tx=u to solve for the remaining components (xn−1, . . . , x1) of x (step 612).
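
The back-substitution of step 612 might look as follows (a simplified Python sketch with (left, right) interval pairs, hypothetical helper names, pivots assumed not to contain zero, and no directed rounding). Each component xi is recovered from row i of Tx=u after substituting the already-bounded components xi+1, . . . , xn:

```python
def isub(X, Y):
    return (X[0] - Y[1], X[1] - Y[0])

def imul(X, Y):
    p = (X[0] * Y[0], X[0] * Y[1], X[1] * Y[0], X[1] * Y[1])
    return (min(p), max(p))

def idiv(X, Y):
    q = (X[0] / Y[0], X[0] / Y[1], X[1] / Y[0], X[1] / Y[1])
    return (min(q), max(q))

def backsolve(T, u, xn):
    """Solve the upper-triangular interval system T x = u, given the sharpened x_n."""
    n = len(u)
    x = [None] * n
    x[n - 1] = xn
    for i in range(n - 2, -1, -1):
        acc = u[i]
        for j in range(i + 1, n):
            acc = isub(acc, imul(T[i][j], x[j]))   # u_i minus the known terms T_ij * x_j
        x[i] = idiv(acc, T[i][i])                  # divide by the diagonal pivot T_ii
    return x
```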


[0104]
FIG. 7 illustrates the process of generating a preconditioning matrix in accordance with an embodiment of the present invention. The system starts by determining a non-interval matrix Ac, which is the approximate center of the interval matrix A (step 702). Next, the system augments the m×n matrix Ac to produce an m×m partitioned matrix
C=[A′c 0; A″c I],


[0105] wherein A′c is an n×n matrix, A″c is an (m−n)×n matrix, I is the identity matrix of order m−n, and 0 is an n×(m−n) matrix of zeros (step 704). Finally, the system calculates the approximate inverse of the partitioned matrix C to produce the preconditioning matrix B (step 706). If C happens to be singular, its elements can be perturbed until it is no longer so. This causes no difficulty because C is just used to compute the approximate inverse B.
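
The steps of FIG. 7 might be sketched as follows (a Python/NumPy illustration under assumptions: Ac is taken to be the midpoint matrix, and the perturbation applied to a singular C is an arbitrary illustrative choice):

```python
import numpy as np

def preconditioner(A_lo, A_hi):
    """Build B for an m x n interval matrix A^I given as endpoint arrays, with m > n."""
    m, n = A_lo.shape
    Ac = 0.5 * (A_lo + A_hi)          # step 702: approximate center of A^I
    C = np.zeros((m, m))              # step 704: partitioned square matrix C
    C[:n, :n] = Ac[:n, :]             # A'_c block (n x n)
    C[n:, :n] = Ac[n:, :]             # A''_c block ((m-n) x n)
    C[n:, n:] = np.eye(m - n)         # identity of order m - n; the upper-right block stays zero
    try:
        B = np.linalg.inv(C)          # step 706: approximate inverse of C
    except np.linalg.LinAlgError:
        # If C is singular, perturb it slightly; C is only used to form the approximation B.
        B = np.linalg.inv(C + 1e-10 * np.eye(m))
    return B
```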


[0106] Parameter Estimation in Nonlinear Models


[0107] Overdetermined (tall) systems of nonlinear equations naturally arise in the context of computing interval parameter bounds from fallible data. In tall systems, there are more interval equations than unknowns. As a result, these systems can appear to be inconsistent when they are not. A technique is described to compute interval nonlinear parameter bounds from fallible data and to possibly prove that no bounds exist because the tall system is inconsistent.


[0108] Interval arithmetic has been used to perform the analysis of fallible observations from an experiment to compute bounds on Newton's constant of gravitation G. (see B. Lang, Verified Quadrature in Determining Newton's Constant of Gravitation, Journal of Universal Computer Science, 4(1):16-24, 1998.) Because the computed bounds were sufficiently different from the then accepted approximate value, subsequent experiments were conducted to refine the accepted approximate value and the interval bound on G. (see “The Controversy of Newton's Gravitational Constant,” The Eöt-Wash Group: Laboratory of Gravitational Physics, www.npl.washington.edu/eotwash/gconst.html)


[0109] Using the traditionally accepted methodology to compute approximate parameter values from nonlinear models of observable data requires a number of questionable assumptions. In the best case, if all assumptions are satisfied, the final result is a less than 100% statistical confidence interval rather than a containing interval bound. For example, the method of least squares produces a solution approximation even when the data on which it is based are inconsistent.


[0110] A better procedure is to solve a system of interval nonlinear equations using the interval version of Newton's method. If assumptions for this procedure are satisfied, the result is a guaranteed bound on the parameter(s) in question. If assumptions are sufficiently violated and enough observations are available, the procedure can prove the system of equations and interval data are inconsistent. This better procedure is now described.


[0111] Nonlinear Parameter Estimation


[0112] Let n q-element vector measurements, z1, . . . , zn with zi=(zi1, . . . , ziq)T be given. Assume these measurements depend on the value of a p-element vector parameter, x=(x1, . . . , xp)T. Moreover, assume an analytic model exists for the observation vectors, zi, as a function of x and the true value ci=(ci1, . . . , cir)T of conditions under which the zi are measured. Thus:




zi=h(x|ci)  (11)



[0113] The problem is to construct interval bounds xI on the elements of x from interval bounds zIi on the fallible measurements, zi and interval bounds cIi on the conditions of measurement.


[0114] Interval Observation Bounds


[0115] The development of interval measurement bounds begins by recognizing that a measurement z can be modeled (or thought of) as an unknown value t to which an error is added from the interval


ε×[−1,1]=εI  (12)


[0116] where 0≦ε. No assumption is made about the distribution of individual measurement errors from the interval εI that are added to t in the process of measuring z.


[0117] At once it follows that


z∈t+εI.  (13)


[0118] More importantly, if the interval observation Z is defined to be z+εI, then


t∈Z.  (14)


[0119] Enclosure (14) is an immediate consequence of the fact that zero is the midpoint of εI. This simple idea has a number of implications. They are:


[0120] Given multiple interval observations Zi, all of which are enclosures of t, their intersection must also enclose t. Therefore
t∈Z1∩Z2∩ . . . ∩Zn.


[0121] Given random finite intervals Zi, all of which contain the value t, the expected width of their intersection decreases as n increases.


[0122] An empty intersection is proof that t ∉ Zi for some value of i. This can be true either because


[0123] the width of the interval observations Zi is too narrow,


[0124] there is no single value t that is contained in all the interval measurements Zi, or


[0125] both of the above.


[0126] The first alternative means that the assumption regarding the accuracy of the measurement process is false. The second alternative means that the model for the single common value t is false.
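
A small Python sketch of this principle (with hypothetical data and an assumed error bound, chosen only for illustration): each observation interval Z=z+ε×[−1,1] must contain the common value t, so the intersection of several such intervals is a narrower enclosure of t, and an empty intersection exposes a faulty assumption:

```python
def observation_interval(z, eps):
    # Z = z + eps * [-1, 1]; the unknown true value t must lie in Z if the error model holds.
    return (z - eps, z + eps)

def intersect_all(intervals):
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo <= hi else None   # None: empty intersection, so a model or the data is wrong

# Hypothetical repeated measurements of one quantity, each with error bound 0.05:
Z = [observation_interval(z, 0.05) for z in (1.02, 0.98, 1.01, 0.97)]
print(intersect_all(Z))   # roughly (0.97, 1.02), narrower than any single observation
```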


[0127] Walster explored how this simple idea works in practice to compute an interval bound on a common value under various probability distributions for values of the random variable ε∈εI. (see G. W. Walster, "Philosophy and Practicalities of Interval Arithmetic," R. E. Moore, Editor, Reliability in Computing, pages 309-323, Academic Press, Inc., San Diego, Calif. 1988.) He also discussed how this estimation principle can be generalized and used to bound parameters of nonlinear models given bounded interval observations, or observation vectors. The following is a more complete elaboration of the nonlinear generalization.


[0128] General Development


[0129] Given exact values of a set of conditions ci=(ci1, . . . , cir)T under which observations zi=(zi1, . . . , ziq)T are made, assume the observation vectors, zi (i=1, . . . , n), satisfy the following model:


zi∈h(x|ci)+ε(x, ci)×[−1,1];  (15)


[0130] where the vectors 0≦ε(x, ci) bound unknown modeling and direct measurement errors. Specifically, if the p elements of x and the rn elements of all the ci were known (in practice they are not), assume it would be possible to compute intervals


εI(x, ci)=ε(x, ci)×[−1,1];  (16)


[0131] from which it follows immediately that


0∈zi−h(x|ci)+εI(x, ci).  (17)


[0132] Note that (16) is a generalization of (12), and (17) is a generalization of (14) if written in the form 0 ∈ Z−t.


[0133] If (as is normally the case) the conditions ci under which measurements zi are made are not known, but are contained in intervals cIi, then taking all bounded modeling and observation errors into account:


0∈zi−h(x|cIi)+εI(x, cIi).  (18)


[0134] In this general form, the widths of interval measurements zIi are themselves functions of both the unknown parameters x and fallibly measured conditions under which the measurements are made. That is:




zIi=zi+εI(x, cIi).  (19)



[0135] This interval observation model is the generalization of Z=z+εI which is consistent with (13). The interval observation model (19) is needed to solve for interval bounds on the parameter vector x. If there is no solution for a given set of interval observation vectors zIi and interval bounds on measurement conditions cIi, then either:


[0136] the observation model h(x|cIi) is false;


[0137] the measurement error model εI(x, cIi) is false;


[0138] the computational system used to compute interval bounds on the elements of x is flawed; or


[0139] some combination of the above.


[0140] In this way, and by eliminating alternative explanations, the theory represented in h(x|cIi) or the observation error model represented in εI(x, cIi) can be proved to be false.


[0141] The System of Nonlinear Equations


[0142] To guarantee any computed interval xI is indeed a valid bound on the true value of x, the following must be true:


[0143] the given model h;


[0144] the interval bounds cIi on the conditions under which fallible measurements zi are made; and,


[0145] the model for interval bounds εI(x, cIi) on observation errors.


[0146] To be consistent with the given models, all the actual measurement vectors zi must satisfy relation (18). A logically equivalent, but more suggestive way to write this system of constraints is:




zi−h(x|cIi)+εI(x, cIi)=0 (i=1, . . . , n)  (20)



[0147] When used in (20), a possible value of x produces intervals that contain zero for all i. Any value of x that fails to do this cannot be in the solution set of (20). Thus, (20) is just an interval system of nonlinear equations in the unknown parameter vector x. The problem is that the total number of scalar equations nq might be much larger than the number p of scalar unknowns in the parameter vector x. Point (rather than interval) systems of equations where p<nq are called “overdetermined”. For interval nonlinear equations, this is a misnomer because the interval equations might or might not be consistent. As mentioned above, inconsistency (an empty solution set) is an informative event.
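
For the scalar case (q=1), membership in the solution set of (20) might be checked as in the following Python sketch; h_model and err_bound are hypothetical stand-ins for the observation model h(x|ci) and the error-bound model ε(x, ci), and directed rounding is again omitted:

```python
def in_solution_set(x, z_obs, conditions, h_model, err_bound):
    """Return True if the candidate parameter vector x satisfies relation (20), i.e.
    z_i - h(x | c_i) + eps^I(x, c_i) contains zero for every observation i."""
    for z, c in zip(z_obs, conditions):
        eps = err_bound(x, c)                     # half-width of eps^I(x, c) = eps * [-1, 1]
        residual_lo = z - h_model(x, c) - eps     # left endpoint of z_i - h(x|c_i) + eps^I
        residual_hi = z - h_model(x, c) + eps     # right endpoint
        if not (residual_lo <= 0.0 <= residual_hi):
            return False                          # x cannot be in the solution set of (20)
    return True
```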


[0148] Solving Nonlinear Equations


[0149] Let f: Rn→Rm (n≦m) be a continuously differentiable function. The parameter estimation problem described above is just a special case of the more general problem now considered. Table 1 below shows the correspondence between the parameter estimation problem and equivalent nonlinear equations to be solved. Both unknowns and equations are shown.
TABLE 1
Correspondence between Parameter Estimation and Nonlinear Equations

              Parameter Estimation                             Nonlinear Equations
Unknowns      xI=(xI1, . . . , xIp)T                           xI=(X1, . . . , Xp)T
Equations     zi−h(x|cIi)+εI(x, cIi)=0 (i=1, . . . , n)        f=(f1, . . . , fm)T=0, where m=nq


[0150] Having established this correspondence, the problem becomes to find and bound all the solution vectors of f(x)=0 in a given initial box xI(0). For non-interval methods, it can sometimes be difficult to find reasonable bounds on a single solution, quite difficult to find reasonable bounds on all solutions, and generally impossible to know whether reasonable bounds on all solutions have been found. In contrast, it is a straightforward problem to find reasonable bounds on solutions in xI(0) using interval methods; and it is trivially easy to computationally determine that all solutions in xI(0) have been bounded. What is unusual in this problem is that the order m of f can be greater than the order p of xI. A factor that simplifies obtaining solution(s) is the assumption that the equations are consistent. This has the effect of reducing the number of equations at a solution to the number of variables.
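
The claim that all solutions in xI(0) can be found and provably bounded follows a branch-and-prune pattern, sketched below in Python for a single variable (the contract argument stands for any step, such as the interval Newton step developed next, that shrinks a box or proves it contains no solution; the names and the progress threshold are illustrative assumptions):

```python
def branch_and_prune(contract, X0, tol=1e-8):
    """Bound all zeros of f in the initial box X0 = (lo, hi).
    contract(X) returns a sub-box of X containing every zero of f in X, or None
    if X can contain no zero."""
    work, results = [X0], []
    while work:
        X = work.pop()
        Xc = contract(X)
        if Xc is None:
            continue                           # no zero of f in X: discard the box
        lo, hi = Xc
        if hi - lo <= tol:
            results.append(Xc)                 # narrow enough: report as a solution enclosure
        elif hi - lo > 0.9 * (X[1] - X[0]):
            mid = 0.5 * (lo + hi)              # too little progress: split the box
            work.extend([(lo, mid), (mid, hi)])
        else:
            work.append(Xc)                    # good progress: keep contracting
    return results
```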


[0151] Linearization and Gaussian Elimination


[0152] Let x and y be points in a box xI. Suppose we expand each component ƒi (i=1, . . . , m) of f by one of the procedures commonly used to linearize nonlinear equations to be solved using the interval Newton method. Define the matrix of partial derivatives of the elements ƒi of f with respect to the elements xj (j=1, . . . , p) of x:
Jij=∂ƒi(x)/∂xj.


[0153] If n=m, the system is square, and J is the Jacobian of f. This is the usual situation in which the interval Newton method is applied. (see [Hansen] E. R. Hansen, “Global Optimization Using Interval Analysis,” Marcel Dekker, Inc., New York, 1992).


[0154] In passing it is worth noting that in place of partial derivatives, slopes can be used to good advantage. Slopes have narrower width than interval bounds on derivatives and might exist when derivatives are undefined. Nevertheless, the remaining development uses derivatives as they are more familiar than slopes.


[0155] Combining the results in vector form:


f(y)∈f(x)+J(x, xI)(y−x)  (21)


[0156] Even in the non-square situation, J is still referred to herein as the Jacobian of f. The notation J(x, xI) is used to emphasize the fact that a tighter expansion of ƒ can be obtained if both point and interval values of x elements are used to compute Jacobian matrix elements (see [Hansen]).


[0157] If y is a zero of f, then f(y)=0, and (21) is replaced by,




f(x)+J(x, xI)(y−x)=0.  (22)



[0158] Define the solution set of (22) to be




s={y|ƒi(x)+[J(x, x′(i))(y−x)]i=0, x′(i)∈xI (i=1, . . . , n)}.



[0159] This set contains any point y ∈ xI for which f(y)=0.


[0160] The smaller the box xI, the smaller the set s. The object of an interval Newton method is to reduce xI until s is as small as desired so that a solution point y ∈ xI is tightly bounded. Note that s is generally not a box.


[0161] Normally, the system of linear equations in (22) is solved using any of a variety of interval methods. In the present situation if n<m, the linear system is overdetermined and therefore appears to be inconsistent. This is not necessarily the case. If the procedure described above for interval linear equations is used to compute an interval bound yI on y, then yI contains the set of consistent solutions s.


[0162] For the solution of (22), the standard and distinctive notation N(x, xI) is used in place of yI. This emphasizes the solution's dependence on both x and xI.


[0163] From (22), define an iterative process of the form




f
(x)+J(x, xI)(N(x(k),xI(k))−x)=0  (23a)





x


I(k+1)


=x


I(k)


∩N
(x(k), xI(k))  (23b)



[0164] for k=0,1,2, . . . , where x(k) must be in xI(k). A good choice for x(k) is the center m(xI(k)) of xI(k). For details on computing N(x(k), xI(k)) when the system of interval linear equations appears to be overdetermined, see [Hansen].
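
For orientation, the following Python sketch carries out the iteration (23a)-(23b) in the simplest one-variable case, where solving the linearized equation reduces to N(x, xI)=x−f(x)/F′(xI), with F′(xI) an interval enclosure of the derivative over xI that does not contain zero. Directed rounding and the overdetermined case (handled above via the FIG. 6 procedure) are omitted, so this is an illustration only:

```python
def newton_step(f, fprime_enclosure, X):
    """One step of (23a)-(23b) for a single variable: intersect X with N(x, X)."""
    lo, hi = X
    x = 0.5 * (lo + hi)                        # x^(k): the center m(x^I(k))
    fl, fh = fprime_enclosure(X)               # interval enclosure of f' over X (no zero)
    q = (f(x) / fl, f(x) / fh)
    N = (x - max(q), x - min(q))               # N(x^(k), x^I(k)) from (23a)
    new = (max(lo, N[0]), min(hi, N[1]))       # intersection (23b)
    return new if new[0] <= new[1] else None   # None: f has no zero in X

# Example: enclose the root of f(t) = t*t - 2 (that is, sqrt(2)) starting from [1, 2].
f = lambda t: t * t - 2.0
fp = lambda X: (2.0 * X[0], 2.0 * X[1])        # f'(t) = 2t is increasing on [1, 2]
X = (1.0, 2.0)
for _ in range(5):
    X = newton_step(f, fp, X)
print(X)                                       # a tight enclosure of 1.41421356...
```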


[0165] Summary of Parameter Estimation in Nonlinear Models


[0166]
FIG. 8 presents a flow chart summarizing the process of computing interval parameter bounds from fallible measurements in accordance with an embodiment of the present invention. During operation, the system receives a set of measurements z1, . . . , zn (step 802), as well as values for measurement conditions c1, . . . , cn under which the corresponding observations zi were made (step 804). Next, the system forms a system of interval nonlinear equations zi−h(x|ci)+εI(x, ci)=0 (i=1, . . . , n) based on a nonlinear model h and an error model ε (step 806).


[0167] The system then uses standard interval Newton techniques described above to solve the system of nonlinear equations to determine interval parameter bounds for x. More specifically, the system linearizes the system of nonlinear equations (step 808), and then solves the system of linear equations using the technique described above with reference to FIG. 6 (step 810). The system then intersects the solution with the given box (step 812). Next, the system determines if the solution has converged to be within specified tolerances (step 814). If so, the system stops. Otherwise, the system applies the interval Newton procedure for splitting if needed (step 816) and returns to step 808 to linearize the system of equations again.


[0168] Conclusion for Parameter Estimation in Nonlinear Models


[0169] Computing bounds on nonlinear parameters from fallible observations is a pervasive problem. In the presence of uncertain observations, attempting to capture uncertainty with Gaussian error distributions is problematic when nonlinear functions of observations are computed.


[0170] The procedure described in this specification uses the interval solution of a system of nonlinear equations to compute bounds on nonlinear parameters from fallible data. Among the many advantages of this approach is the ability to aggregate data from independent experiments, thereby continuously narrowing interval bounds. Whenever different interval results are inconsistent, or if the set of interval bounds from a given data set is empty, this proves an assumption is violated or the model for the observations is measurably wrong.


[0171] Narrower parameter bounds can be computed from a calibrated system. It is interesting to note that the same procedure as described above can be used to solve the calibration problem. All that must be done is to modify (8) in the following way:


[0172] replace selected unknown parameter values with their now measured bounds xIi; and


[0173] solve for narrower bounds on any parameters in the model for εI(x, ci).


[0174] The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.


Claims
  • 1. A method for computing interval parameter bounds from fallible measurements, comprising: receiving a set of measurements z1, . . . , zn, wherein an observation model describes each zi as a function of a p-element vector parameter x=(x1, . . . , xp); storing the set of measurements z1, . . . , zn in a memory in a computer system; forming a system of nonlinear equations zi−h(x)=0 (i=1, . . . , n) based on the observation model; and solving the system of nonlinear equations to determine interval parameter bounds on x.
  • 2. The method of claim 1, wherein the system of nonlinear equations is an “overdetermined system” in which there are more equations than unknowns.
  • 3. The method of claim 1, wherein each measurement zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, and h is actually a q-element vector of functions h=(h1, . . . , hq)T.
  • 4. The method of claim 1, wherein receiving the set of measurements involves receiving values for a set of conditions c1, . . . , cn under which the corresponding observations zi were made; and wherein equations in the system of nonlinear equations account for the conditions ci and are of the form zi−h(x|ci)=0 (i=1, . . . , n).
  • 5. The method of claim 4, wherein each condition ci is actually an r-element vector of conditions ci=(cil, . . . , cir)T.
  • 6. The method of claim 4, wherein each condition ci is not known precisely but is contained within an interval cIi.
  • 7. The method of claim 4, wherein equations in the system of nonlinear equations are of the form zi−h(x|ci)+εI(x, ci)=0 (i=1, . . . , n), which includes an error model εI(x, ci) that provides interval bounds on measurement errors for zi.
  • 8. The method of claim 7, wherein if zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, then εI is actually a q-element vector εI=(ε1, . . . , εq)T.
  • 9. The method of claim 7, wherein if there exists no solution to the system of nonlinear equations, the method further comprises determining that at least one of the following is true: at least one of the set of measurements z1, . . . , zn is faulty; the observation model h(x|ci) is false; the error model εI(x, ci) is false; and the computational system used to compute interval bounds on elements of x is flawed.
  • 10. The method of claim 1, wherein solving the system of nonlinear equations involves: linearizing the system of nonlinear equations to form a corresponding system of linear equations; and solving the system of linear equations.
  • 11. The method of claim 10, wherein solving the system of nonlinear equations involves using Gaussian Elimination.
  • 12. A computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for computing interval parameter bounds from fallible measurements, the method comprising: receiving a set of measurements z1, . . . , zn, wherein an observation model describes each zi as a function of a p-element vector parameter x=(x1, . . . , xp); storing the set of measurements z1, . . . , zn in a memory in a computer system; forming a system of nonlinear equations zi−h(x)=0 (i=1, . . . , n) based on the observation model; and solving the system of nonlinear equations to determine interval parameter bounds on x.
  • 13. The computer-readable storage medium of claim 12, wherein the system of nonlinear equations is an “overdetermined system” in which there are more equations than unknowns.
  • 14. The computer-readable storage medium of claim 12, wherein each measurement zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, and h is actually a q-element vector of functions h=(h1, . . . , hq)T.
  • 15. The computer-readable storage medium of claim 12, wherein receiving the set of measurements involves receiving values for a set of conditions c1, . . . , cn under which the corresponding observations zi were made; and wherein equations in the system of nonlinear equations account for the conditions ci and are of the form zi−h(x|ci)=0 (i=1, . . . , n).
  • 16. The computer-readable storage medium of claim 15, wherein each condition ci is actually an r-element vector of conditions ci=(cil, . . . , cir)T.
  • 17. The computer-readable storage medium of claim 15, wherein each condition ci is not known precisely but is contained within an interval cIi.
  • 18. The computer-readable storage medium of claim 15, wherein equations in the system of nonlinear equations are of the form, zi−h(x|ci)+εI(x, ci)=0 (i=1, . . . , n), which includes an error model εI(x, ci) that provides interval bounds on measurement errors for zi.
  • 19. The computer-readable storage medium of claim 18, wherein if zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, then εI is actually a q-element vector εI=(ε1, . . . , εq)T.
  • 20. The computer-readable storage medium of claim 18, wherein if there exists no solution to the system of nonlinear equations, the method further comprises determining that at least one of the following is true: at least one of the set of measurements zi, . . . , zn is faulty; the observation model h(x|ci) is false; the error model εI(x, ci) is false; and the computational system used to compute interval bounds on elements of x is flawed.
  • 21. The computer-readable storage medium of claim 12, wherein solving the system of nonlinear equations involves: linearizing the system of nonlinear equations to form a corresponding system of linear equations; and solving the system of linear equations.
  • 22. The computer-readable storage medium of claim 21, wherein solving the system of nonlinear equations involves using Gaussian Elimination.
  • 23. An apparatus that computes interval parameter bounds from fallible measurements, comprising: a receiving mechanism configured to receive a set of measurements z1, . . . , zn, wherein an observation model describes each zi as a function of a p-element vector parameter x=(x1, . . . , xp); a memory in a computer system for storing the set of measurements z1, . . . , zn; an equation forming mechanism configured to form a system of nonlinear equations zi−h(x)=0 (i=1, . . . , n) based on the observation model; and a solver configured to solve the system of nonlinear equations to determine interval parameter bounds on x.
  • 24. The apparatus of claim 23, wherein the system of nonlinear equations is an “overdetermined system” in which there are more equations than unknowns.
  • 25. The apparatus of claim 23, wherein each measurement zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, and h is actually a q-element vector of functions h=(h1, . . . , hq)T.
  • 26. The apparatus of claim 23, wherein the receiving mechanism is additionally configured to receive values for a set of conditions c1, . . . , cn under which the corresponding observations zi were made; and wherein equations in the system of nonlinear equations account for the conditions ci and are of the form zi−h(x|ci)=0 (i=1, . . . , n).
  • 27. The apparatus of claim 26, wherein each condition ci is actually an r-element vector of conditions ci=(cil, . . . , cir)T.
  • 28. The apparatus of claim 26, wherein each condition ci is not known precisely but is contained within an interval cIi.
  • 29. The apparatus of claim 26, wherein equations in the system of nonlinear equations are of the form zi−h(x|ci)+εI(x, ci)=0 (i=1, . . . , n), which includes an error model εI(x, ci) that provides interval bounds on measurement errors for zi.
  • 30. The apparatus of claim 29, wherein if zi is actually a q-element vector of measurements zi=(zil, . . . , ziq)T, then εI is actually a q-element vector εI=(ε1, . . . , εq)T.
  • 31. The apparatus of claim 29, wherein if there exists no solution to the system of nonlinear equations, the solver is configured to determine that at least one of the following is true: at least one of the set of measurements zi, . . . , zn is faulty; the observation model h(x|ci) is false; the error model εI(x, ci) is false; and the computational system used to compute interval bounds on elements of x is flawed.
  • 32. The apparatus of claim 23, wherein the solver is configured to: linearize the system of nonlinear equations to form a corresponding system of linear equations; and to solve the system of linear equations.
  • 33. The apparatus of claim 32, wherein the solver is configured to solve the system of nonlinear equations using Gaussian Elimination.
RELATED APPLICATIONS

[0001] This application hereby claims priority under 35 U.S.C. §119 to U.S. Provisional Patent Application No. 60/396,246, filed on Jul. 16, 2002, entitled, “Overdetermined (Tall) Systems of Nonlinear Equations,” by inventors G. William Walster and Eldon R. Hansen (Attorney Docket No. SUN-8507PSP).

Provisional Applications (1)
Number Date Country
60396246 Jul 2002 US