The present invention relates generally to techniques for determination of the global optimum of a function which represents a characteristic of a system, and more particularly to a technique using a greedy randomized adaptive search procedure.
Optimization problems arise in many disciplines of the natural, physical, and social sciences. Examples include materials science, biology, chemistry, genetics, military science, electrical engineering, robotics, economics, portfolio theory, and transportation science. In many of these problems, multiple local optima exist. Optima comprise maxima and minima. Global optimization methods attempt to overcome locally optimal solutions in their search for a globally optimal solution.
There have been many powerful local search techniques developed for optimization problems. Local search algorithms iteratively improve upon the current best solution by looking in a neighborhood of the current solution for a better solution. These local search techniques employ conditions to stop when a solution is found that is the best in a local neighborhood, hence finding a local optimum. Global optimization methods that perform more complex searches across the feasible region are advantageous. One of the main difficulties in global optimization problems lies in the inability to determine whether a local optimum is a global optimum. Although there exist methods for finding a local optimum in a finite number of iterations, it is well known that the global optimization problem is inherently unsolvable in a finite number of steps.
Methods for global optimization can be classified as deterministic or random. Examples of deterministic methods, which do not have any stochastic components, include branch-and-bound, covering methods, and generalized descent methods. Random, or stochastic, methods are methods that employ randomness in some way to aid in finding the global optimum. Although a pure random search may converge on the global optimum, it is too inefficient to be of practical value.
Different methods assume various a priori information. For example, some methods assume that the gradient or Hessian is available. Others assume that the function satisfies some regularity conditions, such as convexity or continuity. In addition, some methods require information on the feasible region. Methods which require certain a priori information on a problem have limited application, since this information may not be available in a particular application.
Therefore, global optimization methods which require less information about the structure of the problem are advantageous. Methods which are computationally efficient are further advantageous.
The global minimum of a function ƒ(x) representing a characteristic of a system comprising at least one object is generated. The value of the function is based at least in part on a first plurality of first data elements x. A first data element x0 is randomly selected from the first plurality of first data elements, and the value of the function ƒ(x0) corresponding to x0 is calculated. A smaller value of ƒ(x) is searched for by generating a second plurality of second data elements based at least in part on x0 and ƒ(x0). The values of the function corresponding to the second plurality of second data elements are then calculated. A second data element x1 is selected from the second plurality of second data elements wherein ƒ(x1) is less than ƒ(x0). In an embodiment of C-GRASP, the process wherein the second plurality of second data elements is generated and wherein x1 is selected is referred to as a “construction phase.”
A further smaller value of ƒ(x) is searched for by generating a third plurality of third data elements based at least in part on x1 and ƒ(x1). The values of the function corresponding to the third plurality of third data elements are then calculated. A third data element x2 is selected from the third plurality of third data elements wherein ƒ(x2) is less than ƒ(x1). In an embodiment of C-GRASP, the process wherein the third plurality of third data elements is generated and wherein x2 is selected is referred to as a “local-improvement phase.” The sequence of a construction phase followed by a local-improvement phase is iterated multiple times for an initial data element x0 to generate a local minimum in a region around x0. The entire process comprising multiple iterations of a sequence of a construction phase followed by a local-improvement phase is then repeated at a plurality of initial data elements to generate a plurality of local minima. In an embodiment of C-GRASP, the process of generating a plurality of local minima is referred to as a “total phase.” The smallest of the local minima is selected to be the global minimum.
Embodiments of C-GRASP introduce randomness in all three phases (construction, local improvement, and total) to increase the probability of generating local minima and the global minimum. Neighborhood searches are randomized searches, wherein the searches are based on a search grid. The probability of finding a local minimum increases with the density of the search grid. Performing a neighborhood search with a fixed high-density grid, however, is computationally inefficient. In an embodiment of C-GRASP, the search initially starts with a relatively low-density grid. When continued searches with the current grid size fail to yield improved values, the density of the grid is increased to increase the probability of finding an improved value. This adaptive procedure is computationally efficient. C-GRASP does not require a priori knowledge of the gradient of the function, and places few constraints on the domain of data elements. C-GRASP is applicable to global optimization problems in a wide variety of fields.
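For purposes of illustration only, the following Python sketch caricatures the three phases just described: random starting points (total phase), improving moves (construction and local improvement, here collapsed into a single simplified step), and adaptive grid refinement. The function and parameter names (c_grasp_sketch, n_starts, n_iters, h0) are hypothetical, and the full construction and local-improvement logic is elaborated later in this description.

```python
import random

def c_grasp_sketch(f, lower, upper, n_starts=30, n_iters=200, h0=0.5):
    """Highly simplified caricature of the three C-GRASP phases."""
    n = len(lower)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):                        # total phase: many random starts
        x = [random.uniform(lower[i], upper[i]) for i in range(n)]
        h = h0                                       # current grid density
        for _ in range(n_iters):
            # Improving move on the current grid (stands in for the
            # construction and local-improvement phases detailed later).
            i = random.randrange(n)
            trial = list(x)
            trial[i] = min(max(trial[i] + random.choice([-h, h]), lower[i]), upper[i])
            if f(trial) < f(x):
                x = trial
            else:
                h /= 2                               # adaptive: densify the grid
        if f(x) < best_f:                            # keep the smallest local minimum
            best_x, best_f = x, f(x)
    return best_x, best_f

# Example: a one-dimensional function with two local minima.
print(c_grasp_sketch(lambda x: (x[0]**2 - 1)**2 + 0.3 * x[0], [-2.0], [2.0]))
```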
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
Embodiments of C-GRASP are illustrated below with an application to the problem of sensor registration.
Sensor registration is a process applicable to any generic data measurement system comprising two or more sensors which collect sets of measurements from one or more objects. Herein, an “object” refers to an entity with characteristics that may be measured. Examples of objects include a car, an air mass, and a light beam. In the examples above, characteristics include the speed of a car, the temperature of an air mass, and the wavelength of a light beam. Objects may also include non-physical entities. In an economics application, for example, an object may include a business, and characteristics may include profit, number of employees, and cost of goods sold.
Herein, a “sensor” refers to an entity that may measure characteristics of an object. In the examples above, sensors include a speedometer for measuring the speed of a car, a thermometer for measuring the temperature of an air mass, and a spectrometer for measuring the wavelength of a light beam. Sensors are not restricted to physical instruments. A person may be a sensor (for example, counting chairs, observing colors, and feeling surface texture). Herein, a “database” collecting data may also be a “sensor.”
Herein, a set of measurements comprises “data elements”. Data elements may be measured from more than one object. For example, each temperature in a combined set of temperature measurements from five objects is a “data element”. Data elements comprise both measured values and mathematical functions of measured values. For example, if the temperatures of five objects are measured, the average of the five temperatures is also a data element. As another example, if angle is measured, the sine of the angle is also a data element.
For a single object whose characteristics are being measured by two or more sensors, differences in the measurements from different sensors may result from differences in the characteristics of the sensors themselves. Herein, “registration error” refers to the difference in the measurements of the same characteristic of the same object by multiple sensors, wherein the difference is a function of the characteristics of the sensors. Registration errors between sensors fall into two classes: random and systematic. Random registration errors are inherent in measurement systems and generally cannot be corrected. In a system measuring positions of objects, for example, the position sensors may be sensitive to temperature variations. If the temperature fluctuates, random registration errors may be generated. Similarly, in a system measuring temperatures at different locations, random registration errors may be generated if the temperature sensors are subjected to shock and vibration.
Systematic errors, on the other hand, are stable (or at least quasi-stable). In a system measuring the positions of objects, for example, the coordinates of a common fixed reference point may be offset in the different sensors. Similarly, in a system measuring temperatures, there may be a calibration error between the temperature sensors. Systematic errors are capable of being corrected.
If measurements from multiple objects are being collected by multiple sensors, the identities of the objects being measured must be determined. Herein, “object being measured” refers to an object whose characteristics are being measured by a sensor. Herein, “assignment” refers to specifying the correspondence between an object being measured by one sensor and an object being measured by another sensor. For example, if sensor A measures the temperatures of three objects, designated object A1, object A2, and object A3, and if sensor B measures the temperatures of two objects, designated object B1 and object B2, then, assignment of object A1 to object B2 means that object A1 and object B2 are the same object being measured by the two sensors.
Herein, “sensor registration” refers to a process wherein multiple objects being measured by multiple sensors are assigned and wherein systematic errors between the multiple sensors are corrected. If assignment of the objects is known, sensor registration may be performed by routine techniques such as least-squares estimation or Kalman filtering. If assignment of the objects is not known, sensor registration is more complex. A computationally efficient method for sensor registration wherein the assignment of objects is not known is advantageous.
An example of sensor registration is discussed here with respect to a radar system, shown schematically in
In this example, both the assignment of objects and the systematic error are unknown. A process is therefore required to determine which target detected by sensor B 104 is the same (if any) as a target detected by sensor A 102. In this example, systematic error arises from mapping between coordinate systems XA-YA and XB-YB. For example, the zero reference point, scale, and orientation of the two coordinate systems may be different. There are several approaches for resolving these two unknowns. In one approach, solutions to both unknowns are attempted simultaneously. In a second approach, the systematic error is first corrected without regard to assignment, and then proper assignment is established. This second approach may appear to be counter-intuitive, but an embodiment, discussed below, shows that it is advantageous.
An embodiment for performing sensor registration uses global optimization analysis. A generalized version of the example given above for a radar system is used to illustrate an embodiment. The number of targets detected by sensor A 102 is NA, and the number of targets detected by sensor B 104 is NB. The sensors detect the track of each target. Herein, “track” is a state function comprising characteristic parameters of the target. In this example, as discussed below, the track comprises position coordinates (X-Y-Z) and their covariances. The objective in this instance is to determine correct assignment of the targets and to correct systematic errors. In this example, determining correct “assignment of targets” refers to correctly determining which target measured by sensor A 102 is the same target (if any) measured by sensor B 104. The coordinate system of sensor A 102 is used as the reference coordinate system. Therefore, systematic errors are attributed to sensor B 104.
Variables PA(i) and CA(i) denote the position and covariance estimates of the i-th track detected by sensor A 102. Similarly, PB(j) and CB(j) denote the position and covariance estimates of the j-th track detected by sensor B 104. Covariance here refers to covariance of the variables in the track (for example, covariance of the X-Y-Z values). There is a one-to-one correspondence between a target and its track. Therefore i ranges from 1 to NA, and j ranges from 1 to NB. The method discussed here is illustrated for two sensors, but may apply to more than two. For example, one sensor may be chosen as a reference sensor, and each of the other sensors may then be registered to the reference sensor.
A function Ω denotes a function which characterizes the systematic error arising from data measured by sensor B 104. That is, Ω(PB(j)) and Ω(CB(j)) would remove the systematic error from the j-th track of sensor B 104. Then, the likelihood function which associates the i-th track of sensor A 102 with the j-th track of sensor B 104 can be written as a function of Ω:
for i=1, . . . , NA and j=1, . . . , NB,

tij=PA(i)−Ω(PB(j))  (Eqn. 2)

Sij=CA(i)+Ω(CB(j))  (Eqn. 3)
If the i-th track of sensor A 102 corresponds to the same target as the j-th track of sensor B 104, then the systematic error is a function of −Fij(Ω). That is, as the likelihood of association between these two tracks increases, the systematic error decreases. In this instance, one objective of sensor registration is to minimize −Fij(Ω).
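As an illustration of Eqns. 2-3, the following Python sketch computes tij and Sij for a hypothetical translation-only error model, in which Ω adds a constant offset b to positions and leaves covariances unchanged. The Gaussian form assumed for the association likelihood is likewise an illustrative assumption, since Ω and Fij are left general herein.

```python
import numpy as np

def residual_and_covariance(PA_i, CA_i, PB_j, CB_j, b):
    """Eqns. 2-3 under an assumed translation-only error model Omega."""
    t_ij = PA_i - (PB_j + b)        # Eqn. 2: t_ij = P_A(i) - Omega(P_B(j))
    S_ij = CA_i + CB_j              # Eqn. 3: S_ij = C_A(i) + Omega(C_B(j))
    return t_ij, S_ij

def log_likelihood(t_ij, S_ij):
    # Assumed Gaussian association likelihood built from t_ij and S_ij.
    k = len(t_ij)
    quad = t_ij @ np.linalg.solve(S_ij, t_ij)
    return -0.5 * (quad + np.log(np.linalg.det(S_ij)) + k * np.log(2 * np.pi))

# Example with 3-D position tracks (all numbers hypothetical).
PA = np.array([10.0, 5.0, 1.0]); CA = np.eye(3) * 0.04
PB = np.array([9.2, 4.6, 0.9]);  CB = np.eye(3) * 0.09
t, S = residual_and_covariance(PA, CA, PB, CB, b=np.array([0.8, 0.4, 0.1]))
print(log_likelihood(t, S))   # larger value => more likely the same target
```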
In the case in which assignment of objects is unknown, correct assignment of targets must be determined. Assignment may be characterized by the assignment variable ψij, where ψij=1 if the i-th track of sensor A 102 corresponds to the same target as the j-th track of sensor B 104, and ψij=0 otherwise. The objective of determining the correct assignment of targets and minimizing the systematic error may be formulated as:
finding the function Ω which minimizes the function F(Ω),
Herein, F(Ω) is referred to as a “sensor registration function.” The constraints in Eqns. 5-7 allow two situations. In the first, a target detected by sensor A 102 does not correspond to any target detected by sensor B 104. In the second, a target detected by sensor A 102 has a one-to-one correspondence to a target detected by sensor B 104.
Eqns. 4-7 constitute a mixed-integer non-linear programming problem, in which ψij are the binary variables, and the continuous variables are encapsulated in the Ω function. In one embodiment, a method for solving Eqns. 4-7 comprises decomposing the problem into two steps. The first step is to determine the “best” Ω, irrespective of the assignment variables. The first step may be formulated as:
Herein, {circumflex over (F)}(Ω) is referred to as a “systematic error function.” Finding the “best” Ω requires finding the global minimum of the systematic error function {circumflex over (F)}(Ω) as a function of Ω. The solution yields a correction factor which corrects the systematic errors.
Once the best Ω has been determined, a linear assignment method may then be used to determine the assignment variables between the data from sensor A 102 and the data, corrected for systematic errors, from sensor B 104.
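The second step may be illustrated with a standard linear assignment solver. The following sketch uses scipy.optimize.linear_sum_assignment; the no_assign_cost gate is a hypothetical parameter added so that a track of sensor A 102 may be left unassigned, consistent with the first situation allowed by Eqns. 5-7.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tracks(cost, no_assign_cost=5.0):
    """Linear assignment step after systematic-error correction.

    cost[i, j] is a dissimilarity such as -F_ij(Omega) between track i of
    sensor A and corrected track j of sensor B.
    """
    n_a, n_b = cost.shape
    # Pad with one dummy column per sensor-A track; choosing a dummy column
    # means that track is left unassigned.
    padded = np.hstack([cost, np.full((n_a, n_a), no_assign_cost)])
    rows, cols = linear_sum_assignment(padded)
    return [(i, j if j < n_b else None) for i, j in zip(rows, cols)]

cost = np.array([[0.2, 9.0, 8.0],
                 [7.5, 0.4, 9.1]])
print(assign_tracks(cost))   # [(0, 0), (1, 1)]
```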
Existing algorithms attempt to solve Eqns. 4-7 in one step: that is, to simultaneously determine assignment of targets and correct systematic errors. Some existing methods are complex and may involve embedding a non-linear programming algorithm in a branch-and-bound scheme. In the worst case, these algorithms are equivalent to enumerating all of the assignments and, for each assignment, solving a non-convex, non-linear optimization problem. Some existing algorithms require a good initial estimate of the systematic error. If the initial estimate is insufficiently accurate, they may converge on the wrong solution. Some existing algorithms suffer from a guaranteed exponential worst-case bound on the number of assignments considered. They are computationally inefficient.
The method used in an embodiment solves Eqns. 4-7 by decomposing the problem into a two-step problem. The first step is solving a non-linear problem. The second step involves solving a linear assignment problem. Existing methods for solving linear assignment problems are well characterized and computationally efficient. The decomposition method used in an embodiment has a number of advantages over prior art methods. These advantages include, but are not limited to, the following: it is computationally efficient; it does not require a good initial estimate; and it does not suffer from a guaranteed exponential worst-case bound on the number of assignments.
Finding the solution to Eqn. 8 falls into a class of mathematical problems referred to as global optimization problems. Optimization comprises both maximization and minimization. Since a maximization problem, however, can always be recast as a minimization problem, hereinafter, the phrase “global minimization problem” will be used to comprise both global minimization and global maximization problems.
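The recasting relies on the standard identity:

```latex
\max_{x \in S} f(x) = -\min_{x \in S}\bigl(-f(x)\bigr),
\qquad
\arg\max_{x \in S} f(x) = \arg\min_{x \in S}\bigl(-f(x)\bigr)
```

Thus any maximization instance is handled by minimizing the negated objective.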
A simple illustrative example of a global minimization problem is shown in the graph in
In some global minimization problems, the global minimum may be determined analytically. In general, however, numerical computation techniques are used. These techniques do not necessarily generate a “true” global minimum, because, in some problems, it may not be possible to establish the existence of a global minimum. Even if a global minimum does exist, the value generated by a numerical computation technique generally cannot be proven analytically to be the “true” global minimum. In general, the result of the numerical computation is an “estimate” of the global minimum. To simplify the terminology, herein, the term “global minimum” comprises the true global minimum for problems in which a global minimum may be analytically established and calculated, and an estimate of the global minimum otherwise. A numerical computation method for generating a global minimum is successful if it generates a close approximation of the true global minimum in problems for which the global minimum may be calculated analytically. More generally, such a method is successful if it generates solutions that prove satisfactory in a high percentage of applications. Techniques which are computationally efficient are advantageous.
An embodiment uses a numerical computation method for solving global minimization problems. It is referred to herein as “Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP)”. In one embodiment, C-GRASP is a metaheuristic for solving continuous global minimization problems subject to box constraints. Generalizing the example shown in
S={x=(x1, . . . , xn) ∈ ℝn : l≦x≦u},  (Eqn. 9)
where x, l, and u are n-dimensional vectors which are members of the n-dimensional space of real numbers ℝn. The domain is bounded over the region l≦x≦u, such that ui≧li for i=1, . . . , n.
For a function ƒ(x) in n-dimensional space which is mapped into one-dimensional space,
ƒ(x): ℝn→ℝ,
the global minimization problem is to find the value of x* which yields the minimum value of ƒ(x) within the region bounded by l≦x≦u. That is,
x*=argmin{ƒ(x)|l≦x≦u}. (Eqn. 10)
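As a concrete instance of Eqns. 9-10, the following Python fragment sets up a box-constrained minimization; the objective (a shifted quadratic) and the bounds are hypothetical values chosen only to make the notation concrete.

```python
import numpy as np

l = np.array([-5.0, -5.0])          # lower bounds (vector l)
u = np.array([ 5.0,  5.0])          # upper bounds (vector u)

def f(x):
    """Hypothetical test objective: a quadratic bowl centered at (1, -2)."""
    return float(np.sum((np.asarray(x) - np.array([1.0, -2.0]))**2))

# x* = argmin{ f(x) : l <= x <= u }  -- here, x* = (1, -2) with f(x*) = 0.
```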
One skilled in the art may apply embodiments of C-GRASP to other instances. For example, C-GRASP may operate in a space of complex numbers by mapping the real and imaginary components of a complex number to two real numbers. Other embodiments may apply to domains which are not subject to box constraints.
After the construction phase, a local-improvement phase attempts to find a “locally-improved” solution by conducting a search in the local neighborhood 412 around the greedy randomized solution x1 406. A solution is improved if the new value of ƒ(x) is less than the previous value of ƒ(x). Note that 412 is represented by the region within the square. This is for illustration only, and does not necessarily represent a region used in an embodiment. The locally-improved solution is represented by x2 408, indicated by an ellipse. Improvement results if ƒ(x2)<ƒ(x1). The point x2 408 is then used as the initial point for a second series of construction and local-improvement phases. This series is iterated a user-defined number of times to yield the local minimum in a region around the initial point x0 404. In the total phase, the entire sequence of construction phase followed by local-improvement phase is then repeated for an array of initial randomly chosen points, such as 414-420, within domain S 402. At each point, the result is a local minimum in a region around the point. The minimum of the set of local minima is then the global minimum within the domain.
In step 516, a line search is conducted to determine the minimum value of ƒ(x) as the i-th coordinate of x is varied while the other n−1 coordinates are held fixed. The value of the i-th coordinate which minimizes ƒ(x) is stored as the variable zi. The corresponding value of the function ƒ(zi) is stored as the variable gi. In step 518, the current minimum value of gi, denoted min, is compared with the current value gi. If gi<min, then in step 524, min is set to gi; min←gi. If gi is not less than min, then, in step 522, the current value of min is retained. In step 520, the current maximum value of gi, denoted max, is compared with the current value gi. If gi>max, then in step 528, max is set to gi; max←gi. If gi is not greater than max, then, in step 526, the current value of max is retained. Upon completion of all iterations in step 510, min is the best (minimum) value of gi for all values of i, and max is the worst (maximum) value of gi for all values of i.
After the line search has been performed for each unfixed coordinate, a restricted candidate list (RCL) is formed that contains the unfixed coordinates i whose corresponding values of gi meet a user-defined condition. In step 530, RCL is initialized to the null set; RCL←Ø. In step 532, steps 534-538 are iterated for i=1, 2, . . . , n. In step 534, if the following condition is met:
i ∈ U AND gi≦(1−α)·min+α·max, where α is a user-defined parameter, 0≦α≦1,
then, in step 538, i is added to RCL. If the condition is not met, then, in step 536, the value is not added to RCL.
In step 540, j is chosen at random from RCL; j←Random(RCL). In step 542, the value of xj is set to zj; xj←zj. Choosing a coordinate in this way ensures randomness in the construction phase. The value {j} is then removed from U. The process then returns to step 504, and steps 508-542 are repeated until all values of j have been chosen. At this time, U=Ø, and in step 506, the final value of x*={z1, z2, . . . , zn}, ƒ*=ƒ(x*) from the construction phase is generated. The final values of x* and ƒ*=ƒ(x*) generated from a construction phase are referred to herein as a “greedy randomized solution.” The flowchart presented in
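A minimal Python sketch of the construction phase follows. The discrete per-coordinate line search on a grid of spacing h and the default value of α are illustrative assumptions, since the flowchart does not prescribe the granularity of the line search.

```python
import random

def construction_phase(f, x, lower, upper, h, alpha=0.4):
    """Sketch of the construction phase (steps 504-542).

    Each unfixed coordinate is line-searched on the current grid of spacing h;
    coordinates whose best value g_i satisfies the RCL threshold
    (1 - alpha) * min + alpha * max become candidates, and one is fixed at random.
    """
    n = len(x)
    x = list(x)
    unfixed = set(range(n))                      # the set U
    while unfixed:
        g, z = {}, {}
        for i in unfixed:
            # Line search over grid points of spacing h in coordinate i,
            # holding the other n-1 coordinates fixed.
            best_v, best_t, t = float("inf"), x[i], lower[i]
            while t <= upper[i] + 1e-12:
                trial = list(x); trial[i] = t
                v = f(trial)
                if v < best_v:
                    best_v, best_t = v, t
                t += h
            g[i], z[i] = best_v, best_t
        lo, hi = min(g.values()), max(g.values())
        # Restricted candidate list (RCL): near-best coordinates only.
        rcl = [i for i in unfixed if g[i] <= (1 - alpha) * lo + alpha * hi]
        j = random.choice(rcl)                   # randomness in construction
        x[j] = z[j]                              # fix coordinate j (step 542)
        unfixed.remove(j)                        # U <- U \ {j}
    return x, f(x)
```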
This value of x is a “good” value. This value is then processed by a local-improvement phase to yield a locally-improved value. A locally-improved value (if any) is determined by a local neighborhood search around the initial value of x, such that the locally-improved value of ƒ(x) is less than the previous value of ƒ(x). An example of a local neighborhood search is illustrated in
The point 612 represents another initial value of x. The grid size in the neighborhood of 612 has a higher density, with a grid size of h=h0/2. That is, the grid length 616 and the grid height 614 are both h0/2. A larger grid size is computationally faster than a smaller one, but a smaller grid size provides higher resolution and increases the probability of finding an improved value. It is computationally inefficient, however, to start the search with a fine grid size. Instead, as discussed below, an adaptive procedure is used in which a coarse grid is first used to find improved solutions. The grid is then successively reduced when needed to find more improved solutions. Hereinafter, h is referred to as a “discretization parameter.” In the example of
The steps of the local-improvement phase are shown in the flowchart of
For large values of n, the number of directions may be too large to process efficiently. A user-defined parameter MaxD specifies the maximum number of directions to try. In step 704, the set of possible directions is calculated, and MaxD is specified. The process then continues to step 706, in which NumD, which tracks the number of local-improvement attempts, is compared with MaxD. The value NumD is initialized to 0, and the process continues to step 710. A set of MaxD direction vectors are randomly generated. From this available set, a direction is randomly selected. A trial solution is then generated in step 712:
x=x*+h·d
ƒ(x)=ƒ(x*+h·d).
In step 714, the new ƒ(x) is compared with the previous ƒ*. If ƒ(x)<ƒ*, then in step 718, x* and ƒ* are set to the new values; x*←x and ƒ*←ƒ(x). If ƒ(x) is not less than ƒ*, then, in step 716, the current values of x*,ƒ* are retained. The process then returns to step 706 for another iteration. When MaxD trials have been completed, then, in step 708, the final best values of x*,ƒ* are generated from the local-improvement phase. These are the values of the local minimum in a local region around the initial x. The final values of x* and ƒ*=ƒ(x*) generated from a local-improvement phase are referred to herein as a “locally-improved solution.”
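A corresponding sketch of the local-improvement phase follows. Drawing direction vectors from {−1, 0, 1}n is one plausible realization of the randomly generated directions, not the only one, and clamping trial points to the box is an added safeguard.

```python
import random

def local_improvement_phase(f, x_star, lower, upper, h, max_d=30):
    """Sketch of the local-improvement phase (steps 702-718).

    Up to MaxD randomly chosen direction vectors d are tried; each trial point
    x = x* + h*d is accepted whenever f(x) < f*.
    """
    n = len(x_star)
    x_star = list(x_star)
    f_star = f(x_star)
    for _ in range(max_d):
        d = [random.choice([-1, 0, 1]) for _ in range(n)]
        if all(c == 0 for c in d):
            continue                              # skip the null direction
        trial = [min(max(x_star[i] + h * d[i], lower[i]), upper[i])
                 for i in range(n)]               # stay inside the box
        if f(trial) < f_star:                     # step 714: improvement test
            x_star, f_star = trial, f(trial)      # step 718: accept
    return x_star, f_star
```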
Once the input parameters have been specified, NumR is compared to MaxNumR in step 806. The value NumR tracks the number of the runs of a major iteration. The value NumR is initially set to 0, and the process continues to step 808. In this step, a starting point x is initialized from a uniform random distribution of points within the domain S bounded by l, u. The discretization parameter h is initialized to a value h0.
In step 810, NumIt, which tracks the number of minor iterations, is compared with MaxIt. The value of NumIt is initially set to 0, and the process continues to step 812. In this step, the C-GRASP construction and local-improvement phases, discussed above, are executed to generate a solution x,ƒ(x). In step 816, this value of ƒ(x) is compared to ƒ*, the current minimum value of ƒ(x). The value of ƒ* is initially set to ∞. If the new value of ƒ(x) is less than ƒ*, the new value is an improved value. In step 814, x* is set to the current value of x, and ƒ* is set to the current value of ƒ(x). The parameter NumNo counts the number of times that step 812 has not generated an improved value. In step 814, NumNo is reset to 0. After the completion of step 814, the process returns to step 810. If the maximum number of minor iterations has not been reached, the process in step 812 attempts to generate a further improvement. Improvement continues as long as the newly generated ƒ(x) is less than the current ƒ*. Even if improvement continues to occur with each iteration, however, the improvement process will terminate when the maximum number of minor iterations has been completed.
Referring back to step 816, if there is no improvement, then, in step 818, the value of NumNo is incremented by one. In step 822, the value NumNo is compared with MaxNo, which is the user-defined maximum permissible number of minor iterations at the current value of h with no improvement. If the maximum number has not been reached, then, in step 820, the current value of h is retained, and the process returns to step 810 for another iteration, until the maximum number MaxIt has been executed. Referring back to step 822, if MaxNo has been reached, then, in step 824, the value of h is reduced to h←h/2 to increase the probability of improvement. NumNo is reset to 0. The process then returns to step 810 for another minor iteration, until the maximum number of minor iterations has been executed. In step 810, when the maximum number of minor iterations has been executed (NumIt=MaxIt), the local search is completed. The local minimum x*, ƒ* is stored.
The process then returns to step 806, where the local search is repeated for another initial point. In step 806, when the number of major iterations NumR reaches the maximum number MaxNumR, the overall process stops, and, in step 804, the global minimum x*,ƒ* is generated from the minimum of the set of local minima.
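The total phase may then be sketched as a driver tying together the construction_phase and local_improvement_phase sketches above; the parameter names mirror MaxNumR, MaxIt, MaxNo, and h0 from the flowchart.

```python
import random

def c_grasp(f, lower, upper, max_num_r=20, max_it=100, max_no=10, h0=1.0):
    """Sketch of the total phase (steps 804-824), reusing the
    construction_phase and local_improvement_phase sketches above."""
    best_x, best_f = None, float("inf")
    for _ in range(max_num_r):                       # major iterations (NumR)
        x = [random.uniform(l, u) for l, u in zip(lower, upper)]
        h, f_star, x_star, num_no = h0, float("inf"), x, 0
        for _ in range(max_it):                      # minor iterations (NumIt)
            x, fx = construction_phase(f, x_star, lower, upper, h)
            x, fx = local_improvement_phase(f, x, lower, upper, h)
            if fx < f_star:                          # step 816: improvement
                x_star, f_star, num_no = x, fx, 0
            else:
                num_no += 1
                if num_no >= max_no:                 # step 822: too many failures
                    h, num_no = h / 2, 0             # step 824: densify the grid
        if f_star < best_f:                          # keep smallest local minimum
            best_x, best_f = x_star, f_star
    return best_x, best_f

# Example run on the two-dimensional test objective f defined earlier:
# print(c_grasp(f, [-5.0, -5.0], [5.0, 5.0]))
```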
The above embodiments may be enhanced to provide faster execution. Enhancements to all three phases (construction phase, local-improvement phase, and total phase) are described herein.
In step 904, U is checked to see whether it is null, U=Ø. If it is not, the process continues to step 908, in which the variables min and max are initialized: min←+∞;max←−∞. In step 910, steps 912-932 are iterated for i=1, 2, . . . , n. In step 912, i is checked to see whether it is an element of U. If it is, the process continues to step 916. If it is not, the process stops in step 914 for that value of i. In step 916, the ReUse parameter, discussed below in relation to step 950, is checked. If it is false, the process stops in step 930. If it is true, the process continues to step 918.
In step 918, a line search is conducted to determine the minimum value of ƒ(x) as the i-th coordinate of x is varied while the other n−1 coordinates are held fixed. The value of the i-th coordinate which minimizes ƒ(x) is stored as the variable zi. The corresponding function ƒ(zi) is stored as the variable gi. In step 920, the current minimum value of gi, denoted min, is compared with the current value gi. If gi<min, then in step 924, min is set to gi; min←gi. If gi is not less than min, then, in step 922, the current value of min is retained. In step 926, the current maximum value of gi, denoted max, is compared with gi. If gi>max, then in step 932, max is set to gi; max←gi. If gi is not greater than max, then, in step 928, the current value of max is retained. Upon completion of all iterations in step 910, min is the best (minimum) value of gi for all values of i, and max is the worst (maximum) value of gi for all values of i.
After the line search has been performed for each unfixed coordinate, a restricted candidate list (RCL) is formed that contains the unfixed coordinates i whose corresponding values of gi meet a user-defined condition. In step 934, RCL is initialized to the null set; RCL←Ø. In step 936, steps 938-942 are iterated for i=1, 2, . . . , n. In step 938, if the following condition is met:
i ∈ U AND gi≦min+α·(max−min),
then i is added to RCL; if the condition is not met, i is not added to RCL. Note that this threshold is algebraically equivalent to the condition (1−α)·min+α·max used in step 534.
In step 944, j is chosen at random from RCL; j←Random(RCL). Choosing a coordinate in this way ensures randomness in the construction phase. In step 946, if xj=zj, then, in step 950, ReUse←true. Under these circumstances, there is no need to repeat the line search in that direction. Instead, the previously generated value may be reused. Computation time is thereby reduced. In step 946, if xj=zj is not true, the process continues to step 948, in which xj←zj; ReUse←false; and ImprC←true. The parameter ImprC tracks whether the construction phase has generated an improved value. It will be used below in the enhanced total phase. If it has generated an improved value, then ImprC←true. In step 952, the value {j} is then removed from U. The process then returns to step 904, and steps 908-952 are iterated until all values of j have been chosen. At this time, U=Ø, and in step 906, the final value of x*={z1, z2, . . . , zn}, ƒ*=ƒ(x*) from the construction phase is generated.
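The ReUse shortcut may be sketched as follows. The helper line_search_all simply bundles the per-coordinate line searches of the earlier construction sketch; the flag names re_use and impr_c mirror the flowchart's ReUse and ImprC.

```python
import random

def line_search_all(f, x, unfixed, lower, upper, h):
    """Per-coordinate grid line search, as in the construction sketch earlier.
    Returns dicts g (best objective) and z (best value) per unfixed coordinate."""
    g, z = {}, {}
    for i in unfixed:
        best_v, best_t, t = float("inf"), x[i], lower[i]
        while t <= upper[i] + 1e-12:
            trial = list(x); trial[i] = t
            if f(trial) < best_v:
                best_v, best_t = f(trial), t
            t += h
        g[i], z[i] = best_v, best_t
    return g, z

def enhanced_construction(f, x, lower, upper, h, alpha=0.4):
    """Sketch of the enhanced construction phase (steps 904-952): if the chosen
    coordinate already sits at its line-search optimum, the previous g and z
    values remain valid and the line searches are not repeated."""
    x, unfixed = list(x), set(range(len(x)))
    re_use, impr_c, g, z = False, False, {}, {}
    while unfixed:
        if not re_use:                       # step 916: skip redundant searches
            g, z = line_search_all(f, x, unfixed, lower, upper, h)
        lo = min(g[i] for i in unfixed)
        hi = max(g[i] for i in unfixed)
        rcl = [i for i in unfixed if g[i] <= lo + alpha * (hi - lo)]
        j = random.choice(rcl)               # step 944: random choice from RCL
        if x[j] == z[j]:
            re_use = True                    # step 950: reuse previous results
        else:
            x[j], re_use, impr_c = z[j], False, True   # step 948
        unfixed.remove(j)                    # step 952
    return x, f(x), impr_c
```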
The enhanced local-improvement phase uses an enhanced local neighborhood search method.
Formally, the neighborhood comprising the search points at the corners of the square grids about a point x̄ is

Sh(x̄)={x ∈ S : x=x̄+τ·h, τ ∈ Zn},

where Z indicates the space of integers. The search points in the h-neighborhood are defined by the set of points:

Bh(x̄)={x : x=x̄+h·(x′−x̄)/∥x′−x̄∥, x′ ∈ Sh(x̄), x′≠x̄}.

The points in Bh(x̄) are thus the projections of the points in Sh(x̄) onto the hyper-sphere of radius h centered at x̄.
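Following the projection definition above, a sketch of generating h-neighborhood points is given below; the caller-supplied integer offsets (taus) stand in for whatever enumeration or sampling of Zn an embodiment uses.

```python
import numpy as np

def h_neighborhood(x_bar, lower, upper, h, taus):
    """Project grid points of S_h(x_bar) onto the sphere of radius h about
    x_bar, yielding points of B_h(x_bar); infeasible points are discarded."""
    points = []
    for tau in taus:
        x_prime = x_bar + h * np.asarray(tau, dtype=float)   # a point of S_h
        if np.array_equal(x_prime, x_bar):
            continue                                          # exclude x' = x_bar
        # Projection onto the hyper-sphere of radius h centered at x_bar.
        x = x_bar + h * (x_prime - x_bar) / np.linalg.norm(x_prime - x_bar)
        if np.all(x >= lower) and np.all(x <= upper):         # keep feasible points
            points.append(x)
    return points

x_bar = np.array([0.0, 0.0])
taus = [(1, 0), (0, 1), (-1, 0), (1, 1), (2, 1)]
for p in h_neighborhood(x_bar, np.array([-1, -1]), np.array([1, 1]), 0.5, taus):
    print(np.round(p, 3), "distance:", round(float(np.linalg.norm(p - x_bar)), 3))
```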
The flowchart in
In step 1206, the process checks whether stopping criteria have been met. In the first embodiment discussed above, the stopping criteria in
Continuing to step 1214, the C-GRASP enhanced construction and enhanced local-improvement phases are executed to generate a solution x,ƒ(x). In step 1216, ƒ(x) is compared with the current best (minimum) value ƒ*. The value ƒ* is initialized to ∞, and the process proceeds to step 1218, in which the best values are set to the current values; x*←x and ƒ*←ƒ(x). The process then returns to step 1210. Referring back to step 1216, if there has been no improvement, the process continues to step 1220. If there has been no improvement in both the construction phase AND the local-improvement phase, then, in step 1222, the value of h is reduced by a factor of two to increase the probability of improvement in the next iteration; h←h/2. If there has been improvement in at least one of the phases, then, in step 1224, the current value of h is retained.
The process then returns to step 1210 for the next iteration. The iterations continue until h drops below the ending value he. In the first embodiment, shown in
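The enhanced grid-update rule may be sketched as follows. It assumes the enhanced_construction sketch above and a hypothetical counterpart enhanced_local_improvement that returns an ImprL flag along with its solution; h_e denotes the user-defined ending grid size.

```python
def enhanced_total_phase(f, lower, upper, x0, h0, h_e):
    """Sketch of the enhanced stopping/densification rule (steps 1206-1224)."""
    x, h = list(x0), h0
    x_star, f_star = list(x0), f(x0)
    while h >= h_e:                              # step 1206: stop when h < h_e
        x, fx, impr_c = enhanced_construction(f, x, lower, upper, h)
        x, fx, impr_l = enhanced_local_improvement(f, x, lower, upper, h)
        if fx < f_star:
            x_star, f_star = list(x), fx         # step 1218: keep improvement
        if not (impr_c or impr_l):               # no improvement in either phase
            h /= 2                               # step 1222: densify the grid
    return x_star, f_star
```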
One embodiment of a data processing system which performs sensor registration and C-GRASP processing may be implemented using a computer. As shown in
Embodiments of C-GRASP may be used in a variety of applications. In an economics application, for example, the function may represent a profit of a business, and the data elements may comprise price of goods sold, cost of goods sold, employee salaries, taxes, and operating expenses. In a semiconductor manufacturing application, for example, the function may represent a surface roughness of a wafer in a chemical polishing system comprising a wafer, a rotating-platen polishing apparatus, and a polishing solution, and the data elements may comprise chemical composition of the wafer, chemical composition of the polishing solution, rotational speed of the rotating platen, and temperature of the polishing solution. In a manufacturing application, for example, the function may represent the position of a robot arm comprising a plurality of mechanical segments, and the data elements may comprise translations and rotations of the segments.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application is related to U.S. patent application Ser. No. ______ (Attorney Docket No. 2006-A2059), entitled Sensor Registration by Global Optimization Procedures, which is being filed concurrently herewith and which is herein incorporated by reference in its entirety.