Next alternative generating apparatus using simulated annealing and method thereof

Information

  • Patent Number: 6,516,307
  • Date Filed: Monday, February 28, 2000
  • Date Issued: Tuesday, February 4, 2003
Abstract
A multi-dimensional Gaussian distribution generating portion obtains one vector from a multi-dimensional Gaussian probability distribution and outputs it as an initial momentum. An approximated energy function differential generating portion generates a gradient of an approximated energy function. A Hamilton equation integral portion integrates a Hamilton equation using the gradient of the approximated energy function for a predetermined integral period with initial conditions of an initial state and the initial momentum. The result of the integration is output from the next alternative generating apparatus as the state of the next alternative. The state is used as the next state with a high probability corresponding to the difference of approximated energy calculated by an approximated energy difference calculating portion and to the difference between the value of the original energy in the present state and that in the next state.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




The present invention relates to a processing apparatus for solving an optimizing problem using simulated annealing (hereinafter, referred to as SA) and a method thereof.




2. Description of the Related Art




In recent years, in order to solve problems in various fields, it has become necessary to solve relatively large optimizing problems. The SA is a general-purpose method for solving such optimizing problems. The SA has been successfully used for optimizing problems such as cell placing problems in VLSI (very large scale integration) design and scheduling in production processes.




In the annealing method, a substance is heated to a high temperature and then gradually cooled so that its energy physically reaches the minimum state (ground state). In a system to be evaluated, an evaluating energy is defined. An annealing process of the system is simulated according to the metropolis procedure so as to optimize the state of the system. This technique is referred to as SA.




Next, the algorithm of the conventional SA will be described. First, assume a system that is composed of many objects, each of which can take various states. The combination of the states of all the objects is the state of the system. Energy is defined as a real function of the state of the system. When the SA is applied to a combinatorial problem, the energy is defined corresponding to an evaluation value of the combination; the lower the energy, the higher the evaluation.




Next, objects are selected in a proper order. For each object, the metropolis procedure is applied. States of alternatives with probabilities are designated to each object. A system energy E_f in the state of an alternative designated to each object and a system energy E_n in the present state are calculated. On the other hand, random numbers in the range from 0 to 1 are uniformly generated. The generated random number is denoted by r. A temperature parameter is denoted by T. When exp[−(E_f−E_n)/T]≧r, each object is designated to the state of the alternative. On the other hand, when exp[−(E_f−E_n)/T]<r, each object is kept in the present state.




When this operation is continued with T kept constant, the system converges to a Gibbs distribution determined by the temperature T and the system energy. When T is gradually decreased and approaches 0, the system converges to a state that gives the minimum value or a semi-minimum value of the energy. This technique is referred to as SA due to an analogy with a physical phenomenon. In particular, the temperature change schedule is referred to as an annealing schedule. Theoretically, it has been proved that when the temperature T is decreased very gradually, the system converges to a state that gives the minimum value of the energy. However, practically, so as to reduce the process time, the temperature is decreased relatively quickly.
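The conventional procedure described above can be summarized in a short sketch. The following is a minimal illustration only, assuming a user-supplied energy function, a neighbor-generating routine and a geometric annealing schedule; the function names and the cooling parameters are not taken from the patent.

import math
import random

def simulated_annealing(energy, neighbor, initial_state,
                        t_start=10.0, t_end=0.01, alpha=0.95, steps_per_t=100):
    """Plain SA with the metropolis acceptance test described above."""
    state = initial_state
    e_now = energy(state)                  # E_n: energy of the present state
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = neighbor(state)    # state of the next alternative
            e_next = energy(candidate)     # E_f
            r = random.random()            # uniform random number in [0, 1)
            # accept when exp[-(E_f - E_n)/T] >= r (always accept when E_f <= E_n)
            if e_next <= e_now or math.exp(-(e_next - e_now) / t) >= r:
                state, e_now = candidate, e_next
            # otherwise keep the present state
        t *= alpha                         # annealing schedule: decrease T gradually
    return state, e_now

Only energy and neighbor are problem-specific here; the acceptance test and the cooling loop form the general-purpose part of the SA.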




The SA has the following advantages. As a first advantage, it is theoretically assured that the system converges to an optimum solution; in reality, the system converges to an optimum solution or a semi-optimum solution. As a second advantage, since the SA is a general-purpose method, it can be easily applied to various problems. As a third advantage, the functions that represent the system energy or restricting conditions have almost no restrictions and may be discontinuous.




However, as described above, since the algorithm of the SA is basically a sequence of simple operations that determine whether or not to accept the state of the next alternative, it may take a very long time. In addition, the acceptance of the state of the alternative depends on the difference between the energy in the present state and that in the state of the alternative. Thus, the state is not always accepted. When the state of the alternative is not accepted, since the system remains in the present state, the convergence of the system is further delayed.




In other words, when the SA is used, a good solution can be finally obtained. However, it takes a very long time to obtain the solution. Thus, it is necessary to perform the process at high speed.




SUMMARY OF THE INVENTION




An object of the present invention is to modify the algorithm of the SA so as to provide a processing apparatus for solving an optimizing problem at high speed and a method thereof.




As an aspect of the present invention, when determining whether or not to make a transition from the present state to the next state, an alternative of the next state is selected so that the probability of accepting the alternative of the next state becomes high. In other words, the probability of selecting the alternative of the next state is designated so that the alternative is accepted as the next state with a high probability. Thus, the simulated annealing process can be performed at high speed, thereby contributing to solving various optimizing problems.




These and other objects, features and advantages of the present invention will become more apparent in light of the following detailed description of best mode embodiments thereof, as illustrated in the accompanying drawings.











BRIEF DESCRIPTION OF DRAWINGS





FIG. 1 is a theoretical block diagram according to the present invention;

FIG. 2 is a block diagram showing a first construction of a next alternative generating apparatus according to an embodiment of the present invention;

FIG. 3 is a flow chart showing a state transiting process;

FIG. 4 is a block diagram showing a second construction of a next alternative generating apparatus according to an embodiment of the present invention;

FIG. 5 is a block diagram showing a construction of a VLSI cell placing apparatus;

FIG. 6 is a flow chart showing a VLSI cell placing process (No. 1);

FIG. 7 is a flow chart showing the VLSI cell placing process (No. 2);

FIG. 8 is a schematic diagram showing an energy function of a VLSI cell placing problem;

FIG. 9 is a graph showing an example of an integral path;

FIG. 10 is a schematic diagram showing an example of a region in which a cell cannot be placed;

FIG. 11 is a block diagram showing a systematic construction of the present invention;

FIG. 12 shows a pseudo-program code of a state transition process; and

FIG. 13 shows a pseudo-program code of an acceptance determining process.











DESCRIPTION OF THE PREFERRED EMBODIMENTS




The present invention is a next alternative generating apparatus and a method thereof for determining a next state in an SA process so as to solve an optimizing problem.





FIG. 1 is a theoretical block diagram showing a next alternative generating apparatus according to the present invention. The next alternative generating apparatus, shown in FIG. 1, comprises an input unit 1, an integral unit 2, an output unit 3, a momentum generating unit 4, an energy approximating unit 5, and an accepting unit 6.




The input unit 1 inputs a present state and a predetermined period. The momentum generating unit 4 generates a momentum. The energy approximating unit 5 generates an approximated value with respect to the approximated energy. The integral unit 2 integrates a Hamilton equation using the approximated value generated by the energy approximating unit 5 for the predetermined period with initial conditions of the present state received from the input unit 1 and the momentum generated by the momentum generating unit 4. The output unit 3 outputs the state obtained by the integral unit 2 integrating the Hamilton equation as the state of the next alternative.




The output unit 3 outputs the difference between the value of the approximated energy in the present state and the value of the approximated energy in the next alternative state to the accepting unit 6.




The accepting unit 6 determines whether or not to accept the state of the next alternative received from the output unit 3 based on both the difference of the approximated energy and the difference between the value of the original energy in the present state and the value of the original energy in the next alternative state so that the detailed balance is satisfied.




The input unit 1, shown in FIG. 1, corresponds to a finite dimensional Euclidean space designating portion 22 and a region probability distribution generating portion 23, shown in FIG. 4, a cell selecting portion 32, a cell state reading portion 33, an integral period designating portion 35, shown in FIG. 5, and so forth. The integral unit 2 corresponds to a Hamilton equation integral portion 13, shown in FIGS. 2 and 4. The output unit 3 corresponds to the Hamilton equation integral portion 13, an approximated energy difference calculating portion 14, a region probability density ratio calculating portion 24, a defined region restoring portion 25, and so forth. The momentum generating unit 4 corresponds to a multi-dimensional Gaussian distribution generating portion 12, shown in FIGS. 2 and 4, and a multi-dimensional Gaussian distribution parameter designating portion 34, shown in FIG. 5. The energy approximating unit 5 corresponds to an approximated energy function differential generating portion 15, shown in FIGS. 2 and 4. The accepting unit 6 corresponds to an acceptance determining portion 38, shown in FIG. 5.




Generally, a Hamilton equation describes, in terms of potential energy, the time change of coordinates that represent a state and of a momentum that is conjugate to the coordinates. Thus, to obtain a solution of the Hamilton equation, the initial values of the coordinates and the momentum, and a function form of the potential energy or a function form of the differential thereof, are required.




In the next alternative generating apparatus, shown in FIG. 1, the input unit 1 inputs a present state, which is equivalent to the initial coordinates, to the integral unit 2. The momentum generating unit 4 generates an initial momentum and outputs it to the integral unit 2. The integral unit 2 uses the initial momentum as an initial value.




The energy approximating unit 5 generates an amount of approximated energy that is equivalent to the potential energy (for example, an approximated value of the gradient of the approximated energy) and inputs the amount to the integral unit 2. The gradient of the approximated energy corresponds to a gradient of an approximated energy function of the embodiment. The integral unit 2 can numerically integrate the Hamilton equation using the above-described approximated value on an integral path instead of the function form of the potential energy. After a predetermined period designated by the input unit 1 elapses, the state obtained at that point is output.

According to this method, a precise function form of the potential energy is not required. With a proper approximated energy function, the Hamilton equation can be integrated. In particular, with a simply approximated energy function, calculations are simplified, thereby increasing the speed of the integral unit 2.




The output unit 3 outputs the state obtained by the integral (namely, the state equivalent to the end point of the integral path) as the state of the next alternative to the accepting unit 6. The accepting unit 6 determines whether or not to accept the state of the next alternative received from the output unit 3 so that the detailed balance is satisfied.




Since the integral unit 2 that selects the state of the next alternative and the accepting unit 6 that determines whether or not to accept the alternative are provided independently, the probability of selecting the next alternative and the probability of accepting it can be individually adjusted in consideration of the detailed balance. In particular, when the accepting unit 6 adjusts the accepting probability corresponding to the difference of approximated energy received from the output unit 3, the accepting probability can be improved while the detailed balance is satisfied. When the accepting probability is high, the state of the next alternative is discarded less frequently. Thus, the next state is quickly determined and thereby the processing speed is increased.




Next, embodiments of the present invention will be described in detail.




First, the detailed balance used in the present invention will be described. The necessary condition for a Markov process to converge to a single probability distribution {P(x)} is the detailed balance, which is given by the following equation:

P(x)P(x→y)=P(y)P(y→x)   ∀x, y   (1)

where P(x) is the probability that the state x takes place, and P(x→y) is the transition probability that the state x transits to the state y. The physical meaning of the equation (1) is that, in the predicted steady state, the flow from the state x to the state y is balanced with the flow from the state y to the state x.




In a technique such as the SA, to satisfy the equation (1), the transition probability P(x→y) is divided into a selective probability P_C(x→y) and an accepting probability P_A(x→y) as follows:

P(x→y)=P_C(x→y)·P_A(x→y)   (2)

Thus, the accepting probability P_A(x→y) is adjusted.

For example, when the probability distribution P(x) is a Gibbs distribution represented by the energy E(x) of the state x and the temperature T, the detailed balance of the equation (1) is rewritten as follows using the equation (2):

exp[−{E(y)−E(x)}/T] = P(y)/P(x) = [P_C(x→y)/P_C(y→x)]·[P_A(x→y)/P_A(y→x)]   (3)

Next, the method for accomplishing the detailed balance in the present invention will be described. If an approximated energy function with a simple form is given for the original energy E(x), then, by solving the Hamilton equation with respect to the approximated energy function, the state y of the next alternative can be selected with a particular selective probability.




First, an approximated energy Ē(x) with a simple form is given for the energy E(x). The simple form represents a form for which the number of calculations is small. The approximated energy Ē(x) should be close to the original energy E(x), but it need not be exactly accurate. For example, even if the approximated energy Ē(x) is inaccurate to some extent, Ē(x) may approximate only part of the original energy E(x). Of course, it is preferable that Ē(x) accurately approximates E(x). However, when high accuracy is required, the number of calculations increases. Thus, in consideration of the total performance, approximated energy at a proper level of accuracy should be used.




Thereafter, the next alternative is selected so that the selective probability satisfies the following equation corresponding to the difference of approximated energy between the state x and the state y:

exp[−{Ē(y)−Ē(x)}/T] = P_C(x→y)/P_C(y→x)   (4)

To allow the selective probability of the equation (4), together with the accepting probability, to form a transition probability that satisfies the detailed balance of the equation (3), the accepting probability should satisfy the following equation:

exp[−({E(y)−E(x)}−{Ē(y)−Ē(x)})/T] = P_A(x→y)/P_A(y→x)   (5)

Due to the equation (5), when the difference of the approximated energy {Ē(y)−Ē(x)} is close to the difference of the original energy {E(y)−E(x)} to some extent, or when the former is partially approximated to the latter, the accepting probability can be remarkably improved.




For example, assume that the accepting probability with respect to the transition between the state x and the state y is given by the following equation:

P_A(x→y)=exp[−({E(y)−E(x)}−{Ē(y)−Ē(x)})/T]

P_A(y→x)=1   (6)

Thus, the accepting probability of the equation (6) satisfies the equation (5). At this point, the accepting probability P_A(y→x) is always 1. When the difference of the approximated energy Ē is close to the difference of the original energy E, the accepting probability P_A(x→y) also approaches 1. In such a manner, the accepting probability of the selected next alternative can be kept at 1 or a value close to 1. The accepting probability of the equation (6) is only an example. Thus, another accepting probability that satisfies the equation (5) may be used.
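That the accepting probability of the equation (6) satisfies the equation (5) can be checked directly; the short derivation below, written in LaTeX notation, only restates the equations above and adds nothing new:

\frac{P_A(x\to y)}{P_A(y\to x)}
  = \frac{\exp\!\left[-\left(\{E(y)-E(x)\}-\{\bar{E}(y)-\bar{E}(x)\}\right)/T\right]}{1}
  = \exp\!\left[-\left(\{E(y)-E(x)\}-\{\bar{E}(y)-\bar{E}(x)\}\right)/T\right]

which is exactly the ratio required by the equation (5).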




In other words, the fundamental feature of the method according to the present invention can be described as follows. First, the energy is approximated or predicted in a simple form. The state of a next alternative is selected with a selective probability that realizes, with a predetermined probability, the difference of the approximated (predicted) energy. The difference of the approximated energy and the difference of the actually measured energy are then reflected in the accepting probability.




To actually implement this method, the selective probability should be accomplished with the difference of the approximated energy. As an example of the selecting method, a technique used in the Hybrid Monte Carlo method (PHYSICS LETTERS B, Volume 195, Number 2, pp. 216-222, Sep. 3, 1987) can be used. In particular, when the approximated energy function is a piecewise linear function, the calculations of the HMC method can be performed simply. A piecewise linear function is a function whose domain, the state space, is divided into regions, and which is a linear function in each region.




In the HMC method, one vector is obtained from a multi-dimensional Gaussian distribution as an initial momentum. Next, a Hamilton equation, in which the energy function serves as the potential energy, is integrated for a predetermined period with the initial conditions of the original state (coordinates) and the initial momentum. The resultant coordinates are selected as the state of the next alternative.
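A minimal sketch of one HMC-style proposal is given below. It uses the leapfrog discretization that is common in HMC implementations, with a user-supplied gradient function; the step size, the number of steps and the function names are assumptions for illustration only, and the embodiment described later instead integrates the equation exactly on a piecewise linear approximated energy.

import numpy as np

def hmc_proposal(q, grad_e, n_steps=20, step=0.05, rng=None):
    """One HMC-style proposal: draw a momentum from a Gaussian, integrate
    Hamilton's equations approximately (leapfrog) and return the end point."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(q.shape)       # initial momentum from a Gaussian
    q_new, p_new = q.astype(float), p0.copy()
    p_new -= 0.5 * step * grad_e(q_new)     # half step for the momentum
    for _ in range(n_steps - 1):
        q_new += step * p_new               # full step for the coordinates (cf. eq. (8))
        p_new -= step * grad_e(q_new)       # full step for the momentum (cf. eq. (9))
    q_new += step * p_new
    p_new -= 0.5 * step * grad_e(q_new)     # final half step
    return q_new, p0, p_new                 # candidate state, initial and final momentum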




In the present invention, when the Hamilton equation is solved (integrated), an approximated energy function Ē is used instead of the original energy function.




When a state variable (coordinates) of arbitrary dimensions is denoted by q̂ and a momentum vector with the same dimensions as the state variable is denoted by p̂, the Hamiltonian to be conserved is given by the following equation:

H(q̂, p̂)=Ē(q̂)+½|p̂|²   (7)

When τ represents a virtual period used for the calculation, the Hamilton equations to be integrated are given by the following equations:

∂q̂/∂τ = +∂H/∂p̂ = p̂   (8)

∂p̂/∂τ = −∂H/∂q̂ = −∇Ē(q̂)   (9)

In the equation (9), ∇ represents a nabla operator with the same dimensions as the state variable. (The nabla is a vector whose components are differential operators.) The Hamiltonian of the equation (7) is kept constant along the integral path.




When the approximated energy function is a piecewise linear function, a component of the state variable q that is integrated in one divided region is a quadratic curve of the time τ. The quadratic curve is extended to the boundary of the region. At this point, the coordinates, the momentum vector and the remaining period of the integral are recorded. With initial conditions of the coordinates and the momentum vector, the Hamilton equation is integrated in the adjacent region. The result of the integral is represented with a quadratic curve. This operation is repeated for a predetermined period τ_C. The state of the value of the state variable q at the resultant point is an alternative of the next state. In each region, since the approximated energy Ē is a linear function of the state variable q, the equations (8) and (9) can be easily integrated. Thus, the next alternative can be easily selected.
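A one-dimensional sketch of this piecewise integration is shown below. The breakpoints and slopes of the approximated energy are assumed inputs, and the multi-dimensional mesh handling of the embodiment described later is reduced here to simple interval crossings; boundary corner cases (for example, zero momentum exactly on a breakpoint) are not treated.

import math

def quad_roots(a, b, c):
    """Real roots of a*t^2 + b*t + c = 0 (a may be 0)."""
    if abs(a) < 1e-15:
        return [-c / b] if abs(b) > 1e-15 else []
    d = b * b - 4.0 * a * c
    if d < 0.0:
        return []
    s = math.sqrt(d)
    return [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]

def integrate_piecewise_1d(q0, p0, tau, breaks, slopes):
    """Exactly integrate dq/dτ = p, dp/dτ = -dE/dq for a 1-D piecewise linear
    approximated energy E.  `breaks` are the sorted interior breakpoints and
    slopes[i] is dE/dq on the i-th piece (len(slopes) == len(breaks) + 1).
    Inside each piece q(t) is a quadratic curve in t."""
    q, p, remaining = float(q0), float(p0), float(tau)
    i = sum(1 for b in breaks if q >= b)            # index of the current piece
    while remaining > 0.0:
        g = slopes[i]                               # constant gradient in this piece
        lo = breaks[i - 1] if i > 0 else None
        hi = breaks[i] if i < len(breaks) else None
        # earliest positive time at which the quadratic hits a piece boundary
        t_exit, hit = math.inf, None
        for b, nxt in ((lo, i - 1), (hi, i + 1)):
            if b is None:
                continue
            for t in quad_roots(-0.5 * g, p, q - b):
                if 1e-12 < t < t_exit:
                    t_exit, hit = t, (b, nxt)
        if t_exit >= remaining:                     # finish inside the current piece
            q += p * remaining - 0.5 * g * remaining ** 2
            p -= g * remaining
            return q, p
        q, p = hit[0], p - g * t_exit               # land exactly on the boundary
        i = hit[1]                                  # continue in the adjacent piece
        remaining -= t_exit
    return q, p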




Next, with reference to the accompanying drawings, the construction and algorithm of an embodiment of the present invention will be described.





FIG. 2 is a block diagram showing a first construction of a next alternative generating apparatus according to the present invention. FIG. 3 is a flow chart showing a state transiting process of the next alternative generating apparatus and an SA simulating apparatus (not shown) or a process thereof. In FIG. 2, a next alternative generating apparatus 11 is an apparatus that generates next alternatives of continuous state variables. The next alternative generating apparatus 11 comprises a multi-dimensional Gaussian distribution generating portion 12, a Hamilton equation integral portion 13, an approximated energy difference calculating portion 14, and an approximated energy function differential generating portion 15.




When the process is started in FIG. 3, a state variable to be changed is designated by the user or the like (at step S1). At this point, the initial state, integral period, change parameter information and multi-dimensional Gaussian distribution parameter are supplied to the next alternative generating apparatus 11. The change parameter information is a designated one of parameters (state variables) that represent a system to be simulated. The initial state represents the present value of a state variable. The multi-dimensional Gaussian distribution parameter represents the dimensions of a parameter to be changed. For example, the multi-dimensional Gaussian distribution parameter is obtained from the change parameter information. The integral period represents a period for which a Hamilton equation is integrated.




The next alternative generating apparatus 11 internally performs the following process. First, the multi-dimensional Gaussian distribution generating portion 12 generates one vector from the multi-dimensional Gaussian probability distribution designated by the multi-dimensional Gaussian distribution parameter, and outputs the generated vector as an initial momentum (at step S2). The approximated energy function differential generating portion 15 generates a gradient of an approximated energy function at the coordinates corresponding to the value of the state variable. (The gradient is a vector whose components are the first-order partial derivatives of a function.)




With the initial values of the state variable q and the momentum vector p being the initial state and the initial momentum received from the multi-dimensional Gaussian distribution generating portion 12, respectively, the Hamilton equation integral portion 13 integrates the Hamilton equations (8) and (9), using the gradient ∇Ē(q̂) received from the approximated energy function differential generating portion 15, for the integral period (at step S3). The Hamilton equation integral portion 13 outputs the integral result as the state of the next alternative (at step S4). The approximated energy difference calculating portion 14 calculates the difference between the value of the approximated energy function in the initial state and the value of the approximated energy function in the state of the next alternative obtained by the Hamilton equation integral portion 13 (namely, the difference of the approximated energy).




The next alternative generating apparatus 11 outputs the state of the finally selected next alternative and the difference of the approximated energy to the SA simulating apparatus or the process thereof. The state of the next alternative may be meaningless in such a case where the value of the state variable is out of a defined region. In this case, the SA simulating apparatus, or the process thereof, discards the state of the next alternative. Even if the next alternative is discarded, the detailed balance is still satisfied.




In particular, when the approximated energy function is a piecewise linear function, since ∇Ē(q̂) is a constant vector, a component of the momentum vector p in each region is expressed by a linear curve due to the equation (9). In addition, a component of the state variable q in each region is expressed by a quadratic curve due to the equation (8). Thus, the Hamilton equation can be relatively easily solved.




The approximated energy function Ē may be the original energy function E itself. However, in most cases, since the number of calculations then increases, the original energy function E is rarely used for this purpose. In addition, the approximated energy function Ē should be at least first-order differentiable. The differential need not be continuous as long as the Hamilton equation can be solved.




With the difference of approximated energy and the state of the next alternative, the SA simulating apparatus, or the process thereof, determines whether or not to accept the state of the next alternative so that the equation (5) is satisfied (at step S5). When the state of the next alternative is accepted, the present state is changed to the state of the next alternative (at step S6). When the state of the next alternative is not accepted, the present state is kept (at step S7). With an accepting probability that satisfies the equation (5), when the difference of the approximated energy Ē is close to the difference of the original energy E, the state of the next alternative can be accepted with a high probability. Thus, since the generated state of the next alternative is discarded less frequently, the speed of the process is increased.




To accomplish such an accepting probability, for example, the metropolis procedure is used in the following manner. For the transition from the state x to the state y, the energy difference {E(y)−E(x)} between the original energy in the initial state x and the original energy in the state y of the next alternative is calculated. In addition, the received difference of approximated energy is {Ē(y)−Ē(x)}. Random numbers ranging from 0 to 1 are uniformly generated. The random number is denoted by r. When A≡exp[−({E(y)−E(x)}−{Ē(y)−Ē(x)})/T]≧r, the state of the next alternative is accepted. When A<r is satisfied, the present state is kept.




When {E(y)−E(x)}−{Ē(y)−Ē(x)}≧0, then 0<A≦1. Thus, the probability of accepting the state y is equal to the probability that the random number r has a value of A or less. Consequently, P_A(x→y)=A/1=exp[−({E(y)−E(x)}−{Ē(y)−Ē(x)})/T]. For the transition from the state y to the state x, since the state x and the state y in the definition of A are substituted with each other, the relation A≧1≧r is satisfied. Thus, regardless of the value of the random number r, the state x is always accepted. Consequently, the relation P_A(y→x)=1 is satisfied. As a result, the accepting probability of the equation (6) is accomplished, which satisfies the equation (5).
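A minimal sketch of this acceptance determination follows; the function name and the way the two energy differences are supplied are assumptions and not the interface of the SA simulating apparatus.

import math
import random

def accept_next_alternative(delta_e, delta_e_approx, temperature):
    """Acceptance test described above: compare a uniform random number r with
    A = exp[-(({E(y)-E(x)}) - ({Ebar(y)-Ebar(x)}))/T].  When the approximated
    energy difference is close to the original one, A is close to 1 and the
    next alternative is almost always accepted."""
    diff = delta_e - delta_e_approx        # {E(y)-E(x)} - {Ebar(y)-Ebar(x)}
    if diff <= 0.0:
        return True                        # A >= 1 >= r: always accept
    return math.exp(-diff / temperature) >= random.random()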




This method is only an example for determining whether or not to accept the state of the next alternative. Thus, as long as the equation (5) is satisfied, another accepting method may be used.




Thereafter, the SA simulating apparatus, or the process thereof, determines whether or not to finish the process (at step S8). When the next state transition is performed, the process after the step S1 is repeated. Otherwise, the process is finished.




The first construction with respect to the continuous state can be applied to an image restoring process or the like. The image restoring process is a process for restoring an original image from image data affected by noise. As an example of which the SA is applied to the image restoring problem, a technical paper by S. Geman and D. Geman is known (Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-6, No. 6, 1984, pp. 721-742).




In the image restoring process, the value of each pixel, represented as a gray level (a real value in a predetermined range), and a line process, which is a discrete quantity (0 or 1) defined between adjacent pixels, are the state variables. When the value of the line process is 1, it represents that two adjacent pixels are not continuous or that there is a contour line therebetween.




The energy function is composed of three portions. The first portion represents the difference between each pixel value and the corresponding pixel value of the noise-affected image.




The second portion represents the difference between each pixel value and the adjacent pixel values. When the value of a line process between adjacent pixels is 1, even if the value of a pixel largely differs from the value of the adjacent pixel, the energy does not increase. When the value of the line process is 0, the more the value of a pixel differs from the value of the adjacent pixel, the more the energy increases.




The third portion is energy obtained corresponding to the characteristics of the line process. Since the line process is considered to represent part of a contour line, a sequence of line-process values rarely intersects itself; if such a situation takes place, the energy becomes very large. In addition, a sequence of line-process values rarely curves sharply; in such a situation, the energy becomes relatively large.
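One plausible concrete form of such an energy, loosely following the three portions described above, is sketched below; the weights a, b, c and the simplified third term are assumptions for illustration and are not taken from the patent.

import numpy as np

def restoration_energy(u, noisy, line_h, line_v, a=1.0, b=2.0, c=5.0):
    """Energy of the image restoring problem with a line process.
    u      : current gray-level image, shape (H, W)
    noisy  : observed image affected by noise, shape (H, W)
    line_h : line process between vertically adjacent pixels, shape (H-1, W), values 0 or 1
    line_v : line process between horizontally adjacent pixels, shape (H, W-1), values 0 or 1"""
    # first portion: difference between each pixel value and the noisy pixel
    e_data = a * np.sum((u - noisy) ** 2)
    # second portion: difference between adjacent pixels, switched off where a line is present
    e_smooth = b * np.sum((1 - line_h) * (u[1:, :] - u[:-1, :]) ** 2)
    e_smooth += b * np.sum((1 - line_v) * (u[:, 1:] - u[:, :-1]) ** 2)
    # third portion: penalty on the line process itself (simplified here to its total
    # length; penalties for intersecting or sharply curving lines would be added similarly)
    e_line = c * (np.sum(line_h) + np.sum(line_v))
    return e_data + e_smooth + e_line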




With such an energy function, the next alternative generating apparatus 11, according to the first construction, determines the next alternative of the value of a pixel, which is a continuous quantity among the state variables. Thus, the original image can be finally restored. For a line process, which is a discrete state variable, the value of the next alternative is determined by the conventional metropolis procedure or by a next alternative generating apparatus according to a second construction (that will be described later).





FIG. 4 is a block diagram showing a second construction of the next alternative generating apparatus according to the present invention. The next alternative generating apparatus 21, shown in FIG. 4, is an apparatus that generates a next alternative in a discrete state. The next alternative generating apparatus 21 comprises a finite dimensional Euclidean space designating portion 22, a region probability distribution generating portion 23, a region probability density ratio calculating portion 24 and a defined region restoring portion 25, as well as the multi-dimensional Gaussian distribution generating portion 12, the Hamilton equation integral portion 13, the approximated energy difference calculating portion 14 and the approximated energy function differential generating portion 15.




The region probability distribution generating portion 23 inputs an initial state, generates one point of the region of a finite dimensional Euclidean space corresponding to the probability distribution thereof, and outputs the coordinates thereof to the Hamilton equation integral portion 13, the approximated energy difference calculating portion 14 and the region probability density ratio calculating portion 24. The finite dimensional Euclidean space designating portion 22 determines the dimension of the finite dimensional Euclidean space of a parameter to be changed corresponding to the change parameter information and outputs the result to the Hamilton equation integral portion 13.




The Hamilton equation integral portion 13 integrates the Hamilton equations (8) and (9) for the predetermined integral period with the coordinates of the point generated by the region probability distribution generating portion 23 as the initial values of the state variable q, in the same manner as in the first construction. The Hamilton equation integral portion 13 outputs the coordinates of the resultant point to the approximated energy difference calculating portion 14, the region probability density ratio calculating portion 24 and the defined region restoring portion 25. The approximated energy difference calculating portion 14 calculates the difference of approximated energy between the point in the initial state and the point obtained by the Hamilton equation integral portion 13.




When the region probability density ratio calculating portion 24 receives two points of the finite dimensional Euclidean space, it calculates the ratio of the probability densities of the points in the region and outputs it. More specifically, the region probability density ratio calculating portion 24 calculates the ratio of the probability densities of the start point and the end point of the integral path of the Hamilton equation. The region in which the probability densities are calculated may vary corresponding to the points. The ratio of the probability densities is output as a compensating term for the detailed balance so as to determine whether or not to accept the state of the next alternative. The defined region restoring portion 25 obtains a point in the defined region of the state variable, corresponding to one point in the predetermined finite dimensional Euclidean space, and outputs the point as the state of the next alternative.




With the difference of approximated energy and the compensating term of the detailed balance that are output from the next alternative generating apparatus 21, the SA simulating apparatus, or the process thereof (not shown), determines whether or not to accept the state of the generated next alternative.




In the first and second constructions, the approximated energy Ē is defined so that one set of a state variable to be changed and a value that is not changed are consistently designated. In other words, it is not necessary to globally define the approximated energy Ē. For example, when a particular state variable is varied, the definition of one approximated energy Ē can be used; when another state variable is varied, the definition of another approximated energy Ē can be used. This is because, whenever the next alternative of each state variable is selected, the approximated energy Ē is used as a tool that generates the selective probability.




Next, with reference to FIGS. 5 to 10, an example of applying the next alternative generating apparatus according to the second construction to a VLSI cell placing problem will be described. The VLSI cell placing problem is a problem of determining the positions of cells in a rectangular cell placing region so as to design a VLSI.




Conventionally, energy used for the SA process of a VLSI cell placing problem is composed of the sum of five types of energy that are, for example routing, overlapping, net congestion, pin congestion and timing driven. Whenever the position of one cell of the VLSI is moved, these types of energy individually increase.




When the half perimeter technique is used as the calculating method of the energy of routing that depends on the line length, the energy of routing is piecewise linear. Thus, even if it is used as approximated energy, the number of calculations is not abnormally increased. In the half perimeter method, half of the peripheral length of the minimum rectangle that includes the positions of all pins contained in a particular net is designated as the line length of the net.
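The half perimeter calculation itself is short; the sketch below assumes a net is given simply as a list of (x, y) pin coordinates.

def half_perimeter(pins):
    """Half-perimeter line length of one net: half the perimeter of the
    smallest rectangle that encloses all pin positions of the net."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# The routing energy related to one cell is then the sum of half_perimeter()
# over the nets connected to that cell.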




For overlapping, net congestion, pin congestion, and so forth, an average quantity of a predetermined region for each epoch is recorded. The resultant quantity is converted into a piecewise linear function. The piecewise linear function is used as approximated energy. However, for these congestions, it should be considered that when a cell is removed from the original position, energy in the region decreases.




For the approximated energy of the congestions, the original energy is calculated for each basic cell. In each region of a right triangle or a rectangle whose vertices are at the calculated original energy values, a piecewise linear function is used. The congestions are calculated for each row and each column. If the congestion at a particular point is defined as the sum of the congestion of the row and the congestion of the column, the rectangular region can be used as a unit.





FIG. 5 is a block diagram showing the construction of a VLSI cell placing apparatus, according to an embodiment. The VLSI cell placing apparatus 31, shown in FIG. 5, comprises a cell selecting portion 32, a cell state reading portion 33, a multi-dimensional Gaussian distribution parameter designating portion 34, an integral period designating portion 35, a cell state storing portion 36, a cell state changing portion 37 and an acceptance determining portion 38, as well as the next alternative generating apparatus 21, shown in FIG. 4.




The cell selecting portion 32 selects a cell to be moved next and outputs the selected result as change parameter information to the next alternative generating apparatus 21. When the cell state reading portion 33 receives a cell to be moved from the cell selecting portion 32, the cell state reading portion 33 reads the present state of the cell from the cell state storing portion 36 and outputs the read state as an initial state to the next alternative generating apparatus 21.




The multi-dimensional Gaussian distribution parameter designating portion 34 outputs a multi-dimensional Gaussian distribution parameter to the next alternative generating apparatus 21. The integral period designating portion 35 outputs an integral period to the next alternative generating apparatus 21. The acceptance determining portion 38 receives the state of the next alternative, the difference of approximated energy, and the compensating term of the detailed balance from the next alternative generating apparatus 21 and determines whether or not to accept the state of the next alternative. When the acceptance determining portion 38 accepts the state of the next alternative, the cell state changing portion 37 rewrites the state of the cell in the cell state storing portion 36. After the above-described process sequence is finished, the cell selecting portion 32 determines a cell to be moved next.




When the present invention is applied to a VLSI cell placing problem, it is preferably applied to the energy of routing. Next, an example of this problem will be described.




In a VLSI cell placing problem, the state variable that determines the energy is the position in which a cell is placed. The state variable is discretely defined on a chip. However, when the position in which a cell is placed is represented by a particular point in the region corresponding to a basic cell, the detailed balance is not satisfied. This is because the result of an integral of a Hamilton equation starting from that particular point does not always coincide with the representative point of a basic cell.




To solve this problem, the next alternative generating apparatus 21 divides the entire region on the chip into regions corresponding to basic cells. When a particular cell is selected, the region of the basic cell in which the particular cell is placed is obtained, and one point in that region is obtained. The Hamilton equation is integrated from the obtained point. The basic cell whose region includes the resultant point is determined as the alternative of the position to which the cell is moved. The ratio of the probability densities of the two points is output as a compensating term of the detailed balance. The acceptance determining portion 38 determines whether or not to accept the state of the next alternative corresponding to the accepting method compensated with the compensating term of the detailed balance. Thus, while the detailed balance is satisfied, a discrete state variable can be used.




As another problem, the energy of routing varies corresponding to the rotating state of a cell. In a conventional cell placing program, as coordinates of a logical pin in a cell (a virtual pin in a logical design), the average value of the coordinates of pins with the same potential corresponding to the logical pin is used. The positions of the pins with the same potential vary corresponding to the rotations of cells. Thus, the coordinates of the logical pin vary. Consequently, the energy of routing depending on the line length between logical pins varies.




As one method for solving this problem, it is assumed that the coordinates of all pins are at the center of their cell, and the approximated energy used in the integral of the Hamilton equation is calculated corresponding to this assumption. In addition, it is assumed that the cell moves in parallel with the same orientation as in the initial state, without rotation. When the cell is finally placed in the position of the alternative, the rotation is also considered so that the cell can be placed in the basic cell. In this case, since the cell keeps the same orientation during the move, the origin of the cell is not always placed exactly at the position of the alternative. Before and after the cell is moved, the number of positions at which the cell can be placed in the basic cell may vary. In this case, so as to satisfy the detailed balance, the number of positions is compensated for, for example, in the accepting probability.





FIGS. 6 and 7 are the first and second parts of a flow chart of a VLSI cell placing process in the case where the energy of routing is used as approximated energy.




When the process is started in FIG. 6, the cell selecting portion 32 designates a cell of which the state is changed (at step S11). Thus, the designated cell becomes an object to be moved. Thereafter, according to a predetermined method, an approximated energy function corresponding to the designated cell and a region (mesh) in a Euclidean space corresponding to the discrete cell positions are generated (at step S12).




Since the approximated energy function and the mesh depend on the cell to be moved, the cell selecting portion 32 may generate the function and the mesh and output them to the next alternative generating apparatus 21. Alternatively, the user may create the approximated energy function and the mesh and input them to the VLSI cell placing apparatus 31. As another alternative method, the next alternative generating apparatus 21 may generate the approximated energy function and the mesh corresponding to the change parameter information.




Next, the calculating method of the approximated energy function for the routing and the method for generating the mesh that is an integral unit of a Hamilton equation, will be described in detail. The approximated energy function and the mesh are determined whenever one cell is placed.




The energy of routing is calculated by the above-described half perimeter method. At this point, it is assumed that the coordinates of each pin are at the center of its cell so as to prevent the influence of the rotation of the cell. Generally, since one cell has a plurality of pins that are connected to a plurality of different nets, the nets connected to the designated cell become the designated nets. The amount of routing (line length) of a designated net corresponding to the coordinates of the cell depends on the relation among the maximum value and minimum value of the coordinates of the other pins and the coordinates of the pin of the cell. For example, as long as the coordinates of the center of the cell are in the range between the maximum value and the minimum value of the coordinates of the other pins, the amount of routing is constant. This amount of routing is used as the approximated energy.




In reality, the maximum values and minimum values of the x and y coordinates of the other pins for each net connected to the designated cell are obtained with the following precautions.




(1) A plurality of pins of an objective cell may be included in a net. In this case, it is necessary to obtain the maximum values and minimum values of other pins excluding such pins.




(2) It is assumed that the coordinates of the pin are at the center of the cell thereof.




The energy of routing of a particular net is expressed by a piecewise linear function with boundaries at the obtained maximum values and minimum values, as shown in FIG. 8. The x and y coordinates of a point on the curved surface shown in FIG. 8 represent the coordinates of the center of the cell to be moved. The z coordinate represents the energy of routing in the case where the cell is placed at that position. In FIG. 8, the rectangular region with the lowest energy accords with the distribution region of the other pins of an objective net (namely, the minimum rectangular region including the other pins).




Such a rectangular region is present for each net connected to an objective cell. The final approximated energy function is expressed by the sum of the energy of routing for each net connected to the cell. Thus, the final approximated energy function is a piecewise linear function.




Meshes, shown in FIG. 8, are composed of the rectangular regions. In other words, for each net connected to one cell, the maximum value and minimum value of the x and y coordinates of the pins that are not connected to the cell are obtained. With boundaries at the obtained maximum and minimum values, meshes are formed. In each small region of the meshes, the approximated energy function is linear.
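A sketch of this mesh construction is shown below; the representation of a net as a list of (owner cell, x, y) pin records is an assumption, and only the x and y boundary coordinates are produced.

def mesh_boundaries(nets_of_cell, moving_cell):
    """For every net connected to the cell to be moved, take the minimum and
    maximum x and y coordinates of the pins that do not belong to that cell
    (pins are assumed to sit at the centers of their cells).  The collected
    values are the mesh boundaries; inside each rectangle delimited by
    consecutive boundaries the summed half-perimeter energy is linear."""
    xs, ys = set(), set()
    for net in nets_of_cell:                       # net: list of (cell, x, y)
        others = [(x, y) for cell, x, y in net if cell != moving_cell]
        if not others:
            continue
        ox = [x for x, _ in others]
        oy = [y for _, y in others]
        xs.update((min(ox), max(ox)))
        ys.update((min(oy), max(oy)))
    return sorted(xs), sorted(ys)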




Thereafter, for example, the cell selecting portion 32 counts the number of positions at which the cell designated at step S11 can be placed in its cell shape, considering the rotation of the cell and so forth (at step S13). The region probability distribution generating portion 23 of the next alternative generating apparatus 21 obtains one point of the region as an initial position q̂_s, corresponding to the uniform probability distribution in the region of the Euclidean space corresponding to the basic cell at which the origin of the cell is placed (at step S14). The multi-dimensional Gaussian distribution generating portion 12 obtains one vector from the multi-dimensional Gaussian distribution as an initial momentum p̂_s (at step S15).




Thereafter, the Hamilton equation integral portion 13 integrates the Hamilton equation, in which the approximated energy function is the potential energy, for the integral period with the initial conditions of the initial position q̂_s and the initial momentum p̂_s (at step S16). The position and momentum of the obtained point are represented by q̂_e and p̂_e, respectively. At this point, the approximated energy function differential generating portion 15 generates ∇Ē corresponding to the approximated energy function generated at step S12 and outputs it to the Hamilton equation integral portion 13.




Next, the integrating method of a Hamilton equation using meshes will be described in detail.





FIG. 9 is a schematic diagram showing an integral path of a Hamilton equation. In FIG. 9, the straight lines along the x and y axes represent the boundaries of meshes. As the boundaries, the maximum and minimum values of the coordinates of pins are used. Thus, the widths of the meshes are not uniform.




Now, it is assumed that the initial position q̂_s is in a mesh M_0 and that the integral period designated by the integral period designating portion 35 is denoted by τ_I. Since ∇Ē is constant in the mesh M_0 (the value is denoted by (∇Ē)_0), the equations (8) and (9) are solved in the mesh M_0 as follows:

q̂=q̂_s+p̂_sτ−½(∇Ē)_0τ²   (10)

p̂=p̂_s−(∇Ē)_0τ   (11)

The solution of the equation (10) represents a quadratic curve with respect to the period τ. In particular, when (∇Ē)_0=0, it represents a linear curve (namely, a straight line). First, the point of intersection of the quadratic curve and the boundary lines of the mesh, and the period required to reach the intersection, are calculated. The position of the point of intersection is denoted by q̂_1 and the required period is denoted by τ_0.

When τ_I≦τ_0, the position q̂_e and the momentum p̂_e are given as follows and the integration of the Hamilton equation is finished:

q̂_e=q̂_s+p̂_sτ_I−½(∇Ē)_0τ_I²   (12)

p̂_e=p̂_s−(∇Ē)_0τ_I   (13)

When τ_I>τ_0, the position of the point of intersection q̂_1 and the momentum p̂_1 are given as follows:

q̂_1=q̂_s+p̂_sτ_0−½(∇Ē)_0τ_0²   (14)

p̂_1=p̂_s−(∇Ē)_0τ_0   (15)

Next, the period τ_I is substituted with τ_1=τ_I−τ_0. The mesh that contacts the mesh M_0 at the boundary side containing the position of the point of intersection q̂_1 is denoted by M_1. In the same manner as the integration of the Hamilton equation for the period τ_I with the initial conditions of the initial position q̂_s and the initial momentum p̂_s in the mesh M_0, the Hamilton equation is integrated for the remaining period τ_1 with the initial conditions of the position of the point of intersection q̂_1 and the momentum p̂_1 in the mesh M_1.

Such an operation is repeated from mesh to mesh until the remaining period becomes 0. At the boundary of each mesh, the position of the point of intersection, the momentum at the point of intersection and the remaining period of the integration are supplied to the integral process of the next mesh. In FIG. 9, the point of intersection on the boundary of the meshes M_1 and M_2 is q̂_2 and the momentum thereof is p̂_2; the point of intersection on the boundary of the meshes M_2 and M_3 is q̂_3 and the momentum thereof is p̂_3. At the position q̂_e in the mesh M_3, the remaining period is 0.

According to this method, when the integral path is traced in reverse, the initial position is obtained. When the Hamilton equation is integrated for the period τ_I with the initial conditions (q̂_e, −p̂_e), in which the sign of the momentum of the final integral result (q̂_e, p̂_e) is reversed, the result is (q̂_s, −p̂_s).

Next, the defined region restoring portion 25 selects the basic cell corresponding to the region including the obtained position q̂_e and outputs it as the position of the next alternative (at step S17). For example, the acceptance determining portion 38 counts the number of positions at which the cell can be placed in the shape obtained by moving the original cell in parallel to the position of the alternative (at step S18 of FIG. 7) and selects one of these positions with equal probability (at step S19).

The acceptance determining portion 38 determines whether or not to accept the position of the alternative, using a method that satisfies the detailed balance and accomplishes an accepting probability compensating for the numbers of positions at the original position and at the position of the alternative (for example, the metropolis procedure) (at steps S20 and S21). At this point, to satisfy the detailed balance, the difference of approximated energy and the compensating term of the detailed balance are used.

When the acceptance determining portion 38 accepts the next alternative, the cell state changing portion 37 moves the position of the cell in the cell state storing portion 36 to the position of the next alternative in the selected rotating state (orientation) (at step S22). When the acceptance determining portion 38 does not accept the next alternative, the cell state changing portion 37 keeps the cell in the original state (at step S23).

Thereafter, the cell selecting portion 32 determines whether or not to finish the process (at step S24). When the next state transition is performed, the process from step S11 is repeated. Otherwise, the process is finished.

As described above, when one cell to be moved is determined, preparations are made for calculating alternatives of the positions to which the cell is moved. Thus, when the same cell is moved a plurality of times, the efficiency may be remarkably improved. However, in this case, it is necessary to predetermine the number of times the cell is moved.




At step S12, the boundary of a mesh of the approximated energy function does not always accord with the boundary of a basic cell on a chip. However, when a boundary of a mesh of the approximated energy function is designated at each basic cell, or a half thereof, a mesh that accords with each basic cell, or a half thereof, can be generated. Even if there is a region of a basic cell in which a cell cannot be placed on the chip, when the boundary of each basic cell is included in the boundaries of the meshes, such a region can be easily handled.





FIG. 10 is a schematic diagram showing a region of a basic cell in which a cell cannot be placed. In FIG. 10, it is assumed that the boundaries of the meshes accord with the regions of the basic cells. In the integration of the Hamilton equation at step S16, the integral path in a mesh M_A is at the point of intersection q̂_B and the momentum thereof is p̂_B. However, the cell cannot be placed in the basic cell corresponding to the mesh M_B that contacts the side including the point of intersection q̂_B.

Assuming that the mesh M_B is ignored, a basic cell in which a cell can be placed is searched for in the direction perpendicular to the side including the point of intersection q̂_B. In FIG. 10, the cell can be placed in the basic cell corresponding to the mesh M_C. The point of intersection q̂_B is horizontally moved until it contacts the mesh M_C. The contacted point is denoted by q̂_C. The orientation of the momentum p̂_C at the point q̂_C is the same as that of the momentum p̂_B. The magnitude of the momentum p̂_C is designated so that the following equation is satisfied:






\frac{1}{2}|\hat{p}_C|^2 + \overline{E}(\hat{q}_C) = \frac{1}{2}|\hat{p}_B|^2 + \overline{E}(\hat{q}_B)   (16)






Thereafter, the integral path is extended from the mesh M_C to the mesh M_D. With this method, basic cells in which cells cannot be placed can be skipped while the detailed balance is still satisfied.




When the following expression is satisfied,

\overline{E}(\hat{q}_C) > \frac{1}{2}|\hat{p}_B|^2 + \overline{E}(\hat{q}_B)   (17)






then, regardless of the momentum p̂_C, the equation (16) cannot be satisfied. In this case, the integral path may be reflected. In other words, the integral path is extended with the point of intersection q̂_B and the momentum −p̂_B instead of the point q̂_C and the momentum p̂_C. Alternatively, the integral path may be reflected at the boundary of the mesh M_B and the mesh M_C.




In the case shown in FIG. 10, basic cells in which cells cannot be placed may also be left in place rather than ignored. In this case, the integral path is extended through any mesh. If a cell cannot be placed in the basic cell at the last point, that position cannot be used; for that cell, the next alternative may be generated once again, or a new cell may be selected instead.




The cell placing process at a chip edge can be performed by one of the following two methods. In the first method, even if the integral path exceeds the chip edge, the calculation is continued. Since the force −∇Ē on the right side of the equation (9) points toward the inside of the chip, even if the path temporarily leaves the chip, the last position is inside the chip with a high probability. In the second method, when the integral path exceeds the chip edge, the process is stopped.




In the first method, the process for the cell is stopped only when the last position is outside the chip. In the second method, the process for the cell is stopped as soon as the integral path exceeds the chip edge.
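A small C sketch of the chip-edge test that either method relies on is shown below; representing the chip as the rectangle [0, chip_w) × [0, chip_h) is an assumption made only for this example.

    /* Returns 1 when the position (qx, qy) lies inside the chip rectangle
       [0, chip_w) x [0, chip_h), and 0 otherwise.  In the first method this test is
       applied only to the last position of the integral path; in the second method
       it is applied at every integration step, and the process for the cell is
       stopped as soon as the test fails. */
    int inside_chip(double qx, double qy, double chip_w, double chip_h)
    {
        return qx >= 0.0 && qx < chip_w && qy >= 0.0 && qy < chip_h;
    }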




In the above description, the variance of the multi-dimensional Gaussian distribution used for selecting the initial momentum p̂_s was 1. However, a more general distribution may be used. For example, to select the momentum p̂ = (p_x, p_y) corresponding to the two-dimensional coordinates q̂ = (q_x, q_y), consider the case of a multi-dimensional normal distribution having the density function expressed by the following equation:











\frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\!\left(-\frac{p_x^2}{2\sigma_x^2}\right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_y} \exp\!\left(-\frac{p_y^2}{2\sigma_y^2}\right) = \frac{1}{(\sqrt{2\pi})^2\,\sigma_x \sigma_y} \exp\!\left(-\frac{1}{2}\left[\frac{p_x^2}{\sigma_x^2} + \frac{p_y^2}{\sigma_y^2}\right]\right)   (18)

where σ_x and σ_y are the variances in the x and y directions, respectively.
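As an illustration, an initial momentum can be drawn from the density of the equation (18) with the Box-Muller transform, as in the following C sketch; uniform01() is a hypothetical helper returning uniform random numbers and is not part of the pseudo-program code.

    #include <math.h>
    #include <stdlib.h>

    /* Uniform random number in (0, 1); a placeholder for the generator actually used. */
    static double uniform01(void)
    {
        return ((double)rand() + 1.0) / ((double)RAND_MAX + 2.0);
    }

    /* Draw (px, py) from the density of equation (18): independent zero-mean
       Gaussians whose scale parameters are sigma_x and sigma_y, using the
       Box-Muller transform. */
    void draw_initial_momentum(double sigma_x, double sigma_y, double *px, double *py)
    {
        const double two_pi = 6.283185307179586;
        double u1 = uniform01();
        double u2 = uniform01();
        double r  = sqrt(-2.0 * log(u1));
        *px = sigma_x * r * cos(two_pi * u2);
        *py = sigma_y * r * sin(two_pi * u2);
    }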




Variable conversions expressed by the following equations are performed:












\hat{p}' = \begin{bmatrix} p_x' \\ p_y' \end{bmatrix} = \begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \hat{p} = \begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \begin{bmatrix} p_x \\ p_y \end{bmatrix}   (19)

\hat{q}' = \begin{bmatrix} q_x' \\ q_y' \end{bmatrix} = \begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \hat{q} = \begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \begin{bmatrix} q_x \\ q_y \end{bmatrix}   (20)













When the probability distribution is represented in terms of the converted momentum p̂′, the following standard normal distribution is obtained:










\frac{1}{(\sqrt{2\pi})^2} \exp\!\left(-\frac{|\hat{p}'|^2}{2}\right)   (21)













A Hamiltonian with the temperature T, expressed in the converted coordinates q̂′ and momentum p̂′, is given by the following equation:








H(\hat{q}', \hat{p}') = \frac{1}{T}\overline{E}(\hat{q}') + \frac{1}{2}|\hat{p}'|^2   (22)






At this point, the Hamilton equations with respect to the coordinates q̂′ and the momentum p̂′ are given by the following equations:











\begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \frac{d\hat{q}}{d\tau} = \frac{d\hat{q}'}{d\tau} = +\frac{\partial H}{\partial \hat{p}'} = \hat{p}' = \begin{bmatrix} 1/\sigma_x & 0 \\ 0 & 1/\sigma_y \end{bmatrix} \hat{p}   (23)

\frac{d\hat{q}}{d\tau} = \hat{p}   (24)













Thus, the equation of motion with respect to the coordinates q̂ is derived as follows:

















\frac{d\hat{p}'}{d\tau} = -\frac{\partial H}{\partial \hat{q}'} = -\frac{1}{T}\frac{\partial}{\partial \hat{q}'}\overline{E}(\hat{q}) = -\frac{1}{T}\,\frac{\partial \hat{q}}{\partial \hat{q}'}\,\frac{\partial}{\partial \hat{q}}\overline{E}(\hat{q}) = -\frac{1}{T}\begin{bmatrix} \sigma_x & 0 \\ 0 & \sigma_y \end{bmatrix}\frac{\partial}{\partial \hat{q}}\overline{E}(\hat{q})   (25)

\frac{d^2\hat{q}}{d\tau^2} = -\frac{1}{T}\begin{bmatrix} \sigma_x & 0 \\ 0 & \sigma_y \end{bmatrix}^2 \frac{\partial}{\partial \hat{q}}\overline{E}(\hat{q})   (26)













In the equation (26), as the temperature T increases, the influence of the potential energy Ē decreases. As the variances σ_x and σ_y increase, the influence of the gradient of Ē increases; at the same time, larger σ_x and σ_y give less weight to the momentum p̂ in the kinetic energy. Thus, the behavior of the equation (26) accords with the intuitive expectation.
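When the approximated energy function does not admit a convenient exact piecewise solution, the equation (26) can also be integrated numerically. The following C sketch uses the leapfrog scheme in the original coordinates q̂; the function pointer grad_Ebar, the fixed number of steps and the step size are assumptions of this sketch and are not taken from the pseudo-program code of FIGS. 12 and 13.

    /* Gradient of the approximated energy function: fills (gx, gy) with the partial
       derivatives of Ebar at (qx, qy). */
    typedef void (*grad_fn)(double qx, double qy, double *gx, double *gy);

    /* Leapfrog integration of equation (26),
       d2q/dtau2 = -(1/T) diag(sigma_x, sigma_y)^2 grad Ebar(q),
       which is the Hamilton equation in the original coordinates after the variable
       conversion of equations (19) and (20); per equation (24), dq/dtau = p. */
    void integrate_hamilton_eq26(double *qx, double *qy, double *px, double *py,
                                 double sigma_x, double sigma_y, double T,
                                 grad_fn grad_Ebar, double tau, int n_steps)
    {
        double h  = tau / n_steps;
        double cx = sigma_x * sigma_x / T;   /* force scaling in the x direction */
        double cy = sigma_y * sigma_y / T;   /* force scaling in the y direction */
        double gx, gy;
        int i;

        grad_Ebar(*qx, *qy, &gx, &gy);
        for (i = 0; i < n_steps; i++) {
            *px -= 0.5 * h * cx * gx;        /* half step for the momentum */
            *py -= 0.5 * h * cy * gy;
            *qx += h * *px;                  /* full step for the coordinates */
            *qy += h * *py;
            grad_Ebar(*qx, *qy, &gx, &gy);
            *px -= 0.5 * h * cx * gx;        /* second half step for the momentum */
            *py -= 0.5 * h * cy * gy;
        }
    }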




When a basic cell in which a cell cannot be placed is to be skipped, the temperature and the variances affect the VLSI cell placing process. In this case, since the kinetic energy and the potential energy are scaled, the mesh M_B should be skipped using the following equation instead of the equation (16):












\frac{1}{2}|\hat{p}_C'|^2 + \frac{1}{T}\overline{E}(\hat{q}_C) = \frac{1}{2}|\hat{p}_B'|^2 + \frac{1}{T}\overline{E}(\hat{q}_B)   (27)













In the equation (27), the momenta p̂_B′ and p̂_C′ are the momenta p̂_B and p̂_C converted in the same manner as in the equation (19).




Next, a real example of the VLSI cell placing process will be described. The coordinates of a cell are represented by, for example, a pair of discrete integers. The state of one cell can thus be embedded in a two-dimensional Euclidean space corresponding to the x and y coordinates of the origin of the cell. The entire state of a VLSI chip can be embedded in a finite dimensional Euclidean space that is the direct sum of the two-dimensional Euclidean spaces of all cells.




The region of the finite dimensional Euclidean space corresponding to each state of a VLSI chip is designated in the following manner. For simplicity, the two-dimensional Euclidean space for one cell is considered. It is assumed that the discrete coordinate values of each cell are represented by natural numbers N = {1, 2, . . . } and that these coordinate values are regarded as a subset of the reals R. The region of the basic cell corresponding to the coordinates (2, 5) of a cell is then defined as [1.5, 2.5)×[4.5, 5.5). In addition, it is assumed that a probability distribution is predesignated to each region.
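A small C sketch of this correspondence between the discrete cell coordinates and the regions of the continuous space is shown below; a uniform distribution on each region is assumed, and uniform01() is again a hypothetical helper.

    #include <math.h>
    #include <stdlib.h>

    static double uniform01(void)
    {
        return (double)rand() / ((double)RAND_MAX + 1.0);
    }

    /* The region of the basic cell for the integer coordinate n is [n - 0.5, n + 0.5).
       Sample one continuous coordinate uniformly from that region (the role of the
       region probability distribution generating portion 23). */
    double region_sample(int n)
    {
        return ((double)n - 0.5) + uniform01();
    }

    /* Restore the integer coordinate whose region contains the continuous coordinate q
       (the role of the defined region restoring portion 25); for example, 1.89 maps
       back to 2 and 6.98 maps to 7. */
    int region_restore(double q)
    {
        return (int)floor(q + 0.5);
    }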




The cell selecting portion 32 selects a cell to be moved and outputs its coordinates as change parameter information, that is, the parameters to be changed, to the next alternative generating apparatus 21. The cell state reading portion 33 reads the coordinates of all cells, including the selected cell, from the cell state storing portion 36 and outputs them as an initial state to the next alternative generating apparatus 21. The multi-dimensional Gaussian distribution parameter designating portion 34 outputs multi-dimensional Gaussian distribution parameters to the next alternative generating apparatus 21. The integral period designating portion 35 outputs an integral period to the next alternative generating apparatus 21. In this case, the multi-dimensional Gaussian distribution parameters are a set of two positive reals. When the coordinates of the selected cell are (2, 5), the region probability distribution generating portion 23 obtains one point from the region [1.5, 2.5)×[4.5, 5.5) corresponding to the coordinates (2, 5) of the cell, according to the probability distribution of that region. This point is, for example, (1.89, 5.32).




With this point as the initial position, an initial momentum, which is a vector, is generated by the multi-dimensional Gaussian distribution generating portion 12 corresponding to the multi-dimensional Gaussian distribution parameters. With this initial momentum, the Hamilton equation is integrated. The resultant point is, for example, (3.45, 6.98). The defined region restoring portion 25 obtains the integer coordinates (3, 7) as the position of the next alternative from the region [2.5, 3.5)×[6.5, 7.5) of the basic cell that contains the obtained point (3.45, 6.98).




On the other hand, the region probability density ratio calculating portion 24 calculates the ratio of the probability densities at the two points (1.89, 5.32) and (3.45, 6.98) and outputs the result as the compensating term of the detailed balance. When a uniform probability distribution is given to each region, the probability density function with respect to the Lebesgue measure is 1. In this case, the ratio of probability densities that is output from the region probability density ratio calculating portion 24 is always 1.




The acceptance determining portion 38 determines whether or not to accept the position of the next alternative using the compensating term of the detailed balance. Now, assume that the determination of the acceptance of the next alternative is performed corresponding to the metropolis procedure. In addition, assume that the initial position is denoted by q̂_s, the position of the next alternative is denoted by q̂_e, the rotating state of the cell at the initial position q̂_s is denoted by R_s, the rotating state of the cell at the position of the next alternative q̂_e is denoted by R_e, the number of rotating states at the initial position q̂_s is denoted by r_s, and the number of rotating states at the position of the next alternative q̂_e is denoted by r_e. Thus, the probabilities with which the rotating states R_s and R_e are selected are 1/r_s and 1/r_e, respectively. Moreover, assume that the continuous point corresponding to the initial position q̂_s is denoted by q̂_s′, the continuous point corresponding to the position of the next alternative q̂_e is denoted by q̂_e′, the original energy function is denoted by E, the approximated energy function is denoted by Ē, the probability density at the point q̂_s′ is denoted by p(q̂_s′), the probability density at the point q̂_e′ is denoted by p(q̂_e′), and the temperature parameter is denoted by T. In this case, the acceptance determining portion 38 calculates the value A′ of the following equation:










A' = \exp\!\left[-\frac{\{E(\hat{q}_e) - E(\hat{q}_s)\} - \{\overline{E}(\hat{q}_e') - \overline{E}(\hat{q}_s')\}}{T}\right] \times \frac{p(\hat{q}_e')}{p(\hat{q}_s')} \cdot \frac{r_s}{r_e}   (28)













Random numbers ranging from 0 to 1 are uniformly generated. The generated random number is denoted by r. When A′≧r, the acceptance determining portion 38 accepts the position of the next alternative. When A′<r, the acceptance determining portion 38 does not accept the position of the next alternative.
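A C sketch of the computation of A′ in the equation (28) and of the comparison with the uniform random number r follows; the energies, the probability densities and the numbers of rotating states are assumed to be computed elsewhere and passed in, and the function name is an assumption of this sketch.

    #include <math.h>
    #include <stdlib.h>

    static double uniform01(void)
    {
        return (double)rand() / ((double)RAND_MAX + 1.0);
    }

    /* Equation (28): acceptance of the position of the next alternative.
       E_s, E_e       : original energy at the initial position and at the alternative
       Ebar_s, Ebar_e : approximated energy at the corresponding continuous points
       p_s, p_e       : probability densities of the regions at those points
       r_s, r_e       : numbers of rotating states at the two positions
       T              : temperature parameter
       Returns 1 (accept) when A' >= r for a uniform random number r, and 0 otherwise. */
    int accept_next_alternative(double E_s, double E_e,
                                double Ebar_s, double Ebar_e,
                                double p_s, double p_e,
                                int r_s, int r_e, double T)
    {
        double a = exp(-((E_e - E_s) - (Ebar_e - Ebar_s)) / T)
                 * (p_e / p_s)
                 * ((double)r_s / (double)r_e);
        return a >= uniform01();
    }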




In the equation (28), the energy function E is a function of the discrete coordinates of a cell. In contrast, the approximated energy function Ē is a function of the continuous coordinates of the finite dimensional Euclidean space. The next alternative generating apparatus 21 supplies the difference of approximated energy {Ē(q̂_e′)−Ē(q̂_s′)} and the compensating term of the detailed balance {p(q̂_e′)/p(q̂_s′)} to the acceptance determining portion 38. In particular, when a uniform probability distribution is given, p(q̂_e′)/p(q̂_s′)=1, so the compensating term of the detailed balance does not affect the value of A′.




When the acceptance determining portion 38 accepts the position of the next alternative, the cell state changing portion 37 updates the state of the cell in the cell state storing portion 36. When the acceptance determining portion 38 does not accept the position of the next alternative, the cell state changing portion 37 keeps the original state.




In the case of the VLSI cell placing problem, the coordinates of the cell to be moved correspond to the state variables to be changed. The sets of coordinates of the other cells are the state variables that are not changed. When one objective cell is moved, one definition of the approximated energy function Ē may be employed; when another cell is moved, another definition of Ē can be used. Thus, it is not necessary to globally define the approximated energy function Ē with respect to all state variables.




In the above embodiment, the next alternative generating apparatus 21 is applied to the VLSI cell placing problem. However, the present invention is not limited to this field. Instead, the present invention can be applied to any problem for which a continuous or discrete state variable and an energy function can be defined.




For example, the present invention can be applied to the graph partitioning problem, in which a graph composed of nodes and branches is partitioned (Johnson et al., OPTIMIZATION BY SIMULATED ANNEALING: AN EXPERIMENTAL EVALUATION; PART I, GRAPH PARTITIONING, Operations Research, Vol. 37, No. 6, 1989, pp. 865-892), and to the graph coloring problem, in which a graph is colored (Johnson et al., OPTIMIZATION BY SIMULATED ANNEALING: AN EXPERIMENTAL EVALUATION; PART II, GRAPH COLORING AND NUMBER PARTITIONING, Operations Research, Vol. 39, No. 3, 1991, pp. 378-406). The graph partitioning problem can be applied to circuit design and so forth, and the graph coloring problem can be applied to scheduling. In addition, the present invention can be applied to production planning problems (Yugami, Hara, et al., SOLUTION OF LARGE SCALE PRODUCTION PLANNING PROBLEM CORRESPONDING TO SIMULATED ANNEALING, ARTIFICIAL INTELLIGENCE, Vol. 90-8, pp. 61-69). In a production planning problem, the types of products produced by a machine are represented by state variables, and the insufficient quantity of products is represented by the energy.





FIG. 11 shows an example of the configuration of the system of the next alternative generating apparatus and the VLSI cell placing apparatus according to the present invention. The system shown in FIG. 11 is configured using an information processing device (a computer) and comprises a CPU (central processing unit) 41, memory 42, an input device 43, an output device 44, an external storage device 45, a medium drive device 46, and a network connection device 47. These units are interconnected through a bus 48.




The memory 42 contains, for example, ROM (read-only memory), RAM (random access memory), etc., and stores the program and data used in the processes. The CPU 41 performs the necessary processes by executing the program using the memory 42.




A multi-dimensional Gaussian distribution generating portion 12, a Hamilton equation integral portion 13, an approximated energy difference calculating portion 14, and an approximated energy function differential generating portion 15 shown in FIG. 2; a finite dimensional Euclidean space designating portion 22, a region probability distribution generating portion 23, a region probability density ratio calculating portion 24, and a defined region restoring portion 25 shown in FIG. 4; and a cell selecting portion 32, a cell state reading portion 33, a multi-dimensional Gaussian distribution parameter designating portion 34, an integral period designating portion 35, a cell state changing portion 37, and an acceptance determining portion 38 shown in FIG. 5, etc. are stored in a specific program code segment of the memory 42 as program modules. The memory 42 can also function as the cell state storing portion 36 shown in FIG. 5, etc.




The input device 43 can be, for example, a keyboard, a pointing device, a touch panel, etc., and is used by a user to input instructions and information. The output device 44 can be, for example, a display, a printer, a speaker, etc., and is used to output inquiries and process results to the user.




For example, when a user inputs the positions of objects to be placed, the information processing device automatically obtains better positions of the objects and outputs them visually. As a result, the user can solve an arbitrary object positioning problem, such as placing semiconductor cells, placing buildings in a city plan, or arranging the layout of interior articles, and can easily examine the obtained positions of the objects.




The external storage device 45 can be, for example, a magnetic disk device, an optical disk device, a magneto-optical disk device, etc. The information processing device stores the above described program and data in the external storage device 45 and loads them into the memory 42 as necessary for use.




The medium drive device 46 drives a portable storage medium 49 and accesses its stored contents. The portable storage medium 49 can be an arbitrary computer-readable storage medium such as a memory card, a floppy disk, a CD-ROM (compact disk read only memory), an optical disk, a magneto-optical disk, etc. The user can store the program and data in the portable storage medium 49 and load them into the memory 42 as necessary for use.




The network connection device 47 communicates with external devices through an arbitrary network (line) such as a LAN (local area network), and converts data for communication. The information processing device receives the program and data from an external device through it as necessary and loads them into the memory 42 for use.




Described below are examples of program codes for the next alternative generating process and the acceptance determining process. FIG. 12 shows a pseudo-program code of the state transition process shown in FIG. 3, and FIG. 13 shows a pseudo-program code of the acceptance determining process called by the program shown in FIG. 12. The instructions described in these program codes are stored in the memory 42 shown in FIG. 11 in a format readable by the CPU 41 and are executed by the CPU 41.




C1 shown in FIG. 12 is the initialization of the temperature parameter used in the simulated annealing process. C2 specifies the time for the Hamiltonian integration. The value of the integral time can, if necessary, be included in the parameter vector when a multi-dimensional Gaussian distribution parameter is designated for each state variable, and is therefore set to 1 in this example. C3 initializes the parameter vector.




C4 starts the main loop. C5 updates the temperature parameter. The update routine determines the subsequent temperature based on the number of times the main loop has been executed, the temperature at the previous execution, and so on. Normally, the subsequent temperature is obtained by multiplying the previous temperature by a constant smaller than 1.
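A minimal sketch of such an update routine is shown below in C; the cooling factor 0.95 is only an example value and is not taken from the pseudo-program code.

    /* Geometric annealing schedule: the subsequent temperature is obtained by
       multiplying the previous temperature by a constant smaller than 1. */
    double update_temperature(double previous_temperature)
    {
        const double cooling_factor = 0.95;   /* example value */
        return previous_temperature * cooling_factor;
    }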




C6 and C7 correspond to the process in step S1 shown in FIG. 3. C6 selects specified_variable_id, which is an identifier of the state variable to be changed. C7 sets the value of the state variable (a vector) designated by this identifier as the initial coordinate (initial position) from which the integration is started.




C8 obtains the parameter vector that designates the multi-dimensional Gaussian distribution parameter for the state variable designated by specified_variable_id. The parameter vector corresponds to the combination of the variances of the respective dimensions.




C9 corresponds to the process in step S2 shown in FIG. 3; it generates a vector at random from the multi-dimensional Gaussian distribution designated by the obtained parameter vector and sets it as the initial momentum.




C10 corresponds to the processes in steps S3 and S4 shown in FIG. 3; it performs the Hamiltonian integration using the initial momentum, the function providing the gradient of the approximated energy function, and the integral time, with the specified value of the state variable taken as the initial position, and returns the result as the alternative for the next position. The function providing the gradient of the approximated energy function is specified by the pointer approximate_energy_function_gradient.




If the approximated energy function has a special form such as a piecewise linear function, so that the Hamiltonian integration can easily be performed, a special function for the integration may be called based on the form of the function. In this case, C10 can be described as follows.






candidate_position = integrate_hamiltonian_for_special_approximate_energy_function(initial_position, initial_momentum, integral_time);






C11 corresponds to the processes in steps S5, S6, and S7 shown in FIG. 3; it passes the initial value of the specified state variable, the value of the alternative obtained from the integration, the original energy function, the approximated energy function, and the temperature to the function “accepted”, which provides an acceptance probability satisfying the detailed balance condition, and then determines whether or not the alternative position is accepted. The function “accepted” returns ‘true’ if the alternative position is accepted; otherwise, it returns ‘false’.




C12 corresponds to the process in step S6 shown in FIG. 3. When the new position is accepted, it changes the value of the specified state variable to candidate_position. Unless it is accepted, the value of the state variable is not changed.




C13 corresponds to the process in step S8 shown in FIG. 3 and checks the termination condition, such as the number of times the loop has been executed. C14 indicates that control returns to the main loop at C4 unless the termination condition is satisfied. If the termination condition is satisfied, the process is terminated.




The function “accepted” called at C11 is described as shown in FIG. 13. This implementation is only an example; an arbitrary implementation can be used on the condition that the relation of the equation (5) is realized.




C21 generates a random number between 0 and 1 and sets it to the variable “random”. C22 computes the energy difference between the given initial position and the given alternative position using the original energy function. C23 computes the energy difference between the initial position and the alternative position using the approximated energy function.




C24 is the determination based on the acceptance probability of the equation (5); if the value of the natural logarithm on the right side is equal to or larger than the random number on the left side, then the alternative position is accepted. C25 indicates that ‘true’ is returned if the alternative position is accepted. C26 indicates that ‘false’ is returned if the alternative position is not accepted.
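For illustration, the structure described by C21 through C26 can be written in C as follows. The signature, the two-dimensional positions and the function pointers for the original and approximated energy functions are assumptions of this sketch; since the equation (5) itself is not reproduced in this section, the acceptance test at C24 is written here in an exponential form based on the two energy differences and the temperature, which is likewise an assumption.

    #include <math.h>
    #include <stdlib.h>

    typedef double (*energy_fn)(double qx, double qy);   /* energy of one position */

    static double uniform01(void)
    {
        return (double)rand() / ((double)RAND_MAX + 1.0);
    }

    /* Sketch of the function "accepted" described by C21 through C26.
       Returns 1 (true) when the alternative position is accepted. */
    int accepted_sketch(double init_x, double init_y,
                        double cand_x, double cand_y,
                        energy_fn energy, energy_fn approximate_energy,
                        double temperature)
    {
        double random = uniform01();                                   /* C21 */
        double energy_difference =                                     /* C22 */
            energy(cand_x, cand_y) - energy(init_x, init_y);
        double approximate_energy_difference =                         /* C23 */
            approximate_energy(cand_x, cand_y) - approximate_energy(init_x, init_y);

        /* C24: acceptance test, here written as
           exp(-(energy_difference - approximate_energy_difference) / temperature) >= random. */
        if (exp(-(energy_difference - approximate_energy_difference) / temperature) >= random)
            return 1;                                                  /* C25 */
        return 0;                                                      /* C26 */
    }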




Although the present invention has been shown and described with respect to best mode embodiments thereof, it should be understood by those skilled in the art that the foregoing and various other changes, omissions and additions in the form and detail thereof may be made therein without departing from the spirit and scope of the present invention.



Claims
  • 1. A next alternative generating apparatus for determining the next state in a simulated annealing process for solving an optimizing problem, comprising:an input device inputting digital data of a state at present and the predetermined period; a momentum generating processing device generating digital data representing a gradient of a piecewise linear energy function, which has a simpler form than an original energy function and simplifies a calculation corresponding to a given data of a state; an integral processing device performing digital processing of a simplified integration of a Hamilton equation for the predetermined period with initial conditions of the state at present stored in said storage device and the momentum generated by said momentum generating processing device, said integral processing device calculating resultant data of a state using the data of the period, the state at present, the momentum, and a gradient obtained from a first discrete state of which the optimizing problem is described in an energy defining region into a space described in a second state as a continuous state and outputting a second state corresponding to a first state as the state at present, and wherein said output device is adapted for outputting a first state corresponding to a second state obtained as the resultant data of the state as a state of the next alternative.
  • 2. The next alternative generating apparatus as set forth in claim 1, further comprising:an accepting device determining whether or not to accept the state of the next alternative received from said output device so that a detailed balance is satisfied.
  • 3. The next alternative generating apparatus as set forth in claim 1, further comprising:an accepting device determining whether or not to accept the state of the next alternative received from said output device, wherein said output device is adapted for supplying a difference between a value of the piecewise linear energy function in the state at present and a value of the piecewise linear energy function in the state of the next alternative to said accepting device so as to determine whether or not to accept the state of the next alternative with the difference of the piecewise linear energy function.
  • 4. The next alternative generating apparatus as set forth in claim 3,wherein said accepting device determines whether or not to accept the state of the next alternative based on the difference of the piecewise linear energy function and a difference between a value of the original energy function in the state at present and a value of the original energy function in the state of the next alternative.
  • 5. The next alternative generating apparatus as set forth in claim 1,wherein said momentum generating processing device is adapted for obtaining data of a momentum from a Gaussian distribution corresponding to the dimensions of a parameter to be changed in the state at present.
  • 6. The next alternative generating apparatus as set forth in claim 1,wherein said input device is adapted for placing a first discrete state of which the optimizing problem is described in an energy defining region into a space described in a second state as a continuous state and outputting a second state corresponding to a first state as the state at present, and wherein said output device is adapted for outputting a first state corresponding to a second state obtained as the resultant data of the state as a state of the next alternative.
  • 7. The next alternative generating apparatus as set forth in claim 6,wherein said input device is adapted for dividing the space into a plurality of regions, designating a probability distribution to each region, and obtaining the state at present corresponding to the probability distribution in the region corresponding to the first state at present, and wherein said output device is adapted for causing the first state corresponding to a region including the second state obtained as the resultant data of the state to be the state of the next alternative.
  • 8. The next alternative generating apparatus as set forth in claim 7, further comprising:an accepting device determining whether or not to accept the state of the next alternative received from said output device, wherein said output device is adapted for supplying control values of a probability density of a region corresponding to the first state at present and a probability density of a region including the second state obtained as the resultant data of the state to said accepting device, and wherein said accepting device is adapted for determining whether or not to accept the state of the next alternative with the control values so that a detailed balance is satisfied.
  • 9. A simulating apparatus for use with a simulated annealing process, comprising:a next alternative selecting processing device selecting a state of the next alternative according to a probability distribution; a storage device storing data of the state of the next alternative selected by the next alternative selecting processing device; a calculating device calculating a difference between energy in a present state and in the state of the next alternative according to a simplified energy function which has a simpler form than an original energy function and simplifies a calculation; an accepting processing device accepting the state of the next alternative according to an accepting probability based on the difference of the simplified energy function and a difference between energy in the present state and in the state of the next alternative according to the original energy function; and an output device outputting a simulation result generated from the accepted state, which simulates a simulated annealing operation for a technical problem.
  • 10. A simulating apparatus for use with a simulated annealing process, comprising:a next alternative selecting processing device selecting a state of the next alternative by integrating a Hamilton equation for a predetermined period using a simplified energy function which has a simpler form than an original energy function and simplifies a calculation; a storage device storing data of the state of the next alternative selected by the next alternative selecting processing device; a calculating device calculating a difference between energy in a present state and in the state of the next alternative according to the simplified energy function; an accepting processing device accepting the state of the next alternative according to an accepting probability based on the difference of the simplified energy function and a difference between energy in the present state and in the state of the next alternative according to the original energy function; and an output device outputting a simulation result generated from the accepted state, which simulates a simulated annealing operation for a technical problem.
  • 11. The simulating apparatus as set forth in claim 10,wherein the simplified energy function is adapted for using a different function corresponding to a state.
  • 12. The simulating apparatus as set forth in claim 10,wherein said accepting processing device is adapted for generating uniform random number ranging from 0 to 1, accepting the state of the next alternative when the accepting probability is greater than the uniform random number, and not accepting the state of the next alternative when the accepting probability is less than the uniform random number.
  • 13. A next alternative generating apparatus for determining the next state in a simulated annealing process for solving an optimizing problem, comprising:input means for inputting digital data of a state at present and a predetermined period; storage means for storing the data of the state at present and the predetermined period; momentum generating processing means for generating digital data of a momentum; energy generating processing means for generating digital data representing a gradient of a piecewise linear energy function, which has a simpler form than an original energy function and simplifies a calculation corresponding to a given data of a state; integral processing means for performing digital processing of a simplified integration of a Hamilton equation for the predetermined period with initial conditions of the state at present stored in said storage means and the momentum generated by said momentum generating processing means, said integral processing means calculating resultant data of a state using the data of the period, the state at present, the momentum, and a gradient obtained from said energy generating processing means by giving data of a state to said energy generating processing means; and output means outputting the resultant data of the state as a state of the next alternative, which simulates an optimizing operation for a technical problem.
  • 14. A simulating apparatus for use with a simulated annealing process, comprising:next alternative selecting processing means for selecting a state of the next alternative according to a probability distribution; storage means for storing data of the state of the next alternative selected by the next alternative selecting processing means; calculating means for calculating a difference between energy in a present state and in the state of the next alternative according to a simplified energy function which has a simpler form than an original energy function and simplifies a calculation; accepting processing means for accepting the state of the next alternative according to an accepting probability based on the difference of the simplified energy function and a difference between energy in the present state and in the state of the next alternative according to the original energy function; and output means for outputting a simulation result generated from the accepted state, which simulates a simulated annealing operation for a technical problem.
  • 15. A simulating apparatus for use with a simulated annealing process, comprising:next alternative selecting processing means for selecting a state of the next alternative by integrating a Hamilton equation for a predetermined period using a simplified energy function which has a simpler form than an original energy function and simplifies a calculation; storage means for storing data of the state of the next alternative selected by the next alternative selecting processing means; calculating means for calculating a difference between energy in a present state and in the state of the next alternative according to the simplified energy function; accepting processing means for accepting the state of the next alternative according to an accepting probability based on the difference of the simplified energy function and a difference between energy in the present state and in the state of the next alternative according to the original energy function; and output means for outputting a simulation result generated from the accepted state, which simulates a simulated annealing operation for a technical problem.
Priority Claims (1)
Number Date Country Kind
7-037222 Feb 1995 JP
CROSS REFERENCE TO RELATED APPLICATION

This application is a continuation-in-part of U.S. patent application Ser. No. 08/959,296 filed on Oct. 24, 1997, now U.S. Pat. No. 6,052,677, which is a continuation of U.S. patent application Ser. No. 08/567,905 filed on Dec. 6, 1995, now abandoned.

US Referenced Citations (7)
Number Name Date Kind
5134685 Rosenbluth Jul 1992 A
5241465 Oba Aug 1993 A
5274742 Morita Dec 1993 A
5303328 Masui Apr 1994 A
5475608 Masuoka Dec 1995 A
5754444 Koford May 1998 A
5813798 Whiffen Sep 1998 A
Non-Patent Literature Citations (20)
Entry
Qian, “Computer networking representations for parallel distributed computing algorithms”, IEEE Proceedings of Intl Conf on Neural Networks v 2 p 1577-81.
Chen, “Electronic structure and morphology of alkali-metal clusters”, J. Phys. B: At. Mol. Opt. Phys. 23, pp 885-903.
Poteau, “Distance dependent Huckel type model for the study of sodium clusters”, Physical review B v 45 n4.
Lu, “first principles simulated annealing study of phase transitions and short range order in transition metal and semiconductor alloys”, Physical review B v 50 n10.
Goldstein, “Optimal protein folding codes from spin glass theory”, Proc Natl Acad Sci USA v89.
Puma, “Computer analysis of electron paramagnetic resonance data using the monte carlo method”, J. Phys C: solid state phys. 21.
Duane, “Hybrid Monte Carlo”, Physics Letters B v 195 n2.
Tomanek, “Growth regimes of carbon clusters”, Physical review letters v67.
De Groot, “Optimizing complex problems by nature's algorithms: simulated annealing and evolution strategy-a comparative study”, IEEE conf of parallel problem solving from nature.
Pensini, “Flowshop and TSP”.
Beckerman, “Segmentation and cooperative fusion of laser radar image data”, SPIE conf on sensor fusion and aerospace applications II v 2233.
Hynderickx, “Simulated anneal method for the determination of spin hamiltonian parameters from esr data”, Journal of magnetic resonance v70.
H. Igarashi, “An estimation of parameters in an energy function used in a simulated annealing method”, 1992 International Joint Conference on Neural Networks, vol. 4, pp. 480-485.
Chong Su Yu et al., “Parallel mean field annealing neural network for solving traveling salesman problem”, 1992 International Joint Conference on Neural Networks, vol. 4, pp. 532-536.
A. Rangarajan et al., “A continuation method for emission tomography”, Conference Record of the 1992 Nuclear Science Symposium and Medical Imaging Conference, vol. 2, pp. 1204-1206.
Geman et al., “Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, No. 6, pp. 721-741, 1984.
Johnson et al., “Optimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning”, Operations Research, vol. 39, No. 3, pp. 378-406, 1991.
Johnson et al., “Optimization by Simulated Annealing: An Experimental Evaluation; Part I, Graph Partitioning”, Operations Research, vol. 37, No. 6, pp. 865-892, 1989.
Yugami et al., “Solving a Large-Scale Production Scheduling by Extended Simulated Annealing”, Artificial Intelligence, vol. 90-8, pp. 61-69, 1993.
Hongo et al., “Contour Extraction By Local Parallel and Stochastic Algorithm which has Energy Learning Faculty”, The Institute of Electronics, Information and Communication Engineers, vol. J74-D-II, No. 3, pp. 348-356, 1991.
Continuations (1)
Number Date Country
Parent 08/567905 Dec 1995 US
Child 08/959296 US
Continuation in Parts (1)
Number Date Country
Parent 08/959296 Oct 1997 US
Child 09/514345 US