Constrained combinatorial optimization problems represent a class of computational challenges where the goal is to find the optimal solution from a finite set of feasible solutions, subject to a set of constraints. These problems are characterized by discrete decision variables and a multitude of constraints that limit the feasible solution space. The objective is to maximize or minimize an objective function while adhering to the specified constraints. Common examples include the traveling salesman problem, job scheduling, and network design. The complexity of these problems arises from the discrete nature of the decision variables, leading to a combinatorial explosion of possible solutions and requiring sophisticated algorithms for efficient exploration of the solution space.
Addressing constrained combinatorial optimization problems involves the development and application of various optimization techniques. Metaheuristic algorithms, such as genetic algorithms, simulated annealing, and ant colony optimization, are commonly employed due to their ability to navigate large solution spaces and find near-optimal solutions. These algorithms iteratively explore and exploit the solution space, balancing the trade-off between exploration for diverse solutions and exploitation to refine the current best-known solution. Additionally, mathematical programming techniques, such as integer programming and constraint programming, offer formal frameworks for modeling and solving constrained combinatorial optimization problems. These methods provide a systematic approach to formulate the problem, express constraints, and leverage optimization solvers to find optimal or near-optimal solutions.
Despite the advancements in optimization algorithms, constrained combinatorial optimization problems remain challenging due to their NP-hard nature. Efficiently solving large instances of these problems requires a combination of algorithmic innovations, problem-specific heuristics, and computational resources. The complexity and ubiquity of constrained optimization problems motivate the development of quantum algorithms as a new approach to tackle them.
For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more examples of the invention is provided below along with accompanying figures that illustrate various features of the invention. The invention is described in connection with such examples, but the invention is not limited to any example. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Described herein is an adiabatic quantum algorithm, quantum constrained Hamiltonian optimization (Q-CHOP), that may be used for solving constrained optimization problems. In various implementations, Q-CHOP enforces a Hamiltonian constraint at all times, and constructs an adiabatic path directly out of the problem's objective function to convert the worst feasible state into the best feasible state. In contrast to the standard approaches to adiabatic quantum optimization, Q-CHOP constructs the entire adiabatic schedule out of optimization problem data, without invoking an ad-hoc driver Hamiltonian or initial state that have no relation to the problem at hand. Q-CHOP further comes equipped with a natural choice for the constraint-enforcing “penalty factor”, leaving only the quantum runtime as a hyperparameter that may need careful tuning for performance. Q-CHOP consistently outperforms the standard, penalty-based adiabatic algorithm in numerical experiments across a diverse set of problems, suggesting that Q-CHOP is a promising general-purpose method for constrained combinatorial optimization.
Referring to
In some implementations, the quantum computing algorithm represented by the quantum circuit specification is used for evolving the time-dependent Hamiltonian over a first iteration comprising a sequence of discrete time intervals. These discrete time intervals can be variationally optimized using a processor (e.g., a classical processor such as the processor core(s) of the digital computer 104, or a quantum processor, or a hybrid quantum-classical computing system that includes both a classical processor and a quantum processor). For example, after evolving the time-dependent Hamiltonian over the first iteration, a variational optimization of the second Hamiltonian may be performed to determine a second iteration comprising a second sequence of discrete time intervals. The variational optimization may include performing gradient descent, Adam, etc. The same sequence of discrete time intervals may be evolved repeatedly in order to generate an expectation value of the second Hamiltonian for a given sequence of discrete time intervals. In such a case, the variational optimization would optimize the expectation value of the second Hamiltonian.
By way of background, consider a constrained combinatorial optimization problem of the form:
Here f is the objective function and Ci are constraints. It is assumed without loss of generality that all constraint functions are nonnegative, which implies that they are minimized by constraint-satisfying (feasible) bitstrings; otherwise, Ci(x) can be replaced in Eq. (1) by C̃i(x)=Ci(x)². The treatment of inequality constraints is discussed further below.
Under the assumption of nonnegative constraints, the optimization problem in Eq. (1) can be encoded into a system of qubits by the objective and constraint Hamiltonians:
thereby associating “good” bitstrings of low objective-function value with low-energy states of Hobj, and identifying the set of feasible bitstrings with the ground-state subspace of Hcon. Thus encoded, the optimization problem in Eq. (1) can be solved via the adiabatic algorithm.
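As an illustration of this encoding (a minimal sketch with hypothetical problem data, rather than the specific construction referenced above), diagonal objective and constraint Hamiltonians for a two-variable toy problem can be built as follows:

```python
import numpy as np
from itertools import product

# Toy problem (hypothetical data): minimize f(x) = x0 - 2*x1 subject to C(x) = x0*x1 = 0.
num_qubits = 2
bitstrings = list(product((0, 1), repeat=num_qubits))

def objective(x):
    return x[0] - 2 * x[1]

def constraint(x):
    return x[0] * x[1]   # nonnegative, and zero exactly on feasible bitstrings

# Diagonal Hamiltonians in the computational basis: H|x> = h(x)|x>.
H_obj = np.diag([float(objective(x)) for x in bitstrings])
H_con = np.diag([float(constraint(x)) for x in bitstrings])

# Feasible bitstrings span the zero-energy ground-state subspace of H_con.
feasible = [x for x in bitstrings if constraint(x) == 0]
print(feasible)   # [(0, 0), (0, 1), (1, 0)]
```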
To this end, the standard procedure is to first construct a problem Hamiltonian Hprob=Hobj+λHcon with a penalty factor for constraint violation, λ, that is sufficiently large to ensure that the ground state of Hprob is feasible. One then chooses a driver Hamiltonian Hdriver that satisfies two properties: (i) it has an easy-to-prepare ground state, and (ii) it does not commute with Hprob. Finally, the quantum algorithm prepares the ground state of Hdriver, and slowly changes the Hamiltonian of the system from Hdriver to Hprob over the course of a total time T, for example via the interpolating Hamiltonian:
sweeping s: 0→1 as the time t: 0→T.
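For concreteness, a common choice of such an interpolating Hamiltonian, assumed here to be the usual linear path (the specific form is that of Eq. (3) below), is Hinterp(s)=(1−s)Hdriver+sHprob with s=t/T.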
T is referred to as the “quantum runtime” of the algorithm. Property (i) is necessary to ensure that the algorithm solves the optimization problem by adiabatically tracking the ground state of Hinterp, while property (ii) ensures that Hinterp has off-diagonal terms that induce nontrivial dynamics.
Choosing an interpolating Hamiltonian of the form in Eq. (3) is not required, and indeed sometimes different paths from Hdriver to Hprob can improve performance, in the extreme case turning algorithmic failure into success. For simplicity, however, the interpolating Hamiltonian of Eq. (3) is considered. This approach is referred to as the standard adiabatic algorithm (SAA).
The adiabatic theorem ensures that the SAA prepares an optimal solution in time T that scales with an inverse power of the smallest spectral gap Δinterpmin of Hinterp (minimized over s∈[0, 1]), provided that there are no symmetries that fragment Hilbert space into uncoupled sectors. However, the gap Δinterpmin is usually not known in advance, so in practice the SAA requires making an ad hoc choice of some “long” time T, and hoping that T is sufficiently large. If the selected time is too small, the SAA will not only fail to provide an optimal solution, it may even fail to provide a feasible solution.
There are a few additional shortcomings of the SAA that are worth noting. First, the SAA requires making a careful choice of the penalty factor λ. Increasing λ makes it more likely that the SAA finds feasible solutions to the optimization problem at hand. However, doing so also increases the runtime of the algorithm, or equivalently the depth of a quantum circuit that implements this algorithm with fixed error. Making an optimal choice of λ has been shown to be a difficult task in practice and NP-hard in general, though known heuristic strategies exist that work well in certain cases.
Second, the SAA relies on an ad hoc initial state and driver Hamiltonian Hdriver that have no connection to the problem of interest. A common choice is the uniform superposition over all bitstrings (representing the uniform prior) as the initial state, together with a transverse magnetic field, for which the uniform superposition is the unique ground state, as the driver Hamiltonian. However, even here one must choose an overall normalization (prefactor) for Hdriver, analogous to the choice of penalty factor λ. Finally, there is an intuitive sense in which the SAA does “too much work”: it searches the configuration space of all bitstrings, feasible or otherwise.
Described below are various examples of quantum constrained Hamiltonian optimization (Q-CHOP) and its application to problems with various objectives and constraints. The Pauli-Z operator for qubit j∈{1, 2, . . . , N} is denoted by Zj=|0⟩⟨0|j−|1⟩⟨1|j, the Pauli-X operator by Xj=|0⟩⟨1|j+|1⟩⟨0|j, and the Pauli-Y operator by Yj=−iZjXj. The generator of a global rotation about axis α∈{x, y, z} on the Bloch sphere is denoted by Sα, e.g. Sz=(1/2)ΣjZj, such that Rα(θ)=e^(−iθSα).
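This notation can be made concrete with a short numpy sketch (an illustrative construction assuming the conventional normalization Sα=(1/2)Σjαj, which is consistent with the spectral range N quoted for Sx in the simulations below; the helper names are hypothetical):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                  # Z = |0><0| - |1><1|
X = np.array([[0.0, 1.0], [1.0, 0.0]])    # X = |0><1| + |1><0|
Y = -1j * Z @ X                           # Y = -i Z X

def op_on(op, j, n):
    """Embed a single-qubit operator on qubit j (0-indexed) of an n-qubit register."""
    factors = [I2] * n
    factors[j] = op
    return reduce(np.kron, factors)

def global_generator(op, n):
    """S_alpha = (1/2) * sum_j alpha_j, generating the global rotation R_alpha(theta)."""
    return 0.5 * sum(op_on(op, j, n) for j in range(n))

n = 3
S_y = global_generator(Y, n)
R_y = expm(-1j * (np.pi / 2) * S_y)       # R_y(theta) = exp(-i * theta * S_y) at theta = pi/2
```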
The general strategy for Q-CHOP is as follows:
Intuitively the constraint Hamiltonian Hcon will restrict dynamics to the feasible subspace 𝓗feas, while the ramp Hramp will adiabatically transfer the initial state |xinit⟩ to an optimal (or approximate) solution to the optimization problem. In this general form, Q-CHOP shares many features with the SAA. The scalar λ is analogous to the penalty factor in the SAA, and indeed the Hamiltonians of Q-CHOP and the SAA at s=1 are equal (up to an overall normalization factor). Similarly, the choices of an initial Hamiltonian Hinit and a ramp Hramp for Q-CHOP are analogous to choices of a driver Hamiltonian Hdriver and a path from Hdriver to Hprob in the SAA.
Though this general form of Q-CHOP does not practically reduce the number of choices necessary to solve the constrained optimization problem, it effectively reduces the size of the classical search space of the quantum algorithm, which can reduce requisite runtimes. Moreover, Q-CHOP provides a theoretical framework for making a suitable choice of λ. Denote the operator norm of M by ∥M∥, let ∥Hramp∥=maxs∈[0,1] ∥Hramp(s)∥, and denote the spectral gap of Hcon by Δcon. If
then second-order perturbation theory guarantees that Q-CHOP will yield feasible states with probability
Intuitively, condition (a) ensures that the ramp Hramp is not “strong enough” to violate constraints, while condition (b) precludes rapid (e.g., Floquet) driving schemes that resonantly pump energy into the system. The spectral gap Δcon can typically be determined (or at least lower-bounded) by inspection of the constraint Hamiltonian Hcon.
Larger values of penalty factor λ provide better guarantees of feasible solutions, while smaller values of λ are desirable to increase the weight of the ramp Hramp that drives evolution within the feasible subspace. Condition (a) and the guarantee in Eq. (5) provide an upper bound on “reasonable” choices of λ, namely λmax≈2∥Hramp∥/Δcon. In practice, even the upper bound λ≤λmax is too stringent, as the above discussion still holds if the norm ∥Hramp(s)∥ is replaced by ∥(1−Pfeas)Hramp(s)Pfeas∥, where Pfeas is a projector onto the feasible subspace. One should therefore expect smaller values of λ to still yield predominantly feasible states, although the limits on λ may need to be found empirically, since the norm ∥(1−Pfeas)HrampPfeas∥ is generally not known a priori. Moreover, the ramp constructed and described below has the feature that ∥Hramp∥≤∥Hobj∥. The fact that a suitable choice of λ can be determined from ∥Hobj∥ and Δcon directly, rather than from a significantly more complex interpolating Hamiltonian in the case of the SAA, is a notable advantage of Q-CHOP, since these quantities are typically easy to estimate or bound appropriately. A uniform linear objective
(where ≃ denotes equality up to a constant), for example, has norm ∥Hobj∥=N/2, so if Δcon=1 then λ=N is a reasonable choice. However, in the worst case, determining the optimal choice of λ may be NP-hard as is the case for the SAA.
It is often the case that the objective function of an optimization problem is linear in the decision variables of the problem, in which case the corresponding objective Hamiltonian can be written (up to an irrelevant constant shift) as a weighted sum of single-qubit Pauli-Z operators. This Hamiltonian is then odd with respect to qubit inversion, which is to say that ⟨x|Hobj|x⟩=−⟨x̄|Hobj|x̄⟩, where x=(x1, x2, . . . )∈{0, 1}^N is a bitstring and x̄=(1−x1, 1−x2, . . . ) is its bitwise complement. The ramp is parameterized by the angle θ=sπ for brevity, and Ry(θ)=e^(−iθSy).
When these two conditions hold, namely the objective Hamiltonian is odd and the worst feasible state |xworst⟩ is easy to prepare, Q-CHOP can be run as follows. Prepare |xworst⟩, pick a quantum runtime T and positive number λ≈2∥Hobj∥/Δcon, and evolve from time t=0 to t=T under the total Hamiltonian
sweeping θ: 0→π as t: 0→T, for example with the linear ramp θ=πt/T. Here ∥Hobj∥ is the operator norm of Hobj, and Δcon can be replaced by an estimate or lower bound on the spectral gap of Hcon. The prescription for choosing λ is only a guideline, and smaller values of λ may work well in practice. λ was set equal to the qubit number N in all examples described herein.
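As a minimal illustration of this protocol, the following sketch simulates a hypothetical three-vertex MIS-style instance with dense matrices and piecewise-constant (Trotterized) stepping rather than the exact integration described later. The sign convention of the ramp is an assumption chosen so that the worst feasible state is the instantaneous feasible ground state at t=0 and the best feasible state at t=T; it is not a verbatim transcription of Eq. (7).

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Pauli helpers (redefined here so the snippet is self-contained).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(op, j, n):
    factors = [I2] * n
    factors[j] = op
    return reduce(np.kron, factors)

# Hypothetical 3-vertex MIS instance: path graph 0-1-2 (illustrative only).
n = 3
edges = [(0, 1), (1, 2)]
proj1 = 0.5 * (I2 - Z)            # |1><1|
proj1_x = 0.5 * (I2 - X)          # the same term with Z replaced by X

H_con = sum(op_on(proj1, v, n) @ op_on(proj1, w, n) for v, w in edges)
H_obj_z = -sum(op_on(proj1, v, n) for v in range(n))      # minimize -(set size)
H_obj_x = -sum(op_on(proj1_x, v, n) for v in range(n))    # Z -> X replaced version

lam = float(n)                    # penalty factor lambda = N, as in the text
T, steps = 2 * np.pi * n**2, 400
dt = T / steps

psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                      # worst feasible state: the empty set |000>

for k in range(steps):
    theta = np.pi * (k + 0.5) / steps
    # Assumed sign convention: the ramp starts at -H_obj (so the worst feasible
    # state is the initial feasible ground state) and ends at +H_obj.
    H_ramp = -np.cos(theta) * H_obj_z - np.sin(theta) * H_obj_x
    H_tot = lam * H_con + H_ramp
    psi = expm(-1j * dt * H_tot) @ psi

probs = np.abs(psi) ** 2
best = int(np.argmax(probs))
print(f"most likely bitstring: {best:0{n}b} with probability {probs[best]:.3f}")
```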
In the case of MIS, Q-CHOP may reduce to the specialized algorithm based on non-Abelian adiabatic mixing, up to an adiabatically vanishing addition of ∂tθSy to Htot(θ) in Eq. (7). However, the general formalism of Q-CHOP extends to more general constrained optimization problems as well. Intriguingly, the addition of ∂tθSy can be identified with approximate counterdiabatic driving, and its form depends only on the linearity of the MIS objective function. An added term of ∂tθSy is therefore expected to marginally improve the performance of Q-CHOP for any optimization problem with linear objectives, and similar approximate counterdiabatic driving terms can likely be derived for non-linear objectives as well.
In various examples, Q-CHOP may be sped up for problems with single-body objective Hamiltonians of the form:
where {cj} are arbitrary real coefficients. For convenience and clarity, denote Hobj,z=Hobj, as well as rotated versions Hobj,x and Hobj,y in which the Pauli-Z operators in Hobj,z are respectively replaced by Pauli-X and Pauli-Y operators. The restriction of a Hamiltonian to the feasible subspace 𝓗feas is denoted by a superscript, such that Hobj,αfeas=PfeasHobj,αPfeas, where Pfeas is a projector onto 𝓗feas.
Q-CHOP may rely on the adiabatic theorem to track the ground state of a Hamiltonian H(θ) by slowly varying a parameter θ. Evolving too quickly introduces errors due to non-adiabatic effects. These effects can be mitigated by the introduction of a counterdiabatic drive, in which the Hamiltonian is replaced according to H(θ)→H(θ)+∂tθ G, where the gauge field G can be found by minimizing the Hilbert-Schmidt norm of the operator 𝒜(G, H). Here [G, H]=GH−HG, and the (squared) Hilbert-Schmidt norm of 𝒜 is ∥𝒜∥HS²=tr(𝒜†𝒜).
When time evolution is sufficiently slow to avoid exciting states outside the feasible subspace, Q-CHOP effectively evolves from time t=0 to t=T under the Hamiltonian
where the angle θ: 0→π as t: 0→T. Since Htotfeas(θ) is the Hamiltonian to be augmented with a counterdiabatic drive, the next step is to find a gauge field G that minimizes the norm of 𝒜(G, Htotfeas), which is equivalent to minimizing the norm of 𝒜(G, Hobjfeas). To this end, the following expansion may be used:
and in turn
It is unclear how to minimize the norm of 𝒜(G, Hobjfeas) exactly, so it may instead be minimized variationally with respect to the ansatz:
for which
Nonzero values of gx and gz contribute only imaginary components to 𝒜(Gg, Hobjfeas), and conversely all imaginary components of 𝒜(Gg, Hobjfeas) involve factors of gx or gz. To minimize the Hilbert-Schmidt norm of 𝒜(Gg, Hobjfeas), set gx=gz=0 and define Gy=gySyfeas, leaving:
The projector Pfeas commutes with Hobj,z (both are diagonal in the computational basis), which allows a simplification:
Further substituting ∂θHobjfeas(θ) from Eq. (14) into Eq. (18) provides:
The sin(θ) term vanishes as θ→0 or π, at which point 𝒜(gySyfeas, Hobjfeas) is minimized by gy=1. Near θ=π/2, the sin(θ) term dominates, but here no definite statements about the norm of 𝒜(Gy, Hobjfeas) can be made without knowledge of the feasible subspace. Nonetheless, given that i[Sy, Hobj,x]=Hobj,z, the commutator i[Syfeas, Hobj,xfeas] is generally expected to be "Hobj,zfeas-like", so a minimum near gy≈1 is expected. In some examples, the gauge field G=Sy is a reasonable (albeit sub-optimal) choice to mitigate non-adiabatic effects. In the case of linear objectives, the total Hamiltonian for Q-CHOP with approximate counterdiabatic driving thus becomes
The small factor of ∂tθ~1/T prevents ∂tθ Sy from generating a population transfer to non-feasible states, so ∂tθ Sy≈∂tθ Syfeas.
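In a numerical simulation, this approximate counterdiabatic term can be included simply by adding ∂tθ Sy to the instantaneous Hamiltonian. A minimal sketch follows (the function name and arguments are hypothetical, and the ramp's sign convention matches the assumption made in the earlier snippet):

```python
import numpy as np

def total_hamiltonian_cd(theta, dtheta_dt, lam, H_con, H_obj_z, H_obj_x, S_y):
    """Q-CHOP Hamiltonian with the approximate counterdiabatic term (dtheta/dt) * S_y.

    For a linear ramp theta = pi * t / T, dtheta_dt = pi / T, which vanishes as T grows.
    """
    H_ramp = -np.cos(theta) * H_obj_z - np.sin(theta) * H_obj_x   # assumed sign convention
    return lam * H_con + H_ramp + dtheta_dt * S_y
```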
In some examples, the requirement of an odd objective used above is relaxed, and a modification of the algorithm is described that no longer requires the worst feasible state to be easy to prepare. In the case of a non-odd objective, the prescription in Eq. (6) needs modification to ensure that the rotated objective Hamiltonian changes sign when θ: 0→π. To this end, the objective can be decomposed into Pauli strings as
where N={1, 2, . . . , N} is the set of qubit indices. Next, one qubit in each term is rotated, and an average is taken over each choice of qubit, altogether defining
In some examples, the objective is decomposed into even and odd terms, the odd terms are rotated globally as in Eq. (6), and the even terms are rotated "locally" as in Eq. (23). These examples may have the benefit of reduced computational overheads when discretizing Q-CHOP for gate-based quantum computation. In some examples, the terms of a Hamiltonian may be made up of a sum of components.
In some examples of Q-CHOP, it is required that the worst feasible state of a constrained optimization problem be easy to prepare. In other examples, however, this requirement is relaxed to that of having at least one feasible state that is easy to prepare. For example, let |x★⟩ be an easy-to-prepare eigenstate of Hobj, and let E★=⟨x★|Hobj|x★⟩ be the corresponding objective energy. Define a subproblem with the same constraint Hamiltonian as the original optimization problem, Hcon, but a new objective Hamiltonian
for which |x★⟩ is the worst feasible state. By construction, Hobj★ is minimized within the feasible subspace by either the best feasible state |xbest⟩ or worst feasible state |xworst⟩ of the original optimization problem, depending on which corresponding objective energy (with respect to Hobj) is farther from E★. If the optimization subproblem defined by (Hobj★, Hcon) is solved by |xbest⟩, then after solving the subproblem the process is complete. Otherwise, the worst feasible state |xworst⟩ identified by the subproblem can be used to solve the original optimization problem (Hobj, Hcon). The relaxation of Q-CHOP from needing the worst feasible state to needing any feasible state comes at the cost of having to solve a subproblem with an objective Hobj★ that has quadratically larger norm, and can therefore be expected to require a larger penalty factor λ★ and a larger quantum runtime T★. There may additionally be concerns if the energy E★ has degeneracy d★≫1, in which case a population transfer into the ground state of the final Q-CHOP Hamiltonian can be expected to be suppressed by a factor of 1/d★.
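One natural choice of Hobj★ consistent with the properties just described (offered here as an illustrative assumption rather than a restatement of the defining equation) is Hobj★=−(Hobj−E★)²: this operator attains its maximum value of zero on |x★⟩, making |x★⟩ the worst feasible state of the subproblem; it is minimized within the feasible subspace by whichever of |xbest⟩ or |xworst⟩ has objective energy farthest from E★; and its norm is quadratically larger than that of Hobj.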
Next, the treatment of inequality constraints in Q-CHOP and the SAA is discussed. An inequality constraint of the form D(x)≥0 can be converted into an equality constraint by the introduction of a slack variable: D(x)=n, where n≥0 is a new decision variable for the optimization problem. This equality constraint can then be rearranged and squared to arrive at the constraint
at the cost of expanding the optimization domain as {0, 1}^N→{0, 1}^N×{0, 1, 2, . . . }. The Hamiltonian enforcing this constraint takes the form
where n̂D=Σn≥0 n|n⟩⟨n|D is a number operator that can be identified with an auxiliary bosonic mode indexed by the constraint function D. In this sense, bosonic modes may be resources for enforcing inequality constraints.
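As an illustration of this construction (a minimal sketch with a hypothetical inequality constraint, using dense matrices and a truncated slack register as discussed below), the constraint Hamiltonian enforcing D(x)=n can be assembled as follows:

```python
import numpy as np

# Hypothetical inequality constraint D(x) = 3 - x0 - 2*x1 >= 0 on two binary variables.
bitstrings = [(a, b) for a in (0, 1) for b in (0, 1)]
D_vals = np.array([3 - x0 - 2 * x1 for x0, x1 in bitstrings], dtype=float)

D_max = int(D_vals.max())                            # truncation of the slack register
n_hat = np.diag(np.arange(D_max + 1, dtype=float))   # number operator on the slack qudit

# Extend D(x) and the number operator to the combined (decision qubits) x (slack qudit) space.
D_op = np.kron(np.diag(D_vals), np.eye(D_max + 1))
n_op = np.kron(np.eye(len(bitstrings)), n_hat)

# Constraint Hamiltonian (D(x) - n)^2, whose zero-energy ground space enforces D(x) = n.
H_con_D = (D_op - n_op) @ (D_op - n_op)

# Every decision bitstring with D(x) >= 0 has exactly one zero-energy slack value.
assert np.count_nonzero(np.isclose(np.diag(H_con_D), 0.0)) == len(bitstrings)
```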
In practice, the dimension of a slack variable can be truncated at Dmax=maxx D(x) to encode a problem into a finite-dimensional Hilbert space. A discrete-variable encoding of the slack variable then requires ⌈log2(Dmax+1)⌉ qubits, though it may be desirable to encode the slack variable directly into a qudit or oscillator. If the maximum of D is not efficiently computable, Dmax can be replaced by an upper bound on D(x). Expanding
where each xj∈{0, 1}, the constant contribution to D(x) is denoted by c0D=c{ }D, and the set of remaining coefficients by CD={cSD: S⊂N, |S|>0}. A suitable upper bound for D(x) is then given by the sum of positive coefficients in CD,
where ℤ± denotes the set of positive (+) or negative (−) integers. If the coefficients in CD have a nontrivial greatest common divisor gcd(CD)>1, the dimensionality of the slack variable can be further reduced by pruning its allowed values to the set
For the SAA, the introduction of a slack variable may force a choice of initial state and driver Hamiltonian for the corresponding slack qudit. A natural generalization of the qubit-only case is to interpret the initial state |+⟩⊗N and transverse-field driver Hamiltonian
respectively as a uniform prior and a sum of projectors onto the uniform prior for each decision variable. Here ≃ denotes equality up to an overall constant. This interpretation motivates initializing the slack qudit into a uniform superposition of its domain, and adding the (negative) projector onto this state to the driver Hamiltonian. In some examples, this generalization of initial state and driver Hamiltonian is used.
In some examples of Q-CHOP, there are two subtleties to address for the treatment of inequality constraints. First, once an initial worst feasible state |xworst⟩ is identified, the slack variable for inequality constraint D may be initialized to |nworst⟩=|D(xworst)⟩. Second, merely adding a slack qudit with the constraint in Eq. (27) to the Q-CHOP Hamiltonian fragments Hilbert space into uncoupled sectors indexed by the value of n=D(x). Initializing the slack variable in |nworst⟩ thereby enforces, in effect, the much stronger constraint that D(x)=nworst. To remedy this situation, a strategy of engineering an alternative adiabatic path that adds thoughtfully chosen mixing terms to the Q-CHOP Hamiltonian may be used.
In order to satisfy the constraint enforced by Eq. (27), added mixing terms may be used to couple the decision variables and the slack variables, changing them in a correlated manner. A Hamiltonian term that flips qubit j, for example, should simultaneously change the value of all slack variables associated with inequality constraints that involve variable xj, so as to ensure that inequality constraints remain satisfied. However, engineering a strictly constraint-satisfying mixing Hamiltonian would involve complex multi-control logic that may generally be inefficient to implement. Accordingly, in some examples, slack variables are allowed to change to any value, and rely on the always-on constraint Hamiltonian of Q-CHOP to make constraint-violating terms off-resonant. For problems with inequality constraints, the objective Hamiltonian of Q-CHOP may be modified using a slack variable mixing operator S(θ). As a specific example of S(θ), the objective Hamiltonian of Q-CHOP may be modified as Hobj(θ)→Hobj(θ)×S(θ), with
Here |𝒟| is the dimension of the slack qudit for constraint D, where 𝒟 denotes its set of allowed values, and P+D is a projector onto its uniform superposition |+⟩D=Σn∈𝒟|n⟩D/√|𝒟|, such that |𝒟|P+D=Σm,n∈𝒟|m⟩⟨n|D. The factor of sin θ ensures that the added mixing terms vanish at θ=0 and π, which is necessary to preserve the initial and final ground states of the Q-CHOP Hamiltonian.
If the polynomial D(x) has degree d and coefficients with magnitude O(1), then |𝒟|=O(N^d). It is therefore worth noting that the slack operator in Eq. (31) grows with the product of the slack variable dimensions, ΠD|𝒟|. Though it is not surprising for the cost of solving an optimization problem to grow with the degree of its inequality constraints, it is highly desirable to have a cost that does not grow multiplicatively with the number of inequality constraints.
Described below are specific examples that numerically compare the performance of Q-CHOP to that of the SAA using exact simulation. A comprehensive performance comparison between Q-CHOP and the SAA requires a detailed, problem-aware assessment of the numerous choices that are made to instantiate simulation Hamiltonians, including choices of normalization factors, driver Hamiltonians (for the SAA), penalty factor, adiabatic schedules, and total evolution times.
To simulate Q-CHOP, an initial state is evolved with the Hamiltonian in Eq. (7) using a linear ramp, θ=πt/T. In the case of the SAA, the initial state |+⟩⊗N∝(|0⟩+|1⟩)⊗N is evolved with the Hamiltonian
where Sx=(1/2)ΣjXj is a transverse field normalized to have a spectral range equal to the qubit number N and the penalty factor is λ=N. For optimization problems with slack qudits, the qudits are initialized to a uniform superposition and projectors onto these initial states are added to Sx. In all the described experiments, the objective Hamiltonian is normalized to make the optimization problems scale-invariant. Specifically, Hobj is set to Hobj/𝒩(Hobj), where the normalization factor 𝒩(Hobj) is defined in terms of the nonzero coefficients of Hobj in Eq. (22) by
Here ⟨·⟩ denotes an average, and the empty set is excluded to make the norm 𝒩(Hobj) independent of constant shifts to Hobj. This normalization is equivalent to that of up to a constant factor of four. The constant factor is chosen so that 𝒩(Sz)=1 for the uniform linear objective Sz≃−Σj|1⟩⟨1|j with spectral range N.
Unlike the objective Hamiltonians, constraint Hamiltonians typically have a natural expression as a sum of projectors, each of which has a spectral range of 1. This ensures that the constraint Hamiltonians have a spectral gap of at least 1, and typically equal (or otherwise close) to 1. Therefore, no additional preconditioning of the constraint Hamiltonians is performed unless specified otherwise. For problems with inequality constraints, all constraint coefficients are divided by their greatest common divisor by taking D(x)→D(x)/gcd(CD∪{c0D}), the set of allowed slack variable values is pruned, and the SAA and Q-CHOP Hamiltonians are modified to address slack variables as discussed above.
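The greatest-common-divisor preconditioning described above can be illustrated with a short sketch using hypothetical coefficients:

```python
from math import gcd
from functools import reduce

# Hypothetical inequality constraint D(x) = 6 - 2*x0 - 4*x1 >= 0.
coefficients = [6, -2, -4]                   # constant term followed by remaining coefficients
g = reduce(gcd, (abs(c) for c in coefficients))
rescaled = [c // g for c in coefficients]    # D(x) -> D(x)/gcd, here 3 - x0 - 2*x1
print(g, rescaled)                           # 2 [3, -1, -2]
```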
To highlight the features and results of various implementations, the following metrics are used to measure the performance of an optimization algorithm. The first is the in-constraint approximation ratio r, which is defined so that the best and worst feasible states have values of 1 and 0, respectively. The operator whose expectation value is equal to the in-constraint approximation ratio is given by
where Pfeas is a projector onto the feasible-state manifold, and Eb=⟨xb|Hobj|xb⟩ for b∈{best, worst} are the objective energies of the best and worst feasible states.
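For a single feasible bitstring with objective value f(x), this normalization reduces to the scalar r(x)=(Eworst−f(x))/(Eworst−Ebest); the following minimal sketch (with hypothetical energies) illustrates it:

```python
def approximation_ratio(f_x, e_best, e_worst):
    """In-constraint approximation ratio of a feasible bitstring with objective value f_x.

    Normalized so that the best feasible state maps to 1 and the worst to 0.
    """
    return (e_worst - f_x) / (e_worst - e_best)

# Hypothetical objective energies of the best and worst feasible states.
print(approximation_ratio(f_x=-3.0, e_best=-5.0, e_worst=0.0))  # 0.6
```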
The second quantity is the overlap between the state produced by the quantum algorithm and the state encoding the optimal solution to the optimization problem. The corresponding operator is the projector onto the optimal solutions of an optimization problem:
For optimization problems whose objective function values may lie anywhere on the real line, finding the exact solution is too strict a criterion for success. In this case, the overlap with ϵ-optimal states may be used, with the operator given by
Table 0.1 summarizes the problems that are simulated in the numerical experiments described below, and some features that are relevant for Q-CHOP. All simulations were performed by numerically integrating equations of motion using the DOP853 method of scipy.integrate.solve_ivp, with both absolute and relative error tolerances set to 10⁻⁸.
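The stated integration procedure can be reproduced along the following lines (a sketch for a generic time-dependent Hamiltonian supplied as a callable; the problem-specific Hamiltonians are assumed to be built as in the earlier snippets):

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve(psi0, hamiltonian_at, T):
    """Integrate i d|psi>/dt = H(t)|psi> with DOP853 and tolerances of 1e-8."""
    def rhs(t, psi):
        return -1j * (hamiltonian_at(t) @ psi)

    sol = solve_ivp(rhs, (0.0, T), np.asarray(psi0, dtype=complex),
                    method="DOP853", atol=1e-8, rtol=1e-8)
    return sol.y[:, -1]
```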
The problem of finding a maximum independent set (MIS) is considered first. Given a graph G=(V, E), a set S⊂V is an independent set if none of its vertices are neighbors; that is, {v, w}∉E for all v, w∈S. A maximum independent set is an independent set containing the largest possible number of vertices. Candidate solutions S⊂V to MIS can be identified by bitstrings x={xv: v∈V}∈{0, 1}^|V| for which xv=1 if and only if v∈S. This identification of subsets with |V|-bit strings is referred to as the membership encoding. MIS can thus be defined by the following integer program:
The constraints of MIS are sometimes equivalently written as xv+xw≤1, but the formulation with equality constraints in Eq. (37) avoids the need for slack variables. The worst feasible solution to MIS is the empty set, identified by the all-0 bitstring x=(0, 0, . . . ). Many constrained optimization problems, including maximum clique, minimum vertex cover, and maximum set packing, are straightforwardly reducible to MIS with no significant overhead.
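The membership encoding and the roles of the best and worst feasible states can be illustrated by brute force on a hypothetical graph (classical reference code, not part of the quantum algorithm):

```python
from itertools import product

# Hypothetical graph on 4 vertices (a 4-cycle).
num_vertices = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

def is_independent(x):
    """A membership-encoded candidate x is feasible iff no edge has both endpoints selected."""
    return all(not (x[v] and x[w]) for v, w in edges)

feasible = [x for x in product((0, 1), repeat=num_vertices) if is_independent(x)]
best = max(feasible, key=sum)    # a maximum independent set
worst = min(feasible, key=sum)   # the empty set, i.e. the easy-to-prepare initial state
print(best, worst)
```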
Solving MIS with an adiabatic quantum algorithm was previously considered by Yu, Wilczek, and Wu. Specifically, solving MIS with Q-CHOP is equivalent to running the Yu, Wilczek, and Wu algorithm with λ=−4 and T=4N². However, the previously known algorithm provides little insight into the choice of λ. As a specific example, previous work did not discuss the fact that making |∂tφ| too large (|λ| too small) makes the algorithm for MIS fail due to population leakage outside of the feasible subspace. In contrast, the formalism of Q-CHOP makes clear from Eq. (5) that one should choose |λ|~N. The Q-CHOP formalism additionally clarifies that choosing ∂tφ>0 (λ<0) makes these algorithms track the state of maximal energy within the feasible subspace, making them more vulnerable to mixing with non-feasible states.
To evaluate Q-CHOP on MIS and to compare its performance to that of the SAA, random (Erdős-Rényi) graphs G(N, p) with N vertices and edge-creation probability p=0.3 were considered. A linear regression of ⟨Popt⟩ with respect to N (at T=2πN²) rejects the null hypothesis of zero slope for the SAA with a p-value of 6×10⁻⁴ (in favor of a negative slope), whereas for Q-CHOP, the null hypothesis is rejected with a p-value of 0.01 in favor of a positive slope. While the utility of a linear regression test is limited for a quantity that is bounded on [0, 1], it illustrates the statistical significance of the observed trend.
Another example is the problem of finding a directed minimum dominating set (DMDS). Given a directed graph G=(V, E), a vertex v∈V is dominated by a vertex u∈V if the directed edge (u, v)∈E. A dominating set of G is then a subset S⊂V for which every vertex of the graph is either contained in S, or dominated by a member of S. A minimum dominating set of a graph is a dominating set containing the fewest possible vertices. DMDS can be defined by the following integer program:
where the binary variable x denotes membership in the dominating set and
Q-CHOP is useful for problems beyond graph problems. For example, Q-CHOP may be used on the knapsack problem, a widely studied combinatorial optimization problem with applications in logistics and finance. The difficulty of solving the knapsack problem is the basis of knapsack cryptosystems, which are candidates for post-quantum cryptography, and closely-related lattice-based cryptosystems are under active consideration for the development of post-quantum cryptographic standards. A knapsack problem consists of a collection of items I to be placed in a “knapsack”. Each item i∈I has a weight wi and value vi, and the knapsack has weight capacity W. The goal is to choose a subset of items that maximizes value without exceeding the capacity of the knapsack. Again identifying subsets of I with the membership encoding, the knapsack problem can be defined by the following integer program:
The knapsack problem thereby introduces inequality constraints for Q-CHOP and the SAA, so simulating a knapsack problem instance requires a total Hilbert space dimension of 2^|I|(W+1). The worst feasible state of the knapsack problem is the all-0 state.
The task of generating classically hard knapsack problem instances was previously studied by Jooken, Leyman, and De Causmaecker, with an implementation of their instance generator provided at: https://github.com/JorikJooken/knapsackProblemInstances.
In addition to a number of items N and knapsack capacity W, an ensemble of random instances generated by the above method is defined by a number of groups g that determine item weight and value statistics, a fraction of items f that are assigned to the last group, an upper bound s on the value and weight of items in the last group, and a noise parameter ϵ. As an experiment, to produce random knapsack problem instances with N items, the above generator implementation with (W, g, f, s, ϵ)=(2N, ⌈N/2⌉, 1/N, N, 0.1) can be used. Any knapsack problem instances in which any item has weight greater than W are rejected.
Approximation ratios ⟨r⟩ and optimal state probabilities ⟨Popt⟩ from knapsack simulations with N=8 items and quantum runtime T=2πN² are shown in the accompanying figures. The data in the top panels here is a coarse-grained slice of the data at time t=T in the left-most panels of the corresponding figure. Differences (in ⟨r⟩ and ⟨Popt⟩) between Q-CHOP and the SAA 506 in the same simulations as the top panels are shown. Q-CHOP outperforms the SAA in every simulated knapsack problem instance with N=8 items. The accompanying figures also show ⟨r⟩ and ⟨Popt⟩ with progress t/T throughout the solution of individual problem instances. Even so, Q-CHOP achieves reliably high approximation ratios, and finds optimal states with higher probability than the SAA in every simulated instance. The figures further show the dependence of ⟨Popt⟩ on T. The performance gap between Q-CHOP and the SAA persists for different problem sizes N.
As another example, the problem of optimizing a combinatorial auction (CA), which is a commonly used mechanism for market-based resource allocation and procurement, is considered. A combinatorial auction consists of a collection I of items to be sold, and a set B of bids that offer to purchase baskets of items. Each item i∈I has multiplicity mi. Each bid b=(pb, qb)∈B offers to pay pb for the basket of items specified by the vector qb=(qbi: i∈I), where qbi is the quantity of item i in the basket requested by bid b. The goal of a combinatorial auction is for the seller to accept a subset of the bids that maximizes payments without exceeding the available inventory of items. Identifying the subset of accepted bids by the membership encoding, a combinatorial auction can be defined by the following integer program:
Due to the inequality constraints and associated slack variables, simulating a combinatorial auction problem instance requires a total Hilbert space dimension of 2^|B|Πi∈I(mi+1). The worst feasible solution of the combinatorial auction is the all-0 state.
As can be seen by comparing Eq. (39) to Eq. (40), the knapsack problem can be interpreted as a combinatorial auction consisting of one item with multiplicity W. A combinatorial auction of unique items (in which all multiplicities mi=1) can also be reduced to an instance of weighted MIS on a graph (V, E) in which nodes are identified with bids, V=B, and weighted by the payments pb in the objective function, as in Eq. (40). The edges E are obtained by inserting, for each item i∈I, a complete graph on the bids that request item i. In other words, E=∪i∈I Ei with Ei={{a, b}: qai=qbi=1}.
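The reduction to weighted MIS described here can be sketched as follows (hypothetical bids, unit multiplicities as in the simulated instances; the labels and payments are illustrative):

```python
from itertools import combinations

# Hypothetical combinatorial auction with unique items (all multiplicities m_i = 1).
bids = {                         # bid label -> (payment p_b, set of requested items)
    "b0": (3.0, {"a"}),
    "b1": (2.0, {"a", "b"}),
    "b2": (4.0, {"b", "c"}),
}

# Conflict graph for weighted MIS: nodes are bids weighted by payments, and an edge
# joins any two bids that request a common item (a clique per item, unioned over items).
node_weights = {label: payment for label, (payment, _) in bids.items()}
conflict_edges = {
    frozenset((u, v))
    for u, v in combinations(bids, 2)
    if bids[u][1] & bids[v][1]
}
print(node_weights, conflict_edges)  # b0-b1 and b1-b2 conflict; b0 and b2 are compatible
```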
Differences (in ⟨r⟩ and ⟨Popt⟩) between Q-CHOP and the SAA 706 in the same simulations as the top panels are shown. Q-CHOP outperforms the SAA in every simulated combinatorial auction problem instance with N=8 bids.
To generate combinatorial auction problem instances, the implementation of the Combinatorial Auction Test Suite provided by Gasse, Chételat, Ferroni, Charlin, and Lodi was used. Instances of combinatorial auctions with |I|=3 items and N bids for varying N were generated. All other parameters defining an ensemble of combinatorial auctions are set to the defaults as in the provided implementation. These defaults set all multiplicities mi=1, so the combinatorial auction instances are equivalent to instances of weighted MIS drawn from an appropriately defined ensemble of weighted graphs. For the sake of testing Q-CHOP and the SAA with inequality constraints, however, the formulation of the combinatorial auction in Eq. (40) was kept, which is simulated with auxiliary slack variables.
Here the growth of ⟨r⟩ and ⟨Popt⟩ tapers off with progress t/T rather than hitting a plateau. Despite exhibiting plateau behavior, Q-CHOP still achieves more reliably high approximation ratios, higher optimal-state probabilities, and a faster growth of these performance metrics with quantum runtime T. The performance gap between Q-CHOP and the SAA persists for different problem sizes N.
At 810, a quantum processor applies coupling and transformation operations to a plurality of quantum states according to a Hamiltonian specification. The quantum processor may include a plurality of quantum processing elements associated with respective quantum states. At 820, a digital computer receives information based at least in part on measurements of one or more quantum states associated with respective quantum processing elements of the quantum processor. At 830, the digital computer provides information for preparing one or more quantum states associated with respective quantum processing elements of the quantum processor based at least in part on the received information. At 830, a control module applies the coupling and transformation operations based on interaction with the digital computer for processing the constrained optimization problem. At 840, the processing includes preparing quantum states associated with a plurality of the quantum processing elements characterized by a summation of a constraint Hamiltonian representing a constraint of the constrained optimization problem and an objective Hamiltonian representing an objective function of the constrained optimization problem. At 850, the control module evolves a time-dependent Hamiltonian according to an evolution that includes forming a sum of a first term having the constraint Hamiltonian and a second term. The second term includes a product of (A) a time-dependent scalar function and (B) a time-dependent operator that is initially equal to the objective Hamiltonian and is evolved into a negative of the objective Hamiltonian.
Although
It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and, because certain changes may be made in carrying out the above method and in the construction(s) set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed examples are illustrative and not restrictive.
This application claims priority to U.S. Provisional Application No. 63/526,614, entitled Quantum Constrained Hamiltonian Optimization, filed Jul. 13, 2023, which is incorporated herein by reference for all purposes.