Efficient algorithms that sample from probability distributions are of broad practical importance in areas including statistical physics, optimization, and machine learning. Quantum systems are naturally suited for encoding sampling problems: according to the Born rule, a projective measurement of a quantum state |ψ⟩ in an orthonormal basis {|s⟩} yields a random sample drawn from the probability distribution p(s) = |⟨s|ψ⟩|². This observation underpins recent work aiming to demonstrate quantum advantage over classical computers by sampling from a probability distribution defined in terms of a quantum gate sequence or an optical network. While these efforts have led to impressive experimental demonstrations, thus far they have had limited implications for practically relevant problems.
Therefore, there is a continuing need for systems and methods for constructing and implementing useful quantum algorithms in quantum information science.
In an example embodiment, the present disclosure provides a method of sampling from a probability distribution, the method comprising receiving a description of a probability distribution, determining a first Hamiltonian having a ground state encoding the probability distribution, determining a second Hamiltonian, the second Hamiltonian being continuously transformable into the first Hamiltonian via a path through at least one quantum phase transition, initializing a quantum system according to a ground state of the second Hamiltonian, evolving the quantum system from the ground state of the second Hamiltonian to the ground state of the first Hamiltonian according to the path through the at least one quantum phase transition, and performing a measurement on the quantum system, thereby obtaining a sample from the probability distribution.
In another example embodiment, the present disclosure provides a method of configuring a quantum computer to sample from a probability distribution, the method comprising receiving a description of a probability distribution, determining a first Hamiltonian having a ground state encoding the probability distribution, determining a second Hamiltonian, the second Hamiltonian being continuously transformable into the first Hamiltonian via a path through at least one quantum phase transition, and providing instructions to a quantum computer to: initialize a quantum system according to a ground state of the second Hamiltonian, and evolve the quantum system from the ground state of the second Hamiltonian to the ground state of the first Hamiltonian according to the path through the at least one quantum phase transition. The method further includes receiving from the quantum computer a measurement on the quantum system, thereby obtaining a sample from the probability distribution.
In yet another example embodiment, the present disclosure provides a computer program product for configuring a quantum computer to sample from a probability distribution, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: receiving a description of a probability distribution, determining a first Hamiltonian having a ground state encoding the probability distribution, determining a second Hamiltonian, the second Hamiltonian being continuously transformable into the first Hamiltonian via a path through at least one quantum phase transition, and providing instructions to a quantum computer to: initialize a quantum system according to a ground state of the second Hamiltonian, and evolve the quantum system from the ground state of the second Hamiltonian to the ground state of the first Hamiltonian according to the path through the at least one quantum phase transition. The method further includes receiving from the quantum computer a measurement on the quantum system, thereby obtaining a sample from the probability distribution.
In still another example embodiment, the present disclosure provides a system comprising: a quantum computer, and a computing node configured to: receive a description of a probability distribution, determine a first Hamiltonian having a ground state encoding the probability distribution, determine a second Hamiltonian, the second Hamiltonian being continuously transformable into the first Hamiltonian via a path through at least one quantum phase transition, and provide instructions to the quantum computer to: initialize a quantum system according to a ground state of the second Hamiltonian, and evolve the quantum system from the ground state of the second Hamiltonian to the ground state of the first Hamiltonian according to the path through the at least one quantum phase transition. The computing node is further configured to receive from the quantum computer a measurement on the quantum system, thereby obtaining a sample from the probability distribution.
In various embodiments, determining the first Hamiltonian comprises deriving the first Hamiltonian from a projected entangled pair state (PEPS) representation of its ground state.
In various embodiments, the description of the probability distribution comprises a description of a Markov chain whose stationary distribution is the probability distribution. In various embodiments, the Markov chain satisfies detailed balance. In various embodiments, determining the first Hamiltonian comprises constructing the first Hamiltonian from the Markov chain. In various embodiments, the description of the Markov chain comprises a generator matrix. In various embodiments, the Markov chain comprises a single-site update.
In various embodiments, the ground state of the second Hamiltonian is a product state.
In various embodiments, evolving is adiabatic.
In various embodiments, the quantum system comprises a plurality of confined neutral atoms. In various embodiments, each of the plurality of confined neutral atoms is configured to blockade at least one other of the plurality of confined neutral atoms when excited into a Rydberg state. In various embodiments, initializing the quantum system comprises exciting each of a subset of the plurality of confined neutral atoms according to the ground state of the second Hamiltonian. In various embodiments, evolving comprises directing a time-varying beam of coherent electromagnetic radiation to each of the plurality of confined neutral atoms. In various embodiments, the plurality of confined neutral atoms is confined by optical tweezers.
In various embodiments, the probability distribution comprises a classical Gibbs distribution. In various embodiments, the path is distinct from a path along a set of first Hamiltonians associated with the Gibbs distribution at different temperatures. In various embodiments, the Gibbs distribution is a Gibbs distribution of weighted independent sets of a graph. In various embodiments, the graph is a unit disk graph. In various embodiments, the graph is a chain graph. In various embodiments, the graph is a star graph with two vertices per branch.
In various embodiments, the Gibbs distribution is a Gibbs distribution of an Ising model. In various embodiments, the Gibbs distribution is a zero-temperature Gibbs distribution and the Ising model is a ferromagnetic 1D Ising model. In various embodiments, the Gibbs distribution is a Gibbs distribution of a classical Hamiltonian encoding an unstructured search problem. In various embodiments, the Gibbs distribution is a zero-temperature Gibbs distribution and the unstructured search problem has a single solution.
In various embodiments, performing the measurement comprises imaging the plurality of confined neutral atoms. In various embodiments, imaging the plurality of confined neutral atoms comprises quantum gas microscopy.
The systems and methods described above have many advantages, such as providing quantum speedup for sampling from a probability distribution.
Various objectives, features, and advantages of the disclosed subject matter can be more fully appreciated with reference to the following detailed description of the disclosed subject matter when considered in connection with the following drawings, in which like reference numerals identify like elements.
Constructing and implementing useful quantum algorithms is one of the central challenges in quantum information science. Efficient sampling from a probability distribution is an important computational problem with applications in statistical physics, Monte Carlo and optimization algorithms, and machine learning. A family of quantum algorithms is introduced herein that provides unbiased samples drawn from a probability distribution by preparing a quantum state that encodes the entire probability distribution. This approach is exemplified with several specific examples, including sampling from a Gibbs distribution of the one-dimensional Ising model, and sampling from a Gibbs distribution of weighted independent sets. Since approximating the size of the maximum independent set on a random graph is NP-hard (where NP denotes nondeterministic polynomial time), the latter case encompasses computationally hard problems, which are potentially relevant for applications such as computer vision, biochemistry, and social networks. In some embodiments, this approach is shown to lead to a speedup over a classical Markov chain algorithm for several examples, including sampling from the Gibbs distribution associated with the one-dimensional Ising model and sampling from the Gibbs distribution of weighted independent sets of two different graphs. In some embodiments, a realistic implementation of sampling from independent sets based on Rydberg atom arrays is also described herein. Without being bound by theory, this approach connects computational complexity with phase transitions, providing a physical interpretation of quantum speedup, and opens the door to exploring potentially useful sampling algorithms using near-term quantum devices.
In some embodiments, the key steps for a method 100 of sampling from a probability distribution are shown in the accompanying drawings. First, a description of a probability distribution p(s) is received 110. A sample is obtained by preparing the quantum state
|ψ⟩ = Σ_s √(p(s)) |s⟩, (1)
followed by a projective measurement in the {|s⟩} basis. The state |ψ⟩ is said to encode the probability distribution p(s). The measurement projects |ψ⟩ into |s⟩ with probability p(s) = |⟨s|ψ⟩|². Given the definition of this quantum state, a first Hamiltonian Hq is determined 120 such that |ψ⟩ is the ground state of Hq. The first Hamiltonian Hq is referred to herein as the parent Hamiltonian of |ψ⟩. Possible methods for determining a suitable parent Hamiltonian are described below. Next, a quantum system is initialized 130 in a state |φ⟩, which is the ground state of a second Hamiltonian H0. In some embodiments, the ground state of the second Hamiltonian H0 can be a product state. The second Hamiltonian H0 is assumed to be transformable into the first Hamiltonian Hq via a continuous set of Hamiltonians, subsequently referred to herein as a continuous path. Moreover, the path is assumed to pass through at least one quantum phase transition. The state |ψ⟩ is obtained by evolving 140 the quantum system under a time-dependent Hamiltonian that follows the path from H0 to Hq. In some embodiments, the evolution can be an adiabatic evolution. The method 100 then includes performing 150 a measurement on the quantum system, thereby obtaining a sample from the probability distribution. Repeating the initializing 130, evolving 140, and measuring 150 yields further samples, which together reproduce the probability distribution p(s).
Without being bound by theory, the requirement that the path crosses a phase transition arises from the observation that this can give rise to a speedup of the quantum algorithms described herein when compared to classical algorithms based on the Markov chains detailed below. Without being bound by theory, in two of the examples presented below, the origin of the speedup can be understood as resulting from ballistic (i.e., linear in time) propagation of domain walls, shown in
In some embodiments, two distinct constructions of the parent Hamiltonian are described in what follows. A Hamiltonian of a quantum system can represent an interaction or a plurality of interactions for the quantum system. A Hamiltonian is an operator acting on the quantum system. Eigenvalues of a Hamiltonian correspond to an energy spectrum of the quantum system. The ground state of a Hamiltonian is the quantum state of the quantum system with minimal energy. The ground state of a Hamiltonian can be a quantum state at zero temperature.
The first construction of the parent Hamiltonian employed in determining the first Hamiltonian follows the prescription in Verstraete et al. for constructing the first Hamiltonian from the Markov chain. See Verstraete, F., Wolf, M. M., Perez-Garcia, D., and Cirac, J. I., Criticality, the Area Law, and the Computational Power of Projected Entangled Pair States, Phys. Rev. Lett. 96, 220601 (2006), which is hereby incorporated by reference in its entirety. One first defines a Markov chain that samples from the desired probability distribution p(s), for which the ratio p(s)/p(s′) is known for all pairs s and s′ of elements of the event space. The Markov chain can be specified (i.e., described) by a generator matrix M, where a probability distribution q_t(s) at time t is updated according to q_{t+1}(s) = Σ_{s′} q_t(s′) M(s′, s). The Markov chain is assumed to satisfy the detailed balance condition p(s)M(s, s′) = p(s′)M(s′, s). In certain embodiments, the Markov chain can comprise a single-site update. In some embodiments, a Markov chain can be constructed using, for example, the Metropolis-Hastings algorithm. Then, by construction, the probability distribution p(s) is a stationary distribution of the Markov chain and constitutes a left eigenvector of M with an eigenvalue of unity. The detailed balance property implies that the matrix Hq defined by the matrix elements
Hq(s, s′) = n[δ(s, s′) − √(p(s)/p(s′)) M(s, s′)] (2)

is real and symmetric. Here, n is the size of the quantum system in which the computation is carried out. The factor of n ensures that the spectrum of the parent Hamiltonian is extensive, as is required for any physical Hamiltonian. In some embodiments, n can be the number of spins. The matrix Hq can be viewed as a quantum Hamiltonian with the state |ψ⟩ being its zero-energy eigenstate. The state |ψ⟩ is a ground state because the spectrum of Hq is bounded from below by 0: for every eigenvalue n(1 − λ) of Hq, there exists an eigenvalue λ of M, where λ ≤ 1 because M is a stochastic matrix. Thus, Hq is a valid choice of parent Hamiltonian. If the Markov chain is irreducible and aperiodic, the Perron-Frobenius theorem further guarantees that |ψ⟩ is the unique ground state of Hq. Due to the one-to-one correspondence between the spectra of the generator matrix M and the parent Hamiltonian Hq, the spectral gap between the ground state and first excited state of Hq implies a bound on the mixing time of the Markov chain described by M. To account for the natural parallelization during the evolution in a quantum system, the mixing time of the Markov chain is divided by n for a fair comparison, denoting the result by tm. The correspondence between the spectra of M and Hq establishes the bound tm ≥ 1/Δ − 1/n, where Δ is the gap between the ground state and first excited state of the parent Hamiltonian.
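As a concrete illustration, this construction can be sketched numerically for a small system. The following is a non-limiting sketch: the distribution, system size, and function names are illustrative, and the symmetrized matrix Hq(s, s′) = n[δ(s, s′) − √(p(s)/p(s′)) M(s, s′)] is assumed, which is real and symmetric under detailed balance.

```python
import numpy as np

def metropolis_generator(p):
    """Row-stochastic generator matrix M for single-site Metropolis-Hastings
    updates of a distribution p over n-bit configurations; M satisfies
    detailed balance with respect to p."""
    N = len(p)
    n = int(np.log2(N))  # number of spins
    M = np.zeros((N, N))
    for s in range(N):
        for i in range(n):
            t = s ^ (1 << i)  # flip spin i
            # propose spin i with probability 1/n, accept with min(1, p(t)/p(s))
            M[s, t] = min(1.0, p[t] / p[s]) / n
        M[s, s] = 1.0 - M[s].sum()
    return M

def parent_hamiltonian(p, M):
    """Hq(s, s') = n [delta(s, s') - sqrt(p(s)/p(s')) M(s, s')]."""
    n = int(np.log2(len(p)))
    D = np.sqrt(p)
    return n * (np.eye(len(p)) - (D[:, None] / D[None, :]) * M)

rng = np.random.default_rng(0)
p = rng.random(8)
p /= p.sum()                            # arbitrary distribution over 3 spins
M = metropolis_generator(p)
Hq = parent_hamiltonian(p, M)
assert np.allclose(Hq, Hq.T)            # real and symmetric (detailed balance)
assert np.allclose(Hq @ np.sqrt(p), 0)  # sqrt(p) is a zero-energy eigenstate
assert np.linalg.eigvalsh(Hq).min() > -1e-9  # spectrum bounded below by 0
```

The assertions verify the properties stated above: Hq is real and symmetric, the vector of amplitudes √(p(s)) is a zero-energy eigenstate, and the spectrum is bounded from below by 0.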
The second construction of the parent Hamiltonian employed in determining the first Hamiltonian is based on a general construction described in Perez-Garcia et al., which can be applied whenever the state |ψ⟩ has a known representation as a projected entangled pair state (PEPS). See Perez-Garcia, D., Verstraete, F., Cirac, J. I., and Wolf, M. M., PEPS as unique ground states of local Hamiltonians, Quant. Inf. Comp. 8, 0650 (2008), which is hereby incorporated by reference in its entirety. In some embodiments, the probability distribution p(s) can be the Gibbs distribution of a classical spin Hamiltonian with one- and two-body terms. Such a classical Hamiltonian can be written as Hc = −Σ_i h_i s_i − Σ_{i,j} J_{ij} s_i s_j, where h_i and J_{ij} are real parameters, s_i ∈ {+1, −1} are classical spin variables, and each sum runs from 1 to n, with n denoting the number of spins. The corresponding Gibbs distribution is given by p(s) = e^(−βHc(s))/Z, where β is the inverse temperature and Z is the partition function.
Following the prescription in Perez-Garcia et al., a PEPS representation for this state can be constructed, thereby deriving the first Hamiltonian from a PEPS representation of its ground state. Given the PEPS representation, one can straightforwardly compute the reduced density operator ρi of a finite region Ri surrounding a given site i. Denoting the projector onto the kernel of ρi by Pi, the parent Hamiltonian can be written as Hq = Σi Pi. By construction, the parent Hamiltonian is positive semi-definite and Hq|ψ⟩ = 0, which implies that |ψ⟩ is a ground state of Hq.
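The kernel-projector construction can be checked numerically without full PEPS machinery. The sketch below is illustrative and non-limiting: it computes reduced density operators of two-site regions of an arbitrary small state (rather than working with an explicit PEPS representation), builds Hq = Σi Pi, and verifies that Hq is positive semi-definite with Hq|ψ⟩ = 0.

```python
import numpy as np

def kernel_projector(rho, tol=1e-10):
    """Projector onto the kernel of a Hermitian operator rho."""
    w, v = np.linalg.eigh(rho)
    null = v[:, w < tol]
    return null @ null.conj().T

def parent_hamiltonian(psi, n, k=2):
    """Hq = sum_i P_i, where P_i projects onto the kernel of the reduced
    density operator of the k contiguous sites starting at site i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    T = psi.reshape([2] * n)
    for i in range(n - k + 1):
        region = list(range(i, i + k))
        # reduced density operator of the region
        A = np.moveaxis(T, region, range(k)).reshape(2**k, -1)
        rho = A @ A.conj().T
        P = kernel_projector(rho)
        # embed P_i into the full Hilbert space: I (x) P_i (x) I
        H += np.kron(np.kron(np.eye(2**i), P), np.eye(2**(n - i - k)))
    return H

# example state: |psi> = (|000> + |111>)/sqrt(2)
n = 3
psi = np.zeros(2**n)
psi[0] = psi[-1] = 1 / np.sqrt(2)
Hq = parent_hamiltonian(psi, n)
assert np.allclose(Hq @ psi, 0)              # Hq |psi> = 0
assert np.linalg.eigvalsh(Hq).min() > -1e-9  # positive semi-definite
```

Note that for a generic state this construction guarantees only that |ψ⟩ is a ground state; uniqueness requires the additional conditions discussed in Perez-Garcia et al.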
While embodiments described herein employ the above two methods of constructing the parent Hamiltonian, a person of skill in the art would understand, based on the present disclosure, that other constructions of the parent Hamiltonian can be used in the design of quantum algorithms described herein. The physical realization of the parent Hamiltonian is not restricted to the use of Rydberg states of neutral atoms described below. In general, any local parent Hamiltonian can be efficiently simulated as a quantum circuit. A person of skill in the art would understand how to implement such a circuit on a given platform, including, but not limited to, superconducting qubits, trapped ion qubits, and neutral atom qubits. A direct, physical realization of the parent Hamiltonian, such as the one described in this disclosure, has the advantage over Hamiltonian simulation that it can be implemented with minimal overhead, rendering it suitable for currently available quantum devices. A person of skill in the art would understand that other Hamiltonians can be realized in a given physical platform, with each platform supporting a particular set of naturally realizable Hamiltonians.
In some embodiments, the probability distribution can be a Gibbs distribution of a classical Hamiltonian. Every configuration s of the classical system is associated with a classical energy Hc(s). The corresponding Gibbs distribution is given by p(s) = e^(−βHc(s))/Z, where β is the inverse temperature and Z = Σ_s e^(−βHc(s)) is the partition function.
The examples described below demonstrate that the method described herein of sampling from a classical probability distribution using a quantum computer can give rise to speedup over classical Markov chains. It was possible to rigorously establish this improvement as the examples were sufficiently simple to be analytically and numerically tractable. However, more challenging problems are of greater practical significance. In some embodiments, such problems include sampling from the Gibbs distribution of a spin glass with a large number of spins or sampling from independent sets on a large, disordered graph. While it may not be possible to compute the exact phase diagram for such instances, it may nevertheless be possible to establish that the initial state |φ and the final state |ψ belong to distinct quantum phases. In some embodiments, the approximate knowledge of the phase diagram could be sufficient to identify candidate paths to prepare |ψ. In some embodiments, the paths could be further optimized using a hybrid classical-quantum optimization loop, where a parametrized path is realized on a quantum computer and its parameters are optimized on a classical computer.
To prepare the state |ψ⟩, a quantum system is prepared in the initial state |φ⟩, which is subjected to the time-dependent Hamiltonian H(t). The state evolves according to the Schrödinger equation

iℏ (d/dt)|φ(t)⟩ = H(t)|φ(t)⟩, (3)
with the initial condition |φ(0)⟩ = |φ⟩. The time-dependent Hamiltonian follows a continuous path from H(0) = H0 to H(ttot) = Hq, where ttot is the total time of the evolution and Hq is the parent Hamiltonian of |ψ⟩. The continuous path is chosen such that |φ(ttot)⟩ has a large overlap with the desired state |ψ⟩. In some embodiments, the overlap can be quantified by the fidelity F = |⟨φ(ttot)|ψ⟩|². For the purpose of sampling, it is not necessary that |φ(ttot)⟩ be exactly equal to |ψ⟩; it is sufficient that the fidelity is close to one. The total variation distance d = ∥p − q∥ between the desired probability distribution p(s) and the prepared distribution q(s) = |⟨s|φ(ttot)⟩|² is bounded by d ≤ √(1 − F). In the examples presented in this disclosure, the state preparation time will be characterized by the time ta that is required to achieve a fidelity exceeding 0.999. A person of skill in the art would understand that other fidelity thresholds can also be used.
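The bound d ≤ √(1 − F) can be checked numerically for arbitrary pure states. The sketch below is illustrative (the states are randomly drawn, and the 1/2-normalized total variation convention is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(N):
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return v / np.linalg.norm(v)

N = 64
psi = random_state(N)                 # desired state encoding p(s)
phi = random_state(N)                 # prepared state |phi(t_tot)>
p = np.abs(psi)**2                    # desired distribution
q = np.abs(phi)**2                    # prepared distribution
F = np.abs(np.vdot(phi, psi))**2      # fidelity
d = 0.5 * np.abs(p - q).sum()         # total variation distance (1/2-normalized)
assert d <= np.sqrt(1 - F) + 1e-12    # d <= sqrt(1 - F)
```

The inequality follows because the total variation distance between the two measurement distributions is at most the trace distance between the pure states, which equals √(1 − F).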
In some embodiments, the time evolution can be adiabatic. Without being bound by theory, according to the adiabatic theorem, it holds that

lim_{ttot→∞} |φ(ttot)⟩ = e^{iα}|ψ⟩, (4)

where α is an irrelevant global phase, provided the gap of H(t) does not vanish anywhere along the path. Under this assumption, it is always possible to prepare a close approximation to |ψ⟩ (i.e., reach a high fidelity) by choosing ttot sufficiently large. The required value of ttot depends on the choice of the path H(t) and, in particular, the rate with which the Hamiltonian changes. In some embodiments, the rate of change can be chosen such that

Σ_{n>0} |⟨n(t)|∂t H(t)|0(t)⟩|² / (En(t) − E0(t))⁴ = ε², (5)
for some constant ε that depends on ttot. Here, |n(t)⟩ is an instantaneous eigenstate of H(t) with eigenenergy En(t), that is, H(t)|n(t)⟩ = En(t)|n(t)⟩. The states are ordered such that En(t) ≥ Em(t) when n > m, such that n = 0 corresponds to the ground state. Without being bound by theory, this particular choice of rate of change is motivated by the goal of minimizing nonadiabatic transitions, which depend on the spectral gap [denominator on the left-hand side of Eq. (5) above] as well as the rate of change of the eigenstates [matrix element squared on the left-hand side of Eq. (5) above].
While embodiments of the present disclosure discuss adiabatic time evolution, a person of skill in the art would understand that the time evolution does not need to be completely adiabatic throughout. In some embodiments, it is advantageous for the quantum state to undergo nonadiabatic transitions to excited states before returning to the ground state along the continuous path. In the example below of weighted independent sets on the star graph, the time evolution of the quantum state includes such nonadiabatic transitions.
Sampling from the 1D Ising Model
The developed quantum algorithm is now illustrated by considering a ferromagnetic Ising model composed of n spins in one dimension (1D). In some embodiments, the classical Hamiltonian is given by Hc = −Σ_{i=1}^n σ_i^z σ_{i+1}^z with periodic boundary conditions, letting σ_{n+1}^z = σ_1^z and σ_0^z = σ_n^z. Glauber dynamics are chosen for the Markov chain, in which at each time step, a spin is selected at random and its value is drawn from a thermal distribution with all other spins fixed. Up to a constant, the corresponding parent Hamiltonian determined using Eq. (2) takes the form

Hq(β) = Σ_{i=1}^n [h(β)σ_i^x + J1(β)σ_i^z σ_{i+1}^z − J2(β)σ_{i−1}^z σ_i^x σ_{i+1}^z], (6)

where 4h(β) = 1 + 1/cosh(2β), 2J1(β) = tanh(2β), and 4J2(β) = 1 − 1/cosh(2β) (see below for details). At infinite temperature (β = 0), J1 = J2 = 0 and h = ½, and the ground state is a paramagnet aligned along the x-direction, which corresponds to an equal superposition of all classical spin configurations. When the temperature is lowered, the parameters move along a segment of a parabola in the two-dimensional parameter space (J1/h, J2/h) shown by the curve (ii) 220 in
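A brief numerical check of these coefficients (illustrative only) confirms the stated limits and the parabolic path: at β = 0 one has h = ½ and J1 = J2 = 0, as β → ∞ the ratios approach (J1/h, J2/h) = (2, 1), and along the entire path J2/h = (J1/h)²/4.

```python
import numpy as np

def couplings(beta):
    """Coefficients of Eq. (6): 4h = 1 + 1/cosh(2 beta),
    2J1 = tanh(2 beta), 4J2 = 1 - 1/cosh(2 beta)."""
    h = (1 + 1 / np.cosh(2 * beta)) / 4
    J1 = np.tanh(2 * beta) / 2
    J2 = (1 - 1 / np.cosh(2 * beta)) / 4
    return h, J1, J2

# infinite temperature: paramagnet with h = 1/2 and J1 = J2 = 0
h, J1, J2 = couplings(0.0)
assert np.isclose(h, 0.5) and np.isclose(J1, 0.0) and np.isclose(J2, 0.0)

# zero-temperature limit: (J1/h, J2/h) -> (2, 1)
h, J1, J2 = couplings(50.0)
assert np.isclose(J1 / h, 2.0) and np.isclose(J2 / h, 1.0)

# the path is a segment of the parabola J2/h = (J1/h)^2 / 4
for beta in (0.1, 0.4, 0.7, 1.3):
    h, J1, J2 = couplings(beta)
    assert np.isclose(J2 / h, (J1 / h)**2 / 4)
```

The parabola relation follows directly from tanh²(2β) = 1 − 1/cosh²(2β).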
In some embodiments, the quantum phase diagram of the parent Hamiltonian for arbitrary values of h, J1, and J2 is obtained by performing a Jordan-Wigner transformation that maps Eq. (6) onto a free-fermion model (see below). The distinct quantum phases are displayed in
To prepare the state |ψ(β)⟩ for the desired inverse temperature β, one may start from the ground state of H0 = Hq(0) before smoothly varying the parameters (h, J1, J2) to bring the Hamiltonian into its final form Hq(β). States corresponding to finite values of β can be connected to |ψ(0)⟩ by a path (ii) 220 that lies fully in the paramagnetic phase. Both adiabatic state preparation and the Markov chain are efficient in this case. Indeed, it has been shown previously that there exists a general quantum algorithm with run time ∼log n for gapped parent Hamiltonians, which is identical to the Markov mixing time tm for the Ising chain.
Sampling at zero temperature is more challenging, with the mixing time of the Markov chain bounded by tm ≥ n² (see below). For the quantum algorithm, the four different paths 210, 220, 230, and 240 shown in
Without being bound by theory, these scalings follow from the nature of the phase transitions. The dynamical critical exponent at the tricritical point 250 is z = 2, meaning that the gap closes with system size as Δ ∼ 1/n², which is consistent with the time required along path (ii) 220. As shown below, the dynamical critical exponent at all phase transitions away from the tricritical point 250 is z = 1, and the gap closes as Δ ∼ 1/n. Therefore, the paramagnetic-to-ferromagnetic phase transition can be crossed adiabatically in a time proportional to n, only limited by ballistic propagation of domain walls shown in
While this example illustrates a mechanism for quantum speedup, sampling from the Gibbs distribution of large Ising chains is hard only at zero temperature. Since sampling at zero temperature is equivalent to optimization, there may exist more suitable algorithms to solve the problem. The approach described herein is, however, not limited to such special cases, as illustrated below by a Gibbs distribution for which the Markov chain mixes slowly even at finite temperature. In addition, note that while the parent Hamiltonian, Eq. (2) above, does not have a simple physical realization, it is the sum of local terms and can thus be efficiently realized on a quantum computer by means of Hamiltonian simulation. However, for near-term applications on small quantum devices, it is desirable to identify problems for which the parent Hamiltonian has a natural implementation. Such an example is provided below in the context of sampling from a weighted independent set problem of unit disk graphs.
An independent set of a graph is any subset of vertices in which no two vertices share an edge. An example of an independent set (black vertices 310) is shown in
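For small graphs, independent sets and their Gibbs weights can be enumerated directly. The sketch below is illustrative and non-limiting; it assumes a uniformly weighted (hard-core) Gibbs distribution p(S) ∝ e^(β|S|) and uses the star graph with b = 2 branches and two vertices per branch discussed below, which coincides with a five-vertex chain graph:

```python
import numpy as np
from itertools import combinations

def independent_sets(n, edges):
    """Enumerate all independent sets (as vertex tuples) of a graph."""
    sets = []
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                sets.append(subset)
    return sets

# star graph with b = 2 branches, two vertices per branch:
# center 0 and branches 0-1-2 and 0-3-4
edges = [(0, 1), (1, 2), (0, 3), (3, 4)]
sets = independent_sets(5, edges)
assert len(sets) == 13                 # this graph is also a 5-vertex chain
assert max(len(s) for s in sets) == 3  # maximum independent set {0, 2, 4}

# hard-core Gibbs distribution p(S) proportional to exp(beta |S|)
beta = 1.0
w = np.array([np.exp(beta * len(s)) for s in sets])
p = w / w.sum()
assert np.isclose(p.sum(), 1.0)
assert p.argmax() in {i for i, s in enumerate(sets) if len(s) == 3}
```

At large β the distribution concentrates on the maximum independent set, illustrating why zero-temperature sampling encompasses hard optimization problems.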
To construct a quantum algorithm, in some embodiments, each vertex is associated with a spin variable σ_i^z = 2n_i − 1. From Eq. (2), single spin flips with the Metropolis-Hastings update rule yield the parent Hamiltonian

Hq(β) = Σ_i P_i[V_{e,i}(β) n_i + V_{g,i}(β)(1 − n_i) − Ω_i(β) σ_i^x], (7)

where one only considers the subspace spanned by the independent sets (see below). In Eq. (7), P_i = Π_{j∈N(i)} (1 − n_j) projects onto states in which all neighbors of vertex i are unoccupied, where N(i) denotes the set of vertices adjacent to vertex i.
The projectors Pi involve up to d-body terms, where d is the degree of the graph. Nevertheless, in some embodiments, they can be implemented, e.g., using programmable atom arrays with minimal experimental overhead for certain classes of graphs. In the case of so-called unit disk graphs, shown in
The set of Hamiltonians Hq(β) is indicated by the curve (i) 320 in
A quantum speedup is obtained by choosing a different path (ii) 330, shown in
Sampling from Hard Graphs
A graph is considered next as an example for which it is hard to sample from independent sets even at nonzero temperature. The graph takes the shape of a star with b branches and two vertices per branch, as shown in
The Markov chain on this graph has severe kinetic constraints, because changing the central vertex from unoccupied to occupied requires all neighboring vertices to be unoccupied. Assuming that each individual branch is in thermal equilibrium, the probability of accepting such a move is given by p0→1 = [(1 + e^β)/(1 + 2e^β)]^b. The reverse process is energetically suppressed with an acceptance probability of p1→0 = e^(−bβ). The central vertex can thus become trapped in the thermodynamically unfavorable configuration, resulting in a mixing time that grows exponentially with b at any finite temperature. When starting from a random configuration, the Markov chain will nevertheless sample efficiently at high temperature because the probability of the central vertex being initially occupied is exponentially small. By the same argument, the Markov chain almost certainly starts in the wrong configuration in the low-temperature phase, and convergence to the Gibbs distribution requires a time tm ≥ 1/p0→1.
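These acceptance probabilities can be evaluated directly. The short sketch below (with illustrative values of β and b) confirms that the lower bound tm ≥ 1/p0→1 grows exponentially with b:

```python
import numpy as np

def p_accept(beta, b):
    """Acceptance probabilities for flipping the central vertex of the star
    graph (b branches, two vertices per branch), with each branch assumed
    to be in thermal equilibrium."""
    p01 = ((1 + np.exp(beta)) / (1 + 2 * np.exp(beta)))**b  # unoccupied -> occupied
    p10 = np.exp(-b * beta)                                 # occupied -> unoccupied
    return p01, p10

beta = 1.0
p01_a, _ = p_accept(beta, 10)
p01_b, _ = p_accept(beta, 20)
# tm >= 1/p01 grows exponentially with b: doubling b squares the bound
assert np.isclose(1 / p01_b, (1 / p01_a)**2)
assert 1 / p01_b > 1 / p01_a > 1
```

Since p0→1 is a fixed ratio raised to the power b, doubling the number of branches squares the mixing-time bound, which is the exponential trapping described above.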
The corresponding quantum dynamics are captured by a two-state model formed by |ψ0(β)⟩ and |ψ1(β)⟩, which encode the Gibbs distribution at inverse temperature β with the central vertex fixed to be respectively unoccupied or occupied (see
The unstructured search problem was pivotal in the development of quantum algorithms. Grover's algorithm gave an early example of a provable quantum speedup, and it remains an essential subroutine in many proposed quantum algorithms. Moreover, the unstructured search problem played a crucial role in the conception of adiabatic quantum computing. It is shown below that, when applied to the unstructured search problem, in some embodiments, the formalism described herein recovers the adiabatic quantum search algorithm (AQS), along with its quadratic speedup over any classical algorithm. While the nonlocality of the resulting parent Hamiltonian renders it challenging to implement in practice, the result underlines the power of this approach in enabling quantum speedup.
Consider the problem of identifying a single marked configuration m (i.e., a single solution) in a space of a total of N elements. In some embodiments, to connect this search problem to a sampling problem, the energy −1 is assigned to the marked configuration, while all other states have energy 0. This is summarized by the classical Hamiltonian
Hc = −|m⟩⟨m|. (8)
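As a minimal illustration of this classical Hamiltonian (with an arbitrary, illustrative search-space size and marked element), the associated Gibbs distribution interpolates between the uniform distribution at β = 0 and a distribution concentrated on the marked configuration as β → ∞:

```python
import numpy as np

N, m = 16, 7       # illustrative search-space size and marked element
E = np.zeros(N)
E[m] = -1.0        # diagonal of Hc = -|m><m|, Eq. (8)

def gibbs(beta):
    w = np.exp(-beta * E)
    return w / w.sum()

assert np.allclose(gibbs(0.0), 1 / N)    # infinite temperature: uniform
p = gibbs(50.0)                          # near zero temperature
assert p.argmax() == m and p[m] > 0.999  # sampling solves the search problem
```

This makes explicit why sampling from the zero-temperature Gibbs distribution of Hc is equivalent to solving the unstructured search problem.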
Solving the search problem can now be formulated as sampling from the Gibbs distribution associated with Hc at zero temperature. Given the lack of structure of the problem, a natural choice for the Markov chain is to propose any configuration with equal probability 1/N. If the update is accepted according to the Metropolis-Hastings rule, the parent Hamiltonian according to Eq. (2) takes the form
Hq(β) = I − A(β)(|m⟩⟨m| + |m⊥⟩⟨m⊥|) − V0(β)|ψ0⟩⟨ψ0| − Vm(β)|m⟩⟨m|, (9)
where A(β), V0(β), and Vm(β) are temperature-dependent coefficients determined by the Metropolis-Hastings update rule (see below).
The states |ψ0⟩ = Σ_i |i⟩/√N and |m⊥⟩ = Σ_{i≠m} |i⟩/√(N − 1) are equal superpositions of all states in the search space with and without the marked state |m⟩. For conciseness, the factor n = log N that would render the parent Hamiltonian extensive has been omitted from Eq. (9), as it represents only a logarithmic correction to the computation time.
Since |ψ0⟩ is contained in the subspace spanned by {|m⟩, |m⊥⟩}, the Hamiltonian acts trivially on the orthogonal subspace. In fact, all nontrivial dynamics arise from the last two terms in Eq. (9).
The path determined by the set of Hamiltonians Hq(β) starts at (V0, Vm) = (1, 0) when β = 0 and ends at (0, 1/N) as β → ∞. The gap at zero temperature is equal to 1/N, which allows one to bound the mixing time by tm ≥ N (up to logarithmic corrections). This bound is expected, as any classical algorithm must check on average half the configurations to solve the unstructured search problem. Note that adiabatic state preparation along the path determined by Hq(β) leads to the same time complexity. One assumes that the ground state at β = 0, given by |ψ0⟩, can be readily prepared. Adiabatic state preparation experiences a bottleneck close to the quantum phase transition, where V0 and Vm are on the order of 1/√N as indicated by the inset in
The above description suggests a speedup if one chooses a path that crosses the phase transition at a point where V0 and Vm do not depend on N. One such path 530 is shown in
In the above examples, the rate of change of the Hamiltonian parameters was chosen using Eq. (5) with the goal of optimally satisfying the adiabatic condition at every point along a given path. If the path is parametrized by a general set of parameters λμ, Eq. (5) can be written as
where
with the same notation for the energy eigenstates and the corresponding eigenenergies as in Eq. (5). Equations (13) and (14) ensure that the parameters change slowly when the gap is small, while simultaneously taking into account the matrix elements
which determine the coupling strength of nonadiabatic processes to a particular excited state |n⟩. The total evolution time can be adjusted by varying ε and is given by
where the integral runs along the path of interest.
It is shown below that for the cases studied here, a constant fidelity close to unity is reached at a small value of ε that is approximately independent of the system size. Hence, the parametric dependence of the adiabatic state preparation time on the system size follows from the integral in Eq. (15). Indeed, one finds that the scalings along the various paths 210, 220, 230, and 240 for the 1D Ising model can be analytically established from the singular properties of g_{μν} at the tricritical point 250. A similar numerical analysis is provided below for both of the independent set problems.
Implementation with Rydberg Atoms
For unit disk graphs, the parent Hamiltonian for the weighted independent set problem, Eq. (7), can be efficiently implemented in a quantum system comprising highly excited Rydberg states of a plurality of neutral atoms that are confined by, for example, optical tweezers. As illustrated in
In some embodiments, interactions between Rydberg atoms can also be used to implement more complicated parent Hamiltonians. For instance, Förster resonances between Rydberg states can give rise to simultaneous flips of two neighboring spins. In the chain graph, such updates allow defects of two adjacent, unoccupied vertices to propagate without an energy barrier. Finally, note that even though the star graph is not a unit disk graph, its parent Hamiltonian could potentially be implemented using anisotropic interactions between Rydberg states.
Without being bound by theory, the temperature at which the classical model associated with the independent set problem for the star graph undergoes a phase transition can be computed exactly. The partition function is given by
$$Z = (1 + 2e^{\beta})^b + e^{b\beta}(1 + e^{\beta})^b. \tag{16}$$
The two terms correspond to the different configurations of the central vertex. The probability that the central site is occupied is given by
In the thermodynamic limit b→∞, Eq. (17) becomes the step function p_1 = Θ(β−β_c), where β_c = log φ, with φ = (√5+1)/2 being the golden ratio. The entropy S = (U−F)/T can be computed from the Helmholtz free energy F = −(log Z)/β and the total energy U = −∂ log Z/∂β.
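The approach of p_1 to the step function can be checked numerically. The sketch below evaluates the two terms of Eq. (16) in log space to avoid overflow at large b; the occupied-center probability of Eq. (17) is their relative weight:

```python
import math

def occupation_probability(beta, b):
    """p1 of Eq. (17): weight of the central-vertex-occupied term
    e^{b*beta} (1 + e^beta)^b relative to the full partition function
    of Eq. (16), evaluated in log space to avoid overflow at large b."""
    log_empty = b * math.log(1.0 + 2.0 * math.exp(beta))          # central vertex empty
    log_occupied = b * beta + b * math.log(1.0 + math.exp(beta))  # central vertex occupied
    return 1.0 / (1.0 + math.exp(log_empty - log_occupied))

beta_c = math.log((math.sqrt(5.0) + 1.0) / 2.0)  # log of the golden ratio

# For large b, p1 approaches Theta(beta - beta_c); exactly at beta_c the
# two terms of the partition function are equal (phi^2 = phi + 1), so p1 = 1/2.
```

The identity φ² = φ + 1 makes the two terms of Eq. (16) coincide at β_c, which is why the transition sits exactly at the log of the golden ratio.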
The star graph has three types of vertices: the vertex at the center and the inner and outer vertices on each branch. Without being bound by theory, restricting the analysis to the subspace that is completely symmetric under permutations of the branches, one introduces the total occupation numbers n_in = Σ_{i=1}^b n_in,i and n_out = Σ_{i=1}^b n_out,i, as well as the number of unoccupied branches n_0. The symmetric subspace is spanned by the states |n_cen, n_in, n_out, n_0⟩, where n_cen ∈ {0, 1}, while the other occupation numbers are nonnegative integers satisfying n_in + n_out + n_0 = b. If n_cen = 1, the independent set constraint further requires n_in = 0. The state |n_cen, n_in, n_out, n_0⟩ is an equal superposition of b!/(n_in! n_out! n_0!) independent configurations.
Without being bound by theory, the permutation symmetry leads to a bosonic algebra. One defines the bosonic annihilation operators bin, bout, and b0 respectively associated with the occupation numbers nin, nout, and n0. The Hamiltonian can be split into blocks where the central vertex is either occupied or unoccupied, as well as an off-diagonal term coupling them. Explicitly,
$$H_q = H_q^{(0)} \otimes (1 - n_{\mathrm{cen}}) + H_q^{(1)} \otimes n_{\mathrm{cen}} + H_q^{(od)} \otimes \sigma_{\mathrm{cen}}^x. \tag{18}$$
It follows from Eq. (7) that in terms of the bosonic operators
$$H_q^{(0)} = V_{e,\mathrm{in}}\, b_{\mathrm{in}}^\dagger b_{\mathrm{in}} + V_{e,\mathrm{out}}\, b_{\mathrm{out}}^\dagger b_{\mathrm{out}} + (V_{g,\mathrm{in}} + V_{g,\mathrm{out}})\, b_0^\dagger b_0 - \Omega_{\mathrm{in}}\big(b_{\mathrm{in}}^\dagger b_0 + \mathrm{h.c.}\big) - \Omega_{\mathrm{out}}\big(b_{\mathrm{out}}^\dagger b_0 + \mathrm{h.c.}\big) + V_{g,\mathrm{cen}}\, P(n_{\mathrm{in}} = 0), \tag{19}$$
$$H_q^{(1)} = V_{e,\mathrm{out}}\, b_{\mathrm{out}}^\dagger b_{\mathrm{out}} + V_{g,\mathrm{out}}\, b_0^\dagger b_0 - \Omega_{\mathrm{out}}\big(b_{\mathrm{out}}^\dagger b_0 + \mathrm{h.c.}\big) + V_{e,\mathrm{cen}}, \tag{20}$$
$$H_q^{(od)} = -\Omega_{\mathrm{cen}}\, P(n_{\mathrm{in}} = 0), \tag{21}$$
where P(n_in = 0) projects onto states with no occupied inner vertices. The parameters are labeled in accordance with the definitions in Eq. (7), with the vertex indices i replaced by the type of vertex.
The Hamiltonian can be diagonalized approximately by treating P(n_in = 0) perturbatively. One identifies the lowest energy modes that diagonalize the quadratic parts of H_q^{(0)} and H_q^{(1)} and associates with them the bosonic annihilation operators c_0 and c_1, respectively. Both modes have zero energy, while the other modes are gapped for any finite value of β. One may thus expect the ground state to be well approximated in the subspace spanned by |ψ_0⟩ = c_0^{†b}|0⟩/√(b!) and |ψ_1⟩ = c_1^{†b}|0⟩/√(b!), where |0⟩ denotes the bosonic vacuum. The focus here is on the situation where all parameters follow the path defined by the set of Hamiltonians H_q(β) except for Ω_cen and V_e,cen, which may be adjusted freely. One can show that in this case, |ψ_0⟩ and |ψ_1⟩ encode the Gibbs distribution of weighted independent sets on the star graph with the central vertex held fixed.
To include the effect of the terms involving P(nin=0), one performs a Schrieffer-Wolff transformation for the subspace spanned by |ψ0 and |ψ1, arriving at the effective Hamiltonian
where the terms
are obtained by projecting the full Hamiltonian onto the low-energy subspace. The corrections from coupling to excited states as given by the Schrieffer-Wolff transformation up to second order are
Here, the relation P(n_in = 0)|ψ_0⟩ = √ε_0 σ_cen^x |ψ_1⟩ was used, which holds along the paths of interest. The sums run over all excited states |E_n^{(0)}⟩ with energy E_n^{(0)} of the quadratic part of H_q^{(0)} (excluding |ψ_0⟩). The term V_e,cen will be neglected in the energy denominators of Eqs. (26) and (27), which is justified when V_e,cen is small compared to E_n^{(0)}. The discussion remains valid even if this is not the case, because the second-order corrections from the Schrieffer-Wolff transformation can then be ignored. Combining these results, the complete effective Hamiltonian can be written as
where f = Σ_n |⟨E_n^{(0)}|σ_cen^x|ψ_1⟩|²/E_n^{(0)}. One finds numerically that f decays as an inverse power law in b, such that the approximations are well justified in the thermodynamic limit (see below). For the set of parent Hamiltonians H_q(β), one has V_e,cen = Ω_cen². Hence, H_eff depends on f only through an overall factor (1 − f), which tends to 1 in the limit of large b. The phase transition of the underlying classical model manifests itself as a first-order quantum phase transition from |ψ_0⟩ to |ψ_1⟩. The transition occurs when the two states are resonant, ε_0 = V_e,cen, which can be solved to give β_c = log φ, in agreement with the above exact calculation of the phase transition temperature.
In some embodiments, Glauber dynamics define a Markov chain according to the following prescription: pick a spin at random and draw its new orientation from the Gibbs distribution with all other spins fixed. Starting from configuration s, the probability of flipping spin i in the Ising chain is thus given by
By promoting the values of the spins si to operators σiz, one can concisely write the generator of the Markov chain as
$$M = \sum_i p_i\,\sigma_i^x + \Big(\mathbb{1} - \sum_i p_i\Big). \tag{30}$$
Eq. (2) above immediately gives
The constant term ((n/2)I) was omitted in Eq. (6) above.
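As a concrete check of Eqs. (29) and (30), the sketch below builds the full Glauber transition matrix for a small classical Ising chain (open boundaries with E(s) = −Σ_i s_i s_{i+1}, an illustrative choice of energy function) and verifies that it is stochastic and leaves the Gibbs distribution stationary:

```python
import itertools
import numpy as np

def glauber_generator(n, beta):
    """Column-stochastic Glauber transition matrix M for an open Ising
    chain with E(s) = -sum_i s_i s_{i+1} (illustrative energy choice).
    Returns M and the normalized Gibbs distribution."""
    configs = list(itertools.product([-1, 1], repeat=n))
    index = {s: i for i, s in enumerate(configs)}
    energy = lambda s: -sum(s[i] * s[i + 1] for i in range(n - 1))
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for s in configs:
        col = index[s]
        stay = 1.0
        for i in range(n):
            flipped = list(s)
            flipped[i] = -s[i]
            flipped = tuple(flipped)
            dE = energy(flipped) - energy(s)
            # Pick site i with probability 1/n, then draw its new
            # orientation from the conditional Gibbs distribution.
            p = (1.0 / n) / (1.0 + np.exp(beta * dE))
            M[index[flipped], col] = p
            stay -= p
        M[col, col] = stay  # probability of keeping the configuration
    gibbs = np.array([np.exp(-beta * energy(s)) for s in configs])
    return M, gibbs / gibbs.sum()

M, pi = glauber_generator(4, 0.7)
```

Detailed balance of the Glauber rule guarantees stationarity of the Gibbs distribution; the same flip probabilities p_i are the quantities promoted to operators in Eq. (30).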
Without being bound by theory, the Hamiltonian in Eq. (6) or Eq. (32) above can be mapped onto a free-fermion model using a Jordan-Wigner transformation. One defines the fermion annihilation and creation operators ai, ai† and relates them to the Pauli matrices according to
Eq. (6) above becomes
$$H_q = -h\sum_{i=1}^{n}(2a_i^\dagger a_i - 1) - J_1\sum_{i=1}^{n-1}(a_i^\dagger - a_i)(a_{i+1}^\dagger + a_{i+1}) - J_2\sum_{i=1}^{n-2}(a_i^\dagger - a_i)(a_{i+2}^\dagger + a_{i+2}) + e^{i\pi N}\big[J_1(a_n^\dagger - a_n)(a_1^\dagger + a_1) + J_2(a_{n-1}^\dagger - a_{n-1})(a_1^\dagger + a_1) + J_2(a_n^\dagger - a_n)(a_2^\dagger + a_2)\big], \tag{34}$$
where N=Σi=1nai†ai is the total number of fermions. While the fermion number itself is not conserved, the parity eiπN is, allowing one to consider the even and odd subspaces independently.
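The transformation can be verified directly on small matrices. The sketch below assumes the standard Jordan-Wigner convention a_i = (∏_{j<i} σ_j^z) σ_i^− (the explicit relation referred to above is not reproduced here, so this sign convention is an assumption) and checks the canonical anticommutation relations for n = 3 sites:

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # sigma^- in this basis convention

def jw_annihilators(n):
    """Fermion annihilation operators a_i = (prod_{j<i} sigma_j^z) sigma_i^-
    (assumed Jordan-Wigner convention) as dense 2^n x 2^n matrices."""
    ops = []
    for i in range(n):
        factors = [sz] * i + [sm] + [I2] * (n - i - 1)
        op = np.array([[1.0]])
        for f in factors:
            op = np.kron(op, f)  # site 0 is the most significant factor
        ops.append(op)
    return ops

a = jw_annihilators(3)

# Canonical anticommutation relations:
# {a_i, a_j} = 0 and {a_i, a_j^dag} = delta_ij * identity.
```

The string of σ^z operators supplies the minus signs that convert commuting spin operators on different sites into anticommuting fermion operators.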
One defines the momentum space operators
which satisfy fermionic commutation relations for suitably chosen k. Let
for l=0, 1, . . . , n−1 (mod n). With this definition, the inverse Fourier transformed operators have the formal property ai+n=−eiπNai, which accounts for the boundary terms in Eq. (34). The Hamiltonian simplifies to
While the above Hamiltonian can be diagonalized by a standard Bogoliubov transformation, it will prove more convenient for the purposes described herein to map it onto noninteracting spins. For 0<k<π, define
$$\tau_k^x = a_k^\dagger a_{-k}^\dagger + a_{-k}a_k, \qquad \tau_k^y = -i\big(a_k^\dagger a_{-k}^\dagger - a_{-k}a_k\big), \qquad \tau_k^z = a_k^\dagger a_k - a_{-k}a_{-k}^\dagger. \tag{38}$$
It is straightforward to check that these operators satisfy the same commutation relations as Pauli matrices. In addition, operators corresponding to different values of k commute, such that one can view them as independent spin-1/2 systems, one for each value of k. The range of momenta is restricted to 0 < k < π due to the redundancy τ_{−k}^α = −τ_k^α. The cases k = 0 and k = π require special treatment, as both τ_k^x and τ_k^y vanish.
For concreteness, one can assume that the number of spins n is even. The special cases k=0 and k=π are then both part of the odd parity subspace (eiπN=−1). The Hamiltonian of the even parity subspace can be written as
$$H_q^{\mathrm{even}} = 2\sum_{0<k<\pi} E_k\big(\cos\theta_k\,\tau_k^z + \sin\theta_k\,\tau_k^y\big), \tag{39}$$
where
$$E_k = \sqrt{(h + J_1\cos k + J_2\cos 2k)^2 + (J_1\sin k + J_2\sin 2k)^2}. \tag{40}$$
The angles θk are uniquely defined by
$$E_k\cos\theta_k = -h - J_1\cos k - J_2\cos 2k, \tag{41}$$
$$E_k\sin\theta_k = J_1\sin k + J_2\sin 2k. \tag{42}$$
The ground state is given by
where |vac⟩ is the vacuum with respect to the a_k operators. The ground state energy is
$$E_{\mathrm{GS}}^{\mathrm{even}} = -2\sum_{0<k<\pi} E_k. \tag{44}$$
In the odd parity subspace, one has
$$H_q^{\mathrm{odd}} = 2\sum_{0<k<\pi} E_k\big(\cos\theta_k\,\tau_k^z + \sin\theta_k\,\tau_k^y\big) - (h + J_1 + J_2)\big(2a_0^\dagger a_0 - 1\big) - (h - J_1 + J_2)\big(2a_\pi^\dagger a_\pi - 1\big). \tag{45}$$
The construction of the ground state is analogous to the even case with the additional requirement that either the a0 fermion or the aπ fermion, whichever has the lower energy, be occupied. One can show that the resulting energy is gapped above EGSeven when h+J1+J2 and h−J1+J2 have the same sign. In the case of opposite signs, the even and odd sector ground states are degenerate in the thermodynamic limit, corresponding to the symmetry breaking ground states of the ferromagnetic phase.
Above, adiabatic evolution was considered starting from the ground state at J1=J2=0. Following the above discussion, this state is part of the even subspace. Since the time evolution preserves parity, one can restrict the description to the even subspace, dropping all associated labels below.
Excited states can be constructed by flipping any of the τ_k spins. Since any spin rotation commutes with the parity operator, singly excited states are given by
$$|k\rangle = \tau_k^x\,|\mathrm{GS}\rangle, \tag{46}$$
with an energy 4Ek above the ground state. The phase boundaries are identified by looking for parameters for which the excitation gap vanishes, Ek=0. In the thermodynamic limit, one can treat k as a continuous variable to identify the minima of Ek. There are three distinct cases:
when h−J2=0. A solution only exists for |J1|<2|h|.
As illustrated in
The dispersion is linear in all three cases, except for the two special points simultaneously satisfying h = J_2 and J_1 = ±2h. These are tricritical points, where the dispersion minima from case 3 and either case 1 or 2 merge into a single minimum with quadratic dispersion. Hence, the dynamical critical exponent is z = 1 at all phase transitions except for the tricritical points, where z = 2. The gap closes as ∼n^{−z}, as can be easily seen by considering the value of k closest to the dispersion minimum for a finite-sized system. Note that in case 3, the gap displays an oscillatory behavior as a function of system size for fixed (h, J_1, J_2) and may even vanish exactly. Nevertheless, the envelope follows the expected ∼1/n scaling.
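The n^{−z} gap closing can be checked directly from the dispersion in Eq. (40). The sketch below assumes the antiperiodic even-sector momenta k = (2l+1)π/n and the tricritical point (h, J_1, J_2) = (1, 2, 1) implied by the parametrization J_1 = 2 + η cos α, J_2 = 1 + η sin α with h = 1 used below; doubling n should then quarter the gap (z = 2), but only halve it at an ordinary critical point (z = 1):

```python
import numpy as np

def excitation_gap(n, h, J1, J2):
    """Minimum mode energy E_k of Eq. (40) over the (assumed) even-sector
    momentum grid k = (2l+1)*pi/n, restricted to 0 < k < pi."""
    k = (2 * np.arange(n) + 1) * np.pi / n
    k = k[(k > 0) & (k < np.pi)]
    Ek = np.sqrt((h + J1 * np.cos(k) + J2 * np.cos(2 * k)) ** 2
                 + (J1 * np.sin(k) + J2 * np.sin(2 * k)) ** 2)
    return Ek.min()

# Ordinary Ising critical point (h = J1 = 1, J2 = 0): gap ~ 1/n  (z = 1).
z1_ratio = excitation_gap(50, 1, 1, 0) / excitation_gap(100, 1, 1, 0)
# Tricritical point (h, J1, J2) = (1, 2, 1): gap ~ 1/n^2  (z = 2).
z2_ratio = excitation_gap(50, 1, 2, 1) / excitation_gap(100, 1, 2, 1)
```

At (1, 2, 1) the dispersion simplifies to E_k = 2(1 + cos k), which is quadratic around its minimum at k = π, making the z = 2 scaling explicit.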
To compute the fidelity after evolving under the time-dependent Hamiltonian following a given path, the Schrödinger equation was numerically integrated for each spin τ_k, working in the instantaneous eigenbasis |χ_k^±(t)⟩, which are eigenstates of H_k = 2E_k(cos θ_k τ_k^z + sin θ_k τ_k^y) with energies ±2E_k [see Eq. (39)]. It is convenient to parametrize each adiabatic path by a dimensionless time s running from 0 to 1. Writing the state at time s as
$$|\psi_k(s)\rangle = c_k(s)\,|\chi_k^-(s)\rangle + d_k(s)\,|\chi_k^+(s)\rangle, \tag{47}$$
the coefficients ck and dk are determined by the Schrödinger equation
with the initial condition ck(0)=1, dk(0)=0. The final fidelity is obtained by solving this equation for each spin and multiplying the individual fidelities,
$$\mathcal{F} = \prod_{0<k<\pi} |c_k(1)|^2. \tag{49}$$
Note that all terms in Eq. (48) can be evaluated without having to solve for the physical evolution time t(s). The terms Ek(s) and dθk/ds are readily computed from Eqs. (40)-(42), while dt/ds follows from Eq. (13) above:
Here, λ_1 = J_1 and λ_2 = J_2, setting h = 1 throughout. To vary the total evolution time t_tot, one simply adjusts the value of ε. Good convergence was obtained by evolving under constant s = s_n for an interval
before incrementing s_{n+1} = s_n + Δs_n. The number of steps is independent of the total time, yet the final fidelity is well estimated, since the probability of leaving the ground state is small in each step. The resulting infidelity 1 − F as a function of ε for the four paths described above, a parametrization of which is given in Table 1, is shown in
The above numerical observations suggest that the adiabatic state preparation time is proportional to the quantity
$$l = \int \sqrt{\sum_{\mu,\nu} g_{\mu\nu}\, d\lambda^\mu\, d\lambda^\nu}, \tag{51}$$
which will be referred to herein as the adiabatic path length l. In the same spirit, one calls g_{μν} the adiabatic metric, as it endows the parameter space with a distance measure relevant for adiabatic evolution. The adiabatic path length l is plotted in
One may gain an analytic understanding of the adiabatic path length by considering the adiabatic metric close to the tricritical point. From Eqs. (43) and (46) it is straightforward to show that
It then follows immediately from the definition in Eq. (14) above that
With λ1=J1, λ2=J2, this result may be written in matrix form as
In the thermodynamic limit, the momentum sum turns into an integral, which can be evaluated analytically. Setting h=1 and parametrizing J1=2+η cos α, J2=1+η sin α, the result is expanded close to the tricritical point (small η) to obtain
The first case corresponds to the ferromagnetic phase, while the second case applies to the paramagnetic and cluster-state-like phases. In both cases, the adiabatic metric diverges as a power law, G ∼ η^{−ρ}, with ρ = 5/2 in the ferromagnetic phase and ρ = 5 otherwise.
For finite-sized systems, one can show that exactly at the critical point, G ∼ n^σ, where σ = 6 in any direction not parallel to the J_1 axis. Based on finite-size scaling arguments, one expects that the metric follows the expression for the infinite system as one approaches the tricritical point, until it saturates to the final value G ∼ n^σ. One is thus led to define a critical region η < η_c, determined by η_c^{−ρ} ∼ n^σ, in which the metric is approximately constant. These arguments imply that the path length should scale according to
The above prediction agrees well with the numerical results for paths (ii)-(iv) shown in
A person of skill in the art would understand that a similar analysis can be performed at the transition between the paramagnetic and the ferromagnetic phases, away from the tricritical point. One finds that the adiabatic path length always scales linearly with the system size.
In some embodiments, for the weighted independent set problem, the Metropolis-Hastings update rule is used instead of Glauber dynamics. A move is accepted with probability paccept=min(1, e−βΔE), where ΔE is the change in energy. With single site updates, the probability of changing the occupation of vertex i is given by
$$p_i = P_i\, e^{-\beta w_i n_i}, \tag{57}$$

assuming that the weight w_i is nonnegative. The projector P_i = ∏_{j∈N(i)}(1 − n_j) restricts updates to configurations in which all neighbors j ∈ N(i) of vertex i are unoccupied, so that flipping vertex i always yields another independent set.
From Eq. (2) above, the parent Hamiltonian takes the form

$$H_q = \sum_i P_i\big[e^{-\beta w_i n_i} - e^{-\beta w_i/2}\,\sigma_i^x\big].$$

This can be brought into the form of Eq. (7) above using the identity e^{−βw_i n_i} = 1 − (1 − e^{−βw_i}) n_i, valid for n_i ∈ {0, 1}.
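The construction can be validated by exact diagonalization on a small instance. The sketch below assembles the parent Hamiltonian in the form H_q = Σ_i P_i[e^{−βw_i n_i} − e^{−βw_i/2} σ_i^x] (the off-diagonal coefficient e^{−βw_i/2} = √(p_i(s) p_i(s′)) follows from the construction of Eq. (2)) for a three-vertex path graph, where graph, weights, and β are illustrative choices, and checks that the amplitude-encoded Gibbs state over independent sets is a zero-energy ground state:

```python
import itertools
import numpy as np

def parent_hamiltonian(edges, weights, beta):
    """H_q = sum_i P_i [e^{-beta w_i n_i} - e^{-beta w_i/2} sigma_i^x],
    with P_i projecting onto states where all neighbors of i are empty."""
    n = len(weights)
    dim = 2 ** n
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    num = np.diag([0.0, 1.0])  # occupation number of a single vertex

    def site_op(op, i):
        factors = [np.eye(2)] * n
        factors[i] = op
        out = np.array([[1.0]])
        for f in factors:
            out = np.kron(out, f)  # vertex 0 is the most significant bit
        return out

    H = np.zeros((dim, dim))
    for i in range(n):
        P = np.eye(dim)
        for (u, v) in edges:
            if i in (u, v):
                j = v if u == i else u
                P = P @ (np.eye(dim) - site_op(num, j))
        diag = np.eye(dim) + (np.exp(-beta * weights[i]) - 1) * site_op(num, i)
        H += P @ (diag - np.exp(-beta * weights[i] / 2) * site_op(sx, i))
    return H

edges, weights, beta = [(0, 1), (1, 2)], [1.0, 2.0, 1.0], 0.8
H = parent_hamiltonian(edges, weights, beta)

# Gibbs state over independent sets, amplitude-encoded as sqrt(p(s)).
psi = np.zeros(2 ** 3)
for s in itertools.product([0, 1], repeat=3):
    if not any(s[u] and s[v] for (u, v) in edges):
        idx = int("".join(map(str, s)), 2)
        psi[idx] = np.exp(beta * sum(w * o for w, o in zip(weights, s)) / 2)
psi /= np.linalg.norm(psi)
```

Zero sits at the bottom of the spectrum because each term is positive semidefinite, and the square-root-Gibbs state is annihilated term by term.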
The parent Hamiltonian for the (unweighted) independent set problem on a chain has been previously discussed in a different context. For general parameters, the quantum phase diagram of the Hamiltonian can be estimated using numerical diagonalization of finite-sized chains. The complexity of the problem is reduced as one only needs to consider the subspace of independent sets. One further restricts oneself to states that are invariant under translation (zero momentum). Assuming that the initial state satisfies this condition, the state will remain in this subspace at all times since the Hamiltonian is translationally invariant.
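The reduction in complexity from restricting to the independent set subspace can be made concrete. For a translation-invariant chain with periodic boundaries (an n-cycle), the number of independent sets is the Lucas number L_n, far smaller than the 2^n states of the full Hilbert space; a brute-force count is shown below for illustration:

```python
def independent_sets_on_cycle(n):
    """Count bitmask configurations on an n-cycle with no two adjacent
    occupied vertices (brute force, for illustration)."""
    count = 0
    for mask in range(2 ** n):
        if mask & (mask >> 1):                     # adjacent pair inside the chain
            continue
        if (mask & 1) and ((mask >> (n - 1)) & 1):  # wrap-around pair
            continue
        count += 1
    return count

# For n = 10 the independent-set subspace has 123 states (the Lucas
# number L_10), compared with 2^10 = 1024 for the full Hilbert space.
```

Restricting further to the zero-momentum sector divides the count by roughly another factor of n, which is what makes numerical diagonalization of moderately long chains tractable.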
The gap of the set of Hamiltonians Hq(β) as a function of β is shown in
Due to the exponentially vanishing gap, it is not possible to adiabatically reach the state encoding the Gibbs distribution at zero temperature along path (i) 320 discussed above. One therefore only prepares the state encoding the Gibbs distribution at β_c = 2 log n, whose fidelity relative to the state encoding the Gibbs distribution at zero temperature is approximately 90%, almost independent of n. The constant overlap reflects the fact that the correlation length at β_c is a fixed fraction of the system size. It is possible to increase the overlap by adding a constant to β_c without changing the scaling behavior discussed herein. To determine the state preparation fidelity, the Schrödinger equation is numerically integrated by exactly diagonalizing the Hamiltonian at discrete time steps Δt. The steps are chosen such that Δt ‖d|0⟩/dt‖ = 10^{−3}, where |0⟩ denotes the instantaneous ground state. The expression is most conveniently evaluated using the identity ‖d|0⟩/dt‖² = Σ_{n>0} |⟨n|dH/dt|0⟩|²/(E_n − E_0)², where the sum runs over all excited states |n⟩ with energy E_n. The time step is related to the change in the parameters along the path by Eq. (13) above. As for the Ising chain, the fidelity approaches 1 at small ε in a fashion that is approximately independent of the number of vertices n. The same is true along path (ii) 330, for which it is in fact possible to adiabatically reach the state encoding the Gibbs distribution at zero temperature in finite time. An explicit parametrization of the two paths is provided in Table 2.
The fact that the fidelity at a fixed value of ε is approximately independent of n again allows one to understand the adiabatic state preparation time ta in terms of the adiabatic path length l in Eq. (51). The adiabatic path length l from β=0 to β=10 along paths (i) 320 and (ii) 330 is plotted in
In the Methods section above, the quantity f was defined as

$$f = \sum_n \frac{\big|\langle E_n^{(0)}|\sigma_{\mathrm{cen}}^x|\psi_1\rangle\big|^2}{E_n^{(0)}}.$$
A numerical evaluation of this quantity ƒ as a function of b is shown in
Before a description of the different adiabatic paths, note that the state encoding the Gibbs distribution with β=0 can be efficiently prepared. For instance, one can start with Ve,i=1 and Vg,i=Ωi=0. Next, all Ωi are simultaneously ramped up to 1 at the same constant rate before doing the same for Vg,i. The Hamiltonian remains gapped along this path such that the adiabatic state preparation proceeds efficiently. Hence, one can use the state encoding the Gibbs distribution with β=0 as the initial point for all adiabatic paths.
Next, consider adiabatic evolution along the path defined by the set of Hamiltonians Hq(β).
Following the discussion above, one expects an improvement of the adiabatic state preparation time by increasing the value of Ω_cen. As a first guess, one may consider a path where Ω_cen is held constant at 1, while all other parameters are varied according to H_q(β) from β = 0 to the desired final value of β (assumed to be below the phase transition). The time required to cross the phase transition is again t_a ∼ 1/J, where J is evaluated at the phase transition. The term f Ω_cen² in the effective Hamiltonian [Eq. (28) above] shifts the phase transition close to β = 0, such that t_a ∼ (3/2)^{b/2}. However, this does not yet result in a speedup for preparing the desired quantum state encoding the Gibbs distribution, as one still has to decrease Ω_cen to its final value e^{−bβ/2}. It turns out that this step negates the speedup if performed adiabatically. The reason for this is that the large value of Ω_cen admixes states where the central vertex is unoccupied, skewing the occupation probability away from its thermal expectation values. Even though this admixture is small, the time t_a to adiabatically remove it exceeds the time t_a to initially cross the phase transition. To avoid this issue, one can in principle suddenly switch Ω_cen to its final value, at the cost that the final fidelity is limited by the admixture. Perturbation theory and numerical results suggest that this results in an infidelity 1 − F that decays as 1/b² for large values of b.
A slight modification of the path achieves a final state infidelity that is not only polynomially but exponentially small in b: first, V_e,cen is lowered from its initial value 1 to −1, which can be done in time t_a ∼ (3/2)^{b/2} as before. Next, all other parameters are varied according to the set of Hamiltonians H_q(β), for which only a time polynomial in b is required. Numerical results for these two steps are shown in
While the speedup of this path over the Markov chain appears to be more than quadratic [t_a ∼ (3/2)^{b/2} compared to t_m ≥ φ^b], it is likely that a convergence time on the order of (3/2)^b can be achieved using simulated annealing. For instance, one could consider an annealing schedule in which the weight on the central vertex is first increased. This shifts the phase transition towards β = 0, allowing one to sample at the phase transition in a time that scales as (3/2)^b. In the annealing schedule, the temperature can then be lowered to the desired value before ramping the weight of the central vertex back to its initial value. This annealing schedule is in many ways similar to the adiabatic path discussed above. It is nevertheless quadratically slower because, unlike in the quantum case, it is not possible to vary V_e,cen and Ω_cen independently.
In some embodiments, the approach to quantum sampling algorithms described herein unveils a connection between computational complexity and phase transitions, and provides physical insight into the origin of quantum speedup. The quantum Hamiltonians appearing in the construction are guaranteed to be local given that the Gibbs distribution belongs to a local, classical Hamiltonian and that the Markov chain updates are local. Consequently, time evolution under these quantum Hamiltonians can be implemented using Hamiltonian simulation. Moreover, a hardware efficient implementation in near-term devices may be possible for certain cases such as the independent set problem. While the proposed realization utilizing Rydberg blockade is restricted to unit disk graphs, a wider class of graphs may be accessible using, for example, anisotropic interactions and individual addressing with multiple atomic sublevels.
In various embodiments discussed above, a quantum computer comprises a plurality of confined neutral atoms. However, various physical embodiments of a quantum computer are suitable for use according to the present disclosure. While qubits are characterized herein as mathematical objects, each corresponds to a physical qubit that can be implemented using a number of different physical implementations, such as trapped ions, optical cavities, individual elementary particles, molecules, or aggregations of molecules that exhibit qubit behavior. Accordingly, in some embodiments, a quantum computer comprises nonlinear optical media. In some embodiments, a quantum computer comprises a cavity quantum electrodynamics device. In some embodiments, a quantum computer comprises an ion trap. In some embodiments, a quantum computer comprises a nuclear magnetic resonance device. In some embodiments, a quantum computer comprises a superconducting device. In some embodiments, a quantum computer comprises a solid state device.
Referring now to
In computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
Computer system/server 12 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
As shown in
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus, Peripheral Component Interconnect Express (PCIe), and Advanced Microcontroller Bus Architecture (AMBA).
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media.
System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments as described herein.
Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
Referring to
In various embodiments, computing node 10 receives a description of a probability distribution, e.g., from a remote node via network adapter 20 or from internal or external storage such as storage system 34. Computing node 10 determines a first Hamiltonian having a ground state encoding the probability distribution and determines a second Hamiltonian, the second Hamiltonian being continuously transformable into the first Hamiltonian via a path through at least one quantum phase transition. Computing node 10 provides instructions to quantum computer 1701, for example via I/O interface 22 or network adapter 20. The instructions indicate to initialize a quantum system according to a ground state of the second Hamiltonian, and evolve the quantum system from the ground state of the second Hamiltonian to the ground state of the first Hamiltonian according to the path through the at least one quantum phase transition. As set out above, the instructions may indicate the parameters of a time-varying beam of coherent electromagnetic radiation to be directed to each of a plurality of confined neutral atoms 1702. However, it will be appreciated that alternative physical implementations of quantum computer 1701 will have platform-specific instructions for preparing and evolving a quantum system. Computing node 10 receives from the quantum computer a measurement on the quantum system, thereby obtaining a sample from the probability distribution.
The present disclosure may be embodied as a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Having thus described several illustrative embodiments, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to form a part of this disclosure and are intended to be within the spirit and scope of this disclosure. While some examples presented herein involve specific combinations of functions or structural elements, it should be understood that those functions and elements may be combined in other ways according to the present disclosure to accomplish the same or different objectives. In particular, acts, elements, and features discussed in connection with one embodiment are not intended to be excluded from similar or other roles in other embodiments. Additionally, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions. Accordingly, the foregoing description and attached drawings are by way of example only, and are not intended to be limiting.
This application is a continuation of International Application No. PCT/US21/12209, filed Jan. 5, 2021, which claims the benefit of U.S. Provisional Application No. 62/957,400, filed Jan. 6, 2020, each of which is hereby incorporated by reference in its entirety.
This invention was made with government support under N00014-15-1-2846 awarded by the Department of Defense/Office of Naval Research, and under Grant Nos. 1125846 and 1506284 awarded by the National Science Foundation. The government has certain rights in the invention.
Number | Date | Country
---|---|---
62957400 | Jan 2020 | US

| Number | Date | Country
---|---|---|---
Parent | PCT/US21/12209 | Jan 2021 | US
Child | 17858570 | | US