PHASE ESTIMATION WITH RANDOMIZED HAMILTONIANS

Information

  • Patent Application
  • 20200293936
  • Publication Number
    20200293936
  • Date Filed
    June 03, 2019
  • Date Published
    September 17, 2020
  • CPC
    • G06N10/00
  • International Classifications
    • G06N10/00
Abstract
Existing methods for dynamical simulation of physical systems use either a deterministic or random selection of terms in the Hamiltonian. In this application, example approaches are disclosed where the Hamiltonian terms are randomized and the precision of the randomly drawn approximation is adapted as the required precision in phase estimation increases. This reduces both the number of quantum gates needed and in some cases reduces the number of quantum bits used in the simulation.
Description
FIELD

This application relates generally to quantum computing. In more detail, example approaches are disclosed where Hamiltonian terms are randomized and the precision of the randomly drawn approximation is adapted as the required precision in phase estimation increases.


SUMMARY

Existing methods for dynamical simulation of physical systems use either a deterministic or random selection of terms in the Hamiltonian. In this disclosure, example approaches are disclosed where the Hamiltonian terms are randomized and the precision of the randomly drawn approximation is adapted as the required precision in phase estimation increases. This reduces both the number of quantum gates needed and in some cases reduces the number of quantum bits used in the simulation.


Embodiments comprise randomizing phase estimation by replacing the Hamiltonian with a randomly generated one each time it is simulated. Further embodiments involve the use of randomization within an iterative phase estimation algorithm to select Hamiltonian terms for inclusion in the approximation as well as their ordering. Certain embodiments involve the use of importance functionals based on the significance of each term in the groundstate to determine whether it gets included in the randomly sampled Hamiltonian. Further embodiments involve the use of importance sampling based on variational approximations to the groundstates, such as, but not limited to, CISD states. Certain embodiments involve the use of adaptive Bayesian methods in concert with this process to quantify the precision of the Hamiltonian needed given the current uncertainty in the eigenvalue that the algorithm is estimating.


In this application, example methods are disclosed for performing a quantum simulation using adaptive Hamiltonian randomization. The particular embodiments described should not be construed as limiting, as the disclosed method acts can be performed alone, in different orders, or at least partially simultaneously with one another. Further, any of the disclosed methods or method acts can be performed with any other methods or method acts disclosed herein. In particular embodiments, a Hamiltonian to be computed by the quantum computing device is inputted; a number of Hamiltonian terms in the Hamiltonian is reduced using randomization within a phase estimation algorithm; and a quantum circuit description for the Hamiltonian is output with the reduced number of Hamiltonian terms.


In certain embodiments, the reducing comprises selecting one or more random Hamiltonian terms based on an importance function; reweighting the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generating the quantum circuit description using the reweighted random terms. Some embodiments further comprise implementing, in the quantum computing device, a quantum circuit as described by the quantum circuit description; and measuring a quantum state of the quantum circuit. Still further embodiments comprise re-performing the method based on results from the measuring (e.g., using an iterative process). In some embodiments, the iterative process comprises computing a desired precision value for the Hamiltonian; computing a standard deviation for the Hamiltonian based on results from the implementing and measuring; and comparing the desired precision value to the standard deviation, as outlined in the sketch below. Some embodiments further comprise changing an order of the Hamiltonian terms based on the reducing. Certain embodiments further comprise applying importance functions to terms of the Hamiltonian in a ground state; and selecting one or more random Hamiltonian terms based at least in part on the importance functions. Some embodiments comprise using importance sampling based on a variational approximation to a groundstate. Certain embodiments further comprise using adaptive Bayesian methods to quantify a precision needed for the Hamiltonian given an estimate of the current uncertainty in an eigenvalue.
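For illustration, the method acts above can be outlined classically. The following sketch is a minimal illustration only and not the disclosed implementation; the helper run_pe_round is a hypothetical stand-in for implementing the described quantum circuit, measuring it, and re-estimating the Bayesian uncertainty:

```python
import numpy as np

def run_pe_round(circuit_description, sigma, m):
    # Hypothetical stand-in: each round halves the uncertainty and doubles
    # the term budget as the required precision tightens.
    return sigma / 2.0, 2 * m

def adaptive_randomized_pe(coeffs, importances, target_precision, seed=0):
    rng = np.random.default_rng(seed)
    f = np.asarray(importances, dtype=float)
    f = f / f.sum()                       # normalized importance function
    sigma, m = 1.0, 16                    # initial uncertainty and term budget
    while sigma > target_precision:       # compare desired precision to sigma
        idx = rng.choice(len(coeffs), size=m, p=f)   # select random terms
        weights = coeffs[idx] / (m * f[idx])         # reweight: unbiased sum
        circuit_description = list(zip(idx.tolist(), weights))
        sigma, m = run_pe_round(circuit_description, sigma, m)
    return sigma, m

coeffs = np.random.default_rng(1).normal(size=50)
print(adaptive_randomized_pe(coeffs, np.abs(coeffs) + 1e-3, 1e-3))
```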


Other embodiments comprise one or more computer-readable media storing computer-executable instructions, which when executed by a computer cause the computer to perform a method comprising inputting a Hamiltonian to be computed by a quantum computing device; reducing a number of Hamiltonian terms in the Hamiltonian using randomization within a phase estimation algorithm; and outputting a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms.


The foregoing and other objects, features, and advantages of the disclosed technology will become more apparent from the following detailed description, which proceeds with reference to the accompanying figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a quantum circuit for performing iterative phase estimation.



FIGS. 2-9 comprise graphs that show the average ground energy shift (compared to the unsampled Hamiltonian), the variance in ground energies over sampled Hamiltonians, the average qubit requirement, and the average number of terms in sampled Hamiltonians for Li2, as a function of the number of samples taken to generate the Hamiltonian and the value of the hedging parameter ρ.



FIG. 10 is a flow chart showing an example method for implementing an importance sampling simulation method according to an embodiment of the disclosed technology.



FIG. 11 is a flow chart showing an example method for performing a quantum simulation using adaptive Hamiltonian randomization.



FIG. 12 illustrates a generalized example of a suitable classical computing environment in which aspects of the described embodiments can be implemented.



FIG. 13 shows an example of a possible network topology (e.g., a client-server network) for implementing a system according to the disclosed technology.



FIG. 14 shows another example of a possible network topology (e.g., a distributed computing environment) for implementing a system according to the disclosed technology.



FIG. 15 shows an exemplary system for implementing the disclosed technology.



FIG. 16 is a flow chart showing an example method for performing a quantum simulation using adaptive Hamiltonian randomization.





DETAILED DESCRIPTION
I. Introduction

Not all Hamiltonian terms are created equal in quantum simulation. Hamiltonians that naturally arise from chemistry, materials science, and other applications are often composed of terms that are negligibly small. These terms are often culled from the Hamiltonian well before it reaches the simulator. Other terms that are formally present in the Hamiltonian are removed, not because of their norm, but rather because they are not expected to impact the quantity of interest. For example, in quantum chemistry one usually selects an active space of orbitals and ignores any orbitals outside the active space. This causes many large terms to be omitted from the Hamiltonian.


This process, often called decimation, involves systematically removing terms from the Hamiltonian and simulating the dynamics. The idea behind such a scheme is to remove terms in the Hamiltonian until the maximum shift allowed in the eigenvalues is comparable to the level of precision needed. For the case of chemistry, chemical accuracy sets a natural accuracy threshold for such simulations, but in general this precision requirement need not be viewed as a constant.


One of the example innovations of this disclosure is that, in iterative phase estimation, the number of terms taken in the Hamiltonian should ideally not be held constant. The reason is that the high-order bits are mostly irrelevant when one is trying to learn, for example, a given low-order bit of the binary expansion of the eigenphase: a much lower-accuracy simulation can then be tolerated than when learning a high-order bit. It then makes sense to adapt the number of terms in the Hamiltonian as iterative phase estimation proceeds through the bits of the phase. Example embodiments of the disclosed technology provide a systematic method for removing terms, together with formal proofs that such processes need not dramatically affect the results of phase estimation nor its success probability.
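As a concrete illustration of this adaptation, the sketch below assumes the scaling made quantitative in Section III (cf. Eq. (9)): the sampled-term budget m grows with the square of the total evolution time to hold the likelihood error constant. The parameter values are illustrative only:

```python
import numpy as np

def term_budget(M, t, L, term_variance, err=0.1):
    # Assumed scaling m ~ (M t L)^2 * Var / err^2 (cf. Eq. (9) below), capped
    # so one never samples more than simply keeping all L terms.
    return min(L, int(np.ceil((M * t * L) ** 2 * term_variance / err ** 2)))

for bit in range(8):
    M = 2 ** bit          # repetitions roughly double per bit of precision
    print(bit, term_budget(M, t=0.01, L=100, term_variance=1e-4))
```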


One of the concepts behind the example advanced decimation procedure is that a form of importance sampling is used to estimate, a priori, which terms in the Hamiltonian are significant. These randomized Hamiltonians are then used within a simulation circuit to prepare approximate ground states. It is then shown, using analysis reminiscent of that behind the Zeno effect or the quantum adiabatic theorem, that the errors in the eigenstate prepared at each round of phase estimation need not have a substantial impact on the posterior mean of the eigenphase estimated for the true Hamiltonian. This shows, under appropriate assumptions on the eigenvalue gaps, that this process can be used to reduce the time complexity of simulation and, under some circumstances, even to reduce the space complexity by identifying qubits that are not needed for the level of precision asked of the simulation.


The disclosure proceeds by first reviewing iterative phase estimation and Bayesian inference, which are used to quantify the maximal error in the inference of the phase. The disclosure then proceeds to examine the effect of using a stochastic Hamiltonian on the eigenphases yielded by phase estimation in the simple case where a fixed, but random, Hamiltonian is used at each step of iterative phase estimation. The more complicated case is then examined where each repetition of e−iHt in the iterative phase estimation circuit is implemented with a different random Hamiltonian. The theoretical analysis concludes by showing that the success probability is not degraded substantially if the eigenvalue gaps of the original Hamiltonian are sufficiently large. Further, numerical examples of this sampling procedure are shown and from that it can be concluded that the example sampling process for the Hamiltonian can have a substantial impact on the number of terms in the Hamiltonian and even in some cases the number of qubits used in the simulation.


II. Iterative Phase Estimation

The idea behind iterative phase estimation is based on the aim to build a quantum circuit that acts as an interferometer, wherein the unitary one wishes to probe is applied in one of the two branches of the interferometer but not in the other. When the quantum state is allowed to interfere with itself at the end of the protocol, the interference pattern reveals the eigenphase. This process allows the eigenvalues of U to be estimated within the standard quantum limit (i.e., the number of applications of U needed to estimate the phase within error ϵ is in Θ(1/ϵ²)). If the quantum state is allowed to pass repeatedly through the interferometer circuit (or entangled inputs are used), then this scaling can be reduced to Θ(1/ϵ), which is known as the Heisenberg limit. Such a circuit is shown in schematic block diagram 100 of FIG. 1. In particular, FIG. 1 shows a quantum circuit for performing iterative phase estimation. M is the number of repetitions of the unitary U (not necessarily an integer), and θ is a phase offset between the ancilla |0⟩ and |1⟩ states.


The phase estimation circuit is easy to analyze in the case where U|ψ⟩ = e^{iϕ}|ψ⟩. If U is repeated M times and θ is a phase offset, then the likelihood of a given measurement outcome o ∈ {0, 1} for the circuit in FIG. 1 with these parameters is

Pr(o|ϕ; M, θ) = [1 + (−1)^o cos(M(θ − ϕ))]/2.  (1)
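This likelihood is simple to evaluate and sample classically. The following is a minimal illustrative sketch (not the disclosed circuit):

```python
import numpy as np

def likelihood(o, phi, M, theta):
    # Eq. (1): probability of outcome o in {0, 1} for the circuit of FIG. 1.
    return (1 + (-1) ** o * np.cos(M * (theta - phi))) / 2

def sample_outcome(phi, M, theta, rng):
    # Draw one simulated measurement of the iterative-phase-estimation circuit.
    return int(rng.random() >= likelihood(0, phi, M, theta))

rng = np.random.default_rng(2)
print([sample_outcome(0.7, M=4, theta=0.1, rng=rng) for _ in range(10)])
```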

There are many free parameters that can be used when designing iterative phase estimation experiments. In particular, the rules for generating M and θ for each experiment vary radically along with the methods used to process the data that comes back from these experiments. Approaches such as Kitaev's phase estimation algorithm, robust phase estimation, information theory phase estimation, or any number of approximate Bayesian methods, provide good heuristics for picking these parameters. In this disclosure, it is assumed that one does not wish to specify to any of these methods for choosing experiments, nor does one wish to focus on the specific data processing methods used. Nonetheless, Bayesian methods are relied on to discuss the impact that randomizing the Hamiltonian can have on an estimate of the eigenphase.


Bayes' theorem can be interpreted as giving the correct way to update beliefs about some fact given a set of experimental evidence and prior beliefs. The initial beliefs of the experimentalist are encoded by a prior distribution Pr(ϕ). In many cases, it is appropriate to set Pr(ϕ) to be a uniform distribution on [0, 2π) to represent a state of maximal ignorance about the eigenphase. However, in quantum simulation broader priors can be chosen if each step in phase estimation uses U_j = e^{−iHt_j} and obeys U_j|ψ⟩ = e^{−iE_0 t_j}|ψ⟩ for different t_j, since such experiments can learn E_0, as opposed to experiments with a fixed t, which yield ϕ = E_0 t mod 2π.


Bayes' theorem then gives the posterior distribution Pr(ϕ|o; θ, M) to be

Pr(ϕ|o; M, θ) = Pr(o|ϕ; M, θ) Pr(ϕ) / ∫ Pr(o|ϕ; M, θ) Pr(ϕ) dϕ.  (2)

Given a complete data set rather than a single datum, one has that

Pr(ϕ|o⃗; M⃗, θ⃗) = [∏_j Pr(o_j|ϕ; M_j, θ_j)] Pr(ϕ) / ∫ [∏_j Pr(o_j|ϕ; M_j, θ_j)] Pr(ϕ) dϕ.  (3)
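On a discretized prior this update is only a few lines of code. The sketch below is a minimal grid-based illustration of Eq. (3), with illustrative experiment parameters, and is not the disclosed data-processing method:

```python
import numpy as np

def likelihood(o, phi, M, theta):
    return (1 + (-1) ** o * np.cos(M * (theta - phi))) / 2

phi = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
posterior = np.ones_like(phi) / len(phi)     # uniform prior on [0, 2*pi)

# Sequentially fold in experiments (o_j, M_j, theta_j), cf. Eq. (3).
for o, M, theta in [(0, 1, 0.0), (1, 2, 0.3), (0, 4, 0.9)]:
    posterior *= likelihood(o, phi, M, theta)
    posterior /= posterior.sum()             # normalization (the denominator)
```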

This probability distribution encodes the experimentalist's entire state of knowledge about ϕ given that the data is processed optimally.


It is not customary to return the posterior distribution (or an approximation thereof) as output from a phase estimation protocol. Instead, a point estimate for ϕ is given. The most frequently used estimate is the maximum a posteriori (MAP) estimate, which is simply the ϕ that has the maximum probability. While this quantity has a nice operational interpretation, it suffers from a number of deficiencies for the purposes of this disclosure. The main drawback here is that the MAP estimate is not robust, in the sense that if two different values of ϕ have comparable likelihoods, then small errors in the likelihood can lead to radical shifts in the MAP estimate. The posterior mean is a better estimate for this purpose, which formally is ∫ Pr(ϕ|o⃗; M⃗, θ⃗) ϕ dϕ. The posterior mean can be seen as the estimate that minimizes the mean square error among unbiased estimates of ϕ, and thus it is well motivated. It also has the property that it is robust to small perturbations in the likelihood, which is a feature that is used below to estimate the impact on the results of a phase estimation experiment.
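Continuing the grid sketch above, both point estimates are one-liners. Since ϕ is periodic, this illustration computes the posterior mean on the circle, which is an assumption of the sketch rather than a prescription of the disclosure:

```python
# MAP estimate: the grid point of maximum posterior probability.
map_estimate = phi[np.argmax(posterior)]

# Posterior mean, computed circularly to respect the periodicity of phi.
posterior_mean = np.angle(np.sum(posterior * np.exp(1j * phi))) % (2 * np.pi)
```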


III. Errors in Likelihood Function
A. Linear Combinations of Unitaries

Linear combination of unitaries methods for quantum simulation have rapidly become a favored approach for simulating Hamiltonian dynamics in quantum systems. Unlike Trotter decompositions, many of these methods do not necessarily yield a unitary approximation to the simulated quantum dynamics. This means that it is impossible to use Stone's theorem directly to argue that the linear combination of unitaries method implements e^{−iH̃t} in place of e^{−iHt}. In turn, the standard analysis of how error propagates from Trotter decompositions to the estimated phase in iterative phase estimation fails, because one cannot reason about the eigenvalues of H̃ directly.


Here, to address this in part, a discussion is provided of the impact that such errors can have on the likelihood function for iterative phase estimation.


Lemma 1. Let V be a possibly non-unitary operation that can be non-deterministically performed by a quantum computer in a controlled fashion, such that there exists a unitary U with ∥U − V∥ ≤ δ < 1. If one defines the likelihood function post-selected on V succeeding to be P̃(o|Et; M, θ), then

|P(o|Et; M, θ) − P̃(o|Et; M, θ)| ≤ δ/(1 − δ).

Proof. Assume that o = 0. Then one has that, for input state |ψ⟩, the error in the likelihood function output by iterative phase estimation is

|P(0|Et; M, θ) − P̃(0|Et; M, θ)|
= |Tr([U e^{iθ}|ψ⟩⟨ψ| + e^{−iθ}|ψ⟩⟨ψ|U†]/4 − [V e^{iθ}|ψ⟩⟨ψ| + e^{−iθ}|ψ⟩⟨ψ|V†]/(4⟨ψ|V†V|ψ⟩))|
≤ ½|Tr(U|ψ⟩⟨ψ| − V|ψ⟩⟨ψ|/⟨ψ|V†V|ψ⟩)|
≤ ½|Tr(U|ψ⟩⟨ψ| − V|ψ⟩⟨ψ|)| + ½|Tr(V|ψ⟩⟨ψ| − V|ψ⟩⟨ψ|/⟨ψ|V†V|ψ⟩)|
≤ δ/2 + (∥V∥/2)(1/(1 − δ) − 1)
≤ δ/2 + ((1 + δ)/2)(δ/(1 − δ))
= δ/(1 − δ).  (4)

Since P(0|Et; M, θ)+P(1|Et; M, θ)=1 it follows that the same bound must apply for o=1 as well. Thus the result holds for any o as claimed.


This result, while straightforward, is significant because it allows the maximum errors in the mean of the posterior distribution to be propagated through the iterative phase estimation protocol. The ability to propagate these errors ultimately allows one to show that iterative phase estimation can be used to estimate eigenvalues from linear-combinations-of-unitaries methods.


B. Subsampling Hamiltonians

The case where terms are sampled uniformly from the Hamiltonian is now considered. Let the Hamiltonian be a sum of L simulable Hamiltonians H_ℓ: H = Σ_{ℓ=1}^L H_ℓ. Throughout, an eigenstate |ψ⟩ of H and its corresponding eigenenergy E are considered. From the original, one can construct a new Hamiltonian

H_est = (L/m) Σ_{i=1}^m H_{ℓ_i}  (5)

by uniformly sampling terms H_{ℓ_i} from the original Hamiltonian.
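A minimal sketch of this construction, using synthetic coefficients purely for illustration:

```python
import numpy as np

def subsample_hamiltonian(coeffs, m, rng):
    # Eq. (5): draw m term indices uniformly with replacement; the factor L/m
    # makes the sampled Hamiltonian an unbiased estimator of the original.
    L = len(coeffs)
    idx = rng.choice(L, size=m, replace=True)
    return idx, (L / m) * coeffs[idx]

rng = np.random.default_rng(3)
coeffs = rng.normal(scale=0.1, size=200)
idx, sampled = subsample_hamiltonian(coeffs, m=50, rng=rng)
print(coeffs.sum(), sampled.sum())   # the latter fluctuates around the former
```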


When one randomly sub-samples the Hamiltonian, errors are naturally introduced. The main question is less about how large such errors are than about how they impact the iterative phase estimation protocol. The following lemma states that the impact on the likelihood functions can be made arbitrarily small.


Lemma 2. Let ℓ_i be an indexed family of sequences mapping {1, . . . , m} → {1, . . . , L}, formed by uniformly sampling elements from {1, . . . , L} independently with replacement, and let {H_ℓ : ℓ = 1, . . . , L} be a corresponding family of Hamiltonians with H = Σ_ℓ H_ℓ. For |ψ⟩ an eigenstate of H such that H|ψ⟩ = E|ψ⟩ and

H_samp = (L/m) Σ_{k=1}^m H_{ℓ_i(k)}

with corresponding eigenstate H_samp|ψ_i⟩ = E_i|ψ_i⟩, one then has that the error in the likelihood function for phase estimation vanishes with high probability over H_samp in the limit of large m:

|P(o|Et; M, θ) − P(o|E_i t; M, θ)| ∈ O( (MtL/√m) √(𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩)) ).

Proof. Because the terms H_{ℓ_i(k)} are uniformly sampled, each set of terms {ℓ_i} is equally likely, and by linearity of expectation 𝔼[H_samp] = H, from which one knows that 𝔼_{i}[⟨ψ|(H − H_samp)|ψ⟩] = 0.

𝕍_{i}(⟨ψ|H_samp|ψ⟩) = (L²/m²) 𝕍_{i}(⟨ψ| Σ_{k=1}^m H_{ℓ_i(k)} |ψ⟩) = (L²/m²) Σ_{k=1}^m 𝕍_{i}(⟨ψ|H_{ℓ_i(k)}|ψ⟩).  (6)

Since the different ℓ_i are chosen uniformly at random, the result then follows from the observation that 𝕍_{i}(⟨ψ|H_{ℓ_i(k)}|ψ⟩) = 𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩).


From first-order perturbation theory, one has that the leading-order shift in any eigenvalue is O(⟨ψ|(H − H_samp)|ψ⟩) to within error O(L/√m). Thus the variance in this shift is, from Eq. (6),

𝕍_{i}(⟨ψ|(H − H_samp)|ψ⟩) = (L²/m) 𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩).  (7)

This further implies that the perturbed eigenstate |ψ_i⟩ has eigenvalue

H_samp|ψ_i⟩ = E|ψ_i⟩ + O( (L/√m) √(𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩)) )  (8)

with high probability over i from Markov's inequality. It then follows from Taylor's theorem and Eq. (1) that

|P(o|Et; M, θ) − P(o|E_i t; M, θ)| ∈ O( (MtL/√m) √(𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩)) ),  (9)

with high probability over i.


This result shows that if one samples the coefficients of the Hamiltonian that are to be included in the sub-sampled Hamiltonian uniformly, then one can make the error in the estimate of the Hamiltonian arbitrarily small. In this context, taking m → ∞ does not cause the cost of simulation to diverge (as it would for many sampling problems). This is because once every possible term is included in the Hamiltonian, there is no point in sub-sampling, and one may as well take H_samp to be H to eliminate the variance in the likelihood function that would arise from sub-sampling the Hamiltonian. In general, one needs to take m ∈ Ω((MtL)² 𝕍_ℓ(⟨ψ|H_ℓ|ψ⟩)) in order to guarantee that the error in the likelihood function is at most a constant. Thus, this shows that as any iterative phase estimation algorithm proceeds, (barring the problem of accidentally exciting a state due to perturbation) one will be able to find a good estimate of the eigenphase by taking m to scale quadratically with M.
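The 1/√m behaviour behind this argument is easy to check numerically. The sketch below uses synthetic term expectations ⟨ψ|H_ℓ|ψ⟩ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
term_expectations = rng.normal(0.0, 0.05, size=400)  # stand-ins for <psi|H_l|psi>
L = len(term_expectations)
for m in (10, 100, 1000):
    draws = [(L / m) * term_expectations[rng.choice(L, size=m)].sum()
             for _ in range(2000)]
    # Spread of the sampled expectation shrinks as 1/sqrt(m), cf. Eq. (7).
    print(m, np.std(draws))
```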


IV. Bayesian Phase Estimation Using Random Hamiltonians

Theorem 3. Let E be an event and let P(E|θ) and P′(E|θ) for θ ∈ [−π, π) be two likelihood functions such that max_θ(|P(E|θ) − P′(E|θ)|) ≤ Δ, and further assume that for prior P(θ) one has min(P(E), P′(E)) ≥ 2Δ. One then has that

|∫ θ (P(θ|E) − P′(θ|E)) dθ| ≤ 5πΔ/P(E),

and if

P(E|θ) = ∏_{j=1}^N P(E_j|θ)

with

|1 − P′(E_j|θ)/P(E_j|θ)| ≤ γ

for all j, then

|∫ θ (P(θ|E) − P′(θ|E)) dθ| ≤ 5π((1 + γ)^N − 1).

Proof. From the triangle inequality, one has that





|P(E)−P′(E)|=|∫P(θ)(P(E|θ)−P′(E|θ))dθ|≤Δ.  (10)


Thus it follows from the assumption that P′(E)≥2Δ that

|P(θ|E) − P′(θ|E)| = P(θ) |P(E|θ)/P(E) − P′(E|θ)/P′(E)|
≤ P(θ) (|P(E|θ)/P(E) − P′(E|θ)/P(E)| + |P′(E|θ)/P(E) − P′(E|θ)/P′(E)|)
≤ P(θ) (Δ/P(E) + P′(E|θ)/(P′(E) − Δ) − P′(E|θ)/P′(E))
≤ Δ (P(θ)/P(E) + 2 P′(E|θ) P(θ)/P′(E)²).  (11)

Thus one has that

|∫ θ (P(θ|E) − P′(θ|E)) dθ| ≤ Δ (⟨|θ|⟩_prior/P(E) + 2⟨|θ|⟩_posterior/P′(E))
≤ πΔ (1/P(E) + 2/P′(E))
≤ πΔ (1/P(E) + 2/(P(E) − Δ))
≤ 5πΔ/P(E).  (12)

Now, if one assumes that the likelihood function factorizes into N experiments, one can take

Δ/P(E|θ) = |∏_{j=1}^N P(E_j|θ) − ∏_{j=1}^N P′(E_j|θ)| / ∏_{j=1}^N P(E_j|θ) = |∏_{j=1}^N 1 − ∏_{j=1}^N P′(E_j|θ)/P(E_j|θ)|.  (13)

From the triangle inequality

|∏_{j=1}^N 1 − ∏_{j=1}^N P′(E_j|θ)/P(E_j|θ)|
≤ |∏_{j=1}^{N−1} 1 − ∏_{j=1}^{N−1} P′(E_j|θ)/P(E_j|θ)| + |1 − P′(E_N|θ)/P(E_N|θ)| (1 + γ)^{N−1}
≤ |∏_{j=1}^{N−1} 1 − ∏_{j=1}^{N−1} P′(E_j|θ)/P(E_j|θ)| + γ(1 + γ)^{N−1}.  (14)

Solving this recurrence relation gives

|∏_{j=1}^N 1 − ∏_{j=1}^N P′(E_j|θ)/P(E_j|θ)| ≤ (1 + γ)^N − 1.  (15)

Thus the result follows.


V. Shift in the Posterior Mean from Using Random Hamiltonians

In this section, the shift in the posterior mean of the estimated phase is analyzed assuming a random shift δ(ϕ) in the joint likelihood of all the experiments,






P′(o⃗|ϕ; M⃗, θ⃗) = P(o⃗|ϕ; M⃗, θ⃗) + δ(ϕ).  (16)


Here, P(o⃗|ϕ; M⃗, θ⃗) is the joint likelihood of a series of N outcomes o⃗ given a true phase ϕ and the experimental parameters M⃗ and θ⃗ for the original Hamiltonian. P′(o⃗|ϕ; M⃗, θ⃗) is the joint likelihood with a new random Hamiltonian in each experiment. A vector such as M⃗ collects the repetitions for each experiment performed in the series; M_i is the number of repetitions in the ith experiment.


First, one can work backwards from the assumption that the joint likelihood is shifted by some amount δ(ϕ), to determine an upper bound on the acceptable difference in ground state energies between the true and the random Hamiltonians. One can do this by working backwards from the shift in the joint likelihood of all experiments, to the shifts in the likelihoods of individual experiments, and finally to the corresponding tolerable differences between the ground state energies. Second, one can use this result to determine the shift in the posterior mean in terms of the differences in energies, as well as its standard deviation over the ensemble of randomly generated Hamiltonians.


A. Shifts in the Joint Likelihood

The random Hamiltonians for each experiment lead to a random shift in the joint likelihood of a series of outcomes






P′(o⃗|ϕ; M⃗, θ⃗) = P(o⃗|ϕ; M⃗, θ⃗) + δ(ϕ).  (17)


Assume that one would like to determine the maximum possible change in the posterior mean under this shifted likelihood. One can work under the assumption that the mean shift in the likelihood over the prior is at most |δ̄| ≤ P(o⃗)/2. The posterior is

P′(ϕ|o⃗; M⃗, θ⃗) = P′(o⃗|ϕ; M⃗, θ⃗) P(ϕ) / ∫ P′(o⃗|ϕ; M⃗, θ⃗) P(ϕ) dϕ
= [P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) + δ(ϕ) P(ϕ)] / ∫ (P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) + δ(ϕ) P(ϕ)) dϕ
= [P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) + δ(ϕ) P(ϕ)] / (P(o⃗) + δ̄).  (18)

One can make progress toward bounding the shift in the posterior by first bounding the shift in the joint likelihood in terms of the shifts in the likelihoods of the individual experiments, as follows.


Lemma 4. Let P(o_j|ϕ; M_j, θ_j) be the likelihood of outcome o_j in the jth experiment for the Hamiltonian H, and let P′(o_j|ϕ; M_j, θ_j) = P(o_j|ϕ; M_j, θ_j) + ϵ_j(ϕ) be the likelihood with the randomly generated Hamiltonian H_j. Assume that N max_j(|ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j)) < 1 and |ϵ_j(ϕ)| ≤ P(o_j|ϕ; M_j, θ_j)/2 for all experiments j. Then the mean shift in the joint likelihood of all N experiments,





|δ̄| = |∫ P(ϕ) (P′(o⃗|ϕ; M⃗, θ⃗) − P(o⃗|ϕ; M⃗, θ⃗)) dϕ|,


is at most

|δ̄| ≤ 2 Σ_{j=1}^N max_ϕ( |ϵ_j(ϕ)| / P(o_j|ϕ; M_j, θ_j) ) P(o⃗).

Proof. One can write the joint likelihood in terms of the shift ϵ_j(ϕ) to the likelihoods of each of the N experiments in the sequence, P′(o_j|ϕ; M_j, θ_j) = P(o_j|ϕ; M_j, θ_j) + ϵ_j(ϕ). The joint likelihood is P′(o⃗|ϕ; M⃗, θ⃗) = ∏_{j=1}^N (P(o_j|ϕ; M_j, θ_j) + ϵ_j(ϕ)), so

log P′(o⃗|ϕ; M⃗, θ⃗) = log( ∏_{j=1}^N (P(o_j|ϕ; M_j, θ_j) + ϵ_j(ϕ)) )
= Σ_{j=1}^N log P(o_j|ϕ; M_j, θ_j) + Σ_{j=1}^N log(1 + ϵ_j(ϕ)/P(o_j|ϕ; M_j, θ_j))
= log P(o⃗|ϕ; M⃗, θ⃗) + Σ_{j=1}^N log(1 + ϵ_j(ϕ)/P(o_j|ϕ; M_j, θ_j)).  (19)

This gives one the ratio of the shifted to the unshifted joint likelihood,

P′(o⃗|ϕ; M⃗, θ⃗) / P(o⃗|ϕ; M⃗, θ⃗) = exp[ Σ_{j=1}^N log(1 + ϵ_j(ϕ)/P(o_j|ϕ; M_j, θ_j)) ].  (20)

One can then linearize and simplify this using inequalities for the logarithm and exponential. By finding inequalities that either upper or lower bound both of these functions, one can place an upper bound on |δ(ϕ)| in terms of the unshifted likelihoods P(o⃗|ϕ; M⃗, θ⃗) and P(o_j|ϕ; M_j, θ_j) and the shifts in the single-experiment likelihoods ϵ_j(ϕ).


The inequalities that one can use to sandwich the ratio are

1 − |x| ≤ exp(x),  for x ≤ 0;
exp(x) ≤ 1 + 2|x|,  for x < 1;
−2|x| ≤ log(1 + x),  for |x| ≤ ½;
log(1 + x) ≤ |x|,  for x ∈ ℝ.  (21)

In order for all four inequalities to hold, one must have that N maxj(|ϵj(ϕ)|/P(oj|ϕ, Mj, θj))<1 (for the exponential inequalities) and |ϵj(ϕ)|≤P(oj|ϕ, Mj, θj)/2 for all j (for the logarithm inequalities). Using them to upper bound the ratio of the shifted to the unshifted likelihood,

P′(o⃗|ϕ; M⃗, θ⃗)/P(o⃗|ϕ; M⃗, θ⃗) = [P(o⃗|ϕ; M⃗, θ⃗) + δ(ϕ)]/P(o⃗|ϕ; M⃗, θ⃗)
≤ exp[ Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ]
≤ 1 + 2 Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j);  (22)

δ(ϕ) ≤ 2 P(o⃗|ϕ; M⃗, θ⃗) Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j).

On the other hand, using them to lower bound the ratio,

P′(o⃗|ϕ; M⃗, θ⃗)/P(o⃗|ϕ; M⃗, θ⃗) = [P(o⃗|ϕ; M⃗, θ⃗) + δ(ϕ)]/P(o⃗|ϕ; M⃗, θ⃗)
≥ exp[ −2 Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ]
≥ 1 − 2 Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j);  (23)

δ(ϕ) ≥ −2 P(o⃗|ϕ; M⃗, θ⃗) Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j).

The upper and lower bounds are identical up to sign. This allows one to combine them directly, so one has

|δ(ϕ)| ≤ 2 P(o⃗|ϕ; M⃗, θ⃗) Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j).  (24)

From this, one can find an upper bound on the mean shift over the prior, |δ̄|, since by the triangle inequality

|δ̄| = |∫ δ(ϕ) P(ϕ) dϕ|
≤ 2 Σ_{j=1}^N ∫ (|ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j)) P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) dϕ
≤ 2 Σ_{j=1}^N max_ϕ( |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ) P(o⃗).  (25)

So one has a bound on the shift in the joint likelihood in terms of the shifts in the likelihoods of individual experiments. These results allow one to bound the shift in the posterior mean in terms of the shifts in the likelihoods of the individual experiments ϵj(ϕ).


B. Shift in the Posterior Mean

One can use the assumption that |δ̄| ≤ P(o⃗)/2 to bound the shift in the posterior mean.


Lemma 5. Assuming, in addition to the assumptions of Lemma 4, that |δ̄| ≤ P(o⃗)/2, the difference between the posterior mean that one would see with the ideal likelihood function and that with the perturbed likelihood function is at most

|ϕ̄ − ϕ̄′| ≤ 8 max_ϕ( Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ) ⟨|ϕ|⟩_post.

Proof. One can approach the problem of bounding the difference between the posterior means by bounding the point-wise difference between the shifted posterior and the posterior with the original Hamiltonian,

P(ϕ|o⃗; M⃗, θ⃗) − P′(ϕ|o⃗; M⃗, θ⃗) = P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗) − [P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) + δ(ϕ) P(ϕ)]/(P(o⃗) + δ̄).  (26)

As a first step, one can place an upper bound on the reciprocal of the denominator of the shifted posterior, 1/(P(o⃗) + δ̄):

1/(P(o⃗) + δ̄) = (1/P(o⃗)) Σ_{k=0}^∞ (−δ̄/P(o⃗))^k
= 1/P(o⃗) − δ̄/P(o⃗)² + (δ̄²/P(o⃗)³) Σ_{k=0}^∞ (−δ̄/P(o⃗))^k
≤ 1/P(o⃗) + 2|δ̄|/P(o⃗)²
= (1 + 2|δ̄|/P(o⃗))/P(o⃗),  (27)

where in the two inequalities the assumption that |δ̄| ≤ P(o⃗)/2 was used. Using this, the point-wise difference between the posteriors is at most

|P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗) − [P(o⃗|ϕ; M⃗, θ⃗) P(ϕ) + δ(ϕ) P(ϕ)]/(P(o⃗) + δ̄)|
≤ |P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗) − P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/(P(o⃗) + δ̄)| + |δ(ϕ)| P(ϕ)/(P(o⃗) + δ̄)
≤ 2|δ̄| P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗)² + (|δ(ϕ)| P(ϕ)/P(o⃗))(1 + 2|δ̄|/P(o⃗))
≤ 2|δ̄| P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗)² + 2|δ(ϕ)| P(ϕ)/P(o⃗),  (28)

again using |δ̄| ≤ P(o⃗)/2. With this, one can bound the change in the posterior mean,

|ϕ̄ − ϕ̄′| ≤ ∫ |ϕ| |P(ϕ|o⃗; M⃗, θ⃗) − P′(ϕ|o⃗; M⃗, θ⃗)| dϕ
≤ (2/P(o⃗)) ∫ |ϕ| ( |δ̄| P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗) + |δ(ϕ)| P(ϕ) ) dϕ
≤ (2/P(o⃗)) ( ∫ |ϕ| |δ(ϕ)| P(ϕ) dϕ + ⟨|ϕ|⟩_post |δ̄| ).  (29)

Now, the bounds from Lemma 4 allow one to bound the shift in the posterior mean in terms of the shifts in the likelihoods of the individual experiments, ϵ_j(ϕ),

|ϕ̄ − ϕ̄′| ≤ (2/P(o⃗)) ( ∫ |ϕ| |δ(ϕ)| P(ϕ) dϕ + ⟨|ϕ|⟩_post |δ̄| )
≤ (2/P(o⃗)) ( 2 max_ϕ( Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ) P(o⃗) ∫ |ϕ| (P(o⃗|ϕ; M⃗, θ⃗) P(ϕ)/P(o⃗)) dϕ + ⟨|ϕ|⟩_post |δ̄| ),  (30)

where in the last step one multiplies and divides by P(o⃗). This is

|ϕ̄ − ϕ̄′| ≤ (2/P(o⃗)) ( 2 max_ϕ( Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ) P(o⃗) ⟨|ϕ|⟩_post + ⟨|ϕ|⟩_post |δ̄| )
≤ 8 max_ϕ( Σ_{j=1}^N |ϵ_j(ϕ)|/P(o_j|ϕ; M_j, θ_j) ) ⟨|ϕ|⟩_post.  (31)

C. Acceptable Shifts in the Phase

A further question is what the bound on the shift in the posterior mean is in terms of shifts in the phase.


Theorem 6. Assume the conditions of Lemma 5 hold and that, for all j and x ∈ [−π, π),

P(o_j|x; M_j, θ_j) = [1 + (−1)^{o_j} cos(M_j(θ_j − x))]/2,

for each of the N experiments. If the eigenphases used in phase estimation, {ϕ′_j : j = 1, . . . , N}, and the eigenphase ϕ of the true Hamiltonian obey |ϕ − ϕ′_j| ≤ Δϕ, and additionally P(o_j|ϕ; M_j, θ_j) ∈ Θ(1), then the shift in the posterior mean of the eigenphase that arises from inaccuracies in the eigenvalues of the intervening Hamiltonians obeys

|ϕ̄ − ϕ̄′| ≤ 8 max_ϕ( Σ_{j=1}^N M_j/P(o_j|ϕ; M_j, θ_j) ) ⟨|ϕ|⟩_post |Δϕ|.

Furthermore, if Σ_j M_j ∈ O(1/ϵ_ϕ) and P(o_j|ϕ; M_j, θ_j) ∈ Θ(1) for all j, then

|ϕ̄ − ϕ̄′| ∈ O(|Δϕ|/ϵ_ϕ).

Proof. One can express the shift in the posterior mean in terms of the shift in the phase applied to the ground state, Δϕ, by bounding ϵj(ϕ) in terms of it. Recall that the likelihood with the random Hamiltonian is






P′(o_j|ϕ; M_j, θ_j) = P(o_j|ϕ; M_j, θ_j) + ϵ_j(ϕ),  (32)


where the unshifted likelihood for the jth experiment is P(o_j|ϕ; M_j, θ_j) = ½(1 + (−1)^{o_j} cos(M_j(ϕ − θ_j))). Thus,





|ϵ_j(ϕ)| = ½|cos(M_j(ϕ + Δϕ − θ_j)) − cos(M_j(ϕ − θ_j))| ≤ M_j|Δϕ|,  (33)


using the bound |sin(x)| ≤ |x| on the derivative. In sum, one has that the error in the posterior mean is at most

|ϕ̄ − ϕ̄′| ≤ 8 max_ϕ( Σ_{j=1}^N M_j/P(o_j|ϕ; M_j, θ_j) ) ⟨|ϕ|⟩_post |Δϕ|.  (34)

The result then follows from the fact that the absolute value of the posterior mean is at most π if the branch [−π, π) is chosen.


VI. Shift in the Eigenphase with a New Random Hamiltonian in Each Repetition

One can reduce the variance in the applied phase by generating a different Hamiltonian in each repetition. However, this is not without its costs: it can be viewed either as leading to a failure probability in the evolution or more generally to an additional phase shift.


The reason this reduces the variance is somewhat complex to formalize mathematically: it comes in when one computes the variance of the applied phase. Instead of just ranging over the indices of a single Hamiltonian, the variance is over the indices of ⌈M_j⌉ Hamiltonians. Because of this, it only scales as M𝕍[ϕ_est] instead of M²𝕍[ϕ_est] as it usually would (there is an underlying variance in ϕ_est). The cost is that, by reducing the variance in the phase in this way, one causes an additional shift in the phase. If one did not do this across multiple steps, one would be adiabatic all the way with the same wrong Hamiltonian. Instead, this means that one is only approximately adiabatic, at the cost of the variance being lower by a factor of M_j. Since the additional shift is also linear in ⌈M_j⌉, this can lead to an improvement. It generally requires that λ_k ∝ ∥H_k − H_{k−1}∥ be small relative to the gap.


One then has a competition between the standard deviation scaling as M_j √(𝕍[ϕ_est]) or as √(M_j) √(𝕍[ϕ_est]), and this new shift, which is linear in M_j. So, depending on the gap, it might be hard to get a rigorous bound showing that this is better and that one should not just stick with the higher variance from a single Hamiltonian.


A. Failure Probability of the Algorithm

For phase estimation, one can reduce the variance of the estimate in the phase by randomizing within the repetitions for each experiment. For the jth experiment with M_j repetitions (recall that M_j is not necessarily an integer), one divides the evolution into ⌈M_j⌉ repetitions.


Within each repetition, one can randomly generate a new Hamiltonian H_k. Each Hamiltonian H_k has a slightly different ground state and energy from all the others.


The reason this reduces the variance in the estimated phase is that the phases between repetitions are uncorrelated. Whereas for the single-Hamiltonian case the variance in the phase exp(−iMϕ_est) is 𝕍[Mϕ_est] = M²𝕍[ϕ_est], when one simulates a different random Hamiltonian in each repetition (and estimates the sum of the phases, as exp(−i Σ_{k=1}^M ϕ_{k,est})), the variance is 𝕍[Σ_{k=1}^M ϕ_{k,est}] = Σ_{k=1}^M 𝕍[ϕ_{k,est}].
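This variance bookkeeping can be illustrated with a small Monte Carlo sketch; the per-repetition phase errors here are synthetic and purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
M, trials, sigma_eps = 16, 100000, 0.01

# One random Hamiltonian reused for all M repetitions: total error M * eps.
same = M * rng.normal(0.0, sigma_eps, size=trials)
# A fresh random Hamiltonian per repetition: sum of M independent errors.
fresh = rng.normal(0.0, sigma_eps, size=(trials, M)).sum(axis=1)

print(np.var(same) / sigma_eps ** 2)    # ~ M**2
print(np.var(fresh) / sigma_eps ** 2)   # ~ M
```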


By evolving under a different random instantiation of the Hamiltonian in each repetition, the variance in the phase is quadratically reduced; the only cost is that the algorithm now has either a failure probability (of leaving the ground state from repetition to repetition, e.g., in the transition from the ground state of H_{k−1} to the ground state of H_k) or an additional phase shift compared to the true sum of the ground state energies. The first case is simpler to analyze: it is shown in Lemma 7 that, provided the ratio of the perturbation to the gap is sufficiently small, the failure probability can be made arbitrarily small. One can do this by viewing the success probability of the algorithm as the probability of remaining in the ground state throughout the sequence of ⌈M_j⌉ random Hamiltonians. In the second case, one can prove in Lemma 8 a bound on the difference between eigenvalues if the state only leaves the ground space for short intervals during the evolution.


Lemma 7. Consider a sequence of Hamiltonians {H_k}_{k=1}^M, M > 1. Let γ be the minimum gap between the ground and first excited energies of any of the Hamiltonians, γ = min_k(E_1^k − E_0^k). Similarly, let λ = max_k ∥H_k − H_{k−1}∥ be the maximum difference between any two adjacent Hamiltonians in the sequence. The probability of leaving the ground state when transferring from H_1 to H_2 through to H_M in order is at most ϵ, for 0 < ϵ < 1, provided that

λ/γ < √(1 − exp( log(1 − ϵ)/(M − 1) )).

Proof. Let |ψ_i^k⟩ be the ith eigenstate of the Hamiltonian H_k and let E_i^k be the corresponding energy. Given that the algorithm begins in the ground state of H_1 (|ψ_0^1⟩), the probability of remaining in the ground state through all M steps is





|⟨ψ_0^M|ψ_0^{M−1}⟩ ⋯ ⟨ψ_0^2|ψ_0^1⟩|².  (35)


This is the probability of the algorithm staying in the ground state in every segment. One can simplify this expression by finding a bound for ∥|ψ_0^k⟩ − |ψ_0^{k−1}⟩∥². Let λ_k V_k = H_k − H_{k−1}, where one can choose λ_k such that ∥V_k∥ = 1 to simplify the proof. Treating λ_k V_k as a perturbation on H_{k−1}, the components of the shift in the ground state of H_{k−1} are bounded by the derivative

|∂_λ ⟨ψ_0^{k−1}|ψ_ℓ^k⟩| = |⟨ψ_ℓ^{k−1}|V_k|ψ_0^{k−1}⟩ / (E_ℓ^{k−1} − E_0^{k−1})|  (36)

multiplied by λ=max |λk|, where the maximization is over both k as well as perturbations for a given k. Using this,

|⟨ψ_ℓ^k|ψ_0^{k−1}⟩|² ≤ λ² |⟨ψ_ℓ^{k−1}|V_k|ψ_0^{k−1}⟩|² / (E_ℓ^{k−1} − E_0^{k−1})² ≤ λ² |⟨ψ_ℓ^{k−1}|V_k|ψ_0^{k−1}⟩|² / γ².  (37)

This allows one to write |ψ_0^{k+1}⟩ = (1 + δ_0)|ψ_0^k⟩ + Σ_{ℓ>0} δ_ℓ|ψ_ℓ^k⟩, where

|δ_ℓ| ≤ λ max_k |⟨ψ_ℓ^k|V_k|ψ_0^k⟩ / (E_0^k − E_ℓ^k)|.

Letting V_k|ψ_0^k⟩ = κ_k|ϕ_k⟩, where one can again choose κ_k such that |ϕ_k⟩ is normalized,

∥|ψ_0^k⟩ − |ψ_0^{k−1}⟩∥² = δ_0² + Σ_{ℓ>0} δ_ℓ²
≤ δ_0² + (λ²/γ²) Σ_{ℓ>0} |⟨ψ_ℓ^{k−1}|V_k|ψ_0^{k−1}⟩|²
= δ_0² + (λ²/γ²) κ_k² Σ_{ℓ>0} |⟨ψ_ℓ^k|ϕ_k⟩|²
≤ δ_0² + (λ²/γ²) κ_k².  (38)

One can solve for δ_0² in terms of Σ_{ℓ>0} δ_ℓ², since (1 + δ_0)² + Σ_{ℓ>0} δ_ℓ² = 1. Since √(1 − x) ≥ 1 − x for x ∈ [0, 1],

δ_0² = (1 − √(1 − Σ_{ℓ>0} δ_ℓ²))² ≤ (Σ_{ℓ>0} δ_ℓ²)² ≤ Σ_{ℓ>0} δ_ℓ²,  (39)

since Σ_{ℓ>0} δ_ℓ² ≤ 1. Finally, returning to ∥|ψ_0^k⟩ − |ψ_0^{k−1}⟩∥², since κ_k ≤ 1 (this is true because ∥V_k∥ = 1), the difference between the ground states of the two Hamiltonians is at most

∥|ψ_0^k⟩ − |ψ_0^{k−1}⟩∥² ≤ 2λ²/γ².  (40)

This means that the overlap probability between the ground states of any two adjacent Hamiltonians is

|⟨ψ_0^{k+1}|ψ_0^k⟩|² ≥ 1 − λ²/γ².

Across M segments (M−1 transitions), the success probability is at least

(1 − λ²/γ²)^{M−1}.

If one wishes for the failure probability to be at most some fixed 0<ϵ<1, one must have

(1 − λ²/γ²)^{M−1} > 1 − ϵ  ⟺  λ/γ < √(1 − exp( log(1 − ϵ)/(M − 1) )).  (41)
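Eq. (41) translates directly into a bound on the tolerable perturbation strength. A small illustrative sketch:

```python
import numpy as np

def max_lambda_over_gamma(eps, M):
    # Eq. (41): largest ratio lambda/gamma for which the probability of
    # leaving the ground state over M - 1 transitions stays below eps.
    return np.sqrt(1.0 - np.exp(np.log(1.0 - eps) / (M - 1)))

for M in (2, 16, 256):
    print(M, max_lambda_over_gamma(0.01, M))
```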




If one can only prepare the ground state |ψ_0⟩ of the original Hamiltonian H, the success probability has an additional factor |⟨ψ_0^1|ψ_0⟩|². In this case, one can apply Lemma 7 with ∥H − H_1∥ included in the maximization for λ. Further, since γ = min_k(E_1^k − E_0^k) ≥ E_1 − E_0 − 2λ, where E_1 − E_0 is the gap between the ground and first excited states of H, one needs

λ/(E_1 − E_0 − 2λ) < √(1 − exp( log(1 − ϵ)/M )).  (42)

Provided that this occurs, one stays in the ground state of each Hamiltonian throughout the simulation with probability 1−ϵ. In this case, the total accumulated phase is

⟨ψ_0^{⌈M_j⌉}| e^{−iH_{⌈M_j⌉}Δt} |ψ_0^{⌈M_j⌉−1}⟩ ⋯ ⟨ψ_0^2| e^{−iH_2 Δt} |ψ_0^1⟩ ⟨ψ_0^1| e^{−iH_1 Δt} |ψ_0⟩
= ⟨ψ_0^{⌈M_j⌉}|ψ_0^{⌈M_j⌉−1}⟩ ⋯ ⟨ψ_0^2|ψ_0^1⟩ ⟨ψ_0^1|ψ_0⟩ exp(−i Σ_{k=1}^{⌈M_j⌉} E_0^k Δt),  (43)

where Δt = M_j t/⌈M_j⌉.

B. Phase Shifts Due to Hamiltonian Errors

One can generalize the analysis of the difference in the phase by determining the difference between the desired (adiabatic) unitary and the true one. Evolving under M random Hamiltonians in sequence, the unitary applied for each new Hamiltonian Hk is

U_k = exp(−iH_k Δt) = Σ_ℓ |ψ_ℓ^k⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt},  (44)

while the adiabatic unitary one would ideally apply is

U_{k,ad} = Σ_ℓ |ψ_ℓ^{k+1}⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt}.  (45)

The difference between the two is that the true time evolution U_k under H_k applies phases to the eigenstates of H_k, while the adiabatic unitary U_{k,ad} applies the eigenphase and then maps each eigenstate of H_k to the corresponding eigenstate of H_{k+1}. This means that if the system begins in the ground state of H_1, the phase applied to it by the sequence U_{⌈M_j⌉,ad} U_{⌈M_j⌉−1,ad} ⋯ U_{2,ad} U_{1,ad} is proportional to the sum of the ground state energies of each Hamiltonian in that sequence. By comparison, U_{⌈M_j⌉} U_{⌈M_j⌉−1} ⋯ U_2 U_1 will include contributions from many different eigenstates of the different Hamiltonians H_k.


One can bound the difference between the unitaries Uk and Uk,ad as follows.


Lemma 8. Let P_0^k be the projector onto the ground state of H_k, |ψ_0^k⟩, and let the assumptions of Lemma 7 hold. The difference between the eigenvalues of

U_k P_0^k = exp(−iH_k Δt) P_0^k = Σ_ℓ |ψ_ℓ^k⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt} P_0^k

and

U_{k,ad} P_0^k = |ψ_0^{k+1}⟩⟨ψ_0^k| e^{−iE_0^k Δt} P_0^k,

where Δt is the simulation time, is at most

∥(U_k − U_{k,ad}) P_0^k∥ ≤ √( 2λ² / (γ − 2λ)² ).

Proof. First, one can expand the true unitary using the resolution of the identity Σ_p |ψ_p^{k+1}⟩⟨ψ_p^{k+1}|, the eigenstates of the next Hamiltonian, H_{k+1}:

U_k = Σ_{p,ℓ} |ψ_p^{k+1}⟩⟨ψ_p^{k+1}|ψ_ℓ^k⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt}.  (46)

Let Δ_{pℓ} = ⟨ψ_p^{k+1}|ψ_ℓ^k⟩ for p ≠ ℓ, and 1 + Δ_{pp} = ⟨ψ_p^{k+1}|ψ_p^k⟩ when p = ℓ. In a sense, one is writing the new eigenstate |ψ_p^{k+1}⟩ as a slight shift from the state |ψ_p^k⟩; this is the reason that one chooses ⟨ψ_p^{k+1}|ψ_p^k⟩ = 1 + Δ_{pp}. Using this definition, one can continue to simplify U_k as

U_k = Σ_p (1 + Δ_{pp}) |ψ_p^{k+1}⟩⟨ψ_p^k| e^{−iE_p^k Δt} + Σ_{p≠ℓ} Δ_{pℓ} |ψ_p^{k+1}⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt}.  (47)

One is now well-positioned to bound ∥(U_k − U_{k,ad})P_0^k∥, noting that U_{k,ad} exactly equals the '1' part of the first sum in U_k:

(U_k − U_{k,ad}) P_0^k = Σ_{p,ℓ} Δ_{pℓ} |ψ_p^{k+1}⟩⟨ψ_ℓ^k| e^{−iE_ℓ^k Δt} |ψ_0^k⟩⟨ψ_0^k|;
∥(U_k − U_{k,ad}) P_0^k∥ = max_{|ψ⟩} ∥Σ_p Δ_{p0} |ψ_p^{k+1}⟩ e^{−iE_0^k Δt} ⟨ψ_0^k|ψ⟩∥ = ∥Σ_p Δ_{p0} |ψ_p^{k+1}⟩ e^{−iE_0^k Δt}∥ ≤ √( Σ_p |Δ_{p0}|² ).  (48)

The final step in bounding ∥(U_k − U_{k,ad})P_0^k∥ is to bound |Δ_{pℓ}| = |⟨ψ_p^{k+1}|ψ_ℓ^k⟩|, similarly to how one bounded δ_ℓ. For p ≠ ℓ, Δ_{pℓ} is given by

|Δ_{pℓ}|² = |⟨ψ_p^{k+1}|ψ_ℓ^k⟩|² ≤ λ² |⟨ψ_p^k|V_k|ψ_ℓ^k⟩|² / (E_p^k − E_ℓ^k)².  (49)

So, as with the bounds on |δ_0|² and Σ_{ℓ>0}|δ_ℓ|² in Lemma 7, Σ_p |Δ_{p0}|², and hence ∥(U_k − U_{k,ad})P_0^k∥², is upper bounded by

Σ_p |Δ_{p0}|² = Σ_p |⟨ψ_p^{k+1}|ψ_0^k⟩|² ≤ Σ_p λ² |⟨ψ_p^k|V_k|ψ_0^k⟩|² / (E_p^k − E_0^k)² ≤ 2λ² / (γ − 2λ)²,  (50)

which completes the proof.


Theorem 9. Consider a sequence of Hamiltonians {H_k}_{k=1}^M, M > 1. Let γ be the minimum gap between the ground and first excited energies of any of the Hamiltonians, γ = min_k(E_1^k − E_0^k). Similarly, let λ = max_k ∥H_k − H_{k−1}∥ be the maximum difference between any two adjacent Hamiltonians in the sequence. The maximum error in the estimated eigenphase of the unitary formed by the products of these M Hamiltonians is at most

|ϕ_est − ϕ_true| ≤ √2 M λ² / (γ − 2λ)²,

with a probability of failure of at most ϵ provided that

λ/γ < √(1 − exp( log(1 − ϵ)/(M − 1) )).

Proof. Lemma 8 gives the difference between eigenvalues of UkP0k and Uk,adP0k. Across the entire sequence, one has

∥U_M P_0^M ⋯ U_k P_0^k ⋯ U_1 P_0^1 − U_{M,ad} P_0^M ⋯ U_{k,ad} P_0^k ⋯ U_{1,ad} P_0^1∥ ≤ √2 M λ² / (γ − 2λ)².  (51)

This is the maximum possible difference between the accumulated phases for the ideal and actual sequences, assuming the system leaves the ground state for at most one repetition at a time.


Under the assumptions of Lemma 7, the probability of leaving the groundstate as part of a Landau-Zener process, instigated by the measurement at adjacent values of the Hamiltonians, is at most ϵ across all of the projections if

λ/γ < √(1 − exp( log(1 − ϵ)/(M − 1) )),  (52)

thus the result follows trivially from these two results.


VII. Importance Sampling

The fundamental idea behind the example approach to decimation of a Hamiltonian is importance sampling. This approach has already seen great use in coalescing, but one can use it slightly differently here. The idea behind importance sampling is to reduce the variance of the mean of a quantity by reweighting the sum. Specifically, one can write the mean of N numbers F(j) as

(1/N) Σ_j F(j) = Σ_j f(j) [ F(j) / (N f(j)) ],  (53)

where f(j) is the importance of a given term. This shows that one can view the initial unweighted average as the average of the reweighted quantity F(j)/(f(j)N) under the distribution f. While this does not have an impact on the mean of F, it can dramatically reduce the sample variance of the mean, and thus it is widely used in statistics to provide more accurate estimates of means. The optimal importance function to take in these cases is f(j) ∝ |F(j)|, and a straightforward calculation shows that this optimal variance is

𝕍_{f_opt}(F) = (𝔼(|F|))² − (𝔼(F))².  (54)

The optimal variance in (54) is in fact zero if the sign of the numbers is constant. While this may seem surprising, it becomes less mysterious when one notes that in order to compute the optimal importance function one needs the ensemble mean that one would like to estimate. This would defeat the purpose of importance sampling in most cases. Thus, if one wants to glean an advantage from importance sampling for Hamiltonian simulation, it is important to show that one can use it even with an inexact importance function that can be, for example, computed efficiently using a classical computer.
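This robustness is easy to see numerically. The sketch below estimates a mean by importance sampling with a deliberately inexact importance function; the data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.exponential(size=500)              # positive values: f ~ |F| is optimal
F_tilde = F * (1 + 0.2 * rng.uniform(-1, 1, size=500))  # inexact surrogate
f = np.abs(F_tilde) / np.abs(F_tilde).sum()

idx = rng.choice(len(F), size=2000, p=f)
estimate = np.mean(F[idx] / (len(F) * f[idx]))  # reweighted mean, cf. Eq. (53)
print(F.mean(), estimate)                  # close despite the inexact f
```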


It is now shown below how this robustness holds.


Lemma 10. Let F: ℤ_N ↦ ℝ, with 𝔼(F) = N^{−1} Σ_{j=0}^{N−1} F(j), be an unknown function that can be sampled from, and let F̃: ℤ_N ↦ ℝ be a known function such that for all j, |F̃(j)| − |F(j)| = δ_j with |δ_j| ≤ |F(j)|/2. If importance sampling is used with the importance function f(j) = |F̃(j)|/Σ_k |F̃(k)|, then the variance obeys

𝕍_f(F) = (1/N²) Σ_j F(j)²/f(j) − (𝔼(F))² ≤ (4/N²)(Σ_k |δ_k|)(Σ_j |F(j)|) + 𝕍_{f_opt}(F).

Proof. The proof is a straightforward exercise in the triangle inequality once one uses the fact that |δ_j| ≤ |F(j)|/2 and the fact that 1/(1 − |x|) ≤ 1 + 2|x| for all x ∈ [−1/2, 1/2]:















$$\begin{aligned}
\mathbb{V}_f(F) &= \frac{1}{N^2}\left(\sum_k |F(k)| + \delta_k\right)\left(\sum_j \frac{F^2(j)}{|F(j)| + \delta_j}\right) - (\mathbb{E}(F))^2\\
&\le \frac{1}{N^2}\left(\sum_k |F(k)| + \delta_k\right)\left(\sum_j \frac{F^2(j)}{|F(j)| - |\delta_j|}\right) - (\mathbb{E}(F))^2\\
&\le \frac{1}{N^2}\left(\sum_k |F(k)| + |\delta_k|\right)\left(\sum_j |F(j)| + 2|\delta_j|\right) - (\mathbb{E}(F))^2\\
&= \frac{1}{N^2}\left(\sum_k |\delta_k|\right)\left(\sum_j |F(j)| + 2|\delta_j|\right) + \frac{1}{N^2}\left(\sum_k |F(k)|\right)\left(2\sum_j |\delta_j|\right) + (\mathbb{E}(|F|))^2 - (\mathbb{E}(F))^2\\
&\le \frac{4}{N^2}\left(\sum_k |\delta_k|\right)\left(\sum_j |F(j)|\right) + \mathbb{V}_{f_{\mathrm{opt}}}(F). \qquad (55)
\end{aligned}$$







This bound is tight in the sense that as max_k |δ_k| → 0 the upper bound on the variance converges to (𝔼(|F|))² − (𝔼(F))², which is the optimal attainable variance.
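The robustness in Lemma 10 can also be checked numerically. The following sketch, supplied here for illustration with arbitrary test data, perturbs the importance function within the allowed range |δ_j| ≤ |F(j)|/2 and compares the observed estimator variance against the bound in (55):

```python
import numpy as np

rng = np.random.default_rng(11)
N = 1000
F = rng.exponential(size=N)                 # positive test data, so V_f_opt = 0
delta = rng.uniform(-0.5, 0.5, size=N) * F  # perturbation with |delta_j| <= |F(j)|/2
f = (F + delta) / (F + delta).sum()         # inexact importance function f(j) ∝ |F~(j)|

def estimate(n_samples):
    j = rng.choice(N, size=n_samples, p=f)
    return (F[j] / (N * f[j])).mean()

observed = np.var([estimate(100) for _ in range(2000)])
# Eq. (55) bounds the single-sample variance; dividing by the 100 samples
# per estimate gives the bound on the variance of the estimated mean.
bound = (4 / N**2) * np.abs(delta).sum() * F.sum() / 100
print(f"observed variance {observed:.2e} <= bound {bound:.2e}")
```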


In applications such as quantum chemistry simulation, what one wants to do is minimize the variance in Lemma 2. This minimum variance could be attained by choosing ƒ(j)∝|⟨ψ|H_j|ψ⟩|. However, the task of computing such a functional is at least as hard as solving the eigenvalue estimation problem that one wants to tackle. The natural approach is to take inspiration from Lemma 10 and instead choose ƒ(j)∝|⟨ψ̃|H_j|ψ̃⟩|, where |ψ̃⟩ is an efficiently computable ansatz state such as a CISD state. In practice, however, the importance of a given term may not be entirely predicted by the ansatz, in which case a hedging strategy can be used wherein, for some ρ ∈ [0, 1], ƒ(j)∝(1−ρ)|⟨ψ̃|H_j|ψ̃⟩|+ρ∥H_j∥. This strategy allows one to interpolate smoothly between importance dictated by the expectation value in the surrogate for the groundstate and importance dictated by the magnitude of the Hamiltonian terms, as in the sketch below.
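A sketch of how such a hedged importance function could drive term subsampling follows. This is our illustration, not code from the disclosure: `sample_hamiltonian` and its placeholder coefficients and stand-in ansatz expectations are assumptions for exposition, and the term norm ∥H_j∥ is taken to be the coefficient magnitude, as it would be for Pauli strings.

```python
import numpy as np

def hedged_importance(ansatz_expectations, term_norms, rho):
    """f(j) ∝ (1 - rho)*|<ψ~|H_j|ψ~>| + rho*||H_j||, normalized to a distribution."""
    raw = (1.0 - rho) * np.abs(ansatz_expectations) + rho * np.asarray(term_norms)
    return raw / raw.sum()

def sample_hamiltonian(coeffs, ansatz_expectations, n_samples, rho, rng):
    """Subsample Hamiltonian terms with the hedged importance function and
    reweight each kept coefficient by count/(n_samples * f(j)) so that the
    sampled Hamiltonian is an unbiased estimate of the full one."""
    coeffs = np.asarray(coeffs, dtype=float)
    f = hedged_importance(ansatz_expectations, np.abs(coeffs), rho)
    counts = rng.multinomial(n_samples, f)
    kept = np.nonzero(counts)[0]
    new_coeffs = coeffs[kept] * counts[kept] / (n_samples * f[kept])
    return kept, new_coeffs

rng = np.random.default_rng(3)
coeffs = rng.normal(size=50)                   # placeholder term coefficients
expectations = coeffs * rng.uniform(0, 1, 50)  # placeholder CISD-like expectations
kept, new_coeffs = sample_hamiltonian(coeffs, expectations, n_samples=30, rho=2e-5, rng=rng)
print(f"{len(kept)} of 50 terms kept")
```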


VIII. Numerical Results

This disclosure has shown that it is possible to use iterative phase estimation with a randomized Hamiltonian. To show how effective example embodiments can be, two diatomic molecules, dilithium and hydrogen chloride, are considered. In both cases, the molecules are prepared in a minimal STO-6G basis, and CISD states are used that are found by variationally minimizing the groundstate energy over all states within 2 excitations of the Hartree-Fock state. One can then randomly sample Hamiltonian terms and examine several quantities of interest, including the average groundstate energy, the variance in the groundstate energies, and the average number of terms in the Hamiltonian. Interestingly, one can also look at the number of qubits present in the model. This can differ between runs because some randomly sampled Hamiltonians will select only terms that leave part of the system uncoupled from the remainder. In these cases, the number of qubits required to represent the state can in fact be lower than the total number that would ordinarily be expected.


One can see in FIGS. 2-5 and FIGS. 6-9 that the estimates of the ground state energy vary radically with the degree of hedging used. It is found that if ρ=1 then, for both molecules, one has a very large variance in the groundstate energy, as expected, since importance sampling has very little impact in that case. Conversely, if one takes ρ=0, one maximally privileges the importance of Hamiltonian terms according to the CISD state, which leads to very concise models but with shifts in groundstate energies on the order of 1 Ha even for 10^7 randomly selected terms (some of which may be duplicates). If one instead uses a modest amount of hedging (ρ=2×10^−5), the shift in the ground state energy is minimized, assuming that a shift of 10% of chemical accuracy, or 0.1 mHa, is an acceptable Hamiltonian truncation error. For dilithium, this represents a 30% reduction in the number of terms in the Hamiltonian, whereas for HCl it reduces the number of terms in the Hamiltonian by a factor of 3. Since the cost of a Trotter-Suzuki simulation of chemistry scales super-linearly with the number of terms in the Hamiltonian, this constitutes a substantial reduction in the complexity.


One can also note that for the case of dilithium, the number of qubits needed to perform the simulation varied over the different runs. In contrast, hydrogen chloride showed no such behavior. This difference arises from the fact that dilithium has six electrons residing in 20 spin orbitals, whereas hydrogen chloride has eighteen electrons in a Fock space that also consists of 20 spin orbitals. As a result, nearly every spin orbital is relevant for hydrogen chloride, which explains why the number of spin orbitals needed to express dilithium to a fixed degree of precision changes whereas it does not for HCl. This illustrates that embodiments of the disclosed randomization procedure can be used to help select an active space for a simulation on the fly as the precision needed in the Hamiltonian increases through a phase estimation procedure.



FIGS. 2-9 comprise graphs 200, 300, 400, 500, 600, 700, 800, and 900 that show the average ground energy shift (compared to the unsampled Hamiltonian), the variance in ground energies over sampled Hamiltonians, the average qubit requirement, and the average number of terms in sampled Hamiltonians for Li2 (FIGS. 2-5) and HCl (FIGS. 6-9), as a function of the number of samples taken to generate the Hamiltonian and the value of the parameter ρ. A term Hα in the Hamiltonian is sampled with probability ρα ∝ (1−ρ)⟨Hα⟩ + ρ∥Hα∥, where the expectation value is taken with the CISD state.


IX. Example Embodiments

In this section, example methods for performing the disclosed technology are disclosed. The particular embodiments described should not be construed as limiting, as the disclosed method acts can be performed alone, in different orders, or at least partially simultaneously with one another. Further, any of the disclosed methods or method acts can be performed with any other methods or method acts disclosed herein.



FIG. 10 is a flow chart 1000 showing an example method for implementing an importance sampling simulation method according to an embodiment of the disclosed technology.



FIG. 11 is a flow chart 1100 showing an example method for performing a quantum simulation using adaptive Hamiltonian randomization.



FIG. 16 is a flow chart 1600 showing an example method for performing a quantum simulation using adaptive Hamiltonian randomization.


At 1610, a Hamiltonian to be computed by the quantum computer device is inputted.


At 1612, a number of Hamiltonian terms in the Hamiltonian is reduced using randomization within a phase estimation algorithm.


At 1614, a quantum circuit description for the Hamiltonian is output with the reduced number of Hamiltonian terms.


In certain embodiments, the reducing comprises selecting one or more random Hamiltonian terms based on an importance function; reweighting the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generating the quantum circuit description using the reweighted random terms. Some embodiments further comprise implementing, in the quantum computing device, a quantum circuit as described by the quantum circuit description; and measuring a quantum state of the quantum circuit. Still further embodiments comprise re-performing the method based on results from the measuring (e.g., using an iterative process). In some embodiments, the iterative process comprises computing a desired precision value for the Hamiltonian; computing a standard deviation for the Hamiltonian based on results from the implementing and measuring; and comparing the desired precision value to the standard deviation (an illustrative sketch of such an iterative loop is given below). Some embodiments further comprise changing an order of the Hamiltonian terms based on the reducing. Certain embodiments further comprise applying importance functions to terms of the Hamiltonian in a ground state; and selecting one or more random Hamiltonian terms based at least in part on the importance functions. Some embodiments comprise using importance sampling based on a variational approximation to a groundstate. Certain embodiments further comprise using adaptive Bayesian methods to quantify a precision needed for the Hamiltonian given an estimate of the current uncertainty in an eigenvalue.
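One way such an iterative process might look in code is sketched below. This is an illustrative skeleton only, not the disclosed method itself: `sample_terms` and `run_phase_estimation` are hypothetical callbacks standing in for the term-subsampling and phase estimation steps, and the doubling schedule is an arbitrary choice.

```python
import numpy as np

def adaptive_randomized_estimation(sample_terms, run_phase_estimation,
                                   target_precision, n_hamiltonians=20):
    """Draw several randomized Hamiltonians, run phase estimation on each,
    and enlarge the term-sampling budget until the standard deviation of the
    phase estimates falls below the desired precision."""
    n_samples = 100
    while True:
        phases = [run_phase_estimation(sample_terms(n_samples))
                  for _ in range(n_hamiltonians)]
        if np.std(phases) <= target_precision:
            return float(np.mean(phases)), n_samples
        n_samples *= 2  # more sampled terms -> more precise Hamiltonian
```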


Other embodiments comprise one or more computer-readable media storing computer-executable instructions, which when executed by a computer cause the computer to perform a method comprising inputting a Hamiltonian to be computed by the quantum computer device; reducing a number of Hamiltonian terms in the Hamiltonian using randomization within a phase estimation algorithm; and outputting a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms.


The method can comprise selecting one or more random Hamiltonian terms based on an importance function; reweighting the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generating the quantum circuit description using the reweighted random terms. The method can further comprise causing a quantum circuit as described by the quantum circuit description to be implemented by the quantum computing device; and measuring a quantum state of the quantum circuit. The method can further comprise computing a desired precision value for the Hamiltonian; computing a standard deviation for the Hamiltonian based on results from the implementing and measuring; comparing the desired precision value to the standard deviation; and re-performing the reducing based on a result of the comparing.


Another embodiment is a system, comprising a quantum computing system; and a classical computing system configured to communicate with and control the quantum computing system. In such embodiments, the classical computing system is further configured to: input a Hamiltonian to be computed by the quantum computer device; reduce a number of Hamiltonian terms in the Hamiltonian using randomization within an iterative phase estimation algorithm; and output a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms. The classical computing system can be further configured to: select one or more random Hamiltonian terms based on an importance function; reweight the selected random Hamiltonian terms based on an importance of each of the selected Hamiltonian random terms; and generate the quantum circuit description using the reweighted random terms. The classical computing system can be further configured to cause a quantum circuit as described by the quantum circuit description to be implemented by the quantum computing device; and measure a quantum state of the quantum circuit. Still further, the classical computing system can be further configured to compute a desired precision value for the Hamiltonian; compute a standard deviation for the Hamiltonian based on results from the implementing and measuring; compare the desired precision value to the standard deviation; and re-perform the reducing based on a result of the comparing. In still further embodiments, the classical computing system can be further configured such that, as part of the randomization, one or more unnecessary qubits are omitted.


X. Example Computing Environments


FIG. 12 illustrates a generalized example of a suitable classical computing environment 1200 in which aspects of the described embodiments can be implemented. The computing environment 1200 is not intended to suggest any limitation as to the scope of use or functionality of the disclosed technology, as the techniques and tools described herein can be implemented in diverse general-purpose or special-purpose environments that have computing hardware.


With reference to FIG. 12, the computing environment 1200 includes at least one processing device 1210 and memory 1220. In FIG. 12, this most basic configuration 1230 is included within a dashed line. The processing device 1210 (e.g., a CPU or microprocessor) executes computer-executable instructions. In a multi-processing system, multiple processing devices execute computer-executable instructions to increase processing power. The memory 1220 may be volatile memory (e.g., registers, cache, RAM, DRAM, SRAM), non-volatile memory (e.g., ROM, EEPROM, flash memory), or some combination of the two. The memory 1220 stores software 1280 implementing tools for performing any of the disclosed techniques for operating a quantum computer to perform Hamiltonian randomization as described herein. The memory 1220 can also store software 1280 for synthesizing, generating, or compiling quantum circuits for performing any of the disclosed techniques.


The computing environment can have additional features. For example, the computing environment 1200 includes storage 1240, one or more input devices 1250, one or more output devices 1260, and one or more communication connections 1270. An interconnection mechanism (not shown), such as a bus, controller, or network, interconnects the components of the computing environment 1200. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1200, and coordinates activities of the components of the computing environment 1200.


The storage 1240 can be removable or non-removable, and includes one or more magnetic disks (e.g., hard drives), solid state drives (e.g., flash drives), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other tangible non-volatile storage medium which can be used to store information and which can be accessed within the computing environment 1200. The storage 1240 can also store instructions for the software 1280 implementing any of the disclosed techniques. The storage 1240 can also store instructions for the software 1280 for generating and/or synthesizing any of the described techniques, systems, or quantum circuits.


The input device(s) 1250 can be a touch input device such as a keyboard, touchscreen, mouse, pen, trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1200. The output device(s) 1260 can be a display device (e.g., a computer monitor, laptop display, smartphone display, tablet display, netbook display, or touchscreen), printer, speaker, or another device that provides output from the computing environment 1200.


The communication connection(s) 1270 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.


As noted, the various methods and techniques for performing Hamiltonian randomization, for controlling a quantum computing device, to perform circuit design or compilation/synthesis as disclosed herein can be described in the general context of computer-readable instructions stored on one or more computer-readable media. Computer-readable media are any available media (e.g., memory or storage device) that can be accessed within or by a computing environment. Computer-readable media include tangible computer-readable memory or storage devices, such as memory 1220 and/or storage 1240, and do not include propagating carrier waves or signals per se (tangible computer-readable memory or storage devices do not include propagating carrier waves or signals per se).


Various embodiments of the methods disclosed herein can also be described in the general context of computer-executable instructions (such as those included in program modules) being executed in a computing environment by a processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, and so on, that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.


An example of a possible network topology 1300 (e.g., a client-server network) for implementing a system according to the disclosed technology is depicted in FIG. 13. Networked computing device 1320 can be, for example, a computer running a browser or other software connected to a network 1312. The computing device 1320 can have a computer architecture as shown in FIG. 12 and discussed above. The computing device 1320 is not limited to a traditional personal computer but can comprise other computing hardware configured to connect to and communicate with a network 1312 (e.g., smart phones, laptop computers, tablet computers, or other mobile computing devices, servers, network devices, dedicated devices, and the like). Further, the computing device 1320 can comprise an FPGA or other programmable logic device. In the illustrated embodiment, the computing device 1320 is configured to communicate with a computing device 1330 (e.g., a remote server, such as a server in a cloud computing environment) via a network 1312. In the illustrated embodiment, the computing device 1320 is configured to transmit input data to the computing device 1330, and the computing device 1330 is configured to implement a technique for controlling a quantum computing device to perform any of the disclosed embodiments and/or a circuit generation/compilation/synthesis technique for generating quantum circuits for performing any of the techniques disclosed herein. The computing device 1330 can output results to the computing device 1320. Any of the data received from the computing device 1330 can be stored or displayed on the computing device 1320 (e.g., displayed as data on a graphical user interface or web page at the computing devices 1320). In the illustrated embodiment, the illustrated network 1312 can be implemented as a Local Area Network (LAN) using wired networking (e.g., the Ethernet IEEE standard 802.3 or other appropriate standard) or wireless networking (e.g. one of the IEEE standards 802.11a, 802.11b, 802.11g, or 802.11n or other appropriate standard). Alternatively, at least part of the network 1312 can be the Internet or a similar public network and operate using an appropriate protocol (e.g., the HTTP protocol).


Another example of a possible network topology 1400 (e.g., a distributed computing environment) for implementing a system according to the disclosed technology is depicted in FIG. 14. Networked computing device 1420 can be, for example, a computer running a browser or other software connected to a network 1412. The computing device 1420 can have a computer architecture as shown in FIG. 12 and discussed above. In the illustrated embodiment, the computing device 1420 is configured to communicate with multiple computing devices 1430, 1431, 1432 (e.g., remote servers or other distributed computing devices, such as one or more servers in a cloud computing environment) via the network 1412. In the illustrated embodiment, each of the computing devices 1430, 1431, 1432 in the computing environment 1400 is used to perform at least a portion of the Hamiltonian randomization technique and/or at least a portion of the technique for controlling a quantum computing device to perform any of the disclosed embodiments and/or a circuit generation/compilation/synthesis technique for generating quantum circuits for performing any of the techniques disclosed herein. In other words, the computing devices 1430, 1431, 1432 form a distributed computing environment in which aspects of the techniques for performing any of the techniques as disclosed herein and/or quantum circuit generation/compilation/synthesis processes are shared across multiple computing devices. The computing device 1420 is configured to transmit input data to the computing devices 1430, 1431, 1432, which are configured to distributively implement such a process, including performance of any of the disclosed methods or creation of any of the disclosed circuits, and to provide results to the computing device 1420. Any of the data received from the computing devices 1430, 1431, 1432 can be stored or displayed on the computing device 1420 (e.g., displayed as data on a graphical user interface or web page at the computing device 1420). The illustrated network 1412 can be any of the networks discussed above with respect to FIG. 13.


With reference to FIG. 15, an exemplary system for implementing the disclosed technology includes computing environment 1500. In computing environment 1500, a compiled quantum computer circuit description (including quantum circuits for performing any of the disclosed techniques as disclosed herein) can be used to program (or configure) one or more quantum processing units such that the quantum processing unit(s) implement the circuit described by the quantum computer circuit description.


The environment 1500 includes one or more quantum processing units 1502 and one or more readout device(s) 1508. The quantum processing unit(s) execute quantum circuits that are precompiled and described by the quantum computer circuit description. The quantum processing unit(s) can be one or more of, but are not limited to: (a) a superconducting quantum computer; (b) an ion trap quantum computer; (c) a fault-tolerant architecture for quantum computing; and/or (d) a topological quantum architecture (e.g., a topological quantum computing device using Majorana zero modes). The precompiled quantum circuits, including any of the disclosed circuits, can be sent into (or otherwise applied to) the quantum processing unit(s) via control lines 1506 at the control of quantum processor controller 1520. The quantum processor controller (QP controller) 1520 can operate in conjunction with a classical processor 1510 (e.g., having an architecture as described above with respect to FIG. 12) to implement the desired quantum computing process. In the illustrated example, the QP controller 1520 further implements the desired quantum computing process via one or more QP subcontrollers 1504 that are specially adapted to control a corresponding one of the quantum processor(s) 1502. For instance, in one example, the quantum controller 1520 facilitates implementation of the compiled quantum circuit by sending instructions to one or more memories (e.g., lower-temperature memories), which then pass the instructions to low-temperature control unit(s) (e.g., QP subcontroller(s) 1504) that transmit, for instance, pulse sequences representing the gates to the quantum processing unit(s) 1502 for implementation. In other examples, the QP controller(s) 1520 and QP subcontroller(s) 1504 operate to provide appropriate magnetic fields, encoded operations, or other such control signals to the quantum processor(s) to implement the operations of the compiled quantum computer circuit description. The quantum controller(s) can further interact with readout devices 1508 to help control and implement the desired quantum computing process (e.g., by reading or measuring out data results from the quantum processing units once available, etc.)


With reference to FIG. 15, compilation is the process of translating a high-level description of a quantum algorithm into a quantum computer circuit description comprising a sequence of quantum operations or gates, which can include the circuits as disclosed herein (e.g., the circuits configured to perform one or more of the procedures as disclosed herein). The compilation can be performed by a compiler 1522 using a classical processor 1510 (e.g., as shown in FIG. 12) of the environment 1500 which loads the high-level description from memory or storage devices 1512 and stores the resulting quantum computer circuit description in the memory or storage devices 1512.


In other embodiments, compilation and/or verification can be performed remotely by a remote computer 1560 (e.g., a computer having a computing environment as described above with respect to FIG. 12) which stores the resulting quantum computer circuit description in one or more memory or storage devices 1562 and transmits the quantum computer circuit description to the computing environment 1500 for implementation in the quantum processing unit(s) 1502. Still further, the remote computer 1560 can store the high-level description in the memory or storage devices 1562 and transmit the high-level description to the computing environment 1500 for compilation and use with the quantum processor(s). In any of these scenarios, results from the computation performed by the quantum processor(s) can be communicated to the remote computer after and/or during the computation process. Still further, the remote computer can communicate with the QP controller(s) 1520 such that the quantum computing process (including any compilation, verification, and QP control procedures) can be remotely controlled by the remote computer 1560. In general, the remote computer 1560 communicates with the QP controller(s) 1520, compiler/synthesizer 1522, and/or verification tool 1523 via communication connections 1550.


In particular embodiments, the environment 1500 can be a cloud computing environment, which provides the quantum processing resources of the environment 1500 to one or more remote computers (such as remote computer 1560) over a suitable network (which can include the internet).


XI. Concluding Remarks

This application has shown that iterative phase estimation is more flexible than previously thought and that the terms in the Hamiltonian can be randomized at each step of iterative phase estimation without substantially contributing to the underlying variance of an unbiased estimator of the eigenphase. It was further shown numerically that by using such strategies for sub-sampling the Hamiltonian terms, one can perform a simulation using fewer Hamiltonian terms than traditional approaches require. These reductions in the number of terms directly impact the complexity of Trotter-Suzuki-based simulation, and they indirectly impact qubitization and truncated Taylor-series simulation methods because they also reduce the 1-norm of the vector of Hamiltonian terms.


Having described and illustrated the principles of the disclosed technology with reference to the illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles. For instance, elements of the illustrated embodiments shown in software may be implemented in hardware and vice-versa. Also, the technologies from any example can be combined with the technologies described in any one or more of the other examples. It will be appreciated that procedures and functions such as those described with reference to the illustrated examples can be implemented in a single hardware or software module, or separate modules can be provided. The particular arrangements above are provided for convenient illustration, and other arrangements can be used.

Claims
  • 1. A method of operating a quantum computing device, comprising: inputting a Hamiltonian to be computed by the quantum computing device; reducing a number of Hamiltonian terms in the Hamiltonian using randomization within a phase estimation algorithm; and outputting a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms.
  • 2. The method of claim 1, wherein the method is performed by a classical computer.
  • 3. The method of claim 1, wherein the reducing comprises: selecting one or more random Hamiltonian terms based on an importance function; reweighting the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generating the quantum circuit description using the reweighted random terms.
  • 4. The method of claim 3, further comprising: implementing, in the quantum computing device, a quantum circuit as described by the quantum circuit description; and measuring a quantum state of the quantum circuit.
  • 5. The method of claim 4, further comprising re-performing the method of claim 4 based on results from the measuring.
  • 6. The method of claim 5, wherein the re-performing is performed based on an iterative process.
  • 7. The method of claim 6, wherein the iterative process comprises: computing a desired precision value for the Hamiltonian; computing a standard deviation for the Hamiltonian based on results from the implementing and measuring; and comparing the desired precision value to the standard deviation.
  • 8. The method of claim 1, further comprising changing an order of the Hamiltonian terms based on the reducing.
  • 9. The method of claim 1, further comprising: applying importance functions to terms of the Hamiltonian in a ground state; and selecting one or more random Hamiltonian terms based at least in part on the importance functions.
  • 10. The method of claim 1, further comprising: using importance sampling based on a variational approximation to a groundstate.
  • 11. The method of claim 1, further comprising: using adaptive Bayesian methods to quantify a precision needed for the Hamiltonian given an estimate of the current uncertainty in an eigenvalue.
  • 12. One or more computer-readable media storing computer-executable instructions, which when executed by a computer cause the computer to perform a method, the method comprising: inputting a Hamiltonian to be computed by a quantum computing device; reducing a number of Hamiltonian terms in the Hamiltonian using randomization within a phase estimation algorithm; and outputting a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms.
  • 13. The one or more computer-readable media of claim 12, wherein the method further comprises: selecting one or more random Hamiltonian terms based on an importance function; reweighting the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generating the quantum circuit description using the reweighted random terms.
  • 14. The one or more computer-readable media of claim 13, wherein the method further comprises: causing a quantum circuit as described by the quantum circuit description to be implemented by the quantum computing device; and measuring a quantum state of the quantum circuit.
  • 15. The one or more computer-readable media of claim 14, wherein the method further comprises: computing a desired precision value for the Hamiltonian; computing a standard deviation for the Hamiltonian based on results from the implementing and measuring; comparing the desired precision value to the standard deviation; and re-performing the reducing based on a result of the comparing.
  • 16. A system, comprising: a quantum computing system; and a classical computing system configured to communicate with and control the quantum computing system, the classical computing system being further configured to: input a Hamiltonian to be computed by the quantum computing system; reduce a number of Hamiltonian terms in the Hamiltonian using randomization within an iterative phase estimation algorithm; and output a quantum circuit description for the Hamiltonian with the reduced number of Hamiltonian terms.
  • 17. The system of claim 16, wherein the classical computing system is further configured to: select one or more random Hamiltonian terms based on an importance function; reweight the selected random Hamiltonian terms based on an importance of each of the selected random Hamiltonian terms; and generate the quantum circuit description using the reweighted random terms.
  • 18. The system of claim 17, wherein the classical computing system is further configured to: cause a quantum circuit as described by the quantum circuit description to be implemented by the quantum computing system; and measure a quantum state of the quantum circuit.
  • 19. The system of claim 18, wherein the classical computing system is further configured to: compute a desired precision value for the Hamiltonian; compute a standard deviation for the Hamiltonian based on results from the implementing and measuring; compare the desired precision value to the standard deviation; and re-perform the reducing based on a result of the comparing.
  • 20. The system of claim 16, wherein the classical computing system is further configured to omit, as part of the randomization, one or more unnecessary qubits.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/819,301 entitled “PHASE ESTIMATION WITH RANDOMIZED HAMILTONIANS” and filed on Mar. 15, 2019, which is hereby incorporated herein by reference in its entirety.

Provisional Applications (1)
Number Date Country
62819301 Mar 2019 US