The disclosure pertains to training Boltzmann machines using quantum computers.
Deep learning is a relatively new paradigm for machine learning that has substantially impacted the way in which classification, inference and artificial intelligence (AI) tasks are performed. Deep learning began with the suggestion that in order to perform sophisticated AI tasks, such as vision or language, it may be necessary to work on abstractions of the initial data rather than raw data. For example, an inference engine that is trained to detect a car might take a raw image and first decompose it into simple shapes. These shapes could form the first layer of abstraction. These elementary shapes could then be grouped together into higher level abstract objects such as bumpers or wheels. The problem of determining whether a particular image is or is not a car is then performed on the abstract data rather than the raw pixel data. In general, this process could involve many levels of abstraction.
Deep learning techniques have demonstrated remarkable improvements such as up to 30% relative reduction in error rate on many typical vision and speech tasks. In some cases, deep learning techniques approach human performance, such as in matching two faces. Conventional classical deep learning methods are currently deployed in language models for speech and search engines. Other applications include machine translation and deep image understanding (i.e., image to text representation).
Existing methods for training deep belief networks use contrastive divergence approximations to train the network layer by layer. This process is expensive for deep networks, relies on the validity of the contrastive divergence approximation, and precludes the use of intra-layer connections. The contrastive divergence approximation is inapplicable in some applications, and in any case, contrastive-divergence-based methods are incapable of training an entire graph at once and instead rely on training the system one layer at a time, which is costly and reduces the quality of the model. Finally, further crude approximations are needed to train a full Boltzmann machine, which potentially has connections between all hidden and visible units; these approximations may limit the quality of the optima found by the learning algorithm. Approaches are needed that overcome these limitations.
The disclosure provides methods and apparatus for training deep belief networks in machine learning. The disclosed methods and apparatus permit efficient training of generic Boltzmann machines that are currently untrainable with conventional approaches. In addition, the disclosed approaches can provide more rapid training in fewer steps. Gradients of objective functions for deep Boltzmann machines are determined using a quantum computer in combination with a classical computer. A quantum state encodes an approximation to a Gibbs distribution, and sampling of this approximate distribution is used to determine Boltzmann machine weights and biases. In some cases, amplitude estimation and fast quantum algorithms are used. Typically, a classical computer receives a specification of a Boltzmann machine and associated training data, and determines an objective function associated with the Boltzmann machine. A quantum computer determines at least one gradient of the objective function, and based on the gradient of the objective function, at least one hidden value or a weight of the Boltzmann machine is established. A mean-field approximation can be used to define an objective function, and gradients can be determined based on the sampling.
These and other features of the disclosure are set forth below with reference to the accompanying drawings.
As used in this application and in the claims, the singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. Additionally, the term “includes” means “comprises.” Further, the term “coupled” does not exclude the presence of intermediate elements between the coupled items.
The systems, apparatus, and methods described herein should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub-combinations with one another. The disclosed systems, methods, and apparatus are not limited to any specific aspect or feature or combinations thereof, nor do the disclosed systems, methods, and apparatus require that any one or more specific advantages be present or problems be solved. Any theories of operation are to facilitate explanation, but the disclosed systems, methods, and apparatus are not limited to such theories of operation.
Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed systems, methods, and apparatus can be used in conjunction with other systems, methods, and apparatus. Additionally, the description sometimes uses terms like “produce” and “provide” to describe the disclosed methods. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
In some examples, values, procedures, or apparatus are referred to as “lowest,” “best,” “minimum,” or the like. It will be appreciated that such descriptions are intended to indicate that a selection among many functional alternatives can be made, and such selections need not be better, smaller, or otherwise preferable to other selections.
The methods and apparatus described herein generally use a classical computer coupled to a quantum computer to train a deep Boltzmann machine. In order for the classical computer to update a model for the deep Boltzmann machine given training data, certain expectation values are computed. A quantum computer is arranged to accelerate this process. In typical examples, a classically tractable approximation to the state provided by a mean field approximation, or a related approximation, is used to prepare a quantum state that is close to the distribution that yields the desired expectation values. The quantum computer is then used to efficiently refine this approximation into precisely the desired distribution. The required expectation values are then learned by sampling from this quantum distribution.
In alternative examples, amplitude estimation is used. Instead of preparing the quantum computer in a state corresponding to a single training vector, the state is prepared in a quantum superposition of every training example in the set. Amplitude estimation is then used to find the required expectation values.
Boltzmann Machines
The Boltzmann machine is a powerful paradigm for machine learning in which the problem of training a system to classify or generate examples of a set of training vectors is reduced to the problem of energy minimization of a spin system. The Boltzmann machine consists of several binary units that are split into two categories: (a) visible units and (b) hidden units. The visible units are the units in which the inputs and outputs of the machine are given. For example, if a machine is used for classification, then the visible units will often be used to hold training data as well as a label for that training data. The hidden units are used to generate correlations between the visible units that enable the machine either to assign an appropriate label to a given training vector or to generate an example of the type of data that the system is trained to output.
Formally, the Boltzmann machine models the probability of a given configuration (v, h) of hidden and visible units via the Gibbs distribution:
P(v,h) = e^{−E(v,h)}/Z,
wherein Z is a normalizing factor known as the partition function, and v, h refer to visible and hidden unit values, respectively. The energy E of a given configuration of hidden and visible units is of the form:

E(v,h) = −Σ_i v_i b_i − Σ_j h_j d_j − Σ_{i,j} w_{i,j} v_i h_j,
wherein vectors v and h are visible and hidden unit values, vectors b and d are biases that provide an energy penalty for a unit taking a value of 1, and w_{i,j} is a weight that assigns an energy penalty when the corresponding hidden and visible units both take on a value of 1. Training a Boltzmann machine reduces to estimating these biases and weights by maximizing the log-likelihood of the training data. A Boltzmann machine for which the biases and weights have been determined is referred to as a trained Boltzmann machine. A so-called L2-regularization term can be added in order to prevent overfitting, resulting in the following form of an objective function:

O_ML := (1/N_train) Σ_{v ∈ x_train} log( Σ_h e^{−E(v,h)}/Z ) − (λ/2) Σ_{i,j} w_{i,j}².
This objective function is referred to as a maximum-likelihood objective (ML-objective) function, and λ represents the regularization term. Gradient descent provides a method to find a locally optimal value of the ML-objective function. Formally, the gradients of this objective function can be written as:

∂O_ML/∂w_{i,j} = ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model − λ w_{i,j},   (1a)

∂O_ML/∂b_i = ⟨v_i⟩_data − ⟨v_i⟩_model,   (1b)

∂O_ML/∂d_j = ⟨h_j⟩_data − ⟨h_j⟩_model.   (1c)
The expectation values for a quantity x(v,h) are given by:

⟨x(v,h)⟩_model = Σ_{v,h} x(v,h) e^{−E(v,h)}/Z,

⟨x(v,h)⟩_data = (1/N_train) Σ_{v ∈ x_train} Σ_h x(v,h) e^{−E(v,h)}/Z_v,

wherein Z_v = Σ_h e^{−E(v,h)} is the partition function for the Boltzmann machine with the visible units clamped to v.
Note that it is non-trivial to compute any of these gradients: the value of the partition function Z is #P-hard to compute and cannot generally be efficiently approximated within a specified multiplicative error. This means that, modulo reasonable complexity-theoretic assumptions, neither a quantum nor a classical computer should be able to directly compute the probability of a given configuration and, in turn, compute the log-likelihood of the Boltzmann machine yielding the particular configuration of hidden and visible units.
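For intuition, and for testing on models small enough to enumerate every configuration, the objective and the gradients (1a)-(1c) can be evaluated exactly by brute force. The sketch below is a classical NumPy illustration of those formulas under the energy convention given above, not the disclosed quantum method; the function and variable names are illustrative.

```python
import itertools
import numpy as np

def energy(v, h, W, b, d):
    # E(v, h) = -v.b - h.d - v.W.h, matching the form given above.
    return -(v @ b + h @ d + v @ W @ h)

def exact_gradient(W, b, d, data, lam=0.0):
    """Brute-force ML-objective gradient for a tiny Boltzmann machine.

    data: array of training visible vectors, shape (N_train, n_v).
    Returns (dW, db, dd) following Eqns. (1a)-(1c):
        dO/dw_ij = <v_i h_j>_data - <v_i h_j>_model - lam * w_ij, etc.
    """
    nv, nh = W.shape
    vs = np.array(list(itertools.product([0, 1], repeat=nv)))
    hs = np.array(list(itertools.product([0, 1], repeat=nh)))

    # Model expectations from the full Gibbs distribution e^{-E}/Z.
    E = np.array([[energy(v, h, W, b, d) for h in hs] for v in vs])
    P = np.exp(-E)
    P /= P.sum()                                  # joint P(v, h)
    vh_model = np.einsum('vh,vi,hj->ij', P, vs, hs)
    v_model = P.sum(axis=1) @ vs
    h_model = P.sum(axis=0) @ hs

    # Data expectations: clamp v to each training vector, average over h.
    vh_data = np.zeros_like(W)
    v_data, h_data = np.zeros(nv), np.zeros(nh)
    for x in data:
        Eh = np.array([energy(x, h, W, b, d) for h in hs])
        Ph = np.exp(-Eh)
        Ph /= Ph.sum()                            # P(h | v = x)
        vh_data += np.outer(x, Ph @ hs)
        v_data += x
        h_data += Ph @ hs
    n = len(data)
    vh_data, v_data, h_data = vh_data / n, v_data / n, h_data / n

    return vh_data - vh_model - lam * W, v_data - v_model, h_data - h_model
```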
In practice, approximations to the likelihood gradient via contrastive divergence or mean-field assumptions have been used. These conventional approaches, while useful, are not fully theoretically satisfying, as the directions yielded by the approximations are not the gradients of any objective function, let alone the log-likelihood. Also, contrastive divergence does not succeed when trying to train a full Boltzmann machine, which has arbitrary connections between visible and hidden units. The need for such connections can be mitigated by using a deep restricted Boltzmann machine (shown in
Boltzmann machines can be used in a variety of applications. In one application, data associated with a particular image, a series of images such as video, a text string, speech or other audio is provided to a Boltzmann machine (after training) for processing. In some cases, the Boltzmann machine provides a classification of the data example. For example, a Boltzmann machine can classify an input data example as containing an image of a face, speech in a particular language or from a particular individual, distinguish spam from desired email, or identify other patterns in the input data example such as identifying shapes in an image. In other examples, the Boltzmann machine identifies other features in the input data example or other classifications associated with the data example. In still other examples, the Boltzmann machine preprocesses a data example so as to extract features that are to be provided to a subsequent Boltzmann machine. In typical examples, a trained Boltzmann machine can process data examples for classification, clustering into groups, or simplification such as by identifying topics in a set of documents. Data input to a Boltzmann machine for processing for these or other purposes is referred to as a data example. In some applications, a trained Boltzmann machine is used to generate output data corresponding to one or more features or groups of features associated with the Boltzmann machine. Such output data is referred to as an output data example. For example, a trained Boltzmann machine associated with facial recognition can produce an output data example that corresponds to a model face.
Quantum Algorithm for State Preparation
Quantum computers can draw unbiased samples from the Gibbs distribution, thereby allowing probabilities to be computed by sampling (or by quantum sampling). As disclosed herein, a quantum distribution is prepared that approximates the ideal probability distribution over the model or data. This approximate distribution is then refined by rejection sampling into a quantum distribution that is, to within numerical error, the target probability distribution. Layer by layer training is unnecessary, and approximations required in conventional methods can be avoided. Beginning with a uniform prior over the amplitudes of the Gibbs state, preparing the state via quantum rejection sampling is likely to be inefficient. This is because the success probability depends on a ratio of the partition functions of the initial state and the Gibbs state which is generally exponentially small for machine learning problems. In some examples, a mean-field approximation is used over the joint probabilities in the Gibbs state, rather than a uniform prior. This additional information can be used to boost the probability of success to acceptable levels for numerically tractable examples.
The required expectation values can then be found by sampling from the quantum distribution. A number of samples needed to achieve a fixed sampling error can be quadratically reduced by using a quantum algorithm known as amplitude estimation.
Disclosed below are methods by which an initial quantum distribution is refined into a quantum coherent Gibbs state (often called a coherent thermal state or CTS). Mean-field approaches or generalizations thereof can be used to provide suitable initial states for the quantum computer to refine into the CTS. All units are assumed to be binary-valued in the following examples, but other units (such as Gaussian units) can be approximated within this framework by forming a single unit out of a string of several qubits.
Mean-Field Approximation
The mean-field approximation to the joint probability distribution is referred to herein as Q(v,h). The mean-field approximation is a variational approach that finds an uncorrelated distribution Q(v,h) that has minimal Kullback-Leibler (KL) divergence with the joint probability distribution P(v,h) given by the Gibbs distribution. The main benefit of using Q instead of P is that ⟨v_i h_j⟩_model and log(Z) can be efficiently estimated using mean-field approximations. A secondary benefit is that the mean-field state can be efficiently prepared using single-qubit rotations.
More concretely, the mean-field approximation is a distribution such that

Q(v,h) = Π_i μ_i^{v_i}(1−μ_i)^{1−v_i} Π_j ν_j^{h_j}(1−ν_j)^{1−h_j},
where μ_i and ν_j are chosen to minimize KL(Q∥P). The parameters μ_i and ν_j are called mean-field parameters.
Using the properties of the Bernoulli distribution, it can be shown that:
The optimal values of μ_i and ν_j can be found by differentiating this equation with respect to μ_i and ν_j and setting the result equal to zero. The solution is

μ_i = σ( b_i + Σ_j w_{i,j} ν_j ),

ν_j = σ( d_j + Σ_i w_{i,j} μ_i ),
wherein σ(x)=1/(1+exp(−x)) is the sigmoid function.
These equations can be solved implicitly by fixed-point iteration, which involves initializing the μ_i and ν_j arbitrarily and iterating until convergence is reached. Convergence is guaranteed provided that the norm of the Jacobian of the map is bounded above by 1. Solving the mean-field equations by fixed-point iteration is analogous to Gibbs sampling, with the difference being that here there are only a polynomial number of configurations to sample over, so the entire process is efficient.
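A minimal classical sketch of this fixed-point iteration, assuming an energy with only visible-hidden couplings as defined above (intra-layer connections would add corresponding terms inside σ); the names and convergence settings are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(W, b, d, tol=1e-8, max_iter=500):
    """Fixed-point iteration for the mean-field parameters.

    Iterates mu_i = sigma(b_i + sum_j w_ij nu_j) and
             nu_j = sigma(d_j + sum_i w_ij mu_i)
    from an arbitrary (here uniform) initialization until convergence.
    """
    nv, nh = W.shape
    mu, nu = np.full(nv, 0.5), np.full(nh, 0.5)
    for _ in range(max_iter):
        mu_new = sigmoid(b + W @ nu)
        nu_new = sigmoid(d + W.T @ mu_new)
        if max(np.abs(mu_new - mu).max(), np.abs(nu_new - nu).max()) < tol:
            return mu_new, nu_new
        mu, nu = mu_new, nu_new
    return mu, nu  # returned even if the tolerance was not reached
```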
Mean-field approximations to distributions such as P(v,h) = δ_{v,x} e^{−E(x,h)}/Z_x can be computed using the same methodology. The only difference is that in such cases the mean-field approximation is taken over the hidden units only. Such approximations are needed to compute the expectations over the data that are used to estimate the derivatives of O_ML described below. It can also be shown that, among all product distributions, Q is the distribution that leads to the least error in the approximation to the log-partition function.
Experimentally, mean-field approximations can estimate the log-partition function within less than 1% error, depending on the weight distribution and the geometry of the graph used. The mean-field approximation to the partition function is sufficiently accurate for small restricted Boltzmann machines. Structured mean-field approximation methods can be used to reduce such errors if needed, albeit at a higher classical computational cost. It can be shown that the success probability of the disclosed state preparation methods approach unity in the limit in which the strengths of the correlations in the model vanish.
The mean-field distribution is used to compute a variational approximation to the necessary partition functions. These approximations are shown below. If Q is the mean-field approximation to the Gibbs distribution P, then a mean-field partition function Z_MF is defined as:

Z_MF := exp( Σ_{v,h} Q(v,h) log( e^{−E(v,h)}/Q(v,h) ) ).
Furthermore, for any x ∈ x_train, let Q_x be the mean-field approximation to the Gibbs distribution found for a Boltzmann machine with the visible units clamped to x, and further define Z_{x,MF} as:

Z_{x,MF} := exp( Σ_h Q_x(x,h) log( e^{−E(x,h)}/Q_x(x,h) ) ).
To use a quantum algorithm to prepare P from Q, an upper bound κ on the ratio of the approximation P(v,h) ≈ e^{−E(v,h)}/Z_MF to Q(v,h) is needed. Let κ > 0 be a constant that satisfies, for all visible and hidden configurations (v,h):

κ Q(v,h) ≥ e^{−E(v,h)}/Z_MF,
wherein Z_MF is the approximation to the partition function given above. Then, for all configurations of hidden and visible units,

e^{−E(v,h)}/(κ Q(v,h) Z_MF) ≤ 1.
The mean-field approximation can also be used to provide a lower bound for the log-partition function. For example, Jensen's inequality can be used to show that

log Z = log Σ_{v,h} Q(v,h) e^{−E(v,h)}/Q(v,h) ≥ Σ_{v,h} Q(v,h) log( e^{−E(v,h)}/Q(v,h) ) = log Z_MF.
Thus, Z_MF ≤ Z and P(v,h) ≤ e^{−E(v,h)}/Z_MF.
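A brute-force sketch of this Jensen lower bound, log Z ≥ Σ_{v,h} Q(v,h) log(e^{−E(v,h)}/Q(v,h)), evaluated by enumeration so it only applies to very small models; the exponential of the returned value is Z_MF ≤ Z, and the names are illustrative.

```python
import itertools
import numpy as np

def log_Z_MF(W, b, d, mu, nu):
    """Mean-field (Jensen) lower bound on log Z, computed by enumeration
    using the product-of-Bernoullis distribution Q set by (mu, nu)."""
    nv, nh = W.shape
    bound = 0.0
    for vt in itertools.product([0, 1], repeat=nv):
        for ht in itertools.product([0, 1], repeat=nh):
            v, h = np.array(vt), np.array(ht)
            q = (np.prod(np.where(v == 1, mu, 1 - mu)) *
                 np.prod(np.where(h == 1, nu, 1 - nu)))
            if q == 0.0:
                continue                      # zero-probability terms contribute nothing
            E = -(v @ b + h @ d + v @ W @ h)
            bound += q * (-E - np.log(q))     # Q * log(e^{-E} / Q)
    return bound                              # log Z_MF;  exp(bound) <= Z
```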
A coherent analog of the Gibbs state for a Boltzmann machine can be prepared with a probability of success of

Z/(κ Z_MF).
Similarly, the Gibbs state corresponding to the visible units being clamped to a configuration x can be prepared with success probability

Z_x/(κ_x Z_{x,MF}).
The mean-field parameters μ_i and ν_j can be determined as shown above and uniquely specify the mean-field distribution Q. The mean-field parameters are then used to approximate the partition functions Z and Z_x and to prepare a coherent analog of Q(v,h), denoted |ψ_MF⟩, by performing a series of single-qubit rotations:

|ψ_MF⟩ = ⊗_i ( √(1−μ_i) |0⟩ + √(μ_i) |1⟩ ) ⊗_j ( √(1−ν_j) |0⟩ + √(ν_j) |1⟩ ).
Rejection sampling can be used to refine this approximation to the coherent Gibbs state

Σ_{v,h} √( e^{−E(v,h)}/Z ) |v⟩|h⟩.
Define

𝒫(v,h) := √( e^{−E(v,h)}/(κ Q(v,h) Z_MF) ).
Note that this quantity can be computed efficiently from the mean-field parameters, so there is an associated efficient quantum circuit, and 0 ≤ 𝒫(v,h) ≤ 1.
Since quantum operations are linear, if this circuit is applied to the state

Σ_{v,h} √( Q(v,h) ) |v⟩|h⟩|0⟩,
the state

Σ_{v,h} √( Q(v,h) ) |v⟩|h⟩|𝒫(v,h)⟩
is obtained. An additional quantum bit is added, and a controlled rotation of the form R_y(2 sin^{−1}(𝒫(v,h))) is performed on this qubit to enact the following transformation:
The register that contains the qubit string 𝒫(v,h) is then reverted to the |0⟩ state by applying, in reverse, the same operations used to prepare 𝒫(v,h). This process is possible because all quantum operations, save measurement, are reversible. Since 𝒫(v,h) ∈ [0,1], this is a properly normalized quantum state and its square is a properly normalized probability distribution. If the rightmost quantum bit is measured and a result of 1 is obtained (projective measurements always result in a unit vector), then the remainder of the state will be proportional to

Σ_{v,h} √( e^{−E(v,h)} ) |v⟩|h⟩,
which is the desired state up to a normalizing factor. The probability of measuring 1 is the square of this constant of proportionality:

Z/(κ Z_MF).
Preparing a quantum state that can be used to estimate the expectation values over the data requires a slight modification to this algorithm. First, for each x ∈ x_train needed for the expectation values, Q(v,h) is replaced with the constrained mean-field distribution Q_x(x,h). Then, using this data, the quantum state

Σ_h √( e^{−E(x,h)}/Z_x ) |x⟩|h⟩
can be prepared. The same procedure can be followed using Q_x in place of Q, Z_x instead of Z, and Z_{x,MF} rather than Z_MF. The success probability of this algorithm is:

Z_x/(κ_x Z_{x,MF}),
wherein κ_x is the value of κ that corresponds to the case where the visible units are clamped to x.
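For intuition, the quantum refinement above can be mirrored by an ordinary classical rejection sampler: draw (v,h) from the mean-field product distribution Q and accept with probability e^{−E(v,h)}/(κ Q(v,h) Z_MF), which is at most 1 whenever κ satisfies the bound above. Accepted samples then follow the Gibbs distribution, and the empirical acceptance rate estimates the success probability Z/(κ Z_MF). This is a minimal sketch with illustrative names, not the quantum procedure itself.

```python
import numpy as np

def gibbs_by_rejection(W, b, d, mu, nu, kappa, Z_MF, n_samples=1000, seed=0):
    """Classical rejection-sampling analogue of the quantum refinement step."""
    rng = np.random.default_rng(seed)
    samples, attempts = [], 0
    while len(samples) < n_samples:
        attempts += 1
        # Propose (v, h) from the mean-field product distribution Q.
        v = (rng.random(mu.shape) < mu).astype(float)
        h = (rng.random(nu.shape) < nu).astype(float)
        q = (np.prod(np.where(v == 1, mu, 1 - mu)) *
             np.prod(np.where(h == 1, nu, 1 - nu)))
        E = -(v @ b + h @ d + v @ W @ h)
        # Accept with probability e^{-E}/(kappa * Q * Z_MF), in [0, 1] for valid kappa.
        if rng.random() < np.exp(-E) / (kappa * q * Z_MF):
            samples.append((v, h))
    acceptance_rate = n_samples / attempts        # estimates Z / (kappa * Z_MF)
    return samples, acceptance_rate
```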
This approach to the state preparation problem uses a mean-field approximation, rather than an infinite-temperature Gibbs state, as the initial state. This choice of initial state is important because the success probability of the state preparation process depends on the distance between the initial state and the target state. For machine learning applications, the inner product between the Gibbs state and the infinite-temperature Gibbs state is often exponentially small, whereas the mean-field and Gibbs states typically have large overlaps.
As shown below, if an insufficiently large value of κ is used, the state preparation algorithm can still be applied, but at the price of reduced fidelity with the ideal coherent Gibbs state. Using relaxed assumptions, such that κQ(v,h) ≥ e^{−E(v,h)}/Z_MF for all (v,h) ∈ good, κQ(v,h) < e^{−E(v,h)}/Z_MF for all (v,h) ∈ bad, and
then a state can be prepared that has fidelity at least 1−∂ with the target Gibbs state with probability at least Z(1-∂)/(κZMF).
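The size of the bad set controls this fidelity loss. The following brute-force diagnostic (illustrative names, small models only) reports the Gibbs weight carried by the bad configurations for a candidate κ, so an insufficiently large κ can be flagged numerically before state preparation.

```python
import itertools
import numpy as np

def bad_set_weight(W, b, d, mu, nu, kappa, Z_MF):
    """Fraction of the Gibbs weight e^{-E} on configurations where
    kappa * Q(v,h) < e^{-E(v,h)} / Z_MF (the 'bad' set defined above)."""
    nv, nh = W.shape
    total, bad = 0.0, 0.0
    for vt in itertools.product([0, 1], repeat=nv):
        for ht in itertools.product([0, 1], repeat=nh):
            v, h = np.array(vt), np.array(ht)
            q = (np.prod(np.where(v == 1, mu, 1 - mu)) *
                 np.prod(np.where(h == 1, nu, 1 - nu)))
            w = np.exp(v @ b + h @ d + v @ W @ h)   # e^{-E(v,h)}
            total += w
            if kappa * q < w / Z_MF:
                bad += w
    return bad / total
```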
Prior to the measurement of the register that projects the state onto the success or failure branch, the state is:
The probability of successfully preparing the approximation to the state is then:
The fidelity of the resultant state with the ideal state
is:
since Q(v,h) Z_MF κ ≤ e^{−E(v,h)} for (v,h) ∈ bad. Using the assumption that
the fidelity is bounded above by:
Procedures for producing states that can be measured to estimate expectation values over the model and the data for training a deep Boltzmann machine are shown in Tables 1-2, respectively (as shown in
Gradient Calculation by Sampling
One method for estimating the gradients of O_ML involves preparing the Gibbs state from the mean-field state and then drawing samples from the resultant distribution in order to estimate the expectation values required in Eqns. (1a)-(1c) above. This approach can be improved using the quantum method known as amplitude amplification, a generalization of Grover's search algorithm that quadratically reduces the mean number of repetitions needed to draw a sample from the Gibbs distribution using the methods discussed above.
There exists a quantum algorithm that can estimate the gradient of O_ML using N_train samples for a Boltzmann machine on a connected graph with E edges. The mean number of quantum operations required by the algorithm to compute the gradient is
wherein κv is the value of κ that corresponds to the Gibbs distribution when the visible units are clamped to v and f∈Õ(g) implies f∈O(g) up to polylogarithmic factors.
Table 3 (as shown in
In contrast, the number of operations and queries to U_O required to estimate the gradients using greedy layer-by-layer optimization scales as Õ(N_train ℓ E), wherein ℓ is the number of layers in the deep Boltzmann machine. Assuming that κ is a constant, it follows that a quantum sampling approach provides an asymptotic advantage for training deep networks. In practice, the two approaches are difficult to compare directly because they optimize different objective functions, and thus the qualities of the resultant trained models will differ. It is reasonable to expect, however, that the quantum approach will tend to find superior models because it optimizes the maximum-likelihood objective function up to sampling error due to taking finite N_train.
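Once samples of (v,h) are in hand, whether from the quantum procedure of Table 3 or, for small test models, from a classical sampler such as the rejection sketch above, the gradient update is assembled from empirical averages. A minimal sketch with illustrative names:

```python
import numpy as np

def gradient_from_samples(model_samples, data_samples, W, lam=0.0):
    """Estimate the gradients (1a)-(1c) from samples.

    model_samples: list of (v, h) pairs drawn from the unclamped Gibbs distribution.
    data_samples:  list of (v, h) pairs with v clamped to training vectors.
    """
    def averages(samples):
        V = np.array([v for v, _ in samples])
        H = np.array([h for _, h in samples])
        return (V.T @ H) / len(samples), V.mean(axis=0), H.mean(axis=0)

    vh_d, v_d, h_d = averages(data_samples)
    vh_m, v_m, h_m = averages(model_samples)
    return vh_d - vh_m - lam * W, v_d - v_m, h_d - h_m
```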
Note that the method of Table 3 has an important advantage over typical quantum machine learning algorithms in that it does not require that the training vectors be stored in quantum memory. Instead, only
qubits are needed for a numerical precision of ε in the evaluation of the energy and Q(v,h). This means that an algorithm that could not be performed classically could be carried out with fewer than 100 qubits, assuming that 32 bits of precision suffice for the energy and Q(v,h). Recent developments in quantum rotation synthesis could be used to remove the requirement that the energy be explicitly stored as a qubit string, which might substantially reduce space requirements. An alternative method is disclosed below in which the quantum computer can coherently access this database via an oracle.
Training Via Quantum Amplitude Estimation
An alternative method is based on access to the training data via a quantum oracle, which could represent either an efficient quantum algorithm that provides the training data (such as another Boltzmann machine used as a generative model) or a quantum database that stores the data via a binary access tree. If the training set is {x_i | i = 1, . . . , N_train}, then the oracle is a unitary operation U_O that, for any computational basis state |i⟩ and any bit strings y and x_i of length n_v, performs the operation:
U_O |i⟩|y⟩ := |i⟩|y ⊕ x_i⟩,
A single quantum access to U_O is sufficient to prepare a uniform distribution over all the training data:

(1/√N_train) Σ_{i=1}^{N_train} |i⟩|x_i⟩.

The state

(1/√N_train) Σ_{i=1}^{N_train} |i⟩|0⟩

can be efficiently prepared using quantum techniques, so the entire procedure is efficient.
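A toy classical emulation of this bookkeeping, with bit strings encoded as integers, may help fix the convention U_O|i⟩|y⟩ = |i⟩|y ⊕ x_i⟩; the function names are illustrative, and nothing here captures the efficiency of the quantum implementation.

```python
import numpy as np

def U_O(i, y, x_train):
    """Oracle action on a basis state |i>|y>: returns (i, y XOR x_i)."""
    return i, y ^ x_train[i]

def uniform_training_state(x_train):
    """Amplitudes of (1/sqrt(N)) * sum_i |i>|x_i>, obtained by applying U_O
    to a uniform superposition over |i>|0>."""
    N = len(x_train)
    return {U_O(i, 0, x_train): 1.0 / np.sqrt(N) for i in range(N)}
```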
At first glance, the ability to prepare a superposition over all data in the training set seems to be a powerful resource. However, a similar probability distribution can be generated classically using one query by picking a random training vector. More sophisticated approaches are needed to leverage computational advantages from such quantum superpositions. The method shown in Table 4 (as shown in
for a Boltzmann machine on a connected graph with E edges to within error δ using an expected number of queries to U_O that scales as

Õ((κ + max_v κ_v)/δ)
and a number of quantum operations that scales as

Õ((κ + max_v κ_v)E/δ)
for a constant learning rate r.
The method of computing gradients for a deep Boltzmann machine shown in Table 4 uses amplitude estimation. This method provides a quadratic reduction in the number of samples needed to learn the probability of an event occurring. For any positive integer L, the amplitude estimation algorithm of Brassard et al. takes as input a quantum algorithm that does not use measurement and that has success probability a, and outputs ã (0 ≤ ã ≤ 1) such that

|ã − a| ≤ 2π√(a(1−a))/L + π²/L²
with probability at least 8/π², using L iterations of Grover's algorithm. If a=0, then ã=0 with certainty, and if a=1 and L is even, then ã=1 with certainty. Amplitude estimation is described in further detail in Brassard et al., "Quantum amplitude amplification and estimation," available at arxiv.org/quant-ph/0005055v1 (2000), which is incorporated herein by reference.
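A small numerical sketch built on the quoted Brassard et al. bound, |ã − a| ≤ 2π√(a(1−a))/L + π²/L²: it reports the bound for a given L and picks the smallest L meeting a target additive error, which grows roughly as 1/ε rather than the roughly 1/ε² repetitions of direct sampling. The helper names are illustrative.

```python
import math

def ae_error_bound(a, L):
    """Brassard et al. error bound for amplitude estimation with L Grover iterations."""
    return 2 * math.pi * math.sqrt(a * (1 - a)) / L + math.pi ** 2 / L ** 2

def iterations_for_error(a, eps):
    """Smallest L whose bound is below eps (assumes eps > 0)."""
    L = 1
    while ae_error_bound(a, L) > eps:
        L += 1
    return L
```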
The procedure of Table 4 provides a method for computing the derivative of O_ML with respect to the weights. This procedure can be adapted to compute the derivatives with respect to the biases. The first step in this procedure is to prepare a uniform superposition over all training data and then apply U_O to the superposition to obtain:
Any quantum algorithm that does not use measurement is linear and hence applying qGenDataState (shown in Table 2 above) to this superposition yields:
If a measurement of X=1 denotes success, then Õ((κ + max_v κ_v)/Δ) preparations are needed to learn P(success) = P(X=1) to within relative error Δ/8 with high probability. This is because P(success) ≥ 1/(κ + max_v κ_v). Similarly, success can be associated with an event in which the ith visible unit is 1, the jth hidden unit is 1, and a successful state preparation is measured. This marking process is exactly the same as in the previous case, but requires a Toffoli gate (a doubly-controlled NOT gate, which can be implemented using fundamental gates) and two Hadamard operations. Thus P(v_i = h_j = X = 1) can be learned to within relative error Δ/8 using Õ((κ + max_v κ_v)/Δ) preparations. It then follows from the laws of conditional probability that

⟨v_i h_j⟩_data = P(v_i = h_j = X = 1)/P(X = 1)   (2)
can be calculated.
In order to ensure that the total error in ⟨v_i h_j⟩_data is at most Δ, the error in the quotient in (2) must be bounded. It can be seen that for Δ < 1/2,
Therefore the algorithm gives ⟨v_i h_j⟩_data within error Δ.
The same steps can be repeated using qGenModelState (Table 1) as the state preparation subroutine used in amplitude estimation. This allows computation of ⟨v_i h_j⟩_model within error Δ using Õ(1/Δ) state preparations. The triangle inequality shows that the maximum error incurred from approximating ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model is at most 2Δ. Therefore, with a learning rate of r, the overall error in the derivative is at most 2Δr. If Δ = δ/(2r), then the overall algorithm requires Õ(1/δ) state preparations for constant r.
Each state preparation requires one query to U_O and Õ(E) operations, assuming that the graph that underlies the Boltzmann machine is connected. This means that the expected query complexity of the algorithm is Õ((κ + max_v κ_v)/δ) and the number of circuit elements required is Õ((κ + max_v κ_v)E/δ).
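A sketch of the classical post-processing implied by Eq. (2): the data expectation is the quotient of the two amplitude-estimated probabilities, and a worst-case check shows how a relative error of Δ/8 in each factor propagates into the quotient. Names are illustrative.

```python
def vh_data_estimate(p_joint, p_success):
    """<v_i h_j>_data = P(v_i = h_j = X = 1) / P(X = 1), Eq. (2)."""
    return p_joint / p_success

def worst_case_quotient_error(p_joint, p_success, rel_err=1.0 / 8):
    """Largest deviation of the quotient when both probabilities are off by
    a relative error of rel_err (Delta/8 in the text)."""
    exact = p_joint / p_success
    worst = 0.0
    for s_num in (-1, 1):
        for s_den in (-1, 1):
            est = (p_joint * (1 + s_num * rel_err)) / (p_success * (1 + s_den * rel_err))
            worst = max(worst, abs(est - exact))
    return worst
```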
There are two qualitative differences between the method of Table 4 and that of Table 3. First, the method of Table 4 provides detailed information about one direction of the gradient, whereas samples produced by the method of Table 3 provide limited information about every direction. The method of Table 4 can be repeated for each of the components of the gradient vector in order to perform an update of the weights and biases of the Boltzmann machine. Second, amplitude amplification is not used to reduce the effective value of κ. Amplitude amplification only gives a quadratic advantage if used in an algorithm that uses measurement and feedback unless the probability of success is known.
The quadratic scaling with E means that the method of Table 4 may not be preferable to that of Table 3 for learning all weights. On the other hand, the method of Table 4 can be used to improve previously estimated gradients. In one example, a preliminary gradient estimation step is performed using a direct gradient estimation method with O(√N_train) randomly selected training vectors. The gradient is then estimated by breaking the results into smaller groups and computing the mean and the variance of each component of the gradient vector over each of the subgroups. The components of the gradient with the largest uncertainty can then be learned using the above method with δ ~ 1/√N_train. This approach allows the benefits of both approaches to be combined, especially in cases where the majority of the uncertainty in the gradient comes from a small number of components.
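A sketch of that hybrid strategy's classical bookkeeping: rough gradient estimates from randomly chosen training vectors are split into subgroups, per-component variances are computed from the subgroup means, and the most uncertain components are flagged for refinement by the amplitude-estimation method with δ ~ 1/√N_train. The grouping and selection choices are illustrative assumptions.

```python
import numpy as np

def components_to_refine(gradient_estimates, n_groups=10, top_k=5):
    """gradient_estimates: array of shape (n_estimates, n_components), one rough
    gradient estimate per randomly chosen batch of training vectors
    (assumes n_estimates >= n_groups).  Returns the indices of the top_k
    components with the largest group-to-group variance."""
    groups = np.array_split(gradient_estimates, n_groups)
    group_means = np.array([g.mean(axis=0) for g in groups])
    variances = group_means.var(axis=0)
    return np.argsort(variances)[-top_k:][::-1]
```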
The discussion above is directed to learning on restricted Boltzmann machines and deep Boltzmann machines. The disclosed quantum methods can train full Boltzmann machines given that the mean-field approximation to the Gibbs state has only polynomially small overlap with the true Gibbs state. The intra-layer connections associated with such Boltzmann machines can permit superior models.
With reference to
A method 300 of gradient calculation is illustrated in
Referring to
At 410, a qubit is added, and quantum superposition is used to rotate the qubit to
At 412, amplitude estimation is used to measure |1⟩ and at 414, (v,h) is measured. The measured value (v,h) is returned at 416.
A data average can be determined in a similar fashion. Referring to
At 510, a qubit is added, and quantum superposition is used to rotate the qubit to
At 512, amplitude estimation is used to measure |1⟩ and at 514, (v,h) is measured. The measured value (v,h) is returned at 516.
A method 600 of gradient calculation using amplitude estimation is shown in
A model average can be determined using a method 700 shown in
At 710, a qubit is added, and quantum superposition is used to rotate this qubit to
At 712, amplitude estimation is used to determine a probability of measuring this qubit as |1⟩ and, at 714, amplitude estimation is used to determine the probability of this qubit being |1⟩ and h_i = v_j = 1. The ratio of the two probabilities is returned at 716.
A data average can also be determined using a method 800 shown in
are obtained at 802, and a mean-field approximation to a Gibbs state is determined at 804. At 806, a mean-field state is prepared on qubits in a quantum computer simultaneously for each |x_i⟩ in the superposition. At 808, a qubit string is prepared that stores the energy of each configuration (v,h). This qubit string can be represented as
At 810, a qubit is added, and quantum superposition is used to rotate this qubit to
At 812, amplitude estimation is used to determine a probability of measuring this qubit as |1⟩ and, at 814, amplitude estimation is used to determine the probability of this qubit being |1⟩ and h_i = v_j = 1. The ratio of the two probabilities is returned at 816.
Computing Environments
With reference to
As shown in
The exemplary PC 900 further includes one or more storage devices 930 such as a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to a removable optical disk (such as a CD-ROM or other optical media). Such storage devices can be connected to the system bus 906 by a hard disk drive interface, a magnetic disk drive interface, and an optical drive interface, respectively. The drives and their associated computer readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the PC 900. Other types of computer-readable media which can store data that is accessible by a PC, such as magnetic cassettes, flash memory cards, digital video disks, CDs, DVDs, RAMs, ROMs, and the like, may also be used in the exemplary operating environment.
A number of program modules may be stored in the storage devices 930 including an operating system, one or more application programs, other program modules, and program data. Storage of Boltzmann machine specifications, and computer-executable instructions for training procedures, determining objective functions, and configuring a quantum computer can be stored in the storage devices 930 as well as or in addition to the memory 904. A user may enter commands and information into the PC 900 through one or more input devices 940 such as a keyboard and a pointing device such as a mouse. Other input devices may include a digital camera, microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the one or more processing units 902 through a serial port interface that is coupled to the system bus 906, but may be connected by other interfaces such as a parallel port, game port, or universal serial bus (USB). A monitor 946 or other type of display device is also connected to the system bus 906 via an interface, such as a video adapter. Other peripheral output devices 945, such as speakers and printers (not shown), may be included. In some cases, a user interface is displayed so that a user can input a Boltzmann machine specification for training, and verify successful training.
The PC 900 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 960. In some examples, one or more network or communication connections 950 are included. The remote computer 960 may be another PC, a server, a router, a network PC, or a peer device or other common network node, and typically includes many or all of the elements described above relative to the PC 900, although only a memory storage device 962 has been illustrated in
When used in a LAN networking environment, the PC 900 is connected to the LAN through a network interface. When used in a WAN networking environment, the PC 900 typically includes a modem or other means for establishing communications over the WAN, such as the Internet. In a networked environment, program modules depicted relative to the personal computer 900, or portions thereof, may be stored in the remote memory storage device or other locations on the LAN or WAN. The network connections shown are exemplary, and other means of establishing a communications link between the computers may be used.
With reference to
With reference to
Having described and illustrated the principles of the disclosed technology with reference to the illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles. The technologies from any example can be combined with the technologies described in any one or more of the other examples. Alternatives specifically addressed in these sections are merely exemplary and do not constitute all possible examples.
This is the U.S. National Stage of International Application No. PCT/US2015/062848, filed Nov. 28, 2015, which was published in English under PCT Article 21(2), which in turn claims the benefit of U.S. Provisional Application No. 62/088,409, filed Dec. 5, 2014. Both applications are incorporated herein in their entireties.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/062848 | 11/28/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/089711 | 6/9/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7860596 | Guez | Dec 2010 | B2 |
8892857 | Ozols | Nov 2014 | B2 |
20080301077 | Fung | Dec 2008 | A1 |
20140187427 | Macready et al. | Jul 2014 | A1 |
20140365201 | Gao | Dec 2014 | A1 |
20160314406 | Wiebe et al. | Oct 2016 | A1 |
Number | Date | Country |
---|---|---|
102034133 | Apr 2011 | CN |
WO 2014055293 | Apr 2014 | WO |
WO 2015085190 | Jun 2015 | WO |
Entry |
---|
Dumoulin, Vincent, Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio. “On the Challenges of Physical Implementations of RBMs.” arXiv preprint arXiv:1312.5258 (2013). (Year: 2013). |
Tanaka, Toshiyuki. “Information geometry of mean-field approximation.” Neural Computation 12, No. 8 (2000): 1951-1968. (Year: 2000). |
Yapage, Nihal. “Information geometrical study of quantum Boltzmann machines.” (2008). (Year: 2008). |
Brassard, Gilles, Peter Hoyer, Michele Mosca, and Alain Tapp. “Quantum amplitude amplification and estimation.” Contemporary Mathematics 305 (2002): 53-74. (Year: 2002). |
First Office Action from Chinese Patent Application No. 201580066265.4, dated Nov. 21, 2019, 5 pages (with English translation). |
Bang, Jeongho, “Quantum-Classical Hybrid Learning-Simulator for Quantum-Algorithm Design,” available at: http://iqoqi.at/en/events/event/983, 2 pages, retrieved on Oct. 30, 2014. |
Bengio et al., “Greedy Layer-Wise Training of Deep Networks,” Advances in Neural Information Processing Systems 19, 8 pages (Dec. 3, 2007). |
Bian et al., “The Ising Model: Teaching an Old Problem New Tricks,” Technical Report, D-Wave Systems, pp. 1-32 (Aug. 30, 2010). |
Denil et al., “Toward the Implementation of a Quantum RBM,” NIPS'24 Workshop on Deep Learning and Unsupervised Feature Learning, 9 pages (Dec. 16, 2011). |
Dumoulin et al., “On the Challenges of Physical Implementations of RBMs,” Proceedings of the 28th AAAI Conference on Artificial Intelligence, 7 pages (Jul. 27, 2014). |
Dumoulin et al., “On the Challenges of Physical Implementations of RBMs,” http://arxiv.org/abs/1312.5258v2, 7 pages (Oct. 24, 2014). |
Fischer et al., “An Introduction to Restricted Boltzmann Machines,” LNCS, 7441:14-36 (2012). |
Herman, Joshua, “General Quantum Computational Networks Using Nonlinear Operators,” available at: http://arxiv.org/abs/0709.0883v2, 4 pages (Sep. 7, 2007). |
Hinton, Geoffrey, “A Practical Guide to Training Restricted Boltzmann Machines,” Momentum, 9:1-20 (Aug. 2, 2010). |
International Search Report and Written Opinion from International Patent Application No. PCT/US2015/062848, dated Mar. 14, 2016, 15 pages. |
Jordan, Stephen, “Fast quantum algorithm for numerical gradient estimation,” Physical Review Letters, 95:050501-1-050501-4 (Jul. 28, 2005). |
Li et al., “A Hybrid Quantum-Inspired Neural Networks with Sequence Inputs,” Neurocomputing, 117:81-90 (Mar. 4, 2013). |
Resnik et al., “Gibbs Sampling for the Uninitiated,” 23 pages (Jun. 2010). |
Ricks et al., “Training a Quantum Neural Network,” Proceedings of the 17th Annual Conference of Neural Information Processing (NIPS' 16), 8 pages (Dec. 3, 2003). |
Takahashi et al., “Multi-Layer Quantum Neural Network Controller Trained by Real-coded Genetic Algorithm,” Neurocomputing, 134:159-164 (Jun. 25, 2014). |
Tieleman, Tijmen, “Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient,” Proceedings of the 25th International Conference on Machine Learning, pp. 1064-1071 (May 7, 2008). |
U.S. Appl. No. 61/912,450, filed Dec. 5, 2013, 32 pages. |
Vellasco et al., “Quantum-Inspired Evolutionary Algorithms Applied to Neural Network Modeling,” available at: http://www-ma2.upc.edu/sxd/ICAIB/wcci2010-Plenary&InvitedLectures/wcci2010-ijcnn-Vellasco.pdf, pp. 125-150, retrieved on Oct. 30, 2014. |
Welling et al., “A New Learning Algorithm for Mean Field Boltzmann Machines,” Proceedings of the International Conference on Artificial Neural Networks, 10 pages (Jun. 5, 2001). |
Wiebe et al., “Quantum Inspired Training for Boltzmann Machines,” available at: http://arxiv.org/abs/1507.02642v1, pp. 1-18 (Jul. 9, 2015). |
Zhou et al., “Deep Quantum Networks for Classification,” 20th International Conference on Pattern Recognition, 4 pages (Aug. 23, 2010). |
Second Office Action issued in Chinese Patent Application No. 201580066265.4, dated Jul. 27, 2020, 11 pages (with English translation). |
Communication pursuant to Article 94(3) EPC issued in European Patent Application No. 15813186.2, dated Jan. 26, 2021, 11 pages. |
Number | Date | Country | |
---|---|---|---|
20170364796 A1 | Dec 2017 | US |
Number | Date | Country | |
---|---|---|---|
62088409 | Dec 2014 | US |