The invention relates to quantum algorithms and genetic algorithms, and more precisely, to a method of performing a quantum algorithm for simulating a genetic algorithm, a relative hardware quantum gate and a relative genetic algorithm, and a method of designing quantum gates.
Computation, based on the laws of classical physics, leads to different constraints on information processing than computation based on quantum mechanics. Quantum computers promise to address many intractable problems, but, unfortunately, no algorithms for “programming” a quantum computer currently exist. Calculation in a quantum computer, like calculation in a conventional computer, can be described as a marriage of quantum hardware (the physical embodiment of the computing machine itself, such as quantum gates and the like), and quantum software (the computing algorithm implemented by the hardware to perform the calculation). To date, quantum software algorithms, such as Shor's algorithm, used to address problems on a quantum computer have been developed on an ad hoc basis without any real structure or programming methodology.
This situation is somewhat analogous to attempting to design a conventional logic circuit without the use of a Karnaugh map. A logic designer, given a set of inputs and corresponding desired outputs, could design a complicated logic circuit using NAND gates without the use of a Karnaugh map. However, the unfortunate designer would be forced to design the logic circuit more or less by intuition, and trial and error. The Karnaugh map provides a structure and an algorithm for manipulating logical operations (AND, OR, etc.) in a manner that allows a designer to quickly design a logic circuit that will perform a desired logic calculation.
The lack of a programming or program design methodology for quantum computers severely limits the usefulness of the quantum computer. Moreover, it limits the usefulness of the quantum principles, such as superposition, entanglement, and interference that give rise to the quantum logic used in quantum computations. These quantum principles suggest, or lend themselves, to problem-solving methods that are not typically used in conventional computers.
These quantum principles can be used with conventional computers in much the same way that genetic principles of evolution are used in genetic optimizers today. Nature, through the process of evolution, has devised a useful method for optimizing large-scale nonlinear systems. A genetic optimizer running on a computer efficiently addresses many previously difficult optimization problems by simulating the process of natural evolution.
Nature also uses the principles of quantum mechanics to solve problems, including optimization-type problems, searching-type problems, selection-type problems, etc. through the use of quantum logic. However, the quantum principles, and quantum logic, have not been used with conventional computers because no method existed for programming an algorithm using the quantum logic.
Quantum algorithms are also used in quantum soft computing algorithms for controlling a process. The documents WO 01/67186; WO 2004/012139; U.S. Pat. No. 6,578,018; and U.S. 2004/0024750 disclose methods for controlling a process, in particular for optimizing a shock absorber or for controlling an internal combustion engine.
In particular, the documents U.S. Pat. No. 6,578,018 and WO 01/67186 disclose methods that use quantum algorithms and genetic algorithms for training a neural network that controls a fuzzy controller, which in turn generates a parameter setting signal for a classical PID controller of the process. The quantum algorithms implemented in these methods process a teaching signal generated with a genetic algorithm and provide it to the neural network to be trained.
In practice, quantum algorithms and genetic algorithms are used as substantially separate entities in these control methods. It would be desirable to have an algorithm obtained as a merging of quantum algorithms and genetic algorithms, in order to gain the advantages of both quantum computing and GA parallelism as the partial components of general Quantum Evolutionary Programming.
A Quantum Genetic Algorithm (QGA) for merging genetic algorithms and quantum algorithms is provided. The QGA (as a component of general Quantum Evolutionary Programming) can take advantage of both the quantum computing and GA paradigms.
The general idea is to exploit the quantum effects of the superposition and entanglement operators to create a generalized coherent state, with the increased diversity of a quantum population that stores individuals and the fitness of successful solutions. Using the complementarity between the entanglement and interference operators together with a quantum searching process (based on the interference and measurement operators), successful solutions may be extracted from a designed state. In particular, a major advantage of a QGA may lie in using the increased diversity of a quantum population (due to the superposition of possible solutions) in the optimal search for successful solutions of a non-linear stochastic optimization problem for control objects with uncertain/fuzzy dynamic behavior.
It is an object of the invention to provide a method for performing a quantum algorithm. A difference between this method and other well known quantum algorithms may include that the superposition, entanglement and interference operators are determined for performing selection, crossover and mutation operations according to a genetic algorithm. Moreover, entanglement vectors generated by the entanglement operator of the quantum algorithm may be processed by a wise controller implementing a genetic algorithm, before being input to the interference operator.
This algorithm may be easily implemented with a hardware quantum gate or with a software computer program running on a computer. Moreover, it may be used in a method for controlling a process and in a relative control device which is more robust, requires very little initial information about the dynamic behavior of the control objects during the design of the intelligent control system, and is insensitive (invariant) to random noise in the measurement system and in the control feedback loop.
Another innovative aspect of this invention may comprise a method of performing a genetic algorithm, wherein the selection, crossover and mutation operations are performed by means of the quantum algorithm of this invention.
According to another innovative aspect of this invention, a method of designing quantum gates may be provided. The method may provide a standard procedure to be followed for designing quantum gates. By following this procedure it may be easy to understand how basic gates, such as the well known two-qubit gates for performing a Hadamard rotation or an identity transformation, may be coupled together to realize a hardware quantum gate for classically performing a desired quantum algorithm.
One embodiment may include a software system and method for designing quantum gates. The quantum gates may be used in a quantum computer or a simulation of a quantum computer. In one embodiment, a quantum gate may be used in a global optimization of Knowledge Base (KB) structures of intelligent control systems that may be based on quantum computing and on a quantum genetic search algorithm (QGSA). In another embodiment, an efficient quantum simulation system may be used to simulate a quantum computer for optimization of intelligent control system structures based on quantum soft computing.
The different aspects and advantages of this invention may be even more evident through a detailed description referring to the attached drawings, wherein:
FIGS. 22a-22d illustrate how to design a quantum gate for performing the Deutsch-Jozsa's algorithm in accordance with the invention;
FIGS. 31a to 31d illustrate sample probability amplitudes in a Deutsch-Jozsa's algorithm in accordance with the invention;
a illustrates the result interpretation step in a Grover's quantum algorithm in accordance with the invention;
b shows sample results of the Grover's quantum algorithm in accordance with the invention;
c shows a general scheme of a hardware for performing the Grover's quantum algorithm in accordance with the invention;
A new approach in intelligent control system design is considered using a global optimization problem (GOP) approach based on a quantum soft computing optimizer. This approach is the background for hardware (HW) design of a QGA. In order to better explain the various aspects of this invention, the ensuing description is organized in chapters.
1. O
Structure of quantum genetic search algorithm

The mathematical structure of the QGSA 1003 can be described as a logical set of operations:
where C is the genetic coding scheme of individuals for a given problem; Ev is the evaluation function to compute the fitness values of individuals; P0 is the initial population; L is the size of the population; Ω is the selection operator; χ is the crossover operator; μ is the mutation operator; Sup is the quantum linear superposition operator; Ent is the quantum entanglement operator (quantum super-correlation); Int is the interference operator. The operator Λ represents the termination conditions, which include stopping criteria such as a minimum of the Shannon/von Neumann entropy, the optimum of the fitness functions, and/or minimum risk. The structure of Quantum Evolutionary Programming is a special case of Eq. (1) and is briefly described hereinafter in Chapter 3 about Quantum Evolutionary Programming (QEP).
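By way of illustration, the logical set of operations of Eq. (1) may be sketched as a classical skeleton in Python. All names, the toy fitness function and the fixed iteration budget are hypothetical; the quantum operators Sup, Ent and Int are stubbed, since their concrete forms are given in the following chapters:

```python
import random

random.seed(0)

L_SIZE = 8          # L: population size
N_BITS = 6          # C: individuals coded as bit strings

def Ev(ind):        # Ev: evaluation (fitness) function -- toy "count ones" example
    return sum(ind)

def selection(pop):              # Omega: binary tournament selection
    a, b = random.sample(pop, 2)
    return a if Ev(a) >= Ev(b) else b

def crossover(p1, p2):           # chi: one-point crossover
    cut = random.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutation(ind):               # mu: single-bit flip
    i = random.randrange(N_BITS)
    return ind[:i] + [1 - ind[i]] + ind[i + 1:]

def quantum_step(pop):           # Sup, Ent, Int: stubbed as identity in this sketch
    return pop

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(L_SIZE)]  # P0
for _ in range(30):              # Lambda: fixed iteration budget as the stopping rule
    pop = quantum_step(pop)
    pop = [mutation(crossover(selection(pop), selection(pop))) for _ in range(L_SIZE)]
best = max(pop, key=Ev)
```

In the full QGSA the stubbed quantum_step would apply the superposition, entanglement and interference operators, and Λ would test the entropy-based stopping criteria instead of a fixed budget.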
On control physical level, in the system 2000, a disturbance block 2003 produces external disturbances (e.g., noise) on a control object model 2004 (the model 2004 includes a model of the controlled object). An output of the model block 2004 is the response of the controlled object and is provided to an input of a GA block 2002.
The GA block 2002 includes GA operators (mutation in a mutation block 2006, crossover in a crossover block 2007 and selection in a selection block 2008) and two fitness functions: a Fitness Function I 2005 for the GA; and a Fitness Function II 2015 for a wise controller 2013 of QSA (Quantum Search Algorithm) termination. Output of the GA block 2002 is input for a KB block 2009 that represents the Knowledge Bases of fuzzy controllers for different types of external excitations from block 2003. An output of block 2009 is provided to a coding block 2010 that provides coding of function properties in look-up tables of fuzzy controllers.
Thus, outputs from the coding block 2010 are provided to a superposition block 2011. An output of the superposition block 2011 (after applying the superposition operator) represents a joint Knowledge Base for fuzzy control. The output from the superposition block 2011 is provided to an entanglement block 2012 that realizes the entanglement operator and chooses marked states using an oracle model. An output of the entanglement block 2012 includes marked states that are provided to a comparator 2018. The output of the comparator 2018 is an error signal that is provided to the wise controller 2013. The wise controller 2013 solves the termination problem of the QSA. Output from the wise controller 2013 is provided to an interference block 2014 that describes the interference operator of the QSA. The interference block 2014 extracts the solutions. Outputs of the wise controller 2013 and the interference block 2014 are used to calculate the corresponding values of Shannon and von Neumann entropies.
The differences of Shannon and von Neumann entropies are calculated by a comparator 2019 and provided to the Fitness Function II 2015. The wise controller 2013 provides an optimal signal for termination of the QSA with measurement in a measurement block 2016 with “good” solutions as answers in an output of Block 2017.
On gate level, in the QGSA 2000, a superposition block 2011 provides a superposition of classical states to an entanglement block 2012. The entanglement block 2012 provides the entangled states to an interference block 2014. In one embodiment, the interference block 2014 uses a Quantum Fast Fourier Transform (QFFT) to generate interference. The interference block 2014 provides transformed states to a measurement and observation/decision block 2013 as wise controller. The observation block 2013 provides observations (control signal u*) to a measurement block 2016. The observation/decision block 2013 includes a fitness function to configure the interference provided in the interference block 2014. Decision data from the decision block 2013 is decoded in a decoding block 2017 and using stopping information criteria 2015, a decision regarding the termination of the algorithm is made. If the algorithm does not terminate, then decision data are provided to the superposition block 2011 to generate a new superposition of states.
Therefore, the superposition block 2011 creates a superposition of states from classical states obtained from the soft computing simulation. The entanglement block 2012 creates entanglement states controlled by the GA 2002. The interference block 2014 applies the interference operations described by the fitness function in the decision block 2005. The decision block 2013 and the stopping information block 2015 determine the QA's stopping problem based on criteria of minimum Shannon/Von Neumann entropy. An example of how the GA 2002 modifies the superposition, entanglement and interference operators, as schematically represented in
The following chapter 3 illustrates how the GA controls the execution of each operation of the quantum search algorithm in practical cases.
A general Quantum Algorithm (QA), written as a Quantum Circuit, can be automatically translated into the corresponding Programmable Quantum Gate for efficient classical simulation of an intelligent control system based on Quantum (Soft) Computing. This gate is represented as a quantum operator in matrix form such that, when it is applied to the vector input representation of the quantum register state, the result is the vector representation of the desired register output state.
In general form, the structure of a QAG can be described as follows:
QAG = [(Int ⊗ ^{n}I) · U_F]^{h+1} · [^{n}H ⊗ ^{m}S]  (2)

where I is the identity operator; the symbol ⊗ denotes the tensor product; S is equal to I or H and depends on the problem description. One portion of the design process in Eq. (2) is the type-choice of the problem-dependent entanglement operator U_F that physically describes the qualitative properties of the function ƒ (such as, for example, the FC-KB in a QSC simulation).
The coherent intelligent states of QA's that describe physical systems are those solutions of the corresponding Schrödinger equations that represent the evolution states with minimum of entropic uncertainty (in Heisenberg-Schrödinger sense, they are those quantum states with “maximum classical properties”). The Hadamard Transform creates the superposition on classical states, and quantum operators such as CNOT create robust entangled states. The Quantum Fast Fourier Transform (QFFT) produces interference.
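The three operator roles named above (Hadamard for superposition, CNOT for entanglement, QFT for interference) may be illustrated with a minimal classical simulation in Python; the helper names are hypothetical:

```python
import cmath
import math

# Walsh-Hadamard gate: creates an equal superposition from a basis state.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# CNOT gate: creates entanglement between two qubits (basis order |00>,|01>,|10>,|11>).
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def qft(n):
    """Quantum Fourier Transform matrix on n qubits (dimension 2**n)."""
    N = 2 ** n
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (i * j) / math.sqrt(N) for j in range(N)] for i in range(N)]

def apply(U, state):
    """Apply an operator (matrix) U to a state vector."""
    return [sum(U[i][j] * state[j] for j in range(len(state))) for i in range(len(U))]

# H|0> yields the uniform superposition (|0> + |1>)/sqrt(2).
plus = apply(H, [1, 0])

# CNOT applied to (H|0>) tensor |0> yields the entangled Bell state (|00> + |11>)/sqrt(2).
bell = apply(CNOT, [plus[0], 0, plus[1], 0])
```

The resulting bell vector has non-zero amplitudes only on |00⟩ and |11⟩, which is the robust entangled state mentioned above.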
The efficient implementations of a number of operations for quantum computation include controlled phase adjustment of the amplitudes in the superposition, permutation, approximation of transformations and generalizations of the phase adjustments to block matrix transformations. These operations generalize those used in quantum search algorithms (QSA) that can be realized on a classical computer. This approach is applied below (see Chapter 4) to the efficient simulation on classical computers of the Deutsch QA, the Deutsch-Jozsa QA, the Simon QA, the Shor's QA and/or the Grover QA and any control QSA for simulation of a robust KB (Knowledge Base) of fuzzy control for P-, PD-, or PID-controllers with different random excitations on control objects, or with different noises in information/control channels of intelligent control systems.
2. Structure and main quantum operations of QA simulation system
The common functions include: Superposition building blocks, Interference building blocks, Bra-Ket functions, Measurement operators, Entropy calculation operators, Visualization functions, State visualization functions, and Operator visualization functions.
The algorithm-specific functions include: Entanglement encoders, Problem transformers, Result interpreters, Algorithm execution scripts, Deutsch algorithm execution script, Deutsch Jozsa's algorithm execution script, Grover's algorithm execution script, Shor's algorithm execution script, and Quantum control algorithms as scripts.
The superposition building blocks implement the superposition operator as a combination of the tensor products of Walsh-Hadamard H operators with the identity operator I:
For most algorithms, the superposition operator can be expressed as:

Sup = ^{k1}H ⊗ ^{k2}S

where k1 and k2 are the numbers of inclusions of H and of S in the corresponding tensor products. The values of k1, k2 depend on the concrete algorithm and can be obtained from Table 1. The operator S, depending on the algorithm, may be the Walsh-Hadamard operator H or the identity operator I.
TABLE 1

Algorithm      | n | k1 | k2 | S | Superposition operator
Deutsch        | 2 | 1  | 1  | I | H ⊗ I
Deutsch-Jozsa  | 3 | 2  | 1  | H | ^{2}H ⊗ H
Grover         | 3 | 2  | 1  | H | ^{2}H ⊗ H
Simon          | 4 | 2  | 2  | I | ^{2}H ⊗ ^{2}I
Shor           | 4 | 2  | 2  | I | ^{2}H ⊗ ^{2}I
It is convenient to automate the process of the calculation of the tensor power of the Walsh-Hadamard operator as follows:

[^{n}H]_{i,j} = (−1)^{i∧j} / 2^{n/2}  (3)

where i∧j denotes the number of common non-zero bits in the binary representations of i and j, and i = 0, 1, . . . , 2^n − 1, j = 0, 1, . . . , 2^n − 1.
The tensor power of the identity operator can be calculated as follows:

[^{n}I]_{i,j} = 1 if i = j, 0 if i ≠ j,  (4)

where i = 0, 1, . . . , 2^n − 1, j = 0, 1, . . . , 2^n − 1.
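The closed forms of Eqs. (3) and (4) may be checked numerically against the explicit tensor powers; a minimal Python sketch (helper names are hypothetical):

```python
import math

def kron(A, B):
    """Kronecker (tensor) product of two matrices given as lists of rows."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def tensor_power(M, n):
    """n-fold tensor power of a matrix M."""
    out = M
    for _ in range(n - 1):
        out = kron(out, M)
    return out

H1 = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
      [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def nH(n):
    """Eq. (3): entry (i, j) is (-1)^(number of common 1-bits of i and j) / 2^(n/2)."""
    return [[(-1) ** bin(i & j).count("1") / 2 ** (n / 2)
             for j in range(2 ** n)] for i in range(2 ** n)]

def nI(n):
    """Eq. (4): identity matrix of dimension 2^n."""
    return [[1 if i == j else 0 for j in range(2 ** n)] for i in range(2 ** n)]
```

For any n, nH(n) agrees entry-by-entry with the n-fold Kronecker power of the 2×2 Walsh-Hadamard matrix, which is what makes the closed form useful for automation.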
Then any superposition operator can be presented as a block matrix of the following form:

[Sup]_{i,j} = 2^{−k1/2} (−1)^{i∧j} [^{k2}S]  (5)

where i, j = 0, . . . , 2^{k1} − 1.
For the superposition operator of Deutsch's algorithm: n=2, k1=1, k2=1, S=I:
The superposition operator of Deutsch-Jozsa's and of Grover's algorithms has n=3, k1=2, k2=1, S=H:
The superposition operator of Simon's and of Shor's algorithms has n=4, k1=2, k2=2, S=I:
The interference blocks implement the interference operator that, in general, is different for all algorithms. By contrast, the measurement part tends to be the same for most of the algorithms. The interference blocks compute the k2 tensor power of the identity operator.
The interference operator of Deutsch's algorithm is a tensor product of two Walsh-Hadamard transformations, and can be calculated in general form using Eq. (3) with n=2:
Note that in Deutsch's algorithm the Walsh-Hadamard transformation in the interference operator is also used as the measurement basis.
The interference operator of Deutsch-Jozsa's algorithm is a tensor product of the k1 power of the Walsh-Hadamard operator with an identity operator. In general form, the block matrix of the interference operator of Deutsch-Jozsa's algorithm can be written as:

[Int]_{i,j} = 2^{−k1/2} (−1)^{i∧j} [^{k2}I]  (10)

where i, j = 0, . . . , 2^{k1} − 1.
The interference operator of Deutsch-Jozsa's algorithm for n=3, k1=2, k2=1:
The interference operator of Grover's algorithm can be written as a block matrix of the following form:

[Int]_{i,j} = (1/2^{k1−1} − δ_{i,j}) [^{k2}I]

where i, j = 0, . . . , 2^{k1} − 1, and δ_{i,j} is the Kronecker delta.
Thus, the interference operator of Grover's algorithm for n=3, k1=2, k2=1 is constructed as follows:
Note that as the number of qubits increases, the gain coefficient 1/2^{k1−1} becomes smaller and the dimension of the matrix increases according to 2^{k1}.
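The construction of the Grover interference (diffusion) operator, including the shrinking gain coefficient, may be sketched in Python as a classical simulation; the helper names are hypothetical:

```python
def kron(A, B):
    """Kronecker (tensor) product of two matrices."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    """Ordinary matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    """Identity matrix of dimension 2^n."""
    return [[1 if i == j else 0 for j in range(2 ** n)] for i in range(2 ** n)]

def diffusion(k1):
    """Grover diffusion on k1 qubits: D_{ij} = 2/2^{k1} - delta_{ij}.

    The off-diagonal gain coefficient 2/2^{k1} shrinks as k1 grows.
    """
    N = 2 ** k1
    return [[2 / N - (1 if i == j else 0) for j in range(N)] for i in range(N)]

D = diffusion(2)              # k1 = 2, as in the 3-qubit Grover example
Int = kron(D, identity(1))    # full interference operator with the k2 = 1 qubit untouched
```

The diffusion matrix is its own inverse (D·D = I), which is one way to check that the constructed interference operator is unitary.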
The interference operator of Simon's algorithm is prepared in the same manner as the superposition operators of Shor's and of Simon's algorithms and can be described as follows (see Eqs. (5), (8)):
In general, the interference operator of Simon's algorithm is similar to the interference operator of Deutsch-Jozsa's algorithm Eq. (10), but each block of the operator matrix Eq. (14) is a k2 tensor product of the identity operator.
Each odd block (when the bitwise product of the indexes is an odd number) of the Simon's interference operator Eq. (14) has a negative sign, while the even blocks have a positive sign.
The interference operator of Shor's algorithm uses the Quantum Fourier Transformation operator (QFT), calculated as:

[QFT]_{i,j} = 2^{−k1/2} e^{J·2π·i·j / 2^{k1}}

where J is the imaginary unit and i, j = 0, . . . , 2^{k1} − 1.
With k1=1:
Eq. (16) can be also presented in harmonic form using Euler's formula:
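The harmonic (Euler) form of the QFT entries may be written directly in Python; the function names are hypothetical:

```python
import cmath
import math

def qft_entry(i, j, k1):
    """QFT entry in harmonic form: 2^(-k1/2) * (cos(theta) + J*sin(theta)),
    with theta = 2*pi*i*j / 2^k1 (Euler's formula applied to the exponential form)."""
    theta = 2 * math.pi * i * j / 2 ** k1
    return (math.cos(theta) + 1j * math.sin(theta)) / 2 ** (k1 / 2)

def qft(k1):
    """Full QFT matrix on k1 qubits (dimension 2^k1)."""
    N = 2 ** k1
    return [[qft_entry(i, j, k1) for j in range(N)] for i in range(N)]
```

By Euler's formula the harmonic form coincides with the exponential form, and each row of the resulting matrix has unit norm, as required for a unitary operator.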
Bra and Ket functions are the functions used to assign to quantum qubits their actual representation as a corresponding row or column vector, using the relation |x⟩ = (0 . . . 1 . . . 0)ᵀ, with the single 1 in position x, and ⟨x| = |x⟩†.
These functions are used for specification of the input of the QA, for calculation of the density matrices of intermediate quantum states, and for fidelity analysis of the QA.
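A minimal sketch of such Bra and Ket functions in Python (the helper names are hypothetical):

```python
def ket(x, n):
    """|x>: column vector of dimension 2^n with a single 1 in position x."""
    return [[1 if i == x else 0] for i in range(2 ** n)]

def bra(x, n):
    """<x|: the (conjugate-)transposed row vector of |x>."""
    return [[1 if j == x else 0 for j in range(2 ** n)]]

def braket(b, k):
    """Inner product <x|y>: 1 if the basis states coincide, 0 otherwise."""
    return sum(b[0][i] * k[i][0] for i in range(len(k)))
```

These vectors serve as the inputs of the simulated QA and as the building blocks of density matrices for the fidelity analysis mentioned above.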
Measurement operators are used to perform the measurement of the current superposition of the state vectors. A QA produces a superposition of the quantum states, in general described as:

|ψ⟩ = Σ_{i=1}^{2^n} α_i |i⟩  (20)

During quantum processing in the QA, the probability amplitudes α_i of the quantum states |i⟩, i = 1, . . . , 2^n, are transformed in such a way that the probability amplitude α_result of the answer quantum state |result⟩ becomes larger than the amplitudes of the remaining quantum states. The measurement operator outputs the state vector |result⟩. When all α_i are equal, i = 1, . . . , 2^n, the measurement operator returns an error message.
Entropy calculation operators are used to estimate the entropy of the current quantum state. Consider the quantum superposition state Eq. (20). The Shannon entropy of the quantum state Eq. (20) is calculated as:

Sh = − Σ_{i=1}^{2^n} |α_i|² log |α_i|²  (21)

The objective of minimizing the quantity in Eq. (21) can be used as a termination condition for the QA iterations. The Shannon entropy describes the uncertainty of the quantum state: it is high when the quantum superposition has many states with equal probability. The minimum possible value of the Shannon entropy is equal to the number k2 of outputs (see Table 1) of the QA.
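The entropy-based termination criterion may be sketched in Python for a generic amplitude vector; the function name is hypothetical:

```python
import math

def shannon_entropy(amplitudes):
    """Shannon entropy (in bits) of the measurement distribution p_i = |a_i|^2,
    summing only over the non-zero probabilities."""
    probs = [abs(a) ** 2 for a in amplitudes]
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 3
uniform = [1 / 2 ** (n / 2)] * 2 ** n     # equal-probability superposition: maximal uncertainty
collapsed = [0] * (2 ** n - 1) + [1]      # a single answer state: zero uncertainty
```

The uniform superposition of 2^n states yields the maximal entropy n, while a fully collapsed state yields zero; iterating the QA until the entropy drops below a threshold implements the termination condition of Eq. (21).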
Visualization functions are functions that provide the visualization display of the quantum state vector amplitudes and of the structure of the quantum operators.
Algorithm specific functions provide a set of scripts for QA execution in command line and tools for simulation of the QA, including quantum control algorithms. The functions of section 2 prepare the appropriate operators of each algorithm, using as operands the common functions.
3. Quantum Evolutionary Programming (QEP) and learning control of quantum operators in QGSA with genetic operators

The so-called Quantum Evolutionary Programming has two major sub-areas: Quantum Inspired Genetic Algorithms (QIGAs) and Quantum Genetic Algorithms (QGAs). The former adopts qubit chromosomes as representations and employs quantum gates in the search for the best solution. The latter tries to address a key question in this field: what will GAs look like when implemented on quantum hardware? An important point for QGAs is to build a quantum algorithm that takes advantage of both GA parallelism and quantum computing parallelism, as well as of the true randomness provided by quantum computing. Below, the differences and the common features, such as parallelism, of GAs and quantum algorithms are compared.
3.1. Genetic/Evolutionary computation and programming

Evolutionary computation is a self-organizing and self-adaptive intelligent technique which mimics the process of natural evolution. According to Darwinism and Mendelism, it is through reproduction, mutation, selection and competition that the evolution of life is fulfilled.
Simply stated, GAs are stochastic search algorithms based on the mechanics of natural selection and natural genetics. GAs are applied for their capability of searching large and non-linear spaces where traditional methods are not efficient, and are also attractive for their capability of searching for a solution in unusual spaces, such as in the learning of quantum operators and in the design of quantum circuits. An important point in GA design is to build an algorithm that takes advantage of computing parallelism.
Some problems exist in the initialization of GAs. They can be very demanding in terms of computation and memory, and sequential GAs may get trapped in a sub-optimal region of the search space and thus may be unable to find good quality solutions. Parallel genetic algorithms (PGAs) have therefore been proposed to solve more difficult problems that need large populations. PGAs are parallel implementations of GAs which can provide considerable gains in terms of performance and scalability. The most important advantage of PGAs is that in many cases they provide better performance than single-population algorithms, even when the parallelism is simulated on conventional computers. PGAs are not merely an extension of the traditional sequential GA model; they represent a new class of algorithms in that they search the space of solutions differently. Existing parallel implementations of GAs can be classified into three main types: (i) global single-population master-slave GAs; (ii) massively parallel GAs; and (iii) distributed GAs.
Global single-population master-slave GAs explore the search space exactly as a sequential GA, are easy to implement, and significant performance improvements are possible in many cases. Massively parallel GAs are also called fine-grained PGAs and are suited for massively parallel computers. Distributed GAs are also called coarse-grained or island-based PGAs and are the most popular parallel methods because of their small communication overhead and their diversification of the population. The evolutionary algorithm (EA) is a random searching algorithm based on the above model. It is the origin of the genetic algorithm (GA), which is derived from machine learning; of evolutionary strategies (ES), introduced by Rechenberg, “Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution,” Stuttgart, Germany: Frommann-Holzboog, 1973, and Schwefel, “Evolution and optimum seeking,” N.Y.: Wiley, 1995, in numerical optimization; and of evolutionary programming (EP).
EP is an efficient algorithm for solving optimization problems, but standard EP suffers from slow convergence. Compared with GA, EP has some different characteristics. First, the evolution of GA operates on the loci of the chromosome, while EP operates directly on the population's behavior.
Second, GA is based on Darwinism and genetics, so crossover is the major operator. EP stresses the evolution of species, so there are no operations acting directly on the genes, such as crossover; mutation is the only operator used to generate new individuals. Thus mutation is the breakthrough point of EP. Cauchy mutation and logarithm-normal distribution mutation algorithms are examples which have improved the performance of EP.
Third, there is a transformation between genotype and phenotype in GA, which does not exist in EP. Fourth, the evolution of EP is smooth and much steadier than that of GA; however, it relies heavily on its initial distribution.
From the point of view of the evolution mechanism, EP that adopts Gauss mutation to generate offspring is characterized by a slow convergence speed. Therefore, finding more efficient algorithms to speed up the convergence and improve the quality of solutions has become an important subject in the research on EP.
3.2. The fundamental results of quantum computation say that any computation can be expanded into a circuit whose nodes are the universal gates, and that in quantum computing a universal quantum simulator is possible. These gates offer an expansion of the unitary operator U that evolves the system in order to perform some computation.
Thus, two problems naturally arise: (1) given a set of functional points S={(x,y)}, find the operator U such that y=U·x; and (2) given a problem, find the quantum circuit that solves it. The former can be formulated in the context of GAs as a learning problem, while the latter can be addressed through evolutionary strategies.
Quantum computing has a feature called quantum parallelism that cannot be replicated by classical computation without an exponential slowdown. This unique feature turns out to be the key to most successful quantum algorithms. Quantum parallelism refers to the process of evaluating a function once on a superposition of all possible inputs to produce a superposition of all possible outputs. This means that all possible outputs are computed in the time required to calculate just one output with a classical computation. Superposition enables a quantum register to store exponentially more data than a classical register of the same size. Whereas a classical register with N bits can store one value out of 2^N, a quantum register can be in a superposition of all 2^N values. An operation applied to the classical register produces one result. An operation applied to the quantum register produces a superposition of all possible results. This is what is meant by the term “quantum parallelism.”
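Quantum parallelism may be sketched in a classical simulation: the register state is a vector of amplitudes over pairs (x, y), and the operator U_f acts as the basis permutation |x, y⟩ → |x, y⊕f(x)⟩. The function f below is a hypothetical example:

```python
import math

n = 3
N = 2 ** n

def f(x):
    """Hypothetical example function evaluated 'in parallel'."""
    return x % 2

# Uniform superposition of all inputs x, with the ancilla register y = 0.
state = {(x, 0): 1 / math.sqrt(N) for x in range(N)}

# A single application of U_f: a permutation of basis states |x, y> -> |x, y XOR f(x)>.
out = {(x, y ^ f(x)): amp for (x, y), amp in state.items()}
```

After one application, every pair (x, f(x)) is present in the superposition with equal amplitude: all outputs are computed "at once", although a measurement would reveal only one of them, as discussed next.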
Unfortunately, all of these outputs cannot be obtained so easily. Once a measurement is taken, the superposition collapses. Consequently, the promise of massive parallelism is offset by the inability to take direct advantage of it. This situation can be changed with the application of a hybrid algorithm (one part being a Quantum Turing Machine (QTM) and the other a classical Turing Machine), as in Shor's quantum factoring algorithm, which took advantage of quantum parallelism by using a Fourier transform.
3.3. Quantum Genetic Algorithm's model

The QGA sketched here takes advantage of both quantum computing and GA parallelism. The key idea is to exploit the quantum effects of superposition and entanglement to create a physical state that stores individuals and their fitness. When the fitness is measured, the system collapses to a superposition of the states that have the observed fitness. The QGA starts from this idea, which can take advantage of both the quantum computing and GA paradigms.
Again, the difficulty is that a measurement of the quantum result collapses the superposition so that only one result is measured. At this point, it may seem that little has been gained. However, depending upon the function being applied, the superposition of answers may have common features that the interference operators can exploit. If these features can be ascertained, it may be possible to divine the answer being searched for probabilistically.
The next key feature to understand is entanglement. Entanglement is a quantum (correlation) connection between superimposed states. Entanglement produces a quantum correlation between the original superimposed qubit and the final superimposed answer, so that when the answer is measured, collapsing the superposition into one answer or the other, the original qubit also collapses into the value (0 or 1) that produces the measured answer. In fact, it collapses to all possible values that produce the measured answer. For example, as mentioned above, the key step in the QGA is the fitness measurement of a quantum individual. We begin by calculating the fitness of the quantum individual and storing the result in the individual's fitness register. Because each quantum individual is a superposition of classical individuals, each with a potentially different fitness, the result of this calculation is a superposition of the fitnesses of the classical individuals. This calculation is made in such a way as to produce an entanglement between the register holding the individual and the register holding the fitness(es).
An interference operation is used after an entanglement operator for the extraction of successful solutions from superposed outputs of quantum algorithms. The well-known complementarity or duality of particle and wave is one of the deep concepts in quantum mechanics. A similar complementarity exists between entanglement and interference. The entanglement measure is a decreasing function of the visibility of interference.
Example: complementarity of entanglement and interference. Let us consider the complementarity in a simple two-qubit pure state case. Consider the entangled state |ψ⟩ = a|0₁⟩|0₂⟩ + b|1₁⟩|1₂⟩ with the constraint of unitarity a² + b² = 1. Then make a unitary transformation on the first qubit, |0₁⟩ → cos α|0₁⟩ + sin α|1₁⟩, |1₁⟩ → cos α|1₁⟩ − sin α|0₁⟩, and obtain

|ψ⟩ → |ψ′⟩ = a(cos α|0₁⟩ + sin α|1₁⟩)|0₂⟩ + b(cos α|1₁⟩ − sin α|0₁⟩)|1₂⟩.
Finally observe the first qubit without caring about the second one. The probability to get the state |0₁⟩ is
P(0₁) = a² cos²α + b² sin²α = ½[1 + (a² − b²) cos 2α],
which is a typical interference pattern if we regard the angle α as a control parameter. The visibility of the interference is Γ ≡ |a² − b²|, which vanishes when the initial state is maximally entangled, i.e., a² = b², while it becomes maximum when the state is separable, i.e., a = 0 or b = 0. On the other hand, the entanglement measure is the partially traced von Neumann entropy as follows:
E ≡ S(ρred) = −a² log a² − b² log b²,
where the reduced density operator is
ρred = Tr₂|ψ⟩⟨ψ| = a²|0₁⟩⟨0₁| + b²|1₁⟩⟨1₁|.
The entanglement takes the maximum value E = 1 when a² = b² and the minimum value E = 0 for a = 0 or b = 0. Thus, the more the state is entangled, the less the visibility of the interference, and vice versa. Another popular measure of entanglement, the negativity, may be better for a quick illustration. The negativity is minus twice the smallest eigenvalue of the partial transpose of the density matrix; in this case, it is N = 2|ab|. The complementarity for this case is as follows: N² + Γ² = 1. This constraint between the entanglement and the interference comes from the condition of unitarity: a² + b² = 1. Thus, in quantum algorithms these measures of entanglement and interference are not independent, and the efficiency of extracting successful solutions of quantum algorithms is correlated with equilibrium interrelations between these measures.
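The complementarity relation N² + Γ² = 1 can be checked numerically. The following sketch (the function name is illustrative, not from the source) evaluates the visibility, the negativity and the reduced von Neumann entropy for a normalized pair (a, b):

```python
import math

# Sketch: measures for the two-qubit state a|0,0> + b|1,1> with a^2 + b^2 = 1.
def entanglement_measures(a: float, b: float):
    # Interference visibility Gamma = |a^2 - b^2|
    gamma = abs(a**2 - b**2)
    # Negativity N = 2|ab| (minus twice the smallest eigenvalue of the
    # partial transpose of the density matrix)
    negativity = 2 * abs(a * b)
    # Von Neumann entropy of the reduced density operator,
    # E = -a^2 log2 a^2 - b^2 log2 b^2  (with 0*log 0 := 0)
    def h(p):
        return -p * math.log2(p) if p > 0 else 0.0
    entropy = h(a**2) + h(b**2)
    return gamma, negativity, entropy

# Maximally entangled state: a = b = 1/sqrt(2) -> no visibility, E = N = 1
g, n, e = entanglement_measures(1 / math.sqrt(2), 1 / math.sqrt(2))
assert abs(g) < 1e-12 and abs(n - 1) < 1e-12 and abs(e - 1) < 1e-12

# Separable state: a = 1, b = 0 -> full visibility, no entanglement
g, n, e = entanglement_measures(1.0, 0.0)
assert g == 1.0 and n == 0.0 and e == 0.0

# The complementarity N^2 + Gamma^2 = 1 holds for any normalized (a, b)
a, b = math.cos(0.3), math.sin(0.3)
g, n, _ = entanglement_measures(a, b)
assert abs(n**2 + g**2 - 1) < 1e-12
```

The last assertion is just the unitarity constraint a² + b² = 1 rewritten, which is why the two measures are not independent.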
3.3.1. Learning control of quantum operator in QGSA with genetic operators. The QGA is similar to the classical GA in that it allows the use of any fitness function that can be calculated on a QTM (Quantum Turing Machine) without collapsing a superposition, which is generally a simple requirement to meet. The QGA differs from the classical GA in that each individual is a quantum individual. In the classical GA, when selecting an individual to perform crossover or mutation, exactly one individual is selected. This is true regardless of whether there are other individuals with the same fitness. This is not the case with a quantum algorithm. By selecting an individual, all individuals with the same fitness are selected. In effect, this means that a single quantum individual in reality represents multiple classical individuals.
Thus, in QGA, each quantum individual is a superposition of one or more classical individuals. To do this several sets of quantum registers are used. Each individual uses two registers: (1) the individual register; and (2) the fitness register. The first register stores the superimposed classical individuals. The second register stores the quantum individual's fitness.
At different times during the QGA, the fitness register will hold a single fitness value (or a quantum superposition of fitness values). A population consists of N of these quantum individuals.
Example. Let us consider the tensor product of the qubit chromosomes as follows:
(α₁|0⟩ + β₁|1⟩) ⊗ (α₂|0⟩ + β₂|1⟩) ⊗ (α₃|0⟩ + β₃|1⟩).
Thus, the qubit chromosome will be represented as a superposition of the states |i₁i₂i₃⟩, i₁, i₂, i₃ ∈ {0,1}, and so it carries information about all of them at the same time.
Such an observation points out the fact that the qubit representation has a better characteristic of diversity than the classical approaches, since it can represent a superposition of states. With classical representations in the abovementioned example, we would need at least 2³ = 8 chromosomes to keep the information carried in the state, while only one 3-qubit chromosome is enough in the QGA case.
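The diversity claim above can be illustrated with a small sketch; the equal-amplitude choice below is illustrative, not mandated by the source:

```python
from itertools import product

# Sketch: a 3-qubit chromosome as an equal superposition of the 2^3 = 8
# classical chromosomes |i1 i2 i3>, i_k in {0,1}.
amp = (1 / 2) ** 1.5  # each basis state gets amplitude 1/sqrt(8)
chromosome = {''.join(bits): amp for bits in product('01', repeat=3)}

# one 3-qubit register carries the information of 8 classical strings
assert len(chromosome) == 8
# normalization: the squared amplitudes sum to 1
assert abs(sum(a * a for a in chromosome.values()) - 1) < 1e-12
```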
Thus, QGA uses two registers for each quantum individual. The first one stores an individual, while the second one stores the individual's fitness. A population of N quantum individuals is stored through pairs of registers
Ri={(individual-register)i,(fitness-register)i}, i=1, 2 . . . , N.
Once a new population is generated, the fitness for each individual is calculated and the result stored in the individual's fitness register.
According to the laws of quantum mechanics, the effect of the fitness measurement is a collapse, and this process reduces each quantum individual to a superposition of classical individuals with a common fitness. It is an important step in the QGA. Then the crossover and mutation operations are applied. A more significant advantage of QGAs is an increase in the production of good building blocks (the same as schemata in classical GAs) because, during the crossover, the building block is crossed with a superposition of many individuals instead of with only one as in classical GAs (see examples below).
To improve the convergence we also need better evolutionary (crossover/mutation) strategies. The evolutionary strategies are efficient for getting closer to the solution, but not for completing the learning process, which can be realized efficiently with a fuzzy neural network (FNN).
3.3.2. Physical requirements for crossover and mutation operator models in QGAs. In QGAs, each chromosome represents a superposition of all possible solutions in a certain distribution, and any operation performed on such a chromosome will affect all possible solutions it represents. Thus, the genetic operators defined on the quantum probability representation have to satisfy the requirement that they be of the same efficiency for all possible solutions one chromosome represents.
In general, constrained search procedures like imaginary-time propagation frequently become trapped in a local minimum. The probability of trapping can be reduced, to some extent, by introducing a certain degree of randomness or noise (and in fact this can be achieved by increasing the time-step of the propagation). However, random searches are not efficient for problems involving complex hyper-surfaces, as is the case of the ground state of a system under the action of a complicated external potential. A completely different and unconventional approach for optimization of quantum systems is based on a genetic algorithm (GA), a technique which resembles the process of evolution in nature. The GA belongs to a new generation of so-called intelligent global optimization techniques: it is a global search method that simulates the process of evolution in nature. It starts from a population of individuals represented by chromosomes. The individuals go through the process of evolution, i.e., the formation of the offspring from a previous population containing the parents. The selection procedure is based on the principle of the survival of the fittest. Thus, the main ingredients of the method are a fitness function and genetic operations on the chromosomes. The main advantage of GA over other search methods is that it handles problems in highly nonlinear, multidimensional spaces with surprisingly high speed and efficiency. Furthermore, it performs a global search and therefore avoids, to a large extent, local minima. Another important advantage is that it does not require any gradient to perform the optimization. Due to these properties of the GA, the extension to higher dimensions and more particles is numerically less expensive than for other methods.
Thus in classical GA, the purpose of crossover is to exchange information between individuals. Consequently, when selecting individuals to perform crossover, or mutation, exactly one individual is selected. This is true regardless of whether there are other individuals with the same fitness. This is not the case with a QGA.
As mentioned above in the Summary, the major advantage of a QGA is the increased diversity of a quantum population. A quantum population can be exponentially larger than a classical population of the same size because each quantum individual is a superposition of multiple classical individuals. Thus, a quantum population is effectively much larger than a similar classical population. This effective size is decreased during the fitness operation, when the superposition is reduced to only individuals with the same fitness.
However, it is increased during the crossover operation. Consider two quantum individuals consisting of N and M superpositions each. One-point crossover between these individuals results in offspring that are the superposition of N·M classical individuals. Thus, in the QGA, crossover increases the effective size of the population in addition to increasing its diversity.
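A toy sketch of this effect follows; the set representation of a superposition and the helper name are illustrative, not the patent's operator:

```python
# Sketch: one-point crossover between two quantum individuals, each stored
# as a set of superposed classical bit strings. Crossing an N-fold with an
# M-fold superposition yields up to 2*N*M classical offspring strings.
def quantum_crossover(pop_a, pop_b, cut):
    offspring = set()
    for a in pop_a:
        for b in pop_b:
            offspring.add(a[:cut] + b[cut:])
            offspring.add(b[:cut] + a[cut:])
    return offspring

ind1 = {'0011', '1100'}          # superposition of N = 2 individuals
ind2 = {'0101', '1010', '1111'}  # superposition of M = 3 individuals
children = quantum_crossover(ind1, ind2, cut=2)
# the effective population grows beyond either parent superposition
assert len(children) > max(len(ind1), len(ind2))
```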
There is a further benefit to quantum individuals. Consider the case of two individuals of relatively high fitness. If these are classical individuals, it is possible that these individuals are relatively incompatible. That is, any crossover between them is unlikely to produce a very fit offspring. Thus, after crossover, it is likely that the offspring of these individuals will not be selected and their good “genes” will be lost to the GA. If instead these are two quantum individuals, all individuals of the same high fitness are in a superposition. As such, it is very unlikely that all of these individuals are incompatible, and it is almost certain that some highly fit offspring will be produced during crossover. At a minimum, the number of good offspring is no worse than in the classical case. This is a clear advantage of the QGA.
Consider the appearance of a new building block in a QGA. As mentioned above, during crossover, the building block is not crossed with only one other individual (as in classical GA). Instead, it is crossed with a superposition of many individuals. If that building block creates fit offspring with most of the individuals, then by definition, it is a good building block. Furthermore, it is clear that in measuring the superimposed fitness, one of the “good” fitnesses is likely to be measured (because there are many of them), thereby preserving that building block. In effect, by using superimposed individuals, the QGA removes much of the randomness of the GA. Thus, the statistical advantage of good building blocks should be much greater in the QGA. This should cause the number of good building blocks to grow much more rapidly.
One can also view the evolutionary process as a dynamic map in which populations tend to converge on fixed points in the population space. From this point of view, the advantage of a QGA is that the large effective size allows the population to sample from more basins of attraction. Thus, it is much more likely that the population will include members in the basins of attraction for the higher fitness solutions.
Therefore, in a QGA the evolution information of each individual is well contained in its contemporary evolution target (high fitness). In this case, the contemporary evolution target represents the current evolution state of one individual that has the best solution corresponding to its current fitness. Because the contemporary evolution target represents the current evolution state of one individual, by exchanging the contemporary evolution targets of two individuals through the crossover operator, the evolution process of one individual will be influenced by the evolution state of the other.
Example: Crossover operator. The crossover operator for this case satisfies the above requirement:
Thus with this model of crossover operator, the evolution process of one individual will be influenced by the evolution state of the other one.
Example: Mutation operator. The purpose of mutation is to slightly disturb the evolution states of some individuals, and to prevent the algorithm from falling into a local optimum. The requirement for designing mutation resembles that for designing crossover. As a probing research, a single-qubit mutation operator can be used, but the idea can be generalized easily to multiple-qubit scenarios. The following is the procedure of the mutation operator:
Clearly, the mutation operator defined above has the same efficiency to all the superposition states.
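A minimal sketch of such a mutation acting uniformly on every superposed string follows; the dictionary representation and the helper names are illustrative, not the patent's exact operator:

```python
# Sketch: a single-qubit mutation as the unitary NOT (Pauli-X) applied to one
# position of every superposed classical string, so it acts with the same
# efficiency on all states in the superposition.
def mutate(superposition, position):
    def flip(s):
        bit = '1' if s[position] == '0' else '0'
        return s[:position] + bit + s[position + 1:]
    return {flip(s): amp for s, amp in superposition.items()}

psi = {'000': 0.6, '111': 0.8}        # superposition of two classical strings
psi_m = mutate(psi, position=1)
assert psi_m == {'010': 0.6, '101': 0.8}  # both branches mutated identically
# the operation is unitary on this subspace: applying it twice restores psi
assert mutate(psi_m, position=1) == psi
```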
Let us briefly consider an example of how a GA operation can be applied in quantum computing.
Example. In GA, a population of an appropriate size is maintained during each iteration. A chromosome in the population is assumed to be coded with binary strings. Let the length of these binary strings be n. There are a total of 2ⁿ such strings. Usually, only a small number (m << 2ⁿ) of these strings are chosen to be in the population. A possible state in a quantum computer corresponds to a chromosome in GA. Choosing an initial population is equivalent to setting the amplitude of those states that correspond to the chromosomes in the population to be 1/√m and 0 otherwise.
Quantum computations are carried out with unitary operators. A unitary transformation can be constructed so that it will operate on one chromosome or one state and will emulate crossover. If the number of bits after the cutting point is k, then a simple unitary transformation that transforms sr to fr and fr to sr can be constructed easily by starting out with a unit matrix, then setting a 1 at the (sr, fr) and (fr, sr) positions, and changing the ones at the (sr, sr) and (fr, fr) positions to 0. The k bits after the cut point can be crossed over by composing k such unitary operators.
As an example, the following matrix that operates on the last two bits does crossover of 1011 and 0110 to 1010 and 0111, where the cutting point is at the middle:
i.e., it is the matrix form of the CNOT-gate that can create entanglement.
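The identity-matrix construction described above can be sketched in code (the helper names are illustrative); for the 2-bit subspace it indeed reproduces the CNOT matrix:

```python
# Sketch of the construction described above: start from an identity matrix
# and, for basis states s and f to be exchanged, set the (s, f) and (f, s)
# entries to 1 and the (s, s) and (f, f) entries to 0.
def swap_states_unitary(dim, s, f):
    u = [[1 if i == j else 0 for j in range(dim)] for i in range(dim)]
    u[s][s] = u[f][f] = 0
    u[s][f] = u[f][s] = 1
    return u

# On the 2-bit subspace, swapping |10> (index 2) and |11> (index 3) gives
# exactly the CNOT matrix.
cnot = swap_states_unitary(4, s=2, f=3)
assert cnot == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

def apply_to_last_two(u, bits):
    # apply the permutation matrix u to the last two bits of a basis state
    prefix, tail = bits[:-2], int(bits[-2:], 2)
    new_tail = u[tail].index(1)  # permutation: row `tail` has a single 1
    return prefix + format(new_tail, '02b')

# the crossover example from the text: 1011 -> 1010 and 0110 -> 0111
assert apply_to_last_two(cnot, '1011') == '1010'
assert apply_to_last_two(cnot, '0110') == '0111'
```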
Mutation of a chromosome alters one or more genes. It can also be described by changing the bit at a certain position or positions. Switching the bit can simply be carried out by the unitary transformation (the negation operator, for example)
( 0 1 )
( 1 0 )
at a certain bit position or positions.
The selection/reproduction process involves choosing chromosomes to form the next generation of the population. Selection is based on the fitness values of the chromosomes. Typical selection rules are to replace all parents by the offspring, or retain a few of the best parents, or retain the best among all parents and offspring. When using GA to solve an optimization problem, the objective function value is “the fitness”. We can interpret the objective function as the energy or entropy rate of the state and states with lower energy have a higher probability of surviving.
There are two ways that the selection process can be implemented. First, follow the same steps as in a classical computer. That is, evaluate the “fitness” or “energy” of each chromosome. The fitness has to be stored since the evaluation process is not reversible. Second, we can make use of the quantum behavior of a quantum computer to perform selection, as described below. Selecting a suitable Hamiltonian will be equivalent to choosing a selection strategy. Since members of the successive populations are wave functions, the uncertainty principle has to be taken into account when defining the genetic operations. In QGA this can be achieved by introducing smooth or “uncertain” genetic operations (see example below).
After the selection step, the GA will return to its first step and continue iterations. It will terminate when an observation of the state is performed.
3.4. Mathematical model of genetic-quantum operator's interrelation. The quantum individual |x⟩ and its fitness f(x) could be mathematically represented by an entangled state (using the crossover operator as a unitary CNOT-gate):
|ψ⟩ = Σₓ cₓ|x⟩|f(x)⟩.
In mathematical formulation, each register is a closed quantum system. Thus, all of them can be initialized with this entangled state |ψ⟩. So, if we have M quantum individuals in each generation, we need M register pairs (individual register, fitness register). Then, a unitary operator such as the Walsh-Hadamard transform W will be applied to the first register of the state |x⟩ in order to complete the generation of the initial population. Henceforth, the initialization could encompass the following steps.
In these steps, the initial state |0⟩ is transformed into a superposition of the basis states |a⟩|f(a)⟩, and the subsequent measurement of the fitness register is such that the observed fitness selects the corresponding individuals.
Then, genetic operators must be applied. Let us consider one possible model of the application of an important genetic operator, the mutation. Example: Mutation operator application. Mutations can be implemented through the following steps.
The major advantage for a QGA is the increased diversity of a quantum population due to superposition, which is precisely defined above in step 2 of the computational algorithm.
This effective size decreases during the measurement of the fitness, when the superposition is reduced to only the individuals with the observed fitness.
However, it would be increased during the crossover and mutation applications. Besides, by increasing diversity, it is much more likely that the population will include members in the basins of attraction for the higher fitness solutions.
Thus, an improved convergence rate is to be expected. Besides, classical individuals with high fitness can be relatively incompatible, meaning that any crossover between them is unlikely to produce a very fit offspring. However, in the QGA, these individuals can co-exist in a superposition.
3.5. QGA-simulation of quantum physical systems. There are two ways that the selection process can be implemented. First, follow the same steps as in a classical computer. That is, evaluate the “fitness” or “energy” of each chromosome. The fitness has to be stored since the evaluation process is not reversible. Second, we can make use of the quantum behavior of a quantum computer to perform selection, as described below. Selecting a suitable Hamiltonian will be equivalent to choosing a selection strategy.
After the selection step, the GA will return to its first step and continue iterations. It will terminate when an observation of the state is performed. Since members of the successive populations are wave functions, the uncertainty principle has to be taken into account when defining the genetic operations. As mentioned above, in QGA this can be achieved by introducing smooth or “uncertain” genetic operations (see below).
Example: QGA model in 1D search space. As we have mentioned before, the GA was developed to optimize (maximize or minimize) a given property (like an area, a volume or an energy). The property in question is a function of many variables of the system. In GA-language this quantity is referred to as the fitness function. There are many different ways to apply GA. One of them is the phenotype version. In this approach, the GA basically maps the degrees of freedom or variables of the system to be optimized onto a genetic code (represented by a vector). Thus, a random population of individuals is created as a first generation. This population “evolves” and subsequent generations are reproduced from previous generations through application of different operators on the genetic codes, like, for instance, mutations, crossovers and reproductions or copies. The mutation operator changes randomly the genetic information of an individual, i.e., one or many components of the vector representing its genetic code. The crossover or recombination operator interchanges the components of the genetic codes of two individuals. In a simple recombination, a random position is chosen at which each partner in a particular pair is divided into two pieces. Each vector then exchanges a section of itself with its partner. The copy or reproduction operator merely transfers the information of the parent to an individual of the next generation without any changes.
In the QGA approach, the vector representing the genetic code is just the wave function ψ(x). The fitness function, i.e., the function to be optimized by the successive generations, is the expectation value
E[ψ] = ∫ψ*(x) Ĥ ψ(x) dx,
where the 1D Hamiltonian is given by
Ĥ = −(ℏ²/2m) d²/dx² + V(x).
Here, V(x) is the external potential. In the case of Grover's search algorithm we can write that Ĥ ≡ GUj.
There are many different ways to describe the evolution of the population and the creation of the offspring. The GA can be described as follows:
Usually, real-space calculations deal with boundary conditions on a box. Therefore, and in order to describe a wave function within a given interval a≦x≦b, we have to choose boundary conditions for ψ(a) and ψ(b). For simplicity we set ψ(a)=ψ(b)=0, i.e., we consider a finite box with infinite walls at x=a and x=b. Inside this box we can simulate different kinds of potentials, and if the size of the box is large enough, boundary effects on the results of our calculations can be reduced.
As an initial population of wave functions satisfying the boundary conditions ψj(a) = 0, ψj(b) = 0, we choose Gaussian-like functions of the form
ψj(x) = A exp[−(x − xj)²/(2σj²)],
with random values for xj ∈ [a,b] and σj ∈ (0, b−a], whereas the amplitude A is calculated from the normalization condition ∫|ψ(x)|²dx = 1 for given values of xj and σj.
As we have mentioned above, three kinds of operations on the individuals can be defined: reproduction and mutation of a function, and crossover between two functions. The reproduction operation has the same meaning as in previous applications of GA. Both the crossover and the mutation operations have to be redefined and applied to the quantum mechanical case. The smooth or “uncertain” crossover is defined as follows. Let us take two randomly chosen “parent” functions ψ1(n)(x) and ψ2(n)(x) and form the offspring
ψ1(n+1)(x) = St(x)ψ1(n)(x) + [1 − St(x)]ψ2(n)(x),
ψ2(n+1)(x) = [1 − St(x)]ψ1(n)(x) + St(x)ψ2(n)(x),
where St(x) is a smooth step function involved in the crossover operation. We consider the following case:
St(x) = 1/{1 + exp[kc(x − x0)]},
where x0 is chosen randomly (x0 ∈ (a,b)) and kc is a parameter which allows control of the sharpness of the crossover operation. The idea behind the “uncertain” crossover is to avoid large derivatives of the newly generated wave functions. Note that the crossover operation between identical wave functions generates the same wave functions.
The mutation operation in the quantum case must also take into account the uncertainty relations. It is not possible to change randomly the value of the wave function at a given point without producing dramatic changes in the kinetic energy of the state. To avoid this problem, the mutation operation is defined as ψ(n+1)(x) = ψ(n)(x) + ψr(x), where ψr(x) is the random mutation function. In the present case we choose ψr(x) as a Gaussian
ψr(x) = B exp[−(x − xr)²/R²],
with a random center xr ∈ (a,b), width R ∈ (0, b−a) and amplitude B. For each step of a GA iteration we randomly perform copy, crossover and mutation operations. After the application of the genetic operations, the newly created functions are normalized.
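The 1D procedure above can be sketched end-to-end. This is a toy implementation under stated assumptions (a Fermi-like step St(x) = 1/(1 + exp(kc(x − x0))) for the “uncertain” crossover, a harmonic test potential V(x) = x²/2, and illustrative grid and population sizes), not the patent's exact algorithm:

```python
import math
import random

random.seed(1)
a_box, b_box = -5.0, 5.0          # finite box with psi(a) = psi(b) = 0
N = 201
xs = [a_box + (b_box - a_box) * i / (N - 1) for i in range(N)]
dx = xs[1] - xs[0]

def normalize(psi):
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    return [p / norm for p in psi]

def gaussian(x0, sigma):          # Gaussian-like initial individual
    psi = [math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) for x in xs]
    psi[0] = psi[-1] = 0.0        # enforce the box boundary conditions
    return normalize(psi)

def energy(psi):                  # fitness E[psi] = <psi|H|psi>, hbar = m = 1
    e = 0.0
    for i in range(1, N - 1):
        kinetic = -0.5 * (psi[i - 1] - 2 * psi[i] + psi[i + 1]) / dx ** 2
        e += psi[i] * (kinetic + 0.5 * xs[i] ** 2 * psi[i]) * dx
    return e

def crossover(p1, p2, kc=5.0):    # smooth "uncertain" crossover
    x0 = random.uniform(a_box, b_box)
    st = [1.0 / (1.0 + math.exp(kc * (x - x0))) for x in xs]
    c1 = [s * u + (1 - s) * v for s, u, v in zip(st, p1, p2)]
    c2 = [(1 - s) * u + s * v for s, u, v in zip(st, p1, p2)]
    return normalize(c1), normalize(c2)

def mutate(psi):                  # add a random Gaussian, then renormalize
    xr = random.uniform(a_box, b_box)
    R = random.uniform(0.3, b_box - a_box)
    B = random.uniform(-0.3, 0.3)
    new = [p + B * math.exp(-(x - xr) ** 2 / R ** 2) for p, x in zip(psi, xs)]
    new[0] = new[-1] = 0.0
    return normalize(new)

population = [gaussian(random.uniform(a_box, b_box), random.uniform(0.3, 2.0))
              for _ in range(8)]
e_init = min(energy(p) for p in population)
for _ in range(60):               # copy, crossover, mutation, then selection
    kids = []
    for _ in range(4):
        p1, p2 = random.sample(population, 2)
        kids.extend(crossover(p1, p2))
    kids.extend(mutate(random.choice(population)) for _ in range(4))
    population = sorted(population + kids, key=energy)[:8]

e_best = energy(population[0])
assert e_best <= e_init + 1e-9    # elitist selection never worsens the best
assert e_best >= 0.45             # variational bound: exact ground state is 0.5
```

Because parents are retained during selection, the best fitness is monotonically non-increasing, mirroring the survival-of-the-fittest rule in the text.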
Example: QGA model in 2D search space. In this case, the QGA maps each wave function onto a genetic code (represented by a matrix containing the values of the wave function at the mesh points). The algorithm is implemented as follows. A rectangular box Ω ≡ {(x,y), 0 ≤ x ≤ d, 0 ≤ y ≤ d} is chosen as a finite region in real space. An initial population of trial two-body wave functions {Ψi}, i = 1, . . . , Npop is chosen randomly. For this purpose, we can construct each Ψi using Gaussian-like one-particle wave functions ψν(x,y), with ν = 1, 2 and random values for their centers and widths, subject to the boundary condition ψν(x,y)|∂Ω = 0.
The population {Ψi} so constructed corresponds to the initial generation. Now, the fitness of each individual Ψi of the population is determined by evaluating the function
Ei = E[Ψi] ≡ ∫Ψi*(r1,r2)Ĥ(r1,r2)Ψi(r1,r2)dr1dr2,
where Ĥ is the Hamiltonian of the corresponding problem. This means that the expectation value of the energy for a given individual is a measure of its fitness, and we apply the QGA to minimize the energy. By virtue of the variational principle, when the QGA finds the global minimum, it corresponds to the ground state of Ĥ.
Offspring of the initial generation are formed through application of mutation, crossover and copy operations on the genetic codes. We define continuous analogies of three kinds of genetic operations on the individuals: reproduction, mutation, and crossover. While the reproduction operation has the same meaning as in previous “classical” applications of the GA, both the crossover and the mutation operations have to be redefined to be applied to the quantum mechanical case. The smooth or “uncertain” crossover in two dimensions is defined as follows. Given two randomly chosen single-particle “parent” functions ψiν(old)(x,y) and ψlμ(old)(x,y) (i, l = 1, . . . , Npop; μ, ν = 1, 2), one can construct two new functions ψiν(new)(x,y) and ψlμ(new)(x,y) as
where St(x,y) is a 2D smooth step function which produces the crossover operation. We can define
St(x,y) = 1/{1 + exp[kc(ax + by + c)]},
where a, b, c are chosen randomly. The line ax + by + c = 0 cuts Ω into two pieces, and kc is a parameter which allows control of the sharpness of the crossover operation. The idea behind the “uncertain” crossover is to avoid very large derivatives of the newly generated wave functions, i.e., very large kinetic energy of the system. Note that the crossover operation between identical wave functions generates the same wave functions.
As mentioned above, the mutation operation in the quantum case should also take into account the uncertainty relations. It is not possible to change randomly the value of the wave function at a given point without producing dramatic changes in the kinetic energy of the state. To avoid this problem we define a new kind of mutation operation for a random “parent” ψiν(old)(x,y) as follows: ψiν(new)(x,y) = ψiν(old)(x,y) + ψr(x,y), where ψr(x,y) is a random mutation function. In the present case, we choose ψr(x,y) as a Gaussian-like function
with random values for xr, yr, Rx, Ry and Ar. Similarly to the 1D case, for each step of a GA iteration we randomly perform copy, crossover and mutation operations. After the application of the genetic operations, the newly created functions are normalized and orthogonalized. Then, the fitness of the individuals is evaluated and the fittest individuals are selected. The procedure is repeated until convergence of the fitness function (the energy of the system) to a reduced value is reached. Inside the box Ω we can simulate different kinds of external potentials. If the size of the box is large enough, boundary effects are negligible.
4. S
4.1. G
The input of a quantum algorithm is always a function f from binary strings into binary strings. This function is represented as a map table in Box 2201, defining for every string its image. Function f is first encoded in Box 2207 into a unitary matrix operator UF depending on f properties. In some sense, this operator calculates f when its input and output strings are encoded into canonical basis vectors of a Complex Hilbert Space: UF maps the vector code of every string into the vector code of its image by f.
A square matrix UF on the complex field is unitary if its inverse matrix coincides with its conjugate transpose: UF⁻¹ = UF†. A unitary matrix is always reversible and preserves the norm of vectors.
When the matrix operator UF has been generated, it is embedded into a quantum gate G, a unitary matrix whose structure depends on the form of matrix UF and on the problem we want to address. The quantum gate is the heart of a quantum algorithm. In quantum algorithms, the quantum gate acts on an initial canonical basis vector (we can always choose the same vector) in order to generate a complex linear combination (let's call it superposition) of basis vectors as the output. This superposition contains the information to answer the initial problem.
After this superposition has been created, measurement takes place in order to extract this information. In quantum mechanics, measurement is a non-deterministic operation that produces as output only one of the basis vectors in the entering superposition. The probability of every basis vector of being the output of measurement depends on its complex coefficient (probability amplitude) in the entering complex linear combination.
The sequential action of the quantum gate and of the measurement constitutes the quantum block (see
4.1.1. The behavior of the encoder block is described in the detailed scheme diagram of
Step 1: The map table of function f: {0,1}n→{0,1}m is transformed in box 2203 into the map table of the injective function F:{0,1}n+m→{0,1}n+m such that:
F(x0, . . . , xn−1, y0, . . . , ym−1)=(x0, . . . , xn−1, f(x0, . . . , xn−1)⊕(y0, . . . , ym−1)).
The need to deal with an injective function comes from the requirement that UF is unitary. A unitary operator is reversible, so it cannot map two different inputs into the same output. Since UF will be the matrix representation of F, F is required to be injective. If we directly employed the matrix representation of the function f, we could obtain a non-unitary matrix, since f could be non-injective. So, injectivity is fulfilled by increasing the number of bits and considering the function F instead of the function f. Anyway, function f can always be calculated from F by putting (y0, . . . , ym−1)=(0, . . . , 0) in the input string and reading the last m values of the output string.
Reversible circuits generally realize permutation operations. When can we realize an arbitrary Boolean circuit F:Bn→Bm by a reversible circuit? In this case, we do not calculate the function F:Bn→Bm directly. Instead, we calculate the expanded function F⊕:Bn+m→Bn+m defined by the relation F⊕(x,y)=(x,y⊕F(x)), where the operation ⊕ is addition modulo 2. Then the value of F(x) is obtained as F⊕(x,0)=(x,F(x)).
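The expansion F⊕(x, y) = (x, y ⊕ F(x)) can be turned into a permutation matrix directly. The sketch below (the helper name is illustrative) assumes the canonical basis ordering in which |x⟩|y⟩ has index x·2^m + y:

```python
# Sketch: encoding an arbitrary (possibly non-injective) Boolean function
# f: {0,1}^n -> {0,1}^m as the reversible map F(x, y) = (x, y XOR f(x)),
# whose matrix U_F is a permutation matrix and hence unitary.
def build_uf(f, n, m):
    dim = 2 ** (n + m)
    uf = [[0] * dim for _ in range(dim)]
    for x in range(2 ** n):
        for y in range(2 ** m):
            col = (x << m) | y           # input basis state  |x>|y>
            row = (x << m) | (y ^ f(x))  # output basis state |x>|y XOR f(x)>
            uf[row][col] = 1
    return uf

# example: the non-injective f(x) = x AND 1 on 2 input bits, 1 output bit
uf = build_uf(lambda x: x & 1, n=2, m=1)
dim = 8
# U_F is a permutation matrix: exactly one 1 per row and per column
assert all(sum(row) == 1 for row in uf)
assert all(sum(uf[i][j] for i in range(dim)) == 1 for j in range(dim))
# f is recovered by setting y = 0: U_F|x>|0> = |x>|f(x)>
for x in range(4):
    col = x << 1
    row = next(i for i in range(dim) if uf[i][col])
    assert row == (x << 1) | (x & 1)
```

Since XOR-ing twice restores y, this particular U_F is its own inverse, illustrating the reversibility requirement in the text.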
Step 2: The function F map table is transformed in Box 2205 into the UF map table, according to the following constraint:
∀sε{0,1}n+m:UF[τ(s)]=τ[F(s)]
The code map τ: {0,1}n+m→C2^(n+m) is built from the single-bit encoding τ(0) = (1,0)ᵀ ≡ |0⟩ and τ(1) = (0,1)ᵀ ≡ |1⟩.
Code τ maps bit values into complex vectors of dimension 2 belonging to the canonical basis of C2. Besides, using tensor product, τ maps the general state of a binary string of dimension n into a vector of dimension 2n, reducing this state to the joint state of the n bits composing the register. Every bit state is transformed into the corresponding 2-dimensional basis vector and then the string state is mapped into the corresponding 2n-dimensional basis vector by composing all bit-vectors through tensor product. In this sense the tensor product is the vector counterpart of state conjunction.
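A small sketch of the code map τ and its tensor-product extension follows (pure Python; the function names are illustrative):

```python
# Sketch: tau sends each bit to a canonical basis vector of C^2 and a bit
# string to the tensor (Kronecker) product of those vectors, so an n-bit
# string maps to a canonical basis vector of dimension 2^n.
def tau_bit(b):
    return [1, 0] if b == '0' else [0, 1]

def kron(u, v):
    # Kronecker product of two vectors represented as flat lists
    return [ui * vj for ui in u for vj in v]

def tau(bits):
    vec = [1]
    for b in bits:
        vec = kron(vec, tau_bit(b))
    return vec

v = tau('10')                       # the basis vector |10> in C^4
assert v == [0, 0, 1, 0]
assert len(tau('101')) == 2 ** 3    # an n-bit string maps to dimension 2^n
assert tau('101').index(1) == 0b101 # the single 1 sits at binary index 101
```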
If a component of a complex vector is interpreted as the probability amplitude of a system being in a given state (indexed by the component number), the tensor product between two vectors describes the joint probability amplitude of two systems being in a joint state. Basis vectors are denoted using the ket notation |i⟩. This notation is taken from the Dirac description of quantum mechanics.
Step 3: UF map table is transformed in Box 2206 into UF using the following transformation rule:
This rule can easily be understood when the vectors |i⟩ and |j⟩ are considered as column vectors. Since these vectors belong to the canonical basis, UF defines a permutation map of the identity matrix rows. In general, row |j⟩ is mapped into row |i⟩.
4.1.2. The heart of the quantum block is the quantum gate, which depends on the properties of the matrix UF. The scheme in
The matrix operator UF in
This matrix operator is first embedded into a more complex gate, the quantum gate G, in Box 2303. The unitary matrix G is applied k times to an initial canonical basis vector |i⟩ of dimension 2n+m from Box 2302. Every time, the resulting complex superposition G|0 . . . 01 . . . 1⟩ of basis vectors is measured, producing one basis vector |x⟩ as a result. All the measured basis vectors {x1, . . . , xk} are collected together in Box 2306. This collection is the output of the quantum block in Box 2307.
The “intelligence” of our algorithms is in the ability to build a quantum gate that is able to extract the information necessary to find the required property of f and to store it into the output vector collection. We will discuss in detail the structure of the quantum gate for a quantum algorithm and observe that it can be described in a general way.
In order to represent quantum gates, we are going to employ some special diagrams called quantum circuits. An example of a quantum circuit is illustrated in
Every rectangle is associated with a 2n×2n matrix, where n is the number of lines entering and leaving the rectangle. For example, the rectangle marked UF is associated with the matrix UF.
Quantum circuits. Let us give a high-level description of the gates; using some transformation rules, we can easily compile them into the corresponding gate-matrix. These rules are described in detail in U.S. Pat. No. 6,578,018.
4.1.3. The decoder block in Box 75 of
Analog description of Operators and Gate Referring to the Quantum Algorithm general scheme depicted in
As shown in
It can be noted that in all of these cases, the direct product can be performed via AND gates. In fact, we have 1*1=(1∧1)=1; −1*1=−(1∧1)=−1; 1*0=(1∧0)=0. Taking into account these equalities, H|0> can be obtained as in
If S=I the structure is the same but all signs are positive. However, in this case it is quite evident that AND gates can be bypassed.
Let us focus on tensor products between the resulting vectors. After direct product we can have several of these to be combined:
Some preliminary considerations must be made in order to simplify the problem. For example, vector I|1> is not present in any quantum algorithm. Moreover, H|1> and I|0> are not present in the same algorithm at the same time. So the output of superposition is the result of products like
In both cases, only two values are present in this expression, and therefore logic gates can be used again. From a formal point of view, the two expressions are identical (the second one can be considered the normalization between 0 and 1 of the first one).
Let us suppose we wish to calculate [1 1]T⊗[1 0]T. The simple logic gate of
[1 1]T⊗[1 0]T can therefore be obtained as depicted in
Suppose that A is a vector representing the superposition output of an n qubits algorithm. In order to have an n+1 qubits superposition output vector two operations are possible:
depending on the specific algorithm. These results can be obtained simply by replicating (or not) the previous vector A. The resulting vector is ready to be the input of the following block (i.e. the Entanglement block) after a suitable denormalization between −1 and 1 and after being scaled by the factor 1/2(n+1)/2.
The entanglement step consists, as shown in previous sections, of a direct product between the unitary matrix UF (in which the problem is encoded via a binary function f) and the vector coming out of superposition. The real effect on this vector is in general the permutation of some elements, as shown in
Regarding the interference operator, it could be treated in general like superposition using AND gates for tensor products, but due to important differences among Quantum Algorithms at this step, the best approach is to build a dedicated interchangeable interference block. To this aim it will be discussed case by case in the next sections, including parallelism and possible similarities between algorithms.
4.2. Deutsch-Jozsa's problem is stated as follows:
This problem is very similar to Deutsch's problem, but it has been generalized to n>1.
4.2.1. We first deal with some special functions with n=2. This should help the reader to understand the main ideas of this algorithm. Then, we discuss the general case with n=2 and finally we encode a balanced or constant function in the more general situation n>0. We consider the encoding step process according to the structure shown in the
Encoding a constant function with value 1.
Let's consider the case:
n=2
∀xε{0,1}n:f(x)=1
In this case f map table is so defined:
The encoder block takes the f map table as input and encodes it into the matrix operator UF, which acts on a complex Hilbert space.
Step 1 Function f is encoded into the injective function F, built according to the following statement:
F:{0,1}n+1→{0,1}n+1:F(x0,x1,y0)=(x0,x1,f(x0,x1)⊕y0)
Then, F map table is:
Step 2 Let's now encode F into UF map table using the rule:
∀tε{0,1}n+1:UF[τ(t)]=τ[F(t)]
where τ is the code map defined above. This means:
Here, we used ket notation to denote basis vectors.
Step 3 Starting from the map table of UF, we calculate the corresponding matrix operator. This matrix is obtained using the rule:
[UF]i,j=1 ⇔ UF|j>=|i>
So, UF is the following matrix:
Using matrix tensor product, UF can be written as:
UF=I⊗I⊗C
where ⊗ is the tensor product, I is the identity matrix of order 2 and C is the NOT-matrix defined as:
Matrix C flips a basis vector: in fact it transforms vector |0> into |1> and |1> into |0>.
If matrix UF is applied to the tensor product of three vectors of dimension 2, the resulting vector is the tensor product of the three vectors obtained applying matrix I to the first two input vectors and matrix C to the third.
Tensor product and entanglement. Given m vectors v1, . . . , vm of dimension 2 and m matrix operators M1, . . . , Mm of order 2:
(M1⊗ . . . ⊗Mm)·(v1⊗ . . . ⊗vm)=M1·v1⊗ . . . ⊗Mm·vm
This means that, if a matrix operator can be written as the tensor product of m smaller matrix operators, the evolutions of the m vectors the operator is applied to are independent, namely no correlation is present among these vectors. An important corollary is that if the initial state was not entangled, the final state is also not entangled.
The structure of UF is such that the first two vectors in the input tensor product are preserved (action of I), whereas the third is flipped (action of C). We can easily verify that this action corresponds to the constraints stated by UF map table.
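This factorization can be checked numerically with a hand-rolled Kronecker product (the helper names `kron` and `apply` are illustrative); I⊗I⊗C indeed preserves the first two qubits and flips the third:

```python
# Sketch: verify that U_F = I (x) I (x) C acts as the identity on the first
# two qubits and as a bit-flip (NOT) on the third, using plain nested lists.
def kron(A, B):
    # Kronecker product of two matrices given as lists of rows.
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

I = [[1, 0], [0, 1]]
C = [[0, 1], [1, 0]]          # the NOT matrix

UF = kron(kron(I, I), C)      # 8x8 operator

def apply(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# |101> = |1> (x) |0> (x) |1>  ->  expect |100>
v = [0] * 8
v[0b101] = 1
out = apply(UF, v)
print(out.index(1))            # 4, i.e. basis vector |100>
```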
B. Encoding a constant function with value 0
Let's now consider the case:
n=2
∀xε{0,1}n:f(x)=0
In this case f map table is so defined:
Step 1. F map table is:
Step 2. F map table is encoded into UF map table:
Step 3. It is very easy to transform this map table into a matrix. In fact, we can observe that every vector is preserved.
Therefore the corresponding matrix is the identity matrix of order 23.
Using matrix tensor product, this matrix can be written as:
UF=I⊗I⊗I
The structure of UF is such that all basis vectors of dimension 2 in the input tensor product evolve independently. No vector controls any other vector.
C. Encoding a Balanced Function
Consider now the balanced function:
n=2
∀(x1, . . . , xn)ε{0,1}n:f(x1, . . . , xn)=x1⊕ . . . ⊕xn
In this case f map table is the following:
Step 1
The following map table, calculated in the usual way, represents the injective function F (into which f is encoded):
Step 2. Let's now encode F into UF map table:
Step 3.
The matrix corresponding to UF is:
This matrix cannot be written as the tensor product of smaller matrices. In fact, if we write it as a block matrix we obtain:
This means that the matrix operator acting on the third vector in the input tensor product depends on the values of the first two vectors. If these vectors are |0> and |0>, for instance, the operator acting on the third vector is the identity matrix, if the first two vectors are |0> and |1>, then the evolution of the third is determined by matrix C. We say that this operator creates entanglement, namely correlation among the vectors in the tensor product.
D. General case with n=2 Consider now a general function with n=2. In this general case f map table is the following:
with fiε{0,1}, i=00, 01, 10, 11. If f is constant then ∃yε{0,1}∀xε{0,1}2: f(x)=y. If f is balanced then |{fi: fi=0}|=|{fi: fi=1}|.
Step 1. Injective function F (where f is encoded) is represented by the following map table calculated in the usual way:
f00, f01, f10, f11
Step 2. Let's now encode F into UF map table:
Step 3. The matrix corresponding to UF can be written as a block matrix with the following general form:
where Mi=I if fi=0 and Mi=C if fi=1, i=00, 01, 10, 11. The structure of this matrix is such that, when the first two vectors would be mapped into some other vectors, the null operator is applied to the third vector, generating a null probability amplitude for this transition. This means that the first two vectors are left unchanged. On the contrary, operators Miε{I, C} are applied to the third vector when the first two are mapped into themselves. If all Mi coincide, operator UF encodes a constant function. Otherwise it encodes a non-constant function. If |{Mi: Mi=I}|=|{Mi: Mi=C}| then f is balanced.
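A minimal sketch of this block-counting classification, assuming f is given as a callable on integers (the name `classify` is illustrative):

```python
# Sketch: for each input x the 2x2 block M_x is I when f(x) = 0 and C (the
# NOT matrix) when f(x) = 1; counting the C blocks classifies f.
I = [[1, 0], [0, 1]]
C = [[0, 1], [1, 0]]

def classify(f, n):
    blocks = [C if f(x) else I for x in range(2 ** n)]
    ones = sum(b is C for b in blocks)
    if ones in (0, len(blocks)):
        return "constant"
    if ones == len(blocks) // 2:
        return "balanced"
    return "neither"

print(classify(lambda x: 1, 2))                      # constant (f = 1)
print(classify(lambda x: bin(x).count("1") % 2, 2))  # balanced (parity)
```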
E. General case Consider now the general case n>0. Input function f map table is the following:
with fiε{0,1}, iε{0,1}n. If f is constant then ∃yε{0,1}∀xε{0,1}n: f(x)=y. If f is balanced then |{fi: fi=0}|=|{fi: fi=1}|.
Step 1. The map table of the corresponding injective function F is:
Step 2. Let's now encode F into UF map table:
Step 3. The matrix corresponding to UF can be written as a block matrix with the following general form:
where Mi=I if fi=0 and Mi=C if fi=1, iε{0,1}n.
This matrix leaves the first n vectors unchanged and applies operator Miε{I, C} to the last vector. If all Mi coincide with I or with C, the matrix encodes a constant function and it can be written as nI⊗I or nI⊗C. In this case no entanglement is generated. Otherwise, if the condition |{Mi: Mi=I}|=|{Mi: Mi=C}| is fulfilled, then f is balanced and the operator creates correlation among vectors.
4.2.2. Quantum block Matrix UF, the output of the encoder, is now embedded into the quantum gate of Deutsch-Jozsa's algorithm. As we did for Deutsch's algorithm, we describe this gate using a quantum circuit
Let's consider operator UF in the case of constant and balanced functions. The structure of this operator strongly influences the structure of the whole gate. We shall analyze this structure in the cases where f is 1 everywhere, where f is 0 everywhere, and in the general case with n=2. Finally, we propose the general form for our gate with n>0.
Constant function with value 1 If f is constant and its value is 1, matrix operator UF can be written as nI⊗C. This means that UF can be decomposed into n+1 smaller operators acting concurrently on the n+1 vectors of dimension 2 in the input tensor product.
The resulting circuit representation according to
Let's observe that every vector in input evolves independently from other vectors. This is because operator UF doesn't create any correlation. So, the evolution of every input vector can be analyzed separately. This circuit can be written in a simpler way as shown in
We can easily show that:
H2=I
Therefore the circuit is rewritten in this way as shown in
Let's now consider the effect of the operators acting on every vector:
Using these results, the circuit shown in
B. Constant function with value 0 A similar analysis can be repeated for a constant function with value 0. In this situation UF can be written as nI⊗I and the final circuit is shown in the
C. General case (n=2) The gate implementing Deutsch-Jozsa's algorithm in the general case is shown in the
where Miε{I, C}, i=00, 01, 10, 11.
Let's calculate the quantum gate G=(2H⊗I)·UF·(2+1H) in this case:
Now, consider the application of G to vector |001>:
Consider the operator (M00+M01+M10+M11)H under the hypotheses of balanced functions: Miε{I, C} and |{Mi: Mi=I}|=|{Mi: Mi=C}|. Then:
This means that the probability amplitude of vector |001> of being mapped into a vector |000> or |001> is null.
Consider now the operators:
(M00+M01+M10+M11)H
(M00−M01+M10−M11)H
(M00+M01−M10−M11)H
(M00−M01−M10+M11)H
under the hypothesis ∀i: Mi=I, which holds for constant functions with value 0:
Using these calculations, we obtain the following results:
This means that the probability amplitude of vector |001> of being mapped into a superposition of vectors |010>, |011>, |100>, |101>, |110>, |111> is null. The only possible output is a superposition of vectors |000> and |001>, as we showed before using circuits. A similar analysis can be developed under the hypotheses ∀i: Mi=C.
It is useful to outline the evolution of the probability amplitudes of every basis vector while operators 3H, UF and 2H⊗I are applied in sequence, for instance when f has constant value 1. This is done in
Operator 3H in
Finally, 2H⊗I in
Since, in this case, the vectors in the form |x0x10> have the same (negative real) probability amplitude and vectors in the form |x0x11> have the same (positive real) probability amplitude, when |x0x1>=|00>, probability amplitudes interfere positively. Otherwise the terms in the summation interfere destructively annihilating the result.
D. General case (n>0) In the general case n>0, UF has the following form:
where Miε{I, C}, iε{0,1}n.
Let's calculate the quantum gate G=(nH⊗I)·UF·(n+1H):
Here we employed binary string operator ·, which represents the parity of the AND bit per bit between two strings.
Parity of bit per bit AND. Given two binary strings x and y of length n, we define:
x·y=x1·y1⊕x2·y2⊕ . . . ⊕xn·yn
The symbol · used between two bits is interpreted as the logical AND operator.
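This binary string operator can be sketched directly in Python (the name `dot` is illustrative), treating x and y as n-bit integers:

```python
# Sketch: the parity of the bitwise AND of x and y, i.e. the operator "·"
# appearing in the general term (-1)^(i·j) / 2^((n+1)/2) of (n+1)H.
def dot(x, y):
    return bin(x & y).count("1") % 2

print(dot(0b101, 0b110))   # 1: one common 1-bit -> odd parity
print(dot(0b101, 0b010))   # 0: no common 1-bits -> even parity
```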
We shall prove that matrix n+1H really has the described form. We show that:
The proof is by induction:
Matrix n+1H is obtained from nH by tensor product. Similarly, matrix nH⊗I is calculated:
We calculated only the first column of gate G since this operator is applied exclusively to input vector |0..01> and so only the first column is involved.
Now consider the case of f constant. We saw that this means that all matrices Mi are identical.
This implies:
since in this summation the number of +1 equals the number of −1. Therefore, the input vector |0..01> is mapped into a superposition of vectors |0..00> and |0..01> as we showed using circuits.
If f is balanced, the number of Mi=I equals the number of Mi=C. This implies:
And therefore:
This means that input vector |0..01>, in the case of balanced functions, can't be mapped by the quantum gate into a superposition containing vectors |0..00> or |0..01>.
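This first-column argument can be checked numerically. The following sketch (names `dot` and `dj_amplitudes` are illustrative, and the ancilla qubit is factored out) computes the amplitude that G|0..01> assigns to each |x>:

```python
# Sketch: amplitude of G|0..01> on |x> is (1/2^n) * sum_z (-1)^(x·z + f(z)).
# For a balanced f the amplitude on |0..0> is null; for a constant f it is
# +-1, exactly as argued above from the first column of G.
def dot(x, y):
    return bin(x & y).count("1") % 2

def dj_amplitudes(f, n):
    """First-column amplitudes of the Deutsch-Jozsa gate (ancilla factored out)."""
    N = 2 ** n
    return [sum((-1) ** (dot(x, z) + f(z)) for z in range(N)) / N
            for x in range(N)]

balanced = lambda z: bin(z).count("1") % 2
print(dj_amplitudes(balanced, 2)[0])      # 0.0: balanced never yields |0..0>
print(dj_amplitudes(lambda z: 0, 2)[0])   # 1.0: constant concentrates there
```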
The quantum block terminates with measurement. Considering the results showed till now, we can determine the possible outputs of measurement and their probabilities:
α0|0 . . . 01>+ΣiεA−B αi|i>
The set A−B is given by all elements of A, except those elements that belong to B also. This set is sometimes denoted as A/B. The quantum block is repeated only one time in Deutsch-Jozsa's algorithm. So, the final collection is made of only one vector.
4.2.3. Decoder As in Deutsch's algorithm, when the final basis vector has been measured, we must interpret it in order to decide if f is constant or balanced. If the resulting vector is |0..0> we know that the function was constant, otherwise we decide that it is balanced. In fact gate G produces a vector such that, when it is measured, only basis vectors |0..00> and |0..01> have a non-null probability amplitude exclusively in the case f is constant. Besides, if f is balanced, these two vectors have null coefficients in the linear combination of basis vectors generated by G. In this way, the resulting vector is easily decoded in order to answer Deutsch-Jozsa's problem:
4.2.4. Computer design process of Deutsch-Jozsa's quantum algorithm gate (D.-J. QAG) and simulation results. Let us consider the design process of D.-J. QAG according to the steps represented in
For the constant function,
the cases f(x)=0 and f(x)=1 for all xε{0,1}3 are shown in
for the balanced function,
the case f(x)=1 for x>011 and f(x)=0 for x≦011 is shown in
accordingly.
Step 1.3 in
Step 1.4 from
In Deutsch-Jozsa's QA the mathematical and physical structures of the interference operator (nH⊗I) differ from those of its superposition operator (n+1H). The interference operator extracts the qualitative information about the property of function f (constant or balanced) with operator nH, and separates this property qualitatively with operator I. Deutsch-Jozsa's QA is a decision-making algorithm. For Deutsch-Jozsa's QA only one iteration is needed, without quantitatively estimating the qualitative property of function f, and with error probability 0.5 of a successful result. This means that Deutsch-Jozsa's QA is a robust QA. The main role in this decision-making QA is played by the superposition and entanglement operators, which organize the massively parallel quantum computation process (via the superposition operator) and the robust extraction of the function property (via the entanglement operator).
4.2.5. Analog description of Deutsch-Jozsa's QA-Operators and Gate-Superposition Since H⊗ . . . ⊗H=nH, as reported above, the output vector Y can be represented in the following way:
Y=[y1y2 . . . yi . . . y2n+1]
where yi=(−1)i+1/2(n+1)/2.
It must be noted that this formula is very general and, due to the particular initial configuration of qubits in the present algorithm, it avoids the use of AND gates providing directly the output vector Y. The dimension n is taken into account by varying index i from 1 to 2n+1. As it will be seen in following sections, the same formula will be used for Grover's algorithm, too.
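This closed form can be sketched directly (the name `superposition_output` is illustrative); the alternating signs come from (−1)^(i+1) and the scale from 2^((n+1)/2):

```python
# Sketch: the superposition output for input |0..01>, generated directly
# without AND gates, with y_i = (-1)^(i+1) / 2^((n+1)/2) for i = 1..2^(n+1).
def superposition_output(n):
    scale = 2 ** ((n + 1) / 2)
    return [(-1) ** (i + 1) / scale for i in range(1, 2 ** (n + 1) + 1)]

Y = superposition_output(2)
print(len(Y))      # 8 components for n = 2
print(Y[0] > 0)    # True: y_1 = +1 / 2^(3/2)
```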
B. Entanglement In Deutsch-Jozsa's algorithm the entanglement matrix UF has the same diagonal structure independently of the number of qubits; in fact, the well known 2×2 blocks I and C are always present on the principal diagonal. This happens because f:{0,1}n→{0,1}, meaning that the encoding function f is scalar, and therefore the complete evaluation of UF can be avoided by using the input-output approach. So consider for example the following expression for f in a 2-qubit case (balanced function)
Of course in Deutsch-Jozsa's entanglement, the binary function f could assume the value "1" more than twice, but the above example is taken for the sake of simplicity. The output of entanglement G=UF·Y can be directly calculated, as shown in the European patent application EP 1 380 991, by using 2n+1=8 XOR gates, suitably driven by the encoding function f. In fact, the general form of the entanglement output vector G can be the following:
G=[g1g2 . . . gi . . . g2n+1]
And, therefore, according to the scheme in
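This XOR-driven, pair-by-pair computation of G can be sketched as follows (the name `entangle` and the unscaled test vector are illustrative): block i applies I (pair unchanged) when f=0 and C (pair swapped) when f=1, so no 2^(n+1)×2^(n+1) matrix product is needed.

```python
# Sketch: entanglement output G = U_F * Y computed pair by pair. Since U_F
# is block-diagonal with blocks I or C, each pair of components is either
# kept or swapped, exactly what an XOR-gate implementation does.
def entangle(Y, f, n):
    G = list(Y)
    for x in range(2 ** n):
        if f(x):                       # C block: swap the two components
            G[2 * x], G[2 * x + 1] = G[2 * x + 1], G[2 * x]
    return G

Y = [+1, -1, +1, -1, +1, -1, +1, -1]    # unscaled superposition, n = 2
G = entangle(Y, lambda x: x % 2, 2)     # balanced f: f(0)=0, f(1)=1, ...
print(G)    # blocks 1 and 3 swapped: [1, -1, -1, 1, 1, -1, -1, 1]
```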
C. Interference A more difficult task is to deal with interference. In fact, differently from Entanglement, Interference matrix n+1H is not a pseudo-diagonal matrix and therefore it is full of nonzero elements. Moreover, the presence of tensor products, whose number increases dramatically with the dimensions, constitutes an important point at this step. In order to find a suitable input-output relation, it must be considered that the general term of n+1H can be written as
To this aim, being gi the generic term belonging to the input vector, the output vector V=(n+1H)G can be derived as follows:
It must be noted that only sums and differences are necessary, and therefore a possible hardware structure could be constituted by a certain number of op-amps whose configuration could be set to "inverting" or "non-inverting" in a suitable way. The value 1/2n/2 depends only on the number n of qubits and can be considered as the scaling value of the sum, decided by a suitable choice of feedback resistors.
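The sum-and-difference computation of V can be sketched as follows (the names `dot` and `interfere` are illustrative); each output element is a signed sum of the input elements, scaled by 1/2^((n+1)/2), mirroring the general term of (n+1)H:

```python
# Sketch: interference output V = (n+1)H * G computed element by element
# with sums and differences only, using the general term (-1)^(i·j).
def dot(x, y):
    return bin(x & y).count("1") % 2

def interfere(G, n):
    scale = 2 ** ((n + 1) / 2)
    dim = 2 ** (n + 1)
    return [sum((-1) ** dot(i, j) * G[j] for j in range(dim)) / scale
            for i in range(dim)]

V = interfere([1, -1, 0, 0, 0, 0, 0, 0], 2)
print(V[0])   # 0: the +1 and -1 cancel in the unsigned first row
```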
4.3. Analog description of Shor's QA-operators and Gate The Shor's quantum algorithm is well known in the art. For the sake of simplicity it is summarized in
By applying the same reasoning carried out for the Deutsch-Jozsa's quantum algorithm it is possible to define the design steps according to this invention, illustrated in
Since H⊗ . . . ⊗H=nH, the output vector Y can be represented in the following way:
Y=[y1y2 . . . yi . . . y2n+1]
where yi=(−1)i+1/2(n+1)/2.
Different considerations have to be done for Shor's algorithm, summarized in
The scheme of Shor's algorithm is illustrated in
The superposition operator is nH⊗nI. This means that the first n qubits have to be multiplied by nH and the second ones by nI. Regarding the first ones, it has already been shown how the operation H|0>⊗H|0>⊗H|0> can be performed, neglecting the constant factor 1/23/2 (n=3).
In general, this vector can be indicated in the following way
X=[x1x2 . . . xi . . . x2n]
where xi=1/2n/2.
Finally, Y=X⊗nI|0..0> which, for n=3, results in
It is now simple to find a general form for output Y:
In hardware these values can be easily generated by a CPLD by setting the number n of qubits.
The superposition, entanglement and interference operators are prepared according to step 1 of
B. Entanglement
The general form of f in Shor's algorithm is the following:
f(x)=ax mod N
where N is the number to factorize, a is one of its coprimes and x can assume values from 0 to N−1. The number of qubits is n=[log2N]+1. Each block Mi of UF results from n tensor products among I or C. So for n=2 the four possible blocks are I⊗I, I⊗C, C⊗I, C⊗C, and for n=3 the eight possible blocks are I⊗I⊗I, I⊗I⊗C, I⊗C⊗I, I⊗C⊗C, C⊗I⊗I, C⊗I⊗C, C⊗C⊗I, C⊗C⊗C, and so on. These sequences are related to the binary representation of f(x), if we associate each "0" with I and each "1" with C. This fact allows the use of 2n×2n matrices instead of the 22n×22n matrix that is the size of UF. Moreover, the Mi are symmetric and unitary, so a lot of space can be spared in hardware storage.
Another comment relates to the particular form of the superposition, which has nonzero elements in predictable positions. This means that we can obtain the output of entanglement G=UF·Y without calculating the matrix product, but only with knowledge of a corresponding row of the diagonal UF matrix. More in detail, we observe that only the first row of each 2n×2n block of the entanglement matrix contributes to this output vector, meaning a strong reduction of computational complexity. In addition we can easily calculate these rows, which have the only nonzero element of each block in position f(xj)+1. Finally we can write the output vector G:
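This row-knowledge shortcut can be sketched as follows (the names `shor_f` and `shor_entanglement` are illustrative; a=7, N=15 is the textbook factoring instance): each block x contributes a single nonzero element in position f(x), so G is filled directly from the values of f.

```python
# Sketch: entanglement output of Shor's gate built without any matrix
# product, placing one nonzero element per 2^n x 2^n block at the position
# given by f(x) = a^x mod N.
def shor_f(a, N):
    return lambda x: pow(a, x, N)   # modular exponentiation

def shor_entanglement(a, N, n):
    f = shor_f(a, N)
    dim = 2 ** n
    G = [0.0] * (dim * dim)
    for x in range(dim):            # block x contributes one element
        G[x * dim + f(x)] = 1 / 2 ** (n / 2)
    return G

G = shor_entanglement(a=7, N=15, n=4)
print(sum(1 for g in G if g))       # 16 nonzero entries, one per block
```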
C. Interference
Unlike the other quantum algorithms, the interference in the Shor's algorithm is carried out using Quantum Fourier Transformation (QFT). As all other quantum operators, QFT is a unitary operator, acting on the complex vectors of the Hilbert space. QFT transforms each input vector into a superposition of the basis vectors of the same amplitude, but with the shifted phase.
Let us consider the output G of the entanglement block.
G=[g1, g2, . . . , gi, . . . , g22n]
The Interference matrix QFTn⊗nI has several nonzero elements. More exactly, it has 2n(2n−1) zeros on each column. In order to avoid trivial products, some modifications can be made. Y is the interference output vector; its elements yi are:
where int(.) is a function returning the integer part of a real number. The final output vector is therefore the following: Y=[Re[yi]+jIm[yi]].
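A simplified sketch of the QFT phase sum on a single n-qubit register (the full Shor interference operator is QFTn⊗nI; the name `qft_interference` is illustrative), showing how each input amplitude picks up the phase exp(2πi·i·j/2^n):

```python
# Sketch: QFT computed directly as a phase sum with the standard library's
# cmath; the QFT of a basis state yields uniform-magnitude amplitudes with
# shifted phases, as described above.
import cmath

def qft_interference(G, n):
    dim = 2 ** n
    w = lambda i, j: cmath.exp(2j * cmath.pi * i * j / dim)
    return [sum(w(i, j) * G[j] for j in range(dim)) / dim ** 0.5
            for i in range(dim)]

Y = qft_interference([1, 0, 0, 0], 2)   # QFT of |00>
print(all(abs(abs(y) - 0.5) < 1e-12 for y in Y))   # True: uniform magnitudes
```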
4.4. Grover's Algorithm Grover's algorithm is described here as a variation on Deutsch-Jozsa's algorithm introduced above. Grover's problem is so stated: find
xε{0, 1}n such that f(x)=1 and
∀yε{0, 1}n: (x≠y ⇒ f(y)=0)
In Deutsch-Jozsa's algorithm we distinguished two classes of input functions and we were supposed to decide what class the input function belonged to. In this case, the problem is in some sense identical in its form even if it is harder because now we are dealing with 2n classes of input functions (each function of the kind described constitutes a class).
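A classical simulation sketch of Grover's iteration for this problem (names are illustrative; the oracle phase flip and the inversion about the mean are simulated with plain lists) shows the marked-state probability growing toward 1 in about (π/4)·√(2^n) iterations:

```python
# Sketch: Grover iterations on n qubits with exactly one marked state.
# Oracle = phase flip on the marked amplitude; diffusion = inversion about
# the mean. State vector kept as a plain list of real amplitudes.
import math

def grover(n, marked, iterations):
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N          # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]        # oracle: phase flip on |x0>
        mean = sum(amp) / N               # diffusion: inversion about mean
        amp = [2 * mean - a for a in amp]
    return amp

amp = grover(n=3, marked=5, iterations=2)
print(round(amp[5] ** 2, 3))              # ~0.945 after 2 iterations (n = 3)
```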
4.4.1. Computer design process of Grover's quantum algorithm gate (Gr-QAG) and simulation results Let us consider the design process of Gr-QAG according to steps represented in
Step 1.1 (from
The superposition, entanglement and interference operators are assembled as shown hereinbefore for the Deutsch-Jozsa's quantum algorithm. As for the Deutsch-Jozsa's quantum algorithm, the entanglement operation of Grover's quantum algorithm may be implemented by means of XOR logic gates, as shown in
4.4.2. Interpretation of measurement results in simulation of Grover's QSA-QG. In the case of Grover's QSA, this task is achieved (according to the results of this section) by preparing the ancilla qubit of the oracle of the transformation:
Uf: |x,a>→|x,f(x)⊕a>
in the state
In this case the operator I|x0> is computationally equivalent to Uf:
and the operator Uf is constructed from a controlled I|x and two one qubit Hadamard transformations. The result interpretation for the Gr-QAG according to general approach in
A measured basis vector comprises the tensor product between the computation qubit results and the ancilla measurement qubit. In Grover's searching process, the ancilla qubit does not change during the quantum computing.
As abovementioned, operator Uf comprises two Hadamard transformations. The Hadamard transformation H (which models the constructive interference), applied on the states of the standard computational basis, can be seen as implementing a fair coin tossing. It means that if the matrix
is applied to the states of the standard basis, then H2|0>=−|1>, H2|1>=|0>, and therefore H2 acts in the measurement process of the computational result as a NOT-operation, up to the phase sign. In this case the measurement basis is separated from the computational basis (according to the tensor product). The results of simulation are shown in
Example In boxes 12301 and 12302 we obtain two possibilities:
Boxes 12305 and 12306 demonstrate the two searched marked states:
Using a simple random measurement strategy, as a fair coin tossing in the measurement basis {|0>,|1>}, we can, independently of the measurement basis result, receive with certainty the searched marked states. Boxes 12309-12312 show accurate results of searching the corresponding marked states.
Final result of interpretation for Gr-QAG application in the measurement basis {|0>,|1>} with implementing a fair coin tossing of measurement is shown in
4.4.3. Hardware implementations of the Grover's algorithm are disclosed in EP 1 267 304, EP 1 383 078 and EP 1 380 991. A general scheme of hardware implementing a Grover's quantum algorithm is depicted in
As contemplated by the method of this invention, a hardware quantum gate for any number of qubits may be obtained simply by connecting in parallel a plurality of gates for 2 qubits. As already disclosed in the above mentioned European patent applications and shown in
Number | Date | Country | Kind |
---|---|---|---|
04106715.8 | Dec 2004 | EP | regional |