The specification generally relates to computing: computing hardware, machine-implemented methods, and machine-implemented systems.
Although the following figures may depict various examples of the invention, the invention is not limited to the examples depicted in the figures.
For completeness, a brief introduction to Turing machines is presented in a later section. In [33], Alan Turing introduces the Turing Machine, which is a basis for the current digital computer. Sturgis and Shepherdson present the register machine in [32] and demonstrate the register machine's computational equivalence to the Turing machine: a Turing machine can compute a function in a finite number of steps if and only if a register machine can also compute this function in a finite number of steps. The works [7], [20], [21], [22] and [24] cover computability where other notions of computation equivalent to the Turing machine are also described.
In [23], McCulloch and Pitts present one of the early alternative computing models, influenced by neurophysiology. In [27], Rosenblatt presents the perceptron model, which has a fixed number of perceptrons and has no feedback (cycles) in its computation. In [25], Minsky and Papert mathematically analyze the perceptron model and attempt to understand serial versus parallel computation by studying the capabilities of linear threshold predicates. In [16], Hopfield shows how to build a content-addressable memory with neural networks that use feedback and where each neuron has two states. The number of neurons and connections is fixed during the computation. In [17], Hopfield presents an analog hardware neural network that performs analog computation on the Traveling Salesman problem, which is NP-complete [12]. Good, suboptimal solutions to this problem are computed by the analog neural network within an elapsed time of only a few neural time constants.
In [18], Hopfield uses time to represent the values of variables. In the conclusion, he observes that the technique of using time delays is similar to that of using radial basis functions in computer science.
In [15], Hertz et al. discuss the Hopfield model and various computing models that extend his work. These models describe learning algorithms and use statistical mechanics to develop the stochastic Hopfield model. They use some statistical mechanics techniques to analyze the Hopfield model's memory capacity and the capacity of the simpler perceptron model.
For early developments on quantum computing models, see [3], [4], [9], [10], [21] and [22]. In [29], Shor discovers a quantum algorithm showing that prime factorization can be executed on quantum computers in polynomial time (i.e. considerably faster than any known classical algorithm). In [13], Grover discovers a quantum algorithm for searching among n objects that can be completed in c·n^(1/2) computational steps.
In [8], Deutsch argues that there is a physical assertion in the underlying Church-Turing hypothesis: Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means. Furthermore, Deutsch presents a quantum generalization of the class of Turing machines: a universal quantum computer that covers quantum parallelism and shows an increase in computing speed. This universal quantum computer does not demonstrate the computation of non-Turing computable functions. For the most part, these prior results on computing models have studied the model's speed of computation, memory capacity, learning ability or have demonstrated that a particular computing model is equivalent to the Turing machine (digital computer)—in terms of computability (see [7] pages 10-12).
What are the Limitations of Current Cybersecurity Approaches?
Some prior approaches have tried to conceal and protect a computation by enclosing it in a physical barrier, or by using a virtual barrier, e.g., a firewall or a private network. The prior art has not been successful at securing computers, networks and the Internet. Operating system weaknesses and the proliferation of mobile devices and Internet connectivity have enabled malware to circumvent these boundaries.
In regard to confidentiality of data, some prior art uses cryptography based on the P≠NP complexity assumption, which relies on large enough computing bounds to prevent breaking the cryptography. In the future, these approaches may be compromised by more advanced methods such as Shor's algorithm, executing on a quantum computing machine.
In the case of homomorphic cryptography <http://crypto.stanford.edu/craig/>, its computing operations are many orders of magnitude too slow. Homomorphic cryptography assumes that the underlying encryption operations E obey the homomorphism ring laws E(x+y)=E(x)+E(y) and E(x)·E(y)=E(x·y) <http://tinyurl.com/4csspud>. If the encrypted execution is tampered with (changed), then this destroys the computation even though the adversary may be unable to decrypt it. This is analogous to a DDoS attack in that an adversary does not have to be able to read confidential data in order to breach the cybersecurity of a system. Homomorphic cryptography executing on a register machine, along with the rest of the prior art, is still susceptible to the fundamental register machine weaknesses discussed below.
Some prior art has used the evolution of programs executing on a register machine (von Neumann) architecture. [Fred Cohen, “Operating Systems Protection Through Program Evolution”, IFIP-TC11 ‘Computers and Security’ (1993) V12#6 (October 1993) pp. 565-584].
The von Neumann architecture is a computing model for a stored-program digital computer that uses a CPU and a separate structure (memory) to store both instructions and data. Generally, a single instruction is executed at a time in sequential order, and there is no notion of time in von Neumann machine instructions: this creates attack points for malware to exploit.
In the prior art, computer program instructions are computed the same way at different instances: there is a fixed representation of the execution of a program instruction. For example, current microprocessors have the fixed representation of the execution of a program instruction property. (See http://en.wikipedia.org/wiki/Microprocessor.) The processors made by Intel, Qualcomm, Samsung, Texas Instruments and Motorola use a fixed representation of the execution of their program instructions. (See www.intel.com, http://en.wikipedia.org/wiki/Intel_processor, http://www.qualcomm.com/, www.samsung.com and http://www.ti.com/.)
The ARM architecture, which is licensed by many companies, uses a fixed representation of the execution of its program instructions. (See www.arm.com and http://en.wikipedia.org/wiki/Arm_instruction_set.) In the prior art, not only are the program instructions computed the same way at different instances; there are also only a finite number of program instructions representable by the underlying processor architecture. This constrains the compilation of a computer program into the processor's (machine's) program instructions.
As a consequence, the compiled machine instructions generated from a program written in a programming language such as C, JAVA, C++, Fortran, Go, assembly language, Ruby, Forth, LISP, Haskell, RISC machine instructions, Java virtual machine instructions, Python, or even a Turing machine program are computed the same way at different instances. This fixed representation of the execution of a program instruction in the prior art makes it easier for malware to exploit security weaknesses in these computer programs.
Consider two fundamental questions in computer science, which influence the design of current digital computers and play a fundamental role in hardware and machine-implemented software of the prior art:
In the prior art, the two questions are typically conceived and implemented with hardware and software that compute according to the Turing machine (TM) [33] (i.e., standard digital computer [36]) model, which is the standard model of computation [7,8,12,20,24,33].
In this invention(s), embodiments advance beyond the prior art by applying new machine-implemented methods and hardware to secure computation and cryptography. The machine embodiments bifurcate the first question into two questions: What is computation? What can computation compute? An ex-machine computation adds two special types of instructions to the standard digital computer [36] instructions 6710, as shown in
One type of special machine instruction is meta instruction 6720 in
The other special instruction is a random instruction 6740 in
Some of the ex-machine programs provided here compute beyond the Turing barrier (i.e., beyond the computing capabilities of the digital computer). Computing beyond this barrier has advantages over the prior art, particularly for embodiments of secure computation and cryptographic computation. Furthermore, these embodiments provide machine programs for computing languages that a register machine, standard digital computer or Turing machine is not able to compute. In this invention(s), a countable set of ex-machines are explicitly specified, using standard instructions, meta instructions and random instructions. Every one of these ex-machines can evolve to compute a Turing incomputable language with probability measure 1, whenever the random measurements (trials) behave like unbiased Bernoulli trials. (A Turing machine [33] or digital computer [36] cannot compute a Turing incomputable language.)
Our invention describes a quantum random, self-modifiable computer that adds two special types of instructions to standard digital computer instructions [36] and [7,12,20,24,32,33]. Before the quantum random and meta instructions are defined, we present some preliminary notation, and specification for standard instructions.
ℤ denotes the integers. ℕ and ℤ+ are the non-negative and positive integers, respectively. The finite set Q={0, 1, 2, . . . , n−1}⊂ℕ represents the ex-machine states. This representation of the ex-machine states helps specify how new states are added to Q when a meta instruction is executed. Let 𝔄={a1, . . . , an}, where each ai represents a distinct symbol. The set A={0, 1, #}∪𝔄 consists of alphabet (memory) symbols, where # is the blank symbol and {0, 1, #}∩𝔄 is the empty set. In some ex-machines, A={0, 1, #, Y, N, a}, where a1=Y, a2=N, a3=a. In some ex-machines, A={0, 1, #}, where 𝔄 is the empty set. The alphabet symbols are read from and written to memory. The ex-machine memory T is represented by the function T: ℤ→A with an additional condition: before ex-machine execution starts, there exists an N>0 so that T(k)=# when |k|>N. In other words, this mathematical condition means all memory addresses contain blank symbols, except for a finite number of memory addresses. When this condition holds for memory T, we say that memory T is finitely bounded.
1.1 Standard Instructions
Machine Specification 1.1.
Execution of Standard Instructions 6710 in
The set of standard ex-machine instructions is a subset of Q×A×Q×A×{−1, 0, 1} and satisfies a uniqueness condition: if (q1, α1, r1, a1, y1) and (q2, α2, r2, a2, y2) are standard instructions and (q1, α1, r1, a1, y1)≠(q2, α2, r2, a2, y2), then q1≠q2 or α1≠α2. A standard instruction I=(q, a, r, α, y) is similar to a Turing machine tuple [7,33]. When the ex-machine is in state q and the memory head is scanning alphabet symbol a=T(k) at memory address k, instruction I is executed as follows:
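The enumerated execution steps appear in the referenced listing. As an informal illustration only, the following Python sketch shows one plausible reading of how a standard instruction (q, a, r, α, y) updates a configuration; the dictionary-based memory and the helper names are conveniences of the sketch, not part of the specification.

# Sketch (not the specification's listing) of applying a standard instruction
# (q, a, r, alpha, y) to a configuration (state, address, memory).  Memory is
# modeled as a dict from integer addresses to symbols; addresses that are not
# stored hold the blank symbol '#', so the memory stays finitely bounded.

BLANK = '#'

def execute_standard(instruction, state, k, memory):
    q, a, r, alpha, y = instruction
    assert state == q and memory.get(k, BLANK) == a, "instruction does not apply"
    memory[k] = alpha            # write alpha at the scanned address
    return r, k + y, memory      # enter state r; move the head by y in {-1, 0, 1}

# Example: in state 3, scanning '0' at address 5, write '1', move right, enter state 4.
state, head, memory = execute_standard((3, '0', 4, '1', 1), 3, 5, {5: '0'})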
In other embodiments, standard instructions 6710 in
In some embodiments, random instruction 6740 may measure a random bit, called random_bit, and then non-deterministically execute according to the following code:
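The referenced code is not reproduced in this text. A minimal Python sketch of such a branch, with hypothetical instruction_A and instruction_B placeholders, might read as follows; the bit source is a stand-in for the non-deterministic hardware generator.

import secrets

def measure_random_bit():
    # Stand-in for a hardware (e.g., quantum) measurement so the sketch runs anywhere.
    return secrets.randbits(1)

def instruction_A():
    print("executing instruction A")

def instruction_B():
    print("executing instruction B")

random_bit = measure_random_bit()
if random_bit == 1:
    instruction_A()   # one branch of the non-deterministic execution
else:
    instruction_B()   # the other branch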
In other embodiments, the standard instructions 6710 may have a programming language syntax such as assembly language, C++, Fortran, JAVA, JAVA virtual machine instructions, Go, Haskell, RISC machine instructions, Ruby, LISP and execute on hardware 204, shown in
1 Ex-Machine Computer
A Turing machine [33] or digital computer program [36] has a fixed set of machine states Q, a finite alphabet A, a finitely bounded memory, and a finite set of standard ex-machine instructions that are executed according to specification 1.1. In other words, an ex-machine that uses only standard instructions [36] is computationally equivalent to a Turing machine. An ex-machine with only standard instructions is called a standard machine or digital computer. A standard machine has no unpredictability because it contains no random instructions. A standard machine does not modify its instructions as it is computing.
Random Instructions
The random instructions are subsets of Q×A×Q×{−1, 0, 1}={(q, a, r, y): q, r are in Q and a in A and y in {−1, 0, 1} } that satisfy a uniqueness condition defined below.
Machine Specification 1.2.
Execution of Random Instructions 6740 in
In some embodiments, the set of random instructions is a subset of Q×A×Q×{−1, 0, 1} and satisfies the following uniqueness condition: if (q1, α1, r1, y1) and (q2, α2, r2, y2) are random instructions and (q1, α1, r1, y1)≠(q2, α2, r2, y2), then q1≠q2 or α1≠α2. When the machine head is reading alphabet symbol a from memory and the machine is in machine state q, the random instruction (q, a, r, y) executes as follows:
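The enumerated execution steps appear in the referenced listing. Reading them together with the random walk trace later in this section, where the measured bit appears as the value written to memory, one plausible interpretation is: measure a bit, record it at the scanned address, enter state r, and move the head by y. The Python sketch below follows that assumed interpretation.

import secrets

BLANK = '#'

def execute_random(instruction, state, k, memory):
    """Assumed semantics of a random instruction (q, a, r, y): measure a bit,
    record it at the scanned address, enter state r, move the head by y."""
    q, a, r, y = instruction
    assert state == q and memory.get(k, BLANK) == a, "instruction does not apply"
    bit = secrets.randbits(1)        # stand-in for the non-deterministic source
    memory[k] = str(bit)             # the measured bit becomes the memory value
    return r, k + y, memory, bit

# Example: the random walk instruction (0, '#', 0, 0) measures a bit at address 0.
state, head, memory, bit = execute_random((0, '#', 0, 0), 0, 0, {})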
Repeated independent trials are called random Bernoulli trials (William Feller. An Introduction to Probability Theory and Its Applications. Volume 1, 1968.) if there are only two possible outcomes for each trial (i.e., random measurement) and the probability of each outcome remains constant for all trials. Unbiased means the probability of both outcomes is the same. The random or non-deterministic properties can be expressed mathematically as follows.
Random Measurement Property 1.
Unbiased Trials.
Consider the bit sequence (x1 x2 . . . ) in the infinite product space {0, 1}^ℕ. A single outcome xi of a bit sequence (x1 x2 . . . ) generated by randomness is unbiased. The probabilities of measuring a 0 or a 1 are equal: P(xi=1)=P(xi=0)=½.
Random Measurement Property 2.
Stochastic Independence.
History has no effect on the next random measurement. Each outcome xi is independent of the history. No correlation exists between previous and future outcomes. This is expressed in terms of the conditional probabilities: P(xi=1|x1=b1, . . . , xi−1=bi−1)=½ and P(xi=0|x1=b1, . . . , xi−1=bi−1)=½ for each bi∈{0, 1}.
In some embodiments, non-deterministic generator 142 in
Section 2 provides a physical basis for the properties and a discussion of quantum randomness for some embodiments of non-determinism.
Machine instructions 1 lists a random walk machine that has only standard instructions and random instructions. Alphabet A={0, 1, #, E}. The states are Q={0, 1, 2, 3, 4, 5, 6, h}, where the halting state h=7. A valid initial memory contains only blank symbols; that is, every memory address holds #. The valid initial state is 0.
There are three random instructions: (0, #, 0, 0), (1, #, 1, 0) and (4, #, 4, 0). The random instruction (0, #, 0, 0) is executed first. If the random source measures a 1, the machine jumps to state 4 and the memory head moves to the right of memory address 0. If the random source measures a 0, the machine jumps to state 1 and the memory head moves to the left of memory address 0. Instructions containing alphabet value E provide error checking for an invalid initial memory or initial state; in this case, the machine halts with an error.
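The instruction listing itself is given in Machine Instructions 1 below. As an informal illustration of the behavior just described, the short Python sketch here drives a symmetric walk of the head position with measured bits; the step count and the bit source are arbitrary choices of the sketch, not part of the machine.

import secrets

def random_walk(steps=31):
    """Each measured bit moves the head position right (bit 1) or left (bit 0),
    mirroring the behavior described above; this sketch does not reproduce the
    machine's instruction listing or its error checking."""
    position, path = 0, [0]
    for _ in range(steps):
        bit = secrets.randbits(1)          # stand-in for the random source
        position += 1 if bit == 1 else -1
        path.append(position)
    return path

print(random_walk())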
Machine Instructions 1.
Random Walk
Below are 31 computational steps of the ex-machine's first execution. This random walk machine never halts when the initial memory is blank and the initial state is 0. The first random instruction executed is (0, #, 0, 0). The random source measured a 0, so the execution of this instruction is shown as (0, #, 0, 0_qr, 0). The second random instruction executed is (1, #, 1, 0). The random source measured a 1, so the execution of instruction (1, #, 1, 0) is shown as (1, #, 1, 1_qr, 0).
1st Execution of Random Walk Machine. Computational Steps 1-31.
Below are the first 31 steps of the ex-machine's second execution. The first random instruction executed is (0, #, 0, 0). The random bit measured was 1, so the result of this instruction is shown as (0, #, 0, 1_qr, 0). The second random instruction executed is (1, #, 1, 0), which measured a 0, so the result of this instruction is shown as (1, #, 1, 0_qr, 0).
2nd Execution of Random Walk Machine. Computational Steps 1-31.
The first and second executions of the random walk ex-machine verify our statement in the introduction: in contrast with the Turing machine, the execution behavior of the same ex-machine may be distinct at two different instances, even though each instance of the ex-machine starts its execution with the same input, stored in memory, the same initial states and same initial instructions. Hence, the ex-machine is a discrete, non-autonomous dynamical system.
Meta Instructions
Meta instructions are the second type of special instructions, as shown in 6720 of
Define the ex-machine's instruction set as the union of the standard instructions, the quantum random instructions, and the meta instructions. To help describe how a meta instruction modifies this instruction set in self-modification system 6960 of
Before a valid machine execution starts, a hardware machine, shown as a system in
Specification 1.3 is an embodiment of self-modification system 6960 in
Machine Specification 1.3.
Execution of Meta Instructions 6720 in
A meta instruction (q, a, r, α, y, J) is executed as follows.
In regard to specification 1.3, embodiment 1 shows how a new instruction is added to the ex-machine's instructions and how new states are instantiated and added to Q.
Adding New Machine States
Consider the meta instruction (q, a1, |Q|−1, α1, y1, J), where J=(|Q|−1, a2, |Q|, α2, y2). After the standard instruction (q, a1, |Q|−1, α1, y1) is executed, this meta instruction adds one new state |Q| to the machine states Q and also adds the instruction J, instantiated with the current value of |Q|.
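A hedged Python sketch of this behavior is shown below. It assumes the add-or-replace convention that appears in the execution traces later in this specification (a newly instantiated instruction replaces any existing instruction with the same state and symbol pair); the data structures are conveniences of the sketch.

def resolve(symbol, Q):
    # Resolve the self-referential state markers against the current number of states.
    return {'|Q|-1': len(Q) - 1, '|Q|': len(Q)}.get(symbol, symbol)

def execute_meta(meta, state, k, memory, Q, program):
    """Sketch of executing a meta instruction (q, a, r, alpha, y, J):
    run the standard part, instantiate J with the current value of |Q|,
    add the new state, and add J to the program."""
    q, a, r, alpha, y, J = meta
    memory[k] = alpha                       # standard part: write alpha
    new_state = resolve(r, Q)               # the target may itself be |Q|-1
    j_q, j_a, j_r, j_alpha, j_y = J
    J_inst = (resolve(j_q, Q), j_a, resolve(j_r, Q), j_alpha, j_y)
    if '|Q|' in (j_q, j_r, r):
        Q.add(len(Q))                       # one new state |Q| is added to Q
    program[(J_inst[0], J_inst[1])] = J_inst    # add J, replacing any clash
    return new_state, k + y

# Example shaped like the text: (q, a1, |Q|-1, alpha1, y1, J) with J = (|Q|-1, a2, |Q|, alpha2, y2).
Q, program = {0, 1, 2}, {}
execute_meta((0, 'a', '|Q|-1', 'b', 1, ('|Q|-1', 'a', '|Q|', 'b', 1)), 0, 0, {0: 'a'}, Q, program)
print(sorted(Q), program)    # state 3 added; instruction (2, 'a', 3, 'b', 1) created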
In other embodiments, meta instructions 6720 in
Below is an example of how meta instructions 6720 in
The aforementioned LISP code produces the following output when executed:
In the final computational step, lambda function (lambda (x) (+ x x)) is returned.
Consider an ex-machine. The instantiation of |Q|−1 and |Q| in a meta instruction I (shown in 6720 of
Machine Specification 1.4.
Simple Meta Instructions
A simple meta instruction has one of the forms (q, a, |Q|−c2, α, y), (q, a, |Q|, α, y), (|Q|−c1, a, r, α, y), (|Q|−c1, a, |Q|−c2, α, y), (|Q|−c1, a, |Q|, α, y), where 0<c1, c2≤|Q|. The expressions |Q|−c1, |Q|−c2 and |Q| are instantiated to a state based on the current value of |Q| when the instruction is executed. In the embodiments in this section, ex-machines only self-reflect with the symbols |Q|−1 and |Q|. In other embodiments, an ex-machine may self-reflect with |Q|+c, where c is a positive integer.
Execution of Simple Meta Instructions
Let A={0, 1, #} and Q={0}. The ex-machine has 3 simple meta instructions.
With an initial blank memory and starting state of 0, the first four computational steps are shown below. In the first step, memory head is scanning a # and the ex-machine state is 0. Since |Q|=1, simple meta instruction (|Q|−1, #, |Q|−1, 1, 0) instantiates to (0, #, 0, 1, 0), and executes.
In the second step, the memory head is scanning a 1 and the state is 0. Since |Q|=1, instruction (|Q|−1, 1, |Q|, 0, 1) instantiates to (0, 1, 1, 0, 1), executes and updates Q={0, 1}. In the third step, the memory head is scanning a # and the state is 1. Since |Q|=2, instruction (|Q|−1, #, |Q|−1, 1, 0) instantiates to (1, #, 1, 1, 0) and executes. In the fourth step, the memory head is scanning a 1 and the state is 1. Since |Q|=2, instruction (|Q|−1, 1, |Q|, 0, 1) instantiates to (1, 1, 2, 0, 1), executes and updates Q={0, 1, 2}. During these four steps, two simple meta instructions create four new instructions and add new states 1 and 2.
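The following Python sketch replays these four steps using the two simple meta instructions named above; the resolution rule maps |Q|−1 and |Q| to states based on the current value of |Q|, and the sketch prints the four instantiated instructions and the final state set (the third simple meta instruction of this example machine is not needed for these steps).

BLANK = '#'
simple_meta = [('|Q|-1', BLANK, '|Q|-1', '1', 0),    # (|Q|-1, #, |Q|-1, 1, 0)
               ('|Q|-1', '1', '|Q|', '0', 1)]        # (|Q|-1, 1, |Q|, 0, 1)

def resolve(symbol, n):
    return {'|Q|-1': n - 1, '|Q|': n}.get(symbol, symbol)

Q, state, head, memory, created = {0}, 0, 0, {}, []
for step in range(4):
    scanned = memory.get(head, BLANK)
    for mq, ma, mr, malpha, my in simple_meta:
        n = len(Q)
        inst = (resolve(mq, n), ma, resolve(mr, n), malpha, my)
        if inst[0] == state and inst[1] == scanned:
            memory[head] = inst[3]                 # write
            state, head = inst[2], head + inst[4]  # new state and head position
            Q.add(state)                           # |Q| instantiation adds a state
            created.append(inst)
            break

print(created)    # [(0,'#',0,'1',0), (0,'1',1,'0',1), (1,'#',1,'1',0), (1,'1',2,'0',1)]
print(sorted(Q))  # [0, 1, 2]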
Machine Specification 1.5.
Finite Initial Conditions
A machine is said to have finite initial conditions if the following conditions are satisfied by the computing hardware shown in
It may be useful to think about the initial conditions of an ex-machine as analogous to the boundary value conditions of a differential equation. While trivial to verify, the purpose of remark 1.1 is to assure that computations by an ex-machine can execute on different types of hardware, such as semiconductor chips, lasers using photons, and biological implementations using proteins, DNA and RNA. See
Remark 1.1.
Finite Initial Conditions
If the machine starts its execution with finite initial conditions, then after the machine has executed l instructions for any positive integer l, the current set of states Q(l) is finite and the current set of instructions is finite. Also, the memory T is still finitely bounded and the number of measurements obtained from the random or non-deterministic source is finite.
Proof.
The remark follows immediately from specification 1.5 of finite initial conditions and machine instruction specifications 1.1, 1.2, and 1.3. In particular, the execution of one meta instruction adds at most one new instruction and one new state to Q. □
Specification 1.6 describes new ex-machines that can evolve from computations of prior ex-machines that have halted. The notion of evolving is useful because the random instructions 6740 and meta instructions 6720 can self-modify an ex-machine's instructions as it executes. In contrast with the ex-machine, after a digital computer program stops executing, its instructions have not changed.
This difference motivates the next specification, which is illustrated by the following. Consider an initial ex-machine X0 that has 9 initial states and 15 initial instructions. X0 starts executing on a finitely bounded memory T0 and halts. When the ex-machine halts, it (now called X1) has 14 states and 24 instructions and the current memory is S1. We say that ex-machine X0 with memory T0 evolves to ex-machine X1 with memory S1.
Machine Specification 1.6.
Evolving an Ex-Machine
Let T0, T1, T2 . . . Ti−1 each be a finitely bounded memory. Consider ex-machine X0 with finite initial conditions in the ex-machine hardware. X0 starts executing with memory T0 and evolves to ex-machine X1 with memory S1. Subsequently, X1 starts executing with memory T1 and evolves to X2 with memory S2. This means that when ex-machine X1 starts executing on memory T1, its instructions are preserved after the halt with memory S1. The ex-machine evolution continues until Xi−1 starts executing with memory Ti−1 and evolves to ex-machine Xi with memory Si. One says that ex-machine X0 with finitely bounded memories T0, T1, T2 . . . Ti−1 evolves to ex-machine Xi after i halts.
When an ex-machine X0 evolves to X1 and subsequently X1 evolves to X2 and so on up to ex-machine Xn, then ex-machine Xi is called an ancestor of ex-machine Xj whenever 0≤i<j≤n. Similarly, ex-machine Xj is called a descendant of ex-machine Xi whenever 0≤i<j≤n. The sequence of ex-machines X0→X1→ . . . →Xn→ . . . is called an evolutionary path. In some embodiments, this sequence of ex-machines may be stored on distinct computing hardware, as shown in machines 214, 216, 218 and 220 of
2 Non-Determinism and Randomness
In some embodiments, the computing machines described in this invention use non-determinism as a computational tool to make the computation unpredictable. In some embodiments, quantum random measurements are used as a computational tool. Quantum randomness is a type of non-determinism. Based on the DIEHARD statistical tests on our implementations, according to
In sections 3 and 4, the execution of the standard instructions, random instructions and meta instructions uses the property that for any m, all 2^m binary strings are equally likely to occur when a quantum random number generator takes m binary measurements. One, however, has to be careful not to misinterpret quantum random properties 1 and 2.
Consider the Champernowne sequence 01 00 01 10 11 000 001 010 011 100 101 110 111 0000 . . . , which is sometimes cited as a sequence that is Borel normal, yet still Turing computable. The book An Introduction to Probability Theory and Its Applications (William Feller. Volume 1, Third Edition. John Wiley & Sons, New York, 1968; pp. 202-211, ISBN 0-471-25708-7) discusses the mathematics of random walks. The Champernowne sequence catastrophically fails the test for the expected number of changes of sign of a random walk as n→∞. Since all 2^m strings are equally likely, the expected value of changes of sign follows from the reflection principle and simple counting arguments, as shown in section III.5 of Feller's book.
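As an informal check of this claim, the Python sketch below counts sign changes of the induced ±1 walk for an initial segment of the Champernowne sequence and for a pseudorandom segment of the same length; the segment length and the pseudorandom source are arbitrary choices of the sketch.

import secrets

def champernowne_bits(n):
    """First n bits of the binary Champernowne sequence 0 1 00 01 10 11 000 ..."""
    bits, width = [], 1
    while len(bits) < n:
        for value in range(2 ** width):
            bits.extend(int(b) for b in format(value, '0%db' % width))
        width += 1
    return bits[:n]

def sign_changes(bits):
    """Sign changes of the walk S_k = #(1s) - #(0s) over the first k bits."""
    s, last_sign, changes = 0, 0, 0
    for b in bits:
        s += 1 if b else -1
        sign = (s > 0) - (s < 0)
        if sign != 0:
            if last_sign != 0 and sign != last_sign:
                changes += 1
            last_sign = sign
    return changes

n = 10_000
print("Champernowne segment:", sign_changes(champernowne_bits(n)))
print("pseudorandom segment:", sign_changes([secrets.randbits(1) for _ in range(n)]))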
Furthermore, most of the 2^m binary strings (i.e., binary strings of length m) have high Kolmogorov complexity. This fact leads to the following mathematical intuition that enables new computational behaviors that a standard digital computer cannot perform. The execution of quantum random instructions working together with meta instructions enables the ex-machine to increase its program complexity [28] as it evolves. In some cases, the increase in program complexity can increase the ex-machine's computational power as the ex-machine evolves. Also, notice the distinction here between the program complexity of the ex-machine and Kolmogorov complexity. The definition of Kolmogorov complexity only applies to standard machines. Moreover, the program complexity (e.g., the Shannon complexity |Q||A|) stays fixed for standard machines. In contrast, an ex-machine's program complexity can increase without bound when the ex-machine executes quantum random and meta instructions that productively work together. (For example, see ex-machine 1, called (x).)
In terms of the ex-machine computation performed, how one of these binary strings is generated from some particular type of non-deterministic process is not the critical issue. Suppose the quantum random generator, which measures the spin of particles and is certified by the strong Kochen-Specker theorem [5,6], outputs the 100-bit string a0 a1 . . . a99=1011000010101111001100110011100010001110010101011011110000000010011001000011010101101111001101010000 to ex-machine 1.
Suppose a distinct quantum random generator using radioactive decay outputs the same 100-bit string a0 a1 . . . a99 to a distinct ex-machine 2. Suppose ex-machines 1 and 2 have identical programs with the same initial tapes and the same initial state. Even though radioactive decay was discovered over 100 years ago and its physical basis is still phenomenological, the execution behaviors of ex-machines 1 and 2 are indistinguishable for the first 100 executions of their quantum random instructions. In other words, ex-machines 1 and 2 exhibit execution behaviors that are independent of the quantum process that generates these two identical binary strings.
2.1 Mathematical Determinism and Unpredictability
Before some of the deeper theory on quantum randomness is reviewed, we take a step back to view randomness from a broader theoretical perspective. While we generally agree with the philosophy of Eagle (Antony Eagle. Randomness Is Unpredictability. British Journal of Philosophy of Science. 56, 2005, pp. 749-790.) that randomness is unpredictability, embodiment 3 helps sharpen the differences between indeterminism and unpredictability.
A Mathematical Gedankenexperiment
Our gedankenexperiment demonstrates a deterministic system that exhibits an extreme amount of unpredictability. This embodiment shows that a physical system whose mathematics is deterministic can still be an extremely unpredictable process if the measurements of the process have limited resolution. Some mathematical work is needed to define the dynamical system and summarize its mathematical properties before we can present the gedankenexperiment.
Consider the quadratic map ƒ: ℝ→ℝ, where ƒ(x)=(9/2)x(1−x). Set I0=[0, ⅓] and I1=[⅔, 1]. Set B=(⅓, ⅔). Define the set Λ={x∈[0, 1]: ƒn(x)∈I0∪I1 for all n∈ℕ}. 0 is a fixed point of ƒ and ƒ2(⅓)=ƒ2(⅔)=0, so the boundary points of B lie in Λ. Furthermore, whenever x∈B, then ƒ(x)>1, ƒ2(x)<0, and ƒn(x)→−∞.
This means all orbits that exit Λ head off to −∞.
The inverse image ƒ−1(B) is two open intervals B0⊂I0 and B1⊂I1 such that ƒ(B0)=ƒ(B1)=B. Topologically, B0 behaves like Cantor's open middle third of I0 and B1 behaves like Cantor's open middle third of I1. Repeating the inverse image indefinitely, define the set
Now H∪Λ=[0,1] and H∩Λ=∅.
Using dynamical systems notation, set Σ2={0, 1}^ℕ. Define the shift map σ: Σ2→Σ2, where σ(a0 a1 . . . )=(a1 a2 . . . ). For each x in Λ, x's trajectory in I0 and I1 corresponds to a unique point in Σ2: define h: Λ→Σ2 as h(x)=(a0 a1 . . . ) such that for each n∈ℕ, set an=0 if ƒn(x)∈I0 and an=1 if ƒn(x)∈I1.
For any two points (a0 a1 . . . ) and (b0 b1 . . . ) in Σ2, define the metric d((a0 a1 . . . ), (b0 b1 . . . ))=Σi∈ℕ |ai−bi| 2^−i.
Via the standard topology on ℝ inducing the subspace topology on Λ, it is straightforward to verify that h is a homeomorphism from Λ to Σ2. Moreover, h∘ƒ=σ∘h, so h is a topological conjugacy. The set H and the topological conjugacy h enable us to verify that Λ is a Cantor set. This means that Λ is uncountable, totally disconnected, compact and every point of Λ is a limit point of Λ.
We are ready to pose our mathematical gedankenexperiment. We make the following assumption about our mathematical observer. When our observer takes a physical measurement of a point x in Λ, she measures a 0 if x lies in I0 and measures a 1 if x lies in I1. We assume that she cannot make her observation any more accurate, based on our idealization that is analogous to the following: measurements at the quantum level have limited resolution due to the wavelike properties of matter (Louis de Broglie. Recherches sur la theorie des quanta. Ph.D. Thesis. Paris, 1924.) Similarly, at the second observation, our observer measures a 0 if ƒ(x) lies in I0 and 1 if ƒ(x) lies in I1. Our observer continues to make these observations until she has measured whether ƒk−1(x) is in I0 or in I1. Before making her k+1st observation, can our observer make an effective prediction whether ƒk(x) lies in I0 or I1 that is correct for more than 50% of her predictions?
The answer is no when h(x) is a generic point (i.e., in the sense of Lebesgue measure) in Σ2. The set of Martin-Löf random points in Σ2 has Lebesgue measure 1 in Σ2 (Feller, Volume 1), so its complement has Lebesgue measure 0. For any x such that h(x) is a Martin-Löf random point, our observer cannot predict the orbit of x with a Turing machine. Hence, via the topological conjugacy h, we see that for a generic point x in Λ, x's orbit between I0 and I1 is Martin-Löf random—even though ƒ is mathematically deterministic and ƒ is a Turing computable function.
Overall, the dynamical system (ƒ, Λ) is mathematically deterministic and each real number x in Λ has a definite value. However, due to the lack of resolution in the observer's measurements, the orbit of generic point x is unpredictable—in the sense of Martin-Löf random.
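A short Python sketch of the observer's measurement scheme is given below; the starting point and the step bound are arbitrary, and since a double-precision starting value is generically not in Λ, the recorded symbol sequence is only a finite prefix before the orbit enters B and escapes.

def f(x):
    return 4.5 * x * (1.0 - x)

def observe(x, max_steps=60):
    """Record the observer's measurements: 0 when the point lies in I0 = [0, 1/3],
    1 when it lies in I1 = [2/3, 1].  The loop stops when the orbit enters
    B = (1/3, 2/3), i.e. leaves I0 union I1 and eventually escapes to -infinity."""
    symbols = []
    for _ in range(max_steps):
        if 0.0 <= x <= 1.0 / 3.0:
            symbols.append(0)
        elif 2.0 / 3.0 <= x <= 1.0:
            symbols.append(1)
        else:
            break
        x = f(x)
    return symbols

print(observe(0.21))   # a finite 0/1 prefix of the orbit's itinerary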
2.2 Quantum Random Theory
The standard theory of quantum randomness stems from the seminal EPR paper. Einstein, Podolsky and Rosen propose a necessary condition for a complete theory of quantum mechanics: Every element of physical reality must have a counterpart in the physical theory. Furthermore, they state that elements of physical reality must be found by the results of experiments and measurements. While mentioning that there might be other ways of recognizing a physical reality, EPR propose the following as a reasonable criterion for a complete theory of quantum mechanics:
They consider a quantum-mechanical description of a particle, having one degree of freedom. After some analysis, they conclude that a definite value of the coordinate, for a particle in the state given by
is not predictable, but may be obtained only by a direct measurement. However, such a measurement disturbs the particle and changes its state. They remind us that in quantum mechanics, when the momentum of the particle is known, its coordinate has no physical reality. This phenomenon has a more general mathematical condition that if the operators corresponding to two physical quantities, say A and B, do not commute, then a precise knowledge of one of them precludes a precise knowledge of the other. Hence, EPR reach the following conclusion:
EPR justifies this conclusion by reasoning that if both physical quantities had a simultaneous reality and consequently definite values, then these definite values would be part of the complete description. Moreover, if the wave function provides a complete description of physical reality, then the wave function would contain these definite values and the definite values would be predictable.
From their conclusion of I OR II, EPR assume the negation of I—that the wave function does give a complete description of physical reality. They analyze two systems that interact over a finite interval of time, and they show, by a thought experiment of measuring each system via wave packet reduction, that it is possible to assign two different wave functions to the same physical reality. Upon further analysis of two wave functions that are eigenfunctions of two non-commuting operators, they arrive at the conclusion that two physical quantities, with non-commuting operators, can have simultaneous reality. From this contradiction or paradox (depending on one's perspective), they conclude that the quantum-mechanical description of reality is not complete.
In the paper—Can Quantum-Mechanical Description of Physical Reality be Considered Complete? Physical Review. 48, Oct. 15, 1935, pp. 696-702—Niels Bohr responds to the EPR paper. Via an analysis involving single slit experiments and double slit (two or more) experiments, Bohr explains how, during position measurements, momentum is transferred between the object being observed and the measurement apparatus. Similarly, Bohr explains that during momentum measurements the object is displaced. Bohr also makes a similar observation about time and energy: “it is excluded in principle to control the energy which goes into the clocks without interfering essentially with their use as time indicators”. Because at the quantum level it is impossible to control the interaction between the object being observed and the measurement apparatus, Bohr argues for a “final renunciation of the classical ideal of causality” and a “radical revision of physical reality”.
From his experimental analysis, Bohr concludes that the meaning of EPR's expression without in any way disturbing the system creates an ambiguity in their argument. Bohr states: “There is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term physical reality can be properly attached, we see that the argumentation of the mentioned authors does not justify their conclusion that quantum-mechanical description is essentially incomplete.” Overall, the EPR versus Bohr-Born-Heisenberg position set the stage for understanding whether hidden variables can exist in the theory of quantum mechanics.
Embodiments of quantum random instructions utilize the lack of hidden variables because this contributes to the unpredictability of quantum random measurements. In some embodiments of the quantum random measurements used by the quantum random instructions, the lack of hidden variables is only associated with the quantum system being measured, and in other embodiments the lack of hidden variables is associated with the measurement apparatus. In some embodiments, the value indefiniteness of measurement observables implies unpredictability. In some embodiments, the unpredictability of quantum measurements 6730 in
In an embodiment of quantum random measurements 6730 shown in
The Sx=±1 outcomes can be assigned 0 and 1, respectively. Moreover, since
neither of the Sx=±1 outcomes can have a pre-determined definite value. As a consequence, bits 0 and 1 are generated independently (stochastic independence) with a 50/50 probability (unbiased). These are quantum random properties 1 and 2.
3 Computing Ex-Machine Languages
A class of ex-machines is defined as evolutions of the fundamental ex-machine (x), whose 15 initial instructions are listed in ex-machine specification 1. These ex-machines compute languages L that are subsets of {a}*={an: n∈ℕ}. The expression an represents a string of n consecutive a's. For example, a5=aaaaa and a0 is the empty string. Define the set of languages ℒ={L: L⊆{a}*}.
Machine Specification 3.1 defines a unique language in ℒ for each function ƒ: ℕ→{0, 1}.
Machine Specification 3.1.
Language Lƒ
Consider any function ƒ: ℕ→{0, 1}. This means ƒ is a member of the set {0, 1}^ℕ. Function ƒ induces the language Lƒ={an: ƒ(n)=1}. In other words, for each non-negative integer n, string an is in the language Lƒ if and only if ƒ(n)=1.
Trivially, Lƒ is a language in ℒ. Moreover, these functions ƒ generate all of ℒ.
Remark 3.1.
In order to define the halting syntax for the language in ℒ that an ex-machine computes, choose alphabet set A={#, 0, 1, N, Y, a}.
Machine Specification 3.2.
Language L in ℒ that an Ex-Machine Computes
Let X be an ex-machine. The language L in ℒ that X computes is defined as follows. A valid initial tape has the form # #an#. The valid initial tape # ## represents the empty string. After machine X starts executing with initial tape # #an#, string an is in X's language if ex-machine X halts with tape #an# Y#. String an is not in X's language if X halts with tape #an# N#.
The use of special alphabet symbols (i.e., special memory values) Y and N—to decide whether an is in the language or not in the language—follows [?].
For a particular string # #am#, some ex-machine X could first halt with #am# N# and in a second computation with input # #am# could halt with #am# Y#. This oscillation of halting outputs could continue indefinitely and in some cases the oscillation can be aperiodic. In this case, X's language would not be well-defined according to machine specification 3.2. These types of ex-machines will not be specified in this invention.
There is a subtle difference between (x) and an ex-machine X whose halting output never stabilizes. In contrast to the Turing machine or a digital computer program, two different instances of the ex-machine (x) can evolve to two different machines and compute distinct languages according to machine specification 3.2. However, after (x) has evolved to a new machine (a0 a1 . . . am x) as a result of a prior execution with input tape # #am#, then for each i with 0≤i≤m, machine (a0 a1 . . . am x) always halts with the same output when presented with input tape # #ai#. In other words, (a0 a1 . . . am x)'s halting output stabilizes on all input strings ai where 0≤i≤m. Furthermore, it is the ability of (x) to exploit the non-autonomous behavior of its two quantum random instructions that enables an evolution of (x) to compute languages that are Turing incomputable (i.e., not computable by a standard digital computer).
We designed ex-machines that compute subsets of {a}* rather than subsets of {0, 1}* because the resulting specification of (x) is much simpler and more elegant. It is straightforward to list a standard machine that bijectively translates each an to a binary string in {0, 1}* as follows. The empty string in {a}* maps to the empty string in {0, 1}*. Let ψ represent this translation map. Hence, ψ(a)=0, ψ(aa)=1, ψ(aaa)=00, ψ(a4)=01, ψ(a5)=10, ψ(a6)=11, ψ(a7)=000, and so on. Similarly, an inverse translation standard machine computes the inverse of ψ. Hence ψ−1(0)=a, ψ−1(1)=aa, ψ−1(00)=aaa, and so on. The translation and inverse translation computations immediately transfer any results about the ex-machine computation of subsets of {a}* to corresponding subsets of {0, 1}* via ψ. In particular, the following remark is relevant for our discussion.
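The translation map is simple enough to express directly. The Python sketch below uses one convenient encoding that reproduces the listed examples (the empty string maps to the empty string, a to 0, aa to 1, aaa to 00, and so on); the trick of dropping a leading 1 from the binary representation of n+1 is a choice of the sketch.

def psi(n):
    """psi(a^n): the empty string for n = 0, then a -> 0, aa -> 1, aaa -> 00, ..."""
    return format(n + 1, 'b')[1:]

def psi_inverse(bits):
    """Inverse translation: recover n, the length of a^n, from a binary string."""
    return int('1' + bits, 2) - 1

assert [psi(n) for n in range(8)] == ['', '0', '1', '00', '01', '10', '11', '000']
assert all(psi_inverse(psi(n)) == n for n in range(100))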
Remark 3.2.
Every subset of {a}* is computable by some ex-machine if and only if every subset of {0, 1}* is computable by some ex-machine.
Proof.
The remark immediately follows from the fact that the translation map ψ and the inverse translation map ψ−1 are computable with a standard machine. □
When the quantum randomness in (x)'s two quantum random instructions satisfies property 1 (unbiased Bernoulli trials) and property 2 (stochastic independence), for each n∈ℕ, all 2^n finite paths of length n—in the infinite, binary tree of
Moreover, there is a one-to-one correspondence between a function ƒ: ℕ→{0, 1} and an infinite downward path in the infinite binary tree of
Machine Specification 1.
(x)
A={#, 0, 1, N, Y, a}. States Q={0, h, n, y, t, v, w, x, 8} where halting state h=1, and states n=2, y=3, t=4, v=5, w=6, x=7. The initial state is always 0. The letters are used to represent machine states instead of explicit numbers because these states have special purposes. (This is for the reader's benefit.) State n indicates NO that the string is not in the machine's language. State y indicates YES that the string is in the machine's language. State x is used to generate a new random bit; this random bit determines the string corresponding to the current value of |Q|−1. The fifteen instructions of (x) are shown below.
With initial state 0 and initial memory # #aaaa##, an execution of machine (x) is shown below.
During this execution, (x) replaces instruction (8, #, x, #, 0) with (8, #, y, #, 1). Meta instruction (w, a, |Q|, a, 1, (|Q|−1, a, |Q|, a, 1)) executes and replaces (8, a, x, a, 0) with new instruction (8, a, 9, a, 1). Also, simple meta instruction (|Q|−1, a, x, a, 0) temporarily added instructions (9, a, x, a, 0), (10, a, x, a, 0), and (11, a, x, a, 0).
Subsequently, these new instructions were replaced by (9, a, 10, a, 1), (10, a, 11, a, 1), and (11, a, 12, a, 1), respectively. Similarly, simple meta instruction (|Q|−1, #, x, #, 0) added instruction (12, #, x, #, 0) and this instruction was replaced by instruction (12, #, n, #, 1). Lastly, instructions (9, #, y, #, 1), (10, #, n, #, 1), (11, #, y, #, 1), and (12, a, 13, a, 1) were added.
Furthermore, five new states 9, 10, 11, 12 and 13 are added to Q. After this computation halts, the machine states are Q={0, h, n, y, t, v, w, x, 8, 9, 10, 11, 12, 13} and the ex-machine that (x) evolved to has 24 instructions. It is called (11010 x).
Machine Instructions 2.
(11010 x)
New instructions (8, #, y, #, 1), (9, #, y, #, 1), and (11, #, y, #, 1) help (11010 x) compute that the empty string, a and aaa are in its language, respectively. Similarly, the new instructions (10, #, n, #, 1) and (12, #, n, #, 1) help (11010 x) compute that aa and aaaa are not in its language, respectively.
The 1's in the zeroth, first, and third positions of (11010 x)'s name indicate that the empty string, a and aaa are in (11010 x)'s language. The 0's in the second and fourth positions indicate that strings aa and aaaa are not in its language. The symbol x indicates that it has not yet been determined whether the strings an with n≥5 are in (11010 x)'s language or not in its language.
Starting at state 0, ex-machine (11010 x) computes that the empty string is in (11010 x)'s language.
Starting at state 0, ex-machine (11010 x) computes that string a is in (11010 x)'s language.
Starting at state 0, (11010 x) computes that string aa is not in (11010 x)'s language.
Starting at state 0, (11010 x) computes that aaa is in (11010 x)'s language.
Starting at state 0, (11010 x) computes that aaaa is not in (11010 x)'s language.
Note that for each of these executions, no new states were added and no instructions were added or replaced. Thus, for all subsequent executions, ex-machine (11010 x) computes that the empty string, a and aaa are in its language. Similarly, strings aa and aaaa are not in (11010 x)'s language for all subsequent executions of (11010 x).
Starting at state 0, we examine an execution of ex-machine (11010 x) on input memory # #aaaaaaa##.
Overall, during this execution ex-machine (11010 x) evolved to ex-machine (11010 011 x). Three quantum random instructions were executed. The first quantum random instruction (x, a, t, 0) measured a 0, so it is shown above as (x, a, t, 0_qr, 0). The result of this 0 bit measurement adds the instruction (13, #, n, #, 1), so that in all subsequent executions of ex-machine (11010 011 x), string a5 is not in (11010 011 x)'s language. Similarly, the second quantum random instruction (x, a, t, 0) measured a 1, so it is shown above as (x, a, t, 1_qr, 0). The result of this 1 bit measurement adds the instruction (14, #, y, #, 1), so that in all subsequent executions, string a6 is in (11010 011 x)'s language. Finally, the third quantum random instruction (x, #, x, 0) measured a 1, so it is shown above as (x, #, x, 1_qr, 0). The result of this 1 bit measurement adds the instruction (15, #, y, #, 1), so that in all subsequent executions, string a7 is in (11010 011 x)'s language.
Lastly, starting at state 0, we examine a distinct execution of ex-machine (11010 x) on input memory # #aaaaaaa##. A distinct execution of (11010 x) evolves to ex-machine (11010 000 x).
Based on our previous examination of ex-machine (x) evolving to (11010 x) and then subsequently (11010 x) evolving to (11010 011 x), ex-machine 2 specifies (a0 a1 . . . am x) in terms of initial machine states and initial machine instructions.
Machine Specification 2.
(a0 a1 . . . am x)
Let m∈ℕ. Set Q={0, h, n, y, t, v, w, x, 8, 9, 10, . . . , m+8, m+9}. For 0≤i≤m, each ai is 0 or 1. Ex-machine (a0 a1 . . . am x)'s instructions are shown below. Symbol b8=y if a0=1. Otherwise, symbol b8=n if a0=0. Similarly, symbol b9=y if a1=1. Otherwise, symbol b9=n if a1=0. And so on, until reaching the second to last instruction (m+8, #, bm+8, #, 1): symbol bm+8=y if am=1. Otherwise, symbol bm+8=n if am=0.
Machine Computation Property 3.1.
Whenever i satisfies 0≤i≤m, string ai is in (a0 a1 . . . am x)'s language if ai=1; string ai is not in (a0 a1 . . . am x) 's language if ai=0. Whenever n>m, it has not yet been determined whether string an is in (a0 a1 . . . am x) 's language or not in its language.
Proof.
When 0≤i≤m, the first consequence follows immediately from the definition of ai being in (a0 a1 . . . am x)'s language and from ex-machine 2. In instruction (i+8, #, bi+8, #, 1) the state value of bi+8 is y if ai=1 and bi+8 is n if ai=0.
For the indeterminacy of strings an when n>m, ex-machine (a0 . . . am x) executes its last instruction (m+8, a, m+9, a, 1) when it is scanning the mth a in an. Subsequently, for each a on the memory to the right (higher memory addresses) of #am, ex-machine (a0 . . . am x) executes the quantum random instruction (x, a, t, 0).
If the execution of (x, a, t, 0) measures a 0, the two meta instructions (t, 0, w, a, 0, (|Q|−1, #, n, #, 1)) and (w, a, |Q|, a, 1, (|Q|−1, a, |Q|, a, 1)) are executed. If the next alphabet symbol to the right is an a, then a new standard instruction is executed that is instantiated from the simple meta instruction (|Q|−1, a, x, a, 0). If the memory address was pointing to the last a in an, then a new standard instruction is executed that is instantiated from the simple meta instruction (|Q|−1, #, x, #, 0).
If the execution of (x, a, t, 0) measures a 1, the two meta instructions (t, 1, w, a, 0, (|Q|−1, #, y, #, 1)) and (w, a, |Q|, a, 1, (|Q|−1, a, |Q|, a, 1)) are executed. If the next alphabet symbol to the right is an a, then a new standard instruction is executed that is instantiated from the simple meta instruction (|Q|−1, a, x, a, 0). If the memory address was pointing to the last a in an, then a new standard instruction is executed that is instantiated from the simple meta instruction (|Q|−1, #, x, #, 0).
In this way, for each a on the memory to the right (higher memory addresses) of #am, the execution of the quantum random instruction (x, a, t, 0) determines whether each string am+k, satisfying 1≤k≤n−m, is in or not in (a0 a1 . . . an x)'s language.
After the execution of (|Q|−1, #, x, #, 0), the memory address is pointing to a blank symbol, so the quantum random instruction (x, #, x, 0) is executed. If a 0 is measured by the quantum random source, the meta instructions (x, 0, v, #, 0, (|Q|−1, #, n, #, 1)) and (v, #, n, #, 1, (|Q|−1, a, |Q|, a, 1)) are executed. Then the last instruction executed is (n, #, h, N, 0), which indicates that an is not in (a0 a1 . . . an x)'s language.
If the execution of (x, #, x, 0) measures a 1, the meta instructions (x, 1, w, #, 0, (|Q|−1, #, y, #, 1)) and (w, #, y, #, 1, (|Q|−1, a, |Q|, a, 1)) are executed. Then the last instruction executed is (y, #, h, Y, 0), which indicates that an is in (a0 a1 . . . an x)'s language.
During the execution of the instructions, for each a on the memory to the right (higher memory addresses) of #am, ex-machine (a0 a1 . . . am x) evolves to (a0 a1 . . . an x) according to the specification in ex-machine 2, where one substitutes n for m. □
Remark 3.3.
When the binary string a0 a1 . . . am is presented as input, the ex-machine instructions for (a0 a1 . . . am x), specified in ex-machine 2, are constructible (i.e., can be printed) with a standard machine.
In contrast with lemma 3.1, (a0 a1 . . . am x)'s instructions are not executable with a standard machine when the input memory # #ai# satisfies i>m because meta and quantum random instructions are required. Thus, remark 3.3 distinguishes the construction of (a0 a1 . . . am x)'s instructions from the execution of (a0 a1 . . . am x)'s instructions.
Proof.
When given a finite list (a0 a1 . . . am), where each ai is 0 or 1, the code listing below constructs (a0 a1 . . . am x)'s instructions. Starting with the comment ;; Qx_builder.lsp, the code listing is expressed in a dialect of LISP, called newLISP. (See www.newlisp.org.) LISP was designed based on the lambda calculus, which was developed by Alonzo Church. The appendix of [33] outlines a proof that the lambda calculus is computationally equivalent to digital computer instructions (i.e., standard machine instructions). The following 3 instructions print the ex-machine instructions for (11010 x), listed in machine instructions 2.
Machine Specification 3.3.
Define the set of ex-machines consisting of (x) and all ex-machines (a0 . . . am x) for each m∈ℕ and for each a0 . . . am in {0, 1}^(m+1). In other words,
Theorem 3.2.
Each language Lƒ in ℒ can be computed by the evolving sequence of ex-machines (x), (ƒ(0) x), (ƒ(0)ƒ(1) x), . . . , (ƒ(0)ƒ(1) . . . ƒ(n) x), . . . .
Proof.
The theorem follows from ex-machine 1, ex-machine 2 and lemma 3.1. □
Machine Computation Property 3.3.
Given function ƒ: ℕ→{0, 1}, for any arbitrarily large n, the evolving sequence of ex-machines (ƒ(0)ƒ(1) . . . ƒ(n) x), (ƒ(0)ƒ(1) . . . ƒ(n)ƒ(n+1) x), . . . computes language Lƒ.
Machine Computation Property 3.4.
Moreover, for each n, all ex-machines (x), (ƒ(0)x), (ƒ(0) ƒ(1) x), . . . , (ƒ(0)ƒ(1) . . . ƒ(n) x) combined have used only a finite amount of memory, finite number of states, finite number of instructions, finite number of executions of instructions and only a finite amount of quantum random information measured by the quantum random instructions.
Proof.
For each n, the finite use of computational resources follows immediately from remark 2, specification 7 and the specification of ex-machine 2. □
A set X is called countable if there exists a bijection between X and ℕ. Since the set of all Turing machines is countable and each Turing machine only recognizes a single language, most (in the sense of Cantor's hierarchy of infinities) languages Lƒ are not computable with a Turing machine. More precisely, the cardinality of the set of languages Lƒ computable with a Turing machine is ℵ0, while the cardinality of the set of all languages Lƒ is ℵ1.
For each non-negative integer n, define the language tree ℒ(a0 a1 . . . an)={Lƒ: ƒ∈{0, 1}^ℕ and ƒ(i)=ai for i satisfying 0≤i≤n}. Define the corresponding subset of {0, 1}^ℕ as S(a0 a1 . . . an)={ƒ∈{0, 1}^ℕ: ƒ(i)=ai for i satisfying 0≤i≤n}. Let Ψ denote this 1-to-1 correspondence, where Ψ maps {0, 1}^ℕ to ℒ and maps each S(a0 a1 . . . an) to ℒ(a0 a1 . . . an).
Since the two quantum random axioms 1 and 2 are satisfied, each finite path ƒ(0)ƒ(1) . . . ƒ(n) is equally likely and there are 2^(n+1) of these paths. Thus, each path of length n+1 has probability 2^−(n+1). These uniform probabilities on finite strings of the same length can be extended to the Lebesgue measure μ on the probability space {0, 1}^ℕ. Hence each subset S(a0 a1 . . . an) has measure 2^−(n+1). That is, μ(S(a0 a1 . . . an))=2^−(n+1) and μ({0, 1}^ℕ)=1. Via the Ψ correspondence between each language tree ℒ(a0 a1 . . . an) and subset S(a0 a1 . . . an), the uniform probability measure μ induces a uniform probability measure ν on ℒ, where ν(ℒ(a0 a1 . . . an))=2^−(n+1) and ν(ℒ)=1.
Theorem 3.5.
For Functions ƒ: ℕ→{0, 1}, the Probability that Language Lƒ is Turing Incomputable has Measure 1 in (ν, ℒ).
Proof.
The Turing machines are countable and therefore the number of functions ƒ: ℕ→{0, 1} that are Turing computable is countable. Hence, via the Ψ correspondence, the Turing computable languages Lƒ have measure 0 in ℒ. □
Moreover, the Martin-Löf random sequences ƒ: ℕ→{0, 1} have Lebesgue measure 1 in {0, 1}^ℕ and are a proper subset of the Turing incomputable sequences.
Machine Computation Property 3.6.
(x) is not a Turing machine. Each ex-machine (a0 a1 . . . am x) is not a Turing machine.
Proof.
(x) can evolve to compute Turing incomputable languages on a set of probability measure 1 with respect to (ν, ℒ). Also, (a0 a1 . . . am x) can evolve to compute Turing incomputable languages on a set of measure 2^−(m+1) with respect to (ν, ℒ). In contrast, each Turing machine only recognizes a single language, which has measure 0. In fact, the measure of all Turing computable languages is 0 in ℒ. □
Remark 3.4.
The statements in theorem 3.5 and corollary 3.6 can be sharpened when deeper results are obtained for the quantum random source used by the quantum random instructions.
4 A Non-Deterministic Execution Machine
A brief, intuitive summary of a non-deterministic execution machine is provided first before the formal description in machine specification 4.1. Our non-deterministic execution machine should not be confused with the nondeterministic Turing machine that is described on pages 30 and 31 of [12].
Part of the motivation for our approach is that procedure 4 is easier to analyze and implement. Some embodiments of procedure 4 can be implemented using non-deterministic generator 142 in
In a standard digital computer, the computer program can be specified as a function; from a current machine configuration (q, k, T), where q is the machine state, k is the memory address being scanned and T is the memory, there is exactly one instruction to execute next or the Turing machine halts on this machine configuration.
Instead of a function, in embodiments of our invention(s) of a non-deterministic execution machine, the program is a relation. From a particular machine configuration, there is ambiguity about the next instruction I to execute. This is a computational advantage when the adversary does not know the purpose of the program. Measuring randomness with quantum random measurements 6730 in
Part of our non-deterministic execution machine specification is that each possible instruction has a likelihood of being executed. Some quantum randomness is measured with 6730 in
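As a hedged illustration of these two ideas (a relation rather than a function, plus a likelihood attached to each competing instruction), the Python sketch below selects one instruction from the set of instructions that apply to the current configuration; the likelihood values and the instruction names are hypothetical, and the bit source stands in for the quantum measurement.

import secrets

def select_instruction(applicable):
    """'applicable' is a list of (instruction, likelihood) pairs competing at the
    current machine configuration, with likelihoods summing to 1.  Random bits
    drive the choice, so two executions from the same configuration may select
    different instructions."""
    r = secrets.randbits(32) / 2.0 ** 32     # a number in [0, 1) from random bits
    total = 0.0
    for instruction, likelihood in applicable:
        total += likelihood
        if r < total:
            return instruction
    return applicable[-1][0]                 # guard against rounding at the top end

# Hypothetical example: three instructions compete at one configuration.
print(select_instruction([('I1', 0.5), ('I2', 0.25), ('I3', 0.25)]))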
Before machine specifications are provided, some symbol definitions are reviewed. Symbol ℤ+ is the positive integers. Recall that ℚ represents the rational numbers. [0, 1] is the closed interval of real numbers x such that 0≤x≤1. The expression x∉X means element x is NOT a member of X. For example,
The symbol ∩ means intersection. Note
Machine Specification 4.1.
A Non-Deterministic Execution Machine
A non-deterministic machine is defined as follows:
Non-deterministic machine 4.1 can be used to execute two different instances of a procedure with a different sequence of machine instructions 6700, as shown in
Machine Specification 4.2.
Machine Configurations and Valid Computational Steps
A machine configuration Ci is a triplet (q, k, Ti), where q is a machine state, k is an integer that is the current memory address being scanned, and Ti represents the memory. Consider machine configuration Ci+1=(r, l, Ti+1). Per machine specification 4.1, the transition from Ci to Ci+1 is a valid computational step if instruction I=(q, α, r, β, y) and memories Ti and Ti+1 satisfy the following four conditions:
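The four conditions themselves are not reproduced in this text. As a hedged illustration only, the Python check below encodes the usual reading of such a step (the scanned symbol matches α, β is written at the scanned address, every other memory value is unchanged, and the head moves by y into state r); it is a sketch of that reading, not a restatement of the specification.

BLANK = '#'

def is_valid_step(config_i, config_next, instruction):
    """Sketch of a step-validity check for configurations (q, k, T_i) -> (r, l, T_next)
    under instruction (q, alpha, r, beta, y), assuming the usual semantics."""
    (q, k, T_i), (r_next, l, T_next) = config_i, config_next
    iq, alpha, ir, beta, y = instruction
    return (q == iq and T_i.get(k, BLANK) == alpha             # instruction applies
            and r_next == ir and l == k + y                    # new state and address
            and T_next.get(k, BLANK) == beta                   # beta written at k
            and all(T_next.get(j, BLANK) == T_i.get(j, BLANK)  # rest of memory unchanged
                    for j in set(T_i) | set(T_next) if j != k))

# Example: instruction (2, '0', 3, '1', 1) maps configuration (2, 7, {7: '0'}) to (3, 8, {7: '1'}).
print(is_valid_step((2, 7, {7: '0'}), (3, 8, {7: '1'}), (2, '0', 3, '1', 1)))   # True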
The purpose of machine procedure 3 is to describe how a non-deterministic machine selects a machine instruction from the collection of machine instructions 6700 as shown in
Machine Instructions 3.
In some embodiments, machine instructions (procedure) 4 can be implemented with standard instructions 6710, meta instructions 6720, and random instructions 6740, as shown in
In some embodiments, machine instructions non-deterministically selected may be represented in a C programming language syntax. In some embodiments, some of the instructions non-deterministically selected to execute have the C syntax such as x=1; or z=x*y;. In some embodiments, one of instructions selected may be a loop with a body of machine instructions such as:
In some embodiments, a random instruction may measure a random bit, called random_bit and then non-deterministically execute according to the following code:
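A minimal sketch of such a random instruction follows in C; the helper measure_random_bit() is a hypothetical stand-in for a hardware quantum measurement and is stubbed here with a pseudo-random call, and the two branches are illustrative only.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a quantum measurement; a real embodiment would
   read the bit from non-deterministic hardware rather than from rand(). */
static int measure_random_bit(void) {
    return rand() & 1;
}

int main(void) {
    int x = 3, y = 5, z;
    int random_bit = measure_random_bit();
    if (random_bit == 1)
        z = x + y;      /* one possible continuation of the computation */
    else
        z = x * y;      /* the other possible continuation */
    printf("random_bit=%d z=%d\n", random_bit, z);
    return 0;
}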
In other embodiments, the machine instructions may have a programming language syntax such as JAVA, Go, Haskell, C++, RISC machine instructions, JAVA virtual machine, Ruby, LISP
Machine Instructions 4.
In machine instructions (procedure) 4, the ex-machine that implements the pseudocode executes a particular non-deterministic machine a finite number of times; if a final state is reached, the procedure increments the corresponding final state score. After the final state scores have been tallied and converge inside the reliability interval (0, ϵ), the ex-machine execution is complete and the ex-machine halts.
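The following C sketch illustrates this tallying under assumptions that are not part of the specification: two final states, a stubbed run of the non-deterministic machine, and a simple batch-to-batch frequency test standing in for convergence inside the reliability interval (0, ϵ).

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NUM_FINAL_STATES 2

/* Stub: a real embodiment would execute the non-deterministic machine and
   report the final state reached, or -1 if no final state was reached. */
static int run_nondeterministic_machine(void) {
    return rand() % NUM_FINAL_STATES;
}

int main(void) {
    double epsilon = 0.01;
    long score[NUM_FINAL_STATES] = {0};
    double prev[NUM_FINAL_STATES] = {0};
    long trials = 0;
    double change = 1.0;
    while (change >= epsilon) {
        for (int i = 0; i < 1000; i++) {          /* a batch of executions */
            int final_state = run_nondeterministic_machine();
            if (final_state >= 0)
                score[final_state]++;
            trials++;
        }
        change = 0.0;                              /* largest frequency change */
        for (int s = 0; s < NUM_FINAL_STATES; s++) {
            double freq = (double)score[s] / (double)trials;
            change = fmax(change, fabs(freq - prev[s]));
            prev[s] = freq;
        }
    }
    for (int s = 0; s < NUM_FINAL_STATES; s++)
        printf("final state %d score: %ld\n", s, score[s]);
    return 0;
}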
Machine Instructions 5.
Machine instructions 5 is useful for executing a computational procedure so that each instance of the procedure executes a different sequence of instructions and in a different order. This unpredictability of the execution of instructions makes it difficult for an adversary to comprehend the computation. Machine instructions 5 can be executed with a finite sequence of standard, meta and random instructions as shown in
In the following analysis, the likelihood of procedure (machine instructions) 4 finding the maximum is estimated.
Machine Computation Property 4.1.
For each x, let vx=ν(Ix). Consider an acceptable computational path such that machine configuration N is in final state qƒ, and let p be the product of the values vj of the instructions selected along this path. Then the expected number of times that the inner loop executes this path is 0 when n<N and at least rp when n≥N.
Proof.
When n<N, it is impossible for the inner loop to execute the path, so the expected number of times is 0. When n=N, the independence of each random sample that selects an instruction Ij along an acceptable path implies that the probability of executing this computational path is the product p of those selection values vj.
Herein the term “process” refers to and expresses a broader notion than “algorithm”. The formal notion of “Turing machine” and of “algorithm” was presented in Turing's paper [33] and refers to a finite deterministic machine that executes a finite number of instructions with finite memory. “Algorithm” is a deterministic process in the following sense: if the finite machine is completely known and the input to the deterministic machine is known, then the future behavior of the machine can be determined. There are quantum processes and other embodiments that measure quantum effects 6730 in
Some examples of physically non-deterministic processes are as follows. In some embodiments that utilize non-determinism, a semitransparent mirror may be used where photons that hit the mirror may take two or more paths in space. In one embodiment, if the photon is reflected then it takes on one bit value b that is a 0 or a 1; if the photon is transmitted, then it takes on the other bit value 1−b. In another embodiment, the spin of an electron may be sampled to generate the next non-deterministic bit.
In still another embodiment, a protein, composed of amino acids, spanning a cell membrane or artificial membrane, that has two or more conformations can be used to detect non-determinism: the protein conformation sampled may be used to generate a non-deterministic value in {0, . . . , n−1} where the protein has n distinct conformations. In an alternative embodiment, one or more rhodopsin proteins could be used to detect the arrival times of photons and the differences of arrival times could generate non-deterministic bits. In some embodiments, a Geiger counter may be used to sample non-determinism. Lastly, any one of the procedures in this specification may measure random events 6720 such as a quantum event (non-deterministic process). In some embodiments, these quantum events can be emitted by the light emitting diode (LED) device, shown in
In an embodiment, a transducer measures the quantum effects from the emission and detection of photons, wherein the randomness is created by the non-deterministic process of photon emission and photon detection. In some embodiments, light emitting diode shown in
What are You Trying to Do? Why is this Compelling?
Based upon the principles of Turing incomputability and connectedness and novel properties of the Active Element Machine, a malware-resistant computing machine is constructed. This novel computing machine is a non-Turing, non-register machine (non von-Neumann), called an active element machine (AEM). AEM programs are designed so that the purpose of the AEM computations is difficult for an adversary to apprehend and difficult to hijack with malware. As a method of protecting intellectual property, these methods can also be used to help thwart reverse engineering of proprietary algorithms, hardware designs and other intellectual property.
Some prior art has used the evolution of programs executing on a register machine (von Neumann) architecture. [Fred Cohen, “Operating Systems Protection Through Program Evolution”, IFIP-TC11 ‘Computers and Security’ (1993) V12#6 (October 1993) pp. 565-584].
The von Neumann architecture is a computing model for a stored-program digital computer that uses a CPU and a separate structure (memory) to store both instructions and data. Generally, a single instruction is executed at a time in sequential order and there is no notion of time in von-Neumann machine instructions: This creates attack points for malware to exploit. Some prior art has used obfuscated code that executes on a von-Neumann architecture. See <http://www.ioccc.org/main.html> on the International Obfuscated C code contest.
Some prior art relies on operating systems that execute on a register machine architecture. The register machine model creates a security vulnerability because its computing steps are disconnected. This topological property (disconnected) creates a fundamental mathematical weakness in the register machine so that register machine programs may be hijacked by malware. Next, this weakness is explained from the perspective of a digital computer program (computer science).
In DARPA's CRASH program <http://tinyurl.com/4khv28q>, they compared the number of lines of source code in security software written over twenty years versus malware written over the same period. The number of lines of code in security software grew from about 10,000 to 10 million lines; the number of lines of code in malware was almost constant at about 125 lines. It is our thesis that this insightful observation is a symptom of fundamental security weakness(es) in digital computer programs (prior art of register machines): it still takes about the same number of lines of malware code to hijack a digital computer's program regardless of the program's size.
The sequential execution of single instructions in the register and von-Neumann machine makes the digital computer susceptible to hijacking and sabotage. As an example, by inserting just one jmp WVCTF instruction into the program or changing the address of one legitimate jmp instruction to WVCTF, the purpose of the program can be hijacked.
From a deterministic machine (DM) perspective, only one output state r of one DM command η(q, a)=(r, b, x) needs to be changed to a state m, combined with additional hijacking DM commands adjoined to the original DM program. After visiting state m, these hijacking commands are executed, which enables the purpose of the original DM program to be hijacked.
Furthermore, once the digital computer program has been hijacked, if there is a friendly routine to check if the program is behaving properly, this safeguard routine will never get executed. As a consequence, the sequential execution of single instructions cripples the register machine program from defending and repairing itself. As an example of this fundamental security weakness of a digital computer, while some malware may have difficulty decrypting the computations of a homomorphic encryption operation, the malware can still hijack a register machine program computing homomorphic encryption operations and disable the program.
What is Novel about the Secure Active Element Machine?
A. A novel non-Turing computing machine—called the active element machine—is presented that has new capabilities. Turing machine programs, digital computer programs, register machine programs and standard neural networks have a finite prime directed edge complexity. (See definition 4.23.) A digital computer program or register machine program can be executed by a Turing or deterministic machine. (See [7], [20] and [24]).
An active element machine (AEM) that has unbounded prime directed edge complexity can be designed or programmed. This is an important advantage because the rules describing an AEM program are not constant as a function of time. Furthermore, these rules change unpredictably because the AEM program interpretation can be based on randomness and in some embodiments uses quantum randomness. In some embodiments, quantum randomness uses quantum optics or quantum phenomena from a semiconductor. This rule-changing property of AEM programs, driven by randomness, makes it difficult for malware to apprehend the purpose of an AEM program.
B. Meta commands and the use of time enable the AEM to change its program as it executes, which makes the machine inherently self-modifying. In the AEM, self-modification of the connection topology and other parameters can occur during a normal run of the machine when solving computing problems. Traditional multi-element machines change their architecture only during training phases, e.g. when training neural networks or when evolving structures in genetic programming. The fact that self-modification happens during runtime is an important aspect for cybersecurity of the AEM. Constantly changing systems can be designed that are difficult to reverse engineer or to disable in an attack. When the AEM has enough redundancy and random behavior when self-modifying, multiple instances of an AEM—even if built for the same type of computing problems—all look different from the inside. As a result, machine learning capabilities are built right into the machine architecture. The self-modifying behavior also enables AEM programs to be designed that can repair themselves if they are sabotaged.
C. The inherent AEM parallelism and explicit use of time can be used to conceal the computation and greatly increase computing speed compared to the register machine. There are no sequential instructions in an AEM program. Multiple AEM commands can execute at the same time. As a result, AEM programs can be designed so that additional malware AEM commands added to the AEM program would not affect the intended behavior of the AEM program. This is part of the topological connectedness.
D. An infinite number of spatio-temporal firing interpretations can be used to represent the same underlying computation. As a result, at two different instances a Boolean function can be computed differently by an active element machine. This substantially increases the AEM's resistance to reverse engineering and apprehension of the purpose of an AEM program. This enables a computer program instruction to be executed differently at different instances. In some embodiments, these different instances are at different times. In some embodiments, these different instances of computing the program instruction are executed by different collections of active elements and connections in the machine. Some embodiments use random active element machine firing interpretations to compute a Boolean function differently at two different instances.
E. Incomputability is used instead of complexity. Incomputability means that a general Turing machine or deterministic register machine algorithm cannot unlock or solve an incomputable problem. This means that a digital computer program cannot solve an incomputable problem. This creates a superior level of computational security.
F. Randomness in the AEM computing model. Because the AEM Interpretation approach relies on quantum randomness to dynamically generate random firing patterns, the AEM implementing this technique is no longer subject to current computability theory that assumes the deterministic machine or register machine as the computing model. This means that prior art methods and their lack of solutions for malware that depend on Turing's halting problem and undecidability no longer apply to the AEM in this context. This is another aspect of the AEM's non-Turing behavior (i.e. beyond a digital computer's capabilities) that provides useful novel cybersecurity capabilities.
In some embodiments, the quantum randomness utilized with the AEM helps create a more powerful computational procedure in the following way. An active element machine (AEM) that uses quantum randomness can deterministically execute a universal Turing machine (i.e. a digital computer program that can execute any possible digital computer program) such that the firing patterns of the AEM are Turing incomputable. This means that this security method works for any digital computer program, for any digital computer hardware/software implementation, and for digital computer programs written in C, C++, JAVA, Fortran, Assembly Language, Ruby, Forth, Haskell, RISC machine instructions (digital computer machine instructions), JVM (Java virtual machine), Python and other digital computer languages.
Register machine instructions, Turing machine or digital computer instructions can be executed with active element machine instructions where it is Turing incomputable to understand what the active element machine computation is doing. In these embodiments, the active element machine computing behavior is non-Turing. This enhances the capability of a computational procedure: it secures the computational process (new secure computers) and helps protect a computation from malware.
Why is Now a Good Time?
a. It was recently discovered that an Active Element machine can exhibit non-Turing dynamical behavior, and the notion of prime directed edge complexity was developed. Every Turing machine (digital computer program) has a finite prime directed edge complexity. (See 4.20 and 4.23.) An active element machine that has unbounded prime directed edge complexity can be designed using physical randomness. For example, the physical or quantum randomness can be realized with quantum optics, quantum effects in a semiconductor or another quantum phenomenon.
b. The Meta command was discovered which enables the AEM to change its program as execution proceeds. This enables the machine to compute the same computational operation in an infinite number of ways and makes it conducive to machine learning and self-repair. The AEM can compute with a language that randomly evolves while the AEM program is executing.
c. It was recently realized that Active Element machine programs can be designed that are connected (in terms of topology), making them resistant to tampering and hijacking.
d. When a Turing machine or register machine (digital computer) executes an unbounded (non-halting) computation, the long term behavior of the program has recurrent points. This demonstrates the machine's predictable computing behavior which creates weaknesses and attack points for malware to exploit. This recurrent behavior in Turing machine and register machine is described in the section titled IMMORTAL ORBIT and RECURRENT POINTS.
e. Randomness can be generated from physical processes using quantum phenomena, i.e. quantum optics, quantum tunneling in a semiconductor or other quantum phenomena. Using quantum randomness as a part of the active element machine enables non-Turing computing behavior. This non-Turing computing behavior generates random AEM firing interpretations that are difficult for malware to comprehend.
What is Novel about the New Applications that can be Built?
In some embodiments, an AEM can execute on current computer hardware and in some embodiments the hardware is augmented. These novel methods using an AEM are resistant to hackers and malware apprehending the purpose of an AEM program's computations and to sabotage of the AEM program's purpose; sabotaging a computation's purpose is analogous to a denial of service or distributed denial of service attack. The machine has computing performance that is orders of magnitude faster when implemented with hardware that is specifically designed for AEM computation. The AEM is useful in applications where reliability, security and performance are of high importance: protecting and reliably executing the Domain Name Servers, securing and running critical infrastructure such as the electrical grid, oil refineries, pipelines, irrigation systems, financial exchanges, financial institutions and the cybersecurity systems that coordinate activities inside institutions such as the government.
a. AEM firing patterns are randomly generated such that it is Turing incomputable to determine their computational purpose.
b. AEM representations can be created that are also topologically connected.
c. AEM parallelism is used to solve computationally difficult tasks as shown in the section titled An AEM Program Computes a Ramsey Number.
d. Turing machine computation (digital computer computation) is topologically disconnected as shown by the affine map correspondence in 2.25.
Synthesis of Multiple Methods
In some embodiments, multiple methods are used and the solution is a synthesis of some of the following methods, A-F.
A. An AEM program—with input active elements fired according to b1 b2 . . . bm—accepts [b1 b2 . . . bm] if active elements E1, E2 . . . , En exhibit a set or sequence of firing patterns. In some embodiments, this sequence of firing patterns has Turing incomputable interpretations using randomness.
B. AEM programs are created with an unbounded prime edge complexity. Turing and register machine programs have a finite prime directed edge complexity as shown in the section titled Prime Edge Complexity, Periodic Points & Repeating State Cycles.
C. AEM programs are created with no recurrent points when computation is unbounded with respect to time. This is useful for cybersecurity as it helps eliminate weaknesses for malware to exploit. When a Turing machine or register machine (digital computer) executes an unbounded (non-halting) computation, the long term behavior of the program has recurrent points. The recurrent behavior in a digital computer is described in the section titled Immortal Orbit and Recurrent Points.
D. Multiple AEM firing patterns are computed concurrently and then one can be selected according to an interpretation executed by a separate AEM machine. The AEM interpretation is kept hidden and changes over time. In some embodiments, evolutionary methods using randomness may help build AEMs that utilize incomputability and topological connectedness in their computations.
E. In some embodiments, AEM programs represent the Boolean operations in a digital computer using multiple spatio-temporal firing patterns, which is further described in the detailed description. In some embodiments, level set methods on random AEM firing interpretations may be used that do not use Boolean functions. This enables a digital computer program instruction to be executed differently at different instances. In some embodiments, these different instances are at different times. In some embodiments, these different instances of computing the program instruction are executed by different collections of active elements and connections in the active element machine.
F. In some embodiments, the parallel computing speed increase of an AEM is substantial. As described in the section titled An AEM Program Computes a Ramsey Number, an AEM program is shown that computes a Ramsey number using the parallelism of the AEM. The computation of Ramsey numbers is an NP-hard problem [12].
Although various embodiments of the invention may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments of the invention do not necessarily address any of these deficiencies. In other words, different embodiments of the invention may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
An active element machine is composed of computational primitives called active elements. There are three kinds of active elements: Input, Computational and Output active elements. Input active elements receive information from the environment or another active element machine. This information received from the environment may be produced by a physical process, such as input from a user, for example from a keyboard, mouse (or other pointing device), microphone, or touchpad.
In some embodiments, information from the environment may come from the physical process of photons originating from sunlight or other kinds of light traveling through space. In some embodiments, information from the environment may come from the physical process of sound. The sound waves may be received by a sensor or transducer that causes one or more input elements to fire. In some embodiments, the acoustic transducer may be a part of the input elements and each input element may be more sensitive to a range of sound frequencies. In some embodiments, the sensor(s) or transducer(s) may be a part of one or more of the input elements and each input element may be more sensitive to a range of light frequencies analogous to the cones in the retina.
In some embodiments, information from the environment may come from the physical process of molecules present in the air or water. In some embodiments, sensor(s) or transducer(s) may be sensitive to particular molecules diffusing in the air or water, which is analogous to the molecular receptors in a person's nose. For example, one or more input elements may fire if a particular concentration of cinnamon molecules is detected by olfactory sensor(s).
In some embodiments, the information from the environment may originate from the physical process of pressure. In some embodiments, pressure information is transmitted to one or more of the input elements. In some embodiments, the sensor(s) that are a part of the input elements or connected to the input elements may be sensitive to pressure, which is analogous to a person's skin. In some embodiments, a sensor sensitive to heat may be a part of the input elements or may be connected to the input elements. This is analogous to a person's skin detecting temperature.
Computational active elements receive messages from the firing activity of the input active elements and other computational active elements, and transmit new messages to computational and output active elements. The output active elements receive messages from the firing activity of the input and computational active elements. Every active element is active in the sense that each one can receive and transmit messages simultaneously.
Each active element receives messages, formally called pulses, from other active elements and itself and transmits messages to other active elements and itself. If the messages received by active element Ei at the same time sum to a value greater than the threshold and Ei's refractory period has expired, then active element Ei fires. When an active element Ei fires, it sends messages to other active elements.
Let Z denote the integers. Define the extended integers as K={m+kdT: m, k∈Z and dT is a fixed infinitesimal}. For more on infinitesimals, see [26] and [14]. The extended integers can also be expressed using the correspondence m+ndT↔(m, n) where (m, n) lies in Z×Z. Then use the dictionary order (m, n)<(k, l) if and only if (m<k) OR (m=k AND n<l). Similarly, m+ndT<k+ldT if and only if (m<k) OR (m=k AND n<l).
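As an illustrative C sketch (not part of the machine architecture itself), an extended integer m + n dT can be represented by the pair (m, n) and compared with the dictionary order just described:

#include <stdio.h>

typedef struct { long m; long n; } ExtendedInt;   /* represents m + n*dT */

/* Dictionary order: (m, n) < (k, l) iff m < k, or m = k and n < l. */
static int ext_less(ExtendedInt a, ExtendedInt b) {
    return (a.m < b.m) || (a.m == b.m && a.n < b.n);
}

int main(void) {
    ExtendedInt t1 = {2, 0};    /* 2          */
    ExtendedInt t2 = {2, 1};    /* 2 + dT     */
    ExtendedInt t3 = {3, -5};   /* 3 - 5*dT   */
    printf("%d %d\n", ext_less(t1, t2), ext_less(t2, t3));   /* prints 1 1 */
    return 0;
}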
Machine Architecture
Γ, Ω, and Δ are index sets that index the input, computational, and output active elements, respectively. Depending on the machine architecture, the intersections Γ∩Ω and Ω∩Δ can be empty or non-empty. A machine architecture, denoted as M(J, E, D), consists of a collection of input active elements, denoted as J={Ei:i∈Γ}; a collection of computational active elements E={Ei:i∈Ω}; and a collection of output active elements D={Ei:i∈Δ}.
Each computational and output active element, Ei, has the following components and properties.
Ψi(t)=sup{s:s<t and gi(s)=1}, where gi(s) is the output function of active element Ei and is defined below. The sup is the least upper bound.
Input active elements that are not computational active elements have the same characteristics as computational active elements, except they have no inputs φki coming from active elements in this machine. In other words, they don't receive pulses from active elements in this machine. Input active elements are assumed to be externally firable. An external source such as the environment or an output active element from another distinct machine M(J, E, D) can cause an input active element to fire. The input active element can fire at any time as long as the current time minus the time the input active element last fired is greater than or equal to the input active element's refractory period.
An active element Ei can be an input active element and a computational active element. Similarly, an active element can be an output active element and a computational active element. Alternatively, when an output active element Ei is not a computational active element, where i∈Δ−Ω, then Ei does not send pulses to active elements in this machine.
Some notions of the machine architecture are summarized. If gi(s)=1, this means active element Ei fired at time s. The refractory period ri is the amount of time that must elapse after active element Ei just fired before Ei can fire again. The transmission time τki is the amount of time it takes for active element Ei to find out that active element Ek has fired. The pulse amplitude Aki represents the strength of the pulse that active element Ek transmits to active element Ei after active element Ek has fired. After this pulse reaches Ei, the pulse width ωki represents how long the pulse lasts as input to active element Ei. At time s, the connection from Ek to Ei represents the triplet (Aki(s), ωki(s), τki(s)). If Aki=0, then there is no connection from active element Ek to active element Ei.
In an embodiment, each computational element and output element has a refractory period ri, where ri>0, which is a period of time that must elapse after last sending a message before it may send another message. In other words, the refractory period, ri, is the amount of time that must elapse after active element Ei just fired and before active element Ei can fire again. In an alternative embodiment, refractory period ri could be zero, and the active element could send a message simultaneously with receiving a message and/or could handle multiple messages simultaneously.
In an embodiment, each computational element and output element may be associated with a collection of message amplitudes, {Aki}k∈Γ∪Λ, where the first of the two indices k and i denotes the active element from which the message associated with amplitude Aki is sent, and the second index denotes the active element receiving the message. The amplitude, Aki, represents the strength of the message that active element Ek transmits to active element Ei after active element Ek has fired. There are many different measures of amplitude that may be used for the amplitude of a message. For example, the amplitude of a message may be represented by the maximum value of the message or the root mean square height of the message. The same message may be sent to multiple active elements that are either computational elements or output elements, as indicated by the subscript k∈Γ∪Λ. However, each message may have a different amplitude Aki. Similarly, each message may be associated with its own message width, {ωki}k∈Γ∪Λ, sent from active element Ek to Ei, where ωki>0 for all k∈Γ∪Λ. After a message reaches active element Ei, the message width ωki represents how long the message lasts as input to active element Ei.
In an embodiment, any given active element may be capable of sending and receiving a message, in response to receiving one or more messages, which when summed together, have an amplitude that is greater than a threshold associated with the active element. For example, if the messages are pulses, each computational and output active element, Ei, may have a threshold, θi, such that when a sum of the incoming pulses is greater than the threshold the active element fires (e.g., sends an output message). In an embodiment, when a sum of the incoming messages is lower than the threshold the active element does not fire. In another embodiment, it is possible to set the active element such that the active element fires when the sum of incoming messages is lower than the threshold; and when the sum of incoming messages is higher than the threshold, the active element does not fire.
In still another embodiment, there are two numbers α and θ where α≤θ and such that if the sum of the incoming messages lies in [α, θ], then the active element fires, but the active element does not fire if the sum lies outside of [α, θ]. In a variation of this embodiment, the active element fires if the sum of the incoming messages does not lie in [α, θ] and does not fire if the sum lies in [α, θ].
In another embodiment, the incoming pulses may be combined in other ways besides a sum. For example, if the product of the incoming pulses is greater than the threshold the active element may fire. Another alternative is for the active element to fire if the maximum of the incoming pulses is greater than the threshold. In still another alternative, the active element fires if the minimum of the incoming pulses is less than the threshold. In even another alternative if the convolution of the incoming pulses over some finite window of time is greater than the threshold, then the active element may fire.
In an embodiment, each computational and output element may be associated with a collection of transmission times, {τki}k∈Γ∪Λ, where τki>0 for all k∈Γ∪Λ, which are the times that it takes a message to be sent from active element Ek to active element Ei. The transmission time, τki, is the amount of time it takes for active element Ei to find out that active element Ek has fired. The transmission times, τki, may be chosen in the process of establishing the architecture.
In an embodiment, each active element is associated with a function of time, ψi(t), representing the time t at which active element Ei last fired. Mathematically, the function of time can be defined as ψi(t)=supremum {s∈R : s<t AND gi(s)=1}. The function ψi(t) always has the value of the last time that the active element fired. In general, throughout this specification the variable t is used to represent the current time, while in contrast s is used as a variable of time that is not necessarily the current time.
In an embodiment, each active element is associated with a function of time Ξki(t), which is the set of recent firing times of active element Ek that are within active element Ei's integrating window. In other words, the set of firing times Ξki(t)={s∈R: active element k fired at time s and 0≤t−s−τki<ωki}. The integrating window is a duration of time during which the active element accepts messages. The integrating window may also be referred to as the window of computation. Other lengths of time could be chosen for the integrating window. In contrast to ψi(t), Ξki(t) is not a function, but a set of values. Also, whereas ψi(t) has a value as long as active element Ei fired at least once, Ξki(t) does not have any values (is an empty set) if the last time that active element Ek fired is outside of the integrating window. In other words, if there are no firing times, s, that satisfy the inequality 0≤t−s−τki<ωki, then Ξki(t) is the empty set. Let |Ξki(t)| denote the number of elements in the set Ξki(t). If Ξki(t) is the empty set, then |Ξki(t)|=0. Similarly, if Ξki(t) has only one element in it then |Ξki(t)|=1.
In an embodiment, each computational element and output element may have associated with it a collection of input functions, {Øki(t)}k∈Γ∪Λ. Each input function may be a function of time, and may represent messages coming from computational elements and input elements. The value of input function Øki(t) is given by Øki(t)=|Ξki(t)|Aki, because each time a message from active element Ek reaches active element Ei, the amplitude of the message is added to the last message. The number of messages inside the integrating window is the same as the value of |Ξki(t)|. Since for a static machine the amplitude of the message sent from active element k to i is always the same value, Aki, therefore, the value Øki(t) equals |Ξki(t)|Aki.
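A short C sketch of this input function follows; the array of firing times, its length and the parameter names are assumptions for illustration only.

#include <stdio.h>
#include <stddef.h>

/* Input function phi_ki(t) = |Xi_ki(t)| * A_ki: count the firing times s of
   element E_k that satisfy 0 <= t - s - tau_ki < omega_ki, then multiply the
   count by the amplitude A_ki. */
static double input_function(const double *firing_times, size_t num_times,
                             double t, double tau_ki, double omega_ki, double A_ki) {
    size_t count = 0;
    for (size_t i = 0; i < num_times; i++) {
        double d = t - firing_times[i] - tau_ki;
        if (d >= 0.0 && d < omega_ki)
            count++;                       /* firing time inside the window */
    }
    return (double)count * A_ki;
}

int main(void) {
    double times[] = {1.0, 2.0, 6.0};
    /* at t = 4 with tau = 1 and omega = 3, the firings at 1.0 and 2.0 are in the window */
    printf("%.1f\n", input_function(times, 3, 4.0, 1.0, 3.0, 2.0));   /* prints 4.0 */
    return 0;
}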
Input elements that are not computational elements have the same characteristics as computational elements, except they have no input functions, Øki(t), coming from active elements in this machine. In other words, input elements do not receive messages from active elements in the machine with which the input element is associated. In an embodiment, input elements are assumed to be externally firable. An externally firable element is an element that an external element or machine can cause to fire. In an embodiment, an external source such as the environment or an output element from another distinct machine, M′(J′, E′, D′) can cause an input element to fire. An input element can fire at any time as long as this time minus the time the input element last fired is greater than or equal to the input element's refractory period.
An output function, gi(t), may represent whether the active element fires at time t. The function gi(t) is given by gi(t)=1 if the sum of the input functions Øki(t) over k∈Γ∪Λ is greater than the threshold θi and t≥ψi(t)+ri; otherwise gi(t)=0.
In other words, if the sum of the input functions Øki(t) is greater than the threshold, θi, and time t is greater than or equal to the refractory period, ri, plus the time, ψi(t), that the active element last fired, then the active element Ei fires, and gi(t)=1. If gi(t0)=1, then active element Ei fired at time t0.
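A minimal C sketch of this firing rule is given below; it is an ordinary sequential illustration with assumed parameter values, not the AEM itself.

#include <stdio.h>

/* g_i(t) = 1 when the summed input exceeds the threshold and the refractory
   period has expired since the element last fired; otherwise g_i(t) = 0. */
static int fires(double sum_of_inputs, double threshold,
                 double refractory, double last_fired, double t) {
    return (sum_of_inputs > threshold) && (t >= last_fired + refractory);
}

int main(void) {
    /* Hypothetical values: two pulses of amplitude 2 arriving together. */
    double sum = 2.0 + 2.0;
    printf("%d\n", fires(sum, 3.0, 2.0, 0.0, 3.0));   /* prints 1: the element fires */
    return 0;
}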
The fact that in an embodiment, output elements do not send messages to active elements in this machine is captured formally by the fact that the index k for the transmission times, message widths, message amplitudes, and input functions lies in Γ∪Λ and not in Δ in that embodiment.
The expression “connection” from k to i represents the triplet (Aki, ωki, τki). If Aki=0, then there is no connection from active element Ek to active element Ei. If Aki≠0, then there is a non-zero connection from active element Ek to active element Ei. In any given embodiment the active elements may have all of the above properties, only one of the above properties, or any combination of the above properties. In an embodiment, different active elements may have different combinations of the above properties. Alternatively, all of the active elements may have the same combination of the above properties.
Active Element Machine Programming Language
This section shows how to program an active element machine and how to change the machine architecture as program execution proceeds. It is helpful to define a programming language, influenced by S-expressions. There are five types of commands: Element, Connection, Fire, Program and Meta.
Syntax 1. AEM Program
In Backus-Naur form, an AEM program is defined as follows.
These rules represent the extended integers, addition and subtraction.
Element Command.
An Element command specifies the time when an active element's values are updated or created. This command has the following Backus-Naur syntax.
The keyword Time indicates the time value s at which the element is created or updated. In some embodiments the time value s is an extended integer. If the name symbol value is E, the keyword Name tags the name E of the active element. The keyword Threshold tags the threshold θE(s) assigned to E. Refractory indicates the refractory value rE(s). The keyword Last tags the last time fired value Ψ(s). Sometimes the time value, name value, threshold value, refractory value, or last time fired value are referred to as parameter values.
Below is an example of an element command.
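Based on the syntax above and the description that follows, such a command may take a form such as (Element (Time 2) (Name H) (Threshold -3) (Refractory 2) (Last 0)).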
At time 2, if active element H does not exist, then it is created. Active element H has its threshold set to −3, its refractory period set to 2, and its last time fired set to 0. After time 2, active element H exists indefinitely with threshold=−3, refractory=2 until a new Element command whose name value H is executed at a later time; in this case, the Threshold, Refractory and Last values specified in the new command are updated.
Connection Command.
A Connection command creates or updates a connection from one active element to another active element. This command has the following Backus-Naur syntax.
The keyword Time indicates the time value s at which the connection is created or updated. In some embodiments the time value s is an extended integer. The keyword From indicates the name F of the active element that sends a pulse with these updated values. The keyword To tags the name T of the active element that receives a pulse with these updated values. The keyword Amp indicates the pulse amplitude value AFT(s) that is assigned to this connection. The keyword Width indicates the pulse width value ωFT(s). In some embodiments the pulse width value ωFT(s) is an extended integer. The keyword Delay tags the transmission time τFT(s). In some embodiments the transmission time τFT(s) is an extended integer. Sometimes the time value, from name, to name, pulse amplitude value, pulse width value, or transmission time value are referred to as parameter values.
When the AEM clock reaches time s, F and T are name values that must be the name of an element that already has been created or updated before or at time s. Not all of the connection parameters need to be specified in a connection command. If the connection does not exist beforehand and the Width and Delay values are not specified appropriately, then the amplitude is set to zero and this zero connection has no effect on the AEM computation. Observe that the connection exists indefinitely with the same parameter values until a new connection is executed at a later time between From element F and To element T.
The following is an example of a connection command.
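Following the syntax above and the description that follows, such a command may take a form such as (Connection (Time 2) (From C) (To L) (Amp -7) (Width 1) (Delay 3)).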
At time 2, the connection from active element C to active element L has its amplitude set to −7, its pulse width set to 1, and its transmission time set to 3.
Fire Command.
The Fire command has the following Backus-Naur syntax.
The Fire command fires the active element indicated by the Name tag at the time indicated by the Time tag. Sometimes the time value and name value are referred to as parameter values of the fire command. In some embodiments, the fire command is used to fire input active elements in order to communicate program input to the active element machine. An example is (Fire (Time 3) (Name C)), which fires active element C at t=3.
Program Command.
The Program command is convenient when a sequence of commands are used repeatedly. This command combines a sequence of commands into a single command. It has the following definition syntax.
The Program command has the following execution syntax.
The FireN program is an example of definition syntax.
The execution of the command (FireN (Args 8 E1)) causes element E1 to fire 8 times at times 1, 2, 3, 4, 5, 6, 7, and 8 and then E1 stops firing at time=9.
Keywords clock and dT
The keyword clock evaluates to an integer, which is the current active element machine time. clock is an instance of <ename>. If the current AEM time is 5, then the command
Once command (Element (Time clock) (Name clock) (Threshold 1) (Refractory 1) (Last −1)) is created, then at each time step this command is executed with the current time of the AEM. If this command is in the original AEM program before the clock starts at 0, then the following sequence of elements named 0, 1, 2, . . . will be created.
The keyword dT represents a positive infinitesimal amount of time. If m and n are integers and 0≤m<n, then mdT<ndT. Furthermore, dT>0 and dT is less than every positive rational number. Similarly, −dT<0 and −dT is greater than every negative rational number. The purpose of dT is to prevent an inconsistency in the description of the machine architecture. For example, the use of dT helps remove the inconsistency of a To element about to receive a pulse from a From element at the same time that the connection is removed.
Meta command.
The Meta command causes a command to execute when an element fires within a window of time. This command has the following execution syntax.
To understand the behavior of the Meta command, consider the execution of
where E is the name of the active element. The keyword Window tags an interval i.e. a window of time. l is an integer, which locates one of the boundary points of the window of time. Usually, w is a positive integer, so the window of time is [l, l+w]. If w is a negative integer, then the window of time is [l+w, l].
The command C executes each time that E fires during the window of time, which is either [l, l+w] or [l+w, l], depending on the sign of w. If the window of time is omitted, then command C executes at any time that element E fires. In other words, effectively l=−∞ and w=∞. Consider the example where the FireN command was defined before.
Command C is executed 6 times with arguments clock, a, b. The firing of E1 triggers the execution of command C.
In regard to the Meta command, the following assumption is analogous to the Turing machine tape being unbounded as Turing program execution proceeds. During execution of a finite active element program, an active element can fire and due to one or more Meta commands, new elements and connections can be added to the machine. As a consequence, at any time the active element machine only has a finite number of computing elements and connections but the number of elements and connections can be unbounded as a function of time as the active element program executes.
Active Element Machine Computation
In a prior section, the firing patterns of active elements are used to represent the computation of a boolean function. In the next three definitions, firing patterns, machine computation and interpretation are defined.
Firing Pattern
Consider active element Ei's firing times in the interval of time W=[t1, t2]. Let s1 be the earliest firing time of Ei lying in W, and sn the latest firing time lying in W. Then Ei's firing sequence F(Ei, W)=[s1, . . . , sn]={s∈W : gi(s)=1} is called a firing sequence of the active element Ei over the window of time W. From active elements {E1, E2, . . . , En} create the tuple (F(E1, W), F(E2, W), . . . , F(En, W)), which is called a firing pattern of the active elements {E1, E2, . . . , En} within the window of time W.
At the machine level of interpretation, firing patterns (firing representations) express the input to, the computation of, and the output of an active element machine. At a more abstract level, firing patterns can represent an input symbol, an output symbol, a sequence of symbols, a spatio-temporal pattern, a number, or even a family of program instructions for another computing machine.
Sequence of Firing Patterns.
Let W1, . . . , Wn be a sequence of time intervals. Let F(E, W1)=(F(E1, W1), F(E2, W1), . . . , F(En, W1)) be a firing pattern of active elements E={E1, . . . , En} over the interval W1. In general, let F(E, Wk)=(F(E1, Wk), F(E2, Wk), . . . , F(En, Wk)) be a firing pattern over the interval of time Wk. From these, a sequence of firing patterns, [F(E, W1), F(E, W2), . . . , F(E, Wn)], is created.
Machine Computation
Let [F(E, W1), F(E, W2), . . . , F(E, Wn)] be a sequence of firing patterns and let [F(E, S1), F(E, S2), . . . , F(E, Sm)] be some other sequence of firing patterns. Suppose machine architecture M(I, E, O) has input active elements I fire with the pattern [F(E, S1), F(E, S2), . . . , F(E, Sm)] and consequently M's output active elements O fire according to [F(E, W1), F(E, W2), . . . , F(E, Wn)]. In this case, the machine M computes [F(E, W1), F(E, W2), . . . , F(E, Wn)] from [F(E, S1), F(E, S2), . . . , F(E, Sm)].
An active element machine is an interpretation between two sequences of firing patterns if the machine computes the output sequence of firing patterns from the input sequence of firing patterns.
Concurrent Generation of AEM Commands
This section shows embodiments pertaining to two or more commands about to set parameter values of the same connection or same element at the same time. Consider two or more connection commands, connecting the same active elements, that are generated and scheduled to execute at the same time.
Then the simultaneous execution of these two commands can be handled by defining the outcome to be equivalent to the execution of only one connection command where the respective amplitudes, widths and transmission times are averaged.
In the general case, for n connection commands
For some embodiments of the AEM, averaging the respective amplitudes, widths and transmission times is useful.
For embodiments that use averaging, they can be implemented in active element machine software, AEM hardware or a combination of AEM hardware and software.
For some embodiments, when there is noisy environmental data fed to the input elements and amplitudes, widths and transmission times are evolved and mutated, extremely large (in absolute value) amplitudes, widths and transmission times can arise that skew an average function. In these embodiments, computing the median of the amplitudes, widths and delays provides a simple method to address skewed amplitude, width and transmission time values.
Another alternative embodiment adds the parameter values.
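As an illustrative C sketch (with hypothetical parameter values), the three resolutions described above—averaging, taking the median, or adding—can be applied to the amplitude values of concurrent connection commands; widths and transmission times are handled the same way.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double resolve_average(const double *vals, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) sum += vals[i];
    return sum / (double)n;
}

static double resolve_median(const double *vals, size_t n) {
    double *copy = malloc(n * sizeof(double));
    memcpy(copy, vals, n * sizeof(double));
    qsort(copy, n, sizeof(double), cmp_double);
    double med = (n % 2) ? copy[n / 2] : 0.5 * (copy[n / 2 - 1] + copy[n / 2]);
    free(copy);
    return med;
}

static double resolve_sum(const double *vals, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) sum += vals[i];
    return sum;
}

int main(void) {
    /* One extreme amplitude skews the average but not the median. */
    double amps[] = {-7.0, 3.0, 1000.0};
    printf("average %.2f  median %.2f  sum %.2f\n",
           resolve_average(amps, 3), resolve_median(amps, 3), resolve_sum(amps, 3));
    return 0;
}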
Similarly, consider when two or more element commands—that all specify the same active element E—are generated and scheduled to execute at the same time.
In autonomous embodiments, where evolution of parameter values occurs, the median can also help address skewed values in the element commands.
Another alternative is to add the parameter values.
Rules A, B, and C resolve concurrencies pertaining to the Fire, Meta and Program commands. Rule A. If two or more Fire commands attempt to fire element E at time t, then element E is fired just once at time t.
Rule B. Only one Meta command can be triggered by the firing of an active element. If a new Meta command is created and it happens to be triggered by the same element E as a prior Meta command, then the old Meta command is removed and the new Meta command is triggered by element E.
Rule C. If a Program command is called by a Meta command, then the Program's internal Element, Connection, Fire and Meta commands follow the previous concurrency rules defined. If a Program command exists within a Program command, then these rules are followed recursively on the nested Program command.
An AEM Program Computes A Ramsey Number
This section shows how to compute a Ramsey number with an AEM program. Ramsey theory can be intuitively described as structure which is preserved under finite decomposition. Applications of Ramsey theory include lower bounds for parallel sorting in computer science, game theory and information theory. Progress on determining the basic Ramsey numbers r(k, l) has been slow. For positive integers k and l, r(k, l) denotes the least integer n such that if the edges of the complete graph Kn are 2-colored with colors red and blue, then there always exists a complete subgraph Kk containing all red edges or there exists a complete subgraph Kl containing all blue edges. To put our slow progress into perspective, Paul Erdos, arguably the best combinatorist of the 20th century, asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of r(5, 5) or they will destroy our planet. In this case, Erdos claims that we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose instead that they ask for r(6, 6). For r(6, 6), Erdos believes that we should attempt to destroy the aliens.
Theorem R. The Standard Finite Ramsey Theorem.
For any positive integers m, k, n, there is a least integer N(m, k, n) with the following property: no matter how we color each of the n-element subsets of S={1, 2, . . . , N} with one of k colors, there exists a subset Y of S with at least m elements, such that all n-element subsets of Y have the same color.
When G and H are simple graphs, there is a special case of theorem R. Define the Ramsey number r(G, H) to be the smallest N such that if the complete graph KN is colored red and blue, either the red subgraph contains G or the blue subgraph contains H. (A simple graph is an unweighted, undirected graph containing no graph loops or multiple edges. In a simple graph, the edges of the graph form a set and each edge is a pair of distinct vertices.) In [10], S. A. Burr proves that determining r(G, H) is an NP-hard problem.
An AEM program is shown that solves a special case of Theorem R. Similar embodiments can compute larger Ramsey numbers. Consider the Ramsey number N(3, 2, 2): when each edge of the complete graph K6 is colored red or blue, there is always at least one triangle which contains only blue edges or only red edges. In terms of the standard Ramsey theorem, this is the special case N(3, 2, 2) where n=2 since we color edges (i.e. 2-element subsets); k=2 since we use two colors; and m=3 since the goal is to find a red or blue triangle. To demonstrate how an AEM program can be designed to compute N(3, 2, 2)=6, an AEM program is shown that verifies N(3, 2, 2)>5.
The symbols B and R represent blue and red, respectively. Indices are placed on B and R to denote active elements that correspond to the K5 graph geometry. Let E={{1,2}, {1,3}, {1,4}, {1,5}, {2,3}, {2,4}, {2,5}, {3,4}, {3,5}, {4,5}} denote the edge set of K5.
The triangle set T={{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}}. Each edge is colored red or blue. Thus, the red edges are {{1,2}, {1,5}, {2,3}, {3,4}, {4,5}} and the blue edges are {{1,3}, {1,4}, {2,4}, {2,5}, {3,5}}. Number each group of AEM commands for K5, based on the group's purpose. This is useful because these groups will be used when describing the computation for K6.
1. The elements representing red and blue edges are established as follows.
2. Fire element R_jk if edge {j, k} is red, where j<k.
Fire element B_jk if edge {j, k} is blue where j<k.
3. The following Meta commands cause these elements to keep firing after they have fired once.
4. To determine if a blue triangle exists on vertices {i, j, k}, where {i, j, k} ranges over T, three connections are created for each potential blue triangle.
5. To determine if a red triangle exists on vertex set {i, j, k}, where {i, j, k} ranges over T, three connections are created for each potential red triangle.
6. For each vertex set {i, j, k} in T, the following elements are created.
Because the threshold is 5, the element R_ijk only fires when all three elements R_ij, R_jk, R_ik fired one unit of time ago. Likewise, the element B_ijk only fires when all three elements B_ij, B_jk, B_ik fired one unit of time ago. From this, we observe that as of clock=3 i.e. 4 time steps, this AEM program determines that N(3, 2, 2)>5. This AEM computation uses
Further, this AEM program creates and uses 3|T|+3|T|+|E|=70 connections.
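As an ordinary sequential C check (not the AEM program itself), the following sketch verifies that the particular 2-coloring of K5 given above has no monochromatic triangle, which is the fact N(3, 2, 2)>5 computed by the AEM program.

#include <stdio.h>

int main(void) {
    char color[6][6] = {{0}};
    int red[5][2]  = {{1,2},{1,5},{2,3},{3,4},{4,5}};    /* red edges above  */
    int blue[5][2] = {{1,3},{1,4},{2,4},{2,5},{3,5}};    /* blue edges above */
    for (int e = 0; e < 5; e++) {
        color[red[e][0]][red[e][1]]   = color[red[e][1]][red[e][0]]   = 'R';
        color[blue[e][0]][blue[e][1]] = color[blue[e][1]][blue[e][0]] = 'B';
    }
    int mono = 0;
    for (int i = 1; i <= 5; i++)               /* check every triangle {i,j,k} */
        for (int j = i + 1; j <= 5; j++)
            for (int k = j + 1; k <= 5; k++)
                if (color[i][j] == color[j][k] && color[j][k] == color[i][k])
                    mono = 1;
    printf(mono ? "monochromatic triangle found\n"
                : "no monochromatic triangle: N(3,2,2) > 5\n");
    return 0;
}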
For K6, the edge set E={{1, 2}, {1, 3}, {1, 4}, {1, 5}, {1, 6}, {2, 3}, {2, 4}, {2, 5}, {2, 6}, {3, 4}, {3, 5}, {3, 6}, {4, 5}, {4, 6}, {5, 6}}. The triangle set T={{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 2, 6}, {1, 3, 4}, {1, 3, 5}, {1, 3, 6}, {1, 4, 5}, {1, 4, 6}, {1, 5, 6}, {2, 3, 4}, {2, 3, 5}, {2, 3, 6}, {2, 4, 5}, {2, 4, 6}, {2, 5, 6}, {3, 4, 5}, {3, 4, 6}, {3, 5, 6}, {4, 5, 6}}. For each 2-coloring of E, each edge is colored red or blue. There are 2^|E| 2-colorings of E. For this graph,
To build a similar AEM program, the commands in groups 1 and 2 range over every possible 2-coloring of E. The remaining groups 3, 4, 5 and 6 are the same based on the AEM commands created in groups 1 and 2 for each particular 2-coloring.
This AEM program verifies that every 2-coloring of E contains at least one red triangle or one blue triangle, i.e. N(3, 2, 2)=6. There are no optimizations using graph isomorphisms made here. If an AEM language construct is used for generating all active elements for each 2-coloring of E at time zero, then the resulting AEM program can determine the answer in 5 time steps. One more time step, 2^15 additional connections and one additional element are needed to verify that every one of the 2^15 AEM programs is indicating that it found a red or blue triangle. This AEM program—that determines the answer in 5 time steps—uses 2^|E|(|E|+2|T|)+1 active elements and 2^|E|(3|T|+3|T|+|E|+1) connections, where |E|=15 and |T|=20. Some graph problems are related to the computation of Ramsey numbers.
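For comparison, a sequential C sketch that exhaustively checks all 2^15 2-colorings of the edges of K6 for a monochromatic triangle is shown below; it confirms N(3, 2, 2)=6 but, unlike the AEM program, examines the colorings one after another.

#include <stdio.h>

int main(void) {
    int edge[15][2], e = 0;
    for (int i = 0; i < 6; i++)                 /* enumerate the 15 edges of K6 */
        for (int j = i + 1; j < 6; j++) { edge[e][0] = i; edge[e][1] = j; e++; }
    for (long mask = 0; mask < (1L << 15); mask++) {   /* every 2-coloring */
        int color[6][6];
        for (int k = 0; k < 15; k++) {
            int c = (mask >> k) & 1;
            color[edge[k][0]][edge[k][1]] = color[edge[k][1]][edge[k][0]] = c;
        }
        int mono = 0;
        for (int i = 0; i < 6 && !mono; i++)
            for (int j = i + 1; j < 6 && !mono; j++)
                for (int k = j + 1; k < 6 && !mono; k++)
                    if (color[i][j] == color[j][k] && color[j][k] == color[i][k])
                        mono = 1;
        if (!mono) { printf("counterexample found\n"); return 1; }
    }
    printf("every 2-coloring of K6 has a red or blue triangle: N(3,2,2) = 6\n");
    return 0;
}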
A couple common graph problems are the traveling salesman problem and the traveling purchaser problem. (See <http://en.wikipedia.org/wiki/Taveling_salesman_problem> and <http://en.wikipedia.org/wiki/Taveling_purchaser_problem>.)
The traveling salesman problem can be expressed as a list of cities and their pairwise distances. The solution of the problem consists of finding the shortest possible tour that visits each city exactly once. Methods of solving the traveling salesman problem are useful for FedEx and other shipping companies where fuel costs and other shipping costs are substantial.
The traveling salesman problem has applications in planning, logistics, and the manufacture of microchips. Slightly modified, the traveling salesman problem appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments.
In embodiments similar to the computation of Ramsey numbers, the cities (customers, soldering points or DNA fragments) correspond to active elements and there is one connection between two cities for a possible tour that is being explored. In some embodiments, the thresholds of the two elements are used to account for the distance between the cities while exploring a path for a shortest possible tour. In other embodiments, the different distances between cities can be accounted for by using time.
Multiplying Numbers with an Active Element Machine
This section shows how to multiply numbers with an active element machine. Elements Y0, Y1, Y2, Y3 denote a four bit number and elements Z0, Z1, Z2, Z3 denote a four bit number. The corresponding bit values are y0, y1, y2, y3 and z0, z1, z2, z3. The multiplication of y3 y2 y1 y0*z3 z2 z1 z0 is shown in Table 13. An active element program is constructed based on Table 13.
The following commands set up elements and connections to perform a four bit multiplication of y3 y2 y1 y0*z3 z2 z1 z0 where the result of this multiplication e7 e6 e5 e4 e3 e2 e1 e0 is stored in elements E7, E6, E5, E4, E3, E2, E1 and E0. The computation is encoded by the elements S_jk: element S_jk corresponds to the product yj zk and fires if and only if yj=1 and zk=1. The elements C_jk help determine the value of ei, represented by element Ei, where j+k=i.
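Before the AEM commands, the same structure is sketched below in ordinary C for orientation: s[j][k] plays the role of element S_jk, the column sums with carries yield the bits e7 . . . e0, and the input bits are those of the worked example 1110*0111 described later.

#include <stdio.h>

int main(void) {
    int y[4] = {0, 1, 1, 1};    /* y3 y2 y1 y0 = 1110 */
    int z[4] = {1, 1, 1, 0};    /* z3 z2 z1 z0 = 0111 */
    int s[4][4], col[8] = {0}, e[8];
    for (int j = 0; j < 4; j++)
        for (int k = 0; k < 4; k++) {
            s[j][k] = y[j] & z[k];       /* partial product y_j * z_k (element S_jk) */
            col[j + k] += s[j][k];       /* S_jk contributes to bit position j+k */
        }
    int carry = 0;
    for (int i = 0; i < 8; i++) {        /* resolve the carries column by column */
        int total = col[i] + carry;
        e[i] = total & 1;
        carry = total >> 1;
    }
    for (int i = 7; i >= 0; i--)
        printf("%d", e[i]);              /* prints 01100010, i.e. 14*7 = 98 */
    printf("\n");
    return 0;
}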
First, two useful program commands are defined.
The firing activity of element E0 expresses the value of e0. The elements and connections for the product y0 z0 which determine the value of e0 are determined by the following three program commands.
Table 14 shows the amplitude and threshold used to compute the value of e0. Table 15 shows the firing patterns for elements S10 and S01 representing the value of products y1 z0 and y0 z1. Table 16 shows the amplitudes from elements S10 and S01 to elements C01 and C11 and the thresholds of C01 and C11. Table 17 shows the amplitudes from elements C01 and C11 to element E1 and the threshold of E1. The firing activity of element E1 expresses the value of e1. Below are active element machine commands that express the parameter values of these elements and connections shown in table 14, table 15, table 16 and table 17.
Table 18 shows the firing patterns for elements S20, S11, S02 and C11. Table 19 shows the amplitudes from elements S20, S11 S02, C11 to elements C02, C12, C22 C32 and the thresholds of C02, C12, C22 and C32. Table 20 shows the amplitudes from elements C02, C12, C22, C32 to elements P02, P12, P22 and the thresholds of elements P02, P12, P22. Table 21 shows the amplitude and threshold used to compute the value of e2. The firing activity of element E2 expresses the value of e2. Below are active element machine commands that express the parameter values of the elements and connections indicated in table 18, table 19, table 20 and table 21.
Table 22 shows the firing patterns for elements S30, S21, S12, S03, P12 representing the value of products y3 z0, y2 z1, y1 z2 and y0 z3 and the carry value. Table 23 shows the amplitudes from elements S30, S21, S12 and S03 to elements C03, C13, C23, C33, and C43. Table 24 shows the amplitudes from elements C03, C13, C23, C33, and C43 to elements P03, P13, P23 and the thresholds of elements P03, P13, P23. Table 25 shows the amplitude and threshold used to compute the value of e3. The firing activity of element E3 expresses the value of e3. Below are active element machine commands that express the parameter values shown in table 22, table 23, table 24 and table 25.
Table 26 shows the firing patterns for elements S31, S22, S13, P13, P22. Table 27 shows the amplitudes from elements S31, S22, S13, P13, P22 to elements C04, C14, C24, C34, C44 and the thresholds of C04, C14, C24, C34 and C44. Table 28 shows the amplitudes from elements C04, C14, C24, C34, and C44 to elements P04, P14, P24 and the thresholds of elements P04, P14, P24. Table 29 shows the amplitude and threshold used to compute the value of e4. The firing activity of element E4 expresses the value of e4. Below are active element machine commands that express the parameter values shown in table 26, table 27, table 28 and table 29.
Table 30 shows the firing patterns for elements S32, S23, P14, P23. Table 31 shows the amplitudes from elements S32, S23, P14, P23 to elements C05, C15, C25, C35 and the thresholds of C05, C15, C25, C35. Table 32 shows the amplitudes from elements C05, C15, C25, C35 to elements P05, P15, P25 and the thresholds of elements P05, P15, P25. Table 33 shows the amplitude and threshold used to compute the value of e5. The firing activity of element E5 expresses the value of e5. Below are active element machine commands that express the parameter values shown in table 30, table 31, table 32 and table 33.
Table 34 shows the firing patterns for elements S33, P15, P24. Table 35 shows the amplitudes from elements S33, P15, P24 to elements C06, C16, C26 and the thresholds of C06, C16, C26. Table 36 shows the amplitudes from elements C06, C16, C26 to elements P06, P16 and the thresholds of elements P06, P16. Table 37 shows the amplitude of the connection from element P06 to element E6 and the threshold of E6. The firing activity of E6 expresses the value of e6. Below are active element machine commands that express the parameter values shown in table 34, table 35, table 36 and table 37.
The firing activity of element E7 represents bit e7. When element P16 is firing, this means that there is a carry so E7 should fire. The following commands accomplish this.
Table 38 shows how the active element machine commands were designed to compute 1110*0111. Suppose that the AEM commands from the previous sections are called with s=2. Then y0=0, y1=1, y2=1, y3=1, z0=1, z1=1, z2=1 and z3=0.
Element E0 never fires because E0 only receives a pulse of amplitude 2 from Z0 and has threshold 3. The fact that E0 never fires represents that e0=0.
In regard to the value of e1, element S10 fires at time 3 because Y1 and Z0 fire at time 2 and S10 has a threshold of 3 and receives a pulse of amplitude 2 from Y1 and Z0. The following commands set these values.
Element S01 does not fire at time 3 because it only receives a pulse of amplitude 2 from element Z1 and has threshold 3. The firing of S10 at time 3 causes C01 to fire at time 4 because C01's threshold is 1. The following commands set up these element and connection values.
These commands cause E1 to fire at time 5, and E1 continues to fire indefinitely because the input elements Y1, Y2, Y3, Z0, Z1 and Z2 continue to fire at time steps 3, 4, 5, 6, 7, 8, . . . . The firing of element E1 indicates that e1=1.
In regard to the value of e2, since elements Y1 and Z1 fire at time 2, element S11 fires at time 3. Since elements Y2 and Z0 fire at time 2, element S20 also fires at time 3. From the following commands
Observe that element P02 does not fire because C12 sends a pulse with amplitude −2 and C02 sends a pulse with amplitude 2 and element P02 has threshold 1 as a consequence of command (set_element 2 P_02 1 1 0).
Since P02 does not fire, element E2 does not fire, as its threshold is 1 and the only connection to element E2 is from P02: (set_connection 2 P_02 E2 2 1 1). Since element E2 does not fire, this indicates that e2=0.
In regard to the value of e3, since elements Y3 and Z0 fire at time 2, element S30 fires at time 3. Since elements Y2 and Z1 fire at time 2, element S21 fires at time 3. Since elements Y1 and Z2 fire at time 2, element S12 fires at time 3. S03 does not fire. From the following commands
then elements C03, C13, C23 fire at time 4 and they will continue to fire every time step 5, 6, 7, 8 . . . because the elements Y1, Y2, Y3, Z0, Z1 and Z2 continue to fire at time steps 3, 4, 5, 6, 7, 8, . . . .
As a result of P12 firing at time 5, C33 fires at time 6, so at time 7, only P23 fires. As a result, in the long term (after time step 7), P03 does not fire. Thus, E3 does not fire after time step 7, which indicates that e3=0.
Similar to that of element E3, in the long term element E4 does not fire, which indicates that e4=0. Similarly, in the long term element E5 fires, which indicates that e5=1. Similarly, in the long term element E6 fires, which indicates that e6=1. Similarly, in the long term element E7 does not fire, which indicates that e7=0.
As a consequence, multiplication of 1110*0111 equals 1100010 in binary, which represents that 14*7=98 in base 10. This active element program can execute any multiplication of two four bit binary numbers. Similar to the multiplication just described, tables 39, 40 and 41 show the multiplication steps for 11*9=99; 15*14=210; and 15*15=225.
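Although the active element machine commands in the tables encode this computation with amplitudes and thresholds, the underlying arithmetic is the column-sum scheme of Table 13: the partial products yj zk (the role of the S_jk elements) are summed in each column i=j+k together with the carries (the role of the C and P elements). The following Python sketch is only an illustration of that column-sum scheme; it is not the AEM program itself.

def multiply_4bit(y, z):
    # y and z are lists [y0, y1, y2, y3] of bits, least significant bit first.
    # Column i collects the partial products y_j * z_k with j + k = i
    # (the role of the S_jk elements) plus the carry from column i - 1
    # (the role of the C_jk and P_jk elements).
    e = []
    carry = 0
    for i in range(8):
        column = carry + sum(y[j] * z[i - j] for j in range(4) if 0 <= i - j < 4)
        e.append(column % 2)
        carry = column // 2
    return e  # [e0, e1, ..., e7], least significant bit first

# multiply_4bit([0, 1, 1, 1], [1, 1, 1, 0]) returns [0, 1, 0, 0, 0, 1, 1, 0],
# that is, e7 e6 ... e0 = 01100010, matching 14*7=98.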
In some embodiments, an AEM using randomness executes a universal Turing machine (digital computer program) or a von Neumann machine. In some embodiments, the randomness is generated from a non-deterministic physical process. In some embodiments, the randomness is generated using quantum events such as the emission and detection of photons. In some embodiments, the firing patterns of the active elements computing the execution of these machines are Turing incomputable. In some embodiments, the AEM accomplishes this by executing a universal Turing machine or von Neumann machine instructions with random firing interpretations. In some embodiments, if the state and tape contents of the universal Turing machine—represented by the AEM elements and connections—and the random bits generated from the random—in some embodiments, quantum—source are kept perfectly secret and no information is leaked about the dynamic connections in the AEM, then it is Turing incomputable to construct a translator Turing machine that maps the random firing interpretations back to the sequence of instructions executed by the universal Turing machine or von Neumann machine. As a consequence, in some embodiments, the AEM can deterministically execute any Turing machine (digital computer program) with active element firing patterns that are Turing incomputable. Since Turing incomputable AEM firing behavior can deterministically execute a universal Turing machine or digital computer with a finite active element machine using quantum randomness, this creates a novel computational procedure ([6], [32]). In [20], Lewis and Papadimitriou discuss the prior art notion of a digital computer's computational procedure:
Because the Turing machines can carry out any computation that can be carried out by any similar type of automata, and because these automata seem to capture the essential features of real computing machines, we take the Turing machine to be a precise formal equivalent of the intuitive notion of algorithm: nothing will be considered as an algorithm if it cannot be rendered as a Turing machine.
The principle that Turing machines are formal versions of algorithms and that no computational procedure will be considered as an algorithm unless it can be presented as a Turing machine is known as Church's thesis or the Church-Turing Thesis. It is a thesis, not a theorem, because it is not a mathematical result: It simply asserts that a certain informal concept corresponds to a certain mathematical object. It is theoretically possible, however, that Church's thesis could be overthrown at some future date, if someone were to propose an alternative model of computation that was publicly acceptable as fulfilling the requirement of finite labor at each step and yet was provably capable of carrying out computations that cannot be carried out by any Turing machine. No one considers this likely.
In a cryptographic system, Shannon [28] defines the notion of perfect secrecy. Perfect Secrecy is defined by requiring of a system that after a cryptogram is intercepted by the enemy the a posteriori probabilities of this cryptogram representing various messages be identically the same as the a priori probabilities of the same messages before the interception.
In this context, perfect secrecy means that no information is ever released or leaked about the state and the memory contents of the universal deterministic machine, the random bits generated from a quantum source and the dynamic connections of the active element machine.
In [19], Kocher et al. present differential power analysis. Differential power analysis obtains information about cryptographic computations executed by register machine hardware, by statistically analyzing the electromagnetic radiation leaked by the hardware during its computation. In some embodiments, when a quantum active element computing system is built so that its internal components remain perfectly secret or close to perfectly secret, then it may be extremely challenging for an adversary to carry out types of attacks such as differential power analysis.
In this section, the same boolean function is computed by two or more distinct active element firing patterns, which can be executed at distinct times or by different circuits (two or more different parts) in the active element machine. These methods provide useful embodiments in a number of ways. First, they show how digital computer program computations can be computed differently at distinct instances. In some embodiments, distinct instances are two or more different times. In some embodiments, distinct instances use different elements and connections of the active element machine to differently compute the same Boolean function. Second, the methods shown here demonstrate the use of level sets so that multiple active element machine firing patterns may compute the same boolean function or computer program instruction. Third, these methods demonstrate the utility of using multiple, dynamic firing interpretations to perform the same task—for example, execute a computer program—or represent the same knowledge.
The embodiments shown here enable one or more digital computer program instructions to be computed differently at different instances. In some embodiments, these different instances are different times. In some embodiments, these different instances of computing the program instruction are executed by different collections of active elements and connections in the active element machine. In some embodiments, the computer program may be an active element machine program.
The following procedure uses a non-deterministic physical process to either fire input element I or not fire I at time t=n where n is a natural number {0, 1, 2, 3, . . . }. This random sequence of 0's and 1's can be generated by quantum optics, or quantum events in a semiconductor material or other physical phenomena. In some embodiments, the randomness is generated by a light emitting diode (LED).
Procedure 1. Randomness generates an AEM, representing a real number in the interval [0, 1]. Using a random process to fire or not fire one input element I at each unit of time, a finite active element program can represent a randomly generated real number in the unit interval [0, 1]. In some embodiments, the non-deterministic process is physically contained in the active element machine. In other embodiments, the emission part of the random process is separate from the active element machine.
The Meta command and a random sequence of bits create active elements 0, 1, 2, . . . that store the binary representation b0 b1 b2 . . . of real number x lying in the interval [0, 1]. If input element I fires at time t=n, then bn=1; thus, create active element n so that after t=n, element n fires every unit of time indefinitely. If input element I doesn't fire at time t=n, then bn=0 and active element n is created so that it never fires. The following finite active element machine program exhibits this behavior.
Suppose a sequence of random bits—obtained from the environment or from a non-deterministic process inside the active element machine—begins with 1, 0, 1, . . . . Thus, input element I fires at times 0, 2, . . . . At time 0, the following commands are executed.
The execution of (C (Args 0)) causes three connection commands to execute.
Because of the first connection command
the firing of input element I at time 0 sends a pulse with amplitude 2 to element 0. Thus, element 0 fires at time 1. Then at time 1+dT, a moment after time 1, the connection from input element I to element 0 is removed. At time 0, a connection from element 0 to itself with amplitude 2 is created. As a result, element 0 continues to fire indefinitely, representing that b0=1. At time 1, command
is created. Since element 1 has no connections into it and threshold 1, element 1 never fires.
Thus b1=0. At time 2, input element I fires, so the following commands are executed.
The execution of (C (Args 2)) causes the three connection commands to execute.
Because of the first connection command
the firing of input element I at time 2 sends a pulse with amplitude 2 to element 2. Thus, element 2 fires at time 3. Then at time 3+dT, a moment after time 3, the connection from input element I to element 2 is removed. At time 2, a connection from element 2 to itself with amplitude 2 is created. As a result, element 2 continues to fire indefinitely, representing that b2=1.
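The following Python sketch illustrates the behavior of procedure 1 described above; it is not the active element machine program itself. random.random() stands in for the non-deterministic (in some embodiments quantum) source, and bits[n] plays the role of element n firing indefinitely (bit 1) or never firing (bit 0).

import random

def generate_bits(num_steps):
    # bits[n] = 1 if input element I fires at time t = n, else 0
    return [1 if random.random() < 0.5 else 0 for n in range(num_steps)]

def as_real(bits):
    # interpret b0 b1 b2 ... as a binary expansion of a number in [0, 1]
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

# For the sequence beginning 1, 0, 1, ... described above, elements 0 and 2
# fire indefinitely and element 1 never fires, so b0=1, b1=0 and b2=1.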
During a window of time, firing patterns can be put in 1-to-1 correspondence with the boolean functions ƒ: {0, 1}n→{0, 1}. In the next section, the firing pattern methods explained here are combined with procedure 1 so that a randomly chosen firing pattern can compute the functions used to execute a universal Turing machine. Consider four active elements X0, X1, X2 and X3 during window of time W=[a, b]. The refractory periods of X0, X1, X2 and X3 are chosen so that each Xk either fires or doesn't fire during window W. Thus, there are sixteen distinct firing patterns. Five of these firing patterns are shown in
A one-to-one correspondence is constructed with the sixteen boolean functions of the form ƒ: {0, 1}×{0, 1}→{0, 1}. These boolean functions comprise the binary operators: and ∧, or ∨, xor ⊕, equal ↔, and so on. One of these firing patterns is distinguished from the other fifteen by building the appropriate connections to element P, which in the general case represents the output of a boolean function ƒ: {0, 1}n→{0, 1}. A key notion is that element P fires within the window of time W if and only if P receives a unique firing pattern from elements X0, X1, X2 and X3. (This is analogous to the notion of the grandmother nerve cell that only fires if you just saw your grandmother.) The following definition covers the Boolean interpretation explained here and also handles more complex types of interpretations.
Definition 2.1. Number of Firings During a Window
Let X denote the set of active elements {X0, X1, . . . , Xn−1} that determine the firing pattern during the window of time W. Then |(Xk, W)|=the number of times that element Xk fired during window of time W. Thus, define the number of firings during window W as |(X, W)|=|(X0, W)|+|(X1, W)|+ . . . +|(Xn−1, W)|.
Observe that |(X, W)|=0 for firing pattern 0000 shown in
The element command for P is:
If Xk is not supposed to fire during W, then the following connection is established.
The firing pattern is already known because it is determined based on a random source of bits received by input elements, as discussed in procedure 1. Consequently, −2|(X, W)| is already known. How an active element circuit is designed to create a firing pattern that computes the appropriate boolean function is discussed in the following example.
Consider firing pattern 0010. In other words, X2 fires but the other elements do not fire. The AEM is supposed to compute the boolean function exclusive-or A⊕B=(A∨B)∧(¬A∨¬B). The goal here is to design an AEM circuit such that A⊕B=1 if and only if the firing pattern for X0, X1, X2, X3 is 0010. Following definition 2.1, as a result of the distinct firing pattern during W, if A⊕B=1 then P fires. If A⊕B=0 then P doesn't fire. Below are the commands that connect elements A and B to elements X0, X1, X2, X3.
There are four cases for A⊕B: 1⊕0, 0⊕1, 1⊕1 and 0⊕0. In regard to this, choose the refractory periods so that A and B either fire or don't fire at t=0. Recall that W=[a, b]. In this example, assume a=2 and b=3. Thus, all refractory periods of X0, X1, X2, X3 are 1 and all last time fireds are 1. All pulse widths are the length of the window W+1 which equals 2.
Case 1. Element A fires at time t=0 and element B doesn't fire at t=0. Element X0 receives a pulse from A with amplitude 2 at time t=2. Element X0 doesn't fire because its threshold=3. Element X1 receives a pulse from A with amplitude −2 at time t=2. Element X1 doesn't fire during W because X1 has threshold=−1. Element X2 receives a pulse from A with amplitude 2. Element X2 fires at time t=2 because its threshold is 1. Element X3 receives a pulse from A with amplitude 2 but doesn't fire during window W because X3 has threshold=3.
Case 2. Element A doesn't fire at time t=0 and element B fires at t=0. Element X0 receives a pulse from B with amplitude 2 at time t=2. Element X0 doesn't fire because its threshold=3. Element X1 receives a pulse from B with amplitude −2 at time t=2. Element X1 doesn't fire during W because X1 has threshold=−1. Element X2 receives a pulse from B with amplitude 2. Element X2 fires at time t=2 because its threshold is 1. Element X3 receives a pulse from B with amplitude 2, but doesn't fire during window W because X3 has threshold=3.
Case 3. Element A fires at time t=0 and element B fires at t=0. Element X0 receives two pulses from A and B each with amplitude 2 at time t=2. Element X0 fires because its threshold=3. Element X1 receives two pulses from A and B each with amplitude −2 at time t=2. Element X1 doesn't fire during W because X1 has threshold=−1. Element X2 receives two pulses from A and B each with amplitude 2. Element X2 fires at time t=2 because its threshold is 1. Element X3 receives two pulses from A and B each with amplitude 2. Element X3 fires at time t=2 because X3 has threshold=3.
Case 4. Element A doesn't fire at time t=0 and element B doesn't fire at t=0. Thus, elements X0, X2 and X3 do not fire because they have positive thresholds. Element X1 fires at t=2 because it has threshold=−1.
For the desired firing pattern 0010, the threshold of P=2|(X, W)|−1=1.
Below is the element command for P.
Below are the connection commands for making P fire if and only if firing pattern 0010 occurs during W.
For cases 1 and 2 (i.e., 1⊕0 and 0⊕1) only X2 fires. A moment before X2 fires at t=2 (i.e., at 2−dT), the amplitude from X2 to P is set to 2. At time t=2, a pulse with amplitude 2 is sent from X2 to P, so P fires at time t=3 since its threshold=1. In other words, 1⊕0=1 or 0⊕1=1 has been computed. For case 3, (1⊕1), X0, X2 and X3 fire. Thus, two pulses each with amplitude=−2 are sent from X0 and X3 to P. And one pulse with amplitude 2 is sent from X2 to P. Thus, P doesn't fire. In other words, 1⊕1=0 has been computed. For case 4, (0⊕0), X1 fires. One pulse with amplitude=−2 is sent from X1 to P. Thus, P doesn't fire. In other words, 0⊕0=0 has been computed.
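The four cases above can be checked mechanically. The following Python sketch assumes that an element fires when the sum of its incoming pulse amplitudes exceeds its threshold (the tie case never arises with these parameters) and uses the amplitudes and thresholds of the commands for firing pattern 0010; it is a verification aid, not an AEM program.

def fires(total_input, threshold):
    # assumed firing rule: an element fires when its summed input exceeds its threshold
    return total_input > threshold

def xor_circuit(a, b):
    # amplitudes (from A, from B) and thresholds for X0, X1, X2, X3
    x_params = [((2, 2), 3), ((-2, -2), -1), ((2, 2), 1), ((2, 2), 3)]
    x = [fires(amp_a * a + amp_b * b, th) for (amp_a, amp_b), th in x_params]
    # amplitudes from X0, X1, X2, X3 to P; P has threshold 2|(X, W)| - 1 = 1
    p_amps = [-2, -2, 2, -2]
    return int(fires(sum(amp * int(x_fired) for amp, x_fired in zip(p_amps, x)), 1))

# xor_circuit reproduces exclusive-or on all four cases:
# (0,0) -> 0, (1,0) -> 1, (0,1) -> 1, (1,1) -> 0.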
Level Set Separation Rules
This section describes how each of the sixteen boolean functions is uniquely mapped to one of the sixteen firing patterns by an appropriate active element machine program. The domain {0, 1}×{0, 1} of the sixteen boolean functions has four members {(0, 0), (1, 0), (0, 1), (1, 1)}. Furthermore, for each active element Xk, separate these members based on the (amplitude from A to Xk, amplitude from B to Xk, threshold of Xk, element Xk) quadruplet. For example, the quadruplet (0, 2, 1, X1) separates {(1, 1), (0, 1)} from {(1, 0), (0, 0)} with respect to X1. Recall that A=1 means A fires and B=1 means B fires. Then X1 will fire with inputs {(1, 1), (0, 1)} and X1 will not fire with inputs {(1, 0), (0, 0)}. The separation rule is expressed as
Similarly,
indicates that X2 has threshold −1 and amplitudes 0 and −2 from A and B respectively. Further, X2 will fire with inputs {(1, 0), (0, 0)} and will not fire with inputs {(1, 1), (0, 1)}.
Table 1 shows how to compute all sixteen boolean functions ƒk:{0, 1}×{0, 1}→{0, 1}. For each Xk, use one of 14 separation rules to map the level set ƒk−1{1} or alternatively map the level set ƒk−1{0} to one of the sixteen firing patterns represented by X0, X1, X2 and X3. The level set method works as follows.
Suppose the nand boolean function ƒ13=¬(A∧B) is to be computed with the firing pattern 0101. Observe that ƒ13−1{1}={(1, 0), (0, 1), (0, 0)}. Thus, the separation rules
for k in {0, 2} work because X0 and X2 fire if and only if A fires and B fires. Similarly,
for j in {1, 3} work because X1 and X3 don't fire if and only if A fires and B fires. These rules generate the commands.
The five commands make element P fire if and only if firing pattern 0101 occurs.
Case 1: ¬(0∧0). A doesn't fire and B doesn't fire. Thus, no pulses reach X1 and X3, who each have threshold −3. Thus, X1 and X3 fire. Similarly, no pulses reach X0 and X2, who each have threshold 3. Thus, the firing pattern 0101 shown in table 10 causes P to fire because element X1 and X3 each send a pulse with amplitude 2 to P which has threshold 3. Therefore, ¬(0∧0)=1 is computed.
Case 2: ¬(1∧0). A fires and B doesn't fire. Thus, one pulse from A with amplitude 2 reaches X0 and X2, who each have threshold 3. Thus, X0 and X2 don't fire. Similarly, one pulse from A with amplitude −2 reaches X1 and X3, who each have threshold −3. Thus, the firing pattern 0101 shown in table 11 causes P to fire because element X1 and X3 each send a pulse with amplitude 2 to P which has threshold 3. Therefore, ¬(1∧0)=1 is computed.
Case 3: ¬(0∧1). A doesn't fire and B fires. Thus, one pulse from B with amplitude 2 reaches X0 and X2, who each have threshold 3. Thus, X0 and X2 don't fire. Similarly, one pulse from B with amplitude −2 reaches X1 and X3, who each have threshold −3. Thus, the firing pattern 0101 shown in table 12 causes P to fire because element X1 and X3 each send a pulse with amplitude 2 to P which has threshold 3. Therefore, ¬(0∧1)=1 is computed.
Case 4: ¬(1∧1). A fires and B fires. Thus, two pulses each with amplitude 2 reach X0 and X2, who each have threshold 3. Thus, X0 and X2 fire. Similarly, two pulses each with amplitude −2 reach X1 and X3, who each have threshold −3. As a result, X1 and X3 don't fire. Thus, the firing pattern 1010 shown in table 13 prevents P from firing because X0 and X2 each send a pulse with amplitude −2 to P which has threshold 3. Therefore, ¬(1∧1)=0 is computed.
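As with the exclusive-or example, the four nand cases can be checked with a short Python sketch under the same assumption that an element fires when its summed input exceeds its threshold. The amplitudes and thresholds below are the ones used in cases 1 through 4: amplitude 2 from A and B to X0 and X2 (threshold 3), amplitude −2 from A and B to X1 and X3 (threshold −3), amplitude 2 from X1 and X3 to P, amplitude −2 from X0 and X2 to P, and threshold 3 for P.

def nand_circuit(a, b):
    # X0 and X2 fire if and only if both A and B fire
    x0 = 2 * a + 2 * b > 3
    x2 = 2 * a + 2 * b > 3
    # X1 and X3 fire if and only if A and B do not both fire
    x1 = -2 * a - 2 * b > -3
    x3 = -2 * a - 2 * b > -3
    # P receives +2 from X1 and X3, -2 from X0 and X2, and has threshold 3
    p_input = 2 * x1 + 2 * x3 - 2 * x0 - 2 * x2
    return int(p_input > 3)

# nand_circuit gives 1 for (0,0), (1,0) and (0,1), and 0 for (1,1).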
Overall, any one of the sixteen boolean functions in table 1 is uniquely mapped to one of the sixteen firing patterns by an appropriate AEM program. These mappings can be chosen arbitrarily: as a consequence, each register machine instruction can be executed at different times using distinct AEM firing patterns.
A universal Deterministic Machine (UDM) is a deterministic machine that can execute the computation of any deterministic Machine by reading the other deterministic machine's description and input from the UDM's tape. Table 2 shows Minsky's universal deterministic machine described in [24]. This means that this universal deterministic machine can execute any program that a digital computer, or distributed system of computers, or a von Neumann machine can execute.
The elements of {0, 1}2 are denoted as {00, 01, 10, 11}. Create a one-to-one correspondence between the memory symbols in the alphabet of the universal deterministic machine and the elements in {0, 1}2 as follows: 0↔00, 1↔01, y↔10 and A↔11. Furthermore, consider the following correspondence of the states with the elements of {0, 1}3: q1↔001, q2↔010, q3↔011, q4↔100, q5↔101, q6↔110, q7↔111 and the halting state h↔000. Further consider L↔0 and R↔1 in {0, 1}. An active element machine is designed to compute the universal deterministic machine program η shown in table 3. Since the universal deterministic machine can execute any digital computer program, this demonstrates how to execute any digital computer program with a secure active element machine.
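For reference, the correspondence just described can be written out as ordinary Python dictionaries; this is only a bookkeeping aid for reading the level set tables, not part of the active element machine.

symbol_bits = {'0': '00', '1': '01', 'y': '10', 'A': '11'}
state_bits = {'q1': '001', 'q2': '010', 'q3': '011', 'q4': '100',
              'q5': '101', 'q6': '110', 'q7': '111', 'h': '000'}
move_bit = {'L': '0', 'R': '1'}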
Following the methods in the previous section, multiple AEM firing interpretations are created that compute η. When the universal deterministic machine halts (i.e., η(011, 00)=(000, 00, h)), this special case is handled with a halting firing pattern that the active element machine enters. Concatenate the three boolean variables U, W, X to represent the current state of the UDM. The two boolean variables Y, Z represent the current memory symbol being read. From table 3, observe that η=(η0 η1 η2, η3 η4, η5). For each k such that 0≤k≤5, the level sets of the function ηk:{0, 1}3×{0, 1}2→{0, 1} are shown below.
The level set η−1{(000, 00, h)}={(011, 00)} is the special case when the universal deterministic machine halts. At this time, the active element machine reaches a halting firing pattern H. The next example copies one element's firing state to another element's firing state, which helps assign the value of a random bit to an active element and perform other functions in the UDM.
Copy Program.
This active element program copies active element a's firing state to element b.
When the copy program is called, active element b fires if a fired during the window of time [s, t). Further, a connection is set up from b to b so that b will keep firing indefinitely. This enables b to store active element a's firing state. The following procedure describes the computation of the deterministic program η with random firing interpretations.
Procedure 2. Computing Deterministic Program η with Random Firing Patterns Consider function η3: {0, 1}5→{0, 1} as previously defined. The following scheme for mapping boolean values 1 and 0 to the firing of an active element is used. If active element U fires during window W, then this corresponds to input U=1 in η3; if active element U doesn't fire during window W, then this corresponds to input U=0 in η3. When U fires, W doesn't fire, X fires, Y doesn't fire and Z doesn't fire, this corresponds to computing η3 (101, 00). The value 1=η3(101, 00) is the underlined bit in (011, 10, 0), which is located in row 101, column 00 of table 3. Procedure 1 and the separation rules in table 7 are synthesized so that η3 is computed using random active element firing patterns. In other words, the boolean function η3 can be computed using an active element machine's dynamic interpretation. The dynamic part of the interpretation is determined by the random bits received from a quantum source. The firing activity of active element P3 represents the value of η3 (UWX, YZ). Fourteen random bits are read from a quantum random generator—for example, see [5]. These random bits are used to create a corresponding random firing pattern of active elements R0, R1, . . . R13. Meta commands dynamically build active elements and connections based on the separation rules in table 7 and the firing activity of elements R0, R1, . . . R13. These dynamically created active elements and connections determine the firing activity of active element P3 based on the firing activity of active elements U, W, X, Y and Z. The details of this procedure are described below.
Read fourteen random bits a0, a1, . . . and a13 from a quantum source. The values of these random bits are stored in active elements R0, R1, . . . R13. If random bit ak=1, then Rk fires; if random bit ak=0, then Rk doesn't fire.
Set up dynamical connections from active elements U, X, W, Y, Z to elements D0, D1, . . . D13. These connections are based on Meta commands that use the firing pattern from elements R0, R1, . . . R13.
For D0, follow the first row of separation table 7, reading the amplitudes from U, W, X, Y, Z to D0 and the threshold for D0. Observe that at time s−dT program set_dynamic_C initializes the amplitudes of the connections to AU,D0=−2, AW,D0=−2, AX,D0=−2, AY,D0=2, AZ,D0=2 as if R0 doesn't fire. If R0 does fire, then the Meta command in set_dynamic_C dynamically flips the sign of each of these amplitudes: at time t, the amplitudes are flipped to AU,D0=2, AW,D0=2, AX,D0=2, AY,D0=−2, AZ,D0=−2.
Similarly, the meta command in set_dynamic_E initializes the threshold of D0 to θD0=−5 as if R0 doesn't fire. If R0 does fire, the meta command flips the sign of the threshold of D0; for the D0 case, the meta command sets θD0=5.
Similarly, for elements D1, . . . , D13, the commands set_dynamic_E and set_dynamic_C dynamically set the element parameters and the connections from U, X, W, Y, Z to D1, . . . , D13 based on the rest of the quantum random firing pattern R1, . . . , R13.
Set up connections to active elements G0, G1, G2, . . . G14 which represent the number of elements in {R0, R1, R2, . . . R13} that are firing. If 0 are firing, then only G0 is firing. Otherwise, if k>0 elements in {R0, R1, R2, . . . R13} are firing, then only G1, G2, . . . Gk are firing.
P3 is the output of η3. Initialize element P3's threshold based on meta commands that use the information from elements G0, G1, . . . G13. Observe that t+dT<t+2 dT< . . . <t+15 dT so the infinitesimal dT and the meta commands set the threshold P3 to −2(14−k)+1 where k is the number of firings. For example, if nine of the randomly chosen bits are high, then G9 will fire, so the threshold of P3 is set to −9. If five of the random bits are high, then the threshold of P3 is set to −17. Each element of the level set creates a firing pattern of D0, D1, . . . D13 equal to the complement of the random firing pattern R0, R1, . . . R13 (i.e., Dk fires if and only if Rk does not fire).
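The threshold rule for P3 in this step can be written out explicitly; the following two lines of Python are only a restatement of the formula above.

def p3_threshold(k):
    # k is the number of elements among R0, ..., R13 that are firing
    return -2 * (14 - k) + 1

# p3_threshold(9) == -9 and p3_threshold(5) == -17, matching the examples above.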
Set up dynamical connections from D0, D1, . . . D13 to P3 based on the random bits stored by R0, R1, . . . R13. These connections are based on meta commands that use the firing pattern from elements R0, R1, . . . R13.
Similar procedures use random firing patterns on active elements {A0, A1, . . . A14}, {B0, B1, . . . B15}, {C0, C1, . . . C14}, {E0, E1, . . . E12}, and {F0, F1, . . . F13} to compute η0, η1, η2, η4 and η5, respectively. The outputs of η0, η1, η2, η4 and η5 are represented by active elements P0, P1, P2, P4 and P5, respectively. The level set rules for η0, η1, η2, η4 and η5 are shown, respectively in tables 4, 5, 6, 8 and 9.
Since the firing activity of element Pk represents a single bit that helps determine the next state or next memory symbol during a UDM computational step, its firing activity and parameters should remain perfectly secret. Otherwise, if an eavesdropper is able to listen to the firing activity of P0, P1, P2, P3, P4 and P5, which collectively represent the computation of η(UWX, YZ), then this leaking of information could be used to reconstruct some or all of the UDM memory contents.
This weakness can be rectified as follows. For each UDM computational step, the active element machine uses six additional quantum random bits b0, b1, b2, b3, b4 and b5. For element P3, if random bit b3=1, then the dynamical connections from D0, D1, . . . D13 to P3 are chosen as described above. However, if random bit b3=0, then the amplitudes of the connections from D0, D1, . . . D13 to P3 and the threshold of P3 are multiplied by −1. This causes P3 to fire when η3(UWX, YZ)=0 and P3 doesn't fire when η3(UWX, YZ)=1.
This cloaking of P3's firing activity can be coordinated with a meta command based on the value of b3 so that P3's firing can be appropriately interpreted to dynamically change the active elements and connections that update the memory contents and state after each UDM computational step. This cloaking procedure can also be used for element P0 and random bit b0, P1 and random bit b1, P2 and random bit b2, P4 and random bit b4, and P5 and random bit b5.
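The cloaking step relies on the fact that multiplying every incoming amplitude and the threshold of a threshold element by −1 complements its firing decision whenever the summed input never equals the threshold exactly. A minimal Python sketch of that fact, under the assumed firing rule that an element fires when its summed input exceeds its threshold:

def fires(amplitudes, inputs, threshold):
    return sum(a * x for a, x in zip(amplitudes, inputs)) > threshold

def cloaked_fires(amplitudes, inputs, threshold):
    # same element with every amplitude and the threshold multiplied by -1
    return fires([-a for a in amplitudes], inputs, -threshold)

# If the summed input never equals the threshold, cloaked_fires(...) is the
# logical negation of fires(...), so the cloaked P3 fires exactly when the bit
# it represents is 0.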
Besides representing and computing the program η with quantum random firing patterns, there are other important functions computed by active elements executing the UDM. Assume that these connections and the active element firing activity are kept perfectly secret as they represent the state and the memory contents of the UDM. These functions are described below.
Three active elements (q 0), (q 1) and (q 2) store the current state of the UDM.
There is a collection of elements to represent the machine address k (memory address k of the digital computer) where k is an integer.
A marker active element L locates the leftmost machine address (lowest memory address used by the digital computer) and a separate marker active element R locates the rightmost memory address (highest memory address used by the digital computer). Any memory symbols outside these markers are assumed to be blank, i.e. 0. If the machine head moves beyond the leftmost memory address, then L's connection is removed and updated one memory address to the left (one memory cell lower in the digital computer) and the machine is reading a 0. If the machine head moves beyond the rightmost memory address, then R's connection is removed and updated one memory address to the right (one memory cell higher in the digital computer) and the machine (digital computer) is reading a 0.
There are a collection of elements that represent the tape contents (memory contents) of the UDM (digital computer). For each memory address k inside the marker elements, there are two elements named (S k) and (T k) whose firing pattern determines the alphabet symbol at memory address k (memory cell k). For example, if elements (S 5) and (T 5) are not firing, then memory address 5 (memory cell 5 of the digital computer) contains alphabet symbol 0. If element (S −7) is firing and element (T −7) is not firing, then memory address −7 (memory cell −7) contains alphabet symbol 1. If element (S 13) is firing and element (T 13) is firing, then memory address 13 (memory cell 13 of the digital computer) contains alphabet symbol A.
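The firing pair (S k), (T k) can be read as an alphabet symbol as in the examples above. The remaining pair, (S k) not firing and (T k) firing, is taken to be the symbol y by elimination; that case is not listed in the paragraph above, so it is an assumption in this small Python sketch.

def symbol_at(s_firing, t_firing):
    return {(False, False): '0',
            (True, False): '1',
            (False, True): 'y',   # by elimination (assumed; not listed above)
            (True, True): 'A'}[(s_firing, t_firing)]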
Representing alphabet symbol 0 with two active elements that are not firing is convenient because if the machine head moves beyond the initial tape contents (memory contents) of the UDM (digital computer), then the Meta command can add two elements that are not firing to represent the contents of the new square.
The copy program can be used to construct important functionality in the Universal Deterministic machine (digital computer). The following active element machine program enables a new alphabet symbol to be copied to the tape (memory of the digital computer).
The following program enables a new state to be copied.
The sequence of steps by which the UDM (digital computer) is executed with an AEM is described below; a brief sketch of this loop follows the list.
1. Tape contents (memory contents) are initialized and the marker elements L and R are initialized.
2. The machine head (location of the next instruction in memory) is initialized to memory address k=0 and the current machine state is initialized to q2. In other words, (q 0) is not firing, (q 1) is firing and (q 2) is not firing.
3. (S k) and (T k) are copied to ain and the current state (q 0), (q 1), (q 2) is copied to qin.
4. η(qin, ain)=(qout, aout, m) is computed where qout represents the new state, aout represents the new tape symbol and m represents the machine head move.
5. If qout=h, then the UDM halts. The AEM reaches a static firing pattern that stores the current tape contents indefinitely and keeps the machine head fixed at memory address k where the UDM halted (where the digital computer stopped executing its computer program).
6. Otherwise, the firing pattern of the three elements representing qout is copied to (q 0), (q 1), (q 2). aout is copied to the current memory address (memory cell) represented by (S k), (T k).
7. If m=L, then first determine if the machine head has moved to the left of the memory address marked by L. If so, then have L remove its current marker and mark memory address k−1. In either case, go back to step 3 where (S k−1) and (T k−1) are copied to ain.
8. If m=R, then first determine if the machine head (location of the next instruction in memory) has moved to the right of the memory address marked by R. If so, then have R remove its current marker and mark memory address k+1. In either case, go back to step 3 where (S k+1) and (T k+1) are copied to ain.
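The loop below is a minimal Python sketch of steps 1 through 8 over an ordinary dictionary used as the tape; it is not the active element machine itself. The name eta stands for the program of table 3 (not reproduced here), 'h' is the halting state, 'L' and 'R' are the head moves, and the variables left and right play the role of the marker elements L and R. Following step 5, the halting transition does not update the tape in this sketch.

def run_udm(eta, tape, state='q2', head=0, max_steps=10000):
    left = min(tape) if tape else 0     # role of marker element L
    right = max(tape) if tape else 0    # role of marker element R
    for _ in range(max_steps):
        symbol = tape.get(head, '0')                         # squares outside the markers read as 0
        new_state, new_symbol, move = eta[(state, symbol)]   # step 4
        if new_state == 'h':                                 # step 5: halting firing pattern
            return 'h', head, tape, (left, right)
        tape[head] = new_symbol                              # step 6
        state = new_state
        head = head - 1 if move == 'L' else head + 1         # steps 7 and 8
        left, right = min(left, head), max(right, head)
    return state, head, tape, (left, right)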
In reference [5], it was shown that quantum randomness is Turing incomputable (digital computer incomputable). Since the firing pattern of the active elements {A0, A1, . . . A14} computing η0; the firing pattern of elements {B0, B1, . . . B15} computing η1; the firing pattern of elements {C0, C1, . . . C14} computing η2; the firing pattern of active elements {D0, D1, . . . D13} computing η3; the firing pattern of active elements {E0, E1, . . . E12} computing η4; and the firing pattern of active elements {F0, F1, . . . F13} computing η5 are all generated from quantum randomness, these firing patterns are Turing incomputable. As a consequence, there does not exist a Deterministic machine (digital computer program) that can map these firing patterns back to the sequence of instructions executed by the universal Deterministic machine (universal digital computer). In summary, these methods demonstrate a new class of computing machines and a new type of computational procedure where the purpose of the program's execution is incomprehensible (Turing incomputable) to malware.
Machine and Affine Map Correspondence
Definition 2.1 Deterministic Machine
A deterministic machine (Q, A, η) satisfies
The η function serves as the machine instructions (computer program) for the machine in the following manner. For each q in Q and α in A, the expression η(q, α)=(r, β, x) describes how machine (Q, A, η) executes one computational step. When in state q and reading alphabet symbol α in memory:
Definition 2.2 Memory Contents of the Machine
The machine's memory contents is represented as a function T: Z→A where Z denotes the integers. The memory contents T are M-bounded if there exists a bound M>0 such that T(k)=T(j) whenever |j|, |k|>M. (In some cases, the blank symbol # is used and T(k)=# when |k|>M.) The symbol stored in the kth address of memory is denoted as Tk.
Definition 2.3 Machine Configuration with Memory Address Location
Let (Q, A, η) be a machine with memory contents T. A configuration is an element of the set ℭ=(Q∪{h})×Z×{T: T is the memory with range A}. A physical machine has initial memory contents that are M-bounded; outside the bound M, the memory contents contain only blank symbols, denoted as #.
If (q, k, T) is a configuration in ℭ, then k is called the memory address location. The memory address location is M-bounded if there exists a natural number M>0 such that the address k of memory being read or scanned satisfies |k|≤M. A configuration whose first coordinate equals h is called a halted configuration. The set of non-halting configurations is {(q, k, T)∈ℭ: q≠h}.
The purpose of the definition of a configuration is that the first coordinate stores the current state of the machine, the third coordinate stores the contents of the memory, and the second coordinate stores the location (address) of the head reading or scanning the memory. Before presenting some examples of configurations, it is noted that there are different methods to describe the memory contents. One method is
This is a max. {|l|, |n|}-bounded memory. Another convenient representation is to list the memory contents and underline the symbol to indicate the location of the machine head. ( . . . ##αβ## . . . ).
A diagram can also represent the memory, location of the head reading memory, and the machine configuration (q, k, T). See
Consider configuration (p, 2, . . . ##αβ## . . . ). The first coordinate indicates that the machine is in machine state p. The second coordinate indicates that the machine is currently scanning memory address 2, denoted as T2 or T(2). The third coordinate indicates that memory address 1 stores symbol α, memory address 2 stores symbol β, and all other memory addresses contain the # symbol.
A second example of a configuration is (1, 6, . . . 1111233111 . . . ). This configuration is a halted configuration. The first coordinate indicates that the machine is in halt state 1. The second coordinate indicates that the machine head is scanning memory address 6. The underlined 2 in the third coordinate indicates that the machine head is currently scanning a 2. In other words, T(6)=2, T(7)=3, T(8)=3, and T(k)=1 when k<6 OR k>8.
Definition 2.6 Deterministic Machine Computational Step
Consider machine (Q, A, η) with machine configuration (q, k, T) such that T(k)=α, which means that memory address k stores symbol α. After the execution of one computational step, the new configuration is one of the three cases such that for all three cases S(k)=β and S(j)=T(j) whenever j≠k:
If the machine is currently in machine configuration (q0, k0, T0) and over the next n steps the sequence of machine configurations (points) is (q0, k0, T0), (q1, k1, T1), . . . , (qn, kn, Tn) then this execution sequence is sometimes called the next n+1 computational steps.
If Deterministic machine (Q, A, η) with initial configuration (s, k, T) reaches the halt state h after a finite number of execution steps, then the machine execution halts.
Otherwise, it is said that the machine execution is immortal on initial machine configuration (s, k, T).
The program symbol η, representing the deterministic computer program, induces a map from non-halting configurations to configurations where η(q, k, T)=(r, k−1, S) when η(q, α)=(r, β, L) and η(q, k, T)=(r, k+1, S) when η(q, α)=(r, β, R).
Definition 2.7 Computer Program Size
The program size is the number of elements in the domain of η. The program size is denoted as |η|. Observe that |η|=|Q×A|=|Q||A|. Note that in [7] and [32], they omit quintuples (q, a, r, b, x) when r is the halting state. In our representation, η(q, a)=(1, b, x) or η(q, a)=(h, b, x).
Definition 2.8 Memory Head glb, lub, Window of Execution [glb, lub]
Suppose a deterministic machine begins or continues its execution with machine head at memory address k. During the next N computational steps, the greatest lower bound glb of the machine head is the left most (smallest integer) memory address that the machine head reads during these N computational steps; and the least upper bound lub of the machine head is the right most (largest integer) memory address that the machine head visits during these N computational steps. The window of execution denoted as [glb, lub] or [glb, glb+1, . . . , lub−1, lub] is the sequence of integers representing the memory addresses that the machine head visited during these N computational steps. The length of the window of execution is lub−glb+1 which is also the number of distinct memory addresses visited (read) by the machine head during these N steps. To express the window of execution for the next n computational steps, the lower and upper bounds are expressed as a function of n: [glb(n), lub(n)].
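A small Python helper restates definition 2.8: given the sequence of memory addresses that the machine head visits during N computational steps, the window of execution is [glb, lub] and its length is lub−glb+1.

def window_of_execution(head_addresses):
    glb, lub = min(head_addresses), max(head_addresses)
    return glb, lub, lub - glb + 1

# For the head addresses 0, 1, 2, ..., 6 of one periodic cycle, this returns
# (0, 6, 7), matching the window [0, 6] of length 7 discussed below.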
Example 2.9
Halting state=h
The machine execution steps are shown in
Remark 2.10
If j≤k, then [glb(j), lub(j)]⊆[glb(k), lub(k)].
This follows immediately from the definition of the window of execution.
Since the machine addresses may be renumbered without changing the results of the machine execution, for convenience it is often assumed that the machine starts execution at memory address 0. In example 2.9, during the next 8 computational steps—one cycle of the immortal periodic point—the window of execution is [0, 6]. The length of the window of execution is 7. Observe that if the machine addresses are renumbered and the goal is to refer to two different windows of execution, for example [glb(j), lub(j)] and [glb(k), lub(k)], then both windows are renumbered with the same relative offset so that the two windows can be compared.
Definition 2.11 Value Function and Base
Suppose the alphabet A={a1, a2, . . . , aJ} and the machine states are Q={q1, q2, . . . qK}. Define the symbol value function v: A∪Q∪{h}→N where N denotes the natural numbers. v(h)=0. v(ak)=k. v(qk)=k+|A|. v(qK)=|Q|+|A|. Choose the number base B=|Q|+|A|+1. Observe that 0≤v(x)<B and that each symbol chosen from A∪Q∪{h} has a unique value in base B.
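For example, with the universal deterministic machine alphabet {0, 1, y, A} and states {q1, . . . , q7} used earlier, |A|=4 and |Q|=7, so the base is B=|Q|+|A|+1=12; then v(h)=0, the four alphabet symbols take the values 1 through 4, and v(q1) through v(q7) take the values 5 through 11.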
Definition 2.12 Deterministic Machine Program Isomorphism
Two deterministic computing machines M1(Q1, A1, η1) and M2(Q2, A2, η2) have a program isomorphism denoted as Ψ: M1→M2 if
Remark 2.13
If alphabet A={a}, then the halting behavior of the deterministic machine is completely determined in ≤|Q|+1 execution steps.
Proof.
Suppose Q={q1, q2, . . . qK}. Observe that the program length is |η|=|Q|. Also, after an execution step every tape symbol on the tape must be a. Consider the possible execution steps: η(qS(1), a)→η(qS(2), a)→η(qS(3), a) . . . →η(qS(K+1), a). If the program execution does not halt in these |Q|+1 steps, then S(i)=S(j) for some i≠j; and the tape contents are still all a's. Thus, the program will exhibit periodic behavior whereby it will execute η(qs(i), a)→ . . . →η(qs(j), a) indefinitely. If the program does not halt in |Q|+1 execution steps, then the computer program will never halt.
As a result of Remark 2.13, from now on, it is assumed that |A|≥2. Further, since at least one state is needed, then from here on, it is assumed that the base B≥3.
Definition 2.14
Register Machine Configuration to x-y plane P correspondence. See
x(q, k, T)=Tk Tk+1·Tk+2 Tk+3 . . . and y(q, k, T)=q Tk−1·Tk−2 Tk−3 Tk−4 . . . , where each of these digit sequences in base B represents a rational number.
Define the function φ: ℭ→P as φ(q, k, T)=(x(q, k, T), y(q, k, T)). φ is the function that maps machine configurations to points in the plane P.
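A Python sketch of φ, consistent with definition 2.14 and with the unit square coordinates └x┘=Bv(Tk)+v(Tk+1) and └y┘=Bv(q)+v(Tk−1) given below: x lists the memory contents from the head to the right as base B digits, and y lists the state followed by the memory contents to the left of the head. The value function v and base B are those of definition 2.11; digits beyond the M-bound (all the blank symbol) are truncated in this sketch, whereas the full map includes that repeating tail.

def phi(q, k, T, v, B, M):
    # T is a dict from integer addresses to symbols; missing addresses hold the blank '#'
    sym = lambda j: T.get(j, '#')
    x = B * v[sym(k)] + v[sym(k + 1)]      # integer part of x
    y = B * v[q] + v[sym(k - 1)]           # integer part of y
    for i in range(2, 2 * M + 2):          # fractional digits, truncated at the M-bound
        x += v[sym(k + i)] * B ** (1 - i)
        y += v[sym(k - i)] * B ** (1 - i)
    return x, y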
Definition 2.15 Equivalent Configurations
With respect to Deterministic machine (Q, A, η), the two configurations (q, k, T) and (q, j, V) are equivalent [i.e. (q, k, T)˜(q, j, V)] if T(m)=V(m+j−k) for every integer m. Observe that ˜ is an equivalence relation on ℭ. Let ℭ′ denote the set of equivalence classes [(q, k, T)] on ℭ. Also observe that φ maps every configuration in equivalence class [(q, k, T)] to the same ordered pair of real numbers in P. Recall that (D, X) is a metric space if the following three conditions hold.
(ρ, ℭ′) is a metric space where ρ is induced via φ by the Euclidean metric in P. Consider points p1, p2 in P with p1=(x1, y1) and p2=(x2, y2) where (d, P) is a metric space with Euclidean metric d(p1, p2)=√((x1−x2)²+(y1−y2)²).
Let u=[(q, k, S)], w=[(r, l, T)] be elements of ℭ′. Define ρ: ℭ′×ℭ′→R as ρ(u, w)=d(φ(u), φ(w))=√([x(q,k,S)−x(r,l,T)]²+[y(q,k,S)−y(r,l,T)]²).
The symmetric property and the triangle inequality hold for ρ because d is a metric. In regard to property (i), ρ(u, w)≥0 because d is a metric. The additional condition that ρ(u, w)=0 if and only if u=w holds because d is a metric and because the equivalence relation ˜ collapses non-equal equivalent configurations (i.e. two configurations in the same state but with different machine head locations and with all corresponding symbols on their respective tapes being equal) into the same point in ℭ′.
The unit square U(└x┘, └y┘) has a lower left corner with coordinates (└x┘, └y┘) where └x┘=Bv(Tk)+v(Tk+1) and └y┘=Bv(q)+v(Tk−1). See
Definition 2.16 Left Affine Function
This is for case I. where η(q, Tk)=(r, β, L). See
Thus, m=Tk−1β−Tk where the subtraction of integers is in base B.
Thus, n=rTk−2−qTk−1 Tk−2 where the subtraction of integers is in base B.
Define the left affine function F(└x┘, └y┘):U(└x┘, └y┘)→P where
m=Bv(Tk−1)+v(β)−v(Tk) and n=Bv(r)−B2v(q)−Bv(Tk−1).
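With these values of m and n and the correspondence of definition 2.14, the left affine map acts on its unit square as F(└x┘, └y┘)(x, y)=(x/B+m, By+n); this form is consistent with the x and y coordinate computations in the proof of lemma 2.17 and with n being the vertical translation in remark 2.18.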
Lemma 2.17 Left Affine Function⇔Register Machine Computational Step
Let (q, k, T) be a Deterministic machine configuration. Suppose η(q, Tk)=(r, b, L) for some state r in Q∪{h} and some alphabet symbol b in A and where Tk=a. Consider the next Deterministic Machine computational step. The new configuration is (r, k−1, Tb) where Tb(j)=T(j) for every j≠k and Tb(k)=b. The commutative diagram φ(η(q, k, T))=F(└x┘, └y┘)(φ(q, k, T)) holds. In other words, F(└x┘, └y┘) [x(q, k, T), y(q, k, T)]=[x(r, k−1, Tb), y(r, k−1, Tb)].
Proof.
The x coordinate of
The y coordinate of
Remark 2.18 Minimum Vertical Translation for Left Affine Function
As in 2.16, n is the vertical translation. |Bv(r)−Bv(Tk−1)|=B|v(r)−v(Tk−1)|≤B(B−1). Since q is a state, v(q)≥(|A|+1). This implies |−B2v(q)|≥(|A|+1)B2. This implies that |n|≥(|A|+1)B2−B(B−1)≥|A| B2+B.
Thus, |n|≥|A| B2+B.
Definition 2.19 Right Affine Function
This is for case II. where η(q, Tk)=(r, β, R). See
Thus, m=Tk−1β−Tk where the subtraction of integers is in base B.
Thus, n=rβ−q where the subtraction of integers is in base B.
Define the right affine function G(└x┘, └y┘):U(└x┘, └y┘)→P such that
where m=−B2v(Tk) and n=Bv(r)+v(β)−v(q).
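Similarly, with these values of m and n, the right affine map acts on its unit square as G(└x┘, └y┘)(x, y)=(Bx+m, y/B+n); this form is consistent with the x and y coordinate computations in the proof of lemma 2.20 and with n being the vertical translation in remark 2.21.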
Lemma 2.20 Right Affine Function⇔Deterministic Machine Computational Step
Let (q, k, T) be a Deterministic machine configuration. Suppose η(q, Tk)=(r, b, R) for some state r in Q∪{h} and some alphabet symbol b in A and where Tk=a. Consider the next Deterministic Machine computational step. The new configuration is (r, k+1, Tb) where Tb(j)=T(j) for every j≠k and Tb(k)=b. The commutative diagram φ(η(q, k, T))=G(└x┘, └y┘)(φ(q, k, T)) holds.
In other words, G(└x┘, └y┘) [x(q, k, T), y(q, k, T)]=[x(r, k+1, Tb), y(r, k+1, Tb)].
Proof.
From η(q, Tk)=(r, b, R), it follows that x(r, k+1, Tb)=Tk+1 Tk+2·Tk+3 Tk+4 . . . .
The x coordinate of
From η(q, Tk)=(r, b, R), it follows that y(r, k+1, Tb)=rb·Tk−1 Tk−2 Tk−3 . . . .
The y coordinate of
Remark 2.21 Minimum Vertical Translation for Right Affine Function
First,
Definition 2.22 Function Index Sequence and Function Sequence
Let {ƒ1, ƒ2, . . . , ƒI} be a set of functions such that each function ƒk:X→X. Then a function index sequence is a function S:N→{1, 2, . . . , I} that indicates the order in which the functions {ƒ1, ƒ2, . . . , ƒI} are applied. If p is a point in X, then the orbit with respect to this function index sequence is [p, ƒS(1)(p), ƒS(2) ƒS(1)(p), . . . , ƒS(m) ƒS(m−1) . . . ƒS(2) ƒS(1)(p), . . . ]. Square brackets indicate that the order matters. Sometimes the first few functions will be listed in a function sequence when it is periodic. For example, [ƒ0, ƒ1, ƒ0, ƒ1, . . . ] is written when formally this function index sequence is S:N→{0, 1} where S(n)=n mod 2.
Example 2.23. Consider a function ƒ defined on domain U(0,0) and a function g defined on domain U(4,0).
(0, 0) is a fixed point of gƒ. The orbit of any point p chosen from the horizontal segment connected by the points (0, 0) and (1,0) with respect to the function sequence [ƒ, g, ƒ, g, . . . ] is a subset of U(0, 0)∪U(4, 0). The point p is called an immortal point. The orbit of a point Q outside this segment exits (halts) U(0, 0)∪U(4, 0).
Definition 2.24 Halting and Immortal Orbits in the Plane.
Let P denote the two dimensional x,y plane. Suppose ƒk:Uk→P is a function for each k such that whenever j≠k, then Uj∩Uk=Ø. For any point p in the plane P an orbit may be generated as follows. The 0th iterate of the orbit is p. Given the kth iterate of the orbit is point q, if point q does not lie in any Uk, then the orbit halts. Otherwise, q lies in at least one Uj. Inductively, the k+1 iterate of q is defined as ƒj(q). If p has an orbit that never halts, this orbit is called an immortal orbit and p is called an immortal point. If p has an orbit that halts, this orbit is called a halting orbit and p is called a halting point.
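Definition 2.24 can be simulated directly. The following Python sketch iterates a point under a family of functions with pairwise disjoint domains and reports whether the orbit halts within a bounded number of steps; since an immortal orbit never halts, a simulation can only bound the number of iterations.

def orbit(p, pieces, max_steps=1000):
    # pieces is a list of (in_domain, f) pairs: in_domain(q) tests membership
    # in U_k and f is the function defined on U_k.  The domains are assumed
    # to be pairwise disjoint, as in definition 2.24.
    points = [p]
    for _ in range(max_steps):
        q = points[-1]
        applicable = [f for in_domain, f in pieces if in_domain(q)]
        if not applicable:
            return points, 'halting'        # q lies in no U_k, so the orbit halts
        points.append(applicable[0](q))
    return points, 'undecided'              # max_steps reached without halting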
Theorem 2.25 Register Machine Execution⇔Affine Map Orbit Halting/Immortal Orbit Correspondence Theorem
Consider Register machine (Q, A, η) with initial machine configuration (s, 0, T). W.L.O.G., it is assumed that the machine begins executing with the machine head at zero. Let ƒ1, ƒ2, . . . , ƒI denote the I affine functions with corresponding unit square domains W1, W2, W3, . . . , WI determined from 2.14, 2.15, 2.16 and 2.19. Let p=(x(s, 0, T), y(s, 0, T)). Then from 2.14,
Also,
There is a 1 to 1 correspondence between the mth point of the orbit
and the mth computational step of the deterministic machine (Q, A, η) with initial configuration (s, 0, T). In particular, the Deterministic Machine halts on initial configuration (s, 0, T) if and only if p is a halting point with respect to affine functions ƒk:Wk→P where 1≤k≤I. Dually, the Deterministic Machine is immortal on initial configuration (s, 0, T) if and only if p is an immortal point with respect to affine functions ƒk:Wk→P where 1≤k≤I.
Proof.
From lemmas 2.17, 2.20, definition 2.14 and remark 2.15, every computational step of (Q, A, η) on current configuration (q, k, T′) corresponds to the application of one of the unique affine maps ƒk, uniquely determined by remark 2.15 and definitions 2.16, 2.19 on the corresponding point p=[x(q, k, T′), y(q, k, T′)]. Thus by induction, the correspondence holds for all n if the initial configuration (s, 0, T) is an immortal configuration which implies that [x(s, 0, T), y(s, 0, T)] is an immortal point. Similarly, if the initial configuration (s, 0, T) is a halting configuration, then the machine (Q, A, η) on (s, 0, T) halts after N computational steps. For each step, the correspondence implies that the orbit of initial point
p0=[x(s, 0, T), y(s, 0, T)] exits the union of the unit square domains W1, W2, . . . , WI on the Nth iteration of the orbit. Thus, p0 is a halting point.
Corollary 2.26 Immortal Periodic Points, Induced by Configurations, Correspond to Equivalent Configurations that are Immortal Periodic.
Proof.
Suppose p=[x(q, k, T), y(q, k, T)] with respect to (Q, A, η) and p lies in the union of the unit square domains W1, W2, . . . , WI such that ƒS(N) ƒS(N−1) . . . ƒS(1)(p)=p. Starting with configuration (q, k, T), after N execution steps of (Q, A, η), the resulting configuration (q, j, V) satisfies x(q, k, T)=x(q, j, V) and y(q, k, T)=y(q, j, V) because of ƒS(N) ƒS(N−1) . . . ƒS(1)(p)=p and Theorem 2.25. This implies that (q, k, T) is translation equivalent to (q, j, V).
By induction this argument may be repeated indefinitely. Thus, (q, k, T) is an immortal configuration such that for every N computational steps of (Q, A, η), the kth resulting configuration (q, jk, Vk) is translation equivalent to (q, k, T).
Lemma 2.28
Two affine functions with adjacent unit squares as their respective domains are either both right affine or both left affine functions. (Adjacent unit squares have lower left x and y coordinates that differ at most by 1. It is assumed that |Q|≥2, since any deterministic program with only one state has a trivial halting behavior that can be determined in |A| execution steps when the tape is bounded.)
Proof.
The unit square U(└x┘, └y┘) has a lower left corner with coordinates (└x┘, └y┘) where └x┘=Bv(Tk)+v(Tk+1) and └y┘=Bv(q)+v(Tk−1). A left or right affine function (left or right move) is determined by the state q and the symbol Tk at the current memory address. If states q≠r, then |Bv(q)−Bv(r)|≥B. If two alphabet symbols a, b are distinct, then |v(a)−v(b)|<|A|.
Thus, if two distinct program instructions have different states q≠r, then the corresponding unit squares have y-coordinates that differ by at least B−|A|=|Q|≥2, since any Deterministic program with just one state has trivial behavior that can be determined in |A| execution steps when the tape is bounded. Otherwise, two distinct program instructions must have distinct symbols at Tk. In this case, the corresponding unit squares have x-coordinates that differ by at least B−|A|=|Q|≥2.
Definition 2.29 Rationally Bounded Coordinates
Let ƒ1, ƒ2, . . . , ƒI denote the I affine functions with corresponding unit square domains W1, W2, . . . , WI. Let p be a point in the plane P with orbit [p, ƒS(1)(p), ƒS(2)ƒS(1)(p), . . . , ƒS(m) ƒS(m−1) . . . ƒS(2) ƒS(1)(p), . . . ]. The orbit of p has rationally bounded coordinates if conditions I & II hold.
I) For every point in the orbit z=ƒS(k) ƒS(k−1) . . . ƒS(2) ƒS(1)(p) the x-coordinate of z, x(z), and the y-coordinate of z, y(z), are both rational numbers.
II) There exists a natural number M such that for every point z = (p1/q1, p2/q2) in the orbit, where p1/q1 and p2/q2 are rational numbers in reduced form, |p1|<M, |p2|<M, |q1|<M, and |q2|<M.
An orbit is rationally unbounded if the orbit does not have rationally bounded coordinates.
Theorem 2.30
An orbit with rationally bounded coordinates is periodic or halting.
Proof.
Suppose both coordinates are rationally bounded for the whole orbit and M is the natural number from condition II. If the orbit is halting we are done. Otherwise, the orbit is immortal. Since there are fewer than 2M integer values available for each of p1, p2, q1 and q2, the orbit can visit only finitely many distinct points, so the immortal orbit must return to a point it has already visited. Since the orbit is generated deterministically by the affine functions, it is periodic.
Corollary 2.31
A Deterministic machine execution whose machine head location is unbounded over the whole program execution corresponds to an immortal orbit.
Theorem 2.32
Suppose the initial tape contents are bounded as defined in definition 2.2. Then an orbit with rationally unbounded coordinates is an immortal orbit that is not periodic.
Proof.
If the orbit halts, then the orbit has only a finite number of points, so its coordinates are rationally bounded, contrary to assumption. Thus, it must be an immortal orbit. This orbit is not periodic because a periodic orbit visits only finitely many points and therefore has rationally bounded coordinates.
Corollary 2.33
If the Deterministic Machine execution is unbounded on the right half of the tape, then in regard to the corresponding affine orbit, there is a subsequence S(1), S(2), . . . , S(k), . . . of the indices of the function sequence g1, g2, . . . , gk, . . . such that for each natural number n the composition of functions gS(n) gS(n−1) . . . gS(1) iterated up to the S(n)th orbit point is of the form
where mS(n), tS(n) are rational numbers.
Corollary 2.34
If the Deterministic Machine execution is unbounded on the left half of the tape, then in regard to the corresponding affine orbit, there is a subsequence S(1), S(2), . . . , S(k), . . . of the indices of the function sequence g1, g2, . . . , gk, . . . such that for each natural number n the composition of functions gS(n) gS(n−1) . . . gS(1) iterated up to the S(n)th orbit point is of the form:
where mS(n), tS(n) are rational numbers.
Theorem 2.35 M-Bounded Execution Implies a Halting or Periodic Orbit
Suppose that the Deterministic Machine (Q, A, η) begins or continues execution with a configuration such that its machine head location is M-bounded during the next (2M+1)|Q||A|^(2M+1)+1 execution steps. Then the Deterministic Machine program halts in at most (2M+1)|Q||A|^(2M+1)+1 execution steps or its corresponding orbit is periodic with period less than or equal to (2M+1)|Q||A|^(2M+1)+1.
Proof.
If the program halts within (2M+1)|Q||A|^(2M+1)+1 steps, then the proof is complete. Otherwise, consider the first (2M+1)|Q||A|^(2M+1)+1 steps. There are at most |Q||A| program commands for each machine head location. There are at most 2M+1 machine head locations. For each of the remaining 2M memory addresses, each address can hold at most |A| different symbols, which gives a total of |A|^(2M) possibilities for these memory addresses. Thus, among the first (2M+1)|Q||A|^(2M+1)+1 points of the corresponding orbit in P, there are at most (2M+1)|Q||A|^(2M+1) distinct points, so at least one point in the orbit must occur more than once.
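As a small illustration of this pigeonhole count, the following Python sketch computes (2M+1)|Q||A|^(2M+1)+1; the sample values |Q|=3, |A|=2 and M=2 are illustrative only and are not taken from the text.

```python
# A minimal sketch of the bound in Theorem 2.35, assuming illustrative values
# |Q| = 3, |A| = 2 and M = 2 (these numbers are not from the text).

def m_bounded_step_bound(num_states: int, num_symbols: int, M: int) -> int:
    # (2M+1) head locations * |Q||A| program commands per location
    # * |A|^(2M) contents of the remaining 2M addresses, plus one extra step,
    # so some orbit point must repeat within this many steps.
    return (2 * M + 1) * num_states * num_symbols ** (2 * M + 1) + 1

print(m_bounded_step_bound(3, 2, 2))  # 481
```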
Prime Edge Machine Computation
Any digital computer program (e.g., a program written in C, Python, C++, JAVA, Haskell) can be compiled to a finite number of prime directed edges. Prime directed edges act as machine instructions. Overlap matching and intersection patterns determine how the prime directed edges store the result of one step of the computation in the memory.
Definition 3.1 Overlap Matching & Intersection Patterns
The notion of an overlap match expresses how a part or all of one pattern may match part or all of another pattern. Let V and W be patterns. (V, s) overlap matches (W, t) if and only if V(s+c)=W(t+c) for each integer c satisfying −λ≤c≤μ, where λ=min{s, t} and μ=min{|V|−1−s, |W|−1−t}, and 0≤s<|V| and 0≤t<|W|. The index s is called the head of pattern V and t is called the head of pattern W. If V is also a subpattern, then (V, s) submatches (W, t).
If (V, s) overlap matches (W, t), then define the intersection pattern I with head u=λ as (I, u)=(V, s)∩(W, t), where I(c)=V(c+s−λ) for every integer c satisfying 0≤c≤(μ+λ) where λ=min{s, t} and μ=min{|V|−1−s, |W|−1−t}.
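A minimal Python sketch of these two operations may make the index bookkeeping concrete; it assumes patterns are strings of alphabet symbols, heads are 0-based indices, and the function names are illustrative rather than taken from the text.

```python
# A sketch of overlap matching and the intersection pattern (Definition 3.1),
# assuming patterns are strings and heads are 0-based indices into them.

def overlap_matches(V: str, s: int, W: str, t: int) -> bool:
    """Return True if (V, s) overlap matches (W, t)."""
    assert 0 <= s < len(V) and 0 <= t < len(W)
    lam = min(s, t)                              # overlap extends lam symbols left of the heads
    mu = min(len(V) - 1 - s, len(W) - 1 - t)     # and mu symbols to the right
    return all(V[s + c] == W[t + c] for c in range(-lam, mu + 1))

def intersection_pattern(V: str, s: int, W: str, t: int):
    """Return (I, u) = (V, s) ∩ (W, t) when the patterns overlap match."""
    assert overlap_matches(V, s, W, t)
    lam = min(s, t)
    mu = min(len(V) - 1 - s, len(W) - 1 - t)
    I = "".join(V[c + s - lam] for c in range(0, mu + lam + 1))
    return I, lam                                # the head of I is u = lam

# For example, with the patterns of the substitution example given below:
print(overlap_matches("0101110", 0, "110101", 2))   # True
```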
Definition 3.2 Edge Pattern Substitution Operator
Consider pattern V=v0 v1 . . . vn, pattern W=w0 w1 . . . wn with heads s, t satisfying 0≤s, t≤n and pattern P=p0 p1 . . . pm with head u satisfying 0≤u≤m. Suppose (P, u) overlap matches (V, s). Then define the edge pattern substitution operator ⊕ as E=(P, u)⊕[(V, s)⇒(W, t)] according to the four different cases A., B., C. and D.
Case A.) u>s and m−u>n−s: the head of E is u+t−s. Observe that |E|=m+1.
Case B.) u>s and m−u≤n−s: the head of E is u+t−s.
Case C.) u≤s and m−u≤n−s: E(k)=W(k) when 0≤k≤n and the head of E is t. Also, |E|=|W|=n+1.
Case D.) u≤s and m−u>n−s: the head of E is t. Also, |E|=m+s−u+1.
Overlap matching, intersection patterns and edge pattern substitution are useful in algorithms that execute over all possible finite machine configurations of a digital computer program.
Set pattern P=0101 110. Set pattern V=11 0101. Set pattern W=01 0010. Then (P, 0) overlap matches (V, 2). Edge pattern substitution is well-defined, so E=(P, 0)⊕[(V, 2)⇒(W, 4)]=01 0010 110. The head (index) of pattern E is 4. Also, (P, 4) overlap matches (V, 0), and F=(P, 4)⊕[(V, 0)⇒(W, 4)]=0101 010010. The head of pattern F is u+t−s=4+4−0=8.
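The substitution can be sketched in Python under the same string representation used above. Since the figures referenced for cases A, B and D are not reproduced here, the concatenation below is inferred from the stated heads and lengths of E (W replaces the portion of P that overlaps V, and any part of P extending beyond V on the left or right is kept); the printed values agree with the worked example above.

```python
# A sketch of the edge pattern substitution operator (Definition 3.2),
# assuming (P, u) overlap matches (V, s) and |V| = |W|. The concatenation is
# inferred from the stated heads and lengths of E in cases A-D.

def edge_pattern_substitution(P: str, u: int, V: str, s: int, W: str, t: int):
    """Return (E, head) = (P, u) ⊕ [(V, s) ⇒ (W, t)]."""
    assert len(V) == len(W)
    m, n = len(P) - 1, len(V) - 1
    left = P[: u - s] if u > s else ""                     # part of P extending left of V (cases A, B)
    right = P[u + (n - s) + 1:] if m - u > n - s else ""   # part of P extending right of V (cases A, D)
    E = left + W + right
    head = u + t - s if u > s else t                       # cases A, B vs. cases C, D
    return E, head

P, V, W = "0101110", "110101", "010010"
print(edge_pattern_substitution(P, 0, V, 2, W, 4))   # ('010010110', 4), case D
print(edge_pattern_substitution(P, 4, V, 0, W, 4))   # ('0101010010', 8), case B
```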
Remark 3.5 Any Prime State Cycle has Length≤|Q|
This follows from the Dirichlet principle and the definition of a prime machine state cycle.
Remark 3.6 Maximum Number of Distinct Prime State Cycles
Given an alphabet A and states Q, consider an arbitrary prime state cycle with length 1, (q, a)→(q, b). There are |Q||A| choices for the first input machine instruction and |A| choices for the second input machine instruction since the machine states must match. Thus, there are |Q||A|^2 distinct prime state cycles with length 1. Similarly, consider a prime state cycle whose window of execution has length 2; this can be represented as (q1, a1)→(q2, a2)→(q1, b1).
Then there are |Q||A| choices for (q1, a1) and, once (q1, a1) is chosen, there is only one choice for q2 because it is completely determined by η(q1, a1)=(q2, b1), where η is the program in (Q, A, η). Similarly, there is only one choice for b1. There are |A| choices for a2. Thus, there are |Q||A|^2 distinct choices.
For an arbitrary prime state cycle (q1, a1)→(q2, a2)→ . . . →(qn, an)→(q1, an+1) whose window of execution has length k, there are |Q||A| choices for (q1, a1) and |A| choices for a2 since the current window of execution length increases by 1 after the first step. There is only one choice for q2 because it is determined by η(q1, a1). Similarly, for the jth computational step, if the current window of execution length increases by 1, then there are |A| choices for (qj+1, aj+1). Similarly, for the jth computational step, if the current window of execution stays unchanged, then there is only one choice for aj+1, which was determined by one of the previous j computational steps. Thus, there are at most |Q||A|^k distinct prime state cycles whose window of execution length equals k. Definition 2.8 and remark 2.10 imply that a prime k-state cycle has a window of execution length less than or equal to k. Thus, from the previous count and 3.5, there are at most Σ_{k=1}^{|Q|} |Q||A|^k
distinct prime state cycles in (Q, A, η).
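A short sketch of this count, assuming the bound reconstructed above (the sum over k = 1, . . . , |Q| of |Q||A|^k), may serve as a sanity check for small machines; the function name and sample values are illustrative only.

```python
# A sketch of the prime state cycle bound of Remark 3.6, assuming the
# reconstructed total: sum over k = 1..|Q| of |Q| * |A|^k.

def prime_state_cycle_bound(num_states: int, num_symbols: int) -> int:
    return sum(num_states * num_symbols ** k for k in range(1, num_states + 1))

print(prime_state_cycle_bound(2, 2))   # 12 = 2*2 + 2*4
print(prime_state_cycle_bound(3, 2))   # 42 = 3*2 + 3*4 + 3*8
```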
Remark 3.7 Any State Cycle Contains a Prime State Cycle
Proof.
Relabeling if necessary, let S(q1, q1)=(q1, a1)→ . . . →(qn, an)→(q1, an+1) be a state cycle. If q1 is the only state visited twice, then the proof is completed. Otherwise, define μ=min{|S(qk, qk)|: S(qk, qk) is a subcycle of S(q1, q1)}. Then μ exists because S(q1, q1) is a subcycle of S(q1, q1). Claim: Any state cycle S(qj, qj) with |S(qj, qj)|=μ must be a prime state cycle. Suppose not. Then there is a state r≠qj that is visited twice in the state cycle S(qj, qj). But then S(r, r) is a cycle with length less than μ, which contradicts μ's definition.
Definition 3.9 Execution Node for (Q, A, η)
An execution node (or node) is a triplet H=[q, w0 w1 . . . wn, t] for some state q in Q, where w0 w1 . . . wn is a pattern of n+1 alphabet symbols, each in A, and t is a non-negative integer satisfying 0≤t≤n. Intuitively, w0 w1 . . . wn is the pattern of alphabet symbols on n+1 consecutive memory addresses in the computer memory, and t represents the location (address) of memory where the next computational step (i.e. machine instruction) of the prime directed edge computation will be stored in memory.
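As a data structure, an execution node can be sketched directly from this definition; the Python class below is a minimal sketch, assuming the pattern is held as a string of alphabet symbols.

```python
# A minimal sketch of an execution node [q, w0 w1 ... wn, t] (Definition 3.9).

from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionNode:
    state: str      # q: a state in Q
    pattern: str    # w0 w1 ... wn: symbols on n+1 consecutive memory addresses
    head: int       # t: the memory address where the next computational step acts

    def __post_init__(self):
        # enforce 0 <= t <= n
        assert 0 <= self.head < len(self.pattern)
```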
Prime Directed Sequences
Definition 4.1 Prime Directed Edge from Head and Tail Execution Nodes
A Prime Head Execution Node Δ=[q, v0 v1 . . . vn, s] and a Prime Tail Execution Node Γ=[r, w0 w1 . . . wn, t] are called a prime directed edge if and only if all of the following hold:
A prime directed edge is denoted as Δ⇒Γ or [q, v0 v1 . . . vn, s]⇒[r, w0 w1 . . . wn, t]. The number of computational steps N is denoted as |Δ⇒Γ|.
Definition 4.2 Prime Input Instruction Sequence
Input instructions were introduced in 3.4. If (q1, a1)→ . . . →(qn, an) is an execution sequence of input instructions for (Q, A, η), then (q1, a1)→ . . . →(qn, an) is a prime input instruction sequence if qn is visited twice and all other states in the sequence are visited once. In other words, a prime input instruction sequence contains exactly one prime state cycle.
Definition 4.23 Prime Directed Edge Complexity
Definition 4.24 Overlap Matching of a Node to a Prime Head Node
Execution node Π overlap matches prime head node Δ if and only if the following hold.
Lemma 4.25 Overlap Matching Prime Head Nodes are Equal
If Δj=[q, P, u] and Δk=[q, V, s] are prime head nodes and they overlap match, then they are equal. (Distinct edges have prime head nodes that do not overlap match.)
Proof.
0≤u≤|Δj| and 0≤s≤|Δk|. Let (I, m)=(P, u)∩(V, s) where m=min{s, u}. Suppose the same machine begins execution on memory I, pointing to memory address m, with the machine in state q. If s=u and |Δj|=|Δk|, then the proof is complete.
Otherwise, s≠u or |Δj|≠|Δk| or both. Δj has window of execution [0, |Δj|−1] and Δk has window of execution [0, |Δk|−1]. Let the ith step be the first time that the machine head exits the finite tape I. The machine executes the same machine instructions with respect to Δj and Δk up to the ith step, so on the ith step Δj and Δk must execute the same machine instruction. Since the machine head exits memory I at the ith computational step, either pattern P stored in memory or pattern V stored in memory is exited at the ith computational step. This contradicts either that [0, |Δj|−1] is the window of execution for Δj or that [0, |Δk|−1] is the window of execution for Δk.
The Edge Node Substitution Operator is a method of machine computation.
Definition 4.26 Edge Node Substitution Operator Π⊕(Δ⇒Γ)
Let Δ⇒Γ be a prime directed edge with prime head node Δ=[q, v0 v1 . . . vn, s] and tail node Γ=[r, w0 w1 . . . wn, t]. If execution node Π=[q, p0 p1 . . . pm, u] overlap matches Δ, then the edge pattern substitution operator from 3.2 induces a new execution node Π⊕(Δ⇒Γ)=[r, (P, u)⊕[(V, s)⇒(W, t)], k] with head k=u+t−s if u>s and head k=t if u≤s such that 0≤s, t≤n and 0≤u≤m and patterns V=v0 v1 . . . vn and W=w0 w1 . . . wn and P=p0 p1 . . . pm.
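A sketch of this operator follows directly from the two sketches above (the ExecutionNode class and the edge_pattern_substitution function); the head rule k=u+t−s for u>s and k=t for u≤s is exactly the head returned by the pattern substitution sketch.

```python
# A sketch of the edge node substitution operator Π ⊕ (Δ ⇒ Γ) (Definition 4.26),
# reusing the ExecutionNode and edge_pattern_substitution sketches above.

def edge_node_substitution(node: "ExecutionNode",
                           head_node: "ExecutionNode",
                           tail_node: "ExecutionNode") -> "ExecutionNode":
    """Return Π ⊕ (Δ ⇒ Γ), assuming Π overlap matches the prime head node Δ."""
    assert node.state == head_node.state              # Π and Δ share the state q
    E, k = edge_pattern_substitution(node.pattern, node.head,
                                     head_node.pattern, head_node.head,
                                     tail_node.pattern, tail_node.head)
    return ExecutionNode(tail_node.state, E, k)       # [r, (P, u) ⊕ [(V, s) ⇒ (W, t)], k]
```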
Definition 4.27 Prime Directed Edge Sequence and Link Matching
A prime directed edge sequence is defined inductively. Each element is a coordinate pair whose first coordinate is a prime directed edge and whose second coordinate is an execution node. Each element is expressed as (Δk⇒Γk, Πk).
The first computational element of a prime directed edge sequence is (Δ1⇒Γ1, Π1) where Π1=Γ1, and Δ1⇒Γ1 is some prime directed edge in P. In some embodiments, (Δ1⇒Γ1, Π1) is executed with one or more register machine instructions. For simplicity in this definition, the memory indices (addresses) in P are relabeled if necessary so the first element has memory indices (addresses) equal to 1. If Π1 overlap matches some non-halting prime head node Δ2, the second element of the prime directed edge sequence is (Δ2⇒Γ2, Π2) where Π2=Π1⊕(Δ2⇒Γ2). This machine computation is called a link match step.
Otherwise, if Π1 overlap matches a halting node, then the prime directed edge sequence terminates. This is expressed as [(Δ1⇒Γ1, Γ1), HALT]. In this case it is called a halting match step.
If the first k−1 steps are link match steps, then the prime directed edge sequence is denoted as [(Δ1⇒Γ1, Π1), (Δ2⇒Γ2, Π2), . . . , (Δk⇒Γk, Πk)], where Πj overlap matches prime head node Δj+1 and Πj+1=Πj⊕(Δj+1⇒Γj+1) for each j satisfying 1≤j<k.
Notation 4.28 Edge Sequence Notation E([p1, p2, . . . , pk], k)
To avoid subscripts of a subscript, pj and the subscript p(j) represent the same number. As defined in 4.27, P={Δ1⇒Γ1, . . . , Δk⇒Γk, . . . , ΔN⇒ΓN} denotes the set of all prime directed edges. E([p1], 1) denotes the edge sequence [(Δp(1)⇒Γp(1), Πp(1))] of length 1 where Πp(1)=Γp(1) and 1≤p1≤|P|. Next, E([p1, p2], 2) denotes the edge sequence [(Δp(1)⇒Γp(1), Πp(1)), (Δp(2)⇒Γp(2), Πp(2))] of length 2 where Πp(2)=Πp(1)⊕(Δp(2)⇒Γp(2)) and 1≤p1, p2≤|P|.
In general, E([p1, p2, . . . , pk], k) denotes the edge sequence of length k which is explicitly [(Δp(1)⇒Γp(1), Πp(1)), (Δp(2)⇒Γp(2), Πp(2)), . . . , (Δp(k)⇒Γp(k), Πp(k))] where Πp(j+1)=Πp(j)⊕(Δp(j+1)⇒Γp(j+1)) for each j satisfying 1≤j≤k−1 and 1≤p(j)≤|P|.
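The link matching construction of 4.27 and 4.28 can be sketched as a simple loop. In the sketch below, node_overlap_matches stands in for the overlap matching conditions of Definition 4.24 (not reproduced here), and prime_edges and halting_nodes are assumed to come from the prime directed edge search method, so this is only an outline of the control flow.

```python
# A sketch of building a prime directed edge sequence by link matching
# (Definition 4.27 / Notation 4.28). node_overlap_matches is a stand-in for
# the conditions of Definition 4.24; prime_edges is a list of (Δ, Γ) pairs of
# ExecutionNode objects, and halting_nodes is a list of halting head nodes.

def build_edge_sequence(first_edge, prime_edges, halting_nodes,
                        node_overlap_matches, max_steps):
    head1, tail1 = first_edge
    sequence = [(first_edge, tail1)]                  # (Δ1 ⇒ Γ1, Π1) with Π1 = Γ1
    node = tail1
    for _ in range(max_steps - 1):
        if any(node_overlap_matches(node, h) for h in halting_nodes):
            sequence.append("HALT")                   # halting match step
            break
        match = next((edge for edge in prime_edges
                      if node_overlap_matches(node, edge[0])), None)
        if match is None:
            break
        node = edge_node_substitution(node, match[0], match[1])   # Π_{j+1} = Π_j ⊕ (Δ_{j+1} ⇒ Γ_{j+1})
        sequence.append((match, node))                # link match step
    return sequence
```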
Definition 4.29 Edge Sequence Contains a Consecutive Repeating State Cycle
Lemma 4.19 implies that an edge sequence corresponds to a composition of prime input instructions. The expression "an edge sequence contains a consecutive repeating state cycle" is used if the corresponding sequence of prime input instructions contains a consecutive repeating state cycle.
Theorem 4.30 Any consecutive repeating state cycle of (Q, A, η) is contained in an edge sequence of (Q, A, η).
Proof.
This follows immediately from definition 4.29 and lemmas 4.15 and 4.19.
Remark 4.31 Period of an Immortal Periodic Point Contained in Edge Sequence
If E([p1, p2, . . . , pr], r) contains a consecutive repeating state cycle, then the corresponding immortal periodic point has period less than or equal to the sum of the computational step counts |Δp(k)⇒Γp(k)| over the edges in the sequence.
Proof.
This follows from lemma 3.11, which shows that a consecutive repeating state cycle induces an immortal periodic point. The length of the state cycle equals the period of the periodic point. Further, the number of input instructions, which equals the number of computational steps, is |Δp(k)⇒Γp(k)| for directed edge Δp(k)⇒Γp(k).
Method 4.32 Finding a Consecutive Repeating State Cycle in an Edge Sequence
Method 4.34 Prime Directed Edge Search Method
Remark 4.35 Prime Directed Edge Search Method Finds all Prime Directed Edges
Method 4.34 finds all prime directed edges of (Q, A, η) and all halting nodes.
Proof.
Let Δ⇒Γ be a prime directed edge of (Q, A, η). Then Δ⇒Γ has a head node Δ=[r, v0 v1 . . . vn, s], for some state r in Q and some tape pattern v0 v1 . . . vn that lies in A^(n+1), such that n≤|Q| and 0≤s≤n. In the outer loop of 4.34, when r is selected from Q, and in the inner loop, when the tape pattern a−|Q| . . . a−2 a−1 a0 a1 a2 . . . a|Q| is selected from A^(2|Q|+1) such that
then the machine execution in 4.34 will construct prime directed edge Δ⇒Γ. When the head node is a halting node, the machine execution must halt in at most |Q| steps. Otherwise, it would visit a non-halting state twice and thus be a non-halting head node. The rest of the argument for this halting node is the same as for the non-halting head node.
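A rough sketch of the loop structure described in this proof may be useful. It assumes a step function eta(q, a) → (new state, written symbol, move) with move in {−1, +1} and a set of halting states, starts the head on the middle symbol a0 of the selected window, and omits the trimming of head and tail patterns to the window of execution and the deduplication of edges that the full method performs.

```python
# A rough sketch of the prime directed edge search loop (proof of Remark 4.35),
# assuming eta(q, a) -> (new_state, new_symbol, move) with move in {-1, +1}.
# Trimming to the window of execution and deduplication are omitted.

from itertools import product

def prime_directed_edge_search(Q, A, eta, halting_states):
    edges, halting_nodes = [], []
    width = 2 * len(Q) + 1
    for r in Q:                                         # outer loop over states
        for window in product(A, repeat=width):          # inner loop over tape patterns in A^(2|Q|+1)
            tape, pos, state = list(window), len(Q), r    # head starts on the middle symbol a0
            visited = {r}
            for _ in range(len(Q) + 1):
                if state in halting_states:               # halting head node: halts within |Q| steps
                    halting_nodes.append((r, "".join(window), len(Q)))
                    break
                new_state, new_symbol, move = eta(state, tape[pos])
                tape[pos], pos, state = new_symbol, pos + move, new_state
                if state in visited:                      # a state is visited twice: prime head node found
                    edges.append(((r, "".join(window), len(Q)),
                                  (state, "".join(tape), pos)))
                    break
                visited.add(state)
    return edges, halting_nodes
```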
Method 4.36 Immortal Periodic Point Search Method
Remark 4.37 |Φ(k)| is finite and |Φ(k)|≤|P|^k
Proof.
|Φ(1)|=|P|. Analyzing the nested loops in method 4.36: for each edge sequence E([p1, p2, . . . , pk], k) chosen from Φ(k), at most |P| new edge sequences are put in Φ(k+1). Thus |Φ(k+1)|≤|P||Φ(k)|, so |Φ(k)|≤|P|^k.
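The expansion of Φ(k) can be sketched as a breadth-first loop. In the sketch below, extend_sequence stands in for one link match step and contains_consecutive_repeating_state_cycle for the test of Definition 4.29; both are assumed interfaces rather than the text's own procedures.

```python
# A sketch of the immortal periodic point search (Method 4.36, as summarized
# in Remark 4.37). extend_sequence(seq, edge) is assumed to return the extended
# edge sequence when the link match succeeds, and None otherwise.

def immortal_periodic_point_search(prime_edges, extend_sequence,
                                   contains_consecutive_repeating_state_cycle,
                                   max_depth):
    phi = [[edge] for edge in prime_edges]               # Φ(1): all edge sequences of length 1
    for _ in range(max_depth):
        for seq in phi:
            if contains_consecutive_repeating_state_cycle(seq):
                return seq                                # induces an immortal periodic point (Lemma 3.11)
        next_phi = []
        for seq in phi:                                   # each sequence extends by at most |P| link matches,
            for edge in prime_edges:                      # so |Φ(k+1)| <= |P| * |Φ(k)|
                extended = extend_sequence(seq, edge)
                if extended is not None:
                    next_phi.append(extended)
        phi = next_phi
    return None
```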
Each embodiment disclosed herein may be used or otherwise combined with any of the other embodiments disclosed. Any element of any embodiment may be used in any embodiment.
Although the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the true spirit and scope of the invention. In addition, modifications may be made without departing from the essential teachings of the invention.
This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/462,260, entitled “Navajo Active Element Machine” filed Jan. 31, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/465,084, entitled “Unhackable Active Element Machine” filed Mar. 14, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/571,822, entitled “Unhackable Active Element Machine Using Randomness” filed Jul. 6, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/572,607, entitled “Unhackable Active Element Machine Unpredictable Firing Interpretations” filed Jul. 18, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/572,996, entitled “Unhackable Active Element Machine with Random Firing Interpretations and Level Sets” filed Jul. 26, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/626,703, entitled “Unhackable Active Element Machine with Turing Undecidable Firing Interpretations” filed Sep. 30, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/628,332, entitled “Unhackable Active Element Machine with Turing Incomputable Firing Interpretations” filed Oct. 28, 2011, which is incorporated herein by reference. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 61/628,826, entitled “Unhackable Active Element Machine with Turing Incomputable Computation” filed Nov. 7, 2011, which is incorporated herein by reference. This application is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 13/373,948, entitled “Secure Active Element machine”, filed Dec. 6, 2011, which is incorporated herein by reference. This application is a continuation-in-part of European application EP 12 742 528.8, entitled “SECURE ACTIVE ELEMENT MACHINE”, filed Jan. 31, 2012, which is incorporated herein by reference. This application is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 14/643,774, entitled “Non-Deterministic Secure Active Element Machine”, filed Mar. 10, 2015. This application is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 16/365,694, entitled “Secure Non-Deterministic Self-Modifiable Computing Machine”, filed Mar. 27, 2019. This application claims priority benefit of U.S. Provisional Patent Application Ser. No. 62/682,979, entitled “Quantum Random Self-Modifiable Computer”, filed Jun. 10, 2018.
Number | Name | Date | Kind |
---|---|---|---|
6626960 | Gillam | Sep 2003 | B1 |
6742164 | Gillam | May 2004 | B1 |
20010031050 | Domstedt | Oct 2001 | A1 |
20020046147 | Livesay | Apr 2002 | A1 |
20030041230 | Rappoport | Feb 2003 | A1 |
20040215708 | Higashi | Oct 2004 | A1 |
20050071720 | Dattaram Kadkade | Mar 2005 | A1 |
20100257544 | Kleban | Oct 2010 | A1 |
20110066833 | Fiske | Mar 2011 | A1 |
20120096434 | Rama | Apr 2012 | A1 |
20120198560 | Fiske | Aug 2012 | A1 |
20140137188 | Bartholomay | May 2014 | A1 |
20150169893 | Desai | Jun 2015 | A1 |
20150261541 | Fiske | Sep 2015 | A1 |
20160004861 | Momot | Jan 2016 | A1 |
20160062735 | Wilber | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
WO-2016094840 | Jun 2016 | WO |
Entry |
---|
Jennewein et al., “Quantum Cryptography with Entangled Photons”, Physical Review Letters vol. 84, No. 20, pp. 4729-4732, May 15, 2000 (Year: 2000). |
Number | Date | Country
---|---|---
20220019930 A1 | Jan 2022 | US

Number | Date | Country
---|---|---
62682979 | Jun 2018 | US
61462260 | Jan 2011 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 16365694 | Mar 2019 | US
Child | 17402520 | | US
Parent | 14643774 | Mar 2015 | US
Child | 16365694 | | US
Parent | 13373948 | Dec 2011 | US
Child | 14643774 | | US