Claims
- 1. A method for computation using a neural network architecture comprising the steps of:
using a plurality of layers, each layer including a plurality of computational nodes, an input processing layer, a central processing layer, and an output processing layer; using at least one feedforward channel for inputs; using full lateral and feedback connections within the processing layers; using an output channel for outputs; using re-entrant feedback from the output channel to at least one of the processing layers; using local update processes to update each of the plurality of computational nodes; and using re-entrant feedback from the output channel to perform minimalization for general computation.
- 2. The method of claim 1, wherein the output channel uses feedforward connections between the output channel and at least one of the processing layers.
- 3. The method of claim 1, wherein the output channel uses bi-directional connections between the output channel and at least one of the processing layers.
- 4. The method of claim 1, wherein the re-entrant feedback is uni-directional.
- 5. The method of claim 1, wherein the re-entrant feedback is bi-directional.
- 6. The method of claim 1, wherein the local update processes are any one of: random processes, non-stationary random processes, Polya processes and Bose-Einstein processes.
- 7. The method of claim 1, wherein the local update processes effect a phase change.
- 8. The method of claim 1, wherein the local update processes are equivalent to Bose-Einstein condensation.
- 9. The method of claim 1, wherein the local update processes are equivalent to taking a quantum measurement.
- 10. The method of claim 1, wherein the local update processes are derived from nearest-neighbor normalization.
- 11. The method of claim 1, wherein the local update processes create a Delaunay tessellation from one layer to a next layer.
- 12. The method of claim 1, wherein the local update processes include inhibition.
- 13. The method of claim 1, wherein the local update processes cause fractal percolation.
- 14. The method of claim 1, wherein the minimalization recalibrates the computational nodes in at least one processing layer.
- 15. The method of claim 1, wherein the minimalization is triggered by fractal percolation.
- 16. The method of claim 1, wherein the minimalization is a quantum measurement.
- 17. The method of claim 1, wherein the plurality of layers comprise one module in an architecture that has a plurality of modules.
- 18. The method of claim 17, wherein one of the plurality of modules is an attention module with at least two layers connected by bi-directional connections, and with lateral connectivity to at least two processing layers in other modules.
- 19. The method of claim 17, wherein one of the plurality of modules is a digital memory.
- 20. The method of claim 17, wherein one of the plurality of modules is a dynamic memory having the plurality of layers including the plurality of computational nodes,
the computational nodes using feedforward inputs from an output channel of the remainder of the plurality of modules and using feedforward connectivity from a first to a last of the plurality of layers, and the computational nodes using at least one feedforward re-entrant connection to the first of the plurality of layers and using bi-directional connectivity between the one of the plurality of modules and at least one processing layer from the remainder of modules.
- 21. The method of claim 17, wherein the plurality of modules have standardized dimensions, number of layers and number of computational nodes, wherein the computational nodes use connectivity from the input processing layer in each module to neighboring modules, and lateral connectivity between their corresponding processing layers to permit leaky processing.
- 22. The method of claim 17, wherein the plurality of modules provide converging and diverging connections from the output layer of one module to at least one processing layer of other modules.
- 23. The method of claim 17, wherein each of the plurality of modules includes a logic gate.
- 24. The method of claim 17, wherein at least one of the plurality of modules includes a NAND gate.
- 25. The method of claim 17, wherein at least one of the plurality of modules includes a CNOT gate.
- 26. The method of claim 1, further including the steps of:
copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and writing the values to another neural network.
- 27. An apparatus for implicit computation comprising:
neural network architecture means having a plurality of layer means, each layer means including a plurality of adaptive computational node means, the plurality of layer means further including:
processing layer means including:
input processing layer means, central processing layer means, and output processing layer means; feedforward input channel means; full lateral and feedback connection means within the processing layer means; output channel means; re-entrant feedback means from the output channel means to the processing layer means; means for updating each of the plurality of adaptive computational node means using local update processes; and means for using re-entrant feedback from the output channel means to perform minimalization for general computation.
- 28. The apparatus of claim 27, wherein the output channel means uses feedforward connection means between the output channel means and the processing layer means.
- 29. The apparatus of claim 27, wherein the output channel means uses bi-directional connection means between the output channel means and the processing layer means.
- 30. The apparatus of claim 27, wherein the re-entrant feedback means is uni-directional.
- 31. The apparatus of claim 27, wherein the re-entrant feedback means is bi-directional.
- 32. The apparatus of claim 27, wherein the local update processes are any one of: random processes between the plurality of adaptive computational node means, non-stationary random processes between the plurality of adaptive computational node means, Polya processes between the plurality of adaptive computational node means, and Bose-Einstein processes between the plurality of adaptive computational node means.
- 33. The apparatus of claim 27, wherein the local update processes are Bose-Einstein processes and the plurality of adaptive computational node means are lasers.
- 34. The apparatus of claim 27, wherein the local update processes are a phase change in the plurality of adaptive computational node means.
- 35. The apparatus of claim 27, wherein the local update processes are a Bose-Einstein condensation in the plurality of adaptive computational node means.
- 36. The apparatus of claim 27, wherein the local update processes are quantum measurements performed on the plurality of adaptive computational node means.
- 37. The apparatus of claim 27, wherein the local update processes perform nearest-neighbor normalization among the plurality of adaptive computational node means.
- 38. The apparatus of claim 27, wherein the local update processes create a Delaunay tessellation from one layer means to a next layer means.
- 39. The apparatus of claim 27, wherein the local update processes include inhibition between at least one of adaptive computational node means and at least one other of the plurality of adaptive computational node means.
- 40. The apparatus of claim 27, wherein the local update processes cause fractal percolation among the plurality of adaptive computational node means.
- 41. The apparatus of claim 27, wherein the minimalization recalibrates the adaptive computational node means in the processing layer means.
- 42. The apparatus of claim 27, wherein the minimalization is triggered by fractal percolation across the plurality of adaptive computational node means.
- 43. The apparatus of claim 27, wherein the minimalization is a quantum measurement performed on the plurality of adaptive computational node means.
- 44. The apparatus of claim 27, wherein the plurality of layer means is one module means in an architecture with a plurality of module means.
- 45. The apparatus of claim 44, wherein one of the plurality of module means is an attention module means that includes:
at least two layer means connected by bi-directional connection means; and lateral connectivity means to at least two processing layer means belonging to other module means.
- 46. The apparatus of claim 44, wherein one of the plurality of module means is a standard digital memory means.
- 47. The apparatus of claim 44, wherein one of the plurality of module means is a dynamic memory means that includes:
a plurality of layer means including a plurality of adaptive computational node means; feedforward input means from the output channel means of other module means; feedforward connectivity means from a first to a last layer means; and feedforward re-entrant connection means to the first layer means using bi-directional connectivity means between the module means and the processing layer means from the other module means.
- 48. The apparatus of claim 44, wherein the module means have standardized dimensions, a standardized number of layer means, and a standardized number of adaptive computational node means, and wherein each module means includes:
connectivity means from the input processing layer means of each module means to a neighboring module means; and lateral connectivity means between a corresponding processing layer means to permit leaky processing.
- 49. The apparatus of claim 44, wherein the plurality of module means provide converging and diverging connection means from the output layer means of each module means to processing layer means of other module means.
- 50. The apparatus of claim 44, wherein one module means includes at least one of a logic gate, a NAND gate, and a CNOT gate.
- 51. A method for computation using a neural network architecture comprising the steps of:
using at least one input from an environment; using a plurality of locally connected computation nodes for fractal percolation; using a minimalization step for computation; and outputting at least one output.
- 52. The method of claim 51, further including the steps of:
copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and writing the values to another neural network.
- 53. The method of claim 51, wherein the plurality of locally connected computation nodes has a state space of dimension of at least two.
- 54. The method of claim 51, wherein the plurality of locally connected computation nodes has a Hilbert state space.
- 55. The method of claim 51, wherein connections among the plurality of locally connected computation nodes extend beyond nearest neighbors.
- 56. The method of claim 51, wherein connections among the plurality of locally connected computation nodes are feedforward connections, leading to “first-pass” percolation.
- 57. The method of claim 51, wherein at least one connection among the plurality of locally connected computation nodes is an inhibitory connection.
- 58. The method of claim 51, wherein the minimalization step is performed by re-entrant connections to the plurality of locally connected computation nodes.
- 59. The method of claim 51, wherein the fractal percolation across the plurality of locally connected computational nodes occurs by one of a random process, a Poisson process, a non-stationary random process, a Polya process, and a Bose-Einstein statistical process.
- 60. The method of claim 51, wherein the fractal percolation across the plurality of locally connected computation nodes occurs by nearest-neighbor renormalization.
- 61. The method of claim 51, wherein the minimalization step is performed by re-scaled weights among the plurality of locally connected computation nodes.
- 62. The method of claim 51, wherein the minimalization step is performed by a quantum measurement.
- 63. The method of claim 51, wherein the plurality of locally connected computational nodes are one module in an architecture with a plurality of modules.
- 64. The method of claim 63 wherein one of the plurality of modules is an attention module using bi-directional connections with at least one other module.
- 65. The method of claim 63, wherein one of the plurality of modules is a standard digital memory.
- 66. The method of claim 63, wherein one of the plurality of modules is a dynamic memory including:
a plurality of locally connected computational nodes, with feedforward inputs from another module, feedforward connectivity among the plurality of locally connected computational nodes using at least one feedforward re-entrant connection to the first of the plurality of locally connected computational nodes; and bi-directional connectivity between the modules.
- 67. The method of claim 63, wherein the plurality of modules have standardized dimensions and numbers of the plurality of locally connected computational nodes, wherein random connectivity among neighboring modules permits leaky processing.
- 68. The method of claim 63, wherein the plurality of modules provide converging and diverging connections from the at least one output of one module to another module.
- 69. The method of claim 63, wherein each module includes at least one of a logic gate, a NAND gate, and a CNOT gate.
- 70. An apparatus for implicit computation comprising:
neural network architecture means including:
input means from an environment; output means; and a plurality of locally connected computation node means for fractal percolation, wherein a minimalization step is used for computation.
- 71. The apparatus of claim 70, wherein the plurality of locally connected computation node means has a state space of dimension of at least two.
- 72. The apparatus of claim 70, wherein the plurality of locally connected computation node means has a Hilbert state space.
- 73. The apparatus of claim 70, wherein the plurality of locally connected computation node means include Rydberg atoms.
- 74. The apparatus of claim 70, wherein the plurality of locally connected computation node means include molecular magnets.
- 75. The apparatus of claim 70, wherein the local connection means among the plurality of locally connected computation node means extend beyond nearest neighbors.
- 76. The apparatus of claim 70, wherein the connection means connecting the plurality of locally connected computation node means are all feedforward, leading to “first-pass” percolation.
- 77. The apparatus of claim 70, wherein at least one connection means connecting the plurality of locally connected computation node means is an inhibitory connection means.
- 78. The apparatus of claim 70, wherein the fractal percolation across the plurality of locally connected computation node means occurs by any one of a Polya process, Bose-Einstein condensation, a Poisson process, a non-stationary random process, nearest-neighbor renormalization, and a random process.
- 79. The apparatus of claim 70, wherein the fractal percolation across the plurality of locally connected computation node means occurs across an Ising lattice.
- 80. The apparatus of claim 70, wherein the plurality of locally connected computation node means includes at least one laser.
- 81. The apparatus of claim 70, wherein the minimalization step uses re-scaled weights among the plurality of locally connected computation node means.
- 82. The apparatus of claim 70, wherein the minimalization step is performed by one of electron spin resonance pulses, quantum measurement, re-entrant connection means to the plurality of locally connected computation node means, and coherent radiation.
- 83. The apparatus of claim 70, wherein the plurality of locally connected computational node means is one module means in an architecture with a plurality of module means.
- 84. The apparatus of claim 83, wherein one of the plurality of module means is an attention module means using bi-directional connection means to connect with another module means.
- 85. The apparatus of claim 83, wherein one of the plurality of module means is a standard digital memory means.
- 86. The apparatus of claim 83, wherein one of the plurality of module means is a dynamic memory means using a plurality of locally connected computational node means, and further including:
feedforward input means from the output means of a remainder of the module means; feedforward connectivity means among the plurality of locally connected computational node means; at least one feedforward re-entrant connection means to a first of the plurality of locally connected computational node means; and bi-directional connectivity means between the module means and both the plurality of locally connected computational node means and the remainder of the plurality of module means.
- 87. The apparatus of claim 83, wherein the plurality of module means have standardized dimensions and standardized numbers of the plurality of locally connected computational node means,
the computational node means using random connectivity means to connect to neighboring members of the plurality of module means to permit leaky processing.
- 88. The apparatus of claim 83, wherein the plurality of module means provide converging and diverging connection means from one output means of the plurality of module means to at least one other module means.
- 89. The apparatus of claim 83, wherein each module means includes one of a logic gate, a NAND gate, and a CNOT gate.
- 90. The apparatus of claim 70, further comprising:
storage means for copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and transfer means to write the values to another neural network means.
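Claims 51–62 describe computation by fractal percolation across locally connected nodes: local update processes (e.g., a Polya process) drive node activation, and a minimalization step that re-scales weights resolves the computation. A minimal sketch of that loop, assuming an 8×8 site lattice, a Polya-urn firing rule, and weight normalization as the minimalization step (all illustrative choices, not the claimed implementations):

```python
import random

def polya_update(successes, failures, rng):
    """One Polya-urn draw: each outcome reinforces itself, giving a
    non-stationary, self-amplifying local update process."""
    p = successes / (successes + failures)
    if rng.random() < p:
        return successes + 1, failures, True   # node fires, reinforced
    return successes, failures + 1, False

def percolates(active, size):
    """True if fired sites form a top-to-bottom path on a size x size grid
    (4-neighbor connectivity), i.e. activity has percolated."""
    frontier = [(0, c) for c in range(size) if (0, c) in active]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == size - 1:
            return True
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in active and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def run(size=8, steps=2000, seed=0):
    rng = random.Random(seed)
    # Each lattice site keeps its own urn state (successes, failures).
    urns = {(r, c): (1, 1) for r in range(size) for c in range(size)}
    weights = {site: 1.0 for site in urns}
    active = set()
    for _ in range(steps):
        site = rng.choice(list(urns))
        s, f, fired = polya_update(*urns[site], rng)
        urns[site] = (s, f)
        if fired:
            active.add(site)
        if percolates(active, size):
            # "Minimalization": re-scale the weights and reset activity,
            # recalibrating the nodes after a percolation event.
            total = sum(weights.values())
            weights = {k: v / total for k, v in weights.items()}
            active.clear()
    return weights

w = run()
print(len(w), "node weights; total after minimalization:", sum(w.values()))
```

Here percolation is detected as a connected top-to-bottom path of fired sites, and the Polya urn's self-reinforcement stands in for the claims' non-stationary local update processes; the inhibitory connections, quantum measurements, and modular structure of the dependent claims are omitted.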
Parent Case Info
[0001] This Application is a Continuation-In-Part of U.S. patent application Ser. No. 09/240,052, filed Jan. 29, 1999, which is a Continuation-In-Part of U.S. patent application Ser. No. 08/713,470, filed Sep. 13, 1996, now issued as U.S. Pat. No. 6,009,418, which claims the benefit of U.S. Provisional Patent Application Serial No. 60/016,707 filed May 2, 1996. The entire disclosures of these applications, including references incorporated therein, are incorporated herein by reference.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60016707 | May 1996 | US |
Continuation in Parts (2)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09240052 | Jan 1999 | US |
| Child | 09870946 | Jun 2001 | US |
| Parent | 08713470 | Sep 1996 | US |
| Child | 09240052 | Jan 1999 | US |