SELECTIVE DELAY OF DATA RECEIPT IN STOCHASTIC COMPUTATION

Information

  • Publication Number
    20120158986
  • Date Filed
    February 22, 2011
  • Date Published
    June 21, 2012
Abstract
Circuitry for stochastic computation includes processing nodes, including a first processing node and a second processing node, each configured to process an outcome stream having a plurality of outcomes, each outcome being in one of a plurality of states, wherein an outcome from said outcome stream is in a particular state with a particular probability; communication links configured to transmit outcome streams between pairs of said processing nodes; and a delay module on each of said communication links, said delay module configured to delay outcome streams traversing said communication link by an assigned delay; wherein said first and second processing nodes are connected by a plurality of data paths, at least one of which comprises a plurality of communication links, each of said data paths causing an aggregate delay to an outcome stream traversing said data path; wherein no two aggregate delays impose the same delay on an outcome stream.
Description
FIELD OF DISCLOSURE

This disclosure relates to stochastic computers, and in particular, to directing message traffic within stochastic computers.


BACKGROUND

In a stochastic computer, values are represented as a stream of outcomes of a Bernoulli process. In such a computer, each value is represented by the probability of a particular state in the Bernoulli process. For example, a value of “0.7” would be represented by a stream of outcomes in which the probability that a particular outcome is in a first state is 0.7 and the probability that a particular outcome is in a second state is 0.3.
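
By way of illustration only (this fragment is not part of the disclosure), such a stream for the value 0.7 could be generated as follows; the function name `outcome_stream`, the stream length, and the use of Python's `random` module are assumptions made for the sketch.

```python
import random

def outcome_stream(value, length, seed=0):
    """Return a list of outcomes; each outcome is in the first state
    (encoded here as 1) with probability `value`, else in the second state (0)."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(length)]

# A stream representing the value 0.7: roughly 70% of the outcomes are 1.
stream = outcome_stream(0.7, 1000)
```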


Accordingly, in a stochastic computer, one can estimate the value that is being represented by observing the outcome stream that represents that value. The longer one observes the outcome stream, the more accurate the estimate will be.
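
As a minimal sketch of this estimation step, reusing the hypothetical `outcome_stream` helper from the previous fragment, the represented value is simply the observed fraction of outcomes in the first state, and the estimate tightens as the observation window grows:

```python
def estimate(stream):
    """Estimate the represented value as the fraction of outcomes in the first state."""
    return sum(stream) / len(stream)

# Longer observations give estimates closer to the true value of 0.7.
for n in (10, 100, 1_000, 10_000):
    print(n, estimate(outcome_stream(0.7, n)))
```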


The use of outcome streams to represent values offers numerous advantages. For example, to multiply two numbers, a conventional computer would need to carry out a fairly complex procedure. In contrast, to multiply the same two numbers in a stochastic computer, one need only use an “AND” gate to “and” together corresponding bits in the two outcome streams as they arrive.
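
A sketch of this multiplication, again using the hypothetical helpers above and assuming the two input streams are independent:

```python
def stochastic_multiply(stream_a, stream_b):
    """AND together corresponding outcomes of two independent streams;
    the result represents the product of the two represented values."""
    return [a & b for a, b in zip(stream_a, stream_b)]

a = outcome_stream(0.7, 10_000, seed=1)
b = outcome_stream(0.5, 10_000, seed=2)
product = stochastic_multiply(a, b)   # estimate(product) is approximately 0.35
```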


In a typical stochastic computer, the outcomes of the Bernoulli process are generated by a random number generator. A difficulty that arises, however, is that the numbers generated by a practical random number generator are only pseudo-random. These pseudo-random numbers are random enough for many purposes. However, the lack of true randomness becomes apparent when such random number generators are used in stochastic computers.


For example, since the random number generators can only generate pseudo-random numbers, the string of random numbers will eventually repeat itself. This repetition can cause errors in calculations that rely on the randomness of two incoming outcome streams. In other cases, there may be correlation between what are intended to be two independent outcome streams.


To overcome such difficulties, many stochastic computers use additional random number generators to re-randomize incoming outcome streams. These re-randomizers are analogous to repeaters in communication circuits, except that while repeaters are intended to boost a signal to avoid having it be lost in noise, the re-randomizers are intended to boost the noise to drown out any unwanted signal.


A difficulty that arises with the proliferation of re-randomizers is that each one consumes both additional power and additional floor-space. In a stochastic computer in which messages are being passed simultaneously between hundreds, and possibly thousands of node pairs, the additional power and floor-space required by these re-randomizers becomes considerable.


SUMMARY

In one aspect, the invention features circuitry for stochastic computation. Such circuitry includes a plurality of processing nodes, including a first processing node and a second processing node, each of the processing nodes configured to process an outcome stream having a plurality of outcomes, each of the outcomes in the outcome stream being in one of a plurality of states, wherein an outcome from the outcome stream is in a particular state with a particular probability; communication links configured to transmit outcome streams between pairs of the processing nodes; and a delay module on each of the communication links, the delay module configured to delay outcome streams traversing the communication link by an assigned delay; wherein the first and second processing nodes are connected by a plurality of data paths, at least one of which includes a plurality of communication links, each of the data paths causing an aggregate delay to an outcome stream traversing the data path; wherein no two aggregate delays impose the same delay on an outcome stream.


In some embodiments, at least one delay module has a randomly assigned delay.


In other embodiments, each communication link is assigned a color, each color is assigned a delay, and for all processing nodes, no two communication links to the processing node have the same color.


In yet other embodiments, the plurality of processing nodes and communication links define a sub-graph of a larger graph.


In other embodiments, the delay module is configured to delay an incoming outcome stream by an integer multiple of an interval between adjacent outcomes in the incoming outcome stream.


Among the embodiments are those in which the processing nodes are selected from the group consisting of function nodes and variable nodes, and wherein the communication links are configured such that no two function nodes are connected to each other by a communication link and no two variable nodes are connected to each other by a communication link.


In some embodiments, the processing nodes and the communication links define a bipartite graph.


In other embodiments, the processing nodes are configured to process an outcome stream derived from a Bernoulli process.


In another aspect, the invention features a method of sending an outcome stream between processing nodes in a stochastic computer. Such a method includes transmitting an outcome stream from a first node to a second node along a first communication path; transmitting an outcome stream from a first node to a second node along a second communication path; causing a first aggregate delay in the first communication path; and causing a second aggregate delay in the second communication path, the second aggregate delay being less than the first aggregate delay.


In some practices, causing a second aggregate delay includes causing the difference between the first and second aggregate delays to be an integer multiple of an interval between adjacent outcomes in the outcome stream.


In other practices, transmitting an outcome stream includes transmitting a stream of outcomes, wherein each outcome assumes a particular state with a particular probability.


In yet other practices, transmitting an outcome stream includes simulating a Bernoulli process to generate a stream of outcomes having a predefined probability.


In another aspect, the invention features an article of manufacture having encoded thereon software for executing a stochastic computer, the software including instructions that, when executed by a computer, cause the computer to: define a plurality of processing nodes, including a first processing node and a second processing node, each of the processing nodes configured to process an outcome stream having a plurality of outcomes, each of the outcomes in the outcome stream being in one of a plurality of states, wherein an outcome from the outcome stream is in a particular state with a particular probability; define communication links configured to transmit outcome streams between pairs of the processing nodes; and to assign a delay to each of the communication links for delaying outcome streams traversing the communication link; wherein the first and second processing nodes are connected by a plurality of data paths, at least one of which includes a plurality of communication links, each of the data paths causing an aggregate delay to an outcome stream traversing the data path; wherein no two aggregate delays impose the same delay on an outcome stream.





DESCRIPTION OF THE FIGURES


FIG. 1 is a cut-away schematic diagram of an integrated circuit;



FIG. 2 is a schematic diagram of representative circuitry from the integrated circuit of FIG. 1;



FIG. 3 shows the result of applying a delay to an outcome stream;



FIG. 4 is a graphical representation of the circuitry from FIG. 2; and



FIG. 5 is a bipartite graph showing a sub-graph with delay lines.





DETAILED DESCRIPTION


FIG. 1 shows a cutaway view of an integrated circuit 10 containing circuitry 11 for implementing a particular stochastic computer to which the methods described herein are tied. The integrated circuit 10 features a plurality of pins 12, including a grounding pin 14 connected to ground and a power pin 16 connected to a DC power source 18.


The illustrated circuitry 11, shown in more detail in FIG. 2, includes processing nodes 20 connected to one or more other processing nodes by either unidirectional or bidirectional communication links 22. Each such processing node 20 generates an output that depends on its inputs. The inputs and outputs are streams of outcomes of a Bernoulli process whose probability represents the value being encoded. A finite segment of such an outcome stream is referred to herein as a “message.”


Each communication link 22 includes a delay module 24 that delays the outcome stream traversing that communication link. The extent of the delay at each delay module 24 can be fixed at the time of manufacture. Or the extent of the delay can be programmable at run time.



FIG. 3 shows an exemplary incoming outcome stream 26 entering a delay module 24. The particular delay module 24 is configured to output an outcome stream 26′ that is the same as the incoming outcome stream 26, but delayed by an integer multiple of the interval, δ, between adjacent outcomes. In the illustrated example, the delay is 4δ.
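
A behavioral sketch of such a delay module (the `delay` function and the choice of a filler state are assumptions made for illustration; a hardware delay module would typically be a shift register):

```python
def delay(stream, k, filler=0):
    """Model a delay module: shift the outcome stream by k outcome intervals,
    i.e. a delay of k·δ, padding the first k slots with a filler state."""
    return [filler] * k + list(stream)

delayed = delay([1, 0, 1, 1, 0, 1, 1, 0], 4)   # the 4δ delay illustrated in FIG. 3
```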


For ease of analysis, the circuitry 11 shown in FIG. 2 is more conveniently represented as a circuit diagram, or graph 28, as shown in FIG. 4. In such a graph, edges 22 connect nodes 20 to each other. Each edge 22 has an associated delay 24. Between pairs of nodes 20, there exist multiple message paths, each of which can comprise multiple edges 22 connecting intermediary nodes 20. For each message path, there exists an aggregate delay obtained by adding together the delays 24 for each edge 22 on the message path. For example, in FIG. 4, the paths from node A to node H would include the single-edge path directly to node H (path AH), as well as paths ABFEGH, ABEGH, and ABCDEGH.
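
The following sketch mirrors FIG. 4 as a small undirected graph and computes the aggregate delay of each simple path from node A to node H; the specific edge delays are hypothetical values chosen for illustration and are not taken from the figure.

```python
# Hypothetical per-edge delays, in units of δ, for a graph with the FIG. 4 topology.
edge_delay = {
    ('A', 'H'): 1, ('A', 'B'): 2, ('B', 'F'): 1, ('F', 'E'): 3,
    ('B', 'E'): 5, ('B', 'C'): 1, ('C', 'D'): 2, ('D', 'E'): 4,
    ('E', 'G'): 1, ('G', 'H'): 2,
}

def neighbors(node):
    """Nodes joined to `node` by an edge (edges are treated as undirected)."""
    return [v if u == node else u for (u, v) in edge_delay if node in (u, v)]

def simple_paths(src, dst, path=None):
    """Enumerate every loop-free path from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in neighbors(src):
        if nxt not in path:
            yield from simple_paths(nxt, dst, path)

def aggregate_delay(path):
    """Sum of the edge delays along a path."""
    return sum(edge_delay.get((u, v), edge_delay.get((v, u)))
               for u, v in zip(path, path[1:]))

for p in simple_paths('A', 'H'):
    print(''.join(p), aggregate_delay(p))   # prints AH, ABFEGH, ABEGH, ABCDEGH
```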


The topology of the graph associated with a particular stochastic computer depends in part on the application of the stochastic computer. For example, when the stochastic computer is intended for decoding, the graph is a bipartite graph 30 such as that shown in FIG. 5. In the bipartite graph 30 shown in FIG. 5, the nodes 20 are either function nodes 32 or variable nodes 34. The variable nodes 34 hold values that are intended to converge to correct values; the function nodes 32 carry out functions to modify the values held in the variable nodes 34 in such a way as to bring those values progressively closer to the correct values. In a graph used for decoding, the function nodes 32 are often XOR nodes.
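
Purely as an illustrative sketch of what an XOR function node does to its incoming messages (for independent inputs with probabilities p and q, the output stream has probability p(1−q) + (1−p)q):

```python
def xor_node(stream_a, stream_b):
    """XOR corresponding outcomes of two incoming streams, as a function node
    in a decoding graph might; for independent inputs with probabilities p and
    q, the output stream's probability is p*(1-q) + (1-p)*q."""
    return [a ^ b for a, b in zip(stream_a, stream_b)]
```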


A difficulty that can arise when pseudo-random number generators are used is that the outcome stream can repeat itself. The period that elapses before the sequence repeats itself is referred to herein as a “PRNG (pseudo-random number generator) cycle length.” A message that is shorter than this cycle length is therefore said to be “cycle free.” If the computation is not complete before the end of the PRNG cycle length, the algorithmic behavior can be severely compromised.
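
To make the notion of a PRNG cycle length concrete, here is a sketch of a maximal-length 4-bit linear-feedback shift register; its output repeats every 2⁴ − 1 = 15 steps, so a message longer than 15 outcomes drawn directly from it could not be cycle free. The register width and tap choice are illustrative; practical generators use far longer periods.

```python
def lfsr4(seed=0b0001):
    """A maximal-length 4-bit Fibonacci LFSR; its bit sequence repeats with
    period 2**4 - 1 = 15."""
    state = seed
    while True:
        bit = ((state >> 3) ^ (state >> 2)) & 1   # feedback from the top two bits
        state = ((state << 1) | bit) & 0b1111
        yield bit

gen = lfsr4()
bits = [next(gen) for _ in range(30)]
assert bits[:15] == bits[15:30]   # the pseudo-random sequence repeats itself
```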


In general, any first and second processing node 20 can be connected by two or more paths, each of which comprises one or more edges 22, as discussed above in connection with FIG. 4. To reduce the probability of error in a stochastic computer, it is useful to reduce the extent to which outcome streams traversing these different paths between the first and second nodes are correlated with each other. The correlation between outcome streams traversing different paths can be controlled by delaying the outcome streams by different amounts. It is for this purpose that delay modules 24 are placed on each communication link 22.


In general, the extent to which outcome streams from a first node to a second node are correlated can be reduced by ensuring that no two paths between the first and second nodes have the same aggregate delay. Thus, in the context of FIG. 4, the paths ABFEGH, ABEGH, ABCDEGH, and AH would all have different aggregate delays. The same can be said for all paths connecting any pair of nodes in FIG. 4.


The choice of how much delay should be imparted by a particular delay module 24 is subject to the constraint that for any pair of nodes, no two paths between those nodes have the same aggregate delay. For relatively simple graphs, suitable delays can be derived by inspection. For more complex graphs, delays can be assigned randomly across each edge. In such a case, the delays can be selected from a uniform distribution.
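
A sketch of the random assignment and of a check of the constraint, reusing the hypothetical `simple_paths` helper and FIG. 4-style edge set from the earlier fragment; the range of candidate delays is an arbitrary choice.

```python
import itertools
import random

def assign_random_delays(edges, max_delay, seed=7):
    """Draw a delay for each edge uniformly at random from 1..max_delay (units of δ)."""
    rng = random.Random(seed)
    return {edge: rng.randint(1, max_delay) for edge in edges}

def path_delay(path, delays):
    """Aggregate delay of a path under a given edge-delay assignment."""
    return sum(delays.get((u, v), delays.get((v, u)))
               for u, v in zip(path, path[1:]))

def all_aggregate_delays_distinct(nodes, delays):
    """True if, for every pair of nodes, no two simple paths between them
    have the same aggregate delay."""
    for u, v in itertools.combinations(sorted(nodes), 2):
        totals = [path_delay(p, delays) for p in simple_paths(u, v)]
        if len(totals) != len(set(totals)):
            return False
    return True

nodes = {n for edge in edge_delay for n in edge}
candidate = assign_random_delays(list(edge_delay), max_delay=16)
print(all_aggregate_delays_distinct(nodes, candidate))   # may be False; re-draw if so
```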


Although the probabilistic method of assigning delays is convenient to use, and although it represents an improvement over the case in which each edge has the same delay, it is not guaranteed to ensure that the foregoing constraint is met. For example, there exists a small probability, when using the probabilistic method, that every edge will be assigned the same delay, in which case any two paths having the same number of edges would have the same aggregate delay. This would result in no decrease in correlation between outcome streams traversing those paths.


Another approach to assigning delays to edges is to do so indirectly: first assign colors to the edges in such a way that all edges connecting to a particular node have different colors, and then assign a particular delay to each color.
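
A sketch of the coloring step using a simple greedy rule (both the rule and the color-to-delay mapping are assumptions made for illustration): each edge takes the smallest color not already used by an edge sharing one of its endpoints, and each color is then mapped to a distinct delay.

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color not already used by any previously
    colored edge that shares an endpoint with it, so that no two edges meeting
    at a node receive the same color."""
    color_of = {}
    for edge in edges:
        used = {color_of[other] for other in color_of if set(other) & set(edge)}
        color = 0
        while color in used:
            color += 1
        color_of[edge] = color
    return color_of

# Map color k to a delay of (k + 1)·δ, giving each color its own delay.
coloring = greedy_edge_coloring(list(edge_delay))
delay_for_edge = {edge: color + 1 for edge, color in coloring.items()}
```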


In practice, delay values need only be assigned to edges within a sub-graph of a larger graph, as shown in FIG. 5. The extent of a sub-graph is typically defined by a graph depth. For example, a sub-graph may be defined by the set of all nodes that can be reached from a particular node by traversing at most m edges.
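
A sketch of extracting such a depth-limited sub-graph by breadth-first search, reusing the hypothetical `neighbors` helper from the FIG. 4 fragment:

```python
from collections import deque

def subgraph_edges(root, m):
    """Collect the edges reachable from `root` by traversing at most m edges."""
    depth = {root: 0}
    frontier = deque([root])
    edges = set()
    while frontier:
        node = frontier.popleft()
        if depth[node] == m:
            continue
        for nxt in neighbors(node):
            edges.add(tuple(sorted((node, nxt))))
            if nxt not in depth:
                depth[nxt] = depth[node] + 1
                frontier.append(nxt)
    return edges

local_edges = subgraph_edges('A', m=2)   # edges within two hops of node A
```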


As described here, the processing nodes 20, communication links 22, and delay modules 24 are implemented on an application-specific integrated circuit. However, they can also be implemented in any hardware, for example on an FPGA, or on a general-purpose digital computer executing suitable software.

Claims
  • 1. Circuitry for stochastic computation, said circuitry comprising: a plurality of processing nodes, including a first processing node and a second processing node, each of said processing nodes configured to process an outcome stream having a plurality of outcomes, each of said outcomes in said outcome stream being in one of a plurality of states, wherein an outcome from said outcome stream is in a particular state with a particular probability;communication links configured to transmit outcome streams between pairs of said processing nodes; anda delay module on each of said communication links, said delay module configured to delay outcome streams traversing said communication link by an assigned delay;wherein said first and second processing nodes are connected by a plurality of data paths, at least one of which comprises a plurality of communication links, each of said data paths causing an aggregate delay to an outcome stream traversing said data path;wherein no two aggregate delays impose the same delay on an outcome stream.
  • 2. The circuitry of claim 1, wherein at least one delay module has a randomly assigned delay.
  • 3. The circuitry of claim 1, wherein each communication link is assigned a color, each color is assigned a delay, and for all processing nodes, no two communication links to said processing node have the same color.
  • 4. The circuitry of claim 1, wherein said plurality of processing nodes and communication links define a sub-graph of a larger graph.
  • 5. The circuitry of claim 1, wherein said delay module is configured to delay an incoming outcome stream by an integer multiple of an interval between adjacent outcomes in said incoming outcome stream.
  • 6. The circuitry of claim 1, wherein said processing nodes are selected from the group consisting of function nodes and variable nodes, and wherein said communication links are configured such that no two function nodes are connected to each other by a communication link and no two variable nodes are connected to each other by a communication link.
  • 7. The circuitry of claim 1, wherein said processing nodes and said communication links define a bipartite graph.
  • 8. The circuitry of claim 1, wherein said processing nodes are configured to process an outcome stream derived from a Bernoulli process.
  • 9. A method of sending an outcome stream between processing nodes in a stochastic computer, said method comprising: transmitting an outcome stream from a first node to a second node along a first communication path;transmitting an outcome stream from a first node to a second node along a second communication path;causing a first aggregate delay in said first communication path; andcausing a second aggregate delay in said second communication path, said second aggregate delay being less than said first aggregate delay.
  • 10. The method of claim 9, wherein causing a second aggregate delay comprises causing a difference between said first and second aggregate delays to be an integer multiple of an interval between adjacent outcomes in said outcome stream.
  • 11. The method of claim 9, wherein transmitting an outcome stream comprises transmitting a stream of outcomes, wherein each outcome assumes a particular state with a particular probability.
  • 12. The method of claim 9, wherein transmitting an outcome stream comprises simulating a Bernoulli process to generate a stream of outcomes having a predefined probability.
  • 13. An article of manufacture having encoded thereon software for executing a stochastic computer, said software comprising instructions that, when executed by a computer, cause the computer to: define a plurality of processing nodes, including a first processing node and a second processing node, each of said processing nodes configured to process an outcome stream having a plurality of outcomes, each of said outcomes in said outcome stream being in one of a plurality of states, wherein an outcome from said outcome stream is in a particular state with a particular probability;define communication links configured to transmit outcome streams between pairs of said processing nodes; and toassign a delay to each of said communication links for delaying outcome streams traversing said communication link;wherein said first and second processing nodes are connected by a plurality of data paths, at least one of which comprises a plurality of communication links, each of said data paths causing an aggregate delay to an outcome stream traversing said data path;wherein no two aggregate delays impose the same delay on an outcome stream.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/306,880, titled “SELECTIVE DELAY OF DATA RECEIPT IN STOCHASTIC COMPUTATION,” filed on Feb. 22, 2010, the contents of which are incorporated herein by reference.

STATEMENT AS TO FEDERALLY SPONSORED RESEARCH

This invention was made with government support under contract FA8750-07-C-0231 awarded by the Defense Advanced Research Projects Agency (DARPA). The government has certain rights in the invention.

Provisional Applications (1)
Number Date Country
61306880 Feb 2010 US