Claims
- 1. A neural network associative memory, comprising,
- a single layer of processing elements having source and destination ones of said elements which are respectively referenced as (i) and (j) elements,
- each of said elements having available a summation means for summing weighted inputs to said elements and a transfer function means for computing transformation of said summed weighted inputs,
- an input means for and associated with individual ones of said elements for receiving patterns to be learned and patterns to be identified,
- an output means for and associated with individual ones of said elements for outputting patterns generated by said individual ones of said processing elements,
- a first set of unidirectional connections comprising a first set of variable value weights extending respectively from a plurality of said output means of said elements to a plurality of said summation means of other individual ones of said elements,
- a second set of unidirectional connections forming a second set of variable value weights extending respectively from said output means of a plurality of said elements to said summation means of the same ones of said elements, and
- means for varying the values of said first set of weights pursuant to the rule Δw_ij = η p_i δ_j and the values of said second set of weights pursuant to the rule Δw_jj = η p_j δ_j, wherein (w_ij) are variable connection weights between functionally adjacent ones (i) and (j) of said elements, (η) is a constant that determines the learning rate, (w_jj) are variable feedback connection weights for each of said elements, (p_i) are predetermined values of said patterns to be learned and identified associated with said (i) elements and (δ_j) are error signals respectively of said (j) elements.
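As a concrete, non-authoritative illustration of the two learning rules recited in claim 1, the updates Δw_ij = η p_i δ_j and Δw_jj = η p_j δ_j can be sketched in Python. The array names, shapes, and the default learning rate below are assumptions for illustration, not part of the claim:

```python
import numpy as np

def update_weights(W, w_self, p, delta, eta=0.1):
    """Apply the claim-1 learning rules to a single layer.

    W      : (n, n) first-set cross weights w_ij (source i -> destination j)
    w_self : (n,)   second-set feedback (self) weights w_jj
    p      : (n,)   pattern values p_i being learned
    delta  : (n,)   error signals delta_j of the destination (j) elements
    eta    : constant that determines the learning rate
    """
    # Delta w_ij = eta * p_i * delta_j  (outer product over all (i, j) pairs)
    W = W + eta * np.outer(p, delta)
    # Delta w_jj = eta * p_j * delta_j  (element-wise self-weight update)
    w_self = w_self + eta * p * delta
    return W, w_self
```

Each call applies one increment of both rules; repeated application (claim 7) drives the weights toward stable values.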
- 2. A neural network associative memory in accordance with claim 1 wherein said first set of connections extend respectively from said output means of all of said elements to said summation means of all of said elements.
- 3. A neural network associative memory in accordance with claim 1 wherein said transfer function means associated with at least some of said processing elements is a sigmoidal function.
- 4. A neural network associative memory, comprising:
- a plurality of processing elements each having summation means for summing weighted inputs to said elements and sigmoid transfer function means for computing the transformation of said summed weighted inputs,
- input means for individual ones of said elements for receiving patterns to be learned and patterns to be identified,
- output means for individual ones of said elements for outputting patterns generated by said processing elements,
- connection means for forming variable value weights connecting said output means of some of said elements and said summation means of others of said elements,
- each of said processing elements having envelope means for providing an envelope for said sigmoid transfer function means thereof, random value output means for providing random output values within the boundary of said envelope for corresponding output values of said summation means, and learning algorithm means activated iteratively and means for varying the values of said weights pursuant thereto at each iteration.
- 5. A neural network according to claim 4 wherein said envelope means includes means for varying the size of said envelope, starting with a predetermined size and becoming smaller with each subsequent iteration.
- 6. A neural network according to claim 4 or 5 wherein said envelope has the form
- 1/(1+exp[-net_j+T]) ≤ o_j ≤ 1/(1+exp[-net_j-T]), wherein net_j = Σ_i w_ij o_i + θ_j, (w_ij) are variable connection weights between functionally adjacent ones (i) and (j) of said elements, (θ_j) are threshold values for said elements, (o_i) and (o_j) are output values of said adjacent ones of said elements, and (T) is a variable for determining the size of said envelope.
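Claims 4 through 6 describe drawing a random output inside a sigmoid envelope whose width (T) shrinks with each iteration. A minimal sketch, assuming the function names and the shrink rate below (which the claims do not specify):

```python
import math
import random

def envelope_output(net_j, T, rng=random):
    """Draw a random output o_j inside the claim-6 envelope:
    1/(1+exp(-net_j+T)) <= o_j <= 1/(1+exp(-net_j-T))."""
    lower = 1.0 / (1.0 + math.exp(-net_j + T))
    upper = 1.0 / (1.0 + math.exp(-net_j - T))
    return lower + (upper - lower) * rng.random()

def shrink(T, rate=0.9):
    """Claim 5: the envelope starts at a predetermined size and
    becomes smaller with each subsequent iteration."""
    return T * rate
```

Note that at T = 0 the envelope collapses onto the ordinary sigmoid, so the random output becomes deterministic as learning proceeds.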
- 7. A method for storing patterns in a neural network associative memory which memory comprises:
- a single layer of processing elements having source and destination ones of said elements which are respectively referenced as (i) and (j) elements,
- each of said elements having available a summation means for summing weighted inputs to said elements and a transfer function means for computing transformation of said summed weighted inputs,
- an input means for and associated with individual ones of said elements for receiving patterns to be learned and patterns to be identified,
- an output means for and associated with individual ones of said elements for outputting patterns generated by said individual ones of said processing elements,
- a first set of unidirectional connections comprising a first set of variable value weights (w_ij) extending respectively from said output means of each of said elements (i) to said summation means of other ones of said elements (j), and
- a second set of unidirectional connections (w_jj) forming variable value self weights extending respectively from said output means of said elements (j) to said summation means of the same ones of said elements (j),
- said method comprising the steps of:
- (a) applying a pattern to be learned to said input means,
- (b) iteratively calculating changes of said weights for said first and second sets of connections in accordance with the rules Δw_ij = η p_i δ_j and Δw_jj = η p_j δ_j, wherein (η) is a constant that determines the learning rate, (p_i) and (p_j) are predetermined values of said patterns being learned and identified and (δ_j) are error terms, and
- (c) continuing step (b) until said weights are stabilized, and then storing said patterns.
- 8. A method according to claim 7 wherein each of said elements (j) has a threshold (θ_j) and each of said output means (o_j) of a said element (j) has a sigmoid function o_j = 1/(1+exp[-net_j]), where net_j = Σ_i w_ij o_i + θ_j, said step (b) being further characterized by said (δ_j) being equal to (p_j - o_j)o'_j where (p_j) are values of said patterns to be learned, (o_j) are respective actual output values of said units (j), and (o'_j) is the derivative of said (o_j) with respect to the quantity net_j = Σ_i w_ij o_i + θ_j, with said derivative being equal to o_j(1 - o_j).
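The storing method of claims 7 and 8 can be sketched as a loop that repeats step (b) until the weight changes fall below a tolerance, per step (c). The learning rate, tolerance, and iteration cap are assumptions chosen for illustration, and the threshold term θ_j is omitted here for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def store_pattern(p, eta=0.5, tol=1e-4, max_iters=10000):
    """Claim 7, steps (a)-(c): iterate dw_ij = eta*p_i*delta_j and
    dw_jj = eta*p_j*delta_j until the weights stabilize, with the
    claim-8 error term delta_j = (p_j - o_j)*o_j*(1 - o_j).

    p : (n,) pattern of values in (0, 1) to be learned.
    """
    n = len(p)
    W = np.zeros((n, n))   # first set: cross weights w_ij (i -> j)
    w_self = np.zeros(n)   # second set: feedback self weights w_jj
    for _ in range(max_iters):
        net = p @ W + w_self * p           # summed weighted inputs net_j
        o = sigmoid(net)                   # sigmoid transfer function
        delta = (p - o) * o * (1.0 - o)    # (p_j - o_j) * o'_j
        dW = eta * np.outer(p, delta)
        np.fill_diagonal(dW, 0.0)          # first set links distinct elements only
        dw = eta * p * delta
        W += dW
        w_self += dw
        if max(np.abs(dW).max(), np.abs(dw).max()) < tol:  # step (c): stabilized
            break
    return W, w_self
```

After convergence, re-applying the stored pattern through the weights and sigmoid reproduces the pattern, which is the associative-memory behavior the claims describe.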
- 9. A processing element assembly for use in a neural network having a plurality of such assemblies and wherein each two processing elements of functionally adjacent source and destination ones of said assemblies may be considered a pair and are referenced, respectively, as processing elements (i) and (j) of such pair,
- said assembly comprising,
- a processing element (j) having available a summation section for summing the values of weighted inputs and a transfer section for computing a transfer function for said summed weighted inputs,
- fan-in connection means for said summation section comprising externally connectable lines connected to said summation section,
- output means for said transfer section having fan-out connection means with multiple output lines for connection from said transfer section,
- a plurality of adjustable weight means associated respectively with said fan-in connection lines,
- a weight adjusting learning algorithm means for adjusting said weight means having associated memory means for storage of patterns,
- pattern input means for inputting a pattern to said memory means,
- circuit means so constructed and assembled for providing an initializing mode such that (1) a pattern element (p_j) placed on said pattern input means is directed to said memory means and to said multiple lines of said fan-out connection means by said circuit means, and (2) pattern elements (p_i) from source ones of said processing elements (i) on said fan-in externally connectable lines are directed to said memory means and to said summation section via said weight means by said circuit means, and wherein,
- said circuit means is also so constructed and assembled for providing a learning mode wherein (1) an output (o_j) of said transfer section output means is directed by said circuit means to said memory means and to said multiple lines of said fan-out connection means and (2) outputs (o_i) from source ones of said processing elements (i) on said fan-in externally connectable lines are directed by said circuit means to said memory means and to said summation section via said weight means thereof,
- said learning rule comprising the form Δw_ij = η p_i δ_j wherein (w_ij) are representational weighting values of said adjustable weight means between functionally adjacent ones of said source and destination processing elements, (δ_j) are calculated error signals equal to (p_j - o_j)o'_j, and (η) is a constant that determines the learning rate.
- 10. A processing element assembly according to claim 9 wherein said fan-in connection lines include a fan-in feedback line, said output (o_j) in said learning mode being also directed to said feedback line, and said learning rule also comprising the form Δw_jj = η p_j δ_j wherein (w_jj) represents the weighting value of said adjustable weight means in said fan-in feedback line.
- 11. A processing element assembly according to claim 9 or 10 wherein said fan-in connection lines include a biased threshold line, and said learning rule also comprising the form Δθ_j = η δ_j wherein (θ_j) represents the weighting value of said adjustable weight means in said biased threshold line.
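Claim 11 treats the threshold as one more adjustable weight on a bias line whose input is effectively clamped to 1, updated as Δθ_j = η δ_j. A minimal sketch (function name and learning rate are assumptions):

```python
def update_threshold(theta, delta, eta=0.1):
    """Claim 11: Delta theta_j = eta * delta_j -- the threshold is
    adjusted like a weight whose fan-in line carries a constant 1."""
    return [t + eta * d for t, d in zip(theta, delta)]
```

Because the bias input is constant, this is the same form as Δw_ij = η p_i δ_j with p_i = 1.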
Parent Case Info
This application is a continuation-in-part of application Ser. No. 200,384, filed May 31, 1988 now abandoned.
US Referenced Citations (6)
Non-Patent Literature Citations (2)
"An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, Apr. 1987.
"Computing with Neural Networks", High Technology, May 1987.
Continuation in Parts (1)
Number | Date
200384 | May 1988