Claims
- 1. A learning apparatus for teaching a neural network, including a plurality of input nodes and a plurality of output nodes, each of the plurality of output nodes representing a class with a different meaning, said learning apparatus comprising:
- initialization means for providing an input learning vector to the plurality of input nodes of said neural network, said neural network applying a weighting vector to the input learning vector to produce an initial output learning vector at the plurality of output nodes;
- first classifying means including,
- first selecting means for selecting two of the plurality of output nodes with a first and second largest value,
- first detecting means for detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs, and
- weight adjusting means for adjusting the weighting vector if the selected output node with the first largest value does not represent the class to which the input learning vector belongs, wherein the adjusted weighting vector is applied to the input learning vector to produce an adjusted output learning vector at the plurality of output nodes, said weight adjusting means adjusting the weighting vector until the first largest value represents the class to which the input learning vector belongs; and
- second classifying means including,
- second selecting means for selecting the two of the plurality of output nodes with the first and second largest values,
- second detecting means for detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs,
- ratio calculating means for calculating a ratio of the first largest value to the second largest value if the first largest value represents the class to which the input learning vector belongs, and
- ratio increasing means for increasing the ratio of the first largest value to the second largest value if the ratio is within a predetermined range.
- 2. The learning apparatus of claim 1, wherein said initialization means produces the initial output learning vector according to: ##EQU8## where
- $O_{jk}(t,u)$ = the initial output vector,
- $I(t)$ = the input learning vector, and
- $W_{jk}(u)$ = the weighting vector.
- 3. The learning apparatus of claim 1, wherein said weight adjusting means adjusts the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} - K_1 \{ I - W_{j_1 k_1} \}$ and
- $W_{j_2 k_2} = W_{j_2 k_2} + K_1 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_1$ = a constant.
- 4. The learning apparatus of claim 1, wherein the predetermined range is between 1.0 and 1.5.
- 5. The learning apparatus of claim 4, wherein an optimal value in the predetermined range is 1.2.
- 6. The learning apparatus of claim 1, wherein said ratio increasing means increases the ratio of the first largest value to the second largest value by adjusting the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} + K_2 \{ I - W_{j_1 k_1} \}$
- $W_{j_2 k_2} = W_{j_2 k_2} - K_2 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_2$ = a constant.
- 7. A learning apparatus for teaching a neural network, including a plurality of input nodes and a plurality of output nodes, each of the plurality of output nodes representing a class with a different meaning, said learning apparatus comprising:
- initialization means for providing an input learning vector to the plurality of input nodes of said neural network, said neural network applying a weighting vector to the input learning vector to produce an initial output learning vector at the plurality of output nodes; and
- classifying means including,
- selecting means for selecting two of the plurality of output nodes with the first and second largest values,
- detecting means for detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs,
- ratio calculating means for calculating a ratio of the first largest value to the second largest value if the first largest value represents the class to which the input learning vector belongs, and
- ratio increasing means for increasing the ratio of the first largest value to the second largest value if the ratio is within a predetermined range.
- 8. The learning apparatus of claim 7, wherein said initialization means produces the initial output learning vector according to: ##EQU9## where
- $O_{jk}(t,u)$ = the initial output vector,
- $I(t)$ = the input learning vector, and
- $W_{jk}(u)$ = the weighting vector.
- 9. The learning apparatus of claim 7, wherein the predetermined range is between 1.0 and 1.5.
- 10. The learning apparatus of claim 9, wherein an optimal value in the predetermined range is 1.2.
- 11. The learning apparatus of claim 7, wherein said ratio increasing means increases the ratio of the first largest value to the second largest value by adjusting the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} + K_2 \{ I - W_{j_1 k_1} \}$
- $W_{j_2 k_2} = W_{j_2 k_2} - K_2 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_2$ = a constant.
- 12. A learning method for teaching a neural network, including a plurality of input nodes and a plurality of output nodes, each of the plurality of output nodes representing a class with a different meaning, said learning method comprising the steps of:
- (a) providing an input learning vector to the plurality of input nodes of said neural network, said neural network applying a weighting vector to the input learning vector to produce an initial output learning vector at the plurality of output nodes;
- (b) selecting two of the plurality of output nodes with a first and second largest value;
- (c) detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs;
- (d) adjusting the weighting vector if the selected output node with the first largest value does not represent the class to which the input learning vector belongs, wherein the adjusted weighting vector is applied to the input learning vector to produce an adjusted output learning vector at the plurality of output nodes, said step (d) adjusting the weighting vector until the first largest value represents the class to which the input learning vector belongs;
- (e) selecting the two of the plurality of output nodes with the first and second largest values;
- (f) detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs;
- (g) calculating a ratio of the first largest value to the second largest value if the first largest value represents the class to which the input learning vector belongs; and
- (h) increasing the ratio of the first largest value to the second largest value if the ratio is within a predetermined range.
- 13. The learning method of claim 12, wherein said step (a) produces the initial output learning vector according to: ##EQU10## where
- $O_{jk}(t,u)$ = the initial output vector,
- $I(t)$ = the input learning vector, and
- $W_{jk}(u)$ = the weighting vector.
- 14. The learning method of claim 12, wherein said step (d) adjusts the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} - K_1 \{ I - W_{j_1 k_1} \}$ and
- $W_{j_2 k_2} = W_{j_2 k_2} + K_1 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_1$ = a constant.
- 15. The learning method of claim 12, wherein the predetermined range is between 1.0 and 1.5.
- 16. The learning method of claim 15, wherein an optimal value in the predetermined range is 1.2.
- 17. The learning method of claim 12, wherein said step (h) increases the ratio of the first largest value to the second largest value by adjusting the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} + K_2 \{ I - W_{j_1 k_1} \}$
- $W_{j_2 k_2} = W_{j_2 k_2} - K_2 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_2$ = a constant.
- 18. A learning method for teaching a neural network, including a plurality of input nodes and a plurality of output nodes, each of the plurality of output nodes representing a class with a different meaning, said learning method comprising the steps of:
- (a) providing an input learning vector to the plurality of input nodes of said neural network, said neural network applying a weighting vector to the input learning vector to produce an initial output learning vector at the plurality of output nodes;
- (b) selecting two of the plurality of output nodes with the first and second largest values;
- (c) detecting if the selected output node with the first largest value represents the class to which the input learning vector belongs;
- (d) calculating a ratio of the first largest value to the second largest value if the first largest value represents the class to which the input learning vector belongs; and
- (e) increasing the ratio of the first largest value to the second largest value if the ratio is within a predetermined range.
- 19. The learning method of claim 18, wherein said step (a) produces the initial output learning vector according to: ##EQU11## where
- $O_{jk}(t,u)$ = the initial output vector,
- $I(t)$ = the input learning vector, and
- $W_{jk}(u)$ = the weighting vector.
- 20. The learning method of claim 18, wherein the predetermined range is between 1.0 and 1.5.
- 21. The learning method of claim 20, wherein an optimal value in the predetermined range is 1.2.
- 22. The learning method of claim 18, wherein said step (e) increases the ratio of the first largest value to the second largest value by adjusting the weighting vector according to:
- $W_{j_1 k_1} = W_{j_1 k_1} + K_2 \{ I - W_{j_1 k_1} \}$
- $W_{j_2 k_2} = W_{j_2 k_2} - K_2 \{ I - W_{j_2 k_2} \}$
- where
- $W_{j_1 k_1}$ = the adjusted weighting vector applied to the selected output node with the first largest value,
- $W_{j_2 k_2}$ = the adjusted weighting vector applied to the selected output node with the second largest value,
- $I$ = the input learning vector, and
- $K_2$ = a constant.
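For readers tracing the two-phase learning method recited in claims 12 through 17, the following is a minimal sketch of how it could be realized. It is not code from the specification: the claimed output formula (##EQU10##) is not reproduced in this text, so a plain dot product between the input learning vector and each weighting vector is assumed as a stand-in, and the constants `K1` and `K2`, the helper names, and the iteration guard are illustrative assumptions only.

```python
import numpy as np

# Illustrative constants; the claims state only that K1 and K2 are constants
# and that the ratio window is between 1.0 and 1.5 (1.2 cited as optimal).
K1 = 0.05
K2 = 0.02
RATIO_LOW, RATIO_HIGH = 1.0, 1.5
MAX_ADJUSTMENTS = 100  # guard for the repeated adjustment of step (d)

def output_values(I, W):
    """Stand-in for the claimed output formula (##EQU10##), assumed here to be
    the dot product of the input learning vector with each weighting vector."""
    return W @ I

def train_step(I, label, W, node_class):
    """One input learning vector through steps (a)-(h) of claim 12.

    I          : (dim,) input learning vector
    label      : class to which I belongs
    W          : (num_output_nodes, dim) weighting vectors, updated in place
    node_class : (num_output_nodes,) class represented by each output node
    """
    # Steps (b)-(d): adjust the weighting vectors until the output node with
    # the largest value represents the class of the input learning vector.
    for _ in range(MAX_ADJUSTMENTS):
        O = output_values(I, W)
        j2, j1 = np.argsort(O)[-2:]          # second largest, largest
        if node_class[j1] == label:
            break
        W[j1] -= K1 * (I - W[j1])            # move the wrong winner away from I
        W[j2] += K1 * (I - W[j2])            # move the runner-up toward I

    # Steps (e)-(h): when the correct node wins but only narrowly, widen the margin.
    O = output_values(I, W)
    j2, j1 = np.argsort(O)[-2:]
    if node_class[j1] == label:
        ratio = O[j1] / O[j2]
        if RATIO_LOW < ratio < RATIO_HIGH:
            W[j1] += K2 * (I - W[j1])        # pull the winner toward I
            W[j2] -= K2 * (I - W[j2])        # push the runner-up away from I
    return W
```

Repeating `train_step` over the learning set mirrors the claimed apparatus: the first loop corresponds to the first classifying means and weight adjusting means, while the ratio test and second update correspond to the ratio calculating and ratio increasing means, with the 1.0 to 1.5 window restricting the second update to inputs that are classified correctly but with only a small margin.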
Priority Claims (1)
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 1-60327 | Mar 1989 | JPX | |
Parent Case Info
This application is a continuation of application Ser. No. 07/491,732 filed on Mar. 12, 1990, now abandoned.
US Referenced Citations (3)
Non-Patent Literature Citations (4)
- "Parallel Distributed Processing", vol. 1, ch. 8, Rumelhart et al., 1986.
- "An Introduction to Computing with Neural Nets", Richard P. Lippmann, IEEE ASSP Magazine, Apr. 1987.
- "Statistical Pattern Recognition with Neural Networks: Benchmarking Studies" by T. Kohonen et al., vol. I, pp. 61-68, Jul. 1988.
- "An Introduction to Neural Computing" by T. Kohonen.
Continuations (1)
| Number | Date | Country | Parent |
| --- | --- | --- | --- |
| 491732 | Mar 1990 | | |