Claims
- 1. A method for providing automated pattern recognition, comprising:
  selecting a central component of an input signal;
  positioning a plurality of complex units with respect to said central component of said input signal, each of said complex units associated with a plurality of simple units, one of said plurality of simple units designated as a central unit, wherein said positioning associates each of said central units with said central component of said input signal;
  determining, for each of said simple units, a greatest weighted activation level responsive to at least one detection unit activation level within an associated receptive field;
  combining, for each of said complex units, said greatest weighted activation levels of each of said associated plurality of simple units, said combining resulting in a respective complex unit activation level for each of said complex units;
  selecting one of said complex units associated with a greatest one of said complex unit activation levels;
  recording an object feature associated with said central unit of said selected one of said complex units;
  selecting, responsive to a receptive field of one of said simple units associated with said selected one of said complex units other than said central unit, a new central component of said input signal;
  repositioning said plurality of complex units with respect to said new central component of said input signal, wherein said repositioning associates each of said central units with said new central component of said input signal; and
  providing a recognition indication when a set of recorded object features comprises a set of object features associated with an object to be recognized.
  (An illustrative code sketch of this method follows the claims.)
- 2. The method of claim 1, further comprising:
  determining at least one contextual expectation with regard to said input signal; and
  wherein said selecting said new central component of said input signal is further responsive to said at least one contextual expectation with regard to said input signal.
- 3. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a temporal expectation with regard to said input signal.
- 4. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a locational expectation with regard to said input signal.
- 5. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a contextual expectation with respect to at least one component of a recognizable object.
- 6. The method of claim 5, wherein said input signal represents handwriting, wherein said recognizable object comprises at least one word, and wherein said at least one contextual expectation with respect to said at least one component of said recognizable object comprises at least one expected letter within said at least one word.
- 7. The method of claim 1, wherein said input signal represents audio information.
- 8. The method of claim 1, wherein said input signal represents video information.
- 9. The method of claim 1, further comprising:
  determining at least one contextual expectation with regard to said input signal; and
  wherein said determining said greatest weighted activation level is further responsive to said at least one contextual expectation with regard to said input signal.
- 10. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a temporal expectation with regard to said input signal.
- 11. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a locational expectation with regard to said input signal.
- 12. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a contextual expectation with respect to at least one component of a recognizable object.
- 13. The method of claim 12, wherein said input signal represents handwriting, wherein said recognizable object comprises at least one word, and wherein said at least one contextual expectation with respect to said at least one component of said recognizable object comprises at least one expected letter within said at least one word.
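The method of claim 1 describes, in effect, a fixation-and-saccade loop over a field of feature detectors: a ring of complex units is centered on the current fixation, the best-responding unit records a feature, and a peripheral simple unit of that winner supplies the next fixation point. The following is a minimal, illustrative Python sketch of that loop, not the patented implementation: the detector map, the unit offsets and weights, the choice of summation as the combining step, and all function and feature names are assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical detection-unit map: one detector activation level per
# image location (shapes and values are illustrative, not from the patent).
rng = np.random.default_rng(0)
detector_map = rng.random((32, 32))

# A complex unit here is a central simple unit plus a ring of peripheral
# simple units; each simple unit has an offset from the fixation point,
# a weight, and a small square receptive field over the detector map.
SIMPLE_OFFSETS = [(0, 0), (-4, 0), (4, 0), (0, -4), (0, 4)]  # central first
FIELD_RADIUS = 2  # receptive field is a (2r+1) x (2r+1) patch

def simple_unit_activation(center, offset, weight):
    """Greatest weighted activation over the detection units in the field."""
    y, x = center[0] + offset[0], center[1] + offset[1]
    y0, y1 = max(0, y - FIELD_RADIUS), min(detector_map.shape[0], y + FIELD_RADIUS + 1)
    x0, x1 = max(0, x - FIELD_RADIUS), min(detector_map.shape[1], x + FIELD_RADIUS + 1)
    field = detector_map[y0:y1, x0:x1]
    if field.size == 0:  # receptive field fell entirely off the signal
        return 0.0, (y, x)
    # Location of the strongest detection unit; reused below for the saccade.
    iy, ix = np.unravel_index(np.argmax(field), field.shape)
    return weight * float(field[iy, ix]), (y0 + iy, x0 + ix)

def complex_unit_activation(center, weights):
    """Combine (here: sum) the greatest weighted activations of the simple units."""
    acts = [simple_unit_activation(center, off, w)
            for off, w in zip(SIMPLE_OFFSETS, weights)]
    return sum(a for a, _ in acts), acts

# One weight vector and object-feature label per complex unit -- illustrative.
complex_units = {
    "vertical-edge":   np.array([1.0, 0.2, 0.2, 0.8, 0.8]),
    "horizontal-edge": np.array([1.0, 0.8, 0.8, 0.2, 0.2]),
    "corner":          np.array([1.0, 0.8, 0.2, 0.8, 0.2]),
}

def recognize(start, target_features, max_fixations=10):
    """Fixation loop of claim 1: record features, saccade, recognize."""
    center, recorded = start, set()
    for _ in range(max_fixations):
        # Position all complex units on the current central component and
        # select the one with the greatest complex unit activation level.
        scored = {name: complex_unit_activation(center, w)
                  for name, w in complex_units.items()}
        best = max(scored, key=lambda n: scored[n][0])
        recorded.add(best)  # object feature of the winner's central unit
        if target_features <= recorded:
            return True, recorded  # recognition indication
        # New central component, taken from the receptive field of a
        # non-central simple unit of the selected complex unit.
        _, acts = scored[best]
        center = max(acts[1:], key=lambda a: a[0])[1]
    return False, recorded

ok, seen = recognize(start=(16, 16),
                     target_features={"vertical-edge", "corner"})
print("recognized:" if ok else "not recognized:", sorted(seen))
```

In this reading, the "greatest weighted activation level" is the weighted maximum over each simple unit's receptive field, and the location of that maximum within a peripheral unit's field supplies the new central component for the next fixation; summation as the combining step is one plausible choice among several the claim language would admit.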
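Claims 2 through 6 and 9 through 13 add a contextual expectation (temporal, locational, or with respect to a component of a recognizable object, such as an expected letter within a word) that further conditions either the choice of the new central component (claim 2) or the weighted activation levels themselves (claim 9). Below is a hedged sketch of both variants, reusing the names from the sketch above; the expectation map and its values are likewise assumptions made for illustration.

```python
# `expectation` is a hypothetical map of how strongly each location is
# expected given context; here a simple locational expectation (claims 4/11).
expectation = np.ones_like(detector_map)
expectation[8:24, 8:24] = 2.0

def expected_simple_unit_activation(center, offset, weight):
    """Claim 9 variant: the greatest weighted activation level is further
    responsive to the expectation at each detection unit's location."""
    y, x = center[0] + offset[0], center[1] + offset[1]
    y0, y1 = max(0, y - FIELD_RADIUS), min(detector_map.shape[0], y + FIELD_RADIUS + 1)
    x0, x1 = max(0, x - FIELD_RADIUS), min(detector_map.shape[1], x + FIELD_RADIUS + 1)
    field = detector_map[y0:y1, x0:x1] * expectation[y0:y1, x0:x1]
    if field.size == 0:
        return 0.0, (y, x)
    iy, ix = np.unravel_index(np.argmax(field), field.shape)
    return weight * float(field[iy, ix]), (y0 + iy, x0 + ix)

def expected_saccade(acts):
    """Claim 2 variant: the new central component is further responsive to
    the expectation, here by weighting each candidate peripheral location."""
    return max(acts[1:], key=lambda a: a[0] * expectation[a[1]])[1]
```

For a handwriting signal (claims 6 and 13), the same structure applies with the expectation derived from language context, e.g. up-weighting detector locations consistent with the next expected letter of a candidate word.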
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims priority under 35 U.S.C. §119(e) to provisional patent application Ser. No. 60/117,613, filed Jan. 28, 1999.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
The development of this invention was supported at least in part by the United States Office of Naval Research in connection with grant number N0014-91-J1316. Accordingly, the United States Government may have certain rights in the invention.
US Referenced Citations (3)
| Number | Name | Date | Kind |
|---|---|---|---|
| 4326259 | Cooper et al. | Apr 1982 | A |
| 5261009 | Bokser | Nov 1993 | A |
| 5500905 | Martin et al. | Mar 1996 | A |
Provisional Applications (1)
| Number | Date | Country |
|---|---|---|
| 60/117613 | Jan 1999 | US |