Claims
- 1. The method of matching a pixel version of an unknown input symbol with a library of enhanced pixel templates by recognition enhancement of a library of L unenhanced pixel images (I_1 I_2 I_3 . . . I_j . . . I_L) with respect to a pre-existing group of G classes of symbols (S_1 S_2 S_3 . . . S_i . . . S_G), at least some of which include multiple pixel versions of the same symbol, for providing a library of G recognition enhanced pixel templates (T_1 T_2 T_3 . . . T_i . . . T_G), one enhanced template corresponding to each of the symbol classes, comprising the steps of:
- providing a library of L unenhanced pixel images (I_1 I_2 I_3 . . . I_j . . . I_L) to be enhanced to provide a library of G recognition enhanced pixel templates (T_1 T_2 T_3 . . . T_i . . . T_G);
- providing a pre-existing group of G symbol classes (S_1 S_2 S_3 . . . S_i . . . S_G) with the multiple pixel versions therein, a total of V pixel versions;
- comparing each of the V pixel versions with each of the L pixel images to obtain V×L comparisons forming V sets of L comparisons (C_1 C_2 C_3 . . . C_j . . . C_L), one set of L comparisons for each of the V pixel versions, each set of comparisons having a comparison C_j for each pixel image I_j;
- identifying a primary comparison C* from the L comparisons within each of the V sets of comparisons having the closest comparison with the pixel version for that set of comparisons, forming a collection of V primary comparisons C* (C_1* C_2* C_3* . . . C_i* . . . C_V*);
- identifying a secondary comparison C** from the L−1 remaining comparisons within each of the V sets of comparisons having the next closest comparison with the pixel version for that set of comparisons, forming a collection of V secondary comparisons C** (C_1** C_2** C_3** . . . C_i** . . . C_V**), to provide V pairs of identified comparisons C* and C**, one pair from each of the V sets of comparisons;
- determining V recognition margins (M_1 M_2 M_3 . . . M_i . . . M_V), one recognition margin between each pair of identified comparisons C* and C**;
- selecting the single pair of identified comparisons C* and C** having the smallest recognition margin M* of all of the V pairs of identified comparisons from the V sets of comparisons;
- identifying a single symbol within a class S* corresponding to the selected single comparison C*, determining whether the class S* has multiple pixel versions, and excluding the multiple pixel versions, if any, in the class S* from the remaining steps, leaving the identified single symbol;
- identifying the single pair of pixel images I* and I** corresponding to the selected single pair of identified comparisons C* and C**;
- weighting certain pixels of the closest pixel image I* and the next closest pixel image I** corresponding to the selected single pair of identified comparisons C* and C** in order to incrementally increase the recognition margin M* therebetween, causing the pixel image I* to become the closest enhanced pixel template T* and the pixel image I** to become the next closest enhanced pixel template T**;
- iterating the comparing, identifying, determining, selecting, and weighting steps until the library of pixel images has become the library of G recognition enhanced symbol templates (T_1 T_2 T_3 . . . T_i . . . T_G), which have been recognition enhanced with respect to the pre-existing group of G pixel symbols (S_1 S_2 S_3 . . . S_i . . . S_G), at least some templates of which have weighted pixel aberrations not present in the pixel versions within each of the pre-existing group of G corresponding symbol classes; and
- matching a pixel version of an unknown input symbol of the group of G symbol classes (S_1 S_2 S_3 . . . S_i . . . S_G) with the library of enhanced templates (T_1 T_2 T_3 . . . T_i . . . T_G) by comparing the pixel version with each of the enhanced pixel templates in the library of enhanced templates and selecting the enhanced template with the closest comparison.
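The comparison, margin, and selection steps of claim 1 can be sketched numerically. The following is a minimal illustration, not the patented method itself; it assumes the similarity measure is the normalized dot product of claim 31 and uses made-up four-pixel images.

```python
import numpy as np

def similarity(version, image):
    # Normalized dot product (the Cauchy-Schwarz coefficient of claim 31).
    return float(np.dot(version, image) /
                 (np.linalg.norm(version) * np.linalg.norm(image)))

def primary_secondary(version, images):
    # One set of L comparisons for this version; return the closest
    # comparison C* and the next closest C** with their image indices.
    comps = [similarity(version, img) for img in images]
    order = sorted(range(len(comps)), key=lambda j: comps[j], reverse=True)
    return (comps[order[0]], order[0]), (comps[order[1]], order[1])

# Toy library of L = 3 images and V = 2 versions (flattened 2x2 pixels).
images = [np.array([1.0, 0.0, 0.0, 1.0]),
          np.array([0.0, 1.0, 1.0, 0.0]),
          np.array([1.0, 1.0, 0.0, 0.0])]
versions = [np.array([0.9, 0.1, 0.0, 1.0]),
            np.array([1.0, 0.8, 0.2, 0.0])]

# Recognition margin M = C* - C** for each version; the pair with the
# smallest margin is the one selected for weighting.
margins = [ps[0][0] - ps[1][0]
           for ps in (primary_secondary(v, images) for v in versions)]
selected = min(range(len(margins)), key=lambda i: margins[i])
print(selected, round(margins[selected], 3))
```

The version whose runner-up template scores closest to its winner is the most confusable one, so it is the one whose templates get weighted first.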
- 2. The method of claim 1, wherein the recognition enhancement of the unenhanced pixel images is terminated when the smallest recognition margin is greater than a predetermined minimum value.
- 3. The method of claim 1, wherein the recognition enhancement of the unenhanced pixel images is terminated when the incremental increase in the smallest recognition margin at each iteration is smaller than a predetermined minimum increase.
- 4. The method of claim 1, wherein the recognition enhancement of the unenhanced pixel images is terminated when a specified number of weighting iterations have been executed.
- 5. The method of claim 1, wherein the recognition enhancement of the unenhanced pixel images is terminated when a preassigned period of iteration processing time has expired.
- 6. The method of claim 1, wherein the recognition enhancement of the unenhanced pixel images is terminated when the individual templates in clusters of similarly shaped templates have progressed to the same recognition margin and have become mutual anti-characters.
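Claims 2 through 5 name alternative stopping rules for the weighting iteration. A minimal sketch of how such checks might be combined; the function name, thresholds, and defaults are illustrative, not from the patent.

```python
import time

def should_stop(margin, increase, iteration, start_time,
                min_margin=0.5, min_increase=1e-4,
                max_iterations=1000, max_seconds=60.0):
    # Parallels claims 2-5 in order: smallest margin already large
    # enough, per-iteration gain too small, iteration budget spent,
    # or wall-clock budget spent.
    if margin > min_margin:                          # claim 2
        return True
    if increase < min_increase:                      # claim 3
        return True
    if iteration >= max_iterations:                  # claim 4
        return True
    if time.monotonic() - start_time > max_seconds:  # claim 5
        return True
    return False

now = time.monotonic()
print(should_stop(0.6, 1.0, 0, now), should_stop(0.1, 1.0, 0, now))
```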
- 7. The method of claim 6 wherein the pixel versions within a given symbol class are font variations of the alpha-numeric symbol corresponding to that symbol class.
- 8. The method of claim 6 wherein the pixel versions within a given symbol class are implementation variations of the alpha-numeric symbol corresponding to that symbol class.
- 9. The method of claim 6 wherein the pixel versions within a given symbol class are font variations and implementation variations of the alpha-numeric symbol corresponding to that symbol class.
- 10. The method of claim 6 wherein the pixel versions within a given symbol class are variations of the alpha-numeric symbol corresponding to that symbol class, and divergent pixel versions in at least some of the symbol classes form related sub-classes of pixel versions within that symbol class.
- 11. The method of claim 10 wherein each of the sub-classes of divergent pixel versions has a corresponding sub-image which becomes a corresponding sub-template in the library of templates.
- 12. The method of claim 11 wherein during the determination of the recognition margins of a divergent pixel version in a given sub-class, the recognition margins with the sub-templates of related sub-classes are excluded.
- 13. The method of claim 12 wherein the inter-subclass recognition margins are excluded by not determining a recognition margin with the sub-templates of the related sub-classes.
- 14. The method of claim 12 wherein the inter-subclass recognition margins are excluded by not comparing the given sub-class with the sub-templates of the related sub-classes.
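Claims 12 through 14 keep sub-templates of related sub-classes out of each other's margin computations. A minimal sketch of such exclusion follows; the symbols and class map are made up for illustration, not from the patent.

```python
# Each template index is tagged with its symbol class; indices 0 and 1
# are divergent sub-templates of the same class "O".
template_class = {0: "O", 1: "O", 2: "0", 3: "Q"}

def rival_templates(version_class, template_class):
    # Templates eligible to supply the runner-up comparison C**:
    # sub-templates of the version's own class are excluded, so related
    # sub-classes never count against each other's recognition margin.
    return [j for j, cls in template_class.items() if cls != version_class]

print(rival_templates("O", template_class))
```

Filtering the candidate list before the margin is computed implements claim 14's variant (no comparison at all); computing the comparisons but skipping them when taking the runner-up would implement claim 13's.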
- 15. The method of claim 1, wherein the V×L comparisons are numerical coefficients of comparison, the value of which indicates the degree of pixel similarity between the pixel version within a symbol class S_i and the pixel image I_j under comparison C_ij.
- 16. The method of claim 15, wherein a coefficient having a high value indicates a close comparison between the input pixel version s_i and the pixel image under comparison I_j, and a coefficient having a low value indicates a remote comparison between the input pixel version s_i and the pixel image under comparison I_j.
- 17. The method of claim 15, wherein the recognition margin is the difference between the comparisons C* and C** of each selected pair of comparisons.
- 18. The method of claim 17, wherein the recognition enhancement of a particular pixel image I_j with respect to a particular pixel version s_i involves maximizing the minimum recognition margin between the primary comparison C* and the maximum secondary comparison C** which form the selected pair of identified comparisons C* and C** for the pixel version s_i, in the general relationship:
- maximize M = min[C* − max(C**)]
- where
- M is the recognition margin between C* and C**,
- C* is the primary comparison for the template T* which is the closest template in the library to the particular pixel version s_i, and
- C** is the secondary comparison for the template T** which is the second closest template in the library to the pixel version s_i.
- 19. The method of claim 18, wherein only the closest pixel image I* is weighted causing the closest pixel image I* to become the closest pixel template T*.
- 20. The method of claim 19, wherein the weighting added to the closest template T* each iteration is determined by the first derivative of the recognition margin M* relative to the template T*:
- W* = u*(dM/dT*) = u*(dC*/dT* − dC**/dT*)
- where
- W* is the weighting added each iteration, and
- u* is a weighting factor mu* for dM/dT*.
- 21. The method of claim 20, wherein each iteration of the enhancement process produces a new T* template which is slightly different from the old T* template:
- T*_(n+1) = T*_n + W* = T*_n + u*(dC*/dT* − dC**/dT*)
- where
- n is the number of the iteration,
- T*_(n+1) is the new T* template, and
- T*_n is the old T* template.
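A numerical sketch of the update in claim 21, assuming the Cauchy-Schwarz similarity of claim 31 and a hand-derived gradient of the cosine similarity; the dC**/dT* term is taken as zero here because C** compares the version with a different template, and the step size u is an arbitrary illustrative value.

```python
import numpy as np

def cos_sim(v, t):
    # Cauchy-Schwarz coefficient of claim 31.
    return float(np.dot(v, t) / (np.linalg.norm(v) * np.linalg.norm(t)))

def dcos_dt(v, t):
    # Gradient of the cosine similarity with respect to the template t.
    nv, nt = np.linalg.norm(v), np.linalg.norm(t)
    return v / (nv * nt) - np.dot(v, t) * t / (nv * nt**3)

s_i = np.array([1.0, 0.2, 0.1, 0.9])      # a pixel version
t_star = np.array([1.0, 0.0, 0.0, 1.0])   # the closest template T*
u = 0.1                                   # weighting factor mu* (illustrative)

before = cos_sim(s_i, t_star)
# T*_{n+1} = T*_n + W*: a small gradient step that pulls T* toward s_i
# and so widens the margin C* - C**.
t_star_new = t_star + u * dcos_dt(s_i, t_star)
after = cos_sim(s_i, t_star_new)
print(after > before)
```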
- 22. The method of claim 18, wherein only the next closest pixel image I** is weighted causing the next closest pixel image I** to become the next closest pixel template T**.
- 23. The method of claim 22, wherein the weighting added to the next closest template T** each iteration is determined by the first derivative of the recognition margin M** relative to the template T**:
- W** = u**(dM/dT**) = u**(dC*/dT** − dC**/dT**)
- where
- W** is the weighting added each iteration, and
- u** is a weighting factor mu** for dM/dT**.
- 24. The method of claim 23, wherein each iteration of the enhancement process produces a new T** template which is slightly different from the old T** template:
- T**_(n+1) = T**_n + W** = T**_n + u**(dC*/dT** − dC**/dT**)
- where
- n is the number of the iteration,
- T**_(n+1) is the new T** template, and
- T**_n is the old T** template.
- 25. The method of claim 18, wherein both the closest pixel image I* and the next closest pixel image I** are weighted causing the closest pixel image I* to become the closest pixel template T* and the next closest pixel image I** to become the next closest pixel template T**.
- 26. The method of claim 25, wherein the weightings added to template T* and to template T** each iteration are determined by the first derivatives of the recognition margins M* and M** relative to templates T* and T**:
- W* = u*(dM/dT*) = u*(dC*/dT* − dC**/dT*)
- W** = u**(dM/dT**) = u**(dC*/dT** − dC**/dT**)
- where
- W* is the weighting added to T* each iteration,
- u* is a weighting factor mu* for dM/dT*,
- W** is the weighting added to T** each iteration, and
- u** is a weighting factor mu** for dM/dT**.
- 27. The method of claim 26, wherein the weighting factor u* is equal to the weighting factor u**.
- 28. The method of claim 26, wherein each iteration produces new T* and T** templates which are slightly different from the old T* and T** templates:
- T*_(n+1) = T*_n + W* = T*_n + u*(dC*/dT* − dC**/dT*)
- T**_(n+1) = T**_n + W** = T**_n + u**(dC*/dT** − dC**/dT**)
- where
- n is the number of the iteration,
- T*_(n+1) is the new T* template,
- T*_n is the old T* template,
- T**_(n+1) is the new T** template, and
- T**_n is the old T** template.
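The paired updates of claim 28 can be sketched the same way. A minimal illustration assuming cosine similarity, with the cross terms dC**/dT* and dC*/dT** taken as zero since each comparison involves only one template; the vectors and step size are made up.

```python
import numpy as np

def cos_sim(v, t):
    # Cauchy-Schwarz coefficient of claim 31.
    return float(np.dot(v, t) / (np.linalg.norm(v) * np.linalg.norm(t)))

def dcos_dt(v, t):
    # Gradient of the cosine similarity with respect to the template t.
    nv, nt = np.linalg.norm(v), np.linalg.norm(t)
    return v / (nv * nt) - np.dot(v, t) * t / (nv * nt**3)

s_i = np.array([1.0, 0.2, 0.1, 0.9])   # a pixel version
t1 = np.array([1.0, 0.0, 0.0, 1.0])    # closest template T*
t2 = np.array([1.0, 0.4, 0.0, 0.6])    # next closest template T**
u = 0.1                                # mu* = mu** (cf. claim 27)

margin_before = cos_sim(s_i, t1) - cos_sim(s_i, t2)
# T* steps up its own gradient (toward s_i); T** steps down its own
# gradient (away from s_i); both changes widen the margin.
t1_new = t1 + u * dcos_dt(s_i, t1)
t2_new = t2 - u * dcos_dt(s_i, t2)
margin_after = cos_sim(s_i, t1_new) - cos_sim(s_i, t2_new)
print(margin_after > margin_before)
```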
- 29. The method of claim 15, wherein the value of each numerical coefficient of comparison C_j is based on a pixel by pixel comparison between each pixel of pixel version s_i and each pixel of pixel image I_j under comparison.
- 30. The method of claim 29, wherein the pixel by pixel comparison is the dot product between each pixel of pixel version s_i and each pixel of pixel image I_j under comparison.
- 31. The method of claim 29, wherein the pixel by pixel comparison is based on the Cauchy-Schwarz function:
- Cauchy-Schwarz function = (s_i)·(T_i)/(||s_i|| ||T_i||)
- where
- (s_i) is the input version under comparison,
- (T_i) is the enhanced template under comparison,
- ||s_i|| is the version norm, and
- ||T_i|| is the template norm.
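The Cauchy-Schwarz function of claim 31 is the cosine similarity of the two pixel vectors; a minimal sketch with made-up pixel vectors:

```python
import numpy as np

def cauchy_schwarz(s, t):
    # (s . t) / (||s|| ||t||); by the Cauchy-Schwarz inequality the
    # value lies in [-1, 1], reaching 1 only when s and t are parallel.
    return float(np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t)))

s = np.array([1.0, 0.0, 1.0, 0.0])
same = cauchy_schwarz(s, s)                                   # identical pixels
disjoint = cauchy_schwarz(s, np.array([0.0, 1.0, 0.0, 1.0]))  # no overlap
print(round(same, 6), round(disjoint, 6))
```

Normalizing by the two norms makes the coefficient insensitive to overall brightness, so it measures shape overlap rather than intensity.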
Parent Case Info
This application is a continuation-in-part of application Ser. No. 08/008,556, filed Jan. 22, 1993, now abandoned.
US Referenced Citations (2)
Number | Name | Date | Kind
5204914 | Mason et al. | Apr 1993 |
5379349 | Avi-Itzhak | Jan 1995 |
Continuation in Parts (1)
Number | Date | Country | Parent
8556 | Jan 1993 | |