The present invention relates to a method for character recognition. More particularly, the present invention relates to a method for character recognition based on Gabor filters.
Traditional methods for character recognition carry out recognition based on binary character images. When recognizing various low-quality images, for example, low-resolution images, images of identification cards, automobile license plates and natural scenes, the binarized images are of extremely poor quality, so traditional methods for character recognition perform poorly on such images.
Because of these disadvantages of traditional methods for character recognition, many prior-art methods do not carry out recognition based on binarization, but directly extract recognition features from a gray character image.
To directly extract recognition features from a gray character image, two specific methods are as follows:
One method extracts topographical and gradient features, etc. by analyzing the gradient distribution of a gray character image. Because the brightness distribution of the image must satisfy specific assumptions, the method has a poor capability to resist noise.
The other method extracts recognition features by means of simple orthogonal transformations, for example, the FFT or DCT. Because these transformations reflect only the global information of a character image, the method cannot extract the local stroke direction and structural information of the character image, resulting in a poor capability to resist changes in image brightness and deformation of the characters.
The Gabor filter possesses excellent joint spatial/spatial-frequency localization and a capability to efficiently extract local structural features; thus, methods have emerged that employ Gabor filters to extract recognition features from handwritten digit images. However, current methods have the following disadvantages:
When designing the Gabor filters, the method selects different parameter values on the basis of different recognition ratios, so the design procedure is tedious and the amount of calculation is tremendous, resulting in a poor recognition capability.
The method does not apply the Gabor filter to gray character images of low quality, but applies it only to binary images. Therefore, when the method is applied to low-quality gray character images, its capability to discriminate the strokes from the backgrounds of character images is poor.
In view of the above, although the Gabor filter has been used for character recognition, existing methods do not make full use of the Gabor filter's excellent joint spatial/spatial-frequency localization and its capability to efficiently extract local structural features.
One objective of the invention is to provide a method for character recognition based on Gabor filters. With the method, the Gabor filter's excellent joint spatial/spatial-frequency localization and its capability to efficiently extract characters' local structural features are employed to extract from the character image the most important and most stable information, namely the stroke direction information of the characters, as the recognition information, so as to improve the capability to resist changes in image brightness and deformation of the characters.
Another objective is to provide a method for character recognition based on Gabor filters. With the method, according to the statistical result of the stroke direction information of character images, a simple and effective parameter design method is put forward to optimally design the Gabor filters, ensuring a preferable recognition performance.
Another objective is to provide a method for character recognition based on Gabor filters. With the method, a corrected Sigmoid function is used to non-linearly and adaptively process the stroke direction information output from the Gabor filters, so that the processed stroke direction information better suppresses changes in image brightness, blurring and breaks of strokes, and disturbance from the background, and the robustness of the recognition features is dramatically enhanced.
Another objective is to provide a method for character recognition based on a Gabor filter group. With the method, when extracting histogram features from blocks, a Gaussian filter array is used to process the positive and negative values output from the Gabor filter group so as to enhance the discrimination ability of the extracted features.
In order to realize the objectives of the invention, according to one embodiment of the invention, the method comprises the following steps.
As stated above, the Gabor filter has excellent joint spatial/spatial-frequency localization and a capability to efficiently extract a character's local structural features, and the main feature of a character image is the structure of the character's strokes. The basic idea of the present invention is therefore to employ Gabor filters to extract the information of strokes in different directions of a character to be recognized, so as to form recognition features highly robust to image disturbance and stroke deformation, and then to employ a classifier to obtain the character label (or code) of the character to be recognized based on the obtained recognition features.
As shown in the accompanying drawings, the method comprises the steps of:
pre-processing a character image, wherein the character image to be recognized is received and pre-processed, thus obtaining a binary or gray image with N×N pixels for each character to be recognized, said binary or gray image for each character to be recognized being represented by a matrix [A(i, j)]N×N;
extracting stroke direction information of characters, wherein a Gabor filter group composed of K two-dimensional Gabor filters is employed to extract stroke information in K different directions from the matrix [A(i, j)]N×N for the image of each character to be recognized, thus obtaining K matrixes [Gm(i, j)]M×M for the image of each character to be recognized, m=1 . . . K, each of said matrixes possessing M×M pixels and representing the stroke information of one direction;
adaptively processing, wherein a corrected Sigmoid function is employed to adaptively process the K matrixes [Gm(i, j)]M×M, m=1 . . . K, obtained in the step of extracting stroke direction information of characters, each of said matrixes representing the stroke information of one direction, thereby obtaining K matrixes [Fm(i, j)]M×M for the image of each character to be recognized;
extracting features from blocks, wherein features are extracted from blocks of said K matrixes [Fm(i, j)]M×M obtained in the step of adaptively processing, thus obtaining an initial recognition feature vector V with a high dimension for the image of each character to be recognized;
compressing the features, wherein the initial recognition feature vector V is compressed, thereby obtaining a recognition feature vector Vc with a low dimension for the image of each character to be recognized;
recognizing characters, wherein a specific classifier is employed to calculate the distance between the recognition feature vector Vc for the image of each character to be recognized and the category center vector of each character category; the nearest distance and the character category corresponding to it are selected, and the character code for each character to be recognized is then calculated according to the national standard code or ASCII code of that character category.
For a more complete understanding of the detailed steps of the method of the present invention, reference is made to the following descriptions taken in conjunction with the accompanying drawings.
I. Step of Pre-Processing the Character Image
The step of pre-processing the character image comprises three steps as follows:
cutting the character image, wherein the row/column projection algorithm or the character block orientation algorithm is employed to cut the image of each character to be recognized out of the whole image;
reversing the brightness of the image, wherein the color of the characters in each image obtained from the cutting step is turned into white and the color of the background into black, i.e., the brightness value of the stroke pixels of the image is made higher than that of the background;
normalizing the size of the image, wherein images with different sizes are normalized into images with the same size, so as to be convenient for the subsequent feature extracting step and to achieve a stable recognition capability against variance in size and slight deformation of characters. Well-known algorithms, e.g., the gravity-center to shape-center normalization algorithm or the line density equalization algorithm, are generally employed to normalize the size of images.
After the step of pre-processing, each obtained image is a gray or binary image of unified size, with the characters in white and the background in black, i.e., the brightness value of the stroke pixels is higher than that of the background. Assuming the size of the gray or binary image for each character to be recognized is N×N pixels, the image can be represented by a matrix [A(i, j)]N×N, wherein A(i, j) represents the value of the pixel on the ith row and jth column of the image.
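As an illustration of step I, the following minimal sketch (Python with numpy) reverses the brightness and normalizes the size. It assumes dark strokes on a light background, and a plain nearest-neighbour resize stands in for the gravity-center to shape-center or line density equalization algorithms named above; the function name preprocess is an illustrative placeholder.

```python
import numpy as np

def preprocess(char_img, N=64):
    """Sketch of step I: reverse brightness so strokes become white on a
    black background, then normalize the image to N x N pixels."""
    img = char_img.astype(np.float64)
    img = img.max() - img            # reverse brightness (assumes dark strokes)
    rows = np.linspace(0, img.shape[0] - 1, N).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, N).round().astype(int)
    return img[np.ix_(rows, cols)]   # nearest-neighbour size normalization
```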
II. Step of Extracting the Stroke Information of a Character
Said step mainly includes employing the Gabor filter group to extract the information of the strokes in K different directions from [A(i, j)]N×N for the image of each character to be recognized, thereby obtaining K matrixes [Gm(i, j)]M×M, m=1 . . . K, each of said matrixes possessing M×M pixels and representing the stroke information of one direction.
It is necessary to optimize the parameters of the Gabor filter group before its usage. The objective of optimizing the parameters is to reduce the similarity among the output matrixes {[Gm(i, j)]M×M} (1<=m<=K) by means of setting optimal parameters for the Gabor filter group.
The optimization of the Gabor filter group's parameters involves the following points: statistics of the width and direction of strokes in character images, realization of the Gabor filter group, determination of initial parameters of the Gabor filter group, and optimization of the parameters of the Gabor filter group by means of the principle of information entropy.
Now the key points in optimization procedure are illustrated as follows.
Statistics of the width and direction of strokes in character images
When determining the initial parameters of the Gabor filter group, the average width and direction of strokes of the character image are used. Therefore, it is necessary to obtain statistics of the average width and direction of strokes in character images in advance.
Realization of Gabor filter group
A Gabor filter group is generally composed of a plurality of two-dimensional Gabor filters, the number of which is dependent on the actual requirement. Usually the impulse response function of the two-dimensional Gabor filter takes the standard form of a Gaussian envelope modulated by an oriented sinusoid.
Therefore, the parameters of a Gabor filter group composed of K Gabor filters, whose impulse response functions are abbreviated as hm(x,y), are {λm, φm, σxm, σym, D} (1≦m≦K), wherein D is the spatial extraction spacing (output sampling interval) of the Gabor filter group.
After pre-processing, the image [A(i, j)]N×N is processed by the Gabor filter group composed of K Gabor filters, thereby obtaining K matrixes, each of which is represented by {[Gm(i, j)]M×M} (1<=m<=K), wherein M=N/D.
[Gm(i, j)]M×M can be obtained by the following convolution calculation: Gm(i,j)=Σx Σy A(x,y)·hm(iD−x, jD−y), 1≦i,j≦M.
The spatial range of the impulse response function hm(x,y) can be defined as −C<=x<=C, −C<=y<=C, with C=N/4. Because hm(x,y) is a function which attenuates very quickly, values whose absolute value is less than a certain threshold, for example 10^(−3), impose only a slight influence on the result, and it is unnecessary to take them into consideration during the calculation.
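The impulse response formula itself does not survive in this text, so the sketch below assumes the standard even-symmetric (cosine-modulated) two-dimensional Gabor function with orientation φm, wavelength λm, and an isotropic envelope σxm=σym=σ as stated in the parameter section below; gabor_kernel and extract_strokes are illustrative names, and scipy is used for the convolution.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(lam, phi, sigma, C):
    """Even-symmetric 2-D Gabor sampled on [-C, C]^2 (assumed standard form:
    isotropic Gaussian envelope times a cosine along direction phi, radians)."""
    y, x = np.mgrid[-C:C + 1, -C:C + 1].astype(np.float64)
    xr = x * np.cos(phi) + y * np.sin(phi)
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    h[np.abs(h) < 1e-3] = 0.0   # drop negligible tails, per the threshold above
    return h

def extract_strokes(A, lam, phis, sigma, D):
    """Step II sketch: filter A with K oriented Gabor kernels and sample the
    responses at spacing D, giving K matrixes of M x M, with M = N / D."""
    C = A.shape[0] // 4         # spatial support C = N/4, as stated above
    return [convolve2d(A, gabor_kernel(lam, phi, sigma, C), mode='same')[::D, ::D]
            for phi in phis]
```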
Determination of the initial parameters of Gabor filter group
According to the statistical results of the average width W and the typical directions θm of the strokes of the image, the parameters {λm, φm, σxm, σym, D} (1≦m≦K) of the Gabor filter group composed of K Gabor filters are determined as follows:
λm=2W, φm=θm−90°, σxm=σym=σ, a≦σ≦b, D≦σ/√2, 1≦m≦K
wherein, as for σ, only a range is defined here; its specific value will be determined in the procedure of parameter optimization.
Optimizing the parameters of Gabor filter group by means of the information entropy principle
During the procedure of parameter optimization, in order to determine the degree of similarity among the matrixes {[Gm(i, j)]M×M} (1<=m<=K) output from the Gabor filters when the parameter σ takes different values, the entropy correlation coefficient is employed.
The entropy correlation coefficient describes the matching similarity degree between two images by means of mutual information. For two images A and B of the same size, assuming their entropies are H(A) and H(B) respectively, their combinative (joint) entropy is H(A,B), and their mutual information is I(A,B)=H(A)+H(B)−H(A,B), the entropy correlation coefficient ECC is defined as:
ECC(A,B)=2*I(A,B)/(H(A)+H(B))
wherein, entropy H(A) and H(B) are respectively calculated by means of the brightness histogram of images A and B.
Assuming the distribution of the brightness histogram of image A is Hist(i), 0<=i<=255, and the total number of pixels in the image is PN, the entropy of image A is: H(A)=−Σ(i=0 to 255)(Hist(i)/PN)·log2(Hist(i)/PN), wherein terms with Hist(i)=0 are taken as zero.
For image B, H(B) can be calculated in the same way.
For the combinative entropy H(A,B), it is first necessary to calculate the combinative histogram distribution Hist2(i,j) of images A and B, wherein Hist2(i,j) is defined as the total number of positions at which the brightness is i in image A and j in image B. After obtaining Hist2(i,j), the combinative entropy H(A,B) of images A and B is calculated as: H(A,B)=−Σi Σj (Hist2(i,j)/PN)·log2(Hist2(i,j)/PN).
From the above, ECC depends only on the statistics of the brightness distribution of corresponding pixels of the two images, and is independent of the specific brightness differences. Therefore, a preferable image matching capability is obtained.
According to the above-mentioned calculation of the entropy correlation coefficient, the method for calculating the average entropy correlation coefficient of {[Gm(i, j)]M×M} (1<=m<=K) is as follows. The matrixes {[Gm(i, j)]M×M} (1<=m<=K) resulting from processing the character image [A(i, j)]N×N by the Gabor filter group are first homogeneously quantized as {[G*m(i,j)]M×M} (1≦m≦K) taking integer values 0~255, wherein └ ┘ denotes round-off.
Then the average entropy correlation coefficient of {[G*m(i,j)]M×M} (1≦m≦K) is calculated; the smaller the average entropy correlation coefficient, the lower the similarity among the K output matrixes. When the parameter σ of the Gabor filter group takes different values within a predetermined range, the Gabor filter group processes the same character image into different matrixes, and the average entropy correlation coefficient is calculated for each value of σ.
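The following sketch computes the entropy correlation coefficient from brightness histograms as defined above. The exact quantization and averaging formulas are missing from this text, so min-max quantization to 0~255 and the mean ECC over all distinct pairs of the K matrixes are assumed here.

```python
import numpy as np

def entropy(hist):
    """Entropy of a (possibly joint) histogram of pixel counts."""
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def ecc(A, B):
    """ECC = 2 * I(A, B) / (H(A) + H(B)) for two images quantized to 0..255."""
    ha = entropy(np.bincount(A.ravel(), minlength=256).astype(float))
    hb = entropy(np.bincount(B.ravel(), minlength=256).astype(float))
    hab = entropy(np.histogram2d(A.ravel(), B.ravel(), bins=256,
                                 range=[[0, 256], [0, 256]])[0])
    return 2.0 * (ha + hb - hab) / (ha + hb)

def quantize(G):
    """Homogeneous quantization to integers 0..255 (min-max scaling assumed)."""
    g = (G - G.min()) / (G.max() - G.min() + 1e-12)
    return np.floor(g * 255).astype(int)

def avg_ecc(mats):
    """Mean pairwise ECC of the K quantized Gabor output matrixes."""
    q = [quantize(G) for G in mats]
    pairs = [(m, n) for m in range(len(q)) for n in range(m + 1, len(q))]
    return sum(ecc(q[m], q[n]) for m, n in pairs) / len(pairs)
```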
For a deeper understanding of the procedure of parameter optimization, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.
Initializing
Typical character images, SNUM in total, are selected from the training sample character image database (step 10); the selected images are then pre-processed and statistics of the average width and direction of the strokes are obtained (step 20). According to the statistics, the initial parameters λ, φi (i=1,2 . . . K) of the Gabor filter group, the value range [a, b] of σ and the step length Δ of σ, Δ=(b−a)/N, are determined; stepping by Δ within the range [a, b], a series of values σi (i=1,2,3, . . . N+1) is determined for σ (step 30).
Calculating the average entropy correlation coefficient
First, the parameter σ of the Gabor filter group is set to σ1, i.e., σ=a (step 40); then the Gabor filter group is employed to calculate, in sequence, the average entropy correlation coefficient corresponding to each value σi (i=1,2,3, . . . N+1).
Acquiring the optimal value of σ
The smallest average entropy correlation coefficient is selected, and the value of σ to which it corresponds is taken as the optimal value of σ for the Gabor filter group.
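Putting the above together, the parameter optimization loop can be sketched as follows; it reuses extract_strokes and avg_ecc from the earlier sketches, and averaging the criterion over the SNUM sample images is an assumption of this sketch.

```python
import numpy as np
# reuses extract_strokes(...) and avg_ecc(...) from the sketches above

def optimize_sigma(samples, lam, phis, D, a, b, num_steps):
    """Sweep sigma over [a, b] in steps of (b - a) / num_steps and keep the
    value giving the smallest average entropy correlation coefficient."""
    sigmas = np.linspace(a, b, num_steps + 1)
    scores = [np.mean([avg_ecc(extract_strokes(A, lam, phis, s, D))
                       for A in samples])
              for s in sigmas]
    return float(sigmas[int(np.argmin(scores))])
```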
III. Adaptively processing
Within the step, the corrected Sigmoid function is employed to non-linearly and adaptively process the K matrixes [Gm(i, j)]M×M (1<=m<=K) obtained in the step of extracting stroke direction information of characters, each of said matrixes representing the stroke information of one direction, thereby obtaining K matrixes [Fm(i, j)]M×M (1<=m<=K) for the image of each character to be recognized.
For a binary image, after it is processed by the optimized Gabor filter group, recognition features can be directly extracted from the obtained matrixes representing the stroke information in different directions. However, for a gray character image acquired under adverse circumstances, recognition features cannot be directly extracted from the output of the optimized Gabor filter group, because gray character images acquired under different illumination conditions or imaging processes have different brightness and contrast; even within one image, due to non-uniform illumination, the brightness of every stroke may vary. Although the Gabor filter has some capability to resist disturbance, the disturbance in a low-quality gray character image is not completely eliminated after processing by the Gabor filter group: disturbance lines of low brightness, disturbance lines much narrower than the strokes, and brightness variations much wider than the strokes all generate outputs of small amplitude.
In order to suppress the influence of changes in image brightness on the character recognition capability (referred to as brightness invariance), it is necessary to further process the output of the Gabor filter group so as to suppress the output caused by the disturbance.
Therefore, in the invention, a corrected Sigmoid function ψm(t) (1≦m≦K) is employed to carry out non-linear adaptive processing on each of the matrixes {[Gm(i,j)]M×M} (1≦m≦K) output from the Gabor filter group in order to suppress the disturbance, wherein the function ψm(t) (1≦m≦K) is a translated version of the Sigmoid function φ(t), as shown in the accompanying drawing.
The Sigmoid function φ(t) saturates large inputs, which can compensate for differences in brightness among parts of the image and slight changes in stroke width. After correction, the corrected Sigmoid function ψm(t) (1≦m≦K) not only retains the saturation characteristic for large inputs, but also exhibits a suppression characteristic for weak inputs, thus avoiding the amplifying action of φ(t) on weak inputs and suppressing the output of noise and background.
The Sigmoid function φ(t) is:
φ(t)=tanh(αt)=(e^(2αt)−1)/(e^(2αt)+1)
Corrected Sigmoid function ψm(t) is:
ψm(t)=tanh(αm(t−χm))+βm, 1≦m≦K
Differentiating the corrected Sigmoid function ψm(t) gives: ψ′m(t)=αm[1−tanh^2(αm(t−χm))], 1≦m≦K. Since αm>1, the equation ψ′m(t)=1 has two roots lying symmetrically about χm; let νm denote the smaller one.
From the above, when t<νm the gradient of ψm(t) is less than 1, exhibiting a suppression characteristic for weak inputs; when t>χm+(χm−νm) the gradient of ψm(t) is again less than 1, i.e., exhibiting a strong saturation characteristic for large inputs.
In order to employ the function ψm(t), the parameters χm, αm and βm of the function need to be determined. In the invention, χm and αm are confined as follows: 0.1<χm<0.5, 2<αm<10; the specific values of χm and αm are selected dependent on the required recognition capability. The value of βm is determined dependent on the particular conditions.
Reference is made to the following description of the adaptive processing procedure in conjunction with the accompanying drawing.
1. Normalizing matrixes (step 120)
Before each of the matrixes {[Gm(i, j)]M×M} (1<=m<=K) output from the Gabor filter group is non-linearly and adaptively processed, it shall be normalized in a simple way, thus obtaining {[G′m(i,j)]M×M} (1≦m≦K). The normalization is carried out based on the following equation:
G′m(i,j)=Gm(i,j)/MAXm
wherein, MAXm=max(|Gm(i,j)|)
2. Determining Values of Parameters of Corrected Sigmoid Function ψm(t) (step 130)
Within the ranges 0.1<χm<0.5 and 2<αm<10, the values of χm and αm are determined upon the requirements of recognition capability and experience. The value of βm is determined empirically.
3. Adaptively processing by means of corrected Sigmoid function (step 140)
First, the corrected Sigmoid function ψm(t) with the determined parameters is extended to odd symmetry, so that it can also process the negative values in the matrixes {[G′m(i,j)]M×M} (1≦m≦K), that is: ψm(t)=−ψm(−t) for t<0.
Then {[G′m(i,j)]M×M} (1≦m≦K) is processed, thus obtaining the output [Fm(i,j)]M×M (1≦m≦K): Fm(i,j)=ψm(G′m(i,j)).
In this way, both the positive values and the negative values in {[Gm(i,j)]M×M} (1≦m≦K) output from the Gabor filter group are processed simultaneously, and the result {[Fm(i,j)]M×M} (1≦m≦K) is achieved.
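A minimal sketch of step III follows: each matrix is normalized by its largest absolute value (G′m=Gm/MAXm), and the corrected Sigmoid function, extended to odd symmetry as described above, is applied elementwise. The default parameter values are the ones quoted in the embodiment below.

```python
import numpy as np

def corrected_sigmoid(t, alpha, chi, beta):
    """psi_m(t) = tanh(alpha * (t - chi)) + beta for t >= 0, extended to odd
    symmetry (psi(t) = -psi(-t)) so negative Gabor responses are handled."""
    pos = np.tanh(alpha * (np.abs(t) - chi)) + beta
    return np.where(t >= 0, pos, -pos)

def adaptive_process(mats, alpha=7.0, chi=0.59, beta=1.0):
    """Step III sketch: normalize each matrix, then apply psi_m elementwise."""
    out = []
    for G in mats:
        Gn = G / (np.abs(G).max() + 1e-12)   # G'm(i,j) = Gm(i,j) / MAXm
        out.append(corrected_sigmoid(Gn, alpha, chi, beta))
    return out                               # K matrixes [Fm(i, j)]
```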
IV. Extracting histogram features from blocks
Within the step, histogram features are extracted from blocks of said K matrixes [Fm(i, j)]M×M, m=1 . . . K, obtained in the step of adaptively processing, thus obtaining an initial recognition feature vector V with a high dimension for the image of each character to be recognized.
The following is a detailed description of the procedure of extracting features from blocks, in conjunction with the accompanying drawing.
1. Evenly dividing (step 160)
Each of the matrixes {[Fm(i,j)]M×M} (1≦m≦K) is evenly divided into P×P rectangular areas overlapping with each other, the length of side of each rectangular area being L.
2. Calculating Feature Vector Sm+ and Sm− for each matrix (step 170)
At the center r(x, y) of each rectangular area, the weighted sum of positive values and the weighted sum of negative values are respectively calculated; then, based on the weighted sums of positive and negative values of each rectangular area of each matrix, summation by Gaussian weighting is carried out, thereby obtaining the feature vector Sm+ of positive values and the feature vector Sm− of negative values, wherein the dimensions of Sm+ and Sm− are both P^2. The weighting uses the Gaussian function G(x,y)=exp{−(x^2+y^2)/2}/(2π).
3. Calculating initial recognition feature vector V (step 180)
The feature vector Sm+ of positive values and the feature vector Sm− of negative values of each matrix are merged in order into one feature vector, whose dimension is 2KP^2, thus obtaining the initial recognition feature vector V:
V=[S1+ S1− S2+ S2− . . . SK+ SK−].
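A sketch of step IV follows. The exact weighted-sum equations for Sm+ and Sm− are missing from this text, so the sketch assumes an isotropic Gaussian weight centred on each block and scaled by L/2, applied to the positive and negative parts separately; the block-center placement is likewise an assumption.

```python
import numpy as np

def block_features(mats, P=8, L=8):
    """Step IV sketch: for each of the K matrixes, form P x P overlapping
    areas of side L and Gaussian-weighted sums of positive/negative values,
    then merge everything into the 2*K*P^2-dimension vector V."""
    feats = []
    for F in mats:
        M = F.shape[0]
        centers = np.linspace(L / 2.0, M - L / 2.0, P)   # assumed placement
        ii, jj = np.mgrid[0:M, 0:M].astype(np.float64)
        s_pos, s_neg = [], []
        for cy in centers:
            for cx in centers:
                w = np.exp(-(((ii - cy) / (L / 2.0))**2 +
                             ((jj - cx) / (L / 2.0))**2) / 2.0) / (2 * np.pi)
                s_pos.append((w * np.clip(F, 0, None)).sum())   # Sm+ component
                s_neg.append((w * np.clip(F, None, 0)).sum())   # Sm- component
        feats += [np.array(s_pos), np.array(s_neg)]             # .. Sm+, Sm- ..
    return np.concatenate(feats)    # V = [S1+ S1- S2+ S2- ... SK+ SK-]
```

With K=4 matrixes and P=8, the resulting vector has 2·4·64=512 dimensions, matching the embodiment below.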
V. Compressing features
Within the step, the initial recognition feature vector V is compressed, thereby obtaining a recognition feature vector Vc with a lower dimension for the image of each character to be recognized.
The objective of the step is to further compress the initial recognition feature vector V so as to improve the distribution of the features and the subsequent character recognition capability. Usually, the feature compression step is carried out by means of the LDA (linear discriminant analysis) method.
The feature compression is realized by multiplying the initial recognition feature vector V by a transformation matrix φ. The dimension of the transformation matrix φ is 2KP^2×n, wherein n≦2KP^2. The matrix φ is calculated by means of the LDA method.
Reference is made to the following description of the procedure of calculating the transformation matrix φ, in conjunction with the accompanying drawing.
1. Calculating feature vector sequence for each character category (step 210)
Assuming the total number of character categories is Co, the above-mentioned steps of pre-processing, extracting stroke direction information of characters, adaptively processing and extracting features from blocks are carried out for the images of all training sample characters of each character category, thereby obtaining a feature vector sequence {Vi1, Vi2, . . . ViQi} for each character category i, wherein Qi is the total number of images of training sample characters of character category i.
2. Calculating category center μi of each character category i and category center μ of all character categories
According to the feature vector sequence {Vi1, Vi2, . . . ViQi} of each character category i, the category center μi of each character category i is calculated (step 220): μi=(1/Qi)·(Vi1+Vi2+ . . . +ViQi).
According to the category centers μi of all character categories, the category center μ of all character categories is calculated (step 230): μ=(1/Co)·(μ1+μ2+ . . . +μCo).
3. Calculating between-class scatter matrix Σb and within-class scatter matrix Σw
According to the category center μi of each character category i and the category center μ of all character categories, the between-class scatter matrix Σb is calculated (step 240): Σb=Σi(μi−μ)(μi−μ)^T (a normalization constant may be applied; it does not affect the resulting transformation matrix).
According to the category center μi of each character category i, the within-class scatter matrix Σw is calculated (step 250): Σw=Σi Σj(Vij−μi)(Vij−μi)^T (again up to a normalization constant).
4. Calculating the transformation matrix φ used for feature compression.
The transformation matrix φ is the matrix that maximizes the value of |φ^T Σb φ|/|φ^T Σw φ|. By means of a matrix calculation tool, the first n largest non-zero eigenvalues of the matrix Σw^(−1)Σb are obtained, and the n eigenvectors corresponding to these eigenvalues are arranged column by column to form the transformation matrix φ (steps 260, 270).
After achieving the transformation matrix φ, the compression on the initial recognition feature vector V can be carried out according to the following equation:
Vc=φ^T·V
wherein Vc represents the compressed recognition feature vector.
The compressed recognition feature vector VC accords better with the hypothesis of Gaussian distribution in statistics; thus the capability of the classifiers improves, while the complexity and the amount of calculation of the classifiers are reduced.
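The LDA computation above can be sketched as follows. Normalization constants for the scatter matrixes are omitted since they do not change the eigenvectors, and a pseudo-inverse guards against a singular Σw; both choices are assumptions of this sketch, not statements of the exact implementation.

```python
import numpy as np

def lda_transform(class_vectors, n):
    """Step V sketch: class_vectors[i] is the (Qi x d) array of feature
    vectors of category i; returns the d x n transformation matrix phi."""
    mus = [np.mean(Vs, axis=0) for Vs in class_vectors]   # category centers
    mu = np.mean(mus, axis=0)                # center of all category centers
    d = mu.shape[0]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for mui, Vs in zip(mus, class_vectors):
        diff = (mui - mu)[:, None]
        Sb += diff @ diff.T                  # between-class scatter
        E = np.asarray(Vs) - mui
        Sw += E.T @ E                        # within-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)           # sort by largest eigenvalues
    return vecs.real[:, order[:n]]           # phi; compression: Vc = phi.T @ V
```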
Acquiring the recognition feature vector VC means that the step of extracting features from the character image to be recognized is finished. The following step is to carry out recognition by feeding the recognition feature vector VC into the classifier.
VI. Character recognition
In the step, the classifiers used are the statistical classifiers based on Bayes decision theory, for example, Gaussian distance classifiers.
During the procedure of character recognition, the category center vector and the national standard code of each character category shall be acquired in advance.
Reference is made to the following description of how to acquire the category center vector and national standard code of each character category, in conjunction with the accompanying drawing.
1. Calculating Compressed Feature Vector Sequence of each Character Category
Assuming the total number of character categories is Co, the above-mentioned steps of pre-processing, extracting stroke direction information of characters, adaptively processing, extracting features from blocks and compressing features are carried out for the images of all training sample characters of each character category, thereby obtaining a compressed feature vector sequence {Vc1i, Vc2i, . . . VcQii} for each character category i (step S50), wherein Qi is the total number of images of training sample characters of character category i.
2. Calculating category center of each character category
According to the compressed feature vector sequence {Vc1i, Vc2i, . . . VcQii} of each character category i, the n-dimension category center μi* of each character category i is calculated (step S60): μi*=(1/Qi)·(Vc1i+Vc2i+ . . . +VcQii).
3. Storing national standard code and category center vector of each character category
According to a certain sequence, the national standard code GB(i) and n-dimension category center vector μi* of each character category i are stored in the documents of recognition library (step S70).
Reference is now made to the following description of the procedure of character recognition by means of the Euclidean distance classifier, in conjunction with the accompanying drawing.
1. Reading the national standard code and category center vector of each character category
Category center vector {μi*}1≦i≦C and national standard code GB(i) of each character category are read out from the recognition library (step S10).
2. Calculating the distance between the recognition feature vector VC and category center vector of each category
The recognition feature vector VC of the character image to be recognized, acquired in the step of compressing features, is input; then the Euclidean distance {DISTi}1≦i≦C between the recognition vector VC and each category center vector {μi*}1≦i≦C is calculated as follows (step S20): DISTi=‖VC−μi*‖, i.e., the square root of the sum of squared differences of the corresponding vector components.
3. Calculating the character code for each character to be recognized
According to the Euclidean distances {DISTi}1≦i≦C between the recognition vector VC and the category center vectors of all character categories, the smallest distance and the character category p to which it corresponds are determined. According to the character category found, the character code “CODE” of the image of each character to be recognized is calculated (steps S30, S40):
CODE=GB(p).
The code “CODE” is the recognition result of the character to be recognized, which can be output to a computer database or a text file, or directly displayed on the computer screen.
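Step VI reduces to a nearest-center decision; a minimal sketch follows, assuming centers is the C×n array of category center vectors read from the recognition library and codes the matching list of national standard codes.

```python
import numpy as np

def classify(Vc, centers, codes):
    """Step VI sketch: Euclidean distance to every category center,
    then CODE = GB(p) for the nearest category p."""
    dists = np.linalg.norm(centers - Vc, axis=1)   # DIST_i, 1 <= i <= C
    p = int(np.argmin(dists))                      # nearest category index
    return codes[p]                                # the character code CODE
```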
From the above, the method for character recognition based on the Gabor filter group can be divided into two stages: a training stage and a character recognition stage.
During the course of training: 1. According to the properties of the character image samples, the feature extraction method is optimally designed, i.e., firstly, the Gabor filter group is constructed and its parameters are optimized; secondly, according to the requirements of the recognition capability, the values of the parameters of the corrected Sigmoid function ψm(t) used in the step of adaptively processing are set; thirdly, the number of blocks each matrix is divided into in the step of extracting features from blocks is determined, and the transformation matrix used in the step of compressing features is calculated. 2. A suitable classifier is selected. 3. The optimized feature extraction method is applied, i.e., firstly, the recognition feature sequences of all character categories are obtained by extracting features from the sample character images of all character categories; secondly, all recognition feature sequences are processed by the selected classifier, and the category center vectors of all character categories are obtained; finally, the category center vectors and the national standard codes of all character categories are stored in the recognition library.
During the course of recognizing characters, firstly, the character image is processed by means of the optimized feature extraction method obtained in the training stage, and the recognition feature vector is obtained; then, the classifier calculates the distance between the recognition feature vector of the image of the character to be recognized and the category center vector of each category; next, according to the distances, the smallest distance and the character category to which it corresponds are determined; finally, according to the national standard code of the character category found, the character code “CODE” of the character image to be recognized is calculated.
Generally, the training stage of the present invention is carried out by the developer of the technique, while the character recognition stage is carried out at the user terminals.
The following description relates to an embodiment of a method for recognizing the images of Chinese characters based on Gabor filter group.
During the course of training:
1. Statistics of the width and direction of strokes
A plurality of typical character images are selected from the Chinese character training samples, these images are normalized into images [A(i,j)]N×N of 64×64 pixels, and statistics of the average width and direction of the strokes of these typical images are obtained; the statistical result is shown in the accompanying drawing.
2. Constructing Gabor filter group and optimizing the parameters
In the embodiment, the Gabor filter group comprises four Gabor filters, i.e., K=4. According to the statistic result of the average width and direction of the strokes, the parameters of the Gabor filter group are optimized, the optimized parameters are as follows:
λ=10, {φk}k=1,2,3,4={−90°,−45°,0°,45°}, σ=5.6, D=4.
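For concreteness, these optimized values plug directly into the filtering sketch given in section II; the call below is illustrative only, with the angles converted to radians and a blank 64×64 array standing in for a pre-processed character image.

```python
import numpy as np
# reuses extract_strokes(...) from the sketch in section II

lam, sigma, D = 10.0, 5.6, 4
phis = np.deg2rad([-90.0, -45.0, 0.0, 45.0])   # the four stroke directions

A = np.zeros((64, 64))                  # placeholder pre-processed image
G = extract_strokes(A, lam, phis, sigma, D)    # four 16 x 16 matrixes (M = 64/4)
```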
3. Setting the parameters of corrected Sigmoid function ψm(t)
In the embodiment, all matrixes output from the Gabor filter group are processed by the same corrected Sigmoid function ψm(t). According to the requirements of the recognition capability, the settings are as follows: α=7, χ=0.59, β=1.00.
4. Determining the number of blocks divided in the step of extracting features from blocks
In the embodiment, the number of blocks of each matrix is 8×8=64 and the length of side of each block is L=8. Thus, the initial recognition feature vector of the image of each character to be recognized is 512-dimension.
5. Determining the transformation matrix φ used in the step of compressing features
In the embodiment, the 512-dimension initial recognition feature vector of the character image to be recognized is compressed into a 120-dimension recognition feature vector by the transformation matrix φ by means of the LDA transformation.
6. Determining classifier
In the embodiment, Euclidean distance classifier is selected.
7. Generating a recognition library of Chinese characters
The 3755 Chinese characters contained in the first class of national standard code are processed to obtain the category center vectors of 3755 Chinese characters, and the vectors and the national standard codes of these characters are stored in the documents of the recognition library.
During the course of recognizing characters:
1. Pre-processing
Firstly, the image of the characters to be recognized is cut, and images of single characters are obtained; secondly, normalization is carried out on each single-character image, so that the obtained images [A(i,j)]N×N are all of 64×64 pixels, with the characters in white and the background in black.
2. Extracting direction information of strokes
Each of the images [A(i,j)]N×N of Chinese characters obtained in the pre-processing step is processed by the Gabor filter group {λm, φm, σxm, σym, D} (1≦m≦4) obtained during the course of training, and the outputs in four directions {[Gm(i,j)]M×M} (1≦m≦4) are obtained.
3. Non-linear processing
The output {[Gm(i,j)]M×M} (1≦m≦4) of the image of each Chinese character to be recognized from the Gabor filter group is non-linearly and adaptively processed, and the matrixes {[Fm(i,j)]M×M} (1≦m≦4) of each Chinese character to be recognized are obtained.
4. Extracting histogram features from blocks
According to the flow chart of the step of extracting histogram features from blocks described above, the 512-dimension initial recognition feature vector of the image of each Chinese character to be recognized is obtained.
5. Compressing features
The 512-dimension initial recognition feature vector of the image of the Chinese character to be recognized is compressed into a 120-dimension recognition feature vector by the transformation matrix φ, obtained during the course of training, by means of the LDA transformation.
6. Recognizing Chinese characters
According to the 120-dimension recognition feature vector of the image of each Chinese character to be recognized, the Euclidean distance between this feature vector and the category center vectors of the categories of the 3755 Chinese characters is calculated by the Euclidean distance classifier selected during the course of training. According to the distances obtained, the smallest Euclidean distance and the Chinese character category it corresponds to are selected. Then, according to the character's national standard code stored in the recognition library, the character code of the image of each Chinese character to be recognized is obtained.
At a Customs Administration Office, images of 41 identification cards of different qualities were collected by means of the identification card system, and the names and numbers on the cards were recognized; the results are shown in the accompanying drawing.
In sum, comparing an embodiment of the present invention with the best current recognition method, which extracts directional edge histogram features from binary images: when recognizing high-quality images of printed Chinese characters, the recognition capability of the method disclosed in the present invention equals that of the best method; when recognizing low-quality images, which are of low resolution and contain disturbance, the recognition capability of the invention is far better than that of the best current method; and when recognizing images of handwritten characters, the method of the present invention possesses a strong resistance to variation in handwriting and obtains the highest recognition ratio at present.