The present invention relates to a process of handwriting recognition that allows a writer's handwriting to be recognised in an efficient, reliable and inexpensive way on the basis of psycho-physiological aspects of the handwriting modality, inferring from the trace on the paper (or any other medium on which the writer writes by hand) the interpretation of the writing, i.e. the sequence of characters that the trace is intended to represent.
The present invention further relates to an apparatus configured to execute such a process of handwriting recognition, and to the tools allowing the execution of the same process.
Although in the following reference is mainly made to the application of the process according to the invention to the recognition of handwriting made with ink on paper, the process may be applied to the recognition of handwriting on any other medium on which a writer may write by hand, such as, for instance, electronic tablets on which a user may write with a stylus, still remaining within the scope of protection as defined by the attached claims.
In the context of the automatic recognition of writing (which includes, for instance, the OCR techniques of optical character recognition), techniques of handwriting recognition have a significant and increasing importance.
The currently available techniques of handwriting recognition may be subdivided into two macro-categories: analytical techniques and holistic techniques, which both typically make use of neural networks.
Processes of the first category assume that the basic units to recognise are the single characters, and therefore they comprise the steps of segmenting the traces into subparts, each one of which is assumed to represent a character, encoding each subpart through an adequate set of features, and finally classifying each subpart by comparing its features with those associated to a set of prototypes, each associated to a different class, i.e. to a different character. Hence, the analytical techniques are potentially capable of recognising any sequence of characters written in a given alphabet. Some processes of the first category are disclosed in documents U.S. Pat. No. 4,718,103 A, U.S. Pat. No. 5,303,312 A, U.S. Pat. No. 5,307,423 A, EP 0892360 A1 and US 2006/282575 A1.
However, the analytical techniques suffer from the drawback of being extremely sensitive to segmentation errors. Moreover, such techniques model each class (i.e. each character) independently from one another, and since in a handwritten word the actual shape of a character is influenced by the shape of at least the character that precedes and/or the one that follows, the analytical techniques require very large training sets or prototype sets in order to be able to take account of the variability exhibited by the various instances of the same character in different words and to reduce the recognition error rate. The most sophisticated analytical processes integrate segmentation and recognition procedures, i.e. they provide alternative segmentation hypotheses and they rely on a character recognition engine to either validate or reject each hypothesis, under the assumption that, in order to be correctly recognised, a character must have been correctly segmented.
Processes of the holistic techniques assume, on the contrary, a whole word as the basic unit to be recognised, and consequently they represent each trace through a suitable set of features which they classify by comparing such set with those of a set of prototypes, each one associated to a different class, i.e. to a different word. Holistic techniques have the advantage of not requiring a segmentation of the handwriting tracts. Processes belonging to the holistic techniques are disclosed in documents U.S. Pat. No. 5,515,455 A and U.S. Pat. No. 6,754,386 B1.
However, holistic techniques suffer from the drawback that they need as many different classes as the number of different words to be recognised. Moreover, they require very large training sets, containing a sufficient number of instances of each word in the recognisable lexicon, for training the prototypes of the classes. This limits the application of holistic techniques to lexicons of a few thousand words, as in the case of bank check processing or of handwriting recognition applied to postal addresses.
Recently, on the basis of advancements in understanding the motor control aspects involved in handwriting and of developments of corresponding computational models, those skilled in the art have ascertained that handwriting is a discrete movement, resulting from the superimposition in time of elementary movements, called “strokes”, and that the velocity of the movement varies along the trace, such that the parts of the trace which are produced more slowly correspond to the intersections between successive strokes. Accordingly, some methods of handwriting recognition have been proposed, typically making use of Hidden Markov Models, which adopt the strokes as the basic units for segmenting the trace. From this stroke-based representation of the trace, a probabilistic model of the variation of both the shape of the strokes and their sequences for each class (a class may correspond to a character or to a word, depending on which approach is adopted) is estimated and used for the recognition. Stroke-based methods have been successfully adopted in the large majority of on-line handwriting recognition systems, while there are only a few attempts to use them in off-line recognition systems. The reason is that in the former case the system performs the recognition while the trace is produced by means of a device, such as for instance an electronic tablet, that also provides dynamic information about the velocity of the pen (or stylus) tip, and therefore the actual elementary movements (strokes) made by the writer during writing may be reliably extracted from the analysis of the velocity profile of the trace. In the latter case, instead, such dynamic information is not available, and therefore extracting the actual strokes of the trace becomes much harder and the results are less robust and less reliable, whereby different representations might be associated to traces produced by the same sequence of actual strokes.
However, even the stroke-based representation methods suffer from some drawbacks, due to the fact that they need to model the sequence of strokes, i.e. to model both the variability of the strokes and the variability with which the strokes are arranged in the sequences representing the traces which one would wish to ascribe to the same class. In order to cope with both sources of variability, the training phase of such systems, which aims at building the model, requires such huge training sets that they are difficult to collect and also very expensive.
It is an object of this invention, therefore, to allow handwriting recognition in an efficient, reliable and inexpensive way.
It is specific subject-matter of the present invention a process of handwriting recognition comprising a running mode wherein the process executes the steps of:
According to another aspect of the invention, operation B4 may determine the dynamic Lexicon also on the basis of the sequences of one or more strokes into which said one or more image fragments classified as portions of cursive writing, if any, have been segmented by excluding from the dynamic Lexicon the known words included in the Lexicon which comprise at least one stroke sequence corresponding to a known portion of cursive writing of the Reference Set, having an expected number Sexp of strokes, in a position corresponding to an image fragment classified as portion of cursive writing, the sequence of one or more strokes of which has an actual number Sact of strokes, such that the difference between the expected number Sexp of strokes and the actual number Sact of strokes is larger than an exclusion threshold.
According to a further aspect of the invention, operation B2 may comprise, for each one of said one or more image fragments classified as isolated characters, the following substeps:
According to an additional aspect of the invention, operation B3 may comprise, for each image fragment classified as portion of cursive writing, the following ordered substeps:
According to another aspect of the invention, substep B3.3 may provide the temporal sequence of points of the corrected skeleton on the basis of a search in a graph, comprising a plurality of nodes and a plurality of arcs which connects nodes, that describes topological properties of a polyline associated to the corrected skeleton obtained from substep B3.2, wherein each node of the graph has a type, selected between end point EP and branch point BP, and a degree, equal to the number of connecting arcs branching from the same node and that depends on the number of lines of the polyline which cross the node, substep B3.3 comprising:
According to a further aspect of the invention, substep B3.4 may segment the unfolded corrected skeleton through a decomposition method using a concept of perceptual saliency based on a multi-scale representation of the unfolded corrected skeleton obtained from substep B3.3 that is used to build a saliency map to highlight the points of the unfolded corrected skeleton in which curvature variations are recorded at different scales larger than a curvature variation threshold and to assume such points of the unfolded corrected skeleton as segmentation points, wherein the decomposition method optionally comprises:
According to an additional aspect of the invention, substep B3.5 may analyse the sequence of strokes obtained from substep B3.4 by means of the following validation criteria:
According to another aspect of the invention, operation B6 may perform, for each image fragment classified as portion of cursive writing, the comparison by measuring the shape similarity of the sequence of one or more strokes into which the image fragment has been segmented with the shape of said one or more stroke sequences included in the Dynamic Reference Set at different scales on the basis of a multi-scale representation of the sequence of one or more strokes into which the image fragment has been segmented that is used for building a saliency map to highlight the stroke sequences included in the Dynamic Reference Set which are most similar to the sequence of one or more strokes into which the image fragment has been segmented, the multi-scale representation optionally using as starting scale the length K, equal to the number of strokes, of the longest common sequence of compatible strokes between the sequence of one or more strokes into which the image fragment has been segmented and the stroke sequence included in the Dynamic Reference Set with which the comparison is performed, the successive scales being obtained by considering the subsequences of compatible strokes of length progressively decreased by 1, whereby K−1 similarity maps are obtained, the comparison being more optionally performed on the basis of one or more compatibility criteria.
According to a further aspect of the invention, in operation B6 the shape of a stroke may be described by a chain code that encodes the orientations of the segments of the polyline describing the stroke at the resolution σ, and operation B6 may comprise the following ordered substeps:
where:
According to an additional aspect of the invention, operation B7 may comprise the following substeps:
According to another aspect of the invention, the process may further comprise a configuration mode wherein the process executes the steps of:
where:
It is further specific subject-matter of the present invention a computerised apparatus, in particular a computer or computer network, for handwriting recognition, characterised in that it comprises processing means capable of executing the process of handwriting recognition just described.
It is also specific subject-matter of the present invention a set of one or more computer programs comprising code means adapted to perform, when operating on processing means of a computerised apparatus, the process of handwriting recognition just described.
It is further specific subject-matter of the present invention a set of one or more computer-readable memory media, having a set of one or more computer programs stored therein, characterised in that the set of one or more computer programs is the set of one or more computer programs just mentioned.
The inventors have developed a process exploiting psycho-physiological aspects involved in the generation and perception of handwriting for directly inferring from the trace on the paper (or any other medium on which the writer writes by hand) the interpretation of the writing, i.e. the sequence of characters that the trace is intended to represent.
The process according to the invention may deal with any kind of trace, including those only partially representing writing movements, as it happens when some movements are performed while the pen tip is not in touch with the paper. These lifts of the pen tip may occur anywhere in the handwriting, i.e. between successive characters as well as within a single character.
In contrast to prior art processes for handwriting recognition, the process according to the invention does not perform any feature extraction and classification, and therefore it does not need to be trained to learn class prototypes. All the information that the process needs is extracted from two sources: a set of traces and their interpretations (the setting set, in the following also denoted as “Setup Set”) and a list of possible interpretations for the unknown words (the lexicon, in the following also denoted as “Lexicon”).
The traces in the Setup Set are not constrained to represent words of the Lexicon, and therefore the same Setup Set may be used with different Lexicons, provided that both the Setup Set and the Lexicon refer to the same alphabet and, optionally, to the same language. Thus, the system may reliably recognise any word of the Lexicon, including those for which there were no instances in the Setup Set.
The present invention will now be described, by way of illustration and not by way of limitation, according to its preferred embodiments, by particularly referring to the Figures of the annexed drawings, in which:
In the Figures identical reference numerals will be used for alike elements.
In the following of the present description and in the claims, the terms “trace” and “cursive trace” mean the set of pixels which may be considered as ink signs in the image of the handwriting of an entire word (i.e., in case of writing with black ink on white paper, the set of black pixels of the image), and the terms “tract” and “cursive tract” mean the set of pixels which may be considered as ink signs in the portion of the image of handwriting related to a part of the entire word separated from all the other ones.
In the following, reference will be made to a handwriting through black traces on white background. However, it should be understood that the process according to the invention may be applied to any combination of colours for writing and background, e.g. blue or red traces on white or gray or yellow background or white traces on black background, still remaining within the scope of protection as defined by the attached claims.
From a general point of view, the preferred embodiment of the process according to the invention assumes as input the digital image of a trace corresponding to an unknown word to recognise and it provides as output a string of characters constituting its interpretation, which string of characters is selected from those included in a lexicon, or a special character indicating that no interpretation has been found among those included in the lexicon. To properly perform its functions, the process according to the invention needs the lexicon (Lexicon), comprising a list of possible interpretations of the unknown words of the application, and a setting set (Setup Set) comprising a collection of handwritten traces and their transcripts (through a string of characters). The traces of the Setup Set do not necessarily represent handwritten samples (in the following also called instances) of entire words of the Lexicon; however, both the Setup Set and the Lexicon refer to the same alphabet and, optionally, to the same language.
The preferred embodiment of the process according to the invention comprises the following functional units (i.e. operations executed by the process):
The process according to the invention has two operation modes: a configuration mode, schematically shown in
Making reference to
The fragments of the Character Training Set 151 are used by the ICR functional unit 101 for training a classification engine 155 based on neural networks that is then used in the running mode.
Each fragment of the Cursive Training Set 152 is passed to the StS functional unit 102 that segments the portion of cursive writing into a stroke sequence 153. The StL unit 103 associates to each stroke sequence 153 its transcript, such that each stroke is associated to the character of the transcript to which it belongs. The collection of the stroke sequences 153 and of their labels (i.e. the characters of the transcript to which each stroke belongs) forms a reference set 154 (Reference Set) that is used in the running mode.
Making reference to
The ICR functional unit 101 executes the classification of the fragments 201 and outputs a list 203 of interpretations for each fragment and a parameter indicative of a classification cost (that will be better described later) for each interpretation.
The DKBR functional unit 104A-104B receives as input the list 203 of interpretation-cost pairs provided by the ICR functional unit 101, the stroke sequences 206 provided by the StS functional unit 102, the relative position of each fragment 201 within the word image 200 as calculated by the TD functional unit 100 and it outputs:
The IM functional unit 105 compares the stroke sequence 206 provided by the StS functional unit 102 with the sequences included in the Dynamic Reference Set 205 and, in the case where a set of matching criteria is satisfied, it provides as cursive interpretation 207 for the stroke sequence 206 the transcript of the matching stroke sequences of the Dynamic Reference Set 205 and its cost. After the execution of the matching, there may be unmatched stroke sequences of the fragments 202, i.e. stroke sequences 206 of fragments which do not match any sequence included in the Dynamic Reference Set 205, and/or overlapping sequences, i.e. stroke sequences 206 of fragments 202 which match a plurality of sequences included in the Dynamic Reference Set 205 with different transcripts.
Finally, the WV functional unit 106 computes the total cost associated to each element in the Dynamic Lexicon 204 for which one or more cursive interpretations 207 for its fragments 201-202 have been found, by combining the costs associated to its fragments 201-202 and the costs for unmatched and/or overlapping stroke sequences, and it provides as final output the top ranking interpretation 208 or it rejects the unknown word in the case where the total cost of such interpretation is larger than a threshold.
The IM functional unit 105 then compares the stroke sequences 206A and 206B with the sequences included in the Dynamic Reference Set 205, and, in the case where a set of matching criteria is satisfied, provides as cursive interpretation 207 for the stroke sequences 206A and 206B the transcript of the matching stroke sequences of the Dynamic Reference Set 205 and their cost (in the example of
In the following, the functional units of the preferred embodiment of the process according to the invention are described in greater detail.
As stated, the TD functional unit 100 extracts from a word image 200 the sub-images corresponding to the fragments, classifying them as fragments 201 of isolated characters or fragments 202 of portions of cursive writing. Due to both acquisition noise (that may artificially subdivide the original ink trace into pieces) and writing habits (that may lead the writer to lift the pen from the paper while writing), an isolated character as well as a portion of cursive writing may be segmented into two or more pieces, which must be merged for reconstructing the original meaning. To this end, the sub-images corresponding to each piece are first extracted and for each one of them a set of features suitable to be used in the classification step is then computed. Preliminarily, the unit locates the central zone, the upper zone and the lower zone of the entire word. After the features have been computed, each piece is classified as portion of cursive writing, or isolated character, or vertical line, or horizontal line, or dot, or noise, or rejected writing, and then a set of heuristic rules (illustrated in detail later) is applied for the merging of two or more pieces so as to form either an isolated character 201 or a fragment 202 of portion of cursive writing. By way of example, in
In order to estimate the features of the fragments of ink tracts, the TD functional unit 100 proceeds as follows. First of all, the word image is processed for extracting the bounding box of each piece, i.e. of each set of connected black pixels. In the following such sets of pixels are called components. Afterwards, each component is analysed by considering its size, the number and the distribution of its black pixels and the size of the word to which the same component belongs. In particular, in the preferred embodiment of the process according to the invention, the TD functional unit 100 considers the Cartesian coordinates of the top-left vertex (Xmin, Ymax) and of the bottom-right vertex (Xmax, Ymin) of the bounding box, the width Wcomp and the height Hcomp of the bounding box, the total number Pcomp of pixels and the number BPcomp of black pixels included in the bounding box, and the width Wword and the height Hword of the bounding box of the word. Starting from these basic features, a further set of features is computed, namely the height ratio HR:
the ratio AR between width Wcomp and height Hword (also known with the term of aspect ratio):
the proportional aspect ratio PAR:
and the fill factor FF:
The features HR, AR and PAR are meant to capture the temporal extension of the handwriting, while the feature FF is meant to capture the spatial density of ink.
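Since the formula images for the features are not reproduced in the present text, the following minimal sketch merely illustrates how such bounding-box features could be computed: AR follows the definition given above, while the expressions used for HR, PAR and FF are assumptions reconstructed from the feature names and their stated intent.

```python
# Hedged sketch of the bounding-box features; HR, PAR and FF are assumed
# definitions (the formula images are not reproduced in this text).
def component_features(Wcomp, Hcomp, Pcomp, BPcomp, Wword, Hword):
    HR = Hcomp / Hword    # height ratio (assumed: component height vs word height)
    AR = Wcomp / Hword    # aspect ratio, as defined in the text above
    PAR = Wcomp / Hcomp   # proportional aspect ratio (assumed)
    FF = BPcomp / Pcomp   # fill factor (assumed: ink density within the box)
    return HR, AR, PAR, FF
```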
Moreover, in order to evaluate the shape complexity of the ink trace, the numbers of transitions between white pixels (belonging to the background) and black pixels (belonging to the writing) along consecutive rows and columns of the component are additional features of which the TD functional unit 100 takes account. In particular (as described, e.g., by R. C. Gonzalez and R. E. Woods in “Digital Image Processing”, Addison-Wesley, Reading, Mass., 1992), their values are arranged in two histograms, namely a histogram of the number of transitions per column on the horizontal axis and a histogram of the number of transitions per row on the vertical axis. On such histograms, groups of Δx (with Δx optionally equal to 2) consecutive columns and groups of Δy (with Δy optionally equal to 4) consecutive rows, respectively, are considered and the highest value within each group is stored, thus obtaining the vectors IMx and IMy, each one of which has a number of elements equal to the ratio between the horizontal/vertical size of the matrix and the interval Δx or Δy, respectively. As stated, the numbers of transitions between white and black pixels along consecutive rows and columns of the component provide a measurement of the complexity of the shape of the ink trace: empty or flat vectors IMx and IMy suggest that the component presents scattered black pixels and is likely to be noise, while higher values correspond to more complex shapes.
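By way of illustration only, the construction of the vectors IMx and IMy might be sketched as follows (a non-limiting reading of the above description, assuming a binary image with ink pixels equal to 1):

```python
import numpy as np

def transition_vectors(img, dx=2, dy=4):
    """img: 2-D binary array (1 = ink, 0 = background). Returns IMx, IMy:
    white/black transition counts per column and per row, grouped in
    blocks of dx columns (dy rows), keeping the highest value per block."""
    col_trans = np.abs(np.diff(img, axis=0)).sum(axis=0)  # transitions along each column
    row_trans = np.abs(np.diff(img, axis=1)).sum(axis=1)  # transitions along each row
    imx = np.array([col_trans[i:i + dx].max() for i in range(0, col_trans.size, dx)])
    imy = np.array([row_trans[i:i + dy].max() for i in range(0, row_trans.size, dy)])
    return imx, imy
```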
Finally, the TD functional unit 100 estimates the position of the handwriting zones in the word image, by locating the set of horizontal lines corresponding to the upper and lower boundaries of each zone (e.g., as described by Gonzalez and Woods in the handbook cited above). Making reference to
The rules designed for classifying the components are reported in Algorithm 1 in pseudo-natural (English) language shown in the present description, wherein the classifications are defined as follows: NOISE is noise; VERTICAL LINE is a vertical line; HORIZONTAL LINE is a horizontal line; DOT is a dot; CONFUSION is confusion; ISOLATED CHARACTER is an isolated character; CURSIVE is a portion of cursive writing; and REJECT is a rejection of the fragment. In particular, the names assigned to each threshold indicate the class (or the classes) the classification rules of which use the threshold and the feature to which the threshold is applied. For instance, the threshold NOISE_DOT_FF_MIN is used in the classification rules for the noise (NOISE) and dot (DOT) classes, it is applied to the feature FF and it is used as a minimum value (MIN). The only exception to these guidelines for reading Algorithm 1 is the threshold OFFSET_CZ, that represents the maximum allowable offset between the position of the lower limit of the box delimiting the fragment and the line of upper delimitation of the central zone 301 of
After the classification, the merging rules reported in Algorithm 2 in pseudo-natural language shown later are applied to the components classified as dots, horizontal lines and vertical lines, in order to group them together, or with an isolated character, or with a portion of cursive writing. Components classified as confusion are sent to both the ICR functional unit 101 and the StS functional unit 102, while components classified as rejections are ignored in the subsequent processing.
As stated with reference to
The first functional subunit of description of the fragments 201 associates to the binary digital image of each fragment 201 a feature vector containing the representation of that fragment, that will be used by the second subunit of classification. In this regard, the preferred embodiment of the process according to the invention takes account of two different feature sets, namely the Central Geometrical Moments (CGM) of the binary images up to the 7th order (e.g., described by Gonzalez and Woods in the handbook cited above), and the mean of the pixels belonging to the disjoint sub-images of 8×8 pixels size that may be extracted from the binary image (MBI: Mean of Binary Image, i.e. the mean of the values of the image pixels, wherein the value of black pixels is equal to 1 and the value of white pixels is equal to 0). Hence, each fragment 201 to classify is described by means of two feature vectors: the first vector contains 33 real values, while the second vector is composed of at most 64 real values (it is assumed that an image containing an entire character, known as bitmap, has maximum size equal to 64×64 pixels). The images of the samples of fragments included in the set Char_TS 151 of
In the second subunit of multi-expert classification, that makes use of neural networks, unknown fragments 201 are classified through an ensemble of experts. In particular, the preferred embodiment of the process according to the invention takes account of 20 experts, obtained by using as classification scheme a feed-forward-type neural network trained with the back-propagation algorithm. The first 10 experts are trained by using the training set Char_TS_CGM with different random initialisations of the network parameters. Similarly, the other 10 experts are obtained by using the training set Char_TS_MBI.
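By way of illustration only, such an ensemble could be built as sketched below; the use of scikit-learn's MLPClassifier and the specific hyper-parameters are assumptions, since the description does not fix a particular implementation.

```python
from sklearn.neural_network import MLPClassifier

def train_experts(X_cgm, X_mbi, y, n_per_set=10):
    """Train 10 feed-forward networks on the CGM features and 10 on the
    MBI features, each with a different random initialisation (seed).
    Hidden-layer size and iteration count are illustrative assumptions."""
    experts = []
    for features in (X_cgm, X_mbi):
        for seed in range(n_per_set):
            net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                                random_state=seed)
            experts.append(net.fit(features, y))
    return experts
```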
The third subunit of combination of the results receives as input the responses provided by the ensemble of experts for a given fragment 201 and it provides as output the list of interpretations for such fragment, together with the cost for each interpretation. The inventors have developed such third subunit by reformulating the problem of the combination of the classifiers (i.e. of the experts) as a problem of recognition of shapes (also known as “pattern recognition”), wherein the shape (i.e. the pattern) represents the collective behavior of the experts when classifying a fragment 201. In this way, the collective behavior of the experts is represented by the set of labels provided by the experts when classifying that fragment 201, and the dependencies among the experts are modelled by estimating the joint probability distributions among the outputs of the classifiers and the true class. The inventors have developed the third subunit of combination of the results by using a Bayesian Network for automatically inferring the probability distribution for each class, and by defining a new weighted majority vote rule, that uses the joint probabilities as weights, for combining the classifier outputs. The final decision is made by taking into account both the votes received by each class and the statistical behavior of the classifiers.
The architecture of the third subunit of combination of the results is shown in
The third subunit of combination of the results may be defined as a higher-level classifier that works on an L-dimensional discrete-valued feature space. The combiner uses a supervised learning strategy, which consists in observing both the responses {e1, . . . , eL} and the “true” class c for each fragment of the training set, in order to compute the joint probability p(c, e1, . . . , eL).
Once this joint probability has been learned from a set of training data, the combiner classifies unknown fragments 201 by using a weighted voting strategy. In particular, the combiner computes the class c* of the unknown fragment x by using the formula:

c* = arg maxk Σi=1, . . . ,L wk·ri,k   (1)
where ri,k is a function the value of which is 1 when the classifier Ei classifies the unknown fragment x as belonging to the class k, and 0 otherwise, while wk represents the weight related to the k-th class and it has been set equal to the joint probability:
wk = p(c = k, e1, . . . , eL)   (2)
A high value for the weight wk means that the set of responses {e1, . . . , eL} provided by the experts is very frequent in the training set in correspondence with the class k.
A Bayesian Network (in the following also indicated as BN) is used for learning the joint probabilities. This choice is motivated by the fact that the BN provides a natural and compact way to encode exponentially sized joint probability distributions (through the Direct Acyclic Graph structure—in the following also indicated as DAG) and it allows causal relationships to be learned, hence gaining understanding about complex problem domains. In order to implement this mathematical tool into an application, the definition of both the network structure (DAG) and the related conditional probabilities is necessary. This can be achieved by using learning algorithms which are capable of deriving them from training fragments. The learning algorithm alternates between two phases: a first phase, called structural learning, is aimed at capturing the relations among the variables and hence the structure of the dependencies in the DAG; a second phase, called parameter learning, evaluates the conditional probability parameters between the variables.
For both the structural learning and the parameter learning the inventors have followed the guidelines described by D. Heckerman, D. Geiger and D. Chickering in “Learning Bayesian networks: The combination of knowledge and statistical data”, Machine Learning, 20, 1995, pp. 197-243, in order to reduce the computational cost: according to such guidelines, the inventors have implemented a sub-optimal algorithm that solves the two problems separately: it learns the DAG structure first and it then computes the parameters for such a structure.
When there are several classes exhibiting similar values for the product wk·ri,k, the combiner does not provide a single class as result, but rather the list of the most likely interpretations. In practice, when the difference between the best interpretation according to formula (1) and the second best interpretation is lower than a threshold θ (the value of which has been experimentally set), the combiner also introduces the latter interpretation in the list provided as output. The same consideration is repeated for the second best and the third best interpretations and so on. Finally, a cost, represented by the product wk·ri,k, is associated to each interpretation in the output list.
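A minimal sketch of the weighted-vote combination described above follows; the weights wk are assumed to be already available for the observed response pattern (in the process they are provided by the Bayesian Network), and combine is a hypothetical helper name.

```python
def combine(responses, weights, classes, theta):
    """responses: labels e1..eL returned by the experts for a fragment;
    weights[k]: joint probability used as the weight wk of class k;
    theta: experimentally set threshold. Returns the (interpretation,
    cost) list, best interpretation first."""
    # weighted majority vote: score(k) = wk * number of experts voting for k
    scores = {k: weights[k] * sum(1 for e in responses if e == k) for k in classes}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    output = [ranked[0]]
    for prev, curr in zip(ranked, ranked[1:]):
        if prev[1] - curr[1] >= theta:   # gap too large: stop extending the list
            break
        output.append(curr)
    return output
```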
As previously illustrated, the StS functional unit 102 of segmentation of the strokes decomposes the tract (or the trace) contained in a fragment 202 of portion of cursive writing into a sequence of elementary movements (called strokes). To this end, as shown in
The ink present in the fragment 2020 is represented in the binary digital image as a “ribbon” the width of which (i.e. the thickness of which) depends on the writing instrument, paper, writing pressure (i.e. the pressure that the writer exerts through the writing instrument on the paper) and scanner resolution. The first skeletonisation subunit 501 transforms this ribbon into a line having width equal to a single pixel, so as to eliminate the variability introduced by the just mentioned factors. This is achieved by computing the Medial Axis Transform (MAT) of the ribbon. The MAT determines the connected sets of points including the centers of all the circles with maximum radius that may be inscribed in the ribbon. In other words, the MAT transform is the local axis of symmetry of the ribbon. At the end of this processing, thus, the ribbon is represented by a unitary width digital line, computed through any one of the algorithms proposed in the literature; by way of example, the skeletonisation algorithm based on the MAT may be the one described by C. Arcelli and G. Sanniti di Baja in “A thinning algorithm based on prominence detection”, Pattern Recognition, vol. 13, no. 3, 1981, pp. 225-235 wherein a label representing the distance of the pixel from the border of the ink trace is associated to each pixel of the trace, and the skeleton is obtained by considering all the points the label of which is a local maximum and all and only those necessary to their connection so as to guarantee that the skeleton has the same order of connection of the trace.
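By way of illustration, and not as the specific algorithm adopted by the invention (which cites the Arcelli–Sanniti di Baja approach), the MAT-based skeletonisation could be obtained with an off-the-shelf routine such as scikit-image's medial_axis; the labelling of each skeleton pixel with its distance from the border mirrors the description above.

```python
import numpy as np
from skimage.morphology import medial_axis

def skeletonise(binary_img):
    """binary_img: 2-D boolean array, True on ink pixels. Returns the
    one-pixel-wide skeleton together with, for each skeleton pixel, its
    distance from the border of the ink trace (the MAT labels)."""
    skeleton, distance = medial_axis(binary_img, return_distance=True)
    return skeleton, np.where(skeleton, distance, 0)
```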
Independently from the specific algorithm that is used, the skeleton computed by means of the MAT may have some geometrical distortions in correspondence with the regions where the trace intersects itself, so that the shape of the skeleton does not faithfully reflect that of the trace, as in the case of the skeleton 2021 of
In the just illustrated steps 3, 4 and 5, the tests on the distances are introduced in order to guarantee that the segments which are added still lie within the trace.
Returning to make reference to
The unfolding algorithm carried out by the third functional subunit 503 recovers the sequence of points followed by the writer by reformulating the problem of writing order recovery in terms of a graph search, where the graph describes the topological properties of the polyline associated to the skeleton obtained at the end of the distortion correction steps. Each node of the graph is characterised by two features: the type (selected between EP and BP) and the degree (i.e. the number of segments or connections branching from the same node), that depends on the number of digital lines which cross the node. The unfolding is obtained by selecting a path within the graph that crosses all the nodes and, at the same time, minimises the number of nodes crossed more than once. To guarantee the existence of such a path, the original graph is transformed into a semi-Eulerian graph, i.e. a graph in which all the nodes have an even degree, with the exception of the source and destination nodes. In order to transform the graph structure into that of a semi-Eulerian graph, two steps are followed by using heuristic criteria: in the first step, the source and destination nodes are selected among the odd nodes; in the second step, all the remaining odd nodes are transformed into even nodes by adding further connections among them. Finally, Fleury's algorithm, modified on the basis of handwriting generation criteria, allows the path that crosses all the nodes and minimises the number of nodes crossed more than once to be found. The selected path represents the reconstructed dynamics of the ink trace. More in detail, the unfolding algorithm comprises the following steps:
The segmentation algorithm carried out by the fourth functional subunit 504 subdivides the skeleton of the unfolded tract (or unfolded trace—as that of
The segmentation into strokes is obtained with a decomposition method that exploits the concept of perceptual saliency used to model the attentive vision of human beings (and more in general of primates). The method is based on a multi-scale representation (as described, e.g., by Lindeberg T. in “Scale-Space Theory in Computer Vision”, Kluwer Academic Publishers, 1994) of the unfolded skeleton that is used to build a saliency map for highlighting the so-called “focus of attention”, i.e. the regions of the image representing salient information for the considered application. In this case, the focus of attention is constituted by the points of the unfolded skeleton in which significant curvature variations are recorded at different scales, and therefore such points represent the desired decomposition points. The segmentation algorithm comprises the following steps:
The validation algorithm carried out by the fifth functional subunit 505 analyses the sequence of strokes provided by the fourth functional subunit 504 of segmentation and it validates (or not) the unfolding by means of the following criteria:
The possibly found errors are encoded into an error vector, having as many elements as the EP and BP points of the unfolded skeleton, in which each element is a Boolean variable set to “true” when a segmentation error is found in the point corresponding to that element of the error vector. This information is then exploited for deciding, on the basis of the number of detected errors, whether it is possible to reconstruct the writing order or not. Such a decision is based on the concept that path reconstruction is more difficult when most of the information related to the trajectory is not available. In particular, the trajectory described by the pen tip when the latter is lifted from the paper is not represented in the ink tract (or trace) and, therefore, in order to reconstruct the path when the pen is lifted, it is necessary to infer such missing information from the existing ink tract. Of course, the more information is unavailable, the more complex it is to build a reliable reconstruction of the original path and the more errors can be made. Consequently, if the number of errors exceeds a threshold (optionally equal to 2), the ink tract (or trace) is rejected. The implementation of the validation algorithm carried out by the fifth functional subunit 505 is reported in the following in Algorithm 3 in pseudo-natural (English) language, wherein:
Whenever the segmented ink tract does not meet one or both of the aforementioned criteria, but the total number of errors is below the threshold, the error vector is sent back to the unfolding algorithm carried out by the third functional subunit 503 and it is exploited to modify the following three features of the path in the graph that gives rise to the unfolded skeleton:
According to the information provided by the validation algorithm, the unfolding algorithm executes two steps:
As shown in
For the preferred embodiment of the process according to the invention, Tables 1A and 1B report (with terms in the English language immediately comprehensible to those skilled in the art—consistently with what is reported in Table 2) the set of features and the criterion according to which each feature is associated to a stroke, having a and b as starting and ending points, respectively.
The final output provided by the StS functional unit 102 is therefore the sequence of strokes and the corresponding sequence of features, represented in
In the configuration mode, the functional unit StL 103 of
More in particular, the first step 1200 of distribution generation generates a probability distribution 1251 for each class of characters, representing the probability that a character is composed of a certain number of strokes. The probability mass functions 1251 are obtained by solving a set of systems of linear equations. Each equation is obtained from a segmented ink tract (or trace) 2032, by considering the number nchar of strokes of each character as an unknown variable, the occurrences xchar of each character as coefficients, and the number nstroke of strokes of the segmented ink tract (or trace) as the constant term:
na·xa + . . . + nz·xz + nA·xA + . . . + nZ·xZ = nstrokes
Each system of linear equations is constituted by k equations (with k≧C, where C is the number of characters of the alphabet as stated above) and by C unknown variables. By solving a set of m systems, m vectors of solutions are obtained, each comprising C elements (namely an element for each character of the alphabet). Each i-th solution vector (with i ranging from 1 to m) is assigned a corresponding vector of reliability parameters R, also having C elements (one for each character), each one of which is equal to the ratio between the occurrence of the corresponding character within the equation system and the deviation of the considered solution (for that character) from the average of the solutions (for that character):
where:
The distributions are then created on the basis of the m vectors of solutions and of the corresponding vectors of reliability parameters R.
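A minimal sketch of this step follows, assuming a least-squares solver for the (possibly over-determined) systems; the solver choice and the zero-deviation guard are assumptions, while the reliability ratio follows the textual definition above.

```python
import numpy as np

def solve_system(X, n_strokes):
    """X: k x C matrix of character occurrences (one equation per
    segmented tract); n_strokes: k-vector of total stroke counts.
    Returns a least-squares estimate of the strokes-per-character
    unknowns (the exact solver is an assumption)."""
    solution, *_ = np.linalg.lstsq(X, n_strokes, rcond=None)
    return solution

def reliability(solutions, occurrences, eps=1e-9):
    """solutions: m x C array (one solution vector per system);
    occurrences: m x C array of character occurrences per system.
    R = occurrence / |deviation of the solution from the mean solution|."""
    deviation = np.abs(solutions - solutions.mean(axis=0))
    return occurrences / np.maximum(deviation, eps)  # guard against zero deviation
```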
As stated, the second step 1201 of feature analysis analyses the features associated to the strokes and locates a set of anchor points 1252. Certain characters are characterised by a particular sequence of features at the beginning or at the end of their ink tract. A list of these characters and their properties, in the case of the English alphabet, is reported in Table 2. Whenever the transcript contains these characters, the actual features provided by the StS unit 102 are compared to the expected ones. In the case where the actual features correspond to the expected ones, the sequence of strokes is divided into subsequences according to the located anchor points 1252 (as shown in
The third step 1202 of label association carried out by the functional unit StL 103 analyses each subsequence found in the second step 1201 of feature analysis and it generates a sequence of labels, each representing the transcript of the character to which the corresponding stroke belongs. In particular, according to the transcript associated to the subsequence, the labelling algorithm executed by the third step 1202 exploits the probability distributions 1251 for finding the combination of strokes associated to each character that gives rise to the maximum value of probability and that, at the same time, meets the constraint on the total number of strokes composing the subsequence.
The labelling algorithm starts from the absolute maxima of the distributions 1251 associated to the characters belonging to the transcript, which are inserted in a vector Pmax (having a number of elements equal to the number of characters of the transcript); it generates a vector S, also containing a number of elements equal to the number of characters of the transcript, wherein each element is equal to the number of strokes associated to the respective character, and it calculates the expected number Sexp of strokes of the subsequence. Subsequently, if the expected number Sexp of strokes is different from the actual number Sact, the labelling algorithm selects another local maximum for each distribution 1251, forming a new vector Pmax; in particular, the local maximum within a distribution 1251 to insert as element of the vector Pmax is searched to the left of the absolute maximum if Sexp>Sact, while it is searched to the right of the absolute maximum if Sexp<Sact. On the basis of the vector Pmax, the labelling algorithm changes the number of strokes associated to the character having the largest value of local maximum. According to the new values of the elements of the vector S, the new expected number Sexp is calculated and then compared with the actual number Sact. This first part of the labelling algorithm ends when Sexp corresponds to Sact or when all the local maxima have been explored.
Whenever there is no combination of local maxima that meets the constraint on the total number of strokes (i.e. Sexp=Sact), the labelling algorithm restarts from the absolute maxima, it searches for the local maxima (to the left of the absolute maxima if Sexp>Sact or to the right if Sexp<Sact), it adds (if Sexp<Sact) one stroke to, or subtracts (if Sexp>Sact) one stroke from, the number of strokes associated to the character having the largest value of local maximum, and it calculates the new value of Sexp. The labelling algorithm ends when Sexp corresponds to Sact. The labelling algorithm is reported in Algorithm 6 in pseudo-natural (English) language, wherein:
With reference to the example application to the recognition of an image 13-200 of an unknown handwritten word (corresponding to the Italian word “Contrada” included in the Lexicon—with writing different from the image 200 of
First of all, as schematically shown in
The DKBR functional unit carries out as its first operation the ordering of the lists of the interpretations of the image fragments classified as isolated characters on the basis of their position within the image 13-200 of the unknown word (such order is indicated in
As schematically shown in
As already stated with reference to
The execution of the matching of the stroke sequences is carried out by measuring the shape similarity of the stroke sequences at different scales, by combining the multi-scale representation into a saliency map and by selecting the most salient points, which correspond to the most similar stroke sequences. The rationale behind such a matching technique is that, by evaluating the similarity at different scales and then combining this information across the scales, the sequences of strokes which are “globally” more similar than others stand out in the saliency map. The “global” nature of the saliency guarantees that its map provides a more reliable estimation of ink tract similarity than that provided by the “local” criteria which are usually proposed in the prior art.
In order to implement such an approach, it is necessary to define a scale space, to find a similarity measure to be adopted at each scale, to compute the saliency map, and to select the matching sequences of strokes.
With regard to the scale space, the preferred embodiment of the process according to the invention adopts as scale the number of strokes in the sequences the similarity of which is being measured. Such a number is indicated in the following of the present description also as the “length” of the sequence. Accordingly, the number of scales corresponds to the length K of the longest common sequence of compatible strokes between the sequence of strokes of the fragments 202 and the sequence of strokes of the Dynamic Reference Set 205 with which the matching is verified. In order to decide whether two strokes are compatible, i.e. whether they bring the same contextual information even if they have different shapes, the features associated to the strokes are compared by adopting the compatibility criteria reported in Table 3, that shows an array each element of which indicates the compatibility (if the element has the symbol “x”) or non-compatibility (if the element is void) between the features of the characters reported on the respective row and the respective column of the same element. The successive scales are obtained by considering the subsequences of compatible strokes of length K−1, K−2, . . . , 2 strokes. Hence, at the end of this procedure, K−1 similarity maps are obtained, each one of which measures the similarity among all the subsequences which may be extracted from the sequence of length K.
The similarity between two strokes is their shape similarity. To this end, the shape of a stroke is described by a chain code encoding the orientations (i.e. the changes of curvature) of the segments of the polyline that describes the stroke at the resolution σ selected by the StS functional unit 102. The orientation is uniformly quantised into 16 intervals, and each interval is denoted by one of the letters of the subset [A-P] in such a way that the letter A corresponds to the first interval (wherein the orientation goes from 0 to (2π/16) with respect to the horizontal axis), the letter B corresponds to the second interval (wherein the orientation goes from (2π/16) to (2·2π/16) with respect to the horizontal axis), and so on; obviously, in other embodiments of the process according to the invention the subset [A-P] may have a different number of elements and/or a representation for each element different from the uppercase Latin letter (e.g., a number, a hexadecimal symbol, a Greek letter). Through this encoding, the shape of the stroke is described by a string of labels that encodes the local orientation of the selected representation of the original ink tract, as shown by way of example, and not by way of limitation, in
As similarity measure between two strokes, the preferred embodiment of the process according to the invention adopts the weighted edit distance, known as WED, between the respective chain codes. The WED distance is based on the concept of string stretching: it neither introduces nor deletes any label in the strings to compare, but it simply extends, i.e. stretches, the shorter string up to the length of the longer one. Hence, by denoting with Lmin and Lmax the lengths of the two strings, respectively, there are (Lmax−Lmin) labels which must be included in the stretched string. In order to decide which symbols must be inserted and where, the integer part l of the ratio (Lmax/Lmin) is computed and each symbol of the shorter string is replicated (l−1) times. The remaining ((Lmax−Lmin)−l) symbols are uniformly located in the stretched string and their values are the same as those of the labels to their left.
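The stretching step may be sketched as follows; the placement of the remaining labels is implemented as described above (uniform positions, value copied from the label to the left), while the count of remaining labels is computed directly from the final length, which is an assumption where the textual expression is ambiguous.

```python
def stretch(short, long_len):
    """Stretch the label string `short` (a list of labels) to length
    `long_len` without inserting or deleting labels: each label is
    replicated, and the remaining labels are spread uniformly, each one
    copying the value of the label to its left."""
    l = long_len // len(short)             # integer part of Lmax/Lmin
    out = []
    for label in short:
        out.extend([label] * l)            # each label replicated (l-1) extra times
    remaining = long_len - len(out)        # labels still to be placed (assumption)
    step = len(out) // (remaining + 1) if remaining else 0
    for i in range(remaining):             # uniform insertion, copying the left label
        pos = (i + 1) * step + i
        out.insert(pos, out[pos - 1])
    return out
```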
After the stretching, the WED distance between two strings of labels, namely between a first string X=(x1, x2, . . . , xL) of labels and a second string Y=(y1, y2, . . . , yL) of labels, is computed according to the following formula:
where:
In other words, E(xi, yi) is the lexical distance between the symbols [A . . . P] adopted for encoding the curvature changes, which lexical distance is constrained to be symmetric, so that the lexical distance between A and B is 1, but also the distance between A and P is 1. Consequently, the maximum distance SDmax between two symbols is equal to 8 and the distance WED(X, Y) ranges from 0 (for an identical shape but with opposite drawing order) to 100 (for an identical shape and the same drawing order). In the comparison between two sequences X and Y having respectively N and M strokes, the WED distance assumes the form of a matrix of N×M elements, the element WEDij of which denotes the WED distance between the i-th stroke of the first fragment and the j-th stroke of the second fragment; in the case where the two strokes are incompatible, WEDij=0. By way of example and not by way of limitation,
Other embodiments of the process according to the invention may use a formula different from formula [W1] for the computation of the WED distance between two strings X=(x1, x2, . . . , xL) and Y=(y1, y2, . . . , yL) of labels, such as for instance the following formula:
that differs from the formula [W1] in that the maximum distance SDmax between two symbols may be different from 8 and the distance WED(X, Y) may range from 0 to WEDmax, with WEDmax that may be different from 100.
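Only the symbol distance E is fully determined by the text (a symmetric distance on the 16-letter alphabet with maximum value 8); since the formulas [W1] and [W2] are not reproduced here, the aggregation into WED in the sketch below is a hedged guess that merely respects the stated 0-100 range.

```python
def symbol_distance(a, b):
    """Symmetric lexical distance between chain-code labels in [A..P]:
    E(A,B) = 1 and E(A,P) = 1, with a maximum distance of 8."""
    d = abs(ord(a) - ord(b))
    return min(d, 16 - d)

def wed(x, y, sd_max=8, wed_max=100.0):
    """Hedged reconstruction of WED(X, Y) between two equal-length
    (already stretched) label strings: wed_max for identical strings,
    0 when every label pair is at the maximum distance sd_max."""
    mean_dist = sum(symbol_distance(a, b) for a, b in zip(x, y)) / len(x)
    return wed_max * (1.0 - mean_dist / sd_max)
```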
Subsequently, the IM unit 105 computes the average value μ(WED) on the values WEDij which are different from zero (i.e. on the values WEDij≠0). In order to build the saliency map S, that is also a matrix of N×M elements Sij, initially set equal to zero, the IM unit 105 determines the length K of the longest common sequence of compatible strokes and, for each pair of strokes p and q of such sequence such that WEDpq>μ(WED), the IM unit 105 increments by one the saliency of the pair of strokes p and q (i.e. Spq=Spq+1).
Finally, the IM unit 105 decrements by one the length (i.e. K=K−1), it searches for all the possible sequences of length K and it updates the saliency of their strokes as above, until it reaches K=2; in particular,
In the case where there are two (or more) matching sequences which correspond to multiple interpretations for the same stroke sequence of the unknown word, all these matching sequences are retained and ranked on the basis of their reliability. In this way, after having carried out the matching of the unknown word with all the references, a set of interpretations for each stroke sequence of the unknown word is available. Algorithm 7 formally summarises the procedure that executes the matching of the ink trace in pseudo-natural (English) language, wherein SAVE is the value previously indicated with μ(S).
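The construction of the saliency map S may be sketched as follows; find_common_sequences is a hypothetical helper (not part of the described process) standing for the search for common subsequences of compatible strokes of a given length.

```python
import numpy as np

def build_saliency(weds, find_common_sequences, K):
    """weds: N x M WED matrix (0 for incompatible stroke pairs);
    K: length of the longest common sequence of compatible strokes;
    find_common_sequences(k): hypothetical helper yielding, for every
    common subsequence of length k, its list of (p, q) stroke-index
    pairs. Returns the N x M saliency map S."""
    mu = weds[weds > 0].mean()            # average mu(WED) over non-zero entries
    saliency = np.zeros_like(weds)
    for k in range(K, 1, -1):             # scales K, K-1, ..., 2
        for pairs in find_common_sequences(k):
            for p, q in pairs:
                if weds[p, q] > mu:       # reward strongly similar stroke pairs
                    saliency[p, q] += 1
    return saliency
```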
Still with reference to
The ICR unit 101 provides a list of pairs of values (interpretation, cost) for each fragment 201 classified as isolated character, while the IM unit 105 provides a set of matches for each fragment 202 classified as portion of cursive writing. Therefore, in order to assign a score to each interpretation, the WV unit 106 computes the cost for each fragment 202 classified as portion of cursive writing on the basis of the received matches.
The problem of computing the cost of a fragment 202 classified as portion of cursive writing is reformulated as searching for the cheapest and most exhaustive sequence of matches for the same fragment. Each match is a sequence of strokes, described by a starting and an ending stroke and by a label corresponding to the sequence of characters (or n-gram) coded by the sequence. In the example shown in
The desired sequence of matches is defined as the sequence that:
reference transcripts, further showing for each stroke the transcript of the character to which such stroke belongs.
The sequence to validate is computed by the WV unit 106 by verifying the existence of a path in a directed weighted graph wherein the nodes are the matches and the arcs are the possible connections between pairs of consecutive matches; by way of example,
The construction of the graph is guided by the interpretation provided by the Dynamic Lexicon 204. The nodes of the graph are determined on the basis of the following three rules, which take account of the fact that there can be a plurality of matches associated to the same stroke sequence of the (portion of) unknown word:
The cost assigned to each node introduced by rules 1)-3) above is equal to the difference between the maximum number of matches assigned to one of the nodes identified as above by the IM functional unit 105 for that particular fragment 202 classified as portion of cursive writing and the number of matches associated to each node, as reported in Table 5 for the matches of Table 4.
As far as the arcs of the graph are concerned, they are determined on the basis of the following three rules:
In order to determine the costs to associate to the arcs introduced in the graph by the rules, it is considered that most frequently the matches either partially overlap each other or have gaps between them, since some strokes may receive different labels while others do not receive any label from the IM unit 105. In order to take account of the overlaps and/or the gaps between connected nodes, the cost of each arc depends on the length of the overlaps/gaps between matches. In particular, if L denotes the length of the overlap/gap, Llow denotes the length of the cheapest node of the pair, Nhigh denotes the node of the graph with the highest cost and Lhigh its length, the cost for the arc Aij going from node Ni to node Nj is defined as follows:
where cost(node) is the cost of the node. Table 6 shows the arcs of the graph and the costs associated thereto according to the described process. The elements of Table 6 to which no costs correspond are related to pairs of nodes not connected by arcs.
Consequently, the cost of the path going from node i to node j is equal to:
Cij = cost(Ni) + Aij + cost(Nj)
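A short sketch of how the node and arc costs combine along a candidate path follows; since the arc-cost formula itself is not reproduced above, the arc costs are taken as a precomputed mapping, and the helper name path_cost is hypothetical.

```python
def path_cost(path, node_cost, arc_cost):
    """path: list of node identifiers (matches); node_cost[n]: cost of
    node n; arc_cost[(i, j)]: cost Aij of the arc from node i to node j
    (pairs absent from the mapping are not connected). Generalises
    Cij = cost(Ni) + Aij + cost(Nj) to a whole path."""
    total = node_cost[path[0]]
    for i, j in zip(path, path[1:]):
        if (i, j) not in arc_cost:
            raise ValueError("nodes %s and %s are not connected" % (i, j))
        total += arc_cost[(i, j)] + node_cost[j]
    return total
```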
The WV unit 106 carries out the validation and calculates the score of the interpretation associated to the fragment 202 classified as portion of cursive writing through the algorithm described by J. Y. Yen in “Finding the k shortest loopless paths in a network”, Management Science, 17(11), 1971, pp. 712-716. In the example of
After a cost has been assigned to all the fragments 202 classified as portions of cursive writing belonging to the (image 200 of the) trace, the WV unit 106 calculates the score of the interpretation of the unknown word by adding the costs of each fragment 201 classified as isolated character and of each fragment 202 classified as cursive tract, as shown in the example of
The preferred embodiments of this invention have been described and a number of variations have been suggested hereinbefore, but it should be understood that those skilled in the art can make other variations and changes, without so departing from the scope of protection thereof, as defined by the attached claims.