The present invention describes a method and system for the automatic capture of data of interest from a plurality of electronic documents (e.g., in TIFF, PDF, or JPG format) once a single example of a document from the same source and layout, with known field positions and data, is available. The sources of the electronic documents could be accounting systems, enterprise resource management software, accounts receivable management software, etc.
The number of documents exchanged between different businesses is increasing very rapidly. Every institution, be it a commercial company, an educational establishment, or a government organization, receives hundreds or thousands of documents from other organizations every day. All these documents have to be processed as fast as possible, and the information contained in them is vital for various functions of both the receiving and sending organizations. It is, therefore, highly desirable to automate the processing of received documents.

Typically, commercial documents such as invoices, purchase orders, bills of lading, and others are created by a software program that specifies a layout of information on each page of the document, so that the document contains permanent information such as legends/keywords designating the data fields (e.g., Invoice Number, Bill Number, Carrier Name, etc.) and variable information (an actual invoice number, a specific carrier name) that needs to be captured from these documents. The salient feature of the document-creating software is that the mutual relations between the permanent information and the variable information for each originating system rarely, if ever, change. In other words, the layout of the documents from an individual source rarely changes, and if, for instance, the “ship to” address is placed underneath the “ship to” legend in one instance of a bill of lading from a given source, it stays in the same relative position in another instance of the bill. Of course, there are thousands and thousands of different layouts produced by individual originating entities.
The references described below and the art cited in those references are incorporated into the background of the present invention. There are many data capture systems known in the art, including commercially available systems from companies such as Kofax, ABBYY, AnyDoc, and many others. U.S. Pat. No. 8,660,294 B2 describes the typical data capture methods deployed by these companies.
Briefly, the method comprises two parts: a setup for each individual layout, and the actual data capture that relies on this normally laborious setup. The setup process consists of a usually highly qualified technician or programmer creating a detailed formalized description of the mutual relations between the permanent and variable elements of each individual layout, either within a specially created user interface or by using a programming language to write a program encoding these relations. The disadvantages of such methods are well known and are described in U.S. Pat. No. 8,660,294 B2, the chief one being high labor intensity coupled with the difficulty of maintaining the systems that utilize them. U.S. Pat. No. 8,660,294 B2 therefore discloses a method that utilizes data entered by an operator from an instance of a document, that data being the locations of the fields of interest in the image of that instance of the document. It also describes the use of keywords (in itself a well-known, universally utilized mechanism), such as “Total”, by instructing the system to find the data of interest “to the right of the printed word ‘Total’ on the physical form”. The main recipe in U.S. Pat. No. 8,660,294 B2 prescribes finding, in a new incoming image, the words closest to the words already found by the operator in an instance (template) of a document of the same origin as the incoming image.
There are potential problems that limit the efficiency of these methods: keywords such as “Total” can be corrupted or obscured by obstacles such as preprinted lines or by noise introduced by the scanning process, and, given the manifold of words found in images, the words closest to a known location may not be the words sought.
The present invention provides a method and system for automatically finding and capturing data in electronic images of documents having a specific fixed originating mechanism, such as a computer printing program. This is accomplished with the help of a known example of a document originating from the same source. It is assumed, in accordance with the previously disclosed art, that this example is completely known: all the words in it, their positions, and their attributes such as their lengths in characters.
In what follows, the system operates with two images from the same source: the image I from which the data is to be captured, and the image T on which the system has been trained and from which it has learned all the data of interest.
The first step according to the preferred embodiment of the present invention is, for any image I of a page of the document, to find all the words in that image together with their bounding rectangles and their OCR identities (i.e., the recognized character strings). A geometric distance between a word W in image T and a word w in image I is then defined as
GeoDist(W,w)=|x1−x3|+|y1−y3|+|x2−x4|+|y2−y4|,
where (x1, y1) and (x2, y2) are the Cartesian coordinates of the upper-left and lower-right corners of word W, and (x3, y3) and (x4, y4) are the corresponding corner coordinates of word w. Combining this geometric distance with a string (edit) distance StringDist(W,w) between the OCR identities of the words yields the combined word distance
WordDistance(W,w)=u GeoDist(W,w)+v StringDist(W,w),
for each W and for each candidate word w, where u and v are appropriate weights. Thus, if there are k fields/words W captured in image T, k different distances are used.
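By way of illustration, the distances above may be computed as in the following Python sketch. The function names, the bounding-box representation (x1, y1, x2, y2), and the sample weights u and v are illustrative assumptions, not prescriptions of the present disclosure; StringDist is realized here as the standard Levenshtein edit distance.

    def geo_dist(W, w):
        """Geometric distance between two word bounding boxes given as
        (x1, y1, x2, y2) tuples: upper-left and lower-right corners."""
        x1, y1, x2, y2 = W
        x3, y3, x4, y4 = w
        return abs(x1 - x3) + abs(y1 - y3) + abs(x2 - x4) + abs(y2 - y4)

    def string_dist(a, b):
        """Standard Levenshtein (edit) distance between two OCR strings,
        computed by dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def word_distance(W_box, W_text, w_box, w_text, u=1.0, v=10.0):
        """Combined WordDistance; the weights u and v are tuning
        parameters chosen here arbitrarily for illustration."""
        return u * geo_dist(W_box, w_box) + v * string_dist(W_text, w_text)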
Once the distance between W and w has been defined, a matrix of pair-wise distances WordDistance(W,w) is obtained for pairs of words (W,w) in the two images I and T. The preferred embodiment of the present invention utilizes assignment algorithms that calculate the optimal correspondence/mapping of words (W,w) (matching in the sense of the shortest total distance) based on the distance described above. Assignment algorithms are described in R. Burkard, M. Dell'Amico, S. Martello, Assignment Problems, SIAM, 2009, incorporated by reference herein. The net result of this mapping is the captured set of fields in image I, namely the desired subset X of words w that is in one-to-one correspondence with the words W (W↔X).
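One way to realize this step is sketched below, reusing word_distance from the sketch above and the Hungarian-method implementation available in SciPy; the specification does not prescribe a particular assignment implementation, and the input word lists are hypothetical examples.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical inputs: the k trained words from image T and the
    # candidate words from image I, each given as (box, text) pairs.
    trained = [((100, 50, 180, 70), "INV-1001"),
               ((100, 90, 200, 110), "ACME")]
    candidates = [((102, 52, 181, 71), "INV-2047"),
                  ((99, 91, 201, 112), "ACME"),
                  ((400, 50, 460, 70), "TOTAL")]

    # Matrix of pair-wise WordDistance values between trained fields
    # and all candidate words in image I.
    cost = np.array([[word_distance(Wb, Wt, wb, wt)
                      for (wb, wt) in candidates]
                     for (Wb, Wt) in trained])

    # Optimal one-to-one assignment minimizing the total distance.
    rows, cols = linear_sum_assignment(cost)
    captured = {trained[r][1]: candidates[c] for r, c in zip(rows, cols)}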
If two corresponding permanent legends K and k can be found automatically and correlated in images T and I (such as the unique words “Invoice Number” in both of them), then in another embodiment of the present invention it may be sufficient to calculate the displacements of all the words W relative to K and apply the same displacements to find the words X relative to the legend k. It is not always possible to find permanent legends in images, since they can be printed in a very noisy fashion, printed negatively, or obscured by lines or other obstacles. However, the images I and T are most frequently shifted as a whole relative to one another, producing largely the same displacement of the fields of interest in the two images. This circumstance also allows an independent verification of the results of the assignment method described above. The assignment algorithm runs in strongly polynomial time, making it an efficient method of using learning for data capture. If the displacement can be estimated from K and k, only the words w having approximately the same displacement would participate in the calculations.
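A minimal sketch of this embodiment follows, reusing geo_dist from above and assuming the legends K (in T) and k (in I) have already been located; the helper names and the tolerance value are illustrative assumptions.

    def expected_position(W_box, K_box, k_box):
        """Shift field W by the displacement between legends k and K."""
        dx = k_box[0] - K_box[0]
        dy = k_box[1] - K_box[1]
        x1, y1, x2, y2 = W_box
        return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

    def plausible(w_box, W_box, K_box, k_box, tol=60):
        """Keep only candidate words w whose displacement roughly matches
        the displacement of W relative to K; tol is an arbitrary
        illustrative tolerance in pixels."""
        return geo_dist(expected_position(W_box, K_box, k_box), w_box) <= tol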
A modification of this method utilizes the same word distance as defined above, but with the standard string (edit) distance between the legends K and k, to arrive at the optimal correspondence of legends even if some of them are corrupted or only partially recognizable. This optimal correspondence of legends immediately allows the calculation of the displacement vector s between the images I and T, since all the legends and the corresponding fields are typically shifted in unison, barring more severe non-linear distortions that are rarely observed outside of fax images. In essence, this is a process of automatic registration of images. If the scanning process is sufficiently accurate, only vertical and horizontal shifts will be present, so that the application of the displacement vector s is sufficient. If skew or more severe affine distortions are present, this method applied to three or more legends will provide the parameters of the full affine transformation that converts the coordinates of the fields in image I to the coordinates of the corresponding fields in image T. The application of the assignment algorithm with WordDistance as defined above to all the pairs of training-image fields of interest and all the candidate words in image I, transformed via the displacement vector s (or affine transformed if need be), will result in the capture of all the fields of interest in the image I.
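The registration step may be realized, for example, as in the following sketch: with one matched legend pair only the translation (the displacement vector s) is recovered, while three or more matched pairs determine the full affine transformation by least squares. The point lists and function names are hypothetical.

    import numpy as np

    def estimate_affine(pts_I, pts_T):
        """pts_I, pts_T: lists of matched (x, y) points (e.g., legend
        corners) in images I and T. Returns a 2x3 matrix A such that
        A @ [x, y, 1] maps a point of image I into image T."""
        src = np.array([[x, y, 1.0] for x, y in pts_I])
        dst = np.array(pts_T, dtype=float)
        A, *_ = np.linalg.lstsq(src, dst, rcond=None)
        return A.T  # 2x3 affine parameters

    def apply_affine(A, pt):
        x, y = pt
        return tuple(A @ np.array([x, y, 1.0]))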
Some fields of interest are multi-word fields, such as addresses. The coordinates and extents of such fields are precisely known in the image T. Typically, the printing program allocates a fixed amount of real estate to each address. Once the correspondence of single-word fields has been established, it is possible to calculate the displacement of all multi-word fields in I relative to the corresponding fields in the image T and thus capture them accurately.
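For illustration, a multi-word field may be captured as sketched below: the field's rectangle, precisely known in image T, is shifted by the displacement s, and every word of image I falling inside the shifted rectangle is collected. All names are illustrative.

    def inside(box, region):
        x1, y1, x2, y2 = box
        rx1, ry1, rx2, ry2 = region
        return rx1 <= x1 and ry1 <= y1 and x2 <= rx2 and y2 <= ry2

    def capture_multiword(field_box_T, s, words_I):
        """field_box_T: field rectangle in T; s = (dx, dy) displacement
        from T to I; words_I: list of (box, text) pairs found in I."""
        dx, dy = s
        region = (field_box_T[0] + dx, field_box_T[1] + dy,
                  field_box_T[2] + dx, field_box_T[3] + dy)
        return [text for box, text in words_I if inside(box, region)]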
All geometrical lines are known in the training image T, including those that potentially border the fields of interest. The lines in the image I corresponding to the lines in the image T can be used to provide the positions of fields in the image I. Horizontal and vertical geometric line distances and the optimal correspondence of these lines in two images were defined in U.S. Pat. No. 8,831,361 B2, which is incorporated by reference herein. While there are several ways to define distances between geometric lines, any good distance will provide a suitable measure of proximity between lines. In images with close layouts, the corresponding distances between the lines bordering fields in the images I and T are designed to be the same, and therefore the knowledge of these distances in T provides the knowledge of the corresponding distances in I, thus providing the positions of the sought fields. Namely, a distance between a horizontal line and a word can be defined as the vertical distance between the upper-left corner of the bounding box of the word and the ordinate of the horizontal line. Similarly, a distance between a vertical line and a word can be defined as the horizontal distance between the upper-left corner of the bounding box of the word and the abscissa of the vertical line. Measuring these distances in the image T provides estimates of the corresponding distances in the image I.
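These line-to-word distances may be computed as in the following sketch, assuming the line in image I corresponding to a known line in image T has already been found (e.g., by the line-matching of U.S. Pat. No. 8,831,361 B2); the function names are illustrative.

    def word_to_hline_dist(word_box, line_y):
        """Vertical distance from the word's upper-left corner to the
        ordinate of a horizontal line."""
        return word_box[1] - line_y

    def word_to_vline_dist(word_box, line_x):
        """Horizontal distance from the word's upper-left corner to the
        abscissa of a vertical line."""
        return word_box[0] - line_x

    def expected_y_in_I(field_box_T, line_y_T, line_y_I):
        """The distance measured in T predicts the field's ordinate in I,
        given the matched horizontal line in each image."""
        return line_y_I + word_to_hline_dist(field_box_T, line_y_T)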