The invention generally relates to methods and apparatus for managing documents. More particularly, the present invention relates to methods and apparatus for document management which capture image data from electronic document sources as diverse as facsimile images, scanned images, and other document management systems, and provide, for example, indexed, accessible data in a standard format which can be easily integrated and reused throughout an organization or network-based system.
For many organizations, efficiently managing documents and transaction-centric business processes is a major challenge. Key business processes involving the use of numerous printed documents and/or document images are far too often fraught with inefficiencies and opportunities for error.
Without a mechanism for efficiently capturing and accessing documents and related content on-line, organizations have little opportunity to use and build on the vast information in their documents by integrating such information with the company's business processes, such as, for example, its customer relationship management process.
The widespread use of paper and form-based processes also limits an organization's ability to take full advantage of the information flowing into, within and out of the company.
Many organizations are moving toward the goal of a paperless office by implementing document-management solutions which allow them to store documents and forms as electronic images in a document management repository. In many organizations, a document is received, scanned and then a bit-mapped document image is circulated among relevant personnel. Although this approach may eliminate multiple circulating hard copies of documents, the documents must still be read, understood, and often later retrieved quickly by various personnel from different applications.
A need exists for a document management system which efficiently analyzes and indexes such bit-mapped images of documents to determine the nature of the document, and to efficiently generate index information for the document. Such index information, for example, would identify that the document is a bank statement from a particular bank, for a particular month.
The inventors have recognized that a need exists for methods and apparatus for efficiently storing, retrieving, searching and routing electronic documents so that users can easily access them.
The illustrative embodiments describe exemplary document management systems which increase the efficiency of organizations so that they may quickly search, retrieve and reuse information that is embedded in printed documents and scanned images. The illustrative embodiments permit manually associating key words as indices to images using the described document management system. In this fashion, key words are extracted and data from the images becomes automatically available for reuse in various other applications.
The illustrative embodiments provide integrated document management applications which capture and process all the types of documents an organization receives, including e-mails, faxes, postal mail, applications made over the web and multi-format electronic files. The document management applications process these documents and provide critical data in a standard format which can be easily integrated and reused throughout an organization's networks.
In an illustrative embodiment of the present invention, a client-server application referred to herein as the “Image Collaborator” is described. The Image Collaborator is also referred to herein as IMAGEdox, which may be viewed as an illustrative embodiment of the Image Collaborator. The Image Collaborator is used as part of a highly scalable and configurable universal-platform-based server which processes a wide variety of documents: 1) printed forms, 2) handwritten forms, and 3) electronic forms, in formats ranging from Microsoft Word to PDF images, Excel spreadsheets, faxes and scanned images. The described server extracts and validates critical content embedded in such documents and stores it, for example, as XML data or HTML data, ready to be integrated with a company's business applications. Data is easily shared between such business applications, giving users the information in the form they want. Advantageously, the illustrative embodiments make businesses more productive and significantly reduce the cost of processing documents and integrating them with other business applications.
In accordance with an exemplary embodiment described herein, the Image Collaborator-based document management system includes modules for image capture, image enhancement, image identification, optical character recognition, data extraction and quality assurance. The system captures data from electronic documents as diverse as facsimile images, scanned images and images from document management systems. It processes these images and presents the data in, for example, a standard XML format.
The Image Collaborator described herein processes both structured document images (ones which have a standard format) and unstructured document images (ones which do not have a standard format). The Image Collaborator can extract images directly from a facsimile machine, a scanner or a document management system for processing.
In accordance with an exemplary embodiment, a sequence of images which have been scanned may be, for example, a multiple page bank statement. The Image Collaborator may identify and index such a statement by, for example, identifying the name of the associated bank, the range of dates that the bank statement covers, the account number and other key indexing information. The remainder of the document may be processed through an optical character recognition module to create a digital package which is available for a line of business application.
The system advantageously permits unstructured, non-standard forms to be processed by processing a scanned page and extracting key words from the scanned page. The system has sufficient intelligence to recognize documents based on such key words or variations of key words stored in unique dictionaries.
The exemplary implementations provide a document management system which is highly efficient, labor saving and which significantly enhances document management quality by reducing errors and providing the ability to process unstructured forms.
A document management method and apparatus in accordance with the exemplary embodiments may have a wide range of features which may be modified and combined in various fashions depending upon the needs of a particular application/embodiment. Some exemplary features which are contemplated and described herein include:
Image Capture
Data extraction from unstructured documents
Data extraction from structured documents may be accomplished by using various unstructured techniques including locating a marker, e.g., a logo, and using that as a floating starting point for structured forms.
Structured-document data extraction using a location-based (zone-based) mechanism, in addition to the above
Indexing & Collation Logic
Predictive Modeling and auto-tuning
Intelligent Document Recognition
These, as well as other features of the present exemplary embodiments, will be better appreciated by reading the following description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings.
As shown in the accompanying drawings, an exemplary document management system includes a plurality of Image Collaborator servers 6, 12 and 18.
The Image Collaborator servers 6, 12 and 18 have access to a database server 24 via hub 20. In this fashion, the results of the document management processing by Image Collaborator 6, 12 or 18 may be stored in database server 24 for forwarding, for example, to a line of business application 26.
Each Image Collaborator server 6, 12, and 18 is likewise coupled to a quality assurance desktop 22. As explained below, in an exemplary implementation, the quality assurance desktop 22 runs client side applications to provide, for example, a verification function to verify each record about which the automated document management system had accuracy questions.
The application captures data from electronic documents as diverse as facsimile images, scanned images, and images from document management systems interconnected via any type of computer network. The Image Collaborator server 6 processes these images and presents the data in a standard format such as XML or HTML. The Image Collaborator 6 processes both structured document images (ones which have a standard format) and unstructured/semi-structured document images (ones which do not have a standard format or where only a portion of the form is structured). It can collect images directly from a fax machine, a scanner or a document management system for processing. The Image Collaborator 6 is operable to locate key fields on a page based on, for example, input clues identifying a type of font or a general location on a page.
As shown in the accompanying drawings, the Image Collaborator server 6 includes a number of processing modules, which are described in turn below.
The image enhancement module 32 operates to clean up an image to make the optical character recognition more accurate. Inaccurate optical character recognition is most often caused by a poor quality document image. The image might be skewed, have holes punched in it that appear as black circles, or have a watermark behind the text. Any one of these conditions can cause the OCR process to fail. To prevent this, the illustrative Image Collaborator pre-processes and enhances the image. The application's image enhancement module 32 automatically repairs broken horizontal and vertical lines from scanned forms and documents. It preserves the existing text and repairs any text that intersected the broken lines by filling in its broken characters. The document image may also be enhanced by removing identified handwritten notations on a form.
The image enhancement tool 32 also lets the user remove spots from the image and makes it possible to separate an area from the document image before processing the data. The Image Collaborator, in the exemplary embodiment, uses a feedback algorithm to identify problems with the image, isolate them, and enhance the image. The image enhancement module 32 preferably is implemented using industry standard enhancement components such as, for example, the FormFix Forms Processing C/C++ Toolkit. Additionally, in the present implementation, the Image Collaborator optimizes the image for optical character recognition utilizing a results repository and predictive models module 46, which is described further below.
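To make the enhancement steps above concrete, the following is a minimal, non-authoritative Python sketch of this kind of pre-OCR cleanup using the open-source Pillow library (the exemplary embodiment itself uses components such as the FormFix toolkit); the file name, threshold and skew angle are assumptions for illustration only:

    from PIL import Image, ImageFilter

    def enhance_for_ocr(path, skew_degrees=0.0, threshold=160):
        # Illustrative pre-OCR cleanup: grayscale, despeckle, deskew, binarize.
        img = Image.open(path).convert("L")             # grayscale
        img = img.filter(ImageFilter.MedianFilter(3))   # remove isolated speckles
        if skew_degrees:
            # Straighten a skewed scan; pad the corners with white.
            img = img.rotate(skew_degrees, expand=True, fillcolor=255)
        # Binarize: dark pixels become text (0), light pixels background (255).
        return img.point(lambda p: 0 if p < threshold else 255)

    # Hypothetical usage:
    # enhance_for_ocr("statement_page1.tif", skew_degrees=1.5).save("enhanced.tif")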
The Image Collaborator 6 also includes an image identification module 34 which, for example, may compare an incoming image with master images stored in a template library. Once it finds a match, the image identification module 34 sends a master image and transaction image to an optical character recognition module 36 for processing. The image identification module 34, provides the ability to, for example, distinguish a Bank of America bank statement from a Citibank bank statement or from a utility bill.
The optical character recognition module 36 operates on a received bit-mapped image, retrieves characters embodied in the bit-mapped image and determines, for example, what typeface was used, the meaning of the associated text, etc. This information is translated into text files. The text files are in a standard format such as, for example, XML or HTML. The optical character recognition module 36 provides multiple machine-print optical character recognition engines that can be used individually or in combination depending upon the user's requirements for speed and accuracy. The OCR engine, in the exemplary embodiment, supports both color and gray scale images and can process a wide range of input file types, including .tif, .tiff, JPEG and .pdf.
The Image Collaborator 6 also includes a data extraction module 37, which receives the recognized data in accordance with an exemplary embodiment from the character recognition module 36 either as rich text or as HTML text which retains the format and location of the data as it appeared in the original image. Data extraction module 37 then applies dictionary clues, regular expression rules and zone information (as will be explained further below), and extracts the data from the recognized data set. The data extraction module 37 can also execute validation scripts to check the data against an external source. Once this process is complete, the Image Collaborator server 6 saves the extracted data, for example, in an XML format for the verification and quality assurance module 42. The data extraction module 37, upon recognizing, for example, that the electronic document is a Bank of America bank statement, operates to extract such key information as the account number and statement date. The other data received from the optical character recognition module 36 is also made available by the data extraction module 37.
The Image Collaborator 6 also includes an unstructured image processing module 38, which processes, for example, a bank statement in a non-standard format and finds key information, such as account number information, even though the particular bank statement in question has a distinct format for identifying an account number (e.g., by acct.).
The Image Collaborator 6 unstructured image processing module 38 allows users to process unstructured documents and extract critical data without having to mark the images with zones to indicate the areas to search, or to create a template for each type of image. Instead, users can specify the qualities of the data they want by defining dictionary entries and clues, descriptions of the data's location on the image, and by building and applying regular expressions—pattern-matching search algorithms. Dictionary entries may, for example, specify all the variant ways an account number might be specified and identify these variants as synonyms.
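By way of a non-authoritative illustration only, the following Python sketch shows how dictionary entries with synonyms and regular expressions of the kind described above might cooperate to locate field values in raw OCR text; the dictionary contents, field names and sample text are all hypothetical:

    import re

    # Hypothetical dictionary: each entry lists the variant labels ("synonyms")
    # that may precede the desired value on a scanned page.
    DICTIONARY = {
        "AccountNumber": {
            "synonyms": [r"account\s*(?:number|no\.?)", r"acct\.?\s*(?:#|no\.?)?"],
            "value_pattern": r"(\d[\d\- ]{6,18}\d)",   # digits, dashes, spaces
        },
        "StatementDate": {
            "synonyms": [r"statement\s+date"],
            "value_pattern": r"(\d{1,2}/\d{1,2}/\d{2,4})",
        },
    }

    def extract_fields(ocr_text):
        # Search OCR output for each dictionary entry and capture its value.
        results = {}
        for field, spec in DICTIONARY.items():
            for synonym in spec["synonyms"]:
                # Look for the label followed (within a short window) by the value.
                pattern = synonym + r"\s*[:\-]?\s*" + spec["value_pattern"]
                match = re.search(pattern, ocr_text, re.IGNORECASE)
                if match:
                    results[field] = match.group(1)
                    break
        return results

    sample = "BANK OF EXAMPLE  Acct. No: 1234-5678-90  Statement Date: 03/31/2004"
    print(extract_fields(sample))
    # {'AccountNumber': '1234-5678-90', 'StatementDate': '03/31/2004'}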
Unstructured forms processing module 38 also allows the user to reassemble a document from the image by marking a search zone on the image and extracting data from it. The user can copy the data extracted from the search zone to the clipboard in Windows and paste it into a document-editing application.
When users mark a search zone, they can also indicate the characteristics of the data they want—for example, data types (dates, numerals, alphanumeric characters) or qualities of the text (hand-printed, dot-matrix, bolded text, font faces).
Another advantage of Image Collaborator's unstructured image processing module 38 is that users can do free-form processing of a document image to convert it into an editable document and still keep the same formatting as the original.
Exemplary features the Image Collaborator application uses to process unstructured images include dictionary entries and synonyms, location clues, and regular expressions, each of which is described further below.
The Image Collaborator 6 additionally includes a structured image processing module 40. The structured image processing module 40 recognizes that a particular image is, for example, a standard purchase order. In such a standard document, the purchase order fields are well defined, such that the system knows where all key data is to be found. The data may be located by, for example, well-defined coordinates of a page.
Normally, if a user wants to extract data from a structured image such as a form, he must create a template that identifies the data fields to search and all the locations on the document where the data may occur. If he needs to process several types of documents, the user needs to create templates for each of them. The Image Collaborator 6 with its structured image processing module 40 makes this easy to do, and once the templates are in place, the application processes the forms automatically.
Although a business must build templates before processing its structured documents, doing so is cost-effective if the company has standard forms, such as credit applications or subscription forms, since the company can then rapidly perform document management and data capture.
In accordance with the exemplary embodiments, the Image Collaborator application processes structured images using features such as master zone templates and a master template library, which are described further below.
Further details of structured image processing module 40 may be found in the applicants' copending application Ser. No. 10/837,889 and entitled “DOCUMENT/FORM PROCESSING METHOD AND APPARATUS USING ACTIVE DOCUMENTS AND MOBILIZED SOFTWARE” filed on May 4, 2004 by PANDIAN et al., which application is hereby incorporated by reference in its entirety. Still further details of structured image processing module 40 may be found in the applicants' copending application Ser. No. 10/361,853 and entitled “FACSIMILE/MACHINE READABLE DOCUMENT PROCESSING AND FORM GENERATION APPARATUS AND METHOD” filed on Feb. 3, 2003 by Riss et al., which application is hereby incorporated by reference in its entirety.
The quality assurance/verifier module 42 allows the user to verify and correct, for example, the extracted XML output from the OCR module 36. It shows the converted text and the source image side-by-side on the desktop display screen and marks problem characters in colors so that an operator can quickly identify and correct them. It also permits the user to look at part of the image or the entire page in order to check for problems. Once the user finishes validating the image, Image Collaborator 6 via the quality assurance module 42 generates and saves the corrected XML data. It then becomes immediately available to the organization's other business applications.
The Image Collaborator 6 also includes a results repository and predictive models module 46. This module, for example, monitors the quality assurance/verifier module 42 to analyze the errors that have been identified. The module 46, in an exemplary embodiment, determines the causes of the problems and the solutions to such problems. In this fashion, recurring problems which are readily correctable may be corrected automatically by the system.
In accordance with an exemplary embodiment, the above-described Image Collaborator 6 allows a user to specify data to find. In accordance with such an illustrative embodiment, the system also includes a template studio 28 which permits a user to define master zone templates and to build a master template library which the optical character recognition module 36 uses as it processes and extracts data from structured images.
In accordance with an exemplary embodiment, a user may define dictionary entries in three ways, for example: by entering terms and synonyms in a dictionary, by providing clues to the location of the data on the document image, and by setting up regular expressions (pattern-matching search algorithms). With these tools, the Image Collaborator 6 can process nearly any type of structured or unstructured form.
Defining Zones
When a user defines a zone, the user can specify the properties of the data he or she wants to extract: a type of data (integer, decimal, alphanumeric, date), or a data-input format (check box, radio button, image, table), for example. The user can build regular expressions, algorithms that further refine the search and increase its accuracy. The user can also enter a list of values he wants to find in the zone. To define a zone called “State,” for example, the user could enter a list of the 50 states. He can also associate a list of data from an external database, and can specify the type of validation the application will do on fields once the data is extracted.
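As one hypothetical sketch of such a zone definition (not the actual Image Collaborator data structure), a zone might carry its page coordinates, a refining regular expression, and an optional list of allowed values:

    import re
    from dataclasses import dataclass, field

    @dataclass
    class Zone:
        # Illustrative zone definition: where to look on the page and
        # what properties the extracted data must satisfy.
        name: str
        bbox: tuple              # (left, top, right, bottom) in pixels
        value_pattern: str       # regular expression refining the search
        allowed_values: list = field(default_factory=list)

        def validate(self, text):
            if not re.fullmatch(self.value_pattern, text.strip()):
                return False
            return not self.allowed_values or text.strip() in self.allowed_values

    # A "State" zone with a two-letter pattern and a (truncated) value list.
    state_zone = Zone("State", (400, 120, 520, 150), r"[A-Z]{2}",
                      allowed_values=["NY", "CA", "TX"])
    print(state_zone.validate("NY"), state_zone.validate("ZZ"))   # True False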
Defining Entries in the Dictionary
The Image Collaborator 6 uses a dictionary of terms and synonyms to search the data it extracts. Users can add, remove, and update dictionary entries and synonyms. The Image Collaborator 6 can also put a common dictionary in a shared folder which any user of the program can access.
Defining Clues and Regular Expressions
In an exemplary embodiment, the Image Collaborator 6 allows the user to define clues and regular expressions. Coupled with the search terms and synonyms in the dictionary, these make it possible to do nearly any kind of data extraction. Clues instruct the extraction engine to look for a dictionary entry in a specific place on the image (for example, in the top left-hand corner of the page). Regular expressions allow the user to describe the format of the data he wants.
Using these tools permits a user to do highly sophisticated searches. For example, he or she can set up generic clues and regular expressions in a standard dictionary, and then create a custom dictionary with other synonyms, clues, and regular expressions. If the user loads these two dictionaries into Image Collaborator, the application will use both of them to process and extract data. The program uses the custom dictionary for the first pass and the default dictionary for the second.
There can be multiple regular expressions for each synonym in the dictionary. For example, the user can write an algorithm which says, “Statement Date is a date field. It may occur in any of the following formats: mm/dd/yy, mm/dd/yyyy, mmm/dd/yy, or month/dd/yyyy.”
Here is an example of dictionary entries, synonyms, clues, and regular expressions:
A Set of Dictionary Entries
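The drawing-based example itself is not reproduced here. As a hypothetical stand-in, the following sketch shows how several regular expressions, one per accepted date layout from the paragraph above, might back a single “Statement Date” entry:

    import re

    # Several regular expressions may back a single dictionary entry; a
    # "Statement Date" value is accepted in any of these layouts.
    DATE_PATTERNS = [
        r"\b\d{1,2}/\d{1,2}/\d{4}\b",                # mm/dd/yyyy (longer year first)
        r"\b\d{1,2}/\d{1,2}/\d{2}\b",                # mm/dd/yy
        r"\b[A-Za-z]{3}/\d{1,2}/\d{2,4}\b",          # mmm/dd/yy(yy)
        r"\b[A-Za-z]{3,9}\s+\d{1,2},?\s+\d{4}\b",    # month dd, yyyy
    ]

    def find_statement_date(text):
        for pattern in DATE_PATTERNS:
            match = re.search(pattern, text)
            if match:
                return match.group(0)
        return None

    print(find_statement_date("Statement Date: March 31, 2004"))  # March 31, 2004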
The Image Collaborator 6 is a client-server application that extracts data from document images. As indicated above, it can accept structured images, from documents with a known, standard format, or unstructured images, which do not have a standard format.
The application extracts data corresponding to key words from a document image. It allows the user to find key words, verify their consistency, perform word analysis, group related documents and publish the results as index files which are easy to read and understand. The extracted data is converted to, for example, a standard format such as XML and can be easily integrated with line of business applications.
It should be understood that the components used to implement the Image Collaborator represented in the drawings are exemplary only and may be modified or combined depending upon the needs of a particular implementation.
As described above, the image enhancement module 35 operates to clean up an image to make character recognition more accurate. The image enhancement module 35 uses one or more image enhancement techniques 1, 2 . . . n, to correct, for example, the orientation of an image that might be skewed, eliminate hole punch marks that appear as black circles on an image, eliminate watermarks, repair broken horizontal or vertical lines, etc. Additionally, as is explained further below, the image enhancement processing also utilizes various parameters which are set and may be fine tuned to optimize the chances of successful OCR processing. In an exemplary embodiment, these parameters are stored and/or monitored by results repository and predictive models module 49.
The enhanced image is then processed by OCR module 37. OCR module 37 attempts to perform an OCR operation on the enhanced image and generates feedback relating to the quality of the OCR attempt which, for example, is stored in the results repository and predictive models module 49. The feedback may be utilized to apply further image enhancement techniques and/or to modify image parameter settings in order to optimize the quality of the OCR output. The OCR module 37 may, for example, generate output data indicating that, with a predetermined set of image parameter settings and image enhancement techniques, the scanned accuracy was 95 percent. The results repository and predictive model module 49 may then trigger the use of an additional image enhancement technique designed to improve the OCR accuracy.
OCR module 37, as is explained further below, utilizes various OCR techniques 1, 2 . . . n. The OCR output is coupled to feedback loop 39, which in turn is coupled to the results repository and predictive models module 49. Feedback loop 39 may provide feedback directly to the enhanced image module 35 to perform further image enhancing techniques and also couples the OCR output to the results repository and predictive models module 49 for analysis and feedback to the enhanced image module 35 and the OCR module 37. Thus, based on an analysis of results, the optimal techniques can be determined for getting the highest quality OCR output. This process is repeated multiple times until OCR output of the desired degree of quality is obtained.
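The following is a schematic, non-authoritative Python sketch of such a feedback loop; the technique names, the simulated accuracy table and the quality target are assumptions standing in for real OCR measurements:

    # Simulated accuracy results; a real system would measure conversion
    # quality for each (enhancement technique, OCR technique) combination.
    SIMULATED_ACCURACY = {
        ("enhance-1", "ocr-1"): 0.82,
        ("enhance-2", "ocr-1"): 0.91,
        ("enhance-2", "ocr-2"): 0.97,
    }

    def run_ocr(enhancement, ocr_technique):
        return SIMULATED_ACCURACY.get((enhancement, ocr_technique), 0.75)

    def tune(enhancements, ocr_techniques, target=0.95):
        # Greedy feedback loop: keep a change only if it improves accuracy,
        # and stop once the target quality is reached (cf. modules 39 and 49).
        best = (enhancements[0], ocr_techniques[0])
        best_score = run_ocr(*best)
        history = [(best, best_score)]          # cf. results repository 49
        for enh in enhancements:
            for ocr in ocr_techniques:
                score = run_ocr(enh, ocr)
                history.append(((enh, ocr), score))
                if score > best_score:          # a beneficial change is kept
                    best, best_score = (enh, ocr), score
                if best_score >= target:
                    return best, best_score
        return best, best_score

    print(tune(["enhance-1", "enhance-2"], ["ocr-1", "ocr-2"]))
    # (('enhance-2', 'ocr-2'), 0.97)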
In accordance with an exemplary implementation, a template library 31 for structured forms processing is utilized by the enhanced image module 35 and OCR module 37 for enabling the modules 35 and 37 to identify structured forms which are input via input image module 33. In this fashion, a form template may be accessed and compared with an input image to identify that the input image has a structure which is known, for example, to be a Bank of America account statement. Such identification may occur by identifying, for example, a particular logo in an area of an input image by comparison with a template. The identification of a particular structured form from the template library 31 may be utilized to determine the appropriate image enhancement and/or OCR techniques to be used.
After OCR output of the desired degree of quality has been obtained, further processing occurs, as is explained in detail below, via an intelligent document recognition system 41. The intelligent document recognition system 41 includes all the post-OCR processing that occurs in the system that is explained further below. This processing includes dictionary-entry extraction to identify key fields in an input image, verification of extracted data and generation of indexed and collated documents preferably in a standard format such as XML (47).
Dictionary module 43 represents one or more dictionaries that are described further below that identify, for example, the set of synonyms that represent a key document term such as a user's account number, which may be represented in the dictionary by acct., account no., acct. number, etc. As is explained further below, the intelligent document recognition module 41 accesses one or more dictionaries during the document recognition process.
Proximity parser 45 provides to the intelligent document recognition module 41, information indicating, for example, that certain data should appear in a predetermined portion of a particular form. For example, proximity parser 45 may indicate that a logo should appear in the top left hand corner of the form, thereby identifying a certain proximate location of the logo.
If the document is not recognized, then one or more dictionaries are utilized (58) to extract dictionary entries from processed images to retrieve, for example, a set of “synonyms” relating to an account number, date, etc. In this fashion, all variant representations of a particular field are retrieved from dictionary 60. The dictionary 60 is applied by scanning the page to search for terms that are in the dictionary. In this fashion, the bank statement may be scanned for an account number, looking for all the variant forms of “account number” and further searched to identify all the various fields that are relevant for processing the document.
The data extraction process also involves proofing and verifying the data (62). A quality assurance desktop computer may be used to display what the scanned document looks like together with a presentation of, for example, an account number. An operator is then queried as to whether, for example, the displayed account number is correct. The operator may then indicate that the account number is accurate or, after accessing a relevant database, correct the account number as appropriate.
The output of the data extraction process is preferably a file in a standardized format, such as XML or HTML (64). The processed image document may, for example, contain the various indexed fields in a specially structured XML file. The actual optical character recognition output text may also be placed in an XML file. The system also stores collated files 66 to permit users to group together associated files to identify where a multiple page file begins and ends. The indexed files 68 contain the key fields that were found in the dictionary together with the field values to include, for example, the account numbers and dates for a given bank statement.
The Image Collaborator 6 will now be described in further detail. As indicated above, in an illustrative embodiment, the Image Collaborator 6 is a client-server application that extracts data from structured images from documents with a known, standard format, or unstructured or semi-structured images, which do not have a standard format. In accordance with an exemplary embodiment, the application includes illustrative features such as image capture, data extraction from structured and unstructured documents, indexing and collation logic, predictive modeling and auto-tuning, and intelligent document recognition, each of which is described herein.
In accordance with an exemplary embodiment, as indicated above, the Image Collaborator 6 is built on a client-server framework. Image enhancements and processing are performed in the server. Verification of the extracted data occurs on the client. In this illustrative implementation, the server operates automatically without waiting for instructions from the user.
A significant feature of the application is to extract valuable information from a set of input documents in the form of digital images. In an illustrative embodiment, the user specifies the desired data to extract by entering keywords in a dictionary.
In the illustrative embodiment, while processing the input images, the application first checks the validity of the input images. The image pickup service picks up, for example, TIF (or JPEG) images and processes them only if they satisfy the image properties of compression, color scale, and resolution required for image identification and OCR extraction. Next, the application checks the input images for quality problems and corrects them if necessary.
The Image Collaborator 6 allows the user to store image templates for easy identification, together with zone information that marks document images with the areas from which to extract data. When the input image's pattern matches a pre-defined template, the file is identified and grouped separately. The application applies zone information from the template to the image before sending it for optical character recognition.
If an input image does not match a predefined template (if it doesn't have zones to search), the OCR module extracts data from the entire document image. The application then performs a regular-expression-based search on the output of the OCR module, in order to extract the values for the dictionary entries.
The user then uses the Data Verifier 42 to validate the extracted data.
In accordance with exemplary embodiments, the application also performs the verification, indexing and collation operations described further herein.
Since it presents the extracted data as XML, an industry standard data format, Image Collaborator 6 allows the user to immediately work with it in his line of business applications.
Once the input image (75) from a facsimile or scanner is received in an image pickup folder (77), the system automatically proceeds to process such image data without any required user intervention. Thus, upon a user scanning a five page bank statement and transmitting the scanned statement to a predetermined storage folder (77), the system detects that image data is present and begins processing.
In an exemplary implementation, the Image Collaborator may require the image files to be in a predetermined format, such as in a TIF or JPEG format. The image validation processing (79) assures that the image is compatible with the optical character recognition module requirements. The image validation module 79 validates the file properties of the input images to make sure that they have the appropriate types of compression, color scale, and resolution and then sends them on for further processing.
If the files don't have the appropriate properties, the module 79 puts them in an invalid images folder (81). The particular file properties that are necessary to validate the image file will vary depending upon the optical character recognition system being utilized. Input images in the invalid images folder (81) may be the subject of further review either by a manual study of the input image file or, in accordance with alternative embodiments, an automated invalid image analysis. If the input image is from a known entity or document type, the appropriate corrective action may be readily determined and taken to correct the existing image data problem.
If the image data is determined to be valid, pre-identification image enhancement processing (83) takes place. The pre-identification image enhancement processing serves to enhance the OCR recognition quality and assist in successful data extraction. The pre-identification enhancement module 83 cleans and enhances the images. As indicated above, in an illustrative embodiment, the module 83 removes watermarks, handwritten notations, speckles, etc. The pre-identification image enhancement may also perform deskewing operations to correctly orient the image data which was misaligned due, for example, to the data not being correctly aligned with respect to the scanner.
The Image Collaborator 6, after pre-identification image enhancement, performs image identification processing (85). In image identification processing 85, the application attempts to recognize the document images by matching them against a library of document templates.
If a match is found, the application applies post-identification enhancement (87) to the images by applying search zones to them. In this fashion, for example, the image identification 85 may recognize a particular logo in a portion of the document, which may serve to identify the document as being a particular bank statement form by Citibank. The image identification software may be, for example, the commercially available FormFix image identification software.
As a result of the image identification processing (85), images are either identified or not identified. Upon identification of an image, the image data undergoes post-identification image enhancement (87). In the post-identification image enhancement module 87, the application uses the zone fields in the document template to apply zones to the document image. Zones are the areas the user has marked on the template from which the user desires to extract data. Thus, the zones identify the portion of an identified image which has information of interest such as, for example, an account number. For identified images, image enhancement can be optimized for the type of document. As an illustrative example, a particular document may be known to always contain a watermark; therefore, enhancement can be tuned accordingly.
If a match is not found, the image file is forwarded to module 89 where an optical character recognition extraction is performed on unidentified/unrecognized image files. Thus, OCR extraction is performed on document images which have no zones. Such image data cannot be associated with a template, and is therefore characterized as being “unidentified.” Therefore, the OCR module extracts the content from the entire data file. Under these circumstances, the OCR module will scan the entire page and read whatever it can read from an unidentified document.
For identified objects, after post-identification image enhancement (87) has been performed, the OCR module 89 processes images which have been identified by matching them to a template. In this case, OCR module 89 performs optical character recognition only on the data within the zones marked on the image. The template may also contain document specific OCR tuning information, e.g., a particular document type may always be printed on a dot matrix printer with Times Roman 10 point font.
After optical character recognition processing 89, dictionary-entry extraction and pattern matching operations are performed (91). In the exemplary embodiment, an HTML parser conducts a regular-expression search for the dictionary entries in dictionary and clue files 93. The application writes the extracted data to, for example, an XML or HTML file and sends it to a client side process for data verification. In this fashion, the output of the optical character recognition module 89 is scanned to look for terms that have been identified from the dictionary and clues file 93 (e.g., account number, date, etc.) and extract the values for such terms from the image data. In an illustrative implementation, the user defines the dictionary entries he or she wants to extract. The application writes them to the dictionary and clues file 93.
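A minimal sketch of writing such extracted data out as XML follows; the element and attribute names are illustrative only and are not the application's actual schema:

    from xml.etree import ElementTree as ET

    def to_unverified_xml(fields, source_image):
        # Wrap extracted field/value pairs in a simple XML document;
        # tag and attribute names here are hypothetical.
        root = ET.Element("Document", source=source_image)
        for name, value in fields.items():
            entry = ET.SubElement(root, "Field", name=name)
            entry.text = value
        return ET.tostring(root, encoding="unicode")

    print(to_unverified_xml(
        {"AccountNumber": "1234-5678-90", "StatementDate": "03/31/2004"},
        "statement_page1.tif"))
    # <Document source="statement_page1.tif"><Field name="AccountNumber">...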
The output of the optical character recognition module 89 or the output of the dictionary-entry extraction module 91 results in unverified extracted files of a standard format such as XML or HTML. These files are forwarded to a data verifier module 95. The data verifier module 95 permits a user to verify and correct extracted data. When the user/administrator at the data verifier module 95 accepts the batch of images, the application saves the data as, for example, XML data. After data verification operations (95), a field and document level validation (97) may be performed to, for example, verify document account numbers or other fields. The output of the field and document level validation consists of verified data in, for example, either an XML or HTML file.
The verified data may then be sent to a line of business application 99 for integration therewith or to a module for collation into a multi-page document (101) and/or for indexing (103) processing. Indexing (103) is a mechanism that involves pulling out key fields in an image file, such as account number and date. These key fields are then used for purposes of indexing the files for retrieval. In this fashion, bank statements, indexed to a particular account number, are readily retrieved. Thus, in an illustrative embodiment, the application collates the image files into multi-page documents, indexes them and integrates them with the dictionary entries in the verified XML output.
The Image Collaborator provides access to the system API (109) to permit a user to perform the operations described herein in a customized fashion tailored to a particular user's needs. The API gives the user access to the raw components of the various modules described herein to provide customization and application specific flexibility.
The raw image data from the image files is then processed by an image validator/verifier (111), which, as previously described, verifies whether the image data is supported by the system's optical character recognition module (121). If the image fails the validation check, then the image file is rejected and forwarded to a rejected image folder (108).
If it is verified that the image data is supported by the system, the image data is transferred to an image converter (113). The image converter 113 may, for example, convert the image from a BMP file to an OCR-friendly TIF file. Thus, certain correctable deficiencies in the image data are corrected during image converter processing (113).
After processing by image converter 113, an OCR-friendly image is forwarded to an image enhancement module 115 for pre-identification image enhancement, where, for example, as described above, watermarks, etc. are removed. Thereafter, a form identification mechanism 117 is applied to identify the document based on an identified type of form. In this embodiment, structured forms are detected by Form Identification 117, and directed to, for example, the applicants' SubmitIT Server for further processing as described in the applicants' copending application “FACSIMILE/MACHINE READABLE DOCUMENT PROCESSING AND FORM GENERATION APPARATUS AND METHOD,” Ser. No. 10/361,853, filed on Feb. 11, 2002. For unstructured and semi-structured forms, depending upon the detected type of form, the image data may be processed together with forms of the same type in different processing bins. Thus, in this fashion, bank statements from, for example, Bank of America may be efficiently processed together by use of a sorting mechanism. After processing by Form Identification 117, an unstructured or semi-structured form is forwarded to post-identification image enhancement, where an identified form may be further enhanced using form-specific enhancements, as indicated by image enhancement module 119 in the drawings.
The output from the image enhancement module 119 is then run through an OCR module 121 using, for example, commercially available ScanSoft OCR software. The output of OCR module 121 may be XML and/or HTML. This output contains recognized characters as well as information relating to, for example, the positions of the characters on the original image and the detected font information.
Turning next to the dictionary application processing represented in the drawings, a document specific dictionary (127) is applied to the OCR output first, followed by a default dictionary (137).
For each key field in the document specific dictionary, the processing at 127 will search the OCR text. For each match, it filters the match with the document specific abstract characteristics or “gestures” to accept only matches that satisfy all requirements. If all required key fields are found, the document is deemed to be of the specific type. As such, all remaining fields in the document specific dictionary are searched for in like manner. If all required key fields are not found in the image, the document specific dictionary processing is bypassed. After application of the document specific dictionary 127 is complete, the default dictionary is applied (137). The OCR text is searched for all fields in the default dictionary and similarly filtered with the default dictionary abstract characteristics.
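A hypothetical sketch of this two-pass search, with stand-in dictionaries mapping each field name to a single regular expression, might look as follows:

    import re

    def extract(text, dictionary):
        # Minimal matcher: dictionary maps field name -> regular expression.
        found = {}
        for name, pattern in dictionary.items():
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                found[name] = match.group(1)
        return found

    SPECIFIC = {"AccountNumber": r"acct\. no: ([\d\-]+)", "Branch": r"branch: (\w+)"}
    DEFAULT = {"Date": r"(\d{2}/\d{2}/\d{4})"}

    def two_pass_extract(text):
        results = extract(text, SPECIFIC)
        if set(results) != set(SPECIFIC):    # required key fields missing:
            results = {}                     # bypass the specific dictionary
        for name, value in extract(text, DEFAULT).items():
            results.setdefault(name, value)  # default dictionary fills the gaps
        return results

    print(two_pass_extract("Acct. No: 11-22  Branch: Midtown  03/31/2004"))
    # {'AccountNumber': '11-22', 'Branch': 'Midtown', 'Date': '03/31/2004'}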
If a match is found for an entry in a document specific dictionary or an entry in the default dictionary and that entry specifies a script callout, the script callout is executed (139), which will attempt to validate the data in the associated field. The script callout 139 may perform the validation by checking an appropriate database. Thus, for every element that is in the dictionary, an opportunity exists for specialized script to be created to initiate, for example, a validation check. In this fashion, a particular user may verify that the account number is a valid account number for a particular entity, such as Bank of America.
After the script callout 139 validation, if any is specified, the system creates an unverified XML file (141) which may be stored (142) for subsequent later use and to ensure that the OCR operations need not be repeated.
After creating the unverified XML, pre-verification indexing processing (143) is performed to determine whether verification is even necessary in light of checks performed on indexing information associated with the file. If the document need not go through the verification process, it is stored in index files 144 or, alternatively, the routine stops if the document cannot be verified (151).
If the unverified XML needs to be verified, it is forwarded to a client side verification station, where a user will inspect the XML file for verification purposes. The verified XML file may be stored 148 or sent to post-verification indexing to repeat any prior indexing since the verification processing may have resulted in a modified file. In this fashion, the index file is indexed, for example, based on a corrected account number, which was taken care of during verification processing at 145. Thereafter, collation operations on, for example, a multi-page file such as bank statement may be performed (149) after which the routine stops (153).
Turning next to the individual processing stages shown in the system flowcharts, each stage is described in further detail below.
Turning next to the image validation processing: if a file is found that needs image verification, a check is made at block 194 to determine whether the file satisfies certain file properties for OCR compatibility, such as those identified above, which properties may vary depending upon a given application implementation. If the file satisfies the file properties criteria, then the file is processed for pre-identification enhancement (196). If the file does not satisfy the file properties, then the file is placed in the invalid file folders for the user to correct (198). In accordance with an exemplary embodiment, all files which may be automatically converted to have file properties which are compatible with the particular OCR software used in a given application will be automatically converted and not placed in an invalid file folder for manual correction.
Turning next to the pre-identification enhancement processing: the pre-identification enhancement module repairs faint or broken characters and removes watermarks from the image. Additionally, the pre-identification enhancement module straightens skewed images, straightens rotated images, and removes document borders and background noise. For example, a black background around a scanned image adds significantly to the size of the image file. The application's pre-identification enhancement settings automatically remove the border.
Further, the pre-identification enhancement settings may, in an exemplary embodiment, be used to ignore annotations such that forms will be identified correctly, even when the input image files contain such annotations that were not part of an original document template. Similarly, the settings are used to correctly identify a form even when the image contains headers or footers that were not on the original document template.
The pre-identification enhancement processing additionally handles margin cuts and incomplete images. Thus, the settings identify forms even when there are margin cuts in the image, and correctly identify incomplete images. The application also aligns a form with a master document to help find the correct data.
In an exemplary embodiment, white text on a black background will be turned into black text on a white background. Since, in this exemplary embodiment, the OCR software cannot recognize white text in black areas of the image, the pre-identification enhancement settings create reversed-out text by converting the white text to black and removing the black boxes. Further, in accordance with an exemplary embodiment, the pre-identification enhancement processing removes lines and boxes around text, removes background noise and dot shading. Thus, the system has a wide range of pre-identification enhancement settings that may vary from application to application. By way of example only, illustrative settings may be as follows:
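The original settings listing is not reproduced here; the following entirely hypothetical configuration sketch merely illustrates the kinds of switches named above:

    # Hypothetical pre-identification enhancement settings; the actual
    # setting names and defaults are implementation specific.
    PRE_IDENTIFICATION_SETTINGS = {
        "repair_faint_or_broken_characters": True,
        "remove_watermarks": True,
        "straighten_skewed_images": True,
        "straighten_rotated_images": True,
        "remove_borders_and_background_noise": True,
        "ignore_annotations": True,
        "ignore_headers_and_footers": True,
        "handle_margin_cuts": True,
        "invert_white_text_on_black_background": True,
        "remove_lines_and_boxes_around_text": True,
        "remove_dot_shading": True,
    }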
If a file is found which needs pre-identification image enhancement, then such pre-identification image enhancement is performed (204) and the file is processed for image enhancement (206) to repair faint or broken characters, remove watermarks, straighten skewed images, straighten rotated images, remove borders and background noise, ignore annotations and headers and footers, remove margin cuts, etc.
Turning next to the document image identification processing (85):
In the document image identification process, the image identification processing module 85 compares input images with the set of document templates in its library. The application looks for a match. If it finds one, it puts the document in a “package,” which is a folder containing other documents of that type. If no package exists, the application creates one. When the application finds more documents of that type, it drops them into the same package, so that all similar documents are in the same folder.
If a document image doesn't match any of the templates, the application drops it into an unidentified images folder.
An unstructured image is one which doesn't have a standard format. It is most often the type of image the application considers “unidentified.” The unstructured images are, however, processed utilizing the dictionary methodology described herein. Settings for the image identification module are stored in the application settings in a package identification file.
If a match does exist, a check is then made to determine whether a package exists for the file (233). If a package exists for the file, thereby indicating that a folder exists for other documents of that type, then the file is placed in the corresponding folder (235), to thereby appropriately sort the file.
If the check at block 233 indicates that a package does not exist for the file, then a package is created and the file is placed in it (237).
A check is then made at block 239 to determine whether all files have been processed. If all files have not been processed, the routine branches back to block 225 to access a further file. If all files have been processed, then post-identification enhancement is initiated (241).
Turning back to the post-identification image enhancement processing (87):
When the post identification image enhancement processing module 87 finds a match for a structured-document image (when it locates a template that matches and knows what type of document it is), the application maps the zones from the template onto the image. Later, when it performs the optical character recognition, the OCR module searches for data only within those zones. The application stores the extracted values against the same field name in the zone file. It can also merge the extracted data into a clean, master image, preserving the values in non-data fields.
For unstructured, unidentified images, OCR is performed on the entire image. Afterward, the extraction of the necessary data takes place by means of the dictionaries and search logic described herein.
A check is then made to determine whether all the image files have been processed (256). If all the files have not been processed, the routine branches back to block 250 to process the next file. If the check at block 256 indicates that all the files have been processed, then post-identification enhancement processing stops (258). Thus, as a result of the post-identification enhancement for structured documents, the zone information is accessed and made available for OCR.
Turning next to the Image Collaborator system flowchart optical character recognition processing (89): this module performs optical character recognition on the image documents and extracts the necessary data, storing it, for example, in an XML or HTML format. In an exemplary embodiment, the extracted data is stored as HTML for compatibility with the searching mechanism that is utilized to find synonyms for a given term.
Turning back to the feedback loop processing described above, in an exemplary embodiment an image 33 would be enhanced 35 using image enhancement technique #1. OCR 37 would process the enhanced image using OCR technique #1 and make an entry in the results repository 49 as to the quality of the conversion, e.g., percent conversion accuracy. The feedback loop mechanism 39 would apply a predictive model to suggest a change to be made, for example, to image enhancement technique #1, yielding image enhancement technique #2. Next, it would return control to image enhancement 35, where image enhancement technique #2 would be applied along with OCR technique #1. The feedback mechanism 39 would analyze the results to determine if the change improved or degraded the overall quality of the results. If the result was deemed beneficial, the change would be made permanent. Next, the feedback mechanism might adjust OCR technique #1 into technique #2 and the process would repeat. In this way the configurations of image enhancement 35 and OCR 37 could be optimized.
If zone files are available for structured images, the required dictionary entries are extracted directly. For unstructured images, when zone files are not available, in an exemplary embodiment, an HTML parser extracts the dictionary entries.
All images have undergone pre-identification enhancement, so OCR accuracy is optimized; the optical character recognition in the exemplary embodiment is accordingly more accurate than OCR performed on raw, unenhanced images.
The various parameter values for OCR tuning vary from application to application. The following Table is an illustrative example of parameters for tuning the optical character recognition module:
Exemplary Values for OCR Tuning
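The original table is not reproduced here; as an entirely hypothetical stand-in, parameters of the following kind are typical of what an OCR engine exposes for tuning:

    # Hypothetical OCR tuning values; real names and values vary per engine.
    OCR_TUNING = {
        "language": "English",
        "color_mode": "grayscale",        # cf. color/gray scale support above
        "expected_font": "Times Roman",   # cf. the dot-matrix example above
        "point_size": 10,
        "recognize_handprint": False,
        "minimum_confidence": 0.90,
    }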
A check is then made to determine whether all the files have been processed (283). If all the files have not been processed, the routine branches back to block 275 to process the next file. Once all the files have been processed, the optical character recognition processing ends (285).
Turning next to the Image Collaborator system flowchart dictionary-entry extraction (91): in an exemplary embodiment, an HTML parser extracts the dictionary entries. It converts the HTML source generated during OCR extraction into a single string. In the exemplary embodiment, the parser writes the content that is between the <Body> and the </Body> HTML tags in the string to a TXT file. The parser then conducts a regular-expression-based search on the text files for the dictionary entries and extracts the necessary data. It populates the extracted entries into an extracted XML file.
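A simplified, non-authoritative sketch of this parsing step (a regular expression standing in for a full HTML parser) might read:

    import re

    def html_body_to_text(html_source):
        # Keep only the content between the <Body> and </Body> tags,
        # strip the remaining markup, and collapse whitespace into one
        # searchable string (a simplification of a full parser).
        body = re.search(r"<body[^>]*>(.*?)</body>", html_source,
                         re.IGNORECASE | re.DOTALL)
        text = body.group(1) if body else html_source
        text = re.sub(r"<[^>]+>", " ", text)         # drop tags
        return re.sub(r"\s+", " ", text).strip()     # collapse whitespace

    sample = "<html><Body><p>Acct. No: 11-22</p><p>03/31/2004</p></Body></html>"
    print(html_body_to_text(sample))   # Acct. No: 11-22 03/31/2004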
A check is then made to determine whether all files have been processed (308). If all files have not been processed then the routine branches to block 300 to process the next file. When all the files have been processed, the dictionary entry extraction processing ends (310).
The dictionary and the clues file contain the dictionary entries the user wants to extract and their regular expressions. Sometimes the application misses a certain field while extracting dictionary entries from a set of images. The Image Collaborator 6 allows the user to write a call-out, a script to pull data out of the processing stream, perform an action upon it, and then return it to the stream. The call-out function helps the user to integrate Image Collaborator 6 with the user's system during the data-extraction process. With a call-out script, a user can check the integrity of data, copy a data file, or update a database. One exemplary embodiment of this script would be Microsoft Visual Basic Script (VBScript). Two types of call-out scripts are supported. Field level call-out scripts are performed on a field by field basis as the data is being extracted. Document level call-out scripts are performed once per document image at the completion of the extraction process. Therefore, a document level call-out script allows the document as a whole to be evaluated for consistency and completeness.
In data validation when processing a set of bank statements, for example, the dictionary entries might be “Bank Name”, “Account Number,” and “Transaction Period.” If, on a given page, the application fails to extract “Bank Name,” but correctly identifies “Account Number,” the document level call-out function reasons that the account number can only refer to one bank name. It makes the correct inference and fills in the value for “Bank Name” that corresponds to the account number it has discovered.
The user sets up the call-out script in the dictionary editor. The script can do data validation at the field- and document-levels.
Using a Call-Out for Field-Level Validation
On a bank statement, the dictionary entries might be “Bank Name”, “Account Number,” and “Transaction Period.” To validate extracted field level results, the user can define a field level validation in the dictionary editor. He can specify that “Account Number” for Bank XYZ must be exactly 10 digits. During data extraction, if the data for the field “Account Number” for Bank XYZ does not meet this requirement, the administrator can create an error message for the application to display in the Data Verifier as a quick reminder to the user.
Using a Call-Out for Document-Level Validation
A call-out can also do validation at the document level. For example, again extracting dictionary entries from a bank statement, the OCR process may have correctly extracted the dictionary entries but interchanged the values of the “From” date and “To” date in a particular document. This error leads to wrong transaction dates, since a “From” date cannot be later than a “To” date. The user can write a script in the dictionary editor to reverse the two values, or show an error message.
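Hedged sketches of a field-level and a document-level call-out corresponding to the two examples above might look as follows (the 10-digit rule and the date swap come from the examples; the date format and function names are hypothetical):

    import re
    from datetime import datetime

    def validate_account_number(value):
        # Field-level call-out: Bank XYZ account numbers must be exactly
        # 10 digits (the rule from the field-level example above).
        return bool(re.fullmatch(r"\d{10}", value))

    def validate_statement_period(fields):
        # Document-level call-out: a "From" date can never be later than
        # the "To" date; if it is, swap the two values back.
        start = datetime.strptime(fields["From"], "%m/%d/%Y")
        end = datetime.strptime(fields["To"], "%m/%d/%Y")
        if start > end:
            fields["From"], fields["To"] = fields["To"], fields["From"]
        return fields

    print(validate_account_number("0123456789"))   # True
    print(validate_statement_period({"From": "04/30/2004", "To": "04/01/2004"}))
    # {'From': '04/01/2004', 'To': '04/30/2004'}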
As noted above in conjunction with the dictionary application processing, a document specific dictionary is searched first and the default section of the clues file thereafter.
For each regular expression pattern, a check is made at 329 to determine whether a document specific key dictionary entry is found. If a document specific key dictionary entry is found, then a search is conducted for the regular expression pattern of all the other dictionary entries defined for the specific document (337).
If the document specific dictionary entry is not found as indicated by the check at 329, then a search for the regular expression pattern of the dictionary entries is defined from the default section of clues (331). Thus, the check at block 329 determines if the document being searched is the type of document for which the document specific dictionary was designed.
After the search is performed at block 337 for other entries defined for the specific document, a check is made as to whether the regular expression pattern for the dictionary entry specific to the document was found (339). If so, the dictionary entry and its corresponding value are stored in a table (335).
Similarly, after the search is performed at block 331, a check is made as to whether the regular expression pattern is found for a dictionary entry from the default section of the clues file (333). If a regular expression pattern is found based on the check at block 333, the dictionary entry and its corresponding value are stored in a table (335).
If the processing at block 333 or 339 does not result in a regular expression pattern being found, the routine branches to 341 and nothing is stored against the corresponding dictionary entry in the table (341). The storage operation at 335 generates a table containing dictionary entries and their corresponding extracted values (343).
Thereafter, the dictionary entries and the corresponding values are written from the table (343) into an XML file along with the zone coordinates where the data was found (345). The zone coordinates define a location on the document where the data was found.
A check is then made to determine whether all the TXT files have been processed (347). If not, the routine branches back to block 325 for the next TXT file. If the check at 347 reveals that all the TXT files have been processed, then the dictionary-related routine ends (349).
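The flow of this routine may be summarized by the following C# sketch (a minimal sketch only; the method and variable names are hypothetical, and the flowchart block numbers from the description above are given in comments):

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Hypothetical sketch of the dictionary-matching loop for one TXT file.
static Dictionary<string, string> ExtractEntries(
    string text,                              // OCR text of one TXT file
    string keyPattern,                        // document-specific key entry
    Dictionary<string, string> documentClues, // document section of clues
    Dictionary<string, string> defaultClues)  // default section of clues
{
    var table = new Dictionary<string, string>();
    bool keyFound = Regex.IsMatch(text, keyPattern);          // block 329
    var patterns = keyFound ? documentClues : defaultClues;   // 337 / 331
    foreach (var pair in patterns)
    {
        var m = Regex.Match(text, pair.Value);                // 339 / 333
        if (m.Success)
            table[pair.Key] = m.Value;                        // store (335)
        // otherwise nothing is stored for the entry (341)
    }
    return table;  // written to XML with zone coordinates (343, 345)
}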
The Image Collaborator 6, as noted previously, also performs various client side functions to allow a user to perform the following functions:
Data Verifier
The Image Collaborator 6 extracts dictionary entries from the input images and stores the content as temporary XML files in the Output folder. The user can then verify the data with the Data Verifier module. It displays both the dictionary entry and the source image it came from. The user can visually validate and modify the extracted data for all of the fields on a document. Users can also customize messages to identify invalid data. The Data Verifier also stores the values the user has recently accessed, allowing the user to easily fill in and correct related fields.
The application saves the data the user has verified in the Output folder as XML.
In order to simplify the user's work and improve his efficiency, Image Collaborator provides the following functionalities in the Data Verifier.
Smart Copy
The Smart Copy function enables the Data Verifier to fill in or replace a field value with the value of a similar field on the last verified page.
For example: If the value of the “Bank Name” field on the last verified page of a bank statement is SunTrust Bank, but the field is blank or incorrect on the current page, the Smart Copy function gives the user the option to fill in the value “SunTrust Bank” by clicking the “Bank Name” field on the current page and selecting Smart Copy. The application copies in the previously-verified value.
Copy All Indexes
Copy All Indexes operates like Smart Copy. It copies the values of all of the fields from the page the user last verified to the same fields on the current page.
Indexing
Image Collaborator allows users to index the verified XML data. The indexes are defined in the Index Variable file, under Application Settings.
Collation
Collating regroups a document's separate pages into one document again. Inputs to Image Collaborator are single-page image files; they were often originally part of a complete document but were separated in order to be scanned. Once Image Collaborator has processed the page files, the application needs to collate them to recombine them into a single document file. It groups them together based on a set value, the collation key, defined in the Application Settings.XML file. The key is usually a field name defined in the dictionary or clues file.
The user collates the verified documents by clicking the Approve Batch button in the Data Verifier.
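For illustration, the grouping step behind collation might look like the following C# sketch (the PageData record and the way the key value is obtained are assumptions; the actual collation key is defined in the Application Settings.XML file as noted above):

using System.Collections.Generic;
using System.Linq;

// Hypothetical page record: a single-page image file and the value
// of its collation key (for example, the extracted account number).
record PageData(string FilePath, string CollationKey);

// Group single-page files that share a collation-key value so they
// can be recombined into one multi-page document.
static IEnumerable<IGrouping<string, PageData>> Collate(IEnumerable<PageData> pages)
    => pages.GroupBy(p => p.CollationKey);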
Integration With Line of Business Applications
Since the output from Image Collaborator is XML, it is immediately available to the user's line of business applications.
In using the Image Collaborator application, before document image data is processed, the user configures the Image Collaborator based on the needs of a given implementation and defines the dictionary entries that are to be found. Application parameters are configured in the Application Settings window. Resource files that are needed are located, and folder locations are chosen for input data, output data, intermediate storage, error logs, reports, etc.
In an illustrative implementation, the following resource files and folder locations may be used:
Resource Files
The resource files generally used are:
1. Document (document-specific) dictionary. The clues (XML) file to which the dictionary entries defined in the dictionary are written.
2. Standard (default) dictionary. The dictionary that contains the entries to be extracted from the input images.
3. Image-checking parameter file. The file that contains the required file properties that need to be verified before processing an input image.
4. Image classification file. The file that contains image templates with zones for extracting data.
5. OCR settings file. The file that contains all configurable parameters that directly affect the OCR output.
6. Package identification file. This file contains package templates for various image types. These templates are matched with input images. If a match is found, a separate package is created for the matched image.
7. Pre-identification enhancement configuration file. This file contains all the necessary parameters for cleaning up an input image.
8. Unidentified enhancement configuration file. This file contains all the necessary parameters for cleaning up and processing input images that do not match a template image.
9. Image index variable file. This file contains the variables which the application uses to index the output files.
Folder Locations
The folder locations include:
1. Image pickup folder. The location from which the application picks up the input images for processing.
2. Collated image output folder. Once the user verifies the images and approves the batch, the application collates the images, regrouping them into documents again, and puts the collated images in this folder.
3. Invalid files folder. When the input images do not meet the required image properties the application stops processing them and puts them in this folder.
4. Package(s) folder. The location where packages are created for the identified input images.
5. Unverified output folder. The folder to which the application writes the extracted dictionary entries as XML files.
6. Processed input files folder. The folder in which the application stores processed, enhanced images.
7. Zone files folder. The folder which contains the zone files that provide zone information for the identified files.
8. Indexed files folder. The folder which holds the indexed XML files the application creates after processing and data verification.
9. Intermediate files folder. The folder in which the application temporarily stores all intermediate files.
Other application settings include ones for setting up the package folder prefix, the unidentified package name, the unidentified document name, and the general settings for displaying and logging error messages.
With respect to creating and managing a dictionary, dictionary entries and regular expressions and clue entries may be defined as follows:
Defining Dictionary Entries
A dictionary is a reference file containing a list of words with information about them. In Image Collaborator, the dictionary contains a list of terms that the user is looking for in a document.
The user defines the dictionary entries he needs, and provides all the necessary support information by creating or editing a dictionary file.
Support information for a dictionary entry includes synonyms (words which are similar to the original entry) and regular expressions (pattern-matching search algorithms).
Defining Regular Expressions
A regular expression is a pattern used to search a text string. We can call the string the "source." The search can match and extract from the source a text sub-string that fits the pattern specified by the regular expression. For example:
‘1[0-9]+’ matches 1 followed by one or more digits.
Image Collaborator gives users the flexibility to define regular expressions for the dictionary entries they want to find, at both the field and the document level.
At the field level, the user defines regular expressions for every dictionary entry, including all possible formats in which they may occur. For example, a regular expression for “From” date might include the following:
Last Statement:\s+(?<FromDate>\w+\s+\d+,\s+\d+)
BEGINNING DATE\s+(?<FromDate>\w+\s+\d+,\s+\d+)
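These patterns use .NET-style named capture groups. A minimal C# sketch of applying one of them follows (the sample source line is invented for illustration):

using System;
using System.Text.RegularExpressions;

// Apply the "From" date pattern to a sample line of OCR text.
var source = "Last Statement: January 5, 2004";
var match = Regex.Match(source, @"Last Statement:\s+(?<FromDate>\w+\s+\d+,\s+\d+)");
if (match.Success)
    Console.WriteLine(match.Groups["FromDate"].Value);   // January 5, 2004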
At the document level, the user defines regular expressions for all the dictionary entries he wants to find in a given type of document. The application identifies the document based on a “key” dictionary entry. For example, while processing bank statements, the regular expressions for a bank named “Bank One” could be defined as:
Key dictionary entry: “Bank Name”=Bank One
The document-level regular expressions narrow the search to a limited set of regular expressions defined for a specific document. For example, while processing bank statements, when the application recognizes a specific bank name, the HTML parser searches for only those regular expression patterns defined at the document level for that particular bank.
The operation of field-level and document-level regular expressions can be better illustrated after the clues file is defined.
The Clues File
The clues file is an XML file that contains the dictionary entries the user wants to extract from the processed images.
All the information defined in the dictionary is written to an XML file when the dictionary is loaded. The user can create and keep a number of dictionaries, and can choose and apply one at a time while processing a set of documents.
Dictionary entries and their regular expressions are grouped into two categories in the clues file: document and default.
The document group contains all regular expressions specific to a document based on the key.
The default group contains all possible regular expressions for the fields defined.
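The exact schema of the clues file is implementation specific, but a fragment organized along the lines just described might look like the following hypothetical example (element and attribute names are illustrative only):

<!-- Hypothetical clues-file fragment: one document group keyed on
     "Bank Name" = Bank One, plus a default group. -->
<Clues>
  <Document key="Bank Name" value="Bank One">
    <Entry name="AccountNumber"
           pattern="Acct Num:\s*(?&lt;AccountNumber&gt;\d{10})" />
  </Document>
  <Default>
    <Entry name="AccountNumber"
           pattern="(Acct|Account)\s*(No\.|Number|#)?\s*:?\s*(?&lt;AccountNumber&gt;\d+)" />
  </Default>
</Clues>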
When the parser makes a search, it looks for the key dictionary entry. If it finds a match for a document-specific term, it searches for the remaining dictionary entries based only on the regular expression patterns defined for that specific document.
The search looks up default regular expressions only when it fails to find a match for an entry in the document group.
The application then stores the matched values for the dictionary entries in a table. It creates an XML file, made up of the extracted values for the dictionary entries and the X and Y coordinates of the location where the information was found in the processed document.
A determination is then made whether the image is recognized based on the image document cover page (359). If a cover page is recognized, then a determination is effectively made that it is the beginning of the next batch and that the current package (if it exists) is completed (and stored in the file system) (361).
Thereafter, a new package is created based on the detected cover page (363) and the routine branches to block 371 to determine whether all the documents have been processed.
If the image is not recognized as a cover page as determined by the check at block 359, then the documents are added to the current package/batch (365). Thereafter, the image is saved into the package file system (367) and the image document is deleted from the file system input queue (369).
A check is then made to determine whether all documents are processed (371). If not, the routine branches to 357 to process the next document. If all the documents have been processed, the routine ends (373).
A check is then made at block 385 to determine whether the enhancement is pre-identification enhancement. If so, then the pre-enhancement section from the tbl file is loaded (387). A check is then made to determine whether the options are loaded correctly (389). If the options are not loaded correctly, then the default options defined in the enhancement INI are used; likewise, if the forms are not identified, the default enhancement INI options are used (391). If the options are loaded correctly as determined by the check at 389, the routine branches to 399 to apply the enhancement options.
If the check at block 385 reveals that the image enhancement is not a pre-identification enhancement, the enhancement section is loaded from the tbl file (393). A check is then made to determine whether the options are loaded correctly (395). If so, then the enhancement options are applied (399). If the options are not loaded correctly, then default options defined in the enhancement INI are utilized. As noted above, if the forms are not identified, then the default enhancement INI options are utilized.
After the enhancement options have been applied, a determination is made as to whether there is any error or exception (401). If so, then the error is logged and the error object is passed to the calling application (403). If there is no error the routine ends (405).
If the processing mode is not a synchronous mode as determined by the check at block 427, then an event occurred, such as a package having been created, thereby triggering data being obtained from the OCR engine (429). Whether the data is obtained from the database or from an OCR engine, a package dictionary is then identified (433), thereby determining the appropriate dictionary, e.g., the document specific dictionary or the default dictionary, to use as explained above in conjunction with
Thereafter, the dictionary metadata is obtained (435) to, for example, obtain all the synonyms for a particular document term such as “account number.” Thereafter, for each file in the package (437), the relevant extraction logic is applied (439). Therefore, the page is processed against the document specific dictionary or the default dictionary as discussed above. Thereafter, the data is saved in a Package_Details table (441).
A check is then made at block 443 to determine whether all the files have been processed. If all the files have not been processed, then the routine branches to block 437 to retrieve the next file. If all the files have been processed then the consolidated data is saved in the package table (445) and the routine waits for the next package (447) by branching back to block 427. Thereafter, the routine stops after all the packages have been processed.
A check is then made at block 458 to determine whether synchronous pattern matching is to be performed. If so, then the data is sent to the pattern matching component (460) as explained in conjunction with the processing in
If the check at block 458 reveals that no synchronous pattern matching is to be performed, then a check is made as to whether the data is to be stored in the database (462). If the data is to be stored in the database, then the data is saved in the database (464). If the data is not to be stored in the database, then a check is made as to whether all files have been processed (466). If all files have not been processed then the routine branches back to 454 to retrieve the next file.
If all the files in the package have been processed, then the routine waits for the next event by branching back to 452, after which the routine ends (470).
In one embodiment of the present invention data extraction is deferred until a later time. In such an embodiment pattern matching would not be synchronous and the results of OCR processing would be stored in a database to enable pattern matching at another time.
The above-described Image Collaborator system may be implemented in accordance with a wide variety of distinct embodiments. The following exemplary embodiment is referred to hereinafter as IMAGEdox and provides further illustrative examples of the many unique aspects of the system described herein.
IMAGEdox Overview
IMAGEdox automates the extraction of information from an organization's document images. Using configurable intelligent information extraction to extract data from structured, semi-structured, and unstructured imaged documents, IMAGEdox enables an organization to:
The examples below describe the processing of a common type of document: a bank statement. IMAGEdox can be used to process any type of document simply by creating a dictionary that contains the commonly used business terms that the user wants to recognize and extract from that specific type of document.
It is assumed that the documents that are being processed are scanned images from paper documents.
The steps illustrated in the graphic are described below. Configuration steps are described in the next section.
Configuring IMAGEdox
The IMAGEdox installation program creates default configuration settings. The configuration settings are stored in a number of files that can be accessed or modified using the Application Settings screen. These settings are grouped into the following five categories:
The settings in each category are described in the sections that follow.
Editing Input Settings
Input Data settings define the folder that contains the user's scanned images and the dictionaries that are used to process them. Complete the following procedure to edit the user's input settings:
1. Click Options>Application Settings.
The Application Settings window is displayed with the Input Data option selected by default:
2. Click Browse to select a different document dictionary to process the documents in the associated Image Pickup Folder.
The document-specific dictionary is designed to extract data from known document types. For example, if you know that a Bank of America statement defines the account number as Acct Num: you can define it this way while creating the dictionary.
For more information about creating dictionaries, see the description beginning with the description of
3. Click Browse to select a different standard dictionary to process the documents in the associated Image Pickup Folder.
The standard dictionary (also known as the default dictionary) is used if a match is not found in the document-specific dictionary. It should be designed for situations where the exact terminology used is not known. For example, if you were processing statements from an unknown bank, your standard dictionary must be able to recognize any number of Account Number formats, including Acct Num:, Account Number, Acct, Acct #, and Acct Number.
4. Click Browse to select a different second standard dictionary.
The second standard dictionary enables you to treat your preexisting dictionaries as modules that can be combined rather than creating yet another dictionary.
5. Click the Apply Excel Sheet for Correction check box if you want to specify that an Excel file be used to check extracted data accuracy.
6. Click Browse to select a different document Image Pickup folder.
This is the folder where IMAGEdox looks for your scanned images to begin the data extraction process. To begin using IMAGEdox, you must manually copy your scanned images into this location, or configure your scanner to use this folder as its output folder.
7. Update the files that store your current configuration settings (serverAppsettings.xml and AppSettings.xml) as follows:
8. Open the Services window (Start>Settings>Control Panel>Administrative Tools>Services).
9. Click the SHSMe Image Data Extractor service.
10. Click Restart Services to stop and restart the service.
Continue the IMAGEdox configuration as described in the next section.
Editing Output Folders
Output Data settings define the folders where the data extracted from your scanned images and the processed input files are stored. Extracted data is stored in XML files. Complete the following procedure to edit your output settings:
1. Click Options>Application Settings to display the Application Settings window if it is not already open.
2. Click the Output Data option.
The Output Data frame is displayed:
3. Click Browse to specify a different Collated Image Output folder.
Collated images are created by combining multiple related files into a single file. For example, if a bank statement is four pages, and each page is scanned and saved as a single file, the four single page files can be collated into a single four page file during the data approval process.
4. Click Browse to specify a different Invalid Files folder.
Invalid files are the files that cannot be recognized or processed by the optical character recognition engine. These files will need to be processed manually.
5. Click Browse to specify a different Unverified Output folder.
This folder stores all of the output data until it is verified by the end-user using the data verification functionality (as described in “Verifying extracted data” beginning with the description of
6. Click Browse to specify a different Processed Input Files folder.
This folder stores all of the input files (your scanned images) after they have had the data extracted from them.
Note: IMAGEdox moves the files from the input file location (defined in step 5 above, in conjunction with the description of
7. Click Browse to specify a different Indexed Files folder.
This folder stores the files created by extracting only the fields you specify as index fields. For example, a bank statement may contain 20 types of information, but if you create an index for only four of them (bank name, account number, from date, and to date), only those indexed values are stored in the index files. The user-defined file that defines which terms should be considered index fields must be specified in the Application Setting Processing Data window, as specified in step 6 in the next section.
8. Click OK to update the files that store your current configuration settings (Appsettings.xml and ServerAppSettings.xml), or click Save As to create a new configuration file.
Continue the IMAGEdox configuration as described in the next section.
Defining Processing Data Settings
Processing Data settings specify the files that are used during the processing of images.
Complete the following procedure to edit your processing settings:
1. Click Options >Application Settings to display the Application Settings window if it is not already open.
2. Click the Processing Data option.
3. The Processing Data frame is displayed:
4. Click Browse to specify a different location for the Intermediate Files folder.
This folder temporarily stores the files that are created during data extraction. The contents of the folder are automatically deleted after the extraction is complete.
5. Click Browse to specify a different location for the OCR Settings file.
You can edit the contents of this file to change the behavior of the OCR engine.
6. Click Browse to specify a different location for the Image Index Variable file folder.
This folder contains files that specify the index variables used to create index files.
7. Update the files that store your current configuration settings (serverAppsettings.xml and AppSettings.xml) as follows:
Continue the IMAGEdox configuration as described in “Editing general settings”.
Editing Package Splitter Settings
These settings are not used in this release of IMAGEdox. Do not edit any of these settings.
Editing General Settings
Complete the following procedure to edit your general settings:
1. Click Options>Application Settings to display the Application Settings window if it is not already open.
2. Click the General option.
3. The General frame is displayed:
4. Click any of the following check boxes to edit the default settings:
(Open the last used Workspace and Save Report as a File are not supported in this release)
5. Click Browse to edit the default location of the Processing Statistics Log file.
This file logs the amount of time spent processing image enhancement, OCR, and data extraction.
6. Update the files that store your current configuration settings (serverAppsettings.xml and AppSettings.xml) as follows:
Working With Dictionaries
A dictionary is a pattern-matching tool IMAGEdox uses to find and extract data. Dictionaries are organized as follows:
Term—A word or phrase you want to find in a document in order to extract a specific value. For example, for the term Account Number, the specific value that is extracted would be 64208996.
Synonym—A list of additional ways to represent a term. For example, if the dictionary entry is Account Number, synonyms could include Account, Account No., and Acct.
Search pattern—A regular expression you create and use to find data. It enables you to define a series of conditions to precisely locate the information you want. Every search pattern is linked to a dictionary entry and the entry's synonyms.
Dictionary Types
IMAGEdox enables a user to define two types of dictionaries:
Document-specific—Designed to extract data from known documents. For example, if you know that a Bank of America statement defines the account number as an eight-digit number preceded by Acct Num: you can define it precisely this way while creating the dictionary used to process Bank of America statements.
Standard (also known as default)—Designed for situations where the exact terminology used is not known. For example, if you were processing statements from an unknown bank, your standard dictionary must be able to recognize any number of Account Number formats, including Acct Num:, Account Number, Acct, Acct #, and Acct Number. Also note that standard dictionary entries are not grouped by a primary dictionary entry such as Bank Name.
A user should create at least one of each type of dictionary.
When IMAGEdox processes a document image (for example, a bank statement), it first applies the document-specific dictionary in an attempt to match the primary dictionary entry: Bank Name. Until the bank name is found, none of the other information associated with a bank is important. If the document is from Wells Fargo Bank, IMAGEdox searches each section of the document until it recognizes “Wells Fargo Bank.”
After finding a match for the primary dictionary entry in the document-specific dictionary, it then attempts to match the secondary dictionary entry, for example, Account Number. If IMAGEdox cannot find a match, it processes the document image using the standard dictionary. It applies one format after another until it finds a match for the particular entry. After IMAGEdox exhausts all the dictionary entries in the standard dictionary, it processes the next document image.
Dictionaries are created and managed using the IMAGEdox Dictionary window which is introduced in the next section.
The Dictionary Window
The dictionary window contains four main sections: toolbar, Term pane, Synonym pane, and Pattern pane. The options available in each are described in the tables that follow.
Creating a Dictionary
The Dictionary window enables you to define the terms (and their synonyms) for which you want IMAGEdox to extract a value. After you create a new dictionary (as described in this section), there are three major steps to defining the new dictionary:
These general steps are described in detail in the sections that follow.
Complete the following procedure to create a new dictionary:
1. Double-click the desktop icon to start IMAGEdox.
The IMAGEdox window is displayed:
2. Click Dictionary.
The Dictionary window is displayed:
3. Click File>New or New Dictionary. A new dictionary called untitled.dic is created.
You cannot save and rename the untitled dictionary until you add a term to it as described in the next section.
A new dictionary is created by clicking on “create dictionary” in the screen display shown in
Adding Terms to a Dictionary
Complete the following procedure to define terms for your dictionary.
1. If you have not already, create a new dictionary as described in the previous section.
2. In the Term pane, click Add Term
The Add Term dialog box is displayed with the Standard Patterns tab selected:
3. Enter the name of the term that you want to define in the Term Name field.
You can click Done and configure the search pattern as described in “Modifying search patterns” below, or you can define a basic search pattern from this screen. If you are defining a date search, and the format is predefined (that is, it is in the list), continue with step 4, otherwise, define the search pattern later.
4. Double-click to select the standard patterns that you want to define for the term. There are three standard pattern types from which you can select:
Alphanumeric—Contains letters (a-z, A-Z) and numbers (0-9) only; cannot contain symbols or spaces.
Email—Contains an email address using the username@domain.com format.
Date—Ten date formats are predefined for your convenience.
If these standard patterns do not meet your needs, you can define custom search patterns as described in "Adding search patterns" below. These custom search patterns are displayed on the User Defined Patterns tab.
5. Click Done.
The new term is added to the Terms pane, and its associated search pattern is displayed in the Pattern pane. Terms are listed in alphabetical order, and the pattern is only displayed for the term that is selected.
6. Click File>Save.
The Save Dictionary dialog box is displayed.
7. Enter a descriptive name for your dictionary and select the location for it.
Depending on which type of dictionary this is (document-specific or standard), the location of the dictionary must match the location specified in step 2 or 3 in “Editing input settings” above.
8. Click Save.
The new name and the location are displayed in the Dictionary window's title bar, and in the lower right-hand corner.
Unless you need to modify or delete terms (as described in the next sections), continue building your dictionary as described in “Adding synonyms” below.
Modifying Terms
1. Open the IMAGEdox dictionary that contains the term you want to modify.
2. In the Term pane, click the name of the term you want to modify.
3. Click Modify Term
The Modify Term dialog box is displayed.
4. Change the term name (effectively deleting the old term and creating a new term) or the search pattern.
5. Click Done.
Deleting Terms
1. Open the IMAGEdox dictionary that contains the term you want to delete.
2. In the Term pane, click the name of the term you want to delete.
3. Click Delete Term
You are prompted to confirm the deletion.
4. Click Yes.
Adding Synonyms for a Term
Synonyms are words (or phrases) that have the same, or nearly the same, meaning as another word. During the data extraction phase, IMAGEdox searches for dictionary terms and related synonyms (if defined). Synonyms are especially useful when creating a default (or standard) dictionary to process document images that contain unknown terminology. You can define one or more synonyms for every term in your dictionary.
Complete the following procedure to define a synonym for an existing term in your dictionary:
1. Open the IMAGEdox dictionary that contains the term for which you want to define a synonym.
2. In the Term pane, click the term for which you want to define a synonym.
3. In the Synonym pane, click Add Synonym
The Add Synonym dialog box is displayed:
4. Enter the synonym in the Synonym Name field.
5. Either click Add Synonym if you want to add more synonyms, or click Done. The synonyms are added (and prioritized) in the order they are entered.
You can change a synonym's priority using the Priority up or Priority down buttons either in the Add Synonym dialog box or the Dictionary window's Synonym pane. When IMAGEdox is searching for a term, the term's synonyms are prioritized from the first synonym (top of the list) to the last (bottom).
Unless you need to modify or delete synonyms (as described in the next sections), continue building your dictionary as described in “Creating visual clues” below.
Modifying Synonyms
1. Open the IMAGEdox dictionary that contains the synonym you want to modify.
2. In the Synonym pane, click the name of the synonym you want to modify.
3. Click Modify Synonym
The Modify Synonym dialog box is displayed. You can change the synonym's priority using the Priority up or Priority down buttons.
4. Click Visual Clue.
The Modify Synonym Visual Clue dialog box is displayed. For detailed information about defining visual clues see “Creating visual clues.”
5. Click Done.
Deleting Synonyms
1. Open the IMAGEdox dictionary that contains the synonym you want to delete.
2. In the Synonym pane, click the name of the synonym you want to delete.
3. Click Delete Synonym
You are prompted to confirm the deletion.
4. Click Yes.
Creating Visual Clues
IMAGEdox dictionaries can be configured to use visual information during the data extraction phase to recognize and extract information. Visual clues tell the OCR engine where in an image file to look for terms and synonyms whose value you want to extract. Additionally, visual clue information can tell the OCR engine to look for specific fonts (typefaces), font sizes, and font variations (including bold and italic).
Visual clues can be used with either document-specific or default (standard) dictionaries, but are extremely powerful when you can design a document-specific dictionary with a sample of the document (or document image) nearby.
Visual clues can also be useful when trying to determine which of two duplicate pieces of information is the value you want to extract. For example, suppose you are searching a document image for a statement date and the document contains two dates: one in a footer that states the date the file was last updated, and the one you are interested in, the statement date. You can configure your dictionary to ignore any dates that appear in the bottom two inches of the page (where the footer is), effectively filtering the footer date out.
Complete the following procedure to define visual clues:
1. Open the Visual Clues window as described in “Modifying synonyms” above.
The Modify Synonym—Visual Clues window is displayed for the selected synonym:
2. Specify one or more of the following:
Positional attributes—Tells the OCR engine where to locate the value of the selected synonym using measurements. You can “draw” a box around the information you want to extract by entering a value (in inches) in the Left, Top, Right, and Bottom fields. If you enter just one value, for example 2″ in the Bottom field, IMAGEdox will ignore the bottom two inches of the document image.
Textual attributes—Tells the OCR engine where to locate the value of the selected synonym using text elements (line number, paragraph number, table column number, or table row number). For example, if the account number is contained in the first line of a document, enter 1 in the Line No field.
Font Attributes—Tells the OCR engine how to locate the value of the selected synonym using text styles (font or typeface, font size in points, and font style or variation). If you know that a piece of information that you want to extract is using an italic font, you can define it in the Font Style field.
You can click Attribute to display the Font dialog box where you can apply all three font attributes, and preview the result in the Sample field. When you click OK, the information is transferred to the Modify Synonym—Visual Clues window.
3. Click Done to add the visual clues to the selected synonym.
Continue building your dictionary as described in the next section.
Adding Search Patterns to a Dictionary
Search patterns define how IMAGEdox recognizes and extracts data from the document image being processed. You can define search patterns while creating terms (as described in conjunction with
To add new search pattern, complete the following procedure:
1. Open the dictionary that contains the term for which you want to add a search pattern.
2. Click a term in the Term pane.
3. In the Pattern pane, click Add Pattern
The Define Pattern (1 of 2) dialog box is displayed.
4. Enter the name of the pattern you are adding.
5. Select the type of pattern.
If you select Email or Custom, you are prompted to launch the Regular Expression Editor. Click No to apply a standard regular expression that validates the Email address, or click Yes to display the Regular Expression Editor (Define Pattern Step 2 of 2), where you can create a custom regular expression:
Define the regular expression that will be used to match the data you want to extract from the document images. For more information about regular expressions, refer to the Microsoft .NET Framework SDK documentation.
If you select Date, the Define Pattern (2 of 2) dialog box is displayed containing predefined formats available. Select a format, and click Done. The regular expression associated with the selected format is applied. You can also click Advanced to create a custom regular expression.
If you select Alphanumeric, the Define Pattern (2 of 2) dialog box is displayed. Enter the minimum and maximum number of characters allowed, and the special characters (if any) that can be included. The regular expression being created is displayed as you make entries, and applied when you click Done. You can also click Advanced to create a custom regular expression.
Modifying Search Patterns
1. Open the dictionary that contains the term associated with the search pattern.
2. In the Term pane, click the term associated with the search pattern.
3. In the Pattern pane, click Modify Pattern
4. The Define Pattern (1 of 2) dialog box is displayed.
5. Edit the pattern name (effectively deleting the existing pattern and creating a new one) or the pattern type, or click Next Step to display the Regular Expression Editor where you can edit the regular expression.
Deleting Search Patterns
1. Open the dictionary that contains the term associated with the search pattern.
2. In the Term pane, click the term associated with the search pattern.
3. In the Pattern pane, click Delete Pattern
Creating Regular Expressions
The Dictionary window's Advanced Editor enables you to build and apply advanced search tools known as regular expressions. Regular expressions are pattern-matching search algorithms that help you to locate the exact data you want. Your regular expression can focus on two levels of detail:
Term level—You instruct the search engine how to extract the value of the term by defining a series of general formats that may describe the term's value. You create a regular expression for each of these general formats attempting to consider every way an account number could be represented. These are general descriptions that could be used for any bank.
For a new document, IMAGEdox may be able to find a match more quickly using the term level, since it has no training in the specifics of the document.
Document level—For each bank, you write a regular expression to show the search engine how to extract the values of the document for that specific bank. In effect, you are telling IMAGEdox, "On a Wells Fargo Bank monthly statement, 'Bank name' looks like this . . . , 'To' date looks like this . . . , and 'From' date looks like this."
These formats, specific to a certain document for a certain bank, are more accurate than the general formats, but they are slower to apply.
Validation Scripts
Validation scripts are Visual Basic scripts that check the validity of the data values IMAGEdox has extracted as raw, unverified XML. You can create your own scripts, or contract Sand Hill Systems Consulting Services to create them. Validation scripts are optional and do not need to be part of your dictionaries.
The script compares the found value to an expected value and may be able to suggest a better match. You can run validation scripts on two levels:
Document level—Using your knowledge of the structure and purpose of the document, checks that all the parts of the document are integrated. For example, the script can ensure that the value of the Opening Date is earlier than the value of the Closing Date, or that the specific account number exists at that specific bank. If you know the characteristics of the statements that Bank of America issues, all you need to find is the name “Bank of America” to know whether the extracted account number has the correct number of digits and is in a format that is specific to Bank of America.
Term level—Checks for consistency in the data type for a term. For example, it ensures that an account number contains only numbers. This type of script can also check for data integrity by querying a database to see whether the extracted account number exists, or whether an extracted bank name belongs to a real bank.
To create and run a validation script, complete the following procedure:
1. Open the dictionary that contains the terms you want to validate.
2. In the Terms pane, click the button that corresponds with the level on which you want to run the script, either:
The screen that corresponds with your selection is displayed, either Document Level:
or Term level (AccountNumber in this example):
3. In the Default Input Value field (of either screen), enter the sample value for the validation script to test.
4. On the VBScript Code tab, create the script that validates the extracted, unverified value.
For example, you may want the script to ensure that every Bank of America account number contains 11 digits and no letters or special characters. You must have the script return an error status, an error message, and a suggested value (which can be defined and tested as output parameters on the Test tab). A sketch of such validation logic appears after this procedure.
5. Click Save to compile and execute the script.
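For illustration only, the logic such a script must implement is sketched below in C# (actual validation scripts are VBScript, and the exact output-parameter names are implementation specific):

using System.Text.RegularExpressions;

// Hypothetical term-level validation logic: a Bank of America account
// number must contain 11 digits and no letters or special characters.
// Returns an error status, an error message, and a suggested value.
static (bool HasError, string ErrorMessage, string SuggestedValue)
    ValidateAccountNumber(string extracted)
{
    if (Regex.IsMatch(extracted ?? string.Empty, @"^\d{11}$"))
        return (false, null, extracted);   // value is valid
    var digitsOnly = Regex.Replace(extracted ?? string.Empty, @"\D", "");
    if (digitsOnly.Length == 11)           // suggest a cleaned-up value
        return (true, "Account number contained non-digit characters.", digitsOnly);
    return (true, "Account number must be exactly 11 digits.", null);
}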
Extracting Data
By default, IMAGEdox automatically begins processing any image documents that are located in the input folder specified in the Application Settings Input screen (as described in “Editing input settings” above).
By default, the input folder is C:\SHSImageCollaborator\Data\Input\PollingLocation. This document refers to the folder as the input folder.
You can get your document images into the input folder by either:
When IMAGEdox finds files in the input folder it performs the following steps:
Moves the document image files from the input folder into the workspace.
Performs optical character recognition on the image files.
Applies the definitions contained in the document-specific and—if necessary—the default dictionaries to locate the data in which you are interested.
Extracts the data and moves the processed files to the appropriate output folder (as described in “Editing output folders” above).
Verifying Extracted Data
The IMAGEdox client GUI enables you to review and verify (and, if required, modify) the extracted data. Using the GUI, you can navigate to each field in a document image (a field is an occurrence of a dictionary term or one of its synonyms), or between documents.
Introducing the Verify Data Window
This section introduces and explains the various GUI elements in the Verify Data window. The procedures associated with these elements are described in the sections that follow.
The left-hand pane is known as the Data pane. It displays the data extracted from the document image as specified by your dictionaries. The document image from which the data was extracted is displayed in the Image pane on the right. The Image pane displays the document image that results from the OCR processing.
The following table describes the elements in each pane.
When the application extracts data from document images, it puts the data in the Unverified Output folder and shows you the images.
Using the Verify Data Window
The Data Verifier window enables you to review and confirm (or correct) data extracted from your scanned document images. It lets you compare the extracted data and the document image from which it was extracted simultaneously.
1. Double-click the IMAGEdox icon on your desktop, or locate and double-click the IMAGEdox executable file (by default, located in C:\SHSImageCollaborator\Client\bin\ImageDox.exe).
The IMAGEdox screen is displayed.
2. Click Verify Data.
The Verify Data window is displayed with the value (1235321200 in this example) for the dictionary entry (AccountNumber) extracted and displayed in the Data pane. The extracted value is also outlined in red in the Image pane.
3. Visually compare the extracted value in the Data pane to ensure it matches the outlined value in the Image pane (you can use the magnification tools to resize the image in either pane).
Also ensure the ASCII format text in the Found Result field matches the value in the Extracted Value field. If, for example, you searched for a company name, and a custom typeface (font) was used in the company's logo, it may be difficult to render them in ASCII fonts.
If the extracted values do not match, type a new value in the Corrected Result field.
If a validation script is running, the value it recommends is displayed in the Suggested Value field. It may also display an error message. Consider this information before confirming the extracted value.
When you are satisfied with the result, proceed to step 4.
4. Click Next Field to display the next dictionary entry's extracted value.
The extracted value is automatically confirmed and saved when you move to the next field.
5. Repeat the verification procedure (steps 3 and 4).
6. When you have confirmed all the dictionary entries in the current document, click Next Image to display the next document image.
7. For each new document, repeat steps 3 through 6.
8. When you confirm all the dictionary entries in the last document image, click Approve.
IMAGEdox uses the values defined in the IndexVariables.xml to collate:
These files are created in the Collated Image Output folder (by default, C:\SHSImageCollaborator\Data\Output\Collated Image Output).
It also saves individual XML files (which correspond with each input image document) in the VerifiedOutput folder (by default, C:\SHSImageCollaborator\Data\Output\VerifiedOutput).
The XML files that contain the extracted data values are described in the next section.
Extracted Data and Output Folders
IMAGEdox translates the data it extracts from your images into standard XML. The XML uses your terms (dictionary entries) as tags, and the extracted data as the tag's value. For example, if your dictionary entry is BankName, and you approved the value Wells Fargo that was returned by the data extraction process, the resulting XML would generally look like this:
<BankName>
Wells Fargo
</BankName>
The XML files created by the IMAGEdox extraction process contain the specific data that you want to make available to your other enterprise applications. The information is stored in a variety of files, located in the following output folders (by default, located in C:\SHSImageCollaborator\Data\Output):
The sections that follow describe the files that are created and placed in each of these folders.
CollatedFiles Folder
The CollatedFiles folder contains files that are created by IMAGEdox when a group (or batch) of processed image documents are approved at the end of the data verification procedure. Two types of files are created for each batch that is approved:
An image file—A multi-page .tif file that is created by combining each approved, single-page, TIFF-format document image.
One or more data files—XML files that are created by combining the extracted data values from each document image processed in the batch. The contents of each collated XML file are determined by the definitions in the IndexVariable.XML file.
These definitions control where one file ends and another begins. For example, if two five-page bank statements are processed, they are read as 10 individual graphic files. When the collation is done, the IndexVariable.XML file can define that a new document be created each time a new bank name value is located. In this example, the new bank name would be located in the sixth image file. Therefore, the first five pages would be collated into an XML file, as would the second five pages.
The location of the IndexVariable.XML file is defined in the Processing Data Application Settings described above. By default, it is located in C:\SHSImageCollaborator\Config\ApplicationSettings.
The IndexVariable.XML file is also used to generate the index XML files that populate the Index folder, as described below.
Note the following in the graphic:
The plus sign (+) can be clicked to expand the list of attributes as follows (after it is clicked, the plus sign is displayed as a minus sign (−)):
The additional attributes show that visual clues (Zones) were used to define an area where to look for the terms and their corresponding values.
Index Folder
The Index folder contains an XML output file that corresponds with each input file (the document image files). Each index file contains the values extracted for each of the index values defined in the user-created IndexVariable.XML file. For example, the Indexvariable.XML file in
UnverifiedOutput Folder
The UnverifiedOutput folder contains XML files in which some of the terms being searched for were not found and no value was entered in the Corrected Result field by the user doing the data verification. These files are often the last pages of a statement that do not contain the information for which you were searching.
VerifiedOutput Folder
The VerifiedOutput folder contains the XML files that contain values that have been confirmed by the user doing the data verification.
The following is an illustrative software development kit (SDK) for an illustrative implementation of IMAGEdox.
IMAGEdox SDK Overview
The IMAGEdox SDK is an integral product component that supports creating and running workflow, batch-processing, and Windows service applications that involve data extraction from images.
This section provides an overview of IMAGEdox SDK functionality.
For example, you can process a set of bank statement images that represent multiple pages in the same physical document.
In the illustrative implementation, the following software programs are required by IMAGEdox SDK:
Additionally, you have the option of installing the FormFix version 2.8 image enhancement software.
Image Library
The image library functionality is implemented in the following classes:
SHS.ImageDataExtractor.DLL provides the following three sets of functionalities.
General Image Operations Using the ImageProperty Class:
Retrieving image metadata.
Converting image file format and changing the compression technique used in the image.
Related classes:
Image Collation Using the PageFrameData and ImagerUtility Classes:
Merging multiple images into a single image.
Splitting a multi-page image into multiple images.
Checking and Making Images Acceptable to the OCR Engine Using the ImageProperty and ImageUtility Classes:
These functions are provided to aid the integrating module in providing an image that the OCR engine can process. If the integrating module does not control the nature of the image, it should verify that the given image can be processed by the OCR engine; if it cannot be, the module should determine the reasons and correct these issues before invoking the data extraction functions.
The OCR engine can reject images for the following reasons:
The IMAGEdox SDK image library can correct the first two cases of rejection; the calling module must correct the third and fourth cases.
General Image Operations
A set of functions is provided to retrieve image properties including, but not limited to, file format, compression technique, width, height, and resolution. This functionality also contains a set of functions for converting images from one format to another, and for changing the compression technique used on the image.
Retrieving Image Properties Example:
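A minimal sketch of such a call is shown below, assuming the ImageProperty class exposes simple accessors (the constructor signature and property names here are assumptions, not the documented API):

using System;

// Hypothetical usage of the ImageProperty class; member names are
// assumed for illustration only.
var props = new ImageProperty(@"C:\Temp\Sample.tif");
Console.WriteLine(props.FileFormat);             // e.g., image/tiff
Console.WriteLine(props.CompressionTechnique);   // e.g., CCITT4
Console.WriteLine($"{props.Width} x {props.Height} at {props.Resolution} dpi");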
Converting Bitmap to jpeg Example:
ImageUtility.ConvertImage(@"C:\Temp\Sample.bmp", @"C:\Temp\Sample.jpg", "image/jpeg", EncoderValue.CompressionNone);
Converting Bitmap to tiff Example:
ImageUtility.ConvertImage(@"C:\Temp\Sample.bmp", @"C:\Temp\Sample.tif", "image/tiff", EncoderValue.CompressionCCITT4);
Image Collation and Separation
A scanned image can either contain one page or all of the pages from a batch. Because a single-page image may be part of a multi-page document, IMAGEdox needs to be able to collate the related single-page images into a single multi-page image.
Similarly, a multi-page image may contain more than one document (for example, one image file containing three different bank statements). In this case, IMAGEdox needs to divide the multi-page image into multiple image files, each containing only the related pages.
If the source is a multi-page image, the collation function provides the ability to specify page numbers within the source. This information is captured using the PageFrameData class. The structure captures the source image and the page number. The target image is created from the pages specified through the input PageFrameData set. The PageFrameData set can point to any number of different images. The number of pages in the target image is controlled by the number of PageFrameData elements passed to the function. This same function also can be used to separate the images into multiple images.
This API can also be used to create a single-page TIFF file from a multi-page TIFF file.
Page Separation Example:
This example shows dividing a multi-page TIFF file into multiple single-page files. PageFrameData can also be used to divide a multi-page TIFF file into multiple multi-page TIFF files.
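A hedged sketch of such a call follows; the PageFrameData constructor and the collation method name are assumptions based on the description above:

using System.Collections.Generic;

// Hypothetical: extract pages 1-3 of a multi-page TIFF into a new file.
// Each PageFrameData element names a source image and a page number.
var frames = new List<PageFrameData>
{
    new PageFrameData(@"C:\Temp\Statements.tif", 1),
    new PageFrameData(@"C:\Temp\Statements.tif", 2),
    new PageFrameData(@"C:\Temp\Statements.tif", 3),
};
// The target image contains one page per PageFrameData element.
ImageUtility.CollateImages(frames, @"C:\Temp\Statement1.tif");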
Ensuring Images are Supported by the OCR Engine
Because the OCR engine does not support all possible image formats and compression techniques, the invoking module must ensure that the image can be processed by the OCR engine.
The OCR engine can reject images for the following reasons:
The IMAGEdox SDK image library can correct the first two cases of rejection; the calling module must correct the third and fourth cases.
Before passing an image to the OCR engine, the invoking module can check whether an image is acceptable to the OCR engine. If the image is not acceptable, the application module should determine the reason why it is not acceptable.
If the reason the file is not acceptable is:
The file format or compression technique (or both) is not supported—The application module can correct the problem by using the appropriate function. The modified image can then be submitted to the OCR engine.
Image resolution is greater than 300 dpi, or the image height or width (or both) is greater than 6600 pixels—IMAGEdox can route the image to a separate workflow for manual correction before it is submitted for data extraction.
OCR-Friendly Image Example:
The following example ensures an image is OCR-friendly. If the image is not acceptable to the OCR engine, it is converted to a format that the engine accepts.
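A minimal sketch under these assumptions is shown below (the IsAcceptableToOcr check is hypothetical; ImageUtility.ConvertImage is used as in the earlier conversion examples):

// Hypothetical acceptability check followed by conversion to a
// TIFF/CCITT4 image, the combination used in the examples above.
var path = @"C:\Temp\Sample.bmp";
if (!ImageProperty.IsAcceptableToOcr(path))   // assumed helper method
{
    ImageUtility.ConvertImage(path, @"C:\Temp\Sample.tif",
        "image/tiff", EncoderValue.CompressionCCITT4);
    path = @"C:\Temp\Sample.tif";
}
// "path" can now be submitted to the OCR engine.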
Application Configuration Settings
This functionality captures and provides information for the successful processing of:
It also captures information about the temporary file folders where IMAGEdox stores the temporary files created during processing.
The settings are captured in AppSettingsOptions class found in SHS.DataExtractor.DLL. The parameters in the AppSettingsOptions class are described in the following table.
Infrastructure to Handle Groups of Images
The IMAGEdox SDK provides infrastructure to handle a group of images that share some common information or behavior. For example, the SDK tracks the index of the previous image so that it can generate a proper index for the current image when some information is missing.
The JobContext class tracks the context of the batch currently being processed. It exposes an AppSettingsOptions property that contains the configuration associated with the current batch processing.
An object of the JobContext class takes three parameters:
The first parameter is the file path for the application settings that needs to be used for this batch processing.
The second parameter informs the IMAGEdox SDK whether or not the caller is interested in acquiring the OCR result in the OCR engine's native format.
The third parameter informs the IMAGEdox SDK whether or not the caller is interested in acquiring the OCR result in HTML format.
The IMAGEdox SDK always provides the OCR result in XML format, irrespective of whether or not the two aforementioned formats are requested. The OCR result in XML format can be reused to extract a different set of data.
The OCR native-format document and the OCR HTML document are transient files; the caller must store them somewhere before the next item in the batch is processed, or the information is deleted.
Data Extraction
Data Extraction is the process of extracting data from an image file. This process involves using optical character recognition (OCR) on an image and converting the pixel information in the image to textual characters that can be used by IMAGEdox and other applications.
The Image SDK provides an optional Image Enhancement component to increase the quality of the image so that the accuracy of the OCR component is maximized.
The extraction process involves the following steps:
1. Perform OCR on the image to convert its pixel information into textual characters.
2. Validate and verify the extracted data before it can be considered.
3. Call custom scripts for further validation and data filtering.
4. Create an index for the image using a subset of variables from the extracted data, if an index has been defined.
5. Return the result to the calling application.
The IMAGEdox SDK also provides a mechanism to extract data from OCR data that was produced as part of prior processing. This avoids the time-consuming operation of OCR processing an image more than once.
As previously mentioned, the IMAGEdox SDK provides infrastructure to perform document collation. Document collation is a process in which the individual pages of a multi-page document are collated to form a single document. This involves collating the individual page images into a single multi-page image, along with collating each page's extracted data into a single set of data for that document. This collation is done with the help of index variables defined by the calling application.
Data extraction involves multiple processing phases. If an error occurs, an output parameter returns the specific processing phase in which the error occurred, along with the error. This helps the calling application build a workflow that handles error cases while capturing the intermediate results of the successful phases, so that phases which completed successfully need not be processed again for the same document image.
The data extraction module can be used as a library or as a module in a workflow. Because a workflow combines disparate components, the module that precedes the IMAGEdox component may differ from the module that follows it. In such cases, the preceding module can pass information about what should be done with a given item's data extraction result, through the item's context object, to the next module that handles that result.
Use Cases
Data extraction from an image:
1. Select the application settings.
2. Create the JobContext object for the chosen application settings options.
3. Get the image file path.
4. Create a DocItem instance for the image from step 3.
5. Call ProcessJobItem.
6. Save the enhanced image for further use.
7. Save the resulting OCR data in XML format for further data extraction.
8. Save the extracted data.
9. Save the index value.
10. Repeat the process if more than one image needs to be processed.
11. If more than one image is processed, then, based on the index value, collate the individual pages into a single document as described above. A sketch of this overall flow follows.
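A minimal sketch of this use case, assuming the JobContext, DocItem and ServiceUtility members documented later in this guide; the settings path and image path are illustrative, and the DocItem output fields are indicated only in comments because their names are not listed here:

using System;
using SHS.ImageDataExtractor;

class ExtractFromImage
{
    static void Run(string imagePath)
    {
        // Steps 1-2: choose the application settings and create the batch context.
        JobContext jobContext = new JobContext(@"C:\Settings\BankStatementsSettings.xml", true, true);

        // Steps 3-4: wrap the image (page 1) in a DocItem; no item context object is used here.
        DocItem item = new DocItem(null, imagePath, 1, jobContext.AppSettingsOptions);

        // Step 5: run the extraction.
        Exception error;
        ProcessPhase lastPhase = ServiceUtility.ProcessJobItem(jobContext, item, out error);
        if (lastPhase != ProcessPhase.Completion)
            throw new ApplicationException("Processing failed; last completed phase: " + lastPhase, error);

        // Steps 6-9: persist the enhanced image, the OCR XML result, the extracted
        // data and the index value carried back in the DocItem before the next item
        // is processed; these results are transient.

        // Steps 10-11: repeat this flow per image and collate pages by index value.
        item.Cleanup();
    }
}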
Data extraction from resulting OCR data in XML format:
1. Select the application settings.
2. Create the JobContext object for the chosen application settings.
3. Get the file that contains the OCR result data in XML format.
4. Create a DocItem instance for the OCR XML data from step 3.
5. Call the FindVariableValue function.
6. Save the extracted data.
7. Repeat the process for each additional OCR result from which data needs to be extracted. A sketch of this flow follows.
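A minimal sketch of this second use case, assuming that a DocItem can be constructed over the saved OCR XML file in the same way as over an image (an assumption; the guide does not show this constructor call) and using the FindVariableValue call documented below; paths are illustrative:

using System;
using SHS.ImageDataExtractor;

class ExtractFromOcrXml
{
    static void Run(string ocrXmlPath)
    {
        // Steps 1-2: a (possibly different) settings file selects which variables
        // to extract from the same OCR result; no transient formats are requested.
        JobContext jobContext = new JobContext(@"C:\Settings\LoanFormsSettings.xml", false, false);

        // Steps 3-4: the DocItem refers to the saved OCR XML data (assumed constructor use).
        DocItem item = new DocItem(null, ocrXmlPath, 1, jobContext.AppSettingsOptions);

        // Step 5: extract data from the XML alone; no OCR is repeated.
        Exception error;
        ProcessPhase lastPhase = ServiceUtility.FindVariableValue(jobContext, item, out error);
        if (lastPhase != ProcessPhase.Completion)
            throw new ApplicationException("Extraction failed; last completed phase: " + lastPhase, error);

        // Step 6: save the extracted data carried back in the DocItem.
    }
}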
The following classes and enumeration implement the data extraction functionality.
ProcessPhase Enumeration
This enumeration defines the set of phases that are present in the processing algorithm. If any error occurs during processing, the IMAGEdox SDK returns the phase in which the error occurred along with the exception object.
DocItem Class
This class is implemented as a structure that carries input information to the processing function and carries the results of processing back to the calling application.
The DocItem instance is passed as input in both data extraction cases: extraction from an image (via ProcessJobItem) and extraction from previously generated OCR XML data (via FindVariableValue).
Input Parameters to the Processing Function:
Methods for Processing Input Data:
The following output fields have values only when the image enhancement option is turned on and metadata has been defined, using the FormFix component, to identify the images by imaging technique. Values for these fields are generated during the image enhancement phase of processing.
Values for the following output fields are generated during image OCR phase:
Values for the following output fields are generated during data extraction phase:
Values for the following output fields are generated during indexing phase:
SearchVariable Class
This class is implemented as a structure that carries output information that is generated as part of data extraction.
Parameters Used in the SearchVariable Class:
The following is a description of an illustrative Image Collaborator/IMAGEdox API.
Library Module: SHS.ImageDataExtractor.DLL
Namespace: SHS.ImageDataExtractor
JobContext Class
This class initializes all resources needed to process a specific class of items. It exposes an AppSettingsOptions field that contains the configuration settings for this specific class of documents.
JobContext( )
Purpose: Creates an instance of the JobContext class.
Syntax: JobContext(string _appSettingsFileName, bool _persistSSDoc, bool _persistOCRHtml);
Returns: JobContext( ) creates an instance of this class. An exception is thrown if any error occurs during the initialization of this instance.
DocItem Class
This class is used as an item context to track an item's information and its process results.
Fields:
DocItem( )
Purpose: Creates an instance of the DocItem class.
Syntax: DocItem(object _itemContext, string _imagePath, int _imagePageNo, AppSettingsOptions _appSettings);
Returns:
DocItem( ) creates an instance for the given input image. An exception is thrown if any error occurs during the initialization of this instance.
NewItem( )
Purpose: Clears the contents of the object and reinitializes the current instance to a new item.
Syntax: void NewItem(object _itemContext, string _imagePath, int _imagePageNo, AppSettingsOptions _appSettings);
Returns: Void
Cleanup( )
Purpose: Clears the contents of the item and frees all intermediate results and resources used by the item.
Syntax: void Cleanup();
Parameters: None
Returns: Void
PageFrameData Class
This class carries the information for a single page image (the image file path and page number) that is used during image collation.
Fields:
PageFrameData( )
Purpose: Creates an instance of the PageFrameData class.
Syntax: public PageFrameData(string _imagePath, int _pageNo);
Returns:
PageFrameData( ) creates an instance for the given input image. An exception is thrown if any error occurs during the initialization of this instance.
SearchVariable Class
This class provides a set of information associated with the extracted data. This information is generated by the library during data extraction.
Fields:
ServiceUtility Class
This class exposes a set of library calls that can be called by third-party applications to perform data extraction processes. All functions in this class are static (they do not have to be used with an object).
ProcessJobItem( )
Purpose: Performs the data extraction from the given image.
Syntax: ProcessPhase ProcessJobItem(JobContext _jobContext, DocItem _item, out Exception _exception);
Returns: A ProcessPhase enumeration value is returned indicating the last phase that has been completed successfully. The phase values are:
UnknowPhase,
PreProcessing,
ImageProcessing,
OCRRecognition,
DataExtraction,
Verification,
Indexing,
PostProcessing,
Completion
If a value other than Completion is returned, the next value (from the list) is the phase where the failure occurred.
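For example, a calling application might interpret the return value as follows (a sketch; the jobContext and item arguments are assumed to be created as in the use cases above):

using System;
using SHS.ImageDataExtractor;

class PhaseCheck
{
    static void Report(JobContext jobContext, DocItem item)
    {
        Exception error;
        ProcessPhase lastCompleted = ServiceUtility.ProcessJobItem(jobContext, item, out error);
        if (lastCompleted != ProcessPhase.Completion)
        {
            // The failure occurred in the phase following the returned value;
            // for example, a return of ImageProcessing means OCRRecognition failed.
            Console.WriteLine("Last completed phase: " + lastCompleted);
            Console.WriteLine("Error: " + error.Message);
        }
    }
}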
FindVariableValue( )
Purpose: Performs data extraction from the XML document which was generated as part of the earlier data extraction using an OCR document as the input. It extracts data as dictionary terms.
Syntax: ProcessPhase FindVariableValue(JobContext _jobContext, DocItem _item, out Exception _exception);
Returns: A ProcessPhase enumeration value is returned indicating the last phase that has been completed successfully. The phase values are:
UnknowPhase,
PreProcessing,
ImageProcessing,
OCRRecognition,
DataExtraction,
Verification,
Indexing,
PostProcessing,
Completion
If a value other than Completion is returned, the next value (from the list) is the phase where the failure occurred.
Collate( )
Purpose: Performs image collation. This function can be used either to collate multiple images into a single image file or to separate a single multi-page image into multiple images.
Syntax: void Collate(PageFrameData[] _pageInfo, string _targetImagePath, EncoderValue _compressionType);
Returns: Void. An exception is thrown if the collation fails.
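A minimal sketch of collating two single-page images into one multi-page TIFF, assuming the PageFrameData constructor shown above; file paths are illustrative, and EncoderValue comes from System.Drawing.Imaging:

using System.Drawing.Imaging;
using SHS.ImageDataExtractor;

class CollatePages
{
    static void Run()
    {
        // Each PageFrameData names a source image file and the page within it.
        PageFrameData[] pages = new PageFrameData[]
        {
            new PageFrameData(@"C:\Images\stmt_p1.tif", 1),
            new PageFrameData(@"C:\Images\stmt_p2.tif", 1)
        };

        // Write both pages into a single CCITT Group 4 compressed TIFF.
        ServiceUtility.Collate(pages, @"C:\Images\stmt_all.tif", EncoderValue.CompressionCCITT4);
    }
}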
ChangeCompressionMode( )
Purpose: Saves the source image to the target path with the specified compression applied to it. This function can also be used to remove any compression used in the source image.
Syntax: void ChangeCompressionMode(string _srcImage, string _targetImage, EncoderValue _compressionType);
Returns: Void. An exception is thrown if an error occurs.
ImageProperty Class
This class exposes a set of library calls that can be called by third-party applications to validate whether or not the given image is OCR friendly (that is, the OCR engine recognizes it as an acceptable image).
IsOCRFriendlyImage(string _imagePath)
Purpose: Checks whether the image will be accepted for OCR.
Syntax: bool IsOCRFriendlyImage(string _imagePath);
Returns:
Returns true if the image will be accepted for OCR.
An image will be rejected for OCR if any of the following conditions are true: the file format is not supported; the compression technique is not supported; the image resolution is greater than 300 dpi; or the image height or width is greater than 6600 pixels.
IsOCRFriendlyImage(Image _image)
Purpose: Checks whether the image will be accepted for OCR.
Syntax: bool IsOCRFriendlyImage(Image _image);
Returns: True if the image is accepted by the OCR engine.
UsesOCRFriendlyCompression(string _imagePath)
Purpose: Checks whether the image uses compression that is accepted by the OCR engine.
Syntax: bool UsesOCRFriendlyCompression(string _imagePath);
Returns:
Returns true if the image uses compression that is accepted by the OCR engine.
UsesOCRFriendlyCompression(Image _image)
Purpose: Checks whether the image uses compression that is accepted by the OCR engine.
Syntax: bool UsesOCRFriendlyCompression(Image _image);
Returns:
Returns true if the image uses compression that is accepted by the OCR engine.
ImageUtility Class
This class exposes a set of library calls that can be called by third-party applications to manipulate input images, making them acceptable to the OCR engine. All functions in this class are static (they do not have to be used with an object).
ConvertToOCRFriendlyImage(string _srcImage, string _targetImage)
Purpose: Replaces the compression technique used in the image with a compression technique that is acceptable to the OCR engine.
Syntax: void ConvertToOCRFriendlyImage(string _srcImage, string _targetImage);
Returns: Void. An exception is thrown if the conversion fails.
Sample Code
The following C# sample code is used to process a single .tif file (which can have one or more pages):
1. Include SHS.ImageDataExtractor.DLL in the project by browsing to the SHSimagecollaborator\bin directory.
2. Include the following code as part of your calling application:
using System;
using SHS.ImageDataExtractor; // the namespace that must be included in the code; the project must also reference SHS.ImageDataExtractor.DLL
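The body of the original sample is not reproduced in this text. A minimal reconstruction following the use-case steps above might continue as follows; the class wrapper, image path and console messages are illustrative:

class Sample
{
    static void Main()
    {
        // Create the context from the application settings for this document group.
        JobContext jobContext = new JobContext("BankStatementsSettings.xml", true, true);

        // Wrap the .tif file (which can have one or more pages) in a DocItem.
        DocItem item = new DocItem(null, @"C:\Images\statement.tif", 1, jobContext.AppSettingsOptions);

        // Run the extraction and report the outcome.
        Exception error;
        ProcessPhase phase = ServiceUtility.ProcessJobItem(jobContext, item, out error);
        Console.WriteLine(phase == ProcessPhase.Completion
            ? "Extraction completed."
            : "Extraction failed after phase: " + phase);

        // Save the OCR XML, extracted data and index value carried back in the
        // DocItem before processing the next file; these results are transient.
        item.Cleanup();
    }
}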
Here, BankStatementsSettings.xml is the application settings file, which includes details such as:
The user can choose a separate settings file for each document group. The following shows a sample BankStatementsSettings.xml:
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
This application claims the benefit of Provisional Application No. 60/579,277, filed on Jun. 15, 2004, which application is hereby incorporated by reference in its entirety.