The hardware can comprise at least one communications/output unit 105, at least one display unit 110, at least one central processing unit (CPU) 115, at least one hard disk unit 120, at least one memory unit 125, and at least one input unit 130. The communications/output unit 105 can send results of extraction processing to, for example, a screen, printer, disk, computer and/or application. The display unit 110 can display information. The CPU 115 can interpret and execute instructions from the hardware and/or software components. The hard disk unit 120 can receive information (e.g., documents, data) from the CPU 115, memory unit 125, and/or input unit 130. The memory unit 125 can store information. The input unit 130 can receive information (e.g., a document image or other data) for processing from, for example, a screen, scanner, disk, computer, application, keyboard, mouse, or other human or non-human input device, or any combination thereof.
The software can comprise one or more databases 145, at least one localization module 150, at least one image processing module 155, at least one OCR module 160, at least one document input module 165, at least one document conversion module 170, at least one text processing statistical analysis module 175, at least one document/output post-processing module 180, and at least one systems administration module 185. The database 145 can store information. The image processing module 155 can include software which can process images. The OCR module 160 can include software which can generate a textual representation of the image scanned in by the input unit 130 (e.g., scanner). It should be noted that multiple OCR modules 160 can be utilized, in one embodiment. The document input module 165 can include software which can work with preprocessed documents (e.g., preprocessed in system 100 or elsewhere) to obtain information (e.g., used for training). Document representations (e.g., images and/or OCR text) can be sent to the localization module 150. The document conversion module 170 can include software which can transform a document from one form to another (e.g., from Word to PDF). The text processing statistical analysis module 175 can include software which can provide statistical analysis of the generated text to pre-process the textual information. For example, information such as the frequency of words can be provided. The document/output post-processing module 180 can include software which can prepare a result document in a particular form (e.g., a format requested by a user). It can also send result information to an external or internal application for additional formatting and processing. The systems administration module 185 can include software which allows an administrator to manage the software and hardware. In one embodiment, individual modules can be implemented as software modules that can be connected (via their specific interfaces), and their output can be routed to the modules desired for further processing. All described modules can run on one or many CPUs, virtual machines, mainframes, or shells within the described information processing infrastructure, such as CPU 115. The database 145 can be stored on the hard disk unit 120.
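By way of illustration only, the module chaining described above might be sketched as follows in Python; the Module interface, the placeholder OCRModule and LocalizationModule classes, and run_pipeline are illustrative assumptions rather than the actual implementation:

```python
# Illustrative sketch only: modules share a common interface and their output
# is routed to whichever modules are desired for further processing.
# All names (Module, OCRModule, LocalizationModule, run_pipeline) are assumptions.

class Module:
    def process(self, document: dict) -> dict:
        raise NotImplementedError

class OCRModule(Module):
    def process(self, document: dict) -> dict:
        # A real OCR module would generate a textual representation of the image.
        document.setdefault("text", "")
        return document

class LocalizationModule(Module):
    def process(self, document: dict) -> dict:
        # A real localization module would apply DVNs, DSMs and/or fuzzy format engines.
        document.setdefault("targets", [])
        return document

def run_pipeline(document: dict, modules: list) -> dict:
    """Route each module's output into the next module in the chain."""
    for module in modules:
        document = module.process(document)
    return document

result = run_pipeline({"image": "scanned-invoice"}, [OCRModule(), LocalizationModule()])
```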
The localization module 150 can utilize at least one document classifier, at least one dynamic variance network (DVN), at least one dynamic sensory map (DSM), or at least one fuzzy format engine, or any combination thereof. A document classifier can be used to classify a document using, for example, a class identifier (e.g., invoice, remittance statement, bill of lading, letter, e-mail; or by sender, vendor, or receiver identification). The document classifier can help narrow down the documents that need to be reviewed or to be taken into account for creating the learn sets. The document classifier can also help identify which scoring applications (e.g., DVNs, DSMs, and/or fuzzy format engines) should be used when reviewing new documents. For example, if the document classifier identifies a new document as an invoice from company ABC, this information can be used to pull information learned by the DVN, DSM, and fuzzy format engine from other invoices from company ABC. This learned information can then be applied to the new document in an efficient manner, as the learned information may be much more relevant than, for example, information learned from invoices from company BCD. The document classifier is described in more detail with respect to
As mentioned above, the localization module 150 can include numerous scoring applications, such as, but not limited to, DVNs, DSMs, or fuzzy format engines, or any combination thereof. DVNs can be used to determine possible target values by using references on a document, or a piece of a document, to identify possible locations for any targets. A score can be given for each possible target value identified by the DVN. DVNs are discussed further below with respect to
Information generated by the localization module 150 can be sent to the database(s) 145 or to external inputs (e.g., input unit 130, communication network 101, hard disk unit 120, and administration module 185). The output, or part of the output, of the localization module 150 can be stored, presented, or used as input parameters in various components (e.g., communications/output unit 105, display unit 110, hard disk unit 120, memory unit 125, communication network 101, conversion module 170, database(s) 145, OCR module 160, statistical analysis module 175), either using or not using the post-processing module 180. Such a feedback system can allow for iterative refinement.
Document Classifier
As indicated above, the document classifier can be used to classify a document using, for example, a class identifier (e.g., invoice, remittance statement, bill of lading, letter, e-mail; or by sender, vendor, or receiver identification). The document classifier can operate based on text in a document. The document classifier can also operate based on positional information about text in a document. Details relating to how a document classifier can classify a document using any combination of textual and/or positional information about text from the document are explained in more detail in the following patent application/patents, which are herein incorporated by reference: US 2009/0216693, U.S. Pat. Nos. 6,976,207, and 7,509,578 (all entitled “Classification Method and Apparatus”).
Once the text information and text positional information are obtained for at least one training document, this information can be used to return an appropriate class identifier for a new document. (It should also be noted that a human or other application can provide this information.) For example, if invoices issued by company ABC are to be reviewed, certain text (e.g., “ABC”) or text positional information (e.g., where “ABC” was found to be located on training documents using, for example, DVNs or DSMs) found on the training set of documents can be searched on new documents to help determine if the new document is an invoice issued by company ABC. Documents identified as invoices issued by company ABC can then be reviewed with company ABC-specific DVNs, DSMs and/or fuzzy format engines.
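As a rough sketch of this idea (Python), a class identifier could be returned for the class whose learned text/position pairs best match the new document; the data layout, the positional tolerance, and the classify function are assumptions made for illustration:

```python
# Illustrative sketch only: classify a new document as, e.g., "invoice from
# company ABC" when learned text is found near the positions observed on
# training documents. Tolerance value and data layout are assumptions.

def classify(words, learned):
    """
    words:   list of (text, x, y) tuples extracted from the new document (e.g., by OCR)
    learned: dict mapping class id -> list of (text, x, y) learned from training documents
    Returns the class id with the most matching text/position evidence.
    """
    tolerance = 0.05  # assumed positional tolerance (fraction of page size)
    best_class, best_hits = None, 0
    for class_id, anchors in learned.items():
        hits = 0
        for a_text, ax, ay in anchors:
            for text, x, y in words:
                if text.lower() == a_text.lower() and abs(x - ax) <= tolerance and abs(y - ay) <= tolerance:
                    hits += 1
                    break
        if hits > best_hits:
            best_class, best_hits = class_id, hits
    return best_class

learned = {"invoice_ABC": [("ABC", 0.10, 0.05), ("Invoice", 0.70, 0.05)]}
print(classify([("ABC", 0.11, 0.06), ("Invoice", 0.69, 0.05)], learned))  # -> invoice_ABC
```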
It should be noted that the document classification search can be performed in a fuzzy manner. For example, punctuation or separation characters, as well as leading or lagging alphabetical characters and leading or lagging zeroes can be ignored. Thus, for example, “123-45”, “1/2345”, “0012345”, “INR1234/5” can be found if a fuzzy search is done for the string “12345”. Those of ordinary skill in the art will see that many types of known fuzzy searching applications can be used to perform the document classification search. Other examples of fuzzy representations and their respective classification are described in further detail in the following patent application/patents, which are herein incorporated by reference: US 2009/0193022, U.S. Pat. Nos. 6,983,345, and 7,433,997 (all entitled “Associative Memory”).
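One possible normalization that behaves as described, ignoring punctuation, separators, leading or lagging letters, and leading zeroes, is sketched below in Python; it stands in for, and is not, the specific fuzzy searching application used:

```python
# A minimal sketch of the kind of fuzzy matching described above: punctuation,
# separators, leading/lagging letters and leading zeroes are ignored, so the
# variants below all match a search for "12345".
import re

def fuzzy_normalize(value: str) -> str:
    s = re.sub(r"[^0-9A-Za-z]", "", value)        # drop punctuation and separators
    s = re.sub(r"^[A-Za-z]+|[A-Za-z]+$", "", s)   # drop leading or lagging letters
    return s.lstrip("0")                          # drop leading zeroes

query = fuzzy_normalize("12345")
for candidate in ["123-45", "1/2345", "0012345", "INR1234/5"]:
    print(candidate, fuzzy_normalize(candidate) == query)   # all True
```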
As explained above, the document classifier can help narrow down the documents that need to be reviewed. The document classifier can also help identify which scoring applications (e.g., DVNs, DSMs, and/or fuzzy format engines) should be used when reviewing new documents. The information learned by those DVNs, DSMs, and/or fuzzy format engines for the identified class can then be applied to the new document in an efficient manner, as the learned information may be much more relevant than, for example, information learned from invoices from a different company (e.g., company BCD).
Dynamic Variance Networks (DVNs)
It should be noted that the above method 200 can provide increased redundancy and accuracy. Because every reference is a potential basis for target localization, there can be hundreds of reference anchors per page for each target. Thus, even for torn pages, where all classical keywords are missing, a target localization can be found.
In addition, it should be noted that a reference containing a typo, or a reference misrecognized by an OCR engine at a particular position, can automatically be used as an anchor based on where the reference is found. Thus, in some embodiments, there is no need to specify traditional keywords or apply any limitation to anchor references. In this way, strict and/or fuzzy matching can be utilized to match any similar reference to at least one reference in a new document.
Furthermore, the following characteristics of the reference can be taken into account when matching: font; font size; style; or any combination thereof. Additionally, the reference can be: merged with at least one other reference; and/or split into at least two references.
In 315, variance filtering can be performed by selecting similar reference vectors. The variance filtering can compare the references and the reference vectors for all documents in the learn set, compare the type of references, and keep similar reference vectors. Reference vectors can be positionally similar, content similar, and/or type similar with respect to the reference. A reference can be positionally similar when the reference is usually found in one or more particular places on a page. Content similarity relates to references having the same type of content (e.g., when the references are all the same word or similar words). Type similarity relates to the reference usually being a particular type (e.g., a numerical value, a word, a keyword, a font type, etc.). Similarity types can be tied to other similarity types (e.g., the references may be treated as content similar (the same word or similar words) only when the references are type similar as well (e.g., all of the type “date”)).
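A simplified sketch of such variance filtering is given below (Python); the data layout, the positional tolerance, and the use of a difflib similarity ratio for the content test are assumptions for illustration:

```python
# Illustrative sketch of variance filtering: a reference vector is kept only if,
# across the learn-set documents, its reference stays within a positional
# tolerance, has similar content, and has the same type. The data layout,
# the tolerance pos_tol, and the difflib-based content test are assumptions.
from difflib import SequenceMatcher

def content_similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def filter_reference_vectors(vectors_per_document, pos_tol=0.05):
    """
    vectors_per_document: one dict per learn-set document, mapping a reference key
    to (text, x, y, ref_type, dx, dy), where (dx, dy) points from the reference
    to the target. Returns only the vectors that stay consistent across documents.
    """
    kept = {}
    first = vectors_per_document[0]
    for key, (text, x, y, ref_type, dx, dy) in first.items():
        consistent = True
        for doc in vectors_per_document[1:]:
            other = doc.get(key)
            if other is None:
                consistent = False
                break
            o_text, ox, oy, o_type, _, _ = other
            if (abs(ox - x) > pos_tol or abs(oy - y) > pos_tol   # not positionally similar
                    or not content_similar(text, o_text)          # not content similar
                    or o_type != ref_type):                       # not type similar
                consistent = False
                break
        if consistent:
            kept[key] = (text, x, y, ref_type, dx, dy)
    return kept
```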
It should be noted that the reference consistency tests can be fuzzy. An example of fuzzy testing with a positionally similar reference is when everything within a defined x and y coordinate space is utilized, and the space parameters can be adjusted. Content consistency can be determined by comparing words. Thus, “Swine-Flu”, “swineflu”, “Schweinegrippe” and “H1N1” can be assumed to be identical for a special kind of fuzzy comparison. “Invoice Number”, “Inv0!ce No.” and “invoiceNr” can be assumed to be identical for another kind of fuzzy comparison. An example of type similar fuzzy testing is when more than one type can be used (e.g., both “number” type and “number/letter” type for a date).
In 320, the similar reference vectors kept by the variance filtering are used to create the DVN. For example,
It should be noted that the content, position and type of reference can be used to filter reference vectors and construct the DVN, especially when only totally similar reference vectors are used.
Note that the image of a reference can be blurry in some situations, because identical content with small positional changes can render the overlaid words readable but blurred. When the content is not the same (e.g., numbers for the invoice date, invoice number, order date and order number), the content may be unreadable in the overlay. As shown in
For example, using the example of 710, 805 and 810 of
In 410, all of the reference vectors that relate to the “keyword” references can be used to point towards the target. In 415, the pointer information from all of the reference vectors and the reference keywords can then be integrated to localize (determine) the target.
For example, in
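A minimal sketch of this integration step (Python) follows; representing each learned reference vector as a (dx, dy) offset and accumulating votes on a coarse position grid are assumptions made for illustration:

```python
# Minimal sketch of integrating the pointer information: each matched reference
# "votes" for the position its learned vector points to, and the most-voted cell
# is taken as the target location. Grid size and rounding are assumptions.
from collections import Counter

def localize_target(found_references, learned_vectors, grid=0.01):
    """
    found_references: dict mapping reference text -> (x, y) on the new document
    learned_vectors:  dict mapping reference text -> (dx, dy) offset to the target
    """
    votes = Counter()
    for text, (x, y) in found_references.items():
        if text in learned_vectors:
            dx, dy = learned_vectors[text]
            cell = (round((x + dx) / grid), round((y + dy) / grid))
            votes[cell] += 1
    if not votes:
        return None
    (cx, cy), _ = votes.most_common(1)[0]
    return cx * grid, cy * grid

refs = {"Invoice": (0.10, 0.05), "Date:": (0.60, 0.05), "Total": (0.10, 0.80)}
vectors = {"Invoice": (0.55, 0.00), "Date:": (0.05, 0.00), "Total": (0.55, -0.75)}
print(localize_target(refs, vectors))   # all three vectors vote for roughly (0.65, 0.05)
```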
Once possible positions for the locality of any targets are found using the DVNs, possible values for the targets can be found (e.g., Jan. 10, 2009 as the value for the target “invoice date”). Each possible value for the target can be given a score. The score can be determined by the ratio of the reference vectors hitting the target to the reference vectors not pointing to the target. Additionally, the fuzzy edit distance between the learned reference(s) (e.g., text) and the reference(s) used for localization can be integrated as a weight. For example, if all possible reference words on a document could be found exactly at the same relative position from the target as the ones stored in the learn set, the highest score can be returned. Additional references not contained in the learn set, or references with no vectors pointing towards the respective target, can reduce the score.
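A possible sketch of such a score is given below (Python); using a difflib similarity ratio as the fuzzy edit-distance weight for each hitting vector, and normalizing by hits plus misses, is one possible scheme among those the text allows:

```python
# Sketch of the scoring idea: reference vectors that hit the candidate target,
# weighted by how similar the learned and found reference text are, against the
# vectors that do not point at the candidate. The exact weighting is assumed.
from difflib import SequenceMatcher

def candidate_score(hitting_pairs, missing_count):
    """
    hitting_pairs: list of (learned_text, found_text) for vectors pointing at the candidate
    missing_count: number of applicable vectors that do not point at the candidate
    """
    if not hitting_pairs:
        return 0.0
    # weight each hit by the similarity of the learned and found reference text
    weighted_hits = sum(
        SequenceMatcher(None, a.lower(), b.lower()).ratio() for a, b in hitting_pairs
    )
    return weighted_hits / (weighted_hits + missing_count)

print(candidate_score([("Invoice Number", "Inv0!ce Number"), ("Date", "Date")], 1))
```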
It should be noted that DVNs can be used for many additional tasks, including, but not limited to: the addition of reference vectors, reference correction, document classification, page separation, recognition of document modification, document summarization, or document compression, or any combination thereof. These tasks are explained in more detail below.
Addition and/or Removal of Reference Vectors. DVNs can be dynamically adapted after target localization. When at least one reference vector is learned and used to localize a target, all other possible reference vectors can be created and dynamically added to the DVN learned in 210 of
Reference Correction. Reference vectors can be used for reference correction. An example is illustrated in
Another example of reference vectors being used for reference correction is when the reference vectors are used to locate a target of, for example, a specific type. Additional information present can then be used to correct a potentially corrupted target. For example, if the reference vectors point towards the reference “29 Sep. 1009”, and this reference is known to be a date field target from a currently retrieved document, then a correction of that target to “29 Sep. 2009” is possible. To do this correction, the high similarity between the recognized month (e.g., “Sep.”) and “September” can be used in a fuzzy content comparison, and additional information about the entry being a date can be used to correct the year to a (configurable) time period that seems to be valid. It should also be noted that, if a date field target is clearly located, then the reference vectors can be followed back to the potential anchor references. If, for example, the positional information for such an anchor reference fits perfectly, then the actual reference present there, but not fitting the anchor reference present in the learned DVN, could be replaced by the one from the learned DVN. For example, if the invoice number field target was located, the surrounding classical keyword, which is corrupted and shows “Inv0!ce Number”, could be replaced by the one stored for this position in the learned DVN. Thus, after that correction, “Invoice Number” could be read at that position.
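One way the year correction described above might be sketched (Python) is shown below; the regular expression, the bounds of the valid time window, and the correction rule are assumptions for illustration:

```python
# Possible sketch of the date-correction step: if a located date target parses
# into a year far outside a configurable valid window, the year is corrected
# into that window. Window bounds and correction rule are assumptions.
import re

def correct_year(date_text, valid_from=1990, valid_to=2030):
    match = re.match(r"(\d{1,2})\s+([A-Za-z]{3,})\.?\s+(\d{4})", date_text)
    if not match:
        return date_text
    day, month, year = match.group(1), match.group(2), int(match.group(3))
    if valid_from <= year <= valid_to:
        return date_text
    # keep the last two digits and move the year into the valid window
    corrected = 2000 + year % 100
    if not (valid_from <= corrected <= valid_to):
        corrected = 1900 + year % 100
    return f"{day} {month}. {corrected}"

print(correct_year("29 Sep. 1009"))   # -> "29 Sep. 2009"
```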
Document Classification. As explained earlier with respect to
Page Separation. Positional information regarding anchor references can also be used for page separation. In a stack of different documents (e.g., single documents, multi-page documents), the changes in the DVN's positional information (also referred to as the “quality of fit”) can provide information about the starting page of a new document. This method can be used, for example, to repackage piles of documents into single documents.
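A minimal sketch of turning such a “quality of fit” signal into page breaks is shown below (Python); the per-page fit scores and the threshold are assumed inputs:

```python
# Possible sketch of page separation from a per-page "quality of fit" signal:
# a page whose anchors fit the running document's DVN poorly is treated as the
# first page of a new document. The threshold is an assumed tuning parameter.
def split_stack(quality_of_fit, min_fit=0.5):
    """quality_of_fit[i]: how well page i's anchors fit the DVN of the running document."""
    starts = [0]
    for page in range(1, len(quality_of_fit)):
        if quality_of_fit[page] < min_fit:
            starts.append(page)   # poor fit -> likely the start of a new document
    return starts

print(split_stack([0.92, 0.90, 0.35, 0.88, 0.91]))  # -> [0, 2]
```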
Recognition of Document Modification. DVNs can also be used in a reverse manner (e.g., after having located a target, looking up how well the anchor words on the present document fit the learned anchor words of the DVN) to recognize a document modification. For example, in
Document Summarization. DVNs can also be used to automatically summarize document content. This process is illustrated in
Document Compression. DVNs can also be used for compression of a document or set of documents. In
Dynamic Sensory Maps (DSMs)
FIG. 18 illustrates details related to applying the DSM in 1630, according to one embodiment. In 1810, the DSM is overlaid onto the document where a target is to be localized. In 1820, the possible position(s) of the target (along with the probability for each possible position) can be obtained from the DSM. In 1830, these possible positions can be sorted so that the position with the highest probability can be deemed to be the position of the target. Once the position of the target is determined, information about the target (e.g., an amount listed in the “total amount” field) can be found.
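A simplified sketch of steps 1810 through 1830 follows (Python); representing the DSM as a flat list of (position, probability) pairs, and the positional tolerance, are assumptions made for illustration:

```python
# Illustrative sketch of applying a DSM: the map supplies possible target
# positions with probabilities, the positions are sorted, and the value found at
# the most probable position is returned. The DSM layout is an assumed simplification.
def apply_dsm(dsm_positions, words, tolerance=0.03):
    """
    dsm_positions: list of ((x, y), probability) supplied by the learned DSM
    words:         list of (text, x, y) extracted from the document
    """
    for (px, py), prob in sorted(dsm_positions, key=lambda item: item[1], reverse=True):
        for text, x, y in words:
            if abs(x - px) <= tolerance and abs(y - py) <= tolerance:
                return text, prob    # value found at the most probable position
    return None, 0.0

dsm = [((0.82, 0.90), 0.75), ((0.15, 0.40), 0.20)]
print(apply_dsm(dsm, [("1,042.17", 0.81, 0.91), ("Total", 0.70, 0.90)]))
```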
Fuzzy Format Engines
Fuzzy format engines can collect a list of fuzzy formats for at least one target from training documents. During the extraction phase, the fuzzy format engine can calculate a score for the matching of the learned formats to the potential target. For example, given the target value “102.65$” for an amount type target, the fuzzy format engine could learn from the training documents the representation “ddd.ddR”, where d represents a digit and R represents a currency symbol. If the fuzzy format engine then finds a string “876.27$”, this string can be determined to be a potential target value with a very high score (e.g., 10). However, if the string “1872,12$” is found, the score could be reduced by one for the additional digit, and reduced by another one for the comma instead of the period, for a score of 8. As another example, a fuzzy format engine could learn that “INVNR-10234” could be represented as “CCCCC-ddddd”, where C represents capital characters and d represents digits. Those of ordinary skill in the art will see that many types of fuzzy format engines can be used, and there can also be many types of scoring utilized. Other possible scoring systems include, for example: the different handling of missing or additional characters and digits (e.g., having a 0.125 score penalty per missing or additional character vs. a 0.25 penalty for a missing or additional digit); and character string similarity measures that can be obtained as described in the following patent application/patents, which are herein incorporated by reference: US 2009/0193022, U.S. Pat. Nos. 6,983,345, and 7,433,997 (all entitled “Associative Memory”).
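A minimal sketch of such fuzzy format scoring is given below (Python); abstracting values into d/C/R format strings follows the examples above, while using a unit-cost edit distance as the penalty (so that “1872,12$” scores 8 against the learned “ddd.ddR”) is one possible scheme, and the alternative per-character/per-digit penalties mentioned above could be substituted:

```python
# Sketch of a fuzzy format score: a value is abstracted into a format string
# (d = digit, C/c = upper/lower-case letter, R = currency symbol) and candidates
# are penalized per deviation from the learned format. Penalties are assumptions.
def to_format(value: str) -> str:
    out = []
    for ch in value:
        if ch.isdigit():
            out.append("d")
        elif ch.isalpha():
            out.append("C" if ch.isupper() else "c")
        elif ch in "$€£¥":
            out.append("R")
        else:
            out.append(ch)          # punctuation kept literally
    return "".join(out)

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance; each insertion, deletion or substitution costs 1."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def format_score(learned_format: str, candidate: str, max_score: float = 10.0) -> float:
    return max(0.0, max_score - edit_distance(learned_format, to_format(candidate)))

learned = to_format("102.65$")                 # -> "ddd.ddR"
print(format_score(learned, "876.27$"))        # 10.0
print(format_score(learned, "1872,12$"))       # 8.0 (one extra digit, comma for period)
```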
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention. Thus, the present invention should not be limited by any of the above-described exemplary embodiments.
In addition, it should be understood that the figures described above, which highlight the functionality and advantages of the present invention, are presented for example purposes only. The architecture of the present invention is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the figures.
Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope of the present invention in any way.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112, paragraph 6.
Number | Name | Date | Kind |
---|---|---|---|
4731861 | Blanton et al. | Mar 1988 | A |
4799188 | Yoshimura | Jan 1989 | A |
4864501 | Kucera et al. | Sep 1989 | A |
5191525 | LeBrun et al. | Mar 1993 | A |
5201047 | Maki et al. | Apr 1993 | A |
5245672 | Wilson et al. | Sep 1993 | A |
5267165 | Sirat | Nov 1993 | A |
5276870 | Shan et al. | Jan 1994 | A |
5278980 | Pedersen et al. | Jan 1994 | A |
5344132 | LeBrun et al. | Sep 1994 | A |
5537491 | Mahoney et al. | Jul 1996 | A |
5546503 | Abe et al. | Aug 1996 | A |
5550931 | Bellegarda et al. | Aug 1996 | A |
5619709 | Caid et al. | Apr 1997 | A |
5649068 | Boser et al. | Jul 1997 | A |
5671333 | Catlett et al. | Sep 1997 | A |
5675710 | Lewis | Oct 1997 | A |
5689620 | Kopec et al. | Nov 1997 | A |
5745889 | Burrows | Apr 1998 | A |
5778362 | Deerwester | Jul 1998 | A |
5787201 | Nelson et al. | Jul 1998 | A |
5794178 | Caid et al. | Aug 1998 | A |
5809476 | Ryan | Sep 1998 | A |
5864855 | Ruocco et al. | Jan 1999 | A |
5889886 | Mahoney | Mar 1999 | A |
5918223 | Blum et al. | Jun 1999 | A |
5937084 | Crabtree et al. | Aug 1999 | A |
5956419 | Kopec et al. | Sep 1999 | A |
5987457 | Ballard | Nov 1999 | A |
5999664 | Mahoney | Dec 1999 | A |
6006221 | Liddy et al. | Dec 1999 | A |
6009196 | Mahoney | Dec 1999 | A |
6043819 | LeBrun et al. | Mar 2000 | A |
6047299 | Kaijima | Apr 2000 | A |
6069978 | Peairs | May 2000 | A |
6076088 | Paik et al. | Jun 2000 | A |
6101515 | Wical et al. | Aug 2000 | A |
6115708 | Fayyad et al. | Sep 2000 | A |
6161130 | Horvitz et al. | Dec 2000 | A |
6185576 | McIntosh | Feb 2001 | B1 |
6188010 | Iwamura | Feb 2001 | B1 |
6192360 | Dumais et al. | Feb 2001 | B1 |
6212532 | Johnson et al. | Apr 2001 | B1 |
6243713 | Nelson et al. | Jun 2001 | B1 |
6275610 | Hall, Jr. et al. | Aug 2001 | B1 |
6289334 | Reiner et al. | Sep 2001 | B1 |
6324551 | Lamping et al. | Nov 2001 | B1 |
6327387 | Naoi et al. | Dec 2001 | B1 |
6453315 | Weissman et al. | Sep 2002 | B1 |
6477551 | Johnson et al. | Nov 2002 | B1 |
6574632 | Fox et al. | Jun 2003 | B2 |
6611825 | Billheimer | Aug 2003 | B1 |
6622134 | Sorkin | Sep 2003 | B1 |
6629097 | Keith | Sep 2003 | B1 |
6665668 | Sugaya et al. | Dec 2003 | B1 |
6665841 | Mahoney et al. | Dec 2003 | B1 |
6732090 | Shanahan et al. | May 2004 | B2 |
6741724 | Bruce et al. | May 2004 | B1 |
6741959 | Kaiser | May 2004 | B1 |
6772164 | Reinhardt | Aug 2004 | B2 |
6785810 | Lirov | Aug 2004 | B1 |
6944340 | Shah | Sep 2005 | B1 |
6976207 | Rujan et al. | Dec 2005 | B1 |
6983345 | Lapir et al. | Jan 2006 | B2 |
6990238 | Saffer | Jan 2006 | B1 |
7149347 | Wnek | Dec 2006 | B1 |
7433997 | Lapir et al. | Oct 2008 | B2 |
7440938 | Matsubayashi et al. | Oct 2008 | B2 |
7472121 | Kothari et al. | Dec 2008 | B2 |
7483570 | Knight | Jan 2009 | B1 |
7509578 | Rujan et al. | Mar 2009 | B2 |
7610281 | Gandhi et al. | Oct 2009 | B2 |
7720721 | Goldstein et al. | May 2010 | B1 |
7805446 | Potok et al. | Sep 2010 | B2 |
7865018 | Abdulkader | Jan 2011 | B2 |
7908430 | Lapir et al. | Mar 2011 | B2 |
8015198 | Rabald et al. | Sep 2011 | B2 |
8051139 | Musat | Nov 2011 | B1 |
8554852 | Burnim | Oct 2013 | B2 |
20010042083 | Saito et al. | Nov 2001 | A1 |
20020022956 | Ukrainczyk et al. | Feb 2002 | A1 |
20020023085 | Keith, Jr. | Feb 2002 | A1 |
20020129015 | Caudill et al. | Sep 2002 | A1 |
20020133476 | Reinhardt | Sep 2002 | A1 |
20020156760 | Lawrence et al. | Oct 2002 | A1 |
20020156816 | Kantrowitz et al. | Oct 2002 | A1 |
20030099399 | Zelinski | May 2003 | A1 |
20040049411 | Suchard et al. | Mar 2004 | A1 |
20040054666 | Lapir et al. | Mar 2004 | A1 |
20040243601 | Toshima | Dec 2004 | A1 |
20040255218 | Tada et al. | Dec 2004 | A1 |
20050021508 | Matsubayashi et al. | Jan 2005 | A1 |
20050160369 | Balabanovic et al. | Jul 2005 | A1 |
20060142993 | Menendez-Pidal | Jun 2006 | A1 |
20060210138 | Hilton | Sep 2006 | A1 |
20060212413 | Rujan et al. | Sep 2006 | A1 |
20060212431 | Lapir et al. | Sep 2006 | A1 |
20070033252 | Combest | Feb 2007 | A1 |
20070091376 | Calhoon et al. | Apr 2007 | A1 |
20070244882 | Cha et al. | Oct 2007 | A1 |
20070288882 | Kniffin et al. | Dec 2007 | A1 |
20080040660 | Georke et al. | Feb 2008 | A1 |
20080126335 | Gandhi et al. | May 2008 | A1 |
20080208840 | Zhang et al. | Aug 2008 | A1 |
20080212877 | Franco | Sep 2008 | A1 |
20080252924 | Gangai | Oct 2008 | A1 |
20090193022 | Lapir et al. | Jul 2009 | A1 |
20090198677 | Sheehy et al. | Aug 2009 | A1 |
20090216693 | Rujan et al. | Aug 2009 | A1 |
20090226090 | Okita | Sep 2009 | A1 |
20090228777 | Henry et al. | Sep 2009 | A1 |
20090274374 | Hirohata et al. | Nov 2009 | A1 |
20090307202 | Rabald et al. | Dec 2009 | A1 |
20100325109 | Bai | Dec 2010 | A1 |
20110078098 | Lapir et al. | Mar 2011 | A1 |
20110103688 | Urbschat et al. | May 2011 | A1 |
20110103689 | Urbschat et al. | May 2011 | A1 |
20110106823 | Urbschat et al. | May 2011 | A1 |
Number | Date | Country |
---|---|---|
0 320 266 | Jun 1989 | EP |
0 572 807 | Dec 1993 | EP |
0 750 266 | Dec 1996 | EP |
0809219 | Nov 1997 | EP |
1049030 | Nov 2000 | EP |
1 128 278 | Aug 2001 | EP |
1 182 577 | Feb 2002 | EP |
1 288 792 | Mar 2003 | EP |
1 315 096 | May 2003 | EP |
2172130 | Sep 1986 | GB |
01-277977 | Nov 1989 | JP |
02-186484 | Jul 1990 | JP |
04-123283 | Nov 1992 | JP |
07-271916 | Oct 1995 | JP |
10-91712 | Apr 1998 | JP |
11-184894 | Jul 1999 | JP |
11-184976 | Jul 1999 | JP |
2000-155803 | Jun 2000 | JP |
2003-524258 | Aug 2003 | JP |
2005-038077 | Feb 2005 | JP |
2009-238217 | Oct 2009 | JP |
WO 8801411 | Feb 1988 | WO |
WO 8904013 | May 1989 | WO |
WO 9110969 | Jul 1991 | WO |
WO 9801808 | Jan 1998 | WO |
WO 9847081 | Oct 1998 | WO |
WO 01 42984 | Jun 2001 | WO |
WO 0163467 | Aug 2001 | WO |
WO 0215045 | Feb 2002 | WO |
WO 03019524 | Mar 2003 | WO |
WO 03044691 | May 2003 | WO |
Entry |
---|
File History of U.S. Appl. No. 12/191,774. |
A. Krikelis et al., “Associative Processing and Processors” Computer, US, IEEE Computer Society, Long Beach, CA, US, vol. 27, No. 11, Nov. 1, 1994, pp. 12-17, XP000483129. |
International Search Report issued in related International Application No. PCT/EP01/09577, mailed Nov. 5, 2003. |
Motonobu Hattori, “Knowledge Processing System Using Multi-Mode Associate Memory”, IEEE, vol. 1, pp. 531-536 (May 4-9, 1998). |
International Search Report issued in International Application No. PCT/EP01/01132, mailed May 30, 2001. |
H. Saiga et al., “An OCR System for Business Cards”, Proc. of the 2nd Int. Conf. on Document Analysis and Recognition, Oct. 20-22, 1993, pp. 802-805, XP002142832. |
Junliang Xue et al., “Destination Address Block Location on Handwritten Chinese Envelope”, Proc. of the 5th Int. Conf. on Document Analysis and Recognition, Sep. 20-22, 1999, pp. 737-740, XP002142833. |
Machine Translation of JP 11-184894. |
Simon Berkovich et al., “Organization of Near Matching In Bit Attribute Matrix Applied to Associative Access Methods In Information Retrieval”, Proc. of the 16th IASTED Int. Conf. Applied Informatics, Feb. 23-25, 1998, Garmisch-Partenkirchen, Germany. |
European Office Action issued in Application No. 01127768.8 mailed Sep. 10, 2008. |
European Office Action issued in Application No. 01120429.4 mailed Sep. 16, 2008. |
Freitag, “Information extraction from HTML: application of a general machine learning approach”, Proc. 15th National Conference on Artificial Intelligence (AAAI-98); Tenth Conference on Innovative Applications of Artificial Intelligence, Proceedings of the Fifteenth National Conference on Artificial Intelligence, Madison, WI, USA, pp. 517-523, XP002197239, 1998, Menlo Park, CA, USA, AAAI Press/MIT Press, USA, ISBN: 0-262-51098-7. |
European Office Action issued in Application No. 01120429.4 mailed Jun. 30, 2006. |
European Office Action issued in Application No. 01120429.4 mailed Apr. 23, 2004. |
E. Appiani et al., “Automatic document classification and indexing in high-volume applications”, International Journal on Document Analysis and Recognition, Dec. 2001, Springer-Verlag, Germany, vol. 4, No. 2, pp. 69-83, XP002197240, ISSN: 1433-2833. |
A. Ting et al., “Form recognition using linear structure”, Pattern Recognition, Pergamon Press Inc., Elmsford, NY, US, vol. 32, No. 4, Apr. 1999, pp. 645-656, XP004157536, ISSN: 0031-3203. |
International Search Report issued in International Application PCT/US02/27132 issued Nov. 12, 2002. |
“East text search training”, Jan. 2000. |
European Search Report issued in European Office Action 01120429.4 mailed Jul. 19, 2002. |
International Search Report issued in International Application PCT/DE97/01441, mailed Nov. 12, 1997. |
Voorhees et al., “Vector expansion in a large collection”, NIST Special Publication, US, Gaithersburg, MD, pp. 343-351. |
M. Marchand et al., “A Convergence Theorem for Sequential Learning in Two-Layer Perceptrons”, Europhysics Letters, vol. 11, No. 6, Mar. 15, 1990, pp. 487-492. |
F. Aurenhammer, “Voronoi Diagrams—A Survey of a Fundamental Geometric Data Structure”, ACM Computing Surveys, vol. 23, No. 3, Sep. 1991, pp. 345-405. |
C. Reyes et al., “A Clustering Technique for Random Data Classification”, International Conference on Systems, Man and Cybernetics, IEEE, pp. 316-321. |
International Search Report issued in PCT/EP00/03097 mailed Sep. 14, 2000. |
International Preliminary Examination Report issued in PCT/EP00/03097 mailed Jul. 31, 2001. |
Written Opinion issued in PCT/EP00/03097 mailed Apr. 21, 2001. |
Japanese Office Action issued in Japanese Application No. 2003-522903, mailed Jul. 29, 2008. |
English language translation of Japanese Office Action issued in Japanese Application No. 2003-522903, mailed Jul. 29, 2008. |
European Office Action issued in European Application No. 01 905 729.8, mailed Nov. 22, 2007. |
Foreign Office Action issued in EP 00117850.8 mailed May 20, 2008. |
Foreign Office Action issued in EP 00117850.8 mailed Jul. 20, 2006. |
EP Search Report issued in EP 00117850.8 mailed Jun. 12, 2001. |
International Notification issued in PCT/EP00/03097 mailed May 5, 2001. |
G. Lapir, “Use of Associative Access Method for Information Retrieval System”, Proc. 23rd Pittsburgh Conf. on Modeling & Simulation, 1992, pp. 951-958. |
Witten et al, “Managing Gigabytes” pp. 128-142. |
C.J. Date, “An Introduction to Database Systems, Sixth Edition”, Addison-Wesley Publishing Company, pp. 52-65 (1995). |
International Search Report for PCT/US00/23784 mailed Oct. 26, 2000. |
Australian Office Action in Australian application 2002331728 mailed Nov. 16, 2006. |
Australian Notice of Acceptance issued in Australian application 2002331728 mailed Feb. 20, 2008. |
Foreign Office Action issued in EP 01127768.8 mailed Feb. 5, 2007. |
Foreign Office Action issued in EP 01127768.8 mailed Sep. 8, 2005. |
Bo-ren Bai et al. “Syllable-based Chinese text/spoken documents retrieval using text/speech queries”, Int'l Journal of Pattern Recognition. |
Foreign Search Report issued in EP 01127768.8 mailed Sep. 12, 2002. |
European Search Report issued in EP 00103810.8, mailed Aug. 1, 2000. |
European Office Action issued in EP 00103810.8, mailed May 23, 2002. |
International Preliminary Examination Report issued in International Application No. PCT/DE97/01441, mailed Jul. 21, 1998. |
European Office Action issued in EP 00926837.6, mailed Nov. 28, 2006. |
European Office Action issued in EP 00926837.6, mailed Oct. 11, 2007. |
Australian Office Action issued in AU 2001282106, mailed Jul. 18, 2006. |
Australian Office Action issued in AU 2001233736, mailed Aug. 26, 2005. |
Australian Office Action issued in AU 2001233736, mailed Aug. 23, 2006. |
European Office Action issued in EP 97931718.7, mailed Jul. 9, 2001. |
European Office Action issued in EP 97931718.7, mailed May 8, 2002. |
International Search Report issued in International Application No. PCT/EP98/00932, mailed Jul. 28, 1998. |
Richard G. Casey et al., “A Survey of Methods and Strategies in Character Segmentation”, IEEE Transactions on Pattern Analysis and machine Intelligence, vol. 18, No. 7, pp. 690-706 (Jul. 1996). |
File History of U.S. Appl. No. 11/240,525. |
File History of U.S. Appl. No. 09/561,196. |
File History of U.S. Appl. No. 11/240,632. |
File History of U.S. Appl. No. 10/362,027. |
File History of U.S. Appl. No. 12/106,450. |
Office Action issued in Canadian Patent Application No. 2,459,182, mailed Oct. 26, 2010. |
File History of U.S. Appl. No. 10/208,088. |
European Office Action issued in EP 01127768.8, mailed Feb. 17, 2011. |
R.M. Lea et al., “An Associative File Store Using Fragments for Run-Time Indexing and Compression”, SIGIR'80, Proceedings of the 3rd Annual ACM Conference on Research and Development in Information Retrieval, pp. 280-295 (1980). |
M. Koga et al., “Lexical Search Approach for Character-String Recognition”, DAS'98, LNCS 1655, pp. 115-129 (1999). |
Office Action issued in Canadian Application No. CA 2,419,776, dated Aug. 16, 2011. |
Notice of Allowance issued in Canadian Application No. 2,459,182, dated Oct. 28, 2011. |
James Wnek, “Learning to Identify Hundreds of Flex-Form Documents”, IS&T/SPIE Conference on Document Recognition and Retrieval VI, San Jose, CA, SPIE vol. 3651, pp. 173-182 (Jan. 1999). |
International Search Report issued in PCT/IB2010/003252, mailed Jan. 24, 2012. |
Written Opinion issued in PCT/IB2010/003252, mailed Jan. 24, 2012. |
A. Dengel et al., “Chapter 8: Techniques for Improving OCR Results”, Handbook of Character Recognition and Document Image Analysis, pp. 227-258, Copyright 1997 World Scientific Publishing Company. |
Remy Mullot, “Les Documents Ecrits”, Jan. 1, 2006, Lavoisier—Hermes Science, pp. 351-355, “Section 7.6.3 Reconnaissance des caracteres speciaux ou endommages”. |
L. Solan, “The Language of the Judges”, University of Chicago Press, pp. 45-54, Jan. 1, 1993. |
File History of U.S. Appl. No. 13/024,086. |
English translation of Remy Mullot, “Les Documents Ecrits”, Jan. 1, 2006, Lavoisier—Hermes Science, pp. 351-355, “Section 7.6.3 Reconnaissance des caracteres speciaux ou endommages”. |
Dave, et al., Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews, ACM 1-58113-680-3/03/0005, May 20-24, 2003, pp. 1-10. |
Dreier, Blog Fingerprinting: Identifying Anonymous Posts Written by an Author of Interest Using Word and Character Frequency Analysis, Master's Thesis, Naval Postgraduate School, Monterey, California. Sep. 2009, pp. 1-93. |
File History of U.S. Appl. No. 12/208,088. |
File History of U.S. Appl. No. 12/570,412. |
File History of U.S. Appl. No. 13/548,042. |
File History of U.S. Appl. No. 13/624,443. |
File History of U.S. Appl. No. 10/204,756. |
Office Action issued in Canadian Application No. CA 2,776,891, mailed Oct. 22, 2013. |
Office Action issued in Canadian Application No. CA 2,796,392, mailed Sep. 26, 2013. |
Office Action issued in Canadian Application No. CA 2,796,392, mailed Apr. 30, 2014. |
File History of U.S. Appl. No. 12/588,928. |
File History of U.S. Appl. No. 12/610,937. |
File History of U.S. Appl. No. 13/192,703. |
File History of U.S. Appl. No. 13/477,645. |
Office Action issued in Japanese Application No. 2012-532203, mailed Jul. 15, 2014. |
Office Action issued in Canadian Application No. 2,776,891 dated May 13, 2014. |
English language translation of Office Action issued in Japanese Application No. 2012-537459, mailed Jun. 17, 2014. |
English language abstract and translation of JP 10-091712 published Apr. 10, 1998. |
English language abstract of JP 1-277977 published Nov. 8, 1989. |
English language abstract of JP 2-186484 published Jul. 20, 1990. |
English language abstract and translation of JP 7-271916 published Oct. 20, 1995. |
English language abstract and translation of JP 11-184976 published Jul. 9, 1999. |
English language abstract and translation of JP 2000-155803 published Jun. 6, 2000. |
U.S. Appl. No. 09/561,196, filed Apr. 27, 2000. |
U.S. Appl. No. 10/531,298, filed Apr. 15, 2005. |
U.S. Appl. No. 11/429,436, filed May 8, 2006. |
U.S. Appl. No. 11/620,628, filed Jan. 5, 2007. |
U.S. Appl. No. 11/896,746, filed Sep. 5, 2007. |
U.S. Appl. No. 13/024,086, filed Feb. 9, 2011. |
U.S. Appl. No. 13/192,703, filed Jul. 28, 2011. |
International Search Report issued in PCT/IB2010/003251, mailed May 2, 2011. |
Written Opinion issued in PCT/IB2010/003251, mailed May 2, 2011. |
International Search Report issued in PCT/IB2010/003250, mailed May 2, 2011. |
Written Opinion issued in PCT/IB2010/003250, mailed May 2, 2011. |
Suzanne Liebowitz Taylor et al. “Extraction of Data from Preprinted Forms,” Machine Vision and Applications, vol. 5, pp. 211-222, Jan. 1992. |
International Search Report issued in PCT/IB2010/050087, mailed May 27, 2011. |
Written Opinion issued in PCT/IB2010/050087, mailed May 27, 2011. |
File History U.S. Appl. No. 12/106,450. |
File History U.S. Appl. No. 12/570,412. |
File History U.S. Appl. No. 13/024,086. |
Partial English language translation of Office Action issued in Japanese Application No. 2012-532203, mailed Jul. 15, 2014. |
English language abstract and translation of JP 2003-524258 published Aug. 12, 2003. |
Office Action issued in Japanese Application No. JP 2012-537457 dated Apr. 18, 2014. |
English language translation of Office Action issued in Japanese Application No. JP 2012-537457 dated Apr. 18, 2014. |
English language abstract of JP 2009-238217, published Oct. 15, 2009. |
English language abstract and machine translation of JP 2005-038077, published Feb. 10, 2005. |
File History U.S. Appl. No. 13/624,443. |
G. Salton et al., “A Vector Space Model for Automatic Indexing”, Communications of the ACM, vol. 18, No. 11, pp. 613-620, Nov. 1975. |
Office Action issued in Australian Application No. 2010311066 dated Oct. 29, 2014. |
Suzanne Liebowitz Taylor et al., “Extraction of Data from Preprinted Forms”, Machine Vision and Applications, vol. 5, No. 3, pp. 211-222 (1992). |
Office Action issued in Australian Application No. 2010311067 dated Oct. 21, 2014. |
Office Action issued in Japanese Application No. 2012-537458 mailed Jun. 17, 2014. |
English language translation of Office Action issued in Japanese Application No. 2012-537458, mailed Jun. 17, 2014. |
Office Action issued in Australian Application No. 2013205566 dated Dec. 9, 2014. |
Stephen V. Rice et al., “The Fifth Annual Test of OCR Accuracy”, Information Science Research Institute, TR-96-01, Apr. 1996 (48 pages). |
Thomas A. Nartker et al., “Software Tools and Test Data for Research and Testing of Page-Reading OCR Systems”, IS&T/SPIE Symposium on Electronic Imaging Science and Technology, Jan. 2005 (11 pages). |
Tin Kam Ho et al., “OCR with No Shape Training”, International Conference on Pattern Recognition (2000) (4 pages). |
Karen Kukich, “Techniques for Automatically Correcting Words in Text”, ACM Computing Surveys, vol. 24, No. 4, pp. 377-439, Dec. 1992. |
Atsuhiro Takasu, “An Approximate Multi-Word Matching Algorithm for Robust Document Retrieval”, CIKM'06, Nov. 5-11, 2006 (9 pages). |
Kenneth Ward Church et al., “Word Association Norms, Mutual Information, and Lexicography”, Computational Linguistics, vol. 16, No. 1, pp. 22-29, Mar. 1990. |
Anil N. Hirani et al., “Combining Frequency and Spatial Domain Information for Fast Interactive Image Noise Removal”, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 269-276 (1996). |
Leonid L. Rudin et al., “Nonlinear Total Variation Based Noise Removal Algorithms”, Physica D, vol. 60, pp. 259-268 (1992). |
Richard Alan Peters, II, “A New Algorithm for Image Noise Reduction Using Mathematical Morphology”, IEEE Transactions on Image Processing, vol. 4, No. 3, pp. 554-568, May 1995. |
George Nagy, “Twenty Years of Document Image Analysis in PAMI”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 1, pp. 38-62, Jan. 2000. |
Stephen V. Rice et al., “Optical Character Recognition: An Illustrated Guide to the Frontier”, Kluwer Academic Publishers, Copyright Apr. 1999. |
File History U.S. Appl. No. 10/204,756. |
File History U.S. Appl. No. 12/588,928. |
File History U.S. Appl. No. 12/610,937. |
File History U.S. Appl. No. 13/192,703. |
File History U.S. Appl. No. 13/477,645. |
File History U.S. Appl. No. 13/548,042. |
English language translation of Office Action issued in Japanese Application No. 2012-537458 mailed Jun. 17, 2014. |
Andreas R. Dengel et al., “smartFIX: A Requirement Driven System for Document Analysis and Understanding”, In: D. Lopresti et al., Document Analysis V, Springer, pp. 433-444 (Copyright 2002). |
Bertin Klein et al., “On Benchmarking of Invoice Analysis Systems” DAS 2006, LNCS 3872, pp. 312-323 (2006). |
T.A. Bayer et al., “A Generic System for Processing Invoices”, 4th Int. Conf. on Document Analysis and Recognition, pp. 740-744 (1997). |
D.P. Lopresti et al., “A Fast Technique for Comparing Graph Representations with Applications to Performance Evaluation”, Int. Journal on Document Analysis and Recognition (IJDAR), vol. 6, pp. 219-229 (2004). |
Takashi Hirano et al., “Field Extraction Method from Existing Forms Transmitted by Facsimile”, In Proceedings of the Sixth International Conference on Document Analysis and Recognition (ICDAR), pp. 738-742, Sep. 10-13, 2001. |
Donato Malerba et al., “Automated Discovery of Dependencies Between Logical Components in Document Image Understanding”, In Proceedings of the Sixth International Conference on Document Analysis and Recognition (ICDAR), Sep. 10-13, 2001 (5 pages). |
Frederick Schultz et al., “Seizing the Treasure: Transferring Layout Knowledge in Invoice Analysis”, Proceedings of the 10th International Conference on Document Analysis and Recognition (ICDAR) (2009). |
“Brainware: Invoice Processing”, downloaded from Internet Archive at http://web.archive.org/web/20080512043505/http://www.brainware.com/invoice.php, archived May 12, 2008 (1 page). |
“Kofax: Solutions: Invoice Processing”, downloaded from Internet Archive at http://web.archive.org/web/20081012062841/http://vvww.kofax.com/solutions/invoice-processing.asp, archived Oct. 12, 2008 (1 page). |
“Kofax: Solutions: Kofax White Papers”, downloaded from Internet Archive at http://web.archive.org/web/20080928052442/http://vvww.kofax.com/solutions/white-papers.asp, archived Sep. 28, 2008 (2 pages). |
Number | Date | Country | |
---|---|---|---|
20110106823 A1 | May 2011 | US |