Systems and methods for enrollment and identity management using mobile imaging

Information

  • Patent Grant
  • Patent Number
    12,008,543
  • Date Filed
    Friday, November 19, 2021
  • Date Issued
    Tuesday, June 11, 2024
Abstract
Systems and methods for automatic enrollment and identity verification based upon processing a captured image of a document are disclosed herein. Various embodiments enable, for example, a user to enroll in a particular service by taking a photograph of a particular document (e.g., a driver license) with a mobile device. One or more algorithms can then extract relevant data from the captured image. The extracted data (e.g., the person's name, gender, date of birth, height, weight, etc.) can then be used to automatically populate various fields of an enrollment application, thereby reducing the amount of information that the user has to manually input into the mobile device in order to complete the enrollment process. In some embodiments, a set of internal and/or external checks can be run against the data to ensure that the data is valid, has been read correctly, and is consistent with other data.
Description
BACKGROUND
1. Field of the Invention

Various embodiments described herein relate generally to the field of identity verification through image processing. More particularly, various embodiments are directed in one exemplary aspect to support automatic enrollment and identity verification upon processing a captured image of a document.


2. Related Art

Mobile phone adoption continues to escalate, including ever-growing smartphone adoption and tablet usage. Mobile imaging is a discipline in which a consumer takes a picture of a document and that image is processed to extract the data contained within it for selected purposes. The convenience of this technique is powerful and is currently driving demand for this technology throughout Financial Services and other industries.


At the same time, consumers are looking for quicker and easier ways to enroll in new products and services. During a typical enrollment process, consumers must identify themselves by providing common personal and demographic data. Mobile devices are increasingly becoming the preferred device for such purposes, yet typing this data into a mobile device can be cumbersome. Additionally, the institution providing the desired service must verify and validate the identity of the customer in order to manage its business risk. Presently, there is no mechanism for providing automatic enrollment and identity verification through mobile imaging.


SUMMARY

Disclosed herein are systems and methods for automatic enrollment and identity verification based on processing of a captured image of a document. Various embodiments enable, for example, a user to enroll in a particular service by taking a photograph of a particular document (e.g., a driver license) with a mobile device. One or more processes can then extract relevant identifying data from the captured image. The extracted identifying data (e.g., the person's name, gender, date of birth, height, weight, etc.) can then be used to automatically populate various fields of an enrollment application, thereby reducing the amount of information that the user has to manually input into the mobile device in order to complete an enrollment process. In some embodiments, a set of internal and/or external checks can be run against the data to ensure that the data is valid, has been read correctly, and is consistent with other data.


In a first exemplary aspect, a computer readable medium is disclosed. In one embodiment, the computer readable medium contains instructions which, when executed by a computer, perform a process comprising: receiving an image of a document; preprocessing the image of the document in preparation for data extraction; extracting a set of identifying data from the image of the document; and automatically populating fields of an enrollment form based at least in part upon the extracted set of identifying data.


Other features and advantages should become apparent from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Various embodiments disclosed herein are described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or exemplary embodiments. These drawings are provided to facilitate the reader's understanding and shall not be considered limiting of the breadth, scope, or applicability of the embodiments. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.



FIG. 1 is a block diagram illustrating an exemplary network topology that may be used for automatic enrollment and identity management according to embodiments.



FIG. 2 is a block diagram illustrating an exemplary server adapted to perform automatic enrollment and identity management according to embodiments.



FIG. 3 is a flow diagram illustrating an exemplary method of performing automatic enrollment and identity management according to embodiments.



FIG. 4 is a block diagram that illustrates an embodiment of a computer/server system upon which an embodiment of the inventive methodology may be implemented.





The various embodiments mentioned above are described in further detail with reference to the aforementioned figures and the following detailed description of exemplary embodiments.


DETAILED DESCRIPTION


FIG. 1 is a block diagram illustrating an exemplary network topology that may be used for automatic enrollment and identity management according to embodiments. As shown by this figure, the network topology can comprise a mobile device 102, such as a cellular phone, smartphone, tablet, laptop, wearable device, etc., that is interfaced with a server 108 over a connected network 106. The mobile device 102 may be configured with an image capture device to capture one or more images 104, which are then used in the enrollment and identity verification process described below. One or more verification sources 110 may also be connected with the network 106 in order to communicate verification data used to verify the identifying data extracted from the image, as will be described further herein. The processing steps for processing the image and extracting the identifying data may be carried out on the mobile device 102, the server 108, or both.



FIG. 2 is a block diagram illustrating an exemplary server 108 adapted to perform automatic enrollment and identity management according to embodiments. As shown by this figure, server 108 may include a power supply module 202, a network interface module 206, one or more processors 204, and memory 208. Various modules, such as a preprocessing module 210, data extraction module 212, data validation module 214, and data verification module 216, can be resident within memory 208.



FIG. 3 is a flow diagram illustrating an exemplary method of performing automatic enrollment and identity management according to embodiments. As depicted in this figure, the exemplary method can comprise five steps according to various embodiments. First, in step 302, an image of an identity document can be captured on a mobile device. Second, in step 304, the image can be preprocessed in order to prepare the image for data extraction. Third, in step 306, key identity-related data can be obtained from the image of the identity document. Fourth, in step 308, the extracted data can be validated in order to assess the quality of the data. Fifth, in step 310, the extracted data can be verified to assess identity risk using independent external data sources. Each of these steps is now discussed in further detail below.
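
By way of illustration only, the five steps above can be organized as a simple processing pipeline, as in the minimal sketch below. The four helper callables (preprocess, extract_fields, validate, verify) are hypothetical names standing in for the implementations described in the remainder of this section.

    def process_identity_document(image, preprocess, extract_fields, validate, verify):
        """Hypothetical orchestration of steps 304-310. The four callables are
        supplied by the implementation; their names here are illustrative only."""
        prepared = preprocess(image)          # step 304: crop, deskew, binarize
        fields = extract_fields(prepared)     # step 306: locate and read key fields
        return {
            "fields": fields,
            "validation": validate(fields),       # step 308: internal consistency checks
            "verification": verify(fields),       # step 310: external data sources
        }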


In one embodiment, the results from the validating and verifying steps are organized into a Mobile Identity Risk Scorecard. This scorecard is a structured information model for presenting the risks associated with a proposed identity to Financial Services or other organizations. The exact contents of the scorecard can vary according to the intended use, but will generally include numeric indicators (0 to 1000), graphical indicators (red-yellow-green), or other patterned indicators which denote aspects of identity risk.
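
By way of illustration only, the structured information model of such a scorecard might resemble the following sketch; the indicator names and the color cut points (700/400) are assumptions for illustration, not fixed by the embodiments.

    from dataclasses import dataclass

    @dataclass
    class RiskIndicator:
        name: str    # e.g., "address_match" (hypothetical indicator name)
        score: int   # numeric indicator on the 0-1000 scale described above

        @property
        def color(self):
            # red-yellow-green graphical indicator derived from the numeric score;
            # the 700/400 cut points are illustrative assumptions
            if self.score >= 700:
                return "green"
            if self.score >= 400:
                return "yellow"
            return "red"

    # a scorecard is then a collection of indicators presented to the organization
    scorecard = [RiskIndicator("name_verified", 850), RiskIndicator("address_match", 320)]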


Document Capture


At block 302, an image of a document is captured. According to some embodiments, an application or browser session initiates the capture sequence on a mobile device or tablet. This can be implemented in the form of a library, embedded in a downloaded mobile application, a hybrid application invoked from within a mobile browser, or an automatic capture utility embedded in a mobile application. The capture sequence can guide the user through obtaining a mobile imaging-ready picture of the document. In some embodiments, one or more characteristics can be optimized before image capture, including, without limitation: focus, corner detection, lighting conditions, reflective properties, and closeness. Also, in some embodiments, feedback can be provided to the user through an interactive set of visual cues, informing the user, for example, of how "well they are doing."
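
For illustration, such pre-capture feedback could be computed from simple image statistics, as in the sketch below (using OpenCV); the variance-of-Laplacian focus measure and the numeric thresholds are assumptions, not taken from the described embodiments.

    import cv2

    def capture_feedback(frame):
        """Return visual-cue hints for a camera preview frame (illustrative sketch)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hints = []
        # variance of the Laplacian is a common focus measure; 100 is an assumed threshold
        if cv2.Laplacian(gray, cv2.CV_64F).var() < 100:
            hints.append("hold steady - image is out of focus")
        mean = gray.mean()  # mean intensity as a crude lighting check; bounds are assumed
        if mean < 60:
            hints.append("more light is needed")
        elif mean > 200:
            hints.append("too bright - reduce glare")
        return hints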


In one form of the above, the consumer takes a picture of the front of their Driver's License. In another form, the MRZ line on a passport is read. In a third form, a full identity document is read, such as a government-issued ID or military ID.


Optionally, the user can also provide one or more “hints”—information which can be used to more accurately determine information on the document. For example, the user might provide their last name, which could be used to more accurately determine the location of the name and address on the document.
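
As a sketch of how such a hint might be applied (the shape of the OCR output and the relative address offsets are assumptions for illustration):

    def locate_fields_with_hint(ocr_words, last_name_hint):
        """ocr_words: list of (text, x, y) tuples from an OCR engine (assumed shape).

        The word matching the user's last-name hint anchors the name field; nearby
        fields such as the address can then be searched relative to that anchor
        (the offsets below are illustrative, not taken from any actual layout).
        """
        hint = last_name_hint.upper()
        for text, x, y in ocr_words:
            if text.upper() == hint:
                return {
                    "name_location": (x, y),
                    "address_search_region": (x, y + 30, x + 400, y + 90),
                }
        return None  # hint not found; fall back to hint-free field location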


In some embodiments, the capture process can also read a barcode present on the identity document and extract key information relating to identity. This information can be used to cross-validate the information obtained during the Data Extraction process.


Pre-Processing


At block 304, the mobile image, once captured on the mobile device, can be preprocessed. Preprocessing can include a number of operations, including cropping, deskewing, and/or dewarping the image. Additionally, shadows can be eliminated, lighting can be corrected, and the overall readability of the document image can be improved through one or more mathematical algorithms. The image can also be converted to a bitonal image in preparation for data extraction. Depending on the specific needs of the document type, multiple versions of the binarized image may be needed to handle document-specific readability issues, such as reverse text. In these cases, the preprocessing engine can create multiple bitonal images which can be used in combination during the data extraction process. In addition, a series of image quality assurance (IQA) test scores can be calculated, indicating the quality of the original image.
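
For illustration, the binarization and a simple quality score might be sketched as follows (using OpenCV); Otsu's method stands in for whatever document-specific binarization a given implementation uses, and the contrast score is a stand-in for a full IQA test suite.

    import cv2

    def preprocess(image):
        """Illustrative preprocessing sketch: grayscale, two bitonal images, one IQA score."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # cropping, deskewing, and dewarping would precede this step; omitted for brevity
        _, bitonal = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # a second, inverted bitonal image helps with reverse (light-on-dark) text
        _, inverted = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        iqa_contrast = float(gray.std())  # crude stand-in for an IQA test score
        return bitonal, inverted, iqa_contrast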


Data Extraction


At block 306, relevant data can be extracted from the image of the document. A set of fields known to be available can be determined based on the document type. For example, for an Illinois driver license, the fields known to be available can include a person's name, address, date of birth, height, weight, document expiration date, and other data.
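
A sketch of such a document-type template follows; the document-type keys and field sets shown are illustrative assumptions.

    # Illustrative mapping from document type to the fields known to be available.
    KNOWN_FIELDS = {
        "IL_DRIVER_LICENSE": ["name", "address", "date_of_birth", "height",
                              "weight", "expiration_date"],
        "PASSPORT_MRZ": ["name", "document_number", "nationality",
                         "date_of_birth", "expiration_date"],
    }

    def fields_for(document_type):
        return KNOWN_FIELDS.get(document_type, [])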


In some embodiments, individual field confidence scores can also be calculated. For example, in one embodiment, confidence scores can be defined in a range from 0 to 1000, with 1000 representing high technical confidence in the readability of that field, and 0 representing low technical confidence. The confidence scores are calculated using a mathematical formula based on the ability to identify the characters included in each field, including such factors as sharpness. These statistical measures can be used when presenting the data to the user (for example, a low-confidence field can be highlighted, requesting that the user confirm the data that has been extracted).
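
A minimal sketch of this highlighting logic follows, using the 0-to-1000 scale described above and the fixed threshold of 500 given as an example below; the function and parameter names are illustrative.

    LOW_CONFIDENCE_THRESHOLD = 500  # example threshold: highlight fields scoring below it

    def fields_to_confirm(field_scores):
        """field_scores maps field name -> confidence on the 0-1000 scale.
        Returns the fields to highlight for user confirmation."""
        return [name for name, score in field_scores.items()
                if score < LOW_CONFIDENCE_THRESHOLD]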


The confidence scores can be used by an application leveraging a Mobile Photo Account Opening and Identity Management solution, for example by applying thresholds to the confidence scores and highlighting those fields with a confidence score below a fixed value (example: highlight fields below 500). If a PDF417 barcode was scanned, the deconstructed string is parsed to identify each of the relevant fields. A rules engine is applied to handle a variety of exceptions in the content of the string, including missing fields, concatenated fields, abbreviated fields, and other state-level and local-level deviations. To date, more than 200 variations have been identified, so the use of a rules engine to organize the parsing of the string is a key component of the overall solution.
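
For illustration, such a parse-plus-rules-engine step might look like the following sketch. The three-letter element identifiers follow the AAMVA convention commonly used in driver license PDF417 barcodes; both the identifier subset and the sample rule are assumptions for illustration, not the full rules engine described above.

    # Illustrative PDF417 payload parse with a tiny rules engine for deviations.
    ELEMENT_IDS = {  # assumed subset of AAMVA-style element identifiers
        "DCS": "last_name",
        "DAC": "first_name",
        "DBB": "date_of_birth",
        "DBA": "expiration_date",
        "DAG": "street",
        "DAJ": "state",
        "DAK": "zip",
    }

    def parse_pdf417(payload, rules=()):
        fields = {}
        for line in payload.splitlines():
            element_id, value = line[:3], line[3:].strip()
            if element_id in ELEMENT_IDS:
                fields[ELEMENT_IDS[element_id]] = value
        for rule in rules:  # handle missing, concatenated, abbreviated fields, etc.
            fields = rule(fields)
        return fields

    # example rule: some issuers concatenate "LAST,FIRST" into a single element
    def split_concatenated_name(fields):
        if "," in fields.get("last_name", ""):
            last, first = fields["last_name"].split(",", 1)
            fields["last_name"], fields["first_name"] = last.strip(), first.strip()
        return fields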


Data Validation


At block 308, the extracted data can be validated using a variety of data validation techniques. As used herein, the term "validation" refers to the evaluation of data using rules and internally-consistent controls available within the mobile imaging process. These techniques can include, without limitation: validation that the information scanned from the PDF417 barcode matches the data obtained during data extraction, if available; comparison of date fields to verify the date format, which may be used to improve the data (for example, it is not possible to have a 13th month) or to validate the document (for example, exceptions would be flagged, such as expiration dates in the past, birthdates less than 16 years ago, or birthdates over 100 years ago); validation that the expiration date is later than today; validation that the date of birth is earlier than today; validation of data fields against known masks (example: a zip code in the United States must be either XXXXX or XXXXX-XXXX; exceptions may be corrected using a USPS database or flagged as low-confidence); and validation of data fields against known minimum and maximum field lengths (example: validation of the state field against the defined set of 2-character abbreviations; exceptions may be corrected using a USPS database or flagged as low-confidence). A myriad of other techniques for validation are possible in accordance with the scope of various embodiments.
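
A minimal sketch of a few of these checks follows; the date format and field names are assumptions, and a full implementation would cover many more rules.

    import re
    from datetime import date, datetime

    ZIP_MASK = re.compile(r"^\d{5}(-\d{4})?$")  # XXXXX or XXXXX-XXXX

    def validate_fields(fields, today=None):
        """Return a list of validation exceptions for the extracted fields (sketch)."""
        today = today or date.today()
        exceptions = []
        try:
            # strptime rejects impossible dates, such as a 13th month
            dob = datetime.strptime(fields["date_of_birth"], "%m/%d/%Y").date()
            age_years = (today - dob).days // 365
            if dob >= today or age_years < 16 or age_years > 100:
                exceptions.append("implausible date of birth")
        except (KeyError, ValueError):
            exceptions.append("missing or unreadable date of birth")
        try:
            expiration = datetime.strptime(fields["expiration_date"], "%m/%d/%Y").date()
            if expiration <= today:
                exceptions.append("document is expired")
        except (KeyError, ValueError):
            exceptions.append("missing or unreadable expiration date")
        if not ZIP_MASK.match(fields.get("zip", "")):
            exceptions.append("zip code fails known mask")
        return exceptions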


Data Verification


At block 310, the extracted data can then be verified using a variety of data verification techniques. As used herein, the term "verification" refers to the evaluation of data using external data sources (110 in FIG. 1) or external verification logic. These techniques may include, without limitation: establishing that a person exists with the extracted name; determining whether a unique individual can be identified given the extracted data; determining whether a social security number can be identified with the given data; attempting to match an address to a United States Postal Service (USPS) database of known addresses; verifying that the individual has lived at the extracted address; verifying that the name matches the name associated with the extracted driver license number; and verifying that the name matches a name associated with an extracted social security number. A myriad of other techniques for verification are possible in accordance with the scope of various embodiments.
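
A sketch of how such external checks might be dispatched follows; the source names and their call signatures are hypothetical placeholders for actual external services.

    def verify_identity(fields, sources):
        """Run external verification checks (sketch).

        `sources` maps a source name to a callable returning True/False; for
        example, a USPS address-database client or an identity-database client.
        All source names and signatures here are hypothetical.
        """
        results = {}
        if "usps_address" in sources:
            results["address_known"] = sources["usps_address"](
                fields.get("street"), fields.get("zip"))
        if "identity_db" in sources:
            results["person_exists"] = sources["identity_db"](
                fields.get("first_name"), fields.get("last_name"),
                fields.get("date_of_birth"))
        return results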


Applications


In one embodiment, a Mobile Photo Account Opening and Identity Management solution may allow a consumer to fund the account once the information from the identity document has been used to create a new account. To do this, the consumer would do one of the following: take a picture of a completed check, depositing it in the new account; take a picture of a blank check, to collect the routing and account numbers from the MICR line to facilitate an ACH transfer; or automatically scan a credit card, using an automatic capture utility, by holding the card in front of the camera of the mobile device, which automatically detects the 15-digit or 16-digit account number on the face of the card. This information is used by the calling application to pre-fill the information needed to complete a credit card funding transaction.
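
One sanity check commonly applied to a detected 15-digit or 16-digit card number is the standard Luhn checksum, sketched below for illustration (the embodiments above do not specify this particular check).

    def luhn_valid(card_number):
        """Standard Luhn checksum for a detected 15- or 16-digit card number."""
        digits = [int(c) for c in card_number if c.isdigit()]
        if len(digits) not in (15, 16):
            return False
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:   # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return checksum % 10 == 0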


Multiple embodiments of potential applications are now provided herein.


In one embodiment, a system automatically scans a credit card, using an automatic capture utility, by holding the card in front of the camera of the mobile device and automatically detecting the 15-digit or 16-digit account number on the face of the card.


A system of Mobile Photo Account Opening and Identity Management, including the following: US Driver's License Capture (front of document), US Driver's License PDF417 scan (on back of document), Preprocessing of image, Data extraction from image, Deconstruction of PDF417 contents using a rules engine, Validation, including comparison of PDF417 contents to extracted data, and Funding.


A system of Mobile Photo Account Opening and Identity Management, including the following: US Driver's License Capture (front of document), Preprocessing of image, Data extraction from image, Validation, Funding.


A system of Mobile Photo Account Opening and Identity Management, including the following: US Driver's License Capture (front of document), US Driver's License PDF417 scan (on back of document), Preprocessing of image, Data extraction from image, Deconstruction of PDF417 contents using a rules engine, Validation, including comparison of PDF417 contents to extracted data.


A system of Mobile Photo Account Opening and Identity Management, including the following: US Driver's License Capture (front of document), Preprocessing of image, Data extraction from image, Validation.


A system of Mobile Photo Account Opening and Identity Management, including the following: Passport Capture (MRZ contents), Preprocessing of image, Data extraction from MRZ, Validation, Funding.


A system of Mobile Photo Account Opening and Identity Management, including the following: Passport Capture (MRZ contents), Preprocessing of image, Data extraction from MRZ, Validation.


A system of Mobile Photo Account Opening and Identity Management, including the following: Government or other identity document capture, Preprocessing of image, Data extraction, Validation, Funding.


A system of Mobile Photo Account Opening and Identity Management, including the following: Government or other identity document capture, Preprocessing of image, Data extraction, Validation.


Computer-Enabled Embodiment


For the purposes of the embodiments described herein, the term "computer" as used throughout this disclosure refers to any computing device, including a mobile phone or a tablet.



FIG. 4 is a block diagram that illustrates an embodiment of a computer/server system 400 upon which an embodiment of the inventive methodology may be implemented. The system 400 includes a computer/server platform 401 including a processor 402 and memory 403 which operate to execute instructions, as known to one of skill in the art. The term “computer-readable storage medium” as used herein refers to any tangible medium, such as a disk or semiconductor memory, that participates in providing instructions to processor 402 for execution. Additionally, the computer platform 401 receives input from a plurality of input devices 404, such as a keyboard, mouse, touch device or verbal command. The computer platform 401 may additionally be connected to a removable storage device 405, such as a portable hard drive, optical media (CD or DVD), disk media or any other tangible medium from which a computer can read executable code. The computer platform may further be connected to network resources 406 which connect to the Internet or other components of a local public or private network. The network resources 406 may provide instructions and data to the computer platform from a remote location on a network 407. The connections to the network resources 406 may be via wireless protocols, such as the 802.11 standards, Bluetooth® or cellular protocols, or via physical transmission media, such as cables or fiber optics. The network resources may include storage devices for storing data and executable instructions at a location separate from the computer platform 401. The computer interacts with a display 408 to output data and other information to a user, as well as to request additional instructions and input from the user. The display 408 may therefore further act as an input device 404 for interacting with a user.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not of limitation. The breadth and scope should not be limited by any of the above-described exemplary embodiments. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. In addition, the described embodiments are not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated example. One of ordinary skill in the art would also understand how alternative functional, logical or physical partitioning and configurations could be utilized to implement the desired features of the described embodiments.


Furthermore, although items, elements or components may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.

Claims
  • 1. A method comprising using at least one hardware processor to: receive an image of a document captured by a camera, wherein the document has a plurality of fields; preprocess the image of the document; receive content of at least one of the plurality of fields in the document via manual entry by a user; use the content to determine a location of the at least one field in the document; automatically determine locations of a plurality of other fields in the image of the document based on the determined location of the at least one field for which the content was manually entered by the user; extract identity data, associated with an individual, from the preprocessed image of the document, based on the determined locations of the plurality of fields; automatically populate fields of an enrollment form for a transaction based at least in part upon the identity data; create a new financial account based on the enrollment form; and fund the new financial account by receiving an image of a check or credit card captured by the camera, extracting at least an account number from the image of the check or credit card, and initiating a transfer of funds to the new financial account from an existing financial account associated with the extracted account number.
  • 2. The method of claim 1, further comprising using the at least one hardware processor to: validate the identity data to assess a quality of the identity data; and verify the identity data to assess an identity risk of the individual to a financial services organization, wherein the identity risk is a risk that the identity data of the individual is unreliable.
  • 3. The method of claim 2, further comprising using the at least one hardware processor to organize results of the validation and verification into a mobile identity risk scorecard, wherein the mobile identity risk scorecard comprises a structured information model that indicates risks associated with the set of identity data and comprises one or more indicators which denote aspects of identity risk.
  • 4. The method of claim 3, wherein the one or more indicators comprise one or more numeric indicators which denote identity risk.
  • 5. The method of claim 3, wherein the one or more indicators comprise one or more graphical indicators which denote identity risk.
  • 6. The method of claim 1, wherein the method is implemented as a software library executed by the at least one hardware processor.
  • 7. The method of claim 6, wherein the software library is embedded in a mobile application.
  • 8. The method of claim 1, wherein the document is a government-issued identity document.
  • 9. The method of claim 8, wherein the government-issued identity document is a driver's license.
  • 10. The method of claim 8, wherein the government-issued identity document is a passport.
  • 11. The method of claim 8, wherein the government-issued identity document is a military identification card.
  • 12. The method of claim 1, wherein preprocessing comprises cropping the image of the document.
  • 13. The method of claim 1, wherein preprocessing comprises de-skewing the image of the document.
  • 14. The method of claim 1, wherein preprocessing comprises de-warping the image of the document.
  • 15. The method of claim 1, wherein preprocessing comprises converting text in the image of the document into reverse text.
  • 16. The method of claim 1, wherein preprocessing comprises creating one or more bi-tonal images from the image of the document.
  • 17. The method of claim 1, wherein extracting the identity data comprises: calculating a confidence score for each of a plurality of fields; and individually highlighting each of the plurality of fields for which the calculated confidence score is below a fixed value.
  • 18. The method of claim 1, wherein extracting the identity data comprises: reading a barcode in the preprocessed image of the document to produce a string; parsing the string to identify one or more barcode fields, while applying a rules engine to handle exceptions in the string; and cross-validating content of the plurality of fields with content of the one or more barcode fields.
  • 19. A system comprising: at least one hardware processor; and one or more software modules that are configured to, when executed by the at least one hardware processor, receive an image of a document captured by a camera, wherein the document has a plurality of fields, preprocess the image of the document, receive content of at least one of the plurality of fields in the document via manual entry by a user, use the content to determine a location of the at least one field in the document, automatically determine locations of a plurality of other fields in the image of the document based on the determined location of the at least one field for which the content was manually entered by the user, extract identity data, associated with an individual, from the preprocessed image of the document, based on the determined locations of the plurality of fields, automatically populate fields of an enrollment form for a transaction based at least in part upon the identity data, create a new financial account based on the enrollment form, and fund the new financial account by receiving an image of a check or credit card captured by the camera, extracting at least an account number from the image of the check or credit card, and initiating a transfer of funds to the new financial account from an existing financial account associated with the extracted account number.
  • 20. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to: receive an image of a document captured by a camera, wherein the document has a plurality of fields; preprocess the image of the document; receive content of at least one of the plurality of fields in the document via manual entry by a user; use the content to determine a location of the at least one field in the document; automatically determine locations of a plurality of other fields in the image of the document based on the determined location of the at least one field for which the content was manually entered by the user; extract identity data, associated with an individual, from the preprocessed image of the document, based on the determined locations of the plurality of fields; automatically populate fields of an enrollment form for a transaction based at least in part upon the identity data; create a new financial account based on the enrollment form; and fund the new financial account by receiving an image of a check or credit card captured by the camera, extracting at least an account number from the image of the check or credit card, and initiating a transfer of funds to the new financial account from an existing financial account associated with the extracted account number.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/064,465, filed on Oct. 6, 2020, which is a continuation of U.S. patent application Ser. No. 16/282,250, filed on Feb. 21, 2019, which is a continuation in part of U.S. patent application Ser. No. 15/077,801, filed on Mar. 22, 2016, which is a continuation of U.S. patent application Ser. No. 13/844,748, filed on Mar. 15, 2013, which is a continuation in part of U.S. patent application Ser. No. 12/778,943, filed on May 12, 2010, which are all hereby incorporated herein by reference as if set forth in full. In addition, U.S. patent application Ser. No. 16/282,250 is a continuation in part of U.S. patent application Ser. No. 14/217,192, filed on Mar. 17, 2014, which claims priority to U.S. Provisional Patent Application No. 61/802,098, filed on Mar. 15, 2013, which are also both hereby incorporated herein by reference as if set forth in full.

Related Publications (1)
Number Date Country
20220076008 A1 Mar 2022 US
Provisional Applications (1)
Number Date Country
61802098 Mar 2013 US
Continuations (2)
Number Date Country
Parent 17064465 Oct 2020 US
Child 17531464 US
Parent 16282250 Feb 2019 US
Child 17064465 US
Continuation in Parts (4)
Number Date Country
Parent 15077801 Mar 2016 US
Child 16282250 US
Parent 14217192 Mar 2014 US
Child 16282250 US
Parent 13844748 Mar 2013 US
Child 16282250 US
Parent 12778943 May 2010 US
Child 15077801 US