The embodiments described herein generally relate to automated document processing and more particularly, to systems and methods for mobile capture and processing of documents for enrollment in an automated clearing house (ACH) transaction.
An Automated Clearing House (ACH) transaction is a type of electronic funds transfer (EFT) that occurs between two separate entities, often known as a receiver and an originator. The receiver is a party which grants authorization to the originator for the transfer of funds to or from the receiver's bank account. The originator may be a business or service provider to which the receiver owes money, such as a utility company, loan provider, etc. For example, a customer may want to have the amount of their monthly phone bill debited directly from their checking account and transferred to the phone company. By setting up an ACH transaction, the customer, or receiver, does not have to remember to mail out a check to the phone company every month. The phone company, or originator, also benefits in that it does not need to physically process a check that arrives in the mail and can instead electronically debit an amount from the customer's checking account on the same day of the month.
ACH transactions also work in the seemingly opposite situation, such as when a company wants to make a recurring payment to an individual's checking account; for example, an employer that wishes to pay an employee twice a month. The employee is still considered the receiver, while the employer is still termed the originator. Again, the benefit to the employee is that they do not have to wait to receive a physical check and then go to a bank to deposit the money, as the money is electronically deposited into their bank account on a particular date. The employer does not have to mail out physical checks and wait for them to be deposited.
However, the receiver is required to authorize any ACH transaction involving their account, regardless of whether the transaction is a debit or credit to the receiver's bank account. Authorizing an ACH transaction requires an oral, written or electronic authorization by the receiver, as well as basic information on the receiver's bank account, such as the account number and the bank's routing number. Additional information may be obtained, including the receiver's name and address, a driver's license number, or other type of personal information that may be used to confirm the identity of the receiver.
In many cases, an ACH transaction requires authorization from the receiver in the form of a blank or voided check which contains the routing number, account number, name and address of the receiver. The receiver must then mail the check to the originator to complete ACH enrollment for a particular transaction. For example, if a receiver wants their phone company to automatically debit the amount of the receiver's monthly phone bill from the receiver's bank account, an ACH transaction is set up by having the receiver send the voided check to the originator for processing. The ACH enrollment process will therefore take several days to complete and require that the receiver mail a check and an enrollment form to the originator. Thus, the ACH enrollment process is both complicated and time consuming.
Systems and methods for mobile enrollment in automated clearing house (ACH) transactions using mobile-captured images of financial documents are provided. Applications running on a mobile device provide for the capture and processing of images of documents needed for enrollment in an ACH transaction, such as a blank check, remittance statement and driver's license. Data from the mobile-captured images that is needed for enrolling in ACH transactions is extracted from the processed images, such as a user's name, address, bank account number and bank routing number. The user can edit the extracted data, select the type of document that is being captured, authorize the creation of an ACH transaction and select an originator of the ACH transaction. The extracted data and originator information are transmitted to a remote server along with the user's authorization so the ACH transaction can be set up between the originator's and receiver's bank accounts.
In a first exemplary embodiment, a computer-readable medium comprises instructions which, when executed by a computer, perform a process of mobile enrollment in an automated clearing house (ACH) transaction, the process comprising: receiving an identity of at least one originator for the automated clearing house (ACH) transaction; receiving an image of a document captured by a mobile device of a receiver of the ACH transaction; correcting at least one aspect of the image to create a corrected image; executing one or more image quality assurance tests on the corrected image to assess the quality of the corrected image; and extracting ACH enrollment data from the corrected image that is needed to enroll a user in the ACH transaction between the originator and the receiver.
In a second exemplary embodiment, a system for mobile enrollment in an automated clearing house (ACH) transaction comprises: a mobile device which captures an image of a document, receives an authorization of a receiver for enrollment in the ACH transaction and receives an originator identity of at least one originator of the ACH transaction; a mobile ACH enrollment server which receives the captured image, authorization and originator identity and extracts ACH enrollment data from the captured image; and an originator server which receives the authorization, originator identity and ACH enrollment data and enrolls the receiver in an ACH transaction with the originator.
Other features and advantages of the present invention should become apparent from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
The various embodiments provided herein are described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments. These drawings are provided to facilitate the reader's understanding of the invention and shall not be considered limiting of the breadth, scope, or applicability of the embodiments. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
The embodiments described herein are directed towards automated document processing and systems and methods for document image processing using mobile devices. Systems and methods are provided for processing a remittance coupon captured using a mobile device. Generally, in some embodiments an original color image of a document is captured using a mobile device and then converted to a bi-tonal image. More specifically, in some embodiments, a color image of a document taken by a mobile device is received and converted into a bi-tonal image of the document that is substantially equivalent in its resolution, size, and quality to document images produced by “standard” scanners.
The term “standard scanners” as used herein includes, but is not limited to, transport scanners, flat-bed scanners, and specialized check-scanners. Some manufacturers of transport scanners include UNISYS®, BancTec®, IBM®, and Canon®. With respect to specialized check-scanners, some models include the TellerScan® TS200 and the Panini® My Vision X. Generally, standard scanners have the ability to scan and produce high quality images, support resolutions from 200 dots per inch to 300 dots per inch (DPI), produce gray-scale and bi-tonal images, and crop an image of a check from a larger full-page size image. Standard scanners for other types of documents may have similar capabilities with even higher resolutions and higher color-depth.
The term “color images” as used herein includes, but is not limited to, images having a color depth of 24 bits per pixel (24 bit/pixel), thereby providing each pixel with one of 16 million possible colors. Each color image is represented by pixels and the dimensions W (width in pixels) and H (height in pixels). An intensity function I maps each pixel in the [W×H] area to its RGB-value. The RGB-value is a triple (R,G,B) that determines the color the pixel represents. Within the triple, each of the R (Red), G (Green) and B (Blue) values are integers between 0 and 255 that determine each respective color's intensity for the pixel.
The term “gray-scale images” as used herein includes, but is not limited to, images having a color depth of 8 bits per pixel (8 bit/pixel), thereby providing each pixel with one of 256 shades of gray. As a person of ordinary skill in the art would appreciate, gray-scale images also include images with color depths of other various bit levels (e.g. 4 bit/pixel or 2 bit/pixel). Each gray-scale image is represented by pixels and the dimensions W (width in pixels) and H (height in pixels). An intensity function I maps each pixel in the [W×H] area onto a range of gray shades. More specifically, each pixel has a value between 0 and 255 which determines that pixel's shade of gray.
Bi-tonal images are similar to gray-scale images in that they are represented by pixels and the dimensions W (width in pixels) and H (height in pixels). However, each pixel within a bi-tonal image has one of two colors: black or white. Accordingly, a bi-tonal image has a color depth of 1 bit per pixel (1 bit/pixel). The similarity transformation, as utilized by some embodiments of the invention, is based on the assumption that there are two images of [W×H] and [W′×H′] dimensions, respectively, and that the dimensions are proportional (i.e. W/W′=H/H′). The term “similarity transformation” may refer to a transformation ST from the [W×H] area onto the [W′×H′] area such that ST maps pixel p=p(x,y) onto pixel p′=p′(x′,y′) with x′=x*W′/W and y′=y*H′/H.
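By way of non-limiting illustration, the similarity transformation described above can be sketched as follows; the function and parameter names are illustrative assumptions only.

    # Illustrative sketch of the similarity transformation ST: maps pixel
    # p = (x, y) in a [W x H] image onto p' = (x', y') in a proportional
    # [W' x H'] image, with x' = x*W'/W and y' = y*H'/H.
    def similarity_transform(x, y, w, h, w_prime, h_prime):
        x_prime = x * w_prime / w
        y_prime = y * h_prime / h
        return x_prime, y_prime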
The systems and methods provided herein advantageously allow a user to capture an image of a remittance coupon, and in some embodiments, a form of payment, such as a check, for automated processing. Typically, a remittance processing service will scan remittance coupons and checks using standard scanners that provide a clear image of the remittance coupon and accompanying check. Often these scanners produce either gray-scale or bi-tonal images that are then used to electronically process the payment. The systems and methods disclosed herein allow an image of remittance coupons, and in some embodiments, checks to be captured using a camera or other imaging device included in or coupled to a mobile device, such as a mobile phone. The systems and methods disclosed herein can test the quality of a mobile image of a document captured using a mobile device, correct some defects in the image, and convert the image to a format that can be processed by a remittance processing service.
Images of the documents taken using the mobile device or downloaded to the mobile device can be transmitted to mobile remittance server 310 via network 330. Network 330 can comprise one or more wireless and/or wired network connections. For example, in some cases, the images can be transmitted over a mobile communication device network, such as a code division multiple access (“CDMA”) telephone network, or other mobile telephone network. Network 330 can also comprise one or more connections across the Internet. Images taken using, for example, a mobile device's camera, can be 24 bit per pixel (24 bit/pixel) JPG images. It will be understood, however, that many other types of images might also be taken using different cameras, mobile devices, etc.
Mobile remittance server 310 can be configured to perform various image processing techniques on images of remittance coupons, checks, or other financial documents captured by the mobile device 340. Mobile remittance server 310 can also be configured to perform various image quality assurance tests on images of remittance coupons or financial documents captured by the mobile device 340 to ensure that the quality of the captured images is sufficient to enable remittance processing to be performed using the images. Examples of various processing techniques and testing techniques that can be implemented on mobile remittance server 310 are described in detail below.
Mobile remittance server 310 can also be configured to communicate with one or more remittance processor servers 315. According to an embodiment, the mobile remittance server 310 can perform processing and testing on images captured by mobile device 340 to prepare the images for processing by a third-party remittance processor and to ensure that the images are of a sufficient quality for the third-party remittance processor to process. The mobile remittance server 310 can send the processed images to the remittance processor 315 via the network 330. In some embodiments, the mobile remittance server 310 can send additional processing parameters and data to the remittance processor 315 with the processed mobile image. This information can include information collected from a user by the mobile device 340. According to an embodiment, the mobile remittance server 310 can be implemented using hardware or a combination of software and hardware.
According to an embodiment, the mobile remittance server 310 can be configured to communicate with one or more bank servers 320 via the network 330. Bank server 320 can be configured to process payments in some embodiments. For example, in some embodiments, mobile device 340 can be used to capture an image of a remittance coupon and an image of a check that can be used to make an electronic payment of the remittance payment. For example, the remittance processor server 315 can be configured to receive an image of a remittance coupon and an image of a check from the mobile remittance server 310. The remittance processor 315 can electronically deposit the check into a bank account associated with the entity for which the electronic remittance is being performed. According to some embodiments, the bank server 320 and the remittance processor 315 can be implemented on the same server or same set of servers.
In other embodiments, the remittance processor 315 can handle payment. For example, the remittance processor can be operated by or on behalf of an entity associated with the coupon of
When the user elects to pay a bill, the camera application can be launched as illustrated in
Once the image is captured and corrected, and the data is extracted and adjusted, then the image, data, and any required credential information, such as username, password, and phone or device identifier, can be transmitted to the mobile remittance server 310 for further processing. This further processing is described in detail with respect to the remaining figures in the description below.
First,
An image of a remittance coupon is captured using a camera or other optical device of the mobile device 340 (step 405). For example, a user of the mobile device 340 can click a button or otherwise activate a camera or other optical device of mobile device 340 to cause the camera or other optical device to capture an image of a remittance coupon.
According to an embodiment, the mobile device 340 can also be configured to optionally receive additional information from the user (step 410). For example, in some embodiments, the mobile device can be configured to prompt the user to enter data, such as a payment amount that represents an amount of the payment that the user wishes to make. The payment amount can differ from the account balance or minimum payment amount shown on the remittance coupon. For example, the remittance coupon might show an account balance of $1000 and a minimum payment amount of $100, but the user might enter a payment amount of $400.
According to an embodiment, the mobile device 340 can be configured to perform some preprocessing on the mobile image (step 415). For example, the mobile device 340 can be configured to convert the mobile image from a color image to a grayscale image or to a bi-tonal image. Other preprocessing steps can also be performed on the mobile device. For example, the mobile device can be configured to identify the corners of the remittance coupon and to perform geometric corrections and/or warping corrections to correct defects in the mobile image. Examples of various types of preprocessing that can be performed on the mobile device 340 are described in detail below.
Mobile device 340 can then transmit the mobile image of the remittance coupon and any additional data provided by the user to mobile remittance server 310.
Mobile remittance server 310 can receive the mobile image and any data provided by the user from the mobile device 340 via the network 330 (step 505). The mobile remittance server 310 can then perform various processing on the image to prepare the image for image quality assurance testing and for submission to a remittance processor 315 (step 510). Various processing steps can be performed by the mobile remittance server 310. Examples of the types of processing that can be performed by mobile remittance server 310 are described in detail below.
Mobile remittance server 310 can perform image quality assurance testing on the mobile image to determine whether there are any issues with the quality of the mobile image that might prevent the remittance provider from being able to process the image of the remittance coupon (step 515). Various mobile quality assurance testing techniques that can be performed by mobile remittance server 310 are described in detail below.
According to an embodiment, mobile remittance server 310 can be configured to report the results of the image quality assurance testing to the mobile device 340 (step 520). This can be useful for informing a user of mobile device 340 that an image that the user captured of a remittance coupon passed quality assurance testing, and thus, should be of sufficient quality that the mobile image can be processed by a remittance processor server 315. According to an embodiment, the mobile remittance server 310 can be configured to provide detailed feedback messages to the mobile device 340 if a mobile image fails quality assurance testing. Mobile device 340 can be configured to display this feedback information to a user of the device to inform the user what problems were found with the mobile image of the remittance coupon and to provide the user with the opportunity to retake the image in an attempt to correct the problems identified.
If the mobile image passes the image quality assurance testing, the mobile remittance server 310 can submit the mobile image plus any processing parameters received from the mobile device 340 to the remittance processor server 315 for processing (step 525). According to an embodiment, mobile remittance server 310 can include a remittance processing server configured to perform step 525, including the methods illustrated in
Image Processing
Mobile device 340 and mobile remittance server 310 can be configured to perform various processing on a mobile image to correct various defects in the image quality that could prevent the remittance processor 315 from being able to process the remittance due to poor image quality.
For example, an out of focus image of a remittance coupon or check, in embodiments where the mobile device can also be used to capture check images for payment processing, can be impossible to read and electronically process. For example, optical character recognition of the contents of the imaged document based on a blurry mobile image could result in incorrect payment information being extracted from the document. As a result, the wrong account could be credited for the payment or an incorrect payment amount could be credited. This may be especially true if a check and a payment coupon are both difficult to read or the scan quality is poor.
Many different factors may affect the quality of an image and the ability of a mobile device based image capture and processing system to process that image. Optical defects, such as out-of-focus images (as discussed above), unequal contrast or brightness, or other optical defects, can make it difficult to process an image of a document, e.g., a check, payment coupon, deposit slip, etc. The quality of an image can also be affected by the document position on a surface when photographed or the angle at which the document was photographed. This affects the image quality by causing the document to appear, for example, right side up, upside down, skewed, etc. Further, if a document is imaged while upside-down it might be impossible or nearly impossible for the system to determine the information contained on the document.
In some cases, the type of surface might affect the final image. For example, if a document is sitting on a rough surface when an image is taken, that rough surface might show through. In some cases the surface of the document might be rough because of the surface below it. Additionally, the rough surface may cause shadows or other problems that might be picked up by the camera. These problems might make it difficult or impossible to read the information contained on the document.
Lighting may also affect the quality of an image, for example, the location of a light source and light source distortions. Using a light source above a document can light the document in a way that improves the image quality, while a light source to the side of the document might produce an image that is more difficult to process. Lighting from the side can, for example, cause shadows or other lighting distortions. The type of light might also be a factor, for example, sun, electric bulb, fluorescent lighting, etc. If the lighting is too bright, the document can be washed out in the image. On the other hand, if the lighting is too dark, it might be difficult to read the image.
The quality of the image can also be affected by document features, such as the type of document, the fonts used, the colors selected, etc. For example, an image of a white document with black lettering may be easier to process than a dark colored document with black letters. Image quality may also be affected by the mobile device used. Some mobile camera phones, for example, might have cameras that save an image using a greater number of megapixels. Other mobile camera phones might have an auto-focus feature, automatic flash, etc. Generally, these features may improve an image when compared to mobile devices that do not include such features.
A document image taken using a mobile device might have one or more of the defects discussed above. These defects or others may cause low accuracy when processing the image, for example, when processing one or more of the fields on a document. Accordingly, in some embodiments, systems and methods using a mobile device to create images of documents can include the ability to identify poor quality images. If the quality of an image is determined to be poor, a user may be prompted to take another image.
Detecting an Out of Focus Image
Mobile device 340 and mobile remittance server 310 can be configured to detect an out of focus image. A variety of metrics might be used to detect an out-of-focus image. For example, a focus measure can be employed. The focus measure can be the ratio of the maximum video gradient between adjacent pixels measured over the entire image and normalized with respect to an image's gray level dynamic range and “pixel pitch”. The pixel pitch may be the distance between dots on the image. In some embodiments a focus score might be used to determine if an image is adequately focused. If an image is not adequately focused, a user might be prompted to take another image.
According to an embodiment, the mobile device 340 can be configured to detect whether an image is out of focus using the techniques disclosed herein. In an embodiment, the mobile remittance server 310 can be configured to detect out of focus images. In some embodiments, the mobile remittance server 310 can be configured to detect out of focus images and reject these images before performing mobile image quality assurance testing on the image. In other embodiments, detecting an out of focus image can be part of the mobile image quality assurance testing.
According to an embodiment, an image focus score can be calculated as a function of maximum video gradient, gray level dynamic range and pixel pitch. For example, in one embodiment:
Image Focus Score=(Maximum Video Gradient)*(Gray Level Dynamic Range)*(Pixel Pitch) (eq. 1)
The video gradient may be the absolute value of the gray level for a first pixel “i” minus the gray level for a second pixel “i+1”. For example:
Video Gradient=ABS[(Gray level for pixel “i”)−(Gray level for pixel “i+1”)] (eq. 2)
The gray level dynamic range may be the average of the “n” lightest pixels minus the average of the “n” darkest pixels. For example:
Gray Level Dynamic Range=[AVE(“N” lightest pixels)−AVE(“N” darkest pixels)] (eq. 3)
In equation 3 above, N can be defined as the number of pixels used to determine the average darkest and lightest pixel gray levels in the image. In some embodiments, N can be chosen to be 64. Accordingly, in some embodiments, the 64 darkest pixels are averaged together and the 64 lightest pixels are averaged together to compute the gray level dynamic range value.
The pixel pitch can be the reciprocal of the image resolution, for example, in dots per inch.
Pixel Pitch=[1/Image Resolution] (eq. 4)
In other words, as defined above, the pixel pitch is the distance between dots on the image because the Image Resolution is the reciprocal of the distance between dots on an image.
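By way of non-limiting illustration, a minimal sketch of equations 1 through 4 is provided below, assuming an 8 bit/pixel gray-scale image held in a NumPy array; the horizontal-only gradient, the function and parameter names, and the remaining implementation details are assumptions, while the default N=64 follows the description above.

    import numpy as np

    def image_focus_score(gray, dpi, n=64):
        # Illustrative focus score per eqs. 1-4: (maximum video gradient) *
        # (gray level dynamic range) * (pixel pitch).
        gray = gray.astype(np.int32)
        # Eq. 2: video gradient = absolute gray-level difference between
        # adjacent pixels "i" and "i+1"; the maximum is taken over the image.
        max_gradient = np.abs(np.diff(gray, axis=1)).max()
        # Eq. 3: gray level dynamic range = average of the N lightest pixels
        # minus the average of the N darkest pixels (N = 64 in some embodiments).
        flat = np.sort(gray.ravel())
        dynamic_range = flat[-n:].mean() - flat[:n].mean()
        # Eq. 4: pixel pitch = reciprocal of the image resolution (dots per inch).
        pixel_pitch = 1.0 / dpi
        return max_gradient * dynamic_range * pixel_pitch  # eq. 1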
Detecting and Correcting Perspective Distortion
The dotted frame 2504 comprises the image frame obtained by the camera. The image frame is sized h×w, as illustrated in the figure. Generally, it can be preferable to contain an entire document within the h×w frame of a single image. It will be understood, however, that some documents are too large or include too many pages for this to be preferable or even feasible.
In some embodiments, an image can be processed, or preprocessed, to automatically find and “lift” the quadrangle 2502. In other words, the document that forms quadrangle 2502 can be separated from the rest of the image so that the document alone can be processed. By separating quadrangle 2502 from any background in an image, it can then be further processed.
The quadrangle 2502 can be mapped onto a rectangular bitmap in order to remove or decrease the perspective distortion. Additionally, image sharpening can be used to improve the out-of-focus score of the image. The resolution of the image can then be increased and the image converted to a black-and-white image. In some cases, a black-and-white image can have a higher recognition rate when processed using an automated document processing system in accordance with the systems and methods described herein.
An image that is bi-tonal, e.g., black-and-white, can be used in some systems. Such systems can require an image that is at least 200 dots per inch resolution. Accordingly, a color image taken using a mobile device may need to be of high enough quality so that the image can successfully be converted from, for example, a 24 bit per pixel (24 bit/pixel) RGB image to a bi-tonal image. The image can be sized as if the document, e.g., check, payment coupon, etc., was scanned at 200 dots per inch.
Image Correction Module
According to an embodiment, the image correction module can also be configured to detect an out of focus image using the technique described above and to reject the mobile image if the image focus score for the image falls below a predetermined threshold without attempting to perform other image correction techniques on the image. According to an embodiment, the image correction module can send a message to the mobile device 340 indicating that the mobile image was too out of focus to be used and requesting that the user retake the image.
The image correction module can be configured to first identify the corners of a coupon or other document within a mobile image (step 605). One technique that can be used to identify the corners of the remittance coupon in a color image is illustrated in
The image correction module can be configured to then build a perspective transformation for the remittance coupon (step 610). As can be seen in
A geometrical transformation of the document subimage can be performed using the perspective transformation built in step 610 (step 615). The geometrical transformation corrects the perspective distortion present in the document subimage. An example of results of geometrical transformation can be seen in
A “dewarping” operation can also be performed on the document subimage (step 620). An example of a warping of a document in a mobile image is provided in
According to an embodiment, the document subimage can also be binarized (step 625). A binarization operation can generate a bi-tonal image with a color depth of 1 bit per pixel (1 bit/pixel). Some automated processing systems, such as some Remote Deposit systems, require bi-tonal images as inputs. A technique for generating a bi-tonal image is described below with respect to
Once the image has been binarized, the code line of the remittance coupon can be identified and read (step 630). As described above, many remittance coupons include a code line that comprises computer-readable text that can be used to encode account-related information that can be used to reconcile a payment received with the account for which the payment is being made. Code line 205 of
Often, a standard optical character recognition font, the OCR-A font, is used for printing the characters comprising the code line. The OCR-A font is a fixed-width font where the characters are typically spaced 0.10 inches apart. Because the OCR-A font is a standardized fixed-width font, the image correction module can use this information to determine a scaling factor for the image of the remittance coupon. The scaling factor to be used can vary from image to image, because the scaling is dependent upon the position of the camera or other image capture device relative to the document being imaged and can also be dependent upon optical characteristics of the device used to capture the image of the document.
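By way of non-limiting illustration, a scaling factor could be estimated from the fixed 0.10 inch OCR-A character spacing as sketched below; the function name, parameters, and the 200 DPI target are illustrative assumptions.

    def coupon_scaling_factor(observed_char_spacing_px, target_dpi=200):
        # OCR-A code line characters are spaced 0.10 inches apart, so at the
        # target resolution the expected spacing is 0.10 * target_dpi pixels.
        expected_spacing_px = 0.10 * target_dpi
        # The ratio of expected to observed spacing gives the factor by which
        # the coupon subimage should be scaled to reach the target resolution.
        return expected_spacing_px / observed_char_spacing_px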
Once the scaling factor for the image has been determined, a final geometrical transformation of the document image can be performed using the scaling factor (step 635). This step is similar to that in step 615, except the scaling factor is used to create a geometrically altered subimage that represents the actual size of the coupon at a given resolution. According to an embodiment, the dimensions of the geometrically corrected image produced by step 635 are identical to the dimensions of an image produced by a flat bed scanner at the same resolution.
During step 635, other geometrical corrections can also be made, such as correcting orientation of the coupon subimage. The orientation of the coupon subimage can be determined based on the orientation of the text of the code line.
Once the final geometrical transformation has been applied, a final adaptive binarization can be performed on the grayscale image generated in step 635 (step 640). The bi-tonal image output by this step will have the correct dimensions for the remittance coupon because the bi-tonal image is generated using the geometrically corrected image generated in step 635.
According to an embodiment, the image correction module can be configured to use several different binarization parameters to generate two or more bi-tonal images of the remittance coupon. The use of multiple images can improve data capture results. The use of multiple bi-tonal images to improve data captures results is described in greater detail below.
Detecting Document within Color Mobile Image
Referring now to
The method of
A color reduction operation is then applied to the color “icon” image at step 906. During the operation, the overall color of the image can be reduced, while the contrast between the document and its background can be preserved within the image. Specifically, the color “icon” image of operation 904 can be converted into a gray “icon” image (also known as a gray-scale “icon” image) having the same size. An example color depth reduction process is described in further detail with respect to
The corners of the document are then identified within the gray “icon” image (step 910). As previously noted above with respect to
Binarization
A binarization operation generates a bi-tonal image with a color depth of 1 bit per pixel (1 bit/pixel). In the case of documents, such as checks and deposit coupons, a bi-tonal image is required for processing by automated systems, such as Remote Deposit systems. In addition, many image processing engines require such an image as input. The method of
A gray-scale image of the document is received at step 1602, and the method 1600 chooses a pixel p(x,y) within the image at step 1604. In
Subsequent to the conversion of the pixel at either step 1610 or operation 1612, the next pixel is chosen at step 1614, and operation 1606 is repeated until all the gray-scale pixels (8 bit/pixel) are converted to bi-tonal pixels (1 bit/pixel). When no more pixels remain to be converted (step 1618), the bi-tonal image of the document is outputted at step 1620.
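By way of non-limiting illustration, a binarization of the kind described in steps 1602 through 1620 might be sketched as follows; the local-average thresholding rule, the window size, and the offset are assumptions, since the exact per-pixel decision rule of steps 1606 through 1612 is described with respect to the referenced figure.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def binarize(gray, window=15, offset=10):
        # Illustrative conversion of an 8 bit/pixel gray-scale image into a
        # 1 bit/pixel bi-tonal image using a local-average threshold
        # (an assumed stand-in for the per-pixel rule of steps 1606-1612).
        local_mean = uniform_filter(gray.astype(np.float32), size=window)
        # Pixels darker than the local mean minus an offset become black (0);
        # all other pixels become white (1).
        return (gray.astype(np.float32) >= local_mean - offset).astype(np.uint8)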
Conversion of Color Image to Icon Image
Referring now to
C(p′)=ave{C(q): q in S×S-window of p}, where (eq. 5)
Small “dark” objects within the image can then be eliminated (step 1204). Examples of such small “dark” objects include, but are not limited to, machine-printed characters and hand-printed characters inside the document. Hence, assuming operation 1204 receives image I′ from step 1202, step 1204 creates a new color image I″ referred to as an “icon” with width W″ set to a fixed small value and height H″ set to W″*(H/W), thereby preserving the original aspect ratio of image I. In some embodiments, the transformation formula can be described as the following:
C(p″)=max{C(q′):q′ in S′×S′-window of p′}, where (eq. 6)
The reason for using the “maximum” rather than “average” is to make the “icon” whiter (white pixels have a RGB-value of (255,255,255)).
In the next operation 1206, the high local contrast of “small” objects, such as lines, text, and handwriting on a document, is suppressed, while the other object edges within the “icon” are preserved. Often, these other object edges are bold. In various embodiments of the invention, multiple dilation and erosion operations, also known as morphological image transformations, are utilized in the suppression of the high local contrast of “small” objects. Such morphological image transformations are commonly known and used by those of ordinary skill in the art. The sequence and amount of dilation and erosion operations used is determined experimentally. Subsequent to the suppression operation 1206, a color “icon” image is outputted at operation 1208.
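By way of non-limiting illustration, the conversion of a color image into a color “icon” image (steps 1202 through 1208, equations 5 and 6) might be sketched as follows; the window sizes, icon width, and the single dilation/erosion pass are illustrative assumptions, since the text notes these are chosen experimentally.

    import numpy as np
    from scipy import ndimage

    def color_to_icon(rgb, icon_width=64, s=4, s_prime=2, morph=3):
        # Eq. 5 (step 1202): average colors over S x S windows, preserving the
        # contrast between the document and its background.
        averaged = ndimage.uniform_filter(rgb.astype(np.float32), size=(s, s, 1))
        # Eq. 6 (step 1204): take the maximum over S' x S' windows so small
        # "dark" objects (machine- or hand-printed characters) are eliminated
        # and the icon becomes whiter.
        maxed = ndimage.maximum_filter(averaged, size=(s_prime, s_prime, 1))
        # Step 1206: dilation followed by erosion (morphological transformations)
        # to suppress the high local contrast of small objects while preserving
        # bold object edges such as the document outline.
        dilated = ndimage.grey_dilation(maxed, size=(morph, morph, 1))
        closed = ndimage.grey_erosion(dilated, size=(morph, morph, 1))
        # Step 1208: subsample to the icon size, preserving the aspect ratio
        # (height H'' = W'' * (H / W)).
        h, w, _ = rgb.shape
        icon_height = max(1, int(icon_width * h / w))
        ys = np.linspace(0, h - 1, icon_height).astype(int)
        xs = np.linspace(0, w - 1, icon_width).astype(int)
        return closed[np.ix_(ys, xs)].astype(np.uint8)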
Color Depth Reduction
Referring now to
Then, at step 1304, the “central part” of the icon, which is usually the center most grid element, has its color averaged. Next, the average color of the remaining parts of the icon is computed at step 1306. More specifically, the grid elements “outside” the “central part” of the “icon” have their colors averaged. Usually, in instances where there is a central grid element, e.g. 3×3 grid, the “outside” of the “central part” comprises all the grid elements other than the central grid element.
Subsequently, a linear transformation for the RGB-space is determined at step 1308. The linear transformation is defined such that it maps the average color of the “central part” computed during operation 1304 to white, i.e. 255, while the average color of the “outside” computed during operation 1306 maps to black, i.e. 0. All remaining colors are linearly mapped to a shade of gray. This linear transformation, once determined, is used at operation 1310 to transform all RGB-values from the color “icon” to a gray-scale “icon” image, which is then outputted at operation 1312. Within particular embodiments, the resulting gray “icon” image, also referred to as a gray-scale “icon” image, maximizes the contrast between the document background, assuming that the document is located close to the center of the image and the background.
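By way of non-limiting illustration, the color depth reduction of steps 1302 through 1312 might be sketched as follows; collapsing RGB to a single intensity before applying the linear transformation, the 3×3 grid split, and the function name are illustrative assumptions.

    import numpy as np

    def icon_to_gray(icon_rgb):
        # Steps 1302-1306: split the icon into a 3x3 grid, average the color of
        # the central grid element ("central part") and of the remaining grid
        # elements ("outside").
        gray = icon_rgb.astype(np.float32).mean(axis=2)  # simplification: RGB collapsed to intensity
        h, w = gray.shape
        cy0, cy1, cx0, cx1 = h // 3, 2 * h // 3, w // 3, 2 * w // 3
        central_avg = gray[cy0:cy1, cx0:cx1].mean()
        outside = np.ones((h, w), dtype=bool)
        outside[cy0:cy1, cx0:cx1] = False
        outside_avg = gray[outside].mean()
        # Steps 1308-1310: linear transformation mapping the central average to
        # white (255) and the outside average to black (0); all remaining values
        # are mapped linearly to shades of gray and clipped to [0, 255].
        scale = 255.0 / max(central_avg - outside_avg, 1e-6)
        return np.clip((gray - outside_avg) * scale, 0, 255).astype(np.uint8)  # step 1312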
Referring now to
In accordance with one embodiment, this goal is achieved by first looking for the “voting” points in the half of the “icon” that corresponds with the current side of interest. For instance, if the current side of interest is the document's top side, the upper part of the “icon” (Y<H/2) is examined while the bottom part of the “icon” (Y≥H/2) is ignored.
Within the selected half of the “icon,” the intensity gradient (contrast) of each pixel in the correct direction is computed. This is accomplished in some embodiments by considering a small window centered in the pixel and, then, breaking the window into an expected “background” half where the gray intensity is smaller, i.e. where it is supposed to be darker, and into an expected “doc” half where the gray intensity is higher, i.e. where it is supposed to be whiter. There is a break line between the two halves, either horizontal or vertical depending on the side of the document sought to be found. Next the average gray intensity in each half-window is computed, resulting in an average image intensity for the “background” and an average image intensity of the “doc.” The intensity gradient of the pixel is calculated by subtracting the average image intensity for the “background” from the average image intensity for the “doc.”
Eventually, those pixels with a sufficient gray intensity gradient in the correct direction are marked as “voting” points for the selected side. The gray intensity gradient threshold used to determine sufficiency is established experimentally.
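By way of non-limiting illustration, the detection of “voting” points for the top side might be sketched as follows; the window size and gradient threshold are illustrative assumptions, since the text notes the threshold is established experimentally.

    import numpy as np

    def top_side_voting_points(gray_icon, window=5, threshold=20):
        # For the top side, only the upper half of the icon (Y < H/2) is
        # examined. For each pixel, a small window is split by a horizontal
        # break line into an expected "background" half (above, darker) and an
        # expected "doc" half (below, whiter).
        h, w = gray_icon.shape
        half = window // 2
        points = []
        for y in range(half, h // 2):
            for x in range(half, w - half):
                win = gray_icon[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
                background_avg = win[:half, :].mean()   # expected background half
                doc_avg = win[half + 1:, :].mean()      # expected document half
                # Pixels with a sufficient gradient in the correct direction are
                # marked as "voting" points for the top side.
                if doc_avg - background_avg >= threshold:
                    points.append((x, y))
        return points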
Continuing with method 1400, candidate sides, i.e. line segments that potentially represent the sides of the document, i.e. left, top, right, and bottom sides, are found. In order to do so, some embodiments find all subsets within the “voting” points determined in step 1402 that could be approximated by a straight line segment (linear approximation). In many embodiments, the threshold for linear approximation is established experimentally. This subset of lines is defined as the side “candidates.” As an assurance that the set of side candidates is never empty, the gray “icon” image's corresponding top, bottom, left, and right sides are also added to the set.
Next, step 1406 chooses the best candidate for each side of the document from the set of candidates selected in operation 1404, thereby defining the position of the document within the gray “icon” image. In accordance with some embodiments, the following process is used in choosing the best candidate for each side of the document:
The process starts with selecting a quadruple of line segments {L, T, R, B}, where L is one of the candidates for the left side of the document, T is one of the candidates for the top side of the document, R is one of the candidates for the right side of the document, and B is one of the candidates for the bottom side of the document. The process then measures the following characteristics for the quadruple currently selected.
The amount of “voting” points is approximated and measured for all line segments for all four sides. This amount value is based on the assumption that the document's sides are linear and there is a significant color contrast along them. The larger values of this characteristic increase the overall quadruple rank.
The sum of all intensity gradients over all voting points of all line segments is measured. This sum value is also based on the assumption that the document's sides are linear and there is a significant color contrast along them. Again, the larger values of this characteristic increase the overall quadruple rank.
The total length of the segments is measured. This length value is based on the assumption that the document occupies a large portion of the image. Again, the larger values of this characteristic increase the overall quadruple rank.
The maximum of gaps in each corner is measured. For example, the gap in the left/top corner is defined by the distance between the uppermost point in the L-segment and the leftmost point in the T-segment. This maximum value is based on how well the side-candidates suit the assumption that the document's shape is quadrangle. The smaller values of this characteristic increase the overall quadruple rank.
The maximum of the two angles between opposite segments, i.e. between L and R, and between T and B, is measured. This maximum value is based on how well the side-candidates suit the assumption that the document's shape is close to a parallelogram. The smaller values of this characteristic increase the overall quadruple rank.
The deviation of the quadruple's aspect ratio from the “ideal” document aspect ratio is measured. This characteristic is applicable to documents with a known aspect ratio, e.g. checks. If the aspect ratio is unknown, this characteristic should be excluded from computing the quadruple's rank. The quadruple's aspect ratio is computed as follows:
This aspect ratio value is based on the assumption that the document's shape is somewhat preserved during the perspective transformation. The smaller values of this characteristic increase the overall quadruple rank.
Following the measurement of the characteristics of the quadruple noted above, the quadruple characteristics are combined into a single value, called the quadruple rank, using a weighted linear combination. Positive weights are assigned for the amount of “voting” points, the sum of all intensity gradients, and the total length of the segments. Negative weights are assigned for the maximum gaps in each corner, the maximum of the two angles between opposite segments, and the deviation of the quadruple's aspect ratio. The exact values of each of the weights are established experimentally.
The operations set forth above are repeated for all possible combinations of side candidates, eventually leading to the “best” quadruple, which is the quadruple with the highest rank. The document's corners are defined as intersections of the “best” quadruple's sides, i.e. the best side candidates.
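By way of non-limiting illustration, the quadruple rank might be computed as the weighted linear combination sketched below; the specific weight values are illustrative assumptions, as the text states that the exact weights are established experimentally.

    def quadruple_rank(num_voting_points, gradient_sum, total_length,
                       max_corner_gap, max_opposite_angle, aspect_ratio_deviation,
                       weights=(1.0, 0.5, 1.0, -2.0, -1.0, -1.5)):
        # Positive weights: amount of voting points, sum of intensity gradients,
        # total segment length. Negative weights: maximum corner gap, maximum
        # angle between opposite segments, aspect-ratio deviation.
        w_votes, w_grad, w_len, w_gap, w_angle, w_aspect = weights
        return (w_votes * num_voting_points +
                w_grad * gradient_sum +
                w_len * total_length +
                w_gap * max_corner_gap +
                w_angle * max_opposite_angle +
                w_aspect * aspect_ratio_deviation)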
In step 1408, the corners of the document are defined using the intersections of the best side candidates. A person of ordinary skill in the art would appreciate that these corners can then be located on the original mobile image by transforming the corner locations found on the “icon” using the similarity transformation previously mentioned. Method 1400 concludes at step 1410 where the locations of the corners defined in step 1408 are output.
Geometric Correction
In instances where the document is in landscape orientation (90 or 270 degrees), as illustrated by the check in
According to some embodiments, a mathematical model of projective transformations is built and converts the distorted image into a rectangle-shaped image of predefined size. According to an embodiment, this step corresponds to step 610 of
Continuing with reference to the method of
The other path of operations begins at step 1502, where the positions of the document's corners within the gray “icon” image are received. Based on the location of the corners, the orientation of the document is determined and the orientation is corrected (step 1506). In some embodiments, this operation uses the corner locations to measure the aspect ratio of the document within the original image. Subsequently, a middle-point between each set of corners can be found, wherein each set of corners corresponds to one of the four sides of the depicted document, resulting in the left (L), top (T), right (R), and bottom (B) middle-points (step 1506). The distance between the L to R middle-points and the T to B middle-points are then compared to determine which of the two pairs has the larger distance. This provides step 1506 with the orientation of the document.
In some instances, the correct orientation of the document depends on the type of document that is detected. For example, as illustrated in
If it is determined in step 1506 that an orientation correction is necessary, then the corners of the document are shifted in a loop, clock-wise in some embodiments and counter-clockwise in other embodiments.
At step 1510, the projective transformation is built to map the image of the document to a predefined target image size of width of W pixels and height of H pixels. In some embodiments, the projective transformation maps the corners A, B, C, and D of the document as follows: corner A to (0,0), corner B to (W,0), corner C to (W,H), and corner D to (0,H). Algorithms for building projective transformation are commonly known and used amongst those of ordinary skill in the art.
At step 1516, the projective transformation created during step 1514 is applied to the mobile image in gray-scale as outputted as a result of step 1512. The projective transformation as applied to the gray-scale image of step 1512 results in all the pixels within the quadrangle ABCD depicted in the gray-scale image mapping to a geometrically corrected, gray-scale image of the document alone.
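By way of non-limiting illustration, steps 1510 and 1516 might be sketched with a general-purpose image library as follows; the use of OpenCV and the function name are assumptions, as any method for building and applying a projective transformation may be used.

    import numpy as np
    import cv2

    def geometrically_correct(gray_image, corners, target_w, target_h):
        # Step 1510: build the projective transformation mapping corner A to
        # (0,0), B to (W,0), C to (W,H), and D to (0,H).
        src = np.array(corners, dtype=np.float32)  # [A, B, C, D] as (x, y) pairs
        dst = np.array([[0, 0], [target_w, 0],
                        [target_w, target_h], [0, target_h]], dtype=np.float32)
        transform = cv2.getPerspectiveTransform(src, dst)
        # Step 1516: apply the transformation to the gray-scale mobile image so
        # that the pixels inside quadrangle ABCD map to a geometrically
        # corrected, gray-scale image of the document alone.
        return cv2.warpPerspective(gray_image, transform, (target_w, target_h))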
Correcting Landscape Orientation
Upon receiving the bi-tonal image of the check at operation 1702, the MICR-line at the bottom of the bi-tonal check image is read at operation 1704 and an MICR-confidence value is generated. This MICR-confidence value (MC1) is compared to a threshold value T at operation 1706 to determine whether the check is right-side-up. If MC1>T at operation 1708, then the bi-tonal image of the check is right side up and is outputted at operation 1710.
However, if MC1≤T at operation 1708, then the image is rotated 180 degrees at operation 1712, the MICR-line at the bottom read again, and a new MICR-confidence value generated (MC2). The rotation of the image by 180 degrees is done by methods commonly known in the art. The MICR-confidence value after rotation (MC2) is compared to the previous MICR-confidence value (MC1) plus a Delta at operation 1714 to determine if the check is now right-side-up. If MC2>MC1+Delta at operation 1716, the rotated bi-tonal image has the check right-side-up and, thus, the rotated image is outputted at operation 1718. Otherwise, if MC2≤MC1+Delta at operation 1716, the original bi-tonal image of the check is right-side-up and outputted at operation 1710. Delta is a positive value selected experimentally that reflects a higher a priori probability of the document initially being right-side-up than upside-down.
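By way of non-limiting illustration, operations 1702 through 1718 might be sketched as follows; the read_micr helper (assumed to return the recognized MICR text and a confidence value), the threshold T, and Delta are illustrative assumptions.

    import numpy as np

    def correct_upside_down(bitonal_check, read_micr, threshold, delta):
        # Operation 1704: read the MICR line and obtain confidence MC1.
        _, mc1 = read_micr(bitonal_check)
        if mc1 > threshold:                   # operations 1706-1710: already right-side-up
            return bitonal_check
        rotated = np.rot90(bitonal_check, 2)  # operation 1712: rotate 180 degrees
        _, mc2 = read_micr(rotated)           # re-read the MICR line, obtain MC2
        # Operations 1714-1718: keep the rotated image only if its confidence
        # exceeds MC1 + Delta, reflecting the higher a priori probability that
        # the document was captured right-side-up.
        return rotated if mc2 > mc1 + delta else bitonal_check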
Size Correction
Since many image processing engines are sensitive to image size, it is crucial that the size of the document image be corrected before it can be properly processed. For example, a form identification engine may rely on the document size as an important characteristic for identifying the type of document that is being processed. Generally, for financial documents such as checks, the image size should be equivalent to the image size produced by a standard scanner running at 200 DPI.
In addition, where the document is a check, during the geometric correction operation of some embodiments of the invention, the geometrically corrected predefined image size is 1200×560 pixels (see, e.g.,
Referring now to
SF=AW200/AW, where (eq. 7)
The scaling factor is used at operation 1810 to determine whether the bi-tonal image of the check requires size correction. If the scaling factor SF is determined to be less than or equal to 1.0+Delta, then the most recent versions of the check's bi-tonal image and the check's gray-scale image are output at operation 1812. Delta defines the system's tolerance to wrong image size.
If, however, the scaling factor SF is determined to be higher than 1.0+Delta, then at operation 1814 the new dimensions of the check are computed as follows:
AR=HS/WS (eq. 8)
W′=W*SF (eq. 9)
H′=AR*W′, where (eq. 10)
Subsequent to re-computing the new dimensions, operation 1814 repeats geometrical correction and binarization using the newly dimensioned check image. Following the repeated operations, operation 1812 outputs the resulting bi-tonal image of the check and gray-scale image of the check.
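By way of non-limiting illustration, the size correction decision of equations 7 through 10 might be sketched as follows; the meanings assumed for AW200, AW, HS, and WS (quantities whose formal definitions accompany equation 7) and the default Delta are illustrative assumptions.

    def corrected_check_dimensions(aw200, aw, hs, ws, delta=0.05):
        # Eq. 7: scaling factor SF = AW200 / AW.
        sf = aw200 / aw
        # If SF <= 1.0 + Delta, the current image size is within tolerance and
        # no size correction is required (operation 1812).
        if sf <= 1.0 + delta:
            return None
        # Operation 1814: compute the new check dimensions.
        ar = hs / ws          # eq. 8: aspect ratio AR = HS / WS
        w_new = ws * sf       # eq. 9: W' = W * SF (W assumed to be the current width WS)
        h_new = ar * w_new    # eq. 10: H' = AR * W'
        return int(round(w_new)), int(round(h_new))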
Image Quality Assurance
Once the mobile remittance server 310 has processed a mobile image (see step 510 of the method illustrated in
The processing parameters 2107 can include various information that the MDIPE 2100 can use to determine which tests to run on the mobile image 2105. For example, the processing parameters 2107 can identify the type of device used to capture the mobile image 2105, the type of mobile application that will be used to process the mobile image if the mobile image passes the IQA testing, or both. The MDIPE 2100 can use this information to determine which tests to select from test data store 2132 and which test parameters to select from test parameter data store 2134. For example, if a mobile image is being tested for a mobile deposit application that expects an image of a check, a specific set of tests related to assessing the image quality for a mobile image of a check can be selected, such as an MICR-line test, or a test for whether an image is blurry, etc. The MDIPE 2100 can also select test parameters from test parameters data store 2134 that are appropriate for the type of image to be processed, or for the type of mobile device that was used to capture the image, or both. In an embodiment, different parameters can be selected for different mobile phones that are appropriate for the type of phone used to capture the mobile image. For example, some mobile phones might not include an autofocus feature.
The preprocessing module 2110 can process the mobile document image to extract a document snippet that includes the portion of the mobile document that actually contains the document to be processed. This portion of the mobile document image is also referred to herein as the document subimage. The preprocessing module 2110 can also perform other processing on the document snippet, such as converting the image to a grayscale or bi-tonal document snippet, geometric correction of the document subimage to remove view distortion, etc. Different tests can require different types of preprocessing to be performed, and the preprocessing module 2110 can produce mobile document snippets from a mobile document image depending on the types of mobile IQA tests to be executed on the mobile document image.
The test execution module 2130 receives the selected tests and test parameters 2112 and the preprocessed document snippet (or snippets) 2120 from the preprocessing module 2110. The test execution module 2130 executes the selected tests on the document snippet generated by the preprocessing module 2110. The test execution module 2130 also uses the test parameters provided by the preprocessing module 2110 when executing the test on the document snippet. The selected tests can be a series of one or more tests to be executed on the document snippets to determine whether the mobile document image exhibits geometrical or other defects.
The test execution module 2130 executes each selected test to obtain a test result value for that test. The test execution module 2130 then compares that test result value to a threshold value associated with the test. If the test result value is equal to or exceeds the threshold, then the mobile image has passed the test. Otherwise, if the test result value is less than the threshold, the mobile document image has failed the test. According to some embodiments, the test execution module 2130 can store the test result values for the tests performed in test results data store 2138.
According to an embodiment, the test threshold for a test can be stored in the test parameters data store 2134 and can be fetched by the preprocessing module 2110 and included with the test parameters 2112 provided to the test execution module 2130. According to an embodiment, different thresholds can be associated with a test based on the processing parameters 2107 received by the preprocessing module 2110. For example, a lower threshold might be used for an image focus IQA test for images captured by camera phones that do not include an autofocus feature, while a higher threshold might be used for the image focus IQA test for images captured by camera phones that do include an autofocus feature.
According to an embodiment, a test can be flagged as “affects overall status.” These tests are also referred to here as “critical” tests. If a mobile image fails a critical test, the MDIPE 2100 rejects the image and can provide detailed information to the mobile device user explaining why the image was not of a high enough quality for the mobile application and that provides guidance for retaking the image to correct the defects that caused the mobile document image to fail the test, in the event that the defect can be corrected by retaking the image.
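By way of non-limiting illustration, the behavior of the test execution module 2130 described above might be sketched as follows; the test object interface, the parameter field names, and the representation of the “affects overall status” flag are illustrative assumptions.

    def run_iqa_tests(snippet, selected_tests, test_parameters):
        # Execute each selected test on the document snippet and compare the
        # test result value to the threshold supplied with the test parameters.
        results = []
        overall_pass = True
        for test in selected_tests:
            params = test_parameters[test.name]
            value = test.run(snippet, params)
            passed = value >= params["threshold"]   # pass if the value meets or exceeds the threshold
            results.append({"test": test.name, "value": value, "passed": passed})
            # A failed "critical" test (flagged as affecting overall status)
            # causes the mobile image to be rejected.
            if not passed and params.get("affects_overall_status", False):
                overall_pass = False
        return overall_pass, results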
According to an embodiment, the test result messages provided by the MDIPE 2100 can be provided to the mobile application that requested the MDIPE 2100 perform the quality assurance testing on the mobile document image, and the mobile application can display the test results to the user of the mobile device. In certain embodiments, the mobile application can display this information on the mobile device shortly after the user takes the mobile document image to allow the user to retake the image if the image is found to have defects that affect the overall status of the image. In some embodiments, where the MDIPE 2100 is implemented at least in part on the mobile device, the MDIPE 2100 can include a user interface module that is configured to display the test results message on a screen of the mobile device.
The mobile image 2105 captured by a mobile device is received (step 2205). The mobile image 2105 can also be accompanied by one or more processing parameters 2107.
As described above, the MDIPE 2100 can be implemented on the mobile device, and the mobile image can be provided by a camera that is part of or coupled to the mobile device. In some embodiments, the MDIPE 2100 can also be implemented at least in part on a remote server, and the mobile image 2105 and the processing parameters 2107 can be transmitted to the remote server, e.g., via a wireless interface included in the mobile device.
Once the mobile image 2105 and the processing parameters 2107 have been received, the mobile image is processed to generate a document snippet or snippets (step 2210). For example, preprocessing module 2110 of MDIPE 2100 can be used to perform various preprocessing on the mobile image. One part of this preprocessing includes identifying a document subimage in the mobile image. The subimage is the portion of the mobile document image that includes the document. The preprocessing module 2110 can also perform various preprocessing on the document subimage to produce what is referred to herein as a “snippet.” For example, some tests can require that a grayscale image of the subimage be created. The preprocessing module 2110 can create a grayscale snippet that represents a grayscale version of the document subimage. In another example, some tests can require that a bitonal image of the subimage be created. The preprocessing module 2110 can create a bitonal snippet that represents a bitonal version of the document subimage. In some embodiments, the MDIPE 2100 can generate multiple different snippets based on the types of tests to be performed on the mobile document image.
After processing the mobile document image to generate a snippet, the MDIPE 2100 then selects one or more tests to be performed on the snippet or snippets (step 2215). In an embodiment, the tests to be performed can be selected from test data store 2132. In an embodiment, the MDIPE 2100 selects the one or more tests based on the processing parameters 2107 that were received with the mobile image 2105.
After selecting the tests from the test data store 2132, test parameters for each of the tests can be selected from the test parameters data store 2134 (step 2220). According to an embodiment, the test parameters can be used to configure or customize the tests to be performed. For example, different test parameters can be used to configure the tests to be more or less sensitive to certain attributes of the mobile image. In an embodiment, the test parameters can be selected based on the processing parameters 2107 received with the mobile image 2105. As described above, these processing parameters can include information, such as the type of mobile device used to capture the mobile image as well as the type of mobile application that is going to be used to process the mobile image if the mobile image passes scrutiny of the mobile image IQA system.
Once the tests and the test parameters have been retrieved and provided to the test execution module 2130, a test is selected from the tests to be executed, and the test is executed on the document snippet to produce a test result value (step 2225). In some embodiments, more than one document snippet may be used by a test. For example, a test can be performed that determines whether images of the front and back of a check are actually images of the same document. The test engine can receive both an image of the front of the check and an image of the back of the check from the preprocessing module 2110 and use both of these images when executing the test.
The test result value obtained by executing the test on the snippet or snippets of the mobile document is then compared to a test threshold to determine whether the mobile image passes or fails the test (step 2230), and a determination is made whether the test results exceed the threshold (step 2235). According to an embodiment, the test threshold can be configured or customized based on the processing parameters 2107 received with the mobile image. For example, the test for image blurriness can be configured to use a higher threshold for passing if the image is to be used for a mobile deposit application where the MICR-line information needs to be recognized and read from the document image. In contrast, the test for blurriness can be configured to use a lower threshold for passing the mobile image for some mobile applications. For example, the threshold for image quality may be lowered if a business card is being imaged rather than a check. The test parameters can be adjusted to minimize the false reject rate, the false accept rate, the number of images marked for review, or any combination of these.
The “affects overall status” flag of a test can also be configured based on the processing parameters 2107. For example, a test can be marked as not affecting the overall status for some types of mobile applications or documents being processed, or both, while the same test can be marked as affecting overall status for other types of mobile applications or documents being processed, or both. For example, a test that identifies the MICR-line of a check can be marked as “affecting overall status” so that if the MICR-line on the check cannot be identified in the image, the image will fail the test and the image will be rejected. In another example, if the mobile application is merely configured to receive different types of mobile document images, the mobile application can perform a MICR-line test on the mobile document image in an attempt to determine whether the document that was imaged was a check. In this example, the MICR-line may not be present, because a document other than a check may have been imaged. Therefore, the MICR-line test may be marked as not “affecting overall status,” and if a document fails the test, the transaction might be flagged for review but not marked as failed.
Since different camera phones can have cameras with very different optical characteristics, image quality may vary significantly between them. As a result, some image quality defects may be avoidable on some camera phones and unavoidable on others, and therefore require different configurations. To mitigate this configuration problem, the mobile IQA tests can be automatically configured for different camera phones to use different tests, different thresholds for the tests, or both. For example, as described above, a lower threshold can be used for an image focus IQA test on mobile document images captured using a camera phone that does not include an autofocus feature than would be used for camera phones that do include an autofocus feature, because it can be more difficult for a user to obtain as clear an image using a device that lacks autofocus.
In certain embodiments, if the test result exceeds or equals the threshold, the image passes the test and a determination is made whether there are more tests to be executed (step 2240). If there are more tests to be executed, the next test can be selected and executed on the document snippet (step 2225). Otherwise, if there are no more tests to be executed, the test results, the test messages, or both are output by the MDIPE 2100 (step 2270). There can be one or more test messages included with the results if the mobile image failed one or more of the tests that were executed on the image.
In such embodiments, if the test result is less than the threshold, then the mobile image has failed the test. A determination is made whether the test affects the overall status (step 2250). If the test affects the overall status of the image, detailed test result messages that explain why the image failed the test can be loaded from the test message data store 2136 (step 2255) and the test result messages can be added to the test results (step 2260). The test results and test messages can then be output by the MDIPE 2100 (step 2270).
Alternatively, if the test does not affect the overall status, the test results can be noted and the transaction can be flagged for review (step 2265). By flagging the transaction for review, a user of a mobile device can be presented with information indicating that a mobile image has failed at least some of the tests that were performed on the image, but the image may still be of sufficient quality for use with the mobile application. The user can then be presented with the option to retake the image or to send the mobile image to the mobile application for processing. According to some embodiments, detailed test messages can be loaded from the test message data store 2136 for all tests that fail and can be included with the test results, even if the test is not one that affects the overall status of the mobile image.
According to some embodiments, the mobile IQA tests can also be configured to avoid repeated rejections of a mobile document. For example, if an image of a check is rejected by a contrast test as having too low a contrast, and the user retakes and resubmits the image via the mobile application, the processing parameters 2107 received with the resubmitted mobile image can include a flag indicating that the image is being resubmitted. In some embodiments, the thresholds associated with the tests that the image failed can be lowered to see if the image can pass the tests with a lower threshold. In some embodiments, the thresholds are only lowered for non-critical tests. According to an embodiment, the processing parameters 2107 can also include a count of the number of times that an image has been resubmitted, and the thresholds for a test are only lowered after the image has been resubmitted a predetermined number of times.
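By way of illustration only, the following Python sketch shows one way such resubmission handling could be implemented; the test names, the relaxation factor, and the resubmission-count cutoff are illustrative assumptions rather than values prescribed by the embodiments described herein.

# Illustrative sketch: relax thresholds for non-critical tests on resubmission.
# Test names, relaxation factor, and resubmit cutoff are assumptions.
DEFAULT_THRESHOLDS = {"contrast": 700, "focus": 600, "shadow": 650}
CRITICAL_TESTS = {"focus"}          # critical tests keep their original thresholds
RELAXATION_FACTOR = 0.9             # lower non-critical thresholds by 10%
MAX_RESUBMITS_BEFORE_RELAXING = 1   # relax only after the first resubmission

def thresholds_for_submission(resubmit_count: int) -> dict:
    """Return per-test thresholds, relaxing non-critical tests on resubmission."""
    thresholds = dict(DEFAULT_THRESHOLDS)
    if resubmit_count > MAX_RESUBMITS_BEFORE_RELAXING:
        for name in thresholds:
            if name not in CRITICAL_TESTS:
                thresholds[name] = int(thresholds[name] * RELAXATION_FACTOR)
    return thresholds

# Example: a second resubmission lowers the contrast and shadow thresholds only.
print(thresholds_for_submission(resubmit_count=0))
print(thresholds_for_submission(resubmit_count=2))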
The method illustrated in
The mobile image 2105 captured by a mobile device is received (step 2305). In an embodiment, images of the front and back sides of the check can be provided. The mobile image 2105 can also be accompanied by one or more processing parameters 2107. Check data can also be optionally received (step 2307). The check data can be optionally provided by the user at the time that the check is captured. This check data can include various information from the check, such as the check amount, check number, routing information from the face of the check, or other information, or a combination thereof. In some embodiments, a mobile deposit application requests this information from a user of the mobile device, allows the user to capture an image of a check or to select an image of a check that has already been captured, or both, and the mobile deposit application provides the check image, the check data, and other processing parameters to the MDIPE 2100.
Once the mobile image 2105, the processing parameters 2107, and the check data have been received, the mobile image is processed to generate a document snippet or snippets (step 2310). As described above, the preprocessing can produce one or more document snippets that include the portion of the mobile image in which the document was located. The document snippets can also have additional processing performed on them, such as conversion to a bitonal image or to grayscale, depending on the types of testing to be performed.
After processing the mobile document image to generate a snippet, the MDIPE 2100 then selects one or more tests to be performed on the snippet or snippets (step 2315). In an embodiment, the tests to be performed can be selected from test data store 2132. In an embodiment, the MDIPE 2100 selects the one or more tests based on the processing parameters 2107 that were received with the mobile image 2105.
After selecting the tests from the test data store 2132, test parameters for each of the tests can be selected from the test parameters data store 2134 (step 2320). As described above, the test parameters can be used to configure or customize the tests to be performed.
Once the tests and the test parameters have been retrieved and provided to the test execution module 2130, a test is selected from the tests to be executed, and the test is executed on the document snippet to produce a test result value (step 2325). In some embodiments, more than one document snippet can be used by a test. For example, a test can be performed that determines whether images of the front and back of a check are actually images of the same document. The test engine can receive both an image of the front of the check and an image of the back of the check from the preprocessing module 2110 and use both of these images when executing the test. Step 2325 can be repeated until each of the tests to be executed is performed.
The test result values obtained by executing each test on the snippet or snippets of the mobile document are then compared to the test threshold associated with that test to determine whether the mobile image passes or fails the test (step 2330), and a determination can be made whether the mobile image of the check passed the tests, indicating that the image quality of the mobile image is acceptable (step 2335). If the mobile document image of the check passed, the MDIPE 2100 then executes one or more Check 21 tests on the snippets (step 2340).
The test result values obtained by executing the Check 21 test or tests on the snippet or snippets of the mobile document are then compared to the test threshold associated with each test to determine whether the mobile image passes or fails the test (step 2345), and a determination can be made whether the mobile image of the check passed the tests, indicating that the image quality of the mobile image is acceptable under the requirements imposed by the Check 21 Act (step 2350). Step 2345 can be repeated until each of the Check 21 tests is performed. If the mobile document image of the check passed, the MDIPE 2100 passes the snippet or snippets to the mobile application for further processing (step 2370).
If the mobile document image of the check failed one or more mobile IQA or Check 21 tests, detailed test result messages that explain why the image failed can be loaded from the test message data store 2136 (step 2355) and the test result messages can be added to the test results (step 2360). The test results and test messages are then output to the mobile application where they can be displayed to the user (step 2365). The user can use this information to retake the image of the check in an attempt to remedy some or all of the factors that caused the image of the check to be rejected.
Mobile IQA Tests
In some embodiments, a mobile IQA test generates a score for the subimage on a scale that ranges from 0-1000, where “0” indicates a subimage having very poor quality while a score of “1000” indicates that the image is perfect according to the test criteria.
Some tests use a geometrically corrected snippet of the subimage to correct view distortion. The preprocessing module 2110 can generate the geometrically corrected snippet.
Image Focus IQA Test
According to some embodiments, an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document. For example, the blurriness may have been the result of motion blur caused by the user moving the camera while taking the image. The test result messages can suggest that the user hold the camera steadier when retaking the image.
Mobile devices can include cameras that have significantly different optical characteristics. For example, a mobile device that includes a camera that has an auto-focus feature can generally produce much sharper images than a camera that does not include such a feature. Therefore, the average image focus score for different cameras can vary widely. As a result, the test threshold can be set differently for different types of mobile devices. As described above, the processing parameters 2107 received by MDIPE 2100 can include information that identifies the type of mobile device and/or the camera characteristics of the camera used with the device in order to determine what the threshold should be set to for the Image Focus IQA Test.
An in-focus mobile document image, such as that illustrated in
According to an embodiment, the focus of the image can be tested using various techniques, and the results can then be normalized to the 0-1000 scale used by the MDIPE 2100.
In an embodiment, the Image Focus Score can be computed using the following technique: the focus measure is a ratio of the maximum video gradient between adjacent pixels, measured over the entire image and normalized with respect to the image's gray level dynamic range and “pixel pitch.” According to an embodiment, the image focus score can be calculated using the following equation described in The Financial Services Technology Consortium, “Image Defect Metrics,” IMAGE QUALITY & USABILITY ASSURANCE: Phase 1 Project, Draft Version 1.0.4, May 2, 2005, which is hereby incorporated by reference:
Image Focus Score=(Maximum Video Gradient)/[(Gray Level Dynamic Range)*(Pixel Pitch)]
where Video Gradient=ABS [(Gray level for pixel “i”)−(Gray level for pixel “i+1”)]
Gray Level Dynamic Range=[(Average of the “N” Lightest Pixels)−(Average of the “N” Darkest Pixels)]
Pixel Pitch=[1/Image Resolution (in dpi)]
The variable N is equal to the number of pixels used to determine the average darkest and lightest pixel gray levels in the image. According to one embodiment, the value of N is set to 64. Therefore, the 64 lightest pixels in the image are averaged together and the 64 darkest pixels in the image are averaged together to compute the “Gray Level Dynamic Range” value. The resulting image focus score value is then multiplied by 10 in order to bring the value into the 0-1000 range used for the test results in the mobile IQA system.
The Image Focus Score determined using these techniques can be compared to an image focus threshold to determine whether the image is sufficiently in focus. As described above, the threshold used for each test may be determined at least in part by the processing parameters 2107 provided to MDIPE 2100. The Image Focus score can be normalized to the 0-1000 range used by the mobile IQA tests and compared to a threshold value associated with the test. If the Image Focus Score meets or exceeds this threshold, then the mobile document image is sufficiently focused for use with the mobile application.
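The following Python sketch illustrates the Image Focus Score computation described above; the use of horizontally adjacent pixels only for the video gradient and the final clamping into the 0-1000 range are simplifying assumptions, not requirements of the embodiments.

import numpy as np

def image_focus_score(gray: np.ndarray, dpi: int = 300, n: int = 64) -> int:
    """Sketch of the Image Focus Score.

    gray: 2-D array of grayscale pixel values (0-255).
    dpi:  image resolution, used to compute the pixel pitch (1/dpi).
    n:    number of lightest/darkest pixels averaged for the dynamic range.
    """
    gray = gray.astype(np.float64)

    # Maximum video gradient between horizontally adjacent pixels over the image.
    max_gradient = np.abs(np.diff(gray, axis=1)).max()

    # Gray level dynamic range: average of the N lightest minus the N darkest pixels.
    flat = np.sort(gray, axis=None)
    dynamic_range = flat[-n:].mean() - flat[:n].mean()
    if dynamic_range <= 0:
        return 0  # degenerate image (e.g., a single flat color)

    pixel_pitch = 1.0 / dpi
    score = max_gradient / (dynamic_range * pixel_pitch)

    # Scale by 10 and clamp into the 0-1000 range used by the mobile IQA tests.
    return int(max(0, min(1000, score * 10)))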
Shadow Test
According to some embodiments, a Shadow Test can be executed on a mobile image to determine whether a portion of the image is covered by a shadow. A shadow can render parts of a mobile image unreadable. This test helps to identify whether a shadow covers at least a portion of a subimage in a mobile document image, and to reject images if the shadow has too much of an effect on the image quality, so that the user can attempt to take a better quality image of the document where the shadow is not present.
According to an embodiment, the presence of a shadow is measured by examining boundaries in the mobile image that intersect two or more sides of the document subimage.
The presence of shadows can be measured using the area and contrast. If a shadow covers the entire image, the result is merely an image that is darker overall. Such shadows generally do not worsen image quality significantly. Furthermore, shadows having a very small surface area also do not generally worsen image quality very much.
According to an embodiment, the Image Shadowed Score can be calculated using the following formula to determine the score for a grayscale snippet:
Image Shadowed score=1000 if no shadows were found, otherwise
Image Shadowed score=1000−min (Score(S[i])), where Score(S[i]) is computed for every shadow S[i] detected on the grayscale snippet
In an embodiment, the Score for each shadow can be computed using the following formula:
Given shadow S[i] in the grayscale image, the score Score(S[i]) can be calculated as Score(S[i])=2000*min(A[i]/A, 1−A[i]/A)*(Contrast/256), where A[i] is the area covered by shadow S[i] (in pixels), A is the entire grayscale snippet area (in pixels), and Contrast is the difference in brightness inside and outside of the shadow (the maximum value is 256).
Due to the normalization factor 2000, Score(S[i]) fits into the 0-1000 range. It tends to assume larger values for shadows that occupy about ½ of the snippet area and have high contrast; Score(S[i]) is typically within the 100-200 range. In an embodiment, the Image Shadowed score calculated by this test falls within a range of 0-1000, as do the test results from the other tests. According to an embodiment, a typical mobile document image with few shadows will have a test result value in a range from 800-900. If no shadows are found on the document subimage, then the score will equal 1000. The Image Shadowed score can then be compared to a threshold associated with the test to determine whether the image is of sufficiently high quality for use with the mobile application requesting the assessment of the quality of the mobile document image.
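The following Python sketch illustrates the Image Shadowed score formula above; detection of the shadows themselves is assumed to be performed elsewhere, and the example input values are illustrative only.

def shadow_score(shadows, snippet_area):
    """Sketch of the Image Shadowed score.

    shadows: list of (area_in_pixels, contrast) tuples, one per detected shadow,
             where contrast is the brightness difference inside vs. outside the
             shadow (maximum 256). Shadow detection is outside this sketch.
    snippet_area: total area of the grayscale snippet in pixels.
    """
    if not shadows:
        return 1000  # no shadows found

    def score(area, contrast):
        # Largest for shadows covering about half the snippet with high contrast.
        coverage = area / snippet_area
        return 2000 * min(coverage, 1 - coverage) * (contrast / 256)

    return int(1000 - min(score(a, c) for a, c in shadows))

# Example: one small, low-contrast shadow barely lowers the score.
print(shadow_score([(5000, 60)], snippet_area=400_000))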
Contrast Test
According to some embodiments, a Contrast Test can be executed on a mobile image to determine whether the contrast of the image is sufficient for processing. One cause of poor contrast is images taken with insufficient light. A grayscale snippet generated from such a mobile document image can have low contrast, and if the grayscale snippet is converted to a binary image, the binarization module can erroneously white-out part of the foreground, such as the MICR-line of a check, the code line of a remittance coupon, or an amount, or black-out part of the background. The Contrast Test measures the contrast, rejects poor quality images, and instructs the user to retake the picture under brighter light to improve the contrast of the resulting snippets.
A histogram of the grayscale values in the grayscale snippet can then be built (step 2815). In an embodiment, the x-axis of the histogram is divided into bins that each represent a “color” value for a pixel in the grayscale image, and the y-axis of the histogram represents the frequency of that color value in the grayscale image. According to an embodiment, the grayscale image has pixel values in a range from 0-255, and the histogram is built by iterating through each value in this range and counting the number of pixels in the grayscale image having this value. For example, the frequency of the “200” bin would include the pixels having a gray value of 200.
A median black value can then be determined for the grayscale snippet (step 2820) and a median white value is also determined for the grayscale snippet (step 2825). The median black and white values can be determined using the histogram that was built from the grayscale snippet. According to an embodiment, the median black value can be determined by iterating through each bin, starting with the “0” bin that represents pure black and moving progressively toward the “255” bin which represents pure white. Once a bin is found that includes at least 20% of the pixels included in the image, the median black value is set to be the color value associated with that bin. According to an embodiment, the median white value can be determined by iterating through each bin, starting with the “255” bin which represents pure white and moving progressively toward the “0” bin which represents pure black. Once a bin is found that includes at least 20% of the pixels included in the image, the median white value is set to be the color value associated with that bin.
Once the median black and white values have been determined, the difference between the median black and white values can then be calculated (step 2830). The difference can then be normalized to fall within the 0-1000 test range used in the mobile IQA tests executed by the MDIPE 2100 (step 2835). The test result value can then be returned (step 2840). As described above, the test result value is provided to the test execution module 2130 where the test result value can be compared to a threshold value associated with the test. See for example,
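The following Python sketch illustrates the Contrast Test scoring described above; interpreting the 20% rule as a cumulative pixel count from each end of the histogram, and the division by 255 for normalization, are assumptions made for the sketch.

import numpy as np

def contrast_score(gray: np.ndarray, pixel_fraction: float = 0.20) -> int:
    """Sketch of the Contrast Test on a grayscale snippet (values 0-255)."""
    gray = gray.astype(np.uint8)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cutoff = pixel_fraction * gray.size

    # Median black: walk up from bin 0 (pure black) until 20% of pixels are covered.
    cumulative = 0
    for value in range(256):
        cumulative += hist[value]
        if cumulative >= cutoff:
            median_black = value
            break

    # Median white: walk down from bin 255 (pure white) until 20% are covered.
    cumulative = 0
    for value in range(255, -1, -1):
        cumulative += hist[value]
        if cumulative >= cutoff:
            median_white = value
            break

    # Normalize the black/white difference into the 0-1000 IQA range.
    return int(1000 * max(0, median_white - median_black) / 255)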
Planar Skew Test
According to some embodiments, a Planar Skew Test can be executed on a mobile image to determine whether the document subimage is skewed within the mobile image. See
According to an embodiment, document skew can be measured by first identifying the corners of the document subimage using one of the techniques described above. The corners of the document subimage can be identified by the preprocessing module 2110 when performing projective transformations on the subimage, such as that described above with respect to
View Skew Test
“View skew” denotes a deviation of the camera-to-document direction from the perpendicular at the time the mobile document image was captured. Unlike planar skew, view skew can result in the document subimage having perspective distortion.
According to an embodiment, the view skew of a mobile document can be determined using the following formula:
View Skew score=1000−F(A,B,C,D), where
F(A,B,C,D)=500*max(abs(|AB|−|CD|)/(|DA|+|BC|), abs(|BC|−|DA|)/(|AB|+|CD|)), where A, B, C, and D are the corners of the document subimage.
One can see that the View Skew score fits into the [0, 1000] range used in the other mobile IQA tests described herein. In this example, the View Skew score is equal to 1000 when |AB|=|CD| and |BC|=|DA|, which is the case when there is no perspective distortion in the mobile document image and the camera-to-document direction was exactly perpendicular. The View Skew score can then be compared to a threshold value associated with the test to determine whether the image quality is sufficiently high for use with the mobile application.
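The following Python sketch illustrates the View Skew score computation, assuming A, B, C, and D are the (x, y) corner points of the document subimage in order around its perimeter.

from math import dist  # Euclidean distance between two points (Python 3.8+)

def view_skew_score(a, b, c, d) -> int:
    """Sketch of the View Skew score for corner points a, b, c, d."""
    ab, bc, cd, da = dist(a, b), dist(b, c), dist(c, d), dist(d, a)
    f = 500 * max(abs(ab - cd) / (da + bc), abs(bc - da) / (ab + cd))
    return int(1000 - f)

# A perfect rectangle has no perspective distortion and scores 1000.
print(view_skew_score((0, 0), (200, 0), (200, 100), (0, 100)))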
Cut Corner Test
Depending upon how carefully the user framed a document when capturing a mobile image, it is possible that one or more corners of the document can be cut off in the mobile document image. As a result, important information can be lost from the document. For example, if the lower left-hand corner of a check is cut off in the mobile image, a portion of the MICR-line of a check or the code line of a remittance coupon might be cut off, resulting in incomplete data recognition.
A corner of the document is selected (step 3220). In an embodiment, the four corners are received as an array of x and y coordinates C[I], where I is equal to the values 1-4 representing the four corners of the document.
A determination is made whether the selected corner of the document is within the mobile document image (step 3225). The x and y coordinates of the selected corner should be at or between the edges of the image. According to an embodiment, the determination whether a corner is within the mobile document image can be made using the following criteria: (1) C[I].x>=0 & C[I].x<=Width, where Width=the width of the mobile document image and C[I].x=the x-coordinate of the selected corner; and (2) C[I].y>=0 & C[I].y<=Height, where Height=the height of the mobile document image and C[I].y=the y-coordinate of the selected corner.
If the selected corner fails to satisfy the criteria above, the corner is not within the mobile image and has been cut-off. A corner cut-off measurement is determined for the corner (step 3230). The corner cut-off measurement represents the relative distance to the edge of the mobile document image. According to an embodiment, the corner cut-off measurement can be determined using the following:
An overall maximum cut-off value is also updated using the normalized cut-off measure of the corner (step 3235). According to an embodiment, the following formula can be used to update the maximum cut-off value: MaxCutOff=max(MaxCutOff, CutOff[I]). Once the maximum cut-off value is determined, a determination is made whether more corners are to be tested (step 3225).
If the selected corner satisfies the criteria above, the corner is within the mobile document image and is not cut-off. A determination is then made whether there are additional corners to be tested (step 3225). If there are more corners to be processed, a next corner to be tested is selected (step 3215). Otherwise, if there are no more corners to be tested, the test result value for the test is computed using the maximum corner cut-off measurement. In an embodiment, the test result value V=1000−MaxCutOff. One can see that V lies within the [0-1000] range used for the mobile IQA tests and is equal to 1000 when all the corners are inside the mobile image, and decreases as one or more corners move outside of the mobile image.
The test result value is then returned (step 3245). As described above, the test result value is provided to the test execution module 2130 where the test result value can be compared to a threshold value associated with the test. If the test result value falls below the threshold associated with the test, detailed test result messages can be retrieved from the test result message data store 2136 and provided to the user to indicate why the test failed and what might be done to remedy the failure. The user may simply need to retake the image with the document corners within the frame.
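The following Python sketch illustrates the Cut Corner Test; because the corner cut-off measurement formula is not reproduced above, the per-corner normalization used here (distance outside the frame relative to the image dimensions, scaled to 0-1000) is an assumption.

def cut_corner_score(corners, width, height) -> int:
    """Sketch of the Cut Corner Test.

    corners: list of four (x, y) corner coordinates of the document subimage.
    width, height: dimensions of the mobile document image.
    """
    max_cut_off = 0
    for x, y in corners:
        inside = 0 <= x <= width and 0 <= y <= height
        if not inside:
            # How far outside the frame the corner lies, relative to image size.
            dx = max(0 - x, x - width, 0) / width
            dy = max(0 - y, y - height, 0) / height
            cut_off = int(1000 * min(1.0, max(dx, dy)))
            max_cut_off = max(max_cut_off, cut_off)
    return 1000 - max_cut_off

# All corners inside the frame: the test returns 1000.
print(cut_corner_score([(10, 10), (600, 12), (598, 300), (8, 302)], 640, 480))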
Cut-Side Test
Depending upon how carefully the user framed a document when capturing a mobile image, it is possible that one or more sides of the document can be cut off in the mobile document image. As a result, important information can be lost from the document. For example, if the bottom of a check is cut off in the mobile image, the MICR-line might be cut off, rendering the image unusable for a Mobile Deposit application that uses the MICR information to electronically deposit checks. Furthermore, if the bottom of a remittance coupon is cut off in the mobile image, the code line may be missing, and the image may be rendered unusable by a Remittance Processing application that uses the code line information to electronically process the remittance.
A side of the document is selected (step 3420). In an embodiment, the four corners are received as an array of x and y coordinates C[I], where I is equal to the values 1-4 representing the four corners of the document.
A determination is made whether the selected side of the document is within the mobile document image (step 3425). According to an embodiment, the document subimage has four sides and each side S[I] includes two adjacent corners C1[I] and C2[I]. A side is deemed to be cut off if the corners comprising the side are on the edge of the mobile image. In an embodiment, a side of the document is cut off if any of the following criteria are met:
If the side does not fall within the mobile image, the test result value is set to zero indicating that the mobile image failed the test (step 3430), and the test results are returned (step 3445).
If the side falls within the mobile image, a determination is made whether there are more sides to be tested (step 3425). If there are more sides to be tested, an untested side is selected (step 3415). Otherwise, all of the sides were within the mobile image, so the test result value for the test is set to 1000 indicating the test passed (step 3440), and the test result value is returned (step 3445).
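The following Python sketch illustrates the Cut-Side Test; because the specific cut-off criteria are not reproduced above, treating a side as cut off when both of its corners lie on (or outside) the same image edge is an assumption made for the sketch.

def cut_side_score(corners, width, height, margin: int = 0) -> int:
    """Sketch of the Cut-Side Test.

    corners: four (x, y) document corners in order, so consecutive corners
             bound each side of the document subimage.
    width, height: dimensions of the mobile document image.
    """
    def on_edge(x, y):
        return {
            "left": x <= margin,
            "right": x >= width - margin,
            "top": y <= margin,
            "bottom": y >= height - margin,
        }

    for i in range(4):
        c1, c2 = corners[i], corners[(i + 1) % 4]  # adjacent corners of side i
        e1, e2 = on_edge(*c1), on_edge(*c2)
        if any(e1[edge] and e2[edge] for edge in e1):
            return 0      # this side is cut off: fail
    return 1000           # all sides inside the image: pass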
Warped Image Test
The Warped Image Test identifies images in which the document is warped.
The mobile image is received (step 3605). In an embodiment, the height and width of the mobile image can be determined by the preprocessing module 2110. The corners of the document subimage are then identified in the mobile document image (step 3610). Various techniques can be used to identify the corners of the image, including the various techniques described above. In an embodiment, the preprocessing module 2110 identifies the corners of the document subimage.
A side of the document is selected (step 3615). According to an embodiment, the document subimage has four sides and each side S[I] includes two adjacent corners C1[I] and C2[I].
A piecewise linear approximation is built for the selected side (step 3620). According to an embodiment, the piecewise-linear approximation is built along the selected side by following the straight line connecting the adjacent corners C1[I] and C2[I] and detecting the position of the highest contrast, starting from positions within the [C1[I], C2[I]] segment and moving in the orthogonal direction.
After the piecewise linear approximation is built along the [C1[I], C2[I]] segment, the [C1[I], C2[I]] segment is walked to compute the deviation between the straight line and the approximation determined using piecewise linear approximation (step 3625). Each time the deviation is calculated, a maximum deviation value (MaxDev) is updated to reflect the maximum deviation value identified during the walk along the [C1[I], C2[I]] segment.
The maximum deviation value for the side is then normalized to generate a normalized maximum deviation value for the selected side of the document image (step 3630). According to an embodiment, the normalized value can be determined using the following formula:
NormMaxDev[I]=1000*MaxDev[I]/Dim, where Dim is the mobile image dimension perpendicular to side S[I].
An overall normalized maximum deviation value is then updated using the normalized deviation value calculated for the side. According to an embodiment, the overall maximum deviation can be determined using the formula:
OverallMaxDeviation=max(OverallMaxDeviation,NormMaxDev[I])
A determination is then made whether there are any more sides to be tested (step 3640). If there are more sides to be tested, an untested side is selected for testing (step 3615). Otherwise, if no untested sides remain, the warped image test value is computed. According to an embodiment, the warped image test value can be determined using the following formula:
V=1000−OverallMaxDeviation
One can see that V lies within the [0-1000] range used by the image IQA system and is equal to 1000 when the sides S[I] are straight line segments (and therefore no warp is present). The computed test result is then returned (step 3650). As described above, the test result value is provided to the test execution module 2130 where the test result value can be compared to a threshold value associated with the test. If the test result value falls below the threshold associated with the test, detailed test result messages can be retrieved from the test result message data store 2136 and provided to the user to indicate why the test failed and what might be done to remedy the failure. For example, the user may simply need to retake the image after flattening out the hardcopy of the document being imaged in order to reduce warping.
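The following Python sketch illustrates the Warped Image Test scoring, assuming the piecewise-linear edge trace has already been detected for each side; the assignment of the perpendicular image dimension to each side is an assumption based on the sides being ordered top, right, bottom, left.

from math import hypot

def warped_image_score(sides, image_width, image_height) -> int:
    """Sketch of the Warped Image Test.

    sides: list of four entries (c1, c2, trace), where c1 and c2 are the adjacent
           corner points of the side and `trace` is the list of (x, y) points of
           the piecewise-linear edge approximation found along that side.
    """
    def point_to_line_distance(p, a, b):
        # Perpendicular distance from point p to the straight line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        return num / hypot(bx - ax, by - ay)

    overall_max_dev = 0.0
    for index, (c1, c2, trace) in enumerate(sides):
        max_dev = max(point_to_line_distance(p, c1, c2) for p in trace)
        # Sides 0 and 2 (top/bottom) use the image height as the perpendicular
        # dimension; sides 1 and 3 (right/left) use the image width.
        dim = image_height if index in (0, 2) else image_width
        overall_max_dev = max(overall_max_dev, 1000.0 * max_dev / dim)
    return int(1000 - overall_max_dev)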
Image Size Test
The Image Size Test detects the actual size and the effective resolution of the document subimage. The perspective transformation that can be performed by embodiments of the preprocessing module 2110 allows for a quadrangle of any size to be transformed into a rectangle to correct for view distortion. However, a small subimage can cause a loss of the detail needed to process the subimage.
A subimage average width is computed (step 3815). In an embodiment, the subimage average width can be calculated using the following formula:
AveWidth=(|AB|+|CD|)/2, where |PQ| represents the Euclidian distance from point P to point Q.
A subimage average height is computed (step 3820). In an embodiment, the subimage average height can be calculated using the following formula:
AveHeight=(|BC|+|DA|)/2
The average width and average height values are then normalized to fit the 0-1000 range used by the mobile IQA tests (step 3822). The following formulas can be used to normalize the average width and height:
NormAveWidth=1000*AveWidth/Width
NormAveHeight=1000*AveHeight/Height
A minimum average value is then determined for the subimage (step 3825). According to an embodiment, the minimum average value is the smaller of the normalized average width and the normalized average height values. The minimum average value falls within the 0-1000 range used by the mobile IQA tests. The minimum average value will equal 1000 if the document subimage fills the entire mobile image.
The minimum average value is returned as the test result (step 3865). As described above, the test result value is provided to the test execution module 2130 where the test result value can be compared to a threshold value associated with the test. If the test result value falls below the threshold associated with the test, detailed test result messages can be retrieved from the test result message data store 2136 and provided to the user to indicate why the test failed and what might be done to remedy the failure. For example, the user may simply need to retake the image by positioning the camera closer to the document.
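The following Python sketch illustrates the Image Size Test, assuming the corners A, B, C, and D of the document subimage are ordered top-left, top-right, bottom-right, bottom-left.

from math import dist

def image_size_score(a, b, c, d, image_width, image_height) -> int:
    """Sketch of the Image Size Test on the document subimage corners."""
    ave_width = (dist(a, b) + dist(c, d)) / 2    # average of top and bottom edges
    ave_height = (dist(b, c) + dist(d, a)) / 2   # average of left and right edges

    norm_ave_width = 1000 * ave_width / image_width
    norm_ave_height = 1000 * ave_height / image_height

    # The smaller of the two normalized averages is the test result.
    return int(min(norm_ave_width, norm_ave_height))

# A subimage filling the whole mobile image scores 1000.
print(image_size_score((0, 0), (640, 0), (640, 480), (0, 480), 640, 480))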
MICR-Line Test
The MICR-line Test is used to determine whether a high quality image of the front of a check has been captured using the mobile device, according to an embodiment. The MICR-line Test can be used in conjunction with a Mobile Deposit application to ensure that images of checks captured for processing with the Mobile Deposit application are of a high enough quality to be processed so that the check can be electronically deposited. Furthermore, if a mobile image fails the MICR-line Test, the failure may be indicative of incorrect subimage detection and/or poor overall quality of the mobile image, and such an image should be rejected anyway.
Code Line Test
The Code Line Test can be used to determine whether a high quality image of the front of a remittance coupon has been captured using the mobile device, according to an embodiment. The Code Line Test can be used in conjunction with a Remittance Processing application to ensure that images of remittance coupons captured for processing with the Remittance Processing application are of a high enough quality to be processed so that the remittance can be electronically processed. Furthermore, if a mobile image fails the Code Line Test, the failure may be indicative of incorrect subimage detection and/or poor overall quality of the mobile image, and such an image should be rejected anyway.
Aspect Ratio Tests
The width of a remittance coupon is typically significantly greater than its height. According to an embodiment, an aspect ratio test can be performed on a document subimage of a remittance coupon to determine whether the aspect ratio of the document in the image falls within a predetermined range of ratios of width to height. If the aspect ratio falls within the predetermined range, the image passes the test. An overall confidence value can be assigned to different ratio values or ranges of ratio values in order to determine whether the image should be rejected.
According to some embodiments, the mobile device can be used to capture an image of a check in addition to the remittance coupon. A second aspect ratio test is provided for two-sided documents, such as checks, where images of both sides of the document may be captured. According to some embodiments, a remittance coupon can also be a two-sided document and images of both sides of the document can be captured. The second aspect ratio test compares the aspect ratios of images that are purported to be of the front and back of a document to determine whether the user has captured images of the front and back of the same document, according to an embodiment. The Aspect Ratio Test could be applied to various types of two-sided or multi-page documents to determine whether images purported to be of different pages of the document have the same aspect ratio.
A front mobile image is received (step 4005) and a rear mobile image is received (step 4010). The front mobile image is supposed to be of the front side of a document, while the rear mobile image is supposed to be of the back side of the document. If the images are really of opposite sides of the same document, the aspect ratios of the document subimages should match. Alternatively, images of two different pages of the same document may be provided for testing. If the images are really of pages of the same document, the aspect ratios of the document subimages should match.
The preprocessing module 2110 can process the front mobile image to generate a front-side snippet (step 4015) and can also process the back side image to generate a back-side snippet (step 4020).
The aspect ratio of the front-side snippet is then calculated (step 4025). In an embodiment, the AspectRatioFront=Width/Height, where Width=the width of the front-side snippet and Height=the height of the front-side snippet.
The aspect ratio of the back-side snippet is then calculated (step 4030). In an embodiment, the AspectRatioBack=Width/Height, where Width=the width of the back-side snippet and Height=the height of the back-side snippet.
The relative difference between the aspect ratios of the front and rear snippets is then determined (step 4035). According to an embodiment, the relative difference between the aspect ratios can be determined using the following formula:
RelDiff=1000*abs(AspectRatioFront−AspectRatioBack)/max(AspectRatioFront,AspectRatioBack)
A test result value is then calculated based on the relative difference between the aspect ratios (step 4040). According to an embodiment, the test value V can be computed using the formula V=1000−RelDiff.
The test results are then returned (step 4045). As described above, the test result value is provided to the test execution module 2130 where the test result value can be compared to a threshold value associated with the test. If the test result value falls below the threshold associated with the test, detailed test result messages can be retrieved from the test result message data store 2136 and provided to the user to indicate why the test failed and what might be done to remedy the failure. For example, the user may have mixed up the front and back images from two different checks having two different aspect ratios. If the document image fails the test, the user can be prompted to verify that the images purported to be the front and back of the same document (or images of pages from the same document) really are from the same document.
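The following Python sketch illustrates the front/back Aspect Ratio Test scoring, computed directly from the pixel dimensions of the two snippets.

def aspect_ratio_match_score(front_size, back_size) -> int:
    """Sketch of the front/back Aspect Ratio Test.

    front_size, back_size: (width, height) of the corresponding document snippets.
    """
    front_ratio = front_size[0] / front_size[1]
    back_ratio = back_size[0] / back_size[1]

    # Relative difference between the two aspect ratios, scaled to 0-1000.
    rel_diff = 1000 * abs(front_ratio - back_ratio) / max(front_ratio, back_ratio)
    return int(1000 - rel_diff)

# Snippets from the same check should have nearly identical aspect ratios.
print(aspect_ratio_match_score((1200, 540), (1180, 532)))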
Front-as-Rear Test
The Front-as-Rear Test is a check specific Boolean test. The test returns a value of 0 if an image fails the test and a value of 1000 if an image passes the test. According to an embodiment, if a MICR-line is identified on what is purported to be an image of the back of the check, the image will fail the test and generate a test message that indicates that the images of the check have been rejected because an image of the front of the check was mistakenly passed as an image of the rear of the check. Similarly, if a code line is identified on what is purported to be the back of a remittance coupon, the image will fail the test and generate a test message that indicates that the images of the remittance coupon have been rejected because an image of the front of the coupon was mistakenly passed as an image of the rear of the coupon.
An image of the rear of the document is received (step 4105) and the image is converted to a bitonal snippet by the preprocessing module 2110 of the MDIPE 2100 (step 4110). The image may be accompanied by data indicating whether the image is of a check or of a remittance coupon. In some embodiments, no identifying information may be provided, and the testing will be performed to identify either a code line or a MICR-line in the bitonal snippet.
If the document is identified as a check, a MICR recognition engine can then be applied to identify a MICR-line in the bitonal snippet (step 4115). Various techniques for identifying the MICR-line in an image of a check are described above. The results from the MICR recognition engine can then be normalized to the 0-1000 scale used by the mobile IQA tests, and the normalized value compared to a threshold value associated with the test. If the document is identified as a remittance coupon, a code line recognition engine can be applied to identify the code line in the image of the coupon. Various techniques for identifying the code line in an image of a remittance coupon are described above, such as identifying text in OCR-A font within the image. If no information as to whether the image to be tested includes a check or a remittance coupon is provided, both MICR-line and code line testing can be performed to see if either a MICR-line or code line can be found. In an embodiment, the highest normalized value from the MICR-line and code line tests can be selected for comparison to the threshold.
According to an embodiment, the test threshold can be provided as a parameter to the test along with the mobile document image to be tested. According to an embodiment, the threshold used for this test is lower than the threshold used in the MICR-line Test described above.
If the normalized test result equals or exceeds the threshold, then the image includes a MICR-line or code line and the test is marked as failed (test result value=0), because a MICR-line or code line was identified in what was purported to be an image of the back of the document. If the normalized test result is less than the threshold, the image did not include a MICR-line or code line and the test is marked as passed (test result value=1000). The test result value is then returned (step 4125).
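The following Python sketch illustrates the Front-as-Rear decision logic, assuming the MICR-line and code line recognition results have already been produced by separate recognition engines and normalized to the 0-1000 scale; those engines are outside the scope of this sketch.

def front_as_rear_score(micr_confidence, code_line_confidence, threshold) -> int:
    """Sketch of the Front-as-Rear Test decision.

    micr_confidence, code_line_confidence: normalized (0-1000) recognition results
    obtained from the purported rear-side snippet.
    """
    # When the document type is unknown, the higher of the two results is used.
    best = max(micr_confidence, code_line_confidence)

    # Finding a MICR-line or code line on the "rear" image means the front side
    # was mistakenly submitted as the rear, so the Boolean test fails.
    return 0 if best >= threshold else 1000

print(front_as_rear_score(micr_confidence=720, code_line_confidence=50, threshold=400))  # 0: fail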
Form Identification of Remittance Coupon
According to an embodiment, the remittance processing step 525 of the method illustrated in
Form identification can be used in a number of different situations. For example, form identification can be used for frequently processed remittance coupons. If the layout of the coupon is known, capturing the data from known locations on the coupon can be more accurate than relying on a dynamic data capture technique to extract the data from the coupon.
Form identification can also be used for remittance coupons that lack keywords that can be used to identify key data on the coupon. For example, if a coupon does not include an “Account Number” label for an account number field, the dynamic data capture may misidentify the data in that field. Misidentification can become even more likely if multiple fields have similar formats. Form identification can also be used for coupons having ambiguous data. For example, a remittance coupon might include multiple fields that include data having a similar format. If a remittance coupon includes multiple unlabeled fields having similar formats, dynamic data capture may be more likely to misidentify the data. However, if the layout of the coupon is known, the template information can be used to extract data from known positions in the image of the remittance coupon.
Form identification can also be used for remittance coupons having a non-OCR friendly layout. For example, a remittance coupon may use fonts where identifying keywords and/or form data is printed using a non-OCR friendly font. Form identification can also be used to improve the chance of correctly capturing remittance coupon data when a poor quality image is presented. A poor quality image of a remittance coupon can make it difficult to locate and/or read data from the remittance coupon.
A matching algorithm is executed on the bi-tonal image of the remittance coupon in an attempt to find a matching remittance coupon template (step 4210). According to an embodiment, the remittance server 310 can include a remittance template data store that can be used to store templates of the layouts of various remittance coupons. Various matching techniques can be used to match a template to an image of a coupon. For example, optical character recognition can be used to identify and read text content from the image. The types of data identified and the positions of the data on the remittance coupon can be used to identify a matching template. According to another embodiment, a remittance coupon can include a unique symbol or identifier that can be matched to a particular remittance coupon template. In yet other embodiments, the image of the remittance coupon can be processed to identify “landmarks” on the image that may correspond to labels and/or data. In some embodiments, these landmarks can include, but are not limited to, the positions of horizontal and/or vertical lines on the remittance coupon, the position and/or size of boxes and/or frames on the remittance coupon, and/or the location of pre-printed text. The positions of these landmarks on the remittance coupon may be used to identify a template from the plurality of templates in the template data store. According to some embodiments, a cross-correlation matching technique can be used to match a template to an image of a coupon. In some embodiments, the positions of frames/boxes and/or other such landmarks found on the image can be cross-correlated with landmark information associated with a template to compute a matching confidence score. If the confidence score exceeds a predetermined threshold, the template is considered to be a match and can be selected for use in extracting information from the mobile image of the remittance coupon.
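The following Python sketch illustrates landmark-based template matching in broad strokes; the landmark names, the distance-based confidence formula, and the threshold are illustrative assumptions standing in for the cross-correlation scoring described above.

from math import dist

def match_remittance_template(image_landmarks, templates, threshold=0.8):
    """Sketch of landmark-based template matching.

    image_landmarks: dict mapping landmark names (e.g. "code_line_box",
                     "amount_frame") to (x, y) positions found on the coupon image.
    templates: list of dicts, each with a "landmarks" dict of expected positions.
    """
    best_template, best_score = None, 0.0
    for template in templates:
        expected = template["landmarks"]
        shared = set(expected) & set(image_landmarks)
        if not shared:
            continue
        # Confidence decays with the average positional error of shared landmarks
        # and is weighted by the fraction of expected landmarks actually found.
        avg_error = sum(dist(expected[k], image_landmarks[k]) for k in shared) / len(shared)
        score = max(0.0, 1.0 - avg_error / 100.0) * (len(shared) / len(expected))
        if score > best_score:
            best_template, best_score = template, score
    return best_template if best_score >= threshold else None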
A determination is made whether a matching template has been found (step 4215). If no matching template is found, a dynamic data capture can be performed on the image of the remittance coupon (step 4225). Dynamic data capture is described in detail below and an example method for dynamic data capture is illustrated in the flow chart of
If a matching template is found, data can be extracted from the image of the remittance coupon using the template (step 4220). The template can provide the locations of various data, such as the code line, amount due, account holder name, and account number. Various OCR techniques can be used to read text content from the locations specified by the template. Because the locations of the various data elements are known, ambiguities regarding the type of data found can be eliminated, and the mobile remittance server 310 can distinguish between data elements having a similar data type.
Dynamic Data Capture
According to an embodiment, a keyword-based detection technique can be used to locate and read the data from the bitonal image of the remittance coupon in steps 4310 and 4315 of the method of
According to an embodiment, a format-based detection technique can be used to locate and read the data from the bitonal image of the remittance coupon in steps 4310 and 4315 of the method of
According to yet another embodiment, a combination of keyword-based and format-based matching can be used to identify and extract field data from the bitonal image (steps 4310 and 4315). This approach can be particularly effective where multiple fields of the same or similar format are included on the remittance coupon. Combining keyword-based and format-based matching can help to disambiguate the data extracted from the bitonal image.
According to an embodiment, a code-line validation technique can be used to locate and read the data from the bitonal image of the remittance coupon in steps 4310 and 4315 of the method of
According to an embodiment, a cross-validation technique can be used where multiple bitonal images of a remittance coupon have been captured, and one or more OCR techniques, such as those described above, are applied to each of the bitonal images. The results of the OCR techniques applied to one bitonal image can be compared to the results of the OCR techniques applied to one or more other bitonal images in order to cross-validate the field data extracted from the images. If conflicting results are found, the set of results having a higher confidence value can be selected for use in remittance processing.
Exemplary Hardware Embodiments
The mobile device 4400 also includes an image capture component 4430, such as a digital camera. According to some embodiments, the mobile device 4400 is a mobile phone, a smart phone, or a PDA, and the image capture component 4430 is an integrated digital camera that can include various features, such as auto-focus and/or optical and/or digital zoom. In an embodiment, the image capture component 4430 can capture image data and store the data in memory 4420 and/or data storage 4440 of the mobile device 4400.
Wireless interface 4450 of the mobile device can be used to send and/or receive data across a wireless network. For example, the wireless network can be a wireless LAN, a mobile phone carrier's network, and/or other types of wireless network.
I/O interface 4460 can also be included in the mobile device to allow the mobile device to exchange data with peripherals such as a personal computer system. For example, the mobile device might include a USB interface that allows the mobile device to be connected to a USB port of a personal computer system in order to transfer information, such as contact information, to and from the mobile device and/or to transfer image data captured by the image capture component 4430 to the personal computer system.
As used herein, the term module might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present invention. As used herein, a module might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, logical components, software routines or other mechanisms might be implemented to make up a module. In implementation, the various modules described herein might be implemented as discrete modules or the functions and features described can be shared in part or in total among one or more modules. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared modules in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate modules, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.
Where components or modules of processes used in conjunction with the operations described herein are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing module capable of carrying out the functionality described with respect thereto. One such example computing module is shown in
Referring now to
Computing module 1900 might also include one or more memory modules, referred to as main memory 1908. For example, random access memory (RAM) or other dynamic memory might be used for storing information and instructions to be executed by processor 1904. Main memory 1908 might also be used for storing temporary variables or other intermediate information during execution of instructions by processor 1904. Computing module 1900 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 1902 for storing static information and instructions for processor 1904.
The computing module 1900 might also include one or more various forms of information storage mechanism 1910, which might include, for example, a media drive 1912 and a storage unit interface 1920. The media drive 1912 might include a drive or other mechanism to support fixed or removable storage media 1914. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive. Accordingly, storage media 1914 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 1912. As these examples illustrate, the storage media 1914 can include a computer usable storage medium having stored therein particular computer software or data.
In alternative embodiments, information storage mechanism 1910 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing module 1900. Such instrumentalities might include, for example, a fixed or removable storage unit 1922 and an interface 1920. Examples of such storage units 1922 and interfaces 1920 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 1922 and interfaces 1920 that allow software and data to be transferred from the storage unit 1922 to computing module 1900.
Computing module 1900 might also include a communications interface 1924. Communications interface 1924 might be used to allow software and data to be transferred between computing module 1900 and external devices. Examples of communications interface 1924 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 1924 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1924. These signals might be provided to communications interface 1924 via a channel 1928. This channel 1928 might carry signals and might be implemented using a wired or wireless communication medium. These signals can deliver the software and data from memory or other storage medium in one computing system to memory or other storage medium in computing system 1900. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to physical storage media such as, for example, memory 1908, storage unit 1922, and media 1914. These and other various forms of computer program media or computer usable media may be involved in storing one or more sequences of one or more instructions for execution by a processing device. Such instructions embodied on the medium are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing module 1900 to perform features or functions of the present invention as discussed herein.
ACH Enrollment
According to some embodiments, at least one of the mobile device 340 and the mobile remittance server 310 may include functionality for Automated Clearing House enrollment (ACH enrollment). ACH enrollment allows an originator (such as a gas company, a cable company, or an individual's employer) to directly debit or credit a receiver's bank account upon receiving written, verbal, or electronic authorization from the receiver, along with the receiver's customer account number and bank routing number. ACH transactions span a wide variety of transaction types, including direct deposit payroll and vendor payments made to a receiver's bank account, as well as consumer payments made by a receiver on insurance premiums, mortgage loans, and utility bills.
Once an originator has been selected, the application may then prompt the user to provide an authorization for ACH enrollment with the particular originator (as discussed below).
The application will then require that the user capture an image of a document that contains the account information needed to set up the ACH transaction. The document may be a check bearing both the user's account number and bank routing number; the check may be blank or voided or, in the alternative, a cleared or canceled check. In additional embodiments described further herein, the document may be a bill or statement from a particular originator (such as a car insurance statement) or even an identification card, such as a driver's license, bearing the user's personal information. According to some embodiments, the captured image may then be preprocessed and transmitted to the mobile ACH Enrollment Server 4710 for image correction as well as image quality assurance (IQA). Any of the techniques already described above for image correction or IQA may be used for these purposes according to embodiments of the present invention.
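By way of illustration only, and not as part of the described embodiments, the following sketch shows one way a mobile application could preprocess a captured document image and submit it to an enrollment server for correction and IQA; the endpoint URL, field names, and document type labels are hypothetical.

```python
# Illustrative sketch only: mobile-side preprocessing and upload of a captured
# document image. The server URL, form fields, and "doc_type" values are
# hypothetical, not part of the described embodiments.
import io
import requests
from PIL import Image

ENROLLMENT_SERVER = "https://ach-enrollment.example.com/api/v1"  # hypothetical

def preprocess_capture(path: str, max_side: int = 1600) -> bytes:
    """Downscale and convert the capture to grayscale before upload."""
    img = Image.open(path).convert("L")   # grayscale reduces payload size
    img.thumbnail((max_side, max_side))   # bound the longest side in place
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    return buf.getvalue()

def submit_document(path: str, doc_type: str) -> dict:
    """Upload the preprocessed image for server-side correction and IQA."""
    payload = preprocess_capture(path)
    resp = requests.post(
        f"{ENROLLMENT_SERVER}/documents",
        files={"image": ("capture.jpg", payload, "image/jpeg")},
        data={"doc_type": doc_type},      # e.g. "check", "statement", "license"
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()                    # e.g. {"status": "iqa_passed", ...}

if __name__ == "__main__":
    print(submit_document("check.jpg", doc_type="check"))
```

Downscaling and grayscale conversion on the device keep the upload small while preserving the text regions the server needs for correction, IQA, and field extraction.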
If the processed image passes the specified IQA tests, data relevant to enrolling in an ACH service may then be extracted from the processed image (e.g., customer account and bank routing number). Otherwise, if IQA fails, the Mobile ACH Enrollment Server 4710 (and/or the application) may send a message to the user requesting a new image of the document.
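Purely as an illustrative sketch, and not the server's actual implementation, the following shows how such an IQA gate might be structured: each named test is applied to the corrected image, and extraction proceeds only if all tests pass; the test registry and the extract_fields helper are hypothetical.

```python
# Illustrative sketch only: an IQA gate that either extracts enrollment fields
# from a corrected image or requests a recapture. Tests and the extraction
# helper are supplied by the caller and are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class IQAResult:
    passed: bool
    failures: list

def run_iqa(image, tests: dict) -> IQAResult:
    """Run each named IQA test; collect the names of any that fail."""
    failures = [name for name, test in tests.items() if not test(image)]
    return IQAResult(passed=not failures, failures=failures)

def handle_document(image, tests: dict, extract_fields: Callable) -> dict:
    """Extract account data on IQA success; otherwise request a new capture."""
    result = run_iqa(image, tests)
    if result.passed:
        fields = extract_fields(image)  # e.g. {"account_number": ..., "routing_number": ...}
        return {"status": "extracted", "fields": fields}
    return {"status": "recapture_requested", "failed_tests": result.failures}
```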
In some embodiments, upon successful extraction of the relevant data from the processed image, the Mobile ACH Enrollment Server 4710 may open a network connection with an originator server 4715. A server address of the originator server 4715 may be locatable within an originator lookup table resident within the mobile ACH Enrollment Server 4710, or it may be determined at the mobile ACH Enrollment Server 4710 based upon user input. The Mobile ACH Enrollment Server 4710 may then forward the authorization and relevant data—such as the customer account number and bank routing number—to the originator server 4715. This data may then be used to enable the originator server 4715 to perform ACH transactions authorized by the user. The originator server 4715 may then interface with a bank server 320 associated with the customer account and bank routing number in order to perform the authorized ACH transactions.
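The following sketch, offered only as an illustration under assumptions not stated in the embodiments, shows one way the originator lookup and forwarding step could work; the lookup table contents, endpoint path, and payload keys are hypothetical.

```python
# Illustrative sketch only: resolving an originator server address from a
# resident lookup table (falling back to user input) and forwarding the
# authorization and extracted data. All addresses and keys are hypothetical.
from typing import Optional
import requests

ORIGINATOR_LOOKUP = {  # hypothetical resident lookup table
    "acme-insurance": "https://ach.acme-insurance.example.com",
    "city-utilities": "https://payments.city-utilities.example.com",
}

def resolve_originator(originator_id: str, user_supplied_url: Optional[str] = None) -> str:
    """Prefer the lookup table; fall back to an address derived from user input."""
    url = ORIGINATOR_LOOKUP.get(originator_id) or user_supplied_url
    if not url:
        raise ValueError(f"No server address known for originator {originator_id!r}")
    return url

def forward_enrollment(originator_id: str, authorization: dict, fields: dict) -> dict:
    """Send the authorization, account number, and routing number to the originator server."""
    base = resolve_originator(originator_id)
    resp = requests.post(
        f"{base}/ach/enrollments",
        json={
            "authorization": authorization,
            "account_number": fields["account_number"],
            "routing_number": fields["routing_number"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```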
In other embodiments, the Mobile ACH Enrollment Server 4710 may set a flag or otherwise mark an editable portion of a file (such as a file containing at least the data of the processed image) upon performing one or more successful image quality assurance tests and data extraction from the processed image. Alternatively, the Mobile ACH Enrollment Server 4710 may create a special file type indicating that such quality assurance tests and data extraction were successfully performed. This data may then be sent by the application to the originator server 4715 along with the user's authorization, customer account number, and bank routing number. The originator server 4715 may then perform the authorized ACH transactions by interfacing with a bank server 320 associated with the user's customer account and bank routing number.
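As an illustrative sketch only, the following shows one way a processed-image record could be flagged after successful IQA and extraction, or written out as a distinct file type; the JSON structure and the ".achq" suffix are hypothetical.

```python
# Illustrative sketch only: marking a processed-image record as having passed
# IQA and extraction, either by setting a flag in an editable portion of the
# record or by writing a distinct file type. Structure and suffix are hypothetical.
import json

def mark_verified(record_path: str, fields: dict) -> str:
    """Flag the record and write it out under a distinct, hypothetical file type."""
    with open(record_path, encoding="utf-8") as fh:
        record = json.load(fh)
    record["iqa_passed"] = True              # flag set in the editable portion
    record["extracted_fields"] = fields      # e.g. account and routing numbers
    verified_path = record_path + ".achq"    # alternatively, a special file type
    with open(verified_path, "w", encoding="utf-8") as fh:
        json.dump(record, fh)
    return verified_path
```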
When the option to set up ACH enrollment is selected by the user at block 4810, the application may then prompt the user to select an originator at block 4812. If ACH enrollment has already been configured for one or more originators, the user may be given the option to update ACH enrollment information associated with some or all of those originators. Otherwise, the user will be given the option to add ACH enrollment information for a new originator.
In some embodiments, the user may select an originator from a pre-populated or downloadable list of supported originators. In other embodiments, a special type of originator information file (i.e., a file containing information readable by the application) may be received by the user from an external memory source (such as a CD, DVD, or flash memory), or the data may be downloaded from a remote location (such as a website associated with the originator). In still other embodiments, the user may have the option to manually enter data associated with a specific originator. Additionally, as previously mentioned, the originator may be identified by extracting data from a mobile-captured image of a document for which ACH enrollment is desired, such as a car insurance billing statement.
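For illustration only, the sketch below shows one way originator records could be loaded from a bundled list, from an originator information file, or from manual entry; the JSON schema and field names are hypothetical and not a format defined by the embodiments.

```python
# Illustrative sketch only: three sources for originator records. The schema
# (originator_id, display_name, server_url) is hypothetical.
import json
from dataclasses import dataclass

@dataclass
class Originator:
    originator_id: str
    display_name: str
    server_url: str

def load_bundled_list(path: str) -> list:
    """Read a pre-populated or downloadable list of supported originators."""
    with open(path, encoding="utf-8") as fh:
        return [Originator(**rec) for rec in json.load(fh)]

def load_originator_file(path: str) -> Originator:
    """Read a single originator information file obtained from removable media
    or downloaded from the originator's website."""
    with open(path, encoding="utf-8") as fh:
        return Originator(**json.load(fh))

def manual_entry(originator_id: str, display_name: str, server_url: str) -> Originator:
    """Build a record from data the user types in directly."""
    return Originator(originator_id, display_name, server_url)
```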
At block 4814, once the originator has been successfully selected, the user will be prompted to capture an image of a document with data relevant to ACH enrollment, such as a check, a billing or remittance statement, a driver's license, identification card, etc. The document may include a customer account number and bank routing number, a customer address, an originator name and address, and other pertinent information about the customer and originator.
The user will also be prompted to provide some form of ACH authorization at block 4816. ACH authorization may be electronic, oral, or written. Thus, according to some embodiments, an electronic form requesting ACH authorization may be completed by a user (e.g., via keyboard, mouse, or touch input) and subsequently submitted to the requesting application. In other embodiments, a menu may prompt the user to sign ACH authorization forms and mail them to a designated mailing address. In still other embodiments, the application may receive from the user a file containing audio and/or video content in which the user verbally authorizes a specific originator to perform ACH transactions. Conventional authentication mechanisms known in the art may be used to verify that the user granting ACH authorization has the power to perform bank transactions with respect to the specified bank account. Authorization may also be completed by requesting that the user capture an image of their driver's license, as this is a form of identification that the user normally has in their possession and that is difficult to fake. An image of the driver's license includes personal information about the user which can be used to verify the user's identity, including an address, phone number, driver's license number, etc.
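As a purely illustrative sketch, the following shows one way the three authorization channels mentioned above (electronic form, mailed written form, and recorded verbal consent) could be represented as a single record attached to an enrollment request; the class and field names are hypothetical.

```python
# Illustrative sketch only: a uniform authorization record covering electronic,
# written, and verbal ACH authorizations. Names and fields are hypothetical.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AchAuthorization:
    method: str          # "electronic", "written", or "verbal"
    originator_id: str
    captured_at: str
    evidence: str        # form payload summary, mailing reference, or media fingerprint

def electronic_authorization(originator_id: str, form_fields: dict) -> AchAuthorization:
    """Record an electronic form submitted via keyboard/mouse/touch input."""
    return AchAuthorization("electronic", originator_id,
                            datetime.now(timezone.utc).isoformat(),
                            evidence=str(sorted(form_fields.items())))

def verbal_authorization(originator_id: str, media_path: str) -> AchAuthorization:
    """Store a hash of the audio/video file rather than the raw recording."""
    with open(media_path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return AchAuthorization("verbal", originator_id,
                            datetime.now(timezone.utc).isoformat(),
                            evidence=digest)
```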
In some embodiments, a customer account number and a bank routing number may be provided directly to the requesting application via one or more peripheral input devices (e.g., a keyboard, mouse, and/or touch input device). In some embodiments, this information may be persistently stored in one or more computing devices in order to expedite the completion of future ACH enrollment entries by the user (thereby enabling the same account and routing information to be loaded when configuring ACH transactions with other originators). Optionally, conventional cryptographic techniques may be used to secure the data so that it is not accessible by unauthorized individuals.
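By way of example only, the following sketch uses the third-party Python "cryptography" package (Fernet) to persist the account and routing numbers in encrypted form for reuse in later enrollments; the file name and the simplified key handling are hypothetical, and a production system would keep the key in a secure keystore.

```python
# Illustrative sketch only: encrypted persistence of account details so they
# can be reloaded when enrolling with other originators. Key handling is
# deliberately simplified and hypothetical.
import json
from cryptography.fernet import Fernet

def save_account_details(path: str, key: bytes, account_number: str, routing_number: str) -> None:
    """Encrypt and write the account and routing numbers to disk."""
    token = Fernet(key).encrypt(json.dumps(
        {"account_number": account_number, "routing_number": routing_number}
    ).encode("utf-8"))
    with open(path, "wb") as fh:
        fh.write(token)

def load_account_details(path: str, key: bytes) -> dict:
    """Read and decrypt previously stored account details."""
    with open(path, "rb") as fh:
        return json.loads(Fernet(key).decrypt(fh.read()).decode("utf-8"))

if __name__ == "__main__":
    key = Fernet.generate_key()   # in practice, store the key in a secure keystore
    save_account_details("ach.enc", key, "123456789", "021000021")
    print(load_account_details("ach.enc", key))
```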
In other embodiments, the user may be prompted to provide the application with a captured image (e.g., a picture of a blank or voided check) bearing the customer's account and bank routing numbers. The captured image may then be preprocessed and transmitted to the mobile ACH Enrollment Server 4710 for image correction as well as image quality assurance. Any of the techniques described above for image correction or image quality assurance may be used for these purposes according to embodiments of the present invention. If the processed image passes the specified image quality assurance tests, the relevant data may then be extracted from the processed image (e.g., customer account and bank routing number). Otherwise, the Mobile ACH Enrollment Server 4710 (and/or the application) may send a message to the user requesting a new image of a check.
The ACH enrollment authorization, along with the data containing the customer account and bank routing numbers, may then be provided to an originator server 4715 at block 4818. The originator server 4715 can then use this data to set up the authorized ACH transactions, and the process then ends.
At block 4904, the document image may then be processed in any of the manners already described above to correct image defects such as skewing, warping, lighting, etc. In some embodiments, image quality assurance tests may then be run on the processed image. This is shown at block 4906.
If any or all of the image quality assurance tests fail, the user may be requested to provide a new image of a check. Otherwise, if the processed image passes the designated image quality assurance tests, data from the processed image may then be extracted at block 4908. This data may include, for example, a customer account number and a bank routing number. Any of the techniques mentioned above for extracting data from an image may be used for this purpose.
The data may then be forwarded to a remote computing device such as the originator server 4715 at block 4910. In one embodiment, if the image processing and IQA tests are performed at the mobile device 340, the data may be extracted at the mobile device 340 and then forwarded to the ACH enrollment server 4710. Optionally, an authorization for ACH enrollment may be forwarded to the remote computing device along with the data. The process then ends.
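As an illustrative sketch only, the block 4904-4910 flow can be read as a single pipeline: correct the image, run the IQA tests, extract the fields, and forward them (optionally with the authorization) to the remote server; the helper callables below are hypothetical placeholders for the techniques described earlier.

```python
# Illustrative sketch only: the block 4904-4910 flow as one function. The
# correct/iqa_tests/extract/forward callables stand in for the image
# correction, IQA, extraction, and forwarding techniques described above.
from typing import Callable, Optional

def enrollment_pipeline(image,
                        correct: Callable,        # block 4904: fix skew, warp, lighting
                        iqa_tests: list,          # block 4906: quality assurance tests
                        extract: Callable,        # block 4908: field extraction
                        forward: Callable,        # block 4910: send to remote server
                        authorization: Optional[dict] = None) -> dict:
    corrected = correct(image)
    if not all(test(corrected) for test in iqa_tests):
        return {"status": "recapture_requested"}
    fields = extract(corrected)                   # e.g. account and routing numbers
    forward(fields, authorization)
    return {"status": "forwarded", "fields": fields}
```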
Once the image has been accepted by the user, it is processed for further correction and then run through the IQA tests, as described above. If the image passes the IQA tests, the data, or content, of the check is extracted to obtain the fields needed to enroll the user in ACH.
In one embodiment illustrated in the figures, a hinting process may be used during data extraction. The hinting process may be useful when the system cannot determine certain aspects of the image or certain basic fields, such as when the user does not know which state issued their driver's license.
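Purely for illustration, the following sketch shows a minimal hinting exchange in which the application prompts the user only for the fields that automatic extraction could not resolve; the field names are hypothetical.

```python
# Illustrative sketch only: prompt the user for hints on fields the extraction
# step could not determine. Field names are hypothetical.
def collect_hints(missing_fields: list, prompt=input) -> dict:
    """Ask the user only for the fields that could not be resolved automatically."""
    hints = {}
    for field in missing_fields:
        value = prompt(f"Could not read '{field}'. Please enter it (blank to skip): ").strip()
        if value:
            hints[field] = value
    return hints

# Example: extraction could not determine the issuing state of the license.
# hints = collect_hints(["license_issuing_state"])
```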
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not of limitation. The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. In addition, the invention is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated example. One of ordinary skill in the art would also understand how alternative functional, logical or physical partitioning and configurations could be utilized to implement the desired features of the present invention.
Furthermore, although items, elements or components of the invention may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
This application is a continuation of U.S. application Ser. No. 17/751,346, which is a continuation of U.S. patent application Ser. No. 16/691,378, filed on Nov. 21, 2019, which is a continuation of U.S. patent application Ser. No. 13/526,532, filed on Jun. 19, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 12/906,036, filed on Oct. 15, 2010, which is a continuation-in-part of U.S. patent application Ser. No. 12/778,943, filed on May 12, 2010, and a continuation-in-part of U.S. patent application Ser. No. 12/346,026, filed on Dec. 30, 2008, which claims priority to U.S. Provisional Patent Application No. 61/022,279, filed on Jan. 18, 2008, which are all hereby incorporated herein by reference in their entireties as if set forth in full. This application is also related to U.S. patent application Ser. No. 12/717,080, filed on Mar. 3, 2010, which is hereby incorporated herein by reference in its entirety as if set forth in full.