The embodiments described herein relate to processing images of documents captured using a mobile device, and more particularly to real-time processing and feature extraction of images of payment documents for classifying the payment document therein.
Financial institutions and other businesses have become increasingly interested in electronic processing of checks and other financial documents in order to expedite processing of these documents. Some techniques allow users to scan a copy of the document using a scanner or copier to create an electronic copy of the document that can be digitally processed to extract content. This provides added convenience over the use of a hardcopy original, which would otherwise need to be physically sent or presented to the recipient for processing. For example, some banks can process digital images of checks and extract from the image the check information needed to process the check for payment and deposit, without requiring that the physical check be routed throughout the bank for processing. However, the type of information and the accuracy of information which can be processed from an image of a check are limited. As a result, some checks cannot be processed and are rejected during the mobile deposit process. Furthermore, the types of documents which can be processed are often limited to checks, as other financial documents have varying formats and sizes which are too difficult to process electronically.
Mobile devices that incorporate cameras have also become ubiquitous and may also be useful to capture images of financial documents for mobile processing of financial information. The mobile device may be connected with a financial institution or business through a mobile network connection. However, the process of capturing and uploading images of financial documents is often prone to error and produces images of poor quality which cannot be used to extract data. The user is often unaware of whether the captured document image is sufficient and ready for processing by a business or financial institution. Additionally, the variety of formats, sizes and content found on different types of financial documents makes capturing a quality image and accurately extracting content a difficult process.
Therefore, there is a need for identifying a document type from a digital image of a document captured by a mobile device and accurately extracting content from the document.
Systems and methods are provided for processing an image of a financial payment document captured using a mobile device and classifying the type of document in order to extract the content therein. These methods may be implemented on a mobile device or a central server, and can be used to identify content on the payment document and determine whether the payment document is ready to be processed by a business or financial institution. The system can identify the type of payment document by identifying features on the payment document and performing a series of steps to determine probabilities that the payment document belongs to a specific document type. The identification steps are arranged starting with the fastest step in order to attempt to quickly determine the payment document type without requiring lengthy, extensive analysis.
The payment document may be classified generally as a check, and more specifically as a personal check, business check, traveler's check, cashier's check, rebate check, etc., based on features identified on the image and compared using databases which store payment type information. Once the type of payment document is determined, known information about the type of payment document is utilized to determine whether the content on the payment document is sufficient for processing of the payment (such as depositing the check) or whether any risk or indications of fraud associated with a particular type of payment document require further processing. Customized rules can be created by a business or financial institution to provide specific actions depending on the type of payment document which is being deposited. Additional portions of the payment document, including a signature line, addressee field, etc., can be checked to ensure that the check is ready to be deposited by the bank.
These and other features, aspects, and embodiments are described below in the section entitled “Detailed Description.”
Features, aspects, and embodiments are described in conjunction with the attached drawings, in which:
The following detailed description is directed to certain specific embodiments. However, it will be understood that these embodiments are by way of example only and should not be seen as limiting the systems and methods described herein to the specific embodiments, architectures, etc. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.
Systems and methods are provided for processing an image of a financial payment document captured using a mobile device and classifying the type of payment document in order to extract the content therein. By correcting aspects of the image and extracting relevant features, the type of payment document can be identified, which then provides for faster and more accurate content extraction. An image of a payment document captured by a mobile device is first processed to improve the quality of the image and increase the ability to extract content from the image. Various features are then extracted from the image, and these features are analyzed using a probabilistic approach with a hierarchical classification algorithm to quickly and accurately classify the payment document in the fewest number of steps. Existing information about types of payment documents is utilized to determine if the features and characteristics indicate a particular payment document classification. Additionally, geometric characteristics of the payment document or keyword identification based on an optical character recognition (OCR) step may be used to classify the payment document.
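Purely as an illustration of the early-exit ordering described above, the pipeline can be sketched as follows; the stage names, toy image representation, and confidence threshold are hypothetical examples, not values taken from the embodiments themselves.

```python
# Hypothetical sketch of a fastest-first, early-exit classification pipeline.
# Each stage returns (document_type, confidence) or (None, 0.0) when it
# cannot decide; the pipeline stops at the first sufficiently confident stage.

def classify_document(image, stages, threshold=0.9):
    """Run cheap stages first and stop as soon as one is confident enough."""
    for stage in stages:  # stages are ordered fastest to slowest
        doc_type, confidence = stage(image)
        if doc_type is not None and confidence >= threshold:
            return doc_type
    return "unknown"  # no stage reached the confidence threshold

# Toy stages standing in for lock-icon detection, MICR analysis, and OCR.
def lock_icon_stage(image):
    return ("personal_check", 0.95) if image.get("lock_icon") else (None, 0.0)

def micr_stage(image):
    return ("business_check", 0.92) if image.get("micr_line") else (None, 0.0)

def ocr_stage(image):
    # Slowest stage: only reached when the cheaper stages were undecided.
    return ("money_order", 0.91) if "money order" in image.get("text", "") else (None, 0.0)

STAGES = [lock_icon_stage, micr_stage, ocr_stage]
```

Because the stages are ordered by cost, a document carrying an unambiguous cheap feature never pays for the expensive OCR stage.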
In one embodiment, the payment document is a type of check, wherein the check is classified as a personal check, business check, traveler's check, cashier's check, rebate check, money order, gift certificate, or IRS refund check. By classifying the type of check, stored information about particular check types can be used to more accurately capture the content of the check and verify the accuracy of the information on the check. The identified check type may also tell a third party recipient of the check, such as a bank or business, whether the payment document is authentic or fraudulent and whether further processing of the payment document is needed before it can be accepted for payment. Types of checks may be more generally sorted into categories that may be customized by the recipient, such as a group of "regular" check types which are known to be authentic or a group of "irregular" check types which are known to be fraudulent or which are difficult to process to accurately extract their content. In one embodiment, the system may determine that the payment document is simply not a check at all if the features, geometric characteristics and keywords do not provide any indications that the document is a payment document.
These systems and methods may be implemented on a mobile device or at a central server, and either can access stored information about different payment documents and their formats from databases connected with the mobile device or central server over a network. Users are also able to customize the classification algorithm, not only to set the thresholds for the various feature-matching steps, but also to define customized classifiers which group certain sets of payment documents into one category and the rest into another based on the user's preferences or experiences in handling checks.
Once the payment document has been classified, content can be extracted based on customized payment document information about where certain fields are located on the form and the expected values of some of those fields. The content of the payment document is more quickly and accurately captured, allowing third parties who need to process the payment document—such as a bank receiving the check for a mobile deposit process—to have greater confidence in the ability to extract the correct information.
I. System Overview
II. Features of Payment Documents
In one embodiment, the system uses prior knowledge about each of the payment document types and includes this in a payment document model stored in the feature knowledge database 106. Each payment document type is characterized according to known information about: 1) the presence of certain image features (such as barcodes, code-lines, check numbers, "lock" icons, etc.); 2) the values of some geometric characteristics of the payment document (such as width, height, aspect ratio, etc.); 3) the presence of certain keywords (such as "Money Order," "Pay to the order of" or "Cashier's Check"); and 4) cross-validation between certain features which should have the same values (such as a check number in an upper right corner and the check number included in a MICR line), etc.
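For illustration only, the payment document model described above can be sketched as a small data structure; the field names, aspect-ratio ranges, and keywords below are hypothetical examples rather than actual entries of the feature knowledge database 106.

```python
# Hypothetical payment-document model entries of the kind a feature
# knowledge database might hold; all values are illustrative.
PAYMENT_DOCUMENT_MODELS = {
    "personal_check": {
        "features": {"lock_icon", "micr_line", "check_number"},
        "aspect_ratio_range": (2.1, 2.4),   # width / height
        "keywords": {"pay to the order of"},
        # pairs of fields expected to carry the same value (cross-validation)
        "cross_checks": [("check_number", "micr_check_number")],
    },
    "money_order": {
        "features": {"barcode", "micr_line"},
        "aspect_ratio_range": (2.3, 2.8),
        "keywords": {"money order"},
        "cross_checks": [],
    },
}

def model_supports(doc_type, feature):
    """Return True if the model for doc_type lists the given image feature."""
    return feature in PAYMENT_DOCUMENT_MODELS[doc_type]["features"]
```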
III. Probabilistic Feature Identification
Each feature is described in probabilistic terms, as in the probability that the payment document is a certain type of payment document based on the presence of a certain feature. For example, a “lock” icon (see
The probabilistic approach helps to build a payment classifier which is both fast and robust to the many distortions that occur in mobile images.
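As an illustration of this probabilistic description, the following sketch applies Bayes' rule to a single feature observation; the priors and per-type likelihoods are invented for the example and are not values from the embodiments.

```python
# Minimal sketch of the probabilistic interpretation: given prior
# probabilities over document types and per-type likelihoods of observing
# a feature, compute the posterior P(type | feature) by Bayes' rule.
def posterior(priors, likelihoods, feature_present=True):
    joint = {}
    for doc_type, prior in priors.items():
        p = likelihoods[doc_type] if feature_present else 1.0 - likelihoods[doc_type]
        joint[doc_type] = prior * p
    total = sum(joint.values())
    return {t: v / total for t, v in joint.items()}  # normalize

# Illustrative numbers: a "lock" icon is common on personal checks,
# rare on money orders.
priors = {"personal_check": 0.5, "money_order": 0.5}
lock_likelihood = {"personal_check": 0.9, "money_order": 0.1}
post = posterior(priors, lock_likelihood)
```

With these toy numbers, observing the "lock" icon shifts the belief strongly toward the personal check hypothesis.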
IV. Hierarchical Identification
The system uses document identification logic based on a hierarchical classification algorithm, according to the flowchart in
In order to finish the classification process, the system uses known probabilistic distributions of features within the payment document types and between the payment document types. At each decision point (see steps 500, 700 and 900 in
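A decision point of this kind can be sketched as a threshold-and-margin test over the current hypothesis probabilities; the threshold and margin values below are illustrative assumptions.

```python
# Sketch of a decision point: accept the top hypothesis only if its
# posterior probability clears a threshold and leads the runner-up by
# a margin; otherwise fall through to the next, slower stage.
def decide(posteriors, threshold=0.8, margin=0.3):
    ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (_, p2) = ranked[0], ranked[1]
    if p1 >= threshold and p1 - p2 >= margin:
        return best
    return None  # undecided: continue with further analysis
```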
As illustrated in
The cropped, corrected and binarized image may then be provided for the hierarchical classification steps, as will be described in detail immediately below.
Image Preprocessing
The hierarchical classification begins with an image pre-processing step 100. In this step, the image is processed in order to extract all features needed to classify the payment document type. Examples of features that may be identified are listed below, although one of skill in the art will appreciate that other features may be identified and used to determine payment document types. Image pre-processing may include the following operations:
In step 200, geometric image characteristics of the payment document may be determined, such as a width, height and aspect ratio. This step also provides the ability to estimate possible positions of the "lock" icon 904 and barcode line 602 by grouping detected connected components. The system uses the size, alignment and adjacency characteristics to detect the "lock" icon and barcode. For example, a barcode contains at least N connected components which are closer than Y pixels to each other and have a height-to-width ratio of at least Z.
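The barcode grouping heuristic just described can be sketched as follows, with the parameters N, Y and Z made concrete as hypothetical values (min_count, max_gap, min_ratio); the specific numbers are illustrative only.

```python
# Illustrative sketch of the barcode heuristic: group connected
# components that are tall-and-narrow and close together.
def looks_like_barcode(components, min_count=8, max_gap=10, min_ratio=3.0):
    """components: list of (x, width, height) tuples for connected
    components, assumed sorted left to right."""
    # keep only tall, narrow components (candidate bars)
    bars = [(x, w, h) for x, w, h in components if h / w >= min_ratio]
    if len(bars) < min_count:
        return False
    # every adjacent pair of bars must be within max_gap pixels
    for (x1, w1, _), (x2, _, _) in zip(bars, bars[1:]):
        if x2 - (x1 + w1) > max_gap:
            return False
    return True
```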
Pre-Classification Using Geometric Features
In step 300, the geometrical features identified in step 200 are analyzed in order to narrow down the subset of possible document types. This step filters and eliminates inapplicable document types using geometrical characteristics of the given image. For instance, business checks are wider than personal checks; personal checks differ from most irregular check types by aspect ratio, etc.
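A sketch of this geometric pre-classification follows, assuming illustrative aspect-ratio ranges for each document type; the ranges are invented for the example.

```python
# Sketch of pre-classification by geometry: eliminate document types
# whose known aspect-ratio range does not contain the measured ratio.
# The ranges below are illustrative, not taken from the embodiments.
KNOWN_ASPECT_RATIOS = {
    "personal_check": (2.1, 2.4),
    "business_check": (2.4, 3.0),
    "money_order": (2.3, 2.8),
}

def plausible_types(width, height, ranges=KNOWN_ASPECT_RATIOS):
    """Return the subset of document types consistent with the measured
    width-to-height ratio of the cropped document image."""
    ratio = width / height
    return {t for t, (lo, hi) in ranges.items() if lo <= ratio <= hi}
```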
Lock Icon Detection—First Decision Point
In step 400, the system looks for the "lock" icon 904 on the image in the areas where it is typically found on a personal check, cashier's check and money order. The results of the geometric feature extraction 200 help identify the possible search areas for the lock icon 904. The "lock" detection is based on a technique designed for "lock" classification. The lock detection technique is a symbol recognition technique similar to the one used to recognize MICR characters. It is based on a comparison of connected components (potential locations of the "lock") against several hundred template images (from a training database). The result of the lock icon detection 400 includes a set of found alternative lock positions within the image. These positions can then be compared with the known positions of the lock icon in different payment document types in an attempt to determine the payment document type at this stage in the process.
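A toy sketch of template-based symbol recognition of this general kind, using a simple pixel-agreement score over same-sized binary bitmaps; a real detector would normalize scale and compare against hundreds of training templates, so everything below is an assumption for illustration.

```python
# Toy sketch of template matching like the "lock" detector: score a
# candidate bitmap against stored templates and accept the best match
# if it is similar enough.
def match_score(candidate, template):
    """Fraction of pixels that agree between two same-sized binary bitmaps."""
    total = len(candidate) * len(candidate[0])
    agree = sum(c == t
                for row_c, row_t in zip(candidate, template)
                for c, t in zip(row_c, row_t))
    return agree / total

def is_lock(candidate, templates, threshold=0.85):
    """Accept if any template matches the candidate closely enough."""
    return max(match_score(candidate, t) for t in templates) >= threshold
```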
At step 500, if the “lock” icon has been found in step 400 above and its position has been matched with a payment document type or types, the identification process is completed. Otherwise, further analysis is needed. In one embodiment, a standard Bayesian model is used for developing the probabilistic approach.
MICR Line Analysis—Second Decision Point
In step 600, if the document was not identified by the lock icon detection decision point 500, the process tries to complete identification by finding unique values and/or positions of the MICR line components 1102 that may be present in each document type, as illustrated in
In step 700, if the MICR line analysis provides a sufficient probability that the payment document is a specific type, the classification process is complete. If not, more detailed analysis is needed.
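For illustration, a simplified MICR-structure hint of the kind this decision point might use; the substitute delimiter characters and the heuristic itself (an auxiliary on-us field preceding the routing field suggesting a business check) are stated here as assumptions, not as the embodiments' actual rules.

```python
import re

# Hypothetical sketch of using MICR-line structure to hint at a check
# type. In this simplified encoding, 'T' brackets the 9-digit routing
# number and 'U' brackets on-us fields.
def micr_hint(micr_line):
    """Return a coarse document-type hint from a simplified MICR string."""
    # a leading auxiliary on-us field (check number) before the routing
    # field is treated here as a business-check indicator
    if re.match(r"^U\d+U\s*T\d{9}T", micr_line):
        return "business_check"
    if re.search(r"T\d{9}T", micr_line):
        return "personal_check"
    return None  # no recognizable MICR structure
```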
Store Rebate Detection—Third Decision Point
In step 800, the payment document is analyzed to find specific linear patterns 1202 indicated by the dashed line along a bottom edge of the rebate check 1200 in
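In a binarized image, a dashed cut line of this kind appears along a pixel row as alternating ink and gap runs of roughly regular length; a toy detector might be sketched as follows, with all parameters chosen purely for illustration.

```python
# Toy sketch of linear-pattern (dashed line) detection along one row of
# binary pixels: require many alternating runs whose lengths stay close
# to the median run length.
def looks_like_dashed_line(row, min_runs=6, tolerance=2):
    """row: sequence of 0/1 pixels from a binarized image."""
    # run-length encode the row
    runs, count = [], 1
    for a, b in zip(row, row[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    if len(runs) < min_runs:
        return False  # too few transitions to be a dashed pattern
    med = sorted(runs)[len(runs) // 2]
    return all(abs(r - med) <= tolerance for r in runs)
```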
In step 900, if one or more of the aforementioned patterns were found and geometrical features confirm the patterns, the classification process is complete. If not, more detailed analysis is needed.
Keyword Classification
If none of the above steps have been able to classify the payment document, keyword classification is performed in step 1000 by performing optical character recognition (OCR) on the payment document and analyzing words and phrases to determine a payment document type. This is the slowest classification step since it uses OCR, which is why it is performed only at the end of the classification process and only if other classification steps have failed. Keyword classification looks for predefined keywords that are known to appear on different payment document types, such as the phrase "money order" 1402 on the money order 1400 in
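The keyword lookup can be sketched as a simple phrase search over the OCR output; the keyword-to-type mapping below is illustrative rather than the embodiments' actual keyword set.

```python
# Sketch of the OCR keyword fallback: scan recognized text for
# predefined phrases known to appear on specific document types.
KEYWORDS = {
    "money order": "money_order",
    "cashier's check": "cashiers_check",
    "pay to the order of": "check",
}

def classify_by_keywords(ocr_text, keywords=KEYWORDS):
    """Return the first document type whose keyword phrase appears in
    the (case-normalized) OCR text, or None if nothing matches."""
    text = ocr_text.lower()
    for phrase, doc_type in keywords.items():
        if phrase in text:
            return doc_type
    return None
```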
If the keyword classification step 1000 fails to identify the document, the classification algorithm outputs a full set of hypotheses for each of the decision points listed above which correspond to the previous classification stages. The output of the hypotheses may provide insight into the type of payment document based on the determined probabilities in relation to each other.
V. User-Specific Configurations
The classification algorithm is adjustable and flexible enough to match a user's preferences, such as which document types should be distinguished and which should be placed in the same terminal category, according to the document models above. For instance, the user can configure the algorithm to classify an IRS refund check as a "Regular Check," or configure it to ignore differences between Cashier's Checks and Traveler's Checks by placing them into the same output category.
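Such a user configuration can be sketched as a mapping from fine-grained classifier outputs to user-defined terminal categories; the mapping below is hypothetical.

```python
# Sketch of user-configurable output categories: a recipient maps
# fine-grained classifier results into its own terminal categories.
USER_CATEGORY_MAP = {
    "irs_refund_check": "regular_check",
    "cashiers_check": "official_check",
    "travelers_check": "official_check",  # merged with cashier's checks
}

def terminal_category(doc_type, mapping=USER_CATEGORY_MAP):
    """Fall back to the raw type when the user has no custom mapping."""
    return mapping.get(doc_type, doc_type)
```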
In one embodiment, differentiating between types of checks provides additional information to a bank being asked to deposit the check as to the potential risk of the check being fraudulent. The risk of fraudulent checks varies depending on the type of check, and so a bank can set up customized rules for each type of check that it may receive during a mobile deposit process. If the check type is one that is commonly associated with fraud, the bank may immediately deny the request, or request additional processing of the check image before deciding whether to deposit the check. The user is sent a message if the deposit is denied, and may be provided with instructions to manually deposit the check so that the bank can review the original check.
VI. Computer-Implemented Embodiment
The mobile device 4400 also includes an image capture component 4430, such as a digital camera. According to some embodiments, the mobile device 4400 is a mobile phone, a smart phone, or a PDA, and the image capture component 4430 is an integrated digital camera that can include various features, such as auto-focus and/or optical and/or digital zoom. In an embodiment, the image capture component 4430 can capture image data and store the data in memory 4220 and/or data storage 4440 of the mobile device 4400.
Wireless interface 4450 of the mobile device can be used to send and/or receive data across a wireless network. For example, the wireless network can be a wireless LAN, a mobile phone carrier's network, and/or other types of wireless network.
I/O interface 4460 can also be included in the mobile device to allow the mobile device to exchange data with peripherals such as a personal computer system. For example, the mobile device might include a USB interface that allows the mobile device to be connected to a USB port of a personal computer system in order to transfer information such as contact information to and from the mobile device and/or to transfer image data captured by the image capture component 4430 to the personal computer system.
As used herein, the term module might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present invention. As used herein, a module might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, logical components, software routines or other mechanisms might be implemented to make up a module. In implementation, the various modules described herein might be implemented as discrete modules or the functions and features described can be shared in part or in total among one or more modules. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared modules in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate modules, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.
Where components or modules of processes used in conjunction with the operations described herein are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing module capable of carrying out the functionality described with respect thereto. One such example computing module is shown in
Various embodiments are described in terms of this example computing module 1900. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computing modules or architectures.
Referring now to
Computing module 1900 might also include one or more memory modules, referred to as main memory 1908. For example, random access memory (RAM) or other dynamic memory might be used for storing information and instructions to be executed by processor 1904. Main memory 1908 might also be used for storing temporary variables or other intermediate information during execution of instructions by processor 1904. Computing module 1900 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 1902 for storing static information and instructions for processor 1904.
The computing module 1900 might also include one or more various forms of information storage mechanism 1910, which might include, for example, a media drive 1912 and a storage unit interface 1920. The media drive 1912 might include a drive or other mechanism to support fixed or removable storage media 1914, such as a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive. Accordingly, storage media 1914 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 1912. As these examples illustrate, the storage media 1914 can include a computer usable storage medium having stored therein particular computer software or data.
In alternative embodiments, information storage mechanism 1910 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing module 1900. Such instrumentalities might include, for example, a fixed or removable storage unit 1922 and an interface 1920. Examples of such storage units 1922 and interfaces 1920 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 1922 and interfaces 1920 that allow software and data to be transferred from the storage unit 1922 to computing module 1900.
Computing module 1900 might also include a communications interface 1924. Communications interface 1924 might be used to allow software and data to be transferred between computing module 1900 and external devices. Examples of communications interface 1924 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 1924 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1924. These signals might be provided to communications interface 1924 via a channel 1928. This channel 1928 might carry signals and might be implemented using a wired or wireless communication medium. These signals can deliver the software and data from memory or other storage medium in one computing system to memory or other storage medium in computing system 1900. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
In this document, the terms "computer program medium" and "computer usable medium" are used to generally refer to physical storage media such as, for example, memory 1908, storage unit 1922, and media 1914. These and other various forms of computer program media or computer usable media may be involved in storing one or more sequences of one or more instructions to a processing device for execution. Such instructions, embodied on the medium, are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing module 1900 to perform features or functions of the present invention as discussed herein.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not of limitation. The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. In addition, the invention is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated example. One of ordinary skill in the art would also understand how alternative functional, logical or physical partitioning and configurations could be utilized to implement the desired features of the present invention.
Furthermore, although items, elements or components of the invention may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
This application is a continuation of U.S. application Ser. No. 15/077,801, filed on Mar. 22, 2016, which is a continuation of U.S. application Ser. No. 13/844,748, filed on Mar. 15, 2013, now U.S. Pat. No. 9,292,737, which is a continuation-in-part of U.S. patent application Ser. No. 12/778,943, filed on May 12, 2010, now U.S. Pat. No. 8,582,862, which is a continuation-in-part of U.S. patent application Ser. No. 12/717,080, filed on Mar. 3, 2010, now U.S. Pat. No. 7,778,457, which is a continuation-in-part of U.S. patent application Ser. No. 12/346,071, filed on Dec. 30, 2008, now U.S. Pat. No. 7,953,268, which is a continuation-in-part of U.S. patent application Ser. No. 12/346,091, filed on Dec. 30, 2008, now U.S. Pat. No. 7,949,176, all of which claim priority to U.S. Provisional Application No. 61/022,279, filed Jan. 18, 2008; and all of which are hereby incorporated by reference in their entirety.
Published as US 2020/0019770 A1 (Jan. 2020, US).