Systems and methods for managing and detecting fraud in image databases used with identification documents

Information

  • Patent Grant
  • Patent Number
    7,804,982
  • Date Filed
    Wednesday, November 26, 2003
  • Date Issued
    Tuesday, September 28, 2010
Abstract
We provide a system for issuing identification documents to a plurality of individuals, comprising a first database, a first server, and a workstation. The first database stores a plurality of digitized images, each digitized image comprising a biometric image of an individual seeking an identification document. The first server is in operable communication with the first database and is programmed to send, at a predetermined time, one or more digitized images from the first database to a biometric recognition system, the biometric recognition system in operable communication with a second database, the second database containing biometric templates associated with individuals whose images have been previously captured, and to receive from the biometric recognition system, for each digitized image sent, an indicator, based on the biometric searching of the second database, as to whether the second database contains any images of individuals who may at least partially resemble the digitized image that was sent. The workstation is in operable communication with the first server and is configured to permit a user to review the indicator and to make a determination as to whether the individual is authorized to be issued an identification document or to keep an identification document in the individual's possession.
Description
TECHNICAL FIELD

Embodiments of the invention generally relate to devices, systems, and methods for data processing. More particularly, embodiments of the invention relate to systems and methods for improving the searching accuracy, use, and management of databases containing biometric information relating to individuals and for improving the accuracy of facial recognition processing.


BACKGROUND AND SUMMARY OF THE INVENTION

Identity theft and other related fraudulent identification activity has the potential to become a major problem to the economy, safety and stability of the United States. Identity theft refers to one individual fraudulently assuming the identity of another and may include activities such as opening credit cards in the name of another, obtaining loans, obtaining identification documents (e.g., drivers licenses, passports), obtaining entitlement/benefits cards (e.g., Social Security Cards, welfare cards, etc.), and the like. Often, these activities are performed without the consent or knowledge of the victim. Other fraudulent identification activity can also be problematic. An individual may, for example, use his or her “real” identity to obtain a document, such as an identification card, but may further obtain additional identification cards using one or more identification credentials that belong to another and/or one or more fictitious identification credentials.


For example, to obtain an identification document such as a drivers license, a given individual may attempt to obtain multiple drivers licenses under different identities, may attempt to obtain a drivers license using false (e.g., “made up”) identification information, or may attempt to assume the identity of another to obtain a drivers license in that individual's name. In addition, individuals may alter legitimate identification documents to contain fraudulent information and may create wholly false identification documents that purport to be genuine documents.


It is extremely time consuming and expensive to apprehend and prosecute those responsible for identity theft and identity fraud. Thus, to help reduce identity theft and identity fraud, it may be advisable for issuers of identity-bearing documents to take affirmative preventative steps at the time of issuance of the identity documents. Because of the large number of documents that are issued every day and the large history of already issued documents, however, it is difficult for individual employees of the issuers to conduct effective searches at the time such documents are issued (or re-issued). In addition, the complexity and amount of the information stored often precludes manual searching, at least as a starting point.


For example, many government and business organizations, such as motor vehicle registries, store large databases of information about individuals. A motor vehicle registry database record may include information such as an operator's name, address, birth date, height, weight, and the like. Some motor vehicle registry databases also include images of the operator, such as a facial image and/or a fingerprint image. Unless the database is fairly small, it is nearly impossible to search manually.


In some databases, part or all of the database record is digitally encoded, which helps to make it possible to perform automated searches on the database. The databases themselves, however, can still be so large that automated searching is time consuming and error prone. For example, some states do not delete “old” images taken of a given individual. Each database record might be associated with a plurality of images. Thus, a database that contains records for 10 million individuals could, in fact, contain 50-100 million images. If a given motor vehicle registry uses both facial and fingerprint images, the total number of images may double again.


One promising search technique that can be used to perform automated searching of information and which may help to reduce identity theft and identity fraud is the use of biometric authentication and/or identification systems. Biometrics refers to technologies that can be used to measure and analyze physiological characteristics, such as eye retinas and irises, facial patterns, hand geometry, and fingerprints. Some biometrics technologies involve measurement and analysis of behavioral characteristics, such as voice patterns, signatures, and typing patterns. Because biometrics, especially physiologically based technologies, measure qualities that an individual usually cannot change, they can be especially effective for authentication and identification purposes.


Commercial manufacturers, such as Identix Corp. of Minnetonka, Minn., manufacture biometric recognition systems that can be adapted to compare two images. For example, the IDENTIX FACE IT product may be used to compare two facial images to determine whether the two images belong to the same person. Other commercial products are available that can compare two fingerprint images and determine whether the two images belong to the same person. For example, U.S. Pat. Nos. 6,072,894, 6,111,517, 6,185,316, 5,224,173, 5,450,504, and 5,991,429 further describe various types of biometrics systems, including facial recognition systems and fingerprint recognition systems.


Some face recognition applications use a camera to capture one or more successive images of a subject, locate the subject's face in each image, and match the subject's face to one or more faces stored in a database of stored images. In some face recognition applications, the facial images in the database of stored images are stored as processed entities called templates. A template represents the preprocessing of an image (e.g., a facial image) into a predetermined machine-readable format. Encoding the image as a template helps enable automated comparison between images. For example, in a given application, a video camera can capture the image of a given subject, the application can perform the processing necessary to convert the image to a template, and the template of the given subject can then be compared to one or more stored templates in a database to determine if the template of the subject can be matched to one or more stored templates.
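To make the capture-template-compare loop concrete, the following minimal sketch shows one way it can be expressed. It is illustrative only: the feature extraction here is a toy stand-in (a normalized pixel vector and cosine similarity), since commercial engines such as the LFA-based systems discussed below use proprietary template formats and scoring.

```python
import numpy as np

def make_template(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for vendor preprocessing: reduce an image to a fixed-length,
    normalized feature vector (real engines use proprietary representations)."""
    vec = image.astype(float).ravel()
    vec = vec[:64] if vec.size >= 64 else np.pad(vec, (0, 64 - vec.size))
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates, used as a toy match score."""
    return float(np.dot(a, b))

def find_matches(probe, enrolled, threshold=0.9):
    """One-to-many search: compare the probe template to each enrolled template
    and keep the indices and scores of likely matches."""
    return [(idx, s) for idx, t in enumerate(enrolled)
            if (s := similarity(probe, t)) >= threshold]
```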


Facial recognition has been deployed for applications such as surveillance and identity verification. In surveillance, for example, a given facial recognition system may be used to capture multiple images of a subject, create one or more templates based on these captured images, and compare the templates to a relatively limited “watch list” (e.g., set of stored templates), to determine if the subject's template matches any of the stored templates. In surveillance systems, outside human intervention may be needed at the time of enrolling the initial image for storage in the database, to evaluate each subject's image as it is captured and to assist the image capture process. Outside human intervention also may be needed during surveillance if a “match” is found between the template of a subject being screened and one or more of the stored templates.


For example, some driver license systems include a large number of single images of individuals collected by so-called “capture stations.” When configured for face recognition applications, these identification systems build template databases by processing each of the individual images collected at a capture station to provide a face recognition template, thereby creating a template for every individual. A typical driver license system can include millions of images. The face recognition template databases are used to detect individuals attempting to obtain multiple licenses. Another application provides law enforcement agencies with an investigative tool. The recognition database can discover other identities of a known criminal or may help identify an unidentified decedent.


One difficulty in adapting commercial biometric systems to databases such as motor vehicle databases is the very large number of images that may be stored in the database. Some types of biometrics technologies can produce high numbers of false positives (falsely identifying a match between a first image and one or more other images) when the database size is very large. High numbers of false positives are sometimes seen with large databases of facial images that are used with facial recognition systems.


Another potential problem with searching large databases of biometric images can be the processing delays that can accompany so-called “one-to-many” searches (comparing a probe image, i.e., an “unidentified” image such as a face or finger image presented for authentication, to a large database of previously enrolled “known” images). In addition, the “many” part of “one-to-many” can vary depending on the application and/or the biometric being used. In some types of applications (such as surveillance, terrorist watch lists, and authentication for admission to a facility), the “many” can be as few as a few hundred individuals, whereas for other applications (e.g., issuance of security documents, such as passports, drivers licenses, etc.), the “many” can be many millions of images.


Because many known facial recognition systems are used for surveillance applications, these facial recognition systems are optimized to work under surveillance conditions, including working with databases having relatively small numbers of image templates (e.g., fewer than 1 million records). In addition, some facial recognition applications are able to process multiple image captures of the same subject and, as noted previously, may have an outside operator assist in the initial capture of the images.


For some applications, however, this optimization of the facial recognition system may be less than ideal. For example, systems such as drivers license databases may contain far more images than a given surveillance application. The databases of drivers license images maintained by the Department of Motor Vehicles (DMV) in some states range from a few million records to more than 80 million records. In some instances, the DMV databases grow larger every day, because at least some DMVs do not delete any customer images, even those of deceased license holders. Another possible complication with some DMV databases is that, during the license renewal cycle, duplicate images may be created of the same person. Even so, it may be rare to see more than two images of the same person in a DMV database.


Still another complication with applying facial recognition processing to at least some DMV databases is the lack of operator intervention during image capture. It is time consuming, expensive, and often impossible to re-enroll the “legacy” database of DMV images so that the images are optimized for automated facial recognition.


To address at least some of these and other problems, we have developed systems and methods for performing automated biometric searching of databases of captured images, where the databases can be very large in size. These systems and methods can be used during the creation and maintenance of the database as well as during the search of the database. In one embodiment, we provide a browser-based system with an operator-friendly interface that enables the operator to search a database of captured images for matches to a given so-called “probe” image. When matches are detected, if the operator determines that fraud or other issues may exist, the operator can add an indicator to the image and/or the image file so that future investigators are aware that issues may exist with the image. In an application such as a DMV, the DMV can use the systems and methods of the invention to prevent the issuance of a driver's license if fraud is detected and/or to track down whether a driver's license already issued was issued based on fraudulent information and/or images.


At least some systems and methods of the embodiments of the invention described herein also may help to detect patterns of fraud, geographically locate entities (including individuals, organizations, terrorist groups, etc.) committing and/or attempting to commit fraud, and help to prevent fraud.


In one embodiment, the invention employs a facial recognition technique that is based on local feature analysis (LFA), such as is provided in the Identix FACE IT product.


In one embodiment, we provide a system for issuing identification documents to a plurality of individuals, comprising a first database, a first server, and a workstation. The first database stores a plurality of digitized images, each digitized image comprising a biometric image of an individual seeking an identification document. The first server is in operable communication with the first database and is programmed to send, at a predetermined time, one or more digitized images from the first database to a biometric recognition system, the biometric recognition system in operable communication with a second database, the second database containing biometric templates associated with individuals whose images have been previously captured, and to receive from the biometric recognition system, for each digitized image sent, an indicator, based on the biometric searching of the second database, as to whether the second database contains any images of individuals who may at least partially resemble the digitized image that was sent. The workstation is in operable communication with the first server and is configured to permit a user to review the indicator and to make a determination as to whether the individual is authorized to be issued an identification document or to keep an identification document in the individual's possession.


The digitized image can, for example, be at least one of a facial, fingerprint, thumbprint, and iris image. The identification document can, for example, be a driver's license.


The biometric recognition system can be programmed to create a biometric template based on the digitized image received from the first server and to use that biometric template to search the second database. The first server can be programmed to create a biometric template and provide that template to the biometric recognition system.
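As one concrete (and purely illustrative) reading of this division of labor, the first server's role can be sketched as a scheduled batch job. Every name below (Candidate, ImageRecord, RecognitionSystem, nightly_screening) is a hypothetical stand-in for vendor- and issuer-specific components, not an actual API:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    person_id: str   # identity already enrolled in the second database
    score: float     # similarity score returned by the recognition system

@dataclass
class ImageRecord:
    applicant_id: str
    image: bytes     # digitized biometric image from the first database

class RecognitionSystem:
    """Stand-in for the biometric recognition system: search() would normally
    template the image and run a one-to-many search of the second database."""
    def __init__(self, canned_results):
        self._results = canned_results  # applicant_id -> list[Candidate]

    def search(self, record: ImageRecord) -> list:
        return self._results.get(record.applicant_id, [])

def nightly_screening(new_records, recognition, threshold=0.8):
    """Batch job run at a predetermined time: for each image sent, record an
    indicator (candidates above threshold) for later review at a workstation."""
    indicators = {}
    for record in new_records:
        matches = [c for c in recognition.search(record) if c.score >= threshold]
        indicators[record.applicant_id] = matches
    return indicators
```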


The indicator can comprise a user interface, the user interface retrieving from a third database at least a portion of the images of individuals that the biometric recognition system has determined may at least partially resemble the digitized image that was sent. In at least one embodiment, the user interface is operable to permit a user to perform at least one of the following functions (a sketch of these operator actions follows the list):


visually compare the digitized image that was sent directly to an image of an individual whose data was returned in the indicator by the facial recognition search system;


visually compare demographic information associated with the individual whose digitized image was sent directly to demographic information of an individual whose data was returned in the indicator by the facial recognition search system;


visually compare the other biometric information associated with the digitized image that was sent to other biometric information associated with an individual whose data was returned in the indicator by the facial recognition search system;


create a new biometric template of the digitized image that was sent and conduct a new search of the biometric recognition search system using the new biometric template;


perform a re-alignment of the digitized image and use the re-alignment data to conduct a new search of the biometric recognition search system;


capture a new image of the individual whose digitized image was sent;


add a notification to a record associated with at least one of the digitized image that was sent and the data that was returned in the indicator by the biometric recognition search system, the notification providing an alert that there may be a problem with the record; and


select at least one of the images of an individual whose data was returned in the indicator by the facial recognition search system and send that image to the biometric recognition search system to run a search on that image.
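One plausible way to organize these functions at the workstation is as a set of named review actions, each wired to a user-interface control. This is only a sketch; the action names paraphrase the list above and the dispatcher is hypothetical:

```python
from enum import Enum, auto

class ReviewAction(Enum):
    COMPARE_PORTRAITS = auto()         # side-by-side image comparison
    COMPARE_DEMOGRAPHICS = auto()      # name, address, birth date, etc.
    COMPARE_OTHER_BIOMETRICS = auto()  # e.g., fingerprints, signatures
    RETEMPLATE_AND_SEARCH = auto()     # build a new template, search again
    REALIGN_AND_SEARCH = auto()        # re-align the image, search again
    RECAPTURE_IMAGE = auto()           # capture a fresh image of the applicant
    FLAG_RECORD = auto()               # attach a problem notification to a record
    SEARCH_ON_CANDIDATE = auto()       # use a returned image as a new probe

def dispatch(action: ReviewAction, probe_id, candidate_id=None):
    """Hypothetical dispatcher: route the operator's choice to the subsystem
    (comparison viewer, search engine, capture station) that implements it."""
    print(f"{action.name}: probe={probe_id}, candidate={candidate_id}")
```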


In one embodiment, we provide a method for screening a plurality of applicants, each seeking to be issued an identification document, comprising the following steps (a simplified sketch in code follows the list):


(a) storing a digitized image of each applicant in a first database;


(b) providing a predetermined portion of the images in the first database, at a predetermined time, to a biometric searching system, the biometric searching system comparing the digitized image of each applicant to a plurality of previously captured images of individuals stored in a third database and returning to a second database, for each applicant, a result containing a list of matches to each image, each match having a score;


(c) selecting from the second database those results having a score above a predetermined threshold and providing the results to a fourth database;


(d) providing the selected results to an investigator; and


(e) displaying to the investigator, upon request, information about each selected result.
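A minimal sketch of steps (a) through (e), with plain dictionaries standing in for the first through fourth databases and a callable standing in for the biometric searching system; all names are illustrative assumptions, not the patented implementation:

```python
def screen_applicants(first_db, biometric_search, threshold):
    """(b) search each stored applicant image; (c) keep results scoring above
    the threshold; (d)/(e) return the selected results for investigator review."""
    second_db = {}  # raw search results, per applicant
    fourth_db = {}  # results selected for investigation
    for applicant_id, image in first_db.items():   # (a) images already stored
        matches = biometric_search(image)          # (b) list of (match_id, score)
        second_db[applicant_id] = matches
        selected = [m for m in matches if m[1] >= threshold]
        if selected:
            fourth_db[applicant_id] = selected     # (c) above-threshold results only
    return fourth_db                               # (d)/(e) displayed on request
```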


The method can also include the steps of receiving a notification from the investigator relating to at least one of the results, and adding a notification to a record associated with the corresponding result, the notification remaining in the record until removed by an authorized individual and being visible to other investigators until removed.
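The notification can be as simple as a persistent flag on the stored record. A minimal sketch, assuming a dictionary-backed record store and an explicit list of authorized users:

```python
def add_notification(records, result_id, investigator, note):
    """Attach a notification that remains on the record, visible to all
    investigators, until an authorized individual removes it."""
    record = records.setdefault(result_id, {})
    record.setdefault("notifications", []).append({"by": investigator, "note": note})

def remove_notification(records, result_id, index, user, authorized_users):
    """Only authorized individuals may clear a notification."""
    if user not in authorized_users:
        raise PermissionError("user is not authorized to remove notifications")
    records[result_id]["notifications"].pop(index)
```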


In another embodiment we provide a computer implemented method of creating a biometric template of an individual for facial recognition processing, comprising:


sending an image of the individual to a plurality of eye finding modules, each eye finding module configured to find the location of at least one eye of the individual in the image;


receiving locations of the at least one eye from each respective eye finding module in the plurality of eye finding modules; and


applying at least one rule to the received locations to determine the eye location to be used for creation of the biometric template.


In one embodiment, the predetermined rule can comprise one or more of the following rules (illustrative implementations of a few of them follow the list):


selecting as an eye location the average of the received eye locations;


selecting as an eye location a weighted average of the received eye locations;


selecting as an eye location the location that is closest to the eye location determined by a majority of the plurality of eye finding modules;


removing from the received eye locations any eye locations that are outside of a predetermined boundary;


selecting as an eye location an eye location that is the center of gravity of the received eye locations;


removing from the received eye locations any eye locations that do not fit known eye characteristics; and


removing from the received eye locations any eye locations that are not within a predetermined distance or slope from the eye locations of the other eye of the individual.
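By way of illustration only, a few of these rules reduce to short geometric computations on the module outputs; each location below is an assumed (x, y) pixel coordinate reported by one eye finding module for the same eye:

```python
import math

def average_location(locations):
    """Rule: select as the eye location the average of the received locations."""
    n = len(locations)
    return (sum(x for x, _ in locations) / n, sum(y for _, y in locations) / n)

def weighted_average_location(locations, weights):
    """Rule: select a weighted average, e.g., weighting trusted modules higher."""
    total = sum(weights)
    return (sum(w * x for (x, _), w in zip(locations, weights)) / total,
            sum(w * y for (_, y), w in zip(locations, weights)) / total)

def drop_outliers(locations, bound):
    """Rule: remove any location farther than `bound` pixels from the centroid
    of all received locations (a simple stand-in for a predetermined boundary)."""
    cx, cy = average_location(locations)
    return [(x, y) for x, y in locations if math.hypot(x - cx, y - cy) <= bound]
```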


In one embodiment, we provide a method of searching a database of biometric templates, each biometric template associated with a corresponding facial image of an individual, for an image of an individual who substantially resembles an individual in a probe image, comprising (a sketch in code follows the steps):


receiving a probe image of an individual at a client;


determining the eye locations of the individual;


applying a predetermined rule to determine if the eye locations are acceptable;


if the eye locations are acceptable, creating a probe biometric template using the eye locations; and


searching a database of biometric templates using the probe biometric template.
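Read as code, the method rejects questionable eye locations before any template is built, so a poorly aligned probe never reaches the search engine. In the sketch below, every callable (find_eyes, eyes_acceptable, make_template, search_db) is a hypothetical stand-in:

```python
def search_with_probe(image, find_eyes, eyes_acceptable, make_template, search_db):
    """Probe search with an up-front acceptability check on eye locations."""
    eyes = find_eyes(image)                      # e.g., ((lx, ly), (rx, ry))
    if not eyes_acceptable(eyes):                # predetermined rule, e.g., distance/slope
        raise ValueError("eye locations rejected; re-align or recapture the image")
    probe_template = make_template(image, eyes)  # alignment driven by eye coordinates
    return search_db(probe_template)             # ranked list of candidate matches
```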


In another embodiment we provide a system for investigating an image of an individual, comprising:


a first database, the first database storing at least one digitized image, the digitized image comprising a biometric image of an individual seeking an identification document;


a second database, the second database storing a plurality of digitized images of individuals whose images have been previously captured;


means for determining whether any of the images in the second database match any of the images in the first database to a predetermined degree and for providing such matches to an investigator, the means for determining being in operable communication with the first and second databases; and


means for allowing the investigator to compare information associated with the first digitized image with information associated with any of the matches, the means for allowing being in operable communication with at least a third database capable of providing the information associated with the first digitized image and information associated with any of the matches.


These and other embodiments of the invention are described below.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following description and the drawings in which:



FIG. 1 is a block diagram of a computer system usable in the embodiments of the invention described herein;



FIG. 2 is a block diagram of a system for biometric searching in accordance with a first embodiment of the invention;



FIG. 3 is a block diagram of a system for biometric searching in accordance with a second embodiment of the invention;



FIG. 4 is a block diagram of a system for biometric searching in accordance with a third embodiment of the invention;



FIG. 5A is a diagram illustrating a first process for communication between a photo verification system and a facial recognition search system, in accordance with one embodiment of the invention;



FIG. 5B is a diagram illustrating a second process for communication between a photo verification system and a facial recognition search system, in accordance with one embodiment of the invention;



FIG. 6A is a flow chart of a first method for alignment of an image, in accordance with one embodiment of the invention;



FIG. 6B is a flow chart of a second method for alignment of an image, in accordance with one embodiment of the invention;



FIG. 7 is a flow chart of a method for conducting biometric searches at a biometric search engine, in accordance with one embodiment of the invention;



FIG. 8 is a flow chart of a method for conducting biometric searches at a user workstation, in accordance with one embodiment of the invention;



FIG. 9 is an illustrative example of a screen shot of a user interface showing an image that can be used as a probe image, in accordance with one embodiment of the invention;



FIG. 10 is an illustrative example of a screen shot of a probe image verification list, in accordance with one embodiment of the invention;



FIGS. 11A-11B are illustrative examples of probe images and returned results, respectively, for the system of any one of FIGS. 2-4;



FIGS. 12A-12B are illustrative examples of a side by side comparison of a probe image and a retrieved image, respectively, including demographic and biometric data, for the system of any one of FIGS. 2-4;



FIG. 13 is an illustrative example of a screen shot of a candidate list screen presented to a user, in accordance with one embodiment of the invention;



FIG. 14 is an illustrative example of a screen shot of a side by side comparison showing portraits and limited demographic information, in accordance with one embodiment of the invention;



FIG. 15 is an illustrative example of a screen shot of a side by side comparison screen showing fingerprints and signatures, in accordance with one embodiment of the invention; and



FIG. 16 is a flow chart of a process for a biometric search that includes evaluation of eye locations, in accordance with one embodiment of the invention.





The drawings are not necessarily to scale; emphasis instead is generally placed upon illustrating the principles of the invention. In addition, in the drawings, like reference numbers indicate like elements. Further, in the figures of this application, in some instances, a plurality of system elements or method steps may be shown as illustrative of a particular system element, and a single system element or method step may be shown as illustrative of a plurality of a particular system element or method step. It should be understood that showing a plurality of a particular element or step is not intended to imply that a system or method implemented in accordance with the invention must comprise more than one of that element or step, nor is it intended by illustrating a single element or step that the invention is limited to embodiments having only a single one of that respective element or step. In addition, the total number of elements or steps shown for a particular system element or method is not intended to be limiting; those skilled in the art will recognize that the number of a particular system element or method step can, in some instances, be selected to accommodate particular user needs.


DETAILED DESCRIPTION

Before describing various embodiments of the invention in detail, it is helpful to define some terms used herein and explain further some of the environments and applications in which at least some embodiments of the invention can be used.


Identification Documents


In the foregoing discussion, the use of the word “ID document” or “identification document” or “security document” is broadly defined and intended to include all types of ID documents, including (but not limited to) documents, magnetic disks, credit cards, bank cards, phone cards, stored value cards, prepaid cards, smart cards (e.g., cards that include one or more semiconductor chips, such as memory devices, microprocessors, and microcontrollers), contact cards, contactless cards, proximity cards (e.g., radio frequency identification (RFID) cards), passports, driver's licenses, network access cards, employee badges, debit cards, security cards, visas, immigration documentation, national ID cards, citizenship cards, social security cards, security badges, certificates, identification cards or documents, voter registration and/or identification cards, police ID cards, border crossing cards, security clearance badges and cards, legal instruments, gun permits, badges, gift certificates or cards, membership cards or badges, and tags. Also, the terms “document,” “card,” “badge” and “documentation” are used interchangeably throughout this patent application. In at least some aspects of the invention, ID document can include any item of value (e.g., currency, bank notes, and checks) where authenticity of the item is important and/or where counterfeiting or fraud is an issue.


In addition, in the foregoing discussion, “identification” at least refers to the use of an ID document to provide identification and/or authentication of a user and/or the ID document itself. For example, in a conventional driver's license, one or more portrait images on the card are intended to show a likeness of the authorized holder of the card. For purposes of identification, at least one portrait on the card (regardless of whether or not the portrait is visible to a human eye without appropriate stimulation) preferably shows an “identification quality” likeness of the holder such that someone viewing the card can determine with reasonable confidence whether the holder of the card actually is the person whose image is on the card. “Identification quality” images, in at least one embodiment of the invention, include covert images that, when viewed using the proper facilitator (e.g., an appropriate light or temperature source), provide a discernable image that is usable for identification or authentication purposes.


Further, in at least some embodiments, “identification” and “authentication” are intended to include (in addition to the conventional meanings of these words) functions such as recognition, information, decoration, and any other purpose for which an indicia can be placed upon an article in the article's raw, partially prepared, or final state. Also, instead of ID documents, the inventive techniques can be employed with product tags, product packaging, business cards, bags, charts, maps, labels, etc., particularly those items including marking of a laminate or over-laminate structure. The term ID document thus is broadly defined herein to include these tags, labels, packaging, cards, etc.


Many types of identification cards and documents, such as driving licenses, national or government identification cards, bank cards, credit cards, controlled access cards and smart cards, carry thereon certain items of information which relate to the identity of the bearer. Examples of such information include name, address, birth date, signature and photographic image; the cards or documents may in addition carry other variant data (i.e., data specific to a particular card or document, for example an employee number) and invariant data (i.e., data common to a large number of cards, for example the name of an employer). All of the cards described above will hereinafter be generically referred to as “ID documents”.


As those skilled in the art know, ID documents such as drivers licenses can contain information such as a photographic image, a bar code (which may contain information specific to the person whose image appears in the photographic image, and/or information that is the same from ID document to ID document), variable personal information, such as an address, signature, and/or birthdate, biometric information associated with the person whose image appears in the photographic image (e.g., a fingerprint), a magnetic stripe (which, for example, can be on the a side of the ID document that is opposite the side with the photographic image), and various security features, such as a security pattern (for example, a printed pattern comprising a tightly printed pattern of finely divided printed and unprinted areas in close proximity to each other, such as a fine-line printed security pattern as is used in the printing of banknote paper, stock certificates, and the like).


An exemplary ID document can comprise a core layer (which can be pre-printed), such as a light-colored, opaque material (e.g., TESLIN (available from PPG Industries) or polyvinyl chloride (PVC) material). The core is laminated with a transparent material, such as clear PVC to form a so-called “card blank”. Information, such as variable personal information (e.g., photographic information), is printed on the card blank using a method such as Dye Diffusion Thermal Transfer (“D2T2”) printing (described further below and also described in commonly assigned U.S. Pat. No. 6,066,594, the contents of which are hereby incorporated by reference). The information can, for example, comprise an indicium or indicia, such as the invariant or nonvarying information common to a large number of identification documents, for example the name and logo of the organization issuing the documents. The information may be formed by any known process capable of forming the indicium on the specific core material used.


To protect the information that is printed, an additional layer of transparent overlaminate can be coupled to the card blank and printed information, as is known by those skilled in the art. Illustrative examples of usable materials for overlaminates include biaxially oriented polyester or other optically clear durable plastic film.


In the production of images useful in the field of identification documentation, it may be desirable to embody into a document (such as an ID card, drivers license, passport or the like) data or indicia representative of the document issuer (e.g., an official seal, or the name or mark of a company or educational institution) and data or indicia representative of the document bearer (e.g., a photographic likeness, name or address). Typically, a pattern, logo or other distinctive marking representative of the document issuer will serve as a means of verifying the authenticity, genuineness or valid issuance of the document. A photographic likeness or other data or indicia personal to the bearer will validate the right of access to certain facilities or the prior authorization to engage in commercial transactions and activities.


Identification documents, such as ID cards, having printed background security patterns, designs or logos and identification data personal to the card bearer have been known and are described, for example, in U.S. Pat. No. 3,758,970, issued Sep. 18, 1973 to M. Annenberg; in Great Britain Pat. No. 1,472,581, issued to G.A.O. Gesellschaft Fur Automation Und Organisation mbH, published Mar. 10, 1976; in International Patent Application PCT/GB82/00150, published Nov. 25, 1982 as Publication No. WO 82/04149; in U.S. Pat. No. 4,653,775, issued Mar. 31, 1987 to T. Raphael, et al.; in U.S. Pat. No. 4,738,949, issued Apr. 19, 1988 to G. S. Sethi, et al.; and in U.S. Pat. No. 5,261,987, issued Nov. 16, 1993 to J. W. Luening, et al.


Commercial systems for issuing ID documents are of two main types, namely so-called “central” issue (CI) and so-called “on-the-spot” or “over-the-counter” (OTC) issue. CI type ID documents are not immediately provided to the bearer, but are later issued to the bearer from a central location. For example, in one type of CI environment, a bearer reports to a document station where data is collected, the data is forwarded to a central location where the card is produced, and the card is forwarded to the bearer, often by mail. In contrast to CI identification documents, OTC identification documents are issued immediately to a bearer who is present at a document-issuing station. An OTC assembling process provides an ID document “on-the-spot.” (An illustrative example of an OTC assembling process is a Department of Motor Vehicles (“DMV”) setting where a driver's license is issued to a person, on the spot, after a successful exam.) Further details relating to various methods for printing and production of identification documents can be found in the following commonly assigned patent applications, all of which are hereby incorporated by reference:

    • Identification Card Printed With Jet Inks and Systems and Methods of Making Same (application Ser. No. 10/289,962, Inventors Robert Jones, Dennis Mailloux, and Daoshen Bi, filed Nov. 6, 2002);
    • Laser Engraving Methods and Compositions, and Articles Having Laser Engraving Thereon (application Ser. No. 10/326,886, filed Dec. 20, 2002—Inventors Brian Labrec and Robert Jones);
    • Multiple Image Security Features for Identification Documents and Methods of Making Same (application Ser. No. 10/325,434, filed Dec. 18, 2002—Inventors Brian Labrec, Joseph Anderson, Robert Jones, and Danielle Batey); and
    • Identification Card Printer-Assembler for Over the Counter Card Issuing (application Ser. No. 10/436,729, filed May 12, 2003—Inventors Dennis Mailloux, Robert Jones, and Daoshen Bi).


Biometrics


Biometrics relates generally to the science of measuring and analyzing biological characteristics, especially those of humans. One important application of biometrics is its use in security-related applications, such as identification of an individual or authentication of an individual's identity by using measurable, individualized, and often unique, human physiological characteristics. Examples of human physiological characteristics that can be used as biometric identifiers include (but are not limited to) face, fingerprint (including use for both fingerprint recognition systems and Automated Fingerprint Identification Systems (AFIS)), thumbprint, hand print, iris, retina, hand geometry, finger geometry, thermogram (heat signatures of a given physiological feature, e.g. the face, where the image is captured using a device such as an infrared camera and the heat signature is used to create a biometric template used for matching), hand vein (measuring the differences in subcutaneous features of the hand using infrared imaging), signature, voice, keystroke dynamic, odor, breath, and deoxyribonucleic acid (DNA). We anticipate that any one or more of these biometrics is usable with the embodiments of the invention described herein.


The reader is presumed to be familiar with how each of the biometrics listed above works and how biometric templates are created with each method. We note, however, that embodiments of the invention can utilize many different types of information to create biometric templates. For example, to create face and/or finger templates, information that can be used may include (but is not limited to), law enforcement images (e.g., mug shots, fingerprint exemplars, etc.), print images from any source (e.g., photographs, video stills, etc.), digitized or scanned images, images captured at a capture station, information provided by other databases, and/or sketches (e.g., police sketches).


DETAILED DESCRIPTION OF THE FIGURES

Systems and methods described herein in accordance with the invention can be implemented using any type of general purpose computer system, such as a personal computer (PC), laptop computer, server, workstation, personal digital assistant (PDA), mobile communications device, interconnected group of general purpose computers, and the like, running any one of a variety of operating systems. FIG. 1 is a block diagram of a computer system usable as the workstation 10 in the embodiments described herein.


Referring briefly to FIG. 1, the workstation 10 includes a central processor 12, associated memory 14 for storing programs and/or data, an input/output controller 16, a network interface 18, a display device 20, one or more input devices 22, a fixed or hard disk drive unit 24, a floppy disk drive unit 26, a tape drive unit 28, and a data bus 30 coupling these components to allow communication therebetween.


The central processor 12 can be any type of microprocessor, such as a PENTIUM processor, made by Intel of Santa Clara, Calif. The display device 20 can be any type of display, such as a liquid crystal display (LCD), cathode ray tube display (CRT), light emitting diode (LED) display, and the like, capable of displaying, in whole or in part, the outputs generated in accordance with the systems and methods of the invention. The input device 22 can be any type of device capable of providing the inputs described herein, such as keyboards, numeric keypads, touch screens, pointing devices, switches, styluses, and light pens. The network interface 18 can be any type of device, card, adapter, or connector that provides the computer system 10 with network access to a computer or other device, such as a printer. In one embodiment of the present invention, the network interface 18 enables the workstation 10 to connect to a computer network such as the Internet.


Those skilled in the art will appreciate that computer systems embodying the present invention need not include every element shown in FIG. 1, and that equivalents to each of the elements are intended to be included within the spirit and scope of the invention. For example, the workstation 10 need not include the tape drive 28, and may include other types of drives, such as compact disk read-only memory (CD-ROM) drives. CD-ROM drives can, for example, be used to store some or all of the databases described herein.


In at least one embodiment of the invention, one or more computer programs define the operational capabilities of the workstation 10. These programs can be loaded into the computer system 10 in many ways, such as via the hard disk drive 24, the floppy disk drive 26, the tape drive 28, or the network interface 18. Alternatively, the programs can reside in a permanent memory portion (e.g., a read-only memory (ROM) chip) of the main memory 14. In another embodiment, the workstation 10 can include specially designed, dedicated, hard-wired electronic circuits that perform all functions described herein without the need for instructions from computer programs.


In at least one embodiment of the present invention, the workstation 10 is networked to other devices, such as in a client-server or peer-to-peer system. For example, referring to FIG. 1, the workstation 10 can be networked with an external data system 17. The workstation 10 can, for example, be a client system, a server system, or a peer system. In one embodiment, the invention is implemented at the server side and receives and responds to requests from a client, such as a reader application running on a user computer.


The client can be any entity, such as the workstation 10, or specific components thereof (e.g., terminal, personal computer, mainframe computer, workstation, hand-held device, electronic book, personal digital assistant, peripheral, etc.), or a software program running on a computer directly or indirectly connected or connectable in any known or later-developed manner to any type of computer network, such as the Internet. For example, a representative client is a personal computer that is x86-, PowerPC®-, PENTIUM-, or RISC-based, that includes an operating system such as IBM®, LINUX, OS/2® or any member of the MICROSOFT WINDOWS family (made by Microsoft Corporation of Redmond, Wash.) and that includes a Web browser, such as MICROSOFT INTERNET EXPLORER or NETSCAPE NAVIGATOR (made by Netscape Corporation, Mountain View, Calif.), having a Java Virtual Machine (JVM) and support for application plug-ins or helper applications. A client may also be a notebook computer, a handheld computing device (e.g., a PDA), an Internet appliance, a telephone, an electronic reader device, or any other such device connectable to the computer network.


The server can be any entity, such as the workstation 10, a computer platform, an adjunct to a computer or platform, or any component thereof, such as a program that can respond to requests from a client. Of course, a “client” can be broadly construed to mean one who requests or gets the file, and “server” can be broadly construed to be the entity that sends or forwards the file. The server also may include a display supporting a graphical user interface (GUI) for management and administration, and an Application Programming Interface (API) that provides extensions to enable application developers to extend and/or customize the core functionality thereof through software programs including Common Gateway Interface (CGI) programs, plug-ins, servlets, active server pages, server side include (SSI) functions and the like.


In addition, software embodying at least some aspects of the invention, in one embodiment, resides in an application running on the workstation 10. In at least one embodiment, the present invention is embodied in a computer-readable program medium usable with the general purpose computer system 10. In at least one embodiment, the present invention is embodied in a data structure stored on a computer or a computer-readable program medium. In addition, in one embodiment, an embodiment of the invention is embodied in a transmission medium, such as one or more carrier wave signals transmitted between the computer system 10 and another entity, such as another computer system, a server, a wireless network, etc. The invention also, in at least one embodiment, is embodied in an application programming interface (API) or a user interface. In addition, the invention, in at least one embodiment, can be embodied in a data structure.


Note that the system 10 of FIG. 1 is not limited for use with workstations. Some or all of the system 10 can, of course, be used for various types of processing taking place in the systems described herein, as will be appreciated by those skilled in the art. Further, in at least some embodiments, a plurality of systems 10 can be arranged as a parallel computing system.


As used herein, the Internet refers at least to the worldwide collection of networks and gateways that use the transmission control protocol/Internet protocol (TCP/IP) suite of protocols to communicate with one another. The World Wide Web (WWW) refers at least to the total set of inter-linked hypertext documents residing on hypertext transport protocol (HTTP) servers all around the world. As used herein, the WWW also refers at least to documents accessed on secure servers, such as HTTP servers (HTTPS), which provide for encryption and transmission through a secure port. WWW documents, which may be referred to herein as web pages, can, for example, be written in hypertext markup language (HTML). As used herein, the term “web site” refers at least to one or more related HTML documents and associated files, scripts, and databases that may be presented by an HTTP or HTTPS server on the WWW. The term “web browser” refers at least to software that lets a user view HTML documents and access files and software related to those documents.


It should be appreciated that any one or more of the elements illustrated in the following embodiments may be located remotely from any or all of the other elements, and that any of the elements of a given embodiment may, in fact, be part of another system altogether. For example, a database accessed by one or more of the elements of a given embodiment may be part of a database maintained by an organization entirely separate from the system of the invention.


In addition, it should be understood that, for the following embodiments, although they are described in connection with a facial recognition system, the invention is not so limited. Many aspects of the invention are usable with other biometric technologies, including but not limited to fingerprint recognition systems, iris recognition systems, hand geometry systems, signature recognition systems, etc. We have found that at least some embodiments of the invention are especially advantageous for biometric applications that utilize information that can be captured in an image.


First Illustrative Embodiment


FIG. 2 is an illustrative block diagram of a system implemented in accordance with a first embodiment of the invention. Referring to FIG. 2, the following elements are provided.



FIG. 2 is a block diagram of a first system 5 for biometric searching, in accordance with one embodiment of the invention. The system 5 includes a workstation 10 (such as the one described more fully in FIG. 1) which is capable of receiving inputs from a number of sources, including image and/or data capture systems 15, external data systems 17 (such as remote clients in communication with the workstation 10 and/or which conduct searches using the workstation 10, and data acquisition devices such as scanners, palm top computers, etc.), manual inputs 19 (which can be provided locally or remotely via virtually any input device, such as a keyboard, mouse, scanner, etc.), and operator inputs 21 (e.g., voice commands, selections from a menu, etc.). In one embodiment, the workstation is programmed to convert captured images and/or received data into templates usable by the facial recognition search system 25 (described further below). However, those skilled in the art will appreciate that the function of converting captured data into biometric templates can, of course, be performed by a separate system (not shown). Biometric templates, after being created at (or otherwise inputted to) the workstation 10, can be added to the database of enrolled biometric templates 35.


The system 5 also includes a biometric search system which in this embodiment includes a facial recognition search system 25. Of course, it will be appreciated that instead of a face recognition search system 25 as the biometric search system, the system 5 of FIG. 2 could instead use a search system that utilized a different biometric, e.g., fingerprint, iris, palm print, hand geometry, etc. In addition, we expressly contemplate that hybrid biometrics systems (systems that use more than one biometric) are also usable as a biometric search system; one such system is described in our patent application entitled “Systems and Methods for Recognition of Individuals Using Multiple Biometric Searches”, Ser. No. 10/686,005, filed Oct. 14, 2003, which is incorporated herein by reference. We also expressly contemplate that certain graphics processing programs, such as CyberExtruder, can be adapted to work with this and other embodiments of the invention.


Referring again to FIG. 2, the facial recognition search system 25 includes a search engine capable of searching the database of previously enrolled biometric templates 35. In one embodiment, the facial recognition search system 25 is a facial recognition system employing a local features analysis (LFA) methodology, such as the FACE-IT facial recognition system available from Identix of Minnesota. Other facial recognition systems available from other vendors (e.g., Cognitec FaceVACS, Acsys, Imagis, Viisage, Eyematic, VisionSphere, DreamMirth, C-VIS, etc.) are, of course, usable with at least some embodiments of the invention, as those skilled in the art will appreciate.


The system 5 also includes a biometric template database 35 comprising previously enrolled biometric templates (e.g., templates adapted to work with the facial recognition search system 25) and a demographic database 37 comprising demographic information associated with each respective biometric template in the biometric template database 35. For example, in one embodiment, the biometric template database 35 and demographic database 37 are associated with a plurality of records of individuals who have obtained an identification document (e.g., a driver's license) in a given jurisdiction. Either or both of the biometric template database 35 and demographic database 37 can be part of a database of official records (e.g., a database maintained by an issuer such as a department of state, department of motor vehicles, insurer, employer, etc.).


Either or both of the biometric template database 35 and demographic database 37 also can be linked to (or be part of) databases containing many different types of records, which can enable a user to “mine” the data and link to other information (this may be more advantageous in the web-server embodiment of FIG. 3). For example, an investigator could use selected aspects of an original probe to probe other databases, and/or use the matches as probes for more matches (as described in our patent application entitled “Systems and Methods for Recognition of Individuals Using Multiple Biometric Searches”, Ser. No. 10/686,005, filed Oct. 14, 2003, which is hereby incorporated by reference). The system 5 can use biometrics, such as faces or fingerprints, for the follow-up search, or it can use other data, such as names, addresses, and dates of birth. Effectively, the system 5 turns matches into probes for more matches, and cross-references the results. In addition, the system 5 could search other databases, such as those linked to the individual's social security number or phone number, and cross-reference these results.
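This match-to-probe recursion is essentially a bounded breadth-first expansion over identities. The sketch below is one assumed way to express it; search() is a hypothetical callable that may run a biometric or demographic query, and max_depth bounds the recursion on very large databases:

```python
def expand_probes(seed_id, search, max_depth=2):
    """Treat each returned match as a new probe and cross-reference the results."""
    seen, frontier = {seed_id}, [seed_id]
    for _ in range(max_depth):
        next_frontier = []
        for probe in frontier:
            for match_id in search(probe):   # biometric or demographic follow-up search
                if match_id not in seen:
                    seen.add(match_id)
                    next_frontier.append(match_id)
        frontier = next_frontier
    return seen                              # cluster of cross-referenced identities
```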


Our testing of embodiments of the invention using large (10 million or more images) databases of images has found that such recursive database searching and/or database mining has the potential for significant fraud detection. For example, we have found multiple people sharing one license number. We have also found that people tend to get several fake licenses in a few months. These are patterns that such further analysis can detect and track. In at least some embodiments of the invention, we link this type of data to itself (and even to data about the employees of the issuing authority) to help determine and/or investigate collusion, such as that by criminals, operators and/or administrators of the issuing authority, information technology (IT) operators, consultants, etc.


Referring again to FIG. 2, in some embodiments, the system 5 further includes a search results database 23 for storing the results of searches conducted by the workstation 10. As those skilled in the art will appreciate, the search results database 23, biometric template database 35 and the demographic database 37 can be implemented using any type of memory device capable of storing data records or electrical signals representative of data and permitting the data records or electrical signals to be retrieved, including but not limited to semiconductor memory devices (e.g., RAM, ROM, EEPROM, EPROM, PROM, etc), flash memory, memory “sticks” (e.g., those manufactured by Sony), mass storage devices (e.g., optical disks, tapes, disks), floppy disk, a digital versatile disk (DVD), a compact disk (CD), RAID type memory systems, etc.


In at least some embodiments, the system 5 includes an image/data capture system 15, which can be any system capable of acquiring images and/or data that can be used (whether directly or after conversion to a template) by a biometric system. The particular elements of the image/data capture system 15 will, of course, be dependent on the particular biometrics used. For example, signature pads may be used to acquire signatures of individuals, camera systems may be used to acquire particular types of images (e.g., facial images, iris images), retinal scanners may be used to acquire retinal scans, fingerprint scanning and capture devices may be used to capture fingerprint images, IR cameras can acquire thermogram images, etc. Those skilled in the art will readily understand what particular pieces of equipment may be required to capture or otherwise acquire a given piece of data or a given image.


In an advantageous embodiment, the image/data capture system 15 can be implemented to automatically locate and capture biometric information in an image that it receives. For example, in one embodiment of the invention that implements a face recognition biometric, we utilize proprietary Find-A-Face™ software available from the assignee of the present invention (Digimarc Corporation of Burlington, Mass.). Find-A-Face™ software has the intelligence to automatically (without the need for any operator intervention) do the following (a simplified, non-proprietary sketch follows the list):


(i) follow a multitude of instructions and extensive decision and judgment logic to reliably perform the difficult task of locating human faces (with their many variations) within captured digital data (a digital picture);


(ii) once the particular human face is found within the captured digital data, evaluate multiple aspects of the found human face in the image;


(iii) determine, based upon this face location and evaluation work, how the system should position the human face in the center of the digital image, adjust the gamma level of the image, and provide contrast, color correction and color calibration and other related adjustments and enhancements to the image; and


(iv) repeatedly and reliably implement these and other functions for the relatively large volume of image captures associated with producing a large volume of identification documents.
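For illustration only, and not as a description of the proprietary Find-A-Face™ internals, the centering and gamma-adjustment functions in (iii) might look roughly like the following, assuming the face has already been located as a bounding box:

```python
import numpy as np

def center_and_normalize(image, face_box, out_size=128, gamma=1.1):
    """Crop the located face (i)/(ii), apply a simple gamma adjustment as a
    stand-in for the contrast/color corrections in (iii), and return a
    fixed-size face chip suitable for repeatable batch processing (iv)."""
    x, y, w, h = face_box
    face = image[y:y + h, x:x + w].astype(float) / 255.0
    corrected = np.clip(face ** (1.0 / gamma), 0.0, 1.0)  # brighten midtones
    # Nearest-neighbor resize to a square chip, without extra dependencies.
    rows = np.arange(out_size) * face.shape[0] // out_size
    cols = np.arange(out_size) * face.shape[1] // out_size
    chip = corrected[np.ix_(rows, cols)]
    return (chip * 255).astype(np.uint8)
```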


In another advantageous embodiment, we have found that biometric templates created based on the data captured using the image/data capture system 15 can be further improved by utilizing various methods to improve the finding of particular biometric features, such as eyes, which can further be used to improve the performance of biometric searches that use facial recognition. For example, in one embodiment we use systems and methods described in commonly assigned provisional patent applications No. 60/480,257, entitled “Systems and Methods for Detecting Skin, Eye Region, and Pupils” and/or “Detecting Skin, Eye Region, and Pupils in the Presence of Eyeglasses” (Application No. 60/514,395, Inventor Kyungtae Hwang), filed Oct. 23, 2003, both of which are hereby incorporated by reference. In addition, as described further in this application (in connection with FIG. 16), in at least some embodiments we implement a system that improves facial recognition by improving the eye coordinate locations used by the templates.


The systems and methods described in this patent application are, in one embodiment, implemented using a computer, such as the workstation 10 of FIG. 1.


Referring again to FIG. 2, in at least some embodiments the workstation 10 can be in operable communication with an ID document production system 39, which can, for example, include a printer controller 27 that controls the printing of ID documents by an ID document printing system 29. The ID document production system 39 can, for example, be a CI or OTC type document production system (as described previously and also as described in commonly assigned U.S. patent application Ser. No. 10/325,434, entitled “Multiple Image Security Features for Identification Documents and Methods of Making Same”, which is incorporated herein by reference). In at least some embodiments, the workstation 10 communicates with the ID document production system 39 to control whether or not a given ID document will be created (for issuance to an individual) based on the results of biometric searching.


Note that FIG. 2 is a version of the invention implemented without use of a web server, whereas FIG. 3 (described further herein) is generally similar, but includes a web server as part of the interface between the rest of the system and the biometric searching subsystem. Those skilled in the art will appreciate that systems can, of course, be implemented that operate in some modes using a web server, and in some modes not using a web server.


Second Illustrative Embodiment


FIG. 3 is a block diagram of a system 50 for biometric searching in accordance with a second embodiment of the invention. Note that in the following discussion, all references to particular brands and/or suppliers (e.g., Digimarc, Visionics) are provided by way of illustration and example only and are not limiting. Those skilled in the art can appreciate that other products from other suppliers having equivalent functionality and/or features can, of course, be substituted.


System Overview


In this embodiment, images in digital form are captured periodically (e.g., daily) by an issuing party, such as a DMV. The captured digital images are enrolled in a specialized Identification Fraud Prevention Program (IDFPP) facial recognition database which makes it possible to search the database and find matches using only pictures. Enrollment is a numerical process which reduces facial characteristics to a series of numbers called a template. The IDFPP manages the periodic (e.g., daily) batches of images which are enrolled and searched and also manages the presentation of results in a form convenient to the user.


The production of a Driver's License (DL) or Identification (ID) document requires many processes involving various people and systems. The following summary (presented in the context of the issuance of a driver's license, for illustrative purposes only) summarizes some of the steps of such a process. Basically, the production of a DL/ID consists of a session that begins with the applicant present and ends when the applicant leaves (with or without a document). In one example, the session starts with an applicant being greeted at the Intake station 62. This greeting process accumulates information on the applicant, and the DMV mainframe 64 participates in the accumulation of this information. Subsequently, the DMV mainframe 64 issues a request to the image capture station 66. This request causes the image capture station 66 to collect the relevant data (images, demographic data, etc.) of the applicant and to produce the appropriate document on a special printer which prints the DL/ID documents. The printer can be present at the DMV (e.g., a so-called "over-the-counter" (OTC) system), or can be remotely located for later issuance to the applicant (e.g., a so-called "central issue" (CI) system), or some combination of the two can be used. The central image server ("CIS") 58 participates in the collection of image and demographic data, as needed. The CIS 58 also receives any new images which may be captured during a given session.


In one embodiment, the DMV mainframe 64 communicates only once with the image capture station 66. This communication is one-way, from the mainframe 64 to the capture station 66, and takes the form of a print request stream containing the request and certain relevant data required by the capture station 66 to produce the required document. Capture stations can comprise many different elements, but in at least one embodiment a capture station consists of a workstation (e.g., as in workstation 10 of FIG. 1), camera tower, signature capture device, and (if applicable) connections to a DL/ID printer and/or a paper printer (such as for printing so-called "temporary" ID documents). Images captured by the capture station 66 are "uploaded" to the CIS 58 over a computer network such as the DMV network. Although not illustrated in FIG. 3, a capture station can be located remotely and communicate images to the CIS 58 over the world wide web, via a wireless data link, etc.


In the embodiment of FIG. 3, two general methods are used to help detect fraud. The first is a physical method, and the second is an investigative method. Physical verification involves features on the ID document itself, such as overt and/or covert security features (including but not limited to ultraviolet ink, optically variable ink, microprinting, holograms, etc., as are well understood by those skilled in the art). These features help to provide verification that the document was produced by an authorized issuer and not produced in a fraudulent manner.


Investigative methods utilize processes, such as software, to assist in the biometric determination of fraud. Specifically, this method helps to detect whether the same individual is applying for several different identities. This method has two actions associated with it:


1. Selection of a list of “candidate images” (in the stored database of images) which might match the face of the applicant


2. Verification (by visual inspection) as to whether fraud is in fact being committed


The first action (selection) can be based purely in software. Since each person's face remains substantially the same (for example, in the case of driver's licenses, during those ages in which people are allowed to drive automobiles by themselves), a system that can compare the faces of all the people applying for a driver's license to all others who are applying (or have applied in the past) for a driver's license would identify all faces that are the "same". Thus, if a single person keeps applying for driver's licenses under various assumed names, this method would be effective if it is applied consistently.


Many DMVs have a large number of "legacy" images of those who have been issued driver's licenses. For example, a state like Colorado may have approximately 9 million facial images stored over several years of use. The process of manually checking these images against each new applicant would be humanly impossible. Therefore, the IDFPP implemented by FIG. 3 helps to provide a reliable and automated way to check the identity of each new applicant against images that are already stored in the state's legacy of driver license images (note that it is preferable that the legacy images be digitized to facilitate conversion to templates). However, although the system of FIG. 3 can help to select a list of potential candidates, it may not determine decisively that fraud is being attempted. For example, the system of FIG. 3 may bring up an image of a person who is the biological twin of the probe image (a legitimate person who exists and looks exactly like the applicant). Thus, another level of intervention, such as review by a human user, can help to finalize a suspicion of fraud. Thus, the system 50 of FIG. 3 permits a human user to perform such "verification" steps.


The following description illustrates the various software modules, databases and processes needed to implement the Facial Recognition System (FRS) of FIG. 3 as part of the Fraud Prevention Program (IDFPP) of this embodiment of the invention.


The system 50 of FIG. 3 includes a facial recognition search system 52 ("FRS 52"), a web server 54, an investigative workstation 56, a central image server 58 (including a facial recognition interface 60), an intake station 62, a mainframe 64, and a capture station 66. The investigative workstation 56, intake station 62, and mainframe 64 each can be a workstation similar to the workstation 10 of FIG. 1. The capture station 66 can be similar to the image/data capture system 15 of FIG. 2.


Referring again to FIG. 3, the facial recognition (FR) interface 60 is an interface used to provide necessary information for communications between the CIS 58 and the facial recognition search system 52 (“FRS 52”). The details of operation of the FR interface 60 will, of course, depend on the particular facial recognition search engine used, and we anticipate that those skilled in the art will readily understand how to create and/or adapt an interface for communication between any two system elements. We also note that, depending on the system elements, an interface such as the FR interface 60 may or may not be necessary.


The web server 54 can be any type of web server. In one embodiment, the web server 54 is a Covalent Web Server that includes an SQL server, an Identix Enterprise Server, an MSMQ Server, and various Digimarc applications.


The facial recognition search system 52 (“FRS 52”) is a system that stores a “mathematical equivalent” (often called a “Template”) of each digital image that is in the CIS 58. As discussed further herein, the process of selecting a group (if one or more exists) of candidate images which “match” a given probe image, involves searching through all the template images stored by the FRS 52. This is also known as a one-to-many search.
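

By way of illustration only, the following Python sketch shows the general shape of such a one-to-many search. The vector representation of a template and the cosine-similarity scoring used here are illustrative stand-ins for the proprietary templates and matchers of a commercial engine such as the FRS 52, and all names are hypothetical.

    import numpy as np

    def one_to_many_search(probe_template, enrolled_templates, threshold=0.8, max_candidates=10):
        """Compare one probe template against every enrolled template.

        probe_template: 1-D feature vector derived from the probe image.
        enrolled_templates: dict mapping template_id -> 1-D feature vector.
        Returns (template_id, score) pairs sorted by descending score.
        """
        candidates = []
        probe = probe_template / np.linalg.norm(probe_template)
        for template_id, template in enrolled_templates.items():
            # Cosine similarity stands in for a vendor-specific match score.
            score = float(np.dot(probe, template / np.linalg.norm(template)))
            if score >= threshold:
                candidates.append((template_id, score))
        candidates.sort(key=lambda item: item[1], reverse=True)
        return candidates[:max_candidates]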


The central image server 58 (“CIS 58”) is a server that stores all the digital images of all individuals whose image has been captured and who have been issued an identification document over a predetermined time period. The CIS 58 can be a database of images that have been captured at a given location (e.g., a DMV), but is not limited to those kinds of images. For example, the CIS 58 can consist of one or more databases stored by differing entities (e.g., governments, law enforcement agencies, other states, etc.). In this manner, the system of the invention can be used for inter-jurisdictional searching.


In one embodiment, the CIS 58 is a relational database linked to a repository of image files, along with a software module that provides access to them called a Polaroid Central Image Management System (PCIMS) that manages the input and retrieval of the image data. Of course, any system capable of managing the input and retrieval of image data would be usable as an image management system, and the use here of a proprietary PCIMS system is not intended to be limiting. Thus, the CIS 58 stores images and provides for their later retrieval (via the web server 54, intake station 62, and the investigative workstation 56).


In one embodiment the CIS 58 comprises a Sun 450 Solaris system that includes subsystems handling demographics and locations of personal object files (called "poff" files). Poff files have a file format that is designed to encapsulate all the data needed to process an individual ID document, such as an ID card. All the data needed to print and handle the card is included in the file. This permits the file to be shipped as an entity across a network, where it can be printed, displayed, or verified without need for additional information. Although the specific fields and their order in the text area are not specified, there is a provision for a separate block of labels for the fields for display purposes. The format is suitable for encoding on 'smart cards' as well as for transmission and printing of the records. More information about the POFF file format can be found in commonly assigned U.S. patent application Ser. No. 10/325,434, entitled "Multiple Image Security Features For Identification Documents and Methods of Making Same", filed on Dec. 18, 2002 and published as US 2003/0183695 A1 on Oct. 2, 2003. The contents of this application are hereby incorporated by reference. Of course, it is not necessary for the invention that the POFF file format be used. Those skilled in the art will appreciate that many different file formats can be utilized to manage data for printing onto an identification document.


In at least one embodiment, the CIS 58 performs one or more of the following functions:

    • accepts (from the capture station 66) and stores a predetermined portion (e.g., all) of the information for cards which require identification fraud prevention (IDFPP) processing
    • responds to the FRS 52 when required to do so, by providing a predetermined portion (e.g., all) of the images captured during the business day so that these images can be enrolled in the FR database
    • responds to the FRS 52 when required to do so, by providing a predetermined portion (e.g., all) of the probe images captured during a specified business day so that these probe images can be processed by the FRS 52
    • allows for changes in the status of IDFPP-related information for stored images. For example, status designators can be assigned, such as "Pending Investigative Review" and "Void by Enforcement" (these designators are not, of course, limiting). The changes in status may be initiated from the FRS 52 or the Investigative Workstation 56, depending on various functions exercised by the FRS 52 and Investigative Workstation 56
    • operates such that any change to the IDFPP status of a record that causes the record to assume a predetermined status (e.g., "Void by Enforcement") causes a corresponding change to the document status (e.g., the document status is marked as "void")
    • supports identification of approved/suspected ID documents during a nightly (or other periodic) processing phase (in at least one embodiment, records are uploaded in a batch process for further investigation, but that is not limiting—processing can occur as each image is captured and/or enrolled, or at virtually any other time)
    • produces a printed report of the daily expected number of identification documents which require IDFPP processing. If desired, this report can be sorted in any desired manner (e.g., by last name).


Two of the functions of the FRS 52 that pertain to the CIS 58 are enrollment and searching. Enrollment is the addition of facial images to the FRS 52 from the CIS 58, and searching is the matching of a particular image against this image repository on the FRS 52. In one embodiment, the FRS 52 has read and write access to the database of the CIS 58 and read access to its image files. The database of the CIS 58 is used both to initiate the processing of records and to store the results of that processing. The Poff files are used for the images (e.g., portraits) they contain, as it is these portraits that assist during the investigations described herein. The PCIMS of the CIS 58 is arranged so that the records in the database can be marked for enrollment or searching.


Addition of images, in one embodiment, occurs as part of an enrollment process. In one embodiment, images are periodically uploaded from the CIS 58 to the FRS 52. Advantageously, this can occur outside of normal operating hours of the issuing authority. These images are then converted into templates and stored (in a process called “enrollment”) in the FRS 52. After these images are stored, the FRS 52 retrieves all the images which are marked as probe images. Then, each probe is used to perform a one-to-many search for a list of candidates which match that probe. For each probe which actually results in two or more matches (the probe is already in storage and will match itself), the corresponding data in the CIS 58 is marked as “Awaiting Verification”. This concludes the selection operation. In at least one embodiment, all other images (other than the image of the probe itself) can be “candidates” that are searched. In at least one embodiment, the images being searched are classified in advance by some parameter (e.g., race, hair color, specific demographic data associated with the image, etc.) to improve the speed and/or accuracy of searching.
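

By way of example only, the enroll-then-search cycle just described might be sketched as follows in Python. The record fields, the frs object, and its enroll/search methods are hypothetical stand-ins for the FRS 52 and the CIS 58 record structures; the sketch simply illustrates the rule that a probe always matches itself, so two or more matches flag a record for verification.

    def nightly_batch(cis_records, frs):
        """Sketch of the periodic enroll-then-search cycle (all names assumed).

        cis_records: list of dicts with 'id', 'image', 'is_probe', 'status' keys.
        frs: object exposing enroll(id, image) and search(image) -> list of ids.
        """
        # Enrollment: convert the day's images to templates and store them.
        for record in cis_records:
            frs.enroll(record["id"], record["image"])

        # Searching: run each probe one-to-many against the whole database.
        for record in cis_records:
            if not record["is_probe"]:
                continue
            matches = frs.search(record["image"])
            # The probe is already enrolled and will match itself, so two or
            # more matches indicate a potential duplicate identity.
            if len(matches) >= 2:
                record["status"] = "Awaiting Verification"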


An investigator can later retrieve all probe images which have been marked as “Awaiting Verification”. In at least one embodiment, the investigator is provided with a predetermined ordered method for comparing the candidates with the probe image. The investigator will visually (and otherwise) determine if a given probe actually matches the candidates selected by the selection operation. If the investigator concludes that one or more candidate images are indeed the same as the probe image, the investigator will “reject” the probe image and also select and reject one or all of the candidate images. This will cause each image (probe and selected candidates) to be marked as “Void by Enforcement” in the CIS database. In addition, all candidate images which were not rejected by the investigator have the “Awaiting Verification” marking removed from them.


If the investigator concludes that none of the candidate images match the probe image, the investigator will “accept” the candidate image. This will cause the “Awaiting Verification” to be removed from the probe and all its related candidates. This may conclude the investigation/fraud detection/prevention activity, or further action may occur. For example, if an identification document had been issued to a candidate “instantly” (in, for example, an over-the-counter system) and it is later determined that the applicant may be committing fraud, law enforcement officials may be notified. In another example, if the system is a central issue type of system (where the identification document is provided to the applicant at a later time), and the investigation of an applicant raises concerns (applicant is not “accepted”), issuance of the identification document from the central issuing authority may be delayed or prevented until further investigation occurs. Many other outcomes are, of course, possible.


In one embodiment, when the FRS 52 communicates with the CIS 58, it provides transaction data (e.g., data taken when an individual attempts to get an identification document—such as a driver's license—from an issuer—such as a DMV). The transaction data includes FRS 52 specific data. The PCIMS of the CIS 58 receives the data and stores data in the database and image files of the CIS 58. The transaction data includes an indicator signaling whether or not a search should be performed using data (e.g., a portrait) contained in the transaction data, and the PCIMS module of the CIS 58 sets a corresponding indicator in an FRS data table (in the CIS 58) based on the search indicator. Another indicator can also be set to trigger enrollment of the portrait in the FRS 52 database. The FRS 52 can then read these values and process the records accordingly.


Third Illustrative Embodiment


FIG. 4 is a block diagram of a first system for biometric searching 70, in accordance with a third embodiment of the invention. Note that elements of the system of FIG. 4 that have common names and/or functions to the systems of FIGS. 2 and 3 (e.g., "investigator workstation") can, in at least one embodiment, be implemented using the same hardware and/or software described previously, and these elements, in at least some embodiments, operate similarly to their namesakes in FIGS. 2 and 3. Of course, the names provided here are not limiting.


Referring again to FIG. 4, the first system for biometric searching 70 includes a photo validation system (PVS) 91 (also referred to herein as an identification fraud prevention system (IDFPP) 91) and a facial recognition search subsystem 52. The photo validation system 91 includes an investigator workstation 56, an image/person server 72, an image/person database (DB) 74, a data entry workstation 76, a non template face data server 78, a non template face data database 80 (also called a face image database 80), a system administrator workstation 82, an optional adaptable application programming interface (API) 85, and an optional external alignment engine(s) 79. The facial recognition search subsystem 52 includes a message queue server 84, a face search server 86, a face alignment server 88, one or more alignment engines 90, one or more face image data (FID) file handlers 92, one or more search engines 94, and a face template database 96 (also referred to as the FID database 96). Each of these elements is described further below.


The first system for biometric searching 70 provides requisite utilities for system administration and maintenance, which are done via the system administrator workstation 82 (which can, for example, be a workstation similar to workstation 10 of FIG. 1). These utilities are tools, programs, and procedures that system administrators can use to maintain the database, software, and other system components. The utilities are not necessarily directly related to the match-search or the enrollment processes, and the typical operator/investigator need not necessarily be required to know anything about them to use the system effectively. Examples of operations that the System Administrator Workstation can perform include:


Initializing a face image database 80 and/or face template database 96: Process the existing set of face images at the time of initial system install (align all or a predetermined portion of existing face images in the face image database 80). Direct the alignment server 88 to create a face template for each existing face image.


Update face image database 80 and/or face template database 96: Periodically add newly acquired images and direct the search engine 94 to create a record for each new face image. This can typically occur on a daily basis. An illustrative system is capable of acquiring 8000 new images per day, but this is not limiting.


The image/subject database server 72 (also referred to as an Image/Person server 72) is a storage/retrieval system for face images and the related subject data. It is analogous to the CIS 58 (FIG. 3). It accesses a plurality of face images and the corresponding subject data for the face images. In one embodiment, the face images and corresponding subject data are stored in the Image/Person database (DB) 74. The number of face images can steadily increase as images are enrolled to the system, and the face recognition system can incorporate on a regular basis newly added image/subject records and can be designed to scale to a large number of images. For example, in one implementation, we have worked with databases of around 11 million images. We do not, however, limit our invention to image databases of this size, and we have been able to scale various embodiments of our invention to about 20-40 million images. We expressly contemplate that embodiments of our invention can be scaled and adapted to work with databases of images that are as large as desired.


The system of FIG. 4 also includes utilities for system administration and maintenance. These are tools, programs, and procedures that system administrators can use to maintain and update the database, software and other system components. In this embodiment, the system administration utilities are not necessarily directly related to the facial match search, and the typical operator/investigator is not necessarily required to know anything about these utilities to use the system effectively.


A user, such as an investigator/operator, controls the search process through the investigator workstation 56. In this embodiment, the investigator workstation 56 has a graphical user interface (GUI) with the ability to be customized to duplicate the “look and feel” of systems that already exist at customer sites with similar functionality. Advantageously, the investigator workstation is designed to be easy for an operator to use. Illustrative examples of the “look and feel”, as well as the operation, of an exemplary operator workstation and an exemplary user interface are further described in connection with the screen shots of some embodiments of the invention that are provided herein. Note that although only a single investigator workstation 56 is illustrated in FIG. 4, systems implemented in accordance with the invention may contain one or more investigator workstations.


The data entry workstation 76 (which can, for example, be a workstation similar to the workstation 10 of FIG. 1) is used to add, update, and remove non face recognition data in the image/subject (also called image/person) database 74. In this embodiment, the functionality of the data entry workstation 76 is highly dependent on the customers' needs. For example, in one embodiment, the capture of subject images can be integrated in the data entry workstation 76. In one embodiment, the printing of identification documents also can be integrated into the data entry workstation 76 or coupled to the data entry workstation 76 (as in FIG. 2). In addition, we expressly contemplate that the capture station described in commonly assigned U.S. patent application Ser. No. 10/676,362, entitled "All In One Capture station for Creating Identification Documents", filed Sep. 30, 2003, can be used with (or instead of) the data entry workstation 76, and the contents of this patent application are incorporated herein by reference. We also note that although only one data entry workstation is illustrated in FIG. 4, a system implemented in accordance with the invention may contain one or more data entry workstations.


The face alignment server 88 receives alignment requests from the workstation 82 via the message queue server 84, distributes the alignment requests to the alignment engines 90, and returns the alignment results to the requesting workstation 82. Alignment, in the context of facial recognition searching, refers to locating selected elements (e.g., eyes) in an image, so that a corresponding biometric template can be created. Also, the alignment server 88 can read specific alignment requests (see the "AlignOnlyRequest" in FIG. 4) from a face alignment request queue in the message queue server 84. In at least one embodiment, the alignment service is scalable. For example, to be able to serve large numbers of alignments per day, the alignment server can distribute the alignment requests to one or many alignment engine(s) 90. The scaling of the alignment service is, in an illustrative embodiment, designed to correlate to the number of new facial images (e.g., 8000 images) that are acquired in a given time period (e.g., per day). To accommodate the need of investigators for on-the-spot alignment, in at least one embodiment, single alignment requests can be posted with a higher priority, so they get placed at the top of the alignment queue.
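

The priority behavior described above can be modeled, by way of illustration only, with a simple priority queue; the sketch below uses Python's standard library rather than a commercial message queue product (such as MSMQ), and all function names are hypothetical.

    import queue

    # Lower numbers dequeue first: single, investigator-initiated requests
    # jump ahead of the nightly batch requests.
    HIGH_PRIORITY, BATCH_PRIORITY = 0, 1

    align_queue = queue.PriorityQueue()

    def post_batch_alignments(image_ids):
        for image_id in image_ids:
            align_queue.put((BATCH_PRIORITY, image_id))

    def post_single_alignment(image_id):
        # On-the-spot request: placed at the head of the queue.
        align_queue.put((HIGH_PRIORITY, image_id))

    def alignment_server_cycle(alignment_engine):
        # The alignment server reads the request at the top of the queue
        # and hands it to the next available alignment engine.
        _, image_id = align_queue.get()
        return alignment_engine(image_id)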


Note, also, that in at least some embodiments, the PVS 91 optionally can include (or be in communication with) one or more external alignment engines 79, each of which is capable of aligning an image. As will be explained further in connection with FIG. 16, using an external alignment engine 79 can enable the PVS 91 to send images to the facial recognition search system 52 already aligned (e.g., with a set of eye coordinates). As explained further in connection with FIG. 8, in one embodiment, if the facial recognition search system 52 receives an image that is already aligned, it does not itself align the image, but instead uses the alignment information provided to it by the PVS 91 to conduct its searching. In a further embodiment (relating to FIG. 16), the PVS 91 can use one or more external alignment engines 79 (instead of or in addition to the alignment engines 90 of the facial recognition search system 52) to compute sets of eye coordinates, and to apply one or more predetermined rules to determine which eye coordinates are likely to be the most accurate.


The message queue server 84 handles the face align request queue and contains the list of all alignment requests that have not yet been completely serviced. When the alignment server 88 begins its alignment cycle, it reads the alignment request at the top of the face align request queue in the message queue server 84.


The alignment engine 90 receives an alignment request from the alignment server 88, creates the face template and other related data and stores it in the face database 96 (also called the FID Files 96). The alignment engine 90 can perform an automatic alignment procedure and return a success/failure result, or take manual alignment information (see FIG. 6B) to create the face template. In one embodiment, the alignment engine 90 performs only one alignment at a time, and can then return the result to the alignment server 88. When there is a successful alignment, the alignment engine 90 optionally can send the face template and the other alignment information to the face database server 92 (e.g., FID file Handler 92) for storage in the face database (FID Files 96). Two forms of alignment request (automatic and manual) can be sent to the alignment engine 90. FIG. 6A is a flowchart of a method used for an automatic alignment request, and FIG. 6B is a flowchart of a method used for a manual alignment request. Each of these methods is described further below.


The message queue server 84 maintains a request queue for the Face Search and contains the list of all search requests that have not yet been completely serviced. When the face search server 86 begins its search cycle, it can read the search request at the top of the search queue.


As noted above, use of the adaptable API 85 is optional and is not required for all embodiments of the invention. In at least one embodiment, the photo validation system 91 communicates directly with a specific facial recognition search system (the Identix FACEit system) via the message queue server 84, using the Microsoft Message Queue (MSMQ) protocol. This is illustrated in FIG. 4A. Referring to FIG. 5A, the facial recognition system 52 provided by Identix includes a subsystem of Identix Engines 95 (including, for example, the alignment engines 90 and search engines 94 of FIG. 4, along with associated file handlers, etc.), the message queue server 84, and a layer called DBEnterprise 89. DBEnterprise 89 is a layer added on top of the Visionics application to manage queue messages and finalize the search results.


In this embodiment (which includes only the message queue server 84), enroll and identify modules in the photo validation system 91 constantly monitor the information in an SQL tracking database 93. The SQL tracking database 93 is a repository that tracks what information is eventually going to be uploaded to the image repositories (e.g., the CIS 58 (FIG. 3) and/or the image/person database 74 and face images database 80). When a new enroll or identify request becomes available, the SQL tracking database 93 formats an MSMQ message and places it on the request queue of the message queue server 84. DBEnterprise 89 extracts each request message, in turn formats another message, and places it on a queue for one or more of the engines in the Identix Engine(s) 95 (or, optionally, in every engine's queue, in the case of identify requests). The Identix Engine(s) 95 receiving the message then process the request and place the results in the specified response queue on the message queue server 84. Appropriate modules in the photo validation system 91 can extract the responses from these queues and then process the results.


In one embodiment, the photo validation search system 91 includes an adaptable API 85. The adaptable API 85 is an optional feature that can enable the photo validation system 91 to communicate with one or more facial recognition search systems 52, each of which may have different interfaces. With the adaptable API 85, the photo validation system 91 can communicate with facial recognition search systems 52 from different vendors, so that the photo validation system 91 need not be specifically designed and configured to work with a system from just a single vendor. The photo validation system 91 communicates with the facial recognition search system 52 (which can, for example, be a third party system from a company such as Identix or Cognitec), to perform at least two operations for the first system for biometric searching 70:


Enroll

    • Analyzes a facial image and creates a template describing image characteristics such as eye coordinates, facial characteristics, etc.


Identify

    • Searches the database of previously enrolled images and creates a template ID list of possible matches based on the number of matched images and the confidence level of the matched images.


The adaptable API 85, in one embodiment, is configured to work in accordance with a Berkeley Sockets Network (BSNET). (The reader is presumed to be familiar with Berkeley Sockets Networks, and this technology is not explained here). The adaptable API 85 works with the PVS 91 to enroll and identify images using BSNET to interface to an external facial recognition system 52. FIG. 5B is a diagram showing the process flow that occurs between the PVS 91 (client) and facial recognition system 52 (server), in accordance with one embodiment of the invention.


The PVS 91 has a process running on it that periodically monitors predetermined tables in the tracking database 93 for new enrollment and/or identify requests. When a new request becomes available, the information is collected and a call to an extensible markup language (XML) encoder will create a string of tag/data/endtag elements. In at least one embodiment, BioAPI compliant XML format is used to be compliant with standards such as the BioAPI standards. This string is then passed to the server application along with the data size and the actual data in text format.
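

For illustration only, the following Python sketch shows one way such a tag/data/endtag string could be produced; the tag names and field choices here are hypothetical (an actual deployment would use the vendor's or a BioAPI-compliant schema).

    from xml.sax.saxutils import escape

    def encode_request(fields):
        """Encode a request as a flat string of <tag>data</tag> elements.

        fields: dict mapping field name -> value (e.g., image URL, person id).
        """
        body = "".join(f"<{tag}>{escape(str(value))}</{tag}>"
                       for tag, value in fields.items())
        return f"<Request>{body}</Request>"

    # Example: a hypothetical enroll request, sent along with its data size.
    message = encode_request({"RequestType": "Enroll",
                              "PersonID": 12345,
                              "ImageURL": "/images/12345.jpg"})
    payload = f"{len(message)}\n{message}"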


On the server side, the receiving module sends an acknowledgement that a request has been received. A unique request id links each request to a response. A RequestMgr on the server side decodes the XML data buffer and places each token into the proper structure format used by the 3rd party application. The server then spawns a thread and passes the request structure to it. The spawned thread makes a 3rd party API call to process the request. At this point, the thread waits for the response. When a response becomes available, the process collects the data, makes an XML call to format the data, and creates a response message. The response message is then sent to a module called ResponseMgr (which is responsible for processing all requests generated by the server application). The response manager examines the response and, based on the unique id of the original request, processes the results by populating the tracking database records and setting this step's completion flag.


The process flow for this embodiment is shown in FIG. 5B. In at least one embodiment, the enroll and identify processes use polling. In at least one embodiment, the enroll and identify processes can be triggered by virtually any mechanism, such as a database status change, or some other mechanism like launchPad, a 3rd party application.


Enroll Sending Process


In this embodiment, the enroll process is generally (but not always) triggered by changes in database records stored in a location such as the CIS 58 (FIG. 3). When a new license is issued, a record is added to the CIS database, and predetermined tables in the database (in one embodiment, these tables include facial recognition or other biometric information, POFF information, and demographic information) are populated with information related to the new person. In addition, a status flag is set to ready, 'R'. In one embodiment (batch mode), these new records are accumulated until the predetermined time when images are batch processed (e.g., an end-of-day process).


At the predetermined time, on the tracking database 93, a new daily schedule record is created in a predetermined table (e.g., a table called the frDailySchedule table). This is a new batch job. In this embodiment, each batch job can process up to 5000 enroll and/or search requests. This batch job triggers a backend process (which we call frsmain) to check the CIS database for records with the frstatus flag set to ready ‘R’ (in other words, to check for records that are ready to be searched against a set of previously enrolled images, whether enrolled to this system or some other system).


If such records exist (i.e., records ready to be searched), then the backend process reads each record, up to the predetermined maximum set in the batch record (currently 5000), accesses the record's poff file indicated by poff_location, extracts the person's portrait image, sets the frstatus flag to 'R', and populates a queue that we call the frENROLLMENTQ of the tracking database. These new records in the enrollment table have a status flag, 'VisionicsStatus', set to 'N' while frsmain is transferring all batch records. When all batch records are transferred, frsmain sets the 'VisionicsStatus' flag to 'R' for ready. In addition, each record gets a new unique 'PersonID', which is used by the 3rd party application as the template id.


The enroll process polls the frENROLLMENTQ for ready records to be enrolled, or can be triggered by the stored procedure which sets the completion flags for the new records to ready ‘R’.


In one embodiment, the enroll process includes the following steps


i) read each ready record from the enrollment table,


ii) set the ‘VisionicsStatus’ flag to started ‘S’,


iii) get the image URL path from the database record,


iv) fill in the request structure,


v) call the XML encoder to encode the request structure,


vi) call bsnet send request with the above information, and


vii) process the acknowledgment.


The unique request id, which can be the same as the current person id, is a relevant data element in enrolling the image and receiving the results.
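

By way of illustration only, steps (i)-(vii) might look like the following in Python. The enrollment_table and bsnet objects, their methods, and the retry behavior on a failed acknowledgment are all assumptions, and encode_request refers to the XML-encoder sketch given earlier.

    def enroll_sending_process(enrollment_table, bsnet):
        """Sketch of steps (i)-(vii); bsnet stands in for the BSNET client."""
        for record in enrollment_table.ready_records():   # (i) read each ready record
            record["VisionicsStatus"] = "S"                # (ii) mark as started
            image_url = record["image_url"]                # (iii) image URL from the record
            request = {                                    # (iv) fill in the request structure
                "RequestType": "Enroll",
                "RequestID": record["PersonID"],           # request id == person id
                "ImageURL": image_url,
            }
            message = encode_request(request)              # (v) XML-encode the structure
            ack = bsnet.send(message)                      # (vi) send via bsnet
            if not ack.ok:                                 # (vii) process the acknowledgment
                record["VisionicsStatus"] = "R"            # assumed: retry on next pass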


Enroll Receiving Process


This is a function call inside the ResponseMgr. The response manager will accept the arguments passed to it by the RequestMgr. The response message indicates an align, enroll, and/or search request. If it indicates an enroll, the response manager calls the 'pvsProcessEnrollResponse' module. This module reads the response, locates the record having the same unique id, and updates the record information, such as eye coordinates, alignment confidence level, and date/time information. It also sets the 'VisionicsStatus' flag to done 'D', and moveToCIS to ready 'R'. This last flag allows the backend to update the CIS with the new enrollment data.
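

A minimal sketch of that response handling, assuming hypothetical response field names and a lookup method on the enrollment table, might read:

    def pvs_process_enroll_response(response, enrollment_table):
        """Sketch of the enroll-response handling; the flag names follow the
        description above, everything else is an assumption."""
        record = enrollment_table.find(response["RequestID"])  # same unique id
        record["eye_coordinates"] = response["EyeCoordinates"]
        record["alignment_confidence"] = response["Confidence"]
        record["enrolled_at"] = response["Timestamp"]
        record["VisionicsStatus"] = "D"   # done
        record["moveToCIS"] = "R"         # lets the backend update the CIS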


Identify Sending Process


There are multiple ways that a record becomes ready for identification, such as:


End of an enroll process for a batch job,


User initiated identify request, via GUI or programming interface


Through drill down (see our patent application entitled “Systems and Methods for Recognition of Individuals Using Multiple Biometric Searches”, which is referenced and incorporated by reference elsewhere herein), and/or


Through operator using database table frPROBEQ status flags


The identify request, similar to the enrollment request, examines the alignment information for the image being searched. If no alignment information is available, it will make a call to the alignment module. The record will remain unchanged until the alignment information is available.


When all information, including the image, the alignment information, maximum number of matched records, and maximum confidence level is known, a search request is formatted and XML encoded and sent to the server.
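

For illustration, the identify-sending logic might be sketched as follows in Python; the field names, the bsnet client, and the request_alignment callable are hypothetical, and encode_request again refers to the encoder sketch above.

    def send_identify_request(record, bsnet, request_alignment,
                              max_matches=10, max_confidence=1.0):
        """Sketch of the identify-sending step; all names are illustrative."""
        if record.get("eye_coordinates") is None:
            # No alignment information yet: ask the alignment module and
            # leave the record unchanged until the result is available.
            request_alignment(record)
            return None
        request = {
            "RequestType": "Identify",
            "RequestID": record["PersonID"],
            "ImageURL": record["image_url"],
            "EyeCoordinates": record["eye_coordinates"],
            "MaxMatches": max_matches,
            "MaxConfidence": max_confidence,
        }
        return bsnet.send(encode_request(request))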


Identify Receive Process


This is a function call inside the ResponseMgr. The identify response includes a list of matched template IDs and each ID's confidence level. The identify response process makes a database call to get the information for the record being searched. This includes whether this is a single search or a batch search and the number of records requested, along with the maximum threshold value.


A single search occurs when an investigator initiates a search from a list of existing search candidates, uses a unique CIS record id to access the image, searches using an image file, or performs a drill-down search. A batch search is the result of a daily batch enroll-and-search process.


The receive process updates the searched record with the number of matches returned, inserts a new record into the frCANDIDATES table with the record's id (template id) and the confidence level, and sets the candidate flag to ready 'R'. The candidate records will be updated by the backend.


The backend process reads each ready candidate record and, using the record id, extracts CIS information about the person, including the location of the poff, demographic information, portrait image, fingerprint images, and signature image, and updates the tracking database. When all candidates of a searched record are populated, the searched record becomes ready for viewing.
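

The receive-and-populate steps just described might be sketched as follows; the table names follow the description above, while the method names on tracking_db are hypothetical.

    def process_identify_response(response, tracking_db):
        """Sketch of the identify-receive step (all method names assumed)."""
        searched = tracking_db.find_searched_record(response["RequestID"])
        matches = response["Matches"]  # list of (template_id, confidence) pairs
        searched["match_count"] = len(matches)
        for template_id, confidence in matches:
            # One frCANDIDATES row per matched template, flagged ready ('R')
            # so the backend can fill in poff, demographic, and image data.
            tracking_db.insert_candidate(searched_id=searched["PersonID"],
                                         template_id=template_id,
                                         confidence=confidence,
                                         flag="R")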


Ping Process


This module is used to ping the server to make sure the server is up and listening for calls. Each module in this component can make this call to make sure the server is up and running. If the server is not running, a message is logged and the process terminates.
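

A minimal liveness check of this kind, sketched with Python's standard socket library (the host, port, and timeout values are assumptions), might read:

    import socket

    def ping_server(host, port, timeout=5.0):
        """Succeed if the server accepts a TCP connection within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as error:
            # Mirror the behavior described above: log a message and let
            # the caller terminate the process.
            print(f"ping failed for {host}:{port}: {error}")
            return False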


Operation of System of FIG. 4


Referring again to FIGS. 4 and 5, the alignment server 88 begins an alignment by reading the alignment request from the Face Alignment Request queue in the message queue server 84. Batch alignment requests are handled by putting a portion of the alignment requests on the alignment request queue. If the face image is part of the alignment request, it can be parsed out of the request (step 200 of FIG. 6A). For example, software (such as the previously described Find-A-Face, the previously described "pupil detection" patent applications, etc.) can be used to find a face and/or the eyes in the image (step 202). Specific software adapted to locate eyes in an image can be part of a facial recognition search engine 94. For example, many facial recognition search engine vendors include eye finding as part of their offerings. We have found that, in at least one embodiment of the invention, the accuracy of facial searching can be improved by using the eye finding of a first vendor (e.g., Cognitec) together with the facial recognition searching of a second vendor (e.g., Identix). In another embodiment, we have found it advantageous to use an inventive method for selecting the best eye location; this method is described more fully in connection with FIG. 16, herein.


Referring again to FIG. 4, the face image and the related alignment settings are then sent to the next available alignment engine 90. The Face Alignment Server 88 waits for the alignment engine(s) 90 to return a result. The alignment engine 90 can return a result such as a success, failed or low confidence result together with the computed alignment information (eye positions). In at least one embodiment, to prevent results from being lost due to network failures, workstation crashes, etc., the results are not discarded until the requesting workstation acknowledges receiving the result.


Referring again to FIG. 6A, based on the alignment information created for the face (step 204), a face template is created (step 206). If there is a failure in creating a face template (step 208) (for example, if the eyes were not located in the image), then an error message is returned (step 210). If there is no failure (e.g., if the returned result is success or low confidence), then the alignment information and face template are returned to the face data server 78 via the message queue server 84 (step 214). In one embodiment, we have found that an alignment engine 90 can perform an average of 30 alignments per minute. Scaling can improve performance; for example, in one embodiment, to accommodate higher performance needs, several alignment engines 90 are arranged to serve one alignment server 88.
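

For illustration only, the automatic alignment flow of FIG. 6A might be sketched as below; find_eyes and make_template stand in for vendor eye-finding and template-generation code, and the confidence threshold is an assumption.

    def handle_automatic_alignment(request, find_eyes, make_template):
        """Sketch of the automatic alignment flow (steps 200-214 of FIG. 6A)."""
        image = request["image"]               # parse the image out of the request
        located = find_eyes(image)             # locate the face/eyes in the image
        if located is None:
            return {"status": "failed", "error": "eyes not located"}
        eyes, confidence = located
        template = make_template(image, eyes)  # create the face template
        if template is None:
            return {"status": "failed", "error": "template creation failed"}
        status = "success" if confidence >= 0.5 else "low confidence"
        return {"status": status, "eyes": eyes, "template": template}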


Referring to FIG. 6B, for the manual alignment request, the image is parsed out of the alignment request (step 220), and alignment information (e.g., manually entered eye locations) is also parsed out of the alignment request. The manually entered eye locations can be selected manually by an operator viewing an image (e.g., by sliding lines or crosshairs on a screen and noting the respective x and y coordinates of the location of each eye). The manually entered eye locations also can be selected via a combination of manual and automated functionality. For example, eye finding software can be used to find eye locations in an image and display tentative locations to a user (e.g., in the form of cross hairs, marks resembling the letter "X" at eye locations, circles or other shapes, etc.). The operator can then adjust the displayed eye coordinates as needed. The details of such a process are known and used in many different environments and are not necessary to explain here (for the reader to try an example of such a process, see the "try it on" feature at www.eyeglasses.com, which locates a user's eyes in an uploaded image for the purpose of showing the user how the user would look in a given pair of glasses).


Based on the manually provided alignment information, the alignment server 88 attempts to create a face template (step 224). If there is a failure in creating a face template (step 228) (for example, if the eye locations provided did not result in a valid template), then an error message is returned (step 230). If there is no failure (e.g., if the returned result is success or low confidence), then the alignment information and face template are returned to the face data server 78 via the message queue server 84 (step 232).


Referring again to FIG. 4, the face search server 86 receives search requests from the investigator workstation 56, distributes the search to the search engines 94, collects and compiles the resulting match set, and returns the match set to the requesting investigator workstation 56. In one embodiment, the face database 96 is partitioned so that a given search engine 94 searches only its respective partition.


The search engine 94 receives search requests from the search server 86, executes the search on its partition of the face database 96, and returns the resulting match set to the search server 86. In one embodiment we have found that the search cycle can execute at a minimum rate of about twenty million faces per minute, per search engine 94, but this rate is not, of course, limiting. To increase the speed of searching, in at least some embodiments the system 70 can use multiple search engines 94. For example, a configuration with 5 search engines can scale up to one half million faces per minute. We have found that in another embodiment of our invention, each search engine can search about 1.5 million faces.
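

The partition-and-merge arrangement described here can be illustrated with the following Python sketch; the template collections, the per-engine search_engine callable, and the thread-based parallelism are all assumptions standing in for separate search engine machines.

    from concurrent.futures import ThreadPoolExecutor

    def partitioned_search(probe, partitions, search_engine, max_results=10):
        """Scale searching by splitting the template database across engines.

        partitions: list of template collections, one per search engine.
        search_engine: callable(probe, partition) -> list of (id, score) pairs.
        Each engine searches only its own partition; the face search server
        then merges the partial match sets into one sorted result.
        """
        with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
            partial_sets = list(pool.map(lambda p: search_engine(probe, p),
                                         partitions))
        merged = [match for partial in partial_sets for match in partial]
        merged.sort(key=lambda item: item[1], reverse=True)
        return merged[:max_results]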


The FID (face image descriptor) File Handler 92 maintains the database of face templates 96, one for each face image intended as a candidate for matching. Note that in at least one embodiment of the invention, there can be more than one FID file handler 92. For example, in one embodiment of the invention, we have a FID file handler for each template used by a so-called “two pass” biometric system (which uses a so-called coarse biometric template followed by a so-called fine biometric template, e.g., a coarse template of about 84 bytes to do a “first pass” search of a database of other coarse templates, followed by a search of a portion of the “first pass” results using the “fine” template). The face template is a preprocessed representation of the face that a search engine can use in matching search. In at least one embodiment of the invention, the face template is a template usable with a so-called local feature analysis (LFA) type of facial recognition algorithm, such as is used in the IDENTIX FACE IT product. Operation of at least part of this algorithm is detailed in U.S. Pat. No. 6,111,517, which is incorporated by reference in its entirety. The FID Handler 92 does not necessarily store the raw images (as discussed further below). The alignment engine 90 can generate the face templates. The initial face data set can be constructed at any time; in one embodiment, the initial face data set is constructed at the time of system installation. After the initial face data set is constructed, the administration workstation 82 can supply new images for which face data can be added to the FID Files. These new images can be provided periodically (e.g., daily or weekly), in response to a request, or upon the occurrence of a condition. This process is similar to the “enrollment” process of FIG. 3.
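

By way of example only, the "two pass" search mentioned above might be sketched as follows; the dictionary-based template stores and the generic matcher callable are illustrative assumptions, with only the coarse-then-fine structure taken from the description.

    def two_pass_search(probe_coarse, probe_fine, coarse_db, fine_db,
                        matcher, shortlist=100):
        """Sketch of a two-pass search: a small coarse template (e.g., ~84
        bytes) is matched cheaply against every record, and only the
        best-scoring shortlist is re-scored with the fine template.
        matcher(a, b) -> similarity score (assumed)."""
        # First pass: cheap comparison over the whole coarse-template store.
        coarse_scores = [(rid, matcher(probe_coarse, tpl))
                         for rid, tpl in coarse_db.items()]
        coarse_scores.sort(key=lambda item: item[1], reverse=True)
        survivors = [rid for rid, _ in coarse_scores[:shortlist]]

        # Second pass: expensive comparison only over the shortlist.
        fine_scores = [(rid, matcher(probe_fine, fine_db[rid]))
                       for rid in survivors]
        fine_scores.sort(key=lambda item: item[1], reverse=True)
        return fine_scores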


To provide scalability and high performance, the face templates can be distributed over multiple computers. The FID File handler 92 is responsible for balancing the data load on multiple data servers. The FID File handler 92 provides a programming interface so that the other components can retrieve the information they need to perform their tasks. The alignment engine(s) 90 can send face data records to the FID File handler 92 to be added to the face database 96; which database partition the data is stored on is transparent to the alignment engine(s) 90. In this embodiment, data is added only through the alignment engine(s) 90.


The FID Files 96 (also referred to as the face database 96) are the repositories for face templates. To achieve scalability they may be distributed over multiple computers. In this embodiment, each of search engines has its own set of FID Files 96 to minimize the I/O bottleneck when searching.


The non template face data server 78 (also referred to herein as the misc. face data server 78) maintains the database of face template related data. In one embodiment, face template related data includes data other than the face template itself, such as name, address, biometric information (e.g., fingerprints), and alignment information. Each face template in the FID Files 96 has a respective entry in the misc. face database. In one embodiment, the misc. face data server 78 does not store the raw images. The alignment engine 90 generates the misc. face data. In this embodiment, the initial face data set is constructed at the time of system installation; it can be appreciated, however, that the initial face data set can be created at other times (e.g., before or after system installation). After the initial face data set is constructed, the administration workstation periodically supplies new images for which misc. face data can be added to the database.


The misc. face database 80 (also referred to as the “Non Template Face Data 80”) is a database of face template related data. This data includes alignment and reference information. In addition, it contains the lookup tables to describe the references between entries in the image/subject database and the FID Files 96.


Operators/investigators can control the search process through use of the investigator workstation 56. In this embodiment, the investigator workstation is a personal computer (PC), but it will be appreciated that any device capable of running a browser (e.g., PDA, web-enabled phone, pocket PC, etc.) and displaying images is usable as a workstation, such as the technologies described for the workstation 10 of FIG. 1. In this embodiment, search transactions, made up of several simple, discrete steps, repeat continuously either automatically in batch mode, or asynchronously upon operator initiation. Each transaction generates a complete result set for each new probe image to search against.



FIG. 7 is a flow chart of a method for conducting biometric searches at the search engine 94 of the system 70 of FIG. 4. The search engine 94 receives a search request (step 300) and the search engine loads the aligned probe image (step 302). The search engine 94 searches for matching templates that are stored on the database partition that is physically on the same machine as the search engine 94 (step 304). For each entry in the search list, the face template is retrieved from the face database (step 306), and it is matched against the probe face. A confidence score is created and stored for each of those matches (step 308). The finished array of search results is sent to the requesting face search server 86 (step 312). A match result record consists of the face template identifier, the subject identifier and the match score.



FIG. 8 is a flow chart of a method for conducting biometric searches at a user workstation, in accordance with one embodiment of the invention. The operator of the workstation 56 receives a search request (step 400). The search request can be delivered by any known means, including by paper document, by transportable media such as floppy disk, by computer network, or by other methods. The request can be a signal received by either the workstation itself or the operator. The operator can receive the request through one means (e.g., telephone call, oral request, message on a different workstation) and act on the request using the workstation.


In one embodiment, the search request comes by listing one or more candidates to be investigated on a probe image verification list. FIG. 10 is an illustrative example of a screen shot of a probe image verification list, in accordance with one embodiment of the invention. An investigator selects one or more records on the list to verify.


The request includes a probe image file, or a means to obtain it, e.g. information about the probe image file, a reference to an entry in the image/subject database, or any other information necessary to locate and/or obtain the probe image file. The probe image is a digitally stored image of a human face, against which the matching search can be conducted.


The workstation 56 loads the probe image and text data associated with the request (step 402). If the face picture contained in the request is not available in a digitally stored form, in at least one embodiment, the means to create a digital image (e.g. scanner, etc.) can be used. For example, a scanner can be made available at the investigator workstation. The digital face image is loaded into the workstation software, and a Search Request Message is created. FIG. 9 is an illustrative example of a screen shot of a user interface showing an image that can be used as a probe image, in accordance with one embodiment of the invention.


Probe images that are not stored in the image database do not necessarily contain any alignment information. For probe images that have not been previously aligned, an alignment request is made to the alignment server 88 (step 404), which returns alignment information. This request can be executed at a high priority, so that the workstation operator can verify the result. If the automatic alignment fails, the workstation 56 can also provide a tool (not shown in FIG. 8 but described elsewhere herein, such as in FIG. 6B) for the operator to manually align the probe image.


The workstation operator specifies the face search settings, and other constraints for the search (step 406), then submits the request (which may include other information, such as search settings) to the Face Search Server 86 via the Message Queue Server 84 (step 408). For example, the workstation operator selects the query set on which the face match can be performed. She/he also selects a minimum threshold and/or a maximum number of returned matches. Other advanced settings can be specified by the administrator on a per face database basis.


The Face Search Server 86 reads the Search Request from the message queue, distributes the request to the search engine(s) 94 (step 412), and returns the match set to the workstation (step 424).


Upon submitting the search request to the message queue server, the operator sets a priority for the request. The message queue server maintains a search request queue for the Face Search Server. This queue contains the list of search requests submitted by the workstation(s) that await service by the face search server. The workstation operator reviews the match set and conducts further processing as desired. The workstation handles in an appropriate manner any failed search requests.


The face search server 86 begins a search by reading the search request from the search queue and parsing out the probe image (see, e.g., the method of FIG. 6A). The face search server 86 also parses out other information, such as the alignment information and the query set to be searched on. The following steps are performed, in accordance with one embodiment of the invention:


Handling a Face Search Request

    • The face search server 86 builds the list of face templates that can be searched from the selection chosen by the investigator
    • The search template list and the probe record with the alignment information are sent/distributed to the search engine(s) 94
    • The face search server 86 waits for the search engine(s) 94 to return a resulting match set, then builds a combined match set
    • If the search request is hierarchical (has several stages with different recognition settings), the face search server 86 selects a subset of the match set and re-sends/distributes it to the search engine(s) 94
    • Finally, the combined match set is sent to the requesting workstation 56.
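By way of a non-limiting illustration, the following Python sketch renders the request-handling steps above. It is a minimal, hypothetical rendering: the engine objects, their method names, and the subset size used between hierarchical stages are assumptions for this example, not the actual interfaces of the face search server 86 or search engines 94.

```python
# Hypothetical sketch of the face search server's request handling.
# Engine objects and their method names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Match:
    face_id: str    # identifier of a face template/record
    score: float    # relative match score

@dataclass
class SearchRequest:
    probe: bytes                 # probe image data
    alignment: dict              # eye locations, etc.
    query_set: list              # face template lists to search
    stages: list = field(default_factory=list)  # optional hierarchical settings

def handle_search_request(request, engines, subset_size=100):
    """Distribute a search to the engine(s) and merge the match sets."""
    # Fan the probe record and alignment info out to each search engine.
    matches = []
    for engine in engines:
        matches.extend(engine.search(request.probe, request.alignment,
                                     request.query_set))
    combined = sorted(matches, key=lambda m: m.score, reverse=True)

    # Hierarchical requests re-run a subset with different settings.
    for stage_settings in request.stages:
        subset = combined[:subset_size]
        matches = []
        for engine in engines:
            matches.extend(engine.search_subset(subset, stage_settings))
        combined = sorted(matches, key=lambda m: m.score, reverse=True)
    return combined  # the combined match set sent to the workstation
```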


To prevent results from being lost due to network failures, workstation crashes, etc. the message queue server 84 stores a Search Result (named, in this illustrative embodiment, “SearchResult”) until it is discarded by the workstation.


A match set can contain a sorted array of identifiers for the face data and the subject data. Records in the match set can have a relative match score (“face score”) that indicates the determined level of similarity between the probe image and the database images. FIGS. 11A and 11B are illustrative examples of probe images 100, 101 and returned results 102 through 116, respectively, for the system of any one of FIGS. 2-4, and FIG. 13 is an illustrative example of a screen shot of a candidate list screen presented to a user, in accordance with one embodiment of the invention. As these figures illustrate, an investigator can readily compare a probe image with one or more candidate matches, both visually and based on relative match score.


Note that, to minimize overall use of system resources, and to separate image data from face data, the result set returned to workstations 56 by the Face Search Server 86 does not necessarily have to contain image data. Rather, in this embodiment, the match set can contain pointers, or other unique identifiers, to image data. The workstation 56 can use the pointers to retrieve the raw image data from the image database server. For example, in one embodiment, the match set does not necessarily contain face images. Rather, the match set contains identifiers for face images and subject data stored on the image/subject server. To display or otherwise process any images identified in the match set, the workstation first retrieves them from the image server.
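As a minimal sketch of such an identifier-based match set, the following Python fragment assumes a hypothetical image server client exposing a get_image() call; none of these names come from the described system itself.

```python
# Match records carry pointers, not pixels; raw image bytes are fetched
# from the image server only when a record is actually displayed.
from dataclasses import dataclass

@dataclass
class MatchRecord:
    face_id: str     # pointer into the face database
    subject_id: str  # pointer into the image/subject database
    score: float     # relative "face score"

def display_matches(match_set, image_server):
    for rec in sorted(match_set, key=lambda m: m.score, reverse=True):
        image_bytes = image_server.get_image(rec.subject_id)  # hypothetical call
        print(f"{rec.subject_id}: score {rec.score}, {len(image_bytes)} bytes")
```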


After receiving the match result set from the face search server, the workstation operator may process images selected from the match set in any of the following ways:

    • Display images for comparison to the probe image (see FIGS. 12 and 14)
    • Print display images
    • Display or print reports
    • Obtain fingerprints of subjects identified by selected images (see FIGS. 12 and 15)
    • Select an image from the match set and submit it as the probe image for a new search (processes for doing this are also described more fully in our “Systems and Methods for Recognition of Individuals Using Multiple Biometric Searches”, Ser. No. 10/686,005, filed Oct. 14, 2003)


Those skilled in the art will appreciate that other ways of processing images are, of course, possible.


After locating matches, the investigator can flag a record as void, potentially fraudulent, etc. This designation can be associated with the image until removed, and the flag (or other notes to the image) can remain visible even if the record is retrieved again in a different search.


Fourth Illustrative Embodiment

Database Partitioning


In one embodiment, we have found that some database partitioning techniques can improve the speed and/or accuracy of searching. These techniques can be advantageous for applications where there are a very large number of legacy images that are never deleted or replaced (e.g., as in many DMVs). For applications such as these, the database can be broken up into active and inactive (or “almost” inactive) parts. In some embodiments, after an image is enrolled (and a template is created), all the information about the image (such as the so-called Binary Large Object (BLOB), or “Image BLOB”) is essentially inactive. One no longer needs access to the image for searching purposes, only for displaying search results. Another way to think about activity is to say that, in at least some situations, the actual images are needed only for enrollment and display and nothing else. After enrollment, images can technically be deleted from at least a portion of the database, since they are not needed for searching (the searching is done using templates). In one embodiment, the images will only be displayed if they rank high enough on a given search. Thus, the architecture of the search engine 94, file handler 92, search server 86, and/or face database 96 can be modified to separate the basic functions that require speed (enrollment, identification, and verification) from those that do not (e.g., the user interface).


We have found several ways to accomplish this partitioning. In the first embodiment, the image BLOBs are kept in the face database 96, but they are put in a separate, indexed table on a separate physical disk unit, in contiguous disk blocks. A foreign key in the new BLOB table ties it to the existing tables. Note that the particular server being used may dictate whether such control over space allocation is possible (e.g., SQL Server may not allow this level of control over space allocation, but Oracle does). In addition, in at least one embodiment, we distribute the BLOB image database on the nodes. After successful enrollment (enrollment is described further herein), images can be “archived” in a database stored on each node. One advantage is that each database can be limited (at least today) to about 1 million records. Once the database is “full”, no more images will be added to it. This may make backup and recovery easier, although care may need to be taken because all the enrollment activity is then directed to one or two “active” nodes.


In a second embodiment, image BLOBs are removed from the database. This can be accomplished using, for example, Oracle's BFILE data type, which may give efficiencies by reducing the database size while keeping the benefits of using a recognized SQL data type.


In a third embodiment, we leave images in the file system and store only a path to them. We have found this to be a workable approach. We have also evolved a file structure based on the date that is very helpful when examining lots of records. It also avoids problems that develop with UNIX when the number of files in a directory grows beyond 40,000. One example of a structure that we have used is:


\volume name\YYYY\MM\DD\<filenames> or


\volume name\YYYY\DDD\<filenames>


Some advantages of this embodiment include:

    • It is easy to convert a path from Unix to Windows format
    • The number of records on any given day doesn't stress the operating system
    • It is easy to logically group files for backup media
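The following Python sketch builds paths in this date-based layout. The volume and file names are hypothetical, and a flag selects between the YYYY/MM/DD and YYYY/DDD (day-of-year) variants shown above.

```python
# Date-based image file layout: one directory per capture date keeps
# any single directory well under the ~40,000-file problem threshold.
import os
from datetime import date

def image_path(volume, capture_date, filename, day_of_year=False):
    if day_of_year:
        # \volume name\YYYY\DDD\<filename>
        parts = (f"{capture_date.year:04d}",
                 f"{capture_date.timetuple().tm_yday:03d}")
    else:
        # \volume name\YYYY\MM\DD\<filename>
        parts = (f"{capture_date.year:04d}",
                 f"{capture_date.month:02d}",
                 f"{capture_date.day:02d}")
    return os.path.join(volume, *parts, filename)

print(image_path("/images", date(2003, 11, 26), "subject123.jpg"))
# -> /images/2003/11/26/subject123.jpg (on a UNIX host)
```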


In a fourth embodiment, we cache the face image records (FIRs). In this manner, a node could store all the FIRs allocated to it in a large file on the node itself. Loading it into memory will be far faster than reading rows from a database. This may affect the operation of dynamic partitioning, which can be rectified at least partially by adding another column to the images table to indicate that a node is full and the partition is no longer available for updates.


Fifth Illustrative Embodiment

The fifth illustrative embodiment of the invention applies some of the previously described systems, methods, user interfaces, and screen shots to provide an ID fraud prevention system optimized for use with DMVs, together with methods and processes for interacting with the ID fraud prevention system, including a user-friendly user interface, for searching a database of images for a match to a probe image. The features of this embodiment may be especially useful for large databases of images.


Overview of the Verification Process


In this embodiment, verification is the process of discovering DMV customers who have used false documents to obtain more than one license. The ID Fraud Prevention system of this embodiment assists the DMV investigator searching for multiple identities on documents issued by an issuing authority, such as a DMV. In the normal course of the issuance process, a DMV customer presents documentation to establish his or her identity. In most cases, the DMV has no certain way to determine if the customer's identity is false.


Finding Fraudulent Customer Records


The Fraud Prevention Program is used to quickly and automatically find those customers who already have one or more “valid” driver's licenses. Images uploaded daily from branch offices are sent to the DMV's Fraud Prevention system. The customer's portrait is enrolled (encoded for searching) and permanently stored in the special database used for performing searches and comparisons.


After all images are enrolled, licenses tagged by the DMV as “new issuances” are automatically compared to the entire library of photos on file to see if a match or a close match exists. In one embodiment, the library of photos on file can be over 10 million images. The library of photos advantageously is kept current every day except for a very small percentage of images that fail for one reason or another.


After the database is searched for matching images, one of two results may occur.


1. No match may be found: If no image in the Fraud Prevention database resembles the new photo, it is checked off in the Central Image Database as “passed by the system”. This means that the likelihood of a match to any one picture is so low that it is not worth reporting it or having an investigator look at it. This is the result one would normally expect when a customer who is new to the DMV is getting a license for the first time.


2. Possible matches are found: This outcome requires an investigator to look at the customer's photo and the possible matches found in the IDFPP database. It is up to the investigator to determine if the photos are in fact a match and, if so, what to do with the information. It is important to remember that the Fraud Prevention software is providing the user with its best assessment and not positive proof.


Terminology


The following words have a special meaning in the context of fraud prevention in this embodiment of the invention:

Browser: Microsoft Internet Explorer version 6.0 or higher.

Candidate: A customer record returned by the IDFPP search process. A candidate is rated with a confidence level.

Candidate List: A list of candidate images found by the IDFPP software to be similar to the probe image. All the candidates are organized so they can be viewed together with the probe image.

Confidence Level: A number from 0 to 100 assigned to each image in the candidate list. A value of 100 means that the candidate should match the probe image. A value of zero means it is totally unlike the probe image.

Duplicate: A candidate image that is obviously the same as the probe image is loosely referred to as a duplicate. A duplicate image may be in the database as a result of operator error, computer error, or fraud.

Fraud: This term applies to duplicate records (licenses) present in the DMV database that are the result of a deliberate attempt to conceal or alter the customer's identity. A duplicate is determined to be fraud only after all other possible sources, such as operator or computer error, have been eliminated. Fraud is not determined by the IDFPP system; a DMV investigator needs to make this determination, and in most cases will need supporting information from other sources.

Identical: Images are said to be identical if an image was inserted into the Central Image Server twice. This is usually the result of operator or computer error. If found by the IDFPP software, an image identical to the probe will be assigned a confidence level of 99 or 100. This is not a case of fraud.

List Size: The size of the candidate list which appears on the verification screen. Typically, the list size is set to 15 or 25 images.

Match: This term is used loosely to mean a duplicate record was found.

Probe: The image used when searching for possible matches. Typically, it is the picture of a DMV customer who is getting a license for the very first time.

Progressive Search: If an investigator finds one or more interesting candidate images, he may use the candidate URNs to initiate a single search. A search with a candidate image may yield more matches to the original probe image.

Single Search: A single search selects an image from the Central Image Server and uses it as a probe to search IDFPP for possible matches.

Threshold: As each candidate record is obtained from the IDFPP search database, the confidence level is compared to a system-wide value. If the candidate is above the threshold, it remains on the candidate list. If it is below the threshold, it is not added to the list.

Timeout: When an investigator stops moving the mouse or clicking on buttons, a countdown timer is started. The timer is initialized to the timeout value; when it reaches zero, the user is logged off. The timeout value is typically set to 5 minutes, but the system administrator may change this value.

Verification: The process of examining probe and candidate images to verify the absence of fraud. Probe images are verified automatically by the system if all the candidate images are below the confidence level threshold. Probe images that have candidate records above the threshold can be verified by an investigator.

Batch Processing of Enrolled Images


In this embodiment, newly captured images (from one or more image capture locations, such as branch DMV offices) are periodically uploaded to the Central Image Server (CIS). For example, in a DMV application, newly captured images are uploaded to a DMV CIS after the close of business each day.


Also, the library of enrolled images is searched with images tagged as new issuances. The results of this search are available in the morning for an investigator to review. In at least one embodiment, the search begins only after the batch of newly captured images is enrolled. This procedure increases the chance that a DMV customer will be caught if he is shopping for multiple licenses on a single day. Verification lists can be generated to indicate that one or more images were found in the fraud detection system that were similar to the probe images. If no matches were found, a verification list is not generated.
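The following Python sketch summarizes this nightly batch flow under stated assumptions: every function and attribute name (enroll, search, new_issuance, and so on) is a placeholder for illustration rather than the actual IDFPP interface.

```python
# Nightly batch: enroll first, then search, so that two applications
# made on the same day can still match each other.
def nightly_batch(uploaded_images, fraud_db, threshold, list_size):
    verification_lists = []
    # 1. Enroll every newly captured image before any searching begins.
    for img in uploaded_images:
        fraud_db.enroll(img)
    # 2. Search the enrolled library with images tagged as new issuances.
    for img in (i for i in uploaded_images if i.new_issuance):
        candidates = fraud_db.search(img, threshold=threshold,
                                     max_results=list_size)
        if candidates:
            # Matches found: queue for investigator review in the morning.
            verification_lists.append((img, candidates))
        # No verification list is generated when nothing clears the threshold.
    return verification_lists
```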


System Requirements:


The IDFPP (ID Fraud Prevention Program) database and verification software of this embodiment can be accessed with any browser capable of browsing the Internet. In one embodiment, the IDFPP is accessible using Microsoft's Internet Explorer 6.0 (or later). The browser can, for example, be resident on a personal computer running the WINDOWS operating system and this version of Internet Explorer.


However, those skilled in the art will appreciate that virtually any other web-enabled device (e.g., personal digital assistants (PDAs), laptops, mobile phones, tablet computers, etc.) capable of communicating with a display screen (whether built in or not) and/or an input device is capable of being used in accordance with the invention. Thus, for example, remote users (e.g., law enforcement personnel) may be able to remotely access the network of the invention and determine “on the fly” whether a given identification document, such as a license, may have been fraudulently obtained. For example, the remote user could use a portable scanning device capable of capturing a digitized image to scan a drivers license image, then conduct a search of the IDFPP system to further investigate possible fraud relating to the image and/or the holder of the drivers license.


The device running the browser should be capable of connecting to or communicating with the DMV network (depending, of course, on the network setup). To log on to the DMV network and/or IDFPP system, a given user may need certain access privileges. If required, an IDFPP database administrator can set up a user's username and password to use with IDFPP. Note that the username and password need not be the same as those used for DMV network access. In one embodiment, the administrator of the IDFPP database system can set up the IDFPP database permissions with varying access levels, such as “Junior” or “Senior” permissions.


Once these items are set up, a user is ready to logon and start verifying images. This process begins, in one embodiment, by entering a predetermined URL in the browser's address field. The URL brings the user to the IDFPP logon screen.


After logging on, a user can spend a significant portion of time viewing either the verification list (e.g., FIG. 10) or the gallery of candidates presented on the verification screen (e.g., FIGS. 11 and 13). Advantageously, the user interface of this embodiment is designed so that progress through the screens moves in a “straight line” with very little branching, to keep things simple. In this embodiment, because it is important to finish work on each screen even if a user cancels, the screens do not include a browser “Back” button. In cases where it is necessary to offer this choice, an explicit back button is provided. If a user finds a candidate image of interest and wants to perform a new search using the candidate as a probe (e.g., the progressive searching described previously), the user can use a so-called “cut and paste” feature (e.g., such as the cut and paste features available in WINDOWS and the MAC OS) to copy the URN associated with the image into a single search input field.


Sixth Illustrative Embodiment

In our sixth illustrative embodiment of the invention, we have found that we can improve the accuracy and/or usability of facial recognition search engines, such as those used in the embodiments described herein, by improving the accuracy of the eye locating (alignment) step, which in turn improves the accuracy of the template based on the eye locations.


Many known face recognition systems, for example, occasionally fail to correctly find the subject's eye location. In addition, inexactness can result because algorithms create a “probe” template from a digitized portrait and then search the entire database of templates for “near matches” instead of exact matches. This process results in a list of candidate matches which are ranked in order of likelihood. For certain images, the face recognition software recognizes that it has not properly located the eyes and does not generate a face recognition template (e.g., the alignment failures described previously herein). For other images, incorrect eye location is not detected and an invalid template is produced. It would be desirable to detect and possibly correct invalid templates as images provided either by a capture station or a legacy database are enrolled into a template database.


We have found that commercially available facial recognition software does not meet the requirements of some types of customers, such as DMV customers. Vendors have created software that is directed at surveillance applications, but from a DMV perspective this software can have serious limitations. For example, vendor software that we have evaluated has some or all of these features:

    • Optimized for databases of 1 million records or less
    • Designed to have a human evaluate each captured image and assist the program when it is enrolling the image in a search database
    • Designed to take advantage of multiple image captures of the same individual
    • Designed to compare new images to a short “watch list” and present an operator with immediate feedback.


All of these features, except possibly the “watch list”, can be a limitation in DMV applications for at least the following reasons:

    • DMV image databases range in size from a few million to 80 million records and grow every day, since DMVs typically do not delete any customer images, even those of deceased license holders
    • Duplicate images are created at the license renewal cycle, and it is rare to see more than 2 images of the same person in today's databases
    • Enrollment of an existing “legacy” database preferably occurs automatically and cannot require operator intervention.


At least some conventional face recognition algorithms include an initial processing step to locate the eyes of a subject in an image of the subject's face. After the eyes are located, a template engine provides a template by processing the face image. For example, at least some facial recognition software available from vendors performs portrait enrollment in roughly two steps:


First, after some conventional image enhancement, a Cartesian coordinate system is established on the portrait by locating the centers of each eye. The line formed by the centers is one axis, and the midpoint between the eyes locates the second, perpendicular axis.
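For illustration, the following Python sketch computes the quantities that define such a coordinate system: the midpoint between the eyes, the rotation angle of the inter-eye axis, and the inter-eye distance used for scale. It is a generic rendering, not any vendor's enrollment code.

```python
# Establish the eye-based coordinate system: the line through the eye
# centers is one axis; its midpoint anchors the perpendicular axis.
import math

def eye_coordinate_system(left_eye, right_eye):
    (lx, ly), (rx, ry) = left_eye, right_eye
    midpoint = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    angle = math.atan2(ry - ly, rx - lx)         # tilt of the eye axis
    eye_distance = math.hypot(rx - lx, ry - ly)  # scale (>=100 px recommended)
    return midpoint, angle, eye_distance

mid, theta, dist = eye_coordinate_system((55, 108), (155, 110))
```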


After the coordinate system is established, the manufacturers' proprietary algorithms extract other facial features that are useful when matching one face with another. The features are encoded into a short binary template and added to a database of templates. In practice, the template database is keyed to information such as a name and address, so it is possible to identify an individual if the portrait matches one in the database.


Each step in the above process entails some level of imprecision. We have noted that step 2 (the matching of templates) depends heavily on the “quality” of the template created during step 1. If the digitized portrait used to create the template is subjectively a high quality portrait and the probe image used later in a search is also high quality, then the first image is nearly always found. The manufacturers give some guidance on this point, and at least some recommend that:

    • The optical axis of the camera lens should be perpendicular to the plane of the face to within 15 degrees;
    • The portrait should be taken at a scale so that at least 100 pixels are between the centers of the eyes;
    • The subject should have his eyes open and should not be looking up or down;
    • The “probe” images used in a search should be taken under the same lighting conditions (color temperature, contrast, substantially without shadows) as those in the template database.


If these conditions are not met, the algorithms are likely to fail at step 1 and not create a template. For example, in most cases, no template is created if the subject's eyes are closed. This is reported by the vendor's software in about 1% of the images we tested.


However, even when a “good” portrait is captured, the algorithms still may fail to locate the position of the eyes correctly. Our studies have shown that the algorithms fail this way in about 7% to 20% of the images. When this type of failure occurs, the vendor's algorithms create a template but do not report an error. Incorrect templates are created which will almost never match another photo of the same individual. Failure to find the eyes properly can result from many different factors, including whether or not the individual is wearing glasses or jewelry, hair style, whether the subject is wearing a hat, how shiny the subject's skin is, etc.


This unreported failure (creation of an incorrect template) effectively “deletes” the image from the template database by making it a non-participant. For databases containing more than 10,000 images, it is impractical to correct these failures by viewing every image in the database and manually correcting the eye coordinates. This is an unacceptably high error rate for many such customers.


In a first example of one of our tests, we obtained a sample of 300 images from a state's DMV database and enrolled them with software from two different facial recognition vendors (Imagis and Identix). The eye coordinates produced by each algorithm were verified manually and incorrect locations were noted. We ran searches on a small number of portraits that were incorrectly enrolled and verified that we could not match other images of the same individual (a second, slightly different portrait). After manually correcting the coordinates, we ran searches again and verified that the matching software succeeded. Based on this testing, we discovered that each vendor's set of resulting “failures” contained a different subset of portraits, and that by combining this information we can reduce the total number of failures (that is, increase the accuracy).


In this embodiment, we provide methods for detecting and/or correcting incorrect eye location. In one aspect of this embodiment, we correct eye location by means of additional eye location algorithms when used in conjunction with legacy database images, and by a combination of additional eye location algorithms, manual eye location under operator control, or image recapture when the face images are generated by an identification capture station. Advantageously, at least some embodiments of this aspect of the invention may provide increased face recognition accuracy by building a more accurate template database from legacy images and captured images.



FIG. 16 is a flow chart of a method for improving the accuracy of facial recognition searching, in accordance with one embodiment of the invention. An image of the subject is received (step 1200). The image can be captured at a capture station or, in at least one embodiment, can be an already-enrolled legacy image. If required for eye finding by a particular algorithm, pre-processing steps can occur to prepare the image for finding eyes, such as removing extraneous information (e.g., background) from the image (step 1202), finding a head, face, and/or skin in the image (step 1204), and resizing and/or centering the image if needed (step 1206). Then, the eyes of the subject are found in the image (step 1208).


This step can use multiple eye location modules/engines in parallel (e.g., facial recognition engines 1 through N, each of which may have eye finding functionality) (steps 1218, 1220, 1222) to process the image and return eye coordinates (step 1209). Generally, each eye locating module returns (X, Y) eye location coordinates. Optionally, the eye locating module can also return an indication of success or failure of the eye location process. In at least some embodiments of the invention, the step of finding eyes in the image (step 1208) can be accomplished using one or more of the following:

    • Process the image with a primary face recognition module, which returns a failure indicator and eye location coordinates
    • Process the image with a “blob” feature detector module configured to find eyes in a scaled, centered image
    • Process the image with a secondary face recognition module to obtain eye location coordinates
    • Process the image with a third-party proprietary face recognition algorithm which locates the eyes in an identification image


The evaluation of the eye coordinates (step 1210) can be automated, manual, or a combination of the two. Automated evaluation can involve one or more rules (described further below) which are applied to the coordinates. Evaluation (automated or manual) also can include one or more of the following types of analysis of the returned eye coordinates.


Consistency checks: determine that both eyes are not in the same place, determine that horizontal and/or vertical eye locations are realistic.


Statistical comparisons: compare eye location coordinates provided by each eye finding module, check tolerances between modules, compute average coordinate values, variance etc., which can help to eliminate anomalous coordinates, smooth errors, etc.


General analysis: check for other predetermined potential template problems based on eye location coordinates.
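The following Python sketch combines the consistency and statistical checks above into one automated evaluator. The tolerance values are invented for this example and would have to be tuned for a real deployment.

```python
# Automated evaluation of eye coordinates returned by several modules.
from statistics import mean, pstdev

def evaluate_eye_candidates(candidates, min_separation=10, max_spread=20):
    """candidates: list of ((lx, ly), (rx, ry)) pairs, one per module."""
    usable = []
    for (lx, ly), (rx, ry) in candidates:
        if (lx, ly) == (rx, ry):           # both eyes in the same place
            continue
        if abs(rx - lx) < min_separation:  # unrealistic horizontal spacing
            continue
        usable.append(((lx, ly), (rx, ry)))
    if not usable:
        return None  # evaluation fails; flag for recapture or manual alignment
    # Statistical comparison: wide disagreement across modules is suspect.
    if pstdev(l[0] for l, _ in usable) > max_spread:
        return None
    # Average the surviving candidates to smooth residual error.
    left = (mean(l[0] for l, _ in usable), mean(l[1] for l, _ in usable))
    right = (mean(r[0] for _, r in usable), mean(r[1] for _, r in usable))
    return left, right
```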


Evaluation provides a success or fail indication as well as eye location coordinates to be used by the template engine. If it is determined that there is a problem with the primary face recognition module's eye location coordinates and the problem can be corrected, updated eye location coordinates (based on the correction) are provided from one of the other modules.


Evaluation also can involve application of a predetermined rule to evaluate and determine the “final” eye coordinates (step 1210). We have found that various rules can be used on one or more of the returned sets of coordinates, assuming that at least some of them are “acceptable” (step 1214). A determination of whether results are “acceptable” can involve many different factors. For example, if one or more of the eye locating modules did not find eye coordinates at all, and it is still possible to get a new image of the subject, the image of the subject may be automatically or manually recaptured (step 1216) and re-evaluated to locate new eye coordinates. If the eye coordinates returned by the eye locating modules are so different that no pair of coordinates for an eye is within some predetermined distance (e.g., 1 inch) of at least one other set of coordinates, the results may be deemed unacceptable. In another example, if an eye locating module returns one eye coordinate that appears to be in a significantly different vertical position on the face than the other (e.g., the left eye being 4 inches higher than the right eye), the results may be deemed unacceptable. Similarly, it may be deemed unacceptable if the left and right eye are in the same spot, or are more than several inches apart. Those skilled in the art will, of course, appreciate that many other patterns of returned results can be deemed not “acceptable”.


For example, in one embodiment we provide a “majority rules” type of implementation, where the coordinates selected are those closest to (or an average of) those selected by a majority of the eye locating modules. For example, assume that for a subject image, five different eye locating modules returned the following X and Y coordinates for a right eye (only one eye is used here, for simplicity, but it will be appreciated that the returned coordinates for the left eye can be similarly evaluated, and, indeed, coordinates for both eyes can be evaluated at the same time). Table 1 shows the results:

TABLE 1

Eye Locating Module    X Coordinate    Y Coordinate
Vendor A               55              110
Vendor B               35              90
Vendor C               52              100
Vendor D               58              115
Vendor #               21              21
As Table 1 shows, the results from Vendors A, C, and D are closest to each other and form the “majority”. In one embodiment, the eye coordinates can be taken to be the average of these majority results, giving an X coordinate of 55 and a Y coordinate of 108. These “averaged” locations can be used as the eye location.
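The following Python sketch implements this majority-rules selection on the Table 1 data; the fixed 15-pixel clustering radius is an assumption chosen for this example.

```python
# "Majority rules": find the largest cluster of agreeing modules and
# average it to obtain the working eye coordinate.
from statistics import mean

returned = {"Vendor A": (55, 110), "Vendor B": (35, 90),
            "Vendor C": (52, 100), "Vendor D": (58, 115),
            "Vendor #": (21, 21)}

def majority_coordinate(points, radius=15):
    pts = list(points.values())
    best = []
    for p in pts:
        cluster = [q for q in pts
                   if abs(q[0] - p[0]) <= radius and abs(q[1] - p[1]) <= radius]
        if len(cluster) > len(best):
            best = cluster
    # Average the majority cluster (Vendors A, C, and D in this data).
    return (round(mean(x for x, _ in best)), round(mean(y for _, y in best)))

print(majority_coordinate(returned))  # -> (55, 108)
```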


The above is just one illustrative example of a rule that can be used to select the “best” eye coordinates from the returned eye coordinates. Other rules that we have tested and which may be usable include any one or more of the following:

    • Determining a broad area of interest for the location of both eyes and rejecting points outside of the area.


    • Applying a weighted voting mechanism to the results generated by the different eye locating modules (e.g., blob detection), and picking the candidate with the highest weighted number of “votes”. Historical accuracy data can be used to assist in computing weights (for example, a given eye locating module may be especially accurate for individuals with darker skin but less accurate for individuals with lighter skin, and that information can be noted by the operator prior to finding the eyes, so that results from that eye finding module are weighted more heavily than those from other modules).

    • Replacing the eye coordinate with the center of gravity of all the candidate locations
    • Excluding points that are too far away from the frame midline after the captured image is scaled and framed.
    • Excluding points outside of boundaries for each possible eye location
    • Rejecting points if the location is not contained in a blob with the correct “eye” characteristics.
    • Rejecting pairs of points if the slope of the connecting line is too high (or too low) (e.g., the results show one eye has a markedly different vertical location than the other)


Referring again to FIG. 16, if the automatic eye location (using the eye locating modules) fails after processing a predetermined number of images (step 1214), then the capture station operator is prompted (if the image is being enrolled) to manually set the eye locations (steps 1224 and 1226). The remaining steps (creation of template, etc.) are similar to those described in connection with other figures herein, and are not repeated here.


Testing


We used a set of 300 DMV images (the same DMV images which we used in the “first example” described above) as input to an industrial vision system manufactured by Acuity Corp. This algorithm uses a technique known as “blob detection” that decomposes an image into areas that meet certain programmed conditions. Geometric features of interest for each blob are then measured and stored in a database. The features we used to sort the blobs were:


eccentricity—measuring roundness


total area (in pixels)


length and orientation of the major axis


length and orientation of the minor axis


(X, Y) coordinates of the centroid


We removed all the blobs that had a Y coordinate that could not be paired with another blob (to within a narrow tolerance band). We also removed blobs that were outside an area of interest (a band of pixels just below the top of the head).


Blobs that had at least one companion at about the same height were checked to see if the X coordinates spanned the midline of the picture frame. All blobs that did not have a companion in the other half of the frame were eliminated.


Finally, the remaining blobs were checked according to size and eccentricity. Those that were roughly the same size and of similar orientation were paired.
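The following Python sketch captures these pairing heuristics; the Blob type is a simplified stand-in for the vision system's measurements, and all tolerances are illustrative.

```python
# Pair candidate "eye" blobs: same height, spanning the frame midline,
# and roughly matched in size and eccentricity.
from dataclasses import dataclass

@dataclass
class Blob:
    cx: float            # centroid X
    cy: float            # centroid Y
    area: float          # total area in pixels
    eccentricity: float  # roundness measure

def pair_eye_blobs(blobs, frame_width, y_tol=5.0, size_tol=0.3, ecc_tol=0.2):
    midline = frame_width / 2.0
    pairs = []
    for left in blobs:
        for right in blobs:
            if left is right or left.cx >= right.cx:
                continue
            if abs(left.cy - right.cy) > y_tol:      # about the same height?
                continue
            if not (left.cx < midline <= right.cx):  # spans the midline?
                continue
            if abs(left.area - right.area) > size_tol * max(left.area,
                                                            right.area):
                continue                             # roughly the same size?
            if abs(left.eccentricity - right.eccentricity) > ecc_tol:
                continue                             # similar shape?
            pairs.append((left, right))
    return pairs
```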


Examining the results manually, we found that this approach could be used to provide a set of candidate eye coordinates in most cases.


Modifications to the Method of FIG. 16.


Additional steps can be added to the method of FIG. 16, if desired. For example, in one embodiment, if there are known duplicates of a subject (e.g., previous image captures known to be of the same individual) already in the database, the newly generated template can be compared to the previous ones to determine if they match. If they do not, the operator can be given feedback to adjust the eye coordinates of the newly captured image.


In one embodiment, even if the coordinates returned by the eye locating modules are deemed acceptable, the operator can override them and manually enter the eye coordinates. The operator also can manually override the threshold of step 1214 to retake further images of the subject (which can be advantageous if the subject accidentally moves or blinks during image capture).


The method of FIG. 16 can be adapted to enroll images from a legacy database of images (e.g., where there is no ability to re-capture images of the subject as needed). In one embodiment, multiple images can be processed if the legacy database includes multiple images for each person in the database. Manual eye location for legacy images is, of course, possible; however, the number of images which may require manual correction can make this process impracticable.


In at least one embodiment of legacy enrollment, if it is determined by the evaluator that the eye location is unacceptable and manual correction is not enabled, then no template is generated, and an indication of eye location failure is placed in the database.


Additional Features of these and Other Embodiments of the Invention


The embodiments of the invention disclosed herein, including the records of the investigations and searches, can be used in many ways, especially in ways that benefit law enforcement and/or other government entities. For example, data associated with multiple attempts at fraud by a given individual can be sorted by the geographic location (e.g., DMV location) at which the individual sought the identification document (e.g., the location where an individual presented fraudulent credentials and/or had his/her image captured). The list of locations may help law enforcement officials to determine patterns of attempted fraud, DMV locations where the most (and least) fraud occurs, and possible geographic regions where an individual suspected of fraud may reside.


In addition, the lists of fraudulent images may be useful as “watch lists” to be provided to other governmental and/or law enforcement agencies. Such “watch lists” could be compared to other lists, such as FBI “most wanted” lists, international terrorist watch lists, Immigration and Naturalization Service (INS) watch lists, etc., to attempt to track down the locations of individuals of interest. The batch processing features of at least some embodiments of the invention can also be utilized to assist other agencies and can be adapted to work with databases used by other systems. For example, in addition to comparing a given captured image to the database of images stored by the issuing agency (e.g., DMV), the given captured image also could be compared with one or more watch lists of images that are maintained by other agencies. The same features of the invention (detailed previously in the first, second, and third embodiments) can be used to search these other databases. Indeed, it should be appreciated and understood that the invention is applicable not just to issuers of identification documents (such as DMVs), but to virtually any agency or organization where it is important to locate any and all individuals who may match a given image.


Furthermore, although the invention has heretofore been described using captured images, the invention can readily be implemented using so-called “live” images (e.g., live feeds from surveillance cameras).


In addition, although the systems and methods described herein have been described in connection with facial recognition techniques and fraud prevention, the embodiments of the invention have application with virtually any other biometric technology that lends itself to automated searching (e.g., retinal scanning, fingerprint recognition, hand geometry, signature analysis, voiceprint analysis, and the like), including applications other than fraud prevention. For example, the systems and user interfaces of the present invention could be used with a fingerprint recognition system and associated search engine, where an investigator is searching a fingerprint database for a match to a latent fingerprint image retrieved from a crime scene.


Embodiments of the invention may be particularly usable in reducing fraud in systems used for creating and manufacturing identification cards, such as driver's licenses manufacturing systems. Such systems are described, for example, in U.S. Pat. Nos. 4,995,081, 4,879,747, 5,380,695, 5,579,694, 4,330,350, 4,773,677, 5,923,380, 4,992,353, 480,551, 4,701,040, 4,572,634, 4,516,845, 4,428,997, 5,075,769, 5,157,424, and 4,653,775. The contents of these patents are hereby incorporated by reference.


Such card systems may include a variety of built in security features, as well, to help reduce identity fraud. In an illustrative embodiment of the invention, the biometric authentication process described above can be used during the production of a photo-identification document that includes a digital watermark. Digital watermarking is a process for modifying physical or electronic media to embed a machine-readable code therein. The media may be modified such that the embedded code is imperceptible or nearly imperceptible to the user, yet may be detected through an automated detection process. The code may be embedded, e.g., in a photograph, text, graphic, image, substrate or laminate texture, and/or a background pattern or tint of the photo-identification document. The code can even be conveyed through ultraviolet or infrared inks and dyes.


Digital watermarking systems typically have two primary components: an encoder that embeds the digital watermark in a host media signal, and a decoder that detects and reads the embedded digital watermark from a signal suspected of containing a digital watermark. The encoder embeds a digital watermark by altering a host media signal. To illustrate, if the host media signal includes a photograph, the digital watermark can be embedded in the photograph, and the embedded photograph can be printed on a photo-identification document. The decoding component analyzes a suspect signal to detect whether a digital watermark is present. In applications where the digital watermark encodes information (e.g., a unique identifier), the decoding component extracts this information from the detected digital watermark.


Several particular digital watermarking techniques have been developed. The reader is presumed to be familiar with the literature in this field. Particular techniques for embedding and detecting imperceptible watermarks in media are detailed, e.g., in Digimarc's co-pending U.S. patent application Ser. No. 09/503,881 and U.S. Pat. No. 6,122,403. Techniques for embedding digital watermarks in identification documents are even further detailed, e.g., in Digimarc's co-pending U.S. patent application Ser. No. 10/094,593, filed Mar. 6, 2002, and Ser. No. 10/170,223, filed Jun. 10, 2002, co-pending U.S. Provisional Patent Application No. 60/358,321, filed Feb. 19, 2002, and U.S. Pat. No. 5,841,886.


CONCLUDING REMARKS

In describing the embodiments of the invention illustrated in the figures, specific terminology (e.g., language, phrases, product brand names, etc.) is used for the sake of clarity. These names are provided by way of example only and are not limiting. The invention is not limited to the specific terminology so selected, and each specific term at least includes all grammatical, literal, scientific, technical, and functional equivalents, as well as anything else that operates in a similar manner to accomplish a similar purpose. Furthermore, in the illustrations, Figures, and text, specific names may be given to specific features, modules, tables, software modules, objects, data structures, servers, etc. Such terminology used herein, however, is for the purpose of description and not limitation.


Although the invention has been described and pictured in a preferred form with a certain degree of particularity, it is understood that the present disclosure of the preferred form has been made only by way of example, and that numerous changes in the details of construction and combination and arrangement of parts may be made without departing from the spirit and scope of the invention. In the Figures of this application, in some instances, a plurality of system elements or method steps may be shown as illustrative of a particular system element, and a single system element or method step may be shown as illustrative of a plurality of particular system elements or method steps. It should be understood that showing a plurality of a particular element or step is not intended to imply that a system or method implemented in accordance with the invention must comprise more than one of that element or step, nor is it intended by illustrating a single element or step that the invention is limited to embodiments having only a single one of that respective element or step. In addition, the total number of elements or steps shown for a particular system element or method is not intended to be limiting; those skilled in the art will recognize that the number of a particular system element or method step can, in some instances, be selected to accommodate particular user needs.


It also should be noted that the previous illustrations of screen shots, together with the accompanying descriptions, are provided by way of example only and are not limiting. Those skilled in the art will recognize that many different designs of interfaces, screen shots, navigation patterns, and the like, are within the spirit and scope of the invention.


Having described and illustrated the principles of the technology with reference to specific implementations, it will be recognized that the technology can be implemented in many other, different, forms, and in many different environments. The technology disclosed herein can be used in combination with other technologies. Also, instead of ID documents, the inventive techniques can be employed with product tags, product packaging, labels, business cards, bags, charts, smart cards, maps, etc. The term ID document is broadly defined herein to include these tags, maps, labels, packaging, cards, etc.


It should be appreciated that the methods described above, as well as the methods for implementing and embedding digital watermarks, can be carried out on a general-purpose computer. These methods can, of course, be implemented using software, hardware, or a combination of hardware and software. Systems and methods in accordance with the invention can be implemented using any type of general purpose computer system, such as a personal computer (PC), laptop computer, server, workstation, personal digital assistant (PDA), mobile communications device, interconnected group of general purpose computers, and the like, running any one of a variety of operating systems. We note that some image-handling software, such as Adobe's PrintShop, as well as image-adaptive software such as LEADTOOLS (which provides a library of image-processing functions and is available from LEAD Technologies, Inc., of Charlotte, N.C.) can be used to facilitate these methods, including steps such as providing enhanced contrast, converting from a color image to a monochromatic image, thickening an edge, dithering, registration, manually adjusting a shadow, etc. Computer executable software embodying the steps, or a subset of the steps, can be stored on computer readable media, such as a diskette, removable media, DVD, CD, hard drive, electronic memory circuit, etc.


Moreover, those of ordinary skill in the art will appreciate that the embodiments of the invention described herein can be modified to accommodate and/or comply with changes and improvements in the applicable technology and standards referred to herein. Variations, modifications, and other implementations of what is described herein can occur to those of ordinary skill in the art without departing from the spirit and the scope of the invention as claimed.


The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this and the referenced patents/applications are also expressly contemplated. As those skilled in the art will recognize, variations, modifications, and other implementations of what is described herein can occur to those of ordinary skill in the art without departing from the spirit and the scope of the invention as claimed. Accordingly, the foregoing description is by way of example only and is not intended as limiting. The invention's scope is defined in the following claims and the equivalents thereto.


Having described the preferred embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may be used. These embodiments should not be limited to the disclosed embodiments, but rather should be limited only by the spirit and scope of the appended claims.

Claims
  • 1. A computer-implemented system for issuing identification documents to one of a plurality of individuals, comprising: a workstation, the workstation having a processor, a memory, an input device and a display; a first database, the first database operatively connected to the workstation and storing a plurality of digitized images, each digitized image comprising a biometric image of an individual seeking an identification document; a server in operable communication with the workstation and with the first database, the server programmed to: send, at a predetermined time, one or more digitized images of the individual from the first database to a biometric recognition system, the biometric recognition system in operable communication with a second database, the second database including biometric templates associated with a plurality of individuals whose images have been previously captured; the biometric recognition system comparing the digitized image of the individual to the plurality of individuals whose images have been previously captured; the server being further programmed to: (a) receive from the biometric recognition system, for each digitized image of the individual sent, an indicator, based on the biometric searching of the second database, as to whether the second database contains any images of individuals who may at least partially match the digitized image of the individual that was sent; and (b) receive from the biometric recognition system a list of images of the individuals who may at least partially match the digitized image of the individual that was sent, together with a score of each individual, the score indicating a score above a predetermined threshold relating to the degree of matching to the individual seeking an identification document; the workstation being configured to permit a user to review the indicator and the scores of individuals from the biometric recognition system and to make a determination as to whether the individual is authorized to be issued an identification document or to keep an identification document already in the individual's possession.
  • 2. The system of claim 1 wherein the digitized image is at least one of a facial, fingerprint, thumbprint, and iris image.
  • 3. The system of claim 1 wherein the identification document is a driver's license.
  • 4. The system of claim 1, wherein the biometric recognition system is programmed to create a biometric template based on the digitized image received from the first server and to use that biometric template to search the second database.
  • 5. The system of claim 1, wherein the server is programmed to create a biometric template and provide that template to the biometric recognition system.
  • 6. The system of claim 1, wherein the indicator comprises a list of further data associated with the individual whose image at least partially matches the digitized image that was sent.
  • 7. The system of claim 6 further comprising a third database in operable communication with the workstation, the third database storing at least one of images and non-image data associated with each biometric template in the second database, wherein the workstation is configured to be able to retrieve information from the third database upon request and display it to a user.
  • 8. The system of claim 7, wherein the indicator is displayed on a user interface of the display, the user interface retrieving from the third database the images of at least a portion of the images of individuals that the biometric recognition system has determined may at least partially resemble the digitized image that was sent.
  • 9. The system of claim 8, wherein each image accessible to the workstation system is associated with at least one of additional biometric data and demographic information and wherein the user interface is operable to permit a user to do at least one of the following functions: visually compare the digitized image that was sent directly to an image of an individual whose data was returned in the indicator by the biometric recognition system; visually compare demographic information associated with the individual whose digitized image was sent directly to demographic information of an individual whose data was returned in the indicator by the biometric recognition system; visually compare the other biometric information associated with the digitized image that was sent to other biometric information associated with an individual whose data was returned in the indicator by the biometric recognition system; create a new biometric template of the digitized image that was sent and conduct a new search of the biometric recognition system using the new biometric template; perform a re-alignment of the digitized image and use the re-alignment data to conduct a new search of the biometric recognition system; capture a new image of the individual whose digitized image was sent; adding a notification to a record associated with at least one of the digitized image that was sent and the data that was returned in the indicator by the biometric recognition system, the notification providing an alert that there may be a problem with the record; and selecting at least one of the images of an individual whose data was returned in the indicator by the biometric recognition system and sending that image to the biometric recognition search system to run a search on that image.
  • 10. The system of claim 1, further comprising a capture station configured to acquire at least one digitized image of an individual seeking an identification document and to provide the digitized image to the first server.
PRIORITY CLAIM

This application claims priority to the following U.S. provisional patent application: Systems and Methods for Managing and Detecting Fraud in Image Databases Used With Identification Documents (Application No. 60/429,501, filed Nov. 26, 2002). This application also is related to the following U.S. provisional and nonprovisional patent applications: Integrating and Enhancing Searching of Media Content and Biometric Databases (Application No. 60/451,840, filed Mar. 3, 2003); Systems and Methods for Detecting Skin, Eye Region, and Pupils (Application No. 60/480,257, filed Jun. 20, 2003); Identification Card Printed With Jet Inks and Systems and Methods of Making Same (application Ser. No. 10/289,962, filed Nov. 6, 2002, Inventors Robert Jones, Dennis Mailloux, and Daoshen Bi); Laser Engraving Methods and Compositions, and Articles Having Laser Engraving Thereon (application Ser. No. 10/326,886, filed Dec. 20, 2002, Inventors Brian Labrec and Robert Jones); Multiple Image Security Features for Identification Documents and Methods of Making Same (application Ser. No. 10/325,434, filed Dec. 18, 2002, Inventors Brian Labrec, Joseph Anderson, Robert Jones, and Danielle Batey); Covert Variable Information on Identification Documents and Methods of Making Same (application Ser. No. 10/330,032, filed Dec. 24, 2002, Inventors Robert Jones and Daoshen Bi); Image Processing Techniques for Printing Identification Cards and Documents (application Ser. No. 11/411,354, filed Apr. 9, 2003, Inventors Chuck Duggan and Nelson Schneck); Enhanced Shadow Reduction System and Related Technologies for Digital Image Capture (Application No. 60/447,502, filed Feb. 13, 2003, Inventors Scott D. Haigh, Tuan A. Hoang, Charles R. Duggan, David Bohaker, and Leo M. Kenen); Enhanced Shadow Reduction System and Related Technologies for Digital Image Capture (application Ser. No. 10/663,439, filed Sep. 15, 2003, Inventors Scott D. Haigh, Tuan A. Hoang, Charles R. Duggan, David Bohaker, and Leo M. Kenen); All In One Capture Station for Creating Identification Documents (application Ser. No. 10/676,362, filed Sep. 30, 2003); Systems and Methods for Recognition of Individuals Using Multiple Biometric Searches (application Ser. No. 10/686,005, Inventors James V. Howard and Francis Frazier); and Detecting Skin, Eye Region, and Pupils in the Presence of Eyeglasses (Application No. 60/514,395, filed Oct. 23, 2003, Inventor Kyungtae Hwang). The present invention is also related to U.S. patent application Ser. No. 09/747,735, filed Dec. 22, 2000, Ser. No. 09/602,313, filed Jun. 23, 2000, and Ser. No. 10/094,593, filed Mar. 6, 2002, U.S. Provisional Patent Application No. 60/358,321, filed Feb. 19, 2002, as well as U.S. Pat. No. 6,066,594.

US Referenced Citations (1032)
Number Name Date Kind
2815310 Anderson Dec 1957 A
2957830 Goldberg Oct 1960 A
3153166 Thorton, Jr., et al. Oct 1964 A
3225457 Schure Dec 1965 A
3238595 Schwartz Mar 1966 A
3413171 Hannon Nov 1968 A
3496262 Long et al. Feb 1970 A
3569619 Simjian Mar 1971 A
3571957 Cumming et al. Mar 1971 A
3582439 Thomas Jun 1971 A
3601913 Pollock Aug 1971 A
3614430 Berler Oct 1971 A
3614839 Thomas Oct 1971 A
3640009 Komiyama Feb 1972 A
3647275 Ward Mar 1972 A
3665162 Yamamoto et al. May 1972 A
3703628 Philipson, Jr. Nov 1972 A
3737226 Shank Jun 1973 A
3758970 Annenberg Sep 1973 A
3802101 Scantlin Apr 1974 A
3805238 Rothfjell Apr 1974 A
3838444 Loughlin et al. Sep 1974 A
3845391 Crosby Oct 1974 A
3860558 Klemchuk Jan 1975 A
3914484 Creegan et al. Oct 1975 A
3914877 Hines Oct 1975 A
3922074 Ikegami et al. Nov 1975 A
3929701 Hall et al. Dec 1975 A
3932036 Ueda et al. Jan 1976 A
3949501 Andrews et al. Apr 1976 A
3953869 Wah Lo et al. Apr 1976 A
3961956 Fukuda et al. Jun 1976 A
3975291 Claussen et al. Aug 1976 A
3984624 Waggener Oct 1976 A
3987711 Silver Oct 1976 A
4035740 Schafer et al. Jul 1977 A
4051374 Drexhage et al. Sep 1977 A
4072911 Walther et al. Feb 1978 A
4082873 Williams Apr 1978 A
4096015 Kawamata et al. Jun 1978 A
4100509 Walther et al. Jul 1978 A
4104555 Fleming Aug 1978 A
4119361 Greenaway Oct 1978 A
4121003 Williams Oct 1978 A
4131337 Moraw et al. Dec 1978 A
4155618 Regnault et al. May 1979 A
4171766 Ruell Oct 1979 A
4183989 Tooth Jan 1980 A
4184701 Franklin et al. Jan 1980 A
4225967 Miwa et al. Sep 1980 A
4230990 Lert, Jr. et al. Oct 1980 A
4231113 Blasbalg Oct 1980 A
4238849 Gassmann Dec 1980 A
4252995 Schmidt et al. Feb 1981 A
4256900 Raue Mar 1981 A
4270130 Houle et al. May 1981 A
4271395 Brinkmann et al. Jun 1981 A
4274062 Brinkmann et al. Jun 1981 A
4289957 Neyroud et al. Sep 1981 A
4301091 Schieder et al. Nov 1981 A
4304809 Moraw et al. Dec 1981 A
4313197 Maxemchuk Jan 1982 A
4313984 Moraw et al. Feb 1982 A
4317782 Eckstein et al. Mar 1982 A
4324421 Moraw et al. Apr 1982 A
4326066 Eckstein et al. Apr 1982 A
4330350 Andrews May 1982 A
4338258 Brinkwerth et al. Jul 1982 A
4356052 Moraw et al. Oct 1982 A
4359633 Bianco Nov 1982 A
4360548 Skees et al. Nov 1982 A
4367488 Leventer et al. Jan 1983 A
4379947 Warner Apr 1983 A
4380027 Leventer et al. Apr 1983 A
4384973 Harnisch May 1983 A
4395600 Lundy et al. Jul 1983 A
4415225 Benton et al. Nov 1983 A
4417784 Knop et al. Nov 1983 A
4423415 Goldman Dec 1983 A
4425642 Moses et al. Jan 1984 A
4428997 Shulman Jan 1984 A
4443438 Kasamatsu et al. Apr 1984 A
4450024 Haghiri-Tehrani et al. May 1984 A
4467209 Maurer et al. Aug 1984 A
4468468 Benninghoven et al. Aug 1984 A
4474439 Brown Oct 1984 A
4476468 Goldman Oct 1984 A
4506148 Berthold et al. Mar 1985 A
4507346 Maurer et al. Mar 1985 A
4510311 Eckstein Apr 1985 A
4516845 Blakely et al. May 1985 A
4522881 Kobayashi et al. Jun 1985 A
4523777 Holbein et al. Jun 1985 A
4527059 Benninghoven et al. Jul 1985 A
4528588 Lofberg Jul 1985 A
4529992 Ishida et al. Jul 1985 A
4532508 Ruell Jul 1985 A
4544181 Maurer et al. Oct 1985 A
4547804 Greenberg Oct 1985 A
4551265 Brinkwerth et al. Nov 1985 A
4553261 Froessl Nov 1985 A
4568824 Gareis et al. Feb 1986 A
4572634 Livingston et al. Feb 1986 A
4579754 Maurer et al. Apr 1986 A
4590366 Rothfjell May 1986 A
4595950 Lofberg Jun 1986 A
4596409 Holbein et al. Jun 1986 A
4597592 Maurer et al. Jul 1986 A
4597593 Maurer Jul 1986 A
4597655 Mann Jul 1986 A
4599259 Kobayashi et al. Jul 1986 A
4617216 Haghiri-Tehrani et al. Oct 1986 A
4621271 Brownstein Nov 1986 A
4627997 Ide Dec 1986 A
4629215 Maurer et al. Dec 1986 A
4637051 Clark Jan 1987 A
4638289 Zottnik Jan 1987 A
4652722 Stone et al. Mar 1987 A
4653775 Raphael et al. Mar 1987 A
4653862 Morozumi Mar 1987 A
4654290 Spanjer Mar 1987 A
4656585 Stephenson Apr 1987 A
4660221 Dlugos Apr 1987 A
4663518 Borror et al. May 1987 A
4665431 Cooper May 1987 A
4670882 Telle et al. Jun 1987 A
4672605 Hustig et al. Jun 1987 A
4672891 Maurer et al. Jun 1987 A
4675746 Tetrick et al. Jun 1987 A
4677435 Causse D'Agraives et al. Jun 1987 A
4682794 Margolin Jul 1987 A
4687526 Wilfert Aug 1987 A
4689477 Goldman Aug 1987 A
4701040 Miller Oct 1987 A
4703476 Howard Oct 1987 A
4709384 Schiller Nov 1987 A
4711690 Haghiri-Tehrani Dec 1987 A
4712103 Gotanda Dec 1987 A
4718106 Weinblatt Jan 1988 A
4732410 Holbein et al. Mar 1988 A
4735670 Maurer et al. Apr 1988 A
4738949 Sethi et al. Apr 1988 A
4739377 Allen Apr 1988 A
4741042 Throop et al. Apr 1988 A
4748452 Maurer May 1988 A
4750173 Bluthgen Jun 1988 A
4751525 Robinson Jun 1988 A
4754128 Takeda et al. Jun 1988 A
4765636 Speer Aug 1988 A
4765656 Becker et al. Aug 1988 A
4766026 Lass et al. Aug 1988 A
4773677 Plasse Sep 1988 A
4775901 Nakano Oct 1988 A
4776013 Kafri et al. Oct 1988 A
4790566 Boissier et al. Dec 1988 A
4803114 Schledorn Feb 1989 A
4805020 Greenberg Feb 1989 A
4807031 Broughton et al. Feb 1989 A
4811357 Betts et al. Mar 1989 A
4811408 Goldman Mar 1989 A
4816372 Schenk et al. Mar 1989 A
4816374 Lecomte Mar 1989 A
4820912 Samyn Apr 1989 A
4822973 Fahner et al. Apr 1989 A
4835517 van der Gracht et al. May 1989 A
4841134 Hida et al. Jun 1989 A
4855827 Best Aug 1989 A
4859361 Reilly et al. Aug 1989 A
4861620 Azuma et al. Aug 1989 A
4864618 Wright et al. Sep 1989 A
4866025 Byers et al. Sep 1989 A
4866027 Henzel Sep 1989 A
4866771 Bain Sep 1989 A
4869946 Clay Sep 1989 A
4871714 Byers et al. Oct 1989 A
4876234 Henzel Oct 1989 A
4876237 Byers et al. Oct 1989 A
4876617 Best et al. Oct 1989 A
4878167 Kapulka et al. Oct 1989 A
4879747 Leighton et al. Nov 1989 A
4884139 Pommier Nov 1989 A
4888798 Earnest Dec 1989 A
4889749 Ohashi et al. Dec 1989 A
4891351 Byers et al. Jan 1990 A
4894110 Lass et al. Jan 1990 A
4903301 Kondo et al. Feb 1990 A
4908836 Rushforth et al. Mar 1990 A
4908873 Philibert et al. Mar 1990 A
4911370 Schippers et al. Mar 1990 A
4915237 Chang et al. Apr 1990 A
4921278 Shiang et al. May 1990 A
4931793 Fuhrmann et al. Jun 1990 A
4935335 Fotland Jun 1990 A
4939515 Adelson Jul 1990 A
4941150 Iwasaki Jul 1990 A
4943973 Werner Jul 1990 A
4943976 Ishigaki Jul 1990 A
4944036 Hyatt Jul 1990 A
4947028 Gorog Aug 1990 A
4959406 Foltin et al. Sep 1990 A
4963998 Maufe Oct 1990 A
4965827 McDonald Oct 1990 A
4967273 Greenberg Oct 1990 A
4968063 McConville et al. Nov 1990 A
4969041 O'Grady et al. Nov 1990 A
4972471 Gross et al. Nov 1990 A
4972476 Nathans Nov 1990 A
4977594 Shear Dec 1990 A
4979210 Nagata et al. Dec 1990 A
4990759 Gloton et al. Feb 1991 A
4992353 Rodakis et al. Feb 1991 A
4993068 Piosenka et al. Feb 1991 A
4994831 Marandi Feb 1991 A
4995081 Leighton et al. Feb 1991 A
4996530 Hilton Feb 1991 A
4999065 Wilfert Mar 1991 A
5005872 Lass et al. Apr 1991 A
5005873 West Apr 1991 A
5006503 Byers et al. Apr 1991 A
5010405 Schreiber et al. Apr 1991 A
5011816 Byers et al. Apr 1991 A
5013900 Hoppe May 1991 A
5023907 Johnson et al. Jun 1991 A
5024989 Chiang et al. Jun 1991 A
5027401 Soltesz Jun 1991 A
5036513 Greenblatt Jul 1991 A
5051147 Anger Sep 1991 A
5053956 Donald et al. Oct 1991 A
5058926 Drower Oct 1991 A
5060981 Fossum et al. Oct 1991 A
5061341 Kildal et al. Oct 1991 A
5062341 Reiling et al. Nov 1991 A
5063446 Gibson Nov 1991 A
5066947 Du Castel Nov 1991 A
5073899 Collier et al. Dec 1991 A
5075195 Babler et al. Dec 1991 A
5075769 Allen et al. Dec 1991 A
5079411 Lee Jan 1992 A
5079648 Maufe Jan 1992 A
5086469 Gupta et al. Feb 1992 A
5087507 Heinzer Feb 1992 A
5089350 Talvalkar et al. Feb 1992 A
5095196 Miyata Mar 1992 A
5099422 Foresman et al. Mar 1992 A
5100711 Satake et al. Mar 1992 A
5103459 Gilhousen et al. Apr 1992 A
5113445 Wang May 1992 A
5113518 Durst, Jr. et al. May 1992 A
5122813 Lass et al. Jun 1992 A
5128779 Mallik Jul 1992 A
5128859 Carbone et al. Jul 1992 A
5138070 Berneth Aug 1992 A
5138604 Umeda et al. Aug 1992 A
5138712 Corbin Aug 1992 A
5146457 Veldhuis et al. Sep 1992 A
5148498 Resnikoff et al. Sep 1992 A
5150409 Elsner Sep 1992 A
5156938 Foley et al. Oct 1992 A
5157424 Craven et al. Oct 1992 A
5161210 Druyvesteyn et al. Nov 1992 A
5166676 Milheiser Nov 1992 A
5169707 Faykish et al. Dec 1992 A
5171625 Newton Dec 1992 A
5172281 Ardis et al. Dec 1992 A
5173840 Kodai et al. Dec 1992 A
5179392 Kawaguchi Jan 1993 A
5180309 Egnor Jan 1993 A
5181786 Hujink Jan 1993 A
5185736 Tyrrell et al. Feb 1993 A
5191522 Bosco et al. Mar 1993 A
5199081 Saito et al. Mar 1993 A
5200822 Bronfin et al. Apr 1993 A
5201044 Frey, Jr. et al. Apr 1993 A
5208450 Uenishi et al. May 1993 A
5212551 Conanan May 1993 A
5213337 Sherman May 1993 A
5215864 Laakmann Jun 1993 A
5216543 Calhoun Jun 1993 A
5224173 Kuhns et al. Jun 1993 A
5228056 Schilling Jul 1993 A
5233513 Doyle Aug 1993 A
5237164 Takada Aug 1993 A
5243423 DeJean et al. Sep 1993 A
5243524 Ishida et al. Sep 1993 A
5245329 Gokcebay Sep 1993 A
5249546 Pennelle Oct 1993 A
5253078 Balkanski et al. Oct 1993 A
5258998 Koide Nov 1993 A
5259025 Monroe et al. Nov 1993 A
5261987 Luening et al. Nov 1993 A
5262860 Fitzpatrick et al. Nov 1993 A
5267334 Normille et al. Nov 1993 A
5267755 Yamauchi et al. Dec 1993 A
5270526 Yoshihara Dec 1993 A
5272039 Yoerger Dec 1993 A
5276478 Morton Jan 1994 A
5280537 Sugiyama et al. Jan 1994 A
5284364 Jain Feb 1994 A
5288976 Citron et al. Feb 1994 A
5293399 Hefti Mar 1994 A
5294774 Stone Mar 1994 A
5294944 Takeyama et al. Mar 1994 A
5295203 Krause et al. Mar 1994 A
5298922 Merkle et al. Mar 1994 A
5299019 Pack et al. Mar 1994 A
5301981 Nesis Apr 1994 A
5304513 Haghiri-Tehrani et al. Apr 1994 A
5304789 Lob et al. Apr 1994 A
5305400 Butera Apr 1994 A
5308736 Defieuw et al. May 1994 A
5315098 Tow May 1994 A
5317503 Inoue May 1994 A
5319453 Copriviza et al. Jun 1994 A
5319724 Blonstein et al. Jun 1994 A
5319735 Preuss et al. Jun 1994 A
5321751 Ray et al. Jun 1994 A
5325167 Melen Jun 1994 A
5334573 Schild Aug 1994 A
5336657 Egashira et al. Aug 1994 A
5337361 Wang et al. Aug 1994 A
5351302 Leighton et al. Sep 1994 A
5374675 Plachetta et al. Dec 1994 A
5379345 Greenberg Jan 1995 A
5380044 Aitkens et al. Jan 1995 A
5380695 Chiang et al. Jan 1995 A
5384846 Berson et al. Jan 1995 A
5385371 Izawa Jan 1995 A
5386566 Hamanaka et al. Jan 1995 A
5387013 Yamauchi et al. Feb 1995 A
5393099 D'Amato Feb 1995 A
5394274 Kahn Feb 1995 A
5394555 Hunter et al. Feb 1995 A
5396559 McGrew Mar 1995 A
5404377 Moses Apr 1995 A
5408542 Callahan Apr 1995 A
5409797 Hosoi et al. Apr 1995 A
5410142 Tsuboi et al. Apr 1995 A
5421619 Dyball Jun 1995 A
5421869 Gundjian et al. Jun 1995 A
5422213 Yu et al. Jun 1995 A
5422230 Boggs et al. Jun 1995 A
5422963 Chen et al. Jun 1995 A
5422995 Aoki et al. Jun 1995 A
5424119 Phillips et al. Jun 1995 A
5428607 Hiller et al. Jun 1995 A
5428731 Powers, III Jun 1995 A
5432870 Schwartz Jul 1995 A
5434994 Shaheen et al. Jul 1995 A
5435599 Bernecker Jul 1995 A
5436970 Ray et al. Jul 1995 A
5446273 Leslie Aug 1995 A
5446659 Yamawaki Aug 1995 A
5448053 Rhoads Sep 1995 A
5449200 Andric et al. Sep 1995 A
5450490 Jensen et al. Sep 1995 A
5450504 Calia Sep 1995 A
5451478 Boggs et al. Sep 1995 A
5454598 Wicker Oct 1995 A
5455947 Suzuki et al. Oct 1995 A
5458713 Ojster Oct 1995 A
5463209 Figh et al. Oct 1995 A
5463212 Oshima et al. Oct 1995 A
5466012 Puckett et al. Nov 1995 A
5469506 Berson et al. Nov 1995 A
5471533 Wang et al. Nov 1995 A
5473631 Moses Dec 1995 A
5474875 Loerzer et al. Dec 1995 A
5479168 Johnson et al. Dec 1995 A
5483442 Black et al. Jan 1996 A
5483632 Kuwamoto et al. Jan 1996 A
5489639 Faber et al. Feb 1996 A
5490217 Wang et al. Feb 1996 A
5493677 Balogh et al. Feb 1996 A
5495411 Ananda Feb 1996 A
5495581 Tsai Feb 1996 A
5496071 Walsh Mar 1996 A
5499294 Friedman Mar 1996 A
5499330 Lucas et al. Mar 1996 A
5504674 Chen et al. Apr 1996 A
5505494 Belluci et al. Apr 1996 A
5509693 Kohls Apr 1996 A
5514860 Berson May 1996 A
5515081 Vasilik May 1996 A
5516362 Gundjian et al. May 1996 A
5522623 Soules et al. Jun 1996 A
5523125 Kennedy et al. Jun 1996 A
5523942 Tyler et al. Jun 1996 A
5524489 Twigg Jun 1996 A
5524933 Kunt et al. Jun 1996 A
5525403 Kawabata et al. Jun 1996 A
5529345 Kohls Jun 1996 A
5530852 Meske, Jr. et al. Jun 1996 A
5532104 Goto Jul 1996 A
5534372 Koshizuka et al. Jul 1996 A
5548645 Ananda Aug 1996 A
5550346 Andriash et al. Aug 1996 A
5550976 Henderson et al. Aug 1996 A
5553143 Ross et al. Sep 1996 A
5560799 Jacobsen Oct 1996 A
5573584 Ostertag et al. Nov 1996 A
5576377 El Sayed et al. Nov 1996 A
5579479 Plum Nov 1996 A
5579694 Mailloux Dec 1996 A
5586310 Sharman Dec 1996 A
5594226 Steger Jan 1997 A
5594809 Kopec et al. Jan 1997 A
5612943 Moses et al. Mar 1997 A
5613004 Cooperman et al. Mar 1997 A
5629093 Bischof et al. May 1997 A
5629512 Haga May 1997 A
5629980 Stefik et al. May 1997 A
5633119 Burberry et al. May 1997 A
5634012 Stefik et al. May 1997 A
5635012 Belluci et al. Jun 1997 A
5636276 Brugger Jun 1997 A
5636292 Rhoads Jun 1997 A
5638443 Stefik et al. Jun 1997 A
5638508 Kanai et al. Jun 1997 A
5639819 Farkas et al. Jun 1997 A
5640193 Wellner Jun 1997 A
5640647 Hube Jun 1997 A
5645281 Hesse et al. Jul 1997 A
5646997 Barton Jul 1997 A
5646999 Saito Jul 1997 A
5652626 Kawakami et al. Jul 1997 A
5652714 Peterson et al. Jul 1997 A
5654105 Obringer et al. Aug 1997 A
5654867 Murray Aug 1997 A
5657462 Brouwer et al. Aug 1997 A
5658411 Faykish Aug 1997 A
5659164 Schmid et al. Aug 1997 A
5659726 Sandford, II et al. Aug 1997 A
5663766 Sizer, II Sep 1997 A
5665951 Newman et al. Sep 1997 A
5668636 Beach et al. Sep 1997 A
5669995 Hong Sep 1997 A
5671005 McNay et al. Sep 1997 A
5671282 Wolff et al. Sep 1997 A
5673316 Auerbach et al. Sep 1997 A
5680223 Cooper et al. Oct 1997 A
5681356 Barak et al. Oct 1997 A
5683774 Faykish et al. Nov 1997 A
5684885 Cass et al. Nov 1997 A
5687236 Moskowitz et al. Nov 1997 A
5688738 Lu Nov 1997 A
5689620 Kopec et al. Nov 1997 A
5689706 Rao et al. Nov 1997 A
5691757 Hayashihara et al. Nov 1997 A
5694471 Chen et al. Dec 1997 A
5696705 Zykan Dec 1997 A
5697006 Taguchi et al. Dec 1997 A
5698296 Dotson et al. Dec 1997 A
5700037 Keller Dec 1997 A
5706364 Kopec et al. Jan 1998 A
5710834 Rhoads Jan 1998 A
5712731 Drinkwater et al. Jan 1998 A
5714291 Marinello et al. Feb 1998 A
5715403 Stefik Feb 1998 A
5717018 Magerstedt et al. Feb 1998 A
5717391 Rodriguez Feb 1998 A
5719667 Miers Feb 1998 A
5719948 Liang Feb 1998 A
5721781 Deo et al. Feb 1998 A
5721788 Powell et al. Feb 1998 A
5734119 France et al. Mar 1998 A
5734752 Knox Mar 1998 A
5742411 Walters Apr 1998 A
5742845 Wagner Apr 1998 A
5745308 Spangenberg Apr 1998 A
5745901 Entner et al. Apr 1998 A
5748783 Rhoads May 1998 A
5760386 Ward Jun 1998 A
5761686 Bloomberg Jun 1998 A
5763868 Kubota et al. Jun 1998 A
5764263 Lin Jun 1998 A
5765152 Erickson Jun 1998 A
5767496 Swartz et al. Jun 1998 A
5768001 Kelley et al. Jun 1998 A
5768426 Rhoads Jun 1998 A
5768505 Gilchrist et al. Jun 1998 A
5768506 Randell Jun 1998 A
5769301 Hebert et al. Jun 1998 A
5773677 Lansink-Rotgerink et al. Jun 1998 A
5774168 Blome Jun 1998 A
5774452 Wolosewicz Jun 1998 A
5776278 Tuttle et al. Jul 1998 A
5778102 Sandford, II et al. Jul 1998 A
5783024 Forkert Jul 1998 A
5786587 Colgate, Jr. Jul 1998 A
5787186 Schroeder Jul 1998 A
5787269 Hyodo Jul 1998 A
5790703 Wang Aug 1998 A
5795643 Steininger et al. Aug 1998 A
5797134 McMillan et al. Aug 1998 A
5798949 Kaub Aug 1998 A
5799092 Kristol et al. Aug 1998 A
5801687 Peterson et al. Sep 1998 A
5801857 Heckenkamp et al. Sep 1998 A
5804803 Cragun et al. Sep 1998 A
5808758 Solmsdorf Sep 1998 A
5809139 Girod et al. Sep 1998 A
5809317 Kogan et al. Sep 1998 A
5809633 Mundigl et al. Sep 1998 A
5815093 Kikinis Sep 1998 A
5815292 Walters Sep 1998 A
5816619 Schaede Oct 1998 A
5818441 Throckmorton et al. Oct 1998 A
5824447 Tavernier et al. Oct 1998 A
5824715 Hayashihara et al. Oct 1998 A
5825892 Braudaway et al. Oct 1998 A
5828325 Wolosewicz et al. Oct 1998 A
5832481 Sheffield Nov 1998 A
5834118 Rånby et al. Nov 1998 A
5840142 Stevenson et al. Nov 1998 A
5840791 Magerstedt et al. Nov 1998 A
5841886 Rhoads Nov 1998 A
5841978 Rhoads Nov 1998 A
5844685 Gontin Dec 1998 A
5848413 Wolff Dec 1998 A
5848424 Scheinkman et al. Dec 1998 A
5852673 Young Dec 1998 A
5853955 Towfiq Dec 1998 A
5855969 Robertson Jan 1999 A
5856661 Finkelstein et al. Jan 1999 A
5857038 Owada et al. Jan 1999 A
5861662 Candelore Jan 1999 A
5862260 Rhoads Jan 1999 A
5862500 Goodwin Jan 1999 A
5864622 Marcus Jan 1999 A
5864623 Messina et al. Jan 1999 A
5866644 Mercx et al. Feb 1999 A
5867199 Knox et al. Feb 1999 A
5867586 Liang Feb 1999 A
5869819 Knowles et al. Feb 1999 A
5870711 Huffman Feb 1999 A
5871615 Harris Feb 1999 A
5872589 Morales Feb 1999 A
5872627 Miers Feb 1999 A
5873066 Underwood et al. Feb 1999 A
5875249 Mintzer et al. Feb 1999 A
5877707 Kowalick Mar 1999 A
5879502 Gustafson Mar 1999 A
5879784 Breen et al. Mar 1999 A
5888624 Haghiri et al. Mar 1999 A
5892661 Stafford et al. Apr 1999 A
5892900 Ginter et al. Apr 1999 A
5893910 Martineau et al. Apr 1999 A
5895074 Chess et al. Apr 1999 A
5897938 Shinmoto et al. Apr 1999 A
5900608 Iida May 1999 A
5902353 Reber et al. May 1999 A
5903729 Reber et al. May 1999 A
5905248 Russell et al. May 1999 A
5905251 Knowles May 1999 A
5907149 Marckini May 1999 A
5907848 Zaiken et al. May 1999 A
5909683 Miginiac et al. Jun 1999 A
5912767 Lee Jun 1999 A
5912974 Holloway et al. Jun 1999 A
5913210 Call Jun 1999 A
5915027 Cox et al. Jun 1999 A
5918213 Bernard et al. Jun 1999 A
5918214 Perkowski Jun 1999 A
5919853 Condit et al. Jul 1999 A
5920861 Hall et al. Jul 1999 A
5920878 DeMont Jul 1999 A
5923380 Yang et al. Jul 1999 A
5925500 Yang et al. Jul 1999 A
5926822 Garman Jul 1999 A
5928989 Ohnishi et al. Jul 1999 A
5930377 Powell et al. Jul 1999 A
5930759 Moore et al. Jul 1999 A
5930767 Reber et al. Jul 1999 A
5932863 Rathus et al. Aug 1999 A
5933816 Zeanah et al. Aug 1999 A
5933829 Durst et al. Aug 1999 A
5935694 Olmstead et al. Aug 1999 A
5936986 Cantatore et al. Aug 1999 A
5937189 Branson et al. Aug 1999 A
5938726 Reber et al. Aug 1999 A
5938727 Ikeda Aug 1999 A
5939695 Nelson Aug 1999 A
5939699 Perttunen et al. Aug 1999 A
5940595 Reber et al. Aug 1999 A
5944356 Bergmann et al. Aug 1999 A
5944881 Mehta et al. Aug 1999 A
5947369 Frommer et al. Sep 1999 A
5948035 Tomita Sep 1999 A
5949055 Fleet et al. Sep 1999 A
5950169 Borghesi et al. Sep 1999 A
5950173 Perkowski Sep 1999 A
5953710 Fleming Sep 1999 A
5955021 Tiffany, III Sep 1999 A
5955961 Wallerstein Sep 1999 A
5956687 Wamsley et al. Sep 1999 A
5958528 Bernecker Sep 1999 A
5962840 Haghiri-Tehrani et al. Oct 1999 A
5963916 Kaplan Oct 1999 A
5965242 Patton et al. Oct 1999 A
5969324 Reber et al. Oct 1999 A
5971277 Cragun et al. Oct 1999 A
5973842 Spangenberg Oct 1999 A
5974141 Saito Oct 1999 A
5974548 Adams Oct 1999 A
5975583 Cobben et al. Nov 1999 A
5977514 Feng et al. Nov 1999 A
5978773 Hudetz et al. Nov 1999 A
5979757 Tracy et al. Nov 1999 A
5982912 Fukui et al. Nov 1999 A
5983218 Syeda-Mahmood Nov 1999 A
5984366 Priddy Nov 1999 A
5985078 Suess et al. Nov 1999 A
5987434 Libman Nov 1999 A
5988820 Huang et al. Nov 1999 A
5991429 Coffin et al. Nov 1999 A
5991733 Aleia et al. Nov 1999 A
5991876 Johnson et al. Nov 1999 A
6000607 Ohki et al. Dec 1999 A
6002383 Shimada Dec 1999 A
6003581 Aihara Dec 1999 A
6007660 Forkert Dec 1999 A
6007929 Robertson et al. Dec 1999 A
6009402 Whitworth Dec 1999 A
6012641 Watada Jan 2000 A
6016225 Anderson Jan 2000 A
6017972 Harris et al. Jan 2000 A
6022905 Harris et al. Feb 2000 A
6024287 Takai et al. Feb 2000 A
6025462 Wang et al. Feb 2000 A
6028134 Zhang et al. Feb 2000 A
6036099 Leighton Mar 2000 A
6036807 Brongers Mar 2000 A
6037102 Loerzer et al. Mar 2000 A
6037860 Zander et al. Mar 2000 A
6038012 Bley Mar 2000 A
6038333 Wang Mar 2000 A
6038393 Iyengar et al. Mar 2000 A
6042249 Spangenberg Mar 2000 A
6043813 Stickney et al. Mar 2000 A
6047888 Dethloff Apr 2000 A
6049055 Fannash et al. Apr 2000 A
6049463 O'Malley et al. Apr 2000 A
6049665 Branson et al. Apr 2000 A
6051297 Maier et al. Apr 2000 A
6052486 Knowlton et al. Apr 2000 A
6054170 Chess et al. Apr 2000 A
6062604 Taylor et al. May 2000 A
6064414 Kobayashi et al. May 2000 A
6064764 Bhaskaran et al. May 2000 A
6064983 Koehler May 2000 A
6066437 Kosslinger May 2000 A
6066594 Gunn et al. May 2000 A
6071855 Patton et al. Jun 2000 A
6072894 Payne Jun 2000 A
6073854 Bravenec et al. Jun 2000 A
6075223 Harrison Jun 2000 A
6076026 Jambhekar et al. Jun 2000 A
6081832 Gilchrist et al. Jun 2000 A
6082778 Solmsdorf Jul 2000 A
6086971 Haas et al. Jul 2000 A
6089614 Howland et al. Jul 2000 A
6092049 Chislenko et al. Jul 2000 A
6095566 Yamamoto et al. Aug 2000 A
6100804 Brady et al. Aug 2000 A
6101602 Fridrich Aug 2000 A
6105007 Norris Aug 2000 A
6106110 Gundjian et al. Aug 2000 A
6110864 Lu Aug 2000 A
6111506 Yap et al. Aug 2000 A
6111517 Atick et al. Aug 2000 A
6115690 Wong Sep 2000 A
6120142 Eltgen et al. Sep 2000 A
6120882 Faykish et al. Sep 2000 A
6122403 Rhoads Sep 2000 A
6127475 Vollenberg et al. Oct 2000 A
6131161 Linnartz Oct 2000 A
6134582 Kennedy Oct 2000 A
6138913 Cyr et al. Oct 2000 A
6141611 Mackey et al. Oct 2000 A
6143852 Harrison et al. Nov 2000 A
6146032 Dunham Nov 2000 A
6146741 Ogawa et al. Nov 2000 A
6151403 Luo Nov 2000 A
6155168 Sakamoto Dec 2000 A
6155605 Bratchley et al. Dec 2000 A
6156032 Lennox Dec 2000 A
6157330 Bruekers et al. Dec 2000 A
6159327 Forkert Dec 2000 A
6160526 Hirai et al. Dec 2000 A
6160903 Hamid et al. Dec 2000 A
6161071 Shuman et al. Dec 2000 A
6162160 Ohshima et al. Dec 2000 A
6163770 Gamble et al. Dec 2000 A
6163842 Barton Dec 2000 A
6164548 Curiel Dec 2000 A
6165696 Fischer Dec 2000 A
6173284 Brown Jan 2001 B1
6173901 McCannel Jan 2001 B1
6174400 Krutak, Sr. et al. Jan 2001 B1
6179338 Bergmann et al. Jan 2001 B1
6181806 Kado et al. Jan 2001 B1
6183018 Braun et al. Feb 2001 B1
6184782 Oda et al. Feb 2001 B1
6185042 Lomb et al. Feb 2001 B1
6185316 Buffam Feb 2001 B1
6185490 Ferguson Feb 2001 B1
6185540 Schreitmueller et al. Feb 2001 B1
6186404 Ehrhart et al. Feb 2001 B1
6199144 Arora et al. Mar 2001 B1
6202932 Rapeli Mar 2001 B1
6205249 Moskowitz Mar 2001 B1
6206292 Robertz et al. Mar 2001 B1
6207244 Hesch Mar 2001 B1
6207344 Ramlow et al. Mar 2001 B1
6209923 Thaxton et al. Apr 2001 B1
6210777 Vermeulen et al. Apr 2001 B1
6214916 Mercx et al. Apr 2001 B1
6214917 Linzmeier et al. Apr 2001 B1
6219639 Bakis et al. Apr 2001 B1
6221552 Street et al. Apr 2001 B1
6223125 Hall Apr 2001 B1
6226623 Schein et al. May 2001 B1
6234537 Gutmann et al. May 2001 B1
6236975 Boe et al. May 2001 B1
6238840 Hirayama et al. May 2001 B1
6238847 Axtell, III et al. May 2001 B1
6243480 Zhao et al. Jun 2001 B1
6244514 Otto Jun 2001 B1
6246933 Bague Jun 2001 B1
6247644 Horne et al. Jun 2001 B1
6250554 Leo et al. Jun 2001 B1
6254127 Breed et al. Jul 2001 B1
6256736 Coppersmith et al. Jul 2001 B1
6257486 Teicher et al. Jul 2001 B1
6258896 Abuelyaman et al. Jul 2001 B1
6259506 Lawandy Jul 2001 B1
6260029 Critelli Jul 2001 B1
6264296 Klinefelter et al. Jul 2001 B1
6268804 Janky et al. Jul 2001 B1
6277232 Wang et al. Aug 2001 B1
6283188 Maynard et al. Sep 2001 B1
6284337 Lorimor et al. Sep 2001 B1
6286036 Rhoads Sep 2001 B1
6286761 Wen Sep 2001 B1
6289108 Rhoads Sep 2001 B1
6291551 Kniess et al. Sep 2001 B1
6292092 Chow et al. Sep 2001 B1
6292575 Bortolussi et al. Sep 2001 B1
6301164 Manning et al. Oct 2001 B1
6301363 Mowry, Jr. Oct 2001 B1
6302444 Cobben Oct 2001 B1
6308187 DeStefano Oct 2001 B1
6311214 Rhoads Oct 2001 B1
6312858 Yacobucci et al. Nov 2001 B1
6313436 Harrison Nov 2001 B1
6316538 Anderson et al. Nov 2001 B1
6321981 Ray et al. Nov 2001 B1
6324091 Gryko et al. Nov 2001 B1
6324573 Rhoads Nov 2001 B1
6326128 Telser et al. Dec 2001 B1
6336096 Jernberg Jan 2002 B1
6340725 Wang et al. Jan 2002 B1
6341169 Cadorette, Jr. et al. Jan 2002 B1
6343138 Rhoads Jan 2002 B1
6343303 Nevranmont Jan 2002 B1
6345105 Nitta et al. Feb 2002 B1
6351537 Dovgodko et al. Feb 2002 B1
6351893 St. Pierre Mar 2002 B1
6357664 Zercher Mar 2002 B1
6363360 Madden Mar 2002 B1
6368684 Onishi et al. Apr 2002 B1
6372394 Zientek Apr 2002 B1
6380131 Griebel et al. Apr 2002 B2
6381415 Terada Apr 2002 B1
6381561 Bomar, Jr. et al. Apr 2002 B1
6389151 Carr et al. May 2002 B1
6389155 Funayama et al. May 2002 B2
6390375 Kayanakis May 2002 B2
6397334 Chainer et al. May 2002 B1
6400386 No Jun 2002 B1
6404643 Chung Jun 2002 B1
6408082 Rhoads et al. Jun 2002 B1
6408304 Kumhyr Jun 2002 B1
6413687 Hattori et al. Jul 2002 B1
6418154 Kneip et al. Jul 2002 B1
6421013 Chung Jul 2002 B1
6424029 Giesler Jul 2002 B1
6424249 Houvener Jul 2002 B1
6427744 Seki et al. Aug 2002 B2
6430306 Slocum et al. Aug 2002 B2
6444068 Koops et al. Sep 2002 B1
6444377 Jotcham et al. Sep 2002 B1
6446086 Bartlett et al. Sep 2002 B1
6446865 Holt et al. Sep 2002 B1
6449377 Rhoads Sep 2002 B1
6463416 Messina Oct 2002 B1
6466982 Ruberg Oct 2002 B1
6469288 Sasaki et al. Oct 2002 B1
6473165 Coombs et al. Oct 2002 B1
6474695 Schneider et al. Nov 2002 B1
6475588 Schottland et al. Nov 2002 B1
6478228 Ikefuji et al. Nov 2002 B1
6478229 Epstein Nov 2002 B1
6482495 Kohama et al. Nov 2002 B1
6483993 Misumi et al. Nov 2002 B1
6485319 Bricaud et al. Nov 2002 B2
6487301 Zhao Nov 2002 B1
6493650 Rodgers et al. Dec 2002 B1
6500386 Burstein Dec 2002 B1
6503310 Sullivan Jan 2003 B1
6525672 Chainer et al. Feb 2003 B2
6526161 Yan Feb 2003 B1
6532459 Berson Mar 2003 B1
6536665 Ray et al. Mar 2003 B1
6536672 Outwater Mar 2003 B1
6542622 Nelson et al. Apr 2003 B1
6546112 Rhoads Apr 2003 B1
6555213 Koneripalli et al. Apr 2003 B1
6570609 Heien May 2003 B1
6580819 Rhoads Jun 2003 B1
6581839 Lasch et al. Jun 2003 B1
6583813 Enright et al. Jun 2003 B1
6606420 Loce et al. Aug 2003 B1
6608911 Lofgren et al. Aug 2003 B2
6614914 Rhoads et al. Sep 2003 B1
6616993 Usuki et al. Sep 2003 B2
6638635 Hattori et al. Oct 2003 B2
6641874 Kuntz et al. Nov 2003 B2
6650761 Rodriguez et al. Nov 2003 B1
6675074 Hathout et al. Jan 2004 B2
6681032 Bortolussi et al. Jan 2004 B2
6685312 Klinefelter et al. Feb 2004 B2
6702282 Pribula et al. Mar 2004 B2
6712397 Mayer et al. Mar 2004 B1
6715797 Curiel Apr 2004 B2
6719469 Yasui et al. Apr 2004 B2
6723479 Van De Witte et al. Apr 2004 B2
6725383 Kyle Apr 2004 B2
6729719 Klinefelter et al. May 2004 B2
6751336 Zhao Jun 2004 B2
6752432 Richardson Jun 2004 B1
6758616 Pribula et al. Jul 2004 B2
6764014 Lasch et al. Jul 2004 B2
6765704 Drinkwater Jul 2004 B2
6769061 Ahern Jul 2004 B1
6782115 Decker et al. Aug 2004 B2
6782116 Zhao et al. Aug 2004 B1
6794115 Telser et al. Sep 2004 B2
6803114 Vere et al. Oct 2004 B1
6817530 Labrec et al. Nov 2004 B2
6818699 Kajimaru et al. Nov 2004 B2
6825265 Daga et al. Nov 2004 B2
6827277 Bloomberg et al. Dec 2004 B2
6827283 Kappe et al. Dec 2004 B2
6832205 Aragones et al. Dec 2004 B1
6843422 Jones et al. Jan 2005 B2
6853739 Kyle Feb 2005 B2
6865011 Whitehead et al. Mar 2005 B2
6882737 Lofgren et al. Apr 2005 B2
6900767 Hattori May 2005 B2
6903850 Kay et al. Jun 2005 B2
6923378 Jones et al. Aug 2005 B2
6925468 Bobbitt et al. Aug 2005 B1
6938029 Tien Aug 2005 B1
6942331 Guillen et al. Sep 2005 B2
6944773 Abrahams Sep 2005 B1
6952741 Bartlett et al. Oct 2005 B1
6954293 Heckenkamp et al. Oct 2005 B2
6959098 Alattar Oct 2005 B1
6961708 Bierenbaum Nov 2005 B1
6963659 Tumey et al. Nov 2005 B2
6970844 Bierenbaum Nov 2005 B1
6978036 Alattar et al. Dec 2005 B2
7013284 Guyan et al. Mar 2006 B2
7016516 Rhoads Mar 2006 B2
7024418 Childress Apr 2006 B1
7036944 Budd et al. May 2006 B2
7043052 Rhoads May 2006 B2
7063264 Bi et al. Jun 2006 B2
7081282 Kuntz et al. Jul 2006 B2
7086666 Richardson Aug 2006 B2
7095426 Childress Aug 2006 B1
7143950 Jones et al. Dec 2006 B2
7183361 Toman Feb 2007 B2
7185201 Rhoads et al. Feb 2007 B2
7196813 Matsumoto Mar 2007 B2
7197444 Bomar, Jr. et al. Mar 2007 B2
7199456 Krappe et al. Apr 2007 B2
7202970 Maher et al. Apr 2007 B1
7206820 Rhoads et al. Apr 2007 B1
7207494 Theodossiou et al. Apr 2007 B2
7277891 Howard et al. Oct 2007 B2
7278580 Jones et al. Oct 2007 B2
7289643 Brunk et al. Oct 2007 B2
7343307 Childress Mar 2008 B1
7344325 Meier et al. Mar 2008 B2
7353196 Bobbitt et al. Apr 2008 B1
7356541 Doughty Apr 2008 B1
7359863 Evenshaug et al. Apr 2008 B1
7363264 Doughty et al. Apr 2008 B1
7398219 Wolfe Jul 2008 B1
7418400 Lorenz Aug 2008 B1
7430514 Childress et al. Sep 2008 B1
7430515 Wolfe et al. Sep 2008 B1
7498075 Bloomberg et al. Mar 2009 B2
7515336 Lippey et al. Apr 2009 B2
7526487 Bobbitt et al. Apr 2009 B1
7548881 Narayan et al. Jun 2009 B2
20010002035 Kayanakis May 2001 A1
20010013395 Pourmand et al. Aug 2001 A1
20010037223 Beery et al. Nov 2001 A1
20010037455 Lawandy et al. Nov 2001 A1
20020007289 Malin et al. Jan 2002 A1
20020018430 Heckenkamp et al. Feb 2002 A1
20020020832 Oka et al. Feb 2002 A1
20020021001 Stratford et al. Feb 2002 A1
20020023218 Lawandy et al. Feb 2002 A1
20020027359 Cobben et al. Mar 2002 A1
20020030587 Jackson Mar 2002 A1
20020034319 Tumey et al. Mar 2002 A1
20020035488 Aquila et al. Mar 2002 A1
20020041372 Gardner et al. Apr 2002 A1
20020048399 Lee et al. Apr 2002 A1
20020049619 Wahlbin et al. Apr 2002 A1
20020051569 Kita May 2002 A1
20020055860 Wahlbin et al. May 2002 A1
20020055861 King et al. May 2002 A1
20020059083 Wahlbin et al. May 2002 A1
20020059084 Wahlbin et al. May 2002 A1
20020059085 Wahlbin et al. May 2002 A1
20020059086 Wahlbin et al. May 2002 A1
20020059087 Wahlbin et al. May 2002 A1
20020059097 Wahlbin et al. May 2002 A1
20020062232 Wahlbin et al. May 2002 A1
20020062233 Wahlbin et al. May 2002 A1
20020062234 Wahlbin et al. May 2002 A1
20020062235 Wahlbin et al. May 2002 A1
20020069091 Wahlbin et al. Jun 2002 A1
20020069092 Wahlbin et al. Jun 2002 A1
20020070280 Ikefuji et al. Jun 2002 A1
20020077380 Wessels et al. Jun 2002 A1
20020080992 Decker et al. Jun 2002 A1
20020080994 Lofgren et al. Jun 2002 A1
20020082873 Wahlbin et al. Jun 2002 A1
20020087363 Wahlbin et al. Jul 2002 A1
20020091937 Ortiz Jul 2002 A1
20020106494 Roth et al. Aug 2002 A1
20020116330 Hed et al. Aug 2002 A1
20020128881 Wahlbin et al. Sep 2002 A1
20020136435 Prokoski Sep 2002 A1
20020136448 Bortolussi et al. Sep 2002 A1
20020145652 Lawrence et al. Oct 2002 A1
20020146549 Kranenburg-Van Dijk et al. Oct 2002 A1
20020166635 Sasaki et al. Nov 2002 A1
20020170966 Hannigan et al. Nov 2002 A1
20020187215 Trapani et al. Dec 2002 A1
20020191082 Fujino et al. Dec 2002 A1
20020194476 Lewis et al. Dec 2002 A1
20030002710 Rhoads Jan 2003 A1
20030031340 Alattar et al. Feb 2003 A1
20030031348 Kuepper et al. Feb 2003 A1
20030034319 Meherin et al. Feb 2003 A1
20030038174 Jones Feb 2003 A1
20030052680 Konijn Mar 2003 A1
20030055638 Burns et al. Mar 2003 A1
20030056499 Binder et al. Mar 2003 A1
20030056500 Huynh et al. Mar 2003 A1
20030059124 Center, Jr. Mar 2003 A1
20030062421 Bloomberg et al. Apr 2003 A1
20030099379 Monk et al. May 2003 A1
20030114972 Takafuji et al. Jun 2003 A1
20030115459 Monk Jun 2003 A1
20030117262 Anderegg et al. Jun 2003 A1
20030126121 Khan et al. Jul 2003 A1
20030128862 Decker et al. Jul 2003 A1
20030141358 Hudson et al. Jul 2003 A1
20030161507 Lawandy Aug 2003 A1
20030173406 Bi et al. Sep 2003 A1
20030178487 Rogers Sep 2003 A1
20030178495 Jones et al. Sep 2003 A1
20030183695 Labrec et al. Oct 2003 A1
20030188659 Merry et al. Oct 2003 A1
20030200123 Burge et al. Oct 2003 A1
20030211296 Jones et al. Nov 2003 A1
20030226897 Jones et al. Dec 2003 A1
20030234286 Labrec et al. Dec 2003 A1
20030234292 Jones Dec 2003 A1
20040011874 Theodossiou et al. Jan 2004 A1
20040017490 Lin Jan 2004 A1
20040024694 Lawrence et al. Feb 2004 A1
20040030587 Danico et al. Feb 2004 A1
20040036574 Bostrom Feb 2004 A1
20040049409 Wahlbin et al. Mar 2004 A1
20040054556 Wahlbin et al. Mar 2004 A1
20040054557 Wahlbin et al. Mar 2004 A1
20040054558 Wahlbin et al. Mar 2004 A1
20040054559 Wahlbin et al. Mar 2004 A1
20040066441 Jones et al. Apr 2004 A1
20040074973 Schneck et al. Apr 2004 A1
20040076310 Hersch et al. Apr 2004 A1
20040093349 Buinevicius et al. May 2004 A1
20040099731 Olenick et al. May 2004 A1
20040102984 Wahlbin et al. May 2004 A1
20040102985 Wahlbin et al. May 2004 A1
20040103004 Wahlbin et al. May 2004 A1
20040103005 Wahlbin et al. May 2004 A1
20040103006 Wahlbin et al. May 2004 A1
20040103007 Wahlbin et al. May 2004 A1
20040103008 Wahlbin et al. May 2004 A1
20040103009 Wahlbin et al. May 2004 A1
20040103010 Wahlbin et al. May 2004 A1
20040111301 Wahlbin et al. Jun 2004 A1
20040133582 Howard et al. Jul 2004 A1
20040198858 Labrec Oct 2004 A1
20040213437 Howard et al. Oct 2004 A1
20040243567 Levy Dec 2004 A1
20040245346 Haddock Dec 2004 A1
20050001419 Levy et al. Jan 2005 A1
20050003297 Labrec Jan 2005 A1
20050010776 Kenen et al. Jan 2005 A1
20050031173 Hwang Feb 2005 A1
20050035589 Richardson Feb 2005 A1
20050060205 Woods et al. Mar 2005 A1
20050072849 Jones Apr 2005 A1
20050095408 LaBrec et al. May 2005 A1
20050160294 LaBrec et al. Jul 2005 A1
20050192850 Lorenz Sep 2005 A1
20060027667 Jones et al. Feb 2006 A1
20060039581 Decker et al. Feb 2006 A1
20070152067 Bi et al. Jul 2007 A1
20070158939 Jones et al. Jul 2007 A1
20070187515 Theodossiou et al. Aug 2007 A1
Foreign Referenced Citations (170)
Number Date Country
2235005 May 1997 CA
2470094 Jun 2003 CA
2469956 Jul 2003 CA
1628294 Jun 2005 CN
1628318 Jun 2005 CN
1647428 Jul 2005 CN
1664695 Sep 2005 CN
2943436 May 1981 DE
3738636 Jun 1988 DE
3806411 Sep 1989 DE
9315294 Feb 1994 DE
4403513 Aug 1995 DE
69406213 Mar 1998 DE
0058482 Aug 1982 EP
0111075 Jun 1984 EP
0157568 Oct 1985 EP
0190997 Aug 1986 EP
0233296 Aug 1987 EP
0279104 Aug 1988 EP
0280773 Sep 1988 EP
0336075 Oct 1989 EP
0356980 Mar 1990 EP
0356981 Mar 1990 EP
0356982 Mar 1990 EP
0362640 Apr 1990 EP
0366075 May 1990 EP
0366923 May 1990 EP
0372601 Jun 1990 EP
0373572 Jun 1990 EP
0374835 Jun 1990 EP
0411232 Feb 1991 EP
0420613 Apr 1991 EP
0441702 Aug 1991 EP
0446834 Sep 1991 EP
0446846 Sep 1991 EP
0464268 Jan 1992 EP
0465018 Jan 1992 EP
0479265 Apr 1992 EP
0493091 Jul 1992 EP
0523304 Jan 1993 EP
0524140 Jan 1993 EP
0539001 Apr 1993 EP
0581317 Feb 1994 EP
0629972 Dec 1994 EP
0636495 Feb 1995 EP
0637514 Feb 1995 EP
0642060 Mar 1995 EP
0649754 Apr 1995 EP
0650146 Apr 1995 EP
0696518 Feb 1996 EP
0697433 Feb 1996 EP
0705025 Apr 1996 EP
0734870 Oct 1996 EP
0736860 Oct 1996 EP
0739748 Oct 1996 EP
0926608 Jun 1999 EP
0982149 Mar 2000 EP
0991014 Apr 2000 EP
1013463 Jun 2000 EP
1017016 Jul 2000 EP
1035503 Sep 2000 EP
1046515 Oct 2000 EP
1110750 Jun 2001 EP
1410315 Apr 2004 EP
1456810 Sep 2004 EP
1459239 Sep 2004 EP
1546798 Jun 2005 EP
1550077 Jul 2005 EP
1564673 Aug 2005 EP
1565857 Aug 2005 EP
1603301 Dec 2005 EP
1618521 Jan 2006 EP
1909971 Apr 2008 EP
1088318 Oct 1967 GB
1213193 Nov 1970 GB
1472581 May 1977 GB
2063018 May 1981 GB
2067871 Jul 1981 GB
2132136 Jul 1984 GB
2204984 Nov 1988 GB
2227570 Aug 1990 GB
2240948 Aug 1991 GB
2325765 Dec 1998 GB
63146909 Jun 1988 JP
3115066 May 1991 JP
3126589 May 1991 JP
3185585 Aug 1991 JP
3239595 Oct 1991 JP
4248771 Sep 1992 JP
5242217 Sep 1993 JP
6234289 Aug 1994 JP
7088974 Apr 1995 JP
7115474 May 1995 JP
09064545 Mar 1997 JP
10171758 Jun 1998 JP
10177613 Jun 1998 JP
10197285 Jul 1998 JP
10214283 Aug 1998 JP
11161711 Jun 1999 JP
11259620 Sep 1999 JP
11301121 Nov 1999 JP
11321166 Nov 1999 JP
2000292834 Oct 2000 JP
2001058485 Mar 2001 JP
2004355659 Dec 2004 JP
2005525254 Aug 2005 JP
2005525949 Sep 2005 JP
2005276238 Oct 2005 JP
2006190331 Jul 2006 JP
WO-8204149 Nov 1982 WO
WO-8900319 Jan 1989 WO
WO-8908915 Sep 1989 WO
WO-9116722 Oct 1991 WO
WO-9427228 Nov 1994 WO
WO-9510835 Apr 1995 WO
WO-9513597 May 1995 WO
WO-9514289 May 1995 WO
WO-9520291 Jul 1995 WO
WO-9603286 Feb 1996 WO
WO-9627259 Sep 1996 WO
WO-9636163 Nov 1996 WO
WO-9701446 Jan 1997 WO
WO-9718092 May 1997 WO
WO-9732733 Sep 1997 WO
WO-9743736 Nov 1997 WO
WO-9814887 Apr 1998 WO
WO-9820642 May 1998 WO
WO-9819869 May 1998 WO
WO-9824050 Jun 1998 WO
WO-9830224 Jul 1998 WO
WO-9840823 Sep 1998 WO
WO-9849813 Nov 1998 WO
WO-9924934 May 1999 WO
WO-9934277 Jul 1999 WO
WO-0010116 Feb 2000 WO
WO-0016984 Mar 2000 WO
WO-0036593 Jun 2000 WO
WO-0043214 Jul 2000 WO
WO-0043215 Jul 2000 WO
WO-0043216 Jul 2000 WO
WO-0045344 Aug 2000 WO
WO-0078554 Dec 2000 WO
WO-0100719 Jan 2001 WO
WO-0129764 Apr 2001 WO
WO-0143080 Jun 2001 WO
WO-0145559 Jun 2001 WO
WO-0156805 Aug 2001 WO
WO-0195249 Dec 2001 WO
WO-0196112 Dec 2001 WO
WO-0226507 Apr 2002 WO
WO-0227647 Apr 2002 WO
WO-0242371 May 2002 WO
WO-0245969 Jun 2002 WO
WO-0252499 Jul 2002 WO
WO-0253499 Jul 2002 WO
WO-0278965 Oct 2002 WO
WO-0296666 Dec 2002 WO
WO-0305291 Jan 2003 WO
WO-0330079 Apr 2003 WO
WO-0355684 Jul 2003 WO
WO-0356500 Jul 2003 WO
WO-0356507 Jul 2003 WO
WO-0395210 Nov 2003 WO
WO-0396258 Nov 2003 WO
WO-2004025365 Mar 2004 WO
WO-2004028943 Apr 2004 WO
WO-2004034236 Apr 2004 WO
WO-2004042512 May 2004 WO
WO-2004049242 Jun 2004 WO
WO-2004097595 Nov 2004 WO
Related Publications (1)
Number Date Country
20040213437 A1 Oct 2004 US
Provisional Applications (1)
Number Date Country
60429501 Nov 2002 US