This invention relates generally to authenticating individuals, and more particularly, to a method and system for biometric authentication.
Generally, biometric authentication systems are used to identify and verify the identity of individuals in many different contexts, such as verifying the identity of individuals entering a country using electronic passports. Biometric authentication systems are also used to verify the identity of individuals using driver's licenses, traveler's tokens, employee identity cards and banking cards.
Known biometric authentication system search engines generally identify individuals using biometric feature templates derived from raw biometric data captured from individuals. Specifically, a biometric feature template derived from biometric data captured from an individual during authentication is compared against a database of previously derived biometric feature templates, and the identity of the individual is verified upon determining a match between one of the stored biometric feature templates and the biometric feature template derived during authentication. However, comparing biometric feature templates against a database of biometric feature templates may place substantial demands on computer system memory and processing, which may result in unacceptably long authentication periods. Moreover, such known biometric authentication system search engines are generally highly specialized and proprietary.
Because known biometric authentication search engines are highly specialized and proprietary, modifying them to operate with other authentication systems is generally difficult, time consuming and costly. Furthermore, because known biometric authentication search engines evaluate only biometric data of an individual, in many cases they do not provide an adequate amount of information about the individual to yield consistently accurate authentication results.
In one aspect of the invention, a method of authentication is provided. The method includes capturing biometric data for a desired biometric type from an individual, determining an algorithm for converting the biometric data into authentication words, converting the captured biometric data into authentication words in accordance with the determined algorithm, including the authentication words in a probe, and comparing the probe against identity records stored in a server system. Each of the identity records includes enrollment biometric words of an individual obtained during enrollment. Moreover, the method includes identifying at least one of the identity records as a potential matching identity record when at least one of the authentication words included in the probe matches at least one of the enrollment biometric words included in the at least one identity record, and generating a list of potential matching identity records.
In another aspect of the invention, a system for biometric authentication is provided. The system includes a computer configured as a server. The server includes at least a database and is configured to store within the database at least one conversion algorithm and at least a gallery of data including identity records. Each identity record includes at least biographic data of an individual and enrollment biometric words of the individual. The system also includes at least one client system including at least a computer configured to communicate with the server. The client system is configured to at least capture biometric data for at least one desired biometric type from an individual.
The server is also configured to convert the captured biometric data into authentication words by executing the at least one conversion algorithm. The at least one conversion algorithm is configured to generate the enrollment biometric words. Moreover, the server is configured to generate a probe including at least the authentication words, compare the probe against the gallery, and identify at least one of the identity records as a matching identity record when at least one of the authentication words matches at least one of the enrollment biometric words included in the at least one identity record. Furthermore, the server is configured to generate a list of potential matching identity records.
In yet another aspect of the invention, a method of text-based biometric authentication is provided. The method includes capturing biometric data for a plurality of different biometric types from an individual and determining a plurality of algorithms. Each of the algorithms is operable to convert captured biometric data of a corresponding biometric type into a vocabulary of words. Moreover, the method includes converting the captured biometric data for each biometric type into authentication words in accordance with the corresponding one of the algorithms and comparing a probe against identity records stored in a server system. The probe includes authentication words and biographic words, and each of the identity records includes at least enrollment biometric words and biographic words of a corresponding individual obtained during enrollment. Furthermore, the method includes identifying at least one of the identity records as a potential matching identity record when at least one of the biographic words included in the probe or at least one of the authentication words included in the probe matches at least one of the biographic words or one of the enrollment biometric words, respectively, included in the at least one identity record. The method also includes generating a list of potential matching identity records.
The database server 16 is connected to a database that is stored on the disk storage unit 20, and can be accessed by authorized users from any of the client computer systems 14 in any manner that facilitates authenticating individuals as described herein. The database may be any type of database including, but not limited to, a relational object database or a hierarchical database. Moreover, the database may be configured to store data in formats such as, but not limited to, text documents and binary documents. In an alternative embodiment, the database is stored remotely from the server system 12. The server system 12 is configured to conduct any type of matching of any feature or information associated with individuals as described herein. The server system 12 is also configured to determine at least one conversion algorithm for converting biometric data into words.
The server system 12 is typically configured to be communicatively coupled to client computer systems 14 using the Local Area Network (LAN) 22. However, it should be appreciated that in other embodiments, the server system 12 may be communicatively coupled to end users at computer systems 14 via any kind of network including, but not limited to, a Wide Area Network (WAN), the Internet, and any combination of LAN, WAN and the Internet. Any authorized end user at the client computer systems 14 can access the server system 12, and authorized client computer systems 14 may automatically access the server system 12 and vice versa.
In the exemplary embodiment, the client computer systems 14 may be computer systems associated with entities that administer programs requiring improved identity authentication. Such programs include, but are not limited to, driver licensing programs, Visa programs, national identity programs, offender programs, welfare programs and taxpayer registration programs. Moreover, each client system 14 may be used to manage and administer a plurality of such programs. Each of the client computer systems 14 includes at least one personal computer 26 configured to communicate with the server system 12. Moreover, the personal computers 26 include devices, such as, but not limited to, a CD-ROM drive for reading data from computer-readable recording mediums, such as a compact disc-read only memory (CD-ROM), a magneto-optical disc (MOD) and a digital versatile disc (DVD). Additionally, the personal computers 26 include a memory (not shown). Moreover, the personal computers 26 include display devices, such as, but not limited to, liquid crystal displays (LCD), cathode ray tubes (CRT) and color monitors. Furthermore, the personal computers 26 include printers and input devices such as, but not limited to, a mouse (not shown), keypad (not shown), a keyboard, a microphone (not shown), and biometric capture devices 28.
Although the client computer systems 14 include personal computers 26 in the exemplary embodiment, it should be appreciated that in other embodiments the client computer systems 14 may include portable communications devices capable of at least displaying messages and images, and capturing and transmitting authentication data. Such portable communications devices include, but are not limited to, a smart phone and any type of portable communications device having wireless capabilities such as a personal digital assistant (PDA) and a laptop computer. Moreover, it should be appreciated that in other embodiments the client computer systems 14 may include any computer system that facilitates authenticating the identity of an individual as described herein such as, but not limited to, server systems.
Each of the biometric capture devices 28 includes hardware configured to capture at least one specific type of biometric sample. In the exemplary embodiment, each biometric capture device 28 may be any device that captures any type of desired biometric sample. Such devices include, but are not limited to, microphones, iris scanners, fingerprint scanners, vascular scanners and digital cameras. Thus, each of the client systems 14 is configured to at least capture biometric data for a desired biometric type from an individual. It should be appreciated that although the exemplary embodiment includes two client computer systems 14 each including at least one personal computer 26, in other embodiments any number of client computer systems 14 may be provided and each of the client computer systems 14 may include any number of personal computers 26.
The application server 18 and each personal computer 26 include a processor (not shown) and a memory (not shown). It should be understood that, as used herein, the term processor is not limited to just those integrated circuits referred to in the art as a processor, but broadly refers to a computer, an application specific integrated circuit, and any other programmable circuit. It should be understood that computer programs, or instructions, are stored on a computer-readable recording medium, such as the memory (not shown) of the application server 18 and of the personal computers 26, and are executed by the processor. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “processor.”
The memory (not shown) included in application server 18 and in the personal computers 26, can be implemented using any appropriate combination of alterable, volatile or non-volatile memory or non-alterable, or fixed, memory. The alterable memory, whether volatile or non-volatile, can be implemented using any one or more of static or dynamic RAM (Random Access Memory), a floppy disc and disc drive, a writeable or re-writeable optical disc and disc drive, a hard drive, flash memory or the like. Similarly, the non-alterable or fixed memory can be implemented using any one or more of ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), an optical ROM disc, such as a CD-ROM or DVD-ROM disc, and disc drive or the like.
It should be appreciated that the memory of the application server 18 and of the personal computers 26 is used to store executable instructions, applications or computer programs, thereon. The terms “computer program” and “application” are intended to encompass an executable program that exists permanently or temporarily on any computer-readable recording medium that causes the computer or computer processor to execute the program. In the exemplary embodiment, a parser application and a generic filtering module (GFM) application are stored in the memory of the application server 18. The parser application causes the application server 18 to convert biometric data into at least text strings according to a determined conversion algorithm. At least one of the text strings is included in a probe that may be generated by the GFM application. The probe may also be generated by another application, different than the GFM application, stored in the server system 12 or any of the client systems 14. Text strings are also known as words. The probe may include any data such as, but not limited to, words. Specifically, words generated from biometric data captured during enrollment are referred to herein as enrollment biometric words and words generated from biometric data captured during authentication are referred to herein as authentication words.
The GFM application is a text search engine which causes the application server 18 to compare the probe against identity records stored in the server system 12. Moreover, the GFM application causes the application server 18 to generate a list of potential matching identity records according to the similarity between the probe and the identity records in the server system 12. Furthermore, the GFM application causes the application server 18 to determine the similarity between the probe and identity records using one of a plurality of authentication policies and rules included in the GFM application itself. However, it should be appreciated that in other embodiments the authentication policies and rules may not be included in the GFM application. Instead, the authentication policies and rules may be stored in the server system 12 separate from the GFM application or in any of the client systems 14. It should be understood that the authentication policies may determine the similarity between a probe and the identity records on any basis, such as, but not limited to, according to the number of matching words between the probe and each of the identity records. Although the parser application is stored in the application server 18 in the exemplary embodiment, it should be appreciated that in other embodiments the parser application may be stored in any of the client systems 14.
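The similarity policy described above, ranking identity records by the number of words they share with the probe, can be sketched as follows. The record structure and scoring rule here are illustrative assumptions, not the GFM application's actual implementation.

```python
def rank_identity_records(probe_words, identity_records):
    """Rank identity records by the number of words they share with the
    probe. `identity_records` maps a record identifier to the enrollment
    words stored for that identity (an assumed structure). Records that
    share no words with the probe are omitted from the list of
    potential matching identity records.
    """
    probe = set(probe_words)
    scored = []
    for record_id, enrollment_words in identity_records.items():
        matches = len(probe & set(enrollment_words))
        if matches > 0:
            scored.append((record_id, matches))
    # Most similar records (most matching words) first.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored
```

Other authentication policies could substitute a different scoring rule, for example weighting words by biometric type, without changing this overall shape.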
Although the captured biometric data is from a fingerprint in the exemplary embodiments described herein, it should be appreciated that in other embodiments the captured biometric data may be from any other biometric type or combinations of biometric types including, but not limited to, face, voice, and iris. Moreover, it should be appreciated that such other biometric types may have biometric features different than the biometric features of fingerprints that can be extracted from the captured biometric data and included in a biometric feature template. For example, when iris biometric data is captured during authentication, phase information and masking information of the iris may be extracted from the captured iris biometric data and included as data in a biometric feature template. Although the captured biometric data is processed into a biometric feature template in the exemplary embodiment, it should be appreciated that in other embodiments the captured biometric data may be processed into any form that facilitates authenticating the individual, such as, but not limited to, photographs and electronic data representations.
A longitudinal direction of ridges 32 in a core 34 of the fingerprint is used to determine the orientation of the fingerprint image 30. Specifically, a Cartesian coordinate system is electronically superimposed on the image 30 such that an axis Y is positioned to extend through the core 34 in the longitudinal direction, and another axis X is positioned to pass through the core 34 and to perpendicularly intersect the Y-axis at the core 34. It should be appreciated that the intersection of the X and Y axes constitutes an origin of the Cartesian coordinate system.
The radial lines Rj and circles Ci define a plurality of intersections 38 and a plurality of cells 40 in the radial grid 36. Coordinates based on the Cartesian coordinate system are computed for each intersection 38 and for each minutia point MPn to determine the position of each minutia point MPn relative to the radial grid 36. Specifically, the coordinates of each minutia point MPn are compared against the coordinates of the intersections 38 to determine one of the cells 40 that corresponds to, and contains, each minutia point MPn. For example, by comparing the coordinates of the minutia point MP8 against the coordinates of the intersections 38, it is determined that one of the cells 40 defined by radial lines R3 and R4, and circles C6 and C7, contains the minutia point MP8. Because the minutia point MP8 is contained in a cell 40 defined by radial lines R3, R4 and circles C6, C7, the position of minutia point MP8 may be expressed in a text string using radial line and circle designations derived from the radial grid 36. Specifically, in the exemplary embodiment, the position of the minutia point MP8 is expressed in the alphanumeric text string R3R4C6C7. Consequently, it should be understood that the position of each one of the minutia points MPn may be described textually in an alphanumeric text string derived from its corresponding cell 40. As such, it should be understood that superimposing the radial grid 36 on the fingerprint image 30 facilitates converting the minutia points MPn into text strings. It should be appreciated that any number of minutia points MPn may be positioned in any one of the cells 40 and that desirably, each of the minutia points MPn is positioned in a single one of the cells 40.
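The conversion of a minutia point's coordinates into an alphanumeric word such as R3R4C6C7 can be sketched as follows. The grid geometry (32 equally spaced radial lines and unit spacing between circles) is an illustrative assumption, not taken from the description above.

```python
import math

def minutia_to_word(x, y, num_radials=32, ring_spacing=1.0):
    """Convert a minutia point's Cartesian coordinates (origin at the
    fingerprint core) into an alphanumeric word naming the bounding
    radial lines and circles of its cell, e.g. 'R3R4C6C7'.

    The grid geometry (number of radial lines, circle spacing) is an
    illustrative assumption.
    """
    # Angle from the positive X-axis, normalized to [0, 360).
    angle = math.degrees(math.atan2(y, x)) % 360.0
    sector_width = 360.0 / num_radials

    # Indices of the two radial lines bounding the point.
    r_low = int(angle // sector_width)
    r_high = (r_low + 1) % num_radials

    # Indices of the two circles bounding the point.
    radius = math.hypot(x, y)
    c_low = int(radius // ring_spacing)
    c_high = c_low + 1

    return f"R{r_low}R{r_high}C{c_low}C{c_high}"
```

A point 6.5 units from the core at an angle of 40 degrees, for instance, falls between radial lines R3 and R4 and circles C6 and C7 under these assumed grid parameters, yielding the word R3R4C6C7.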
Each alphanumeric text string is an alphanumeric word that facilitates textually describing biometric features included in captured biometric data that is to be used for authentication. Moreover, because each word is derived from the position of a corresponding cell 40, each cell 40 of the radial grid 36 constitutes a word that may be used to facilitate textually describing biometric features included in captured biometric data. Furthermore, because the radial grid 36 includes a plurality of cells 40, the radial grid 36 defines a plurality of words that may be used to facilitate textually describing biometric features included in captured biometric data. Additionally, because a plurality of words constitutes a vocabulary, the radial grid 36 itself constitutes a vehicle for defining a vocabulary of words that may be used to facilitate textually describing biometric features included in captured biometric data. By using the radial grid 36 as described in the exemplary embodiment, an algorithm is executed that converts captured biometric data into words, included in a vocabulary of words, that may be used as the basis for authenticating the identity of an individual. Thus, it should be understood that by virtue of executing the conversion algorithm, words are generated that map to the vocabulary.
A biometric data sample captured for an identical biometric type from the same person may vary each time the biometric data sample is captured. Consequently, the positions of the biometric features included in the captured biometric data samples, and minutia points corresponding to the biometric features, may also vary. It should be appreciated that the minutia point variances generally do not affect the positions, and related words, of minutia points MPn within the grid 36. However, the minutia point variances may affect the positions, and related words, of minutia points MPn positioned proximate to or on a border between adjacent cells 40. It should be appreciated that by virtue of defining the plurality of cells 40, the radial lines Rj and circles Ci also define the borders between adjacent cells 40. Thus, minutia points positioned proximate to or on a radial line Rj or a circle Ci, may be located in different cells 40 in different biometric data samples captured for the identical biometric type from the same person. Minutia points MPn positioned proximate to or on a line Rj or a circle Ci are referred to herein as borderline minutia points.
Minutia point MP3 is positioned in a first cell 40-1 proximate the border R22 between the first cell 40-1 and a second cell 40-2 included in the radial grid 36. Thus, minutia point MP3 is a borderline minutia point whose position within the grid 36 may vary between different biometric data samples captured for the identical biometric type from the same person. Specifically, the location of minutia point MP3 within the grid 36 may vary such that in one biometric data sample the minutia point MP3 is located in cell 40-1 proximate the radial line R22, and in another biometric data sample of the identical biometric type the minutia point MP3 is located in cell 40-2 proximate radial line R22. Minutia point MP1 is also a borderline minutia point and is located within a third cell 40-3 proximate the circle C9 between the third cell 40-3 and a fourth cell 40-4. Thus, the position of minutia point MP1 within the grid 36 may also vary between captured biometric data samples. That is, the position of minutia point MP1 within the grid 36 may vary, similar to minutia point MP3, between cells 40-3 and 40-4 in different biometric data samples of an identical biometric type from the same person. Thus, it may be difficult to accurately determine a single cell 40 location for borderline minutia points such as MP1 and MP3.
The information shown in
The overlapping border regions 42-1 and 42-2 operate to effectively expand the borders of adjacent cells so that the borders of adjacent cells 40 overlap. Thus, the overlapping border regions 42-1 and 42-2 effectively establish an area, representing a tolerance of positions of minutia points MPn, about the borders R22 and C9, respectively, within which the position of minutia points MP1 and MP3 may vary. Thus, it should be appreciated that minutia points located within the overlapping border regions 42-1 and 42-2 are borderline minutia points. Moreover, it should be appreciated that the overlapping border regions 42-1 and 42-2 may be used to determine borderline minutia points. Furthermore, it should be appreciated that by effectively establishing an area within which the positions of minutia points may vary, the overlapping border regions 42-1 and 42-2 facilitate accounting for variances that may be introduced while capturing biometric data and thus facilitate increasing the accuracy of text-based biometric authentication as described herein.
In the exemplary embodiment, minutia point MP3 is located within the overlapping border region 42-1. Thus, to account for the possible positional variation of minutia point MP3, in the exemplary embodiment minutia point MP3 is considered to have two positions within the grid 36. That is, the minutia point MP3 is considered to be positioned in adjacent cells 40-1 and 40-2, and is described using words derived from adjacent cells 40-1 and 40-2. Specifically, the position of minutia point MP3 is described with the words R21R22C6C7 R22R23C6C7. Minutia point MP1 is located within the overlapping border region 42-2, and is also considered to have two positions within the grid 36. That is, minutia point MP1 is considered to be positioned in adjacent cells 40-3 and 40-4, and is described with words derived from cells 40-3 and 40-4. Specifically, the position of minutia point MP1 is described with the words R22R23C8C9 R22R23C9C10. It should be understood that multiple words may constitute a sentence. Thus, because the words describing the positions of the minutia points MP1 and MP3 constitute multiple words, the words describing the positions of the minutia points MP1 and MP3 are sentences.
It should be understood that the borderline minutia points MP1 and MP3 as described in the exemplary embodiment are positioned within overlapping border regions 42-2 and 42-1, respectively, and thus are described with words derived from two different cells 40. However, it should be appreciated that in other embodiments, borderline minutia points may be located at an intersection of different overlapping border regions, such as at the intersection of overlapping border regions 42-1 and 42-2. Such borderline minutia points located at the intersection of two different overlapping border regions are considered to have four different cell positions within the grid 36, and are described with words derived from the four different cells.
Although the exemplary embodiment is described as using an angle θ1 of one degree, it should be appreciated that in other embodiments the angle θ1 may be any angle that is considered to define an overlapping border region large enough to capture likely borderline minutia points. Moreover, in other embodiments, instead of rotating the radial line R22 by the angle θ1 to define the overlapping border region 42-1, the radial line R22 may be offset to each side by a predetermined perpendicular distance, adequate to capture likely borderline minutia points, to define the overlapping border region 42-1. It should also be appreciated that although the exemplary embodiment is described using only one overlapping border region 42-1 for one radial line R22, and only one overlapping border region 42-2 for one circle C9, in other embodiments overlapping border regions may be positioned about each radial line Rj and each circle Ci, or any number of radial lines Rj and circles Ci that facilitates deriving words for borderline minutia points as described herein.
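The borderline handling described above, in which a minutia point inside an overlapping border region receives words from two adjacent cells, or from four cells when it lies at an intersection of border regions, can be sketched as follows. The grid geometry and the tolerance values (one degree about each radial line, 0.05 units about each circle) are illustrative assumptions.

```python
import math

def borderline_words(radius, angle_deg, num_radials=32, ring_spacing=1.0,
                     theta_deg=1.0, ring_tol=0.05):
    """Return the word(s) describing a minutia point, duplicating the
    point into adjacent cells when it falls inside an overlapping
    border region. A point near one border yields two words; a point
    near the intersection of two borders yields four. Tolerances and
    grid geometry are illustrative assumptions.
    """
    sector_width = 360.0 / num_radials
    angle = angle_deg % 360.0

    sectors = [int(angle // sector_width)]
    # Within theta of the next radial line: also claim the next sector.
    if (angle % sector_width) > sector_width - theta_deg:
        sectors.append((sectors[0] + 1) % num_radials)
    # Within theta of the previous radial line: claim the previous one.
    elif (angle % sector_width) < theta_deg:
        sectors.append((sectors[0] - 1) % num_radials)

    rings = [int(radius // ring_spacing)]
    # Near the outer or inner bounding circle: also claim that ring.
    if (radius % ring_spacing) > ring_spacing - ring_tol:
        rings.append(rings[0] + 1)
    elif (radius % ring_spacing) < ring_tol and rings[0] > 0:
        rings.append(rings[0] - 1)

    # One word per candidate cell; multiple words form a sentence.
    return [f"R{s}R{(s + 1) % num_radials}C{c}C{c + 1}"
            for s in sorted(sectors) for c in sorted(rings)]
```

Widening `theta_deg` or `ring_tol` enlarges the overlapping border regions, trading more duplicate words for greater tolerance of capture-to-capture variance.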
In the exemplary embodiment, the words are defined such that the radial lines Rj are expressed first in sequentially increasing order, followed by the circles Ci which are also expressed in sequentially increasing order. It should be appreciated that in other embodiments the radial lines Rj and the circles Ci may be expressed in any order. Moreover, it should be appreciated that although the exemplary embodiment expresses the location of minutia points MPn in alphanumeric words, in other embodiments the words may be expressed in any manner, such as, but not limited to, only alphabetic characters and only numeric characters, that facilitates authenticating the identity of an individual as described herein.
The information shown in
Coordinates based on the superimposed Cartesian coordinate system are computed for each intersection 38 and for each minutia point MPn to determine the position of each minutia point MPn relative to the radial grid 36. However, in contrast to the exemplary embodiment described with reference to
By using the radial grid 36 as described in this alternative exemplary embodiment, an algorithm is executed that converts captured biometric data into words, included in the different vocabulary of words, which may be used as the basis for authenticating the identity of an individual. Thus, by virtue of executing the algorithm of the alternative exemplary embodiment, words are generated that map to the different vocabulary.
In this alternative exemplary embodiment borderline minutia points such as MP1 and MP3 are also considered to have two positions within the grid 36. Thus, in this alternative exemplary embodiment, borderline minutia point MP1 is described with the words S22B9 S22B10 and borderline minutia point MP3 is described with the words S21B7 S22B7.
In this alternative exemplary embodiment, the words are defined such that the sectors Sk are expressed first and the concentric bands Bp are expressed second. However, it should be appreciated that in other embodiments the sectors Sk and the concentric bands Bp may be expressed in any order that facilitates authenticating the identity of an individual as described herein.
It should be appreciated that in yet other exemplary embodiments after obtaining the word for each cell 40, the words may be simplified, or translated, to correspond to a single cell number. For example, the word S0B0 may be translated to correspond to cell number zero; S1B0 may be translated to correspond to cell number one; S2B0 may be translated to correspond to cell number two; S31B0 may be translated to correspond to cell number 31; and, S0B1 may be translated to correspond to cell number 32. Thus, the words S0B0, S1B0, S2B0, S31B0 and S0B1 may be represented simply as single cell numbers 0, 1, 2, 31 and 32, respectively.
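The translation of sector/band words into single cell numbers can be sketched as follows, assuming 32 sectors per band as in the mapping above.

```python
import re

def word_to_cell_number(word, num_sectors=32):
    """Translate a sector/band word such as 'S22B9' into a single cell
    number, following the mapping S0B0 -> 0, S31B0 -> 31, S0B1 -> 32
    described above. The 32-sectors-per-band count is an assumption
    implied by that mapping.
    """
    match = re.fullmatch(r"S(\d+)B(\d+)", word)
    if match is None:
        raise ValueError(f"not a sector/band word: {word!r}")
    sector, band = int(match.group(1)), int(match.group(2))
    return sector + band * num_sectors
```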
In this alternative exemplary embodiment the words describing the positions of minutia points MP1 and MP3 are sentences. Additionally, it should be appreciated that when the fingerprint image 30 includes a plurality of minutia points MPn, words corresponding to the minutia points may be sequentially positioned adjacent each other to form sentences. Such sentences may be generated, for example, by combining words that are nearest to the origin of the Cartesian co-ordinate system, starting with word S0B0, and proceeding clockwise and outwards to end at the word SkBp. However, in other embodiments the words are not required to be positioned sequentially, and may be positioned in any order to form a sentence that facilitates authenticating the identity of an individual as described herein.
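Ordering words into a sentence that starts nearest the origin at S0B0 and proceeds clockwise and outward, as described above, can be sketched as follows. Sorting by band first and sector second matches that rule under the assumed sector/band numbering.

```python
def words_to_sentence(words):
    """Join sector/band words into a sentence ordered from the origin
    outward: innermost band first, then increasing sector within each
    band. The word format 'S<k>B<p>' is assumed from the description
    above.
    """
    def key(word):
        sector, band = word[1:].split("B")
        return (int(band), int(sector))
    return " ".join(sorted(words, key=key))
```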
Although this alternative exemplary embodiment includes the same radial grid 36 superimposed on the same biometric image 30 as the exemplary embodiment, it should be appreciated that the same radial grid 36 may be used to generate many different vocabularies in addition to those described herein. Moreover, although both of the exemplary embodiments described herein use the same radial grid 36 to generate different vocabularies, it should be appreciated that in other embodiments any other medium that establishes a positional relationship with biometric features of a desired biometric type may be used as a conversion algorithm for generating at least one vocabulary of words that describes the positions of the biometric features. Such mediums include, but are not limited to, rectangular grids, triangular grids, electronic models and mathematical functions. Furthermore, it should be appreciated that different vocabularies generated from different mediums may be combined to yield combined, or fused, vocabularies for the same biometric type and for different biometric types.
In the exemplary embodiments described herein the grid 36 is used to generate words that map to a corresponding vocabulary. Moreover, the grid 36 may be used to generate many words that each map to a same or different vocabulary. Furthermore, it should be understood that any other medium that establishes a positional relationship with biometric features may be used for generating words that each map to the same or different vocabulary.
Using the grid 36 to generate a vocabulary of words as described in the exemplary embodiments, effectively executes an algorithm that generates a vocabulary of words for use in authenticating the identity of individuals based on captured biometric data. However, it should be appreciated that in other embodiments other known algorithms, or classification algorithms, may be used to convert biometric features into words and thus generate additional alternative vocabularies. Such other known algorithms may convert biometric features into words by analyzing captured biometric data and classifying the captured biometric data into one of a finite number of groups. Such known classification algorithms include, but are not limited to, a Henry classification algorithm. The Henry classification algorithm examines a fingerprint global ridge pattern and classifies the fingerprint based on the global ridge pattern into one of a small number of possible groups, or patterns.
Consequently, in yet another alternative exemplary embodiment, another vocabulary of alphanumeric words may be generated by mapping each Henry classification pattern to a corresponding word included in a vocabulary defined for the Henry classification algorithm. For example, an arch pattern in the Henry classification algorithm may be mapped, or assigned, the corresponding word “P1,” and a left loop pattern may be mapped, or assigned, the corresponding word “P2.” It should be appreciated that in other embodiments, vocabularies of words and sentences may be established for any classification algorithm, thus facilitating use of substantially all known classification algorithms to authenticate the identity of individuals as described herein. It should be appreciated that other classification algorithms may rely on distances between groups or bins. In such classification algorithms, a lexicographic text-encoding scheme for numeric data that preserves numeric comparison operators may be used. Such numerical comparison operators include, but are not limited to, a greater than symbol (>), and a less than symbol (<). Further examples of fingerprint classification techniques that could be utilized using this approach include, but are not limited to, ridge flow classification, ridge flow in a given fingerprint region, ridge counts between minutiae points, lines between minutiae points, and polygons formed between minutiae points.
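A pattern-to-word mapping and one possible lexicographic text-encoding scheme of the kind mentioned above can be sketched as follows. The pattern names, the word assignments beyond P1 and P2, and the fixed-width encoding are illustrative assumptions; zero-padding is one simple scheme under which plain string comparison preserves numeric order.

```python
# Hypothetical mapping of Henry classification patterns to vocabulary
# words; only the arch -> P1 and left loop -> P2 assignments come from
# the description above, the rest are assumed for illustration.
HENRY_PATTERN_WORDS = {
    "arch": "P1",
    "left_loop": "P2",
    "right_loop": "P3",
    "whorl": "P4",
}

def encode_numeric(value, width=6):
    """Encode a non-negative integer (e.g. a ridge count between
    minutiae points) as fixed-width, zero-padded text so that string
    comparison with < and > agrees with numeric comparison.
    """
    if not 0 <= value < 10 ** width:
        raise ValueError("value out of range for chosen width")
    return str(value).zfill(width)
```

Because "000007" sorts before "000012" as text, distance- or count-based classification words encoded this way remain comparable with ordinary text search operators.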
As discussed above, using the grid 36 as described in the exemplary embodiments effectively constitutes executing an algorithm that generates a vocabulary of words that can be independently used for biometrically authenticating individuals, and that generates many words that each map to a same or different vocabulary. It should also be appreciated that other algorithms may be used to convert biometric features into words to generate vocabularies of words for different biometric features of the same biometric type that may be independently used for authentication. Such other algorithms may also generate words that each map to the same or different vocabulary.
In yet another alternative embodiment, another algorithm may generate an additional vocabulary of words and sentences derived from the overall ridge pattern of a fingerprint instead of from fingerprint ridge endings and ridge bifurcations. Combining, or fusing, vocabularies that include words for the same biometric type, but for different biometric features, provides a larger amount of information that can be used to generate more trustworthy authentication results. Thus, it should be appreciated that by combining or fusing vocabularies, additional new vocabularies representing a same biometric type and different biometric features may be generated such that different words, from the combined vocabulary, representing the same biometric type may be used to generate more trustworthy authentication results. For example, when authenticating the identity of an individual on the basis of fingerprint biometric data, the identity may be authenticated using appropriate words from a vocabulary derived from fingerprint ridge endings and ridge bifurcations, and words from another vocabulary derived from the overall ridge pattern of the fingerprint. It should be appreciated that authenticating the identity of an individual using different words from a combined vocabulary representing the same biometric type and different biometric features facilitates increasing the level of trust in the authentication results. It should be understood that by virtue of generating a vocabulary of words each algorithm also defines the vocabulary of words. Moreover, it should be appreciated that each different algorithm generates and defines a different vocabulary of words.
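Fusing two vocabularies for the same biometric type, as the paragraph above describes, can be sketched as below. The `MIN:` and `RP:` source tags and the function name are illustrative assumptions; they simply keep words from the minutia-derived vocabulary distinguishable from ridge-pattern words after fusion.

```python
# Hypothetical sketch of fusing words from two vocabularies for the same
# biometric type: minutia-based words and an overall ridge-pattern word,
# tagged so their source vocabulary remains distinguishable.

def fuse_words(minutia_words, ridge_pattern_words):
    """Combine words from two fingerprint vocabularies into one fused set."""
    fused = {f"MIN:{w}" for w in minutia_words}       # tag prefixes assumed
    fused |= {f"RP:{w}" for w in ridge_pattern_words}
    return fused

# A probe built from the fused vocabulary carries both kinds of information.
probe_words = fuse_words(["R22R23C8C9", "R21R22C6C7"], ["P1"])
assert "MIN:R22R23C8C9" in probe_words and "RP:P1" in probe_words
```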
The exemplary embodiments described herein use algorithms to convert biometric features of fingerprints into words. Such words are included in the vocabularies of words generated by respective algorithms. However, it should be appreciated that in other embodiments different algorithms may be used to convert biometric features, of any desired biometric type, into words. These words are also included in the vocabularies of words generated by the respective different algorithms. For example, a first algorithm may convert biometric features of the iris into words included in a first vocabulary of words generated by the first algorithm, and a second algorithm, different than the first algorithm, may convert biometric features of the voice into words included in a second vocabulary of words generated by the second algorithm. It should be understood that an additional third vocabulary of words including the first and second vocabularies may be generated by combining, or fusing, the first and second vocabularies. Combining, or fusing, vocabularies that define words for different biometric types also provides a larger amount of information that can be used to generate more trustworthy authentication results. Thus, it should be appreciated that by combining or fusing vocabularies, additional new vocabularies representing different biometric types may be generated such that different words, from the combined vocabulary, representing different biometric types may be used to generate more trustworthy authentication results. For example, when authenticating the identity of an individual on the basis of iris and voice biometric data, the identity may be authenticated using appropriate words from the first vocabulary and appropriate words from the second vocabulary. 
It should be appreciated that authenticating the identity of an individual using different words from a fused vocabulary representing different biometric types facilitates increasing the level of trust in the authentication results.
When a plurality of biometric types are used for authentication, configurable authentication policies and rules included in the GFM application may be configured to weight some biometric types differently than others. Authentication based on certain biometric types is more trustworthy than authentication based on other biometric types. For example, a biometric authentication result based on biometric data captured from an iris may often be more trustworthy than an authentication result based on biometric data captured from a fingerprint. In order to account for the different levels of trust in the authentication results, each biometric type may be weighted differently. For example, in a fused vocabulary certain words may be directed towards a fingerprint of an individual and other words may be directed towards an iris of the same individual. Because authentication based on an iris may be considered more trustworthy, during authentication the iris words are given greater emphasis, or are more heavily weighted, than the fingerprint words. It should be appreciated that weighting biometric data of one biometric type differently than biometric data of another biometric type by emphasizing the biometric data of the one biometric type more than the biometric data of the other biometric type may yield more trustworthy authentication results.
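The type-dependent weighting described above can be sketched as a scoring function. The specific weight values and names here are assumptions standing in for the configurable authentication policies and rules of the GFM application.

```python
# Hedged sketch of weighting matched words by biometric type, assuming a
# policy that trusts iris-derived words more than fingerprint-derived words.

TYPE_WEIGHTS = {"iris": 2.0, "fingerprint": 1.0}   # assumed policy values

def weighted_score(matching_words):
    """Score a candidate record from its matched (biometric_type, word)
    pairs, emphasizing more trustworthy biometric types."""
    return sum(TYPE_WEIGHTS[btype] for btype, _word in matching_words)

# An iris match contributes twice the weight of a fingerprint match.
score = weighted_score([("iris", "I7"), ("fingerprint", "FLIR22R23C8C9")])
assert score == 3.0
```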
Words in fused vocabularies may also be weighted due to the source of the original words before fusion. For example, words from the vocabulary generated using the method of the exemplary embodiment may be weighted more heavily than words from the vocabulary generated using the alternative exemplary embodiment. Different types of words generated from the same biometric type may also be weighted differently. For example, elderly individuals may be associated with certain types of words that identify them as elderly. Weighting such certain types of words more heavily during biometric authentication may facilitate reducing the time required for authentication by reducing the number of comparisons against those identity records having the same certain types of words.
It should be understood that converting captured biometric data into words, as described herein, facilitates enabling the server system 12 to implement matching algorithms using industry standard search engines. Moreover, it should be understood that performing industry standard searches based on such words facilitates enabling the server system 12 to generate and return results to the client systems 14 more efficiently and more cost effectively than existing biometric systems and methods, and facilitates reducing dependence on expensive, specialized, and proprietary biometric matchers used in existing biometric authentication systems and methods.
In the exemplary embodiment, during enrollment each individual manually types the desired biographic data 46 into the keyboard associated with one of the client systems 14. In order to properly capture desired biometric data, the client systems 14 are configured to include enrollment screens appropriate for capturing the desired biometric data, and are configured to include the biometric capture devices 28 for capturing the desired biometric data submitted by the individuals. However, in other embodiments, the biographic data 46 and biometric data may be obtained using any method that facilitates enrolling individuals in the system 12. Such methods include, but are not limited to, automatically reading the desired biographic data 46 and biometric data from identity documents and extracting the desired biographic data 46 and biometric data from other databases positioned at different locations than the client system 14. Such identity documents include, but are not limited to, passports and driver's licenses. It should be understood that enrollment data of individuals constitutes at least the biographic data 46 and the words 50 derived from the desired biometric data.
The term “biographic data” 46 as used herein includes any demographic information regarding an individual as well as contact information pertinent to the individual. Such demographic information includes, but is not limited to, an individual's name, age, date of birth, address, citizenship and marital status. Moreover, biographic data 46 may include contact information such as, but not limited to, telephone numbers and e-mail addresses. However, it should be appreciated that in other embodiments any desired biographic data 46 may be required, or, alternatively, in other embodiments biographic data 46 may not be required.
After obtaining the desired biometric data during enrollment, the desired biometric data is converted into words 50 with a conversion algorithm. In the exemplary embodiment, the desired biometric data is fingerprint biometric data of the left index finger. Thus, during enrollment biometric data of the left index finger is captured and is converted into a corresponding text string 50, or words 50, using the algorithm of the exemplary embodiment as described with respect to
It should be appreciated that the words R22R23C8C9 R22R23C9C10 and R21R22C6C7 R22R23C6C7 describe minutia points MP1 and MP3, respectively. Moreover, it should be appreciated that in other embodiments, words 50 describing minutia points of the left index finger may include a prefix, such as, but not limited to, FLI which abbreviates Finger—Left Index. Likewise, words 50 describing minutia points of the right index finger may include a prefix such as, but not limited to, FRI which abbreviates Finger—Right Index. Thus, the word 50 describing exemplary minutia point MP1 may be represented as FLIR22R23C8C9 FLIR22R23C9C10.
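Prefixing grid-derived words with a finger code, as described for FLI and FRI above, can be sketched as follows; the function name is an assumption, while the word and prefix formats come from the text.

```python
# Minimal sketch of prefixing grid-derived minutia words with a finger
# code such as FLI (Finger-Left Index) or FRI (Finger-Right Index).

def prefixed_word(word: str, finger_code: str = "FLI") -> str:
    """Attach a finger-code prefix to a row/column minutia word."""
    return f"{finger_code}{word}"

# The two words describing exemplary minutia point MP1, prefixed:
mp1 = " ".join(prefixed_word(w) for w in ["R22R23C8C9", "R22R23C9C10"])
assert mp1 == "FLIR22R23C8C9 FLIR22R23C9C10"
```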
Although the words 50 are described in the exemplary embodiment as being generated from biometric data captured during enrollment, in other embodiments additional words 50, derived from biometric data obtained after enrollment, may be added to an identity record 44 after enrollment. Moreover, in other embodiments the words 50 may include words 50 generated from different types 48 of biometric data such as, but not limited to, face, iris and voice biometric data. Words 50, corresponding to the different types of biometric data, are generally generated by different algorithms. Words 50 generated by different algorithms for a same biometric type may also be included in the identity records 44.
Although the identity records 44 are stored as record data in the server system 12 in the exemplary embodiment, it should be appreciated that in other embodiments the identity records 44 may be stored in any form such as, but not limited to, text documents, XML documents and binary data.
The information shown in
The information shown in
The method continues by determining 60 one of a plurality of algorithms for converting biometric features of the desired biometric type into words. The server system 12 determines the one conversion algorithm in accordance with authentication policies stored therein. In the exemplary method the same conversion algorithm is used for converting biometric feature template data into words as was used during enrollment. Although the one conversion algorithm is determined using authentication policies in the exemplary embodiment, it should be understood that in other embodiments the server system 12 may not have authentication policies stored therein. In such other embodiments a single conversion algorithm is stored in the server system and is determined to be the algorithm used for converting biometric features into words.
Next, the method continues by converting 62 the data included in the biometric feature template into at least one word using the determined conversion algorithm and including the at least one word in a probe generated by the system 12. Words generated as a result of converting the biometric feature template data during authentication are authentication words. Although biometric data of one biometric type is captured in the exemplary embodiment, it should be appreciated that in other embodiments biometric data may be captured for a plurality of different biometric types. In such other embodiments the captured biometric data for each biometric type is processed into a respective biometric feature template, and a conversion algorithm is determined for each of the different biometric types such that the data included in each of the respective biometric feature templates may be converted into at least an authentication word. The authentication words are included in the probe.
After including the authentication words in the probe 62, the method continues by filtering 64 with the generic filtering module (GFM) application by comparing the probe against the gallery. Specifically, the GFM application compares 64 the authentication words included in the probe against the enrollment biometric words 50 included in each of the identity records 44 to determine potential matching identity records. It should be appreciated that a list of potential matching identity records is generated by the GFM application according to the similarity between the probe and the identity records 44.
In the exemplary embodiment, when a comparison does not result in a match between at least one authentication word in the probe and at least one enrollment biometric word 50 in a given identity record 44, the given identity record 44 is discarded, or filtered out. Moreover, when a comparison does not result in a match between at least one authentication word in the probe and at least one enrollment biometric word 50 in any of the identity records 44, the method continues by communicating 66 a negative result to the client system 14. The client system 14 then displays a message indicating “No Matches,” and the method ends 68. Although the client system 14 displays a message indicating “No Matches” when a comparison does not result in a match in the exemplary embodiment, it should be appreciated that in other embodiments the client system may communicate the negative result in an alternative message or in any manner, including, but not limited to, emitting a sound and sending a communication to another system or process.
However, when at least one authentication word included in the probe matches at least one enrollment biometric word included in at least one identity record 44, processing continues by identifying the at least one identity record 44 containing the at least one matching enrollment biometric word as a potential matching identity record. After comparing 68 the probe against all of the identity records 44 in the gallery, processing continues by generating the list of potential matching identity records from the potential matching records. The list of potential matching identity records includes a listing of identity record identifiers that each correspond to a different one of the potential matching identity records. In other embodiments the list may include any data that facilitates identifying the potential matching identity records.
Next, processing continues by ranking 70 the potential matching identity records included in the list in accordance with the authentication policies and rules included in the server system 12. For example, the authentication policies and rules may rank the potential matching identity records according to the number of enrollment biometric words contained therein that match against authentication words in the probe. Thus, the greater the number of matching enrollment biometric words contained in a potential matching identity record, the more similar a potential matching identity record is to the probe. Consequently, the more similar a potential matching identity record is to the probe, the higher the ranking of the potential matching identity record in the list. It should be understood that the most highly ranked potential matching identity records in the list are most likely to be true matching identity records that may be used to authenticate the identity of the individual. After ranking the potential matching identity records 70 in the list, the list of ranked potential matching identity records is stored in the server system 12. Processing continues by communicating 72 the list of ranked potential matching identity records and the ranked matching identity records themselves to a client system 14 for any desired use by an entity associated with the client system 14. For example, the entity may use the ranked potential matching identity records to authenticate the individual. Next, processing ends 68.
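The filtering and ranking steps 64 and 70 described above can be sketched together. This is an illustrative reduction of the GFM behavior under one assumed policy (rank by count of matching words); the function and variable names are assumptions.

```python
# Hedged sketch of the filter-then-rank steps: identity records sharing no
# word with the probe are discarded, and the surviving potential matches
# are ranked by how many enrollment biometric words match authentication
# words in the probe (the exemplary ranking policy).

def filter_and_rank(probe_words: set, gallery: dict):
    """gallery maps record identifier -> set of enrollment biometric words.
    Returns record identifiers ranked by descending number of matches."""
    matches = {
        rec_id: len(probe_words & words)
        for rec_id, words in gallery.items()
        if probe_words & words        # filter out records with no match
    }
    return sorted(matches, key=matches.get, reverse=True)

gallery = {
    "ID1": {"R22R23C8C9", "R21R22C6C7"},   # two matching words
    "ID2": {"R22R23C8C9"},                 # one matching word
    "ID3": {"R01R02C1C2"},                 # no matching word; filtered out
}
ranked = filter_and_rank({"R22R23C8C9", "R21R22C6C7"}, gallery)
assert ranked == ["ID1", "ID2"]
```

Because the comparison is plain set intersection over text words, this step maps directly onto an inverted-index lookup in an industry-standard search engine.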
Although the exemplary method determines a potential matching identity record when at least one authentication word in a probe matches at least one enrollment biometric word in an identity record 44, it should be appreciated that in other embodiments any other matching criteria may be established to determine a potential matching identity record that facilitates authenticating the identity of an individual as described herein. Such other criteria include, but are not limited to, determining a potential matching identity record when two or more words match between a probe and an identity record 44. Although the GFM application ranks the potential matching identity records according to the number of matching words contained therein in the exemplary method, it should be appreciated that in other embodiments the GFM application may rank the potential matching identity records in accordance with any policy, or may rank the potential matching identity records in any manner, that facilitates ranking the potential matching identity records based on similarity with the probe.
The information shown in
However, when the identity of the individual is not verified 76, a negative result is output 80 to the client system 14. Specifically, the client system 14 displays the negative result as a message that indicates “Identity Not Confirmed.” Next, processing ends 68.
It should be appreciated that comparing the authentication words included in a probe against the enrollment biometric words included in the identity records constitutes an initial filtering process because the number of identity records to be analyzed in a subsequent 1:1 verification transaction is quickly reduced to a list of potential matching identity records. By thus quickly reducing the number of identity records, the initial filtering process facilitates reducing the time required to biometrically authenticate individuals. Thus, it should be understood that by filtering out non-matching identity records to quickly generate the list of potential matching identity records, and by generating highly trusted authentication results 76 from the list of potential matching identity records, a method of text-based biometric authentication is provided that facilitates accurately, quickly, and cost effectively authenticating the identity of individuals.
Although the probe includes authentication words in the exemplary methods described herein, it should be appreciated that in other methods the probe may include a combination of biographic words and authentication words. In such other methods, the biographic words constitute words representing any biographic data such as, but not limited to, words describing an individual's name, words describing an individual's date of birth, and alphanumeric words describing an individual's address. The biographic data 46 may also be included in the identity records 44 as biographic words.
It should be understood that by virtue of including the combination of biographic words and authentication words in the probe, the whole identity of an individual may be used for authentication. Moreover, it should be understood that using the whole identity of an individual for authentication facilitates increasing confidence in authentication results. Authentication based on the whole identity of an individual as described herein is unified identity searching. Thus, including the combination of biographic words and authentication words in the probe facilitates enabling unified identity searching and facilitates enhancing increased confidence in authentication results. It should be appreciated that in unified identity searching, identity records are determined to be potential matching identity records when at least one of the biographic words included in the probe, or at least one of the authentication words included in the probe, matches at least one of the biographic words or one of the enrollment biometric words, respectively, included in an identity record. Furthermore, when unified identity matching is implemented, a list of potential matching identity records is generated and processed as described herein in the exemplary method with regard to the flowchart 54.
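The unified-identity matching criterion above can be sketched as a single predicate; the dictionary keys and the `NAME:` word format are assumptions for illustration.

```python
# Hypothetical sketch of unified identity searching: a record is a potential
# match when at least one biographic word OR at least one authentication
# word in the probe matches the corresponding word set of the record.

def is_potential_match(probe: dict, record: dict) -> bool:
    """probe and record each hold 'bio' (biographic) and 'auth'
    (biometric) word sets; either kind of overlap suffices."""
    return bool(probe["bio"] & record["bio"]) or bool(probe["auth"] & record["auth"])

probe = {"bio": {"NAME:DOE"}, "auth": {"FLIR22R23C8C9"}}
record = {"bio": {"NAME:SMITH"}, "auth": {"FLIR22R23C8C9"}}
assert is_potential_match(probe, record)   # biometric overlap alone suffices
```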
In the exemplary embodiments described herein, biometric authentication based on words is used to facilitate authenticating the identities of individuals. More specifically, a determined algorithm converts biometric feature template data into authentication words. The authentication words are used in an initial filtering process to generate a list of ranked potential matching identity records. The list of ranked potential matching identity records and the identity records themselves are communicated to an entity for any use desired by the entity. Instead of communicating the list to an entity, a subsequent process may be conducted by performing a 1:1 verification matching transaction between the biometric feature template data included in a probe and each of the ranked potential matching identity records to authenticate the individual. Because the text-based searching of the initial filtering process is more efficient, less time consuming and less expensive than image-based searching, it facilitates authenticating the identity of an individual quickly, accurately and cost effectively. Moreover, it should be appreciated that conducting text-based searching as described herein facilitates leveraging industry standard search engines to facilitate increasing the efficiency of biometric authentication, to facilitate reducing the time and costs associated with such authentications, and to facilitate easier modification of known biometric authentication search engines such that known search engines may operate with other authentication systems. Furthermore, text-based searching as described herein facilitates enhancing continued investment in search engine technology.
Exemplary embodiments of methods for authenticating the identity of an individual using biometric text-based authentication techniques are described above in detail. The methods are not limited to use as described herein, but rather, the methods may be utilized independently and separately from other methods described herein. Moreover, the invention is not limited to the embodiments of the method described above in detail. Rather, other variations of the method may be utilized within the spirit and scope of the claims.
Furthermore, the present invention can be implemented as a program stored on a computer-readable recording medium that causes a computer to execute the methods described herein to authenticate the identity of an individual using words derived from biometric feature templates. The program can be distributed via a computer-readable storage medium such as, but not limited to, a CD-ROM.
While the invention has been described in terms of various specific embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the claims.
This is a continuation application of U.S. patent application Ser. No. 12/857,337, filed Aug. 16, 2010, now U.S. Pat. No. 8,041,956, issued Oct. 18, 2011, the disclosure of which is incorporated herein by reference.
WO 2009004215 | Jan 2009 | WO |
WO 2009081866 | Jul 2009 | WO |
Entry |
---|
“A secure fingerprint matching technique,” Yang et al., Proceedings of the 2003 ACM SIGMM workshop on Biometrics methods and applications, Nov. 2003. |
“Generating Cancelable Fingerprint Templates,” Ratha et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, Apr. 2007, vol. 29, Issue 4, pp. 561-572. |
“Efficient Finger Print Image Classification and Recognition using Neural Network Data Mining,” Umamaheswaari et al., International Conference, Feb. 2007, pp. 426-432. |
“A cost-effective fingerprint recognition system for use with low-quality prints and damaged fingertips,” Willis et al., Pattern Recognition 34 (2001) pp. 255-270. |
Combining Cryptography with Biometrics for Enhanced Security, Venkatachalam et al., Conference on Control, Automation, Comm. and Energy Conservation, 2009, pp. 1-6. |
Randomized Radon Transforms for Biometric Authentication via Fingerprint Hashing, Jakubowski et al., Microsoft Research, Redmond, WA, 7th ACM DRM Workshop, Oct. 29, 2007, pp. 90-94. |
The Development of Destination-Specific Biometric Authentication, Andrew R. Mark, Apr. 15, 2000, Network Security Library :: Auth. & Access Control, pp. 77-80. |
Integrating Faces and Fingerprints for Personal Identification, Hong et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, No. 12, Dec. 1998, pp. 1295-1307. |
Shihua He et al., Clustering-Based Descriptors for Fingerprint Indexing and Fast Retrieval, Computer Vision—ACCV 2009: 9th Asian Conf. on Comp. Vision, Sep. 23, 2009, pp. 354-363. |
Unsang Park et al., Periocular Biometrics in the Visible Spectrum: A Feasibility Study, Biometrics: Theory, Applications, and Systems 2009, Sep. 28, 2009, pp. 1-6. |
Xu Zhao et al., Discriminative Estimation of 3D Human Pose Using Gaussian Processes, 19th Int. Conf. on Pattern Recognition, 2008, Dec. 8, 2008, pp. 1-4. |
Di Liu et al., Bag-of-Words Vector Quantization Based Face Identification, Electronic Commerce and Security, May 22, 2009, pp. 29-33. |
Josef Sivic et al., Person Spotting: Video Shot Retrieval for Face Sets, Image and Video Retrieval, Aug. 4, 2005, pp. 226-236. |
Fergus R. et al., Object Class Recognition by Unsupervised Scale-Invariant Learning, IEEE Comp. Society Conf. on Comp. Vision and Pattern Recognition, Jun. 18, 2003, pp. 264-271. |
Sivic et al., Video Google: A Text Retrieval Approach to Object Matching in Videos, 9th IEEE Int. Conf. on Computer Vision, Oct. 13, 2003, pp. 1470-1477, vol. 2. |
Omar Hamdoun et al., Person Re-Identification in Multi-Camera System by Signature Based on Interest Point Descriptors Collected on Short Video Sequences, Sep. 7, 2008, pp. 1-6. |
Extended European Search Report for counterpart foreign application EPO No. 11152343.7 mailed Apr. 28, 2011. |
Extended European Search Report for EPO application No. 11152341.1 mailed Apr. 28, 2011. |
“Discrete finger and palmar feature extraction for personal authentication,” Doi, J.; Yamanaka, M.; Kajita, H.; Intelligent Signal Processing, 2003 IEEE Int'l Symp., pp. 37-42. |
“Biometric authentication using finger and palmar creases,” Junta Doi; Yamanaka, M.; Virtual Environments, Human-Computer Interfaces and Meas. Sys., 2004, pp. 72-76. |
“Discrete finger and palmar feature extraction for personal authentication,” Doi, J.; Yamanaka, M.; Instr. and Meas., IEEE Trans., vol. 54, No. 6, Dec. 2005, pp. 2213-2219. |
Number | Date | Country | |
---|---|---|---|
20120042171 A1 | Feb 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12857337 | Aug 2010 | US |
Child | 13233243 | US |