This application relates generally to user authentication, and more particularly, to using facial image data for user authentication.
User authentication is often performed prior to granting user access to a system. Typically, user authentication involves accessing previously stored user information, such as user identification information and/or user biometric data, and comparing the previously stored user information with information that a user provides in connection with an access request. Systems that perform user authentication store user information in a data storage device. Prior to requesting authorization, users enroll with the system by providing user information to be stored.
Some authentication systems described herein perform authentication using a newly captured facial image and an identity chain of an authenticated person, where the identity chain includes at least an image of the person's face and a document that includes a previously captured image of the person's face. For example, a person who has previously been verified, authenticated, or granted authorization submits a request for a subsequent authorization. In response to the request (e.g., from a user device), a facial image is received. Image analysis is performed on the received facial image to determine facial image data. The facial image data is further analyzed to determine whether the person's face in the image matches a facial image in the identity chain. If the image analysis determines that there is a match, authorization is granted. In some embodiments, if authorization is granted, the received facial image is stored to the identity chain for future comparisons. In some embodiments, after the received facial image is stored to the identity chain, the matching criteria are updated. In this way, the system promotes continuous learning by increasing the set of facial images available for use in the matching criteria and other processes. As such, a device is enabled to perform re-authentication using the newly received facial image without requiring a user to request a full authentication or submit identification documentation a second time.
In some embodiments, a method is performed at a server system including one or more processors and memory storing one or more programs for execution by the one or more processors. The method includes receiving first authentication information that includes first facial image data for a user and second facial image data for the user, where the first facial image data for the user is distinct from the second facial image data for the user. The method includes comparing the first facial image data for the user with the second facial image data for the user to determine whether first matching criteria are met. The method includes, in accordance with a determination that the first matching criteria are met, generating an identity chain that includes at least one of the first facial image data for the user or the second facial image data for the user. The method further includes, after generating the identity chain, receiving a request to perform a first transaction and receiving second authentication information that includes third facial image data for the user. The method includes determining whether the third facial image data for the user meets second matching criteria by comparing the third facial image data for the user with facial image data for a respective image of the identity chain. The method includes, in accordance with a determination that the third facial image data for the user meets the second matching criteria, transmitting authorization information for the first transaction.
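By way of a non-limiting sketch, the following Python code models the method described above: enrollment (comparing the first and second facial image data to generate an identity chain) and a subsequent transaction (comparing third facial image data against a respective image of the chain). The embedding function, similarity measure, and thresholds are assumptions made for illustration only and do not represent the disclosed matching criteria.

```python
# A minimal sketch of the claimed flow; `embed`, `similarity`, and the
# thresholds are hypothetical stand-ins, not the disclosed implementation.
from dataclasses import dataclass, field

import numpy as np


def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (hypothetical)."""
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) or 1.0)


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between embeddings (assumes equal-size images)."""
    return float(np.dot(embed(a), embed(b)))


@dataclass
class IdentityChain:
    images: list = field(default_factory=list)  # facial image data, oldest first


FIRST_MATCH_THRESHOLD = 0.90   # "first matching criteria" (illustrative)
SECOND_MATCH_THRESHOLD = 0.85  # "second matching criteria" (illustrative)


def enroll(first_image: np.ndarray, second_image: np.ndarray):
    """Generate an identity chain if the first matching criteria are met."""
    if similarity(first_image, second_image) >= FIRST_MATCH_THRESHOLD:
        return IdentityChain(images=[first_image, second_image])
    return None  # first matching criteria not met


def authorize_transaction(chain: IdentityChain, third_image: np.ndarray) -> bool:
    """Compare new facial image data with respective images of the chain."""
    if any(similarity(image, third_image) >= SECOND_MATCH_THRESHOLD
           for image in chain.images):
        chain.images.append(third_image)  # store for future comparisons
        return True                       # transmit authorization information
    return False
```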
In accordance with some embodiments, an electronic device (e.g., a server system, a client device, etc.) includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for performing the operations of the method described above. In accordance with some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device, cause the electronic device to perform the operations of the method described above.
So that the present disclosure can be understood in greater detail, features of various embodiments are illustrated in the appended drawings. The appended drawings, however, merely illustrate pertinent features of the present disclosure and are therefore not limiting.
In accordance with common practice, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals denote like features throughout the specification and figures.
Numerous details are described herein in order to provide a thorough understanding of the example embodiments illustrated in the accompanying drawings. However, some embodiments may be practiced without many of the specific details, and the scope of the claims is only limited by those features and aspects specifically recited in the claims. Furthermore, well-known processes, components, and materials have not been described in exhaustive detail so as not to unnecessarily obscure pertinent aspects of the embodiments described herein.
The processor(s) 136 execute modules, programs, and/or instructions stored in the memory 102 and thereby perform processing operations.
In some embodiments, the memory 102 stores one or more programs (e.g., sets of instructions) and/or data structures, collectively referred to as “modules” herein. In some embodiments, the memory 102, or the computer readable storage medium (e.g., a non-transitory computer-readable storage medium) of the memory 102, or a computer program product stores the following programs, modules, and data structures, or a subset or superset thereof:
The above identified modules (e.g., data structures, and/or programs including sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, the memory 102 stores a subset of the modules identified above. In some embodiments, a remote authentication database 154 and/or a local authentication database 144 store one or more modules identified above. Furthermore, the memory 102 may store additional modules not described above. In some embodiments, the modules stored in the memory 102, or a non-transitory computer readable storage medium of the memory 102, provide instructions for implementing respective operations in the methods described below. In some embodiments, some or all of these modules may be implemented with specialized hardware circuits that subsume part or all of the module functionality. One or more of the above identified elements may be executed by one or more of the processor(s) 136. In some embodiments, one or more of the modules described with regard to the memory 102 is implemented in the memory 202 of an image capturing device 200 and executed by the processor(s) 220 of the image capturing device 200.
In some embodiments, the I/O subsystem 140 communicatively couples the computing system 100 to one or more local devices, such as a biometric input device 142 and/or a local authentication database 144, via a wired and/or wireless connection. In some embodiments, the I/O subsystem 140 communicatively couples the computing system 100 to one or more remote devices, such as a remote authentication database 154, a first image capturing device 200a, and/or a second image capturing device 200b, via a first communications network 150, a second communications network 152, and/or via a wired and/or wireless connection. In some embodiments, the first communications network 150 is the Internet. In some embodiments, the first communication network 150 is a first financial network and the second communication network 152 is a second financial network.
In some embodiments, a biometric input device 142 (e.g., a fingerprint scanner, a retinal scanner, and/or a camera) is communicatively coupled to the computing system 100. For example, the computing system 100 is located in or near to an authentication kiosk, or is communicatively coupled to an authentication kiosk that includes the biometric input device 142.
The communication bus 134 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
The processor(s) 220 execute modules, programs, and/or instructions stored in the memory 202 and thereby perform processing operations.
In some embodiments, the memory 202 stores one or more programs (e.g., sets of instructions) and/or data structures, collectively referred to as “modules” herein. In some embodiments, the memory 202, or the non-transitory computer readable storage medium of the memory 202 stores the following programs, modules, and data structures, or a subset or superset thereof:
The above identified modules (e.g., data structures, and/or programs including sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, the memory 202 stores a subset of the modules identified above. In some embodiments, the camera 218 stores one or more modules identified above (e.g., captured image data 206). Furthermore, the memory 202 may store additional modules not described above. In some embodiments, the modules stored in the memory 202, or a non-transitory computer readable storage medium of the memory 202, provide instructions for implementing respective operations in the methods described below. In some embodiments, some or all of these modules may be implemented with specialized hardware circuits that subsume part or all of the module functionality. One or more of the above identified elements may be executed by one or more of the processor(s) 220. In some embodiments, one or more of the modules described with regard to the memory 202 is implemented in the memory 102 of the computing system 100 and executed by processor(s) 136 of the computing system 100.
The camera 218 captures still images, sequences of images, and/or video. In some embodiments, the camera 218 is a digital camera that includes an image sensor and one or more optical devices. The image sensor is, for example, a charge-coupled device or other pixel sensor that detects light. In some embodiments, one or more optical devices are movable relative to the image sensor by an imaging device actuator. The one or more optical devices affect the focus of light that arrives at the image sensor and/or an image zoom property.
In some embodiments, the image capturing device 200 includes a camera 218 (e.g., the camera 218 is located within a housing of the image capturing device 200). In some embodiments, the camera 218 is a peripheral device that captures images and sends captured image data 206 to the I/O subsystem 226 of the image capturing device 200 via a wired and/or wireless communication connection.
In some embodiments, the I/O subsystem 226 communicatively couples image capturing device 200 to one or more remote devices, such as a computing system 100, via a first communication network 150 and/or a second communication network 152.
In some embodiments, a user input device 230 and/or an output device 232 are integrated with the image capturing device 200 (e.g., as a touchscreen display). In some embodiments, a user input device 230 and/or an output device 232 are peripheral devices communicatively connected to an image capturing device 200. In some embodiments, a user input device 230 includes a microphone, a keyboard, and/or a pointer device such as a mouse, a touchpad, a touchscreen, and/or a stylus. In some embodiments, the output device 232 includes a display (e.g., a touchscreen display that includes input device 230) and/or a speaker.
The communication bus 228 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
In some embodiments, one or more user input devices and/or output devices (not shown), such as a display, touchscreen display, speaker, microphone, keypad, pointer control, zoom adjustment control, focus adjustment control, and/or exposure level adjustment control, are integrated with the device 200.
In some embodiments, the document 300 includes facial image location cue information (e.g., the concentric rectangles indicated at 304). Facial image location cue information 304 is a visual indication on the document 300 of a location of the identification image 302 within the document 300. For example, the concentric rectangles 304 that surround identification image 302 provide a cue to indicate the location of the identification image 302 within the document 300. In some embodiments, facial image location cue information includes one or more marks and/or pointers. For example, facial image location cue information indicates a facial image area that is smaller than the full area of the document 300 and that includes the identification image 302, such as a perimeter that indicates boundaries of the identification image 302 or otherwise surrounds the identification image 302. In some embodiments, a facial image location cue is a background surrounding the identification image 302 (e.g., a background that has a predefined color and/or pattern). In some embodiments, a facial image location cue includes a material and/or texture of the facial image area of the document 300 that is different from a material and/or texture of the remainder of the document 300.
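As a hedged sketch only: if the location cue is a set of concentric printed rectangles (304), a system could locate the identification image 302 with standard contour analysis, for example using OpenCV as below. The Canny thresholds, the polygon approximation, and the nested-rectangle heuristic are assumptions for illustration, not the disclosed technique.

```python
# Illustrative cue detection: find a rectangle nested inside another
# rectangle (a "concentric" cue) and crop the innermost one.
from typing import Optional

import cv2
import numpy as np


def find_identification_image(document_bgr: np.ndarray) -> Optional[np.ndarray]:
    gray = cv2.cvtColor(document_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, hierarchy = cv2.findContours(
        edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return None
    best = None
    for i, contour in enumerate(contours):
        approx = cv2.approxPolyDP(
            contour, 0.02 * cv2.arcLength(contour, True), True)
        # Four corners with a parent contour suggests a nested (concentric)
        # rectangle, i.e., a candidate facial image location cue.
        if len(approx) == 4 and hierarchy[0][i][3] != -1:
            x, y, w, h = cv2.boundingRect(approx)
            if best is None or w * h < best[2] * best[3]:
                best = (x, y, w, h)  # keep the innermost rectangle
    if best is None:
        return None
    x, y, w, h = best
    return document_bgr[y:y + h, x:x + w]  # crop of identification image 302
```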
In some embodiments, authentication information is a representation of an authentication request sent over an electronic connection (e.g., from an image capturing device 200, via the network 150, to a server 100 that is tasked with performing authentication). In some embodiments, the authentication information is sent in response to the authentication request. In some embodiments, the authentication information is a data structure that is sent from the image capturing device 200 to the server 100 in response to the user submitting a request for authentication at the image capturing device 200.
In some embodiments, the identity chain 600 is associated with (e.g., rooted to) document 300 (e.g., via first facial image data 610) through the process described above.
In some embodiments, the identity chain 600 is used to authenticate subsequent transactions. For example, the new images included in the identity chain and associated with document 300 (e.g., via first facial image data 610) are used to authenticate captured image data 650 for additional transactions. For purposes of this disclosure, a transaction is the execution of a particular action and/or agreement between the user and a third party. The captured images (602, 608, 604, and/or 610) included in the identity chain 600 are captured by an image capturing device 200, such as the first image capturing device 200a described above.
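One possible, non-limiting data layout for such an identity chain, sketched in Python: the document image data 608 serves as the root, and each subsequently matched facial image is linked back to it with a timestamp. The field names and structure are assumptions for illustration.

```python
# One assumed shape for an identity chain "rooted" to document 300.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChainEntry:
    facial_image_data: bytes   # e.g., an encoded image or an embedding
    added_at: datetime
    source: str                # "document", "enrollment", or "transaction"


@dataclass
class IdentityChain:
    document_image_data: bytes                    # root: image data 608
    entries: list[ChainEntry] = field(default_factory=list)

    def add(self, facial_image_data: bytes, source: str) -> None:
        """Associate newly matched facial image data with the document root."""
        self.entries.append(ChainEntry(
            facial_image_data, datetime.now(timezone.utc), source))
```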
Based on a determination that the first matching criteria (e.g., a facial comparison between the image data) have been met, the identity chain is generated and at least one of first facial image data 610 and/or second facial image data 604 is included in the identity chain 600. The generated identity chain associates the included facial image data (e.g., second facial image data 604 and/or first facial image data 610) with document 300. In some embodiments, the identity chain 600 is continuously updated and/or includes newly-captured facial image data. Subsequently and/or newly included facial image data in the identity chain 600 are associated with document 300 (e.g., via image data 608 of the document).
In some embodiments, third facial image data 652 is compared with respective facial image data of the identity chain 600 to determine if second matching criteria are met. For example, third facial image data 652 is compared with second facial image data 604 (and/or first facial image data 610), where second facial image data 604 (and/or first facial image data 610) is associated with document 300. In some embodiments, a portion of the third facial image data 654 is compared with a respective portion (e.g., eyes, nose, mouth, less than all of a full face, etc.) of the respective facial image data of the identity chain 600. For example, the eyes, mouth, and nose of captured image data 650 (portion of third facial image data 654) are compared with the eyes, mouth, and nose of respective facial image data of the identity chain 600 (e.g., portion of second facial image data 606 (and/or a portion of first facial image data 612)), where the respective facial image data of the identity chain 600 is associated with document 300. In some embodiments, third facial image data 652 is compared with a plurality of facial image data included in the identity chain 600. In some embodiments, in accordance with a determination that the second matching criteria are met, third facial image data 652 (or captured image data 650) is included in the identity chain 600 and associated with document 300.
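A minimal sketch of this portion-wise comparison, assuming fixed fractional crop boxes in place of a real landmark detector and normalized correlation in place of the actual similarity measure; the feature boxes and threshold are illustrative assumptions.

```python
# Illustrative per-portion matching: every feature crop must agree with the
# corresponding crop of a chain image.
import numpy as np

FEATURES = ("left_eye", "right_eye", "nose", "mouth")
PORTION_THRESHOLD = 0.85  # illustrative


def crop_feature(image: np.ndarray, feature: str) -> np.ndarray:
    """Hypothetical landmark-free crop; a real system would use a detector."""
    h, w = image.shape[:2]
    boxes = {  # fixed fractional boxes, purely for illustration
        "left_eye": (0.20, 0.30, 0.45, 0.45),
        "right_eye": (0.55, 0.30, 0.80, 0.45),
        "nose": (0.40, 0.45, 0.60, 0.65),
        "mouth": (0.30, 0.65, 0.70, 0.80),
    }
    x0, y0, x1, y1 = boxes[feature]
    return image[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]


def portions_match(candidate: np.ndarray, chain_image: np.ndarray) -> bool:
    """Match only if every portion matches (second matching criteria sketch)."""
    def score(a: np.ndarray, b: np.ndarray) -> float:
        # Normalized correlation; crops are trimmed to a common length since
        # the two images may differ in size (crude, but keeps the sketch runnable).
        a = a.astype(float).ravel()
        b = b.astype(float).ravel()
        n = min(a.size, b.size)
        a = a[:n] - a[:n].mean()
        b = b[:n] - b[:n].mean()
        denom = (np.linalg.norm(a) * np.linalg.norm(b)) or 1.0
        return float(np.dot(a, b) / denom)

    return all(score(crop_feature(candidate, f), crop_feature(chain_image, f))
               >= PORTION_THRESHOLD for f in FEATURES)
```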
In some embodiments, if it is determined that invalid facial image data (e.g., facial image data that fails to meet matching criteria (e.g., first or second matching criteria) and/or other criteria) was improperly added to the identity chain 600, the facial image data is removed from the identity chain 600. In some embodiments, if facial image data is determined to have improperly met the second matching criteria and been added to the identity chain 600, each such added facial image data is removed from the identity chain 600 (e.g., the identity chain 600 is reverted to the state in which it was first generated). For example, if third facial image data 652 is added to the identity chain 600 improperly, then the third facial image data 652 and/or other added facial image data is removed from the identity chain 600 until at least one of the first facial image data 610 and/or the second facial image data 604 remains (e.g., whichever facial image data was included when the identity chain 600 was generated). In other words, the identity chain 600 is updated to reflect its initial state as when it was first generated (e.g., with at least one of the first facial image data 610 and/or the second facial image data 604 remaining).
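A brief sketch of this removal behavior, under the assumption that chain entries are stored oldest-first so the facial image data included at generation occupies the first positions; both helpers are illustrative.

```python
# Reverting or pruning the chain (illustrative assumptions).
def revert_to_initial(entries: list, initial_count: int = 2) -> list:
    """Revert the chain to its generated state (drop later additions)."""
    return entries[:initial_count]


def remove_invalid(entries: list, is_valid) -> list:
    """Alternatively, remove only the entries flagged as invalid."""
    return [entry for entry in entries if is_valid(entry)]
```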
In some embodiments, the second matching criteria are based on the first matching criteria and the identity chain 600. For example, the images included in the identity chain 600 are used to determine the second matching criteria. In some embodiments, the second matching criteria are updated for each image included in the identity chain 600. In some embodiments, the second matching criteria are updated periodically (e.g., daily, weekly, monthly, etc.). In some embodiments, the second matching criteria are updated when facial image data is removed from the identity chain 600. For example, if the identity chain 600 is reverted to the state in which it was first generated, the second matching criteria are updated to reflect the changes in the identity chain 600.
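As one hedged illustration of criteria that track the identity chain, a similarity threshold could be recomputed from pairwise scores among the chain's (unit-norm) embeddings, so the criteria reflect the user's accumulated samples; the formula below is an assumption, not the disclosed criteria.

```python
# Illustrative threshold update derived from intra-chain similarity.
import itertools

import numpy as np


def updated_threshold(chain_embeddings: list[np.ndarray],
                      base_threshold: float = 0.90,
                      margin: float = 0.05) -> float:
    pairs = list(itertools.combinations(chain_embeddings, 2))
    if not pairs:
        return base_threshold
    scores = [float(np.dot(a, b)) for a, b in pairs]  # unit-norm embeddings
    # Loosen toward the user's own intra-chain variation, but never rise
    # above the base threshold.
    return min(base_threshold, min(scores) - margin)
```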
For example, in accordance with a determination by the image analysis module 106 that a facial position adjustment is needed, the computing system 100 transmits to the image capturing device 200 a facial position adjustment request, which includes a message such as “please turn your head to the left.” In some embodiments, in response to receiving the transmitted request, the image capturing device 200 displays or otherwise outputs this message (e.g., via an output device 232). In some embodiments, in response to receiving the transmitted request (e.g., subsequent to displaying the received message), the image capturing device 200 captures new image data 750, which includes new facial image data 752.
In some embodiments, determining whether a first facial image in a first facial image data position 702 and the identity chain 600 meet facial position matching criteria includes determining whether one or more facial features (e.g., right eye, left eye, mouth, nose, and/or another identified facial curve or protrusion) detected in the identity chain 600 (e.g., in identification image 302 and/or user image data 602) are also detected in the first facial image in the first facial image data position 702. If the one or more facial features from the identity chain 600 are not detected in the first facial image data position 702 of the first image, the computing system 100 transmits to the image capturing device 200 a facial position adjustment request (e.g., including a message such as, “please turn your head to the left,” “please turn your head to the right,” “please tilt your head upward,” or “please tilt your head downward”).
In some embodiments, determining whether a first facial image in a first facial image data position 702 and an image of the identity chain 600 meet facial position matching criteria includes determining whether a face in the first facial image data position 702 is at least partially obstructed (e.g., partially covered by a hat) and/or determining whether a face in the identity chain 600 is at least partially obstructed (e.g., covered by a finger). If an obstruction is detected, the computing system 100 transmits to the image capturing device 200 a facial position adjustment request (e.g., including a message such as, “please remove your hat,” or “please move your finger so that it is not covering the picture of your face”).
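A simple, illustrative sketch of selecting an adjustment message from detector output covering both cases above; the feature names, the turn/tilt heuristic, and the obstruction handling are assumptions rather than the disclosed logic.

```python
# Choose a facial position adjustment message (illustrative heuristic).
from typing import Optional


def adjustment_request(detected: set[str], expected: set[str],
                       obstructed: set[str]) -> Optional[str]:
    """Return an adjustment message, or None if position criteria are met."""
    if obstructed:
        # e.g., an obstructed forehead might map to "please remove your hat"
        return f"please remove the obstruction near your {sorted(obstructed)[0]}"
    missing = expected - detected
    if not missing:
        return None  # facial position matching criteria met
    # Map a missing feature to a turn/tilt direction (assumed mapping).
    if "left_eye" in missing:
        return "please turn your head to the right"
    if "right_eye" in missing:
        return "please turn your head to the left"
    if "mouth" in missing:
        return "please tilt your head downward"
    return "please tilt your head upward"
```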
In some embodiments, to meet the movement criteria for a liveness assessment, movement of a facial feature must exceed a threshold distance (e.g., relative to movement of a boundary of the person's face).
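A worked sketch of such a movement criterion: the tracked feature's travel is compared against the face boundary's travel, so whole-face motion (e.g., a photograph moved in front of the camera) does not pass. The ratio, normalization, and example values are illustrative assumptions.

```python
# Liveness movement criterion sketch: feature motion relative to the face
# boundary must exceed a threshold.
import math


def liveness_movement_met(feature_path, boundary_path,
                          face_size: float = 1.0,
                          threshold_ratio: float = 0.15) -> bool:
    """True if the feature moved more than the face boundary by a margin."""
    def travel(path):  # total distance traveled along a tracked point path
        return sum(math.dist(p, q) for p, q in zip(path, path[1:]))
    # Normalize by face size (e.g., bounding-box diagonal) so the test is
    # independent of how large the face appears in the frame.
    relative = abs(travel(feature_path) - travel(boundary_path)) / face_size
    return relative > threshold_ratio


# Example: an eyelid landmark moves during a blink challenge while the face
# outline stays nearly still, so the movement criteria are met.
eyelid = [(10.0, 10.0), (10.0, 12.0), (10.0, 15.0)]
outline = [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)]
print(liveness_movement_met(eyelid, outline))  # True
```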
System 100 receives (1306) first authentication information that includes first facial image data for the user and second facial image data for the user, where the first facial image data for the user is distinct from the second facial image data for the user. In some embodiments, the first facial image data is previously-authenticated image data. In some embodiments, the first facial image data for the user corresponds to (e.g., was obtained from) a government issued identification (1308) (e.g., or another issued identification). For example, a new user that seeks to register and/or authenticate themselves would provide first authentication information to begin the authentication process. In some circumstances, the first authentication information includes a picture of a photo identification. In some embodiments, the first facial image data is image data derived from a picture of a photo identification provided during enrollment of the user. In some embodiments, the first facial image data and the second facial image data are received concurrently as part of an enrollment request (e.g., a request to authenticate the first facial image data).
In some embodiments, a system 100 receives (1302-a) a request to perform a transaction (e.g., a second transaction) before the system 100 has received authentication information (e.g., first authentication information) from a user; the system 100 determines (1302-b) whether the user is associated with an identity chain; and in accordance with a determination that the user is not associated with the identity chain, the system 100 prompts (1302-c) the user to provide the authentication information (e.g., first authentication information). For example, when system 100 receives a request from a new user to perform a transaction, the system determines that the new user is not associated with an identity chain (e.g., has not been previously authenticated) and, in turn, prompts the new user to provide the authentication information that includes facial image data of person 402 and an image of a document for the person 402. In some embodiments, a transaction requires authorization to grant an access request (e.g., a data access, device access, and/or facility access request) and/or to execute a particular action and/or agreement. Additionally and/or alternatively, in some embodiments, prompting (1304) the user to provide the authentication information includes a request to capture, via an image capturing device 200, first facial image data for the user and second facial image data for the user.
Returning to the process, system 100 compares (1310) the first facial image data 610 for the user with the second facial image data 604 for the user to determine whether first matching criteria are met. In some embodiments, system 100 analyzes (1312) the first facial image data for the user 610 and the second facial image data for the user 604 to determine a first portion 612 of the first facial image data for the user 610 and a second portion 606 of the second facial image data for the user 604, where the first portion 612 and the second portion 606 correspond to respective one or more facial features (e.g., nose, eyes, mouth, less than all of a full face, etc.). For example, when system 100 receives an image of a government issued identification (e.g., an example of document 300) that includes first facial image data for the user 610 and a second image of the user 602 that includes second facial image data for the user 604, system 100 identifies a first portion 612 corresponding to one or more facial features (e.g., eyes, nose, mouth, less than all of a full face, etc.) of the first facial image data 610 and a second portion 606 corresponding to one or more facial features of the second facial image data 604. In some embodiments, the first and the second portions of the respective facial image data (e.g., 612 and 606) are used to determine whether the first matching criteria are met.
System 100, in accordance with a determination that the first matching criteria are met, generates (1314) an identity chain that includes at least one of the first facial image data for the user or the second facial image data for the user. In some embodiments, identity chain 600 is associated with (e.g., rooted to) the image data 608 of the document (e.g., document 300) such that subsequent image captures of document 300 are not needed and/or required. For example, the facial image data of the identity chain is rooted to the image data 608 of the document (and document 300) and allows for subsequent verification of additional facial image data received from the user without document 300.
After generating the identity chain, system 100 receives (1316) a request to perform a first transaction and second authentication information that includes third facial image data 652 for the user.
System 100 determines (1322) whether the third facial image data 652 for the user meets second matching criteria by comparing the third facial image data 652 for the user with facial image data for a respective image of the identity chain 600. For example, the third facial image data 652 can be compared with the second facial image data 604, the first facial image data 610, and/or with any other facial image data included in the identity chain.
In some embodiments, the system 100 determines one or more parameters (e.g., shape of face, location of facial features such as eyes, mouth, and nose relative to one another and/or relative to an outline of the face, relative sizes of facial features, and/or distances between facial features) of facial image data for a respective image of the identity chain 600 and uses the one or more parameters of the respective image of the identity chain 600 to identify corresponding parameters in the third facial image data 652. In some embodiments, the one or more parameters of the respective image of the identity chain 600 and the corresponding parameters in the third facial image data 652 are used to determine if the second matching criteria is met.
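A brief sketch of this parameter comparison, assuming landmark coordinates are available from an upstream detector; the specific ratios and tolerance are illustrative assumptions.

```python
# Geometric parameters: inter-feature distances normalized by eye distance.
import math

Landmark = tuple[float, float]


def face_parameters(landmarks: dict[str, Landmark]) -> dict[str, float]:
    """Inter-feature distances, normalized by the inter-eye distance."""
    eye_dist = math.dist(landmarks["left_eye"], landmarks["right_eye"]) or 1.0
    return {
        "eye_to_nose": math.dist(landmarks["left_eye"], landmarks["nose"]) / eye_dist,
        "nose_to_mouth": math.dist(landmarks["nose"], landmarks["mouth"]) / eye_dist,
    }


def parameters_match(chain_params: dict[str, float],
                     candidate_params: dict[str, float],
                     tolerance: float = 0.08) -> bool:
    """Criteria met if every parameter agrees within the tolerance."""
    return all(abs(chain_params[k] - candidate_params[k]) <= tolerance
               for k in chain_params)
```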
In some embodiments, the second matching criteria is based at least in part on the first matching criteria and the identity chain (1324). For example, the first matching criteria is used to determine initial second matching criteria and, after the identity chain 600 is generated, at least a respective image of the identity chain is used to determine the second matching criteria.
In some embodiments, determining (1326) whether the third facial image data 652 for the user meets the second matching criteria includes determining liveness of the third facial image data 652 for the user. For example, liveness is determined by using one or more liveness challenges (e.g., as described above).
In some embodiments, system 100 analyzes (1328) the third facial image data 652 for the user to determine a respective portion (e.g., 654) of the third facial image data 652 for the user that corresponds to one or more facial features (e.g., eyes, mouth, nose, etc.). For example, system 100 determines a location of a facial feature (e.g., an iris of at least one eye) within the third facial image data 652 of the captured image data 650 and within a respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610), and compares a color of the facial feature (e.g., a color of at least one pixel) in the third facial image data 652 of the captured image data 650 with the color of the facial feature in the respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610) to determine whether the captured image data 650 and the respective image of the identity chain 600 (e.g., identification image 302 and/or user image data 602) meet the second matching criteria. In another example, system 100 determines a location of a first facial feature (e.g., a nose) within the third facial image data 652 of the captured image data 650 and within respective images of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610 of the image data that corresponds to the user image data 602 and/or identification image 302). In a further example, the system 100 determines a location of a second facial feature (e.g., a left eye) within the third facial image data 652 of the captured image data 650 and within a respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610). A first distance between the first facial feature and the second facial feature in the third facial image data 652 of the captured image data 650 is determined. A second distance between the first facial feature and the second facial feature in the respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610) is also determined. The first distance (e.g., relative to the size of the third facial image data 652 within the captured image data 650) is compared with the second distance (e.g., relative to the size of the respective image of the identity chain 600, e.g., first facial image data 610 in the identification image 302 and/or second facial image data 604 in the user image data 602) to determine whether the third facial image data 652 and the respective image of the identity chain 600 meet the second matching criteria. Although the above examples are representative of the second matching criteria, a similar process is used to determine whether the first matching criteria are met.
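As a hedged sketch of the first (color-based) example above, assuming the eye location is already known for both images; the sampling radius and color tolerance are illustrative.

```python
# Compare iris color between the captured image and a chain image.
import numpy as np


def iris_color(image: np.ndarray, eye_xy: tuple[int, int],
               radius: int = 2) -> np.ndarray:
    """Mean color of a small patch around the located eye (HxWx3 image)."""
    x, y = eye_xy
    patch = image[max(y - radius, 0):y + radius + 1,
                  max(x - radius, 0):x + radius + 1]
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)


def iris_colors_match(img_a: np.ndarray, eye_a: tuple[int, int],
                      img_b: np.ndarray, eye_b: tuple[int, int],
                      max_delta: float = 30.0) -> bool:
    """True if the sampled colors agree within a per-channel tolerance."""
    diff = iris_color(img_a, eye_a) - iris_color(img_b, eye_b)
    return float(np.linalg.norm(diff)) <= max_delta
```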
Additionally and/or alternatively, in some embodiments, system 100 determines a respective portion (e.g., 654) of the third facial image data 652 from a plurality of image frames (e.g., image frames of a video) to compare with the respective image of the identity chain 600 (e.g., a respective portion, such as first portion 612 of image data 608). For example, the system uses edge detection techniques to determine a region and/or outline (e.g., third facial image portion 654) of the third facial image data 652 and/or other techniques to determine distance, size, shape, curve features, color, and/or relative properties of one or more portions of the third facial image data 652 and the respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610). In some embodiments, system 100 determines a shape of a face outline within (e.g., a portion of) the third facial image data 652 of the captured image data 650 and within the respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610), and compares the shape of the face in the third facial image data 652 of the captured image data 650 with the shape of the face in the respective image of the identity chain 600 (e.g., second facial image data 604 and/or first facial image data 610) to determine whether the captured image data 650 and the respective image of the identity chain 600 (e.g., identification image 302 and/or user image data 602) meet the second matching criteria.
Optionally, in some embodiments, system 100 generates the third facial image data 652 by compositing a plurality of respective portions of respective image frames from the plurality of image frames that correspond to the captured image data 650. For example, if a segment of the face in the captured image data 650 is obstructed in a first frame and a distinct segment of the face in the captured image data 650 is obstructed in a second frame, the obstructed segment of the face in the second frame can be replaced with a corresponding unobstructed segment of the face from the first frame.
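A minimal compositing sketch for the obstruction example, under the assumption that per-frame obstruction masks (True where the face is covered) come from an upstream detector.

```python
# Composite two frames: take obstructed segments of frame A from frame B.
import numpy as np


def composite(frame_a: np.ndarray, mask_a: np.ndarray,
              frame_b: np.ndarray, mask_b: np.ndarray) -> np.ndarray:
    """Replace segments obstructed in frame_a with pixels from frame_b."""
    out = frame_a.copy()
    usable = mask_a & ~mask_b          # covered in frame A but clear in frame B
    out[usable] = frame_b[usable]      # boolean-mask pixel replacement
    return out
```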
In accordance with a determination that the third facial image data 652 for the user meets the second matching criteria, the system 100 transmits (1330) authorization information for the first transaction. In some embodiments, in accordance with a determination that the third facial image data 652 for the user meets the second matching criteria, system 100 includes (1332) (e.g., adds) the third facial image data 652 for the user in the identity chain 600. Including additional facial image data in the identity chain 600 improves the accuracy of the authentication process by making a robust set of image data available to authenticate the user.
In some embodiments, if it is determined that invalid facial image data (e.g., facial image data that fails to meet matching criteria (e.g., first or second matching criteria) and/or other criteria) was improperly added to the identity chain 600, the system 100 removes added facial image data from the identity chain 600 until at least one of the first facial image data 610 or the second facial image data 604 remains. In other words, the identity chain 600 is updated to reflect its initial state as when it was first generated (e.g., with at least one of the first facial image data 610 or the second facial image data 604 remaining). Additionally and/or alternatively, in some embodiments, the system 100 determines that facial image data and/or image data was improperly added to the identity chain 600 via a quality control process. The quality control process identifies (e.g., flags) facial image data and/or image data that was improperly added to the identity chain 600 by periodically (e.g., each day, each week, each month, etc.) determining whether a respective image of the identity chain 600 fails to meet matching criteria (e.g., first matching criteria, second matching criteria, and/or other criteria) or otherwise fails to meet authorization, authentication, or other criteria.
In some embodiments, system 100 utilizes (1334) the identity chain 600 (e.g., identity chain 600 that has been updated to include additional facial image data) to update the second matching criteria. Updating the second matching criteria enables system 100 to improve the matching criteria, increase accuracy over time, and account for a user's changes over time. In some embodiments, the second matching criteria is updated each time the identity chain 600 is modified (e.g. new facial image data is added and/or the identity chain 600 is reset). In some embodiments, the second matching criteria is updated periodically (e.g. each day, each week, each month, etc.).
In some embodiments, system 100 receives (1336-a) a request to perform a third transaction after the first transaction and receives (1336-b) third authentication information that includes fourth facial image data (e.g., captured similarly to captured image data 650 and third facial image data 652) for the user. System 100 further determines (1336-c) whether the user is associated with the identity chain and, in accordance with a determination that the user is associated with the identity chain, determines (1336-d) whether the fourth facial image data for the user meets updated second matching criteria by comparing the fourth facial image data for the user with facial image data for a new respective image of the identity chain 600. For example, the system determines whether the fourth facial image data for the user matches the third facial image data, which has been added to the identity chain. In some embodiments, the identity chain includes facial image data for a plurality of images, wherein the plurality of images were authenticated at different times. In accordance with a determination that the fourth facial image data for the user meets (1336-e) the updated second matching criteria (e.g., matches facial image data corresponding to any of the images in the plurality of images of the identity chain), system 100 generates (1336-f) authorization information for the third transaction, includes (1336-g) (e.g., adds) the fourth facial image data for the user in the identity chain 600, and updates (1336-h) the updated second matching criteria based on the identity chain 600.
In some embodiments, the identity chain includes (1338-a) a plurality of facial image data. In some embodiments, determining whether the fourth facial image data for the user meets the updated second matching criteria includes (1338-b) comparing the fourth facial image data with a subset of the facial image data of the identity chain 600.
In some embodiments, system 100 receives (1340-a) a request to perform a third transaction after the first transaction, determines (1340-b) whether the user is associated with the identity chain, and, in accordance with a determination that the user is associated with the identity chain, prompts (1340-c) the user for third authentication information that includes fourth facial image data for the user (e.g., and does not include an image of a photo identification).
In some embodiments, in accordance with a determination that the captured image data 650 (e.g., third facial image data 652 or the portion of third facial image data 654) does not meet the second matching criteria, the device forgoes generating authorization information for the first transaction. In some embodiments, in accordance with a determination that the captured image data 650 (e.g., third facial image data 652 or the portion of third facial image data 654) does not meet the second matching criteria, the device transmits authorization denial information to the image capturing device 200. Additionally and/or alternatively, in some embodiments, in accordance with a determination that the captured image data 650 (e.g., third facial image data 652 or the portion of third facial image data 654) does not meet the second matching criteria, the device transmits, to the image capturing device, a facial position adjustment request. Examples of facial position adjustment requests are discussed above.
In some embodiments, in lieu of receiving captured image data from an image capturing device 200 that is remote from the computing system 100, the computing system 100 captures the captured image data. For example, the computing system 100 captures the captured image data using a biometric input device 142, a camera (not shown) that is a component of the computing system 100, or a local camera (not shown) that is a peripheral device of the computing system 100. In this way, the same system that captures the image data also analyzes the image data as described above.
If, on the other hand, the system determines that the matching criteria has not been met, a user is prompted to provide new facial image data 1422. If the user agrees to provide new image data, the system repeats steps 1406-1410, as discussed above, until a final determination is made. If the user decides not to provide new image data, then the system forgoes generating authorization information for the transaction 1424.
If the system determines that the user is not associated with an identity chain 1404, then the system prompts or requests the user to generate an identity chain 1418. If the user decides not to generate the identity chain, then the system forgoes 1420 generating authorization information for the transaction. On the other hand, if the user decides to generate the identity chain, then the system prompts the user for authentication information 1426 (e.g., enrollment).
In prompting the user for authentication information 1426, the system requests at least two distinct facial images, where at least one facial image is from a document. The system receives, from the user, authentication information that includes at least two distinct facial images, where at least one facial image is from a document 1428. The system compares the facial image data with the document image data to determine whether authentication matching criteria are met 1430. If the system determines that the authentication matching criteria have been met 1432, then the system authenticates the user 1434 and generates the identity chain 1436 for the user. After the identity chain is generated 1436 for the user, the system includes 1438 at least one of the two distinct facial images in the identity chain. If the system determines that the authentication matching criteria have not been met 1432, then a request or prompt for new image data 1440 is made to the user. If the user decides not to provide new image data 1440, then the system forgoes authenticating the user 1442. If the user decides to provide new image data 1440, then the system repeats steps 1428-1438, as described above, until a final determination is made.
Features of the present invention can be implemented in, using, or with the assistance of a computer program product, such as a storage medium (media) or computer readable storage medium (media) having instructions stored thereon/in which can be used to program a processing system to perform any of the features presented herein. The storage medium (e.g., the memory 102 and the memory 202) can include, but is not limited to, high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. In some embodiments, the memory 102 and the memory 202 include one or more storage devices remotely located from the processor(s) 136 and 220. The memory 102 and the memory 202, or alternatively the non-volatile memory device(s) within these memories, comprises a non-transitory computer readable storage medium.
Communication systems as referred to herein (e.g., the communication system 141 and the communication system 234) optionally communicate via wired and/or wireless communication connections. Communication systems optionally communicate with networks (e.g., the networks 150 and 152), such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. Wireless communication connections optionally use any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain principles of operation and practical applications, to thereby enable others skilled in the art.
This application is a continuation of International App. No. PCT/US20/59858, filed Nov. 10, 2020, which claims priority to U.S. Prov. App No. 62/938,779, filed Nov. 21, 2019; each of which is hereby incorporated by reference in its entirety.
Related U.S. Application Data: Provisional application No. 62/938,779, filed Nov. 2019 (US). Parent application PCT/US20/59858, filed Nov. 2020 (US); child application No. 17/747,698 (US).