The disclosed embodiments relate to biometric security. More specifically, the disclosed embodiments relate to face matching verification systems.
With the growth of personal electronic devices that may be used to access many different user accounts, and the increasing threat of identity theft and other security issues, there is a growing need for ways to securely access user accounts via electronic devices. Account holders are thus often required to have longer passwords that meet various criteria, such as using a mixture of capital and lowercase letters, numbers, and other symbols. With smaller electronic devices, such as smart phones, smart watches, “Internet of Things” (“IoT”) devices, and the like, it may become cumbersome to type such long passwords into the device each time access to the account is desired. Moreover, if another individual learns the user's password, the user can be impersonated without actually being present. In some instances, users may even decide to deactivate such cumbersome security measures on their devices due to the inconvenience. Thus, users of such devices may prefer other methods of secure access to their user accounts.
One such method uses biometrics. For example, an electronic device may have a dedicated sensor that scans a user's fingerprint to determine that the person requesting access to a device or an account is authorized. However, such fingerprint systems on small electronic devices are often considered unreliable and insecure.
In addition, facial matching and recognition are generally known and may be used in a variety of contexts. Two-dimensional facial image matching is commonly used to tag people in images on social networks or in photo editing software. Face matching and verification software, however, has not been widely implemented on its own to securely verify the identity or the continuity of users attempting to gain access to an account because it is not considered secure enough. For example, two-dimensional facial matching is considered insecure because faces may be photographed or recorded, and then the resulting prints or video displays showing images of the user may be used to trick the system. Accordingly, there is a need for reliable, cost-effective, and convenient methods to verify the identity or continuity of users attempting to log in to, for example, an online account.
The disclosed embodiments have been developed in light of the above and aspects of the invention may include a method for enrolling and authenticating a user in a verification system via a mobile computing device. The user's device includes a camera.
In one embodiment, the user may enroll in the system by providing enrollment images of the user's face. The enrollment images are taken by the camera of the mobile device as the user moves the mobile device to different positions relative to the user's head. The user may thus obtain enrollment images showing the user's face from different angles and distances. The system may also utilize one or more movement sensors of a mobile device to determine an enrollment movement path that the phone takes during the imaging. At least one image is processed to detect the user's face within the image, and to obtain biometric information from the user's face in the image. The image processing may be done on the user's mobile device or at a remote device, such as a verification server or a user account server. The enrollment information (the enrollment biometrics, movement, and other information) may be stored on the mobile device or remote device or both.
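By way of illustration only, the following sketch shows one possible shape for the enrollment information described above; the class and field names are hypothetical and are not part of the disclosed embodiments.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionSample:
    """One reading from the mobile device's movement sensors during imaging."""
    timestamp: float                            # seconds since imaging started
    acceleration: Tuple[float, float, float]    # accelerometer (x, y, z)
    orientation: Tuple[float, float, float]     # gyroscope/magnetometer (pitch, roll, yaw)

@dataclass
class EnrollmentRecord:
    """Enrollment information retained after the enrollment images are processed."""
    device_info: str                            # uniquely identifies the mobile device
    account_id: str                             # unique identifier or account information
    enrollment_biometrics: List[List[float]]    # one feature vector per enrollment image
    enrollment_movement: List[MotionSample]     # path the device took during imaging
```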
The system may then verify a user by the user providing at least one verification image via the camera of the mobile device while the user moves the mobile device to different positions relative to the user's head. The verification images are processed for face detection and facial biometric information. Path parameters may also be obtained during the imaging of the verification images (verification movement). The verification information (verification biometric, movement, and other information) is then compared with the enrollment information to determine whether the user should be verified or denied. Image processing and comparison may be conducted on the user's mobile device or may be conducted remotely.
In some embodiments, multiple enrollment profiles may be created by a user to provide further security. For example, a user may create an enrollment wearing accessories such as a hat or glasses, or while making a funny face. In further embodiments, the user's enrollment information may be linked to a user's email address, phone number, or other unique identifier.
The verification system may include feedback displayed on the mobile device to aid a user in learning and verification with the system. For instance, an accuracy meter may provide feedback on a match rate of the verification biometrics or movement. A movement meter may provide feedback on the movement detected by the mobile device.
In some embodiments, the system may reward users who successfully utilize the verification system or who otherwise take fraud preventing measures. Such rewards may include leaderboards, status levels, reward points, coupons, or other offers, and the like. In some embodiments, the verification system may be used to login to multiple accounts.
In addition to biometric and movement matching, some embodiments may also utilize banding detection, glare detection, and screen edge detection to further secure the system. In other embodiments, other user attributes may be detected and matched including users' gender, age, ethnicity, and the like.
The system may also provide gradual access to user account(s) when the user first sets up the verification system. As the user successfully implements the system, authorization may be expanded. For example, during a time period as the user gets accustomed to the verification system, lower transaction limits may be applied.
In some embodiments, the mobile device may show video feedback of what the user is imaging to aid the user to image his or her face during enrollment or verification. The video feedback may be displayed on only a portion of the display screen of the mobile device. For example, the video feedback may be displayed in an upper portion of the display screen. The video feedback display may be positioned on a portion of the display screen that corresponds with a location of a front-facing camera of the mobile device.
To facilitate imaging in low light, portions of the screen other than the video feedback may be displayed in a bright color, such as white. In some embodiments, LED or infrared light may be used, and near-infrared thermal imaging may be done with an infrared camera. The mobile device used for imaging may thus have multiple cameras for capturing visible light and infrared images. The mobile device may also have multiple cameras (two or more) imaging in a single spectrum or in multiple spectra to provide stereoscopic, three-dimensional images. In such an embodiment, the close-up frames (zoomed) may create the most differentiation as compared to images captured from a distance. In such an embodiment, the frames captured at a distance may be unnecessary.
In some embodiments, to provide added security, the mobile device may output objects, colors, or patterns on the display screen to be detected during the imaging. The predetermined object or pattern may be a unique one-dimensional or two-dimensional barcode. For example, a QR code (two-dimensional barcode) may be displayed on the screen and reflected off the user's eye. If the QR code is detected in the image, then the person may be verified. In other embodiments, an object may move on the screen and the system may detect whether a user's eyes follow the movement.
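A minimal sketch of one way such a displayed-code check could be implemented is shown below, using OpenCV's QR detector on a captured frame; the helper names and the idea of matching a session-specific payload are assumptions made only for illustration.

```python
import secrets
import cv2  # OpenCV is used here only as an example QR detector

def make_session_code() -> str:
    """Generate a unique payload to render as a QR code on the display screen."""
    return secrets.token_hex(8)

def reflected_code_matches(frame, expected_payload: str) -> bool:
    """Return True if the expected QR payload is decoded from the captured frame,
    for example when the on-screen code is reflected off the user's eye or glasses."""
    decoded, _points, _raw = cv2.QRCodeDetector().detectAndDecode(frame)
    return decoded == expected_payload
```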
In some embodiments, the system may provide prompts on a video feedback display to aid the user in moving the device relative to the user's head during enrollment and/or verification. The prompts may include ovals or frames displayed on the display screen in which the user must place his or her face by moving the mobile device until his or her face is within the oval or frame. The prompts may preferably be of differing sizes and may also be centered on different positions of the screen. When an actual three-dimensional person images himself or herself close up and far away, it has been found that the biometric results are different due to the barrel distortion effect of the lens at the different distances. Thus, a three-dimensional person may be validated when biometric results are different in the close-up and far away images. This also allows the user to have multiple biometric profiles for each of the distances.
In other embodiments, biometrics from images obtained between the close-up and far away images may be analyzed for incrementally different biometric results. In this manner, the morphing of the face from the far face to the warped close up face is captured and tracked. The incremental frames during a verification may then be matched to frames captured at similar locations during enrollment along the motion path and compared to ensure that the expected similarities and differences are found. This results in a motion path and captured image and biometric data that can prove a three-dimensional person is presently being imaged. Thus, not only are the close-up and far away biometrics compared, but also biometric data obtained in between. The biometric data obtained in between must also correspond to a correct morphing speed along the motion path, greatly enhancing the security of the system.
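The sketch below illustrates one way the incremental comparison could be organized: verification frames are paired with enrollment frames captured at similar positions along the motion path, and the far-to-close morphing is required to be both consistent with enrollment and non-trivial. The function, the normalized path position, and the thresholds are assumptions for illustration only.

```python
import numpy as np

def match_along_path(enroll_frames, verify_frames,
                     match_tolerance=0.25, min_morph=0.25):
    """Compare biometric vectors captured at similar positions along the motion path.

    Each frame is a (path_position, biometric_vector) pair, where path_position is a
    normalized value from 0.0 (far away) to 1.0 (close up) derived from the device's
    movement sensors.  A real three-dimensional face morphs incrementally between the
    far image and the distorted close-up image; a flat sequence suggests a 2D spoof.
    """
    for pos, verify_vec in verify_frames:
        # find the enrollment frame captured at the closest path position
        _, enroll_vec = min(enroll_frames, key=lambda f: abs(f[0] - pos))
        if np.linalg.norm(np.asarray(verify_vec) - np.asarray(enroll_vec)) > match_tolerance:
            return False          # intermediate frame does not match enrollment

    # the close-up and far biometrics must themselves differ, as expected for a 3D face
    far_vec = np.asarray(verify_frames[0][1])
    close_vec = np.asarray(verify_frames[-1][1])
    return np.linalg.norm(close_vec - far_vec) > min_morph
```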
The touch screen may be utilized in some embodiments. For example, the user may need to enter or swipe a code or pattern in addition to the verification system described herein. The touchscreen may also detect the size and orientation of a user's finger, and whether a right hand or a left hand is used on the touch screen. Voice parameters may also be used as an added layer of security. The system may detect edge sharpness or other indicators to ensure that the obtained images are of sufficient quality for the verification system.
When a camera has an autofocus, the autofocus may be controlled by the system to validate the presence of the actual, three-dimensional person. The autofocus may check that different features of the user or environment focus at different focal lengths. In other embodiments, verification images may be saved to review the person who attempted to verify with the system.
In some embodiments, the match thresholds required may be adapted over time. The system may thus account for changing biometrics due to age, weight gain/loss, environment, user experience, security level, or other factors. In further embodiments, the system may utilize image distortion prior to obtaining biometric information to further protect against fraudulent access.
The system may utilize any number or combination of the security features as security layers, as described herein. When verification fails, the system may be configured so that it is unclear which security layer triggered the failure to preserve the integrity of the security system.
Also disclosed is a method for authenticating the identity of a customer as part of a business transaction comprising presenting customer questions to a customer, the customer questions having corresponding customer answers, and then receiving customer answers from the customer in response to the presenting of the customer questions. Next, processing the customer answers to create processed customer answers and transmitting the processed customer answers to a remote computing device. This method also compares the processed customer answers to stored data at the remote computing device and, responsive to the comparing determining that a match has not occurred, denying further verification. Responsive to the comparing determining that a match has occurred, allowing further verification by capturing and processing one or more facial images of the customer to verify the identity of the customer and liveness of the customer.
In one embodiment, the processed customer answers are encrypted, subject to a hash operation, or both. In one embodiment, the method further comprises converting the one or more facial images to captured verification data and comparing the captured verification data to stored verification data to determine if a match occurs. In one configuration, the stored verification data is stored in a blockchain, and the comparing, for a match, of the processed customer answers to the stored customer answer data controls access to the blockchain storing the stored verification data.
It is also contemplated that a result of the identity and liveness verification of the customer is communicated to a business to thereby verify the identity of the customer to the business. The business may be a credit reporting agency or a lender. It is contemplated that verification may further comprise verifying the liveness of the customer by processing a first image of the customer's face captured at a first distance from the customer and a second image of the customer's face captured at a second distance from the customer. In one configuration, the verification further comprises comparing at least one image of the customer's face to a previously captured image of the customer's face which is part of stored verification data.
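For illustration, a minimal sketch of hashing the customer answers and comparing them against stored data before granting access to the stored verification data might look as follows; the normalization, salt handling, and function names are assumptions rather than the specific processing of the disclosed embodiments.

```python
import hashlib
import hmac

def process_answers(answers, salt: bytes) -> str:
    """Normalize and hash the customer answers to create processed customer answers."""
    normalized = "|".join(a.strip().lower() for a in answers)
    return hashlib.sha256(salt + normalized.encode("utf-8")).hexdigest()

def answers_match(processed_answers: str, stored_answer_hash: str) -> bool:
    """Constant-time comparison against the stored data; only a match allows further
    verification (e.g. access to verification data stored in a blockchain)."""
    return hmac.compare_digest(processed_answers, stored_answer_hash)
```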
Also disclosed is a verification system to verify a user's identity comprising a data collection device having a processor and memory storing non-transitory machine executable code which is executable by the processor. The machine executable code of the data collection device may be configured to present user related questions to the user and receive answers to the user related questions. The answers are entered by the user into the data collection device. It is also configured to process the answers to create secured answer data, transmit the secured answer data, and responsive to instructions from a remote server, collect and transmit collected verification data from the user.
Also part of the system is the remote server having a processor and memory storing non-transitory machine executable code which is executable by the processor, such that the machine executable code is configured to receive the secured answer data from the data collection device and process the secured answer data to determine if the received secured answer data matches stored secured answer data. Responsive to the received secured answer data not matching the stored secured answer data, denying access to stored verification data for the user. Responsive to the received secured answer data matching the stored secured answer data, then initiating a verification session by communicating with the data collection device to collect and transmit collected verification data, and then receiving the collected verification data from the data collection device. The machine executable code is further configured to compare the collected verification data received from the data collection device to stored user verification data stored on the remote server to determine if a match occurs, such that a match verifies the identity of the user.
In one embodiment, the secured answer data comprises encrypted answers or hashed answers. The collected verification data may comprise one or more images of the user captured by a camera of the data collection device. The user verification data may comprise a first image of the user's face captured, by the camera, at a first distance separating the user and the camera and a second image of the user's face captured, by the camera, at a second distance separating the user and the camera, such that the first distance is different from the second distance. In one configuration, this system further comprises transmitting a verified identity notice to a third-party server and, responsive thereto, receiving data from the third-party server as part of a business transaction. It is also contemplated that the stored user verification data is stored in a blockchain and the blockchain storing the stored user verification data is only accessed when the received secured answer data matches the stored secured answer data.
Also disclosed is a verification system for use by a business to verify the identity of a user. In one embodiment, the verification system comprises a data collection device having a screen and a user interface. The data collection device is configured to receive answers from the user to questions presented to the user, process the answers to create secure answer data, and transmit the secure answer data to a verification server. Also part of this embodiment is a verification server configured to receive the secure answer data from the data collection device and compare the secure answer data, or processed secure answer data, to stored answer data. Responsive to the comparing determining that the secure answer data or processed secure answer data does not match the stored answer data, terminating the identity verification. Responsive to the comparing determining that the secure answer data or processed secure answer data matches the stored answer data, then initiating a verification session which includes capture of one or more images of the user's face with a camera associated with the data collection device or another device.
The data collection device may be an electronic device owned by the user. The data collection device may be an electronic device owned by the business. In one embodiment, the stored answer data is created by performing the same processing on the answers as occurred by the data collection device to form the secure answer data. In one configuration, the questions presented to the user are based on information that should only be known to the user.
In one embodiment, the step of initiating a verification session comprises providing notice, from the verification server, to initiate the verification session by sending a message from the verification server to the data collection device or another device. Then, capturing at least one image of the user with a camera associated with the data collection device or another device. This step also includes processing the at least one image to generate captured image data and transmitting the captured image data to the verification server. At the verification server, processing the captured image data to verify the three-dimensionality of the user and comparing the captured image data to stored image data derived from at least one previously captured image of the user to determine if a match occurs within a threshold range. Then, responsive to verifying the three-dimensionality of the user and obtaining the match within the threshold range, verifying the identity of the user to the business. In one embodiment, the stored verification data, such as biometric data, is stored in a blockchain. In one embodiment, the one or more images of the user's face comprises a first image captured with the camera at a first distance from the user and a second image captured with the camera at a second distance from the user, the first distance different than the second distance.
Also disclosed is a method for verifying the identity of a customer by a business comprising initiating an identity verification session for the customer. At the business, presenting questions to the customer, the questions having stored answers that are stored at a remote location, and also at the business, receiving customer answers to the questions. Then, transmitting the customer answers or a processed version of the customer answers to a verification system. At the verification system, which may be remote from the user, receiving the customer answers or the processed version of the customer answers. The verification system compares the customer answers or the processed version of the customer answers to stored customer answers or a stored processed version of the customer answers to determine if a match occurs. If a match does not occur, providing notice to the business of a failure to match and ending the identity verification process. If a match does occur, initiating a verification process by obtaining one or more images of the customer's face with a camera and processing one or more of the images of the customer's face to generate captured facial image data. Then, transmitting the captured facial image data to the verification system, and processing the captured facial image data to determine the three-dimensionality and liveness of the customer generating the captured facial image data. This method of operation then compares the captured facial image data to stored facial image data to confirm that the stored facial image data matches the captured facial image data, the stored facial image data being based on previously captured images of the customer's face.
This method of operation may further comprise, responsive to the stored facial image data matching the captured facial image data, sending an identity verification success message to the business, to a credit reporting agency so the credit reporting agency can send a credit report to the business, to a lender so the lender will provide a loan or financing to the customer, or any combination thereof.
The step of capturing the one or more images of the user may comprise capturing a first image with the camera at a first distance from the customer's face and a second image with the camera at a second distance from the customer's face, such that the first distance is different than the second distance. The customer answers may be encrypted or hashed prior to transmitting to the verification system. In one configuration, the step of comparing the customer answers or the processed version of the customer answers to stored customer answers or a stored processed version of the customer answers controls access to verification data stored in a blockchain.
Other systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following Figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
The components in the Figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. In the Figures, like reference numerals designate corresponding parts throughout the different views.
A system and method for providing secure and convenient face matching and/or verification will be described below. The system and method may be achieved without the need for additional expensive biometric readers or systems while offering enhanced security over conventional face matching systems.
Face Matching and/or Verification Environment
In this environment, a user 108 may have a mobile device 112 which may be used to access one or more of the user's accounts via verification systems. A user 108 may have a mobile device 112 that can capture a picture of the user 108, such as an image of the user's face. The user may use a camera 114 on or connected to the mobile device 112 to capture an image or multiple images or video of himself or herself. The mobile device 112 may comprise any type of mobile device capable of capturing an image, either still or video, and performing processing of the image or communication over a network.
In this embodiment, the user 108 may carry and hold the mobile device 112 to capture the image. The user may also wear or hold any number of other devices. For example, the user may wear a watch 130 containing one or more cameras 134 or biosensors disposed on the watch. The camera 134 may be configured to create an image from visible light as well as infrared light. The camera 134 may additionally or alternatively employ image intensification, active illumination, or thermal vision to obtain images in dark environments.
When pointed towards a user 108, the camera 134 may capture an image of the user's face. The camera 134 may be part of a module that may either include communication capability that communicates with either a mobile device 112, such as via Bluetooth®, NFC, or other format, or communicate directly with a network 116 over a wired or wireless link 154. The watch 130 may include a screen on its face to allow the user to view information. If the camera module 134 communicates with the mobile device 112, the mobile device 112 may relay communications to the network 116. The mobile device 112 may be configured with more than one front facing camera 114 to provide for a 3D or stereoscopic view, or to obtain images across different spectral ranges, such as near infrared and visible light.
The mobile device 112 is configured to wirelessly communicate over a network 116 with a remote server 120. The server 120 may communicate with one or more databases 124. The network 116 may be any type of network capable of communicating to and from the mobile device including but not limited to a LAN, WAN, PAN, or the internet. The mobile device 112 may communicate with the network via a wired or wireless connection, such as via ethernet, wi-fi, NFC, and the like. The server 120 may include any type of computing device capable of communicating with the mobile device 112. The server 120 and mobile device 112 are configured with a processor and memory and are configured to execute machine readable code or machine instructions stored in the memory.
The database 124, stored on the mobile device or at a remote location as shown, may contain facial biometric information and verification information of users 108 to identify the users 108 and to allow access to associated user data based on one or more images or biometric information received from the mobile device 112 or watch 130. The data may be, for example, information relating to a user account or an instruction to allow access to a separate account information server 120B. The term biometric data may include, among other information, biometric information concerning facial features and path parameters. Examples of path parameters may include the acceleration and speed of the mobile device, the angle of the mobile device during image capture, the distance of the mobile device to the user, the path direction in relation to the position of the user's face, or any other type of parameter associated with movement of the mobile device or the user's face in relation to a camera. Other data may also be included, such as GPS data, device identification information, and the like.
In this embodiment, the server 120 processes requests for identification from the mobile device 112 or user 108. In one configuration, the image captured by the mobile device 112, using facial detection, comprises one or more images of the user's face 108 during movement of the mobile device relative to the user's face, such as in a side to side or horizontal arc or line, vertical arc or line, forward and backwards from the user's face, or any other direction of motion. In another configuration, the mobile device 112 calculates biometric information from the obtained images and sends the biometric information to the server 120. In yet another embodiment, the mobile device 112 compares biometric information with stored biometric information on the mobile device 112 and sends a verification result from the comparison to the server 120.
The data including either the image(s), biometric information, or both are sent over the network 116 to the server 120. Using image processing and image recognition algorithms, the server 120 processes the person's biometric information, such as facial data, and compares the biometric information with biometric data stored in the database 124 to determine the likelihood of a match. In other embodiments, the image processing and comparison is done on the mobile device 112, and data sent to the server indicates a result of the comparison. In further embodiments, the image processing and comparison is done on the mobile device 112 without accessing the server, for example, to obtain access to the mobile device 112 itself.
By using face match processing, an accurate identity match may be established. Based on this and optionally one or more other factors, access may be granted, or an unauthorized user may be rejected. Face match processing is known in the art (or is an established process) and as a result, it is not described in detail herein.
Also shown is a second server 120B with associated second database 124B, and third server 120C with associated third database 124C. The second and third database may be provided to contain additional information that is not available on the server 120 and database 124. For example, one of the additional servers may only be accessed based on the verification of the user 108 performed by the server 120.
Executing on the mobile device 112 is one or more software applications. This software is defined herein as an identification application (ID app). The ID app may be configured with either or both facial detection and face matching, and one or more software modules which monitor the path parameters and/or biometric data. Facial detection as used herein refers to a process which detects a face in an image. Face matching as used herein refers to a process that analyzes a face using an algorithm, maps its facial features, and converts them to biometric data, such as numeric data. The biometric data can be compared to biometric data derived from one or more different images to determine similarities or dissimilarities. If a high percentage of similarity is found in the biometric data, the individual shown in the images may be considered a match.
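As a minimal sketch, assuming the face matching step reduces each detected face to a numeric feature vector, the comparison for similarity might resemble the following; the similarity measure and threshold are illustrative and are not the specific algorithm of the ID app.

```python
import numpy as np

def biometric_similarity(vec_a, vec_b) -> float:
    """Cosine similarity between two facial feature vectors (1.0 = identical)."""
    a = np.asarray(vec_a, dtype=float)
    b = np.asarray(vec_b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(vec_a, vec_b, threshold: float = 0.97) -> bool:
    """Treat a high percentage of similarity in the biometric data as a match."""
    return biometric_similarity(vec_a, vec_b) >= threshold
```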
With the ultimate goal of matching a face of a user to an identity or image stored in a database 124, to verify the user, the ID app may first process the image captured by the camera 114, 134 to identify and locate the face that is in the image. As shown in
The portion of the photo that contains the detected face may then be cropped, cut, and stored for processing by one or more face matching algorithms. By first detecting the face in the image and cropping only that portion of the face, the face matching algorithm need not process the entire image. Further, in embodiments where the face matching processing occurs remotely from the mobile device 112, such as at a server 120, much less image data is required to be sent over the network to the remote location. It is contemplated that the entire image, a cropped face, or only biometric data may be sent to the remote server 120 for processing.
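One possible implementation of the detect-then-crop step is sketched here with OpenCV's bundled Haar cascade purely for illustration; the actual detector used by the ID app is not specified, and any facial detection algorithm could be substituted.

```python
import cv2

# Illustrative detector only; any facial detection algorithm could be used instead.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_largest_face(image):
    """Detect a face in the image and return only the cropped face region,
    so the face matching algorithm (or the remote server) never needs the full frame."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                     # no face found; send nothing
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return image[y:y + h, x:x + w]
```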
Facial detection software can detect a face from a variety of angles. However, face matching algorithms are most accurate in straight on images in well-lit situations. In one embodiment, the highest quality face image for face matching that is captured is processed first, then images of the face that are lower quality or at different angles other than straight toward the face are then processed. The processing may occur on the mobile device or at a remote server which has access to large databases of image data or facial identification data.
The facial detection preferably occurs on the mobile device and is performed by the mobile device software, such as the ID app. This reduces the number or size of images (data) that are sent to the server for processing when no face is found and minimizes the overall amount of data that must be sent over the network. This reduces bandwidth needs and network speed requirements.
In another preferred embodiment, the facial detection, face matching, and biometric comparison all occur on the mobile device. However, it is contemplated that the face matching processing may occur on the mobile device, the remote server, or both.
In this example embodiment, the mobile device 200 is configured with an outer housing 204 configured to protect and contain the components described below. Within the housing 204 is a processor 208 and a first and second bus 212A, 212B (collectively 212). The processor 208 communicates over the buses 212 with the other components of the mobile device 200. The processor 208 may comprise any type processor or controller capable of performing as described herein. The processor 208 may comprise a general-purpose processor, ASIC, ARM, DSP, controller, or any other type processing device. The processor 208 and other elements of the mobile device 200 receive power from a battery 220 or other power source. An electrical interface 224 provides one or more electrical ports to electrically interface with the mobile device, such as with a second electronic device, computer, a medical device, or a power supply/charging device. The interface 224 may comprise any type electrical interface or connector format.
One or more memories 210 are part of the mobile device 200 for storage of machine readable code for execution on the processor 208 and for storage of data, such as image data, audio data, user data, medical data, location data, accelerometer data, or any other type of data. The memory 210 may comprise RAM, ROM, flash memory, optical memory, or micro-drive memory. The machine-readable code as described herein is non-transitory.
As part of this embodiment, the processor 208 connects to a user interface 216. The user interface 216 may comprise any system or device configured to accept user input to control the mobile device. The user interface 216 may comprise one or more of the following: keyboard, roller ball, buttons, wheels, pointer key, touch pad, and touch screen. A touch screen controller 230 is also provided which interfaces through the bus 212 and connects to a display 228.
The display comprises any type of display screen configured to display visual information to the user. The screen may comprise an LED, LCD, thin film transistor screen, OEL (organic electroluminescent), CSTN (color super twisted nematic), TFT (thin film transistor), TFD (thin film diode), OLED (organic light-emitting diode), AMOLED (active-matrix organic light-emitting diode), capacitive touch screen, resistive touch screen, or any combination of these technologies. The display 228 receives signals from the processor 208 and these signals are translated by the display into text and images as is understood in the art. The display 228 may further comprise a display processor (not shown) or controller that interfaces with the processor 208. The touch screen controller 230 may comprise a module configured to receive signals from a touch screen which is overlaid on the display 228.
Also part of this exemplary mobile device is a speaker 234 and microphone 238. The speaker 234 and microphone 238 may be controlled by the processor 208. The microphone 238 is configured to receive and convert audio signals to electrical signals based on processor 208 control. Likewise, the processor 208 may activate the speaker 234 to generate audio signals. These devices operate as is understood in the art and as such are not described in detail herein.
Also connected to one or more of the buses 212 is a first wireless transceiver 240 and a second wireless transceiver 244, each of which connects to a respective antenna 248, 252. The first and second transceivers 240, 244 are configured to receive incoming signals from a remote transmitter and perform analog front-end processing on the signals to generate analog baseband signals. The incoming signal may be further processed by conversion to a digital format, such as by an analog-to-digital converter, for subsequent processing by the processor 208. Likewise, the first and second transceivers 240, 244 are configured to receive outgoing signals from the processor 208, or another component of the mobile device 200, and upconvert these signals from baseband to RF frequency for transmission over the respective antenna 248, 252. Although shown with a first wireless transceiver 240 and a second wireless transceiver 244, it is contemplated that the mobile device 200 may have only one such system or two or more transceivers. For example, some devices are tri-band or quad-band capable, or have Bluetooth®, NFC, or other communication capability.
It is contemplated that the mobile device, and hence the first wireless transceiver 240 and a second wireless transceiver 244 may be configured to operate according to any presently existing or future developed wireless standard including, but not limited to, Bluetooth, WI-FI such as IEEE 802.11 a,b,g,n, wireless LAN, WMAN, broadband fixed access, WiMAX, any cellular technology including CDMA, GSM, EDGE, 3G, 4G, 5G, TDMA, AMPS, FRS, GMRS, citizen band radio, VHF, AM, FM, and wireless USB.
Also part of the mobile device is one or more systems connected to the second bus 212B which also interfaces with the processor 208. These devices include a global positioning system (GPS) module 260 with associated antenna 262. The GPS module 260 can receive and process signals from satellites or other transponders to generate location data regarding the location, direction of travel, and speed of the GPS module 260. GPS is generally understood in the art and hence not described in detail herein. A gyroscope 264 connects to the bus 212B to generate and provide orientation data regarding the orientation of the mobile device 200. A magnetometer 268 is provided to provide directional information to the mobile device 200. An accelerometer 272 connects to the bus 212B to provide information or data regarding shocks or forces experienced by the mobile device. In one configuration, the accelerometer 272 and gyroscope 264 generate and provide data to the processor 208 to indicate a movement path and orientation of the mobile device.
One or more cameras (still, video, or both) 276 are provided to capture image data for storage in the memory 210 and/or for possible transmission over a wireless or wired link or for viewing later. The one or more cameras 276 may be configured to detect an image using visible light and/or near-infrared light. The cameras 276 may also be configured to utilize image intensification, active illumination, or thermal vision to obtain images in dark environments. The processor 208 may process image data to perform image recognition, such as in the case of facial detection, item detection, face matching, item recognition, or bar/box code reading.
A flasher and/or flashlight 280, such as an LED light, are provided and are processor controllable. The flasher or flashlight 280 may serve as a strobe or traditional flashlight. The flasher or flashlight 280 may also be configured to emit near-infrared light. A power management module 284 interfaces with or monitors the battery 220 to manage power consumption, control battery charging, and provide supply voltages to the various devices which may require different power requirements.
In this example configuration, the mobile device 304 includes a receive module 320 and a transmit module 322. These software modules are configured to receive and transmit data to remote devices, such as cameras, glasses, servers, cellular towers, or Wi-Fi systems, such as routers or access points.
Also part of the mobile device 304 is a location detection module 324 configured to determine the location of the mobile device, such as with triangulation or GPS. An account setting module 326 is provided to establish, store, and allow a user to adjust account settings. A log in module 328 is also provided to allow a user to log in, such as with password protection, to the mobile device 304. A facial detection module 308 is provided to execute facial detection algorithms while a face matching module 321 includes software code that recognizes the face or facial features of a user, such as to create numeric values which represent one or more facial features (facial biometric information) that are unique to the user.
An information display module 314 controls the display of information to the user of the mobile device. The display may occur on the screen of the mobile device or watch. A user input/output module 316 is configured to accept data from and display data to the user. A local interface 318 is configured to interface with other local devices, such as using Bluetooth® or other short-range communication, or wired links using connectors to connect cameras, batteries, or data storage elements. All the software (with associated hardware) shown in the mobile device 304 operates to provide the functionality described herein.
Also shown in
As shown in
An information display module 356 controls a display of information at the server 350. A user input/output module 358 controls a user interface in connection with the local interface module 360. Also located on the server side of the system is a face matching module 366 that is configured to process the image data from the mobile device. The face matching module 366 may process the image data to generate facial data (biometric information) and perform a compare function in relation to other facial data to determine a facial match as part of an identity determination.
A database interface 368 enables communication with one or more databases that contain information used by the server modules. A location detection module 370 may utilize the location data from the mobile device 304 for processing and to increase accuracy. Likewise, an account settings module 372 controls user accounts and may interface with the account settings module 326 of the mobile device 304. A secondary server interface 374 is provided to interface and communicate with one or more other servers.
One or more databases or database interfaces are provided to facilitate communication with and searching of databases. In this example embodiment the system includes an image database that contains images or image data for one or more people. This database interface 362 may be used to access image data of users as part of the identity match process. Also part of this embodiment is a personal data database interface 376 and privacy settings data module 364. These two modules 376, 364 operate to establish privacy settings for individuals and to access a database that may contain privacy settings.
A verification system with path parameters that is operable in the above described environment and system will now be described as shown in
In step 410, the system enrolls a user in the face matching and/or verification system. In one embodiment, a verification server, such as the server 120 (
An enrollment process according to one embodiment will be described with reference to
Next, in step 516, the mobile device 112 may send device information to the verification server 120. The device information may include among other information a device identifier that uniquely identifies the mobile device of the user. Such information may include device manufacturer, model number, serial number, and mobile network information. In step 518, when the verification server 120 is incorporated with the account server 120B, the verification server 120 associates and stores the device information with the user's account information. When the verification server 120 is separate from the account server 120B, the account server 120B may generate a unique identifier related to the account information and send the unique identifier to the verification server 120. The verification server 120 may associate the device information and the unique identifier with each other and may store the information in a database 124.
The user is next prompted to provide a plurality of images of his or her face using a camera 114 on the mobile device 112 (hereinafter, “enrollment images”) in step 510. The enrollment images of the user's face are taken as the user holds the mobile device and moves the mobile device to different positions relative to his or her head and face. Thus, the enrollment images of the user's face are taken from many different angles or positions. Furthermore, the path parameters of the mobile device are monitored and recorded for future comparison in step 522. Some non-limiting examples of how a user might hold a mobile device and take a plurality of images of her face are shown in
In
The enrollment images may be obtained as follows. The user holds and orients a mobile device 112 with a camera 114 so that the camera 114 is positioned to image the user's face. For example, the user may use a front facing camera 114 on a mobile device 112 with a display screen and may confirm on the display screen that his or her face is in position to be imaged by the camera 114.
Once the user has oriented the device, the device may begin obtaining the enrollment images of the user. In one embodiment, the user may press a button on the device 112 such as on a touchscreen or other button on the device to initiate the obtaining of the enrollment images. The user then moves the mobile device to different positions relative to his or her head as the device images the user's face from a plurality of angles or positions as described above. When the above-mentioned front-facing camera is used, the user may continually confirm that his or her face is being imaged by viewing the imaging on the display screen. The user may again press the button to indicate that the imaging is completed. Alternatively, the user may hold the button during imaging, and then release the button to indicate that imaging is complete.
As described above, the mobile device 112 may include face detection. In this embodiment in step 524, the mobile device may detect the user's face in each of the enrollment images, crop the images to include only the user's face, and send, via a network, the images to the verification server 120. In step 526, upon receipt of the enrollment images, the verification server 120 performs face matching on the images to determine biometric information (“enrollment biometrics”) for the user. The verification server 120 may then associate the enrollment biometrics with the device information and the unique identifier (or account information) and stores the biometric information in the database 124 in step 528. For added security, in step 530, the mobile device 112 and the verification server 120 may be configured to delete the enrollment images after the enrollment biometrics of the user are obtained.
In another embodiment, the mobile device 112 may send the images to the verification server 120 without performing face detection. The verification server 120 may then perform the face detection, facial recognition, and biometric information processing. In another embodiment, the mobile device 112 may be configured to perform the facial detection, facial recognition, and biometric processing, and then send the results or data resulting from the processing to the verification server 120 to be associated with the unique identifier or user account. This prevents sensitive personal data (images) from leaving the user's device. In yet another embodiment, the mobile device 112 may perform each of the above-mentioned steps, and the mobile device 112 may store the enrollment information without sending any of the enrollment biometrics or images to the server.
In one embodiment, the mobile device's gyroscope, magnetometer, and accelerometer are configured to generate and store data while the user moves the mobile device about his or her head to obtain the enrollment images (path parameters). The mobile device may process this data in step 532 to determine a path or arc in which the mobile device moved while the user imaged his or her face (“enrollment movement”). By using data from the accelerometer, magnetometer, and gyroscope, the system may check when a user is ready to begin scanning himself/herself, as well as determining the scan path. The data is thus used to determine when to start and stop the scan interval. The data may additionally include the time elapsed during scanning. This time may be measured from the user pressing the button to start and stop the imaging, from the duration the button is held down while imaging, or from the duration of the movement needed to complete the sweep.
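A simple sketch of how the sensor data might be used to bound the scan interval is shown below; the sample format and motion threshold are assumptions for illustration only.

```python
def detect_scan_interval(samples, motion_threshold=0.15):
    """Determine when the scan starts and stops from movement-sensor samples.

    `samples` is a time-ordered list of (timestamp, acceleration_magnitude) pairs.
    The scan is taken to start at the first sample whose motion exceeds the threshold
    and to stop at the last such sample; the elapsed time between them becomes part
    of the enrollment (or verification) movement.
    """
    moving = [t for t, magnitude in samples if magnitude > motion_threshold]
    if not moving:
        return None                       # the user never moved the device
    start, stop = moving[0], moving[-1]
    return {"start": start, "stop": stop, "elapsed": stop - start}
```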
The enrollment movement of the mobile device 112 (which is data that defined the movement of the mobile device during image capture) may be sent to the verification server 120. The verification server 120 associates and stores the enrollment movement, the enrollment biometrics, the device information, and the unique identifier or account information. Alternatively, the data generated by the gyroscope, magnetometer, and accelerometer may be sent to the server 120, and the server 120 may process the data to determine the enrollment movement.
Thus, in the above described embodiment, the enrollment information may comprise the device information, the enrollment biometrics, and the enrollment movement (based on movement of the mobile device 112).
Returning to
In one embodiment outlined in
In step 816, the mobile device 112 sends the device information identifying the device and sends path parameters such as gyroscope, magnetometer, and accelerometer information defining the path of the mobile device taken during imaging, as well as the elapsed time during imaging (“verification movement”) to the server 120. The credentials received by the verification server 120 for a login in the face matching system may thus comprise the device information, the verification images or the verification biometrics, and the verification movement (path parameters).
Returning to
In step 920, the verification server 120 may then compare the login credentials with the information stored from the enrollment process. In step 920, the server 120 compares the identification of the device obtained during the login process to that stored during enrollment. In step 930, the verification biometrics may be compared with the enrollment biometrics to determine whether they sufficiently correspond with the enrollment biometrics. In step 940, the verification movement may be compared with the enrollment movement to determine whether it sufficiently corresponds with the enrollment movement.
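The three comparisons of steps 920, 930, and 940 could be organized roughly as in the sketch below; `biometric_similarity` and `movement_similarity` are assumed helper functions that each return a 0.0–1.0 score, and the thresholds are illustrative only.

```python
def verify_credentials(enrollment, credentials,
                       biometric_threshold=0.999, movement_threshold=0.90):
    """Compare login credentials with stored enrollment information.

    Both arguments are dictionaries holding a device identifier, a facial biometric
    vector, and a movement summary.  Any failed comparison denies the login.
    """
    if credentials["device_id"] != enrollment["device_id"]:
        return False                                    # step 920: device mismatch
    if biometric_similarity(credentials["biometrics"],
                            enrollment["biometrics"]) < biometric_threshold:
        return False                                    # step 930: biometrics mismatch
    if movement_similarity(credentials["movement"],
                           enrollment["movement"]) < movement_threshold:
        return False                                    # step 940: movement mismatch
    return True
```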
In some embodiments, a copy of the enrollment information may be stored on the mobile device 112, and the mobile device 112 may verify that the credentials received on the mobile device 112 sufficiently correspond with the enrollment information. This would allow a user to secure documents, files, or applications on the mobile device 112 itself in addition to securing a user's account hosted on a remote device, such as the verification server 120, even when a connection to the verification server 120 may be temporarily unavailable, such as when a user does not have access to the internet. Further, this would allow the user to secure access to the mobile device 112 itself. Alternatively, the enrollment information may be stored on the server.
Accordingly, in step 950, if the verification server 120 or mobile device 112 determines that the enrollment information sufficiently corresponds with the credentials received, then the server or mobile device may verify that the identification of the user attempting login corresponds to the account holder. This avoids the cumbersome process of the user having to manually type in a complex password using the small screen of the mobile device. Many passwords now require capital letters, lowercase letters, numbers, and non-letter symbols.
The level of correspondence required to determine that the enrollment information sufficiently corresponds with the verification information in the login attempt may be set in advance. For example, the level of correspondence may be a 99.9% match rate between the enrollment biometrics and the verification biometrics and a 90% match rate between the enrollment movement and the verification movement. The required level of correspondence may be static, or elastic based on the established thresholds.
For example, the required level of correspondence may be based on GPS information from the mobile device 112. In one embodiment, the verification server 120 may require a 99.9% match rate as the level of correspondence when the GPS information of the mobile device corresponds with the location of the user's home or other authorized location(s). In contrast, if the GPS information shows the device is in a foreign country far from the user's home, the verification server may require a 99.99% match rate as the level of correspondence, or the attempt may be denied entirely. Hence, the required match between pre-stored verification data (enrollment information) and presently received verification data (verification information) is elastic in that the required percentage match between path parameters or images may change depending on various factors, such as time of day, location, frequency of login attempt, date, or any other factor.
The required level of correspondence may additionally depend on time. For instance, if a second verification attempt is made shortly after a first verification attempt in a location far from the first verification location based on GPS information from the mobile device 112, the level of correspondence threshold may be set higher. For example, a user cannot travel from Seattle to New York in 1 hour. Likewise, login attempts from midnight to three in the morning may be a sign of fraud for some users based on patterns of the users' usage.
The level of correspondence between the enrollment information and the verification information may be the result of compounding the various parameters of the enrollment information and the verification information. For example, when the button hold time in the verification information is within 5% of the button hold time of the enrollment information, the correspondence of the button hold time may constitute 20% of the overall match. Similarly, when the motion path trajectory of the verification information is within 10% of the enrollment information, the motion path trajectory may constitute 20% of the overall match. Further parameter match rates such as the face size and face matching as compared to the enrollment information may constitute the remaining 10% and 50% of the overall level of correspondence. In this manner, the total overall level of correspondence may be adjusted (total of all parameters being more than 75%, for example), or the match rate of individual parameters may be adjusted. For example, on a second attempted login, the threshold match rate of one parameter may be increased, or the overall level of correspondence for all parameters may be increased. The threshold match rates may also be adjusted based on the account being verified or other different desired levels of security.
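Expressed as a sketch, the compounding described above (with the example weights of 20%, 20%, 10%, and 50%, and an overall threshold of 75%) might look like the following; the structure is illustrative, and both the weights and the threshold could be adjusted per account, per attempt, or per security level.

```python
def overall_correspondence(scores, weights=None, overall_threshold=0.75):
    """Compound individual parameter match rates into one level of correspondence.

    `scores` maps parameter names to 0.0-1.0 match rates, for example
    {"button_hold": 1.0, "motion_path": 0.9, "face_size": 0.8, "face_match": 0.99}.
    """
    weights = weights or {"button_hold": 0.20, "motion_path": 0.20,
                          "face_size": 0.10, "face_match": 0.50}
    total = sum(weight * scores.get(name, 0.0) for name, weight in weights.items())
    return total, total >= overall_threshold
```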
Returning to
Alternatively, if the credentials provided by the user are not verified, the verification server may transmit a message to display on the screen of the mobile device 112 indicating that the login attempt failed. The verification server 120 may then allow the user to try again to log in via the face matching login system, or the verification server 120 may require the user to enter typical account credentials, such as a username and password.
In one embodiment, the server 120 may allow three consecutive failed login attempts before requiring a username and password. If in one of the attempts, the required level of correspondence is met, then the user may be verified, and access may be granted. According to one embodiment, the verification server 120 may retain the information from each successive verification attempt and combine the data from the multiple verification attempts to achieve more accurate facial biometric information of the person attempting to be verified. In addition, the level of correspondence may be increased at each successive attempt to verify. In addition, by averaging the path data (verification movement) and/or image data (verification images/biometrics) from several login attempts, the login data (enrollment information) is perfected and improved.
Accordingly, the above described verification system allows for verification to a remote server 120 or on the mobile device 112 itself. This may be accomplished as described above by the mobile device 112 capturing the verification credentials, and the verification server 120 processing and analyzing the credentials compared to the enrollment information (cloud processing and analysis); the mobile device 112 capturing the verification credentials and processing the credentials, and the verification server 120 analyzing the credentials compared to the enrollment information (mobile device processing, cloud analysis); or the mobile device 112 capturing the verification credentials, and processing and analyzing the credentials compared to the enrollment information (mobile device processing and analysis).
The above described system provides several advantages. As one advantage, the face matching and/or verification system provides a secure login. For example, if during a login attempt the camera of the mobile device imaged a digital screen displaying a person rotating their head while the phone was not moving, the accelerometer, magnetometer, and gyroscope data would not detect any motion. Thus, the enrollment movement and the verification movement would not correspond, and the login attempt would be denied.
In addition, because a plurality of images are used as enrollment images and verification images, histograms or other photo manipulation techniques may be used to determine whether a digital screen is present in place of a human face in the images. For example, the system may check for light frequency changes in the captured images, banding in an image that would indicate the image was generated by an electronic display, backlighting, suspicious changes in lighting, or conduct other analyses by comparing the images to determine that the actual live user is indeed alive, present, and requesting authorization to log in.
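As a non-limiting illustration of one such check, banding from a digital screen tends to appear as a periodic intensity variation across image rows, which can show up as a dominant peak in the frequency spectrum of the row averages. The following Python sketch assumes NumPy is available and uses an illustrative peak ratio threshold.

import numpy as np

def banding_suspected(gray_image, peak_ratio=8.0):
    """Rough check for horizontal banding such as a refreshing screen may produce.

    gray_image is a 2-D array of pixel intensities. A single non-DC frequency peak
    in the row averages that is much larger than the median spectral energy is
    treated as suspicious; the peak_ratio threshold is illustrative.
    """
    row_means = gray_image.mean(axis=1)
    row_means = row_means - row_means.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(row_means))[1:]   # skip the zero-frequency bin
    if spectrum.size == 0:
        return False
    return spectrum.max() / (np.median(spectrum) + 1e-9) > peak_ratio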
As yet another advantage, as explained above, not only must the enrollment biometrics sufficiently correspond to the verification biometrics, but also the enrollment movement must match the verification movement, and the device information must match the enrollment device information. For example, an application may be downloaded to a mobile device that has a digital camera. The application may be a login application or may be an application from a financial institution or other entity with which the user has an account. The user may then log in to the application using typical login credentials such as a website username and password. Further, the user may have a device code from logging in on another device, or may use the camera to scan a QR code or other such code to pair the device to their user account.
The user then holds the mobile device and moves it to different positions relative to his or her head while keeping his or her face visible to the camera as it is moved. As the mobile device is moved, the camera takes the enrollment images of the face. During imaging, the speed and angle of the current user's mobile device movement is measured using the accelerometer, magnetometer, and gyroscope to generate the enrollment movement. Further, continuous imaging and detection of the face throughout the process has been shown to prevent fraud because a fraud attempt cannot be made by rotating images in and out of the front of the camera.
For example, a user may start the movement from right to left or from left to right as shown in
The system therefore provides enhanced security for authenticating a user who has a mobile device. As explained above, the system may use at least any one or more of the following in any number of combinations to securely verify the user: physical device verification, mobile network verification, face matching including the size of the face in the image, a face detected in every frame during the movement, accelerometer information, gyroscope information, magnetometer information, pixels per square inch, color bits per pixel, type of image, user entered code or pattern, and GPS information.
As another advantage, the face matching login system provides a convenient manner for a user to login to an account with a mobile device. For example, once enrolled, a user does not need to enter a username and password on the small mobile device each time the user wishes to access the account. Instead, the user simply needs to image himself or herself while mimicking the enrollment movement with the mobile device. This is especially advantageous with smaller mobile devices such as mobile phones, smart watches, and the like.
The system may be further configured to allow a user to securely log on to multiple devices, or to allow users to securely share devices. In one embodiment, the enrollment information may be stored on a verification server (or on “the cloud”) and thus is not associated only with the user's original device. This allows the user to use any number of suitable devices to verify with the verification server. In this manner, a user may use a friend's phone (third party device) or other device to access his or her information, such as account information, address book information, email or other messaging, etc., by performing the verification operation on any device.
For example, the user may provide an email address, username, code, or similar identifier on the friend's phone such that the verification server compares the login information with enrollment information for the user's account. This would indicate to the verification server which verification profile to use but does not by itself allow access to the user's data, accounts, or tasks. Upon logging out of the friend's phone, access to the user's information on the friend's phone is terminated. This provides the benefit of allowing a user to securely access account or other verification-accessible information or tasks using any device without having to type the user's password into the third-party device, where it could be logged or copied. In a sense, the user is the password.
Through cloud-based enrollment information, a single user may also securely transfer data between verified devices. In one embodiment, a user may own a first device, such as a mobile phone, and is verified on the first device via the verification system. The user may then acquire a new device, such as a new phone, tablet computer, or other device. Using the cloud-based verification system, the user may verify identity or image continuity on the new device and transfer data from the first device to the new device. The transfer of data may be completed via the internet, a local network connection, a Bluetooth connection, a wired connection, or near field communication. The verification process may also be part of a security check to reset or restore a system after the phone is lost or stolen. Thus, the verification system may be used to activate or verify a new device, with the verification confirming the user of the new device.
Similarly, the system may facilitate secure access to a single shared device by multiple people to control content or other features on the device. In many cases, passwords can be viewed, copied, guessed, or otherwise detected, particularly when a device is shared by several users. The users may be, for example, family members including parents and children, coworkers, or members of other groups, such as students. The verification system may allow each of the family members to log in based on his or her own unique enrollment information associated with a user account.
The device may restrict access to certain content or features for certain users' accounts, such as children's user accounts, while allowing access to content and features for others, such as the parents' accounts. By using the verification system for the shared device, users such as children are unable to utilize a password to try to gain access to the restricted content because the verification system requires the presence of the parent for verification, as explained above. Thus, device sharing among users with different privileges is further secured and enhanced. Likewise, in a classroom setting, a single device may be securely shared between multiple people for testing, research, and grade reporting.
Numerous modifications may be made to the above system and method without departing from the scope of the invention. For example, the images may be processed by a face matching algorithm on the device and may also be converted to biometric data on the device, which is then compared to previously created biometric data for an authorized user. Alternatively, the images from a device may be sent through a wired or wireless network where the face matching algorithms running on a separate server can process the images, create biometric data, and compare that data against previously stored data that is assigned to that device.
Further, the photo enrollment process may be done multiple times for a user to create multiple user profiles. For example, the user may enroll with profiles with and without glasses on, with and without other wearable devices, in different lighting conditions, wearing hats, with different hair styles, with or without facial or ear jewelry, or making different and unique faces, such as eyes closed, winking or tongue out to establish another level of uniqueness to each user profile. Such ‘faces’ made by the user would not be available on the user's social media pages and hence not available for copying, manipulation, and use during a fraud attempt. Each set of enrollment images, enrollment biometrics, or both may be saved along with separate enrollment movement. In one embodiment at least three images are captured as the mobile device completes the path. It is contemplated that any number of images may be captured.
It is also contemplated that the enrollment process may be linked to an email address, phone number, or other identifier. For example, a user may sign up with an email address, complete one or more enrollments as described above, and confirm the enrollments via the same email address. The email address may then further enhance the security of the system. For example, if a user unsuccessfully attempts to log in via the verification system a predetermined number of times, such as three times for example, then the verification system locks the account and sends an email to the email address informing the user of the unsuccessful login attempts. The email might also include one or more pictures of the person who failed to log in and GPS or other data from the login attempt. The user may then confirm whether this was a valid login attempt and reset the system, or the user may report the login attempt as fraudulent. If there is a reported fraudulent login, or if there are too many lockouts, the system may delete the account associated with the email address to protect the user's security. Thus, future fraudulent attempts would not be possible.
To further facilitate imaging, the mobile device may include various feedback meters such as a movement meter or accuracy meter as shown in
The mobile device 1012 may also display an accuracy meter 1026 or any other visual representation of frames to aid the user in validating himself/herself using the verification system and learning to improve verification. The accuracy meter 1026 may show a user a match rate (graphical, alpha, or numerical) of a predetermined number of images obtained during the verification process. The accuracy meter can be represented on the display in a variety of ways including numeric percentages, color representation, graphical, and the like. A combination of representations may also be utilized.
For example, as shown in
In another embodiment, each of the images may be represented on a table as a color that corresponds to the match rate. The color dark green may represent a very high match rate, light green may represent a good match rate, yellow may represent a satisfactory match rate, red may represent a mediocre match rate, and grey may represent a poor match rate. Other color schemes may also be used.
The height of the bars or the colors used may correspond to predetermined match rates. For example, a full bar or dark green may be a match rate greater than 99.9%, a three-quarter bar or light green may be a match rate between 90% and 99.9%, a half bar or yellow may be a match rate of 50-90%, red may be a match rate of 20%-50%, and a single line to a quarter bar or grey may be a match rate of 0-20%. A pie chart, line graph, or any other numerical or graphical representation could also be used. An overall score may be presented, or a score per image.
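A minimal sketch of how a per-image match rate might be mapped to the example bar heights and colors above (the break points being the example values) is:

def match_rate_display(match_rate):
    """Map a per-image match rate (0-100) to the example bar and color levels above."""
    if match_rate > 99.9:
        return ("full bar", "dark green")
    if match_rate >= 90:
        return ("three-quarter bar", "light green")
    if match_rate >= 50:
        return ("half bar", "yellow")
    if match_rate >= 20:
        return ("quarter bar", "red")
    return ("single line", "grey")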
The accuracy meter may also include a message 1028 indicating an overall match score. For example, the accuracy meter may indicate an average overall match score or the number of images which achieved a 99.9% match rate, and display the message to a user. With the movement meter 1024 and the accuracy meter 1026 as described above, the user may quickly learn to use the verification system due to the feedback presented by the meters 1024, 1026.
The movement and accuracy meters 1024, 1026 may also be configured to incorporate game features, aspects, or techniques into the verification system to encourage a user to try to get the best match possible (such as a high number score or a high percentage of frames), increasing the user's skill in utilizing the verification system. This also builds user adoption rates for the technology.
For example, the user may compete with themselves to mimic or improve past verification scores to encourage or train the user to achieve a high score. Further modifications of the verification meter may also be incorporated such as the ability to share accuracy match results with others to demonstrate one's skill in using the system or to compete against others. In other instances, the user may receive a reward, such as a gift or coupon, for high accuracy scores. While this may slightly increase costs, the reduction in fraud loss would far outweigh the additional cost.
Further game techniques may be incorporated into the verification system to encourage users to take actions which will prevent unauthorized or fraudulent verification. In one embodiment, the verification system may reward users that engage in fraud-preventing activities. One such activity is utilizing the face matching and/or verification system described herein. For example, based on the above described accuracy meter, the system may reward a user that successfully verifies with the system above a certain match rate. The system may award reward points, cash, or other prizes based on the successful verification or on a predetermined number of successful verifications. Where reward points are utilized, the points may be cashed in for predetermined prizes.
Other game features may involve award levels for users who gain a predetermined amount of experience using the verification feature. For example, different reward levels may be based on users successfully authenticating 100 times, 500 times, 1000 times, etc. Because each instance of fraud loss can be significant and can damage the goodwill of the business or organization, the benefits to fraud prevention are significant.
In one embodiment, the user may be notified that he or she has achieved various competency levels, such as a “silver level” upon achieving 100 successful verifications, a “gold level” for achieving 500 successful verifications, or a “platinum level” for achieving 1000 successful verifications. An amount of points awarded for each verification above a given match rate may increase based on the user's experience level. Of course, the names of the levels and the number of verifications for each level as described above are only exemplary and may vary as desired.
In one embodiment, a verification only counts toward reward levels when business is transacted at the web site while in other embodiments, repeated attempts may be made, all of which count toward rewards. Another feature may incorporate a leaderboard where a user may be notified of a user ranking comparing his or her proficiency or willingness in using the verification system as compared with other users.
Successful use of the verification system benefits companies and organizations that utilize the system by reducing costs for fraudulent activities and the costs of preventing fraudulent activities. Those cost savings may be utilized to fund the above described game features of the verification system.
Further activities that correspond to the verification system and contribute to the reduction of fraud may also be incorporated to allow a user to earn points or receive prizes. Such activities may include a user creating a sufficiently long and strong password that uses a certain number and combination of characters. This encourages and rewards users to set passwords that are not easily compromised. Other examples may include rewarding users for taking the time to perform verification steps in addition to an initial verification, such as a mobile phone or email confirmation of the verification, answering one or more personal questions, or other secondary verifications as currently known or later developed. This rewards users for taking on added time and inconvenience to lower the risk of fraud to a company or organization.
As another example, if the verification service is used to login to websites or apps that provide affiliate programs, then the reward or gift can be subsidized from the affiliate commissions on purchases made on those sites. For example, if a commerce (product or service) web site utilizes the method and apparatus disclosed herein to avoid fraud, and thus increase profits, then a percentage of each purchase made by a user using the verification service will be provided to the verification service. By reducing fraud, consumer purchases are more likely and additional users will be willing to enter financial and personal information. An affiliate link, code, or referral source or identifier may be used to credit the verification system with directing the consumer to the commerce (product or service) web site.
It is also contemplated that the verification system may be configured to allow a user to access several different web sites using a single verification. Because the verification process and result are unique to the user, the user may first designate which participating web sites the user elects to log into and then after selecting which one or more web sites to log into, the user performs the verification described herein. If the secure verification is successful, then the user is logged into the selected web sites. In this way, the verification process is a universal access control for multiple different web sites and prevents the user from having to remember multiple different usernames and passwords while also reducing fraud and password overhead for each user.
It is also contemplated that the system may be configured to have the video camera running on the phone. The mobile device would grab frames and path parameter data when the phone moves (using the camera, gyroscope, magnetometer, and accelerometer) but only process the frames into biometric data on the device, or send the frames up to the server, if they have a face in them. In this embodiment, the application executing on the mobile device could be triggered to start saving frames once the phone is moving, and then, if the phone continues to move in the correct path (a semi-circle, for example) and the system detects a face in the frame, the mobile device would start to send images, a portion of the image, or biometric data to the server for processing. When the system senses motion, it may trigger the capture of images at certain intervals. The application may then process the frames to determine if the images contain a face. If the images do include a face, then the application crops the face out and verifies whether the motion path of the mobile device is similar to the one used during enrollment. If the motion path is sufficiently similar, then the application can send the frames one at a time to the server to be scanned or processed as described above.
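The capture-and-forward flow described above might be organized as in the following Python sketch. The callables passed in (motion check, frame capture, face detection, cropping, path comparison, and upload) are hypothetical stand-ins for platform camera, sensor, and network APIs, and the interval and frame count are illustrative.

import time

def capture_and_forward_frames(device_is_moving, capture_frame, detect_face, crop_face,
                               path_matches_enrollment, send_to_server,
                               interval_s=0.2, max_frames=30):
    """Motion-triggered capture loop sketched from the description above."""
    frames_with_faces = []
    for _ in range(max_frames):
        if device_is_moving():                       # gyroscope/accelerometer check
            frame = capture_frame()                  # grab a camera frame
            face_region = detect_face(frame)         # a face region, or None
            if face_region is not None:
                frames_with_faces.append(crop_face(frame, face_region))
        time.sleep(interval_s)

    # Only forward data when the device's motion path resembles the enrollment path.
    if frames_with_faces and path_matches_enrollment():
        for face_image in frames_with_faces:
            send_to_server(face_image)               # one frame at a time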
When a fraudulent attempt is made using a display screen, such as an LED, LCD, or other screen, the system may detect the fraudulent login attempt based on expected attributes of the screen. In one embodiment, the verification system will run checks for banding produced by digital screens. When banding is detected, the system may recognize a fraudulent attempt at a login. In another embodiment, the system will run checks for edge detection of digital screens. As the mobile device is moved to obtain the verification movement during a login attempt, the system checks the captured images for edges of a screen to recognize a fraudulent login attempt. The system may also check for other image artifacts resulting from a screen, such as glare detection. Any now known or later developed algorithms for banding and screen edge detection may be utilized. Upon detection of fraud, the system will prevent verification and access to the website, or prevent the transaction or account access.
The verification system may further conduct an analysis on the enrollment images to estimate at least one of a gender, an approximate age, and an ethnicity. In an alternative embodiment, the user may manually enter one or more of their gender, an approximate age, and an ethnicity, or this information may be taken or obtained from existing records which are known to be accurate. The verification system may then further store a user's estimated gender, age, and ethnicity as enrollment credentials or user data. Thus, when the user later attempts to verify with the system, the system will compare derived gender, age, and ethnicity obtained from verification images (using biometric analysis to determine such data or estimates thereof based on processing) with the stored gender, age, and ethnicity to determine whether to verify the user. For example, if the derived data for gender, age and ethnicity matches the stored enrollment credentials, then the verification is successful, or this aspect of the verification is successful.
The verification system may make the gender, age, and ethnicity estimations based on a single image during the verification process or based on multiple images. For example, the verification system may use an image from the plurality of images that has an optimal viewing angle of the user's face for the analysis. In other embodiments, a different image may be used for each analysis of age, gender, and ethnicity when different images reveal the best data for the analysis. The verification system may also estimate the gender, age, and ethnicity in a plurality of the images and average the results to obtain overall scores for gender, age, and ethnicity.
As an alternative to obtaining the gender, age, and ethnicity as enrollment information, the gender, age, and ethnicity estimations used as verification credentials may be set over the course of repeated use of the verification system. For example, if in previous successful verifications using biometrics and movement information the verification system always estimates a user's age as being between 40 and 50, then the system may set credentials for that user requiring later login information to include images of a face estimated to be between 40 and 50. Alternatively, gender, age, and ethnicity estimations may be implemented as one of many factors contributing to an overall verification score to determine whether or not to verify a user.
For example, if the verification process produces a gender estimation of 1.9 on a male rating scale, with an allowed variation of plus or minus 0.2, then if the result of a later attempt does not fall within that range the system may deny access for the user. Likewise, if the user's estimated age always falls between 40 and 50 years of age during prior verification attempts or enrollment, and a verification attempt falls outside that range, the system may deny access or use the result as a compounding factor to deny access.
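A minimal sketch of using such estimates as compounding factors, with the example tolerances above (a 1.9 male rating plus or minus 0.2, and an age range of 40 to 50) and a simple label comparison for ethnicity, might be:

def demographic_factors(estimates, credentials):
    """Compare demographics estimated from verification images with stored credentials.

    estimates might be {"gender_rating": 1.85, "age": 47, "ethnicity": "X"}; credentials
    holds the expected rating and tolerance, age range, and label, e.g.
    {"gender_rating": 1.9, "gender_tolerance": 0.2, "age_range": (40, 50), "ethnicity": "X"}.
    """
    gender_ok = (abs(estimates["gender_rating"] - credentials["gender_rating"])
                 <= credentials["gender_tolerance"])
    low, high = credentials["age_range"]
    age_ok = low <= estimates["age"] <= high
    ethnicity_ok = estimates["ethnicity"] == credentials["ethnicity"]
    # Each result may deny access outright or contribute to the overall score.
    return gender_ok, age_ok, ethnicity_ok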
In a further embodiment, when a bracelet or watch capable of obtaining an EKG signature is used, a certain EKG signature may be required at login. The EKG signature could also be paired with the face matching rotation to provide multiple stage sign-on for critical security and identification applications. Further, the credentials could also include GPS information where login is only allowed within certain geographic locations as defined during enrollment. In one configuration, the GPS coordinates of the mobile device are recorded and logged for a login attempt or actual login. This provides additional information regarding the location of the user. For example, if the GPS coordinates are in a foreign country known for fraud, then the attempt was likely fraudulent, but if the GPS coordinates indicate the attempt or login was made in the user's house, then fraud is less likely. In addition, some applications may only allow a user to log in when at a specified location, such as a secure government facility or a hospital.
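One way the geographic restriction might be checked is with a great-circle distance against locations recorded during enrollment, as in the Python sketch below; the one-kilometer radius is an illustrative value.

import math

def within_allowed_location(lat, lon, allowed_locations, radius_km=1.0):
    """Return True if (lat, lon) is within radius_km of any enrolled location."""
    def haversine_km(lat1, lon1, lat2, lon2):
        r = 6371.0  # mean Earth radius in kilometers
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    return any(haversine_km(lat, lon, a_lat, a_lon) <= radius_km
               for a_lat, a_lon in allowed_locations)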
The enrollment information may further include distance information. Because the motion arc (speed, angle, duration . . . ) is unique to each user, face detection software on the device can process the images and determine if the device is too close or too far from the subject. In other words, the enrollment information may consider the size of the face in the images. Thus, the potential enrollment information may also vary based on the length of a user's arm, head and face size, and the optics of the camera in the user's particular mobile device. The user may also be positioned at a fixed computer or camera, such as a laptop, desktop, or ATM. The user may then move the face either forwards and back, side to side, or up and down (or a combination) to create the images. Hence, this method of operation is not limited to a mobile device. In one embodiment, the camera is disposed in an automobile, such as in a mirror, and the person moves his or her head or face to verify.
In one embodiment, the system is set to limit what the user can do when first enrolled and verified. Then, after further verifications or after a predetermined time period and number of verifications, additional capabilities may be granted. For example, during the first 20 verifications during the first 3 months, a maximum transaction of $100 may be allowed. This builds a database of known verification data relating to non-objected-to transactions by the user. Then, during the next 20 verifications, a transaction limit of $3000 may be established. This limits the total loss in the event of fraud when the verification data is limited and the user is new to the system, for example if an unauthorized user manages to fraudulently enroll in the verification system.
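The tiered limits described above might be expressed as in the following sketch, where the dollar amounts, verification counts, and 90-day period are the example figures given and the first tier is read as applying while either condition holds.

def transaction_limit(successful_verifications, days_since_enrollment):
    """Example tiered transaction limits while verification history accumulates."""
    if successful_verifications < 20 or days_since_enrollment < 90:
        return 100      # new users: $100 maximum per transaction
    if successful_verifications < 40:
        return 3000     # next 20 verifications: $3000 maximum per transaction
    return None         # no system-imposed limit thereafter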
When the user images himself/herself using a front-facing camera, the user may confirm that his/her face is being imaged by viewing the image on the display, as described above. The image shown on the display may be configured to be smaller in area than the entire display and may be positioned in an upper portion of the display towards the top of the device. When the user's image is shown only in the top portion of the user's display screen, the user's eyes tend to look more closely at the front camera. When the user's eyes are tracking up, the accuracy of the face matching may be improved. Further, tracking the movement of the eyes from frame to frame may allow the system to validate that the images are of a live person, and are not from a photograph or video recording of the person.
The image shown on the display may also be positioned to correspond with a camera location on the user's device, as shown in
The image viewed on the display by the user may further be modified such that the edge pixels on the sides display are stretched horizontally as shown in
An example of this process is described with reference to
Next, the system analyzes the pixel placement in one or more subsequent frames to determine whether the pixels representing the detected features correspond with features located in the foreground or the background of the scene in step 1204.
In one embodiment, when the user moves the device to fit his or her face within the ovals, such as those shown in
In step 1205, the various features are tracked through successive images to obtain two-dimensional vectors characterizing the flow or movement of the features. The movement of the features in this example is caused as the user moves the device to fit his/her face within the oval shown in the exemplary screen displays of
The device (processor executing machine readable code stored in memory) then compares image frames (formed by an array of pixels) as the device moves closer to the face of the user. The pixels representing objects in the image are tracked to determine the velocity characteristics of the objects represented by the pixels in the foreground and the background. The system detects these changes in position of items based on pixel data, or two-dimensional pixel velocity vectors, by comparing the successive images taken by the device. When the live, three-dimensional user is authenticating, velocity characteristics of the foreground features (face) and the background features differ significantly as compared to velocity characteristics of a two-dimensional spoof being imaged. That is, the velocity characteristics of facial features for a live, three-dimensional person differ from those of a two-dimensional spoof as the user moves the device to fill his/her face in the oval shown in
Thus, in step 1207, the system checks if the two-dimensional vectors of foreground features match expected values of a live, three-dimensional person. The expected values or expected rate of change of an item in an image, defined by pixel location or values, may be based on testing over time, such as expected location, expected displacement, expected rate of change of the item, or even expected differences in the rate of change which would indicate three-dimensionality (as opposed to a 2D photograph or video screen of a person). In this example, testing may set an expected value of movement or velocities of the ears, cheekbone, nose, etc. When the two-dimensional vectors match expected values, the method proceeds to step 1210 to increase a likelihood that the images are of a live, three-dimensional person. If the two-dimensional vectors do not match expected values (or match values that are expected when a two-dimensional spoof is used), then the method decreases the likelihood that the images are of a live, three-dimensional person, as shown in step 1212.
When a live, three-dimensional person is being imaged, the two-dimensional vectors, or displacement of pixels between successive images are different in the foreground and background of the image. Thus, in step 1214, the system also analyzes the two-dimensional vectors of background objects to determine whether these match expected values. The likelihood of the images being of a live, three-dimensional person is again updated in either steps 1210 or 1212.
As explained above, some pixels representing certain background objects may appear or disappear completely. For example, as the user moves the device from arm's length to closer in towards his or her face, pixels, edges, and/or features of the user's face will have a higher rate of movement than features in the background, such as a picture frame on a wall, a clock, etc. Additionally, some pixels that are visible on or around the user's face when the device is furthest out from the user will no longer be visible when the user moves the device closer to his or her face. The pixels around a person's face may be defined as the facial halo and the items in these pixels (facial halo) will no longer be captured by the camera in the image due to the person's face taking up more of the image and ‘expanding’ due to the movement of the camera closer to the person's face. As mentioned above, this check may be referred to as edge detection. In step 1216, the system verifies whether background images around the edges of foreground images match expected values. The system also ensures that pixels representing the edge of the foreground object (such as the face) replace pixels of background objects near the edges of the foreground object. The likelihood of the images being of a live, three-dimensional user is adjusted in step 1210 and 1212 based on the outcome of the edge detection in step 1216. Thus, by tracking these pixels and the displacement, the system can verify whether the pixel velocity analysis is consistent with three dimensional objects having a foreground and background.
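A simplified illustration of comparing foreground (face) and background pixel velocities between successive frames is given below. It assumes the OpenCV and NumPy libraries are available, takes a face bounding box from any face detector, and uses an illustrative ratio threshold; it is a sketch of the general pixel velocity comparison rather than the exact steps 1204 through 1216.

import numpy as np
import cv2  # assumes OpenCV is available

def foreground_background_motion(prev_gray, next_gray, face_box):
    """Return mean optical-flow magnitudes inside and outside the face region.

    prev_gray and next_gray are successive grayscale frames (2-D arrays);
    face_box is (x, y, w, h) from a face detector.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    x, y, w, h = face_box
    face_mask = np.zeros(magnitude.shape, dtype=bool)
    face_mask[y:y + h, x:x + w] = True
    return magnitude[face_mask].mean(), magnitude[~face_mask].mean()

def likely_three_dimensional(face_motion, background_motion, min_ratio=1.5):
    """A live face moved toward the camera should move faster than the background;
    a flat photograph or screen tends to produce nearly equal motion everywhere.
    The ratio threshold is illustrative."""
    return face_motion > min_ratio * (background_motion + 1e-6)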
In step 1218, the liveness or three-dimensionality of the user being imaged and verified is validated based on the various checks described above. A determination that the user attempting verification is a live person is one element that must be met as part of the verification. Thus, attempts at fraudulent access to an account or device using screens or photos of the person can be more reliably prevented. This prevents attempts at fooling the verification system with a two-dimensional image such as a printed picture, a digital projection, or a digital screen image of a person.
Further enhancements may also be achieved using pixel velocity analysis for liveness or three-dimensionality. When the user brings the device (camera) closer to the user's face, the facial features will distort differently due to the large relative distances between the various features and the camera and the placement of the features in the field of view of the camera as the camera comes closer to the face. This effect may be referred to as perspective distortion. When this distortion begins to occur, pixels in the center of the frame that represent the features in the center of the face such as the nose will have the least amount of distortion in the frame, whereas the pixels that represent the outer portions of the face such as the cheeks, the chin, and the forehead will show the most relative pixel movement (more than pixels at the center of the frame) and the highest acceleration. Thus, the three-dimensionality can also be shown by comparing the features on the face itself. This is because at close proximity to the device, facial features closer to the device can be considered foreground features, and facial features farther from the device are background features. For example, pixels representing the nose will show less movement between frames than pixels representing the cheekbone because of the nose's shorter relative distance from the camera when the device is held at eye level.
Pixel velocity analysis may also be used to track liveness characteristics that are very difficult to recreate during a fraudulent verification event. For example, the human eyes are never completely still even when focusing on an object. There is always quick involuntary movement of the eyes as the eyes scan an object, moving around to locate interesting parts of the object, and developing a mental, three-dimensional “map” corresponding to the scene. These movements are called saccades and are involuntary. Saccades last from 20 ms to 200 ms and serve as the mechanism of eye fixation. Two-dimensional velocity vectors, based on movement of the eyes based on pixel values, may thus be generated by the saccadic motion of the eyes across frames. The presence of these vectors, the frequency of the eye jitter, and the acceleration of the pixel movement between frames can be compared to measurements of verified sessions and can be used to increase confidence that the user in front of the camera is not an inanimate spoof such as a photo, a wax sculpture, or doll.
In another example, when a bright light is presented to the human eyes, the pupil will constrict to mitigate the light's path to the retina. Cameras on typical mobile devices such as smart phones generally operate at high enough resolutions that two-dimensional velocity vectors will track the pupils constricting when compared over a series of frames where the amount of light entering the eyes increases, such as when the user moves the device and screen closer to his or her face, or when a front-facing flash of a mobile device is activated.
Another feature that may be detected by pixel velocity analysis is reflection off the eye of the user. The surface of the eye reflects a larger amount of the light hitting it when the pupil contracts, providing a brighter reflection of the light emitting object. In the case of the device with an illuminated screen being moved closer to the face of the user, the size and brightness of the reflection of the device's screen will increase while the size of the pupil contracts. It is possible to observe and document these two-dimensional vectors in a consistent motion path and then provide a liveness evaluation on video frame sessions based on the expected two-dimensional vectors being observed or absent.
Face matching algorithms use landmarked points on the face to measure the distance and angles between the facial features. This creates the unique look of individuals and the corresponding unique biometric data. In some embodiments, pixel velocity analysis may be used not only to verify the three-dimensionality of the person, but also as an additional or alternative face matching algorithm.
To facilitate imaging, the screen on the mobile device may additionally be displayed with a white background, and the brightness of the screen may be increased to light up the user's face in a dark environment. For example, a portion of the display could provide video feedback for the user to ensure he or she is imaging himself or herself, while the remaining portion of the display is configured to display a bright white color. Referring to the example shown in
When infrared imaging is used as thermal imaging, further security enhancements are possible. Particularly, the thermal imaging may be analyzed to indicate whether the obtained images are from an actual user or are fraudulent images from a screen or other device. When a person is in front of an infrared thermal imaging camera, the heat radiation detected should be fairly oval shaped designating the person's head. In contrast, the heat radiating from a screen is typically rectangular. Further, the heat patterns detected in the actual person's face as well as the movement of the heat patterns in the images can be compared with expected heat patterns of a human face to distinguish the images from fraudulent authorization attempts using a screen.
Detecting Output from the Mobile Device
The display or other light source on the mobile device may further be utilized to provide additional security measures. During the verification process described above, light from the display or other light source is projected onto the user's face and eyes. This projected light may then be detected by the camera of the mobile device during imaging. For example, the color tone detected on the skin, or a reflection of the light from the cornea of a user's eye may be imaged by the camera on the mobile phone. Because of this, random light patterns, colors, and designs may be utilized to offer further security and ensure there is a live person attempting verification and not merely an image or video of a person being imaged by a fraudster.
As one example, when a user begins verification, the verification server may generate and send instructions to the user's device to display a random sequence of colors at random intervals. The verification server stores the randomly generated sequence for later comparison with the verification information received from the mobile device. During verification imaging, the colors displayed by the device are projected onto the user's face and are reflected off the user's eyes (the cornea of the eyes) or any other surface that receives and reflects the light from the screen. The camera on the user's mobile device detects the colors that are reflected off the user's skin or eyes (or other surface) and generates color data indicating the colors detected based on the screen projection. This data may be returned to the verification server to determine whether the detected color data matches the known sequence or pattern that was sent to and projected by the screen of the user device. Based on this comparison at the verification server, the verification succeeds or is denied. The comparison with the random sequence of colors in the instructions may alternatively occur exclusively at the user device to determine that a live user is being verified.
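A minimal sketch of the server side of this challenge, assuming the device reports the dominant reflected color it detected for each interval, is shown below; the color palette, sequence length, and the requirement that most intervals match are illustrative.

import random

COLORS = ["red", "green", "blue", "white", "magenta", "cyan"]

def generate_color_challenge(length=5, seed=None):
    """Server side: create and retain a random color sequence to send to the device."""
    rng = random.Random(seed)
    return [rng.choice(COLORS) for _ in range(length)]

def challenge_satisfied(expected_sequence, detected_sequence, min_matches=4):
    """Compare the colors detected in the reflections against the issued sequence."""
    matches = sum(1 for expected, detected in zip(expected_sequence, detected_sequence)
                  if expected == detected)
    return matches >= min_matches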
As another example, when a user begins verification, the verification server may send instructions to the user's device to display a randomly generated pattern which is then stored on the verification server. This pattern may include graphics, text, lines or bars, flashing light patterns, colors, a QR code, or the like. The randomly generated pattern is displayed during verification imaging, and the pattern is reflected off the user's eyes (cornea). The camera of the user's device detects the reflected pattern off the eye of the user and processes the reflected, mirrored image of the displayed pattern. The processed pattern (such as being converted to a numeric value) is transmitted to the verification server and compared to the pattern that was randomly generated and stored on the verification server to verify if the pattern displayed by the screen, and imaged after reflection off the user's face establishes a pattern match.
If a match occurs, this establishes or increases the likelihood that a live person is being imaged by the device. If the pattern is not a match, or does not meet a match threshold level, then the verification process may fail (access denied), or the account access or transaction amount may be limited. It is noted that this example could also be incorporated on a desktop computer with a webcam that does not incorporate the enrollment movement and verification movement described above. Further, this example may not only be incorporated with face matching, but could also serve as an added layer of security for iris recognition or any other type of eye blood vessel recognition, or any facial feature that is unique to a user.
When the above example is implemented on a desktop computer, eye tracking may also be utilized to further demonstrate the presence of a live user. For example, the screen could show a ball or other random object or symbol moving in a random pattern that the user watches with his or her eyes. The camera can detect this real-time movement to verify the user is live, and not a picture or display, and verify that the eye or head movements correspond to and match the expected movement of the object or words on the screen, which are known by the verification system. Eye tracking can also be done by establishing an anchor point, such as via a mouse click at a location on the screen (if the user is looking at the location where the mouse click takes place), and then estimating where the user is looking at the screen relative to the anchor position.
The use of a moving object on the screen may also be beneficial during enrollment on either a mobile or stationary device. For example, while capturing the enrollment images, the device may display a moving digital object (such as a circle or word(s)) that moves around the screen so that the user is encouraged to follow it with his or her head and eyes. This movement by the user may be involuntary, or the device may be configured to instruct the user to follow the object. This results in movement of the head and/or eyes creating small changes in the orientation of the user's head and face with the device camera, providing more complete enrollment information. With more complete enrollment information, the system may better ensure that the user will later be verified at a high rate even at slightly different angles during future verification attempts.
In one embodiment, the system is configured to aid the user to easily learn to verify with the system. As shown in
Next, as shown in
Thus, the system provides and teaches the user a simple method to provide enrollment and verification images along with enrollment and verification movement as explained above. The system may also teach varying enrollment and verification movement by varying the location of the small oval 1320 on the screen 1315, and by changing the order and the size of the ovals displayed. For example, the user may zoom in ½ way, then out, then in all the way, by moving the mobile device. The system may be configured to monitor that the camera's zoom function (when equipped) is not in use, which typically requires the user to touch the screen.
In one embodiment, the enrollment movement may be omitted, and the verification movement may be compared to expected movement based on the prompts on the screen. For example, the device or verification server generates a series of differently sized ovals within which the user must place his or her face by moving the mobile device held in the user's hand. In this manner, the verification movement may be different during each login depending on the order, size, and placement of the ovals shown on the screen.
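One way the prompt-driven verification movement might be checked, assuming the face size measured in each frame is reported as a fraction of the frame width, is sketched below; the oval sizes and tolerance are illustrative.

import random

def generate_oval_sequence(count=4, seed=None):
    """Randomly choose a sequence of on-screen oval sizes (fractions of screen width)."""
    rng = random.Random(seed)
    return [rng.choice([0.4, 0.6, 0.8]) for _ in range(count)]

def movement_follows_prompts(oval_sizes, observed_face_sizes, tolerance=0.15):
    """Check that the measured face size tracked each prompted oval size.

    Larger ovals require the device to be held closer, producing a larger face
    in the frame, so the observed sizes should follow the prompted sizes.
    """
    if len(oval_sizes) != len(observed_face_sizes):
        return False
    return all(abs(oval - face) <= tolerance
               for oval, face in zip(oval_sizes, observed_face_sizes))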
The system may also incorporate other security features when the “zoom in” movement is used as shown in
The barrel distortion effect becomes more pronounced on an image of a person's face when the person images his or her face close to the lens. The effect results in the relative dimensions of the person's face appearing different than when the imaging is done with the person's face farther away from the lens. For example, a person's nose may appear as much as 30% wider and 15% taller relative to a person's face when the image is taken at a close proximity as compared to when the image is taken at a distance. The differences in the relative dimensions are caused by the relatively larger differences between the camera and the various facial features when the person is imaged close to the lens as compared to the relatively equal distances when the person is imaged at a distance farther from the lens.
Such differences have been found to be significant in many face matching algorithms. That is, a face matching algorithm may not recognize a live person imaged at a close proximity and a far proximity as the same person. In contrast, if a two-dimensional photograph of a person is imaged by the camera at both a close proximity and a farther proximity, the relative distances between the lens and the features shown in the two-dimensional image do not change so significantly because all of the features lie in a single plane. Thus, a face matching algorithm would recognize the two-dimensional photograph as the same person when imaged at both a close proximity and a distance farther from the lens.
This effect may be used to increase the security of the verification system. For example, during enrollment, enrollment images may be provided by the user at both close and far proximity from the lens, in addition to other positions through the movement. Later, during verification, verification images may be obtained at both the close and far distances from the lens to determine if they match with the enrollment information obtained from the enrollment images. Further, because the barrel distortion effect is expected when an actual, three-dimensional person is present, an absence of the relative change in the dimensions of the facial features alerts the system to a fraudulent attempt at verification. This effect could not easily be re-created with a two-dimensional picture (printed photograph or screen) and thus, this step can serve as a secure test to prevent a two-dimensional picture (in place of a live face) from being used for verification.
In other words, using this movement of “zooming” in and out on the user's face, two or more biometric profiles could be created for the same person. One of the multiple profiles for the person may be imaged farther from the camera, and one of the multiple profiles may be for the person imaged closer to the camera. For the system to verify the person, the verification images and biometrics must match the two or more profiles in the enrollment images and biometrics.
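A sketch of this two-profile requirement is given below; face_similarity stands in for any face matching algorithm returning a similarity between 0 and 1, and the thresholds are illustrative.

def verify_near_and_far(face_similarity, enrollment, verification, threshold=0.9):
    """Require matches at both distances, per the two-profile approach above.

    enrollment and verification each hold "near" and "far" biometric templates.
    """
    near_ok = face_similarity(enrollment["near"], verification["near"]) >= threshold
    far_ok = face_similarity(enrollment["far"], verification["far"]) >= threshold
    # A flat photograph looks the same at both distances, so the near and far
    # verification templates should differ from each other by some margin.
    distorted = face_similarity(verification["near"], verification["far"]) < 0.99
    return near_ok and far_ok and distorted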
In addition, the system may detect the presence of a real person as compared with a fraudulent photograph of a person by comparing the background of the images obtained at a close and a far proximity. When the mobile device 1310 is held such that the person's face fits within the oval 1320, objects in the background that are almost directly behind the person may be visible. However, when the mobile device 1310 is held such that the person's face fits within the larger oval 1330, the person's face blocks the camera's ability to see the same objects that are almost directly behind the person. Thus, the system may compare the backgrounds of the images obtained at the close and the far proximity to determine whether the real person is attempting verification with the system.
Of course, in
The number of frame sizes presented to the user may also vary for a single user based on the results of other security features described herein. For example, if the GPS coordinates of the mobile device show that the device is in an unexpected location, more frames at different distances may be required for verification. One or more indicators, such as lights, words, or symbols may be presented on the screen to be visible to the user to direct the user to the desired distance that the mobile device should be from the user.
In
For example, as described above, enrollment images and biometrics may be obtained for a user at two distances from the user. During verification, multiple images are captured in addition to images corresponding to the close and far distances of the enrollment images and biometrics. Based on the expected distortion of these intermediary images according to the distance traveled by the device, the system may validate that the change in distortion of the images is happening at the correct rate, even though only two enrollment profiles are obtained.
The capturing of these images may be still images or video, such that frames or images are extracted from the video that is taken during the movement from the first position distant from the user to the second position proximate the user. Thus, it is contemplated that the operation may capture numerous frames during the zoom motion and ensure that the distortion is happening at the correct rate for the head size and the distance moved by the mobile device, based on data from the accelerometers, magnetometers, and so forth.
Over time based on accumulated data, or calculated data during design phase, the system will have data indicating that if a phone is moved a certain distance toward a user's face, then the distortion effect should fall within a known percentage of the final distortion level or initial distortion level. Thus, to fool or deceive the verification system disclosed herein, the fraud attempt would not only need to distort the fraudulent two-dimensional picture image, but would also need to cut the background, and then make a video of the face, distortion, and background that does all of this incrementally and at the correct speed, all while not having any banding from the video screen or having any screen edges visible, which is very unlikely.
Many currently known facial detection and face matching algorithms are configured to look for a small face within an image. Thus, to ensure that the facial detection and recognition algorithms detect and recognize the user's face in the zoomed in image (
When the enrollment and verification movement resulting from the process described with
In one embodiment, at least one blink is required to prove liveness for verification. In another embodiment, blinks may be counted, and the number of blinks may be averaged over time during verifications. This allows for an additional factor in verification to be the number of blinks observed during the motion. If a pattern of when the user blinks during the motion is observed, the system may verify that the user blinks at the expected time and device location during the motion during future verification attempts.
In other embodiments, the size or location of the oval or frame may change to sizes or locations other than that shown in
In one exemplary method, the mobile device is positioned at a first distance from the user and a first image is captured for processing. This distance may be linearly away from the user, and in this embodiment not in an arc or orbit. This may occur by the user moving the mobile device, either by hand or by the mobile device being on a movable device or rail system. Alternatively, the lens system may be adjusted, if in a fixed system, to change the size of the user's face in relation to the frame size. Alternatively, the user may stay stationary, multiple cameras may be used, or the camera may move without the user moving. Once some form of movement (from a device, camera, lens, or user) has occurred to establish the camera at a second distance, a second image is captured for processing. Movement from the first position to the second position may be straight toward the user. Processing occurs on both images.
The processing may include calculations to verify a difference between the two images, or a difference in biometrics obtained from the two images, which indicates that a real person is being imaged. Processing may occur to compare the first verification image to a first enrollment image (corresponding to the first distance) to determine if a match is present and then compare the second verification image to a second enrollment image (corresponding to the second distance) to determine if a match is present. If a match occurs, then verification may proceed.
Variations on these methods are also possible, with the system requiring a match at the first distance but a failure to match at the second distance, thereby indicating that the second image is not of a two-dimensional picture. The processing resulting in a match or failure to match may be any type of image or face matching processing algorithm. As with other processing described herein, the processing may occur on the mobile device, one or more remote servers, or any combination of such devices.
All the processing described herein may occur on only the mobile device, only a remote server, or a combination thereof. The biometric data may be stored on the mobile device or the server, or split between the two for security purposes. For example, the images could be processed on the mobile device but compared to enrollment data in the cloud or at a remote server. Or the images could be sent to the cloud (remote server) for processing and comparison.
Additional added security modifications may include information about a user's finger. Many mobile devices with touch screens can detect the location and approximate size of a user's touch on the screen. Accordingly, an approximate size of a user's finger or thumb may be measured by the system. In addition to the size of a finger, an orientation angle of the finger or whether the fingers or thumbs of the right or left hand are used can be detected.
In one embodiment, a user selects an account to open, begins enrollment imaging, or begins verification imaging by touching the touchscreen of the user device. The verification system may thus detect whether the touch by a user during verification corresponds with previously stored enrollment information including the size of the user's finger or thumb, amount of pressure applied to the screen and whether the user is right or left handed. This adds an additional security layer for the verification system.
Furthermore, the verification system may require that the user initiates a verification by touching a fingerprint reader or the touchscreen in one or more predetermined manners. In one embodiment, as shown in
The regions 1420 on the touchscreen may be visually represented by a grid or may not be displayed at all on the touchscreen 1410. As shown in
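A minimal sketch of such a region-based pattern check, assuming an illustrative three-by-three grid and touch coordinates reported in screen pixels, is:

def touch_region(x, y, screen_width, screen_height, cols=3, rows=3):
    """Map a touch coordinate to a numbered region of an illustrative 3x3 grid."""
    col = min(int(x / screen_width * cols), cols - 1)
    row = min(int(y / screen_height * rows), rows - 1)
    return row * cols + col

def touch_pattern_matches(touches, enrolled_pattern, screen_width, screen_height):
    """Check that the sequence of touched regions reproduces the enrolled pattern."""
    observed = [touch_region(x, y, screen_width, screen_height) for x, y in touches]
    return observed == list(enrolled_pattern)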
It is also contemplated that the user could record their voice by speaking a phrase while recording their images during the enrollment process when first using the system. Then, to verify, the user would also have to speak the phrase when moving the mobile device to capture the image of their face. Thus, one additional path parameter may be the user's spoken voice and the use of voice recognition as another layer or element of the verification process.
The verification system may also process the images received from the mobile device to determine if the images are of sufficient quality. For example, the system may check the images for blurriness caused by the images being out of focus or by the camera lens being obscured by fingerprints, oils, etc. The system may alert the user that the quality of the images is insufficient (or too bright or too dark) and direct the user to adjust a focus, exposure, or other parameter, or to clean the lens of the camera.
The verification system may also utilize an autofocus feature when the mobile device camera is so equipped. For example, when an actual, three-dimensional person is being imaged, the system checks to ensure that the sharpness of the image changes throughout as the camera performs auto-focusing. In another embodiment, the system may control the autofocus so that the camera focuses on a first location or distance to check for sharpness (in focus) of a portion of the image containing a face. The system then controls the camera to focus at a second location or distance where the presence of a face is not detected and checks for sharpness (in focus) of that portion of the image. If a three-dimensional person in a real environment is being imaged, it is expected that the focal length settings should be different at the first and second locations, which suggests a real person is presently being imaged. However, if the focal lengths of both locations are the same, this indicates that a two-dimensional photograph or screen is being imaged, indicating a fraudulent login attempt.
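Abstracting the lens position as a normalized focus setting reported by the camera, the two-position check might reduce to a comparison such as the following, with the minimum difference being an illustrative value.

def autofocus_suggests_depth(focus_on_face, focus_off_face, min_difference=0.05):
    """Compare the focus settings obtained at the face and away from the face.

    A real, three-dimensional scene should produce noticeably different settings;
    nearly identical settings suggest a flat photograph or screen is being imaged.
    """
    return abs(focus_on_face - focus_off_face) >= min_difference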
The system may also control the auto-focus of the device to check for different focal lengths of different features in the image. For example, when a person's face is imaged from the front, the person's ear is expected to be at a different focal distance (more distant) than the tip of the person's nose.
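The following Python sketch illustrates the focus-based liveness idea described above; the focus_at helper and the minimum focus difference are hypothetical stand-ins, since real devices expose focus control only through platform-specific camera APIs.

```python
# Illustrative sketch of the autofocus liveness check. The camera interface
# is hypothetical; real devices use platform-specific APIs for focus control.
def autofocus_liveness_check(focus_at, face_region, background_region,
                             min_focus_difference=0.05):
    """focus_at(region) is assumed to return the lens focus distance (in
    arbitrary units) that makes that region sharpest. A live, three-dimensional
    scene should require different focus settings for the face and for a region
    where no face is detected; a photograph or screen replay tends to be in
    focus at a single distance."""
    face_focus = focus_at(face_region)
    background_focus = focus_at(background_region)
    return abs(face_focus - background_focus) >= min_focus_difference

# Example with a stubbed focus function: a flat photo yields the same focus
# distance everywhere and therefore fails the check.
flat_photo = lambda region: 0.42
print(autofocus_liveness_check(flat_photo, "face", "background"))  # False
```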
The verification server may also be configured to store the verification images for a predetermined length of time. The images may provide additional security benefits as evidence of a person attempting to log in to a user's account. For example, the system may store a predetermined number of prior login attempts, such as twenty login attempts, or store images from login attempts for a predetermined time period, such as the past seven days or weeks. Any fraud or attempted fraud will result in pictures of the person attempting the login being stored or sent to the verification server or the account server.
The mere knowledge that photos will be taken and stored is a significant deterrent to any potentially dishonest person and an assurance of security to the user. Likewise, for any attempted and failed login, the photo can be stored as an indicator of who attempted to access the account. It is also contemplated that an email or text message, along with the picture of the person attempting the failed login, may be sent to the authorized user so the user knows who is attempting to access the account. This establishes a first line of security for the account because the photo or image is also possessed by the verification server.
Further, the level or percentage of correspondence between the enrollment information and the verification information to verify the user may change over time. In other words, the system may comprise an adaptive threshold.
After a user regularly uses the verification system described above, the user will have logged in with the system by moving the mobile device in the predetermined path relative to his or her head many times. Accordingly, it may be expected that as the user gains experience with the verification system, the user will gradually settle into a comfortable and standardized motion path. In contrast, the initial enrollment movement of a user will likely be the most awkward and clumsy movement because the user has little experience with the verification system.
To make the verification system more convenient for the user without losing security, the adaptive threshold system allows the enrollment movement to adapt so that the user is not locked into the awkward and clumsy initial movement as the enrollment movement. To facilitate this, upon each successful authorization, the successful authorization movement is stored, and the motion path is added to a list of acceptable motion paths. The list of acceptable motion paths may be limited to a predetermined number of paths. When a new successful authorization is completed and the list of acceptable motion paths is full, the oldest motion path is deleted and the newest is stored in its place. Alternatively, the motion path that is least like the other motion paths stored on the list may be deleted. Thus, by storing the most alike or newest motion paths, the enrollment movement may slowly adapt over time as the user becomes familiar with the system and settles into a comfortable motion path for verification.
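A minimal sketch of maintaining such a list of acceptable motion paths follows; the fixed-length path representation, list size, and eviction choices are illustrative assumptions rather than required behavior.

```python
# Sketch of the adaptive list of acceptable motion paths. Each path is assumed
# to be resampled to a fixed number of sensor samples so paths are comparable.
import numpy as np

class AcceptablePaths:
    def __init__(self, max_paths=10):
        self.max_paths = max_paths
        self.paths = []  # each path: fixed-length np.ndarray of sensor samples

    def _similarity_to_others(self, idx):
        """Mean negative distance between path idx and every other stored path."""
        others = [p for i, p in enumerate(self.paths) if i != idx]
        return -np.mean([np.linalg.norm(self.paths[idx] - p) for p in others])

    def add_successful_path(self, path, evict_least_similar=True):
        self.paths.append(np.asarray(path, dtype=float))
        if len(self.paths) > self.max_paths:
            if evict_least_similar:
                # Drop the path least like the others on the list.
                worst = min(range(len(self.paths)), key=self._similarity_to_others)
                self.paths.pop(worst)
            else:
                # Otherwise drop the oldest stored path.
                self.paths.pop(0)
```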
In addition, other enrollment information may adaptively change in a similar manner as the user information. For example, successful verification photos or biometric information can be stored as part of the enrollment information, and old enrollment information may be discarded over time. In this manner, the verification system can be convenient for a user even over a long period of time as the user experiences aging, facial hair growth, different styles of makeup, new glasses, or other subtle face alterations.
How much variance is allowed over time in the motion path, the biometric information, or both may be set by the entity requiring verification to meet that entity's security requirements. The time or number of scans after the initial enrollment can be used to modify the adaptive threshold. For example, during the first few days after enrollment, the threshold may be lower while the security threat is low and the differences in paths are likely to be higher. After several verifications or several days, the threshold may increase. The threshold further may be set based on trending data of either the motion path or the biometric information. For example, the threshold may be more lenient in the direction the data is trending, while having a tighter tolerance for data against the trend.
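The following sketch shows one possible adaptive threshold keyed to the number of successful verifications since enrollment; the specific base, ceiling, and step values are assumptions chosen for illustration only.

```python
# Illustrative sketch of an adaptive correspondence threshold that tightens as
# the user gains experience with the system. Break points are assumed values.
def correspondence_threshold(successful_verifications, base=0.90, ceiling=0.98,
                             step=0.01):
    """Start lenient shortly after enrollment, then raise the required level of
    correspondence by `step` per successful verification up to `ceiling`."""
    return min(ceiling, base + step * successful_verifications)

print(correspondence_threshold(0))   # 0.90 just after enrollment
print(correspondence_threshold(12))  # 0.98 once the user's path has settled
```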
A temporal aspect may also be added along with the location information. For example, if the user conducts and verifies a transaction near his home, and then one hour later another transaction is attempted in a foreign country, the transaction may be denied. Or it may be denied if the distance between the prior verification location and the next verification location cannot be traveled, or is unlikely to have been traveled, in the amount of time between login or verification attempts. For example, if the user verifies in the system in Denver, but an hour later an attempt is made in New York, Russia, or Africa, then either the first or the second attempt is likely fraudulent because the user cannot plausibly travel between these locations in one hour.
Further, if the next transaction is attempted at a more reasonable time and distance away from the first transaction, the level of correspondence threshold may be raised to provide added security, without automatically denying the transaction. Likewise, an altimeter may be used such that if the altitude determined by the mobile device is different than the altitude of the city in which the user is reported to be located, then this may indicate a fraud attempt. Thus, altitude or barometric readings from the mobile device may be used to verify location and can be cross referenced against GPS data, IP address or router location data, or user identified location.
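A minimal sketch of the geo-temporal plausibility check follows; the haversine distance helper and the assumed maximum travel speed are illustrative, not prescribed by the embodiments.

```python
# Sketch of the geo-temporal plausibility check: deny (or tighten the
# threshold) when the distance between two verification locations could not
# plausibly be covered in the elapsed time.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def travel_plausible(prev_fix, next_fix, elapsed_hours, max_speed_kmh=1000.0):
    """prev_fix/next_fix are (lat, lon) tuples; max_speed_kmh roughly
    approximates commercial air travel."""
    distance = haversine_km(*prev_fix, *next_fix)
    return distance <= max_speed_kmh * elapsed_hours

# Example: a verification in Denver followed one hour later by an attempt in
# Moscow is not plausible, so the second attempt may be denied or flagged.
denver, moscow = (39.74, -104.99), (55.76, 37.62)
print(travel_plausible(denver, moscow, elapsed_hours=1.0))  # False
```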
To provide an additional layer of security to the face matching and/or verification system, the system may utilize random image distortion. For example, a user may be assigned a random distortion algorithm upon enrollment into the system. The distortion algorithm may include such distortions to the image as widening or narrowing the person's face by a predetermined amount, or adding or superimposing a predetermined shape at a predetermined position on the user's face. As one example of this, the distortion may be a circle superimposed 100 pixels above the user's left eye.
With the uniquely assigned distortion on the images from the user, the biometric data for that user will be unique to the account or device used by the user. That is, the enrollment biometrics stored on the verification server or on the mobile device will reflect not only the facial features of the user, but also the uniquely assigned image distortion. Thus, even if an accurate, fraudulent representation of a person were used on a different device or via a different account, the proffered verification biometrics would not sufficiently correspond due to a different distortion or the absence of the unique distortion. Thus, the overall security may be enhanced.
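The sketch below illustrates one way a per-user distortion could be derived from an enrollment seed and applied before templates are computed; it assumes NumPy and OpenCV are available, and the specific stretch amount and circle placement are illustrative only.

```python
# Sketch of a per-user random image distortion, assuming the image is a NumPy
# array and OpenCV is available. The stretch plus superimposed circle keyed to
# the user's enrollment seed is one illustrative distortion choice.
import numpy as np
import cv2

def assign_distortion(seed):
    rng = np.random.default_rng(seed)
    return {
        "width_scale": float(rng.uniform(0.95, 1.05)),  # widen/narrow the face
        "circle_offset": (int(rng.integers(50, 150)), int(rng.integers(-40, 40))),
    }

def apply_distortion(face_image, params):
    h, w = face_image.shape[:2]
    distorted = cv2.resize(face_image, (int(w * params["width_scale"]), h))
    dy, dx = params["circle_offset"]
    cy, cx = max(0, h // 3 - dy), distorted.shape[1] // 3 + dx
    cv2.circle(distorted, (cx, cy), 8, (255, 255, 255), thickness=-1)
    return distorted

# The same distortion is applied to enrollment and verification images before
# templates are computed, so the templates are unique to this account/device.
params = assign_distortion(seed=1234)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
template_input = apply_distortion(frame, params)
```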
It is noted that each of the above embodiments, modifications, and enhancements may be combined in any combination as necessary to create multiple layers of security for verification. For example, the face matching may be combined with motion detection or path detection or operate independently of these features for verification. Further, when more than one of the above described enhancements or modifications are combined, the verification system may be configured so as not to provide any feedback or indication on which layer failed verification.
For example, when a predetermined touch pattern to initiate verification is combined with the verification movement and facial verification, the system does not indicate whether a touch pattern was incorrect, or the verification movement or verification images failed to correspond to the enrollment information. Instead, the system provides an identical denial of verification no matter what failure occurs. This is the case when any number of the security features described above are combined. In this manner, it is difficult for a fraudster to detect what aspect of the fraudulent credentials must be corrected, further enhancing the security of the system.
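A minimal sketch of combining layers while returning an identical denial regardless of which layer failed is shown below; the layer check callables are hypothetical stubs standing in for the touch, movement, and facial checks.

```python
# Sketch of combining security layers with a uniform denial that does not
# reveal which layer failed. The layer check functions are assumed stubs.
def verify_all_layers(checks):
    """checks is an iterable of zero-argument callables, each returning True
    on success (touch pattern, movement path, facial match, etc.)."""
    results = [bool(check()) for check in checks]   # evaluate every layer
    if all(results):
        return {"status": "verified"}
    # Deliberately do not indicate which layer failed.
    return {"status": "denied"}

# The denial is identical whether the touch pattern or the face match failed.
print(verify_all_layers([lambda: True, lambda: False, lambda: True]))
print(verify_all_layers([lambda: False, lambda: True, lambda: True]))
```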
All the above features may be incorporated together, or only some features may be used, and others omitted. For example, when the device prompts the user to move the device so that the user places his or her head within a first small frame (such as an oval) then to a second large frame (such as in
Likewise, although described herein as financial account verification, the verification using path parameters and image data may be implemented in any environment requiring verification of the user's identity before allowing access, such as auto access, room access, computer access, web site or data access, phone use, computer use, package receipt, event access, ticketing, courtroom access, airport security, retail sales transaction, IoT access, or any other type of situation.
For example, an embodiment will be described where the above verification system is used to securely conduct a retail sales transaction. In this embodiment, a user is enrolled with the verification server or a verification application on the mobile device as described above and has generated enrollment information including enrollment images and/or biometrics, and enrollment movement. In this example, the user initiates or attempts to complete a transaction at a retail establishment with a credit card, smart card, or using a smart phone with NFC capabilities.
The user begins the transaction by swiping a credit card, smart card, or using an application on a smartphone with NFC capabilities to pay for goods or services. The retail establishment would then authorize the card or account with the relevant network of the financial institution (“gateway”). For example, the retail establishment, through a gateway such as one operated by VISA or AMERICAN EXPRESS would determine whether the account is available and has sufficient available funds.
The gateway would then communicate with the verification server to authorize the transaction by verifying the identity of the user. For example, the gateway may send an authorization request to the verification server, and the verification server then sends a notification, such as a push notification, to the user's mobile device to request that the user confirm or verify the transaction.
Upon receipt of the notification from the verification server, such as through a vibration, beep, or other sound on the mobile device, the user may then verify his or her identity with the mobile device. The verification server may also send information concerning the transaction to the user for verification by the user. For example, the verification server may send information that causes the mobile device to display the merchant, merchant location, and the purchase total for the transaction.
Next, as before, the user may hold the mobile device and obtain a plurality of verification images as the user moves the mobile device to different positions relative to the user's head. While moving the mobile device to obtain the verification images, the mobile phone further tracks the path parameters (verification movement) of the mobile device via the gyroscope, magnetometer, and the accelerometer to obtain the verification movement of the device. The mobile device may then send the device information, the verification images, and the verification movement to the verification server. In other embodiments, the mobile device may process the images to obtain biometric data and send the biometric data to the server. In still other embodiments, the mobile device may process the images, obtain the verification information, compare the verification information to enrollment information stored on the mobile device, and send pass/fail results of the comparison to the verification server.
The verification server may then verify the identity of the user and confirm that the user wishes to authorize the transaction on his or her account if the device information, verification images and/or biometrics, and verification movement correspond with the enrollment device information, the enrollment images and/or biometrics, and the enrollment movement. The verification server then transmits an authorization message to the gateway. Once the gateway has received confirmation of the authorization, the gateway then communicates with the retail establishment to allow the retail transaction.
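The following high-level sketch summarizes the retail authorization flow just described; the function names standing in for the gateway request, push notification, and comparison steps are hypothetical placeholders, not an actual API.

```python
# High-level sketch of the retail authorization flow. The callables passed in
# are hypothetical stand-ins for the gateway, push service, device, and
# enrollment comparison described in the text.
def authorize_retail_transaction(gateway_request, send_push, collect_verification,
                                 matches_enrollment):
    # 1. The gateway asks the verification server to confirm the cardholder.
    user_id = gateway_request["user_id"]

    # 2. The verification server pushes transaction details to the user's device.
    send_push(user_id, {
        "merchant": gateway_request["merchant"],
        "location": gateway_request["location"],
        "total": gateway_request["total"],
    })

    # 3. The device returns verification images/biometrics and the movement path.
    verification = collect_verification(user_id)

    # 4. Compare against enrollment and notify the gateway of the result.
    if matches_enrollment(user_id, verification):
        return {"authorized": True}
    return {"authorized": False}
```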
Several advantages may be obtained when a retail transaction is authorized utilizing the above system and method. Because the identity verification of the user and the confirmation of the transaction are completed via the verification system and mobile device, there is no longer a requirement for a user to provide his or her credit card or signature, or to enter a PIN into the retailer's point of sale system. Further, the retail establishment does not need to check a photo identification of the user. The above method and system also have the advantage of providing secure transactions even for mobile and online transactions where no cameras, such as security cameras, are present on the premises.
In the secure retail transaction described above, the user obtains the total amount due on his or her mobile device from the retail establishment via the gateway and verification server. However, in one embodiment, the mobile phone may use the camera as a bar code, QR code, or similar scanner to identify the items and the prices of the items being purchased. The mobile device may then total the amount due and function as the checkout to complete the transaction with the retail establishment.
In another embodiment, a user of the application may want to anonymously pay an individual or a merchant. In this instance, the user would designate an amount to be paid into an application, and the application would create a unique identifying transaction number. This number may then be shown to the second user, so the second user can type the identifying transaction number on an application on a separate device. The unique identifying transaction number may also be sent from the user to the second user via NFC, Bluetooth, a QR code, or other suitable methods. The second user may also type the amount and request payment.
Upon receiving the payment request and unique identifying transaction number, the verification server may send a notification to the first user's mobile device to verify the transaction. The user would then verify his or her identity using the face matching and/or verification system described above. The user may alternatively or additionally verify his or her identity using other biometric data such as a fingerprint or retina scan, path based motion and imaging, or the user may enter a password. Upon verification, the user's device would send a request to the user's payment provider to request and authorize payment to the second user. In this manner, the payment may be made securely while the users in the transaction are anonymous.
According to one embodiment, as an additional measure of security, the GPS information from the mobile device may also be sent to the verification server to verify and allow the retail transaction. For example, the GPS coordinates from the mobile device may be compared with the coordinates of the retail establishment to confirm that the user is actually present in the retail establishment. In this manner, a criminal that has stolen a credit card and attempts to use the card from a distant location (as compared to the retail location) is unable to complete a transaction because the user's phone is not at the location of the retail establishment. IP addresses may also be used to determine location.
As explained above, the level or percentage of correspondence between the enrollment information and the verification information to verify the user may also be adjusted based on the coordinates of the GPS of the mobile device. For example, if the retail establishment and GPS coordinates of the mobile device are near a user's home, then the level of correspondence may be set at a lower threshold, such as at a 99% match rate. Alternatively, if the location is very far from the user's home, and is in a foreign country, for example, then the level of correspondence may be set at a higher threshold, such as at a 99.999% match rate.
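A minimal sketch of selecting the correspondence threshold from the distance between the transaction and the user's home follows; the distance bands echo the example above and are otherwise assumptions.

```python
# Illustrative sketch: choose the required level of correspondence based on how
# far the transaction is from the user's home location. Bands are assumptions.
def threshold_for_location(distance_from_home_km):
    if distance_from_home_km < 50:        # near home
        return 0.99
    if distance_from_home_km < 2000:      # same region or country
        return 0.999
    return 0.99999                        # distant or foreign location

print(threshold_for_location(10))      # 0.99
print(threshold_for_location(9000))    # 0.99999
```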
Most biometric identification systems in recent years use devices such as smartphones to capture biometric data (e.g., a digital photograph or a scan of a fingerprint). This biometric data is matched to preexisting biometric data either on the device (in compliance with the FIDO Alliance standards) or in the cloud (a remote computing device), where the biometric data is sent to servers and compared to preexisting data.
However, with the ability to convert images or other biometric data into biometric templates on the device without sending the raw data files to a server, an additional option is available. Existing raw biometric data, such as facial images or fingerprint scans, or converted biometric templates may be downloaded to the device. The downloaded biometric data may then be converted and/or compared to a biometric template that was created from data captured on that device and previously uploaded to the cloud, or captured and uploaded to the cloud from a different device.
This allows a third party to provide an existing root identity profile for comparison to the biometric information obtained at the device for verification. For example, the root identity profile may comprise an image or other biometric reading from a customer that was captured and verified in a bank branch, from a DMV file, or from another authorized and trusted source. The root identity profile may alternatively or additionally comprise biometric templates created from the verified image or biometric reading. In this manner, the identification match at the device has an increased level of trust based on the verified, third-party root identity profile.
A root identity server 1630 is also connected to the network 116. The root identity server 1630 may be a bank server, a government server, or other “trusted” server that stores the root identity information including biometric information and/or biometric template(s). The root identity server 1630 is connected to biometric sensing devices such as a camera 1632 or fingerprint scanner 1634. A verification server 1620 providing an application such as face matching algorithms and the like is also connected to the network 116.
In step 1705, biometric information, such as an image that contains data about the face of an individual from the root identity profile, is sent from the server 1630 to the smart device 1612 upon a verification request from the smart device 1612. The user of the smart device 1612 then articulates the camera 1614 so that the user's face can be captured, in step 1707. The image downloaded from the server 1630 and the image captured on the device 1612 can now be compared in step 1709. For example, each image is converted into a biometric template by a face matching algorithm for comparison. Upon comparison, if the templates are similar enough based on the thresholds set by, for example, an application publisher, the device-captured image (device identity) and the previously captured image (root identity) can be considered a match in step 1711. Access may then be granted, or the signup/enrollment process may then be completed based on the matching images in step 1713. If there is no match in step 1711, access is denied in step 1715.
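The following sketch maps steps 1705 through 1715 to a single function; the download, capture, templating, and similarity helpers are hypothetical placeholders for the operations described above.

```python
# Sketch of the root identity comparison flow on the device. The helper
# callables passed in are hypothetical stand-ins for the real operations.
def verify_against_root_identity(download_root_image, capture_device_image,
                                 to_template, similarity, threshold=0.90):
    root_image = download_root_image()          # step 1705: from root identity server
    device_image = capture_device_image()       # step 1707: user articulates camera
    root_tpl = to_template(root_image)          # step 1709: convert and compare
    device_tpl = to_template(device_image)
    if similarity(root_tpl, device_tpl) >= threshold:   # step 1711: match decision
        return "access_granted"                 # step 1713
    return "access_denied"                      # step 1715
```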
The benefits of this system include but are not limited to the ability to match previously captured biometric data from a different device with a new device while no biometric data leaves the new device during the matching. This is important in some regulatory environments and industries.
For face matching systems with a server component, the same face matching algorithm can be loaded onto the server as is running in an application on the smart device. This allows only the template to be transferred to the device instead of the biometric reading itself (e.g., the facial images, fingerprint scans, etc.). For example, in step 1705, the biometric information may be the biometric template instead of an image from the root identity profile. The algorithms must be configured so that the templates they create are homogeneous and can be compared. That is, if the algorithms output data in different formats, the resulting biometric templates/data formats are incompatible, and no matching can occur because similar facial features would not be represented by similar biometric template data patterns. The term template is defined herein as biometric data points represented by a string of numbers or other data formed in a consistently formatted pattern so that similarities and differences may be determined via various methods of comparison.
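As an illustration only, the sketch below compares two homogeneous templates represented as fixed-length vectors using cosine similarity; the vector representation and threshold are assumptions, not the required template format.

```python
# Minimal sketch of comparing two homogeneous templates. A template is assumed
# here to be a fixed-length vector of floats produced by the same algorithm on
# the server and on the device; cosine similarity is one common comparison.
import numpy as np

def template_similarity(template_a, template_b):
    a, b = np.asarray(template_a, float), np.asarray(template_b, float)
    if a.shape != b.shape:
        raise ValueError("templates are not homogeneous and cannot be compared")
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

root_template = [0.12, -0.44, 0.91, 0.03]
device_template = [0.10, -0.40, 0.88, 0.05]
print(template_similarity(root_template, device_template) > 0.9)  # likely a match
```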
In an embodiment where only the template is transferred to the device, the root identity established in step 1703 may include a biometric template created from a biometric algorithm, such as a face matching algorithm. For example, an image that includes the face of an individual and that is captured with a trusted device (camera 1632 at a bank branch, DMV, etc.) is sent to the server 1630, where it is converted to a biometric template with a face matching algorithm. As mentioned above, the biometric template from the root identity profile is sent to the smart device 1612 upon a verification request in step 1705. This can be referred to as the root identity biometric template. The method proceeds as previously explained with reference to
In another example, two or more biometric modalities could be used together such as fingerprints, face, and voice. Another example of the method of
The root identity biometric data and the device identity biometric data are converted into biometric templates (root identity biometric templates and device identity biometric templates) by fingerprint recognition, facial recognition, and/or voice recognition algorithms. In some instances, the root identity biometric data may be converted into the root identity biometric templates at the server, and the templates may be sent to the device. The root identity biometric templates and the device identity biometric templates are compared in step 1709, and if the templates are similar enough based on the thresholds set by, for example, an application publisher, the root identity templates, and the device identity templates can be considered a match. Based on the match, access may be granted, or a signup/enrollment process can be completed in step 1713.
In another embodiment, in step 1709, the images and/or the biometric template(s) from the user's device may be uploaded to the server where they can be stored and/or compared with the root identity biometric images and/or template(s). Then, if the user wishes to replace the original device or add a second user device to the account, both the root identity image(s) and/or template(s) and the device identity image(s) and/or template(s) captured on the first device can be sent to the second device during setup or enrollment for comparison and matching. This daisy-chains the root identity from the server to the first device identity, and then again to the second device identity. If no root identity image and/or template has been captured previously and stored on the server, the image and/or template that is uploaded from the first device can still provide added security. If the user chooses to add a second device to an account, the image(s) and/or template(s) from the first device can be downloaded to the second device, and the comparison described above may again occur. This allows the user to add a second device with increased security because the user identities on both devices were deemed to be a match.
In addition, when the image(s) and/or template(s) are uploaded to the server, the on-server comparisons between the image(s) and/or template(s) can be performed independently from a comparison performed directly on the device. This offers a significant increase in security because even if a hacker were somehow able to manipulate the user's device to send a “match” result back to the server, the server would also compare the same image(s) and/or biometric template(s). Hence, the verification may occur at two or more devices or servers to make the system more secure. If fewer than all, or fewer than a predetermined number, of the devices/servers verify, then a match is not declared. Thus, the server would also need to determine that the image(s) and/or biometric template(s) were a match using the same thresholds. Therefore, the hacker would need to compromise not only the user's device, but also the one or more servers, to defeat the security.
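A minimal sketch of requiring independent agreement from the device and one or more servers follows; the verifier callables and the quorum rule are illustrative assumptions.

```python
# Sketch of requiring independent matches from the device and one or more
# servers before declaring verification. Verifier callables are assumed stubs.
def multi_party_verify(verifiers, required=None):
    """verifiers: mapping of name -> callable returning True/False.
    required: how many independent parties must agree; defaults to all."""
    results = {name: bool(check()) for name, check in verifiers.items()}
    needed = len(results) if required is None else required
    verified = sum(results.values()) >= needed
    return verified, results

ok, detail = multi_party_verify({
    "device": lambda: True,     # on-device comparison (could be manipulated)
    "server": lambda: False,    # independent on-server comparison
})
print(ok, detail)  # False: a compromised device cannot fake the server result
```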
In addition to the biometric matching, liveness checks may be included on the device portion of the matching as well as the server portion, as have been described in detail above. For example, additional information such as device movement, skin texture, three-dimensional depth information can be used to help determine that the biometric data being presented to the camera is from a live human being and not a photo, video, or mask spoof.
To verify biometric data, an individual typically is required to enter a bank branch, a government office such as a DMV or police station, or other “trusted” location to have his/her biometric data collected. For example, a bank may require a photograph, a fingerprint, or a voice recording to open certain types of accounts. The obtained biometric data is then linked to the person and the account. This in-person collection of biometric data has typically been required because there was no other way to trust that an individual was indeed who they claimed to be. Through the in-person collection, the identification is verified by, for example, the person providing documents with their name and photograph issued by a governing body.
However, according to an exemplary embodiment disclosed herein, an individual may provide his/her own biometric data using any smart device with a biometric sensor or camera and be verified without in-person verification. In fact, according to the disclosed embodiments, account-providing or financial institutions may trust with more certainty than ever before that the biometric data provided is from the correct individual and not an imposter, hacker, or bad actor.
Next, the user makes a payment or a deposit to the institution in step 1805. For example, if a lending institution has provided a mortgage to the user, then the user would enter his/her payment account information into the application so that the institution could collect payment. When the payment information and authorization are transmitted to the lending institution, some or all of the biometric enrollment data from the user is collected and transferred to the lending institution's server along with it. Because the payment is made by the user for the user's own debt, which causes money to flow away from the user and thus would not be made by a potential hacker or person committing fraud, the resulting biometric data collected as part of the transaction is considered trusted.
Later, when the user again opens the application to conduct another transaction, the user is again prompted to present his/her biometric information to the camera or sensor, and new biometric templates can be created in step 1807. The new biometric templates are compared to the previous “enrollment data” on the device and/or the new templates can be sent to the server for comparison in step 1809. In some embodiments, the device may compare the templates by downloading the enrollment data templates from the server to the device for matching.
When it is determined that the new biometric information and/or templates do not match the enrollment data, then the transaction may be denied as shown in step 1811 and the root identity will not have the unmatched biometric data added to it. However, when the new biometric information sufficiently matches the enrollment data, the transaction may be authorized as shown in step 1813. Furthermore, when there is a match, the trust level of the biometric data appended to the user's profile is increased.
Because the user is sending funds into the account, for example to pay a debt or to make a deposit, he/she has an incentive to be able to later access the account that contains those funds or that has had debt reduced. Thus, over time as several deposits and/or payments are made with matching biometric templates, the trust in the identity of the user performing the transactions increases as shown in the loop of steps 1807, 1809, and 1813.
To limit liability, withdrawals can be limited to no more than the total amount that has been deposited or paid by the user. For example, if a user pays a $3,000 mortgage payment each month for three months using his/her smart device and using his/her face for identification each time, the lending institution may be willing to allow that person to transfer up to $9,000 from a different account that the bank has for the user, such as a checking account.
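The sketch below illustrates limiting withdrawals to the total verified amount paid in; the ledger class and the amounts are illustrative only.

```python
# Sketch of limiting transfers to the total amount previously paid in with
# biometrically verified transactions. Records and amounts are illustrative.
class TrustLedger:
    def __init__(self):
        self.verified_paid_in = 0.0

    def record_verified_payment(self, amount):
        self.verified_paid_in += amount

    def withdrawal_allowed(self, requested_amount):
        return requested_amount <= self.verified_paid_in

ledger = TrustLedger()
for _ in range(3):
    ledger.record_verified_payment(3000.0)   # three verified mortgage payments
print(ledger.withdrawal_allowed(9000.0))     # True
print(ledger.withdrawal_allowed(12000.0))    # False
```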
As banks and other lending institutions report on outstanding balances, credit limits, and payment timeliness to the credit bureaus, it is envisaged that the bank could also provide the biometric template (possibly in an encrypted format) to the credit bureau to store as part of the identifying information in the user's credit file. Then, if the user desires to apply for credit from a different institution, that institution can require that the user access its version of the application with the same biometric data collection system as was used to create the template. The biometric templates could be sent to the credit bureaus' servers and compared with the templates on file for that individual. With this process, the user can positively identify themselves and grant access to the financial institution to view their credit information without providing or transmitting their social security number, date of birth, or other sensitive information.
If a user does not have a debt to pay to the account issuer or the issuer is not a financial institution, it is possible to simply offer a temporary escrow service to provide the assurance that the biometric data provided is true and correct for the user being claimed. For example, a user can provide a credit card number with his/her name and address, the card could be billed $100, and the user would provide their biometric data to the app in their smart device. The user would then correctly answer a series of knowledge based verification questions based on their credit report, insurance information, medical information or other potential confidential information, and provide their biometric data again to the app to retrieve the funds. The result is a biometric identity that can be trusted in future transactions up to the amount that was previously placed into escrow and successfully retrieved.
There are numerous security and privacy benefits to a decentralized, anonymous, biometric identity network as compared to biometric verification conducted on a centralized database or solely on a user device. As previously explained, biometric identity information may comprise images having biometric data such as digital photographs of a face or a fingerprint, and/or biometric templates which are strings of numbers representing data that has been captured by a sensor and converted to a string by a biometric recognition algorithm.
Decentralized ledgers such as blockchains, tangles, and hashgraphs, referred to hereafter as blockchains, can be used to create public or private records that provide an immutable transaction history. The blocks may store various data, and in this embodiment, the blocks may store biometric data in the form of an image or a biometric template created from a biometric sensor (camera, fingerprint scanner, etc.) and/or from an algorithm analyzing an output from the biometric sensor (photograph, fingerprint scan, etc.).
In an exemplary biometric verification method, a smart device 1912 would run an application allowing a sensor 1916 or camera 1914 to capture biometric data and optionally convert the biometric data to one or more biometric templates. That biometric data and/or template(s) would be added to an encrypted block along with additional information such as a device ID, a unique user ID, user identity information, the algorithm/sensor version/type info, date and time stamp, GPS information, and/or other data.
The block may be added to the blockchain 1940 where it is stored. If the user attempts to open the application again, or provides the public key or a unique user identifier that corresponds to the public key for the block into another application, then the user is again presented with the biometric data capture interface through which the user again presents his/her biometric data to the sensor 1916 or camera 1914. The captured biometric data may again optionally be converted to a biometric template on the device 1912. Next, the user's previous block is requested from the blockchain 1940 and is downloaded to the smart device 1912, where a private key may be kept in the application to decrypt the block. The data and/or biometric template(s) from the block can now be compared to the recently captured biometric data and/or biometric template(s). If a match is found, then the user is verified and can be granted access to the application, make a transaction, and so on. The successful decryption of the block and the matching of the templates can then be recorded, and any combination of the data, the transaction, the original template, and the most recently successfully matched template may be stored in the new block.
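As a sketch under stated assumptions, the snippet below uses the cryptography package's Fernet symmetric encryption to stand in for the block encryption and a simple vector similarity for the template match; the block fields and threshold are illustrative, not the required block format.

```python
# Sketch of creating an encrypted block of biometric data and later matching
# new biometric data against it. Fernet stands in for the block encryption;
# the template is assumed to be a fixed-length vector.
import json, time
import numpy as np
from cryptography.fernet import Fernet

def create_block(template, device_id, user_id, key):
    payload = json.dumps({
        "template": list(map(float, template)),
        "device_id": device_id,
        "user_id": user_id,
        "algorithm": "face-v1",            # algorithm/sensor version info
        "timestamp": time.time(),
    }).encode()
    return Fernet(key).encrypt(payload)    # encrypted block added to the chain

def verify_against_block(encrypted_block, key, new_template, threshold=0.9):
    record = json.loads(Fernet(key).decrypt(encrypted_block))
    stored = np.asarray(record["template"])
    new = np.asarray(new_template, float)
    score = float(np.dot(stored, new) / (np.linalg.norm(stored) * np.linalg.norm(new)))
    return score >= threshold

key = Fernet.generate_key()                # private key kept in the application
block = create_block([0.2, 0.7, 0.1], "device-1912", "user-abc", key)
print(verify_against_block(block, key, [0.21, 0.68, 0.12]))  # True
```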
In addition to or as an alternative to the comparison and matching being done on the device 1912, the comparison and matching may be completed on the blockchain ledger servers 1940. In this instance, biometric data obtained at the user device 1912 and/or biometric template(s) generated at the user device 1912 from the biometric data are encrypted and sent to the blockchain ledger servers 1940. Next, the public key and the private decryption key may be sent to the blockchain ledger servers 1940 to decrypt one or more previous blocks of the user's biometric information and/or template(s) as well as to decrypt the most recently sent biometric data and/or template(s). The blockchain ledger servers 1940 then run the matching algorithms to determine if the biometric information and/or template(s) stored in the block and the most recently collected biometric information and/or template(s) are deemed a match by the thresholds previously set in the matching algorithm. By providing template matching on all the blockchain ledger servers 1940 (which could be hundreds or thousands of servers), an account provider can be sure that the device 1912 running the application has not been compromised if the matching results are the same as on the blockchain ledger servers 1940. The device 1912 and all of the blockchain ledger servers 1940 would have to be compromised at the same time for a hacker to change all of them, which of course would be highly unlikely if not impossible.
In yet another embodiment, a dedicated “matching server” 1950 could be employed that would be sent a copy of both the recently collected biometric information and/or template(s) from the device and the biometric information and/or template(s) in the block. The device 1912 may provide the decryption key directly to the matching server 1950, or the blockchain 1940 could be instructed to send the encrypted biometric template(s) to the matching server with a “smart contract,” which is a set of computer instructions coded into the block. This is a feature of blockchains with decentralized processing abilities, such as Ethereum.
It is also envisaged that when a new device requests a block using a user's unique ID, for example an email address, phone number, or a public key, that the device is only authorized to download blocks in the chain that contain biometric templates of the user that are associated with that unique ID because the device contains the private keys. So, the user's most recent templates could be compared with all the templates that have been captured and are stored on the blockchain, allowing for multiple matches. This may provide fewer false rejections of the correct users that can result from changes in appearance due to lighting, aging, makeup, hair, beard, glasses, etc.
In one configuration of the system and method disclosed herein, there is a private key that decrypts the block contents, but the biometric data inside the block is what is used in the comparison to determine whether new biometric data matches stored biometric data. Thus, the private key is required to gain access to the biometric data block. The private key may be created by the user or by the system, or the private key could correspond to a combination of unique identifiers that is easier to remember, such as a phone number, a social security number, an email address, and a date of birth, and is thus also unique to the user. In this configuration, it is possible and contemplated that there are two blockchains: one storing the personal data and one providing anonymous storage of biometric templates only. The personal data blocks in the first blockchain would be decrypted by a private key, or by a corresponding combination of personal data that only the user knows, shared only with specific vendors that the user wants to be able to verify that identity. The block number of the other block(s) containing the user's biometric data is appended to that record, so that the application can then unlock the biometric block and match or update the user's newly uploaded biometric data against the data in that block.
In addition to the biometric matching, the application collecting the biometric data may perform liveness tests on the biometric data collected, such as those described above. If the user is proven to exhibit traits that typically only exist in living humans at the exact moment that the identity is verified, then the biometric data can be trusted to be from a real human being and not from a non-living object such as a photo or video spoof.
Computing device 2000 includes a processor 2002, memory 2004, a storage device 2006, a high-speed interface or controller 2008 connecting to memory 2004 and high-speed expansion ports 2010, and a low-speed interface or controller 2012 connecting to low-speed bus 2014 and storage device 2006. Each of the components 2002, 2004, 2006, 2008, 2010, and 2012 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2002 can process instructions for execution within the computing device 2000, including instructions stored in the memory 2004 or on the storage device 2006 to display graphical information for a GUI on an external input/output device, such as display 2016 coupled to high-speed controller 2008. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2000 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 2004 stores information within the computing device 2000. In one implementation, the memory 2004 is a volatile memory unit or units. In another implementation, the memory 2004 is a non-volatile memory unit or units. The memory 2004 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 2006 is capable of providing mass storage for the computing device 2000. In one implementation, the storage device 2006 may be or contain a computer-readable medium, such as a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002.
The high-speed controller 2008 manages bandwidth-intensive operations for the computing device 2000, while the low-speed controller 2012 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 2008 is coupled to memory 2004, display 2016 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2010, which may accept various expansion cards (not shown). In the implementation, low-speed controller 2012 is coupled to storage device 2006 and low-speed bus 2014. The low-speed bus 2014, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 2000 may be implemented in a number of different forms, as shown in the Figure. For example, it may be implemented as a standard server 2020, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2024. In addition, it may be implemented in a personal computer such as a laptop computer 2022. Alternatively, components from computing device 2000 may be combined with other components in a mobile device (not shown), such as device 2050. Each of such devices may contain one or more of computing device 2000, 2050, and an entire system may be made up of multiple computing devices 2000, 2050 communicating with each other.
Computing device 2050 includes a processor 2052, memory 2064, an input/output device such as a display 2054, a communication interface 2066, and a transceiver 2068, among other components. The device 2050 may also be provided with a storage device, such as a Microdrive or other device, to provide additional storage. Each of the components 2050, 2052, 2064, 2054, 2066, and 2068, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 2052 can execute instructions within the computing device 2050, including instructions stored in the memory 2064. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 2050, such as control of user interfaces, applications run by device 2050, and wireless communication by device 2050.
Processor 2052 may communicate with a user through control interface 2058 and display interface 2056 coupled to a display 2054. The display 2054 may be, for example, a TFT LCD (thin-film-transistor liquid crystal display) or an OLED (organic light emitting diode) display, or other appropriate display technology. The display interface 2056 may comprise appropriate circuitry for driving the display 2054 to present graphical and other information to a user. The control interface 2058 may receive commands from a user and convert them for submission to the processor 2052. In addition, an external interface 2062 may be provided in communication with processor 2052, so as to enable near area communication of device 2050 with other devices. External interface 2062 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
The memory 2064 stores information within the computing device 2050. The memory 2064 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 2074 may also be provided and connected to device 2050 through expansion interface 2072, which may include, for example, a SIMM (single in line memory module) card interface. Such expansion memory 2074 may provide extra storage space for device 2050 or may also store applications or other information for device 2050. Specifically, expansion memory 2074 may include instructions to carry out or supplement the processes described above and may include secure information also. Thus, for example, expansion memory 2074 may be provided as a security module for device 2050, and may be programmed with instructions that permit secure use of device 2050. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2064, expansion memory 2074, or memory on processor 2052, that may be received, for example, over transceiver 2068 or external interface 2062.
Device 2050 may communicate wirelessly through communication interface 2066, which may include digital signal processing circuitry where necessary. Communication interface 2066 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA 2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 2068. In addition, short-range communication may occur, such as using Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (global positioning system) receiver module 2070 may provide additional navigation- and location-related wireless data to device 2050, which may be used as appropriate by applications running on device 2050.
Device 2050 may also communicate audibly using audio codec 2060, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2060 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 2050. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on device 2050.
The computing device 2050 may be implemented in a number of different forms, as shown in the Figure. For example, it may be implemented as a cellular telephone 2080. It may also be implemented as part of a smart phone 2082, personal digital assistant, a computer tablet, or other similar mobile device.
Thus, various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system (e.g., computing device 2000 and/or 2050) that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Biometric data templates are not suitable to be used as public keys and cannot be reliably hashed into public keys because each session contains biometric data that is slightly different from previous sessions. Biometric matching is done by creating a probability of a match and setting an acceptable threshold. In one embodiment, the settings are such that if the comparison reveals collected biometric data that is a 100% match, it may be considered not a match but instead a potential fraud attempt, because biometric data comparisons are typically never a 100% match unless a replay (of the same data) attack is being perpetrated. Because biometrics rely on probability to confirm a matching identity, it is important not to allow bad actors to specifically target a known identity armed with copies of that individual's biometric data, such as photos, videos, or masks. This may be achieved by limiting access to the blockchain using user question data.

It is also contemplated that it is desirable to provide a blockchain wherein the identity of the individual whose biometric data is contained in each encrypted block is not readily known to the other users of the blockchain and therefore cannot be easily singled out and targeted. Access is typically accomplished in blockchains with a public key; however, if a bad actor knows the public key for a specific individual, they can target a spoofing attack with a reproduction of that individual's biometric data. By using a questions layer (requiring users to answer questions before granting access to the blockchain) that does not require the users to store, transmit, or even know their public key, the likelihood that a bad actor could match a specific block to a specific user and then spoof the system is reduced significantly. This method would allow a user to easily input data from memory that is then used to recreate their public key and identify the blocks in the blockchain system that contain their encrypted biometric data for verification, without using personally identifiable information (PII) to do so.

In one embodiment, this is accomplished through a series of questions that the person answers to generate user question data. In one embodiment, these questions are such that the person would always know the answers, such as city of birth, parent names, or high school name. In one embodiment, the questions are such that the person creates the answers, such as favorites, things that change, or opinion-based questions. Examples of this type of user question data include favorite color, favorite food, or favorite holiday. In one embodiment, the user question data is created based on system requirements but does not relate to the user. Examples of this type of user data may be data containing only numbers, data containing special symbols, data containing only letters, and/or data containing a required number of each type of character. Some of this data may be easily recalled and thus not forgotten by the user. Other data is less likely to be guessed by others but is harder to remember. It is contemplated that any other type of information and questions may be used for the user questions and associated user question data.
For the questions that are easily recalled, or which are memorized, this user question data is always available to the user. In one embodiment, as part of an identification process, the user is asked the questions or asked to provide the answers (user question data) to the questions. The user question data is concatenated and then hashed to create a public key and/or block identifier. This may then be used for one or more of the following: identifying the user, identifying the block associated with the user in the blockchain, or being combined with personally identifiable information to identify the user or the blocks that contain the user's encrypted information. For example, this concatenated and hashed user question data may identify to the verification system which block to match the user's biometric verification session against. This user question data may be referred to as a public key.
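A minimal sketch of deriving such a key from the answers follows; the normalization, concatenation order, and use of SHA-256 are assumptions made for illustration.

```python
# Sketch of deriving a block identifier / public key from user question data.
# Memorized answers recreate the same key without storing or transmitting PII.
import hashlib

def public_key_from_answers(answers):
    """answers: list of answer strings in a fixed question order."""
    normalized = "|".join(a.strip().lower() for a in answers)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

key = public_key_from_answers(["blue", "tacos", "1999", "omaha"])
print(key)  # the same answers always regenerate the same block identifier
```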
Examples of the type of user questions include, but are not limited to, best year of your life, number of siblings, shoe size, height, favorite color, eye color, last four digits of your first phone number, middle name, parent's name, favorite grade in school, favorite month, favorite day of the year, best physical trait, school name, favorite food, dietary choices, political affiliation, and religious affiliation, or any other similar type of question or data. In one embodiment, the data is well known to (and not forgettable by) the user but is not of the type that is of public record or that can be obtained by typical identity theft methods.
In one example method of operation, this verification system may be used when obtaining a money loan, at an automobile dealership, or in any other situation where it is necessary or desired to positively identify the person and allow them to grant access to their credit bureau information to a third party (or some other function where identity is required and important).
The user device 2104 provides the user question data which, after hashing or other processing, is provided 2150 by electronic transmission to the remote verification system 2108 with associated database 2112 to identify both the user and their block. The verification system 2108 can run the same hash operation on the previously captured data stored in database 2112 to determine if the received data matches a user, account, block in a blockchain, or another identifier. In accordance with blockchain operation, many verification systems 2108 may be provided at different locations, or the blockchain data for the user may be stored in many different databases 2112 at different locations. The verification system 2108 may provide communication back to the user. Thus, the submitted user answer data matching the stored user answer data may identify the blockchain which stores the user's verification data, grant access to the blockchain, or both.
Once the block or blocks that are associated with that public key are identified, they can be decrypted with the hash to obtain their contents. In this example, the hashed user question data provides access to the user's blocks and can be used to reveal the biometric data stored in the block, which is then compared to the newly submitted user's verification attempt (facial data and movement data) to determine if the user's identity matches the identity stored in the blockchain (distributed at different locations, thus preventing unauthorized access and unauthorized changes). If a match occurs, then the credit agency, loan department, or other entity 2116 will receive notice of the verification via communication 2160. This in turn may allow the loan to occur or a credit report to be sent to the business 2124 via communication 2170. For example, if the loan or credit is approved by the 3rd party 2116, then that will be communicated to the car dealership 2124, which in turn will allow the car to be driven away with only the down payment and/or a payment agreement. The match may also be a gateway requirement before the dealership can pull a user's credit or access a user's credit report. It is contemplated that in some embodiments the lender 2116 and business 2124 may be combined.
Using this method, the user may provide user question data that would not be easily known by a third party since it is personal to the user and not asked for by third parties. This form of data and the associated method overcome the drawbacks of the prior art by providing and associating complex data (user question data) that the user has memorized, and thus always has with them, but that others do not know, and which uniquely identifies the user or their block or account in the blockchain. The answers to the user questions are complex, difficult to guess, and longer and more difficult for a third party to obtain than a nine-digit social security number or other personal information (PII), but are generally easy for the user to remember.
If a third party knows the answers to all of the user's questions, the system would only allow them to attempt to match presented biometric data with the data stored in the blocks for that user. Because the third party will not easily match the biometric data with a photo, video, or mask if the biometric verification has strong depth and liveness detection systems, the verification attempt would not be verified and thus the third party would not be able to impersonate the user. In addition, an email address or mobile phone number could be entered into the encrypted block when the user is enrolling, and an email or text message could be sent to the registered user's email address or phone number every time that block is unlocked and the biometric data matched from a verification session, or for every attempt. This would alert a user if a bad actor had gained the answers to their public key generating questions and was attempting to impersonate them through various means, such as by using a look-alike of the user for a biometric spoof. If the bad actor were to be successful in spoofing the system, the real registered user would get an email saying that a successful verification session had been performed, and if it were not them, they could initiate steps to stop the bad actor. Notification could also be provided for unsuccessful attempts to access the block. It is contemplated that notification may be sent by email, phone call, or text, or any combination. In one embodiment, the system may alternatively or in addition send a verification code to the user, such as by mail, phone (voice), or text, that must be entered with the user question data to provide an additional level of security. Sending and entry of verification codes are known and thus not described in detail.
It is contemplated that the user question data can arrive at the dealership, credit agency, bank, or other entity in any manner. For example, the user question data may be entered by the user on the business's device, uploaded by the user on their own device, entered at a third-party kiosk, provided by telephone or text message, or provided by any other means. This innovation provides a method of creating a public key that people can easily remember because it is well suited to how human memory works. While the user question data may not all be secret, it can be easily remembered, it is not publicly available, and it has not been part of the numerous data breaches, as the questions are not typical data such as social security number, birth date, and middle name. Any number of questions may be provided to create the public key, such as, for example, two questions or ten questions; the more questions, the less likely someone will know or guess the answers needed to access the block data for a verification attempt. While it is possible to use a user's name, social security number, email, or phone, this data would also identify the user and easily lead back to the blocks in the blockchain, but it would expose the user's identity and can become known due to use of that information in other situations. With the disclosed system, utilizing user question data, the identity of the user and the block that stores their corresponding biometric data are anonymous to everyone, including the operators of the blockchain nodes. It is still possible for an individual to provide all of the answers to their user questions to a dishonest 3rd party or have that information phished from them unknowingly, but this is unlikely. For this to occur would still require the bad actor to spoof the biometric verification system to gain access to any credit information or other information, which, due to the extreme accuracy of the verification routines disclosed herein, is extremely unlikely.
At a step 2020, the system processes the user question data to generate hashed user question data. This could also occur at a remote location. The hashed user question data may serve as a public key. Then, at a step 2024, the system uploads the hashed user question data to a remote server (encryption optional). Then, at a step 2228, the system, such as a remote computer configured for user verification, compares the hashed user question data from the user to hashed user question data stored in one or more databases. The stored data was from earlier input from the user when the identity was known.
At a step 2232, responsive to a match between the stored user question data and the submitted user question data (hashed or unhashed), the system identifies the user's blockchain. Thereafter, the system requests a verification attempt from the user to collect facial data and movement data during verification. This occurs at a step 2236. In this embodiment, this data is collected after the user question data matches, but in other embodiments, the user facial and movement data may be collected at the time of collection of the user question data. At a step 2240, the system uploads the verification data from the user to a remote server (encryption optional), and at a step 2244 the system uses the hashed user question data as a public key to unlock the verification data (facial, movement, or a combination thereof) that is stored in the blockchain. This may occur at multiple locations, as is the nature of a distributed blockchain.
At a step 2248, the verification system compares the stored user verification data to the user submitted verification data to determine if there is a match within a predetermined threshold. As discussed above, 100% matches are unlikely or impossible, so the similarities between data should be within some range or threshold, which can be adjusted based on the use and the need to verify identity. At a step 2252, responsive to a match, access is allowed, or the requested information is provided, such as access to a credit score, a credit report, or authorization for another type of transaction or loan. This system can be used in any scenario where verifying a person's identity is important. For example, buying an expensive watch or jewelry would benefit from identity verification, as would access control to a secure location or data.
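A minimal sketch of such a threshold decision follows, including the treatment of an essentially exact match as a suspected replay, as discussed earlier. The score scale and threshold values are assumptions chosen only for illustration; real systems tune thresholds to the required false accept and false reject rates.

    def evaluate_match(similarity, match_threshold=0.90, replay_ceiling=0.9999):
        """Decide the outcome of a biometric comparison.

        similarity       -- score in [0, 1] produced by the face/movement comparison
        match_threshold  -- minimum similarity accepted as the same person
        replay_ceiling   -- scores at or above this are treated as a suspected
                            replay, since live sessions are never identical
        """
        if similarity >= replay_ceiling:
            return "rejected: possible replay of identical data"
        if similarity >= match_threshold:
            return "verified"
        return "not verified"

    print(evaluate_match(0.95))  # verified
    print(evaluate_match(1.00))  # rejected: possible replay of identical data
    print(evaluate_match(0.60))  # not verified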
Identity Verification with Issued Photo Identification Card
In some embodiments, the identity of a person authenticating using the above described systems and methods may be verified using a photo identification card issued to the person. Identification using only the card analysis described herein is also contemplated.
A photo identification card 2400 typically has a front side 2402 and a rear side 2404 which are each shown in
Other information is also printed on the card and may be formatted as shown or may be varied as needed and/or according to design preferences. For example, a name of a state 2408 issuing the card may be printed on the top of the front 2402 of the card 2400. The person's name 2410 and other identifying information 2412 may also be printed, such as a home address, height, weight, sex, date of birth, etc. The card 2400 may comprise one or more security features such as a hologram 2414. On the back 2404 of the card 2400, a barcode 2416 may be provided which is encoded with the holder's personal information and/or other information related to the identification card 2400.
In step 2502, face matching is conducted using the device, including liveness verification. As explained in detail above, the person authenticating with the system captures images of their face with the camera of the device as prompted on the display of the device. As discussed above, the system may check for liveness and/or three-dimensionality of the person by prompting the person to change the distance between themselves and the camera by moving the device/camera or themselves with respect to the camera. This allows the system to verify whether the person authenticating is a live person and not a spoof. This also allows the system to conduct face matching and to collect biometric information for the person being imaged. It is contemplated and disclosed that one or more of the face matching and/or liveness detection features described herein may be used, alone or in any combination, with the photo identification card method described below.
In step 2504, the person verified or being verified is prompted to capture an image or video of his/her photo identification card, and the system scans the image of the card for authenticity and for the information contained on the card. The image could also be uploaded to a remote web site configured with software to evaluate and verify the identification. During the image capture, for example, the system may prompt the user to move the card relative to the camera or the camera relative to the card. In other embodiments, the card/camera distance is not changed. If moving, the card (or camera) may be moved in a straight line so that the distance between the camera and the card becomes closer or farther.
As shown in
By requiring movement of the card relative to the camera, the system may perform several checks to determine whether the photo identification card 2400 is authentic. For example, as the card 2400 is moved relative to the camera, the hologram 2414 on the card 2400 may appear, disappear, and/or change. The system may include a check for the hologram on the photo identification card 2400 in order to verify that the card 2400 is genuine. In other embodiments, the system may perform banding, edge detection, and other screen detection processes as described above. In one embodiment, the system may check for the user's fingers at the edges of the card to help confirm that the card is genuine and not being displayed on a screen of another device. Further, by imaging the card at a close proximity, the device can obtain a high-quality image of the card 2400, including all of the information on the card. It is also contemplated that the card may be rotated while being held so that the camera can see not only the face of the card and the images and text on the face, but also the edges of the card. This further shows three-dimensionality and will further capture any security features of the card, such as holographic features. This would detect photocopies of the card on a piece of paper.
For example, in some embodiments the device reads information from the card for use during verification or for other use. The system may scan the photo 2406 on the photo identification card 2400 to obtain biometric information to compare to the biometric information obtained during step 2502. Further, the device may scan the card 2400 to retrieve the person's name and other identifying information via text recognition. The information may also be obtained by imaging the back 2404 of the card 2400 for the barcode 2416 or other type of code. This may be particularly useful when a user sets up an account for the first time with an institution, so that the user does not have to manually input the user information.
In step 2506, the biometric information obtained from the user during step 2502 and from the photo identification card during step 2504 are compared to determine whether they are a match. The data obtained from processing the images of the card may be compared to a database of known card details to verify that the format of the card is accurate and that other details regarding the card match known formats, such as, but not limited to, picture location, card thickness, text font details, text location, security features, bar code format and location, card color scheme, and card aspect ratio. In this embodiment, facial biometric information obtained from imaging the user's face and from imaging the photo 2406 of the photo identification card 2400 are compared to determine whether the images are of the same person. This comparison may occur based on the captured image of the person that occurs as part of the verification process or from earlier captured photos stored in a database. If the biometric information from the different images is similar within a given threshold, then the user is verified.
Several variations to verifying using a photo identification card are also contemplated. For example, steps 2502 and 2504 may be conducted in reverse order. That is, the user may first image the photo identification card prior to imaging themselves. In another example, the user may image themselves and the photo identification card simultaneously. This provides the advantage of having an image of the person holding the actual card, thus showing that the person is in possession of the actual card.
Also disclosed is a digital identification configured to further identify or provide assurances of the identity of a user. In many instances, it is desirable to have assurances that a person is who they say they are. Such instances arise in many situations. For example, prior to or as part of a transaction between two parties that are not conducting the transaction in person, it would be desirable for one or both parties to verify the identity of the other party. In particular, if one party has to pay before receiving the goods or before the goods are shipped, then they may want assurances regarding the person selling the goods. Internet and long-distance transactions are increasingly common. In addition, identity verification prior to a loan is another instance when it would be desirable to verify the identity of the person receiving the money. Likewise, hiring someone to work remotely is an instance when verifying their identity is preferred. Further, renting a house, car, or other item to a person without meeting them or verifying their identity is unwise. Many other instances exist where a third party may want to verify a person's identity, including, but not limited to, dating, business relationships, care giving, transaction counterparties, or voters, such as a voter ID or an ID used to verify eligibility for government benefits. Therefore, there are numerous instances when it is preferred or needed to have some assurances of, or to verify, the identity of a person.
The system and method disclosed allow a user of the system to become a verified user. A verified user is a person who performs the steps disclosed herein, receives a digital ID, and whose digital ID's authenticity is conferred by a verification server. The verification server comprises one or more computer systems with associated software configured to process data received from the user during the creation of the digital ID and during the verification of the digital ID by a third party. The third party may be any individual or entity who is using the digital ID to verify the identity of the user. The digital ID may be used to verify the identity of the user, making the user a verified user.
The verification server may be one or more servers or computers executing machine executable code. For example, one server or computer may act as a web server while another may perform verification processing. Another server may perform data storage and database functions.
At step 2716, the images or facemap data is processed to verify liveness of the user. Liveness verification may occur in any manner including any manner disclosed herein. If the liveness verification determines that the user is not a live user, such as if the photos represented a two dimensional image or a non-human three dimensional representation of a person (mannequin, bust, 3-D face model), then the operation ends, and the digital identification cannot be created.
Alternatively, if at step 2716 the photos or facemaps are determined to be from a live person, the operation advances to a step 2720. At step 2720, the user is instructed to take a picture of their photo ID (an ID which has a picture of the user), such as a driver license, military ID, state or country issued ID, or their passport. In one embodiment, the user has the option, either manually or automatically, to black out and not show one or more items of information from the photo ID. For example, the user's driver license number, passport number, birthdate, and/or address, or any other sensitive information, may not be shown on the digital ID or not uploaded to the verification server. One or both sides of the ID are photographed by the user using a camera associated with their device. At a step 2724, the user uploads the captured image to the verification server. In one embodiment, the user manually uploads the image, while in other embodiments the application software automatically uploads the image of the user's ID or passport.
It is also contemplated that alternative or additional documents may be captured with an image and uploaded to the verification server. For example, to verify that the user has the goods or the right to rent/sell the property, or to conduct the transaction, additional images may be captured and uploaded. This may include, but is not limited to, images of the item being sold, a vehicle title, property tax records, work history, the user in or at a property, the user with the goods or showing the VIN, a voter registration card, or any other image capture.
Next, at a step 2728, the verification server and software (machine executable code) running on the verification server compare one or more of the first image and the second image (captured at different distances) of the user to the photo of the user in the user ID to verify that the user ID photo matches the photos of the user captured at step 2712. This may occur using face matching or any other image comparison techniques for determining or matching the identity of a user.
At a decision step 2732, a determination is made whether one or more of the captured images of the user match the photo of the user in the photo ID or passport. If the photos do not match, then the user's ID does not match the uploaded photos. The photo ID may be outdated, stolen, or forged. As a result, the operation advances to step 2736 and the operation terminates with a message to the user that the photos do not match and, as such, a digital identification (ID) cannot be created.
Alternatively, if the photos match, then the operation advances to step 2740 and the verification server processes the liveness verification determination and the photo(s) of the user's photo ID or passport to generate the digital ID. The digital ID may take any form, but in this embodiment it is an image or PDF file that shows one or more of the following: the photo ID image or a variation thereof, the user's photo, the user's email address, verification of liveness, GPS location (specific or generalized, city or country), timestamp, estimated age, or any other information.
Next, at a step 2744, the verification server processes the digital ID to generate a hash value representing the digital ID. It is also contemplated that any other type of processing may occur on the digital ID file to generate a unique code that represents the digital ID. A hash function is one example of processing that generates a unique value corresponding to the digital ID. Hash functions performed on an image are known by one of ordinary skill in the art and are not described in great detail herein.
The value resulting from the hash function is stored for future use and associated with the digital ID. At a step 2748 the digital ID is sent from the verification server, such as by email as an attachment file, to the user. The digital ID may be an image file that is viewable by the user and which may be stored by the user or sent to a third party by the user.
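A minimal sketch of the hash-at-issuance and hash-at-verification steps follows. The file contents and variable names are placeholders, and SHA-256 is used here only as one common example of a hash function.

    import hashlib

    def hash_digital_id(file_bytes):
        """Return a SHA-256 digest of the digital ID file contents (image or PDF bytes)."""
        return hashlib.sha256(file_bytes).hexdigest()

    # At issuance: the verification server hashes the generated digital ID file and
    # stores the digest with the verified user's record before emailing the file out.
    issued_file = b"%PDF-1.7 ... digital ID contents ..."   # placeholder bytes
    issued_hash = hash_digital_id(issued_file)

    # At verification: the copy uploaded by a third party is hashed the same way;
    # any edit to the file changes the digest and the digital ID is reported as modified.
    submitted_file = issued_file                            # unmodified copy in this example
    print("valid" if hash_digital_id(submitted_file) == issued_hash else "modified")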
The user may also be provided a link to the verification server such that the link may also be shared with a third party. Use of the link is discussed below in connection with
At a step 3312, the third party accesses the verification server using the verification link and then at step 3316 the third party uploads the digital ID to the verification server using the interface shown in
At a comparison step 3328, a determination is made whether the first hash value matches the second hash value. If the values do not match, then the operation proceeds to step 3332 and the verification server indicates to the third party that the digital ID has been modified. The digital ID is not verified.
Alternatively, if at the decision step 3328 the two hash values do match, then the operation advances to step 3336 and the verification server sends a reply to the third party that the digital ID is valid and verified, along with the email address used by the verified user to use the digital ID software and receive the digital ID from the verification server.
Next, at a step 3340, the verification server may update a record associated with the verified user to reflect the submission of the digital ID for verification and the successful match of the two hash values. This may be useful to validate the verified user over time, to provide a trust score to the verified user, or to create a history profile. At a step 3344, the verification server may request feedback from the third party regarding their interaction with the verified user. For example, the request for feedback may ask if the third party had a successful interaction with the verified user, or whether the verified user turned out to be who they represented they were. This feedback can be used to build a trust score for the digital ID and the verified user, or conversely, associate the verified user with fraud or misrepresentation. This information may be shared with other users or future users as a way to establish further trust in the system and digital ID.
In one or more embodiments, additional steps may occur to build trust in the user or the photo ID. In other embodiments, if the image of the photo identification provided to the verification server is of a type that is known to the verification server database, such as a driver license, then one or more matching algorithms may be run on the photo identification to verify that the photo identification matches a template of acceptable formats for the photo identification. Stated another way, if the photo identification does not have the required information in the required location, and other aspects of the photo identification do not match the accepted template for that type of photo identification, this is noted on the digital ID, or the digital ID is not generated and provided to the user. For example, the matching algorithm may cross check the submitted photo ID image against the accepted template for factors including, but not limited to: font type, layout of elements on the photo ID, color of elements or background of elements, expiration date, arrangement of information, format of information, watermarks, photo size, size ratio of elements to other elements, images, pictures, or artwork on the photo ID, holograms, anti-copy features, bar codes, and facial features compared to information on the ID such as eye color, skin color, or hair color, or any other factor or feature.
As discussed above, a verification server, which may comprise one or more servers or computers, may receive the information from application software installed and executed on the user's computing device. The application software executes to provide the screen displays and functionality described herein. For example, the app software executing on the user's mobile computing device may capture images of the user and the user's photo identification, and also upload the image files to the verification server. This provides a controlled, secure, closed system for obtaining the required information and transmitting the information to the verification server. It is also contemplated that a web page may be created which acts as the portal for the user to interface with the verification server. It is also contemplated that a user or third party may use a desktop computer or laptop computer to interface with the verification server.
As discussed herein, the facemap comprises data that is derived from the images of the user's face. The facemap data may be sent to the verification server instead of the entire image to reduce bandwidth requirements, reduce the time (for a given bandwidth) required to upload required information to the verification server, and to add greater privacy for the user's images. In one embodiment, the facemap data cannot be used to re-create the image of the person. When generating the facemap data or selecting which image(s) to send to the verification server, specific face frames are selected for their position and quality.
The digital ID may be in any format of electronic file suitable for sending via text message, email, or other electronic transmission means. For example, and not limited to, the digital ID may be an image file such as a JPEG, TIFF, raw image format, PDF, BMP, GIF, PNG, or any other type of image file format. In one embodiment, the image file is locked and non-editable. The file format for the digital ID may be a proprietary format, usable only by the application software which is executed on a computing device. This may make editing or making changes to the digital ID more difficult, although any changes would be detected during the comparison of the hash values derived from the digital ID.
A drawback of prior art systems is the risk of face photo information release and the ongoing, varying state-by-state or country-by-country biometric and privacy laws, many of which are proposed and written by lawmakers with minimal knowledge of the technology and of how advanced and complex verification systems operate. One valid privacy concern from individuals and trusted entities is the privacy of face image data obtained by a trusted source. For example, while an employer, bank, or government entity, such as the department of motor vehicles (DMV) or police department, may have face image data and other personal information, there is an understood interest in maintaining internal control over this information and preventing release of this data to third parties or the public. In particular, the DMV and police departments are typically unwilling to release or receive face image data as part of identity verification. As a result, there is a need for a method and apparatus to verify the identity of a user based on data from a trusted source identity issuer's database (such as a database belonging to the DMV, an employer, the federal government, or the police) without having to send face image data to, or receive face image data from, the trusted source.
To overcome the drawbacks in the prior art and provide additional benefits, it is proposed, in one embodiment, to create a graphical code with face feature vector data, derived from facial images, encoded into the graphical code. This graphical code (which may be referred to as a UR™ code) can then be printed on an ID document or established on any item (such as, but not limited to, documents, diplomas, credit cards, credit reports, passports or identification cards, event tickets, and voting ballots) or shown on digital screens. Once established on an item, imaging of the code and of the face of the person associated with the code can occur to verify an association between the facevector data stored in the graphical code and the person presenting their face to the camera for comparison. Other aspects of the document may be incorporated into the security checks to ensure that the UR Code has not been added to the document later or swapped from another document. One approach is performing OCR on the text of the document, or scanning a barcode and comparing the contents of the barcode that are also contained in the UR Code, such as the name, DL #, or DOB, or obtaining data from the barcode, hashing it, and storing that hash in the UR Code, which is discussed in detail below. This ensures the barcode and the UR Code are connected and have not been swapped. These may be considered secondary checks. So that the UR Code can stand on its own, it is disclosed to include checksums and/or hash function results in the UR Code to provide high confidence in the UR Code and prove that the UR Code was not edited. To verify the UR Code, hashing of the entire UR Code by an encoding entity and publishing of the hash and the UR Code UID is a way for the relying party to authenticate that the UR Code has not been edited.
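One way the barcode-to-UR-Code binding described above could be sketched is shown below. The field names, serialization, and hash choice are illustrative assumptions and do not describe the actual UR Code format.

    import hashlib, json

    def barcode_binding_digest(name, dl_number, dob):
        """Hash fields that also appear in the document's barcode so the digest
        can be stored inside the UR Code payload at encoding time."""
        record = json.dumps({"name": name, "dl": dl_number, "dob": dob}, sort_keys=True)
        return hashlib.sha256(record.encode("utf-8")).hexdigest()

    # Encoding time: the digest is placed in the UR Code alongside the facevector.
    digest_in_ur_code = barcode_binding_digest("JANE DOE", "D1234567", "1990-01-01")

    # Verification time: the same fields are re-read from the scanned barcode and
    # re-hashed; a mismatch suggests the UR Code was swapped from another document.
    rescanned = barcode_binding_digest("JANE DOE", "D1234567", "1990-01-01")
    print("bound" if rescanned == digest_in_ur_code else "mismatch")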
Stated another way, the disclosed method and apparatus can establish identity or an association between a user and an item (such as a document) by using a graphical UR™ code that is printed on a document that comes from a trusted source (government identification, diploma, credit report) and matching it to the face image data from the live 3D human (the person), without the document or UR™ code exposing the photo of the person's face or revealing the person's physical characteristics like gender, age, or ethnicity.
In other situations, a photo of the person is printed on the document in addition to the UR™ code which contains the encoded face vector data of that person. The face photo on the document can be imaged or scanned, and then processed and converted into a data format which can be compared to the data stored in the UR™ code printed on the document, or compared to UR™ code face vector data which was derived from an actual image of the person collected at the time of validation. The terms 'validation' and 'verification' are used interchangeably. By matching two, three, or more of these trusted, or liveness-proven, biometrics to each other, a very high confidence that the presenter of the document containing the UR™ code is indeed the person that was issued the document is achieved. This can occur with or without communication over a network to a trusted entity, such as the entity that issued the document, to compare the UR™ code data to UR™ code data generated from trusted images at the trusted entity, or to a 3rd party that can confirm the legitimacy of the UR™ code and that it has not been tampered with, using any number of cryptographic checks.
An additional benefit to the described method and apparatus is that the face image of the person, the person's face shape, the person's age, gender, or ethnicity, or person's identity cannot be reverse engineered or derived from the UR™ code in any human viewable, comparable, or evaluable form. This allows the UR™ code to be transmitted to and from a trusted entity without revealing or compromising the person's physical likeness.
As can be understood, the size of the data that is in the facescan file is large, and thus, for subsequent processing, the file size is reduced to provide benefits for storage and transmission efficiency. For example, 3D facescan files are created on the user device and encrypted before transmission over a network to a server that will decrypt and process the image data to determine if the face shown in the images stored in the 3D facescan were captured in real time from a live person, and are first generation captures of the physically present human, not photos of photos (spoofs), or replayed video frames.
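The encryption scheme actually applied to the facescan is described only as proprietary; purely as a generic stand-in, the following sketch uses the third-party Python "cryptography" package to show the encrypt-before-transmission, decrypt-on-server pattern described above.

    from cryptography.fernet import Fernet  # third-party "cryptography" package

    # Illustrative only: a real deployment would use its own key management,
    # not a key generated next to the data it protects.
    key = Fernet.generate_key()

    facescan_bytes = b"...3D facescan payload captured on the user device..."
    encrypted = Fernet(key).encrypt(facescan_bytes)  # what travels over the network

    # Server side: decrypt before running liveness processing on the facescan.
    decrypted = Fernet(key).decrypt(encrypted)
    assert decrypted == facescan_bytes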
Next, at a stage 3682, some of the face images in the facescan related to liveness detection are removed to create a 3D facemap (hereafter facemap). The facemap is a smaller file size and does not contain data sufficient for the previous level of liveness detection, but it contains a record that liveness was previously proven to a high level of confidence, at stage 3680, on the larger data that comprises the 3D facescan file. The facemap is suitable for matching to other facemap data, such as from an identification, a prior image, or a collected image, or to face feature vector data. While the facescan and facemap files are encrypted with sophisticated, proprietary encryption schemes, images of the user's face are included in them and stored as human-viewable face images. Thus, these two data types do not hide the user's facial features, and if decrypted they could be reverse processed into human viewable face images of the user. The terms user and person may at times be used interchangeably herein.
Next, at a stage 3684, the facemap (containing face image data) is processed into a face feature vector file (hereafter facevector). The facevector may remain in a face feature vector format that can be processed by a face matching algorithm, neural network model, or similar data comparison method, or it may be encrypted and require decryption prior to processing by a data comparison algorithm. The facevector comprises a much smaller data size and may be represented by a string of alphanumeric values. In one embodiment, the size of the facevector data is as small as 128 bytes. The facevector cannot be reverse processed into a human viewable face image or used to recreate the person's face or likeness in any meaningful way that would reveal the source data subject's identity. In this way, the facevector, either represented as an alphanumeric string or stored in a graphical UR™ code, can be published, shared, or distributed without providing an ability to re-create the person's face image or face likeness.
Finally, at a stage 3686, the facevector is encoded into the format of a graphical code, like a barcode, which may be referred to as a UR™ code. In one embodiment, the facevector is 64 kilobytes; however, in other embodiments, the UR code may be made larger or smaller, or at a greater or lesser resolution. As the UR code becomes larger or higher in resolution, the amount of data which can be stored in the UR code increases.
Stated another way, facevector data is face-image-derived data that cannot be used to re-create the likeness of, or visually identify, the person, and it does not visually link the facevector to the face of the subject person whose data is encoded into it. Facevectors are created by processing face image data to create a face feature vector value, which may be numeric, alphabetic, alphanumeric, or any other data format. The processing should be considered a one-way conversion, such that the processing and facevector cannot be used to recreate the face image or the image data, or be used to identify the person in the image, unless that person is physically present, provides their face images again, and then, with their consent, the newly collected face data is converted into face feature vector data and matched to the face feature vector data encoded into the UR™ code as part of a non-surreptitious identity verification process the subject person opts into. If the person who is attempting to match their face to the face feature vector data encoded into the UR™ code is not a high probability match, then no information about the characteristics of the subject's face from which the facial feature vector stored in the UR™ code was derived is revealed, protecting the privacy of the subject. As a result, transmission or receipt of a facevector (or a UR™ code, which is a graphic code that includes an encoded facevector) is not transmission of user identifying biometric data without additional data, such as data that links the facevector to a particular user. In one embodiment, a facevector is similar to a hash function output in that it cannot be reverse processed to reveal the original data, yet facevectors have the added benefit of revealing similarities to another facevector that was created from the same person's face data.
This is a significant benefit because a hash function output can be compared to another hash function output to determine if the original data is exactly the same, but it provides no indication that the original data is similar or the degree to which the original data (prior to hashing) was similar. For example, if two facevectors are created based on DMV images taken one after another at the DMV, then those two facevector data sets can be processed to determine that those two facevectors were generated from the same person, and not from a different person. As a result, a probability may be generated of similarities between images, which provides information regarding whether, and to what extent, an image is similar to another image, all from facevector data which cannot be used to recreate the original image or reveal data regarding the user's face.
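A minimal sketch of this difference follows, using toy facevectors and cosine similarity as a stand-in for whatever comparison a particular face matching model actually uses; the vector values are invented for illustration.

    import hashlib, math

    def cosine_similarity(a, b):
        """Similarity in [-1, 1]; values near 1.0 mean the vectors are very close."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    # Toy 4-dimensional vectors; real facevectors are much longer.
    enrolled     = [0.12, -0.40, 0.33, 0.80]
    same_person  = [0.11, -0.38, 0.35, 0.79]    # a new capture of the same person
    other_person = [-0.50, 0.10, -0.22, 0.05]

    print(cosine_similarity(enrolled, same_person))   # close to 1.0
    print(cosine_similarity(enrolled, other_person))  # much lower

    # A hash comparison only answers "identical or not" with no notion of closeness.
    h1 = hashlib.sha256(str(enrolled).encode()).hexdigest()
    h2 = hashlib.sha256(str(same_person).encode()).hexdigest()
    print(h1 == h2)  # False, even though the underlying faces belong to the same person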
By way of a nonlimiting example, if the original face image(s) of the person in the video frames output by a smart device's camera are over 200 MB in size, the 3D facescan at stage 3680 may be reduced to 350 KB in size, the 3D facemap at stage 3682 may be 180 KB in size, while the 3D facevector at stage 3684 may be 128 bytes in size (with no encryption layers or metadata). The graphical code 3686 is capable of graphically containing the encoded 3D facevector 3684. The graphical code may be any size, shape, or configuration, or be created using any type of printing or imposing technique, and as such is not limited to that shown herein.
In this embodiment, the graphical code is printed on a driver's license or ID card. In other embodiments, the graphical code may be on any type of document or item which is to be verifiably associated with a person. This includes, but is not limited to, transcripts, diplomas, passports, deeds, titles to property, license plates, auto registrations, access cards or badges, legal documents, visas, credit cards, social security cards, green cards, assignments, shipping labels or transport documents, medical or auto insurance cards, pet ownership tags, property ID badges, event tickets, voting ballots, or any other type of document or item, including digital versions of these items, images or renderings of these items, or digital iterations of these items, such as mobile driver licenses (mDLs) or digital identity or payment wallets.
To aid in the understanding of one or more aspects of the innovation disclosed herein,
After creation of the facevector, the facevector is provided to a UR™ code creation engine 3624, which may be software, hardware, or both, configured to generate a machine readable graphic code that may be placed or imprinted on an item 3616 to create a coded item 3628.
At a later time, to validate or verify that the coded item 3628 is associated with the person 3612, the code on the coded item 3628 may be imaged or scanned, and the person, such as their face, may be imaged with the same or a different camera 3614B. As shown, the scan of the person's face is provided to the same or a different facevector creation engine 3620B to generate a facevector of the person's face. Similarly, the imaged/scanned code is provided to a UR™ code to facevector decoder 3624, which may be software, hardware, or both, configured to convert the imaged/scanned UR™ code to a facevector. Facevectors can be created from any digital 2D face image or facemap, but the source images are preferred to have characteristics similar to ICAO 9303 and ISO 29794, biometric sample quality part 5 (face image data). The facevectors can be created at any time with any face image similar to a mugshot or driver's license photo, and the matching accuracy potential of the facevector may be influenced by the resolution and quality of the source face images. In some embodiments, a facevector that was created in the past and encoded into a UR™ code and a newer facevector that has been encoded into a different UR™ code can be compared. Based on the relative dates of UR™ code creation, a code may be considered the older code or the newer code, and certain characteristics about the subject can be assumed, such as that the subject is now older in the more recent UR™ code encoded facevector than in the older UR™ code facevector.
The validation facevector created from the person's face at the time of validation, as well as the UR™ code derived facevector, which was derived from the UR™ code, are provided to a facevector comparison engine 3630, which may be software, hardware, or both, configured to compare the two facevectors. The results of the comparison are provided to a similarity probability engine 3640, which may be software, hardware, or both. The similarity probability engine 3640 evaluates one or more aspects of the comparison and generates a probability value or other indicator of a degree of match between the facevector derived from the UR™ code and the facevector calculated from the person's face at the time of validation. The results of the comparison and/or the probability value or match level may be provided to a 3rd party or an entity 3644 requesting validation of the document being associated with the person. The entity 3644 may be any entity that seeks confirmation that the item imprinted with the code is associated with the person.
Also part of this system overview is a trusted entity 3636 that may be the same as or different than the UR™ code creation entity 3610. It is contemplated that the output of the facevector creation engine 3620B and the UR™ code to facevector decoder 3624 may be provided to the trusted entity for comparison to trusted facevector data, such as by using a facevector derived from a trusted image that is stored by or accessible by the trusted entity 3636. The trusted entity 3636 may be a government entity such as the DMV, a financial institution, or corporate entity which has trusted facevectors or images of the person which can be compared to facevectors created later in time. Although not shown, the trusted entity 3636 may include facevector comparison engines and/or similarity probability engines.
The code or document creation entity 3820 comprises data communication elements configured to accept data from the computing device 3834. In this embodiment, the code or document creation entity 3820 comprises a server 3840 or other computing system having typical elements associated with a server. In addition, the code or document creation entity 3820, either part of or separate from the server 3840, includes a liveness evaluation engine 3842, a facemap engine 3846, a facevector engine 3854, and a graphical (UR™) code processing engine 3850. These elements may be hardware, software, or a combination of both, including machine executable code (software) stored in a non-transitory state in a memory.
The server 3840 or other computing system performs typical server or computer processing functions such as data input/output, printing, data storage and retrieval, and execution of software code. These are known elements and functions and as such are not described in detail herein. The liveness evaluation engine 3842 is a combination of software, hardware, or both that is configured to process the image data or data derived from the image data of the person (referred to above as a facescan) to evaluate liveness, three-dimensional depth of the person, or both. The facemap engine 3846 is a combination of software, hardware, or both that is configured to process the facescan data set to create the facemap data set.
At the time that the item (such as an identification card (ID) 3854) is created, it is contemplated that an image of the user is captured (or a trusted image that was previously captured is used) and placed on the item. This is typical of many or most forms of ID. In other embodiments, the photo may be omitted. In addition, an image of the user that is placed on the ID is stored in the trusted image database and is also processed with the facevector creation engine 3854 to create the facevector. The resulting facevector may optionally be stored or, for security reasons, may not be stored so it cannot be improperly obtained. The facevector is encoded into a UR™ code using the UR™ code creation engine 3850, and the UR™ code is printed or otherwise incorporated on the front or back of the ID document or digital ID document 3854. Creation of the facevector and the UR™ code is discussed below in greater detail.
Then, at step 3916 the trusted entity captures one or more photographs or images of the person. Optionally, at a step 3920, the one or more captured images may be processed by computer algorithms to verify the existence of a physical camera, the three-dimensional depth and liveness of the person, or the liveness may be attested to by a trusted person operating a camera, such as a DMV employee or bank employee. As part of an automated, AI powered liveness verification, the data processed may be referred to as a facescan. This step may occur if the person is not physically in front of the person creating the UR™ code.
At a step 3924, the image (digital data representing the image of the person) is processed to generate facial data, which is data representing the person's face in three dimensions. This data, after processing, may be referred to as a facemap. As discussed above, the facemap data is smaller in size than the images of the user. Then, at a step 3928, the facemap is processed to generate non-identifiable facial data (NIFD) that represents one or more aspects of the person's face. One example type of NIFD is a facevector. The NIFD is data that represents the face of the person, but which cannot be reverse processed to re-create the image or likeness of the person's face. Thus, the NIFD is data which cannot be reverse processed to identify the user, except by comparison to similarly created NIFD data.
At a step 3932, the NIFD is further processed into a computer readable graphical code, such as the UR™ code referred to above. This code can be imaged with a camera or scanner, and the resulting image is processed to convert the graphical code into an alphanumeric string. It is also contemplated that the graphical code may be scanned with a laser scanning system, such as a bar code reader, to derive the alphanumeric string. As an optional step 3936, non-facial personal data (NFPD) may be collected from an ID card, a document, a database, or the user. The NFPD may comprise, but is not limited to, all or a portion of name, address, driver license number, full or partial social security number, date of birth, eye color, height, weight, location of birth, or similar information. At step 3940, the created graphical code may be saved as a digital file, transmitted to a third party or entity as an image file for use on a digital screen, or printed directly onto a document or item.
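A minimal sketch of assembling a payload for such a graphical code follows. The payload structure, field names, and checksum are assumptions made for the example; the actual UR™ code format is not described here, and the resulting string could be rendered by any 2D-code generator or a proprietary encoder.

    import base64, hashlib, json

    def build_code_payload(facevector_bytes, nfpd):
        """Assemble an illustrative payload: the facevector (base64), optional
        non-facial personal data, and a checksum so edits can be detected."""
        body = {"fv": base64.b64encode(facevector_bytes).decode("ascii"), "nfpd": nfpd}
        serialized = json.dumps(body, sort_keys=True)
        body["sha256"] = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
        return json.dumps(body, sort_keys=True)

    payload = build_code_payload(b"\x01\x02\x03" * 42,               # dummy ~128-byte vector
                                 {"eye_color": "BRN", "dob": "1990-01-01"})
    print(len(payload), "characters to encode into the graphical code")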
The document 4004 may be any item as described herein or any item which would benefit from a verifiable association between the item and the person 4020 associated with the document. The person 4020 is the person who is to be verified as being associated with the document, namely, the person should be matched to the facevector data stored in the graphical code 4010 on the document 4004. The person 4020 may utilize a computing device 4024 (either their own computing device or another's computing device) to scan or image the code 4010 on the document 4004 and also capture one or more images of their face. The images may be captured at different distances from the person as described herein, such as a first distance which is at around arm's length and a second distance which is closer to the person's face. The source of the software 4012 may be the entity that provides the software (machine executable code stored in a non-transitory state in a memory). One exemplary source of such software is FaceTec, Inc., with headquarters located in Summerlin, NV.
Turning now to the image and code processing system (ICPS) 4016, shown is one example embodiment of a system for processing the facevector code (extracted from the graphical code) received from the computing device 4024, as well as processing the image data, or associated facemap, from the user, and performing a comparison therebetween. In this embodiment, facevector data extracted from the UR™ code on the document is received at the processing system 4016 from the person 4020 and/or the computing device 4024, such as over the internet or a computer network. Also received from the person 4020 or the computing device 4024, or from a database, is an image of the user, or data derived from an image of the user, such as facemap data.
The processing system 4016 comprises data communication elements (not shown) configured to accept data from the computing device 4024. In this embodiment, the processing system 4016 comprises a server 3840 or other computing system having typical elements associated with a server. In addition, the processing system 4016, which may be either part of or separate from the server 3840, includes a liveness evaluation engine 3842, a facemap engine 3846, a facevector engine 3854, and a graphical (UR™) code processing engine 3850.
The server 3840 or other computing system performs typical server or computer processing functions such as data input/output, printing, data storage and retrieval, and execution of software code. Server functions are known elements and functions and as such are not described in detail herein. In addition to the server functions, the new aspects include the liveness evaluation engine 3842, which is a combination of software, hardware, or both that is configured to optionally process the image data, or data derived from the image data of the person (referred to above as a facescan), to evaluate liveness, three-dimensional depth, or both, of the person. The facemap engine 3846 is a combination of software, hardware, or both that is configured to process the received image of the user or facescan data set to create the facemap data set. This step may occur at the computing device 4024 or at the processing system 4016.
The facevector engine 3854 is a combination of software, hardware, or both that is configured to process the facemap data set to create the facevector data set. The facevector engine 3854 can be used to process facemap data, received from the user, into the facevector data. The UR™ code processing engine 3850 is a combination of software, hardware, or both that is configured to process the facevector data to create the graphical code, which can be printed at that time onto a document (item) 4004. The UR™ code processing engine 3850 may also be configured to compare the document facevector (extracted from the graphical code that was on the document or item and provided by the user) to the facevector that is derived from an image of the user that was captured just prior in time, or to a stored photo or stored facevector (or facemap) of the user from a trusted entity (described below in detail). As discussed herein, the facevector data cannot be used to re-create the person's face or reveal the person's identity.
At a step 4104, the process is initiated by the person or a third party seeking to verify or validate an association between the person and an item marked with the code (or to obtain identity verification). At a step 4108, the entity seeking verification is presented with the document containing the graphical code and scans or images the code. After step 4108, the operation branches to step 4136 and step 4112. At a step 4136, the device or computer which scans or images the code may convert the code into data, such as an alphanumeric code, which may be referred to as a facevector code or facevector data. At a step 4140, if the process is to be performed at a remote location, then the facevector is transmitted to a remote location for processing. Alternatively, the processing may occur at the same location at which the code was scanned or imaged.
At a step 4112, the entity seeking verification captures an image of the person, such as using a camera or other image capture or scanning device. A camera of a smartphone or a web cam may be used. The image of the person may be processed to reduce the file size and to prevent transmission of data that can be used to identify or recreate the image of the person. As an optional step 4116, one or more images of the person may be processed for liveness, as described herein, by looking for expected differences between two or more images of the user. At a step 4116, the UR™ code stored data and the image data, or a processed version thereof, may be transmitted to a remote location for processing. Alternatively, the processing may occur at the same location as the image collection.
At a step 4124, the image of the person is processed to create facial data which represents the person's face. This may be referred to as a facemap. Then, at a step 4128 the facemap is further processed to generate non-identifiable facial data (NIFD) which represents the person's face. The NIFD may comprise or be referred to as a facevector. The NIFD cannot be used or reverse processed to generate or re-create an image of the user. It is contemplated that the processing of the image data may progress directly to facevector data.
At a step 4132, the system may also collect non-facial personal data (NFPD) or document data. The NFPD may comprise any data about the person as described herein, and the UR™ code may also include or be encoded with the same type of data. This allows for a comparison between the UR™ code encoded data and data from the document collected at the time of verification. The type of information collected from the document may vary based on the type of item or document.
After steps 4132 and 4140, the operation advances to step 4144. At step 4144 the system, such as the processing system 4016 (
In addition to the structure of
The trusted entity 4208 may optionally include identification (document) creation elements such as a computer, printer, ID printer, camera, and ID content creation, or these elements may be controlled by a third party business or contractor. These elements are not described in detail as such elements are known in the art. The camera may be used to capture the trusted image(s) which may be processed into facevectors. Also part of the trusted entity 4208 is a server 4258 which executes many of the functions of the trusted entity. Databases are part of this embodiment, such as a trusted personal data database 4278 and a trusted image data database 4274 which store images of the person, and/or facevectors or facemaps of the person, and optional other data regarding the person(s). These databases may be combined into a single database. In many instances, the trusted entity 4208 may be unwilling to release information (image data and/or personal data) to third parties or receive additional image data from third parties. As a result, the facevectors may be used in place of images to protect privacy.
Also part of the trusted entity 4208 is a UR™ code evaluation engine 4262, a UR™ code creation engine 4266, and a facevector creation engine 4270. The UR™ code evaluation engine 4262 is hardware, software, or a combination of both that evaluates UR™ codes, such as by decoding the UR™ code to determine the facevector values represented by the UR™ code, or by comparing the facevector that is derived from the code. It is contemplated that a graphic representation (an image, for example) of the UR™ code, the derived facevector, or all of these may be provided to the trusted entity 4208. The UR™ code creation engine 4266 is hardware, software, or both, which generates UR™ codes based on facevector data. The facevectors are data derived from an image. Facevectors are described below in more detail. The facevector creation engine 4270 is hardware, software, or both that processes an image or portion of an image to create facevectors.
At a later time, a person 4020 may seek to provide assurances of their identity, or to validate an association between the document bearing the UR™ code and themselves, through imaging and analysis of themselves and the UR™ code. However, documents can be synthetically created, tampered with, or altered to create what is commonly referred to as a fake document or a fake ID. To provide assurances that the document is accurate, the UR™ code may be placed on the document as discussed herein.
The following is an overview of one method of operation. The person 4020 may capture image(s) of the UR™ code on the document 4004 in a camera interface and upload those image(s). The URL stored in the UR™ code may then, using QR Code™ decoding software and instructions, open a web interface or application executing on the person's computing device that connects to the entity requesting verification (not shown) or to a validating entity. In this example, the document is a photo ID, such as a driver's license. Alternatively, the UR™ code may be scanned or processed by the camera on a computing device to collect and decode the facevector. In the case of an ID, the front of the ID may also be photographed to capture an image of the picture of the person. The entity requesting verification 3816 may forward the images of the ID 3824 to the validating entity 3820 for processing, or processing may occur on the person's computing device. The person's computing device may be a smartphone, tablet, desktop computer, kiosk, laptop, or any other computing device usable by a user.
Upon receipt of the image of the ID and/or the UR™ code on the ID, which may be scanned separately as occurs with QR codes or captured as part of the image, the validating entity may perform one or more steps to evaluate whether the user is the person identified on the ID. As discussed above, liveness evaluation may occur by capturing images of the person's face to evaluate whether the images of the face presented to the camera are from a person who is physically present and live and are being captured in real time, meaning the camera is not seeing a mask, figurine, printed 2D picture, or video or deepfake imagery injected by virtual camera software or a similar camera bypass attack. In addition, data may be collected from the ID, and the UR™ code on the ID may be processed with the UR™ code processing engine to derive the facevector from the UR™ code. In the event that only the UR™ code or only facevectors are sent to the validating entity (such as the processing system 4016), no face image data or personal identifying data is sent.
The resulting facevector (and/or the UR™ code) may be sent to the processing system 4016 for validation, and the processing system may interface with the trusted entity 4208 as part of the process. In one embodiment, an additional code or identifier may be sent to the trusted entity 4208 with the facevector to aid in the validation of the facevector, such as to aid the trusted entity in narrowing down or locating a trusted facevector to be used in the comparison. Upon receipt of the facevector, the trusted entity performs a database lookup to determine if there is a match between the received facevector and a facevector stored in the trusted database, or the name (or other identifier) may be used to index into the database. In one embodiment, the comparison is performed by a neural network located at the processing system 4016 or the trusted entity 4208. The neural network may be custom tailored for the particular validating entity but should be the same neural network model used when generating and analyzing facevectors. For example, the DMV may use one neural network model, and facevectors created using that model should then in the future be processed with the same model, although that model may be replicated at different entities. Similarly, a credit agency may have its own different neural network model, and facevectors associated with that model must in the future be processed with the same model. The neural network may be shared between two devices, or two copies of the same neural network may be used. For example, one or more elements of the processing system 4016 and/or the trusted entity 4208 may be neural networks.
If the name or other code that identifies the person is also sent, then that data may be used to locate a corresponding facevector that should match, or an image of the person that was originally used to create the facevector and the UR™ code on the ID at the time of ID creation. The original image stored in the database, which was used to create the facevector that was processed into the UR™ code and placed on the ID, can be processed again to re-calculate the facevector, and the re-calculated facevector from the stored image is compared to the facevector from the person and/or the facevector derived from the UR™ code on the document.
Based on the analysis by the trusted entity, the trusted entity 4208 can return a match or no-match response, and/or provide a probability of match. In the event that the same image (stored at the trusted entity) that was used to originally calculate the facevector is again processed to create a re-calculated facevector, then the two resulting facevectors should be a perfect match or very close to a perfect match because both facevectors (the facevector from the stored image and the UR™ code derived facevector) were created from the exact same image. Alternatively, if the facevector is calculated by the facevector creation engine at the validating entity (or at any location) based on the image of the user on the ID or on a new photograph of the person, then this new facevector will not exactly match the facevector stored or calculated by the trusted entity, because there will be inherent differences between the trusted entity images (or facevector) and the image (or facevector) as actually captured by the camera. However, the comparison will determine a high likelihood of a match, thereby giving confidence that 1) the image on the ID is of the same person as 2) the person to which it matches in the trusted image database and 3) the same person photographed at the time of the validation. All of this can occur without the transmission of any image data or identifying data to or from the trusted entity. The facevectors will not be a 100% match because the image of the ID will not be exactly the same as the trusted image at the trusted entity, due to it being a photograph of a photograph.
The outcome of the analysis by the trusted entity is then sent to the validating entity (in this embodiment the processing system), or in other embodiments to any entity that sent the request to the trusted entity. The outcome of the analysis, which may be referred to as match probability data, may be a match or no-match response, or some probability value or similarity value indicating that the photo on the ID matches the trusted photo and that both match the person presenting the ID (or document). Businesses or requesting entities can then make a decision based on the match level or match probability provided by the trusted entity or processing system. The validating entity can also provide additional analysis to supplement the decision from the trusted entity.
Because the trusted entity is traditionally unwilling to transmit or share trusted images or received images for comparison, in this method and apparatus the output from the trusted entity does not include any image data or person-identifiable data, nor does the data sent to the trusted entity contain any person-identifiable data. The matching occurs between two facevectors. This protects a person's privacy and biometric data while also allowing the trusted entity to further the transactional goals of the business community and the citizens of the state and to prevent fraud, all of which are in the best interests of a government entity and its citizens. For example, it benefits the person (citizen) by providing a secure and accurate way to provide assurances of their identity or association with an item (such as a document) for an online interaction, thus reducing fraud. Businesses benefit because the business is less likely to be subject to fraud, thereby increasing profits or allowing a price reduction, and the additional tax revenues aid local governments.
In addition to the steps of
At a step 4316, one or more images of the person may be captured, such as at a first distance from the person's face and a second distance from the person's face as disclosed herein. As an optional step 4324, one or more images of the person may be processed for liveness, three-dimensional depth, or both as described herein by looking for expected differences between two or more images of the user captured at different distances or any other aspect of liveness or three-dimensionality.
At a step 4328, the image of the user is processed to create facial data which represents the person's face. This may be referred to as a facemap. Then, at a step 4132, the facemap is further processed to generate non-identifiable facial data (NIFD) which represents the person's face. The NIFD may comprise or be referred to as a facevector. In one embodiment, processing may proceed directly to facevector generation from the face image(s) without creating facemap data. The NIFD cannot be used or reverse processed to generate or re-create an image of the user. At a step 4336, the system may also collect non-facial personal data (NFPD) or document data, such as by performing optical character recognition of the text on the document. The NFPD may comprise any data about the person as described above, or hashes of personal data that can later be verified against in zero-knowledge proofs. In addition, personal data may optionally be collected, such as by the user entering personal data manually. The type of information collected from the document may vary based on the type of item or document. Examples of such items or documents are a driver license number, passport number, employee number, credit card number, and badge number.
Personal data (NFPD) may comprise, but is not limited to, the following or a portion of the following: a person's name, date of birth, address, age, eye color, height, place of birth, personal preferences, life events, social security number, maiden name, parent's name(s), and occupation. Including personal data that does not identify the person in the UR™ code prevents an existing UR™ code from another ID from being cut and glued, laminated, or taped onto another ID to create a fake ID, or from being printed directly into a high quality fake ID, because the text on the document, such as a driver's license, would undergo an optical character recognition (OCR) process. The OCR NFPD would be compared to the NFPD extracted from the UR™ code when the UR™ code is decoded. If these two types of NFPD do not match (or the hash of one does not match the stored hash), this is a sign of fraud or a fake document.
After step 4320 and step 4336, the operation advances to step 4340. At step 4340, the system, such as the processing system or the trusted entity, compares the code stored facevector data (NIFD) and/or the facevector data from an image of the person on the document (if present) to the image derived facevector data from the image of the person that was just captured at step 4316. Then, at step 4344, the system calculates a similarity value based on the comparison that occurs at step 4340. The similarity value provides an indicator of the distance (delta) between two or more facevectors in N-dimensional space in the neural network model. At a step 4348, the system compares the similarity value to one or more threshold values to determine whether to continue. If the comparison between the UR™ code derived facevector and the image derived facevector indicates a low probability of being sourced from the same human subject, then the operation may end at this step, and involvement of the trusted entity is avoided. Steps 4340, 4344, and 4348 may be optional, but they provide a greater level of security by allowing access to the trusted entity only when the person's face and the code on the document match sufficiently.
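The gating described for steps 4340 through 4348 could look roughly like the sketch below. Cosine similarity and the 0.80 threshold are illustrative assumptions; the disclosure does not fix a particular distance metric or cut-off.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two facevectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_before_trusted_entity(code_fv: np.ndarray, image_fv: np.ndarray,
                               threshold: float = 0.80) -> bool:
    """Only involve the trusted entity when the code-derived and freshly captured
    facevectors plausibly come from the same person (steps 4340-4348)."""
    return similarity(code_fv, image_fv) >= threshold
```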
Next, at a step 4352, the facevector extracted from the graphic code on the document, the facevector derived from the person's image(s), the facevector from the image on the document, or a combination thereof are transmitted to the trusted entity for processing. At a step 4356, the trusted entity receives the one or more facevectors (or images of the graphic codes) and retrieves, or attempts to retrieve, from a trusted database a file or record that matches the person. This may be based on an identifying code that was also sent (such as an address into a database), or based on a match of one or more of the actual received facevectors to a facevector in the trusted entity database. A facevector retrieved from the trusted database at the trusted entity, or a facevector calculated from an image stored in the trusted database, is referred to herein as a trusted facevector.
At a step 4360, the trusted facevector(s) from the trusted entity are compared to the received facevector(s), which may be a facevector currently derived from the person, stored in the UR™ code on the document, or created from a picture of the person on the document. At a step 4364, the trusted entity calculates a similarity value based on the comparison at step 4360. Then, at a step 4368, the trusted entity or other entity compares the similarity value from step 4364 to one or more threshold values to determine the degree or probability of validation of a person's identity and/or the association between the person and the document bearing the UR™ code. As discussed above, the two facevectors will not have a 100% probability of match, but will still indicate a match to the trusted facevector stored in a trusted database, due to differences between the original image from which the trusted facevector was derived and subsequent facevectors derived from photos or images captured later in time. The facevectors still indicate or suggest that the person is the same and/or the document is validated. Finally, at step 4372, the trusted entity or other entity transmits the results of the comparison, such as the similarity value or the probability calculated at step 4368, to the requesting entity or to the person, or both, so that a decision may be made regarding the identity of the person or the association between the person and the document bearing the UR™ code.
Proprietary or Open Source Facevector Creation, UR™ Code Creation and/or Decode Engine
It is also contemplated that the facevector creation engine may be open source, partially open source, or, alternatively, proprietary such that it is not a public domain program or processing engine. A proprietary engine further increases security by making it impossible or difficult for an entity to recreate an accurate and meaningful image-to-facevector conversion and associated UR™ code, and likewise difficult to decode a UR™ code (graphical code) into a facevector and any contained identity-related information or metadata. In one embodiment, the creation of the facevector is proprietary and secure, while decoding a UR™ code into a facevector is open source or partially open source. Thus, the creation of UR™ codes is secure, to prevent creation of fraudulent UR™ codes, but decoding the UR™ code into a facevector for comparison is available to the public.
In addition, it is also contemplated that a unique key (number, code, value, alpha string, or combination thereof) may be used as input when creating and/or decoding a facevector and/or associated UR™ code. For example, in one embodiment a unique 79 digit or 1224 digit alpha-numeric key (or a key of any size or length) may be provided to a trusted entity (for example, a DMV). Each trusted entity may be provided with a different key for security reasons. In such an embodiment, the key is used and required when creating the facevector, and that same key or a derivative must be used to process images to create the UR™ code and/or when verifying identity or the UR™ code later in time. Without this code, the facevector (UR™ code) cannot be created and/or decoded, thereby preventing or reducing a person's ability to create a fake ID with a faked UR™ code. UR™ codes could contain a URL with the remaining data as a variable in the URL, for example www.URCodes.com/verify?data=xxxx . . . xxxx (a string of any length), and this data could be encrypted and only unlocked with a private key.
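One way the keyed scheme could work is sketched below using the `cryptography` package's Fernet symmetric cipher; the disclosure leaves the cryptographic primitive open, so Fernet, the per-entity key handling, and the URL layout here are assumptions for illustration only.

```python
from urllib.parse import urlencode
from cryptography.fernet import Fernet  # symmetric example; an asymmetric scheme could be substituted

entity_key = Fernet.generate_key()   # per-trusted-entity key (e.g., issued to a DMV)
cipher = Fernet(entity_key)

def build_code_url(facevector_bytes: bytes) -> str:
    """Encrypt the payload so it can only be decoded by a holder of the key."""
    token = cipher.encrypt(facevector_bytes).decode()
    return "https://www.URCodes.com/verify?" + urlencode({"data": token})

def decode_code_url(token: str) -> bytes:
    """Fails (raises) unless the matching key is held, blocking forged codes."""
    return cipher.decrypt(token.encode())
```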
As mentioned above, it is also contemplated that additional information may be inserted into the UR™ code beyond data regarding or derived from the image. This additional data encoded into or forming part of the UR™ code may comprise or represent, but is not limited to, select portions of information regarding the person or the document, such as year of birth, first letter of the last name, last letter of the first name, the 3rd number (or any one or more of the numbers) in the person's social security number, the sum of the digits in the day of birth, document information, details or creation date, or other information that adds further details about the person or document but which does not identify the person. For example, a person cannot be identified, to the exclusion of other people, by only the first letter of their last name and the last letter of their first name; however, that information does exclude a tremendous number of people and greatly narrows the possible matches that may occur for the facevector. The same is true of the day number of a person's date of birth. Assuming an even distribution of birthdates over the course of a month, adding this information to the UR™ code reduces the match possibilities by a factor of about 30. These are non-limiting examples of a person's information that may be incorporated into a UR™ code which does not identify the user, yet does reduce the number of facevectors (people) against which the person's face image could match. For example, there are a greatly reduced number of people whose last name starts with N and first name starts with T, yet no single person can be identified by that combination. In contrast, the prior art barcode on the back of a driver license can be read to identify the person's name.
The trusted entity 3808 can verify that the person identified by the UR™ code also has those same personal details, even though the person cannot be identified by the additional details. Additional data incorporated into the UR™ code will increase the accuracy of the identity assurances and document association provided by the trusted entity. It will become increasingly difficult to create a fake ID or document that can also bypass interrogation by the trusted entity based on the UR™ code while also maintaining a higher accuracy of facevector matching across different images. Even with this additional information, no biometric data (that can be used to re-create the user's face or identify the user) is sent to or from the trusted entity.
In one embodiment, the facevector may be created directly from the image of the user, while in other embodiments the facevector may be created as part of a multi-step process. The multi-step process makes it more difficult to pass a fake ID as a valid ID. In one exemplary multi-step process, the ID photo (or any photo, such as but not limited to the trusted image) is provided to a neural network that may optionally be configured to perform liveness processing and/or face matching against a database of images that is not located at the trusted entity. As part of this process, multiple images of a person's head or face may be captured, such as at different distances from the user. Liveness analysis may occur on one or more of these images, yielding a liveness answer or liveness probability which indicates the likelihood that the person being imaged is a live person and not a mask, figurine, or printed image of the person. This provides another level of security.
Using these images, a 3D facemap may be created, which is a processed version of the image. The facemap may be used for matching the photos of the person to the photo on the ID that is asserted by the person to be the ID of the person. Thus, the photo taken by the person during the session is compared to the photo on the ID. This provides another level of security. Alternatively, the photos collected during a verification or ID check session of the person may be used for matching against a pre-existing collection of photos of the person stored in a remote database. Unlike the facevector, facemap data could potentially be decrypted and expose images of the user's face.
The facemap may then be further processed, such as with a neural network or any other processing device executing the same neural network model (such as a processor executing software stored on a memory as machine executable instructions), to create the 3D facevector (facevector). Using a neural network for creation and comparison of the facevectors adds another layer of security because most people committing fraud are not going to design, build, and enable a neural network or have access to the same neural network model used to create the facevector that the UR code represents. The facemap data is pushed into a transforming system (facemap engine) that generates the facevector, which can be stored in memory for processing and comparison, or stored as a specific file, referred to herein as a facemap file type. The facemap and facevector generation engine software, also referred to as a transforming system, is available from FaceTec, Inc. located in Las Vegas, NV. In one embodiment, the facevector data comprises long strings of alphanumeric data. In one embodiment, the match accuracy when matching two facevectors that were created from the data in two 3D facemaps is 1 in 125 million, meaning that the comparison is only expected to incorrectly match two random people who are not the same person once in every 125 million comparisons, which is astonishingly accurate.
The facemap and facevectors may be encrypted if stored or transmitted over a computer network. Similarly, the facevector may be input into a processing engine or a server SDK, such as the UR™ code creation engine, to create the UR™ code. In one exemplary environment and embodiment, the original photo of the user, such as the stored trusted image or a photograph collected by the computing device of the user, may be in the size range of 300 Kbytes to 400 Kbytes. The 3D facemap, created by processing the trusted image, may be in the size range of 200 to 250 Kbytes. The facevector can be stored in a UR™ code, which has usable storage capability in the 2 to 3 Kbyte range. The use of the smaller facevector provides the benefit of reduced storage space for all applications, such as but not limited to blockchain verification arrangements and databases with a large number of users (such as 5 million or 100 million users), and means less data must be transmitted over a network, thereby reducing lag, while also avoiding privacy issues or BIPA lawsuit liability exposure.
Use of URL Code in Connection with UR™ Code
Different URL's can be used to direct the process to different web sites. For example, a URL on a driver's license may direct the processing system to the DMV for verification, while a URL on a college diploma may direct the processing system to the college that issued the diploma for verification. Thus, the URL may be tailored to the person, document, or key used for creating the facevector or UR™ code.
It is also contemplated that the URL address 4404 may include a URL code 4408 that provides information to the web site accessed by the URL address, or which is unique to the user. In one embodiment, the URL code 4408 and/or URL address 4404 may be part of and included in the UR™ code 3604. The URL code 4408 may be used by the web site retrieved by the URL address 4404 to access a record associated with the ID or direct the computing device using the URL code to a particular web site or location within a server. The code 4408 may identify the user or be a pointer into a database to a particular facevector for that person or be part of the facevector itself. In one embodiment, the URL code 4408 may direct the query from the link to a particular image in the database that is used for comparison or processed to create a facevector that is compared to the captured image of the ID.
Testing on large data sets has confirmed that the system and method disclosed herein can reduce the facevector data set down to as little as 128b and still achieve an accuracy level better than 1 in 125 million FAR (false acceptance rate) at an FRR (false rejection rate) below 0.99%. Stated another way, the FAR when matching the UR™ coded data to an image of the user is less than 1 in every 125 million attempts. In other embodiments, a higher or lower FAR may be achieved based on the facevector data size, the size or resolution of the UR™ code, and other factors. In one embodiment the facevector data size is 1 kilobyte.
In one embodiment, the facevector data is mapped in N-dimensional space, which may be represented by large arrays of data. High-dimensional space is difficult if not impossible to visualize mentally as compared to traditional three-dimensional X, Y, and Z coordinate systems. For example, N may be set at 512. One way to attempt to visualize it is to imagine that a facevector representing a human face is a small sphere inside a much larger sphere, and that the best algorithms would spread the small spheres (facevector values, with overlap volume corresponding to FAR at a specified FRR) out such that their centers are spaced away from each other and the small spheres overlap as little as possible. This gives each face the most volume or separation for itself in relation to other facevectors, while the FAR level is represented by the overlap of the smaller spheres, as it is possible for the model to falsely match two very similar looking people. This would be an ideal facevector-to-space conversion, but in practice some facevector data is closer or more similar to other facevector data, because some faces lack distinguishing features or have similar shapes from the perspective of the facevector encoding algorithm. Stated another way, some people look very similar. The lower the overlap or proximity between facevector locations when mapped into N-dimensional space, the more accurate the face matching is, and the more likely it is that the facevector of one person can be distinguished from the facevector of another person. It is preferred to map the facevectors of the same person to as close to the same point as possible every time a facescan is converted to a facevector.
To further aid in understanding the mapping of data into facevectors, the dimensions are not traditional X, Y, Z dimensions but instead form an algebraic mapping. Any number of dimensions may be represented by independent variables. To describe where a point is in 3D geometric space requires three variables, namely the X, Y, and Z positions. The axes need not even be fully perpendicular, yet any three independent axes can serve as a basis to represent all points in the universe. Location information is lost if fewer than three axes are used, which means that describing a point in the space we physically occupy requires at minimum three independent variables.
It can be understood that two variables provide two-dimensional mapping, and with three variables three-dimensional mapping can be achieved. With three-dimensional mapping, greater accuracy is achieved and a greater number of unique data points can be mapped. Expanding this to a greater number of dimensions, such as N dimensions where N is any whole number, the facevectors can be represented as, or considered, an n-dimensional geometric mapping. In one embodiment, the n-dimensional 'space' is an array such that the facevector maps into a 512-dimensional space. While this clearly cannot be imagined or visualized in the mind, it can be represented mathematically, such as when multiplying two matrices that map into or represent a transformation on a 512-dimensional space. Different embodiments may utilize different matrix sizes and different dimensional space mappings.
Considering this N-dimensional mapping, the greater the distance between two mathematical mappings, the less similar the underlying facial data (and the people) may be considered, while closer mappings may be considered to indicate that the people appear more similar. The difference in mappings may be apparent mathematically but not to the human eye or brain. In this way, facevector mappings of the same person, even using different photos taken at different times, will map closer in the N-dimensional space than a mapping of a different person. Thus, mappings that are not exactly the same but are close may indicate the same person even though there is not an exact match. In this manner, the mappings differ from a hash function. A threshold value may be used as a guide to determine whether the degree of similarity in the mapping is sufficient to advise that the two facevectors are of the same person or of different people.
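The distance-and-threshold idea can be sketched concretely as follows. The 512-dimension size matches the example above, but the Euclidean metric and the threshold value are illustrative assumptions; any consistent distance measure in the N-dimensional space would serve the same role.

```python
import numpy as np

N = 512  # illustrative dimensionality of the facevector space

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two facevectors in N-dimensional space."""
    return float(np.linalg.norm(a - b))

def likely_same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.9) -> bool:
    """Unlike a hash comparison, nearby (not identical) mappings still indicate
    the same person; only mappings beyond the threshold indicate different people."""
    return distance(a, b) < threshold
```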
Hash Function with UR™ Code
Also disclosed herein is the use of a hash function in connection with the facevector, which may be represented in a UR code. In one embodiment, the facevector, which is a long string of numbers and letters and can also be represented as a number, can be hashed to create a hash value. This hash value can be stored inside the UR code, as part of the facevector, or stored at a secure location such as a trusted entity or a trusted third party that controls the facevector hashes.
In the event that the hash value is placed inside the UR code, upon extraction of the facevector from the UR code, the facevector can again be hashed to regenerate the hash value of the facevector and this newly calculated hash value can be compared to the hash value embedded in the UR code. If any value or aspect of the facevector has changed, the hash values will not be the same. This is a further security feature.
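A minimal sketch of this embedded-hash check is below. SHA-256 and the float32 byte serialization are assumptions; the disclosure does not name a specific hash function, only that any change to the facevector must change the hash.

```python
import hashlib
import numpy as np

def facevector_hash(facevector: np.ndarray) -> str:
    """Deterministic digest of the facevector bytes."""
    return hashlib.sha256(facevector.astype(np.float32).tobytes()).hexdigest()

def verify_embedded_hash(extracted_fv: np.ndarray, embedded_hash: str) -> bool:
    """True only if the facevector stored in the code is byte-for-byte unchanged."""
    return facevector_hash(extracted_fv) == embedded_hash
```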
In another embodiment, the hash value may also or alternatively be stored at a secure location instead of being embedded in the UR code. This secure location that stores trusted hash values may be referred to as a trusted hash value entity. As such, upon extraction or conversion of the facevector from the UR code, a hash function may be performed on the UR code stored facevector to obtain a calculated hash value. The calculated hash value can be sent to the trusted hash value entity for comparison to the trusted hash value that was calculated at the time of creation of the facevector, to verify that the facevector stored in the UR code is the same as the facevector that was originally calculated. If the hash values do not match, then the UR code or the facevector represented in the UR code was tampered with or edited. This is a further security feature. The trusted hash value storing entity may be FaceTec Inc., a bank, credit agency, government entity, corporation, or any entity. The trusted hash value entity may be the same as or different from the trusted facevector entity; keeping the two entities separate provides further security because both entities would have to be hacked or internally compromised for fraud to occur.
In addition, other or supplemental calculations may occur to identify a facevector which has been edited. In one embodiment, a checksum operation may occur on the facevectors. Other mathematical operations are also contemplated. To avoid confusion: while the facevector calculated from a collected image of the user, taken at the time of document or person verification, will differ from but be similar to the facevector stored in the code, the facevector stored in the code should be exactly the same as the facevector calculated at the time of creation/printing of the original UR code.
In one embodiment, plain text from the document may be scanned and subject to OCR (optical character recognition) processing. The scanned OCR text may be compared to what is in the bar code of the ID and/or what is in the UR™ code. As discussed herein, information from the document with the UR code may be encoded into the UR code. In addition, the text may optionally be hashed to obtain hashed document text data. Hash functions are known in the art and as such are not described in detail. The hashed text is then encoded in the bar code and/or the UR™ code to create an additional level of security such that the hashed text that is encoded into the bar code or UR™ code must match a newly created hash of the text on the ID (document). For example, data from the document that does not identify the person or a hashed value thereof may be stored in the UR code. At a later time, the non-facevector data encoded in the UR code must match the OCR data from the document at the time of verification. In this way any alterations to the actual text of the document will be detected when it is hashed at the time of validation and compared to the previously hashed text that is encoded in the bar code or the UR™ code.
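A rough sketch of the hashed-text comparison follows. The normalization step is an assumption added for the example: OCR output has to be normalized identically at ID creation and at verification, or identical documents would hash differently.

```python
import hashlib

def normalize(text: str) -> str:
    # Assumed normalization: uppercase and collapse whitespace so that the same
    # document text hashes to the same value at creation and verification time.
    return " ".join(text.upper().split())

def document_text_hash(ocr_text: str) -> str:
    return hashlib.sha256(normalize(ocr_text).encode()).hexdigest()

def text_untampered(ocr_text: str, hash_from_code: str) -> bool:
    """Any alteration to the printed text changes the hash and flags a fake document."""
    return document_text_hash(ocr_text) == hash_from_code
```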
It is also contemplated that a hash of the image file representing the UR code, or of the UR code graphic itself, may be computed to detect and prevent fraud attempts that tamper with the UR code itself. In such an embodiment, an unchanged version of the UR code or digital file representing the UR code would be hashed and the resulting value would be compared to a trusted hash of the UR code graphic or digital file.
UR Code Verification with Liveness and/or Three-Dimensional Depth Verification
It is also contemplated and disclosed that the UR code processing may occur in connection with liveness verification. Liveness verification is discussed above in greater detail. In one embodiment, images of the person are captured at a first distance from the camera and at a second distance from the camera. The images are then compared to determine whether expected differences are found, as disclosed above, to verify or provide an assessment of whether the person is live and/or three dimensional. This provides an additional security layer by preventing a fraud attempt whereby a 2D image, mask, figurine, or other false representation of the person is presented to the camera to generate the person-derived facevector. Verifying liveness on the captured image of the person used to calculate the facevector prevents a UR code to person comparison based on a 2D image, mask, figurine, or other false representation of the person. For example, absent liveness and/or three-dimensionality verification, something other than a captured image of the live person could be presented to the camera in an attempt to spoof the system when creating the person's facevector, which is to be compared to the facevector stored in the UR code.
Combining user liveness verification with the UR code facevector comparison is particularly useful when performing online or remote verification because in such embodiments a live trusted person is not present to monitor, supervise, and ensure that the image(s) captured of the face are from a real live person. In online or remote unsupervised verification, the person can attempt fraud without being monitored by anyone overseeing the process and ensuring its integrity.
In addition to the use environments discussed above, it is also contemplated that the UR™ code (graphic code) may be placed on numerous items or documents that will result in numerous benefits throughout the economy and establish greater security.
One such additional environment of use is on a shipping label which is affixed to a shipped package. In this use environment, the face of the person set to receive the package may be imaged or scanned when the product is ordered. For example, a person ordering an expensive product will undergo a facescan or facial image collection at the time of purchase of the product, or at least once in the purchase history, which may also include a liveness verification and/or identity check based on an in-person or ID card/driver's license scan and analysis. The person's facevector data is then associated with the order. When the shipping label is printed, the facevector generated from the purchaser's face is converted to a UR™ code and printed on the shipping label.
When the package is delivered to a person representing themselves to be the purchaser (receiver), the delivery person will scan the receiver's face and the UR™ code on the shipping label. A computing device will process and compare the face derived facevector and the UR™ code derived facevector. Based on the comparison, a determination can be made that the person receiving the package is the same person who ordered the package. In this example method of operation, no internet access is required because all the processing can occur on the same computing device. Additional identity verification may occur by communication with a trusted entity as described above. This not only confirms the receiver actually received the package, thus preventing the common fraud of claiming the package was not delivered, but also verifies that the package is handed to the actual purchaser. It is also contemplated that a receptionist or family member could also have their facevector on file and be authorized to receive packages for third parties, such as co-workers or family.
Another exemplary environment of use is with a passport or visa, collectively a passport. Prior art e-passports contain an electronic chip. The chip holds the same information that is printed on the passport's data page: the holder's name, date of birth, and other biographic information. An e-passport also contains a biometric identifier. The United States requires that the chip contain a digital photograph of the holder. All e-passports issued by the United States have security features to prevent the unauthorized reading or “skimming” of data stored on the e-passport chip.
As discussed herein, UR™ codes contain encoded face data from a liveness-proven (optional) 3D facemap or a trusted 2D face photo and are a significant improvement for remote KYC (know your customer)/IDV (identity verification) and user privacy. Security is enhanced significantly versus using a photo of a photo ID for remote IDV because internal cryptographic proofs (keys, hashes, etc.) ensure the UR™ code has not been tampered with, and the facevector data enables significantly more accurate face matching or validation than can be performed using a photo on a photo ID document as used in prior art passports, even if the image data is directly encoded into an RFID chip in the passport.
As discussed above, the UR™ code is a camera-scannable graphical code that can be printed onto any document. Thus, the UR™ code may be a 2D matrix graphic which defines encoded data from a FaceTec 3D facevector, which comprises data derived from an image of a person along with some hashed personal data. Any image of the user may be used, such as a previously captured image on a driver license, or an image of the person captured proximate in time, such as when the person is passing through a border and presenting their passport for entry to a country.
This provides security equal to or better than an NFC passport chip, with similar internal security features, but the face data is not humanly viewable, protecting privacy and preventing bias, etc. In addition, the UR™ code can be printed on a credit report, credit card, health insurance card, diploma, state issued ID, customers paperwork, shipping label, event tickets, voting ballots, as well as a passport as discussed herein. It can also be transmitted digitally or optically, stored in a wallet, on a blockchain etc., while protecting (not revealing) the image of the person from human viewing and recognition.
In one embodiment, the passport chip (NFC chip in or associated with the passport) contains a file, such as a document security object (DSO), that stores hash values of all files stored in the chip (including but not limited to the person's picture, fingerprint data, etc.) and a digital signature of these hashes. The digital signature is made using a document signing key which itself is signed by a country signing key. If a file in the chip (e.g., the picture) is changed, this can be detected since the hash value is incorrect. This provides an additional level of security. The readers may be provided access to all used public country keys to check whether the digital signature is generated by a trusted country. The UR™ code and its associated processing can replace or supplement traditional passport identity and security features. In one embodiment, the hash of the SOD photo may not be stored, but the hash of the facevector data itself could be part of the hash. The facescan, the 2D photo, or both, may be encoded into the UR™ code.
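The hash-manifest idea behind the security object can be illustrated roughly as below. This is a simplified stand-in, assuming SHA-256 digests; real e-passport security objects follow ICAO specifications and also carry a digital signature over the manifest, which is not reproduced here.

```python
import hashlib

def verify_chip_files(files: dict, manifest_hashes: dict) -> bool:
    """files maps file name -> bytes read from the chip; manifest_hashes maps
    file name -> expected SHA-256 hex digest from the (signed) security object."""
    for name, blob in files.items():
        if hashlib.sha256(blob).hexdigest() != manifest_hashes.get(name):
            return False   # e.g., the picture or facevector file was altered
    return True
```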
It is contemplated that in addition to the data derived from a person's face, additional data may be incorporated into the UR™ code or used in connection with a hash function to form the UR™ code. This data includes but is not limited to the following:
In one embodiment, the following may be the suggested default supplemental data fields:
To supplement the passport security features, as many of these security features and data fields may be included as needed to meet or exceed prior art passport security levels. In addition to the features described herein, the following may also be included. One such feature is active verification (VA), which helps to prevent cloning of biometric passports. Passive verification (PA) may be included to detect chip modifications. Basic access control (BAC) protects the channel of communication between the passport chip and the e-passport reader. Extended access control (EAC) is an extra safeguard for iris scans and fingerprint data.
It is also contemplated that the disclosed method and apparatus may be used as, or to enhance, online remote driver's license and passport renewals with the option for liveness/three-dimensionality checks during the renewal process. The renewal process can compare and match newly collected photos to two-dimensional photos, 3D facemaps, or facevectors which are on file with the ID or passport issuing entity. This provides a greater level of security during the renewal process as compared to a system which does not use images to verify identity during an online renewal. In addition, numerous ICAO (International Civil Aviation Organization) factors may be checked.
The newly captured images of the user, which were captured by the person using their web camera or phone camera, can be isolated from the background in the image, and then the person's face can be placed on an ICAO compliant background, such as is suitable for a passport. This provides the benefit of an updated photo for the document picture, verified by liveness checks and UR™ code validation, which more accurately reflects what the person looks like at the time of renewal. Thus, the updated photo of the person that was used in identity verification will be the official passport or driver's license photo and printed on the ID or passport. In addition, this photo may be included inside the e-passport chip and used to create an updated UR™ code. The UR™ codes can be created based on the updated photo, which includes or comprises the facevector data, or the UR™ code can be encoded from the 2D ICAO-compliant output image.
As discussed herein, imaging of the person (or of another asserting to be the person) and processing of the UR™ code occur as described herein. The processing may occur on a number of different devices and structures. In one embodiment, as discussed herein, the processing occurs on a neural network. In general, a neural network is a computational structure that comprises interconnected nodes that process data, together with feedback paths and error analysis that allow the system to learn from training data. The neural network enables tasks such as pattern recognition and decision making in machine learning. Neural networks can be embodied purely in hardware, but such a structure can be inflexible. Neural networks are typically embodied as software code (machine executable instructions) stored in a non-transitory state in a memory. The software code is executed by one or more processors, ASICs, DSPs, or other processing elements. As such, the elements that perform the neural network functions are shown as flow charts and processing steps which are typical of software code functions that execute on a processor or other purpose-built device.
The nodes and layers may include forward propagation and back propagation. The process of passing data from one layer to the next defines this neural network as a feedforward network and comprises forward propagation of the data. As discussed below, this may include weighting of the inputs and/or outputs from a node to establish a greater or lesser emphasis on that node. The neural network may also be trained or reinforced with back propagation of data, node outcomes, or the final outcome from node 4530.
Backpropagation allows the system to calculate and attribute the error associated with each node, allowing the system to adjust, and fit the parameters of the model(s) appropriately. In the case of image analysis, training may occur using a known dataset of images, such as for example millions of image sets which are known to not be the same person and other image sets which are known to be the same person. Outcome errors from the processing of these datasets may be fed back into the system to allow or force the neural network to increase or decrease weightings, or the processing algorithms within each node, to obtain a correct outcome.
Each node may be equipped with a processing algorithm, logic, or other processing to process and analyze the received data, which may include input data from numerous other nodes as well as feedback data from numerous other nodes. For image processing as described herein, the inputs may be the image file, derived characteristics of facial features from the image, or other data. Different embodiments may be configured or programmed differently. It is contemplated that one of ordinary skill in the art concerning image processing and/or neural networks will be able to program the node algorithms to arrive at facevector datasets which function as described herein.
As shown in
The processing node 4616 receives multiple weighted inputs and may be configured to perform recursive processing, multiplication and addition, or any other type of processing function upon the multiple weighted inputs to form the facevectors. The neural network training can adjust the node processing, subject to predefined parameters. In this embodiment, the processing node 4616 performs a sum function with an optional weighting function.
The activation function element calculates the output value for the neuron. This output value is then passed on to the next layer of the neural network through another synapse. Also shown are one or more feedback paths 4640 from the activation function elements to either or both of the processing node 4616 and the weighting elements 4612. Although only one feedback path 4640 is shown, each weighting element may receive a feedback signal which can be used to update the weighting factor. An output 4624 provides an output from the node, such as to a subsequent node. The system of
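The weighted-input, sum, and activation behavior described for elements 4612, 4616, and 4624 can be sketched as a single node as follows. ReLU is used only as an example activation; the disclosure does not commit to a particular activation function.

```python
import numpy as np

def node_output(inputs: np.ndarray, weights: np.ndarray, bias: float = 0.0) -> float:
    """One node: weighted sum of the inputs followed by an activation function."""
    z = float(np.dot(inputs, weights) + bias)   # processing node: sum of weighted inputs
    return max(0.0, z)                          # activation function element (ReLU, illustrative)
```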
At step 4720 the image data is subject to face detection to detect if a face is contained within the image data. Face detection is known by those of skill in the art and as such is not described herein. If a face is not detected, the operation will end, without progressing to any additional steps. If a face is detected, then the operation advances to a step 4724. At step 4724 face landmarking occurs. Landmarking of a face is the detection, identification, and locating of landmarks. Landmarks are points of interest on a face, such as nose outline points, eye location points, facial edges, or other unique facial characteristics which can be identified in the facial image.
At a step 4728, face affining and cropping occurs. The step of affining performs rotation and alignment to correct any side tilt of the face or camera rotation issues in the image, while cropping occurs to isolate the face and any other portions of interest in the image. Cropping the image to isolate the face reduces the size of the image data, which in turn reduces the processing burden for subsequent steps. Thereafter, at a step 4732, minimum requirement checks are performed on the image to verify that the remaining image data meets the minimum requirements for subsequent processing. These requirements include but are not limited to resolution, clarity, focus, noise, lighting, or any other image characteristics. Step 4736 may occur prior to or after step 4728.
If the image meets the minimum requirements for subsequent processing, then the operation advances to step 4736. At step 4736, the affined and cropped image data is provided as an input into a DNN (deep neural network) model. The DNN processes the image data using the weighting factors and node algorithms, with optional feedback or recursive processing such as during training. Different algorithms and models may be used to arrive at a unique code, such as, for example, a facevector. DNN processing is described herein and is known by those of skill in the art. The output of the DNN model is an n-dimensional feature vector (which may be referred to as a facevector), or the outputs of the DNN model may be assembled into the n-dimensional feature vector (facevector) at a step 4740. Facevectors are described above. The value of N may be any whole number. The value of N may be defined as the number of characteristics, features, or attributes which are analyzed in the image data by the DNN to generate the facevectors.
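The detect, landmark, align/crop, quality-check, and embed sequence of steps 4720 through 4740 could be wired together roughly as below. The callables and the size-based quality check are placeholders, not named components of the disclosed system.

```python
import numpy as np

def meets_minimum_requirements(face: np.ndarray, min_side: int = 112) -> bool:
    """Step 4732 stand-in: real checks would also cover focus, noise, and lighting."""
    return min(face.shape[:2]) >= min_side

def image_to_facevector(image, detect_face, find_landmarks, affine_and_crop, dnn_model):
    """Sketch of steps 4720-4740; the callables are placeholders for the
    detection, landmarking, alignment, and DNN model components."""
    box = detect_face(image)
    if box is None:                                   # step 4720: no face, processing ends
        raise ValueError("no face detected")
    landmarks = find_landmarks(image, box)            # step 4724: landmarking
    face = affine_and_crop(image, landmarks)          # step 4728: rotate, align, crop
    if not meets_minimum_requirements(face):          # step 4732: quality gate
        raise ValueError("image fails minimum quality requirements")
    vec = np.asarray(dnn_model(face), dtype=np.float32)  # steps 4736-4740: n-dimensional facevector
    return vec / np.linalg.norm(vec)
```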
At a step 4744, processing on the two or more facevectors occurs to determine distances between the facevectors. In this embodiment, one facevector is the UR code stored facevector (from the UR code 4708) and the other is the image derived facevector from the image 4704. In one embodiment, the distance may be a difference in values between the vectors, such as, for example but not limitation, differences in vector direction, magnitude, and location. Although shown with two facevectors (feature vectors) being compared, it is contemplated and disclosed that more than two may be compared, such as when a facevector from a photo ID or a prior image is also processed. Computing differences between two or more multi-dimensional vectors in n-dimensional space is known in the art and as such is not described in detail. At a step 4748, the facevector distances (or differences) are output from the processing system, such as by using software code executing on a processor configured to calculate the difference or distance between facevectors.
Thereafter, at a step 4752, the distances (differences) between the facevectors are assigned a similarity value or match level based on a comparison or mapping of the distances to accuracy thresholds. In one embodiment the match levels are 0-15, which are levels that define the accuracy of the match based on the differences in the distances. The level correlates to accuracy levels, such as by way of example the following accuracy levels. In one embodiment, the levels and FAR accuracy for 3D-to-3D face matching are as follows: match level 15: 1/125,000,000 FAR; match level 14: 1/95,000,000 FAR; match level 13: 1/70,000,000 FAR; match level 12: 1/50,000,000 FAR; match level 11: 1/25,000,000 FAR; match level 10: 1/12,800,000 FAR; match level 9: 99.99995% (1/2,000,000 FAR); match level 8: 99.9999% (1/1,000,000 FAR); match level 7: 99.9998% (1/500,000 FAR); match level 6: 99.999% (1/100,000 FAR); match level 5: 99.99% (1/10,000 FAR); match level 4: 99.9% (1/1,000 FAR); match level 3: 99.8% (1/500 FAR); match level 2: 99.6% (1/250 FAR); match level 1: 99% (1/100 FAR); and match level 0: non-match. Additional match levels up to level 15 are contemplated. In other embodiments, other scales or similarity indicators may be used to allow entities to accurately understand the degree of similarity and thus the likelihood of identity match or person-to-UR code association.
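The mapping of a similarity value onto match levels could be implemented along these lines. The threshold table is deliberately left as an input because the concrete cut points have to be calibrated against the FAR figures listed above and are not specified in this disclosure.

```python
def match_level(similarity: float, thresholds: list[tuple[float, int]]) -> int:
    """Map a similarity value to a match level (0-15).

    `thresholds` is a list of (minimum_similarity, level) pairs, ordered from the
    strictest level (15) downward; the actual values are calibration-dependent.
    """
    for minimum, level in thresholds:
        if similarity >= minimum:
            return level
    return 0  # non-match
```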
While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of this invention. In addition, the various features, elements, and embodiments described herein may be claimed or combined in any combination or arrangement.
Number | Date | Country
---|---|---
63457085 | Apr 2023 | US