For security purposes and other reasons, electronic devices, systems, and services may be protected by one or more authentication protocols, such as a password authentication protocol. In an example password authentication protocol, an individual may supply a username and password to a service provider (e.g., his or her email provider). The service provider may store this information in association with the individual's account. When the individual wishes to access the account, he/she may log in to the service by providing his/her username and password through a relevant portal such as a website or other application. Similarly, a key code or other type of password may be used to protect one or more rooms or areas from unauthorized access.
Although password authentication protocols are useful, they are becoming increasingly cumbersome as the number of user accounts and the need to use secure (e.g., complex and/or hard-to-remember) passwords increase. Such protocols also frequently require the storage of a username and password combination on a third-party system such as an authentication server. Because authentication servers often store copious amounts of user account information, they may be considered a prime target for attack by malicious software and/or a hacker. If either or both of those entities successfully attack and gain access to the authentication server, the usernames and passwords stored in the server may be compromised.
Biometric authentication protocols have been considered as an alternative to passwords for user identity verification. In this regard a variety of biometric authentication protocols have been developed on the basis of specific biometric features, such as fingerprints, facial recognition, speech recognition, retina/iris scanning, and hand geometry. While existing biometric authentication protocols may be useful, their effectiveness may be limited by various factors such as the ability to circumvent the technology (e.g., by presenting a static image of a face to a camera), the need for expensive custom hardware, etc. Such protocols may also require users to engage in precise and repetitive actions so that a suitably accurate measurement of biometric features may be performed, potentially degrading user experience.
Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which:
The present disclosure generally relates to technologies for learning body part geometry, and to biometric authentication technologies using the same. According to one aspect, the technologies include systems, methods and computer readable media that are configured to determine one or more biometric features of a user. In some embodiments the biometric feature(s) may be determined by leveraging a calibrated computer model of a body part of the user, as well as depth information in a depth image of the body part. As will be described in detail below, the technologies can use the biometric features to generate a biometric template, e.g., in an enrollment process. Once a biometric template has been created, the technologies may use the biometric template to verify the identity of a user via biometric authentication.
Various aspects and examples of the technologies of the present disclosure will now be described. It should be understood that while the technologies of the present disclosure are described herein with reference to illustrative embodiments for particular applications, such embodiments are exemplary only and that the invention as defined by the appended claims is not limited thereto.
Indeed for the sake of illustration the present disclosure focuses on embodiments in which the technologies described herein are used to determine biometric features of a human hand, to create a biometric template including such features as biometric reference information, and to perform biometric authentication. It should be understood that such discussions are for the sake of illustration only, and that the technologies described herein may be used in other contexts and with body parts other than a hand. Those skilled in the relevant art(s) with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope of this disclosure, and additional fields in which embodiments of the present disclosure would be of utility.
The technologies described herein may be implemented using one or more electronic devices. The terms “device,” “devices,” “electronic device” and “electronic devices” are interchangeably used herein to refer individually or collectively to any of the large number of electronic devices that may be used as a biometric authentication system consistent with the present disclosure. Non-limiting examples of devices that may be used in accordance with the present disclosure include any kind of mobile device and/or stationary device, such as cameras, cell phones, computer terminals, desktop computers, electronic readers, facsimile machines, kiosks, netbook computers, notebook computers, internet devices, payment terminals, personal digital assistants, media players and/or recorders, security terminals, servers, set-top boxes, smart phones, tablet personal computers, ultra-mobile personal computers, wired telephones, combinations thereof, and the like. Such devices may be portable or stationary. Without limitation, in some embodiments the technologies herein are implemented in the form of a system for generating a biometric template or a system for performing biometric authentication, wherein such systems include or are in the form of one or more cellular phones, desktop computers, electronic readers, laptop computers, security terminals, set-top boxes, smart phones, tablet personal computers, televisions, or ultra-mobile personal computers.
For ease of illustration and understanding, the specification describes and the FIGS. depict various methods and systems as implemented in or with a single electronic device. It should be understood that such description and illustration is for the sake of example only and that the various elements and functions described herein may be distributed among and performed by any suitable number of devices. For example, the present disclosure envisions embodiments in which a first electronic device is configured to perform an enrollment process in which biometric features of a body part are determined and incorporated into a biometric reference template, whereas a second electronic device is configured to perform biometric authentication operations that utilize the biometric reference template generated by the first device.
The term “biometric information” is used herein to refer to observable physiological or behavioral traits of human beings (or other animals) that may be used to identify the presence of a human being (or other animal) and/or the identity of a specific human being (or other animal). Non-limiting examples of biometric information include biometric features such as biosignals (brain waves, cardiac signals, etc.), ear shape, eyes (e.g., iris, retina), deoxyribonucleic acid (DNA), face, finger/thumb prints, gait, hand geometry, handwriting, keystroke (i.e., typing patterns or characteristics), odor, skin texture, thermography, vascular patterns (e.g., finger, palm and/or eye vein patterns), skeletal parameters (e.g., joint measurements, range of movement, bone length, bone contours, etc.) and voice of a human (or other animal), combinations thereof, and the like. Such features may be detectable using one or more sensors, such as an optical or infrared camera, iris scanner, facial recognition system, voice recognition system, finger/thumbprint device, eye scanner, biosignal scanner (e.g., electrocardiogram, electroencephalogram, etc.), DNA analyzer, gait analyzer, combinations thereof, and the like.
Without limitation, in some embodiments the technologies described herein utilize biometric features of a first body part of a human in various operations, such as the generation of a biometric template and the performance of biometric authentication. For example, in such embodiments the first body part may be a human hand, and the biometric features may be or include features of the hand. Non-limiting examples of such features include skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
Example skeletal features of a hand include but are not limited to a circumference of a knuckle and/or a joint of said hand, a length of a knuckle and/or joint of said hand, a length of a finger bone of said hand, a length of a bone extending between two or more joints of a finger of said hand, or one or more combinations thereof.
Example tissue features of a hand include but are not limited to a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of said hand, or a combination thereof.
Example surface features of said hand include but are not limited to a palm print of said hand, a finger print of a finger of said hand, a contour map of at least a portion of said hand, or a combination thereof.
The term “biometric reference template” is used herein to refer to a data structure containing biometric reference information of a user (e.g., biometric features of a first body part of the user), particularly when the user is the target of a biometric authentication protocol. The term “biometric reference information” is used herein to refer to biometric information (features) of a user and/or one or more body parts of a user that is/are contained in a biometric reference template.
In various instances the present disclosure describes embodiments in which biometric information of a first body part (e.g., a hand) is used, e.g., to develop a biometric template and/or to perform biometric authentication of a user. As will be described later, in some embodiments the biometric template may include “supplemental biometric reference information.” In such contexts it should be understood that the term “supplemental biometric information” is used to denote biometric information of the user that is not obtained from the first body part. For example, supplemental biometric information may include biometric information obtained from at least a second body part of a user, such as the user's face, eyes, mouth, teeth, combinations thereof, and the like. Alternatively or additionally, supplemental biometric reference information may include the gait of the user, the voice of the user, etc.
With this in mind, the term “supplemental biometric reference information” is used to refer to supplemental biometric information (e.g., features) that is included in a biometric template. In such instances it should be understood that the biometric reference information and supplemental biometric reference information may be contained in the same or different biometric reference templates.
The term “pose” is used herein to refer to the configuration of a body part. In the case of a hand, for example, the term “pose” refers to the overall arrangement of the elements of the hand, such as the fingers, thumb, palm, etc., as they may be presented to a system consistent with the present disclosure. Similarly in terms of other body parts such as a foot or a face, the term pose refers to the overall arrangement of the elements of the foot (e.g., the sole, heel, toes, arch, etc.) or the face (e.g., the eyes, nose, mouth, teeth, chin, eyebrows, etc.), as they may be presented to a system consistent with the present disclosure.
Unless otherwise stated to the contrary herein, the terms “substantially” and “about” when used in connection with a value or a range are interchangeably used herein to refer to +/−5% of the indicated amount or range. As used in any embodiment herein, the term “module” may refer to software, firmware, and circuitry configured to perform one or more operations consistent with the present disclosure. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. “Circuitry”, as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or software and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms a part of one or more devices, as defined previously. In some embodiments one or more modules described herein may be in the form of logic that is implemented at least in part in hardware to perform one or more object detection and/or filtering operations described herein.
One aspect of the present disclosure relates to methods for determining biometric features of a user and, more particularly, to methods of producing a biometric reference template from biometric features of at least one body part of a user. In this regard reference is made to
Once one or more depth image(s) have been captured (or if depth images of a body part of the user are provided in some other manner) the method may proceed to block 103. Pursuant to this block, a calibrated computer model (“calibrated model”) of the body part(s) under consideration may be developed. As will be described in detail later, generation of the calibrated model may entail comparing the depth information in the depth image(s) to one or more hypotheses produced by an un-calibrated model of the body part. The result of such comparison may be the generation of calibration parameters which may be used to fit the un-calibrated model to the body part in question. In this way, the technologies of the present disclosure may develop a calibrated model that is customized to the body part(s) under consideration. In any case, the calibrated model may be understood to provide an accurate representation of the body part(s) under consideration. Indeed in some embodiments the calibrated model can provide an accurate model of one or more of the skeletal features, tissue features, and surface features of the body part(s) under consideration.
Once a calibrated model of the body part(s) under consideration is generated, the method may proceed to block 104, wherein biometric features of the body part may be determined. As will be described in detail below, determining the biometric features of the body part in question in some embodiments may entail using the calibrated model to identify the location of one or more semantic points (i.e., known features) of the body part(s) within a depth image. Biometric information may then be determined based at least in part on one or more selected semantic points, e.g., from the calibrated model, the depth information in the depth image, or a combination thereof.
Once one or more biometric features of the body part(s) in question is/are determined the method may proceed to block 105, wherein one or more biometric templates may be generated. As will be described in detail later, production of a biometric template in some embodiments may entail incorporating the biometric features determined pursuant to block 104 into a data structure as biometric reference information. Although any suitable data structure may be employed, in some embodiments the data structure is in the form of a database. In some embodiments the biometric reference information in the biometric reference template may be supplemented with other information, such as supplemental biometric reference information.
Once a desired number of biometric reference templates have been produced the method may proceed to block 106 and end. As such,
The present disclosure will now proceed to describe features of various elements of the method of
Turning specifically to optional block 102 of
Depth cameras are sometimes referred to as three-dimensional (3D) cameras. A depth camera may contain a depth image sensor, an optical lens, and an illumination source, among other components. The depth image sensor may rely on one of several different sensor technologies. Among these sensor technologies are time-of-flight (“TOF”) technology (including scanning TOF or array TOF), structured light, laser speckle pattern technology, stereoscopic cameras, active stereoscopic sensors, depth-from-focus technologies, and depth-from-shading technologies. Most of these techniques rely on active sensors, in the sense that they supply their own illumination source. In contrast, passive sensor techniques, such as stereoscopic cameras, do not supply their own illumination source, but depend instead on ambient environmental lighting. In addition to depth information, such cameras may also generate color data, in the same way that conventional color cameras do, and the color data may be combined with the depth information for processing.
The depth information generated by depth cameras may have several advantages over data generated by conventional two-dimensional (“2D”) cameras. For example, depth information can simplify the problem of segmenting the background of an image from objects in the foreground. Depth information may also be robust to changes in lighting conditions, and can be used effectively to interpret occlusions. Using one or more depth sensors such as depth cameras, it is possible to identify and track a body part of a user in real-time, such as one or both of the user's hands and/or fingers. In this regard, the following describes methods that employ depth images to track one or more body parts.
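By way of illustration, the following is a minimal sketch (in Python, assuming a millimeter-scale depth frame and an illustrative distance threshold) of how depth information might be used to segment a foreground body part from the background:

```python
# Minimal sketch: segmenting a foreground body part from a depth image by
# simple depth thresholding. The threshold and frame contents are assumptions
# for illustration; a real system would tune these per sensor and scene.
import numpy as np

def segment_foreground(depth: np.ndarray, max_distance_mm: float = 800.0) -> np.ndarray:
    """Return a boolean mask of valid pixels closer than max_distance_mm.

    Pixels with depth 0 are treated as invalid (no return from the sensor).
    """
    valid = depth > 0
    return valid & (depth < max_distance_mm)

# Example: a hand held ~0.5 m from the camera stands out from a distant wall.
depth_frame = np.full((240, 320), 2500.0)   # background at 2.5 m
depth_frame[100:140, 150:180] = 500.0       # hand "blob" at 0.5 m
mask = segment_foreground(depth_frame)
print(mask.sum(), "foreground pixels")
```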
As may be understood from the foregoing, one or more depth images may be obtained by imaging the body part(s) of a user under consideration with a depth camera. Alternatively or additionally, one or more depth image(s) may be obtained from another source. For example, one or more depth images may be obtained from a (optionally verified) database of depth images of one or more body parts of a user. In such instances optional block 102 may not be required (and thus may be omitted), and method 100 may include operations in which one or more depth images are acquired from the database, e.g., via wired and/or wireless communication.
Once one or more depth images have been acquired the method may proceed to block 103, pursuant to which a calibrated model of the body part(s) in question is generated. In this regard reference is made to
Before describing the elements of
One non-limiting example of a 3D skinned hand model that may be used in accordance with the present disclosure is briefly described below for ease of understanding. In general, the 3D hand skeleton model may be in the form of a hierarchical graph, where the nodes of the graph represent the skeletal joints of the hand and the edges correspond to the bones of the skeleton of the hand. Each bone in the skeleton may have a fixed length and may be connected to other bones by a joint, each of which may rotate in three or fewer dimensions. The model is thus configurable and able to accurately reproduce the movements of a human hand. Furthermore, constraints may be imposed on the rotation of the joints, e.g., to restrict movements of the model skeleton to the natural movements of the human hand. For example, in some embodiments one or more joints of the model skeleton may be constrained to one or two dimensions, e.g., so as to mimic the movement of certain joints in the human hand.
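The following sketch illustrates one way such a hierarchical skeleton graph might be represented in code. It is an assumption-laden illustration, not the model of the disclosure itself: the joint names, bone lengths, and rotation limits are hypothetical values chosen only to show fixed-length bones and per-joint rotation constraints.

```python
# Illustrative hierarchical hand-skeleton graph: nodes are joints, edges are
# fixed-length bones, and each joint carries per-axis rotation limits that
# confine the model to natural hand motion.
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    bone_length_mm: float                    # fixed length of the bone ending at this joint
    rot_limits: tuple = ((0.0, 0.0),) * 3    # (min, max) radians for each of 3 rotation axes
    children: list = field(default_factory=list)

    def add_child(self, child: "Joint") -> "Joint":
        self.children.append(child)
        return child

# A one-finger fragment of the hierarchy: wrist -> knuckle -> finger joints.
wrist = Joint("wrist", 0.0)
mcp = wrist.add_child(Joint("index_mcp", 90.0, ((-1.6, 0.3), (-0.3, 0.3), (0.0, 0.0))))
pip = mcp.add_child(Joint("index_pip", 40.0, ((-1.9, 0.0), (0.0, 0.0), (0.0, 0.0))))  # 1-DOF hinge
dip = pip.add_child(Joint("index_dip", 25.0, ((-1.4, 0.0), (0.0, 0.0), (0.0, 0.0))))  # 1-DOF hinge
```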
In addition to the skeleton the models used herein may also contain a mesh. In general, the mesh may be a geometrical structure of vertices and associated edges that are constrained to move based on the movements of the skeleton joints. In some embodiments the mesh may be composed of polygons. For example, a mesh corresponding to the fingers of a hand may be composed of cylinders, spheres, combinations thereof, and the like, which may be modeled from polygons. It is noted however that a cylinder-based model may provide only a rough approximation of the actual shape of a human hand and thus, in some embodiments the cylinder-based model may be relaxed to produce a 3D model geometry that more closely approximates that of a human hand.
In some embodiments the geometrical structure of the mesh may be “skinned” so that movements of the mesh vertices are controlled by associated joints. In this regard it is noted that various methods of skinning are known, and any suitable method may be used to skin the models used in accordance with the present disclosure.
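For concreteness, the sketch below shows one widely known skinning technique, linear blend skinning, under assumed data layouts; it is offered only as an example of how mesh vertices can be driven by joint transforms, not as the specific skinning method contemplated by the disclosure.

```python
# Linear blend skinning sketch: each mesh vertex is deformed by a weighted
# blend of its associated joints' transforms. Array shapes are assumptions.
import numpy as np

def skin_vertices(rest_vertices, joint_transforms, weights):
    """rest_vertices: (V, 3); joint_transforms: (J, 4, 4) matrices already
    composed with the inverse bind poses; weights: (V, J), rows summing to 1.
    Returns the deformed (V, 3) vertex positions."""
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])            # (V, 4)
    per_joint = np.einsum("jab,vb->jva", joint_transforms, homo)  # (J, V, 4)
    blended = np.einsum("vj,jva->va", weights, per_joint)         # (V, 4)
    return blended[:, :3]
```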
As explained above the models of the present disclosure may model the skeleton of a body part in question, such as a human hand. To illustrate this concept reference is made to
As may be appreciated, any or all of skeletal parameters 801 may differ from person to person. Moreover, the surface features of a hand may also differ from person to person. For example, skeletal parameters such as those shown in
As will be discussed in detail below, calibration of the model may involve adjusting the lengths of the model skeleton to fit the depth information in the depth image, i.e., the depth information corresponding to the body part in question (e.g., a user's hand). More specifically, during calibration there may be two objectives, namely: 1) to adjust the skeleton parameters of the model to fit the body part in question; and 2) to accurately compute the 3D positions of the joints (e.g., hand joints) of the body part.
As shown in
Assuming an initialization pose is used, pursuant to optional block 201 one or more gesture detection operations may be employed to determine whether the body part in question is in the initialization pose. Various gesture recognition techniques can be used to perform this task. For example, in some embodiments template matching and Haar-like feature-based classifiers (and, by extension, cascade classifiers) are used to detect whether the body part is in the initialization pose. Alternatively, some implementations may detect explicit features of the hands, such as the shapes of individual fingers, and then combine multiple individual features to recognize the hand in the image. In many instances the gesture detection operations may be facilitated by combining the depth image with other image data, such as color (red, green, blue) data, infrared data, amplitude data, combinations thereof, and the like, if they are available. By way of example, depth information may be combined with amplitude data for gesture recognition purposes. In some embodiments, gesture recognition may be performed by generating a silhouette of the body part in question, and analyzing the contour of the silhouette to determine the pose of the body part; one such approach is sketched below.
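A minimal sketch of the silhouette/contour approach follows, assuming OpenCV is available and using illustrative thresholds; the idea is that an open-palm initialization pose produces deep convexity defects (the valleys between spread fingers) in the hand contour.

```python
# Sketch: detect an open-palm initialization pose from a depth silhouette by
# counting deep convexity defects between extended fingers. Thresholds are
# illustrative assumptions, not tuned values.
import cv2
import numpy as np

def is_open_palm(depth: np.ndarray, max_dist_mm: float = 800.0) -> bool:
    mask = ((depth > 0) & (depth < max_dist_mm)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    hand = max(contours, key=cv2.contourArea)       # largest blob assumed to be the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return False
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return False
    # Defect depth is stored as a fixed-point value (1/256 pixel units).
    deep = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20.0)
    return deep >= 4                                 # ~4 valleys between 5 spread fingers
```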
While some embodiments of the present disclosure initiate the generation of a calibrated model with a determination of whether the body part in question is in an initialization pose, it should be understood that such a determination is not required. Indeed the present disclosure envisions and encompasses embodiments in which a calibrated model may be developed without the use of an initialization pose, and/or without a determination that the body part under consideration was in the initialization pose when the depth frame(s) were acquired. For example, in some embodiments no initialization pose is used, and skeleton tracking may be employed to track the body part under consideration. As the body part is tracked, depth images of the body part (e.g., produced by a depth camera) may be analyzed to determine calibration parameters (discussed below) which may be applied to calibrate the model to the body part under consideration.
When an initialization pose is detected or if detection of an initialization pose is not required the method may proceed to block 202, wherein a multiple hypothesis method may be employed to iteratively adjust the parameters of the model skeleton (e.g., on an ad hoc basis) until they sufficiently match the depth information in the depth image obtained from the depth sensor. More specifically, in some embodiments features of the body part in question (e.g., a hand) may be identified from the depth image, and the parameters of the skeleton model may be adjusted based at least in part on those identified features. Color (red, green, and blue), infrared, and/or amplitude data may also be used in conjunction with the depth images to detect features of the body part in question. In any case, the pose of the body part, including articulation of the skeleton joints may be computed as part of calibrating the model.
More specifically pursuant to block 202, a multiple hypothesis method may be employed to calibrate the model. In the multiple hypothesis methods of the present disclosure, the parameters of an (un-calibrated) model of the body part (e.g., hand) in question may be adjusted on an ad hoc basis to produce a plurality of hypotheses for the skeleton parameters of the model, such as skeleton parameters 801 of
Once one or more hypotheses have been developed (or after a plurality of hypotheses have been developed) the method may proceed to block 203, wherein each hypothesis is tested against the depth information from the depth image under consideration. In some embodiments, each hypothesis (e.g., each depth map) may be evaluated to determine the degree to which it is similar to the depth information from the depth image. Although any suitable method may be used to perform this comparison, in some embodiments the comparison is performed using an objective function and/or a motion model.
In any case, the method may proceed to block 204, wherein a determination may be made as to whether one or more of the hypotheses substantially matches the depth information in the depth image. If not, the method may loop back to optional block 102, wherein additional depth images may be acquired, optionally from the body part in an initialization pose. In any case if a hypothesis substantially matching the depth information of the depth image is not found, the method may develop additional hypotheses pursuant to block 202 for comparison to the depth information in one or more depth image(s). If a hypothesis that sufficiently matches the depth information is found, however, the hypothesis that most closely matches the depth information may be considered a “best” hypothesis and the method may proceed to block 205. Pursuant to that block, calibration parameters that may be applied to calibrate the model to the body part in question may be determined based at least in part on the best hypothesis. The calibration parameters for the model may then be stored, e.g., in a database, optionally in association with a user profile. In this context the term “calibration parameters” refers to the skeletal parameter values that were used to generate the best hypothesis.
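The loop of blocks 202-205 might be sketched as follows. This is a simplified, random-perturbation illustration of the multiple hypothesis idea; render_depth() is a hypothetical stand-in for whatever forward model produces a depth map from a set of skeleton parameters, and the perturbation scale and match threshold are assumptions.

```python
# Sketch of multiple-hypothesis calibration: perturb skeleton parameters,
# score each hypothesis against the observed depth, keep the best one.
import numpy as np

def calibrate(observed_depth, base_params, render_depth, n_hypotheses=64,
              scale=0.05, match_threshold_mm=5.0, rng=None):
    rng = rng or np.random.default_rng()
    best_params, best_err = None, np.inf
    for _ in range(n_hypotheses):
        # Adjust each skeletal parameter (e.g., a bone length) by up to ~5%.
        hyp = base_params * (1.0 + scale * rng.uniform(-1.0, 1.0, base_params.shape))
        hyp_depth = render_depth(hyp)                 # hypothesis rendered as a depth map
        valid = (observed_depth > 0) & (hyp_depth > 0)
        if not valid.any():
            continue
        err = float(np.mean(np.abs(observed_depth[valid] - hyp_depth[valid])))
        if err < best_err:
            best_params, best_err = hyp, err
    # The best hypothesis becomes the calibration parameters only if it
    # substantially matches the depth information (block 204).
    return (best_params if best_err <= match_threshold_mm else None), best_err
```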
As may be appreciated from the foregoing, the models described herein may be calibrated such that they provide an accurate 3D representation of one or more features of a body part of interest. For example, in the case of a hand, calibration parameters may be applied to fit the model to a hand of a user, such that the model accurately represents the skeleton of the user's hand, either alone or in combination with one or more of the tissue features and surface features of the user's hand. Furthermore, once calibrated, the model may be used to accurately track the motion of the user's hand through various configurations.
Once a calibrated model has been obtained the method may proceed from block 103 of
In this regard reference is made to
Semantic points may be identified in any suitable manner. For example in some embodiments semantic points may be computed from a depth image using image processing techniques, which may be any technique that is capable of identifying a region of a body part from an image. These operations may be performed with and/or facilitated by the use of color and/or infrared images.
In some implementations the body part in question is a hand, and semantic points may be identified by detecting edges corresponding to the center axes of the fingers. In such instances those edges may be used to approximate individual fingers by roughly fitting a piecewise continuous line composed of up to three segments, where the three segments correspond to the three bones of a finger. Alternatively or additionally, local maxima may be detected in the depth image or a hand blob (i.e., a portion of the depth image that corresponds to the body part of interest, and which has been segmented from the background) thereof and used as semantic points indicating the positions of the fingertips. Local maxima are regions of the depth image where surrounding pixels have values further away from the camera than the center of the region. Local maxima correspond to features indicating, for example, fingertips pointing towards the camera, since the pixels at the periphery of the fingers are generally further away from the camera, and thus may have uniformly higher depth values.
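The local-extremum idea can be sketched as follows, assuming SciPy is available: within the hand blob, a pixel whose depth value is the smallest in its neighbourhood is nearer to the camera than its surroundings and is therefore a fingertip candidate. The window size is an illustrative assumption.

```python
# Sketch: candidate fingertip points as local proximity extrema in a hand
# blob, i.e., pixels closer to the camera than all of their neighbours.
import numpy as np
from scipy.ndimage import minimum_filter

def fingertip_candidates(depth, blob_mask, window=15):
    # Treat invalid/background pixels as infinitely far so they never win.
    d = np.where(blob_mask & (depth > 0), depth, np.inf)
    local_min = minimum_filter(d, size=window)
    # A pixel is a candidate when it is the nearest point in its neighbourhood.
    candidates = (d == local_min) & np.isfinite(d)
    return np.argwhere(candidates)      # (row, col) coordinates of candidates
```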
Alternatively or additionally, semantic points may be determined at least in part using the calibrated hand model. For example, various parameters of the hand model such as the skeletal parameters described above may be associated with known features of a human hand, such as the knuckles, fingertips, base of the palm, etc. Because the location of such features is known in the model and the model is calibrated as discussed above, the location of semantic features corresponding to specific points in the model may be mapped to the depth information. Conversely, semantic points may be determined by image processing the depth image as discussed above, after which such points may be mapped to the calibrated model.
More specifically, one or more of the calibration parameters used to produce the calibrated model may be used to generate a mathematical function describing the relationship of one or more semantic points identified in the model to the depth information in the depth image obtained from the body part in question. Thus for example, one or more knuckles, a portion of the palm, the fingertips, etc. of a hand may be identified in the calibrated model as semantic points, and may be mapped by the calibrated model to specific pixels or groups of pixels in the depth image. Alternative or additionally, semantic points may be identified by image processing the depth image, after which the identified points may be mapped to corresponding points of the calibrated model.
In some embodiments, once a semantic point has been identified and associated with a specific portion of the body part in question, it may be labeled accordingly. For example, semantic points identified as fingertips may be labeled as corresponding to a specific fingertip, e.g., the fingertip of the index finger, of the thumb, etc. In some embodiments, machine learning algorithms are used to label semantic points as specific points of a body part in question, such as specific parts of a hand. Once one or more semantic points have been identified the method may proceed to block 302 of
Without limitation, in some embodiments one or more biometric features may be determined at least in part by analyzing one or more portions of the calibrated model. More specifically, one or more biometric features of the body part may be determined by selecting one or more semantic points of the body part in question, which as noted above may be accurately reproduced and identified in the calibrated model. Once one or more of the semantic points has been identified, one or more biometric features may be calculated, measured, or otherwise determined using the selected semantic point(s) as a point of reference.
By way of example, in instances where the body part under consideration is a hand, semantic points corresponding to each side of the distal knuckle of the pinky may be identified as selected semantic points. This concept is illustrated in
In other non-limiting embodiments, one or more biometric features may be determined at least in part by an analysis of the depth information in the depth image of the body part under consideration. As in the prior example in which the body part is a hand, one or more semantic points may be determined, e.g., by image processing the depth image and/or by mapping one or more semantic features identified in the calibrated model to the depth information of the depth image. In either case, the semantic points may be used as reference points from which one or more biometric features may be determined. For example, image processing techniques may be applied to calculate, measure, or otherwise determine the linear distance (e.g., width) 803 between points 802, e.g., in instances wherein points 802 are selected semantic points. Likewise, as in the previous example, a circumference of the distal knuckle of the pinky may be determined using one or more of points 802 as selected semantic point(s).
While the foregoing examples focus on embodiments in which points 802 are selected semantic points and the biometric features determined include one or both of a linear distance of a knuckle (width) and a circumference of a knuckle, it should be understood that those examples are for the sake of illustration only. Other semantic points may be used as selected semantic points, from which any number and/or type of biometric features may be determined from the calibrated model, the depth image, or a combination thereof. Indeed the present disclosure envisions embodiments in which the features include one or more skeletal features of a body part, tissue features of a body part, surface features of a body part, or one or more combinations thereof.
In some embodiments the body part in question is a hand, and the biometric features include one or more features of the hand, such as but not limited to skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof. In some embodiments the features are skeletal features of the hand, and include or are selected from one or more of a circumference of a knuckle of a joint of the hand, a length and/or width of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, a width of the palm of the hand, combinations thereof, and the like.
Alternatively or in addition to the above noted skeletal features, in some embodiments the body part in question is a hand, and the biometric features include one or more tissue features of the hand. Non-limiting examples of tissue features that may be used include the skin thickness of the hand in at least one region thereof, an average skin thickness of the entire hand, a blood vessel pattern of at least a portion of the hand, or combinations thereof.
In still further embodiments, alternatively or in addition to one or more of the above noted skeletal and tissue features, in some embodiments the body part in question is a hand, and the biometric features include one or more surface features of the hand. Non-limiting examples of surface features that may be used include a palm print of the hand, a contour map of at least a portion of the hand, or combinations thereof.
To further illustrate the foregoing concepts, in some embodiments the body part in question may be a hand, and semantic points correlating to specific points on the hand geometry may be identified pursuant to block 301 of
Pursuant to block 302 of
Regardless of the number of depth bands, the identified depth bands in the first and second depth images may be associated with an index, with corresponding depth bands in each image being identified with the same index. In this regard, two depth bands may be considered corresponding if they are located at the same position relative to common semantic points.
With the foregoing in mind the depth data along each depth band may be sampled and used to calculate the circumference of the finger(s). The resulting set of calculated circumferences may then provide an accurate description of the geometry of the user's body part, and in some embodiments may be sufficient to use as a biometric feature that is sufficient to identify a user, either alone or in combination with other biometric information. Alternatively, the calculated circumferences for each finger may be plotted against their indices and the curvature of that plot may be determined and used as a biometric feature of the user.
Alternatively or additionally, depth data along each depth band may be sampled and used to calculate the 3D world position of each point on the surface of the hand. As above, the resulting set of calculated 3D world positions may be associated with an index, which in turn may be associated with a particular depth band. In any case, the set of 3D world positions may provide a highly detailed description of the hand geometry of a user. As such, all or a portion of the set of 3D world positions may be sufficient to use as a biometric feature of the user, either alone or in combination with other biometric information.
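The following sketch illustrates both measurements for a single indexed band, assuming pinhole-camera intrinsics (fx, fy, cx, cy) and ordered band samples; note that a single view exposes only part of a finger's circumference, which is why front and back images (or accumulated views) may be combined in practice.

```python
# Sketch: back-project the sampled pixels of one depth band to 3D world
# positions, and estimate the visible arc length of the band by summing the
# segment lengths of the resulting 3D polyline.
import numpy as np

def backproject(us, vs, depths, fx, fy, cx, cy):
    """Pinhole back-projection of pixel columns us, rows vs, depths in mm."""
    xs = (us - cx) * depths / fx
    ys = (vs - cy) * depths / fy
    return np.stack([xs, ys, depths], axis=1)           # (N, 3) world points

def band_measurements(band_pixels, depth, fx, fy, cx, cy):
    """band_pixels: (N, 2) ordered (row, col) samples along one depth band."""
    rows, cols = band_pixels[:, 0], band_pixels[:, 1]
    pts = backproject(cols.astype(float), rows.astype(float),
                      depth[rows, cols], fx, fy, cx, cy)
    visible_arc = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    return visible_arc, pts             # arc-length estimate and 3D positions
```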
The foregoing discussion focused on embodiments in which two depth images of a hand are used, as may be suitable for example in a use case in which a user is prompted to present the front and back of a hand to a depth camera. It should be understood that such description is for the sake of example, and that in some embodiments biometric features of the body part in question may be extracted from a plurality of depth images, and in some cases regardless of orientation and/or rotation of the body part.
For example, skeleton tracking may be used to track the motion of a hand of the user, and depth images of the hand may be acquired, e.g., periodically or at random intervals. As the depth images are acquired, the skeleton tracking may also update the calibrated model of the hand. Simultaneously or subsequently, the calibrated model may be used to identify semantic points of the hand, and to map those semantic points to the acquired depth images. Because the calibrated model provides an accurate 3D model of the hand, the same semantic points of the hand may be identified in the model regardless of hand position or orientation. Provided those semantic points are visible to the depth camera acquiring the depth images, the model maps the identified semantic points to the depth data acquired of the user's hand. In other words, the determination of the position of semantic points within depth images of the body part may be rotation and/or orientation invariant, provided the semantic point is visible to the depth camera. As will be described below, this may allow biometric features of the body part of interest to be extracted in an asynchronous manner.
Specifically, in some embodiments semantic features of the body part (e.g., hand) may be identified in a first depth image of the body part, as discussed above. Using those semantic features, the regions between the depth data may be subdivided into bands, and each band may be assigned an index, as discussed above. The data along those bands may then be sampled. As the hand moves and is tracked, another (e.g., second) depth image of the hand may be acquired. The calibrated model may identify the same semantic points of the hand in the second depth image, after which the regions between the semantic points may be subdivided into bands and sampled. As in the first image, each band may be assigned an index. As a result, each band of depth data in the second (or subsequent) image(s) is matched to bands of depth data obtained in the first (or other) depth images.
This process may continue as the hand is moved, until a desired number of depth images have been sampled. At that point, the depth data associated with each index may be used to compute or otherwise determine biometric features of the hand. In the hand use case, for example, the depth data acquired from the depth images that is associated with a particular index may in some embodiments be used to calculate a circumference of a finger at that index. Similarly, 3D world coordinates of each point of the hand along a particular index may be determined from the depth data acquired from the depth images of the hand that is associated with that same index. In this manner, biometric features such as finger circumference, the 3D world position of various points on the surface of the hand, etc. may be determined over a period of time, e.g., as sketched below.
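A minimal sketch of this asynchronous accumulation follows. It assumes a hypothetical upstream step that returns, for each frame, a mapping from band index to the 3D points sampled along that band; the point counts, ordering heuristic, and thresholds are illustrative.

```python
# Sketch: pool per-band 3D samples across frames, then estimate a per-index
# circumference once enough coverage has accumulated.
from collections import defaultdict
import numpy as np

class BandAccumulator:
    def __init__(self, min_points=200):
        self.samples = defaultdict(list)    # band index -> list of (N, 3) arrays
        self.min_points = min_points

    def add_frame(self, banded_points: dict):
        for idx, pts in banded_points.items():
            self.samples[idx].append(pts)

    def circumferences(self):
        feats = {}
        for idx, chunks in self.samples.items():
            pts = np.vstack(chunks)
            if len(pts) < self.min_points:
                continue                    # not enough coverage at this band yet
            # Crude ordering of the accumulated points around the band: sort by
            # angle about the centroid, then sum closed-polyline segment lengths.
            c = pts.mean(axis=0)
            ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
            ring = pts[np.argsort(ang)]
            closed = np.vstack([ring, ring[:1]])
            feats[idx] = float(np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1)))
        return feats
```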
Returning to
Before or after the production of a data structure the method may proceed to optional block 402. Pursuant to this optional block the biometric features determined pursuant to block 104 may be supplemented with additional biometric information, hereinafter called “supplemental biometric information.” In general, supplemental biometric information may be understood as biometric information of a user other than the biometric features determined pursuant to block 104. For example, where the body part under consideration is a first body part of a user (e.g., a hand), supplemental biometric information may be in the form of one or more other biometric features of the user. Non-limiting examples of such other biometric features include a voice of the user, a gait of the user, biometric features obtained from one or more (e.g., second) body parts of the user other than the first body part (e.g., the face of the user, a foot of the user, an ear of the user, an eye of the user, etc.), and other features (e.g., a palm print) of the same body part (e.g., hand) used to produce the biometric reference information (e.g., finger circumference, 3D world position, etc.).
Without limitation, in some embodiments the supplemental biometric information is in the form of a palm print of a hand of a user. In this regard it is noted that the palm print of a hand of a user may be extracted from depth images in much the same manner as described above with respect to
Alternatively or additionally, in some embodiments the bands of depth data acquired from the body part may be utilized as a biometric template. That is, alternatively or in addition to determining the above noted features of the body part, the bands of depth data (optionally indexed) may be considered biometric information of the user, and may themselves be stored in a biometric template. In some embodiments, the depth data in these bands may be used to determine one or more reference feature vectors (e.g., in a similar manner as described later in connection with
In instances where supplemental biometric information is used, it may be compiled in the same or a different data structure as the biometric features determined pursuant to block 104 (e.g., from a first body part). Like the biometric features determined pursuant to block 104, the supplemental biometric information may be compiled (e.g., as supplemental biometric reference information) in any suitable data structure, such as but not limited to a database. As will be described later the data structure (or, more particularly, the biometric reference information and supplemental biometric reference information contained therein) may later be employed as a biometric reference template in a biometric authentication process.
Once the biometric features determined pursuant to block 104 and, optionally, the supplemental biometric information have been compiled into one or more data structures, the method may proceed to block 403, pursuant to which the data structures may be stored as biometric templates for later use, e.g., in a biometric authentication process. Storage of the data structures may be performed in any suitable manner, such as by including the data structures in one or more databases, which may be stored on a biometric authentication system or a remote computer system. Once the data structures have been stored the method may proceed to block 106 and end.
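As a concrete (and purely illustrative) example, a biometric reference template and its storage might look like the following sketch; the field names and SQLite layout are assumptions, not a prescribed format.

```python
# Sketch: a biometric reference template as a simple record, stored in a
# local database keyed by user. Supplemental reference information may live
# in the same record or a separate one.
import json
import sqlite3
from dataclasses import asdict, dataclass, field

@dataclass
class BiometricTemplate:
    user_id: str
    features: dict                                     # e.g., {"knuckle_circ_3": 52.1, ...}
    supplemental: dict = field(default_factory=dict)   # e.g., palm print descriptor

def store_template(db_path: str, tpl: BiometricTemplate) -> None:
    with sqlite3.connect(db_path) as con:
        con.execute("CREATE TABLE IF NOT EXISTS templates "
                    "(user_id TEXT PRIMARY KEY, payload TEXT)")
        con.execute("INSERT OR REPLACE INTO templates VALUES (?, ?)",
                    (tpl.user_id, json.dumps(asdict(tpl))))
```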
The foregoing discussion has focused on methods in which one or more biometric features of a body part may be determined from a depth image and used to produce a biometric template. With this in mind, another aspect of the present disclosure relates to methods for performing biometric authentication. As will become apparent from the following discussion, use of the technologies described herein can in some embodiments facilitate the performance of both active biometric authentication and passive biometric authentication. As used herein, the term “active biometric authentication” refers to a biometric authentication process in which a user presents a body part in a specific pose for biometric authentication. In contrast, the term “passive biometric authentication” refers to a biometric authentication process in which a user is not required to present a body part in a specific pose for biometric authentication, e.g., where authentication may be performed while the user is engaged in another activity.
Reference is therefore made to
Once the initialization pose has been detected the method may proceed to block 503, wherein one or more depth images of the body part may be acquired with a depth sensor such as a depth camera. Like the depth images discussed previously in connection with
Once one or more depth images have been acquired from the body part in the initialization pose, the method may proceed to block 504, during which one or more biometric features may be determined. For the sake of clarity, such biometric features are referred to as “extracted biometric features”. In some embodiments, the extracted biometric features may be determined in much the same manner as described above with regard to
In an alternative embodiment, prior to the initiation of method 500 a user may optionally provide some other identification indicia to assert his identity. By way of example, a user may provide a biometric sample (e.g., voice, retina, fingerprint, etc.), a username and password, etc., which may be used to assert his identity to a biometric authentication system. Based on the provided identification indicia, the biometric authentication system may identify a user profile associated with the user, e.g., via a lookup operation. The user profile may associate the user with one or more biometric templates, as well as calibration parameters that may be used to generate a calibrated model of a body part (e.g., a hand) of the user. Method 500 may then proceed as described above with regard to blocks 502-504, except that the calibration parameters associated with the user profile may be used to generate a calibrated model of the body part in question. As may be appreciated, such embodiments avoid the need to re-determine calibration factors that are used to calibrate the model of the body part to the specific user. Like the previously described embodiments, semantic points may then be determined and used to calculate, measure, or otherwise determine extracted biometric features of the body part in question.
As will be described later, the biometric authentication methods described herein compare extracted biometric features of a body part under consideration to biometric reference information in one or more biometric reference templates. For the comparison to be meaningful, the extracted biometric features in some embodiments should include at least the same type of biometric features as the biometric reference information stored in one or more biometric reference template. With the foregoing in mind, in some embodiments the biometric features determined pursuant to block 504 may include one or more skeletal features, tissue features, surface features, or combinations thereof. Non-limiting examples of such features include the same biometric features discussed above in connection with block 104 of
Once one or more extracted biometric features have been determined the method may proceed to optional block 505, wherein the extracted biometric feature(s) may be augmented with additional information. The additional information in some embodiments may include additional biometric features of the user. As noted above, the biometric authentication methods compare extracted biometric features obtained from one or more depth image(s) of a body part to biometric reference information in a biometric template. In some embodiments, however, the biometric features determined from one depth image may be insufficient to determine whether there is a match between the extracted biometric features and the biometric reference information. With this in mind, in some embodiments the extracted biometric features may be augmented with additional biometric features determined from one or more additional depth images of the body part under consideration.
This concept is illustrated in blocks 506-508 of
At block 507, a determination is made as to whether the extracted biometric features and biometric reference information in a biometric reference template match, either identically or greater than or equal to a threshold degree of similarity. If the extracted features and biometric reference information in a biometric reference template do not match, the method may proceed to block 508.
Pursuant to block 508, a determination may be made as to whether the method is to continue. The outcome of block 508 may depend on one or more factors, such as a time limit, whether the lack of a match was due to insufficient extracted biometric features (e.g., when the extracted biometric features determined pursuant to block 504 do not include one or more biometric features of the biometric reference information), whether the comparison performed pursuant to block 506 was able to eliminate one or more biometric reference templates from consideration or not, combinations thereof, or the like. In any case if the method is to continue, the method may loop back to block 503, wherein one or more additional depth image(s) may be acquired. Pursuant to blocks 504 and 505, the method may attempt to detect additional biometric features of the body part, and to augment the previously extracted biometric features with newly extracted biometric features. A comparison between the augmented extracted biometric features and the biometric reference information may then be performed pursuant to block 506. The loop of blocks 503-508 may continue until a match is detected, or until it is determined that the method is not to continue, in which case the method may proceed from block 508 to block 509, whereupon biometric authentication fails. The method may then proceed to block 513 and end.
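The overall control flow of blocks 503-509 might be sketched as follows; acquire_depth(), extract_features(), and match() are hypothetical stand-ins for the operations described above, and the time limit is only one example of a continuation criterion for block 508.

```python
# Sketch of the acquire/extract/augment/compare loop of blocks 503-508.
import time

def authenticate(acquire_depth, extract_features, match, templates,
                 time_limit_s=10.0):
    extracted = {}                                  # accumulated extracted features
    deadline = time.monotonic() + time_limit_s
    while time.monotonic() < deadline:              # block 508: continue?
        depth = acquire_depth()                     # block 503: acquire depth image
        extracted.update(extract_features(depth))   # blocks 504-505: extract/augment
        for tpl in templates:                       # block 506: compare to templates
            if match(extracted, tpl):               # block 507: threshold match?
                return tpl                          # block 512: authentication passes
    return None                                     # block 509: authentication fails
```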
As shown in
The method may then proceed to block 1103, wherein the value d(t) of each depth band, parameterized along the band by t, is approximated with one or more basis functions, such that:

d(t) ≈ Σ_{i=1}^{N} a_i b_i(t)

in which b_i(t) are the basis functions, N is the number of basis functions used in the approximation of a band (and may be any suitable number), i is the index of the basis function, and a_i are the approximation coefficients. In some embodiments, a polynomial basis function is used, such that b_i(t) = t^i. Alternatively, a spline basis function or another basis function may be used. In any case, the approximation may be determined for each depth band.
Once the value of each depth band has been approximated the method may proceed to block 1104, wherein feature vectors are constructed from the approximation coefficients a_i, and a distance metric representing the difference between two feature vectors is used to compare the depth data captured from the user's hand with the biometric reference information in a database of biometric reference templates. Without limitation, in some embodiments this is performed by concatenating the approximation coefficients a_i of all of the bands and determining a single feature vector from the concatenation. Alternatively, a feature vector may be constructed for each depth band from its respective approximation coefficients, and a cumulative distance metric may then be computed from the distance metrics of all of the individual depth bands.
In any case the method may proceed to blocks 1105 and 1106 wherein the distance metric(s) calculated pursuant to block 1104 may be compared to a threshold (hereinafter called a threshold distance), and a determination is made as to whether the threshold is satisfied (with regard to the entirety of the measured depth data or an individual band). In this context, the threshold distance may be understood to represent a maximum distance by which the depth data/feature vector(s) of the measured depth data may differ from the depth data/feature vector(s) of biometric reference information in a database in order to constitute a match. If the distance between a depth band/feature vector of the measured depth data and corresponding depth data/feature vector of biometric reference information in the database is less than or equal to the threshold, the method may proceed to block 1107, wherein a match may be indicated. Alternatively if the distance between a depth band/feature vector of the measured depth data and corresponding depth data/feature vector of biometric reference information in the database is higher than the threshold, the method may proceed to block 1108, wherein it is determined that there is no match.
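Under the assumptions stated above (a polynomial basis fitted by least squares, concatenated coefficients, and a Euclidean distance metric), blocks 1103-1106 might be sketched as follows; the number of basis functions and the threshold distance are illustrative.

```python
# Sketch of blocks 1103-1106: fit each band's depth profile with a polynomial
# basis b_i(t) = t**i, concatenate the coefficients a_i of all bands into one
# feature vector, and apply a distance threshold to decide a match.
import numpy as np

def band_coefficients(band_values: np.ndarray, n_basis: int = 4) -> np.ndarray:
    t = np.linspace(0.0, 1.0, len(band_values))
    # polyfit returns highest power first; any fixed ordering works as long as
    # enrollment and verification use the same one.
    return np.polyfit(t, band_values, deg=n_basis - 1)

def feature_vector(bands: list) -> np.ndarray:
    return np.concatenate([band_coefficients(b) for b in bands])

def is_match(measured: np.ndarray, reference: np.ndarray,
             threshold_distance: float = 1.0) -> bool:
    return float(np.linalg.norm(measured - reference)) <= threshold_distance
```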
In instances where individual bands of measured depth data and/or feature vectors thereof are being compared to individual bands of depth data and/or feature vectors of biometric reference information, the determination pursuant to block 1106 in some embodiments may be conditioned on the comparison returning a threshold number of “match” or “no match” results. Thus for example, the comparison and determination made pursuant to block 1105 and 1106 may proceed on a depth band/feature vector by depth band/feature vector basis, with each comparison resulting in a match or no match determination. The comparison may iterate for each depth band/feature vector, until all of the measured depth bands/feature vectors have been compared to corresponding depth bands/feature vectors of the biometric reference information in the database of biometric templates.
The total number of match and no match results may then be compared to one or more thresholds, so as to determine whether the measured depth data overall matches biometric reference information in the database. For example, when the total number of measured depth bands/feature vectors matching corresponding bands of biometric reference information meets or exceeds a threshold number, a determination may be made that the measured depth data matches that biometric reference information. Conversely, when the total number of measured depth bands/feature vectors matching corresponding bands of biometric reference information is less than a threshold number (or, alternatively, when the total number of measured depth bands/feature vectors that do not match corresponding depth bands/vectors of biometric reference information meets or exceeds a threshold number), a determination may be made that the measured depth data does not match that biometric reference information. In any case, after the match or no match determination is made method 1100 may proceed from blocks 1108 or 1107 to block 1109 and end.
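The per-band voting variant might be sketched as follows, again with illustrative thresholds; each band is fitted and compared individually, and the overall decision turns on how many individual bands match.

```python
# Sketch of per-band voting: compare measured and reference bands one by one,
# and declare an overall match when enough individual bands match.
import numpy as np

def band_coeffs(values: np.ndarray, n_basis: int = 4) -> np.ndarray:
    t = np.linspace(0.0, 1.0, len(values))
    return np.polyfit(t, values, deg=n_basis - 1)

def vote_match(measured_bands, reference_bands, band_threshold=1.0,
               min_matching_bands=8) -> bool:
    matches = sum(
        1 for m, r in zip(measured_bands, reference_bands)
        if np.linalg.norm(band_coeffs(m) - band_coeffs(r)) <= band_threshold)
    return matches >= min_matching_bands
```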
Returning to
After prompting the user to perform an action with the body part, the secondary process may further involve monitoring for the performance of the action. Any suitable technique may be applied in this regard. For example, gesture recognition techniques may be applied to detect specific gestures, skeleton tracking (as discussed above) may be applied to analyze the motion of the body part, etc. Without limitation, in some embodiments the secondary process involves prompting a user to move the body part under consideration in a specific manner, and using skeleton tracking to monitor the motion of the body part as the user performs the requested action.
In some embodiments the secondary verification process may be based on supplemental biometric information of the user. In such instances the supplemental biometric information may in some embodiments be the same as the supplemental biometric information discussed above in regard to
In instances where a secondary verification process is applied pursuant to block 510, the method may proceed to block 511, wherein a determination may be made as to whether the secondary verification has passed or failed. In instances where the secondary verification relies on the performance of a requested action, the outcome of this determination may depend on the analysis of the body part that was performed pursuant to block 510 as the user performs the requested action. Specifically, the outcome may depend on a determination of whether the requested action was performed by the user correctly, i.e., in a manner that is identical or sufficiently similar to the requested action. If not, the method may proceed from block 511 to block 509, wherein authentication fails. The method may then proceed to block 513 and end.
Alternatively, where secondary authentication relies on supplemental biometric information, the outcome of the determination in some embodiments depends on a comparison of the measured supplemental biometric information to supplemental biometric reference information contained in one or more biometric reference templates. In this regard, the supplemental biometric reference information may be included in the same biometric template as the biometric reference information corresponding to the body part under consideration, or in a different biometric template. In the latter case, the biometric reference template containing the supplemental biometric reference information may be correlated or otherwise associated with the biometric template containing the biometric reference information of the body part under consideration. Regardless, the outcome of block 511 may turn on a comparison of the measured supplemental biometric information to the supplemental biometric reference information. If the measured supplemental biometric information does not substantially match the supplemental biometric reference information, the method may proceed from block 511 to block 509, wherein authentication fails. The method may then proceed to block 513 and end.
In any case, if secondary verification passes or if secondary verification is not required, the method may proceed to block 512, wherein authentication passes. The method may then proceed from block 512 to block 513 and end.
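The control flow of blocks 509-513 might be summarized roughly as in the following sketch; the function and argument names are hypothetical, and the code simply mirrors the branching described above.

```python
def secondary_verification_outcome(required,
                                   action_correct=None,
                                   supplemental_matches=None):
    # Block 512: no secondary verification required -> pass.
    if not required:
        return "pass"
    # Block 511: either branch may drive the determination, depending
    # on whether the secondary process relies on a requested action
    # or on supplemental biometric information.
    if action_correct is not None:
        passed = action_correct              # requested-action branch
    else:
        passed = bool(supplemental_matches)  # supplemental-biometric branch
    # Block 512 on success; block 509 (then block 513) on failure.
    return "pass" if passed else "fail"
```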
Reference is now made to
With the foregoing in mind, as shown in
In general the determination of biometric features pursuant to block 604 may proceed in much the same manner as described above with respect to block 504 of
As noted above, depth images of the body part in question may be captured as the user is engaged in various activities, and/or as the user moves the body part around. Depending on the orientation of the body part to a depth sensor, it may not be possible to determine some biometric features from one particular depth image of the body part. For example, certain positions of the body part may occlude one or more features of the body part from the depth sensor, which may hinder or prevent determining certain biometric features from that depth image. Thus while an analysis of one depth image may allow for the determination of some extracted biometric features, those features may not be sufficient alone to verify the identity of the user.
With this in mind, method 600 may address this issue in some embodiments by augmenting extracted biometric features from one depth image with biometric features extracted from additional depth images. This concept is illustrated in
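A minimal sketch of such augmentation follows, assuming features are keyed by name and that repeated observations of the same feature are averaged; both assumptions are illustrative only and are not prescribed by the disclosure.

```python
def augment_features(accumulated, new_features):
    # accumulated: dict mapping feature name -> (count, running mean).
    # new_features: dict of features extracted from one more depth
    # image; features occluded in earlier frames may appear here for
    # the first time.
    for name, value in new_features.items():
        if name in accumulated:
            count, mean = accumulated[name]
            # Incremental mean to smooth repeated measurements.
            accumulated[name] = (count + 1, mean + (value - mean) / (count + 1))
        else:
            accumulated[name] = (1, value)
    return accumulated
```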
As biometric features are extracted and optionally augmented, the method pursuant to block 606 may compare the extracted biometric features to biometric reference information in one or more biometric reference templates. As discussed above, the comparison may focus on the degree to which the extracted features are similar to the biometric reference information. Pursuant to block 607, a determination is made as to whether the extracted biometric features match or substantially match biometric reference information (or, more specifically, biometric features) in a biometric reference template. If not, the method may proceed to block 608, wherein a determination is made as to whether the method is to continue. The outcome of block 608 may depend on one or more of the same considerations as the outcome of block 508 of
This loop may continue until a match is detected in block 607 or it is determined that the method should not continue pursuant to block 608. If pursuant to block 608 it is determined that the method should not continue, the method may proceed to block 609 wherein verification fails. The method may then proceed from block 609 to block 611 and end.
If a match is detected pursuant to block 607, however, the method may proceed to block 610, wherein verification passes and the method may proceed to block 611 and end.
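Putting blocks 604-611 together, the verification loop might look roughly like the sketch below, reusing the augment_features helper sketched above. The extract and compare callables stand in for the feature-extraction and template-comparison stages and, like the frame budget, are assumptions of this sketch.

```python
def verify_identity(depth_frames, extract, compare, max_frames=30):
    accumulated = {}
    for i, frame in enumerate(depth_frames):
        # Blocks 604-605: extract features and augment the running set.
        augment_features(accumulated, extract(frame))
        # Blocks 606-607: compare against biometric reference templates.
        if compare(accumulated):
            return True   # block 610: verification passes
        # Block 608: decide whether to continue with another frame.
        if i + 1 >= max_frames:
            break
    return False          # block 609: verification fails
```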
Although not shown in
Another aspect of the present disclosure relates to systems for performing biometric authentication operations consistent with the present disclosure. Non-limiting examples of biometric authentication operations that may be performed by the systems include biometric template generation operations and biometric authentication operations. Examples of biometric template generation operations include but are not limited to the operations described above in connection with
For the sake of clarity and ease of understanding, the present disclosure will proceed to describe embodiments in which a single system is configured to perform both biometric template generation and biometric authentication operations consistent with the present disclosure. While such embodiments may be particularly useful in some implementations, it should be understood that they are presented for the sake of example only, and that biometric template generation operations and biometric authentication operations may be performed by separate systems. Such systems may be referred to herein as a system for generating a biometric template, a system for performing biometric authentication, or, collectively, a biometric authentication system. In any case, it should be understood that the biometric template generation operations and biometric authentication operations may be performed by one system or by multiple different systems, regardless of the particular notation used herein. Therefore a system for generating a biometric template may also be configured to perform biometric authentication operations, and a system for performing biometric authentication may also be configured to perform biometric template generation operations.
With the foregoing in mind reference is made to
As shown, system 700 includes device platform 701, which may be any suitable device platform. In some embodiments device platform 701 corresponds to the type of electronic device used as system 700. Thus, for example, where system 700 is in the form of a cellular phone, a smart phone, a security terminal, or a desktop computer, device platform 701 may be a cellular phone platform, a smart phone platform, a security terminal platform, or a desktop computer platform, respectively.
Device platform 701 includes processor 702, memory 703, communications interface (COMMS) 704, biometric authentication module (BAM) 705, and optional depth sensor 706. Such components may communicate with one another via interconnect 708, which is an abstraction that represents any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. In some embodiments interconnect 708 may include or be in the form of one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus, sometimes referred to as “FireWire”.
Processor(s) 702 can include central processing units (CPUs) and graphics processing units (GPUs) that can execute software or firmware stored in memory 703. The processor(s) 702 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
Memory 703 represents any form of memory, such as random access memory (RAM), read-only memory (ROM), flash memory, or a combination of such devices. In use, in some embodiments memory 703 can contain, among other things, a set of computer readable instructions which, when executed by processor 702, causes system 700 to perform operations to implement biometric template generation operations and/or biometric authentication operations consistent with the present disclosure.
COMMS 704 is generally configured to enable communication between system 700 and one or more computing platforms, devices, sensors, etc., e.g., using a predetermined wired or wireless communications protocol, such as but not limited to an Internet Protocol, WI-FI protocol, BLUETOOTH protocol, combinations thereof, and the like. COMMS 704 may therefore include hardware (i.e., circuitry), software, or a combination of hardware and software that allows system 700 to send and receive data signals to/from one or more computing systems, sensors, servers, etc., with which it may be in communication. COMMS 704 may include, for example, one or more transponders, antennas, BLUETOOTH® chips, personal area network chips, near field communication chips, Wi-Fi chips, cellular antennas, combinations thereof, and the like.
As noted above, system 700 may also include depth sensor 706. Depth sensor 706 may be any suitable type of depth sensor, such as but not limited to a depth camera. In some embodiments depth sensor 706 may be external to device platform 701, e.g., as a standalone sensor that may be in communication with device platform 701, e.g., via COMMS 704. This concept is illustrated in
In some embodiments and as illustrated in
System 700 may also include one or more optional input devices and/or optional display devices (both not shown). When used, the input devices can include a keyboard and/or a mouse, and the display devices can include a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device.
The following examples pertain to further embodiments. The following examples of the present disclosure may comprise subject material such as a system, a device, a method, a computer readable storage medium storing instructions that when executed cause a machine to perform acts based on the method, and/or means for performing acts based on the method, as provided below.
According to this example there is provided a method for generating a biometric template, including: generating a calibrated model of a first body part of a user at least in part from depth information included in a depth image of the first body part acquired with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model; and producing a biometric reference template including the biometric features of the first body part as biometric reference information.
This example includes any or all of the features of example 1, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of examples 1 or 2, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 3, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
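As a non-limiting illustration of the hypothesis selection described in examples 3 and 4, the best hypothesis might be chosen by an assumed fit metric such as mean absolute depth error; the data layout and names below are hypothetical and are not prescribed by these examples.

```python
import numpy as np

def select_best_hypothesis(hypotheses, measured_depth):
    # hypotheses: iterable of (pose_params, depth_map) pairs, each
    # depth_map being an array aligned with the measured depth frame.
    best_params, best_error = None, float("inf")
    for pose_params, depth_map in hypotheses:
        # Assumed fit metric: mean absolute difference between the
        # hypothesis depth map and the measured depth information.
        error = float(np.nanmean(np.abs(depth_map - measured_depth)))
        if error < best_error:
            best_params, best_error = pose_params, error
    # The best hypothesis then drives the calibration parameters used
    # to adjust the model into the calibrated model.
    return best_params
```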
This example includes any or all of the features of any one of examples 1 and 2, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
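For example 5, determining a biometric feature from selected semantic points might, under the assumption that the calibrated model exposes named 3-D joint positions, reduce to a simple distance measurement; the joint names used below are hypothetical.

```python
import numpy as np

def bone_length(calibrated_model, joint_a, joint_b):
    # calibrated_model: assumed mapping of semantic point name -> (x, y, z).
    a = np.asarray(calibrated_model[joint_a], dtype=float)
    b = np.asarray(calibrated_model[joint_b], dtype=float)
    # One skeletal biometric feature: length of the bone between joints.
    return float(np.linalg.norm(a - b))

# Hypothetical usage: length of the index finger's proximal phalanx.
# bone_length(model, "index_mcp", "index_pip")
```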
This example includes any or all of the features of example 5, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 6, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 6, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 5, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 9, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 9, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 9, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 9, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 1 and 2, wherein producing the biometric template includes incorporating the one or more biometric features of the first body part into a data structure.
This example includes any or all of the features of example 14, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 1 and 2, further including supplementing the one or more biometric features of the first body part with supplemental biometric information.
This example includes any or all of the features of example 16, wherein the supplemental biometric information includes at least one biometric feature of a second body part of the user.
According to this example there is provided a method of performing biometric authentication, including: generating a calibrated model of a first body part at least in part from depth information included in a depth image of the first body part acquired from a user with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model to produce extracted biometric features; comparing the extracted biometric features to biometric reference information in a biometric template; denying authentication of the user's identity when the extracted biometric features and the biometric reference information do not substantially match; and verifying the user's identity when the extracted biometric features and the biometric reference information substantially match.
This example includes any or all of the features of example 18, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of any one of examples 18 and 19, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 20, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
This example includes any or all of the features of any one of examples 18 and 19, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 22, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 23, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 23, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 22, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 26, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 26, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 26, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 26, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 18 and 19, wherein the biometric template is in the form of a data structure including the biometric reference information.
This example includes any or all of the features of example 31, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 18 and 19, further including: comparing measured supplemental biometric information obtained from the user to supplemental biometric reference information; denying authentication of the user's identity when at least one of the extracted biometric features or the measured supplemental biometric information does not substantially match the biometric reference information or the supplemental biometric reference information, respectively; and verifying the user's identity when the extracted biometric features and the measured supplemental biometric information substantially match the biometric reference information and the supplemental biometric reference information, respectively.
This example includes any or all of the features of example 33, wherein the supplemental biometric reference information includes at least one previously obtained biometric feature of a second body part of the user, and the measured supplemental biometric information includes at least a measurement of the biometric feature of the second body part.
According to this example there is provided a system for generating a biometric template, including logic implemented at least in hardware to cause the system to perform the following operations including: generating a calibrated model of a first body part at least in part from depth information included in a depth image of the first body part acquired from a user with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model; and producing a biometric reference template including the biometric features of the first body part as biometric reference information.
This example includes any or all of the features of example 35, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of any one of examples 35 and 36, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 37, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
This example includes any or all of the features of any one of examples 35 and 36, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 39, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 40, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 40, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 39, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 43, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 43, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 43, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 43, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 35 and 36, wherein producing the biometric template includes incorporating the one or more biometric features of the first body part into a data structure.
This example includes any or all of the features of example 48, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 35 and 36, wherein the logic is further configured to cause the system to perform the following operations including: supplementing the one or more biometric features of the first body part with supplemental biometric information.
This example includes any or all of the features of example 50, wherein the supplemental biometric information includes at least one biometric feature of a second body part of the user.
According to this example there is provided a system for performing biometric authentication, including logic implemented at least in part in hardware to cause the system to perform the following operations including: generating a calibrated model of a first body part at least in part from depth information included in a depth image of the first body part acquired from a user with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model to produce extracted biometric features; comparing the extracted biometric features to a biometric template, the biometric template including biometric reference information; denying authentication of the user's identity when the extracted biometric features and the biometric reference information do not substantially match; and verifying the user's identity when the extracted biometric features and the biometric reference information substantially match.
This example includes any or all of the features of example 52, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of any one of examples 52 and 53, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 54, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
This example includes any or all of the features of any one of examples 52 and 53, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 56, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 57, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 57, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 56, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 60, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 60, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 60, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 60, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 52 and 53, wherein the biometric template is in the form of a data structure including the biometric reference information.
This example includes any or all of the features of example 65, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 52 and 53, further including: comparing measured supplemental biometric information obtained from the user to supplemental biometric reference information previously obtained from the user; denying authentication of the user's identity when at least one of the extracted biometric features or the measured supplemental biometric information does not substantially match the biometric reference information or the supplemental biometric reference information, respectively; and verifying the user's identity when the extracted biometric features and the measured supplemental biometric information substantially match the biometric reference information and the supplemental biometric reference information, respectively.
This example includes any or all of the features of example 67, wherein the supplemental biometric reference information includes at least one biometric feature of a second body part of the user, and the measured supplemental biometric information includes at least a measurement of the at least one biometric feature of the second body part.
According to this example there is provided at least one computer readable medium including instructions for generating a biometric template, wherein the instructions when executed by a processor of a system for generating a biometric template cause the system to perform the following operations including: generating a calibrated model of a first body part at least in part from depth information included in a depth image of the first body part acquired from a user with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model; and producing a biometric reference template including the biometric features of the first body part as biometric reference information.
This example includes any or all of the features of example 69, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of any one of examples 69 and 70, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 71, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
This example includes any or all of the features of any one of examples 69 and 70, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 73, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 74, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 74, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 73, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 77, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 77, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 77, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 77, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 69 and 70, wherein producing the biometric template includes incorporating the one or more biometric features of the first body part into a data structure.
This example includes any or all of the features of example 82, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 69 and 70, wherein the instructions when executed further cause the system to perform the following operations including: supplementing the biometric reference template with supplemental biometric information.
This example includes any or all of the features of example 84, wherein the supplemental biometric information includes at least one biometric feature of a second body part of the user.
According to this example there is provided at least one computer readable medium for performing biometric authentication, including computer readable instructions which when executed by a processor of a biometric authentication system cause the system to perform the following operations including: generating a calibrated model of a first body part at least in part from depth information included in a depth image of the first body part acquired from a user with a depth sensor; extracting one or more biometric features of the first body part at least in part using the calibrated model to produce extracted biometric features; comparing the extracted biometric features to biometric reference information in a biometric template; denying authentication of the user's identity when the extracted biometric features and the biometric reference information do not substantially match; and verifying the user's identity when the extracted biometric features and the biometric reference information substantially match.
This example includes any or all of the features of example 86, wherein the depth sensor includes a depth camera.
This example includes any or all of the features of any one of examples 86 and 87, wherein generating the calibrated model includes: formulating multiple hypotheses for a model of the first body part in a first position, each of the multiple hypotheses including a synthesized depth map of the first body part in the first position, wherein the first position corresponds to the position of the first body part when the depth frame is acquired; and identifying a best hypothesis from the multiple hypotheses at least in part by comparing the synthesized depth map of each of the multiple hypotheses to the depth information in the depth frame, the best hypothesis including one of the multiple hypotheses that most closely fits the depth information.
This example includes any or all of the features of example 88, wherein generating the calibrated model further includes: determining calibration parameters for the model of the first body part based at least in part on the best hypothesis; and adjusting the model of the first body part using the calibration parameters to produce the calibrated model, the calibrated model accurately modeling at least the skeletal geometry of the first body part.
This example includes any or all of the features of any one of examples 86 and 87, wherein the extracting includes: identifying a plurality of semantic points of the first body part using the calibrated model, wherein each of the semantic points corresponds to a known feature of the first body part; identifying at least one selected semantic point from the plurality of semantic points; and determining the one or more biometric features of the first body part based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 90, wherein the determining includes measuring at least one biometric feature of the first body part from the depth information, the calibrated model, or a combination thereof based at least in part on the at least one selected semantic point.
This example includes any or all of the features of example 91, wherein the determining includes measuring at least one biometric feature of the first body part based at least in part on the depth information and the at least one selected semantic point.
This example includes any or all of the features of example 91, wherein the determining includes measuring at least one biometric feature of the first body part from the calibrated model and the at least one selected semantic point.
This example includes any or all of the features of example 90, wherein the first body part is a hand, and the one or more biometric features of the first body part comprise features of the hand.
This example includes any or all of the features of example 94, wherein the features of the hand comprise at least one of skeletal features of the hand, tissue features of the hand, surface features of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 94, wherein the features of the hand include skeletal features of the hand, the skeletal features including one or more of a circumference of a knuckle of a joint of the hand, a length of a joint of the hand, a length of a finger bone of the hand, a length of a bone extending between two or more joints of a finger of the hand, or one or more combinations thereof.
This example includes any or all of the features of example 94, wherein the features of the hand comprise tissue features of the hand, and the tissue features comprise at least one of a skin thickness in at least one region of the hand, a blood vessel pattern of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of example 94, wherein the features of the hand comprise surface features of the hand, and the surface features comprise a palm print of the hand, a contour map of at least a portion of the hand, or a combination thereof.
This example includes any or all of the features of any one of examples 86 and 87, wherein the biometric template is in the form of a data structure including the biometric reference information.
This example includes any or all of the features of example 99, wherein the data structure is in the form of a database.
This example includes any or all of the features of any one of examples 86 and 87, wherein the instructions when executed further cause the system to perform the following operations including: comparing measured supplemental biometric information obtained from the user to supplemental biometric reference information previously obtained from the user; denying authentication of the user's identity when at least one of the extracted biometric features or the measured supplemental biometric information does not substantially match the biometric reference information or the supplemental biometric reference information, respectively; and verifying the user's identity when the extracted biometric features and the measured supplemental biometric information substantially match the biometric reference information and the supplemental biometric reference information, respectively.
This example includes any or all of the features of example 101, wherein the supplemental biometric reference information includes at least one biometric feature previously determined from at least a second body part of the user, and the measured supplemental biometric information includes at least a measurement of the at least one biometric feature of the second body part.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.