The present invention generally relates to verifying the identity of a person, and more particularly to a method for identifying and verifying an approved user of an electronic device.
Transactions of many types require a system for identifying a person (Who is it?) or for verifying a person's claimed identity (Is she who she says she is?). The term "recognition" refers to identification and verification collectively. Traditionally, three methods have been used for recognizing a person: passwords, tokens, and biometrics.
Biometrics refers to information measured from a person's body or behavior. Examples of biometrics include fingerprints, hand shapes, palm prints, footprints, retinal scans, iris scans, face images, ear shapes, voiceprints, gait measurements, keystroke patterns, and signature dynamics. The advantages of pure biometric recognition are that there are no passwords to forget or to give out, and no cards (tokens) to lose or lend.
In biometric verification, a user presents a biometric which is compared to a stored biometric corresponding to the identity claimed by the user. If the presented and stored biometrics are sufficiently similar, then the user's identity is verified. Otherwise, the user's identity is not verified.
In biometric identification, the user presents a biometric which is compared with a database of stored biometrics typically corresponding to multiple persons. The closest match or matches are reported. Biometric identification is used for convenience, e.g., so that users need not take time-consuming actions or carry tokens to identify themselves, and also for involuntary identification, e.g., when criminal investigators identify suspects by matching fingerprints.
There is an ever-growing need for convenient, user-friendly security features on electronic devices. These devices have permeated our society and have become a primary mode of communication in voice, text, image, and video formats today, with the promise of even greater functionality in the future for high speed web access, streaming video, and even financial transactions. Authentication of the device user in these applications is of paramount importance and a significant challenge.
Biometric technologies are viewed as providing at least a partial solution to these user authentication objectives, and different types of biometrics have been incorporated into wireless products for this purpose. The most common of these include fingerprint, face, and voice recognition. Most of these biometric technology implementations require some type of specialized hardware, e.g., a swipe sensor or camera, and/or specific actions to be taken by the user to "capture" the biometric data, e.g., swiping or placing a finger, pointing a camera, or speaking a phrase. The special hardware adds unwanted cost to the product in a cost-sensitive industry, and the active capture can make the authentication process inconvenient to use.
Accordingly, it is desirable to provide a biometric technology that can be implemented with existing sensing components of the wireless device and in which the biometric data capture occurs passively, or unobtrusively, during the normal operation of the device, without intentional and time-consuming action of the user. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description of the invention and the appended claims, taken in conjunction with the accompanying drawings and this background of the invention.
The present invention will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and
The following detailed description of the invention is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the invention or the following detailed description of the invention.
The present invention comprises a method of capturing a distinctive, physical biometric, i.e., skin texture, using a sensor incorporated within a touch input display in electronic devices and during the normal operation of the device, e.g., during texting, navigating menus, playing games, or a phone conversation. The method involves a standard enrollment process, e.g., a one-time setup task including capturing skin texture data from one or more body parts for later comparisons, and an authentication process. The authentication process involves: 1) detecting a touch anywhere on the main device touchscreen, 2) optionally recognizing the device use mode to determine which enrollment samples to compare against, e.g., finger data when dialing, or ear or cheek data when talking, 3) illuminating a specific region of pixels on the touchscreen in response to the touch, 4) capturing the skin texture data, 5) comparing the skin texture data with reference data, and 6) making a decision based on the comparison.
Enhancements of previously known skin texture biometrics have recently been demonstrated that allow for recognition of individuals (see for example, U.S. Patent Publication No. 2006/0062438 A1 assigned to Lumidigm, Inc. and incorporated herein by reference). Multiple illumination sources, e.g., red, green, blue, and white light, both polarized and unpolarized, may be used to capture fingerprint images which reveal both surface and subsurface characteristics of the skin. These skin features, referred to as "textures", can be measured on any skin surface (not just fingertips) and over much smaller areas than conventional fingerprints. The texture properties are similar from finger to finger and across different regions of the body, but are distinctive among individuals. Therefore, the texture properties can be used for identification purposes and could allow for different locations on the skin to be used for enrollment versus verification purposes.
Image capture of skin texture may occur in any of several modes during normal operation of the mobile phone having a touch input display. The most common user interface would very likely be through finger presses on the touch screen display or a touch key. Almost every interaction with the device will involve this type of activity, e.g., dialing phone numbers, navigating through menus, surfing the web, playing games, etc.
While the finger 124 is shown in
A skin texture image can, in principle, be captured at every touch of a finger onto the screen, passively and without the awareness of the user. Passive (surreptitious, unobtrusive) capture means that no intentional action is required of the user, and that the capture may take place without the user even realizing it. To minimize distraction during illumination of the display for image capture, the position of the fingers touching the display could be sensed first, and then only the portions of the display fully covered by the skin contact points could be energized to provide illumination. In this way, the entire display would not have to be lighted for capture. Illuminating the entire display might be extremely distracting to the user and others in the vicinity, thereby compromising the unobtrusiveness of the biometric capture, while also making inefficient use of the limited battery energy of the mobile device. It is noted that the remainder of the display, not including the portion touched by the skin, may display an image, e.g., the image existing prior to the skin being sensed.
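The selective illumination described above, energizing only the touched pixels while the rest of the display keeps its prior image, can be sketched as follows. The boolean contact grid representation is an assumption introduced for illustration; an actual device would read its photosensor array.

```python
# Illustrative sketch: energize only display pixels whose sensors report
# skin contact, leaving the remainder of the display unchanged.
# The grid-of-booleans input is an assumption for illustration.

def touched_region(contact_grid):
    """Return the set of (row, col) pixels reporting skin contact."""
    return {(r, c)
            for r, row in enumerate(contact_grid)
            for c, covered in enumerate(row) if covered}

def illuminate_for_capture(contact_grid):
    """Map each pixel to 'illuminate' (touched) or 'display' (prior image)."""
    touched = touched_region(contact_grid)
    rows, cols = len(contact_grid), len(contact_grid[0])
    return {(r, c): ("illuminate" if (r, c) in touched else "display")
            for r in range(rows) for c in range(cols)}
```

Only the `illuminate` pixels would be driven for image capture, which keeps the capture unobtrusive and conserves battery energy.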
For passive, or unobtrusive, capture of biometric data, fingerprints may not be the best option because in a typical interaction with a touch screen, only the tips of the fingers contact the screen during the input stroke. The tip of the finger has a low density of ridge information compared with that on the flatter, pad portion of the finger, where the fingerprint core exists, and therefore makes for very poor fingerprint matching results. On the other hand, rich skin texture data can be captured easily from the smaller areas of the fingertips and used effectively in the matching process.
Skin textures meet most of the criteria for a good biometric: they are universal (possessed by all humans), they are sufficiently distinctive to be of value for the purposes described herein, they have a high level of permanency (they do not change much over time), and they are readily collectable (as described herein).
In another normal mode of phone use, e.g., executing a phone conversation, the device would be placed against the ear in such a manner that a significant portion of the ear, particularly the lower regions like the ear lobe and concha areas, would lie against the touch input display allowing for capture of the skin texture biometric from these areas. This mode may be beneficial if the user were wearing gloves, for example, preventing identification from finger touches. Referring to
In addition, it is very possible in this mode of operation that the touch input display is also pressed against the flesh of the cheek (and possibly even the lips), where skin texture images could be captured as well, perhaps even simultaneously.
Since phone conversations typically last an extended period of time compared to the capture time, many inputs could be acquired for analysis to improve the accuracy of the biometric modality. And since most phone users position the phone beneath any hair or cap covering the ear, directly against the ear itself, to achieve the best audio performance, this mode of acquisition is not hindered by such ear coverings.
Although the preferred exemplary embodiments of the phones 110 and 210 as shown illustrate a unitary body, any other configuration of wireless communication device, e.g., a flip phone, may utilize the invention described herein. The phones 110 and 210 typically include an antenna (not shown) for transmitting and receiving radio frequency (RF) signals for communicating with a complementary communication device, such as a cellular base station, or directly with another user communication device. The phones 110 and 210 may also comprise more than one display and may comprise additional input devices such as an on/off button and a function button.
In yet another common mode of phone handling, the carrying of the phone in the palm or fingers of the hand, a skin texture image could be captured from the palm (or along the body of the fingers) surreptitiously. This mode of operation would be relevant during a call if the touch input display were on the opposite side of the phone from the speaker and microphone such that it would be against the palm of the hand instead of the ear and cheek during a call.
Other modes of flesh interaction with the touch display, either intentionally or unintentionally, can also be envisioned. Note that the phone may either be of the “bar” type, or the “flip” type in any of the embodiments.
There is a growing trend toward the use of touch input displays in high-tier wireless communication devices, e.g., smart phones and PDAs. This is largely driven by the desire for efficient use of the limited surface area of the device. Typically, two user interface elements dominate the surface of the device: the keypad for input and the display for output. A touch input display (described in more detail hereinafter) combines the input and output user interfaces into a single element.
The touch input function can either be integrated into the display backplane or implemented in transparent layers applied over the surface of the display. There are at least three different touch input sensing technologies that have been demonstrated, including resistive, capacitive and optical, though an optical technology is envisioned for the embodiments described herein. With the proper array-based implementation, the optical mode is capable of generating characteristics of skin that is placed in contact with the surface. Because there are no lenses used to project and create an image, this approach is called a “near field” mode of capture. Only the portion of the skin that is in contact with the screen contributes to the characteristics.
The unobtrusive capture of this particular skin texture for biometric identification and verification provides several advantages over other biometric technologies, including: (1) skin texture biometrics are convenient, and their acquisition tends to be perceived as less invasive, (2) skin texture readers can work even under adverse conditions, e.g., dry, cracked, or dirty skin, when fingerprint capture would fail, and (3) special sensors are not required if the device employs an optical touchscreen.
Only the portion of skin in contact with an image detector is illuminated, with light scattered from the skin being received by the image detector. Characteristics are generated from the illuminated skin and analyzed. The image detector may be a monochromatic (black and white) imaging detector or a color imaging detector.
While varying from one person to the next, skin texture (composition and structure) is distinct and complex. A number of determinations may be made by conducting optical measurements of the spatiospectral properties of skin and its underlying tissue, including determining whether the skin is a living organism and performing identification or verification of the person's skin being sampled.
The epidermis, the outermost layer of the skin, overlies the dermis and hypodermis. The epidermis may include as many as five sublayers: stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum germinativum. Each layer, and the complex interfaces between them, imparts measurable characteristics to reflected light that are uniquely characteristic of an individual. Furthermore, protrusions from the dermis into the epidermis for the distribution of blood provide further unique and measurable characteristics.
Spectral and spatial characteristics received by the detector are identified and compared with spectral characteristics stored in a database. The spectral and spatial characteristics of a particular individual include unique spectral features and combinations of spectral features that may be used to identify individuals. These spectral and spatial characteristics may be extracted by, e.g., discriminant analysis techniques.
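A greatly simplified stand-in for the discriminant-style comparison described above is nearest-class-mean identification of a measured feature vector against enrolled per-person means. This sketch is an assumption for illustration only; a full discriminant analysis would additionally weight features by within-class covariance, and the feature vectors here are toy data.

```python
# Simplified nearest-class-mean identification over spectral feature
# vectors (an illustrative stand-in for discriminant analysis).
import math

def class_means(samples_by_person):
    """Average the enrolled feature vectors of each person."""
    means = {}
    for person, samples in samples_by_person.items():
        n = len(samples)
        means[person] = tuple(sum(v[i] for v in samples) / n
                              for i in range(len(samples[0])))
    return means

def identify(features, samples_by_person):
    """Return the enrolled identity whose mean feature vector is closest."""
    means = class_means(samples_by_person)
    return min(means, key=lambda p: math.dist(features, means[p]))
```

For verification rather than identification, the same distance would simply be compared against a threshold for the single claimed identity.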
Light reflected from the skin, and scattered thereby, may be subjected to various types of mathematical analyses for comparison with a specific reference. These analyses include moving-window analysis and block-by-block or tiled analysis, for example. Such analyses are described in detail in U.S. Patent Publication 2006/0274921 A1, incorporated herein by reference.
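The block-by-block, or tiled, analysis mentioned above can be sketched as follows: two equally sized image regions are split into tiles and each tile pair receives a similarity score. This is a generic illustration, not the analysis of the referenced publication; the tile size and the summed-squared-difference score are assumptions.

```python
# Illustrative tiled comparison: split two equally sized grayscale images
# (2-D lists) into tiles and score each tile pair by summed squared
# differences (lower score = more similar).

def tiles(image, size):
    """Yield size x size sub-blocks of a 2-D list-of-lists image."""
    for r in range(0, len(image), size):
        for c in range(0, len(image[0]), size):
            yield [row[c:c + size] for row in image[r:r + size]]

def tiled_ssd(img_a, img_b, size=2):
    """Per-tile summed squared difference scores for two images."""
    scores = []
    for ta, tb in zip(tiles(img_a, size), tiles(img_b, size)):
        scores.append(sum((a - b) ** 2
                          for ra, rb in zip(ta, tb)
                          for a, b in zip(ra, rb)))
    return scores
```

A moving-window analysis would differ only in stepping the window by less than the tile size so that successive blocks overlap.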
Regardless of which of these embodiments described herein, or other embodiments, is utilized, characteristics of the skin texture are generated from the illuminated skin and compared with stored characteristics of a person's or persons' skin. Values are assigned to the measurement comparisons. If the values are within a threshold, the identity of the person is verified.
Referring to
The substrate 324 protects the integrated display 312 and imaging device 326 and typically comprises plastic, e.g., polycarbonate or polyethylene terephthalate, or glass, but may comprise any type of material generally used in the industry. The thin transparent conductive coating 322 is formed over the substrate 324 and typically comprises a metal or an alloy such as indium tin oxide or a conductive polymer.
Though the exemplary embodiment described herein is an LCD, other types of light modulating devices, for example, an electrowetting device, may be used.
An electroluminescent (EL) layer 328 is disposed contiguous to the ITO ground layer and includes a backplane and electrodes (not shown), as known to those skilled in the art, and provides backlight for operation of the display 312 in both ambient light and low light conditions by alternately applying a high voltage level, such as one hundred volts, to the backplane and electrodes. The ITO ground layer 332 is coupled to ground and provides an ITO ground plane for reducing the effect on the imaging device 326 of any electrical noise generated by the operation of the EL layer 328 or other lower layers within the display 312. The various layers 318, 322, 324, 326, 332 are adhered together by adhesive layers (not shown) applied therebetween. Although the EL layer 328 is preferred, other light sources, for example, a light emitting diode (LED) or a field emission device, may alternatively provide radiant energy to the layers 332, 326, 324, 322, and 318. This radiant energy may span the visible range of wavelengths to accommodate the display requirements, but may also include near infrared to accentuate skin texture image capture and analysis.
The imaging device 326 comprises a plurality of pixels 338 for producing displayed images (black and white, black and white including shades of gray, or color) and illumination of skin texture (a single wavelength, a spectral band, or a plurality of spectral bands), and a plurality of photosensors 340 for sensing touchscreen inputs on the transparent cover 318 of the display 312 and for capturing reflected images of the skin texture. Each pixel 338 has a photosensor 340 associated therewith. When three pixels are grouped to form a triad of pixels to represent a color image, one photosensor 340 may be positioned with each triad, or with each pixel in the triad, or may be more sparsely populated within the imaging device 326.
In order to prevent the entire display from lighting when the finger touches a small portion, those photosensors 342 detecting the touch of the finger 344 (
In one exemplary embodiment and as known in the art, the touch input display 312 includes a layer of liquid crystal molecules formed between two electrodes. Horizontal and vertical filter films are formed on opposed sides of the imaging device 326 for blocking or allowing the light to pass.
The electrodes in contact with the layer of liquid crystal material are treated to align the liquid crystal molecules in a particular direction. In a twisted nematic device, the most common LCD, the surface alignment directions at the two electrodes are perpendicular and the molecules arrange themselves in a helical structure, or twist. Light passing through one polarizing filter is rotated by the liquid crystal material, allowing it to pass through the second polarized filter. When a voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel to the electric field. The magnitude of the voltage determines the degree of alignment and the amount of light passing therethrough. A voltage of sufficient magnitude will completely untwist the liquid crystal molecules, thereby blocking the light.
Referring to
Referring to
In accordance with the exemplary embodiment and illustrated in
During normal use, when a user touches the display and the skin is sensed 512, the display provides 514 radiant energy (illumination) to the portion touched by the skin. The radiant energy may be a single wavelength, a spectral band, or a plurality of spectral bands. Reflected and scattered radiant energy is received 516 from the skin, including its underlying layers, and active characteristics are estimated 518 from the received radiant energy. A determination 520 is made as to whether the estimated characteristics are of sufficient quality. If not, the skin texture image quality may be improved by adjusting 522 the brightness or spectral balance of the illumination, or by recording another sample.
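The quality check and adjustment loop of steps 518-522 can be sketched as a capture-with-retry routine. The quality metric, the brightness adjustment, and the attempt limit here are hypothetical assumptions for illustration.

```python
# Illustrative capture-with-retry loop for the quality check (518-522):
# if an estimated sample falls below a quality threshold, adjust the
# illumination and recapture. All parameters are assumptions.

def capture_good_sample(capture, quality, adjust,
                        min_quality=0.5, attempts=3):
    """Capture until quality is sufficient or attempts are exhausted."""
    brightness = 1.0
    for _ in range(attempts):
        sample = capture(brightness)
        if quality(sample) >= min_quality:
            return sample
        brightness = adjust(brightness)  # e.g., brighten the illumination
    return None  # no sample of sufficient quality obtained
```

The `adjust` callable could equally rebalance the spectral content of the illumination rather than its brightness.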
An active data sample of the skin texture is derived 524. This second data sample is passively captured without any specific, intentional action taken by the user. The above steps are repeated, and corrections are made to the data sample, including, for example, filtering out noise. A statistical model of the active data sample may be formed. Combinations of data within the active data sample, such as ratios or logical comparisons, may also be determined. These values are then compared 526 with stored values from the reference data sample(s). The comparison may be carried out using any method of comparing quantities or sets of quantities, e.g., by summing squared differences. Values are assigned based on the comparison, and a determination is made whether the values are within a threshold. If the values are within the threshold, the identity of the person whose skin is being scanned is verified 528 and one or more specific functions of the wireless communication device are enabled 530. The functions may include, for example, allowing use in the most basic sense and configuring, or tailoring (personalizing), the wireless communication device to a particular user. If the values are not within the threshold, the identity of the person whose skin is being scanned is not verified 528, and the steps 512-528 may be repeated 536 up to N times. If the identity is not verified within N attempts, the device is disabled 538. The number N is some integer, such as 3, determined to provide a reasonable opportunity to obtain an accurate image of the finger.
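The verify-or-retry decision logic above, using the summed-squared-difference comparison the text names as one example, can be sketched as follows. The threshold value is an assumption for illustration; N = 3 follows the example given in the text.

```python
# Sketch of the verify-or-retry logic (steps 526-538). The text names
# summed squared differences as one possible comparison; the threshold
# value here is an illustrative assumption.

def ssd(sample, reference):
    """Summed squared difference between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(sample, reference))

def verify_with_retries(get_sample, reference, threshold=0.5, n=3):
    """Return 'enabled' on a match within n attempts, else 'disabled'."""
    for _ in range(n):
        if ssd(get_sample(), reference) <= threshold:
            return "enabled"   # identity verified (528), function enabled (530)
    return "disabled"          # not verified within N attempts (538)
```

In a device, "enabled" would unlock or personalize the phone and "disabled" would lock it out after the N failed attempts.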
Each of the steps 512 through 528 may be repeated 532 for continuing verification that the user is an authorized user. This repetition would prevent, for example, an unauthorized user from using the device after an authorized user has been authenticated. These steps 512-528 are performed with no intentional action by the user of the electronic device. Additionally, an optional dynamic enrollment update 534 may be performed by comparing each of the active data samples with the original data sample and adjusting the acceptable range for subsequently received active data samples based on the original data sample and the additional active data samples.
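One simple way the dynamic enrollment update 534 could be realized is by blending each newly verified sample into the stored reference, so the acceptable range tracks gradual changes in the skin. The exponential-average scheme and its blending weight are assumptions for illustration, not the claimed update method.

```python
# Illustrative dynamic enrollment update (534): fold each newly verified
# sample into a running reference via an exponential moving average.
# The blending weight is an illustrative assumption.

def update_enrollment(reference, new_sample, weight=0.1):
    """Blend a verified sample into the stored reference vector."""
    return tuple((1 - weight) * r + weight * s
                 for r, s in zip(reference, new_sample))
```

A small weight keeps the reference stable against noisy captures while still adapting slowly over many verified touches.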
In another exemplary embodiment, the above described method of verifying the user based on a skin texture data sample may be only one of several biometric measurements taken for verification. An attempt may be made to capture two or more biometric samples, such as a voiceprint, a picture of the user's face, or a fingerprint, in addition to a skin texture data sample. Since one particular biometric sample may not always be obtainable, a successful capture of another biometric sample may still enable a function on the wireless communication device.
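The multi-biometric fallback described above can be sketched as trying each available modality in turn and accepting the first one that both yields a sample and verifies it. The modality names and the dict-based structure are assumptions introduced for illustration.

```python
# Illustrative multi-biometric fallback: try each capture modality in
# turn; a modality that cannot produce a sample returns None and the
# next modality is attempted. Modality names are assumptions.

def multi_biometric_verify(captures, verifiers):
    """Return the first modality that captures and verifies, else None."""
    for modality, capture in captures.items():
        sample = capture()          # may return None if unobtainable
        if sample is not None and verifiers[modality](sample):
            return modality         # this modality verified the user
    return None                     # no modality could verify the user
```

Here a failed voiceprint capture (e.g., in a noisy environment) falls through to skin texture, so the device function can still be enabled.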
While at least one exemplary embodiment has been presented in the foregoing detailed description of the invention, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.