1. Field
This invention relates to the field of data security, such as secure storage and retrieval of sensitive medical and financial data, multifactor authentication, access control, remote control of devices in absentia (such as in the case of home automation and other remote devices), as well as biometrics. It specifically relates to multifactor authentication for gaining access to a place or a resource such as a data bank, or for conducting transactions, using handheld (mobile) or fixed devices. It is also related to near field communication and other wireless communication techniques, as well as cryptography and key-exchange techniques such as symmetric and asymmetric encryption and hashing.
2. Description of the Related Art
Mobile devices such as smartphones, personal digital assistants (PDAs), as well as many other handheld devices are being used as authentication devices for financial as well as access transactions. In some countries these devices are providing the means for cash transactions in the same way a debit card is used. Some African countries have even been using these devices as prepaid credit devices which may be used for cash transactions simply by having the credit transferred from one phone to another. These are mostly done using the mobile network. In addition, there have been applications where a mobile device is used to access a data repository using well-established authentication methods, as well as hard-wired access control devices used for physical access to restricted areas. Some of these systems have also used biometrics such as fingerprint and iris recognition at fixed entry systems.
The ICT Regulation Toolkit is a toolkit generated by the Information for Development Program (InfoDev) and the International Telecommunication Union (ITU). A Practice Note [1] gives many different examples of financial services which are available through the use of a mobile phone. These include Branchless Banking Models, such as the WIZZIT service [2] in South Africa; Mobile Payment systems, such as M-PESA in Kenya and the Globe Complete G-Cash service in the Philippines; and Airtime Transfers [3] in Egypt, South Africa, and Kenya. See [1] for details.
However, the listed transactions currently rely on one or both of the following two authentication factors:
1. Possession of an item (something one owns).
2. Knowledge of a fact (something one knows).
In the scenario described at the beginning of the Description of the Related Art, the phone is being used as an item being owned (1st authentication factor). In this case, if the phone is stolen or used without permission, one or more transactions may take place before the phone may be deactivated or the credit may be blocked. In fact, technically, the possession of the phone is equivalent to the old standard of possessing currency.
To reduce the chance of the fraud described in the previous paragraph, some implementations also require another factor in the form of something the person knows (2nd factor), such as a challenge passcode. However, most such passcodes are simple to ascertain and to abuse in order to attain unlawful access to the funds associated with the telephone.
The present invention provides for methods and systems that perform electronic transactions utilizing mobile devices in conjunction with multifactor authentication. The multifactor authentication, described here, utilizes four types of authentication factors including:
1. Possession of an item (something one owns).
2. Knowledge of a fact (something one knows).
3. Identity (something one is).
4. Liveness Factor (proof of being a human and not a machine).
Of course, it is preferred to use more than one authentication method within each factor type (category). In order to be able to decide whether the device of interest is in the possession of the target individual, take, as an example, the first factor: identifying the device itself. One may use the Subscriber Identity which is stored in the form of an ID on the Subscriber Identity Module (SIM) of most phones. Most PDAs and other handheld devices also have similar network subscriber IDs. Other possible device identifiers are the Media Access Control (MAC) address, the Universally Unique Identifier (UUID), the Internet Protocol (IP) address, or the Caller ID. N.B., MAC addresses are not unique, but the chance of two devices possessing the same MAC address is low. Also, IP addresses may be spoofed.
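As a purely illustrative sketch (not part of the claimed method), the following Python fragment shows one way the available device identifiers could be combined into a single possession-factor fingerprint; the identifier values and the function name are hypothetical.

import hashlib
import uuid

def device_fingerprint(subscriber_id=None, mac=None, device_uuid=None, ip=None):
    """Combine whichever device identifiers (Factor 1) are available into a
    single hash that can be registered once and compared on later transactions.
    Missing identifiers are simply skipped."""
    parts = [p for p in (subscriber_id, mac, device_uuid, ip) if p]
    if not parts:
        raise ValueError("no device identifier available")
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

# Example with made-up identifier values:
print(device_fingerprint(subscriber_id="310150123456789",
                         mac="00:1A:2B:3C:4D:5E",
                         device_uuid=str(uuid.uuid4())))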
This invention will not only utilize the third factor in conjunction with the first two factors in order to increase the security of the device and to reduce the chance of providing unauthorized access to individuals, but it also provides methodologies for combining these sources of information to reduce the chance of fraud.
As will be made clearer, this new methodology may be used for many other similar authentication applications such as any financial transaction, any access control (to account information, etc.), and any physical access scenario such as doubling for a passport or an access key to a restricted area (office, vault, etc.). It may also be used to conduct remote transactions such as those conducted on the Internet (E-Commerce, account access, etc.). Yet another application is access to and manipulation of medical, financial, and other personal and/or sensitive records or data. In the next section this multifactor authentication is described further.
For the second factor (knowledge of a fact), as an example, a challenge in the form of a traditional passcode may be requested, in which case it is usually typed in. Depending on the available input devices, preselected or predefined facial expressions (for cameras), natural language understanding or a repeated phrase through a speech recognizer (for a microphone input), or a handwritten signature such as that described by [4] (for a touchpad or pen) may be used, along with other methods, some of which are described in Section 2.3.
For the third factor (something one is), biometric techniques are used. Many different biometric methods may be used, such as those listed in Section 1.3. Some such techniques are Speaker Recognition, Image-Based or Audio-Based Ear Recognition, Face Recognition, Fingerprint Recognition, Palm Recognition, Hand-Geometry Recognition, Iris Recognition, Retinal Scan, Thermographic Image Recognition, Vein Recognition, Signature Verification, Keystroke Dynamics Recognition, and Brain Wave Recognition using a Brain-Computer Interface (BCI). Of course a multimodal biometric recognition is preferred, since it reduces the chance of errors due to technological shortcomings and fraud.
With the increasingly sophisticated technologies which are available, more care needs to be given to establishing the liveness of the individual being authenticated by the system. Spoofing techniques enjoy access to better mimicry technologies such as high-fidelity recordings of a person's speech, high-quality digital images of the person, latex replicas of fingerprints, high-quality images of the iris of the individual, etc. The liveness factor helps ensure that a person is indeed being authenticated and not a machine posing in his/her place. More details are given in Sections 1.3.1.2, 1.3.4.1, 1.6, and Item 4 of Section 1.1.
Several methodologies will be presented by this invention in the process of combining the above elements from the four possible factors, in order to reduce the chance of fraud. Moreover, in order to ensure that the data kept on the device and on the servers associated with the transaction do not lose their integrity and all transactions are secure, a unique methodology is presented (Section 2) which provides assurance that all the data and models associated with the biometrics, as well as the software being utilized to run the authentication and the transactions are unchanged and authentic. It also ensures that a transaction authority may trust the multifactor authentication being done on a device which is held by the person being authenticated. This trust is essential to allow for the authentication to be done locally on the device being used by the user, or an independent device which is not owned or controlled by the transaction authority. In all, the communication, storage, and processing of transactions among the following modules: data capture, authentication, point of service, transaction authority, and certificate authority are made secure and trustworthy by the process presented in this invention.
As a practical usage of the multifactor authentication capabilities associated with the system described above, new detailed applications are elaborated, enabling practical usage of the said authentication system for gaining access to restricted locations and performing device automation and control, either at a home or a place of business or any other restricted location. Specific examples will be given to clarify the field of invention. Due to the nature of the applications, the device may either be in the form of a handheld device or be mounted in a specific place for usage by the members of the public who are authorized for the usage of the device, in order to gain access to the place of interest or to control remote devices, such as lights, appliances, or other devices.
Furthermore, access may be realized in the form of access to restricted data or records such as access to medical or financial records. Namely, a methodology will be presented that allows for the safe storage of personal data such as medical and financial records on a personal device or computer where the multifactor authentication described in this patent will be used to unlock the data. This ensures that only the owner of the data has access to the data and will be the only person in control of the data. For example, this enables the owner of the data to store the data safely on his/her device and selectively share parts of the data with institutions which may have a shared interest in having access to some of that data. For example, all of a patient's medical records may be stored on his/her personal device, locked with his/her multifactor authentication scheme described here. In one instance, the patient may need to see an internist who may need to review his/her latest blood test results, in which case the patient may unlock the data and transmit that part of the data to the doctor. In this scenario, the patient chooses what part of the data is shared with the doctor and the data is only shared with the patient's consent. Any other person gaining access to the personal device of the patient will not be able to access the data. Also, this allows for patients to carry their medical history with them at all times without having any concerns about privacy issues. In fact, different parts of the data may be tagged with different security levels. As an example, if the patient is diabetic or has known severe allergies, he/she may choose to make the data associated with these items public so that in case he/she goes into shock, the data is available to all individuals accessing his/her device. However, they will not have access to the rest of his/her medical history and details. The important enablement is that the patient makes a decision on what part of the data requires high security and what part does not.
The following is a system in which a person may use a Cellular (Mobile) Telephone, a PDA or any other handheld computer to make a purchase. This is an example only. The process may entail any type of transaction which requires authentication, such as any financial transaction, any access control (to account information, etc.), and any physical access scenario such as doubling for a passport or an access key to a restricted area (office, vault, etc.). It may also be used to conduct remote transactions such as those conducted on the Internet (E-Commerce, account access, etc.). In the process, a multifactor authentication is used.
In this narrative, the words, “PDA”, “device,” and “phone” are used interchangeably to indicate a Cellular (Mobile) phone, any Personal Digital Assistant (PDA), Personal Music Player (Assistant), or any portable electronic device capable of capturing a biometric and communicating with a computer and/or telephony network.
As we will see later, one of the possible biometrics would be speech (speaker recognition). In this specific case, for example, the PDA of
For instance, if the biometric of choice is fingerprint, then the PDA would have to have a fingerprint capture device. These requirements have been explored in the description below, for different biometrics.
It is important to first clarify some terminology regarding the process of conducting multifactor authentication. There are two ways authentication may be done: verification and identification.
With verification, generally a unique identifier is presented by the person who is asking to be authenticated, along with the test data which should be used to authenticate the individual. The unique identifier is usually a key into the database that contains the models related to the target individual. The target ID is the claimed ID which needs to be tested against the test data being presented by the test individual. If the identity of the test individual is confirmed to match the identity of the target (claimed) individual according to the test data, then the test individual is authenticated. If the identity of the test individual does not match that of the target individual, due to the fact that the test data does not match the target model (data) on file according to the claimed ID that has been provided, the authentication fails. Note the following summary:
In the case of verification, the claimed ID is provided by the test user. Therefore, verification entails the matching of the test data against the model of the target individual and, for contrast, the matching of the test data against one or more competing models used as references. This technique is usually very quick, since it only requires a handful of matches at most. On the other hand, an identification process is more costly and time-consuming.
It is also possible to perform an authentication by using an identification scheme instead of a verification scheme. One way, which is what is being proposed here, is the use of some unique or nearly unique identifier in conjunction with the test data. If the population of the enrolled users is small, then an identification procedure is performed, namely matching the test data against all the reference models on file and obtaining a score associated with each model on file. In this case, we still need a rejection mechanism for test speakers who are not enrolled in the system. The rejection mechanism used here is described in Section 1.4. In this case, the claimed ID does not need to be unique. It only needs to be limiting in a way that reduces the number of possible models substantially. This way, the claimed identity information is treated as a separate factor, which will be described in Section 1.1 as an authentication factor of type 2 (personal information).

Let us assume for the moment that this ID is actually a personal identification number (PIN). It is possible that more than one person in our database has chosen the same 4-digit PIN. However, since there are 10,000 different 4-digit PINs, the presentation of the PIN reduces the number of possible models that may match the test data. In this case, the PIN is a specific sequence of numbers. However, the personal information, as described in Section 1.1 as a factor of type 2, may, for example, be the answer to a question whose answer is only known to the user. Since the answer may be in free form, say in the form of a speech response or a typed response, it is possible that the user may forget exactly how he/she chose the response to the question when he/she enrolled into the system. In that case, a match does not have a crisp binary result of 0 (no match) or 1 (match); the response may be 90% correct. For example, the question may have been, "In what city were you born?" Assuming the person was born in New York City, he/she could have spelled out "New York City" at the enrollment time. However, at the time of the test, he/she may have responded using speech as "I was born in New York." The speech recognition system and the natural language understanding would produce a score of, say, 90% associated with this response, in relation to the correct response of "New York City." This score may be fused, using the methods described in Section 1.5, with the scores being returned by the multimodal biometrics of, say, speaker and face recognition to provide a final score for each target individual in the list of enrolled people, using the test data from the user being authenticated. Here, a complete multifactor authentication is proposed which contains a few modes from each of the four different types of authentication factors defined here. Each of these modes produces a score for the test speaker versus the target speakers of choice.
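The following Python sketch is provided only as an illustration, not as a definitive implementation, of the narrowing-and-fusion idea described above: enrolled models are first limited to those sharing the presented PIN, a soft (non-binary) score is computed for the free-form answer (a crude string-similarity stand-in for speech recognition plus natural language understanding), and that score is fused with a biometric score. The enrollment records, weights, and the stand-in biometric matcher are all hypothetical.

from difflib import SequenceMatcher

# Hypothetical enrollment records: a 4-digit PIN, an enrolled free-form
# answer, and a biometric model identifier per user.
enrolled = {
    "alice": {"pin": "4921", "answer": "new york city", "model": "alice_model"},
    "bob":   {"pin": "4921", "answer": "chicago",       "model": "bob_model"},
    "carol": {"pin": "7303", "answer": "boston",        "model": "carol_model"},
}

def answer_score(enrolled_answer, test_answer):
    """Soft match of the knowledge factor, e.g. 'New York' against the
    enrolled 'New York City' yields a high but non-perfect score."""
    return SequenceMatcher(None, enrolled_answer.lower(), test_answer.lower()).ratio()

def identify(pin, test_answer, biometric_score_fn, w_bio=0.7, w_ans=0.3):
    """Restrict the search to models sharing the presented PIN, then fuse
    the knowledge-factor score with a biometric score for each candidate."""
    candidates = {u: r for u, r in enrolled.items() if r["pin"] == pin}
    scored = {}
    for user, rec in candidates.items():
        s_ans = answer_score(rec["answer"], test_answer)
        s_bio = biometric_score_fn(rec["model"])   # placeholder for a real matcher
        scored[user] = w_bio * s_bio + w_ans * s_ans
    return max(scored, key=scored.get) if scored else None

# Toy biometric matcher for illustration only.
print(identify("4921", "New York",
               biometric_score_fn=lambda m: 0.92 if m == "alice_model" else 0.30))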
The present invention uses 4 different types of factors or types of sources for authenticating the individual who is requesting to gain access to a physical or virtual location. A physical location may be a home, an office, a country (passport), a bank vault, or any other location which imposes restrictions on entry based on a person's identity. A virtual location may be a bank account, a merchant account, an account with a seller or a facilitator of a sale between buyers and sellers, or personal data stored on a storage device, such as passwords and classified information on a hard drive or other storage media. A virtual location may also be an entry point into an electronic transaction as described in the Transaction Section 2.2. Each factor type may include one or more modes, which are methods that would fall under that factor type. The following is a list of factor types or sources used for performing the authentication:
In order to be able to present the fusion of multiple factors with a combination of different media and traits, the following alternative perspective is provided. It is important to understand the difference between the use of the second factor in contrast with the third factor, as defined in Section 1.1. This invention provides new techniques which may be used as the second factor, such as predefined or preselected facial expressions, predetermined content which may be uttered using a speech recognition system and a natural language processing and understanding engine. Yet another interface to use for the second factor may be the use of hand gestures. One may also use a predetermined handwriting gesture or handwritten word. The use of the handwritten word or gesture includes two aspects in one. The following describes the perspective and categorization that allows combining many traits and challenge-responses from different biometrics.
The following are the different human-machine interface channels, where each interface conveys different factors. Different media are listed below. Under each medium, the different human-machine interface channels are listed. For each channel, there is a list of traits and combination of challenge-response cases with the corresponding factor or combination of authentication factors they provide, as defined in the Section 1.1.
Each of the following human-machine interface channels may be used to perform many different combinations of biometric authentication (factor 3), personal information capture (factor 2), in the form of preselected responses or the discovery of a-priori knowledge of facts, and liveness information (factor 4), in the form of prompted discovery, such as repeating prompted requests or producing actions and responses that would carry proper information related to the prompted query.
Imaging provides access to several human-machine interface channels. Imaging may be done of any part of the body, including the face, the ear, the hand, the complete body, the eyes (e.g. retina and iris), etc. It may also be done in different types of light, such as visible light, infrared, different types of x-ray, etc., although, due to practicality, visible light is preferred. Some higher-frequency lighting (above visible) may be used for discovering concealment at the same time. Sometimes infrared light may be used for penetrating the skin to the level of the veins (Sections 1.3.10 and 1.3.11), etc. The following are some prominent ones.
Facial imaging may be used for obtaining images of the face in the form of a series of still images, forming a video stream, or a single frame. With this interface, the following information may be obtained, leading to the listed factors or combination of factors.
Face recognition may be conducted by taking frames of pictures of the face, mostly in full frontal position, although they may be done in any orientation. The faces are manipulated to produce features and are matched against a list of faces in the database. This is a biometric.
The face of the person is used as discussed above, in the form of a biometric. However, at the enrollment stage, preselected facial gestures have been chosen by the user which need to be enacted at the test or verification time, in order for the person to be authenticated. This conveys biometric information, as well as personal information that is only known to the user, in the form of preselected gestures. For example, these gestures may be the blinking of the right eye a few times, followed by blinking the left eye; the lip movement for the utterance of a preselected word, for which the camera will detect a series of lip movements; the use of the tongue for gestures in any fashion; etc. The preselected nature means that the same activity must have been performed at the time of training. Therefore, at the test time, the activity is compared to the enrolled activity. If, at test time, both the biometric face recognition and the performed gestures match the models stored in the database from the time of enrollment, the person is authenticated.
The face of the person is used as discussed above, in the form of a biometric (Section 1.3.4). However, the recognition system is capable of recognizing different facial gestures by using a standard description associated with each gesture. The system randomly prompts the user to perform different gestures. In this case, the point is to ensure liveness and that the system is not being supplied with a pre-recorded or artificially generated series of facial images.
Ear imaging may be used as a biometric as described in Section 1.3.2, therefore it provides a Factor of type 3.
Image of the palm of the hand, with or without the fingers, may be used as a biometric as described in Sections 1.3.6, 1.3.7, and 1.3.11. Standalone, this imaging would provide a biometric which is a Factor of type 3. However, in addition, the image of the hand may be used to capture hand gestures in the form of hand movement and configuration. If it is done in conjunction with preselected gestures, then it would provide Factors 2+3. If it is done in conjunction with prompted hand gestures, such as a request to do a thumbs-up, to show the index finger, or to cross two fingers, etc., it would provide a liveness test as well, which means it would produce Factors 3+4.
Full body imaging may be done to provide biometric information such as Gait (Factor 3). It may also be used to ascertain body language. The body language may be thought of as specific movements of different parts of the body, with relative positions of the different parts of the body, including relative speeds and accelerations. This information may be used in much the same way as the previous human-machine interface, to deduce biometrics (Factor 3) such as in Gait recognition (Section 1.3.14), or with preselected body movements (Factor 2) or prompted body movements (Factor 4). Any combination of the above may be used to produce a multifactor authentication procedure.
As with imaging, audio capture may be used to perform many different combinations of biometric authentication (factor 3), personal information capture (factor 2), in the form of preselected responses or the discovery of a-priori knowledge of facts, and liveness information (factor 4), in the form of prompted discovery, such as repeating prompted requests or producing actions and responses that would carry proper information related to the prompted query.
Speech may be used to provide voice biometrics about the user. Therefore, speaker recognition by itself is a biometric (Factor 3); however, speech may also convey content in the form of a predetermined or preselected text (Factors 2+3). Using speech to convey prompted content would provide a liveness test, hence producing Factors 3+4. In another usage, speech may be used to answer specific questions related to the situation at hand. For example, a question may be posed about the local weather, the response to which should be befitting the question. Another example would be a question about the color of an object, the answer to which would require presence in the locality of interest at that specific moment. Depending on the capabilities of the natural language processing and understanding systems being used, in conjunction with the speech recognition capabilities, more or less complex questions may be asked by the system to assess liveness. These examples provide Factors 3+4. In fact, the queries may be formed in such an interactive way that would require Factors 2+3+4. In this case, the person's response would contain information about preselected or known facts, as well as prompts relating to the current state to ensure liveness.
Nonspeech human-generated audio (whistles, clicks, taps, claps, etc.) may also be used, mostly to provide Factors 2 and 4. A predetermined sound sequence, such as a specific series of tones produced by whistling, or a number and duration of clicks, taps, claps, etc. performed by the mouth, hands, fingers, feet, etc., may be used to provide an authentication Factor of type 2. A prompted sound sequence of the same kind would constitute an authentication Factor of type 4. The two types may be combined through an intelligent prompt system, such as that discussed in Section 1.2.2.1, to provide a combination of Factors 2+4 as well.
Audio-based ear recognition (Section 1.3.3) may be used to provide biometric information (Factor 3).
Online handwriting recognition [7] is very similar to the speech case (Section 1.2.2.1). It has three different aspects that it may convey. The first is the recognition of the content of the writing, as in reading the handwriting of a person for content [7]. This aspect may be used to ascertain Factors of type 2 and 4. The second aspect is online signature recognition, where not only the shape, but also the local dynamics of the signature (relative velocities and accelerations of the different points in the signature) are used as a biometric to recognize the person based on his/her signature. Being a biometric measure, this aspect leads to a Factor of type 3. The third aspect is the use of handwritten gestures, in which case the same techniques as are used in performing unconstrained handwriting recognition [7] may be used to recognize the gestures. These gestures may either be preselected (Factor 2) or be prompted, such as asking the person to draw a circle or a cross, etc. (Factor 4).
Signature recognition (verification) [4] is very similar to text-dependent speaker recognition where the signal is a handwriting signal. It may also be seen as a preselected gesture (Section 1.2.5). It provides biometric information (Factor 3), but it also includes personal information which is known to the individual. Unfortunately signatures (in their image form) are public and may be found on signed checks and documents, and therefore they may be available for mimicry. Although the dynamics are hard to mimic, it may still be achievable by seasoned impostors.
Choosing other preselected words, preferably a number of choices to be chosen at the time of the test by the user or system, will be more effective. It provides Factors 2+3.
As mentioned in Section 1.2.4, online signatures are special cases of online preselected gestures. In the same manner, one may choose other gestures and enroll in the system using these gestures. At the time of the test, the user may use any combination of these preselected gestures, which, contrary to signatures, are not publicly available. These gestures provide Factors 2+3. Much in the same spirit, the user may be prompted to input specific gestures such as drawing a circle or a cross or any other shape. In the process, the relative motion of the hand contains motor control dynamics information [8], which provides a biometric (Factor 3), while the content of the gesture, which is prompted, provides a liveness factor (Factor 4). Therefore, this case can provide Factors 3+4.
1.2.7 Brainwave or Brain-Computer Interface (BCI) (e.g. through EEG)
Brainwave recognition (Section 1.3.15) produces a biometric measure, but it generally requires some context. The context may be either a predetermined context or one which is prompted.
In this scenario, predetermined brain activity will generate a specific EEG signature which may be recognized as a biometric (Factor 3), combined with the knowledge of the specific activity which leads to that EEG signal. The user needs to know the activity and will also generate the EEG signal which is expected. This is somewhat like text-dependent speaker recognition, where the text is not provided to the user and needs to be known.
1.2.7.2 Use of Brainwave to create a predetermined pattern (Factors 2+3)
In this case, in contrast with the previous case (Section 1.2.7.1), the user uses brain activity to effect a change in an intermediate medium, such as moving a cursor through a path on the screen, etc. Now, the user needs to conduct such activity to generate or follow a predetermined path. For the cursor example, we can think of the user using the capability of moving a cursor in a two-dimensional field or picking from a list of letters and numbers to produce a predetermined pattern or character or number sequence.
1.2.7.3 Use of Brainwave to create a prompted pattern (Factors 3+4)
This case is very similar to the case of Section 1.2.7.2, but in this case, the pattern is randomly generated by the requesting machine and the user needs to produce that pattern. This ensures liveness as well, just in case the previous patterns generated by the EEG have been intercepted and reproduced by a spoofing system.
Much in the same way as handwriting, keystroke dynamics can provide motor control information which is somewhat text-dependent (Section 1.3.13), or at least dependent on the local n-gram being typed [9]. The motor control by itself provides a biometric (Factor 3). The text may also be preselected to provide Factors 2+3. Alternatively, the text may be randomly prompted. In this case, using the n-gram information as described in [9] provides the building blocks for testing liveness as well as the biometrics (Factors 3+4).
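As an illustration only, the following Python sketch computes digraph (adjacent key-pair) latencies from timestamped keystrokes and compares a test sample with an enrolled template. The timing values and the prompted phrase are hypothetical, and a real system would rely on the n-gram models of [9] rather than this simple distance.

def digraph_latencies(keystrokes):
    """Average key-down interval for each adjacent character pair (digraph)
    from a list of (character, timestamp) events."""
    latencies = {}
    for (c1, t1), (c2, t2) in zip(keystrokes, keystrokes[1:]):
        latencies.setdefault(c1 + c2, []).append(t2 - t1)
    return {dg: sum(v) / len(v) for dg, v in latencies.items()}

def timing_distance(enrolled, test):
    """Mean absolute timing difference over the digraphs the samples share."""
    shared = enrolled.keys() & test.keys()
    return sum(abs(enrolled[d] - test[d]) for d in shared) / max(len(shared), 1)

# Hypothetical enrollment and test typings of the same prompted phrase.
enrol = digraph_latencies([("t", 0.00), ("h", 0.11), ("e", 0.20), ("n", 0.35)])
test  = digraph_latencies([("t", 0.00), ("h", 0.13), ("e", 0.21), ("n", 0.38)])
print(timing_distance(enrol, test))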
There are several biometric challenges which may be used, depending on the available sensors. Today, most devices are equipped with a microphone as well as a camera. Some newer models of devices such as the iPhone, also have cameras which face the user. Other inexpensive sensors such as fingerprint sensors may be added to devices and are present on some larger devices. The following are some of the biometrics which are deemed practical for such a challenge. However, the list is not limited to the one given below. In general, any biometric system capable of verifying the identity of an individual based on a biological measure may be used for this purpose.
In a generic speaker verification application, the person being verified (known as the test speaker), identifies himself/herself, usually by non-speech methods (e.g., a username, an identification number, et cetera). The provided ID is used to retrieve the enrolled model for that person which has been stored according to the enrollment process, described earlier, in a database. This enrolled model is called the target speaker model or the reference model. The speech signal of the test speaker is compared against the target speaker model to verify the test speaker.
Of course, comparison against the target speaker's model is not enough. There is always a need for contrast when making a comparison. Therefore, one or more competing models should also be evaluated to come to a verification decision. The competing model may be a so-called (universal) background model or one or more cohort models. The final decision is made by assessing whether the speech sample given at the time of verification is closer to the target model or to the competing model(s). If it is closer to the target model, then the user is verified and otherwise rejected. This kind of competing model is the current state of the art [6]. In addition, the state of the art sometimes uses cohorts of the speaker being tested, according to the user ID which is provided by the user. However, if the person happens to be an impostor, then the cohort is selected based on the user ID which he/she provides. It is possible that the impostor is closer to the model of the user ID he/she is trying to mimic than to the cohort, which is a small set of speakers in the database who have similar traits to the target speaker. In this patent, a novel technique is used, in addition to the conventional techniques of a universal background model and/or the cohort set. This new technique applies to all biometric models, is not limited to speaker recognition, and may be found in Section 1.4.
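To make the target-versus-competing-model comparison concrete, the following is a minimal Python sketch of a likelihood-ratio decision using toy one-dimensional Gaussian models; the models, feature values, and threshold are purely illustrative stand-ins for real speaker models such as those described in [6].

import math

def log_gaussian(x, mean, var):
    """Log-likelihood of a scalar feature under a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def verify(features, target, background, threshold=0.0):
    """Accept if the test features are closer (in log-likelihood) to the
    claimed target model than to the competing (background/cohort) model."""
    llr = sum(log_gaussian(x, *target) for x in features) \
        - sum(log_gaussian(x, *background) for x in features)
    return llr, llr > threshold

# Toy one-dimensional 'models' (mean, variance), for illustration only.
target_model = (1.0, 0.5)       # claimed speaker
background_model = (0.0, 1.0)   # universal background / cohort stand-in
test_features = [0.9, 1.2, 0.8, 1.1]

print(verify(test_features, target_model, background_model))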
The speaker verification problem is known as a one-to-one comparison since it does not necessarily need to match against every single person in the database. Therefore, the complexity of the matching does not increase as the number of enrolled subjects increases. Of course in reality, there is more than one comparison for speaker verification, as stated—comparison against the target model and the competing model(s).
It is important to make sure that the user of the device is not using a prerecorded message captured from the authorized user of the phone to spoof (see [6]) the speaker recognition engine. To do this, a challenge may be used that would test the liveness of the individual using the phone. Several methods for performing such a liveness test are possible.
Most other biometric verification is quite similar to the speaker verification methodology given above. Some special features of other biometrics are listed below.
There are two types of image-based ear recognition systems: two-dimensional and three-dimensional. Two-dimensional image-based ear recognition relies on a photograph of the ear which may be taken using the built-in camera of the phone. The image may be taken and processed directly from the camera. The techniques use information about the color, texture, and shape of the ear to determine the identity of the claimant [10, 11, 12, 13]. There are also three-dimensional algorithms, which either use a three-dimensional image of the ear [14, 15] (in which case they mostly need a supplemental two-dimensional image for color reference) or combine several two-dimensional images to produce a 3-D image [16, 17]. The three-dimensional approach does not seem to be very practical for a PDA application.
The second ear recognition approach uses the acoustic properties of the pinna to establish the identity of the individual. In this approach, a small speaker and a microphone both point into the ear canal. The speaker sends out a wave (1.5 kHz-22 kHz) into the ear canal at an angle, and once the wave goes through the canal and reflects back from the ear drum and the wall of the canal, the microphone picks up the reflected wave. The way the wave is manipulated by this reflection is related to the transfer function, which is made up of the transfer functions of the speaker, the pinna, the ear canal, and the microphone. This transfer function is estimated based on the input and reflected output [18]. This technique may be deployed by using a special earphone to replace the normal earphone that usually accompanies the PDA.
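As a rough illustration of estimating such a transfer function, the following Python (NumPy) sketch divides the spectrum of the recorded reflection by the spectrum of the emitted probe; the probe, the simulated reflection, and the sampling rate are synthetic placeholders, and the magnitude of the estimate could serve as the feature compared against an enrolled template.

import numpy as np

def estimate_transfer_function(emitted, recorded, eps=1e-12):
    """Rough frequency-domain estimate H(f) = Y(f) / X(f) of the combined
    speaker/pinna/canal/microphone transfer function, from the emitted
    probe signal x and the recorded reflection y (same length and rate)."""
    X = np.fft.rfft(emitted)
    Y = np.fft.rfft(recorded)
    return Y / (X + eps)

# Synthetic illustration: a swept probe signal and a fake 'reflection'.
fs = 44100
t = np.arange(0, 0.05, 1 / fs)
probe = np.sin(2 * np.pi * (1500 + (22000 - 1500) * t / t[-1]) * t)
reflection = 0.6 * np.roll(probe, 30) + 0.01 * np.random.randn(len(probe))

H = estimate_transfer_function(probe, reflection)
print(H.shape, np.abs(H[:5]))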
Automatic face recognition has received quite a bit of attention in the last decade, mostly due to the availability of the many video cameras in public locations for security purposes, although there has been active research in this field for more than three decades [19]. There have also been a handful of books written on the subject in recent years [20, 21]. Cooperative face recognition may be used by incorporating the built-in camera in the PDA to identify the user. In order to ensure liveness of the user, several different techniques may be deployed.
One possible liveness challenge is to request one or more pictures from the user with different expressions. For example, the candidate may be asked to make a specific expression which he/she has made in the past and which was registered in the telephone upon enrollment. The challenge would be the random tag associated with some of the enrolled expressions. The user is the only person who would know how to make the specific expression by name. The biometric models (enrollment data) are kept on the PDA in an encrypted form. Therefore, there is no way anyone can see the corresponding expressions. Only the tag is stored on the PDA. The challenger will ask for a specific number which is interpreted by the face recognition software as the label for a specific expression. The tag is then displayed on the PDA, and the candidate will point the PDA toward his/her face, change his/her expression to the dictated expression, and press a button. The image is then verified using the PDA and the results are passed to the authentication requester (cash register, etc.).
Fingerprint recognition [22] would require the existence of a fingerprint sensor.
On portable devices, hand-palm recognition [23] may be done using the built-in camera.
Normally, hand geometry recognition [24] is used in larger systems; however, on a small portable device, the built-in camera may be used for capturing samples.
Iris recognition [25] is usually implemented using sophisticated cameras. However, in the applications of interest to this invention, it is presumed that the user will be a cooperative user (see [6]). Therefore, the built-in camera should be sufficient for most applications.
A retinal scan may be conducted using a special modification to the camera (see [26]).
Thermographic images may be obtained using a modification to the camera [27]. These modifications are currently costly, but may come down in price and become more practical in the future.
Vein recognition [6] generally requires infrared or near-infrared imaging. It may be done using a modification to the camera.
For telephones and PDAs which have a stylus, signature verification [4] may be used. Those with touchpads may also use a simpler gesture recognition system.
For PDAs and cellular telephones with a keyboard (soft or hard), a phrase will be requested which will be typed using the keyboard and the typing style and dynamics [28, 29, 30, 31, 32] will be used to do the verification.
Imaging of the full body and the way a person carries himself/herself while walking is known as gait biometric recognition. The length of a person's stride and his/her cadence are somewhat behavioral, but they also possess some physiological aspects. They are affected by the person's height, weight, and gender, among other factors. Cadence is a function of the periodicity of the walk, and by knowing the distance the person travels, his/her stride length can be estimated [33].
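The following short Python sketch illustrates the kind of computation involved: cadence from the periodicity of detected steps and an approximate stride length from the distance travelled. The step timestamps and distance are hypothetical values used only for illustration.

def cadence_and_stride(step_timestamps, distance_m):
    """Estimate cadence (steps per minute) from the periodicity of detected
    steps, and an average distance per step from the distance travelled,
    as rough gait features."""
    n_steps = len(step_timestamps) - 1
    duration_s = step_timestamps[-1] - step_timestamps[0]
    cadence = 60.0 * n_steps / duration_s          # steps per minute
    stride_length = distance_m / n_steps           # metres per detected step
    return cadence, stride_length

# Hypothetical step detections (seconds) over a walk of 7.2 metres.
print(cadence_and_stride(
    [0.0, 0.55, 1.12, 1.66, 2.21, 2.78, 3.31, 3.85, 4.40, 4.97], 7.2))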
Brainwave is the name given to the capture of the residual electrical or electromagnetic signals produced by the brain of an individual. These signals may be captured using an Electroencephalogram (EEG) or other brain imaging techniques such as fMRI, etc. However, EEG is much more practical at present, since small headset-like devices [34, 35] may be worn by the individual and the brainwave may be captured. Brainwave has been used in the past to control devices, etc. However, it has never been used as a biometric. This invention uses brainwave somewhat in the same manner as, say, speech is used for performing speaker recognition [6]. Although the brainwave signals are weak and carry a lot of noise, they would be unique in the way a person would think of specific tasks. Capturing the brainwave under cooperative conditions, where the person cooperates in order to pass the biometric test, may lead to the verification or identification of the individual. Brainwave interfaces are also known as Brain-Computer Interfaces (BCI).
Any combination of the above biometrics may be used to reduce the error rate and obtain a more reliable result. This combination may be done in any of the methods described in general, by this invention, treating biometric verification as a form of encryption, as well as straight combination of the results.
Alternative Choice of Competing Biometric Reference Models: Note that the techniques described here apply to all biometrics and are not limited to voice biometrics. The proposed technique may be used in the presence of a large data set comprising many individuals. If such a database exists, then in lieu of or in addition to the conventional competing models described in Section 1.3.1.1, multiple reference models are selected from a large pool of biometric models. For example, let us assume that 1500 reference models (such as reference speakers for the case of speaker biometrics, also known as voice biometrics) are picked from a population of 1.5 million distinct biometric models (speakers in the case of voice biometrics) at hand. These reference models (reference speakers for voice biometrics) are picked such that they cover the whole space defined (spanned) by the original 1.5 million models (speakers for voice biometrics). Namely, each chosen reference model is representative of the part of the space in the original population of 1.5 million which surrounds it, in the sense of the metric being discussed in Section 1.4. The representative biometric model selection may be done on the basis of a uniform distribution, or it may be done according to a parametric or nonparametric distribution of the 1.5 million models defined in the space being discussed in Section 1.4. Namely, a proportionally higher density of representative biometric models may be used according to the actual population density of the biometric models. In contrast, in parts of the model space where there is a lower density of biometric models, a sparser, more distant set of reference models may be chosen.
In an alternative reference model selection scheme, the original 1.5 million models may be split into populations of known common characteristics. For example, in the case of speaker (voice) biometrics, this may mean splitting the models into male and female speakers, or even male adults, female adults, and children. Once this preselection is done, the above procedure may be applied to create two (male/female) or three (male/female/child) sets of reference speakers. For the voice biometrics example, either established ground truth about the genders of the speakers represented by each model or automatic gender classification may be used to limit the population for which the rejection (competing) models are chosen. Other common traits may be used for other biometrics. Further reduction in the use of reference models may be done by first associating the reference data with the part of the space for the original 1.5 million models and then using the pertinent subspace of the reference speakers as complement models.
Note that by space, here, we mean the multidimensional space where the biometric models reside. For example, if the model happens to be represented by a single multidimensional vector of features, then the space would be the multidimensional Euclidean space that spans these models. If each model happens to be represented by a multidimensional mean vector and corresponding variance-covariance matrices, then again, the space would be the multidimensional Euclidean space that spans these densities, represented by the given means and variances. If the model is represented by continuous vectors in some discrete or continuous normed vector space, then the corresponding Banach, pre-Hilbert, or Hilbert space containing and governing these models is the space of choice. Refer to [6], specifically Chapter 6 for such measure spaces, Chapter 8 for the computation of the appropriate metrics and divergences among such models, and Chapter 11 for a detailed description of unsupervised clustering. In some cases, where there is a high overlap of the different dimensions of the representative models in their spanned space, a kernel mapping may be used to consider their projections into spaces of different dimensionality, where the metric between any two model representations is computed in that kernel space, as described in Chapter 15 of [6].
Hierarchical grouping may be used to speed up the reference model selection. The hierarchy may be built either in a supervised or unsupervised fashion. An example of supervised hierarchical classification is the initial splitting of all the models into male and female models. Then each group may be split further by an unsupervised technique which would do either top-down or bottom-up unsupervised clustering. Both clustering techniques require the presence of a distortion measure such as a distance or divergence between any two models. In order to be able to use a bottom-up (agglomerative) method, a merging function will also be helpful. The merging function should be able to merge two models and produce a new model that more or less possesses characteristics (traits) from both models being merged. Reference [36] provides an agglomerative technique for creating such a hierarchy.
However, in the current invention, a novel divisive approach is used that splits the population, represented by their biometric models spread in their spanning space as described in Section 1.4, into the number of clusters which are requested. In this case, we would like to cluster 1.5 million biometric models into 1500 clusters. To do this clustering (Chapter 11 of [6]), we define a distortion measure (Chapter 8 of [6]) between any two statistical models which are representations of the models. Depending on the type of model, the distortion measure may be defined differently. For example, if the model is represented by a vector, as is the case for total variability space models (i-vector models) in speaker recognition, then the distortion measure may be defined as any distortion measure that would apply to two vectors, such as a cosine distance or any other relevant metric or divergence (Section 1.4). See [6] for many such distortion measures. Some of these measures will be divergences and some will uphold the symmetry and triangular properties that would deem a distortion measure a distance measure [6]. For the sake of generality, we will call the measure a distortion measure, which may be a divergence (directed or symmetric) or a distance measure. As another case, if the model is in the form of a set of sufficient statistics, such as means and covariances, then there are also different ways to define a distortion measure between these collections of densities or distributions which are defined by parametric or nonparametric traits.
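For illustration only, the following Python (NumPy) sketch selects representative reference models by clustering unit-normalized model vectors (so that Euclidean k-means approximates a cosine-distance clustering) and keeping the pool member nearest each centroid. The toy pool of random 'i-vectors' and the small cluster count stand in for the 1.5 million models and 1500 references; a production system would use the distortion measures and divisive clustering of [6].

import numpy as np

def select_reference_models(models, n_refs, n_iter=20, seed=0):
    """Pick n_refs representative models from a large pool: unit-normalise,
    run a simple k-means, and return the index of the pool member closest
    to each cluster centroid."""
    rng = np.random.default_rng(seed)
    X = models / np.linalg.norm(models, axis=1, keepdims=True)
    centroids = X[rng.choice(len(X), n_refs, replace=False)]
    for _ in range(n_iter):
        # assign each model to its nearest centroid
        assign = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_refs):
            members = X[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # the actual model nearest each centroid becomes a reference model
    refs = [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centroids]
    return sorted(set(refs))

# Toy pool: 2000 random 'i-vectors' reduced to 15 reference models.
pool = np.random.default_rng(1).normal(size=(2000, 50))
print(select_reference_models(pool, 15))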
As described in the section on Multifactor Authentication (Section 1), as well as in the description of
Depending on the factor, it is important to modify the given scores such that they would be comparable across the different factors. Each factor generally measures different information, so the nature of these scores is different. In order to be able to compare them, they need to be transformed into a space that would not only present the same nominal range (say between 0 and 100), but also a similar spread (distribution) of confidence in the different parts of the range. Once such a transformation is determined, the scores are mapped into comparable spaces and any normal statistical measures, moments, and combination techniques such as means, variances, kurtosis, etc. may be used to combine them, and any metrics such as those defined in Chapter 8 of [6] may be used to compare any two matches. Note that it is not necessary to impose limits such as definitely staying between 0 and 100 for the scores, as long as they statistically have similar distributions in these ranges, with possibly a handful of outliers outside the chosen range. We have been speaking of the range of 0 to 100 since percentages are generally well understood by the common population. The actual transformed scores may not be true percentages and may go below 0 or above 100, but only on few occasions. As an example, take the output of a verification process based on log likelihood ratio (see [6]). The numbers are generally small negative numbers in that case, but they can go above 0 as well, since they are defined as the difference between two log likelihoods, each of which would go from 0 to minus infinity. Being akin to logs of likelihoods, one may first exponentiate them to get a score which is more like a likelihood and then normalize them to get confidence scores which would mostly lie between 0 and 100, with higher confidence represented by scores near 100 and lower confidence represented by scores near 0. There is no panacea that may be applied to all factors. Each factor needs to be examined on its own merit and its score should be transformed into some form of a confidence score. The final fusion score, representing the result of the total multifactor authentication, is produced using a statistical technique such as weighted averaging to combine the individual factors, weighted mostly based on their effectiveness and relevance to the authentication problem. The fusion scores (115) of
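The following Python sketch illustrates one simple variant of the score normalization and fusion described above: log-likelihood-ratio style scores are squashed to a roughly 0-100 confidence scale and then combined by a weighted average. The scores, the logistic mapping, and the weights are illustrative assumptions, not prescribed values.

import math

def llr_to_confidence(llr, scale=1.0):
    """Map a log-likelihood-ratio style score to a roughly 0-100 confidence
    via a logistic squashing; the scale would be tuned per factor."""
    return 100.0 / (1.0 + math.exp(-scale * llr))

def fuse(scores, weights):
    """Weighted average of per-factor confidence scores (all mapped to a
    comparable 0-100 range beforehand)."""
    total_w = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total_w

# Hypothetical per-factor scores: device, knowledge, biometric, liveness.
confidences = [92.0, llr_to_confidence(1.8), llr_to_confidence(0.4, scale=2.0), 75.0]
weights = [0.15, 0.20, 0.45, 0.20]   # illustrative effectiveness weights
print(confidences, fuse(confidences, weights))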
The liveness challenge in the case of speaker recognition is described in Section 1.3.1.2. However, the liveness challenge does not necessarily have to be done using this speaker liveness challenge. It may be done using other sensory communication and media. For example, this can be the reading of a displayed random prompt, a valid response to a displayed random question, a facial or hand gesture such as the blinking of a specific eye a number of times, the performance of a hand or body gesture such as the snapping of a finger or holding up a number of fingers with a requested orientation, or the turning of the face in a special way instructed by the application. Another possibility is to use an online signature for assessing liveness. In this scenario, the person being authenticated would use a pen, as described in U.S. Pat. No. 7,474,770 B2 [4], and provide an online signature. Since an online signature has to be provided using a tablet, it will ensure liveness of the individual performing the authentication. The online signature in this case plays two roles. The first role is as one of the biometrics listed in the multimodal biometrics being tested, and the second role is as proof of liveness, since the data needs to be provided using a pen and a tablet and is not as readily spoofable as audiovisual, fingerprint, and other media. This is partly due to the fact that online signature verification needs to replicate the dynamics of the signature, such as velocity, acceleration, pressure, etc., and not just the shape of the signature.
In order to ensure that the data kept on the device and on the servers associated with the transaction do not lose their integrity and all transactions are secure, a unique methodology is presented here, which provides assurance that all the data and models associated with the biometrics, as well as the software being utilized to run the authentication and the transactions are unchanged and authentic. It also ensures that a transaction authority may trust the multifactor authentication being done on a device which is held by the person being authenticated. This trust is essential to allow for the authentication to be done locally on the device being used by the user, or an independent device which is not owned or controlled by the transaction authority. In all, the communication, storage, and processing of transactions among the following modules: data capture, authentication, point of service, transaction authority, and certificate authority are made secure and trustworthy by the process presented in this invention.
2.1 The Enrollment and/or Registration Stage
When the phone is registered (or at some later time), the owner of the device does a biometric enrollment and the model/models is/are built and stored on the device. These models are generally representations of the features of the specific biometric of interest. Most biometric models do not store the actual features of the biometric. The models are usually statistical parameters and other functional representations of the features captured during the enrollment process, combined with statistics or functions of features from some larger training sample provided by the biometric vendor. [6], herein incorporated by reference in its entirety, provides an overview of biometric models as well as a detailed treatment of speaker recognition as a biometric.
The initial enrollment may need to be verified by a third party using a Public Key Infrastructure (PKI) such as the X.509 standard being used by most Internet applications and prescribed in detail in IETF RFC 5280 [37]. The noted third party may be a certificate authority, such as those which exist for issuing secure certificates, or may be another trusted institution, such as the service provider, a bank, or a notary. The enrollment may be certified using a secure key such as a digital certificate which is signed by an SSL certificate authority. It makes sense for this to be done by the cellular telephone vendor who makes the sale or by his/her organization. See the Encryption and Key Exchange section.
Once the biometric enrollment is completed, the models for doing a biometric challenge are ready to enable the biometric authentication services on the phone.
At this point, account information may be linked to the device/user through registration with a transaction authority
At this stage, the biometric enrollment and account linking is done. Let us assume that there is a MasterCard account certificate issued by bank A and saved on the device, the person's passport is linked with the phone and the employer of the individual has linked in an account for accessing the office building and special parts of the company which require restricted access.
Note that all the information is being stored in the form of encrypted keys in the phone and each key may only be deciphered by the issuing authority who has the related private key used at the time of conducting the transaction. This is in contrast with holding the information on a server or servers which would have to be distributed. A server-based solution is not viable since it requires constant communication with the place where the information is stored and may be fooled to release the information to unauthorized devices. In the situation described here, once the linking is done, the possession of the device holding the keys also becomes important.
For every account which is linked, a minimum requirement of the available authentication methods is picked. The authorizing institution sets the minimum requirements at the setup and the owner of the PDA may add extra authentication methods to apply more security. Each linked account may be set up to require a different combination of authentication methods. N.B., see authentication methods for more information.
The transaction may be any process requiring authentication, such as a physical access control scenario (e.g., a passport), an account access scenario using the Internet or a telephone network, etc. The following sales transaction is used to simplify the understanding of the process.
The authentication process may check for the validity of the subscriber ID with an authority. Note that the authenticity of the subscriber ID has been validated by the validation process (Section 2.5) and should only be checked by some transaction authority for validity.
Based on the second authentication factor (something one knows), a challenge request may be initiated by the point of service. This item may be designed to work seamlessly with a biometric challenge (see Speaker Recognition [6] for example) or it may be entered using the keypad or any other data entry device, such as picking from a list of images, etc.
The authentication also includes one or more biometric challenges [6]. This item has been described below in detail, beginning in Section 1.3.
2.4 Registration with the Certificate Authorities
The trivial (identity) hash function simply returns its input:

Y = I(X), where I(X) ≜ H(X) : H(X) = X   (1)
The output of the hash function (also called a digest) is a string of binary digits called the hash of X, H(X). Non-trivial (non-identity) hashing functions come in a number of varieties, such as checksums, keyed cryptographic hash functions, and keyless cryptographic hash functions. Some popular hash functions are keyless, such as MD5, MD6, SHA-256, SHA-512, etc. Some keyed hash functions are message authentication codes such as UMAC, VMAC, and One-key MAC.
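As a brief, hedged illustration of the two families just mentioned, the sketch below computes a keyless SHA-256 digest and a keyed message authentication code; HMAC-SHA256 is used only because it ships with Python's standard library, whereas the text names UMAC, VMAC, and One-key MAC.

```python
import hashlib
import hmac

reference_data = b"example authentication reference X"

# Keyless cryptographic hash (digest), e.g. SHA-256.
digest = hashlib.sha256(reference_data).hexdigest()

# Keyed hash / message authentication code; HMAC-SHA256 stands in for the
# MAC constructions named in the text.
key = b"shared-secret-key"
mac = hmac.new(key, reference_data, hashlib.sha256).hexdigest()

print("H(X) =", digest)
print("MAC  =", mac)
```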
The following definitions are used to describe the digital signature of the information which is stored on the device to ensure the authenticity of the authentication references.
X_i ≜ Authentication Reference   ∀ i ∈ {0, 1, . . . , N, N+1}   (2)
where X_0 = S, X_n = B_n ∀ n ∈ {1, 2, . . . , N}, and X_{N+1} = CS. The authentication reference is also referred to as reference data herein.
Y_i denotes the output of the hash function applied to the authentication reference, X_i,

Y_i ≜ H_i(X_i)   ∀ i ∈ {0, 1, . . . , N, N+1}   (3)
Assuming that there is a certificate authority [37], [5] which is used to sign the references, we denote that authority by CA; its private and public keys, as defined by the X.509 standard [37] for the Public Key Infrastructure (PKI), are denoted by the following two variables, respectively,
R_CA ≜ Private key of the CA   (4)

P_CA ≜ Public key of the CA   (5)
In the same way as in the case of the certificate authority discussed previously, there will be a private and public key pair which is generated on the PDA at the time of registration, using the registration application. This pair of keys is denoted by the following two variables,
R_PDA ≜ Private key of the Device   (6)

P_PDA ≜ Public key of the Device   (7)
We need to define two functions which denote the encryption and decryption of some data. These functions are defined as follows, using any encryption technique which may be desirable. Many such techniques are given by the X.509 standard [37] and many more are explained in detail in [5].
Z = E(R, Y)   Encryption function for private key R and data Y   (8)

D(P, Z) : D(P, Z) = Y   Decryption function   (9)
where the hashed values signed by the CA are

A_i ≜ E(R_CA, Y_i)   ∀ i ∈ {0, 1, . . . , N, N+1}   (10)
The signed hashed values, A_i, and the public key of the CA, P_CA, are stored in the persistent memory of the device shown in the corresponding figure.
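The following is a minimal sketch, not the prescribed implementation, of equations (3) and (10): the hash Y_i is computed over a reference X_i and signed by the CA. The sign/verify API of the third-party Python `cryptography` package stands in for the private-key encryption E(R_CA, Y_i) and its public-key check; the key size and SHA-256 choice are assumptions.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

ca_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # R_CA
ca_public = ca_private.public_key()                                          # P_CA

x_i = b"subscriber ID or serialized biometric model"   # authentication reference X_i
y_i = hashlib.sha256(x_i).digest()                      # Y_i = H_i(X_i)

# A_i: the CA's signature over the hash, stored in the device's persistent
# memory together with P_CA.
a_i = ca_private.sign(y_i, padding.PKCS1v15(), hashes.SHA256())

# At authentication time the device recomputes Y_i and checks A_i against it.
try:
    ca_public.verify(a_i, y_i, padding.PKCS1v15(), hashes.SHA256())
    print("reference data intact")
except InvalidSignature:
    print("reference data has been altered")
```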
For more security conscious applications, it is possible to have multiple CAs sign the hash values. This may be done either in parallel or in series. For a series process, the hash values are sent to CA_1. Then the resulting signed data from CA_1, A_i^1, is sent to CA_2 to receive A_i^2, and so on. Finally, there will be encrypted data associated with a series of M public keys belonging to the M different CAs. In this case, the registration application will store the order of signatures, O, in a file encrypted using R_PDA and kept in the persistent memory of the PDA, along with A_i^M from the last (Mth) CA and all M public keys, P_CA_m : m ∈ {1, 2, . . . , M}.
At the time of validating data signed by a series of CAs, the authentication application will decrypt the order data, O, from the persistent memory using P_PDA and use it to decrypt the series of encryptions in reverse order, using A_i^M and P_CA_M to get A_i^{M-1}, and so on, until Y_i is deciphered. See the corresponding figure.
For a parallel signature process, each CA signs the same Y_i independently. In this case, all A_i^m and P_CA_m : m ∈ {1, 2, . . . , M} are stored. No specific order is necessary. At the validation step, all the hash values deciphered from the A_i^m and the hash value computed from the reference data would have to match. See the corresponding figure.
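A hedged sketch of the parallel multi-CA variant follows: each CA independently signs the same Y_i, and validation requires every stored signature to verify. The series (nested) variant, in which the layers are peeled off in reverse order, would require a signature scheme with message recovery and is not shown; the RSA/SHA-256 choices are assumptions.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

M = 3
cas = [rsa.generate_private_key(public_exponent=65537, key_size=2048) for _ in range(M)]

y_i = hashlib.sha256(b"authentication reference X_i").digest()

# Registration: collect one signature per CA and store them with the public keys.
signatures = [ca.sign(y_i, padding.PKCS1v15(), hashes.SHA256()) for ca in cas]
public_keys = [ca.public_key() for ca in cas]

# Validation: every stored signature must match the recomputed hash.
def all_signatures_valid(y, sigs, pubs) -> bool:
    for sig, pub in zip(sigs, pubs):
        try:
            pub.verify(sig, y, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False
    return True

print(all_signatures_valid(y_i, signatures, public_keys))  # True if data unchanged
```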
The multiple signature process may be used to store the different signed hash values at different locations. For example, if the device has access to network storage in L locations, it may send each of the L signed hash values, signed by L different CAs, to these L locations, one of which is the persistent memory of the PDA. Then, at the authentication step, it may try to retrieve as many copies of these hashed values as possible. If, because of network or technical issues, some of the L locations are not accessible, it may use as few as a minimum prescribed number of different retrieved signed copies. If the prescribed minimum number of locations is met and all the hash values match the data on the PDA, the device may go ahead with the authentication process.
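The quorum idea above may be sketched as follows; the fetcher callables, error handling, and threshold are illustrative assumptions rather than part of the specification.

```python
from typing import Callable, Optional, Sequence

def quorum_met(local_signed_hash: bytes,
               fetchers: Sequence[Callable[[], Optional[bytes]]],
               min_copies: int) -> bool:
    """Count retrievable copies that match the locally held signed hash."""
    matches = 0
    for fetch in fetchers:
        try:
            copy = fetch()                 # may fail for network/technical reasons
        except OSError:
            continue
        if copy is not None and copy == local_signed_hash:
            matches += 1
    return matches >= min_copies           # proceed only if the quorum is reached
```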
It is important to ensure that the applications in charge of the registration and authentication are genuine and certified. This may be done using standard digital certificates which have been described in detail in [5]. Specifically, a code signing technique (see [38]) may be used to ensure that the code being run is authentic and that it has not been tampered with. Most modern operating systems are equipped with code signature verification. This is especially true with mobile operating systems.
In order to prove the authenticity of the application being used for authentication to the transaction authority, the application hashes the certificate associated with its code signature (as described previously), using a hashing function, H_i, and sends its encrypted value to the transaction authority when it registers with the transaction authority (see the registration procedure of Section 2.8). To perform this encryption, the application uses the private key that it uses in performing the registration step of Section 2.8, R_PDA. This information, C_PDA, is kept on file at the transaction authority along with the other registration information, and it is sent to the POS or transaction authority at the beginning of every communication. Since the transaction authority also stores the public key associated with the device, P_PDA, at the time of registration (see the registration procedure of Section 2.8), the transaction authority is capable of decrypting the hash value of the software certificate. Upon every transaction, the software recomputes this value and sends it to the transaction authority, which compares it against the data on file to ensure the authenticity of the authentication software running on the PDA. Therefore, the transaction authority can rest assured that the authentication results received from the PDA are sent by the original authentication software and that the software has not been fraudulently spoofed.
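A minimal sketch of this software-authenticity check, under the assumption of RSA keys, SHA-256, and the placeholder certificate bytes below, might look as follows; the real exchange would of course run over the registration and transaction protocols of Section 2.8.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

pda_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # R_PDA
pda_public = pda_private.public_key()                                          # P_PDA, held by the TA

code_signing_certificate = b"DER bytes of the app's code-signing certificate"  # placeholder

# Registration (Section 2.8): hash the certificate and sign it with R_PDA;
# the TA keeps the result (C_PDA) on file together with P_PDA.
cert_hash = hashlib.sha256(code_signing_certificate).digest()
c_pda = pda_private.sign(cert_hash, padding.PKCS1v15(), hashes.SHA256())

# Every transaction: the app recomputes the hash; the TA checks it against
# the signed value on file before trusting the authentication results.
def ta_accepts(recomputed_hash: bytes) -> bool:
    try:
        pda_public.verify(c_pda, recomputed_hash, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(ta_accepts(hashlib.sha256(code_signing_certificate).digest()))  # True
print(ta_accepts(hashlib.sha256(b"tampered application").digest()))   # False
```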
2.8 Registration with the Transaction Authorities
A transaction authority (TA) 76 is any authority which has control over the transaction of interest. For example, this may be a credit card company for charge transactions, a financial institution for performing a financial transaction, an office authority for providing access to a building, a government organization for providing passport access, an airport or any other transportation authority for allowing entrance credentials to a terminal, a bank for allowing access to a vault, etc.
A Point of Service 74 is any party which would provide a sale or a service. Some examples have been provided, such as a Point of Sale merchant (see the corresponding figures).
The PDA will perform the following actions:
The POS will perform the following actions:
The TA will perform the following actions:
There are several databases which are kept across the different components of the system (see the corresponding figure).
The applications using the combination of multifactor authentication and transaction/data security/integrity may take several different forms.
The following few sections describe different kinds of applications which are proposed to be used with the multifactor authentication and transaction and data security and integrity processes described in this invention.
The default status of the Access Control system is a denied status. Logically, the default is to deny access unless the authentication criteria are met. This is really the starting moment of the access control process, right after it has been initiated by depressing the access button or by means of an audio or face detection scheme.
One embodiment of the beginning of the authentication process is depicted in the corresponding figure.
A test sample is data which is captured at the time a user is being authenticated.
4.2.4 Biometric Model
A biometric model is generally a statistical or parametric representation of the traits of the individual for whom the biometric model is built. Models are generally in the form of mixtures of distributions, multidimensional vectors, or parametric representations. The specifics depend on the type of biometric at hand and the algorithm being used for producing the biometric model. For example, for a speaker model used in voice or speaker biometrics, if a Gaussian Mixture Model (GMM) method is used, the model may be a collection of multidimensional means and variance-covariance representations with corresponding mixture coefficients. Alternatively, for the same model, some systems may choose to only keep the mean vectors, stacking them into a large supervector. A neural network model, though, may be stored as the set of weights and thresholds for the enrollment sample of that individual.
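As a hedged illustration of such a GMM-based speaker model, the sketch below fits a small mixture with scikit-learn; the random vectors stand in for real enrollment features (for example, MFCCs), and the component count and covariance type are arbitrary assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

features = np.random.randn(500, 20)            # placeholder for enrollment features

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(features)

# The stored model: mixture weights, means, and (co)variances.
speaker_model = {
    "weights": gmm.weights_,
    "means": gmm.means_,
    "covariances": gmm.covariances_,
}

# Alternative compact representation mentioned above: stack the mean vectors
# into a single supervector.
supervector = gmm.means_.reshape(-1)
print(supervector.shape)                        # (8 * 20,) = (160,)
```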
In order to achieve high levels of security, an access system is proposed here in which one may set a requirement that a predetermined number of authorized individuals be authenticated in series in order to gain access to a physical or virtual location. Note that a virtual location may be sensitive data, a web portal, etc. In order to enable such authentication, one sets a maximum number of tests that will be done, within which a minimum number of people have to authenticate, in order to grant access. Each time there is a successful authentication, a counter (Figure component 87) is incremented by the authentication software. Once the minimum number of matches (Figure component 88) has been achieved, before reaching the maximum number of tests defined by the administration, access is granted to the group of people who have authenticated.
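A minimal sketch of this counter scheme, with an assumed `authenticate` callable standing in for the full multifactor check, might look as follows.

```python
from typing import Callable, Sequence

def group_access_granted(attempts: Sequence[str],
                         authenticate: Callable[[str], bool],
                         min_matches: int,
                         max_tests: int) -> bool:
    matched = set()                          # distinct users who authenticated successfully
    for user in attempts[:max_tests]:        # never exceed the configured number of tests
        if authenticate(user):
            matched.add(user)                # the counter of successful authentications
            if len(matched) >= min_matches:
                return True                  # minimum reached before the test limit
    return False                             # default: deny
```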
In another methodology, depending on how well each match is done, or on the rank of the individuals who have passed the authentication, the minimum requirement changes. Here is an example to explain the technique. Let us assume that we have an organization where people have different access levels. For example, one person may be an administrator who should have the highest level of access. There is a regular worker who is considered to have the least level of access, and there is a project manager who has an intermediate level of access. Therefore, there are three levels of access: administrator, project manager, and worker, listed in order from high to low. We can require a smaller number of high-level access holders than of lower-level access individuals. For example, we may assign a number to each access level. For the sake of simplicity, let us assign 3 to administrator, 2 to project manager, and 1 to regular worker. This way, if we set the required access level to 6, the requirements may be met by having two administrators (2×3), one administrator, one project manager, and one worker (3+2+1), or two project managers and two workers (2×2+1×2).
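The access-level arithmetic of this example can be sketched as follows; the role names and the threshold of 6 are taken from the example above.

```python
ACCESS_LEVEL = {"administrator": 3, "project_manager": 2, "worker": 1}

def level_requirement_met(authenticated_roles: list[str], required_level: int = 6) -> bool:
    """Grant access when the summed access levels reach the required level."""
    return sum(ACCESS_LEVEL[role] for role in authenticated_roles) >= required_level

print(level_requirement_met(["administrator", "administrator"]))                              # 6 -> True
print(level_requirement_met(["administrator", "project_manager", "worker"]))                  # 6 -> True
print(level_requirement_met(["project_manager", "project_manager", "worker", "worker"]))      # 6 -> True
```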
Another technique would be the use of the authentication score in a formula that would require a minimum score for entrance. For example, a possible linear formula would be as follows. We require a minimum score of 400. Let us assume that generally an excellent authentication score is considered to be 70 for each individual. Furthermore, let us assume that we are going to compute the total score as the weighted sum of the authentication scores, weighted by the access level of each individual. Therefore, two administrators getting a score of 70 each for the authentication would contribute 420 (=2×(3×70)), which is enough to grant them access. Note that neither of these individuals would be granted access alone. However, let us assume that one of the administrators gets an authentication score of 40. Then the total weighted score would be 330 (=3×70+3×40), which is 70 points shy of obtaining access. In that case, a project manager obtaining an authentication score of 50 will contribute 100 (=2×50) to the score and bring the total to 430, which allows access. Practically, there would be a minimum score and a maximum number of people set in the setup. The maximum number of people is set to avoid having a great many authentications with very low scores, which would naturally be a security breach.
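The weighted-score rule of this example can be sketched as follows; the minimum score of 400, the level weights, and the cap on the number of people are the example's assumptions.

```python
def weighted_score_access(participants: list[tuple[int, float]],
                          min_total: float = 400.0,
                          max_people: int = 5) -> bool:
    """participants: (access_level, authentication_score) pairs."""
    if len(participants) > max_people:
        return False                      # guards against many low-score authentications
    total = sum(level * score for level, score in participants)
    return total >= min_total

print(weighted_score_access([(3, 70), (3, 70)]))           # 420 -> True
print(weighted_score_access([(3, 70), (3, 40)]))           # 330 -> False
print(weighted_score_access([(3, 70), (3, 40), (2, 50)]))  # 430 -> True
```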
A specific portal may possess an RFID or NFC tag which may be read by the device and used to identify said portal to the Access Transaction Authority. This means that when access is requested, being in the vicinity of a specific portal will enable the tag of that portal to be read and transmitted to the Access Transaction Authority to grant permission to that specific portal. This ID may undergo the same scrutiny that has been described in Section 3 in order to check whether the person or persons have access to that specific portal. If access privileges are established, the Access Transaction Authority may send a signal to that portal to grant access to the holder of the device being used.
This proposed approach solves a problem which exists with the use of NFC and RFID tags. The proposed technique provides the means to enable NFC or RFID communication only upon a successful multifactor authentication, as described in this invention. The motivation for this is the fact that, for example in the case of NFC, some credit card companies currently include NFC chips which constantly emit information about the credit card. There are shielding sleeves available for sale for RFID and NFC communication shielding, to ensure that the credit card information or account information cannot be intercepted by thieves. The current invention can use the described authentication on the device in order to enable its NFC or RFID communication only when the authentication is successful. In addition, NFC communication may be used to communicate with the POS or the transaction authority, wherever there is a communication stack. A similar scenario allows access to a specific part of a restricted network which may provide secure data. This may be thought of as logging into a secure network which provides different restricted services, such as the network of a university, a hospital, etc. Much in the same way, this methodology may be used to enable or disable any other functionality on the device being used. Some examples are location services, Bluetooth communication, etc.
One application of the system described in this patent is to use the discussed infrastructure to remotely control the devices in one's home. For example, the POS and transaction authority may both reside on a computer or device that is connected to the different appliances, such as lights, toaster oven, air conditioning, etc., in a home. The device may be the smartphone being carried by the homeowner. The authentication service may either run on the smartphone or on the device at home. The multiple factors are captured on the smartphone and the command is also given on that device. The command is similar to the purchase request described in the corresponding figure.
Health care records have recently been converted to electronic formats. It is unclear who holds these electronic records at the moment, but it makes sense that the patient (owner of the records) should own these records and should share them with the corresponding health care provider when the need arises. The access control discussed in the application of Section 4.2 may also be realized in the form of access to restricted data or records, such as access to medical or financial records. Using the techniques presented in the Transaction and Data Security and Integrity Section 2, the sensitive medical records may be safely hashed, signed for integrity by a CA, encrypted, and saved on the device belonging to the patient or owner of the data. Let us assume that the person holding the device of interest goes to a healthcare provider who draws some blood and sends the blood to a lab for examination. The results are sent back to the doctor, who in turn shares them with the patient. Either the lab or the doctor who is in possession of the resulting digital data may transfer them to the patient's device. In this process, the patient may ask the institution that has produced the data to sign the hashed version of the data for safe-keeping on the device, together with the original data which is encrypted and saved on the device, in the same way as the biometric models are saved and stored on the device, as described in Section 2. Each new piece of data (new medical record) is in this way saved to ensure its integrity. Using the multifactor authentication described in Section 1, any part of the medical records may be unlocked and shared through another pair of POS and transaction authority, which would be the receiving health care provider requesting a copy of any part of these records in order to perform further healthcare services for the individual. This new POS/TA pair will be assured of the integrity of the data, since the data and the processing software have undergone the same process as described in Section 2. Each healthcare provider, or service the healthcare provider uses (such as a health insurance provider), would have had occasion to register the user and the software in the same manner as described in Section 2.
As an example, take the instance where the patient may need to see an internist who may need to review his/her latest blood test results, in which case the patient may unlock the data and transmit that part of the data to the doctor. In this scenario, the patient chooses what part of the data is shared with the doctor, and the data is only shared with the patient's consent. Any other person gaining access to the personal device of the patient will not be able to access the data. Also, this allows patients to carry their medical history with them at all times without any concern about privacy issues. In fact, different parts of the data may be tagged with different security levels. As an example, if the patient is diabetic or has known severe allergies, he/she may choose to make the data associated with these items public so that, in case he/she goes into shock, the data is available to all individuals accessing his/her device. However, they will not have access to the rest of his/her medical history and details. The important enablement is that the patient makes the decision on what part of the data requires high security and what part does not.
Financial records may be treated in a similar manner as described in Section 4.5. In this case, the data providers will be the banks, credit report companies, or other financial institutions holding securities or financial certificates for the user. Much in the same way as the health records, the financial data and holdings certificates may be stored and signed with a CA, to only be unlocked using the multifactor authentication and data security described in this invention. Once the financial records or certificates are unlocked, ownership may be transferred to a different user using a transaction authority which is a financial institution. This is also similar to the digital cash case described in Section 4.7.
Similar techniques as discussed in Sections 4.6 and 4.5 may be used to store digital cash on a device, where the CA in this case would re-sign the remaining amount every time a transaction takes place and an amount is removed from or added to the remaining digital cash on the device. Only the user associated with the device is able to unlock the digital cash certificate, using the multifactor authentication and data security described in this invention. At any point, a record-keeping read-only POS may be utilized on the device, using a read-only amount assessment transaction authority on the device, to provide the user with the amount of digital cash available on the device.
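As a hedged sketch of this bookkeeping, the fragment below re-signs the remaining balance after each debit or credit and verifies the stored signature before reporting the amount; the RSA/SHA-256 primitives and the cent-denominated balance are illustrative assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # the CA's signing key

def resign_balance(balance_cents: int) -> tuple[int, bytes]:
    """Return the new balance together with the CA's signature over it."""
    signature = ca_key.sign(str(balance_cents).encode(),
                            padding.PKCS1v15(), hashes.SHA256())
    return balance_cents, signature

balance, proof = resign_balance(10_000)           # initial signed balance
balance, proof = resign_balance(balance - 2_500)  # re-signed after a debit

# Read-only check: verify the stored signature before reporting the amount.
ca_key.public_key().verify(proof, str(balance).encode(),
                           padding.PKCS1v15(), hashes.SHA256())
print("verified remaining digital cash (cents):", balance)
```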
There have been described and illustrated herein several embodiments of a method and system that performs electronic transactions with a mobile device using multifactor authentication. While particular embodiments of the invention have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. Thus, while particular hashing functions and public key infrastructure systems have been disclosed, it will be appreciated that other hashing functions and key infrastructure systems can be used as well. In addition, while particular types of biometric models and biometric verification processes have been disclosed, it will be understood that other suitable biometric models and biometric verification processes can be used. Furthermore, while particular electronic transaction processing has been disclosed, it will be understood that other electronic transaction processing can be similarly used. It will therefore be appreciated by those skilled in the art that yet other modifications could be made to the provided invention without deviating from its spirit and scope as claimed.
The present application is a continuation-in-part of U.S. patent application Ser. No. 13/287,994, filed on Nov. 2, 2011, which claims benefit of U.S. Provisional Application No. 61/409,151, filed on Nov. 2, 2010, herein incorporated by reference in their entireties.
Number | Date | Country
---|---|---
61/409,151 | Nov. 2010 | US

Number | Date | Country
---|---|---
Parent 13/287,994 | Nov. 2011 | US
Child 14/747,211 | | US