Unauthorized access to handheld devices such as cellphones and laptops is an increasing problem for the industry. Hackers and the cyber-security industry are engaged in a constant technological race, each trying to defeat the other's latest improvements and advancements. As such, the industry always has a need for more sophisticated authentication and protection methods.
In recent years, increasingly more sophisticated methods for protecting devices have been developed. These have come to include hand and finger recognition, and voice and video detection.
The present invention provides a method for authenticating a user access or action using a computerized device, using audio data inputted by the user, said method implemented by one or more processors operatively coupled to a non-transitory computer readable storage device, on which are stored modules of instruction code that, when executed, cause the one or more processors to perform:
According to some embodiments of the present invention, the selected words are at least one of: randomly selected words, a random string of words, or words constituting a meaningful sentence.
According to some embodiments of the present invention, the method further comprises the step of performing facial image recognition of face articulation in relation to sound, analyzing lip motion to authenticate the uttered sentences by correlating them with the phonetic analysis implemented by the audio analysis.
According to some embodiments of the present invention, the method further comprises the steps of analyzing the user's voice, identifying and parsing the audio into phonemes and phoneme sequences based on the known phonetics of the text, and comparing them to recorded phoneme sequences of the user.
According to some embodiments of the present invention, the selected words are transmitted as a sentence through a cellular network.
According to some embodiments of the present invention, the selection of phonemes is defined based on required sensitivity parameters.
According to some embodiments of the present invention, the step of analyzing the user's voice identifies unique speech patterns that identify the user, by analyzing sound recording characteristics including at least one of: amplitude, pitch, or frequency.
According to some embodiments of the present invention, the step of checking lip motion identifies opening of the mouth and stretching of the lips, to determine the level/intensity of speech in comparison to the speech amplitude of the audio recording.
According to some embodiments of the present invention, the selected sentences are randomly selected from a database of sentences.
According to some embodiments of the present invention, the user is required to record a set of sentences that includes all possible phonemes.
According to some embodiments of the present invention, the selected words or sentences have actual relevance to the context of the activities the user is currently performing on a website or application.
The present invention provides a method for authenticating a user access or action using a computerized device, using video data inputted by the user, said method implemented by one or more processors operatively coupled to a non-transitory computer readable storage device, on which are stored modules of instruction code that, when executed, cause the one or more processors to perform.
The present invention provides a system for authenticating a user access or action using a computerized device, using audio data inputted by the user, said system comprising a non-transitory computer readable storage device and one or more processors operatively coupled to the storage device on which are stored modules of instruction code executable by the one or more processors, said modules comprising:
According to some embodiments of the present invention, the selected words are randomly selected as a random string of words constituting a meaningful sentence.
According to some embodiments of the present invention, the analyzing module is further configured to perform facial image recognition of face articulation in relation to sound, analyzing lip motion to authenticate the uttered sentences by correlating them with the phonetic analysis implemented by the audio analysis.
According to some embodiments of the present invention, the analyzing module is further configured to analyze the user's voice, identifying and parsing the audio into phonemes and phoneme sequences based on the known phonetics of the text and comparing them to recorded phoneme sequences of the user.
According to some embodiments of the present invention, the selected words are transmitted as a sentence through a cellular network.
According to some embodiments of the present invention, the selection of phonemes is defined based on required sensitivity parameters.
According to some embodiments of the present invention, the analyzing module is further configured to analyze the user's voice to identify unique speech patterns that identify the user, by analyzing sound recording characteristics including at least one of: amplitude, pitch, or frequency.
According to some embodiments of the present invention, the analyzing module is further configured to check lip motion, identifying opening of the mouth and stretching of the lips to determine the level/intensity of speech in comparison to the speech amplitude of the audio recording.
According to some embodiments of the present invention, sentences are randomly selected from a database of sentences.
According to some embodiments of the present invention, the user is required to record a set of sentences that includes all possible phonemes.
According to some embodiments of the present invention, the selected sentence has actual relevance to the context of the activities the user is currently performing on a website or application.
Following is a table of definitions of the terms used throughout this application.
The authentication system 10 sends the user device 20 authentication requirements and guiding instructions 20A, and receives behavioral data and authentication data from the user device 20 (20B) in return.
The authentication system 10 dynamically enables changing the authentication procedure and the authentication procedure's properties according to various parameters, such as:
The passive monitoring module 200 continuously gathers user authentication data and behavioral data which do not require feedback from the user (e.g. continuously capturing video frames of the user). The gathering of this data may be initiated following a triggering event set by the authorizing entity, or according to a predefined schedule.
Examples for authentication data include: facial data, voice data, passwords.
Examples for behavioral data include: monitored phone movements, mouse movements or mouse clicks.
The passive monitoring module 200 propagates this authentication data and behavioral data to the Analysis Module 400 and the Analysis Control Module 600.
The active monitoring module 300 gathers active user authentication data. This data is acquired during any authentication process that requires the user 20 to take action (e.g. introducing a user name and password, or performing a required task according to instructions).
All acquired active user authentication data is recorded and propagated to the analysis module 400 and the control module 600.
An audio analysis module 400A receives data that contains the recorded sound of the user, and sends it to the Phonetic Parsing Module 50, where the phonetic data is interpreted and processed.
The Users Phonetics Module 60 is responsible for obtaining user-specific phonetic patterns. It is activated during the set-up process, as part of the machine learning training, or as new users are introduced into the system.
The Users Phonetics Module 60 requires newly introduced users to record a set of sentences which may include all possible phonemes. The said recordings are then parsed by the Phonetic parsing Module 50, to identify patterns of utterance for each phoneme. The recordings and patterns of the user's utterance of individual phonemes are stored in a user's phonetic database (not shown in
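Merely as an illustrative sketch of the enrollment flow described above (the toy phonetic dictionary and the `feature_extractor` callback are assumptions standing in for a real grapheme-to-phoneme tool and real acoustic analysis, not part of the specification):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical phonetic dictionary; a real system would use a full
# grapheme-to-phoneme tool rather than this toy mapping.
PHONETIC_DICT = {
    "cat": ["K", "AE", "T"],
    "hat": ["HH", "AE", "T"],
    "sun": ["S", "AH", "N"],
}

def enroll_user(sentences, feature_extractor):
    """Build a per-phoneme profile from enrollment sentences.

    `feature_extractor(word, phoneme)` stands in for real signal
    processing (e.g. amplitude/pitch/frequency of each uttered phoneme).
    """
    profile = defaultdict(list)
    for sentence in sentences:
        for word in sentence.lower().split():
            for phoneme in PHONETIC_DICT.get(word, []):
                profile[phoneme].append(feature_extractor(word, phoneme))
    # Store the mean feature value per phoneme as the user's pattern.
    return {ph: mean(vals) for ph, vals in profile.items()}

# Toy extractor: a real extractor would analyze the recorded audio.
demo_profile = enroll_user(
    ["cat hat", "sun cat"],
    feature_extractor=lambda word, ph: float(len(word) + len(ph)),
)
```

The resulting per-phoneme patterns would then play the role of the user's phonetic database described above.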
In some embodiments of the present invention, the phonetic data obtained from the user is compared to the expected phonetic data obtained by the Users Phonetics Module 60, to determine user authentication. Following is a non-limiting example of such a process of authentication through speech:
According to some embodiments, the user is required to utter a sentence with actual relevance to the context of the activities they are currently performing on a website or application. The actual information conveyed in the user's utterance may be used to enhance the authentication process. For example, during a financial transaction, the user may be required to narrate their action, as in: “I am transferring 100 dollars to the account of William Shakespeare”.
According to some embodiments, the information conveyed in the authentication sentence will be imperative to processes that are taking place in the authentication system's 10 environment. For example, a pilot may be required to say “I am now lowering the landing gear” as part of security protocol.
The Phonetic Parsing Module 50 returns the results of the said analysis back to the audio analysis module 400A. The results are propagated to the Authentication Assessment module 500 for further assessment and validation.
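One plausible form of the comparison that feeds the assessment step can be sketched as follows; the feature values, tolerance, and scoring scheme are illustrative assumptions, not the specification's actual algorithm:

```python
def phonetic_match_score(observed, stored, tolerance=0.5):
    """Fraction of observed phonemes whose feature value falls within
    `tolerance` of the user's stored pattern for that phoneme.

    `observed` maps phoneme -> feature value measured from the new
    recording; `stored` maps phoneme -> enrolled feature value.
    Both are stand-ins for real acoustic features.
    """
    if not observed:
        return 0.0
    hits = sum(
        1
        for ph, value in observed.items()
        if ph in stored and abs(value - stored[ph]) <= tolerance
    )
    return hits / len(observed)

score = phonetic_match_score(
    observed={"K": 4.1, "AE": 5.0, "T": 9.0},
    stored={"K": 4.0, "AE": 5.2, "T": 4.0},
)
# Two of the three observed phonemes match within tolerance.
```

A score of this kind could then be forwarded to the Authentication Assessment module 500 for thresholding.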
The random sentence generator module 40 creates a random string of words, constituting a meaningful or meaningless sentence. According to some embodiments, this sentence may be presented to the user, who would then need to read it as part of the authentication process.
According to some embodiments, the random sentence generator module 40 may randomly select sentences from a database of sentences (not shown in
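A minimal sketch of such a generator follows; the word pool is a hypothetical placeholder for module 40's actual vocabulary or sentence database:

```python
import random

# Hypothetical word pool; module 40 might instead draw whole sentences
# from a database of sentences.
WORD_POOL = ["blue", "river", "seven", "window", "march", "silent"]

def random_sentence(num_words=4, rng=None):
    """Return a random string of words; as described for module 40,
    the result may or may not form a meaningful sentence."""
    rng = rng or random.Random()
    words = [rng.choice(WORD_POOL) for _ in range(num_words)]
    return " ".join(words).capitalize() + "."

sentence = random_sentence(rng=random.Random(42))
```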
The video analysis module 400B receives data that contains the recorded video of a user and uses that data to run various tests to authenticate the user. Non-limiting examples for such tests include:
The Behavioral analysis module 400C receives Data from multiple sources, and analyzes that data to identify user behavioral patterns or actions. The said data sources may include:
According to some embodiments, the authentication process may incorporate such behavioral data to identify patterns that are unique to a specific user.
According to some embodiments, an active authentication process may incorporate such behavioral data as part of a requirement presented to the user (e.g. “Please move your Smartphone in the left direction”).
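As an illustrative sketch of verifying such a behavioral requirement (the axis convention, threshold, and function name are assumptions; a real implementation would process full 3-axis sensor streams):

```python
def moved_left(accel_x_samples, threshold=-1.0):
    """Return True if the average lateral acceleration indicates a
    leftward move, assuming negative x means "left" on this device.

    `accel_x_samples` is a list of x-axis accelerometer readings.
    """
    if not accel_x_samples:
        return False
    return sum(accel_x_samples) / len(accel_x_samples) <= threshold

# Example readings after prompting "Please move your Smartphone left":
leftward = moved_left([-1.5, -2.0, -1.2])
stationary = moved_left([0.3, 0.1, -0.2])
```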
The Authentication assessment module 500 receives the results from all analysis modules (400A, 400B, 400C) and determines whether the authentication score has passed a predefined threshold in relation to a sensitivity parameter set by the authentication control module 600. It then propagates the result to the authorizing entity 30, indicating successful or unsuccessful authentication.
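The combination of the three analysis results against a sensitivity-driven threshold might be sketched as follows; the weighting scheme and the use of the sensitivity parameter directly as the threshold are illustrative choices, not taken from the specification:

```python
def assess_authentication(scores, weights, sensitivity=0.8):
    """Combine per-module scores (audio 400A, video 400B, behavioral
    400C) into a weighted total and compare it against a threshold
    set from the sensitivity parameter of control module 600.

    Returns (passed, combined_score).
    """
    total_weight = sum(weights.values())
    combined = sum(scores[m] * weights[m] for m in scores) / total_weight
    return combined >= sensitivity, combined

ok, combined = assess_authentication(
    scores={"audio": 0.9, "video": 0.85, "behavior": 0.7},
    weights={"audio": 0.5, "video": 0.3, "behavior": 0.2},
    sensitivity=0.8,
)
```

The boolean result corresponds to the successful/unsuccessful indication propagated to the authorizing entity 30.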
The Authentication control module 600 implements the authentication policy dictated by the Authorizing entity 30. It does so by managing the type and the properties of required authentication methods.
The Authentication control module 600 takes at least one of the following parameters into account:
The Authentication control module 600 may dynamically change parameters such as the authentication method (e.g. face recognition, voice passwords, or any combination thereof), authentication properties, and sensitivity parameters, according to analyzed authentication data and monitored user behavior.
According to some embodiments, the Authentication control module 600 may oversee and combine the authorization processes against more than one user device 20. This capability accommodates user authentication in cases where, for example, the approval of more than one individual is required in order to promote a certain task.
According to some embodiments, the authentication procedure may require actions by multiple users to authenticate or perform a specific action. For example, two authentication keys or signatures of two different users may be required to authenticate a single action, such as performing a financial operation.
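The multi-user requirement above reduces to a simple conjunction of individual authentication outcomes; in this sketch, the function name and the user/action identifiers are purely illustrative:

```python
def authorize_action(action_id, approvals, required_approvers):
    """Grant the action only when every required user has passed their
    individual authentication. `approvals` maps user id -> True/False
    outcome of that user's own authentication process.
    """
    return all(approvals.get(user, False) for user in required_approvers)

# Both required users authenticated -> action granted.
granted = authorize_action(
    "wire-transfer-001",
    approvals={"alice": True, "bob": True},
    required_approvers=["alice", "bob"],
)
# One approver missing -> action denied.
denied = authorize_action(
    "wire-transfer-002",
    approvals={"alice": True},
    required_approvers=["alice", "bob"],
)
```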
The authorizing entity 30 receives authentication assessment data from the authentication assessment module 500. This data indicates whether or not the authorization has succeeded, and whether the authorizing entity 30 should grant access to the user device 20.
The process comprises the following steps:
Optionally, a procedure of incremental enrollment can be implemented: receiving just a few sentences from the user at the beginning, and then requiring the user to say additional sentences during the first login actions to serve as a further enrollment process.
The procedure of incremental enrollment can be implemented for each authentication method, such as face recognition or voice recognition, where facial or voice data are added at each login process.
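A minimal sketch of such incremental enrollment bookkeeping follows; the target sample count, per-login quota, and class structure are assumptions for illustration only:

```python
class IncrementalEnrollment:
    """Collect a few enrollment samples up front, then keep requesting
    more samples at each login until a target count is reached."""

    def __init__(self, target_samples=10):
        self.target = target_samples
        self.samples = []

    def add_sample(self, sample):
        self.samples.append(sample)

    @property
    def complete(self):
        return len(self.samples) >= self.target

    def samples_needed_at_login(self, per_login=2):
        """How many extra sentences to request at the next login."""
        remaining = self.target - len(self.samples)
        return max(0, min(per_login, remaining))

# Initial enrollment collects only two sentences.
enroll = IncrementalEnrollment(target_samples=4)
enroll.add_sample("sentence 1")
enroll.add_sample("sentence 2")
# Enrollment is not yet complete; subsequent logins request the rest.
```

The same pattern could track facial frames instead of sentences for the face recognition method.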
This module's processing is activated once the user has logged in (step 810), continuously analyzing the user profile and context parameters (step 820) and monitoring user behavior and activities (step 830).
By analyzing the received data, the module determines authentication sensitivity parameters based on the user profile, context parameters, authorizing entity profile, and user activities and behavior.
Continuously, based on the authentication sensitivity parameters, the process determines an active prevention action or authentication action (step 840).
The action may include: prompting the user with requirements, stopping the session, or enabling or preventing privileged user access or action (step 850); if required, receiving user response data based on the requirements and authenticating the data (step 860).
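One pass of this continuous loop might be sketched as below; the particular risk signals, their weights, and the action names are illustrative assumptions, not the specification's actual policy:

```python
def continuous_session_step(profile, context, behavior, base_threshold=0.5):
    """One iteration of the continuous monitoring loop (steps 820-860):
    derive a sensitivity parameter from risk signals, then choose an
    active prevention or authentication action.
    """
    risk = 0.0
    if context.get("new_device"):
        risk += 0.3
    if behavior.get("typing_anomaly"):
        risk += 0.3
    if profile.get("privileged_user"):
        risk += 0.2
    sensitivity = base_threshold + risk

    if sensitivity >= 1.0:
        return "stop_session"
    if sensitivity >= 0.8:
        return "prompt_reauthentication"
    return "allow"

# A privileged user on an unfamiliar device triggers the strongest action.
action = continuous_session_step(
    profile={"privileged_user": True},
    context={"new_device": True},
    behavior={},
)
```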
According to some embodiments of the present invention, the user's voice is analyzed to identify unique speech patterns that identify the user (step 950).
Optionally, a learning algorithm is applied to enhance the identification of phonemes based on previous phoneme identifications (step 960).
The audio of individual phonemes, or combinations of phonemes, from the recording is transferred to a database (step 970).
The Phonetic training module applies the following: defining the selection of phonemes based on required sensitivity parameters (step 1210), randomly selecting words or sentences from a prepared textbook where the words include the selected phonemes (step 1220), and optionally randomly selecting words or sentences from a prepared textbook where the words include speech patterns of the specific user.
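The word-selection step of the Phonetic training module could be sketched as follows; the toy lexicon stands in for the prepared textbook, and the function name is an assumption for illustration:

```python
import random

# Toy phonetic lexicon standing in for the prepared textbook.
LEXICON = {
    "ship": ["SH", "IH", "P"],
    "shoe": ["SH", "UW"],
    "tree": ["T", "R", "IY"],
    "path": ["P", "AE", "TH"],
}

def words_with_phonemes(target_phonemes, count=2, rng=None):
    """Randomly pick up to `count` lexicon words that contain at least
    one of the target phonemes selected by the sensitivity parameters
    (step 1210)."""
    rng = rng or random.Random()
    candidates = [
        word for word, phonemes in LEXICON.items()
        if any(ph in phonemes for ph in target_phonemes)
    ]
    return rng.sample(candidates, min(count, len(candidates)))

# Request words exercising the "SH" phoneme.
chosen = words_with_phonemes({"SH"}, count=2, rng=random.Random(0))
```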
The present invention may be described, merely for clarity, in terms of terminology specific to particular programming languages, operating systems, browsers, system versions, individual products, and the like. It will be appreciated that this terminology is intended to convey general principles of operation clearly and briefly, by way of example, and is not intended to limit the scope of the invention to any particular programming language, operating system, browser, system version, or individual product.
It is appreciated that software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable typically non-transitory computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs. Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques. Conversely, components described herein as hardware may, alternatively, be implemented wholly or partly in software, if desired, using conventional techniques.
Included in the scope of the present invention, inter alia, are electromagnetic signals carrying computer-readable instructions for performing any or all of the steps of any of the methods shown and described herein, in any suitable order; machine-readable instructions for performing any or all of the steps of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the steps of any of the methods shown and described herein, in any suitable order; a computer program product comprising a computer useable medium having computer readable program code, such as executable code, having embodied therein, and/or including computer readable program code for performing, any or all of the steps of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the steps of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the steps of any of the methods shown and described herein, in any suitable order; electronic devices each including a processor and a cooperating input device and/or output device and operative to perform in software any steps shown and described herein; information storage devices or physical records, such as disks or hard drives, causing a computer or other device to be configured so as to carry out any or all of the steps of any of the methods shown and described herein, in any suitable order; a program pre-stored e.g. 
in memory or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the steps of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server's and/or client/s for using such; and hardware which performs any or all of the steps of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software. Any computer-readable or machine-readable media described herein is intended to include non-transitory computer- or machine-readable media.
Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any step described herein may be computer-implemented. The invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally include at least one of a decision, an action, a product, a service or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution.
The scope of the present invention is not limited to structures and functions specifically described herein and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function.
Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment.
For example, a system embodiment is intended to include a corresponding process embodiment. Also, each system embodiment is intended to include a server-centered “view” or client centered “view”, or “view” from any other node of the system, of the entire functionality of the system, computer-readable medium, apparatus, including only those functionalities performed at that server or client or node.
Number | Date | Country
---|---|---
62419632 | Nov 2016 | US