The present disclosure is related to biometric identification, and particularly to biometric identification based on brainwaves.
Biometric authentication is used in a growing number of security systems, including physical and computing environments. Biometric authentication aims to identify an individual based on unique biological characteristics. Examples of existing biometrics used for authentication include fingerprinting, facial recognition, and retinal imaging.
Many existing biometrics are unsuited for people with certain diseases or physical conditions. In addition, many existing biometrics can be easily forged or collected without a person's consent. Relatedly, many existing biometrics are easy to interpret, so a person's identity can be determined from stolen or intercepted biometric data, raising significant privacy concerns.
Biometric identification using electroencephalogram (EEG) signals is provided. Embodiments are targeted for biometric applications, where an individual can be identified with a precision of over 99% using sensed brain signals. In particular, a method is described which can extract unique biomarkers from EEG response signals to classify individuals, also referred to as simple visual reaction task-based EEG biometry (SVRTEB). A subject experiences a simple stimulus or task, and a multi-channel EEG response is recorded. Unique biomarkers are extracted from the recorded EEG response (e.g., as periodogram data points corresponding to different frequencies observed in the brain waves, which can be used to identify a person). A novel signal processing approach uses a neural network-based architecture to analyze the EEG response and identify the subject. This signal processing architecture can be readily implemented on hardware and provides high accuracy, precision, and recall.
SVRTEB is universally applicable, as EEG signals can be acquired from all human brains. EEG biometrics cannot be easily forged or collected without consent. Because brainwaves carry unique biomarkers for an individual's response to an external stimulus, the proposed model can identify individuals with an accuracy greater than 99%. In addition, SVRTEB can be implemented with commercially available devices, such that the entire process chain can be implemented on a field-programmable gate array (FPGA) or other readily available hardware for real-time identification. Finally, using EEG provides enhanced privacy, as EEG signals are difficult to decode and do not readily reveal a subject's true identity.
An exemplary embodiment provides a method for identifying a human subject. The method includes obtaining EEG data for a human subject which is responsive to a stimulus; extracting a plurality of feature points from the EEG data; and analyzing the plurality of feature points to identify the human subject.
Another exemplary embodiment provides a biometric classification device. The biometric classification device includes an EEG sensor; a memory configured to store EEG data from the EEG sensor; and a processor. The processor is configured to: receive the EEG data for a human subject which is responsive to a stimulus; extract a plurality of feature points from the EEG data; and identify the human subject based on the plurality of feature points.
Another exemplary embodiment provides a biometric classification system. The biometric classification system includes a memory configured to store EEG data from an EEG sensor; and a processor. The processor is configured to: receive the EEG data for a human subject which is responsive to a stimulus; extract a plurality of feature points from the EEG data; and implement a neural network to identify the human subject from the plurality of feature points.
Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element such as a layer, region, or substrate is referred to as being “on” or extending “onto” another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” or extending “directly onto” another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being “over” or extending “over” another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly over” or extending “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Relative terms such as “below” or “above” or “upper” or “lower” or “horizontal” or “vertical” may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Biometric identification using electroencephalogram (EEG) signals is provided. Embodiments are targeted for biometric applications, where an individual can be identified with a precision of over 99% using sensed brain signals. In particular, a method is described which can extract unique biomarkers from EEG response signals to classify individuals, also referred to as simple visual reaction task-based EEG biometry (SVRTEB). A subject experiences a simple stimulus or task, and a multi-channel EEG response is recorded. Unique biomarkers are extracted from the recorded EEG response (e.g., as periodogram data points corresponding to different frequencies observed in the brain waves, which can be used to identify a person). A novel signal processing approach uses a neural network-based architecture to analyze the EEG response and identify the subject. This signal processing architecture can be readily implemented on hardware and provides high accuracy, precision, and recall.
SVRTEB is universally applicable, as EEG signals can be acquired from all human brains. EEG biometrics cannot be easily forged or collected without consent. Because brainwaves carry unique biomarkers for an individual's response to an external stimulus, the proposed model can identify individuals with an accuracy greater than 99%. In addition, SVRTEB can be implemented with commercially available devices, such that the entire process chain can be implemented on a field-programmable gate array (FPGA) or other readily available hardware for real-time identification. Finally, using EEG provides enhanced privacy, as EEG signals are difficult to decode and do not readily reveal a subject's true identity.
I. Overview of Simple Visual Reaction Task-Based EEG Biometry (SVRTEB)
For example, the visual output device (e.g., a digital screen) can be placed in front of the subject and present a simple task 10 to be performed. The task 10 may be to respond to a change in the visual output (e.g., changing a sign on the screen from a + symbol to an X symbol) by pressing a button (e.g., a spacebar), tapping a touch screen, or otherwise responding through the input device. In other examples, a stimulus may be provided to the subject or observed by a computer system monitoring the subject.
Embodiments of SVRTEB record multi-channel EEG data during the task 10. This includes recording an observation period before the symbol change and recording a reaction period between the symbol change and performance of the task 10 through the input device. The observation period can yield individual attention markers, as each individual's response to an external stimulus (e.g., the symbol change) is different and produces unique EEG patterns. Thus, embodiments of SVRTEB extract unique attention markers from the EEG data to identify individuals.
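By way of non-limiting illustration, the following Python sketch shows one way such a simple visual reaction task could be presented and timed using the standard tkinter library. The class name, symbols, and timing values are illustrative assumptions only and do not represent a required implementation of the task 10.

```python
# Illustrative sketch of the simple visual reaction task: show "+", change it
# to "X" after a random observation period, and time the spacebar response.
# All names and timing values are assumptions for illustration.
import random
import time
import tkinter as tk

class ReactionTask:
    def __init__(self, delay_range=(2.0, 4.0)):
        self.root = tk.Tk()
        self.label = tk.Label(self.root, text="+", font=("Arial", 120))
        self.label.pack(expand=True)
        self.root.bind("<space>", self.on_response)
        self.change_time = None
        self.reaction_time = None
        # Schedule the symbol change after a random observation period.
        delay_ms = int(random.uniform(*delay_range) * 1000)
        self.root.after(delay_ms, self.change_symbol)

    def change_symbol(self):
        self.label.config(text="X")
        self.change_time = time.perf_counter()  # start of the reaction period

    def on_response(self, _event):
        if self.change_time is not None:
            self.reaction_time = time.perf_counter() - self.change_time
            self.root.destroy()

    def run(self):
        self.root.mainloop()
        return self.reaction_time

if __name__ == "__main__":
    task = ReactionTask()
    print(f"Reaction time: {task.run():.3f} s")
```

In practice, the multi-channel EEG acquisition would run concurrently with such a task so that the observation and reaction periods can be aligned with the recorded brainwave data.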
SVRTEB can be used for many applications. The unique attention markers from the EEG can be extracted and analyzed for EEG-based biometry. Reaction-time estimation can be extracted to monitor health and neurological conditions of the subject. In addition, SVRTEB can provide patient monitoring for attention-related disorders.
In further embodiments, the stimulus may be an environmental stimulus (e.g., a change in visual, auditory, or other sensory aspects of an environment observed by the subject and the computer system) or a prompt to interact with a user interface (e.g., to enter a password at the computer system or another device). It should be understood that the tasks/stimuli are not limited to visual tasks or stimuli, but can include a wide variety of tasks or stimuli which result in an observable change in brainwave activity of the subject.
Embodiments of SVRTEB use multi-channel EEG data to enhance analysis of the subject's response to external stimuli and increase biometric identification accuracy. For example, an evaluation of SVRTEB was performed using a non-invasive thirty-channel EEG (described further below in Section III). The EEG data can include multiple trials for a given subject, where a trial includes a stimulus (e.g., visual or otherwise, including multiple types of stimuli) and response.
The visual stimuli provided to a subject can be of different types and forms. For example, the digital screen can provide video sequences with specific objects appearing randomly or complex images with components of interest. In some examples, a natural visual cue can be used, such as an object, a person, or a combination of both. The stimuli may be provided by embodiments of the present disclosure, or they may be externally provided and observed to elicit a response by the subject.
Similarly, responses can be obtained in various forms. The response may be brain activity alone, or it may also include a motor control or other response of the subject. For example, a motor control response can be received through a mouse click, a keyboard press, a buzzer press, a vocal response, or any other action by the subject that can be timed through an input device. In further examples, the motor control response may be observed through sensors (e.g., ocular tracking, motion sensors, and so on).
II. Signal Processing and Analysis Approach of SVRTEB
The method used for SVRTEB illustrated in
A. EEG Data Pre-Processing
In some embodiments, motor control artifacts (e.g., ocular artifacts such as changes in the EEG data from eye movements) are removed using independent component analysis (ICA). In some examples, the down-sampling of the EEG data is performed only for the ICA analysis, with the data being back-projected after motor control (e.g., ocular) artifacts are removed. The EEG data can further be normalized (e.g., by mean and variance) across the channels to remove any DC offset and further analyzed as described below (e.g., using the original sampling rate).
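By way of non-limiting illustration, the following Python sketch outlines one possible pre-processing chain consistent with the above description. The sampling rate, filter order, down-sampling factor, and the rule for selecting the artifact component are assumptions made for the sketch only and are not fixed by the disclosure.

```python
# Hedged sketch of EEG pre-processing: bandpass filtering, ICA fitted on
# down-sampled data, artifact removal with back-projection at the original
# sampling rate, and per-channel normalization.
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from sklearn.decomposition import FastICA

def preprocess(eeg, fs=500.0, band=(1.0, 35.0), down_factor=4):
    """eeg: array of shape (n_channels, n_samples)."""
    # 1. Bandpass filter within the range of human brain waves.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)

    # 2. Down-sample only for the ICA analysis.
    downsampled = decimate(filtered, down_factor, axis=1)

    # 3. ICA decomposition; here the first component is assumed to carry the
    #    ocular artifact (in practice it could be chosen by correlation with
    #    frontal channels or an EOG reference).
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    ica.fit(downsampled.T)
    artifact_components = [0]  # illustrative choice only

    # 4. Back-project: unmix the full-rate data, zero the artifact
    #    components, and remix to channel space.
    sources_full = (filtered.T - ica.mean_) @ ica.components_.T
    sources_full[:, artifact_components] = 0.0
    cleaned = (sources_full @ ica.mixing_.T + ica.mean_).T

    # 5. Normalize each channel by its mean and variance to remove DC offset.
    cleaned -= cleaned.mean(axis=1, keepdims=True)
    cleaned /= cleaned.std(axis=1, keepdims=True) + 1e-12
    return cleaned
```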
B. Spectral Feature Extraction
A number of spectral features can be extracted for each channel of the multi-channel EEG data. For example, the spectral features can include an estimated power spectral density and a square of absolute-valued Fourier transform. The feature space contains components from frequency bands between 1 and 35 Hz (e.g., from delta to beta brain wave frequency bands). In some embodiments, the analyzed spectral features include information from delta, theta, alpha, and beta brain wave frequency bands. In some embodiments, information from the gamma brain wave frequency band may also be used.
A number of feature points are extracted for each channel (e.g., having multiple feature points for each spectral feature). In an exemplary embodiment, 72 feature points are obtained from each channel. This results in a total of 2160 feature points (72 feature points×30 channels) from a single trial to be used for classifying a subject. It should be understood that any number of channels may be collected and analyzed, including a single channel.
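By way of non-limiting illustration, the following Python sketch computes per-channel spectral features of the kind described above (an estimated power spectral density and the square of the absolute-valued Fourier transform) restricted to the 1-35 Hz band. The exact frequency resolution that yields 72 feature points per channel is an assumption not reproduced here.

```python
# Illustrative sketch of spectral feature extraction from a single trial.
import numpy as np
from scipy.signal import periodogram

def extract_features(eeg, fs=500.0, band=(1.0, 35.0)):
    """eeg: array of shape (n_channels, n_samples).
    Returns a 1-D feature vector concatenated across channels."""
    features = []
    for channel in eeg:
        # Estimated power spectral density via the periodogram.
        freqs, psd = periodogram(channel, fs=fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])

        # Square of the absolute-valued Fourier transform on the same grid.
        spectrum = np.abs(np.fft.rfft(channel)) ** 2

        features.append(psd[mask])
        features.append(spectrum[mask])
    # For 72 feature points per channel and 30 channels, the result
    # would be a 2160-element vector for classifying the subject.
    return np.concatenate(features)
```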
C. Neural Network Architecture
In the example of the fully connected neural network (FCNN), the neural network 20 includes three fully connected layers 22, 24, 26. A first fully connected layer 22 has dimensions of W1=2160×500 and b1=1×500, a second fully connected layer 24 has dimensions of W2=500×100 and b2=1×100, and a third fully connected layer 26 has dimensions of W3=100×48 and b3=1×48. There are two rectified linear unit (ReLU) layers 28, 30 between the three fully connected layers 22, 24, 26.
The output of the FCNN is one-hot encoded labels 32. A softmax layer 34 is placed between the third fully connected layer 26 and the one-hot encoded labels 32 to produce a probability distribution of each class. The FCNN uses cross entropy as the loss function of the model.
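By way of non-limiting illustration, the following PyTorch sketch mirrors the layer dimensions described above. The optimizer, learning rate, and batch size are assumptions not specified by this passage, and the softmax layer 34 is folded into the cross-entropy loss, as is conventional in PyTorch.

```python
# Minimal sketch of the FCNN: 2160 -> 500 -> 100 -> 48 with ReLU activations.
import torch
import torch.nn as nn

class SVRTEBClassifier(nn.Module):
    def __init__(self, n_features=2160, n_subjects=48):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(n_features, 500),  # W1 = 2160 x 500, b1 = 1 x 500
            nn.ReLU(),
            nn.Linear(500, 100),         # W2 = 500 x 100,  b2 = 1 x 100
            nn.ReLU(),
            nn.Linear(100, n_subjects),  # W3 = 100 x 48,   b3 = 1 x 48
        )

    def forward(self, x):
        # Raw class scores; the softmax producing the per-class probability
        # distribution is applied inside nn.CrossEntropyLoss during training.
        return self.network(x)

model = SVRTEBClassifier()
criterion = nn.CrossEntropyLoss()  # cross entropy as the loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of feature vectors.
features = torch.randn(8, 2160)      # 8 trials x 2160 feature points
labels = torch.randint(0, 48, (8,))  # class indices for the one-hot encoded labels
optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```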
III. Evaluation
Returning to
The SVRTEB was evaluated with a total of 15,324 trials from 48 subjects. The entire dataset was split into a training set (65%) and a testing set (35%), with the training and testing sets being adequately balanced. Performance of the SVRTEB model was measured based on classification accuracy (%), precision (%), and recall (%). The classification process of SVRTEB described in Section II was repeated for multiple randomizations, where the training and testing sets were randomized anew for each repetition. This repetition measures the sensitivity of the SVRTEB model to the particular training/testing split.
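By way of non-limiting illustration, the following Python sketch reflects such an evaluation protocol: a stratified 65%/35% split of the trials, repeated over several randomizations, scored by accuracy, precision, and recall. The classifier call is a placeholder for the trained FCNN, and the number of repetitions is an assumption.

```python
# Hedged sketch of the evaluation protocol with repeated random splits.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(features, subject_labels, train_and_predict, n_repeats=10):
    """features: (n_trials, 2160); subject_labels: (n_trials,).
    train_and_predict: callable(X_train, y_train, X_test) -> y_pred."""
    scores = []
    for seed in range(n_repeats):
        # Randomize the training/testing split anew for each repetition.
        X_train, X_test, y_train, y_test = train_test_split(
            features, subject_labels, test_size=0.35,
            stratify=subject_labels, random_state=seed)
        y_pred = train_and_predict(X_train, y_train, X_test)
        scores.append((
            accuracy_score(y_test, y_pred),
            precision_score(y_test, y_pred, average="macro"),
            recall_score(y_test, y_pred, average="macro"),
        ))
    # Mean and spread across repetitions indicate the model's sensitivity
    # to the particular split.
    return np.mean(scores, axis=0), np.std(scores, axis=0)
```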
IV. Method for Identifying a Human Subject
The process may optionally continue at operation 806, with pre-processing the EEG data to remove motor control (e.g., ocular) artifacts. In an exemplary aspect, the pre-processing may include bandpass filtering the EEG data within a range of human brain waves, down-sampling the EEG data (e.g., only for ICA analysis), performing ICA to remove the ocular artifacts, and/or normalizing the EEG data across each channel of the EEG data. The process continues at operation 808, with extracting a plurality of feature points (e.g., spectral feature points) from the EEG data. The process continues at operation 810, with analyzing the plurality of feature points (e.g., with a neural network) to identify the human subject.
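By way of non-limiting illustration, the following Python sketch chains operations 806, 808, and 810 using the hypothetical preprocess, extract_features, and SVRTEBClassifier sketches given earlier; the function names and the mapping to operation numbers are illustrative assumptions rather than a definitive implementation of the method.

```python
# Hypothetical end-to-end identification call built from the earlier sketches.
import torch

def identify_subject(raw_eeg, model, fs=500.0):
    """raw_eeg: (n_channels, n_samples) EEG recorded in response to a stimulus."""
    cleaned = preprocess(raw_eeg, fs=fs)                 # operation 806
    feature_points = extract_features(cleaned, fs=fs)    # operation 808
    x = torch.tensor(feature_points, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():                                # operation 810
        scores = model(x)
    return int(scores.argmax(dim=1))                     # predicted subject identity
```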
Although the operations of
V. Computer System
The exemplary computer system 900 in this embodiment includes a processing device 902 or processor, a system memory 904, and a system bus 906. The system memory 904 may include non-volatile memory 908 and volatile memory 910. The non-volatile memory 908 may include read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. The volatile memory 910 generally includes random-access memory (RAM) (e.g., dynamic random-access memory (DRAM), such as synchronous DRAM (SDRAM)). A basic input/output system (BIOS) 912 may be stored in the non-volatile memory 908 and can include the basic routines that help to transfer information between elements within the computer system 900.
The system bus 906 provides an interface for system components including, but not limited to, the system memory 904 and the processing device 902. The system bus 906 may be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and/or a local bus using any of a variety of commercially available bus architectures.
The processing device 902 represents one or more commercially available or proprietary general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processing device 902 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or other processors implementing a combination of instruction sets. The processing device 902 is configured to execute processing logic instructions for performing the operations and steps discussed herein.
In this regard, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with the processing device 902, which may be a microprocessor, FPGA, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Furthermore, the processing device 902 may be a microprocessor, or may be any conventional processor, controller, microcontroller, or state machine. The processing device 902 may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The computer system 900 may further include or be coupled to a non-transitory computer-readable storage medium, such as a storage device 914, which may represent an internal or external hard disk drive (HDD), flash memory, or the like. The storage device 914 and other drives associated with computer-readable media and computer-usable media may provide non-volatile storage of data, data structures, computer-executable instructions, and the like. Although the description of computer-readable media above refers to an HDD, it should be appreciated that other types of media that are readable by a computer, such as optical disks, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the operating environment, and, further, that any such media may contain computer-executable instructions for performing novel methods of the disclosed embodiments.
An operating system 916 and any number of program modules 918 or other applications can be stored in the volatile memory 910, wherein the program modules 918 represent a wide array of computer-executable instructions corresponding to programs, applications, functions, and the like that may implement the functionality described herein in whole or in part, such as through instructions 920 on the processing device 902. The program modules 918 may also reside on the storage mechanism provided by the storage device 914. As such, all or a portion of the functionality described herein may be implemented as a computer program product stored on a transitory or non-transitory computer-usable or computer-readable storage medium, such as the storage device 914, volatile memory 910, non-volatile memory 908, instructions 920, and the like. The computer program product includes complex programming instructions, such as complex computer-readable program code, to cause the processing device 902 to carry out the steps necessary to implement the functions described herein.
An operator, such as the user, may also be able to enter one or more configuration commands to the computer system 900 through a keyboard, a pointing device such as a mouse, or a touch-sensitive surface, such as the display device, via an input device interface 922 or remotely through a web interface, terminal program, or the like via a communication interface 924. The communication interface 924 may be wired or wireless and facilitate communications with any number of devices via a communications network in a direct or indirect fashion. An output device, such as a display device, can be coupled to the system bus 906 and driven by a video port 926. Additional inputs and outputs to the computer system 900 may be provided through the system bus 906 as appropriate to implement embodiments described herein.
The operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined.
Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.
This application claims the benefit of provisional patent application Ser. No. 63/065,117, filed Aug. 13, 2020, the disclosure of which is hereby incorporated herein by reference in its entirety.