This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2019-219154 filed Dec. 3, 2019.
The present invention relates to an information processing system and a non-transitory computer readable medium storing a program.
Earphones have a structure that covers the external acoustic openings of the ears, and headphones have a structure that covers the ears. Therefore, it is difficult for a user who wears such a device to hear the ambient sound naturally. In consideration of this inconvenience, there are devices having a function of capturing the ambient sound without the device being removed. This function is called, for example, an ambient sound capture function. In contrast, there are devices having a function of actively blocking unwanted ambient sounds. This function is a so-called noise canceling function.
JP2019-004488A is an example of the related art.
However, it is necessary for the user to manually switch between the capturing and the blocking of the ambient sound in a device having a structure that covers the ears.
Aspects of non-limiting embodiments of the present disclosure relate to an information processing system and a non-transitory computer readable medium storing a program that can automatically adjust the volume of an ambient sound, unlike a case in which the switching of the input amount or output amount of the ambient sound is performed by an operation of a user.
Aspects of certain non-limiting embodiments of the present disclosure overcome the above disadvantages and/or other disadvantages not described above. However, aspects of the non-limiting embodiments are not required to overcome the disadvantages described above, and aspects of the non-limiting embodiments of the present disclosure may not overcome any of the disadvantages described above.
According to an aspect of the present disclosure, there is provided an information processing system including a processor configured to detect biological information measured at a head and control a volume of an ambient sound output from a speaker provided in a device which is worn so as to cover an ear according to the detected biological information.
Exemplary embodiment(s) of the present invention will be described in detail based on the following figures, wherein:
Hereinafter, exemplary embodiments of the invention will be described with reference to the drawings.
System Configuration
The earphone system 1 illustrated in
The earphone 10 and the information terminal 20 in this exemplary embodiment are examples of an information processing system.
The earphone 10 according to this exemplary embodiment is provided with a circuit that measures an electric signal (hereinafter, referred to as a “brain wave”) caused by the activity of the brain, in addition to a circuit that reproduces a sound received from the information terminal 20. The earphone 10 used in this exemplary embodiment is a wireless device. Therefore, the earphone 10 is connected to the information terminal 20 by wireless communication.
In this exemplary embodiment, Bluetooth (registered trademark) is used for the wireless connection between the earphone 10 and the information terminal 20. WiFi (registered trademark) or other communication standards can also be used for the wireless connection. In addition, the earphone 10 and the information terminal 20 may be connected to each other by a cable.
The information terminal 20 has a function that estimates the state of the user from information related to brain waves (hereinafter referred to as "brain wave information") included in a digital signal received from the earphone 10 and automatically controls the volume of the sound around the user (hereinafter referred to as the "ambient sound") output from the earphone 10 according to the estimated state of the user.
Automatic volume control includes reducing the ambient sound to a volume that the user does not care about. The control of reducing the volume of the ambient sound to the volume that the user does not care about includes controlling the volume of the ambient sound to zero.
In this exemplary embodiment, the control of the volume of the ambient sound output from the earphone 10 means controlling the volume of the ambient sound that can be heard by the user wearing the earphone 10. That is, the control of the volume of the ambient sound in this exemplary embodiment does not mean control for increasing or decreasing the physical volume of the ambient sound output from a speaker (not illustrated), but means control of the volume perceived by the user in a case in which the user wears the earphone 10. For example, a so-called noise canceling function outputs a sound having a phase opposite to the phase of the ambient sound from the earphone 10 to make it difficult to hear the ambient sound.
In this exemplary embodiment, in a case in which the volume of the ambient sound is forcibly suppressed, the noise canceling function is controlled to be turned on. As a result, the user who wears the earphone 10 does not perceive the presence of the ambient sound or perceives the ambient sound only to the extent that the user does not care about the ambient sound.
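The phase-inversion principle just described can be illustrated with a minimal sketch. The function name, the array-based signal representation, and the assumption that the ambient and playback signals are perfectly time-aligned are simplifications introduced here for illustration and are not part of the embodiment.

```python
import numpy as np

def mix_output(playback: np.ndarray, ambient: np.ndarray,
               noise_canceling_on: bool) -> np.ndarray:
    """Mix the playback audio with the captured ambient sound.

    Both arguments are assumed to be time-aligned sample arrays of equal
    length at the same sampling rate; a real system must also compensate
    for acoustic and processing delay, which is omitted here.
    """
    if noise_canceling_on:
        # Add a signal with the opposite phase so that the ambient sound
        # reaching the eardrum is attenuated (suppressed below the
        # reference volume).
        return playback - ambient
    # Ambient sound capture: pass the ambient sound through to the speaker.
    return playback + ambient
```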
Whether or not the user perceives the ambient sound is also related to the volume of music or voice output from the earphone 10.
For example, in a case in which the volume of music or voice output from the earphone 10 is low even though the volume of the ambient sound output from the earphone 10 is the same, the user may perceive the ambient sound. In a case in which the volume of music or voice output from the earphone 10 is high, the user may not perceive the ambient sound.
In this exemplary embodiment, the minimum value of the volume at which the user can perceive the presence of the ambient sound in relation to the volume of music or voice output from the earphone 10 is referred to as a “reference volume”.
Therefore, in order to prevent the user from perceiving the ambient sound, it is necessary to set the volume of the ambient sound output from the earphone 10 to be less than the reference volume.
On the other hand, in order to make the user perceive the ambient sound, it is necessary to set the volume of the ambient sound output from the earphone 10 to be higher than the reference volume.
However, there are large individual differences in how people perceive sound. For example, even at the same volume, some sounds that are audible to young people are inaudible or difficult to hear for elderly people. In addition, a sound may or may not be heard depending on physical condition. Further, there are individual differences in hearing. For these reasons, it is difficult to set a "reference volume" that is common to all users.
Therefore, in this exemplary embodiment, the "reference volume" is not used in a strict sense, but in a rough sense. That is, not only a volume at which the user does not perceive the ambient sound but also a volume at which the user perceives the ambient sound but does not care about it is treated as a volume lower than the reference volume.
That is, the reduction of the ambient sound in this exemplary embodiment may be equivalent to that in the noise canceling function of commercially available earphones.
Similarly, the volume at which the ambient sound is perceived may be equivalent to that in the ambient sound capturing function of commercially available earphones.
However, in the case of the commercially available earphones, the user needs to manually turn on each function in order to enable the functions. Similarly, the user needs to manually turn off each function in order to disable the functions.
In the case of this exemplary embodiment, the user only needs to wear the earphone 10. The information terminal 20 according to this exemplary embodiment estimates the state of the user from the brain wave information of the user measured by the earphone 10 and controls the volume of the ambient sound according to the estimated state of the user. The content of this control will be described in detail below.
In the example illustrated in
Hereinafter, in this exemplary embodiment, the reason why the earphone 10 is used for measuring brain waves will be described. The brain wave is an example of biological information measured at the head.
In a case in which the spread of devices that can measure brain waves is considered, there is a possibility that a device that is obviously intended to measure brain waves will not be accepted by users. For example, there is a possibility that a helmet-type device will not be accepted by users from the viewpoint of design and the burden on the body.
For the above reasons, in this exemplary embodiment, the earphone 10 is used as a device for measuring brain waves. Since the earphone 10 is widely used as a so-called audio device, it is considered that there is little psychological resistance to wearing the earphone.
In addition, since the external acoustic opening into which the earphone 10 is inserted is close to the brain, it is also an ideal part for measuring brain waves. The fact that brain waves can be measured by the earphone 10 will be described below in the section on experimental results.
The external acoustic opening is an example of the ear. The ear according to this exemplary embodiment is used in a sense including the auricle and the external acoustic opening. In addition, the earphone 10 is appropriate for acquiring the ambient sound.
Configuration of Earphone 10
The earphone 10 includes earphone chips 11R and 11L that are inserted into the external acoustic openings, earphone bodies 12R and 12L to which the earphone chips 11R and 11L are attached, respectively, ear hooks 13R and 13L that are placed in a gap between the auricle and a temporal region, a cable that connects the earphone bodies 12R and 12L, and a controller 15 having a power button and a volume button provided thereon.
In
The earphone chip 11R according to this exemplary embodiment includes a dome-shaped electrode 11R1 that is inserted into the external acoustic opening and comes into contact with the inner wall of the external acoustic opening and a ring-shaped electrode 11R2 that comes into contact with the cavity of the concha.
Both the electrode 11R1 and the electrode 11R2 according to this exemplary embodiment are made of conductive rubber. The electrodes are for measuring an electric signal that appears on the skin. The electrode 11R1 and the electrode 11R2 are electrically separated from each other by an insulator.
In this exemplary embodiment, the electrode 11R1 is a terminal (hereinafter, referred to as an “EEG measurement terminal”) that is used to measure a potential change caused by an electroencephalogram (EEG).
The electrode 11R2 is a ground electrode (hereinafter, also referred to as a “GND terminal”).
The earphone chip 11L includes a dome-shaped electrode 11L1 that is inserted into the external acoustic opening and comes into contact with the inner wall of the external acoustic opening. In this exemplary embodiment, the electrode 11L1 is a terminal (hereinafter, referred to as a “REF terminal”) that is used to measure a reference potential (REF). However, in this exemplary embodiment, the electrode 11R2 and the electrode 11L1 are electrically short-circuited.
In this exemplary embodiment, the potential change caused by the brain waves is measured as a difference signal between the electric signals measured by the electrodes 11R1 and 11L1.
In the field of brain science, all potential changes resulting from sources other than brain waves are called artifacts. In the field of brain science, it is considered that an electrical signal obtained by measuring brain waves always contains the artifact. In this exemplary embodiment, the potential change measured by the earphone 10 is referred to as an electric signal obtained by measuring brain waves, without distinguishing the origin of the potential change.
Incidentally, the components included in an artifact are classified into components resulting from the living body, components resulting from the measurement system, such as the electrodes, and components resulting from external equipment or the environment. Among the three, the components other than those resulting from the living body can be measured as noise by the earphone 10. The noise can be measured as an electric signal in a state in which the electrode 11R1 and the electrode 11L1 are electrically short-circuited.
The earphone main body 12R according to this exemplary embodiment includes, for example, a circuit that generates measurement signals of the brain waves and of potential changes resulting from sources other than the brain waves, a circuit that generates audio data from an electric signal output from a microphone (not illustrated), and a circuit that performs a process of decoding audio data received from the information terminal 20 (see
A battery is provided in the earphone main body 12L.
In this exemplary embodiment, the earphone main body 12R includes a digital electroencephalograph 121, a microphone 122, a speaker 123, a six-axis sensor 124, a Bluetooth module 125, a semiconductor memory 126, and a micro processing unit (MPU) 127.
The digital electroencephalograph 121 includes a differential amplifier that differentially amplifies a potential change appearing at the electrodes 11R1 and 11L1, a sampling circuit that samples the output of the differential amplifier, and an analog/digital conversion circuit that converts the sampled analog potential into a digital value. In this exemplary embodiment, the sampling rate is 600 Hz and the resolution of the analog/digital conversion circuit is 16 bits.
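As a rough sketch of this digitization step, the 16-bit codes produced by the analog/digital conversion circuit can be converted back to an input-referred voltage. The full-scale voltage and amplifier gain used below are placeholder values chosen for illustration only; the embodiment does not specify them.

```python
import numpy as np

SAMPLING_RATE_HZ = 600   # sampling rate stated in this exemplary embodiment
ADC_BITS = 16            # resolution of the analog/digital conversion circuit
FULL_SCALE_VOLTS = 4.5   # assumed ADC full-scale voltage (placeholder value)
AMPLIFIER_GAIN = 1000.0  # assumed differential-amplifier gain (placeholder value)

def codes_to_microvolts(codes: np.ndarray) -> np.ndarray:
    """Convert signed 16-bit ADC codes to input-referred microvolts."""
    volts_at_adc = codes.astype(np.float64) * FULL_SCALE_VOLTS / (2 ** (ADC_BITS - 1))
    return volts_at_adc / AMPLIFIER_GAIN * 1e6

# One second of measurement corresponds to SAMPLING_RATE_HZ (600) samples.
```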
The microphone 122 includes a diaphragm that vibrates in response to voice uttered by the user, a voice coil that converts the vibration of the diaphragm into an electric signal, and an amplifier that amplifies the electric signal. In addition, an analog/digital conversion circuit that converts the analog potential of the electric signal output from the amplifier into a digital value is separately prepared.
The speaker 123 includes a diaphragm and a voice coil through which a current corresponding to audio data flows to make the diaphragm vibrate. In addition, a digital/analog conversion circuit converts audio data input from the MPU 127 into an analog signal.
The six-axis sensor 124 includes a three-axis acceleration sensor and a three-axis gyro sensor. The six-axis sensor 124 is used to detect the posture of the user.
The Bluetooth module 125 is used to transmit and receive data to and from the information terminal 20 (see
In addition, the Bluetooth module 125 can be used to receive a signal (hereinafter, referred to as a “control signal”) for controlling the volume of the ambient sound from the information terminal 20. However, in a case in which the ambient sound whose volume has been controlled is generated by the information terminal 20 and is then transmitted as audio data to the earphone 10, it is not necessary to receive the control signal for the volume of the ambient sound.
The semiconductor memory 126 includes, for example, a read only memory (ROM) storing a basic input output system (BIOS), a random access memory (RAM) used as a work area, and a rewritable non-volatile memory (hereinafter, referred to as a “flash memory”).
In this exemplary embodiment, the flash memory is used to store, for example, the digital signal output from the digital electroencephalograph 121, the audio data acquired by the microphone 122, and the audio data received from the information terminal 20.
The MPU 127 controls, for example, the transmission and reception of digital signals to and from the information terminal 20, the processing of the digital signals to be transmitted to the information terminal 20, and the processing of the digital signals received from the information terminal 20. In this exemplary embodiment, the MPU 127 performs a process, such as Fourier transform, on the digital signal output from the digital electroencephalograph 121. The MPU 127 and the semiconductor memory 126 operate as a computer.
A lithium battery 128 is provided in the earphone main body 12L.
Configuration of Information Terminal 20
In
The information terminal 20 illustrated in
The Bluetooth module 201 is used for communication with the Bluetooth module 125 provided in the earphone 10.
The MPU 202 acquires brain wave information from the digital signal received from the earphone 10 and implements the function of estimating the state of the user. Here, the function is implemented by the execution of an application program. In this exemplary embodiment, the state of the user is used to mean the state of mind and body. In this exemplary embodiment, the state of mind and body is classified into an excited state, a concentrated state, a relaxed state, a light sleep state, and a deep sleep state. The classification of the state of mind and body is not limited to the exemplified states. The state of mind and body may be classified into a smaller number of states or a larger number of states.
The excited state is a state in which a large number of γ-waves are output. The γ-waves are also output in an irritated state or an unpleasant state.
The concentrated state is a state in which a large number of β-waves are output. It is said that the β-waves appear during daily life and work.
The relaxed state is a state in which a large number of α-waves are output. The α-waves are output even in a state in which consciousness is concentrated. In addition, the state corresponding to the α-waves may be subdivided. There are three types of α-waves, that is, fast α-waves, middle α-waves, and slow α-waves, which correspond to high, middle, and low frequencies, respectively. The fast type is classified as concentration with tension, the slow type as concentration close to rest, and the middle type as so-called relaxed concentration.
The light sleep state is a state in which a large number of θ-waves are output. It is said that the θ-waves are output in a state in which there is consciousness, but the level of consciousness is low.
The deep sleep state is a state in which a large number of δ-waves are output. It is said that the δ-waves are output in an unconscious state.
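A minimal sketch of the classification just described might map the dominant brain wave band to a state of mind and body. The band boundaries in hertz are conventional approximate values, not values specified in this disclosure, and the dictionary and function names are illustrative.

```python
# Conventional approximate EEG band boundaries in Hz (assumed, not specified
# in this disclosure).
BAND_LIMITS_HZ = {
    "delta": (0.5, 4.0),     # deep sleep state
    "theta": (4.0, 8.0),     # light sleep state
    "alpha": (8.0, 13.0),    # relaxed state
    "beta": (13.0, 30.0),    # concentrated state
    "gamma": (30.0, 100.0),  # excited state
}

BAND_TO_STATE = {
    "gamma": "excited",
    "beta": "concentrated",
    "alpha": "relaxed",
    "theta": "light_sleep",
    "delta": "deep_sleep",
}

def classify_state(band_powers: dict) -> str:
    """Return the state associated with the band that has the largest power."""
    dominant_band = max(band_powers, key=band_powers.get)
    return BAND_TO_STATE[dominant_band]
```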
The MPU 202 illustrated in
The ambient sound determination unit 221 according to this exemplary embodiment determines, for example, whether the ambient sound received from the earphone 10 includes a voice including a predetermined term or a predetermined type of sound.
Examples of the predetermined term include the name of the user who wears the earphone 10, a calling word, and a greeting word. Further, an example of the predetermined term is a word indicating danger. Examples of the predetermined term include “dangerous” and “run away”. In addition, for example, some announcements used in transport facilities can be included in the predetermined term.
Examples of the predetermined type of sound include siren sounds, bell sounds, and horn sounds. Siren sounds or horn sounds that call attention to danger or caution include sounds used in, for example, police vehicles, fire trucks, ambulances, and disaster prevention wireless systems. In addition, the bell sounds include the sound of an alarm clock, the sound of a timer, the sound of a fire alarm, and a sound indicating an earthquake motion with high seismic intensity.
The predetermined terms or the predetermined types of sounds are determined in the initial settings. However, some of the predetermined terms or the predetermined types of sounds may be edited or added by the user.
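A sketch of the determination performed by the ambient sound determination unit 221 might look like the following. It assumes that a speech recognizer and a sound classifier are available upstream and already provide a transcript string and a sound-type label; those components, and the concrete terms and sound types listed, are assumptions for illustration.

```python
# Editable sets initialized in the initial settings; the user may edit or add
# entries. The concrete terms and sound types below are only examples.
PREDETERMINED_TERMS = {"dangerous", "run away", "excuse me"}
PREDETERMINED_SOUND_TYPES = {"siren", "bell", "horn", "fire_alarm", "earthquake_alert"}

def contains_predetermined_content(transcript: str, sound_type: str) -> bool:
    """Return True if the ambient sound includes a predetermined term or sound type.

    `transcript` is text assumed to come from an upstream speech recognizer,
    and `sound_type` is a label assumed to come from an upstream sound
    classifier; neither component is part of this sketch.
    """
    text = transcript.lower()
    if any(term in text for term in PREDETERMINED_TERMS):
        return True
    return sound_type in PREDETERMINED_SOUND_TYPES
```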
The user state estimation unit 222 according to this exemplary embodiment extracts the brain wave information from the digital signal received from the earphone 10 and estimates the state of the user on the basis of the frequency components included in large amounts in the brain wave information. For example, a fast Fourier transform is used for frequency component decomposition. In this exemplary embodiment, the MPU 127 (see
The brain wave information includes a plurality of frequency components. In this exemplary embodiment, a frequency component whose output is confirmed to be larger than a threshold value determined for that frequency component is treated as a frequency component that is included in large amounts in the brain wave information. However, in a case in which there are a plurality of frequency components greater than their threshold values, one frequency component may be determined according to a predetermined priority.
In addition, instead of using the frequency component greater than the threshold value, one frequency component assigned to an output pattern of a plurality of frequency components may be used as a representative frequency component.
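The selection of the representative frequency component described above could be sketched as follows: per-band powers are obtained from a Fourier transform of one window of the measured signal, each band is compared against its own threshold, and the result is chosen according to a predetermined priority. The threshold values, the priority order, and the helper names are assumptions introduced for illustration.

```python
from typing import Optional

import numpy as np

SAMPLING_RATE_HZ = 600  # sampling rate of the digital electroencephalograph 121

BAND_LIMITS_HZ = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
                  "beta": (13, 30), "gamma": (30, 100)}
# Assumed per-band thresholds and priority order (highest priority first).
BAND_THRESHOLDS = {"delta": 1.0, "theta": 1.0, "alpha": 1.0, "beta": 1.0, "gamma": 1.0}
BAND_PRIORITY = ["gamma", "beta", "alpha", "theta", "delta"]

def band_powers(window: np.ndarray) -> dict:
    """Decompose one window of brain wave samples into per-band power."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLING_RATE_HZ)
    return {band: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for band, (lo, hi) in BAND_LIMITS_HZ.items()}

def representative_band(powers: dict) -> Optional[str]:
    """Among bands whose power exceeds their threshold, return the one with
    the highest predetermined priority (None if no band exceeds its threshold)."""
    for band in BAND_PRIORITY:
        if powers[band] > BAND_THRESHOLDS[band]:
            return band
    return None
```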
The ambient sound output control unit 223 according to this exemplary embodiment controls the volume of the ambient sound output from the speaker 123 (see
In this exemplary embodiment, the content of the control corresponding to the combination of the estimated state of the user and the content of the ambient sound is determined by a program. The relationship between the content of the control and the combination of the estimated state of the user and the content of the ambient sound may be prepared in a table.
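The table mentioned above could be realized as a simple lookup keyed by the estimated state and by whether the ambient sound includes the predetermined content. The entries shown are assumptions consistent with the control flow described later, and the action names are placeholders; the special handling of a relaxed state that follows a concentrated state is handled outside this table.

```python
# (estimated state, ambient sound includes predetermined content) -> control action.
CONTROL_TABLE = {
    ("concentrated", False): "suppress_ambient",
    ("concentrated", True): "increase_ambient",
    ("excited", False): "suppress_ambient",
    ("excited", True): "increase_ambient",
    ("light_sleep", False): "suppress_ambient",
    ("light_sleep", True): "increase_ambient",
    ("deep_sleep", False): "suppress_ambient",
    ("deep_sleep", True): "increase_ambient",
    ("relaxed", False): "increase_ambient",
    ("relaxed", True): "increase_ambient",
}

def control_for(state: str, has_predetermined_content: bool) -> str:
    """Look up the control content for one combination of state and ambient sound."""
    return CONTROL_TABLE[(state, has_predetermined_content)]
```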
In addition, the ambient sound output control unit 223 according to this exemplary embodiment has a function of reproducing the ambient sound recorded in the concentrated state from the speaker 123 (see
The reproduction of the recorded ambient sound may be performed on condition that the user wants to reproduce the ambient sound recorded in the concentrated state. The confirmation of the user's request may be performed using a confirmation screen displayed on a display unit of the information terminal 20 (see
The semiconductor memory 203 according to this exemplary embodiment stores a table 231 in which the relationship between the characteristics of the brain wave information and the state of the user has been recorded.
In
In addition, the concentrated state is associated with a characteristic BB in which many β-waves appear. The relaxed state is associated with a characteristic CC in which many α-waves appear. The light sleep state is associated with a characteristic DD in which many θ-waves appear. The deep sleep state is associated with a characteristic EE in which many δ-waves appear. Hereinafter, the light sleep state and the deep sleep state are collectively referred to as a sleep state.
The table 231 is referred to by the user state estimation unit 222 (see
The semiconductor memory 203 includes a ROM in which a BIOS is stored, a RAM used as a work area, and a flash memory as an external storage memory, in addition to the table 231. The audio data of the ambient sound received from the earphone 10 is recorded on the flash memory. The ambient sound recorded on the flash memory is read by the ambient sound output control unit 223 and is output to the Bluetooth module 201 at a volume corresponding to the state of the user and the content of the ambient sound. In a case in which there is music that the user is listening to or a voice heard over the phone, audio data is generated by mixing the audio data of the music or the voice with the ambient sound.
Processing Operation of Information Terminal 20
Hereinafter, an example of a processing operation implemented by the execution of a program by the MPU 202 (see
In this exemplary embodiment, the digital information including the brain wave information is transmitted from the earphone 10 (see
First, the MPU 202 determines whether or not a mode for automatically adjusting the volume of the ambient sound is set (Step S1).
In a case in which the determination result in Step S1 is “No”, the MPU 202 controls the output of the ambient sound in the operation mode that has been manually set (Step S2). This control is provided as a portion of the function of the ambient sound output control unit 223 (see
On the other hand, in a case in which the determination result in Step S1 is “Yes”, the MPU 202 estimates the state of the user on the basis of the frequency components generally included in the brain wave information (Step S3). In this exemplary embodiment, one of the excited state, the concentrated state, the relaxed state, the light sleep state, and the deep sleep state is used as the estimated value of the state of the user.
Then, the MPU 202 determines the content of the ambient sound (Step S4). In addition, the order of Step S3 and Step S4 may be interchanged or Step S3 and Step S4 may be performed in parallel.
Then, the MPU 202 performs control corresponding to the current state and the content of the ambient sound.
In
In a case in which the determination result in Step S5 is “Yes”, the MPU 202 determines whether or not the ambient sound includes predetermined content (Step S6). The predetermined content is a predetermined term or a predetermined type of sound.
In a case in which the user is in the concentrated state and the ambient sound does not include the predetermined content, the MPU 202 obtains a negative result in Step S6. In this case, the MPU 202 forcibly suppresses the volume of the ambient sound (Step S7). As a result, the concentrated state of the user is not hindered. Further, the user does not need to individually perform the operation of suppressing the ambient sound.
In contrast, in a case in which the user is in the concentrated state and the predetermined content is included in the ambient sound, the MPU 202 obtains a positive result in Step S6. In this case, the MPU 202 forcibly increases the volume of the ambient sound (Step S8). As a result, the concentrated state is hindered, but the user can perceive a call or a danger to his or her body.
In a case in which the user is not in the concentrated state, the MPU 202 obtains a negative result in Step S5. In this case, the MPU 202 determines whether or not the user is in the excited state (Step S9). That is, the MPU 202 determines whether or not many γ-waves have appeared in the brain wave information.
In a case in which the user is in the excited state, the MPU 202 obtains a positive result in Step S9.
In a case in which the determination result in Step S9 is “Yes”, the MPU 202 performs the determination in Step S6 and then performs a process corresponding to the result of the determination. That is, in a case in which the predetermined content is not included in the ambient sound, the MPU 202 forcibly suppresses the volume of the ambient sound so as not to stimulate the excited state of the user (Step S7). On the other hand, in a case in which the predetermined content is included in the ambient sound, the MPU 202 forcibly increases the volume of the ambient sound even though the user is in the excited state (Step S8).
In a case in which the user is not in the excited state, the MPU 202 obtains a negative result in Step S9.
In a case in which the negative result is obtained in Step S9, the MPU 202 determines whether or not the user is in an awakened state (Step S10). That is, the MPU 202 determines whether or not many α-waves appear in the brain wave information.
In a case in which the user is in the light sleep state or the deep sleep state, the MPU 202 obtains a negative result in Step S10.
In a case in which the negative result is obtained in Step S10, the MPU 202 performs the determination in Step S6 and then performs a process corresponding to the result of the determination. That is, in a case in which the predetermined content is not included in the ambient sound, the MPU 202 forcibly suppresses the volume of the ambient sound so as not to stimulate the sleep state of the user (Step S7). On the other hand, in a case in which the predetermined content is included in the ambient sound, the MPU 202 forcibly increases the volume of the ambient sound even though the user is in the sleep state (Step S8).
In a case in which the user is in the relaxed state, the MPU 202 obtains a positive result in Step S10.
In a case in which the positive result is obtained in Step S10, the MPU 202 determines whether or not the previous state of the user is the concentrated state (Step S11).
In a case in which the previous state of the user is the excited state or the sleep state, the MPU 202 obtains a negative result in Step S11. In this case, the MPU 202 according to this exemplary embodiment proceeds to Step S8 and performs a process of forcibly increasing the volume of the ambient sound. That is, in the relaxed state, control is performed such that the ambient sound can be heard.
However, in a case in which the previous state of the user is the concentrated state, the MPU 202 obtains a positive result in Step S11 and directs the earphone 10 to output the ambient sound recorded in the concentrated state (Step S12).
As described above, in a case in which the user is in the concentrated state, the MPU 202 performs control to forcibly reduce the volume of the ambient sound so as not to hinder the concentrated state as long as the predetermined content is not included in the ambient sound. On the other hand, in a case in which the concentrated state ends, there is a possibility that the user wants to check the content of the ambient sound in the concentrated state.
Therefore, in this exemplary embodiment, in a case in which the state changes from the concentrated state to the relaxed state, control is performed such that the ambient sound recorded in the concentrated state is output from the earphone 10. Step S12 may be performed only in a case in which the user sets the execution of Step S12 in advance. Further, a function may be provided which inquires of the user whether to output the recorded ambient sound before starting the output of the recorded ambient sound.
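Gathering Steps S1 to S12, the decision flow might be sketched as follows. The function name, the string-valued actions, and the way the replay option is represented are assumptions for illustration; the branch structure follows the steps described above.

```python
def control_ambient_sound(auto_mode: bool, state: str, previous_state: str,
                          has_predetermined_content: bool,
                          replay_enabled: bool = True) -> str:
    """Return the control action for one evaluation cycle (Steps S1 to S12)."""
    if not auto_mode:                                    # Step S1: "No"
        return "follow_manual_mode"                      # Step S2

    # Steps S3 and S4 (state estimation and analysis of the ambient sound)
    # are assumed to have produced `state` and `has_predetermined_content`.

    if state in ("concentrated", "excited", "light_sleep", "deep_sleep"):
        # Step S5 or S9 positive, or Step S10 negative: check the content.
        if has_predetermined_content:                    # Step S6: "Yes"
            return "increase_ambient"                    # Step S8
        return "suppress_ambient"                        # Step S7

    # Relaxed (awakened) state: Step S10 positive.
    if previous_state == "concentrated" and replay_enabled:
        return "replay_recorded_ambient"                 # Steps S11 and S12
    return "increase_ambient"                            # Step S8
```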
As described above, the earphone system 1 according to this exemplary embodiment estimates the state of the user who wears the earphone 10 covering the external acoustic opening using brain waves and automatically controls the volume of the ambient sound perceived by the user according to the estimated state. Therefore, the user does not need to manually perform an operation for hearing the ambient sound or an operation for not hearing the ambient sound. In other words, the user can continue his or her own action or activity, without being bothered with the ambient sound. For example, even in a case in which the user moves to a place where noise is severe, the user can enjoy the music and sound output from the earphone 10 without being conscious of the ambient sound.
In addition, since the volume can be increased such that the user is forced to hear an ambient sound that includes sounds or terms indicating danger, user safety and user convenience are also taken into consideration.
Experimental Results
Next, the fact that the earphone 10 (see
Reliability of MindWave (NeuroSky Inc.) Used for Comparison with Earphone 10
In this experiment, the commercially available MindWave manufactured by NeuroSky, Inc. is used as the headset 30 with a brain wave sensor.
As described above, the earphone 10 uses the external acoustic opening as a brain wave measurement point. In contrast, MindWave manufactured by NeuroSky, Inc. uses the forehead 30A as a brain wave measurement point.
The forehead 30A illustrated in
The brain waves measured by MindWave are shown to be equivalent to the brain waves measured by medically certified EEG systems, as verified by Elena Ratti et al., "Comparison of Medical and Consumer Wireless EEG Systems for Use in Clinical Trials" (https://www.frontiersin.org/articles/10.3389/fnhum.2017.00398/full).
This paper is peer-reviewed by Dimiter Dimitrov, Ph.D., Senior Scientist, Duke University, U.S. and Marta Parazzini, Ph.D., the Italian National Research Council (CNR), Milan Institute of Technology, Italy.
B-Alert and Enobio illustrated in
In
Although the detailed description of the paper is omitted, the measurement of brain waves at rest is performed twice, on different days, on five healthy subjects. In the same experiment, Fp1 on the forehead is used as a common measurement point, and brain wave patterns and power spectral densities in a state in which the eyes are closed and a state in which the eyes are open are compared. The evaluation in this paper corresponds to the evaluation of the output of α-waves in a case in which the eyes are closed.
In addition, the conclusion section of the paper shows that the power spectrum measured by MindWave at Fp1 and the result of a reproducibility test are almost the same as those of B-Alert and Enobio, which are medically certified EEG systems, and that the peak of α-waves is also captured. Further, the conclusion section shows that the brain waves measured by MindWave include blinking and movement during eye-opening as noise. In addition, it is pointed out that the reason for the low reliability of Muse is the possibility of artifacts.
Comparison of Measurement Results by Earphone 10 and Measurement Results by MindWave
Next, the results of the experiment in which the subjects wear both the earphone 10 (see
In the applicant's experiments, the number of subjects is 58. Three sets of an attention rise test followed by a meditation rise test are performed for each subject on the same day, and an experiment to capture the appearance of α-waves during eye closure is performed.
The actual number of subjects is 83. However, the measurement results of 25 subjects are excluded since the influence of artifacts during eye-opening is excessive.
In the attention rise test, the subjects are asked to keep staring at a pen tip that is 150 mm ahead for 30 seconds with the eyes open. The purpose of this test is to create the concentrated state, to suppress the appearance of α-waves, and to increase β-waves.
In the meditation rise test, the subjects are asked to meditate for 30 seconds with the eyes closed. This test corresponds to the evaluation of the output of α-waves during eye closure. In other words, the purpose is to check the rate of increase in α-waves in the relaxed state.
In the experiments, after the attention rise test, the meditation rise test is performed to evaluate the output of α-waves.
In general, for the evaluation of the output of α-waves, two sets of 30 seconds with the eyes open followed by 30 seconds with the eyes closed are performed, and the rise of α-waves in the eye-closed state is checked.
However, in this experiment, the number of sets is increased in order to collect a large amount of data at once.
First, the reason for performing the meditation rise test and the method used for evaluating the output of α-waves during eye closure will be described.
It is said that the reproducibility of brain waves evoked by human actions is low and that it is difficult to evaluate the reproducibility of acquisition performance on the basis of clinical data. However, it is also said that, among the brain waves, α-waves are likely to appear due to the difference between the eye-open state and the eye-closed state.
It is said that any type of brain wave tends to appear uniformly in the eye-open state and waves other than the α-waves are uniformly attenuated in the eye-closed state. That is, it is said that α-waves appear while being relatively less affected even in the eye-closed state.
In experiments using this characteristic, Fourier transform is performed on the raw data of the brain waves and the spectral intensity Sn of a frequency band corresponding to each wave is used as a characteristic value.
In the experiments, an α-wave intensity ratio Tα is defined as the ratio (=Sα/ΣSn) of the spectral intensity Sα of an α-wave band to the sum of the spectral intensities of all frequency bands (that is, ΣSn) and it is checked whether or not the α-wave intensity ratio Tα increases due to a change from the eye-open state to the eye-closed state.
In a case in which an increase in the α-wave intensity ratio Tα is confirmed, the increase is the evidence of the measurement of the brain waves.
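A sketch of this evaluation is shown below. It computes Tα = Sα/ΣSn for one window of raw data; the band limits and the window handling are the same illustrative assumptions used earlier and are not values specified in this disclosure.

```python
import numpy as np

SAMPLING_RATE_HZ = 600
BAND_LIMITS_HZ = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
                  "beta": (13, 30), "gamma": (30, 100)}

def alpha_intensity_ratio(window: np.ndarray) -> float:
    """Compute T_alpha = S_alpha / sum(S_n) for one window of raw brain wave data."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLING_RATE_HZ)
    band_power = {band: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
                  for band, (lo, hi) in BAND_LIMITS_HZ.items()}
    total = sum(band_power.values())
    return band_power["alpha"] / total if total > 0 else 0.0

# Evidence of brain wave measurement: T_alpha increases when the state changes
# from eyes open to eyes closed, i.e.,
#     alpha_intensity_ratio(eyes_closed_window) > alpha_intensity_ratio(eyes_open_window)
```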
Next, the difference between the measurement results by the earphone 10 and the measurement results by MindWave will be described with reference to
In a case in which there is no blinking, a high similarity between the measurement results by the earphone 10 and the measurement results by MindWave is confirmed.
On the other hand, in a case in which there is blinking, artifacts affected by the blinking appear remarkably in the measurement results by MindWave. It is considered that the reason is that the forehead is close to the eyes and MindWave is likely to detect blinking as a large artifact during eye-opening. This is pointed out in the above-mentioned paper by Elena Ratti et al.
Artifacts due to the influence of blinking generally appear in the δ-wave band. However, in a case in which there is a large artifact as illustrated in
In addition, the artifacts detected in association with blinking include not only a potential change resulting from the living body which occurs due to the movement of the eyelid, but also a potential change resulting from the brain waves related to attempts to move the eyelid.
In contrast, in the measurement results obtained by the earphone 10 (see
However, it is confirmed that artifacts caused by the movement of the jaw when swallowing saliva are detected regardless of whether the eyes are open or closed. These artifacts generally appear in the θ-wave band.
Even so, the spectral intensity of the artifact that appears due to the swallowing of saliva is much lower than the spectral intensity of the artifact corresponding to blinking detected by MindWave. Therefore, unlike the case of MindWave, no influence of this artifact on the increase in α-waves is confirmed.
The artifacts that appear due to the swallowing of saliva include not only a potential change resulting from the living body which occurs due to the movement of the jaw muscles, but also a potential change resulting from the brain waves related to attempts to move the jaw muscles.
In the above description, the reason why the operation of the jaw swallowing saliva is given as an example of the intentional movement of the muscle by the user while keeping a specific operation in mind is that the artifacts illustrated in
Next, an increase in the α-waves appearing in the measurement results by the earphone 10 and an increase in the α-waves appearing in the measurement results by MindWave will be described with reference to
In
The distribution of the spectrum intensity of MindWave (see
The results of the experiment show that an increase in α-waves is confirmed in 46 subjects in both MindWave and the earphone 10. This ratio corresponds to about 80% of 58 subjects.
Incidentally, the increase in α-waves is confirmed in 7 subjects only with the earphone 10. In other words, with the earphone 10, the increase in α-waves is confirmed in a total of 53 subjects, that is, in more than 90% of the 58 subjects.
In addition, the increase in α-waves is not confirmed in 5 subjects in both MindWave and the earphone 10. The waveforms illustrated in
In
As illustrated in
As described above, it is confirmed that the earphone 10 used to measure brain waves in the external acoustic opening in this exemplary embodiment has the same measurement capability as MindWave.
The exemplary embodiment of the invention has been described above. However, the technical scope of the invention is not limited to the scope described in the above exemplary embodiment. It is apparent from the description of the claims that various modifications or improvements of the above-described exemplary embodiment are included in the technical scope of the invention.
For example, in the above-described exemplary embodiment, the brain waves have been described as an example of the potential change that can be measured by the earphone 10 (see
In the above-described exemplary embodiment, the earphones 10 are put into the external acoustic openings of both ears to measure brain waves. However, the earphone 10 may be a type that is put into the external acoustic opening of one ear.
In the case of this configuration, a lithium battery 128 (see
In the above-described exemplary embodiment, the earphone 10 (see
Further, in the above-described exemplary embodiment, for example, the information terminal 20 (see
In the above-described exemplary embodiment, the MPU 202 (see
In the above-described exemplary embodiment, the example in which the electrode for measuring a potential change caused by, for example, brain waves is provided in the earphone 10 has been described. However, the electrode may be provided in other articles. Next, some specific examples will be described.
For example, the electrode for measuring a potential change caused by, for example, brain waves may be provided in headphones that cover the auricle. In the case of the headphones, the electrode is provided in a portion of an ear pad which comes into contact with the head. In this case, the electrode is disposed at a position where the hair is thin and which can come into direct contact with the skin.
Further, the article that comes into contact with the auricle may be a spectacle-type device. These devices are examples of wearable devices.
As illustrated in
In
In addition, the electrode used to measure brain waves may be combined with a smart glass or a headset that displays information and is called a head-mounted display. Further, the electrode may be provided in a headset that has a function of understanding the environment around the user and displaying an image assimilated to the environment.
The headset 50 illustrated in
In the headset 50 illustrated in
Similarly to the case of the glasses 40 (see
In the case of this configuration, devices other than the speaker 123 in the configuration illustrated in
In the above-described exemplary embodiment, the case in which biological information including brain waves is acquired using the electrode that comes into contact with the ear of the user has been described. However, the position where biological information including brain waves is acquired is not limited to the ears. For example, the electrodes may be provided at the forehead and other positions of the head.
In the case of
In addition, for example, the electrodes 11R1, 11R2, and 11L1 of the headset 50 (see
In the above-described exemplary embodiment, the case in which biological information including brain waves is acquired using the electrode that comes into contact with the head including the ears of the user has been described. However, the activity of the brain may be measured by a change in blood flow.
In addition, magnetoencephalography may be used to acquire biological information including brain waves. For example, a tunnel magnetoresistance (TMR) sensor is used to measure the magnetic field generated by the electrical activity of nerve cells of the brain.
The earphone chips 11R and 11L that are provided with only the speakers 123 (see
In this configuration, devices other than the speaker 123 in the configuration illustrated in
The MPU in each of the above-described exemplary embodiments indicates a processor in a broad sense. Examples of the processor include general processors (e.g., CPU: Central Processing Unit) and dedicated processors (e.g., GPU: Graphics Processing Unit, ASIC: Application Specific Integrated Circuit, FPGA: Field Programmable Gate Array, and programmable logic devices).
In the embodiments above, the term “processor” is broad enough to encompass one processor or plural processors in collaboration which are located physically apart from each other but may work cooperatively. The order of operations of the processor is not limited to one described in the embodiments above, and may be changed.
The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
JP2019-219154 | Dec. 3, 2019 | JP | national
Foreign Patent Documents

Number | Date | Country
---|---|---
JP2019-004488A | Jan 2019 | JP