This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2019-112561 filed Jun. 18, 2019.
The present disclosure relates to an information processing device and a non-transitory computer readable medium.
Japanese Unexamined Patent Application Publication No. 2001-008915 discloses the following technique for providing a brain wave data obtaining device capable of measuring brain waves up to a high frequency range in rapid eye movement (REM) sleep. That is, a signal from an electrode or sensor for collecting an electroencephalogram (EEG), an electromyogram (EMG), and an electrooculogram (EOG) is converted into digital data by an A/D converter through an amplifier at a sampling rate determined by a sampling controller and is stored in a waveform memory. These operations are controlled by an information processor. The waveform data stored in the waveform memory is printed as waveform data by a recording unit through an interface unit, for example. The sampling controller increases the sampling rate when the data of the EMG becomes flat and REM sleep is detected.
In a configuration including a sensor for measuring biological information and another sensor for measuring a motion of a head, measurement results of the sensors are obtained separately. Thus, even when there is a possibility that the biological information and the motion of the head are associated with each other, the measurement results are displayed separately.
Aspects of non-limiting embodiments of the present disclosure relate to an information processing device and a non-transitory computer readable medium that are capable of simultaneously presenting biological information and a motion of a head in a comparable manner, unlike a case where the biological information and the motion of the head are measured by different sensors.
Aspects of certain non-limiting embodiments of the present disclosure address the above advantages and/or other advantages not described above. However, aspects of the non-limiting embodiments are not required to address the advantages described above, and aspects of the non-limiting embodiments of the present disclosure may not address advantages described above.
According to an aspect of the present disclosure, there is provided an information processing device including an extractor and a controller. The extractor extracts biological information and a motion of a head from a potential measurement result, which is a result of measuring a potential in the head of a human body. The controller performs control to enable the biological information and the motion of the head that have been extracted by the extractor to be simultaneously presented in association with each other.
An exemplary embodiment of the present disclosure will be described in detail based on the following figures, wherein:
Hereinafter, an exemplary embodiment for carrying out the present disclosure will be described with reference to the attached drawings.
Modules are components of software (including computer programs as an interpretation of “software”) or hardware that can generally be logically separated from one another. Thus, the modules according to the exemplary embodiment include not only modules in a computer program but also modules in a hardware configuration. Therefore, the description of the exemplary embodiment includes a description of a computer program for causing a computer to function as those modules (for example, a program for causing a computer to execute individual steps, a program for causing a computer to function as individual units, or a program for causing a computer to implement individual functions), a system, and a method. For the convenience of description, “store”, “cause . . . to store”, or an expression equivalent thereto may be used. These expressions mean “cause a storage device to store” or “perform control to cause a storage device to store” in a case where an exemplary embodiment is a computer program. The modules may correspond to functions on a one-to-one basis. In terms of packaging, a single module may be constituted by a single program, plural modules may be constituted by a single program, or a single module may be constituted by plural programs. Plural modules may be executed by a single computer, or a single module may be executed by plural computers in a distributed or parallel environment. Alternatively, a single module may include another module. Hereinafter, the term “connection” will be used to refer to a logical connection (for example, transmission and reception of data, instructions, a referential relationship between pieces of data, login, etc.) as well as a physical connection. The term “predetermined” means being determined before target processing, and includes the meaning of being determined, in accordance with a present situation/state or a previous situation/state, not only before processing according to the exemplary embodiment starts but also after that processing starts, as long as it is before the target processing. In a case where there are plural “predetermined values”, the plural predetermined values may be different from one another, or two or more of the values (of course including all the values) may be the same. A description “in the case of A, B is performed” is used to mean that “it is determined whether A is satisfied, and B is performed if it is determined that A is satisfied”, except for a case where the determination of whether A is satisfied is unnecessary. Enumeration of items, such as “A, B, and C”, is merely enumeration of examples unless otherwise noted, and includes selection of only one of them (for example, only A).
A system or device may be constituted by plural computers, hardware units, devices, or the like connected to one another through a communication medium, such as a network (“network” includes communication connections on a one-to-one basis), or may be constituted by a single computer, hardware unit, device, or the like. The terms “device” and “system” are used synonymously. Of course, “system” does not include a man-made social “organization” (i.e., a social system).
Target information is read from a storage device in individual processing operations performed by respective modules or in individual processing operations when plural processing operations are performed within a module. After each processing operation is performed, a result of the processing is written into the storage device. Thus, a description of reading from the storage device before a processing operation and writing into the storage device after a processing operation may be omitted. Examples of the storage device include a hard disk drive, a random access memory (RAM), an external storage medium, a storage device connected through a communication line, a register in a central processing unit (CPU), and the like.
An information processing device 100 according to the exemplary embodiment has a function of simultaneously presenting biological information and a motion of a head in association with each other and includes, as illustrated in the example in the figure, a biological information extracting module 105, a head information extracting module 110, an analyzing module 115, a display control module 120, and a display module 125.
The “biological information” herein means information obtained by measuring a vital activity of a human body. Examples of the biological information include information about an electrocardiogram, heart rate, blood pressure, body temperature, brain wave, myoelectric potential, and retinal (fundus) potential. In the exemplary embodiment, brain wave information is mainly used as an example.
The “head” herein means a portion including a neck and thereabove and includes, for example, any one or more of ears, mouth, throat, eyes, nose, forehead, cheeks, and the like.
The “motion of the head” herein means a motion of the whole or part of the head and includes, for example, nodding, head shaking, chewing, swallowing, winking, breathing, a motion of corners of the mouth, and the like. Specifically, a motion of corners of the mouth may be determined by using myoelectric potential information and a face image. For this purpose, measurement of biological information and capturing of a face image may be simultaneously performed, and potential data and a face image may be displayed and presented in association with each other.
The biological information extracting module 105 is connected to the analyzing module 115. The biological information extracting module 105 extracts biological information from a potential measurement result, which is a result of measuring a potential in the head of a human body.
In addition, the biological information extracting module 105 may extract waves in plural frequency bands from the potential measurement result.
Here, examples of a “wave in a frequency band” include an α (alpha) wave, a β (beta) wave, a γ (gamma) wave, a θ (theta) wave, and a δ (delta) wave. The “waves in plural frequency bands” mean waves in two or more of these frequency bands.
The biological information extracting module 105 may perform fast Fourier transform (FFT) on the potential measurement result to extract brain waves.
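As an illustration only (not part of the definition of the exemplary embodiment), the FFT-based band extraction described above may be sketched in Python as follows; the band boundaries, the sampling rate argument, and the function names are assumptions made for this sketch.

```python
import numpy as np

# Conventional EEG band edges in Hz; these boundaries are illustrative
# and are not fixed by the exemplary embodiment.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def extract_band_waves(raw, fs):
    """Split a potential measurement result into per-band waveforms via FFT.

    raw: 1-D array of potential samples; fs: sampling rate in Hz.
    Returns a dict mapping each band name to a time-domain waveform that
    contains only that band's frequency components.
    """
    spectrum = np.fft.rfft(raw)
    freqs = np.fft.rfftfreq(len(raw), d=1.0 / fs)
    waves = {}
    for name, (lo, hi) in BANDS.items():
        # Keep only the bins inside this band, then return to the time domain.
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        waves[name] = np.fft.irfft(masked, n=len(raw))
    return waves
```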
The head information extracting module 110 is connected to the analyzing module 115. The head information extracting module 110 extracts a motion of the head.
In addition, the head information extracting module 110 may extract a motion of the head from a potential measurement result, which is a result of measuring a potential in the head of a human body.
The analyzing module 115 is connected to the biological information extracting module 105, the head information extracting module 110, and the display control module 120. The analyzing module 115 analyzes the relationship between brain wave information and a motion of the head by using brain wave information, which is the biological information extracted by the biological information extracting module 105, and the motion of the head extracted by the head information extracting module 110.
In addition, the analyzing module 115 may analyze an action of a user.
The display control module 120 is connected to the analyzing module 115 and the display module 125. The display control module 120 performs control to enable the biological information extracted by the biological information extracting module 105 and the motion of the head extracted by the head information extracting module 110 to be simultaneously presented in association with each other.
In addition, the display control module 120 may perform control to display, on the screen of the display module 125 in a comparable manner, plural waves extracted by the biological information extracting module 105 and the motion of the head.
The display module 125 is connected to the display control module 120. The display module 125 performs display on the screen, such as a liquid crystal display or an organic electroluminescence (EL) display, in accordance with the control of the display control module 120.
For example, the motion of the head may include chewing.
In this case, the biological information extracting module 105 may extract, as a chewing portion, a portion that matches a predetermined pattern in the potential measurement result.
The “predetermined pattern” herein may be that a first peak of a graph is higher than (or higher than or equal to) a predetermined first threshold value, and that a second peak after the first peak is lower than (or lower than or equal to) the first threshold value and is higher than (or higher than or equal to) a predetermined second threshold value, the second threshold value being smaller than the first threshold value.
The display control module 120 may perform control to display in a comparable manner the biological information and the chewing portion that is on the graph indicating the potential measurement result.
For example, the motion of the head may include swallowing.
In this case, the analyzing module 115 may analyze that a portion that matches a predetermined pattern in the potential measurement result is a swallowing portion.
The “predetermined pattern” herein may be that a third peak of the graph after chewing is higher than (or higher than or equal to) a predetermined third threshold value.
The display control module 120 may perform control to display in a comparable manner the biological information and the swallowing portion that is on the graph indicating the potential measurement result.
In this case, if the analyzing module 115 determines, by using a measurement result of an acceleration sensor put on the head, that the motion of the head is a predetermined motion, the analyzing module 115 may extract swallowing at a portion that matches a predetermined pattern in the potential measurement result as swallowing of drink.
The “predetermined motion” herein may be a motion of tilting the head backward.
The display control module 120 may perform control to display information indicating swallowing of drink.
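A minimal sketch of the chewing and swallowing patterns described above, assuming the myoelectric waveform is available as a uniformly sampled array; the threshold values t1 > t2 and t3 are calibration-dependent assumptions, and the function name is hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def mark_chewing_and_swallowing(emg, t1, t2, t3):
    """Classify peaks of the myoelectric waveform by the threshold patterns above.

    emg: 1-D array of myoelectric potential samples. t1 > t2 are the chewing
    thresholds and t3 is the swallowing threshold (all assumed values).
    Returns sample indices of the detected chewing and swallowing portions.
    """
    peaks, _ = find_peaks(emg)
    chewing = [peaks[i] for i in range(len(peaks) - 1)
               # a first peak above t1 followed by a peak between t2 and t1
               if emg[peaks[i]] >= t1 and t2 <= emg[peaks[i + 1]] < t1]
    swallowing = [p for p in peaks
                  # a later peak, after some chewing portion, reaching t3
                  if chewing and p > chewing[0] and emg[p] >= t3
                  and p not in chewing]
    return chewing, swallowing
```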
The display control module 120 may perform control to simultaneously display results of plural types of transform processes on the potential measurement result. For example, an obtained potential measurement result may be processed by plural types of transform processes, such as FFT, wavelet transform, Stockwell transform, and empirical mode decomposition (a function arbitrarily set by a user), and the results of the transform processes may be simultaneously displayed.
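For illustration, the simultaneous display of plural transform results may be organized around a user-configurable table of transform functions; only FFT is registered in the sketch below, and the registration mechanism itself is an assumption.

```python
import numpy as np

def fft_magnitude(raw, fs):
    """Return (frequencies, magnitude spectrum) of the raw potential data."""
    return np.fft.rfftfreq(len(raw), 1.0 / fs), np.abs(np.fft.rfft(raw))

# User-configurable table of transform processes. Wavelet, Stockwell, or
# empirical mode decomposition entries would be registered the same way
# (for example, from PyWavelets or a function arbitrarily set by a user).
TRANSFORMS = {"fft": fft_magnitude}

def run_all_transforms(raw, fs):
    """Apply every registered transform so the results can be displayed together."""
    return {name: fn(raw, fs) for name, fn in TRANSFORMS.items()}
```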
In addition, the display control module 120 may perform control to display a graph showing an intensity ratio as a result of frequency analysis, or to display an overall intensity ratio as a result of frequency analysis.
The display control module 120 may perform control to display results of spectrum analysis for a predetermined period using a low-pass filter (LPF) and a high-pass filter (HPF) and to display other data that does not have periodicity.
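As a sketch of the LPF/HPF separation, a Butterworth filter pair from SciPy may be used; the filter order and the cutoff frequency below are illustrative assumptions.

```python
from scipy.signal import butter, filtfilt

def split_low_and_high(raw, fs, cutoff_hz=30.0):
    """Separate the potential measurement into low- and high-frequency parts.

    Uses a 4th-order Butterworth LPF/HPF pair; the 30 Hz cutoff is an
    assumed value, not one fixed by the exemplary embodiment.
    """
    b_lo, a_lo = butter(4, cutoff_hz, btype="low", fs=fs)
    b_hi, a_hi = butter(4, cutoff_hz, btype="high", fs=fs)
    # filtfilt applies each filter forward and backward (zero phase shift).
    return filtfilt(b_lo, a_lo, raw), filtfilt(b_hi, a_hi, raw)
```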
The display control module 120 may perform control to display individual graphs showing results of transform processes such as FFT in a superimposed manner, may allow a user to select any of the graphs, and may perform control to display the graph selected by the user in an emphasized manner. Emphasized display includes, for example, changing the color or shape of the graph, and highlighted display.
The display control module 120 may allow a mode of a vertical-axis scale of the graph to be selected from among a mode of adjusting the scale in accordance with a maximum value and/or a minimum value, a mode of arbitrarily fixing a value display range in accordance with a user operation, and a mode of actively fixing the scale with a maximum value and/or a minimum value.
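The three scale modes may be sketched as follows; the mode names and the argument layout are hypothetical.

```python
def axis_range(values, mode, user_range=None, locked_range=None):
    """Pick the vertical-axis range for one of the three scale modes.

    'auto'  : follow the current data's maximum/minimum on every update.
    'user'  : a display range fixed arbitrarily by a user operation.
    'locked': keep a range computed once from a maximum/minimum.
    """
    if mode == "user":
        return user_range
    if mode == "locked" and locked_range is not None:
        return locked_range
    # 'auto', or 'locked' on the first call (caller stores the result).
    return min(values), max(values)
```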
The device 250 includes a communication module 255 and a biological information detecting module 260.
The communication module 255 is connected to the biological information detecting module 260 and is connected to a communication module 230 of the information processing device 200A through a communication line. The communication module 255 communicates with the information processing device 200A. The communication line may be a wireless link, a wired link, or a combination thereof. For example, near field wireless communication, such as Wi-Fi or Bluetooth (registered trademark), may be used as a wireless link.
The biological information detecting module 260 is connected to the communication module 255. The device 250 is attached to, for example, the head of a user. The device 250 measures a potential in the head of the user wearing the device 250. For example, the electrodes described in Japanese Unexamined Patent Application Publication No. 2019-024758 (the electrodes that are made of a foam material, that have conductivity at least in a portion touching a living body, and that detect a brain wave while being in contact with a living body) may be used.
The biological information detecting module 260 transmits a potential measurement result to the communication module 255, and the communication module 255 transmits the potential measurement result to the information processing device 200A.
The information processing device 200A includes the biological information extracting module 105, the head information extracting module 110, the analyzing module 115, the display control module 120, the display module 125, and the communication module 230.
The communication module 230 is connected to the biological information extracting module 105 and the head information extracting module 110, and is connected to the communication module 255 of the device 250 through the communication line. The communication module 230 communicates with the device 250 to receive a potential measurement result. The communication module 230 transmits the potential measurement result received from the device 250 to the biological information extracting module 105 and the head information extracting module 110.
The biological information extracting module 105 is connected to the analyzing module 115 and the communication module 230. The biological information extracting module 105 extracts biological information from the potential measurement result, which is a result of measuring a potential in the head of a human body, received from the device 250.
The head information extracting module 110 is connected to the analyzing module 115 and the communication module 230. The head information extracting module 110 extracts a motion of the head from the potential measurement result, which is a result of measuring a potential in the head of a human body, received from the device 250.
The information processing device 200B includes the biological information extracting module 105, the head information extracting module 110, the analyzing module 115, the display control module 120, the display module 125, and the communication module 230.
The communication module 230 is connected to the biological information extracting module 105, and is connected to the communication module 255 of the device 250 through the communication line. The communication module 230 communicates with the device 250 to receive a potential measurement result. The communication module 230 transmits the potential measurement result received from the device 250 to the biological information extracting module 105.
The biological information extracting module 105 is connected to the analyzing module 115 and the communication module 230. The biological information extracting module 105 extracts biological information from the potential measurement result, which is a result of measuring a potential in the head of a human body, received from the device 250.
The head information extracting module 110 includes an image capturing module 235 and is connected to the analyzing module 115. The image capturing module 235 captures an image of the head of a user carrying the information processing device 200B (this user is the same as the user wearing the device 250). The image to be captured may be a still image or a moving image. In the case of still images, two or more still images may be captured at different times so that a motion can be extracted from them.
The head information extracting module 110 extracts a motion of the head from the image of the head of the user captured by the image capturing module 235. For example, the head information extracting module 110 may extract a motion of the head from the image by using a learning model generated through machine learning.
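For illustration, a simple frame-differencing stand-in for such a learning model is sketched below; the threshold is an assumed value, and an actual embodiment may substitute a trained model at the marked step.

```python
import numpy as np

def detect_head_motion(frames, threshold=12.0):
    """Flag frames in which the head moved, using mean absolute frame difference.

    frames: grayscale images as 2-D numpy arrays captured at successive
    times. A learning model generated through machine learning could
    replace the differencing step below; the threshold is an assumption.
    """
    moved = [False]  # no motion can be judged for the first frame
    for prev, cur in zip(frames, frames[1:]):
        diff = np.mean(np.abs(cur.astype(float) - prev.astype(float)))
        moved.append(diff > threshold)
    return moved
```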
The display control module 120 performs control to display the image captured by the image capturing module 235. At this time, a graph indicating the biological information and the motion of the head may be displayed together with the image captured by the image capturing module 235. In addition, an image captured when the head moves may be displayed together.
The information processing device 200C includes the biological information extracting module 105, the head information extracting module 110, the analyzing module 115, the display control module 120, the display module 125, and the communication module 230.
The communication module 230 is connected to the biological information extracting module 105 and the head information extracting module 110, and is connected to the communication module 255 of the device 250 through the communication line. The communication module 230 communicates with the device 250 to receive a potential measurement result. The communication module 230 transmits the potential measurement result received from the device 250 to the biological information extracting module 105 and the head information extracting module 110.
The biological information extracting module 105 is connected to the analyzing module 115 and the communication module 230. The biological information extracting module 105 extracts biological information from the potential measurement result, which is a result of measuring a potential in the head of a human body, received from the device 250.
The head information extracting module 110 includes the image capturing module 235 and is connected to the analyzing module 115 and the communication module 230. The head information extracting module 110 extracts a motion of the head from the potential measurement result received from the device 250 and from the image of the head of the user captured by the image capturing module 235. That is, the head information extracting module 110 has both the function of the head information extracting module 110 of the information processing device 200A (extraction from the potential measurement result) and the function of the head information extracting module 110 of the information processing device 200B (extraction from a captured image).
A smartphone 300 is a specific example of the information processing device 200 (in particular, the information processing device 200B or the information processing device 200C), and a wearable device 350 is a specific example of the device 250. The smartphone 300 includes a camera 335 and captures an image of the head of a user 390. The camera 335 is a specific example of the image capturing module 235.
The user 390 carries the smartphone 300 and wears the wearable device 350 on the head. The smartphone 300 and the wearable device 350 communicate with each other by using near field wireless communication.
With use of the smartphone 300 and the wearable device 350, biological information and a motion of the head of the user 390 are displayed in association with each other on the screen of the smartphone 300. For example, chewing and swallowing of the user 390 are sensed.
In the wearable device 350, sensors are disposed in the external auditory canals, and the wearable device 350 detects data of a composite waveform of a brain wave and a myoelectric potential (so-called raw data; hereinafter referred to as raw data). The wearable device 350 transmits the raw data to the smartphone 300. The smartphone 300 regards the raw data as myoelectric potential data, and also performs FFT on the raw data to convert it into brain wave data. In other words, the smartphone 300 obtains two pieces of information, that is, brain wave data as biological information and myoelectric potential data indicating a motion of the head, from one piece of waveform data (raw data). For example, the smartphone 300 detects chewing and swallowing from the myoelectric potential data, marks them on a graph of the myoelectric potential data, and displays the graph of the myoelectric potential data and a graph of the brain wave data in the same time series. Accordingly, the user 390 is able to know the state of the brain wave data when chewing and swallowing are performed.
Specifically, the smartphone 300 extracts peaks on the graph of the myoelectric potential data, and compares the peaks with threshold values to detect chewing and swallowing. Subsequently, the smartphone 300 puts marks of chewing and swallowing on the graph of the myoelectric potential data. In addition, the smartphone 300 performs FFT analysis on the raw data, removes a noise frequency component, analyzes the brain wave to generate a graph of the brain wave, and displays the graph of the brain wave and the graph of the myoelectric potential data in the same time series.
With use of the camera 335 of the smartphone 300, an image of the head, mainly the face, of the user 390 may be captured, and a motion of the face and a brain wave may be displayed in association with each other. A motion of the face corresponds to, for example, a facial exercise for beauty, training of facial muscles of expression, or the like.
In a case where the electrodes described in the above-mentioned Japanese Unexamined Patent Application Publication No. 2019-024758 are used for the sensors of the wearable device 350, the sensors serve as electrometers that measure a very low potential (for example, a potential of several μV) and are required to obtain a signal with a high S/N ratio even for a slight change in potential.
However, when such a slight change in potential can be obtained, a myoelectric potential generated from a muscle around the electrodes may also be included in the signal as noise.
In the exemplary embodiment, a motion of the head is detected by using myoelectric potential information that is normally dealt with as noise. A motion of the head includes, for example, chewing, swallowing, and the like.
The wearable device 350, which performs measurement by using brain wave measuring electrodes disposed in the external auditory canals and a ground (GND) disposed near an ear, is capable of obtaining a myoelectric potential signal from a motion of chewing, swallowing, or the like because there are large muscles of the jaws, cheeks, and throat near the wearable device 350.
In the case of using the electrodes described in Japanese Unexamined Patent Application Publication No. 2019-024758, the following process is performed in the exemplary embodiment.
(1) The brain wave electrodes of the wearable device 350 are disposed in the external auditory canals of the user 390.
(2) The raw data is dealt with as myoelectric potential data, and an FFT process is performed on the raw data to obtain brain wave data; thus, two pieces of biological information are obtained from one piece of waveform data.
In step S402, the wearable device 350 detects biological information of the user 390. The biological information corresponds to the raw data described above.
In step S404, the wearable device 350 generates data to be transmitted to the smartphone 300.
In step S406, the wearable device 350 transmits the data to the smartphone 300. As described above, for example, the smartphone 300 and the wearable device 350 communicate with each other by using near field wireless communication.
In step S502, the smartphone 300 receives data from the wearable device 350.
In step S504, the smartphone 300 extracts information about a brain wave from the received data.
In step S506, the smartphone 300 extracts information about a motion of the head from the received data.
In step S508, the smartphone 300 simultaneously displays both a graph of the information about the brain wave and a graph of the information about the motion of the head. Obviously, the two graphs have the same time series. Specifically, the X axes (time axes) of the two graphs may be made to match each other, or the two graphs may be displayed in one region in a superimposed manner.
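A minimal sketch of step S508 using Matplotlib's shared-axis subplots (an illustrative choice of library, not one mandated by the exemplary embodiment):

```python
import matplotlib.pyplot as plt

def show_brain_wave_and_head_motion(t, brain_wave, emg, mark_indices):
    """Draw the two graphs with matching time axes, marking head motions.

    t: common time stamps for both waveforms; mark_indices: sample indices
    of detected motions such as chewing or swallowing (assumed inputs).
    """
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)  # matched X (time) axes
    ax1.plot(t, brain_wave)
    ax1.set_ylabel("brain wave")
    ax2.plot(t, emg)
    ax2.set_ylabel("myoelectric potential")
    for i in mark_indices:
        ax2.axvline(t[i], linestyle="--")  # mark the motion of the head
    ax2.set_xlabel("time [s]")
    plt.show()
```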
In step S602, the smartphone 300 receives data from the wearable device 350.
In step S604, the smartphone 300 obtains an image captured by the camera 335 of the smartphone 300.
In step S606, the smartphone 300 extracts information about a brain wave from the received data.
In step S608, the smartphone 300 analyzes the image and extracts information about a motion of the head.
In step S610, the smartphone 300 simultaneously displays both a graph of the information about the brain wave and a graph of the information about the motion of the head. For example, a mark may be put, on the graph of the brain wave, at the time point where the motion of the head occurred.
In step S702, the smartphone 300 receives data from the wearable device 350.
In step S704, the smartphone 300 obtains an image captured by the camera 335.
In step S706, the smartphone 300 extracts information about a brain wave from the received data.
In step S708, the smartphone 300 extracts information about a motion of the head from the received data.
In step S710, the smartphone 300 analyzes the image and extracts information about a motion of the head.
In step S712, the smartphone 300 combines the information about the motion of the head extracted in step S708 and the information about the motion of the head extracted in step S710.
In step S714, the smartphone 300 simultaneously displays both a graph of the information about the brain wave and a graph of the information about the motion of the head.
For example, a screen 800 of the smartphone 300 includes a raw data field 802, an FFT field 804, a wavelet transform field 806, a Stockwell transform field 808, and an empirical mode decomposition field 810.
A signal obtained from the wearable device 350 is subjected to a plurality of types of processes, such as FFT, wavelet transform, Stockwell transform, and empirical mode decomposition (a function arbitrarily set by a user), and results of the processes are displayed on the screen 800. Here, individual graphs are displayed in the raw data field 802, the FFT field 804, the wavelet transform field 806, the Stockwell transform field 808, and the empirical mode decomposition field 810, which have the same horizontal axis serving as a time axis.
An intensity ratio obtained as a result of frequency analysis may be displayed in the form of a graph, and an overall intensity ratio obtained as a result of frequency analysis may be displayed.
A screen 900 includes a raw data field 902, a frequency analysis field 904, a periodical frequency waveform generation field 906, and a processed waveform data field 908.
Periodical frequency components appear in the graph in the frequency analysis field 904.
In the processed waveform data field 908, for example, a graph generated from the graph in the raw data field 902 and the graph in the periodical frequency waveform generation field 906 is displayed. That is, by removing the periodical frequency components from the raw data, a waveform without periodicity, such as that of a myoelectric potential, can be seen easily.
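One way to sketch the removal of periodical frequency components is to zero the dominant bins of the FFT spectrum; the prominence ratio below is an assumed value.

```python
import numpy as np

def remove_periodic_components(raw, prominence_ratio=8.0):
    """Suppress strong periodic components found by frequency analysis.

    Frequency bins whose magnitude exceeds prominence_ratio times the
    median magnitude are treated as periodical components and zeroed,
    leaving a waveform without periodicity (the ratio is an assumption).
    """
    spectrum = np.fft.rfft(raw)
    magnitude = np.abs(spectrum)
    periodic = magnitude > prominence_ratio * np.median(magnitude)
    periodic[0] = False  # keep the DC offset untouched
    spectrum[periodic] = 0
    return np.fft.irfft(spectrum, n=len(raw))
```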
In this example, the portions of chewing and swallowing are marked on a graph.
A screen 1000 includes a raw data field 1002, a δ wave field 1004, a θ wave field 1006, an α wave field 1008, and a myoelectric potential data field 1010. The graphs in the δ wave field 1004, the θ wave field 1006, and the α wave field 1008 are obtained by performing a process such as FFT on raw data to divide the raw data into a δ wave, a θ wave, and an α wave. In the myoelectric potential data field 1010, a graph of myoelectric potential data extracted by removing a brain wave component from the raw data is displayed. On this graph, the portions of chewing and swallowing are marked.
Regarding chewing, if a first peak of the graph is higher than (or higher than or equal to) a predetermined first threshold value, and if a second peak after the first peak is lower than (or lower than or equal to) the first threshold value and is higher than (or higher than or equal to) a predetermined second threshold value, the first peak is regarded as a chewing portion. The second threshold value is smaller than the first threshold value.
Regarding swallowing, if a third peak after a chewing portion is higher than (or higher than or equal to) a predetermined third threshold value, the third peak is regarded as a swallowing portion.
With such display being performed, when chewing and swallowing are detected from the myoelectric potential data, their relationship with the individual brain waves can be checked on one screen, which makes it easy to form and verify hypotheses.
The raw data obtained from the wearable device 350 is divided into brain wave data and myoelectric potential data. For example, an FFT process may be performed on the raw data to generate the brain wave data, while the raw data itself may be dealt with as the myoelectric potential data. That is, two pieces of information are obtained from one piece of raw data.
The graphs illustrated in the following example show a measurement result of the acceleration sensor and the raw data in the same time series.
The data of an acceleration data graph 1210 is obtained by the acceleration sensor in the wearable device 350 put on the head of the user 390. As the acceleration sensor, for example, a six-axis sensor or the like may be used.
The data of a raw data graph 1220 is obtained by the myoelectric potential sensor in the wearable device 350.
As a result of associating a motion of the head with information about a myoelectric potential of the jaws by using a detection result of the acceleration sensor in the wearable device 350, the motion can be classified more specifically. For example, when information indicating swallowing with the head tilted backward is obtained, the motion is determined to be a motion of drinking.
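This classification may be sketched as follows, assuming swallowing time stamps from the myoelectric analysis and backward-tilt intervals from the acceleration sensor; the data representation is hypothetical.

```python
def classify_swallowing(swallow_times, tilt_back_intervals):
    """Label each swallowing portion as drinking when the head was tilted back.

    swallow_times: time stamps of swallowing detected from the myoelectric
    potential data; tilt_back_intervals: (start, end) periods in which the
    acceleration sensor indicated a backward head tilt (assumed format).
    """
    labels = []
    for ts in swallow_times:
        drank = any(start <= ts <= end for start, end in tilt_back_intervals)
        labels.append("drink" if drank else "swallow")
    return labels
```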
A raw data graph 1310, a brain wave (α wave) graph 1320, and a brain wave (β wave) graph 1330 are displayed on a screen 1300 in a superimposed manner. Obviously, the same horizontal axis serving as a time axis is used for these graphs.
In this example, the user is allowed to select a graph and highlight the graph. For example, when the brain wave (α wave) graph 1320 is selected by the user, the brain wave (α wave) graph 1320 may be displayed in red.
With the portions of chewing such as chewing portions 1350 and 1352 being marked, the state of the brain wave while the chewing is being performed can be observed.
The smartphone 300 includes a screen 325 and the camera 335. The camera 335 is a so-called built-in camera and is used to capture an image of the face of the user 390 holding the smartphone 300.
The following example shows a screen displayed during training of facial muscles.
The screen 325 includes a captured image display area 1410, a remaining number of times display area 1420, a remaining time display area 1430, a comment display area 1440, and a graph display area 1450.
At the time of initial setting of the wearable device 350, initialization is performed in accordance with a potential of an individual user. At the time of initialization, either or both of potential data and image data are used.
In a case where it is not possible to measure a potential, the number of times of training is counted from image information alone. The training may be, for example, training for the orbicular muscles of the eyes. The “case where it is not possible to measure a potential” is an example of a case where it is not possible to extract biological information.
The screen 325 of the smartphone 300 is displayed by a training application, and displays the face of the user 390, potential data, and activity information in association with each other. The activity information includes, for example, advice to open the eyes wider, the number of times of training, a training time, and the like. For example, “remaining number of times: 5” is displayed in the remaining number of times display area 1420, “remaining time: 00:32” is displayed in the remaining time display area 1430, and “lift up your cheeks more” is displayed in the comment display area 1440.
In addition, myoelectric potential data received from the wearable device 350 is analyzed to detect whether lift-up or the like has been performed. Furthermore, a face image may be analyzed to detect whether lift-up or the like has been performed. That is, even in an abnormal state where it is not possible to obtain information from a biological potential such as a myoelectric potential, an effect of the training may be estimated and the user 390 may be notified of the effect without the application being stopped. Similarly, in a case where it is not possible to obtain a face image, an effect may be estimated from only information about a biological potential and the user 390 may be notified of the effect. In a case where both the biological potential and the face image are normally measured and analyzed, a training effect can be estimated and notified more accurately than in a case where only one of them is measured.
In this example, face images before and after training are displayed in a comparable manner: a screen 325a shows the face image before training, and a screen 325b shows the face image after training.
For comparison of the images, a manner of displaying the images may be selected from among, for example, displaying the images side by side within one screen, displaying the images one by one, making two images transparent and superimposing the images one on top of the other, displaying right or left halves of the images before and after training, and enlarging part of the images.
In addition, a face image captured when lift-up or the like is performed may be displayed. Furthermore, the face images before and after the lift-up may also be displayed.
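Two of the listed display manners (side by side, and transparent superimposition) may be sketched with Pillow as follows; the function name and mode keywords are assumptions.

```python
from PIL import Image

def compare_faces(before, after, mode="side_by_side"):
    """Build a comparison image of the face images before and after training.

    before/after: PIL Images of equal size. The remaining display manners
    listed above would follow the same pattern.
    """
    if mode == "side_by_side":
        canvas = Image.new("RGB", (before.width + after.width, before.height))
        canvas.paste(before, (0, 0))
        canvas.paste(after, (before.width, 0))
        return canvas
    if mode == "blend":  # make both images transparent and superimpose them
        return Image.blend(before.convert("RGB"), after.convert("RGB"), 0.5)
    raise ValueError(mode)
```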
The screen 325 includes a captured image display area 1610, a remaining number of times display area 1620, a remaining time display area 1630, a comment display area 1640, and a graph display area 1650.
The example illustrated here shows a screen displayed during training for the eyes.
For example, “remaining number of times: 5” is displayed in the remaining number of times display area 1620, “remaining time: 00:32” is displayed in the remaining time display area 1630, and “open your eyes wider” is displayed in the comment display area 1640.
In this example, face images before and after training are displayed in a comparable manner. The screen 325a shows the face image before training, and the screen 325b shows the face image after training.
In this example, it is seen that the degree of opening of the eyes is increased. In particular, the target portion of training is enlarged. In this example, the image of an eye in the training target display area 1720a and the image of an eye in the training target display area 1720b can be compared with each other.
For comparison of the images, a manner of displaying the images may be selected from among, for example, displaying the images side by side within one screen, displaying the images one by one, making two images transparent and superimposing the images one on top of the other, displaying right or left halves of the images before and after training, and enlarging part of the images.
In addition, a face image captured when opening of the eyes or the like is performed may be displayed. Furthermore, the face images before and after the opening of the eyes may also be displayed.
A hardware configuration of a computer that executes a program as the exemplary embodiment is that of a typical computer as illustrated in the figure.
In the above-described exemplary embodiment, a system having the above-described hardware configuration reads the computer program as software, and the software and hardware resources cooperate with each other, whereby the above-described exemplary embodiment is carried out.
The hardware configuration illustrated in the figure is merely a configuration example; the exemplary embodiment is not limited to this configuration and may have any configuration capable of executing the above-described modules.
The above-described program may be provided by storing it in a recording medium or may be provided through communication. In this case, for example, the above-described program may be regarded as a “computer-readable recording medium storing the program”.
The “computer-readable recording medium storing the program” is a computer-readable recording medium storing the program and used to install, execute, or distribute the program.
Examples of the recording medium include a digital versatile disc (DVD), such as “DVD-R, DVD-RW, DVD-RAM, and the like” defined by the DVD Forum and “DVD+R, DVD+RW, and the like” defined by the DVD+RW Alliance; a compact disc (CD), such as a CD read only memory (CD-ROM), a CD recordable (CD-R), and a CD rewritable (CD-RW); a Blu-ray Disc (registered trademark); a magneto-optical (MO) disc; a flexible disk (FD); magnetic tape; a hard disk; a read only memory (ROM); an electrically erasable and programmable ROM (EEPROM, registered trademark); a flash memory; a random access memory (RAM); and a secure digital (SD) memory card.
All or part of the above-described program may be stored or distributed by recording it on the recording medium. Alternatively, all or part of the program may be transmitted through communication, for example, using a transmission medium such as a wired or wireless communication network used in a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), the Internet, an intranet, or an extranet, or a combination of the wired and wireless communication networks. Alternatively, all or part of the program may be carried using carrier waves.
Furthermore, the above-described program may be all or part of another program, or may be recorded on a recording medium together with another program. Alternatively, the program may be recorded on plural recording media in a split manner. The program may be recorded in any manner, for example, the program may be compressed or encrypted, as long as the program can be recovered.
The foregoing description of the exemplary embodiment of the present disclosure has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiment was chosen and described in order to best explain the principles of the disclosure and its practical applications, thereby enabling others skilled in the art to understand the disclosure for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.