The present disclosure relates to an information providing device, an information providing method, and a computer-readable storage medium.
In recent times, information devices have evolved significantly owing to high-speed CPUs and high-definition screen display technology, accompanied by advances in compact, lightweight batteries and by the spread of wireless network environments and the widening of their bandwidth. Along with the popularization of smartphones, which represent a typical example of such information devices, what are called wearable devices, which are worn by users, have also become popular. For example, Japanese Patent Application Laid-open No. 2011-96171 discloses a device that presents a plurality of sets of sensory information to the user and gives the user the sense that virtual objects actually exist. Japanese Patent Application Laid-open No. 2011-242219 discloses an information providing device in which the providing form and the providing timing are decided in such a way that the sum of an evaluation function, which indicates the appropriateness level of the information providing timing, is maximized.
Regarding an information providing device that provides information to a user, there has been a demand for providing the information to the user in an appropriate manner.
An information providing device according to an embodiment provides information to a user, and includes: an output unit including a display unit configured to output a visual stimulus, a sound output unit configured to output an auditory stimulus, and a sensory stimulus output unit configured to output a sensory stimulus other than the visual stimulus and the auditory stimulus; an environment sensor configured to detect, as environment information surrounding the information providing device, position information of the information providing device; a biological sensor configured to detect, as biological information of the user, cerebral activation degree of the user; an output selecting unit configured to select, based on the environment information, one of the display unit, the sound output unit, and the sensory stimulus output unit; an output specification deciding unit configured to decide on a reference output specification for adjusting outputs of the display unit, the sound output unit, and the sensory stimulus output unit, based on the position information of the information providing device; and a user state identifying unit configured to calculate an output specification correction degree for correcting the reference output specification, based on the cerebral activation degree of the user. The output specification deciding unit is configured to correct the reference output specification by using the output specification correction degree to calculate the outputs of the display unit, the sound output unit, and the sensory stimulus output unit.
An information providing method according to an embodiment is for providing information to a user. The information providing method includes: detecting, as environment information surrounding an information providing device, position information of the information providing device; detecting, as biological information of the user, cerebral activation degree of the user; selecting, based on the environment information, one of a display unit configured to output a visual stimulus, a sound output unit configured to output an auditory stimulus, and a sensory stimulus output unit configured to output a sensory stimulus other than the visual stimulus and the auditory stimulus; deciding on a reference output specification for adjusting outputs of the display unit, the sound output unit, and the sensory stimulus output unit, based on the position information of the information providing device; and calculating an output specification correction degree for correcting the reference output specification, based on the cerebral activation degree of the user. The deciding of the reference output specification includes correcting the reference output specification by using the output specification correction degree to calculate the outputs of the display unit, the sound output unit, and the sensory stimulus output unit.
A non-transitory computer-readable storage medium according to an embodiment stores a computer program for providing information to a user. The computer program causes a computer to execute: detecting, as environment information surrounding an information providing device, position information of the information providing device; detecting, as biological information of the user, cerebral activation degree of the user; selecting, based on the environment information, one of a display unit configured to output a visual stimulus, a sound output unit configured to output an auditory stimulus, and a sensory stimulus output unit configured to output a sensory stimulus other than the visual stimulus and the auditory stimulus; deciding on a reference output specification for adjusting outputs of the display unit, the sound output unit, and the sensory stimulus output unit, based on the position information of the information providing device; and calculating an output specification correction degree for correcting the reference output specification, based on the cerebral activation degree of the user. The deciding of the reference output specification includes correcting the reference output specification by using the output specification correction degree to calculate the outputs of the display unit, the sound output unit, and the sensory stimulus output unit.
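For illustration only, the data flow summarized above (selecting an output based on the environment information, deciding on a reference output specification from the position information, and correcting it with a correction degree derived from the cerebral activation degree) can be sketched in Python as follows. This is a minimal sketch: all function names, data types, numeric ranges, and the linear correction rule are hypothetical and are not part of the disclosed embodiments.

```python
from dataclasses import dataclass

@dataclass
class ReferenceSpec:
    """Simplified reference output specification (assumed fields and 0.0-1.0 ranges)."""
    display_size: float      # relative size of the content image
    transparency: float      # degree of transparency of the content image
    sound_volume: float      # relative volume of the auditory stimulus
    vibration_level: float   # relative strength of the tactile stimulus

def clamp(v: float) -> float:
    """Keep a value inside the assumed 0.0-1.0 range."""
    return max(0.0, min(1.0, v))

def select_output(environment_info: dict) -> str:
    """Select one output device based on the environment information (hypothetical rule)."""
    if environment_info.get("user_moving", False):
        return "sound_output_unit"   # avoid occupying the visual field while the user moves
    return "display_unit"

def decide_reference_spec(position: tuple) -> ReferenceSpec:
    """Decide on a reference output specification from position information (hypothetical values)."""
    # A real device would consult map data keyed by the global coordinates in `position`;
    # here a single fixed specification stands in for that lookup.
    return ReferenceSpec(display_size=0.5, transparency=0.5, sound_volume=0.5, vibration_level=0.5)

def correction_degree(cerebral_activation: float) -> float:
    """Map a cerebral activation degree (0.0-1.0) to a correction degree (hypothetical mapping)."""
    # Lower activation -> stronger outputs so that the provided information is noticed.
    return 1.0 + (0.5 - cerebral_activation)

def corrected_spec(spec: ReferenceSpec, degree: float) -> ReferenceSpec:
    """Correct the reference output specification by using the output specification correction degree."""
    return ReferenceSpec(
        display_size=clamp(spec.display_size * degree),
        transparency=clamp(spec.transparency / degree),
        sound_volume=clamp(spec.sound_volume * degree),
        vibration_level=clamp(spec.vibration_level * degree),
    )

# Example usage with dummy sensor values.
device = select_output({"user_moving": False})
spec = corrected_spec(decide_reference_spec((35.68, 139.76)), correction_degree(0.3))
print(device, spec)
```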
An exemplary embodiment is described below in detail with reference to the accompanying drawings. However, the present disclosure is not limited by the embodiment described below.
Information Providing Device
Environmental Image
Content Image
As illustrated in
In the example illustrated in
Configuration of Information Providing Device
Environment Sensor
The environment sensor 20 detects environment information of the surrounding of the information providing device 10. The environment information of the surrounding of the information providing device 10 can be said to be the information indicating the type of environment in which the information providing device 10 is present. Moreover, since the information providing device 10 is attached to the user U, the environment sensor 20 can also be said to detect the environment information of the surrounding of the user U.
The environment sensor 20 includes a camera 20A, a microphone 20B, a GNSS receiver 20C, an acceleration sensor 20D, a gyro sensor 20E, a light sensor 20F, a temperature sensor 20G, and a humidity sensor 20H. However, the environment sensor 20 can be an arbitrary sensor that detects the environment information; for example, it can include any one of the camera 20A, the microphone 20B, the GNSS receiver 20C, the acceleration sensor 20D, the gyro sensor 20E, the light sensor 20F, the temperature sensor 20G, and the humidity sensor 20H, or can include some other sensor.
The camera 20A is an imaging device that detects, as the environment information, the visible light of the surrounding of the information providing device 10 (the user U), and performs imaging of the surrounding of the information providing device 10 (the user U). The camera 20A can be a video camera that performs imaging at a predetermined framerate. In the information providing device 10, the position of installation and the orientation of the camera 20A can be set in an arbitrary manner. For example, the camera 20A can be installed in the device 10A illustrated in
The microphone 20B detects, as the environment information, the sounds (sound wave information) generated in the surrounding of the information providing device 10 (the user U). In the information providing device 10, the position of installation, the orientation, and the number of microphones 20B can be set in an arbitrary manner. When a plurality of microphones 20B is used, the information about the orientation directions of the microphones 20B is also obtained.
The GNSS receiver 20C detects, as the environment information, the position information of the information providing device 10 (the user U). Herein, the position information represents the global coordinates. In the present embodiment, the GNSS receiver 20C is what is called a GNSS module (GNSS: Global Navigation Satellite System) that receives radio waves from satellites and outputs the position information of the information providing device 10 (the user U).
The acceleration sensor 20D detects, as the environment information, the acceleration of the information providing device 10 (the user U). For example, the acceleration sensor 20D detects the gravitational force, vibrations, and impact shocks.
The gyro sensor 20E detects, as the environment information, the rotation and the orientation of the information providing device 10 (the user U) using the principle of the Coriolis force, or the Euler force, or the centrifugal force.
The light sensor 20F detects, as the environment information, the intensity of the light in the surrounding of the information providing device 10 (the user U). The light sensor 20F is capable of detecting the intensity of the visible light, the infrared light, or the ultraviolet light.
The temperature sensor 20G detects, as the environment information, the surrounding temperature of the information providing device 10 (the user U).
The humidity sensor 20H detects, as the environment information, the surrounding humidity of the information providing device 10 (the user U).
Biological Sensor
The biological sensor 22 detects the biological information of the user U. As long as the biological information of the user U can be detected, the biological sensor 22 can be installed at an arbitrary position. It is desirable that the biological information is not unalterable information such as fingerprint information, but is information that, for example, undergoes changes in its value according to the condition of the user U. More specifically, it is desirable that the biological information represents the information related to the autonomic nerves of the user U, that is, represents the information that undergoes changes regardless of the will of the user U. More particularly, the biological sensor 22 includes a pulse wave sensor 22A and a brain wave sensor 22B, and detects the pulse waves and the brain waves of the user U as the biological information.
The pulse wave sensor 22A detects the pulse waves of the user U. For example, the pulse wave sensor 22A can be a transmission-type photoelectric sensor that includes a light emitting unit and a light receiving unit. In that case, the pulse wave sensor 22A is configured in such a way that, for example, the light emitting unit and the light receiving unit face each other across a fingertip of the user U; the light that has passed through the fingertip is received by the light receiving unit, and the pulse waveform is measured using the fact that the blood flow increases in proportion to the pressure of the pulse waves. However, the pulse wave sensor 22A is not limited to the configuration explained above, and can be configured in an arbitrary manner as long as the pulse waves can be detected.
The brain wave sensor 22B detects the brain waves of the user U. As long as the brain waves can be detected, the brain wave sensor 22B can have an arbitrary configuration. For example, in principle, it is sufficient to install only a few brain wave sensors 22B, as long as an understanding can be gained of waves such as the α waves and the β waves and of the activity of the basic pattern (the background brain waves) appearing in the entire brain, and as long as the enhancement or the decline in the activity of the entire brain can be detected. In the present embodiment, unlike electroencephalography performed for medical purposes, it is sufficient if the approximate changes in the condition of the user U can be measured. Hence, for example, the configuration can be such that only two electrodes are attached to the forehead and the ears, and extremely simple surface brain waves are detected.
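For illustration only, the following sketch shows one conceivable way of estimating an approximate activity level from such simplified surface brain waves: the ratio of β-band power to α-band power is computed from a short signal segment. The sampling rate, the band limits, and the interpretation of the ratio are assumptions for the sketch and are not specified by the embodiment.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Integrate the power spectrum of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return float(spectrum[mask].sum())

def approximate_activity(signal: np.ndarray, fs: float = 256.0) -> float:
    """Rough activity indicator: beta-band (13-30 Hz) power relative to alpha-band (8-13 Hz) power."""
    alpha = band_power(signal, fs, 8.0, 13.0)
    beta = band_power(signal, fs, 13.0, 30.0)
    return beta / (alpha + 1e-9)  # larger values suggest higher activity

# Example usage with a synthetic one-second segment (alpha-dominant).
fs = 256.0
t = np.arange(0, 1.0, 1.0 / fs)
segment = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
print(approximate_activity(segment, fs))
```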
Meanwhile, the biological sensor 22 is not limited to detecting only the pulse waves and the brain waves as the biological information. For example, the biological sensor 22 can detect at least either the pulse waves or the brain waves, or can detect factors other than the pulse waves and the brain waves as the biological information. For example, the biological sensor 22 can detect the amount of perspiration or the pupil size. Meanwhile, the biological sensor 22 is not a mandatory part of the configuration, and need not be installed in the information providing device 10.
Input Unit
The input unit 24 receives user operations and, for example, can be a touch-sensitive panel.
Output Unit
The output unit 26 outputs stimuli for at least one of the five senses of the user U. More particularly, the output unit 26 includes the display unit 26A, the sound output unit 26B, and a sensory stimulus output unit 26C. The display unit 26A displays images and thereby outputs visual stimuli to the user U. In other words, the display unit 26A can be said to be a visual stimulus output unit. In the present embodiment, the display unit 26A is what is called a head-mounted display (HMD). As explained above, the display unit 26A displays the content image PS. The sound output unit 26B is a device (speaker) that outputs sounds for the purpose of outputting auditory stimuli to the user U. In other words, the sound output unit 26B can be said to be an auditory stimulus output unit. The sensory stimulus output unit 26C outputs, to the user U, sensory stimuli other than the visual stimuli and the auditory stimuli. In the present embodiment, the sensory stimulus output unit 26C outputs tactile stimuli. For example, the sensory stimulus output unit 26C is a vibration motor, such as a vibrator, that operates according to a physical factor such as vibration and outputs tactile stimuli. However, the type of tactile stimulation is not limited to vibrations, and some other type can also be used.
In this way, the output unit 26 stimulates, from among the five senses of a person, the visual sense, the auditory sense, and one of the other senses other than the visual sense and the auditory sense (in the present embodiment, the tactile sense). However, the output unit 26 is not limited to outputting visual stimuli, auditory stimuli, and one of the other stimuli other than visual stimuli and auditory stimuli. For example, the output unit 26 can output at least one type of stimulus from among visual stimuli, auditory stimuli, and the other stimuli other than visual stimuli and auditory stimuli; can output at least visual stimuli (by displaying images); can output either auditory stimuli or tactile stimuli in addition to visual stimuli; or can output visual stimuli, auditory stimuli, and tactile stimuli along with at least one of the remaining types of sensory stimuli (that is, at least either gustatory stimuli or olfactory stimuli).
Communication Unit
The communication unit 28 is a module for communicating with external devices and, for example, can include an antenna. In the present embodiment, wireless communication is implemented as the communication method in the communication unit 28; however, any arbitrary communication method can be implemented. The communication unit 28 includes a content image receiving unit 28A that functions as a receiver for receiving content image data, which represents the image data of content images. Sometimes the content displayed in a content image includes a sound, or includes a sensory stimulus other than a visual stimulus and an auditory stimulus. In that case, as the content image data, the content image receiving unit 28A can receive the image data of the content image as well as sound data and sensory stimulus data. In this way, the data of a content image is received by the content image receiving unit 28A. Alternatively, for example, the data of content images can be stored in advance in the storage unit 30, and the content image receiving unit 28A can receive the data of a content image from the storage unit 30.
Storage Unit
The storage unit 30 is a memory used to store a variety of information such as the arithmetic operation details of the control unit 32 and computer programs. For example, the storage unit 30 includes at least either a main memory device, such as a random access memory (RAM) or a read only memory (ROM), or an external memory device such as a hard disk drive (HDD).
The storage unit 30 is used to store a learning model 30A, map data 30B, and a specification setting database 30C. The learning model 30A is an AI model used for identifying, based on the environment information, the environment around the user U. The map data 30B contains the position information of actual building structures and natural objects, and can be said to be the data in which the global coordinates are associated with actual building structures and natural objects. The specification setting database 30C is used to store the information meant for deciding on the display specification of the content image PS, as explained later. Regarding the operations performed using the learning model 30A, the map data 30B, and the specification setting database 30C, the explanation is given later. Meanwhile, the learning model 30A, the map data 30B, and the specification setting database 30C, as well as the computer programs to be executed by the control unit 32 and stored in the storage unit 30, can alternatively be stored in a recording medium that is readable by the information providing device 10. Moreover, neither the computer programs to be executed by the control unit 32 nor the learning model 30A, the map data 30B, and the specification setting database 30C are limited to being stored in advance in the storage unit 30; at the time of using any of that data, the information providing device 10 can alternatively obtain the data from an external device by performing communication.
Control Unit
The control unit 32 is an arithmetic device, that is, a central processing unit (CPU). The control unit 32 includes an environment information obtaining unit 40, a biological information obtaining unit 42, an environment identifying unit 44, a user state identifying unit 46, an output selecting unit 48, an output specification deciding unit 50, a content image obtaining unit 52, and an output control unit 54. The control unit 32 reads a computer program (software) from the storage unit 30 and executes it so as to implement the operations of the environment information obtaining unit 40, the biological information obtaining unit 42, the environment identifying unit 44, the user state identifying unit 46, the output selecting unit 48, the output specification deciding unit 50, the content image obtaining unit 52, and the output control unit 54. Meanwhile, the control unit 32 can perform such operations either using a single CPU or, when a plurality of CPUs is installed therein, using the plurality of CPUs. Moreover, at least some units from among the environment information obtaining unit 40, the biological information obtaining unit 42, the environment identifying unit 44, the user state identifying unit 46, the output selecting unit 48, the output specification deciding unit 50, the content image obtaining unit 52, and the output control unit 54 can be implemented using hardware.
The environment information obtaining unit 40 controls the environment sensor 20 and causes it to detect the environment information. Thus, the environment information obtaining unit 40 obtains the environment information detected by the environment sensor 20. Regarding the operations performed by the environment information obtaining unit 40, the explanation is given later. Meanwhile, if the environment information obtaining unit 40 is implemented using hardware, then it can also be called an environment information detector.
The biological information obtaining unit 42 controls the biological sensor 22 and causes it to detect the biological information. Thus, the biological information obtaining unit 42 obtains the biological information detected by the biological sensor 22. Regarding the operations performed by the biological information obtaining unit 42, the explanation is given later. Meanwhile, if the biological information obtaining unit 42 is implemented using hardware, then it can also be called a biological information detector. Moreover, the biological information obtaining unit 42 is not a mandatory part of the configuration.
The environment identifying unit 44 identifies, based on the environment information obtained by the environment information obtaining unit 40, the environment around the user U. Then, the environment identifying unit 44 calculates an environment score representing the score for identifying the environment; and, based on the environment score, identifies an environment condition pattern indicating the condition of the environment and accordingly identifies the environment. Regarding the environment identifying unit 44, the explanation is given later.
The user state identifying unit 46 identifies the condition of the user U based on the biological information obtained by the biological information obtaining unit 42. Regarding the operations performed by the user state identifying unit 46, the explanation is given later. Meanwhile, the user state identifying unit 46 is not a mandatory part of the configuration.
The output selecting unit 48 selects, based on at least either the environment information obtained by the environment information obtaining unit 40 or the biological information obtained by the biological information obtaining unit 42, the target device to be operated from among the devices in the output unit 26. Regarding the operations performed by the output selecting unit 48, the explanation is given later. Meanwhile, if the output selecting unit 48 is implemented using hardware, then it can also be called a sense selector. In the case in which the output specification deciding unit 50 (explained later) decides on the output specification based on the environment information, the output selecting unit 48 need not be used. In that case, for example, instead of selecting the target device, the information providing device 10 can operate all constituent elements of the output unit 26, that is, can operate the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C.
The output specification deciding unit 50 decides on the output specification of a stimulus (herein, a visual stimulus, an auditory stimulus, or a tactile stimulus), which is to be output by the output unit 26, based on at least either the environment information obtained by the environment information obtaining unit 40 or the biological information obtained by the biological information obtaining unit 42. For example, it can be said that, based on at least either the environment information obtained by the environment information obtaining unit 40 or the biological information obtained by the biological information obtaining unit 42, the output specification deciding unit 50 decides on the display specification (the output specification) of the content image PS displayed on the display unit 26A. The output specification represents the index about the manner of outputting the stimulus that is output by the output unit 26. Regarding the output specification, the detailed explanation is given later. Moreover, regarding the operations performed by the output specification deciding unit 50, the explanation is given later. Meanwhile, in the case in which the output selecting unit 48 selects the target device based on the environment information, the output specification deciding unit 50 need not be included. In that case, for example, in the information providing device 10, without deciding on the output specification according to the environment information, the selected target device can be made to output a stimulus according to an arbitrary output specification.
The content image obtaining unit 52 obtains the content image data via the content image receiving unit 28A.
The output control unit 54 controls the output unit 26 and causes it to perform output. The output control unit 54 ensures that the target device selected by the output selecting unit 48 performs output according to the output specification decided by the output specification deciding unit 50. For example, the output control unit 54 displays the content image PS, which is obtained by the content image obtaining unit 52, in a superimposed manner on the main image PM and according to the display specification decided by the output specification deciding unit 50. Meanwhile, if the output control unit 54 is implemented using hardware, then it can also be called a multisensory provider.
Thus, the information providing device 10 is configured in the manner explained above.
Operation Details
Given below is the explanation of the operation details of the information providing device 10. More specifically, given below is the explanation of the operations by which the output unit 26 is made to perform output based on the environment information or the biological information.
Acquisition of Environment Information
As illustrated in
Determining Danger Condition
In the information providing device 10, after the environment information is obtained, the environment identifying unit 44 determines, based on that environment information, whether a danger condition is present, that is, whether the environment surrounding the user U is dangerous (Step S12).
The environment identifying unit 44 determines whether a danger condition is present based on an image of the surrounding of the information providing device 10 as taken by the camera 20A. In the following explanation, an image of the surrounding of the information providing device 10 as taken by the camera 20A is referred to as a surrounding image. For example, the environment identifying unit 44 identifies the object captured in the surrounding image and, based on the type of the identified object, determines whether a danger condition is present. More specifically, if the object captured in the surrounding image is a specific object set in advance, then the environment identifying unit 44 determines that a danger condition is present. However, if the object is not a specific object, then the environment identifying unit 44 determines that a danger condition is not present. Herein, the specific objects can be set in an arbitrary manner. For example, a specific object can be an object that is likely to create a danger to the user U, such as flames indicating that a fire has broken out, a vehicle, or a signboard indicating that some work is going on. Meanwhile, the environment identifying unit 44 can determine whether a danger condition is present based on a plurality of surrounding images successively taken in chronological order. For example, in each of a plurality of surrounding images successively taken in chronological order, the environment identifying unit 44 identifies an object and determines whether that object is a specific object and is the same object across the images. If the same specific object is captured, then the environment identifying unit 44 determines whether the specific object captured in the subsequent surrounding images, which are taken later in the chronological order, has grown relatively larger, that is, whether the specific object has been moving closer to the user U. If the specific object has grown relatively larger in the subsequent surrounding images, that is, if the specific object has been moving closer to the user U, then the environment identifying unit 44 determines that a danger condition is present. On the other hand, if the specific object has not grown relatively larger in the subsequent surrounding images, that is, if the specific object has not been moving closer to the user U, then the environment identifying unit 44 determines that a danger condition is not present. In this way, the environment identifying unit 44 can determine about a danger condition either based on a single surrounding image or based on a plurality of surrounding images successively taken in chronological order. For example, the environment identifying unit 44 can switch between the determination methods depending on the type of the object captured in the surrounding images. Thus, if a specific object that can be determined to be dangerous from a single surrounding image, such as flames indicating a fire, is captured, then the environment identifying unit 44 can determine that a danger condition is present based on that single surrounding image. Alternatively, for example, if a specific object that cannot be determined to be dangerous from a single surrounding image, such as a vehicle, is captured, then the environment identifying unit 44 can determine about the danger condition based on a plurality of surrounding images successively taken in chronological order.
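For illustration only, the chronological determination described above can be sketched as follows. The input is assumed to be per-image detection results (object type and apparent size, for example a bounding-box area) produced by an object identification step such as the learning model 30A explained next; the object names, the set of specific objects, and the size comparison rule are hypothetical.

```python
from typing import List, Optional, Tuple

# Specific object types set in advance as likely to create a danger to the user.
SPECIFIC_OBJECTS = {"flames", "vehicle", "work_signboard"}

# Specific objects that can be judged dangerous from a single surrounding image.
DANGEROUS_FROM_SINGLE_IMAGE = {"flames"}

def danger_condition(detections: List[Optional[Tuple[str, float]]]) -> bool:
    """Decide on a danger condition from per-image detections in chronological order.

    Each element is (object_type, apparent_size) for the object identified in one
    surrounding image, or None when no object was identified in that image.
    """
    specific = [d for d in detections if d is not None and d[0] in SPECIFIC_OBJECTS]
    if not specific:
        return False                              # no specific object captured
    obj_type, first_size = specific[0]
    if obj_type in DANGEROUS_FROM_SINGLE_IMAGE:
        return True                               # e.g. flames: a single image suffices
    # Otherwise check whether the same specific object grows larger over time,
    # i.e. whether it has been moving closer to the user U.
    sizes = [size for t, size in specific if t == obj_type]
    return len(sizes) >= 2 and sizes[-1] > sizes[0]

# Example usage: a vehicle whose apparent size increases across three surrounding images.
print(danger_condition([("vehicle", 120.0), ("vehicle", 180.0), ("vehicle", 260.0)]))  # True
```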
Meanwhile, the environment identifying unit 44 can identify an object, which is captured in a surrounding image, according to an arbitrary method. For example, the environment identifying unit 44 can identify the object using the learning model 30A. In that case, for example, the learning model 30A is an AI model in which the data of an image and the information indicating the type of the object captured in that image are treated as a single dataset and which is built by performing learning using a plurality of such datasets as the teacher data. The environment identifying unit 44 inputs the image data of a surrounding image to the already-learnt learning model 30A, obtains the information identifying the type of the object captured in that surrounding image, and identifies the object.
Meanwhile, in addition to referring to a surrounding image, the environment identifying unit 44 can also refer to the position information obtained by the GNSS receiver 20C and then determine whether a danger condition is present. In that case, based on the position information of the information providing device 10 (the user U) as obtained by the GNSS receiver 20C and based on the map data 30B, the environment identifying unit 44 obtains location information indicating the location of the user U. The location information indicates the type of the place at which the user U (the information providing device 10) is present. That is, for example, the location information indicates that the user U is at a shopping center or on a road. The environment identifying unit 44 reads the map data 30B, identifies the types of building structures or the types of natural objects present within a predetermined distance from the current position of the user U, and identifies the location information from the building structures or the natural objects. For example, if the current position of the user U overlaps with the coordinates of a shopping center, then the fact that the user U is present in a shopping center is identified as the location information. Subsequently, if the location information has a specific relationship with the type of object identified from the surrounding image, then the environment identifying unit 44 determines that a danger condition is present. On the other hand, if the specific relationship is not established, then the environment identifying unit 44 determines that a danger condition is not present. The specific relationship can be set in an arbitrary manner. For example, such a combination of an object and a place which is likely to create a danger when that object is present at that place can be set as the specific relationship.
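For illustration only, the combination of an identified object and the location information can be sketched as a lookup of preset combinations, as described above. The example combinations, the rectangular map regions, and the function names are hypothetical.

```python
# Hypothetical combinations of (object type, location type) that are set in advance
# as specific relationships likely to create a danger.
SPECIFIC_RELATIONSHIPS = {
    ("vehicle", "road"),
    ("vehicle", "parking_lot"),
    ("flames", "shopping_center"),
}

def location_from_map(position: tuple, map_data: dict) -> str:
    """Identify the location type whose registered coordinates overlap the current position."""
    lat, lon = position
    for place, ((lat_min, lat_max), (lon_min, lon_max)) in map_data.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return place
    return "unknown"

def danger_from_object_and_location(object_type: str, position: tuple, map_data: dict) -> bool:
    """Determine a danger condition when the object and the location have a specific relationship."""
    location = location_from_map(position, map_data)
    return (object_type, location) in SPECIFIC_RELATIONSHIPS

# Example usage with a simplified map: a road registered as a rectangular region.
map_data = {"road": ((35.0, 35.1), (139.0, 139.1))}
print(danger_from_object_and_location("vehicle", (35.05, 139.05), map_data))  # True
```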
Moreover, the environment identifying unit 44 determines whether a danger condition is present based on the sound information obtained by the microphone 20B. In the following explanation, the sound information of the surrounding of the information providing device 10 as obtained by the microphone 20B is referred to as a surrounding sound. For example, the environment identifying unit 44 identifies the type of the sound included in the surrounding sound and, based on the identified type of the sound, determines whether a danger condition is present. More specifically, if the type of the sound included in the surrounding sound is a specific sound that is set in advance, then the environment identifying unit 44 can determine that a danger condition is present. On the other hand, if no specific sound is included, then the environment identifying unit 44 can determine that a danger condition is not present. The specific sound can be set in an arbitrary manner. For example, a specific sound can be a sound that is likely to create a danger to the user U, such as a sound indicating that a fire has broken out, the sound of a vehicle, or a sound indicating that some work is going on.
Meanwhile, the environment identifying unit 44 can identify the type of the sound included in the surrounding sound according to an arbitrary method. For example, the environment identifying unit 44 can identify the type of the sound using the learning model 30A. In that case, for example, the learning model 30A is an AI model in which sound data (for example, data indicating the frequency and the intensity of a sound) and the information indicating the type of that sound are treated as a single dataset and which is built by performing learning using a plurality of such datasets as the teacher data. The environment identifying unit 44 inputs the sound data of a surrounding sound to the already-learnt learning model 30A, obtains the information identifying the type of the sound included in that surrounding sound, and identifies the type of the sound.
Meanwhile, in addition to referring to the surrounding sound, the environment identifying unit 44 can also refer to the position information obtained by the GNSS receiver 20C and then determine whether a danger condition is present. In that case, based on the position information of the information providing device 10 (the user U) as obtained by the GNSS receiver 20C and based on the map data 30B, the environment identifying unit 44 obtains location information indicating the location of the user U. Subsequently, if the location information has a specific relationship with the type of the sound identified from the surrounding sound, then the environment identifying unit 44 determines that a danger condition is present. On the other hand, if the specific relationship is not established, then the environment identifying unit 44 determines that a danger condition is not present. The specific relationship can be set in an arbitrary manner. For example, such a combination of a sound and a place that is likely to create a danger when that sound is generated at that place can be set as the specific relationship.
In this way, in the present embodiment, the environment identifying unit 44 determines about a danger condition based on the surrounding image and the surrounding sound. However, the determination method about the danger condition is not limited to the method explained above, and any arbitrary method can be implemented. For example, the environment identifying unit 44 can determine about the danger condition based on either the surrounding image or the surrounding sound. Alternatively, the environment identifying unit 44 can determine about the danger condition based on at least either the image of the surrounding of the information providing device 10 as taken by the camera 20A, or the sound of the surrounding of the information providing device 10 as detected by the microphone 20B, or the position information obtained by the GNSS receiver 20C. Meanwhile, in the present embodiment, the determination of a danger condition is not mandatory and can be omitted.
Setting of Danger Notification Details
If it is determined that a danger condition is present (Yes at Step S12); then, in the information providing device 10, the output control unit 54 sets danger notification details that represent the notification details about the existence of the danger condition (Step S14). The information providing device 10 sets the danger notification details based on the details of the danger condition. The details of the danger condition represent the information indicating the type of the danger, and are identified from the type of the object captured in the surrounding image or from the type of the sound included in the surrounding sound. For example, if the object is a car that is approaching, then the details of the danger condition indicate that “a vehicle is approaching.” The danger notification details represent the information indicating the details of the danger condition. For example, if the details of the danger condition indicate that a vehicle is approaching, then the danger notification details represent the information indicating that a vehicle is approaching.
The danger notification details differ according to the type of the target device selected at Step S26 (explained later). For example, if the display unit 26A is selected as the target device, then the danger notification details indicate the display details (content) of the content image PS. That is, the danger notification details are displayed in the form of the content image PS. In that case, for example, the danger notification details represent image data indicating the details such as “Beware! A vehicle is approaching.” Alternatively, if the sound output unit 26B is selected as the target device, then the danger notification details represent the details of the sound output from the sound output unit 26B. In this case, for example, the danger notification details represent sound data for making a sound such as “please be careful as a vehicle is approaching.” Still alternatively, if the sensory stimulus output unit 26C is selected as the target device, then the danger notification details represent the details of the sensory stimulus output from the sensory stimulus output unit 26C. In this case, for example, the danger notification details indicate a tactile stimulus meant for gaining attention of the user U.
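For illustration only, the dependence of the danger notification details on the selected target device can be sketched as follows. The device identifiers, the message strings, and the vibration pattern are hypothetical examples.

```python
def danger_notification_details(target_device: str, danger: str) -> dict:
    """Build notification details matching the selected target device (hypothetical forms)."""
    if target_device == "display_unit":
        # Displayed as the content image PS.
        return {"type": "image_text", "content": f"Beware! {danger}."}
    if target_device == "sound_output_unit":
        # Output as a spoken message.
        return {"type": "sound", "content": f"Please be careful: {danger}."}
    if target_device == "sensory_stimulus_output_unit":
        # A tactile pattern meant only for gaining the user's attention.
        return {"type": "vibration", "pattern_ms": [200, 100, 200, 100, 600]}
    raise ValueError(f"unknown target device: {target_device}")

# Example usage.
print(danger_notification_details("display_unit", "A vehicle is approaching"))
```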
Meanwhile, the operation of setting the danger notification details at Step S14 can be performed at an arbitrary timing after it is determined at Step S12 that a danger condition is present and before the operation of outputting the danger notification details at Step S38 performed later. For example, the operation of setting the danger notification details at Step S14 can be performed after the selection of the target device at Step S26 performed later.
Calculation of Environment Score
If it is determined that a danger condition is not present (No at Step S12); then, in the information providing device 10, the environment identifying unit 44 calculates various environment scores based on the environment information as indicated from Step S16 to Step S22. An environment score represents a score for identifying the environment in which the user U (the information providing device 10) is present. More particularly, the environment identifying unit 44 calculates the following scores as the environment scores: a posture score (Step S16), a location score (Step S18), a movement score (Step S20), and a safety score (Step S22). The sequence of operations performed between Steps S16 and S22 is not limited to the sequence given above, and an arbitrary sequence can be implemented. Meanwhile, also when the danger notification details are set at Step S14, various environment scores are calculated from Step S16 to Step S22. Given below is the specific explanation of the environment scores.
Posture Score
The environment identifying unit 44 calculates posture scores as environment scores for the category indicating the posture of the user U. A posture score represents the information indicating a posture of the user U, and can be said to indicate, in the form of a numerical value, the type of the posture of the user U. The environment identifying unit 44 calculates posture scores based on the environment information that, from among a plurality of types of environment information, is related to the postures of the user U. Examples of the environment information related to the postures of the user U include the surrounding image obtained by the camera 20A and the orientation of the information providing device 10 as detected by the gyro sensor 20E.
More specifically, in the example illustrated in
Moreover, based on the orientation of the information providing device 10 as detected by the gyro sensor 20E, the environment identifying unit 44 calculates the posture score for the sub-category indicating that the face orientation is in the horizontal direction. The posture score for the sub-category indicating that the face orientation is in the horizontal direction can be said to be the numerical value indicating the degree of coincidence of the posture (the orientation of the face) of the user U with respect to the horizontal direction. In order to calculate the posture score for the sub-category indicating that the face orientation is in the horizontal direction, any arbitrary method can be implemented. Herein, although the degree of coincidence with respect to the fact that the face orientation is in the horizontal direction is taken into account, the horizontal direction is not the only possible direction. Alternatively, for example, the degree of coincidence can be calculated with respect to the fact that the face is oriented in an arbitrary direction.
In this way, it can be said that the environment identifying unit 44 sets the information indicating the postures of the user U (herein, the posture scores) based on the surrounding image and the orientation of the information providing device 10. However, in order to set the information indicating the postures of the user U, the environment identifying unit 44 is not limited to use the surrounding image and the orientation of the information providing device 10, and alternatively can use arbitrary environment information. For example, the environment identifying unit 44 can use at least either the surrounding image or the orientation of the information providing device 10.
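For illustration only, the degree of coincidence of the face orientation with the horizontal direction, described above, can be sketched from a pitch angle obtained from the orientation detected by the gyro sensor 20E. The linear mapping and the 90-degree normalization are assumptions for the sketch.

```python
def horizontal_posture_score(pitch_deg: float) -> float:
    """Degree of coincidence (0.0-1.0) of the face orientation with the horizontal direction.

    pitch_deg is the assumed face pitch angle: 0 means looking straight ahead (horizontal),
    +90 means looking straight up, and -90 means looking straight down.
    """
    deviation = min(abs(pitch_deg), 90.0)   # clamp to the 0-90 degree range
    return 1.0 - deviation / 90.0           # 1.0 when horizontal, 0.0 when vertical

# Example usage: the user is looking slightly downward.
print(horizontal_posture_score(-15.0))  # about 0.83
```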
Location Scores
The environment identifying unit 44 calculates location scores as the environment scores regarding the location category of the user U. That is, a location score represents information indicating the location of the user U, and can be said to be the information indicating, in the form of a numerical value, the type of the place at which the user U is present. The environment identifying unit 44 calculates the location score based on the environment information that, from among a plurality of types of environment information, is related to the location of the user U. Examples of the environment information related to the location of the user U include the surrounding image obtained by the camera 20A, the position information of the information providing device 10 as obtained by the GNSS receiver 20C, and the surrounding sound obtained by the microphone 20B.
More specifically, in the example illustrated in
Regarding the sub-category indicating the presence on railway track, the environment identifying unit 44 calculates the location score based on the position information of the information providing device 10 as obtained by the GNSS receiver 20C. The location score for the sub-category indicating the presence on railway track can be said to be a numerical value indicating the degree of coincidence of the user U with respect to the place such as the railway track. In order to calculate the location score for the sub-category indicating the presence on railway track, an arbitrary method can be implemented. For example, the location score can be calculated using the map data 30B. For example, after reading the map data 30B, if the current position of the user U overlaps with the coordinates of the railway track, then the environment identifying unit 44 calculates the location score in such a way that the degree of coincidence of the place indicating the railway track with respect to the location of the user U becomes higher. Herein, although the degree of coincidence with respect to the railway track is taken into account, that is not the only possible case. Alternatively, the degree of coincidence with respect to a building structure or a natural object of an arbitrary type can be calculated.
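For illustration only, such a location score can be sketched as a degree of coincidence that decreases with the distance between the current position and the railway-track coordinates registered in the map data. The distance approximation, the 50 m decay scale, and the exponential score shape are assumptions.

```python
import math

def distance_m(p: tuple, q: tuple) -> float:
    """Approximate ground distance in meters between two (lat, lon) points (small-area approximation)."""
    lat_scale = 111_320.0                                  # meters per degree of latitude
    lon_scale = 111_320.0 * math.cos(math.radians(p[0]))   # meters per degree of longitude
    return math.hypot((p[0] - q[0]) * lat_scale, (p[1] - q[1]) * lon_scale)

def railway_location_score(position: tuple, track_points: list, scale_m: float = 50.0) -> float:
    """Degree of coincidence (0.0-1.0) of the user's position with registered railway-track points."""
    if not track_points:
        return 0.0
    nearest = min(distance_m(position, p) for p in track_points)
    return math.exp(-nearest / scale_m)  # 1.0 on the track, decaying with distance

# Example usage: about 20 m away from the nearest registered track point.
track = [(35.000, 139.000), (35.001, 139.000)]
print(railway_location_score((35.0002, 139.0000), track))
```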
Regarding the sub-category indicating the sound from the inside of a train car, the environment identifying unit 44 calculates the location score based on the surrounding sound obtained by the microphone 20B. The location score for the sub-category indicating the sound from the inside of a train car can be said to be a numerical value indicating the degree of coincidence of the surrounding sound with respect to the sound from the inside of a train car. Herein, in order to calculate the location score for the sub-category indicating the sound from the inside of a train car, an arbitrary method can be implemented. For example, the determination can be performed in an identical manner to the abovementioned method in which, based on the surrounding sound, it is determined whether a danger condition is present. That is, for example, the determination can be performed by determining whether the surrounding sound is a sound of a specific type. Meanwhile, although the degree of coincidence with respect to the sound from the inside of a train car is calculated, that is not the only possible case. Alternatively, the degree of coincidence with respect to the sound at an arbitrary place can be calculated.
In this way, it can be said that the environment identifying unit 44 sets the information indicating the location of the user U (herein, sets the location score) based on the surrounding image, based on the surrounding sound, and based on the position information of the information providing device 10. However, in order to set the information indicating the location of the user U, the environment identifying unit 44 is not limited to use the surrounding image, the surrounding sound, and the position information of the information providing device 10; and can alternatively use arbitrary environment information. For example, the environment identifying unit 44 can use at least either the surrounding image, or the surrounding sound, or the position information of the information providing device 10.
Movement Score
The environment identifying unit 44 calculates a movement score as the environment score for the category indicating the movement of the user U. Thus, the movement score can be said to be the information about a numerical value indicating the manner of movement of the user U. The environment identifying unit 44 calculates the movement score based on the environment information that, from among a plurality of types of environment information, is related to the movement of the user U. Examples of the environment information related to the movement of the user U include the acceleration information obtained by the acceleration sensor 20D.
More specifically, in the example illustrated in
In this way, it can be said that, based on the acceleration information or the position information of the information providing device 10, the environment identifying unit 44 sets the information indicating the movement of the user U (herein, sets the movement score). However, in order to set the information indicating the movement of the user U, the environment identifying unit 44 is not limited to use the acceleration information and the position information, and can alternatively use arbitrary environment information. For example, the environment identifying unit 44 can use at least either the acceleration information or the position information.
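For illustration only, a movement score can be sketched from the variation of recent acceleration magnitudes: larger fluctuations suggest walking or riding, while nearly constant readings suggest that the user U is stationary. The window format, the 0.05 g scale, and the exponential mapping are assumptions.

```python
import math
import statistics

def movement_score(accel_magnitudes_g: list) -> float:
    """Movement score (0.0-1.0) from a short window of acceleration magnitudes in g.

    A stationary user yields values close to 1 g with little variation; walking or riding
    produces larger fluctuations, which push the score toward 1.0.
    """
    if len(accel_magnitudes_g) < 2:
        return 0.0
    fluctuation = statistics.pstdev(accel_magnitudes_g)   # standard deviation of the window
    return 1.0 - math.exp(-fluctuation / 0.05)            # 0.05 g is an assumed scale

# Example usage: a stationary window versus a walking window.
print(movement_score([1.00, 1.01, 0.99, 1.00]))   # low score
print(movement_score([0.8, 1.3, 0.7, 1.4, 0.9]))  # close to 1.0
```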
Safety Scores
The environment identifying unit 44 calculates safety scores as the environment scores for the category indicating the safety of the user U. A safety score represents the information indicating the safety of the user U; and can be said to be the information indicating, in the form of a numerical value, whether the user U is present in a safe environment. The environment identifying unit 44 calculates the safety scores based on the environment information that, from among a plurality of types of environment information, is related to the safety of the user U. Examples of the environment information related to the safety of the user U include the surrounding image obtained by the camera 20A, the surrounding sound obtained by the microphone 20B, the intensity information of the light as detected by the light sensor 20F, the temperature information of the surrounding as detected by the temperature sensor 20G, and the humidity information of the surrounding as detected by the humidity sensor 20H.
More specifically, in the example illustrated in
Regarding the sub-category indicating that the infrared light or the ultraviolet light is at the appropriate level, the environment identifying unit 44 calculates the safety score based on the intensity of the infrared light or the ultraviolet light as obtained by the light sensor 20F. The safety score for the sub-category indicating that the infrared light or the ultraviolet light is at the appropriate level can be said to be a numerical value indicating the degree of coincidence of the intensity of the surrounding infrared light or the surrounding ultraviolet light with respect to the appropriate intensity of the infrared light or the ultraviolet light. Herein, an arbitrary method can be implemented for calculating the safety score for the sub-category indicating that the infrared light or the ultraviolet light is at the appropriate level. For example, the safety score can be calculated based on the intensity of the infrared light or the ultraviolet light as detected by the light sensor 20F. Herein, although the degree of coincidence with respect to the appropriate intensity of the infrared light or the ultraviolet light is calculated, that is not the only possible case. Alternatively, the degree of coincidence with respect to an arbitrary intensity of the infrared light or the ultraviolet light can be calculated.
Regarding the sub-category indicating that the temperature is at the appropriate level, the environment identifying unit 44 calculates the safety score based on the surrounding temperature as obtained by the temperature sensor 20G. The safety score for the sub-category indicating that the temperature is at the appropriate level can be said to be a numerical value indicating the degree of coincidence of the surrounding temperature with respect to the appropriate temperature. Herein, an arbitrary method can be implemented for calculating the safety score for the sub-category indicating that the temperature is at the appropriate level. For example, the safety score can be calculated based on the surrounding temperature as detected by the temperature sensor 20G. Herein, although the degree of coincidence with respect to the appropriate temperature is calculated, that is not the only possible case. Alternatively, the degree of coincidence with respect to an arbitrary temperature can be calculated.
Regarding the sub-category indicating that the humidity is at the appropriate level, the environment identifying unit 44 calculates the safety score based on the surrounding humidity as obtained by the humidity sensor 20H. The safety score for the sub-category indicating that the humidity is at the appropriate level can be said to be a numerical value indicating the degree of coincidence of the surrounding humidity with respect to the appropriate humidity. Herein, an arbitrary method can be implemented for calculating the safety score for the sub-category indicating that the humidity is at the appropriate level. For example, the safety score can be calculated based on the surrounding humidity as detected by the humidity sensor 20H. Herein, although the degree of coincidence with respect to the appropriate humidity is calculated, that is not the only possible case. Alternatively, the degree of coincidence with respect to an arbitrary humidity can be calculated.
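For illustration only, the degree of coincidence with an appropriate range (for temperature, humidity, or light intensity) can be sketched as a score that is 1.0 inside the range and falls off outside it. The range bounds and the fall-off widths in the example are assumptions.

```python
import math

def range_coincidence(value: float, low: float, high: float, falloff: float) -> float:
    """Degree of coincidence (0.0-1.0) of `value` with the appropriate range [low, high]."""
    if low <= value <= high:
        return 1.0
    outside = (low - value) if value < low else (value - high)
    return math.exp(-outside / falloff)   # decays the further the value lies outside the range

# Example usage with assumed appropriate ranges.
temperature_score = range_coincidence(31.0, low=18.0, high=28.0, falloff=5.0)   # surrounding 31 C
humidity_score = range_coincidence(55.0, low=40.0, high=60.0, falloff=10.0)     # surrounding 55 %RH
print(temperature_score, humidity_score)  # about 0.55 and 1.0
```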
Regarding the sub-category indicating that a hazardous object is present, the environment identifying unit 44 calculates the safety score based on the surrounding image obtained by the camera 20A. The safety score for the sub-category indicating that a hazardous object is present can be said to be a numerical value indicating the degree of coincidence with respect to the fact that a hazardous object is present. Herein, an arbitrary method can be implemented for calculating the safety score for the sub-category indicating that a hazardous object is present. For example, the determination can be performed in an identical manner to the abovementioned method in which, based on the surrounding image, it is determined whether a danger condition is present. That is, for example, the determination can be performed by determining whether the object captured in the surrounding image is a specific object. Moreover, regarding the sub-category indicating that a hazardous object is present, the environment identifying unit 44 calculates the safety score also based on the surrounding sound obtained by the microphone 20B. Herein, in order to calculate the safety score for the sub-category indicating that a hazardous object is present, an arbitrary method can be implemented. For example, the determination can be performed in an identical manner to the abovementioned method in which, based on the surrounding sound, it is determined whether a danger condition is present. That is, for example, the determination can be performed by determining whether the surrounding sound is a sound of a specific type.
Example of Environment Scores
In
Meanwhile, the types of categories and subcategories illustrated in
Deciding on Environment Pattern
From Step S16 to Step S22 illustrated in
In the examples illustrated in
Moreover, in the examples illustrated in
Moreover, in the examples illustrated in
Setting of Target Device and Reference Output Specification
After the environment pattern is selected, in the information providing device 10, based on the environment pattern, the output selecting unit 48 and the output specification deciding unit 50 decide on the target device to be operated in the output unit 26, and set the reference output specification (Step S26).
Setting of Target Device
As explained above, a target device is a device to be operated from among the devices in the output unit 26. In the present embodiment, based on the environment information, more desirably, based on the environment pattern, the output selecting unit 48 selects the target device from among the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C. The environment pattern represents the information indicating the current environment around the user U. Hence, as a result of selecting the target device based on the environment pattern, it becomes possible to select an appropriate sensory stimulus corresponding to the current environment around the user U.
For example, based on the environment information, the output selecting unit 48 determines whether it is highly necessary for the user U to visually confirm the surrounding environment and, based on that determination result, can determine whether to treat the display unit 26A as the target device. In that case, for example, if the necessity to visually confirm the surrounding environment is lower than a predetermined level, then the output selecting unit 48 can select the display unit 26A as the target device. On the other hand, if the necessity is equal to or higher than the predetermined level, then the output selecting unit 48 does not select the display unit 26A as the target device. The determination about whether it is necessary for the user U to visually confirm the surrounding environment can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the necessity is equal to or higher than the predetermined level.
Moreover, for example, based on the environment information, the output selecting unit 48 determines whether it is highly necessary for the user U to listen to the sounds of the surrounding environment and, based on that determination result, can determine whether to treat the sound output unit 26B as the target device. In that case, for example, if the necessity to listen to the sounds of the surrounding environment is lower than a predetermined level, the output selecting unit 48 selects the sound output unit 26B as the target device; if the necessity is equal to or higher than the predetermined level, the output selecting unit 48 does not select the sound output unit 26B as the target device. The determination about whether it is necessary for the user U to listen to the sounds of the surrounding environment can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the necessity is equal to or higher than the predetermined level.
Furthermore, for example, based on the environment information, the output selecting unit 48 determines whether the user U is in a position to receive a tactile stimulus and, based on that determination result, can determine whether to treat the sensory stimulus output unit 26C as the target device. In that case, for example, if it is determined that the user U is in a position to receive a tactile stimulus, the output selecting unit 48 selects the sensory stimulus output unit 26C as the target device; if it is determined that the user U is in no position to receive a tactile stimulus, the output selecting unit 48 does not select the sensory stimulus output unit 26C as the target device. The determination about whether the user U is in a position to receive a tactile stimulus can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the user U is in no position to receive a tactile stimulus.
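By way of illustration only, the target-device selection described above might be sketched as follows. The threshold value, the numerical representation of the necessity levels, and the device identifiers are assumptions for illustration, not the embodiment's actual implementation.

```python
NECESSITY_THRESHOLD = 0.5  # assumed "predetermined level"

def select_target_devices(visual_necessity, auditory_necessity, can_receive_tactile):
    """Return the set of devices to operate, given the surrounding environment.

    visual_necessity / auditory_necessity: 0..1 scores for how strongly the user
    needs to see / hear the surroundings (e.g. high while moving or near hazards).
    can_receive_tactile: whether the user is in a position to receive a tactile stimulus.
    """
    targets = set()
    if visual_necessity < NECESSITY_THRESHOLD:
        targets.add("display_unit_26A")
    if auditory_necessity < NECESSITY_THRESHOLD:
        targets.add("sound_output_unit_26B")
    if can_receive_tactile:
        targets.add("sensory_stimulus_output_unit_26C")
    return targets

# Example: user walking near traffic, tactile output not permitted -> no target device.
print(select_target_devices(visual_necessity=0.9, auditory_necessity=0.8,
                            can_receive_tactile=False))  # set()
# Example: user seated in a quiet place -> all three devices become candidates.
print(select_target_devices(0.1, 0.2, True))
```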
Till now, the explanation was given about the methods by which the output selecting unit 48 selects the target device. More particularly, for example, it is desirable that the output selecting unit 48 selects the target device based on a table indicating the relationship between the environment patterns and the target devices as illustrated in
Setting of Reference Output Specification
Based on the environment information, more desirably, based on the environment pattern, the output specification deciding unit 50 decides on the reference output specification. An output specification represents an index about the manner in which the output unit 26 outputs a stimulus. For example, the output specification of the display unit 26A indicates the manner of displaying the content image PS that is output, and can also be termed the display specification. Examples of the output specification of the display unit 26A include the size (dimensions) of the content image PS, the degree of transparency of the content image PS, and the display details (content) of the content image PS. The size of the content image PS indicates the dimensions that the content image PS occupies on the screen of the display unit 26A. The degree of transparency of the content image PS indicates the extent to which the content image PS is transparent. Herein, the higher the degree of transparency of the content image PS, the greater the amount of light that falls on the eyes of the user U as the environmental image PA passes through the content image PS; as a result, the environmental image PA superimposed on the content image PS becomes more clearly visible. In this way, it can be said that, based on the environment pattern, the output specification deciding unit 50 decides on the size, the degree of transparency, and the display details of the content image PS as the output specification of the display unit 26A. However, the output specification of the display unit 26A need not always include all of the size, the degree of transparency, and the display details of the content image PS. For example, the output specification of the display unit 26A can include at least one of the size, the degree of transparency, and the display details of the content image PS, or can include some other information.
For example, the output specification deciding unit 50 can determine, based on the environment information, whether it is necessary for the user U to visually confirm the surrounding environment and, based on that determination result, can decide on the output specification of the display unit 26A (the reference output specification). In that case, the output specification deciding unit 50 decides on the output specification of the display unit 26A (the reference output specification) in such a way that the degree of visibility of the environmental image PA increases in proportion to the necessity of visually confirming the surrounding environment. The degree of visibility indicates the ease with which the environmental image PA can be visually confirmed. For example, in proportion to the necessity of visually confirming the surrounding environment, the output specification deciding unit 50 can reduce the size of the content image PS, can increase the degree of transparency of the content image PS, can increase the restrictions on the display details of the content image PS, or can implement such changes in combination. Meanwhile, in order to increase the restrictions on the display details of the content image PS, for example, distribution images can be excluded from the display details, and the display details can be set to at least either navigation images or notification images. Meanwhile, the determination about whether it is necessary for the user U to visually confirm the surrounding environment can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the user U needs to visually confirm the surrounding environment.
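Purely as an illustrative sketch, the mapping from the necessity of visual confirmation to the reference output specification of the display unit 26A could look like the following. The concrete sizes, transparency values, level boundaries, and content restrictions are assumptions and not the embodiment's actual values.

```python
def display_reference_spec(visual_necessity):
    """Map a 0..1 visual-confirmation necessity to a display specification."""
    if visual_necessity < 0.3:          # surroundings need little attention
        return {"size": "large", "transparency": 0.2,
                "content": ["navigation", "notification", "distribution"]}
    if visual_necessity < 0.7:          # moderate attention required
        return {"size": "medium", "transparency": 0.5,
                "content": ["navigation", "notification"]}
    # High necessity: shrink the image, make it nearly transparent, and
    # restrict the display details (e.g. exclude distribution images).
    return {"size": "small", "transparency": 0.9, "content": ["notification"]}

print(display_reference_spec(0.8))
```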
As illustrated in
As illustrated in
As illustrated in
As illustrated in
Till now, the explanation was given about the output specification of the display unit 26A. Similarly, the output specification deciding unit 50 also decides on the output specifications of the sound output unit 26B and the sensory stimulus output unit 26C. Examples of the output specification of the sound output unit 26B (the sound specification) include the sound volume, the presence or absence of acoustic effects, and the extent of the acoustic effects. Herein, the acoustic effects indicate special effects such as surround sound or a stereophonic sound field. Thus, the higher the sound volume or the greater the extent of the acoustic effects, the stronger the auditory stimulus that can be given to the user U. For example, based on the environment information, the output specification deciding unit 50 determines whether it is necessary for the user U to listen to the surrounding sound and, based on that determination result, can decide on the output specification of the sound output unit 26B (the reference output specification). In that case, the output specification deciding unit 50 decides on the output specification of the sound output unit 26B (the reference output specification) in such a way that the sound volume and the extent of the acoustic effects decrease as the necessity to listen to the surrounding sound increases. The determination about whether it is necessary for the user U to listen to the surrounding sound can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the user U needs to listen to the surrounding sound. Meanwhile, in an identical manner to the output specification of the display unit 26A, the output specification deciding unit 50 can set levels regarding the output specification of the sound output unit 26B.
Regarding the sensory stimulus output unit 26C, the intensity of the tactile stimulus or the frequency of outputting the tactile stimulus can be treated as the output specification. The higher the intensity of the tactile stimulus or the higher its frequency, the more intense the tactile stimulus that can be given to the user U. For example, based on the environment information, the output specification deciding unit 50 determines whether the user U is in a position to receive a tactile stimulus and, based on that determination result, can decide on the output specification of the sensory stimulus output unit 26C (the reference output specification). In that case, the output specification deciding unit 50 decides on the output specification of the sensory stimulus output unit 26C (the reference output specification) in such a way that the higher the suitability for receiving a tactile stimulus, the higher the intensity of the tactile stimulus or the higher its frequency. The determination about whether the user U is in a position to receive a tactile stimulus can be performed in an arbitrary manner. For example, when the user U is in motion or when a hazardous object is present, it can be determined that the user U is in a position to receive a tactile stimulus. Meanwhile, in an identical manner to the output specification of the display unit 26A, the output specification deciding unit 50 can set levels regarding the output specification of the sensory stimulus output unit 26C.
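An analogous illustrative sketch for the sound output unit 26B and the sensory stimulus output unit 26C is given below. The volume scale, the rule for acoustic effects, and the vibration parameters are assumptions made only to make the relationships concrete.

```python
def sound_reference_spec(listen_necessity):
    """The more the user needs to hear the surroundings, the weaker the output."""
    return {"volume": 1.0 - listen_necessity,           # assumed 0..1 playback volume
            "acoustic_effect": listen_necessity < 0.5}  # surround effect only when safe

def tactile_reference_spec(tactile_suitability):
    """The more suitable the situation, the stronger / more frequent the stimulus."""
    return {"intensity": tactile_suitability,            # assumed 0..1 vibration strength
            "frequency_hz": 0.5 + 2.0 * tactile_suitability}

print(sound_reference_spec(0.8))
print(tactile_reference_spec(0.2))
```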
Specific Example of Setting of Target Device and Reference Output Specification
It is desirable that the output selecting unit 48 and the output specification deciding unit 50 decide on the target device and the reference output specification based on the relationship of the environment patterns with the target devices and the reference output specifications.
In the example illustrated in
In this way, in the present embodiment, based on the preset relationship of the environment patterns with the target devices and the reference output specifications, the information providing device 10 sets the target device and the reference output specification. However, the method for setting the target device and the reference output specification is not limited to the method explained above. Alternatively, based on the environment information detected by the environment sensor 20, the information providing device 10 can set the target device and the reference output specification according to an arbitrary method. Meanwhile, the information providing device 10 need not select both the target device and the reference output specification based on the environment information, and can alternatively select at least either the target device or the reference output specification based on the environment information.
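Purely as an illustrative sketch of such a preset relationship, the table lookup could be represented as follows; the pattern names, the selected devices, and the stored level values are hypothetical.

```python
# In the embodiment this kind of table would be held in the specification setting database 30C.
SPEC_TABLE = {
    "walking_outdoors_hazard": {
        "targets": {"sensory_stimulus_output_unit_26C"},
        "display_level": 1, "sound_level": 1, "tactile_level": 3,
    },
    "seated_indoors_quiet": {
        "targets": {"display_unit_26A", "sound_output_unit_26B"},
        "display_level": 3, "sound_level": 3, "tactile_level": 1,
    },
}

def lookup_spec(environment_pattern):
    """Return the target devices and reference levels preset for the given pattern."""
    return SPEC_TABLE[environment_pattern]

print(lookup_spec("seated_indoors_quiet"))
```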
Obtaining Biological Information
As illustrated in
On the other hand, regarding the brain waves, waves such as the α waves and the β waves can be detected, or the activity of the basic pattern (the background brain waves) can be detected, followed by detecting the amplitude of that activity. With that, it becomes possible to predict, to a certain extent, whether the activity of the entire brain is in a heightened state or in a declined state. For example, from the degree of activation of the prefrontal region of the brain, it becomes possible to understand the degree of interest, such as how much interest is being taken in an object about which a visual stimulus is given.
Identification of User State and Calculation of Output Specification Correction Degree
As illustrated in
The user state identifying unit 46 decides on the output specification correction degree based on the cerebral activation degree of the user U. In the present embodiment, the output specification correction degree is decided based on output specification correction degree relationship information, which indicates the relationship between the user state (in this example, the cerebral activation degree) and the output specification correction degree. The output specification correction degree relationship information represents information (in the form of a table) in which the user states are stored in a manner corresponding to the output specification correction degrees. For example, the output specification correction degree relationship information is stored in the specification setting database 30C. In the output specification correction degree relationship information, the output specification correction degrees are set for each device included in the output unit 26, that is, for each of the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C. Thus, the user state identifying unit 46 decides on the output specification correction degree based on the output specification correction degree relationship information and the identified user state. More particularly, the user state identifying unit 46 reads the output specification correction degree relationship information; selects, from the output specification correction degree relationship information, the output specification correction degree corresponding to the identified cerebral activation degree of the user U; and thereby decides on the output specification correction degree. In the example illustrated in
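An illustrative sketch of reading the output specification correction degree from such relationship information is given below. The three-level degrees and the per-device correction values are assumptions, not the embodiment's stored values.

```python
# Hypothetical relationship information: cerebral activation degree -> correction per device.
CORRECTION_TABLE = {
    "high":   {"display_unit_26A": -1, "sound_output_unit_26B": -1, "sensory_stimulus_output_unit_26C": 0},
    "medium": {"display_unit_26A":  0, "sound_output_unit_26B":  0, "sensory_stimulus_output_unit_26C": 0},
    "low":    {"display_unit_26A": +1, "sound_output_unit_26B": +1, "sensory_stimulus_output_unit_26C": +1},
}

def correction_degrees(cerebral_activation_degree):
    """Read the output specification correction degree for each device from the table."""
    return CORRECTION_TABLE[cerebral_activation_degree]

print(correction_degrees("high"))
```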
Moreover, based on the pulse wave information of the user U, the user state identifying unit 46 identifies the mental balance degree of the user U as the user state. In the present embodiment, from the pulse wave information, the user state identifying unit 46 calculates the fluctuation value of the interval between chronologically successive R-waves WH, that is, calculates the derivative value of the R-R interval; and identifies the mental balance degree of the user U based on the derivative value of the R-R interval. Herein, the smaller the derivative value of the R-R interval, that is, the lesser the fluctuation in the interval between the R-waves WH, the higher the mental balance degree of the user U identified by the user state identifying unit 46. In the example illustrated in
The user state identifying unit 46 decides on the output specification correction degree based on the output specification correction degree relationship information and the identified mental balance degree. More particularly, the user state identifying unit 46 reads the output specification correction degree relationship information; selects, from the output specification correction degree relationship information, the output specification correction degree corresponding to the identified mental balance degree of the user U; and thereby decides on the output specification correction degree. In the example illustrated in
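By way of illustration only, the identification of the mental balance degree from the fluctuation of the R-R interval could be sketched as follows. The peak times, the thresholds, and the three level names are assumptions made for illustration.

```python
def mental_balance_degree(r_wave_times_s):
    """Smaller fluctuation of successive R-R intervals -> higher mental balance degree."""
    intervals = [t2 - t1 for t1, t2 in zip(r_wave_times_s, r_wave_times_s[1:])]
    # Derivative (fluctuation) of the R-R interval series.
    fluctuations = [abs(i2 - i1) for i1, i2 in zip(intervals, intervals[1:])]
    mean_fluctuation = sum(fluctuations) / len(fluctuations)
    if mean_fluctuation < 0.02:      # seconds; assumed threshold
        return "high"
    if mean_fluctuation < 0.05:
        return "medium"
    return "low"

# Example: nearly constant 0.8 s heartbeats -> high mental balance degree.
print(mental_balance_degree([0.0, 0.80, 1.61, 2.41, 3.22, 4.02]))
```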
In this way, based on the preset relationship between the user states and the output specification correction degrees, the user state identifying unit 46 sets the output specification correction degrees. However, that is not the only possible method for setting the output specification correction degrees. Alternatively, the information providing device 10 can set the output specification correction degrees according to an arbitrary method based on the biological information detected by the biological sensor 22. Moreover, the information providing device 10 calculates the output specification correction degrees using both the cerebral activation degree identified from the brain waves and the mental balance degree identified from the pulse waves. However, that is not the only possible case. Alternatively, for example, the information providing device 10 can calculate the output specification correction degrees using either the cerebral activation degree identified from the brain waves or the mental balance degree identified from the pulse waves. Furthermore, the information providing device 10 treats the biological information as numerical values, and estimates the user state based on those numerical values. Hence, even the error contained in the biological information can be reflected, making it possible to estimate the mental state of the user U with more accuracy. In other words, it can be said that the information providing device 10 classifies the biological information, or the user state based on the biological information, into one of three or more degrees, and can thus estimate the mental state of the user U with more accuracy. However, the information providing device 10 is not limited to classifying the biological information, or the user state based on the biological information, into one of three or more degrees. Alternatively, for example, the biological information, or the user state based on the biological information, can be two-choice information indicating yes or no.
Generation of Output Restriction Necessity Information
As illustrated in
Moreover, in the example illustrated in
In this way, when the biological information and the environment information satisfy a specific relationship, that is, when the user state and the environment score satisfy at least either the first condition or the second condition, the user state identifying unit 46 generates output restriction necessity information indicating non-authorization for using the display unit 26A. On the other hand, if the user state and the environment score satisfy neither the first condition nor the second condition, the user state identifying unit 46 does not generate output restriction necessity information indicating non-authorization for using the display unit 26A, and instead generates output restriction necessity information indicating authorization for using the display unit 26A. Meanwhile, the generation of output restriction necessity information is not a mandatory operation.
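An illustrative sketch of generating the output restriction necessity information is given below; the first and second conditions are assumed to have already been evaluated from the user state and the environment score, as described above, and the per-device representation is hypothetical.

```python
def output_restriction_necessity(first_condition, second_condition):
    """Return per-device use authorization; display use is withheld if either condition holds."""
    display_allowed = not (first_condition or second_condition)
    return {"display_unit_26A": display_allowed,
            "sound_output_unit_26B": True,
            "sensory_stimulus_output_unit_26C": True}

print(output_restriction_necessity(first_condition=True, second_condition=False))
```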
Acquisition of Content Image
As illustrated in
Meanwhile, the content image obtaining unit 52 can obtain the image data of the content image having the content (the display details) corresponding to the position (the global coordinates) of the information providing device 10 (the user U). The position of the information providing device 10 is identified by the GNSS receiver 20C. For example, when the user U is present within a predetermined range from a particular position, the content image obtaining unit 52 receives the content related to that position. In principle, the display of the content image PS can be controlled according to the intention of the user U. However, when the display setting is set to allow the display, the place and the timing of the display are not known in advance; hence, although such a display is a convenient means, it sometimes can cause a nuisance. In that regard, the specification setting database 30C can be used to record information indicating the display authorization/non-authorization or the display specification of the content image PS as set by the user U. The content image obtaining unit 52 reads that information from the specification setting database 30C and, based on that information, controls the acquisition of the content image PS. Alternatively, the same information as in the specification setting database 30C can be maintained, together with the position information, at an Internet site, and the content image obtaining unit 52 can control the acquisition of the content image PS while checking the maintained details. Meanwhile, the operation performed at Step S34 for obtaining the image data of the content image PS is not limited to being executed before the operation performed at Step S36 (explained later). Alternatively, the operation can be performed at an arbitrary timing before the operation at Step S38 (explained later) is performed.
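Purely as an illustrative sketch of the position-dependent acquisition described above, a simple range check combined with the user's display setting could look like the following; the planar coordinate representation, the radius of the predetermined range, and the setting lookup are assumptions for illustration.

```python
import math

def should_fetch_content(device_pos, content_pos, user_allows_display, radius_m=50.0):
    """Fetch the content image only when the device is within the predetermined range of
    the content's position and the user's display setting (as recorded in the
    specification setting database 30C) authorizes the display."""
    dx = device_pos[0] - content_pos[0]
    dy = device_pos[1] - content_pos[1]
    within_range = math.hypot(dx, dy) <= radius_m
    return within_range and user_allows_display

# Example: the device is about 200 m away from the content position -> not fetched.
print(should_fetch_content((10.0, 20.0), (100.0, 200.0), user_allows_display=True))  # False
```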
Meanwhile, along with obtaining the image data of the content image PS, the content image obtaining unit 52 can also obtain the sound data and the tactile stimulus data related to the content image PS. The sound output unit 26B outputs the sound data related to the content image PS as the sound content (the details of the sound), and the sensory stimulus output unit 26C outputs the tactile stimulus data related to the content image PS as the tactile stimulus content (the details of the tactile stimulus).
Setting of Output Specification
Subsequently, as illustrated in
As explained above, the information providing device 10 corrects the reference output specification, which is set based on the environment information, according to the output specification correction degree set based on the biological information; and decides on the final output specification. However, the information providing device 10 is not limited to deciding on the output specification by correcting the reference output specification according to the output specification correction degree. Alternatively, the output specification can be decided according to an arbitrary method using at least either the environment information or the biological information. That is, the information providing device 10 either can decide on the output specification according to an arbitrary method based on the environment information and the biological information, or can decide on the output specification according to an arbitrary method based on either the environment information or the biological information. For example, based on the environment information from among the environment information and the biological information, the information providing device 10 can decide on the output specification using the method explained earlier in regard to deciding on the reference output specification. Alternatively, based on the biological information from among the environment information and the biological information, the information providing device 10 can decide on the output specification using the method explained earlier in regard to deciding on the output specification correction degree.
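By way of illustration only, if the output specification is represented as an integer level, the correction could be sketched as follows. Treating the correction as an additive shift of the level is an assumption; the embodiment only requires that the reference output specification be corrected in some form by the output specification correction degree.

```python
def final_output_level(reference_level, correction_degree, min_level=1, max_level=3):
    """Apply the correction degree to the reference level and clamp the result
    to the range supported by the output device."""
    return max(min_level, min(max_level, reference_level + correction_degree))

# Example: reference level 2 for the display, correction degree -1 because the
# cerebral activation degree of the user U is high -> final level 1 (weakest visual stimulus).
print(final_output_level(2, -1))  # 1
```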
Meanwhile, when output restriction necessity information indicating non-authorization for using the output unit 26 is generated at Step S32, the output selecting unit 48 selects the target device based on the output restriction necessity information rather than on the environment score alone. That is, even if a device of the output unit 26 has been selected as the target device at Step S26 based on the environment score, if the use of that device is not authorized in the output restriction necessity information, that device is not treated as the target device. In other words, the output selecting unit 48 selects the target device based on the output restriction necessity information and the environment information. Moreover, since the output restriction necessity information is set based on the biological information, the target device can be said to be set based on the biological information and the environment information. However, the output selecting unit 48 is not limited to selecting the target device based on both the biological information and the environment information. Alternatively, the output selecting unit 48 can select the target device based on at least either the biological information or the environment information.
Output Control
After the target device and the output specification are set, and after the image data of the content image PS is obtained, as illustrated in
For example, when the display unit 26A is set as the target device, the output control unit 54 causes the display unit 26A to display, according to the output specification of the display unit 26A, the content image PS that is based on the content image data obtained by the content image obtaining unit 52. As explained earlier, the output specification is set based on the environment information and the biological information. As a result of displaying the content image PS according to the output specification, the content image PS can be displayed in an appropriate form corresponding to the environment around the user U and the mental state of the user U.
When the sound output unit 26B is set as the target device, the output control unit 54 causes the sound output unit 26B to output, according to the output specification of the sound output unit 26B, sounds based on the sound data obtained by the content image obtaining unit 52. In that case too, for example, the intensity of the auditory stimulus can be lowered in inverse proportion to the cerebral activation degree of the user U or in proportion to the mental balance degree of the user U. With that, when the user U is focused on something else or does not have enough mental capacity to spare, the risk of the user U being bothered by the sounds can be lowered. On the other hand, the intensity of the auditory stimulus can be increased in inverse proportion to the cerebral activation degree of the user U or in proportion to the mental balance degree of the user U. With that, the user U becomes able to obtain information from the sounds in an appropriate manner.
When the sensory stimulus output unit 26C is set as the target device, the output control unit 54 causes the sensory stimulus output unit 26C to output, according to the output specification of the sensory stimulus output unit 26C, a tactile stimulus based on the tactile stimulus data obtained by the content image obtaining unit 52. In that case too, for example, the intensity of the tactile stimulus can be lowered in inverse proportion to the cerebral activation degree of the user U or in proportion to the mental balance degree of the user U. With that, when the user U is focused on something else or does not have enough mental capacity to spare, the risk of the user U being bothered by the tactile stimulus can be lowered. On the other hand, the intensity of the tactile stimulus can be increased in inverse proportion to the cerebral activation degree of the user U or in proportion to the mental balance degree of the user U. With that, the user U becomes able to obtain information from the tactile stimulus in an appropriate manner.
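An illustrative sketch of the output control step as a whole is given below; the device identifiers and the shape of the specification and content data are hypothetical stand-ins for the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C.

```python
def control_output(target_devices, output_specs, content):
    """Dispatch each piece of content to the selected target devices, together with
    the output specification decided for each device."""
    actions = []
    if "display_unit_26A" in target_devices:
        actions.append(("display", content["image"], output_specs["display"]))
    if "sound_output_unit_26B" in target_devices:
        actions.append(("play_sound", content["sound"], output_specs["sound"]))
    if "sensory_stimulus_output_unit_26C" in target_devices:
        actions.append(("vibrate", content["tactile"], output_specs["tactile"]))
    return actions

content = {"image": "navigation.png", "sound": "guide.wav", "tactile": "pulse_pattern_1"}
specs = {"display": {"size": "small"}, "sound": {"volume": 0.3}, "tactile": {"intensity": 0.7}}
print(control_output({"display_unit_26A", "sensory_stimulus_output_unit_26C"}, specs, content))
```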
Meanwhile, when it is determined at Step S12 that a danger condition is present and the danger notification details are set, the output control unit 54 causes the target device to provide notification of the danger notification details in accordance with the set output specification.
In this way, the information providing device 10 according to the present embodiment sets the output specification based on the environment information and the biological information, so that a sensory stimulus can be output at an appropriate level according to the environment around the user U and the mental state of the user U. Moreover, since the target device to be operated is selected based on the environment information and the biological information, the information providing device 10 becomes able to select an appropriate sensory stimulus according to the environment around the user U or the mental state of the user U. However, the information providing device 10 is not limited to using both the environment information and the biological information. Alternatively, for example, either the environment information or the biological information can be used. Thus, for example, the information providing device 10 either can select the target device based on the environment information and then set the output specification, or can select the target device based on the biological information and then set the output specification.
As explained above, the information providing device 10 according to an aspect of the present embodiment provides information to the user U, and includes the output unit 26, the environment sensor 20, the output specification deciding unit 50, and the output control unit 54. The output unit 26 includes the display unit 26A meant for outputting visual stimuli, the sound output unit 26B meant for outputting auditory stimuli, and the sensory stimulus output unit 26C meant for outputting sensory stimuli other than visual stimuli and auditory stimuli. The environment sensor 20 detects the environment information of the surroundings of the information providing device 10. Based on the environment information, the output specification deciding unit 50 decides on the output specification of visual stimuli, auditory stimuli, and sensory stimuli, that is, decides on the output specification for the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C. Based on the output specification, the output control unit 54 causes the output unit 26 to output a visual stimulus, an auditory stimulus, and a sensory stimulus. Since the output specification of visual stimuli, auditory stimuli, and sensory stimuli is set based on the environment information, the information providing device 10 can output a visual stimulus, an auditory stimulus, and a sensory stimulus after balancing them out according to the environment around the user U. That enables the information providing device 10 to provide information to the user U in an appropriate manner.
Moreover, the information providing device 10 according to an aspect of the present embodiment includes a plurality of environment sensors meant for detecting mutually different types of environment information, and includes the environment identifying unit 44. Based on the different types of environment information, the environment identifying unit 44 identifies the environment pattern that comprehensively indicates the current environment around the user U. Based on the environment pattern, the output specification deciding unit 50 decides on the output specification. Based on the environment pattern identified from a plurality of types of environment information, the information providing device 10 sets the output specification of the visual stimuli, the auditory stimuli, and the sensory stimuli; and hence becomes able to provide information in a more appropriate manner according to the environment around the user U.
Furthermore, in the information providing device 10 according to an aspect of the present embodiment, as the output specification of a visual stimulus, the output specification deciding unit 50 decides on at least one of the size, the degree of transparency, and the content (display details) of the image displayed on the display unit 26A. As a result of deciding on such a specification as the output specification of a visual stimulus, the information providing device 10 becomes able to provide visual information in a more appropriate manner.
Moreover, in the information providing device 10 according to an aspect of the present embodiment, as the output specification of an auditory stimulus, the output specification deciding unit 50 decides on at least either the sound volume of the sound output from the sound output unit 26B or the acoustic effects. As a result of deciding on such a specification as the output specification of an auditory stimulus, the information providing device 10 becomes able to provide auditory information in a more appropriate manner.
Furthermore, in the information providing device 10 according to an aspect of the present embodiment, the sensory stimulus output unit 26C outputs a tactile stimulus as a sensory stimulus; and, as the output specification of the tactile stimulus, the output specification deciding unit 50 decides on at least either the intensity or the frequency of the tactile stimulus output by the sensory stimulus output unit 26C. As a result of deciding on such a specification as the output specification of a tactile stimulus, the information providing device 10 becomes able to provide tactile information in a more appropriate manner.
Moreover, the information providing device 10 according to an aspect of the present embodiment provides information to the user U, and includes the output unit 26, the biological sensor 22, the output specification deciding unit 50, and the output control unit 54. The output unit 26 includes the display unit 26A meant for outputting visual stimuli, the sound output unit 26B meant for outputting auditory stimuli, and the sensory stimulus output unit 26C meant for outputting sensory stimuli other than visual stimuli and auditory stimuli. The biological sensor 22 detects the biological information of the user U. Based on the biological information, the output specification deciding unit 50 decides on the output specification of visual stimuli, auditory stimuli, and sensory stimuli, that is, decides on the output specification for the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C. Based on the output specification, the output control unit 54 causes the output unit 26 to output a visual stimulus, an auditory stimulus, and a sensory stimulus. Since the output specification of visual stimuli, auditory stimuli, and sensory stimuli is set based on the biological information, the information providing device 10 can output a visual stimulus, an auditory stimulus, and a sensory stimulus after balancing them out according to the mental state of the user U. That enables the information providing device 10 to provide information to the user U in an appropriate manner.
Moreover, according to an aspect of the present embodiment, the biological information contains information related to the autonomic nerves of the user U, and the output specification deciding unit 50 decides on the output specification based on the information related to the autonomic nerves of the user U. As a result of setting the output specification of visual stimuli, auditory stimuli, and sensory stimuli based on the information related to the autonomic nerves of the user U, the information providing device 10 can provide information in a more appropriate manner according to the mental state of the user U.
Furthermore, the information providing device 10 according to an aspect of the present embodiment provides information to the user U, and includes the output unit 26, the environment sensor 20, the output specification deciding unit 50, and the output control unit 54. The output unit 26 includes the display unit 26A meant for outputting visual stimuli, the sound output unit 26B meant for outputting auditory stimuli, and the sensory stimulus output unit 26C meant for outputting sensory stimuli other than visual stimuli and auditory stimuli. The environment sensor 20 detects the environment information of the surroundings of the information providing device 10. Based on the environment information, the output selecting unit 48 selects the target device for use from among the display unit 26A, the sound output unit 26B, and the sensory stimulus output unit 26C. The output control unit 54 controls the target device. As a result of selecting the target device based on the environment information, the information providing device 10 becomes able to appropriately select, according to the environment around the user U, a stimulus such as a visual stimulus, an auditory stimulus, or a sensory stimulus. That enables the information providing device 10 to provide information to the user U in an appropriate manner according to the environment around the user U.
Moreover, the information providing device 10 according to an aspect of the present embodiment further includes the biological sensor 22 meant for detecting the biological information of the user; and the output selecting unit 48 selects the target device based on the environment information and based on the biological information of the user U. As a result of selecting the target device, which is to be operated, based on the environment information and the biological information, the information providing device 10 becomes able to select the sensory stimulus in an appropriate manner according to the environment around the user U or according to the mental state of the user U.
Furthermore, according to an aspect of the present embodiment, the environment sensor 20 detects the position information of the information providing device 10 as the environment information; and the biological sensor 22 detects the cerebral activation degree of the user U as the biological information. When at least either the first condition is satisfied, in which the position of the information providing device 10 is within a predetermined area and the cerebral activation degree is equal to or lower than a cerebral activation degree threshold value, or the second condition is satisfied, in which the variation in the position of the information providing device 10 per unit time is equal to or smaller than a predetermined variation threshold value and the cerebral activation degree is equal to or lower than the cerebral activation degree threshold value, the output selecting unit 48 selects the display unit 26A as the target device. On the other hand, when neither the first condition nor the second condition is satisfied, the output selecting unit 48 does not select the display unit 26A as the target device. Since whether to operate the display unit 26A is decided in this manner, for example, when the user U is not in motion and is in a relaxed state, or when the user U is in a train car in a relaxed state, the information providing device 10 can output a sensory stimulus to the user U in an appropriate manner.
The computer program for performing the information providing method described above may be provided by being stored in a non-transitory computer-readable storage medium, or may be provided via a network such as the Internet. Examples of the computer-readable storage medium include optical discs such as a digital versatile disc (DVD) and a compact disc (CD), and other types of storage devices such as a hard disk and a semiconductor memory.
According to the present disclosure, it becomes possible to provide information to the user in an appropriate manner.
Although the present disclosure has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2020-157524 | Sep 2020 | JP | national
2020-157525 | Sep 2020 | JP | national
2020-157526 | Sep 2020 | JP | national
This application is a Continuation of PCT International Application No. PCT/JP2021/034398 filed on Sep. 17, 2021 which claims the benefit of priority from Japanese Patent Applications No. 2020-157524, No. 2020-157525 and No. 2020-157526, filed on Sep. 18, 2020, the entire contents of all of which are incorporated herein by reference.
Related U.S. Application Data

Relationship | Number | Date | Country
---|---|---|---
Parent | PCT/JP2021/034398 | Sep 2021 | US
Child | 18179409 | | US