The present disclosure relates to a display device, a display method, and a computer-readable storage medium.
Recently, with advances in high-speed CPUs, high-definition screen display techniques, small and lightweight batteries, the spread of wireless network environments, widened bandwidths, and the like, information devices have evolved significantly. As display devices that provide images to a user, not only smartphones, which are a representative example, but also so-called wearable devices worn by the user have become popular. For example, Japanese Patent Application Laid-open No. 2011-096171 describes a device that provides a sense as if a virtual object were actually present by presenting multiple kinds of sensing information to a user. Moreover, Japanese Patent Application Laid-open No. 2014-052518 describes that preferences of a user are determined from biological information, and advertising information is determined based on the determination result.
For display devices that provide an image to a user, appropriate provision of an image is desired.
A display device according to an embodiment includes: a display unit configured to display an image; a biological sensor configured to detect biological information of a user; an output-specification determining unit configured to determine a display specification of a sub-image to be displayed on the display unit based on the biological information of the user; an output control unit configured to cause the display unit to display the sub-image in a superimposed manner on a main image that is visually recognized through the display unit and based on the display specification; and an environment sensor configured to detect environment information of a periphery of the display device. The environment information includes location information of the user. The biological information includes brain wave information of the user. The output-specification determining unit is configured to determine display time of the sub-image per unit time as the display specification of the sub-image based on the location information of the user and the brain wave information of the user.
A display method according to an embodiment includes: detecting biological information of a user; determining a display specification of a sub-image to be displayed on a display unit based on the biological information of the user; detecting environment information of a periphery of the display unit; and causing the display unit to display the sub-image in a superimposed manner on a main image that is visually recognized through the display unit and based on the display specification. The environment information includes location information of the user. The biological information includes brain wave information of the user. The determining of the display specification includes determining display time of the sub-image per unit time as the display specification of the sub-image based on the location information of the user and the brain wave information of the user.
A non-transitory computer-readable storage medium according to an embodiment stores a computer program causing a computer to execute: detecting biological information of a user; determining a display specification of a sub-image to be displayed on a display unit based on the biological information of the user; detecting environment information of a periphery of the display unit; and causing the display unit to display the sub-image in a superimposed manner on a main image that is visually recognized through the display unit and based on the display specification. The environment information includes location information of the user. The biological information includes brain wave information of the user. The determining of the display specification includes determining display time of the sub-image per unit time as the display specification of the sub-image based on the location information of the user and the brain wave information of the user.
Hereinafter, the present embodiment will be explained in detail based on the drawings. The embodiment explained below is not intended to limit the present disclosure.
Main Image
Sub-Image
As illustrated in
The sub-image PS may have arbitrary contents, but in the present embodiment, it is an advertisement. The advertisement herein signifies information introducing a commodity product or a service. The sub-image is not limited to an advertisement, and may be any image including information to be notified to the user U. For example, the sub-image may be a navigation image showing a direction to the user U. In
As described, the display device 10 provides the main image PM and the sub-image PS, but may also display, on the display unit 26A, a content image having different contents from the main image PM and the sub-image PS. The content image may be an image of any content, such as a movie or a TV program.
Configuration of Display Device
Environment Sensor
The environment sensor 20 is a sensor that detects environment information around the display device 10. The environment information around the display device 10 is also regarded as information that indicates what environment the display device 10 is in. Moreover, because the display device 10 is mounted on the user U, it can also be said that the environment sensor 20 detects environment information around the user U.
The environment sensor 20 includes the camera 20A, a microphone 20B, a GNSS receiver 20C, an acceleration sensor 20D, a gyro sensor 20E, a light sensor 20F, a temperature sensor 20G, and a humidity sensor 20H. Note that the environment sensor 20 may include any sensor that detects environment information; it may include at least one of the camera 20A, the microphone 20B, the GNSS receiver 20C, the acceleration sensor 20D, the gyro sensor 20E, the light sensor 20F, the temperature sensor 20G, and the humidity sensor 20H, or may include other sensors.
The camera 20A is an imaging device, and images a periphery of the display device 10 as the environment information by detecting visible light around the display device 10 (the user U). The camera 20A may be a video camera that images at a predetermined frame rate. In the display device 10, the camera 20A may be arranged at an arbitrary position and in an arbitrary orientation, but the camera 20A is arranged in the device 10A illustrated in
The microphone 20B is a microphone that detects sound (sound wave information) around the display device 10 (the user U) as the environment information. In the display device 10, the microphone 20B may be arranged at an arbitrary position, in an arbitrary orientation, and in an arbitrary number. When the microphone 20B is provided in plurality, information of a direction in which the microphones 20B are directed is also acquired.
The GNSS receiver 20C is a device that detects position information of the display device 10 (the user U) as the environment information. The position information herein is terrestrial coordinates. In the present embodiment, the GNSS receiver 20C is a so-called global navigation satellite system (GNSS) module, and receives a radio wave from a satellite to detect position information of the display device 10 (the user U).
The acceleration sensor 20D is a sensor that detects acceleration of the display device 10 (the user U) as the environment information, and detects, for example, gravity, vibration, impact, and the like.
The gyro sensor 20E is a sensor that detects a rotation and an orientation of the display device 10 (the user U) as the environment information, and performs detection by using the Coriolis force, the Euler force, the centrifugal force, and the like.
The light sensor 20F is a sensor that detects intensity of light around the display device 10 (the user U) as the environment information. The light sensor 20F can detect the intensity of visible light, infrared ray, and ultraviolet ray.
The temperature sensor 20G is a sensor that detects the temperature of the periphery of the display device 10 (the user U) as the environment information.
The humidity sensor 20H is a sensor that detects the humidity of the periphery of the display device 10 (the user U) as the environment information.
Biological Sensor
The biological sensor 22 is a sensor that detects biological information of the user U. The biological sensor 22 may be arranged at an arbitrary position as long as the biological information of the user U can be detected. The biological information herein is not unchanging information, such as a fingerprint, but is preferably information whose value changes according to the condition of the user U. More specifically, the biological information herein is preferably information relating to the autonomic nerves, that is, information whose value changes irrespective of the intention of the user U. Specifically, the biological sensor 22 includes a pulse wave sensor 22A and a brain wave sensor 22B, and detects a pulse wave and a brain wave of the user U as the biological information.
The pulse wave sensor 22A is a sensor that detects a pulse wave of the user U. The pulse wave sensor 22A may be a sensor of a transmission photoelectric system that includes a light emitting unit and a light receiving unit. In this case, the pulse wave sensor 22A has, for example, a structure in which the light emitting unit and the light receiving unit oppose each other, sandwiching a fingertip of the user U, and the light receiving unit receives light that has passed through the fingertip; the pulse wave sensor 22A may measure a waveform of pulses by using the phenomenon that blood flow increases as the pressure of a pulse wave becomes larger. However, the pulse wave sensor 22A is not limited thereto, and may be of any system capable of detecting a pulse wave.
The brain wave sensor 22B is a sensor that detects a brain wave of the user U. The brain wave sensor 22B may have any configuration as long as a brain wave of the user U can be detected. Theoretically, it is sufficient if an α wave, a β wave, and the basic rhythmic (background brain wave) activity that appears in the entire brain can be grasped and an increase or decrease in activity of the entire brain can be detected; therefore, only a few electrodes need to be arranged. Because only rough changes in the condition of the user U need to be measured in the present embodiment, unlike brain wave measurement for medical purposes, for example, only two electrodes may be mounted at the forehead and an ear, and a very simple surface brain wave may be detected.
The biological sensor 22 is not limited to detecting a pulse wave and a brain wave as the biological information, and may detect, for example, at least one of the pulse wave and the brain wave. Furthermore, the biological sensor 22 may detect information other than a pulse wave and a brain wave as the biological information, for example, an amount of sweating, the size of the pupils, and the like.
Input Unit
The input unit 24 is a device that accepts an operation by a user, and is, for example, a touch panel and the like.
Output Unit
The output unit 26 is a device that outputs a stimulus to at least one of the five senses of the user U.
Specifically, the output unit 26 includes the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C. The display unit 26A is a display that outputs a visual stimulus for the user U by displaying an image, and can also be referred to as a visual-stimulus output unit. In the present embodiment, the display unit 26A is a so-called head-mounted display (HMD). The display unit 26A displays the sub-image PS so as to be superimposed on the main image PM as described above. The sound output unit 26B is a device (speaker) that outputs an audio stimulus for the user U by outputting sound, and can also be referred to as an audio-stimulus output unit. The tactile-stimulus output unit 26C is a device that outputs a tactile stimulus for the user U. For example, the tactile-stimulus output unit 26C outputs a tactile stimulus by physical action, such as vibration, but the kind of the tactile stimulus is not limited to vibration and the like, and may be any kind.
As described, the output unit 26 stimulates the sense of sight, the sense of hearing, and the sense of touch out of the five human senses. However, the output unit 26 is not limited to outputting a visual stimulus, an audio stimulus, and a tactile stimulus. For example, the output unit 26 may be configured to output at least one of a visual stimulus, an audio stimulus, and a tactile stimulus, may be configured to output at least a visual stimulus (display an image), may be configured to output either one of an audio stimulus and a tactile stimulus in addition to a visual stimulus, or may be configured to output another sensing stimulus out of the five senses (that is, at least one of a taste stimulus and an olfactory stimulus) in addition to at least one of a visual stimulus, an audio stimulus, and a tactile stimulus.
Communication Unit
The communication unit 28 is a module to communicate with an external device, and may include, for example, an antenna and the like. The communication method of the communication unit 28 is wireless communication in the present embodiment, but the communication method may be any method. The communication unit 28 includes a sub-image receiving unit 28A. The sub-image receiving unit 28A is a receiver that receives sub-image data, which is image data of a sub-image. The content presented with a sub-image can include a sound and a tactile stimulus. In this case, the sub-image receiving unit 28A may receive sound data and tactile stimulus data together with the image data of the sub-image as the sub-image data. Moreover, when the display unit 26A displays a content image other than the sub-image described above, the communication unit 28 also receives image data of the content image.
Storage Unit
The storage unit 30 is a memory that stores various kinds of information, such as a calculation content of the control unit 32 and a computer program, and includes, for example, at least one of a main storage device, such as a random access memory (RAM) and a read only memory (ROM), and an external storage device, such as a hard disk drive (HDD).
The storage unit 30 stores a learning model 30A, map data 30B, and a specification setting database 30C. The learning model 30A is an AI model that is used to identify the environment by which the user U is surrounded based on the environment information. The map data 30B is data including position information of building structures, natural objects, and the like that actually exist, and can be regarded as data in which terrestrial coordinates and the building structures, natural objects, and the like that actually exist are associated with each other. The specification setting database 30C is a database that includes information to determine a display specification of the sub-image PS as described later. Processing using the learning model 30A, the map data 30B, the specification setting database 30C, and the like will be described later. The learning model 30A, the map data 30B, the specification setting database 30C, and a computer program for the control unit 32 stored in the storage unit 30 may be stored in a recording medium that can be read by the display device 10. Moreover, the computer program for the control unit 32, the learning model 30A, the map data 30B, and the specification setting database 30C are not limited to being stored in the storage unit 30 in advance, and may be acquired from an external device by the display device 10 by communication at the time when these pieces of data are used.
Control Unit
The control unit 32 is an arithmetic device, that is, a central processing unit (CPU). The control unit 32 includes an environment-information acquiring unit 40, a biological-information acquiring unit 42, an environment identifying unit 44, a user-condition identifying unit 46, an output selecting unit 48, an output-specification determining unit 50, a sub-image acquiring unit 52, and an output control unit 54. The control unit 32 reads and executes computer programs (software) from the storage unit 30 to implement the environment-information acquiring unit 40, the biological-information acquiring unit 42, the environment identifying unit 44, the user-condition identifying unit 46, the output selecting unit 48, the output-specification determining unit 50, the sub-image acquiring unit 52, and the output control unit 54, and performs processing of those. The control unit 32 may perform the processing by a single CPU, or may include plural CPUs to perform the processing by the plural CPUs. Moreover, at least one of the environment-information acquiring unit 40, the biological-information acquiring unit 42, the environment identifying unit 44, the user-condition identifying unit 46, the output selecting unit 48, the output-specification determining unit 50, the sub-image acquiring unit 52, and the output control unit 54 may be implemented by hardware.
The environment-information acquiring unit 40 controls the environment sensor 20, to cause the environment sensor 20 to detect environment information. The environment-information acquiring unit 40 acquires the environment information detected by the environment sensor 20. Processing of the environment-information acquiring unit 40 will be described later. When the environment-information acquiring unit 40 is hardware, it can also be referred to as environment information detector.
The biological-information acquiring unit 42 controls the biological sensor 22, to cause the biological sensor 22 to detect biological information. The biological-information acquiring unit 42 acquires the biological information detected by the biological sensor 22. Processing of the biological-information acquiring unit 42 will be described later. When the biological-information acquiring unit 42 is hardware, it can also be referred to as biological information detector.
The environment identifying unit 44 identifies the environment by which the user U is surrounded based on the environment information acquired by the environment-information acquiring unit 40. The environment identifying unit 44 calculates an environment score, which is a score to identify the environment, and identifies an environment state pattern indicating the state of the environment based on the environment score, thereby identifying the environment. Processing of the environment identifying unit 44 will be described later.
The user-condition identifying unit 46 identifies the condition of the user U based on the biological information acquired by the biological-information acquiring unit 42. Processing of the user-condition identifying unit 46 will be described later.
The output selecting unit 48 selects a target device to be actuated in the output unit 26 based on at least one of the environment information acquired by the environment-information acquiring unit 40 and the biological information acquired by the biological-information acquiring unit 42. Processing of the output selecting unit 48 will be described later. When the output selecting unit 48 is hardware, it may also be referred to as sense selector.
The output-specification determining unit 50 determines an output specification of a stimulus (visual stimulus, audio stimulus, tactile stimulus in this example) output by the output unit 26 based on at least one of the environment information acquired by the environment-information acquiring unit 40 and the biological information acquired by the biological-information acquiring unit 42. It is also, for example, regarded that the output-specification determining unit 50 determines a display specification (output specification) of the sub-image PS that is displayed by the display unit 26A based on at least one of the environment information acquired by the environment-information acquiring unit 40 and the biological information acquired by the biological-information acquiring unit 42. The output specification is an index indicating how a stimulus output by the output unit 26 is to be output, and details are described later. Processing of the output-specification determining unit 50 will be described later.
The sub-image acquiring unit 52 acquires sub-image data through the sub-image receiving unit 28A.
The output control unit 54 controls the output unit 26 to perform output. The output control unit 54 causes a target device selected by the output selecting unit 48 to perform output in the output specification determined by the output-specification determining unit 50. For example, the output control unit 54 controls the display unit 26A to superimpose the sub-image PS acquired by the sub-image acquiring unit 52 on the main image PM and to display it in the display specification determined by the output-specification determining unit 50. When the output control unit 54 is hardware, it may also be referred to as a multisensory sense provider.
The display device 10 has a configuration as explained above.
Processing
Next, processing performed by the display device 10, more specifically, processing of causing the output unit 26 to perform output based on the environment information and the biological information, will be explained.
Acquisition of Environment Information
As illustrated in
Determination of Dangerous State
Having acquired the environment information, the display device 10 determines whether an environment of the periphery of the user U is in a dangerous state based on the environment information by the environment identifying unit 44 (step S12).
The environment identifying unit 44 determines whether it is in a dangerous state based on an image of the periphery of the display device 10 captured by the camera 20A. Hereinafter, the image of the periphery of the display device 10 captured by the camera 20A will be denoted as a periphery image as appropriate. For example, the environment identifying unit 44 identifies an object shown in the periphery image, and determines whether it is in a dangerous state based on the type of the identified object. More specifically, the environment identifying unit 44 may determine that it is in a dangerous state when an object shown in the periphery image is a specific object defined in advance, and may determine that it is not in a dangerous state when the object is not the specific object. The specific object may be arbitrarily defined, and may be an object that can cause a danger for the user U, such as a flame indicating a fire, a vehicle, or a sign indicating construction. Moreover, the environment identifying unit 44 may determine whether it is in a dangerous state based on plural periphery images that are chronologically sequentially captured. For example, the environment identifying unit 44 identifies an object for each of the plural periphery images that are chronologically sequentially captured, and determines whether those objects are a specific object and are the identical object. When the same specific object is shown, the environment identifying unit 44 determines whether the specific object shown in a periphery image captured later in chronological order is relatively larger in the image, that is, whether the specific object is becoming closer to the user U. The environment identifying unit 44 determines that it is in a dangerous state when the specific object shown in the periphery image captured later is larger, that is, when the specific object is becoming closer to the user U. On the other hand, the environment identifying unit 44 determines that it is not in a dangerous state when the specific object shown in the periphery image captured later is not larger, that is, when the specific object is not becoming closer to the user U. As described, the environment identifying unit 44 may determine whether it is in a dangerous state based on one periphery image, or may determine whether it is in a dangerous state based on plural periphery images chronologically sequentially captured. For example, the environment identifying unit 44 may switch determination methods according to the type of object shown in the periphery image. The environment identifying unit 44 may determine that it is in a dangerous state from a single periphery image when a specific object that enables determination of danger from a single periphery image, such as a flame indicating a fire, is shown. Furthermore, the environment identifying unit 44 may perform determination of a dangerous state based on plural periphery images chronologically sequentially captured when a specific object from which determination of danger is not possible from a single periphery image, such as a vehicle, is shown.
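As a non-limiting illustration, the above determination based on chronologically sequential periphery images may be organized as in the following Python sketch; the object names, apparent sizes, and the list of specific objects are assumptions for explanation only, and the per-image identification itself is performed, for example, by the learning model 30A described next.

    # Hypothetical specific objects; a flame is dangerous even in a single image,
    # while a vehicle requires confirming that it is approaching.
    SPECIFIC_OBJECTS = {"vehicle", "flame", "construction_sign"}
    SINGLE_IMAGE_DANGER = {"flame"}

    def is_dangerous_state(detections):
        # detections: list of (object_type, apparent_size) per periphery image,
        # ordered chronologically; object_type is None when no object is identified.
        types = {t for t, _ in detections if t is not None}
        # A specific object that is dangerous by itself (e.g., a flame indicating a fire).
        if types & SINGLE_IMAGE_DANGER:
            return True
        # Otherwise require the same specific object to appear larger in a later image.
        if len(detections) >= 2:
            first_type, first_size = detections[0]
            last_type, last_size = detections[-1]
            if (first_type == last_type and first_type in SPECIFIC_OBJECTS
                    and last_size > first_size):
                return True
        return False

    # Example: the same vehicle appears larger in the later periphery image,
    # so it is judged to be approaching the user U.
    print(is_dangerous_state([("vehicle", 120), ("vehicle", 180)]))  # True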
The environment identifying unit 44 may perform identification of an object shown in the periphery image by an arbitrary method and, for example, may identify the object by using the learning model 30A. In this case, for example, the learning model 30A is an AI model in which data of an image and information indicating the type of object shown in the image form one data set, and that is constructed by performing learning with plural data sets as learning data. The environment identifying unit 44 inputs image data of the periphery image into the learned learning model 30A, and acquires information identifying the type of the object shown in the periphery image, to perform identification of the object.
Moreover, the environment identifying unit 44 may determine whether it is in a dangerous state based on position information acquired by the GNSS receiver 20C in addition to the periphery image. In this case, the environment identifying unit 44 acquires location information indicating the location of the user U based on the position information of the display device 10 (the user U) acquired by the GNSS receiver 20C and the map data 30B. The location information is information indicating at what kind of place the user U (the display device 10) is located, for example, information indicating that the user U is in a shopping center or on a street. The environment identifying unit 44 reads out the map data 30B, identifies the type of a structural object or a natural object within a predetermined distance range of the current position of the user U, and identifies the location information from the structural object or the natural object. For example, when the current position of the user U overlaps the coordinates of a shopping center, it is identified as the location information that the user U is at the shopping center. The environment identifying unit 44 determines that it is in a dangerous state when the location information and the type of the object identified from the periphery image are in a specific relationship, and determines that it is not in a dangerous state when they are not in the specific relationship. The specific relationship may be arbitrarily defined; for example, a combination of an object and a location that can cause a danger when the object is present at that location may be defined as the specific relationship.
Moreover, the environment identifying unit 44 determines whether it is in a dangerous state based on sound information acquired by the microphone 20B. Hereinafter, the sound information of the periphery of the display device 10 acquired by the microphone 20B is denoted as a periphery sound as appropriate. For example, the environment identifying unit 44 identifies the type of sound included in the periphery sound, and determines whether it is in a dangerous state based on the identified type of sound. More specifically, the environment identifying unit 44 may determine that it is in a dangerous state when the type of sound included in the periphery sound is a specific sound defined in advance, and may determine that it is not in a dangerous state when it is not the specific sound. The specific sound may be arbitrarily defined and, for example, may be a sound that can cause a danger for the user U, such as a sound indicating a fire, a sound of a vehicle, or a sound indicating construction.
The environment identifying unit 44 may perform identification of the type of sound included in the periphery sound by any method but, for example, may identify the type of sound by using the learning model 30A. In this case, for example, the learning model 30A is an AI model in which sound data (for example, data indicating the frequency and strength of a sound) and information indicating the type of the sound form one data set, and that is constructed by performing learning with plural data sets as learning data. The environment identifying unit 44 inputs sound data of the periphery sound into the learned learning model 30A, and acquires information identifying the type of sound included in the periphery sound, to perform identification of the type of the sound.
Moreover, the environment identifying unit 44 may determine whether it is in a dangerous state based on position information acquired by the GNSS receiver 20C in addition to the periphery sound. In this case, the environment identifying unit 44 acquires location information indicating the location of the user U based on the position information of the display device 10 (the user U) acquired by the GNSS receiver 20C and the map data 30B. The environment identifying unit 44 determines that it is in a dangerous state when the location information and the type of sound identified from the periphery sound are in a specific relationship, and determines that it is not in a dangerous state when they are not in the specific relationship. The specific relationship may be arbitrarily defined; for example, a combination of a sound and a location that can cause a danger when the sound occurs at that location may be defined as the specific relationship.
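As a non-limiting illustration, the specific relationship may be held as a predefined set of combinations of a location and an identified object or sound, as in the following sketch; the concrete combinations listed are assumptions for explanation only.

    # Hypothetical combinations of a location and an identified object or sound
    # that are regarded as the specific relationship causing a danger.
    DANGEROUS_COMBINATIONS = {
        ("street", "vehicle"),
        ("street", "vehicle_sound"),
        ("shopping_center", "flame"),
    }

    def in_specific_relationship(location, identified_type):
        # True when the location information and the type identified from the
        # periphery image or the periphery sound form a dangerous combination.
        return (location, identified_type) in DANGEROUS_COMBINATIONS

    print(in_specific_relationship("street", "vehicle_sound"))  # True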
As described, in the present embodiment, the environment identifying unit 44 determines a dangerous state based on the periphery image and the periphery sound. However, the determination method of a dangerous state is not limited to the above method and is arbitrary; the environment identifying unit 44 may determine a dangerous state, for example, based on either one of the periphery image and the periphery sound. Moreover, the environment identifying unit 44 may determine a dangerous state based on at least one of an image of the periphery of the display device 10 captured by the camera 20A, a sound of the periphery of the display device 10 detected by the microphone 20B, and location information acquired by the GNSS receiver 20C. Furthermore, in the present embodiment, determination of a dangerous state is not essential, and may be omitted.
Setting of Danger Notification Content
When it is determined as a dangerous state (step S12: YES), the display device 10 sets, by the output control unit 54, a danger notification content that is a notification content to notify that it is in a dangerous state (step S14). The display device 10 sets the danger notification content based on the details of the dangerous state. The details of the dangerous state are information indicating what kind of danger is arising, and are identified from the type of object shown in a periphery image, the type of sound included in a periphery sound, and the like. For example, when the object is a vehicle and is approaching, the details of the dangerous state are that a vehicle is approaching. The danger notification content is information indicating the details of the dangerous state. For example, when the details of the dangerous state are that a vehicle is approaching, the danger notification content is information indicating that a vehicle is approaching.
The danger notification content varies according to the type of the target device selected at step S26 described later. For example, when the display unit 26A is the target device, the danger notification content is the display content (contents) of the sub-image PS. That is, the danger notification content is displayed as the sub-image PS superimposed on the main image PM. In this case, for example, the danger notification content is image data indicating a content such as “Be careful as a vehicle is approaching”. On the other hand, when the sound output unit 26B is the target device, the danger notification content is a sound content output from the sound output unit 26B. In this case, for example, the danger notification content is sound data to output a sound, “A vehicle is approaching. Be careful”. Moreover, when the tactile-stimulus output unit 26C is the target device, the danger notification content is a tactile stimulus content output from the tactile-stimulus output unit 26C. In this case, for example, the danger notification content is a tactile stimulus drawing the attention of the user U.
The setting of the danger notification content of step S14 may be performed at any time after it is determined that it is in a dangerous state at step S12 and before the danger notification content is output at step S38 of a later stage, and may be performed, for example, after the target device is selected at step S32 of a later stage.
Calculation of Environment Score
When it is determined that it is not in a dangerous state (step S12: NO), the display device 10 calculates various kinds of environment scores based on the environment information by the environment identifying unit 44, as indicated at step S16 to step S22. The environment score is a score to identify the environment by which the user U (the display device 10) is surrounded. Specifically, the environment identifying unit 44 calculates a posture score (step S16), a location score (step S18), a movement score (step S20), and a safety score (step S22) as the environment scores. The order of step S16 to step S22 is not limited thereto and is arbitrary. Also when the danger notification content is set at step S14, the respective kinds of environment scores are calculated as indicated at step S16 to step S22. In the following, the environment score will be specifically explained.
Posture Score
The environment identifying unit 44 calculates a posture score as the environment score for a category of posture of the user U. That is, the posture score is information indicating a posture of the user U, and it can be regarded as information indicating what posture the user U is in as a numerical value. The environment identifying unit 44 calculates the posture score based on environment information relating to the posture of the user U out of plural types of environment information. The environment information relating to the posture of the user U includes the periphery image captured by the camera 20A and the orientation of the display device 10 detected by the gyro sensor 20E.
More specifically, in the example in
Furthermore, the environment identifying unit 44 calculates the posture score for the sub-category of face orientation being horizontal direction based on an orientation of the display device 10 detected by the gyro sensor 20E. The posture score for the sub-category of face orientation being horizontal direction can be regarded as a numerical value indicating a degree of match with the horizontal direction of the posture (orientation of the face) of the user U. The calculation method of the posture score for the sub-category of face orientation being horizontal direction may be arbitrary. In this example, the degree of match with the face orientation being horizontal direction is considered, but a degree of match with it being in any direction may be considered.
As described, it can be regarded that the environment identifying unit 44 sets the information indicating the posture of the user U (the posture score in this example) based on the periphery image and the orientation of the display device 10. However, the environment identifying unit 44 is not limited to using the periphery image and the orientation of the display device 10 to set the information indicating the posture of the user U; it may use arbitrary environment information, for example, at least one of the periphery image and the orientation of the display device 10.
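As one hedged example, the posture score for the sub-category of the face orientation being the horizontal direction may be computed as a degree of match from the pitch angle detected by the gyro sensor 20E, as sketched below; the linear mapping and the normalization by 90 degrees are assumptions for explanation only.

    def posture_score_horizontal(pitch_deg):
        # pitch_deg: pitch angle of the display device 10 obtained from the gyro
        # sensor 20E, where 0 degrees means the face is oriented horizontally.
        deviation = min(abs(pitch_deg), 90.0)
        return round(100.0 * (1.0 - deviation / 90.0))

    # Example: facing straight ahead scores 100, looking 45 degrees down scores 50.
    print(posture_score_horizontal(0.0), posture_score_horizontal(-45.0))  # 100 50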
Location Score
The environment identifying unit 44 calculates a location score as the environment score for a category of location of the user U. That is, the location score is information indicating a location of the user U, and it can be regarded as information indicating what kind of place the user U is positioned at as a numerical value. The environment identifying unit 44 calculates the location score based on environment information relating to the location of the user U out of plural types of environment information. The environment information relating to the location of the user U includes the periphery image captured by the camera 20A, the position information of the display device 10 acquired by the GNSS receiver 20C, and the periphery sound acquired by the microphone 20B.
More specifically, in the example in
The environment identifying unit 44 calculates the location score for the sub-category of on railway track based on the position information of the display device 10 acquired by the GNSS receiver 20C. The location score for the sub-category of on railway track can be regarded as a numerical value indicating a degree of match of the location of the user U with the location being on a railway track. The calculation method of the location score for the sub-category of on railway track may be arbitrary but, for example, the map data 30B may be used. For example, the environment identifying unit 44 reads out the map data 30B, and calculates the location score such that the degree of match of the location of the user U with the location being on a railway track becomes high when the current position of the user U overlaps the coordinates of a railway track. In this example, the degree of match with a location on a railway track is calculated but, not limited thereto, a degree of match with the position of any kind of structural object, natural object, and the like may be calculated.
The environment identifying unit 44 calculates the location score for the sub-category of sound inside train car based on the periphery sound acquired by the microphone 20B. The location score for the sub-category of sound inside train car can be regarded as a numerical value indicating a degree of match of the periphery sound with a sound inside a train car. A calculation method of the location score for the sub-category of sound inside train car may be arbitrary but, for example, it may be determined by a method similar to the method of determining whether it is in a dangerous state based on the periphery sound as described above, that is, by determining whether the periphery sound is a specific type of sound. Although the degree of match with the sound inside a train car is calculated in this example, not limited thereto, a degree of match with sound of any place may be calculated.
As described, it can be regarded that the environment identifying unit 44 sets the information indicating the location of the user U (the location score in this example) based on the periphery image, the periphery sound, and the position information of the display device 10. However, the environment identifying unit 44 is not limited to using the periphery image, the periphery sound, and the position information of the display device 10 to set the information indicating the location of the user U; it may use arbitrary environment information, for example, at least one of the periphery image, the periphery sound, and the position information of the display device 10.
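As one hedged example, the location score for the sub-category of on railway track may be computed by checking whether the current position from the GNSS receiver 20C overlaps a region registered in the map data 30B, as sketched below; the rectangular region representation, the margin, and the two-valued score are assumptions for explanation only.

    def location_score_on_track(current_pos, track_regions, margin_m=10.0):
        # current_pos: (x, y) terrestrial coordinates of the display device 10
        # from the GNSS receiver 20C; track_regions: list of axis-aligned
        # rectangles ((x_min, y_min), (x_max, y_max)) taken from the map data 30B.
        x, y = current_pos
        for (x_min, y_min), (x_max, y_max) in track_regions:
            if (x_min - margin_m <= x <= x_max + margin_m
                    and y_min - margin_m <= y <= y_max + margin_m):
                return 100
        return 0

    # Example: the current position overlaps a registered railway-track region.
    print(location_score_on_track((105.0, 42.0), [((100.0, 40.0), (200.0, 45.0))]))  # 100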
Movement Score
The environment identifying unit 44 calculates a movement score as the environment score for a category of movement of the user U. That is, the movement score is information indicating a movement of the user U, and it can be regarded as information indicating how the user U is moving as a numerical value. The environment identifying unit 44 calculates the movement score based on environment information relating to the movement of the user U out of plural types of environment information. The environment information relating to the movement of the user U includes the acceleration information acquired by the acceleration sensor 20D.
More specifically, in the example in
As described, it can be regarded that the environment identifying unit 44 sets the information indicating the movement of the user U (the movement score in this example) based on the acceleration information of the display device 10 and the position information of the display device 10. However, the environment identifying unit 44 is not limited to using the acceleration information and the position information to set the information indicating the movement of the user U; it may use arbitrary environment information, for example, at least one of the acceleration information and the position information.
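As one hedged example, a movement score such as a degree of match with walking may be derived from the variation of the acceleration magnitude detected by the acceleration sensor 20D, as sketched below; the band boundaries and the score values are assumptions for explanation only.

    def movement_score_walking(accel_magnitudes):
        # accel_magnitudes: acceleration magnitudes (m/s^2, gravity removed)
        # sampled by the acceleration sensor 20D over a short window.
        if len(accel_magnitudes) < 2:
            return 0
        mean = sum(accel_magnitudes) / len(accel_magnitudes)
        variation = sum(abs(a - mean) for a in accel_magnitudes) / len(accel_magnitudes)
        if variation < 0.2:      # almost no variation: standing or sitting still
            return 10
        if variation < 1.5:      # moderate, periodic variation: walking
            return 90
        return 40                # large variation: running or riding a vehicle

    print(movement_score_walking([0.8, 1.2, 0.9, 1.4, 0.7]))  # 90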
Safety Score
The environment identifying unit 44 calculates a safety score as the environment score for a category of safety of the user U. That is, the safety score is information indicating safety of the user U, and it can be regarded as information indicating whether the user U is in a safe environment as a numerical value. The environment identifying unit 44 calculates the safety score based on environment information relating to the safety of the user U out of plural types of environment information. The environment information relating to the safety of the user U includes the periphery image captured by the camera 20A, the periphery sound acquired by the microphone 20B, the intensity information of light detected by the light sensor 20F, the temperature information of the periphery detected by the temperature sensor 20G, and the humidity information of the periphery detected by the humidity sensor 20H.
More specifically, in the example in
The environment identifying unit 44 calculates the safety score for the sub-category of appropriate amount of infrared ray or ultraviolet ray based on the intensity of infrared ray and ultraviolet ray in the periphery acquired by the light sensor 20F. The safety score for the sub-category of appropriate amount of infrared ray or ultraviolet ray can be regarded as a numerical value indicating a degree of match of intensity of infrared ray or ultraviolet ray in the periphery with an appropriate intensity of infrared ray or ultraviolet ray. A calculation method of the safety score for the sub-category of appropriate amount of infrared ray or ultraviolet ray may be arbitrary and, for example, calculation may be performed by using the intensity of infrared ray or ultraviolet ray detected by the light sensor 20F. Although the degree of match with an appropriate intensity of infrared ray or ultraviolet ray is calculated in this example, not limited thereto, for example, a degree of match with an arbitrary intensity of infrared ray or ultraviolet ray may be calculated.
The environment identifying unit 44 calculates the safety score for the sub-category of appropriate temperature based on temperature of the periphery acquired by the temperature sensor 20G. The safety score for the sub-category of appropriate temperature can be regarded as a numerical value indicating a degree of match of the temperature of the periphery with an appropriate temperature. A calculation method of the safety score for the sub-category of appropriate temperature may be arbitrary and, for example, calculation may be performed based on temperature of the periphery detected by the temperature sensor 20G. Although the degree of match with appropriate temperature is calculated in this example, not limited thereto, a degree of match with arbitrary temperature may be calculated.
The environment identifying unit 44 calculates the safety score for the sub-category of appropriate humidity based on humidity of the periphery acquired by the humidity sensor 20H. The safety score for the sub-category of appropriate humidity can be regarded as a numerical value indicating a degree of match of the humidity of the periphery with an appropriate humidity. A calculation method of the safety score for the sub-category of appropriate humidity may be arbitrary and, for example, calculation may be performed based on the humidity of the periphery detected by the humidity sensor 20H. Although the degree of match with an appropriate humidity is calculated in this example, not limited thereto, a degree of match with arbitrary humidity may be calculated.
The environment identifying unit 44 calculates the safety score for the sub-category of presence of a dangerous object based on the periphery image acquired by the camera 20A. The safety score for the sub-category of presence of a dangerous object can be regarded as a numerical value indicating a degree of match with presence of a dangerous object. A calculation method of the safety score for the sub-category of presence of a dangerous object may be arbitrary and, for example, it may be determined by a method similar to the method of determining whether it is in a dangerous state based on the periphery image as described above, that is, by determining whether an object included in the periphery image is a specific object. Furthermore, the environment identifying unit 44 calculates the safety score for the sub-category of presence of a dangerous object based on the periphery sound acquired by the microphone 20B also. A calculation method of the safety score for the sub-category of presence of a dangerous object may be arbitrary and, for example, it may be determined by a method similar to the method of determining whether it is in a dangerous state based on the periphery sound as described above, that is, for example, by determining whether the periphery sound is a specific sound.
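Several of the safety sub-categories (appropriate temperature, appropriate humidity, appropriate amount of infrared ray or ultraviolet ray) can each be seen as a degree of match with an appropriate range; one hedged way of computing such a score is sketched below, with the ranges and the linear falloff being assumptions for explanation only.

    def range_match_score(value, low, high, falloff):
        # Degree of match (0-100) of a measured value with an appropriate range:
        # 100 inside [low, high], decreasing linearly to 0 at a distance of `falloff`.
        if low <= value <= high:
            return 100
        distance = (low - value) if value < low else (value - high)
        return max(0, round(100 * (1 - distance / falloff)))

    # Example with placeholder ranges: 18-28 deg C for temperature, 30-70 %RH for humidity.
    print(range_match_score(31.0, 18.0, 28.0, falloff=10.0))  # 70
    print(range_match_score(55.0, 30.0, 70.0, falloff=30.0))  # 100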
One Example of Environment Score
The kinds of the categories and the sub-categories shown in
Determination of Environment Pattern
The display device 10 calculates the respective kinds of environment scores by the method explained above at step S16 to step S22 in
In the examples in
Furthermore, in the examples in
Furthermore, in the examples in
Furthermore, in the examples in
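Although the concrete environment state patterns are defined in the figures, the selection can be pictured as choosing the pattern whose reference scores are closest to the calculated environment scores, as in the following sketch; the pattern names, reference scores, and distance measure are assumptions for explanation only.

    # Hypothetical reference scores per environment state pattern, one entry per
    # environment score category (the real patterns are defined in the figures).
    ENVIRONMENT_PATTERNS = {
        "sitting_in_train": {"posture": 80, "location": 90, "movement": 30, "safety": 80},
        "walking_on_street": {"posture": 60, "location": 10, "movement": 90, "safety": 50},
    }

    def select_environment_pattern(scores):
        # Choose the pattern whose reference scores are closest to the calculated
        # environment scores (smallest total absolute difference).
        def distance(name):
            reference = ENVIRONMENT_PATTERNS[name]
            return sum(abs(scores[category] - reference[category]) for category in reference)
        return min(ENVIRONMENT_PATTERNS, key=distance)

    print(select_environment_pattern({"posture": 75, "location": 85, "movement": 25, "safety": 70}))
    # -> "sitting_in_train"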
Target Device and Settings of Standard Output Specification
Having selected the environment pattern, the display device 10 selects a target device to be activated from the output unit 26, and sets a standard output specification based on the environment pattern by the output selecting unit 48 and the output-specification determining unit 50 as illustrated in
The target device is a device to be activated in the output unit 26 as described above, and in the present embodiment, the output selecting unit 48 selects the target device from among the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C based on the environment pattern. Because the environment pattern is information indicating a current environment of the user U, by selecting the target device based on the environment pattern, an appropriate stimulus suitable for the current environment of the user U can be selected.
Moreover, the output-specification determining unit 50 determines a standard output specification, which is an output specification to be a standard, based on the environment pattern. The output specification is an index indicating how to output a stimulus that is output by the output unit 26. For example, the output specification of the display unit 26A indicates how to display the sub-image PS to be output, and can also be referred to as the display specification. In the present embodiment, the output specification of the display unit 26A includes the display time of the sub-image PS per unit time. The output-specification determining unit 50 determines the display time of the sub-image PS per unit time based on the environment pattern. The output-specification determining unit 50 may define the display time of the sub-image PS per unit time by changing the time for which the sub-image PS is displayed at one time, may define it by changing the display frequency of the sub-image PS, or may combine these two. By thus changing the display time of the sub-image PS per unit time, the visual stimulus to be given to the user U can be changed; for example, it can be said that the longer the display time is, the stronger the visual stimulus given to the user U is.
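Because the display time of the sub-image PS per unit time is determined by the duration of one display and the display frequency, it may be handled as in the following small sketch; the numerical values are examples only.

    def display_time_per_unit_time(seconds_per_display, displays_per_minute):
        # Display time of the sub-image PS per unit time (seconds per minute):
        # the product of the duration of one display and the display frequency.
        return seconds_per_display * displays_per_minute

    # Example: 2 seconds per display, 6 displays per minute -> 12 s of display per minute.
    # Lengthening either factor strengthens the visual stimulus given to the user U.
    print(display_time_per_unit_time(2.0, 6))  # 12.0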
Moreover, the output specification of the display unit 26A includes a display mode indicating how the sub-image PS is displayed when it is assumed that the sub-image PS is viewed as a still image. The display mode will be explained more specifically.
Moreover, the display mode includes a modification, which is an image that decorates the content (display content) included in the sub-image PS. The modification indicates, in this embodiment, the degree of emphasizing the sub-image PS being an advertisement. In
In the present embodiment, the display position of the sub-image PS and the modification are exemplified as the display mode as described above, but the display mode is not limited thereto and may be arbitrary. However, it is preferable that the display mode not be the content of the sub-image PS itself, that is, the advertisement content in this example. That is, as the display mode, it is preferable that the content of the sub-image PS itself be unchanged. When plural kinds of display modes are assumed, only one of them may be changed, or plural kinds of display modes may be changed.
As described, the output-specification determining unit 50 determines, based on the environment pattern, at least one of the display time of the sub-image PS per unit time and the display mode of the sub-image PS as the output specification of the display unit 26A. That is, the output-specification determining unit 50 may determine both the display time of the sub-image PS per unit time and the display mode of the sub-image PS, or may determine only one of them, as the output specification of the display unit 26A.
The output specification of the display unit 26A is explained above, but the output-specification determining unit 50 also determines the output specifications of the sound output unit 26B and the tactile-stimulus output unit 26C. The output specification (sound specification) of the sound output unit 26B includes volume, whether a sound effect is applied, and the like. The sound effect indicates a special effect, such as surround sound and spatial sound. By making the volume larger, or making the level of a sound effect higher, the degree of the audio stimulus to the user U can be made stronger. Moreover, the output specification of the tactile-stimulus output unit 26C includes the strength of the tactile stimulus, the frequency of the tactile stimulus, and the like. By making the strength or frequency of the tactile stimulus higher, the degree of the tactile stimulus to the user U can be made stronger.
In the example of
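As a non-limiting illustration, the relationship set in advance among an environment pattern, the target devices, and the standard output specification may be held as a table, for example, in the specification setting database 30C, as sketched below; the pattern names, devices, and specification values are assumptions for explanation only.

    # Hypothetical relationship, set in advance, among an environment pattern,
    # the target devices, and the standard output specification.
    SPECIFICATION_SETTINGS = {
        "sitting_in_train": (
            {"display", "tactile"},
            {"display_s_per_min": 12, "volume": 0, "tactile_strength": 1},
        ),
        "walking_on_street": (
            {"display"},
            {"display_s_per_min": 4, "volume": 0, "tactile_strength": 0},
        ),
    }

    def target_and_standard_spec(environment_pattern):
        # Look up the target devices and the standard output specification
        # associated with the selected environment pattern.
        return SPECIFICATION_SETTINGS[environment_pattern]

    devices, standard_spec = target_and_standard_spec("sitting_in_train")
    print(devices, standard_spec)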
As described, in the present embodiment, the display device 10 sets the target device and the standard output specification based on a relationship among an environment pattern, a target device, and a standard output specification set in advance. However, the setting method of the target device and the standard output specification is not limited thereto, and the display device 10 may set the target device and the standard output specification by any method based on the environment information detected by the environment sensor 20. Moreover, the display device 10 is not limited to select both the target device and the standard output specification based on the environment information, but may select at least one out of the target device and the standard output specification.
Acquisition of Biological Information
Furthermore, as illustrated in
On the other hand, as for the brain wave, by detecting waves, such as the α wave and the β wave, and the basal rhythm (background brain wave) activity that appears in the entire brain, and by detecting its amplitude, an increase or decrease of activity of the entire brain can be predicted to some extent. For example, from the degree of activity of the prefrontal area of the brain, a degree of attention, such as how much interest is paid to an object that stimulates the sense of sight, can be grasped.
Identification of User Condition and Calculation of Output-Specification Correction Level
As illustrated in
The user-condition identifying unit 46 determines the output-specification correction level based on the brain activity level of the user U. In the present embodiment, the output-specification correction level is determined based on output-specification correction-level relationship information indicating a relationship between the user condition (the brain activity level in this example) and the output-specification correction level. The output-specification correction-level relationship information is information (a table) in which the user condition and the output-specification correction level are stored in an associated manner and, for example, is stored in the specification setting database 30C. In the output-specification correction-level relationship information, the output-specification correction level is set for each type of the output unit 26, that is, the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C herein. The user-condition identifying unit 46 determines the output-specification correction level based on this output-specification correction-level relationship information and the identified user condition. Specifically, the user-condition identifying unit 46 reads out the output-specification correction-level relationship information, and selects an output-specification correction level that is associated with the set brain activity level of the user U from the output-specification correction-level relationship information, to determine the output-specification correction level. In the example in
Moreover, the user-condition identifying unit 46 identifies a mental stability level of the user U as the user condition based on the pulse wave information of the user U. In the present embodiment, the user-condition identifying unit 46 calculates a variation value of the interval length between chronologically successive R-waves WH, that is, a differential value of the R-R interval, and identifies the mental stability level of the user U based on the differential value of the R-R interval. The user-condition identifying unit 46 identifies the mental stability level of the user U as higher as the differential value of the R-R interval becomes smaller, that is, as the interval length between R-waves WH varies less. In the example in
The user-condition identifying unit 46 determines the output-specification correction level based on the output-specification correction-level relationship information and the identified mental stability level. Specifically, the user-condition identifying unit 46 reads out the output-specification correction-level relationship information, and selects an output-specification correction level that is associated with the set mental stability level of the user U from the output-specification correction-level relationship information, to determine the output-specification correction level. In the example in
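As a non-limiting illustration, identifying the mental stability level from the differential value of the R-R interval and then looking up the output-specification correction level per type of the output unit 26 may be sketched as follows; the thresholds, level names, and correction values are assumptions for explanation only and do not represent the contents of the specification setting database 30C.

    def mental_stability_level(rr_intervals_ms):
        # rr_intervals_ms: chronologically ordered R-R intervals in milliseconds.
        # The smaller the average absolute difference between successive intervals
        # (the differential value of the R-R interval), the higher the level.
        if len(rr_intervals_ms) < 2:
            return "middle"
        diffs = [abs(b - a) for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        mean_diff = sum(diffs) / len(diffs)
        if mean_diff < 20:
            return "high"
        if mean_diff < 60:
            return "middle"
        return "low"

    # Hypothetical output-specification correction-level relationship information:
    # user condition level -> correction level per type of the output unit 26.
    CORRECTION_LEVELS = {
        "high":   {"display": 1.2, "sound": 1.0, "tactile": 0.8},
        "middle": {"display": 1.0, "sound": 1.0, "tactile": 1.0},
        "low":    {"display": 0.6, "sound": 0.8, "tactile": 1.2},
    }

    level = mental_stability_level([820, 815, 830, 825])
    print(level, CORRECTION_LEVELS[level]["display"])  # high 1.2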
As described, the user-condition identifying unit 46 sets the output-specification correction level based on a relationship between the user condition and the output-specification correction level set in advance. However, the method of setting the output-specification correction level is not limited thereto, and the display device 10 may set the output-specification correction level by an arbitrary method based on the biological information detected by the biological sensor 22. Moreover, although the display device 10 calculates the output-specification correction level by using both the brain activity level identified from a brain wave and the mental stability level identified from a pulse wave, it is not limited thereto. For example, the display device 10 may calculate the output-specification correction level by using one of the brain activity level identified from a brain wave and the mental stability level identified from a pulse wave. Furthermore, by expressing the biological information as a numerical value and estimating the user condition based on the biological information, the display device 10 can factor in an error and the like of the biological information, and can estimate the mental condition of the user U more accurately. In other words, it can be said that by classifying the user condition based on the biological information into one of three or more levels, the display device 10 can estimate the mental condition of the user U accurately. However, the display device 10 is not limited to categorizing the biological information and the user condition based on the biological information into three or more levels, and may handle them, for example, as information indicating one of two possible values such as Yes and No.
Generation of Output-Restriction Necessity Information
Moreover, as illustrated in the corresponding flowchart, the display device 10 generates the output-restriction necessity information, which indicates whether use of each output unit 26 is allowed, based on the biological information (step S32).
Acquisition of Sub-Image
Furthermore, as illustrated in the corresponding flowchart, the display device 10 acquires image data of the sub-image PS by using the sub-image acquiring unit 52 (step S34).
The sub-image acquiring unit 52 may acquire image data of a sub-image having a content (display content) according to a position (terrestrial coordinates) of the display device 10 (the user U). The position of the display device 10 is identified by the GNSS receiver 20C. For example, when the user U is positioned within a predetermined range with respect to a certain position, the sub-image acquiring unit 52 receives a content relating to that position. Display of the sub-image PS is basically enabled according to the intention of the user U; however, once display is enabled, the user U does not know when and at what timing the sub-image PS will be shown, so it is convenient but can also be annoying. Therefore, information set by the user U, such as whether display of the sub-image PS is allowed and a display mode, may be stored in the specification setting database 30C. The sub-image acquiring unit 52 reads out this information from the specification setting database 30C, and controls acquisition of the sub-image PS based on this information. Moreover, the same information as the position information and the contents of the specification setting database 30C may be stored on a site on the Internet, and the sub-image acquiring unit 52 may control acquisition of the sub-image PS while checking the contents thereof. Step S34, at which image data of the sub-image PS is acquired, is not limited to being performed before step S36 described later, and may be performed at any time before step S38 described later.
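A minimal sketch of the position-based acquisition, assuming a simple planar distance check and a user setting read from the specification setting database 30C, is shown below; the radius, data layout, and function names are illustrative only.

```python
import math

def within_range(user_pos, content_pos, radius_m=100.0):
    """Rough planar distance check between two (latitude, longitude) pairs."""
    lat1, lon1 = user_pos
    lat2, lon2 = content_pos
    dy = (lat2 - lat1) * 111_000.0                                # approx. meters per degree
    dx = (lon2 - lon1) * 111_000.0 * math.cos(math.radians(lat1))
    return math.hypot(dx, dy) <= radius_m

def maybe_acquire_sub_image(user_pos, content, user_settings):
    # user_settings stands in for the information read from the specification
    # setting database 30C (whether display of the sub-image PS is allowed, etc.).
    if not user_settings.get("sub_image_display_allowed", False):
        return None
    if within_range(user_pos, content["position"]):
        return content["image_data"]
    return None

content = {"position": (35.6586, 139.7454), "image_data": "tower_info.png"}
print(maybe_acquire_sub_image((35.6590, 139.7450), content,
                              {"sub_image_display_allowed": True}))
```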
The sub-image acquiring unit 52 may acquire, together with the image data of the sub-image PS, sound data and tactile stimulus data relating to the sub-image PS. The sound output unit 26B outputs the sound data relating to the sub-image PS as a sound content (content of sound), and the tactile-stimulus output unit 26C outputs the tactile stimulus data relating to the sub-image PS as a tactile stimulus content (content of tactile stimulus).
Setting of Output Specification
Next, as illustrated in the corresponding flowchart, the display device 10 determines the output specification by using the output-specification determining unit 50, based on the standard output specification set from the environment information and the output-specification correction level set from the biological information (step S36).
As explained above, the display device 10 corrects the standard output specification set based on the environment information with the output-specification correction level set based on the biological information, to determine a final output specification. However, the display device 10 is not limited to determining an output specification by correcting the standard output specification with the output-specification correction level, and may determine an output specification by any method using at least one of the environment information and the biological information. That is, the display device 10 may determine an output specification by an arbitrary method based on the environment information and the biological information, or may determine an output specification by an arbitrary method based on either one of the environment information and the biological information.
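Because the embodiment leaves the correction formula open, the following is only one assumed example, in which the correction level is treated as a multiplier applied to the standard output specification.

```python
def corrected_display_time(standard_time_per_unit_s: float,
                           correction_level: float) -> float:
    """Final display time of the sub-image PS per unit time (assumed formula)."""
    return standard_time_per_unit_s * correction_level

# Standard specification: 10 s of sub-image display per unit time.
# A correction level of 0.6 (e.g. high brain activity) shortens it to 6 s.
print(corrected_display_time(10.0, 0.6))
```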
When the output-restriction necessity information indicating that use of the output unit 26 is not allowed is generated at step S32, the output selecting unit 48 selects a target device not only based on the environment information, but also based on the output-restriction necessity information. That is, even an output unit 26 that has been selected as the target device based on the environment information at step S26 is excluded from the target devices when its use is not allowed according to the output-restriction necessity information. In other words, the output selecting unit 48 selects a target device based on the output-restriction necessity information and the environment information. Furthermore, because the output-restriction necessity information is set based on the biological information, it can be said that the target device is set based on the biological information and the environment information.
Output Control
Having set the target device and the output specification, and having acquired the image data of the sub-image PS and the like, the display device 10 causes the target device to perform output based on the output specification by using the output control unit 54 (step S38).
For example, when the display unit 26A is the target device, the output control unit 54 causes the display unit 26A to display the sub-image PS based on the image data acquired by the sub-image acquiring unit 52, conforming to the output specification of the display unit 26A. More specifically, the output control unit 54 causes the display unit 26A to display the sub-image PS superimposed on the main image PM that is provided through the display unit 26A, conforming to the output specification of the display unit 26A. Because the output specification is set based on the environment information and the biological information as explained above, by displaying the sub-image PS conforming to the output specification, the sub-image PS can be displayed in an appropriate form according to the environment by which the user U is surrounded and the mental condition of the user U. For example, when the display time per unit time of the sub-image PS is set as the output specification, the display time of the sub-image PS becomes an appropriate time according to the environment by which the user U is surrounded and the mental condition of the user U, so the sub-image PS can be provided appropriately to the user U. More specifically, for example, by making the display time of the sub-image PS shorter to weaken the visual stimulus as the brain activity level of the user U is higher or the mental stability level of the user U is lower, it is possible to reduce the possibility that the sub-image PS annoys the user U when the user U is concentrating on another thing or is not mentally relaxed. On the other hand, when the user U is bored or has mental leeway, by increasing the display time to intensify the visual stimulus, information can be acquired appropriately through the sub-image PS. Furthermore, for example, when the display mode of the sub-image PS (a display position of the sub-image, a size of the sub-image, modifications, and the like) is set as the output specification, the display mode of the sub-image PS becomes an appropriate mode according to the environment by which the user U is surrounded and the mental condition of the user U, so the sub-image PS can be provided appropriately to the user U. More specifically, by positioning the sub-image on an edge side, making the size of the sub-image small, or reducing modifications to weaken the visual stimulus as the brain activity level of the user U is higher or the mental stability level of the user U is lower, it is possible to reduce the possibility that the sub-image PS annoys the user U. On the other hand, by positioning the sub-image on a center side, increasing the size of the sub-image, or increasing modifications to make the visual stimulus stronger as the brain activity level of the user U is lower or the mental stability level of the user U is higher, information can be acquired appropriately through the sub-image PS.
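As a purely illustrative sketch of choosing a display mode from the identified levels, with mode names and values that are assumptions rather than part of the embodiment:

```python
# Hypothetical display modes of the sub-image PS (position, size, modifications).
DISPLAY_MODES = {
    "relaxed":      {"position": "center", "size": "large",  "modifications": "many"},
    "neutral":      {"position": "side",   "size": "medium", "modifications": "few"},
    "concentrated": {"position": "edge",   "size": "small",  "modifications": "none"},
}

def display_mode(brain_activity_level: str, mental_stability_level: str) -> dict:
    if brain_activity_level == "high" or mental_stability_level == "low":
        return DISPLAY_MODES["concentrated"]   # weaken the visual stimulus
    if brain_activity_level == "low" and mental_stability_level == "high":
        return DISPLAY_MODES["relaxed"]        # intensify the visual stimulus
    return DISPLAY_MODES["neutral"]

print(display_mode("high", "middle"))  # -> edge position, small size, no modifications
```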
Moreover, when the sound output unit 26B is the target device, the output control unit 54 causes the sound output unit 26B to output a sound based on the sound data acquired by the sub-image acquiring unit 52, conforming to the output specification of the sound output unit 26B. In this case also, for example, by making the audio stimulus weaker as the brain activity level of the user U is higher or the mental stability level of the user U is lower, it is possible to reduce the possibility that the sub-image PS annoys the user U when the user U is concentrating on another thing or is not mentally relaxed. On the other hand, by making the audio stimulus stronger as the brain activity level of the user U is lower or the mental stability level of the user U is higher, information can be appropriately acquired by sound.
Furthermore, when the tactile-stimulus output unit 26C is the target device, the output control unit 54 causes the tactile-stimulus output unit 26C to output a tactile stimulus based on the tactile stimulus data acquired by the sub-image acquiring unit 52, conforming to the output specification of the tactile-stimulus output unit 26C. In this case also, for example, by making the tactile stimulus weaker as the brain activity level of the user U is higher or the mental stability level of the user U is lower, it is possible to reduce the possibility that the sub-image PS annoys the user U when the user U is concentrating on another thing or is not mentally relaxed. On the other hand, by making the tactile stimulus stronger as the brain activity level of the user U is lower or the mental stability level of the user U is higher, information can be appropriately acquired by a tactile stimulus.
Moreover, when a dangerous state is determined and a danger notification content is set at step S12, the output control unit 54 causes the target device to notify of the danger notification content, conforming to the set output specification.
As described, the display device 10 according to the present embodiment can output a sensory stimulus at an appropriate degree according to the environment by which the user U is surrounded and the mental condition of the user U by setting the output specification based on the environment information and the biological information. Furthermore, the display device 10 can select an appropriate sensory stimulus according to the environment by which the user U is surrounded and the mental condition of the user U by selecting the target device to be activated based on the environment information and the biological information. However, the display device 10 is not limited to using both the environment information and the biological information, and may use, for example, only one of them. Accordingly, it can be said that the display device 10 is both a device that selects the target device and sets the output specification based on the environment information and a device that selects the target device and sets the output specification based on the biological information.
Effects
As explained above, the display device 10 according to the present embodiment includes the display unit 26A that displays an image, the biological sensor 22 that detects the biological information of the user U, the output-specification determining unit 50 that determines a display specification (output specification) of the sub-image PS to be displayed on the display unit 26A based on the biological information of the user U, and the output control unit 54 that causes the display unit 26A to display the sub-image PS, superimposed on the main image PM that is provided through the display unit 26A and is visible to the user U, and conforming to the display specification. The display device 10 according to the present embodiment can provide an image appropriately to the user U by superimposing the sub-image PS on the main image PM. Furthermore, by setting the display specification of the sub-image PS to be superimposed on the main image PM based on the biological information, the sub-image PS can be provided appropriately according to the condition of the user U.
Moreover, the biological information includes information relating to the autonomic nerves of the user U, and the output-specification determining unit 50 determines the display specification of the sub-image PS based on the information relating to the autonomic nerves of the user U. The display device 10 according to the present embodiment can provide the sub-image PS appropriately according to the mental condition of the user U by determining the display specification from the biological information relating to the autonomic nerves of the user U.
Furthermore, the display device 10 further includes the environment sensor 20 that detects the environment information of the periphery of the display device 10. The output-specification determining unit 50 determines the display specification of the sub-image PS based on the environment information and the biological information of the user U. The display device 10 according to the present embodiment can provide the sub-image PS appropriately according to the environment by which the user U is surrounded and the mental condition of the user U by determining the display specification based on the environment information also, in addition to the biological information of the user U.
Moreover, the environment information includes the location information of the user U. The output-specification determining unit 50 determines the display specification of the sub-image PS based on the location information of the user U and the biological information of the user U. The display device 10 according to the present embodiment can provide the sub-image PS appropriately according to a location of the user U and a mental condition of the user U by determining the display specification based on a location of the user U in addition to the biological information of the user U.
Furthermore, the output-specification determining unit 50 categorizes the biological information of the user U into one of three or more levels, and determines the display specification of the sub-image PS according to the categorized level. The display device 10 according to the present embodiment grasps the condition of the user U precisely by categorizing the biological information of the user U into three or more levels and can determine the display specification of the sub-image PS based thereon; therefore, it can provide the sub-image PS more appropriately according to the condition of the user U.
Moreover, the output-specification determining unit 50 determines display time of the sub-image PS per unit time as the display specification of the sub-image PS. The display device 10 according to the present embodiment can provide the sub-image PS appropriately according to the condition of the user U by adjusting the display time of the sub-image PS based on the biological information.
Furthermore, the output-specification determining unit 50 determines, as the display specification of the sub-image PS, a display mode indicating how the sub-image PS is to be displayed when viewed as a still image. The display device 10 according to the present embodiment can provide the sub-image PS appropriately according to the condition of the user U by adjusting the display mode of the sub-image PS based on the biological information.
Next, a second embodiment will be explained. The display device 10 according to the second embodiment differs from that of the first embodiment in acquiring advertisement fee information of the sub-image PS also, and in determining the output specification of the sub-image PS based on the advertisement fee information. That is, in the second embodiment, the sub-image PS includes advertisement information. In the second embodiment, for a part having a configuration similar to the first embodiment, explanation will be omitted.
The display device 10 according to the second embodiment determines the output specification based on the advertisement fee information also, in addition to the standard output specification and the output-specification correction level (step S36a). That is, in the second embodiment, the output specification is determined based on the standard output specification set from the environment information, the output-specification correction level set from the biological information, and the advertisement fee information.
Specifically, the output-specification determining unit 50 sets an advertisement-fee correction level to correct the standard output specification based on the advertisement fee information. The output-specification determining unit 50 sets the advertisement-fee correction level such that the output specification (sensory stimulus) becomes higher as the advertisement fee indicated in the advertisement fee information is higher. In the example of the present embodiment, the output-specification determining unit 50 determines the advertisement-fee correction level based on advertisement-fee-correction relationship information that indicates a relationship between the advertisement fee information and the advertisement-fee correction level. The advertisement-fee-correction relationship information is information (a table) in which the advertisement fee information and the advertisement-fee correction level are stored in association with each other and is stored, for example, in the specification setting database 30C. In the advertisement-fee-correction relationship information, the advertisement-fee correction level is set for each type of the output unit 26, that is, for the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C herein. The output-specification determining unit 50 determines the advertisement-fee correction level based on this advertisement-fee-correction relationship information and the acquired advertisement fee information. Specifically, the output-specification determining unit 50 reads out the advertisement-fee-correction relationship information, and selects, from the advertisement-fee-correction relationship information, the advertisement-fee correction level associated with the acquired advertisement fee information, to determine the advertisement-fee correction level.
As described, the output-specification determining unit 50 sets the advertisement-fee correction level based on the advertisement-fee-correction relationship information that is set in advance and in which the advertisement fee information and the advertisement-fee correction level are associated with each other. However, the method of setting the advertisement-fee correction level is not limited thereto, and the display device 10 may set the advertisement-fee correction level by an arbitrary method based on the advertisement fee information.
The output-specification determining unit 50 determines the output specification by correcting the standard output specification with the output-specification correction level set based on the biological information and the advertisement-fee correction level set based on the advertisement fee information. A formula to correct the output specification with the output-specification correction level and the advertisement-fee correction level may be arbitrary. Having thus determined the output specification, the output control unit 54 according to the second embodiment causes the target device to perform output based on the output specification, by a method similar to that of the first embodiment (step S38).
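Since the combining formula is left arbitrary, the product below is merely one assumed example of applying both correction levels to the standard output specification.

```python
def final_output_specification(standard_spec: float,
                               biological_correction: float,
                               advertisement_fee_correction: float) -> float:
    """One possible (assumed) combination: multiply the standard specification
    by both correction levels."""
    return standard_spec * biological_correction * advertisement_fee_correction

# A high advertisement fee (factor 1.5) lengthens the display time even when the
# biological correction (0.8) alone would shorten it.
print(final_output_specification(10.0, 0.8, 1.5))  # -> 12.0
```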
As described above, the advertisement-fee correction level is set such that the sensory stimulus becomes stronger as the advertisement fee is higher. Therefore, the display time per unit time of the sub-image PS becomes longer as the advertisement fee becomes higher. Moreover, for example, the sub-image PS is displayed at a position closer to the center side as the advertisement fee becomes higher.
As explained above, the display device 10 according to the second embodiment determines a final output specification by correcting the standard output specification set based on the environment information with the output-specification correction level set based on the biological information and the advertisement-fee correction level set based on the advertisement fee information. However, the display device 10 according to the second embodiment is not limited to determining an output specification by correcting the standard output specification with the output-specification correction level and the advertisement-fee correction level, and may determine an output specification by an arbitrary method using at least the advertisement fee information. That is, for example, the display device 10 according to the second embodiment may determine an output specification by an arbitrary method using all of the advertisement fee information, the environment information, and the biological information, may determine an output specification by an arbitrary method using either one of the environment information and the biological information in addition to the advertisement fee information, or may determine an output specification by an arbitrary method using only the advertisement fee information out of the advertisement fee information, the environment information, and the biological information.
As explained above, the display device 10 according to the second embodiment includes the display unit 26A that displays an image, the sub-image acquiring unit 52, the output-specification determining unit 50, and the output control unit 54. The sub-image acquiring unit 52 of the second embodiment acquires image data of the sub-image PS including advertisement information to be displayed on the display unit 26A, and the advertisement fee information about payment for displaying the advertisement information. The output-specification determining unit 50 of the second embodiment determines, as the output specification (display specification) of the sub-image PS, the display mode indicating how the sub-image is displayed when viewed as a still image, based on the advertisement fee information. The output control unit 54 causes the display unit 26A to display the sub-image PS, superimposed on the main image PM that is visible to the user U, and conforming to the output specification (display specification). Because the display device 10 according to the second embodiment determines the display mode of the sub-image, which is an advertisement, based on the advertisement fee, the sub-image PS can be provided appropriately, properly reflecting an intention of an advertiser.
Moreover, the display device 10 according to the second embodiment includes the display unit 26A that displays an image, the sub-image acquiring unit 52, the output-specification determining unit 50, and the output control unit 54. The sub-image acquiring unit 52 of the second embodiment acquires image data of the sub-image PS including advertisement information to be displayed on the display unit 26A, and the advertisement fee information about payment for displaying the advertisement information. The output-specification determining unit 50 of the second embodiment determines the display time of the sub-image PS per unit time based on the advertisement fee information. The output control unit 54 of the second embodiment causes the display unit 26A to display the sub-image PS, superimposed on the main image PM that is provided through the display unit 26A and is visible to the user U, and conforming to the output specification (display specification). Because the display device 10 according to the second embodiment determines the display time of the sub-image PS, which is an advertisement, based on the advertisement fee, the sub-image PS can be provided appropriately, properly reflecting an intention of an advertiser.
Next, a third embodiment will be explained. A display device 10b according to the third embodiment differs from the first embodiment in determining a position at which the sub-image PS is displayed based on permission information indicating whether the sub-image PS can be displayed superimposed on an actually existing object in the main image PM. In the third embodiment, for a part having a configuration similar to the first embodiment, explanation will be omitted. The third embodiment is applicable to the second embodiment also.
Specifically, at step S50, the target-object identifying unit 60 acquires target object information, which is information to identify the target object in the main image PM, based on the environment information. The target object information may be any information as long as it enables the target object to be distinguished from other objects, and may be, for example, the name of the target object, the address of the target object, position information, and the like. The target-object identifying unit 60 acquires the target object information based on the position information of the display device 10b (the user U) acquired by the GNSS receiver 20C and the posture information of the display device 10b (the user U) acquired by the gyro sensor 20E. More specifically, the target-object identifying unit 60 calculates position information of a visually-recognized region, which is a region visually recognized by the user U, from the position information of the display device 10b (the user U) and the posture information of the display device 10b (the user U). In this case, the target-object identifying unit 60 determines the range visually recognized by the user U as a visually-recognized region having a predetermined breadth, for example, based on the breadth of vision of the user U, and acquires the position information of the visually-recognized region. The breadth of vision of the user U may be set in advance, or may be calculated by an arbitrary method. The target-object identifying unit 60 identifies an actually existing object, such as a structural object or a natural object, present in the visually-recognized region as the target object based on the map data 30B, and acquires the target object information of the target object. That is, because the visually-recognized region signifies the field of view of the user U and thus the range of the main image PM, an actually existing object located within the visually-recognized region becomes a target object shown in the main image PM. When plural target objects are present in the visually-recognized region, the target-object identifying unit 60 acquires the target object information for each of those target objects.
The method of identifying a target object by the target-object identifying unit 60, that is, the method of acquiring target object information is not limited to the one described above, and may be arbitrary.
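One arbitrary way to realize this identification, sketched here under the assumption that the visually-recognized region is a horizontal sector defined by the user position, a facing direction derived from the posture information, a field-of-view angle, and a viewing distance (all values hypothetical):

```python
import math

def in_visually_recognized_region(user_xy, heading_deg, obj_xy,
                                  fov_deg=90.0, max_dist_m=200.0) -> bool:
    """True when the map object lies inside the assumed viewing sector."""
    dx, dy = obj_xy[0] - user_xy[0], obj_xy[1] - user_xy[1]
    if math.hypot(dx, dy) > max_dist_m:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

# Map objects would come from the map data 30B; coordinates are local meters here.
map_objects = [{"name": "station", "xy": (30.0, 80.0)},
               {"name": "park",    "xy": (-150.0, -40.0)}]
targets = [o for o in map_objects
           if in_visually_recognized_region((0.0, 0.0), 0.0, o["xy"])]
print([o["name"] for o in targets])  # -> ['station']
```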
Having identified a target object, the display device 10b according to the third embodiment acquires the permission information for the identified target object by the permission-information acquiring unit 62 (step S52). That is, the permission-information acquiring unit 62 acquires, for the target object identified as present in the main image PM, the permission information indicating whether the sub-image PS can be displayed, superimposed on the target object.
Whether superimposition of the sub-image PS on the target object is permitted is determined in advance by an owner of the target object or the like, and is recorded as the permission information. The permission-information acquiring unit 62 transmits the target object information to an external device (server) in which the permission information is recorded through, for example, the communication unit 28, and acquires the permission information. The external device acquires the permission information assigned to the target object identified in the target object information, and transmits it to the display device 10b. The permission-information acquiring unit 62 acquires the permission information assigned to the target object from the external device. The permission-information acquiring unit 62 acquires this permission information for each target object. The method of acquiring the permission information is not limited thereto. For example, information in which the target object information and the permission information are associated with each other may be stored in the storage unit 30 of the display device 10b, and the permission-information acquiring unit 62 may read out this information to acquire the permission information associated with the acquired target object information.
Thereafter, the display device 10b determines an output specification based on the permission information also, in addition to the standard output specification and the output-specification correction level (step S36b). That is, in the third embodiment, the output-specification determining unit 50 determines an output specification based on the standard output specification set from the environment information, the output-specification correction level set from the biological information, and the permission information.
More specifically, the output-specification determining unit 50 determines a display position of the sub-image PS as the output specification based on the permission information. The output-specification determining unit 50 determines whether the sub-image PS may be displayed at the position overlapping a target object based on the permission information. For example, the output-specification determining unit 50 determines not to superimpose the sub-image PS on the target object when the permission information is information indicating that the sub-image PS is not permitted to be displayed on the target object in a superimposed manner, and determines a position other than the position overlapping the target object as the display position of the sub-image PS. That is, the output-specification determining unit 50 excludes a position overlapping the target object from display-enabled positions in which the sub-image PS can be displayed when the permission information is information indicating that the sub-image PS is not permitted to be displayed on the target object in a superimposed manner, and determines a position not overlapping the target object as a display-enabled position of the sub-image PS.
On the other hand, the output-specification determining unit 50 determines that the sub-image PS can be superimposed on the target object when the permission information is information indicating that the sub-image PS can be displayed on the target object in a superimposed manner, and determines a display position of the sub-image PS from among positions overlapping the target object and positions not overlapping the target object. That is, the output-specification determining unit 50 determines both the position overlapping the target object and the position not overlapping the target object as the display-enabled position of the sub-image PS when the permission information is information indicating that the sub-image PS can be displayed on the target object in a superimposed manner.
The output-specification determining unit 50 sets the output specification based on the display-enabled position set based on the permission information, the standard output specification set based on the environment information, and the output-specification correction level set based on the biological information (step S36b). The output-specification determining unit 50 sets the output specification from the standard output specification and the output-specification correction level by a method similar to that of the first embodiment, and sets the display position of the sub-image PS in the output specification based on the display-enabled position. That is, the output-specification determining unit 50 sets the display position of the sub-image PS such that the sub-image PS is displayed in the display-enabled position. For example, the output-specification determining unit 50 determines the display position of the sub-image PS to a position not overlapping the target object when the permission information is information indicating that the sub-image PS is not permitted to be displayed on the target object in a superimposed manner. On the other hand, the output-specification determining unit 50 determines the display position of the sub-image PS to a position overlapping the target object or a position not overlapping the target object when the permission information is information indicating that the sub-image PS is permitted to be displayed on the target object in a superimposed manner. When the permission information is information indicating that the sub-image PS is permitted to be displayed on the target object in a superimposed manner, whether to set the display position of the sub-image PS to the position overlapping the target object may be defined based on the display-enabled position set based on the permission information, the standard output specification set based on the environment information, and the like.
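A sketch of deriving display-enabled positions from the permission information, with assumed data structures (screen regions and a permission flag per target object):

```python
def display_enabled_positions(candidate_positions, target_objects):
    """candidate_positions: list of (x, y) screen positions.
    target_objects: dicts with a screen 'region' (x0, y0, x1, y1) and a
    'superimposition_permitted' flag taken from the permission information."""
    def overlaps(pos, region):
        x, y = pos
        x0, y0, x1, y1 = region
        return x0 <= x <= x1 and y0 <= y <= y1

    enabled = []
    for pos in candidate_positions:
        blocked = any(overlaps(pos, t["region"]) and not t["superimposition_permitted"]
                      for t in target_objects)
        if not blocked:
            enabled.append(pos)
    return enabled

targets = [{"region": (0, 0, 50, 50), "superimposition_permitted": False}]
print(display_enabled_positions([(10, 10), (80, 80)], targets))  # -> [(80, 80)]
```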
Having set the output specification, the output control unit 54 of the third embodiment causes the target device to perform output based on the output specification by a method similar to that of the first embodiment (step S38). The output control unit 54 controls display of the sub-image PS at a display position based on the determination by the output-specification determining unit 50 as to whether the sub-image PS is permitted to be displayed on the target object in a superimposed manner. That is, the output control unit 54 displays the sub-image PS at a position not overlapping the target object when the permission information indicates that the sub-image PS is not permitted to be displayed on the target object in a superimposed manner. On the other hand, the output control unit 54 displays the sub-image PS at a position overlapping or a position not overlapping the target object when the permission information indicates that the sub-image PS is permitted to be displayed on the target object in a superimposed manner.
As explained above, the display device 10b according to the third embodiment determines the display position of the sub-image PS based on the standard output specification set based on the environment information, the output-specification correction level set based on the biological information, and the permission information. However, the display device 10b is not limited to determining the display position of the sub-image PS by using the standard output specification, the output-specification correction level, and the permission information. For example, the display device 10b may determine the display position of the sub-image PS by an arbitrary method using all of the permission information, the environment information, and the biological information, may determine the display position of the sub-image PS by an arbitrary method using either one of the environment information and the biological information in addition to the permission information, or may determine the display position of the sub-image PS by an arbitrary method using only the permission information out of the permission information, the environment information, and the biological information. As described, in the third embodiment, as long as at least the permission information is used to determine the display position of the sub-image PS by an arbitrary method, the environment information and the biological information are not necessarily required to be used.
As explained above, the display device 10b according to the third embodiment includes the display unit 26A that displays an image, the target-object identifying unit 60, the permission-information acquiring unit 62, the output-specification determining unit 50, and the output control unit 54. The target-object identifying unit 60 identifies an actually existing target object in the main image PM that is provided through the display unit 26A and is visually recognizable to the user U. The permission-information acquiring unit 62 acquires the permission information indicating whether the sub-image PS may be displayed at a position overlapping the target object of the main image PM. The output-specification determining unit 50 determines whether to display the sub-image PS at a position overlapping the target object of the main image PM based on the permission information. The output control unit 54 controls display of the sub-image PS superimposed on the target object of the main image PM based on the determination by the output-specification determining unit 50 as to whether to display the sub-image PS at the position overlapping the target object of the main image PM.
The sub-image PS is displayed in a superimposed manner on the main image PM in which an actually existing object is shown. However, an owner or the like exists for an actually existing object, and it is conceivable that the owner prefers not to have the sub-image PS superimposed on the target object. To deal with this concern, the display device 10b according to the present embodiment determines a display position of the sub-image PS based on the permission information indicating whether the sub-image PS is permitted to overlap the target object. Therefore, it becomes possible to avoid superimposition of the sub-image PS on the target object when, for example, superimposition of the sub-image PS on the target object is not permitted, and to superimpose the sub-image PS on the target object when superimposition of the sub-image PS on the target object is permitted. As described, by using the permission information, the display device 10b according to the third embodiment can display the sub-image PS appropriately, for example, considering the intention of the owner of the target object.
Furthermore, in the third embodiment, the target-object identifying unit 60 identifies a target object from the position information of the user U and the posture information of the user U. The display device 10b according to the third embodiment can identify a target object in the main image PM with high accuracy by using the position information of the user U and the posture information of the user U.
Another Example of Sub-Image
In the example illustrated in the corresponding figure, the sub-image PS is displayed in another form, as described above.
Next, a fourth embodiment will be explained. A display device 10c according to the fourth embodiment differs from the first embodiment in counting the number of times of superimposition of the sub-image PS on a target object. In the fourth embodiment, for a part having a configuration similar to the first embodiment, explanation will be omitted. The fourth embodiment is also applicable to the second embodiment and the third embodiment.
Next, the display device 10c identifies the target object on which the sub-image PS is superimposed by the target-object identifying unit 60 (step S102). The target-object identifying unit 60 extracts a target object shown in the main image PM by a method similar to that of the third embodiment. The target-object identifying unit 60 then identifies the target object on which the sub-image PS is superimposed from among the target objects shown in the main image PM.
Next, having identified the target object on which the sub-image PS is superimposed, the display device 10c updates the number of times that the sub-image PS has been superimposed for each of the target objects by using the count-information acquiring unit 64 (step S104), and records the number of times that the sub-image PS has been superimposed for each of the target objects in the storage unit 30 (step S106). The count-information acquiring unit 64 counts the number of times that the sub-image PS is superimposed for each of the target objects, and stores the counted number in the storage unit 30 as the count information. That is, the count-information acquiring unit 64 increments the number of times that the sub-image PS is superimposed by 1 each time the sub-image PS is superimposed, and stores it in the storage unit 30 as the count information. The count-information acquiring unit 64 associates the target object information and the count information with each other, that is, associates the number of times that the sub-image PS is superimposed with the target object, and stores the associated information in the storage unit 30.
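The per-object counting could be kept, for example, as a simple mapping from target object information to a count; the layout below is an assumption.

```python
from collections import defaultdict

# target object information -> number of times the sub-image PS was superimposed
count_information = defaultdict(int)

def record_superimposition(target_object_id: str) -> None:
    """Increment the count each time the sub-image PS is superimposed on the object."""
    count_information[target_object_id] += 1

record_superimposition("building_A")
record_superimposition("building_A")
print(dict(count_information))  # -> {'building_A': 2}
```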
As explained above, the display device 10c according to the fourth embodiment includes the display unit 26A that displays an image, the output control unit 54, the target-object identifying unit 60, and the count-information acquiring unit 64. The output control unit 54 displays the sub-image PS so as to be superimposed on an actually existing object included in the main image PM that is provided through the display unit 26A and is visually recognizable to the user U. The target-object identifying unit 60 identifies a target object on which the sub-image PS is superimposed. The count-information acquiring unit 64 acquires the count information, which is information about the number of times that the sub-image PS is superimposed on the identified target object, and stores it in the storage unit 30. The display device 10c according to the present embodiment calculates and records the number of times that the sub-image PS is superimposed on the target object. For example, when the sub-image PS is an advertisement, it is conceivable that an advertisement fee is set, or an advertisement fee is paid to the owner of the target object on which the sub-image PS is superimposed, according to the number of times of display and the like. In such a case, the display device 10c according to the present embodiment can manage the advertisement fee and the like appropriately by counting the number of times that the sub-image PS is superimposed for each target object. As described, it can be said that the display device 10c according to the present embodiment can display the sub-image PS appropriately by recording the number of times that the sub-image PS is superimposed.
The display device 10 can communicate with a management device 12 that manages the count information, and may output the count information to the management device 12.
The management device 12 is a computer (server) in the present embodiment, and includes an input unit 12A, an output unit 12B, a storage unit 12C, a communication unit 12D, and a control unit 12E. The input unit 12A is a device that accepts an operation of a user of the management device 12, and may be, for example, a touch panel, a keyboard, a mouse, or the like. The output unit 12B is a device that outputs information, and is, for example, a display that displays an image. The storage unit 12C is a memory that stores various kinds of information, such as arithmetic contents of the control unit 12E and a computer program, and includes, for example, at least one of a main storage device, such as a RAM or a ROM, and an external storage device, such as an HDD. The communication unit 12D is a module that communicates with an external device and the like, and may include, for example, an antenna. The communication method used by the communication unit 12D is wireless communication in the present embodiment, but the communication method may be arbitrary.
The control unit 12E is an arithmetic device, that is, a CPU. The control unit 12E performs processing described later by reading and executing a computer program (software) from the storage unit 12C, and may perform the processing by a single CPU, or may include plural CPUs and perform the processing by those plural CPUs. Moreover, at least a part of the processing described later performed by the control unit 12E may be implemented by hardware.
The control unit 12E acquires the count information, which is information about the number of times that the sub-image PS is superimposed on a target object, from each of the display devices 10 through the communication unit 12D. The control unit 12E calculates a superimposition total count value, which is a total value of the number of times that the sub-image PS is superimposed on the same target object, based on the count information acquired from each of the display devices 10. That is, the control unit 12E sums up, across the display devices 10, the number of times that the sub-image PS is superimposed on the same target object, and calculates the superimposition total count value. The control unit 12E calculates the superimposition total count value for each target object, and stores it in the storage unit 12C as the total count information.
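As an illustration of this aggregation on the management device 12 (data shapes assumed), the superimposition total count value is the per-target-object sum of the count information reported by the individual display devices:

```python
from collections import Counter

def total_count_information(per_device_counts):
    """per_device_counts: list of {target object: count} dicts, one per display device."""
    total = Counter()
    for counts in per_device_counts:
        total.update(counts)
    return dict(total)

reports = [{"building_A": 2, "statue_B": 1},   # from one display device
           {"building_A": 5}]                  # from another display device
print(total_count_information(reports))        # -> {'building_A': 7, 'statue_B': 1}
```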
The control unit 12E may output the calculated superimposition total count value to an external device. For example, the control unit 12E may transmit the superimposition total count value to a computer managed by an owner of a target object, or may transmit it to a computer managed by an advertiser of the sub-image PS. By thus transmitting the superimposition total count value, management of an advertisement fee can be performed appropriately.
As explained above, the display management system 100 according to the fourth embodiment includes the display device 10 and the management device 12. The management device 12 acquires the count information from plural units of the display device 10, sums up the number of times that the sub-image PS is superimposed on the same target object by the plural display devices 10, and records it as the total count information of the target object. According to the display management system 100 of the fourth embodiment, by managing the count information of the plural display devices 10c in a centralized manner by the management device 12, display of the sub-image PS can be appropriately managed.
Next, a fifth embodiment will be explained. A display device 10d according to the fifth embodiment differs from the first embodiment in selecting a target device and determining an output content (contents) of the sub-image PS based on age information indicating an age of the user U. In the fifth embodiment, for a part having a configuration similar to the first embodiment, explanation will be omitted. The fifth embodiment is also applicable to the second embodiment, the third embodiment, and the fourth embodiment.
The age-information acquiring unit 66 acquires age information that indicates an age of the user U. The age-information acquiring unit 66 may acquire the age information by an arbitrary method. For example, the age information may be set in advance in the storage unit 30 by input by the user U or the like, and the age-information acquiring unit 66 may read out the age information from the storage unit 30. Moreover, for example, the age-information acquiring unit 66 may acquire the age information by estimating an age from the biological information.
The physical-information acquiring unit 68 acquires physical information that is information relating to the body of the user U. The physical information is information that indicates a health condition of the user U, is different from the biological information acquired by the biological sensor 22, and is different from the information about the autonomic nerves. Furthermore, the physical information is information relating to the performance of the five senses of the user U, and is, for example, information indicating the visual acuity, the auditory acuity, and the like. The physical-information acquiring unit 68 may acquire the physical information by an arbitrary method. For example, the physical information may be set in advance by input of the user U or the like and stored in the storage unit 30, and the physical-information acquiring unit 68 may read out the physical information from the storage unit 30. Moreover, for example, a body sensor that detects the physical information of the user U may be equipped in the display device 10d, and the physical-information acquiring unit 68 may acquire the physical information detected by the body sensor.
Next, the display device 10d acquires restriction necessity information for restricting the target device by using the user-condition identifying unit 46, based on the age information and the physical information (step S62). The user-condition identifying unit 46 acquires age-restriction necessity information as the restriction necessity information based on the age information, and acquires physical-restriction necessity information as the restriction necessity information based on the physical information.
Next, as illustrated in the corresponding flowchart, the display device 10d selects the target device to be used by the output selecting unit 48, based on the age-restriction necessity information, the physical-restriction necessity information, the output-restriction necessity information, and the environment information.
As described, the output selecting unit 48 sets the target device based on the age-restriction necessity information based on the age information, the physical-restriction necessity information based on the physical information, the output-restriction necessity information based on the biological information, and the user condition based on the environment information. However, the setting method of the target device is not limited thereto, but may be arbitrary. The output selecting unit 48 may set the target device by an arbitrary method based on at least one of the age information, the physical information, the biological information, and the environment information. For example, the output selecting unit 48 may set the target device by an arbitrary method based on the age information, may set the target device by an arbitrary method based on the age information and the physical information, may set the target device by an arbitrary method based on the age information, the physical information, and the biological information, may set the target device by an arbitrary method based on the age information, the physical information, and the environment information, and may set the target device by an arbitrary method based on the age information, the physical information, the biological information, and the environment information.
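For instance, the selection could be sketched as a set difference between the output units chosen from the environment information and those whose use is restricted; the flag names are assumptions.

```python
def select_target_devices(environment_based, age_restricted,
                          physical_restricted, output_restricted):
    """environment_based: output units chosen from the environment information;
    the *_restricted sets list output units whose use is not allowed."""
    restricted = age_restricted | physical_restricted | output_restricted
    return environment_based - restricted

print(select_target_devices({"display", "sound"},
                            age_restricted={"tactile"},
                            physical_restricted={"sound"},   # e.g. low auditory acuity
                            output_restricted=set()))
# -> {'display'}
```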
Next, as illustrated in the corresponding flowchart, the display device 10d determines an output content (display content) of the sub-image PS by using the output-content determining unit 70, based on the age information of the user U.
For the sub-image PS, a content rating, which is information indicating whether the content of the sub-image PS is permitted to be provided, is set. This content rating is set for each predetermined age category. That is, the content rating can be regarded as information defining a recommended age to which the content can be provided. Examples of the content rating include a Motion Picture Association of America (MPAA) rating, but it is not limited thereto. In the fifth embodiment, the sub-image acquiring unit 52 acquires the content rating of the sub-image PS together with the image data of the sub-image PS. The output-content determining unit 70 determines whether the sub-image PS can be displayed based on the content rating of the sub-image PS and the age information of the user U. The output-content determining unit 70 determines that display of the sub-image PS is possible when the content rating of the sub-image PS indicates that the sub-image PS can be provided to a user of the age of the user U, and determines the content of the sub-image PS as the output content. On the other hand, the output-content determining unit 70 determines not to permit display of the sub-image PS when the content rating of the sub-image PS indicates that the sub-image PS cannot be provided to a user of the age of the user U, and does not use the content of the sub-image PS as the output content. For example, in this case, the output-content determining unit 70 acquires the content rating of another sub-image PS acquired by the sub-image acquiring unit 52, and similarly determines whether display of that sub-image PS is possible.
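Reduced to its simplest form, and assuming each content rating can be expressed as a minimum recommended age (the values below are not an actual rating system), the check could look as follows.

```python
# Hypothetical mapping from content rating to minimum recommended age.
CONTENT_RATING_MIN_AGE = {"all_ages": 0, "teen": 13, "mature": 18}

def displayable(content_rating: str, user_age: int) -> bool:
    """True when the content rating indicates the sub-image PS can be provided
    to a user of this age."""
    return user_age >= CONTENT_RATING_MIN_AGE[content_rating]

print(displayable("teen", 15))    # True  -> content used as the output content
print(displayable("mature", 15))  # False -> another sub-image PS is considered
```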
As described, the output-content determining unit 70 determines an output content of the sub-image PS based on the age information and the content rating, but the method of determining an output content of the sub-image PS is not limited to the one described above and may be arbitrary. The output-content determining unit 70 may determine an output content of the sub-image PS by an arbitrary method based on the age information.
Having selected the target device and determined the output content, the display device 10d causes the target device to perform output of the determined output content by using the output control unit 54.
As explained above, the display device 10d according to the fifth embodiment includes the display unit 26A that displays an image, the sound output unit 26B that outputs a sound, the tactile-stimulus output unit 26C that outputs a tactile stimulus to the user U, the age-information acquiring unit 66, the output selecting unit 48, and the output control unit 54. The age-information acquiring unit 66 acquires the age information of the user U. The output selecting unit 48 selects the target device to be used from among the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C based on the age information of the user U. The output control unit 54 controls the selected target device. Because the ability of the five human senses changes with age, which sense is preferably stimulated can vary according to age. To deal with this, the display device 10d according to the present embodiment selects the target device to be used from among the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C according to the age of the user U. Therefore, according to the display device 10d, it is possible to appropriately stimulate a sense of the user U according to the physical condition of the user U and, for example, the sub-image PS can be provided to the user U appropriately.
Moreover, the display device 10d further includes the physical-information acquiring unit 68 that acquires the physical information of the user U, and the output selecting unit 48 selects the target device to be used from among the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C based also on the physical information. Because the display device 10d according to the present embodiment selects the target device to be used from among the display unit 26A, the sound output unit 26B, and the tactile-stimulus output unit 26C according to the physical information of the user U, it is possible to stimulate a sense of the user U appropriately according to the physical condition of the user U and, for example, the sub-image PS can be provided to the user U appropriately.
Next, a sixth embodiment will be explained. A display device 10e according to the sixth embodiment differs from the first embodiment in determining an output content (contents) of the sub-image PS based on the age information of the user U and the position information of the user U. In the sixth embodiment, for a part having a configuration similar to the first embodiment, explanation will be omitted. The sixth embodiment is also applicable to the second embodiment, the third embodiment, the fourth embodiment, and the fifth embodiment.
The age-information acquiring unit 66 acquires the age information indicating the age of the user U. The age-information acquiring unit 66 may acquire the age information by an arbitrary method. For example, the age information set in advance by input of the user U may be stored in the storage unit 30, and the age-information acquiring unit 66 may read out the age information from the storage unit 30. Moreover, for example, the age-information acquiring unit 66 may acquire the age information by estimating an age from the biological information.
Having acquired the age information, the display device 10e determines an output content (display content) of the sub-image PS to be output by the output unit 26, based on the age information and the position information, by using the output-content determining unit 70 (step S37e). The age information of the user U is acquired by the age-information acquiring unit 66 as described above, and the position information of the user U is acquired by the environment-information acquiring unit 40 through the GNSS receiver 20C. The output content (display content) of the sub-image PS is the content of the sub-image PS, that is, its contents. Step S37e, at which the output content is determined, is not limited to being performed after step S36; the order of execution is arbitrary.
For the sub-image PS, similarly to the fifth embodiment, the content rating indicating whether a content can be provided according to age is set, and the sub-image acquiring unit 52 acquires the content rating of the sub-image PS together with the image data of the sub-image PS. Furthermore, in the sixth embodiment, an area rating indicating whether provision of a content is permitted according to a position (terrestrial coordinates) is set. The output-content determining unit 70 sets a final rating that indicates the final determination of whether a content can be provided to the user U based on the content rating and the area rating, and determines an output content of the sub-image PS based on the final rating and the age information of the user U. This is explained specifically below.
The output-content determining unit 70 acquires area rating information that indicates a relationship between the area rating and a position (terrestrial coordinates). In the area rating information, an area rating is set for each position. For example, for a predetermined range, such as a 50-meter radius from a position at which an elementary school or the like is present, the area rating is set such that the contents that can be provided are strictly restricted, that is, few contents can be provided. Moreover, for example, for an area in a downtown, the area rating is set such that the restriction on the contents that can be provided is eased, that is, more contents can be provided. Moreover, for other areas, the area rating is set such that the restriction on the contents that can be provided is intermediate. The output-content determining unit 70 may acquire the area rating information by an arbitrary method; for example, the area rating information may be included in the map data 30B, and the output-content determining unit 70 may acquire the area rating information by reading the map data 30B.
The output-content determining unit 70 sets the area rating to be applied based on the position information of the user U and the acquired area rating information. That is, the output-content determining unit 70 applies the area rating that is associated, in the area rating information, with the acquired position information of the user U.
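As one way to picture this lookup, the following Python sketch maps the position of the user U to an area rating. The rating labels, the coordinates, and the radii are illustrative assumptions and are not taken from this disclosure, which gives only examples such as a 50-meter radius around an elementary school.

```python
import math

# Illustrative area rating information: (latitude, longitude, radius in meters, rating).
AREA_RATING_INFO = [
    (35.6586, 139.7454, 50.0, "STRICT"),    # e.g. around an elementary school
    (35.6595, 139.7005, 300.0, "RELAXED"),  # e.g. a downtown area
]
DEFAULT_AREA_RATING = "INTERMEDIATE"        # other areas


def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two terrestrial coordinates."""
    # Equirectangular approximation; adequate for ranges of tens to hundreds of meters.
    k = 111_000.0  # meters per degree of latitude (approximate)
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)


def area_rating_for_position(lat, lon):
    """Return the area rating associated with the position of the user U."""
    for area_lat, area_lon, radius, rating in AREA_RATING_INFO:
        if distance_m(lat, lon, area_lat, area_lon) <= radius:
            return rating
    return DEFAULT_AREA_RATING
```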
Hereinafter, for convenience of explanation, a combination of the content rating CA1 and the area rating CB1 is denoted as the combination CA1-CB1, and other combinations are denoted similarly. As described above, out of the content rating and the area rating, the one under which the restriction on contents that can be provided is stricter is set as the final rating. Therefore, in the example in
Having set the final rating, the output-content determining unit 70 determines an output content of the sub-image PS based on the final rating and the age information of the user U. Specifically, the output-content determining unit 70 determines whether the sub-image PS can be displayed based on the final rating and the age information of the user U. The output-content determining unit 70 determines that display of the sub-image PS is possible when the content is permitted, under the final rating, to be provided to the age of the user U, and determines the content of the sub-image PS as the output content. On the other hand, the output-content determining unit 70 determines that display of the sub-image PS is not permitted when the content is not permitted, under the final rating, to be provided to the age of the user U, and does not use the content of the sub-image PS as the output content. In this case, for example, the output-content determining unit 70 acquires the final rating for another piece of the sub-image PS acquired by the sub-image acquiring unit 52, and similarly determines whether display of that sub-image PS is possible.
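The following Python sketch combines the final-rating selection with the age check described above. Encoding each rating as a minimum permitted age is an assumption made only for illustration; the disclosure states only that the stricter of the content rating and the area rating becomes the final rating, and that the sub-image PS is displayed only when its content may be provided to the age of the user U.

```python
# Illustrative ratings, expressed as the minimum age to which a content may be provided.
RATING_MIN_AGE = {"RELAXED": 0, "INTERMEDIATE": 13, "STRICT": 18}


def final_rating(content_rating: str, area_rating: str) -> str:
    """Out of the content rating and the area rating, the stricter one is the final rating."""
    return max(content_rating, area_rating, key=RATING_MIN_AGE.__getitem__)


def select_sub_image(candidates, user_age, area_rating):
    """Return the first candidate sub-image whose final rating permits the age of the user U."""
    for sub_image in candidates:          # each candidate carries its own content rating
        rating = final_rating(sub_image["content_rating"], area_rating)
        if user_age >= RATING_MIN_AGE[rating]:
            return sub_image              # display of this sub-image PS is possible
    return None                           # no candidate may be provided at this position and age
```

For instance, with a user age of 15 and an area rating of "STRICT", a candidate whose content rating is "RELAXED" would still be withheld in this sketch, because the area rating makes the final rating "STRICT".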
Returning back to
As described above, the output-content determining unit 70 determines the output content of the sub-image PS based on the final rating, which is set from the content rating and the area rating, and on the age information; however, the method of determining the output content of the sub-image PS is not limited to the one described above and is arbitrary. The output-content determining unit 70 may determine the output content by an arbitrary method based on the age information of the user U and the position information of the user U. Moreover, the output-content determining unit 70 is not limited to using both the age information of the user U and the position information of the user U, and may determine the output content by an arbitrary method based on the age information of the user U alone.
As explained above, the display device 10e according to the sixth embodiment includes the display unit 26A that displays an image, the age-information acquiring unit 66, the output-content determining unit 70, and the output control unit 54. The age-information acquiring unit 66 acquires the age information of the user U. The output-content determining unit 70 determines a display content (output content) of the sub-image PS to be displayed on the display unit 26A based on the age information of the user U. The output control unit 54 controls the display unit 26A to display the determined display content of the sub-image PS in a superimposed manner on the main image PM that is visually recognized by the user U through the display unit 26A. The content of the sub-image PS may be inappropriate depending on the age of the user U. To deal with this concern, the display device 10e according to the present embodiment determines the content of the sub-image PS according to the age of the user U and, therefore, can provide the sub-image PS appropriately according to the age.
Moreover, the display device 10e further includes the environment sensor 20 that detects the position information of the user U, and the output-content determining unit 70 determines a display content of the sub-image PS based also on the position information of the user U. The content of the sub-image PS may be inappropriate to provide depending on the area, such as in the neighborhood of an elementary school. To deal with this concern, the display device 10e according to the present embodiment determines the content of the sub-image PS according to the position information of the user U in addition to the age of the user U and, therefore, can provide the sub-image PS appropriately according to the age of the user U and the area.
Furthermore, the output-content determining unit 70 acquires the area rating information, which indicates a relationship set in advance between terrestrial coordinates and a display content permitted to be displayed (that is, the area rating), and determines a display content of the sub-image PS based on the area rating information and the position information of the user U. The display device 10e according to the present embodiment sets, from the area rating information and the position information of the user U, the area rating that restricts provision of the sub-image PS and is to be applied at the current position of the user U, and determines a display content of the sub-image PS based on that area rating. Therefore, the display device 10e according to the present embodiment can provide the sub-image PS appropriately according to the age of the user U and the area.
The computer program for performing the display method described above may be provided by being stored in a non-transitory computer-readable storage medium, or may be provided via a network such as the Internet. Examples of the computer-readable storage medium include optical discs such as a digital versatile disc (DVD) and a compact disc (CD), and other types of storage devices such as a hard disk and a semiconductor memory.
According to the present embodiment, an image can be appropriately provided to a user.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
Number | Date | Country | Kind |
---|---|---|---|
2020-130656 | Jul 2020 | JP | national |
2020-130877 | Jul 2020 | JP | national |
2020-130878 | Jul 2020 | JP | national |
2020-130879 | Jul 2020 | JP | national |
2020-131024 | Jul 2020 | JP | national |
2020-131025 | Jul 2020 | JP | national |
2020-131026 | Jul 2020 | JP | national |
2020-131027 | Jul 2020 | JP | national |
This application is a Continuation of PCT International Application No. PCT/JP2021/028675, filed on Aug. 2, 2021, which claims the benefit of priority from Japanese Patent Applications No. 2020-130656, No. 2020-131024, No. 2020-131025, No. 2020-131026, No. 2020-130877, No. 2020-130878, No. 2020-130879 and No. 2020-131027, filed on Jul. 31, 2020, the entire contents of all of which are incorporated herein by reference.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2021/028675 | Aug 2021 | US
Child | 18102112 | | US