The present application claims priority to Japanese Application Number 2015-191042, filed Sep. 29, 2015, the disclosure of which is hereby incorporated by reference herein in its entirety.
This disclosure relates to an image generating device, an image generating method, and an image generating program.
There is known a technology for displaying additional information in superimposition on a content. For example, in Japanese Patent No. 5465620, additional information is displayed in a display region determined depending on characteristics of the content.
When additional information is displayed in a region that a user rarely looks at, a new user experience may be created.
This disclosure has been made in view of the above-mentioned point, and has an object to provide an image generating device, an image generating method, and an image generating program that enable additional information to be displayed in a region that a user rarely looks at, in at least one embodiment.
In order to help solve the above-mentioned problem, according to at least one embodiment of this disclosure, there is provided an image generating device, including: image generating means for generating an image to be displayed on a display; frequency statistics means for creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; low frequency area identifying means for identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; additional image generating means for generating an additional image to be arranged in the low frequency area in superimposition on the image; and image outputting means for outputting the image and the additional image to the display.
Further, according to at least one embodiment of this disclosure, there is provided an image generating method, which is to be executed by a computer, the image generating method including: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
Further, according to at least one embodiment of this disclosure, there is provided an image generating program for causing a computer to execute the procedures of: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
According to this disclosure, additional information can be displayed in a region that a user rarely looks at.
First, contents of at least one embodiment of this disclosure are listed and described. At least one embodiment of this disclosure has the following configuration.
(Item 1) An image generating device, including: image generating means for generating an image to be displayed on a display; frequency statistics means for creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; low frequency area identifying means for identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; additional image generating means for generating an additional image to be arranged in the low frequency area in superimposition on the image; and image outputting means for outputting the image and the additional image to the display.
(Item 2) An image generating device according to Item 1, in which the display includes a head mounted display, and in which the image includes a virtual reality image to be presented to a user wearing the head mounted display.
(Item 3) An image generating device according to Item 1 or 2, in which the frequency statistics means is configured to calculate the frequency based on a line-of-sight direction of the user detected by line-of-sight direction detecting means.
(Item 4) An image generating device according to Item 1 or 2, in which the frequency statistics means is configured to calculate the frequency based on output from a sensor configured to detect a direction of a head of the user.
(Item 5) An image generating device according to Item 3 or 4, in which the additional image generating means is configured to dynamically change the additional image based on one of a current line-of-sight direction of the user detected by the line-of-sight direction detecting means, and a current direction of a head of the user detected by the sensor.
(Item 6) An image generating device according to any one of Items 1 to 5, in which the low frequency area identifying means is configured to identify an area of the image in which the frequency falls below a first threshold as a first low frequency area, and to identify an area of the image in which the frequency is equal to or greater than the first threshold but falls below a second threshold larger than the first threshold as a second low frequency area, and in which the additional image generating means is configured to arrange a first additional image in the first low frequency area in superimposition on the image, and to arrange a second additional image, which is different in attribute value from the first additional image, in the second low frequency area in superimposition on the image.
(Item 7) An image generating method, which is to be executed by a computer, the image generating method including: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
(Item 8) An image generating program for causing a computer to execute the procedures of: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency at which a user looks at each part of the image displayed on the display; identifying, as a low frequency area, an area of the image in which the frequency falls below a threshold, based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
In the following, detailed description is given of at least one embodiment of this disclosure with reference to the drawings.
The HMD 120 is a display device to be used by being worn on a head of a user 160. The HMD 120 includes a display 122, an eye tracking device (hereinafter referred to as “ETD”) 124, and a sensor 126. In at least one embodiment, at least one of the ETD 124 or the sensor 126 is omitted. The HMD 120 may further include a speaker (headphones) and a camera (not shown), in at least one embodiment.
The display 122 is configured to present an image in a field of view of the user 160 wearing the HMD 120. For example, the display 122 may be configured as a non-transmissive display. In this case, the sight of the outside world of the HMD 120 is blocked from the field of view of the user 160, and the user 160 can see only the image displayed on the display 122. On the display 122, for example, an image generated using a computer executing graphics software is displayed. In at least one embodiment, the generated image is a virtual reality image obtained by forming an image of a space of virtual reality (for example, a world created in a computer game). Alternatively, the real world may be expressed by the computer executing the graphics software based on positional coordinate data of, for example, the actual geography or objects in the real world. Further, instead of the computer executing the graphics software, the camera (not shown) mounted on the HMD 120 may be used to display, on the display 122, a video taken from the perspective of the user 160.
The ETD 124 is configured to track the movement of the eyeballs of the user 160, to thereby detect the direction of the line of sight of the user 160. For example, the ETD 124 includes an infrared light source and an infrared camera. The infrared light source is configured to irradiate the eye of the user 160 wearing the HMD 120 with infrared rays. The infrared camera is configured to take an image of the eye of the user 160 irradiated with the infrared rays. The infrared rays are reflected on the surface of the eye of the user 160, but the reflectance of the infrared rays differs between the pupil and a part of the eyeball other than the pupil. In the image of the eye of the user 160 taken by the infrared camera, the difference in reflectance of the infrared rays appears as contrast in the image. Based on this contrast, the pupil is identified in the image of the eye of the user 160, and further the direction of the line of sight of the user 160 is detected based on the position of the identified pupil. The line-of-sight direction of the user 160 represents an area that the user 160 is gazing at in the image displayed on the display 122.
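As a non-limiting illustration of the contrast-based pupil detection described above, the following Python sketch estimates a gaze offset from a grayscale infrared eye image. The dark-pixel threshold and the centroid-to-direction mapping are illustrative assumptions; an actual ETD would map the offset to an angle via per-user calibration.

```python
import numpy as np

def estimate_gaze_offset(ir_eye_image, dark_threshold=40):
    """Estimate where the user is looking from an infrared eye image.

    The pupil reflects fewer infrared rays than the rest of the eye,
    so it appears as the darkest blob in the image; its centroid offset
    from the image center serves as a rough line-of-sight proxy.
    """
    img = np.asarray(ir_eye_image, dtype=np.float32)
    pupil_mask = img < dark_threshold      # pupil candidates: darkest pixels
    if not pupil_mask.any():
        return None                        # no pupil found (e.g. a blink)
    ys, xs = np.nonzero(pupil_mask)
    pupil_center = np.array([xs.mean(), ys.mean()])
    image_center = np.array([img.shape[1] / 2.0, img.shape[0] / 2.0])
    # Normalized offset in roughly [-1, 1], (horizontal, vertical).
    return (pupil_center - image_center) / image_center
```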
The sensor 126 is configured to detect the direction of the head of the user 160 wearing the HMD 120. Examples of the sensor 126 include a magnetic sensor, an angular velocity sensor, an acceleration sensor, and combinations thereof. When the sensor 126 is a magnetic sensor, an angular velocity sensor, or an acceleration sensor, the sensor 126 is built into the HMD 120 and is configured to output a value (magnetic, angular velocity, or acceleration value) based on the direction or the movement of the HMD 120. By processing the output value of the sensor 126 by an appropriate method, the direction of the head of the user 160 wearing the HMD 120 is calculated. The direction of the head of the user 160 can be used to change the display image of the display 122 so as to follow the movement of the head of the user 160 when the head is moved. When the display image of the display 122 is changed in accordance with the movement of the head of the user 160, the direction of the head of the user 160 provides a rough indication of the part of the display image that the user 160 is most probably viewing.
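As one hedged example of processing the sensor output "by an appropriate method", the sketch below integrates angular velocity readings into a yaw/pitch head direction. Sensor fusion and drift correction, which a practical implementation would need, are omitted for brevity.

```python
import numpy as np

def update_head_direction(yaw_pitch, angular_velocity, dt):
    """Integrate angular velocity (rad/s) over one frame interval dt
    to track the head direction as (yaw, pitch).
    """
    yaw, pitch = yaw_pitch
    yaw_rate, pitch_rate = angular_velocity
    yaw = (yaw + yaw_rate * dt) % (2.0 * np.pi)          # wrap around
    pitch = float(np.clip(pitch + pitch_rate * dt,       # clamp to
                          -np.pi / 2.0, np.pi / 2.0))    # straight up/down
    return yaw, pitch
```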
The sensor 126 may be a sensor provided outside of the HMD 120. For example, the sensor 126 may be an infrared sensor separated from the HMD 120. When an infrared reflecting marker formed on the surface of the HMD 120 is detected with use of the infrared sensor, the direction of the head of the user 160 wearing the HMD 120 can be identified.
The image generating device 200 is a device configured to generate an image to be displayed on the HMD 120. The image generating device 200 includes at least a processor 202, a non-transitory memory 204, and a user input interface 208. The image generating device 200 may further include other components, for example, a network interface (not shown) configured to communicate with other devices via a network. The image generating device 200 may be implemented as, for example, a personal computer, a game console, a smart phone, a tablet terminal, or the like.
The memory 204 has stored therein at least an operating system and an image generating program. The operating system is a computer program for controlling the entire operation of the image generating device 200. The image generating program is a computer program for the image generating device 200 to achieve respective functions of image generating processing to be described later. The memory 204 can further temporarily or permanently store data generated by the operation of the image generating device 200. Specific examples of the memory 204 include a read only memory (ROM), a random access memory (RAM), a hard disk, a flash memory, and an optical disc.
The processor 202 is configured to read out a program stored in the memory 204, to thereby execute processing in accordance with the program. When the processor 202 executes the image generating program stored in the memory 204, various functions of the image generating processing to be described later are achieved. The processor 202 includes at least a central processing unit (CPU) and a graphics processing unit (GPU).
The user input interface 208 is configured to receive input for operating the image generating device 200 from the user of the image displaying system 100. Specific examples of the user input interface 208 include a game controller, a touch pad, a mouse, and a keyboard.
The image generating unit 231 is configured to generate an image to be displayed on the HMD 120. For example, the image generating unit 231 is configured to acquire predetermined data from the storage unit 220, to thereby generate an image by computer graphics processing based on the acquired data. As at least one example, the image generating unit 231 may generate such a virtual reality image that the user 160 wearing the HMD 120 can recognize a virtual reality space of a computer game. The virtual reality image represents a sight that the user can see in the virtual reality space. For example, the virtual reality image to be generated by the image generating unit 231 includes characters that appear in the computer game, a landscape including buildings and trees, an interior design inside a room including furniture and walls, items on the ground, a part (hand or foot) of a body of an avatar that the user is operating, and an object (gun or sword) that the avatar is holding in its hand. Further, the image generating unit 231 may generate a computer graphics image that reproduces the real world based on the actual geography data of the real world or the like. Further, the image to be generated by the image generating unit 231 may be, instead of one obtained by computer graphics processing, for example, a video taken from the perspective of the user 160 by an external camera mounted on the HMD 120.
The image generating unit 231 may further change an image based on the output value from the sensor 126. For example, the image to be generated by the image generating unit 231 may be an image representing a state in which the field of view of the user in the virtual reality space transitions so as to follow the movement of the head of the user 160, which is represented by the output value from the sensor 126.
The image generated by the image generating unit 231 is output to the HMD 120 via the image outputting unit 235, to thereby be displayed on the display 122.
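The flow among the functional units described in this section might be wired together per frame as in the following sketch. All object and method names are hypothetical stand-ins for the functional blocks 231 to 235; the disclosure does not prescribe a particular API.

```python
def render_frame(image_generating_unit, frequency_statistics_unit,
                 low_frequency_area_identifying_unit,
                 additional_image_generating_unit,
                 image_outputting_unit, gaze_sample):
    """One per-frame pass through the functional units 231 to 235."""
    image = image_generating_unit.generate()              # unit 231
    frequency_statistics_unit.record(gaze_sample)         # unit 232
    statistics = frequency_statistics_unit.statistics()
    low_areas = low_frequency_area_identifying_unit.identify(statistics)  # 233
    additional = additional_image_generating_unit.generate(low_areas)     # 234
    image_outputting_unit.output(image, additional)       # unit 235 -> HMD 120
```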
The frequency statistics unit 232 is configured to create statistical data based on a frequency at which the user 160 wearing the HMD 120 looks at each area of the image displayed on the display 122. The statistical data represents an area that is frequently looked at and an area that is not frequently looked at in the image displayed on the display 122. For example, the frequency statistics unit 232 is configured to create the statistical data of the frequency at which the user 160 looks at each area of the image based on the line-of-sight direction of the user 160 detected by the ETD 124. Further, the frequency statistics unit 232 may create the statistical data based on the direction of the head of the user 160 detected by the sensor 126. Specific description is given below with reference to the drawings.
Each of the partial regions 501 of the drawings is one of the regions obtained by dividing the image displayed on the display 122 into a grid, and a frequency value is recorded for each partial region 501. As illustrated in the drawings, the frequency value of a partial region 501 is incremented each time the line-of-sight direction or the head direction of the user 160 is determined to point at that partial region 501. In the example in which the display range 530 moves in the region 540 so as to follow the movement of the head of the user 160, the frequency values are collected in the same manner for the partial regions 501 included in the current display range 530.
As described above, the frequency values of the respective partial regions 501 are statistically collected based on the line-of-sight direction or the head direction of the user 160. The collected frequency values of the respective partial regions 501 form the statistical data. The statistical data may be stored in the storage unit 220.
The dotted lines (frame lines of the partial regions 501) and the numbers (frequency values) shown in the drawings are given for convenience of description of the statistical data, and are not actually displayed on the display 122.
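A minimal sketch of such statistics collection, assuming the image is divided into a fixed grid of partial regions and that a gaze point in pixel coordinates is available from the ETD 124 (or approximated from the sensor 126); the grid dimensions are invented for illustration:

```python
import numpy as np

GRID_ROWS, GRID_COLS = 8, 12   # assumed grid of partial regions 501

def make_statistics():
    """One frequency counter per partial region 501."""
    return np.zeros((GRID_ROWS, GRID_COLS), dtype=np.int64)

def record_gaze(stats, gaze_xy, image_size):
    """Increment the frequency value of the partial region the user
    is currently looking at. gaze_xy is a pixel coordinate derived
    from the ETD 124 or approximated from the sensor 126 output.
    """
    x, y = gaze_xy
    width, height = image_size
    col = min(int(x / width * GRID_COLS), GRID_COLS - 1)
    row = min(int(y / height * GRID_ROWS), GRID_ROWS - 1)
    stats[row, col] += 1
    return stats
```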
The low frequency area identifying unit 233 is configured to identify a low frequency area based on the statistical data created by the frequency statistics unit 232. The low frequency area is a partial area that is looked at by the user 160 at a low frequency in the image displayed on the display 122 of the HMD 120. For example, the low frequency area identifying unit 233 is configured to compare the frequency value of each partial region 501 forming the statistical data with a predetermined threshold, and to determine, when the frequency value of a certain partial region 501 falls below the threshold as a result of the comparison, that the partial region 501 is a part of the low frequency area.
Further, the low frequency area identifying unit 233 may be configured to classify the low frequency area into a plurality of stages depending on the frequency value. For example, the low frequency area identifying unit 233 may be configured to set partial regions 501 having the frequency value of “0” or “1” as a first low frequency area, and to set partial regions 501 having the frequency value of “2” or “3” as a second low frequency area.
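Continuing the grid-based sketch above, the two-stage classification could be expressed as boolean masks over the grid; the threshold values mirror the example frequency values but are otherwise assumptions:

```python
import numpy as np

def classify_low_frequency_areas(stats, first_threshold=2, second_threshold=4):
    """Split the grid of frequency values into two low frequency areas.

    Regions below first_threshold (frequency values 0 and 1 in the
    example above) form the first low frequency area; regions at or
    above it but below second_threshold (values 2 and 3) form the
    second. Returns two boolean masks over the grid.
    """
    first = stats < first_threshold
    second = (stats >= first_threshold) & (stats < second_threshold)
    return first, second
```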
The additional image generating unit 234 is configured to generate an additional image to be arranged in the low frequency area. The generated additional image is output to the HMD 120 via the image outputting unit 235, and is displayed on the display 122 in superimposition on the image output from the image generating unit 231. The additional image can be used for, for example, presenting advertisements in the virtual reality space, or displaying an enemy character or a useful item in a computer game. Because the additional image is displayed in the low frequency area, which the user 160 looks at only rarely, the user 160 can visually recognize the image output from the image generating unit 231 largely unaffected by the additional image. Conversely, when a rule is adopted such that additional images of high value to the user 160 are displayed in the low frequency area, the attention of the user 160 can be directed not only to the image output from the image generating unit 231 (the area other than the low frequency area), but also to the low frequency area. For example, in a computer game, when a high-value rare item, or an enemy character that provides a high score when defeated, is displayed in the low frequency area as the additional image, the game can be made more amusing.
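One simple placement strategy, shown below as an assumption rather than the method of the disclosure, pastes the additional image into the first grid cell of the low frequency mask; a practical implementation might instead select the largest connected low frequency area, or alpha-blend the overlay.

```python
import numpy as np

def superimpose_additional_image(frame, additional, low_freq_mask,
                                 cell_h, cell_w):
    """Paste an additional image (e.g. an advertisement or an item
    sprite) into the first low frequency grid cell found. The paste is
    an opaque overwrite; alpha blending is left out for brevity.
    """
    rows, cols = np.nonzero(low_freq_mask)
    if len(rows) == 0:
        return frame                       # no low frequency area yet
    y, x = rows[0] * cell_h, cols[0] * cell_w
    h = min(additional.shape[0], cell_h)
    w = min(additional.shape[1], cell_w)
    out = frame.copy()
    out[y:y + h, x:x + w] = additional[:h, :w]
    return out
```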
Further, the additional image generating unit 234 may dynamically change the additional image based on the current line-of-sight direction or the current head direction of the user 160. For example, first, a certain character (a ghost, or a mole in a whack-a-mole game) is displayed in the low frequency area as the additional image. The additional image generating unit 234 determines whether or not the user 160 is about to direct his/her line of sight (or his/her head) to the low frequency area based on the input from the ETD 124 or the sensor 126. When the user 160 is about to direct his/her line of sight to the low frequency area, the additional image generating unit 234 changes the additional image of the character to, for example, an additional image in which the character escapes from the line of sight of the user 160 (the ghost disappears from the field of view, or the mole hides in the ground). The degree to which the character escapes may be adjusted depending on how closely the line of sight of the user 160 approaches the low frequency area.
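The escape behavior could be scaled by gaze proximity as in the following sketch; the trigger radius and maximum escape step are illustrative tuning constants, not values from the disclosure.

```python
import numpy as np

def escape_step(character_xy, gaze_xy, trigger_radius=200.0, max_step=80.0):
    """Move the character away from an approaching gaze point.

    The closer the gaze point gets, the larger the escape step, up to
    max_step pixels per frame; beyond trigger_radius the character
    stays put.
    """
    char = np.asarray(character_xy, dtype=np.float64)
    gaze = np.asarray(gaze_xy, dtype=np.float64)
    away = char - gaze
    dist = float(np.linalg.norm(away))
    if dist == 0.0 or dist >= trigger_radius:
        return char
    step = max_step * (1.0 - dist / trigger_radius)   # escape degree
    return char + (away / dist) * step
```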
When the low frequency area is classified into a plurality of stages, the additional image generating unit 234 may further generate different additional images (a first additional image, a second additional image, and the like) for the respective classified low frequency areas (the first low frequency area, the second low frequency area, and the like). For example, an attribute value of the first additional image to be displayed in the first low frequency area differs from an attribute value of the second additional image to be displayed in the second low frequency area. As an example, when the first low frequency area corresponds to the frequency values "0" and "1", and the second low frequency area corresponds to the frequency values "2" and "3", a rarer item with a higher value, or an enemy character that provides a higher score when defeated, may be displayed as the first additional image in the first low frequency area, which the user 160 looks at even less frequently. In this manner, the game can be made even more amusing.
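As a toy example of differing attribute values per stage, one might keep a lookup table such as the following; the item names and scores are invented purely for illustration.

```python
# The rarer the area is looked at, the more valuable the additional
# image placed there.
ADDITIONAL_IMAGE_BY_AREA = {
    "first_low_frequency_area":  {"item": "rare_item",   "score": 500},
    "second_low_frequency_area": {"item": "common_item", "score": 100},
}

def pick_additional_image(area_name):
    return ADDITIONAL_IMAGE_BY_AREA[area_name]
```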
While a description has been given above on the embodiment of this disclosure, this disclosure is not limited thereto, and various modifications can be made without departing from the spirit of this disclosure.