The present application claims priority to Japanese Application Number 2016-161038, filed Aug. 19, 2016, the disclosure of which is hereby incorporated by reference herein in its entirety.
This disclosure relates to an information processing method and a system for executing the information processing method.
Japanese Patent Application Laid-open No. 2003-319351 describes a system for distributing omnidirectional video taken by an omnidirectional camera. “Toybox Demo for Oculus Touch”, [online], Oct. 13, 2015, Oculus, [retrieved on Aug. 6, 2016], Internet <https://www.youtube.com/watch?v=iFEMiyGMa58>, describes a technology of changing a state of a hand object in a virtual reality (VR) space based on a state (for example, a position and an inclination) of a hand of a user in a real space, and of operating the hand object to exert a predetermined action on a predetermined object in the virtual space.
In recent years, there has been proposed a technology of distributing omnidirectional video via a network so that the user can view the video with use of a head mounted display (HMD). In this case, employing a technology such as that in “Toybox Demo for Oculus Touch”, [online], Oct. 13, 2015, Oculus, [retrieved on Aug. 6, 2016], Internet <https://www.youtube.com/watch?v=iFEMiyGMa58>, makes it possible to provide such a virtual experience that the user can interact with virtual content, for example, the omnidirectional video. However, defining various objects in the virtual content in order to provide such a virtual experience to the user leads to a risk of an increase in the data amount of the virtual content.
At least one embodiment of this disclosure has an object to provide a virtual experience to a user while preventing an increase in a data amount of virtual content.
According to at least one embodiment of this disclosure, there is provided an information processing method for use in a system including a head mounted display (HMD) and a position sensor configured to detect a position of the HMD and a position of a part of a body of a user other than a head of the user. The information processing method includes specifying virtual space data for defining a virtual space including a virtual camera, an operation object, omnidirectional video, and a projection portion on which the omnidirectional video is projected. The method further includes projecting the omnidirectional video on the projection portion in a first mode. The method further includes moving the virtual camera based on a movement of the HMD. The method further includes defining a visual field of the virtual camera based on a movement of the virtual camera, and generating visual-field image data based on the visual field and the virtual space data. The method further includes displaying a visual-field image on the HMD based on the visual-field image data. The method further includes moving the operation object based on a movement of the part of the body. The method further includes projecting the omnidirectional video on the projection portion in a second mode different from the first mode when the operation object and the projection portion are in contact with each other.
According to at least one embodiment of this disclosure, providing a virtual experience to a user while preventing an increase in a data amount of virtual content is possible.
The summary of at least one embodiment of this disclosure is described.
(Item 1) An information processing method for use in a system including a head mounted display (HMD) and a position sensor configured to detect a position of the HMD and a position of a part of a body of a user other than a head of the user. The information processing method includes specifying virtual space data for defining a virtual space including a virtual camera, an operation object, omnidirectional video, and a projection portion on which the omnidirectional video is projected. The method further includes projecting the omnidirectional video on the projection portion in a first mode. The method further includes moving the virtual camera based on a movement of the head mounted display. The method further includes defining a visual field of the virtual camera based on a movement of the virtual camera, and generating visual-field image data based on the visual field and the virtual space data. The method further includes displaying a visual-field image on the head mounted display based on the visual-field image data. The method further includes moving the operation object based on a movement of the part of the body. The method further includes projecting the omnidirectional video on the projection portion in a second mode different from the first mode when the operation object and the projection portion are in contact with each other.
According to the information processing method of Item 1, the display mode of the omnidirectional video is changed based on an interaction between the operation object and the projection portion on which the omnidirectional video is projected. With this, while suppressing an increase in the data amount of the omnidirectional video content data, a virtual experience may be provided to the user based on an interaction with the virtual content.
(Item 2) An information processing method according to Item 1, in which the projection portion is sectioned into a plurality of parts including a first part and a second part different from the first part. At least a part of a display target is displayed on the first part. Projecting the omnidirectional video includes changing a display mode of the display target to change the omnidirectional video from the first mode to the second mode when the operation object is in contact with the first part or when the operation object is in contact with the second part.
With this, the display mode of the display target that the user intends to touch can be selectively changed, and hence the virtual experience may be provided based on an intuitive interaction with the virtual content.
(Item 3) An information processing method according to Item 1 or 2, in which the operation object includes a virtual body that is movable in synchronization with the movement of the part of the body.
With this, the virtual experience may be provided based on an intuitive interaction with the virtual content.
(Item 4) An information processing method according to Item 1 or 2, in which the operation object includes a target object capable of exhibiting a behavior operated by a virtual body that is movable in synchronization with the movement of the part of the body.
With this, the virtual experience may be provided based on an intuitive interaction with the virtual content.
(Item 5) An information processing method according to Item 3 or 4, in which the projection portion is sectioned into a plurality of parts including a first part and a second part different from the first part. At least a part of a display target is displayed on the first part. The display target is configured to change a display mode based on the first mode along with elapse of a playing time of the omnidirectional video. Projecting the omnidirectional video includes changing the display mode of the display target to change the omnidirectional video from the first mode to the second mode when the operation object is in contact with the first part or when the operation object is in contact with the second part. Projecting the omnidirectional video further includes specifying a viewing target associated with the display target based on a time at which the operation object is in contact with the first part, to thereby output information for specifying the viewing target.
With this, the viewing target with which the user desires to interact can be specified based on the part in which the operation object and the projection portion are in contact with each other. Therefore, when advertisements or other items are displayed in the omnidirectional moving image, the advertising effectiveness can be measured.
(Item 6) An information processing method according to Item 3 or 4, in which the projection portion is sectioned into a plurality of parts including a first part and a second part different from the first part. At least a part of a display target is displayed on the first part. The display target is configured to change a display mode along with elapse of a playing time of the omnidirectional video, based on one of the first mode and the second mode for displaying the same content in different display modes. Projecting the omnidirectional video includes, when the operation object is in contact with the first part or when the operation object is in contact with the second part, changing the display mode of the display target to change the omnidirectional video from the first mode to the second mode, and continuously changing the display mode of the display target based on the second mode along with the elapse of the playing time.
With this, providing, to the user, a virtual experience that is based on the interaction with the virtual content while providing the omnidirectional video that progresses based on predetermined content is possible.
(Item 7) An information processing method according to Item 3 or 4, in which the projection portion is sectioned into a plurality of parts including a first part and a second part different from the first part. At least a part of a display target is displayed on the first part. The display target is configured to change a display mode along with elapse of a playing time of the omnidirectional video, based on one of the first mode and the second mode for displaying different contents. Projecting the omnidirectional video includes changing the display mode of the display target to change the omnidirectional video from the first mode to the second mode when the operation object is in contact with the first part or when the operation object is in contact with the second part. Projecting the omnidirectional video further includes stopping the changing of the display mode of the display target based on the first mode along with the elapse of the playing time. Projecting the omnidirectional video further includes changing the display mode of the display target based on the second mode for a predetermined period along with the elapse of the playing time. Projecting the omnidirectional video further includes restarting the changing of the display mode of the display target based on the first mode along with the elapse of the playing time.
With this, providing, to the user, a virtual experience that is based on the interaction with the virtual content while providing the omnidirectional video that progresses based on predetermined content is possible.
(Item 8) A system for executing the information processing method of any one of Items 1 to 7.
At least one embodiment of this disclosure is described below with reference to the drawings. Once a component is described in this description of at least one embodiment, a description of a component having the same reference number as that of the already described component is omitted for the sake of convenience.
First, a schematic configuration of a system including the HMD 110, a position sensor 130, a control device 120, and an external controller 320 is described.
The HMD 110 includes a display unit 112, an HMD sensor 114, and an eye gaze sensor 140. The display unit 112 includes a non-transmissive display device (or partially transmissive display device) configured to cover a field of view (visual field) of the user U wearing the HMD 110. With this, the user U can see a visual-field image displayed on the display unit 112, and hence the user U can be immersed in a virtual space. The display unit 112 may include a left-eye display unit configured to provide an image to a left eye of the user U, and a right-eye display unit configured to provide an image to a right eye of the user U. Further, the HMD 110 may include a transmissive display device. In this case, the transmissive display device may be able to be temporarily configured as the non-transmissive display device by adjusting the transmittance of the display unit 112. Further, the visual-field image may include a configuration for presenting a real space in a part of the image forming the virtual space. For example, an image taken by a camera mounted to the HMD 110 may be displayed so as to be superimposed on a part of the visual-field image, or a transmittance of a part of the transmissive display device may be set high to enable the user to visually recognize the real space through a part of the visual-field image.
The HMD sensor 114 is mounted near the display unit 112 of the HMD 110. The HMD sensor 114 includes at least one of a geomagnetic sensor, an acceleration sensor, or an inclination sensor (for example, an angular velocity sensor or a gyro sensor), and can detect various movements of the HMD 110 worn on the head of the user U.
The eye gaze sensor 140 has an eye tracking function of detecting a line-of-sight direction of the user U. For example, the eye gaze sensor 140 may include a right-eye gaze sensor and a left-eye gaze sensor. The right-eye gaze sensor may be configured to detect reflective light reflected from the right eye (in particular, the cornea or the iris) of the user U by irradiating the right eye with, for example, infrared light, to thereby acquire information relating to a rotational angle of a right eyeball. Meanwhile, the left-eye gaze sensor may be configured to detect reflective light reflected from the left eye (in particular, the cornea or the iris) of the user U by irradiating the left eye with, for example, infrared light, to thereby acquire information relating to a rotational angle of a left eyeball.
The position sensor 130 is constructed of, for example, a position tracking camera, and is configured to detect the positions of the HMD 110 and the external controller 320. The position sensor 130 is connected to the control device 120 so as to enable communication to/from the control device 120 in a wireless or wired manner. In at least one embodiment, the position sensor 130 is configured to detect information relating to positions, inclinations, or light emitting intensities of a plurality of detection points (not shown) provided in the HMD 110. Further, in at least one embodiment, the position sensor 130 is configured to detect information relating to positions, inclinations, and/or light emitting intensities of a plurality of detection points 304 provided in the external controller 320.
The control device 120 is capable of acquiring movement information such as the position and the direction of the HMD 110 based on the information acquired from the HMD sensor 114 or the position sensor 130, and accurately associating a position and a direction of a virtual point of view (virtual camera) in the virtual space with the position and the direction of the user U wearing the HMD 110 in the real space based on the acquired movement information. Further, the control device 120 is capable of acquiring movement information of the external controller 320 based on the information acquired from the position sensor 130, and accurately associating a position and a direction of a hand object (described later) to be displayed in the virtual space based on a relative relationship of the position and the direction between the external controller 320 and the HMD 110 in the real space using the acquired movement information. Similar to the HMD sensor 114, the movement information of the external controller 320 may be obtained from a geomagnetic sensor, an acceleration sensor, an inclination sensor, or other sensors mounted to the external controller 320.
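The association described above can be illustrated with a minimal sketch (not part of the disclosed embodiments). It assumes that the position sensor 130 reports positions and yaw angles in a common real-space coordinate system, and that the hand object is placed by re-expressing the controller pose relative to the HMD in the virtual camera's frame; all names and the yaw-only rotation are simplifying assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float  # rotation about the vertical axis, in radians

def camera_pose_from_hmd(hmd: Pose) -> Pose:
    """The virtual camera simply follows the position and direction of the HMD."""
    return Pose(hmd.x, hmd.y, hmd.z, hmd.yaw)

def hand_pose_from_controller(hmd: Pose, controller: Pose, camera: Pose) -> Pose:
    """Place the hand object using the controller pose expressed relative to the HMD."""
    dx, dy, dz = controller.x - hmd.x, controller.y - hmd.y, controller.z - hmd.z
    # Rotate the real-space offset into the HMD's local frame ...
    c, s = math.cos(-hmd.yaw), math.sin(-hmd.yaw)
    lx, lz = c * dx - s * dz, s * dx + c * dz
    # ... and re-express it in the virtual space using the camera's yaw.
    c2, s2 = math.cos(camera.yaw), math.sin(camera.yaw)
    wx, wz = c2 * lx - s2 * lz, s2 * lx + c2 * lz
    return Pose(camera.x + wx, camera.y + dy, camera.z + wz,
                camera.yaw + (controller.yaw - hmd.yaw))
```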
The control device 120 is capable of determining each of the line of sight of the right eye and the line of sight of the left eye of the user U based on the information received from the eye gaze sensor 140. The control device 120 is able to specify a point of gaze as an intersection between the line of sight of the right eye and the line of sight of the left eye. Further, the control device 120 is capable of specifying a line-of-sight direction of the user U based on the specified point of gaze. In this case, the line-of-sight direction of the user U is a line-of-sight direction of both eyes of the user U, and corresponds to a direction of a straight line passing through the point of gaze and a midpoint of a line segment connecting between the right eye and the left eye of the user U.
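As an illustration only (the disclosure does not specify the computation), the point of gaze can be approximated as the point closest to both eye rays, and the line-of-sight direction as the vector from the midpoint between the eyes toward that point; the function below is a sketch under that assumption.

```python
import numpy as np

def gaze_direction(right_eye, right_dir, left_eye, left_dir):
    """Approximate the point of gaze as the midpoint of the shortest segment
    between the two eye rays, then return the direction from the midpoint of
    the two eyes toward the point of gaze (all inputs are 3-vectors)."""
    r0, rd = np.asarray(right_eye, float), np.asarray(right_dir, float)
    l0, ld = np.asarray(left_eye, float), np.asarray(left_dir, float)
    rd, ld = rd / np.linalg.norm(rd), ld / np.linalg.norm(ld)
    w = r0 - l0
    a, b, c = rd @ rd, rd @ ld, ld @ ld
    d, e = rd @ w, ld @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # nearly parallel lines of sight
        t_r, t_l = 0.0, e / c
    else:
        t_r = (b * e - c * d) / denom
        t_l = (a * e - b * d) / denom
    gaze_point = 0.5 * ((r0 + t_r * rd) + (l0 + t_l * ld))
    eye_mid = 0.5 * (r0 + l0)
    direction = gaze_point - eye_mid
    return gaze_point, direction / np.linalg.norm(direction)
```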
Next, a hardware configuration of the control device 120 is described.
The control device 120 may be constructed as a personal computer, a tablet computer, or a wearable device separate from the HMD 110, or may be built into the HMD 110. Further, a part of the functions of the control device 120 may be performed by a device mounted to the HMD 110, and other functions of the control device 120 may be performed by another device separate from the HMD 110.
The control unit 121 includes a memory and a processor.
The memory is constructed of, for example, a read only memory (ROM) having various programs and the like stored therein and a random access memory (RAM) having a plurality of work areas in which various programs to be executed by the processor are stored. The processor is constructed of, for example, a central processing unit (CPU), a micro processing unit (MPU), and/or a graphics processing unit (GPU), and is configured to develop, on the memory, instructions designated from among the various programs installed in the memory, and to execute various types of processing in cooperation with the memory.
The control unit 121 may control various operations of the control device 120 by causing the processor to develop, on the memory, instructions (to be described later) for executing the information processing method according to at least one embodiment, and to execute the instructions in cooperation with the memory. The control unit 121 executes a predetermined application program (including a game program and an interface program) stored in the memory or the storage unit 123 to provide instructions for displaying a virtual space (visual-field image) on the display unit 112 of the HMD 110. With this, the user U can be immersed in the virtual space displayed on the display unit 112.
The storage unit (storage) 123 is a storage device, for example, a hard disk drive (HDD), a solid state drive (SSD), or a USB flash memory, and is configured to store programs and various types of data. The storage unit 123 may store the instructions for causing the system to execute the information processing method according to at least one embodiment. Further, the storage unit 123 may store instructions for authentication of the user U and game programs including data relating to various images and objects. Further, a database including tables for managing various types of data may be constructed in the storage unit 123.
The I/O interface 124 is configured to connect each of the position sensor 130, the HMD 110, and the external controller 320 to the control device 120 so as to enable communication therebetween, and is constructed of, for example, a universal serial bus (USB) terminal, a digital visual interface (DVI) terminal, or a high-definition multimedia interface (HDMI) terminal. The control device 120 may be wirelessly connected to each of the position sensor 130, the HMD 110, and the external controller 320. In at least one embodiment, the control device 120 has a wired connection to at least one of the position sensor 130, the HMD 110, or the external controller 320.
The communication interface 125 is configured to connect the control device 120 to a communication network 3, for example, a local area network (LAN), a wide area network (WAN), or the Internet. The communication interface 125 includes various wire connection terminals and various processing circuits for wireless connection for communication to/from an external device on a network via the communication network 3, and is configured to adapt to communication standards for communication via the communication network 3.
The control device 120 is connected to a content management server 4 via the communication network 3. The content management server 4 includes a control unit 41, a content management unit 42, and a viewing data management unit 43. The control unit 41 includes a memory and a processor. Each of the content management unit 42 and the viewing data management unit 43 includes a storage unit (storage). The content management unit 42 stores virtual space data for constructing virtual space content including various kinds of omnidirectional video to be described later. When the control unit 41 receives a viewing request for predetermined content from the control device 120, the control unit 41 reads out the virtual space data corresponding to the viewing request from the content management unit 42, and transmits the virtual space data to the control device 120 via the communication network 3. The control unit 41 receives data for specifying a user's viewing history, which is transmitted from the control device 120, and causes the viewing data management unit 43 to store the data.
Next, a configuration of the external controller 320 is described. The external controller 320 is used to detect a movement of the hand of the user U, to thereby control a movement of the hand object displayed in the virtual space.
In at least one embodiment, the controller 320R to be operated by the right hand of the user U includes a top surface 322, a grip 324, a plurality of detection points 304, a sensor, and a transceiver.
The controller 320R includes a frame 326 that extends from both side surfaces of the grip 324 in directions opposite to the top surface 322 to form a semicircular ring. The plurality of detection points 304 are embedded in the outer side surface of the frame 326. The plurality of detection points 304 are, for example, a plurality of infrared LEDs arranged in at least one row along a circumferential direction of the frame 326. The position sensor 130 detects information relating to positions, inclinations, and light emitting intensities of the plurality of detection points 304, and then the control device 120 acquires the movement information including the information relating to the position and the attitude (inclination and direction) of the controller 320R based on the information detected by the position sensor 130.
The sensor of the controller 320R may be, for example, any one of a magnetic sensor, an angular velocity sensor, or an acceleration sensor, or a combination of those sensors. The sensor outputs a signal (for example, a signal indicating information relating to magnetism, angular velocity, or acceleration) based on the direction and the movement of the controller 320R when the user U moves the controller 320R. The control device 120 acquires information relating to the position and the attitude of the controller 320R based on the signal output from the sensor.
The transceiver of the controller 320R is configured to perform transmission or reception of data between the controller 320R and the control device 120. For example, the transceiver may transmit an operation signal corresponding to the operation input of the user U to the control device 120. Further, the transceiver may receive an instruction signal for instructing the controller 320R to cause light emission of the detection points 304 from the control device 120. Further, the transceiver may transmit a signal representing the value detected by the sensor to the control device 120.
Next, processing for displaying the visual-field image on the HMD 110 is described.
In Step S1, the control unit 121 specifies virtual space data for defining the virtual space 200 including the virtual camera 300 and the projection portion 210 on which the omnidirectional video is projected.
In Step S2, the control unit 121 specifies a visual field CV of the virtual camera 300 based on a position and a direction of the virtual camera 300 in the virtual space 200.
The control unit 121 can specify the visual field CV of the virtual camera 300 based on the data transmitted from the position sensor 130 and/or the HMD sensor 114. In this case, when the user U wearing the HMD 110 moves, the control unit 121 can change the visual field CV of the virtual camera 300 based on the data representing the movement of the HMD 110, which is transmitted from the position sensor 130 and/or the HMD sensor 114. That is, the control unit 121 can change the visual field CV in accordance with the movement of the HMD 110. Similarly, when the line-of-sight direction of the user U changes, the control unit 121 can move the visual field CV of the virtual camera 300 based on the data representing the line-of-sight direction of the user U, which is transmitted from the eye gaze sensor 140. That is, the control unit 121 can change the visual field CV in accordance with the change in the line-of-sight direction of the user U.
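A minimal sketch of such a visual field update is shown below; representing the visual field CV as a position plus yaw/pitch angles with fixed angular extents, and the specific field-of-view values, are assumptions for illustration rather than the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class VisualField:
    position: tuple       # virtual camera position in the virtual space
    yaw: float            # horizontal viewing direction (degrees)
    pitch: float          # vertical viewing direction (degrees)
    h_fov: float = 100.0  # horizontal extent of the visual field (degrees)
    v_fov: float = 80.0   # vertical extent of the visual field (degrees)

def update_visual_field(cv: VisualField, hmd_delta, gaze_delta) -> VisualField:
    """Move/rotate the visual field CV in accordance with the movement of the
    HMD (hmd_delta) and the change of the line-of-sight direction (gaze_delta)."""
    dx, dy, dz, d_yaw, d_pitch = hmd_delta
    g_yaw, g_pitch = gaze_delta
    x, y, z = cv.position
    return VisualField(
        position=(x + dx, y + dy, z + dz),
        yaw=cv.yaw + d_yaw + g_yaw,
        # Clamping the pitch keeps the visual field from flipping over the
        # poles of the celestial sphere.
        pitch=max(-89.0, min(89.0, cv.pitch + d_pitch + g_pitch)),
        h_fov=cv.h_fov,
        v_fov=cv.v_fov,
    )
```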
In Step S3, the control unit 121 generates visual-field image data representing the visual-field image M to be displayed on the display unit 112 of the HMD 110. Specifically, the control unit 121 generates the visual-field image data based on the virtual space data defining the virtual space 200 and the visual field CV of the virtual camera 300.
In Step S4, the control unit 121 displays the visual-field image M on the display unit 112 of the HMD 110 based on the visual-field image data. As described above, the visual field CV of the virtual camera 300 is updated in accordance with the movement of the user U wearing the HMD 110, and thus the visual-field image M to be displayed on the display unit 112 of the HMD 110 is updated as well. Thus, the user U can be immersed in the virtual space 200.
The virtual camera 300 may include a left-eye virtual camera and a right-eye virtual camera. In this case, the control unit 121 generates left-eye visual-field image data representing a left-eye visual-field image based on the virtual space data and the visual field of the left-eye virtual camera. Further, the control unit 121 generates right-eye visual-field image data representing a right-eye visual-field image based on the virtual space data and the visual field of the right-eye virtual camera. After that, the control unit 121 displays the left-eye visual-field image and the right-eye visual-field image on the display unit 112 of the HMD 110 based on the left-eye visual-field image data and the right-eye visual-field image data. In this manner, the user U can visually recognize the visual-field image as a three-dimensional image from the left-eye visual-field image and the right-eye visual-field image. In this disclosure, for the sake of convenience in description, the number of the virtual cameras 300 is one. However, at least one embodiment of this disclosure is also applicable to a case where the number of the virtual cameras is at least two.
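For illustration, the left-eye and right-eye virtual cameras can be derived from the single virtual camera 300 by offsetting them laterally by half of an interpupillary distance; the 64 mm value and the axis convention (y-up, yaw about the vertical axis) are assumptions of this sketch, not part of the disclosure.

```python
import math

def stereo_camera_positions(center, yaw, ipd=0.064):
    """Offset the left-eye and right-eye virtual cameras from the single
    virtual camera position by half of an assumed interpupillary distance,
    along the camera's lateral (right-pointing) axis."""
    x, y, z = center
    rx, rz = math.cos(yaw), -math.sin(yaw)  # right axis for a y-up, z-forward convention
    half = ipd / 2.0
    left = (x - half * rx, y, z - half * rz)
    right = (x + half * rx, y, z + half * rz)
    return left, right
```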
The hand object 400 (one example of the operation object), the target object 500 (one example of the operation object), and the projection portion 210, which are arranged in the virtual space 200, are described below.
In at least one embodiment, a collision area is set for each of the hand object 400 and the target object 500. The collision area is used for determination of contact between objects; for example, the contact between the hand object 400 and the target object 500 is determined based on a relationship between the collision area of the hand object 400 and the collision area of the target object 500.
The collision area may also be set for the projection portion 210, and the contact between the target object 500 and the projection portion 210 may be determined based on the relationship between the collision area of the projection portion 210 and the collision area of the target object 500. With this, when a behavior of the target object 500 is operated by the hand object 400 (for example, the target object 500 is thrown), an action can be easily exerted on the projection portion 210 based on the target object 500 to make various kinds of determination.
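A minimal sketch of such contact determination is given below. It assumes spherical collision areas for the hand object and the target object, and approximates the projection portion 210 as the inner surface of the celestial sphere that forms the virtual space 200; the radii are illustrative values only.

```python
import math

def spheres_overlap(center_a, radius_a, center_b, radius_b) -> bool:
    """Contact test for two spherical collision areas."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

def touches_projection_portion(obj_center, obj_radius,
                               space_center=(0.0, 0.0, 0.0),
                               space_radius=10.0) -> bool:
    """Contact test between an object's collision area and the projection
    portion, approximated here as the inside surface of the celestial sphere
    (space_radius is an illustrative value)."""
    dist_from_center = math.dist(obj_center, space_center)
    return dist_from_center + obj_radius >= space_radius
```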
The target object 500 can be moved by the left hand object 400L and the right hand object 400R. For example, a grabbing motion can be performed by operating the controller 320 under a state in which the hand object 400 and the target object 500 are in contact with each other so that the fingers of the hand object 400 are bent. When the hand object 400 is moved under this state, the target object 500 can be moved so as to follow the movement of the hand object 400. Further, when the grabbing motion of the hand object 400 is cancelled during the movement, the target object 500 can be moved in the virtual space 200 in consideration of the moving speed, the acceleration, the gravity, and the like of the hand object 400. With this, the user can use the controller 320 to manipulate the target object 500 at will through an intuitive operation such as grabbing or throwing the target object 500. Meanwhile, the projection portion 210 is a portion on which the omnidirectional video is projected, and hence the projection portion 210 is not moved or deformed even when the hand object 400 is brought into contact with the projection portion 210.
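The grab-and-throw behavior described above might be modeled as in the following sketch: while grabbed, the target object follows the hand object and inherits its velocity, and after release it moves ballistically under gravity. The class name, the gravity constant, and the velocity hand-off are assumptions for illustration.

```python
GRAVITY = (0.0, -9.8, 0.0)  # illustrative value

class ThrowableObject:
    """Follows the hand object while grabbed; moves ballistically once released."""

    def __init__(self, position):
        self.position = list(position)
        self.velocity = [0.0, 0.0, 0.0]
        self.grabbed = False

    def update(self, dt, hand_position=None, hand_velocity=None):
        if self.grabbed and hand_position is not None:
            # Follow the hand object and remember its velocity for the release.
            self.position = list(hand_position)
            if hand_velocity is not None:
                self.velocity = list(hand_velocity)
        else:
            # Free flight: apply gravity to the velocity carried over from the hand.
            for i in range(3):
                self.velocity[i] += GRAVITY[i] * dt
                self.position[i] += self.velocity[i] * dt

    def release(self):
        self.grabbed = False
```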
An information processing method according to at least one embodiment is described below.
In at least one embodiment, the projection portion 210 is sectioned into a plurality of parts, for example, a plurality of grid sections including a grid section 211 and a grid section 212. An advertisement AD1 is displayed on a display portion DP of the projection portion 210, which is included in the grid section 212.
In Step S11, the control unit 121 provides instructions for moving the hand object 400 as described above based on the movement of the hand of the user U, which is detected by the controller 320.
In Step S12, the control unit 121 determines whether or not the hand object 400 is in contact with the grid section 212 of the projection portion 210 in which the advertisement AD1 is displayed. In at least one embodiment, this determination is made based on a positional relationship between the collision area of the hand object 400 and the grid section 212.
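One way to decide which grid section the hand object touches (an assumption of this description, not the disclosed method) is to section the celestial sphere by latitude and longitude and map the contact point to row/column indices, as sketched below with an illustrative 18×36 grid. The returned indices can then be compared with the indices assigned to the grid section 212 in which the advertisement AD1 is displayed.

```python
import math

def grid_section_of(contact_point, rows=18, cols=36):
    """Map a contact point on the celestial-sphere projection portion to a
    (row, column) grid section, sectioning the sphere by latitude/longitude.
    The 18x36 grid size is an illustrative assumption."""
    x, y, z = contact_point
    r = math.sqrt(x * x + y * y + z * z)
    lat = math.asin(y / r)                 # -pi/2 .. pi/2
    lon = math.atan2(z, x)                 # -pi .. pi
    row = min(rows - 1, int((lat + math.pi / 2) / math.pi * rows))
    col = min(cols - 1, int((lon + math.pi) / (2 * math.pi) * cols))
    return row, col

# Example: a contact point on the sphere maps to one grid section.
print(grid_section_of((3.0, 1.0, -2.0)))
```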
When the hand object 400 is moved under a state in which the advertisement AD1 is selected as described above, in Step S13, the control unit 121 generates a target object 510, and provides instructions for operating the target object 510 based on the operation of the hand object 400. In at least one embodiment, the target object 510 is a 3D object corresponding to the advertisement AD1.
Further, in at least one embodiment, the control unit 121 can store, in advance in the storage unit 123, the 3D object corresponding to the advertisement AD1 together with the omnidirectional moving image as the virtual space data. With this, based on a limited amount of data, such as the omnidirectional moving image and the 3D model corresponding to the advertisement AD1, a virtual experience that is based on the interaction with the virtual content may be provided to the user.
In Step S14, the control unit 121 changes a display mode of the advertisement displayed on the display portion DP in the projection portion 210 from the advertisement AD1 to an advertisement AD2.
In Step S15, the control unit 121 specifies the advertisement AD1 as a viewing target of the user, and outputs information for specifying the advertisement AD1 to the content management server 4 as viewing data to be stored in the viewing data management unit 43.
In at least one embodiment, the information for specifying the advertisement AD1 includes information on the time at which the hand object 400 and the grid section 212 in which the advertisement AD1 is displayed are brought into contact with each other. With this, the data communication amount for transmitting or receiving the viewing data can be reduced.
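A minimal sketch of such viewing data is shown below; the field names and the JSON encoding are assumptions. Because the record carries the contact time and the contacted section rather than the advertisement itself, the receiving side can presumably resolve the viewing target from its own content schedule, which keeps the transmitted data small.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ViewingRecord:
    """Minimal viewing-data record; field names are hypothetical."""
    content_id: str        # identifies the omnidirectional video being played
    contact_time: float    # playing time (seconds) at which the contact occurred
    grid_section: int      # identifier of the contacted grid section (e.g., 212)

def encode_viewing_record(record: ViewingRecord) -> bytes:
    """Serialize the record for transmission to the viewing data management unit."""
    return json.dumps(asdict(record)).encode("utf-8")

# Example: the hand object touched grid section 212 at 95.4 s of playback.
payload = encode_viewing_record(ViewingRecord("content-001", 95.4, 212))
```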
Further, specifying the advertisement AD1 as the viewing target is not limited to when the hand object 400 touches the display portion DP in the projection portion 210. For example, the advertisement AD1 may be specified as the viewing target when the behavior of the operation object 500 is operated as appropriate (for example, the operation object 500 is thrown) based on the hand object 400 as described later, and thus the operation object 500 is brought into contact with the display portion DP.
In the storage unit 123 and the content management unit 42, video data for defining the omnidirectional video is stored. In at least one embodiment, the video data contains at least two types of content data, namely, content data for displaying a display target in the first mode and content data for displaying the display target in the second mode, with the same story progressing in both.
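The following sketch illustrates one way such video data could be organized (the actual data format is not disclosed): two synchronized content-data streams share a single playing time, and a contact merely switches which stream is sampled, so the story continues without interruption. For example, switch_to_second_mode() would be called when the operation object contacts the relevant part of the projection portion.

```python
class OmnidirectionalVideoPlayer:
    """Plays one of two synchronized content-data streams (first mode / second
    mode) that share the same story and timeline; a sketch, not the disclosed
    data format."""

    def __init__(self, first_mode_frames, second_mode_frames, fps=30.0):
        self.streams = {"first": first_mode_frames, "second": second_mode_frames}
        self.mode = "first"
        self.playing_time = 0.0
        self.fps = fps

    def advance(self, dt):
        self.playing_time += dt

    def switch_to_second_mode(self):
        # The playing time is kept, so the story continues seamlessly
        # in the changed display mode.
        self.mode = "second"

    def current_frame(self):
        frames = self.streams[self.mode]
        index = min(len(frames) - 1, int(self.playing_time * self.fps))
        return frames[index]
```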
Further, in at least one embodiment, a character C1 of a cat is displayed in a first mode (a normal state) on a first part 211 of the projection portion 210 as a part of the omnidirectional video.
In Step S17, the control unit 121 determines whether or not the target object 510 is in contact with the first part 211 of the projection portion 210. In at least one embodiment, this determination is made based on a relationship between the collision area of the target object 510 and the collision area set for the projection portion 210.
In Step S18, the control unit 121 changes a display mode of the character C1 of the cat projected on the first part 211 of the projection portion 210 with which the target object 510 is in contact, from the first mode (the normal state) before the contact to a second mode (a wet state), that is, to a character C2.
In Step S19, the control unit 121 provides instructions for continuously playing the omnidirectional video based on the display mode of the character after the change (the character C2 of the cat in the wet state described above). As described above, the story of the entire virtual content is the same regardless of whether the display mode is changed. Therefore, the user can be provided with a virtual experience that is based on the interaction with the virtual content while being provided with the omnidirectional video that progresses based on the predetermined content.
Regarding the at least two types of content data, the content data for the first mode is played until the contact occurs, and the content data for the second mode is played after the contact while the playing time of the omnidirectional video is maintained, so that the same story continues in a different display mode.
Further, the target object 510 is generated when the user performs a grabbing motion under a state in which the hand object 400 is in contact with the grid section 212. The behavior of the target object 510 is operated based on the operation of the hand object 400. When the control unit 121 determines that the target object 510 is in contact with the grid section 211, the control unit 121 changes the display mode of the character C1 from the character C1 being the first mode to the character C2 being the second mode based on the video data stored in the storage unit 123. Then, the omnidirectional video that is based on a predetermined story is continuously played based on the character C2 displayed in the second mode. The omnidirectional video that is based on the predetermined story may be played based on the character C2 displayed in the second mode only for a predetermined period, and then the playing of the omnidirectional video that is based on the predetermined story may be restarted based on the character C1 displayed in the first mode.
Another example of the information processing method according to at least one embodiment is described below.
In Step S20, the control unit 121 provides instructions for operating the behavior of the target object 510 (for example, throwing the target object 510) based on the operation of the hand object 400.
In Step S21, the control unit 121 determines whether or not the target object 510 is in contact with a periphery of the first part 211 in the projection portion 210. In at least one embodiment, at least a part of furniture F is displayed on a part 213 of the projection portion 210, which is located in the periphery of the first part 211.
In Step S22, the control unit 121 provides instructions for changing the display mode of the furniture F, which is projected on the part 213 of the projection portion 210 with which the target object 510 is in contact, from the first mode (the normal state) before the contact to the second mode (the wet state).
In at least one embodiment, the display mode of the furniture F is changed based on the second mode only for a predetermined period, and thereafter the changing of the display mode based on the first mode is restarted along with the elapse of the playing time of the omnidirectional video.
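Building on the player sketch above, such a period-limited change could be handled as follows; the 5-second period and the class name are assumptions for illustration.

```python
class TemporaryModeSwitch:
    """Shows the second mode for a fixed period after a contact, then resumes
    the first mode; the 5-second period is an illustrative assumption."""

    def __init__(self, player, period=5.0):
        self.player = player          # e.g., the OmnidirectionalVideoPlayer above
        self.period = period
        self.revert_at = None

    def on_contact(self):
        self.player.mode = "second"
        self.revert_at = self.player.playing_time + self.period

    def update(self):
        if self.revert_at is not None and self.player.playing_time >= self.revert_at:
            self.player.mode = "first"   # restart changing based on the first mode
            self.revert_at = None
```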
As described above, the display mode of the display target with which the operation object is brought into contact is changed, while the omnidirectional video continues to progress based on the predetermined content.
The above description of some of the embodiments is not to be read as a restrictive interpretation of the technical scope of this disclosure. The described embodiments are merely given as an example, and a person skilled in the art would understand that various modifications can be made to the described embodiments within the scope of this disclosure set forth in the appended claims. Thus, the technical scope of this disclosure is to be defined based on the scope of this disclosure set forth in the appended claims and equivalents thereof.
In at least one embodiment, the movement of the hand object is controlled based on the movement of the external controller 320 representing the movement of the hand of the user U, but the movement of the hand object in the virtual space may be controlled based on the movement amount of the hand of the user U himself/herself. For example, instead of using the external controller, a glove-type device or a ring-type device to be worn on the hand or fingers of the user may be used. With this, the position sensor 130 can detect the position and the movement amount of the hand of the user U, and can detect the movement and the state of the hand and fingers of the user U. Further, the position sensor 130 may be a camera configured to take an image of the hand (including the fingers) of the user U. In this case, by taking an image of the hand of the user with use of a camera, the position and the movement amount of the hand of the user U can be detected, and the movement and the state of the hand and fingers of the user U can be detected based on data of the image in which the hand of the user is displayed, without wearing any kind of device directly on the hand or fingers of the user.
Further, in at least one embodiment, there is set a collision effect for defining the influence to be exerted on the target object by the hand object based on the position and/or the movement of the hand, which is a part of the body of the user U other than the head, but the embodiments are not limited thereto. For example, there may be set a collision effect for defining, based on a part of the body of the user U other than the head (for example, position and/or movement of the foot), the influence to be exerted on the target object by a virtual body (virtual foot, foot object: one example of the operation object) that is synchronized with the part of the body of the user U (for example, movement of the virtual foot). As described above, in at least one embodiment, there may be set a collision effect for specifying a relative relationship (distance and relative speed) between the HMD 110 and a part of the body of the user U, and defining the influence to be exerted on the target object by the virtual body (operation object) that is synchronized with the part of the body of the user U based on the specified relative relationship.
Further, in at least one embodiment, the user is immersed in a virtual space (VR space) with use of the HMD 110, but a transmissive HMD may be employed as the HMD 110. In this case, an image obtained by combining an image of the target object 500 with the real space to be visually recognized by the user U via the transmissive HMD 110 may be output, to thereby provide a virtual experience as an AR space or an MR space. Then, the target object 500 may be selected or deformed based on the movement of a part of the body of the user instead of the first operation object or the second operation object. In this case, the real space and coordinate information of the part of the body of the user are specified, and coordinate information of the target object 500 is defined based on the relationship with the coordinate information in the real space. In this manner, an action can be exerted on the target object 500 based on the movement of the body of the user U.