The present application claims priority to JP 2016-148490 filed Jul. 28, 2016, JP 2016-148491 filed Jul. 28, 2016, JP 2017-006886 filed Jan. 18, 2017, and JP 2016-156006 filed Aug. 8, 2016, the disclosures of which are hereby incorporated by reference herein in their entirety.
This disclosure relates to an information processing method and a system for executing the information processing method.
In Non-Patent Document 1, there is described a technology of changing a state of a hand object in a virtual reality (VR) space based on a state (for example, position and inclination) of a hand of a user in a real space, and operating the hand object to exert a predetermined action on a predetermined object in the virtual space.
In Non-Patent Document 1, there is room for improving a virtual experience in a virtual reality space.
At least one embodiment of this disclosure has an object to provide an information processing method and a system for executing the information processing method, which are capable of improving a virtual experience.
According to at least one aspect of this disclosure, there is provided an information processing method to be executed by a computer in a system including a head-mounted device and a position sensor configured to detect a position of the head-mounted device and a position of a part of a body other than a head of a user. The information processing method includes generating virtual space data for defining a virtual space that includes an operation object and a target object. The method further includes displaying a visual-field image on the head-mounted device based on the position and an inclination of the head-mounted device. The method further includes identifying a position of the operation object based on the position of the part of the body of the user. The method further includes avoiding causing the operation object to exert a predetermined effect on the target object in response to a determination that a predetermined condition for determining whether or not a collision area of the operation object has touched a collision area of the target object intentionally is not satisfied.
According to at least one embodiment of this disclosure, improving the virtual experience in the virtual reality space is possible.
Embodiments of this disclosure are described below with reference to the drawings. Once a component has been described in this description of the embodiments, a description of any component having the same reference number as the already described component is omitted for the sake of convenience.
First, with reference to
The HMD 110 includes a display unit 112, an HMD sensor 114, and an eye gaze sensor 140. The display unit 112 includes a non-transmissive display device configured to cover a field of view (visual field) of the user U wearing the HMD 110. With this, the user U can see a visual-field image displayed on the display unit 112, and thus the user U can be immersed in a virtual space. The HMD 110 is, for example, a head-mounted display device having the display unit 112 constructed integrally or separately. The display unit 112 may include a left-eye display unit configured to provide an image to a left eye of the user U, and a right-eye display unit configured to provide an image to a right eye of the user U. Further, the HMD 110 may include a transmissive display device. In this case, the transmissive display device may be able to be temporarily configured as the non-transmissive display device by adjusting the transmittance thereof. Further, the visual-field image may include a configuration for presenting a real space in a part of the image forming the virtual space. For example, an image taken by a camera mounted to the HMD 110 may be displayed so as to be superimposed on a part of the visual-field image, or a transmittance of a part of the transmissive display device may be set high to enable the user to visually recognize the real space through a part of the visual-field image.
The HMD sensor 114 is mounted near the display unit 112 of the HMD 110. The HMD sensor 114 includes at least one of a geomagnetic sensor, an acceleration sensor, or an inclination sensor (for example, an angular velocity sensor or a gyro sensor), and can detect various movements of the HMD 110 worn on the head of the user U.
The eye gaze sensor 140 has an eye tracking function of detecting a line-of-sight direction of the user U. For example, the eye gaze sensor 140 may include a right-eye gaze sensor and a left-eye gaze sensor. The right-eye gaze sensor may be configured to detect reflective light reflected from the right eye (in particular, the cornea or the iris) of the user U by irradiating the right eye with, for example, infrared light, to thereby acquire information relating to a rotational angle of a right eyeball. Meanwhile, the left-eye gaze sensor may be configured to detect reflective light reflected from the left eye (in particular, the cornea or the iris) of the user U by irradiating the left eye with, for example, infrared light, to thereby acquire information relating to a rotational angle of a left eyeball.
The position sensor 130 is constructed of, for example, a position tracking camera, and is configured to detect the positions of the HMD 110 and the external controller 320. The position sensor 130 is connected to the control device 120 so as to enable communication to/from the control device 120 in a wireless or wired manner. The position sensor 130 is configured to detect information relating to positions, inclinations, or light emitting intensities of a plurality of detection points (not shown) provided in the HMD 110. Further, the position sensor 130 is configured to detect information relating to positions, inclinations, and/or light emitting intensities of a plurality of detection points 304 (refer to
The control device 120 is capable of acquiring movement information such as the position and the direction of the HMD 110 based on the information acquired from the HMD sensor 114 or the position sensor 130, and accurately associating a position and a direction of a virtual point of view (virtual camera) in the virtual space with the position and the direction of the user U wearing the HMD 110 in the real space based on the acquired movement information. Further, the control device 120 is capable of acquiring movement information of the external controller 320 based on the information acquired from the position sensor 130, and accurately associating a position and a direction of a hand object (described later) to be displayed in the virtual space with a relative relationship of the position and the direction between the external controller 320 and the HMD 110 in the real space based on the acquired movement information. Similarly to the HMD sensor 114, the movement information of the external controller 320 may be obtained from a geomagnetic sensor, an acceleration sensor, an inclination sensor, or other sensors mounted to the external controller 320.
The control device 120 is capable of identifying each of the line of sight of the right eye and the line of sight of the left eye of the user U based on the information transmitted from the eye gaze sensor 140, to thereby identify a point of gaze being an intersection between the line of sight of the right eye and the line of sight of the left eye. Further, the control device 120 is capable of identifying a line-of-sight direction of the user U based on the identified point of gaze. In this case, the line-of-sight direction of the user U is a line-of-sight direction of both eyes of the user U, and matches with a direction of a straight line passing through the point of gaze and a midpoint of a line segment connecting between the right eye and the left eye of the user U.
With reference to
With reference to
The control device 120 may be constructed as a personal computer, a tablet computer, or a wearable device separately from the HMD 110, or may be built into the HMD 110. Further, a part of the functions of the control device 120 may be mounted to the HMD 110, and other functions of the control device 120 may be mounted to another device separate from the HMD 110.
The control unit 121 includes a memory and a processor. The memory is constructed of, for example, a read only memory (ROM) having various programs and the like stored therein and a random access memory (RAM) having a plurality of work areas in which various programs to be executed by the processor are stored. The processor is constructed of, for example, a central processing unit (CPU), a micro processing unit (MPU), and/or a graphics processing unit (GPU), and is configured to load, onto the RAM, programs designated from among the various programs stored in the ROM, and to execute various types of processing in cooperation with the RAM.
The control unit 121 may control various operations of the control device 120 by causing the processor to load, onto the RAM, a program (to be described later) for executing the information processing method on a computer according to at least one embodiment and to execute the program in cooperation with the RAM. The control unit 121 executes a predetermined application program (including a game program and an interface program) stored in the memory or the storage unit 123 to display a virtual space (visual-field image) on the display unit 112 of the HMD 110. With this, the user U can be immersed in the virtual space displayed on the display unit 112.
The storage unit (storage) 123 is a storage device, for example, a hard disk drive (HDD), a solid state drive (SSD), or a USB flash memory, and is configured to store programs and various types of data. The storage unit 123 may store the program for executing the information processing method on a computer according to at least one embodiment. Further, the storage unit 123 may store programs for authentication of the user U and game programs including data relating to various images and objects. Further, a database including tables for managing various types of data may be constructed in the storage unit 123.
The I/O interface 124 is configured to connect each of the position sensor 130, the HMD 110, and the external controller 320 to the control device 120 so as to enable communication therebetween, and is constructed of, for example, a universal serial bus (USB) terminal, a digital visual interface (DVI) terminal, or a High-Definition Multimedia Interface® (HDMI) terminal. The control device 120 may be wirelessly connected to each of the position sensor 130, the HMD 110, and the external controller 320.
The communication interface 125 is configured to connect the control device 120 to a communication network 3, for example, a local area network (LAN), a wide area network (WAN), or the Internet. The communication interface 125 includes various wire connection terminals and various processing circuits for wireless connection for communication to/from an external device on a network via the communication network 3, and is configured to adapt to communication standards for communication via the communication network 3.
With reference to
In
The controller 320R includes a frame 326 that extends from both side surfaces of the grip 324 in directions opposite to the top surface 322 to form a semicircular ring. The plurality of detection points 304 are embedded in the outer side surface of the frame 326. The plurality of detection points 304 are, for example, a plurality of infrared LEDs arranged in one row along a circumferential direction of the frame 326. The position sensor 130 detects information relating to positions, inclinations, and light emitting intensities of the plurality of detection points 304, and then the control device 120 acquires the movement information including the information relating to the position and the attitude (inclination and direction) of the controller 320R based on the information detected by the position sensor 130.
The sensor of the controller 320R may be, for example, any one of a magnetic sensor, an angular velocity sensor, an acceleration sensor, or a combination of those sensors. The sensor outputs a signal (for example, a signal indicating information relating to magnetism, angular velocity, or acceleration) based on the direction and the movement of the controller 320R when the user U moves the controller 320R. The control device 120 acquires information relating to the position and the attitude of the controller 320R based on the signal output from the sensor.
The transceiver of the controller 320R is configured to perform transmission or reception of data between the controller 320R and the control device 120. For example, the transceiver may transmit an operation signal corresponding to the operation input of the user U to the control device 120. Further, the transceiver may receive from the control device 120 an instruction signal for instructing the controller 320R to cause light emission of the detection points 304. Further, the transceiver may transmit a signal representing the value detected by the sensor to the control device 120.
With reference to
In
In Step S2, the control unit 121 identifies a visual field CV (refer to
The control unit 121 can identify the visual field CV of the virtual camera 300 based on the data transmitted from the position sensor 130 and/or the HMD sensor 114. In this case, when the user U wearing the HMD 110 moves, the control unit 121 can change the visual field CV of the virtual camera 300 based on the data representing the movement of the HMD 110, which is transmitted from the position sensor 130 and/or the HMD sensor 114. That is, the control unit 121 can change the visual field CV in accordance with the movement of the HMD 110. Similarly, when the line-of-sight direction of the user U changes, the control unit 121 can move the visual field CV of the virtual camera 300 based on the data representing the line-of-sight direction of the user U, which is transmitted from the eye gaze sensor 140. That is, the control unit 121 can change the visual field CV in accordance with the change in the line-of-sight direction of the user U.
In Step S3, the control unit 121 generates visual-field image data representing the visual-field image M to be displayed on the display unit 112 of the HMD 110. Specifically, the control unit 121 generates the visual-field image data based on the virtual space data defining the virtual space 200 and the visual field CV of the virtual camera 300.
In Step S4, the control unit 121 displays the visual-field image M on the display unit 112 of the HMD 110 based on the visual-field image data (refer to
The virtual camera 300 may include a left-eye virtual camera and a right-eye virtual camera. In this case, the control unit 121 generates left-eye visual-field image data representing a left-eye visual-field image based on the virtual space data and the visual field of the left-eye virtual camera. Further, the control unit 121 generates right-eye visual-field image data representing a right-eye visual-field image based on the virtual space data and the visual field of the right-eye virtual camera. After that, the control unit 121 displays the left-eye visual-field image and the right-eye visual-field image on the display unit 112 of the HMD 110 based on the left-eye visual-field image data and the right-eye visual-field image data. In this manner, the user U can visually recognize the visual-field image as a three-dimensional image from the left-eye visual-field image and the right-eye visual-field image. In this disclosure, for the sake of convenience in description, the number of the virtual cameras 300 is one. However, at least one embodiment of this disclosure is also applicable to a case in which the number of the virtual cameras is two.
Now, a description is given of the left hand object 400L, the right hand object 400R, and the wall object 500 included in the virtual space 200 with reference to
In
The wall object 500 is a target object that the left hand object 400L and the right hand object 400R exert an effect on. For example, when the left hand object 400L has touched the wall object 500, a part of the wall object 500, which touches the collision area CA of the left hand object 400L, is destroyed. Further, the wall object 500 also has a collision area, and in at least one embodiment, the collision area of the wall object 500 is the same as the area constructing the wall object 500. In at least one embodiment, the collision area of the wall object 500 is different from the area constructing the wall object 500.
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In the information processing method according to at least one embodiment, the control unit 121 is configured to set a collision effect for defining an effect exerted by the collision area associated with the left hand object 400L on the wall object 500, and to set a collision effect for defining an effect exerted by the collision area associated with the right hand object 400R on the wall object 500. Meanwhile, the controllers 320L and 320R have substantially the same configuration, and thus, in the following, a description is given only of a collision effect for defining an effect exerted by the controller 320L on the wall object 500 for the sake of convenience of description. In at least one embodiment, the collision effect associated with the left hand object 400L is different from the collision effect associated with the right hand object 400R. Further, the control unit 121 executes processing steps in
In
Next, in Step S12, the control unit 121 identifies a relative speed V of the controller 320L with respect to the HMD 110. Specifically, the control unit 121 acquires position information on the HMD 110 and position information on the controller 320L based on information acquired from the position sensor 130, and identifies the relative speed V (example of relative relationship) of the controller 320L with respect to the HMD 110 in the w-axis direction of the HMD 110 based on those acquired pieces of position information.
For example, when the distance between the HMD 110 and the controller 320L in the w-axis direction for an n-th frame (n is an integer of 1 or more) is set to Dn, the distance between the HMD 110 and the controller 320L in the w-axis direction for an (n+1)-th frame is set to Dn+1, and a time interval between frames is set to ΔT, a relative speed Vn in the w-axis direction for the n-th frame is Vn=(Dn−Dn+1)/ΔT. When the frame rate of the moving image is 90 fps, ΔT is 1/90.
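A short sketch of this per-frame computation follows. The frame rate, the distance values, and the function name are illustrative assumptions; only the formula Vn=(Dn−Dn+1)/ΔT is taken from the description above.

```python
FRAME_RATE = 90.0            # fps, as in the example above
DELTA_T = 1.0 / FRAME_RATE   # time interval between frames

def relative_speed(d_n, d_n_plus_1, delta_t=DELTA_T):
    """Relative speed Vn = (Dn - Dn+1) / dT of the controller with respect to
    the HMD; positive while the distance along the w axis is shrinking."""
    return (d_n - d_n_plus_1) / delta_t

# Example: the HMD-to-controller distance shrinks from 0.60 m to 0.58 m in one frame.
print(relative_speed(0.60, 0.58))  # -> about 1.8 (m/s)
```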
Next, in Step S13, the control unit 121 determines whether or not the identified distance D is larger than a predetermined distance Dth, and determines whether or not the identified relative speed V is larger than a predetermined relative speed Vth. The predetermined distance Dth and the predetermined relative speed Vth may appropriately be set depending on details of a game. When the control unit 121 determines that the identified distance D is larger than the predetermined distance Dth (D>Dth) and the identified relative speed V is larger than the predetermined relative speed Vth (V>Vth) (YES in Step S13), as in
Next, in Step S16, the control unit 121 determines whether or not the wall object 500 touches the collision area CA of the left hand object 400L. When the control unit 121 determines that the wall object 500 touches the collision area CA of the left hand object 400L (YES in Step S16), a predetermined effect is exerted on a part of the wall object 500, which touches the collision area CA (Step S17). For example, the part of the wall object 500, which touches the collision area CA, may be destroyed, or the wall object 500 may be damaged by a predetermined amount. In
On the contrary, when the control unit 121 determines that the wall object 500 does not touch the collision area CA of the left hand object 400L (NO in Step S16), a predetermined effect is not exerted on the wall object 500. After that, the control unit 121 updates virtual space data for defining the virtual space including the wall object 500, and displays a next frame (still image) on the HMD 110 based on the updated virtual space data (Step S18). After that, the processing returns to Step S11.
In this manner, according to at least one embodiment, the effect (collision effect) of the controller 320L exerted on the wall object 500 is set depending on the relative relationship (relative positional relationship and relative speed) between the HMD 110 and the controller 320L, and thus improving the sense of immersion of the user U in the virtual space 200 is possible. In particular, the size (diameter) of the collision area CA of the left hand object 400L is set depending on the distance D between the HMD 110 and the controller 320L and the relative speed V of the controller 320L with respect to the HMD 110. Further, a predetermined effect is exerted on the wall object 500 depending on the positional relationship between the collision area CA of the left hand object 400L and the wall object 500. Therefore, further improvement of the sense of immersion of the user U in the virtual space 200 is possible.
More specifically, in
In at least one embodiment, in Step S13, whether or not the distance D>Dth and the relative speed V>Vth are satisfied is determined. In at least one embodiment, only whether or not the distance D>Dth is satisfied may be determined. In this case, when the control unit 121 determines that the distance D>Dth is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that the distance D≦Dth is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1. Further, in at least one embodiment, in Step S13, only whether or not the relative speed V>Vth is satisfied may be determined. Also in this case, when the control unit 121 determines that the relative speed V>Vth is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that the relative speed V≦Vth is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1.
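The determination in Step S13 and the single-condition variants described above can be sketched as follows. The threshold values and diameters are placeholder numbers, and the flag arguments are an illustrative way of switching between the variants; they are not taken from the disclosure.

```python
def set_collision_diameter(distance, rel_speed, d_th, v_th, r1, r2,
                           check_distance=True, check_speed=True):
    """Return R2 only when every enabled condition (D > Dth, V > Vth) is
    satisfied; otherwise return the default diameter R1 (cf. Step S13)."""
    if check_distance and not distance > d_th:
        return r1
    if check_speed and not rel_speed > v_th:
        return r1
    return r2

# Example: D = 0.7 m > Dth = 0.5 m and V = 2.0 m/s > Vth = 1.5 m/s, so R2 is used.
print(set_collision_diameter(0.7, 2.0, d_th=0.5, v_th=1.5, r1=0.05, r2=0.12))  # -> 0.12
```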
Further, in Step S13, whether or not a relative acceleration A of the controller 320L with respect to the HMD 110 is larger than a predetermined relative acceleration Ath (A>Ath) may be determined. In this case, the control unit 121 identifies the relative acceleration A (example of relative relationship) of the controller 320L with respect to the HMD 110 in the w-axis direction before Step S13.
For example, when the relative speed of the controller 320L with respect to the HMD 110 in the w-axis direction for an n-th frame (n is an integer of 1 or more) is set to Vn, the relative speed of the controller 320L with respect to the HMD 110 in the w-axis direction for an (n+1)-th frame is set to Vn+1, and a time interval between frames is set to ΔT, a relative acceleration An in the w-axis direction for the n-th frame is An=(Vn−Vn+1)/ΔT.
In this case, when the control unit 121 determines that the relative acceleration A>Ath is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that the relative acceleration A≦Ath is satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1.
Further, in Step S13, it may be determined whether or not the distance D>Dth and the relative acceleration A>Ath are satisfied. Also in this case, when the control unit 121 determines that the distance D>Dth and the relative acceleration A>Ath are satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that the distance D>Dth or the relative acceleration A>Ath is not satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1.
Further, in at least one embodiment, when the determination condition defined in Step S13 is satisfied, the diameter R of the collision area CA of the left hand object 400L is set to the diameter R2. However, when the determination condition is satisfied, the control unit 121 may instead refer to a table or a function showing a relationship between the diameter R of the collision area CA and the relative speed V, to thereby change the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner depending on the magnitude of the relative speed V. For example, the control unit 121 may increase the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner along with an increase in the relative speed V or the relative acceleration A.
Similarly, when the determination condition is satisfied, the control unit 121 may refer to a table or a function showing a relationship between the diameter R of the collision area CA and the distance D, to thereby change the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner depending on the magnitude of the distance D. For example, the control unit 121 may increase the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner along with an increase in the distance D.
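One possible realization of such a table or function is a clamped linear mapping, sketched below. The linear form, the speed range, and the diameter range are assumptions for illustration; the description only requires that the diameter grow continuously or stepwise with the relative speed V (or the distance D).

```python
def scaled_diameter(rel_speed, r_min=0.05, r_max=0.15, v_max=4.0):
    """Grow the collision-area diameter R continuously (linearly here) with the
    relative speed V, clamped to the range [r_min, r_max]."""
    ratio = min(max(rel_speed / v_max, 0.0), 1.0)
    return r_min + (r_max - r_min) * ratio

print(scaled_diameter(1.0))  # -> 0.075
print(scaled_diameter(6.0))  # -> 0.15 (clamped)
```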
Now, a description is given of an information processing method according to at least one embodiment with reference to
In the information processing method according to at least one embodiment, when the determination condition defined in Step S13 illustrated in
In contrast, in the information processing method according to at least one embodiment, in response to a determination that the condition defined in Step S13 is satisfied (YES in Step S13), as in
After that, in Step S16, the control unit 121 determines whether or not the wall object 500 touches the collision area CA or the effect range EA of the left hand object 400L. When the control unit 121 determines that the wall object 500 touches the collision area CA or the effect range EA of the left hand object 400L (YES in Step S16), a predetermined effect is exerted on a part of the wall object 500, which touches the collision area CA or the effect range EA (Step S17). For example, as
On the contrary, when the control unit 121 determines that the wall object 500 does not touch the collision area CA or the effect range EA of the left hand object 400L (NO in Step S16), a predetermined effect is not exerted on the wall object 500.
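The contact determination in Step S16 of this variant can be sketched as a simple sphere test, assuming the collision area CA and the effect range EA are concentric spheres around the hand object and that the closest point of the wall object is known; these geometric assumptions and the function name are illustrative.

```python
import numpy as np

def touches_wall(hand_position, closest_wall_point, ca_radius, ea_radius=0.0):
    """True when the closest point of the wall object lies inside the collision
    area CA or inside the (possibly larger) effect range EA."""
    radius = max(ca_radius, ea_radius)
    distance = np.linalg.norm(np.asarray(closest_wall_point) - np.asarray(hand_position))
    return distance <= radius

hand = [0.0, 1.2, 0.5]
wall_point = [0.0, 1.2, 0.62]
print(touches_wall(hand, wall_point, ca_radius=0.05))                 # -> False
print(touches_wall(hand, wall_point, ca_radius=0.05, ea_radius=0.15)) # -> True
```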
In this manner, according to at least one embodiment, the effect (collision effect) of the controller 320L exerted on the wall object 500 is set depending on the relative relationship (distance and relative speed) between the HMD 110 and the controller 320L, and thus improving the sense of immersion of the user U in the virtual space 200 is possible. In particular, the size (diameter) of the effect range EA of the left hand object 400L is set depending on the determination condition defined in Step S13. Further, a predetermined effect is exerted on the wall object 500 depending on the positional relationship among the collision area CA, the effect range EA, and the wall object 500. Therefore, further improving the sense of immersion of the user U in the virtual space 200 and providing a rich virtual experience are possible.
Further, in order to achieve various types of processing to be executed by the control unit 121 with use of software, information processing instructions for a system for executing the information processing method according to at least one embodiment (using a processor) may be installed in advance into the storage unit 123 or the ROM. Alternatively, the information processing instructions may be stored in a computer-readable storage medium, for example, a magnetic disk (HDD or floppy disk), an optical disc (for example, CD-ROM, DVD-ROM, or Blu-ray Disc®), a magneto-optical disk (for example, MO), or a flash memory (for example, SD card, USB memory, or SSD). In this case, the storage medium is connected to the control device 120, and thus the instructions stored in the storage medium are installed into the storage unit 123. Then, the information processing instructions installed in the storage unit 123 are loaded onto the RAM, and the processor executes the loaded instructions. In this manner, the control unit 121 executes the information processing method according to at least one embodiment.
Further, the information processing instructions may be downloaded from a computer on the communication network 3 via the communication interface 125. Also in this case, the downloaded instructions are similarly installed into the storage unit 123.
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In
In Step S11A, the control unit 121 moves the hand object 400 as described above based on movement of the hand of the user U, which is detected by the controller 320.
In Step S12A, the control unit 121 determines whether or not the wall object 500 and the hand object 400 satisfy a predetermined condition. In at least one embodiment, the control unit 121 determines whether or not each hand object 400 has touched the wall object 500 based on the collision area CA set to the left hand object 400L and the right hand object 400R. When each hand object 400 has touched the wall object 500, the processing proceeds to Step S13A. When each hand object 400 does not touch the wall object 500, the control unit 121 waits for information on movement of the hand of the user again, and continues to control movement of the hand object 400.
In Step S13A, the control unit 121 changes the position of the counter part 510 of the wall object 500, which is opposed to the virtual camera 300, such that the counter part 510 moves away from the virtual camera 300. In at least one embodiment, as in
In Step S14A, the control unit 121 determines whether or not a position at which the hand object 400 and the wall object 500 have touched each other is located within the visual field of the virtual camera 300. When the position is located within the visual field, the processing proceeds to Step S15A, and the control unit 121 executes processing of moving the virtual camera 300. When the position is not located within the visual field, the control unit 121 waits for information on movement of the hand of the user again, and continues to control movement of the hand object 400.
In Step S15A, the control unit 121 moves the virtual camera 300 without association with movement of the HMD 110. Specifically, as in
In at least one embodiment, when the virtual camera 300 is moved, the hand object 400 is moved in accordance with movement of the virtual camera 300 so as to reflect a relative positional relationship between the HMD 110 and the hand. For example, as in
In at least one embodiment, when the hand object 400 is moved in accordance with movement of the virtual camera 300, the virtual camera 300 is moved such that the hand object 400 does not touch the wall object 500. For example, as in
Now, a description is given of an example of setting the movement vector F in at least one embodiment with reference to
In at least one embodiment, the direction of the movement vector F of the virtual camera 300 is a direction of extension of the visual axis L of the virtual camera 300 at the time when the left hand object 400L and the wall object 500 have touched each other, regardless of the positional relationship between the virtual camera 300 and the left hand object 400L. With this, the virtual camera 300 is moved in the forward direction with respect to the user U, and the user U is more likely to predict the movement direction. As a result, reducing visually induced motion sickness (so-called VR sickness) caused by movement of the virtual camera 300 and suffered by the user U is possible. In at least one embodiment, even when the virtual camera 300 starts to move and the user U moves his or her head before completion of the movement so that the direction of the virtual camera changes, the virtual camera 300 is moved in the direction of extension of the visual axis L of the virtual camera 300 at the time when the left hand object 400L and the wall object 500 have touched each other. With this, the user U is more likely to predict the movement direction, and the VR sickness is reduced.
The magnitude of the movement vector F of the virtual camera 300 is set smaller as the position at which the left hand object 400L and the wall object 500 have touched each other becomes farther away from the visual axis L of the virtual camera 300. With this, even when the virtual camera 300 is moved in the direction of the visual axis L, preventing the left hand object 400L from touching the wall object 500 again after movement of the virtual camera 300 is possible.
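A sketch of how the movement vector F could be set is given below. The direction follows the visual axis L at the moment of contact, as described above, while the reciprocal falloff used for the magnitude is only one illustrative way of making F smaller as the contact position moves away from the visual axis; the base step size and falloff constant are assumptions.

```python
import numpy as np

def movement_vector(visual_axis_dir, camera_position, touch_position,
                    base_magnitude=0.5, falloff=1.0):
    """Movement vector F for the virtual camera: directed along the visual axis
    L at the moment of contact, with a magnitude that decreases as the contact
    position lies farther from the visual axis."""
    axis = np.asarray(visual_axis_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)
    to_touch = np.asarray(touch_position, dtype=float) - np.asarray(camera_position, dtype=float)
    # Perpendicular distance from the contact position to the visual axis.
    off_axis = np.linalg.norm(to_touch - np.dot(to_touch, axis) * axis)
    magnitude = base_magnitude / (1.0 + falloff * off_axis)
    return magnitude * axis

camera = [0.0, 0.0, 0.0]
axis = [0.0, 0.0, 1.0]
print(movement_vector(axis, camera, [0.0, 0.0, 0.8]))  # on-axis contact: full step
print(movement_vector(axis, camera, [0.4, 0.0, 0.8]))  # off-axis contact: smaller step
```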
In
In Step S16A, the control unit 121 updates the visual-field image based on the visual field of the moved virtual camera 300. The user U can experience movement in the virtual space by being presented with the updated visual-field image on the HMD 110.
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In the information processing method according to at least one embodiment, the control unit 121 is configured to set a collision effect for defining an effect exerted by the controller 320L on the wall object 500, and to set a collision effect for defining an effect exerted by the controller 320R on the wall object 500. Meanwhile, the controllers 320L and 320R have substantially the same configuration, and thus, in the following, only the collision effect for defining an effect exerted by the controller 320L on the wall object 500 is described for the sake of convenience of description. Further, the control unit 121 executes processing steps in
In
Specifically, the control unit 121 acquires the position information on the HMD 110 based on the information acquired from the position sensor 130, and identifies the absolute speed S of the HMD 110 in the w axis direction of the HMD 110 based on the acquired position information. In at least one embodiment, the control unit 121 identifies the absolute speed S of the HMD 110 in the w axis direction, but may identify the absolute speed S of the HMD 110 in a predetermined direction other than the w axis direction.
For example, when the position in the w axis direction of a position Pn of the HMD 110 for an n-th frame (n is an integer of 1 or more) is set to wn, the position in the w axis direction of a position Pn+1 of the HMD 110 for an (n+1)-th frame is set to wn+1, and a time interval between frames is set to ΔT, the absolute speed Sn of the HMD 110 in the w-axis direction for the n-th frame is Sn=(wn+1−wn)/ΔT. When the frame rate of the moving image is 90 fps, ΔT is 1/90.
Next, in Step S12B, the control unit 121 determines whether or not the identified absolute speed S of the HMD 110 is larger than the predetermined speed Sth. The predetermined speed Sth may be set appropriately depending on details of a game. When the control unit 121 determines that the identified absolute speed S is larger than the predetermined speed Sth (S>Sth) (YES in Step S12B), as in
Next, in Step S15B, the control unit 121 determines whether or not the wall object 500 touches the collision area CA of the left hand object 400L. When the control unit 121 determines that the wall object 500 touches the collision area CA of the left hand object 400L (YES in Step S15B), a predetermined effect is exerted on a part of the wall object 500, which touches the collision area CA (Step S16B). For example, the part of the wall object 500, which touches the collision area CA, may be destroyed, or the wall object 500 may be damaged by a predetermined amount. As in
On the contrary, when the control unit 121 determines that the wall object 500 does not touch the collision area CA of the left hand object 400L (NO in Step S15B), a predetermined effect is not exerted on the wall object 500. After that, the control unit 121 updates virtual space data for defining the virtual space including the wall object 500, and displays a next frame (still image) on the HMD 110 based on the updated virtual space data (Step S17B). After that, the processing returns to Step S11B.
In this manner, according to at least one embodiment, the collision effect for defining an effect exerted by the controller 320L on the wall object 500 is set depending on the absolute speed S of the HMD 110. In particular, when the absolute speed S of the HMD 110 is equal to or smaller than the predetermined speed Sth, the collision effect in
More specifically, the collision area CA of the left hand object 400L is set depending on the absolute speed S of the HMD 110. In particular, when the absolute speed S of the HMD 110 is equal to or smaller than the predetermined speed Sth, the diameter R of the left hand object 400L is set to R1, whereas, when the absolute speed S of the HMD 110 is larger than the predetermined speed Sth, the diameter R of the left hand object 400L is set to R2 (R1<R2). Further, a predetermined effect is exerted on the wall object 500 depending on the positional relationship between the collision area CA of the left hand object 400L and the wall object 500. Therefore, further improvement of the sense of immersion of the user U in the virtual space 200 is possible.
In this respect, as in
In at least one embodiment, in Step S12B, a determination is made whether or not the absolute speed S of the HMD 110 in the w axis direction is larger than the predetermined speed Sth, but in at least one embodiment a determination is made whether or not the absolute speed S of the HMD 110 in the w axis direction is larger than the predetermined speed Sth and the relative speed V of the controller 320L with respect to the HMD 110 in the movement direction of the HMD 110 (in this example, w axis direction) is larger than the predetermined relative speed Vth. That is, a determination is made whether or not S>Sth and V>Vth are satisfied. The predetermined relative speed Vth may be set appropriately depending on details of a game.
In this case, the control unit 121 identifies the relative speed V of the controller 320L with respect to the HMD 110 in the w axis direction before Step S12B. For example, when the distance between the HMD 110 and the controller 320L in the w axis direction for the n-th frame (n is an integer of 1 or more) is set to Dn, the distance between the HMD 110 and the controller 320L in the w axis direction for the (n+1)-th frame is set to Dn+1, and a time interval between frames is set to ΔT, the relative speed Vn in the w axis direction for the n-th frame is Vn=(Dn−Dn+1)/ΔT.
When the control unit 121 determines that the absolute speed S of the HMD 110 in the w axis direction is larger than the predetermined speed Sth (S>Sth) and the relative speed V of the controller 320L with respect to the HMD 110 in the movement direction of the HMD 110 (w axis direction) is larger than the predetermined relative speed Vth (V>Vth), the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that S>Sth or V>Vth is not satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1. In this manner, the collision effect is set depending on the absolute speed S of the HMD 110 and the relative speed V of the controller 320L with respect to the HMD 110. Therefore, further improvement of the sense of immersion of the user U in the virtual space 200 is possible.
Further, in Step S12B, a determination is made whether or not the absolute speed S of the HMD 110 in the w axis direction is larger than the predetermined speed Sth and the relative acceleration A of the controller 320L with respect to the HMD 110 in the movement direction of the HMD 110 (in this example, w axis direction) is larger than the predetermined relative acceleration Ath.
In this case, the control unit 121 identifies the relative acceleration A of the controller 320L with respect to the HMD 110 in the w axis direction before Step S12B. For example, when the relative speed of the controller 320L with respect to the HMD 110 in the w axis direction for the n-th frame (n is an integer of 1 or more) is set to Vn, the relative speed of the controller 320L with respect to the HMD 110 in the w axis direction for the (n+1)-th frame is set to Vn+1, and a time interval between frames is set to ΔT, the relative acceleration An in the w axis direction for the n-th frame is An=(Vn−Vn+1)/ΔT.
When the control unit 121 determines that the absolute speed S of the HMD 110 in the w axis direction is larger than the predetermined speed Sth (S>Sth) and the relative acceleration A of the controller 320L with respect to the HMD 110 in the movement direction of the HMD 110 (w axis direction) is larger than the predetermined relative acceleration Ath (A>Ath), the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2. On the contrary, when the control unit 121 determines that S>Sth or A>Ath is not satisfied, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1. In this manner, the collision effect is set depending on the absolute speed S of the HMD 110 and the relative acceleration A of the controller 320L with respect to the HMD 110. Therefore, further improvement of the sense of immersion of the user U in the virtual space 200 is possible.
Further, when the determination condition defined in Step S12B is satisfied, the control unit 121 may refer to a table or a function showing a relationship between the diameter R of the collision area CA and the relative speed V, to thereby change the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner depending on the magnitude of the relative speed V. For example, the control unit 121 may increase the diameter R of the collision area CA (that is, the size of the collision area CA) in a continuous or stepwise manner along with an increase in the relative speed V.
Next, a description is given of an information processing method according to at least one embodiment with reference to
In Step S22, the control unit 121 determines whether or not the absolute speed S of the HMD 110 satisfies 0<S≦S1. When the result of determination in Step S22 is YES, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R1 (Step S23). On the contrary, when the result of determination in Step S22 is NO, the control unit 121 determines whether or not the absolute speed S of the HMD 110 satisfies S1<S≦S2 (Step S24). When the result of determination in Step S24 is YES, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to the diameter R2 (Step S25). On the contrary, when the result of determination in Step S24 is NO, the control unit 121 determines whether or not the absolute speed S of the HMD 110 satisfies S2<S≦S3 (Step S26). When the result of determination in Step S26 is YES, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to a diameter R3 (Step S27). On the contrary, when the result of determination in Step S26 is NO, the control unit 121 sets the diameter R of the collision area CA of the left hand object 400L to a diameter R4 (Step S28). In at least one embodiment, the predetermined speeds S1, S2, and S3 are threshold values which satisfy the relationship of 0<S1<S2<S3. Further, in at least one embodiment, the diameters R1, R2, R3, and R4 of the collision area CA satisfy the relationship of R1<R2<R3<R4.
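The branching in Steps S22 to S28 amounts to a stepwise lookup, sketched below. The threshold speeds S1 to S3 and the diameters R1 to R4 are placeholder values satisfying 0<S1<S2<S3 and R1<R2<R3<R4; the real values would be chosen to suit the content.

```python
def stepwise_diameter(s, s1=0.2, s2=0.5, s3=1.0, r1=0.05, r2=0.08, r3=0.11, r4=0.15):
    """Map the absolute speed S of the HMD to the diameter R of the collision
    area CA in a stepwise manner (cf. Steps S22 to S28)."""
    if 0.0 < s <= s1:
        return r1
    if s1 < s <= s2:
        return r2
    if s2 < s <= s3:
        return r3
    return r4

print(stepwise_diameter(0.3))  # falls in (S1, S2], so R2 is used -> 0.08
```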
According to at least one embodiment, the control unit 121 can refer to a table or a function showing a relationship between the diameter R of the collision area CA and the absolute speed S, to thereby change the diameter R of the collision area CA of the left hand object 400L in a stepwise manner depending on the magnitude of the absolute speed S of the HMD 110. Further, according to at least one embodiment, as the absolute speed S of the HMD 110 becomes larger, the collision area CA of the left hand object 400L becomes larger in a stepwise manner. In this manner, as the absolute speed S of the HMD 110 (that is, the absolute speed of the user U) becomes larger, the collision area CA of the left hand object 400L becomes larger. As a result, the collision effect for defining the effect exerted by the left hand object 400L on the wall object 500 becomes larger. Therefore, improving the sense of immersion of the user U in the virtual space and providing a rich virtual experience are possible.
The control unit 121 may refer to a table or a function showing a relationship between the diameter R of the collision area CA and the absolute speed S, to thereby change the diameter R of the collision area CA continuously depending on the magnitude of the absolute speed S. Also in this case, further improving the sense of immersion of the user U in the virtual space and providing a rich virtual experience are possible.
Next, a description is given of an information processing method according to at least one embodiment with reference to
In the information processing method according to at least one embodiment, when the determination condition defined in Step S12B illustrated in
In contrast, in the information processing method according to at least one embodiment, when the determination condition defined in Step S12B is satisfied (YES in Step S12B), as in
Next, in Step S15B, the control unit 121 determines whether or not the wall object 500 touches the collision area CA or the effect range EA of the left hand object 400L. When the control unit 121 determines that the wall object 500 touches the collision area CA or the effect range EA of the left hand object 400L (YES in Step S15B), a predetermined effect is exerted on a part of the wall object 500, which touches the collision area CA or the effect range EA (Step S16B). For example, in
On the contrary, when the control unit 121 determines that the wall object 500 does not touch the collision area CA or the effect range EA of the left hand object 400L (NO in Step S15B), a predetermined effect is not exerted on the wall object 500.
In this manner, according to at least one embodiment, the effect (collision effect) exerted by the controller 320L on the wall object 500 is set depending on the absolute speed S of the HMD 110, and thus further improving the sense of immersion of the user U in the virtual space 200 and providing a rich virtual experience are possible. In particular, the size (diameter) of the effect range EA of the left hand object 400L is set depending on the determination condition defined in Step S12B. Further, a predetermined effect is exerted on the wall object 500 depending on a positional relationship among the collision area CA, the effect range EA, and the wall object 500. Therefore, further improving the sense of immersion of the user U in the virtual space 200 and providing a rich virtual experience are possible.
Now, with reference to
As described above, the virtual space 200 includes the virtual camera 300, the left hand object 400L, the right hand object 400R, the block object 500, and the button object 600. The control unit 121 generates virtual space data for defining the virtual space 200 including those objects. Further, the control unit 121 may update the virtual space data on a frame basis. As described above, the virtual camera 300 moves in accordance with movement of the HMD 110 worn by the user U. That is, the visual field of the virtual camera 300 is updated in accordance with movement of the HMD 110.
The left hand object 400L moves in accordance with movement of the controller 320L worn on the left hand of the user U. Similarly, the right hand object 400R moves in accordance with movement of the controller 320R worn on the right hand of the user U. In the following, the left hand object 400L and the right hand object 400R may simply be referred to as “hand object 400” for the sake of convenience of description.
Further, the user U can operate the operation button 302 of the external controller 320 to operate each finger of the hand object 400. That is, the control unit 121 acquires an operation signal corresponding to an input operation for the operation button 302 from the external controller 320, and controls operation of each finger of the hand object 400 based on the operation signal. For example, the user U can operate the operation button 302 so that the hand object 400 grasps the block object 500. Further, the hand object 400 and the block object 500 can be moved in accordance with movement of the controller 320 with the hand object 400 holding the block object 500. In this manner, the control unit 121 is configured to control operation of the hand object 400 in accordance with movement of a finger of the user U.
Further, the left hand object 400L and the right hand object 400R each include the collision area CA. The collision area CA is used for determination of collision (determination of hit) between the hand object 400 and a virtual object (for example, block object 500 or button object 600). The collision area CA of the hand object 400 and the collision area of the block object 500 (button object 600) have touched each other so that a predetermined effect (collision effect) is exerted on the block object 500 (button object 600).
For example, predetermined damage can be given to the block object 500 when the collision area CA of the hand object 400 and the collision area of the block object 500 touch each other. Further, moving the hand object 400 and the block object 500 in an integrated manner with the hand object 400 holding the block object 500 is possible.
In
The block object 500 is a virtual object that the hand object 400 exerts an effect on. The block object 500 also has the collision area, and in at least one embodiment, the collision area of the block object 500 is the same as the area forming the block object 500 (exterior area of block object 500). In at least one embodiment, the collision area of the block object 500 is different from the area forming the block object 500.
The button object 600 is a virtual object that the hand object 400 exerts an effect on, and includes an operation portion 620. The button object 600 also includes the collision area, and in at least one embodiment, the collision area of the button object 600 is the same as the area forming the button object 600 (exterior area of button object 600). In at least one embodiment, the collision area of the operation portion 620 is the same as the exterior area of the operation portion 620.
A predetermined effect is exerted on a predetermined object (not shown) placed in the virtual space 200 when the operation portion 620 of the button object 600 is pressed by the hand object 400. Specifically, the collision area CA of the hand object 400 and the collision area of the operation portion 620 have touched each other so that the operation portion 620 is pressed by the hand object 400 as the collision effect. Then, the operation portion 620 is pressed so that a predetermined effect is exerted on a predetermined object placed in the virtual space 200. For example, an object (character object) present in the virtual space 200 may start to move by the operation portion 620 being pressed by the hand object 400.
Next, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
Further, in description of at least one embodiment, whether or not the right hand object 400R exerts a predetermined effect on the operation portion 620 of the button object 600 is mentioned for the sake of convenience of description, whereas whether or not the left hand object 400L exerts a predetermined effect on the operation portion 620 of the button object 600 is not mentioned. Thus, in
In
Specifically, the control unit 121 acquires position information on the HMD 110 based on data acquired by the position sensor 130, and identifies the absolute speed S of the HMD 110 based on the acquired position information. For example, when the position of the HMD 110 for an n-th frame (n is an integer of 1 or more) is set to Pn, the position of the HMD 110 for an (n+1)-th frame is set to Pn+1, and a time interval between frames is set to ΔT, the absolute speed Sn of the HMD 110 for the n-th frame is Sn=|Pn+1−Pn|/ΔT. When the frame rate of the moving image is 90 fps, the time interval ΔT is 1/90. Further, a position P of the HMD 110 is a position vector that can be expressed in a three-dimensional coordinate system. In this manner, the control unit 121 can acquire the position Pn of the HMD 110 for the n-th frame and the position Pn+1 of the HMD 110 for the (n+1)-th frame to identify the absolute speed Sn for the n-th frame based on the position vectors Pn and Pn+1 and the time interval ΔT.
The control unit 121 may identify the absolute speed S of the HMD 110 in the w axis direction, or may identify the absolute speed S of the HMD 110 in a predetermined direction other than the w axis direction. For example, when the position in the w axis direction of the position Pn of the HMD 110 for an n-th frame (n is an integer of 1 or more) is set to wn, the position in the w axis direction of a position Pn+1 of the HMD 110 for an (n+1)-th frame is set to wn+1, and a time interval between frames is set to ΔT, the absolute speed Sn of the HMD 110 in the w-axis direction for the n-th frame is Sn=(wn+1−wn)/ΔT.
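Both forms of the absolute speed above (the norm of the displacement vector and its component along the w axis) can be sketched as follows; the 90 fps frame interval is the example value used in the text, and the function names are illustrative.

```python
import numpy as np

DELTA_T = 1.0 / 90.0  # frame interval at 90 fps

def absolute_speed(p_n, p_n_plus_1, delta_t=DELTA_T):
    """Sn = |Pn+1 - Pn| / dT for three-dimensional position vectors."""
    return float(np.linalg.norm(np.asarray(p_n_plus_1) - np.asarray(p_n))) / delta_t

def absolute_speed_w(w_n, w_n_plus_1, delta_t=DELTA_T):
    """Sn = (wn+1 - wn) / dT along the w axis only."""
    return (w_n_plus_1 - w_n) / delta_t

print(absolute_speed([0.0, 1.6, 0.00], [0.0, 1.6, 0.01]))  # -> about 0.9 (m/s)
print(absolute_speed_w(0.00, 0.01))                        # -> about 0.9 (m/s)
```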
Next, in Step S11C, the control unit 121 identifies the visual field CV of the virtual camera 300. Specifically, the control unit 121 identifies the position and inclination of the HMD 110 based on data from the position sensor 130 and/or the HMD sensor 114, and identifies the visual field CV of the virtual camera 300 based on the position and inclination of the HMD 110. After that, the control unit 121 identifies the position of the right hand object 400R (Step S12C). Specifically, the control unit 121 identifies the position of the controller 320R in the real space based on data from the position sensor 130 and/or a sensor of the controller 320R, and identifies the position of the right hand object 400R based on the position of the controller 320R in the real space.
Next, the control unit 121 determines whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth (S≦Sth) (Step S13C). The predetermined value Sth (Sth≧0) may be set appropriately depending on details of content, for example, a game. Further, the determination condition (S≦Sth) defined in Step S13C corresponds to a determination condition (first condition) for determining whether or not the collision area CA of the right hand object 400R (operation object) has touched the collision area of the operation portion 620 of the button object 600 (target object) intentionally.
When the control unit 121 determines that the determination condition defined in Step S13C is not satisfied (that is, when the absolute speed S is determined to be larger than the predetermined value Sth) (NO in Step S13C), the control unit 121 does not execute processing defined in Step S14C and Step S15C. That is, the control unit 121 does not execute collision determination processing defined in Step S14C and processing of causing a collision effect defined in Step S15C, and thus the right hand object 400R does not exert a predetermined effect on the operation portion 620 of the button object 600.
On the contrary, when the control unit 121 determines that the determination condition defined in Step S13C is satisfied (that is, when the absolute speed S is determined to be equal to or smaller than the predetermined value Sth) (YES in Step S13C), the control unit 121 determines whether or not the collision area of the operation portion 620 of the button object 600 touches the collision area CA of the right hand object 400R (Step S14C). In particular, the control unit 121 determines whether or not the collision area of the operation portion 620 touches the collision area CA of the right hand object 400R based on the position of the right hand object 400R and the position of the operation portion 620 of the button object 600. When the result of determination in Step S14C is YES, the control unit 121 exerts a predetermined effect on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S15C). For example, the control unit 121 may determine that the operation portion 620 is pressed by the right hand object 400R as an effect of collision between the right hand object 400R and the operation portion 620. As a result, a predetermined effect may be exerted on a predetermined object (not shown) placed in the virtual space 200. Further, a contact surface 620a of the operation portion 620 may move in the +X direction as an effect of collision between the right hand object 400R and the operation portion 620. On the contrary, when the result of determination in Step S14C is NO, an effect of collision between the right hand object 400R and the operation portion 620 is not caused.
According to at least one embodiment, a determination is made whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth as a determination condition for determining whether or not the collision area CA of the right hand object 400R touches the collision area of the operation portion 620 intentionally. When the absolute speed S is larger than the predetermined value Sth (when the determination condition of Step S13C is not satisfied), the processing defined in Step S14C and Step S15C is not executed. In this manner, when the right hand object 400R has touched the operation portion 620 with the absolute speed S of the HMD 110 being larger than the predetermined value Sth, the right hand object 400R is determined to have touched the operation portion 620 unintentionally. Thus, determination of collision between the right hand object 400R and the operation portion 620 is not executed, and an effect of collision between the right hand object 400R and the operation portion 620 is not caused.
Therefore, a situation is avoided in which, when the right hand object 400R has touched the operation portion 620 unintentionally, the operation portion 620 is pressed by the right hand object 400R unintentionally. In this manner, further improvement of the virtual experience of the user is possible.
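For reference, the flow of Step S13C to Step S15C may be summarized by the following minimal sketch in Python. The sketch is only illustrative: the threshold value, the spherical collision areas, and the names Sphere and process_collision are assumptions introduced here and are not part of the embodiments.

    from dataclasses import dataclass

    S_TH = 0.5  # hypothetical predetermined value Sth for the absolute speed S (m/s)

    @dataclass
    class Sphere:
        # Simplified collision area represented as a sphere (assumption).
        x: float
        y: float
        z: float
        radius: float

        def touches(self, other: "Sphere") -> bool:
            dx, dy, dz = self.x - other.x, self.y - other.y, self.z - other.z
            return (dx * dx + dy * dy + dz * dz) ** 0.5 <= self.radius + other.radius

    def process_collision(hmd_speed: float, hand_area: Sphere, operation_area: Sphere) -> bool:
        """Returns True when the collision effect of Step S15C is caused."""
        # Step S13C: first condition on the absolute speed S of the HMD.
        if hmd_speed > S_TH:
            return False  # touch treated as unintentional; Step S14C and Step S15C are skipped
        # Step S14C: collision determination between the two collision areas.
        if not hand_area.touches(operation_area):
            return False
        # Step S15C: collision effect (e.g. the operation portion 620 is pressed).
        return True

    # A slow head movement with overlapping collision areas causes the effect; a fast one does not.
    hand = Sphere(0.0, 0.0, 0.0, 0.1)
    button = Sphere(0.05, 0.0, 0.0, 0.1)
    print(process_collision(0.1, hand, button))  # True
    print(process_collision(1.2, hand, button))  # False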
Next, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In
When the result of determination in Step S23C is NO, the collision area of the operation portion 620 does not touch the collision area CA of the right hand object 400R, and thus the control unit 121 ends this processing. On the contrary, when the result of determination in Step S23C is YES (when it is determined that the collision area of the operation portion 620 touches the collision area CA of the right hand object 400R), the control unit 121 determines whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth (Step S24C).
When the control unit 121 determines that the absolute speed S is larger than the predetermined value Sth (NO in Step S24C), the control unit 121 ends this processing without executing processing defined in Step S25C. On the contrary, when the control unit 121 determines that the absolute speed S is equal to or smaller than the predetermined value Sth (YES in Step S24C), the control unit 121 exerts a predetermined effect (collision effect) on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S25C).
In this manner, in response to a determination that the absolute speed S is larger than the predetermined value Sth (when the determination condition of Step S24C is not satisfied), the processing defined in Step S25C is not executed. That is, when the right hand object 400R has touched the operation portion 620 with the absolute speed S of the HMD 110 being larger than the predetermined value Sth, the right hand object 400R is determined to have touched the operation portion 620 unintentionally, and thus an effect of collision between the right hand object 400R and the operation portion 620 is not caused. In this respect, in the information processing method according to at least one embodiment, determination of collision between the right hand object 400R and the operation portion 620 is executed, whereas an effect of collision between the right hand object 400R and the operation portion 620 is not caused when the result of determination in Step S24C is NO.
Therefore, a situation is avoided in which, when the right hand object 400R has touched the operation portion 620 unintentionally, the operation portion 620 is pressed by the right hand object 400R unintentionally. In this manner, further improvement of the virtual experience of the user is possible.
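As a point of comparison with the preceding flow, the following minimal Python sketch (with assumed names and threshold) illustrates Steps S23C to S25C: here the collision determination is always executed, and only the collision effect is withheld when the absolute speed S exceeds the predetermined value Sth.

    S_TH = 0.5  # hypothetical predetermined value Sth (m/s)

    def process_collision(areas_touch: bool, hmd_speed: float) -> bool:
        """Returns True when the collision effect of Step S25C is caused."""
        # Step S23C: collision determination (passed in as a boolean for brevity).
        if not areas_touch:
            return False
        # Step S24C: first condition on the absolute speed S of the HMD.
        if hmd_speed > S_TH:
            return False  # the determination was executed, but no effect is caused
        # Step S25C: collision effect.
        return True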
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In
When the result of determination in Step S33C is YES, the control unit 121 determines whether or not the collision area of the operation portion 620 of the button object 600 touches the collision area CA of the right hand object 400R (Step S35C). On the contrary, when the result of determination in Step S33C is NO, the control unit 121 determines whether or not the right hand object 400R is present in the visual field CV of the virtual camera 300 based on the visual field CV of the virtual camera 300 and the position of the right hand object 400R (Step S34C). In
On the contrary, when the control unit 121 determines that the determination condition defined in Step S34C is satisfied (that is, in response to a determination that the right hand object 400R is present in the visual field CV of the virtual camera 300) (YES in Step S34C), the control unit 121 executes the determination processing (collision determination processing) of Step S35C. When the result of determination in Step S35C is YES, the control unit 121 exerts a predetermined effect on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S36C). On the contrary, when the result of determination in Step S35C is NO, the control unit 121 ends this processing without executing the processing of Step S36C.
According to at least one embodiment, as determination conditions for determining whether or not the collision area CA of the right hand object 400R touches the collision area of the operation portion 620 intentionally, a determination is made whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth (Step S33C), and a determination is made whether or not the right hand object 400R is present in the visual field CV of the virtual camera 300 (Step S34C). Further, in response to a determination that neither the determination condition of Step S33C nor the determination condition of Step S34C is satisfied, the right hand object 400R does not exert a predetermined effect on the operation portion 620. In this manner, through use of two different determination conditions, whether or not the right hand object 400R has touched the operation portion 620 unintentionally is more reliably determined. In particular, when the right hand object 400R has touched the operation portion 620 under a state in which the absolute speed S of the HMD 110 is larger than the predetermined value Sth and the right hand object 400R is not present in the visual field CV, a determination is made that the right hand object 400R has touched the operation portion 620 unintentionally, and an effect of collision between the right hand object 400R and the operation portion 620 is not caused. Therefore, when the right hand object 400R has touched the operation portion 620 unintentionally, a situation in which the operation portion 620 is pressed by the right hand object 400R unintentionally can be avoided. In this manner, further improvement of the virtual experience of the user is possible.
In at least one embodiment, in Step S34C, the control unit 121 determines whether or not the right hand object 400R is present in the visual field CV. Instead, the control unit 121 may determine whether or not at least one of the right hand object 400R and the button object 600 is present in the visual field CV. In this case, when the right hand object 400R has touched the operation portion 620 under a state in which the absolute speed S of the HMD 110 is larger than the predetermined value Sth and the right hand object 400R and the button object 600 are both not present in the visual field CV, a determination is made that the right hand object 400R has touched the operation portion 620 unintentionally, and an effect of collision between the right hand object 400R and the operation portion 620 is not caused.
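The two determination conditions of this embodiment may be combined as in the following Python sketch (assumed names and threshold); whether the second condition checks only the right hand object 400R or, as in the modified example above, the right hand object 400R or the button object 600, is controlled by the visibility flag that is passed in.

    S_TH = 0.5  # hypothetical predetermined value Sth (m/s)

    def process_collision(hmd_speed: float, object_in_visual_field: bool, areas_touch: bool) -> bool:
        """Returns True when the collision effect of Step S36C is caused."""
        # Step S33C: first condition on the absolute speed S of the HMD.
        if hmd_speed > S_TH:
            # Step S34C: second condition -- is the hand object (or, in the modified
            # example, the hand object or the button object) in the visual field CV?
            if not object_in_visual_field:
                return False  # neither condition is satisfied; no collision processing
        # Step S35C: collision determination.
        if not areas_touch:
            return False
        # Step S36C: collision effect.
        return True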
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In
When the result of determination in Step S43 is NO, the collision area of the operation portion 620 does not touch the collision area CA of the right hand object 400R, and thus the control unit 121 ends this processing. On the contrary, when the result of determination in Step S43 is YES (when the collision area of the operation portion 620 touches the collision area CA of the right hand object 400R), the control unit 121 determines whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth (Step S44).
When the control unit 121 determines that the absolute speed S is larger than the predetermined value Sth (NO in Step S44), the control unit 121 executes determination processing defined in Step S45. On the contrary, when the control unit 121 determines that the absolute speed S is equal to or smaller than the predetermined value Sth (YES in Step S44), the control unit 121 exerts a predetermined effect (collision effect) on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S46).
When the control unit 121 determines that the right hand object 400R is present in the visual field CV of the virtual camera 300 (YES in Step S45), the control unit 121 exerts a predetermined effect (collision effect) on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S46). On the contrary, when the result of determination in Step S45 is NO, the control unit 121 ends this processing without executing the processing of Step S46.
In the information processing method according to at least one embodiment, determination of collision between the right hand object 400R and the operation portion 620 is executed, but when neither the determination condition of Step S44 nor the determination condition of Step S45 is satisfied, an effect of collision between the right hand object 400R and the operation portion 620 is not caused.
Therefore, a situation is avoided in which, when the right hand object 400R has touched the operation portion 620 unintentionally, the operation portion 620 is pressed by the right hand object 400R unintentionally. In this manner, further improvement of the virtual experience of the user is possible.
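A minimal Python sketch of Steps S43 to S46 (assumed names and threshold) is given below; in contrast to the previous sketch, the collision determination is executed first, and the two conditions only decide whether the collision effect is caused.

    S_TH = 0.5  # hypothetical predetermined value Sth (m/s)

    def process_collision(areas_touch: bool, hmd_speed: float, hand_in_visual_field: bool) -> bool:
        """Returns True when the collision effect of Step S46 is caused."""
        # Step S43: collision determination.
        if not areas_touch:
            return False
        # Step S44: first condition on the absolute speed S of the HMD.
        if hmd_speed <= S_TH:
            return True  # Step S46: collision effect
        # Step S45: second condition -- the hand object is in the visual field CV.
        return hand_in_visual_field  # Step S46 when True; otherwise no effect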
Now, a description is given of an information processing method according to at least one embodiment of this disclosure with reference to
In
Next, the control unit 121 executes processing defined in Step S52 and Step S53, and then determines whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth (Step S54).
When the result of determination in Step S54 is YES, the control unit 121 determines whether or not the collision area of the operation portion 620 of the button object 600 touches the collision area CA of the right hand object 400R (Step S56). On the contrary, when the result of determination in Step S54 is NO, the control unit 121 determines whether or not the relative speed V of the controller 320R with respect to the HMD 110 is larger than the predetermined value Vth (Step S55). The predetermined value Vth (Vth≧0) can be set appropriately depending on details of content, for example, a game. When the result of determination in Step S55 is NO, the control unit 121 ends this processing without executing processing of Step S56 and Step S57. That is, the control unit 121 does not execute collision determination processing defined in Step S56 and processing of causing a collision effect defined in Step S57, and thus the right hand object 400R does not exert a predetermined effect on the operation portion 620 of the button object 600. The determination condition defined in Step S55 corresponds to a determination condition (second condition) for determining whether or not the collision area CA of the right hand object 400R has touched the collision area of the operation portion 620 intentionally.
On the contrary, when the control unit 121 determines that the determination condition defined in Step S55 is satisfied (that is, when it is determined that the relative speed V is larger than the predetermined value Vth) (YES in Step S55), the control unit 121 executes the determination processing (collision determination processing) of Step S56. When the result of determination in Step S56 is YES, the control unit 121 exerts a predetermined effect on the operation portion 620 touching the collision area CA of the right hand object 400R (Step S57). On the contrary, when the result of determination in Step S56 is NO, the control unit 121 ends this processing without executing the processing of Step S57.
According to at least one embodiment, as determination conditions for determining whether or not the collision area CA of the right hand object 400R touches the collision area of the operation portion 620 intentionally, whether or not the absolute speed S of the HMD 110 is equal to or smaller than the predetermined value Sth is determined (Step S54), and whether or not the relative speed V of the controller 320R with respect to the HMD 110 is larger than the predetermined value Vth is determined (Step S55). Further, in response to a determination that neither the determination condition of Step S54 nor the determination condition of Step S55 is satisfied, the right hand object 400R does not exert a predetermined effect on the operation portion 620. In this manner, through use of two different determination conditions, it is possible to more reliably determine whether or not the right hand object 400R has touched the operation portion 620 unintentionally. In particular, when the right hand object 400R has touched the operation portion 620 under a state in which the absolute speed S of the HMD 110 is larger than the predetermined value Sth and the relative speed V of the controller 320R with respect to the HMD 110 is equal to or smaller than the predetermined value Vth, the right hand object 400R is determined to have touched the operation portion 620 unintentionally, and an effect of collision between the right hand object 400R and the operation portion 620 is not caused. Therefore, when the right hand object 400R has touched the operation portion 620 unintentionally, a situation in which the operation portion 620 is pressed by the right hand object 400R unintentionally can be avoided. In this manner, further improvement of the virtual experience of the user is possible.
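The combination of the absolute speed S and the relative speed V may be sketched as follows in Python (assumed names and thresholds); the idea is that a deliberate reach of the hand toward the button remains effective even while the head itself is moving quickly.

    S_TH = 0.5  # hypothetical predetermined value Sth for the absolute speed S (m/s)
    V_TH = 1.0  # hypothetical predetermined value Vth for the relative speed V (m/s)

    def process_collision(hmd_speed: float, relative_speed: float, areas_touch: bool) -> bool:
        """Returns True when the collision effect of Step S57 is caused."""
        # Step S54: first condition on the absolute speed S of the HMD.
        if hmd_speed > S_TH:
            # Step S55: second condition on the relative speed V of the controller.
            if relative_speed <= V_TH:
                return False  # neither condition is satisfied; no collision processing
        # Step S56: collision determination.
        if not areas_touch:
            return False
        # Step S57: collision effect.
        return True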
The above description of embodiments is not to be read as a restrictive interpretation of the technical scope of this disclosure. The embodiments are merely examples, and it is to be understood by a person skilled in the art that various modifications can be made to the embodiments within the scope of this disclosure set forth in the appended claims. Thus, the technical scope of this disclosure is to be defined based on the scope of this disclosure set forth in the appended claims and an equivalent scope thereof.
In the above described embodiments, the movement of the hand object is controlled based on the movement of the external controller 320 representing the movement of the hand of the user U, but the movement of the hand object in the virtual space may instead be controlled directly based on the movement amount of the hand of the user U. For example, instead of using the external controller, a glove-type device or a ring-type device to be worn on the hand or fingers of the user may be used. With this, the position sensor 130 can detect the position and the movement amount of the hand of the user U, and can detect the movement and the state of the hand and fingers of the user U. Further, the position sensor 130 may be a camera configured to take an image of the hand (including the fingers) of the user U. In this case, by taking an image of the hand of the user with the camera, the position and the movement amount of the hand of the user U can be detected, and the movement and the state of the hand and fingers of the user U can be detected based on the image data of the hand of the user, without the user wearing any kind of device directly on the hand or fingers.
Further, in the above described embodiments, there is set a collision effect for defining the effect to be exerted on the wall object by the hand object based on the position and/or movement of the hand, which is a part of the body of the user U other than the head, but the embodiments are not limited thereto. For example, there may be set a collision effect for defining, based on a position and/or movement of a foot of the user U being a part of the body of the user U other than the head, an effect to be exerted on a target object by a foot object (example of operation object), which is synchronized with the movement of the foot of the user U. As described above, in the embodiments, there may be set a collision effect for identifying a relative relationship (distance and relative speed) between the HMD 110 and a part of the body of the user U, and defining the effect to be exerted on a target object by the operation object, which is synchronized with the part of the body of the user U, based on the identified relative relationship.
Further, in the above described embodiments, the wall object 500 is described as an example of a target object that the hand object exerts a predetermined effect on, but the attribute of a target object is not particularly limited. Further, an appropriate condition may be set as a condition for moving the virtual camera 300 other than a condition on collision between the hand object 400 and the wall object 500. For example, when a predetermined finger of the hand object 400 is directed to the wall object 500 for a predetermined period of time, the virtual camera 300 may be moved as well as the counter part 510 of the wall object 500. In this case, as in
In Non-Patent Document 1, there is no disclosure of setting a predetermined effect that is exerted on a predetermined object in the VR space in accordance with movement of the user in the real space. In particular, in Non-Patent Document 1, there is no disclosure of changing an effect (hereinafter referred to as “collision effect”) for defining an effect that is exerted on a virtual object (target object) by a hand object due to collision between the hand object and the virtual object in accordance with movement of the hand of the user. Therefore, there is room for improvement of experience in a VR space, an augmented reality (AR) space, and a mixed reality (MR) space of the user by improving the effect that is exerted on the virtual object in accordance with movement of the user.
(1) An information processing method to be executed in a system including a head-mounted device and a position sensor configured to detect a position of the head-mounted device and a position of a part of a body other than a head of a user. The information processing method includes generating virtual space data for defining a virtual space that includes a virtual camera, an operation object, and a target object. The method further includes updating a visual field of the virtual camera in accordance with movement of the head-mounted device. The method further includes generating visual-field image data based on the visual field of the virtual camera and the virtual space data. The method further includes displaying a visual-field image on the head-mounted device based on the visual-field image data. The method further includes moving the operation object in accordance with movement of the part of the body of the user. The method further includes identifying a relative relationship between the head-mounted device and the part of the body of the user. The method further includes setting a collision effect for defining an effect that is exerted by the operation object on the target object depending on the identified relative relationship.
According to the above-mentioned method, the collision effect is set depending on the relative relationship between the head-mounted device and the part (excluding the head of the user) of the body of the user, and thus further improvement of the experience (hereinafter referred to as “virtual experience”) of the user for the virtual object (target object) is possible.
(2) An information processing method according to Item (1), in which the setting of the collision effect includes setting a size of a collision area of the operation object depending on the identified relative relationship. The setting of the collision effect further includes exerting an effect on the target object depending on a positional relationship between the collision area of the operation object and the target object.
According to the above-mentioned method, the size of the collision area of the operation object is set depending on the relative relationship between the head-mounted device and the part (excluding the head of the user) of the body of the user, and in addition, an effect is exerted on the target object depending on the positional relationship between the collision area of the operation object and the target object. In this manner, further improvement of the virtual experience is possible.
(3) An information processing method according to Item (1) or (2), in which the identifying of the relative relationship includes a step of identifying a relative positional relationship between the head-mounted device and the part of the body of the user. The setting of the collision effect includes setting the collision effect depending on the identified relative positional relationship.
According to the above-mentioned method, the collision effect is set depending on the relative relationship between the head-mounted device and the part (excluding the head of the user) of the body of the user, and thus further improvement of the virtual experience is possible.
(4) An information processing method according to Item (3),
in which the identifying of the relative relationship includes a step of identifying a distance between the head-mounted device and the part of the body of the user. The setting of the collision effect includes setting the collision effect depending on the identified distance.
According to the above-mentioned method, the collision effect is set depending on the distance between the head-mounted device and the part (excluding the head of the user) of the body of the user, and thus further improvement of the virtual experience is possible.
(5) An information processing method according to Item (1) or (2),
in which the identifying of the relative relationship includes a step of identifying a relative speed of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified relative speed.
According to the above-mentioned method, the collision effect is set depending on the relative speed of the part (excluding the head of the user) of the body of the user with respect to the head-mounted device, and thus further improvement of the virtual experience is possible.
(6) An information processing method according to Item (4),
in which the identifying of the relative relationship includes a step of identifying a relative speed of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified distance and the identified relative speed.
According to the above-mentioned method, the collision effect is set depending on the distance between the head-mounted device and the part (excluding the head of the user) of the body of the user and the relative speed of the part of the body of the user with respect to the head-mounted device, and thus further improvement of the virtual experience is possible.
(7) An information processing method according to Item (1) or (2),
in which the identifying of the relative relationship further includes a step of identifying a relative acceleration of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified relative acceleration.
According to the above-mentioned method, the collision effect is set depending on the relative acceleration of the part (excluding the head of the user) of the body of the user with respect to the head-mounted device, and thus further improvement of the virtual experience is possible.
(8) An information processing method according to Item (4),
in which the identifying of the relative relationship further includes a step of identifying a relative acceleration of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified distance and the identified relative acceleration.
According to the above-mentioned method, the collision effect is set depending on the distance between the head-mounted device and the part (excluding the head of the user) of the body of the user and the relative acceleration of the part of the body of the user with respect to the head-mounted device, and thus further improvement of the virtual experience is possible.
(9) A system for executing the information processing method of any one of Items (1) to (8).
Providing a system capable of further improving the virtual experience is possible.
In Non-Patent Document 1, there is no disclosure of setting a predetermined effect that is exerted on a predetermined object in the VR space in accordance with movement of the user in the real space. In particular, in Non-Patent Document 1, there is no disclosure of changing an effect (hereinafter referred to as “collision effect”) for defining an effect that is exerted on a virtual object (target object) by a hand object due to collision between the hand object and the virtual object in accordance with movement of the hand of the user. Therefore, there is room for improvement of experience in the VR space, the augmented reality (AR) space, and the mixed reality (MR) space of the user by improving the effect that is exerted on the virtual object in accordance with movement of the user.
(10) An information processing method to be executed in a system including a head-mounted device and a position sensor configured to detect a position of the head-mounted device and a position of a part of a body other than a head of a user. The information processing method includes generating virtual space data for defining a virtual space that includes a virtual camera, an operation object, and a target object. The method further includes updating a visual field of the virtual camera in accordance with movement of the head-mounted device. The method further includes generating visual-field image data based on the visual field of the virtual camera and the virtual space data. The method further includes displaying a visual-field image on the head-mounted device based on the visual-field image data. The method further includes moving the operation object in accordance with movement of the part of the body of the user. The method further includes setting a collision effect for defining an effect that is exerted by the operation object on the target object depending on an absolute speed of the head-mounted device. Setting the collision effect includes setting the collision effect as a first collision effect when the absolute speed of the head-mounted device is equal to or smaller than a predetermined value. Setting the collision effect further includes setting the collision effect as a second collision effect different from the first collision effect when the absolute speed of the head-mounted device is larger than the predetermined value.
According to the above-mentioned method, the collision effect is set depending on the absolute speed of the head-mounted device. In particular, when the absolute speed of the head-mounted device is equal to or smaller than the predetermined value, the collision effect is set as the first collision effect, whereas when the absolute speed of the head-mounted device is larger than the predetermined value, the collision effect is set as the second collision effect. In this manner, further improvement of the experience (hereinafter referred to as “virtual experience”) of the user for the virtual object (target object) is possible.
(11) An information processing method according to Item (10), wherein setting the collision effect as the first collision effect includes setting, when the absolute speed of the head-mounted device is equal to or smaller than the predetermined value, a size of a collision area of the operation object to a first size. Setting the collision effect as the first collision effect further includes exerting an effect on the target object depending on a positional relationship between the collision area of the operation object and the target object. Setting the collision effect as the second collision effect includes setting, when the absolute speed of the head-mounted device is larger than the predetermined value, the size of the collision area of the operation object to a second size different from the first size. Setting the collision effect as the second collision effect further includes exerting an effect on the target object depending on the positional relationship between the collision area of the operation object and the target object.
According to the above-mentioned method, the size of the collision area of the operation object is set depending on the absolute speed of the head-mounted device. In particular, when the absolute speed of the head-mounted device is equal to or smaller than the predetermined value, the size of the collision area of the operation object is set to the first size. On the contrary, when the absolute speed of the head-mounted device is larger than the predetermined value, the size of the collision area of the operation object is set to the second size. Further, an effect is exerted on the target object depending on the positional relationship between the collision area of the operation object and the target object. In this manner, further improvement of the virtual experience is possible.
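One straightforward way to realize Item (11) is to switch the radius of the collision area as in the sketch below (Python, with assumed values); choosing a smaller second size while the head-mounted device moves fast makes an unintentional touch less likely, but any two different sizes satisfy the item.

    S_TH = 0.5          # hypothetical predetermined value for the absolute speed (m/s)
    FIRST_SIZE = 0.10   # hypothetical first size of the collision area (m)
    SECOND_SIZE = 0.03  # hypothetical second size, different from the first size (m)

    def collision_area_size(hmd_speed: float) -> float:
        """First collision effect while the HMD is slow, second collision effect otherwise."""
        return FIRST_SIZE if hmd_speed <= S_TH else SECOND_SIZE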
(12) An information processing method according to Item (10) or (11), further including identifying a relative speed of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified absolute speed of the head-mounted device and the identified relative speed.
According to the above-mentioned method, the collision effect is set depending on the absolute speed of the head-mounted device and the relative speed of the part (excluding the head of the user) of the body of the user with respect to the head-mounted device, and thus further improvement of the virtual experience is possible.
(13) An information processing method according to Item (10) or (11), further including identifying a relative acceleration of the part of the body of the user with respect to the head-mounted device. The setting of the collision effect includes setting the collision effect depending on the identified absolute speed of the head-mounted device and the identified relative acceleration.
According to the above-mentioned method, the collision effect is set depending on the absolute speed of the head-mounted device and the relative acceleration of the part (excluding the head of the user) of the body of the user with respect to the head-mounted device, and thus it is possible to further improve the virtual experience.
(14) A system for executing the information processing method of any one of Items (10) to (13).
Providing a system capable of further improving the virtual experience is possible.
In Non-Patent Document 1, the visual-field image presented on the HMD changes in accordance with movement of the head-mounted device in the real space. In this case, the user needs to move in the real space or perform input for designating a movement destination on a device, for example, a controller, in order for the user to reach a desired object in the VR space.
(20)
An information processing method to be executed by a computer configured to control a system including a head-mounted device and a position sensor configured to detect a position of the head-mounted device and a position of a part of a body other than a head of a user. The information processing method includes identifying virtual space data for defining a virtual space that includes a virtual camera, an operation object, and a target object. The method further includes moving the virtual camera in accordance with movement of the head-mounted device. The method further includes moving the operation object in accordance with movement of the part of the body. The method further includes moving the virtual camera without association with movement of the head-mounted device when the operation object and the target object satisfy a predetermined condition. The method further includes defining a visual field of the virtual camera based on movement of the virtual camera and generating visual-field image data based on the visual field and the virtual space data. The method further includes displaying a visual-field image on the head-mounted device based on the visual-field image data.
According to the information processing method of this item, when the operation object and the target object, which move in accordance with movement of the part of the body of the user, satisfy the predetermined condition, automatically moving the virtual camera is possible. With this, the user can recognize movement in the VR space that conforms to the intention of the user, and the virtual experience can be improved.
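A per-frame sketch of the method of Item (20) is shown below in Python; the touch threshold, the destination of the automatic movement, and the vector type are all assumptions introduced for illustration rather than part of the method.

    from dataclasses import dataclass

    TOUCH_DISTANCE = 0.1  # hypothetical distance at which the objects are regarded as touching (m)

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

    def distance(a: Vec3, b: Vec3) -> float:
        return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

    def update_camera(hmd_pos: Vec3, hand_object_pos: Vec3, target_pos: Vec3) -> Vec3:
        """Returns the virtual-camera position for this frame."""
        if distance(hand_object_pos, target_pos) <= TOUCH_DISTANCE:
            # Predetermined condition satisfied: move the virtual camera without
            # association with movement of the HMD (assumed destination near the target).
            return Vec3(target_pos.x, hmd_pos.y, target_pos.z - 1.0)
        # Otherwise the virtual camera simply follows movement of the HMD.
        return hmd_pos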
(Item 21)
A method according to Item 20, in which the moving of the virtual camera includes moving the operation object in accordance with movement of the virtual camera so that a relative positional relationship between the head-mounted device and the part of the body is maintained.
According to the information processing method of this item, the user can continue the virtual experience using the operation object without a feeling of strangeness even after the movement. With this, the virtual experience can be improved.
(Item 22)
A method according to Item 20 or 21, in which the predetermined condition includes touch between the operation object and the target object.
According to the information processing method of this item, the user can move in the VR space in accordance with the intention of the user.
(Item 23)
A method according to Item 22,
in which the moving of the virtual camera includes a step of processing a counter part of the target object, which is opposed to the virtual camera, based on the touch so that the counter part moves away from the virtual camera. The method further includes moving the virtual camera so that the virtual camera approaches the counter part and the operation object does not touch the target object when the operation object is moved in accordance with movement of the virtual camera so as to keep a relative positional relationship between the head-mounted device and the part of the body.
According to the information processing method of this item, preventing occurrence of movement in the VR space that is not intended by the user due to the fact that the operation object has touched the target object after movement is possible.
(Item 24)
A method according to Item 22 or 23, in which the moving of the virtual camera includes moving the virtual camera in a direction of extension of a visual axis of the virtual camera at a time when the operation object and the target object have touched each other.
According to the information processing method of this item, the virtual camera is moved in a front direction of the user in the VR space, and thus preventing visually induced motion sickness (so-called VR sickness), which may be caused at the time of movement of the virtual camera, is possible.
(Item 25)
A method according to Item 24, further including reducing a distance for which the virtual camera is moved as a position at which the operation object and the target object have touched each other becomes farther away from the visual axis of the virtual camera.
According to the information processing method of this item, preventing occurrence of movement in the VR space that is not intended by the user due to the fact that the operation object has touched the target object after movement is possible.
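As an illustration of Item (25), the travel distance could be scaled down with the angular offset of the touch position from the visual axis, for example linearly as in the following sketch (Python; the maximum distance and the half angle of the visual field are assumptions).

    MAX_DISTANCE = 2.0        # hypothetical travel distance when the touch lies on the visual axis (m)
    HALF_FIELD_ANGLE = 45.0   # hypothetical half angle of the visual field (degrees)

    def travel_distance(angle_from_visual_axis: float) -> float:
        """Reduce the travel distance linearly to zero at the edge of the visual field."""
        ratio = min(abs(angle_from_visual_axis) / HALF_FIELD_ANGLE, 1.0)
        return MAX_DISTANCE * (1.0 - ratio)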
(Item 26)
A method according to any one of Items 22 to 25, further including avoiding moving the virtual camera when the position at which the operation object and the target object have touched each other is outside the visual field of the virtual camera.
According to the information processing method of this item, preventing occurrence of movement in the VR space that is not intended by the user is possible.
(Item 27)
A system for executing the method of any one of Items 20 to 26.
In the VR game disclosed in Non-Patent Document 1, an object may operate erroneously due to the hand object touching the object unintentionally during an operation of the hand object.
(30) An information processing method to be executed by a computer in a system including a head-mounted device and a position sensor configured to detect a position of the head-mounted device and a position of a part of a body other than a head of a user. The information processing method includes generating virtual space data for defining a virtual space that includes a virtual camera, an operation object, and a target object. The method further includes identifying a visual field of the virtual camera based on a position and an inclination of the head-mounted device. The method further includes displaying a visual-field image on the head-mounted device based on the visual field of the virtual camera and the virtual space data. The method further includes identifying a position of the operation object based on the position of the part of the body of the user. The method further includes avoiding causing the operation object to exert a predetermined effect on the target object when a predetermined condition for determining whether or not a collision area of the operation object has touched a collision area of the target object intentionally is not satisfied.
According to the above-mentioned method, when the predetermined condition for determining whether or not the collision area of the operation object has touched the collision area of the target object intentionally is not satisfied, the operation object does not exert a predetermined effect on the target object. In this manner, when the operation object is determined to have touched the target object unintentionally, an effect of collision between the operation object and the target object does not occur. For example, a situation is avoided in which, when the hand object (example of operation object) has touched the button object (example of target object) unintentionally, the button object is pressed by the hand object unintentionally. Therefore, providing the information processing method capable of further improving the virtual experience of the user is possible.
(31) An information processing method according to Item (30), further including identifying an absolute speed of the head-mounted device, in which the predetermined condition specifies that the absolute speed of the head-mounted device is equal to or smaller than a predetermined value.
According to the above-mentioned method, when the absolute speed of the head-mounted device (HMD) is not equal to or smaller than the predetermined value (that is, absolute speed of HMD is larger than predetermined value), the operation object does not exert a predetermined effect on the target object. In this manner, when the operation object has touched the target object with the absolute speed of the HMD being larger than the predetermined value, the operation object is determined to have touched the target object unintentionally, and an effect of collision between the operation object and the target object does not occur.
(32) An information processing method according to Item (30) or (31), in which the predetermined condition includes a first condition and a second condition. The first condition is a condition on the head-mounted device. The second condition is a condition different from the first condition. The information processing method further includes avoiding causing the operation object to exert the predetermined effect on the target object when the first condition and the second condition are not satisfied.
According to the above-mentioned method, when the first condition and the second condition for determining whether or not the collision area of the operation object has touched the collision area of the target object intentionally are not satisfied, the operation object does not exert a predetermined effect on the target object. In this manner, reliably determining whether or not the operation object has touched the target object unintentionally by using two determination conditions different from each other is possible.
(33) An information processing method according to Item (32), further including identifying an absolute speed of the head-mounted device. The first condition specifies that the absolute speed of the head-mounted device is equal to or smaller than a predetermined value. The second condition specifies that the operation object is in the visual field of the virtual camera.
According to the above-mentioned method, when the absolute speed of the head-mounted device (HMD) is not equal to or smaller than the predetermined value (that is, absolute speed of HMD is larger than predetermined value) and the operation object is outside the visual field of the virtual camera, the operation object does not exert a predetermined effect on the target object. In this manner, when the operation object has touched the target object with the absolute speed of the HMD being larger than the predetermined value and with the operation object being outside the visual field of the virtual camera, the operation object is determined to have touched the target object unintentionally, and an effect of collision between the operation object and the target object does not occur.
(34) An information processing method according to Item (32), further including identifying an absolute speed of the head-mounted device. The first condition specifies that the absolute speed of the head-mounted device is equal to or smaller than a predetermined value. The second condition specifies that at least one of the operation object and the target object is in the visual field of the virtual camera.
According to the above-mentioned method, when the absolute speed of the head-mounted device (HMD) is not equal to or smaller than the predetermined value (that is, absolute speed of HMD is larger than predetermined value) and both of the operation object and the target object are outside the visual field of the virtual camera, the operation object does not exert a predetermined effect on the target object. In this manner, when the operation object has touched the target object with the absolute speed of the HMD being larger than the predetermined value and with both of the operation object and the target object being outside the visual field of the virtual camera, the operation object is determined to have touched the target object unintentionally, and an effect of collision between the operation object and the target object does not occur.
(35) An information processing method according to Item (32), further including identifying an absolute speed of the head-mounted device. The method further includes identifying a relative speed of the part of the body of the user with respect to the head-mounted device. The first condition specifies that the absolute speed of the head-mounted device is equal to or smaller than a predetermined value. The second condition specifies that the relative speed is larger than a predetermined value.
According to the above-mentioned method, the operation object does not exert a predetermined effect on the target object when the absolute speed of the head-mounted device (HMD) is not equal to or smaller than the predetermined value (that is, absolute speed of HMD is larger than predetermined value) and the relative speed of the part of the body of the user with respect to the HMD is not larger than the predetermined value (that is, relative speed of part of body of user with respect to HMD is equal to or smaller than predetermined value). In this manner, when the operation object has touched the target object with the absolute speed of the HMD being larger than the predetermined value and with the relative speed of the part of the body of the user being equal to or smaller than the predetermined value, the operation object is determined to have touched the target object unintentionally, and an effect of collision between the operation object and the target object does not occur.
(36) A system for executing the information processing method of any one of Items (30) to (35).
According to the above-mentioned system, further improvement of the virtual experience of the user is possible.
Number | Date | Country | Kind |
---|---|---|---|
2016-148490 | Jul 2016 | JP | national |
2016-148491 | Jul 2016 | JP | national |
2016-156006 | Aug 2016 | JP | national |
2017-006886 | Jan 2017 | JP | national |