Preferred embodiments of the present invention will be described in detail hereinafter with reference to the accompanying drawings.
<About System Arrangement>
This embodiment relates to a system which presents a virtual space to the user and, when a virtual object in the virtual space collides against the human body of the user, feeds back a collision feeling to the human body in consideration of the spread of the stimulus upon collision.
Reference numeral 100 denotes a user who experiences the virtual space. The user 100 wears an HMD 130 on his or her head, and experiences the virtual space by viewing an image displayed on a display unit of the HMD 130 in front of the eyes.
Note that the detailed arrangement required for the user to experience the virtual space is not the gist of the following description, and will be explained only briefly. An information processing apparatus 105 acquires the position and orientation of the HMD 130, which are measured by a sensor equipped on the HMD 130. The apparatus 105 generates an image of the virtual space as seen from a viewpoint having the acquired position and orientation, and displays the generated virtual space image on the display unit of the HMD 130 via an image output unit 113. Since there are various methods of acquiring the position and orientation of the HMD 130 and various practical methods of generating a virtual space image, and since they are not the gist of the following description, no further explanation will be given.
Reference numeral 1 denotes a hand of the user 100. One or more markers 199 are arranged on the hand 1, and a wearable unit 104 is attached to it. A plurality of stimulus generators 110 is mounted on this wearable unit 104. These stimulus generators 110 apply stimuli to the human body (the hand 1 in this example).
As the stimulus generator 110 used to apply a mechanical vibration stimulus, various devices may be adopted. For example, a voice-coil type stimulus generator 110 that generates mechanical vibration stimuli may be used, or a stimulus generator 110 which applies a stimulus by actuating a pin in contact with the human body using an actuator such as a piezoelectric element or polymeric actuator may be used. Alternatively, a stimulus generator 110 that presses against the skin surface with pneumatic pressure may be used.
The stimulus to be applied is not limited to a mechanical stimulus; an electric stimulus, temperature stimulus, or the like may be used as long as it stimulates the haptic sense. As the electric stimulus, a device that applies a stimulus using a micro-electrode array or the like is available. As the temperature stimulus, a device that uses a thermoelectric element or the like is available.
In this way, the plurality of stimulus generators 110 that can apply stimuli to the part wearing the wearable unit 104 are arranged on the wearable unit 104. The wearable unit 104 is easy to put on and take off since it has a glove or band shape, but any unit can be used as the wearable unit 104 as long as the user can wear it appropriately so that stimuli generated by the stimulus generators 110 are transmitted to the human body.
Note that a “part” simply indicates an arm, leg, or the like. In some cases, a combination of a plurality of parts such as “arm and body” may be generally interpreted as a “part”.
A plurality of cameras 106 is laid out at predetermined positions on the physical space and is used to capture images of markers attached to respective parts of the user. The layout position of each camera 106 is not particularly limited, and its position and orientation may be appropriately changed. Frame images (physical space images) captured by the cameras 106 are output to a position detection unit 107 included in the information processing apparatus 105.
A recording device 109 holds shape information and position and orientation information of respective virtual objects that form the virtual space. For example, when each virtual object is defined by polygons, the recording device 109 holds data of normal vectors and colors of polygons, coordinate value data of vertices which form each polygon, texture data, data of the layout position and orientation of the virtual object, and the like. The recording device 109 also holds shape information of each of virtual objects that simulate the human body (respective parts) of the user (to be referred to as a virtual human body), and information indicating the relative position and orientation relationship among the respective parts.
The position detection unit 107 detects the markers 199 in the physical space images input from the cameras 106, and calculates the positions and orientations of the respective parts of the user, including the hand 1, using the detected markers. Then, the position detection unit 107 executes processing for laying out the virtual human bodies that simulate the respective parts of the human body at the calculated positions and orientations. As a result, the virtual human bodies that simulate the respective parts of the user are laid out on the virtual space with the same positions and orientations as those of the actual parts. A state-of-the-art technique called motion capture is known as a technique to implement such processing. Note that the virtual human bodies that simulate the respective parts need not be displayed.
The reason why the virtual human body is set is as follows. That is, when shape data of a human body, e.g., a hand, is prepared in advance and is superimposed on an actual hand, the information processing apparatus can calculate an interference (contact) between the hand and a virtual object, as will be described later. In this way, even when a certain part of the human body other than the part where the markers are set has caused an interference with the virtual object, the part of the human body that causes the interference can be detected.
When an interference is to be detected at only the marker positions, or when a large number of markers are laid out, the virtual human body need not always be set. However, it is more desirable to determine an interference with the virtual object by setting the virtual human body, since this makes it possible to detect interferences with virtual objects at every position on the human body, or to reduce the number of markers.
A position determination unit 108 executes interference determination processing between the virtual human body and another virtual object (a virtual object other than the human body; to be simply referred to as a virtual object hereinafter). Since this processing is a state-of-the-art technique, a description thereof will not be given. The following description will often make an expression “collision between the human body and virtual object”, but it means “collision between a virtual object that simulates a certain part of the human body and another virtual object” in practice.
A control unit 103 executes control processing for driving the stimulus generators 110 arranged on a part simulated by the virtual human body that interferes with (collides against) the virtual object.
Reference numeral 1401 denotes a CPU which controls the overall computer using programs and data stored in a RAM 1402 and ROM 1403, and executes the respective processes to be described later as those to be implemented by the information processing apparatus 105. That is, when the position detection unit 107, position determination unit 108, control unit 103, and image output unit 113 described above are implemented by software, the CPU 1401 executes programs corresponding to these units.
The RAM 1402 has an area for temporarily storing programs and data loaded from the external storage device 1406, and an area for temporarily storing various kinds of information externally received via an I/F (interface) 1407. Also, the RAM 1402 has a work area used when the CPU 1401 executes various processes. That is, the RAM 1402 can provide various areas as needed.
The ROM 1403 stores setting data, a boot program, and the like.
Reference numeral 1404 denotes an operation unit, which comprises a keyboard, mouse, and the like. When the operator of this computer operates the operation unit 1404, the operator can input various instructions to the CPU 1401.
Reference numeral 1405 denotes a display unit which comprises a CRT, liquid crystal display, or the like. The display unit 1405 can display the processing results of the CPU 1401 by means of images, text, and the like.
The external storage device 1406 is a large-capacity information storage device represented by a hard disk drive. The external storage device 1406 saves an OS (operating system), and programs and data required to make the CPU 1401 execute respective processes (to be described later) which will be explained as those to be implemented by the information processing apparatus 105. The external storage device 1406 also saves various kinds of information held by the recording device 109 in the above description. Furthermore, the external storage device 1406 saves information described as given information.
The programs and data saved in the external storage device 1406 are loaded onto the RAM 1402 as needed under the control of the CPU 1401. When the CPU 1401 then executes processes using the loaded programs and data, this computer executes respective processes (to be described later) which will be described as those to be implemented by the information processing apparatus 105.
The I/F 1407 is connected to the aforementioned cameras 106, respective stimulus generators 110, and HMD 130. Note that the cameras 106, stimulus generators 110, and HMD 130 may have dedicated I/Fs.
Reference numeral 1408 denotes a bus which interconnects the aforementioned units.
<About Collision Between Human Body and Physical Object>
A vibration state acting on the human body upon collision between the human body and a physical object will be described below. In the following description, the “hand” will be used as an example of the human body; however, the same applies to any other body part.
Furthermore, graphs of vibration waveforms measured upon collision between the human body and a physical object show that, upon collision, not only is vibration generated at the position of the collision point, but an impact of the collision is also transmitted to positions around that point. Vibrations around the collision point position suffer delays for predetermined time periods, and attenuations of their intensities. The delay in the generation of vibration is determined by the distance from the collision point position.
In consideration of the above description, this embodiment has as its object to allow the user to experience collision feeling with higher reality by simulating, using the plurality of stimulus generators 110, the impact upon collision between the virtual human body of the user and the virtual object.
The following description will be given taking collision between the hand 1 of the user and the virtual object as an example. Also, the same explanation applies to collision between other parts of the user and the virtual object.
<About Collision Between Virtual Human Body and Virtual Object>
Detection of collision between the virtual human body and virtual object will be described first. The position determination unit 108 executes this detection, as described above. The position detection unit 107 calculates the positions and orientations of the respective parts of the user including the hand 1, as described above. The position detection unit 107 then executes the processing for laying out virtual objects which simulate the respective parts at the calculated positions and orientations of the respective parts. Therefore, a virtual human body that simulates the hand 1 is laid out at the position and orientation of the hand 1 of the user, as a matter of course.
The position determination unit 108 executes the interference determination processing between this virtual human body that simulates the hand 1 and the virtual object. If the unit 108 determines that they interfere with each other, it specifies the position of the interference (collision point).
The plurality of stimulus generators 110 are located on the hand 1, as described above, and their mounting positions are measured in advance. Therefore, the positions of the stimulus generators 110 on the virtual human body that simulates the hand 1 can be specified.
Hence, the control unit 103 determines the drive control contents to be executed for each stimulus generator 110 using the position of the collision point and those of the respective stimulus generators 110.
Assume that stimulus generators 16 to 19 are arranged on a virtual human body 900 that simulates the hand 1, and that the stimulus generator 19 is laid out on the back side of the hand. The following description will be given under the assumption that the position of the collision point is that of the stimulus generator 16; however, practically the same description applies irrespective of the position of the collision point.
The control unit 103 calculates the distances between the position 16 of the collision point and those of the stimulus generators 16 to 19. The control unit 103 may calculate each of these distances as a rectilinear distance between two points, or may calculate them along the virtual human body 900. As a method of calculating distances along the virtual human body, the virtual human body is divided into a plurality of parts in advance, and a distance between points which extend over a plurality of parts is calculated via the joint points between the parts. The method of calculating the distances along the virtual human body 900 is as follows. The distance between the position 16 of the collision point and the stimulus generator 16 is zero. The distance between the position 16 of the collision point and the stimulus generator 17 is the rectilinear distance a between the two points. In order to calculate the distance between the position 16 of the collision point and the stimulus generator 18, a distance b1 from the position 16 of the collision point to the joint point between the palm of the hand and the thumb is calculated first. Furthermore, a distance b2 from the joint point to the stimulus generator 18 is calculated, and the distance b as the total of these distances is determined as that between the position 16 of the collision point and the stimulus generator 18. As for the distance between the position 16 of the collision point and the stimulus generator 19, if the distance is calculated in a direction that penetrates through the virtual human body 900, it is given by c.
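This via-joint distance calculation can be sketched in code as follows. The sketch below is a minimal illustration in Python; the part names, the joint coordinate, and the generator positions are hypothetical stand-ins introduced for this example only, not data structures defined by this embodiment.

```python
import math

# Hypothetical layout: each stimulus generator belongs to a named part of the
# virtual human body 900, and parts are connected at joint points (e.g., the
# joint between the palm and the thumb). Coordinates are illustrative, in mm.
JOINTS = {frozenset(("palm", "thumb")): (30.0, 0.0, 0.0)}

def body_distance(part_a, pos_a, part_b, pos_b):
    """Distance along the virtual human body: rectilinear within one part,
    via the connecting joint point when the path spans two parts."""
    if part_a == part_b:
        return math.dist(pos_a, pos_b)
    joint = JOINTS[frozenset((part_a, part_b))]
    # b = b1 + b2: collision point -> joint point, then joint point -> generator
    return math.dist(pos_a, joint) + math.dist(joint, pos_b)

# Collision at the position of stimulus generator 16 on the palm:
collision_part, collision_pos = "palm", (0.0, 0.0, 0.0)
print(body_distance(collision_part, collision_pos, "palm", (20.0, 10.0, 0.0)))   # distance a
print(body_distance(collision_part, collision_pos, "thumb", (45.0, 5.0, 0.0)))   # distance b = b1 + b2
```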
The control unit 103 executes the drive control of the stimulus generators so as to control the stimuli they generate based on the distances from the position 16 of the collision point to the respective stimulus generators. Control examples of the stimulus generators by the control unit 103 will be explained below.
<About Stimulus Control>
Assume that the position of the stimulus generator 110a on the virtual human body 301 collides against the virtual object 2. In this case, the stimulus generators 110b and 110c are closer to the position of the collision point (that of the stimulus generator 110a) in the order named.
In such situation, some control examples for the stimulus generators 110a to 110c will be described below.
In the first control example, the stimulus generators 110a to 110c begin to generate vibrations with delays after the collision time that increase with distance from the collision point. With this control, the spread of the vibration from the collision point can be expressed by the stimulus generators 110a to 110c.
Therefore, the control unit 103 executes the drive control of the stimulus generator 110a located at the collision point so that it begins to generate a vibration simultaneously with generation of an impact (at the collision time). After an elapse of a predetermined period of time, the control unit 103 executes the drive control of the stimulus generator 110b, thus making it begin to generate a vibration. After an elapse of another predetermined period of time, the control unit 103 executes the drive control of the stimulus generator 110c, thus making it begin to generate a vibration.
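A minimal sketch of this delay control follows, assuming a hypothetical propagation-speed constant; the embodiment does not prescribe a specific value.

```python
# Each generator's vibration start time is the collision time plus a delay
# that grows with its distance from the collision point. The propagation
# speed is an assumed tuning constant, not a value defined by the embodiment.
PROPAGATION_SPEED = 20.0  # assumed, in length units per second

def start_times(collision_time, distances):
    """Vibration start time for each stimulus generator."""
    return [collision_time + r / PROPAGATION_SPEED for r in distances]

# Distances of generators 110a, 110b, 110c from the collision point:
for t in start_times(0.0, [0.0, 1.5, 3.0]):
    print(f"begin vibration at t = {t * 1000:.0f} ms")
```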
In the second control example, the vibrations generated by the stimulus generators 110a to 110c are made smaller with increasing distance from the collision point (the distance when a path is defined on the surface of the hand 1).
Therefore, the control unit 103 sets a large amplitude in the stimulus generator 110a located at the collision point to make it generate a stimulus with a predetermined intensity. The control unit 103 sets an amplitude smaller than that of the stimulus generator 110a in the stimulus generator 110b to make it generate a stimulus with an intensity lower than the stimulus intensity generated by the stimulus generator 110a. The control unit 103 sets an amplitude smaller than that of the stimulus generator 110b in the stimulus generator 110c to make it generate a stimulus with an intensity lower than the stimulus intensity generated by the stimulus generator 110b.
The stimulus increase/decrease pattern can also be controlled by varying the waveform of the input signal supplied to each stimulus generator. Note that the stimulus generators 110a to 110c which receive such input signals may feed back any type of stimulus. That is, irrespective of the stimuli fed back by the stimulus generators 110a to 110c, the stimulus increase/decrease pattern of each stimulus generator can be controlled by varying the input signal waveform in this way.
Note that the waveform shapes and the numbers of vibrations described above are merely examples, and may be changed as needed.
<About Drive Control of Stimulus Generator 110>
The drive control of the respective stimulus generators 110a to 110c based on the distance relationship between the collision point and the stimulus generators 110a to 110c will be described below using simple examples.
In the first example, the stimulus intensity is controlled according to the distance. The control unit 103 calculates the distances from the position of the collision point (collision position) to those of the stimulus generators 110a to 110c. Assume that these distances decrease in the order of the stimulus generators 110b, 110a, and 110c.
Hence, the stimulus generators 110a to 110c undergo the drive control to increase the stimuli to be generated in the order of the stimulus generators 110b, 110a, and 110c. For example, when each stimulus generator comprises a vibration motor, it can apply a stronger stimulus to the human body by rotating the vibration motor faster. On the other hand, when each stimulus generator applies a stimulus to the human body by pressing against the skin surface by a pneumatic pressure, it can apply a stronger stimulus to the human body by increasing the pneumatic pressure.
That is, a stimulus intensity I (e.g., a maximum amplitude of the vibration waveform) to be generated by the stimulus generator located at a distance r from the collision position is expressed by I=f(r) using a monotone decreasing function f.
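A minimal sketch of such a monotone decreasing function follows, choosing exponential decay as one plausible form; the embodiment does not prescribe a specific f.

```python
import math

def f(r, i0=1.0, alpha=0.5):
    """Monotone decreasing intensity function I = f(r). Exponential decay is
    one plausible choice; i0 and alpha are assumed tuning parameters."""
    return i0 * math.exp(-alpha * r)

for r in (0.0, 1.0, 2.0, 4.0):
    print(f"r = {r}: I = {f(r):.3f}")
```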
Note that the form of the function f may be set as needed. In the second example, the stimulus generation timing is controlled according to the distance. The control unit 103 calculates the distances from the position of the collision point (collision position) to those of the stimulus generators 110a to 110c, and delays the stimulus generation start timing of each stimulus generator with increasing distance from the collision position. That is, a delay time t of the stimulus generation start timing of the stimulus generator located at a distance r from the collision position is expressed by t=g(r) using a monotone increasing function g.
The aforementioned methods may be used in combination. For example, the control for delaying the stimulus generation timing and that for attenuating the stimulus intensity may be executed simultaneously.
In the above description, the stimulus intensity is calculated according to the distance between the position of the collision point and the position of the stimulus generator. However, the stimulus intensity to be generated by the stimulus generator may be calculated in consideration of the impact transmission state upon collision which is measured in advance, or the intervention of the skin, bone, muscle, and the like. For example, the relation between a distance from the collision point and a vibration may be expressed as mathematical expressions or a correspondence table based on vibration transmission upon collision between the human body and physical object, which is measured in advance, thus determining the stimulus intensity. Also, transmission of a stimulus amount may be calculated based on the amounts of the skin, bone, and muscle that intervene between the collision point position and stimulus generator.
For example, using a variable s which represents the influence of the skin, a variable b which represents the influence of the bone, a variable m which represents the influence of the muscle, and a variable r which represents the distance from the collision position to the stimulus generator, a stimulus intensity I to be generated by this stimulus generator may be expressed by I=f(r, s, b, m) using the function f.
The thicknesses (distances) of the skin, bone, and muscle which intervene along the path from the collision point position to the stimulus generator are input to the respective variables, and the stimulus intensity is determined in consideration of the attenuation of vibration transmission by the respective human body components. By controlling the stimulus generators around the collision point position with components that account for impact transmission through the human body, the impact feeling can be expressed more faithfully. The above description has been given using the stimulus intensity; likewise, the delay time period of generation of a stimulus can also be determined.
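A minimal sketch of such a calculation follows, using assumed attenuation coefficients for the distance and for each tissue component; the coefficients and thicknesses below are illustrative, not measured values.

```python
import math

def intensity(r, s, b, m, i0=1.0):
    """I = f(r, s, b, m): a hypothetical attenuation model in which the
    thicknesses of skin (s), bone (b), and muscle (m) along the path each
    damp the vibration with their own assumed coefficient, on top of the
    decay with distance r."""
    K_R, K_S, K_B, K_M = 0.03, 0.05, 0.2, 0.1  # assumed attenuation coefficients
    return i0 * math.exp(-(K_R * r + K_S * s + K_B * b + K_M * m))

# Path with 2 mm skin, 5 mm bone, 8 mm muscle at distance 30 mm (illustrative):
print(f"I = {intensity(r=30.0, s=2.0, b=5.0, m=8.0):.3f}")
```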
In the aforementioned example, the drive control of the stimulus generators is executed based on the distances from respective stimulus generators arranged on the hand 1 to the collision point between the virtual human body that simulates the hand 1, and the virtual object. Therefore, when the virtual object collides against a virtual human body that simulates another part (e.g., a leg), the drive control of stimulus generators is executed based on the distances from this collision point to the stimulus generators arranged on the leg. Also, all the stimulus generators arranged on the hand 1 need not always be driven, and only stimulus generators within a predetermined distance range from the collision point may undergo the drive control.
<About General Processing to be Executed by Information Processing Apparatus 105>
As described above, the information processing apparatus 105 executes processing for presenting a virtual space image to the HMD 130, and also processing for applying, to the user, by using the stimulus generator 110, stimuli based on collision between the virtual human body of the user who wears this HMD 130 on the head, and the virtual object.
The CPU 1401 checks in step S1501 if collision has occurred between the virtual human body corresponding to each part of the human body of the user, and the virtual object. This processing corresponds to that to be executed by the position determination unit 108 in the above description. If no collision has occurred, the processing ends; if collision has occurred, the process advances to step S1502. The drive processing of the stimulus generators in steps S1502 and S1503 is then executed as follows.
In step S1502, the CPU 1401 calculates the distances between the position of the collision point and the plurality of stimulus generators attached to the collided part. In the above example, the CPU 1401 calculates the distances between the positions, on the virtual human body that simulates the hand 1, of the respective stimulus generators arranged on the hand 1, and the position of the collision point on the virtual human body that simulates the hand 1.
In step S1503, the CPU 1401 executes the drive control of the respective stimulus generators to feed back stimuli according to the distances. This control delays the stimulation start timing or weakens the stimulus intensity with increasing distance from the collision point in the above example.
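The overall flow of steps S1501 to S1503 can be sketched as follows, with the interference check and the generator layout stubbed by illustrative data, and f and g standing for the distance-based intensity and delay rules described above.

```python
import math

# Minimal sketch of steps S1501 to S1503. The interference check and the
# generator layout are stubs with illustrative data, not the actual units.
GENERATORS = {"hand": {"110a": 0.0, "110b": 1.5, "110c": 3.0}}  # distance to collision point

def detect_collision():
    """S1501: interference determination (stubbed to a fixed hit on the hand)."""
    return "hand"

def f(r):
    return math.exp(-0.5 * r)   # intensity decays with distance (assumed form)

def g(r):
    return r / 20.0             # start delay grows with distance (assumed form)

def process_frame():
    part = detect_collision()                         # S1501
    if part is None:
        return                                        # no collision: nothing to drive
    for name, r in GENERATORS[part].items():          # S1502: distances to generators
        print(f"{name}: delay = {g(r) * 1000:.0f} ms, intensity = {f(r):.2f}")  # S1503

process_frame()
```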
The spread range of the stimulus to be applied to the human body by the drive control of the stimulus generators will be described below. When the stimulus intensity is changed based on the positional relationship between the collision point position and the stimulus generators, the stimulus intensity weakens with increasing distance from the collision point position. A stimulus generator separated by a given distance or more generates a stimulus equal to or weaker than the minimum intensity perceivable by the human body. A stimulus generator still farther away from the collision point ceases to operate, since its stimulus intensity becomes practically zero. In this manner, when the stimulus intensity is changed based on the positional relationship between the collision point position and the stimulus generators, the range of stimulus generators which are controlled to generate stimuli upon collision is naturally determined.
On the other hand, as another control method, the operation range of the stimulus generators may be determined in advance. For example, when the collision point position with the virtual object falls within the range of the hand, at least the stimulus generators attached to the hand may be operated. Even a stimulus generator which would otherwise be set to a stimulus equal to or weaker than the intensity perceivable by the human body is controlled to generate a stimulus with a predetermined amount, as long as it is attached within the range of the hand.
Conversely, it may be determined in advance to operate only the stimulus generators within a predetermined range. For example, when the collision point position with the virtual object falls within the range of the hand, only the stimulus generators within the range of the hand are operated, and the surrounding stimulus generators outside the range are not driven. In this case, when the collision point position falls within the range of the hand, the stimulus generators attached to the arm are not operated.
As described above, the arrangement that simulates the impact transmission upon collision has been explained. However, when the collision velocity or acceleration between the virtual human body and virtual object is small, the aforementioned control may be skipped. When the virtual human body and virtual object collide slowly against each other, the impact force is also weak, so the surrounding stimulus generators need not always be driven. For this reason, a threshold velocity or acceleration of the virtual human body or virtual object is set in advance; when they collide against each other at that value or more, the surrounding stimulus generators are also driven to simulate impact feeling, and in case of collision below the predetermined velocity or acceleration, only one stimulus generator at or near the collision point position is operated.
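A minimal sketch of this threshold control follows, with an assumed threshold value.

```python
def generators_to_drive(collision_speed, distances, threshold=0.5):
    """Drive all surrounding generators only for collisions at or above an
    assumed threshold speed; for slower contact, drive only the generator
    nearest the collision point."""
    nearest = min(range(len(distances)), key=distances.__getitem__)
    if collision_speed >= threshold:
        return list(range(len(distances)))   # fast impact: simulate the spread
    return [nearest]                         # slow contact: single generator

print(generators_to_drive(0.8, [0.0, 1.5, 3.0]))  # -> [0, 1, 2]
print(generators_to_drive(0.2, [0.0, 1.5, 3.0]))  # -> [0]
```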
After the plurality of stimulus generators around the collision point position are driven to simulate an impact, if the virtual human body and virtual object are kept in contact with each other, a given stimulus is generated to make the user perceive the contact point position.
This embodiment is suitably applied to a technique which feeds back a contact feeling on the surface of a virtual object based on the positional relationship between the actual human body position and the virtual object which virtually exists on the physical space, in combination with a method of detecting the human body position. As the method of detecting the position and orientation of the human body (part), a method using markers and cameras, or a method of obtaining the human body shape and position by applying image processing to video images captured by cameras, may be used. In addition, any other method may be used, such as a method using a magnetic sensor or an acceleration or angular velocity sensor, or a method of acquiring the hand shape using a data glove based on an optical fiber or the like. With the aforementioned measurement methods, the motion of the human body can be reflected in the virtual human body.
As elements used to determine the stimulus intensity to be generated by the stimulus generator, characteristics such as the velocity or acceleration of the virtual human body or virtual object upon collision, the hardness of the object, and the like may be added. For example, when the virtual human body moves fast upon collision against the virtual object, the stimulus generator is driven to generate a strong stimulus; when the virtual human body moves slowly, the stimulus generator is driven to generate a weak stimulus. In this case, the velocity or acceleration of the virtual human body may be calculated from the method of detecting the human body position, or may be detected by attaching a velocity or acceleration sensor to each part and using its value.
When the collided virtual object is hard as its physical property, the stimulus generator is driven to generate a strong stimulus; when the virtual object is soft, the stimulus generator is driven to generate a weak stimulus. The stimulus intensities determined in this way may be implemented by applying biases to the plurality of stimulus generators, or such an implementation method may be used only when the stimulus intensity of the stimulus generator located at the collision point position (or closest to that position) is determined. In this case, parameters associated with the hardness of the virtual object must be determined in advance and saved in the external storage device 1406.
As the parameters of the stimulus to be changed depending on characteristics such as the velocity or acceleration of collision or the hardness of the virtual object, not only the intensity but also the stimulus generation timing may be changed. For example, when the virtual human body collides against the virtual object at a high velocity, or when it collides against a hard virtual object, the difference in the stimulus generation start timings among the respective stimulus generators may be set to a relatively small value.
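A minimal sketch combining these scalings follows, with assumed coefficients; faster or harder impacts yield stronger stimuli and smaller timing differences.

```python
import math

def scaled_stimulus(r, speed, hardness):
    """Hypothetical scaling: the distance-based intensity is multiplied by the
    collision speed and the object hardness, and faster or harder impacts
    also compress the per-generator start-timing differences (all
    coefficients are assumed values)."""
    intensity = math.exp(-0.5 * r) * speed * hardness
    delay = (r / 20.0) / max(speed * hardness, 1e-6)  # stiffer/faster -> smaller delays
    return intensity, delay

print(scaled_stimulus(r=1.5, speed=1.2, hardness=0.9))  # fast impact on a hard object
print(scaled_stimulus(r=1.5, speed=0.4, hardness=0.3))  # slow impact on a soft object
```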
Furthermore, in consideration of the fact that the human body gets used to the stimulus, the stimulus intensity may be changed. For example, when the virtual human body collides against the virtual object many times within a short period of time, and the stimulus generators generate stimuli successively, the human body gets used to the stimuli, and does not feel the stimuli sufficiently. In such case, even when the virtual human body collides against the virtual object at the same velocity or acceleration as the previous collision, the stimulus intensity is enhanced, so that the human body feels the stimulus more obviously.
The aforementioned control methods may be used solely or in combination. For example, the control for delaying the stimulus generation timing and that for attenuating the stimulus intensity described above may be executed simultaneously.
When a change in stimulus due to the influences of the skin, bone, muscle, and the like is taken into consideration, as described above, calculations may be made based on the amounts of the skin, bone, muscle, and the like which exist along a path to the respective stimulus generators. In this case, models of the skin, bone, muscle, and the like and human body shape must be prepared in advance.
Also, a stimulus determination method unique to a specific human body part may be set. For example, at a terminal part of the human body such as a finger, the entire finger vibrates largely due to impact transmission upon collision. In order to feed back a stimulus that simulates not only vibration transmission on the skin but also the influence of such an impact, a vibration amount larger than the stimulus calculated from the collision position may be set in the stimulus generator attached to the fingertip.
As another method of calculating the positional relationship between the collision point position and the respective stimulus generators, the virtual object of each part of the human body may be divided into a plurality of regions in advance, and the positional relationship between the collision point position of the virtual object and surrounding stimulus generators may be determined based on the divided regions.
Numerical values “1” to “3” in the cells represent the stimulus intensities as relative values. That is, when a stimulus generator exists in a cell within the 3×3 grid around the collision point position, the stimulus intensity to be generated by this stimulus generator is set to a value corresponding to “3”.
The method of describing relative values in the correspondence table is effectively used for a case in which the stimulus to be generated by the stimulus generator at or near the collision point position is to be changed based on the velocity or acceleration upon collision. In this example, three levels of relative values “1” to “3” are used. However, the present invention is not limited to such specific values. In place of the relative values, practical stimulus amounts such as accelerations and the like may be set as absolute values. Also, the values set in the correspondence tables are not limited to the stimulus intensities, and stimulus generation delay times, frequency components in an input signal, stimulus waveforms, and the like may be used. The values of the correspondence table may be dynamically changed depending on the velocity or acceleration upon collision or the collision position in place of using identical values constantly.
For example, assume that collision against the virtual object has occurred at the position of a cell 20. In this case, the position of a stimulus generator 14 closest to the collision point position corresponds to the grid 30, and the intensity of the stimulus to be generated by the stimulus generator 14 is set to a value corresponding to “3”. Since a stimulus generator 15 is located two cells above the stimulus generator 14, it corresponds to a grid two grids above the grid 30; therefore, the intensity of the stimulus to be generated by the stimulus generator 15 is set to a value corresponding to “2”.
In this manner, the virtual human body that simulates a given part is divided into a plurality of cells, and the relative positional relationship between the collision point and respective stimulus generators is determined for each cell. Then, using the correspondence table that describes the stimulus intensities near the position of the collision point, the stimulus intensity to be generated by the stimulus generator near the collision point is determined.
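One way to realize such a correspondence-table lookup is sketched below; the ring widths, cell coordinates, and the mapping of distances to the relative values “1” to “3” are illustrative choices consistent with the example above.

```python
def relative_intensity(collision_cell, generator_cell):
    """Look up a relative intensity from the generator's cell offset relative
    to the collision cell (illustrative table: 3x3 block -> "3", next ring
    -> "2", everywhere else -> "1")."""
    d = max(abs(generator_cell[0] - collision_cell[0]),   # Chebyshev cell distance
            abs(generator_cell[1] - collision_cell[1]))
    return {0: 3, 1: 3, 2: 2}.get(d, 1)

collision = (5, 4)  # hypothetical cell coordinates of cell 20
print(relative_intensity(collision, (5, 4)))  # generator 14 at the collision cell -> 3
print(relative_intensity(collision, (3, 4)))  # generator 15 two cells above -> 2
```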
The method of dividing the human body surface into a plurality of regions in advance has been explained. Alternatively, the division of regions may be done at the time of collision, and regions to be divided may be dynamically changed in accordance with the collision position, the velocity or acceleration of collision, the direction of collision, and the like.
This embodiment will explain a case wherein the distances between the collision point and stimulus generators change in correspondence with a change in shape of the human body.
In the clasping and unclasping states of the hand, the hand has quite different shapes. For example, when the hand is clasped, the fingertip is in contact with the palm of the hand. If the distance from a collision point on the palm to a stimulus generator 1251 attached to the fingertip is calculated along the natural (unclasped) shape of the hand, a long distance d is obtained. However, in practice, since the stimulus from the collision point is transmitted to the position of the stimulus generator 1251 via the palm of the hand, if the stimulus generator 1251 is controlled using this distance d, the stimulus generation timing may be too late or the stimulus may be too weak. Hence, in such a case, it is desirable to feed back an impact which is directly transmitted from the palm of the hand to the fingertip.
Thus, when it is detected that human body parts are in contact with each other, the contact point is regarded as a continuous shape, and the distance from the collision point to the stimulus generator 1251 is calculated along a path passing through the contact point, yielding a shorter distance.
In the unclasping state of the hand, i.e., when the fingers are not in contact with another human body part such as the palm of the hand, the distance may be calculated along the natural shape of the human body as in the first embodiment. Switching of this control can be attained by determining the human body state using the position detection unit 107. More specifically, the states of the human body parts are checked, and if they are in contact with each other, the distance from the collision point position to the stimulus generator is calculated by assuming the contact point as a continuous shape, as described above.
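A minimal sketch of this switching follows, in which a detected contact point is treated as a shortcut on the transmission path; the along-body distance d and the coordinates below are illustrative values.

```python
import math

def stimulus_distance(d_along_body, collision, generator, contact_points):
    """Take the shorter of the distance d along the natural (unclasped) shape
    of the hand and any path through a detected contact point between body
    parts, which acts as a shortcut for the transmitted impact."""
    best = d_along_body
    for c in contact_points:
        best = min(best, math.dist(collision, c) + math.dist(c, generator))
    return best

# Clasped hand (illustrative 2-D coordinates): the path along the unfolded
# hand from the palm collision point to the fingertip generator 1251 is long
# (d = 18), but the fingertip rests on the palm near the collision point.
print(stimulus_distance(18.0, (0.0, 0.0), (2.0, 1.0), [(1.0, 0.5)]))  # ~2.24
```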
In the above description, the case of a change in shape of the hand has been explained. This embodiment is not limited to the hand, and can be applied to any other parts which can come into contact with each other, such as the forearm and upper arm, or the arm and body.
This embodiment will explain a case wherein the surface direction of a virtual object at a collision point is presented to the user by driving a plurality of stimulus generators based on the positional relationship between a virtual human body and the virtual object upon collision between the virtual human body and virtual object.
For example, the virtual human body 161 may collide against a horizontal portion of the virtual object 162, or against an inclined portion thereof. In the latter case, the distances g1 to g3 from the surface of the virtual object 162 at the collision point to the stimulus generators 184 to 186 differ from one another according to the surface direction.
As the virtual human body that simulates the human body, or as the virtual object, volume data having no concept of the surface direction, e.g., voxels, may be used. In such a case, for example, the known Marching Cubes method is applied to the volume data to reconstruct an isosurface, thus detecting the surface direction.
Using the distances g1 to g3 calculated in this way, control to delay the stimulation start timings of the corresponding stimulus generators 184 to 186, to attenuate their stimulus intensities, and so forth can be executed in proportion to the values of the distances g1 to g3. With the above control, a feeling that the surface of the virtual object 162 has passed along the surface of the user's hand can be obtained, thus feeding back the surface direction to the user.
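A minimal sketch of calculating such per-generator distances to the colliding surface follows, assuming the surface direction is available as a plane normal (e.g., reconstructed as described above); the coordinates are illustrative.

```python
import math

def surface_distances(point_on_surface, normal, generator_positions):
    """Perpendicular distance from each stimulus generator to the colliding
    surface plane. An inclined surface yields unequal distances g1 to g3,
    which then drive per-generator delays or attenuations."""
    n = math.sqrt(sum(c * c for c in normal))
    unit = [c / n for c in normal]
    return [abs(sum(u * (p - q) for u, p, q in zip(unit, gen, point_on_surface)))
            for gen in generator_positions]

# Generators 184 to 186 spaced along the hand (illustrative coordinates),
# against a surface tilted in the x direction:
print(surface_distances((0, 0, 0), (0.5, 0, 1.0), [(0, 0, 0), (2, 0, 0), (4, 0, 0)]))
```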
This embodiment will explain a case wherein the shape of a collided virtual object is fed back to the user by driving a plurality of stimulus generators based on the positional relationship between the virtual human body and virtual object when the virtual human body collides against the virtual object.
When the virtual human body 191 collides against the virtual object 192, not only the surface direction of the virtual object may be fed back as in the third embodiment, but also the detailed object shape may be presented. In this embodiment, the distance from each stimulus generator to the surface of the virtual object 192 is calculated, and the respective stimulus generators are driven according to the calculated distances, thus feeding back the shape of the collided virtual object to the user.
Each distance calculation method may be changed as needed depending on information to be presented by a stimulus, the type of stimulus generator, the position of the stimulus generator, and the like.
In the aforementioned embodiments, the control at the time of contact between the virtual human body and virtual object has been explained. In this embodiment, when the virtual human body breaks into the virtual object, the respective stimulus generators are driven based on the positional relationship between the stimulus generators and the virtual object, thereby feeding back to the user a direction to break away from the interference between the virtual human body and virtual object caused by such a break-in.
As “distance” used in this embodiment, various “distances” described in the first to fourth embodiments can be used. Some examples of the processing for calculating the “distance” which is applicable to this embodiment will be described below.
In the first example, rectilinear distances j1 to j3 from the position 2203 of the interference to the stimulus generators 2204 to 2206 are calculated. Then, the control for the respective stimulus generators 2204 to 2206 is executed using the distances j1 to j3.
In the second example, the lengths (distances) k1 to k3 of line segments drawn perpendicularly from the positions of the stimulus generators 2305 to 2307 onto the surface 2304 of the virtual object are calculated. Then, the control for the respective stimulus generators 2305 to 2307 is executed using the distances k1 to k3, respectively.
In the third example, lines are extended from the positions of the stimulus generators 2404 to 2406 in a direction along a vector from the point 2402 toward the point 2401, and the distances to the positions where these lines intersect the surface of the virtual object 2499 are calculated as l1, l2, and l3. Then, the control for the respective stimulus generators 2404 to 2406 is executed using the distances l1 to l3.
In the fourth example, the shortest distances from the positions of the stimulus generators 253 to 255 to the surface of the virtual object 252 are calculated as m3, m1, and m2, respectively. Then, the control for the respective stimulus generators 253 to 255 is executed using the distances m3, m1, and m2.
In this way, control to attenuate the stimulus intensities in proportion to the magnitudes of the distances calculated by any of the above examples can be executed, thereby making the user perceive the direction to break away from the interference.
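A minimal sketch of this attenuation control follows, choosing exponential attenuation as one plausible form; the distances and coefficients are illustrative.

```python
import math

def breakaway_intensities(depths, i0=1.0, alpha=0.4):
    """Attenuate each generator's stimulus in proportion to its distance to
    the surface of the virtual object, so that the gradient of intensities
    across the generators points along the break-away direction (exponential
    attenuation is an assumed choice)."""
    return [i0 * math.exp(-alpha * m) for m in depths]

# Shortest distances m3, m1, m2 of generators 253 to 255 to the surface:
for i in breakaway_intensities([0.5, 2.0, 4.0]):
    print(f"intensity = {i:.2f}")  # strongest at the generator nearest the surface
```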
Each of the aforementioned distance calculation methods may be applied to only the stimulus generators located at positions of interference between the virtual human body and virtual object, and those at non-interference positions may be controlled not to generate stimuli. The aforementioned control can aid the user in breaking away when the virtual human body interferes with the virtual object.
The respective distance calculation methods may be changed as needed depending on the type of stimulus generator, the position of the stimulus generator, and the like.
Among the respective embodiments, the second embodiment can be practiced simultaneously with any of the other embodiments, or switched according to the state of the human body and the positional relationship between the human body and virtual object. The first, third, and fourth embodiments cannot be practiced at the same time, but they may be switched depending on the contents of the stimuli to be fed back. For example, upon feeding back the spread of stimuli, the first embodiment is used; upon presenting the surface direction of the virtual object, the third embodiment is used; and upon presenting the shape of the virtual object, the fourth embodiment is used.
Alternatively, a use method that switches the embodiments as needed according to the relationship between the human body and virtual object is also effective. For example, the first embodiment is normally used to express a feeling of interference with higher reality while observing the virtual object. When the virtual object that interferes with the human body is occluded by another virtual object, the embodiment to be used is switched to the third or fourth embodiment. Then, with the method of recognizing the surface direction or shape of the interfering virtual object, workability verification or the like using a virtual environment can be effectively done.
The fifth embodiment can be practiced simultaneously with the first, third, or fourth embodiment. However, since the stimulus intensity becomes enhanced more than necessary due to the superposition of stimuli, a method of switching from the first, third, or fourth embodiment to the fifth embodiment when the degree of interference between the human body and virtual object becomes large is desirable.
The objects of the present invention are also achieved as follows. That is, a recording medium (or storage medium), which records a program code of software (computer program) that can implement the functions of the aforementioned embodiments, is supplied to a system or apparatus. A computer (or a CPU or MPU) of the system or apparatus reads out and executes the program code stored in the recording medium. In this case, the program code itself read out from the recording medium implements the functions of the aforementioned embodiments, and the recording medium (computer-readable recording medium) which stores the program code constitutes the present invention.
When the computer executes the readout program code, an operating system (OS) or the like, which runs on the computer, executes some or all actual processes based on an instruction of the program code. The present invention includes a case wherein the functions of the aforementioned embodiments are implemented by these processes.
Furthermore, assume that the program code read out from the recording medium is written in a memory equipped on a function expansion card or a function expansion unit, which is inserted in or connected to the computer. The present invention also includes a case wherein the functions of the aforementioned embodiments may be implemented when a CPU or the like arranged in the expansion card or unit then executes some or all of actual processes based on an instruction of the program code.
When the present invention is applied to the recording medium, that recording medium stores program codes corresponding to the aforementioned flowcharts.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Applications No. 2006-288042 filed Oct. 23, 2006 and No. 2007-106367 filed Apr. 13, 2007 which are hereby incorporated by reference herein in their entirety.