This invention relates to a head mounted display for virtual space (virtual reality), a system using the head mounted display, and a display method in the head mounted display. Note that the head mounted display for virtual space may hereinafter be referred to as a VRHMD.
Virtual space (hereinafter referred to as VR space) is used in various fields such as gaming, education, and tourism. A VRHMD is used to experience this VR space. A VRHMD is, for example, a device that is worn on the head and displays virtual space images on a goggle-like display. As an example, the device is equipped with a camera, multiple sensors such as a sensor that measures the distance to an object and a position measurement sensor, a CPU that performs image processing, and a battery. When wearing a VRHMD to experience a VR space, the wearer may move freely within the VR space, depending on the content of the experience. However, the actual space where the wearer is located has walls, desks, and various other objects (obstacles, etc.) that limit where the wearer can move. Therefore, for safety reasons, when a VRHMD wearer approaches the boundary of the area within which he or she can move safely while avoiding these obstacles, the limit of the wearer's actions is made recognizable, for example by superimposing the boundary on the VRHMD's display.
When the obstacles are fixed, it is useful to indicate on the display that the VRHMD wearer is approaching the boundary of the safe activity area described above. However, it is also possible that, for example, a person, an animal such as a dog, or an object such as a ball intrudes into this safe activity area. From this perspective, technologies are known that superimpose or otherwise display an intruding person, animal, or the like on the VRHMD display when it intrudes into the safe activity area.
When wearing a VRHMD and experiencing a VR space, the immersive feeling may make the experiencer want to grasp the surrounding situation, especially the situation outside the safe activity area. There are several reasons for this.
In the conventional example, if a person or the like intrudes into the safe activity area of the VRHMD wearer, the person can be superimposed on the VRHMD display, but if a person appears outside the safe activity area, the situation cannot be grasped.
Therefore, the purpose of this invention is to provide a VRHMD, and a system equipped with a VRHMD, that allow the wearer to appropriately grasp the surrounding situation even while experiencing a VR space, by determining whether the surrounding situation, even outside the wearer's safe activity area, is one the wearer wants to grasp, and displaying it on the display of the VRHMD according to the result. A further purpose is to provide a display method for this display.
According to a first aspect of the present invention, the following head mounted display is provided. That is, the head mounted display is a head mounted display for virtual space. The head mounted display includes a display, a camera, a distance detector, an image generator, a memory, and a controller. The display displays images. The camera captures the real space. The distance detector detects the distance to an object in the real space. The image generator generates the image to be displayed on the display. The memory stores the type condition and distance condition of objects to be displayed. The controller recognizes the type of an object from the image captured by the camera, extracts objects that match the type condition and the distance condition, superimposes an image showing each extracted object on the image of the virtual space, and displays the result on the display.
According to a second aspect of the present invention, the following head mounted display system is provided. That is, the head mounted display system includes a camera that captures the real space, and a head mounted display for virtual space. The head mounted display includes a display that displays images, a distance detector that detects the distance to an object in the real space, an image generator that generates the image to be displayed on the display, a memory that stores the type condition and distance condition of objects to be displayed, and a controller. The controller recognizes the type of an object from the image captured by the camera, extracts objects that match the type condition and the distance condition, superimposes an image showing each extracted object on the image of the virtual space, and displays the result on the display.
According to a third aspect of the present invention, the following display method for a head mounted display is provided. This display method is performed using a head mounted display for virtual space. The method includes a memory step that stores the type condition and distance condition of objects to be displayed, an image generation step that generates an image drawing the virtual space, a shooting step that captures the real space around the head mounted display, a distance detection step that detects the distance to an object in the real space, a recognition step that recognizes the type of the object from the captured image, an extraction step that extracts, from the recognized objects, objects that match the type condition and the distance condition, and a superimposed display step that superimposes an image showing each extracted object on the image of the virtual space and displays the result.
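A minimal sketch of the extraction logic common to these three aspects can be written as follows; the Python class and field names here are illustrative assumptions, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # type recognized from the camera image, e.g. "person"
    distance_m: float  # distance measured by the distance detector

def extract_matching(objects, type_conditions, max_distance_m):
    """Keep only objects whose recognized type is among the stored type
    conditions and whose measured distance satisfies the distance condition."""
    return [o for o in objects
            if o.kind in type_conditions and o.distance_m <= max_distance_m]

detected = [DetectedObject("person", 4.0),
            DetectedObject("chair", 1.0),
            DetectedObject("dog", 12.0)]
# Only the person matches both the type condition and the distance condition:
# "chair" is not a registered type, and the dog is beyond the distance limit.
matches = extract_matching(detected, {"person", "dog"}, max_distance_m=10.0)
```

In the described display, the images of the objects in `matches` would then be superimposed on the VR space image.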
According to the present invention, there are provided a VRHMD (head mounted display for virtual space), and a system equipped with a VRHMD, that allow the wearer to appropriately grasp the surrounding situation even while experiencing a VR space, by determining whether the surrounding situation, even outside the wearer's safe activity area, is one the wearer wants to grasp, and displaying it on the display of the VRHMD according to the result. A display method for this display is also provided.
Hereinafter, examples of embodiments of the invention will be described using the drawings. The same symbols are applied to similar configurations throughout the drawings, and duplicate explanations may be omitted. According to an embodiment, an HMD (Head Mounted Display) is provided that allows the wearer to appropriately grasp the surrounding situation even outside the safe activity area. As an example, this can contribute to Goal 9 (Industry, Innovation and Infrastructure) of the Sustainable Development Goals (SDGs) proposed by the United Nations.
The first embodiment will be described with reference to
As shown in
The wearer of VRHMD 1 experiences VR space in the real space as shown in
However, in the real space where the wearer is, as shown in
Next, with reference to
The control circuitry 104 can be configured using, as an example, the main processor 2, RAM (random access memory) 141, ROM (read only memory) 142, and flash memory 143 that stores initial setup information and other data, and can be configured to include a controller and a memory. The main processor 2 uses the programs and data stored in the ROM 142 and the flash memory 143 and the output data of each block (105 to 108) to control the operation of the VRHMD 1 and to perform the various prescribed processes related to the invention.
The sensor 105 can be configured with, as an example, a GPS receiver sensor 151 that can be used to acquire location information, a geomagnetic sensor 152, a distance sensor 153 that can detect the distance to an object, an acceleration sensor 154, a gyro sensor 155, and a temperature sensor 156, and can be used to grasp data such as the wearer's condition and the position, size, and temperature of surrounding objects. However, the sensors enumerated here are examples; it is sufficient to be able to perform the prescribed processes, and the enumerated sensors may be omitted as appropriate or other types of sensors may be included.
The image processor 107 is used to generate and display images and can be configured using, as an example, camera 200, VR space image generator 195 (in
The sound processor 108 can be configured using, as an example, a microphone 181, a codec 182 for processing sound signals, and a speaker 183. The microphone 181 may be provided as appropriate and, as an example, may be provided so that the wearer's voice is input. The microphone 181 may be provided so that external sound may be input when worn. The speaker 183 may be provided, as an example, so as to be adjacent to the wearer's ear when worn.
The communication processor 106 can be configured using, as an example, a wireless LAN interface 161 and a short distance communication interface 162. The wireless LAN interface 161 is used as a communication interface for wireless LAN communication, and the short distance communication interface 162 is used as a communication interface for short distance communication. Note that, as the short distance communication interface, for example, Bluetooth (registered trademark) can be used.
Camera 200 captures images of 360° around the wearer. Here, an example of the configuration of the camera 200 and an example of its operation will be explained using
As shown in
In other words, as indicated by arrow 8 in
Next, the setting of the safe activity area is explained. As described above using
In this embodiment, as shown in
In the above description, a 360° ambient image is captured by the camera 200 and that image is used, but the boundary 100 may also be set on an image that is not a 360° ambient image, for example with a camera that captures only the area in front of the wearer. In this case, objects around the wearer may be detected continually during the VR space experience, and a boundary 100 that enables avoidance of the objects may be set automatically each time. For example, if a VRHMD with a camera that photographs only the front is used, a boundary 100 that enables avoidance of the objects may be set each time the head moves and the real space image in front changes.
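The automatic setting of the boundary 100 from detected obstacle distances can be sketched, under the simplifying assumptions of a circular boundary and an arbitrary safety margin (both illustrative, not from the specification), as follows:

```python
def set_boundary_radius(obstacle_distances_m, margin_m=0.5, default_m=2.0):
    """Automatically choose a circular safe-activity boundary (boundary 100):
    small enough that every detected obstacle stays outside it, with a
    safety margin subtracted. Returns the boundary radius in meters."""
    if not obstacle_distances_m:
        # No obstacles detected: fall back to a default radius.
        return default_m
    radius = min(obstacle_distances_m) - margin_m
    return max(radius, 0.0)

# Obstacles (e.g. wall, desk, chair) measured at 3.0 m, 1.8 m, and 2.5 m:
# the nearest obstacle (1.8 m) minus the 0.5 m margin gives a 1.3 m boundary.
r = set_boundary_radius([3.0, 1.8, 2.5])
```

In practice the boundary would follow the room's shape rather than a circle; the sketch only shows the margin-based idea.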
Next, with reference to
First, the user wears the VRHMD 1 on the head. Then, the VRHMD 1 starts the initial setup for experiencing the VR space (S1). Note that, this process may be started automatically after the VRHMD 1 is worn, or this process may be started by inputting instructions from the user using an appropriate input device.
Next, the user sets the types of objects to be grasped during the VR space experience, e.g., a person, an animal, and a ringing telephone (S2). In this setting, the number of objects may also be set; for example, if the user is experiencing the VR space among a large number of people and there are more than the set number of people, the setting may be made not to grasp persons. It is also possible to set the VRHMD to grasp only a specific person by utilizing appropriate face recognition technology. Note that the set information is stored in the memory.
Next, the surroundings of the wearer are captured by the camera 200, and a 360° image is created (S3). The VRHMD 1 identifies objects (obstacles) from the created image, detects the position of each obstacle, its distance to the wearer, its size, and so on using the data obtained by the sensor 105 and the like, and stores the identified obstacles together with their position, distance to the wearer, size, and other data (S4).
The VRHMD 1 acquires relative positional information of the objects, based on the positions and distances of objects such as the chair 4, desk 5, person 20, and wall 3 in the real space obtained in S4 (S5). The VRHMD 1 then automatically sets a boundary 100 that can avoid contact with the objects (obstacles), based on the data obtained in S4 and S5 (S6).
The VRHMD 1 superimposes the set boundary 100 on the image of the real space captured by the camera 200 and displays it on the display 130. Here, the wearer looks at the image output to the display 130 and confirms whether the boundary 100 is appropriate (S7).
If the confirmation result of S7 is OK, VRHMD 1 stores the location information of the boundary 100 and creates a VR space image (S8). The created VR space image is then displayed on the display 130 as shown in
After the VR space image is created, the initial setup is completed (S9).
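The initial-setup flow S1 to S9 above can be sketched as the following orchestration; every method name is a hypothetical stand-in for the processing block described, and the recorder class exists only so the sequence can be exercised:

```python
class SetupRecorder:
    """Hypothetical stand-in for the VRHMD that records which setup
    steps were invoked, in order, and reports success for each."""
    def __init__(self):
        self.steps = []
    def __getattr__(self, name):
        def step(*args, **kwargs):
            self.steps.append(name)
            return True
        return step

def initial_setup(hmd):
    """Sketch of the initial-setup flow S1 to S9 (method names are assumed)."""
    hmd.start_setup()                                                   # S1
    hmd.store_grasp_targets(["person", "animal", "ringing telephone"])  # S2
    image = hmd.capture_360()                                           # S3
    obstacles = hmd.detect_obstacles(image)                             # S4
    positions = hmd.relative_positions(obstacles)                       # S5
    boundary = hmd.set_boundary(obstacles, positions)                   # S6
    if hmd.confirm_boundary(boundary):                                  # S7
        hmd.store_boundary(boundary)
        hmd.create_vr_image()                                           # S8
    hmd.finish_setup()                                                  # S9

rec = SetupRecorder()
initial_setup(rec)
```

The confirmation in S7 is modeled as a boolean; in the described device it is the wearer checking the superimposed boundary on the display 130.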
Next, an example of a display method related to the invention will be described. One of the purposes of the present invention is to enable the wearer, during the VR space experience, to grasp the surrounding situation, especially the situation that exists outside the boundary 100 of the safe activity area, only when necessary, while compromising the immersive experience as little as possible.
Camera 200 continues to capture 360° ambient images as shown in
Note that if a new object that did not exist at the time of the initial setup appears inside the boundary 100 in the ambient image, safe movement and operation will be hindered. For this reason, the VRHMD 1, regardless of the type of the new object, superimposes the captured image of the object on the VR space image and displays it.
As explained above, when the VRHMD 1 identifies that a set object has newly appeared outside the boundary 100, it superimposes the captured image of the object on the VR space image and displays it. This enables the wearer to grasp the external situation that he or she may wish to be aware of. On the other hand, if the object is identified as one that has not been set, it is not displayed as long as it does not interfere with safe operation, so the sense of immersion in the VR space is not compromised. Thus, this embodiment provides a VRHMD that can display with an appropriate balance between understanding of the surrounding situation and the sense of immersion, which are in a trade-off relationship.
Next,
When the VR space experience is started (S10), the VRHMD 1 generates the VR space image by the VR space image generator 195 of the image processor 107 (S11). Here, the VR space image is generated by the same generation method as in S8 above.
When the VRHMD 1 is used, the camera 200 captures the wearer's surroundings and creates a 360° ambient image (S12). The VRHMD 1 then detects objects from the created 360° ambient image (S13). The VRHMD 1 identifies the location of each detected object using sensor functions such as the sensor 105 as appropriate and compares it with the data detected in S4 described above. If a new object is detected, the object is stored, and if the object is moving, its direction is detected (S14). Note that the direction in which an object is moving can be detected, as an example, using the captured images (e.g., by comparing captured images taken at short time intervals).
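The direction detection mentioned for S14, comparing captured images taken a short time apart, can be sketched as follows; this simplified illustration works in image-plane coordinates, and the labels are assumptions rather than terms from the specification:

```python
def movement_direction(prev_xy, curr_xy, eps=1e-6):
    """Estimate the direction an object is moving from its image positions
    in two captures taken at a short time interval (as described for S14).
    Coordinates are (x, y) pixels with y growing downward."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    if abs(dx) < eps and abs(dy) < eps:
        return "stationary"
    # Report the dominant axis of motion.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Object centroid moved from (100, 80) to (130, 85): mostly rightward motion.
d = movement_direction((100, 80), (130, 85))
```

A real implementation would track the object across frames (e.g. by matching detections) before comparing centroids; the sketch assumes that association is already done.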
The VRHMD 1 identifies whether the object detected in S14, including any moving object, is located outside or inside the boundary 100 (S15). Note that it is assumed that the wearer is inside the boundary 100.
If the VRHMD 1 identifies in S15 that the object is outside the boundary 100, the VRHMD 1 determines the type of the object, for example, a person, animal, desk, or chair (S16). The VRHMD 1 can, as an example, determine the type of the object using known image matching techniques. The VRHMD 1 may also determine the type of the object by applying an appropriate matching technique to the sound input from the object.
It is determined whether the type determined in S16 matches the type previously set in S2 of
If, in S17, an object set in S2 is determined (YES), or a moving object is detected in S14 (YES), the VRHMD 1 acquires (extracts) images of those objects (S18). The VRHMD 1 then superimposes the images acquired (extracted) in S18 on the VR space image (S20) and displays the superimposed image on the display 130 (S21). Note that after the display in S21, the process returns to S11. On the other hand, if no object set in S2 is extracted in S17 (NO), the VRHMD 1 displays the VR space image generated in S11 on the display 130 as is (S21). Also, if an object exists inside the boundary 100 in S15 and the object is a new object detected in S14 or a moving object, the VRHMD 1 extracts their images (S19). The VRHMD 1 then superimposes the images extracted in S19 on the VR space image (S20) and displays the superimposed image on the display 130 (S21). Note that when S19 is processed, there is a high possibility that the wearer will contact the object, so in this embodiment the VRHMD 1 performs a process such as superimposing the S19 image on the central portion of the VR space image, or displaying the S19 image and stopping the output of the VR space image.
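The branching of S15 to S21 described above can be summarized in the following sketch; the function name and the returned labels are illustrative assumptions:

```python
def choose_display(outside_boundary, obj_type, set_types, is_new_or_moving):
    """Decision logic of S15 to S21: what to superimpose on the VR space
    image. Returns "superimpose", "superimpose_center", or "vr_only"."""
    if outside_boundary:
        # Outside boundary 100 (S15): show only object types the user
        # registered in S2, or objects detected as moving in S14.
        if obj_type in set_types or is_new_or_moving:
            return "superimpose"           # S18 -> S20 -> S21
        return "vr_only"                   # S17 NO: display VR image as is
    # Inside boundary 100: a new or moving object risks contact with the
    # wearer, so it is displayed prominently, e.g. centered (S19).
    if is_new_or_moving:
        return "superimpose_center"
    return "vr_only"
```

Objects inside the boundary that were already present at initial setup were accounted for when the boundary was drawn, hence the `vr_only` default on that branch.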
As explained above, when it is identified that a set object has newly appeared outside the boundary 100, the VRHMD 1 superimposes a captured image of the object (specifically, an image in which the object portion is cut out from the image captured by the camera 200, or an image in which the outline of the object is extracted) on the VR space image and displays it. This enables the wearer to grasp the external situation that he or she may wish to be aware of. On the other hand, if the object is identified as one that has not been set, it is not displayed as long as it does not interfere with safe operation, so the immersion in the VR space is not compromised. Thus, this embodiment provides a VRHMD that can display with an appropriate balance between understanding the surrounding situation and a sense of immersion, which are in a trade-off relationship.
Next, the second embodiment is described with reference to
In the second embodiment, first, the camera 200 creates a 360° ambient image, and the VRHMD 1 identifies objects from that 360° image. Among the identified objects, the VRHMD 1 detects objects that match the objects to be grasped set in S2 in
With reference to
Here,
Next, the operational flowchart of the second embodiment is described using
First, when the VR space experience is started (S10), VRHMD 1 generates a VR space image by the VR space image generator 195 of the image processor 107 (S11). Then, camera 200 captures the wearer's surroundings to create a 360° ambient image (S12), and VRHMD 1 detects objects from the created 360° ambient image (S13).
The VRHMD 1 identifies the location of each detected object using sensor functions such as the sensor 105 as appropriate, and in addition determines whether the object is in front of, behind, to the right of, to the left of, or diagonal to the wearer. The VRHMD 1 also compares the result with the data detected in S2 of
Note that the direction of an object may, as an example, be determined by the following method. The VRHMD 1 treats an object located in the horizontal center portion of the captured image as an object located in the same direction as the camera 200 (e.g., forward or backward), and an object located at the left or right edge of the captured image as an object located in the lateral direction (e.g., to the left side or right side). The VRHMD 1 then treats an object located between these regions in the captured image as an object located in a diagonal direction.
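This center/edge/in-between classification can be sketched as follows; the fractional thresholds are illustrative assumptions, not values from the specification:

```python
def object_direction(x_center, image_width):
    """Coarse direction classification for the second embodiment: objects
    near the horizontal center of the captured image lie in the camera's
    facing direction, objects at the left/right edges lie to the side,
    and objects in between lie diagonally."""
    frac = x_center / image_width   # 0.0 = left edge, 1.0 = right edge
    if 0.4 <= frac <= 0.6:
        return "front_or_back"      # same direction as the camera 200
    if frac < 0.2 or frac > 0.8:
        return "side"               # lateral direction (left or right)
    return "diagonal"

# An object detected at x = 512 in a 1024-pixel-wide frame is dead center.
d = object_direction(512, 1024)
```

With a 360° ambient image the same idea applies per camera view, with the left/right sense resolved by which camera captured the object.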
The VRHMD 1 identifies whether the location of the object detected in S14, including any moving object, is outside or inside the boundary 100 (S15). Note that here the wearer is inside the boundary 100. Then, if the object exists outside the boundary 100 in S15, the VRHMD 1 determines the type of the object (S16).
VRHMD 1 determines whether the types determined in S16 match the types previously set in S2 in
As shown in
On the other hand, if no object set in S2 is extracted (no match) in S17 (NO), the VRHMD 1 displays the VR space image generated in S11 on the display 130 as is (S21). Also, if an object exists inside the boundary 100 in S15 and the object is a new object detected in S14 or a moving object, the VRHMD 1 extracts their images (S19). Note that since there is a high possibility that the wearer will contact the objects in the S19 image, in this embodiment the VRHMD 1 superimposes the S19 image on a part of the VR space image other than the dotted frame 111 (e.g., the center part), or displays the real space together with the S19 image instead of the VR space image, thereby performing a process such as interrupting the immersion in the VR space.
As explained above, when it is identified that a set object has newly appeared outside the boundary 100, the VRHMD 1 superimposes the object on the VR space image as a virtual object and displays it. This enables the wearer to grasp the external situation that he or she may wish to be aware of. On the other hand, if the object is identified as one that has not been set, it is not displayed as long as it does not interfere with safe operation, so the immersion in the VR space is not compromised. Thus, this embodiment provides a VRHMD that can display with an appropriate balance between understanding the surrounding situation and a sense of immersion, which are in a trade-off relationship.
Next, the third embodiment is described with reference to
As already explained, VRHMD 1 can grasp the presence of an object such as a telephone and its location, using the 360° ambient image captured by camera 200. On the other hand, to detect that the grasped object is generating sound (e.g., the ringing of a telephone or the sound of a person speaking), sound detection processing is required.
When the VR space experience begins (S10), the VRHMD 1 generates a VR space image (S11). Camera 200 captures images of the wearer's surroundings, and VRHMD 1 creates a 360° ambient image (S12). VRHMD 1 detects objects from the created 360° ambient image (S13).
VRHMD 1 measures the temperature of the object detected in S13 with the temperature sensor 156 and compares it with the data detected in S4 of
The VRHMD 1 identifies the position of each object detected in S13, and in addition determines whether the object exists in front of, behind, to the right of, to the left of, or diagonal to the wearer. Also, if a new object is detected as a result of comparison with the data detected in S4 of
VRHMD 1 detects the position where the sound is generated based on the data of the sound detection processor 300, and compares the output data of S14 with the data detected in S4 of
VRHMD 1 identifies whether the output data of S42 relates to objects that exist outside or inside the boundary 100 (S43). Here, the wearer is inside the boundary 100.
If the presence of an object or the occurrence of a sound is outside the boundary 100 in S43, the VRHMD 1 determines the type of the object and sound, for example, a ringing telephone, a person's voice calling out, a chime announcing a visitor, or an emergency bell (S44).
The VRHMD 1 determines whether the type determined in S44 matches the type previously set in S2 of
If the objects set in S2 are extracted in S17, or if moving objects are detected in S14 (YES), the VRHMD 1 replaces those objects and sounds with virtual objects. If it is a ringing telephone, the VRHMD 1 may, for example, replace it with a virtual object in the shape of a telephone being called. Similarly, a calling person can be replaced with a virtual object in the shape of a person recognized to be calling out, a chime announcing a visitor can be replaced with a virtual object in the shape of a door chime recognized to be ringing, and a ringing emergency bell can be replaced with an emergency bell virtual object (S45).
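The replacement in S45 can be sketched as a simple lookup from the recognized sound source to a virtual-object shape; every name in the table is illustrative, not from the specification:

```python
# Hypothetical mapping from a recognized sound-emitting object to the
# virtual object used to replace it in S45.
VIRTUAL_OBJECTS = {
    "ringing_telephone": "telephone_being_called",
    "calling_person":    "person_calling_out",
    "visitor_chime":     "door_chime_ringing",
    "emergency_bell":    "emergency_bell",
}

def replace_with_virtual_object(sound_type):
    """Return the virtual-object shape for a recognized sound source,
    or None if the sound is not one the wearer asked to be informed of."""
    return VIRTUAL_OBJECTS.get(sound_type)

obj = replace_with_virtual_object("ringing_telephone")
```

Sound types outside the table fall through to `None`, which corresponds to not disturbing the immersion for sounds the wearer did not register.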
The VRHMD 1 superimposes the image replaced by the virtual object in S45 on the dotted frame 111 portion of the VR space image, in accordance with the direction relative to the wearer detected in S14. If it determines in S44 that no corresponding object is found, such as when an emergency bell is ringing, the VRHMD 1 may switch from the VR space image to the real space image. Furthermore, the virtual object of the emergency bell may be superimposed on the real space image and displayed to warn of danger (S32). Note that the operations after S32 are the same as those in the flowchart in
As explained above, when a sound is generated that the wearer should be informed of, even if no new object appears, a virtual object indicating the generated sound is superimposed on the VR space image and displayed. This enables the wearer to grasp the external situation that he or she may wish to be aware of. On the other hand, if the sound is identified as less necessary, it is not displayed, so the sense of immersion in the VR space is not compromised. Thus, according to this embodiment, a VRHMD is provided that can display with an appropriate balance between understanding of the surrounding situation and the sense of immersion, which are in a trade-off relationship.
Next, fourth embodiment is described with reference to
In the fourth embodiment, when objects to be grasped exist in the set first area, second area, and third area, as set in S2 of
In
With respect to this process, the VRHMD 1 determines in S16 and S44 of the operational flowcharts above whether the object or sound exists in the second or third area. Then, in S32, the VRHMD 1 changes the size of the object to be superimposed according to the area in which it exists.
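The size change by area in S32 can be sketched as follows; the concrete scale factors are illustrative assumptions:

```python
def virtual_object_scale(area_index):
    """Scale factor for the superimposed virtual object by area (fourth
    embodiment): objects in nearer areas are drawn larger, so their
    distance from the wearer is conveyed at a glance."""
    scales = {1: 1.0,   # first area: nearest, full size
              2: 0.6,   # second area: intermediate
              3: 0.3}   # third area: farthest, displayed inconspicuously
    return scales.get(area_index, 0.3)
```

Any finer gradation (e.g. continuous scaling with distance) would serve the same purpose; the three-step table mirrors the three areas described.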
Note that the setting for grasping objects in the second area may be limited, for example, to objects generating sound. With this setting, when the VR space experience is conducted in a large space such as a gymnasium, the space that the wearer wants to grasp can be limited to a certain circumference (e.g., several meters) around the wearer. It is also possible to grasp only emergency bells or broadcasts announcing an emergency in an area beyond a certain area, for example, outside a gymnasium, in the event of a fire or other emergency situation.
As explained above, multiple areas separated by boundaries are set, and virtual objects corresponding to the areas where objects exist are superimposed on the VR space image and displayed. This enables recognition of objects according to their distance from the wearer. On the other hand, if there is no problem with safe operation, no display is performed; in addition, if the distance is great, the virtual object can be displayed inconspicuously, so that the immersion in the VR space is not compromised. Thus, this embodiment provides a VRHMD that can display with an appropriate balance between understanding the surrounding situation and immersion, which are in a trade-off relationship.
Next, the fifth embodiment is described with reference to
There are many devices with short-range communication interfaces (wireless communication devices), e.g., smartphones. A short-range communication interface uses radio waves and is assumed to be used at short range, up to a communication distance of about 10 meters. It transmits ID information periodically, and because it uses radio waves, it can be detected even if the device is located behind a wall or in another place where it cannot be seen by the eye.
Thus, for example, as shown in
When the VR space experience begins (S10), VRHMD 1 generates VR space images (S11).
The short-range communication interface periodically transmits ID information. Therefore, VRHMD 1 detects the short-range communication interface by acquiring radio waves from the short-range communication interface (S51). VRHMD 1 also detects ID information from the acquired radio waves (S52).
Also, VRHMD 1, as an example, detects (estimates) the distance of a device equipped with a short-range communication interface from the acquired radio wave strength (S53). Note that, the VRHMD 1 may detect (estimate) the distance of a device equipped with a short-range communication interface from the delay time in communication. In addition, as an example, if a direction detectable method such as UWB (Ultra Wide Band) is used, position detection is also possible.
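One common way to realize the distance estimation of S53 from received radio wave strength is the log-distance path-loss model; the reference power at 1 m and the path-loss exponent below are assumptions that would need per-device calibration:

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance estimate (S53) from received signal strength using
    the log-distance path-loss model: distance doubles roughly every
    6 dB of additional loss when the exponent is 2 (free space).
    tx_power_dbm is the assumed received power at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

d1 = estimate_distance_m(-59)   # received power equals the 1 m reference
d2 = estimate_distance_m(-79)   # 20 dB weaker: about 10 m with exponent 2
```

In a real room, walls and bodies make the estimate coarse, which is consistent with the specification also allowing delay-time or UWB-based methods.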
VRHMD 1 determines whether the location of the device detected in S53 is outside or inside the boundary 100 (S15). If the device is inside the boundary 100 in S15, the process proceeds to S21. Note that the VRHMD 1 may make a determination not simply based on the distance, but may also detect whether the device is approaching or moving away and take that information into account when making the determination. For example, if the device is outside the boundary 100 in terms of distance but is moving away, the VRHMD 1 may determine that there is little need to inform the wearer and the process may proceed to S21.
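The refinement described here, suppressing the notification for a device that is outside the boundary 100 but moving away, can be sketched as follows (illustrative only; the distance history and threshold handling are assumptions):

```python
def should_notify(distances_m, boundary_radius_m):
    """Refined S15 check: notify about a device outside boundary 100
    only if it is not moving away from the wearer. distances_m is a
    short history of estimated distances, oldest first."""
    current = distances_m[-1]
    if current <= boundary_radius_m:
        # Inside the boundary: handled by the separate in-boundary branch.
        return False
    moving_away = len(distances_m) >= 2 and distances_m[-1] > distances_m[-2]
    return not moving_away

# Device outside the boundary but receding (4 m -> 6 m): no notification.
n = should_notify([4.0, 6.0], boundary_radius_m=2.0)
```

A production version would smooth the noisy distance estimates (e.g. a moving average) before comparing consecutive samples.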
If the device exists outside the boundary 100 in S15, it determines whether the detected ID information matches the device to be grasped set in S2 in
If the device is determined in S17 (YES), the VRHMD 1 replaces the determined device with a virtual object (S54). Then, the VRHMD 1 superimposes the object of S54 on the dotted frame 112 in the lower left part of the VR space image (S32), as shown in
Note that although this embodiment described the example of superimposing the object of S54 on the portion of the dotted frame 112, the display form is not limited to this example; for example, the position of the dotted frame to be displayed may be changed as appropriate. Also, if the direction of the device can be identified, a display aligned with the direction relative to the wearer may be performed, as described in
As explained above, the VRHMD 1 uses the short-range communication interface to detect a target device, superimposes a virtual object on the VR space image, and displays it. This makes it possible to recognize the detection-target device even in locations where it cannot be photographed with a camera. On the other hand, devices that are not registered as detection targets are not displayed, so the sense of immersion in the VR space is not compromised. Thus, according to this embodiment, a VRHMD is provided that can display with an appropriate balance between understanding the surrounding situation and immersion, which are in a trade-off relationship.
Next, the sixth embodiment is described with reference to
As shown in
Here, the VR goggles 90 are of an appropriate configuration to which the smartphone 110 is attached. The VR goggles 90 may be, as an example, sumaho goggles to which the smartphone 110 is attached by the user fitting the smartphone 110 into them. The VR goggles 90 may also be sumaho goggles to which the smartphone 110 is attached by the user plugging in the smartphone 110. Here, “sumaho” is an abbreviation for smartphone.
As described above, there is provided a VRHMD that recognizes the type of an object from the image captured by the camera 200, extracts objects that match the type condition and distance condition, superimposes an image showing each extracted object on the VR space image, and displays the result on the display 130. Also provided, as an example, is a display method for a head mounted display that includes a memory step (S2) to store the type condition and distance condition of objects to be displayed, an image generation step (S11) to generate an image drawing the virtual space, a shooting step (S12) to capture the real space around the head mounted display, a distance detection step (S14) to detect the distance to an object in the real space, a recognition step (S16) to recognize the type of the object from the captured image, an extraction step (S17, S18) to extract objects that match the type condition and distance condition from the recognized objects, and a superimposed display step (S20, S21) to superimpose an image showing each extracted object on the virtual space image and display the result.
In this way, even outside the safe activity area of the VRHMD wearer, surrounding situations such as people, equipment, and sounds can be detected, and it can be determined whether it is desirable to make the situation known to the wearer. If it is determined that the wearer should be informed, the VRHMD superimposes, for the detected situation, an image taken of the detected object, a virtual object indicating the object, or a display object indicating the direction in which the object exists, on the VR space image and displays it on the display. This enables the wearer to grasp the external situation that he or she may wish to be aware of. On the other hand, if an object is identified as one that has not been set, it is not displayed as long as it does not interfere with safe operation, so the immersion in the VR space is not compromised. Thus, according to the present invention, display can be performed with an appropriate balance between understanding the surrounding situation and immersion, which are in a trade-off relationship.
An embodiment of the invention has been described above; needless to say, the configuration for realizing the technique of the present invention is not limited to the above-described embodiment, and various modifications are possible. For example, the aforementioned embodiment is described in detail in order to explain the invention in an easy-to-understand manner, and the invention is not necessarily limited to configurations having all of the described components. It is also possible to replace some of the configurations of one embodiment with those of another embodiment, and to add configurations of other embodiments to those of one embodiment. All of these are within the scope of the invention. In addition, the numerical values, messages, etc. that appear in the text and figures are only examples, and the use of different ones does not impair the effect of the invention.
It is sufficient that the prescribed processing can be performed; for example, the programs used in each processing example may be independent programs, or multiple programs may constitute a single application program. In addition, the order in which the processes are performed may be changed.
The functions, etc. of the invention described above may be realized in hardware by designing some or all of them, for example, as an integrated circuit. They may also be realized in software by having a microprocessor unit, CPU, or the like interpret and execute operating programs that realize the respective functions. The scope of software implementation is not limited, and hardware and software may be used together. In addition, part or all of each function may be realized by a server. Note that the server may be a local server, a cloud server, an edge server, a network service, etc., as long as it is capable of executing the functions in cooperation with other components via communication; its form does not matter. Information such as programs, tables, and files that realize each function may be stored in a memory device such as a memory, hard disk, or SSD (Solid State Drive), on a recording medium such as an IC card, SD card, or DVD, or on a device on the communication network.
In addition, the control lines and information lines shown in the figures are those considered necessary for explanation, and do not necessarily represent all the control and information lines on the product. In reality, almost all of the components may be considered to be interconnected.
In the VRHMD 1, the positions of the cameras are not limited to the examples described above. The number and structure of the cameras 200 are likewise not limited to the examples described above and may be changed as appropriate.
A suitable camera capable of communicating with the VRHMD 1 may be installed in the environment where the VRHMD 1 is used, and the VRHMD 1 may perform processing based on captured images obtained from that camera via communication. In other words, a system may be provided comprising such a camera and the VRHMD 1.
Also, the system may operate multiple VRHMDs 1 with a single camera. Therefore, for example, operation can be simplified by installing one or a small number of cameras positioned so as to survey the entire environment.
Here, the VRHMD 1 determines, using the image acquired by the camera, whether an object is one of the objects set in S2. If it is determined that the object was set in S2, the VRHMD 1 can superimpose and display the image of the object captured by the camera. Note that the object in the image acquired by the camera, or a virtual object replacing it, may be superimposed at an appropriate position (e.g., toward the edge of the display 130) or at a predetermined position, as an example. Also, in the case where multiple cameras are installed and images of objects are superimposed, the object (or virtual object) acquired from any one of the cameras may, as an example, be superimposed.
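The edge-of-display placement mentioned above can be sketched as follows. This is only an illustration; the resolution, margin, and slot spacing are assumed values, not part of the specification.

```python
# Hypothetical sketch: stacking overlays down the right edge of the
# display 130, as one example of a predetermined position.
DISPLAY_W, DISPLAY_H = 1920, 1080  # assumed display resolution (pixels)
MARGIN = 40                        # assumed offset from the display edge
SLOT_H = 100                       # assumed vertical spacing per overlay

def edge_position(index):
    """Return the (x, y) anchor for the index-th overlay slot."""
    x = DISPLAY_W - MARGIN
    y = MARGIN + index * SLOT_H
    return x, y

print(edge_position(0))  # first overlay slot
print(edge_position(1))  # second slot, below the first
```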
In S2, objects that are not to be superimposed may also be set, and the memory may store information indicating the types of objects not to be displayed. The VRHMD 1 may then perform a process of not displaying objects identified from this information. By setting the objects that are not superimposed in this way, the wearer can immerse himself/herself in the VR space without being aware of those objects. For example, by not displaying home appliances such as a robot vacuum cleaner, the wearer can remain immersed in the VR space without being aware of the appliance even while it is operating.
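The non-display setting above can be sketched as a simple type check against the information stored in S2. The type names here are illustrative assumptions.

```python
# Hypothetical sketch of the "do not display" setting stored in S2.
NON_DISPLAY_TYPES = {"robot_vacuum", "fan"}  # assumed example entries

def should_display(object_type, non_display_types=NON_DISPLAY_TYPES):
    """Return False for object types set as not to be superimposed."""
    return object_type not in non_display_types

print(should_display("person"))        # a person is still superimposed
print(should_display("robot_vacuum"))  # the appliance is suppressed
```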
The VRHMD 1 may, as an example, acquire data from the sensor 105 depending on the situation and process it. The VRHMD 1 may, for example, detect tilt with the acceleration sensor 154 or the gyro sensor 155 and perform processing that compensates for the effects of tilt.
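One common way to detect tilt from an acceleration sensor is sketched below. When the headset is stationary, the accelerometer reads the gravity vector, from which roll and pitch can be derived and used to compensate later processing. The axis conventions here are an assumption, not the specification's.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Return (roll, pitch) in degrees from a gravity reading in m/s^2,
    assuming z points up out of a level headset."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# A level headset reads roughly (0, 0, 9.81); gravity along +y means
# the headset is rolled about 90 degrees.
print(tilt_from_accel(0.0, 0.0, 9.81))
print(tilt_from_accel(0.0, 9.81, 0.0))
```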
The battery 109 may be connected to the data bus 103 so that information about the battery 109 (e.g., the current charge level) can be displayed. The VRHMD 1 may then display the information of the battery 109 on the display 130.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/045020 | 12/7/2021 | WO |