Embodiments of the present invention generally relate to a mixed reality device, a display control method, and a storage medium.
There are mixed reality devices that display a virtual space superimposed on a real space. There is a need for technology that can improve the usability of such mixed reality devices.
According to one embodiment, a mixed reality device is capable of superimposing a virtual space on a real space. The mixed reality device is configured to set a three-dimensional coordinate system in the virtual space based on a prescribed object imaged in the real space. The mixed reality device is further configured to display a virtual object at a predetermined position in the three-dimensional coordinate system. The mixed reality device is further configured to change a display direction of the virtual object according to a positional relationship between the virtual object and the mixed reality device.
Embodiments of the invention will now be described with reference to the drawings. The drawings are schematic or conceptual; and the relationships between the thicknesses and widths of portions, the proportions of sizes between portions, etc., are not necessarily the same as the actual values thereof. The dimensions and/or the proportions may be illustrated differently between the drawings, even in the case where the same portion is illustrated. In the drawings and the specification of the application, components similar to those described thereinabove are marked with like reference numerals, and a detailed description is omitted as appropriate.
A first embodiment of the present invention relates to a mixed reality device (MR device). For example, as shown in
In the illustrated example, the MR device 100 is a binocular-type head-mounted display. Two lenses 111 and 112 are embedded in the frame 101. The projection devices 121 and 122 project information onto lenses 111 and 112, respectively.
The projection device 121 and the projection device 122 display the detection result of the worker's body, a virtual object, etc. on the lens 111 and the lens 112. Only one of the projection device 121 and the projection device 122 may be provided, and information may be displayed on only one of the lens 111 and the lens 112.
The lens 111 and the lens 112 are transparent. The worker can see the real-space environment through the lens 111 and the lens 112. The worker can also see the information projected onto the lens 111 and the lens 112 by the projection device 121 and the projection device 122. The projections by the projection device 121 and the projection device 122 display information overlaid on the real space.
The image camera 131 detects visible light and acquires a two-dimensional image. The depth camera 132 emits infrared light and acquires a depth image based on the reflected infrared light. The light source 133 irradiates light (e.g., infrared light) toward the wearer's eyeball. The eye-tracking camera 134 detects the light reflected by the wearer's eyeball. The sensor 140 is a 6-axis detection sensor, and can detect 3-axis angular velocity and 3-axis acceleration. The microphone 141 accepts voice input.
The processing device 150 controls each element of the MR device 100. For example, the processing device 150 controls the display by the projection device 121 and the projection device 122. Hereinafter, the operation in which the processing device 150 causes the projection devices 121 and 122 to display information on the lenses 111 and 112 is also referred to simply as "the processing device displays information." Additionally, the processing device 150 detects the movement of the field of view based on the detection result by the sensor 140. The processing device 150 changes the display by the projection device 121 and the projection device 122 in response to the movement of the field of view.
In addition, the processing device 150 can perform various processes using data obtained from the image camera 131 and the depth camera 132, the data of the storage device 170, etc. For example, the processing device 150 recognizes a prescribed object from the image acquired by the image camera 131. The processing device 150 recognizes the surface shape of the object from the image acquired by the depth camera 132. The processing device 150 calculates the viewpoint and line of sight of the worker's eyes from the detection result acquired by the eye-tracking camera 134.
The battery 160 supplies the power necessary for operation to each element of the MR device 100. The storage device 170 stores data necessary for the processing of the processing device 150, data obtained by the processing of the processing device 150, etc. The storage device 170 may be provided outside the MR device 100 and communicate with the processing device 150.
Not limited to the illustrated example, the MR device according to the embodiment may be a monocular-type head-mounted display. The MR device may be a glasses-type as illustrated, or may be a helmet type.
Here, an example in which the first embodiment of the present invention is applied to a task of turning a screw will be described. As an example, the worker performs the task on the article 200 shown in
First, a three-dimensional coordinate system in a virtual space is set based on a prescribed object. In the example shown in
As long as the three-dimensional coordinate system can be set, the object used for setting is freely-selected. Here, an example in which the three-dimensional coordinate system is set using the marker 210 will be described. At the start of the task, the image camera 131 and the depth camera 132 image the marker 210. The processing device 150 recognizes the marker 210 from the captured image. The processing device 150 sets the three-dimensional coordinate system in the virtual space based on the position and orientation of the marker 210. By setting the three-dimensional coordinate system based on the prescribed object existing in the real space, it is possible to display a virtual object corresponding to a physical object in the real space. The position at which the virtual object is displayed is pre-registered using the coordinate system based on the origin of the marker 210. The three-dimensional coordinate system used to register the display position of a virtual object 301 is the same coordinate system as the three-dimensional coordinate system set when the MR device 100 is used.
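The coordinate transformation involved in this setting can be illustrated as follows. The Python fragment below is a minimal sketch, for illustration only; it assumes that the pose of the marker 210 in the device (camera) frame has already been estimated by the image recognition described above, and the function and variable names are hypothetical rather than part of the embodiment.

    import numpy as np

    def marker_to_device(point_in_marker, marker_rotation, marker_translation):
        # Transforms a point registered in the marker-based coordinate system
        # into the device (camera) frame, given the recognized marker pose.
        # marker_rotation is a 3x3 rotation matrix and marker_translation a
        # 3-vector, both expressing the marker pose in the device frame.
        R = np.asarray(marker_rotation, dtype=float)
        t = np.asarray(marker_translation, dtype=float)
        return R @ np.asarray(point_in_marker, dtype=float) + t

    # Example: a virtual object pre-registered 0.1 m above the marker origin.
    # device_point = marker_to_device([0.0, 0.0, 0.1], marker_R, marker_t)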
When the three-dimensional coordinate system is set, the processing device 150 displays a virtual object facing a predetermined direction at a predetermined position in the coordinate system. In the example shown in
As shown in
In the task, a digital tool capable of detecting torque values may be used. The digital tool may be a digital torque wrench, a digital torque screwdriver, etc. In such a case, the detection value 301c indicates the torque value detected by the tool. The meter 301d indicates the specified torque value and the detected torque value. The percentage 301e indicates the ratio of the detected value to the specified torque value. Depending on the task, the screw may need to be tightened multiple times at one fastening location. In such a case, the number of times 301f indicates how many times the screw has been tightened at the fastening location 201. The worker performs the task while checking the contents displayed on the virtual object 301.
The processing device 150 further calculates the position of the MR device 100. As an example, the processing device 150 calculates the position and direction of the MR device 100 using a spatial mapping function. In the MR device 100, the distances to the objects surrounding the MR device 100 are measured by the depth camera 132. From the measurement results by the depth camera 132, surface information of the surrounding objects can be obtained. The surface information includes the position and orientation of the object's surface. For example, the surface of each object is represented by multiple meshes, and the position and direction are calculated for each mesh. The processing device 150 calculates the position and direction of the MR device 100 relative to the surrounding surfaces from the surface information. When the marker 210 is recognized, the position of each surface is also represented by the three-dimensional coordinate system based on the marker 210. From the positional relationship between the surfaces of the objects and the MR device 100, the position and direction of the MR device 100 in the three-dimensional coordinate system are calculated.
The spatial mapping is performed repeatedly at predetermined intervals. With each execution of spatial mapping, the surface information of the surrounding objects is obtained. The processing device 150 calculates changes in the surface position and direction between the current spatial mapping result and the previous spatial mapping result. In a situation where the surrounding objects do not move, the change in the position of the surface and the change in the direction of the surface correspond to a change in the position of the MR device 100 and a change in the direction of the MR device 100, respectively. The processing device 150 calculates the change amounts in the position and direction of the MR device 100 from the change in the position of the surface and the change in the direction of the surface. The detection result of the sensor 140 may be further used to calculate the change amounts in the position and direction of the MR device 100. The processing device 150 updates the position and direction of the MR device 100 from the acquired change amounts. Instead of spatial mapping, a conventional positioning method may be used to acquire the position of the MR device 100.
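As one possible illustration of how the change amounts may be computed, the following Python sketch estimates the rigid motion of the static surrounding surfaces between two spatial mapping results (here using corresponding mesh center points and the Kabsch method) and composes its inverse with the previous device pose. The approach and all names are assumptions for illustration only; the device pose is expressed here as a rotation and translation from the device frame to the three-dimensional coordinate system.

    import numpy as np

    def rigid_transform(prev_pts, curr_pts):
        # Kabsch method: rigid transform (R, t) that maps prev_pts onto
        # curr_pts, where both are N x 3 arrays of corresponding points.
        p_mean = prev_pts.mean(axis=0)
        q_mean = curr_pts.mean(axis=0)
        H = (prev_pts - p_mean).T @ (curr_pts - q_mean)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q_mean - R @ p_mean
        return R, t

    def update_device_pose(dev_R, dev_t, prev_mesh_centers, curr_mesh_centers):
        # The surrounding surfaces are assumed static, so their apparent motion
        # in the device frame is the inverse of the device's own motion.
        R, t = rigid_transform(prev_mesh_centers, curr_mesh_centers)
        R_inv, t_inv = R.T, -R.T @ t
        new_R = dev_R @ R_inv
        new_t = dev_R @ t_inv + dev_t
        return new_R, new_t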
The processing device 150 changes the display direction of the virtual object 301 according to the positional relationship between the MR device 100 and the virtual object 301. For example, as shown in
When the worker moves, the processing device 150 calculates the position of the MR device 100 after the move. As shown in
As shown in
The specific calculation method of the angle is freely-selected. For example, as shown in
The position calculated by the spatial mapping described above can be used as the position of the MR device 100. The worker's viewpoint may also be used as the position of the MR device 100. For example, after the position of the MR device 100 is calculated, the processing device 150 calculates the position of the worker's viewpoint using the detection result by the eye-tracking camera 134. The processing device 150 calculates the second vector V2 from the virtual object 301 to the viewpoint, and calculates the angle θ and the angle φ using the second vector V2. Thereby, the virtual object 301 can be oriented in the direction that is easier for the worker to see.
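A possible calculation of the angle θ and the angle φ is sketched below in Python. It assumes a coordinate system in which the Z-axis is vertical, treats θ as a rotation about the vertical axis and φ as a difference of elevation angles, and uses hypothetical function names; the actual definitions of the angles may differ.

    import numpy as np

    def facing_angles(display_dir, object_pos, device_pos):
        # Returns (theta, phi): the rotation about the vertical Z-axis and the
        # change in elevation needed to turn the display direction (first
        # vector V1) toward the MR device or viewpoint (second vector V2).
        v1 = np.asarray(display_dir, dtype=float)
        v2 = np.asarray(device_pos, dtype=float) - np.asarray(object_pos, dtype=float)
        theta = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
        phi = (np.arctan2(v2[2], np.hypot(v2[0], v2[1]))
               - np.arctan2(v1[2], np.hypot(v1[0], v1[1])))
        return theta, phi

In this sketch, rotating the virtual object 301 by θ about the Z-axis and by φ about a horizontal axis aligns the first vector with the second vector.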
In the display control method M1 shown in
The processing device 150 controls the display direction of the virtual object (step S10). Specifically, the processing device 150 calculates the first vector that is along the display direction of the virtual object 301 and the second vector from the virtual object 301 toward the MR device 100 (step S11). The processing device 150 calculates the angle between the first vector and the second vector (step S12). The processing device 150 rotates the virtual object 301 by the calculated angles (step S13).
The processing device 150 determines whether the task has been completed (step S6). When the task is not completed, the processing device 150 executes the step S4 again. As a result, the control of the display direction is continuously repeated until the task is completed.
The advantages of the first embodiment will be described.
The MR device 100 can provide a variety of information to the wearer by displaying a virtual object. When the wearer performs a task, information related to the task can be provided by the virtual object. In addition, the MR device 100 sets the coordinate system in the virtual space based on a physical object imaged in the real space. Therefore, the MR device 100 can display the virtual object corresponding to a specific object in the real space. On the other hand, when the virtual object is displayed at a predetermined position in the real space, the virtual object may become difficult for the wearer to see depending on its direction.
For example, as shown in
In the first embodiment of the present invention, the processing device 150 changes the display direction of the virtual object 301 according to the positional relationship between the MR device 100 and the virtual object 301. Therefore, as shown in
The MR device 100 may simultaneously display multiple virtual objects. For example, as shown in
The processing device 150 may determine a virtual object that overlaps the line of sight among the multiple virtual objects. For example, when the line of sight is calculated while multiple virtual objects are displayed, the processing device 150 determines the virtual object overlapping the line of sight. As shown in
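One way to determine which virtual object overlaps the line of sight is to test whether the gaze ray passes close to each object, as in the following Python sketch; the bounding-sphere approximation and the names used are assumptions for illustration only.

    import numpy as np

    def object_on_gaze(gaze_origin, gaze_dir, object_center, object_radius):
        # True when the gaze ray passes within object_radius of the center of
        # the virtual object (the object is approximated by a bounding sphere).
        d = np.asarray(gaze_dir, dtype=float)
        d = d / np.linalg.norm(d)
        to_center = np.asarray(object_center, dtype=float) - np.asarray(gaze_origin, dtype=float)
        along = float(np.dot(to_center, d))
        if along < 0.0:
            return False  # the object is behind the viewpoint
        closest = to_center - along * d
        return float(np.linalg.norm(closest)) <= object_radius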
A second embodiment of the present invention will be described. In the second embodiment, the same MR device 100 as in the first embodiment may be used. Here, an example in which the second embodiment of the present invention is applied to the task of turning a screw will be described.
In the second embodiment, the MR device 100 displays multiple virtual objects. For example, as shown in
In the example shown in
The processing device 150 changes the display position of at least one of the virtual object 302 and the virtual object 303, and changes the display position of at least one of the virtual object 303 and the virtual object 304. In addition, the processing device 150 changes the display position of at least one of the virtual object 306 and the virtual object 307, and changes the display position of at least one of the virtual object 307 and the virtual object 308. As a result, the overlapping amount between the virtual objects 302 and 303, the overlapping amount between the virtual objects 303 and 304, the overlapping amount between the virtual objects 306 and 307, and the overlapping amount between the virtual objects 307 and 308 are reduced. For example, as shown in
A method for determining the overlap between virtual objects will now be described. As shown in
The processing device 150 projects each virtual object onto the virtual surface 350. For example, as shown in
For example, as shown in
When the virtual object 303 is projected onto the virtual surface 350, as shown in
The processing device 150 determines whether multiple labels are assigned to one region 351. The assignment of multiple labels to one region 351 indicates that multiple virtual objects overlap in the region 351. As a result of the process shown in
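The labeling described above can be illustrated by the following Python sketch, which assumes that each virtual object has already been projected onto the virtual surface 350 as a set of covered regions 351 (here identified by column and row indices); the data structure is an assumption for illustration only.

    def find_overlapping_regions(projections):
        # projections: {object_id: set of (column, row) regions covered on the
        # virtual surface 350}. Returns the regions to which two or more
        # object labels are assigned, i.e., where virtual objects overlap.
        labels = {}
        for obj_id, regions in projections.items():
            for region in regions:
                labels.setdefault(region, set()).add(obj_id)
        return {region: ids for region, ids in labels.items() if len(ids) >= 2}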
The processing device 150 searches for the position of the virtual object where the overlapping amount decreases. In the example shown in
In the search for the position of the virtual object, the position of at least one virtual object is changed. During the search, the displayed position of the virtual object is not changed; the display position is moved to the finally adopted position after the search is completed. The processing device 150 determines whether the overlapping region exists after changing the position. When the number of overlapping regions is reduced by the position change, the processing device 150 adopts the changed position. The processing device 150 changes the display position of the virtual object to the adopted position. This reduces the amount of overlap between the virtual objects.
A preferred specific example of the method for searching the position of the virtual object will be described. First, the processing device 150 determines the priority of each virtual object overlapping with each other. The method of determining priority is freely-selected. The priority of each virtual object is determined based on the importance of each virtual object, the importance of the task location corresponding to each virtual object, or the positional relationship between each virtual object and the MR device 100, etc. For example, the closer the virtual object is to the MR device 100, the higher the priority of the virtual object is set.
The processing device 150 changes the position of the virtual object with the lowest priority on the virtual surface 350. Specifically, the processing device 150 changes the position of the virtual object by a predetermined distance in a predetermined direction according to a preset rule.
For example, as shown in
As shown in
When a position where the virtual objects do not overlap cannot be found in the area A1, the processing device 150 searches for a position in the pair of areas A2. The search may start from any of the pair of areas A2. When a position where the virtual objects do not overlap is found in the area A2, the processing device 150 adopts that position as the changed position for the virtual object. When a position where the virtual objects do not overlap cannot be found in either of the areas A2, the processing device 150 searches for a position in the area A3. When a position where the virtual objects do not overlap is found in the area A3, the processing device 150 adopts that position as the changed position for the virtual object.
When it is not possible to find a position where the virtual objects do not overlap in any of the areas A1 to A3, the processing device 150 adopts the position with the least overlapping amount as the changed position for the virtual object. The processing device 150 reflects the adopted position on the virtual surface 350 in the display position of the virtual object. Thereby, the display positions of the virtual objects are controlled so as to reduce the overlapping amount between the virtual objects. For example, by performing the reverse process of the one shown in
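The search can be illustrated by the following Python sketch, which tries candidate offsets for the lowest-priority object in a predetermined order (for example, positions in the area A1 first, then A2, then A3) and falls back to the offset with the fewest overlapping regions when no overlap-free position exists. The grid representation follows the earlier labeling sketch, and the names and parameters are assumptions for illustration only.

    def search_position(target_id, projections, candidate_offsets, grid_w, grid_h):
        # Moves only the projection of the lowest-priority object; display
        # positions are not touched until a final offset has been adopted.
        original = projections[target_id]
        others = set().union(*(cells for oid, cells in projections.items()
                               if oid != target_id))
        best_offset, best_count = (0, 0), len(original & others)
        for dx, dy in candidate_offsets:  # ordered: area A1, then A2, then A3
            moved = {(c + dx, r + dy) for (c, r) in original
                     if 0 <= c + dx < grid_w and 0 <= r + dy < grid_h}
            count = len(moved & others)
            if count == 0:
                return (dx, dy)            # overlap-free position found
            if count < best_count:
                best_offset, best_count = (dx, dy), count
        return best_offset                 # position with the least overlap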
Not limited to the above-described examples, the shape of the virtual surface 350, the shape and arrangement of the regions 351, the projection method on the virtual surface 350, the method of searching for the position of the virtual object, etc. can be appropriately changed. For example, the virtual surface 350 need not be planar and may be curved. A part of a cylindrical surface centered on the position of the MR device 100 or a part of a spherical surface centered on the position of the MR device 100 may also be used as the virtual surface 350. The regions 351 may be triangular or hexagonal and may be arranged in two directions that are not orthogonal to each other.
In the display control method M2 shown in
When there is an overlapping region, the position of the virtual object is searched. In the position search, steps S23 to S25 are executed. In the step S23, the processing device 150 determines the priority of each of the virtual objects that overlap each other. In the step S24, the processing device 150 selects one of the virtual objects according to the priority and changes the position of the virtual object. In the step S25, the processing device 150 determines whether an overlapping region exists at the changed position on the virtual surface.
When it is determined that there is an overlapping region in the step S25, the processing device 150 determines whether an end condition for the search is satisfied (step S26). The end condition may be, for example, that the entire set area has been searched, that a preset number of position changes have been performed, or that a preset time has elapsed since the start of the search. When the end condition is not satisfied, the processing device 150 executes the step S24 again and continues to search for the position.
When it is determined in the step S25 that there is no overlapping region, the display position of the virtual object is changed to the position obtained in the immediately preceding step S24 (step S27). When it is determined that the end condition is satisfied in the step S26, the processing device 150 extracts the position with the least overlapping amount among the positions searched so far. The processing device 150 changes the display position of the virtual object to the position with the least overlapping amount (step S27).
When it is determined that there is no overlapping region in the step S22, or after the step S27, the processing device 150 determines whether the task is completed (step S6). When the task is not completed, the processing device 150 executes the step S4 again. This ensures that the control of the display position is continuously repeated until the task is completed.
The advantages of the second embodiment will be described.
There may be cases where multiple virtual objects are simultaneously displayed by the MR device 100. In such cases, if the virtual objects overlap each other, it may be difficult for the wearer to see the virtual objects. As shown in
According to the second embodiment of the present invention, when two or more virtual objects overlap, the processing device 150 changes the display position of at least one of the two or more virtual objects to reduce the overlapping amount. By reducing the overlapping amount between the virtual objects, each virtual object becomes easier for the wearer to see. For example, when information is displayed on these virtual objects, it becomes easier for the wearer to understand the information. According to the second embodiment, the usability of the MR device 100 can be improved.
The first embodiment and the second embodiment may be combined. In such a case, after displaying the virtual objects, the processing device 150 changes the display directions of the virtual objects according to the positional relationships between the MR device 100 and the virtual objects, and changes the display positions of the virtual objects to reduce the overlap between the virtual objects.
Here, a case in which both the first embodiment and the second embodiment are applied to the task of screw-tightening will be described.
A worker wearing the MR device 100 performs a screw-tightening task. During the task, the article 200, the worker's left hand 251, and the worker's right hand 252 are imaged by the image camera 131 and the depth camera 132. The processing device 150 recognizes the left hand 251 and the right hand 252 from the captured image using hand tracking.
For example, as shown in
When the left hand 251 and the right hand 252 are recognized, the processing device 150 measures the position of each hand. Specifically, the hand includes multiple joints, such as DIP joints, PIP joints, MP joints, and CM joints. The position of any of these joints is used as the position of the hand. The position of the center of gravity of the multiple joints may also be used as the position of the hand. Alternatively, the overall center position of the hand may be used as the position of the hand.
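For example, the center-of-gravity calculation may be as simple as the following Python sketch; the joint data structure and names are assumptions for illustration only.

    import numpy as np

    def hand_position(joint_positions):
        # joint_positions: dict mapping joint names (e.g., "index_PIP") to 3-D
        # coordinates obtained by hand tracking; the hand position is taken as
        # the center of gravity (centroid) of the joints.
        pts = np.array(list(joint_positions.values()), dtype=float)
        return pts.mean(axis=0)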
As shown in
The virtual objects 311 to 318 indicate the positions where the hand should be located when the screws are tightened into the fastening locations 201 to 208, respectively. The virtual objects 321 to 328 indicate the positions where the extension bar should be placed when the screws are tightened into the fastening locations 201 to 208, respectively. For example, when the screw is tightened into the fastening location 201, the hand comes into contact with the virtual object 311 as shown in
Displaying the virtual objects 311 to 318 and the virtual objects 321 to 328 allows the worker to easily understand where to position the hand and the extension bar during the screw-tightening. Thereby, task efficiency can be improved.
The processing device 150 may detect contact between a prescribed physical object and a virtual object. For example, the processing device 150 detects contact between the hand and the virtual objects 311 to 318. More specifically, the processing device 150 repeatedly calculates the distance between the hand and each of the virtual objects 311 to 318. When the distance to any virtual object falls below a preset threshold, the processing device 150 determines that the hand has come into contact with that virtual object.
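The distance-based contact determination can be illustrated by the following Python sketch; the threshold value and the names used are assumptions for illustration only, and the same test can be applied to a tool position instead of the hand position.

    import numpy as np

    def detect_contact(hand_pos, object_positions, threshold=0.03):
        # object_positions: {object_id: 3-D display position of the virtual
        # object}. Returns the ID of a virtual object whose distance from the
        # hand (or tool) position falls below the threshold, or None.
        for obj_id, pos in object_positions.items():
            if np.linalg.norm(np.asarray(hand_pos, dtype=float)
                              - np.asarray(pos, dtype=float)) < threshold:
                return obj_id
        return None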
When a hand comes into contact with any virtual object, the processing device 150 estimates (infers) that a screw is being turned to a location corresponding to the virtual object. For example, as shown in
Alternatively, instead of a hand, contact between a tool and a virtual object may be detected. When a tool and an object come into contact, it can be estimated that the screw is being tightened into the fastening location corresponding to the object, as in the example described above. Various methods can be used to estimate the position of the tool. For example, a sensor may be provided on the wrench 280, and the position of the wrench 280 may be estimated using the detection value of the sensor. The sensor may be an inclination sensor, an acceleration sensor, a gyro sensor, etc. The position of the wrench 280 may be estimated by combining the detection value of the sensor and the hand detection result.
Alternatively, a marker for estimating the position of the tool may be attached to the tool. In the example shown in
Once the position of the tool is calculated, the processing device 150 calculates the distance between the position of the tool and the virtual object. When the distance to any virtual object falls below a preset threshold, the processing device 150 determines that the tool has come into contact with that virtual object.
When the hand or tool comes into contact with the virtual object, the processing device 150 can estimate the location where the screw is being turned. For example, by estimating the location where the screw is being turned, a task record showing which locations have been worked on can be automatically generated. Alternatively, when the order of tasks is specified for multiple locations, it can be automatically determined whether or not the location where the screw is tightened is appropriate.
When the wrench 280 is a digital tool, the processing device 150 receives the detection value from the wrench 280. In such a case, the processing device 150 may associate the detection value such as torque with the data related to the estimated location to be worked on. When the torque required for fastening the screw is pre-registered, the processing device 150 may determine whether the required torque has been detected. In addition to the torque value, the processing device 150 registers the determination result in the task record.
In addition, the processing device 150 performs control of the display direction and display position of the virtual object. For example, virtual objects for which the display direction is controlled and virtual objects for which the display position is controlled are registered in advance. The processing device 150 changes the display direction of the virtual object for which the display direction is controlled according to the positional relationship between the virtual object and the MR device 100. The processing device 150 changes the display position of the virtual object for which the display position is controlled according to whether or not the virtual objects overlap.
For example, the display directions of the virtual objects 301 to 308 are controlled. Additionally, the display positions of the virtual objects 301 to 308, the virtual objects 311 to 318, and the virtual objects 321 to 328 are controlled. In the example shown in
The processing device 150 determines the priority of each of the virtual object 303, the virtual object 314, and the virtual object 324. In addition, the processing device 150 determines the priority of each of the virtual object 308, the virtual object 317, and the virtual object 327. The processing device 150 changes the display position of the virtual object with the lower priority to reduce the overlapping amount between the virtual objects. Note that the priority relationship for each type of virtual object may be registered in advance.
When the priority of the virtual object 303 is higher than the priority of each of the virtual object 314 and the virtual object 324, the display positions of the virtual object 314 and the virtual object 324 are changed as shown in
When the priority of the virtual object 303 is lower than the priorities of the virtual object 314 and the virtual object 324, the display position of the virtual object 303 is changed as shown in
For example, each priority of the virtual objects 301 to 308 is set lower than the priorities of the virtual objects 311 to 318 and the virtual objects 321 to 328. In such a case, the display positions of the virtual objects 301 to 308 are preferentially changed. As an example, when the display position of the virtual object 301 is searched, the processing device 150 sets the areas A1 to A3 based on the display position of the virtual object 301 as shown in
In addition, the display directions of the virtual objects 301 to 308 are appropriately changed according to the positional relationships between the MR device 100 and the virtual objects 301 to 308, respectively. Thereby, even when the worker moves, the virtual objects 301 to 308 are displayed in directions that are easy for the worker to see. Since the virtual objects 311 to 318 and the virtual objects 321 to 328 are rotationally symmetric with respect to an axis parallel to the Z-axis direction, the control of the display direction of the virtual objects may be omitted.
The MR device 100 may switch between a first mode in which only a virtual object related to a specific fastening location is displayed and a second mode in which virtual objects related to multiple fastening locations are displayed.
For example, when the order in which the screws are turned to multiple fastening locations is predetermined, the processing device 150 displays only the virtual object related to the next fastening location where a screw should be turned. In the first mode, the worker can easily understand the fastening location where the screw should be turned next. In addition, it is easy for the worker to identify the virtual object to focus on. When the task is completed for all fastening locations, the processing device 150 simultaneously displays virtual objects related to the multiple fastening locations. In the second mode, the worker can check whether the task has been appropriately performed for all fastening locations while looking at multiple virtual objects.
The worker may switch between the first mode and the second mode. For example, the worker inputs a hand gesture or voice command to the MR device 100 to switch between the first mode and the second mode. The processing device 150 switches between the first mode and the second mode in response to the input of hand gestures or voice commands.
In the second mode, as shown in
If the virtual objects 311 to 318 and the virtual objects 321 to 328 are displayed in the first mode, the display of the virtual objects 311 to 318 and the virtual objects 321 to 328 is omitted in the second mode. By displaying the virtual objects 331 to 338, the worker can easily understand the correspondence between the fastening locations 201 to 208 and the virtual objects 301 to 308. In addition, by displaying the virtual objects 331 to 338, which are simpler than the virtual objects 321 to 328, the worker can more easily check the information of the virtual objects 301 to 308.
If the virtual objects 311 to 318 and the virtual objects 321 to 328 are displayed in the first mode and the virtual objects 301 to 308 are away from the fastening locations 201 to 208, as shown in
In the processing method M3 shown in
For example, the task may be selected by the worker or by a higher-level system. Based on data obtained from a sensor provided in the workplace or a reader provided in the workplace, the processing device 150 may determine the task. The task may be automatically selected based on a schedule prepared in advance.
Thereafter, the processing device 150 executes the steps S1 to S5 in the same manner as the display control method M1 or M2. In the step S1, the processing device 150 reads the origin data 172. In the step S3, the processing device 150 reads the fastening location data 173.
The origin data 172 includes a method for identifying the origin. As the method for identifying the origin, a method using a marker or a method using a hand gesture is registered.
The fastening location data 173 includes the ID of each fastening location, the position of each fastening location, the angle of the extension bar, the model of the tool used, the required torque value, the required number of screw-tightening operations, the color of the mark, and the ID of each virtual object. The processing device 150 acquires data related to the task to be performed from the fastening location data 173.
The ID of the fastening location is unique identification information (a character string) for identifying each fastening location. The positions of the fastening locations are registered as coordinates in the three-dimensional coordinate system based on the origin. The model of the tool indicates the classification of the tool by structure, appearance, performance, etc. For example, the length of the extension bar is identified from the model of the extension bar. The angle indicates the limit value of the angle at which the extension bar can be fitted with the screw when the screw is turned to the fastening location.
During the task, a mark may be attached when the screw is turned to the fastening location. The “mark color” represents the color of the mark attached to each fastening location. If different colored marks are attached according to the number of times a screw is turned, the color of the mark for each count is registered. The virtual object ID is a character string for identifying the data of the pre-registered virtual object, and the virtual object ID is associated with each fastening location. The object shape is the shape of the displayed virtual object corresponding to each fastening location. The display mode is the color and size of the virtual object to be displayed.
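The items listed above may be held, for example, in a record such as the following Python sketch; the field names are illustrative, only mirroring the items described, and do not represent the actual format of the fastening location data 173.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class FasteningLocation:
        location_id: str                       # unique ID (character string) of the fastening location
        position: Tuple[float, float, float]   # coordinates in the marker-based coordinate system
        extension_bar_angle: float             # limit angle for fitting the extension bar
        tool_model: str                        # model of the tool to be used
        required_torque: float                 # required torque value
        required_count: int                    # required number of screw-tightening operations
        mark_colors: List[str]                 # mark color for each tightening count
        virtual_object_id: str                 # ID of the pre-registered virtual object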
After the step S5, the display direction control (step S10) shown in
In the step S33, the processing device 150 associates the torque detected by the tool with the ID of the fastening location where the screw is estimated to be turned, and stores them in the history data 174. As shown in
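A possible form of one entry of the history data 174 is sketched below in Python; the pass/fail determination mirrors the description of the torque check above, while the timestamp field and all names are assumptions for illustration only.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class HistoryRecord:
        location_id: str     # fastening location where the screw is estimated to be turned
        torque: float        # torque value received from the digital tool
        passed: bool         # whether the required torque was detected
        timestamp: datetime  # when the detection value was received

    def record_tightening(history, location_id, torque, required_torque):
        # Appends one entry to the history data when a detection value arrives.
        history.append(HistoryRecord(location_id, torque,
                                     torque >= required_torque, datetime.now()))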
After the step S33, the processing device 150 executes the step S6. Until the task is completed, the control of display direction, the control of display position, and the estimation of the location being worked on are repeatedly performed. The order of execution of the steps S10, S20, and S31 can be changed appropriately. The step S31 may be executed before the steps S10 and S20. The steps S10, S20, and S31 may be executed in parallel.
The task data 171, the origin data 172, the fastening location data 173, and the history data 174 are stored in the storage device 170 of the MR device 100. Alternatively, the task data 171, the origin data 172, the fastening location data 173, and the history data 174 may be stored in a storage area other than the MR device 100. In such a case, the processing device 150 accesses the task data 171, the origin data 172, the fastening location data 173, and the history data 174 via wireless communication or a network.
In the examples described above, cases in which the embodiments are applied to the task of tightening a screw have been described. The embodiments of the present invention may also be applied to the task of loosening a screw. When loosening the screw, as shown in
For example, a computer 90 shown in
The ROM 92 stores programs that control the operations of the computer 90. Programs that are necessary for causing the computer 90 to realize the processing described above are stored in the ROM 92. The RAM 93 functions as a memory region into which the programs stored in the ROM 92 are loaded.
The CPU 91 includes a processing circuit. The CPU 91 uses the RAM 93 as working memory to execute the programs stored in at least one of the ROM 92 or the storage device 94. When executing the programs, the CPU 91 controls the various components via a system bus 98 and performs various processing.
The storage device 94 stores data necessary for executing the programs and data obtained by executing the programs. The storage device 94 includes a solid state drive (SSD), etc. The storage device 94 may be used as the storage device 170.
The input interface (I/F) 95 can connect the computer 90 to input devices. The CPU 91 can read various data from input devices via the input I/F 95. The output interface (I/F) 96 can connect the computer 90 to output devices. The CPU 91 can transmit data to output devices (e.g., the projection devices 121 and 122) via the output I/F 96 and can cause the output devices to display information.
The communication interface (I/F) 97 can connect the computer 90 and a device outside the computer 90. For example, the communication I/F 97 connects the digital tool and the computer 90 by Bluetooth (registered trademark) communication.
The data processing of the processing device 150 may be performed by only one computer 90. A portion of the data processing may be performed by a server or the like via the communication I/F 97.
The processing of the various data described above may be recorded, as a program that can be executed by a computer, in a magnetic disk (a flexible disk, a hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-R, DVD-RW, etc.), semiconductor memory, or another non-transitory computer-readable storage medium.
For example, the information that is recorded in the recording medium can be read by the computer (or an embedded system). The recording format (the storage format) of the recording medium is arbitrary. For example, the computer reads the program from the recording medium and causes a CPU to execute the instructions recited in the program based on the program. In the computer, the acquisition (or the reading) of the program may be performed via a network.
Embodiments of the present invention include the following features.
A mixed reality device capable of superimposing a virtual space on a real space, configured to:
The mixed reality device according to feature 1, wherein
The mixed reality device according to feature 2, wherein
The mixed reality device according to any one of features 1 to 3, wherein
The mixed reality device according to feature 4, wherein
The mixed reality device according to feature 4, wherein
The mixed reality device according to feature 4, wherein
The mixed reality device according to feature 4, wherein
The mixed reality device according to feature 4, wherein
The mixed reality device according to feature 9, wherein
A mixed reality device capable of superimposing a virtual space on a real space, configured to:
The mixed reality device according to feature 11, wherein
The mixed reality device according to feature 11 or 12, wherein
A display control method for a mixed reality device capable of superimposing a virtual space on a real space, comprising:
A display control method for a mixed reality device capable of superimposing a virtual space on a real space, comprising:
A program causing a mixed reality device to execute the display control method according to feature 14 or 15.
A non-transitory computer-readable storage medium storing the program according to feature 16.
Examples in which each embodiment of the present invention is applied to a task have been described above. However, each embodiment can be applied to more than just tasks. That is, control of the display direction or display position can be applied to any virtual object. By applying the display direction or display position control to a virtual object displayed at a predetermined position in the real space, the virtual object becomes easier to see for the wearer of the MR device 100. According to each embodiment, an MR device 100 that is easy to use can be provided.
In this specification, “or” indicates that “at least one” of the items listed in the sentence can be adopted.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention. Moreover, above-mentioned embodiments can be combined mutually and can be carried out.
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2023-176197, filed on Oct. 11, 2023, the entire contents of which are incorporated herein by reference.