This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2023-176225, filed on Oct. 11, 2023; the entire contents of which are incorporated herein by reference.
In tasks performed on articles, screws may be turned. There is a need for technology that can increase the efficiency of such screw-turning tasks.
According to one embodiment, a mixed reality device is configured to display a virtual object corresponding to a fastening location where a screw is turned. The virtual object is displayed at a position away from the fastening location. The mixed reality device is configured to change a display position of the virtual object with respect to the fastening location according to a physique of a wearer.
Various embodiments will be described hereinafter with reference to the accompanying drawings. The drawings are schematic and conceptual; and the relationships between the thickness and width of portions, the proportions of sizes among portions, etc., are not necessarily the same as the actual values thereof. Further, the dimensions and proportions may be illustrated differently among drawings, even for identical portions. In the specification and drawings, components similar to those described or illustrated in a drawing thereinabove are marked with like reference numerals, and a detailed description is omitted as appropriate.
Embodiments of the present invention relate to mixed reality (MR) devices. For example, as shown in
In the illustrated example, the MR device 100 is a binocular-type head-mounted display. Two lenses 111 and 112 are embedded in the frame 101. The projection devices 121 and 122 project information onto lenses 111 and 112, respectively.
The projection device 121 and the projection device 122 display the recognition result of the worker's body, a virtual object, etc. on the lens 111 and the lens 112. Only one of the projection device 121 and the projection device 122 may be provided, and information may be displayed on only one of the lens 111 and the lens 112.
The lens 111 and the lens 112 are transparent. The worker can see the real-space environment through the lens 111 and the lens 112. The worker can also see the information projected onto the lens 111 and the lens 112 by the projection device 121 and the projection device 122. The projections by the projection device 121 and the projection device 122 display information overlaid on the real space.
The image camera 131 detects visible light and acquires a two-dimensional image. The depth camera 132 emits infrared light and acquires a depth image based on the reflected infrared light. The sensor 140 is a 6-axis detection sensor, and can detect 3-axis angular velocity and 3-axis acceleration. The microphone 141 accepts voice input.
The processing device 150 controls each element of the MR device 100. For example, the processing device 150 controls the display by the projection device 121 and the projection device 122. The processing device 150 detects the movement of the field of view based on the detection result by the sensor 140. The processing device 150 changes the display by the projection device 121 and the projection device 122 in response to the movement of the field of view. In addition, the processing device 150 can perform various processes using data obtained from the image camera 131 and the depth camera 132, the data of the storage device 170, etc.
The battery 160 supplies the power necessary for operation to each element of the MR device 100. The storage device 170 stores data necessary for the processing of the processing device 150, data obtained by the processing of the processing device 150, etc. The storage device 170 may be provided outside the MR device 100 and communicate with the processing device 150.
Not limited to the illustrated example, the MR device according to the embodiment may be a monocular-type head-mounted display. The MR device may be a glasses-type as illustrated, or may be a helmet type.
For example, a screw-fastening task is performed on the article 200 shown in
A marker 210 is provided near the workpiece. In the illustrated example, the marker 210 is an AR marker. As described below, the marker 210 is provided for setting an origin of the three-dimensional coordinate system. Instead of the AR marker, a one-dimensional code (barcode), a two-dimensional code (QR code (registered trademark)), or the like may be used as the marker 210. Alternatively, instead of a marker, the origin may be indicated by a hand gesture. The processing device 150 sets the three-dimensional coordinate system based on a plurality of points indicated by the hand gesture.
Here, an example in which a screw is tightened using the MR device 100 shown in
The image camera 131 and the depth camera 132 image the article 200, the worker's left hand 251, and the worker's right hand 252. The processing device 150 recognizes the left hand 251 and the right hand 252 from the captured image. The processing device 150 may display the recognition results on the lens 111 and the lens 112 by the projection device 121 and the projection device 122. Hereinafter, the operation in which the processing device displays information on the lens using the projection device is also referred to simply as "the processing device displays information".
For example, as shown in
When the left hand 251 and the right hand 252 are recognized, the processing device 150 measures the coordinates of each hand. Specifically, the hand includes multiple joints, such as DIP joints, PIP joints, MP joints, CM joints, and so on. The coordinates of any of these joints are used as the coordinates of the hand. The position of the center of gravity of the multiple joints may also be used as the coordinates of the hand. Alternatively, the overall center coordinates of the hand may be used as the coordinates of the hand.
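One of the options described above — using the center of gravity of multiple joints as the coordinates of the hand — can be sketched as follows. The joint names and data layout are illustrative assumptions; the device may instead use any single joint or the overall center of the hand.

```python
# Sketch: the hand coordinates as the centroid (center of gravity)
# of multiple joint positions. Joint names and the data layout are
# hypothetical examples.

def hand_coordinates(joints):
    """Return the centroid of the given 3-D joint positions."""
    n = len(joints)
    xs, ys, zs = zip(*joints.values())
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# Hypothetical joint positions for one finger (meters).
joints = {
    "MP":  (0.0, 0.0, 0.0),
    "PIP": (0.0, 2.0, 0.0),
    "DIP": (0.0, 4.0, 0.0),
}
print(hand_coordinates(joints))  # (0.0, 2.0, 0.0)
```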
As shown in
The virtual objects 301 to 308 respectively indicate the positions where the hand should be located when tightening the screws into the fastening locations 201 to 208. The virtual objects 311 to 318 respectively indicate the positions where the extension bar should be located when tightening the screws at the fastening locations 201 to 208. For example, the distance between the fastening locations 201 to 208 and the virtual objects 301 to 308 corresponds to the length of the extension bar.
For example, as shown in
At this time, the worker places the extension bar 290 so that the extension bar 290 is in close proximity or contact with the virtual object 314. The worker also holds the head of the wrench 280 so that the hand comes into contact with the virtual object 304. By displaying the virtual objects, the worker can easily understand the positions where the tool and the hand should be located when turning the screw to the fastening location 204. Thereby, the work efficiency can be improved.
In the illustrated example, the virtual objects 301 to 308 are spherical, and the virtual objects 311 to 318 are rod-shaped. As long as the worker can see each virtual object, the shape of each object is not limited to this example. For example, the virtual objects 301 to 308 may be cubes, and the virtual objects 311 to 318 may be linear.
The processing device 150 may change the display positions of the virtual objects 301 to 308 and the virtual objects 311 to 318 according to the physique of the wearer of the MR device 100. The wearer of the MR device 100 is a worker. The "physique" referred to in the adjustment of the display position is the arm length. The "arm length" is the length from the shoulder to the hand in an outstretched state. In addition to the arm length, other data regarding the physique may also be referred to.
The first worker 250a is tall and has long arms. Therefore, as shown in
On the other hand, the arms of the second worker 250b are shorter than the arms of the first worker 250a. In order to locate the hand directly above the fastening location 204, it is necessary to fully extend the arm. However, if the arm is fully extended, the task of turning screws becomes difficult. During the task, the posture becomes unstable, and the second worker 250b may fall over or fall down. Therefore, when the second worker 250b works, the hand of the second worker 250b is located further forward than the hand of the first worker 250a.
In addition, the height of the second worker 250b is less than that of the first worker 250a. Thus, the hand of the second worker 250b is located at a lower position than the hand of the first worker 250a. The second worker 250b uses an extension bar 290b that is shorter than the extension bar 290a. The second worker 250b uses the extension bar 290b tilted within the range in which the extension bar 290b can be fitted to the screw.
In such a case, the processing device 150 displays the virtual object 306 obliquely above the fastening location 206. Additionally, the processing device 150 displays the virtual object 316 at an inclination with respect to the vertical direction. The display position (height) of the virtual object 306 shown in
As shown in
The processing device 150 may change the display positions of the virtual objects 301 to 308 and the virtual objects 311 to 318 according to the positional relationships between the fastening locations 201 to 208 and the MR device 100. After the three-dimensional coordinate system is set based on the marker 210, the processing device 150 continuously calculates the position of the MR device 100 in the three-dimensional coordinate system. Further, the positions of the fastening locations 201 to 208 in the three-dimensional coordinate system are registered in advance. Using this data, the processing device 150 calculates the positional relationships between each of the fastening locations 201 to 208 and the MR device 100.
The method for calculating the position of the MR device 100 is freely selectable, and a conventional positioning method may be used. As an example, the processing device 150 calculates the position and direction of the MR device 100 using the spatial mapping function. In the MR device 100, the distances to the surrounding objects of the MR device 100 are measured by the depth camera 132. From the measurement results by the depth camera 132, surface information of the surrounding objects can be obtained. Surface information includes the positions and directions of the surfaces of the objects. For example, the surface of each object is represented by multiple meshes, and the position and direction are calculated for each mesh. The processing device 150 calculates the relative position and relative direction of the MR device 100 with respect to the surfaces of the surrounding objects from the surface information. When the marker 210 is recognized, the position of each surface is also represented by the three-dimensional coordinate system based on the marker 210. The position and direction of the MR device 100 are calculated based on the positional relationships between the surfaces of the objects and the MR device 100.
The spatial mapping is performed repeatedly at predetermined intervals. Each time the spatial mapping is performed, the surface information of the surrounding objects is obtained. The processing device 150 calculates changes in surface positions and directions between the present spatial mapping result and the last spatial mapping result. When the surrounding objects are not moving, the changes in the positions and directions of the surfaces correspond to the changes in the position and direction of the MR device 100. The processing device 150 calculates the change amounts of the position and direction of the MR device 100 based on the changes in the positions of the surfaces, the changes in the directions of the surfaces, the detection results of the sensor 140, etc. The processing device 150 updates the position and direction of the MR device 100 based on the obtained change amounts.
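The translation part of this pose update can be sketched minimally: when the surrounding surfaces are static, their apparent displacement between two spatial-mapping results (measured in the device's own frame) is the negative of the device's displacement. The mesh IDs and data layout are assumptions; the actual device also fuses the detection results of the sensor 140 and updates direction as well.

```python
# Sketch: estimating the device's change in position from two
# successive spatial-mapping results, assuming static surroundings.
# Mesh IDs and data layout are hypothetical.

def device_translation(prev_meshes, curr_meshes):
    """Average displacement of matched meshes; the device moved by its negative."""
    common = prev_meshes.keys() & curr_meshes.keys()
    n = len(common)
    dx = sum(curr_meshes[k][0] - prev_meshes[k][0] for k in common) / n
    dy = sum(curr_meshes[k][1] - prev_meshes[k][1] for k in common) / n
    dz = sum(curr_meshes[k][2] - prev_meshes[k][2] for k in common) / n
    return (-dx, -dy, -dz)  # device moved opposite to the apparent surface motion

prev = {"m1": (1.0, 0.0, 2.0), "m2": (0.0, 1.0, 2.0)}
curr = {"m1": (0.9, 0.0, 2.0), "m2": (-0.1, 1.0, 2.0)}
print(device_translation(prev, curr))  # roughly (0.1, 0.0, 0.0)
```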
In
As shown in
One advantage of the embodiment will be described.
As shown in
Regarding this problem, in the embodiment of the present invention, the display position of the virtual object with respect to the fastening location is changed according to the physique of the worker (the wearer of the MR device 100). By changing the display position of the virtual object, each worker can work in a more appropriate posture. This can improve work efficiency. In addition, workers can work more safely.
As a specific example, as shown in
Another advantage of the embodiment will be described.
Depending on the size or shape of the article, the fastening location may be at a position that is difficult to reach. In such a case, it may be difficult for a worker to locate the hand at the displayed position of the virtual object. As a result of locating the hand at the display position of the virtual object in an unsuitable posture, work efficiency may decrease or the worker may fall.
Regarding this problem, in the embodiment of the present invention, the display position of the virtual object with respect to the fastening location is changed according to the positional relationship between the fastening location and the MR device 100. By changing the display position of the virtual object according to the positional relationship, each worker can work in a more appropriate posture. This allows workers to work more safely. In addition, work efficiency can be improved.
As a specific example, in the state shown in
The method of changing the display position is freely selectable. For example, the standard arm length is registered in advance, and the horizontal position of the virtual object changes according to the difference between the standard arm length and the worker's arm length. Additionally, the longer the distance between the standard position of the virtual object and the MR device 100, the greater the change amount of the display position of the virtual object.
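The two rules above — shifting by the difference from a standard arm length, scaled by the distance to the MR device 100 — can be combined into a small sketch. The gain and the linear form of the adjustment are assumptions for illustration only.

```python
# Sketch: horizontal display-position adjustment. The standard arm
# length, the gain, and the linear combination are hypothetical.

def adjusted_horizontal_position(standard_pos, device_pos,
                                 standard_arm, worker_arm, gain=0.5):
    """Shift the virtual object horizontally toward the device (worker)
    when the worker's arm is shorter than the standard arm length.
    The shift grows with the distance between the standard display
    position and the device, so farther objects move more."""
    dx = device_pos[0] - standard_pos[0]
    dy = device_pos[1] - standard_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0:
        return standard_pos
    shift = gain * (standard_arm - worker_arm) * dist
    return (standard_pos[0] + dx / dist * shift,
            standard_pos[1] + dy / dist * shift)

# A worker whose arm is 0.1 m shorter than standard, 1 m from the object:
print(adjusted_horizontal_position((0.0, 0.0), (1.0, 0.0), 0.70, 0.60))
# about (0.05, 0.0): the object moves ~5 cm toward the worker
```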
In order to display the virtual object at a more desirable position, the following process may be executed. First, the processing device 150 calculates the first range in which work can be performed for the fastening location, based on the position of the fastening location, the length of the extension bar used, and the allowable angle of the extension bar. As a specific example, as shown in
Next, based on the position of the MR device 100 and the length of the worker's arm, the second range R2 in which the worker can work is calculated. The processing device 150 calculates ranges in which each of the first ranges R1a to R1d and the second range R2 overlap. The processing device 150 adopts positions closest to the MR device 100 in the overlapping ranges as display positions of the virtual objects. As shown in
When the first range and the second range do not overlap, the processing device 150 does not display the virtual object. The fact that the first and second ranges do not overlap indicates that the position of the worker with respect to the fastening location to be worked is inappropriate. If the worker performs the task in an inappropriate position, work efficiency or safety may be reduced. When the first range and the second range do not overlap, the processing device 150 may output an alert encouraging the worker to move to a more appropriate position.
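The steps above can be sketched under simplifying assumptions: the first range is reduced to a sphere of extension-bar length around the fastening location (ignoring the allowable bar angle) and the second range to a ball of arm reach around the MR device. These simplifications, and the alert behavior returning `None`, are illustrative only.

```python
import math

def display_position(fastening, bar_len, device, reach):
    """Pick the display position: the point of the (simplified) first
    range closest to the MR device, provided it also lies within the
    (simplified) second range. Returns None when the ranges do not
    overlap, in which case an alert would be shown instead."""
    d = math.dist(fastening, device)
    if d == 0.0 or abs(d - bar_len) > reach:
        return None  # first and second range do not overlap
    t = bar_len / d  # closest point lies on the line toward the device
    return tuple(f + t * (v - f) for f, v in zip(fastening, device))

pos = display_position((0.0, 0.0, 0.0), 0.3, (1.0, 0.0, 0.0), 0.8)
print(pos)  # (0.3, 0.0, 0.0): one bar length from the fastening location
```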
After the virtual objects are displayed, the processing device 150 may determine whether a prescribed physical object comes into contact with the virtual objects 301 to 308. For example, the processing device 150 determines whether the hand comes into contact with the virtual object. Specifically, the processing device 150 calculates the distances between the coordinates of the hand and each of the virtual objects 301 to 308. When any distance is less than a preset threshold, the processing device 150 determines that the hand comes into contact with that virtual object. As an example, in
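The distance-threshold test described above can be sketched as follows. The 5 cm threshold and the data layout are illustrative assumptions, not values from the embodiment.

```python
import math

def contacted_object(hand, virtual_objects, threshold=0.05):
    """Return the ID of a virtual object whose center is closer to the
    hand coordinates than the threshold, or None if no object is
    contacted. Threshold and layout are hypothetical."""
    for obj_id, center in virtual_objects.items():
        if math.dist(hand, center) < threshold:
            return obj_id
    return None

objects = {"304": (0.0, 0.0, 1.0), "305": (0.5, 0.0, 1.0)}
print(contacted_object((0.01, 0.0, 1.0), objects))  # 304
print(contacted_object((0.2, 0.0, 1.0), objects))   # None
```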
The processing device 150 may determine whether the tool comes into contact with the virtual objects 301 to 308. For example, as shown in
When the prescribed physical object comes into contact with one of the virtual objects 301 to 308, it can be estimated (inferred) that the screw is being turned to the fastening location corresponding to the one of the virtual objects 301 to 308. Hereinafter, among one or more fastening locations, the fastening location that is estimated to be being worked on based on the contact between the prescribed physical object and the virtual object is referred to as a “work location”. In the case where the work location is estimated, the processing device 150 may record that the screw has been turned to the work location. Thus, a task record can be automatically generated.
The tool used may be a digital tool capable of detecting torque. In such a case, the processing device 150 receives the detected torque from the tool. The torque required for fastening may be set in advance. The tool may determine whether or not the required torque has been detected, and transmit the determination result to the processing device 150. In addition, the tool transmits the rotation angle, the time when the torque is detected, etc. to the processing device 150. The processing device 150 associates the data received from the tool with data related to the work location. Thus, a more detailed task record is automatically generated.
In the processing method M1 shown in
First, the processing device 150 accepts the selection of a task and a worker (step S1). In the selection of a task, the task data 171 is loaded. In the selection of a worker, the worker data 172 is loaded. The task data 171 includes task IDs, task names, article IDs, and article names. The processing device 150 can accept either a task ID, a task name, an article ID, or an article name as the selection of task. The worker data 172 includes worker IDs, worker names, and physiques. The processing device 150 can accept a worker ID or a worker name as the selection of worker.
For example, the task and the worker may be selected by a worker or by a higher-level system. Based on the data obtained from a sensor provided in the workplace or a reader provided in the workplace, the processing device 150 may determine the task and the worker. Based on a schedule prepared in advance, the task and the worker may be automatically selected.
When the task and the worker are selected, the processing device 150 reads the data stored in the fastening location data 173 (step S2). The fastening location data 173 includes a method for identifying the origin, an ID of each fastening location, the position of each fastening location, the model of the tool used, the angle of the extension bar, the required number of fastenings, the required torque, the color of a mark, and the ID of each virtual object.
As methods for identifying the origin, a method using a marker or a method using a hand gesture is registered. The ID of each fastening location is a unique character string for identifying each fastening location. The coordinates in the three-dimensional coordinate system based on the origin are registered as the position of the fastening location. The model of the tool indicates the classification of the tool by structure, appearance, performance, etc. For example, the length of the extension bar is specified from the model of the extension bar. The angle indicates the limit value of the angle of the extension bar that can be fitted to the screw when turning the screw to the fastening location.
During the work, a mark may be attached when the screw has been turned to the fastening location. The “mark color” represents the color of the mark attached to each fastening location. When a mark of a different color is attached according to the number of times the screw has been turned, the color of the mark for each number of times is registered. The ID of each virtual object is a character string for identifying the data of a virtual object registered in advance, and it is associated with each fastening location.
The processing device 150 calculates the first range based on the position of the fastening location, the length of the extension bar used, and the allowable angle of the extension bar (step S3). The first range indicates the positions of the hand where the screw can be turned to the fastening location without considering the worker's physique. The processing device 150 calculates the second range based on the position of the MR device 100 and the length of the worker's arm (step S4). The second range indicates the positions of the hand where the worker can work. The arm length is stored in advance in the worker data 172 as physique data.
As the physique data, data other than the arm length may be further referred to. For example, three data items are referred to: the arm length, the distance from the eyes to the chin, and the neck length. When the person extends the arm straight forward, the sum of the distance from the eyes to the chin and the length of the neck is approximately equal to the vertical distance between the shoulder and the MR device 100. Consider an imaginary triangle with the hand, the base of the neck, and the eyes as vertices. In that case, the distance between the hand and the MR device 100 can be calculated from the arm length, the distance from the eyes to the chin, and the length of the neck using trigonometry. The position of the MR device 100 is repeatedly calculated by spatial mapping. The processing device 150 can calculate the second range based on the position of the MR device 100 and the distance between the hand and the MR device 100.
As the physique data, a shoulder width may be referred to. The shoulder width is the distance between the left shoulder and the right shoulder. When the hand is extended straight in front, the distance in the left and right direction between the eyes (MR device 100) and the hand is approximately half the shoulder width. Consider an imaginary triangle with the hand, the eyes (MR device 100), and the shoulder as vertices. In that case, the distance between the hand and the MR device 100 can be calculated from the arm length and half the shoulder width using trigonometry. The processing device 150 can calculate the second range based on the position of the MR device 100 and the distance between the hand and the MR device 100. The second range may be calculated using four sets of data: arm length, distance from eyes to chin, neck length, and shoulder width.
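Under the approximations described above, the hand-to-device distance follows from the Pythagorean theorem: the forward leg is the arm length, the vertical leg the eye-to-chin distance plus the neck length, and the lateral leg half the shoulder width. Treating the three legs as mutually orthogonal is itself an approximation of the imaginary triangles described here.

```python
import math

def hand_device_distance(arm_len, eye_chin=0.0, neck_len=0.0, shoulder_w=0.0):
    """Approximate distance between the fully extended hand and the MR
    device, treating the arm-length, vertical, and lateral legs as
    mutually orthogonal (a sketch of the trigonometric calculation)."""
    vertical = eye_chin + neck_len
    lateral = shoulder_w / 2.0
    return math.sqrt(arm_len ** 2 + vertical ** 2 + lateral ** 2)

# 3-4-5 triangle check: arm 0.4 m, eyes-to-chin 0.2 m, neck 0.1 m
print(hand_device_distance(0.4, eye_chin=0.2, neck_len=0.1))  # about 0.5
```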
The processing device 150 calculates the range where the first range and the second range overlap (step S5). The processing device 150 calculates the closest position to the MR device 100 from the overlapping range (step S6). The processing device 150 displays the virtual object at the closest position (step S7).
When multiple virtual objects are displayed, the steps S3 to S7 are executed for each virtual object. In addition, after the virtual object is displayed, the steps S3 to S7 are repeated. Thereby, the display position of the virtual object is updated according to the positional relationship between the fastening location and the MR device 100.
After displaying the virtual object, the processing device 150 determines whether a prescribed physical object comes into contact with the virtual object (step S8). When the prescribed physical object comes into contact with the virtual object, the processing device 150 estimates the work location based on the determination result (step S9). That is, it is estimated that the screw is being turned to the fastening location corresponding to the virtual object. The processing device 150 stores a record of the task for the work location (step S10).
In the step S10, the processing device 150 associates the torque detected by the tool with the ID of the fastening location at which the screw is estimated to be turned, and stores them in the history data 174. As shown in
The processing device 150 may display information indicating the order of tasks on the virtual objects 301 to 308. For example, as shown in
The display of the virtual object corresponding to the fastening location where the screw has been turned may differ from the display of the virtual object corresponding to the fastening location where the screw has not been turned. In the example shown in
In the case where the screw is turned multiple times to one fastening location, the display of the virtual object may change according to the number of times the screw has been turned. For example, the color of the virtual object 301 and the color of the virtual object 305 shown in
The order of tasks may vary depending on the number of times the screw has been turned. For example,
The number of times the screw has been turned is counted based on the data stored in the history data 174. When a screw is turned to one fastening location, the task record indicating this is stored in the history data 174. The processing device 150 counts the number of times the screw has been turned for each fastening location from the record stored in the history data 174, and controls the display of each virtual object based on the counted number of times.
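The per-location counting from the history data can be sketched as below. The record layout (one dictionary per fastening event) is a hypothetical example of the stored task records.

```python
from collections import Counter

def fastening_counts(history):
    """Count, per fastening location, how many times a screw has been
    turned, from task records in the history data. Record layout is
    an illustrative assumption."""
    return Counter(rec["location_id"] for rec in history)

history = [
    {"location_id": "201", "torque": 50.1},
    {"location_id": "201", "torque": 50.3},
    {"location_id": "202", "torque": 49.8},
]
counts = fastening_counts(history)
print(counts["201"], counts["202"])  # 2 1
```

The display of each virtual object (color, order labels, etc.) would then be driven by these counts.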
The processing device 150 may display a virtual panel containing information related to the task. In the example shown in
As shown in
As an example, as shown in
The processing device 150 may check whether the screws have been appropriately turned to all fastening locations after the task is completed. Specifically, the processing device 150 receives a check instruction (step S11). The check instruction may be input by the worker or from a higher-level system. Upon receiving the instruction, the processing device 150 reads the data of the fastening location data 173 (step S12) and reads the data of the history data 174 (step S13).
The processing device 150 checks whether the screws have been appropriately turned to the fastening locations to be worked on (step S14). Specifically, the processing device 150 checks whether the screw has been turned the required number of times for each fastening location. In addition, the processing device 150 checks whether the required torque has been detected in each screw-tightening.
The processing device 150 determines whether there is an error based on the results of the checks (step S15). When the torque detected in any of the screw-tightening is less than the required torque registered in advance, or when the number of times the screw has been turned for any of the fastening locations is less than the required number of times registered in advance, the processing device 150 determines that there is an error in the task.
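The error determination above reduces to two checks per fastening location. The data layout (location ID mapped to the list of detected torques) is an illustrative assumption.

```python
def task_has_error(torques_by_location, required_count, required_torque):
    """Return True when any fastening location was fastened fewer times
    than the required number, or any recorded torque is below the
    required torque; otherwise False."""
    for torques in torques_by_location.values():
        if len(torques) < required_count:
            return True  # too few fastenings at this location
        if any(t < required_torque for t in torques):
            return True  # insufficient torque in some fastening
    return False

records = {"201": [50.2, 50.4], "202": [50.1, 49.0]}
print(task_has_error(records, required_count=2, required_torque=50.0))  # True
```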
When there is no error, the processing device 150 terminates the process. When there is an error, the processing device 150 displays, to the worker, the fastening location where the error is detected (step S16). For example, as shown in
Thereafter, when the screw has been turned to the fastening location where the error was detected, the processing device 150 checks the task (step S17). When it is verified that the screw has been turned appropriately, the processing device 150 terminates the process. By performing the check, it is possible to suppress the production of inappropriately worked articles 200. For example, the quality of the article 200 can be improved.
The acquisition system 1 shown in
The imaging device 2 images a worker and acquires an image. At least an image of the upper body of the worker is obtained. The imaging device 2 includes a camera. The processing device 3 receives the image acquired by the imaging device 2. The processing device 3 inputs the image into the pose estimation model. When an image is input, the pose estimation model estimates the posture of the person in the image. The posture is represented by the joints of the human body and the skeleton connecting the joints. The joints include the head, neck, shoulders, elbows, wrists, hips, knees, ankles, etc.
The pose estimation model preferably includes a neural network. More preferably, the pose estimation model includes a convolutional neural network (CNN). As the pose estimation model, OpenPose, DarkPose, CenterNet, etc. can be used.
The processing device 3 acquires the estimation result output by the pose estimation model. The processing device 3 calculates physique data from the estimated posture. For example, the worker's height, neck position, shoulder position, etc., may be calculated as physique data. The processing device 3 stores the calculated physique data in the worker data 172. By using the acquisition system 1, the physique data used in the processing device 150 can be automatically acquired.
The MR device 100 may have functions as the acquisition system 1. For example, the image camera 131 photographs the worker's arm while the worker fully extends the arm forward. The processing device 150 measures the coordinates of the worker's hand and calculates the distance between the MR device 100 and the coordinates of the hand. The distance is proportional to the worker's arm length and indicates the limit of the range in which the worker can work. The processing device 150 stores the distance in the worker data 172 as the data of the worker's arm length.
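The measurement described here is a straight Euclidean distance between the device position and the hand coordinates; a minimal sketch, with hypothetical coordinate values:

```python
import math

def measure_reach(device_pos, hand_pos):
    """Distance between the MR device and the measured hand coordinates
    while the arm is fully extended forward; stored in the worker data
    as a stand-in for the arm length (it is proportional to it)."""
    return math.dist(device_pos, hand_pos)

# Hypothetical positions (meters) of the device and the extended hand.
print(measure_reach((0.0, 1.6, 0.0), (0.0, 1.3, 0.6)))
```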
The processing device 150 may determine whether or not the worker's posture is appropriate instead of controlling the display position of the virtual object. Alternatively, the processing device 150 may perform posture determination in addition to controlling the display position of the virtual object.
Specifically, the processing device 150 estimates the posture of the worker based on the position of the MR device 100, the position of the worker's hand, and the worker's physique. The processing device 150 determines whether the estimated posture is appropriate. For example, when the posture is inappropriate for the task or the posture is unsafe, the processing device 150 determines that the posture is not appropriate.
In estimating the posture, the processing device 150 measures the position of the MR device 100 and the position of each of the worker's hands. The processing device 150 calculates the distance between the MR device 100 and each hand. Further, the processing device 150 refers to the arm length stored in the worker data 172. The processing device 150 sets the first range based on the arm length. The first range may be set using the distance from the eyes to the chin, neck length, shoulder width, etc., in addition to the arm length. The processing device 150 compares the calculated distance with the first range.
The distance between the MR device 100 and the hand becomes longer as the worker extends the arm. The closer the distance is to the pre-registered arm length, the closer the worker's arm is to being fully extended. When the distance is short compared to the pre-registered arm length, the worker's hand is located close to the body. For example, the first range is calculated by multiplying the pre-registered arm length by predetermined ratios. The predetermined ratios may be set appropriately from the viewpoint of safety or task efficiency. For example, as the lower limit of the first range, a value of 0.5 times the arm length is set. As the upper limit of the first range, a value of 0.8 times the arm length is set. When the distance falls outside the first range, it indicates that the remaining range of motion of the worker's arm is narrow and the posture is inappropriate for the task. Therefore, when any distance falls outside the first range, the processing device 150 determines that the worker's posture is not appropriate.
When the worker almost fully extends both arms, it is determined that both distances fall outside the first range. When the posture is determined to be inappropriate, the processing device 150 displays an alert 350. As an example, the alert 350 includes a cause 351 explaining why the posture was determined to be inappropriate and an instruction 352 to the worker. By displaying the alert 350, the worker can be encouraged to work in a more appropriate posture.
The processing device 150 may further use the inclination of the MR device 100 to determine whether the posture is appropriate. The inclination of the MR device 100 is detected by the sensor 140. Here, the inclination refers to the direction (angle) of the MR device 100 with respect to a reference direction. The reference direction is the orientation of the MR device 100 when the worker is facing horizontally. The processing device 150 determines that the posture is not appropriate when the distance between the MR device 100 and the hand falls outside the first range or when the inclination exceeds a first threshold. The first threshold is set appropriately from the viewpoint of task safety. For example, any value in the range of 15 degrees to 45 degrees is set as the first threshold. When the inclination exceeds the first threshold, it indicates that the worker is leaning forward. When the worker leans forward, the worker may fall over. When the task is carried out at a high position, the worker may fall down.
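Combining the two criteria, the overall posture judgment might be sketched as below. The threshold value and all names are assumptions for illustration; the text specifies only that the first threshold lies in the range of 15 to 45 degrees.

```python
FIRST_THRESHOLD_DEG = 30.0  # assumed value within the 15-45 degree range

def posture_is_appropriate(hand_distances_in_range, inclination_deg,
                           first_threshold_deg=FIRST_THRESHOLD_DEG):
    """Posture is inappropriate when any hand distance falls outside the
    first range or the device inclination exceeds the first threshold."""
    if not all(hand_distances_in_range):
        return False
    return inclination_deg <= first_threshold_deg
```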
When the posture is appropriate, the processing device 150 does not display any information. Alternatively, the processing device 150 may display information indicating that the posture is appropriate.
In the processing method M2 shown in
First, the processing device 150 executes steps S1 and S2 as in the processing method M1. Next, the processing device 150 displays a virtual object based on the data stored in the fastening location data 173 (step S7). The processing device 150 determines whether a prescribed physical object comes into contact with the virtual object (step S8).
When the prescribed physical object comes into contact with the virtual object, the processing device 150 measures the position of the hand. The processing device 150 estimates the worker's posture based on the position of the MR device 100, the position of the worker's hand, and the worker's physique (step S21). The processing device 150 determines whether the estimated posture is appropriate (step S22). Specifically, a first determination of whether or not the distance between the MR device 100 and the hand falls outside the first range and a second determination of whether or not the inclination exceeds the first threshold are performed.
When the posture is determined to be inappropriate based on the results of the first determination and the second determination, the processing device 150 prohibits the execution of the task (step S23). The processing device 150 determines the display content (step S24). The display content includes, for example, the danger of the inappropriate posture, instructions for making the posture more appropriate, etc., as shown in
Two or more ranges to be compared with the distance may be set. Two or more thresholds to be compared with the inclination may be set. In such a case, the display content may be changed according to the result of comparing the distance with each range or the result of comparing the inclination with each threshold. For example, the further the distance deviates from the ranges or the more the inclination exceeds the thresholds, the more strongly the output alert emphasizes the danger.
As an example, one range to be compared with the distance is set, and two thresholds (a first threshold and a second threshold) to be compared with the inclination are set. The second threshold is larger than the first threshold. In step S24, the processing device 150 determines the display content based on the results of the first determination and the second determination. For example, when the distance exceeds the upper limit of the first range and the inclination exceeds the first threshold, the processing device 150 displays an alert 350a indicating that the worker's position is too far away, as shown in
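As one possible reading of the example above, the selection of display content in step S24 could be sketched as follows. The alert texts and the handling of the second threshold are assumptions for illustration only; the text specifies only the alert 350a case.

```python
def select_display_content(distance, first_range, inclination,
                           first_threshold, second_threshold):
    """Choose display content from the first/second determinations.
    first_range is a (lower, upper) tuple; thresholds are in degrees."""
    lower, upper = first_range
    if inclination > second_threshold:
        # Assumed case: a stronger alert for a larger inclination.
        return "strong alert: worker is leaning dangerously far forward"
    if distance > upper and inclination > first_threshold:
        # Corresponds to the alert 350a in the text.
        return "alert 350a: worker's position is too far away"
    if distance < lower or distance > upper or inclination > first_threshold:
        return "alert 350: posture is inappropriate"
    return None  # posture appropriate; the task may proceed (step S26)
```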
When the posture is determined to be appropriate in step S22, the processing device 150 allows the task to proceed (step S26). After step S25 or S26, the processing device 150 estimates the task location (step S9). That is, it is estimated that a screw is being turned at the fastening location corresponding to the virtual object with which the prescribed physical object came into contact. Thereafter, step S10 is executed as in the processing method M1. By determining whether the task posture is appropriate, the worker can be encouraged to work in a safer or more efficient posture. As a result, the safety or efficiency of the task can be improved.
The processing method M1 shown in
Here, an example is mainly described in which an embodiment of the present invention is applied to the task of tightening a screw. Embodiments of the present invention may also be applied to the task of loosening a screw. When loosening a screw as well, a tool is used and the screw is turned, as shown in
For example, a computer 90 shown in
The ROM 92 stores programs that control the operations of the computer 90. Programs that are necessary for causing the computer 90 to realize the processing described above are stored in the ROM 92. The RAM 93 functions as a memory region into which the programs stored in the ROM 92 are loaded.
The CPU 91 includes a processing circuit. The CPU 91 uses the RAM 93 as working memory to execute the programs stored in at least one of the ROM 92 or the storage device 94. When executing the programs, the CPU 91 controls the various components via a system bus 98 and performs various processing.
The storage device 94 stores data necessary for executing the programs and data obtained by executing the programs. The storage device 94 includes a solid state drive (SSD), etc. The storage device 94 may be used as the storage device 170.
The input interface (I/F) 95 can connect the computer 90 to input devices. The CPU 91 can read various data from input devices via the input I/F 95. The output interface (I/F) 96 can connect the computer 90 and output devices. The CPU 91 can transmit data to output devices (e.g., the projection devices 121 and 122) via the output I/F 96 and can display the information on the output devices.
The communication interface (I/F) 97 can connect the computer 90 and a device outside the computer 90. For example, the communication I/F 97 connects the digital tool and the computer 90 via Bluetooth (registered trademark) communication.
The data processing of the processing device 3 or the processing device 150 may be performed by only one computer 90. A portion of the data processing may be performed by a server or the like via the communication I/F 97.
The processing of the various data described above may be recorded, as a program that can be executed by a computer, in a magnetic disk (a flexible disk, a hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-R, DVD-RW, etc.), semiconductor memory, or another non-transitory computer-readable storage medium.
For example, the information that is recorded in the recording medium can be read by the computer (or an embedded system). The recording format (the storage format) of the recording medium is arbitrary. For example, the computer reads the program from the recording medium and causes a CPU to execute the instructions recited in the program based on the program. In the computer, the acquisition (or the reading) of the program may be performed via a network.
Embodiments of the present invention include the following features.
A mixed reality device, configured to display a virtual object corresponding to a fastening location where a screw is turned, the virtual object being displayed at a position away from the fastening location,
The mixed reality device according to feature 1, wherein
The mixed reality device according to feature 1, wherein
A mixed reality device, configured to display a virtual object corresponding to a fastening location where a screw is turned, the virtual object being displayed at a position away from the fastening location,
The mixed reality device according to feature 4, wherein
A mixed reality device configured to display a virtual object corresponding to a fastening location where a screw is turned, the virtual object being displayed at a position away from the fastening location,
The mixed reality device according to any one of features 1 to 6, wherein
The mixed reality device according to any one of features 1 to 6, wherein
The mixed reality device according to any one of features 1 to 6, wherein
The mixed reality device according to feature 9, wherein
The mixed reality device according to feature 10, wherein
An acquisition system configured to acquire the physique referred to by the mixed reality device according to feature 1 or 6,
A processing method, causing a mixed reality device to:
A program, causing a computer to execute the processing method according to feature 13.
A non-transitory computer-readable storage medium, storing the program according to feature 14.
According to the embodiments described above, a mixed reality device, a processing method, a program, and a storage medium capable of improving task efficiency are provided. In addition, an acquisition system is provided that is capable of automatically acquiring the physique data referenced when using the mixed reality device, the processing method, the program, and the storage medium.
In the present specification, “or” indicates that “at least one” of the items listed in the text can be adopted.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention. Moreover, above-mentioned embodiments can be combined mutually and can be carried out.
Number | Date | Country | Kind |
---|---|---|---|
2023-176225 | Oct 2023 | JP | national |