This application claims priority to Japanese Patent Application No. 2023-024009, filed Feb. 20, 2023, the disclosure of which is incorporated by reference herein in its entirety.
The present disclosure relates to a work training support system and a work training support method.
A technique is known that uses a model generated by machine learning to estimate how a skilled worker would act on events that occur during work (e.g., Patent Literature 1).
Patent Literature 1: WO-2019/093386-A
It is not easy to create a manual that accurately conveys the skill of a skilled worker to a trainee, so raising the trainee's skill to the level of the skilled worker requires trial and error. A technique capable of efficiently increasing the proficiency of the trainee is therefore desired.
One aspect of the present disclosure provides a work training support system. The work training support system includes: a detector detecting a movement of a tool during a first period, wherein the first period is a period in which an instructor performs work using the tool; a controller programmed to generate a first image using a detection result of the detector during the first period, wherein the first image represents the movement of the tool during the first period; and a head mounted display worn on a head of a trainee, wherein the head mounted display displays the first image in a field of view of the trainee during a second period, wherein the second period is a period in which the trainee performs the work using the tool.
The work training support system 10 includes a camera 100, a sensor 150, a controller 200, and a display 300. The camera 100, the sensor 150, and the display 300 are connected to the controller 200 through wired communication or wireless communication.
The camera 100 and the sensor 150 detect the movement of the tool TL during the work. The camera 100 is fixed at a position from which it can photograph the work. The camera 100 detects the position and orientation of the tool TL by a motion capture technique. In the present embodiment, the camera 100 detects the position and orientation of the tool TL by reading a marker MK fixed to the tool TL. The sensor 150 is fixed to the tool TL. The sensor 150 detects predetermined movements of the tool TL other than its position and orientation. In the present embodiment, the tool TL is a seal gun that includes a lever and a nozzle and injects sealant from the nozzle in an amount corresponding to the grip amount of the lever. The sensor 150 includes a position sensor that detects the position of the lever, in other words, the grip amount of the lever, and a pressure sensor that detects the pressure applied to the nozzle when the nozzle is pressed against the workpiece WK. The detected movement of the tool TL is transmitted from the camera 100 and the sensor 150 to the controller 200. Incidentally, the camera 100 and the sensor 150 are sometimes collectively referred to as a detector.
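The detector output described above can be thought of as a time series that pairs each camera pose with the nearest sensor reading. The following is a minimal sketch of such a record, assuming illustrative field names and data layouts that do not appear in the source; the quaternion pose format and the nearest-in-time pairing strategy are likewise assumptions.

```python
from dataclasses import dataclass

# Hypothetical record of one detector sample: the camera 100 contributes the
# tool pose read from the marker MK, and the sensor 150 contributes the lever
# grip amount and the nozzle pressure. Field names are illustrative only.
@dataclass
class ToolSample:
    t: float                                        # timestamp in seconds
    position: tuple[float, float, float]            # tool position from camera 100
    orientation: tuple[float, float, float, float]  # quaternion (w, x, y, z)
    grip: float                                     # lever grip amount (position sensor)
    pressure: float                                 # nozzle pressure (pressure sensor)

def merge_streams(camera_samples, sensor_samples):
    """Pair each camera pose (t, position, quaternion) with the
    nearest-in-time sensor reading (t, grip, pressure)."""
    merged = []
    for t, pos, quat in camera_samples:
        grip, pressure = min(sensor_samples, key=lambda s: abs(s[0] - t))[1:]
        merged.append(ToolSample(t, pos, quat, grip, pressure))
    return merged
```

In practice the two streams would arrive at different rates, which is why the sketch resolves each pose against the closest sensor timestamp rather than assuming lockstep sampling.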
The controller 200 is a computer including a processor 201, a memory 202, an input/output interface 203, and an internal bus 204. The processor 201, the memory 202, and the input/output interface 203 are bidirectionally communicatively connected via the internal bus 204. The memory 202 stores the work training support program PG and the three-dimensional CAD data DT of the tool TL. The processor 201 functions as a detection control unit 210, a generating unit 220, and a display control unit 230 by executing the work training support program PG. The detection control unit 210 acquires the detection result of the movement of the tool TL during the work from the camera 100 and the sensor 150. The generating unit 220 uses the three-dimensional CAD data DT and the detected movement of the tool TL to generate an image that reproduces the movement of the tool TL during the work by a computer graphics technique. The display control unit 230 causes the display 300 to display an image generated by the generating unit 220.
The display 300 is mounted on the trainee's head. The display 300 is a transmission type head mounted display. The display 300 displays an image of a virtual space superimposed on a scene of a real space by an augmented reality technology. Specifically, the display 300 displays the image generated by the generating unit 220 in the view of the trainee. In the present exemplary embodiment, the display 300 includes a camera and various sensors, and performs alignment between the real space and the virtual space by detecting an object serving as a reference point within the real space. The display 300 is sometimes referred to as a head mounted display.
In S120, the generating unit 220 generates a first image M1 representing the movement of the tool TL during the first period, that is, the movement of the tool TL due to the work of the instructor, using the three-dimensional CAD data DT stored in the memory 202 and the detection result of the movement of the tool TL during the first period. In the present embodiment, the first image M1 represents, in addition to the movement of the tool TL during the first period, the path L1 of that movement. Specifically, the path L1 of the movement of the tool TL is the path of the nozzle tip.
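The nozzle-tip path described above can be derived from the recorded poses by transforming a fixed tip offset, expressed in the tool's local frame, by each pose. The sketch below assumes a quaternion pose format and a placeholder `NOZZLE_OFFSET`; neither appears in the source.

```python
import numpy as np

# Assumed tip offset in the tool's local frame: the nozzle tip is taken to sit
# 15 cm ahead of the tracked marker. This value is purely illustrative.
NOZZLE_OFFSET = np.array([0.0, 0.0, 0.15])

def quat_to_matrix(w, x, y, z):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def nozzle_tip_path(poses):
    """poses: list of (position xyz, quaternion wxyz) samples.
    Returns the tip position for each sample, i.e. the points of the path."""
    return np.array([np.asarray(p) + quat_to_matrix(*q) @ NOZZLE_OFFSET
                     for p, q in poses])
```

Connecting consecutive tip points yields the polyline that a renderer could draw as the path in the generated image.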
In S130, the trainee performs the work using the tool TL while wearing the display 300. The content of the work performed by the trainee in S130 is the same as the content of the work performed by the instructor in S110. In the following explanation, the period in which the trainee performs the work using the tool TL is referred to as the second period. The display control unit 230 displays the first image M1 on the display 300 from the beginning to the end of the second period.
In S140, the detection control unit 210 detects the movement of the tool TL during the second period using the camera 100 and the sensor 150. The processing of S140 is executed concurrently with the processing of S130. The detection control unit 210 stores the detection result of the movement of the tool TL during the second period in the memory 202. The generating unit 220 generates a third image M3 that represents, side by side, the values of the physical quantities measured using the camera 100 and the sensor 150 during the second period and the values measured using the camera 100 and the sensor 150 during the first period. The display control unit 230 displays the third image M3, in addition to the first image M1, on the display 300 from the beginning to the end of the second period. The measurements taken during the second period are displayed in the third image M3 in real time.
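The real-time side-by-side readout described above amounts to looking up, at each moment of the second period, the instructor's measurement at the same elapsed time in the recorded first period. The sketch below assumes the instructor's log is a time-sorted list of `(elapsed_time, value)` pairs; the data layout is not specified in the source.

```python
import bisect

def paired_readout(elapsed, trainee_value, instructor_log):
    """Return the trainee's current measurement alongside the instructor's
    measurement at the same elapsed time.

    instructor_log: time-sorted list of (elapsed_time, value) pairs recorded
    during the first period (an assumed layout)."""
    times = [t for t, _ in instructor_log]
    # Clamp to the last entry once the trainee runs past the instructor's log.
    i = min(bisect.bisect_left(times, elapsed), len(times) - 1)
    return {"trainee": trainee_value, "instructor": instructor_log[i][1]}
```

A renderer for the third image would call such a lookup on every frame, so that the two columns of numbers stay aligned in time as the trainee works.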
In S150, the generating unit 220 generates a second image M2 representing the movement of the tool TL during the second period, that is, the movement of the tool TL due to the work of the trainee, using the three-dimensional CAD data DT stored in the memory 202 and the detection result of the movement of the tool TL during the second period. In the present embodiment, the second image M2 represents, in addition to the movement of the tool TL during the second period, the path L2 of that movement. Specifically, the path L2 of the movement of the tool TL is the path of the nozzle tip. In order to make the first image M1 and the second image M2 easy to distinguish, the generating unit 220 renders the color of the tool TL and the color of the path L2 in the second image M2 in colors different from the color of the tool TL and the color of the path L1 in the first image M1.
In S160, after the second period, the display control unit 230 causes the display 300 to display the first image M1 and the second image M2 superimposed on each other. Displaying the first image M1 and the second image M2 superimposed on each other includes not only displaying them at the same position but also displaying them side by side. In the present embodiment, the display control unit 230 also causes the display 300 to display the third image M3 in addition to the first image M1 and the second image M2.
According to the work training support system 10 of the present embodiment described above, since the first image M1 is displayed on the display 300 during the second period, the trainee can perform the training of the work while watching the scene of the real space visible through the display 300 and the first image M1 displayed on the display 300. Since the movement of the tool TL by the instructor is represented in the first image M1, the trainee can experience the instructor's work technique by moving the tool TL in the same way as the instructor. Therefore, the proficiency of the trainee can be increased efficiently.
Further, in the present embodiment, the first image M1 represents the path L1 of the movement of the tool TL during the first period. Therefore, the trainee can perform the work while referring to the path L1 represented in the first image M1.
Further, in the present embodiment, the third image M3 represents, side by side, the real-time values measured using the camera 100 and the sensor 150 during the second period and the values measured using the camera 100 and the sensor 150 during the first period. The trainee can therefore perform the work while comparing the measurements of the second period, represented in the third image M3, with those of the first period.
Further, in the present embodiment, after the work by the trainee is completed, the first image M1 and the second image M2 are displayed on the display 300 overlapping each other. Therefore, the trainee can analyze the difference between the movement of the tool TL by the instructor and the movement of the tool TL by the trainee by comparing the first image M1 displayed on the display 300 with the second image M2.
Further, in the present embodiment, the first image M1 represents the path L1 of the movement of the tool TL during the first period, and the second image M2 represents the path L2 of the movement of the tool TL during the second period. In addition, the thickness and color of each part of the path L1 and L2 are expressed according to the size of the measurement value measured using the sensor 150. Therefore, the trainee can easily recognize the difference between the movement of the tool TL by the instructor and the movement of the tool TL by the trainee.
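The styling of the paths by measurement magnitude can be sketched as a simple mapping from a sensor value to a line width and color for each path segment. The thresholds and the blue-to-red ramp below are assumptions; the source only states that thickness and color vary with the measured value.

```python
def segment_style(value, vmin, vmax):
    """Return (line width, RGB color) for one path segment, scaled by where
    the sensor measurement falls between vmin and vmax (an assumed scheme):
    thin and blue at vmin, thick and red at vmax."""
    f = 0.0 if vmax == vmin else (value - vmin) / (vmax - vmin)
    f = min(max(f, 0.0), 1.0)        # clamp out-of-range measurements
    width = 1.0 + 3.0 * f            # 1 px at vmin, 4 px at vmax
    color = (f, 0.2, 1.0 - f)        # blue -> red as the value grows
    return width, color
```

Applying such a mapping per segment makes places where, for example, the nozzle pressure diverged from the instructor's stand out visually along the two paths.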
The disclosure is not limited to any of the embodiment and its modifications described above but may be implemented by a diversity of configurations without departing from the scope of the disclosure. For example, the technical features of any of the above embodiments and their modifications may be replaced or combined appropriately, in order to solve part or all of the problems described above or in order to achieve part or all of the advantageous effects described above. Any of the technical features may be omitted appropriately unless the technical feature is described as essential in the description hereof. The present disclosure may be implemented by aspects described below.
According to the work training support system of this form, the trainee can perform the work while watching the scene visible through the head mounted display and the first image displayed on the head mounted display. Therefore, it is possible to efficiently increase the proficiency of the trainee.
According to the work training support system of this form, the trainee can carry out the work while referring to the path represented in the first image.
According to the work training support system of this form, the trainee can compare the movement of the tool by the instructor with the movement of the tool by the trainee using the first image and the second image displayed on the head mounted display.
According to the work training support system of this form, the difference between the movement of the tool by the instructor and the movement of the tool by the trainee can be easily recognized by the trainee.
According to the work training support system of this form, the trainee can perform the work while referring to the first physical quantity relating to the movement of the tool during the first period represented in the third image and the second physical quantity relating to the movement of the tool during the second period.
According to the work training support method of this form, the trainee can perform the work while watching the scene visible through the head mounted display and the first image displayed on the head mounted display. Therefore, it is possible to efficiently increase the proficiency of the trainee.
The disclosure can be implemented in various forms other than the work training support system and the work training support method. For example, it can be realized in the form of a computer program stored on a computer readable recording medium.
Number | Date | Country | Kind |
---|---|---|---|
2023-024009 | Feb 2023 | JP | national |