This invention relates generally to the field of robotic automation and training systems, and more particularly, but not by way of limitation, to an improved system and method for developing instructions for robotic movements and procedures.
Modern robots are capable of performing highly complicated maneuvers and procedures that may find utility in a variety of industrial applications. Robots are commonly deployed to perform repetitive tasks in product manufacturing and assembly. For highly complicated tasks, automated robots may need to approximate the behavior of humans as closely as possible. Programming complex movements of a robot arm, for example, often relies on a technique called inverse kinematics (IK), which is based on the desired trajectory of the end effector of the robot. While path planning and collision avoidance may be possible with simpler systems, these trajectories can be difficult to define for activities that involve variability and require a high degree of dexterity or fine control.
Accordingly, there is a need for an improved system and method for programming robots to carry out complex movements. It is to this and other needs that the present disclosure is directed.
In one aspect, the present invention provides a method for producing an optimized instruction set for guiding a robot performing a service procedure on a subject device. The method begins with the step of outfitting at least one operator with an XR headset and controllers, and connecting the XR headset and controllers to a content control server with a streaming connection. The method continues with the step of providing the operator with instructions from the content control server through the headset and controllers, where the instructions require the operator to perform a series of steps within the service procedure. The method continues by monitoring the operator's movements as the operator performs the series of steps within the service procedure. In this step, the content control server records XR telemetry data produced by the headset and the controllers.
The method continues by repeating the performance of one or more of the steps in the service procedure and then aggregating the XR telemetry data recorded by the content control server. Next, the XR telemetry data is analyzed for convergence or divergence with aggregated XR telemetry data associated with each step in the service procedure. The method continues by optimizing the movements associated with each step in the service procedure using the analysis of the aggregated XR telemetry data. Once the movements have been optimized, the method moves to the step of translating the optimized movements into a set of optimized robot instructions. The method concludes by outputting one or more optimized instruction sets configured for use in controlling the robot during the performance by the robot of the service procedure.
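By way of illustration, the convergence analysis described above may be sketched as follows. The function name, threshold, and trajectory representation are illustrative assumptions and not part of the disclosure; repeated performances of a step are resampled to a common length, and the point-wise spread across repetitions serves as a simple convergence metric.

```python
import numpy as np

def step_convergence(trajectories, threshold=0.05):
    """Measure convergence of repeated performances of one step.

    trajectories: list of (T, 3) arrays of recorded positions, one per
    repetition, resampled to a common length T. Returns the mean
    point-wise standard deviation across repetitions (in the same units
    as the positions) and whether it falls below the threshold.
    """
    stacked = np.stack(trajectories)      # shape (repetitions, T, 3)
    spread = stacked.std(axis=0)          # per-sample spread over repetitions
    mean_spread = float(spread.mean())
    return mean_spread, mean_spread < threshold
```

A low mean spread indicates the operator performed the step consistently across repetitions; a high spread flags the step as divergent.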
In accordance with an exemplary embodiment,
Each headset 102 is configured to connect through a wired or wireless connection to a content control server 104. The content control server 104 streams content to, and receives feedback from, the headsets 102 via a data transmission protocol, such as TCP or UDP. Importantly, the communication protocol used to connect the headsets 102 to the server 104 permits multiple headsets 102 to be simultaneously connected to the server 104, with each headset 102 configured to display unique information to the operator 100. In this way, each operator 100 wearing a headset 102 will be provided a unique, independent experience while connected to a common content control server 104. In certain embodiments where there are a large number of operators 100 and headsets 102, or if the content streamed between the content control server 104 and the headsets 102 is very data intensive, multiple content control servers 104 may be used to provide content to the headsets 102.
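The per-headset routing described above can be modelled with a minimal in-process sketch. The class and method names are hypothetical; a real deployment would carry this traffic over TCP or UDP streaming connections rather than in-memory queues.

```python
from collections import defaultdict, deque

class ContentControlServer:
    """Sketch of a content control server that streams unique content to
    each connected headset and collects telemetry feedback per headset."""

    def __init__(self):
        self.outbound = defaultdict(deque)   # headset_id -> queued content
        self.telemetry = defaultdict(list)   # headset_id -> received samples

    def connect(self, headset_id):
        # Each headset gets its own independent content channel.
        self.outbound[headset_id]

    def stream(self, headset_id, content):
        self.outbound[headset_id].append(content)

    def next_content(self, headset_id):
        q = self.outbound[headset_id]
        return q.popleft() if q else None

    def report(self, headset_id, sample):
        self.telemetry[headset_id].append(sample)
```

Because every headset has its own outbound queue and telemetry log, multiple simultaneously connected operators each receive a unique, independent experience from a common server.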
Each operator 100 may also be provided with controllers 106 that are also connected via a streaming connection to the server 104. As depicted in
In other embodiments, the controllers 106 are configured as a wrench, screwdriver, or other tool or instrument that is configured to measure and transmit data to the content control server 104 about the configuration, position and use of the tool or instrument by the operator 100. The headsets 102 and controllers 106 may include inertial measurement units (IMUs), accelerometers, gyroscopes, proximity sensors, optical sensors, magnetometers, cameras and other sensors to detect, monitor and report the position, orientation and movement of the controllers 106 and headsets 102. It will be appreciated that the operators 100 may use a variety of controllers 106 while performing the service procedure 200 and that the content control server 104 is configured to track and record controller changes in real time without disrupting the streaming connections between the content control server 104, the headsets 102 and the controllers 106.
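A single fused sensor reading of the kind reported above might be represented as follows. The schema and field names are illustrative assumptions, not part of the disclosure; the device kind field allows the server to track controller changes (e.g., a wrench swapped for a screwdriver) within one stream.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class XRTelemetrySample:
    """One fused reading from a headset or controller (hypothetical schema)."""
    device_id: str
    device_kind: str                                      # "headset", "wrench", "screwdriver", ...
    timestamp_s: float                                    # seconds since procedure start
    position_m: Tuple[float, float, float]                # from IMU/optical fusion
    orientation_quat: Tuple[float, float, float, float]   # w, x, y, z
```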
In addition to streaming content to the headsets 102, the content control server 104 also retrieves data and feedback from the headsets 102. In particular, the content control server 104 continuously records the position, orientation, motion, and images retrieved by the sensors and cameras on the headsets 102. In certain embodiments, cameras, microphones and other external sensors 108 may also be used to provide additional visual, spatial and audio information to the server 104. By connecting the headsets 102, controllers 106, and external sensors 108 to the content control server 104 with a streaming connection, the computer processing load can be borne primarily by the content control server 104. This permits the use of smaller, less expensive processors on the headsets 102, controllers 106 and sensors 108. As used herein, the term “XR telemetry tracking system 110” refers to the various collections of headsets 102, controllers 106, external sensors 108 and the content control server 104.
The service procedure 200 can be any procedure in which the operator 100 is manipulating the subject device 300. In the example depicted in
The content control server 104 can be connected to a training module 112. The training module 112 may be configured to run on the same processors that run the content control server 104, or the training module 112 may be located on a separate computer. The training module 112 is configured to aggregate, process and analyze the data and feedback produced by headsets 102 and controllers 106, and correlate that data with the steps carried out during the repeated performance of the service procedure 200 to develop sets of optimized instructions for a robot to perform the same service procedure 200. To optimize the robot instructions, the training module 112 is provided with specific parameters, inputs, goals, targets or operational criteria that should be considered as the training module 112 produces the optimized robot instructions.
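One simple illustration of such an optimization pass, under the assumption that an operational criterion supplied to the training module is a maximum end-effector speed, is to fuse repeated demonstrations of a step into a mean reference path and clamp any over-fast segments. The function name and logic are hypothetical and only suggest how operational criteria could shape the optimized movements.

```python
import numpy as np

def optimize_step_path(trajectories, max_speed_mps, dt):
    """Fuse repeated demonstrations of one step into a single reference
    path (point-wise mean), then limit segment speeds to a robot
    performance criterion. trajectories: list of (T, 3) arrays sampled
    on a common time base with spacing dt seconds."""
    path = np.mean(np.stack(trajectories), axis=0)
    out = [path[0]]
    for p in path[1:]:
        step = p - out[-1]
        dist = float(np.linalg.norm(step))
        limit = max_speed_mps * dt
        if dist > limit:                 # scale down segments that exceed the speed limit
            step = step * (limit / dist)
        out.append(out[-1] + step)
    return np.array(out)
```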
In exemplary embodiments, the training module 112 uses machine learning and neural networking functions to derive the optimized robot instructions through an iterative process in which the training module 112 analyzes the feedback and data generated by the repeated performance by one or more operators 100 of the service procedure 200. For example, the training module 112 can be provided with the physical dimensions and performance characteristics of the robot or system of robots that will be deployed to perform the service procedure 200 using the optimized instruction set. Using these inputs and the aggregated data from the headsets 102 and controllers 106, the training module 112 can produce a series of optimized robot instruction sets that are based on inverse kinematic functions to control the robot's end-effectors in accordance with the optimized steps for the service procedure 200.
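By way of a simplified example of the inverse kinematic translation, the closed-form IK of a planar two-link arm can be sketched as follows. A production robot controller would use a far richer kinematic model reflecting the robot's actual dimensions; this function is only an illustrative stand-in showing how a desired end-effector position maps to joint angles.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link arm (elbow-up
    branch). Returns joint angles (theta1, theta2) in radians that place
    the end effector at (x, y), or None if the target is out of reach.
    l1 and l2 are the link lengths."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # cosine of the elbow angle
    if not -1.0 <= c2 <= 1.0:
        return None                                # target outside the workspace
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

Given a dexterous target position recorded from operator telemetry, the training module would evaluate such a function at each point along the optimized path to produce joint-space robot instructions.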
Turning to
At step 406, the content control server 104 records the movements of the operators 100 in response to the guidance provided to the operators 100 for the step in the service procedure 200 using streaming XR telemetry. At step 408, the XR telemetry data is stored by the content control server 104, the training module 112, or both. It will be appreciated that the method 400 repeats steps 404, 406 and 408 for the various steps in the service procedure 200. In some embodiments, the content control server 104 and training module 112 may autonomously request that the operators 100 repeat individual steps or groups of steps within the service procedure 200. For example, the supervisory systems in the content control server 104 and training module 112 may detect a divergence among the data produced by the operator 100 during a specific step within the service procedure 200. In that case, the content control server 104 may instruct the operator 100 to repeat the same step several times to obtain better convergence of the telemetry data received by the content control server 104.
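The supervisory repeat-request logic described above may be sketched as follows. The threshold, cap, and function name are illustrative assumptions; steps whose telemetry spread exceeds the threshold are queued for repetition, worst divergence first.

```python
def plan_repetitions(per_step_spread, threshold=0.05, max_extra=5):
    """Given a measured telemetry spread for each step in the service
    procedure (step name -> spread metric), return the steps the
    operator should be asked to repeat, ordered from the most divergent,
    capped at max_extra requests."""
    diverging = [(s, v) for s, v in per_step_spread.items() if v >= threshold]
    diverging.sort(key=lambda sv: sv[1], reverse=True)
    return [s for s, _ in diverging[:max_extra]]
```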
At step 410, the telemetry data is aggregated and processed by one or both of the content control server 104 and the training module 112. The training module 112 analyzes the aggregated telemetry data at step 412 and produces one or more optimized instructions at step 414. Using inverse kinematic functions, the optimized instructions are translated into a series of optimized robot movements at step 416. The series of optimized robot movements are consolidated into one or more optimized robot instruction sets at step 418.
It will be appreciated that the method 400 can be iterative and that the repeated performance of the service procedure 200 by a plurality of operators 100 may be useful in developing the optimized set of robot instructions. In some embodiments, the steps of aggregating, analyzing, optimizing and translating the telemetry data are performed in real time while the operators 100 are performing the service procedure 200. In other embodiments, the XR telemetry data is analyzed, optimized and used to produce the robot instruction set after the operators 100 have completed multiple iterations of the service procedure 200.
It is to be understood that even though numerous characteristics and advantages of various embodiments of the present invention have been set forth in the foregoing description, together with details of the structure and functions of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/909,519 filed Oct. 2, 2019, entitled “Telemetry Harvesting and Analysis from Extended Reality Streaming,” the disclosure of which is herein incorporated by reference.