Active damping system

Information

  • Patent Grant
    12,311,546
  • Date Filed
    Tuesday, July 16, 2019
  • Date Issued
    Tuesday, May 27, 2025
Abstract
The present disclosure provides a system for performing interactions within a physical environment, the system including: (a) a robot base; (b) a robot base actuator that moves the robot base relative to the environment; (c) a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon; (d) a tracking system that measures at least one of (i) a robot base position indicative of a position of the robot base relative to the environment; and, (ii) a robot base movement indicative of a movement of the robot base relative to the environment; (e) an active damping system that actively damps movement of the robot base relative to the environment; and, (f) a control system that: (i) determines a movement correction in accordance with signals from the tracking system; and, (ii) controls the active damping system at least partially in accordance with the movement correction.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a United States national phase entry of International Application No. PCT/AU2019/050742 filed on Jul. 16, 2019, which claims priority to Australian Patent Application No. 2018902566 filed on Jul. 16, 2018, both of which are incorporated herein by reference in their entireties.


BACKGROUND OF THE INVENTION

The present invention relates to systems and methods for performing interactions within a physical environment, and in one particular example, systems and methods using a robot arm mounted on a robot base in conjunction with active damping.


DESCRIPTION OF THE PRIOR ART

The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that the prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.


It is known to provide systems in which a robot arm mounted on a moving robot base is used to perform interactions within a physical environment. For example, WO 2007/076581 describes an automated brick laying system for constructing a building from a plurality of bricks comprising a robot provided with a brick laying and adhesive applying head, a measuring system, and a controller that provides control data to the robot to lay the bricks at predetermined locations. The measuring system measures in real time the position of the head and produces position data for the controller. The controller produces control data on the basis of a comparison between the position data and a predetermined or pre-programmed position of the head to lay a brick at a predetermined position for the building under construction. The controller can control the robot to construct the building in a course by course manner where the bricks are laid sequentially at their respective predetermined positions and where a complete course of bricks for the entire building is laid prior to laying of the brick for the next course.


The arrangement described in WO 2007/076581 went a long way toward addressing issues associated with long booms deflecting due to gravity, wind, movement of the end effector, and movement of the boom. Nevertheless, even with the arrangement described in WO 2007/076581, errors in positioning of the end effector could still occur, particularly as the distance between the base of the robot and the end effector increased.


SUMMARY OF THE PRESENT INVENTION

In one broad form an aspect of the present invention seeks to provide a system for performing interactions within a physical environment, the system including: a robot base; a robot base actuator that moves the robot base relative to the environment; a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon; a tracking system that measures at least one of: a robot base position indicative of a position of the robot base relative to the environment; and, a robot base movement indicative of a movement of the robot base relative to the environment; an active damping system that actively damps movement of the robot base relative to the environment; and, a control system that: determines a movement correction in accordance with signals from the tracking system; and, controls the active damping system at least partially in accordance with the movement correction.


In one embodiment the control system determines the movement correction using at least one of: a path deviation based on movement of the robot base from a robot base path; a position deviation based on a current robot base position and an expected robot base position; a movement based on a change in robot base position; an acceleration based on a rate of change in a robot base position; a movement deviation based on a change in a robot base position relative to an expected robot base position; and, an acceleration deviation based on a rate of change in a robot base position relative to an expected robot base position.


In one embodiment the active damping system is coupled to at least one of: the robot base; and, the robot base actuator.


In one embodiment the active damping system generates a motive force opposing at least one of: unintentional movement of the robot base; and, movement of the robot base away from a robot base path.


In one embodiment the active damping system includes at least one of: an adaptive structural member; an inertial actuator; a linear inertial actuator; a rotational inertial actuator; at least one nozzle for emitting a pressurised fluid; at least one fan mounted on the robot base; and, the end effector.


In one embodiment the active damping system includes: at least one actuator operatively coupled to the robot base; and, at least one mass coupled to the actuator to allow the mass to be moved relative to the actuator.


In one embodiment the active damping system includes a flywheel and drive supported by the robot base.


In one embodiment the robot base actuator includes an actuator base, wherein the robot base is spaced from the actuator base in a first direction, and wherein the active damping system is configured to apply forces to the robot base in at least two directions orthogonal to the first direction.


In one embodiment the robot base actuator includes: a boom having a head including the robot base; and, a boom base, the boom extending from the boom base.


In one embodiment the boom includes an adaptive structural member that can alter a dynamic response of the boom.


In one embodiment the adaptive structural member includes at least one of: electroactive polymers; and, shape-memory alloys.


In one embodiment the control system: determines an end effector path extending to an end effector destination; generates robot control signals to control movement of the end effector; and, applies the robot control signals to the robot arm to cause the end effector to be moved.


In one embodiment the control system generates the robot control signals to take into account: movement of the robot base; and, operation of the active damping.


In one embodiment the control system generates the robot control signals using the movement deviation.


In one embodiment the control system: calculates a robot base deviation based on the robot base position and an expected robot base position; calculates a stabilisation response based on the robot base deviation; modifies the stabilisation response based on the movement deviation; and, generates the robot control signals using the stabilisation response.


In one embodiment the control system: acquires an indication of an end effector destination defined relative to an environment coordinate system; calculates a robot base path extending from a current robot base position at least in part in accordance with the end effector destination; generates robot base control signals based on the robot base path; and, applies the robot base control signals to the robot base actuator to cause the robot base to be moved along the robot base path.


In one embodiment the control system at least one of: controls the robot base at least in part using a movement correction; and, generates the robot base control signals at least in part using a movement correction.


In one broad form an aspect of the present invention seeks to provide a method for performing interactions within a physical environment using a system including: a robot base; a robot base actuator that moves the robot base relative to the environment; a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon; a tracking system that measures at least one of: a robot base position indicative of a position of the robot base relative to the environment; and, a robot base movement indicative of a movement of the robot base relative to the environment; and, an active damping system that actively damps movement of the robot base relative to the environment, wherein the method includes, in a control system: determining a movement correction in accordance with signals from the tracking system; and, controlling the active damping system at least partially in accordance with the movement correction.


In one broad form an aspect of the present invention seeks to provide a computer program product including computer executable code, which when executed by a suitably programmed control system causes the control system to control a system for performing interactions within a physical environment, the system including: a robot base; a robot base actuator that moves the robot base relative to the environment; a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon; a tracking system that measures at least one of: a robot base position indicative of a position of the robot base relative to the environment; and, a robot base movement indicative of a movement of the robot base relative to the environment; and, an active damping system that actively damps movement of the robot base relative to the environment and wherein the control system: determines a movement correction in accordance with signals from the tracking system; and, controls the active damping system at least partially in accordance with the movement correction.


It will be appreciated that the broad forms of the invention and their respective features can be used in conjunction and/or independently, and reference to separate broad forms is not intended to be limiting.





BRIEF DESCRIPTION OF THE DRAWINGS

Various examples and embodiments of the present invention will now be described with reference to the accompanying drawings, in which:



FIG. 1A is a schematic diagram of an example of a system for performing interactions within a physical environment;



FIG. 1B is a schematic plan view of the system of FIG. 1A;



FIG. 2 is a schematic diagram of an example of a control system for the system of FIGS. 1A to 1B;



FIG. 3 is a flowchart of an example of a process for performing a physical interaction;



FIG. 4 is a flow chart of an example of an active damping process;



FIGS. 5A to 5C are schematic side, plan and front views of a first example of an active damping system provided in a robot base;



FIGS. 6A to 6C are schematic side, plan and front views of a second example of an active damping system provided in a robot base;



FIGS. 7A to 7C are schematic side, plan and front views of a third example of an active damping system provided in a robot base;



FIGS. 8A and 8B are schematic side and plan views of an example of an active damping system provided in a robot base actuator;



FIG. 9 is a flowchart of an example of a process for controlling end effector movement in conjunction with active damping;



FIG. 10 is a flowchart of a first example of a process for controlling a robot arm to provide end effector stabilisation;



FIGS. 11A and 11B are schematic diagrams illustrating an example of the control process of FIG. 10 to provide the end effector at a static position;



FIGS. 11C and 11D are schematic diagrams illustrating an example of the control process of FIG. 10 to move an end effector along an end effector path;



FIGS. 11E and 11F are schematic diagrams illustrating an example of the control process of FIG. 10 to move the robot base along a robot base path and the end effector along an end effector path;



FIG. 12 is a flowchart of a second example of a process for controlling a robot arm to provide end effector stabilisation;



FIGS. 13A to 13C are schematic diagrams illustrating an example of the control process of FIG. 12 to provide the end effector at a static position;



FIG. 14 is a flowchart of a third example of a process for controlling a robot arm to provide end effector stabilisation;



FIGS. 15A and 15B are schematic diagrams illustrating an example of the control process of FIG. 14 to provide the end effector at a static position;



FIGS. 15C and 15D are schematic diagrams illustrating an example of the control process of FIG. 14 to move an end effector along an end effector path;



FIGS. 15E and 15F are schematic diagrams illustrating an example of the control process of FIG. 14 to move the robot base along a robot base path and the end effector along an end effector path; and,



FIGS. 16A to 16C are a flowchart of a specific example of an end effector and robot base control process incorporating active damping.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description explains a number of different systems and methods for performing interactions within an environment. For the purpose of illustration, the following definitions apply to terminology used throughout.


The term “interaction” is intended to refer to any physical interaction that occurs within, and including with or on, an environment. Example interactions could include placing material or objects within the environment, removing material or objects from the environment, moving material or objects within the environment, modifying, manipulating, or otherwise engaging with material or objects within the environment, modifying, manipulating, or otherwise engaging with the environment, or the like. Further examples of interactions will become apparent from the following description, and it will be appreciated that the techniques could be extended to a wide range of different interactions, and specified examples are not intended to be limiting. Furthermore, in some examples, interactions may comprise one or more distinct steps. For example, when brick laying, an interaction could include the steps of retrieving a brick from a brick supply mechanism and then placing the brick in the environment.


The term “environment” is used to refer to any location, region, area or volume within which, or on which, interactions are performed. The type and nature of the environment will vary depending on the preferred implementation and the environment could be a discrete physical environment, and/or could be a logical physical environment, delineated from surroundings solely by virtue of this being a volume within which interactions occur. Non-limiting examples of environments include building or construction sites, parts of vehicles, such as decks of ships or loading trays of lorries, factories, loading sites, ground work areas, or the like, and further examples will be described in more detail below.


A robot arm is a programmable mechanical manipulator. In this specification, a robot arm includes multi-axis jointed arms, parallel kinematic robots (such as Stewart platforms and Delta robots), spherical geometry robots, Cartesian robots (orthogonal axis robots with linear motion) and the like.


A boom is an elongate support structure such as a slewing boom, with or without a stick or dipper, with or without telescopic elements, and includes telescoping booms and telescoping articulated booms. Examples include crane booms, earthmover booms and truck crane booms, all with or without cable supported or cable braced elements. A boom may also include an overhead gantry structure, a cantilevered gantry, a controlled tensile truss (in which case the structure may not strictly be a boom but a multi cable supported parallel kinematics crane; see PAR Systems, Tensile Truss—Chernobyl Crane), or other moveable arm that may translate position in space.


An end effector is a device at the end of a robotic arm designed to interact with the environment. An end effector may include a gripper, nozzle, sand blaster, spray gun, wrench, magnet, welding torch, cutting torch, saw, milling cutter, router cutter, hydraulic shears, laser, riveting tool, or the like, and reference to these examples is not intended to be limiting.


TCP is an abbreviation of tool centre point. This is a location on the end effector (or tool), whose position and orientation define the coordinates of the controlled object. It is typically located at the distal end of the kinematic chain. Kinematic chain refers to the chain of linkages and their joints between the base of a robot arm and the end effector.


CNC is an abbreviation for computer numerical control, used for automation of machines by computer/processor/microcontroller executed pre-programmed sequences of machine control commands.


The application of coordinate transformations within a CNC control system is usually performed to allow programming in a convenient coordinate system. It is also performed to allow correction of workpiece position errors when the workpiece is clamped in a vice or fixture on a CNC machining centre.


These coordinate transformations are usually applied in a static sense to account for static coordinate shifts or to correct static errors.


Robots and CNC machines are programmed in a convenient Cartesian coordinate system, and kinematic transformations are used to convert the Cartesian coordinates to joint positions to move the pose of the robot or CNC machine.
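
By way of illustration, the following minimal Python sketch shows the kind of kinematic transformation referred to above for a hypothetical two-link planar arm, converting a Cartesian target into joint angles; the link lengths and function name are assumptions chosen for the example rather than anything specified here.

```python
# Illustrative sketch only: inverse kinematics for a hypothetical two-link
# planar arm, converting a Cartesian target (x, y) into joint angles.
# Link lengths l1, l2 are assumed values, not taken from the specification.
import math

def planar_2link_ik(x, y, l1=1.0, l2=0.8):
    """Return (theta1, theta2) in radians placing the arm tip at (x, y)."""
    r2 = x * x + y * y
    # Law of cosines for the elbow angle; clamp for numerical safety.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    theta2 = math.acos(c2)                      # elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Example: command the tip to a Cartesian point in the arm's base frame.
print(planar_2link_ik(1.2, 0.6))
```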


Measuring the position of a robot arm end effector close to the TCP in real time increases the accuracy of a robot. This is performed on static end effectors on robots used for probing and drilling, and is achieved by a multi-step process of moving to the programmed position, taking a position measurement, calculating a correction vector, adding the correction vector to the programmed position and then moving the TCP to the new position. This process is not done in hard real time and relies on a static robot arm pose.
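
The multi-step correction process described above can be sketched as follows; this is an illustrative outline only, and the move_to and measure_tcp callables, the tolerance and the iteration limit are hypothetical placeholders rather than part of the described equipment.

```python
# Illustrative sketch of the "move, measure, correct" process for a static
# robot pose. move_to() commands the robot, measure_tcp() returns an external
# measurement taken near the TCP; both are hypothetical placeholders.
def corrected_positioning(programmed_position, move_to, measure_tcp,
                          tolerance=0.05e-3, max_iterations=5):
    target = list(programmed_position)
    for _ in range(max_iterations):
        move_to(target)                         # move to the commanded position
        measured = measure_tcp()                # measure the actual TCP position
        correction = [p - m for p, m in zip(programmed_position, measured)]
        if max(abs(c) for c in correction) < tolerance:
            break
        # Add the correction vector to the commanded position and re-move.
        target = [t + c for t, c in zip(target, correction)]
    return target

# Example with stand-in functions representing the robot and the measurement.
_state = [0.0, 0.0, 0.0]
def _move_to(p): _state[:] = p
def _measure_tcp(): return [s + e for s, e in zip(_state, (0.0003, -0.0001, 0.0002))]
print(corrected_positioning([0.100, 0.050, 0.200], _move_to, _measure_tcp))
```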


An example of a system for performing interactions within a physical environment will now be described with reference to FIGS. 1A and 1B and FIG. 2.


In the example of FIG. 1A, the system 100 includes a robot assembly 110 including a robot base 111, a robot arm 112 and an end effector 113. The robot assembly 110 is positioned relative to an environment E, which in this example is illustrated as a 2D plane, but in practice could be a 3D volume of any configuration. In use, the end effector 113 is used to perform interactions within the environment E, for example to perform bricklaying, object manipulation, or the like.


The system 100 also includes a tracking system 120, which is able to track robot assembly movement, and in one particular example, movement of the robot base relative to the environment. In one example, the tracking system includes a tracker base 121, which is typically statically positioned relative to the environment E, and a tracker target 122 mounted on the robot base 111, allowing a position of the robot base 111 relative to the environment E to be determined. In other arrangements, movement of the end effector 113 may be tracked instead of, or in addition to, the robot base 111. Also, the tracker base may be positioned on the robot assembly 110 so as to move therewith, and the tracker target(s) may be statically positioned relative to the environment E.


In one example, the tracking system 120 includes a tracker base 121 including a tracker head having a radiation source arranged to send a radiation beam to the target 122 and a base sensor that senses reflected radiation. A base tracking system is provided which tracks a position of the target 122 and controls an orientation of the tracker head to follow the target 122. In one example, the target 122 includes a target sensor that senses the radiation beam and a target tracking system that tracks a position of the tracker base and controls an orientation of the target to follow the tracker head (i.e. an active target). In other examples, the target 122 is a passive instrument that does not actively follow the tracker head. Angle sensors are provided in the tracker head that determine an orientation of the head (e.g. in elevation and azimuth). Optionally, angle sensors are also provided in the target that determine an orientation of the target. A processing system determines a position of the target relative to the tracker base in accordance with signals from the sensors, specifically using signals from the angle sensors to determine relative angles between the tracker and target, whilst time of flight of the radiation beam can be used to determine a physical separation, thereby allowing a position of the target relative to the tracker base to be determined. In a further example, the radiation can be polarised in order to allow a roll angle of the target relative to the tracker base to be determined.
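
As an illustration of how such measurements combine, the following sketch converts a measured azimuth, elevation and range into a Cartesian target position; the angle conventions and the function name are assumptions chosen for the example rather than details of the tracking systems described.

```python
# Illustrative sketch only: converting the tracker head's measured azimuth,
# elevation and range (from beam time of flight) into a Cartesian position
# of the target in the tracker base frame. Angle conventions are assumed.
import math

def target_position(azimuth_rad, elevation_rad, range_m):
    """Return (x, y, z) of the target relative to the tracker base."""
    horizontal = range_m * math.cos(elevation_rad)
    x = horizontal * math.cos(azimuth_rad)
    y = horizontal * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# Example: target at 30 deg azimuth, 10 deg elevation, 25 m away.
print(target_position(math.radians(30.0), math.radians(10.0), 25.0))
```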


Although a single tracking system 120 including a tracker head and target is shown, this is not essential and in other examples multiple tracking systems and/or targets can be provided as will be described in more detail below. In some examples, the tracking system may include tracker heads positioned on the robot assembly configured to track one or more targets located in the environment.


In one particular example, the tracking system is a laser tracking system, and example arrangements are manufactured by API (Radian and OT2, optionally with STS (Smart Track Sensor)), Leica (AT960, optionally with Tmac) and Faro. These systems measure position at 300 Hz, 1 kHz or 2 kHz (depending on the equipment) and rely on a combination of sensing arrangements, including laser tracking, vision systems using 2D cameras, and accelerometer data such as from a tilt sensor or INS (Inertial Navigation System). They can be used to make accurate measurements of position, with data obtained from the laser tracker and optionally the active target equating to the position and optionally the orientation of the active target relative to the environment E. As such systems are known and commercially available, they will not be described in any further detail.


It will also be appreciated that other position/movement sensors, such as an inertial measurement unit (IMU) can also be incorporated into the system, as will be described in more detail below.


In practice, in the above described examples, the robot base 111 undergoes movement relative to the environment E. The nature of the movement will vary depending upon the preferred implementation. For example, the robot base 111 could be mounted on tracks, wheels or similar, allowing this to be moved within the environment E.


Alternatively, in the example shown in FIG. 1B, the robot base 111 is supported by a robot base actuator 140, which can be used to move the robot base. In this example, the robot base actuator is in the form of a boom assembly including a boom base 141, boom 142 and stick 143. The boom is typically controllable allowing a position and/or orientation of the robot base to be adjusted. The types of movement available will vary depending on the preferred implementation. For example, the boom base 141 could be mounted on a vehicle allowing this to be positioned and optionally rotated to a desired position and orientation. The boom 142 and stick 143 can be telescopic arrangements, including a number of telescoping boom or stick members, allowing a length of the boom or stick to be adjusted. Additionally, angles between the boom base 141 and boom 142, and boom 142 and stick 143, can be controlled, for example using hydraulic actuators, allowing the robot base 111 to be provided in a desired position relative to the environment E. Such operation is typically performed in the robot base actuator coordinate system BACS, although this is not essential as will become apparent from the remaining description.


An example of a system of this form for laying blocks, such as bricks, is described in WO2018/009981, the content of which is incorporated herein by cross reference. It will be appreciated, however, that such arrangements are not limited to block laying, but could also be utilised for other forms of interactions.


In the system shown in FIGS. 1A and 1B, a control system 130 is provided in communication with the tracking system 120, the robot assembly 110 and the robot base actuator 140, allowing the robot assembly 110 and robot base actuator 140 to be controlled based on signals received from the tracking system. The control system typically includes one or more control processors 131 and one or more memories 132. For ease of illustration, the remaining description will make reference to a processing device and a memory, but it will be appreciated that multiple processing devices and/or memories could be used, with reference to the singular encompassing the plural arrangements and vice versa. In use the memory stores control instructions, typically in the form of applications software, or firmware, which is executed by the processor 131 allowing signals from the tracking system 120 and robot assembly 110 to be interpreted and used to control the robot assembly 110 to allow interactions to be performed.


An example of the control system 130 is shown in more detail in FIG. 2.


In this example the control system 230 is coupled to a robot arm controller 210, a tracking system controller 220 and a boom controller 240. This is typically performed via a suitable communications network, including wired or wireless networks, and more typically an Ethernet or Ethercat network. The robot arm controller 210 is coupled to a robot arm actuator 211 and end effector actuator 212, which are able to control positioning of the robot arm 112 and end effector 113, respectively. The tracking system controller 220 is coupled to the tracking head 221 and target 222, allowing the tracking system to be controlled and relative positions of the tracking head 221 and target 222 to be ascertained and returned to the control system 230. The boom controller 240 is typically coupled to boom actuators 241, 242 which can be used to position the boom and hence robot base. A second tracking system may also be provided, which includes sensors 226, such as inertial sensors, optionally coupled to a controller 225. It is to be understood that in practice the robot arm, end effector and boom will have multiple actuators such as servo motors, hydraulic cylinders and the like to effect movement of their respective axes (i.e. joints) and reference to single actuators is not intended to be limiting.


Each of the robot arm controller 210, tracking system controller 220, second tracking system controller 225 and boom controller 240 typically include electronic processing devices, operating in conjunction with stored instructions, and which operate to interpret commands provided by the control system 230 and generate control signals for the respective actuators and/or the tracking system and/or receive signals from sensors and provide relevant data to the control system 230. The electronic processing devices could include any electronic processing device such as a microprocessor, microchip processor, logic gate configuration, firmware optionally associated with implementing logic such as an FPGA (Field Programmable Gate Array), or any other electronic device, system or arrangement. It will be appreciated that the robot arm controller 210, tracking system controller 220 and boom controller 240 typically form part of the robot assembly, tracking system and boom assembly, respectively. As the operation of such systems would be understood in the art, these will not be described in further detail.


The control system 230 typically includes an electronic processing device 231, a memory 232, an input/output device 233 and an interface 234, which can be utilised to connect the control system 230 to the robot arm controller 210, tracking system controller 220 and boom controller 240. Although a single external interface is shown, this is for the purpose of example only, and in practice multiple interfaces using various methods (e.g. Ethernet, serial, USB, wireless or the like) may be provided.


In use, the processing device 231 executes instructions in the form of applications software stored in the memory 232 to allow the required processes to be performed. The applications software may include one or more software modules, and may be executed in a suitable execution environment, such as an operating system environment, or the like.


Accordingly, it will be appreciated that the control system 230 may be formed from any suitable processing system, such as a suitably programmed PC, computer server, or the like. In one particular example, the control system 230 is a standard processing system such as an Intel Architecture based processing system, which executes software applications stored on non-volatile (e.g., hard disk) storage, although this is not essential. However, it will also be understood that the processing system could be any electronic processing device such as a microprocessor, microchip processor, logic gate configuration, firmware optionally associated with implementing logic such as an FPGA (Field Programmable Gate Array), or any other electronic device, system or arrangement.


It will also be appreciated that the above described arrangements are for the purpose of illustration only and in practice a wide range of different systems and associated control configurations could be utilised. For example, it will be appreciated that the distribution of processing between the controllers and/or control system could vary depending on the preferred implementation.


For the purpose of the following examples, reference will be made to an environment coordinate system ECS, which is static relative to the environment E, and a robot base coordinate system RBCS, which is static relative to the robot base 111. Additionally, some examples will make reference to a robot base actuator coordinate system BACS, which is a coordinate system used to control movement of the robot base, for example to control movement of the boom assembly.
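
To illustrate the relationship between these coordinate systems, the following sketch expresses a point known in the environment coordinate system ECS in the robot base coordinate system RBCS, given a tracked base pose; the planar (x, y, yaw) treatment is a simplifying assumption made purely for brevity.

```python
# Illustrative sketch: expressing a point known in the environment coordinate
# system ECS in the robot base coordinate system RBCS, given the tracked pose
# of the robot base. Uses a planar (x, y, yaw) pose purely for brevity.
import math

def ecs_to_rbcs(point_ecs, base_pose_ecs):
    """base_pose_ecs = (x, y, yaw) of the robot base measured in ECS."""
    bx, by, yaw = base_pose_ecs
    dx, dy = point_ecs[0] - bx, point_ecs[1] - by
    c, s = math.cos(-yaw), math.sin(-yaw)       # inverse rotation into RBCS
    return (c * dx - s * dy, s * dx + c * dy)

# Example: a location fixed in ECS, seen from a base that has drifted.
print(ecs_to_rbcs((10.0, 4.0), (8.5, 3.2, math.radians(5.0))))
```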


Depending on the implementation, the boom assembly can have a significant length, so for example in the case of a construction application, the boom may need to extend across a construction site and could have a length of tens of meters. In such circumstances, the boom is typically subject to a variety of loads, including forces resulting from movement of the boom and/or robot arm, wind loading, machinery vibrations, or the like, which can in turn induce oscillations or other movement in the end of the boom, in turn causing the robot base to move relative to the environment. Such movement will be referred to generally as unintentional movement. Additionally, as described above, the robot base can be moved in a controlled manner by actively moving the boom and such movement will be referred to generally as intentional movement.


In any event, it will be appreciated that in the above described example, the robot base and hence the robot base coordinate system RBCS moves relative to the environment and hence environment coordinate system ECS, which substantially complicates the control process, and in particular the ability of the end effector to be accurately positioned so as to perform an interaction within the environment. In this regard, in normal robot applications, the end effector is controlled in the robot base coordinate system RBCS, whilst the end effector needs to be positioned in the environment coordinate system ECS, and as the movement results in the two coordinate systems moving relative to each other, this makes accurately positioning the end effector difficult.


An example of the process for performing an interaction within the environment E will now be described with reference to FIG. 3.


For the purpose of the following explanation, reference will be made to the term “destination”. The term is intended to refer to a position and optionally orientation (in combination referred to as a pose) at which the end effector 113 is to be provided, either as part of performing an interaction or otherwise. For example, the destination could correspond to the location within the environment at which the interaction is to occur. However, this is not essential; alternatively, the destination could correspond to any position through which the end effector should pass, in effect defining multiple destinations leading to a final destination. For example, an interaction may involve sequences of end effector movements, optionally forming part of different steps, and the term destination could refer to any position forming part of the different steps. The term destination should therefore be interpreted to refer to any particular point at which the end effector is to be positioned. In some examples, a destination could be a static point at which an end effector is to be maintained for a period of time, for example while other processes are performed, whereas in other cases the destination could be transitory and correspond to a point on a path along which the end effector is to move.


In this example, one or more destination positions are determined at step 300. The manner in which this is achieved will vary depending on the preferred implementation. In one example, destinations can be retrieved from a database or other data store, received from another processing system, determined based on signals from sensors or user input commands, or the like. For example, end effector destinations could be derived from a plan, such as a construction plan for a building, in which case the plan could be retrieved and the destinations derived from the plan. In this regard, the construction plan may identify positions at which objects such as bricks are to be placed in order for a building to be constructed. In this example, the destination positions can simply be retrieved from the plan.


However, this is not essential and alternatively, destination positions may need to be ascertained in other manners. For example, it may be necessary to retrieve an object from an environment, in which case the destination of the end effector corresponds to the object position. In this example, the object position may not be known in advance, in which case the position of the object may need to be detected, for example using a camera based vision system, or other localisation system, allowing the detected position to be used in order to define the destination position. In this regard, the object could be static or moving, meaning whilst the destination is normally static relative to the environment coordinate system ECS, in some examples, the destination could be moving.


It will also be appreciated that destinations could be determined in other appropriate manners, and the above described examples are not intended to be restrictive.


At step 310, a robot base path to allow for movement of the robot base 111 is optionally planned. The robot base path may not be required, for example in the event that the robot base 111 is static or already positioned. However, it will be appreciated that the robot base path may be used to move the robot base 111 to different positions within or relative to the environment E, in order to allow the end effector 113 to be more conveniently provided at the respective destination. The manner in which the base path is calculated will vary depending upon the preferred implementation and examples will be described in more detail below.


At step 320, an end effector path is planned to move the end effector 113 to the destination. The end effector path is typically planned based on a planned or ideal position of the robot base 111 relative to the environment E, for example to take into account movement of the robot base 111 along the robot base path. The end effector path may extend from an expected previous position of an end effector 113, for example at the completion of a previous interaction or other step, or could be calculated in real time based on a current end effector position. It will be appreciated that in the event that the destination is based on a current position, the end effector path could be a null path with zero length, with this being used for the purpose of positioning the end effector 113 statically relative to the environment E.


At step 330, the robot base 111 is optionally moved based on the robot base path, for example by controlling the boom assembly 140, or another form of robot base actuator. This process is typically performed in the robot base actuator coordinate system BACS, although this is not essential and robot base path planning and/or control of robot base movement could be performed in other coordinate systems. During and/or following this process, the commencement of end effector movement is performed at step 340, causing the end effector to start moving along the end effector path, assuming this is required. This process is typically performed in the robot base coordinate system RBCS, although this is not essential and end effector path planning and/or control could be performed in other coordinate systems.


As movement of the end effector 113 is performed, or otherwise if the end effector 113 is being held at a static position relative to the environment E, movement of the robot base is monitored at step 350, using the tracking system 120 to continuously detect a position of the robot base 111 relative to the environment E. This is used to adjust the end effector movement at step 360, for example by adjusting the pose of the robot arm, to ensure the destination position is reached.


In this regard, the robot base may undergo unintentional movement relative to the environment E, either due to a shift in the environment, or due to an unexpected movement of the robot base, resulting from vibrations in or wind loading of the boom, or the like. Such motions mean that the robot base may not be provided in an expected or ideal position relative to the environment, for example as a result of the robot base 111 deviating from the calculated robot base path. In this example, by monitoring movement of the robot base 111, such movements can be corrected for, ensuring that the end effector moves correctly along the end effector path to the destination position.


Thus, in one example, a robot base actuator is used to provide a coarse positioning system, whilst the robot arm provides a fine positioning system to allow an end effector to be accurately positioned relative to the environment. Operation is controlled by a control system that uses a tracking system to measure a position and optionally orientation of the robot base in real time, with a measured position (and orientation) of the robot base being used to calculate an offset that is added as a position transformation to the relative position of the fine positioning mechanism so that the end effector is positioned correctly relative to the environment. Thus a large and relatively light and flexible structure can be used to approximately position a fast and accurate fine positioning mechanism, which can be accurately controlled in real time allowing an end effector to be moved relative to an environment in an accurate and fast motion.
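
A minimal sketch of this offset idea, under the simplifying assumption of pure translation and with illustrative names, is set out below; it is not intended to represent the actual DST implementation.

```python
# Minimal sketch under simplifying assumptions (pure translation, no base
# rotation): the measured robot base position is compared with its planned
# position and the deviation is applied as an offset to the target commanded
# to the fine positioning robot arm in its own frame. Names are illustrative.
def stabilised_arm_target(arm_target_planned, planned_base, measured_base):
    """arm_target_planned is the end effector target in the robot base frame,
    computed assuming the base sat exactly at planned_base."""
    deviation = [m - p for m, p in zip(measured_base, planned_base)]
    # Shift the arm target the opposite way so the end effector stays at the
    # originally intended position relative to the environment.
    return [t - d for t, d in zip(arm_target_planned, deviation)]

# Example: the boom tip sags 12 mm, so the arm target is raised by 12 mm.
print(stabilised_arm_target([2.0, 0.5, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, -0.012]))
```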


This form of operation is referred to by the applicant as dynamic stabilisation technology (DST) and is described in prior publications including U.S. Pat. No. 8,166,727, WO2009/026641, WO2009/026642, WO2018/009981 and WO2018/009986, the contents of which are incorporated herein by cross reference.


It will also be appreciated that DST can also be used to account for intentional movement of the robot base, for example to account for the fact that the robot base 111 may be traversing a robot path whilst an interaction is performed.


An example of a number of different aspects of the above described system will now be described in further detail. These different aspects of the system can be used independently or can be used in conjunction depending on the preferred implementation. It will be appreciated from this that reference to separate aspects should not be considered limiting and that aspects can be used in any number of different combinations, depending on the preferred implementation and the scenario in which the system is used.


In one aspect a process for controlling an active damping system is provided and an example of this will now be described in more detail with reference to FIG. 4.


For the purpose of this example, it is assumed that the robot base can be moved relative to the environment using a robot base actuator. The nature of the actuator and manner in which this is performed can vary depending on the preferred implementation. In one example, the robot base actuator could be a boom assembly 140 similar to that described above with respect to FIGS. 1A and 1B. However, any form of robot base actuator could be provided and this could include a vehicle with the robot base being mounted on the vehicle, or could include the use of a crane or other similar arrangement for suspending the robot assembly above a work environment, or could include the use of self-powered robot bases, for example including wheels or tracks, or the like.


Additionally, in this example, it is assumed that the system includes an active damping system that actively damps movement of the robot base relative to the environment. The nature of the active damping system and how this is attached and used will vary depending on the preferred implementation, and specific examples will be described in more detail below.


In this example, at step 400 the control system 130 acquires tracking signals from the tracking system. The signals can be obtained from either the first or second tracking system, depending on the preferred implementation, although more typically signals will be in the form of position signals indicative of a position of the robot base relative to the environment E, obtained from the first tracking system 120.


At step 410, the control system determines a movement correction in accordance with signals from the tracking system. The movement correction is typically indicative of a degree of movement away from an expected or ideal position, and could take into account the robot base position, movement and/or acceleration.


At step 420, the control system 130 controls the active damping system at least partially in accordance with the movement correction, thereby at least partially offsetting movement of the robot base away from the expected position.


At step 430, any dynamic stabilisation applied to control movement of the end effector can be optionally updated to take into account the impact of the active damping. In this regard, if a dynamic stabilisation response is calculated based on a deviation of a robot base position from an expected position and this effect is then mitigated by the active damping, then the dynamic stabilisation response may lead to over or under compensation if this is not corrected to take into account the effect of the active damping. Accordingly, any calculated stabilisation response can be modified in a feedforward process, to take into account the active damping, thereby ensuring the stabilisation response is appropriate.
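
One way this feedforward modification could be sketched, assuming a simple scalar deviation and a predicted damping effect (both illustrative assumptions), is as follows.

```python
# Minimal sketch of the feedforward idea described above: the stabilisation
# response computed from the raw base deviation is reduced by the portion of
# that deviation the active damping system is expected to remove. The
# expected_damping_effect term and scalar treatment are assumptions.
def adjusted_stabilisation(base_deviation, expected_damping_effect, gain=1.0):
    # Deviation expected to remain after the active damping has acted.
    residual_deviation = base_deviation - expected_damping_effect
    # Stabilisation response applied by the robot arm to the residual only,
    # avoiding over-compensation for movement the damping already removes.
    return gain * residual_deviation

# Example: 10 mm deviation, of which the damping is predicted to remove 6 mm.
print(adjusted_stabilisation(0.010, 0.006))   # arm compensates ~4 mm
```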


Accordingly, it will be appreciated that the above described system describes active damping of a movable robot base. This is used to reduce unintentional movements of the robot base that typically arise due to forces resulting from movement of the boom and/or robot arm, wind loading, machinery vibrations, or the like. Reducing such unintentional movement can assist in ensuring the end effector can be accurately positioned within the environment, thereby allowing interactions to be more easily performed. In particular, this reduces the amount of compensation that needs to be performed using DST, which can allow DST to operate in circumstances where it may not otherwise be feasible, such as in windy environments, which is particularly pertinent when building high rise buildings.


A number of further features will now be described.


The movement correction can be calculated in a range of different manners depending on the preferred implementation. For example, the movement correction could be based on a path deviation indicative of movement of the robot base from a robot base path. In this example, a robot base path is used by the robot base actuator to move the robot base, with movement away from the robot base path being ascertained as unintentional and therefore used to determine movement correction. Similarly, this can be based on a position deviation indicative of deviation of a current robot base position from an expected robot base position. Additionally, and/or alternatively, the movement correction could be ascertained from a movement or acceleration of the robot base, based on a change or rate of change in robot base position. Finally, the movement correction could be based on a movement or acceleration deviation based on a change or rate of change in a robot base position relative to an expected robot base position.
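
By way of a simplified example, the quantities listed above could be combined into a single movement correction as sketched below; the finite-difference estimates, gains and scalar form are assumptions for illustration only.

```python
# Illustrative sketch combining the quantities listed above into a single
# movement correction: position deviation from the expected robot base
# position plus velocity and acceleration estimated by finite differences
# from successive tracked positions. Gains and the scalar form are assumed.
def movement_correction(positions, expected_position, dt,
                        kp=1.0, kv=0.2, ka=0.02):
    """positions: the last three tracked base positions (oldest first)."""
    p0, p1, p2 = positions
    deviation = p2 - expected_position           # position deviation
    velocity = (p2 - p1) / dt                    # change in base position
    acceleration = (p2 - 2.0 * p1 + p0) / dt**2  # rate of change of movement
    return -(kp * deviation + kv * velocity + ka * acceleration)

# Example: base drifting away from its expected position at 100 Hz tracking.
print(movement_correction([0.000, 0.002, 0.005], 0.0, 0.01))
```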


Thus, it will be appreciated that whilst damping could be performed solely based on movement or acceleration of the robot base, it could also take into account an intended position of the robot base. This can assist in providing a more effective damping outcome. For example, if a movement is heading towards an expected position of the robot base, it may be less desirable to damp that movement than if the movement is away from the expected position. Taking into account both the degree of movement and/or acceleration and the position of the robot base relative to an expected position therefore allows the damping to be controlled more effectively.


The damping system could be coupled to the robot base and/or the robot base actuator, depending on the nature of the damping system, and how it is to be implemented, and example arrangements will be described in more detail below.


In one example, the active damping system is used to generate a motive force opposing either unintentional movement of the robot base and/or movement of the robot base away from a robot base path. The motive force is typically generated by an active element, and in this regard, the damping system could include any one or more of an adaptive structural member, an inertial actuator, a linear inertial actuator, a rotational inertial actuator, at least one nozzle for emitting a pressurised fluid, at least one fan mounted on the robot base, the end effector, or the like. It will also be appreciated that this list is not intended to be exhaustive and other arrangements could be used.


A number of specific example damping arrangements will now be described.


In the example of FIGS. 5A to 5C, a damping system is shown which includes a number of linear actuators mounted within the robot base.


Specifically, in this example, the robot base 111 includes active members, each including a linear actuator 571.1, 571.2, 571.3, each having a mass 572.1, 572.2, 572.3 coupled thereto. In use, operation of the linear actuators 571.1, 571.2, 571.3 causes the corresponding masses 572.1, 572.2, 572.3 to move and thereby counteract movement of the robot base 111. In the example shown, the linear actuators 571.1, 571.2, 571.3 are orthogonally positioned, and arranged around a centre of mass of the robot base 111, allowing translational movements along axes in the robot base coordinate system RBCS to be damped.
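
A minimal sketch of how such an inertial actuator might be commanded along a single axis is given below; the gains, damping mass and control law are illustrative assumptions rather than details of the arrangement shown in FIGS. 5A to 5C.

```python
# Minimal sketch, not from the specification: a reaction-mass (inertial)
# actuator command that accelerates the damping mass so that the reaction
# force opposes the measured base motion. Gains, the mass value and the
# single-axis treatment are assumptions made for illustration.
def reaction_mass_command(base_velocity, base_acceleration,
                          damping_mass=20.0, kv=50.0, ka=1.0):
    # Desired counter-force on the robot base along this axis (N).
    desired_force = -(kv * base_velocity + ka * base_acceleration)
    # Accelerating the mass one way pushes the base the other way, so the
    # mass acceleration setpoint is the reaction of the desired force.
    mass_acceleration = -desired_force / damping_mass
    return mass_acceleration

# Example: base moving at 0.05 m/s with 0.2 m/s^2 acceleration along one axis.
print(reaction_mass_command(0.05, 0.2))
```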


However, it will be appreciated that other arrangements could be used. For example, the linear actuators could be replaced with, or used in conjunction with rotary actuators including a flywheel and drive supported by the robot base. In this instance, rotation of the flywheel can be used to counteract rotational movement of the robot base 111. It will also be appreciated that rotational and translational movement could be damped using multiple linear actuators. For example, if two linear actuators are provided in a parallel spaced apart arrangement in an X-Y plane, this can be used to damp rotational movement about a Z axis, as well as counteracting translational movement. Thus, in one example, six linear actuators can be used to provide damping in each of six degrees of freedom.


Furthermore, although damping is shown provided in three translation axes in FIGS. 5A to 5C, this may not be required. In this regard, when the robot base actuator includes an actuator base and the robot base is spaced from the actuator base in a first direction, such as when it is supported by a boom 142, 143 extending from a boom base 141, then the majority of unwanted oscillations tend to be in a plane orthogonal to the first direction. Thus in one example, active damping may only be required to apply forces to the robot base in two directions orthogonal to the first direction. By way of illustration, in the particular configuration shown in FIG. 1A, the boom extends along an axis XRB of the robot base coordinate system RBCS, meaning the majority of damping needs to be provided in the ZRB-YRB plane.


Alternative actuator arrangements are shown in FIGS. 6A to 6C and 7A to 7C.


In the example of FIGS. 6A to 6C, pairs of fans 671.1, 671.2, 672.1, 672.2, 673.1, 673.2 are mounted to the robot base 111, with each pair of fans facing in an orthogonal direction. In this instance, operation of the fans can be used to generate a forward or backward thrust, thereby imparting a damping force. Activation of a pair of fans with common thrust can be used to provide a translational damping effect, whilst an opposing thrust could generate a rotational damping effect. In this example, variation in thrust could be achieved by altering the fan rotational speed and/or by changing the fan blade pitch.
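
The allocation of common and differential thrust described above can be sketched as follows for a single pair of fans; the lever arm and thrust limit are assumed values used purely for illustration.

```python
# Illustrative sketch of allocating a desired force and torque about one axis
# to a pair of opposed fans: common thrust gives translation, differential
# thrust gives rotation. The lever arm and thrust limit are assumed values.
def fan_pair_thrust(desired_force, desired_torque, lever_arm=0.4,
                    max_thrust=100.0):
    """Return (thrust_a, thrust_b) for two fans spaced 2*lever_arm apart."""
    common = desired_force / 2.0                 # shared translational component
    differential = desired_torque / (2.0 * lever_arm)
    def clamp(t):
        return max(-max_thrust, min(max_thrust, t))
    return clamp(common + differential), clamp(common - differential)

# Example: 40 N of damping force and 8 N*m of counter-torque.
print(fan_pair_thrust(40.0, 8.0))
```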


In the example of FIGS. 7A to 7C, pairs of nozzles 771.1, 771.2, 772.1, 772.2, 773.1, 773.2 are mounted on opposing faces of the robot base 111, with the nozzles being used to emit fluid, such as air, under pressure, to thereby generate a motive force similar to that provided by a reaction control system (RCS). It will be appreciated that appropriate configuration of nozzles could be used to provide translational and/or rotational damping.


In the example of FIGS. 8A and 8B, damping is provided in the boom of the robot base actuator.


In this example, two different types of damping are shown, including the use of linear actuators 871.1, 872.1 having masses 871.2, 872.2 mounted thereon. These are mounted longitudinally within the boom 142 and stick 143, thereby providing longitudinal damping within these members.


Additionally, pairs of structural members 873.1, 873.2, 874.1, 874.2, 875.1, 875.2, 876.1, 876.2 are provided on upper and lower, and left and right sides of the boom 142 and stick 143. The structural members can be configured to provide passive damping, for example to reduce torsional or longitudinal flexure of the boom and stick. In this instance, the structural members 873.1, 873.2, 874.1, 874.2, 875.1, 875.2, 876.1, 876.2 can be formed from multi-layer visco-elastic structures, which operate to absorb forces and reduce bending of the boom and/or stick, which can in turn result in unwanted oscillations in the boom and/or stick if not absorbed by the passive damping system.


In another example, the structural members 873.1, 873.2, 874.1, 874.2, 875.1, 875.2, 876.1, 876.2 are adaptive structural members that can alter a dynamic response of the boom, and in particular impart a force on the boom. Such adaptive structural members can be formed from active materials, such as electroactive polymers, shape-memory alloys, or the like, allowing the member to impart a force through the application of a control signal, such as an electrical and/or temperature change. It will be appreciated that in this instance, if movement of the robot base is detected, then the structural members 873.1, 873.2, 874.1, 874.2, 875.1, 875.2, 876.1, 876.2 can be activated to apply a force to the boom and counteract the movement.


Additionally and/or alternatively, as opposed to applying a force to the boom and/or stick, the structural members 873.1, 873.2, 874.1, 874.2, 875.1, 875.2, 876.1, 876.2 can be adapted to modify the rigidity of the boom, which in turn alters the response of the boom to vibrations, for example by adjusting a resonant frequency of the boom, which can ensure the resonant frequency is different to a frequency of any current vibrations.
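
As a simplified illustration of this rigidity adjustment, treating the boom as a lumped mass-spring system (an assumption made only for this example), changing the effective stiffness shifts the natural frequency as sketched below.

```python
# Illustrative sketch of the rigidity-adjustment idea: for a simple lumped
# boom model, the natural frequency depends on stiffness, so changing the
# effective stiffness via the adaptive members shifts the resonance away from
# a measured excitation frequency. The mass and stiffness values are assumed.
import math

def natural_frequency_hz(stiffness_n_per_m, effective_mass_kg):
    return math.sqrt(stiffness_n_per_m / effective_mass_kg) / (2.0 * math.pi)

# Example: stiffening the boom shifts its first mode away from ~2 Hz excitation.
print(natural_frequency_hz(2.0e5, 1200.0))   # baseline stiffness
print(natural_frequency_hz(3.0e5, 1200.0))   # with adaptive members engaged
```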


It will also be appreciated that as the robot arm and/or end effector have an inherent mass, this can also act to provide damping. For example, moving the robot arm and end effector in a direction counter to an unintentional movement can provide a damping effect. Whilst this effect might not be large for the robot arm and end effector alone, in the event that the end effector is being used to carry a mass, such as a brick, this can enhance the damping effect.


Whilst a number of different damping mechanisms have been described independently, it will be appreciated that these could be used in combination. For example, this system could include damping arrangements in the robot base and in the boom, with combinations of different damping arrangements being used depending on the nature of the movement of the robot base. For example, only one damping arrangement might be used for small movements, whilst multiple arrangements might be used for larger movements.


As previously described, movement of the end effector is typically controlled to take into account, and in particular correct for movement of the robot base, thereby enabling the end effector to be accurately controlled within the environment coordinate system ECS, irrespective of relative movement between the environment and the robot base. Thus, such DST dynamically adjusts the end effector in order to account for movement of the robot base, which can be used, for example, to keep the end effector static within or moving along or in accordance with a defined path within the environment, irrespective of movement of the robot base.


Dynamic stabilisation technology can be implemented utilising different approaches and three example mechanisms will now be described, with these hereinafter being referred to as dynamic compensation, dynamic coordinate system and dynamic path planning.


Dynamic compensation operates by generating a path correction and applying the path correction when generating control signals that control the robot arm, so that the arm follows a modified path that brings the end effector back on to the original planned path.


Dynamic coordinate systems operate by calculating robot arm kinematics in a moving coordinate system which tracks movement of the robot base, so that the end effector always has a correct position in the environment coordinate system ECS. This generally involves shifting the origin of the robot arm kinematics, to ensure the end effector is correctly positioned.


Dynamic path planning involves recalculating end effector paths as the robot base and environment move relative to each other, so that the new path ensures the end effector always progresses to the end effector destination.
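
A minimal sketch of such re-planning, assuming a simple straight-line re-plan each control cycle with an illustrative step size, is set out below.

```python
# Minimal sketch of the dynamic path planning idea: the remaining path is
# recomputed each control cycle from the current end effector position (in
# ECS) to the destination, so the end effector keeps progressing toward it as
# the base moves. The step size and vector treatment are assumptions.
import math

def replan_step(current_ee_ecs, destination_ecs, step=0.02):
    """Return the next end effector waypoint in ECS for this cycle."""
    delta = [d - c for d, c in zip(destination_ecs, current_ee_ecs)]
    distance = math.sqrt(sum(x * x for x in delta))
    if distance <= step:
        return list(destination_ecs)            # within one step of the goal
    scale = step / distance
    return [c + scale * x for c, x in zip(current_ee_ecs, delta)]

# Example: one re-planned waypoint 20 mm along the line to the destination.
print(replan_step([0.00, 0.00, 0.00], [0.50, 0.20, 0.10]))
```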


In general, the dynamic stabilisation is implemented in conjunction with active damping. Accordingly, in addition to performing active damping the control system typically determines an end effector path extending to an end effector destination, generates robot control signals to control movement of the end effector and applies the robot control signals to the robot arm to cause the end effector to be moved.


As the active damping will influence movement of the robot base, the control system typically generates the robot control signals to take into account movement of the robot base and operation of the active damping. To achieve this, the control system typically generates the robot control signals using the movement deviation, and in one particular example calculates a robot base deviation based on the robot base position and an expected robot base position, calculates a stabilisation response based on the robot base deviation, modifies the stabilisation response based on the movement deviation and then generates the robot control signals using the stabilisation response.


Similarly, the damping can also be taken into account when controlling the robot base actuator. In this instance, the control system typically acquires an indication of an end effector destination defined relative to the environment coordinate system, calculates a robot base path extending from a current robot base position at least in part in accordance with the end effector destination, generates robot base control signals based on the robot base path and applies the robot base control signals to the robot base actuator to cause the robot base to be moved in accordance with the robot base path.


The control system can then control the robot base at least in part using a movement correction, generating the robot base control signals at least in part using the movement correction, so that these take into account any damping signals applied to movement of the robot base.


An example of the control process will now be described in more detail with reference to FIG. 9.


In this example, at step 900, the control system 130 acquires a robot base path. The robot base path can be a pre-calculated path that is retrieved, or alternatively can be calculated, for example based on a number of end effector destinations. Similarly, an end effector path is determined at step 905, again by retrieving a pre-determined path, or calculating an end effector path based on an end effector destination.


At step 910, the control system 130 acquires tracking signals from the tracking system, and uses these to determine a robot base position and/or movement at step 915. In this regard, signals from the first tracking system 120 can be used to determine the position of the robot base relative to the environment coordinate system ECS, which can then be used to determine movement from a previous position. Additionally and/or alternatively movement can be determined from a movement sensor 226, such as an inertial measurement unit (IMU) or similar.


At step 920, the control system 130 determines a movement correction. The movement correction typically takes into account the magnitude of any current unintentional movement, and can be based on a movement velocity and/or acceleration and the current position of the robot base relative to an expected position, as derived from the robot base path. In particular, the correction is calculated to attempt to minimise the magnitude of any unintentional movements, but also to facilitate returning the robot base to the intended robot base path. For example, a large movement of the robot base towards the robot base path may result in less damping than a small movement of the robot base away from the robot base path. The degree of damping may also depend on whether DST is currently being used or not. Thus, for example, if there is no need to stabilise the end effector, then damping may not be required. Nevertheless, there can be benefits in providing damping to prevent large oscillations of the boom developing, which can in turn cause problems when dynamic stabilisation is required.
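A hedged sketch of one possible movement correction of this kind is given below; the asymmetric gains, the idle-damping term and the decision logic are assumed values chosen only to illustrate the behaviour described above.

```python
import numpy as np

def movement_correction(base_velocity, base_position, expected_position,
                        dst_active=True, k_toward=0.2, k_away=0.8, k_idle=0.1):
    """Illustrative damping correction; all gains are assumed values."""
    # Offset of the robot base from the intended robot base path.
    path_error = base_position - expected_position

    if not dst_active:
        # Light damping only, to stop large boom oscillations developing.
        return -k_idle * base_velocity

    # Damp less when the unintentional movement is already carrying the base
    # back towards the path, and more when it is moving away from the path.
    moving_away = float(np.dot(base_velocity, path_error)) > 0.0
    gain = k_away if moving_away else k_toward

    # Oppose the unintentional velocity and nudge the base back to the path.
    return -gain * base_velocity - 0.1 * path_error
```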


In any event, at step 925 damping control signals are generated, with these being applied to the damping arrangements to activate the active damping at step 930. Thus, for example, this will involve having the control system 130 generate control signals to operate damping actuators, or the like.


Concurrently with this process, at step 935, the control system calculates a stabilisation response for the end effector in order to allow the DST process to be implemented, and examples of the manner in which this is achieved for different DST processes will be described in more detail below. At step 940, the control system 130 modifies the stabilisation response to take into account the damping control that is being applied. In this regard, if the stabilisation is calculated based on a current movement, and the damping reduces the magnitude of the movement, then the stabilisation might over or under compensate. Accordingly, in one example, the stabilisation is scaled depending on the movement correction, to thereby maintain the end effector in a desired position.


Having modified the stabilisation, at step 945 control signals are generated, with these being applied to the robot base actuator and robot arm, to move the robot base and end effector in accordance with the respective paths, with this typically being performed concurrently with the application of damping.


Accordingly, it will be appreciated that this provides a mechanism for damping to be performed in conjunction with dynamic stabilisation. Examples of the different types of dynamic stabilisation will now be described in more detail.


An example of a process for performing dynamic compensation shall now be described with reference to FIG. 10, and making reference to the system of FIGS. 1A and 1B.


In this example, at step 1000 the control system 130 acquires an end effector destination, which as will be appreciated can be achieved using techniques described above.


At step 1010, a reference robot base position is determined by the control system 130, with this typically being performed relative to the environment coordinate system ECS. The reference robot base position can be a current position and/or could be a position at a future point in time, and may therefore represent an expected position when the end effector is to perform an interaction, such as positioning an object within the environment.


An end effector path is then calculated by the control system 130 at step 1020, with the path extending to the end effector destination. This process can be performed at least in part using the reference robot base position in order to take into account any movement from the current robot base position to the reference robot base position and/or to transform between the robot base coordinate system RBCS and environment coordinate system ECS, if this is required. For example, the end effector path is typically calculated in the robot base coordinate system RBCS, as this allows control signals for the robot arm to more easily map to the movement, meaning the end effector destination is transformed from the environment coordinate system ECS to the robot base coordinate system RBCS. This is performed based on the relative position of the coordinate systems when the robot base is positioned in the reference robot base position, taking into account that the relative position of the robot base coordinate system RBCS and environment coordinate system ECS will vary as the robot base 111 moves along the robot base path. However, this is not essential, and alternatively the current end effector position could be transferred into the environment coordinate system ECS, allowing the end effector path to be calculated in the environment coordinate system ECS.
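By way of example only, the sketch below transforms an end effector destination from the ECS into the RBCS using a 4x4 homogeneous transform for the reference robot base pose; the representation and helper name are assumptions and are not prescribed by the described system.

```python
import numpy as np

def ecs_to_rbcs(destination_ecs, T_ecs_rbcs_ref):
    """Transform an end effector destination from the ECS into the RBCS,
    using the pose the robot base is expected to have at the reference
    robot base position (a 4x4 homogeneous transform of RBCS in ECS)."""
    dest_h = np.append(destination_ecs, 1.0)        # homogeneous point
    T_rbcs_ecs = np.linalg.inv(T_ecs_rbcs_ref)      # ECS -> RBCS
    return (T_rbcs_ecs @ dest_h)[:3]

# Example: reference base pose 2 m along x in ECS, no rotation.
T_ref = np.eye(4)
T_ref[0, 3] = 2.0
print(ecs_to_rbcs(np.array([3.0, 1.0, 0.5]), T_ref))  # -> [1.0, 1.0, 0.5]
```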


Having calculated a path, at step 1030, a current robot base position is determined using signals from the tracking system. This is used to calculate a correction which is indicative of a path modification at step 1040. The nature of the correction and the manner in which this is calculated will vary depending upon the preferred implementation but in one example this is in the form of a vector representing deviation of the current end effector position with respect to the end effector path, as determined based on movement of the robot base. For example, if the robot base 111 undergoes movement away from an expected robot base position, this will result in equivalent movement of the end effector 113 away from the end effector path, which will need to be corrected in order to ensure the end effector continues to traverse the path.


Robot control signals are generated by the control system 130 at step 1050, based on the end effector path and the correction, with these being applied to the robot arm to cause the end effector 113 to be moved in accordance with the end effector path and the correction, so that the end effector moves back to the end effector path and continues towards the destination at step 1060. Steps 1030 to 1060 are then repeated as needed until the end effector has reached the destination.


Accordingly, the above described technique operates by calculating a correction based on a deviation of the current measured robot base position from an expected robot base position, using the correction when generating robot control signals to thereby correct the position of the end effector. In one example, the robot base moves with a slower dynamic response, whilst the end effector moves with a faster dynamic response, so that movement of the end effector can be used to correct for movement of the robot base away from an expected robot base position.
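A minimal sketch of one such control cycle is shown below, assuming position-only control and hypothetical tracker, path and arm interfaces; it is illustrative of the correction principle only.

```python
import numpy as np

def dynamic_compensation_step(tracker, planned_path, expected_base_pos, arm):
    """One cycle of dynamic compensation (steps analogous to 1030 to 1060).

    tracker.base_position()    -> measured robot base position in the ECS
    planned_path.next_target() -> next end effector target, expressed in RBCS
    arm.move_to(target)        -> generate and apply robot arm control signals
    All three interfaces are hypothetical.
    """
    measured_base_pos = tracker.base_position()

    # Deviation of the base from its expected position becomes an equal and
    # opposite correction to the RBCS target, since the arm rides on the base.
    correction = expected_base_pos - measured_base_pos

    target = planned_path.next_target() + correction
    arm.move_to(target)  # end effector returns to the planned path
```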


A number of different example scenarios will now be described with reference to FIGS. 11A to 11F, to more clearly explain operation of the dynamic compensation in a number of different scenarios. These examples show an end effector 1113 attached to a robot arm 1112, which is in turn attached to a robot base (which is not shown for clarity). The following examples will illustrate the dynamic compensation mechanism operating in two dimensions only, but it will be appreciated that this can extend to six degrees of freedom, and reference to two dimensions only is not intended to be limiting.


In the example of FIGS. 11A and 11B, the end effector destination is coincident with a current end effector position, meaning the calculated end effector path is in fact a null path with zero path length. Such an arrangement would be used in order to maintain a static end effector position within the environment E.


In this example, an unintentional movement 1161 of the robot base, for example caused by vibrations, wind loading of a boom, or the like, moves the robot arm 1112 and hence end effector 1113. As a result, the end effector 1113 is now offset from the destination 1151. In this instance, a correction is calculated based on the movement 1161, which generates a path correction, in this instance effectively creating a new path 1162, which causes the robot arm to move the end effector and counteract the unintentional movement 1161, thereby returning the end effector to the destination 1151. In this regard, the pose of the robot arm will change in accordance with the unintentional movement of the robot base in order to effect the path correction and bring the end effector 1113 back to the destination 1151.


In the example of FIGS. 11C and 11D, the end effector 1113 is traversing along an end effector path 1155 to destination 1151. In this instance, unintentional movement 1161 occurs whilst the robot arm is simultaneously moving along the path as shown by the arrow 1163. In this instance, the correction is calculated to cause the end effector to move in accordance with arrow 1162, with this being combined together with a next movement along the path 1164, resulting in a net movement 1165, which returns the end effector 1113 to the original path 1155.


A further example shown in FIGS. 11E and 11F involves moving the end effector along an end effector path 1155, and simultaneously moving the robot base along a robot base path 1153. In this instance the reference robot base position is shown in dotted lines in FIG. 11E, which is based on the expected robot base position when the end effector 1113 reaches the end effector destination 1151. In this instance, from the reference robot base position, the end effector path 1155 extends vertically down to the destination 1151, with a net path 1156 being formed from a combination of the end effector path 1155 and the robot base path 1153, resulting in net end effector movement from the current end effector position to the destination 1151.


In this instance, as the robot base and end effector 1113 are moved along the net path, as shown at 1163, an unintentional movement arises, as shown at 1161. A correction 1162 is calculated which, combined with the next path movement 1164, results in the end effector moving along a path 1165, returning to the original net path 1156.


Accordingly, it will be appreciated that the above described processes operate to correct for at least unintentional movement of the end effector to thereby maintain the end effector 1113 at a desired position, or travelling in accordance with a desired path, within the environment coordinate system ECS, even though the end effector is controlled in the robot base coordinate system RBCS.


As mentioned above, in one preferred example, the control system 130 calculates the end effector path in the robot base coordinate system RBCS, whilst the end effector destination is typically defined in the environment coordinate system ECS. This involves having the control system 130 determine a transformed end effector destination by transforming the end effector destination from the environment coordinate system ECS to the robot base coordinate system RBCS, at least in part using the reference robot base position. The control system 130 can then calculate an end effector path extending to the transformed end effector destination in the robot base coordinate system RBCS. It will be appreciated however that this is not essential and alternatively path calculation can be performed in the environment coordinate system ECS.


In one example, the control system determines an end effector position and then calculates the end effector path using the end effector position, so that the end effector path extends from the end effector position to the end effector destination. The end effector position is typically determined in a robot base coordinate system using robot arm kinematics. In one example, the end effector position is the current position, but alternatively, the end effector position could be an expected position when the robot base reaches the reference robot base position, in which case the end effector position can be determined by transforming a current end effector position based on the reference robot base position.


The correction is typically indicative of a deviation of the end effector from the end effector path, which in turn is based on deviation of the robot base from an expected position. Accordingly, the control system calculates a robot base deviation based on the current robot base position and an expected robot base position, and then calculates the correction based on the robot base deviation. The expected robot base position can be based on a reference robot base position or a position the robot base is expected to be in based on traversal of a robot base path. Additionally, in situations where the end effector is being held stationary within the environment, the expected robot base position can be based on an initial or previous robot base position.


The reference robot base position is used to allow the calculation of the end effector path to take into account that the robot base is expected to move between end effector movement commencing and the destination being reached. Accordingly, while the reference robot base position can be a current or initial robot base position, more typically the reference robot base position is a predicted robot base position, based on movement of the robot base along a robot base path. Specifically, the reference robot base position is preferably an intended robot base position when the end effector reaches the end effector destination. Thus it will be appreciated that the reference robot base position could be calculated based on the expected position of the robot base as the destination is reached, so that the end effector moves along a direct path from the current position to the end effector destination in the robot base coordinate system RBCS.


When the reference robot base position is based on the position during interaction, the compensation need only account for unintentional movement of the robot base away from a robot base path extending to the reference robot base position. It will be appreciated, however, that this is not essential, and alternatively the correction can take into account both unintentional and intentional movement. For example, the end effector path could be calculated based on an initial robot base position, with movement along a robot base path being compensated for using the end effector path.


In some circumstances it may be desirable to control the position of the end effector without using DST. An example of this arises when the end effector is interacting with objects solely within the robot base coordinate system RBCS, for example, to retrieve an object from a delivery mechanism mounted on the robot base. In this instance, as the object and end effector both move with the robot base, this does not require DST to be active.


Accordingly, the DST mechanism may need to be activated and deactivated, for example activating DST as the end effector transitions from collecting an object in the robot base coordinate system RBCS to placing the object in the environment E. As the robot base might have undergone significant movement between the end effector path being calculated and DST being activated, fully activating the compensation mechanism could result in a large correction being calculated, which might not be practical as a result of robot arm dynamics.


Accordingly, in one example, the control system 130 scales the correction based on a relative distance of the current end effector position from the end effector destination. In particular, scaling can be used to transition the DST smoothly between off and on. Whilst any form of scaling could be used, the control system typically scales the correction using an S-shaped curve, to progressively apply the correction. This gradually turns on the correction and reduces the rate of increase of the scaling as the end effector nears the destination, thereby reducing large end effector corrections.


In one particular example, the control system 130 moves the end effector 113 between first and second end effector destinations defined in the robot base and environment coordinate systems RBCS, ECS respectively, with the control system scaling the correction based on the relative distance of the current end effector position from the first and second end effector destinations. Consequently, no correction is performed when the end effector is near the first end effector destination, whilst full correction is performed when the end effector is near the second end effector destination.
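By way of a worked sketch of such scaling, the logistic (S-shaped) function below returns approximately zero near the first destination and approximately one near the second; the sharpness value and the use of relative progress between the two destinations are assumptions of the sketch.

```python
import numpy as np

def dst_scale(dist_to_first, dist_to_second, sharpness=8.0):
    """S-shaped scaling of the DST correction between two destinations:
    ~0 near the first (RBCS-defined) destination, where no stabilisation
    is needed, and ~1 near the second (ECS-defined) destination."""
    total = dist_to_first + dist_to_second
    if total == 0.0:
        return 1.0
    progress = dist_to_first / total      # 0 near the first, 1 near the second
    return 1.0 / (1.0 + np.exp(-sharpness * (progress - 0.5)))

# Near the second destination the correction is almost fully applied.
print(dst_scale(0.9, 0.1))   # ~0.96
```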


Throughout the above example, reference has been made to a robot base and end effector position. However, it will be appreciated that DST can also be applied to the orientation of the end effector, meaning in effect the above described process is implemented based on the end effector and robot base pose, and with the end effector destination being in the form of an end effector pose.


In this particular example, the control system determines an end effector pose relative to the robot base coordinate system, calculates the end effector path using the end effector pose and a reference robot base pose in the environment coordinate system, determines a current robot base pose using signals from the tracking system and calculates the correction based on the current robot base pose. In this example, the correction is typically in the form of a vector indicative of movement in each of six degrees of freedom. The control system then controls the end effector pose, thereby providing correction in all six degrees of freedom.


Whilst the above example has focused on scenarios in which the robot base moves, it will be appreciated that this is not essential and the same techniques can be implemented when the robot base is static and the environment is moving relative to the robot base.


An example of a process for using dynamic coordinate system stabilisation will now be described with reference to FIG. 12.


In this example, an end effector destination is acquired by the control system 130 at step 1200, using the techniques similar to those described above. At step 1210 the control system 130 determines a reference robot base position, which is typically determined in the environment coordinate system ECS, and which again could correspond to a current position, a position at which an interaction is expected to occur, or the like. At step 1220, an end effector path extending to the end effector destination is determined at least in part using the reference robot base position.


It will be appreciated that the above described steps 1200 to 1220 are generally similar to the equivalent steps previously described with respect to FIG. 10. However, in this example, whilst the end effector path could be calculated in the robot base coordinate system RBCS, more typically this is performed in the environment coordinate system ECS using the reference robot base position.


At step 1230, the control system 130 determines a current robot base position using signals from the tracking system 120. This is used to calculate robot arm kinematics at step 1240, so that the kinematics take into account the current robot position, and more typically a deviation from an expected robot base position. Specifically, in one example, the robot base position is used as an origin point of the robot arm kinematics, so that as the robot base moves within the environment coordinate system, this allows the origin point of the robot arm kinematics to be updated, allowing the robot arm to be controlled in the environment coordinate system.


At step 1250 control signals are generated based on the end effector path and the calculated robot arm kinematics, allowing the control system to apply the control signals to the robot arm at step 1260, thereby causing the robot arm to move. Steps 1230 to 1260 can then be repeated as needed, for example until an end effector destination is reached.


Accordingly, in contrast to the previous example which calculated a path correction indicative of a deviation of the end effector from a planned path, the above described example operates by modifying robot arm kinematics. In one particular example this is achieved by shifting an origin of the robot arm kinematic chain, based on movement of the robot base from an expected robot base position, so that the origin shift reflects movement of the robot base. This then modifies the positioning of the robot arm, so that the robot arm is controlled in a dynamic coordinate system that moves relative to the environment coordinate system ECS and enables the end effector to remain on the end effector path. This has the benefit of avoiding the need to calculate path corrections, and thereby reducing the computational complexity of performing DST.
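A minimal sketch of this origin shift is given below, assuming 4x4 homogeneous transforms and an inverse kinematics routine supplied elsewhere; none of these interfaces are prescribed by the system described.

```python
import numpy as np

def end_effector_joint_targets(measured_base_pose, target_pose_ecs,
                               inverse_kinematics):
    """Dynamic coordinate system stabilisation, sketched.

    measured_base_pose: 4x4 transform of RBCS in ECS from the tracking system;
                        this is the shifted origin of the kinematic chain.
    target_pose_ecs:    4x4 desired end effector pose in ECS.
    inverse_kinematics: callable mapping an end effector pose expressed in
                        RBCS to joint angles (assumed to exist elsewhere).
    """
    # Express the ECS target in the (shifted) robot base frame, then solve
    # the arm kinematics so the end effector stays on the end effector path.
    target_in_rbcs = np.linalg.inv(measured_base_pose) @ target_pose_ecs
    return inverse_kinematics(target_in_rbcs)
```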


Irrespective of this, however, the robot base moves with a slower dynamic response, whilst the robot arm and hence the end effector move with a faster dynamic response, so that movement of the robot arm can be used to correct for movement of the robot base away from an expected robot base position, allowing the end effector to be maintained in a desired position.


A specific example correction will now be described with reference to FIGS. 13A to 13C, which show an end effector 1313 that is attached to a robot arm 1312, extending from a robot base 1311. The following example illustrates the dynamic coordinate system mechanism operating in two dimensions only, but it will be appreciated that this can extend to six degrees of freedom, and reference to two dimensions only is not intended to be limiting.


In this example, the end effector is maintained at a stationary position relative to the environment coordinate system ECS, and so the calculated end effector path is controlled based on a null path having an effective zero path length. Accordingly, the end effector 1313 is initially provided coincident with the destination 1351, as shown in FIG. 13A.


As shown in FIG. 13B, the robot base undergoes unintentional movement, moving a distance shown by arrow 1361, so that the robot base is now offset from the original robot base coordinate system RBCS. This results in a modified robot base coordinate system RBCS′, which can be applied to the robot arm kinematics as a shift in origin of the kinematics, causing the robot arm kinematics to be recalculated so that the end effector is moved as shown in FIG. 13C, thereby aligning the end effector with the destination 1351.


Accordingly, for an end effector path having a zero path length, the calculated robot arm kinematics returns the end effector to the end effector destination to thereby maintain the end effector static within an environment coordinate system. In particular, as the origin of the kinematic chain of the robot arm dynamically shifts in the environment coordinate system, the end effector destination is used to update the inverse kinematics (i.e. joint angles for each link of the robot arm) necessary for the end effector to remain static to thereby compensate for the moving origin of the robot arm.


It will be appreciated that similar techniques can be applied when traversing the end effector 1313 along a path and/or when moving the robot base 1311 and this will not therefore be described in further detail. However, it will be appreciated that for an end effector path having a non-zero path length, the calculated robot arm kinematics return the end effector to the end effector path. In this regard, as the origin of the kinematic chain of the robot arm shifts, the control system determines the desired end effector position on the end effector path and calculates the inverse kinematics required to achieve the desired end effector position on the path. In this example, the dynamic origin of the kinematic chain of the robot arm and the end effector destination (which may be a final destination or path point along the end effector path) are both expressed in the environment coordinate system which simplifies control of the system.


In one example, the control system determines an end effector position and then calculates the end effector path using the end effector position, so that the end effector path extends from the end effector position to the end effector destination.


As mentioned above the end effector position is typically determined using robot arm kinematics, based on the robot base position in the environment coordinate system, allowing the end effector path to be directly calculated in the environment coordinate system ECS, and control performed in the environment coordinate system ECS, by shifting an origin of the robot arm kinematics based on movement of the robot base.


In another example, the end effector destination (i.e. desired end effector position) is determined in the environment coordinate system and then a transformation is applied to transform the desired end effector position into the robot base coordinate system RBCS. This is achievable since the origin of the robot base coordinate system RBCS is measured by the tracking system and expressed in environment coordinates. In this example, control could then be implemented in the robot base coordinate system RBCS by calculating the inverse kinematics required to move the end effector to the desired end effector position (which may be a path point along the end effector path or a final destination).


In one example, the end effector position is the current position, but alternatively, the end effector position could be an expected position when the robot base reaches the reference robot base position, in which case the end effector position can be determined by transforming a current end effector position based on the reference robot base position.


Typically the control system operates by calculating a robot base movement based on the current robot base position and then modifies the robot arm kinematics using the robot base movement. The movement is typically movement from an initial or expected robot base position, based on a robot base path extending to the robot base reference position. As in the previous example, the reference robot base position can be based on a current robot base position, a predicted robot base position based on movement of the robot base from a current robot base position, a predicted robot base position based on movement of the robot base along the robot base path, or an intended robot base position when an end effector reaches the end effector destination.


As in the previous example, although reference has been made to position only, the techniques are applicable to position and orientation. Accordingly, the end effector destination typically includes an end effector pose with the tracking system measuring a robot base pose and the control system determining a current robot base pose using signals from the tracking system and calculating the robot arm kinematics based on the robot base pose. In this case, the control system can determine an end effector pose and calculate the end effector path extending from the end effector pose to the end effector destination. Thus it will be appreciated that the above described technique can correct for variations in pose as well as adjusting for position only.


Whilst the above example has focused on scenarios in which the robot base moves, it will be appreciated that this is not essential and the same techniques can be implemented when the robot base is static and the environment is moving relative to the robot base. Additionally, whilst the description focuses on correction of unintentional movement only, it will be appreciated that the arrangement can also compensate for intentional movement of the robot base.


An example of a process for performing dynamic path planning stabilisation will now be described with reference to FIG. 14.


In this example, the end effector path is recalculated as needed based on movements in the robot base position. In order to achieve this, the control system 130 acquires the end effector destination at step 1400.


At step 1410, a robot base position is determined by the control system 130. An end effector path is then calculated at step 1420, with the path extending from the end effector position to the end effector destination. The path is typically calculated in the robot base coordinate system RBCS, which is achieved by transforming the end effector destination into the robot base coordinate system RBCS using the robot base position, although alternatively the path could be calculated in the environment coordinate system ECS.


At step 1430 robot control signals are generated with these being applied to the robot arm to cause the robot arm to move at step 1440. Steps 1410 to 1440 are then repeated as needed so that an end effector path is constantly recalculated based on a robot base position, for example taking into account deviation of the robot base position from a robot base path, thereby moving the end effector towards the destination.
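The loop below sketches this behaviour with hypothetical tracker, planner and arm interfaces; it is illustrative only and does not represent a prescribed implementation.

```python
def dynamic_path_planning_loop(tracker, arm, planner, destination_ecs,
                               at_destination):
    """Dynamic path planning: the end effector path is recalculated every
    cycle from the latest measured robot base pose, rather than a fixed
    path being corrected. All interfaces are hypothetical."""
    while not at_destination():
        base_pose = tracker.base_pose()                     # RBCS pose in the ECS
        dest_rbcs = planner.to_rbcs(destination_ecs, base_pose)
        path = planner.plan(arm.end_effector_position(), dest_rbcs)
        arm.follow(path.first_segment())                    # execute, then re-plan
```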


A number of different example scenarios will now be described with reference to FIGS. 15A to 15F, which show an end effector 1513 that is attached to a robot arm 1512, which is in turn attached to a robot base (which is not shown for clarity). The following examples will illustrate the dynamic path planning mechanism operating in two dimensions only, but it will be appreciated that this can extend to six degrees of freedom, and reference to two dimensions only is not intended to be limiting.


In the example shown in FIG. 15A, the end effector path is of zero length so that a static end effector position relative to the environment is maintained. Accordingly, the end effector 1513 is initially aligned with the destination 1551. Unintentional movement of the robot base introduces an offset 1561, which results in a new path 1562 being calculated at step 1430, which returns the end effector 1513 to the destination 1551.


In the example at FIGS. 15C and 15D, the end effector 1513 is being moved along an end effector path 1555 to the destination 1551. In this instance, an unintentional movement results in an offset 1561 as the end effector is moved along the path 1563. This causes a new path 1565 to be calculated returning the end effector towards the destination 1551.


A further example is shown in FIGS. 15E and 15F, which involves moving the end effector along an end effector path 1555, and simultaneously moving the robot base along a robot base path 1553. In this instance, the reference robot base position is shown in dotted lines in FIG. 15E, so that the end effector path 1555 is initially calculated to be vertically down to the destination 1551 from the reference robot base position. The net path 1556 formed from a combination of the end effector path 1555 and the robot base path 1553 results in end effector movement from the current end effector position to the destination 1551.


In this instance, as the robot base and end effector are moved along the net path, as shown at 1563, an unintentional movement arises, as shown at 1561. A new path 1565 is calculated from an updated reference robot base position, which, when combined with robot base movement, results in a new net path 1565.


It will be appreciated that whilst the above described technique requires that the path is constantly recalculated, which is generally computationally more expensive than the previously described DST arrangements, this can have benefits. For example, the path traversed by the end effector tends to head in a more direct manner towards the destination, which can then result in a reduced number of path corrections and/or a reduced path distance. Additionally or alternatively, by reducing the number of corrections required, this avoids the end effector path oscillating around a target path to correct for movement of the robot base, which can reduce the need for sharp changes in direction, which can in turn help ensure that the path is within the constraints of the robot dynamics and can hence be more easily achieved.


As mentioned above, in one example, the end effector destination is typically defined relative to an environment coordinate system in which case the control system calculates a transformed end effector destination by transforming the end effector destination from the environment coordinate system ECS to the robot base coordinate system RBCS at least in part using the robot base position. An end effector path can then be calculated extending to the transformed end effector destination in the robot base coordinate system. However alternatively the path calculation can be performed in the environment coordinate system ECS.


In one example, the control system determines an end effector position and then calculates the end effector path using the end effector position, so that the end effector path extends from the end effector position to the end effector destination. The end effector position is typically determined in a robot base coordinate system using robot arm kinematics.


It will be appreciated that the current robot base position could take into account robot base movement along a robot base path. Accordingly, in this instance the end effector path would be calculated based on a reference robot base position, to take into account movement of the robot base along a robot base path, with the end effector path being calculated based on a deviation of the robot base position from the robot base path.


As in previous examples, the robot base moves with a slower dynamic response, whilst the end effector moves with a faster dynamic response, so that movement of the end effector can be used to correct for movement of the robot base away from an expected robot base position.


Whilst the above has been described with reference to position only, it will be appreciated that the techniques are also applicable to position and orientation. Accordingly, in one example the end effector destination includes an end effector pose and the tracking system measures a robot base pose. In this example, the control system determines a robot base pose using signals from the tracking system and calculates an end effector path based on the current base pose. In one particular example the control system determines an end effector pose relative to the robot base coordinate system and determines a current robot base pose using signals from the tracking system using both of these to calculate the end effector path.


Whilst the above example has focused on scenarios in which the robot base moves, it will be appreciated that this is not essential and the same techniques can be implemented when the robot base is static and the environment is moving relative to the robot base.


In the above described examples, the robot arm actuator is moved with a fast dynamic response to compensate for movement or positional error of the robot base in order to accurately position and stabilise the end effector of the robot arm. In addition to this compensation, in some examples, the control system also monitors the position of the robot base and controls the robot base actuator with a slow dynamic response to cause the robot base to move towards its expected or ideal position. This additional correction may reduce the amount of compensation required by the robot arm actuator and is particularly useful to keep the robot arm within its working envelope in order to position the end effector at its destination. Whilst dynamic compensation of both the robot base actuator and robot arm may be implemented together, this is not essential and one could be performed without the other.


In one example, the control system includes a computer numerical control (CNC) system. In this regard, the CNC system can be formed as a standalone module, implemented as software, firmware, hardware or a combination thereof. In this instance, additional functionality can be calculated by other modules. For example, the system may implement a DST module, which interfaces with the CNC module, to allow the system to be controlled. For example, the DST module can calculate a correction or robot arm kinematic origin shift, providing this to the CNC module to allow the robot arm to be controlled.


Throughout the above examples, and particularly when implementing DST, the steps are repeated to constantly update or correct for movement of the robot base. This is typically repeated for processing cycles of the control system, and in particular consecutive processing cycles of the control system. Thus, a new correction, robot arm kinematic origin shift or new path can be calculated for each clock cycle of the control system. In a further example, this is also performed based on a refresh rate of the tracking system, so that a new correction, etc., is calculated each time the tracking system updates the robot base position. It will be appreciated from this that, in one preferred example, the processing cycle of the control system and refresh rate of the tracking system have the same frequency, and even more preferably are time synchronised.


The control signals are typically generated taking into account an end effector velocity profile, robot dynamics and/or robot kinematics. This is performed to ensure that the robot arm is able to perform the necessary motion. For example, a calculated end effector path could exceed the capabilities of the robot arm, for example requiring a change in movement that is not feasible, or requiring movement at a rate that cannot be practically achieved. In this instance, the path can be recalculated to ensure it can be executed.


In one example, this can be achieved by performing a movement that corresponds to the original planned movement, but which is limited in magnitude to a feasible movement. In this instance, if further movement is required, this can be implemented in successive processing cycles.


An example of an overall control approach in which DST is performed using dynamic compensation in conjunction with active damping will now be described with reference to FIGS. 16A to 16C. For the purpose of this example, it is assumed that the system is similar to that described above with respect to FIGS. 1A and 1B, with the robot arm being mounted on a boom.


In this example, a robot base path is retrieved at step 1600. It will be appreciated that this can involve calculating a robot base path.


In one example, this is performed so that the path shape and velocity profile are carefully controlled to minimise changes in robot base velocity, which in turn can be used to avoid discontinuities, such as stepwise or sharp velocity changes. Sudden velocity changes, for example increasing or decreasing the speed of the robot base movement, or changing the direction of movement, can induce vibrations within the robot base actuator, such as the boom arm of a boom assembly. This in turn can lead to greater unintentional movement of the robot base, including more movements and/or movements of larger magnitude, making it more difficult for the damping and/or DST to correct for movement of the robot base and ensure the end effector is provided at a correct position.


In order to minimise the magnitude of velocity changes, including speed and/or direction changes, a number of different approaches can be used. In one example, the robot base path is curved and/or configured to allow the robot base to be moved gradually whilst interactions are performed, so that the robot base does not need to be halted.


Additionally and/or alternatively, path planning can take into account an interaction time, indicative of a time to perform an interaction, which is then used to calculate the robot base path velocity profile and optionally define an interaction window, which can then be used in controlling the robot base dynamically. In this regard, interaction windows typically correspond to a region of the environment surrounding the end effector destination in which the virtual robot base can be provided, whilst still allowing the interaction to be performed, and so this allows the velocity of the robot base, as it traverses the robot base path, to be controlled, for example depending on a completion status of the interaction.


The interaction windows are typically determined based on the interaction time and a velocity, so that the time required to perform an interaction, such as to pick up an object or place an object, corresponds to the time taken to traverse the interaction window at the defined robot base path velocity profile. In one particular example, interaction windows are defined based on a set distance surrounding a destination, derived for example based on robot arm kinematics and/or dynamics such as the reach and or velocity of the end effector.
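As a simple illustration, assuming the window is sized from the interaction time and the planned robot base speed (the margin factor being an assumption):

```python
def interaction_window_length(interaction_time_s, base_speed_m_per_s, margin=1.2):
    """Length of robot base path allotted to one interaction, sized so that
    traversing the window at the planned robot base path velocity takes at
    least as long as the interaction itself (margin is an assumed value)."""
    return margin * interaction_time_s * base_speed_m_per_s

# e.g. a 30 s brick-lay at 0.05 m/s needs roughly a 1.8 m window
print(interaction_window_length(30.0, 0.05))
```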


Having defined the interaction windows, these can then be used in order to control movement of the robot base and end effector and in particular to ensure an interaction is completed without requiring a discrete velocity change. For example, the control system can monitor end effector interaction to determine a completion status, and selectively modify the robot base control signals to cause the robot base to move at different velocities, depending on results of the monitoring.


In one particular example, when the robot base path includes an interaction window associated with each end effector destination, as the robot base enters an interaction window the control system can control the robot arm to commence interaction and/or movement of the end effector along an end effector path to the end effector destination. The control system can then monitor interaction by determining if the interaction will be completed by the time the robot base approaches an exit to the interaction window, optionally progressively reducing the robot base velocity to ensure that the interaction is completed by the time the robot base reaches the exit to the interaction window.
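One possible slow-down policy is sketched below; the policy and its parameters are assumptions intended only to illustrate the progressive velocity reduction described above.

```python
def base_speed_command(nominal_speed, window_remaining_m,
                       interaction_remaining_s, min_speed=0.0):
    """Progressive slow-down inside an interaction window: if the remaining
    interaction time would outlast the remaining window at nominal speed,
    slow the robot base just enough for the interaction to finish before
    the window exit is reached."""
    if interaction_remaining_s <= 0.0:
        return nominal_speed                 # interaction already complete
    required_speed = window_remaining_m / interaction_remaining_s
    return max(min_speed, min(nominal_speed, required_speed))
```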


Accordingly, the above described arrangement operates to calculate a path that avoids discontinuities and/or sudden or sharp changes in direction or speed, to thereby minimise unintentional movements of the robot base, such as unwanted oscillations or other movements. Additionally and/or alternatively, the above described approach uses interaction windows to control the robot base speed during the process of performing interaction within the environment. In this regard, the interaction window is defined together with a path velocity profile, based on a time taken to perform the interaction, so that the interaction can be performed without deviating from the velocity profile. In operation, completion of the interaction is monitored with movement of the robot base along the robot base path being progressively slowed if the interaction is running behind schedule. This is performed to ensure that the interaction can be performed before the robot base exits the interaction window.


Additionally, in this example, the interaction is assumed to include a number of steps, with the control system monitoring the interaction by monitoring completion of steps. As part of this process, the control system determines an end effector path for a next step and then generates control signals to move the end effector to thereby complete the step. For example, the steps may include moving the end effector to an end effector destination and then returning the end effector to a starting position, home or reference position. Thus, in the case of brick laying, the interaction could involve collecting a brick from a presentation mechanism mounted on the boom and/or robot base, moving the end effector and brick to a destination in the environment to allow the brick to be laid, before returning the end effector to allow a next brick to be collected.


At step 1602 tracking system signals are acquired with these being used to determine a current robot base pose at step 1604. In particular, this would be calculated based on a tracking target pose, and transformed into a current robot base pose using a geometrical transformation. In one example, the robot base pose is a virtual robot base pose, which is physically offset from the robot base, and aligned with the end effector, which can be beneficial in allowing the robot base to be more easily positioned in order to allow interactions to be performed.
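By way of illustration, a sketch of this geometrical transformation is given below, assuming 4x4 homogeneous transforms and a fixed, calibrated offset from the tracking target to the virtual robot base; the values and names are assumptions.

```python
import numpy as np

def virtual_base_pose(tracking_target_pose, target_to_virtual_base):
    """Both arguments are 4x4 homogeneous transforms: the measured tracking
    target pose in the ECS, and the fixed calibration offset from the target
    to the virtual robot base (here imagined aligned with the end effector)."""
    return tracking_target_pose @ target_to_virtual_base

# Example: virtual base 1.2 m ahead of the tracking target along x.
offset = np.eye(4)
offset[0, 3] = 1.2
target_pose = np.eye(4)
target_pose[0, 3] = 5.0
print(virtual_base_pose(target_pose, offset))  # translation becomes 6.2 m in x
```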


For example, when calculating a robot base path, the control system can simply acquire an end effector destination and then use this destination, together with the tracking target position, to define the robot base path, causing the robot base to traverse the environment to a position which is suitable for the interaction to be performed. In particular this can be used to align the end effector with the end effector destination, thereby reducing the complexity of the end effector path and the need for significant control of the end effector.


Additionally and/or alternatively, this can assist with path planning. For example, path planning and/or tracking of movement of the robot base using a virtual robot base position aligned with the end effector can help avoid collisions of the end effector with the environment or objects or material provided therein.


At step 1606 it is determined if an interaction window is reached and if not the process moves on to step 1630. Otherwise assuming an interaction window has been reached a next step is selected at step 1608, with an end effector path being calculated and/or retrieved at step 1610, for example using steps 1000 to 1030 of FIG. 10.


At step 1612 it is determined if stabilisation is required and if not, for example if the step involves retrieving an object from a delivery mechanism mounted on the robot base, the process proceeds to step 1624.


Otherwise, at step 1614, a robot base pose deviation is calculated based on a deviation between a current robot base pose and expected robot base pose, as calculated from the robot base path. A scaling factor is then determined at step 1616, based on a proximity of the end effector to the end effector destination. At step 1618, the robot base deviation is used to calculate a correction in the form of a vector including offsets for each of six degrees of freedom, and representing the offset of the robot base pose from the expected robot base pose. The correction is then scaled based on the scaling factor.


Concurrently with this, at step 1617, an active damping response is calculated based on the movement of the robot base. This is then used together with the correction, to thereby modify the correction so that the correction takes into account the damping effect.


A robot kinematic transformation is calculated using the end effector path and the scaled correction at step 1620, with this being assessed to ensure dynamics are feasible at step 1622. In this regard, the correction may require that the robot arm undergo a movement which exceeds the robot arm's capabilities, for example requiring a movement that is too rapid. If the movement is not feasible, this can be recalculated or modified, for example by limiting the resulting magnitude of the correction based on the robot arm dynamics. In one example, this is achieved by returning to step 1618, to recalculate the correction. However, this is not essential and in one example, the control signals could be generated at step 1624 based on the robot arm dynamics to simply implement the correction to the maximum degree possible before the next processing cycle of the control system. Thus, if the correction requires end effector movement of 10 mm, but only a 5 mm movement can be achieved prior to the next processing cycle implemented by the controller, then the 5 mm movement would be implemented.
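The limiting described above (for example applying 5 mm of a requested 10 mm correction within one cycle) can be sketched as follows; the speed limit and cycle time representation are assumptions of the sketch.

```python
import numpy as np

def feasible_correction(correction_mm, max_speed_mm_per_s, cycle_time_s):
    """Limit a correction vector to what the robot arm can execute before the
    next processing cycle; the remainder is handled in subsequent cycles."""
    limit = max_speed_mm_per_s * cycle_time_s
    magnitude = float(np.linalg.norm(correction_mm))
    if magnitude <= limit or magnitude == 0.0:
        return correction_mm
    return correction_mm * (limit / magnitude)

# e.g. a 10 mm correction with a 5 mm-per-cycle limit is applied as 5 mm now.
print(feasible_correction(np.array([10.0, 0.0, 0.0]), 500.0, 0.01))
```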


At this point, the control system 130 can determine if the interaction is proceeding on schedule at step 1626, and if not the control system 130 modifies the boom speed at step 1628, for example to slow down movement of the boom. Whether or not the boom speed is modified, the resulting boom control signals are generated by the control system 130 at step 1630.


Control signals are then applied to the respective actuators at step 1632, and to the damping system at step 1631, to thereby move the boom and end effector, and perform damping. Tracking system signals are acquired at step 1634, with this being used to determine a current base pose, following movement of the end effector and robot base, at step 1636.


At step 1638, an assessment is made of whether the step is completed and if not the process returns to step 1612 to again determine if stabilisation is required. Otherwise it is determined if all steps are complete at step 1640, with the process returning to step 1608 to select a next step if not. Otherwise the process returns to 1606 to determine whether a next interaction window has been reached.


It will be appreciated that by following the above described sequence, this allows the boom to be progressively moved along the boom path with interactions being performed by performing sequences of steps, with each step involving the determination of an end effector path with the end effector being moved along the end effector path to a destination.


Whilst the example of FIGS. 16A to 16C focuses on the use of dynamic compensation, it will be appreciated that similar approaches can be used for both dynamic coordinate system and dynamic path planning approaches to DST in conjunction with active damping.


The above described damping arrangements are useful in many applications requiring fine position and motion control over a large working volume.


Some example applications are given below:


Ship Transfer


Ship to ship, ship to oil rig, ship to gas rig, or ship to wind turbine transfer of goods, liquids or personnel is a potential application for the control system of the invention. It is known to stabilise a vessel for position holding. It is also known to roll stabilise a vessel with gyros or thrusters. It is known to yaw stabilise a vessel with thrusters. It is also known to provide heave, pitch, roll and yaw compensation to working devices such as booms.


However, it is known that for long booms in heavy sea states the existing methods of compensation have limitations. Coarse boom positioning combined with fine end effector positioning, or even additional stages of fine positioning, would enable safer transfer, hook up, disconnection and operations in larger sea states and rougher weather.


This could have great benefit for petrochemical, renewable energy and military operators (and others) that require or desire to transfer things from vessel to vessel or vessel to fixed objects in all weather conditions.


Long Building


Long structures such as road freeway sound walls can be built by the brick laying machine. With traditional arrangements it is necessary to build from one location, then reposition periodically and build from the next stationary location. It would be advantageous to be able to build from a creeping machine. This would reduce lost time to reposition and would enable a smaller more compact machine with a shorter boom. A track mounted machine with a short boom would be ideal. Multiple fixed ground references are provided to facilitate this.


Long Trenching


Long trenches for infrastructure such as underground pipelines and underground cables can be dug with known continuous trenching machines (such as made by Ditch Witch or Vermeer) or, for larger cross section trenches, with excavators (such as made by Caterpillar, Volvo, John Deere, Komatsu and others). For many applications the precise grade and location of the trench and pipe is important, such as for sewerage pipe. For many applications knowing the precise position is important, such as in cities to avoid damaging existing infrastructure such as pipes, cables, foundations and underground train and road tunnels. Current systems allow some control of the digging and provide feedback to the operator of dig depth or bucket position. In current systems the base of the machine (the tracks) must be stationary.


The dynamic control system described allows precision digging to a tolerance that cannot currently be achieved by other methods. Furthermore, it allows pre-programmed digging for completely autonomous operation. Furthermore, it allows precision digging from a continuously moving machine such as a tracked excavator creeping along the path of the proposed trench.


Ground Contouring


It is known to use graders, bulldozers, loaders, gradall or automated screeding machines to smooth earth or concrete surfaces with blades or buckets. The inherent design of the machine will achieve a flatter surface than it moves over because the geometry of the machine provides a smoothing action. It is known that a more accurate and faster result can be achieved with automatic control to maintain the bucket or blade on a predefined level, grade or contour. The blade or bucket is moved up or down or tilted about a roll axis automatically to maintain a laser plane level or grade or to match a contour referenced by GPS or total station measurements. These known control systems have a low bandwidth and the machine achieves an accurate result because the inherent design of the machine will achieve a flatter surface than it drives over, even without machine guidance.


The present invention allows more complex machine arrangements such as a (modified) excavator, to be fitted with a multi axis controlled blade or bucket to achieve very complex earthmoving tasks in a completely programmable way.


Mining


It is known to use autonomous trucks for mining.


Excavators and face shovels are currently operated by machine operators. This technology enables autonomous control of excavators and face shovels by pre-programming the base movement (track base) and the dig program in mine coordinates.


Dredging


Excavators mounted on barges are used for dredging. Dredged channel depth, width, profile and location is extremely important for shipping safety. Dredging is expensive so it is advantageous to minimise the amount of spoil moved. The more accurate the dredging, the less spoil needs to be removed.


The barges are floating, so as the excavator moves, the barge pitches, rolls and moves. Measuring the barge position and orientation in 6 DOF in real time enables the bucket position to be precisely calculated (via known sensors that measure the pose of the excavator), or even controlled to a set of pre-programmed dig locations.


Elevated Work Platforms


It is known to use various kinds of elevated work platforms (EWP) such as boom lifts or scissor lifts or vertical telescoping lifts made by manufacturers such as JLG, Snorkel and Genie. It is known that very tall boom lifts sway with a large amplitude and make work difficult, dangerous or impossible. The sway is the limiting factor for the height that boom lifts can work at. It is known that driving the boom lift or EWP with the platform up excites sway and makes the platform uncomfortable or dangerous. The present invention provides means to make a stabilised platform so that the platform is stabilised relative to the ground, or to a desired trajectory when the platform or EWP is moved.


Cable Suspended Robots


It is known to support a robot on a platform suspended by cables in tension supported by an overhead gantry or towers (see the PAR Systems Tensile Truss and the Chernobyl crane and demolition robot). The cables can support high loads but the structure has low stiffness; the lateral stiffness in particular is very low. The accuracy of the positioning of the robot and end effector would be greatly improved by adding a tracking component to the suspended platform to provide a 6DOF position and orientation of the base of the robot arm. This would enable such a system to do accurate work, rather than the relatively inaccurate demolition work it is presently employed to do.


Very Accurate Applications


Such a system may include a galvo mirror to be used with a high power laser for laser cutting, laser engraving or 3D additive laser melting manufacture.


It will be appreciated that a wide range of other uses are also envisaged. For example, the system can be used to perform construction of multi-story and/or high-rise buildings. In this regard, the robot base can be supported by, or remotely from, the building during construction, with the system being used to compensate for movement of the robot base relative to the building, which might arise from wind loading of the building and/or of the support system used to support the robot base.


The system could also be used with a wide range of vehicles additional to those mentioned above, such as space vehicles. In this example, the robot base could be mounted on the space vehicle, allowing it to be used to perform an interaction with another vehicle, for example to facilitate docking, satellite retrieval or the like, or with other objects, such as an asteroid or similar.


In one example, the system uses a cascading system of positioning devices, measurement systems and control channels. In one embodiment, a wide-ranging but relatively inaccurate gross motion system guides a vehicle, which supports a large-area coarse positioning boom, which in turn supports a small dynamic compensation and fine positioning robot, which then supports an even finer dynamic compensation and positioning mechanism.
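

The cascade can be pictured, purely for illustration, as each stage absorbing the residual error left by the stage above it within its own travel range; the sketch below assumes a simple one-dimensional error and illustrative stage ranges.

```python
# Illustrative sketch (assumed structure, not the specification's implementation)
# of a cascade in which each stage removes the residual error left by the stage
# above it, within its own travel range.
def cascade_residuals(total_error_mm, stage_ranges_mm):
    """Allocate a total positioning error across cascaded stages.
    Each stage absorbs as much of the remaining error as its range allows;
    the finest stage handles whatever is left."""
    corrections, residual = [], total_error_mm
    for reach in stage_ranges_mm:
        step = max(-reach, min(reach, residual))  # clamp to this stage's range
        corrections.append(step)
        residual -= step
    return corrections, residual

# e.g. vehicle (coarse), boom (medium), fine compensation robot (fine)
corrections, residual = cascade_residuals(1250.0, [1000.0, 200.0, 60.0])
print(corrections, residual)
```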


In one example, the system employs dynamic coordinate systems and methods of moving machines and stabilising end effectors. In preferred embodiments, methods of transitioning compensation on and off, or of damping the transition, are provided, so that the robot arm moving the end effector may work alternately in a head coordinate system and a ground or work coordinate system.


It is advantageous to code a kinematic transformation as a stand-alone piece of software. This means that the CNC kernel does not have to be modified to accommodate different kinematic chains. By using a dynamic coordinate system as the base of the end effector robot kinematic chain, the end effector can be programmed in a work coordinate system and all of the normal CNC coordinate shifts and transformations work, such as offsets for work coordinates and coordinate system rotation.
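

A minimal sketch of this idea, assuming 4x4 homogeneous transforms and illustrative function names, is given below: the tracker-measured base pose is simply prepended to the arm's forward kinematics, so targets remain programmed in the work coordinate system.

```python
# Hedged sketch only: the kinematic transformation is kept outside the CNC
# kernel by composing the measured (dynamic) base pose with the arm's own
# kinematics. All names and conventions are assumptions for illustration.
import numpy as np

def make_pose(rotation_3x3, translation_xyz):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_xyz
    return T

def end_effector_in_work(T_work_base, T_base_tcp):
    """Work-frame TCP pose = (tracker-measured base pose) @ (arm forward kinematics)."""
    return T_work_base @ T_base_tcp

def arm_target_from_work(T_work_base, T_work_target):
    """Convert a target programmed in work coordinates into the frame of the
    (moving) robot base, ready for the arm's inverse kinematics."""
    return np.linalg.inv(T_work_base) @ T_work_target
```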


With a dynamic coordinate system for the base of the kinematic chain of the robot arm, the concept of a compensation amount is abstract. If the base of the kinematic chain of the robot arm were at its programmed location, there would be no compensation amount and the robot arm would be in a first pose. If the base is at its actual location and the robot arm were in the first pose, the end effector would be at the wrong location (and in the wrong orientation), the difference being the compensation amount.
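

This can be made concrete with the following illustrative sketch, in which the compensation amount is the pose difference between the end effector pose implied by the programmed base location and that implied by the actual base location, with the arm held in the first pose; the transform names are assumptions.

```python
# Worked sketch of the abstract "compensation amount": with the arm held in the
# first pose, the end effector pose implied by the programmed base location is
# compared with the pose implied by the actual (measured) base location.
import numpy as np

def compensation_amount(T_work_base_programmed, T_work_base_actual, T_base_tcp_first_pose):
    """Return the pose error the fine stage must remove (actual -> programmed)."""
    T_tcp_programmed = T_work_base_programmed @ T_base_tcp_first_pose
    T_tcp_actual = T_work_base_actual @ T_base_tcp_first_pose
    return np.linalg.inv(T_tcp_actual) @ T_tcp_programmed
```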


In one example, there is provided a control system for an arm supported from an arm base, said arm having an end effector mounted therefrom, said end effector having a further arm supported by a further arm base and said further arm having a further end effector mounted thereon, said arm being moveable relative to said arm base by an arm controller interfaced with an arm actuator to position said end effector to a programmed location, said further arm being movable by a further arm controller interfaced with a further arm actuator to position said further end effector at a programmed position; said control system having a tracker system to track the position of a first target located by an offset proximal to said further arm base or end effector, and to track the position and orientation of a second target located with a TCP offset from said further end effector; wherein said tracker system tracks the position of said first target and feeds data to said arm controller to operate said arm actuator with a slow dynamic response to dynamically position said first target close to said offset to position said further arm base close to said programmed location, and said tracker system tracks the position and orientation of said second target and feeds data to said further arm controller to operate said further arm actuator with a fast dynamic response to dynamically position and optionally orientate said second target to said TCP offset from said programmed position and optionally orientation. The TCP offset may be defined by position and optionally orientation data. The difference between the slow dynamic response and the fast dynamic response is inversely proportional to the potential inertia of the arm and the further arm. Where the further arm is much smaller than the arm, the further arm will possess less potential inertia and may be moved with a relatively fast dynamic response.
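

The slow/fast split can be illustrated with the simplified one-dimensional sketch below, in which a low-bandwidth loop repositions the further arm base while a high-bandwidth loop removes the residual error at the TCP; the gains, time step and the 1-D simplification are assumptions chosen only to show the behaviour.

```python
# Hedged sketch of the slow/fast split described above: a low-bandwidth loop
# drives the further-arm base toward its programmed location while a
# high-bandwidth loop makes the further arm cancel whatever error remains.
def simulate(base_error0=10.0, steps=200, dt=0.01, k_slow=1.0, k_fast=20.0):
    base_error = base_error0   # error of the further-arm base (first target)
    arm_offset = 0.0           # correction applied by the further arm
    for _ in range(steps):
        base_error += -k_slow * base_error * dt      # slow dynamic response
        tcp_error = base_error - arm_offset          # residual at the TCP
        arm_offset += k_fast * tcp_error * dt        # fast dynamic response
    # tcp_error decays far faster than base_error, so the second target holds
    # its TCP offset while the base is still being repositioned.
    return base_error, base_error - arm_offset

print(simulate())
```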


In one example, said second target is located with said TCP offset from said further end effector so as to move with movement and pose of said further end effector. In this case the TCP offset is defined by position and orientation data, and said tracker system measures the position and orientation of said second target.


By “close to” said programmed location, it is meant that the further arm base is moved sufficiently close that the further end effector is within range of its programmed task, i.e. the further arm can move the further end effector to a position in order that the task the further end effector is to perform can be completed. By “dynamically position” and “dynamically position and orientate”, it is to be understood that as the position of the further arm base varies due to deflection, its position (and orientation if applicable, see hereinafter) is constantly under review and adjusted by the arm actuator with slow dynamic response, and the position and orientation of the further end effector is also constantly under review and adjusted by the further arm actuator with fast dynamic response.


In one example, said further arm base is mounted proximal to a remote end of said arm, away from said arm base.


In one example, said further arm base and said first target is mounted on a head, mounted to the remote end of the arm.


In one example, said head is pivotally mounted to the remote end of the arm.


In one example, said head is pivotally mounted about a horizontal axis to the remote end of the arm.


In one example, said tracker system tracks the position and orientation of said first target, and feeds data to said arm controller to operate said arm actuator with a slow dynamic response to position and orientate said first target close to said offset to position said further arm base close to said programmed location.


Where the head is pivotally mounted to the remote end of the arm, the pose of the head may be controlled by a controller separate from the arm controller, in which case the arm controller need only operate the arm actuator to position the first target along three orthogonal axes. However, control of the pose of the head may be integrated into the arm controller, in which case the position and orientation of the first target can be tracked.


Where the head is pivotally mounted to the remote end of the arm about a multi axis mechanism, the position and orientation of the first target can be tracked with six degrees of freedom. The position and orientation of the second target can be tracked with six degrees of freedom.


In one example, said tracker system includes separate target tracking devices for said first target and said second target.


In one example, said further arm controller may be controllably switched between a first state, wherein said further arm controller is responsive to positioning feedback data derived from said tracker system, and a second state, wherein pre-calibrated positioning data referenced to the further arm base (and hence the remote end of the arm) is relied on; when switched between said first state and said second state, said further arm controller controls movement of said further arm to dampen movement of the further arm, to avoid sudden movement of said further arm and said further end effector. Such sudden movement could feed back to the arm, causing the arm to undergo reactive movement.
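

One way such a damped transition might be realised, shown purely as an illustrative sketch, is to cross-fade the commanded setpoint between the tracker-derived and pre-calibrated references over a short blend window; the smoothstep ramp and timing are assumptions.

```python
# Illustrative sketch (assumed approach) of switching the further-arm controller
# between tracker feedback and pre-calibrated positioning without a step change:
# the commanded setpoint is cross-faded over a short transition window.
def blended_setpoint(tracker_setpoint, precalibrated_setpoint, t, t_switch, t_blend=0.5):
    """Cross-fade from the tracker-derived to the pre-calibrated setpoint
    starting at t_switch; a smooth ramp avoids a sudden jump that would
    otherwise excite the arm (1-D values for simplicity)."""
    if t <= t_switch:
        return tracker_setpoint
    w = min(1.0, (t - t_switch) / t_blend)   # 0 -> 1 over the blend window
    w = w * w * (3.0 - 2.0 * w)              # smoothstep: zero slope at both ends
    return (1.0 - w) * tracker_setpoint + w * precalibrated_setpoint
```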


In one example, said arm base is provided with movement apparatus to move said arm base relative to the ground. The movement apparatus may be a wheeled conveyance, with or without its own locomotion, or self-powered endless tracks. The movement apparatus may incorporate self-levelling to level the arm base.


In one example, said arm base is mounted on an active suspension system, and said arm base incorporates a third target for said tracker system, said active suspension system having a suspension controller interfaced with a suspension actuator to control the position and orientation of said arm base in response to data from said tracker system reading the position and orientation of said third target.


Alternatively, said arm base is mounted to an object having larger inertia than said arm on an active suspension system, and said arm base incorporates a third target for said tracker system; said active suspension system having a suspension controller interfaced with a suspension actuator to control the position and orientation of said arm base relative to said object in response to data from said tracker system reading the position and orientation of said third target, said suspension actuator to control the position of said arm base with a slower dynamic response than said arm controller operates said arm actuator.


In another example, there is provided a control system for a boom supported from a boom base, said boom having a robot arm mounted by a robot base therefrom, said robot arm having an end effector, said boom being moveable relative to said boom base by a boom controller interfaced with a boom actuator to position said robot base to a programmed location, said robot arm being movable by a robot arm controller interfaced with a robot arm actuator to position said end effector at a programmed position and orientation; said control system having a tracker system to track the position of a first target located by an offset proximal to said robot base, and to track the position and orientation of a second target located with a TCP offset from said end effector TCP; wherein said tracker system tracks the position of said first target and feeds data to said boom controller to operate said boom actuator with a slow dynamic response to dynamically position said first target close to said offset to position said robot base close to said programmed location, and said tracker system tracks the position and orientation of said second target and feeds data to said robot arm controller to operate said robot arm actuator with a fast dynamic response to dynamically position and orientate said second target to said TCP offset from said programmed position and orientation. The TCP offset may be defined by position and orientation data.


In one example, said second target is located with said TCP offset from said end effector TCP so as to move with movement and pose of said end effector.


By “close to” said programmed location, it is meant that the robot base is moved sufficiently close that the end effector is within range of its programmed task, i.e. the robot arm can move the end effector to a position in order that the task the end effector is to perform can be completed. By “dynamically position” and “dynamically position and orientate”, it is to be understood that as the position of the robot base varies due to deflection, its position (and orientation if applicable, see hereinafter) is constantly under review and adjusted by the boom actuator with slow dynamic response, and the position and orientation of the end effector is also constantly under review and adjusted by the robot arm actuator with fast dynamic response.


In one example, said robot base is mounted proximal to a remote end of said boom, away from said boom base.


In one example, said robot base and said first target is mounted on a head, mounted to the remote end of the boom.


In one example, said head is pivotally mounted to the remote end of the boom.


In one example, said head is pivotally mounted about a horizontal axis to the remote end of the boom.


In one example, said tracker system tracks the position and orientation of said first target, and feeds data to said boom controller to operate said boom actuator with a slow dynamic response to position and orientate said first target close to said offset to position said robot base close to said programmed location.


Where the head is pivotally mounted to the remote end of the boom, the pose of the head may be controlled by a controller separate from the boom controller, in which case the boom controller need only operate the boom actuator to position the first target along three orthogonal axes. However, control of the pose of the head may be integrated into the boom controller, in which case the position and orientation of the first target can be tracked.


Where the head is pivotally mounted to the remote end of the boom about a multi axis mechanism, the position and orientation of the first target can be tracked with six degrees of freedom. The position and orientation of the second target can be tracked with six degrees of freedom.


In one example, said tracker system includes separate target tracking devices for said first target and said second target.


In one example, said robot arm controller may be controllably switched between a first state, wherein said robot arm controller is responsive to positioning feedback data derived from said tracker system, and a second state, wherein pre-calibrated positioning data referenced to the robot base (and hence the remote end of the boom) is relied on; when switched between said first state and said second state, said robot arm controller controls movement of said robot arm to dampen movement of the robot arm, to avoid sudden movement of said robot arm and said end effector. Such sudden movement could feed back to the boom, causing the boom to undergo reactive movement.


In one example, said boom base is provided with movement apparatus to move said boom base relative to the ground. The movement apparatus may be a vehicle in the form of a wheeled conveyance, with or without its own locomotion, or self-powered endless tracks. The movement apparatus may incorporate self-levelling to level the boom base. Such self-levelling should move the boom base to stabilise the boom base, and hence the boom, against changes in position and orientation of the boom base brought about by undulations in the ground over which the vehicle traverses.


In one example, said boom base is mounted on an active suspension system, and said boom base incorporates a third target for said tracker system, said active suspension system having a suspension controller interfaced with a suspension actuator to control the position and orientation of said boom base in response to data from said tracker system reading the position and orientation of said third target.


Alternatively, said boom base is mounted to an object having larger inertia than said boom on an active suspension system, and said boom base incorporates a third target for said tracker system; said active suspension system having a suspension controller interfaced with a suspension actuator to control the position and orientation of said boom base relative to said object in response to data from said tracker system reading the position and orientation of said third target, said suspension actuator to control the position of said boom base with a faster dynamic response than said boom controller operates said boom actuator.


The control system may include multiple tracker components at various positions on the machine so that a tracker (or multiple trackers) has or have line(s) of sight to one or more tracker components supported by the machine.


In one example, the control system of the machine includes algorithms to evaluate line of sight so that the best line of sight between tracker and tracker component in a particular pose can be chosen. The criteria for the best line of sight include: the most accurate position and orientation solution (which may depend on the pose of the tracker or its sensor); the field of view of the tracker or the sensor; the distance to the end effector (closer is better); and maintaining line of sight at all times during a programmed path or a critical operation.
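

A hedged sketch of such a selection algorithm is shown below; the scoring weights and the layout of the candidate records are assumptions made for illustration.

```python
# Illustrative sketch of choosing the "best" line of sight from the criteria
# listed above; the weighting and candidate record layout are assumptions.
def best_line_of_sight(candidates):
    """candidates: list of dicts with keys 'accuracy_mm' (expected solution
    accuracy), 'in_fov' (bool), 'distance_to_effector_m', and
    'visible_over_path' (bool, line of sight held over the programmed path)."""
    def score(c):
        if not c["in_fov"] or not c["visible_over_path"]:
            return float("-inf")   # unusable for this pose or operation
        # Lower expected error and shorter distance to the end effector score higher.
        return -(c["accuracy_mm"] + 0.1 * c["distance_to_effector_m"])
    return max(candidates, key=score)

pairs = [
    {"accuracy_mm": 0.3, "in_fov": True, "distance_to_effector_m": 8.0, "visible_over_path": True},
    {"accuracy_mm": 0.2, "in_fov": True, "distance_to_effector_m": 25.0, "visible_over_path": False},
]
print(best_line_of_sight(pairs))
```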


In one example, said machine includes a further tracker component supported on said robotic arm, or on said end effector, and said machine uses a further tracker system to measure the position of the further tracker component and applies further compensating movement to the robotic arm assembly to correct for variance between programmed further tracker component position and measured further tracker component position.


The boom base may be a vehicle which may include a tracker component at a position on the vehicle, or a plurality of tracker components at various positions on the vehicle. The tracker component(s) may be used to determine the position and orientation of the vehicle relative to a workspace coordinate system. The tracker component(s) may also be used to determine the position and orientation of the vehicle while it is moving. The tracker system may include multiple ground references to track the tracker targets as the vehicle progresses along a path.


The arrangements described above can achieve a high degree of dynamic motion quality and position tolerance over a large workspace. This results in smoother motion for end effectors located at the end of long booms or towers or supported on long cable trusses. The arrangements of the invention can smooth motion for an end effector supported by a long boom or tower carried by a moving vehicle.


Further details of the applicant's technology are described in patent publications and applications U.S. Pat. No. 8,166,727, PCT/AU2008/001274, PCT/AU2008/001275, PCT/AU2017/050731, PCT/AU2017/050730, PCT/AU2017/050728, PCT/AU2017/050739, PCT/AU2017/050738, PCT/AU2018/050698, AU2017902625, AU2017903310, AU2017903312, AU2017904002 and AU2017904110, the contents of which are incorporated herein by cross reference.


Throughout this specification and claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated integer or group of integers or steps but not the exclusion of any other integer or group of integers. As used herein and unless otherwise stated, the term “approximately” means ±20%.


Persons skilled in the art will appreciate that numerous variations and modifications will become apparent. All such variations and modifications which become apparent to persons skilled in the art should be considered to fall within the spirit and scope of the invention as broadly described hereinbefore.

Claims
  • 1. A system for performing interactions within a physical environment, the system including:
    a) a robot base being mounted on a boom;
    b) a robot base actuator configured to move the robot base relative to the environment;
    c) a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon;
    d) a tracking system configured to measure:
      i) a robot base position indicative of a position of the robot base relative to the environment; and/or
      ii) a robot base movement indicative of a movement of the robot base relative to the environment;
    e) an active damping system operably coupled to the robot base and configured to actively damp movement of the robot base relative to the environment; and,
    f) a control system configured to:
      i) cause the robot base actuator to move the robot base along a robot base path,
      ii) cause the robot arm to move the end effector along an end effector path;
      iii) determine, based on signals from the tracking system, that the robot base has deviated from the robot base path;
      iv) correct for the deviation of the robot base from the robot base path by:
        (1) calculating a movement correction for the robot base in accordance with the signals from the tracking system and based on the robot base path, and
        (2) controlling the active damping system at least partially in accordance with the movement correction to return the robot base to the robot base path;
      v) stabilize the end effector to account for unintentional end effector movement caused by the deviation of the robot base from the robot base path by:
        (1) generating robot control signals to control movement of the end effector, wherein the robot control signals are based at least in part on:
          (a) the deviation of the robot base from the robot base path, and
          (b) the movement correction applied to the robot base by the active damping system; and
        (2) applying the robot control signals to the robot arm to stabilize the end effector.
  • 2. A system according to claim 1, wherein the control system determines the movement correction using at least one of:
    a) a position deviation based on a current robot base position and an expected robot base position;
    b) a movement based on a change in robot base position;
    c) an acceleration based on a rate of change in the robot base position;
    d) a movement deviation based on a change in a robot base position relative to an expected robot base position; and,
    e) an acceleration deviation based on a rate of change in a robot base position relative to an expected robot base position.
  • 3. A system according to claim 1, wherein the active damping system is coupled to at least one of:
    a) the robot base; and,
    b) the robot base actuator.
  • 4. A system according to claim 1, wherein the active damping system generates a motive force opposing at least one of:
    a) unintentional movement of the robot base; and,
    b) movement of the robot base away from the robot base path.
  • 5. A system according to claim 1, wherein the active damping system includes at least one of:
    a) an adaptive structural member;
    b) at least one nozzle for emitting a pressurised fluid;
    c) at least one fan mounted on the robot base; and,
    d) the end effector.
  • 6. A system according to claim 1, wherein the active damping system includes:
    a) at least one actuator operatively coupled to the robot base; and,
    b) at least one mass coupled to the actuator to allow the mass to be moved relative to the actuator.
  • 7. A system according to claim 1, wherein the robot base actuator includes an actuator base, wherein the robot base is spaced from the actuator base in a first direction, and wherein the active damping system is configured to apply forces to the robot base in at least two directions orthogonal to the first direction.
  • 8. A system according to claim 1, wherein the robot base actuator includes:
    a) a boom having a head including the robot base; and,
    b) a boom base, the boom extending from the boom base.
  • 9. A system according to claim 1, wherein the boom includes an adaptive structural member that can alter a dynamic response of the boom.
  • 10. A system according to claim 9, wherein the adaptive structural member includes at least one of:
    a) electroactive polymers; and,
    b) shape-memory alloys.
  • 11. A system according to claim 1, wherein:
    a) the end effector path extends to an end effector destination; and
    b) the robot control signals are configured to cause the end effector to be moved along the end effector path toward the end effector destination.
  • 12. A system according to claim 1, wherein, to generate the robot control signals, the control system is configured to:
    a) calculate a stabilization response based on the deviation of the robot base from the robot base path; and
    b) modify the stabilization response based on the movement correction applied to the robot base.
  • 13. A system according to claim 1, wherein the control system is configured to:
    a) acquire an indication of an end effector destination defined relative to an environment coordinate system;
    b) calculate the robot base path extending from a current robot base position at least in part in accordance with the end effector destination;
    c) generate robot base control signals based on the robot base path; and,
    d) apply the robot base control signals to the robot base actuator to cause the robot base to be moved along the robot base path.
  • 14. A system according to claim 13, wherein the control system is configured to:
    a) control the robot base at least in part using the movement correction; and/or
    b) generate the robot base control signals at least in part using the movement correction.
  • 15. A method for performing interactions within a physical environment using a system including:
    a) a robot base being mounted on a boom;
    b) a robot base actuator configured to move the robot base relative to the environment;
    c) a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon;
    d) a tracking system configured to measure:
      i) a robot base position indicative of a position of the robot base relative to the environment; and/or
      ii) a robot base movement indicative of a movement of the robot base relative to the environment; and,
    e) an active damping system operably coupled to the robot base and configured to actively damp movement of the robot base relative to the environment,
    f) wherein the method includes, in a control system:
      i) causing the robot base actuator to move the robot base along a robot base path, and
      ii) causing the robot arm to move the end effector along an end effector path;
      iii) determining, based on signals from the tracking system, that the robot base has deviated from the robot base path;
      iv) correcting for the deviation of the robot base from the robot base path by:
        (1) calculating a movement correction for the robot base in accordance with the signals from the tracking system and based on the robot base path, and
        (2) controlling the active damping system at least partially in accordance with the movement correction to return the robot base to the robot base path;
      v) stabilizing the end effector to account for unintentional end effector movement caused by the deviation of the robot base from the robot base path by:
        (1) generating robot control signals to control movement of the end effector, wherein the robot control signals are based at least in part on:
          (a) the deviation of the robot base from the robot base path, and
          (b) the movement correction applied to the robot base by the active damping system; and
        (2) applying the robot control signals to the robot arm to stabilize the end effector.
  • 16. A computer program product including computer executable code, which when executed by a suitably programmed control system causes the control system to control a system for performing interactions within a physical environment, the system including:
    a) a robot base being mounted on a boom;
    b) a robot base actuator configured to move the robot base relative to the environment;
    c) a robot arm mounted to the robot base, the robot arm including an end effector mounted thereon;
    d) a tracking system configured to measure:
      i) a robot base position indicative of a position of the robot base relative to the environment; and/or
      ii) a robot base movement indicative of a movement of the robot base relative to the environment; and,
    e) an active damping system operably coupled to the robot base and configured to actively damp movement of the robot base relative to the environment,
    f) wherein the computer executable code, when executed by the control system, causes the control system to:
      i) cause the robot base actuator to move the robot base along a robot base path;
      ii) cause the robot arm to move the end effector along an end effector path;
      iii) determine, based on signals from the tracking system, that the robot base has deviated from the robot base path,
      iv) correct for the deviation of the robot base from the robot base path by:
        (1) calculating a movement correction for the robot base in accordance with the signals from the tracking system and based on the robot base path, and
        (2) controlling the active damping system at least partially in accordance with the movement correction to return the robot base to the robot base path,
      v) stabilize the end effector to account for unintentional end effector movement caused by the deviation of the robot base from the robot base path by:
        (1) generating robot control signals to control movement of the end effector, wherein the robot control signals are based at least in part on:
          (a) the deviation of the robot base from the robot base path, and
          (b) the movement correction applied to the robot base by the active damping system; and
        (2) applying the robot control signals to the robot arm to stabilize the end effector.
Priority Claims (1)
Number Date Country Kind
2018902566 Jul 2018 AU national
PCT Information
Filing Document Filing Date Country Kind
PCT/AU2019/050742 7/16/2019 WO
Publishing Document Publishing Date Country Kind
WO2020/014736 1/23/2020 WO A
US Referenced Citations (469)
Number Name Date Kind
1633192 Reagan Jun 1927 A
1829435 Barnhart Oct 1931 A
3438171 Demarest Apr 1969 A
3757484 Williamson et al. Sep 1973 A
3790428 Lingl Feb 1974 A
RE28305 Williamson et al. Jan 1975 E
3930929 Lingl Jan 1976 A
3950914 Lowen Apr 1976 A
4033463 Cervin Jul 1977 A
4106259 Taylor-smith Aug 1978 A
4221258 Richard Sep 1980 A
4245451 Taylor-smith Jan 1981 A
4303363 Cervin Dec 1981 A
4523100 Payne Jun 1985 A
4708562 Melan et al. Nov 1987 A
4714339 Lau Dec 1987 A
4758036 Legille et al. Jul 1988 A
4765789 Lonardi et al. Aug 1988 A
4790651 Brown et al. Dec 1988 A
4827689 Lonardi et al. May 1989 A
4852237 Tradt et al. Aug 1989 A
4911595 Kirchen et al. Mar 1990 A
4945493 Huang et al. Jul 1990 A
4952772 Zana Aug 1990 A
4954762 Miyake et al. Sep 1990 A
4969789 Searle Nov 1990 A
5004844 Van et al. Apr 1991 A
5013986 Gauggel May 1991 A
5018923 Melan et al. May 1991 A
5049797 Phillips Sep 1991 A
5080415 Bjornson Jan 1992 A
5196900 Pettersen Mar 1993 A
5284000 Milne et al. Feb 1994 A
5321353 Furness Jun 1994 A
5403140 Carmichael et al. Apr 1995 A
5413454 Movsesian May 1995 A
5419669 Kremer et al. May 1995 A
5420489 Hansen et al. May 1995 A
5469531 Faure et al. Nov 1995 A
5497061 Nonaka et al. Mar 1996 A
5523663 Tsuge et al. Jun 1996 A
5527145 Duncan Jun 1996 A
5557397 Hyde et al. Sep 1996 A
5737500 Seraji et al. Apr 1998 A
5838882 Gan et al. Nov 1998 A
6018923 Wendt Feb 2000 A
6049377 Lau et al. Apr 2000 A
6101455 Davis Aug 2000 A
6134507 Markey, Jr. et al. Oct 2000 A
6166809 Pettersen et al. Dec 2000 A
6166811 Long et al. Dec 2000 A
6172754 Niebuhr Jan 2001 B1
6213309 Dadisho Apr 2001 B1
6285959 Greer Sep 2001 B1
6310644 Keightley Oct 2001 B1
6330503 Sharp et al. Dec 2001 B1
6370837 Mcmahon et al. Apr 2002 B1
6427122 Lin Jul 2002 B1
6429016 Mcneil Aug 2002 B1
6512993 Kacyra et al. Jan 2003 B2
6516272 Lin Feb 2003 B2
6584378 Anfindsen Jun 2003 B1
6611141 Schulz Aug 2003 B1
6618496 Tassakos et al. Sep 2003 B1
6628322 Cerruti Sep 2003 B1
6643002 Drake, Jr. Nov 2003 B2
6664529 Pack et al. Dec 2003 B2
6681145 Greenwood et al. Jan 2004 B1
6683694 Cornil Jan 2004 B2
6704619 Coleman et al. Mar 2004 B1
6741364 Lange et al. May 2004 B2
6825937 Gebauer et al. Nov 2004 B1
6850946 Rappaport et al. Feb 2005 B1
6859729 Breakfield et al. Feb 2005 B2
6864966 Giger Mar 2005 B2
6868847 Ainedter et al. Mar 2005 B2
6873880 Hooke et al. Mar 2005 B2
6917893 Dietsch et al. Jul 2005 B2
6935036 Barber et al. Aug 2005 B2
6957496 Raab et al. Oct 2005 B2
6965843 Hobden et al. Nov 2005 B2
6970802 Ban et al. Nov 2005 B2
6996912 Raab et al. Feb 2006 B2
7044314 Nayfeh May 2006 B2
7050930 Hobden et al. May 2006 B2
7051450 Barber et al. May 2006 B2
7069664 Barber et al. Jul 2006 B2
7107144 Capozzi et al. Sep 2006 B2
7111437 Ainedter Sep 2006 B2
7130034 Barvosa-carter et al. Oct 2006 B2
7142981 Hablani Nov 2006 B2
7145647 Suphellen et al. Dec 2006 B2
7153454 Khoshnevis Dec 2006 B2
7174651 Barber et al. Feb 2007 B2
7230689 Lau Jun 2007 B2
7246030 Raab et al. Jul 2007 B2
7269910 Raab et al. Sep 2007 B2
7305094 Kashani Dec 2007 B2
7347311 Rudge Mar 2008 B2
7519493 Atwell et al. Apr 2009 B2
7551121 Oconnell et al. Jun 2009 B1
7564538 Sakimura et al. Jul 2009 B2
7570371 Storm Aug 2009 B1
7576836 Bridges Aug 2009 B2
7576847 Bridges Aug 2009 B2
7591078 Crampton Sep 2009 B2
7639347 Eaton Dec 2009 B2
7693325 Pulla et al. Apr 2010 B2
7701587 Shioda et al. Apr 2010 B2
7774159 Cheng et al. Aug 2010 B2
7800758 Bridges et al. Sep 2010 B1
7804602 Raab Sep 2010 B2
RE42055 Raab et al. Jan 2011 E
RE42082 Raab et al. Feb 2011 E
7881896 Atwell et al. Feb 2011 B2
7967549 Geist et al. Jun 2011 B2
7993289 Quistgaard et al. Aug 2011 B2
8036452 Pettersson et al. Oct 2011 B2
8054451 Karazi et al. Nov 2011 B2
8060344 Stathis Nov 2011 B2
8145446 Atwell et al. Mar 2012 B2
8166727 Pivac et al. May 2012 B2
8169604 Braghiroli et al. May 2012 B2
8185240 Williams et al. May 2012 B2
8195368 Leban et al. Jun 2012 B1
8229208 Pulla et al. Jul 2012 B2
8233153 Knuettel Jul 2012 B2
8244030 Pettersson et al. Aug 2012 B2
8248620 Wicks et al. Aug 2012 B2
8269984 Hinderling et al. Sep 2012 B2
8287522 Moses Oct 2012 B2
8322468 Nagasaka Dec 2012 B2
8327555 Champ Dec 2012 B2
8337407 Quistgaard et al. Dec 2012 B2
8345926 Clark et al. Jan 2013 B2
8346392 Walser et al. Jan 2013 B2
8352129 Yuan et al. Jan 2013 B2
8401698 Kamrani Mar 2013 B2
8405716 Yu et al. Mar 2013 B2
8467072 Cramer et al. Jun 2013 B2
8467888 Gahinet Jun 2013 B2
8537372 Siercks et al. Sep 2013 B2
8537376 Day et al. Sep 2013 B2
8558992 Steffey Oct 2013 B2
8588974 Aoba Nov 2013 B2
8593648 Cramer et al. Nov 2013 B2
8595948 Raab et al. Dec 2013 B2
8606399 Williams et al. Dec 2013 B2
8634950 Simonetti et al. Jan 2014 B2
8644964 Hendron et al. Feb 2014 B2
8670114 Bridges et al. Mar 2014 B2
8677643 Bridges et al. Mar 2014 B2
8792709 Pulla et al. Jul 2014 B2
8803055 Lau et al. Aug 2014 B2
8812155 Brethe Aug 2014 B2
8825208 Benson Sep 2014 B1
8832954 Atwell et al. Sep 2014 B2
8848203 Bridges et al. Sep 2014 B2
8875409 Kretschmer et al. Nov 2014 B2
8898919 Bridges et al. Dec 2014 B2
8902408 Bridges Dec 2014 B2
8913814 Gandyra Dec 2014 B2
8931182 Raab et al. Jan 2015 B2
8942940 York Jan 2015 B2
8965571 Peters et al. Feb 2015 B2
8996244 Summer et al. Mar 2015 B2
8997362 Briggs et al. Apr 2015 B2
9020240 Pettersson et al. Apr 2015 B2
9033998 Schaible et al. May 2015 B1
RE45565 Bridges et al. Jun 2015 E
9046360 Atwell et al. Jun 2015 B2
9074381 Drew Jul 2015 B1
9109877 Thierman Aug 2015 B2
9146315 Bosse et al. Sep 2015 B2
9151830 Bridges Oct 2015 B2
9163922 Bridges et al. Oct 2015 B2
9170096 Fowler et al. Oct 2015 B2
9188430 Atwell et al. Nov 2015 B2
9207309 Bridges Dec 2015 B2
9223025 Debrunner et al. Dec 2015 B2
9229108 Debrunner et al. Jan 2016 B2
9266238 Huettenhofer Feb 2016 B2
9267784 Atwell et al. Feb 2016 B2
9278448 Freeman Mar 2016 B2
9279661 Tateno et al. Mar 2016 B2
9303988 Tani Apr 2016 B2
9353519 Williams May 2016 B2
9354051 Dunne et al. May 2016 B2
9358688 Drew Jun 2016 B2
9367741 Le Marec Jun 2016 B2
9377301 Neier et al. Jun 2016 B2
9383200 Hulm et al. Jul 2016 B2
9395174 Bridges Jul 2016 B2
9405293 Meuleau Aug 2016 B2
9423282 Moy Aug 2016 B2
9437005 Tateno et al. Sep 2016 B2
9443308 Pettersson et al. Sep 2016 B2
9452533 Calkins et al. Sep 2016 B2
9454818 Cramer Sep 2016 B2
9476695 Becker et al. Oct 2016 B2
9482524 Metzler et al. Nov 2016 B2
9482525 Bridges Nov 2016 B2
9482746 Bridges Nov 2016 B2
9494686 Maryfield et al. Nov 2016 B2
9513100 Raab et al. Dec 2016 B2
9536163 Veeser et al. Jan 2017 B2
9541371 Pettersson et al. Jan 2017 B2
9561019 Mihailescu et al. Feb 2017 B2
9593046 Bastelberger Mar 2017 B2
9607239 Bridges et al. Mar 2017 B2
9618620 Zweigle et al. Apr 2017 B2
9658061 Wilson et al. May 2017 B2
9671221 Ruhland et al. Jun 2017 B2
9679385 Suzuki et al. Jun 2017 B2
9686532 Tohme Jun 2017 B2
9708079 Desjardien et al. Jul 2017 B2
9715730 Suzuki Jul 2017 B2
9720087 Christen et al. Aug 2017 B2
9734609 Pulla et al. Aug 2017 B2
9739595 Lau Aug 2017 B2
9746308 Gong Aug 2017 B2
9757859 Kolb et al. Sep 2017 B1
9768837 Charvat et al. Sep 2017 B2
9772173 Atwell et al. Sep 2017 B2
9803969 Gong Oct 2017 B2
9816813 Lettau et al. Nov 2017 B2
9829305 Gong Nov 2017 B2
9835717 Bosse et al. Dec 2017 B2
9844792 Pettersson et al. Dec 2017 B2
9879976 Bridges et al. Jan 2018 B2
9897442 Pettersson et al. Feb 2018 B2
9903939 Charvat et al. Feb 2018 B2
9909855 Becker et al. Mar 2018 B2
9915733 Fried et al. Mar 2018 B2
9921046 Gong Mar 2018 B2
9958268 Ohtomo et al. May 2018 B2
9958545 Eichenholz et al. May 2018 B2
9964398 Becker et al. May 2018 B2
9964402 Tohme et al. May 2018 B2
9967545 Tohme May 2018 B2
9987746 Bradski Jun 2018 B2
9989353 Bartmann et al. Jun 2018 B2
10012732 Eichenholz et al. Jul 2018 B2
10030972 Iseli et al. Jul 2018 B2
10041793 Metzler et al. Aug 2018 B2
10054422 Böckem et al. Aug 2018 B2
10058394 Johnson et al. Aug 2018 B2
10059003 Linnell et al. Aug 2018 B1
10073162 Charvat et al. Sep 2018 B2
10074889 Charvat et al. Sep 2018 B2
10082521 Atlas et al. Sep 2018 B2
10089586 Vestal Oct 2018 B2
10090944 Charvat et al. Oct 2018 B1
10094909 Charvat et al. Oct 2018 B2
10126415 Becker et al. Nov 2018 B2
10150653 Kyllingstad Dec 2018 B2
10189176 Williams Jan 2019 B2
10220511 Linnell et al. Mar 2019 B2
10240949 Peters et al. Mar 2019 B2
10437252 Liu et al. Oct 2019 B1
10627211 Luthi Apr 2020 B2
10635758 Pivac et al. Apr 2020 B2
10744645 Wang et al. Aug 2020 B2
10865578 Pivac et al. Dec 2020 B2
10876308 Pivac et al. Dec 2020 B2
11106836 Pivac et al. Aug 2021 B2
11187793 Liu Nov 2021 B1
11299894 Pivac Apr 2022 B2
11364630 Henriksson Jun 2022 B2
11401115 Pivac Aug 2022 B2
11441899 Pivac et al. Sep 2022 B2
11951616 Pivac et al. Apr 2024 B2
20010055525 Inokuchi Dec 2001 A1
20020126852 Kashani Sep 2002 A1
20020175594 Kornbluh Nov 2002 A1
20020176603 Bauer et al. Nov 2002 A1
20030048459 Gooch Mar 2003 A1
20030090682 Gooch et al. May 2003 A1
20030120377 Hooke et al. Jun 2003 A1
20030206285 Lau Nov 2003 A1
20030208302 Lemelson Nov 2003 A1
20040073343 Nayfeh Apr 2004 A1
20040078137 Breakfield et al. Apr 2004 A1
20040093119 Gunnarsson et al. May 2004 A1
20040200947 Lau Oct 2004 A1
20050007450 Hill et al. Jan 2005 A1
20050057745 Bontje Mar 2005 A1
20050060092 Hablani Mar 2005 A1
20050086901 Chisholm Apr 2005 A1
20050131619 Rappaport et al. Jun 2005 A1
20050196484 Khoshnevis Sep 2005 A1
20050252118 Matsufuji Nov 2005 A1
20060167587 Read Jul 2006 A1
20060215179 Mcmurtry et al. Sep 2006 A1
20070024870 Girard et al. Feb 2007 A1
20070106421 Kamrani May 2007 A1
20070229802 Lau Oct 2007 A1
20070284215 Rudge Dec 2007 A1
20080030855 Lau Feb 2008 A1
20080189046 Eliasson et al. Aug 2008 A1
20080235970 Crampton Oct 2008 A1
20090038258 Pivac et al. Feb 2009 A1
20090074979 Krogedal et al. Mar 2009 A1
20090240372 Bordyn et al. Sep 2009 A1
20100025349 Khoshnevis Feb 2010 A1
20100092032 Boca Apr 2010 A1
20100095835 Yuan et al. Apr 2010 A1
20100103431 Demopoulos Apr 2010 A1
20100138185 Kang Jun 2010 A1
20100143089 Hvass Jun 2010 A1
20100152899 Chang et al. Jun 2010 A1
20100206651 Nagasaka Aug 2010 A1
20100274390 Walser et al. Oct 2010 A1
20100281822 Murray Nov 2010 A1
20100312364 Eryilmaz et al. Dec 2010 A1
20110043515 Stathis Feb 2011 A1
20110066393 Groll et al. Mar 2011 A1
20110153524 Schnackel Jun 2011 A1
20110208347 Otake et al. Aug 2011 A1
20110279231 Schwenkel Nov 2011 A1
20110282490 Weigele Nov 2011 A1
20120038074 Khoshnevis Feb 2012 A1
20120053726 Peters et al. Mar 2012 A1
20120099096 Bridges et al. Apr 2012 A1
20120136524 Everett et al. May 2012 A1
20120185089 Schreiber Jul 2012 A1
20120265391 Letsky Oct 2012 A1
20120277898 Kawai et al. Nov 2012 A1
20130028478 St-pierre et al. Jan 2013 A1
20130068061 Yoon Mar 2013 A1
20130103192 Huettenhofer Apr 2013 A1
20130104407 Lee May 2013 A1
20130222816 Briggs et al. Aug 2013 A1
20130250285 Bridges et al. Sep 2013 A1
20130286196 Atwell Oct 2013 A1
20130297046 Hendron et al. Nov 2013 A1
20130310982 Scheurer Nov 2013 A1
20140002608 Atwell et al. Jan 2014 A1
20140067121 Brooks et al. Mar 2014 A1
20140176677 Valkenburg et al. Jun 2014 A1
20140192187 Atwell et al. Jul 2014 A1
20140309960 Vennegeerts et al. Oct 2014 A1
20140343727 Calkins et al. Nov 2014 A1
20140348388 Metzler et al. Nov 2014 A1
20140365258 Vestal Dec 2014 A1
20140366481 Benson Dec 2014 A1
20140376768 Troy Dec 2014 A1
20150082740 Peters et al. Mar 2015 A1
20150100066 Kostrzewski et al. Apr 2015 A1
20150134303 Chang et al. May 2015 A1
20150153720 Pettersson et al. Jun 2015 A1
20150158181 Kawamura Jun 2015 A1
20150165620 Osaka Jun 2015 A1
20150166413 Bastelberger et al. Jun 2015 A1
20150241203 Jordil et al. Aug 2015 A1
20150258694 Hand et al. Sep 2015 A1
20150276402 Grsser et al. Oct 2015 A1
20150280829 Breuer Oct 2015 A1
20150293596 Krausen et al. Oct 2015 A1
20150309175 Hinderling et al. Oct 2015 A1
20150314890 Desjardien et al. Nov 2015 A1
20150345959 Meuleau Dec 2015 A1
20150352721 Wicks et al. Dec 2015 A1
20150355310 Gong et al. Dec 2015 A1
20150367509 Georgeson Dec 2015 A1
20150371082 Csaszar et al. Dec 2015 A1
20150377606 Thielemans Dec 2015 A1
20160005185 Geissler Jan 2016 A1
20160093099 Bridges Mar 2016 A1
20160153786 Liu et al. Jun 2016 A1
20160187130 Metzler et al. Jun 2016 A1
20160187470 Becker et al. Jun 2016 A1
20160194183 Kyllingstad Jul 2016 A1
20160221187 Bradski Aug 2016 A1
20160223364 Peters et al. Aug 2016 A1
20160239013 Troy Aug 2016 A1
20160242744 Mihailescu et al. Aug 2016 A1
20160263767 Williams Sep 2016 A1
20160274237 Stutz Sep 2016 A1
20160282107 Roland et al. Sep 2016 A1
20160282110 Vagman et al. Sep 2016 A1
20160282179 Nazemi et al. Sep 2016 A1
20160288331 Sivich et al. Oct 2016 A1
20160292918 Cummings et al. Oct 2016 A1
20160313114 Tohme et al. Oct 2016 A1
20160318187 Tan et al. Nov 2016 A1
20160327383 Becker et al. Nov 2016 A1
20160340873 Eidenberger et al. Nov 2016 A1
20160341041 Puura et al. Nov 2016 A1
20160349746 Grau Dec 2016 A1
20160363436 Clark et al. Dec 2016 A1
20160363659 Mindell et al. Dec 2016 A1
20160363663 Mindell et al. Dec 2016 A1
20160363664 Mindell et al. Dec 2016 A1
20160364869 Siercks et al. Dec 2016 A1
20160364874 Tohme et al. Dec 2016 A1
20170028550 Terada Feb 2017 A1
20170066157 Peters et al. Mar 2017 A1
20170067739 Siercks et al. Mar 2017 A1
20170071680 Swarup Mar 2017 A1
20170082436 Siercks et al. Mar 2017 A1
20170091922 Siercks et al. Mar 2017 A1
20170091923 Siercks et al. Mar 2017 A1
20170108528 Atlas et al. Apr 2017 A1
20170122733 Brown May 2017 A1
20170122736 Dold et al. May 2017 A1
20170166399 Stubbs Jun 2017 A1
20170173795 Tan et al. Jun 2017 A1
20170173796 Kim et al. Jun 2017 A1
20170176572 Charvat et al. Jun 2017 A1
20170179570 Charvat Jun 2017 A1
20170179603 Charvat et al. Jun 2017 A1
20170191822 Becker et al. Jul 2017 A1
20170225330 Wagner Aug 2017 A1
20170227355 Pettersson et al. Aug 2017 A1
20170236299 Valkenburg et al. Aug 2017 A1
20170254102 Peters Sep 2017 A1
20170269203 Trishaun Sep 2017 A1
20170291805 Hao et al. Oct 2017 A1
20170307757 Hinderling et al. Oct 2017 A1
20170314909 Dang Nov 2017 A1
20170314918 Shah Nov 2017 A1
20170329321 Dai et al. Nov 2017 A1
20170333137 Roessler Nov 2017 A1
20170343336 Lettau Nov 2017 A1
20170371342 Hashimoto Dec 2017 A1
20180001479 Li et al. Jan 2018 A1
20180003493 Bernhard et al. Jan 2018 A1
20180017384 Siercks et al. Jan 2018 A1
20180023935 Atwell et al. Jan 2018 A1
20180038684 Fröhlich et al. Feb 2018 A1
20180043838 Ellerman Feb 2018 A1
20180046096 Shibazaki Feb 2018 A1
20180052233 Frank et al. Feb 2018 A1
20180074201 Sakai et al. Mar 2018 A1
20180093380 Yoshida Apr 2018 A1
20180108178 Murugappan et al. Apr 2018 A1
20180121571 Tiwari et al. May 2018 A1
20180149469 Becker et al. May 2018 A1
20180156601 Pontai Jun 2018 A1
20180168749 Dozeman Jun 2018 A1
20180170719 Tasch et al. Jun 2018 A1
20180180416 Edelman et al. Jun 2018 A1
20180180740 Shaffer Jun 2018 A1
20180202796 Ziegenbein Jul 2018 A1
20180209156 Pettersson Jul 2018 A1
20180222047 Nishi Aug 2018 A1
20180239010 Mindell et al. Aug 2018 A1
20180283017 Telleria et al. Oct 2018 A1
20180300433 Maxam et al. Oct 2018 A1
20190026401 Benjamin et al. Jan 2019 A1
20190032348 Parkes Jan 2019 A1
20190099902 Yamamoto et al. Apr 2019 A1
20190184555 Linnell et al. Jun 2019 A1
20190224846 Pivac et al. Jul 2019 A1
20190251210 Pivac et al. Aug 2019 A1
20200009723 Eisenwinter Jan 2020 A1
20200009730 Henriksson Jan 2020 A1
20200173777 Pivac et al. Jun 2020 A1
20200206923 Pivac et al. Jul 2020 A1
20200206924 Pivac et al. Jul 2020 A1
20200215688 Pivac et al. Jul 2020 A1
20200215692 Pivac et al. Jul 2020 A1
20200215693 Pivac et al. Jul 2020 A1
20200324981 Pivac et al. Oct 2020 A1
20210016437 Pivac et al. Jan 2021 A1
20210016438 Pivac et al. Jan 2021 A1
20210080582 Pivac et al. Mar 2021 A1
20210370509 Pivac et al. Dec 2021 A1
Foreign Referenced Citations (165)
Number Date Country
645640 Jan 1994 AU
673498 Mar 1990 CH
2730976 Oct 2005 CN
2902981 May 2007 CN
2923903 Jul 2007 CN
101100903 Jan 2008 CN
201184054 Jan 2009 CN
101360873 Feb 2009 CN
101476883 Jul 2009 CN
100557169 Nov 2009 CN
101694130 Apr 2010 CN
201972413 Sep 2011 CN
102359282 Feb 2012 CN
202248944 May 2012 CN
202292752 Jul 2012 CN
102995911 Mar 2013 CN
103042527 Apr 2013 CN
202925913 May 2013 CN
103363902 Oct 2013 CN
103698769 Apr 2014 CN
203701626 Jul 2014 CN
104141391 Nov 2014 CN
104153591 Nov 2014 CN
104493810 Apr 2015 CN
204295678 Apr 2015 CN
104612411 May 2015 CN
204311767 May 2015 CN
103774859 Nov 2015 CN
103753586 Dec 2015 CN
105113373 Dec 2015 CN
105178616 Dec 2015 CN
105257008 Jan 2016 CN
105452806 Mar 2016 CN
105544998 May 2016 CN
205290958 Jun 2016 CN
104806028 Nov 2016 CN
205668271 Nov 2016 CN
205840368 Dec 2016 CN
205990775 Mar 2017 CN
106607907 May 2017 CN
206185879 May 2017 CN
206189878 May 2017 CN
105089274 Jun 2017 CN
105064699 Jul 2017 CN
107217859 Sep 2017 CN
107237483 Oct 2017 CN
107357294 Nov 2017 CN
107605167 Jan 2018 CN
206844687 Jan 2018 CN
107654077 Feb 2018 CN
107675891 Feb 2018 CN
107740591 Feb 2018 CN
106088632 Mar 2018 CN
107762165 Mar 2018 CN
207063553 Mar 2018 CN
106088631 May 2018 CN
107975245 May 2018 CN
108061551 May 2018 CN
108222527 Jun 2018 CN
108301628 Jul 2018 CN
108331362 Jul 2018 CN
106150109 Aug 2018 CN
108457479 Aug 2018 CN
108708560 Oct 2018 CN
208023979 Oct 2018 CN
106881711 Apr 2019 CN
107083845 Jun 2019 CN
108016585 Jul 2019 CN
3430915 Mar 1986 DE
4038260 Jun 1991 DE
4207384 Sep 1993 DE
19509809 Oct 1995 DE
4417928 Nov 1995 DE
29601535 May 1997 DE
19600006 Jul 1997 DE
19603234 Sep 1997 DE
19743717 Apr 1999 DE
19849720 May 2000 DE
10230021 Jul 2003 DE
102006030130 Sep 2007 DE
102009014766 Sep 2010 DE
102009018070 Oct 2010 DE
102009042014 Mar 2011 DE
202012100646 Jun 2013 DE
102013019869 May 2015 DE
190076 Aug 1986 EP
370682 May 1990 EP
456020 Jan 1995 EP
493020 Apr 1995 EP
495525 Apr 1995 EP
836664 Jan 1999 EP
674069 Dec 1999 EP
1375083 Jan 2004 EP
1918478 May 2008 EP
2112291 Oct 2009 EP
2219528 Aug 2010 EP
2249997 Nov 2010 EP
2353801 Aug 2011 EP
2631040 Aug 2013 EP
2199719 Oct 2014 EP
3084719 Oct 2016 EP
3582934 Nov 2021 EP
2296556 Apr 2008 ES
2230825 Dec 1974 FR
2524522 Oct 1983 FR
119331 Oct 1918 GB
2198105 May 1923 GB
673472 Jun 1952 GB
682010 Nov 1952 GB
839253 Jun 1960 GB
1067604 May 1967 GB
1465068 Feb 1977 GB
125079 D Dec 2001 GB
2422400 Jul 2006 GB
64006719 Jan 1989 JP
H07101509 Nov 1999 JP
2005283600 Oct 2005 JP
4294990 Apr 2009 JP
2009521630 Jun 2009 JP
5508895 Mar 2014 JP
87054 Jun 1989 LU
87381 Jun 1990 LU
88144 Apr 1994 LU
85392 Aug 2009 RU
9702397 Jan 1997 WO
2001076830 Oct 2001 WO
2004020760 Mar 2004 WO
2004083540 Sep 2004 WO
2005014240 Feb 2005 WO
2005017550 Feb 2005 WO
2005070657 Aug 2005 WO
2004011734 Nov 2005 WO
2006111827 Oct 2006 WO
2007076581 Jul 2007 WO
2008110559 Sep 2008 WO
2008124713 Oct 2008 WO
2009026641 Mar 2009 WO
2009026642 Mar 2009 WO
2010020457 Feb 2010 WO
2010054519 May 2010 WO
2010069160 Jun 2010 WO
2011077006 Jun 2011 WO
2013088154 Jun 2013 WO
2013134559 Sep 2013 WO
2015142784 Sep 2015 WO
2017162630 Sep 2017 WO
2018009978 Jan 2018 WO
2018009980 Jan 2018 WO
2018009981 Jan 2018 WO
2018009985 Jan 2018 WO
2018009986 Jan 2018 WO
2018052469 Apr 2018 WO
2018099323 Jun 2018 WO
2018149502 Aug 2018 WO
2019006511 Jan 2019 WO
2019014701 Jan 2019 WO
2019014702 Jan 2019 WO
2019014705 Jan 2019 WO
2019014706 Jan 2019 WO
2019014707 Jan 2019 WO
2019033165 Feb 2019 WO
2019033166 Feb 2019 WO
2019033170 Feb 2019 WO
2019068128 Apr 2019 WO
2019071313 Apr 2019 WO
Non-Patent Literature Citations (155)
Entry
Delgado, R. et al.: “Development and Control of an Omnidirectional Mobile Robot on an EtherCAT Network”, International Journal of Applied Engineering Research, vol. 11, No. 21, 2016, pp. 10586-10592, XP055574484.
Dorfler, K. et al.: “Mobile Robotic Brickwork, Automation of a Discrete Robotic Fabrication Process Using an Autonomous Mobile Robot Robotic Fabrication in Architecture”, Art and Design 2016, Feb. 4, 2016 (Feb. 4, 2016), pp. 204-217, XP055567451.
Egerstedt, M. et al.: “Control of Mobile Platforms using a Virtual Vehicle Approach”, IEEE Transactions on Automatic Control, vol. 46, No. 11, Nov. 2001 (Nov. 1, 2001), XP055567515.
Fastbrick Robotics, Fastbrick Robotics: Hadrian 105 First Look Revealed, Nov. 16, 2015 (Nov. 16, 2015), XP054978174, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=7Zw7qHxMtrY> [retrieved on Nov. 16, 2015].
Fastbrick Robotics: Hadrian 105 Demonstrative Model Animation, Jun. 29, 2015 (Jun. 29, 2015), XP054979424, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=Rebqcsb61gY> [retrieved on Mar. 7, 2018].
Fastbrick Robotics: Hadrian 105 Time Lapse, Fastbrick Robotics Time Lapse, May 22, 2016 (May 22, 2016), XP054978173, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=4YcrO8ONcfY> [retrieved on May 22, 2016].
Feng, C. et al.: “Vision Guided Autonomous Robotic Assembly and as-built Scanning on Unstructured Construction Sites”, Automation in Construction, vol. 59, Nov. 2015 (Nov. 1, 2015), pp. 128-138, XP055567454.
Gao, X. et al.: “Complete Solution Classification for the Perspective-Three-Point Problem”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 8, Aug. 2003 (Aug. 1, 2003), pp. 930-943, XP011099374.
Giftthaler, M. et al., “Efficient Kinematic Planning for Mobile Manipulators with Non-holonomic Constraints Using Optimal Control”, 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, May 29-Jun. 3, 2017.
Heintze, H., “Design and Control of a Hydraulically Actuated Industrial Brick Laying Robot,” 264 pages.
Heintze, J. et al., “Controlled hydraulics for a direct drive brick laying robot,” Automation in Construction 5 (1996), pp. 23-29.
Helm, V. et al.: “Mobile Robotic Fabrication on Construction Sites: dimRob”, IEEE /RSJ International Conference on Intelligent Robots and Systems, Oct. 7, 2012 (Oct. 7, 2012), Vilamoura, Portugal, pp. 4335-4341, XP032287463.
http://www.new-technologies.org/ECT/Other/brickrob.htm. “Emerging Construction Technologies.” Dec. 1, 2006.
Huang, S. et al., “Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties,” Sensors, 2016, 16, 1195, 15 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2017/050731; Date of Mailing: Jan. 15, 2019; 5 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2017/050738; Date of Mailing: Jan. 15, 2019; 13 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2017/050739; Date of Mailing: Jan. 15, 2019; 6 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2018/050733; Date of Mailing: Jan. 21, 2020; 6 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2018/050734; Date of Mailing: Jan. 21, 2020; 9 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2018/050737; Date of Mailing: Jan. 21, 2020; 6 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2018/050739; Date of Mailing: Jan. 21, 2020; 6 pages.
International Preliminary Report on Patentability for International Application No. PCT/AU2018/050740; Date of Mailing: Jan. 21, 2020; 6 pages.
International Search Report and Written Opinion for International Application No. PCT/AU2017/050730; Date of Mailing: Aug. 23, 2017; 17 pages.
International Search Report and Written Opinion for International Application No. PCT/AU2017/050731; Date of Mailing: Aug. 31, 2017; 8 pages.
International Search Report and Written Opinion for International Application No. PCT/AU2017/050738; Date of Mailing: Oct. 17, 2017; 19 pages.
International Search Report and Written Opinion for International Application No. PCT/AU2017/050739; Date of Mailing: Sep. 28, 2017; 9 pages.
Kazemi, M. et al.: “Path Planning for Image-based Control of Wheeled Mobile Manipulators”, 2012 IEEE /RSJ International Conference on Intelligent Robots and Systems, Oct. 7, 2012 (Oct. 7, 2012), Vilamoura, Portugal, XP055567470.
Kleinkes, M. et al.: “Laser Tracker and 6DoF measurement strategies in industrial robot applications”, CMSC 2011: Coordinate Metrology System Conference, Jul. 25, 2011 (Jul. 25, 2011), XP055456272.
Koren et al.: “End-effector guidance of robot arms”, CIRP Annals-Manufacturing Technology, vol. 36, No. 1, 1987, pp. 289-292, XP055456270.
Kwon, S. et al., “On the Coarse/Fine Dual-Stage Manipulators with Robust Perturbation Compensator,” IEEE, May 21-26, 2001, pp. 121-126.
Kyle in CMSC: Charlotte-Concord, Jul. 21-25, 2008.
Latteur, et al., “Drone-Based Additive Manufacturing of Architectural Structures,” IASS Symposium 2015, Amsterdam, The Netherlands; Aug. 17-20, 2015; 12 pages.
Lippiello, V. et al.: “Position-Based Visual Servoing in Industrial Multirobot Cells Using a Hybrid Camera Configuration”, IEEE Transactions on Robotics, vol. 23, No. 1, Feb. 2007 (Feb. 1, 2007), XP011163518.
Liu, Z. et al.: “EtherCAT Based Robot Modular Joint Controller”, Proceeding of the 2015 IEEE International Conference on Information and Automation, Aug. 2015 (Aug. 1, 2015), Lijiang, China, pp. 1708-1713, XP033222650.
Notice of Acceptance of Patent Application received for priority Australian Patent Application No. 2017294796, mailed May 15, 2019 (158 pages).
Partial Supplementary European Search Report mailed Apr. 14, 2020 in European Patent Application No. 17826696.1, 10 pages.
Pless, R.: “Using Many Cameras as One”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 18, 2003 (Jun. 18, 2003), Madison, WI, USA, pp. 1-7, XP055564465.
Posada et al.: “High accurate robotic drilling with external sensor and compliance model-based compensation”, Robotics and Automation (ICRA), 2016 IEEE International Conference, May 16, 2016 (May 16, 2016), pp. 3901-3907, XP032908649.
Pritschow, G. et al., “A Mobile Robot for On-Site Construction of Masonry,” Inst. Of Control Tech. for Machine Tools and Manuf. Units, pp. 1701-1707.
Pritschow, G. et al., “Application Specific Realisation of a Mobile Robot for On-Site Construction of Masonry,” Automation and Robotics in Construction XI, 1994, pp. 95-102.
Pritschow, G. et al., “Configurable Control System of a Mobile Robot for ON-Site Construction of Masonry,” Inst. Of Control Technology for Machine Tools and Manuf. Units, pp. 85-92.
Pritschow, G. et al., “Technological aspects in the development of a mobile bricklaying robot,” Automation in Construction 5 (1996), pp. 3-13.
Riegl Laser Measurement Systems. “Long Range & High Accuracy 3D Terrestrial Laser Scanner System—LMS-Z420i.” pp. 1-4.
Salcudean, S. et al., “On the Control of Redundant Coarse-Fine Manipulators,” IEEE, pp. 1834-1840.
Sandy, T. et al.: “Autonomous Repositioning and Localization of an In Situ Fabricator”, 2016 IEEE International Conference on Robotics and Automation (ICRA), May 16, 2016 (May 16, 2016), pp. 2852-2858, XP055567467.
Skibniewski, M.J., “Current Status of Construction Automation and Robotics in the United States of America,” The 9th International Symposium on Automation and Robotics in Construction, Jun. 3-5, 1992, 8 pages.
Trimble ATS. “Advanced Tracking Sensor (ATS) with target recognition capability for stakeless machine control survey applications.” pp. 1-4.
Vincze, M. et al., “A Laser Tracking System to Measure Position and Orientation of Robot End Effectors Under Motion,” The International Journal of Robotics Research, vol. 13, No. 4, Aug. 1994, pp. 305-314.
Warszawski, A. et al., “Implementation of Robotics in Building: Current Status and Future Prospects,” Journal of Construction Engineering and Management, Jan./Feb. 1998, 124(1), pp. 31-41.
Willmann, J. et al.: “Robotic Timber Construction—Expanding Additive Fabrication to New Dimensions”, Automation in Construction, vol. 61, 2016, pp. 16-23, XP029310896.
Xu, H. et al.: “Uncalibrated Visual Servoing of Mobile Manipulators with an Eye-to-hand Camera”, Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics, Dec. 3, 2016 (Dec. 3, 2016), Qingdao, China, pp. 2145-2150, XP033071767.
Yu, S.N. et al., “Feasibility verification of brick-laying robot using manipulation trajectory and the laying pattern optimization,” Dept. of Mech. Eng., Automation in Construction (2009), pp. 644-655.
Zaki, T., “Parametric modeling of Blackwall assemblies for automated generation of shop drawings and detailed estimates using BIM”, Master's Thesis, May 23, 2016, pp. 1-151.
“Critical Damping Ratio Explained.” EngineerExcel. 2022. 16 pages.
Boston Dynamics: “Introducing Spot (previously SpotMini)”, Jun. 28, 2016, YouTube video, 1 page (screenshot of video); video retrieved at https://www.youtube.com/watch?v=tf7IEVTDjng.
Decision of Rejection issued May 19, 2023 on Chinese Patent Application No. 201780056460.8.
Decision on Re-examination issued Sep. 13, 2023 on Chinese Patent Application No. 201780056460.8.
European search report issued Jul. 12, 2022 on European Patent Application No. 19885448.1.
European search report issued Mar. 28, 2022 in European Patent Application No. 19837417.5, 10 pages.
European search report issued Mar. 7, 2022 in European Patent Application No. 19838430.7, 9 pages.
Examination Report dated Sep. 11, 2023 on UAE Patent Application No. P60000862019.
Examination Report dated Nov. 16, 2023 on European Patent Application No. 17826697.9.
Examination Report dated Aug. 2, 2023 on Chinese Patent Application No. 201880057411.0.
Examination Report dated Feb. 29, 2024 on Australian Patent Application No. 2018317936.
Examination Report dated Sep. 29, 2023 on European Patent Application No. 18845794.9.
Examination Report dated Oct. 30, 2023 on UAE Patent Application No. P60002212020.
Examination Report dated Oct. 30, 2023 on UAE Patent Application No. P60019022019.
Examination Report dated Oct. 31, 2023 on Australian Patent Application No. 2018317937.
Examination Report dated Nov. 8, 2023 on UAE Patent Application No. P60005242020.
Examination Report dated Apr. 4, 2024 on European Patent Application No. 18846395.4.
Examination Report dated Apr. 30, 2024 on Australian Patent Application No. 2018317936.
Examination Report dated Jan. 31, 2024 on Chinese Patent Application No. 201980059588.9.
Examination Report dated Feb. 27, 2024 on Australian Patent Application No. 2018304730.
Examination Report dated Feb. 16, 2024 on Australian Patent Application No. 2018303837.
Examination Report dated Apr. 12, 2024 on European Patent Application No. 18835861.8.
Examination report issued Feb. 11, 2023 on Chinese Patent Application No. 201880055414.0.
Examination report issued Aug. 12, 2023 on Chinese Patent Application No. 201880067283.8.
Examination report issued Jul. 13, 2022 on Chinese Patent Application No. 201780056460.8.
Examination report issued Jun. 13, 2023 on Australian Patent Application No. 2018317941.
Examination report issued Jun. 14, 2023 on Chinese Patent Application No. 201880066598.0.
Examination report issued Nov. 14, 2022 on Australian Patent Application No. 2018317937.
Examination report issued Aug. 15, 2023 on Australian Patent Application No. 2018317937.
Examination report issued Feb. 16, 2024 on Australian Patent Application No. 2018303837.
Examination report issued Jul. 17, 2023 on Australian Patent Application No. 2018303838.
Examination report issued Jul. 17, 2023 on Australian Patent Application No. 2018304730.
Examination report issued Oct. 17, 2022 on European Patent Application No. 18834893.2.
Examination report issued Sep. 19, 2022 on Chinese Patent Application No. 201880057400.2.
Examination report issued Feb. 2, 2023 on Chinese Patent Application No. 201780056460.8.
Examination report issued Mar. 2, 2023 on Australian Patent Application No. 2018303330.
Examination report issued May 2, 2022 on Australian Patent Application No. 2018295572.
Examination report issued Oct. 20, 2022 on Australian Patent Application No. 2018303330.
Examination report issued Jun. 21, 2023 on European Patent Application No. 18828425.1.
Examination report issued Oct. 21, 2022 on Chinese Patent Application No. 201880057441.1.
Examination report issued Sep. 22, 2022 on Chinese Patent Application No. 201880057383.2.
Examination report issued Sep. 22, 2022 on Chinese Patent Application No. 201880057411.0.
Examination report issued Apr. 24, 2023 on Chinese Patent Application No. 201880057400.2.
Examination report issued Oct. 24, 2022 on Chinese Patent Application No. 201880057381.3.
Examination report issued Jan. 25, 2023 on European Patent Application No. 18834673.8.
Examination report issued Apr. 27, 2023 on Chinese Patent Application No. 201880057383.2.
Examination report issued Feb. 27, 2023 on Saudi Arabian Patent Application No. 520410931.
Examination report issued Sep. 27, 2022 on Saudi Arabian Patent Application No. 520411375.
Examination report issued Dec. 28, 2023 on Chinese Patent Application No. 201980060671.8.
Examination report issued Jul. 28, 2023 on Chinese Patent Application No. 201980089047.0.
Examination report issued Oct. 28, 2022 on Chinese Patent Application No. 201880067520.0.
Examination report issued Mar. 29, 2023 on European Patent Application No. 18834565.6.
Examination report issued Sep. 29, 2022 on Chinese Patent Application No. 201880067283.8.
Examination report issued Aug. 3, 2022 on European Patent Application No. 18835861.8.
Examination report issued Nov. 3, 2022 on European Patent Application No. 18835737.0.
Examination report issued May 30, 2022 on Chinese Patent Application No. 201880067520.0.
Examination report issued Sep. 4, 2023 on Australian Patent Application No. 2018317936.
Examination report issued Jul. 5, 2023 on Australian Patent Application No. 2018303329.
Examination report issued Jun. 6, 2023 on Chinese Patent Application No. 201880057381.3.
Examination report issued Jul. 7, 2023 on Australian Patent Application No. 2018303837.
Examination report issued Dec. 26, 2021 in Saudi Arabian Patent Application No. 519400899, 8 pages.
Examination report issued Feb. 24, 2022 in Australian Patent Application No. 2017295317, 3 pages.
Examination report issued Feb. 9, 2022 in Chinese Patent Application No. 201880067520.0, with English translation, 14 pages.
Examination Report mailed Apr. 18, 2021 in GCC Patent Application No. 2018-35644, 5 pages.
Examination Report mailed Apr. 30, 2021 in GCC Patent Application No. 2018-35643, 3 pages.
Examination Report mailed Jun. 29, 2021 for India Patent Application No. 201927004006, 6 pages.
Examination Report mailed Sep. 30, 2021 for Australian Patent Application No. 2017295316, 3 pages.
Extended European Search Report mailed Jun. 4, 2021 for European Patent Application No. 18865644.1, 7 pages.
Extended European Search Report mailed Mar. 16, 2021 for European Patent Application No. 18834565.6, 19 pages.
Extended European Search Report mailed Mar. 17, 2021 for European Patent Application No. 18835861.8, 12 pages.
Extended European Search Report mailed Mar. 18, 2021 for European Patent Application No. 18834673.8, 14 pages.
Extended European Search Report mailed Mar. 18, 2021 for European Patent Application No. 18834893.2, 12 pages.
Extended European Search Report mailed Mar. 18, 2021 for European Patent Application No. 18835737.0, 10 pages.
Extended European Search Report mailed Mar. 30, 2021 for European Patent Application No. 18845794.9, 13 pages.
Extended European Search Report mailed Mar. 5, 2021 for European Patent Application No. 18828425.1, 7 pages.
Fastbrick Robotics. “Fastbrick Robotics Building a revolution.” Jun. 2015. 14 pages.
Fastbrick Robotics: Hadrian X Digital Construction System, published on Sep. 21, 2016 <URL: https://www.youtube.com/watch?v=5bW1vuCgEaA>.
Gander, H. et al.: "Application of a floating point digital signal processor to a dynamic robot measurement system", Instrumentation and Measurement Technology Conference, 1994. IMTC/94. Conference Proceedings. 10th Anniversary. Advanced Technologies in I & M., 1994 IEEE, Hamamatsu, Japan, May 10-12, 1994, New York, NY, USA, IEEE, May 10, 1994 (May 10, 1994), pp. 372-375, XP010121924, DOI: 10.1109/IMTC.1994.352046, ISBN: 978-0-7803-1880-9, *whole document*.
Garrido, S. et al., "FM2: A real-time fast marching sensor based motion planner", Advanced Intelligent Mechatronics, 2007 IEEE/ASME International Conference on, IEEE, PI, Sep. 1, 2007 (Sep. 1, 2007), pp. 1-6.
HandWiki. Damping ratio. Cited by U.S. Patent and Trademark Office in Nov. 21, 2022 Final Office Action for U.S. Appl. No. 16/631,404. 7 pages.
International Search Report and Written Opinion for International Patent Application No. PCT/AU19/50742; Date of Mailing: Sep. 23, 2019; 5 pages.
International Search Report and Written Opinion for International Patent Application No. PCT/AU19/50743; Date of Mailing: Oct. 1, 2019; 10 pages.
International Search Report and Written Opinion for International Patent Application No. PCT/AU20/50367; Date of Mailing: Jun. 29, 2020; 15 pages.
International Search Report and Written Opinion for International Patent Application No. PCT/AU20/50368; Date of Mailing: Jun. 25, 2020; 11 pages.
Jiang, B.C. et al., “A Review of Recent Developments in Robot Metrology.” J. of Manufacturing Systems, vol. 7 No. 4, (1988) pp. 339-357.
Kleinigger, M. et al.: "Application of 6-DOF sensing for robotic disturbance compensation", Automation Science and Engineering (CASE), 2010 IEEE Conference on, IEEE, Piscataway, NJ, USA, Aug. 21, 2010 (Aug. 21, 2010), pp. 344-349, XP031762876, ISBN: 978-1-4244-5477-1, *abstract*, *sections 1 to 3*.
Losada, D.P. et al., “Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.” May 3, 2017, pp. 1-17.
Mercedes-Benz: "Mercedes-Benz "Chicken" Magic Body Control TV commercial", YouTube, Sep. 23, 2013, 1 page. Retrieved from the internet: <https://www.youtube.com/watch?v=nLwML2PagbY>.
Muralikrishnan, B. et al., “Laser Trackers for Large-Scale Dimensional Metrology: A Review.” Precision Engineering 44 (2016), pp. 13-28.
Office Action dated Sep. 11, 2023 on Canadian Patent Application No. 3030764.
Office Action dated Nov. 25, 2023 on Chinese Patent Application No. 201780056460.8.
Office Action mailed Apr. 21, 2021 in Japanese Patent Application No. 2019-523148, 4 pages.
Office Action mailed Aug. 20, 2021 for Japanese Patent Application No. 2019-523147, 3 pages.
Office Action mailed Jul. 5, 2021 for Japanese Patent Application No. 2019-523145, 4 pages.
Office Action mailed May 24, 2021 for Chinese Patent Application No. 201880067520.0, 8 pages.
Office Action mailed Sep. 3, 2021 for Chinese Patent Application No. 201780056460.8, 9 pages.
Siciliano, B. et al., “Robotics—chapters 2-4” Robotics, Dec. 31, 2009 (Dec. 31, 2009), Springer London, London, pp. 39-189.
Wijenayake, U. et al., "Stereo Vision-Based 3D Pose Estimation of Product Labels for Bin Picking." Journal of Inst. of Control, Robotics and Systems (2016): 22(1): pp. 8-16.
Abed et al., "A Field Bus Network With CAN Protocol and a Fuzzy Neural Petri Net Controller." Jan. 2013, Basrah Journal of Science, vol. 31(2), pp. 86-102.
Examination Report dated Oct. 1, 2024 on European Patent Application No. 18846395.4.
Examination Report dated Jul. 24, 2024 on Australian Patent Application No. 2019379873.
Examination Report dated Jun. 5, 2024 on European Patent Application No. 18835737.0.
Related Publications (1)
Number: 20210291362 A1
Date: Sep. 2021
Country: US