This application claims the benefit of Korean Patent Application No. 2010-0131263, filed on Dec. 21, 2010 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
1. Field
Embodiments relate to a walking robot which walks according to torque control-based dynamic walking, and a control method thereof.
2. Description of the Related Art
In general, research and development of walking robots which have a joint system similar to that of humans and coexist with humans in human working and living spaces is actively progressing. The walking robots are multi-legged walking robots having a plurality of legs, such as two or three legs or more, and in order to achieve stable walking of the robot, actuators, such as electric motors or hydraulic motors, located at respective joints of the robot need to be driven. As methods to drive these actuators, there are a position-based Zero Moment Point (hereinafter, referred to as ZMP) control method in which command angles of respective joints, i.e., command positions, are given and the joints are controlled so as to track the command positions, and a torque-based Finite State Machine (hereinafter, referred to as FSM) control method in which command torques of respective joints are given and the joints are controlled so as to track the command torques.
In the ZMP control method, walking direction, stride, and walking velocity of a robot are set in advance so as to satisfy a ZMP constraint. As an example, a ZMP constraint may be a condition that a ZMP is present in a safety region within a support polygon formed by a supporting leg(s) (if the robot is supported by one leg, this means the region of the leg, and if the robot is supported by two legs, this means a region set to have a small area within a convex polygon including the regions of the two legs in consideration of safety). Walking patterns of the respective legs corresponding to the set factors are generated, and walking trajectories of the respective legs are calculated based on the walking patterns. Further, angles of joints of the respective legs are calculated through inverse kinematics of the calculated walking trajectories, and target control values of the respective joints are calculated based on current angles and target angles of the respective joints. Moreover, servo control allowing the respective legs to track the calculated walking trajectories per control time is carried out. That is, during walking of the robot, whether or not positions of the respective joints precisely track the walking trajectories according to the walking patterns is detected, and if it is detected that the respective legs deviate from the walking trajectories, torques of the motors are adjusted so that the respective legs precisely track the walking trajectories. The ZMP control method is a position-based control method and thus achieves precise position control, but needs to perform precise angle control of the respective joints in order to control the ZMP and thus requires high servo gain. Thereby, the ZMP control method requires high current and thus has low energy efficiency and high stiffness of the joints.
On the other hand, in the FSM control method, instead of tracking positions per control time, a finite number of operating states (herein, the states mean states in a finite state machine) of a robot is defined in advance, target torques of respective joints are calculated with reference to the respective operating states during walking, and the joints are controlled so as to track the target torques. The FSM control method controls torques of the respective joints during walking, and thus requires low servo gain and has high energy efficiency and low stiffness of the joints. Further, the FSM control method does not need to avoid kinematic singularities, thus allowing the robot to have a natural gait in the same manner as that of a human.
Actuated dynamic walking is not a position-based control method but is a torque-based control method, thus having high energy efficiency and allowing a robot to have a natural gait in the same manner as that of a human. However, the actuated dynamic walking does not carry out precise position control, thus having difficulty in precisely controlling stride or walking velocity. Further, differing from the position-based control method, the actuated dynamic walking plans walking patterns directly in a joint space, thus having difficulty in generating a walking pattern having desired stride, velocity and direction.
Therefore, it is an aspect of an embodiment to provide a robot which generates a walking pattern having desired stride, velocity and direction through optimization of actuated dynamic walking and walks based on the walking pattern so as to naturally walk with high energy efficiency similar to a human, and a control method thereof.
Additional aspects of embodiments will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments.
In accordance with an aspect of an embodiment, a control method of a walking robot includes defining a plurality of unit walking motions, in which stride, velocity, rotating angle and direction of the robot are designated, through combination of parameters to generate target joint paths, and constructing a database in which the plurality of unit walking motions is stored, setting an objective path up to an objective position, performing interpretation of the objective path as unit walking motions, generating walking patterns consisting of at least one unit walking motion to cause the robot to walk along the objective path based on the interpretation of the objective path, and allowing the robot to walk based on the walking patterns.
In the control method, the walking of the robot may be torque control-based dynamic walking.
In the control method, the parameters to determine the target joint paths may include at least one of a parameter indicating left and right movement of hip joints of the robot, a parameter indicating inclination of a torso of the robot, a parameter indicating a stride length of the robot, a parameter indicating a bending angle of knees of the robot, a parameter indicating a walking velocity of the robot, a parameter indicating movement of ankles of the robot in the y-axis direction, a parameter indicating an initial state of the left and right movement of the hip joints of the robot, and a parameter indicating an initial state of the stride of the robot.
In the control method, the walking patterns may include a broad walking pattern generated in consideration of the entirety of the objective path, and a local walking pattern forming a part of the broad walking pattern.
In the control method, the broad walking pattern may be a walking pattern generated in consideration of avoidance of a static obstacle recognized in advance.
In the control method, the local walking pattern may be a walking pattern generated in consideration of avoidance of a new obstacle recognized during walking of the robot along the broad walking pattern.
In the control method, the local walking pattern may be generated by combining unit walking motions necessary to avoid the new obstacle from among the plurality of unit walking motions stored in the database.
In accordance with another aspect of an embodiment, a walking robot includes a plurality of joints to achieve walking of the robot, a database in which a plurality of unit walking motions, in which stride, velocity, rotating angle and direction of the robot are designated, is defined through combination of parameters to generate target joint paths, and a control unit to control the plurality of joints by setting an objective path up to an objective position, performing interpretation of the objective path as unit walking motions, generating walking patterns consisting of at least one unit walking motion to cause the robot to walk along the objective path based on the interpretation of the objective path, and allowing the robot to walk based on the walking patterns.
In the walking robot, the walking of the robot may be torque control-based dynamic walking.
In the walking robot, the parameters to determine the target joint paths may include at least one of a parameter indicating left and right movement of hip joints of the robot, a parameter indicating inclination of a torso of the robot, a parameter indicating a stride length of the robot, a parameter indicating a bending angle of knees of the robot, a parameter indicating a walking velocity of the robot, a parameter indicating movement of ankles of the robot in the y-axis direction, a parameter indicating an initial state of the left and right movement of the hip joints of the robot, and a parameter indicating an initial state of the stride of the robot.
In the walking robot, the walking patterns may include a broad walking pattern generated in consideration of the entirety of the objective path, and a local walking pattern forming a part of the broad walking pattern.
In the walking robot, the broad walking pattern may be a walking pattern generated in consideration of avoidance of a static obstacle recognized in advance.
In the walking robot, the local walking pattern may be a walking pattern generated in consideration of avoidance of a new obstacle recognized during walking of the robot along the broad walking pattern.
In the walking robot, the local walking pattern may be generated by combining unit walking motions necessary to avoid the new obstacle from among the plurality of unit walking motions stored in the database.
These and/or other aspects of embodiments will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
Hereinafter, among multi-legged walking robots, a bipedal walking robot will be exemplarily described.
As shown in
The upper body 101 of the robot 100 includes the torso 102, the head 104 connected to the upper portion of the torso 102 through a neck 120, the two arms 106R and 106L connected to both sides of the upper portion of the torso 102 through shoulders 114R and 114L, and hands 108R and 108L respectively connected to tips of the two arms 106R and 106L.
The lower body 103 of the robot 100 includes the two legs 110R and 110L connected to both sides of the lower portion of the torso 102 of the upper body 101, and feet 112R and 112L respectively connected to tips of the two legs 110R and 110L.
Here, “R” and “L” respectively indicate the right and left sides of the robot 100, and “COG” indicates the center of gravity of the robot 100.
As shown in
A waist joint unit 15 having 1 degree of freedom in the yaw direction so as to rotate the upper body 101 is installed on the torso 102.
Further, cameras 41 to capture surrounding images and microphones 42 for user's voice input are installed on the head 104 of the robot 100.
The head 104 is connected to the torso 102 of the upper body 101 through a neck joint unit 280. The neck joint unit 280 includes a rotary joint 281 in the yaw direction (rotated around the z-axis), a rotary joint 282 in the pitch direction (rotated around the y-axis), and a rotary joint 283 in the roll direction (rotated around the x-axis), and thus has 3 degrees of freedom.
Motors (for example, actuators, such as electric motors or hydraulic motors) to rotate the head 104 are connected to the respective rotary joints 281, 282, and 283 of the neck joint unit 280.
The two arms 106L and 106R of the robot 100 respectively include upper arm links 31, lower arm links 32, and hands 33.
The upper arm links 31 are connected to the upper body 101 through shoulder joint units 250L and 250R, the upper arm links 31 and the lower arm links 32 are connected to each other through elbow joint units 260, and the lower arm links 32 and the hands 33 are connected to each other by wrist joint units 270.
The shoulder joint units 250L and 250R are installed at both sides of the torso 102 of the upper body 101, and connect the two arms 106L and 106R to the torso 102 of the upper body 101.
Each elbow joint unit 260 has a rotary joint 261 in the pitch direction and a rotary joint 262 in the yaw direction, and thus has 2 degrees of freedom.
Each wrist joint unit 270 has a rotary joint 271 in the pitch direction and a rotary joint 272 in the roll direction, and thus has 2 degrees of freedom.
Each hand 33 is provided with five fingers 33a. A plurality of joints (not shown) driven by motors may be installed on the respective fingers 33a. The fingers 33a perform various motions, such as gripping an article or pointing in a specific direction, in connection with movement of the arms 106.
The two legs 110L and 110R of the robot 100 respectively include thigh links 21, calf links 22, and the feet 112L and 112R.
The thigh links 21 correspond to thighs of a human and are connected to the torso 102 of the upper body 101 through hip joint units 210, the thigh links 21 and the calf links 22 are connected to each other by knee joint units 220, and the calf links 22 and the feet 112L and 112R are connected to each other by ankle joint units 230.
Each hip joint unit 210 has a rotary joint (hip yaw joint) 211 in the yaw direction (rotated around the z-axis), a rotary joint (hip pitch joint) 212 in the pitch direction (rotated around the y-axis), and a rotary joint (hip roll joint) 213 in the roll direction (rotated around the x-axis), and thus has 3 degrees of freedom.
Each knee joint unit 220 has a rotary joint 221 in the pitch direction, and thus has 1 degree of freedom.
Each ankle joint unit 230 has a rotary joint 231 in the pitch direction and a rotary joint 232 in the roll direction, and thus has 2 degrees of freedom.
Since six rotary joints of the hip joint unit 210, the knee joint unit 220, and the ankle joint unit 230 are provided on each of the two legs 110L and 110R, a total of twelve rotary joints are provided to the two legs 110L and 110R.
Further, multi-axis force and torque (F/T) sensors 24 are respectively installed between the feet 112L and 112R and the ankle joint units 230 of the two legs 110L and 110R. The multi-axis F/T sensors 24 measure three-directional components Fx, Fy, and Fz of force and three-directional components Mx, My, and Mz of moment transmitted from the feet 112L and 112R, thereby detecting whether or not the feet 112L and 112R touch the ground and the load applied to the feet 112L and 112R.
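The measured force components lend themselves to a simple contact and load test. The sketch below is illustrative only; the function names and the contact threshold are assumptions, not taken from the disclosure.

```python
def foot_in_contact(fz_newtons, threshold=5.0):
    """Return True if the vertical force component Fz suggests ground contact.

    threshold is an illustrative value in newtons, not specified in the source.
    """
    return fz_newtons > threshold

def foot_load_fraction(fz_left, fz_right):
    """Fraction of the total vertical load carried by the left foot."""
    total = fz_left + fz_right
    return fz_left / total if total > 0 else 0.0
```

In practice such tests would be filtered against sensor noise; the bare threshold comparison only illustrates the idea of deriving contact state from the F/T measurements.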
Although not shown in the drawings, actuators, such as motors, to drive the respective rotary joints are installed on the robot 100. A control unit to control the overall operation of the robot 100 properly controls the motors, thereby allowing the robot 100 to perform various motions.
With reference to
The first operating state (flight state) S1 corresponds to a pose of swinging the leg 110L or 110R, the second operating state (loading state) S2 corresponds to a pose of loading the foot 112 on the ground, the third operating state (heel contact state) S3 corresponds to a pose of bringing the heel of the foot 112 into contact with the ground, the fourth operating state (heel and toe contact state) S4 corresponds to a pose of bringing both the heel and the toe of the foot 112 into contact with the ground, the fifth operating state (toe contact state) S5 corresponds to a pose of bringing the toe of the foot 112 into contact with the ground, and the sixth operating state (unloading state) S6 corresponds to a pose of unloading the foot 112 from the ground.
In order to transition from one operating state to another operating state, a control action to achieve such transition is required.
In more detail, if the first operating state S1 transitions to the second operating state S2 (S1→S2), a control action in which the heel of the foot 112 touches the ground is required.
If the second operating state S2 transitions to the third operating state S3 (S2→S3), a control action in which the knee (particularly, the knee joint unit) of the foot 112 touching the ground bends is required.
If the third operating state S3 transitions to the fourth operating state S4 (S3→S4), a control action in which the ball of the foot 112 touches the ground is required.
If the fourth operating state S4 transitions to the fifth operating state S5 (S4→S5), a control action in which the knee of the foot 112 touching the ground extends is required.
If the fifth operating state S5 transitions to the sixth operating state S6 (S5→S6), a control action in which the knee of the foot 112 touching the ground fully extends is required.
If the sixth operating state S6 transitions to the first operating state S1 (S6→S1), a control action in which the ball of the foot 112 leaves the ground is required.
Therefore, in order to perform the control actions, the robot 100 calculates torque commands of the respective joints corresponding to the respective control actions, and outputs the calculated torque commands to the actuators, such as the motors, installed on the respective joints to drive the actuators.
In such a torque-based FSM control method, walking of the robot 100 is controlled depending on the operating states S1, S2, S3, S4, S5, and S6, defined in advance.
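The six operating states and their cyclic transitions can be encoded compactly. The following is a minimal sketch; the state names paraphrase the description above, and the transition comments summarize the control actions S1→S2 through S6→S1.

```python
# Illustrative encoding of the six operating states and their cyclic
# transitions; each comment names the control action achieving the transition.
TRANSITIONS = {
    "S1_swing": "S2_loading",            # heel of the foot touches the ground
    "S2_loading": "S3_heel_contact",     # supporting knee bends
    "S3_heel_contact": "S4_heel_toe",    # ball of the foot touches the ground
    "S4_heel_toe": "S5_toe_contact",     # supporting knee extends
    "S5_toe_contact": "S6_unloading",    # supporting knee fully extends
    "S6_unloading": "S1_swing",          # ball of the foot leaves the ground
}

def step_state(state):
    """Advance the finite state machine by one transition."""
    return TRANSITIONS[state]

# One full gait cycle returns to the starting state.
state = "S1_swing"
for _ in range(len(TRANSITIONS)):
    state = step_state(state)
assert state == "S1_swing"
```

In the actual controller, each transition would be triggered by sensed events (e.g., heel contact detected by the F/T sensors) rather than stepped unconditionally.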
As shown in
The control unit 410 to control the overall operation of the robot 100 includes a command interpretation unit 412, a motion trajectory generation unit 414, a storage unit 416, and a motion command unit 418. Particularly, the control unit 410 controls the respective joints of the robot 100 so as to generate walking patterns of the robot 100 and to allow the robot 100 to walk according to the walking patterns.
The command interpretation unit 412 interprets the action command received through the input unit 400, and recognizes robot parts which perform a main motion having high relevance to a commanded action and robot parts which perform remaining motions having low relevance to the commanded action, respectively.
The motion trajectory generation unit 414 generates optimized motion trajectories for the robot parts performing the main motion among the robot parts recognized by the command interpretation unit 412 through optimization in consideration of robot dynamics, and generates predetermined motion trajectories for the robot parts performing the remaining motions so as to correspond to the commanded action. Here, each motion trajectory is one of joint trajectories, link trajectories, and end-effector (for example, finger tip or toe tip) trajectories.
The storage unit 416 divisionally stores the robot parts performing the main motion having high relevance to the commanded action and the robot parts performing the remaining motions having low relevance to the commanded action according to respective action commands, and stores predetermined motion trajectories so as to correspond to the commanded action according to the respective action commands.
The motion command unit 418 outputs a motion command, causing the robot parts performing the main motion to move along the optimized motion trajectories generated by the motion trajectory generation unit 414, to the corresponding driving units 420 and thus controls operation of the driving units 420, and outputs a motion command, causing the robot parts performing the remaining motions to move along the predetermined motion trajectories corresponding to the commanded action generated by the motion trajectory generation unit 414, to the corresponding driving units 420 and thus controls operation of the driving units 420.
Hereinafter, an optimization process of walking of the robot in accordance with an embodiment will be described in detail.
The control unit 410 sets control gain and a plurality of variables, which determine paths of target joints to be controlled during walking of the robot, as optimization variables.
When all of the numerous control variables of the respective joints relating to walking are set as the optimization variables, complexity is increased, and thus optimization time is lengthened and the rate of convergence is lowered.
The minimum number of control variables to determine target joint paths may be set using periodicity in walking and symmetry in swing of legs. The control variables to determine the target joint paths in accordance with this embodiment include a variable (P1=q_hip_roll) indicating left and right movement of the hip joints of the robot, a variable (P2=q_torso) indicating inclination of the torso of the robot, a variable (P3=q_hipsweep) indicating a stride length of the robot, a variable (P4=q_kneebend) indicating a bending angle of the knees of the robot, a variable (P5=tf) indicating a walking velocity of the robot, a variable (P6=q_ankle) indicating movement of the ankles of the robot in the y-axis direction, a variable (P7=q_hip_roll_ini) indicating an initial state of the left and right movement of the hip joints of the robot, and a variable (P8=q_hipsweep_ini) indicating an initial state of the stride of the robot. The variables P7 and P8 are control variables indicating an initial walking pose of the robot. The above-described control variables except for the variable P5 indicating the walking velocity are expressed as angles in the directions of corresponding degrees of freedom of corresponding joints. Although this embodiment describes eight variables to determine the target joint paths, the number of the variables is not limited thereto. Further, the contents of the variables are not limited thereto.
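The eight variables above can be collected in a single container. The sketch below mirrors the variable names from the text; the default values, units, and the class name itself are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GaitParameters:
    """The eight target-joint-path variables named in the text (P1..P8).

    Field names mirror the source; default values are arbitrary placeholders.
    All fields except tf are angles (rad); tf is the step time (s), which
    determines the walking velocity.
    """
    q_hip_roll: float = 0.0      # P1: left/right movement of the hip joints
    q_torso: float = 0.0         # P2: inclination of the torso
    q_hipsweep: float = 0.0      # P3: stride length
    q_kneebend: float = 0.0      # P4: bending angle of the knees
    tf: float = 1.0              # P5: step time, sets walking velocity
    q_ankle: float = 0.0         # P6: ankle movement in the y-axis direction
    q_hip_roll_ini: float = 0.0  # P7: initial state of hip left/right movement
    q_hipsweep_ini: float = 0.0  # P8: initial state of the stride

params = GaitParameters(tf=0.8)  # e.g., a faster step than the 1.0 s default
```

Grouping the optimization variables this way keeps the optimizer's search space explicit, matching the text's point that only a minimal set of variables is exposed to optimization.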
The control unit 410 calculates torque input values by parameterizing Expression 1 below using the set optimization variables.
τi=kip(qid−qi)−kidq̇i  (Expression 1)
Herein, τ represents a torque input value, and i denotes each of the joints relating to walking, i.e., including a torso joint movable at an angle of θ1 (of
Further, kp represents position gain, and kd represents damping gain. qd represents a target joint path, q represents a current angle measured by an encoder, and q̇ represents a current angular velocity of the corresponding joint.
The control unit 410 calculates the torque input values through Expression 1 using the control gain and the variables determining the target joint paths, which are set as optimization variables.
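Expression 1 is a per-joint proportional-derivative (PD) law, which can be transcribed directly. The function name and the numeric gains in the example are illustrative assumptions; only the formula itself comes from the text.

```python
def joint_torque(q_target, q_measured, qdot_measured, k_p, k_d):
    """Expression 1: tau_i = k_p * (q_d - q) - k_d * qdot.

    q_target: target joint angle from the generated path (rad)
    q_measured: current angle from the encoder (rad)
    qdot_measured: current angular velocity (rad/s)
    k_p: position gain; k_d: damping gain (both set by optimization)
    """
    return k_p * (q_target - q_measured) - k_d * qdot_measured

# A joint lagging its target is pushed toward it, damped by its velocity.
tau = joint_torque(q_target=0.5, q_measured=0.3, qdot_measured=0.1,
                   k_p=50.0, k_d=2.0)
```

Because the gains k_p and k_d are themselves optimization variables, the same law can yield anything from stiff tracking to compliant, low-gain behavior, which is the source of the energy-efficiency claim for torque-based control.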
The control unit 410 selects plural poses from among continuous poses assumed by the robot during walking of the robot in a half cycle in which the robot makes one step with its one foot to determine the target joint paths, and sets the selected plural poses as reference poses. In this embodiment, the reference poses include a pose when the walking of the robot in the half cycle is started, a pose when the walking of the robot in the half cycle is completed, and a pose halfway between the point of time when the walking of the robot in the half cycle is started and the point of time when the walking of the robot in the half cycle is completed.
The control unit 410 calculates angles of the torso joint, the hip joints, the knee joints, and the ankle joints in the directions of the corresponding degrees of freedom in the respective reference poses, and determines paths of the respective target joints of the robot during walking of the robot in the half cycle through spline interpolation of the calculated angles.
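With three reference poses per half cycle (start, midpoint, end), the simplest instance of the interpolation described above is a quadratic polynomial through the three angles. The source says "spline interpolation" without specifying the spline type, so the quadratic Lagrange form below is an assumed minimal sketch, not the disclosed method.

```python
def quadratic_path(q_start, q_mid, q_end, t, tf):
    """Joint angle at time t in [0, tf], interpolated through three
    reference poses: start (t=0), midpoint (t=tf/2), and end (t=tf).

    Uses the quadratic Lagrange polynomial on normalized time s = t/tf,
    with nodes at s = 0, 0.5, and 1.
    """
    s = t / tf
    return (q_start * 2.0 * (s - 0.5) * (s - 1.0)   # basis for s = 0
            - q_mid * 4.0 * s * (s - 1.0)           # basis for s = 0.5
            + q_end * 2.0 * s * (s - 0.5))          # basis for s = 1

# The path passes exactly through the three reference angles.
q_at_mid = quadratic_path(0.0, 0.3, 0.1, t=0.5, tf=1.0)
```

A production controller would more likely use cubic splines for continuous velocity across half-cycle boundaries; the quadratic form only illustrates how reference-pose angles become a continuous target joint path qd for Expression 1.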
The control unit 410 sets an objective function J consisting of the sum total of various performance indices to allow the robot to perform natural walking similar to a human with high energy efficiency.
Expression 2 represents the objective function J consisting of the sum total of plural performance indices J1˜J5 relating to walking of the robot. The coefficients preceding the performance indices are weights that assign relative importance to the respective performance indices.
The control unit 410 checks whether or not a value of the objective function J satisfies a convergence condition. The convergence condition is satisfied if a difference between the current value of the objective function and a value of the objective function calculated in the previous process is less than a designated value. Here, the designated value may be predetermined by a user. The control unit 410 causes the value of the objective function to satisfy the convergence condition, thereby allowing the robot to perform natural walking similar to a human with high energy efficiency.
Expression 3 represents the respective performance indices of the objective function.
J1 is a performance index indicating a position error of a foot of the robot contacting the ground, x represents an actual position of the foot contacting the ground, xd represents an objective position of the foot contacting the ground, and i represents the number of steps (hereinafter, the same). J1 is a difference between the actual position and the objective position of the foot, and thus represents the position error of the foot.
In J2, F represents force applied to the foot of the robot when the foot of the robot contacts the ground. The first term in J2 represents force applied to the foot of the robot when the foot of the robot contacts the ground, and the second term represents a difference of forces applied to the feet of the robot at respective steps. No difference in forces means that walking of the robot is periodically achieved, and thus J2 is a performance index indicating a force error applied to the foot of the robot when the foot of the robot contacts the ground and a periodicity error of walking of the robot.
In J3, v represents an actual walking velocity of the robot, and vd represents an objective walking velocity of the robot. Therefore, J3 is a performance index indicating a walking velocity error of the robot.
In J4, τ represents a torque of each of the respective joints required for walking of the robot, and tf represents time to complete planned walking. Therefore, J4 is a performance index indicating torques required during walking of the robot.
In J5, P represents a vector consisting of the variables to determine an actual target joint path, and Ppredi represents a vector consisting of the variables to determine an objective target joint path. T is a mark representing transposition of the vector, and W represents a diagonal matrix whose numbers of rows and columns equal the number of elements of the vector P and whose diagonal elements serve as weights applied to the respective elements of the vector. Therefore, J5 is a performance index indicating a walking style error.
Although this embodiment illustrates the objective function as consisting of five performance indices, the configuration of the objective function is not limited thereto. Further, the performance indices are not limited to the above-described contents and may include other limitations relating to walking of the robot.
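The weighted-sum structure of Expression 2 is straightforward to express. The sketch below is a generic weighted sum; the particular weight values would be chosen by the designer and are not given in the source.

```python
def objective(indices, weights):
    """Objective function J as in Expression 2: the weighted sum of the
    performance indices J1..J5 (or however many indices are configured).

    indices: sequence of performance-index values
    weights: coefficients assigning importance to the respective indices
    """
    return sum(w * j for w, j in zip(weights, indices))

# Example with three indices and illustrative weights.
j_value = objective([1.0, 2.0, 0.5], [0.5, 1.0, 2.0])
```

The weights trade off competing goals (e.g., foot-placement accuracy versus torque effort), so tuning them shapes the resulting gait style.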
The control unit 410 obtains a resultant motion of the robot through calculation of forward dynamics using the torque input values, and calculates the value of the objective function using data of the resultant motion and actual walking of the robot. If the calculated value of the objective function does not satisfy the convergence condition, the control unit 410 adjusts optimization variables several times so that the value of the objective function satisfies the convergence condition.
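The adjust-and-recheck cycle described above can be sketched as a generic convergence loop. The update rule, tolerance, and iteration cap below are all assumptions; the source only specifies the stopping test (successive objective values differing by less than a user-designated value).

```python
def optimize_gait(evaluate_objective, update_variables, x0,
                  tol=1e-4, max_iters=100):
    """Sketch of the convergence loop described in the text.

    evaluate_objective(x): runs forward dynamics with torque inputs built
        from optimization variables x and returns the objective value J.
    update_variables(x, j): returns adjusted optimization variables.
    tol plays the role of the user-designated convergence value.
    """
    x = x0
    j_prev = evaluate_objective(x)
    for _ in range(max_iters):
        x = update_variables(x, j_prev)
        j = evaluate_objective(x)
        if abs(j_prev - j) < tol:   # convergence condition from the text
            return x, j
        j_prev = j
    return x, j_prev

# Toy check: minimize (x - 3)^2 with a simple gradient-descent update.
x_opt, j_opt = optimize_gait(
    evaluate_objective=lambda x: (x - 3.0) ** 2,
    update_variables=lambda x, j: x - 0.2 * (x - 3.0),
    x0=0.0)
```

In the robot, evaluate_objective would be expensive (a full forward-dynamics simulation of the half cycle), so the number of objective evaluations, not the update rule, dominates optimization time.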
Table 1 below represents parameters to generate target joint paths in accordance with an embodiment.
Unit walking motions A={A1, A2, A3, . . . , An} are defined through combination of the variables to generate target joint paths. Here, n is the total number of the defined unit walking motions. The unit walking motions mean specific walking motions which are defined in advance, and an objective series of walking patterns may be generated by variously combining the unit walking motions. Each unit walking motion is characterized by a size of a stride (p4, calculated as a leg length of the robot), a rotating angle (p2) and a step time (tf). One unit walking motion Ai is defined as {Pr1, Pr2, Pr3, . . . , Pr9} (i.e., Ai={Pr1, Pr2, Pr3, . . . , Pr9}). Here, Pr1, Pr2, Pr3, . . . , Pr9 are referred to as control parameters, and a combination of the minimum number of the control parameters is used so as to perform one unit walking motion. Table 2 represents a composition example of these parameters.
As described above, n unit walking motions A={A1, A2, A3, . . . , An} are defined in advance and stored to construct a database 422 (see
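The database of unit walking motions and their assembly into a walking pattern can be sketched as follows. The motion names, parameter values, and the flat-dictionary representation are invented for illustration; only the idea of combining predefined unit motions comes from the source.

```python
# Minimal sketch of the unit-walking-motion database (cf. database 422) and
# walking-pattern assembly. All values below are illustrative placeholders.
UNIT_MOTIONS = {
    "A1_forward":    {"stride": 0.30, "rotating_angle": 0.0,   "step_time": 0.8},
    "A2_turn_left":  {"stride": 0.15, "rotating_angle": 15.0,  "step_time": 1.0},
    "A3_turn_right": {"stride": 0.15, "rotating_angle": -15.0, "step_time": 1.0},
}

def build_walking_pattern(motion_names):
    """Combine stored unit walking motions, in order, into a walking pattern."""
    return [UNIT_MOTIONS[name] for name in motion_names]

# Two forward steps followed by a left turn along the objective path.
pattern = build_walking_pattern(["A1_forward", "A1_forward", "A2_turn_left"])
total_time = sum(m["step_time"] for m in pattern)
```

Because each unit motion carries its own stride, rotating angle, and step time, interpreting an objective path reduces to selecting a sequence of database entries, which is what makes local replanning around a new obstacle cheap.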
As shown in
If a dynamic obstacle 708, i.e., a new movable obstacle, is located on the existing walking path, as shown in
As shown in
As is apparent from the above description, in a robot and a control method thereof in accordance with an embodiment, a walking pattern having desired stride, velocity and direction is generated through optimization of actuated dynamic walking and the robot walks based on the walking pattern, thereby allowing the robot to naturally walk with high energy efficiency similar to a human.
The embodiments can be implemented in computing hardware and/or software, such as (in a non-limiting example) any computer that can store, retrieve, process and/or output data and/or communicate with other computers. For example, control unit 410 in
Although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---
10-2010-0131263 | Dec 2010 | KR | national |