This application relates to planning and control technologies of legged robots in the technical field of legged robots, and in particular, to a method, an apparatus, and an electronic device for controlling a legged robot, a computer-readable storage medium, a computer program product, and a legged robot.
With the wide application of artificial intelligence (AI) and legged robot technology in civilian and commercial fields, legged robots based on AI and the legged robot technology play an increasingly important role in fields such as intelligent transportation and smart home, and also face higher requirements.
At present, the legged robot (such as a quadruped robot) is capable of performing a plurality of different actions, for example, bounding and flipping. However, the legged robot often performs these actions stiffly during landing, and the impact force withstood by each joint is greater than an impact force threshold, which increases the body rebound and raises the probability of damage to the legged robot.
In view of the foregoing problems, embodiments of this application provide a method, an apparatus, and an electronic device for controlling a legged robot, a computer-readable storage medium, a computer program product, and a legged robot.
A method for controlling a legged robot is provided, the legged robot including a base and at least two robotic legs, each of the robotic legs including at least one joint, the method including:
An electronic device for controlling a legged robot is provided, including:
A non-transitory computer-readable storage medium is provided, having a computer-executable program stored therein, the computer-executable program, when executed by a processor, causing the processor to perform the method for controlling a legged robot provided in the embodiments of this application.
The embodiments of this application have at least the following beneficial effects. The center of mass and the trajectory of the robotic leg of the legged robot after landing are planned, and the action of each joint of the legged robot is controlled based on the planned center of mass and trajectory of the foot end of the robotic leg. In this way, during the landing of the legged robot, the impact force withstood by each joint can be reduced, the body rebound can be reduced, and the impact resistance of the legged robot during the landing can be improved, thereby reducing the probability of damage to the legged robot.
To describe the technical solutions in embodiments of this application clearly, the following briefly describes the accompanying drawings that need to be used in the description of the embodiments. Apparently, the accompanying drawings described below are merely some exemplary embodiments of this application, and a person of ordinary skill in the art may still derive other accompanying drawings from these accompanying drawings without creative efforts. The following accompanying drawings are not intentionally scaled to an actual size.
To make the objectives, technical solutions, and advantages of this application clearer, the following describes exemplary embodiments according to this application in detail with reference to the accompanying drawings. Apparently, the described embodiments are merely some but not all of the embodiments of this application. It is to be understood that, this application is not limited by the exemplary embodiments described herein.
As shown in the embodiments of this application and claims, words such as “a/an”, “one”, “a kind”, and/or “the” do not refer specifically to the singular and may also include the plural, unless the context clearly indicates an exception. In general, terms “comprise” and “include” merely indicate including clearly identified steps and elements. The steps and elements do not constitute an exclusive list, and may also include other steps or elements.
Although the embodiments of this application make various references to some modules in an apparatus provided in the embodiments of this application, any quantity of different modules may be used and run on a user terminal and/or a server. The modules are merely illustrative, and different aspects of the apparatus and the method may use different modules.
Flowcharts are used in the embodiments of this application to illustrate operations performed by the method and the apparatus according to the embodiments of this application. It is to be understood that, the foregoing or following operations are not necessarily strictly performed according to an order. On the contrary, various steps may be performed in reverse order or simultaneously as required. In addition, other operations may also be added to the processes. Alternatively, one or more operations may be deleted from the processes.
To facilitate description of the embodiments of this application, the following introduces concepts related to the embodiments of this application.
A legged robot provided in the embodiments of this application is a robot that uses robotic legs to move. The legged robot is biomimetically designed based on animals, to simulate motion patterns of the animals and replicate their motion capabilities based on engineering technology and scientific research achievements. The legged robot is adapted to various environments, including a structured environment (such as a road, a railway, and a treated flat road surface) and an unstructured environment (such as mountainous land, a swamp, and a rugged road surface), may adapt to various changes in a terrain and climb over various obstacles, and may effectively reduce the load and improve energy utilization efficiency of a system. Legged robots may be divided into a monopod robot, a bipedal robot, a quadruped robot, a hexapod robot, an octopod robot, and the like based on quantities of feet. The quadruped robot has higher static stability than the bipedal robot, and moves more simply and flexibly than the hexapod robot and the octopod robot. Therefore, the quadruped robot is a common choice for research on legged robots. A gait of the quadruped robot refers to coordination among the four robotic legs in time and space in order for the quadruped robot to move continuously. The gait of the quadruped robot is derived from a gait of a quadruped animal, which may include, but is not limited to, the following three simplified forms: walk, trot, and bound.
The method for controlling a legged robot provided in the embodiments of this application may be implemented based on artificial intelligence (AI). AI is a theory, a method, a technology, and an application system that uses a digital computer or a machine controlled by the digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best result. In other words, AI is a comprehensive technology of computer science, which is used to understand the essence of intelligence and produces a new intelligent machine that can respond in a manner similar to human intelligence. For example, according to the method for controlling a legged robot based on AI, a motion trajectory and a gait of the legged robot can be planned in a manner similar to that of guiding motion of a living animal by human, so that the motion of the legged robot is flexible and bionic. Through research on design principles and implementation methods of various intelligent machines, AI enables the method for controlling a legged robot provided in the embodiments of this application to automatically and efficiently design the subsequent motion trajectory and gait of the legged robot based on a current motion state of the legged robot.
Based on the above, the method for controlling a legged robot provided in the embodiments of this application relates to technologies such as AI and machine learning. The method for controlling a legged robot provided in the embodiments of this application is described below with reference to the accompanying drawings.
As shown in
In the embodiments of this application, the exemplary legged robot can move based on four robotic legs. Each of the robotic legs may include a thigh and a calf, and each robotic leg may include at least one joint. For example, each robotic leg may include a plurality of lower limb joints. The plurality of lower limb joints are, for example, a hip joint having two degrees of freedom and a knee joint having one degree of freedom.
In the embodiments of this application, each robotic leg may further be configured with a plurality of motors. The plurality of motors may be used individually or in combination to control the two degrees of freedom of the hip joint and the degree of freedom of the knee joint of the quadruped robot.
The legged robot may further be equipped with a variety of sensors, such as an inertial measurement unit (IMU) sensor and a joint angle encoder. The IMU sensor may provide an acceleration and pose information of the legged robot in real time. The joint angle encoder may provide joint angle information (such as an angle of the joint angle and an angular velocity feedback value) of each joint of the legged robot in real time.
In the embodiments of this application, the exemplary legged robot may implement an action such as flipping or bounding under control of the plurality of motors, and, after performing such an action, fall freely back onto a plane. To alleviate the stiffness of the actions of the legged robot during the landing, reduce the impact force withstood by each joint and the body rebound, and reduce the probability of damage to the legged robot during the landing, the legged robot needs to be controlled both during its free fall and during its contact with the plane.
To control the free fall process of the legged robot, for example, the process in which each foot end of the quadruped robot contacts the plane may be modeled as the action of two virtual springs in the x-axis direction and the z-axis direction (the vertical direction perpendicular to the plane). If a control scheme (such as a PD control scheme) is used to adjust the stiffness and damping parameters of the virtual springs, an output torque of each joint motor can be derived equivalently, thereby enabling the legged robot to land dexterously. For another example, a robotic leg and the environment may each be represented by an equivalent model (such as an RLC model). Based on the two models, a data-driven approach (such as a machine learning control scheme) may be used to derive the output torque of each joint motor, thereby enabling the legged robot to land dexterously.
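As a minimal sketch of the virtual-spring idea described above (the 2-D simplification, the parameter values, and all names are illustrative assumptions, not part of this application):

```python
import numpy as np

# Illustrative stiffness and damping of the two virtual springs acting
# in the x and z directions; the values are assumptions for this sketch.
K = np.diag([800.0, 1200.0])   # virtual spring stiffness (N/m)
D = np.diag([40.0, 60.0])      # virtual damping (N*s/m)

def virtual_spring_torque(J, p, v, p_des, v_des):
    """Map the virtual spring-damper force at the foot end to joint motor
    torques via the Jacobian transpose (a basic PD impedance scheme).

    J            : 2x2 foot Jacobian of the leg
    p, v         : actual foot position/velocity in the x-z plane
    p_des, v_des : desired foot position/velocity
    """
    f = K @ (p_des - p) + D @ (v_des - v)   # virtual spring-damper force
    return J.T @ f                          # equivalent joint torques
```

Tuning K and D then directly shapes how softly each foot end absorbs the touchdown impact.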
However, the foregoing schemes for controlling the free fall process of the legged robot all establish a spring-damping model based on the robotic leg model or the environmental model of the legged robot, whereas the dynamic constraints and characteristics of the legged robot are reflected by changes in the robotic leg and the center of mass. Therefore, a deviation exists between these schemes and the dynamic constraints and characteristics of the legged robot, which affects the control effect.
Therefore, for the foregoing problems, an embodiment of this application provides a method for controlling a legged robot. The legged robot includes a base and at least two robotic legs. Each of the robotic legs includes at least one joint. The method includes: determining a first expected moving trajectory corresponding to the legged robot and determining a second expected moving trajectory corresponding to the legged robot in response to the legged robot falling to contact a plane, the first expected moving trajectory indicating an expected moving trajectory of a center of mass of the legged robot, and the second expected moving trajectory indicating an expected moving trajectory of a foot end of each of the at least two robotic legs; and controlling, based on a dynamic model corresponding to the legged robot, the first expected moving trajectory, and the second expected moving trajectory, an action of each joint after the legged robot contacts the plane.
An embodiment of this application further provides an apparatus for controlling a legged robot. The legged robot includes a base and at least two robotic legs. Each of the robotic legs includes at least one joint. The apparatus includes: a planning and calculation module, configured to determine a first expected moving trajectory corresponding to the legged robot and determine a second expected moving trajectory corresponding to the legged robot in response to the legged robot falling to contact a plane, the first expected moving trajectory indicating an expected moving trajectory of a center of mass of the legged robot, and the second expected moving trajectory indicating an expected moving trajectory of a foot end of each of the at least two robotic legs; and a control module, configured to control, based on a dynamic model corresponding to the legged robot, the first expected moving trajectory, and the second expected moving trajectory, an action of each joint after the legged robot contacts the plane.
An embodiment of this application further provides a legged robot, including: a base; a lower limb portion, connected to the base, the lower limb portion including at least two robotic legs, each of the robotic legs including a hip joint and a knee joint, the hip joint including at least two degrees of freedom, and the knee joint including at least one degree of freedom; and an electronic device, arranged on the legged robot and configured to perform the method for controlling a legged robot provided in the embodiments of this application.
An embodiment of this application provides a computer-readable storage medium, having a computer-executable program stored therein, the computer-executable program, when executed by a processor, causing the processor to perform the method for controlling a legged robot provided in the embodiments of this application.
An embodiment of this application provides a computer program product, including a computer-executable program, the computer-executable program, when executed by a processor, implementing the method for controlling a legged robot provided in the embodiments of this application.
According to the method for controlling a legged robot provided in the embodiments of this application, the trajectory and gait planning of the legged robot can be automatically implemented, and it can also be ensured that the impact force withstood by each joint and the body rebound are reduced during the landing of the legged robot, to achieve the anti-impact protection effect on the legged robot while ensuring the landing function.
An execution subject of the method for controlling a legged robot provided in the embodiments of this application described below is an electronic device for controlling the legged robot. In addition, the electronic device may be integrated on the legged robot or independent of the legged robot. This is not limited in the embodiments of this application.
In step S201, in response to the legged robot falling to contact a plane, a first expected moving trajectory corresponding to the legged robot and a second expected moving trajectory corresponding to the legged robot are determined.
The first expected moving trajectory indicates an expected moving trajectory of a center of mass of the legged robot, and the second expected moving trajectory indicates an expected moving trajectory of the foot end of each of the at least two robotic legs.
As an example, step S201 may be performed by any electronic device. The electronic device herein may be a terminal or a server. Alternatively, the electronic device herein may be both the terminal and the server, which is not limited. The terminal may be a smart phone, a computer (such as a tablet computer, a laptop, and a desktop computer), a smart wearable device (such as a smart watch and smart glasses), a smart voice interactive device, a smart home appliance (such as a smart television), an onboard terminal, an aircraft, or the like. The server may be an independent physical server, or may be a server cluster formed by a plurality of physical servers or a distributed system, and may further be a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), and a big data and artificial intelligence platform.
In this embodiment of this application, the terminal and the server may be located within or outside a blockchain network, which is not limited. In this embodiment of this application, the terminal and the server may also upload any data stored internally to the blockchain network for storage to prevent the data stored internally from being tampered with and improve data security.
For example, during falling of the legged robot, the contact status between the at least two robotic legs of the legged robot and the plane (such as the ground and a table top) may change, so that the legged robot may include a plurality of motion forms during the contact with the plane, such as a form in which all the robotic legs leave the plane, a form in which some of the robotic legs contact the plane, and a form in which all the robotic legs contact the plane. The legged robot falls at different initial velocities. Therefore, contact information of the contact between the legged robot and the plane needs to be determined, and the first expected moving trajectory and the second expected moving trajectory are determined based on a pose and state information of the legged robot at the contact moment. The process of determining the contact information of the contact between the legged robot and the plane is described later with reference to
As described above, the first expected moving trajectory indicates the expected moving trajectory of the center of mass of the legged robot. For example, the first expected moving trajectory may include expected position information, velocity information, and acceleration information of the center of mass of the legged robot at each time step. The first expected moving trajectory may be represented by a timing value sequence composed of information related to the center of mass corresponding to each time step. Certainly, the first expected moving trajectory may also be represented by another data structure, and this application is not limited thereto. The process of determining the first expected moving trajectory is described later with reference to
An end of the robotic leg away from the base is referred to as a foot end, and the second expected moving trajectory indicates the expected moving trajectory of the foot end of each of at least two robotic legs. For example, the second expected moving trajectory may include expected position information, velocity information, acceleration information, angular velocity information, angular acceleration information, and the like of the foot end of each robotic leg at each time step. For another example, the second expected moving trajectory may further include expected position information, velocity information, acceleration information, angular velocity information, angular acceleration information, and the like of each joint of each robotic leg at each time step. Similarly, the second expected moving trajectory may be represented by a timing sequence composed of information related to each robotic leg corresponding to each time step. Certainly, the second expected moving trajectory may also be represented by another data structure, and this application is not limited thereto. The process of determining the second expected moving trajectory is described later with reference to
A time step may also be referred to as a frame. A time difference between adjacent time steps may be the same or different. For example, because a change in the action and the force of the legged robot is greater than a change threshold during a period of time immediately after the legged robot comes into contact with the plane, a difference between the time steps may be less than a duration threshold, thereby improving flexibility of motion control of the legged robot in the early stage. In a process of the legged robot gradually reaching a stable state, the difference between the time steps may be greater than the duration threshold to save computing power. The time difference between adjacent time steps is not limited in the embodiments of this application.
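The variable-time-step idea above can be sketched as follows (the settle time and the fine/coarse step values are illustrative assumptions):

```python
def time_step(t_since_contact, t_settle=0.5, dt_fine=0.002, dt_coarse=0.01):
    """Return the control time step: fine right after touchdown, when the
    change in the action and the force of the robot is large, and coarse
    once the robot approaches a stable state, to save computing power."""
    return dt_fine if t_since_contact < t_settle else dt_coarse
```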
In step S202, an action of each joint after the legged robot contacts the plane is controlled based on a dynamic model corresponding to the legged robot, the first expected moving trajectory, and the second expected moving trajectory.
The dynamic model is configured to determine a change relationship among motion information of each joint (for example, including an angle, an angular velocity, an angular acceleration, and a joint torque), motion information of the center of mass (for example, including an angle, an angular velocity, and an angular acceleration), and an external contact force. In other words, the dynamic model corresponding to the legged robot is configured to represent the change relationship among the following information during the motion of the legged robot: the angle, the angular velocity, and the angular acceleration respectively corresponding to each joint and to the center of mass, the joint torque, and the external contact force. For example, the dynamic model may describe the foregoing change relationship from the perspective of an energy change. Alternatively, the dynamic model may describe the foregoing change relationship from the perspective of a momentum change or a force change. This is not limited in this application.
In the process from a moment the legged robot falls to contact the plane to a moment the legged robot stands on the plane stably, an acting force withstood by the legged robot includes gravity, a driving force of each joint motor, and a contact force (also referred to as a support force) applied to the legged robot by the plane. Based on the three forces of the gravity, the driving force, and the contact force, and basic information of the legged robot such as a size, a mass, a moment of inertia, and a joint connection mode of each part of a body of the legged robot, the dynamic model corresponding to the legged robot may be correspondingly established. The contact force and the driving force withstood by the legged robots with different poses are different. A contact force between the plane and the legged robot at each time step is determined based on the dynamic model corresponding to the legged robot, so that an actual trajectory of the center of mass of the legged robot is consistent with the first expected moving trajectory.
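As an illustrative sketch only (this is the standard floating-base rigid-body dynamics form, not necessarily the exact model established in this application), the change relationship among the joint motion, the driving torques, gravity, and the contact force may be written as:

$$ M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = S^{\top}\tau + J_c^{\top} f_c $$

where $q$ stacks the base pose and the joint angles, $M$ is the mass-inertia matrix (built from the size, mass, and moment of inertia of each part of the body), $C$ collects Coriolis and centrifugal terms, $g(q)$ is the gravity term, $S$ selects the actuated joints, $\tau$ is the vector of joint motor torques, $J_c$ is the contact Jacobian, and $f_c$ is the contact force applied by the plane.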
In the embodiments of this application, a motor torque provided by each joint motor at each time step may also be determined based on the dynamic model corresponding to the legged robot and the contact force between the legged robot and the plane at each time step, so that the trajectory of the foot end of each of the at least two robotic legs is consistent with the second expected moving trajectory.
In other words, through the dynamic model, the first expected moving trajectory, and the second expected moving trajectory, the contact force between the plane and the legged robot is determined, and the motor torque provided by each joint motor is determined, and the action of each joint is controlled based on the determined contact force and motor torque.
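Under a simplified point-mass view of the dynamic model (an illustrative assumption; the actual model accounts for the full body), the total contact force required for the center of mass to track the expected acceleration can be sketched as:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])   # gravitational acceleration (m/s^2)

def total_contact_force(mass, a_des):
    """Total support force the plane must supply so that the center of
    mass tracks the expected acceleration a_des (point-mass model)."""
    return mass * (a_des - GRAVITY)

def distribute_evenly(f_total, n_contacts):
    """Naive even split among the feet currently in contact; a practical
    controller would instead solve a constrained optimization that also
    respects friction limits and joint torque limits."""
    return f_total / n_contacts
```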
“Consistency” in the embodiments of this application means that during actual real machine testing, the actual trajectory of the center of mass of the legged robot is very close to or even the same as the first expected moving trajectory (a trajectory deviation is less than a trajectory deviation threshold), and the trajectory of the foot end of each of the at least two robotic legs is very close to or even the same as the second expected moving trajectory. Due to the limitation on the performance of the joint motor, the joint motor often cannot output ideal torque. In addition, considering a change in the external environment (for example, sudden occurrence of disturbances such as wind), it is often difficult to control the legged robot to fully follow the first expected moving trajectory and the second expected moving trajectory. Therefore, the consistency described in the embodiments of this application may mean that a difference between the actual trajectory and the expected moving trajectory is less than a difference threshold.
In some embodiments of this application, the contact force required to cause the center of mass of the legged robot to reach the position, the velocity, and the acceleration indicated by the first expected moving trajectory at each time step may be solved correspondingly based on the dynamic model corresponding to the legged robot. The contact force is the support force provided by the plane to the foot end of each robotic leg. Moreover, joint control information required to cause the robotic leg of the legged robot to reach the pose indicated by the second expected moving trajectory at each time step may be solved based on the dynamic model corresponding to the legged robot and the foregoing contact force.
In the embodiments of this application, the joint control information may be either an acceleration of each joint motor or a torque of the joint motor. In an actual physical system, a difference exists between measurement accuracy of the acceleration and the torque of the joint motor. Therefore, in a practical application, a person skilled in the art may select a physical quantity with higher accuracy from the acceleration and the torque of the joint motor for subsequent calculation based on an actual situation.
As described above, landing buffering of the legged robot may be implemented by determining a contact state between the at least two robotic legs and the plane at a current moment. The so-called current moment refers to a most recent system moment as time progresses during landing of the legged robot. For example, the contact state between the at least two robotic legs and the plane at the current moment includes: information such as whether the at least two robotic legs of the legged robot contact the plane, a quantity of contact points between the at least two robotic legs and the plane, and positions of the contact points, to determine the first expected moving trajectory and the second expected moving trajectory of the legged robot.
In some embodiments of this application, the contact state is determined by current state information corresponding to the legged robot at the current moment.
In the embodiments of this application, an IMU sensor in the legged robot may be invoked to determine the current state information of the legged robot. For example, first, acceleration information (which may include accelerations of the legged robot in a plurality of directions (such as a vertical direction and a horizontal direction)) of the legged robot at the current moment and current pose information may be collected by using the IMU sensor, and a joint angle encoder is invoked to determine joint angle information (such as an angle of a joint angle and an angular velocity feedback value) of each joint of the legged robot at the current moment. Next, the current pose information and the joint angle information (such as the angle of the joint angle and the angular velocity feedback value) may be imported into a leg odometry to calculate position information (which may be represented by y1). The position information may include calculated positions of at least two robotic legs of the legged robot at the current moment. In addition, the acceleration information may be inputted into a state space observer, so that the state space observer may output a position observation result (which may be represented by ym) based on the acceleration information and a historically obtained state estimation result of the legged robot at the current moment. The position observation result may include observed positions of the at least two robotic legs of the legged robot at the current moment. The state estimation result of the legged robot at the current moment may be obtained by estimating a state of the legged robot at the current moment when a previous moment of the current moment arrives. The state estimation result of the legged robot at the current moment may be stored in a vector or another data structure, which is not limited. 
Then a state of the legged robot at a next moment of the current moment may be estimated based on the position information and the position observation result.
For example, the position information and the position observation result may also be used as input of an extended Kalman filter (EKF) unit to perform state estimation by using the unit, to obtain the state estimation result of the legged robot at the next moment. The so-called EKF is an extended form of a standard Kalman filter (a Kalman filter for short) in a nonlinear situation. Linearization of a nonlinear function is implemented by performing Taylor expansion on the nonlinear function, omitting a higher-order term, and retaining a first-order term of an expansion term.
In the embodiments of this application, the position information and the position observation result may alternatively be used as input of the Kalman filter unit or input of the state estimation model obtained based on machine learning, to perform state estimation by using the Kalman filter unit or the state estimation model, to obtain the state estimation result of the legged robot at the next moment. The state estimation result of the legged robot at the next moment may be used for both control of the legged robot and input of the state space observer during the next state estimation. In other words, the estimation result obtained through the state estimation may be used for feedback control of the legged robot to form a closed loop.
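A single EKF measurement-update step of the kind described above can be sketched as follows (the function names and shapes are illustrative; a complete estimator also includes a prediction step driven by the IMU acceleration):

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update.

    x, P  : prior state estimate and its covariance
    z     : measurement vector (e.g. the stacked position information y1
            and position observation result ym)
    h     : nonlinear measurement function
    H_jac : Jacobian of h at x (the retained first-order Taylor term)
    R     : measurement noise covariance
    """
    H = H_jac(x)
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

The returned estimate is fed back both to the controller and to the state space observer at the next moment, closing the loop described above.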
Several implementations of determining the contact information based on the current state information of the legged robot are described below.
Because a sudden change to any state value corresponding to the robotic leg may occur when the contact information between the robotic leg and the plane changes, the contact information between the robotic leg and the plane at the current moment may be determined by using the current state value of the robotic leg. Therefore, in the embodiments of this application, the manner of determining the contact information based on the current state information includes: obtaining a historical state value of any robotic leg at a previous moment of a current moment, and determining a current state value of any robotic leg from the current state information, so that it may be determined, based on the historical state value, whether a sudden change to the current state value of any robotic leg occurs.
In the embodiments of this application, the sudden change to the current state value means that a difference between the current state value and the historical state value is greater than a preset difference. Based on this, a difference between the historical state value and any current state value may be calculated. If the calculated difference is greater than the preset difference, it is determined that a sudden change to the current state value occurs. If the calculated difference is not greater than the preset difference, it is determined that no sudden change to the current state value occurs. For example, the historical state value is set to 20, and the preset difference is set to 50. If the current state value is 100, it may be determined that a sudden change to the current state value occurs because 100 minus 20 is equal to 80 and 80 is greater than 50. If the current state value is 30, it may be determined that no sudden change to the current state value occurs because 30 minus 20 is equal to 10 and 10 is less than 50.
If it is determined, based on the historical state value, that a sudden change to the current state value of any robotic leg occurs, and the current state value of the robotic leg is greater than the historical state value, it is determined that the robotic leg contacts the plane at the current moment. If it is determined, based on the historical state value, that no sudden change to the current state value of any robotic leg occurs, the contact information between the robotic leg and the plane at the previous moment is used as the contact information of the current moment. In other words, if any robotic leg contacts the plane at the previous moment, it is determined that the robotic leg also contacts the plane at the current moment. If any robotic leg does not contact the plane at the previous moment, it is determined that the robotic leg does not contact the plane at the current moment.
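The sudden-change test and the contact-state update described above can be sketched as follows. This is a minimal illustration, not the implementation; all names (`has_sudden_change`, `contact_at_current_moment`, `PRESET_DIFF`) are hypothetical, and the treatment of a downward sudden change (returning no contact) is an assumption consistent with the "current value greater than historical value" condition.

```python
PRESET_DIFF = 50  # preset difference from the worked example above (assumed)

def has_sudden_change(current, historical, preset_diff=PRESET_DIFF):
    """A sudden change occurs when the difference exceeds the preset value."""
    return abs(current - historical) > preset_diff

def contact_at_current_moment(current, historical, was_in_contact):
    """Update the contact flag for one robotic leg per the rules above."""
    if has_sudden_change(current, historical):
        # A jump upward (current greater than historical) indicates touchdown;
        # a downward jump is treated here as loss of contact (an assumption).
        return current > historical
    # No sudden change: carry the previous moment's contact state forward.
    return was_in_contact
```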
In some embodiments of this application, the current state information may include a joint motor torque or a current value or a voltage value of the at least two robotic legs.
When a robotic leg of the legged robot does not contact the plane (for example, the robotic leg does not contact the ground) and is suspended in the air, the load borne by the robotic leg is only the mass of the robotic leg itself. Because the mass of a robotic leg is negligible with respect to the overall mass of the legged robot, the load is less than a load threshold, and the feedback current value and the joint motor torque of each joint are less than the corresponding thresholds. When the robotic leg contacts the plane (for example, the robotic leg contacts the ground), the load borne by the robotic leg includes the total weight of the legged robot plus an equivalent inertial force caused by its downward motion under the action of inertia. Therefore, the load is greater than the load threshold, and the feedback current value and the joint motor torque of each joint are greater than the corresponding thresholds. Based on this, when a sudden change in the joint motor torque or the feedback current value from small to large is detected (that is, a variation greater than a variation threshold occurs within a duration threshold), it is determined that the legged robot lands from the air onto the plane (such as the ground).
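The "small to large within a duration threshold" test above can be sketched with a sliding window over recent torque or current samples. This is an illustrative sketch only; the class name, thresholds, and sampling scheme are assumptions, not from the source.

```python
from collections import deque

class TouchdownDetector:
    """Detect a rise in a joint-load signal (torque or feedback current)
    greater than a variation threshold within a duration threshold."""

    def __init__(self, variation_threshold, duration_threshold):
        self.variation_threshold = variation_threshold  # minimum rise to count
        self.duration_threshold = duration_threshold    # window length (s)
        self.samples = deque()                          # (time, value) history

    def update(self, t, value):
        """Feed one sample; return True when the signal jumps from small to large."""
        self.samples.append((t, value))
        # Drop samples older than the duration threshold.
        while self.samples and t - self.samples[0][0] > self.duration_threshold:
            self.samples.popleft()
        oldest = min(v for _, v in self.samples)
        return value - oldest > self.variation_threshold
```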
In some embodiments of this application, the current state information includes a height of the center of mass and a pose of the center of mass of the legged robot, and current joint angle information corresponding to the at least two robotic legs.
In the embodiments of this application, a moment the foot end of the legged robot contacts the plane may be calculated based on the height of the center of mass and the pose of the center of mass of the legged robot detected by an external vision or motion capture system and the joint angle information of the legged robot, to determine whether the corresponding leg contacts the plane at the current moment.
The manner of detecting the contact information between the robotic leg and the plane at the current moment based on the current state information includes: calculating a height of any robotic leg from the plane based on the height of the center of mass, the pose of the center of mass, and the current joint angle information corresponding to any robotic leg; determining that any robotic leg contacts the plane at the current moment if the calculated height is less than or equal to a height threshold (such as a numerical value 0 or 0.005); and determining that any robotic leg does not contact the plane at the current moment if the calculated height is greater than the height threshold.
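The height test above can be sketched for a simplified planar two-link leg, computing the foot-end height by forward kinematics from the center-of-mass height, base pitch, and joint angles. The link lengths, angle conventions, and threshold are assumed example values; a real legged robot would use its full three-dimensional kinematics.

```python
import math

L1, L2 = 0.2, 0.2          # thigh and shank lengths in meters (hypothetical)
HEIGHT_THRESHOLD = 0.005   # meters, as in the example threshold above

def foot_height(com_height, pitch, hip_angle, knee_angle):
    """Height of the foot end above the plane via planar forward kinematics.
    Angles are measured from straight-down; base pitch rotates the whole leg."""
    a1 = pitch + hip_angle
    a2 = a1 + knee_angle
    drop = L1 * math.cos(a1) + L2 * math.cos(a2)  # vertical extent of the leg
    return com_height - drop

def leg_in_contact(com_height, pitch, hip_angle, knee_angle):
    """Contact when the calculated height is at or below the threshold."""
    return foot_height(com_height, pitch, hip_angle, knee_angle) <= HEIGHT_THRESHOLD
```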
In some embodiments of this application, the current state information may include a current plantar tactile feedback value corresponding to the at least two robotic legs, the plantar tactile feedback value being generated by using a plantar tactile sensor of the corresponding robotic leg.
In the embodiments of this application, it may be determined, by using the plantar tactile sensor, whether the corresponding robotic leg contacts the plane at the current moment. In addition, when any plantar tactile sensor detects that the corresponding robotic leg contacts the plane, a first numerical value is generated as the plantar tactile feedback value, and when it is detected that the corresponding leg does not contact the plane, a second numerical value is generated as the plantar tactile feedback value. The first numerical value and the second numerical value herein may be set based on actual needs. For example, the first numerical value is set to a numerical value 1, and the second numerical value is set to a numerical value 0, or the first numerical value is set to the numerical value 0, and the second numerical value is set to the numerical value 1. The manner of detecting the contact information between the robotic leg and the plane at the current moment based on the current state information includes: obtaining the current plantar tactile feedback value corresponding to the robotic leg from the current state information; determining that any robotic leg contacts the plane at the current moment if the obtained current plantar tactile feedback value is the first numerical value; and determining that any robotic leg does not contact the plane at the current moment if the obtained current plantar tactile feedback value is the second numerical value.
In some embodiments of this application, the current state information includes a current acceleration of the legged robot in the vertical direction. At the previous moment of the current moment, a historical acceleration of the legged robot in the vertical direction is known. If it is determined, based on the historical acceleration, that a sudden change to the current acceleration occurs, it is determined that the legged robot has landed.
When the legged robot stably stands on the plane, the acceleration of the legged robot in a z-axis direction collected by the IMU sensor is twice an acceleration of gravity g. When the legged robot is completely weightless in the air, the acceleration of the legged robot in the z-axis direction collected by the IMU sensor is close to 0. In both a process in which the legged robot steps hard on the plane before preparing to lift, and a process in which the legged robot buffers toward the plane after landing, the acceleration of the legged robot in the z-axis direction collected by the IMU sensor is greater than twice the acceleration of gravity g. It may be learned accordingly that at a moment the legged robot lands, a sudden change to the acceleration of the legged robot in the vertical direction occurs.
In the embodiments of this application, a sudden change in the current acceleration means that a difference between the current acceleration and the historical acceleration is greater than a difference threshold. Based on this, the electronic device may calculate the difference between the historical acceleration and the current acceleration. If the calculated difference is greater than the difference threshold, it is determined that a sudden change to the current acceleration occurs. If the calculated difference is not greater than the difference threshold, it is determined that no sudden change to the current acceleration occurs. For example, the historical acceleration is set to 2, and the difference threshold is set to 5. If the current acceleration is 9, then it may be determined that a sudden change to the current acceleration occurs because 9 minus 2 is equal to 7 and 7 is greater than 5. If the current acceleration is 4, then it may be determined that no sudden change to the current acceleration occurs because 4 minus 2 is equal to 2 and 2 is less than 5.
It is to be understood that the foregoing illustrates some implementations of determining the contact information of the robotic leg by using examples, and is not exhaustive. The embodiments of this application are not limited thereto.
Next, the process of determining the first expected moving trajectory of the legged robot is described with reference to the accompanying drawings.
Two curves, a solid line 4-1 and a dashed line 4-2, are shown in the accompanying drawings.
Based on this, to implement the buffer effect of the legged robot during the landing and reduce the body rebound of the legged robot, an optimization objective may be set based on a relationship between the solid line 4-1 and the dashed line 4-2.
In some embodiments of this application, an approximate model corresponding to the legged robot may be used to determine the expected moving trajectory of the center of mass of the legged robot. In the approximate model, the legged robot is approximated as a single rigid body, and a resultant force of the at least two robotic legs forms upward thrust on the single rigid body during the contact between the legged robot and the plane. Further, a support force of the legged robot is determined based on the upward thrust on the single rigid body.
For example, the legged robot may be approximated as a single rigid body having a mass of m. In a case that the legged robot includes four robotic legs, the resultant force of the four robotic legs forms upward thrust u on the single rigid body. Based on such an approximate model and according to Newton's second law, Formula (1) (referred to as a dynamic equation) may be determined.
m{umlaut over (x)}=u+mg (1)
The dynamic equation is written in the form of a state space representation, that is, Formula (2) shown below.
Formula (2) may be abbreviated to the form of Formula (3). In the embodiments of this application, bold is used to represent a vector (matrix).
{dot over (x)}=Acx+Bcu (3)
xi+1={dot over (x)}iΔt+xi=(Acxi+Bcui)Δt+xi=(AcΔt+I)xi+BcΔtui (4)
Let Ad=AcΔt+I, and Bd=BcΔt. Based on model predictive control (MPC), Formula (5) may be obtained from Formula (4).
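As a concrete illustration of the forward-Euler discretization in Formula (4), the following sketch steps the single-rigid-body model of Formula (1) with the state [height, vertical velocity]. The mass, gravity value, and time step are assumed example values, and gravity is treated as a known input alongside the thrust; the names are hypothetical.

```python
M = 10.0   # body mass in kg (hypothetical)
G = -9.81  # gravitational acceleration (z up)
DT = 0.01  # discretization time step, Delta t

def step(x, u):
    """One Euler step x_{i+1} = (A_c*dt + I) x_i + B_c*dt*u_i for Formula (1)."""
    p, v = x
    accel = u / M + G            # from m*x_dd = u + m*g
    return (p + v * DT, v + accel * DT)

def rollout(x0, thrusts):
    """Apply Formula (4) repeatedly to predict the trajectory over the horizon."""
    traj = [x0]
    for u in thrusts:
        traj.append(step(traj[-1], u))
    return traj
```

With the hovering thrust u = -M*G, the vertical velocity stays at zero, which is a quick sanity check on the discretization.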
X=Aqpx0+BqpU (5)
where
A mathematical expression corresponding to each time step is described in Formula (6). Based on this, the optimization objective corresponding to this embodiment of this application may be designed based on the buffer effect expected to be achieved during the falling of the legged robot, to solve for the optimal first expected moving trajectory. For example, the first expected moving trajectory causes a combined value of the following to reach an extreme value: a fluctuation quantity of the center of mass of the legged robot, a total quantity of impact forces withstood by the legged robot, a squatting amount of the legged robot, and a sudden change amount of the impact forces withstood by the legged robot. Each of the foregoing may have a corresponding weight coefficient, and they may be combined in various manners.
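The condensed prediction X = Aqp x0 + Bqp U can be illustrated in scalar form: with x_{i+1} = a·x_i + b·u_i, stacking N steps gives Aqp = [a, a², ..., a^N] and a lower-triangular Bqp with entries a^(i-j)·b. This one-dimensional sketch is an illustration only; the function names are hypothetical.

```python
def condensed_matrices(a, b, n):
    """Scalar analogue of the A_qp / B_qp stacking behind Formula (5)."""
    a_qp = [a ** (i + 1) for i in range(n)]
    b_qp = [[(a ** (i - j)) * b if j <= i else 0.0 for j in range(n)]
            for i in range(n)]
    return a_qp, b_qp

def predict(a_qp, b_qp, x0, u_seq):
    """X = A_qp * x0 + B_qp * U, one entry per future time step."""
    return [a_qp[i] * x0 + sum(b_qp[i][j] * u_seq[j] for j in range(len(u_seq)))
            for i in range(len(a_qp))]
```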
For example, the optimization objective function, namely the Z function, shown in Formula (7) may be set to solve for the optimal thrust U.
The first term ∥AqpX0+BqpU−Xref∥L2 of the Z function may be used as a representation form of the fluctuation quantity of the center of mass of the legged robot, that is, a weighted value of the deviation of the trajectory predicted by the dynamic equation from the reference trajectory Xref (the weight coefficient is L). A smaller value of this term indicates that the trajectory of the center of mass more closely follows the reference trajectory.
The second term ∥U∥K2 of the Z function may be used as a representation form of a total quantity of impact forces withstood by the legged robot, that is, a weighted value of an integral of a sum of reaction forces of the plane withstood by the legged robot over time (the weight coefficient is K). A smaller ∥U∥2 leads to a smaller sum of the impact forces withstood by the legged robot during the falling of the legged robot.
The third term ∥h−x∥Q2 of the Z function represents a weighted value of a distance between a lowest point of the center of mass of the legged robot and the resting height h during the whole falling (the weight coefficient is Q). A smaller ∥h−x∥2 indicates a lower degree of squatting during the falling of the legged robot (to be specific, the legged robot can still maintain balance without squatting too low, that is, below a squat threshold). The third term ∥h−x∥Q2 of the Z function may therefore be used as a representation form of a squatting amount of the legged robot.
The fourth term ∥uk+1−uk∥W2 of the Z function represents a weighted value of a difference in the reaction forces provided by the plane to the legged robot between adjacent time steps (the weight coefficient is W). A smaller ∥uk+1−uk∥2 indicates a smaller sudden change of the impact force withstood by the legged robot during falling of the legged robot. The fourth term ∥uk+1−uk∥W2 of the Z function may be used as a representation form of a sudden change amount of the impact force withstood by the legged robot.
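The four weighted terms of the Z function can be evaluated as sketched below, using scalar weights in place of the weighting matrices L, K, Q, and W for brevity. This is an illustrative cost evaluation under those simplifying assumptions, not the full matrix-weighted form of Formula (7).

```python
def z_cost(x_pred, x_ref, u_seq, h, wl, wk, wq, ww):
    """Scalar-weighted sketch of the four terms of the Z function."""
    # Term 1: fluctuation of the center of mass (tracking the reference).
    tracking = wl * sum((xp - xr) ** 2 for xp, xr in zip(x_pred, x_ref))
    # Term 2: total quantity of impact forces, ||U||^2.
    impact_total = wk * sum(u ** 2 for u in u_seq)
    # Term 3: squatting amount, ||h - x||^2 against the resting height h.
    squat = wq * sum((h - x) ** 2 for x in x_pred)
    # Term 4: sudden change of the impact force between adjacent time steps.
    smooth = ww * sum((u_seq[k + 1] - u_seq[k]) ** 2
                      for k in range(len(u_seq) - 1))
    return tracking + impact_total + squat + smooth
```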
The foregoing is just one possible combination of the Z function. The foregoing terms of the Z function are just exemplary representation forms of the fluctuation quantity of the center of mass of the legged robot, the total quantity of impact forces withstood by the legged robot, the squatting amount of the legged robot, and the sudden change amount of the impact force withstood by the legged robot. The embodiments of this application are not limited thereto.
The importance corresponding to each term is adjusted by using the foregoing weight coefficients in the embodiments of this application. For example, a larger K indicates a higher degree of importance of the impact force withstood by the legged robot in the method for controlling a legged robot provided in the embodiments of this application.
In the embodiments of this application, a plurality of weighting schemes are provided. For example, the weighting scheme may be a multiplicative weighting scheme, and the first term of the Z function may be expressed as (AqpX0+BqpU−Xref)TL(AqpX0+BqpU−Xref).
The weighting scheme may alternatively be a power weighting scheme or an addition scheme. The embodiments of this application are not limited thereto. The remaining terms of the Z function may alternatively be calculated by using different weighting schemes, and so on. The details are not described herein in the embodiments of this application.
The following constraints also need to be considered in the process of solving the Z function.
For example, a first constraint is u0≤uU, where u0 represents an impact force (referred to as an instantaneous impact force) withstood by the legged robot at the first instant (referred to as an instantaneous moment) the legged robot contacts the plane. This constraint indicates that the instantaneous impact force is less than the maximum impact force uU withstandable by the legged robot. The maximum impact force uU withstandable by the legged robot depends on structural characteristics of the legged robot and the strength of the rigid body, and an example value thereof is 200 N. This application is not limited by the example value.
For example, a second constraint is FL≤u≤FU. FL represents a lower limit of the support force that the plane can provide, and FU represents an upper limit of the support force that the plane can provide. FL is usually 0 because the support force cannot be less than 0.
For example, a third constraint indicates that the height of the center of mass of the legged robot in the z-axis direction at each moment is always greater than a minimum height x, where x is a column vector composed of lowest-height sequence values.
In addition, depending on different configurations of the legged robot, more or fewer constraints (relative to the first constraint, the second constraint, and the third constraint) may alternatively be included. The embodiments of this application are not limited thereto.
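The three constraints above can be sketched as a feasibility check on a candidate thrust sequence and the corresponding predicted heights. The limit values below are assumed examples (the 200 N instantaneous limit follows the example in the text; the upper support-force bound and minimum height are hypothetical).

```python
U_MAX_INSTANT = 200.0          # example maximum instantaneous impact force (N)
F_LOWER, F_UPPER = 0.0, 400.0  # support-force bounds; F_UPPER is hypothetical
X_MIN = 0.1                    # minimum center-of-mass height (hypothetical, m)

def feasible(u_seq, x_pred):
    """Check the first, second, and third constraints described above."""
    if u_seq[0] > U_MAX_INSTANT:                        # first constraint
        return False
    if any(u < F_LOWER or u > F_UPPER for u in u_seq):  # second constraint
        return False
    if any(x < X_MIN for x in x_pred):                  # third constraint
        return False
    return True
```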
Mathematical equivalent transformation is performed on Formula (7) to obtain Formula (8).
Mathematical equivalent transformation is performed on Formula (8) to obtain Formula (9).
Mathematical equivalent transformation is performed on Formula (9) to obtain Formula (10).
Mathematical equivalent transformation is performed on Formula (10) to obtain Formula (11).
In other words, the optimization objective may be finally expressed by Formula (13).
Based on the solved U and x that minimize Z, the optimal first expected moving trajectory may be obtained.
In the embodiments of this application, a motion trajectory of the center of mass of the legged robot after landing is planned based on the approximate model (or the full model). Therefore, the impact force withstood by each joint and the body rebound can be reduced during the landing of the legged robot, and a good anti-impact protection effect can be achieved on the legged robot while the landing function is ensured.
Next, the process of determining the second expected moving trajectory of the legged robot is described with reference to the accompanying drawings.
For example, the determining a motion trajectory of the foot end of the remaining robotic leg based on the first expected moving trajectory includes: at the instantaneous moment the single robotic leg falls onto the ground (contacts the plane), determining an initial foot end position of the remaining robotic leg based on the position corresponding to the instantaneous moment in the first expected moving trajectory; determining, based on the first expected moving trajectory, position coordinates of the foot end of the remaining robotic leg at a stable moment, the stable moment being when the pose of the center of mass of the legged robot returns to be parallel to the plane, the four robotic legs completely contact the plane, and the leg lengths of the four robotic legs are equal; and performing interpolation between the initial foot end position corresponding to the instantaneous moment and the position coordinates of the foot end corresponding to the stable moment (for example, by using cubic spline interpolation), to determine the motion trajectory of the foot end of the remaining robotic leg.
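The interpolation step above can be sketched with a cubic segment from the initial foot end position to the stable-moment position. Zero velocity at both endpoints is an assumed boundary condition (a common choice for touchdown trajectories, not stated in the source), and the function name is hypothetical.

```python
def cubic_foot_trajectory(p_start, p_stable, n_steps):
    """Sample a cubic blend between two 3-D foot positions; the cubic
    3s^2 - 2s^3 has zero slope at both ends, so the foot end starts and
    stops smoothly."""
    points = []
    for i in range(n_steps + 1):
        s = i / n_steps
        blend = 3 * s ** 2 - 2 * s ** 3   # smooth-step cubic
        points.append(tuple(a + blend * (b - a)
                            for a, b in zip(p_start, p_stable)))
    return points
```

A full cubic spline through additional waypoints would follow the same idea with one cubic segment per interval.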
In the embodiments of this application, at the instantaneous moment the single robotic leg falls onto the ground, the position coordinates of the foot end of the remaining robotic leg may be correspondingly calculated based on the position and pose of the center of mass of the legged robot, and the position coordinates of the foot end may be used as the initial foot end position at the instantaneous moment of falling onto the ground. The calculation process of the position coordinates of the foot end corresponding to the remaining robotic leg at the stable moment is similar, and the details are not described herein in the embodiments of this application.
For example, refer to a pose 5-1 of the legged robot in the accompanying drawings.
In the embodiments of this application, the electronic device may input sensing information of the legged robot collected at the current moment into a leg odometer, so that the leg odometer calculates positions of at least two robotic legs of the legged robot at the current moment based on the sensing information to obtain position information.
The position information of the position coordinates of the foot end may include at least directional position vectors of the other three robotic legs in a world coordinate system. Different directional position vectors correspond to different coordinate axis directions, and each directional position vector indicates a position of the at least two robotic legs of the legged robot in the corresponding coordinate axis direction. The leg odometer calculates the directional position vector corresponding to the horizontal axis direction in the following manner. First, a rotation matrix may be calculated based on current pose information. The rotation matrix refers to a matrix that maps any vector to the base coordinate system of the robot by changing the direction of the vector. A base pose angle of the legged robot may be determined based on the current pose information, and the rotation matrix may be calculated based on the base pose angle. In addition, a reference position vector may be calculated based on joint angle information of each joint, the reference position vector indicating the relative position between the center of mass of the base of the legged robot and the foot end of each robotic leg. Next, the rotation matrix may be used to map the reference position vector to the base coordinate system of the robot to obtain a target position vector; that is, the rotation matrix may be multiplied by the reference position vector to obtain the target position vector.
In addition, a three-dimensional position vector of the center of mass of the legged robot in the world coordinate system may be first obtained. Then fusion processing is performed on a component of the target position vector in the horizontal axis direction and a component of the three-dimensional position vector in the horizontal axis direction, to obtain the directional position vector corresponding to the horizontal axis direction. The fusion processing herein may include summation processing.
The manner in which the leg odometer calculates the directional position vector corresponding to another coordinate axis (such as a vertical axis or a perpendicular axis) direction is similar to the manner of calculating the directional position vector corresponding to the horizontal axis direction, and details are not described herein again. In addition, the position information may include not only at least two directional position vectors in the world coordinate system, but also other vectors such as a foot end position vector or a foot end velocity vector in the base coordinate system of the robot. The foot end position vector indicates the three-dimensional positions of the foot ends of the at least two robotic legs of the legged robot in the base coordinate system of the robot, and the leg odometer may calculate it by performing inversion processing on the target position vector. The foot end velocity vector indicates the three-dimensional velocities of the foot ends of the at least two robotic legs in the base coordinate system of the robot, and the leg odometer may calculate it by differentiating the target position vector (pf) with respect to time and performing inversion processing on the derivative result.
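The leg-odometer steps above can be sketched as follows: build a rotation matrix from the base pose (yaw only here, for brevity), map the base-to-foot reference vector through it, and fuse the result with the center-of-mass position by summation. All names and the yaw-only simplification are assumptions for illustration; a real leg odometer uses the full roll-pitch-yaw rotation.

```python
import math

def yaw_rotation(yaw):
    """Rotation matrix for a base pose reduced to a yaw angle (assumption)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def foot_world_position(com_position, base_yaw, reference_vector):
    """Map the base-to-foot reference vector with the rotation matrix to get
    the target position vector, then fuse (sum) with the center-of-mass
    position to obtain the directional position in the world frame."""
    target = mat_vec(yaw_rotation(base_yaw), reference_vector)
    return [c + t for c, t in zip(com_position, target)]
```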
Refer to a pose 5-2 of the legged robot in the accompanying drawings.
The legged robot evolves from the pose 5-1 to the pose 5-2 in the accompanying drawings.
In the embodiments of this application, a z-direction sequence of values of the other three robotic legs may be correspondingly solved based on the first expected moving trajectory. After the single robotic leg of the legged robot contacts the plane, the length of each remaining robotic leg varies with the height of the center of mass of the legged robot. Therefore, the z-direction sequence of values of the other three robotic legs may be described as the heights at which the foot ends of the three robotic legs just contact the ground when the center of mass of the legged robot reaches the position indicated by the first expected moving trajectory in the z direction. In addition, the z-direction sequences of values of the other three robotic legs may also be solved by using cubic spline interpolation, and the embodiments of this application are not limited thereto.
Next, the process of controlling the action of each joint after the legged robot contacts the plane is described with reference to the accompanying drawings.
The scheme of controlling the legged robot based on the dynamic equation of the legged robot and the first expected moving trajectory is also referred to as model predictive control (MPC). The scheme of controlling each joint based on the dynamic equation and a second expected moving trajectory is also referred to as whole-body dynamics control (WBC).
In the embodiments of this application, the MPC and the WBC are combined to implement the buffer control during the landing. The process of implementing the buffer control includes: optimizing an output of a controller (that is, a torque of each joint motor) by calculating a trajectory of a future control variable (that is, the first expected moving trajectory and the second expected moving trajectory). The optimization process is performed in a limited time window, and initial system information of the time window is used for optimization. A starting moment of the time window is an instant the legged robot contacts the plane, and an ending moment is a moment the legged robot stands stably.
As an example, the dynamic equation of the legged robot may be expressed by Formula (14).
The first six rows of Formula (14) (shown as Formula (15) below) describe the particle dynamics information of the legged robot.
Mv{umlaut over (p)}+Cv=JvTf (15)
The lower half of Formula (14) (as shown in Formula (16) below) is dynamic information for joints of the legged robot.
Mθ{umlaut over (θ)}+Cθ=τ+JθTf (16)
In the embodiments of this application, Formula (16) may also be written in the form of Formula (17).
τ=−JθTf+Mθ{umlaut over (θ)}+Cθ≈−JθTf+Mθ{umlaut over (θ)} (17)
{umlaut over (x)}d=Jp{umlaut over (p)}+{dot over (J)}p{dot over (p)}+Jθ{umlaut over (θ)}+{dot over (J)}θ{dot over (θ)} (18)
In other words, the motor torque provided by each joint motor at each time step is determined based on the dynamic model corresponding to the legged robot and the contact force between the plane and the legged robot at each time step, to enable a trajectory of an end of each robotic leg away from the base to be consistent with the second expected moving trajectory.
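The torque computation of Formula (17) can be sketched as follows: the joint torque is obtained from the desired contact force through the Jacobian transpose, plus the mass-matrix term, with the Coriolis term neglected as in the approximation above. The problem sizes (a three-joint leg, a 3-D contact force) and the function name are assumptions for illustration.

```python
def joint_torque(jacobian, contact_force, mass_matrix, joint_accel):
    """tau = -J_theta^T f + M_theta * theta_dd  (Formula (17), C_theta dropped).
    jacobian is 3 x n (force space x joint space); mass_matrix is n x n."""
    n = len(jacobian[0])  # number of joints
    # J^T f : project the contact force into joint space.
    jt_f = [sum(jacobian[i][j] * contact_force[i] for i in range(3))
            for j in range(n)]
    # M * theta_dd : inertial torque for the commanded joint accelerations.
    m_th = [sum(mass_matrix[j][k] * joint_accel[k] for k in range(n))
            for j in range(n)]
    return [-a + b for a, b in zip(jt_f, m_th)]
```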
Poses 8-11 to 8-14 in the accompanying drawings illustrate the landing and buffering process of the legged robot.
In the embodiments of this application, a model is established for the legged robot in free-fall motion, the motion trajectory of the center of mass and the position trajectory of the foot end of the legged robot after landing are planned based on the model, and the control torque of each motor is solved based on the planned motion trajectory of the center of mass and the position trajectory of the foot end, to control the legged robot. Therefore, during the landing of the legged robot, the impact force withstood by each joint can be reduced, the body rebound can be reduced, and a good anti-impact protection effect may be achieved on the legged robot while the landing function is ensured.
An embodiment of this application provides a legged robot 900.
The legged robot 900 may include a base 910 and a lower limb portion 920 connected to the base 910. The lower limb portion 920 may include at least two robotic legs (for example, four lower limbs). Each of the robotic legs includes a hip joint and a knee joint. The hip joint includes at least two degrees of freedom, and the knee joint includes at least one degree of freedom (for example, each lower limb may include the hip joint having two degrees of freedom and the knee joint having one degree of freedom).
The lower limb portion refers to a legged component of the legged robot for implementing motion, including, for example, a robotic leg and a motor connecting the robotic leg to the base and configured to implement motion control of the robotic leg. The embodiments of this application are not limited by a specific composition type of the lower limb portion and a quantity of lower limbs of the legged robot.
The base refers to a main body part of the legged robot. For example, the base may be a trunk portion of the legged robot, and the embodiments of this application are not limited by a specific shape and composition of the base.
In some embodiments, the base includes, for example, 2 spinal joints, and the lower limb portion may include, for example, 8 lower limb joints. The embodiments of this application are not limited by the quantity of joints included in the base and the lower limb portion, and are not limited by the configuration of the specific joints of the legged robot either.
The legged robot may further include an electronic device 930. The electronic device 930 is arranged on the legged robot, can perform the method for controlling a legged robot described above, and has the functions described above.
The electronic device 930 includes, for example, a processing apparatus. The processing apparatus may include a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array, a state machine, or another processing device for processing an electrical signal received from a sensor line. The processing device may include a programmable electronic device, for example, a PLC, a programmable interrupt controller (PIC), a programmable logic device (PLD), a programmable read-only memory (PROM), and an electronic programmable read-only memory.
In addition, the legged robot may further include a bus, a memory, a sensor assembly, a communication module, an input/output apparatus, and the like.
The bus may be a circuit that interconnects components of the legged robot and transmits communication information (for example, control messages or data) among the components.
The sensor assembly may be configured to perceive the physical world, including, for example, a camera, an infrared sensor, and an ultrasonic sensor. In addition, the sensor assembly may further include an apparatus for measuring a current operation and motion state of the legged robot, for example, a Hall sensor, a laser position sensor, or a strain force sensor.
The communication module may be connected to a network, for example, in a wired or wireless manner, to facilitate communication with the physical world (for example, a server). The communication module may be wireless and may include a wireless interface, for example, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface, a Bluetooth interface, a wireless local area network (WLAN) transceiver, or a radio interface for accessing a cellular telephone network (for example, a transceiver/an antenna for accessing CDMA, GSM, UMTS, or another mobile communication network). In the embodiments of this application, the communication module may be wired and may include an interface such as an Ethernet interface, a universal serial bus (USB) interface, or an IEEE 1394 interface.
The input/output apparatus may, for example, transmit an instruction or data inputted from a user or any other external device to one or more other components of the legged robot, or may output, to a user or another external device, an instruction or data received from one or more other components of the legged robot.
A plurality of legged robots may constitute a legged robot system to collaboratively complete a task. The plurality of legged robots are communicatively connected to a server, and receive instructions for collaboration of the legged robot from the server.
The following continues to describe the apparatus for controlling a legged robot provided in the embodiments of this application as implemented by exemplary software modules. In some embodiments, the apparatus includes a planning and calculation module 10551.
In the embodiments of this application, the planning and calculation module 10551 is further configured to determine the first expected moving trajectory corresponding to the legged robot based on an approximate model corresponding to the legged robot in response to the legged robot falling to contact the plane, the legged robot being a single rigid body in the approximate model, and a resultant force of the at least two robotic legs forming upward thrust on the single rigid body during the contact between the legged robot and the plane.
In the embodiments of this application, the first expected moving trajectory is used to enable combination values of the following to reach an extreme value: a fluctuation quantity of the center of mass of the legged robot, a total quantity of impact forces withstood by the legged robot, a squatting amount of the legged robot, and a sudden change amount of the impact forces withstood by the legged robot.
In the embodiments of this application, the first expected moving trajectory satisfies the following constraints: a first constraint, used for indicating that an instantaneous impact force is less than a maximum impact force withstandable by the legged robot, the instantaneous impact force referring to the impact force withstood by the legged robot at an instantaneous moment the legged robot contacts the plane; a second constraint, used for indicating that the impact force withstood by the legged robot is less than an upper limit of a support force provided by the plane, and is greater than a lower limit of the support force provided by the plane; and a third constraint, used for indicating that a height of the center of mass of the legged robot is greater than a minimum height.
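The objective and the three constraints above can be sketched as a simple cost and feasibility check. This is a minimal illustration, not the optimization actually solved in this application: the weights `w`, the maximum impact force `f_max`, the support-force bounds `f_lo`/`f_hi`, and the minimum height `h_min` are all hypothetical tuning parameters.

```python
def landing_cost(com_fluctuation, total_impact, squat, impact_jump,
                 w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted combination of the four terms whose extreme value
    (here taken to be a minimum) the first expected moving
    trajectory targets. Weights are illustrative assumptions."""
    return (w[0] * com_fluctuation + w[1] * total_impact
            + w[2] * squat + w[3] * impact_jump)


def satisfies_constraints(instant_impact, impact, com_height,
                          f_max=500.0, f_lo=0.0, f_hi=400.0, h_min=0.1):
    """Check the three constraints for one trajectory sample:
    (1) instantaneous impact below the withstandable maximum,
    (2) impact force within the support-force bounds,
    (3) center-of-mass height above the minimum height."""
    return (instant_impact < f_max
            and f_lo < impact < f_hi
            and com_height > h_min)
```

A trajectory optimizer would evaluate `landing_cost` over candidate center-of-mass trajectories and keep only candidates for which every sample passes `satisfies_constraints`.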
In the embodiments of this application, the planning and calculation module 10551 is further configured to: determine a contact position where a foot end of a single robotic leg contacts the plane at an instantaneous moment the single robotic leg contacts the plane, and use each contact position corresponding to each time step as an expected moving trajectory corresponding to the single robotic leg, each contact position remaining unchanged at each time step; determine a motion trajectory of a foot end of a remaining robotic leg based on the first expected moving trajectory, and use the motion trajectory as an expected moving trajectory corresponding to the remaining robotic leg, the remaining robotic leg referring to the robotic leg other than the single robotic leg in the at least two robotic legs; and determine the expected moving trajectory corresponding to the single robotic leg and the expected moving trajectory corresponding to the remaining robotic leg as the second expected moving trajectory corresponding to the legged robot.
In the embodiments of this application, the planning and calculation module 10551 is further configured to: determine an initial foot end position corresponding to the remaining robotic leg at the instantaneous moment the single robotic leg contacts the plane, based on a position corresponding to the instantaneous moment in the first expected moving trajectory; determine position coordinates of the foot end corresponding to the remaining robotic leg at a stable moment based on the first expected moving trajectory, the stable moment being a moment at which the pose of the center of mass of the legged robot returns to be parallel to the plane, the at least two robotic legs contact the plane, and the leg lengths of the at least two robotic legs are equal; and perform interpolation based on the initial foot end position and the position coordinates of the foot end corresponding to the remaining robotic leg at the stable moment, to obtain the motion trajectory of the foot end of the remaining robotic leg.
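The interpolation step described above might be sketched as follows. Linear interpolation is one simple choice (splines or polynomials could be substituted), and the positions used in the usage note are illustrative, not taken from this application.

```python
import numpy as np


def foot_trajectory(p_init, p_stable, n_steps):
    """Interpolate the foot end of a remaining robotic leg from its
    position at the instant the single leg touches down (p_init) to
    its position at the stable moment (p_stable), over n_steps time
    steps. Returns one foot-end position per time step."""
    p_init = np.asarray(p_init, dtype=float)
    p_stable = np.asarray(p_stable, dtype=float)
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * p_init + a * p_stable for a in alphas]
```

For example, `foot_trajectory([0.0, 0.0, 0.4], [0.1, 0.0, 0.0], 5)` yields five waypoints starting at the initial foot position and ending at the stable-moment position.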
In the embodiments of this application, the control module 10552 is further configured to, by controlling the action of each joint after the legged robot contacts the plane, control the single robotic leg of the legged robot to first contact the plane and maintain the contact position unchanged, and control the remaining robotic leg to contact the plane in sequence and then maintain the contact with the plane until the center of mass of the legged robot reaches an expected resting height.
In the embodiments of this application, the first expected moving trajectory indicates that after the legged robot contacts the plane, the height of the center of mass of the legged robot gradually decreases and then gradually increases.
In the embodiments of this application, the second expected moving trajectory indicates that after a single robotic leg of the legged robot contacts the plane, a length of the remaining robotic leg varies with the height of the center of mass of the legged robot.
In the embodiments of this application, the controlling, based on a dynamic model corresponding to the legged robot, the first expected moving trajectory, and the second expected moving trajectory, an action of each joint after the legged robot contacts the plane includes: determining a contact force between the plane and the legged robot at each time step based on the dynamic model corresponding to the legged robot, the contact force being used for controlling an actual trajectory of the center of mass of the legged robot to be consistent with the first expected moving trajectory; and determining, based on the dynamic model corresponding to the legged robot and each contact force, a motor torque provided by each joint motor at each time step, the motor torque being used for controlling a trajectory of the foot end of each of the at least two robotic legs to be consistent with the second expected moving trajectory.
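The second step above, mapping a desired foot-end contact force to joint motor torques, is commonly done via the leg Jacobian transpose, tau = J^T f. The sketch below uses that generic technique with a hypothetical two-link planar leg; it stands in for, and is not, the specific dynamic model of this application.

```python
import numpy as np


def joint_torques(jacobian, contact_force):
    """Map a desired foot-end contact force to joint motor torques
    using the Jacobian-transpose relation tau = J^T f."""
    return jacobian.T @ contact_force


def planar_leg_jacobian(q1, q2, l1=0.3, l2=0.3):
    """Foot-end Jacobian of a hypothetical 2-link planar leg with
    joint angles (q1, q2) and link lengths (l1, l2)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])
```

At each time step the controller would recompute the Jacobian from the current joint angles and apply `joint_torques` to the contact force planned for that step.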
In the embodiments of this application, the apparatus 1055 for controlling a legged robot further includes a contact determination module 10553, configured to: determine contact information based on current state information of the legged robot, the contact information indicating a contact state between the at least two robotic legs and the plane at a current moment; and determine, based on the contact information, that the legged robot falls to contact the plane.
In the embodiments of this application, the contact determination module 10553 is further configured to: obtain a historical state value of any one of the robotic legs at a previous moment of the current moment; determine a current state value of the robotic leg based on the current state information of the legged robot; determine, based on the current state value and the historical state value, whether a sudden change to the current state value occurs; and determine the contact information corresponding to the robotic leg depending on whether the sudden change to the current state value occurs.
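The sudden-change test described above can be sketched as a simple threshold comparison between the current and historical state values; the threshold is a hypothetical tuning parameter, and a real detector might also filter noise or require the jump to persist for several samples.

```python
def detect_contact(current_value, historical_value, threshold):
    """Flag a sudden change in a per-leg state value (for example, a
    joint motor torque, current, or plantar tactile reading) between
    the previous moment and the current moment. A jump larger than
    the threshold is treated as the leg contacting the plane."""
    return abs(current_value - historical_value) > threshold
```

For instance, a motor torque jumping from 2.0 to 12.0 against a threshold of 5.0 would be reported as contact, while a drift from 2.0 to 2.5 would not.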
In the embodiments of this application, the current state information includes at least one of the following: a joint motor torque, current value, or voltage value of the at least two robotic legs; the height of the center of mass and the pose of the center of mass of the legged robot, and current joint angle information corresponding to the at least two robotic legs; a current plantar tactile feedback value corresponding to the at least two robotic legs; and a current acceleration of the legged robot in a vertical direction.
It may be understood that the embodiments of this application relate to related data such as motion information of the legged robot. User permission or consent needs to be obtained when the embodiments of this application are applied to specific products or technologies, and the collection, use, and processing of related data need to comply with relevant laws, regulations, and standards of relevant countries and regions.
The program part of the technology may be considered as a “product” or “artifact” existing in the form of executable code and/or related data, which is involved in or implemented by using a computer-readable medium. A tangible and permanent storage medium may include an internal memory or a memory used by any computer, processor, or similar device, or related module, for example, various semiconductor memories, tape drives, disk drives, or any similar device capable of providing storage functions for software.
All or a part of the software may sometimes communicate over a network, such as the Internet or another communication network. The software may be loaded from one computer device or processor to another through such communication. Therefore, another medium capable of transferring a software element, for example, a physical connection between local devices, may alternatively be used. For example, a light wave, a radio wave, an electromagnetic wave, and the like may be propagated through cables, optical cables, air, and the like. The physical medium for carrying such waves, for example, a cable, a wireless connection, or an optical cable, may also be considered as a medium that carries the software. Unless the usage herein limits a tangible “storage” medium, any other term that represents a computer- or machine-“readable medium” represents a medium involved during execution of any instruction by a processor.
Specific terms are used in this application to describe the embodiments of this application. For example, “the embodiments of this application” and/or “some embodiments of this application” mean specific features, structures, or characteristics related to at least one embodiment of this application. Therefore, “the embodiments of this application” or “some embodiments of this application” mentioned twice or a plurality of times at different locations in this application does not necessarily refer to the same embodiment. In addition, some features, structures, or characteristics of one or more embodiments of this application may be properly combined.
In addition, it is understood by a person skilled in the art that all aspects of this application may be illustrated and described by using several categories or circumstances, including any new and useful combination of processes, machines, products, or substances, or any new and useful improvement thereof. Accordingly, all aspects of this application may be completely executed by hardware, may be completely executed by software (including firmware, resident software, microcode, and the like), or may be executed by a combination of hardware and software. The foregoing hardware or software may be referred to as “data block”, “module”, “engine”, “unit”, “component”, or “system”. In addition, various aspects of this application may be embodied as a computer product located in one or more computer-readable media, the product including computer-readable program code.
The term “module” in this application refers to a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal and may be all or partially implemented by using software, hardware (e.g., processing circuitry and/or memory configured to perform the predefined functions), or a combination thereof. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by a person of ordinary skill in the art to which this application belongs. It is further to be understood that the terms such as those defined in commonly used dictionaries are to be interpreted as having meanings that are consistent with the meanings in the context of the related art, and are not to be interpreted in an idealized or excessively formalized sense, unless explicitly defined in this way herein.
The foregoing is a description of this application, and is not to be considered as a limitation on this application. Although several exemplary embodiments of this application are described, a person skilled in the art may easily understand that many modifications may be made to the exemplary embodiments without departing from the novel teachings and advantages of this application. Therefore, all such modifications are intended to be included within the scope of this application as defined by the claims. It is to be understood that this application is not limited to the disclosed specific embodiments, and that modifications to the disclosed embodiments and other embodiments are intended to be included within the scope of the appended claims. This application is subject to the claims and equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---|
202210877092.6 | Jul 2022 | CN | national |
This application is a continuation application of PCT Patent Application No. PCT/CN2023/092460, entitled “METHOD, APPARATUS, AND ELECTRONIC DEVICE FOR CONTROLLING LEGGED ROBOT, COMPUTER-READABLE STORAGE MEDIUM, COMPUTER PROGRAM PRODUCT, AND LEGGED ROBOT” filed on May 6, 2023, which is based on Chinese Patent Application No. 202210877092.6, entitled “METHOD, APPARATUS, AND ELECTRONIC DEVICE FOR CONTROLLING LEGGED ROBOT, COMPUTER-READABLE STORAGE MEDIUM, COMPUTER PROGRAM PRODUCT, AND LEGGED ROBOT” filed on Jul. 25, 2022, all of which is incorporated herein by reference in its entirety. This application relates to U.S. patent application Ser. No. ______, entitled “METHOD, APPARATUS, AND DEVICE FOR CONTROLLING LEGGED ROBOT, LEGGED ROBOT, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT” filed on xxx, (Attorney Docket No. 031384-8012-US), which is incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2023/092460 | May 2023 | US |
Child | 18419470 | US |