The present disclosure claims priority to Chinese Patent Application No. 202111426710.7, filed Nov. 27, 2021, which is hereby incorporated by reference herein as if set forth in its entirety.
The present disclosure relates to robot technology, and particularly to a robot control method, a robot, and a computer-readable storage medium.
At present, robots are used in more and more fields and can perform increasingly complex actions such as cleaning tables and cleaning shoes. When making a robot perform a complex action, in order to improve the control accuracy of the robot, the position of the end of the robot and the position of each joint must be considered at the same time. The existing control method generally obtains the trajectory of the end of the robot first and then obtains the angle of each joint corresponding to the trajectory by calculating the inverse solution. However, there is a contradiction between the positional adjustment of the end of the robot and that of each joint of the robot, that is, adjusting the position of a joint will affect the position of the end. In addition, problems such as physical constraints, singular configurations, and multi-solution switching arise when calculating the inverse solution, which affects the accuracy of the positional calculation. To improve the calculation accuracy, the inverse solution calculation can be replaced by optimization problem solving; however, the differential motion model of the robot is a nonlinear motion model, so the optimization becomes a nonlinear optimization problem, which suffers from low efficiency.
To describe the technical schemes in the embodiments of the present disclosure or in the prior art more clearly, the following briefly introduces the drawings required for describing the embodiments or the prior art.
In the following descriptions, for purposes of explanation instead of limitation, specific details such as particular system architecture and technique are set forth in order to provide a thorough understanding of embodiments of the present disclosure. However, it will be apparent to those skilled in the art that the present disclosure may be implemented in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
It is to be understood that, when used in the description and the appended claims of the present disclosure, the terms “including” and “comprising” indicate the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or a plurality of other features, integers, steps, operations, elements, components and/or combinations thereof.
In the existing control method for a robot, the angle of each joint corresponding to the trajectory is generally obtained by calculating the inverse solution after obtaining the trajectory of the end of the robot. However, problems such as physical constraints, singular configurations, and multi-solution switching arise when calculating the inverse solution, which affects the accuracy of the positional calculation. To improve the calculation accuracy, the inverse solution calculation can be replaced by optimization problem solving; however, the differential motion model of the robot is a nonlinear motion model, so the optimization becomes a nonlinear optimization problem, which suffers from low efficiency.
To this end, a control method of a robot is provided in the present disclosure. In the method, a linear motion model of a robot is obtained by linearizing a differential motion model of the robot, thereby transforming the problem solving during the movement of the robot into linear problem solving. After that, a predicted state of the robot corresponding to each moment in a preset time period is determined based on the linear motion model; an expected state of the robot corresponding to each moment in the preset time period is determined based on a reference position of end(s) of the robot, a reference position of joint(s) of the robot, and a preset admittance control equation; a compensation value of a velocity of the joint(s) at each moment from the k-th moment in the preset time period to the k+N−1-th moment in the preset time period is determined based on the predicted state and the expected state corresponding to each moment in the preset time period; instruction parameter(s) at the k-th moment are determined based on the compensation value of the velocity of the joint(s) at the k-th moment; and a position of each of the joint(s) of the robot is adjusted according to the instruction parameter(s) at the k-th moment. In this way, the problem solving during the movement of the robot is transformed into linear optimization problem solving, thereby improving the computational accuracy and the computational efficiency.
S101: obtaining a linear motion model of the robot by linearizing a differential motion model of the robot.
In this embodiment, the robot may be a robotic arm including a series of joints and an end. For the same robotic arm, there is a correspondence $\dot{x} = J(\theta)\dot{\theta}$ between the velocity of the end and the velocities of the joints, where $J(\theta)$ is the Jacobian matrix established according to this correspondence, $\dot{x}$ is the velocity of the end, $\theta$ denotes the positions of the joints, which may be obtained through position sensors (e.g., encoders) disposed at the joints, and $\dot{\theta}$ denotes the velocities of the joints of the robot. The state of the robot is defined as

$$X = \begin{bmatrix} x \\ \theta \end{bmatrix},$$

where $x$ represents the position of the end of the robot, which may be obtained through a position sensor (e.g., an encoder) disposed at the end. The differential motion model of the robot is then

$$\dot{X} = \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} J(\theta)\dot{\theta} \\ \dot{\theta} \end{bmatrix},$$

where $\dot{X}$ is the first derivative of the state $X$ of the robot.
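Purely as an illustration of the differential motion model above (the two-link planar arm geometry, the link lengths, and the function names below are assumptions of this sketch and are not taken from the present disclosure), the mapping $F(X, u)$ from the state and the joint velocities to $\dot{X}$ could be coded as:

```python
import numpy as np

def jacobian_2link(theta, l1=1.0, l2=1.0):
    """Jacobian J(theta) of an assumed planar two-link arm (illustrative geometry only)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t12), -l2 * np.sin(t12)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t12),  l2 * np.cos(t12)]])

def F(X, u):
    """Differential motion model: with X = [x; theta] and u = theta_dot,
    X_dot = [J(theta) @ u; u]."""
    theta = X[2:]  # the last entries of the state are the joint positions
    return np.concatenate([jacobian_2link(theta) @ u, u])
```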
Define $\dot{X} = F(X, u)$, where $u = \dot{\theta}$ is the velocity of the joints of the robot. Take the state of the robot at the last moment as the reference state $X_r$ at the current moment, and take the velocity of the joints at the last moment as the reference velocity $u_r$ of the joints at the current moment, so that $\dot{X}_r = F(X_r, u_r)$. By performing a Taylor expansion of $\dot{X} = F(X, u)$ at $(X_r, u_r)$ and ignoring the higher-order terms, it can be obtained that:

$$\dot{X} \approx F(X_r, u_r) + \frac{\partial F}{\partial X}\bigg|_{(X_r, u_r)} (X - X_r) + \frac{\partial F}{\partial u}\bigg|_{(X_r, u_r)} (u - u_r).$$

Let

$$A(k) = \frac{\partial F}{\partial X}\bigg|_{(X_r, u_r)}, \quad B(k) = \frac{\partial F}{\partial u}\bigg|_{(X_r, u_r)},$$

then $\dot{X} = F(X_r, u_r) + A(k)(X - X_r) + B(k)(u - u_r)$, and further $\dot{X} - \dot{X}_r = A(k)(X - X_r) + B(k)(u - u_r)$.
Let $\dot{X} - \dot{X}_r = \tilde{\dot{X}}$, $X - X_r = \tilde{X}$, and $u - u_r = \tilde{u}$, then $\tilde{\dot{X}} = A(k)\tilde{X} + B(k)\tilde{u}$. In which, $\tilde{\dot{X}}$ represents the difference between the first derivative of the state of the robot and the first derivative of the reference state, and $\tilde{X}$ represents the difference between the state of the robot and the reference state. Since the reference state is the state at the previous moment, $\tilde{X}$ also represents the state change amount of the robot, and $\tilde{u}$ represents the difference between the velocity of the joints and the reference velocity of the joints.
By discretizing the foregoing equation, it can be obtained that $\tilde{X}(k+1) = (T_p A_k + I)\tilde{X}(k) + T_p B_k \tilde{u}(k)$. In which, $\tilde{X}(k+1)$ represents the state change amount at the k+1-th moment, $T_p$ represents the step size of prediction, $A_k$ and $B_k$ are both coefficient matrices, $I$ is the identity matrix, $\tilde{X}(k)$ represents the state change amount at the k-th moment, and $\tilde{u}(k)$ represents the difference between the velocity of the joints at the k-th moment and the reference velocity of the joints at the k-th moment, which is also the input variable at the k-th moment.
Let $T_p A_k + I = A_m(k)$ and $T_p B_k = B_m(k)$, then $\tilde{X}(k+1) = A_m(k)\tilde{X}(k) + B_m(k)\tilde{u}(k)$, which is the linear motion model. The linear motion model represents a linear relationship among the state change amount at the k+1-th moment, the state change amount at the k-th moment, and the input variable at the k-th moment.
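As a minimal sketch of how $A(k)$, $B(k)$ and the discrete coefficients $A_m(k)$, $B_m(k)$ could be obtained in practice (the finite-difference Jacobians and the function names are assumptions for illustration; the partial derivatives may equally be derived analytically from $J(\theta)$):

```python
import numpy as np

def numerical_jacobians(F, X_r, u_r, eps=1e-6):
    """Finite-difference approximation of A(k) = dF/dX and B(k) = dF/du at (X_r, u_r)."""
    f0 = F(X_r, u_r)
    A = np.zeros((f0.size, X_r.size))
    B = np.zeros((f0.size, u_r.size))
    for i in range(X_r.size):
        dX = np.zeros(X_r.size); dX[i] = eps
        A[:, i] = (F(X_r + dX, u_r) - f0) / eps
    for j in range(u_r.size):
        du = np.zeros(u_r.size); du[j] = eps
        B[:, j] = (F(X_r, u_r + du) - f0) / eps
    return A, B

def discretize(A_k, B_k, T_p):
    """Linear motion model coefficients: A_m(k) = T_p * A_k + I, B_m(k) = T_p * B_k."""
    return T_p * A_k + np.eye(A_k.shape[0]), T_p * B_k
```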
S102: determining a predicted state of the robot corresponding to each moment in a preset time period based on the linear motion model, where the preset time period is from the k+1-th moment to the k+N-th moment, and k and N are positive integers.

In this embodiment, the current moment is the k-th moment. Since the state at a moment may be predicted based on the state at the previous moment, the predicted state corresponding to each moment may be obtained by predicting the states from the k+1-th moment to the k+N-th moment based on the linear motion model.
For example, if the linear motion model is $\tilde{X}(k+1) = A_m(k)\tilde{X}(k) + B_m(k)\tilde{u}(k)$, the state change amount at the k+2-th moment can be obtained by analogy with the expression of the state change amount at the k+1-th moment:

$$\tilde{X}(k+2) = A_m(k+1)\tilde{X}(k+1) + B_m(k+1)\tilde{u}(k+1) = A_m(k+1)A_m(k)\tilde{X}(k) + A_m(k+1)B_m(k)\tilde{u}(k) + B_m(k+1)\tilde{u}(k+1),$$

where $\tilde{X}(k+2)$ represents the state change amount at the k+2-th moment, and $A_m(k+1)$ and $B_m(k+1)$ represent the coefficients corresponding to the k+2-th moment that are obtained by analogy. $\tilde{u}(k+1)$ represents the difference between the velocity of the joints at the k+1-th moment and the reference velocity of the joints at the k+1-th moment. The reference velocity of the joints at the k+1-th moment is the velocity of the joints at the k-th moment, so that $\tilde{u}(k+1) = u(k+1) - u(k)$ and $\tilde{u}(k) = u(k) - u(k-1)$, where $u(k-1)$ is the velocity of the joints at the k−1-th moment, $u(k)$ represents the velocity of the joints at the k-th moment, and $u(k+1)$ represents the velocity of the joints at the k+1-th moment.
Then, it can be obtained by analogy that:

$$\tilde{X}(k+N) = A_m(k+N-1)\cdots A_m(k)\,\tilde{X}(k) + \sum_{j=0}^{N-1} A_m(k+N-1)\cdots A_m(k+j+1)\,B_m(k+j)\,\tilde{u}(k+j),$$

where the matrix product multiplying $B_m(k+j)$ is taken to be the identity when $j = N-1$, $\tilde{X}(k+N)$ represents the state change amount at the k+N-th moment, $\tilde{u}(k+N-1)$ represents the difference between the velocity of the joints at the k+N−1-th moment and the reference velocity of the joints at the k+N−1-th moment, and $A_m(k+N-1)$ and $B_m(k+N-1)$ represent the coefficients corresponding to the k+N-th moment that are obtained by analogy.
Let the coefficients corresponding to the state change amounts at the k+1-th moment, the k+2-th moment, . . . , and the k+N-th moment all be equal to the coefficients at the k-th moment, denoted as $A_{m,k}$ and $B_{m,k}$; then there is an equation of:

$$\begin{bmatrix} \tilde{X}(k+1) \\ \tilde{X}(k+2) \\ \vdots \\ \tilde{X}(k+N) \end{bmatrix} = \begin{bmatrix} A_{m,k} \\ A_{m,k}^{2} \\ \vdots \\ A_{m,k}^{N} \end{bmatrix} \tilde{X}(k) + \begin{bmatrix} B_{m,k} & 0 & \cdots & 0 \\ A_{m,k}B_{m,k} & B_{m,k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{m,k}^{N-1}B_{m,k} & A_{m,k}^{N-2}B_{m,k} & \cdots & B_{m,k} \end{bmatrix} \begin{bmatrix} \tilde{u}(k) \\ \tilde{u}(k+1) \\ \vdots \\ \tilde{u}(k+N-1) \end{bmatrix}.$$

Let

$$X_{e\_expand} = \begin{bmatrix} \tilde{X}(k+1) \\ \tilde{X}(k+2) \\ \vdots \\ \tilde{X}(k+N) \end{bmatrix}, \quad u_{e\_expand} = \begin{bmatrix} \tilde{u}(k) \\ \tilde{u}(k+1) \\ \vdots \\ \tilde{u}(k+N-1) \end{bmatrix},$$

denote the two block coefficient matrices above as $A_{m\_expand}$ and $B_{m\_expand}$, respectively, and let

$\tilde{X}(k) = X_e(k)$,

then the above-mentioned equation may be expressed as $X_{e\_expand} = A_{m\_expand} \cdot X_e(k) + B_{m\_expand} \cdot u_{e\_expand}$. Therefore, $X_{e\_expand}$ represents the state change amount corresponding to each moment from the k+1-th moment to the k+N-th moment.
Then, since the reference state at each moment is the state at the previous moment, the state at the k+i-th moment satisfies $X(k+i) = X(k) + \sum_{j=1}^{i} \tilde{X}(k+j)$, that is:

$$\begin{bmatrix} X(k+1) \\ X(k+2) \\ \vdots \\ X(k+N) \end{bmatrix} = \begin{bmatrix} I & 0 & \cdots & 0 \\ I & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ I & I & \cdots & I \end{bmatrix} \begin{bmatrix} \tilde{X}(k+1) \\ \tilde{X}(k+2) \\ \vdots \\ \tilde{X}(k+N) \end{bmatrix} + \begin{bmatrix} X(k) \\ X(k) \\ \vdots \\ X(k) \end{bmatrix}.$$

Let $X_{expand}$ denote the stacked vector of the states $X(k+1), X(k+2), \ldots, X(k+N)$ and $C_{m\_expand}$ denote the lower block-triangular matrix of identity matrices above, then the above-mentioned matrix equation may be expressed as $X_{expand} = C_{m\_expand} \cdot X_{e\_expand} + X(k)$, where $X(k)$ on the right-hand side denotes the current state stacked $N$ times. Correspondingly, the predicted state corresponding to each moment from the k+1-th moment to the k+N-th moment may be represented as $X_{expand}$, where the k+1-th moment to the k+N-th moment is also called the prediction time domain.
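Assuming the stacked prediction matrices take the standard model-predictive-control form sketched above (the helper names and the constant-coefficient simplification are assumptions of this illustration), the prediction over the horizon could be computed as follows:

```python
import numpy as np

def build_prediction_matrices(A_m, B_m, N):
    """Stack A_m_expand, B_m_expand, and C_m_expand for an N-step horizon,
    using constant coefficients A_m, B_m over the horizon."""
    n, m = B_m.shape
    A_expand = np.vstack([np.linalg.matrix_power(A_m, i + 1) for i in range(N)])
    B_expand = np.zeros((N * n, N * m))
    for i in range(N):            # block row: moment k+i+1
        for j in range(i + 1):    # block column: input at moment k+j
            B_expand[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A_m, i - j) @ B_m
    C_expand = np.kron(np.tril(np.ones((N, N))), np.eye(n))  # cumulative-sum identity blocks
    return A_expand, B_expand, C_expand

def predict_states(A_expand, B_expand, C_expand, X_e_k, u_e_expand, X_k):
    """X_expand = C_m_expand * (A_m_expand * X_e(k) + B_m_expand * u_e_expand) + stacked X(k)."""
    X_e_expand = A_expand @ X_e_k + B_expand @ u_e_expand
    N = C_expand.shape[0] // X_k.size
    return C_expand @ X_e_expand + np.tile(X_k, N)
```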
S103: determining an expected state of the robot corresponding to each moment in the preset time period based on a reference position of end(s) of the robot, a reference position of joint(s) of the robot, and a preset admittance control equation.
In this embodiment, the reference position of the end of the robot and that of the joint are substituted into the preset admittance control equation to obtain the expected state at the k-th moment. Then the expected states corresponding to the k+1-th moment to the k+N-th moment, that is, the expected state corresponding to each moment in the preset time period, may be obtained based on the expected state at the k-th moment and the motion law of the robot.
S201: determining an expected position of the end(s) of the robot corresponding to each moment in the preset time period based on the reference position of the end(s) and a first admittance control equation.
In one embodiment, the first admittance control equation is a Cartesian space-based admittance control equation. For example, the first admittance control equation may be $M(\ddot{x}_c - \ddot{x}_r) + B(\dot{x}_c - \dot{x}_r) + K(x_c - x_r) = F$, where $M$ represents the inertia matrix, $B$ represents the damping matrix, and $K$ represents the stiffness matrix. The inertia matrix, the damping matrix, and the stiffness matrix are all adjustable parameters, which can be determined according to the desired interaction characteristics. $x_r$ represents the reference position of the end at the k-th moment, $x_c$ represents the expected position of the end at the k-th moment, $\dot{x}_c$ represents the first derivative of the expected position of the end at the k-th moment with respect to time, $\dot{x}_r$ represents the first derivative of the reference position of the end at the k-th moment with respect to time, $\ddot{x}_c$ is the second derivative of the expected position of the end at the k-th moment with respect to time, $\ddot{x}_r$ is the second derivative of the reference position of the end at the k-th moment with respect to time, and $F$ is the force acting on the end. By inputting the inertia matrix, the damping matrix, the stiffness matrix, the reference position of the end corresponding to the k-th moment, and the force acting on the end into the first admittance control equation, the expected position of the end at the k-th moment can be obtained.
After determining the expected position of the end at the k-th moment, the expected position of the end at the k-th moment is extended to obtain the expected positions of the end from the k+1-th moment to the k+N-th moment. For example, according to the formula $x_c(k+i) = x_{current} + \dot{x}_c \cdot T_{pre} \cdot i$, the expected positions of the end from the k+1-th moment to the k+N-th moment may be calculated, where $i = 1, 2, 3, \ldots, N$, $x_c(k+i)$ represents the expected position of the end at the k+i-th moment, $x_{current}$ represents the expected position of the end at the k-th moment, $\dot{x}_c$ represents the first derivative of the expected position of the end at the k-th moment with respect to time, that is, the expected velocity of the end at the k-th moment, and $T_{pre}$ represents the step size of prediction.
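Purely as an illustration of how such an admittance update could be computed (the function and argument names, and the semi-implicit Euler integration, are assumptions not specified in the disclosure), the following sketch solves the Cartesian admittance equation for the expected acceleration and then extrapolates the expected end positions over the horizon; the joint space-based admittance equation of step S202 below can be handled in the same way with $q$ and $\tau$ in place of $x$ and $F$:

```python
import numpy as np

def cartesian_admittance_step(M, B, K, x_r, xd_r, xdd_r, x_c, xd_c, F, dt):
    """One step of M(xdd_c - xdd_r) + B(xd_c - xd_r) + K(x_c - x_r) = F,
    solved for the expected acceleration xdd_c and integrated with step dt."""
    xdd_c = xdd_r + np.linalg.solve(M, F - B @ (xd_c - xd_r) - K @ (x_c - x_r))
    xd_c_new = xd_c + xdd_c * dt
    x_c_new = x_c + xd_c_new * dt
    return x_c_new, xd_c_new

def extrapolate_expected(x_c_k, xd_c_k, T_pre, N):
    """Expected positions over the horizon: x_c(k+i) = x_c(k) + xd_c(k) * T_pre * i."""
    return [x_c_k + xd_c_k * T_pre * i for i in range(1, N + 1)]
```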
S202: determining an expected position of the joint(s) of the robot corresponding to each moment in the preset time period based on the reference position of the joint(s) and a second admittance control equation.
In one embodiment, the second admittance control equation is a joint space-based admittance control equation. For example, the second admittance control equation may be $M(\ddot{q}_c - \ddot{q}_r) + B(\dot{q}_c - \dot{q}_r) + K(q_c - q_r) = \tau$. In which, $M$ represents the inertia matrix, $B$ represents the damping matrix, and $K$ represents the stiffness matrix. The inertia matrix, the damping matrix, and the stiffness matrix in the second admittance control equation are all adjustable parameters, which are different from the inertia matrix, the damping matrix, and the stiffness matrix in the first admittance control equation. $q_c$ represents the expected position of the joint at the k-th moment, $q_r$ represents the reference position of the joint at the k-th moment, $\dot{q}_c$ represents the first derivative of the expected position of the joint at the k-th moment with respect to time, $\dot{q}_r$ represents the first derivative of the reference position of the joint at the k-th moment with respect to time, $\ddot{q}_c$ is the second derivative of the expected position of the joint at the k-th moment with respect to time, $\ddot{q}_r$ is the second derivative of the reference position of the joint at the k-th moment with respect to time, and $\tau$ is the force acting on the joint. By inputting the inertia matrix, the damping matrix, the stiffness matrix, the reference position of the joint at the k-th moment, and the force acting on the joint into the second admittance control equation, the expected position of the joint at the k-th moment can be obtained.
After determining the expected position of the joint at the k-th moment, the expected position of the joint at the k-th moment may be extended to obtain the expected positions of the joint from the k+1-th moment to the k+N-th moment. For example, according to the formula $q_c(k+i) = q_{current} + \dot{q}_c \cdot T_{pre} \cdot i$, the expected positions of the joint from the k+1-th moment to the k+N-th moment may be calculated, where $q_c(k+i)$ represents the expected position of the joint at the k+i-th moment, $q_{current}$ represents the expected position of the joint at the k-th moment, and $\dot{q}_c$ represents the first derivative of the expected position of the joint at the k-th moment with respect to time, that is, the expected velocity of the joint at the k-th moment.
By determining the expected state through the Cartesian space-based admittance control equation and the joint space-based admittance control equation, the cooperative admittance of the Cartesian space and the joint space can be realized, thereby achieving the smooth interaction in the joint space without affecting the trajectory tracking and smooth interaction in the Cartesian space.
S203: determining the expected state corresponding to each moment in the preset time period based on the expected position of the end(s) corresponding to each moment in the preset time period and the expected position of the joint(s) corresponding to each moment in the preset time period.
In this embodiment, after obtaining the expected position of the end at the k-th moment and that of the joint at the k-th moment, the expected state at the k-th moment may be obtained as

$$X_d(k) = \begin{bmatrix} x_c(k) \\ q_c(k) \end{bmatrix}.$$

Correspondingly, the expected states from the k+1-th moment to the k+N-th moment may be expressed as $X_d(k+1), X_d(k+2), \ldots, X_d(k+N)$, that is

$$X_d(k+i) = \begin{bmatrix} x_c(k+i) \\ q_c(k+i) \end{bmatrix},$$

where $X_d(k+i)$ represents the expected state at the k+i-th moment, $i = 1, 2, 3, \ldots, N$. Let

$$X_{d\_expand} = \begin{bmatrix} X_d(k+1) \\ X_d(k+2) \\ \vdots \\ X_d(k+N) \end{bmatrix},$$

where $X_{d\_expand}$ represents the expected state corresponding to each moment in the preset time period.
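For illustration only (the stacking order of the end and joint components is assumed to match the state definition $X = [x; \theta]$, and the function name is hypothetical), the expected states could be stacked as:

```python
import numpy as np

def stack_expected_states(x_c_list, q_c_list):
    """X_d_expand: expected states X_d(k+i) = [x_c(k+i); q_c(k+i)] stacked for i = 1..N."""
    return np.concatenate([np.concatenate([x_c, q_c])
                           for x_c, q_c in zip(x_c_list, q_c_list)])
```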
In the foregoing embodiment, by respectively determining the expected position of the end and that of the joint corresponding to each moment in the preset time period, the expected state corresponding to each moment in the preset time period can be determined from both, thereby realizing the coordinated control of the end and the joints of the robot, which alleviates the problem of mutual restriction between the adjustment of the joints and that of the end.
S104: determining a compensation value of a velocity of the joint(s) at each moment from the k-th moment in the preset time period to the k+N−1-th moment in the preset time period based on the predicted state corresponding to each moment in the preset time period and the expected state corresponding to each moment in the preset time period, determining instruction parameter(s) at the k-th moment based on the compensation value of the velocity of the joint(s) at the k-th moment, and adjusting a position of each of the joint(s) of the robot according to the instruction parameter(s) at the k-th moment.
In which, the process of solving the instruction parameter(s) according to the predicted state and the expected state is a process of optimization problem solving. The instruction parameter(s) may be the instruction parameter(s) of the joints, or the instruction parameter(s) of the end. The instruction parameter(s) may be parameters such as velocity, acceleration, and position. The instruction parameter(s) at the k-th moment are input into the lower-level controller of the robot, so that the lower-level controller can adjust the position of the joint(s) of the robot according to the instruction parameter(s).
S401: determining objective function(s) and constraint condition(s) based on the predicted state corresponding to each moment in the preset time period and the expected state corresponding to each moment in the preset time period.
In one embodiment, the optimization problem solving of the instruction parameter(s) at the k-th moment may be performed using the model predictive control (MPC) algorithm, and the process of the optimization problem solving is the process of solving the maximum or minimum value of the objective function(s) under the condition that the constraint condition(s) are satisfied. For example, the objective function(s) need to ensure that the predicted state tracks the expected state and to avoid sudden changes in the joint velocity input of the robot system. Therefore, it is necessary to minimize the difference between the expected state and the predicted state as well as the difference between the input velocity of the joints and the reference velocity of the joints.
In one embodiment, a weight coefficient may be determined first based on a position tracking priority of the joint(s) and a position tracking priority of the end(s), and the objective function(s) may be determined based on the weight coefficient, the predicted state corresponding to each moment in the preset time period, and the expected state corresponding to each moment in the preset time period. In this embodiment, the weight coefficient $\chi$ is in the form of

$$\chi = \begin{bmatrix} \chi_x & 0 \\ 0 & \chi_q \end{bmatrix},$$

where $\chi_x$ is the control weight of the position tracking of the end, and $\chi_q$ is the control weight of the position tracking of the joint. When $\chi_x > \chi_q$, the high priority of the task of the position tracking of the end can be ensured. When $\chi_x < \chi_q$, the task of the position tracking of the joint is mapped to the null space of the trajectory tracking of the end, which ensures the high priority of the task of the position tracking of the joint, that is, the smoothness of the joint will not affect the trajectory tracking of the end and the smoothness of the end at all. The priority of the position tracking of the joint and that of the position tracking of the end are determined according to the task executed by the robot, and then the values of $\chi_x$ and $\chi_q$ in the weight coefficient are determined. In the objective function, $\chi_x$ and $\chi_q$ in the weight coefficient are multiplied by the parameter corresponding to the end and that corresponding to the joint, respectively, thereby realizing the adjustment of the priority of the position tracking of the joint and that of the end, which further achieves flexible control of the robot.
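As a hedged illustration (the block-diagonal extension of the weight over the horizon, the dimensions, and the numerical values are assumptions of this sketch), a per-moment weight of the form above could be expanded to the whole prediction horizon as follows:

```python
import numpy as np

def build_weight_matrix(chi_x, chi_q, dim_x, dim_q, N):
    """Per-moment weight chi = diag(chi_x * I, chi_q * I), repeated over the N-step horizon."""
    chi_block = np.diag([chi_x] * dim_x + [chi_q] * dim_q)
    return np.kron(np.eye(N), chi_block)

# Example: favor position tracking of the end over that of the joints (chi_x > chi_q).
W = build_weight_matrix(chi_x=10.0, chi_q=1.0, dim_x=6, dim_q=7, N=10)
```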
In one embodiment, the constraint condition(s) may include a range threshold of the predicted state corresponding to each moment in the preset time period, and a range threshold of the compensation value of the velocity of the joint(s) at each moment from the k-th moment to the k+N−1-th moment. In which, the compensation value of the velocity of the joint(s) at the k-th moment refers to the difference $\tilde{u}(k)$ between the velocity of the joints at the k-th moment and the reference velocity of the joints at the k-th moment, and the compensation values of the velocity of the joints at each moment from the k-th moment to the k+N−1-th moment are collected in $u_{e\_expand}$. By setting the range threshold of the predicted state and that of the compensation value of the velocity of the joints, the rationality of the obtained instruction parameter(s) can be improved, thereby avoiding the instability of the robot caused by sudden changes of the instruction parameter(s).
For example, the objective function and the constraint may be set as:
where $O$ represents the zero matrix, $\ddot{\theta}_{max}$ represents the limit of the acceleration of the joints, $T$ represents the integration time length, and $\theta$ represents the upper limit of the position of the joints.
S402: determining the compensation value of the velocity of the joint(s) at each moment from the k-th moment in the preset time period to the k+N−1-th moment in the preset time period based on the objective function(s) and the constraint condition(s).
In this embodiment, in the case of satisfying the constraint condition(s), the minimum value of the objective function(s) is solved. When the objective function(s) reach the minimum value, the obtained $u_{e\_expand}$ is the output value, that is, the compensation value of the velocity of the joints at each moment from the k-th moment to the k+N−1-th moment.
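The following is a minimal, assumption-laden sketch of such an optimization (a generic weighted quadratic cost, box constraints on the compensation values only, and an off-the-shelf solver); it is not the exact objective function and constraint of the disclosure, which further include, for example, the range threshold of the predicted state and the acceleration and position limits of the joints described above:

```python
import numpy as np
from scipy.optimize import minimize

def solve_joint_velocity_compensation(A_exp, B_exp, C_exp, X_e_k, X_k,
                                      X_d_expand, W, R, u_tilde_max):
    """Minimize the weighted tracking error of the predicted states plus a penalty on
    the joint-velocity compensations, subject to box constraints on the compensations."""
    n_u = B_exp.shape[1]                              # total input dimension over the horizon
    X_k_stack = np.tile(X_k, C_exp.shape[0] // X_k.size)

    def cost(u_e):
        X_expand = C_exp @ (A_exp @ X_e_k + B_exp @ u_e) + X_k_stack
        err = X_d_expand - X_expand
        return err @ W @ err + u_e @ R @ u_e

    bounds = [(-u_tilde_max, u_tilde_max)] * n_u
    res = minimize(cost, np.zeros(n_u), bounds=bounds, method="L-BFGS-B")
    return res.x                                      # stacked compensations u_e_expand
```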
In one embodiment, after determining the compensation value $u_{e\_expand}$ of the velocity of the joints at each moment from the k-th moment to the k+N−1-th moment, the first block $\tilde{u}(k)$ in $u_{e\_expand}$ is the compensation value of the velocity of the joints at the k-th moment. The velocity of the joints at the k-th moment may be obtained based on the formula $u(k) = u(k-1) + \tilde{u}(k)$, and the velocity of the joints at the k-th moment may be used as the instruction parameter at the k-th moment, where $u(k-1)$ represents the velocity of the joints at the k−1-th moment. In addition, after obtaining the velocity of the joints at the k-th moment, the instruction position of the joints at the k-th moment may also be calculated based on the formula $q_i = q_{i\_last} + u(k) \cdot T_{ctrl}$ to be used as the instruction parameter at the k-th moment, where $q_i$ represents the instruction position of the joints at the k-th moment, $q_{i\_last}$ represents the instruction position of the joints at the k−1-th moment, and $T_{ctrl}$ represents the control cycle.
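For illustration, the two update formulas above may be implemented directly; the function and argument names below, as well as taking the first block of $u_{e\_expand}$ by slicing, are assumptions of this sketch:

```python
def instruction_parameters(u_e_expand, u_prev, q_prev, T_ctrl, num_joints):
    """Instruction velocity u(k) = u(k-1) + u_tilde(k) and instruction position
    q_i = q_i_last + u(k) * T_ctrl for the k-th moment."""
    u_tilde_k = u_e_expand[:num_joints]   # first block: compensation at the k-th moment
    u_k = u_prev + u_tilde_k
    q_i = q_prev + u_k * T_ctrl
    return u_k, q_i
```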
By determining the objective function(s) and the constraint condition(s), determining the compensation value of the velocity of the joint(s) at the k-th moment according to the objective function(s) and the constraint condition(s), and determining the instruction parameter(s) at the k-th moment according to the compensation value of the velocity of the joint(s) at the k-th moment, the obtained instruction parameter(s) can accommodate the adjustment of the joints and that of the end simultaneously, thereby achieving the coordinated control of the end and the joints.
In this embodiment, a linear motion model of a robot is obtained by linearizing a differential motion model of the robot, thereby transforming the problem solving during the movement of the robot into linear problem solving. After that, a predicted state of the robot corresponding to each moment in a preset time period is determined based on the linear motion model, where the preset time period is from the k+1-th moment to the k+N-th moment, and k and N are positive integers; an expected state of the robot corresponding to each moment in the preset time period is determined based on a reference position of end(s) of the robot, a reference position of joint(s) of the robot, and a preset admittance control equation; a compensation value of a velocity of the joint(s) at each moment from the k-th moment in the preset time period to the k+N−1-th moment in the preset time period is determined based on the predicted state and the expected state corresponding to each moment in the preset time period; instruction parameter(s) at the k-th moment are determined based on the compensation value of the velocity of the joint(s) at the k-th moment; and a position of each of the joint(s) of the robot is adjusted according to the instruction parameter(s) at the k-th moment. In this way, the problem solving during the movement of the robot is transformed into linear optimization problem solving, thereby improving the computational accuracy and the computational efficiency.
It should be understood that, the sequence of the serial number of the steps in the above-mentioned embodiments does not mean the execution order while the execution order of each process should be determined by its function and internal logic, which should not be taken as any limitation to the implementation process of the embodiments.
In one embodiment, the linear motion model represents a linear relationship among a state change amount at the k+1-th moment, a state change amount at the k-th moment, and an input variable at the k-th moment, and the predicted state determining module 52 may be configured to:
In one embodiment, the expected state determining module 53 may be configured to:
In one embodiment, the instruction parameter determining module 54 may be configured to:
In one embodiment, the instruction parameter determining module 54 may be configured to:
In one embodiment, the instruction parameter determining module 54 may be configured to:
It should be noted that, the information exchange, execution process and other contents between the above-mentioned device/units are based on the same concept as the method embodiments of the present disclosure. For the specific functions and technical effects, please refer to the method embodiments, which will not be repeated herein.
For example, the computer program 63 may be divided into one or more modules/units, and the one or more modules/units are stored in the storage 62 and executed by the processor 61 to realize the present disclosure. The one or more modules/units may be a series of computer program instruction sections capable of performing a specific function, and the instruction sections are for describing the execution process of the computer program 63 in the robot 6.
It can be understood by those skilled in the art that
The processor 61 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may also be any conventional processor.
The storage 62 may be an internal storage unit of the robot 6, for example, a hard disk or a memory of the robot 6. The storage 62 may also be an external storage device of the robot 6, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, and the like, which is equipped on the robot 6. Furthermore, the storage 62 may further include both an internal storage unit and an external storage device of the robot 6. The storage 62 is configured to store the computer program 63 and other programs and data required by the robot 6. The storage 62 may also be used to temporarily store data that has been or will be output.
Those skilled in the art may clearly understand that, for the convenience and simplicity of description, the division of the above-mentioned functional units and modules is merely an example for illustration. In actual applications, the above-mentioned functions may be allocated to different functional units according to requirements, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the above-mentioned functions. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific name of each functional unit and module is merely for the convenience of distinguishing them from each other and is not intended to limit the scope of protection of the present disclosure. For the specific operation process of the units and modules in the above-mentioned system, reference may be made to the corresponding processes in the above-mentioned method embodiments, which will not be described herein.
In the embodiments provided by the present disclosure, it should be understood that the disclosed apparatus (device)/robot and method may be implemented in other manners. For example, the above-mentioned apparatus/robot embodiment is merely exemplary. For example, the division of modules or units is merely a logical functional division, and other division manner may be used in actual implementations, that is, multiple units or components may be combined or be integrated into another system, or some of the features may be ignored or not performed. In addition, the shown or discussed mutual coupling may be direct coupling or communication connection, and may also be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms.
The units described as separate components may or may not be physically separated. The components represented as units may or may not be physical units, that is, may be located in one place or be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of this embodiment.
When the integrated module/unit is implemented in the form of a software functional unit and is sold or used as an independent product, the integrated module/unit may be stored in a non-transitory computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above-mentioned embodiments of the present disclosure may be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a non-transitory computer-readable storage medium, which may implement the steps of each of the above-mentioned method embodiments when executed by a processor. In which, the computer program includes computer program codes which may be in the form of source codes, object codes, executable files, certain intermediate forms, and the like. The computer-readable medium may include any entity or device capable of carrying the computer program codes, a recording medium, a USB flash drive, a portable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electric carrier signals, telecommunication signals, and software distribution media. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction. For example, in some jurisdictions, according to the legislation and patent practice, a computer-readable medium does not include electric carrier signals and telecommunication signals.
Those ordinary skilled in the art may clearly understand that, the exemplificative units and steps described in the embodiments disclosed herein may be implemented through electronic hardware or a combination of computer software and electronic hardware. Whether these functions are implemented through hardware or software depends on the specific application and design constraints of the technical schemes. Those ordinary skilled in the art may implement the described functions in different manners for each particular application, while such implementation should not be considered as beyond the scope of the present disclosure.
The above-mentioned embodiments are merely intended for describing but not for limiting the technical schemes of the present disclosure. Although the present disclosure is described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that, the technical schemes in each of the above-mentioned embodiments may still be modified, or some of the technical features may be equivalently replaced, while these modifications or replacements do not make the essence of the corresponding technical schemes depart from the spirit and scope of the technical schemes of each of the embodiments of the present disclosure, and should be included within the scope of the present disclosure.