Embodiments of the invention relate to a method for estimating movement of a poly-articulated mass object and are applicable to capturing the movement of a poly-articulated object using inertial sensors arranged on the object.
A number of existing techniques can be used to capture the movement of a poly-articulated object. The systems implementing these techniques can be classified into two categories.
The first category groups together the systems that use at least one sensor and have recourse to an external device serving as an absolute reference. By way of illustration, the external device can include one or more cameras situated at known positions. In another example, this external device can include one or more emitters of ultrasound, electromagnetic or other type of energy, said emitters being placed at known positions in the environment.
The second category groups together the systems consisting exclusively of sensors that do not have recourse to an external device serving as an absolute reference. Advantageously, the systems belonging to the second category benefit from an unlimited space of use because the poly-articulated object is not constrained to remain within the operational space of the external devices used as absolute reference, this operational space corresponding, for example, to the field of view of a camera or to the range of an emitter. However, the systems of this category exhibit a phenomenon of estimation drift because they are deprived of any reference relative to the environment. In other words, the error in the estimations of the positions, relative to the environment, of the elements that make up the poly-articulated object is not bounded over time. The elements that make up the poly-articulated object are called segments.
Some systems belonging to the second category, that is to say comprising only sensors and not having recourse to an external device serving as absolute reference, use inertial/magnetic sensors such as, for example, gyrometers, accelerometers or magnetometers. These sensors can be implemented by using micro-electro-mechanical systems (MEMS) technology, which makes it possible to obtain compact and lightweight sensors. Thus, it is possible to instrument the poly-articulated object with this type of sensor without hampering its movement. Since MEMS sensors consume little energy, they are often embedded in small, independent, energy-autonomous modules which transmit their information to a central unit via a wireless link. Moreover, MEMS sensors are particularly inexpensive.
Hereinafter in the description, the word “accelerometer” denotes a sensor capable of measuring the vector sum of its proper acceleration and of gravity. The word “gyrometer” denotes a sensor capable of measuring the rotational speed vector relative to an immobile reference frame. The word “magnetometer” denotes a sensor capable of measuring the Earth's magnetic field vector. The measurements are expressed in the reference frame of the sensor.
Estimating the translational position of a free mobile object in space from measurements of MEMS inertial/magnetic sensors is a difficult problem. The only information relating to the translation is the measurement from the accelerometer. To obtain a translational position, it is possible to integrate the proper acceleration twice with respect to time. Since an accelerometer measures both the proper acceleration of the object to which it is attached and the acceleration linked to the ambient gravitational field, it is necessary to remove gravity from the accelerometer measurement before performing the double time integration. The capacity to cancel the effects of gravity depends on the capacity of the system to finely estimate its orientation, and to do so over the whole dynamic range of the movement.
Even a very small error of one milliradian (mrad) in the estimation of the orientation induces an error of about 0.01 m/s² on the extracted proper acceleration. If this error is not corrected, it translates, after the double integration, into a position error of 4.5 meters at the end of thirty seconds. The orientation estimation errors of real devices are generally significantly greater than a milliradian, all the more so as the efficiency of the estimation methods generally depends strongly on the dynamics of the captured movement. Thus, if this estimation drift is not reduced, an accurate estimation of the translation of a free mobile object is impossible.
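As a quick check of these orders of magnitude (a worked example added here purely for illustration, using the constant-acceleration formula):

\[
a_{err} \approx g \sin(10^{-3}) \approx 0.01\ \mathrm{m/s^2},
\qquad
\Delta p = \tfrac{1}{2}\, a_{err}\, t^{2} = \tfrac{1}{2} \times 0.01 \times 30^{2} = 4.5\ \mathrm{m}.
\]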
A movement estimation system that makes it possible to reduce this drift is described in U.S. Pat. No. 8,165,844. The proposed solution relies on a device made up of a set of capture modules arranged on segments of the poly-articulated object. Each module is equipped with inertial/magnetic sensors (accelerometer, gyrometer and possibly magnetometer). The device further comprises a merging circuit that makes it possible to estimate the position and the orientation of the segments. The merging is based on the inertial/magnetic measurements originating from the capture modules. The measurements can be collected by wire or wirelessly.
The method making it possible to reduce the drift on the estimation of the position comprises an estimation phase and a correction phase.
The estimation phase comprises two steps. In a first step, kinematic quantities are estimated for each capture module from the measurements that it supplies. The expression "kinematic quantities" denotes the position, the speed and the acceleration of a rigid body. These quantities relate here to the capture module and not to the segment to which the sensor is attached. In a second step, the kinematic quantities of the segments are estimated based on the knowledge of the positioning of the sensors on the segments and of the morphology of the poly-articulated object, by using, for example, the lengths of the segments. The output from the estimation phase is an estimation of the kinematic quantities of the segments.
The correction phase improves this first estimation. It can contain up to three steps. In a first step, the limitations linked to the articulations are used to correct the estimation of the kinematic quantities of the segments. By way of example, an information item indicating a prohibited rotation about a particular axis can be used. In a second step, the detection of external contacts is used to correct the estimation of the kinematic quantities of the segments. This detection is made by using an external sensor or a heuristic based on the kinematic quantities of the segments expressed at particular points. In a third step, the measurements of external sensors are used in order to correct the estimation of the kinematic quantities of the segments. The external sensors are, for example, of GPS (global positioning system) or RFID (radio frequency identification) type. The data presented as output from the three steps of the correction phase are identical. They are an estimation of the kinematic quantities of the segments. The correction phase of this method therefore corresponds to a series of successive individual corrections.
The translational drift inherent in this method is a consequence of the double time integration of the estimates of the proper acceleration of each sensor. If these estimates are even slightly incorrect, the estimates of the translational positions of the sensors rapidly diverge from their real values. The correction phase is implemented to correct these errors.
The various embodiments of the invention described hereinbelow offer an alternative to this method that makes it possible to reduce the drift on the estimation of the translational position by avoiding having recourse to a double time integration of the estimate of the proper acceleration at the level of each sensor.
One of the aims of various embodiments of the invention is to remedy drawbacks of the prior art and provide improvements thereto.
To this end, a subject of a preferred embodiment of the invention is a method for estimating movement of a poly-articulated mass object, comprising a plurality of segments linked by at least one articulation. The method comprises the following steps: acquisition of inertial measurements obtained from at least one capture module that is fixed relative to a segment, said module including an accelerometer and a gyrometer, said measurements being expressed in a measurement reference frame that is fixed relative to said segment; determination from said measurements of at least one stress vector external to the mass object so as to control a physical model of this object, said model representing the mass object; estimation of the kinematic quantities associated with the segments of the object by applying the fundamental principle of dynamics with the at least one stress vector determined in the preceding step.
The steps of the method are, for example, applied periodically with a predefined time step.
A step is, for example, applied for at least one articulation of the object so as to determine a stress and/or a strain associated with said articulation from the parameters representative of the links implemented by this articulation, this stress and/or this strain being used in the application of the fundamental principle of dynamics.
Measurements of the Earth's magnetic field can be acquired from the capture module, said module comprising, for this purpose, a third sensor, namely a magnetometer.
A step of estimation of the orientation Rsensor of the capture modules relative to a reference of the environment is for example applied for the at least one capture module by using the measurements acquired.
A step of determination of a moment {right arrow over (Mposition)} is for example applied by using the positioning Rsensorsgmt of the at least one capture module on its segment, the orientation Rsensor of the capture module and the orientation Rsgmt of the segment to which it is attached estimated in the preceding time step.
The moment {right arrow over (Mposition)} can be determined by using the following expression:

\overrightarrow{M_{position}} = K_r\left[\left(R_{sensor}^{sgmt}\,R_{sgmt} - R_{sensor}\right) - \left(R_{sensor}^{sgmt}\,R_{sgmt} - R_{sensor}\right)^{T}\right]

in which Kr is a stiffness matrix, for example a diagonal matrix of size 3×3, and X^T denotes the transpose of a matrix X.
A step of determination of a moment {right arrow over (Mspeed)} representative of a speed control on a segment is for example applied after the acquisition of the measurements, said moment being determined from the positioning Rsensorsgmt, from the speed of rotation of the capture module {right arrow over (ωsensor)} and from the speed of rotation of the segment to which it is attached {right arrow over (ωsgmt)} estimated in the preceding time step.
The moment {right arrow over (Mspeed)} is for example determined by using the following expression:

\overrightarrow{M_{speed}} = B_r\left(\overrightarrow{\omega_{sgmt}} - R_{sensor}^{sgmt}\,\overrightarrow{\omega_{sensor}}\right)

in which Br is a damping matrix, for example a diagonal matrix of size 3×3.
A step determines, for example, a force {right arrow over (Facc)} representative of an acceleration control including gravity on the segment by using the masses m of the segments to which the capture modules are attached, measurements produced by the accelerometer and the estimation of the orientation Rsensor of said module.
The force {right arrow over (Facc)} is for example determined for each segment by using the following expression:
\overrightarrow{F_{acc}} = m\,R_{sensor}\,R_{sensor}^{T}\left(\vec{g} + \overrightarrow{a_{sensor}}\right)
A step determines, for example, a force {right arrow over (Fsim)} simulating the effect of gravity on the segment by using the masses m of the segments and the position of the segments {right arrow over (psgmt)}, Rsgmt estimated in the preceding time step.
The aim of a step is, for example, to determine collision forces {right arrow over (Fcollision)}, said forces being determined from contact information obtained from a detection of collision between the geometries of the segments and the geometry of the environment and from the modeling of the friction between the poly-articulated object and the environment.
Also a subject of an embodiment of the invention is a system for estimating movement of a poly-articulated mass object, comprising a plurality of segments linked by at least one articulation, the system comprising at least one capture module that is fixed relative to a segment and includes an accelerometer and a gyrometer, and a processing unit configured to implement the steps of the method described previously from the measurements acquired by the at least one capture module.
Also a subject of an embodiment of the invention is a computer program comprising instructions for the execution of the method described previously, when the program is executed by a processor.
Other features and advantages of the invention will become apparent from the following description, given by way of illustration and in a nonlimiting manner, in light of the attached drawings.
Kinematic quantities 100 group together the position 101, the speed 102 and the acceleration 103 of a rigid body. These quantities can refer equally to translation and to rotation.
Thus, the kinematic quantities combine the translational position and the rotational position (orientation), the translational speed and the rotational speed, and the translational acceleration and the rotational acceleration.
The dynamic quantities 104 correspond to a set including the kinematic quantities 100 and the stresses 105. The stresses can refer equally to translation and to rotation. The term "stress" thus combines the forces, for translation, and the moments, for rotation.
Speed is the temporal derivative 106 of position. If an estimate of speed is available, it is possible to estimate position by proceeding with a time integration 107 of the speed.
Acceleration is the temporal derivative 108 of speed. If an estimate of acceleration is available, it is possible to estimate speed by proceeding with a time integration 109 of the acceleration.
A position difference can be converted into stress by the introduction of stiffness 110.
A speed difference can be converted into stress by the introduction of damping 111.
The sum of the stresses and the acceleration are linked by Newton's second law 112, via the introduction of the mass and of the inertia matrix.
Finally, it is possible to obtain the position from the sum of the stresses by integrating the equation of the fundamental principle of dynamics 113.
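Expressed compactly, and with symbols chosen here purely for illustration (K a stiffness, B a damping, m a mass), these relations can be written:

\[
\vec{v} = \frac{d\vec{p}}{dt}, \qquad
\vec{a} = \frac{d\vec{v}}{dt}, \qquad
\vec{F} = K\,\Delta\vec{p}, \qquad
\vec{F} = B\,\Delta\vec{v}, \qquad
\sum \vec{F} = m\,\vec{a},
\]

the last equality being the translational form of the fundamental principle of dynamics, whose double time integration gives back the position.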
The object of interest is a poly-articulated object including segments linked by articulations. Hereinafter in the description, a segment denotes an object that is rigid or assumed to be rigid, defined notably by its mass, its inertia matrix and its geometry.
Moreover, an articulation denotes a link between two segments. This link defines the relative configuration that a remote segment can have relative to the segment to which it is linked. An articulation is defined notably by the degrees of freedom that it allows and by the strains that it imposes, for example articular stops.
In the example of the attached figures, the articulation considered is that of the elbow, which links the arm and the forearm.
The articulation of the elbow defines the movements of the forearm relative to the arm. This articulation allows two degrees of freedom, permitting two rotations about concurrent axes corresponding to bending/extension 300 and pronosupination 301 (pronation-supination).
The strains imposed by the articulation of the elbow include, for example, the articular stops limiting these two rotations.
The aim of the method is notably to produce estimations of the kinematic quantities for the segments of the poly-articulated object. These estimations correspond, for the different segments, to the translational position {right arrow over (psgmt)}, the rotational position Rsgmt, the speed of translation {right arrow over (vsgmt)} and the speed of rotation {right arrow over (ωsgmt)}. These estimations can be produced periodically by using a time step, for example of ten milliseconds.
The method is executed iteratively.
On the first iteration, the initialization step 400 is implemented. For that, default values are used. These default values are obtained for example by asking a player to assume a previously known static pose. His or her translational position relative to the environment is initialized with a chosen value, for example zero.
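By way of illustration only, the state handled for each segment and its initialization in the step 400 can be sketched as follows; the class and function names, the use of Python with NumPy, and the reference_orientations parameter are choices made for this sketch and are not imposed by the method.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class SegmentState:
    """Kinematic quantities of one segment: translational position p_sgmt,
    orientation R_sgmt, translational speed v_sgmt and rotation speed omega_sgmt."""
    p: np.ndarray = field(default_factory=lambda: np.zeros(3))
    R: np.ndarray = field(default_factory=lambda: np.eye(3))
    v: np.ndarray = field(default_factory=lambda: np.zeros(3))
    w: np.ndarray = field(default_factory=lambda: np.zeros(3))

def initialize(segment_names, reference_orientations):
    """Step 400: default values obtained from a previously known static pose.
    The translational position relative to the environment is initialized to a
    chosen value (zero here); in practice, the positions of the individual
    segments are then deduced from the pose and the segment lengths."""
    return {name: SegmentState(R=reference_orientations[name].copy())
            for name in segment_names}
```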
Then, a step of acquisition of the measurements 401 is executed. These measurements are the result of the operations performed by sensors embedded in the capture modules positioned on the poly-articulated object. In a preferred embodiment, the capture modules comprise two sensors. The first sensor is a gyrometer and the second an accelerometer. In an alternative embodiment, the capture modules comprise a magnetometer as a third sensor.
A step of estimation of the orientation of the capture modules 402 is then applied. For each module, the orientation relative to a reference of the environment is estimated. The result of this step corresponds for example to a rotation matrix Rsensor.
A rotation matrix R is a matrix of ℝ^{3×3} such that:
R^{-1} = R^T
det(R) = 1, det(.) representing the determinant.
To write the rotation matrix of a rigid body j relative to another rigid body i serving as reference, the following methodology can be used.
In a first step, a direct orthonormal reference frame ψi is chosen as frame of reference, in which the configuration of the rigid body j will be expressed.
In a second step, the direct orthonormal reference frame ψj is attached to the rigid body, that is to say that the coordinates of all points of the rigid body expressed in ψj are constant in time and are so regardless of the movement of the rigid body j.
In a third step, the matrix R is defined as the matrix of size 3×3 of the coordinates of the three unitary vectors of the axes of the reference frame ψj expressed in ψi. The concatenation is done along the columns.
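As an illustration (a minimal sketch added here, not part of the original description), the construction described above and the properties of a rotation matrix can be checked numerically:

```python
import numpy as np

def rotation_from_axes(x_j, y_j, z_j):
    """Build R (rigid body j relative to the reference i) by concatenating, along
    the columns, the coordinates of the three unit axis vectors of the frame
    psi_j expressed in psi_i."""
    R = np.column_stack([x_j, y_j, z_j])
    # Properties of a rotation matrix: R^{-1} = R^T and det(R) = 1.
    assert np.allclose(R.T @ R, np.eye(3), atol=1e-9)
    assert np.isclose(np.linalg.det(R), 1.0, atol=1e-9)
    return R

# Example: frame psi_j rotated by 90 degrees about the z axis of psi_i.
R = rotation_from_axes(np.array([0.0, 1.0, 0.0]),
                       np.array([-1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]))
```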
The estimation of the orientation is produced by using the measurements performed by the sensors of the different capture modules. Thus, the measurement from the accelerometer and the measurement of the rotation speed vector {right arrow over (ωsensor)} from the gyrometer can be used. In addition, a measurement {right arrow over (hsensor)} of the Earth's magnetic field from a magnetometer can also be used. It should be noted that the measurement from the accelerometer corresponds to the following quantity:

R_{sensor}^{T}\left(\vec{g} + \overrightarrow{a_{sensor}}\right)

in which:
{right arrow over (g)} represents gravity;
{right arrow over (asensor)} represents the proper acceleration of the sensor, and therefore of the capture module in which it is embedded.
The estimation of the orientation can be produced by using the technique described in the international patent application publication WO 2010/007160, which is incorporated herein by reference.
A step 403 of determination of a moment {right arrow over (Mposition)} is then applied. For that, a stiffness is used. The moment {right arrow over (Mposition)} is determined from the knowledge of the positioning Rsensorsgmt of the sensors on the segments, the orientation of the capture module Rsensor estimated in the step 402 and the orientation Rsgmt of the segment to which it is attached, estimated in the preceding time step. The stiffness is a coefficient used as a parameter to convert an orientation deviation into a moment. The deviation is expressed as the difference between the orientation estimated at the instant t from the acquired measurements and the orientation estimated at the preceding estimation instant.
The stiffness is therefore a parameter and its choice makes it possible to refine the efficiency of estimation of the method. The stiffnesses used can be different from one segment to another.
By way of example, if the stiffness is introduced using a diagonal matrix Kr of size 3×3, the moment {right arrow over (Mposition)} can be determined by using the following expression:

\overrightarrow{M_{position}} = K_r\left[\left(R_{sensor}^{sgmt}\,R_{sgmt} - R_{sensor}\right) - \left(R_{sensor}^{sgmt}\,R_{sgmt} - R_{sensor}\right)^{T}\right]

in which:
X^T represents the transpose of a matrix X;
^ represents the cap operator, which associates with a vector \vec{u} = (u_1, u_2, u_3)^T the skew-symmetric matrix \hat{u} = \begin{pmatrix} 0 & -u_{3} & u_{2} \\ u_{3} & 0 & -u_{1} \\ -u_{2} & u_{1} & 0 \end{pmatrix}.
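A minimal sketch of the step 403 is given below. It assumes that the moment vector is obtained from the skew-symmetric bracketed term through the inverse of the cap operator (written vee here); the function names, the argument order and the use of NumPy are illustrative choices of this sketch, not prescriptions of the method.

```python
import numpy as np

def vee(S):
    """Inverse of the cap operator: extract the vector u from the skew-symmetric
    matrix u_hat (interpretation adopted for this sketch)."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def moment_position(K_r, R_sensor_sgmt, R_sgmt_prev, R_sensor):
    """Step 403: convert an orientation deviation into a moment via a stiffness.

    K_r           : 3x3 diagonal stiffness matrix (tunable, possibly per segment)
    R_sensor_sgmt : positioning of the capture module on its segment
    R_sgmt_prev   : orientation of the segment estimated at the preceding time step
    R_sensor      : orientation of the capture module estimated in the step 402
    """
    E = R_sensor_sgmt @ R_sgmt_prev - R_sensor
    return K_r @ vee(E - E.T)
```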
A step 404 of determination of a moment {right arrow over (Mspeed)} is then applied and uses a damping. This moment is representative of a speed control on a segment and is generated from the knowledge of the positioning of the capture modules on the segments Rsensorsgmt, from the speed of rotation of the capture module {right arrow over (ωsensor)} and from the speed of rotation of the segment to which it is attached {right arrow over (ωsgmt)} estimated in the preceding time step, that is to say at the preceding estimation instant.
The damping is a coefficient used as parameter to convert a rotation speed deviation into a moment. The deviation is expressed as the difference between the speed of rotation estimated at the instant t and the speed of rotation estimated at the preceding estimation instant.
The damping is therefore a parameter and its choice makes it possible to refine the efficiency of estimation of the method. The dampings used can be different from one segment to another.
By way of example, if Br is a diagonal matrix of size 3×3 corresponding to damping coefficients, the moment {right arrow over (Mspeed)} can be determined by using the following expression:
\overrightarrow{M_{speed}} = B_r\left(\overrightarrow{\omega_{sgmt}} - R_{sensor}^{sgmt}\,\overrightarrow{\omega_{sensor}}\right)
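A corresponding sketch for the step 404, with the same illustrative conventions as above:

```python
import numpy as np

def moment_speed(B_r, R_sensor_sgmt, w_sgmt_prev, w_sensor):
    """Step 404: convert a rotation-speed deviation into a moment via a damping.

    B_r           : 3x3 diagonal damping matrix (tunable, possibly per segment)
    R_sensor_sgmt : positioning of the capture module on its segment
    w_sgmt_prev   : rotation speed of the segment estimated at the preceding time step
    w_sensor      : rotation speed measured by the gyrometer of the capture module
    """
    return B_r @ (w_sgmt_prev - R_sensor_sgmt @ w_sensor)
```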
The aim of a step 405 is to determine a force {right arrow over (Facc)} representative of an acceleration control. This control corresponds to two phenomena, that is, an action of gravity and a proper acceleration control. The masses m of the segments to which the capture modules are attached are used in order to generate a force {right arrow over (Facc)} from the measurement of the accelerometer RsensorT({right arrow over (g)}+{right arrow over (asensor)}) and from the knowledge of the orientation Rsensor of the capture module. By way of example, the force {right arrow over (Facc)} can be determined for each segment by using the following expression:
\overrightarrow{F_{acc}} = m\,R_{sensor}\,R_{sensor}^{T}\left(\vec{g} + \overrightarrow{a_{sensor}}\right)
The aim of step 406 is to determine a force {right arrow over (Fsim)}. The masses m of the segments are used in order to generate forces from the position of the segments {right arrow over (psgmt)}, Rsgmt estimated in the preceding time step. These forces correspond to the simulation of the weight of the segments not involved in the step 405. It is then possible to determine, for each segment, the force {right arrow over (Fsim)} simulating the effect of gravity on the segment by using the following expression:
\overrightarrow{F_{sim}} = m\,\vec{g}
The steps 405, 406 make it possible to realistically simulate the effect of gravity. The masses of the segments are taken into account independently and the gravity is seen as an acceleration.
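The steps 405 and 406 can be sketched as follows; the sign convention chosen for gravity and the function names are assumptions of this sketch.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the environment frame (assumed convention)

def force_acc(m, R_sensor, accel_measurement):
    """Step 405: acceleration control (gravity + proper acceleration) for a segment
    carrying a capture module.  accel_measurement is the raw accelerometer output
    R_sensor^T (g + a_sensor), expressed in the sensor frame."""
    return m * (R_sensor @ accel_measurement)

def force_sim(m):
    """Step 406: simulated weight for a segment whose gravity is not handled by the step 405."""
    return m * G
```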
The aim of step 407 is to determine collision forces {right arrow over (Fcollision)} representative of the stresses that the environment exerts on the poly-articulated object. These forces are determined from contact information obtained from a detection of collision between the geometries of the segments and the geometry of the environment and from the modeling of the friction between the poly-articulated object and the environment. Such contact information corresponds, for example, to the 3D points and the normals for each point of contact. These forces can be computed based on a constraint- or penalty-based method by using the position estimates from the preceding time step. The resolution of the contacts can be produced from a number of different friction modelings and in accordance with the methods conventionally used in the prior art. By way of example, such a method is described in the paper by Trinkle, entitled Interactive Simulation of Rigid Body Dynamics in Computer Graphics, State of the Art Reports, 2012.
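By way of illustration only, a much simplified penalty-based variant of the step 407, for an infinite flat floor and a Coulomb friction model, could look as follows; the penalty gains, the floor model and the function name are assumptions of this sketch and are not taken from the method.

```python
import numpy as np

def collision_force(p_contact, v_contact, floor_height=0.0,
                    k_penalty=1.0e4, b_penalty=50.0, mu=0.9):
    """Step 407 (simplified): penalty-based contact force against an infinite flat
    floor, with Coulomb friction (adhesion coefficient mu).

    p_contact, v_contact : position and velocity of a contact point of a segment,
                           taken from the estimates of the preceding time step.
    """
    penetration = floor_height - p_contact[2]
    if penetration <= 0.0:
        return np.zeros(3)                      # no collision detected
    n = np.array([0.0, 0.0, 1.0])               # floor normal
    # Normal force: spring-damper penalty on the penetration depth.
    f_n = max(k_penalty * penetration - b_penalty * v_contact[2], 0.0)
    # Tangential force: Coulomb friction opposing the sliding velocity.
    v_t = v_contact - (v_contact @ n) * n
    norm_vt = np.linalg.norm(v_t)
    f_t = -mu * f_n * v_t / norm_vt if norm_vt > 1e-6 else np.zeros(3)
    return f_n * n + f_t
```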
It appears that a number of techniques 403, 404, 405, 406 for determining stresses 420 can be used. A person skilled in the art can choose at least one of these techniques or a combination of several of these techniques in a particular implementation of the invention. If the steps 405 and 406 are implemented together, one or other of these techniques can be chosen for each segment.
For each articulation, the aim of a step 408, which can be applied in parallel to the stress determination steps 420 previously mentioned, is to determine a stress 421 and/or a strain 422 from the link parameters of the physical model and the kinematic quantities of the segments Rsgmt and {right arrow over (ωsgmt)}. The link parameters of the physical model are parameters representative of the links implemented by this articulation, expressing, for example, prohibited rotation axes or articular stops.
This stress corresponds for example to a set including a force {right arrow over (Farticulation)} and a moment {right arrow over (Marticulation)} for each articulation.
These stresses 421 and/or strains 422 make it possible to ensure that the poly-articulated object is not disarticulated. They can be obtained according to a number of prior art methods such as, for example, the penalty method, the Lagrange multipliers method, the impulse method, or a reparameterization of the system, as described in the paper by Trinkle cited previously.
Once the stresses/strains have been determined by the steps 402, 403, 404, 405, 406, 407, 408, the aim of a step 409 is to solve, under the physical constraints, the equation system of the fundamental principle of dynamics, by using the masses and the inertia matrices of the segments. The result of this step corresponds to an estimation 410 of the kinematic quantities {right arrow over (psgmt)}, Rsgmt, {right arrow over (vsgmt)} and {right arrow over (ωsgmt)} of each segment.
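A deliberately simplified sketch of the step 409, for a single free segment, is given below. It applies the Newton-Euler equations to the sum of the stresses and integrates them with a semi-implicit Euler scheme, assuming that the articulation stresses/strains of the step 408 have already been folded into the lists of forces and moments (for example via penalties). The integration scheme, the default time step and the re-orthonormalization are illustrative choices of this sketch.

```python
import numpy as np

def step_dynamics(p, R, v, w, m, I_body, forces, moments, dt=0.01):
    """Step 409 (simplified, one free segment): fundamental principle of dynamics
    applied to the sum of the stresses, then one semi-implicit Euler time step.

    p, R, v, w : kinematic quantities of the segment at the preceding time step
    I_body     : 3x3 inertia matrix of the segment, expressed in its own frame
    """
    F = sum(forces)                              # sum of the translational stresses
    M = sum(moments)                             # sum of the rotational stresses
    I_world = R @ I_body @ R.T
    # Newton-Euler equations: m dv/dt = F ; I dw/dt + w x (I w) = M
    a = F / m
    dw = np.linalg.solve(I_world, M - np.cross(w, I_world @ w))
    v = v + dt * a
    w = w + dt * dw
    p = p + dt * v
    # Integrate the orientation: dR/dt = w^ R (first-order update here).
    wx, wy, wz = w
    W = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])  # cap operator
    R = R + dt * W @ R
    # Re-orthonormalize R so that it remains a rotation matrix.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    return p, R, v, w
```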
There is then a check 411 to see if the movement estimation is finished. If it is, the execution is terminated. Otherwise, the time step is incremented 412 and a new iteration 430 is launched.
The method is distinguished from the methods of the prior art notably by the fact that it does not involve a double time integration of the estimates of the proper accelerations of each sensor. The estimation is performed globally, under physical constraints. Thus, the translational drift is contained.
The integration and the synthesis of the stresses are performed at the global level by resolving the equation system of the fundamental principle of dynamics under physical constraint. All the constraints are taken into account in a single final step. There is no estimation of intermediate kinematic quantities such as, for example, an estimation of the translational position of the sensors.
Advantageously, the method does not use immobility detection heuristics, which makes it easy to set up. Indeed, such heuristics are often difficult to configure because they depend on the use made of the movement capture system.
Two segments are represented. The first segment 500 corresponds to a foot and the second segment 501 corresponds to the tibia linked to the foot by an articulation. The segment 500 is in contact with the ground 502. A collision detection phase makes it possible to determine which are the areas of contact of the segment 500 with the ground 502. Then, by taking into account the weight 503, 504 of the segments and the modeling of the friction, collision forces {right arrow over (Fcollision)} 505, 506, 507 can be determined as explained previously.
Capture modules are used. A capture module includes, for example, three sensors, namely a triaxial accelerometer, a triaxial gyrometer and a triaxial magnetometer. The operator 600 is for example equipped with fourteen capture modules 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614 fixed rigidly to his or her body by a suitable suit. The capture modules can be positioned on the various segments of the body, for example on the limbs, the trunk and the head.
The information from the capture modules is collected by a movement controller, using, for example, a wireless receiver connected by USB to a remote processing unit 615 such as a computer. The processing unit 615 implements the method described previously by using a physical model of the operator.
The aim of the system is for example to animate a virtual human. The physical model includes a plurality of segments linked to one another by articulations and arranged so as to reproduce the morphology of the operator.
The fingers are not modeled, that is to say that they are considered to be rigidly fixed to the palms. The kinematic modeling comprises, for example, twenty links for a total of forty-five internal degrees of freedom. The articulations are for example limited by articular stops. The geometries of the segments can be chosen as expanded segments, expanded quadrilaterals or complex geometries derived from computer-aided design. The inertias can be computed from the geometries of each segment.
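For example, the inertia of a segment approximated by a solid box (one possible reading of an "expanded quadrilateral") can be computed from its geometry with the standard formula below; the numerical values are purely illustrative.

```python
import numpy as np

def box_inertia(m, a, b, c):
    """Inertia matrix, at the center of mass, of a solid box of mass m and
    dimensions a x b x c; capsule or CAD-derived geometries would use other formulas."""
    return (m / 12.0) * np.diag([b**2 + c**2, a**2 + c**2, a**2 + b**2])

# Example: a forearm segment approximated by a 0.26 m x 0.08 m x 0.08 m box of 1.5 kg.
I_forearm = box_inertia(1.5, 0.26, 0.08, 0.08)
```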
The physical model is then positioned in an environment. This environment is for example modeled by an infinite flat floor. Friction can be modeled by Coulomb's law with an adhesion coefficient of 0.9.
Advantageously, embodiments of the invention can be applied for very wide-ranging aims and needs such as, for example, movement analysis. In this case, from the knowledge of the movement, specific information is extracted for particular application needs.
Embodiments of the invention can also be used for the complete reconstruction of movement in order for it to be displayed. The movement can then be modified or amplified to obtain desired effects.
Thus, the invention can be used notably in the context of functional re-education, actimetry, sports training, video games, therapeutic diagnostics, therapeutic follow-up, professional training in technical movements, product ergonomics, workstation ergonomics, animation, military troop tracking and/or the tracking of emergency services personnel.