The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2024 200 156.2 filed on Jan. 8, 2024, which is expressly incorporated herein by reference in its entirety.
The present invention relates to a calibration routine for inertial measurement units used in 3D motion capture with a network of interconnected devices, in which orientation is estimated by sensor fusion of accelerometer and gyroscope data.
Accurate determination of a human body pose in 3D space is highly important for applications in the motion capture industry (3D avatars, animation, cinema production), healthcare (rehabilitation, gait analysis), augmented reality (AR), and virtual reality (VR). Access to affordable, low-power, wireless systems will greatly promote progress in the abovementioned fields.
This task requires reliable orientation determination of wearable devices in 3D space, which is achieved by employing Inertial Measurement Units (IMUs) as the hardware base. A sensor fusion algorithm based on the Kalman filter combines accelerometer and gyroscope data and provides relative orientation data in the form of Euler angles or quaternions.
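For illustration only, a minimal sensor fusion sketch is given below. It uses a simple complementary-filter correction instead of the Kalman filter named above, and all function names, parameter values, and sign conventions are assumptions; it merely illustrates how gyroscope integration and an accelerometer-based tilt correction can be combined into a quaternion orientation estimate.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def normalize(q):
    return q / np.linalg.norm(q)

def fuse_step(q, gyro_rad_s, accel_m_s2, dt, alpha=0.02):
    """One fusion step (complementary-filter stand-in, not the claimed
    Kalman filter): integrate the gyroscope rate, then nudge the tilt
    toward the gravity direction measured by the accelerometer."""
    # 1) Gyroscope integration: q_dot = 0.5 * q ⊗ (0, ω_body)
    omega = np.array([0.0, *gyro_rad_s])
    q = normalize(q + 0.5 * quat_mul(q, omega) * dt)

    # 2) Accelerometer tilt correction (only meaningful when the measured
    #    acceleration is dominated by gravity).
    a = accel_m_s2 / np.linalg.norm(accel_m_s2)
    # Gravity ("up") direction predicted in the sensor frame from q:
    w, x, y, z = q
    g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
    # Small body-frame rotation that pulls the predicted gravity toward
    # the measured gravity; only a fraction alpha of the error is applied.
    axis = np.cross(a, g_pred)
    angle = np.arccos(np.clip(np.dot(g_pred, a), -1.0, 1.0))
    if np.linalg.norm(axis) > 1e-8:
        axis = axis / np.linalg.norm(axis)
        half = 0.5 * alpha * angle
        corr = np.array([np.cos(half), *(np.sin(half) * axis)])
        q = normalize(quat_mul(q, corr))
    return q
```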
Using orientation data has the advantage of high relative orientation precision in 3D space and low power consumption on the order of 3 mA (2.68-3.44 mA, depending on the number of devices). These properties are combined with the star-shaped architecture of the Smart Connected Sensor (SCS) network (see the figures).
Current drawbacks of mobile motion capture systems are constraints on the mounting orientation of the devices on the body parts and time- or movement-dependent drift of the orientation information when no magnetometer sensor is used. Therefore, calibration of the sensors' orientation and drift compensation routines are required.
The present invention provides a mounting and drift correction routine for the network of SCS wearable devices in 3D motion tracking.
The present invention relates to a method for calibrating a plurality of wearable inertial measurement units. According to an example embodiment of the present invention, the method comprises the following steps:
The present invention provides calibration routines for mounting and drift compensation of IMU orientation in 3D space which are suitable for providing a reliable 3D avatar visualization. Predefined calibration poses allow the remapping of the coordinate systems of each device to offer one-to-one 3D avatar visualization and data that can be directly employed for quantitative data analysis. Further concepts for pose detection and translational movement prediction based on quaternions are also disclosed.
The calibration of the SCS mounting is based on the fact that the heading direction of the avatar is defined by the startup procedure, in which all devices are aligned with the chosen heading direction. The orientation drift of the sensors is corrected once the mounting calibration of the devices is known. This two-step procedure allows 3D avatar applications to run continuously for extended periods of time.
An example embodiment of the present invention provides that in step A, the plurality of wearable inertial measurement units is aligned by placing them in a common receptacle unit. Advantageously, this mounting device, for example a box, ensures that the IMUs adopt a specified orientation and position in relation to the heading direction.
An example embodiment of the present invention provides that step G is repeated after some time to compensate for a drift of orientation. An advantage of repeating the calibration is that the drift is recognized and compensated for in the future as well. This means that calibration has to be carried out less frequently.
An example embodiment of the present invention provides that the wearable inertial measurement units form leaf nodes which communicate wirelessly with a central node.
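A minimal sketch of how a central node might keep the latest orientation sample per leaf device in such a star topology is given below; the data structure and names are illustrative assumptions, and the wireless transport itself is not modeled.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Quaternion = Tuple[float, float, float, float]  # (w, x, y, z)

@dataclass
class CentralNode:
    """Central node of the star network: stores the most recent GRV
    quaternion reported by each leaf device, keyed by its MAC-ID."""
    latest: Dict[str, Quaternion] = field(default_factory=dict)

    def on_leaf_update(self, mac_id: str, q: Quaternion) -> None:
        # Called whenever a leaf node delivers a new orientation sample.
        self.latest[mac_id] = q

    def snapshot(self) -> Dict[str, Quaternion]:
        # Consistent copy for the avatar update loop.
        return dict(self.latest)
```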
The method comprises the steps of:
The aim of calibration routines is to compensate for two types of errors:
A first error type is a mounting error, which is a change in orientation of the sensors in the SCS devices when placed in the alignment box lying on the desk versus when worn on the body.
A second error type is a sensor drift error, which is an orientation error in the sensor data that accumulates over time due to the movements of the user wearing the devices during the demo. This error results from the integration operation within the fusion algorithm and from noise in the sensor data.
Choose the direction in which the user will perform the demo; this direction defines the heading direction. All SCS devices must be aligned before the demonstration, and their coordinate systems are initialized as shown in the figures.
The user assigns body positions to the SCS wearables by mapping each body position to the MAC-ID of the corresponding SCS wearable in a config file (see the configuration sketch after this procedure).
Mount the devices on the corresponding limbs as set up in the config file.
The user must stand in a T-pose facing the heading direction, as shown in the figures.
While the demo is performed, orientation drift occurs due to the integration of gyroscope readouts and noise in the IMUs. The user performs the T-pose again, facing the heading direction. The drift compensation is then applied by subtracting the current orientation misalignment of the SCS boards with respect to the known calibration pose.
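For the configuration step referenced above, the following is a minimal sketch assuming a JSON config file; the file name, format, key names, MAC-IDs, and body-segment names are illustrative assumptions, and only the mapping of body positions to the MAC-IDs of the SCS wearables is taken from the description.

```python
import json

# Hypothetical config content: the description only specifies that body
# positions are mapped to the MAC-IDs of the SCS wearables.
CONFIG_TEXT = """
{
    "heading": "north",
    "devices": {
        "AA:BB:CC:DD:EE:01": "left_lower_arm",
        "AA:BB:CC:DD:EE:02": "right_lower_arm",
        "AA:BB:CC:DD:EE:03": "torso"
    }
}
"""

def load_body_mapping(config_text: str) -> dict:
    """Return a dict mapping MAC-ID -> body segment name."""
    config = json.loads(config_text)
    return dict(config["devices"])

if __name__ == "__main__":
    for mac, segment in load_body_mapping(CONFIG_TEXT).items():
        print(f"{mac} -> {segment}")
```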
The orientation of a 3D mesh of the avatar is defined by the current reading of a game rotation vector (GRV) sensor in the form of a unit quaternion $\hat{q}_i$. This value is received directly as the virtual GRV sensor output from the SCS board and applied as a rotation of the mesh from the original avatar pose.
The update rule for the quaternion to be applied as a rotation of the mesh is the following (implemented in the update method of the Avatar class): $\hat{q}_i^{\mathrm{corrected}} = \hat{q}_i^{\mathrm{drift}} \otimes \hat{q}_i \otimes \hat{q}_i^{\mathrm{mounting}}$
The local rotation of any child mesh connected to the parent mesh is calculated as follows: $\hat{q}^{\mathrm{local}} = \hat{q}_{\mathrm{parent}}^{*} \otimes \hat{q}_{\mathrm{child}}$
Here, $\hat{q}^{*}$ denotes the conjugate operation.
Here, $\hat{q}_i^{\mathrm{mounting}}$ and $\hat{q}_i^{\mathrm{drift}}$ are correction quaternions stored as the mounting and drift properties of the Avatar class and updated via T-pose resets as follows.
After the devices are turned on in the starting box in an aligned manner and mounted on the body with the predefined heading, the mounting of the devices is reset as follows:
The person stands in the T-pose (or any other predefined pose in the 3D view), and a set of current quaternion readouts multiplied by the drift-correction quaternion is recorded: $\hat{q}_i^{\dagger} = \hat{q}_i^{\mathrm{drift}} \otimes \hat{q}_i$
The quaternion used to compensate for the mounting is calculated as the conjugate of $\hat{q}_i^{\dagger}$: $\hat{q}_i^{\mathrm{mounting}} = (\hat{q}_i^{\mathrm{drift}} \otimes \hat{q}_i)^{*}$
If the drift of the GRV sensor readout occurs during the demo operation, the T-pose reset for the drift is carried out similarly:
The person stands in the T-pose, and a set of current quaternion readouts multiplied by the mounting-correction quaternion is recorded: $\hat{q}_i^{\ddagger} = \hat{q}_i \otimes \hat{q}_i^{\mathrm{mounting}}$
The quaternion used to compensate for the drift is calculated as the conjugate of $\hat{q}_i^{\ddagger}$: $\hat{q}_i^{\mathrm{drift}} = (\hat{q}_i \otimes \hat{q}_i^{\mathrm{mounting}})^{*}$
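A minimal sketch of these corrections is given below. The class and attribute names are illustrative, and the composition order used for the corrected mesh rotation ($\hat{q}_i^{\mathrm{drift}} \otimes \hat{q}_i \otimes \hat{q}_i^{\mathrm{mounting}}$) as well as the form of the local child rotation are assumptions inferred from the reset equations above; with this ordering, both resets reduce the corrected quaternion to the identity in the calibration pose.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def quat_conj(q):
    """Conjugate q* = (w, -x, -y, -z)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

IDENTITY = np.array([1.0, 0.0, 0.0, 0.0])

class AvatarSegment:
    """Per-device correction state; attribute names are illustrative."""

    def __init__(self):
        self.q_mounting = IDENTITY.copy()
        self.q_drift = IDENTITY.copy()

    def reset_mounting(self, q_grv):
        # q_mounting = (q_drift ⊗ q)* recorded in the T-pose.
        self.q_mounting = quat_conj(quat_mul(self.q_drift, q_grv))

    def reset_drift(self, q_grv):
        # q_drift = (q ⊗ q_mounting)* recorded in the T-pose.
        self.q_drift = quat_conj(quat_mul(q_grv, self.q_mounting))

    def corrected(self, q_grv):
        # Assumed composition: q_drift ⊗ q ⊗ q_mounting; it equals the
        # identity quaternion immediately after either T-pose reset.
        return quat_mul(self.q_drift, quat_mul(q_grv, self.q_mounting))

def local_rotation(q_parent, q_child):
    """Relative rotation of a child mesh with respect to its parent
    (assumed form q_parent* ⊗ q_child, matching the conjugate above)."""
    return quat_mul(quat_conj(q_parent), q_child)
```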
Pose detection is possible based on the provided hardware and the mathematical framework. A normalized quaternion $\hat{q}_{\mathrm{ref}}$ defines a reference pose. Each device is assigned a normalized quaternion $\hat{q}$.
To detect a pose, the following routine is applied:
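A minimal sketch of one possible pose-detection routine is given below, assuming the check thresholds the rotation angle between each device quaternion $\hat{q}$ and its reference $\hat{q}_{\mathrm{ref}}$; the threshold value and function names are illustrative assumptions.

```python
import numpy as np

def pose_detected(q, q_ref, tol_deg=15.0):
    """Return True if the rotation between the unit quaternions q and
    q_ref (both as (w, x, y, z)) is below tol_deg degrees."""
    # |dot| is used because q and -q encode the same rotation.
    d = abs(float(np.dot(q, q_ref)))
    angle = 2.0 * np.degrees(np.arccos(np.clip(d, -1.0, 1.0)))
    return angle <= tol_deg

def all_devices_in_pose(current, reference, tol_deg=15.0):
    """Detect a full-body pose: every device must match its reference
    quaternion (dicts keyed, for example, by MAC-ID)."""
    return all(pose_detected(current[k], reference[k], tol_deg)
               for k in reference)
```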