POSTURE ESTIMATION METHOD, POSTURE ESTIMATION DEVICE, AND VEHICLE

Information

  • Patent Application
  • 20230202486
  • Publication Number
    20230202486
  • Date Filed
    March 09, 2023
  • Date Published
    June 29, 2023
Abstract
A posture estimation method includes calculating a posture change amount of an object based on an output of an angular velocity sensor, predicting posture information of the object by using the posture change amount, adjusting error information in a manner of determining whether or not the output of the angular velocity sensor is within an effective range and, when it is determined that the output of the angular velocity sensor is not within the effective range, increasing a posture error component in error information and reducing a correlation component between the posture error component and an error component other than the posture error component in the error information, and correcting the predicted posture information of the object based on the error information.
Description

The present application is based on, and claims priority from JP Application Serial Number 2018-143691, filed Jul. 31, 2018, the disclosure of which is hereby incorporated by reference herein in its entirety.


BACKGROUND
1. Technical Field

The present disclosure relates to a posture estimation method, a posture estimation device, and a vehicle.


2. Related Art

A device or a system that mounts an inertial measurement unit (IMU) on an object and calculates the position or posture of the object from an output signal of the inertial measurement unit (IMU) is known. The output signal of the inertial measurement unit (IMU) contains a bias error, and thus an error also occurs in the posture calculation. Therefore, a technique of correcting such an error with a Kalman filter and estimating the accurate posture of the object has been proposed. For example, JP-A-2015-179002 discloses a posture estimation method of calculating a posture change amount of an object by using an output of an angular velocity sensor and estimating the posture of the object by using the posture change amount.


JP-A-2015-179002 is an example of the related art.


An effective measurement range is defined for an angular velocity sensor. When an angular velocity outside this range is temporarily input, the output of the angular velocity sensor may include an unexpectedly large error. In this regard, the posture estimation method disclosed in JP-A-2015-179002 does not assume a case where the output of the angular velocity sensor is even temporarily out of the effective range, and there is a concern that the accuracy in estimating the posture of an object decreases.


SUMMARY

An aspect of a posture estimation method according to the present disclosure includes calculating a posture change amount of an object based on an output of an angular velocity sensor, predicting posture information of the object by using the posture change amount, adjusting error information in a manner of determining whether or not the output of the angular velocity sensor is within an effective range, and when it is determined that the output of the angular velocity sensor is not within the effective range, increasing a posture error component in error information and reducing a correlation component between the posture error component and an error component other than the posture error component in the error information, and correcting the predicted posture information of the object based on the error information.


In the aspect of the posture estimation method, the adjusting of the error information may include determining whether or not the current time is in a first period after it is determined that the output of the angular velocity sensor is not within the effective range, increasing the posture error component in the first period, and reducing the correlation component between the posture error component and the error component other than the posture error component, in the first period.


In the aspect of the posture estimation method, the error component other than the posture error component may include a bias error component of an angular velocity.


In the aspect of the posture estimation method, the correlation component between the posture error component and the error component other than the posture error component may be zero.


The aspect of the posture estimation method may further include calculating a velocity change amount of the object based on an output of an acceleration sensor and the output of the angular velocity sensor. In the adjusting of the error information, whether or not the output of the acceleration sensor is within an effective range may be determined, and, when it is determined that the output of the angular velocity sensor or the output of the acceleration sensor is not within the corresponding effective range, a motion velocity error component in the error information may be increased, and a correlation component between the motion velocity error component and an error component other than the motion velocity error component in the error information may be reduced. In the predicting of the posture information, velocity information of the object may be predicted by using the velocity change amount.


In the aspect of the posture estimation method, the adjusting of the error information may include determining whether or not the current time is in a second period after it is determined that the output of the acceleration sensor is not within the effective range, increasing the motion velocity error component in the second period, and reducing the correlation component between the motion velocity error component and the error component other than the motion velocity error component, in the second period.


In the aspect of the posture estimation method, the error component other than the motion velocity error component may include a bias error component of an acceleration.


In the aspect of the posture estimation method, the correlation component between the motion velocity error component and the error component other than the motion velocity error component may be zero.


An aspect of a posture estimation device according to the present disclosure includes a posture-change-amount calculation unit that calculates a posture change amount of an object based on an output of an angular velocity sensor, a posture information prediction unit that predicts posture information of the object by using the posture change amount, an error information adjustment unit that determines whether or not the output of the angular velocity sensor is within an effective range, and when it is determined that the output of the angular velocity sensor is not within the effective range, increases a posture error component in error information, and reduces a correlation component between the posture error component and an error component other than the posture error component in the error information, and a posture information correction unit that corrects the predicted posture information of the object based on the error information.


An aspect of a vehicle according to the present disclosure includes the aspect of the posture estimation device and a control device that controls a posture of the vehicle based on posture information of the vehicle, which is estimated by the posture estimation device.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart illustrating an example of procedures of a posture estimation method.



FIG. 2 is a flowchart illustrating an example of procedures of a process S3 in FIG. 1.



FIG. 3 is a flowchart illustrating an example of procedures of a process S5 in FIG. 1.



FIG. 4 is a flowchart illustrating an example of procedures of a process S6 in FIG. 1.



FIG. 5 is a flowchart illustrating an example of procedures of a process S7 in FIG. 1.



FIG. 6 is a diagram illustrating an example of a configuration of a posture estimation device according to an embodiment.



FIG. 7 is a diagram illustrating a sensor coordinate system and a local coordinate system.



FIG. 8 is a diagram illustrating an example of a configuration of a processing unit in the embodiment.



FIG. 9 is a flowchart illustrating another example of the procedures of the process S5 in FIG. 1.



FIG. 10 is a block diagram illustrating an example of a configuration of an electronic device in the embodiment.



FIG. 11 is a plan view illustrating a wrist watch-type activity meter as a portable type electronic device.



FIG. 12 is a block diagram illustrating an example of a configuration of the wrist watch-type activity meter as the portable type electronic device.



FIG. 13 illustrates an example of a vehicle in the embodiment.



FIG. 14 is a block diagram illustrating an example of a configuration of the vehicle.



FIG. 15 illustrates another example of the vehicle in the embodiment.



FIG. 16 is a block diagram illustrating an example of a configuration of the vehicle.





DESCRIPTION OF EXEMPLARY EMBODIMENTS

Hereinafter, a preferred embodiment according to the present disclosure will be described in detail with reference to the drawings. The embodiments described below do not unduly limit the contents of the present disclosure described in the appended claims. Not all components described below are essential components of the present disclosure.


1. Posture Estimation Method
1-1. Posture Estimation Theory
1-1-1. IMU Output Model

The output of the inertial measurement unit (IMU) includes angular velocity data dω,k as the output of a three-axis angular velocity sensor and acceleration data dα,k as the output of a three-axis acceleration sensor at each sampling time point tk. Here, as shown in Expression (1), the angular velocity data dω,k is represented by the sum of an average value ω̄k of the angular velocity vector ω over the sampling interval (Δt = tk − tk−1) and the residual bias bω.







\[
d_{\omega,k} = \begin{bmatrix} d_{\omega x,k} \\ d_{\omega y,k} \\ d_{\omega z,k} \end{bmatrix}
= \bar{\omega}_k + b_\omega,
\qquad
\bar{\omega}_k = \frac{1}{\Delta t}\int_{t_{k-1}}^{t_k} \omega \, dt
\tag{1}
\]

Similarly, as shown in Expression (2), the acceleration data dα,k is also represented by the sum of an average value of the acceleration vector α and the residual bias bα.







\[
d_{\alpha,k} = \begin{bmatrix} d_{\alpha x,k} \\ d_{\alpha y,k} \\ d_{\alpha z,k} \end{bmatrix}
= \bar{\alpha}_k + b_\alpha,
\qquad
\bar{\alpha}_k = \frac{1}{\Delta t}\int_{t_{k-1}}^{t_k} \alpha \, dt
\tag{2}
\]
1-1-2. Calculation of Three-Dimensional Posture by Angular Velocity Integration

When a three-dimensional posture is represented by quaternions, the relation between the posture quaternion q and the angular velocity vector ω [rad/s] is represented by the differential equation in Expression (3).







\[
\frac{d}{dt} q = \frac{1}{2}\, q \otimes \omega
\tag{3}
\]
Here, the symbol ⊗ (a circle superimposed with ×) indicates quaternion multiplication. For example, the elements of the quaternion product of q and p are calculated as shown in Expression (4).






\[
q \otimes p = \begin{bmatrix}
+q_0 & -q_1 & -q_2 & -q_3 \\
+q_1 & +q_0 & -q_3 & +q_2 \\
+q_2 & +q_3 & +q_0 & -q_1 \\
+q_3 & -q_2 & +q_1 & +q_0
\end{bmatrix}
\begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix}
= \begin{bmatrix}
q_0 p_0 - q_1 p_1 - q_2 p_2 - q_3 p_3 \\
q_1 p_0 + q_0 p_1 - q_3 p_2 + q_2 p_3 \\
q_2 p_0 + q_3 p_1 + q_0 p_2 - q_1 p_3 \\
q_3 p_0 - q_2 p_1 + q_1 p_2 + q_0 p_3
\end{bmatrix}
\tag{4}
\]

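As a concrete check on Expression (4), the quaternion product can be written directly in code. The following is a minimal Python/NumPy sketch; the function name is illustrative and not part of the patent:

```python
import numpy as np

def quat_mul(q, p):
    """Quaternion product q (x) p of Expression (4); components ordered [q0, q1, q2, q3] (scalar first)."""
    q0, q1, q2, q3 = q
    p0, p1, p2, p3 = p
    return np.array([
        q0 * p0 - q1 * p1 - q2 * p2 - q3 * p3,
        q1 * p0 + q0 * p1 - q3 * p2 + q2 * p3,
        q2 * p0 + q3 * p1 + q0 * p2 - q1 * p3,
        q3 * p0 - q2 * p1 + q1 * p2 + q0 * p3,
    ])
```

Multiplying any quaternion by the identity quaternion [1, 0, 0, 0] returns it unchanged, which is a convenient sanity check.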
As shown in Expression (5), the angular velocity vector ω is regarded as equivalent to a quaternion in which the real (scalar) component is zero and the imaginary (vector) component coincides with the components of ω.






\[
\omega = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
\;\cong\;
\begin{bmatrix} 0 \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
= \begin{bmatrix} 0 \\ \omega \end{bmatrix}
\tag{5}
\]
If the differential equation (3) is solved, it is possible to calculate the three-dimensional posture. Unfortunately, however, no general solution has been found. Further, the value of the angular velocity vector ω is obtained only in the form of discrete average values. Thus, an approximation calculation with Expression (6) is performed for each short (sampling) time.









\[
q_k \approx q_{k-1} \otimes \Delta q_k
\tag{6}
\]
\[
\begin{aligned}
\Delta q_k &= 1 + \frac{\bar{\omega}_k}{2}\Delta t
+ \frac{\bar{\omega}_{k-1}\times\bar{\omega}_k}{24}\Delta t^2
- \frac{\lvert\bar{\omega}_k\rvert^2}{8}\Delta t^2
- \frac{\lvert\bar{\omega}_k\rvert^2\,\bar{\omega}_k}{48}\Delta t^3 \\
&= \begin{bmatrix}
1 - \dfrac{1}{8}\left\lvert \bar{\omega}_k \Delta t \right\rvert^2 \\[4pt]
\dfrac{1}{2}\bar{\omega}_k \Delta t
+ \dfrac{1}{24}(\bar{\omega}_{k-1}\Delta t)\times(\bar{\omega}_k\Delta t)
- \dfrac{1}{48}\left\lvert\bar{\omega}_k\Delta t\right\rvert^2 \bar{\omega}_k \Delta t
\end{bmatrix}
\end{aligned}
\]
Expression (6) is calculated based on a Taylor expansion around each t = tk−1 up to the third-order term of Δt, considering the posture quaternion q and the integration relation of the angular velocity vector ω for each axis. The term including ω̄k−1 in the expression corresponds to the coning correction term. The symbol × indicates a cross product (vector product) of three-dimensional vectors. For example, the elements of v×w are calculated as in Expression (7).






\[
v \times w = \begin{bmatrix}
0 & -v_z & +v_y \\
+v_z & 0 & -v_x \\
-v_y & +v_x & 0
\end{bmatrix}
\begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix}
= \begin{bmatrix}
v_y w_z - v_z w_y \\
v_z w_x - v_x w_z \\
v_x w_y - v_y w_x
\end{bmatrix}
\tag{7}
\]

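The update of Expression (6), including the coning correction term, translates into a short routine. The sketch below reuses quat_mul from the earlier example; delta_q and propagate are illustrative names, and the normalization described later in Expression (15) is applied after each step:

```python
import numpy as np

def delta_q(w_prev, w_curr, dt):
    """Posture change amount Δq_k of Expression (6) from the average angular
    velocities ω̄_{k-1} and ω̄_k [rad/s] over the sampling interval Δt [s]."""
    a = w_curr * dt                                   # ω̄_k Δt
    a_prev = w_prev * dt                              # ω̄_{k-1} Δt
    n2 = float(a @ a)                                 # |ω̄_k Δt|^2
    scalar = 1.0 - n2 / 8.0
    vector = 0.5 * a + np.cross(a_prev, a) / 24.0 - n2 * a / 48.0
    return np.concatenate(([scalar], vector))

def propagate(q_prev, w_prev, w_curr, dt):
    """q_k ≈ q_{k-1} (x) Δq_k, followed by the normalization of Expression (15)."""
    q = quat_mul(q_prev, delta_q(w_prev, w_curr, dt))
    return q / np.linalg.norm(q)
```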
1-1-3. Tilt Error Observation by Gravitational Acceleration

The acceleration sensor detects the acceleration generated by its own movement. On the earth, however, the acceleration is normally detected with a gravitational acceleration of about 1 [G] (= 9.80665 [m/s²]) added. The gravitational acceleration is normally a vector in the vertical direction. Thus, it is possible to know the error of the tilt (roll and pitch) component of the posture by comparison with the output of the three-axis acceleration sensor. To do so, the acceleration vector α in the sensor coordinate system (xyz coordinate system), observed by the three-axis acceleration sensor, is first transformed into an acceleration vector α′ in the coordinate system (XYZ coordinate system) of the local space defined by horizontal orthogonal axes and a vertical axis. This coordinate (rotation) transformation can be calculated with the posture quaternion q and its conjugate quaternion q*, as shown in Expression (8).







\[
\alpha' = q \otimes \alpha \otimes q^*
\tag{8}
\]
Expression (8) can be expressed with a three-dimensional coordinate transformation matrix C, as in Expression (9).









\[
\alpha' = C\alpha
\tag{9}
\]
\[
C = \begin{bmatrix}
q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\
2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\
2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2
\end{bmatrix}
\]

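For reference, the matrix C of Expression (9) is easily generated from the posture quaternion. A minimal sketch (illustrative naming) follows:

```python
import numpy as np

def quat_to_matrix(q):
    """Coordinate (rotation) transformation matrix C of Expression (9),
    mapping sensor (xyz) coordinates to local-space (XYZ) coordinates."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
```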
A tilt error is obtained by comparing the acceleration vector α′ with the gravitational acceleration vector g in the local-space coordinate system (XYZ coordinate system). The gravitational acceleration vector g is represented by Expression (10), in which Δg indicates a gravitational-acceleration correction value, that is, the difference [G] from the standard value of the gravitational acceleration.






\[
g = \begin{bmatrix} 0 \\ 0 \\ 1 + \Delta g \end{bmatrix}
\tag{10}
\]
1-1-4. Observation of Zero Motion Velocity

In user interface applications in particular, the motion velocity of the IMU can be considered to be substantially zero in the long term. The relation between the motion velocity vector v in the local-space coordinate system and the acceleration vector α and the angular velocity vector ω in the sensor coordinate system is expressed with the coordinate transformation matrix C by the differential equations in Expression (11).









\[
\frac{d}{dt} v = C\alpha - g,
\qquad
\frac{d}{dt} C = C \begin{bmatrix}
0 & -\omega_z & +\omega_y \\
+\omega_z & 0 & -\omega_x \\
-\omega_y & +\omega_x & 0
\end{bmatrix}
\tag{11}
\]
Here, the values of the acceleration vector α and the angular velocity vector ω are obtained only in the form of discrete average values. Thus, the motion velocity vector is calculated by an approximation calculation with Expression (12) for each short (sampling) time.









\[
v_k \approx v_{k-1} + \left( C_k\,\lambda_k - g \right)\Delta t
\tag{12}
\]
\[
\begin{aligned}
\lambda_k ={}& \bar{\alpha}_k
+ \left[ \frac{\bar{\omega}_k \times \bar{\alpha}_k}{2}
+ \frac{\bar{\omega}_{k-1}\times\bar{\alpha}_k - \bar{\omega}_k\times\bar{\alpha}_{k-1}}{12} \right]\Delta t \\
&+ \left[ \frac{\bar{\omega}_k(\bar{\omega}_k\cdot\bar{\alpha}_k) - \lvert\bar{\omega}_k\rvert^2\bar{\alpha}_k}{6}
+ \frac{\bar{\omega}_k(\bar{\omega}_{k-1}\cdot\bar{\alpha}_k) - (\bar{\omega}_{k-1}\cdot\bar{\omega}_k)\,\bar{\alpha}_k}{24}
- \frac{\bar{\omega}_k(\bar{\omega}_k\cdot\bar{\alpha}_{k-1}) - \lvert\bar{\omega}_k\rvert^2\bar{\alpha}_{k-1}}{24} \right]\Delta t^2
- \frac{\lvert\bar{\omega}_k\rvert^2}{24}\,\bar{\omega}_k\times\bar{\alpha}_k\,\Delta t^3 \\
={}& \bar{\alpha}_k + \frac{1}{2}(\bar{\omega}_k\Delta t)\times\bar{\alpha}_k
+ \frac{1}{12}(\bar{\omega}_{k-1}\Delta t)\times\bar{\alpha}_k
- \frac{1}{12}(\bar{\omega}_k\Delta t)\times\bar{\alpha}_{k-1}
+ \frac{1}{6}(\bar{\omega}_k\Delta t)\times\bigl((\bar{\omega}_k\Delta t)\times\bar{\alpha}_k\bigr) \\
&+ \frac{1}{24}(\bar{\omega}_{k-1}\Delta t)\times\bigl((\bar{\omega}_k\Delta t)\times\bar{\alpha}_k\bigr)
- \frac{1}{24}(\bar{\omega}_k\Delta t)\times\bigl((\bar{\omega}_k\Delta t)\times\bar{\alpha}_{k-1}\bigr)
- \frac{1}{24}\,\lvert\bar{\omega}_k\Delta t\rvert^2\,(\bar{\omega}_k\Delta t)\times\bar{\alpha}_k
\end{aligned}
\]
Expression (12) is calculated based on a Taylor expansion around each t = tk−1 up to the third-order term of Δt, considering the motion velocity vector v and the integration relations of the acceleration vector α and the angular velocity vector ω for each axis. Although the residual error ελ remains in the third-order term, that term is sufficiently small and is therefore ignored in Expression (12). The symbol · indicates a dot product (scalar product) of three-dimensional vectors. For example, v·w is calculated as in Expression (13).






\[
v \cdot w = v_x w_x + v_y w_y + v_z w_z
\tag{13}
\]

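The velocity update of Expression (12) can also be sketched in code (Python/NumPy; the expanded cross-product form of λ_k is used, and the function names are illustrative):

```python
import numpy as np

def lambda_k(w_prev, w_curr, a_prev, a_curr, dt):
    """Effective acceleration λ_k of Expression (12), expanded cross-product form."""
    wd, wdp = w_curr * dt, w_prev * dt          # ω̄_k Δt and ω̄_{k-1} Δt
    return (a_curr
            + 0.5 * np.cross(wd, a_curr)
            + np.cross(wdp, a_curr) / 12.0
            - np.cross(wd, a_prev) / 12.0
            + np.cross(wd, np.cross(wd, a_curr)) / 6.0
            + np.cross(wdp, np.cross(wd, a_curr)) / 24.0
            - np.cross(wd, np.cross(wd, a_prev)) / 24.0
            - float(wd @ wd) * np.cross(wd, a_curr) / 24.0)

def velocity_step(v_prev, C_k, lam, g_local, dt):
    """v_k ≈ v_{k-1} + (C_k λ_k − g) Δt of Expression (12)."""
    return v_prev + (C_k @ lam - g_local) * dt
```

Here g_local is the gravitational acceleration vector of Expression (10), for example np.array([0.0, 0.0, 1.0 + dg]).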

1-1-5. Posture Quaternion and Error Thereof

The calculated posture quaternion q is considered to have an error εq relative to the true value (denoted by ^), as in Expression (14).








\[
q = \hat{q} + \epsilon_q
= \begin{bmatrix} \hat{q}_0 + \epsilon_{q0} \\ \hat{q}_1 + \epsilon_{q1} \\ \hat{q}_2 + \epsilon_{q2} \\ \hat{q}_3 + \epsilon_{q3} \end{bmatrix},
\qquad
\Sigma_q^2 = E\!\left[ \epsilon_q \epsilon_q^T \right]
\tag{14}
\]

Here, Σq² represents an error covariance matrix indicating the magnitude of the error εq, E[·] indicates an expected value, and the superscript T indicates transposition of a vector or matrix.


The quaternion and its error have four components, but a three-dimensional posture (rotational transformation) has just three degrees of freedom. The fourth degree of freedom of the posture quaternion corresponds to an enlargement/reduction conversion, and the enlargement/reduction ratio normally needs to be fixed to 1 in the posture detection processing. In practice, the enlargement/reduction ratio component changes due to various calculation errors, so processing that suppresses this change is required.


In the case of the posture quaternion q, the square of its absolute value corresponds to the enlargement/reduction ratio. Thus, the change is suppressed by normalization processing in which the absolute value is set to 1, as in Expression (15).






\[
q \leftarrow \frac{q}{\lvert q \rvert}
= \frac{1}{\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}}
\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}
\tag{15}
\]

In the case of the posture error εq, it is necessary to hold the rank (order) of the error covariance matrix Σq² at three. Thus, the rank is limited as in Expression (16), considering a (three-dimensional) error rotation vector εθ in the local-space coordinate system (assuming that the posture error angle is sufficiently small).









\[
\epsilon_\theta = \begin{bmatrix} \epsilon_{\theta x} \\ \epsilon_{\theta y} \\ \epsilon_{\theta z} \end{bmatrix}
\cong 2 D\,\epsilon_q,
\qquad
\epsilon_q \leftarrow D^T D\,\epsilon_q,
\qquad
\Sigma_q^2 \leftarrow \left(D^T D\right) \Sigma_q^2 \left(D^T D\right)
\tag{16}
\]
\[
D = \begin{bmatrix}
-q_1 & +q_0 & -q_3 & +q_2 \\
-q_2 & +q_3 & +q_0 & -q_1 \\
-q_3 & -q_2 & +q_1 & +q_0
\end{bmatrix}
\]
\[
D^T D = \begin{bmatrix}
q_1^2+q_2^2+q_3^2 & -q_0 q_1 & -q_0 q_2 & -q_0 q_3 \\
-q_0 q_1 & q_0^2+q_2^2+q_3^2 & -q_1 q_2 & -q_1 q_3 \\
-q_0 q_2 & -q_1 q_2 & q_0^2+q_1^2+q_3^2 & -q_2 q_3 \\
-q_0 q_3 & -q_1 q_3 & -q_2 q_3 & q_0^2+q_1^2+q_2^2
\end{bmatrix}
\]

1-1-6. Removal (Ignoring) of Azimuth Error

When an azimuth observation section such as a magnetic sensor is not provided in posture detection, the azimuthal component of the posture error merely increases monotonically and serves no purpose. Further, the increased error estimate causes the feedback gain to increase unnecessarily, which causes the azimuth to change or vary unexpectedly. Thus, the azimuth error component εθz is removed from Expression (16), as in Expression (17).










\[
\epsilon_\theta' = \begin{bmatrix} \epsilon_{\theta x} \\ \epsilon_{\theta y} \end{bmatrix}
\cong 2 D'\,\epsilon_q,
\qquad
\epsilon_q \leftarrow D'^T D'\,\epsilon_q,
\qquad
\Sigma_q^2 \leftarrow \left(D'^T D'\right) \Sigma_q^2 \left(D'^T D'\right)
\tag{17}
\]
\[
D' = \begin{bmatrix}
-q_1 & +q_0 & -q_3 & +q_2 \\
-q_2 & +q_3 & +q_0 & -q_1
\end{bmatrix}
\]
\[
D'^T D' = \begin{bmatrix}
q_1^2+q_2^2 & -(q_0 q_1 + q_2 q_3) & q_1 q_3 - q_0 q_2 & 0 \\
-(q_0 q_1 + q_2 q_3) & q_0^2+q_3^2 & 0 & q_0 q_2 - q_1 q_3 \\
q_1 q_3 - q_0 q_2 & 0 & q_0^2+q_3^2 & -(q_0 q_1 + q_2 q_3) \\
0 & q_0 q_2 - q_1 q_3 & -(q_0 q_1 + q_2 q_3) & q_1^2+q_2^2
\end{bmatrix}
\]

1-1-7. Extended Kalman Filter

An extended Kalman filter that calculates a three-dimensional posture based on the above model expressions can be designed.


State Vector and Error Covariance Matrix

As in Expression (18), the posture quaternion q, the motion velocity vector v, the residual bias bω (offset of the angular velocity vector ω) of the angular velocity sensor, the residual bias bα (offset of the acceleration vector α) of the acceleration sensor, and the gravitational-acceleration correction value Δg, as the unknown state values to be obtained, constitute the state vector x (a 14-dimensional vector) of the extended Kalman filter. In addition, the error covariance matrix Σx² is defined.










\[
x = \begin{bmatrix} q \\ v \\ b_\omega \\ b_\alpha \\ \Delta g \end{bmatrix},
\qquad
\epsilon_x = \begin{bmatrix} \epsilon_q \\ \epsilon_v \\ \epsilon_{b\omega} \\ \epsilon_{b\alpha} \\ \epsilon_{\Delta g} \end{bmatrix},
\qquad
\Sigma_x^2 = E\!\left[ \epsilon_x \epsilon_x^T \right],
\qquad
x = \hat{x} + \epsilon_x
\tag{18}
\]

Process Model

In the process model, the value of the latest state vector is predicted based on the sampling interval Δt and the values of the angular velocity data dω and the acceleration data dα, as in Expression (19).









\[
x_k = \begin{bmatrix} q_k \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix}
= \begin{bmatrix} q_{k-1} \otimes \Delta q_k \\ v_{k-1} + \left( C_k \lambda_k - g_{k-1} \right)\Delta t \\ b_{\omega,k-1} \\ b_{\alpha,k-1} \\ \Delta g_{k-1} \end{bmatrix}
\tag{19}
\]
where Δq_k is the posture change amount of Expression (6), C_k is the coordinate transformation matrix of Expression (9) evaluated with the elements q_{i,k}, λ_k is the effective acceleration of Expression (12), and
\[
g_k = \begin{bmatrix} 0 & 0 & 1+\Delta g_k \end{bmatrix}^T,
\qquad
\bar{\omega}_k = d_{\omega,k} - b_{\omega,k-1},
\qquad
\bar{\alpha}_k = d_{\alpha,k} - b_{\alpha,k-1}.
\]

The covariance matrix of the state error is updated as in Expression (20), reflecting the influences of the noise components ηω and ηα of the angular velocity data dω and the acceleration data dα, and of the process noise ρω, ρα, and ρg indicating the instability of the residual bias bω of the angular velocity sensor, the residual bias bα of the acceleration sensor, and the value of the gravitational acceleration vector g (the gravitational acceleration value).












\[
\Sigma_{x,k}^2 = A_k\,\Sigma_{x,k-1}^2\,A_k^T + W_k\,\Sigma_{\rho,k}^2\,W_k^T
\tag{20}
\]
\[
A_k = \begin{bmatrix}
J_{q/q,k} & 0_{4\times3} & J_{q/\omega,k} & 0_{4\times3} & 0_{4\times1} \\
J_{v/q,k} & I_{3\times3} & 0_{3\times3} & C_k & 0_{3\times1} \\
0_{3\times4} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times4} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times1} \\
0_{1\times4} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & I_{1\times1}
\end{bmatrix}
\]
\[
W_k = \begin{bmatrix}
J_{q/\omega,k} & 0_{4\times3} & 0_{4\times3} & 0_{4\times3} & 0_{4\times1} \\
0_{3\times3} & C_k & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & \Delta t\, I_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Delta t\, I_{3\times3} & 0_{3\times1} \\
0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \Delta t\, I_{1\times1}
\end{bmatrix}
\]
\[
\Sigma_{\rho,k}^2 = \begin{bmatrix}
\Sigma_{\eta\omega,k}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & \Sigma_{\eta\alpha,k}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & \Sigma_{\rho\omega}^2 & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \Sigma_{\rho\alpha}^2 & 0_{3\times1} \\
0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \sigma_{\rho g}^2 I_{1\times1}
\end{bmatrix}
\]

Here, 0n×m indicates a zero matrix having n rows and m columns, and In×m indicates an identity matrix having n rows and m columns. J with a subscript indicates a matrix of propagation coefficients of each error, obtained by partial differentiation of the process model, as in Expression (21). Σ with a subscript indicates a covariance matrix of each type of noise, as in Expression (22).









\[
J_{q/q,k} = \begin{bmatrix}
+\Delta q_{0,k} & -\Delta q_{1,k} & -\Delta q_{2,k} & -\Delta q_{3,k} \\
+\Delta q_{1,k} & +\Delta q_{0,k} & +\Delta q_{3,k} & -\Delta q_{2,k} \\
+\Delta q_{2,k} & -\Delta q_{3,k} & +\Delta q_{0,k} & +\Delta q_{1,k} \\
+\Delta q_{3,k} & +\Delta q_{2,k} & -\Delta q_{1,k} & +\Delta q_{0,k}
\end{bmatrix}
\tag{21}
\]
\[
J_{q/\omega,k} = \frac{1}{2}\begin{bmatrix}
-q_{1,k-1} & -q_{2,k-1} & -q_{3,k-1} \\
+q_{0,k-1} & -q_{3,k-1} & +q_{2,k-1} \\
+q_{3,k-1} & +q_{0,k-1} & -q_{1,k-1} \\
-q_{2,k-1} & +q_{1,k-1} & +q_{0,k-1}
\end{bmatrix}\Delta t
\]
\[
J_{v/q,k} = 2\begin{bmatrix}
+r_{1,k} & +r_{0,k} & +r_{3,k} & -r_{2,k} \\
+r_{2,k} & -r_{3,k} & +r_{0,k} & +r_{1,k} \\
0 & 0 & 0 & 0
\end{bmatrix}\Delta t,
\qquad
\begin{bmatrix} r_{0,k} \\ r_{1,k} \\ r_{2,k} \\ r_{3,k} \end{bmatrix}
= \begin{bmatrix}
+q_{1,k-1} & +q_{2,k-1} & +q_{3,k-1} \\
+q_{0,k-1} & -q_{3,k-1} & +q_{2,k-1} \\
+q_{3,k-1} & +q_{0,k-1} & -q_{1,k-1} \\
-q_{2,k-1} & +q_{1,k-1} & +q_{0,k-1}
\end{bmatrix}\lambda_k
\]
\[
\Sigma_{\eta\omega,k}^2 = E\!\left[\eta_\omega \eta_\omega^T\right]
= \begin{bmatrix} \sigma_{\eta\omega x,k}^2 & 0 & 0 \\ 0 & \sigma_{\eta\omega y,k}^2 & 0 \\ 0 & 0 & \sigma_{\eta\omega z,k}^2 \end{bmatrix},
\qquad
\Sigma_{\eta\alpha,k}^2 = E\!\left[\eta_\alpha \eta_\alpha^T\right]
= \begin{bmatrix} \sigma_{\eta\alpha x,k}^2 & 0 & 0 \\ 0 & \sigma_{\eta\alpha y,k}^2 & 0 \\ 0 & 0 & \sigma_{\eta\alpha z,k}^2 \end{bmatrix}
\tag{22}
\]
\[
\Sigma_{\rho\omega,k}^2 = E\!\left[\rho_\omega \rho_\omega^T\right]
= \begin{bmatrix} \sigma_{\rho\omega x,k}^2 & 0 & 0 \\ 0 & \sigma_{\rho\omega y,k}^2 & 0 \\ 0 & 0 & \sigma_{\rho\omega z,k}^2 \end{bmatrix},
\qquad
\Sigma_{\rho\alpha}^2 = E\!\left[\rho_\alpha \rho_\alpha^T\right]
= \begin{bmatrix} \sigma_{\rho\alpha x,k}^2 & 0 & 0 \\ 0 & \sigma_{\rho\alpha y,k}^2 & 0 \\ 0 & 0 & \sigma_{\rho\alpha z,k}^2 \end{bmatrix}
\]

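The time update of Expression (20) is a standard covariance propagation, and assembling A_k from the blocks above is mostly indexing. The sketch below follows the block layout as printed, with the 14-state ordering q, v, b_ω, b_α, Δg; it is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def build_A(J_qq, J_qw, J_vq, C_k):
    """Assemble the 14x14 transition matrix A_k of Expression (20) from its blocks."""
    A = np.zeros((14, 14))
    A[0:4, 0:4] = J_qq            # ∂q_k/∂q_{k-1}, Expression (21)
    A[0:4, 7:10] = J_qw           # gyro-bias coupling into the posture
    A[4:7, 0:4] = J_vq            # posture coupling into the velocity
    A[4:7, 4:7] = np.eye(3)       # velocity carried over
    A[4:7, 10:13] = C_k           # accelerometer-bias coupling into the velocity (as printed)
    A[7:10, 7:10] = np.eye(3)     # b_ω (random walk)
    A[10:13, 10:13] = np.eye(3)   # b_α (random walk)
    A[13, 13] = 1.0               # Δg
    return A

def predict_covariance(Sigma_x, A, W, Sigma_rho):
    """Σ²_{x,k} = A_k Σ²_{x,k-1} A_kᵀ + W_k Σ²_{ρ,k} W_kᵀ of Expression (20)."""
    return A @ Sigma_x @ A.T + W @ Sigma_rho @ W.T
```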

Observation Model

In the observation model, an observation residual Δz, whose elements are the observation residual Δzα of the gravitational acceleration based on the acceleration data dα and the observation residual Δzv of the zero motion velocity, is calculated as in Expression (23).






\[
\Delta z_k = \begin{bmatrix} \Delta z_{\alpha,k} \\ \Delta z_{v,k} \end{bmatrix}
= \begin{bmatrix} C_k \bar{\alpha}_k - g_k \\ v_k \end{bmatrix}
\tag{23}
\]


Here, the Kalman coefficient K is calculated as in Expression (24), with the noise component ηα of the acceleration data dα, the motion acceleration component ζα, and the motion velocity component ζv treated as observation errors.









\[
K_k = \Sigma_{x,k}^2 H_k^T \left( H_k \Sigma_{x,k}^2 H_k^T + V_k \Sigma_{\zeta,k}^2 V_k^T \right)^{-1}
\tag{24}
\]
\[
H_k = \begin{bmatrix}
J_{z\alpha/q,k} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & J_{z\alpha/g,k} \\
0_{3\times4} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1}
\end{bmatrix}
\]
\[
\Sigma_{\zeta,k}^2 = \begin{bmatrix}
\Sigma_{\eta\alpha,k}^2 & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & \Sigma_{\zeta\alpha,k}^2 & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & \Sigma_{\zeta v,k}^2
\end{bmatrix},
\qquad
V_k = \begin{bmatrix}
C_k & I_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & I_{3\times3}
\end{bmatrix}
\]

Here, J with a subscript is the matrix of propagation coefficients of each error, obtained by partial differentiation of the observation model, as in Expression (25). Σ with a subscript indicates a covariance matrix of each type of noise, as in Expression (26).











\[
J_{z\alpha/q,k} = 2\begin{bmatrix}
+s_{1,k} & +s_{0,k} & +s_{3,k} & -s_{2,k} \\
+s_{2,k} & -s_{3,k} & +s_{0,k} & +s_{1,k} \\
0 & 0 & 0 & 0
\end{bmatrix},
\qquad
\begin{bmatrix} s_{0,k} \\ s_{1,k} \\ s_{2,k} \\ s_{3,k} \end{bmatrix}
= \begin{bmatrix}
+q_{1,k} & +q_{2,k} & +q_{3,k} \\
+q_{0,k} & -q_{3,k} & +q_{2,k} \\
+q_{3,k} & +q_{0,k} & -q_{1,k} \\
-q_{2,k} & +q_{1,k} & +q_{0,k}
\end{bmatrix}\bar{\alpha}_k
\tag{25}
\]
\[
J_{z\alpha/g,k} = \begin{bmatrix} 0 & 0 & -1 \end{bmatrix}^T
\]
\[
\Sigma_{\zeta\alpha,k}^2 = E\!\left[\zeta_\alpha \zeta_\alpha^T\right]
= \begin{bmatrix} \sigma_{\zeta\alpha x,k}^2 & 0 & 0 \\ 0 & \sigma_{\zeta\alpha y,k}^2 & 0 \\ 0 & 0 & \sigma_{\zeta\alpha z,k}^2 \end{bmatrix}
+ \frac{\mu^2}{3}\left\lvert \Delta z_{\alpha,k} \right\rvert^2 I_{3\times3}
\tag{26}
\]
\[
\Sigma_{\zeta v,k}^2 = E\!\left[\zeta_v \zeta_v^T\right]
= \begin{bmatrix} \sigma_{\zeta v x,k}^2 & 0 & 0 \\ 0 & \sigma_{\zeta v y,k}^2 & 0 \\ 0 & 0 & \sigma_{\zeta v z,k}^2 \end{bmatrix}
\]

Here, µ is a coefficient for calculating the RMS of the error of each axis from the predicted motion acceleration. The state vector x is corrected, as in Expression (27), and its error covariance matrix Σx² is updated, by using the Kalman coefficient K.











\[
x_k \leftarrow x_k - K_k\,\Delta z_k,
\qquad
\Sigma_{x,k}^2 \leftarrow \Sigma_{x,k}^2 - K_k H_k \Sigma_{x,k}^2
\tag{27}
\]

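Expressions (23), (24), and (27) together form the usual EKF correction step. The following is a minimal sketch with illustrative names, assuming the residual, Jacobians, and noise matrices have been built as above:

```python
import numpy as np

def kalman_correct(x, Sigma_x, dz, H, V, Sigma_zeta):
    """Correction of Expressions (24) and (27):
    K = Σ Hᵀ (H Σ Hᵀ + V Σ_ζ Vᵀ)⁻¹,  x ← x − K Δz,  Σ ← Σ − K H Σ."""
    S = H @ Sigma_x @ H.T + V @ Sigma_zeta @ V.T   # innovation covariance
    K = Sigma_x @ H.T @ np.linalg.inv(S)           # Kalman coefficient, Expression (24)
    return x - K @ dz, Sigma_x - K @ H @ Sigma_x

# Observation residual of Expression (23): gravity direction and zero motion velocity.
# dz = np.concatenate([C_k @ a_bar - g_local, v_k])
```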

Posture Normalization Model

In the posture normalization model, the update of Expression (28) is performed in order to maintain the posture quaternion and its error covariance at proper values.









\[
x_k = \begin{bmatrix} q_k \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix}
\leftarrow
\begin{bmatrix} q_k / \lvert q_k \rvert \\ v_k \\ b_{\omega,k} \\ b_{\alpha,k} \\ \Delta g_k \end{bmatrix},
\qquad
\Sigma_{x,k}^2 \leftarrow
\begin{bmatrix} D_k'^T D_k' & 0_{4\times10} \\ 0_{10\times4} & I_{10\times10} \end{bmatrix}
\Sigma_{x,k}^2
\begin{bmatrix} D_k'^T D_k' & 0_{4\times10} \\ 0_{10\times4} & I_{10\times10} \end{bmatrix}
\tag{28}
\]

Here, D′ᵀD′ indicates the matrix of Expression (29) for limiting the rank of the posture error and removing the azimuthal component.








\[
D_k'^T D_k' = \begin{bmatrix}
q_{1,k}^2+q_{2,k}^2 & -(q_{0,k}q_{1,k}+q_{2,k}q_{3,k}) & q_{1,k}q_{3,k}-q_{0,k}q_{2,k} & 0 \\
-(q_{0,k}q_{1,k}+q_{2,k}q_{3,k}) & q_{0,k}^2+q_{3,k}^2 & 0 & q_{0,k}q_{2,k}-q_{1,k}q_{3,k} \\
q_{1,k}q_{3,k}-q_{0,k}q_{2,k} & 0 & q_{0,k}^2+q_{3,k}^2 & -(q_{0,k}q_{1,k}+q_{2,k}q_{3,k}) \\
0 & q_{0,k}q_{2,k}-q_{1,k}q_{3,k} & -(q_{0,k}q_{1,k}+q_{2,k}q_{3,k}) & q_{1,k}^2+q_{2,k}^2
\end{bmatrix}
\tag{29}
\]

1-1-8. Initial Value
State Vector and Error Covariance Matrix

Initial values of the state vector x and the error covariance matrix Σx² are given as in Expression (30).











\[
x_0 = \begin{bmatrix} q_0 \\ v_0 \\ b_{\omega,0} \\ b_{\alpha,0} \\ \Delta g_0 \end{bmatrix},
\qquad
\Sigma_{x,0}^2 = \begin{bmatrix}
\Sigma_{q,0}^2 & 0_{4\times3} & 0_{4\times3} & 0_{4\times3} & 0_{4\times1} \\
0_{3\times4} & \Sigma_{v,0}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times4} & 0_{3\times3} & \Sigma_{b\omega,0}^2 & 0_{3\times3} & 0_{3\times1} \\
0_{3\times4} & 0_{3\times3} & 0_{3\times3} & \Sigma_{b\alpha,0}^2 & 0_{3\times1} \\
0_{1\times4} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & \sigma_{\Delta g,0}^2 I_{1\times1}
\end{bmatrix}
\tag{30}
\]

Posture Quaternion

It is necessary that the posture of the IMU be given in quaternion expression. The posture quaternion q can be calculated, as in Expression (31), from the roll (bank) angle φ [rad], the pitch (elevation) angle θ [rad], and the yaw (azimuth) angle ψ [rad] used for the posture of an aircraft and the like.






\[
q = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}
= \begin{bmatrix}
\sin\dfrac{\phi}{2}\sin\dfrac{\theta}{2}\sin\dfrac{\psi}{2} + \cos\dfrac{\phi}{2}\cos\dfrac{\theta}{2}\cos\dfrac{\psi}{2} \\[4pt]
\sin\dfrac{\phi}{2}\cos\dfrac{\theta}{2}\cos\dfrac{\psi}{2} - \cos\dfrac{\phi}{2}\sin\dfrac{\theta}{2}\sin\dfrac{\psi}{2} \\[4pt]
\cos\dfrac{\phi}{2}\sin\dfrac{\theta}{2}\cos\dfrac{\psi}{2} + \sin\dfrac{\phi}{2}\cos\dfrac{\theta}{2}\sin\dfrac{\psi}{2} \\[4pt]
\cos\dfrac{\phi}{2}\cos\dfrac{\theta}{2}\sin\dfrac{\psi}{2} - \sin\dfrac{\phi}{2}\sin\dfrac{\theta}{2}\cos\dfrac{\psi}{2}
\end{bmatrix}
\tag{31}
\]

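Expression (31) in code, as a minimal sketch (illustrative naming; angles in radians, scalar-first quaternion):

```python
import numpy as np

def euler_to_quat(roll, pitch, yaw):
    """Posture quaternion of Expression (31) from roll φ, pitch θ, and yaw ψ [rad]."""
    sp, cp = np.sin(roll / 2), np.cos(roll / 2)
    st, ct = np.sin(pitch / 2), np.cos(pitch / 2)
    sy, cy = np.sin(yaw / 2), np.cos(yaw / 2)
    return np.array([
        sp * st * sy + cp * ct * cy,
        sp * ct * cy - cp * st * sy,
        cp * st * cy + sp * ct * sy,
        cp * ct * sy - sp * st * cy,
    ])
```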

The error covariance matrix Σq² is calculated from the RMS σφ [rad RMS] of the roll angle error and the RMS σθ [rad RMS] of the pitch angle error, as in Expression (32) (the yaw angle error is ignored).











\[
\Sigma_q^2 = \frac{1}{4}\, D'^T
\begin{bmatrix} \sigma_\phi^2 & 0 \\ 0 & \sigma_\theta^2 \end{bmatrix}
D',
\qquad
D' = \begin{bmatrix}
-q_1 & +q_0 & -q_3 & +q_2 \\
-q_2 & +q_3 & +q_0 & -q_1
\end{bmatrix}
\tag{32}
\]

Motion Velocity Vector

If the initial state is stationary, the motion velocity vector v may be set to 0. The error covariance matrix Σv² is given, regardless of whether the state is stationary, based on the RMS values σvx, σvy, and σvz [Gs RMS] of the errors of the motion velocity vector v in the axes, as in Expression (33).









\[
\Sigma_v^2 = \begin{bmatrix}
\sigma_{vx}^2 & 0 & 0 \\
0 & \sigma_{vy}^2 & 0 \\
0 & 0 & \sigma_{vz}^2
\end{bmatrix}
\tag{33}
\]

Residual Biases of Angular Velocity/Acceleration Sensor

If the residual bias bω of the angular velocity sensor and the residual bias bα of the acceleration sensor are known, they are set appropriately. When the residual biases are unknown, zero is given as their expected value. The error covariance matrices Σbω² and Σbα² are given based on the RMS errors σbωx, σbωy, and σbωz [rad/s RMS] of the residual bias of the angular velocity sensor in the axes and the RMS errors σbαx, σbαy, and σbαz [G RMS] of the residual bias of the acceleration sensor in the axes, as in Expression (34).












\[
\Sigma_{b\omega}^2 = \begin{bmatrix}
\sigma_{b\omega x}^2 & 0 & 0 \\
0 & \sigma_{b\omega y}^2 & 0 \\
0 & 0 & \sigma_{b\omega z}^2
\end{bmatrix},
\qquad
\Sigma_{b\alpha}^2 = \begin{bmatrix}
\sigma_{b\alpha x}^2 & 0 & 0 \\
0 & \sigma_{b\alpha y}^2 & 0 \\
0 & 0 & \sigma_{b\alpha z}^2
\end{bmatrix}
\tag{34}
\]

Gravitational-Acceleration Correction Value

If the gravitational acceleration value is known, the difference from the standard value 1 [G] (= 9.80665 [m/s²]) is set appropriately. When the gravitational acceleration value is unknown, zero is given as the expected value. The RMS σΔg [G RMS] of the error of the gravitational acceleration value is applied to the error covariance matrix.


1-1-9. Setting Value
Sampling Interval

Since the posture detection processing from the output of the IMU corresponds in principle to a time integration operation, the sampling interval Δt [s] is an important value and thus needs to be set appropriately.


Output Noise of Angular Velocity/Acceleration Sensor

The noise components η included in the outputs of the angular velocity sensor and the acceleration sensor are treated as zero-mean white Gaussian noise of variance ση², independent for each axis. The magnitude of the noise components is designated by the corresponding RMS values (σηωx, σηωy, σηωz) [rad/s RMS] and (σηαx, σηαy, σηαz) [G RMS], as in Expression (35).











\[
\eta_\omega = \begin{bmatrix} \eta_{\omega x} \\ \eta_{\omega y} \\ \eta_{\omega z} \end{bmatrix}
\sim N\!\left(0,\; \Sigma_{\eta\omega}^2\right),
\qquad
\Sigma_{\eta\omega}^2 = \begin{bmatrix}
\sigma_{\eta\omega x}^2 & 0 & 0 \\
0 & \sigma_{\eta\omega y}^2 & 0 \\
0 & 0 & \sigma_{\eta\omega z}^2
\end{bmatrix}
\tag{35}
\]
\[
\eta_\alpha = \begin{bmatrix} \eta_{\alpha x} \\ \eta_{\alpha y} \\ \eta_{\alpha z} \end{bmatrix}
\sim N\!\left(0,\; \Sigma_{\eta\alpha}^2\right),
\qquad
\Sigma_{\eta\alpha}^2 = \begin{bmatrix}
\sigma_{\eta\alpha x}^2 & 0 & 0 \\
0 & \sigma_{\eta\alpha y}^2 & 0 \\
0 & 0 & \sigma_{\eta\alpha z}^2
\end{bmatrix}
\]

Biases of Angular Velocity/Acceleration Sensor and Instability of Gravitational Acceleration Value

The biases of the angular velocity sensor and the acceleration sensor are not constant and change with time, and the gravitational acceleration value is also considered to change slightly depending on the surrounding environment. Treating each change as an individual random walk, the instability is designated by (σρωx, σρωy, σρωz) [rad/s/√s], (σραx, σραy, σραz) [G/√s], and σρg [G/√s], as in Expression (36).









\[
b_{\omega,k} = b_{\omega,k-1} + N\!\left(0,\; \Sigma_{\rho\omega}^2\,\Delta t\right),
\qquad
\Sigma_{\rho\omega}^2 = \begin{bmatrix}
\sigma_{\rho\omega x}^2 & 0 & 0 \\
0 & \sigma_{\rho\omega y}^2 & 0 \\
0 & 0 & \sigma_{\rho\omega z}^2
\end{bmatrix}
\tag{36}
\]
\[
b_{\alpha,k} = b_{\alpha,k-1} + N\!\left(0,\; \Sigma_{\rho\alpha}^2\,\Delta t\right),
\qquad
\Sigma_{\rho\alpha}^2 = \begin{bmatrix}
\sigma_{\rho\alpha x}^2 & 0 & 0 \\
0 & \sigma_{\rho\alpha y}^2 & 0 \\
0 & 0 & \sigma_{\rho\alpha z}^2
\end{bmatrix}
\]
\[
\Delta g_k = \Delta g_{k-1} + N\!\left(0,\; \sigma_{\rho g}^2\,\Delta t\right)
\]

Motion Acceleration

When the gravitational acceleration is observed in order to correct the posture, the motion acceleration component ζα acts as an observation error. If this observation error were treated as simple white Gaussian noise, the posture would respond sensitively to a large motion acceleration caused by a rapid motion. Thus, a noise model as in Expression (37), whose level changes depending on the magnitude of the estimated motion acceleration (the difference between the observed acceleration and the estimated gravitational acceleration), is used, and the linear coefficient µ [N/A] and the constant terms (σζαx, σζαy, σζαz) [G RMS] are used as setting items.







\[
\zeta_{\alpha,k} \sim N\!\left(0,\; \Sigma_{\zeta\alpha,k}^2\right),
\qquad
\Sigma_{\zeta\alpha,k}^2 = \begin{bmatrix}
\sigma_{\zeta\alpha x}^2 & 0 & 0 \\
0 & \sigma_{\zeta\alpha y}^2 & 0 \\
0 & 0 & \sigma_{\zeta\alpha z}^2
\end{bmatrix}
+ \frac{\mu^2}{3}\left\lvert \Delta z_{\alpha,k} \right\rvert^2 I_{3\times3}
\tag{37}
\]

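The level-dependent noise of Expression (37) amounts to one line of linear algebra; a minimal sketch (illustrative naming):

```python
import numpy as np

def motion_accel_noise(sigma_const, mu, dz_alpha):
    """Observation-noise covariance Σ²_{ζα,k} of Expression (37): a constant diagonal
    part plus a term that grows with the estimated motion acceleration |Δz_{α,k}|."""
    return np.diag(np.asarray(sigma_const) ** 2) \
        + (mu ** 2 / 3.0) * float(dz_alpha @ dz_alpha) * np.eye(3)
```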

Motion Velocity

In the observation of the zero motion velocity, the motion velocity of the IMU is observed to be substantially zero in the long term. The motion velocity component ζv appearing in the short term acts as an observation error. This observation error is treated as white Gaussian noise independent for each axis, and its magnitude is designated by the RMS values (σζvx, σζvy, σζvz) [Gs RMS], as in Expression (38).







\[
\zeta_v \sim N\!\left(0,\; \Sigma_{\zeta v}^2\right),
\qquad
\Sigma_{\zeta v}^2 = \begin{bmatrix}
\sigma_{\zeta v x}^2 & 0 & 0 \\
0 & \sigma_{\zeta v y}^2 & 0 \\
0 & 0 & \sigma_{\zeta v z}^2
\end{bmatrix}
\tag{38}
\]

1-1-10. Limitation of Yaw-Axis Component of Bias Error in Angular Velocity Sensor

When an azimuth observation section such as a magnetic sensor is not provided, the yaw-axis (vertical) component of the bias error of the angular velocity sensor also increases monotonically. Normally, the yaw-axis direction changes with the posture in the sensor coordinates. For example, even if the z-axis of the angular velocity sensor coincides with the yaw axis at a certain time point and its error estimate therefore increases, the z-axis becomes inclined away from the yaw axis (for example, toward a horizontal-axis direction) when the posture changes, correction is performed by observing the gravitational acceleration, and the error estimate decreases. However, when the posture changes little and substantially the same posture continues for a long time, the error estimate of the yaw-axis component increases and the feedback gain increases. This increase of the feedback gain causes the bias estimation value of the angular velocity sensor to change unintentionally, and thus causes the azimuth of the posture to drift. Moreover, it is unlikely that the bias estimation error of even the yaw-axis component of a practical angular velocity sensor increases beyond, for example, the initial bias error without limit. Accordingly, an upper limit value is provided only for the yaw-axis component of the bias estimation error of the angular velocity sensor in the state error covariance matrix, and the yaw-axis component is limited so as not to exceed the upper limit value.


Firstly, as in Expression (39), the variance σbωv² of the yaw-axis component is calculated from the error covariance matrix Σbω² of the residual bias of the angular velocity sensor, based on the current posture quaternion.









\[
\sigma_{b\omega v}^2 = c_v^T\, \Sigma_{b\omega}^2\, c_v
= \begin{bmatrix} 0_7^T & c_v^T & 0_4^T \end{bmatrix}
\Sigma_x^2
\begin{bmatrix} 0_7 \\ c_v \\ 0_4 \end{bmatrix},
\qquad
c_v = \begin{bmatrix} 2(q_1 q_3 - q_0 q_2) \\ 2(q_2 q_3 + q_0 q_1) \\ q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}
\tag{39}
\]

When the variance σbωv² of the yaw-axis component exceeds the upper limit value σbωmax², the yaw-axis component is limited as in Expression (40).











\[
\Sigma_x^2 \leftarrow L_\chi\, \Sigma_x^2\, L_\chi^T,
\qquad
L_\chi = \begin{bmatrix}
I_{7\times7} & 0_{7\times3} & 0_{7\times4} \\
0_{3\times7} & L_{b\omega} & 0_{3\times4} \\
0_{4\times7} & 0_{4\times3} & I_{4\times4}
\end{bmatrix}
\tag{40}
\]
\[
L_{b\omega} = C^T
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & \sqrt{\sigma_{b\omega\max}^2 / \sigma_{b\omega v}^2}
\end{bmatrix}
C,
\]
where C is the coordinate transformation matrix of Expression (9) evaluated with the current posture quaternion.

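Expressions (39) and (40) project the gyro-bias covariance onto the current vertical axis and rescale only that direction. A minimal sketch follows, reusing quat_to_matrix from the earlier example (the function name and interface are illustrative):

```python
import numpy as np

def limit_yaw_bias_variance(Sigma_bw, q, sigma_max2):
    """Limit the yaw-axis component of the gyro-bias error covariance,
    following Expressions (39) and (40)."""
    q0, q1, q2, q3 = q
    # Vertical (yaw) axis expressed in sensor coordinates: third row of C, Expression (39).
    c_v = np.array([2 * (q1 * q3 - q0 * q2),
                    2 * (q2 * q3 + q0 * q1),
                    q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3])
    sigma_v2 = float(c_v @ Sigma_bw @ c_v)     # yaw-axis variance σ²_{bωv}
    if sigma_v2 <= sigma_max2:
        return Sigma_bw                        # within the upper limit: no change
    C = quat_to_matrix(q)
    scale = np.diag([1.0, 1.0, np.sqrt(sigma_max2 / sigma_v2)])
    L = C.T @ scale @ C                        # L_{bω} of Expression (40)
    return L @ Sigma_bw @ L.T
```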

1-1-11. Countermeasure for Off-Scale of Sensor

An effective measurement range is defined for an actual inertial sensor (angular velocity sensor and acceleration sensor). When a physical quantity (angular velocity or acceleration) exceeding this range is input, an accurate output cannot be obtained because the signal saturates in the inertial sensor. Exceeding the effective measurement range is referred to as "off-scale" below. Normally, the sensor to be used is selected to cover the range to be measured so that off-scale does not occur. Depending on the application, however, a state (off-scale state) in which the range is exceeded for a very short time by an impact or the like must be assumed.


Influence of Sensor Off-Scale

If the angular velocity sensor is in the off-scale state, a large error is included in the angular velocity, and the error in the posture angle obtained by integrating the angular velocity increases. The error in the motion velocity then increases through the integration of the acceleration based on the posture angle having the increased error. If the acceleration sensor is in the off-scale state, a large error is included in the acceleration, and the error in the motion velocity obtained by integrating the acceleration increases. Since the error caused by off-scale is not a stochastic process, its average or variance cannot be obtained, and modeling it in a Kalman filter is difficult because the amount of error is huge and unpredictable. Thus, in the Kalman filter described so far, this error is not recognized as an error to be corrected. Therefore, when the output value of the sensor is out of the effective range of the sensor, this situation is recognized as off-scale, and a proper period after the time point at which the output value left the effective range is defined as a recovery period from the off-scale state. During this period, processing different from that in the normal state is performed.


Off-Scale Recovery Processing

The following processing is applied to the state error covariance matrix to handle the state errors that increase under the influence of off-scale. That is, the error variance components of the covariance matrix are appropriately inflated in response to the increase of the errors, and the covariance components are set to zero in order to separate the increased errors from the other state variable errors. By performing this processing before the correction processing, it is possible to strongly correct the state variables whose errors have increased while suppressing the negative influence on other state variables, such as the bias estimation values, to a minimum.


Angular Velocity Off-Scale Recovery Processing

The processing of Expression (41) is applied in order to handle the errors in the posture angle and the motion velocity, which increase due to the off-scale state of the angular velocity sensor. That is, the error component (posture error component) of the posture quaternion q in the error covariance matrix Σx² is appropriately increased in response to the increase of the posture angle error. The correlation component (covariance component) between the posture error component and the error components other than the posture error component is set to zero in the error covariance matrix Σx², in order to separate the posture error component from the other error components. The error components other than the posture error component include the motion velocity error component, the error component of the residual bias bω of the angular velocity sensor (the bias error component of the angular velocity), the error component of the residual bias bα of the acceleration sensor (the bias error component of the acceleration), and the error component of the gravitational-acceleration correction value Δg. Setting the correlation component (covariance component) to zero can include setting it to a value approximate to zero or a value that allows the predetermined error component to be separated from the other error components.


The error component of the motion velocity vector v (the motion velocity error component) in the error covariance matrix Σx² is appropriately increased in response to the increase of the motion velocity error. The correlation component (covariance component) between the motion velocity error component and the error components other than the motion velocity error component is set to zero in the error covariance matrix Σx², in order to separate the motion velocity error component from the other error components. The error components other than the motion velocity error component include the posture error component, the bias error component of the angular velocity, the bias error component of the acceleration, and the error component of the gravitational-acceleration correction value. In Expression (41), σoq² indicates the variance of the posture error at the time of off-scale recovery, and σov² indicates the variance of the motion velocity error at the time of off-scale recovery.











\[
\Sigma_x^2 \leftarrow R_\omega\, \Sigma_x^2\, R_\omega^T + \Sigma_{R\omega}^2
\tag{41}
\]
\[
R_\omega = \begin{bmatrix} 0_{7\times7} & 0_{7\times7} \\ 0_{7\times7} & I_{7\times7} \end{bmatrix},
\qquad
\Sigma_{R\omega}^2 = \begin{bmatrix}
\dfrac{\sigma_{oq}^2}{4}\, D'^T D' & 0_{4\times3} & 0_{4\times7} \\
0_{3\times4} & \sigma_{ov}^2 I_{3\times3} & 0_{3\times7} \\
0_{7\times4} & 0_{7\times3} & 0_{7\times7}
\end{bmatrix}
\]
With the block partition
\[
\Sigma_x^2 = \begin{bmatrix}
\Sigma_q^2 & \Sigma_{qv} & \Sigma_{q\omega} & \Sigma_{q\alpha} & \sigma_{qg} \\
\Sigma_{qv}^T & \Sigma_v^2 & \Sigma_{v\omega} & \Sigma_{v\alpha} & \sigma_{vg} \\
\Sigma_{q\omega}^T & \Sigma_{v\omega}^T & \Sigma_\omega^2 & \Sigma_{\omega\alpha} & \sigma_{\omega g} \\
\Sigma_{q\alpha}^T & \Sigma_{v\alpha}^T & \Sigma_{\omega\alpha}^T & \Sigma_\alpha^2 & \sigma_{\alpha g} \\
\sigma_{qg}^T & \sigma_{vg}^T & \sigma_{\omega g}^T & \sigma_{\alpha g}^T & \sigma_g^2
\end{bmatrix},
\]
this operation yields
\[
\Sigma_x^2 = \begin{bmatrix}
\dfrac{\sigma_{oq}^2}{4}\, D'^T D' & 0 & 0 & 0 & 0 \\
0 & \sigma_{ov}^2 I_{3\times3} & 0 & 0 & 0 \\
0 & 0 & \Sigma_\omega^2 & \Sigma_{\omega\alpha} & \sigma_{\omega g} \\
0 & 0 & \Sigma_{\omega\alpha}^T & \Sigma_\alpha^2 & \sigma_{\alpha g} \\
0 & 0 & \sigma_{\omega g}^T & \sigma_{\alpha g}^T & \sigma_g^2
\end{bmatrix},
\]
where D′ᵀD′ is the matrix of Expression (29).

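The structure of Expression (41) is easier to see in code: zero the posture and velocity error blocks together with their cross-covariances, then insert the recovery variances. A minimal sketch (illustrative naming; DtD is the matrix D′ᵀD′ of Expression (29)):

```python
import numpy as np

def gyro_offscale_recovery(Sigma_x, DtD, sigma_oq2, sigma_ov2):
    """Angular velocity off-scale recovery of Expression (41) on the
    14x14 covariance (state ordering q, v, b_ω, b_α, Δg)."""
    R = np.zeros((14, 14))
    R[7:, 7:] = np.eye(7)                        # keep only the bias and Δg error blocks
    Sigma_R = np.zeros((14, 14))
    Sigma_R[0:4, 0:4] = (sigma_oq2 / 4.0) * DtD  # inflated posture error, decorrelated
    Sigma_R[4:7, 4:7] = sigma_ov2 * np.eye(3)    # inflated velocity error, decorrelated
    return R @ Sigma_x @ R.T + Sigma_R
```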

Acceleration Off-Scale Recovery Processing

The processing of Expression (42) is applied in order to handle the error in the motion velocity, which increases due to the off-scale state of the acceleration sensor. That is, the motion velocity error component in the error covariance matrix Σx² is appropriately increased in response to the increase of the motion velocity error. The correlation component (covariance component) between the motion velocity error component and the error components other than the motion velocity error component is set to zero in the error covariance matrix Σx², in order to separate the motion velocity error component from the other state variable errors.













\[
\Sigma_x^2 \leftarrow R_\alpha\, \Sigma_x^2\, R_\alpha^T + \Sigma_{R\alpha}^2
\tag{42}
\]
\[
R_\alpha = \begin{bmatrix}
I_{4\times4} & 0_{4\times3} & 0_{4\times7} \\
0_{3\times4} & 0_{3\times3} & 0_{3\times7} \\
0_{7\times4} & 0_{7\times3} & I_{7\times7}
\end{bmatrix},
\qquad
\Sigma_{R\alpha}^2 = \begin{bmatrix}
0_{4\times4} & 0_{4\times3} & 0_{4\times7} \\
0_{3\times4} & \sigma_{ov}^2 I_{3\times3} & 0_{3\times7} \\
0_{7\times4} & 0_{7\times3} & 0_{7\times7}
\end{bmatrix}
\]
This keeps the posture and bias error blocks, replaces the motion velocity error block with σov² I, and zeroes the correlation components between the motion velocity error and the other error components:
\[
\Sigma_x^2 = \begin{bmatrix}
\Sigma_q^2 & 0_{4\times3} & \Sigma_{q\omega} & \Sigma_{q\alpha} & \sigma_{qg} \\
0_{3\times4} & \sigma_{ov}^2 I_{3\times3} & 0 & 0 & 0 \\
\Sigma_{q\omega}^T & 0 & \Sigma_\omega^2 & \Sigma_{\omega\alpha} & \sigma_{\omega g} \\
\Sigma_{q\alpha}^T & 0 & \Sigma_{\omega\alpha}^T & \Sigma_\alpha^2 & \sigma_{\alpha g} \\
\sigma_{qg}^T & 0 & \sigma_{\omega g}^T & \sigma_{\alpha g}^T & \sigma_g^2
\end{bmatrix}
\]

1-2. Flowchart of Posture Estimation Method


FIG. 1 is a flowchart illustrating an example of the procedures of the posture estimation method according to the embodiment. The procedures in FIG. 1 are performed by a posture estimation device that estimates the posture of an object to which an IMU is attached, for example. The object is not particularly limited; for example, a vehicle, an electronic device, exercise equipment, a person, or an animal may be the object. The IMU may be detachable from the object, or may be provided in a state of being fixed to the object such that detaching it is not possible, for example, by being mounted in the object. The posture estimation device may be, for example, a personal computer (PC) or one of various portable devices such as a smartphone.


As illustrated in FIG. 1, in the posture estimation method of the embodiment, the posture estimation device firstly initializes the state vector xk and the error covariance matrix Σx,k² as error information (initialization step S1). That is, the posture estimation device sets k = 0 and sets (initializes) the state vector x0 and the error covariance matrix Σx,0² at a time point t0. Specifically, the posture estimation device sets the elements q0, v0, bω,0, bα,0, and Δg0 of the state vector x0 and Σq,0², Σv,0², Σbω,0², Σbα,0², and σΔg,0² included in the error covariance matrix Σx,0², which are expressed as in Expression (30).


For example, the posture estimation device may set the initial posture of the inertial measurement unit (IMU) to a roll angle, a pitch angle, and a yaw angle determined in advance, and may set q0 by substituting the roll angle, the pitch angle, and the yaw angle into Expression (31). Alternatively, the posture estimation device may acquire acceleration data from the acceleration sensor in a state where the inertial measurement unit (IMU) is stationary, specify the direction of the gravitational acceleration from the acceleration data, calculate a roll angle and a pitch angle, and set the yaw angle to a predetermined value (for example, 0). Then, the posture estimation device may set q0 by substituting the roll angle, the pitch angle, and the yaw angle into Expression (31). The posture estimation device sets Σq,0² by substituting the RMS σφ of the roll angle error and the RMS σθ of the pitch angle error into Expression (32).


For example, the posture estimation device treats a state where the inertial measurement unit (IMU) is stationary as the initial state and sets v0 to 0. Then, the posture estimation device sets Σv,0² by substituting the RMS values σvx, σvy, and σvz of the errors of the motion velocity vector v in the axes into Expression (33).


If the residual bias bω of the angular velocity sensor and the residual bias bα of the acceleration sensor are known, the posture estimation device sets the values of the residual biases as bω,0 and bα,0. If bω and bα are unknown, the posture estimation device sets bω,0 and bα,0 to zero. The posture estimation device sets Σbω,0² and Σbα,0² by substituting the RMS errors σbωx, σbωy, and σbωz of the residual bias of the angular velocity sensor in the axes and the RMS errors σbαx, σbαy, and σbαz of the residual bias of the acceleration sensor in the axes into Expression (34).


If the gravitational acceleration value is known, the posture estimation device sets the difference from 1 G as Δg0. If the gravitational acceleration value is unknown, the posture estimation device sets Δg0 to zero. The posture estimation device sets the RMS σΔg of the error of the gravitational acceleration value in σΔg,0².


Then, the posture estimation device acquires measured values of the inertial measurement unit (IMU) (measured-value acquisition step S2). Specifically, the posture estimation device waits until the sampling interval Δt elapses. When the sampling interval Δt has elapsed, the posture estimation device sets k = k+1 and tk = tk−1 + Δt and acquires the angular velocity data dω,k and the acceleration data dα,k from the inertial measurement unit (IMU).


Then, the posture estimation device performs prediction processing (also referred to as time update processing) of the state vector xk (which includes, as an element, the posture quaternion qk being the posture information at a time point tk) and the error covariance matrix Σx,k² being the error information at the time point tk (prediction step S3).



FIG. 2 is a flowchart illustrating an example of the procedures of the step S3 in FIG. 1. As illustrated in FIG. 2, the posture estimation device firstly performs processing of removing the residual biases bω,k−1 and bα,k−1, which were estimated at the time point tk−1, from the angular velocity data dω,k and the acceleration data dα,k at the time point tk. The posture estimation device performs this processing with Expression (19) (bias removal step S31).


Then, the posture estimation device performs processing (posture-change-amount calculation processing) of calculating the posture change amount Δqk at the time point tk based on the output of the angular velocity sensor (posture-change-amount calculation step S32). Specifically, the posture estimation device calculates the posture change amount Δqk with Expression (6), based on the angular velocity data dω,k.


The posture estimation device performs processing (velocity-change-amount calculation processing) of calculating the velocity change amount (Ckλk − gk−1)Δt of the object based on the output of the acceleration sensor and the output of the angular velocity sensor (velocity-change-amount calculation step S33). Specifically, the posture estimation device calculates the velocity change amount (Ckλk − gk−1)Δt with Expressions (9), (10), and (12), based on the acceleration data dα,k and the angular velocity data dω,k.


The posture estimation device performs processing (posture information prediction processing) of predicting the posture quaternion qk being the posture information of the object at the time point tk, by using the posture change amount Δqk (posture information prediction step S34). In the posture information prediction step S34, the posture estimation device further performs processing (velocity information prediction processing) of predicting the motion velocity vector vk being the velocity information of the object at the time point tk by using the velocity change amount (Ckλk-gk-1)Δt. Specifically, in the posture information prediction step S34, the posture estimation device performs processing of predicting the state vector xk including the posture quaternion qk and the motion velocity vector vk as elements, by Expressions (6), (12), and (19).
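
The prediction can be pictured with the following sketch, assuming the Hamilton quaternion convention with components ordered (w, x, y, z); whether Δqk multiplies on the right or the left depends on the convention behind Expression (6), so the order used here is an assumption.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product a * b for quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def predict_state(q_prev, v_prev, dq, dv):
    """Time-update sketch: fold the posture change dq into the posture
    quaternion and the velocity change dv = (C*lambda - g)*dt into the
    motion velocity, in the spirit of Expressions (6), (12), and (19)."""
    q_pred = quat_mul(q_prev, dq)             # right-multiplication assumed
    q_pred = q_pred / np.linalg.norm(q_pred)  # keep the quaternion unit-length
    v_pred = np.asarray(v_prev) + np.asarray(dv)
    return q_pred, v_pred
```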


Lastly, the posture estimation device performs processing of updating the error covariance matrix ∑x,k² at the time point tk by Expressions (20) and (21) (error information update step S35).


Returning to FIG. 1, the posture estimation device performs processing (rotational error-component removal processing) of removing a rotational error component around a reference vector in the error covariance matrix ∑x,k² being the error information (rotational error-component removal step S4). The reference vector is a vector observed by the observation section. In the embodiment, the reference vector is the gravitational acceleration vector observed by the acceleration sensor being the observation section, and the rotational error around the reference vector is an azimuth error. Thus, in the step S4, the posture estimation device performs processing of removing the azimuth error in the error covariance matrix ∑x,k². Specifically, by Expressions (28) and (29), the posture estimation device generates the error covariance matrix ∑x,k² in which the rank of the error covariance matrix ∑q,k² of the posture quaternion qk is limited and the azimuth error component εθz is removed.


Then, the posture estimation device performs off-scale recovery processing (error information adjustment step S5). Specifically, in the error information adjustment step S5, the posture estimation device determines whether or not the output of the angular velocity sensor is within the effective range. When it is determined that the output of the angular velocity sensor is not within the effective range, the posture estimation device performs processing of increasing the posture error component in the error covariance matrix ∑x,k² being the error information and reducing the correlation component between the posture error component and the error component other than the posture error component in the error covariance matrix ∑x,k². In the error information adjustment step S5, the posture estimation device also determines whether or not the output of the acceleration sensor is within the effective range. When it is determined that the output of the angular velocity sensor or the output of the acceleration sensor is not within the corresponding effective range, the posture estimation device performs processing of increasing the motion velocity error component in the error covariance matrix ∑x,k² and reducing the correlation component between the motion velocity error component and the error component other than the motion velocity error component in the error covariance matrix ∑x,k².



FIG. 3 is a flowchart illustrating an example of procedures of the step S5 in FIG. 1. As illustrated in FIG. 3, firstly, the posture estimation device determines whether or not the output (angular velocity data dω) of the angular velocity sensor is within the effective range (Step S51). When it is determined that the output of the angular velocity sensor is within the effective range (Y in Step S51), the posture estimation device checks the remaining length of an angular-velocity off-scale recovery period (Step S52a). When it is determined that the output of the angular velocity sensor is not within the effective range (N in Step S51), the posture estimation device starts or prolongs the angular-velocity off-scale recovery period (Step S52b).


Then, the posture estimation device determines whether or not the output (acceleration data dα) of the acceleration sensor is within the effective range (Step S53). When the posture estimation device determines that the output of the acceleration sensor is within the effective range (Y in Step S53), the posture estimation device checks the remaining length of an acceleration off-scale recovery period (Step S54a). When the posture estimation device determines that the output of the acceleration sensor is not within the effective range (N in Step S53), the posture estimation device starts or prolongs the acceleration off-scale recovery period (Step S54b).


Then, the posture estimation device determines whether or not the current time is in the angular-velocity off-scale recovery period, that is, a first period after it is determined that the output of the angular velocity sensor is not within the effective range (Step S55). When the posture estimation device determines that the current time is in the angular-velocity off-scale recovery period (Y in Step S55), the posture estimation device increases the posture error component in the error covariance matrix ∑x,k² being the error information (Step S56a). The posture estimation device reduces the correlation component between the posture error component and the error component other than the posture error component in the error covariance matrix ∑x,k², for example, sets the correlation component to zero (Step S56b). The posture estimation device also increases the motion velocity error component in the error covariance matrix ∑x,k² (Step S58a) and reduces the correlation component between the motion velocity error component and the error component other than the motion velocity error component in the error covariance matrix ∑x,k², for example, sets the correlation component to zero (Step S58b). In practice, the posture estimation device performs the processes of Step S56a, Step S56b, Step S58a, and Step S58b by updating the error covariance matrix ∑x,k² with Expression (41).


When the posture estimation device determines that the current time is not in the angular-velocity off-scale recovery period (N in Step S55), the posture estimation device determines whether or not the current time is in the acceleration off-scale recovery period, that is, a second period after it is determined that the output of the acceleration sensor is not within the effective range (Step S57). When the posture estimation device determines that the current time is in the acceleration off-scale recovery period (Y in Step S57), the posture estimation device increases the motion velocity error component in the error covariance matrix ∑x,k² (Step S58a) and reduces the correlation component between the motion velocity error component and the error component other than the motion velocity error component in the error covariance matrix ∑x,k², for example, sets the correlation component to zero (Step S58b). In practice, the posture estimation device performs the processes of Step S58a and Step S58b by updating the error covariance matrix ∑x,k² with Expression (42).


When the posture estimation device determines that the current time is in neither the angular-velocity off-scale recovery period nor the acceleration off-scale recovery period (N in Step S55 and N in Step S57), the posture estimation device does not perform the processes of Step S56a, Step S56b, Step S58a, and Step S58b.
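
The flow of FIG. 3 can be summarized by the following Python sketch. The recovery-period length, the dictionary-based bookkeeping, and the generic covariance adjustment are assumptions; Expressions (41) and (42) define the concrete covariance updates that inflate_block merely stands in for.

```python
import numpy as np

RECOVERY_LEN = 10  # assumed recovery-period length, in samples

def adjust_error_info(gyro_in_range, accel_in_range, state):
    """Decide which error components to inflate (Steps S51 to S58).
    'state' keeps the remaining samples of the two recovery periods."""
    if not gyro_in_range:
        state['gyro'] = RECOVERY_LEN    # start or prolong (Step S52b)
    if not accel_in_range:
        state['accel'] = RECOVERY_LEN   # start or prolong (Step S54b)

    inflate_posture = state['gyro'] > 0                       # Step S55
    inflate_velocity = inflate_posture or state['accel'] > 0  # Step S57

    # consume one sample of each remaining period (Steps S52a/S54a)
    state['gyro'] = max(0, state['gyro'] - 1)
    state['accel'] = max(0, state['accel'] - 1)
    return inflate_posture, inflate_velocity

def inflate_block(P, idx, scale):
    """Grow the covariance block at indices idx and zero its correlations
    with all other states; Expressions (41) and (42) play this role for
    the posture and motion velocity blocks of the actual filter."""
    P = P.copy()
    idx = np.asarray(idx)
    others = np.setdiff1d(np.arange(P.shape[0]), idx)
    P[np.ix_(idx, others)] = 0.0
    P[np.ix_(others, idx)] = 0.0
    P[np.ix_(idx, idx)] *= scale
    return P
```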


Returning to FIG. 1, the posture estimation device performs processing (bias error limitation processing) of limiting the bias error component of the angular velocity around the reference vector, in the error covariance matrix ∑x,k² being the error information (bias error limitation step S6). As described above, in the embodiment, the reference vector is the gravitational acceleration vector. Thus, in Step S6, the posture estimation device limits the bias error component of the angular velocity around the gravitational acceleration vector, that is, limits the vertical component (yaw-axis component) of the bias error of the angular velocity.



FIG. 4 is a flowchart illustrating an example of procedures of Step S6 in FIG. 1. As illustrated in FIG. 4, firstly, the posture estimation device performs processing of calculating the variance σbωv² of the vertical component of the bias error in the angular velocity by Expression (39), using the state vector xk and the error covariance matrix ∑x,k² at the time point tk (bias-error vertical component calculation step S61).


Then, the posture estimation device performs processing of determining whether or not the variance σbωv² exceeds the upper limit value (bias error determination step S62).


When the variance σbωv² exceeds the upper limit value (Y in Step S62), the posture estimation device performs limitation operation processing of limiting the vertical component of the bias error in the angular velocity (limitation operation step S63). Specifically, when the variance σbωv² exceeds the upper limit value σbωmax², the posture estimation device updates the error covariance matrix ∑x,k² by Expression (40). When the variance σbωv² is equal to or less than the upper limit value (N in Step S62), the posture estimation device does not perform the limitation operation processing in Step S63.
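
One way to picture Steps S61 to S63: project the gyro-bias covariance block onto the vertical unit vector to obtain σbωv², and cap it with a rank-one downdate. Expression (40) is the patent's concrete update; the downdate below is only an assumed stand-in, chosen because it preserves positive semidefiniteness.

```python
import numpy as np

def limit_vertical_bias(P_bias, vertical, var_max):
    """Cap the variance of the gyro bias error along the vertical.
    P_bias is the 3x3 bias-error covariance block and vertical is the
    unit vector of the local vertical in the sensor frame (Expression
    (39) derives the projection from the posture quaternion)."""
    var_v = float(vertical @ P_bias @ vertical)  # sigma_bwv^2 (Step S61)
    if var_v <= var_max:                         # Step S62
        return P_bias
    Pu = P_bias @ vertical
    # Rank-one downdate: reduces the vertical variance exactly to
    # var_max while keeping the matrix positive semidefinite (Step S63).
    return P_bias - ((var_v - var_max) / var_v**2) * np.outer(Pu, Pu)
```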


Returning to FIG. 1, the posture estimation device performs correction processing (also referred to as observation update processing) of the state vector xk and the error covariance matrix ∑x,k² at the time point tk (correction step S7). In the embodiment, the posture estimation device performs processing (posture information correction step) of correcting the posture quaternion qk being the posture information of the object, which has been predicted in Step S3, based on the error covariance matrix ∑x,k² being the error information. Specifically, in the posture information correction step, the posture estimation device corrects the state vector xk including the posture quaternion qk as an element, based on the observation residual Δza,k being the difference between the acceleration vector obtained based on the output of the acceleration sensor and the gravitational acceleration vector being the reference vector. FIG. 5 is a flowchart illustrating an example of procedures of Step S7 in FIG. 1.


As illustrated in FIG. 5, firstly, the posture estimation device performs processing of calculating the observation residual Δzk, a Kalman coefficient Kk, and a transformation matrix Hk at the time point tk by Expressions (23) and (24) (correction coefficient calculation step S71).


Then, the posture estimation device performs processing (posture information correction processing) of correcting the posture quaternion qk being the predicted posture information of the object at the time point tk (posture information correction step S72). Specifically, the posture estimation device performs processing of correcting the state vector xk at the time point tk with Expression (27), the observation residual Δzk, and the Kalman coefficient Kk.


The posture estimation device performs processing of normalizing the state vector xk at the time point tk by Expression (28) (normalization step S73).


Then, the posture estimation device performs processing of correcting the error covariance matrix ∑x,k² at the time point tk with Expression (27), the Kalman coefficient Kk, and the transformation matrix Hk (error information correction step S74).
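
Step S7 follows the shape of a standard extended-Kalman observation update, sketched below; the concrete residual, transformation matrix Hk, noise covariance, and gain are those of Expressions (23) to (27), so here they are simply passed in as arguments.

```python
import numpy as np

def observation_update(x, P, residual, H, R):
    """Generic EKF observation update (Step S7). x is the state vector
    and P its error covariance; residual, H, and R come from the
    concrete observation model of Expressions (23) to (26)."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman coefficient (cf. (24))
    x_new = x + K @ residual              # state correction (cf. (27))
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance correction
    return x_new, P_new
```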


Returning to FIG. 1, the posture estimation device repeats the processes of Step S2 to Step S7 until an end instruction is received (N in Step S8). If the end instruction is received (Y in Step S8), the posture estimation device ends the processing.


The order of the steps in FIG. 1 can be appropriately changed. For example, the order of Step S4, Step S5, and Step S6 may be changed, and at least one of Step S4, Step S5, and Step S6 may be performed after Step S7.


As described above, according to the posture estimation method in the embodiment, the posture change amount and the velocity change amount of an object are calculated with Expressions (6) and (12) derived from Expressions (1) and (2) of the output model of the IMU, and the posture of the object is estimated with the posture change amount and the velocity change amount. In Expressions (6) and (12), the posture change amount and the velocity change amount are calculated with not only the first-order term of Δt but also the second-order and third-order terms thereof, so that the calculation error in the posture change amount and the velocity change amount is reduced in comparison to that in the related art.


When the object rotates, the coordinate transformation matrix Ck changes. However, the coordinate transformation matrix Ck is calculated from the elements of the posture quaternion qk estimated by the Kalman filter. Thus, when the object rotates rapidly, the coordinate transformation matrix Ck may not immediately follow the rotation of the object. Regarding this, according to the posture estimation method in the embodiment, since the acceleration λk is calculated with not only the acceleration but also the angular velocity in Expression (12), the rotation of the object is immediately reflected in the acceleration λk. Thus, even when the object rotates rapidly, it is possible to reduce deterioration of the calculation accuracy of the velocity change amount.


Further, according to the posture estimation method in the embodiment, Expression (23) uses the fact that the observation residual of the gravitational acceleration obtained from the output of the acceleration sensor and the motion velocity of the object become zero in the long term, and the Kalman coefficient Kk in Expression (24) is calculated with the observation residual of the zero motion velocity. Thus, even though observation information of the azimuth is not provided, it is possible to estimate the posture of the object with high accuracy.


In the posture estimation method in the embodiment, when the output of the angular velocity sensor is not within the effective range, the posture error component in the error covariance matrix ∑x,k² is increased, and the correlation component between the posture error component and the error component other than the posture error component is set to zero, by Expression (41). Accordingly, even when an angular velocity exceeding the effective measurement range of the angular velocity sensor is temporarily input and the output of the angular velocity sensor is temporarily not within the effective range, it is possible to reduce a concern of decreasing the estimation accuracy of the posture of the object.


In the posture estimation method in the embodiment, when the output of the angular velocity sensor is not within the effective range, the motion velocity error component in the error covariance matrix ∑x,k² is increased, and the correlation component between the motion velocity error component and the error component other than the motion velocity error component is set to zero, by Expression (41). In addition, when the output of the acceleration sensor is not within the effective range, the motion velocity error component in the error covariance matrix ∑x,k² is increased, and the correlation component between the motion velocity error component and the error component other than the motion velocity error component is set to zero, by Expression (42). Accordingly, even when an angular velocity exceeding the effective measurement range of the angular velocity sensor or an acceleration exceeding the effective measurement range of the acceleration sensor is temporarily input and the corresponding output is temporarily not within its effective range, it is possible to reduce a concern of decreasing the estimation accuracy of the motion velocity of the object.


Since information regarding the azimuth of the object is not included in the output of the angular velocity sensor and the output of the acceleration sensor, the azimuth error of the object is not corrected. If the azimuth error included in the updated posture error remains, the reliability of the azimuth monotonously decreases and the posture error monotonously increases, so that the estimation accuracy of the posture may deteriorate. Regarding this, according to the posture estimation method in the embodiment, the azimuth error included in the posture error of the object, which has been updated by using the output of the angular velocity sensor 12 and the output of the acceleration sensor 14, is removed by Expression (17). Thus, it is possible to reduce a concern of the posture error monotonously increasing and a concern of the estimation accuracy of the posture decreasing.


The output of the angular velocity sensor and the output of the acceleration sensor do not include information regarding the azimuth of the object. Thus, for example, when the object stays in substantially the same posture for a long term, the vertical component of the updated bias error of the angular velocity sensor monotonously increases, and the Kalman coefficient Kk becomes too large. Consequently, there is a concern of deteriorating the estimation accuracy of the posture. Regarding this, according to the posture estimation method in the embodiment, when the vertical component of the bias error in the angular velocity sensor exceeds the upper limit value, the vertical component of the bias error is limited to the upper limit value by Expression (40). Thus, it is possible to reduce a concern of the Kalman coefficient Kk becoming too large and a concern of the estimation accuracy of the posture decreasing.


As described above, according to the posture estimation method in the embodiment, it is possible to reduce a concern of decreasing the estimation accuracy of the posture of the object even when the change in the posture of the object is small, and to estimate the posture of the object with sufficient accuracy.


2. Posture Estimation Device
2-1. Configuration of Posture Estimation Device


FIG. 6 is a diagram illustrating an example of a configuration of the posture estimation device in the embodiment. As illustrated in FIG. 6, in the embodiment, a posture estimation device 1 includes a processing unit 20, a ROM 30, a RAM 40, a recording medium 50, and a communication unit 60. The posture estimation device 1 estimates the posture of an object based on an output of an inertial measurement unit (IMU) 10. In the embodiment, the posture estimation device 1 may have a configuration obtained by changing or removing some components or by adding another component.


As illustrated in FIG. 6, in the embodiment, the posture estimation device 1 is separated from the IMU 10. However, the posture estimation device 1 may include the IMU 10. The IMU 10 and the posture estimation device 1 may be accommodated in one casing. The IMU 10 may be separated or separable from the main body in which the posture estimation device 1 is accommodated. In the former case, the posture estimation device 1 is mounted on an object. In the latter case, the IMU 10 is mounted on the object.


In the embodiment, the IMU 10 includes the angular velocity sensor 12, the acceleration sensor 14, and a signal processing unit 16. In the embodiment, the IMU 10 may have a configuration obtained by changing or removing some components or by adding another component.


The angular velocity sensor 12 measures an angular velocity in each of directions of three axes which intersect with each other and which are, ideally, perpendicular to each other. The angular velocity sensor 12 outputs an analog signal depending on the magnitude and the orientation of the measured three-axis angular velocity.


The acceleration sensor 14 measures an acceleration in each of directions of three axes which intersect with each other and which are, ideally, perpendicular to each other. The acceleration sensor 14 outputs an analog signal depending on the magnitude and the orientation of the measured three-axis acceleration.


The signal processing unit 16 performs processing of sampling the output signal of the angular velocity sensor 12 at a predetermined sampling interval Δt to convert the output signal into angular velocity data having a digital value. The signal processing unit 16 similarly performs processing of sampling the output signal of the acceleration sensor 14 at the predetermined sampling interval Δt to convert the output signal into acceleration data having a digital value.


Ideally, the angular velocity sensor 12 and the acceleration sensor 14 are attached to the IMU 10 such that their three axes coincide with the three axes (x-axis, y-axis, and z-axis) of the sensor coordinate system, which is an orthogonal coordinate system defined for the IMU 10. In practice, however, an error occurs in the mounting angle. Thus, the signal processing unit 16 performs processing of converting the angular velocity data and the acceleration data into data in the xyz coordinate system, by using a correction parameter which has been calculated in advance in accordance with the error in the mounting angle. The signal processing unit 16 also performs processing of correcting the angular velocity data and the acceleration data for temperature in accordance with the temperature characteristics of the angular velocity sensor 12 and the acceleration sensor 14.


A function of A/D conversion or temperature correction may be embedded in the angular velocity sensor 12 and the acceleration sensor 14.


The IMU 10 outputs angular velocity data dω and acceleration data dα after the processing by the signal processing unit 16 to the processing unit 20 of the posture estimation device 1.


The ROM 30 stores programs used when the processing unit 20 performs various types of processing, and various programs or various types of data for realizing application functions, for example.


The RAM 40 is a storage unit that is used as a work area of the processing unit 20, and temporarily stores a program or data read out from the ROM 30 or operation results obtained by the processing unit 20 performing processing in accordance with various programs, for example.


The recording medium 50 is a non-volatile storage unit that stores data required to be preserved for a long term among pieces of data generated by processing of the processing unit 20. The recording medium 50 may store programs used when the processing unit 20 performs various types of processing, and various programs or various types of data for realizing application functions, for example.


The processing unit 20 performs various types of processing in accordance with the program stored in the ROM 30 or the recording medium 50 or in accordance with the program which is received from a server via a network and then is stored in the RAM 40 or the recording medium 50. In particular, in the embodiment, the processing unit 20 executes the program to function as a bias removal unit 22, a posture-change-amount calculation unit 24, a velocity-change-amount calculation unit 26, and a posture estimation unit 28. Thus, the processing unit 20 performs a predetermined operation on the angular velocity data dω and the acceleration data dα output at an interval of Δt by the IMU 10 so as to perform processing of estimating the posture of the object.


In the embodiment, as illustrated in FIG. 7, the sensor coordinate system (xyz coordinate system constituted by the x-axis, the y-axis, and the z-axis which are perpendicular to each other) as the coordinate system of the IMU 10 and a local-space coordinate system (XYZ coordinate system constituted by an X-axis, a Y-axis, and a Z-axis which are perpendicular to each other) as a coordinate system of a space in which the object exists are considered. The processing unit 20 estimates the posture of the object (may also be referred to as the posture of the IMU 10) in the local-space coordinate system from the three-axis angular velocity and the three-axis acceleration in the sensor coordinate system, which are output from the IMU 10 mounted on the object.


The bias removal unit 22 performs processing of calculating the three-axis angular velocity obtained by removing a bias error from the output of the angular velocity sensor 12 and performs processing of calculating the three-axis acceleration obtained by removing a bias error from the output of the acceleration sensor 14.


The posture-change-amount calculation unit 24 calculates the posture change amount of the object based on the output of the angular velocity sensor 12. Specifically, the posture-change-amount calculation unit 24 performs processing of calculating the posture change amount of the object by approximation with a polynomial expression in which the sampling interval Δt is used as a variable. The posture-change-amount calculation unit 24 performs the processing with the three-axis angular velocity in which the bias error has been removed by the bias removal unit 22.


The velocity-change-amount calculation unit 26 calculates the velocity change amount of the object based on the output of the acceleration sensor 14 and the output of the angular velocity sensor 12. Specifically, the velocity-change-amount calculation unit 26 performs processing of calculating the velocity change amount of the object with the three-axis angular velocity and the three-axis acceleration in which the bias error has been removed by the bias removal unit 22.


The posture estimation unit 28 functions as an integration calculation unit 101, a posture information prediction unit 102, an error information update unit 103, a correction coefficient calculation unit 104, a posture information correction unit 105, a normalization unit 106, an error information correction unit 107, a rotational error-component removal unit 108, a bias error limitation unit 109, and an error information adjustment unit 110. The posture estimation unit 28 performs processing of estimating the posture of the object with the posture change amount calculated by the posture-change-amount calculation unit 24 and the velocity change amount calculated by the velocity-change-amount calculation unit 26. In practice, the posture estimation unit 28 performs processing of estimating the state vector x defined in Expression (18) and its error covariance matrix ∑x² with an extended Kalman filter.


The integration calculation unit 101 performs integration processing of integrating the posture change amount calculated by the posture-change-amount calculation unit 24 with the previous estimated value of the posture, which has been corrected by the posture information correction unit 105 and normalized by the normalization unit 106. The integration calculation unit 101 performs integration processing of integrating the velocity change amount calculated by the velocity-change-amount calculation unit 26 with the previous estimated value of the velocity, which has been corrected by the posture information correction unit 105 and normalized by the normalization unit 106.


The posture information prediction unit 102 performs processing of predicting posture quaternion q as posture information of the object, with the posture change amount calculated by the posture-change-amount calculation unit 24. The posture information prediction unit 102 also performs processing of predicting a motion velocity vector v as velocity information of the object, based on the velocity change amount calculated by the velocity-change-amount calculation unit 26. In practice, the posture information prediction unit 102 performs processing of predicting the state vector x including the posture quaternion q and the motion velocity vector v as elements.


The error information update unit 103 performs processing of updating the error covariance matrix ∑x² as the error information, based on the output of the angular velocity sensor 12. Specifically, the error information update unit 103 performs processing of updating the posture error of the object with the three-axis angular velocity in which the bias error has been removed by the bias removal unit 22. In practice, the error information update unit 103 performs processing of updating the error covariance matrix ∑x² with the extended Kalman filter.


The rotational error-component removal unit 108 performs processing of removing a rotational error component around the reference vector, in the error covariance matrix ∑x² being the error information. Specifically, the rotational error-component removal unit 108 performs processing of removing the azimuth error component included in the posture error in the error covariance matrix ∑x² updated by the error information update unit 103. In practice, the rotational error-component removal unit 108 performs processing of generating the error covariance matrix ∑x² in which the rank of the error covariance matrix ∑q² of the posture is limited and the azimuth error component is removed.


The error information adjustment unit 110 determines whether or not the output of the angular velocity sensor 12 is within the effective range. When the error information adjustment unit 110 determines that the output of the angular velocity sensor 12 is not within the effective range, the error information adjustment unit 110 increases the posture error component in the error covariance matrix ∑x² being the error information and reduces the correlation component between the posture error component and the error component other than the posture error component in the error covariance matrix ∑x² (for example, sets the correlation component to zero). The error information adjustment unit 110 also determines whether or not the output of the acceleration sensor 14 is within the effective range. When the error information adjustment unit 110 determines that the output of the angular velocity sensor 12 or the output of the acceleration sensor 14 is not within the corresponding effective range, the error information adjustment unit 110 increases the motion velocity error component in the error covariance matrix ∑x² and reduces the correlation component between the motion velocity error component and the error component other than the motion velocity error component in the error covariance matrix ∑x² (for example, sets the correlation component to zero). Specifically, in the angular-velocity off-scale recovery period after the error information adjustment unit 110 determines that the output of the angular velocity sensor 12 is not within the effective range, the error information adjustment unit 110 performs processing of increasing the posture error component and the motion velocity error component in the error covariance matrix ∑x² generated by the rotational error-component removal unit 108 and reducing the correlation component between the posture error component and the error component other than the posture error component and the correlation component between the motion velocity error component and the error component other than the motion velocity error component (for example, setting the correlation components to zero). In the acceleration off-scale recovery period after the error information adjustment unit 110 determines that the output of the angular velocity sensor 12 is within the effective range and the output of the acceleration sensor 14 is not within the effective range, the error information adjustment unit 110 performs processing of increasing the motion velocity error component in the error covariance matrix ∑x² generated by the rotational error-component removal unit 108 and reducing the correlation component between the motion velocity error component and the error component other than the motion velocity error component (for example, setting the correlation component to zero).


The bias error limitation unit 109 performs processing of limiting the bias error component of the angular velocity around the reference vector, in the error covariance matrix ∑x² being the error information. Specifically, the bias error limitation unit 109 performs processing of limiting the vertical component of the bias error of the angular velocity, in the error covariance matrix ∑x² generated by the error information adjustment unit 110. In practice, the bias error limitation unit 109 determines whether or not the vertical component of the bias error of the angular velocity exceeds an upper limit value and, when the vertical component exceeds the upper limit value, generates the error covariance matrix ∑x² in which limitation is applied such that the vertical component has the upper limit value.


The correction coefficient calculation unit 104 performs processing of calculating correction coefficients based on the error covariance matrix ∑x² which has been generated by the bias error limitation unit 109 and is the error information. The correction coefficients are used for determining the correction amount of the posture information (posture quaternion q) or the velocity information (motion velocity vector v) of the object by the posture information correction unit 105 and the correction amount of the error information (error covariance matrix ∑x²) by the error information correction unit 107. In practice, the correction coefficient calculation unit 104 performs processing of calculating the observation residual Δz, the Kalman coefficient K, and the transformation matrix H.


The posture information correction unit 105 performs processing of correcting the posture information (posture quaternion q) of the object, which has been predicted by the posture information prediction unit 102, based on the error covariance matrix ∑x² being the error information. Specifically, the posture information correction unit 105 performs processing of correcting the posture quaternion q based on the error covariance matrix ∑x² generated by the bias error limitation unit 109. The posture information correction unit 105 performs the processing with the Kalman coefficient K and the observation residual Δza of the gravitational acceleration, calculated by the correction coefficient calculation unit 104 based on the gravitational acceleration vector g being the reference vector and the acceleration vector α obtained from the output of the acceleration sensor 14. In practice, the posture information correction unit 105 performs processing of correcting the state vector x predicted by the posture information prediction unit 102, with the extended Kalman filter.


The normalization unit 106 performs processing of normalizing the posture information (posture quaternion q) of the object, which has been corrected by the posture information correction unit 105, so that its magnitude is kept at 1. In practice, the normalization unit 106 performs processing of normalizing the state vector x corrected by the posture information correction unit 105.


The error information correction unit 107 performs processing of correcting the error covariance matrix ∑x² being the error information. Specifically, the error information correction unit 107 performs processing of correcting the error covariance matrix ∑x² generated by the bias error limitation unit 109 with the extended Kalman filter, using the transformation matrix H and the Kalman coefficient K calculated by the correction coefficient calculation unit 104.


The posture information (posture quaternion q) of the object, which has been estimated by the processing unit 20 can be transmitted to another device via the communication unit 60.


2-2. Configuration of Processing Unit


FIG. 8 is a diagram illustrating a specific example of a configuration of the processing unit 20. In FIG. 8, components the same as those in FIG. 6 are denoted by the same reference signs. As illustrated in FIG. 8, the angular velocity data dω,k and the acceleration data dα,k at a time point tk, which are output by the IMU 10, are input to the bias removal unit 22. As shown in Expression (1), the angular velocity data dω,k is represented by the sum of the average value of the angular velocity vector ω in a period from a time point tk-1 to the time point tk and the residual bias bω of the angular velocity sensor 12. Similarly, as shown in Expression (2), the acceleration data dα,k is represented by the sum of the average value of the acceleration vector α in the period from the time point tk-1 to the time point tk and the residual bias bα of the acceleration sensor 14.


The bias removal unit 22 calculates the average value of the angular velocity vector ω in the period from the time point tk-1 to the time point tk by Expression (19), in a manner of subtracting the residual bias bω,k-1 at the time point tk-1 from the angular velocity data dω,k at the time point tk. The bias removal unit 22 calculates the average value of the acceleration vector α in the period from the time point tk-1 to the time point tk by Expression (19), in a manner of subtracting the residual bias bα,k-1 at the time point tk-1 from the acceleration data dα,k at the time point tk.
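
A minimal sketch of this bias removal follows (the array shapes and function name are assumptions; Expression (19) folds the same subtraction into the state propagation):

```python
import numpy as np

def remove_bias(d_omega_k, d_alpha_k, b_omega_prev, b_alpha_prev):
    """Subtract the residual biases estimated at t_{k-1} from the raw
    angular velocity and acceleration samples at t_k (cf. Expression (19))."""
    omega_avg = np.asarray(d_omega_k) - np.asarray(b_omega_prev)
    alpha_avg = np.asarray(d_alpha_k) - np.asarray(b_alpha_prev)
    return omega_avg, alpha_avg
```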


The posture-change-amount calculation unit 24 calculates the approximation of the posture change amount Δqk at the time point tk by substituting the average value of the angular velocity vector ω in the period from the time point tk-1 to the time point tk and the average value of the angular velocity vector ω in the period from a time point tk-2 to the time point tk-1 into the polynomial expression of Expression (6). These average values of the angular velocity vector ω have been calculated by the bias removal unit 22.


The velocity-change-amount calculation unit 26 calculates the coordinate transformation matrix Ck at the time point tk from the posture quaternion qk-1 at the time point tk-1 with Expression (9). The velocity-change-amount calculation unit 26 calculates the approximation of the acceleration λk at the time point tk by substituting the average value of the acceleration vector α and the average value of the angular velocity vector ω in the period from the time point tk-1 to the time point tk, and the average value of the acceleration vector α and the average value of the angular velocity vector ω in the period from the time point tk-2 to the time point tk-1, into the polynomial expression of Expression (12). These average values have been calculated by the bias removal unit 22. The velocity-change-amount calculation unit 26 calculates the gravitational acceleration vector gk-1 at the time point tk-1 by substituting the gravitational-acceleration correction value Δgk-1 at the time point tk-1 into Expression (10). The velocity-change-amount calculation unit 26 then calculates the velocity change amount (Ckλk-gk-1)Δt at the time point tk from the calculated coordinate transformation matrix Ck, acceleration λk, and gravitational acceleration vector gk-1.
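
The mapping from a unit quaternion to a rotation matrix can be written in the standard form below; Expression (9) gives the concrete matrix used for Ck, and whether it maps sensor-to-local or the reverse depends on the document's convention, so the direction here is an assumption.

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z); the
    standard form that Expression (9) makes concrete for Ck."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
```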


As illustrated in FIG. 8, the posture estimation unit 28 includes the integration calculation unit 101, the posture information prediction unit 102, the error information update unit 103, the correction coefficient calculation unit 104, the posture information correction unit 105, the normalization unit 106, the error information correction unit 107, the rotational error-component removal unit 108, the bias error limitation unit 109, and the error information adjustment unit 110. The posture estimation unit 28 estimates the state vector xk and the error covariance matrix ∑x,k² at the time point tk with the extended Kalman filter.


The integration calculation unit 101 performs quaternion multiplication of the posture quaternion qk-1 at the time point tk-1 and the posture change amount Δqk at the time point tk, which has been calculated by the posture-change-amount calculation unit 24, with Expression (6). The integration calculation unit 101 adds the motion velocity vector vk-1 at the time point tk-1 and the velocity change amount (Ckλk-gk-1)Δt at the time point tk, which has been calculated by the velocity-change-amount calculation unit 26, with Expression (12).


The posture information prediction unit 102 predicts the posture quaternion qk, the motion velocity vector vk, the residual bias bω,k of the angular velocity sensor 12, the residual bias bα,k of the acceleration sensor 14, and the gravitational-acceleration correction value Δgk, which are the elements of the state vector xk, with Expression (19). Specifically, the posture information prediction unit 102 predicts the posture quaternion qk as the result of the quaternion multiplication of the posture quaternion qk-1 and the posture change amount Δqk by the integration calculation unit 101. The posture information prediction unit 102 predicts the motion velocity vector vk as the result of adding the motion velocity vector vk-1 and the velocity change amount (Ckλk-gk-1)Δt by the integration calculation unit 101. The posture information prediction unit 102 predicts the residual bias bω,k of the angular velocity sensor 12 to be the residual bias bω,k-1 of the angular velocity sensor 12 at the time point tk-1, predicts the residual bias bα,k of the acceleration sensor 14 to be the residual bias bα,k-1 of the acceleration sensor 14 at the time point tk-1, and predicts the gravitational-acceleration correction value Δgk to be the gravitational-acceleration correction value Δgk-1 at the time point tk-1.


The error information update unit 103 updates the error covariance matrix ∑x,k² at the time point tk with Expressions (20) and (21). The error information update unit 103 performs the update with the posture change amount Δqk calculated by the posture-change-amount calculation unit 24, the acceleration λk and the coordinate transformation matrix Ck calculated by the velocity-change-amount calculation unit 26, the posture quaternion qk-1 at the time point tk-1, and the error covariance matrix ∑x,k-1² at the time point tk-1.


The rotational error-component removal unit 108 calculates a matrix D′ᵀD′ with the posture quaternion qk predicted by the posture information prediction unit 102, by Expression (29). The rotational error-component removal unit 108 updates the error covariance matrix ∑x,k² updated by the error information update unit 103, with Expression (28) and the matrix D′ᵀD′. Thus, the error covariance matrix ∑x,k² is generated in which the rank of the error covariance matrix ∑q,k² of the posture quaternion qk is limited to 3 and the azimuth error component εθz is removed.


In the angular-velocity off-scale recovery period, the error information adjustment unit 110 updates the error covariance matrix ∑x,k² generated by the rotational error-component removal unit 108, with Expression (41) and the posture quaternion qk predicted by the posture information prediction unit 102. In the acceleration off-scale recovery period, the error information adjustment unit 110 updates the error covariance matrix ∑x,k² generated by the rotational error-component removal unit 108, with Expression (42) and the posture quaternion qk predicted by the posture information prediction unit 102.


The bias error limitation unit 109 calculates the variance σbωv² of the vertical component of the bias error in the angular velocity sensor 12 with the posture quaternion qk predicted by the posture information prediction unit 102 and the error covariance matrix ∑x,k² generated by the error information adjustment unit 110, by Expression (39). When the variance σbωv² exceeds the upper limit value σbωmax², the bias error limitation unit 109 updates the error covariance matrix ∑x,k² generated by the error information adjustment unit 110, with Expression (40). Thus, in the error covariance matrix ∑x,k², the variance σbωv² of the vertical component of the bias error in the angular velocity sensor 12 is limited to the upper limit value σbωmax².


The correction coefficient calculation unit 104 calculates the observation residual Δzk, the transformation matrix Hk, and the Kalman coefficient Kk at the time point tk with Expressions (23) to (26). The correction coefficient calculation unit 104 performs the calculation with the error covariance matrix ∑x,k² generated by the bias error limitation unit 109, the coordinate transformation matrix Ck calculated by the velocity-change-amount calculation unit 26, the average value of the acceleration vector α in the period from the time point tk-1 to the time point tk calculated by the bias removal unit 22, and the posture quaternion qk and the gravitational-acceleration correction value Δgk predicted by the posture information prediction unit 102.


The posture information correction unit 105 corrects the elements (the posture quaternion qk, the motion velocity vector vk, the residual bias bω,k of the angular velocity sensor 12, the residual bias bα,k of the acceleration sensor 14, and the gravitational-acceleration correction value Δgk) of the state vector xk predicted by the posture information prediction unit 102. The posture information correction unit 105 performs the correction by Expression (27) with the observation residual Δzk and the Kalman coefficient Kk calculated by the correction coefficient calculation unit 104.


The normalization unit 106 normalizes the elements (the posture quaternion qk, the motion velocity vector vk, the residual bias bω,k of the angular velocity sensor 12, the residual bias bα,k of the acceleration sensor 14, and the gravitational-acceleration correction value Δgk) of the state vector xk corrected by the posture information correction unit 105, with Expression (28).


The error information correction unit 107 corrects the error covariance matrix ∑x,k² generated by the bias error limitation unit 109, by Expression (27), with the transformation matrix Hk and the Kalman coefficient Kk calculated by the correction coefficient calculation unit 104.


When the next sampling interval Δt elapses, the state vector xk calculated by the normalization unit 106 and the error covariance matrix ∑x,k² corrected by the error information correction unit 107 are fed back, as the state vector xk-1 and the error covariance matrix ∑x,k-1² at the time point tk-1, to the bias removal unit 22, the velocity-change-amount calculation unit 26, the integration calculation unit 101, the posture information prediction unit 102, and the error information update unit 103.


The processing unit 20 described above performs posture estimation processing of estimating the posture of the object, for example, in accordance with the procedures illustrated in FIGS. 1 to 5.


According to the above-described posture estimation device 1 in the embodiment, the processing unit 20 estimates the posture of an object in accordance with the procedures illustrated in FIGS. 1 to 5. Thus, even when the output of the angular velocity sensor temporarily falls outside the effective range, it is possible to reduce a concern of decreasing the estimation accuracy of the posture of the object and to estimate the posture of the object with sufficient accuracy. According to the posture estimation device 1 in the embodiment, even when the output of the angular velocity sensor or the output of the acceleration sensor temporarily falls outside the corresponding effective range, it is possible to reduce a concern of decreasing the estimation accuracy of the motion velocity of the object and to estimate the motion velocity of the object with sufficient accuracy. In addition, according to the posture estimation device 1 in the embodiment, it is possible to exhibit effects similar to those of the above-described posture estimation method in the embodiment.


3. Modification Examples

In the above-described embodiment, the angular velocity sensor and the acceleration sensor are integrated by being accommodated in one inertial measurement unit (IMU). However, the angular velocity sensor and the acceleration sensor may be provided separately from each other.


In the above-described embodiment, the posture estimation device outputs only the posture information of the object. However, the posture estimation device may output another kind of information. For example, the posture estimation device may output position information of the object, which is obtained by integrating the motion velocity vector vk at the time point tk as the velocity information of the object.
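
A minimal sketch of this modification, assuming simple rectangular integration of the estimated velocity (the text does not fix the integration rule):

```python
import numpy as np

def integrate_position(p_prev, v_k, dt):
    """Accumulate the estimated motion velocity vector v_k over one
    sampling interval dt into a position estimate."""
    return np.asarray(p_prev) + np.asarray(v_k) * dt
```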


In the above-described embodiment, the acceleration sensor and the angular velocity sensor update their outputs at the same sampling interval Δt. However, the sampling interval Δta of the acceleration sensor may be different from the sampling interval Δtω of the angular velocity sensor. In this case, the prediction processing (time update processing) of the state vector and the error covariance matrix in Step S3 in FIG. 1 may be performed every Δtω, and the correction processing (observation update processing) of the state vector and the error covariance matrix in Step S7 in FIG. 1 may be performed every Δta. The rotational error-component removal processing in Step S4 in FIG. 1 and the bias error limitation processing in Step S6 in FIG. 1 may be performed every Δtω, every Δta, or at an interval different from Δtω and Δta.


In the above-described embodiment, the posture estimation device estimates the posture of an object by using the output of the angular velocity sensor and, as the reference vector, the gravitational acceleration vector observed by the acceleration sensor. However, the reference vector may not be the gravitational acceleration vector. For example, when the posture estimation device estimates the posture of an object by using the angular velocity sensor and a geomagnetic sensor, the reference vector may be a geomagnetic vector (vector directed toward the north) observed by the geomagnetic sensor. For example, when the posture estimation device estimates the posture of a satellite as the object by using the angular velocity sensor and a star tracker, the reference vector may be a vector which is directed from the object toward a fixed star and is observed by the star tracker. The acceleration sensor, the geomagnetic sensor, and the star tracker are examples of a reference observation sensor that observes the reference vector.


In the above-described embodiment, the posture information of an object is expressed in quaternion. However, the posture information may be information expressed by a roll angle, a pitch angle, and a yaw angle or may be a posture transformation matrix.


In the above-described embodiment, the state vector xk includes the posture quaternion qk, the motion velocity vector vk, the residual bias bω,k of the angular velocity sensor, the residual bias bα,k of the acceleration sensor, and the gravitational-acceleration correction value Δgk as the elements. However, the state vector xk is not limited thereto. For example, the state vector xk may include the posture quaternion qk, the residual bias bω,k of the angular velocity sensor, the residual bias bα,k of the acceleration sensor, and the gravitational-acceleration correction value Δgk as the elements, without including the motion velocity vector vk. In this case, the flowchart of the error information adjustment step illustrated in FIG. 3 is changed to the flowchart illustrated in FIG. 9, for example. In FIG. 9, steps the same as those in FIG. 3 are denoted by the same reference signs. The flowchart in FIG. 9 is obtained by removing some steps from the flowchart in FIG. 3, not by adding a new step. Thus, descriptions thereof will not be repeated.


4. Electronic Device


FIG. 10 is a block diagram illustrating an example of a configuration of an electronic device 300 in the embodiment. The electronic device 300 includes the posture estimation device 1 and the inertial measurement unit 10 in the above embodiment. The electronic device 300 can include a communication unit 310, an operation unit 330, a display unit 340, a storage unit 350, and an antenna 312.


The communication unit 310 is, for example, a wireless circuit. The communication unit 310 performs processing of receiving data from the outside of the device or transmitting data to the outside thereof, via the antenna 312.


The posture estimation device 1 performs processing based on the output signal of the inertial measurement unit 10. Specifically, the posture estimation device 1 performs processing of estimating the posture of the electronic device 300 based on an output signal (output data) such as detection data of the inertial measurement unit 10. The posture estimation device 1 may perform signal processing such as correction processing or filtering on the output signal (output data) such as detection data of the inertial measurement unit 10. In addition, based on the output signals, the posture estimation device 1 may perform various types of control processing for the electronic device 300, such as control processing of the electronic device 300 or various types of digital processing of data transmitted or received via the communication unit 310. The function of the posture estimation device 1 can be realized by a processor such as an MPU or a CPU, for example.


The operation unit 330 is used when a user performs an input operation. The operation unit 330 can be realized by an operation button, a touch panel display, or the like.


The display unit 340 displays various types of information and can be realized by a display of liquid crystal, organic EL, or the like. The storage unit 350 stores data. The function of the storage unit 350 can be realized by, for example, a semiconductor memory such as a RAM or a ROM.


The electronic device 300 in the embodiment can be applied to, for example, an image-related device (such as a digital still camera or a video camera), an in-vehicle device, a wearable device (such as a head mounted display device or a watch-related device), an ink jet discharge device, a robot, a personal computer, a portable information terminal, a printing device, and a projection device. The in-vehicle device includes a car navigation device or a device for automatic driving, for example. The watch-related device includes a watch or a smart watch, for example. For example, an ink jet printer is provided as the ink jet discharge device. The portable information terminal includes a smart phone, a portable phone, a portable game device, a notebook PC, or a tablet terminal, for example. The electronic device 300 in the embodiment can also be applied to an electronic notebook, an electronic dictionary, a calculator, a word processor, a workstation, a videophone, a television monitor for crime prevention, electronic binoculars, a POS terminal, a medical device, a fish finder, a measuring device, a device for a base station of a mobile terminal, instruments, a flight simulator, and a network server, for example. The medical device includes an electronic thermometer, a sphygmomanometer, a blood glucose meter, an electrocardiogram measuring device, an ultrasonic diagnostic device, an electronic endoscope, and the like. The instruments are instruments of vehicles, aircraft, ships, and the like.



FIG. 11 is a plan view illustrating a wrist-watch type activity meter 400 as a portable electronic device. FIG. 12 is a block diagram illustrating an example of a configuration of the activity meter 400. The activity meter 400 is worn on a part (such as the wrist) of the body of a user by a band 401. The activity meter 400, which is an activity tracker, includes a display unit 402 for digital display. The activity meter 400 can perform wireless communication by Bluetooth (registered trademark), Wi-Fi (registered trademark), or the like.


As illustrated in FIGS. 11 and 12, the activity meter 400 includes a case 403, the posture estimation device 1, the display unit 402, and a translucent cover 404. In the case 403, the inertial measurement unit 10 is accommodated. The posture estimation device 1 is accommodated in the case 403 and performs processing based on an output signal from the inertial measurement unit 10. The display unit 402 is accommodated in the case 403. The translucent cover 404 closes an opening portion of the case 403. A bezel 405 is provided on the outside of the translucent cover 404. A plurality of operation buttons 406 and 407 are provided on a side surface of the case 403. The acceleration sensor 14 that detects a three-axis acceleration and the angular velocity sensor 12 that detects a three-axis angular velocity are provided in the inertial measurement unit 10.


In the display unit 402, position information or a movement amount obtained by a GPS sensor 411 or a geomagnetic sensor 412, motion information (such as a momentum) obtained by the acceleration sensor 14 or the angular velocity sensor 12, biometric information (such as a pulse rate) obtained by a pulse rate sensor 416, and time information such as the current time are displayed in accordance with various detection modes. An environmental temperature obtained by a temperature sensor 417 can also be displayed. A communication unit 422 communicates with an information terminal such as a user terminal. The posture estimation device 1 is realized by an MPU, a DSP, or an ASIC, for example. The posture estimation device 1 performs various types of processing based on programs stored in a storage unit 420 and information input from an operation unit 418 such as the operation buttons 406 and 407. The posture estimation device 1 performs processing of estimating posture information of the activity meter 400 based on the output signal of the inertial measurement unit 10. The posture estimation device 1 may also perform processing based on output signals of the GPS sensor 411, the geomagnetic sensor 412, a pressure sensor 413, the acceleration sensor 14, the angular velocity sensor 12, the pulse rate sensor 416, the temperature sensor 417, and a timekeeping unit 419. The posture estimation device 1 can further perform display processing of displaying an image in the display unit 402, sound output processing of outputting sound to a sound output unit 421, communication processing of communicating with an information terminal via the communication unit 422, power control processing of supplying power from a battery 423 to the components, and the like.


According to the activity meter 400 having the above-described configuration in the embodiment, it is possible to exhibit the above-described effects of the posture estimation device 1 and to achieve high reliability. The activity meter 400 includes the GPS sensor 411 and can measure the movement distance and a movement trajectory of the user, so an activity meter having high usability is obtained. The activity meter 400 can be widely applied as a running watch, a runner's watch, an outdoor watch, a GPS-equipped watch, and the like.
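The disclosure does not state how the movement distance is computed from the GPS fixes; a common approach, shown below purely for illustration, accumulates great-circle distances between consecutive fixes. The function names and the Earth-radius constant are assumptions, not taken from the disclosure.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; an illustrative constant

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def track_distance(fixes):
    """Sum the distances between consecutive (latitude, longitude) fixes."""
    return sum(haversine(*a, *b) for a, b in zip(fixes, fixes[1:]))
```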


5. Vehicle

In the embodiment, a vehicle includes the posture estimation device 1 in the above embodiment, and a control device that controls the posture of the vehicle based on the posture information of the vehicle, which has been estimated by the posture estimation device 1.



FIG. 13 illustrates an example of a vehicle 500. FIG. 14 is a block diagram illustrating an example of a configuration of the vehicle 500. As illustrated in FIG. 13, the vehicle 500 includes a vehicle body 502 and wheels 504. A positioning device 510 is mounted on the vehicle 500, and a control device 570 that performs vehicle control and the like is provided in the vehicle 500. As illustrated in FIG. 14, the vehicle 500 includes a driving mechanism 580 such as an engine or a motor, a braking mechanism 582 such as a disk brake or a drum brake, and a steering mechanism 584 realized by a steering wheel, a steering gear box, and the like. The vehicle 500 is thus equipment or a device that includes the driving mechanism 580, the braking mechanism 582, and the steering mechanism 584 and moves on the ground, in the sky, or in the sea. For example, the vehicle 500 is a four-wheel vehicle such as an agricultural machine.


The positioning device 510 is a device that is mounted on the vehicle 500 and performs positioning of the vehicle 500. The positioning device 510 includes the inertial measurement unit 10, a GPS receiving unit 520, an antenna 522 for GPS reception, and the posture estimation device 1. The posture estimation device 1 includes a position information acquisition unit 532, a position composition unit 534, an operational processing unit 536, and a processing unit 538. The inertial measurement unit 10 includes a three-axis acceleration sensor and a three-axis angular velocity sensor. The operational processing unit 536 receives acceleration data and angular velocity data from the acceleration sensor and the angular velocity sensor, performs inertial navigation operational processing on the received data, and outputs inertial navigation positioning data. The inertial navigation positioning data indicates the acceleration or the posture of the vehicle 500.
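The disclosure does not spell out the inertial navigation operation; as a rough illustration only, a single strapdown propagation step could look like the following Python sketch. The function name `ins_step`, the sampling period `dt`, and the gravity constant are illustrative assumptions rather than elements of the disclosure.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.80665])  # gravity in the navigation frame (assumed z-up)

def ins_step(q, v, p, gyro, accel, dt):
    """One strapdown propagation step (illustrative only).

    q     : posture quaternion (w, x, y, z), body frame to navigation frame
    v, p  : velocity and position in the navigation frame
    gyro  : angular velocity sensor output [rad/s], body frame
    accel : acceleration sensor output (specific force) [m/s^2], body frame
    dt    : sampling period [s]
    """
    gyro = np.asarray(gyro, dtype=float)
    accel = np.asarray(accel, dtype=float)

    # Posture change amount over dt as a small rotation quaternion.
    angle = np.linalg.norm(gyro) * dt
    axis = gyro / np.linalg.norm(gyro) if angle > 0.0 else np.zeros(3)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

    # Apply the posture change amount: q <- q * dq (Hamilton product).
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = dq
    q = np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])
    q /= np.linalg.norm(q)

    # Rotate the specific force into the navigation frame, add gravity, integrate.
    w, x, y, z = q
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])
    a_nav = R @ accel + GRAVITY
    v = v + a_nav * dt
    p = p + v * dt
    return q, v, p
```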


The GPS receiving unit 520 receives a signal from a GPS satellite via the antenna 522. The position information acquisition unit 532 outputs GPS positioning data based on the signal received by the GPS receiving unit 520. The GPS positioning data indicates the position, the speed, and the direction of the vehicle 500 on which the positioning device 510 is mounted. The position composition unit 534 calculates the position at which the vehicle 500 runs on the ground at the current time, based on the inertial navigation positioning data output from the operational processing unit 536 and the GPS positioning data output from the position information acquisition unit 532. For example, as illustrated in FIG. 13, even when the position of the vehicle 500 included in the GPS positioning data is the same, the vehicle 500 runs at a different position on the ground if the posture of the vehicle 500 differs due to the influence of an inclination (θ) of the ground or the like. Therefore, the accurate position of the vehicle 500 cannot be calculated from the GPS positioning data alone. Thus, the position composition unit 534 calculates the position at which the vehicle 500 runs on the ground at the current time by using the data regarding the posture of the vehicle 500 among the inertial navigation positioning data. Position data output from the position composition unit 534 is subjected to predetermined processing by the processing unit 538 and is displayed in a display unit 550 as a positioning result. The position data may also be transmitted to an external device by a communication unit 560.
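The disclosure gives no formula for this composition; one way to read FIG. 13 is that the tilted lever arm between the ground contact point and the GPS antenna is removed using the roll and pitch angles from the inertial navigation positioning data. The following sketch illustrates that reading only; `ground_position` and `antenna_height` are hypothetical names, not taken from the disclosure.

```python
import numpy as np

def ground_position(gps_pos, roll, pitch, antenna_height):
    """Project a GPS antenna position onto the ground contact point (sketch).

    gps_pos        : antenna position (east, north, up) in meters
    roll, pitch    : posture angles from the inertial navigation data [rad]
    antenna_height : lever arm from the ground contact point up to the antenna [m]
    """
    # Lever arm from the ground contact point to the antenna, body frame (z up).
    lever_body = np.array([0.0, 0.0, antenna_height])

    # Tilt the lever arm by roll (about x) and pitch (about y); yaw is omitted
    # because rotating about the vertical does not tilt the lever arm.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    lever_nav = Ry @ Rx @ lever_body

    # Removing the tilted lever arm recovers the position on the ground.
    return np.asarray(gps_pos, dtype=float) - lever_nav
```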


The control device 570 controls the driving mechanism 580, the braking mechanism 582, and the steering mechanism 584 of the vehicle 500. The control device 570 is a controller that controls the vehicle and can be realized by a plurality of control units, for example. The control device 570 includes a vehicle control unit 572 being a control unit that controls the vehicle, an automatic driving control unit 574 being a control unit that controls automatic driving, and a storage unit 576 realized by a semiconductor memory and the like. A monitoring device 578 monitors an object such as an obstacle around the vehicle 500 and is realized by a surrounding monitoring camera, a millimeter wave radar, a sonar, or the like.


As illustrated in FIG. 14, the vehicle 500 in the embodiment includes the posture estimation device 1 and the control device 570. The control device 570 controls the posture of the vehicle 500 based on posture information of the vehicle 500, which has been estimated by the posture estimation device 1. For example, the posture estimation device 1 performs various types of processing as described above, based on an output signal including detection data from the inertial measurement unit 10, so as to obtain information of the position or the posture of the vehicle 500. For example, the posture estimation device 1 can obtain information of the position of the vehicle 500 based on the GPS positioning data and the inertial navigation positioning data as described above, and can estimate information of the posture of the vehicle 500 based on angular velocity data and the like included in the inertial navigation positioning data. The information of the posture of the vehicle 500 can be represented by a quaternion or by a roll angle, a pitch angle, and a yaw angle, for example. The control device 570 controls the posture of the vehicle 500 based on the posture information of the vehicle 500, which has been estimated by the processing of the posture estimation device 1. The control is performed by the vehicle control unit 572, for example, and can be implemented by the control device 570 controlling the steering mechanism 584. Alternatively, in control (such as slip control) for stabilizing the posture of the vehicle 500, the control device 570 may control the driving mechanism 580 or the braking mechanism 582. According to the embodiment, with the posture estimation device 1, it is possible to estimate information of a posture with high accuracy and, accordingly, to realize appropriate posture control of the vehicle 500.
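As a minimal illustration of moving between the two posture representations mentioned above, the following sketch converts a quaternion to roll, pitch, and yaw angles. The Z-Y-X (yaw-pitch-roll) convention is an assumption, since the disclosure does not fix one.

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a posture quaternion to (roll, pitch, yaw) in radians,
    using the Z-Y-X (yaw-pitch-roll) rotation sequence."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Clamping guards against values slightly outside [-1, 1] due to rounding.
    sinp = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sinp)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```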


In the embodiment, the control device 570 controls at least one of accelerating, braking, and steering of the vehicle 500 based on the information of the position and the posture of the vehicle 500, which has been obtained by the posture estimation device 1. For example, the control device 570 controls at least one of the driving mechanism 580, the braking mechanism 582, and the steering mechanism 584 based on this information, and thus automatic driving control of the vehicle 500 by the automatic driving control unit 574 can be realized. In the automatic driving control, a monitoring result of a surrounding object by the monitoring device 578, map information or running route information stored in the storage unit 576, and the like are used in addition to the information of the position and the posture of the vehicle 500. The control device 570 switches between execution and non-execution of the automatic driving of the vehicle 500 based on a monitoring result of the output signal of the inertial measurement unit 10. For example, the posture estimation device 1 monitors the output signal such as detection data from the inertial measurement unit 10, and when a decrease in detection accuracy of the inertial measurement unit 10 or a sensing problem is detected based on the monitoring result, the control device 570 switches from execution to non-execution of the automatic driving. In the automatic driving, at least one of accelerating, braking, and steering of the vehicle 500 is automatically controlled; when the automatic driving is not executed, such automatic control is not performed. In this manner, running assistance with higher reliability is possible for the vehicle 500 that performs automatic driving. An automation level of the automatic driving may also be switched based on the monitoring result of the output signal of the inertial measurement unit 10.
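As an illustration of this switching logic only, the following sketch gates automatic driving on a range check of the IMU output. The thresholds and function names are hypothetical; the disclosure does not state which monitoring criteria are used.

```python
import math

# Hypothetical effective measurement ranges; the disclosure gives no values.
GYRO_LIMIT = math.radians(450.0)    # [rad/s]
ACCEL_LIMIT = 16.0 * 9.80665        # [m/s^2]

def imu_output_ok(gyro, accel):
    """Return True while every axis of the IMU output stays within range."""
    return (all(abs(g) < GYRO_LIMIT for g in gyro)
            and all(abs(a) < ACCEL_LIMIT for a in accel))

def update_automatic_driving(enabled, gyro, accel):
    """Switch automatic driving to non-execution on a sensing problem."""
    if enabled and not imu_output_ok(gyro, accel):
        return False  # hand control back to the driver
    return enabled
```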



FIG. 15 illustrates an example of another vehicle 600. FIG. 16 is a block diagram illustrating an example of a configuration of the vehicle 600. The posture estimation device 1 in the embodiment can be effectively used in posture control and the like of a construction machine. FIGS. 15 and 16 illustrate a hydraulic shovel being an example of the construction machine as the vehicle 600.


As illustrated in FIG. 15, in the vehicle 600, a vehicle body is constituted by a lower running body 612 and an upper revolving body 611, and a work mechanism 620 is provided in the front of the upper revolving body 611. The upper revolving body 611 is mounted to be capable of revolving on the lower running body 612. The work mechanism 620 is constituted by a plurality of members capable of pivoting in an up-and-down direction. A driver seat (not illustrated) is provided in the upper revolving body 611. An operation device (not illustrated) that operates the members constituting the work mechanism 620 is provided on the driver seat. An inertial measurement unit 10d functioning as an inclination sensor that detects an inclination angle of the upper revolving body 611 is disposed in the upper revolving body 611.


The work mechanism 620 includes a boom 613, an arm 614, a bucket link 616, a bucket 615, a boom cylinder 617, an arm cylinder 618, and a bucket cylinder 619, as the plurality of members. The boom 613 is attached to the front portion of the upper revolving body 611 to be capable of elevating. The arm 614 is attached to the tip of the boom 613 to be capable of elevating. The bucket link 616 is attached to the tip of the arm 614 to be rotatable. The bucket 615 is attached to the tip of the arm 614 and the bucket link 616 to be rotatable. The boom cylinder 617 drives the boom 613. The arm cylinder 618 drives the arm 614. The bucket cylinder 619 drives the bucket 615 through the bucket link 616.


The base end of the boom 613 is supported by the upper revolving body 611 to be rotatable in the up-and-down direction. The boom 613 is rotationally driven relative to the upper revolving body 611 by expansion and contraction of the boom cylinder 617. An inertial measurement unit 10c functioning as an inertial sensor that detects the motion state of the boom 613 is disposed in the boom 613.


One end of the arm 614 is supported by the tip of the boom 613 to be rotatable. The arm 614 is rotationally driven relative to the boom 613 by expansion and contraction of the arm cylinder 618. An inertial measurement unit 10b functioning as an inertial sensor that detects the motion state of the arm 614 is disposed in the arm 614.


The bucket link 616 and the bucket 615 are supported by the tip of the arm 614 to be rotatable. The bucket link 616 is rotationally driven relative to the arm 614 by expansion and contraction of the bucket cylinder 619. The bucket 615 is rotationally driven relative to the arm 614 with the bucket link 616 driven. An inertial measurement unit 10a functioning as an inertial sensor that detects the motion state of the bucket link 616 is disposed in the bucket link 616.


Here, the inertial measurement unit 10 described in the above embodiment can be used as the inertial measurement units 10a, 10b, 10c, and 10d. The inertial measurement units 10a, 10b, 10c, and 10d can detect at least one of an angular velocity and an acceleration acting on the members of the work mechanism 620 or the upper revolving body 611. As illustrated in FIG. 16, the inertial measurement units 10a, 10b, and 10c are coupled in series and can transmit detection signals to a calculation device 630. Since the inertial measurement units 10a, 10b, and 10c are coupled in series, it is possible to reduce the number of wires for transmitting detection signals in a movable region and to obtain a compact wiring structure. With the compact wiring structure, the method of laying the wires can be selected easily, and occurrences of wire deterioration or damage can be reduced, for example.


Further, as illustrated in FIG. 15, the calculation device 630 is provided in the vehicle 600. The calculation device 630 calculates the inclination angle of the upper revolving body 611 and the positions or postures of the boom 613, the arm 614, and the bucket 615 constituting the work mechanism 620. As illustrated in FIG. 16, the calculation device 630 includes the posture estimation device 1 in the above embodiment and a control device 632. The posture estimation device 1 estimates posture information of the vehicle 600 based on the output signals of the inertial measurement units 10a, 10b, 10c, and 10d. The control device 632 controls the posture of the vehicle 600 based on the posture information of the vehicle 600, which has been estimated by the posture estimation device 1. Specifically, the calculation device 630 receives the various detection signals input from the inertial measurement units 10a, 10b, 10c, and 10d and calculates the positions and postures (posture angles) of the boom 613, the arm 614, and the bucket 615 or the inclination state of the upper revolving body 611 based on those signals. The calculated position-and-posture signal including the posture angles of the boom 613, the arm 614, and the bucket 615, or the inclination signal including the posture angle of the upper revolving body 611 (for example, the position-and-posture signal of the bucket 615), is used as feedback information for display on a monitoring device (not illustrated) at the driver seat or for controlling the operation of the work mechanism 620 or the upper revolving body 611.
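The disclosure does not give the kinematic equations; as a simplified illustration, the positions of the work mechanism members can be chained together from their posture angles by planar forward kinematics, as in the following sketch. The member lengths and the assumption that each angle is an absolute pitch angle in the boom plane are illustrative only.

```python
import math

def work_mechanism_positions(boom_angle, arm_angle, bucket_angle,
                             boom_len=5.7, arm_len=2.9, bucket_len=1.4):
    """Planar forward kinematics for the boom/arm/bucket chain (sketch).

    Each angle is treated as the absolute pitch of that member in the boom
    plane, as an estimator attached to the member would report it; the
    member lengths in meters are made-up example values.
    Returns the boom tip, arm tip, and bucket tip positions (x forward,
    z up) relative to the boom foot pin on the upper revolving body.
    """
    boom_tip = (boom_len * math.cos(boom_angle),
                boom_len * math.sin(boom_angle))
    arm_tip = (boom_tip[0] + arm_len * math.cos(arm_angle),
               boom_tip[1] + arm_len * math.sin(arm_angle))
    bucket_tip = (arm_tip[0] + bucket_len * math.cos(bucket_angle),
                  arm_tip[1] + bucket_len * math.sin(bucket_angle))
    return boom_tip, arm_tip, bucket_tip
```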


Examples of the construction machine in which the posture estimation device 1 in the above embodiment is used include, in addition to the hydraulic shovel (jumbo, backhoe, and power shovel) exemplified above, a rough terrain crane (crane car), a bulldozer, an excavator loader, a wheel loader, and an aerial work vehicle (lift car).


According to the embodiment, with the posture estimation device 1, it is possible to obtain information of a posture with high accuracy and thus to realize appropriate posture control of the vehicle 600. Further, since the compact inertial measurement unit 10 is mounted, a plurality of inertial measurement units 10 can be compactly disposed at their installation sites by serial coupling (multi-coupling), and the cable routing that couples the inertial measurement units 10 installed at the sites to each other in series can be performed compactly, even in a very narrow region such as the bucket link 616.


In the embodiment, the four-wheel vehicle such as the agricultural machine and the construction machine are described as examples of the vehicle in which the posture estimation device 1 is used. In addition, the posture estimation device 1 can also be used in motorcycles, bicycles, trains, airplanes, biped robots, remote-controlled or autonomous aircraft (such as radio-controlled aircraft, radio-controlled helicopters, and drones), rockets, satellites, ships, automated guided vehicles (AGVs), and the like.


The present disclosure is not limited to the embodiment, and various modifications can be made in a range of the gist of the present disclosure.


The above-described embodiment and modification examples are examples, and the present disclosure is not limited thereto. For example, the embodiment and the modification examples can be appropriately combined.


The present disclosure includes a configuration which is substantially the same as the configuration described in the embodiment (for example, a configuration having the same function, method, and result, or a configuration having the same purpose and effect). The present disclosure includes a configuration in which a non-essential component in the configuration described in the embodiment has been replaced. The present disclosure includes a configuration exhibiting the same effects as those of the configuration described in the embodiment or a configuration capable of achieving the same purpose. The present disclosure includes a configuration in which a known technique is added to the configuration described in the embodiment.

Claims
  • 1. A posture estimation method for an electronic device, the method comprising: calculating a posture change amount of the electronic device with a processor based on an output from an angular velocity sensor of an inertial measurement unit included in the electronic device; predicting posture information of the electronic device with the processor by using the posture change amount; adjusting error information with the processor in a manner of determining whether or not the output of the angular velocity sensor is within an effective measurement range, where an input to the angular velocity sensor exceeding the effective measurement range is off-scale, and when it is determined that the output of the angular velocity sensor is not within the effective measurement range, increasing a posture error component in the error information and reducing a correlation component between the posture error component and an error component other than the posture error component by reducing the correlation component between the posture error component and a bias error component of an angular velocity from the output of the angular velocity sensor; and correcting the predicted posture information of the electronic device with the processor based on the error information.
  • 2. The posture estimation method according to claim 1, wherein the adjusting of the error information includes determining whether or not a current time is in a first period after it is determined that the output of the angular velocity sensor is not within the effective measurement range, increasing the posture error component in the first period, and reducing the correlation component between the posture error component and the error component other than the posture error component, in the first period.
  • 3. The posture estimation method according to claim 1, wherein the correlation component between the posture error component and the error component other than the posture error component is zero.
  • 4. The posture estimation method according to claim 1, further comprising: calculating a velocity change amount of an object based on an output of an acceleration sensor and the output of the angular velocity sensor, wherein in the adjusting of the error information, whether or not the output of the acceleration sensor is within an effective measurement range is determined, where an input to the acceleration sensor exceeding the effective measurement range is off-scale, and, when it is determined that the output of the angular velocity sensor or the output of the acceleration sensor is not within the corresponding effective measurement range, a motion velocity error component in the error information is increased, and a correlation component between the motion velocity error component and an error component other than the motion velocity error component in the error information is reduced, and in the predicting of the posture information, velocity information of the electronic device is predicted by using the velocity change amount.
  • 5. The posture estimation method according to claim 4, wherein the adjusting of the error information includes determining whether or not a current time is in a second period after it is determined that the output of the acceleration sensor is not within the effective measurement range, increasing the motion velocity error component in the second period, and reducing the correlation component between the motion velocity error component and the error component other than the motion velocity error component, in the second period.
  • 6. The posture estimation method according to claim 4, wherein the error component other than the motion velocity error component includes a bias error component of an acceleration.
  • 7. The posture estimation method according to claim 5, wherein the error component other than the motion velocity error component includes a bias error component of an acceleration.
  • 8. The posture estimation method according to claim 4, wherein the correlation component between the motion velocity error component and the error component other than the motion velocity error component is zero.
  • 9. A posture estimation device included in an electronic device, the posture estimation device comprising a memory that stores a program, and a processor that executes the program to function as: a posture-change-amount calculation unit that calculates a posture change amount of an object based on an output of an angular velocity sensor; a posture information prediction unit that predicts posture information of the object by using the posture change amount; an error information adjustment unit that determines whether or not the output of the angular velocity sensor is within an effective measurement range, where an input exceeding the effective measurement range is off-scale, and when it is determined that the output of the angular velocity sensor is not within the effective measurement range, increases a posture error component in error information, and reduces a correlation component between the posture error component and an error component other than the posture error component by reducing the correlation component between the posture error component and a bias error component of an angular velocity from the output of the angular velocity sensor; and a posture information correction unit that corrects the predicted posture information of the object based on the error information.
  • 10. A vehicle comprising: a posture estimation device and a control device that controls a posture of the vehicle based on posture information of the vehicle, which is estimated by the posture estimation device, wherein the posture estimation device is included in an electronic device, and the posture estimation device comprises a memory that stores a program, and a processor that executes the program to function as: a posture-change-amount calculation unit that calculates a posture change amount of an object based on an output of an angular velocity sensor; a posture information prediction unit that predicts posture information of the object by using the posture change amount; an error information adjustment unit that determines whether or not the output of the angular velocity sensor is within an effective measurement range, where an input exceeding the effective measurement range is off-scale, and when it is determined that the output of the angular velocity sensor is not within the effective measurement range, increases a posture error component in error information, and reduces a correlation component between the posture error component and an error component other than the posture error component by reducing the correlation component between the posture error component and a bias error component of an angular velocity from the output of the angular velocity sensor; and a posture information correction unit that corrects the predicted posture information of the object based on the error information.
Priority Claims (1)
Number: 2018-143691; Date: Jul 2018; Country: JP; Kind: national

Continuations (1)
Parent: 16526297, Jul 2019, US
Child: 18181119, US