LOCALIZING A MOVING VEHICLE

Information

  • Patent Application
  • 20240353578
  • Publication Number
    20240353578
  • Date Filed
    April 19, 2024
  • Date Published
    October 24, 2024
Abstract
The disclosure notably relates to a computer-implemented method for localization of a moving vehicle based on GNSS data and vehicle sensor data. The method comprises, in real-time, obtaining vehicle motion data stemming from at least one vehicle sensor. The method also comprises obtaining, while the GNSS signal is available, GNSS data of a positioning of the vehicle. The GNSS data includes a distance variation and an orientation variation. The method also comprises calibrating parameters of an odometer of the vehicle. The calibration is based on a data fusion that uses a Kalman filter. The Kalman filter determines a predicted distance variation and a predicted orientation variation of the vehicle based on a current calibration of the odometer parameters and on the motion data. The Kalman filter also compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data.
Description
TECHNICAL FIELD

The disclosure relates to the field of computer programs and systems, and more specifically to a method, system and program for localization of a moving vehicle.


BACKGROUND

A number of systems and programs are offered on the market for tracking the localization of a vehicle.


Within this context, there is still a need for an improved method for localization of a moving vehicle.


SUMMARY

It is therefore provided a computer-implemented method for localization of a moving vehicle based on GNSS data and vehicle sensor data. The method comprises, in real-time, obtaining vehicle motion data. The vehicle motion data stems from at least one vehicle sensor. The method also comprises, in real-time, obtaining, while the GNSS signal is available, GNSS data of a positioning of the vehicle. The GNSS data includes a distance variation and an orientation variation. The method also comprises, in real-time, calibrating parameters of an odometer of the vehicle. The calibration is based on a data fusion that uses a Kalman filter. The Kalman filter determines a predicted distance variation and a predicted orientation variation of the vehicle based on a current calibration of the odometer parameters and on the motion data. The Kalman filter also compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data.


The method may comprise one or more of the following:

    • the odometer predicts cyclically in time a new location of the vehicle and a new heading of the vehicle based on a location and heading predicted at the previous cycle and on the motion data;
    • the motion data includes the vehicle speed in the current cycle, the vehicle speed in the previous cycle, and the vehicle yaw rate in the current cycle;
    • the parameters of the odometer include a vehicle speed scaling, a vehicle yaw rate scaling, and a vehicle yaw rate offset;
    • calibrating parameters includes correcting one or any combination of the vehicle speed scaling, the vehicle yaw rate scaling, and the vehicle yaw rate offset;
    • the motion data stems from a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor;
    • the GNSS data further includes a location and heading of the vehicle, and the method further comprises, in real-time:
      • when the GNSS signal is available, determining a localization of the vehicle by performing a data fusion that is based on a Kalman filter that predicts the vehicle localization based on a fusion of the location and heading of the GNSS data and a location and heading predicted according to the calibrated odometer parameters;
      • when the GNSS signal is lost, determining a localization of the vehicle based on a location and heading predicted according to the calibrated odometer parameters;
    • the GNSS data stems from a GNSS device that comprises only one antenna; and/or
    • the vehicle is a motorbike, a car, a bus or a truck.


It is further provided a computer program comprising instructions which, when executed by a computer system, cause the system to perform the method.


It is further provided a computer readable storage medium having recorded thereon the computer program.


It is further provided a system comprising a processor coupled to a memory, the memory having recorded thereon the computer program.


The system may comprise one or more of the following:

    • the system is coupled with or further comprises the GNSS device, the odometer, and the at least one sensor; and/or
    • the at least one sensor includes a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor;


It is further provided a vehicle equipped with the system.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting examples will now be described in reference to the accompanying drawings, where:



FIG. 1 shows an example of the system; and



FIGS. 2 to 4 illustrate the method.





DETAILED DESCRIPTION

It is proposed a computer-implemented method for localization of a moving vehicle based on Global Navigation Satellite System (also called GNSS herein below) data and vehicle sensor data. The method comprises, in real-time, obtaining vehicle motion data. The vehicle motion data stems from at least one vehicle sensor. The method also comprises, in real-time, obtaining, while the GNSS signal is available, GNSS data of a positioning of the vehicle. The GNSS data includes a distance variation and an orientation variation. The method also comprises, in real-time, calibrating parameters of an odometer of the vehicle. The calibration is based on a data fusion that uses a Kalman filter. The Kalman filter determines a predicted distance variation and a predicted orientation variation of the vehicle based on a current calibration of the odometer parameters and on the motion data. The Kalman filter also compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data.


Such a method improves the localization of the moving vehicle.


Notably, the method achieves an improved accuracy for localizing the moving vehicle regardless of where the vehicle may be spatially localized, for example, regardless of whether the vehicle is localized inside a tunnel or in an open field. This is because the localization of the vehicle is determined from two types of data: the vehicle motion data and, while the GNSS signal is available, the GNSS data. On one hand, the vehicle motion data consists of local data acquired from the at least one vehicle sensor (for example comprising a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor) and thus provides data about the vehicle motion. On the other hand, the GNSS data provides accurate positioning data of the vehicle (including GNSS motion data such as the distance variation and the orientation variation) while the GNSS signal is available, for example, in an open field with GPS coverage (which is the case during the majority of the vehicle's trip, except during short periods where the vehicle goes through a tunnel or the like). However, the vehicle motion data on its own is sensitive to sensor-related errors. For example, the speed obtained from a sensor such as a wheel encoder may be based on a default tire size (wheel circumference), which could vary due to changes of tire pressure/temperature or to exchanging the tires with others of a different circumference, such as the summer/winter exchange. Such a change in the tire size may introduce a scaling factor error in the measured speed. Moreover, a sensor such as an IMU provides angular speeds and accelerations for yaw, pitch, and roll based on a built-in gyroscope. The readings of such an electro-mechanical sensor may be highly affected by significant temperature/pressure changes, and a slight shift in the mounting of the sensor may introduce a bias error. Finally, a sensor such as a steering system sensor is based on feeding back the position of the slider on the steering rack to indicate the steering measurement, which may present offset errors due to its mechanical sliding resolution. Hence, the scaling and offset errors of the odometer sensors may change dynamically, and the present method may estimate them continuously in order to locate the vehicle accurately. As the method also relies on the GNSS data, the method re-calibrates the odometer using the GNSS data, which is known to be particularly reliable, so as to compensate for such errors.


Indeed, thanks to the method achieving a continuous calibration of the parameters of the odometer of the vehicle, the method further improves the robustness for localizing the vehicle based on both the vehicle motion data and the GNSS data. As outlined above, the odometer (providing information on the motion of the vehicle) may be subject to changes in pressure and/or temperature which may introduce error factors in the vehicle motion data stemming from the at least one sensor. However, thanks to the use of the Kalman filter, the method determines a predicted distance variation and a predicted orientation variation based on a data fusion of the GNSS and sensor data, so that the method corrects parameters of the odometer based on the comparison of the predicted (i.e. using the sensor motion data) distance variation and predicted (i.e. using the sensor motion data) orientation variation to the distance variation and the orientation variation of the GNSS data. Since the method captures the signals in real time, the calibration is also performed relatively continuously (e.g., continuously while the GNSS signal is available, e.g. at regular and short time intervals/steps), and thus the odometer is persistently calibrated, as the comparison is performed every time there is a track (that is, a distance traveled by the vehicle) that can be extracted from both the odometer and the GNSS data, for example where there is suitable coverage from a GNSS system such as satellites. In other words, upon losing the GNSS data, the calibrated odometer, which conforms to the last available GNSS data, makes accurate predictions on its own using the calibrated parameters during the absence of the GNSS data. In yet other words, as long as the GNSS data is available, which again corresponds to the majority of the vehicle's trip, the odometer is continuously/in real-time calibrated and the vehicle localization can be determined based on the calibrated odometer and on the GNSS and sensor data. When the GNSS data is lost, for example in a tunnel, which often corresponds to a relatively short period of time compared to the duration of the vehicle's trip, the localization can be based solely on the prediction performed by the odometer using the last calibrated parameters and the sensor motion data.
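As an illustration of this real-time behavior, the following minimal sketch (in Python; the objects and helper names such as read_motion_sample, read_fix_if_available, predict and update are hypothetical placeholders, not part of the disclosure) shows one possible way of organizing the loop that alternates between GNSS-aided calibration and odometer-only prediction:

    # Minimal sketch of the real-time localization loop described above.
    # All objects and helper methods are hypothetical placeholders.
    def localization_loop(sensors, gnss, odometer, fusion1, fusion2):
        pose = None
        while True:
            motion = sensors.read_motion_sample()    # speed, yaw rate, timestamp
            fix = gnss.read_fix_if_available()       # None while the GNSS signal is lost
            if fix is not None:
                # Fusion block 1: calibrate the odometer parameters against the GNSS data
                odometer.params = fusion1.update(odometer.params, motion, fix)
                # Fusion block 2: fuse the GNSS location/heading with the odometer prediction
                pose = fusion2.update(odometer.predict(motion), fix)
            else:
                # GNSS lost (e.g. in a tunnel): rely on the last calibrated parameters only
                pose = odometer.predict(motion)
            yield pose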


The method is computer-implemented. This means that steps (or substantially all the steps) of the method are executed by at least one computer, or any similar system. Thus, steps of the method are performed by the computer, possibly fully automatically or semi-automatically. In examples, the triggering of at least some of the steps of the method may be performed through user-computer interaction. The level of user-computer interaction required may depend on the level of automatism foreseen and be balanced with the need to implement the user's wishes. In examples, this level may be user-defined and/or pre-defined.


A typical example of computer-implementation of a method is to perform the method with a system adapted for this purpose. The system may comprise a processor coupled to a memory and a graphical user interface (GUI), the memory having recorded thereon a computer program comprising instructions for performing the method. The memory may also store a database. The memory is any hardware adapted for such storage, possibly comprising several physically distinct parts (e.g. one for the program, and possibly one for the database). The system may be coupled with or further comprise the GNSS device, the odometer, and the at least one sensor. The at least one sensor may include a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor.



FIG. 1 shows an example of the system, wherein the system is a computer system 100 configured to be mounted on a vehicle.


The computer 100 of the example comprises a processor or processing unit such as a central processing unit (CPU) 1010 connected to an internal communication BUS 1000. A random access memory (RAM) may optionally be connected to the BUS (not shown). A memory controller 1020 manages accesses to a mass memory device, such as a hard drive or another form of non-volatile memory such as EPROM, EEPROM, and flash memory devices. Any of the foregoing may be supplemented by, or incorporated in, specially designed ASICs (application-specific integrated circuits). The computer may optionally also include a display, e.g., a touch screen (not shown). The display may be configured for displaying the localization of the vehicle. The computer 100 may be coupled with a GNSS 1080 through a port or interface 1090 and with a wheel sensor 1050 such as a wheel encoder, an IMU 1040 and a steering system sensor 1030 through a port or interface 1060. The computer 100 may comprise an output port 1070 for sending calibration parameters to the odometer 1110. The computer 100 may also comprise an output port 1100 for outputting the data forming the vehicle's predicted localization. The odometer 1110 may be coupled with the computer 100 as shown in the figure to send its predictions to the computer. The data may comprise the localization and the heading of the vehicle and any other localization data discussed herein.


The computer program may comprise instructions executable by a computer, the instructions comprising means for causing the above system to perform the method. The program may be recordable on any data storage medium, including the memory of the system. The program may for example be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The program may be implemented as an apparatus, for example a product tangibly embodied in a machine-readable storage device for execution by a programmable processor. Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method by operating on input data and generating output. The processor may thus be programmable and coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. The application program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired. In any case, the language may be a compiled or interpreted language. The program may be a full installation program or an update program. Application of the program on the system results in any case in instructions for performing the method. The computer program may alternatively be stored and executed on a server of a cloud computing environment, the server being in communication across a network with one or more clients. In such a case a processing unit executes the instructions comprised by the program, thereby causing the method to be performed on the cloud computing environment.


It is further provided a vehicle equipped with the system. The vehicle may be a motorbike, a car, a bus or a truck.


The method is for determining the localization of the moving vehicle, that is, the spatial location of the vehicle as it follows a generally continuous movement (this does however not exclude vehicle stops, for example at red lights). The localization may be expressed in terms of 3D Cartesian coordinates (x, y, z), with z denoting (by convention) an altitude of the vehicle with respect to the x-y plane, or in terms of 2D Cartesian coordinates (x, y), by considering the projection of the vehicle onto the ground in the x-y plane. The method may output the localization of the vehicle. For example, the method may output the location of the vehicle in 3D or 2D Cartesian coordinates. The method may also output the speed of the vehicle, the heading and/or the yaw rate.


Steps of the method are performed in real-time. In other words, the steps of the method may each be performed in a relatively short time (for example in the order of milliseconds), so that the duration of the steps is low enough for the processing to appear continuous (within a discrete approximation).


The vehicle motion data is obtained (as it stems from the at least one vehicle sensor) in real time, that is, as a sequence of discrete values separated by a short time-difference. The GNSS data is also obtained in real time while the GNSS signal is available. The method may obtain the vehicle motion data and/or the GNSS data as a sequence of values. The sequence of values may be time-ordered, i.e., the vehicle motion data and the GNSS data may be time-ordered sequences, where respective values are ordered according to the time by which the value was obtained. Each value in a respective time-ordered sequence may be associated with a time-stamp. The time-stamp may be a piece of data comprising a value of time, e.g., a time (having for example a date and hour, or alternatively indicated by an index) at which the vehicle motion data is acquired by the at least one vehicle sensor (in the case of the vehicle motion data) or a time at which the GNSS data is acquired while the GNSS signal is available. In examples, two values of the sequence of values may be separated by a uniform time-difference (for example, two consecutive values) or, alternatively, by a non-uniform time-difference.
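For instance, each value of such a time-ordered sequence could be represented as a small record pairing the measurement with its time-stamp. The sketch below (a plain Python illustration, not part of the disclosure) uses a dataclass for this purpose:

    from dataclasses import dataclass

    @dataclass
    class MotionSample:
        """One value of the time-ordered vehicle motion sequence."""
        timestamp: float   # acquisition time of the sample (e.g. seconds, or an index)
        speed: float       # vehicle speed from the wheel sensor
        yaw_rate: float    # yaw rate from the IMU / steering system sensor

    # Two consecutive samples separated by a (possibly non-uniform) time difference
    s0 = MotionSample(timestamp=0.00, speed=13.9, yaw_rate=0.01)
    s1 = MotionSample(timestamp=0.05, speed=14.0, yaw_rate=0.02)
    dt = s1.timestamp - s0.timestamp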


The vehicle motion data stems from at least one vehicle sensor. In other words, values obtained from vehicle motion data are the result of the measurement by the sensor corresponding to the vehicle's motion. By “motion data” it is meant any time-ordered sequence of values indicative of the vehicle's motion, for example indicating a position, speed, yaw and/or steering. The motion data may stem from a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor. In other words, the vehicle motion data may stem from the wheel sensor, the Inertial Measurement Unit (IMU), and the steering system sensor.


The method obtains in real-time, while the GNSS signal is available, GNSS data of a positioning of the vehicle. The GNSS data includes a distance variation and an orientation variation. The GNSS data may be a time-ordered sequence of values, each value including the distance variation and an orientation variation at each respective time. The “distance variation” may mean any variation of the distance travelled by the vehicle in a predetermined time-step according to the measurements of the GNSS data. The “orientation variation” may mean any variation of the orientation of the vehicle in a predetermined time-step according to the GNSS data. In other words, the GNSS data of the positioning of the vehicle is provided from the Global Navigation Satellite System while the GNSS signal is available, for example when the vehicle is in an open area with GNSS coverage. The Global Navigation Satellite System may be for example the GPS system, the GLONASS or the GALILEO navigation system.


The method calibrates in real-time parameters of the odometer. By “calibration” it is to be understood that the method modifies the parameters of the odometer so that the odometer makes vehicle localization predictions that are in line with the GNSS data. The parameters of the odometer may include a vehicle speed scaling, a vehicle yaw rate scaling, and a vehicle yaw rate offset. The method may calibrate only one of the vehicle speed scaling, the vehicle yaw rate scaling or the vehicle yaw rate offset. Alternatively, the method may calibrate more than one (for example all) of the parameters.
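As a simple illustration (a sketch, not stated in the disclosure), applying such a calibration to raw sensor values before they are used by the odometer could look as follows, with the three parameters correcting the measured speed and yaw rate:

    def apply_odometer_calibration(raw_speed, raw_yaw_rate,
                                   speed_scale, yaw_scale, yaw_offset):
        """Correct raw sensor values with the current odometer calibration.

        speed_scale : vehicle speed scaling (S_s)
        yaw_scale   : vehicle yaw rate scaling (theta_dot_s)
        yaw_offset  : vehicle yaw rate offset (theta_dot_O)
        """
        speed = speed_scale * raw_speed
        yaw_rate = yaw_scale * raw_yaw_rate + yaw_offset
        return speed, yaw_rate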


The calibration is based on a data fusion that uses a Kalman filter. By “data fusion” it is meant that the vehicle motion data is used concomitantly with the GNSS data (for example by comparing the vehicle motion data with the GNSS data, while the GNSS signal is available) for calibrating parameters of the odometer. The Kalman filter (also denoted herein below as a “Fusion block 1”) determines a predicted distance variation and a predicted orientation variation of the vehicle based on a current calibration of the odometer parameters and on the motion data. This corresponds to the prediction step of the Kalman filter, as known per se from Kalman filters. The Kalman filter also compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data. This corresponds to the correction step of the Kalman filter, as known per se from Kalman filters.
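The following minimal numerical sketch (Python/NumPy; the noise matrices Q and R are assumed given) illustrates such a predict-then-compare cycle for the three odometer parameters, consistent in form with the Fusion block 1 equations detailed further below:

    import numpy as np

    def calibrate_step(x, P, delta_S_in, delta_theta_in, delta_T, z_gnss, Q, R):
        """One predict/correct cycle over the calibration state x = [S_s, yaw_scale, yaw_offset].

        z_gnss = [delta_S_gnss, delta_theta_gnss] is the GNSS distance/orientation variation.
        """
        # Prediction: the calibration parameters are modeled as constant between cycles
        x_pred = x
        P_pred = P + Q
        # Predicted distance / orientation variation from the motion data
        H = np.array([[delta_S_in, 0.0, 0.0],
                      [0.0, delta_theta_in, delta_T]])
        z_pred = H @ x_pred
        # Correction: compare the prediction with the GNSS measurement
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z_gnss - z_pred)
        P_new = P_pred - K @ H @ P_pred
        return x_new, P_new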


The GNSS data may stem from a GNSS device that comprises only one antenna. The only one antenna may be configured to receive the signal from the GNSS system, such as the GPS system. As the GNSS device comprises only one antenna, the system is simple and cheap, as it relies on fewer resources for obtaining the GNSS data. In addition, the localization is made entirely online, so that the use of localization sensors or lasers may be avoided (which would add further errors and cost).


The odometer may predict cyclically in time a new location of the vehicle and a new heading of the vehicle based on a location and heading predicted at the previous cycle and on the motion data. In other words, the odometer may perform the prediction periodically (e.g., at regular and short time steps) based on a location and heading predicted at a timing prior to the prediction (e.g. at the previous time step). “Current cycle” may refer to the current time step, and “previous cycle” may refer to the preceding/previous time step. This ensures that the Kalman filter provides a sequential and cyclical prediction of the location as it receives the motion data and the GNSS data.


The odometer may predict cyclically in time the new location of the vehicle and the new heading of the vehicle as variables [x, y, vx, vy] based on the location and heading predicted at the previous cycle and on the motion data I = {vn, v0, θ̇n}. The variables x, y denote the location of the vehicle in an x-y plane. The variables vx, vy denote the speed components of the vehicle. The vehicle's heading angle is denoted by








\tan^{-1}\left(\frac{v_y}{v_x}\right).




The motion data may include the vehicle speed in the current cycle, the vehicle speed in the previous cycle (at a timing preceding the current cycle), and the vehicle yaw rate in the current cycle. The motion data may be captured as the formula I = {vn, v0, θ̇n} (e.g. implemented in the computer using this formula), including the variables vn, v0, θ̇n, where vn denotes the vehicle speed in the current cycle, v0 denotes the vehicle speed in the previous cycle, and θ̇n denotes the vehicle yaw rate in the current cycle.
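A minimal dead-reckoning sketch of one such odometer cycle (Python; an illustration consistent in form with the prediction equations detailed later in this description, assuming the calibrated parameters S_s, θ̇_s and θ̇_O are applied as shown) could be:

    import math

    def odometer_predict(x, y, vx, vy, v_n, v_0, yaw_rate_n, S_s, yaw_s, yaw_O, dt):
        """Predict the new [x, y, vx, vy] and heading from the previous cycle."""
        rot = (yaw_s * yaw_rate_n + yaw_O) * dt              # calibrated rotation over the cycle
        v_s = v_n / v_0 if abs(v_0) > 0 else 1.0             # speed scaling between cycles
        vx_new = v_s * (vx * math.cos(rot) - vy * math.sin(rot))
        vy_new = v_s * (vx * math.sin(rot) + vy * math.cos(rot))
        x_new = x + S_s * vx_new * dt                        # calibrated distance scaling
        y_new = y + S_s * vy_new * dt
        heading = math.atan2(vy_new, vx_new)                 # heading = tan^-1(vy / vx)
        return x_new, y_new, vx_new, vy_new, heading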


Calibrating parameters of the odometer may include correcting one or any combination of the vehicle speed scaling, the vehicle yaw rate scaling, and the vehicle yaw rate offset. The calibration of the parameters of the odometer makes it possible to compensate for scaling and offset errors in the vehicle motion data, thereby improving the real-time estimation of the positioning of the vehicle.


The parameters of the odometer may be the vehicle speed scaling, the vehicle yaw rate scaling, and the vehicle yaw rate offset, for example respectively denoted as [Ss, θ̇s, θ̇O]. The parameters of the odometer may be calibrated by comparing a distance variation ΔS (also referred to as “driven distance”) and an orientation variation/change Δθ (computed by the odometer for a specific vehicle's track) with a corresponding distance variation ΔS and orientation variation Δθ included in the GNSS data.


The Kalman filter that determines the predicted distance variation and the predicted orientation variation of the vehicle may be denoted mathematically by a prediction function f1( ) that receives the initial state of the parameters output from a previous cycle (x̂10) and the current vehicle motion data (I1) (obtained at a current cycle) to generate an a priori predicted state of the parameters (x̂1−). The a priori predicted state is fed to a function h1( ). The function h1( ) converts the a priori predicted state to the measurement space where the predicted measurement (x̂1m) is obtained. The Kalman filter computes the fused a posteriori state (x̂1n) based on the Kalman innovation, which is the difference between the measurement from the GNSS data and the predicted measurement from the motion signals.


The vehicle motion data (I1) may be presented with the integrals of the speed signals and yaw rate signals along the track, besides the elapsed time (ΔT) along the track, so as to take into account that the Kalman filter determines the predicted distance variation and the predicted orientation variation of the vehicle on a track between the current cycle and the previous cycle. The method may use such integrals to output a resulting driven distance (ΔSin) and orientation change (Δθin) of the track from the vehicle motion data, which is presented with (n) samples of input signals of vehicle speed (vi) and yaw rate (θ̇i) coming at time intervals (Δti).







\Delta S_{in} = \sum_{i=1}^{n} \left[ v_i \times \Delta t_i \right]

\Delta\theta_{in} = \sum_{i=1}^{n} \left[ \dot{\theta}_i \times \Delta t_i \right]

\Delta T = \sum_{i=1}^{n} \left[ \Delta t_i \right]

I_1 = \{ \Delta S_{in},\ \Delta\theta_{in},\ \Delta T \}

x_1^- = f_1(x_1^0, I_1)

\hat{x}_1^- = f_1(\hat{x}_1^0, I_1)

\begin{bmatrix} S_s \\ \dot{\theta}_s \\ \dot{\theta}_O \end{bmatrix}_{\hat{x}_1^-} = f_1\left( \begin{bmatrix} S_s \\ \dot{\theta}_s \\ \dot{\theta}_O \end{bmatrix}_{\hat{x}_1^0},\ I_1 \right)

\hat{x}_1^m = \begin{bmatrix} \Delta S_m \\ \Delta\theta_m \end{bmatrix}

\hat{x}_1^m = h_1(\hat{x}_1^-, I_1)

Z_1 = \begin{bmatrix} \Delta S_{GNSS} \\ \Delta\theta_{GNSS} \end{bmatrix}

\hat{x}_1^n = \hat{x}_1^- + K_1 \left[ Z_1 - \hat{x}_1^m \right]
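As a small worked illustration of these track integrals (a sketch, not part of the disclosure), the sums ΔSin, Δθin and ΔT can be accumulated directly from the sampled speed and yaw-rate signals:

    def track_integrals(samples):
        """Accumulate driven distance, orientation change and elapsed time along a track.

        samples: list of (v_i, yaw_rate_i, dt_i) tuples, matching the sums above.
        """
        samples = list(samples)
        delta_S_in = sum(v * dt for v, _, dt in samples)
        delta_theta_in = sum(w * dt for _, w, dt in samples)
        delta_T = sum(dt for _, _, dt in samples)
        return delta_S_in, delta_theta_in, delta_T

    # Example: three samples taken at 0.1 s intervals
    print(track_integrals([(10.0, 0.02, 0.1), (10.5, 0.02, 0.1), (11.0, 0.01, 0.1)]))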






The GNSS data may further include a location and heading of the vehicle. The method may further comprise, in real-time, when the GNSS signal is available, determining a localization of the vehicle by performing a data fusion that is based on a Kalman filter (also denoted herein below as “Fusion block 2”) that predicts the vehicle localization based on a fusion of the location and heading of the GNSS data and a location and heading predicted according to the calibrated odometer parameters. By this it is meant that the method uses the usual Kalman filter methods for performing the prediction and calibration. The method may calibrate the parameters of the odometer based on the predicted vehicle localization. In examples, the method may set the parameters of the odometer after determining the localization of the vehicle. This ensures that the calibration is performed dynamically.


The method may obtain from the odometer the vehicle's position and heading based on the result of the comparison (that is, the latest modified calibration parameters); this position and heading are then fused with the ones obtained from the GNSS in the Fusion block 2 to generate the vehicle's location.


The method may further comprise, in real-time, when the GNSS signal is lost, determining a localization of the vehicle based on a location and heading predicted according to the calibrated odometer parameters. In other words, upon losing the GNSS data (for instance whenever the vehicle is in an area not covered by the GNSS signal, such as a tunnel or an interior area), the method determines the localization of the vehicle using only the prediction of the vehicle's heading and position (which then form the localization data) by the odometer according to its calibrated parameters (that is, according to the parameters last calibrated on the last available GNSS data). In yet other words, the odometer thus relies on such calibration while the GNSS data is lost, and the localization determination is solely based on the odometer's processing of the motion data according to this calibration. When the GNSS signal is recovered, the method may perform the calibration again as described above and thus re-adjust the odometer with the available GNSS data.


This further improves the accuracy of the calibration of the odometer, as the method accurately determines the location of the vehicle, including the steering. This is because the method determines the location and heading according to the calibrated parameters of the odometer, and thus, according to odometer parameters that are in line with the last available GNSS data, and that are thereby reliable. Moreover, as the calibration is performed dynamically, the method feeds back to the odometer to recalibrate it, thereby reducing bias errors that may be present in electro-mechanical sensors.


Examples of the Kalman filter that predicts the vehicle localization are now discussed. Let [Ss, θ̇s, θ̇O] (which denote the parameters of the odometer) denote the state of the Kalman filter that determines a predicted distance variation and a predicted orientation variation of the vehicle (that is, the Fusion block 1). Let [x, y, vx, vy] denote the state of the Kalman filter that predicts the localization of the vehicle (that is, the Fusion block 2), the state [x, y, vx, vy] representing the vehicle's position and orientation, presented with the vehicle velocity components in the x and y directions.


The Kalman filter corresponding to the Fusion block 2 is denoted by the equations:








I_2 = \{ v_n,\ v_0,\ \dot{\theta}_n \},

x_2^- = f_2(x_2^0, x_1^n, I_2),

\hat{x}_2^- = f_2(\hat{x}_2^0, \hat{x}_1^n, I_2),

\begin{bmatrix} x \\ y \\ v_x \\ v_y \end{bmatrix} = f_2\left( \underbrace{\begin{bmatrix} x \\ y \\ v_x \\ v_y \end{bmatrix}}_{\hat{x}_2^0},\ \underbrace{\begin{bmatrix} S_s \\ \dot{\theta}_s \\ \dot{\theta}_O \end{bmatrix}}_{\hat{x}_1^n},\ I_2 \right),

Z_2 = \begin{bmatrix} x_{GNSS} \\ y_{GNSS} \\ v_{x_{GNSS}} \\ v_{y_{GNSS}} \end{bmatrix},

\hat{x}_2^m = h_2(\hat{x}_2^-, \hat{x}_1^n),

\hat{x}_2^n = \hat{x}_2^- + K_2 \left[ \hat{Z}_2 - \hat{x}_2^m \right].






The method may calibrate the parameters of the odometer dynamically, based on the Fusion block 1. The method may perform the fusion for the prediction of the vehicle localization, and the adaptation may be expressed mathematically by a mean equation {x̂1−, x̂1n} and a covariance equation {P1−, P1n} expressed as follows:







x_1^- = f_1(x_1^0, I_1)

\hat{x}_1^- = f_1(\hat{x}_1^0, I_1)

I_1 = \{ \Delta S_{in},\ \Delta\theta_{in},\ \Delta T \}

(x_1^- \cdot S_s) = (x_1^0 \cdot S_s)

(x_1^- \cdot \dot{\theta}_s) = (x_1^0 \cdot \dot{\theta}_s)

(x_1^- \cdot \dot{\theta}_O) = (x_1^0 \cdot \dot{\theta}_O)

F_{1x} = \left. \frac{\partial x_1^-}{\partial x_1^0} \right|_{x_1^0 = \hat{x}_1^0} = \left. \frac{\partial f_1(x_1^0, I_1)}{\partial x_1^0} \right|_{x_1^0 = \hat{x}_1^0} = I_{3 \times 3}

P_1^- = F_{1x} P_1^0 F_{1x}^T + Q_1 = I_{3 \times 3} P_1^0 I_{3 \times 3}^T + Q_1 = P_1^0 + Q_1

\frac{\partial (x_1^- \cdot S_s)}{\partial (x_1^0 \cdot S_s)} = 1, \quad \frac{\partial (x_1^- \cdot S_s)}{\partial (x_1^0 \cdot \dot{\theta}_s)} = 0, \quad \frac{\partial (x_1^- \cdot S_s)}{\partial (x_1^0 \cdot \dot{\theta}_O)} = 0

\frac{\partial (x_1^- \cdot \dot{\theta}_s)}{\partial (x_1^0 \cdot S_s)} = 0, \quad \frac{\partial (x_1^- \cdot \dot{\theta}_s)}{\partial (x_1^0 \cdot \dot{\theta}_s)} = 1, \quad \frac{\partial (x_1^- \cdot \dot{\theta}_s)}{\partial (x_1^0 \cdot \dot{\theta}_O)} = 0

\frac{\partial (x_1^- \cdot \dot{\theta}_O)}{\partial (x_1^0 \cdot S_s)} = 0, \quad \frac{\partial (x_1^- \cdot \dot{\theta}_O)}{\partial (x_1^0 \cdot \dot{\theta}_s)} = 0, \quad \frac{\partial (x_1^- \cdot \dot{\theta}_O)}{\partial (x_1^0 \cdot \dot{\theta}_O)} = 1

F_{1x} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_{3 \times 3}







However, a process noise (Q1) may be added to the covariance of the calibration parameters, as these parameters may be slightly changing over time, mainly due to temperature/pressure changes. Such added process noise may force a retuning of the parameters when a significantly long time has passed without adaptation.
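A small numerical illustration of this effect (a sketch, not the disclosure's implementation): without GNSS corrections, the calibration covariance grows by Q1 at every cycle, which increases the Kalman gain, and therefore the strength of the re-tuning, once GNSS measurements return:

    import numpy as np

    P = np.diag([1e-4, 1e-4, 1e-6])      # calibration covariance P_1 (assumed values)
    Q1 = np.diag([1e-8, 1e-8, 1e-10])    # process noise on the calibration parameters

    for cycle in range(1000):             # e.g. a long stretch without adaptation
        P = P + Q1                         # P_1^- = P_1^0 + Q_1 (F_1x is the identity)

    print(np.diag(P))                      # grown variances -> larger gain at the next update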







P_1^- = P_1^0 + Q_1

x_1^m = \begin{bmatrix} \Delta S_m \\ \Delta\theta_m \end{bmatrix}

x_1^m = h_1(x_1^-, I_1)

\hat{x}_1^m = h_1(\hat{x}_1^-, I_1)

(\hat{x}_1^m \cdot \Delta S) = (\hat{x}_1^- \cdot S_s) \times \sum_{i=1}^{n} \left[ v_i \times \Delta t_i \right] = (\hat{x}_1^- \cdot S_s) \times \Delta S_{in}

(\hat{x}_1^m \cdot \Delta\theta) = (\hat{x}_1^- \cdot \dot{\theta}_s) \times \sum_{i=1}^{n} \left[ \dot{\theta}_i \times \Delta t_i \right] + (\hat{x}_1^- \cdot \dot{\theta}_O) \times \sum_{i=1}^{n} \left[ \Delta t_i \right]

(\hat{x}_1^m \cdot \Delta\theta) = (\hat{x}_1^- \cdot \dot{\theta}_s) \times \Delta\theta_{in} + (\hat{x}_1^- \cdot \dot{\theta}_O) \times \Delta T

\frac{\partial (x_1^m \cdot \Delta S)}{\partial (x_1^- \cdot S_s)} = \Delta S_{in}, \quad \frac{\partial (x_1^m \cdot \Delta S)}{\partial (x_1^- \cdot \dot{\theta}_s)} = 0, \quad \frac{\partial (x_1^m \cdot \Delta S)}{\partial (x_1^- \cdot \dot{\theta}_O)} = 0

\frac{\partial (x_1^m \cdot \Delta\theta)}{\partial (x_1^- \cdot S_s)} = 0, \quad \frac{\partial (x_1^m \cdot \Delta\theta)}{\partial (x_1^- \cdot \dot{\theta}_s)} = \Delta\theta_{in}, \quad \frac{\partial (x_1^m \cdot \Delta\theta)}{\partial (x_1^- \cdot \dot{\theta}_O)} = \Delta T

H_1 = \left. \frac{\partial h_1(x_1^-, I_1)}{\partial x_1^-} \right|_{x_1^- = \hat{x}_1^-} = \left. \frac{\partial x_1^m}{\partial x_1^-} \right|_{x_1^- = \hat{x}_1^-} = \begin{bmatrix} \Delta S_{in} & 0 & 0 \\ 0 & \Delta\theta_{in} & \Delta T \end{bmatrix}

K_1 = P_1^- H_1^T \left( H_1 P_1^- H_1^T + R \right)^{-1}

\hat{x}_1^n = \hat{x}_1^- + K_1 \left[ \hat{Z}_1 - h_1(\hat{x}_1^-, I_1) \right]

P_1^n = P_1^- - K_1 H_1 P_1^-







It shall be noted that the GNSS measurement state Z1 is presented with mean Ẑ1 and covariance R1. The covariance matrix of the GNSS measurements can be set to a diagonal matrix, each of the diagonal elements presenting the variance of one of the state elements, assuming no correlation between them, as follows.







Z_1 = \begin{bmatrix} \Delta S_{GNSS} \\ \Delta\theta_{GNSS} \end{bmatrix}

\hat{Z}_1 = \begin{bmatrix} \Delta\hat{S}_{GNSS} \\ \Delta\hat{\theta}_{GNSS} \end{bmatrix}

R_1 = \begin{bmatrix} (\sigma_{\Delta S_{GNSS}})^2 & 0 \\ 0 & (\sigma_{\Delta\theta_{GNSS}})^2 \end{bmatrix}





In order to estimate the variance in the driven distance and orientation change along a track of GNSS detected location points, the standard deviation provided in the GNSS data for both the GNSS estimated speed and heading shall be used. The signals sigma_speed and sigma_heading provided by the GNSS can be used to present the standard deviation in the GNSS speed (σvGNSS) and the standard deviation in the GNSS heading (σθGNSS), respectively, at every GNSS location point. Thus, the computation of the variance of the driven distance (σΔSGNSS)² and of the orientation change (σΔθGNSS)² along the GNSS track can be formulated as follows.







\Delta S_{GNSS} = \sum_{i=1}^{n} \left( v_{i_{GNSS}} \times \Delta t_{i_{GNSS}} \right)






where ΔSGNSS is the driven distance along the GNSS track, viGNSS is the speed provided by the GNSS data at every GNSS location point with index i, and ΔtiGNSS is the time span from the GNSS provided data at index i until the data at the next step is provided.


By retrieving the Expectation E[ ] and Variance Var[ ] definitions for a single random variable, the following rules can be derived, assuming x, y, and z are random variables and a is an arbitrary constant value.








\text{if } y = a\,x, \text{ then } E[y] = E[a\,x] = a\,E[x]

\text{then, } Var[y] = E[(y - \hat{y})^2] = E[y^2 + \hat{y}^2 - 2 y \hat{y}] = E[y^2] + E[\hat{y}^2] - 2 E[y \hat{y}] = E[y^2] - E^2[y]

Var[y] = E[a^2 x^2] - a^2 E^2[x] = a^2 \left( E[x^2] - E^2[x] \right) = a^2\, Var[x]

\text{if } y = x + z, \text{ then } E[y] = E[x] + E[z]

\text{then, } Var[y] = E[y^2] - E^2[y] = E[x^2 + z^2 + 2 x z] - E^2[x] - E^2[z] - 2 E[x] E[z]

Var[y] = E[x^2] - E^2[x] + E[z^2] - E^2[z] + 2 \left( E[x z] - E[x] E[z] \right)

\text{if } x \text{ and } z \text{ are independent, then } Var[y] = Var[x] + Var[z]

(\sigma_{\Delta S_{GNSS}})^2 = \sum_{i=1}^{n} (\Delta t_{i_{GNSS}})^2\, (\sigma_{v_{i_{GNSS}}})^2







On the other hand, the orientation change is the difference in the heading at the final and initial points along the track with a length of n points. Thus, its variance can be computed as follows.







\Delta\theta_{GNSS} = \theta_{n_{GNSS}} - \theta_{1_{GNSS}}

\text{if } y = x - z, \text{ then } E[y] = E[x] - E[z]

\text{then, } Var[y] = E[y^2] - E^2[y] = E[x^2 + z^2 - 2 x z] - E^2[x] - E^2[z] + 2 E[x] E[z]

Var[y] = E[x^2] - E^2[x] + E[z^2] - E^2[z] - 2 \left( E[x z] - E[x] E[z] \right)

\text{if } x \text{ and } z \text{ are independent, then } Var[y] = Var[x] + Var[z]

(\sigma_{\Delta\theta_{GNSS}})^2 = (\sigma_{\theta_{n_{GNSS}}})^2 + (\sigma_{\theta_{1_{GNSS}}})^2
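As a small numerical sketch of this variance propagation (assuming per-point sigma_speed and sigma_heading values are available from the receiver, as stated above):

    def gnss_track_variances(fixes):
        """Variance of driven distance and of orientation change along a GNSS track.

        fixes: list of (dt_i, sigma_speed_i, sigma_heading_i), one per GNSS location point.
        Implements (sigma_dS)^2 = sum(dt_i^2 * sigma_v_i^2) and
        (sigma_dTheta)^2 = sigma_heading_n^2 + sigma_heading_1^2.
        """
        var_dS = sum((dt ** 2) * (sv ** 2) for dt, sv, _ in fixes)
        var_dTheta = fixes[-1][2] ** 2 + fixes[0][2] ** 2
        return var_dS, var_dTheta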






Examples of the Kalman filter corresponding to the Fusion block 2 are now discussed. Let (vs) denote a scaling factor of the vehicle's input speed from one cycle to another (for example from a previous cycle to a current cycle). The scaling factor may be computed as the ratio between the speed in the current cycle (vn) and the speed in the previous cycle (v0). The absolute value of the speed in the previous cycle |v0| may be greater than zero. This yields:







x_2^- = f_2(x_2^0, x_1^n, I_2)

\hat{x}_2^- = f_2(\hat{x}_2^0, \hat{x}_1^n, I_2)

I_2 = \{ v_n,\ v_0,\ \dot{\theta}_n \}

(\hat{x}_2^- \cdot x) = (\hat{x}_2^0 \cdot x) + (\hat{x}_1^n \cdot S_s) \times (\hat{x}_2^- \cdot v_x) \times \Delta t

(\hat{x}_2^- \cdot y) = (\hat{x}_2^0 \cdot y) + (\hat{x}_1^n \cdot S_s) \times (\hat{x}_2^- \cdot v_y) \times \Delta t

rot = \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t

\text{if } (\lvert v_0 \rvert > 0) \text{ then } v_s = \frac{v_n}{v_0} \text{, else } v_s = 1

(\hat{x}_2^- \cdot v_x) = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right)

(\hat{x}_2^- \cdot v_y) = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \sin(rot) + (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right)

(\hat{x}_2^- \cdot x) = (\hat{x}_2^0 \cdot x) + (\hat{x}_1^n \cdot S_s) \times v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \Delta t

(\hat{x}_2^- \cdot y) = (\hat{x}_2^0 \cdot y) + (\hat{x}_1^n \cdot S_s) \times v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \sin(rot) + (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \Delta t






At the prediction, the method may overwrite the speed in the state of the Fusion block 2 with the input speed (vn) projected on the vehicle's cartesian (x, y) coordinates.
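A short sketch of this overwrite (Python; head0 is the heading of the previous state, computed as described further below):

    import math

    def overwrite_speed(v_n, rot, head0):
        """Project the input speed v_n on the Cartesian (x, y) axes after rotation by rot."""
        vx = v_n * math.cos(rot + head0)
        vy = v_n * math.sin(rot + head0)
        return vx, vy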








x_2^- - \hat{x}_2^- = f_2(x_2^0, x_1^n, I_2) - f_2(\hat{x}_2^0, \hat{x}_1^n, \hat{I}_2)

x_2^- - \hat{x}_2^- = \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial x_2^0} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} (x_2^0 - \hat{x}_2^0) + \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial x_1^n} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} (x_1^n - \hat{x}_1^n) + \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial I_2} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} (I_2 - \hat{I}_2)

F_{2x_2} = \left. \frac{\partial x_2^-}{\partial x_2^0} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} = \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial x_2^0} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2}

F_{2x_1} = \left. \frac{\partial x_2^-}{\partial x_1^n} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} = \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial x_1^n} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2}

F_{2I_2} = \left. \frac{\partial x_2^-}{\partial I_2} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2} = \left. \frac{\partial f_2(x_2^0, x_1^n, I_2)}{\partial I_2} \right|_{x_2^0 = \hat{x}_2^0,\; x_1^n = \hat{x}_1^n,\; I_2 = \hat{I}_2}

x_2^- - \hat{x}_2^- = F_{2x_2} (x_2^0 - \hat{x}_2^0) + F_{2x_1} (x_1^n - \hat{x}_1^n) + F_{2I_2} (I_2 - \hat{I}_2)

E\left[ (x_2^- - \hat{x}_2^-)(x_2^- - \hat{x}_2^-)^T \right] = F_{2x_2} \left[ Var(x_2^0) \right] F_{2x_2}^T + F_{2x_1} \left[ Var(x_1^n) \right] F_{2x_1}^T + F_{2I_2} \left[ Var(I_2) \right] F_{2I_2}^T + F_{2x_2} \left[ Covar(x_2^0, x_1^n) \right] F_{2x_1}^T + F_{2x_2} \left[ Covar(x_2^0, I_2) \right] F_{2I_2}^T + F_{2x_1} \left[ Covar(x_1^n, I_2) \right] F_{2I_2}^T + F_{2x_1} \left[ Covar(x_2^0, x_1^n) \right]^T F_{2x_2}^T + F_{2I_2} \left[ Covar(x_1^n, I_2) \right]^T F_{2x_1}^T + F_{2I_2} \left[ Covar(x_2^0, I_2) \right]^T F_{2x_2}^T














As the recent calibrated parameters state (x1n), the previous position and heading state (x20), and the input signals (I2) are independent from each other, such states are not correlated.







Covar(x_2^0, x_1^n) = Covar(x_2^0, I_2) = Covar(x_1^n, I_2) = 0.

P_2^- = E\left[ (x_2^- - \hat{x}_2^-)(x_2^- - \hat{x}_2^-)^T \right] = F_{2x_2} \left[ Var(x_2^0) \right] F_{2x_2}^T + F_{2x_1} \left[ Var(x_1^n) \right] F_{2x_1}^T + F_{2I_2} \left[ Var(I_2) \right] F_{2I_2}^T

P_2^0 = Var(x_2^0) = E\left[ (x_2^0 - \hat{x}_2^0)(x_2^0 - \hat{x}_2^0)^T \right]

P_1^n = Var(x_1^n) = E\left[ (x_1^n - \hat{x}_1^n)(x_1^n - \hat{x}_1^n)^T \right]

Q = Var(I_2) = E\left[ (I_2 - \hat{I}_2)(I_2 - \hat{I}_2)^T \right]

P_2^- = F_{2x_2} P_2^0 F_{2x_2}^T + F_{2x_1} P_1^n F_{2x_1}^T + F_{2I_2} Q F_{2I_2}^T
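In code, this block-wise covariance propagation is a direct transcription (Python/NumPy sketch, with the Jacobians assumed already evaluated):

    import numpy as np

    def propagate_covariance(F_x2, P_20, F_x1, P_1n, F_I2, Q):
        """P_2^- = F_x2 P_2^0 F_x2^T + F_x1 P_1^n F_x1^T + F_I2 Q F_I2^T."""
        return F_x2 @ P_20 @ F_x2.T + F_x1 @ P_1n @ F_x1.T + F_I2 @ Q @ F_I2.T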







The method may consider the equations:










\frac{\partial (x_2^- \cdot x)}{\partial (x_2^0 \cdot x)} = 1, \quad \frac{\partial (x_2^- \cdot y)}{\partial (x_2^0 \cdot x)} = 0, \quad \frac{\partial (x_2^- \cdot v_x)}{\partial (x_2^0 \cdot x)} = 0, \quad \frac{\partial (x_2^- \cdot v_y)}{\partial (x_2^0 \cdot x)} = 0

\frac{\partial (x_2^- \cdot x)}{\partial (x_2^0 \cdot y)} = 0, \quad \frac{\partial (x_2^- \cdot y)}{\partial (x_2^0 \cdot y)} = 1, \quad \frac{\partial (x_2^- \cdot v_x)}{\partial (x_2^0 \cdot y)} = 0, \quad \frac{\partial (x_2^- \cdot v_y)}{\partial (x_2^0 \cdot y)} = 0

\frac{\partial (x_2^- \cdot x)}{\partial (x_2^0 \cdot v_x)} = (x_1^n \cdot S_s) \times v_s \times \cos(rot) \times \Delta t

\frac{\partial (x_2^- \cdot y)}{\partial (x_2^0 \cdot v_x)} = (x_1^n \cdot S_s) \times v_s \times \sin(rot) \times \Delta t

\frac{\partial (x_2^- \cdot v_x)}{\partial (x_2^0 \cdot v_x)} = v_s \times \cos(rot)

\frac{\partial (x_2^- \cdot v_y)}{\partial (x_2^0 \cdot v_x)} = v_s \times \sin(rot)

\frac{\partial (x_2^- \cdot x)}{\partial (x_2^0 \cdot v_y)} = -(x_1^n \cdot S_s) \times v_s \times \sin(rot) \times \Delta t

\frac{\partial (x_2^- \cdot y)}{\partial (x_2^0 \cdot v_y)} = (x_1^n \cdot S_s) \times v_s \times \cos(rot) \times \Delta t

\frac{\partial (x_2^- \cdot v_x)}{\partial (\hat{x}_2^0 \cdot v_y)} = -v_s \times \sin(rot)

\frac{\partial (x_2^- \cdot v_y)}{\partial (\hat{x}_2^0 \cdot v_y)} = v_s \times \cos(rot)






The method may consider the matrix F2x2 of the form:







F_{2x_2} = \begin{bmatrix}
1 & 0 & (\hat{x}_1^n \cdot S_s) \times v_s \times \cos(rot) \times \Delta t & -(\hat{x}_1^n \cdot S_s) \times v_s \times \sin(rot) \times \Delta t \\
0 & 1 & (\hat{x}_1^n \cdot S_s) \times v_s \times \sin(rot) \times \Delta t & (\hat{x}_1^n \cdot S_s) \times v_s \times \cos(rot) \times \Delta t \\
0 & 0 & v_s \times \cos(rot) & -v_s \times \sin(rot) \\
0 & 0 & v_s \times \sin(rot) & v_s \times \cos(rot)
\end{bmatrix}
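A small numerical sketch of this Jacobian (Python/NumPy; S_s is the calibrated speed scaling and v_s, rot, dt are as defined above):

    import math
    import numpy as np

    def jacobian_F2_x2(S_s, v_s, rot, dt):
        """Jacobian of the prediction with respect to the previous state [x, y, vx, vy]."""
        c, s = math.cos(rot), math.sin(rot)
        return np.array([
            [1.0, 0.0,  S_s * v_s * c * dt, -S_s * v_s * s * dt],
            [0.0, 1.0,  S_s * v_s * s * dt,  S_s * v_s * c * dt],
            [0.0, 0.0,  v_s * c,            -v_s * s],
            [0.0, 0.0,  v_s * s,             v_s * c],
        ])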





The matrix F_{2x_2} has a first column with components:

    • 1
    • 0
    • 0
    • 0


The matrix F_{2x_2} has a second column with components:

    • 0
    • 1
    • 0
    • 0


The matrix F_{2x_2} has a third column with components:

(\hat{x}_1^n \cdot S_s) \times v_s \times \cos\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right) \times \Delta t

(\hat{x}_1^n \cdot S_s) \times v_s \times \sin\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right) \times \Delta t

v_s \times \cos\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right)

v_s \times \sin\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right)


The matrix F_{2x_2} has a fourth column with components:

-(\hat{x}_1^n \cdot S_s) \times v_s \times \sin\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right) \times \Delta t

(\hat{x}_1^n \cdot S_s) \times v_s \times \cos\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right) \times \Delta t

-v_s \times \sin\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right)

v_s \times \cos\left( \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t \right)





The method may consider the following equations:











\frac{\partial (x_2^- \cdot x)}{\partial (x_1^n \cdot S_s)} = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \Delta t,

\frac{\partial (x_2^- \cdot y)}{\partial (x_1^n \cdot S_s)} = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \sin(rot) + (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \Delta t,

\frac{\partial (x_2^- \cdot v_x)}{\partial (x_1^n \cdot S_s)} = 0, \quad \frac{\partial (x_2^- \cdot v_y)}{\partial (x_1^n \cdot S_s)} = 0,

\frac{\partial (x_2^- \cdot x)}{\partial (x_1^n \cdot \dot{\theta}_s)} = v_s \times (\hat{x}_1^n \cdot S_s) \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \dot{\theta}_n \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot y)}{\partial (x_1^n \cdot \dot{\theta}_s)} = v_s \times (\hat{x}_1^n \cdot S_s) \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \dot{\theta}_n \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot v_x)}{\partial (x_1^n \cdot \dot{\theta}_s)} = v_s \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \dot{\theta}_n \times \Delta t,

\frac{\partial (x_2^- \cdot v_y)}{\partial (x_1^n \cdot \dot{\theta}_s)} = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \dot{\theta}_n \times \Delta t,

\frac{\partial (x_2^- \cdot x)}{\partial (x_1^n \cdot \dot{\theta}_O)} = v_s \times (\hat{x}_1^n \cdot S_s) \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot y)}{\partial (x_1^n \cdot \dot{\theta}_O)} = v_s \times (\hat{x}_1^n \cdot S_s) \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot v_x)}{\partial (x_1^n \cdot \dot{\theta}_O)} = v_s \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \Delta t,

\frac{\partial (x_2^- \cdot v_y)}{\partial (x_1^n \cdot \dot{\theta}_O)} = v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \Delta t.

rot = \left( (\hat{x}_1^n \cdot \dot{\theta}_s) \times \dot{\theta}_n + (\hat{x}_1^n \cdot \dot{\theta}_O) \right) \times \Delta t





The method may consider a matrix F_{2x_1} with a first column having components

v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \Delta t

v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \sin(rot) + (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \Delta t

0

0


The matrix F_{2x_1} also has a second column having components

v_s \times (\hat{x}_1^n \cdot S_s) \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \dot{\theta}_n \times (\Delta t)^2

v_s \times (\hat{x}_1^n \cdot S_s) \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \dot{\theta}_n \times (\Delta t)^2

v_s \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \dot{\theta}_n \times \Delta t

v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \dot{\theta}_n \times \Delta t


The matrix F_{2x_1} also has a third column having components

v_s \times (\hat{x}_1^n \cdot S_s) \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times (\Delta t)^2

v_s \times (\hat{x}_1^n \cdot S_s) \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times (\Delta t)^2

v_s \times \left( -(\hat{x}_2^0 \cdot v_x) \times \sin(rot) - (\hat{x}_2^0 \cdot v_y) \times \cos(rot) \right) \times \Delta t

v_s \times \left( (\hat{x}_2^0 \cdot v_x) \times \cos(rot) - (\hat{x}_2^0 \cdot v_y) \times \sin(rot) \right) \times \Delta t




The method may consider I2 = {vn, θ̇n} as the vehicle bus input. Indeed, this is because the speed in the state is overwritten with the input speed (vn) on the bus, projected in the Cartesian (x, y) coordinates. The current heading of the vehicle's motion is presented by the speed components in the state as follows.







head_0 = \tan^{-1}\left( \frac{\hat{x}_2^0 \cdot v_y}{\hat{x}_2^0 \cdot v_x} \right)
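In a sketch, this heading would typically be computed with the two-argument arctangent to keep the correct quadrant (an implementation detail, not stated in the disclosure):

    import math

    def heading_from_velocity(vx, vy):
        """head_0 = tan^-1(vy / vx), computed with atan2 for quadrant safety."""
        return math.atan2(vy, vx)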





Moreover, the method may take into account the following variables:











\frac{\partial (x_2^- \cdot x)}{\partial v_n} = (\hat{x}_1^n \cdot S_s) \times \cos(rot + head_0) \times \Delta t,

\frac{\partial (x_2^- \cdot y)}{\partial v_n} = (\hat{x}_1^n \cdot S_s) \times \sin(rot + head_0) \times \Delta t,

\frac{\partial (x_2^- \cdot v_x)}{\partial v_n} = \cos(rot + head_0),

\frac{\partial (x_2^- \cdot v_y)}{\partial v_n} = \sin(rot + head_0),

\frac{\partial (x_2^- \cdot x)}{\partial \dot{\theta}_n} = -v_n \times (\hat{x}_1^n \cdot S_s) \times (\hat{x}_1^n \cdot \dot{\theta}_s) \times \sin(rot + head_0) \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot y)}{\partial \dot{\theta}_n} = v_n \times (\hat{x}_1^n \cdot S_s) \times (\hat{x}_1^n \cdot \dot{\theta}_s) \times \cos(rot + head_0) \times (\Delta t)^2,

\frac{\partial (x_2^- \cdot v_x)}{\partial \dot{\theta}_n} = -v_n \times (\hat{x}_1^n \cdot \dot{\theta}_s) \times \sin(rot + head_0) \times \Delta t,

\frac{\partial (x_2^- \cdot v_y)}{\partial \dot{\theta}_n} = v_n \times (\hat{x}_1^n \cdot \dot{\theta}_s) \times \cos(rot + head_0) \times \Delta t.






The method may consider a matrix $F_{2_{I_2}}$ of the form:

$$F_{2_{I_2}}=\begin{bmatrix}
(\hat{x}_{1_n}\cdot S_s)\times\cos(\mathrm{rot}+\mathrm{head}_0)\times\Delta t & -v_n\times(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\sin(\mathrm{rot}+\mathrm{head}_0)\times(\Delta t)^2\\[4pt]
(\hat{x}_{1_n}\cdot S_s)\times\sin(\mathrm{rot}+\mathrm{head}_0)\times\Delta t & v_n\times(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\cos(\mathrm{rot}+\mathrm{head}_0)\times(\Delta t)^2\\[4pt]
\cos(\mathrm{rot}+\mathrm{head}_0) & -v_n\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\sin(\mathrm{rot}+\mathrm{head}_0)\times\Delta t\\[4pt]
\sin(\mathrm{rot}+\mathrm{head}_0) & v_n\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\cos(\mathrm{rot}+\mathrm{head}_0)\times\Delta t
\end{bmatrix}$$
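As a minimal sketch, assuming the current calibration estimates and bus inputs are available as plain floats (the names below are illustrative, not from the disclosure), the Jacobian above can be assembled directly from the partial derivatives listed before it:

```python
import math
import numpy as np

def jacobian_f2_i2(v_n, s_s, theta_dot_s, rot, head_0, dt):
    """4x2 Jacobian of the predicted state (x, y, vx, vy) with respect to the
    bus inputs I2 = (v_n, theta_dot_n), built from the partial derivatives
    listed above. s_s and theta_dot_s are the current speed-scale and
    yaw-rate-scale calibration estimates."""
    c = math.cos(rot + head_0)
    s = math.sin(rot + head_0)
    return np.array([
        [s_s * c * dt, -v_n * s_s * theta_dot_s * s * dt ** 2],
        [s_s * s * dt,  v_n * s_s * theta_dot_s * c * dt ** 2],
        [c,            -v_n * theta_dot_s * s * dt],
        [s,             v_n * theta_dot_s * c * dt],
    ])

# Example with illustrative values.
print(jacobian_f2_i2(v_n=10.0, s_s=1.02, theta_dot_s=0.98,
                     rot=0.01, head_0=0.3, dt=0.05))
```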










$F_{2_{I_2}}$ is a matrix having a left column with components:

$$(\hat{x}_{1_n}\cdot S_s)\times\cos\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times\Delta t$$

$$(\hat{x}_{1_n}\cdot S_s)\times\sin\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times\Delta t$$

$$\cos\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)$$

$$\sin\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)$$
and a right column with components:







$$-v_n\times(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\sin\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times(\Delta t)^2$$

$$v_n\times(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\cos\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times(\Delta t)^2$$

$$-v_n\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\sin\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times\Delta t$$

$$v_n\times(\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\cos\!\left(\big((\hat{x}_{1_n}\cdot\dot{\theta}_s)\times\dot{\theta}_n+(\hat{x}_{1_n}\cdot\dot{\theta}_O)\big)\times\Delta t+\tan^{-1}\!\left(\frac{\hat{x}_{2_0}\cdot v_y}{\hat{x}_{2_0}\cdot v_x}\right)\right)\times\Delta t$$
Moreover, the method may consider the following formulae:








$$\hat{x}_{2_m}=\begin{bmatrix}x_m\\ y_m\\ v_{x_m}\\ v_{y_m}\end{bmatrix}=h_2\big(\hat{x}_2^-,\hat{x}_{1_n}\big)$$

$$(\hat{x}_{2_m}\cdot x)=(\hat{x}_2^-\cdot x)$$

$$(\hat{x}_{2_m}\cdot y)=(\hat{x}_2^-\cdot y)$$

$$(\hat{x}_{2_m}\cdot v_x)=(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_2^-\cdot v_x)$$

$$(\hat{x}_{2_m}\cdot v_y)=(\hat{x}_{1_n}\cdot S_s)\times(\hat{x}_2^-\cdot v_y)$$

$$H_{2_{x_2}}=\left.\frac{\partial x_{2_m}}{\partial x_2^-}\right|_{x_2^-=\hat{x}_2^-,\,x_{1_n}=\hat{x}_{1_n}}=\frac{\partial h_2\big(\hat{x}_2^-,\hat{x}_{1_n}\big)}{\partial x_2^-}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&(\hat{x}_{1_n}\cdot S_s)&0\\ 0&0&0&(\hat{x}_{1_n}\cdot S_s)\end{bmatrix}$$

$$H_{2_{x_1}}=\left.\frac{\partial x_{2_m}}{\partial x_{1_n}}\right|_{x_2^-=\hat{x}_2^-,\,x_{1_n}=\hat{x}_{1_n}}=\frac{\partial h_2\big(\hat{x}_2^-,\hat{x}_{1_n}\big)}{\partial\hat{x}_{1_n}}=\begin{bmatrix}0&0&0\\ 0&0&0\\ (\hat{x}_2^-\cdot v_x)&0&0\\ (\hat{x}_2^-\cdot v_y)&0&0\end{bmatrix}$$
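A minimal sketch of the measurement mapping h2 and its two Jacobians as defined above, assuming the odometer calibration state x1n is ordered as (Ss, {dot over (θ)}s, {dot over (θ)}O); the names are illustrative and this is not the disclosed implementation:

```python
import numpy as np

def h2(x2_minus, x1_n):
    """Map the predicted state x2- = [x, y, vx, vy] to the measured state,
    scaling the velocity components by the speed-scale estimate Ss = x1_n[0]."""
    s_s = x1_n[0]
    x, y, vx, vy = x2_minus
    return np.array([x, y, s_s * vx, s_s * vy])

def jacobians_h2(x2_minus, x1_n):
    """Return (H_2x2, H_2x1), the Jacobians of h2 with respect to the
    predicted state and to the calibration parameters (Ss, th_s, th_O)."""
    s_s = x1_n[0]
    vx, vy = x2_minus[2], x2_minus[3]
    h_2x2 = np.diag([1.0, 1.0, s_s, s_s])    # 4x4
    h_2x1 = np.zeros((4, 3))                 # 4x3
    h_2x1[2, 0] = vx                         # d(vx_m)/d(Ss)
    h_2x1[3, 0] = vy                         # d(vy_m)/d(Ss)
    return h_2x2, h_2x1
```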






It can be concluded that:







$$K_2=P_2^-\,H_{2_{x_2}}^T\Big(H_{2_{x_2}}\,P_2^-\,H_{2_{x_2}}^T+H_{2_{x_1}}\,P_{1_n}\,H_{2_{x_1}}^T+R_2\Big)^{-1}$$

$$\hat{x}_{2_n}=\hat{x}_2^-+K_2\big[\hat{Z}_2-h_2(\hat{x}_2^-,\hat{x}_{1_n})\big]=\hat{x}_2^-+K_2\big[\hat{Z}_2-\hat{x}_{2_m}\big]$$

$$P_{2_n}=P_2^- - K_2\,H_{2_{x_2}}\,P_2^-.$$







The above equations result from the proof of the innovative Kalman filter described below.
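For illustration, the correction step above differs from a textbook extended Kalman filter update only in that the innovation covariance also carries the calibration uncertainty through the H2x1 P1n H2x1T term. The sketch below, with assumed variable names, shows one such correction step; it is not the disclosed code.

```python
import numpy as np

def fusion_block_2_update(x2_minus, p2_minus, x1_n, p1_n, z2_hat, r2,
                          h2, jacobians_h2):
    """One correction step following the equations above.

    x2_minus, p2_minus : predicted state [x, y, vx, vy] and its covariance
    x1_n, p1_n         : calibrated odometer parameters and their covariance
    z2_hat, r2         : GNSS measurement mean and covariance
    h2, jacobians_h2   : measurement model and its Jacobians (see earlier sketch)
    """
    h_2x2, h_2x1 = jacobians_h2(x2_minus, x1_n)
    # Innovation covariance includes the calibration-parameter uncertainty.
    s = h_2x2 @ p2_minus @ h_2x2.T + h_2x1 @ p1_n @ h_2x1.T + r2
    k2 = p2_minus @ h_2x2.T @ np.linalg.inv(s)
    x2_n = x2_minus + k2 @ (z2_hat - h2(x2_minus, x1_n))
    p2_n = p2_minus - k2 @ h_2x2 @ p2_minus
    return x2_n, p2_n
```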


It shall be noted that the GNSS measurement state Z2 is represented with mean {circumflex over (Z)}2 and covariance R2. The covariance matrix of the GNSS measurements can be set to a diagonal matrix, where each diagonal element represents the variance of one of the state elements, assuming no cross-correlation between them, as follows.







$$Z_2=\begin{bmatrix}x_{GNSS}\\ y_{GNSS}\\ v_{x_{GNSS}}\\ v_{y_{GNSS}}\end{bmatrix},\qquad
\hat{Z}_2=\begin{bmatrix}\hat{x}_{GNSS}\\ \hat{y}_{GNSS}\\ \hat{v}_{x_{GNSS}}\\ \hat{v}_{y_{GNSS}}\end{bmatrix},\qquad
R_2=\begin{bmatrix}
(\sigma_{x_{GNSS}})^2&0&0&0\\
0&(\sigma_{y_{GNSS}})^2&0&0\\
0&0&(\sigma_{v_{x_{GNSS}}})^2&0\\
0&0&0&(\sigma_{v_{y_{GNSS}}})^2
\end{bmatrix}$$
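As a small illustration (assumed signal names), the GNSS measurement mean and the diagonal covariance above can be assembled as follows:

```python
import numpy as np

def gnss_measurement(x_g, y_g, vx_g, vy_g, sigma_x, sigma_y, sigma_vx, sigma_vy):
    """Assemble the GNSS measurement mean Z2_hat and the diagonal covariance R2
    (no cross-correlation between the state elements), as defined above."""
    z2_hat = np.array([x_g, y_g, vx_g, vy_g])
    r2 = np.diag([sigma_x ** 2, sigma_y ** 2, sigma_vx ** 2, sigma_vy ** 2])
    return z2_hat, r2
```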





In order to estimate the variance in the positions and speeds of GNSS detected location points, the standard deviation provided in the GNSS data for both the GNSS estimated speed and heading shall be used. The signals sigma_speed and sigma_heading provided by GNSS can be used to represent the standard deviation in the GNSS speed (σvGNSS) and the standard deviation in the GNSS heading (σθGNSS), respectively, at every GNSS location point. Moreover, the Horizontal Estimated Position Error (HEPE) and Horizontal Dilution Of Precision (HDOP) signals shall be used to estimate the position errors, in particular the latter, according to reference M. Specht, "Experimental Studies on the Relationship Between HDOP and Position Error in the GPS System", March 2022, Metrology and Measurement Systems 29 (1): 17-36, DOI: 10.24425/mms.2022.138549, which is incorporated herein by reference. Thus, the computation of the variance of the state elements can be formulated as follows.







$$\sigma_{x_{GNSS}}=\mathrm{HEPE}\times\mathrm{HDOP}$$

$$\sigma_{y_{GNSS}}=\mathrm{HEPE}\times\mathrm{HDOP}$$

$$(\sigma_{x_{GNSS}})^2=(\mathrm{HEPE})^2\times(\mathrm{HDOP})^2$$

$$(\sigma_{y_{GNSS}})^2=(\mathrm{HEPE})^2\times(\mathrm{HDOP})^2$$
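A small illustration (assumed signal names) of filling the position variances directly from the HEPE and HDOP signals as in the formulas above:

```python
def gnss_position_variances(hepe, hdop):
    """Return ((sigma_x_GNSS)^2, (sigma_y_GNSS)^2) using
    sigma_x = sigma_y = HEPE * HDOP, as in the formulas above."""
    var = (hepe * hdop) ** 2
    return var, var

print(gnss_position_variances(hepe=2.5, hdop=1.2))  # illustrative values
```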






The accuracy of the GNSS data is strongly affected during sharp steering maneuvers of the vehicle. In order to consider the effect of the yaw rate on the accuracy of the obtained GNSS data, the following equations are formulated to use the average yaw rate ({dot over (θ)}avg) from the vehicle bus data over the GNSS cycle in estimating the vehicle heading. θGNSS denotes the heading obtained from GNSS at the previous GNSS cycle, and ΔtGNSS denotes the cycle time of the input GNSS data. σ{dot over (θ)} denotes the standard deviation of the input yaw rate, which is used as well in Q, while σΔtGNSS denotes the standard deviation of the cycle time of the GNSS. Both can be predefined according to the available GNSS data.







$$v_{x_{GNSS}}=v_{GNSS}\times\cos\big(90^\circ-(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\big)=v_{GNSS}\times\sin(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$v_{y_{GNSS}}=v_{GNSS}\times\sin\big(90^\circ-(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\big)=v_{GNSS}\times\cos(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$\frac{\partial v_{x_{GNSS}}}{\partial v_{GNSS}}=\sin(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$\frac{\partial v_{x_{GNSS}}}{\partial\theta_{GNSS}}=v_{GNSS}\times\cos(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$\frac{\partial v_{x_{GNSS}}}{\partial\dot{\theta}_{avg}}=v_{GNSS}\times\cos(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\times\Delta t_{GNSS}$$

$$\frac{\partial v_{x_{GNSS}}}{\partial\Delta t_{GNSS}}=v_{GNSS}\times\cos(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\times\dot{\theta}_{avg}$$

$$\frac{\partial v_{y_{GNSS}}}{\partial v_{GNSS}}=\cos(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$\frac{\partial v_{y_{GNSS}}}{\partial\theta_{GNSS}}=-v_{GNSS}\times\sin(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})$$

$$\frac{\partial v_{y_{GNSS}}}{\partial\dot{\theta}_{avg}}=-v_{GNSS}\times\sin(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\times\Delta t_{GNSS}$$

$$\frac{\partial v_{y_{GNSS}}}{\partial\Delta t_{GNSS}}=-v_{GNSS}\times\sin(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\times\dot{\theta}_{avg}$$
As already proved, the following relations can be deduced.








$$\begin{aligned}
(\sigma_{v_{x_{GNSS}}})^2={}&\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{v_{GNSS}})^2+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{\theta_{GNSS}})^2\\
&+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\Delta t_{GNSS})^2\,(\sigma_{\dot{\theta}})^2+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\dot{\theta})^2\,(\sigma_{\Delta t_{GNSS}})^2
\end{aligned}$$

$$\begin{aligned}
(\sigma_{v_{y_{GNSS}}})^2={}&\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{v_{GNSS}})^2+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{\theta_{GNSS}})^2\\
&+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\Delta t_{GNSS})^2\,(\sigma_{\dot{\theta}})^2+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\dot{\theta})^2\,(\sigma_{\Delta t_{GNSS}})^2
\end{aligned}$$


Finally, according to previously-cited reference M. Specht, “Experimental Studies on the Relationship Between HDOP and Position Error in the GPS System”, March 2022, Metrology and Measurement Systems 29 (1): 17-36, DOI: 10.24425/mms.2022.138549, which is incorporated herein by reference, the HDOP term can be introduced:








$$\begin{aligned}
(\sigma_{v_{x_{GNSS}}})^2={}&\Big[\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{v_{GNSS}})^2+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{\theta_{GNSS}})^2\\
&+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\Delta t_{GNSS})^2\,(\sigma_{\dot{\theta}})^2+(v_{GNSS})^2\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\dot{\theta})^2\,(\sigma_{\Delta t_{GNSS}})^2\Big]\times(\mathrm{HDOP})^2
\end{aligned}$$

$$\begin{aligned}
(\sigma_{v_{y_{GNSS}}})^2={}&\Big[\cos^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{v_{GNSS}})^2+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\sigma_{\theta_{GNSS}})^2\\
&+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\Delta t_{GNSS})^2\,(\sigma_{\dot{\theta}})^2+(v_{GNSS})^2\sin^2(\theta_{GNSS}+\dot{\theta}_{avg}\times\Delta t_{GNSS})\,(\dot{\theta})^2\,(\sigma_{\Delta t_{GNSS}})^2\Big]\times(\mathrm{HDOP})^2
\end{aligned}$$
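A sketch of the corresponding error propagation for the GNSS velocity components, following the formulas above; the inputs and names are assumptions made for illustration.

```python
import math

def gnss_velocity_variances(v_gnss, theta_gnss, theta_dot_avg, dt_gnss,
                            sigma_v, sigma_theta, sigma_theta_dot, sigma_dt, hdop):
    """Return ((sigma_vx_GNSS)^2, (sigma_vy_GNSS)^2), propagating the speed,
    heading, yaw-rate and cycle-time uncertainties and scaling by HDOP^2."""
    ang = theta_gnss + theta_dot_avg * dt_gnss        # yaw-rate compensated heading
    s2, c2 = math.sin(ang) ** 2, math.cos(ang) ** 2
    # Shared heading / yaw-rate / cycle-time contribution, multiplied by v^2 below.
    ang_terms = (sigma_theta ** 2
                 + (dt_gnss ** 2) * (sigma_theta_dot ** 2)
                 + (theta_dot_avg ** 2) * (sigma_dt ** 2))
    var_vx = (s2 * sigma_v ** 2 + (v_gnss ** 2) * c2 * ang_terms) * hdop ** 2
    var_vy = (c2 * sigma_v ** 2 + (v_gnss ** 2) * s2 * ang_terms) * hdop ** 2
    return var_vx, var_vy
```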






Examples are discussed with reference to FIGS. 2 to 4.



FIG. 2 shows a diagram illustrating implementations of the fusion blocks 1 and 2 by the method.


In these implementations, the method obtains the vehicle motion data 210 as I={vn, v0, {dot over (θ)}n}, where vn denotes the vehicle speed in the current cycle, v0 denotes the vehicle speed in the previous cycle, and {dot over (θ)}n denotes the vehicle yaw rate in the current cycle.


In these implementations, the method also obtains, while the GNSS signal is available, GNSS data 220. The GNSS data 220 comprises a driven distance of the vehicle, denoted by ΔS, and an orientation change of the vehicle, denoted by Δθ. The GNSS data 220 further includes a location and heading of the vehicle denoted by [x, y, vx, vy].


In these implementations, the method calibrates parameters of the odometer 230. The parameters of the odometer 230 include the vehicle speed scaling, the vehicle yaw rate scaling, and the vehicle yaw rate offset, denoted as [Ss, {dot over (θ)}s, {dot over (θ)}O].


The calibration is based on a data fusion that uses a Kalman filter 240 (denoted as Fusion block 1) that determines a predicted distance variation and a predicted orientation variation of the vehicle based on the current calibration parameters of the odometer 230 and on the motion data 210. The Kalman filter 240 compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data 220.
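For illustration only, the comparison performed by the Fusion block 1 can be sketched as forming the innovation between the (ΔS, Δθ) predicted from the bus data with the current calibration and the (ΔS, Δθ) observed from the GNSS data; the simple sample-wise integration and all names below are assumptions, not the disclosed code.

```python
def predicted_track_increments(samples, s_s, theta_dot_s, theta_dot_o):
    """Integrate the odometer model over one track.

    samples: iterable of (v_n, theta_dot_n, dt) tuples from the vehicle bus.
    Returns (delta_s, delta_theta) predicted with the current calibration
    [Ss, theta_dot_s, theta_dot_O]."""
    delta_s = 0.0
    delta_theta = 0.0
    for v_n, theta_dot_n, dt in samples:
        delta_s += s_s * v_n * dt                                       # calibrated speed
        delta_theta += (theta_dot_s * theta_dot_n + theta_dot_o) * dt   # calibrated yaw rate
    return delta_s, delta_theta

def fusion_block_1_innovation(samples, calib, gnss_delta_s, gnss_delta_theta):
    """Innovation used by Fusion block 1: GNSS-observed increments minus the
    increments predicted from the bus data with the current calibration."""
    ds_pred, dtheta_pred = predicted_track_increments(samples, *calib)
    return gnss_delta_s - ds_pred, gnss_delta_theta - dtheta_pred
```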


When the GNSS signal is available, the method determines a localization 260 of the vehicle by performing a data fusion that is based on a Kalman filter 250 (denoted as Fusion block 2) that predicts the vehicle localization based on a fusion of the location and heading [x, y, vx, vy] of the GNSS data 220 and a location and heading predicted, from the odometer 230, according to the calibrated odometer parameters.


When the GNSS signal is lost, the method determines a localization 260 of the vehicle based on a location and heading predicted according to the calibrated odometer parameters.
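A minimal dead-reckoning sketch for the GNSS-lost case, under the same assumptions as the previous sketch (bus samples of speed and yaw rate, calibrated scale and offset parameters; illustrative names only):

```python
import math

def dead_reckon(x, y, heading, samples, s_s, theta_dot_s, theta_dot_o):
    """Propagate (x, y, heading) through bus samples (v_n, theta_dot_n, dt)
    using the calibrated odometer parameters when no GNSS fix is available."""
    for v_n, theta_dot_n, dt in samples:
        v = s_s * v_n                                        # calibrated speed
        yaw_rate = theta_dot_s * theta_dot_n + theta_dot_o   # calibrated yaw rate
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        heading += yaw_rate * dt
    return x, y, heading
```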



FIGS. 3 and 4 illustrate the performance of the method for Dead Reckoning localization. The performance is evaluated by comparing the vehicle location computed from the odometer prediction equations using the calibrated odometer parameters against the vehicle location from a reference Differential GPS (DGPS). FIGS. 3 and 4 show two categories of recorded traces.


The tracks input to the Fusion block 1 are selected by filtering the straight motion parts, where the displacement is the track length and the orientation change is almost zero.



FIG. 3 shows the vehicle's straight motion so as to illustrate the calibration of the speed scale parameter, represented with GNSS points 310 stemming from the GNSS data, the pure prediction model 320, the prediction after calibration of the odometer 330 and the reference DGPS 340. The x-axis in FIG. 3 corresponds to the x-coordinates in Cartesian coordinates of the projection of the vehicle to the ground. Accordingly, the y-axis corresponds to the y-coordinates in Cartesian coordinates.


The straight motion trace plotted in FIG. 3 has at least one straight motion track, where the Fusion block 1 calibrates the speed scale. Moreover, the non-counter-steering tracks (i.e., tracks whose yaw rate keeps the same sign) connecting two straight motions are selected for the comparison of the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data. For such tracks, the displacement is the track length, and the orientation change is the difference in the angles of the two straight lines at the start and end of the track.



FIG. 4 shows the vehicle's U-shape motion so as to illustrate the calibration of both the yaw rate scale and offset parameters, represented with GNSS points 410 stemming from the GNSS data, the pure prediction model 420, the prediction after calibration of the odometer 430 and the reference DGPS 440. The x-axis in FIG. 4 corresponds to the x-coordinates in Cartesian coordinates of the projection of the vehicle onto the global earth horizontal east direction. Accordingly, the y-axis corresponds to the y-coordinates in Cartesian coordinates of the projection of the vehicle onto the global earth vertical north direction. The circular path 440 connecting the two straight segments is selected as a track for fusion to calibrate the yaw rate scaling and offset, where the orientation difference is approximately 180°.


The straight motion track is filtered by fetching GNSS location points stored in history over a specified time window. Then, line fitting is performed on the GNSS points so as to achieve the least sum of squares of the perpendicular distances from the points to the fitted line. If the mean of the absolute values of the perpendicular distances is less than a specific threshold, then the GNSS points are categorized as a straight motion track to be selected as input to Fusion block 1.
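The straight-track test described above can be sketched with a total-least-squares line fit, which minimizes the sum of squared perpendicular distances; the threshold and names below are illustrative assumptions, not the disclosed values.

```python
import numpy as np

def is_straight_track(points_xy, mean_dist_threshold):
    """Classify a window of GNSS points as a straight-motion track.

    Fits the line minimizing the sum of squared perpendicular distances
    (principal direction of the centred points) and accepts the track if the
    mean absolute perpendicular distance stays below the threshold."""
    pts = np.asarray(points_xy, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The last right singular vector is orthogonal to the best-fit line.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    perp_dist = np.abs(centred @ normal)
    return perp_dist.mean() < mean_dist_threshold

# Example: nearly collinear GNSS points are accepted as a straight track.
xs = np.linspace(0.0, 50.0, 20)
line_pts = np.column_stack([xs, 0.02 * xs])
print(is_straight_track(line_pts, mean_dist_threshold=0.5))
```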


The plots shown in FIGS. 3 and 4 show how the predicted location of the vehicle based on the calibrated parameters (in green) is enhanced compared to the predicted location of the vehicle without parameter calibration (in blue). Both are evaluated against the reference DGPS location (solid red), while the GNSS locations received at a lower rate are shown as red points. The figures clearly show the enhancement in localization after applying the proposed design for dynamic calibration of the parameters of the odometer.


A proof of the innovative Kalman filter described above is shown below:







$$P_{2_n}=E\big[(x_{2_n}-\hat{x}_{2_n})(x_{2_n}-\hat{x}_{2_n})^T\big]$$

$$x_{2_n}=x_2^-+K_2\big[Z_2-h_2(x_2^-,x_{1_n})\big]=x_2^-+K_2\big[Z_2-x_{2_m}\big]$$

$$\hat{x}_{2_n}=\hat{x}_2^-+K_2\big[\hat{Z}_2-h_2(\hat{x}_2^-,\hat{x}_{1_n})\big]=\hat{x}_2^-+K_2\big[\hat{Z}_2-\hat{x}_{2_m}\big]$$

$$P_{2_n}=E\Big[\big(x_2^-+K_2[Z_2-h_2(x_2^-,x_{1_n})]-\hat{x}_2^--K_2[\hat{Z}_2-h_2(\hat{x}_2^-,\hat{x}_{1_n})]\big)\big(x_2^-+K_2[Z_2-h_2(x_2^-,x_{1_n})]-\hat{x}_2^--K_2[\hat{Z}_2-h_2(\hat{x}_2^-,\hat{x}_{1_n})]\big)^T\Big]$$

$$P_{2_n}=E\Big[\big((x_2^--\hat{x}_2^-)+K_2[Z_2-\hat{Z}_2]-K_2[h_2(x_2^-,x_{1_n})-h_2(\hat{x}_2^-,\hat{x}_{1_n})]\big)\big((x_2^--\hat{x}_2^-)+K_2[Z_2-\hat{Z}_2]-K_2[h_2(x_2^-,x_{1_n})-h_2(\hat{x}_2^-,\hat{x}_{1_n})]\big)^T\Big]$$

Linearizing $h_2$ around $(\hat{x}_2^-,\hat{x}_{1_n})$:

$$h_2(x_2^-,x_{1_n})=h_2(\hat{x}_2^-,\hat{x}_{1_n})+\left[\left.\frac{\partial h_2(x_2^-,x_{1_n})}{\partial x_2^-}\right|_{x_2^-=\hat{x}_2^-,\,x_{1_n}=\hat{x}_{1_n}}\right](x_2^--\hat{x}_2^-)+\left[\left.\frac{\partial h_2(x_2^-,x_{1_n})}{\partial x_{1_n}}\right|_{x_2^-=\hat{x}_2^-,\,x_{1_n}=\hat{x}_{1_n}}\right](x_{1_n}-\hat{x}_{1_n})$$

$$h_2(x_2^-,x_{1_n})=h_2(\hat{x}_2^-,\hat{x}_{1_n})+H_{2_{x_2}}(x_2^--\hat{x}_2^-)+H_{2_{x_1}}(x_{1_n}-\hat{x}_{1_n})$$

$$h_2(x_2^-,x_{1_n})-h_2(\hat{x}_2^-,\hat{x}_{1_n})=H_{2_{x_2}}(x_2^--\hat{x}_2^-)+H_{2_{x_1}}(x_{1_n}-\hat{x}_{1_n})$$

$$P_{2_n}=E\Big[\big((x_2^--\hat{x}_2^-)+K_2[Z_2-\hat{Z}_2]-K_2[H_{2_{x_2}}(x_2^--\hat{x}_2^-)+H_{2_{x_1}}(x_{1_n}-\hat{x}_{1_n})]\big)\big((x_2^--\hat{x}_2^-)+K_2[Z_2-\hat{Z}_2]-K_2[H_{2_{x_2}}(x_2^--\hat{x}_2^-)+H_{2_{x_1}}(x_{1_n}-\hat{x}_{1_n})]\big)^T\Big]$$

Expanding the product and taking the expectation term by term:

$$\begin{aligned}
P_{2_n}={}&E\big[(x_2^--\hat{x}_2^-)(x_2^--\hat{x}_2^-)^T\big]+K_2\,E\big[(Z_2-\hat{Z}_2)(x_2^--\hat{x}_2^-)^T\big]-K_2H_{2_{x_2}}\,E\big[(x_2^--\hat{x}_2^-)(x_2^--\hat{x}_2^-)^T\big]\\
&-K_2H_{2_{x_1}}\,E\big[(x_{1_n}-\hat{x}_{1_n})(x_2^--\hat{x}_2^-)^T\big]+E\big[(x_2^--\hat{x}_2^-)(Z_2-\hat{Z}_2)^T\big]K_2^T+K_2\,E\big[(Z_2-\hat{Z}_2)(Z_2-\hat{Z}_2)^T\big]K_2^T\\
&-K_2H_{2_{x_2}}\,E\big[(x_2^--\hat{x}_2^-)(Z_2-\hat{Z}_2)^T\big]K_2^T-K_2H_{2_{x_1}}\,E\big[(x_{1_n}-\hat{x}_{1_n})(Z_2-\hat{Z}_2)^T\big]K_2^T\\
&-E\big[(x_2^--\hat{x}_2^-)(x_2^--\hat{x}_2^-)^T\big]H_{2_{x_2}}^TK_2^T-E\big[(x_2^--\hat{x}_2^-)(x_{1_n}-\hat{x}_{1_n})^T\big]H_{2_{x_1}}^TK_2^T\\
&-K_2\,E\big[(Z_2-\hat{Z}_2)(x_2^--\hat{x}_2^-)^T\big]H_{2_{x_2}}^TK_2^T-K_2\,E\big[(Z_2-\hat{Z}_2)(x_{1_n}-\hat{x}_{1_n})^T\big]H_{2_{x_1}}^TK_2^T\\
&+K_2H_{2_{x_2}}\,E\big[(x_2^--\hat{x}_2^-)(x_2^--\hat{x}_2^-)^T\big]H_{2_{x_2}}^TK_2^T+K_2H_{2_{x_2}}\,E\big[(x_2^--\hat{x}_2^-)(x_{1_n}-\hat{x}_{1_n})^T\big]H_{2_{x_1}}^TK_2^T\\
&+K_2H_{2_{x_1}}\,E\big[(x_{1_n}-\hat{x}_{1_n})(x_2^--\hat{x}_2^-)^T\big]H_{2_{x_2}}^TK_2^T+K_2H_{2_{x_1}}\,E\big[(x_{1_n}-\hat{x}_{1_n})(x_{1_n}-\hat{x}_{1_n})^T\big]H_{2_{x_1}}^TK_2^T
\end{aligned}$$

with

$$E\big[(x_2^--\hat{x}_2^-)(x_2^--\hat{x}_2^-)^T\big]=P_2^-,\qquad E\big[(x_{1_n}-\hat{x}_{1_n})(x_{1_n}-\hat{x}_{1_n})^T\big]=P_{1_n},\qquad E\big[(Z_2-\hat{Z}_2)(Z_2-\hat{Z}_2)^T\big]=R$$

It results that, as there is no correlation between {x1n, x2, Z2} (they are independent), their corresponding covariances equal zero.







$$E\big[(x_{1_n}-\hat{x}_{1_n})(x_2^--\hat{x}_2^-)^T\big]=E\big[(x_2^--\hat{x}_2^-)(x_{1_n}-\hat{x}_{1_n})^T\big]=0$$

$$E\big[(Z_2-\hat{Z}_2)(x_2^--\hat{x}_2^-)^T\big]=E\big[(x_2^--\hat{x}_2^-)(Z_2-\hat{Z}_2)^T\big]=0$$

$$E\big[(x_{1_n}-\hat{x}_{1_n})(Z_2-\hat{Z}_2)^T\big]=E\big[(Z_2-\hat{Z}_2)(x_{1_n}-\hat{x}_{1_n})^T\big]=0$$

$$P_{2_n}=P_2^--\big(K_2H_{2_{x_2}}P_2^-\big)+\big(K_2RK_2^T\big)-\big(P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2H_{2_{x_2}}P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^TK_2^T\big)$$
In order to obtain the optimal K2 at which P2n is minimized, K2 is solved for as in the following equation:










$$\frac{\partial P_{2_n}}{\partial K_2^T}=0$$
Note that the following rules for matrix derivatives shall be applied:










$$\frac{\partial(Ax)}{\partial x}=A,\qquad \frac{\partial(x^TA)}{\partial x}=A^T,\qquad \frac{\partial(x^TAx)}{\partial x}=2x^TA$$

$$\frac{\partial P_{2_n}}{\partial K_2^T}=-\big(H_{2_{x_2}}P_2^-\big)^T+\big(2K_2R\big)-\big(P_2^-H_{2_{x_2}}^T\big)+\big(2K_2H_{2_{x_2}}P_2^-H_{2_{x_2}}^T\big)+\big(2K_2H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T\big)=0$$
As P2 is a symmetric matrix, then:







$$P_2^{-T}=P_2^-$$

$$\frac{\partial P_{2_n}}{\partial K_2^T}=2\big(K_2R\big)-2\big(P_2^-H_{2_{x_2}}^T\big)+2\big(K_2H_{2_{x_2}}P_2^-H_{2_{x_2}}^T\big)+2\big(K_2H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T\big)=0$$

$$-\big(P_2^-H_{2_{x_2}}^T\big)+K_2\big(R+H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T\big)=0$$

$$K_2=\big(P_2^-H_{2_{x_2}}^T\big)\big(R+H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T\big)^{-1}$$

$$K_2=P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}$$

$$K_2^T=\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-$$

$$P_{2_n}=P_2^--\big(K_2H_{2_{x_2}}P_2^-\big)+\big(K_2RK_2^T\big)-\big(P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2H_{2_{x_2}}P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^TK_2^T\big)$$

$$P_{2_n}=P_2^--\big(K_2H_{2_{x_2}}P_2^-\big)+\big(K_2RK_2^T\big)-\big(P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2\big[H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T\big]K_2^T\big)$$

$$P_{2_n}=P_2^--\big(K_2H_{2_{x_2}}P_2^-\big)-\big(P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2\big[H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big]K_2^T\big)$$
By substitution with K2 and K2T:









$$K_2H_{2_{x_2}}P_2^-=P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-$$

$$P_2^-H_{2_{x_2}}^TK_2^T=P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-$$

$$\begin{aligned}
K_2\big[H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big]K_2^T
&=P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}\big[H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big]\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-\\
&=P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-
\end{aligned}$$

$$P_{2_n}=P_2^--\big(K_2H_{2_{x_2}}P_2^-\big)-\big(P_2^-H_{2_{x_2}}^TK_2^T\big)+\big(K_2\big[H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big]K_2^T\big)$$
By substitution in the equation of P2n:







$$\begin{aligned}
P_{2_n}={}&P_2^--P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-\\
&-P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-\\
&+P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-
\end{aligned}$$

$$P_{2_n}=P_2^--P_2^-H_{2_{x_2}}^T\big(H_{2_{x_2}}P_2^-H_{2_{x_2}}^T+H_{2_{x_1}}P_{1_n}H_{2_{x_1}}^T+R\big)^{-1}H_{2_{x_2}}P_2^-$$
It is therefore obtained:







$$P_{2_n}=P_2^--K_2H_{2_{x_2}}P_2^-$$
Claims
  • 1. A computer-implemented method for localization of a moving vehicle based on GNSS data and vehicle sensor data, the method comprising, in real-time: obtaining: vehicle motion data stemming from at least one vehicle sensor, and, while the GNSS signal is available, GNSS data of a positioning of the vehicle, including a distance variation and an orientation variation; and calibrating parameters of an odometer of the vehicle based on a data fusion that uses a Kalman filter that determines a predicted distance variation and a predicted orientation variation of the vehicle based on a current calibration of the odometer parameters and on the motion data, and that compares the predicted distance variation and predicted orientation variation to the distance variation and the orientation variation of the GNSS data.
  • 2. The method of claim 1, wherein the odometer predicts cyclically in time a new location of the vehicle and a new heading of the vehicle based on a location and heading predicted at the previous cycle and on the motion data.
  • 3. The method of claim 2, wherein the motion data includes the vehicle speed in the current cycle, the vehicle speed in the previous cycle, and the vehicle yaw rate in the current cycle.
  • 4. The method of claim 1, wherein the parameters of the odometer include: a vehicle speed scaling, a vehicle yaw rate scaling, and a vehicle yaw rate offset.
  • 5. The method of claim 4, wherein calibrating parameters includes correcting one or any combination of the vehicle speed scaling, the vehicle yaw rate scaling, and the vehicle yaw rate offset.
  • 6. The method of claim 1, wherein the motion data stems from a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor.
  • 7. The method of claim 1, wherein the GNSS data further includes a location and heading of the vehicle, and wherein the method further comprises, in real-time: when the GNSS signal is available, determining a localization of the vehicle by performing a data fusion that is based on a Kalman filter that predicts the vehicle localization based on a fusion of the location and heading of the GNSS data and a location and heading predicted according to the calibrated odometer parameters; when the GNSS signal is lost, determining a localization of the vehicle based on a location and heading predicted according to the calibrated odometer parameters.
  • 8. The method of claim 1, wherein the GNSS data stems from a GNSS device that comprises only one antenna.
  • 9. The method of claim 1, wherein the vehicle is a motorbike, a car, a bus or a truck.
  • 10. A non-transient computer-readable medium comprising instructions which, when executed by a computer system, cause the system to perform the method of claim 1.
  • 11. (canceled)
  • 12. A system comprising a processor coupled to a memory, wherein the memory has recorded thereon the computer program of claim 10.
  • 13. The system of claim 12, wherein the system is coupled with or further comprises the GNSS device, the odometer, and the at least one sensor.
  • 14. The system of claim 12, wherein the at least one sensor includes a wheel sensor, an Inertial Measurement Unit (IMU), and a steering system sensor.
  • 15. A vehicle equipped with the system according to claim 12.
Priority Claims (1)
Number Date Country Kind
23169054.6 Apr 2023 EP regional