This application relates to the field of intelligent driving, and in particular, to a data processing method and apparatus.
As the most fundamental service module of autonomous driving or assisted driving, an odometry can provide relative positioning information for planning and control, sensing, fusion, prediction, and other modules. Different from the requirement for absolute precision in global positioning, the requirement for the odometry in autonomous driving is that the odometry is continuous, smooth, stable, and reliable, works upon power-on, and can ensure specific relative positioning precision.
In an increasingly complex autonomous driving or assisted driving scenario, there are increasingly high requirements for precision and reliability of the odometry. Currently, a commonly used odometry has many limitations. For example, during vehicle traveling, information (for example, angular velocity information and acceleration information) output by an inertial measurement unit (inertial measurement unit, IMU) is usually used as a predicted value. When the IMU is blocked or faulty, precision of the odometry is severely reduced, or the odometry even cannot work normally. This affects normal running of a vehicle and may cause a serious security risk.
This application provides a data processing method and apparatus, to help avoid a reduction in precision of an odometry caused by a fault of a single sensor, thereby helping improve stability and reliability of a vehicle in a navigation process.
According to a first aspect, a data processing method is provided. The method is applied to a vehicle, the vehicle includes one or more sensors, and the method includes: determining first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model, where the first model is a pose estimation model from the first moment to the second moment; obtaining data collected by the one or more sensors; and determining second pose information of the vehicle at the second moment based on the first pose information and the data.
In this embodiment of this application, first pose information at a current moment may be estimated based on pose information at a previous moment and a first model, and then second pose information at the current moment is obtained based on the estimated first pose information at the current moment and sensor data. There is no need to estimate the pose information at the current moment based on data output by a single sensor (for example, an IMU), to avoid a reduction in precision of an odometry caused by a fault of the sensor, thereby helping improve stability and reliability of the vehicle in a navigation process.
It should be understood that the vehicle may include one or more different types of transportation, or may include one or more different types of transportation tools or movable objects that operate or move on land (for example, a highway, a road, or a railway), water surface (for example, a waterway, a river, or an ocean), or in space. For example, the vehicle may include a car, a bicycle, a motorcycle, a train, a subway, an airplane, a ship, an aircraft, a robot, an uncrewed aerial vehicle, or another type of transportation tool or movable object. This is not limited in this embodiment of this application.
In some possible implementations, the first model may be stored in a cloud server, the vehicle may send the pose information at the first moment to the cloud server, and the cloud server determines the first pose information of the vehicle at the second moment based on the pose information at the first moment and the first model. The cloud server may send the first pose information to the vehicle, so that the vehicle may determine the second pose information of the vehicle at the second moment based on the data collected by the sensor and the first pose information.
With reference to the first aspect, in some implementations of the first aspect, the determining first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model includes: determining an initial state transition matrix of the vehicle at the first moment based on the pose information of the vehicle at the first moment and the first model; and determining the first pose information based on the initial state transition matrix.
In this embodiment of this application, the vehicle may first determine the initial state transition matrix based on the pose information of the first moment and the first model, and then determine the first pose information at the second moment based on the initial state transition matrix. There is no need to estimate the pose information at the current moment based on the data output by the single sensor (for example, the IMU), to avoid the reduction in the precision of the odometry caused by the fault of the sensor, thereby helping improve the stability and reliability of the vehicle in the navigation process.
In some possible implementations, the first model includes one or more of a location estimation model from the first moment to the second moment, a speed estimation model from the first moment to the second moment, an acceleration estimation model from the first moment to the second moment, a roll estimation model from the first moment to the second moment, a pitch estimation model from the first moment to the second moment, a yaw estimation model from the first moment to the second moment, and an angular velocity estimation model from the first moment to the second moment.
In this embodiment of this application, the first model may include one or more of the location, speed, acceleration, roll, pitch, yaw, and angular velocity estimation models. The pose information of the vehicle at the current moment may be predicted based on these models, so that the pose information at the current moment does not need to be predicted based on the data output by the single sensor, to avoid the reduction in the precision of the odometry caused by the fault of the sensor, thereby helping improve the stability and reliability of the vehicle in the navigation process.
In some possible implementations, the first model is determined according to one or more of the following formulas:
$P_k^n$ is a location of the vehicle at the second moment in an odometry coordinate system. $P_{k-1}^n$ is a location of the vehicle at the first moment in the odometry coordinate system. $C_b^n$ is a rotation matrix from a vehicle coordinate system to the odometry coordinate system. $V_{k-1}^b$ is a speed of the vehicle at the first moment in the vehicle coordinate system. $T$ is a time difference between the first moment and the second moment. $a_{k-1}^b$ is an acceleration of the vehicle at the first moment in the vehicle coordinate system. $a_k^b$ is an acceleration of the vehicle at the second moment in the vehicle coordinate system. $V_k^b$ is a speed of the vehicle at the second moment in the vehicle coordinate system. $\phi_k$ is a roll of the vehicle at the second moment. $\phi_{k-1}$ is a roll of the vehicle at the first moment. $\theta_{k-1}$ is a pitch of the vehicle at the first moment. $\dot{\phi}_{k-1}$ is a roll rate of the vehicle at the first moment. $\dot{\theta}_{k-1}$ is a pitch rate of the vehicle at the first moment. $\dot{\varphi}_{k-1}$ is a yaw rate of the vehicle at the first moment. $\theta_k$ is a pitch of the vehicle at the second moment. $\varphi_k$ is a yaw of the vehicle at the second moment. $\varphi_{k-1}$ is a yaw of the vehicle at the first moment. $w_k^b$ is an angular velocity of the vehicle at the second moment in the vehicle coordinate system. $w_{k-1}^b$ is an angular velocity of the vehicle at the first moment in the vehicle coordinate system.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model; and determining second covariance information of the vehicle at the second moment based on the first covariance information and the data.
In this embodiment of this application, first covariance information at the current moment may be estimated based on covariance information at the previous moment and the first model, and then second covariance information at the current moment is obtained based on the estimated first covariance information at the current moment and the sensor data. There is no need to estimate the covariance information at the current moment based on the data output by the single sensor (for example, the IMU), to avoid the reduction in the precision of the odometry caused by the fault of the sensor, thereby helping improve the stability and reliability of the vehicle in the navigation process.
With reference to the first aspect, in some implementations of the first aspect, before the determining second pose information of the vehicle at the second moment based on the first pose information and the data, the method further includes: obtaining a first calibration result, where the first calibration result includes an online calibration result and/or an offline calibration result. The determining second pose information of the vehicle at the second moment based on the first pose information and the data includes: performing error compensation on the data based on the first calibration result to obtain error-compensated data; and determining the second pose information based on the first pose information and the error-compensated data.
In this embodiment of this application, error compensation is performed on the sensor in an online or offline calibration manner, to help further improve the precision of the odometry.
In some possible implementations, the vehicle may perform a cross-check on an online calibration result and an offline calibration result of a same parameter, to ensure accuracy of the calibration parameter.
With reference to the first aspect, in some implementations of the first aspect, the first calibration result includes one or more of a wheel speed scale coefficient, a zero offset of an inertial measurement unit IMU, and a lever arm parameter.
With reference to the first aspect, in some implementations of the first aspect, before the performing error compensation on the data based on the first calibration result, the method further includes: performing a check on the data.
In some possible implementations, a check manner includes but is not limited to a rationality check, a cross-check, and the like.
With reference to the first aspect, in some implementations of the first aspect, the determining second pose information of the vehicle at the second moment based on the first pose information and the data includes: performing an optimal estimation based on the first pose information and the data to obtain the second pose information.
In some possible implementations, the performing an optimal estimation based on the first pose information and the data includes: performing the optimal estimation through Kalman filtering based on the first pose information and the data; or performing the optimal estimation through non-Kalman filtering based on the first pose information and the data.
In this embodiment of this application, the first pose information that is at the current moment and that is determined based on the pose information of the vehicle at the previous moment and the first model is used as a predicted value, and the data output by the sensor (or data obtained after the data output by the sensor is processed) is used as an observed value. The second pose information of the vehicle at the current moment may be obtained by performing the optimal estimation through Kalman filtering. This helps avoid dependency on the single sensor, and resolves a problem that the precision of the odometry is severely reduced, or the odometry even cannot work normally because of a fault in the IMU, thereby improving the stability and reliability of the vehicle.
According to a second aspect, a data processing apparatus is provided. The apparatus includes: a determining unit, configured to determine first pose information of a vehicle at a second moment based on pose information of the vehicle at a first moment and a first model, where the first model is a pose estimation model from the first moment to the second moment; and an obtaining unit, configured to obtain data collected by one or more sensors. The determining unit is further configured to determine second pose information of the vehicle at the second moment based on the first pose information and the data.
With reference to the second aspect, in some implementations of the second aspect, the determining unit is configured to: determine an initial state transition matrix of the vehicle at the first moment based on the pose information of the vehicle at the first moment and the first model; and determine the first pose information based on the initial state transition matrix.
In some possible implementations, the first model includes one or more of a location estimation model from the first moment to the second moment, a speed estimation model from the first moment to the second moment, an acceleration estimation model from the first moment to the second moment, a roll estimation model from the first moment to the second moment, a pitch estimation model from the first moment to the second moment, a yaw estimation model from the first moment to the second moment, and an angular velocity estimation model from the first moment to the second moment.
In some possible implementations, the first model is determined according to one or more of the following formulas:
$P_k^n$ is a location of the vehicle at the second moment in an odometry coordinate system. $P_{k-1}^n$ is a location of the vehicle at the first moment in the odometry coordinate system. $C_b^n$ is a rotation matrix from a vehicle coordinate system to the odometry coordinate system. $V_{k-1}^b$ is a speed of the vehicle at the first moment in the vehicle coordinate system. $T$ is a time difference between the first moment and the second moment. $a_{k-1}^b$ is an acceleration of the vehicle at the first moment in the vehicle coordinate system. $a_k^b$ is an acceleration of the vehicle at the second moment in the vehicle coordinate system. $V_k^b$ is a speed of the vehicle at the second moment in the vehicle coordinate system. $\phi_k$ is a roll of the vehicle at the second moment. $\phi_{k-1}$ is a roll of the vehicle at the first moment. $\theta_{k-1}$ is a pitch of the vehicle at the first moment. $\dot{\phi}_{k-1}$ is a roll rate of the vehicle at the first moment. $\dot{\theta}_{k-1}$ is a pitch rate of the vehicle at the first moment. $\dot{\varphi}_{k-1}$ is a yaw rate of the vehicle at the first moment. $\theta_k$ is a pitch of the vehicle at the second moment. $\varphi_k$ is a yaw of the vehicle at the second moment. $\varphi_{k-1}$ is a yaw of the vehicle at the first moment. $w_k^b$ is an angular velocity of the vehicle at the second moment in the vehicle coordinate system. $w_{k-1}^b$ is an angular velocity of the vehicle at the first moment in the vehicle coordinate system.
With reference to the second aspect, in some implementations of the second aspect, the determining unit is further configured to: determine first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model; and determine second covariance information of the vehicle at the second moment based on the first covariance information and the data.
With reference to the second aspect, in some implementations of the second aspect, the obtaining unit is further configured to obtain a first calibration result before the determining unit determines the second pose information of the vehicle at the second moment based on the first pose information and the data, where the first calibration result includes an online calibration result and/or an offline calibration result. The determining unit is configured to: perform error compensation on the data based on the first calibration result to obtain error-compensated data; and determine the second pose information based on the first pose information and the error-compensated data.
With reference to the second aspect, in some implementations of the second aspect, the first calibration result includes one or more of a wheel speed scale coefficient, a zero offset of an inertial measurement unit IMU, and a lever arm parameter.
With reference to the second aspect, in some implementations of the second aspect, the apparatus further includes: a checking unit, configured to perform a check on the data before the determining unit performs error compensation on the data based on the first calibration result.
With reference to the second aspect, in some implementations of the second aspect, the determining unit is configured to perform an optimal estimation through Kalman filtering based on the first pose information and the data to obtain the second pose information.
According to a third aspect, a data processing apparatus is provided. The apparatus includes: a memory, configured to store computer instructions; and a processor, configured to execute the computer instructions stored in the memory, so that the apparatus performs the method in the first aspect.
According to a fourth aspect, a vehicle is provided. The vehicle includes the apparatus according to the second aspect or the third aspect.
According to a fifth aspect, a computer program product is provided. The computer program product includes computer program code. When the computer program code is run on a computer, the computer is enabled to perform the method in the first aspect.
It should be noted that all or some of the computer program code may be stored in a first storage medium. The first storage medium may be encapsulated together with a processor, or may be encapsulated separately from a processor. This is not limited in this embodiment of this application.
According to a sixth aspect, a computer-readable medium is provided. The computer-readable medium stores program code. When the program code is run on a computer, the computer is enabled to perform the method in the first aspect.
According to a seventh aspect, an embodiment of this application provides a chip system. The chip system includes a processor, configured to invoke a computer program or computer instructions stored in a memory, so that the processor performs the method according to any one of the foregoing aspects.
With reference to the seventh aspect, in a possible implementation, the processor is coupled to the memory through an interface.
With reference to the seventh aspect, in a possible implementation, the chip system further includes the memory. The memory stores the computer program or the computer instructions.
The following describes technical solutions of this application with reference to accompanying drawings.
The sensing system 120 may include several types of sensors that sense information about an ambient environment of the vehicle 100. For example, the sensing system 120 may include a positioning system (the positioning system may be a global positioning system (global positioning system, GPS), a BeiDou system, or another positioning system), an IMU, a LiDAR, a millimeter-wave radar, an ultrasonic radar, an image shooting apparatus, and a wheel speed sensor (wheel speed sensor, WSS). A visual odometry (visual odometry, VO) may estimate a pose of the vehicle based on image data output by the image shooting apparatus. A LiDAR odometry (LiDAR odometry, LO) may estimate the pose of the vehicle based on point cloud data output by the LiDAR. The sensing system 120 may further include sensors that monitor an internal system of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge). Sensor data from one or more of these sensors may be used to detect an object and corresponding features (a location, a shape, a direction, a speed, and the like) of the object. Such detection and recognition are key functions of safe operation of the vehicle 100.
Some or all functions of the vehicle 100 are controlled by the computing platform 150. The computing platform 150 may include at least one processor 151. The processor 151 may execute instructions 153 stored in a non-transitory computer-readable medium such as a memory 152.
In some embodiments, the computing platform 150 may alternatively be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner. The processor 151 may be any conventional processor, such as a central processing unit (central processing unit, CPU). Alternatively, the processor 151 may include a graphics processing unit (graphics processing unit, GPU), a field-programmable gate array (field-programmable gate array, FPGA), a system on chip (system on chip, SOC), an application-specific integrated circuit (application-specific integrated circuit, ASIC), or a combination thereof.
In addition to the instructions 153, the memory 152 may further store data, such as a road map, route information, a position, a direction, a speed, and other vehicle data of the vehicle, and other information. Such information may be used by the vehicle 100 and the computing platform 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
It should be understood that a structure of the vehicle in
It should be further understood that the vehicle 100 may include one or more different types of transportation, or may include one or more different types of transportation tools or movable objects that operate or move on land (for example, a highway, a road, or a railway), water surface (for example, a waterway, a river, or an ocean), or in space. For example, the vehicle may include a car, a bicycle, a motorcycle, a train, a subway, an airplane, a ship, an aircraft, a robot, an uncrewed aerial vehicle, or another type of transportation tool or movable object. This is not limited in this embodiment of this application.
As described above, in an increasingly complex autonomous driving or assisted driving scenario, there are increasingly high requirements for precision and reliability of the odometry. Currently, a commonly used odometry has many limitations. For example, during vehicle traveling, information (for example, angular velocity information and acceleration information) output by the IMU is usually used as a predicted value. When the IMU is blocked or faulty, the precision of the odometry is severely reduced, or the odometry even cannot work normally. This affects normal running of the vehicle and may cause a serious security risk.
Embodiments of this application provide a data processing method and apparatus, to help avoid a reduction in precision of an odometry caused by a fault of a single sensor, thereby helping improve stability and reliability of a vehicle in a navigation process. In some embodiments, the vehicle may use a motion status at a previous moment as a predicted value, and use information output by a plurality of sensors as an observed value, to perform an optimal estimation on a pose of the vehicle and obtain pose information of the vehicle at a current moment. The vehicle may obtain motion information or positioning information of the vehicle based on the pose information at the current moment.
Before embodiments of this application are described, terms in embodiments of this application are first explained.
A predicted value is a motion status that is at a current moment and that is predicted based on a motion status at a previous moment.
An observed value is data measured by a sensor at the current moment.
The predicted value and the observed value may be used for an optimal estimation, to obtain a more accurate motion status at the current moment.
Alternatively, the vehicle 100 may send pose information at the previous moment to a cloud server 200, and the cloud server 200 determines the first pose information of the vehicle 100 at the current moment based on the pose information at the previous moment and a first model. The cloud server 200 may send the first pose information to the vehicle 100, so that the vehicle 100 may determine the second pose information of the vehicle 100 at the current moment based on the information output by the one or more sensors and the first pose information.
It should be understood that the data processing process shown in
The odometry fusion module may use a motion status of the vehicle at a previous moment (for example, pose information and covariance information at the previous moment) as a predicted value, use error-compensated or pre-processed sensor data as an observed value, perform an optimal estimation through Kalman filtering, and output pose information and covariance information at a current moment.
S601: A data processing module obtains data collected by one or more sensors.
In an embodiment, the data processing module may be located in a vehicle, or may be located in a cloud server. This is not limited in this embodiment of this application.
For example, the one or more sensors may be one or more of an IMU, a WSS, an image shooting apparatus, or a LiDAR.
It should be understood that the following describes an example in which data of the IMU and data of the WSS are obtained. The data of the IMU may include information such as an angular velocity and an acceleration. The data of the WSS may include information such as wheel speeds of four wheels of the vehicle and a steering wheel angle.
In an embodiment, the data processing module may create three or more independent threads, including an IMU thread, a WSS thread, and a timer thread. The data of the IMU is received, checked, or otherwise processed in the IMU thread. The data of the WSS is received, checked, or otherwise processed in the WSS thread. In the timer thread, a system status equation is constructed, and prediction and updating are performed based on a motion status of the vehicle at a previous moment.
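A minimal sketch of this three-thread arrangement is given below. All names are illustrative assumptions; this application does not prescribe a particular implementation.

```python
import queue
import threading
import time

imu_queue = queue.Queue()  # checked IMU samples
wss_queue = queue.Queue()  # checked wheel-speed samples

def imu_worker(read_imu, check_imu):
    # IMU thread: receive and check IMU data (angular velocity, acceleration).
    while True:
        sample = read_imu()
        if check_imu(sample):
            imu_queue.put(sample)

def wss_worker(read_wss, check_wss):
    # WSS thread: receive and check wheel speed and steering wheel angle data.
    while True:
        sample = read_wss()
        if check_wss(sample):
            wss_queue.put(sample)

def timer_worker(predict, update, period_s=0.01):
    # Timer thread: predict from the motion status at the previous moment,
    # then update with the latest checked observations.
    while True:
        predict()
        update(imu_queue, wss_queue)
        time.sleep(period_s)

# Example wiring (driver and filter callables omitted):
# threading.Thread(target=imu_worker, args=(read_imu, check_imu), daemon=True).start()
# threading.Thread(target=wss_worker, args=(read_wss, check_wss), daemon=True).start()
# threading.Thread(target=timer_worker, args=(predict, update), daemon=True).start()
```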
S602: The data processing module performs a check on the data from the one or more sensors.
In an embodiment, a check manner includes but is not limited to a rationality check, a cross-check, and the like.
For example, a process of the direct rationality check includes: The data processing module determines whether the angular velocity in the data of the IMU is greater than or equal to an angular velocity threshold. If the angular velocity in the data of the IMU is less than the angular velocity threshold, the check succeeds. Otherwise, the check fails.
For example, the data processing module determines whether the acceleration in the data of the IMU is greater than or equal to an acceleration threshold. If the acceleration in the data of the IMU is less than the acceleration threshold, the check succeeds. Otherwise, the check fails.
For example, the data processing module determines whether the wheel speeds of the four wheels in the data of the WSS are greater than or equal to a speed threshold. If all the wheel speeds of the four wheels are less than the speed threshold, the check succeeds. If the wheel speed of at least one of the four wheels is greater than or equal to the speed threshold, the check fails.
It should be understood that the data processing module may perform the direct rationality check based on one or more of angular velocity information, acceleration information, or the wheel speeds of the four wheels.
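A minimal sketch of such threshold checks follows; the threshold values and data-structure names are illustrative assumptions, as this application does not specify them.

```python
from dataclasses import dataclass

# Illustrative thresholds; this application does not specify concrete values.
ANGULAR_VELOCITY_THRESHOLD = 7.0   # rad/s
ACCELERATION_THRESHOLD = 50.0      # m/s^2
WHEEL_SPEED_THRESHOLD = 90.0       # m/s

@dataclass
class ImuSample:
    angular_velocity: float  # magnitude, rad/s
    acceleration: float      # magnitude, m/s^2

def check_imu(sample: ImuSample) -> bool:
    # The check succeeds only if the angular velocity and the acceleration
    # are both below their thresholds.
    return (sample.angular_velocity < ANGULAR_VELOCITY_THRESHOLD
            and sample.acceleration < ACCELERATION_THRESHOLD)

def check_wss(wheel_speeds: list[float]) -> bool:
    # The check succeeds only if all four wheel speeds are below the threshold.
    return all(v < WHEEL_SPEED_THRESHOLD for v in wheel_speeds)
```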
For example, a cross-check method is as follows:
(a) The data processing module may calculate a yaw rate $w_{odom}$ according to the following formula (1) and the wheel speed data:
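Reconstructed from the variable definitions below, formula (1) plausibly computes the yaw rate from the rear-wheel speed difference (a plausible form, not necessarily the exact expression of this application):

$$w_{odom} = \frac{V_{rr} - V_{rl}}{r} \quad (1)$$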
$V_{rr}$ is a speed of a right rear wheel of the vehicle. $V_{rl}$ is a speed of a left rear wheel of the vehicle. $r$ is a distance between the rear wheels of the vehicle. fabs indicates an absolute value.
(b) An angular velocity $w_{imu}$ about the vertical axis output by the IMU is obtained.
(c) Whether a difference between $w_{odom}$ and $w_{imu}$ is greater than or equal to a preset difference is determined.
For example, if the difference between $w_{odom}$ and $w_{imu}$ is less than a first preset difference, the check succeeds. Otherwise, it is considered that the vehicle slips, and the check fails.
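A minimal sketch of this cross-check, under the reconstruction of formula (1) above (the preset difference and all names are illustrative assumptions):

```python
def cross_check_yaw_rate(v_rr: float, v_rl: float, r: float,
                         w_imu: float, first_preset_diff: float = 0.05) -> bool:
    # Step (a): yaw rate implied by the rear wheel speeds, per the
    # reconstructed formula (1).
    w_odom = (v_rr - v_rl) / r
    # Steps (b)-(c): compare against the vertical angular velocity from the
    # IMU; a large discrepancy suggests wheel slip, and the check fails.
    return abs(w_odom - w_imu) < first_preset_diff
```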
S603: The data processing module obtains an online calibration result or an offline calibration result.
In an embodiment, the data processing module obtains the online sensor calibration result in an online real-time estimation manner. Alternatively, the data processing module obtains the offline sensor calibration result by reading an offline calibration parameter.
In an embodiment, the calibration result includes but is not limited to a wheel speed scale coefficient, a zero offset of a gyroscope and a zero offset of an accelerometer in the IMU, a lever arm parameter between sensors, and the like.
For example, the offline calibration result may include the wheel speed scale coefficient, the lever arm parameter, and the like, and may be obtained by reading a file or loading a parameter.
For example, the online calibration result includes the zero offset of the gyroscope and the zero offset of the accelerometer in the IMU, the wheel speed scale coefficient, and the like. In a main method, a to-be-estimated parameter, for example, the zero offset of the gyroscope, the zero offset of the accelerometer, or the wheel speed scale coefficient, is augmented into the state variable, and an online estimation is implemented via a Kalman filter.
In an embodiment, the data processing module may obtain both the online calibration result and the offline calibration result, to perform a cross-check on the online calibration result and the offline calibration result. For example, after obtaining an online calibration result and an offline calibration result of a same parameter, the data processing module may determine a difference between the online calibration result and the offline calibration result. If the difference is less than a second preset difference, it is considered that the calibration results are normal. Otherwise, it is considered that the calibration results are abnormal.
S604: The data processing module performs, based on the online calibration result or the offline calibration result, error compensation on sensor data on which the check succeeds.
In an embodiment, compensated data may be used as an input into an odometry fusion module (or the Kalman filter), or the compensated data may be used as an observed value in the odometry fusion module.
For example, error-compensated items of the data of the IMU include the zero offset of the gyroscope and the zero offset of the accelerometer. The zero offset of the gyroscope and the zero offset of the accelerometer obtained through calibration are subtracted from the data of the IMU, and the compensated data is used as an input into the Kalman filter.
For example, the data of the WSS includes information such as the wheel speeds of the four wheels and the steering wheel angle. Error compensation of the data of the WSS is mainly performed on the rear wheel speed data and an average value of the rear wheel speeds: the impact of the wheel speed scale coefficient is removed, and the compensated data is used as an input into the Kalman filter.
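A minimal sketch of this compensation step follows. The names are illustrative, and treating the wheel speed scale coefficient as a multiplicative factor on the raw speed is an assumption.

```python
import numpy as np

def compensate_imu(gyro: np.ndarray, accel: np.ndarray,
                   gyro_bias: np.ndarray, accel_bias: np.ndarray):
    # Subtract the calibrated zero offsets from the raw IMU data before it is
    # fed to the Kalman filter as an observation.
    return gyro - gyro_bias, accel - accel_bias

def compensate_wss(v_rr: float, v_rl: float, scale: float) -> float:
    # Average the rear wheel speeds and remove the impact of the calibrated
    # wheel speed scale coefficient (assumed multiplicative here).
    return 0.5 * (v_rr + v_rl) / scale
```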
S605: The odometry fusion module establishes a vehicle kinematic model.
In an embodiment, the vehicle kinematic model established by the odometry fusion module may be as the following formulas (2) to (8):
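Reconstructed from the variable definitions below, and assuming a constant-acceleration, constant-angular-rate model with standard Euler-angle kinematics (a plausible form, not necessarily the exact formulas of this application), formulas (2) to (8) may be written as:

$$P_k^n = P_{k-1}^n + C_b^n\left(V_{k-1}^b T + \tfrac{1}{2}a_{k-1}^b T^2\right) \quad (2)$$
$$V_k^b = V_{k-1}^b + a_{k-1}^b T \quad (3)$$
$$a_k^b = a_{k-1}^b \quad (4)$$
$$\phi_k = \phi_{k-1} + \left(\dot{\phi}_{k-1} + \dot{\theta}_{k-1}\sin\phi_{k-1}\tan\theta_{k-1} + \dot{\varphi}_{k-1}\cos\phi_{k-1}\tan\theta_{k-1}\right)T \quad (5)$$
$$\theta_k = \theta_{k-1} + \left(\dot{\theta}_{k-1}\cos\phi_{k-1} - \dot{\varphi}_{k-1}\sin\phi_{k-1}\right)T \quad (6)$$
$$\varphi_k = \varphi_{k-1} + \frac{\dot{\theta}_{k-1}\sin\phi_{k-1} + \dot{\varphi}_{k-1}\cos\phi_{k-1}}{\cos\theta_{k-1}}\,T \quad (7)$$
$$w_k^b = w_{k-1}^b \quad (8)$$

In this reconstruction, the roll, pitch, and yaw rates are treated as body-frame angular rates, which is consistent with the pitch $\theta_{k-1}$ appearing in the roll propagation.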
$P_k^n$ is a location of the vehicle at a kth moment in an odometry coordinate system. $P_{k-1}^n$ is a location of the vehicle at a (k−1)th moment in the odometry coordinate system. $C_b^n$ is a rotation matrix from a vehicle coordinate system to the odometry coordinate system. $V_{k-1}^b$ is a speed of the vehicle at the (k−1)th moment in the vehicle coordinate system. $T$ is a time difference and is related to a period of the timer. $a_{k-1}^b$ is an acceleration of the vehicle at the (k−1)th moment in the vehicle coordinate system. $V_k^b$ is a speed of the vehicle at the kth moment in the vehicle coordinate system. $a_k^b$ is an acceleration of the vehicle at the kth moment in the vehicle coordinate system. $\phi_k$ is a roll of the vehicle at the kth moment. $\phi_{k-1}$ is a roll of the vehicle at the (k−1)th moment. $\theta_{k-1}$ is a pitch of the vehicle at the (k−1)th moment. $\dot{\phi}_{k-1}$ is a roll rate of the vehicle at the (k−1)th moment. $\dot{\theta}_{k-1}$ is a pitch rate of the vehicle at the (k−1)th moment. $\dot{\varphi}_{k-1}$ is a yaw rate of the vehicle at the (k−1)th moment. $\theta_k$ is a pitch of the vehicle at the kth moment. $\varphi_k$ is a yaw of the vehicle at the kth moment. $\varphi_{k-1}$ is a yaw of the vehicle at the (k−1)th moment. $w_k^b$ is an angular velocity of the vehicle at the kth moment in the vehicle coordinate system. $w_{k-1}^b$ is an angular velocity of the vehicle at the (k−1)th moment in the vehicle coordinate system.
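A compact sketch of one prediction step of this kinematic model, under the plausible reconstruction above (all function and variable names are illustrative; NumPy is used for brevity):

```python
import numpy as np

def propagate(p_n, v_b, a_b, euler, rates_b, C_bn, T):
    """One prediction step of the (assumed) kinematic model, formulas (2)-(8)."""
    phi, theta, _ = euler            # roll, pitch, yaw
    p, q, r = rates_b                # body-frame angular rates

    # (2) position in the odometry frame; (3) speed; (4) constant acceleration.
    p_n_new = p_n + C_bn @ (v_b * T + 0.5 * a_b * T**2)
    v_b_new = v_b + a_b * T
    a_b_new = a_b

    # (5)-(7) Euler-angle propagation from body-frame angular rates.
    euler_dot = np.array([
        p + q * np.sin(phi) * np.tan(theta) + r * np.cos(phi) * np.tan(theta),
        q * np.cos(phi) - r * np.sin(phi),
        (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta),
    ])
    euler_new = np.asarray(euler) + euler_dot * T

    # (8) constant angular velocity.
    rates_new = rates_b
    return p_n_new, v_b_new, a_b_new, euler_new, rates_new
```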
The odometry fusion module may obtain an initial state transition matrix F according to one or more of the formulas (2) to (8).
It should be understood that the formulas (2) to (8) are merely examples. In this embodiment of this application, an estimation may be performed on a location, a speed, an acceleration, a roll, a pitch, a yaw, an angular velocity, and the like at a current moment according to another formula, equation, or function.
S606: The odometry fusion module performs prediction based on the kinematic model in S605 and the motion status at the previous moment, to obtain a motion status at the current moment.
In an embodiment, the motion status at the previous moment (for example, the (k−1)th moment) may be pose information and covariance information output by the Kalman filter at the previous moment.
For example, the odometry fusion module may perform prediction on pose information at the current moment according to the following formula (9):
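Reconstructed from the definitions that follow, formula (9) plausibly has the standard prediction form:

$$X(k) = F[k-1, X(k-1)] + W(k-1) \quad (9)$$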
X(k) is pose information of the vehicle at the kth moment predicted by the odometry fusion module. F[k−1, X(k−1)] is a state transition matrix at the (k−1)th moment. W(k−1) is system noise of the vehicle at the (k−1)th moment.
For example, the odometry fusion module may perform prediction on covariance matrix information at the current moment according to the following formula (10):
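A plausible reconstruction of formula (10), using the linearized state transition matrix described below, is the standard covariance prediction:

$$P(k|k-1) = \psi\,P(k-1|k-1)\,\psi^{T} + Q \quad (10)$$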
P(k|k−1) is covariance matrix information that is of the vehicle at the kth moment and that is predicted by the odometry fusion module. In consideration of the non-linear feature of the initial state transition matrix F, first-order linearization is performed on F to obtain a state transition matrix ψ. P(k−1|k−1) is covariance matrix information of the vehicle at the (k−1)th moment. Q is a mean square deviation of the system noise.
S607: The odometry fusion module uses the error-compensated data in S604 as an observed value for the Kalman filter.
For example, the odometry fusion module may use the following formula (11) as an observation equation:
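A plausible reconstruction of formula (11) is the standard observation equation:

$$Z(k) = H[k, X(k)] + V(k) \quad (11)$$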
Z(k) is the observed value. H[k, X(k)] is an observation matrix at the kth moment. V(k) is observed noise at the kth moment.
S608: The odometry fusion module performs an optimal estimation through Kalman filtering based on the motion status at the current moment predicted in S606 and the observed value in S607, to obtain the pose information and the covariance information of the vehicle at the current moment.
For example, a Kalman filtering gain may be calculated according to formula (12):
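A plausible reconstruction of formula (12) is the standard Kalman gain, with $H$ denoting the (linearized) observation matrix:

$$K(k) = P(k|k-1)\,H^{T}\left[H\,P(k|k-1)\,H^{T} + R(k)\right]^{-1} \quad (12)$$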
K(k) is the Kalman filtering gain. R(k) is the mean square deviation of the observed noise.
For example, the odometry fusion module may obtain updated pose information and covariance information according to the following formulas (13) and (14).
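Plausible reconstructions of formulas (13) and (14), matching the definitions below, are the standard Kalman update equations:

$$X(k) = X(k|k-1) + K(k)\left[Z(k) - H\,X(k|k-1)\right] \quad (13)$$
$$P(k) = \left[I - K(k)\,H\right]P(k|k-1) \quad (14)$$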
X(k) is pose information output by the Kalman filter. X(k|k−1) is a one-step predicted value of the state variable.
P(k) is covariance matrix information output by the Kalman filter. I is an identity matrix.
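Taken together, the gain and update equations correspond to the following step (a minimal NumPy sketch under the reconstructions above; names are illustrative):

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    # Formula (12): Kalman filtering gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Formula (13): updated pose information (state).
    x = x_pred + K @ (z - H @ x_pred)
    # Formula (14): updated covariance matrix information.
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```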
In this embodiment of this application, a test is further performed according to the data processing method. In the test, the data of the IMU collected by the vehicle, wheel speed data, and combined navigation data are used. Throughout the vehicle traveling process, real-time kinematic (real-time kinematic, RTK) positioning is in a fixed state, and data obtained after post-processing of the combined navigation data is used as ground truth (ground truth). A traveling trajectory is shown in
It can be seen from the trajectory that when the IMU is abnormal or faulty, the odometry works normally and no obvious error increase occurs. To verify precision of the odometry, a part of the foregoing traveling data is captured. A difference between current odometry data and the ground truth is calculated every time the vehicle travels 200 m. A statistical result is shown in Table 1 below.
S801: The vehicle determines first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model, where the first model is a pose estimation model from the first moment to the second moment.
In some embodiments, the first model includes one or more of a location estimation model from the first moment to the second moment, a speed estimation model from the first moment to the second moment, an acceleration estimation model from the first moment to the second moment, a roll estimation model from the first moment to the second moment, a pitch estimation model from the first moment to the second moment, a yaw estimation model from the first moment to the second moment, and an angular velocity estimation model from the first moment to the second moment.
For example, the location estimation model from the first moment to the second moment may be shown in the formula (2).
For example, the speed estimation model from the first moment to the second moment may be shown in the formula (3).
For example, the acceleration estimation model from the first moment to the second moment may be shown in the formula (4).
For example, the roll estimation model from the first moment to the second moment may be shown in the formula (5).
For example, the pitch estimation model from the first moment to the second moment may be shown in the formula (6).
For example, the yaw estimation model from the first moment to the second moment may be shown in the formula (7).
For example, the angular velocity estimation model from the first moment to the second moment may be shown in the formula (8).
It should be understood that the formulas (2) to (8) are merely examples. The location estimation model, the speed estimation model, the acceleration estimation model, the roll estimation model, the pitch estimation model, the yaw estimation model, or the angular velocity estimation model may be alternatively indicated by another formula, equation, or function. This is not limited in this embodiment of this application.
In some embodiments, that the vehicle determines first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model includes: The vehicle determines an initial state transition matrix of the vehicle at the first moment based on the pose information of the vehicle at the first moment and the first model. The vehicle determines the first pose information based on the initial state transition matrix.
In some embodiments, that the vehicle determines the first pose information based on the initial state transition matrix includes: The vehicle determines the first pose information based on the initial state transition matrix and system noise at the first moment.
For example, the first pose information may be determined according to the formula (9).
In some embodiments, the method further includes: The vehicle determines first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model. The vehicle determines second covariance information of the vehicle at the second moment based on the first covariance information and the data.
In some embodiments, that the vehicle determines first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model includes: linearizing the initial state transition matrix at the first moment to obtain a state transition matrix; and determining the first covariance information based on the state transition matrix, the covariance information at the first moment, and a mean square deviation of the system noise.
For example, the first covariance information may be determined according to the formula (10).
S802: The vehicle obtains data collected by the one or more sensors.
In some embodiments, the one or more sensors may be one or more of an IMU, a WSS, an image shooting apparatus, or a LiDAR.
It should be understood that there is no fixed execution sequence between S801 and S802.
S803: The vehicle determines second pose information of the vehicle at the second moment based on the first pose information and the data.
In some embodiments, before the vehicle determines second pose information of the vehicle at the second moment based on the first pose information and the data, the method further includes: The vehicle obtains a first calibration result, where the first calibration result includes an online calibration result and/or an offline calibration result. That the vehicle determines second pose information of the vehicle at the second moment based on the first pose information and the data includes: The vehicle performs error compensation on the data based on the first calibration result to obtain error-compensated data. The vehicle determines the second pose information based on the first pose information and the error-compensated data.
In some embodiments, the first calibration result includes one or more of a wheel speed scale coefficient, a zero offset of an inertial measurement unit IMU, and a lever arm parameter.
In some embodiments, before the vehicle performs error compensation on the data based on the first calibration result, the method further includes: The vehicle performs a check on the data, where the check includes one or more of a rationality check and a cross-check.
In some embodiments, that the vehicle determines second pose information of the vehicle at the second moment based on the first pose information and the data includes: The vehicle performs an optimal estimation based on the first pose information and the data to obtain the second pose information.
In some embodiments, that the vehicle performs an optimal estimation based on the first pose information and the data includes: The vehicle performs the optimal estimation through Kalman filtering based on the first pose information and the data. Alternatively, the vehicle performs the optimal estimation through non-Kalman filtering based on the first pose information and the data.
The determining unit 910 is further configured to determine second pose information of the vehicle at the second moment based on the first pose information and the data.
In some embodiments, the determining unit 910 is configured to: determine an initial state transition matrix of the vehicle at the first moment based on the pose information of the vehicle at the first moment and the first model; and determine the first pose information based on the initial state transition matrix.
In some embodiments, the determining unit 910 is further configured to: determine first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model; and determine second covariance information of the vehicle at the second moment based on the first covariance information and the data.
In some embodiments, the obtaining unit 920 is further configured to obtain a first calibration result before the determining unit 910 determines the second pose information of the vehicle at the second moment based on the first pose information and the data, where the first calibration result includes an online calibration result and/or an offline calibration result.
The determining unit 910 is configured to: perform error compensation on the data based on the first calibration result to obtain error-compensated data; and determine the second pose information based on the first pose information and the error-compensated data.
In some embodiments, the first calibration result includes one or more of a wheel speed scale coefficient, a zero offset of an inertial measurement unit IMU, and a lever arm parameter.
In some embodiments, the apparatus 900 further includes: a checking unit, configured to perform a check on the data before the determining unit performs error compensation on the data based on the first calibration result.
In some embodiments, the check includes one or more of a rationality check and a cross-check.
In some embodiments, the determining unit 910 is configured to perform an optimal estimation based on the first pose information and the data to obtain the second pose information.
An embodiment of this application further provides an apparatus. The apparatus includes a processing unit and a storage unit. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the data processing method in the foregoing embodiments.
An embodiment of this application further provides a vehicle. The vehicle may include the apparatus 900 or the apparatus 1000.
An embodiment of this application further provides a computer program product. The computer program product includes computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the foregoing data processing method.
An embodiment of this application further provides a computer-readable medium. The computer-readable medium stores program code, and when the program code is run on a computer, the computer is enabled to perform the foregoing data processing method.
In an implementation process, steps in the foregoing methods can be implemented by using a hardware integrated logical circuit in the processor, or by using instructions in a form of software. The method disclosed according to embodiments of this application may be directly performed by a hardware processor, or may be performed by using a combination of hardware in the processor and a software module. A software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and a processor reads information in the memory and completes the steps in the foregoing methods in combination with hardware of the processor. To avoid repetition, details are not described herein again.
It should be understood that, the processor in embodiments of this application may be a central processing unit (central processing unit, CPU), or may be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should also be understood that in embodiments of this application, the memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
In embodiments of this application, "first", "second", and various numbers are merely used for differentiation for ease of description, and are not used to limit the scope of embodiments of this application. For example, "first" and "second" are used to distinguish between different moments, different pose information, or the like.
It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of this application. The execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of embodiments of this application.
A person of ordinary skill in the art may be aware that, in combination with the examples described in embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units are integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
| Number | Date | Country | Kind |
|---|---|---|---|
| 202111276896.2 | Oct 2021 | CN | national |
This application is a continuation of International Patent Application No. PCT/CN2022/113271, filed on Aug. 18, 2022, which claims priority to Chinese Patent Application No. 202111276896.2, filed on Oct. 29, 2021. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2022/113271 | Aug 2022 | WO |
| Child | 18647621 | | US |