The present disclosure relates to automobile vehicle wheel speed sensing systems for prediction of vehicle motion.
Known automobile vehicle wheel speed sensing (WSS) systems commonly include a slotted wheel that co-rotates with each of the vehicle wheels and includes multiple equally spaced teeth about a perimeter of the slotted wheel. A sensor detects rotary motion of the slotted wheel and generates a square wave signal that is used to measure wheel rotation angle and rotation speed. Known WSS systems have a resolution of about 2.6 cm of vehicle travel for a system using a slotted wheel having 96 counts per revolution, or about 5.2 cm for a system using a slotted wheel having 48 counts per revolution, for a standard wheel size of 16 inch radius. Different resolutions are calculated for different wheel sizes. Resolution of the signal is a function of a quantity of teeth of the slotted wheel and the capability of the sensor to accurately detect the teeth as the slotted wheel rotates. Better resolution of vehicle progression is desired for several applications including autonomous and active safety systems, parking maneuvers, and trailering. Solutions that estimate and predict vehicle motion at slow speeds are also currently unavailable or are limited by the existing slotted wheel sensor systems.
Thus, while current automobile vehicle WSS systems achieve their intended purpose, there is a need for a new and improved system and method for incorporating vehicle kinematics to calculate higher resolution vehicle displacement and motion and to create improved path planning algorithms. Higher resolution predictions are also required for vehicle displacement at low speeds.
According to several aspects, a method for producing high resolution virtual wheel speed sensor data includes: collecting wheel speed sensor (WSS) data from multiple wheels of an automobile vehicle; generating a camera image from at least one camera mounted to the automobile vehicle; overlaying multiple distance intervals onto the camera image each representing a vehicle distance traveled obtained from the WSS data; and applying an optical flow program to discretize the camera image in pixels to increase a resolution of each vehicle distance traveled.
In another aspect of the present disclosure, the method further includes determining if a vehicle steering angle is greater than a predetermined threshold; and normalizing the WSS data if the vehicle steering angle indicates the vehicle is turning.
In another aspect of the present disclosure, the method further includes adding data from multiple camera feeds of the vehicle plus a steering angle, one or more tire pressures, global positioning system (GPS) data, and vehicle kinematics.
In another aspect of the present disclosure, the method further includes incorporating an effective tire radius by adding a tire pressure and tire slip to account for different wheel rotational speeds occurring due to tire size and tire wear.
In another aspect of the present disclosure, the method further includes identifying wheel rotational speeds from the WSS data; and normalizing the wheel rotational speeds by scaling time up or down depending on the steering wheel angle.
In another aspect of the present disclosure, the method further includes during a learning phase accessing data including a steering angle and each of a tire pressure and a tire slip for each of the multiple wheels; and creating a probability distribution function defining a relationship between first tick distribution values of one wheel speed sensor versus second tick distribution values from the one wheel speed sensor.
In another aspect of the present disclosure, the method further includes applying an Ackerman steering model to include wheel speed differences occurring during steering or vehicle turns at vehicle speeds below a predetermined threshold.
In another aspect of the present disclosure, the method further includes inputting each of: a value of an effective tire radius; and a value of tire slip.
In another aspect of the present disclosure, the effective tire radius defines a tire radius for each of a front left tire, a front right tire, a rear left tire and a rear right tire.
In another aspect of the present disclosure, the method further includes: enabling an optical flow program including: in a first optical flow feature, detecting corners and features of a camera image; in a second optical flow feature, running an optical flow algorithm; in a third optical flow feature, obtaining output vectors; and in a fourth optical flow feature, averaging the output vectors and deleting outliers to obtain a highest statistically significant optical vector that defines a vehicle distance traveled in pixels.
According to several aspects, a method for producing high resolution virtual wheel speed sensor data includes: simultaneously collecting wheel speed sensor (WSS) data from multiple wheel speed sensors each sensing rotation of one of multiple wheels of an automobile vehicle; generating a camera image of a vehicle environment from at least one camera mounted in the automobile vehicle; overlaying multiple distance intervals onto the camera image each representing a vehicle distance traveled generated from the WSS data; and creating a probability distribution function predicting a distance traveled for a next WSS output.
In another aspect of the present disclosure, the probability distribution function defines a relationship between first tick distribution values of individual ones of the wheel speed sensors versus second tick distribution values from the same one of the wheel speed sensors.
In another aspect of the present disclosure, the method further includes applying an optical flow program to discretize the camera image in pixels.
In another aspect of the present disclosure, the method further includes applying a predetermined quantity of pixels per centimeter for each of the distance intervals such that the discretizing step enhances the resolution from centimeters to millimeters.
In another aspect of the present disclosure, the method further includes identifying wheel rotational speeds from the WSS data and normalizing the wheel rotational speeds by dividing each of the wheel rotational speeds by a same one of the wheel rotational speeds.
In another aspect of the present disclosure, the method further includes generating optical flow output vectors for the camera image; and discretizing the camera image to represent a physical distance traveled by the automobile vehicle.
In another aspect of the present disclosure, the method further includes generating the wheel speed sensor (WSS) data using slotted wheels co-rotating with each of the multiple wheels, with a sensor reading ticks as individual slots of the slotted wheels pass the sensor, the slotted wheels each having a quantity of slots defining a resolution for each of the multiple distance intervals.
According to several aspects, a method for producing high resolution virtual wheel speed sensor data includes simultaneously collecting wheel speed sensor (WSS) data from multiple wheel speed sensors each sensing rotation of one of multiple wheels of an automobile vehicle. A camera image is generated of a vehicle environment from at least one camera mounted in the automobile vehicle. Multiple distance intervals are overlaid onto the camera image each representing a vehicle distance traveled defining a resolution of each of the multiple wheel speed sensors. An optical flow program is applied to discretize the camera image in pixels including applying approximately 10 pixels per centimeter for each of the distance intervals. A probability distribution function is created predicting a distance traveled for a next WSS output.
In another aspect of the present disclosure, each wheel speed sensor determines rotation of a slotted wheel co-rotating with one of the four vehicle wheels, each slotted wheel including multiple equally spaced teeth positioned about a perimeter of the slotted wheel; and the applying step enhances the resolution from centimeters derived from a spacing of the teeth to millimeters.
In another aspect of the present disclosure, the method further includes: identifying wheel speeds from the WSS data; applying an Ackerman steering model with Ackerman error correction to include differences in the wheel speeds occurring during steering or vehicle turns at vehicle speeds below a predetermined threshold; generating optical flow output vectors for the camera image; and averaging the output vectors to obtain a highest statistically significant optical vector to further refine a value of the vehicle distance traveled.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses.
Referring to
R=(2×π×wheel radius)/quantity of slots per revolution
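For purposes of illustration only, the per-tick resolution formula above can be evaluated for the slotted wheel configurations described earlier; the function name and the unit conversion below are assumptions for this sketch, not part of the disclosure.

```python
import math

def wss_resolution_cm(wheel_radius_cm, slots_per_rev):
    """Distance traveled per WSS tick: R = (2 * pi * wheel radius) / slots per revolution."""
    return 2 * math.pi * wheel_radius_cm / slots_per_rev

# 16 inch wheel radius expressed in centimeters
radius_cm = 16 * 2.54

res_96 = wss_resolution_cm(radius_cm, 96)  # ~2.7 cm per tick, consistent with the ~2.6 cm figure
res_48 = wss_resolution_cm(radius_cm, 48)  # ~5.3 cm per tick, consistent with the ~5.2 cm figure
```

Halving the slot count doubles the distance per tick, which is why higher virtual resolution is sought from fused camera data rather than from finer slotted wheels alone.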
According to several aspects, the wheel speed sensor (WSS) portion 14 includes a slotted wheel 26 provided for each of the four vehicle wheels shown in reference to
Referring to
Referring to
Referring to
The four WSSs used concurrently can also be further enhanced by adding data from all of the camera feeds of the vehicle 42 plus other vehicle information, which can include but is not limited to a steering angle, one or more tire pressures, global positioning system (GPS) data, vehicle kinematics, and the like, which is all fused together using the algorithm discussed in reference to
Referring to
Referring to
Referring to
Referring to
Referring to
The different wheel speeds are obtained using the following equations: ω_rl·r_rl = ω_z·R_rl; ω_rr·r_rr = ω_z·R_rr; ω_fl·r_fl = ω_z·R_fl; ω_fr·r_fr = ω_z·R_fr, where ω is the rotational speed of each wheel, r is the effective tire radius of each wheel, ω_z is the vehicle yaw rate, and R is the turning radius of each wheel. The wheel speeds obtained from the above equations can each be normalized, for example by dividing each wheel speed by ω_rl as follows:
ω_rl/ω_rl; ω_rr/ω_rl; ω_fr/ω_rl; ω_fl/ω_rl.
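As an illustrative, non-limiting sketch, the per-wheel speed relation and the normalization by the rear-left wheel speed can be written as follows; the turning radii, tire radii, and yaw rate values are assumed example geometry, not values from the disclosure.

```python
def wheel_speeds(omega_z, turn_radii, tire_radii):
    """Per-wheel rotational speed from the relation ω_i · r_i = ω_z · R_i."""
    return {wheel: omega_z * turn_radii[wheel] / tire_radii[wheel]
            for wheel in turn_radii}

def normalize(speeds, ref="rl"):
    """Normalize all wheel speeds by one reference wheel speed (rear left here)."""
    return {wheel: s / speeds[ref] for wheel, s in speeds.items()}

# assumed geometry for a slow left turn: rear-left wheel on the inside of the turn
turn_radii = {"rl": 4.0, "rr": 5.6, "fl": 4.8, "fr": 6.2}   # meters (illustrative)
tire_radii = {"rl": 0.32, "rr": 0.32, "fl": 0.32, "fr": 0.32}  # effective radii, meters

speeds = wheel_speeds(0.5, turn_radii, tire_radii)  # ω_z = 0.5 rad/s (illustrative)
norm = normalize(speeds)  # norm["rl"] == 1.0; outer wheels exceed 1.0
```

The normalized ratios make the four WSS outputs directly comparable during a turn, which is the purpose of the normalization step in the first sub-routine.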
Referring to
Following the learning phase 92, in an enablement block 98 multiple enablement conditions are assessed. These include each of: a first enablement condition 100 wherein it is determined if the vehicle is on; a second enablement condition 102 wherein it is determined if the vehicle is moving slowly defined as a vehicle speed below a predetermined threshold speed; a third enablement condition 104 wherein it is determined if an absolute value of a steering wheel angle gradient is less than a predetermined threshold; and a fourth enablement condition wherein it is determined if a value of tire slip is less than a predetermined threshold. If the outcome of each of the enablement conditions is yes, the algorithm 88 initiates multiple sub-routines, including a first sub-routine 108, a second sub-routine 110 and a third sub-routine 112.
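The four enablement conditions can be sketched as a simple gating function; the threshold values below are illustrative placeholders, as the disclosure specifies only that each quantity is compared against a predetermined threshold.

```python
def enablement_conditions_met(vehicle_on, speed_mps, steer_angle_gradient, tire_slip,
                              speed_max=2.0, steer_grad_max=0.1, slip_max=0.05):
    """Return True only if all four enablement conditions hold.

    Thresholds (speed_max, steer_grad_max, slip_max) are assumed example values.
    """
    return (vehicle_on                                  # first condition: vehicle is on
            and speed_mps < speed_max                   # second: vehicle moving slowly
            and abs(steer_angle_gradient) < steer_grad_max  # third: steering gradient small
            and tire_slip < slip_max)                   # fourth: tire slip small
```

When the function returns True, the algorithm would proceed to the three sub-routines; otherwise the sub-routines are not initiated.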
In the first sub-routine 108 WSS data is normalized for a turning vehicle by determining in a first phase 114 if a vehicle steering angle is greater than a predetermined threshold. If the output from the first phase 114 is yes, in a second phase 116 WSS time scales are normalized.
In the second sub-routine 110 an optical flow program is enabled. The optical flow program includes, in a first optical flow feature 118, performing image warping to obtain a birds-eye view of the roadway or vehicle environment image. In a second optical flow feature 120, corners and features are detected, for example by applying the Shi-Tomasi algorithm for corner detection, to extract features and infer the contents of an image. In a third optical flow feature 122, an optical flow algorithm is run, for example applying the Lucas-Kanade method to an image pair. The Lucas-Kanade method is a differential method for optical flow estimation which assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighborhood using a least squares criterion. In a fourth optical flow feature 124, output vectors are obtained. In a fifth optical flow feature 126, the output vectors are averaged and outliers are deleted to obtain a highest statistically significant optical vector that defines a vehicle distance traveled. The present disclosure is not limited to performing optical flow using the Shi-Tomasi algorithm and the Lucas-Kanade method, as other algorithms and methods can also be applied.
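The least squares step of the Lucas-Kanade method can be illustrated with a minimal NumPy sketch over a single neighborhood; the synthetic image pair and the sub-pixel shift of (0.4, 0.25) pixels below are assumptions standing in for two consecutive warped camera frames.

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Solve the basic optical flow equations over one neighborhood by least squares."""
    Iy, Ix = np.gradient(I1)              # spatial gradients (axis 0 = y, axis 1 = x)
    It = I2 - I1                          # temporal gradient between the image pair
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                           # flow vector in pixels

# synthetic image pair: the second frame is the first translated by (0.4, 0.25) pixels
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
pattern = lambda px, py: np.sin(0.3 * px) * np.cos(0.2 * py)
I1 = pattern(x, y)
I2 = pattern(x - 0.4, y - 0.25)

u, v = lucas_kanade_patch(I1, I2)         # recovers approximately (0.4, 0.25)
```

Because the recovered vector is in pixels, a known pixels-per-centimeter scale (approximately 10 pixels per centimeter per the disclosure) converts it into a physical sub-centimeter displacement.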
In the third sub-routine 112 elements identified in each of the first sub-routine 108 and the second sub-routine 110 are applied against each output from each WSS. Following a first WSS period 138, in a triggering step 140 it is determined if any other WSS edge or tooth is triggered. If the response to the triggering step is yes, in an updating step 142 velocity and displacement values are updated using the probability distribution function 82 described in reference to
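The probability distribution function used in the updating step can be sketched as a normal distribution fitted to per-tick distances logged during the learning phase; the sample distances below are assumed values for illustration only.

```python
import statistics

# assumed learning-phase log of per-tick travel distances for one WSS, in cm
tick_distances = [2.61, 2.58, 2.64, 2.60, 2.59, 2.63, 2.62, 2.57]

# fit a normal distribution to the logged tick distances
dist = statistics.NormalDist(statistics.mean(tick_distances),
                             statistics.stdev(tick_distances))

expected = dist.mean                      # predicted distance for the next WSS output
lo, hi = dist.inv_cdf(0.025), dist.inv_cdf(0.975)  # 95% interval for that prediction
```

Between physical ticks, the expected value and its interval allow velocity and displacement estimates to be updated ahead of the next edge, which is the role of the probability distribution function in the updating step.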
In parallel with the first sub-routine 108 and the second sub-routine 110, a vehicle kinematics sub-routine 128 is run using the Ackerman Steering Model described in reference to
Returning to the third sub-routine 112, the optical flow vector output from the normalization step 146 is applied in a sensor fusion step 150 which also incorporates the wheel velocity output from the vehicle kinematics sub-routine 128. Sensor data fusion is performed using either Kalman filters (KF) or extended Kalman filters (EKF).
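The Kalman filter fusion of an optical-flow displacement measurement with a kinematics-derived wheel velocity can be sketched as follows; the constant-velocity state model, noise covariances, and measurement values are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def kf_update(x, P, z, R, H):
    """Standard Kalman filter measurement update."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.05                                  # update interval, s (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition model
Q = np.diag([1e-5, 1e-4])                  # process noise covariance (assumed)
x = np.array([0.0, 0.0])                   # state: [displacement m, velocity m/s]
P = np.eye(2)                              # initial state covariance

# prediction step
x = F @ x
P = F @ P @ F.T + Q

# fuse an optical-flow displacement (2.6 cm) and a kinematics velocity (0.5 m/s)
x, P = kf_update(x, P, np.array([0.026]), np.array([[1e-4]]), np.array([[1.0, 0.0]]))
x, P = kf_update(x, P, np.array([0.5]),   np.array([[1e-2]]), np.array([[0.0, 1.0]]))
```

An extended Kalman filter would replace F and H with Jacobians of nonlinear motion and measurement models, but follows the same predict-update structure.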
Following the sensor fusion step 150, in a subsequent triggering step 152 it is determined if a subsequent WSS edge is triggered. If the response to the triggering step 152 is no, in a return step 154 the algorithm 88 returns to the triggering step 140. If the response to the triggering step 152 is yes, a continuation step 156 is performed wherein the output from the third sub-routine 112 is averaged to account for changes in phases between each of the WSS counts. The algorithm 88 ends at an end or repeat step 158.
The method for producing high resolution virtual wheel speed sensor data 10 of the present disclosure offers several advantages. These include provision of an algorithm that fuses WSS data and on-vehicle camera feeds with other vehicle information, including the steering angle, tire pressures, and vehicle kinematics, to calculate higher resolution vehicle displacement and motion and to create improved path planning algorithms. Higher resolution predictions are made of vehicle displacement at low vehicle speeds. Resolution improves beyond that obtainable from a single WSS when the camera feeds are used and fused with all four WSSs concurrently.
The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.