The present invention generally relates to a collision detection system for a vehicle that actuates a countermeasure device to mitigate or avoid a collision with an object. More specifically, the invention relates to a collision detection system having at least a camera that measures data of an object relative to a host vehicle and that, based on the measured data and collision estimations, actuates an autonomous braking system of the vehicle.
Automotive vehicles are increasingly being equipped with collision detection systems to identify objects in a host vehicle's path of travel, including pedestrians and other vehicles. To mitigate or avoid collisions, these systems are used in conjunction with countermeasure devices, such as autonomous braking, adaptive cruise control, emergency steering assistance, and warning systems. For instance, a collision mitigation by braking (CMbB) system is capable of performing autonomous braking up to full anti-lock brake system levels and therefore must be validated to ensure an exceptionally low rate of false brake actuation. Increased collision detection reliability without a prolonged and expensive validation process is desirable.
According to one aspect of the present invention, a collision detection system for a host vehicle includes a sensor for detecting an object in a field of view and measuring a first set of target data of the object relative to the host vehicle. The system also includes a camera for capturing a plurality of images from the field of view and processing the plurality of images to measure a second set of target data of the object relative to the host vehicle and to measure an image-based time-to-collision (TTCIMAGE) of the host vehicle with the object based on scalable differences of the plurality of images. A fusion module determines a matched set of target data of the object relative to the host vehicle based on the first and second sets of target data received from the sensor and the camera, respectively. The fusion module estimates a threat of collision of the host vehicle with the object based on the matched set of target data. A plausibility module calculates a steering-based time-to-collision (TTCSTEERING) and a braking-based time-to-collision (TTCBRAKING) of the host vehicle with the object based on the second set of target data received from the camera and an additional set of data received from a vehicle dynamics detector. The plausibility module generates an actuation signal if the measured TTCIMAGE is less than both the calculated TTCSTEERING and the calculated TTCBRAKING. A countermeasure module actuates a countermeasure device if the threat of collision received from the fusion module exceeds an actuation threshold and the actuation signal from the plausibility module is generated and received, thereby statistically reducing the rate of falsely actuating the countermeasure device.
According to another aspect of the present invention, a collision detection system for a vehicle includes a sensor and a camera. The sensor measures data of an object relative to the vehicle. The camera also measures data of the object relative to the vehicle and measures an image-based time-to-collision (TTCIMAGE) with the object based on scalable differences of captured images. A fusion module matches data from the sensor and the camera and estimates a collision threat based on the matched data. A plausibility module generates a signal if the measured TTCIMAGE is less than a calculated steering-based time-to-collision (TTCSTEERING) and a braking-based time-to-collision (TTCBRAKING) with the object. A countermeasure module actuates a countermeasure device if the collision threat exceeds an actuation threshold and the signal from the plausibility module is generated.
According to yet another aspect of the present invention, a vehicle collision detection system comprises a sensor and a camera. A fusion module estimates a collision threat with an object using data of the object relative to the vehicle from the sensor and the camera. A plausibility module generates a signal if an image-based time-to-collision is less than a steering-based time-to-collision and a braking-based time-to-collision. A countermeasure actuates if the collision threat exceeds a threshold and the signal is received.
According to another aspect of the present invention, a method is provided for actuating an autonomous braking controller for a brake system of a host vehicle. The method comprises the step of sensing an object in a field of view by an object detection sensor on the host vehicle. A first data set of the object is measured with the object detection sensor, including a first range and range rate of the object relative to the host vehicle, a first angle and angle rate of the object relative to the host vehicle, and a relative movement determination of the object. The method also includes the step of capturing a plurality of images based on light waves from the field of view by a camera on the host vehicle at known time intervals between instances when the images of the plurality of images are captured. The captured images are processed to measure a second data set of the object, including a second range and range rate of the object relative to the host vehicle, a second angle and angle rate of the object relative to the host vehicle, a width of the object, and an image-based time-to-collision (TTCIMAGE) of the host vehicle with the object based on scalable differences of the object derived from the plurality of images. An additional data set is measured with a vehicle dynamics detector, including a yaw-rate sensor for measuring a yaw rate of the host vehicle and a speed sensor for measuring the longitudinal velocity of the host vehicle. A controller is provided that receives the first and second data sets, the TTCIMAGE, and the additional data set. The method further includes the step of estimating a threat of collision of the host vehicle with the object based on a combination of the first and second data sets. A steering-based time-to-collision (TTCSTEERING) of the host vehicle with the object is calculated as a function of the second data set, the longitudinal velocity of the host vehicle, and the yaw rate of the host vehicle. A braking-based time-to-collision (TTCBRAKING) of the host vehicle with the object is calculated as a function of the longitudinal velocity of the host vehicle and a maximum rate of deceleration of the host vehicle. The method also includes the step of generating an actuation signal if the measured TTCIMAGE is less than both the calculated TTCSTEERING and the calculated TTCBRAKING. The autonomous braking controller for the brake system of the host vehicle is actuated based on the threat of collision and the actuation signal.
According to yet another aspect of the present invention, a collision detection system includes a camera and a sensor to measure data of an object relative to a host vehicle, such that a threat of collision is estimated from combined data of the camera and the sensor. An independent plausibility module receives an image-based time-to-collision measured directly and independently by the camera based on a measured rate of expansion of the object. The independent plausibility module generates an actuation signal if the image-based time-to-collision is less than both a steering-based time-to-collision and a braking-based time-to-collision, which are calculated as a function of measurements received from the camera relative to a general horizon plane. An autonomous braking controller for a brake system of the vehicle is actuated if the threat of collision is greater than a threshold and the independent plausibility module generates the actuation signal. The check against the signal from the independent plausibility module statistically increases the reliability of the overall collision detection system and reduces the expense and extent of the validation process for implementing the system, without adding sensors to the vehicle.
These and other aspects, objects, and features of the present invention will be understood and appreciated by those skilled in the art upon studying the following specification, claims, and appended drawings.
In the drawings:
For purposes of description herein, the terms “upper,” “lower,” “right,” “left,” “rear,” “front,” “vertical,” “horizontal,” and derivatives thereof shall relate to the vehicle and its collision detection system as oriented in
Referring to
As illustrated in
The object detection sensor 14 monitors the field of view 18, and when the sensor 14 detects the object 30 in the field of view 18, the sensor 14 measures a first set of target data of the object 30 relative to the host vehicle 10, based on a position of the object relative to the host vehicle. The first set of target data of the object 30 relative to the host vehicle 10 includes a first range R1 (radial distance) measurement between the object 30 and the host vehicle 10, a first range rate Ṙ1 (time rate of change of radial distance) of the object 30 relative to the host vehicle 10, a first angle θ1 (azimuth) measurement of the direction to the object 30 relative to the host vehicle 10, a first angle rate θ̇1 (time rate of change of azimuth) of the direction to the object 30 relative to the host vehicle 10, and a relative movement determination of the object 30 relative to the road. As shown in
The camera 16 also monitors the field of view 18 for detecting one or more objects, such as the object 30. The camera 16 captures a plurality of images based on light waves from the field of view 18 at known time intervals between instances when the images of the plurality of images are captured. The camera 16 processes the plurality of images to measure a second set of target data of the object 30 relative to the host vehicle 10 and to measure an image-based time-to-collision (TTCIMAGE) of the host vehicle 10 with the object 30 based on scalable differences of the plurality of images. More specifically, the image-based time-to-collision (TTCIMAGE) is independently based on measuring various aspects of the object 30 in the plurality of images to determine the rate of expansion of the object 30 from the perspective of the camera on the host vehicle 10.
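By way of illustration only, one common scale-based estimate, assuming a pinhole camera model, is TTCIMAGE ≈ s/ṡ, where s is the apparent width of the object 30 in the image and ṡ is its rate of change between frames; for an object of constant physical width, s is inversely proportional to range, so this ratio approximates the time remaining until contact. A minimal sketch of such a computation, with hypothetical names, follows:

```python
def ttc_image(s_prev: float, s_curr: float, dt: float) -> float:
    """Scale-based time-to-collision estimate from two consecutive frames.

    s_prev, s_curr: apparent widths of the object (pixels) in the two frames
    dt: known time interval between the frames (seconds)
    """
    s_dot = (s_curr - s_prev) / dt   # rate of expansion (pixels per second)
    if s_dot <= 0.0:
        return float("inf")          # object not expanding: host not closing
    return s_curr / s_dot            # TTC = s / s_dot (seconds)
```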
The second set of target data of the object 30 relative to the host vehicle 10 includes a second range measurement R2 between the object 30 and the host vehicle 10, a second range rate Ṙ2 of the object 30 relative to the host vehicle 10, a second angle θ2 of the direction to the object 30 relative to the host vehicle 10, a second angle rate θ̇2 of the direction to the object 30 relative to the host vehicle 10, a width measurement WLEAD of the object 30, an object classification 34 of the object 30, and a confidence value 36 of the object 30. The object classification 34 value is based upon common characteristics of known objects, such as height and width, to identify the object 30, for example, as a passenger vehicle, a pedestrian, a bicycle, or a stationary structure. The confidence value 36 of the object 30 is essentially a measurement of whether the individual parts of the object 30 in the field of view 18 are moving together consistently to constitute a singular object 30. For example, if side rearview mirrors 38 (
Referring now to
As illustrated in
As further shown in
The collision threat controller 24, as shown in
Still referring to
Referring now to
The method further includes the step 94 of capturing the plurality of images based on light waves from the field of view 18 by the camera 16 on the host vehicle 10 at known time intervals between instances when the images of the plurality of images are captured. The captured images are processed at step 95, illustrated utilizing the image processor 44. Thereafter, at step 96, the processed images are used to measure the second data set of the object 30, including the second range R2 and second range rate Ṙ2 of the object 30 relative to the host vehicle 10, the second angle θ2 and second angle rate θ̇2 of the object 30 relative to the host vehicle 10, the width WLEAD of the object 30, and the confidence value 36 of the object 30. The captured images are also processed at step 97 to independently measure the TTCIMAGE of the host vehicle 10 with the object 30 based solely on scalable differences of the object 30 derived from the plurality of images.
The vehicle dynamics detector 26 at step 98 senses the kinematics of the host vehicle 10. At step 99, the additional data set is measured with the kinematic values from the vehicle dynamics detector 26, including the yaw-rate sensor 46 for measuring the yaw rate ω of the host vehicle 10 and the speed sensor 48 for measuring the longitudinal velocity VH of the host vehicle 10. As previously mentioned, it is contemplated that the GPS 50 or other sensors could be used to measure components of this additional data set.
The method further includes the step 100 of fusing the first and second data sets to obtain the matched data set. The fusion module 56 (
Still referring to
The TTCBRAKING of the host vehicle 10 with the object 30 is calculated at step 106 as a function of the additional data set, namely the longitudinal velocity VH of the host vehicle 10. The plausibility module 58 (
The method includes a determination step 108 of generating an actuation signal if the measured TTCIMAGE is less than both the calculated TTCSTEERING and the calculated TTCBRAKING. Step 108 is contemplated as a function of the plausibility module 58 (
Referring now to
More specifically, TTCSTEERING can be expressed as the following algorithm:
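A reconstruction consistent with the variable definitions below, assuming a constant maximum lateral acceleration ALAT_MAX, no relative lateral velocity, and a host path curvature approximated by ω/V, is:

$$TTC_{STEERING} = \sqrt{\frac{2}{A_{LAT\_MAX}} \cdot \max\!\left(0,\ \frac{W_{LEAD} + W_{HOST}}{2} - \left|R\sin\theta - \frac{\omega R^{2}}{2V}\right|\right)}$$

that is, the time required to displace the host vehicle 10 laterally, at maximum lateral acceleration, by the clearance remaining between it and the object 30.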
In the above expression, TTCSTEERING represents the maximum calculated time to avoid collision by steering the host vehicle 10. The TTCSTEERING logic is a simplified equation assuming no relative lateral velocity of the host vehicle 10 or the object 30. A more complex strategy could be defined using measured lateral velocity, among other things. WLEAD represents the width of the object 30, or lead vehicle, such as the width of a car, motorcycle, or pedestrian. WLEAD may either be a constant or measured by the camera 16 or other sensor. WHOST, in turn, represents the width of the host vehicle 10. R equates to R2 and represents the range from the host vehicle 10 to the object 30, as measured by the camera 16. The ω variable represents the measured yaw-rate of the host vehicle 10, which can be measured by the yaw-rate sensor 46, the GPS 50, the camera 16, or an inertial sensor. V equates to VH and represents the measured longitudinal velocity of the host vehicle 10, which can be measured by the speed sensor 48, the GPS 50, wheel speed sensors, the camera 16, or an inertial sensor. θ equates to θ2 and represents the relative angle from the host vehicle 10 to the object 30, as measured by the camera 16. ALAT_MAX represents a maximum lateral acceleration assumed available for an evasive steering maneuver of the host vehicle 10, which may be a calibrated constant.
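A corresponding sketch in code, under the same assumptions and with assumed parameter names:

```python
import math

def ttc_steering(r: float, theta: float, omega: float, v: float,
                 w_lead: float, w_host: float, a_lat_max: float) -> float:
    """Maximum time remaining to avoid the object by steering.

    Assumes no relative lateral velocity and theta in radians; a_lat_max is
    a calibrated maximum lateral acceleration (m/s^2).
    """
    # predicted lateral offset of the object from the host's curved path
    lateral_offset = r * math.sin(theta) - (omega * r ** 2) / (2.0 * v)
    # lateral clearance the host must still gain to pass the object
    clearance = (w_lead + w_host) / 2.0 - abs(lateral_offset)
    if clearance <= 0.0:
        return 0.0  # object already outside the projected path
    # time to traverse the clearance at constant maximum lateral acceleration
    return math.sqrt(2.0 * clearance / a_lat_max)
```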
As illustrated in
Again referencing
More specifically, TTCBRAKING can be expressed as the following algorithm:
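A reconstruction consistent with the definitions below, assuming the closing speed equals the host longitudinal velocity V (for example, a stationary object), follows from the stopping distance V²/(2·ALONG_MAX) divided by the closing speed:

$$TTC_{BRAKING} = \frac{V}{2\,A_{LONG\_MAX}}$$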
In the above expression, TTCBRAKING represents the maximum calculated time to avoid a collision by braking the host vehicle 10. Again, a more complex strategy could be defined using measured lateral velocity, among other things. V equates to VH and represents the measured longitudinal velocity of the host vehicle 10. ALONG_MAX represents a maximum longitudinal deceleration of the host vehicle 10, such as that achievable at full anti-lock braking levels, which may be a calibrated constant.
Referring again to
Specifically, the IN_PATH value function or pseudocode logic determination can be expressed as follows:
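A sketch of one such determination, reusing the reconstructed path-offset term from the TTCSTEERING expression above (the names and the optional calibration margin are assumptions):

```python
import math

def in_path(r: float, theta: float, omega: float, v: float,
            w_lead: float, w_host: float, k_margin: float = 0.0) -> bool:
    """Return True if the object lies within the host vehicle's predicted path."""
    # predicted lateral offset of the object from the host's curved path
    lateral_offset = r * math.sin(theta) - (omega * r ** 2) / (2.0 * v)
    # in path if the offset is less than half the combined widths of the
    # object and host vehicle, optionally widened by a calibration margin
    return abs(lateral_offset) < (w_lead + w_host) / 2.0 + k_margin
```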
In the above expression, the input variables represent the same values as measured or calculated in the TTCSTEERING expression. Accordingly, it is conceivable that a more complex strategy could be defined using measured lateral velocity, among other things.
Still referring to
If all the plausibility checks have been passed, the plausibility module 58 may optionally include a time delay at step 132 to continue to generate the actuation signal for a set constant period of time, KOFF_DELAY, so that the actuation signal does not prematurely cease if a plausibility check momentarily fails.
As shown at step 134, the time delay at step 132 may alternatively be used to generate the actuation signal when it is not received directly from step 128. It is also contemplated that other time delays may be included at several other locations in the plausibility module, such as in concert with the plausibility checks 120 and 130.
In a simplified expression of the plausibility module 58, utilizing the optional IN_PATH value check, the actuation signal is enabled, or generated, when a CMbB_PLAUSIBLE variable is true. This function or pseudocode logic determination, as also partially illustrated in
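A sketch of this determination, with KUNCERTAINTY applied as an additive margin on each comparison (an assumed placement; the text states only that the constant is calibratable):

```python
def cmbb_plausible(ttc_measured: float, ttc_steering: float,
                   ttc_braking: float, object_in_path: bool,
                   k_uncertainty: float = 0.0) -> bool:
    """Enable the actuation signal when the measured TTC is below both
    avoidance thresholds for an in-path object."""
    return (object_in_path
            and ttc_measured < ttc_steering + k_uncertainty
            and ttc_measured < ttc_braking + k_uncertainty)
```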
In the above expression, or logical determination, TTCMEASURED equates to TTCIMAGE and represents the time to collision between the host vehicle 10 and the object 30, as measured by the camera 16. In addition, KUNCERTAINTY again simply represents a constant that can be calibrated to achieve desirable outcomes.
Referring again to
It will be understood by one having ordinary skill in the art that construction of the described invention and other components is not limited to any specific material. Other exemplary embodiments of the invention disclosed herein may be formed from a wide variety of materials, unless described otherwise herein.
For purposes of this disclosure, the term “coupled” (in all of its forms, couple, coupling, coupled, etc.) generally means the joining of two components (electrical or mechanical) directly or indirectly to one another. Such joining may be stationary in nature or movable in nature. Such joining may be achieved with the two components (electrical or mechanical) and any additional intermediate members being integrally formed as a single unitary body with one another or with the two components. Such joining may be permanent in nature or may be removable or releasable in nature unless otherwise stated.
It is also important to note that the construction and arrangement of the elements of the invention as shown in the exemplary embodiments is illustrative only. Although only a few embodiments of the present innovations have been described in detail in this disclosure, those skilled in the art who review this disclosure will readily appreciate that many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.) without materially departing from the novel teachings and advantages of the subject matter recited. For example, elements shown as integrally formed may be constructed of multiple parts or elements shown as multiple parts may be integrally formed, the operation of the interfaces may be reversed or otherwise varied, the length or width of the structures and/or members or connector or other elements of the system may be varied, the nature or number of adjustment positions provided between the elements may be varied. It should be noted that the elements and/or assemblies of the system may be constructed from any of a wide variety of materials that provide sufficient strength or durability, in any of a wide variety of colors, textures, and combinations. Accordingly, all such modifications are intended to be included within the scope of the present innovations. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions, and arrangement of the desired and other exemplary embodiments without departing from the spirit of the present innovations.
It will be understood that any described processes or steps within described processes may be combined with other disclosed processes or steps to form structures within the scope of the present invention. The exemplary structures and processes disclosed herein are for illustrative purposes and are not to be construed as limiting.
It is also to be understood that variations and modifications can be made on the aforementioned structure without departing from the concepts of the present invention, and further it is to be understood that such concepts are intended to be covered by the following claims unless these claims by their language expressly state otherwise.
This application claims priority under 35 U.S.C. §119(e) to, and the benefit of, U.S. Provisional Patent Application No. 61/677,274, entitled “COLLISION DETECTION SYSTEM WITH A PLAUSIBILITY MODULE,” filed on Jul. 30, 2012, the entire disclosure of which is hereby incorporated by reference.