An embodiment relates generally to image capture and display in vehicle imaging systems.
Vehicle systems often use in-vehicle vision systems for rear-view scene detection. Many of these systems utilize a fisheye or similar wide-angle camera, such as a rear backup camera, that distorts the captured image displayed to the driver. When the view is reproduced on the display screen, due to the distortion and other factors associated with the reproduced view, objects such as vehicles approaching to the sides of the vehicle may be distorted as well. As a result, the driver may not notice the object and its proximity to the driven vehicle, and may not be aware of a condition in which the object could be a potential collision with the driven vehicle if the crossing paths were to continue, as in a backup scenario or a forthcoming lane change. While some systems of the driven vehicle may attempt to ascertain the distance between the driven vehicle and the object, due to the distortions in the captured image, such systems may not be able to determine the parameters required for alerting the driver of the relative distance between the object and the driven vehicle or of a possible time-to-collision.
An advantage of an embodiment is the display of vehicles in a dynamic rearview mirror, where objects such as vehicles are captured by a vision-based capture device, identified objects are highlighted to generate an awareness for the driver of the vehicle, and a time-to-collision is identified for the highlighted objects. The time-to-collision is determined utilizing temporal differences that are identified by generating an overlay boundary about the object and tracking changes to the object size and the relative distance between the object and the driven vehicle.
Objects detected by sensing devices other than the vision-based capture device are cooperatively used to provide a more accurate location of an object. The data from the other sensing devices are fused with data from the vision-based imaging device to provide a more accurate position of the object relative to the driven vehicle.
In addition to cooperatively utilizing each of the sensing devices and image capture device to determine a more precise location of the object, a time-to-collision can be determined for each sensing and imaging device and each of the determined time-to-collisions can be utilized to determine a comprehensive time-to-collision that can provide greater confidence than a single calculation. Each of the respective time-to-collisions of an object for each sensing device can be given a respective weight for determining how much each respective time-to-collision determination should be relied on in determining the comprehensive time-to-collision.
Moreover, when the dynamically expanded image is displayed on the rearview mirror display, the display may be toggled between displaying the dynamically expanded image and functioning as a mirror with typical reflective properties.
An embodiment contemplates a method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device mounted on the driven vehicle. Objects are detected in the captured image. A time-to-collision is determined for each object detected in the captured image. Objects are sensed in a vicinity of the driven vehicle by sensing devices. A time-to-collision is determined for each respective object sensed by the sensing devices. A comprehensive time-to-collision is determined for each object as a function of each of the determined time-to-collisions for that object. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. Sensed objects are highlighted in the dynamically expanded image. The highlighted objects identify objects proximate to the driven vehicle that are potential collisions to the driven vehicle. The dynamically expanded image, with the highlighted objects and an associated comprehensive time-to-collision for each highlighted object that is determined to be a potential collision, is displayed on the display device.
An embodiment contemplates a method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device mounted on the driven vehicle. Objects are detected in the captured image. Objects in a vicinity of the driven vehicle are sensed by sensing devices. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. Sensed objects that are potential collisions to the driven vehicle are highlighted in the dynamically expanded image. The dynamically expanded image with highlighted objects is displayed on the rearview mirror. The rearview mirror is switchable between displaying the dynamically expanded image and displaying mirror reflective properties.
There is shown in
The vision-based imaging system 12 includes a front-view camera 14 for capturing a field-of-view (FOV) forward of the vehicle 10, a rear-view camera 16 for capturing a FOV rearward of the vehicle, a left-side view camera 18 for capturing a FOV to a left side of the vehicle, and a right-side view camera 20 for capturing a FOV on a right side of the vehicle. The cameras 14-20 can be any camera suitable for the purposes described herein, many of which are known in the automotive art, that are capable of receiving light, or other radiation, and converting the light energy to electrical signals in a pixel format using, for example, charge-coupled devices (CCD). The cameras 14-20 generate frames of image data at a certain data frame rate that can be stored for subsequent processing. The cameras 14-20 can be mounted within or on any suitable structure that is part of the vehicle 10, such as bumpers, fascia, grill, side-view mirrors, door panels, behind the windshield, etc., as would be well understood and appreciated by those skilled in the art. Image data from the cameras 14-20 is sent to a processor 22 that processes the image data to generate images that can be displayed on a rearview mirror display device 24. It should be understood that a single-camera solution (e.g., rearview) is contemplated and that it is not necessary to utilize four different cameras as described above.
The present invention utilizes the captured scene from the vision-based imaging device 12 for detecting lighting conditions of the captured scene, which is then used to adjust a dimming function of the image display of the rearview mirror 24. Preferably, a wide-angle lens camera is utilized for capturing an ultra-wide FOV of a scene exterior of the vehicle, such as the region represented by 26. The vision-based imaging device 12 focuses on a respective region of the captured image, preferably a region that includes the sky 28 as well as the sun, and high-beams from other vehicles at night. By focusing on the illumination intensity of the sky, the illumination intensity level of the captured scene can be determined. The objective is to build a synthetic image as taken from a virtual camera having an optical axis that is directed at the sky for generating a virtual sky view image. Once a sky view is generated from the virtual camera directed at the sky, a brightness of the scene may be determined. Thereafter, the image displayed through the rearview mirror 24, or any other display within the vehicle, may be dynamically adjusted. In addition, a graphic image overlay may be projected onto the image display of the rearview mirror 24. The image overlay replicates components of the vehicle (e.g., head rests, rear window trim, c-pillars) using line-based overlays (e.g., sketches) that would typically be seen by a driver when viewing a reflection through a rearview mirror having ordinary reflection properties. The brightness of the graphic overlay may also be adjusted based on the brightness of the scene to maintain a desired translucency, such that the graphic overlay does not interfere with the scene reproduced on the rearview mirror and is not washed out.
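For illustration, the following is a minimal sketch of deriving a display dimming level from the mean luminance of a virtual sky-view region; the luma weights are standard BT.601 coefficients, while the clamping range and linear mapping are assumed placeholders rather than values from the system.

```python
import numpy as np

def display_brightness_from_sky(sky_region_bgr, min_level=0.2):
    """Map the mean luminance of a virtual sky-view region (BGR image array) to a
    display brightness level in [min_level, 1.0]. The thresholds and the linear
    mapping are illustrative assumptions."""
    b = sky_region_bgr[..., 0].astype(float)
    g = sky_region_bgr[..., 1].astype(float)
    r = sky_region_bgr[..., 2].astype(float)
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma from BGR channels
    level = float(np.clip(luma.mean() / 255.0, 0.0, 1.0))
    return max(min_level, level)
```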
In order to generate the virtual sky image based on the captured image of a real camera, the captured image must be modeled, processed, and view-synthesized to generate a virtual image from the real image. The following description details how this process is accomplished. The present invention uses an image modeling and de-warping process for both narrow FOV and ultra-wide FOV cameras that employs a simple two-step approach and offers fast processing times and enhanced image quality without utilizing radial distortion correction. Distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. Radial distortion is a failure of a lens to be rectilinear.
The two-step approach discussed above includes (1) applying a camera model to the captured image for projecting the captured image onto a non-planar imaging surface, and (2) applying a view synthesis for mapping the virtual image projected onto the non-planar surface to the real display image. For view synthesis, given one or more images of a specific subject taken from specific points with specific camera settings and orientations, the goal is to build a synthetic image as taken from a virtual camera having a same or different optical axis.
The proposed approach provides effective surround view and dynamic rearview mirror functions with an enhanced de-warping operation, in addition to a dynamic view synthesis for ultra-wide FOV cameras. Camera calibration as used herein refers to estimating a number of camera parameters including both intrinsic and extrinsic parameters. The intrinsic parameters include focal length, image center (or principal point), radial distortion parameters, etc. and extrinsic parameters include camera location, camera orientation, etc.
Camera models are known in the art for mapping objects in the world space to an image sensor plane of a camera to generate an image. One model known in the art is referred to as a pinhole camera model that is effective for modeling the image for narrow FOV cameras. The pinhole camera model is defined as:
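For reference, a conventional statement of the pinhole model, consistent with the parameters enumerated below (presented here as a sketch of equation (1), not necessarily its exact printed form), maps a world point M (homogeneous M̃) to an image point m (homogeneous m̃):

```latex
s\,\tilde{m} = \mathbf{A}\,[\mathbf{R}\;\;\mathbf{t}]\,\tilde{M},
\qquad
\mathbf{A} =
\begin{bmatrix}
f_u & \gamma & u_c \\
0 & f_{\nu} & \nu_c \\
0 & 0 & 1
\end{bmatrix}
```

where s is an arbitrary scale factor, m̃ = (u, ν, 1)ᵀ, and M̃ = (x, y, z, 1)ᵀ.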
Equation (1) includes the parameters that are employed to provide the mapping of point M in the object space 34 to point m in the image plane 32. Particularly, intrinsic parameters include fu, fν, uc, νc and γ and extrinsic parameters include a 3 by 3 matrix R for the camera rotation and a 3 by 1 translation vector t from the image plane 32 to the object space 34. The parameter γ represents a skewness of the two image axes that is typically negligible, and is often set to zero.
Since the pinhole camera model follows rectilinear projection, in which a finite-size planar image surface can only cover a limited FOV range (<<180° FOV), a specific camera model must be utilized to take horizontal radial distortion into account in order to generate a cylindrical panorama view for an ultra-wide (˜180° FOV) fisheye camera using a planar image surface. Some other views may require other specific camera modeling (and some specific views may not be able to be generated). However, by changing the image plane to a non-planar image surface, a specific view can be easily generated by still using simple ray tracing and the pinhole camera model. As a result, the following description will describe the advantages of utilizing a non-planar image surface.
The rearview mirror display device 24 (shown in
A view synthesis technique is applied to the projected image on the non-planar surface for de-warping the image. In
Dynamic view synthesis is a technique by which a specific view synthesis is enabled based on a driving scenario of a vehicle operation. For example, special synthetic modeling techniques may be triggered if the vehicle is driving in a parking lot versus a highway, or may be triggered by a proximity sensor sensing an object in a respective region of the vehicle, or triggered by a vehicle signal (e.g., turn signal, steering wheel angle, or vehicle speed). The special synthesis modeling technique may be to apply respective shaped models to a captured image, or to apply virtual pan, tilt, or directional zoom depending on a triggered operation.
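As a sketch of how such triggers might select a synthesis mode, the signal names, thresholds, and mode labels below are illustrative assumptions rather than values defined by the system:

```python
def select_view_mode(speed_kph, in_reverse, turn_signal, proximity_zones):
    """Choose a view-synthesis mode from vehicle signals; purely illustrative."""
    if in_reverse:
        return "rear_fisheye"                 # wide rear model for backing up
    if turn_signal in ("left", "right"):
        return "virtual_pan_" + turn_signal   # pan toward the intended lane
    if proximity_zones:
        return "zoom_" + proximity_zones[0]   # zoom toward the first sensed zone
    return "highway_view" if speed_kph > 80 else "parking_surround"
```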
In block 62, the real camera model is defined, such as the fisheye model (rd=func(θ) and φ). That is, the incident ray as seen by a real fish-eye camera view may be illustrated as follows:
where xc1, yc1, and zc1 are the camera coordinates where zc1 is a camera/lens optical axis that points out the camera, and where uc1 represents ureal and νc1 represents νreal. A radial distortion correction model is shown in
rd = r0(1 + k1·r0^2 + k2·r0^4 + k3·r0^6 + . . . )   (3)
The point r0 is determined using the pinhole model discussed above and includes the intrinsic and extrinsic parameters mentioned. The model of equation (3) is an even-order polynomial that converts the point r0 to the point rd in the image plane 72, where the k values are the parameters that need to be determined to provide the correction, and where the number of parameters k defines the degree of correction accuracy. The calibration process is performed in a laboratory environment for the particular camera to determine the parameters k. Thus, in addition to the intrinsic and extrinsic parameters for the pinhole camera model, the model of equation (3) includes the additional parameters k to determine the radial distortion. The non-severe radial distortion correction provided by the model of equation (3) is typically effective for wide FOV cameras, such as 135° FOV cameras. However, for ultra-wide FOV cameras, i.e., 180° FOV, the radial distortion is too severe for the model of equation (3) to be effective. In other words, when the FOV of the camera exceeds some value, for example, 140°-150°, the value r0 goes to infinity as the angle θ approaches 90°. For ultra-wide FOV cameras, a severe radial distortion correction model shown in equation (4) has been proposed in the art to provide correction for severe radial distortion.
The values q in equation (4) are the parameters that are determined. Thus, the incidence angle θ is used to provide the distortion correction based on the calculated parameters during the calibration process.
rd = q1·θ0 + q2·θ0^3 + q3·θ0^5 + . . .   (4)
Various techniques are known in the art to provide the estimation of the parameters k for the model of equation (3) or the parameters q for the model of equation (4). For example, in one embodiment a checker board pattern is used and multiple images of the pattern are taken at various viewing angles, where each corner point in the pattern between adjacent squares is identified. Each of the points in the checker board pattern is labeled and the location of each point is identified in both the image plane and the object space in world coordinates. The calibration of the camera is obtained through parameter estimation by minimizing the error distance between the real image points and the reprojection of 3D object space points.
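The following is a minimal sketch of evaluating the two radial distortion models, assuming the parameters k (equation (3)) and q (equation (4)) have already been estimated through such a calibration:

```python
import numpy as np

def radial_distortion_narrow(r0, k):
    """Equation (3): rd = r0 * (1 + k1*r0^2 + k2*r0^4 + k3*r0^6 + ...)."""
    exponents = 2 * np.arange(1, len(k) + 1)      # 2, 4, 6, ...
    return r0 * (1.0 + np.dot(k, r0 ** exponents))

def radial_distortion_ultra_wide(theta0, q):
    """Equation (4): rd = q1*theta + q2*theta^3 + q3*theta^5 + ..."""
    exponents = 2 * np.arange(len(q)) + 1         # 1, 3, 5, ...
    return np.dot(q, theta0 ** exponents)
```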
In block 63, the real incident ray angles (θreal) and (φreal) are determined from the real camera model. The corresponding incident ray will be represented by (θreal, φreal).
In block 64, a virtual incident ray angle θvirt and corresponding φvirt are determined. If there is no virtual tilt and/or pan, then (θvirt,φvirt) will be equal to (θreal,φreal). If virtual tilt and/or pan are present, then adjustments must be made to determine the virtual incident ray. The virtual incident ray will be discussed in detail later.
Referring again to
In block 66, the virtual incident ray that intersects the non-planar surface is determined in the virtual image. The coordinate of the virtual incident ray intersecting the virtual non-planar surface as shown on the virtual image is represented as (uvirt,νvirt). As a result, a mapping of a pixel on the virtual image (uvirt,νvirt) corresponds to a pixel on the real image (ureal,νreal).
It should be understood that while the above flow diagram represents view synthesis by obtaining a pixel in the real image and finding a correlation to the virtual image, the reverse order may be performed when utilized in a vehicle. That is, not every point on the real image is utilized in the virtual image, due to the distortion and the focus on only a respective highlighted region (e.g., a cylindrical/elliptical shape). Therefore, if processing takes place with respect to these points that are not utilized, then time is wasted in processing pixels that are not used. Therefore, for in-vehicle processing of the image, the reverse order is performed. That is, a location is identified in the virtual image and the corresponding point is identified in the real image. The following describes the details for identifying a pixel in the virtual image and determining a corresponding pixel in the real image.
where uvirt is the virtual image point u-axis (horizontal) coordinate, fu is the u direction (horizontal) focal length of the camera, and u0 is the image center u-axis coordinate.
Next, the vertical projection of angle θ is represented by the angle β. The formula for determining angle β follows the rectilinear projection as follows:
where νvirt is the virtual image point v-axis (vertical) coordinate, fν is the ν direction (vertical) focal length of the camera, and ν0 is the image center v-axis coordinate.
The incident ray angles can then be determined by the following formulas:
As described earlier, if there is no pan or tilt between the optical axis of the virtual camera and the real camera, then the virtual incident ray (θvirt,φvirt) and the real incident ray (θreal,φreal) are equal. If pan and/or tilt are present, then compensation must be made to correlate the projection of the virtual incident ray and the real incident ray.
For each determined virtual incident ray (θvirt,φvirt), any point on the incident ray can be represented by the following matrix:
where ρ is the distance of the point from the origin.
The virtual pan and/or tilt can be represented by a rotation matrix as follows:
where α is the pan angle, and β is the tilt angle.
After the virtual pan and/or tilt rotation is identified, the coordinates of the same point on the same incident ray (for the real camera) will be as follows:
The new incident ray angles in the rotated coordinates system will be as follows:
As a result, a correspondence is determined between (θvirt,φvirt) and (θreal,φreal) when tilt and/or pan is present with respect to the virtual camera model. It should be understood that the correspondence between (θvirt,φvirt) and (θreal,φreal) is not related to any specific point at distance ρ on the incident ray. The real incident ray angle is only related to the virtual incident ray angles (θvirt,φvirt) and the virtual pan and/or tilt angles α and β.
Once the real incident ray angles are known, the intersection of the respective light rays on the real image may be readily determined as discussed earlier. The result is a mapping of a virtual point on the virtual image to a corresponding point on the real image. This process is performed for each point on the virtual image for identifying the corresponding point on the real image and generating the resulting image.
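The following is a minimal sketch of this virtual-to-real mapping for one pixel. It assumes, for illustration, a cylindrical horizontal projection and a rectilinear vertical projection for the virtual camera, and a caller-supplied real_model that applies the calibrated real (e.g., fisheye) camera model; the exact projection formulas used by the system may differ.

```python
import numpy as np

def virtual_to_real_pixel(u_virt, v_virt, fu, fv, u0, v0, pan, tilt, real_model):
    """Map one virtual-image pixel to its corresponding real-image pixel.

    Illustrative assumptions: the horizontal angle grows linearly with the
    horizontal pixel offset (cylindrical surface) and the vertical projection is
    rectilinear; real_model(theta, phi) returns (u_real, v_real).
    """
    # Back-project the virtual pixel to projection angles alpha (horizontal), beta (vertical)
    alpha = (u_virt - u0) / fu
    beta = np.arctan2(v_virt - v0, fv)

    # Unit vector along the virtual incident ray (z is the virtual optical axis)
    ray = np.array([np.sin(alpha), np.tan(beta), np.cos(alpha)])
    ray /= np.linalg.norm(ray)

    # Virtual pan (rotation about the vertical axis) and tilt (about the horizontal axis)
    Rp = np.array([[np.cos(pan), 0.0, np.sin(pan)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(pan), 0.0, np.cos(pan)]])
    Rt = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(tilt), -np.sin(tilt)],
                   [0.0, np.sin(tilt), np.cos(tilt)]])
    ray = Rt @ Rp @ ray

    # Real incident ray angles: theta from the optical axis, phi around it
    theta = np.arccos(np.clip(ray[2], -1.0, 1.0))
    phi = np.arctan2(ray[1], ray[0])

    # The calibrated real camera model gives the real-image pixel
    return real_model(theta, phi)
```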
The images captured by the image capture devices 80 are input to a camera switch 82. The plurality of image capture devices 80 may be enabled based on the vehicle operating conditions 81, such as vehicle speed, turning a corner, or backing into a parking space. The camera switch 82 enables one or more cameras based on vehicle information 81 communicated to the camera switch 82 over a communication bus, such as a CAN bus. A respective camera may also be selectively enabled by the driver of the vehicle.
The captured images from the selected image capture device(s) are provided to a processing unit 22. The processing unit 22 processes the images utilizing a respective camera model as described herein and applies a view synthesis for mapping the captured image onto the display of the rearview mirror device 24.
A mirror mode button 84 may be actuated by the driver of the vehicle for dynamically enabling a respective mode associated with the scene displayed on the rearview mirror device 24. Three different modes include, but are not limited to, (1) dynamic rearview mirror with rear-view cameras; (2) dynamic mirror with front-view cameras; and (3) dynamic rearview mirror with surround-view cameras.
Upon selection of the mirror mode and processing of the respective images, the processed images are provided to the rearview image display device 24, where the images of the captured scene are reproduced and displayed to the driver of the vehicle. It should be understood that any of the respective cameras may be used to capture the image for conversion to a virtual image for scene brightness analysis.
If only a single camera is used, camera switching is not required. The captured image is input to the processing unit 22 where the captured image is applied to a camera model. The camera model utilized in this example includes an ellipse camera model; however, it should be understood that other camera models may be utilized. The projection of the ellipse camera model is meant to view the scene as though the image is wrapped about an ellipse and viewed from within. As a result, pixels that are at the center of the image are viewed as being closer as opposed to pixels located at the ends of the captured image. Zooming in the center of the image is greater than at the sides.
The processing unit 22 also applies a view synthesis for mapping the captured image from the concave surface of the ellipse model to the flat display screen of the rearview mirror.
The mirror mode button 84 includes further functionality that allows the driver to control other viewing options of the rearview mirror display 24. The additional viewing options that may be selected by the driver include: (1) Mirror Display Off; (2) Mirror Display On With Image Overlay; and (3) Mirror Display On Without Image Overlay.
“Mirror Display Off” indicates that the image captured by the image capture device that is modeled, processed, and displayed as a de-warped image is not displayed on the rearview mirror display device. Rather, the rearview mirror functions identically to a mirror, displaying only those objects captured by the reflective properties of the mirror.
“Mirror Display On With Image Overlay” indicates that the image captured by the image capture device that is modeled, processed, and projected as a de-warped image is displayed on the rearview mirror display device 24, illustrating the wide-angle FOV of the scene. Moreover, an image overlay 92 (shown in
The “Mirror Display On Without Image Overlay” displays the same captured images as described above but without the image overlay. The purpose of the image overlay is to allow the driver to reference contents of the scene relative to the vehicle; however, a driver may find that the image overlay is not required and may select to have no image overlay in the display. This selection is entirely at the discretion of the driver of the vehicle.
Based on the selection made with the mirror mode button 84, the appropriate image is presented to the driver via the rearview mirror in block 24. It should be understood that if more than one camera is utilized, such as a plurality of narrow FOV cameras, where each of the images must be integrated together, then image stitching may be used. Image stitching is the process of combining multiple images with overlapping FOV regions to produce a segmented panoramic view that is seamless; that is, the images are combined such that there are no noticeable boundaries where the overlapping regions have been merged. After image stitching has been performed, the stitched image is input to the processing unit for applying camera modeling and view synthesis to the image.
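As one possible way to merge overlapping narrow-FOV frames, the sketch below uses OpenCV's high-level stitcher (OpenCV 4.x API); it is not necessarily the stitching method employed by the system described here.

```python
import cv2

def stitch_views(frames):
    """Combine overlapping camera frames into a single seamless panorama (a sketch)."""
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```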
In systems where just an image is reflected by a typical rearview mirror, or where a captured image is obtained but dynamic enhancement is not utilized, such as a simple camera with no fisheye lens or a camera having a narrow FOV, objects that possibly present a safety issue or could be on a collision course with the vehicle are not captured in the image. Other sensors on the vehicle may in fact detect such objects, but displaying a warning and identifying the object in the image is an issue. Therefore, by utilizing a captured image and utilizing a dynamic display where a wide FOV is obtained either by a fisheye lens, image stitching, or digital zoom, an object can be illustrated on the image. Moreover, symbols such as parking assist symbols and object outlines for collision avoidance may be overlaid on the object.
In typical systems, as shown in
Image overlay 138 generates a vehicle boundary of the vehicle. Since the virtual image is generated from only the objects and scenery exterior of the vehicle, the virtual image will not capture any exterior trim components of the vehicle. Therefore, image overlay 138 is provided to generate a vehicle boundary indicating where the boundaries of the vehicle would be located had they been shown in the captured image.
In block 144, various systems are used to identify objects captured in the captured image. Such objects include, but are not limited to, vehicles from devices described herein, lanes of the road from lane centering systems, pedestrians from pedestrian awareness systems, objects from parking assist systems, and poles or obstacles from various other sensing systems/devices.
A vehicle detection system estimates the time to collision as described herein. The time to collision and object size estimation may be determined using an image-based approach or may be determined using point motion estimation in the image plane, which will be described in detail later.
The time to collision may be determined from various devices. Lidar is a remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light. Lidar provides object range data directly. The change in range between measurements over time gives the relative speed of the object. Therefore, the time to collision may be determined as the range divided by the relative closing speed.
Radar is an object detection technology that uses radio waves to determine the range and speed of objects. Radar provides an object's relative speed and range directly. The time to collision may be determined as function of the range divided by the relative speed.
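A minimal sketch of these two range-based time-to-collision calculations follows:

```python
def ttc_from_lidar(range_prev, range_curr, dt):
    """Lidar: closing speed is the change in range over time; TTC = range / closing speed."""
    closing_speed = (range_prev - range_curr) / dt   # positive when the object approaches
    return range_curr / closing_speed if closing_speed > 0 else float('inf')

def ttc_from_radar(range_curr, closing_speed):
    """Radar reports range and relative (closing) speed directly; TTC = range / speed."""
    return range_curr / closing_speed if closing_speed > 0 else float('inf')
```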
Various other devices may be used in combination to determine whether a vehicle is on a collision course with a remote vehicle in a vicinity of the driven vehicle. Such devices include lane departure warning systems, which indicate that a lane change may be occurring while a turn signal is not activated. If the vehicle is departing a lane toward a lane of the detected remote vehicle, then a determination may be made that a time to collision should be determined and the driver made aware. Moreover, pedestrian detection devices, parking assist devices, and clear path detection systems may be used to detect objects in the vicinity for which a time to collision should be determined.
In block 146, object overlays are generated for the detected objects along with the time to collision for each object.
In block 120, the results are displayed on the dynamic rearview display mirror.
In block 152, the object size, distance, and vehicle coordinates are recorded. This is performed by defining a window overlay for the detected object (e.g., the boundary of the object as defined by the rectangular box). The rectangular boundary should encase each element of the vehicle that can be identified in the captured image. Therefore, the boundaries should be close to the outermost exterior portions of the vehicle without creating large gaps between an outermost exterior component of the vehicle and the boundary itself.
To determine an object size, an object detection window is defined. This can be determined by estimating the following parameters:
def: wintdet = (uWt, νHt, νBt): object detection window size and location (on the image) at time t, where uWt is the detection window width, νHt is the detection window height, and νBt is the detection window bottom.
Next, the object size and distance, represented in vehicle coordinates, are estimated by the following parameters:
def: Xt = (wto, hto, dto) is the object size and distance (observed) in vehicle coordinates, where wto is the object width (observed), hto is the object height (observed), and dto is the object distance (observed) at time t.
Based on camera calibration, the (observed) object size and distance Xt can be determined from the in-vehicle detection window size and location wintdet as represented by the following equation:
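The exact relation depends on the calibration. One common construction, assuming a calibrated pinhole camera with known mounting height and a flat ground plane (an illustrative sketch, not necessarily the equation used here), is:

```python
def window_to_vehicle_coords(uW, vH, vB, fu, fv, v0, cam_height):
    """Recover observed object width, height, and distance Xt = (w, h, d) from the
    detection window (uW, vH, vB). Assumes the window bottom vB lies on the ground
    and is below the horizon row v0."""
    d = fv * cam_height / (vB - v0)   # ground-plane range from the window bottom row
    w = uW * d / fu                   # observed width in vehicle coordinates
    h = vH * d / fv                   # observed height in vehicle coordinates
    return w, h, d
```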
In block 153, the object distance and relative speed of the object are calculated as components of Yt. In this step, the output Yt is determined, which represents the estimated object parameters (size, distance, velocity) at time t. This is represented by the following definition:
def: Yt = (wte, hte, dte, νt), where wte, hte, and dte are the estimated object size and distance, and νt is the object relative speed at time t.
Next, a model is used to estimate object parameters and a time-to-collision (TTC) and is represented by the following equation:
Yt = ƒ(Xt, Xt−1, Xt−2, . . . , Xt−n)
A more simplified example of the above function ƒ can be represented as follows:
In block 154, the time to collision is derived using the above formulas and is represented by the following formula:
TTC: TTCt = dte/νt
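A minimal sketch of one possible simplified ƒ follows: smooth the recent distance observations to obtain dte, estimate νt by finite differences, and divide. The window length and smoothing choice are illustrative assumptions.

```python
import numpy as np

def estimate_ttc(distances, dt):
    """Estimate TTC from a history of observed distances (most recent last).
    Requires at least two observations taken dt seconds apart."""
    d = np.asarray(distances, dtype=float)
    d_est = d[-3:].mean()               # smoothed distance estimate dte
    v_rel = (d[-2] - d[-1]) / dt        # closing speed, positive when approaching
    return d_est / v_rel if v_rel > 0 else float('inf')
```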
In block 162, changes to the object size and to the object point location are determined. By comparing where an identified point is in a first image relative to the same point in another captured image after a temporal displacement has occurred, the relative change in location, together with the object size, can be used to determine the time to collision.
In block 163, the time to collision is determined based on the occupancy of the target in the majority of the screen height.
To determine the change in height and width and corner points of the object overlay boundary, the following technique is utilized. The following parameters are defined:
Δwt = wt − wt−1,
Δht = ht − ht−1,
Δx(pt^i) = x(pt^i) − x(pt−1^i), Δy(pt^i) = y(pt^i) − y(pt−1^i)
where
wt = 0.5*(x(pt^1) − x(pt^2)) + 0.5*(x(pt^3) − x(pt^4)),
ht = 0.5*(y(pt^2) − y(pt^4)) + 0.5*(y(pt^3) − y(pt^1)).
The following estimates are defined by ƒw, ƒh, ƒx, ƒy:
Δwt+1=ƒw(Δwt,Δwt−1,Δwt−2, . . . ),
Δht+1=ƒh(Δht,Δht−1,Δht−2, . . . ),
Δxt+1=ƒx(Δxt,Δxt−1,Δxt−2, . . . ),
Δyt+1=ƒy(Δyt,Δyt−1,Δyt−2, . . . ).
The TTC can be determined using the above variables Δwt+1, Δht+1, Δxt+1, and Δyt+1 with a function ƒTTC, which is represented by the following formula:
TTCt+1 = ƒTTC(Δwt+1, Δht+1, Δxt+1, Δyt+1, . . . ).
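One common instance of such a function ƒTTC uses only the overlay height change: with a pinhole camera the image height of an object is inversely proportional to its distance, so for a constant closing speed TTC ≈ dt·h(t−1)/(h(t) − h(t−1)). This is an illustrative assumption rather than the exact function used here.

```python
def ttc_from_scale_change(h_prev, h_curr, dt):
    """TTC from the growth of the overlay height between two frames dt seconds apart."""
    dh = h_curr - h_prev
    return dt * h_prev / dh if dh > 0 else float('inf')   # inf when not approaching
```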
In block 164, a sensor fusion technique is applied to the results of each of the sensors, fusing the objects detected in images by the image capture device with the objects detected by the other sensing systems. Sensor fusion allows the outputs from at least two obstacle sensing devices to be combined at the sensor level. This provides richer content of information. Both detection and tracking of identified obstacles from both sensing devices are combined. The accuracy in identifying an obstacle at a respective location by fusing the information at the sensor level is increased in contrast to performing detection and tracking on data from each respective device first and then fusing the detection and tracking data thereafter. It should be understood that this technique is only one of many sensor fusion techniques that can be used and that other sensor fusion techniques can be applied without deviating from the scope of the invention.
In block 166, the object detection results from the sensor fusion technique are identified in the image and highlighted with an object image overlay (e.g., using Kalman filtering or Condensation filtering).
In block 120, the highlighted object image overlays are displayed on the dynamic rearview mirror display device.
An interior passenger compartment is shown generally at 200. An instrument panel 202 includes a display device 204 for displaying the dynamically enhanced image. The instrument panel may further include a center console stack 206 that includes the display device 204 as well as other electronic devices such as multimedia controls, navigation system, or HVAC controls.
The dynamically enhanced image may be displayed on a head-up display (HUD) 208. The TTC may also be projected as part of the HUD 208 for alerting the driver to a potential collision. Displays such as those shown in
The dynamically enhanced image may further be displayed on a rearview mirror display 212. The rearview mirror display 212 when not projecting the dynamically enhanced image may be utilized as a customary rearview reflective mirror having usual mirror reflection properties. The rearview mirror display 212 may be switched manually or autonomously between the dynamically enhanced image projected on the rearview mirror display and a reflective mirror.
A manual toggling between the dynamically enhanced display and the reflective mirror may be actuated by the driver using a designated button 214. The designated button 214 may be disposed on the steering wheel 216 or the designated button 214 may be disposed on the rearview mirror display 212.
An autonomous toggling to the dynamically enhanced display may be actuated when a potential collision is present. This could be determined by various factors, such as remote vehicles detected within a respective region proximate to the vehicle combined with another imminent collision factor, such as a turn signal being activated on the vehicle indicating that the vehicle is being transitioned, or is intended to be transitioned, into an adjacent lane occupied by the detected remote vehicle. Another example would be a lane detection warning system that detects a perceived unwanted lane change (i.e., detecting a lane change based on detected lane boundaries while no turn signal is activated). Given those scenarios, the rearview mirror display will automatically switch to the dynamically enhanced image. It should be understood that the above scenarios are only a few of the examples that may be used for autonomous enablement of the dynamically enhanced image, and that other factors may be used for switching to the dynamically enhanced image. Alternatively, if a potential collision is not detected, the rearview mirror display will maintain the reflective display.
If more than one indicator and/or output display device is used in the vehicle to display the dynamically enhanced image, then the display closest to where the driver is currently focusing can be used to attract the driver's attention and notify the driver when a potential collision is likely. Systems that can be used in cooperation with the embodiments described herein include a Driver Gaze Detection System described in copending application ______ filed ______ and Eyes-Off-The-Road Classification with Glasses Classifier ______ filed ______, each incorporated herein by reference in its entirety. Such detection devices/systems are shown generally at 218.
In block 228, a time to collision fusion technique is applied to the results of each of the time to collision data outputs in blocks 220-226. Time to collision fusion allows the time to collision from each output of the various systems to be cooperatively combined for providing enhanced confidence for a time to collision determination in comparison to just a single system determination. Each time to collision output from each device or system for a respective object may be weighted in the fusion determination. Although the sensing and image capture devices are used to determine a more precise location of the object, each time-to-collision determined for each sensing and imaging device can be used to determine a comprehensive time-to-collision that can provide greater confidence than a single calculation. Each of the respective time-to-collisions of an object for each sensing device can be given a respective weight for determining how much each respective time-to-collision determination should be relied on in determining the comprehensive time-to-collision.
The number of time to collision inputs available will determine how each input will be fused. If there is only a single time to collision input, then the resulting time to collision will be the same as the input time to collision. If more than one time to collision input is provided, then the output will be a fused result of the input time to collision data. As described earlier, the fusion output is a weighted sum of each of the time to collision inputs. The following equation represents the fused and weighted sum of each of the time to collision inputs:
ΔtTTCout = wim1·ΔtTTCim1 + wim2·ΔtTTCim2 + wsens·ΔtTTCsens + wv2v·ΔtTTCv2v
where Δt is a determined time-to-collision, w is a weight, and the subscripts im1, im2, sens, and v2v indicate the image device or sensing device from which the data is obtained for determining the time-to-collision. The weights can be either predefined from training or learning, or dynamically adjusted.
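A minimal sketch of this weighted fusion follows. It assumes the weights are renormalized over whichever sources are available, which is an implementation choice rather than something specified here; the source keys mirror im1, im2, sens, and v2v.

```python
def fuse_ttc(ttc_by_source, weights):
    """Weighted fusion of per-source TTC estimates, e.g. keys 'im1', 'im2', 'sens', 'v2v'.
    With a single available input, the fused TTC equals that input."""
    available = {k: v for k, v in ttc_by_source.items() if v is not None}
    if not available:
        return None
    if len(available) == 1:
        return next(iter(available.values()))
    w_sum = sum(weights[k] for k in available)          # renormalize over available sources
    return sum(weights[k] * available[k] for k in available) / w_sum

# Usage example (illustrative values):
# fuse_ttc({'im1': 2.4, 'im2': 2.6, 'sens': 2.5, 'v2v': None},
#          {'im1': 0.3, 'im2': 0.2, 'sens': 0.4, 'v2v': 0.1})
```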
In block 230, the object detection results from the sensor fusion technique are identified in the image and highlighted with an object image overlay.
In block 120, the highlighted object image overlays are displayed on the dynamic rearview mirror display.
While certain embodiments of the present invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.
The application is a continuation-in-part of U.S. application Ser. No. 14/059,729, filed Oct. 22, 2013.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 14059729 | Oct 2013 | US |
| Child | 14071982 | | US |