This application claims the priority benefit of Italian Application for Patent No. 102016000094858, filed on Sep. 21, 2016, the disclosure of which is hereby incorporated by reference in its entirety.
The present disclosure relates to an advanced Cross Traffic Alert (CTA) method and a corresponding Cross Traffic Alert system. The Cross Traffic Alert (CTA) method is an important feature of Automotive Advanced Driver Assistance Systems (ADAS). The Cross Traffic Alert (CTA) system is designed to alert the driver when an encroaching vehicle is detected.
In the last decade, automotive companies have invested significantly in innovation concerning many aspects of Advanced Driver Assistance Systems (ADAS). Due to the increasing attention toward automotive smart systems, much effort has been expended in terms of new hardware and software equipment.
For example, modern cars may use back, forward and/or side cameras for different purposes. Some of the most popular applications are, for example: Cross Traffic Alert (CTA), Lane Departure Warning (LDW), Collision Avoidance (CA) and Blind Spot Detection (BSD).
The different Advanced Driver Assistance Systems (ADAS) solutions may be used advantageously in different road scenarios. For example, the Cross Traffic Alert (CTA) may be useful in city road environments where other vehicles can cross the road. Conversely, Lane Departure Warning (LDW) or Blind Spot Detection (BSD) may be useful on highways where the car reaches high speeds and a brief distraction of the driver can lead to an accident.
Therefore, a wide range of advanced technologies is currently being introduced into production automobiles, with investments being made in innovation concerning many aspects of Advanced Driver Assistance Systems (ADAS).
As noted above, an Advanced Driver Assistance System (ADAS) is a vehicle control system that uses environment sensors (for example, radar, laser, vision, image camera) and the processing of environment information to improve traffic safety by assisting the driver in recognizing and reacting to potentially dangerous traffic situations.
Different types of intelligent vehicle assistance systems are used in driver information systems, for example:
In particular, driver warning systems actively warn the driver of a potential danger, allowing the driver to take appropriate corrective action in order to mitigate or completely avoid the dangerous event.
Among these systems, in addition to serving as a safety aid, Cross Traffic Alert (CTA) is an important system for reducing the stress felt by the driver, as disclosed in the document: B. Reimer, B. Mehler and J. F. Coughlin, "An Evaluation of Driver Reactions to New Vehicle Parking Assist Technologies Developed to Reduce Driver Stress", New England University Transportation Center, White Paper, 2010 (incorporated by reference).
All of these systems are designed to alert drivers, for example with acoustic warning signals, of the presence of encroaching vehicles. This warning can be useful in different situations, such as backing out of a parking space, or slowly approaching or leaving traffic lights or crossroads.
A physical limitation of the Cross Traffic Alert (CTA) system is that the sensors cannot see through obstructing objects or vehicles in the scene, so the system cannot work properly in such cases.
The Cross Traffic Alert (CTA) system requires efficient algorithms and methods for real-time processing of the information collected. A range sensor mounted on the vehicle could provide a practical solution to the problem.
Typically, a radar sensor, or a combination of radar and image sensors, has been proposed for this purpose, as described for example in United States Patent Application Publication Nos. 2014/0015693 and 2011/0133917 (both incorporated by reference).
These known systems achieve good performance, but they are too expensive to enter the automotive mass market.
Moreover, data fusion techniques, which combine information from several sensors in order to provide a complete view of the environment, are an interesting approach to the problem.
Furthermore, other well-performing approaches have been proposed, such as the fusion of infrared and visible-light image sensors, as described for example in: J. Thomanek and G. Wanielik, "A new pixel-based fusion framework to enhance object detection in automotive applications", 17th International Conference on Information Fusion (FUSION), 2014 (incorporated by reference), and the combination of an object detection sensor (radar or camera) with in-vehicle sensors (for example, for the steering wheel and speedometer), as disclosed in United States Patent Application Publication No. 2008/0306666 (incorporated by reference).
Unfortunately, these less expensive systems are still too costly to be suitable for a potential automotive mass market.
Since there is a need for very low-cost systems, attention has been focused on the use of a single low-cost image camera.
In the art, different approaches, based on a single low cost image camera, have been proposed:
At the system level, these approaches are typically based on a combination of an image sensor and an image processor, as described for example in United States Patent Application Publication No. 2010/0201508 (incorporated by reference), and usually on an Engine Control Unit (ECU) with a multi-core Micro Controller Unit (MCU), as described by: V. Balisavira and V. K. Pandey, "Real-time Object Detection by Road Plane Segmentation Technique for ADAS", Eighth International Conference on Signal Image Technology and Internet Based Systems (SITIS), 2012 (incorporated by reference), to intensively process the image data.
The main drawbacks of the cited prior art techniques are the need for at least an external image processor to perform heavy computation on the image data and the need for in-vehicle sensors (gear status, vehicle speed) to refine the results.
There is accordingly a need in the art for a low-cost Cross Traffic Alert method and system.
A Cross Traffic Alert method is designed to alert drivers when encroaching vehicles are detected. This method is effective in different situations, such as backing out of a parking space or slowly approaching or leaving traffic lights or crossroads. Many automotive companies (such as Volvo, Toyota, Ford and so on) have implemented this method for the high-end market.
However, an embodiment herein is focused on a low-cost Cross Traffic Alert system to address the automotive mass market.
According to one or more embodiments, a Cross Traffic Alert method is presented.
In particular, the proposed Cross Traffic Alert method uses a low-cost camera and is based entirely on the processing of Optical Flow (OF) data.
The method achieves high performance using only Optical Flow (OF) data.
In one or more embodiments, the method further comprises the steps of:
Additionally, in one or more embodiments, calculating a Horizontal Filter subset comprises the steps of:
Embodiments of the present disclosure will now be described with reference to the annexed drawings, which are provided purely by way of non-limiting example and in which:
In the following description, numerous specific details are given to provide a thorough understanding of embodiments. The embodiments can be practiced without one or several specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the embodiments.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The headings provided herein are for convenience only and do not interpret the scope or meaning of the embodiments.
In particular, the Cross Traffic Alert (CTA) system is able to detect vehicles that move into the vehicle driving path from the left side or from the right side.
The camera that acquires images of the surrounding area may be mounted not only on a vehicle that is still, for example, stopped at a traffic light or at a road intersection, but also on a vehicle that is moving slowly, for example, in a parking area.
Since a growing number of companies produce image sensors with hardware-implemented Optical Flow (OF) analysis, as disclosed in: A. A. Stocker and R. J. Douglas, "Analog integrated 2-D optical flow sensor with programmable pixels", Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS), 2004 (incorporated by reference), the Cross Traffic Alert (CTA) system described here can work directly in the Image Signal Processor (ISP) internal to the image camera, avoiding overloading the Engine Control Unit (ECU) and avoiding transmission of the entire image flow, without the need for an external image processor.
This solution makes it possible to obtain a real-time Cross Traffic Alert (CTA) system with low extra computational effort, offering good performance.
As mentioned before, the Cross Traffic Alert (CTA) system can work directly in the Image Signal Processor (ISP) of the Camera, without the need to have an external Image Processor.
In the example considered, at least one image camera 10 generating an image IMG is placed on a vehicle, such as a car, a truck or a motorcycle, and is configured to monitor the front, the back and/or the lateral road situation with respect to the vehicle.
The image camera 10 comprises a processing module (Pr) 20, for example a system for Cross Traffic Alert analysis and alert generation, which is adapted to analyze the road situation and produce an alert to the driver under some circumstances.
A typical detection system may therefore comprise a camera 10, a processing module 20 and an alert device 30, such as a visual display 34, an acoustic element 32 and/or a haptic actuator 36. Therefore, the alert given to the driver can be of one or more different types.
In particular, haptic or kinesthetic communication recreates the sense of touch by applying forces, vibrations, or motions to the user, in particular through the steering wheel of the vehicle.
In various embodiments of the present disclosure, the alert can be a combination or mixing of the acoustic, visual and haptic alerts.
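By way of non-limiting illustration only, the following minimal sketch (in Python) shows one possible way such a combined alert could be dispatched; the class and method names are hypothetical assumptions introduced here and are not part of the disclosed system.

```python
# Illustrative sketch only: combines the acoustic, visual and haptic
# alert devices described above. All names are hypothetical.

class AlertDispatcher:
    def __init__(self, acoustic=None, display=None, haptic=None):
        # Each device is any object exposing a trigger() method.
        self.devices = [d for d in (acoustic, display, haptic) if d is not None]

    def alert(self):
        # A combined alert simply triggers every configured device.
        for device in self.devices:
            device.trigger()
```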
In the embodiment considered, for example, the Cross Traffic Alert (CTA) system generates an output video showing the input scene where the detected vehicles or moving objects are surrounded by a Bounding Box BB (see, for example,
In particular, the Cross Traffic Alert (CTA) processing module 20 is entirely based on the Optical Flow (OF), for example, the collection of Motion Vectors (MVs) indicating the motion of a feature in the current frame compared with the same feature in the previous frame.
The Optical Flow (OF) is computed directly inside the Image Signal Processor (ISP) of the camera 10, for example by the image processing module 20, ensuring a real time processing.
More to the point, the Cross Traffic Alert (CTA) system uses only the Optical Flow (OF) available from the sensor/ISP (for example, from the STV0991 sensor device).
Moreover, the STV0991 sensor device is an image signal processor with hardware accelerators for video analytics (i.e., Optical Flow and Line Detector) working in parallel with embedded video encoders. A 500 MHz ARM-based CPU, an H264 video encoder and a small rendering engine enable real-time applications. The Cross Traffic Alert (CTA) procedure runs on the STV0991 CPU, exploiting its embedded video analytics HW accelerator. In particular, only the Optical Flow is used.
Moreover, as mentioned before, the Cross Traffic Alert (CTA) system does not need the image content, allowing a reduction in power consumption and processor requirements, because it can work directly in a low-resource system, i.e., directly in the Image Signal Processor (ISP) of the camera 10.
This avoids overloading the Engine Control Unit (ECU) and transmitting the entire image flow between the different modules of the system.
The processing module 20 comprises a first image analysis (IA) module 22 configured to analyze the images IMG provided by the camera 10 in order to generate Optical Flow data OF. For example, in various embodiments, the Optical Flow data OF include a collection/list of Motion Vectors (MV) indicating the motion of respective features in the current image/frame compared with the previous image/frame. The Optical Flow data OF are computed in hardware, thereby permitting real-time processing at, for example, 30 fps.
The computation of Optical Flow data OF, in particular of Motion Vectors, is well known in the art, rendering a more detailed description herein unnecessary.
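Purely as a non-limiting illustration, a sparse Optical Flow of this kind can be approximated in software, for example with the pyramidal Lucas-Kanade tracker available in OpenCV; the sketch below is only a stand-in for the hardware-computed Optical Flow described above, and the function name and parameter values are assumptions introduced here.

```python
import cv2

def motion_vectors(prev_gray, curr_gray, max_corners=500):
    """Return a list of Motion Vectors ((x0, y0), (x1, y1)) between two
    grayscale frames, approximating a hardware Optical Flow output."""
    # Select trackable features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return []
    # Track the features into the current frame (pyramidal Lucas-Kanade).
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    # Keep only the successfully tracked features.
    return [(tuple(p.ravel()), tuple(q.ravel()))
            for p, q, ok in zip(pts, nxt, status) if ok]
```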
In the embodiment considered, the Cross Traffic Alert (CTA) processing module 20 receives the Optical Flow data OF, for example, the motion vectors MV, having been estimated by the image analysis module 22.
In one or more embodiments as exemplified in
The aforementioned steps will be described in more detail in the following subparagraphs:
First of all, the processing module 20 calculates the Vanishing Point VP using the Vanishing Point calculation module 24.
One of the most useful pieces of information for understanding the scene is the position of the Vanishing Point VP. From a theoretical point of view, the Vanishing Point VP position in the scene overlaps the center of the image only in an ideal situation: a perfectly planar road (no slopes and no curves), a forward camera placed horizontally with respect to the ground (no tilt angle) and perpendicularly to the main car axis (no pan angle).
The real case (which is the target scenario for the proposed application) presents camera calibration parameters different from zero (for example, tilt and pan angles) and, above all, the host car crosses roads which can have slopes and curves.
Therefore, in a real environment, the Vanishing Point VP position does not coincide with the center of the image and for this reason it must be estimated.
In one or more embodiments, the Vanishing Point VP position is important not only because it delimits the horizon of the scene, but also because it contributes to the selection of the Motion Vectors MVs potentially belonging to a crossing vehicle in the subsequent Horizontal Filter sub-block.
In one or more embodiments the Vanishing Point VP is computed using only the OF.
Some or all of the processing steps of the procedure performed by the Vanishing Point calculation module 24 are exemplified in
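While the specific steps of module 24 are those exemplified in the referenced figure, a common way to estimate a Vanishing Point from the Optical Flow alone is a least-squares intersection of the lines supporting the Motion Vectors. The sketch below illustrates that generic technique only; it is an assumption for illustration and not necessarily the disclosed procedure.

```python
import numpy as np

def estimate_vanishing_point(vectors):
    """Least-squares intersection of the lines supporting the Motion
    Vectors; each vector is ((x0, y0), (x1, y1))."""
    rows, rhs = [], []
    for (x0, y0), (x1, y1) in vectors:
        dx, dy = x1 - x0, y1 - y0
        norm = np.hypot(dx, dy)
        if norm < 1e-6:                 # skip (near-)zero-length vectors
            continue
        # Unit normal of the line through (x0, y0) with direction (dx, dy).
        nx, ny = -dy / norm, dx / norm
        rows.append((nx, ny))
        rhs.append(nx * x0 + ny * y0)
    # Point minimizing the sum of squared distances to all the lines
    # (at least two non-parallel vectors are required).
    vp, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return vp                            # (x, y) Vanishing Point estimate
```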
Moreover, the processing module 20 calculates the subset OF′ using the Horizontal Filter module 26, which receives as input the Optical Flow OF and the Vanishing Point VP. Furthermore, the Bounding Box BB list generated by the Clustering module 28 is also received as input, as a feedback reaction.
In order for the subsequent Clustering step 28 to work correctly, since it works only with a steady camera, it is important to remove the Motion Vectors MVs that point in the same direction as the vehicle movement; thus, a filtering of the horizontal Motion Vectors MVs is performed.
Therefore, the processing module 20 is configured to apply a filtering based on the Bounding Box BB list of the previous frame, the Vanishing Point VP and the Motion Vector MV orientations.
In particular, as indicated in
If the Motion Vector is considered horizontal, it is maintained in the subset OF′; otherwise, it is discarded. A vector is retained, i.e., it is considered horizontal, if the two following conditions are both satisfied (a sketch implementing them follows the list):
1) its orientation lies around a horizontal orientation (zero or 180 degrees), as indicated in
2) the difference between the Motion Vector MV orientation and the orientation of the Motion Vector MV translated to the Vanishing Point VP exceeds an evaluated dynamic threshold TH, as indicated in
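Purely by way of illustration, the following sketch applies the two conditions above; the tolerance and threshold values are assumptions (in the described system TH is evaluated dynamically), and condition 2) is rendered under one plausible reading, namely comparing each vector's orientation with the orientation of the ray from the Vanishing Point to the vector's origin.

```python
import numpy as np

def horizontal_filter(vectors, vp, angle_tol_deg=20.0, th_deg=15.0):
    """Keep only the Motion Vectors satisfying the two conditions above;
    angle_tol_deg and th_deg are illustrative values."""
    vp_x, vp_y = vp
    subset = []
    for (x0, y0), (x1, y1) in vectors:
        # Orientation folded into [0, 180): 0 and 180 degrees coincide.
        ang = np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0
        # 1) orientation close to horizontal (zero or 180 degrees)
        if min(ang, 180.0 - ang) > angle_tol_deg:
            continue
        # 2) orientation sufficiently different from the direction of the
        #    ray from the Vanishing Point to the vector's origin.
        radial = np.degrees(np.arctan2(y0 - vp_y, x0 - vp_x)) % 180.0
        diff = abs(ang - radial)
        if min(diff, 180.0 - diff) > th_deg:
            subset.append(((x0, y0), (x1, y1)))
    return subset  # the filtered subset OF'
```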
In particular, the Clustering module 28 receives the subset OF′ from the Horizontal Filter module 26 and calculates the Bounding Box BB list.
More to the point, Clustering is the task of grouping a set of items of information in such a way that items in the same group (called a cluster) are more similar to each other than to those in other groups.
In this application, it is important to identify moving cars transversally approaching the monitored vehicle on which the Cross Traffic Alert (CTA) system is installed.
The Clustering step applied in module 28 is based, as mentioned before, only on the Optical Flow analysis coming from the previous Horizontal Filter step 26. For example, in one or more embodiments the Clustering step 28 can be implemented as disclosed in U.S. application patent Ser. No. 15/169,232 (claiming priority to Italian Application for Patent No. 102015000082886) (incorporated by reference).
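For illustration only, the minimal sketch below groups the filtered Motion Vectors into Bounding Boxes with a generic single-linkage clustering of their start points; it is an assumption introduced here, not the specific procedure of the referenced application, and the parameter values (max_dist, min_size) are likewise hypothetical.

```python
import numpy as np

def cluster_bounding_boxes(vectors, max_dist=30.0, min_size=3):
    """Group filtered Motion Vectors into Bounding Boxes by single-linkage
    clustering of their start points; parameters are illustrative."""
    pts = np.array([v[0] for v in vectors], dtype=float)
    n, labels, current = len(pts), {}, 0
    for i in range(n):
        if i in labels:
            continue
        labels[i], stack = current, [i]
        # Grow the cluster with every point within max_dist of a member.
        while stack:
            j = stack.pop()
            dists = np.hypot(*(pts - pts[j]).T)
            for k in np.nonzero(dists < max_dist)[0]:
                if int(k) not in labels:
                    labels[int(k)] = current
                    stack.append(int(k))
        current += 1
    boxes = []
    for c in range(current):
        members = pts[[i for i in range(n) if labels[i] == c]]
        if len(members) >= min_size:     # drop tiny spurious clusters
            x_min, y_min = members.min(axis=0)
            x_max, y_max = members.max(axis=0)
            boxes.append((x_min, y_min, x_max, y_max))
    return boxes  # the Bounding Box BB list
```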
Many tests have been executed with different scenarios and different cameras at different resolutions, with both linear and fish-eye lenses, obtaining very good visual results.
For real-time testing, the processing unit may comprise the STMicroelectronics device STV0991, the specification of which can be found at the internet web site http://www.st.com/web/catalog/mmc/FM132/SC51/PF255756. This device is an image processor that not only has the classic pipeline to reconstruct the image from a Bayer image, but also embeds the Optical Flow, which can be directly used as input for the cross traffic alert method.
This image processor embeds an ARM Cortex-R4 CPU @500 MHz; the method can therefore be loaded directly onto it, without external host processors, providing in the chip the complete solution to test "live" the behaviour of the cross traffic alert method.
An example of Rear Cross Traffic Alert CTA with fish-eye camera is shown in
The output of just the Clustering step 28 represented in
Moreover, another example of slowly approaching a crossroad with a linear camera is shown in
In this case, the camera is mounted in front of the vehicle and the vehicle is approaching a crossroad. The output of just the Clustering step 28, represented in FIG. 8A, shows false Bounding Boxes BBs on the ground, on the leaves of trees and on a road sign.
Conversely, with the proposed Cross Traffic Alert CTA method, represented in
Furthermore, another example of slowly leaving a crossroad with a linear camera is shown in
In this case, the camera is mounted in front of the vehicle and the vehicle is leaving a crossroad. The output of just the Clustering step 28, represented in
Conversely, with the proposed Cross Traffic Alert CTA method, represented in
The proposed solution has been experimentally tested on a representative dataset of scenes, obtaining effective results in terms of accuracy. In particular, many tests have been executed with different scenarios and different cameras, obtaining very good visual results. For example, linear and fish-eye lenses and different cameras and resolutions have been tested.
A very reliable and low-cost Cross Traffic Alert CTA method has been developed with the following characteristics:
In particular, the processing rate is about 800 frames/sec for a VGA sequence on a 2.3 GHz processor (such as an ARM Cortex-A15).
The system is deterministic, i.e., the same input always produces the same output.
All the Cross Traffic Alert CTA processing is performed inside the ISP of the camera; moreover, for detectability of the solution, the following can be used:
Of course, without prejudice to the principle of the invention, the details of construction and the embodiments may vary widely with respect to what has been described and illustrated herein purely by way of example, without thereby departing from the scope of the present invention, as defined by the ensuing claims.
Number | Date | Country | Kind |
---|---|---|---|
102016000094858 | Sep 2016 | IT | national |
Number | Name | Date | Kind |
---|---|---|---|
5777690 | Takeda et al. | Jul 1998 | A |
5991428 | Taniguchi | Nov 1999 | A |
6501794 | Wang | Dec 2002 | B1 |
6594583 | Ogura | Jul 2003 | B2 |
7248718 | Comaniciu | Jul 2007 | B2 |
7266220 | Sato | Sep 2007 | B2 |
7437243 | Fujimoto | Oct 2008 | B2 |
7437244 | Okada | Oct 2008 | B2 |
7612800 | Okada | Nov 2009 | B2 |
7639841 | Zhu | Dec 2009 | B2 |
8812226 | Zeng | Aug 2014 | B2 |
8964034 | Nagamine | Feb 2015 | B2 |
8977060 | Dollar | Mar 2015 | B2 |
9420151 | Yokota | Aug 2016 | B2 |
9443163 | Springer | Sep 2016 | B2 |
9460354 | Fernandez | Oct 2016 | B2 |
9664789 | Rosenblum | May 2017 | B2 |
9734404 | Dollar | Aug 2017 | B2 |
9944317 | Lee | Apr 2018 | B2 |
20010012982 | Ogura | Aug 2001 | A1 |
20020042668 | Shirato | Apr 2002 | A1 |
20030210807 | Sato | Nov 2003 | A1 |
20030235327 | Srinivasa | Dec 2003 | A1 |
20040057599 | Okada | Mar 2004 | A1 |
20060171563 | Takashima | Aug 2006 | A1 |
20060177099 | Zhu | Aug 2006 | A1 |
20080306666 | Zeng et al. | Dec 2008 | A1 |
20100097455 | Zhang | Apr 2010 | A1 |
20100098295 | Zhang | Apr 2010 | A1 |
20100104199 | Zhang | Apr 2010 | A1 |
20100191391 | Zeng | Jul 2010 | A1 |
20100201508 | Green et al. | Aug 2010 | A1 |
20110133917 | Zeng | Jun 2011 | A1 |
20120133769 | Nagamine | May 2012 | A1 |
20130250109 | Yokota | Sep 2013 | A1 |
20130286205 | Okada | Oct 2013 | A1 |
20140015693 | Komoguchi et al. | Jan 2014 | A1 |
20140334675 | Chu | Nov 2014 | A1 |
20140341474 | Dollar | Nov 2014 | A1 |
20150234045 | Rosenblum | Aug 2015 | A1 |
20150332114 | Springer | Nov 2015 | A1 |
20160073062 | Ohsugi | Mar 2016 | A1 |
20160339959 | Lee | Nov 2016 | A1 |
Entry |
---|
Ikuro, Sato et al: “Crossing Obstacle Detection With a Vehicle-Mounted Camera,” Intelligent Vehicles Symposium (VI), IEEE, Jun. 5, 2011, pp. 60-65, XP031998914. |
Lefaix, G. et al: “Motion-Based Obstacle Detection and Tracking for Car Driving Assistance,” Pattern Recognition, 2002, Proceedings, 16th Internatioal Conference on Quebec City, Que., Canada Aug. 11-15, 2002, Los Alamitos, CA, US, IEEE Comput. Soc. US, vol. 4, Aug. 11, 2002, pp. 74-77, XP010613474. |
Yamaguchi, K et al: “Moving Obstacle Detection Using Monocular Vision,” 2006 IEEE Intelligent Vehicles Symposium : Meguro-ku, Japan, Jun. 13-15, 2006, IEEE, Piscataway, NJ, US, Jun. 13, 2006, pp. 288-293, XP010937028. |
Italian Search Report and Written Opinion for IT 201600094858 dated May 30, 2017 (9 pages). |
Aung, Thanda et al: “Video Based Lane Departure Warning System Using Hough Transform,” International Conference on Advances in Engineering and Technology (ICAET'2014) Mar. 29-30, 2014 Singapore, pp. 85-88. |
Balisavira, V., et al: “Real-Time Object Detection by Road Plane Segmentation Technique for ADAS,” 2012 Eighth International Conference on Signal Image Technology and Internet Based Systems, 2012 IEEE, pp. 161-167. |
Choudhary, Akash, et al: “Design and Implementation of Wireless Security System in Vehicle,” International Journal of Innovative Research in Computer and Communication Engineering, vol. 3, Issue 7, Jul. 2015, pp. 6623-6629. |
Cui, Jianzhu et al: “Vehicle Localisation Using a Single Camera,” 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, Jun. 21-24, 2010, pp. 871-876. |
Dagan, Erez, et al: “Forward Collision Warning with a Single Camera,” 2004 IEEE Intelligent Vehicles Symposium, University of Parma, Italy, Jun. 14-17, 2004, pp. 37-42. |
Deng, Yao, et al: “An Integrated Forward Collision Warning System Based on Monocular Vision,” Proceedings of the 2014 IEEE, International Conference on Robitics and Biomimetics, Dec. 5-10, 2014 Bali, Indonesia, pp. 1219-1223. |
Jeong, Seongkyun, et al: “Design Analysis of Precision Navigation System,” 2012 12th International Conference on Control, Automation and Systems, Oct. 17-21, 2012, in ICC, Jeju Island, Korea, pp. 2079-2082. |
Jheng, Yu-Jie, et al: “A Symmetry-Based Forward Vehicle Detection and Collision Warning System on Android Smartphone,” 2015 International Conference on Consumer Electronics—Taiwan (ICCE-TW), 2015 IEEE, pp. 212-213. |
Reimer, Bryan, et al: “An Evaluation of Driver Reactions to New Vehicle Parking Assist Technologies Developed to Reduce Driver Stress,” MIT Agelab, Nov. 4, 2010, pp. 1-26. |
Salari, E., et al: “Camera-Based Forward Collision and Lane Departure Warning Systems Using SVM,” 2013 IEEE, pp. 1278-1281. |
Shakouri, Payman, et al: “Adaptive Cruise Control System Using Balance-Based Adaptive Control Technique,” 2012 IEEE, pp. 510-515. |
Stocker, Alan A., et al: “Analog Integrated 2-D Optical Flow Sensor With Programmable Pixels,” 2004 IEEE, pp. III-9 through III-12. |
Tan, Robert: “A Safety Concept for Camera Based ADAS Based on MultiCore MCU,” 2014 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Dec. 16-17, 2014, Hyderabad, India, pp. 1-6. |
Thomanek, Jan, et al: “A New Pixel-Based Fusion Framework to Enhance Object Detection in Automotive Applications,” 17th International Conference on Information Fusion (FUSION), Jul. 7-10, 2014 (8 pages). |
Wu, Bing-Fei, et al: Research Article—“A Real-Time Embedded Blind Spot Safety Assistance System,” International Journal of Vehicular Technology, vol. 2012, Article ID 506235 (15 pages). |
Number | Date | Country | |
---|---|---|---|
20180082132 A1 | Mar 2018 | US |