The present invention relates to a system and a method for producing enhanced video images of an area surrounding a vehicle based on detection of objects near the vehicle.
Many vehicles are being equipped with video camera systems that provide drivers with live images of the vehicle's surroundings. Providing such images helps improve safety and facilitates difficult driving maneuvers such as parking or navigating heavy traffic. Vehicles are also being equipped with object sensors that warn the driver of the presence of objects in the vicinity of the vehicle, again to assist the driver with difficult driving maneuvers.
What is needed is a way to combine the information from the video camera systems and the object sensors so as to provide improved information to a driver and to further assist the driver in difficult driving situations.
Thus, in one embodiment, the invention provides a system for providing guidance information to a driver of a vehicle. The system includes an image capture device attached to the vehicle. The image capture device is configured to acquire an image of the vicinity of the vehicle. The system also includes an object sensor that is attached to the vehicle and is configured to detect an object near the vehicle; and a central processing unit configured to process the acquired image from the image capture device to produce an output image. Processing of the acquired image is based on information obtained from the object sensor. The system also includes an image display unit mounted in the vehicle that is configured to display the output image produced by the central processing unit.
In another embodiment, the invention provides a method of providing guidance information to a driver of a vehicle. The method includes obtaining an acquired image from an image capture device attached to the vehicle; detecting an object near the vehicle using an object sensor attached to the vehicle; and processing the acquired image using a central processing unit associated with the vehicle to produce an output image. Processing of the acquired image is based on information obtained from the object sensor. The method also includes displaying the output image produced by the central processing unit on an image display unit mounted in the vehicle.
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
Embodiments of the present invention include methods and systems to provide the driver of a vehicle with improved information while maneuvering the vehicle. In general, the methods and systems combine information from image capture devices and object sensors associated with the vehicle to produce a more informative image display for the driver. The methods and systems disclosed herein may be used in a number of vehicles, such as passenger cars, vans, trucks, SUVs, buses, etc.
A system 10 according to embodiments of the invention includes a vehicle 20 having one or more object sensors 30 and one or more image capture devices 40 associated therewith. The system 10 further includes a central processing unit (CPU) 50 and a display 60, discussed below.
In various embodiments, the object sensors 30 are attached to one or more of the sides, rear, or front of the vehicle 20.
The object sensors 30 can detect various regions near the vehicle 20. In some embodiments, the object sensors 30 collect information from a horizontal sensing region 70 behind the vehicle 20 which begins about ten centimeters from the ground and continues to about two hundred and fifty centimeters from the ground.
Although the present discussion focuses on object sensors 30, image capture devices 40, and objects near the rear of a vehicle that is operating in reverse, the various elements of the system 10 can also be attached to the front or sides, or both, of the vehicle 20 and can be used to assist the driver in situations other than operating in reverse. In certain embodiments in which the system 10 operates while the vehicle 20 is moving forward, the system 10 may show an image on the display 60 only if the vehicle 20 is moving slowly, for example at or below ten miles per hour. In general, the display 60 is disposed in a location that is readily visible to the driver of the vehicle 20, for example on the dashboard.
The object sensors 30 provide information regarding nearby objects, including the size of the objects and their location relative to the vehicle 20. When combined with other information, such as the location of the vehicle 20 and the vehicle's rate and direction of movement, the system 10 can determine whether the objects are moving and in what direction, and whether the objects are or will be in the projected path of the vehicle 20. Information regarding the location of the vehicle 20, as well as its rate and direction of movement, can be obtained from one or more of a global positioning system (GPS), accelerometers, and wheel speed sensors associated with the vehicle 20. As discussed further below, events that trigger the system 10 to produce an altered image output include the presence of a nearby object, the size of the object, how close the object is to the vehicle 20, whether the object is moving and the rate of such movement, whether the object is in or is predicted to enter the projected path of the vehicle 20, and whether the object falls within a predetermined region of the field of view of the image capture device 40. Still another event that can trigger the system 10 to produce an altered image is when the vehicle 20 moves from a confined space having objects close to either side of the vehicle 20 (e.g., as in an alleyway or between two parked vehicles) to an open space.
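By way of illustration only, the following Python sketch shows one way the triggering events listed above might be combined in software. The `DetectedObject` fields and the numeric thresholds are assumptions made for illustration and are not taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One detection reported by an object sensor 30 (illustrative fields)."""
    distance_m: float        # distance from the vehicle 20, in meters
    speed_m_s: float         # object speed over ground, in meters per second
    in_projected_path: bool  # object is in, or predicted to enter, the path
    in_camera_region: bool   # object falls in a predetermined camera region

def should_alter_image(obj: DetectedObject,
                       near_m: float = 1.5,
                       moving_m_s: float = 0.2) -> bool:
    """Return True when any of the triggering events described above occurs.
    The numeric thresholds are placeholders, not values from the disclosure."""
    return (obj.distance_m <= near_m
            or obj.speed_m_s >= moving_m_s
            or obj.in_projected_path
            or obj.in_camera_region)
```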
One or more image capture devices 40 (for example, digital cameras) may be used with the system 10.
In certain embodiments, image information from a plurality of image capture devices 40 is combined in order to provide additional information to the driver of the vehicle 20. The image capture devices 40 may be located at the rear, sides, or front of the vehicle 20. The image capture device 40 can use a number of detection technologies, such as a charge-coupled device (CCD) or similar detector, which can deliver a series of images to the CPU 50. In one particular embodiment, the image capture device 40 includes a 640×480 pixel CCD chip, although detectors with other pixel numbers and aspect ratios are also possible. In various embodiments, the image capture device 40 delivers at least one, five, ten, twenty, thirty, or more images per second to the CPU 50, although other rates are also possible.
The CPU 50 can include a processor, memory or other computer-readable media, and input/output mechanisms. The CPU 50 is configured to receive information from the object sensors 30 and the image capture device 40, to process the images captured by the image capture device 40 according to the information from the object sensors 30, and to transmit an output signal to the display 60. In various embodiments, the output from the CPU 50 may also or alternatively include sound (e.g., a warning buzzer, bell, or synthesized voice), tactile output (e.g., vibration of a part of the vehicle 20 such as the seat or steering wheel), light (e.g., a flashing light), or another form of communication with one or more occupants of the vehicle 20, including the driver. The CPU 50 may communicate with the image capture device 40, the object sensors 30, the display 60, and other elements of the system 10 using wired or wireless modes of communication. The CPU 50 may include program instructions on a computer-readable medium and may further include memory for storing information. In various embodiments, the system 10 includes one or more of the elements disclosed herein along with appropriate program instructions on the CPU 50 for carrying out embodiments of the invention. Thus, in one or more embodiments, the system 10 produces a modified image output based on data obtained from the object sensors 30 regarding objects in the vicinity of the vehicle 20.
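A minimal sketch of such a receive/process/transmit loop is shown below, using OpenCV for capture and display. The `read_object_sensors` function is a hypothetical stand-in, since this disclosure does not specify a particular sensor interface.

```python
import cv2

def read_object_sensors():
    """Hypothetical stand-in for the object sensors 30; a real system 10
    would read detections over a vehicle bus such as CAN."""
    return []  # no detections in this stub

def run_display_loop(camera_index: int = 0,
                     enhance=lambda frame, objects: frame):
    """Sketch of the CPU 50 loop: receive a frame and sensor data, alter the
    frame when a trigger event occurs, and transmit it to the display 60.
    `enhance` stands in for any of the enhancement modes described below."""
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        objects = read_object_sensors()
        if objects:                      # a trigger event, per the text above
            frame = enhance(frame, objects)
        cv2.imshow("display 60", frame)  # stand-in for the in-dash display
        if cv2.waitKey(1) == 27:         # Esc exits this demonstration loop
            break
    cap.release()
    cv2.destroyAllWindows()
```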
Top View
In one such embodiment, the system 10 creates a simulated top view from the image data obtained from the image capture device 40. The simulated top view is generated by re-mapping (e.g., using orthogonal projection) the pixel information from the images obtained from the horizontally-directed image capture device 40 to produce an image that appears to have been captured from a device located directly above the vehicle 20.
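One common way to implement such a re-mapping is a planar homography (inverse perspective mapping) computed from ground-plane point correspondences. The sketch below assumes four calibrated correspondences; the coordinates shown are illustrative placeholders, not calibration data from this disclosure.

```python
import cv2
import numpy as np

# Four ground-plane correspondences between the rear camera image and the
# desired top view. These coordinates are illustrative placeholders; real
# values come from calibrating the image capture device 40.
SRC = np.float32([[220, 470], [420, 470], [560, 250], [80, 250]])   # image px
DST = np.float32([[200, 600], [440, 600], [440, 100], [200, 100]])  # top-view px

H = cv2.getPerspectiveTransform(SRC, DST)

def simulated_top_view(frame: np.ndarray, size=(640, 640)) -> np.ndarray:
    """Re-map pixels so the scene appears as if captured from a device
    located directly above the vehicle 20 (inverse perspective mapping)."""
    return cv2.warpPerspective(frame, H, size)
```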
The event that triggers display of a hybrid view image can include the presence of an object in the projected path of the vehicle. Table I shows one possible relationship between the proximity of the object to the vehicle 20, as reported by the object sensors 30, and the relative percentages of the original image ("view-mode 1") and the simulated top-view image ("view-mode 2") that are combined to produce the output image shown on the display 60. Thus, as the distance between the object and the vehicle 20 decreases, the output image contains a greater percentage of the top-view image relative to the original image, drawing the driver's attention to the presence of the object and providing a better view of the relative positions of the vehicle 20 and the object. While Table I shows the changes of simulated view angle as occurring in a series of discrete steps, in various embodiments the changes occur continuously with changes in the distance between the object and the vehicle 20. In still other embodiments, the changes in view of the hybrid image can occur in discrete steps as shown in Table I, but with a finer or coarser step size.
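Because the specific values of Table I are not reproduced here, the sketch below substitutes assumed distance breakpoints and blend fractions to illustrate the described behavior: as the object gets closer, the output image contains a greater fraction of the simulated top view.

```python
import numpy as np

# Illustrative stand-in for Table I: the fraction of the simulated top view
# ("view-mode 2") blended into the output as the object gets closer. The
# distance breakpoints and fractions are assumptions, not the patent's values.
BLEND_STEPS = [(2.0, 0.00), (1.5, 0.25), (1.0, 0.50), (0.5, 0.75), (0.0, 1.00)]

def top_view_fraction(distance_m: float) -> float:
    """Map object distance to a view-mode 2 weight via discrete steps."""
    for threshold_m, fraction in BLEND_STEPS:  # thresholds in descending order
        if distance_m >= threshold_m:
            return fraction
    return 1.0

def hybrid_image(original: np.ndarray, top_view: np.ndarray,
                 distance_m: float) -> np.ndarray:
    """Weighted combination of the two view modes for the display 60.
    Both images must have the same shape."""
    w = top_view_fraction(distance_m)
    return ((1.0 - w) * original + w * top_view).astype(original.dtype)
```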
Rear Cross Path
In another embodiment, the system 10 alters the output image to increase the proportion of the lateral portions 46 of the image 42 from the image capture device 40 which are shown on the display 60, so that the modified portions 46′ comprise most or all of the modified output image 42′.
In other embodiments, only one side or the other of the acquired image is enhanced depending on factors such as whether an object is present and, if so, whether the object is moving and on which side of the vehicle 20 the object is currently located. Another situation that may trigger the system 10 to enhance the lateral portions of the displayed image is when the vehicle 20 moves from a confined space (e.g., a narrow alleyway or between two other vehicles) to an open space, insofar as an object such as another vehicle or a pedestrian may be in the open space.
In certain embodiments, a rear cross path field of view is defined, such that if an object is detected within this field of view on one or both sides of the vehicle 20, then one or both of the lateral portions of the image are enhanced as discussed above. For example, the rear cross path field of view can be defined as a semicircular region adjacent to the rear of the vehicle 20, which in one particular embodiment has a radius of a half meter and corresponds to the outermost fifteen degrees of the field of view of a rear-facing, wide-angle camera having a viewing angle of α=180°.
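The following sketch tests whether a detected object falls within such a rear cross path field of view. The vehicle-frame coordinate convention (origin at the center of the rear bumper, x to the right, y pointing rearward) is an assumption made for illustration.

```python
import math

def in_rear_cross_path(x_m: float, y_m: float,
                       radius_m: float = 0.5,
                       fov_deg: float = 180.0,
                       edge_deg: float = 15.0) -> bool:
    """Return True if an object at (x_m, y_m) lies in the semicircular region
    behind the vehicle 20 and within the outermost `edge_deg` degrees of the
    rear camera's field of view."""
    if y_m < 0:
        return False                    # object is not behind the vehicle
    if math.hypot(x_m, y_m) > radius_m:
        return False                    # outside the semicircular region
    # Angle from the camera axis (straight back); the FOV spans +/- fov/2.
    angle_deg = math.degrees(math.atan2(x_m, y_m))
    return abs(angle_deg) > (fov_deg / 2.0 - edge_deg)
```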
Object-Focused Magnification
In still another embodiment, the system 10 enhances a subregion 44 of an image 42 acquired by the image capture device 40 in order to draw the driver's attention to an object within the subregion 44 (e.g., a trailer hitch) that is close to the vehicle 20 (e.g., within thirty to fifty centimeters).
The subregion 44 to be enhanced can be defined in various ways, for example as one or more boxes, rectangles, circles, or other shapes in a particular region (e.g., the center) or regions of the field of view of the image capture device 40. Alternatively or in addition, the system 10 may automatically create a subregion to correspond to an area in the field of view in which an object is determined to be close to the vehicle 20.
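One simple way to enhance such a subregion is to crop it and paste an enlarged copy back into the output image, as in the sketch below; the pixel box and zoom factor are illustrative assumptions.

```python
import cv2
import numpy as np

def magnify_subregion(frame: np.ndarray, box: tuple,
                      zoom: float = 2.0) -> np.ndarray:
    """Enlarge the subregion 44 so the nearby object fills more of the
    output image. `box` is (x, y, w, h) in pixels; values are illustrative."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    out = frame.copy()
    # Scale the crop up and paste it back centered on the original box,
    # clipped to the frame borders.
    big = cv2.resize(crop, None, fx=zoom, fy=zoom,
                     interpolation=cv2.INTER_LINEAR)
    bh, bw = big.shape[:2]
    x0 = max(0, x + w // 2 - bw // 2)
    y0 = max(0, y + h // 2 - bh // 2)
    x1 = min(out.shape[1], x0 + bw)
    y1 = min(out.shape[0], y0 + bh)
    out[y0:y1, x0:x1] = big[:y1 - y0, :x1 - x0]
    return out
```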
The alteration of the displayed image from one mode to another, e.g., from a conventional view that corresponds to the acquired image to an enhanced view as described herein, may occur in a single frame or may transition gradually over a series of frames to help the driver understand the relationship between the views. The information from the sensors 30 and the image capture device 40 can be provided continuously to the CPU 50 for continuous updates of the acquired images and any related altered images being displayed.
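Such a gradual transition can be implemented by blending the outgoing and incoming views over a short series of frames, as in the sketch below; the frame count is an assumption (roughly half a second at thirty frames per second).

```python
import numpy as np

def transition_frames(view_a: np.ndarray, view_b: np.ndarray, n: int = 15):
    """Yield a gradual blend from view_a to view_b over n frames, as an
    alternative to switching display modes in a single frame. Both views
    must have the same shape."""
    for i in range(1, n + 1):
        w = i / n
        yield ((1.0 - w) * view_a + w * view_b).astype(view_a.dtype)
```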
Thus, the invention provides, among other things, a new and useful system and method for providing guidance information to a driver of a vehicle. Various features and advantages of the invention are set forth in the following claims.