1. Technical Field
The present disclosure relates to a driving assistance device capable of monitoring the surrounding environment of a vehicle such as an automobile, and to a related method.
2. Description of Related Art
To assist the driver of a moving vehicle, such as an automobile, in observing the surrounding environment, a video system is often installed in the vehicle. The video system usually employs cameras mounted on the sides and the rear portion of the vehicle to capture images of the areas beside and behind the vehicle, and a liquid crystal display (LCD) screen inside the vehicle to display the captured images. However, the displayed image is two-dimensional, and may not clearly and accurately convey the surrounding environment of the vehicle.
3. Brief Description of the Drawings
The components of the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present driving assistance system and method.
4. Detailed Description
Referring to the drawings, the driving assistance device 1 is used in a vehicle, such as an automobile, together with three cameras 2, mounted respectively on the left side, the right side, and the rear portion of the vehicle, and at least one display device 3. The cameras 2 capture images of the surrounding environment of the vehicle.
Each captured image includes a distance information component indicating the distance(s) between the camera 2 that captured the image and any object(s) in that camera's field of view. In the embodiment, each camera 2 is a time-of-flight (TOF) camera.
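For context, a TOF camera estimates per-pixel distance from the round-trip travel time of light it emits; the disclosure does not spell this relationship out, but a minimal sketch of the underlying arithmetic might look like this:

```python
# Minimal sketch (not from the disclosure): a TOF camera's per-pixel
# distance follows from the round-trip travel time of light, d = c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to an object given the round-trip travel time of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: light returning after about 733 nanoseconds implies about 110 m.
print(tof_distance(733e-9))  # ~109.9
```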
Referring also to the drawings, the driving assistance device 1 includes a processor 10, a storage unit 20, and a driving assistance system 30. In the embodiment, the driving assistance system 30 includes an image obtaining module 31, an object detecting module 32, a creating module 33, and a control module 34. One or more programs of the above-mentioned function modules 31, 32, 33, 34 may be stored in the storage unit 20 and executed by the processor 10. In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language. The software instructions in the modules 31, 32, 33, 34 may be embedded in firmware, such as in an erasable programmable read-only memory (EPROM) device. The modules 31, 32, 33, 34 described herein may be implemented as software modules, hardware modules, or a combination of the two, and may be stored in any type of computer-readable medium or other storage device.
The image obtaining module 31 is used to obtain the images of the surrounding environment of the vehicle taken by the three cameras 2.
The object detecting module 32 is used to extract the distance information in relation to the distance(s) between each camera 2 and each object appearing in that camera's captured image. In the embodiment, the object detecting module 32 extracts the distance information using a robust real-time object detection method that is well known to those of ordinary skill in the art.
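The disclosure does not detail how a per-object distance is pulled from the depth data. One plausible sketch, assuming each detected object is represented as a bounding box over the TOF depth image (the detector itself, and the helper name below, are assumptions), is to take a robust statistic such as the median of the depth values inside the box:

```python
import numpy as np

def object_distance(depth_image: np.ndarray, bbox: tuple) -> float:
    """Median depth (in meters) inside a detected object's bounding box.

    depth_image: HxW array of per-pixel distances from a TOF camera 2.
    bbox: (x, y, width, height) as produced by some object detector;
          the detector is outside the scope of this sketch.
    """
    x, y, w, h = bbox
    region = depth_image[y:y + h, x:x + w]
    valid = region[region > 0]  # ignore pixels with no TOF return
    if valid.size == 0:
        return float("nan")
    return float(np.median(valid))
```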
The creating module 33 is used to create 3D models of the surrounding environment based on the captured images and the extracted distance information. In detail, the creating module 33 establishes a Cartesian coordinate system in an image captured by each camera 2, and determines the coordinates of each pixel in that image. The creating module 33 then randomly selects several pixels and creates several virtual spheres, with the positions of the selected pixels as the center points of the virtual spheres and the distance values of the selected pixels (obtained from the distance information) as the radii of the virtual spheres. Because the selected pixels are at different positions, the creating module 33 can further determine the intersection point of the virtual spheres; this intersection point is referred to as a reference point, an example of which is shown in the accompanying drawings.
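The sphere-intersection construction described above is standard trilateration. As an illustration (a sketch of the general technique, not code from the disclosure), the intersection point of three spheres with known centers and radii can be computed as follows; with more than three spheres, a least-squares fit over the candidates is the usual refinement:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersection point(s) of three spheres with centers p1, p2, p3 and
    radii r1, r2, r3, i.e. the 'reference point' construction described
    above. Returns the two candidate points; they coincide when the
    spheres meet at exactly one point."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # Build an orthonormal frame with p1 at the origin and p2 on the x-axis.
    d = np.linalg.norm(p2 - p1)
    ex = (p2 - p1) / d
    i = np.dot(ex, p3 - p1)
    ey = p3 - p1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    j = np.dot(ey, p3 - p1)
    # Solve for the intersection point's coordinates in that frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Example: three spheres whose surfaces all pass through (3, 4, 5).
print(trilaterate((0, 0, 0), (10, 0, 0), (0, 10, 0),
                  50**0.5, 90**0.5, 70**0.5))
```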
In the embodiment, the three 3D models are respectively named a left side 3D model, created from an image captured by the camera 2 mounted on the left side of the vehicle; a right side 3D model, created from an image captured by the camera 2 mounted on the right side of the vehicle; and a rear portion 3D model, created from an image captured by the camera 2 mounted on the rear portion of the vehicle.
In the embodiment, there is only one display device 3, and the control module 34 is used to control the display device 3 to display the three 3D models in a sub-frame mode. In an alternative embodiment, the control module 34 is used to control the display device 3 to display only one of the three 3D models at any one time, and to regularly and repeatedly switch the displaying of the 3D models in the following chronological order: the left side 3D model, the right side 3D model, and the rear portion 3D model. It should be understood that the chronological order of switching the displaying of the 3D models can be varied according to need. In another alternative embodiment, there can be three display devices 3, with each display device 3 corresponding to one camera 2. The control module 34 can control each display device 3 to constantly display one 3D model, which is created according to the image captured by the corresponding camera 2.
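The alternative single-display behavior amounts to a round-robin over the three models. A hypothetical sketch of that switching loop follows; the model names mirror the text, but the switching interval and the `show` callable are invented here, since the disclosure does not specify them:

```python
import itertools
import time

MODELS = ("left side 3D model", "right side 3D model", "rear portion 3D model")
SWITCH_INTERVAL_S = 2.0  # assumed; the disclosure gives no interval

def run_display_loop(show, cycles: int = 3) -> None:
    """Repeatedly show the models in the fixed chronological order.
    `show` stands in for whatever call drives the display device 3."""
    for name in itertools.islice(itertools.cycle(MODELS),
                                 cycles * len(MODELS)):
        show(name)
        time.sleep(SWITCH_INTERVAL_S)

run_display_loop(print, cycles=1)
```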
In the embodiment, the storage unit 20 stores a table recording the relationship between pixel value and distance range, with each distance range corresponding to one pixel value. The control module 34 is further used to determine the pixel value of each pixel in the image captured by the camera 2 according to the extracted distance information and the stored table, and to assign the determined pixel value to the corresponding pixel of the 3D model. The created 3D models can then be displayed in color. Thus, the driver can know the distance range between the vehicle and an object in the surrounding environment by noting the color of the object displayed on the display device 3. For example, when the distance between an object in the surrounding environment and the vehicle is about 110 meters (m), the control module 34 determines that the pixel value of the object is blue, and assigns the pixel value of blue to the corresponding pixels of the 3D model. When the distance between an object in the surrounding environment and the vehicle is about 60 m, the control module 34 determines that the pixel value of the object is orange, and assigns the pixel value of orange to the corresponding pixels of the 3D model.
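A sketch of such a stored table as a simple range lookup appears below. Only the two examples from the text (about 110 m maps to blue, about 60 m maps to orange) are grounded in the disclosure; the range boundaries and the remaining colors are assumptions chosen so those examples fall into the right rows:

```python
# Hypothetical distance-range -> pixel-value (RGB) table. The boundaries
# are assumptions; only the blue and orange examples come from the text.
DISTANCE_COLOR_TABLE = [
    ((0.0,  30.0),  (255, 0, 0)),    # red: very close (assumed)
    ((30.0, 80.0),  (255, 165, 0)),  # orange: covers the ~60 m example
    ((80.0, 150.0), (0, 0, 255)),    # blue: covers the ~110 m example
]

def pixel_value_for_distance(distance_m: float):
    """Look up the pixel value for a distance, in the manner the control
    module 34 is described as using the stored table."""
    for (low, high), color in DISTANCE_COLOR_TABLE:
        if low <= distance_m < high:
            return color
    return (128, 128, 128)  # gray: outside all listed ranges (assumed)

print(pixel_value_for_distance(110.0))  # (0, 0, 255), i.e. blue
print(pixel_value_for_distance(60.0))   # (255, 165, 0), i.e. orange
```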
Referring to the flowchart in the accompanying drawings, an exemplary driving assistance method includes the following steps.
In step S401, the image obtaining module 31 obtains the images of the surrounding environment of the vehicle taken by the three cameras 2.
In step S402, the object detecting module 32 extracts the distance information in relation to the distance(s) between each of the cameras 2 and each of the objects appearing in the captured image of each camera 2.
In step S403, the creating module 33 creates 3D models of the surrounding environment according to the captured images and the extracted distance information.
In step S404, the control module 34 controls the display device 3 to display the three 3D models in a sub-frame mode.
In an alternative embodiment, in step S404, the control module 34 controls the display device 3 to display only one of the three 3D models at any one time, and to regularly and repeatedly switch the displaying of the 3D models in the following chronological order: the left side 3D model, the right side 3D model, and the rear portion 3D model.
In another alternative embodiment, in step S404, there are three display devices 3. The control module 34 controls each of the three display devices 3 to constantly display one 3D model, which is created according to the image captured by the corresponding camera 2.
In the embodiment, the displaying of the 3D models is performed before the control module 34 assigns pixel values to the objects in the surrounding environment captured by the corresponding camera(s) 2.
In detail, for each 3D model, the control module 34 determines the pixel value of the pixels of each object captured by the corresponding camera 2 according to the extracted distance information and the stored table, and assigns the determined pixel value to the corresponding pixels of the 3D model.
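Putting steps S401 through S404 together, the method reads as a simple per-cycle pipeline. The following sketch only mirrors the control flow described above (including displaying before colorizing); every function and field name in it is invented for illustration and does not come from the disclosure:

```python
# Illustrative pipeline for steps S401 through S404; all names below are
# assumptions made for this sketch.

def extract_distance_info(image):        # S402: object detecting module 32
    return image["depth"]

def create_3d_model(image, distances):   # S403: creating module 33
    return {"pixels": image["pixels"], "depth": distances}

def colorize(model, distances, table):   # assign pixel values per the table
    model["colors"] = [table(d) for d in distances]

def driving_assistance_cycle(cameras, display, table):
    images = [capture() for capture in cameras]                   # S401
    distances = [extract_distance_info(img) for img in images]    # S402
    models = [create_3d_model(img, d)                             # S403
              for img, d in zip(images, distances)]
    for model, d in zip(models, distances):                       # S404
        display(model)              # the models are displayed first...
        colorize(model, d, table)   # ...and pixel values assigned after
```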
Although the present disclosure has been specifically described on the basis of the exemplary embodiments thereof, the disclosure is not to be construed as being limited thereto. Various changes or modifications may be made to the embodiments without departing from the scope and spirit of the disclosure.
Foreign application priority data:

Number | Date | Country | Kind
--- | --- | --- | ---
201110427020.3 | Dec 2011 | CN | national