The present invention relates to a panoramic image system, and in particular, to a parking assistant panoramic image system.
Currently, most parking assistant systems on the market pair an image system with an ultrasonic sensor. The ultrasonic sensor transmits ultrasonic waves around a vehicle and receives the reflected waves, so as to measure the distance and direction of a barrier and thereby detect a parking space.
However, because drivers differ in parking skill, their requirements for the detection range and sensitivity of the ultrasonic sensor also differ. If the detection range of the ultrasonic sensor is excessively large, all objects in the range tend to be judged as barriers, and a driver needs to frequently check the image system so as not to misjudge a parking action. If the detection range of the ultrasonic sensor is excessively small, however, the sensor can sense the existence of a barrier only when the vehicle has moved considerably close to the barrier. The precision with which a conventional parking assistant system detects a parking space is therefore quite limited. Moreover, a parking route or step-by-step guidance cannot be determined merely by an image system paired with an ultrasonic sensor. Consequently, when the conventional parking assistant system is used, a driver must still judge a parking space in consideration of vehicle dynamics and needs.
The present invention provides a parking assistant panoramic image system applicable to a vehicle. The parking assistant panoramic image system includes a lens group, a three-dimensional image synthesis module, a guidance computation module, and a display module. The lens group includes a plurality of lenses, where the lenses are separately disposed at different locations around the vehicle, so as to separately shoot a plurality of external images around the vehicle. The three-dimensional image synthesis module is electrically connected to the lens group, receives the external images, and synthesizes the external images into a three-dimensional panoramic projection image, where the three-dimensional panoramic projection image includes a virtual vehicle body and a parking position image. The guidance computation module is electrically connected to the three-dimensional image synthesis module and receives the three-dimensional panoramic projection image. The guidance computation module detects and analyzes relative locations of the virtual vehicle body and the parking position image, and computes a parking guidance track accordingly, where the parking guidance track is a simulative route track along which the virtual vehicle body can simulate driving into the parking position image. The display module is electrically connected to the three-dimensional image synthesis module and the guidance computation module, where the display module receives and displays at least a part of the three-dimensional panoramic projection image and the parking guidance track.
To sum up, according to the present invention, a three-dimensional image synthesis module establishes a three-dimensional panoramic projection image and judges a parking position location. Then, a guidance computation module detects and analyzes relative locations of a virtual vehicle body and a parking position image in the three-dimensional panoramic projection image to compute a parking guidance track, and a display module displays at least a part of the three-dimensional panoramic projection image and the parking guidance track. Therefore, a driver can observe a simulative dynamic image in which the virtual vehicle body simulates driving into the parking position image along the parking guidance track, and can also operate a steering wheel and/or a braking system to adjust a real-time driving route of the driven vehicle according to the parking guidance track.
The present invention will become more fully understood from the detailed description given herein below, which is for illustration only and thus is not limitative of the present invention, and wherein:
The lens group 110 includes a plurality of lenses 112, separately disposed at different locations on the vehicle C1 and shooting multiple external images around the vehicle C1 at different shooting angles. It should be noted that the viewing angles at which neighboring lenses 112 perform shooting partially overlap, and therefore the multiple captured external images display different viewing angles and can be pieced together into a panoramic image around the vehicle C1. In this embodiment, the lens group 110 includes four lenses 112; however, the quantity and shooting angles of these lenses 112 may be adjusted according to actual requirements, and the quantity of the lenses 112 is not limited in the present invention.
The foregoing four lenses 112 are separately referred to as a left lens 112L, a right lens 112R, a front lens 112F, and a back lens 112B. Specifically, the left lens 112L is disposed on the left of the vehicle C1, for example, on a left rearview mirror, so as to shoot an image of the surrounding environment on the left of the outside of the vehicle C1 and capture the image as a left image IL. The right lens 112R is disposed on the right of the vehicle C1, for example, on a right rearview mirror, so as to shoot an image of the surrounding environment on the right of the outside of the vehicle C1 and capture the image as a right image IR. The front lens 112F is disposed in front of the vehicle C1, for example, on a hood, so as to shoot an image of the surrounding environment in front of the outside of the vehicle C1 and capture the image as a front image IF. The back lens 112B is disposed at the back of the vehicle C1, for example, on a trunk cover, so as to shoot an image of the surrounding environment at the back of the outside of the vehicle C1 and capture the image as a back image IB.
In practice, these lenses 112 may be wide-angle lenses or fisheye lenses. Because these lenses 112 neighbor each other and their shooting viewing angles partially overlap, the captured front image IF, left image IL, back image IB, and right image IR display different viewing angles, and neighboring images at least partially overlap with each other so that they can be pieced together into a complete panoramic image around the vehicle C1.
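The piecing-together of neighboring views can be sketched as follows. This is an illustrative simplification, not the patented method: each image is reduced to a one-dimensional list of brightness values, and the columns shared by two neighboring views (the assumed `overlap` parameter) are cross-faded so the seam is smooth.

```python
# Illustrative sketch: piecing two neighboring camera views together by
# linearly blending their overlapping columns. Images are simplified to
# 1-D lists of brightness values; `overlap` is the number of columns seen
# by both lenses (an assumption for illustration).

def stitch(left_img, right_img, overlap):
    """Concatenate two views, cross-fading the shared columns."""
    if overlap == 0:
        return left_img + right_img
    body_left = left_img[:-overlap]        # columns seen only by the left lens
    body_right = right_img[overlap:]       # columns seen only by the right lens
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)        # weight ramps from left view to right
        l = left_img[len(left_img) - overlap + i]
        r = right_img[i]
        blended.append((1 - w) * l + w * r)
    return body_left + blended + body_right

panorama = stitch([10, 10, 20, 20], [20, 20, 30, 30], overlap=2)
print(len(panorama))  # 6 columns: 2 left-only + 2 blended + 2 right-only
```

A production system would of course blend two-dimensional, lens-corrected images, but the cross-fade over the overlapping region is the same idea.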
The three-dimensional image synthesis module 120 is electrically connected to these lenses 112 and receives the foregoing front image IF, left image IL, back image IB, and right image IR, so as to synthesize the images into a three-dimensional panoramic projection image IS. Specifically, the three-dimensional image synthesis module 120 may be a microcomputer, a processor, or a special-purpose chip that can be mounted on the vehicle C1. The three-dimensional panoramic projection image IS includes a virtual vehicle body AC1 simulating the vehicle C1, and the virtual vehicle body AC1 is basically located at the center of the three-dimensional panoramic projection image IS, so as to present a three-dimensional surrounding panoramic environment with the vehicle C1 as the center. The user can intuitively recognize scenes and objects around the vehicle C1 from the three-dimensional panoramic projection image IS. Basically, the specification and size of the virtual vehicle body AC1 are set according to specification data of the vehicle C1, such as the vehicle body length, the vehicle body height, or the wheel track. Therefore, the relative locations and size proportions of the virtual vehicle body AC1 within the three-dimensional panoramic projection image IS correspond to the relative locations and proportions of the vehicle C1 within the surrounding environment inside the shooting distance of these lenses 112, so that relative relationships between objects, or between the vehicle C1 and an object, such as size, height, or distance, can be judged. Moreover, the virtual vehicle body AC1 may be a perspective image, so that a driver can observe, through the virtual vehicle body AC1, the portion of the three-dimensional panoramic projection image IS masked by the virtual vehicle body AC1.
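The proportionality between the virtual vehicle body and its surroundings can be illustrated with a small sketch. All numbers here (pixels-per-meter scale, vehicle dimensions) are made-up examples, not values from the disclosure:

```python
# Hypothetical sketch of how the virtual vehicle body could be sized so that
# its proportions match the surrounding three-dimensional panoramic
# projection image. The scale factor and vehicle specs are assumptions.

def virtual_body_size(length_m, width_m, px_per_m):
    """Convert real vehicle dimensions to on-screen virtual-body dimensions."""
    return round(length_m * px_per_m), round(width_m * px_per_m)

# A 4.5 m x 1.8 m vehicle rendered at 40 pixels per meter:
print(virtual_body_size(4.5, 1.8, 40))  # (180, 72)
```

Because the same scale factor maps every object in the scene, on-screen distances between the virtual body and nearby objects remain proportional to the real-world distances, which is what allows size, height, and distance relationships to be judged from the image.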
Specifically, the three-dimensional image synthesis module 120 can detect relative locations of a first target image M1 and a second target image M2 in the three-dimensional panoramic projection image IS, and analyze the size of a space S1 approximately formed between the first target image M1 and the second target image M2. Because the three-dimensional panoramic projection image IS presents proportional relationships between objects, or between the vehicle C1 and an object, such as size, height, or distance, the three-dimensional image synthesis module 120 can determine, by calculation, whether the space S1 between the first target image M1 and the second target image M2 can accommodate the vehicle C1. If the space S1 can accommodate the vehicle C1, the three-dimensional image synthesis module 120 sets the space S1 as the parking position image AP1. For example, as shown in
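The accommodation check can be sketched as a one-dimensional interval test. This is an illustrative assumption, not the patented algorithm: the two targets are modeled as intervals along the curb, and a safety margin (an assumed parameter) is added around the vehicle length.

```python
# Illustrative sketch: deciding whether the gap between two detected target
# images can serve as a parking position. Targets are modeled as (start, end)
# intervals along the curb; `margin` is an assumed safety clearance.

def find_parking_space(first_target, second_target, vehicle_len, margin=0.5):
    """Return the usable gap (start, end) if the vehicle fits, else None."""
    gap_start = first_target[1]   # far edge of the first barrier
    gap_end = second_target[0]    # near edge of the second barrier
    if gap_end - gap_start >= vehicle_len + 2 * margin:
        return (gap_start, gap_end)
    return None

# Two parked cars leaving a 6 m gap; our vehicle is 4.5 m long:
print(find_parking_space((0.0, 5.0), (11.0, 16.0), 4.5))  # (5.0, 11.0)
print(find_parking_space((0.0, 5.0), (9.0, 14.0), 4.5))   # None (4 m gap)
```

A real implementation would reason over two-dimensional footprints extracted from the projection image, but the core decision, comparing gap size against vehicle size plus clearance, is the same.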
Further, the three-dimensional image synthesis module 120 can also analyze the barrier types of the first target image M1 and the second target image M2. In practice, the first target image M1 and the second target image M2 may correspond to physical barriers such as walls, trees, vehicles, or traffic sign lamp posts. Using
The guidance computation module 130 is electrically connected to the three-dimensional image synthesis module 120 and receives the three-dimensional panoramic projection image IS. Because the specification and size of the virtual vehicle body AC1 are set according to specification data of the vehicle C1, such as the vehicle body length, the vehicle body height, or the wheel track, the guidance computation module 130 can detect and analyze image coordinates corresponding to the relative locations of the virtual vehicle body AC1 and the parking position image AP1, and compute a parking guidance track T1 according to factors such as the specification and size data of the virtual vehicle body AC1 and the speed or turning angle of the vehicle C1. Specifically, the guidance computation module 130 may be a microcomputer, a processor, or a special-purpose chip mounted on the vehicle C1. It should be noted that the parking guidance track T1 is synthesized into the three-dimensional panoramic projection image IS, serves as an assistant guidance basis on which the driver drives the vehicle C1 into the parking position P1, and is a simulative route track along which the virtual vehicle body AC1 can simulate driving into the parking position image AP1. Further, because the virtual vehicle body AC1 is a perspective image, the driver can observe, through the virtual vehicle body AC1, the parking guidance track T1 and the portion of the three-dimensional panoramic projection image IS masked by the virtual vehicle body AC1, which is beneficial when using the parking guidance track T1 as a reference.
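One simple way such a track could be generated, offered purely as an illustrative geometric assumption rather than the patented computation, is to sample points along a circular arc whose radius reflects the vehicle's turning capability:

```python
import math

# Hypothetical sketch of a parking guidance track: the track is approximated
# as a circular arc whose radius comes from an assumed turning radius of the
# vehicle. Real guidance would combine several arc/straight segments.

def guidance_track(turn_radius, sweep_deg, steps=5):
    """Sample (x, y) points along a circular arc of the given radius."""
    points = []
    for i in range(steps + 1):
        a = math.radians(sweep_deg) * i / steps
        # Arc starting at the origin, curving around center (0, turn_radius).
        points.append((turn_radius * math.sin(a),
                       turn_radius * (1 - math.cos(a))))
    return points

track = guidance_track(turn_radius=5.0, sweep_deg=90, steps=4)
print(track[0])   # (0.0, 0.0) -- starts at the vehicle's current position
print(track[-1])  # approximately (5.0, 5.0) after a quarter turn
```

The sampled points would then be projected into the three-dimensional panoramic projection image IS and drawn as the visible guidance track.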
Using
Further, the guidance computation module 130 can also compute a simulative dynamic image in which the virtual vehicle body AC1 simulates driving into the parking position image AP1 along the parking guidance track T1, and the display module 140 can further display the simulative dynamic image. The simulative dynamic image is a real-time dynamic image; the virtual vehicle body AC1 represents the vehicle C1 driven by the driver and changes in real time according to the advancing direction and path of the vehicle C1. It should be noted that the simulative dynamic image may be used as a teaching image for a parking operation: the driver can, with reference to the simulative dynamic image, view a demonstration of the virtual vehicle body AC1 driving into the parking position image AP1 according to the parking guidance track T1, and then independently park the vehicle C1 in the parking position P1. Alternatively, depending on actual requirements, the driver can use the simulative dynamic image as a reference while operating a steering wheel and/or a braking system to perform manual parking, and can thereby adjust the real-time driving route of the driven vehicle C1 according to the parking guidance track T1.
As shown in
As shown in
Further, the guidance computation module 130 can also compute a simulative dynamic image in which the virtual vehicle body AC1 simulates driving into the parking position image AP1 along the modification parking track T1a, and the display module 140 can further display the simulative dynamic image. Based on this, the driver can, with reference to the simulative dynamic image, view a demonstration of the virtual vehicle body AC1 driving into the parking position image AP1 according to the modification parking track T1a, and then independently park the vehicle C1 in the parking position P1.
In the second embodiment, the parking assistant panoramic image system 100 further includes a starting unit 132 electrically connected to the guidance computation module 130. In practice, the starting unit 132 may use a gear signal or a vehicle speed signal as a trigger source. For example, when the driver engages the reverse gear (R gear), or when the driving speed is less than a preset critical value, the driver is considered to have triggered the starting unit 132 to output a starting signal. Alternatively, the starting unit 132 may be a physical switch module used by the driver to enable the guidance computation module 130, and the driver may trigger the starting unit 132 to output the starting signal directly by pressing a switch. When the driver triggers the starting unit 132 to output the starting signal, the guidance computation module 130 is driven to begin computing the parking guidance track T1.
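The trigger conditions described above can be sketched as a simple predicate. The function name, gear encoding, and threshold value are illustrative assumptions:

```python
# Illustrative sketch of the starting unit's trigger logic: output a starting
# signal when the reverse gear is engaged, the vehicle speed drops below a
# preset critical value, or the driver presses the physical switch.
# The threshold value and signal encodings are assumptions.

SPEED_THRESHOLD_KPH = 10.0  # assumed preset critical value

def should_start(gear, speed_kph, switch_pressed):
    """Return True when the guidance computation module should be enabled."""
    return gear == "R" or speed_kph < SPEED_THRESHOLD_KPH or switch_pressed

print(should_start("D", 30.0, False))  # False: driving forward at speed
print(should_start("R", 30.0, False))  # True: reverse gear engaged
print(should_start("D", 5.0, False))   # True: creeping below the threshold
```

Any one of the three sources is sufficient, matching the description that the gear signal, the speed signal, or the physical switch may each serve as the trigger.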
To sum up, according to the present invention, a three-dimensional image synthesis module processes and synthesizes external images, so as to establish a three-dimensional panoramic projection image and judge a parking position location. Then, a guidance computation module detects and analyzes relative locations of a virtual vehicle body and a parking position image in the three-dimensional panoramic projection image to compute a parking guidance track, and a display module displays at least a part of the three-dimensional panoramic projection image and the parking guidance track. Therefore, a driver can observe a simulative dynamic image in which the virtual vehicle body simulates driving into the parking position image along the parking guidance track, and can also operate a steering wheel and/or a braking system to adjust a real-time driving route of the driven vehicle according to the parking guidance track.
Although the present invention has been described in considerable detail with reference to certain preferred embodiments thereof, the disclosure is not intended to limit the scope of the invention. Persons having ordinary skill in the art may make various modifications and changes without departing from the scope and spirit of the invention. Therefore, the scope of the appended claims should not be limited to the description of the preferred embodiments above.