The present invention relates to an image system and a method for positioning vehicles. More particularly, the present invention relates to a vehicle image system and a method for positioning vehicles using vehicle images.
Vehicles, such as cars, trucks, and other motor-driven vehicles, are usually provided with one or more image capturers that capture images or videos of the surrounding environment. For example, a rear-view image capturer can be mounted at the rear of an automobile and used to capture videos of the rear environment of the automobile. While the automobile is in a reverse driving mode, the captured videos can be displayed (e.g., at a center console display) to the driver or passengers. Such image systems can assist the driver in operating the vehicle, enhancing vehicle safety. For example, displayed video image data from the rear-view image capturer can help the driver to identify obstructions on the path that would otherwise be difficult to see (e.g., through the rear windshield, rear-view mirrors, or side mirrors of the vehicle).
Vehicles are sometimes provided with additional image capturers at various positions. For example, image capturers may be mounted on the front, sides, and rear of the vehicles for capturing images from various regions of the surrounding environment. The images from the additional image capturers can be combined to obtain an around view image. The Around View Monitor (AVM) technique is thus widely applied to vehicles based on the mature image capturers mounted thereon. A well-known application of AVM is the Blind Spot Information System (BLIS), usually displayed as a bird's-eye view on a screen. However, the area under the vehicle bottom always remains an unconquered blind spot in the bird's-eye view.
On the other hand, for applications in electric vehicles or hybrid electric vehicles, wireless charging has become a convenient and common technology. Wireless charging technology can charge vehicles without connecting a charging cable, which significantly reduces the inconvenience of charging. Before wireless charging can proceed, a power receiver of the vehicle and a power transmitter must be overlapped. The power transmitter is usually placed on the ground, and marks showing the location of the power transmitter are arranged around it. However, it is still difficult for the driver to move the vehicle so that the power receiver overlaps the power transmitter. Therefore, how to easily and accurately overlap the power receiver and the power transmitter is a subject of interest to those skilled in the art.
It is therefore desirable to provide a vehicle image system that makes charging vehicles easy. In particular, such a vehicle image system should be based on an improved AVM and provide Artificial Intelligence (AI) identification functions.
This paragraph extracts and compiles some features of the present invention; other features will be disclosed in the following paragraphs. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims.
According to an aspect of the present invention, a vehicle image system is disclosed. The vehicle image system is installed in a vehicle which has a power receiver and comprises: at least one image capturer, mounted to the vehicle, for capturing images from a surrounding environment of the vehicle to generate successive image data frames, wherein a view of the at least one image capturer for capturing images has a scene under the vehicle bottom or a portion of a scene of the surrounding environment blocked by the vehicle so that any one of the successive image data frames lacks the scene under the vehicle bottom or the portion of the scene of the surrounding environment; and a processing module, connected with the at least one image capturer, having a power receiver location data describing a location of the power receiver relative to that of the vehicle, for receiving the successive image data frames from the at least one image capturer, and generating an image data depicting the scene under the vehicle bottom and the portion of the scene of the surrounding environment blocked by the vehicle with the successive image data frames in real time after the vehicle moves.
Another aspect of the present invention is to provide a method for positioning vehicles using vehicle images, suitable for a vehicle installed with at least one image capturer and a power receiver. The method comprises: capturing images from a surrounding environment of the vehicle to generate successive image data frames by at least one image capturer, wherein a view of the at least one image capturer for capturing images has a scene under the vehicle bottom and a portion of a scene of the surrounding environment blocked by the vehicle so that any one of the successive image data frames lacks the scene under the vehicle bottom or the portion of the scene of the surrounding environment; receiving the successive image data frames from the at least one image capturer by a processing module having a power receiver location data describing a location of the power receiver relative to that of the vehicle; and generating an image data depicting the scene under the vehicle bottom and the portion of the scene of the surrounding environment blocked by the vehicle with the successive image data frames in real time after the vehicle moves by the processing module.
The vehicle image system provided by the present invention generates an image data depicting the scene under the vehicle bottom and the portion of the scene of the surrounding environment blocked by the vehicle, using the successive image data frames from the at least one image capturer as the vehicle operates. It can thus easily and accurately overlap the power receiver with the power transmitter.
The present invention will now be described more specifically with reference to the following embodiments.
Image portions 104 and 106 may include regions 102 that correspond to portions of the surrounding environment that are obstructed from camera view. In particular, the vehicle may include a frame or chassis that provides structural support for the various components and parts of the vehicle (e.g., support for the motor, wheels, seats, etc.). The cameras may be mounted directly or indirectly to the vehicle chassis, and the chassis itself may obstruct parts of the vehicle surroundings from the cameras. Regions 102 correspond to portions underneath the vehicle chassis that are obstructed from camera view, whereas regions 108 correspond to unobstructed surroundings. In the example of
Successive images 100 (e.g., images generated at successive times) may form a stream of images, sometimes referred to as a video stream or video data. The example of
Cameras that are mounted to a vehicle each have a different view of the surrounding environment. It may be desirable to transform the image data from each camera to a common perspective. For example, image data from multiple cameras may each be transformed to the front perspective view of image region 104 and/or the bird's-eye perspective view of image region 106.
As shown in
Image data captured by the camera in coordinate plane 202 may be transformed (e.g., projected) onto coordinate plane π according to the matrix formula xπ = H·x1. Matrix H can be calculated and determined via calibration processes for the camera. For example, the camera may be mounted to a desired location on a vehicle, and calibration images may be taken to produce images of a known environment. In this scenario, multiple pairs of corresponding points in planes 202 and π may be known (e.g., x1 and xπ may constitute a pair), and H can be calculated based on the known points.
As an example, point x1 may be defined as x1 = (xi, yi, wi) in the coordinate system of plane 202, whereas point xπ may be defined as xπ = (xi′, yi′, wi′) in the coordinate system of plane π. In this scenario, matrix H may be defined as shown in equation 1, and the relationship between x1 and xπ may be defined as shown in equation 2.
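Equations 1 and 2 themselves are not reproduced in the text above. For a planar homography they conventionally take the following form; this is a reconstruction of the standard formulation and may differ in notation or layout from the referenced equations:

```latex
% Equation 1 (assumed standard form): the 3x3 homography matrix
H = \begin{pmatrix}
      h_{11} & h_{12} & h_{13} \\
      h_{21} & h_{22} & h_{23} \\
      h_{31} & h_{32} & h_{33}
    \end{pmatrix}

% Equation 2 (assumed standard form): projecting a point of plane 202 onto plane pi
\begin{pmatrix} x_i' \\ y_i' \\ w_i' \end{pmatrix}
  = H \begin{pmatrix} x_i \\ y_i \\ w_i \end{pmatrix},
  \qquad \text{i.e.,}\quad x_\pi = H\,x_1
```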
Each camera that is mounted to the vehicle may be calibrated to calculate a respective matrix H that transforms coordinates at the camera's plane to a desired coordinate plane. For example, in a scenario in which cameras are mounted to the front, rear, and sides of a vehicle, each of the cameras may be calibrated to determine respective matrices that transform image data captured by that camera to projected image data on a shared, common image plane (e.g., a ground image plane from a bird's-eye perspective such as shown in image region 106 of
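As an illustrative sketch only (not the patented implementation), the per-camera calibration and projection described above can be carried out with standard homography estimation, e.g., using OpenCV; the correspondence points and output size below are hypothetical placeholders:

```python
import cv2
import numpy as np

# Hypothetical calibration correspondences: pixel points observed by one camera
# (its own image plane) and their known positions on the common ground plane.
camera_pts = np.array([[102, 540], [820, 560], [710, 300], [215, 310]], dtype=np.float32)
ground_pts = np.array([[0, 300], [400, 300], [400, 0], [0, 0]], dtype=np.float32)

# Estimate the 3x3 matrix H mapping camera-plane points onto the ground plane.
H, _ = cv2.findHomography(camera_pts, ground_pts)

def project_to_ground(frame, H, out_size=(400, 300)):
    """Warp one camera frame onto the shared bird's-eye (ground) plane."""
    return cv2.warpPerspective(frame, H, out_size)

# Each camera mounted on the vehicle would get its own matrix H from a similar
# calibration, so all frames can be projected onto the same top-view plane.
```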
Time-delayed image data may be identified based on vehicle data. The vehicle data may be provided by control and/or monitoring systems (e.g., over a communications path such as a controller area network bus).
The angular speed of the vehicle may be calculated based on the current vehicle speed V, wheelbase length L, and steering angle φ (e.g., as described in equation 3).
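Equation 3 is likewise not reproduced above. Under the commonly used kinematic bicycle model, the relation between these quantities would be as follows; this is an assumed form, not necessarily the exact equation referenced:

```latex
% Assumed form of equation 3 (kinematic bicycle model):
\omega = \frac{V \tan\varphi}{L}
```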
For each location, a corresponding future position may be calculated based on projected movement Δyi. Projected movement Δyi may be calculated based on that location's X-axis distance rxi and Y-axis distance Lxi from the center of the vehicle's turning radius and the vehicle angular speed (e.g., according to equation 4). For each location within camera-obstructed region 304, the projected movement can be used to determine whether the projected future location is within the currently viewable region of the vehicle's surroundings (e.g., region 302). If the projected location is located within the currently viewable region, then current image data for the projected location can be displayed to approximate the projected region of the future environment after the vehicle moves and the projected region of the environment becomes obstructed. Equation 4:
Δyi = √(Lxi² + rxi²) × ω
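A minimal sketch of the check described above, assuming each obstructed location is supplied together with its distances from the center of the turning radius and that the currently viewable region can be tested with a simple rectangle; the helper names and the steering model are illustrative assumptions, not taken from the specification:

```python
import math

def angular_speed(v, wheelbase, steering_angle_rad):
    # Assumed kinematic bicycle model for the vehicle's angular speed (cf. equation 3).
    return v * math.tan(steering_angle_rad) / wheelbase

def projected_movement(lx, rx, omega):
    # Equation 4: projected movement of a location at Y-axis distance lx and
    # X-axis distance rx from the center of the vehicle's turning radius.
    return math.sqrt(lx ** 2 + rx ** 2) * omega

def is_currently_viewable(x, y, viewable_region):
    # Hypothetical helper: viewable_region given as (x_min, y_min, x_max, y_max).
    x_min, y_min, x_max, y_max = viewable_region
    return x_min <= x <= x_max and y_min <= y <= y_max

def approximate_obstructed(locations, v, wheelbase, steering_angle_rad, viewable_region):
    """For each obstructed location (x, y, lx, rx), check whether its projected
    location falls inside the currently viewable region; if so, the current image
    data there can approximate the obstructed surroundings after the vehicle moves."""
    omega = angular_speed(v, wheelbase, steering_angle_rad)
    mapping = []
    for (x, y, lx, rx) in locations:
        dy = projected_movement(lx, rx, omega)
        if is_currently_viewable(x, y + dy, viewable_region):
            mapping.append(((x, y), (x, y + dy)))
    return mapping
```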
At initial time T-20, multiple cameras may capture and provide raw image data of the vehicle's surroundings. Raw image data frame 602 may be captured, for example, by a first camera mounted to the front of the vehicle, whereas additional raw image data frames may be captured by cameras mounted to the left side, right side, and rear of the vehicle (omitted from
The imaging system may process the raw image data frame from each camera to coordinate-transform the image data to a common perspective. In the example of
In some scenarios, the perspectives of cameras mounted to the vehicle may overlap (e.g., the views of front and side cameras may overlap at the border of region 606). If desired, the imaging system may combine overlapping image data from different cameras, which may help to improve the image quality at the overlapping regions.
As shown in
At subsequent time T-10, the vehicle may have moved relative to time T-20. The cameras may capture different images due to the vehicle's new location (e.g., raw image 602 at time T-10 may be different than raw image 602 at time T-20), and thus top-view image 604 reflects that the vehicle has moved since time T-20. Based on vehicle data such as vehicle speed, steering angle, and wheelbase length, the image processing system may determine that part of viewable area 606 at time T-20 is now obstructed by the vehicle chassis (e.g., due to movement of the vehicle between times T-20 and T-10). The image processing system may transfer the identified image data from the previously viewable area 606 to corresponding region 612 of image buffer 610. Displayed image 611 includes the transferred image data in region 612 as a time-delayed approximation of part of the vehicle's surroundings that is now obstructed from camera view.
At time T-10, portion 614 of the image buffer remains empty or filled with initialization data, because the vehicle has not moved sufficiently to allow approximation via portions of previously-viewable surroundings. At subsequent time T, the vehicle may have moved sufficiently such that substantially all of the obstructed surroundings can be approximated with time-delayed image data captured from previously-viewable surroundings.
In the example of
During step 702, the image processing system may initialize an image buffer with a suitable size for storing image data from vehicle cameras. For example, the system may determine the image buffer size based on a maximum vehicle speed that is desired or supported (e.g., a larger image buffer size for higher maximum vehicle speed, and a smaller size for a lower maximum vehicle speed).
During step 704, the image processing system may receive new image data. The image data may be received from one or more vehicle cameras, and may reflect the current vehicle environment.
During step 706, the image processing system may transform the image data from the camera's perspectives to a desired common perspective. For example, the coordinate transformation of
During step 708, the image processing system may receive vehicle data such as vehicle speed, steering angle, gear position, and other vehicle data that can be used in identifying movement of the vehicle and corresponding shifts in image data.
During subsequent step 710, the image processing system may update the image buffer based on the received vehicle data. For example, the image processing system may have allocated part of the image buffer such as region 608 of
During subsequent step 712, the image processing system may update the image buffer with the new image data received from the cameras during step 704 and transformed during step 706. The transformed image data may be stored in regions of the image buffer that represent viewable portions of the surrounding environment (e.g., image buffer portion 604 of
If desired, a transparent image of the obstruction may be overlaid with the image buffer during optional step 714. For example, as shown in
By combining currently captured image data during step 712 and previously captured (e.g., time-delayed) image data during step 710, the image processing system may produce and maintain a composite image in the image buffer that portrays the vehicle surroundings despite obstructions such as a vehicle chassis that block portions of the surrounding environment from view of the camera at any given time. The process may be repeated to create a video stream that displays the surrounding environment as if there were no obstructions to camera view.
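A minimal sketch of the buffer update at the heart of steps 710 and 712, assuming the per-camera frames have already been warped and stitched into one top view and that the vehicle's motion since the last frame has been reduced to a whole-row shift; both simplifications are made here for illustration and are not taken from the specification:

```python
import numpy as np

def update_composite(image_buffer, top_view, visible_mask, shift_rows):
    """One iteration of steps 710-712.

    image_buffer -- HxWx3 composite maintained across frames
    top_view     -- HxWx3 current frame already warped to the common top view
    visible_mask -- HxW boolean mask, True where the cameras can currently see
    shift_rows   -- estimated vehicle motion since the last frame, in buffer rows
    """
    # Step 710: approximate the newly obstructed area with time-delayed data by
    # shifting the stored composite opposite to the vehicle's motion.
    shifted = np.roll(image_buffer, shift_rows, axis=0)
    # Step 712: overwrite everything the cameras can currently see with fresh data.
    shifted[visible_mask] = top_view[visible_mask]
    return shifted
```

With steering taken into account, the shift would instead follow the per-location projected movement of equation 4, and step 714 would blend a semi-transparent vehicle outline over the time-delayed region before the composite is displayed in step 716.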
During subsequent step 716, the image processing system may retrieve the composite image data from the image buffer and display the composite image. If desired, the composite image may be displayed with a transparent overlay of the obstruction, which may help to inform users of the obstruction's existence and that the information displayed within the overlay of the obstruction is time-delayed.
The example of
Storage and processing circuitry 1020 may include processing circuitry such as one or more general purpose processors, specialized processors such as digital signal processors (DSPs), or other digital processing circuitry. The processing circuitry may receive and process the image data received from cameras 1040. For example, the processing circuitry may perform the steps of
In displaying an obstruction-compensated image of the vehicle surroundings, image data may be retrieved from the image buffers and combined, which may help to improve image quality by reducing blurriness. The number of buffers used may be determined based on vehicle speed (e.g., more buffers may be used for faster speeds, whereas fewer buffers may be used for slower speeds). In the example of
As the vehicle moves along a path 1312, the image buffers store successively captured images (e.g., combined and coordinate-transformed images from image sensors on the vehicle). At time t for current vehicle location 1314, the obstructed portions of the current vehicle surroundings may be reconstructed by combining portions of images captured at times t−5n, t−4n, t−3n, t−2n, and t−n. The image data for obstructed vehicle surroundings may be transferred from portions of the multiple image buffers to corresponding portions of display buffer 1300 during display operations. Image data from buffer (t−5n) may be transferred to display buffer portion 1302, image data from buffer (t−4n) may be transferred to display buffer portion 1304, etc. The resulting combined image reconstructs and approximates the currently obstructed vehicle surroundings using time-delayed information previously stored at successive times in multiple image buffers.
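A sketch of the display-time combination described above; it assumes every stored buffer shares the display buffer's shape and that the display-buffer portion each buffer fills (e.g., portions 1302, 1304, and so on) is supplied as a boolean mask, which is an illustrative simplification:

```python
import numpy as np

def build_display_buffer(display_shape, delayed_buffers, region_masks,
                         current_view, visible_mask):
    """Reconstruct obstructed surroundings from several time-delayed buffers.

    delayed_buffers -- stored top-view images, oldest first (t-5n, t-4n, ..., t-n)
    region_masks    -- boolean masks selecting the display-buffer portion each
                       delayed buffer is responsible for
    """
    display = np.zeros(display_shape, dtype=np.uint8)
    # Transfer time-delayed data into its corresponding display portion
    # (buffer (t-5n) -> portion 1302, buffer (t-4n) -> portion 1304, ...).
    for buf, mask in zip(delayed_buffers, region_masks):
        display[mask] = buf[mask]
    # Everything currently visible comes from the live, coordinate-transformed view.
    display[visible_mask] = current_view[visible_mask]
    return display
```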
Please refer to
In this embodiment, the image capturers are all equipped with 180-degree wide-angle lenses. Ideally, they can be equipped with fisheye lenses. However, due to neighboring objects, a portion of their view might be blocked. For example, the view of the first image capturer 801 is blocked by two headlights, and the view V801 has an effective range of less than 180 degrees. The view of the fourth image capturer 804 is blocked by the frames of the vehicle 800, and the view V804 lacks a portion, leaving an effective range of less than 180 degrees. The views of the remaining two image capturers 802 and 803 are not blocked by any portion of the vehicle 800, so that views V802 and V803 conform to the original design. Dotted areas are used to indicate where the blocked zones are. In addition, in the around view image, the place under the vehicle bottom cannot be seen by any image capturer because of the vehicle 800. Therefore, in summary, a view of each image capturer for capturing images has a scene under the vehicle bottom or a portion of a scene of the surrounding environment blocked by the vehicle 800, so that any one of the successive image data frames lacks the scene under the vehicle bottom or the portion of the scene of the surrounding environment.
The processing module 820 is connected with the four image capturers. It is part of the vehicle computer and stores a power receiver location data. The power receiver location data describes a location of the power receiver 810 relative to a location of the vehicle 800. In this embodiment, the power receiver 810 is installed near the chassis of the vehicle 800. The power receiver location data may, for example, include the distance and direction from a geometric center of the power receiver 810 to that of the vehicle 800, or coordinates of some anchor points on the power receiver 810 and the vehicle 800 based on a relative coordinate system. No matter what the format of the power receiver location data is, it can be used to locate the power receiver 810 if the location of the vehicle 800 is known. The processing module 820 is capable of receiving the successive image data frames from the image capturers. It can also generate an image data depicting the scene under the vehicle bottom and the portion of the scene of the surrounding environment blocked by the vehicle 800 with the successive image data frames in real time after the vehicle 800 moves. The principle for generating the image data is the same as that disclosed above and is not repeated here.
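Purely as a hypothetical illustration of the two formats mentioned above (neither the formats nor the field names are prescribed by the specification), the power receiver location data could be represented as a polar offset or as anchor-point coordinates in a vehicle-relative frame:

```python
from dataclasses import dataclass

@dataclass
class ReceiverOffset:
    """Polar form: distance (m) and bearing (degrees) from the vehicle's
    geometric center to the power receiver's geometric center."""
    distance_m: float
    bearing_deg: float

@dataclass
class ReceiverAnchors:
    """Coordinate form: anchor points of the receiver and the vehicle in a
    shared, vehicle-relative coordinate system (x, y in meters)."""
    receiver_points: list[tuple[float, float]]
    vehicle_points: list[tuple[float, float]]

# Hypothetical example: receiver mounted 1.2 m behind the vehicle center.
receiver_location = ReceiverOffset(distance_m=1.2, bearing_deg=180.0)
```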
In addition, the processing module 820 can further identify whether an appearance image of a portion of a power transmitter exposed to the ground or an indicator image in the environment exists in the successive image data frames. The power transmitter may be in the form of a charging plate placed on the ground or partially buried in the ground with a charging pile exposed. The processing module 820 knows the portion of the power transmitter which can be seen above the ground and uses it to find the corresponding image in the successive image data frames. If the power transmitter is fixed under the ground and charges the vehicle 800 over the air, there will be some marks on the ground or on adjacent fixtures showing the driver how to move the vehicle 800 to an aligned location to charge. These "marks" constitute the indicator image, which is likewise known to the processing module 820 so that it can be identified in the successive image data frames. Thus, the processing module 820 can determine a location of the appearance image or the indicator image relative to the vehicle 800 after the appearance image or the indicator image is identified. Once the relative location is determined, a power transmitter location data regarding the power transmitter can be labeled in the successive image data frames. The power transmitter location data may be a description of the location (in a relative coordinate system) labeled in metadata of the successive image data frames. In practice, the power transmitter location data may be the pixels which form the appearance image of the portion of the power transmitter.
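The specification does not mandate a particular identification algorithm. As one illustrative possibility, a known appearance image or indicator image could be located in a frame by simple template matching; the matching threshold below is an arbitrary placeholder:

```python
import cv2

def find_transmitter_mark(frame, template, threshold=0.8):
    """Search one image data frame for a known appearance/indicator image.
    Returns the top-left pixel location of the best match, or None."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val >= threshold:
        # These pixels can then be labeled as power transmitter location data.
        return max_loc
    return None
```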
A detailed illustration of the processing module 820 and interactions with other modules and devices on the vehicle 800 is given in
In order to enable temporary buffering and long-term storage, the memory unit 822 provides related memory functions. In general, the memory unit 822 may be equipped with a Random Access Memory (RAM) 8221 for temporary buffering and a flash memory 8222 for long-term storage. In some cases, a hard disk can be an alternative to the flash memory 8222. Any program or data is stored in the flash memory 8222 before it is called by the processing circuit 821. The RAM 8221 temporarily holds the code or data that the processing circuit 821 runs until they are released. An important role of the memory unit 822 in the present invention is storing the successive image data frames and the image data.
The learning unit 823 is connected to the processing circuit 821 and the memory unit 822. It can be operated to learn what the appearance image or the indicator image is. For different charging stations, different appearance images and/or indicator images may be used to guide vehicles. Unless the appearance images and/or indicator images were stored in the memory unit 822 when the vehicle 800 was assembled for sale, the vehicle 800 cannot recognize them. Thus, the learning unit 823 helps the vehicle 800 learn new appearance images or indicator images. There are two learning functions: self-learning and cloud-learning. For self-learning, the learning unit 823 learns from the image data and the image data frames to obtain the appearance image of the portion of the power transmitter exposed to the ground or the indicator image in the environment, and to obtain a location of the power transmitter when the power receiver 810 operates, or a location chosen from the image data frames or the image data by a driver of the vehicle 800. There are various learning algorithms and related open source codes for achieving the learning purposes; use of these algorithms, open source codes, or even newly developed codes is not limited by the present invention. Learning outcomes can be recorded by the learning unit 823 as a first package, and the first package can be stored in the memory unit 822. Cloud-learning consists of receiving a second package from an external source (e.g., a cloud server) and fetching an appearance image or an indicator image from the second package. The second package may be received wiredly (e.g., by an RJ45 cable linked to an Ethernet interface) or wirelessly (e.g., via Bluetooth or Wi-Fi to an access point of a network). The second package can even be stored and transferred to the learning unit 823 by a physical device, such as a USB storage device or a hard disk. The data structures of the first package and the second package are the same. However, for the second package, the self-learning process was done by the cloud server rather than by the vehicle 800 itself, and the second package was created thereby. The self-learning process might also be done in another vehicle, with the second package uploaded to the cloud server after it was created. In this way, resources of the learning unit 823 and the processing circuit 821 can be saved. Similarly, the first package can also be uploaded to the cloud server to share what the learning unit 823 has learned with other vehicles. It should be emphasized that any learning unit installed in a vehicle using the vehicle image system may apply only one of the learning functions; both learning functions can also be designed into one learning unit 823. In this embodiment, the learning unit 823 is a portion of the hardware in the processing module 820. In other embodiments, the learning unit 823 may not be in the form of hardware but software operated in the processing module 820.
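A hypothetical sketch of the shared package structure and the two learning paths described above; the field names and the dictionary-based stand-in for the memory unit are illustrative assumptions only:

```python
from dataclasses import dataclass, field

@dataclass
class LearningPackage:
    """Common structure shared by the first (self-learned) and second
    (externally received) packages; fields are illustrative."""
    appearance_images: list = field(default_factory=list)  # exposed transmitter parts
    indicator_images: list = field(default_factory=list)   # ground/fixture marks
    source: str = "self"                                    # "self" or "cloud"

def store_package(memory, package):
    # Both learning functions end the same way: the package is kept in the
    # memory unit so the object detecting unit can use its templates later.
    memory.setdefault("packages", []).append(package)

memory_unit = {}
# Self-learning: the learning unit creates the first package locally.
store_package(memory_unit, LearningPackage(source="self"))
# Cloud-learning: a second package with the same structure arrives externally
# (Ethernet, Bluetooth, Wi-Fi, or a physical USB device) and is stored likewise.
store_package(memory_unit, LearningPackage(source="cloud"))
```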
The object detecting unit 824 is connected to the processing circuit 821 and the learning unit 823. It can determine a location of the appearance image or the indicator image in the successive image data frames according to the appearance image or the indicator image, whether default or obtained from the first package or the second package. The location, for example, may be 5.2 m at 271 degrees from the moving direction. It can be provided to the path generating unit 825 for further calculation.
The path generating unit 825 is connected to the object detecting unit 824. The path generating unit 825 also connects to a number of distance measuring devices 840 mounted on the vehicle 800 and a vehicle control module 850 via the CAN bus 860. The distance measuring devices 840, e.g., ultrasonic sensors, radars, or LiDAR, are fixed around the vehicle 800 to detect objects nearby. Location data of the objects are used to determine where the detected objects are and are also sent to the path generating unit 825 for further use. The vehicle control module 850 is the electrical hardware which controls motion of the vehicle 800. The vehicle control module 850 may include a steering controller 851 which controls directions of the wheels according to a steering wheel; an accelerator controller 852 which controls operation of a motor based on the accelerator; a brake controller 853 which slows down the vehicle 800 when a brake is pressed; and a gear controller 854 programmed to control the gear being used. The vehicle control module 850 can be operated by the driver. In an auto-mode, the vehicle control module 850 can work following specific instructions without human control. Based on the determined location of the appearance image or the indicator image in the successive image data frames from the object detecting unit 824 and the location data from the distance measuring devices 840, the path generating unit 825 can generate a moving path to lead the vehicle 800 so that the power receiver 810 and a power transmitter can be overlapped, by using the power receiver location data and the location of the appearance image or the indicator image. In order to have a better understanding of how the path generating unit 825 works, please refer to
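As a toy illustration of how the power receiver location data and the identified transmitter location could be combined into a target displacement for the moving path (the real path generating unit 825 must additionally respect obstacles reported by the distance measuring devices 840 and the vehicle's steering constraints), a 2D computation might look as follows; the coordinate convention and the example numbers are assumptions:

```python
import math

def target_offset(transmitter_range_m, transmitter_bearing_deg,
                  receiver_offset_x_m, receiver_offset_y_m):
    """Displacement of the vehicle center needed so that the power receiver
    (at a fixed offset in the vehicle frame) lands on the power transmitter.

    transmitter_range_m / transmitter_bearing_deg -- location of the identified
        appearance/indicator image relative to the vehicle (e.g., 5.2 m, 271 deg).
    receiver_offset_* -- power receiver location data in the vehicle frame
        (x forward, y to the left).
    """
    bearing = math.radians(transmitter_bearing_deg)
    tx = transmitter_range_m * math.cos(bearing)  # transmitter x in vehicle frame
    ty = transmitter_range_m * math.sin(bearing)  # transmitter y in vehicle frame
    return tx - receiver_offset_x_m, ty - receiver_offset_y_m

# Hypothetical values: transmitter 5.2 m away at 271 degrees from the moving
# direction, receiver mounted 1.2 m behind the vehicle center.
dx, dy = target_offset(5.2, 271.0, -1.2, 0.0)
```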
According to the present invention, the processing module 820 can further calculate a vehicle aerial view data regarding a location of a vertical projection of the vehicle 800 in the image data using a location of the four (at least one) image capturers relative to the vehicle 800. The vehicle aerial view data is basically a top view of the vehicle 800, and its size and orientation can be determined as long as the absolute location of any one of the image capturers is known (the orientation can be obtained from the installation angle between the image capturer and a central axis of the vehicle 800). The vehicle aerial view data is used to show the image of the vehicle 800 on the display module 830. Please refer to
According to the present invention, the processing module 820 can further receive a vehicle steering data regarding a steering angle of at least one wheel of the vehicle 800 (e.g. from the steering controller 851 of the vehicle control module 850 or other device monitoring the steering angle). Meanwhile, the processing module 820 calculates a wheel location data of the at least one wheel in the image data by combining the vehicle aerial view data and the steering angle. The wheel location data is used to present the status (location and orientation) of the wheel(s) on the display module 830. Please see
The display module 830 is connected to the processing module 820. The display module 830 can show any information the processing module 820 sends to it. In practice, the display module 830 can be an LCD, an OLED, a PLED, or a Micro LED monitor. Preferably, it comes with a touch-control function for interaction. The display module 830 can show at least one of the power receiver location data, the image data, the successive image data frames, the vehicle aerial view data, the wheel location data, a virtual image of the power receiver, and the appearance image or the indicator image. These data can be shown as text and values, as graphics, or as a mix in which some items are text with values while others are graphics.
In the following embodiments, methods for positioning vehicles using vehicle images are disclosed. Some methods may support the operation of the vehicle image system and will be illustrated along with the operating procedure of the vehicle image system under a particular mode.
Please refer to
Steps S01 to S03 initiate the vehicle image system to see the environment of the vehicle. Then, identify whether an appearance image of a portion of a power transmitter exposed to the ground or an indicator image in the environment exists in the successive image data frames by the processing module (S04). In this case, the appearance image or the indicator image has been preloaded to the processing module. In actual operation, since the processing module of the vehicle has known the appearance image or the indicator image since the time it was assembled for sale, when step S04 is applied, the processing module can run automatically without help from an external cloud server or an extra learning step to obtain the appearance image or the indicator image. If the result of step S04 is yes, meaning the appearance image or the indicator image is found in the successive image data frames, determine a location of the appearance image or the indicator image relative to the vehicle after the appearance image or the indicator image is identified by the processing module (S05). If the result of step S04 is no, meaning the appearance image or the indicator image is not found in the successive image data frames and only the scene of the environment is currently available, simply repeat step S04 until the appearance image or the indicator image is found. Step S04 is the basic function of the processing module. After step S05 is finished, the driver can choose to use the auto-mode to move the vehicle so that the power receiver in the vehicle can overlap the power transmitter. The driver can also choose to drive the vehicle by himself in a driver-mode. In the auto-mode, the method proceeds with the following step: generating a moving path by using the power receiver location data and the location of the appearance image or the indicator image by the processing module (S06). Finally, the vehicle is successfully moved according to the moving path in a step of leading the vehicle, by the processing module, so that the power receiver and the power transmitter are overlapped (S07).
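Tying the steps together, the auto-mode branch (S04 to S07) could be sketched as the following loop, with every dependency injected as a callable; none of the helper names come from the specification:

```python
def auto_mode(get_frame, detect, pixel_to_vehicle_frame,
              receiver_offset, plan_path, drive_along):
    """Illustrative auto-mode loop for steps S04-S07."""
    # S04: repeat until the appearance/indicator image is found in a frame.
    while True:
        frame = get_frame()
        match = detect(frame)  # e.g., template matching as sketched earlier
        if match is not None:
            break
    # S05: determine the transmitter location relative to the vehicle.
    transmitter_location = pixel_to_vehicle_frame(match)
    # S06: generate a moving path from the power receiver location data and the
    #      identified transmitter location.
    path = plan_path(receiver_offset, transmitter_location)
    # S07: lead the vehicle along the path until receiver and transmitter overlap.
    drive_along(path)
```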
In the case that there is no appearance image or indicator image preloaded to the processing module when the vehicle was assembled for sale, or a new appearance image or indicator image is required for charging the vehicle in another charging system, further steps are required to perform self-learning or to receive the result from the cloud server. Please refer to
On the other hand, if the appearance image or the indicator image is not self-learned but comes from the cloud server, a modified method is required. Please see
To apply the driver-mode, a display module is necessary and some preparatory information needs to be available. Please see
There are some key points that should be emphasized. First, steps S32 and S33 may not be necessary in other embodiments; in such embodiments, the corresponding wheel location data is not an option in step S34. Furthermore, steps S31 to S34, or steps S31 and S34, can be applied between steps S03 and S07 in the flowcharts in
If the driver would like to drive by himself to charge the vehicle (driver mode), the method of the present invention can be modified to meet the goal. There are three scenarios of the driver mode: <case 1> the vehicle has no information of the power transmitter (the appearance image or the indicator image) and the driver has to drive by himself; <case 2> the vehicle identifies the power transmitter, but the driver wants to drive by himself to charge the vehicle; and <case 3> the driver drives by himself to charge the vehicle with only the help of the display module. Below are the descriptions for these cases.
Please see
Please see
Please see
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.