The present disclosure relates to a system and method for correcting vehicle images to remove obstructions.
Vehicles today generally include at least a rear-view camera and may even include a series of cameras that provide a surround view of the vehicle exterior. Images from these cameras improve a driver’s field of view of the area surrounding the vehicle. However, the cameras can become obstructed by weather conditions, such as snow or rain, or by dirt from simply driving the vehicle. The obstruction must be cleared from the camera in order to provide a clear view of the surroundings, which requires the driver to exit the vehicle and manually remove the debris or operate a vehicle-integrated camera washer.
In one exemplary embodiment, a method of processing a vehicle image includes obtaining a first image from at least one vehicle exterior camera when a vehicle is in a first position. An obstructed area is identified in the first image. At least one previously captured image is obtained from the at least one vehicle exterior camera when the vehicle is in a second position different from the first position. An unobstructed area of the at least one previously captured image that corresponds to at least a portion of the obstructed area of the first image is identified. The unobstructed area is stitched into at least a portion of the obstructed area to create a corrected image that corresponds to the first image with at least a portion of the obstructed area removed. The corrected image is displayed on a display in the vehicle.
In another embodiment according to any of the previous embodiments, the obstructed area is identified by comparing the first image with the at least one previously captured image.
In another embodiment according to any of the previous embodiments, the first image is compared to the at least one previously captured image by identifying unchanged regions between the first image and the at least one previously captured image.
In another embodiment according to any of the previous embodiments, the first image is compared to the at least one previously captured image by monitoring at least one vehicle dynamic.
In another embodiment according to any of the previous embodiments, the at least one vehicle dynamic is monitored by monitoring changes in the steering angle during a period of time between when the first image was obtained and when the at least one previously captured image was obtained.
In another embodiment according to any of the previous embodiments, the at least one vehicle dynamic is monitored by monitoring changes in vehicle velocity during a time period between when the first image was obtained and when the at least one previously captured image was obtained.
In another embodiment according to any of the previous embodiments, the at least one previously captured image includes a plurality of previously captured images.
In another embodiment according to any of the previous embodiments, the plurality of previously captured images are successive images.
In another embodiment according to any of the previous embodiments, unobstructed areas are identified in each of the plurality of previously captured images that correspond to the obstructed area in the first image.
In another embodiment according to any of the previous embodiments, the unobstructed areas from the plurality of previously captured images are stitched into at least a portion of the obstructed area to create the corrected image.
In another embodiment according to any of the previous embodiments, the obstructed area is formed by an obstruction fixed relative to a lens of the at least one vehicle exterior camera.
In another embodiment according to any of the previous embodiments, the obstruction includes at least one of dirt or water.
In another embodiment according to any of the previous embodiments, the at least one previously captured image at least partially overlaps with the first image.
In another exemplary embodiment, a system for generating a rear-view image from a vehicle includes at least one vehicle exterior camera. A hardware processor is in communication with the at least one vehicle exterior camera. Hardware memory is in communication with the hardware processor. The hardware memory stores instructions that when executed on the hardware processor cause the hardware processor to perform operations. A first image from the at least one vehicle exterior camera is obtained when the vehicle is in a first position. An obstructed area in the first image is identified. At least one previously captured image is obtained from the at least one vehicle exterior camera when the vehicle is in a second position different from the first position. An unobstructed area of the at least one previously captured image that corresponds to the obstructed area of the first image is identified. The unobstructed area is stitched into at least a portion of the obstructed area to create a corrected image that corresponds to the first image with at least a portion of the obstructed area removed.
In another embodiment according to any of the previous embodiments, the obstructed area is identified by comparing the first image with the at least one previously captured image.
In another embodiment according to any of the previous embodiments, the first image is compared to the at least one previously captured image by identifying unchanged regions between the first image and the at least one previously captured image.
In another embodiment according to any of the previous embodiments, the first image is compared to the at least one previously captured image by monitoring at least one vehicle dynamic.
In another embodiment according to any of the previous embodiments, the obstructed area is formed by an obstruction fixed relative to a lens of the at least one vehicle exterior camera.
In another embodiment according to any of the previous embodiments, the at least one previously captured image at least partially overlaps with the first image.
In another embodiment according to any of the previous embodiments, the at least one previously captured image includes a plurality of previously captured images. Unobstructed areas are identified in each of the plurality of previously captured images that correspond to the obstructed area in the first image. The unobstructed areas from the plurality of previously captured images are stitched into at least a portion of the obstructed area to create the corrected image.
The various features and advantages of the present disclosure will become apparent to those skilled in the art from the following detailed description. The drawings that accompany the detailed description can be briefly described as follows.
The vehicle 20 includes multiple sensors, such as cameras 30 located on the front and rear portions 22 and 24 as well as a mid-portion of the vehicle 20. In addition to the cameras 30, the vehicle 20 can include object detecting sensors 32, such as at least one of a radar sensor, an ultrasonic sensor, or a lidar sensor, on the front and rear portions 22 and 24.
As shown in the drawings, the image processing system 40 includes a controller 42 having a hardware processor and hardware memory in communication with the hardware processor. The hardware memory stores instructions that, when executed on the hardware processor, cause the hardware processor to perform the operations described in the method 100 of processing a vehicle image.
The method 100 includes obtaining a first image from one of the cameras 30 on the vehicle 20. (Block 110). The first image is obtained when the vehicle is located in a first position. Once the system 40 has obtained the first image, the system 40 identifies whether there is an obstructed area 36 in the first image. (Block 120). The obstructed area 36 identified by the system 40 includes objects that are fixed adjacent a lens of the rear-view camera 30, such as water or dirt, as opposed to a moveable object behind the vehicle 20, such as a trailer. Identifying the obstructed area in the first image includes identifying unchanged regions between the first image and previously captured images. The unchanged regions correspond to the obstructed area 36 because they do not change even when the vehicle 20 has changed position such that the cameras 30 would have a different field of view.
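By way of a non-limiting illustration only, the unchanged-region comparison of Block 120 can be sketched as follows. The sketch assumes grayscale frames supplied as numpy arrays; the function name and the variance threshold are illustrative assumptions rather than part of the disclosed implementation.

```python
import numpy as np

def detect_obstruction_mask(frames, var_thresh=25.0):
    """Return a boolean mask of pixels that remain unchanged across frames.

    While the vehicle 20 moves, scene pixels change, but pixels covered by
    dirt or water fixed relative to the lens do not, so low temporal
    variance marks the obstructed area 36.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])  # (N, H, W)
    variance = stack.var(axis=0)  # per-pixel variance over the frame history
    return variance < var_thresh  # True where the image barely changes
```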
The system 40 then obtains at least one previously captured image from the same camera 30. (Block 130). Because the at least one previously captured image comes from the same camera 30, a perspective of the at least one previously captured image is similar to a perspective of the first image. In particular, the at least one previously captured image is obtained when the vehicle 20 is in a second position different from the first position when the first image was obtained and prior to obtaining the first image. However, the first image and the previously captured image at least partially overlap the same scene from the vehicle 20. This allows the system 40 to identify an unobstructed area in the previously captured image that corresponds to at least a portion of the obstructed area 36 in the first image. (Block 140).
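A non-limiting sketch of Block 140 follows. It assumes a homography H that maps the previously captured image into the first image’s coordinates has been estimated elsewhere, and that both images are the same size; the use of OpenCV and the function names are illustrative assumptions.

```python
import cv2
import numpy as np

def recoverable_region(prev_frame, obstruction_mask, H):
    """Warp the previously captured image into the first image's view and
    return the pixels that are valid there but obstructed in the first image."""
    h, w = obstruction_mask.shape
    warped = cv2.warpPerspective(prev_frame, H, (w, h))
    # Pixels actually covered by the warped previous frame.
    covered = cv2.warpPerspective(np.ones((h, w), np.uint8), H, (w, h)) > 0
    usable = covered & obstruction_mask  # overlap with the obstructed area 36
    return warped, usable
```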
The system 40 can monitor vehicle dynamics to aid in finding the unobstructed areas from the previously captured image or successive previously captured images that correspond to the obstructed area 36 in the first image. The system 40 can monitor changes in vehicle velocity during a time period between when the first image was obtained and when the earlier successive images were captured. The system 40 can also monitor changes in steering angle during a period of time between when the first image was obtained and when each of the previously captured images was obtained. The system 40 can use the information regarding velocity and steering angle to predict which regions of the previously captured images correspond to the obstructed area 36 in the first image.
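By way of a non-limiting example, the velocity and steering-angle information can seed the image search as in the following sketch, which assumes a simple bicycle model of the vehicle; the wheelbase, pixels-per-meter scale, and signal names are illustrative assumptions rather than values from this disclosure.

```python
import math

def predict_image_shift(v_mps, steer_rad, dt_s, wheelbase_m=2.8, px_per_m=40.0):
    """Estimate roughly how far the ground plane appears to shift between a
    previously captured image and the first image."""
    yaw_rate = v_mps * math.tan(steer_rad) / wheelbase_m  # bicycle model
    dx_m = v_mps * dt_s        # distance traveled between the two images
    dtheta = yaw_rate * dt_s   # heading change between the two images
    forward_px = dx_m * px_per_m                      # longitudinal shift
    lateral_px = math.sin(dtheta) * dx_m * px_per_m   # lateral shift
    return forward_px, lateral_px
```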
The system 40 can then stitch the unobstructed area from at least one of the previously captured images into the obstructed area in the first image to create a corrected image that corresponds to the first image with at least a portion of the obstructed area 36 removed. (Block 150). The system 40 can then display the corrected image on the display 28 within the passenger cabin 26. (Block 160).
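A non-limiting sketch of the stitching of Block 150 is shown below, assuming the previously captured image has already been warped into the first image’s coordinates and a boolean mask marks the pixels to replace; the names are illustrative.

```python
import numpy as np

def stitch_corrected_image(first_image, warped_prev, usable):
    """Fill obstructed pixels of the first image from the warped previously
    captured image to produce the corrected image."""
    corrected = first_image.copy()
    corrected[usable] = warped_prev[usable]  # replace only obstructed pixels
    return corrected
```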
If the obstructed area cannot be entirely removed from the first image, or can only be removed up to a threshold level, such as 90% of the entire area of the first image, the system 40 can obtain additional previously captured images from the camera 30 stored in the memory. For example, the system 40 could obtain a third, fourth, or fifth previously captured image, and so on. The system 40 can then identify whether the additional previously captured images include an unobstructed area that corresponds to a portion of the remaining obstructed area 36 in the first image. The system 40 can then use the previously captured images to reduce the remaining obstructed area 36 in the first image until the portion that is obstructed in the corrected image is less than the predetermined threshold or until the previously captured images no longer include a view that corresponds to the obstructed area 36 in the first image.
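By way of a non-limiting illustration, iterating over stored images until the residual obstruction falls below the predetermined threshold could proceed as follows. The loop reuses the illustrative helpers sketched above, and the interpretation of the 90% figure as the fraction of the obstructed area to remove is an assumption for illustration.

```python
def correct_with_history(first_image, obstruction_mask, history, removal=0.90):
    """Fill the obstructed area 36 from successive previously captured images
    until e.g. 90% of it has been removed or the history is exhausted."""
    corrected = first_image.copy()
    remaining = obstruction_mask.copy()
    allowed = (1.0 - removal) * obstruction_mask.sum()  # residual pixel budget
    for prev_frame, H in history:  # each entry: (image, homography into view)
        warped, usable = recoverable_region(prev_frame, remaining, H)
        corrected = stitch_corrected_image(corrected, warped, usable)
        remaining &= ~usable  # shrink the still-obstructed area
        if remaining.sum() <= allowed:
            break  # corrected image is below the predetermined threshold
    return corrected, remaining
```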
Although the different non-limiting examples are illustrated as having specific components, the examples of this disclosure are not limited to those particular combinations. It is possible to use some of the components or features from any of the non-limiting examples in combination with features or components from any of the other non-limiting examples.
It should be understood that like reference numerals identify corresponding or similar elements throughout the several drawings. It should also be understood that although a particular component arrangement is disclosed and illustrated in these exemplary embodiments, other arrangements could also benefit from the teachings of this disclosure.
The foregoing description shall be interpreted as illustrative and not in any limiting sense. A worker of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure. For these reasons, the following claims should be studied to determine the true scope and content of this disclosure.