The present disclosure relates to a system and method for viewing an environment behind a vehicle and trailer.
A rear-facing camera is used to aid a driver in reversing a vehicle. A trailer attached to the vehicle may also include a camera to provide the driver with images of the environment behind the trailer. Combining images from the vehicle and the trailer can provide a view that appears to the vehicle operator as if they were looking through the trailer. Such a view is commonly referred to as a transparent trailer view. The transparent trailer view is formed from a combination of images and can provide a useful view to a vehicle operator. However, the various camera angles and resulting images often do not capture all areas proximate the vehicle, and image manipulation is then used to fill in those missing areas. Such image manipulation can result in discontinuities that distract a driver and detract from the usefulness of the transparent trailer view.
The background description provided herein is for the purpose of generally presenting a context of this disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
A system for generating a composite image of an area behind a vehicle according to a disclosed example embodiment includes, among other possible things, a controller configured to receive a first image of a trailer from a vehicle camera, receive a second image of an area aft of the trailer from a trailer camera, obtain a third image of an area proximate the vehicle from a database containing previously obtained images, and combine the first image, the second image and the third image into a composite image.
In another disclosed embodiment of the foregoing system for generating a composite image of an area behind a vehicle, the third image comprises a view of a portion of the environment obstructed by the trailer.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the third image is combined with the first image and the second image and corresponds with an area obstructed from view of the vehicle camera by the trailer.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the third image is combined with the first image and the second image and corresponds with a perspective discontinuity between the first image and the second image.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the controller is configured to combine the second image of an area aft of the trailer and at least a portion of the first image including the trailer.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the controller is configured to combine the third image with the first image and the second image such that the third image occupies an area of the front face of the trailer corresponding to a region unseen by the vehicle camera.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the controller is configured to receive information indicative of vehicle operation dynamics and combine the first image, the second image and the third image based on the received information indicative of vehicle operation dynamics.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, the vehicle operation dynamics includes a relative orientation between the vehicle and the trailer and the controller is configured to form the composite image based on the relative orientation between the vehicle and the trailer.
In another disclosed embodiment of any of the foregoing systems for generating a composite image of an area behind a vehicle, obtaining the third image from the database is based at least in part on the relative orientation between the vehicle and trailer.
A method of forming a composite image of an area behind a vehicle towing a trailer according to another disclosed example embodiment includes, among other possible things, capturing a first image of an area behind a tow vehicle, capturing a second image of an area behind a trailer, obtaining a third image of an area proximate the tow vehicle from a database, and combining the first image, the second image and the third image into a composite image.
In another disclosed embodiment of the foregoing method, combining the first image, the second image and the third image comprises combining a portion of the third image with the first image and the second image to correspond with a perspective discontinuity between the first image and the second image.
Another disclosed embodiment of any of the foregoing methods further comprises combining the second image over at least a portion of the first image including the trailer.
Another disclosed embodiment of any of the foregoing methods further comprises combining the third image with the first image in a portion of the first image that is blocked from view by the trailer.
Another disclosed embodiment of any of the foregoing methods further comprises obtaining information indicative of vehicle operation dynamics and combining the first image, the second image and the third image based on the received information indicative of vehicle operation dynamics.
In another disclosed embodiment of any of the foregoing methods, the vehicle operation dynamics includes a relative orientation between the vehicle and the trailer and forming the composite image is performed based on the relative orientation between the vehicle and trailer.
Another disclosed embodiment of any of the foregoing methods further comprises obtaining the third image from the database based in part on the relative orientation between the vehicle and the trailer.
A non-transitory computer readable medium including instructions executable by at least one processor according to another example disclosed embodiment includes, among other possible things, instructions executed by the at least one processor that prompt capture of a first image of an area behind a tow vehicle, instructions executed by the at least one processor that prompt capture of a second image of an area behind a trailer, instructions executed by the at least one processor to obtain a third image of an area surrounding the vehicle from a database, and instructions that prompt combining the first image, the second image and the third image into a composite image.
In another disclosed embodiment of the foregoing non-transitory computer readable medium, the instructions for combining the first image, the second image and the third image comprise instructions that govern combining the third image with the first image and the second image to correspond with a perspective discontinuity between the first image and the second image.
Although the different examples have the specific components shown in the illustrations, embodiments of this disclosure are not limited to those particular combinations. It is possible to use some of the components or features from one of the examples in combination with features or components from another one of the examples.
These and other features disclosed herein can be best understood from the following specification and drawings, the following of which is a brief description.
Referring to the figures, the system 24 uses a first image 40 from the vehicle camera 26, showing the trailer 22 and the surrounding environment, together with a second image 42 from the trailer camera 28 to form an image that appears to be looking through the trailer 22. The trailer camera 28 provides a field of view indicated by dashed lines 54. A region 56 is obstructed from view of the vehicle camera 26 by the trailer 22 and is also outside the field of view 54 of the trailer camera 28. Accordingly, region 56 represents an area unseen by any camera.
The unseen region 56 causes an undesirable visual discontinuity that manifests in the composite image as an area of the trailer face for which an appropriate line of sight is either unavailable or not accurately represented. The resulting view is therefore not “fully transparent,” and the final image appears distorted. An unrepresented region 50 is defined in this example embodiment as the region of the trailer face below projection line 52, which links the vehicle camera's aperture to the point of intersection between the field of view of the trailer camera 28 and the roadway 48.
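The projection-line construction above can be sketched with simple side-view trigonometry. The following Python sketch is illustrative only; the camera positions, heights, and field-of-view angle are hypothetical values, not taken from this disclosure.

```python
import math

# Illustrative 2D side-view geometry; all coordinates are hypothetical.
# x is distance along the roadway (m); heights are above the roadway (m).

def roadway_intersection(cam_x, cam_h, lower_fov_deg):
    """x-position where the trailer camera's lower field-of-view edge
    meets the roadway (roadway assumed flat at height 0)."""
    run = cam_h / math.tan(math.radians(lower_fov_deg))
    return cam_x + run

def unrepresented_height(veh_cam_x, veh_cam_h, face_x, ground_x):
    """Height on the trailer face where the projection line from the
    vehicle camera to the roadway intersection crosses it, i.e. the top
    of the region unseen by either camera."""
    t = (face_x - veh_cam_x) / (ground_x - veh_cam_x)
    return veh_cam_h * (1.0 - t)

# Example: trailer camera 8 m back at 2 m height with a 45-degree lower
# field-of-view edge; vehicle camera at the origin, 1.5 m high; trailer
# face 3 m behind the vehicle camera.
ground_x = roadway_intersection(8.0, 2.0, 45.0)    # approximately 10.0 m
h = unrepresented_height(0.0, 1.5, 3.0, ground_x)  # approximately 1.05 m
```

With these example values, everything on the trailer face below roughly 1.05 m would fall inside the unrepresented region.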
The disclosed example system 24 uses a stored image from a database 32 to continuously populate the unrepresented region 50 resulting from the unseen region 56. The resulting composite image is better matched, with less visible discontinuity. It should be appreciated that although the unrepresented region 50 resulting from the unseen region 56 is addressed in this example embodiment, other portions of the trailer, or of the environment surrounding the vehicle 20, that are unseen by a system camera could also benefit from the system and method of this disclosure to provide a composite image.
The example disclosed system 24 includes the controller 30, which includes a memory module 34 and a processor 38. The memory module 34 includes the database 32 with historic images that are combined into the composite image.
The memory module 34 also provides a non-transitory computer readable medium for storage of processor-executable software instructions. The instructions direct the processor to capture the first image and the second image, obtain a third image from the database 32, and combine the three images into the composite image. The instructions prompt operation to correct a perspective discontinuity between the first image and the second image with the added third image.
The example disclosed processor 38 may be a hardware device for executing software, particularly software stored in memory. The processor 38 can be a custom-made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computing device, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions.
The memory 34 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, VRAM, etc.)) and/or nonvolatile memory elements (e.g., ROM, hard drive, tape, CD-ROM, etc.). Moreover, the memory 34 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory can also have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor.
The software instructions 60 in the memory 34 may include one or more separate programs, each of which includes an ordered listing of executable instructions for implementing logical functions. A system component embodied as software may also be construed as a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When constructed as a source program, the program is translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the memory.
Referring to the figures, the controller 30 may also receive information from sensors 62 indicative of vehicle operation dynamics.
For example, the sensors 62 may provide an indication of an angle 58 of the trailer 22 relative to the vehicle 20. When the trailer 22 is directly behind the vehicle 20 along the axis A, one set of images and/or combination method may be appropriate. When the trailer is disposed at the angle 58 relative to the vehicle as indicated at 22′, another set of images and/or combination method may be appropriate for that orientation and may be utilized and combined with the images from the vehicle camera 26 and the trailer camera 28.
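One way to realize this angle-dependent selection is to index stored images by the articulation angle at which they were recorded. The sketch below is a minimal illustration only; the dictionary keying and the five-degree bucket size are assumptions, not details from this disclosure.

```python
# Hypothetical sketch: choosing a stored image keyed by trailer
# articulation angle. Bucket size and keying scheme are assumed.

def select_stored_image(database, trailer_angle_deg, bucket_deg=5.0):
    """Return the stored image whose recorded trailer angle is nearest
    to the current articulation angle, quantized into angle buckets."""
    key = round(trailer_angle_deg / bucket_deg) * bucket_deg
    if key in database:
        return database[key]
    # Fall back to the closest available key if the bucket is missing.
    nearest = min(database, key=lambda k: abs(k - key))
    return database[nearest]
```

For example, with images stored for a straight-line orientation (0 degrees) and a 10-degree articulation, a measured angle of 8 degrees would select the 10-degree image.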
Referring to the figures, portions of the second image 42 are stitched into the first image 40 to form an intermediate image 45, and portions of the third image 44 are then stitched in to form the composite image 46.
The size and shape of the portions of the second image 42 that are stitched into the first image 40 correspond with an outline of the trailer 22. The size and shape of the portions of the third image 44 that are stitched into the intermediate image 45 correspond with the unrepresented region 50 resulting from the unseen region 56. The processor 38 may execute image processing algorithms to determine the size and shape of the distorted area and stitch the third image 44 into the composite image based on that determination. The size and shape of the third image 44 that is stitched into the composite image may be selected from a group of sizes and shapes that correspond with different vehicle operations. Moreover, other methods and criteria for determining the size and shape of the portion of the third image 44 stitched into the composite image 46 could be utilized and are within the scope and contemplation of this disclosure. Additionally, although the example composite image is formed utilizing one stored historic image, additional images could be incorporated to form the composite image and are within the scope and contemplation of this disclosure. A fourth image, a fifth image, and/or any number of additional images could be combined to address unseen portions and provide a desired composite image.
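The two-step stitching described above can be illustrated with per-pixel masks. The sketch below uses nested lists of values as stand-ins for warped camera frames; this mask representation is an assumption for illustration, not the disclosed implementation.

```python
# Illustrative two-step mask compositing; list-of-lists stands in for
# image buffers, and True mask entries mark pixels to replace.

def stitch(base, patch, mask):
    """Overlay `patch` onto `base` wherever `mask` is True.
    All three arguments must share the same dimensions."""
    return [
        [p if m else b for b, p, m in zip(brow, prow, mrow)]
        for brow, prow, mrow in zip(base, patch, mask)
    ]

def composite(first, second, trailer_mask, third, unseen_mask):
    # Step 1: stitch the trailer-camera view over the trailer outline,
    # producing the intermediate "transparent trailer" image.
    intermediate = stitch(first, second, trailer_mask)
    # Step 2: fill the unrepresented region with the stored third image.
    return stitch(intermediate, third, unseen_mask)
```

A production system would instead blend perspective-warped frames, but the ordering is the same: the trailer region is replaced first, and the stored image then fills only what remains unseen.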
Referring to the figures, the third image 44 is selected from historical image data that can be stored in the database 32. In one disclosed embodiment, the historical images can be previous images taken by the vehicle camera 26. However, other available sources for historical images suitable for combination to form the composite image could be utilized and are within the contemplation and scope of this disclosure.
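Previously captured frames could, for instance, be kept in a rolling buffer indexed by distance traveled, so that the system can recall the frame in which the now-obstructed roadway was still visible. The class below is a hedged sketch; the odometer-based indexing and the buffer size are assumptions, not details from this disclosure.

```python
from collections import deque

# Hypothetical rolling store of previously captured vehicle-camera
# frames; frames are indexed by odometer reading so a frame showing a
# now-hidden stretch of roadway can be retrieved later.

class HistoricImageStore:
    def __init__(self, max_frames=100):
        # Oldest frames are discarded automatically once full.
        self._frames = deque(maxlen=max_frames)  # (odometer_m, frame)

    def record(self, odometer_m, frame):
        self._frames.append((odometer_m, frame))

    def lookup(self, odometer_m):
        """Return the stored frame captured closest to the requested
        position along the roadway, or None if the store is empty."""
        if not self._frames:
            return None
        return min(self._frames, key=lambda f: abs(f[0] - odometer_m))[1]
```

A bounded buffer keeps memory use fixed while retaining enough recent history to cover the short distance between where a road patch was last visible and where it becomes obstructed by the trailer.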
Accordingly, the example system utilizes at least one additional image to reduce perceived discontinuities that may detract from the composite image viewed by a vehicle operator. The additional image is obtained from a historical database rather than from a camera and therefore does not require additional structure or hardware.
Although the different non-limiting embodiments are illustrated as having specific components or steps, the embodiments of this disclosure are not limited to those particular combinations. It is possible to use some of the components or features from any of the non-limiting embodiments in combination with features or components from any of the other non-limiting embodiments.
It should be understood that like reference numerals identify corresponding or similar elements throughout the several drawings. It should be understood that although a particular component arrangement is disclosed and illustrated in these exemplary embodiments, other arrangements could also benefit from the teachings of this disclosure.
The foregoing description shall be interpreted as illustrative and not in any limiting sense. A worker of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure. For these reasons, the following claims should be studied to determine the true scope and content of this disclosure.