This disclosure relates to a camera mirror system for use in vehicles such as off-highway construction equipment, commercial trucks and passenger vehicles. More particularly, the disclosure relates to a system and method of stitching multiple images together on one or more displays from multiple cameras to mitigate object distortion.
Since the proliferation of backup cameras on passenger vehicles, the use of cameras in vehicles has become more prevalent. A more recent feature on some vehicles is a “surround view” that combines, or “stitches”, multiple camera images into a single picture displayed on a video screen in the vehicle cabin. The intersections of the fields of view of the various cameras are fixed such that each camera always displays the same portion of the scene.
This “surround view” inherently has distortions at these intersections, such that objects located at, or passing across, the intersections may not appear entirely on the display or may be distorted. That is, the way in which the views from the different cameras are stitched together may omit objects that are actually present, and may skew, stretch, or compress visible objects at the stitching, thus failing to provide a “complete view.” A driver who confuses the “surround view” with a “complete view”, or who otherwise misinterprets a distorted image, may therefore cause the vehicle to collide with an object in the stitching portion of the image.
In one exemplary embodiment a camera mirror system for a vehicle includes a camera system having at least a first and second field of view of a scene, the first and second fields of view including a shared overlap area, at least one display configured to display the first and second fields of view to provide a complete view of the scene comprising the first and second fields of view adjoining at a stitching interface corresponding to an intersection of the first and second fields of view in the overlap area, at least one object detector configured to detect an object in the scene, and a controller in communication with the camera system, the at least one object detector, and the at least one display, wherein the controller includes a stitching algorithm configured to evaluate the proximity of the object to the overlap area and adjust at least one of the first and second fields of view to move the stitching interface and ensure the object is depicted on the at least one display.
In another example of the above described camera mirror system for a vehicle the object detector includes one of an image based detection system and a 3D space detection system.
In another example of any of the above described camera mirror systems for a vehicle the first and second fields of view correspond to distinct class views.
In another example of any of the above described camera mirror systems for a vehicle the first and second fields of view each have a maximum field of view that provides the overlap area, and the controller is configured to reduce at least one of the first and second fields of view from its maximum field of view to provide the stitching interface.
In another example of any of the above described camera mirror systems for a vehicle at least one of the first and second field of view is distorted at the stitching interface.
In another example of any of the above described camera mirror systems for a vehicle the controller evaluates the proximity of the object to the stitching interface by determining at least one of whether the object is approaching the overlap area, is in the overlap area, or is exiting the overlap area.
In another example of any of the above described camera mirror systems for a vehicle the controller is configured to determine whether the object is human, and wherein the stitching algorithm is configured to give priority to a first classification of objects over a second classification of objects.
In another example of any of the above described camera mirror systems for a vehicle the first classification of objects is human objects, the second classification of objects is non-human objects, and wherein the classification of human objects includes humans and objects likely to include humans.
In another example of any of the above described camera mirror systems for a vehicle the first classification of objects is a nearest object to vehicle classification.
In another example of any of the above described camera mirror systems for a vehicle the at least one object detector is a 3D space object detector including at least one of the camera system, a radar sensor, a LIDAR sensor, an infrared sensor, and/or an ultrasonic sensor.
In another example of any of the above described camera mirror systems for a vehicle the at least one object detector is an image detection system configured to detect an object using a neural network detection.
In another example of any of the above described camera mirror systems for a vehicle the camera system includes at least a third field of view of the scene, wherein the third field of view includes a corresponding overlap area with at least one of the first field of view and the second field of view, and wherein the stitching algorithm is configured to evaluate the proximity of the object to the corresponding overlap area and adjust at least one of the third field of view and the at least one of the first and second fields of view to ensure that the object is depicted on the at least one display.
An exemplary method of displaying multiple vehicle camera views includes the steps of sensing first and second images respectively in first and second fields of view of a scene, the first and second fields of view having an overlap area with one another, stitching the first and second images using a stitching interface to create a third image, wherein the stitching interface is a position where the first and second fields of view meet one another, detecting a proximity of an object in the scene to the stitching interface, dynamically adjusting the stitching interface such that the object does not cross the stitching interface, and displaying the third image.
In another example of the above described method of displaying multiple vehicle camera views the sensing step is performed using first and second cameras respectively providing the first and second fields of view each having a maximum field of view that provides the overlap area.
In another example of any of the above described methods of displaying multiple vehicle camera views the dynamically adjusting step includes using less than the maximum field of view of at least one of the first and second cameras for display.
In another example of any of the above described methods of displaying multiple vehicle camera views the dynamically adjusting step includes shifting the stitching interface away from the object in response to the object entering the overlap area.
In another example of any of the above described methods of displaying multiple vehicle camera views the dynamically adjusting step further includes snapping the stitching interface behind the object.
In another example of any of the above described methods of displaying multiple vehicle camera views the dynamically adjusting step further comprises the stitching interface following the object after snapping behind the object.
In another example of any of the above described methods of displaying multiple vehicle camera views the detecting step includes sensing the object using at least one of a camera, including at least one of the first and second cameras respectively providing the first and second fields of view, a radar sensor, a LIDAR sensor, an infrared sensor, and/or an ultrasonic sensor.
In another example of any of the above described methods of displaying multiple vehicle camera views the detecting step includes evaluating the proximity of the object to the overlap area by determining whether the object is approaching the overlap area, in the overlap area, or exiting the overlap area.
In another example of any of the above described methods of displaying multiple vehicle camera views the detecting step includes determining a classification of the object, and the adjusting step includes prioritizing objects having a first classification over objects not having the first classification when dynamically adjusting the stitching interface.
In another example of any of the above described methods of displaying multiple vehicle camera views the detecting step includes detecting at least one of a proximity to the vehicle and a time to collision with the vehicle and the adjusting step includes prioritizing a closest object to the vehicle when dynamically adjusting the stitching interface.
The disclosure can be further understood by reference to the following detailed description when considered in connection with the accompanying drawings.
The embodiments, examples and alternatives of the preceding paragraphs, the claims, or the following description and drawings, including any of their various aspects or respective individual features, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.
A schematic view of a commercial truck 10 is illustrated in
One example camera mirror system 20 is shown in a highly schematic fashion in
An ECU, or controller, 26 is in communication with the first and second cameras 22, 24. Various sensors 28, such as a radar sensor 38, a LIDAR sensor 40, an infrared sensor 42, and/or an ultrasonic sensor 44, may be in communication with the controller 26. The sensors 28 and/or the first and second cameras 22, 24 are used to detect objects within the images captured by the first and second cameras 22, 24. Any number of suitable object detection schemes may be used, such as those that rely on neural networks and 3D geometry models to determine positions of objects in space, such as detection from ego-motion. In the case of object detection using a neural network, the first and second cameras 22, 24 provide at least one of the sensors used to detect the object. In alternative examples, any object detection system can be used to detect objects within an image plane, including image based detection such as neural network analysis, as well as detection of objects in 3D space using 3D space detection systems such as radar, lidar, and the like.
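For illustration only, the sketch below shows one way detections from an image-based neural network and from 3D-space sensors could be represented and merged before being passed to a controller such as controller 26. The class, field, and function names, and the simple merge-by-distance rule, are assumptions for this example and are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x: float          # lateral position in vehicle coordinates (m)
    y: float          # longitudinal position in vehicle coordinates (m)
    label: str        # e.g. "person", "vehicle", "unknown"
    source: str       # "camera_nn", "radar", "lidar", "ultrasonic", ...

def merge_detections(image_dets: List[Detection],
                     space_dets: List[Detection],
                     merge_radius: float = 1.0) -> List[Detection]:
    """Greedily merge detections that refer to the same physical object.

    A 3D-space detection within merge_radius of an image-based detection
    is treated as the same object; the image-based label is kept because
    the neural network provides the classification."""
    merged = list(image_dets)
    for sd in space_dets:
        duplicate = any(abs(sd.x - d.x) <= merge_radius and
                        abs(sd.y - d.y) <= merge_radius for d in merged)
        if not duplicate:
            merged.append(sd)
    return merged
```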
The controller 26 outputs a video signal to the display 18. The video signal is a combination of the images from the first and second cameras 22, 24 based upon a stitching algorithm 30. In the example, a screen 32 of the display 18 provides a complete view 36 consisting of at least first and second adjusted fields of view 46, 48 from the first and second cameras 22, 24 that are joined at a stitching interface 34 (alternatively referred to as stitching). In operation, the stitching algorithm 30 is used to adjust at least one of the first and second fields of view FOV1, FOV2 to create an intersection in the overlap area 25. The intersection is selected to position objects outside of the intersection so the objects are not obscured or distorted, which could occur if a fixed intersection were used as in a typical surround view. In this manner, dynamic stitching of images is provided to the display 18, as the intersection and stitching interface are changed to account for the position of the object in relation to the overlap area 25. Even so, distortion of the object is unavoidable if the object crosses the stitching joining the images.
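A minimal sketch of joining two views at a movable stitching interface is shown below, under the simplifying assumption that both views are already rectified to the same height and share a known number of overlapping pixel columns; a real system would also warp and blend each image. The function name and the column-based model are illustrative assumptions.

```python
import numpy as np

def stitch_at_seam(left_view: np.ndarray,
                   right_view: np.ndarray,
                   overlap_px: int,
                   seam_in_overlap: int) -> np.ndarray:
    """Compose one frame from two rectified views that share overlap_px
    columns. seam_in_overlap (0..overlap_px) positions the stitching
    interface within the shared overlap area."""
    s = int(np.clip(seam_in_overlap, 0, overlap_px))
    left_part = left_view[:, : left_view.shape[1] - overlap_px + s]
    right_part = right_view[:, s:]
    return np.hstack([left_part, right_part])

# Example: two 100x640 grayscale frames overlapping by 120 columns.
left = np.zeros((100, 640), dtype=np.uint8)
right = np.full((100, 640), 255, dtype=np.uint8)
frame = stitch_at_seam(left, right, overlap_px=120, seam_in_overlap=40)
assert frame.shape == (100, 640 + 640 - 120)
```

Moving seam_in_overlap between 0 and overlap_px trades columns of one view for columns of the other while the composed frame stays the same size, which is the behavior used below when the interface is shifted away from a detected object.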
In contrast to the above, referring to
Various human and non-human objects 58a-58d are shown in a scene 50 covered by the first and second fields of view FOV1, FOV2. A display 118 corresponding to the scene 50 is illustrated in
Referring to
At least one of the first and second fields of view FOV1, FOV2 is adjusted to meet one another at an intersection 56 in the overlap area 25 and position the object 58 outside the intersection 56 (block 66). These adjusted first and second fields of view are displayed to provide a complete view 36 that includes the object 58 (block 68).
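The proximity evaluation preceding this adjustment can be illustrated with a short sketch that classifies an object as approaching, inside, or exiting the overlap area based on its position and direction of travel. The one-dimensional model, the margin value, and the names used here are assumptions for illustration only.

```python
def overlap_state(obj_x: float, obj_vx: float,
                  overlap_min: float, overlap_max: float,
                  margin: float = 0.5) -> str:
    """Return 'approaching', 'inside', 'exiting', or 'clear'.

    obj_x / obj_vx: object position and velocity along the axis that
    crosses the overlap area; overlap_min / overlap_max bound the overlap."""
    if overlap_min <= obj_x <= overlap_max:
        return "inside"
    heading_in = (obj_x < overlap_min and obj_vx > 0) or \
                 (obj_x > overlap_max and obj_vx < 0)
    near = (overlap_min - margin) <= obj_x <= (overlap_max + margin)
    if near and heading_in:
        return "approaching"
    if near:
        return "exiting"
    return "clear"
```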
The effects of this dynamic stitching method are shown in
Referring to
The first and second fields of view from the first and second cameras 22, 24 have maximum first and second fields of view FOV1max, FOV2max. Overlapping maximum fields of view are desirable so that the intersection 56 can be moved (e.g., 56a, 56b, 56c) as needed while still capturing the desired scene without gaps in the images. The stitching algorithm 30 determines which portion of the image from each of the maximum first and second fields of view FOV1max, FOV2max is used and creates an intersection 56 in the overlap area 25 that ensures the object 58 is depicted on the display 18 regardless of its position in the scene.
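The selection of which portion of each maximum field of view is displayed can be sketched as below, where moving the seam within the overlap uses less than the maximum field of view of at least one camera. The pixel-column model and names are assumptions for illustration.

```python
def crop_windows(width_left_max: int, width_right_max: int,
                 overlap_px: int, seam: int):
    """Return (left_cols, right_cols): the column ranges of each camera's
    maximum image that are actually displayed for a seam placed at `seam`
    (0..overlap_px) within the overlap area."""
    seam = max(0, min(seam, overlap_px))
    left_cols = (0, width_left_max - overlap_px + seam)   # left image up to the seam
    right_cols = (seam, width_right_max)                  # right image from the seam on
    return left_cols, right_cols

# Example: 640-pixel-wide maximum views overlapping by 120 pixels.
print(crop_windows(640, 640, overlap_px=120, seam=30))   # ((0, 550), (30, 640))
```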
The object 58 is illustrated as moving across the scene in
It should also be understood that although a particular component arrangement is disclosed in the illustrated embodiment, other arrangements will benefit herefrom. Although particular step sequences are shown, described, and claimed, it should be understood that steps may be performed in any order, separated or combined unless otherwise indicated and will still benefit from the present invention.
With continued reference to
A controller within the vehicle defines a stitching 740 joining the first image 710 and the second image 720. Each of the images is manipulated to create a single joined image as the third image 730, with the stitching 740 dividing the image. As can be appreciated, the manipulation of each image 710, 720 to create the joined third image 730 is different. As a consequence of the distinct manipulations, when an object 704 approaches, or crosses, the stitching 740, the appearance of the object is distorted. In the illustrated example, the object 704 is stretched and bent; however, in other examples, portions of the object 704 can be cut off or obscured, or the object 704 can appear as two distinct objects 704 with a gap disposed between them.
With continued reference to
Once the object 802 has traveled sufficiently far into the overlap zone 804 that a new stitching line 806 can be created behind the object 802, without overlapping the object 802, the new stitching line 806 is created and replaces the old stitching line, effectively snapping the stitching line 806 behind the object 802 as shown in
By utilizing the dynamically moving stitching line 806 illustrated above, the object 802 never traverses a stitching line 806, and the distortions associated with approaching and crossing a stitching line 806 are minimized.
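One possible sketch of this wait-ahead, snap-behind, then follow behavior is given below, under a simplified one-dimensional model in which the object moves in the positive direction through the overlap zone. The function name, clearance value, and geometry are assumptions for illustration only.

```python
def seam_position(obj_x: float, obj_half_width: float,
                  overlap_min: float, overlap_max: float,
                  clearance: float = 0.2) -> float:
    """Return a seam coordinate, re-evaluated each frame, for an object
    moving in the +x direction through the overlap zone."""
    behind = obj_x - obj_half_width - clearance   # candidate seam behind the object
    ahead = obj_x + obj_half_width + clearance    # seam must be at least here if kept ahead
    if behind > overlap_min:
        # Enough room behind the object: snap the seam behind it and follow it out,
        # never leaving the overlap zone.
        return min(behind, overlap_max)
    # Otherwise keep the seam ahead of the approaching object, clamped to the overlap.
    return min(max(ahead, overlap_min), overlap_max)
```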
In some examples, multiple objects can simultaneously approach the stitching line 806, resulting in a situation where at least one detected object will cross the stitching line 806 and be distorted. In such an example, the controller 26 applies a prioritization to determine which detected object the stitching line 806 should be kept away from.
In one example, the prioritization can be based on physical proximity to the vehicle or based on time to collision as determined by a collision avoidance system. In this example, the object determined to be closest to the vehicle, or soonest to collide with the vehicle and thus most likely to be a collision hazard, is prioritized for minimizing distortions.
In another example, where the object detection system of the vehicle controller 26 includes object classification, the prioritization can be based on object types. In such an example, the controller 26 consults an ordered listing of object classifications, and prioritizes the object having the highest position on the ordered list. In one example, an ordered list can be: 1) people, 2) animals, 3) inanimate objects. In another example, objects can be classified as potential human objects and non-human objects, with potential human objects including people and objects likely to contain or include people, and non-human objects being other types of objects.
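As a sketch of such prioritization, an ordered class list can be combined with distance to the vehicle (or time to collision) as a tiebreaker. The labels, field names, and ordering below are illustrative assumptions rather than a definitive implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

PRIORITY_ORDER = ["person", "animal", "inanimate"]   # assumed ordered class list

@dataclass
class TrackedObject:
    label: str            # classification from the object detector
    distance_m: float     # proximity to the vehicle (or a time-to-collision proxy)

def pick_priority_object(objects: List[TrackedObject]) -> Optional[TrackedObject]:
    """Return the object whose distortion should be avoided first."""
    def rank(obj: TrackedObject):
        try:
            class_rank = PRIORITY_ORDER.index(obj.label)
        except ValueError:
            class_rank = len(PRIORITY_ORDER)          # unknown classes rank last
        return (class_rank, obj.distance_m)           # closer objects win ties
    return min(objects, key=rank) if objects else None
```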
In yet further examples, the methods for differentiating objects can be combined when multiple objects within the images have the same classification, or are the same distance from the vehicle.
Although the different examples have specific components shown in the illustrations, embodiments of this invention are not limited to those particular combinations. It is possible to use some of the components or features from one of the examples in combination with features or components from another one of the examples.
Although an example embodiment has been disclosed, a worker of ordinary skill in this art would recognize that certain modifications would come within the scope of the claims. For that reason, the following claims should be studied to determine their true scope and content.
This application claims priority to U.S. Provisional Patent Application No. 62/807,352 filed on Feb. 19, 2019.