The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicular trailer assist systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 9,446,713 and 9,085,261, which are hereby incorporated herein by reference in their entireties.
The present invention provides a trailer assist system for a vehicle that includes a camera disposed at a rear portion of a vehicle equipped with the vehicular trailering assist system. The camera views at least rearward of the vehicle, and, with a trailer hitched to a hitch of the vehicle via a pivoting joint hitch connection of the trailer to the hitch of the vehicle, the camera views at least a portion of the trailer hitched to the hitch of the vehicle. The camera captures frames of image data that include image data representative of at least a portion of the trailer hitched to the hitch of the vehicle. The system includes an electronic control unit (ECU) with electronic circuitry and associated software. The electronic circuitry includes an image processor operable to process frames of image data captured by the camera. With the trailer hitched to the hitch of the vehicle, the ECU, responsive to processing of frames of image data captured by the camera during a calibration maneuver by the vehicle, determines an initial trailer template of the trailer hitched to the hitch of the vehicle. The ECU, during a turning portion of the calibration maneuver, and at least in part via processing of frames of image data captured by the camera during the turning portion of the calibration maneuver, determines a hitch ball location of the hitch of the vehicle based on the determined initial trailer template. The ECU, after completion of the calibration maneuver, and via processing of frames of image data captured by the camera as the vehicle is driven along a road, and based on the determined hitch ball location, determines a current trailer angle of the trailer relative to a longitudinal axis of the vehicle as the vehicle is driven along the road.
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle and trailer maneuvering system or trailering assist system and/or driving assist system operates to capture images exterior of the vehicle and a trailer being towed by the vehicle and may process the captured image data to determine a path of travel for the vehicle and trailer and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle and trailer in a rearward (or forward) direction. The system includes an image processor or image processing system that is operable to receive image data from one or more cameras and may provide an output to a display device for displaying images representative of the captured image data. Optionally, the system may provide a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes a trailer assist system 12 that is operable to assist in backing up or reversing the vehicle with a trailer hitched to the vehicle via, for example, a hitch 14, and that may maneuver the vehicle 10 and trailer 16 toward a desired or selected location. The trailer maneuver assist system 12 includes at least one exterior viewing vehicle-based imaging sensor or camera, such as a rearward viewing imaging sensor or camera 18 (and the system may optionally include multiple exterior viewing imaging sensors or cameras, such as a sideward/rearward viewing camera at respective sides of the vehicle), which captures image data representative of the scene exterior of the vehicle 10, which includes the hitch 14 and/or trailer 16, with the camera 18 having a lens for focusing images at or onto an imaging array or imaging plane or imager of the camera.
The trailer assist system may obtain frames of image data captured by a camera that includes the hitch ball in its field of view. For example, the hitch ball may be visible within a top down image frame derived from image data captured by the rearward viewing camera.
Optionally, the system performs the hitch ball detection in two primary parts: hitch range detection and hitch ball detection. In the hitch range detection portion, the system reduces the amount of processing necessary to determine the row (i.e., the x position) of the hitch ball within a frame or frames of image data with a hitch range detection technique. The hitch range detection technique divides the initial hitch range (e.g., rows 25 to 70 of one or more frames of image data) into three equal portions (e.g., 15 rows each) and determines which portion the hitch ball is present in. Each row is a row of pixels (which are organized into rows and columns to form a frame of image data). Optionally, the initial hitch range is divided into more portions (e.g., four or five) or fewer portions (e.g., two). Once the system determines which portion the hitch ball is present in, the system may perform final processing only on the rows of photosensing elements associated with the selected portion, and thus the final processing is reduced to one third (or less) of the original hitch range (e.g., only 15 rows instead of 45 rows). The system sends this new hitch range (e.g., the selected 15 rows) to the hitch ball detection portion to find a more accurate or exact hitch point within the selected range.
The hitch ball detection portion receives the input from the hitch range detection portion and begins processing in the new hitch range, which is a subset of the entire frame of image data (e.g., the selected one third portion of the initial hitch range). As discussed in more detail below, the hitch ball detection portion determines the exact row in the new hitch range that the hitch ball is present in.
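To illustrate the two-stage structure described above, a minimal sketch follows. It assumes grayscale frames stored as NumPy arrays, and the per-row scoring function `score_hitch_row` is a hypothetical stand-in (here simply row intensity variance); the patent does not specify how candidate rows are scored, only that the coarse stage narrows the row range before the finer hitch ball detection runs.

```python
import numpy as np

def score_hitch_row(frame, row):
    # Hypothetical per-row score standing in for whatever measure the real
    # system uses; here, the intensity variance of that pixel row.
    return float(np.var(frame[row, :]))

def detect_hitch_range(frame, initial_range=(25, 70), num_portions=3):
    """Coarse stage: pick the portion of the initial row range that most
    likely contains the hitch ball, so the fine stage processes fewer rows."""
    start, end = initial_range
    rows = list(range(start, end))
    portion_len = len(rows) // num_portions
    portions = [rows[i * portion_len:(i + 1) * portion_len]
                for i in range(num_portions)]
    best = max(portions,
               key=lambda p: sum(score_hitch_row(frame, r) for r in p))
    return best[0], best[-1]   # narrowed hitch range (e.g., 15 rows)

def detect_hitch_ball_row(frame, hitch_range):
    """Fine stage: search only the narrowed range for the exact row."""
    start, end = hitch_range
    return max(range(start, end + 1),
               key=lambda r: score_hitch_row(frame, r))
```

For example, `detect_hitch_ball_row(frame, detect_hitch_range(frame))` would return a single row index within the selected third of the initial hitch range, corresponding to the reduced final processing described above.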
The trailer assist system enters the calibration state to calibrate a new trailer that has not been hitched to the towing vehicle before or a trailer that otherwise has not been previously calibrated by the system (i.e., a trailer template has not previously been determined for the trailer). For example, the system determines that the trailer is new/uncalibrated, or an operator of the vehicle indicates that the trailer is new/uncalibrated via, for example, a display within the vehicle. During the calibration process, the system generates a trailer template, determines the hitch ball point, and determines a trailer collision angle (i.e., the trailer angle relative to the vehicle at which the trailer will collide with the towing vehicle). Once in the calibration state (e.g., because the user chooses to calibrate the hitched trailer), the system automatically enters the first sub-state (i.e., the drive straight sub-state).
During the drive straight sub-state, the user or operator (or the system when the vehicle is semi-autonomous or autonomous) drives the vehicle in a straight line by maintaining a steering angle of zero or near zero. The operator may also maintain a vehicle speed that is above a speed threshold for a certain distance (e.g., above 5 mph for 20 meters). During the drive straight sub-state, the system generates the trailer template.
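As a rough sketch of the gating for this sub-state (not the actual implementation), the logic below accumulates straight-line distance while the steering angle stays near zero and the speed stays above the threshold, using the example values above; the near-zero steering tolerance is an assumed parameter.

```python
MPH_TO_MPS = 0.44704  # unit conversion for the example 5 mph threshold

class DriveStraightGate:
    """Sketch of the drive straight condition: hold near-zero steering above a
    speed threshold until enough distance accumulates, then allow the trailer
    template to be generated."""

    def __init__(self, max_steer_deg=1.0, min_speed_mph=5.0, required_m=20.0):
        self.max_steer_deg = max_steer_deg        # assumed "near zero" tolerance
        self.min_speed_mps = min_speed_mph * MPH_TO_MPS
        self.required_m = required_m
        self.distance_m = 0.0

    def update(self, steer_deg, speed_mps, dt_s):
        if abs(steer_deg) <= self.max_steer_deg and speed_mps >= self.min_speed_mps:
            self.distance_m += speed_mps * dt_s
        else:
            self.distance_m = 0.0                 # condition broken: start over
        return self.distance_m >= self.required_m
```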
After the drive straight sub-state, the system enters the turn left or right sub-state. During this sub-state the user performs a turn, such as a U-turn (i.e., a 180 degree turn), to the left or the right with any steady wheel angle. After finishing the turn, the user sets the wheel angle to zero and drives forward to straighten both the towing vehicle and the trailer. When the vehicle begins turning, both the hitch range algorithm and the hitch detection algorithm may be enabled to start collecting data. The hitch range algorithm selects, for example, three hitch points, which divide the initial hitch range (e.g., rows 16 to 70) into three equal portions. While selecting these three hitch points, the system also performs angle detection in parallel, and the dynamic template for each of the selected (e.g., three) hitch points is stored in a buffer or memory.
Optionally, the system determines kinematic angles in parallel (i.e., simultaneously) as the angles from the three hitch points vary. Using the kinematic angle as a reference, when the angle reaches, for example, 30 degrees, the hitch range algorithm may halt tracking the angle and determine a number (e.g., three) of dynamic templates for detecting the hitch range.
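The patent does not specify the kinematic model used for the reference angle, so the sketch below adopts a common single-axle vehicle-and-trailer kinematic model purely as an assumption; the wheelbase, hitch offset, and trailer length values are illustrative. Only the overall structure, integrating a model-based reference angle and halting template collection once it reaches a threshold such as 30 degrees, reflects the description above.

```python
import math

def kinematic_angle_rate(psi, v, delta, L_w=3.0, L_h=1.0, L_t=4.0):
    # Assumed textbook model of the hitch angle rate: psi is the trailer angle
    # (rad), v the vehicle speed (m/s), delta the front steering angle (rad),
    # L_w the wheelbase, L_h the rear-axle-to-hitch offset, and L_t the
    # hitch-to-trailer-axle length (all meters). Not necessarily the model
    # the described system uses.
    return (-(v / L_t) * math.sin(psi)
            - (v / L_w) * math.tan(delta) * (1.0 + (L_h / L_t) * math.cos(psi)))

def track_kinematic_angle(samples, halt_deg=30.0, dt=0.05):
    """Integrate the reference angle over (speed, steering) samples and report
    whether the halt threshold for stopping template collection was reached."""
    psi = 0.0
    for v, delta in samples:
        psi += kinematic_angle_rate(psi, v, delta) * dt
        if abs(math.degrees(psi)) >= halt_deg:
            return math.degrees(psi), True    # halt: finalize dynamic templates
    return math.degrees(psi), False
```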
Ideally, the dynamic template has a trailer angle not less than 70 percent of the kinematic angle, as otherwise that dynamic template may not be considered for processing. The system matches these dynamic templates with the warped initial trailer template.
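A minimal sketch of this matching step follows, using standard OpenCV calls. The choice to warp (rotate) the initial template about the hitch point and to score matches with normalized cross-correlation is an assumed implementation detail; the 70 percent kinematic-angle gate mirrors the rule just described.

```python
import cv2

def warp_initial_template(template, angle_deg, hitch_point_xy):
    # Rotate the initial trailer template about the candidate hitch point
    # (assumed warp; the patent only states the initial template is warped).
    rows, cols = template.shape[:2]
    M = cv2.getRotationMatrix2D(hitch_point_xy, angle_deg, 1.0)
    return cv2.warpAffine(template, M, (cols, rows))

def best_dynamic_template(initial_template, dynamic_templates, kinematic_deg,
                          hitch_point_xy, min_ratio=0.7):
    """Score each dynamic template (image, angle) pair against the warped
    initial template, skipping templates whose trailer angle is below 70
    percent of the kinematic reference angle."""
    warped = warp_initial_template(initial_template, kinematic_deg, hitch_point_xy)
    best_idx, best_score = None, -1.0
    for idx, (dyn_img, dyn_angle_deg) in enumerate(dynamic_templates):
        if abs(dyn_angle_deg) < min_ratio * abs(kinematic_deg):
            continue   # below 70% of the kinematic angle: not considered
        score = float(cv2.matchTemplate(dyn_img, warped,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score
```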
Optionally, both the hitch range and hitch detection algorithms stop once the kinematic angle reaches a threshold degree (e.g., 30 degrees). Once the vehicle completes the 180 degree turn, the vehicle may drive straight again for a short distance (e.g., a few meters). That is, the system may not transition to the next state until the vehicle moves straight for a few meters (or other threshold distance) above a speed threshold. Once this condition is satisfied, the system may transition to the please wait sub-state.
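Condensing the calibration flow just described into a small state machine gives the sketch below; the state names follow the sub-states named above, while the transition predicates are hypothetical placeholders for the detailed checks (straight-line distance, turn completion and re-straightening, and template processing).

```python
from dataclasses import dataclass
from enum import Enum, auto

class CalibState(Enum):
    DRIVE_STRAIGHT = auto()      # build the initial trailer template
    TURN_LEFT_OR_RIGHT = auto()  # collect dynamic templates during the turn
    PLEASE_WAIT = auto()         # process stored templates, find hitch point
    DONE = auto()

@dataclass
class Signals:
    drove_straight_far_enough: bool = False   # e.g., 20 m above 5 mph
    straightened_after_turn: bool = False     # a few meters straight post-turn
    templates_processed: bool = False         # stored images fully processed

def next_state(state, s):
    if state is CalibState.DRIVE_STRAIGHT and s.drove_straight_far_enough:
        return CalibState.TURN_LEFT_OR_RIGHT
    if state is CalibState.TURN_LEFT_OR_RIGHT and s.straightened_after_turn:
        return CalibState.PLEASE_WAIT
    if state is CalibState.PLEASE_WAIT and s.templates_processed:
        return CalibState.DONE
    return state
```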
The please wait sub-state activates the hitch detection algorithm and begins processing all the stored template images (e.g., 9 images).
Thus, the system determines the hitch ball location during the calibration maneuver and thereafter, based on the determined hitch ball location and via processing of frames of image data captured by the camera, determines the current trailer angle of the trailer relative to the longitudinal axis of the vehicle as the vehicle is driven along the road.
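As one possible illustration of the run-time angle determination (an assumption consistent with the template warping above, not a method stated by the patent), the current trailer angle could be estimated by rotating the calibrated trailer template about the determined hitch ball point and keeping the rotation that best matches the live frame; the search range and step are illustrative.

```python
import cv2
import numpy as np

def estimate_trailer_angle(frame, trailer_template, hitch_point_xy,
                           search_deg=60.0, step_deg=1.0):
    """Hypothetical angle search: rotate the trailer template about the hitch
    ball point and return the rotation (in degrees, relative to the vehicle's
    longitudinal axis) with the highest normalized correlation to the frame."""
    rows, cols = trailer_template.shape[:2]
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-search_deg, search_deg + step_deg, step_deg):
        M = cv2.getRotationMatrix2D(hitch_point_xy, float(angle), 1.0)
        rotated = cv2.warpAffine(trailer_template, M, (cols, rows))
        score = float(cv2.matchTemplate(frame, rotated,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_angle, best_score = float(angle), score
    return best_angle
```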
The system may utilize aspects of the trailering assist systems or trailer angle detection systems or trailer hitch assist systems described in U.S. Pat. Nos. 10,755,110; 10,733,757; 10,706,291; 10,638,025; 10,586,119; 10,532,698; 10,552,976; 10,160,382; 10,086,870; 9,558,409; 9,446,713; 9,085,261 and/or 6,690,268, and/or U.S. Publication Nos. US-2020-0406967; US-2020-0356788; US-2020-0334475; US-2020-0361397; US-2020-0017143; US-2019-0297233; US-2019-0347825; US-2019-0118860; US-2019-0064831; US-2019-0042864; US-2019-0039649; US-2019-0143895; US-2019-0016264; US-2018-0276839; US-2018-0276838; US-2018-0253608; US-2018-0215382; US-2017-0254873; US-2017-0050672; US-2015-0217693; US-2014-0160276; US-2014-0085472 and/or US-2015-0002670, which are all hereby incorporated herein by reference in their entireties.
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in U.S. Pat. Nos. 10,099,614 and/or 10,071,687, which are hereby incorporated herein by reference in their entireties.
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in U.S. Pat. Nos. 10,071,687; 9,900,490; 9,126,525 and/or 9,036,026, which are hereby incorporated herein by reference in their entireties.
Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device, such as by utilizing aspects of the video display systems described in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,501; 6,222,460; 6,513,252 and/or 6,642,851, and/or U.S. Publication Nos. US-2014-0022390; US-2012-0162427; US-2006-0050018 and/or US-2006-0061008, which are all hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 62/705,967, filed Jul. 24, 2020, which is hereby incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6690268 | Schofield et al. | Feb 2004 | B2 |
7038577 | Pawlicki et al. | May 2006 | B2 |
7720580 | Higgins-Luthman | May 2010 | B2 |
7855755 | Weller et al. | Dec 2010 | B2 |
9085261 | Lu et al. | Jul 2015 | B2 |
9446713 | Lu et al. | Sep 2016 | B2 |
9558409 | Pliefke et al. | Jan 2017 | B2 |
10071687 | Ihlenburg et al. | Sep 2018 | B2 |
10086870 | Gieseke et al. | Oct 2018 | B2 |
10099614 | Diessner | Oct 2018 | B2 |
10160382 | Pliefke et al. | Dec 2018 | B2 |
10532698 | Potnis et al. | Jan 2020 | B2 |
10552976 | Diessner et al. | Feb 2020 | B2 |
10586119 | Pliefke et al. | Mar 2020 | B2 |
10638025 | Gali et al. | Apr 2020 | B2 |
10706291 | Diessner et al. | Jul 2020 | B2 |
10733757 | Gupta et al. | Aug 2020 | B2 |
10755110 | Bajpai | Aug 2020 | B2 |
20140085472 | Lu et al. | Mar 2014 | A1 |
20140160276 | Pliefke et al. | Jun 2014 | A1 |
20150002670 | Bajpai | Jan 2015 | A1 |
20150217693 | Pliefke et al. | Aug 2015 | A1 |
20170050672 | Gieseke et al. | Feb 2017 | A1 |
20170254873 | Koravadi | Sep 2017 | A1 |
20180215382 | Gupta et al. | Aug 2018 | A1 |
20180253608 | Diessner et al. | Sep 2018 | A1 |
20180276838 | Gupta et al. | Sep 2018 | A1 |
20180276839 | Diessner | Sep 2018 | A1 |
20190016264 | Potnis et al. | Jan 2019 | A1 |
20190039649 | Gieseke et al. | Feb 2019 | A1 |
20190042864 | Pliefke et al. | Feb 2019 | A1 |
20190064831 | Gali et al. | Feb 2019 | A1 |
20190118860 | Gali et al. | Apr 2019 | A1 |
20190143895 | Pliefke et al. | May 2019 | A1 |
20190297233 | Gali et al. | Sep 2019 | A1 |
20190347825 | Gupta et al. | Nov 2019 | A1 |
20190359134 | Yamamoto | Nov 2019 | A1 |
20200017143 | Gali | Jan 2020 | A1 |
20200334475 | Joseph et al. | Oct 2020 | A1 |
20200356788 | Joseph et al. | Nov 2020 | A1 |
20200361397 | Joseph et al. | Nov 2020 | A1 |
20200406967 | Yunus et al. | Dec 2020 | A1 |
20210027490 | Taiana | Jan 2021 | A1 |
20220027644 | Gali et al. | Jan 2022 | A1 |
20220028111 | Gali et al. | Jan 2022 | A1 |
Number | Date | Country | |
---|---|---|---|
20220024391 A1 | Jan 2022 | US |
Number | Date | Country | |
---|---|---|---|
62705967 | Jul 2020 | US |