The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
The present invention provides a method for calibrating a vehicular camera of a driving assistance system or vision system or imaging system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle. The method includes placing a target within the field of view of the vehicular camera and capturing image data with the camera representative of the field of view of the vehicular camera. The target includes a first portion of the target with a first geometric pattern and a second portion of the target with a second geometric pattern. The method also includes detecting first and second edges of the first portion of the target and third and fourth edges of the second portion of the target. The method also includes determining first edge pixels representative of the first detected edge of the first portion of the target, second edge pixels representative of the second detected edge of the first portion of the target, third edge pixels representative of the third detected edge of the second portion of the target, and fourth edge pixels representative of the fourth detected edge of the second portion of the target. The method also includes determining a first vanishing point based on the determined first edge pixels of the first portion of the target and the determined second edge pixels of the first portion of the target, and determining a second vanishing point based on the determined third edge pixels of the second portion of the target and the determined fourth edge pixels of the second portion of the target. The method also includes determining camera orientation based on location of the determined first vanishing point relative to location of the determined second vanishing point. The method includes calibrating the vehicular vision system for the vehicular camera based on the determined camera orientation.
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle vision system and/or driver or driving assist system and/or object detection system and/or alert system operates to capture image data representative of the scene exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes an image processor or image processing system that is operable to receive image data from one or more cameras and provide an output to a display device for displaying images representative of the captured image data. Optionally, the vision system may provide display, such as a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes at least one exterior viewing imaging sensor or camera, such as a rearward viewing imaging sensor or camera 14a (and the system may optionally include multiple exterior viewing imaging sensors or cameras, such as a forward viewing camera 14b at the front (or at the windshield) of the vehicle, and a sideward/rearward viewing camera 14c, 14d at respective sides of the vehicle), which captures image data representative of the scene exterior of the vehicle, with the camera having a lens for focusing images at or onto an imaging array or imaging plane or imager of the camera.
Camera calibration is an essential phase of visual odometry in vehicle vision systems and autonomous driving systems. Accurate camera calibration (e.g., adjusting a position of the camera or adjusting processing of image data captured by the camera to accommodate a determined rotational and/or translational offset or misalignment of the camera at the vehicle) is necessary for extracting precise, reliable geometric information to obtain the perspective projection from the three-dimensional (3D) world to a two-dimensional (2D) image plane. Given camera intrinsic data, such as focal length, principal point, and lens distortion parameters, extrinsic parameters, such as the camera orientation in pitch-yaw-roll and an associated translation, can be calculated to construct a perspective projection model.
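To make the projection model concrete, the following is a minimal sketch in Python/NumPy (the numeric intrinsic and extrinsic values and the helper name project_point are illustrative assumptions, and lens distortion is assumed to have been corrected beforehand):

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point to 2D pixel coordinates (pinhole model,
    lens distortion assumed already corrected)."""
    X_cam = R @ X_world + t      # world -> camera coordinates (extrinsics)
    p = K @ X_cam                # camera -> image plane (intrinsics), homogeneous
    return p[:2] / p[2]          # normalize to pixel coordinates

# Example intrinsics: focal lengths fx, fy and principal point (cx, cy)
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Example extrinsics: identity rotation and an illustrative translation
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])

print(project_point(np.array([0.2, 0.1, 3.0]), K, R, t))
```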
A conventional camera calibration technique is often implemented based on a pre-arranged 3D control scene containing known 3D control points. In this technique, the camera captures a plane image of the 3D control scene. Next, points in the image plane corresponding to the known 3D control points are determined. The correspondences between the known 3D control points and the determined points in the plane image are then used to calculate the extrinsic parameters of the perspective projection model.
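For comparison, below is a minimal sketch of such a point-based calibration using OpenCV's solvePnP (the solver choice and all numeric point coordinates are assumptions for illustration; the conventional technique described above does not prescribe a particular implementation):

```python
import numpy as np
import cv2

# Known 3D control points of the pre-arranged control scene (world frame)
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1]], dtype=np.float64)

# Points determined in the captured plane image that correspond to the
# 3D control points (pixel coordinates; values here are illustrative)
image_points = np.array([[320, 240], [420, 238], [424, 340], [318, 344],
                         [322, 150], [418, 148]], dtype=np.float64)

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # lens distortion assumed already corrected

# Solve the correspondences for the extrinsic parameters (rotation + translation)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
```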
Using this technique, the 3D control points must be accurately known to achieve accurate camera calibration. However, setting up the 3D control scene requires a complex and difficult manual arrangement, and such a scene is not always available for each calibration. Thus, point-based approaches often result in lower calibration accuracy.
Another technique attempts to achieve camera calibration using a pair of parallel lines to calibrate the extrinsic parameters from a single perspective image. This line-based method is widely used to improve camera calibration accuracy and to overcome the disadvantages of point-based approaches.
Implementations herein relate to techniques for camera calibration and, more particularly, to techniques for calibrating a camera autonomously from a single perspective image. That is, implementations herein provide a robust technique for camera extrinsic calibration that overcomes the inaccuracy of typical point-based approaches. The technique includes a pre-set target 210 placed within the field of view of the camera, with the target including a first portion having a first geometric pattern and a second portion having a second geometric pattern.
The calibration system includes an edge detector that processes the image data captured by the camera to detect edges of the target and output edge sets 450. The edge grouper 500 receives the edge sets 450 from the edge detector and employs longest-line linkage on the edge sets, computing line distances and angles based on pixel gradients and normal coefficients for the edge pixels, to determine or generate a set of lines and/or line segments 550. Using the set of lines 550, the VP estimator 600 estimates the facade edges and, for example, uses a J-linkage operation to cluster pairs of parallel lines with minimum distance to produce VP candidates 650. The VP estimator 600 sends the VP candidates 650 to the VP refiner 700. The VP refiner 700 refines the VP candidates 650 to determine one or more optimized VPs 750 (e.g., two orthogonal VPs 750) based on the orthogonality constraint. Optionally, the VP refiner 700 obtains a horizontal VP 750 and a vertical VP 750, or a plurality of (e.g., two) horizontal VPs 750 and a vertical VP 750. Using the orthogonal VPs 750, the orientation estimator 800 aligns the 3D world scene with the 2D camera image by arranging the obtained VPs 750 into an orientation matrix (e.g., a 3 by x matrix). The camera orientation (i.e., the pitch-yaw-roll of the camera relative to the vehicle) may be calculated from the orientation matrix.
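As an illustration of the vanishing point estimation step, the following minimal sketch computes a VP candidate as the common intersection of a cluster of grouped lines in homogeneous coordinates (the clustering itself, e.g., by J-linkage, is assumed to have already produced the clusters, and the endpoint coordinates are purely illustrative):

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line coefficients (a, b, c) for the image line through
    two points, so that a*x + b*y + c = 0."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(line_cluster):
    """Estimate the common intersection (the VP candidate) of a cluster of
    nearly parallel image lines as the null vector of the stacked line
    coefficients (least squares via SVD)."""
    L = np.vstack(line_cluster)
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v / v[2]   # (x, y, 1) in pixel coordinates

# Two clusters of lines fitted from the grouped edge pixels of the two
# target portions (endpoint coordinates are illustrative only)
cluster_1 = [line_through((100, 400), (600, 380)),
             line_through((100, 500), (600, 450))]
cluster_2 = [line_through((300, 100), (320, 600)),
             line_through((500, 100), (480, 600))]

vp1 = vanishing_point(cluster_1)   # first VP candidate
vp2 = vanishing_point(cluster_2)   # second VP candidate
```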
Thus, the calibration system calibrates a vehicular camera (such as cameras 14a-d) using the vanishing points determined from the target, relating each vanishing point in the captured image to the camera intrinsic matrix K and the rotation matrix R according to Equation (1):
p_i = K R X_i for vanishing point directions X_i, subject to the orthogonality constraint X_i^T X_j = 0 (i ≠ j)   (1)
In Equation (1), p_i is the calculated VP in the camera image, K is the camera intrinsic matrix, R is the rotation matrix, and X_i is the corresponding 3D direction of the vanishing point. The calibration system sets the directions of the vanishing points in the rotation matrix (e.g., X_i = [1, 0, 0]), and each VP provides one column of R. Because of the special orthonormal properties of R (e.g., inv(R) = R^T), each row and column of R has unit length.
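Building on Equation (1), the following minimal sketch recovers R from two orthogonal vanishing points and converts it to pitch-yaw-roll (the Euler angle convention, sign handling, and use of an SVD re-orthonormalization step are assumptions for illustration; they are not mandated by the description above):

```python
import numpy as np

def rotation_from_vps(vp1, vp2, K):
    """Recover the camera rotation matrix R from two orthogonal vanishing
    points, following Equation (1): each back-projected VP direction
    K^-1 * p_i gives one column of R (up to scale and sign)."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([vp1[0], vp1[1], 1.0])
    r2 = Kinv @ np.array([vp2[0], vp2[1], 1.0])
    r1 /= np.linalg.norm(r1)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)                     # third column from orthogonality
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)               # enforce inv(R) = R^T exactly
    return U @ Vt

def pitch_yaw_roll(R):
    """Extract pitch, yaw, roll (radians), assuming a Z-Y-X
    (yaw-pitch-roll) Euler decomposition of R."""
    pitch = np.arcsin(-R[2, 0])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return pitch, yaw, roll
```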
Thus, the calibration system provides high performance because the vanishing point position estimate is primarily affected by the angles of the detected lines rather than by the absolute position of any individual point, and the angles of the lines can be robustly estimated in most photometric measurement cases.
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2013/081984 and/or WO 2013/081985, which are hereby incorporated herein by reference in their entireties.
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in U.S. Pat. Nos. 10,071,687; 9,900,490; 9,126,525 and/or 9,036,026, which are hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 62/969,390, filed Feb. 3, 2020, which is hereby incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5550677 | Schofield et al. | Aug 1996 | A |
5670935 | Schofield et al. | Sep 1997 | A |
5949331 | Schofield et al. | Sep 1999 | A |
7038577 | Pawlicki et al. | May 2006 | B2 |
7720580 | Higgins-Luthman | May 2010 | B2 |
7855755 | Weller et al. | Dec 2010 | B2 |
8421865 | Euler et al. | Apr 2013 | B2 |
9150155 | Vico et al. | Oct 2015 | B2 |
9205776 | Turk | Dec 2015 | B2 |
9357208 | Gupta et al. | May 2016 | B2 |
9491450 | Kussel | Nov 2016 | B2 |
9491451 | Pliefke | Nov 2016 | B2 |
9563951 | Okouneva | Feb 2017 | B2 |
9688200 | Knudsen | Jun 2017 | B2 |
9723272 | Lu et al. | Aug 2017 | B2 |
9762880 | Pflug | Sep 2017 | B2 |
9834153 | Gupta et al. | Dec 2017 | B2 |
9916660 | Singh et al. | Mar 2018 | B2 |
10071687 | Ihlenburg et al. | Sep 2018 | B2 |
10099614 | Diessner | Oct 2018 | B2 |
10179543 | Rathi et al. | Jan 2019 | B2 |
10380765 | Singh et al. | Aug 2019 | B2 |
10453217 | Singh et al. | Oct 2019 | B2 |
10504241 | Singh | Dec 2019 | B2 |
10816666 | Nicke et al. | Oct 2020 | B2 |
10884103 | Pliefke et al. | Jan 2021 | B2 |
10946799 | May | Mar 2021 | B2 |
20120057799 | Nguyen | Mar 2012 | A1 |
20120180084 | Huang | Jul 2012 | A1 |
20140043473 | Gupta | Feb 2014 | A1 |
20140241579 | Nonaka | Aug 2014 | A1 |
20170024861 | Arata | Jan 2017 | A1 |
20170236258 | Hsu | Aug 2017 | A1 |
20170344821 | Gaskill | Nov 2017 | A1 |
20170372481 | Onuki | Dec 2017 | A1 |
20180040141 | Guerreiro | Feb 2018 | A1 |
20180281698 | Tang et al. | Oct 2018 | A1 |
20190094347 | Singh | Mar 2019 | A1 |
20200057487 | Sicconi | Feb 2020 | A1 |
20200167578 | Ding | May 2020 | A1 |
20200193641 | Markkassery | Jun 2020 | A1 |
20210197893 | Okouneva et al. | Jul 2021 | A1 |
20210229706 | Jones | Jul 2021 | A1 |
20210241492 | Hsu et al. | Aug 2021 | A1 |
Number | Date | Country | |
---|---|---|---|
20210241492 A1 | Aug 2021 | US |
Number | Date | Country | |
---|---|---|---|
62969390 | Feb 2020 | US |