An embodiment relates generally to image capture and processing for in-vehicle vision systems.
Vehicle systems often use in-vehicle vision systems for rear-view scene detection, side-view scene detection, and forward-view scene detection. For applications that require a graphic overlay or that emphasize an area of the captured image, it is critical to accurately calibrate the position and orientation of the camera with respect to the vehicle and the surrounding objects. Camera modeling, which takes a captured input image from a device and remodels the image to show or enhance a respective region of the captured image, must reorient all objects within the image without distorting the image so much that it becomes unusable or inaccurate to the person viewing the reproduced image.
Current rear back-up cameras on vehicles are typically wide-FOV cameras, for example, cameras with a 135° FOV. Wide-FOV cameras typically provide curved images that cause image distortion around the edges of the image. Various approaches are known in the art to provide distortion correction for the images of these types of cameras, including models based on a pinhole camera and models that correct for radial distortion by defining radial parameters.
In order to provide an accurate depiction of the surroundings, such as surround views or ultra-wide views, an ultra-wide field-of-view camera may be used. Such cameras are typically referred to as fish-eye cameras because their image is significantly curved. For the image to be effective for various driving scenarios such as back-up applications, distortions in the images must be corrected, and the portions that are the focus of the image and are required to be enhanced (e.g., an available parking spot) must be displayed so that distortions do not degrade the image. Traditional virtual camera modeling uses a planar imaging surface representing the flat image sensor within a camera; however, such systems provide only limited distortion correction when the image is projected onto a display device. Such displays are either left uncorrected, or complex routines must be applied to remove the distortions.
An advantage of the invention described herein is that a virtual image can be synthesized with various image effects by applying camera view synthesis to one or multiple cameras. The image effects include generating an image that is viewed from a different camera pose than that of the real camera capturing the real image. For example, a backup camera whose optical axis points 30 degrees down from horizontal can be modeled and synthesized to display a top-down view as if the real camera were positioned above the vehicle looking directly perpendicular to the ground. The technique described herein uses a non-planar imaging surface for modeling the captured image in a virtual camera model, whereas traditional virtual camera modeling approaches utilize a flat planar imaging surface.
Moreover, various arbitrary shapes may be utilized for enhancing a region of the captured image. For example, by utilizing an elliptical imaging surface, a center portion of the image may be enhanced (zoomed) without cutting off or distorting the end portions of the image. Various non-planar imaging surfaces may be dynamically inserted within the virtual camera model for synthesizing a view based on operating conditions of the vehicle. An example would be a signal from a turn signal indicator that is provided to the processor, where a non-planar surface is used that enhances the region or direction in which the vehicle is turning.
A method is provided for displaying a captured image on a display device. A real image is captured by a vision-based imaging device. A virtual image is generated from the captured real image based on a mapping by a processor. The mapping utilizes a virtual camera model with a non-planar imaging surface. The virtual image formed on the non-planar imaging surface of the virtual camera model is projected to the display device.
There is shown in
The vision-based imaging system 12 includes a front-view camera 14, a rear-view camera 16, a left-side view camera 18, and a right-side view camera (not shown). The cameras 14-18 can be any camera suitable for the purposes described herein, many of which are known in the automotive art, that are capable of receiving light, or other radiation, and converting the light energy to electrical signals in a pixel format using, for example, charge-coupled devices (CCD). The cameras 14-18 generate frames of image data at a certain data frame rate that can be stored for subsequent processing. The cameras 14-18 can be mounted within or on any suitable structure that is part of the vehicle 10, such as bumpers, fascia, grille, side-view mirrors, door panels, etc., as would be well understood and appreciated by those skilled in the art. In one non-limiting embodiment, the side camera 18 is mounted under the side-view mirrors and is pointed downward. Image data from the cameras 14-18 is sent to a processor 22 that processes the image data to generate images that can be displayed on a vehicle display 24.
In other embodiments, the vision-based imaging system 12 may be used to identify the clear path or lane markings in the road for systems such as, but not limited to, back-up object detection, lane departure warning systems, or lane centering. Although the vision-based imaging system 12 may be used for a variety of functions, such as utilizing the captured image to recognize landmarks including, but not limited to, road markings, lane markings, road signs, buildings, trees, humans, or other roadway objects so that movement of landmarks between image frames of the video can be detected, the embodiments described herein are mainly targeted at video imaging for displaying to the driver the environment surrounding the vehicle (e.g., when backing up).
The present invention proposes an efficient and effective image modeling and de-warping process for ultra-wide FOV cameras that employs a simple two-step approach and offers fast processing times and enhanced image quality without utilizing radial distortion correction. Distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. Radial distortion is a failure of a lens to be rectilinear.
The two-step approach as discussed above includes (1) applying a camera model to the captured image for projecting the captured image onto a non-planar surface and (2) applying view synthesis for mapping the virtual image projected onto the non-planar surface to the real display image. For view synthesis, given one or more images of a specific subject taken from specific points with specific camera settings and orientations, the goal is to build a synthetic image as taken from a virtual camera having the same or a different optical axis. The term "virtual camera" refers to a simulated camera with simulated camera model parameters and a simulated imaging surface, in addition to a simulated camera pose. The camera modeling is performed by a processor or multiple processors. The term "virtual image" refers to a synthesized image of a scene produced using the virtual camera modeling.
The proposed approach provides effective surround view and dynamic rearview mirror functions with an enhanced de-warping operation, in addition to a dynamic view synthesis for ultra-wide FOV cameras. Camera calibration as used herein refers to estimating a number of camera parameters, including both intrinsic and extrinsic parameters. The intrinsic parameters include focal length, optical center, radial distortion parameters, etc., and the extrinsic parameters include camera location, camera orientation, etc.
Camera models are known in the art for mapping objects in the world space to an image sensor plane of a camera to generate an image. One model known in the art is referred to as a pinhole camera model that is effective for modeling the image for narrow FOV cameras. The pinhole camera model is defined as:
Equation (1) includes the parameters that are employed to provide the mapping of point M in the object space 34 to point m in the image plane 32. Particularly, intrinsic parameters include fu, fv, uc, vc and γ and extrinsic parameters include a 3 by 3 matrix R for the camera rotation and a 3 by 1 translation vector t from the image plane 32 to the object space 34. The parameter γ represents a skewness of the two image axes that is typically negligible, and is often set to zero.
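For reference, a common formulation of such a pinhole projection, expressed with the same intrinsic and extrinsic parameters (a standard form from the camera calibration literature, given here as an illustration), is

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & \gamma & u_c \\ 0 & f_v & v_c \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, $$

where m = (u, v) is the image point in the image plane 32, M = (X, Y, Z) is the point in the object space 34, and s is an arbitrary scale factor.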
Since the pinhole camera model follows rectilinear projection, in which a finite-size planar image surface can only cover a limited FOV range (<<180° FOV), generating a cylindrical panorama view for an ultra-wide (~180° FOV) fisheye camera using a planar image surface requires a specific camera model that takes horizontal radial distortion into account. Some other views may require other specific camera modeling (and some specific views may not be able to be generated). However, by changing the image plane to a non-planar image surface, a specific view can be easily generated while still using simple ray tracing and the pinhole camera model. As a result, the following description will describe the advantages of utilizing a non-planar image surface.
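The FOV limitation of a planar imaging surface can be seen directly from the rectilinear projection relation: an image point at radial distance r from the optical center corresponds to an incident angle θ through

$$ r = f\tan\theta, $$

so r grows without bound as θ approaches 90°, and a finite planar image surface can never capture a full 180° FOV.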
The in-vehicle display device 24 (shown in
A view synthesis technique is applied to the projected image on the non-planar surface for de-warping the image. In
In
Dynamic view synthesis is a technique by which a specific view synthesis is enabled based on a driving scenario of a vehicle operation. For example, special synthetic modeling techniques may be triggered if the vehicle is driving in a parking lot versus on a highway, by a proximity sensor sensing an object in a respective region of the vehicle, or by a vehicle signal (e.g., turn signal, steering wheel angle, or vehicle speed). The special synthesis modeling technique may be to apply respective shaped image surfaces and virtual camera modeling to a captured image, or to apply virtual pan, tilt, or directional zoom depending on the triggered operation. As a result, virtual imaging surfaces may be switched based on different driving needs or driving conditions. For example, while driving along a road, an elliptical image surface may be utilized to magnify a center view to mimic the field-of-view and object size as seen from the rearview mirror. In another scenario, while passing a vehicle or pulling out into cross traffic, a cylindrical imaging surface or a stitched planar imaging surface may be used to see obstacles to the sides of the driven vehicle. In yet another scenario, while performing a back-up maneuver into a parking space, a top-down view or stitched side views could be applied to guide the vehicle into the parking space and provide a view of the proximity of adjacent vehicles, pedestrians, or curbs. Any arbitrary shape may be used as the imaging surface so long as the shape satisfies a homographic mapping constraint (one-to-one).
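A minimal sketch of how such scenario-based switching might be keyed to vehicle signals is shown below; the signal fields, thresholds, and surface identifiers are hypothetical placeholders rather than parameters taken from the embodiments above.

```python
# Hypothetical sketch: select a virtual imaging surface from vehicle signals.
# Surface names, signal fields, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VehicleSignals:
    gear: str            # e.g. "R" for reverse
    turn_signal: str     # "left", "right", or "off"
    speed_kph: float

def select_imaging_surface(signals: VehicleSignals) -> str:
    """Return an identifier for the virtual imaging surface to use."""
    if signals.gear == "R":
        return "top_down_stitched"        # back-up maneuver: top-down / stitched side views
    if signals.turn_signal in ("left", "right"):
        return f"cylindrical_{signals.turn_signal}"  # emphasize the turning direction
    if signals.speed_kph > 60.0:
        return "elliptical_center_zoom"   # highway: magnified center, rearview-mirror-like view
    return "cylindrical_full"             # default wide surround view

# Example:
# surface = select_imaging_surface(VehicleSignals(gear="D", turn_signal="left", speed_kph=40.0))
```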
In block 62, the real camera model, such as the fisheye model (rd=func(θ) and φ), is defined, and an imaging surface is defined. That is, the incident ray as seen by the real fish-eye camera may be illustrated as follows:
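One common form of such a fisheye mapping, used here only as an illustrative stand-in since fisheye models vary, places the image point at radial distance r_d = func(θ) from the distortion center (uc, vc) along the azimuth φ of the incident ray:

$$ u_{c1} = u_c + r_d\cos\phi, \qquad v_{c1} = v_c + r_d\sin\phi, \qquad r_d = \mathrm{func}(\theta), $$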
where uc1 represents ureal and vc1 represents vreal. A radial distortion correction model is shown in
$r_d = r_0\left(1 + k_1 r_0^2 + k_2 r_0^4 + k_3 r_0^6 + \cdots\right)$  (3)
The point r0 is determined using the pinhole model discussed above and includes the intrinsic and extrinsic parameters mentioned. The model of equation (3) is an even-order polynomial that converts the point r0 to the point rd in the image plane 72, where the k values are the parameters that need to be determined to provide the correction, and where the number of parameters k defines the degree of correction accuracy. The calibration process is performed in a laboratory environment for the particular camera to determine the parameters k. Thus, in addition to the intrinsic and extrinsic parameters for the pinhole camera model, the model of equation (3) includes the additional parameters k to determine the radial distortion. The non-severe radial distortion correction provided by the model of equation (3) is typically effective for wide FOV cameras, such as 135° FOV cameras. However, for ultra-wide FOV cameras, i.e., 180° FOV, the radial distortion is too severe for the model of equation (3) to be effective. In other words, when the FOV of the camera exceeds some value, for example, 140°-150°, the value r0 goes to infinity as the angle θ approaches 90°. For ultra-wide FOV cameras, a severe radial distortion correction model shown in equation (4) has been proposed in the art to provide correction for severe radial distortion.
$r_d = p_1\,\theta_0 + p_2\,\theta_0^3 + p_3\,\theta_0^5 + \cdots$  (4)

The values p in equation (4) are the parameters that are determined. Thus, the incidence angle θ is used to provide the distortion correction based on the parameters calculated during the calibration process.
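As a brief illustration of how the two distortion models above might be evaluated in code, the following sketch implements equations (3) and (4) truncated to three terms; the coefficient arrays are placeholders rather than calibrated values.

```python
# Illustrative evaluation of the two radial distortion models; the coefficients
# k and p are placeholders, not calibrated values.
def rd_wide_fov(r0, k):
    """Equation (3): even-order polynomial in r0 for non-severe (wide FOV) distortion."""
    return r0 * (1.0 + k[0] * r0**2 + k[1] * r0**4 + k[2] * r0**6)

def rd_ultra_wide_fov(theta0, p):
    """Equation (4): odd-order polynomial in the incident angle for severe (ultra-wide FOV) distortion."""
    return p[0] * theta0 + p[1] * theta0**3 + p[2] * theta0**5
```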
Various techniques are known in the art to provide the estimation of the parameters k for the model of equation (3) or the parameters p for the model of equation (4). For example, in one embodiment a checkerboard pattern is used and multiple images of the pattern are taken at various viewing angles, where each corner point in the pattern between adjacent squares is identified. Each of the points in the checkerboard pattern is labeled and the location of each point is identified in both the image plane and the object space in world coordinates. The calibration of the camera is obtained through parameter estimation by minimizing the error distance between the real image points and the reprojection of the 3D object space points.
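A minimal sketch of this parameter estimation step is shown below, assuming the camera pose (R, t) and the distortion center are already known and only the equation-(4) parameters p are being fit; the data arrays and the initial guess are hypothetical, and a full calibration would refine the intrinsic and extrinsic parameters jointly.

```python
# Sketch: estimate the equation-(4) distortion parameters p by minimizing the
# reprojection error between observed checkerboard corners and reprojected
# 3D object-space points. R, t, center, and the point arrays are assumed inputs.
import numpy as np
from scipy.optimize import least_squares

def reproject(points_3d, R, t, center, p):
    """Project 3D checkerboard points through the equation-(4) fisheye model."""
    cam = points_3d @ R.T + t                                    # world -> camera coordinates
    theta = np.arccos(cam[:, 2] / np.linalg.norm(cam, axis=1))   # incident angle from optical axis
    phi = np.arctan2(cam[:, 1], cam[:, 0])                       # azimuth of the incident ray
    r_d = p[0] * theta + p[1] * theta**3 + p[2] * theta**5       # equation (4)
    u = center[0] + r_d * np.cos(phi)
    v = center[1] + r_d * np.sin(phi)
    return np.stack([u, v], axis=1)

def calibrate_p(points_3d, points_2d, R, t, center, n_terms=3):
    """Fit the parameters p by least-squares minimization of the reprojection error."""
    def residuals(p):
        return (reproject(points_3d, R, t, center, p) - points_2d).ravel()
    p0 = np.zeros(n_terms)
    p0[0] = 300.0                                                # rough focal-length-like guess (pixels/radian)
    return least_squares(residuals, p0).x
```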
In block 63, the real incident ray angles (θreal and φreal) are determined from the real camera model. The corresponding incident ray will be represented by (θreal, φreal).
Block 64 represents a conversion process (described in
In block 65, a virtual incident ray angle θvirt and corresponding φvirt are determined. If there is no virtual tilt and/or pan, then (θvirt, φvirt) will be equal to (θreal, φreal). If virtual tilt and/or pan are present, then adjustments must be made to determine the virtual incident ray. The virtual incident ray will be discussed in detail later.
In block 66, once the incident ray angle is known, then view synthesis is applied by utilizing a respective camera model (e.g., pinhole model) and respective non-planar imaging surface (e.g., cylindrical imaging surface).
In block 67, the virtual incident ray that intersects the non-planar surface is determined in the virtual image. The coordinate of the virtual incident ray intersecting the virtual non-planar surface, as shown on the virtual image, is represented as (uvirt, vvirt). As a result, a pixel on the virtual image (uvirt, vvirt) is mapped to a corresponding pixel on the real image (ureal, vreal). Once the correlating pixel location is determined, the scene of the virtual image may be synthesized by utilizing a pixel value of the real image and applying that pixel value to the corresponding pixel in the virtual image. Alternatively, the pixel value of the virtual image may be generated by interpolating the pixel values of neighboring pixels of the corresponding location in the real image and applying the interpolated pixel value to the corresponding pixel in the virtual image.
It should be understood that while the above flow diagram represents view synthesis by obtaining a pixel in the real image and finding a correlation to the virtual image, the reverse order may be performed when utilized in a vehicle. That is, not every point on the real image may be utilized in the virtual image, due to the distortion and the focus on only a respective highlighted region (e.g., a cylindrical/elliptical shape). Therefore, if processing takes place with respect to those points that are not utilized, time is wasted processing pixels that are not used. Therefore, for in-vehicle processing of the image, the reverse order is performed. That is, a location is identified in the virtual image and the corresponding point is identified in the real image. The following describes the details for identifying a pixel in the virtual image and determining a corresponding pixel in the real image; a sketch of the resulting per-pixel reverse mapping is given after the incident ray derivations below.
where uvirt is the virtual image point u-axis (horizontal) coordinate, fu is the u direction (horizontal) focal length of the camera, and u0 is the image center u-axis coordinate.
Next, the vertical projection of angle θ is represented by the angle β. The formula for determining angle β follows the rectilinear projection as follows:
where vvirt is the virtual image point v-axis (vertical) coordinate, fv is the v direction (vertical) focal length of the camera, and v0 is the image center v-axis coordinate.
The incident ray angles can then be determined by the following formulas:
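The exact formulas depend on the chosen imaging surface; the following sketch shows one consistent computation, assuming a unit-radius cylindrical imaging surface in which the horizontal angle is linear in the u coordinate and the vertical direction follows the rectilinear relation given above. The function and parameter names are illustrative.

```python
# Sketch of mapping a virtual-image pixel to incident ray angles, assuming a
# unit-radius cylindrical imaging surface. One consistent formulation only;
# the intrinsic values fu, fv, u0, v0 are placeholders.
import numpy as np

def virtual_pixel_to_ray(u_virt, v_virt, fu, fv, u0, v0):
    alpha = (u_virt - u0) / fu               # horizontal angle on the cylinder
    beta = np.arctan((v_virt - v0) / fv)     # vertical angle (rectilinear projection)
    # Direction of the incident ray toward a point on the unit cylinder.
    d = np.array([np.sin(alpha), np.tan(beta), np.cos(alpha)])
    d = d / np.linalg.norm(d)
    theta_virt = np.arccos(d[2])             # angle from the optical axis
    phi_virt = np.arctan2(d[1], d[0])        # azimuth about the optical axis
    return theta_virt, phi_virt
```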
As described earlier, if there is no pan or tilt between the optical axis 70 of the virtual camera and the real camera, then the virtual incident ray (θvirt, φvirt) and the real incident ray (θreal, φreal) are equal. If pan and/or tilt are present, then compensation must be made to correlate the projection of the virtual incident ray and the real incident ray.
For each determined virtual incident ray (θvirt, φvirt), any point on the incident ray can be represented by the following matrix:
where ρ is the distance of the point from the origin.
The virtual pan and/or tilt can be represented by a rotation matrix as follows:
where α is the pan angle and β is the tilt angle. It should be understood that the rotation matrix described herein is just one exemplary rotation matrix, and other rotation matrices that include other pose (view angle) changes in addition to tilt and/or pan may be used. For example, a rotation matrix may include three-degree-of-freedom rotation changes or may include an entire position change.
After the virtual pan and/or tilt rotation is identified, the coordinates of the same point on the same incident ray (now expressed for the real camera) will be as follows:
The new incident ray angles in the rotated coordinate system will be as follows:
As a result, a correspondence is determined between (θvirt, φvirt) and (θreal, φreal) when tilt and/or pan is present with respect to the virtual camera model. It should be understood that the correspondence between (θvirt, φvirt) and (θreal, φreal) is not related to any specific point at distance ρ on the incident ray. The real incident ray angles are only related to the virtual incident ray angles (θvirt, φvirt) and the virtual pan and/or tilt angles α and β.
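A short sketch of this angle-only correspondence is shown below: the virtual incident ray is written as a unit direction vector, rotated by an assumed pan/tilt rotation, and the real incident ray angles are read off the rotated direction, independent of ρ. The axis conventions (pan about the vertical axis, tilt about the horizontal axis) and the order of the two rotations are illustrative assumptions.

```python
# Illustrative pan/tilt compensation: rotate the virtual incident ray direction
# and recover the real incident ray angles. Axis conventions are assumptions.
import numpy as np

def virtual_to_real_angles(theta_virt, phi_virt, pan, tilt):
    # Unit direction of the virtual incident ray (z along the virtual optical axis).
    d = np.array([np.sin(theta_virt) * np.cos(phi_virt),
                  np.sin(theta_virt) * np.sin(phi_virt),
                  np.cos(theta_virt)])
    ca, sa = np.cos(pan), np.sin(pan)
    cb, sb = np.cos(tilt), np.sin(tilt)
    R_pan = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])    # pan about vertical (y) axis
    R_tilt = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])   # tilt about horizontal (x) axis
    d_real = R_tilt @ R_pan @ d          # same ray expressed in the real camera frame
    theta_real = np.arccos(np.clip(d_real[2], -1.0, 1.0))
    phi_real = np.arctan2(d_real[1], d_real[0])
    return theta_real, phi_real
```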
Once the real incident ray angles are known, the intersection of the respective light rays on the real image may be readily determined as discussed earlier. The result is a mapping of a virtual point on the virtual image to a corresponding point on the real image. This process is performed for each point on the virtual image to identify the corresponding point on the real image and generate the resulting image.
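Putting the pieces together, a minimal sketch of the per-pixel reverse mapping (the in-vehicle order described earlier) is given below, assuming a mapping function virtual_to_real that encapsulates the camera-model and pan/tilt steps above; the bilinear interpolation corresponds to the alternative pixel-value generation described for block 67.

```python
# Illustrative reverse-order view synthesis: iterate over virtual-image pixels,
# look up the corresponding real-image location, and sample the real image.
# `virtual_to_real` stands in for the camera-model mapping derived in the text.
import numpy as np

def synthesize(real_image, virtual_size, virtual_to_real):
    """virtual_to_real(u_virt, v_virt) -> (u_real, v_real) in real-image coordinates."""
    h_virt, w_virt = virtual_size
    h_real, w_real = real_image.shape[:2]
    out = np.zeros((h_virt, w_virt) + real_image.shape[2:], dtype=real_image.dtype)
    for v_virt in range(h_virt):
        for u_virt in range(w_virt):
            u_real, v_real = virtual_to_real(u_virt, v_virt)
            if not (0 <= u_real < w_real - 1 and 0 <= v_real < h_real - 1):
                continue                          # ray falls outside the real image
            # Bilinear interpolation of the four neighboring real-image pixels.
            u0, v0 = int(u_real), int(v_real)
            du, dv = u_real - u0, v_real - v0
            out[v_virt, u_virt] = (
                (1 - du) * (1 - dv) * real_image[v0, u0]
                + du * (1 - dv) * real_image[v0, u0 + 1]
                + (1 - du) * dv * real_image[v0 + 1, u0]
                + du * dv * real_image[v0 + 1, u0 + 1])
    return out
```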
In comparison, a point on an ellipse-shaped image surface that is intersected by the incident ray is as follows:
The incident angle (of the horizontal component) α in the ellipse model is as follows:
The determination of the vertical component β, and of θvirt and φvirt, proceeds in the same manner as described above:
While certain embodiments of the present invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.
This application claims priority of U.S. Provisional Application Ser. No. 61/712,433 filed Oct. 11, 2012, the disclosure of which is incorporated by reference.