The present invention is directed to a vehicle rear viewing system and more particularly, to a method and apparatus for distortion correction and image enhancing of a vehicle rear viewing system.
Operating a motor vehicle in reverse can be a frustrating and difficult task. The frustration largely results from a driver's inability to see objects behind the vehicle while proceeding in reverse, despite rearview mirrors and windows. The areas blocked from a driver's view result from the vehicle structure, which creates dead-angle areas hidden from the vehicle's mirrors (referred to as blind spots), and from other causes.
The increasingly popular sport utility vehicle (“SUV”) suffers from an even higher degree of difficulty in seeing while traveling in reverse as compared to a passenger vehicle. To aid the driver in becoming more aware of the surroundings behind the SUV, rearview camera systems have been proposed. Such camera systems display the rearview camera image to the driver, but they use a wide-angle lens that distorts the rearview image.
The present invention relates to an image enhancing system for a vehicle comprising a display unit for displaying modified images and an imaging device for receiving captured images enhanced by the image enhancing system. The system further includes an image enhancing module in communication with the display unit and the imaging device such that pixels located in the captured images are enhanced by repositioning the pixels from a first position to a second position via a transfer operation.
The present invention also relates to a method of enhancing an image in a vehicle imaging system comprising the steps of receiving captured images by at least one imaging device located on the vehicle and communicating the captured images to an image enhancing module. The method further includes enhancing the captured images such that pixels located in the captured images are repositioned from a first position to a second position by a transfer operation to form modified images. The method further communicates the modified images from the enhancing module to a display unit located in the vehicle.
The present invention further relates to an image enhancing system for improving images received by an imaging device located on a vehicle. The image enhancing system comprises at least one camera located on the vehicle. The camera includes a pixel array for receiving real-time captured images within the camera field of view. The image enhancing system further comprises a computing unit having an image enhancing module in communication with the camera for improving the captured images by repositioning pixels in the captured images from a first position to a second position in accordance with a transfer operation performed by the image enhancing module, such that the repositioning of the pixels forms enhanced images. A display unit is in communication with the computing unit and is located within the vehicle for displaying the enhanced images.
The present invention further relates to a rearview image enhancing system for a vehicle comprising a display unit for displaying modified images enhanced by the image enhancing system and an imaging device for receiving captured images enhanced by the image enhancing system. The system further comprises an image enhancing module in communication with the display unit and the imaging device such that pixels located in the captured images are clustered and segmented to form at least one area of interest by referencing the pixels from a ground plane in the captured images to form the modified images.
The foregoing and other features and advantages of the present invention will become apparent to those skilled in the art to which the present invention relates upon reading the following description with reference to the accompanying drawings, in which:
Referring to
While
The FOV captured by the imaging device 20 is processed and enhanced by an image enhancing module 26 associated with the rearview enhancing system 12 in accordance with the control process illustrated in
The image enhancing module 26 is located in the vehicle 10 and includes processing capabilities performed by a computing unit 30, such as a digital signal processor (DSP), a field programmable gate array (FPGA), a microprocessor, an application specific integrated circuit (ASIC), or a combination thereof. The computing unit 30 can be programmed, for example, by computer readable media such as software or firmware embedded into a microprocessor with flash read-only memory (ROM), or by a binary image file that can be programmed by a user. The image enhancing module 26 can be integral with the imaging device 20 or the display unit 14, or remotely located in communication (wired or wireless) with both the imaging device and the display unit.
The initiation of the rearview enhancing system 12 of
Optical Distortion Correction
Optical distortion correction in step 40 is one enhancing function applied to the continuous images 36 by the image enhancing module 26. Optical distortion correction 40 facilitates the removal of the perspective effect and visual distortion caused by the wide-angle lens used in the imaging device 20. The optical distortion correction 40 utilizes a mathematical model of the distortion to determine the correct position of the pixels captured in the continuous images 36. The mathematical model also corrects pixel positions in the continuous images 36 for differences between the width and height of a pixel unit, i.e., the pixel aspect ratio produced by wide-angle lenses.
The optical distortion correction in step 40 uses a mathematical model in which each actual pixel position captured in the continuous images 36, represented as an image point \((X_a, Y_a)\), is transferred to a corrected position \((X_c, Y_c)\), where

\[ X_c = s\cos\varphi \, X_a \left(1 + k_1\rho^2 + k_2\rho^4\right) \tag{Eq. 1} \]

\[ Y_c = \left(s\sin\varphi \, X_a + \cos\varphi \, Y_a\right)\left(1 + k_1\rho^2 + k_2\rho^4\right) \tag{Eq. 2} \]

In the above equations, \(s\) is the aspect ratio of a pixel unit and \(\varphi\) is the rectification angle. The lens distortion coefficients are \(k_1\) and \(k_2\), and

\[ \rho = \sqrt{(s\cos\varphi \, X_a)^2 + (s\sin\varphi \, X_a + \cos\varphi \, Y_a)^2} \tag{Eq. 3} \]
For certain lenses used by the imaging device 20, the distortion coefficient values \(k_1\) and \(k_2\) can be predetermined to help eliminate the barrel distortion created by the wide-angle lens. The distortion coefficient values are used for real-time correction of the continuous images 36, e.g., via floating-point calculations. Those skilled in the art will appreciate that these values can also be used to generate offline lookup tables.
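For illustration, Eqs. 1-3 translate directly into a short routine. The following Python sketch is not from the patent; the coefficient values shown are hypothetical placeholders, since the real \(s\), \(\varphi\), \(k_1\), and \(k_2\) come from calibrating the particular wide-angle lens. It also shows the offline lookup-table variant mentioned above.

```python
import numpy as np

def correct_point(xa, ya, s, phi, k1, k2):
    """Transfer an actual pixel position (Xa, Ya) to its corrected
    position (Xc, Yc) per Eqs. 1-3."""
    u = s * np.cos(phi) * xa                     # s*cos(phi)*Xa term
    v = s * np.sin(phi) * xa + np.cos(phi) * ya  # s*sin(phi)*Xa + cos(phi)*Ya
    rho = np.hypot(u, v)                         # Eq. 3
    radial = 1.0 + k1 * rho**2 + k2 * rho**4     # common radial factor
    return u * radial, v * radial                # Eqs. 1 and 2

def build_lookup_table(width, height, s, phi, k1, k2):
    """Precompute corrected positions for every pixel so that real-time
    correction becomes a simple table lookup (the offline variant)."""
    ya, xa = np.mgrid[0:height, 0:width].astype(float)
    xa -= width / 2.0    # measure rho from the center of distortion
    ya -= height / 2.0
    return correct_point(xa, ya, s, phi, k1, k2)

# Hypothetical coefficient values, for illustration only.
xc, yc = correct_point(120.0, -45.0, s=1.05, phi=0.01, k1=-2.5e-7, k2=3.0e-13)
```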
The distortion coefficient values \(k_1\) and \(k_2\) can be further tuned by using an image captured in the continuous images 36 having known straight lines, for example, a center of distortion 42 illustrated in
Referring to
Inverse Perspective Projection
Inverse perspective projection 44 is another enhancing function applied to the continuous images 36 by the image enhancing module 26. The angle of view acquired by the imaging device 20 and the distance of objects therefrom generate at least two imaging problems: first, the angle of view causes a different information content to be associated with each captured pixel, and second, wide-angle lenses produce a perspective effect. To resolve these problems, the inverse perspective projection 44 applies a geometrical transform, or inverse perspective mapping (IPM) transform 46, to remove the perspective effect from the acquired image, remapping it into a new two-dimensional domain, or remapped domain 48.
wherein \(x_p\) and \(y_p\) are the remapped coordinates projected into the remapped domain 48 and the parameter \(f\) is the distance of the remapped domain 48 from the origin \(O\) along the \(Z\), or optical, axis of the imaging device 20.
Projection of all the pixels from the distorted image onto the remapped domain 48 via the IPM transform 46 creates an enhanced image plane 50, in which the pixel information content is homogeneously distributed among all the pixels. The enhanced image plane 50 represents one of the several enhanced image planes that form the continuous enhanced images 38 sent to the display unit of the vehicle 10. The inverse perspective projection 44 utilizing the IPM transform 46 can be performed on the captured continuous images 36 in isolation or after the optical distortion correction 40 is performed on the images as illustrated in
The application of the IPM transform 46 requires information relating to the specific acquisition conditions (e.g., imaging device position, orientation, and optics) and requires some a priori assumptions about the scene represented in the image. As such, the IPM transform 46 can be used in structured environments where, for example, the imaging device 20 is mounted in a fixed position, or in situations where the calibration of the system and the surrounding environment can be sensed via other types of sensors.
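The IPM equations themselves are not reproduced in the text above, but the remapping step they drive can be sketched as follows. This is an assumption-laden illustration rather than the patent's implementation: the 3×3 matrix `homography` is presumed to have been derived offline from the known acquisition conditions (mounting position, orientation, optics), and it maps each pixel of the remapped domain 48 back to its source pixel in the corrected image.

```python
import numpy as np

def ipm_remap(image, homography, out_shape):
    """Project the corrected image into the remapped (bird's-eye) domain.

    homography maps homogeneous output coordinates back to source
    coordinates; deriving it requires the a priori scene assumptions
    noted in the text.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N
    src = homography @ pts
    sx = (src[0] / src[2]).round().astype(int)  # source column per output pixel
    sy = (src[1] / src[2]).round().astype(int)  # source row per output pixel
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros((h_out * w_out,) + image.shape[2:], dtype=image.dtype)
    out[valid] = image[sy[valid], sx[valid]]    # nearest-neighbor sampling
    return out.reshape((h_out, w_out) + image.shape[2:])
```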
The image captured at the camera is analyzed to determine the distance of objects within the image from the vehicle. To this end, the two-dimensional image plane is translated to a three-dimensional camera coordinate system 92, with the x-axis and y-axis of the camera coordinate system 92 representing the vertical and horizontal axes, respectively, within the image plane 84. A z-axis extends along the field of view of the camera, normal to the image plane 84. Because of the pitch of the camera and its location within the vehicle, the camera coordinate system 92 is slightly rotated around the X-axis and translated vertically, relative to the world coordinate system 86, by a distance equal to the height, H, of the camera above the ground.
To simplify the transformation of the image into camera coordinates, it is helpful to assume that the ground plane 90 is a flat surface. For a camera 88 having pixels of width \(w_u\) and height \(w_v\) and a focal length \(f\), the relationship between camera coordinates \((x, y, z)\) and image coordinates \((u, v)\) can be expressed as:
where \((u_0, v_0)\) represents the center point of the image plane.
A horizon line, \(v_h\), within the image plane can be determined from the characteristics of the camera as described above and the known pitch angle, \(\Theta\), of the camera as:
From Eq. 7 above, Eq. 8 can be rewritten as:
Since the camera coordinate system 92 represents the world coordinate system 86 with a rotation around the X-axis equal to the pitch angle, \(\Theta\), and a translation along the Y-axis equal to the camera height, \(H\), the translation between the camera coordinates and the world coordinates can be represented as \(X = x\), \(Y = y\cos(\Theta) - H\), and \(Z = z\cos(\Theta)\).
Accordingly, from Eq. 9 above, the distance, \(d\), between a given point on the ground plane \((X, Y = 0, Z = d)\) and the vehicle can be expressed as:
In practice, to determine the distance to a given object within the image, the intersection of the object with the ground plane can be determined, and the vertical location of the intersection within the image can be utilized to calculate the distance, d, to the object. Accordingly, the distance of various objects within the field of view of the camera can be determined without the use of additional sensors, significantly reducing the cost of the system 80.
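Since Eqs. 7-10 are not reproduced in the text above, the sketch below uses the standard flat-ground pinhole relation that such equations embody; treat its exact form as an assumption rather than the patent's Eq. 10. Given the row \(v\) at which an object meets the ground plane, the camera pitch \(\Theta\), height \(H\), focal length \(f\), and pixel height \(w_v\), the distance \(d\) follows directly:

```python
import math

def ground_distance(v, v0, f, wv, theta, H):
    """Distance d to the point where an object meets the ground plane.

    v     -- image row of the object/ground intersection (below the horizon)
    v0    -- image row of the principal point (center of the image plane)
    f     -- focal length, in the same units as wv
    wv    -- height of a pixel unit
    theta -- camera pitch angle below horizontal, in radians
    H     -- camera mounting height above the ground
    """
    alpha = math.atan((v - v0) * wv / f)  # ray angle below the optical axis
    return H / math.tan(theta + alpha)    # flat-ground assumption
```

With such a relation, the vertical image location of each object/ground intersection yields \(d\) without any ranging sensor, consistent with the cost argument above.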
For the rearview enhancing system 12 of the present invention, the intrinsic and extrinsic parameters are known from prior calibrations, including the mounting position, yaw, and pitch angles of the imaging device 20. The ground near the vehicle 10 is assumed to be planar, and the IPM transform 46 is employed on the original captured continuous image 36 illustrated in
Novel View Projection
Novel view projection 54 is another enhancing function applied to the continuous images 36 by the image enhancing module 26. The perspective effect caused by wide-angle lenses produces distortions in the size of imaged objects. Such distortion problems are not eliminated, or even addressed, in conventional imaging sensors.
The image enhancing module's employment of the novel view projection function 54 eliminates such distortion problems. The novel view projection 54 generates new images from a top view and a side view so that the distance from the back of the vehicle 10 to any object or obstacle 56 is linearly proportional to the displayed distance, without distortion, as illustrated in
Novel view projection 54 can be applied to the captured continuous images 36 in isolation or in combination with the other functions performed by the image enhancing module 26, including inverse perspective projection 44, optical distortion correction 40, or both.
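As a toy illustration of why a synthesized top view removes the size distortion, note that a top view renders ground-plane geometry at a fixed scale, so image distance becomes a pure linear function of real distance. The function below is hypothetical (names and scale are not from the patent):

```python
def top_view_pixel(X, Z, px_per_meter, view_width):
    """Map a ground point (X lateral, Z behind the bumper, in meters) to a
    pixel of the synthesized top view. The mapping is a pure scaling, so
    displayed distance stays linearly proportional to real distance."""
    col = int(view_width / 2 + X * px_per_meter)
    row = int(Z * px_per_meter)   # row 0 corresponds to the rear bumper
    return row, col
```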
Image Clustering and Non-ground Segmentation
Image clustering 60 and non-ground segmentation 62 are two additional enhancing functions applied to the continuous images 36 by the image enhancing module 26. The image clustering 60 and non-ground segmentation 62 functions use multiple image cues, including color, texture, and pixel intensity, to find pixel regions, blobs, or clusters that share similar characteristics, which are then tagged or labeled. Features are extracted from each pixel region/blob/cluster to form an area of interest. Detection and recognition are performed based on the extracted features, using such techniques as template matching, modeling, pattern recognition, and the like.
The rearview enhancing system 12, and more specifically the image enhancing module 26, employs the above techniques in image clustering 60 and non-ground segmentation 62 to separate and recognize the ground and non-ground regions to obtain areas of interest.
The ground region can be mapped by assuming that all of its pixels lie on the ground plane. The regions identified by the non-ground segmentation 62 can be further clustered and recognized as certain objects, for example obstacles, pedestrians, etc. These recognized non-ground objects can pop up on the display unit 14, similar to the obstacle 56 illustrated in
The non-ground regions are assumed to be planar surfaces that can be remapped after exercising the IPM transform 46 on the captured continuous image 36. Other non-ground regions are also analyzed by the image enhancing module 26 using image clustering 60 and non-ground segmentation 62, such as in pedestrian and obstacle detection 64, projected backup curve detection 66, and parking lane/garage wall detection 68, which detects parking lane marks on the ground or walls of parking spaces located in, for example, a parking garage. The image enhancement allows objects, lines, and walls to stand out from the planar surface, making them more easily detected or recognized by the operator 18.
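The text does not fix a particular clustering algorithm, so the sketch below illustrates only the general cluster-then-label flow with a single cue (intensity); the production module would combine color, texture, and intensity cues with template matching or other recognition. The `ground_model` input is a hypothetical per-row estimate of ground appearance, e.g., the per-row median of a reference frame.

```python
import numpy as np
from scipy import ndimage

def segment_non_ground(gray, ground_model, tol=18, min_area=50):
    """Tag non-ground pixels and cluster them into labeled blobs
    (candidate areas of interest)."""
    # Pixels whose intensity departs from the expected ground appearance
    # are treated as non-ground.
    non_ground = np.abs(gray.astype(float) - ground_model[:, None]) > tol
    # Group neighboring non-ground pixels into connected blobs.
    labels, n = ndimage.label(non_ground)
    # Keep only blobs large enough to be meaningful areas of interest.
    sizes = ndimage.sum(non_ground, labels, index=range(1, n + 1))
    keep = [i + 1 for i, size in enumerate(sizes) if size >= min_area]
    return labels, keep
```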
Information Fusion
Utilizing known information provided by the steering angle 67 of the vehicle 10, a backup path 70 can be predicted by the computing unit 30 and shown on the display unit 14. This information is used in a collision warning module 72, another function performed by the image enhancing module 26. The pedestrian and obstacle detection 64, projected backup curve detection 66, and parking lane/garage wall detection 68 described above, made possible by the image clustering 60 and non-ground segmentation 62 techniques, are also used by the collision warning module 72. The computing unit 30 can predict the potential for a collision and highlight the obstacles or pedestrians found in the backup path 70, warning the operator 18 on the display unit 14.
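One simple way to realize such a prediction, offered here as an assumption rather than the patent's specific method, is a kinematic bicycle model: the turning radius follows from the wheelbase and the steering angle, and the backup path 70 is an arc of that radius behind the vehicle.

```python
import numpy as np

def predict_backup_path(steer_angle, wheelbase, length=5.0, n=20):
    """Predict the backup path as ground-plane points (x lateral, z rear).

    Uses the bicycle-model turning radius R = wheelbase / tan(steer_angle);
    the resulting points can be projected into the display with the same
    camera model used for distance estimation above.
    """
    s = np.linspace(0.0, length, n)        # arc length traveled, meters
    if abs(steer_angle) < 1e-4:            # effectively straight back
        return np.zeros_like(s), s
    R = wheelbase / np.tan(steer_angle)
    x = R * (1.0 - np.cos(s / R))          # lateral deviation of the path
    z = R * np.sin(s / R)                  # distance behind the bumper
    return x, z
```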
Similar to the collision warning module 72, the rearview enhancing system 12 provides a parking assist module 74. The pedestrian and obstacle detection 64, projected backup curve detection 66, and parking lane/garage wall detection 68 described above, made possible by the image clustering 60 and non-ground segmentation 62 techniques, are used by the parking assist module 74. The parking assist module 74 can facilitate backing into a parking space or parallel parking in a manual or automated operating mode of the vehicle 10.
Scene Reconstruction and Image Synthesis
From the above description of the invention, those skilled in the art will perceive improvements, changes and modifications. Such improvements, changes and modifications within the skill of the art are intended to be covered by the appended claims.