The subject disclosure relates to dynamic adjustment of an augmented reality image.
Vehicles (e.g., automobiles, trucks, construction equipment, farm equipment) are increasingly equipped to obtain information about the vehicle and its surroundings and to provide information to a person in the vehicle. Exemplary interfaces include an infotainment system and a head-up display (HUD). The infotainment system may include a monitor to display a map or an image obtained by a camera of the vehicle. The HUD involves a projection of an image onto the windshield of the vehicle, for example. An augmented reality (AR) image refers to an image that enhances a real-world image. For example, the image of the road ahead obtained by a camera in the front of the vehicle can be augmented with an AR image of an arrow to indicate an upcoming turn. As another example, instead of displaying speed or other data, the HUD may project an AR image that augments what the driver sees through the windshield. Road dynamics or other sources of changes in the relative orientation of the vehicle to the outside world can lead to the AR image being displayed in the wrong place (e.g., the arrow indicating direction is projected onto the side of the road in the camera image rather than onto the road). Accordingly, it is desirable to provide dynamic adjustment of an AR image.
In one exemplary embodiment, a system to adjust an augmented reality (AR) image includes a front-facing camera in a vehicle to obtain an image that includes a road surface ahead of the vehicle. The system also includes a processor to detect lane lines of the road surface in three dimensions, to perform interpolation using the lane lines to determine a ground plane of the road surface, and to adjust the AR image to obtain an adjusted AR image such that all points of the adjusted AR image correspond with points on the ground plane.
In addition to one or more of the features described herein, the processor obtains a base image that shows the road surface.
In addition to one or more of the features described herein, the processor adds the adjusted AR image to the base image for display to a driver of the vehicle.
In addition to one or more of the features described herein, the processor obtains the base image from the camera or a second front-facing camera in the vehicle.
In addition to one or more of the features described herein, the processor monitors eye position of a driver of the vehicle.
In addition to one or more of the features described herein, the processor determines a head-up display region on a windshield of the vehicle to correspond with the eye position of the driver.
In addition to one or more of the features described herein, the processor projects the adjusted AR image in the head-up display region.
In addition to one or more of the features described herein, the processor adjusts the AR image based on a rotation and a translation of the points on the ground plane.
In addition to one or more of the features described herein, the processor detects an object on the road surface directly ahead of the vehicle from the image.
In addition to one or more of the features described herein, the processor selects or creates the adjusted AR image size based on the road surface between the vehicle and a position of the object.
In another exemplary embodiment, a method of adjusting an augmented reality (AR) image includes obtaining an image from a front-facing camera in a vehicle, the image including a road surface ahead of the vehicle. The method also includes detecting lane lines of the road surface in three dimensions, and performing interpolation using the lane lines to determine a ground plane of the road surface. The AR image is adjusted to obtain an adjusted AR image such that all points of the adjusted AR image correspond with points on the ground plane.
In addition to one or more of the features described herein, the method also includes obtaining a base image that shows the road surface.
In addition to one or more of the features described herein, the method also includes the processor adding the adjusted AR image to the base image for display to a driver of the vehicle.
In addition to one or more of the features described herein, the obtaining the base image is from the camera or a second front-facing camera in the vehicle.
In addition to one or more of the features described herein, the method also includes the processor monitoring eye position of a driver of the vehicle.
In addition to one or more of the features described herein, the method also includes the processor determining a head-up display region on a windshield of the vehicle to correspond with the eye position of the driver.
In addition to one or more of the features described herein, the method also includes the processor projecting the adjusted AR image in the head-up display region.
In addition to one or more of the features described herein, the adjusting the AR image is based on a rotation and a translation of the points on the ground plane.
In addition to one or more of the features described herein, the method also includes the processor detecting an object on the road surface directly ahead of the vehicle from the image.
In addition to one or more of the features described herein, the method also includes the processor selecting or creating the adjusted AR image size based on the road surface between the vehicle and a position of the object.
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
Other features, advantages and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
As previously noted, an AR image can be added to an image of the real world (e.g., obtained by a vehicle camera and displayed by the infotainment system) or can be projected onto a view of the real world (e.g., in the form of a HUD). Embodiments of the systems and methods detailed herein involve dynamic adjustment of the AR image. While driving, AR images may be shown on a road surface or other road feature. As also previously noted, an AR image may be presented in the wrong place on the image of the real world or in the view of the real world due to changes in the relative orientation of the vehicle to the road surface. This can happen if the road ahead is not flat. The relative orientation can also change when the vehicle has an inclination angle (e.g., due to a heavy load in the trunk) even if the road is flat. A change in the position of the vehicle relative to the road can also lead to the issue. Finally, the presence of other vehicles or objects in the field of view may require a change in the position or another aspect of the AR image.
A prior approach to addressing the relative orientation issue includes creating an accurate terrain map beforehand. Then, with accurate vehicle localization and inclination, the orientation of the vehicle to the road surface at a given position is known. However, this requires driving over all potential terrain to obtain the terrain map. Also, sensors are needed for the localization and inclination information. Further, the presence of other vehicles or objects in real time cannot be addressed with this approach. Another prior approach includes using a three-dimensional sensor such as a light detection and ranging (lidar) system. However, lidar systems are not generally available in vehicles, and the processing of data obtained by a lidar system is generally more processor intensive than the processing of data obtained by other sensors (e.g., camera, radar system).
According to one or more embodiments detailed herein, a single image from a single camera is used to obtain information about road geometry or structure for a section of road. The camera may be a front-facing camera that is already generally available in vehicles. Additional techniques to reduce lag and improve accuracy are also presented. According to an exemplary embodiment, road geometry is determined for upcoming sections of road that are captured by the camera. According to another exemplary embodiment, a large section of road can be divided into subsections such that the road geometry of each subsection is determined more accurately.
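As a rough illustration only (the text above describes the approach but not an implementation), the following Python sketch shows one way the single-image flow could be organized: lane-line points are detected in three dimensions, the long road section is split into subsections by forward distance, and a ground-plane estimate is produced per subsection. The function names, the NumPy dependency, and the axis convention are assumptions for illustration.

```python
import numpy as np

def split_into_subsections(lane_points_3d, n_subsections=4):
    """Divide 3D lane-line points covering a long road section into
    subsections by forward distance so that the geometry of each
    subsection can be determined more accurately.
    Assumed convention: column 1 of lane_points_3d is forward distance."""
    order = np.argsort(lane_points_3d[:, 1])
    return np.array_split(lane_points_3d[order], n_subsections)

def road_geometry_from_image(image, detect_lane_points_3d, interpolate_ground_plane):
    """Skeleton of the single-camera flow: detect lane lines in three
    dimensions from one image, then determine a ground plane per subsection.
    The two callables are hypothetical placeholders for the detection and
    interpolation steps described in the text."""
    lane_points_3d = detect_lane_points_3d(image)          # (N, 3) lane-line points
    subsections = split_into_subsections(lane_points_3d)
    return [interpolate_ground_plane(points) for points in subsections]
```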
In accordance with an exemplary embodiment,
The controller 110 may obtain an image 220 (
A HUD region 130 is shown on the windshield 135 of the vehicle 100. The HUD region 130 corresponds with the location of the eye 310 (
The complete image 230b shows the same road 160 ahead of the vehicle 100. However, as
At block 430, selecting the AR image 210 may refer to creating the image or choosing the image from predefined images. The road geometry information extracted and the object 320 information obtained at block 420 may affect the creation or selection of the AR image 210. For example, if there is an object 320 (e.g., another vehicle 100) directly ahead of the vehicle 100, the dimensions of the surface of the road 160 onto which the AR image 210 is projected may be reduced, which may limit the size of the AR image 210 that is created or selected.
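A minimal sketch of the size-limiting idea in the preceding paragraph, assuming distances are available in meters and an illustrative clearance margin; the exact rule is not specified in the text.

```python
def limit_ar_image_extent(desired_extent_m, object_distance_m=None, margin_m=2.0):
    """Limit the forward extent of the AR image 210 to the road surface that
    remains between the vehicle 100 and a detected object 320 directly ahead.
    margin_m is an illustrative clearance, not a value from the text."""
    if object_distance_m is None:            # no object ahead: keep the desired size
        return desired_extent_m
    available_m = max(object_distance_m - margin_m, 0.0)
    return min(desired_extent_m, available_m)
```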
At block 440, a homography is used to project the AR image 210 selected at block 430 to a correct location of a base image 225 or the HUD region 130. A homography relates pixel coordinates in two images. Thus, any two images of the same planar surface are related by a homography. Specifically, the rotation and translation between two points of view—one being perpendicular to the ground plane $P_G$ (i.e., a bird's-eye view) and the other being either the point of view of the camera that creates the base image 225 or the driver's point of view—is determined at block 440, as further detailed. The process at block 440 requires obtaining information, from block 450, resulting from monitoring the eye 310 or obtaining a base image 225. Then, a point $\vec{Q}$ in the real world is projected to the HUD region 130, or a point $\vec{Q}$ in the real world is projected to the coordinate system of the camera 120 that obtained the base image 225. This process at block 440 is the adjustment of the AR image 210 that considers the relative orientation of the vehicle 100 to the road 160 and is detailed with reference to
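As a hedged sketch of the projection described for block 440, the following assumes a pinhole camera model with an intrinsic matrix K (not given in the text) and uses the rotation R and translation t between the bird's-eye view of the ground plane and the camera that captures the base image 225.

```python
import numpy as np

def project_ground_point_to_base_image(Q, R, t, K):
    """Project a real-world point Q (3,) on the ground plane P_G into the pixel
    coordinates of the camera 120 that obtains the base image 225.
    R (3x3) and t (3,) are the rotation and translation between the bird's-eye
    point of view and the camera's point of view; K is an assumed 3x3 pinhole
    intrinsic matrix."""
    Q_cam = R @ np.asarray(Q, dtype=float) + t   # express Q in the camera frame
    uvw = K @ Q_cam                              # pinhole projection
    return uvw[:2] / uvw[2]                      # pixel coordinates (u, v)
```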
$$Q_z = \frac{\sum_{\vec{p}\in P}\bigl(1 - d(\vec{p}, Q_x, Q_y)\bigr)\,p_z}{\sum_{\vec{p}\in P}\bigl(1 - d(\vec{p}, Q_x, Q_y)\bigr)} \quad \text{[EQ. 1]}$$
In EQ. 1, as indicated, $\vec{p}$ is a point in the set $P$, and the function $d$ gives the Euclidean distance between two dimensions of the point $\vec{p}$ and the point $\vec{Q}$ (i.e., the distance computed over the $x$ and $y$ coordinates). The function $d$ is given by:
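A minimal NumPy sketch of the interpolation in EQ. 1 follows. Because EQ. 2 is not reproduced in this text, the distance d is assumed here to be a two-dimensional Euclidean distance normalized so that the weights (1 − d) remain non-negative; the set P is the set of detected lane-line points.

```python
import numpy as np

def interpolate_ground_height(Q_xy, lane_points_3d):
    """EQ. 1: estimate Q_z at (Q_x, Q_y) as a weighted average of the lane-point
    heights p_z, with weights (1 - d(p, Q_x, Q_y)).
    lane_points_3d: (N, 3) array of points p in the set P.
    The form of d (EQ. 2) is not reproduced here; a 2D Euclidean distance
    normalized to [0, 1] is assumed so the weights stay non-negative."""
    p_xy = lane_points_3d[:, :2]
    p_z = lane_points_3d[:, 2]
    dist = np.linalg.norm(p_xy - np.asarray(Q_xy, dtype=float), axis=1)
    d = dist / (dist.max() + 1e-9)               # assumed normalization
    weights = 1.0 - d
    return float(np.sum(weights * p_z) / (np.sum(weights) + 1e-9))
```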
In
When EQ. 3 is used to adjust an AR image 210 that is displayed in the HUD region 130, $f$ is a plane distance or two-dimensional distance, given by $f_x$ and $f_y$, between the HUD region plane $P_H$ and the location $\vec{e}$ of an eye 310 of the driver 330, as indicated in
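EQ. 3 itself is not reproduced in this text; the following is only a sketch of one plausible reading of the projection onto the HUD region plane, treating $f_x$ and $f_y$ like focal lengths and the eye location $\vec{e}$ as the center of projection. The axis convention is an assumption.

```python
import numpy as np

def project_point_to_hud(Q, eye, f_x, f_y):
    """Project a real-world point Q onto the HUD region plane P_H using the
    driver's eye location e as the center of projection.
    f_x, f_y: plane distances between P_H and the eye, used like focal lengths.
    Assumed convention: the z-axis points from the eye toward the road ahead."""
    ray = np.asarray(Q, dtype=float) - np.asarray(eye, dtype=float)
    u = f_x * ray[0] / ray[2]    # horizontal coordinate in the HUD plane
    v = f_y * ray[1] / ray[2]    # vertical coordinate in the HUD plane
    return u, v
```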
In
The process of adjusting the AR image 210 may be made more efficient according to exemplary embodiments. As one example, if an image 220 is obtained farther ahead of the vehicle 100, then the process at block 420 may be performed ahead of the process at block 440. Specifically, a precomputation may be performed:
$$A = [\,R \mid \vec{t}\,\,] \cdot \vec{Q} \quad \text{[EQ. 4]}$$
A involves the estimated rotation $R$ and translation $\vec{t}$ of the road 160 with respect to the point of view from which the base image 225 of the road 160 will be captured (when the vehicle 100 is closer to the area obtained by the image 220) and on which the AR image 210 is to be shown, in the exemplary case of a display 145. Then, at the time of display of the AR image 210, it can be confirmed whether the estimated pose A matches, within a specified threshold, the current pose ahead of the vehicle 100. If so, the second part of the computation may be performed as:
In this way the lag time associated with the processes of the method 400 may be reduced.
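As a hedged sketch of the precomputation described above (EQ. 4 followed by the pose check at display time), the following assumes poses are compared with a simple norm and that the final step is a pinhole projection with an assumed intrinsic matrix K; neither detail is specified in the text.

```python
import numpy as np

def precompute_A(Q_points, R_est, t_est):
    """EQ. 4: precompute A = [R | t] . Q for the AR points while the imaged road
    section is still well ahead, using the estimated rotation R_est (3x3) and
    translation t_est (3,) of the road 160 relative to the point of view from
    which the base image 225 will later be captured."""
    Q = np.asarray(Q_points, dtype=float)        # (N, 3) points of the AR image
    return (R_est @ Q.T).T + t_est

def complete_projection(A_points, pose_est, pose_now, K, threshold):
    """At display time, reuse the precomputed points only if the estimated pose
    still matches the current pose within the threshold; otherwise return None
    so the full computation can be redone. The pose-difference metric and the
    pinhole projection with K are illustrative assumptions."""
    if np.linalg.norm(np.asarray(pose_est) - np.asarray(pose_now)) > threshold:
        return None
    uvw = (K @ np.asarray(A_points, dtype=float).T).T
    return uvw[:, :2] / uvw[:, 2:3]              # pixel coordinates (u, v) per point
```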
While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.