CAMERA POSE ESTIMATION TECHNIQUES

Information

  • Patent Application
  • 20230410363
  • Publication Number
    20230410363
  • Date Filed
    September 06, 2023
  • Date Published
    December 21, 2023
Abstract
Techniques are described for estimating pose of a camera located on a vehicle. An exemplary method of estimating camera pose includes obtaining, from a camera located on a vehicle, an image including a lane marker on a road on which the vehicle is driven, and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.
Description
TECHNICAL FIELD

This document describes techniques to estimate pose of a camera located on or in a vehicle.


BACKGROUND

A vehicle may include cameras attached to the vehicle for several purposes. For example, cameras may be attached to a roof of the vehicle for security purposes, for driving aid, or for facilitating autonomous driving. Cameras mounted on a vehicle can obtain images of one or more areas surrounding the vehicle. These images can be processed to obtain information about the road or about the objects surrounding the vehicle. For example, images obtained by a camera can be analyzed to determine distances of objects surrounding the autonomous vehicle so that the autonomous vehicle can be safely maneuvered around the objects.


SUMMARY

This patent document describes exemplary techniques to estimate pose of a camera located on or in a vehicle. A method of estimating camera pose includes obtaining, from a camera located on a vehicle, an image including a lane marker on a road on which the vehicle is driven; and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.


In some embodiments, the first position corresponds to pixel locations associated with a corner of the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the corner of the lane marker. In some embodiments, the first position corresponds to pixel locations associated with the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker. In some embodiments, the distance is minimized by minimizing a sum of squared distances between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.


In some embodiments, the best match according to the criterion is determined by minimizing the function of a combination of the cost of misalignment term and of a cost of constraint term, where the cost of constraint term represents a constraint that limits the parameter search space, and where the cost of constraint term is determined by minimizing a difference between the pixel locations and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the method further includes generating a binary image from the image obtained from the camera; and generating a gray-scale image from the binary image, where the gray-scale image includes pixels with corresponding values, and a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image.


In some embodiments, the second position of the lane marker is determined based on the location of the vehicle, a direction in which the vehicle is driven, and a pre-determined field of view (FOV) of the camera. In some embodiments, the second position of the lane marker is determined by: obtaining, from the stored map and based on the location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of one or more lane markers from the first set of one or more lane markers based on the direction in which the vehicle is driven; obtaining a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined FOV of the camera; and obtaining the second position of the lane marker from the third set of one or more lane markers. In some embodiments, the third set of one or more lane markers excludes one or more lane markers determined to be obstructed by one or more objects.


In another exemplary aspect, the above-described methods are embodied in the form of processor-executable code and stored in a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium includes code that, when executed by a processor, causes the processor to implement the methods described in this patent document.


In yet another exemplary embodiment, a device that is configured or operable to perform the above-described methods is disclosed.


The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a block diagram to estimate a pose of a camera located on or in a vehicle.



FIG. 2 shows an exemplary system that includes a vehicle on a road.



FIG. 3 shows a flow diagram of operations performed to obtain the static lane marker.



FIG. 4 shows a flow diagram of operations performed to obtain the observed lane marker and to estimate camera pose.



FIG. 5 shows an exemplary flow diagram of operations to estimate camera pose.



FIG. 6 shows an exemplary block diagram of a computer located in a vehicle to estimate camera pose.





DETAILED DESCRIPTION

An autonomous vehicle includes cameras to obtain images of one or more areas surrounding the autonomous vehicle. These images can be analyzed by a computer on-board the autonomous vehicle to obtain distance or other information about the road or about the objects surrounding the autonomous vehicle. However, a camera's pose needs to be determined so that the computer on-board the autonomous vehicle can precisely or accurately detect an object and determine its distance.



FIG. 1 shows a block diagram to estimate a pose of a camera located on or in a vehicle. A camera's pose can be estimated in real-time as an autonomous vehicle is operated or driven on a road. In an autonomous vehicle, a plurality of cameras can be coupled to a roof of the cab to capture images of a region towards which the autonomous vehicle is being driven. Due to the non-rigidity of the mechanical structure through which the cameras can be coupled to the autonomous vehicle, the cameras can experience random vibration when the engine is on, when the autonomous vehicle is driven on the road, or from wind. FIG. 1 shows a block diagram that can be used to estimate a camera's pose (orientation and position) in six degrees-of-freedom (DoF). The six DoF parameters include three variables for orientation (e.g., roll, pitch, yaw) of the camera and three variables for translation (x, y, z) of the camera. A precise and robust real-time camera pose can have a significant impact on autonomous driving related applications such as tracking, depth estimation, and speed estimation of objects that surround the autonomous vehicle.
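The following is a minimal sketch, not taken from the patent, of one way to represent the six DoF pose described above as three rotation angles and a translation assembled into a 4×4 rigid transform; the function name pose_to_matrix and the use of SciPy are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(roll, pitch, yaw, x, y, z):
    """Build a 4x4 homogeneous transform from 6 DoF pose parameters (radians, meters)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# Example: a camera tilted slightly down (pitch) and mounted 1.5 m above a reference origin.
T_cam = pose_to_matrix(0.0, -0.02, 0.0, 0.0, 0.0, 1.5)
```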


On the top part of FIG. 1, at the operation 102, the exemplary camera pose estimation technique includes a high-definition (HD) map that can store information about the lane markers (shown as 210a, 210b in FIG. 2) on a road. The HD map can store information such as the three-dimensional (3D) world coordinates of the four corners (shown as 212a-212d in FIG. 2) of each lane marker. The HD map can be stored in a computer located in an autonomous vehicle, where the computer performs the camera pose estimation techniques described in this patent document.


At the operation 104, the localization can include a global positioning system (GPS) transceiver located in the autonomous vehicle that can provide a position or location of the autonomous vehicle in 3D world coordinates. The computer located in the autonomous vehicle can receive the position of the autonomous vehicle and can query (shown in the operation 106) the HD map (shown in the operation 102) to obtain the 3D positions of the corners of lane markers that are located within a pre-determined distance (e.g., 100 meters) of the autonomous vehicle. Based on the query (the operation 106), the computer can obtain position information about the corners of the lane markers. At the operation 108, the position information of the corners of each lane marker can be considered static lane marker information.
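As an illustration of the query in the operation 106, the sketch below (with an assumed array layout for the HD map corners and an assumed function name query_nearby_corners) keeps only the lane markers whose corners lie within a pre-determined radius of the vehicle position:

```python
import numpy as np

def query_nearby_corners(map_corners_3d, vehicle_position, radius_m=100.0):
    """map_corners_3d: (N, 4, 3) array, four 3D world-coordinate corners per lane marker.
    Returns the subset of lane markers whose centers are within radius_m of the vehicle."""
    centers = map_corners_3d.mean(axis=1)                      # (N, 3) marker centers
    dist = np.linalg.norm(centers - vehicle_position, axis=1)  # distance to the vehicle
    return map_corners_3d[dist <= radius_m]
```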


On the bottom part of FIG. 1, at the operation 110, the exemplary camera pose estimation technique includes an image that is obtained from a camera located on or in the autonomous vehicle. The computer located in the autonomous vehicle can obtain the image (the operation 110) from the camera and can perform a deep learning lane detection technique at the operation 112 to identify the lane markers and the two-dimensional positions of the corners of the lane markers in the image (the operation 110). The deep learning lane detection technique (the operation 112) can also identify the two-dimensional (2D) pixel location of the corners of each lane marker located in the image. Each identified lane marker and the pixel locations of the corners of each lane marker in the image (the operation 110) can be considered an observed lane marker at the operation 114. In an exemplary embodiment, the deep learning lane detection can include using a convolutional neural network (CNN) that can operate based on a base framework.
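The sketch below is not the patent's detector, but it shows how a thresholded lane-marker segmentation mask (e.g., the output of a CNN) could be reduced to four corner pixel locations per marker using OpenCV primitives; the function name and the rotated-rectangle approach are assumptions:

```python
import cv2
import numpy as np

def marker_corners_from_mask(binary_mask):
    """binary_mask: uint8 image where lane-marker pixels are 255.
    Returns a list of (4, 2) arrays of approximate corner pixel locations."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    corners = []
    for c in contours:
        rect = cv2.minAreaRect(c)            # fit a rotated rectangle to each marker blob
        corners.append(cv2.boxPoints(rect))  # its four corner points in pixel coordinates
    return corners
```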


In some embodiments, the computer located in the autonomous vehicle can perform data and image processing to obtain the static lane marker (the operation 108) and the observed lane marker (the operation 114) every 5 milliseconds. The computer located in the autonomous vehicle can perform the matching operation (the operation 116) to minimize the distance between the 3D world coordinates of at least one corner of a lane marker obtained from the HD map and the 2D pixel location of at least one corner of the corresponding lane marker in the image. The matching operation (the operation 116) can provide a best match or best fit between the lane marker obtained from the image and the corresponding lane marker obtained from the HD map. By minimizing the distance between a lane marker from the HD map and a corresponding lane marker from the image, the computer can obtain an estimated camera pose at the operation 118. The estimated camera pose can include values for the six DoF variables that describe a camera's pose.
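One way to realize the matching operation 116 is a nonlinear least-squares fit of the six DoF parameters to the reprojection error between map corners and detected corner pixels. The sketch below assumes the map corners have already been expressed in a vehicle-centered frame and that corner correspondences are known; the names project and estimate_pose are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, pose6, K):
    """Project (N, 3) points, given pose6 = (roll, pitch, yaw, x, y, z), into pixel coordinates."""
    R = Rotation.from_euler("xyz", pose6[:3]).as_matrix()
    pts_cam = points_3d @ R.T + pose6[3:]      # rotate then translate into the camera frame
    uv = (K @ pts_cam.T).T                     # apply the camera intrinsics
    return uv[:, :2] / uv[:, 2:3]              # normalize homogeneous coordinates

def estimate_pose(map_corners_3d, image_corners_2d, K, pose0=None):
    """Least-squares fit of the 6 DoF pose minimizing the squared reprojection distance
    between HD-map corners (N, 3) and their detected pixel locations (N, 2)."""
    def residuals(pose6):
        return (project(map_corners_3d, pose6, K) - image_corners_2d).ravel()
    x0 = np.zeros(6) if pose0 is None else pose0
    return least_squares(residuals, x0).x
```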



FIG. 2 shows an exemplary system 200 that includes a vehicle 202 on a road 208, where the vehicle 202 includes a plurality of cameras. In FIG. 2, a single camera 204 is shown for ease of description. However, the plurality of cameras can be located on or positioned on the vehicle 202 to obtain images of the road 208 that includes the lane markers 210a, 210b as the vehicle 202 is driven. The camera pose estimation techniques described herein for the camera 204 can be applied to estimate the pose of other cameras located on the vehicle 202. The vehicle 202 can be an autonomous vehicle.


The road 208 includes lane markers 210a, 210b that can be affixed on either side of the road 208. The lane markers include a first set of lane markers 210a located on a first side of the road 208, and a second set of lane markers 210b located on a second side of the road 208 opposite to the first side. Each lane marker can include a plurality of corners 212a-212d (e.g., four corners for a rectangular lane marker). As described in FIG. 1, a computer can be located in the vehicle 202, where the computer can include an HD map that includes the 3D world coordinates of the corners of each lane marker. In each set of lane markers, one lane marker can be separated from another lane marker by a pre-determined distance to form a set of broken lines on the road. A lane marker 210a, 210b can have a rectangular shape and a white color, or the lane marker 210a, 210b can have another shape (e.g., square, polygon, etc.) and another color (e.g., black, white, red, etc.). As shown in FIG. 2, the first and second sets of lane markers 210a, 210b can be parallel to each other.



FIG. 3 shows a flow diagram of operations performed to obtain the static lane marker as described in FIG. 1. As described in FIG. 1, the computer located in the autonomous vehicle can obtain position information about the corners of a first set of one or more lane markers from an HD map, where the first set of one or more lane markers can be obtained based at least on the location of the autonomous vehicle obtained from a GPS transceiver. The localization described in FIG. 1 can also include an inertial measurement unit (IMU) sensor located on the vehicle that can provide a heading direction of the autonomous vehicle. At the operation 302, based on the heading direction, the computer located in the autonomous vehicle can filter by heading direction by filtering out or removing from the first set of one or more lane markers those lane marker(s) that are located to the side of or behind the autonomous vehicle so that the computer can obtain a second set of one or more lane markers located in front of the autonomous vehicle. Next, at the operation 304, the computer can filter by the camera's pre-determined field of view (FOV) to further narrow the second set of one or more lane markers to those lane marker(s) that are estimated to be located within a pre-determined FOV (e.g., within a pre-determined range of degrees of view) of the camera whose pose is being estimated. At the filtering operation (the operation 304), the computer can project the lane marker(s) fetched from the static HD map and filtered by heading direction onto the field of view of the camera to obtain a third set of one or more lane markers. Then, at the operation 306, the computer can optionally perform ray tracing filtering, or other filtering approaches, to remove the lane marker(s) that are geometrically occluded by landscapes. For example, the computer can perform the ray tracing filtering to remove the lane marker(s) behind an upcoming uphill slope, or the lane marker(s) blocked by trees or walls. To be more specific, the ray tracing filtering or other filtering approaches can remove the lane markers that exist in the 3D world coordinates but cannot be seen in a 2D image.


Next, at the operation 308, the computer can filter the third set of one or more lane markers by image size to filter out one or more lane markers that cannot be easily perceived in the image. For example, the computer can filter out or remove one or more lane markers located past a pre-determined distance (e.g., past 50 meters) from the location of the autonomous vehicle. The filtering operation (the operation 308) can yield a fourth set of one or more lane markers that are located within a pre-determined distance (e.g., 50 meters) of the location of the autonomous vehicle. Finally, at the operation 310, the computer can optionally filter by dynamic object occlusion to filter out one or more lane markers from the fourth set of lane markers that the computer determines to be occluded by or obstructed by one or more objects (e.g., landscape, other vehicles, etc.) on the road, to obtain a fifth set of one or more lane markers with which the computer can perform the matching operation described in FIG. 1. As further described in this patent document, at the matching operation, the computer can determine and minimize the difference between the 3D location of a corner of a lane marker obtained from the HD map and the pixel locations of the corner of the lane marker obtained from the image and viewed by the camera, where the lane marker obtained from the HD map corresponds to the lane marker obtained from the image.
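A compact sketch of the heading, FOV, and distance filters of FIG. 3 (the operations 302, 304, and 308) is shown below; the occlusion filters (the operations 306 and 310) are omitted, and the function name, FOV value, and distance threshold are assumptions:

```python
import numpy as np

def filter_static_markers(marker_centers, vehicle_pos, heading_unit, fov_deg=60.0, max_dist=50.0):
    """marker_centers: (N, 3) world coordinates; heading_unit: unit vector of the driving direction."""
    rel = marker_centers - vehicle_pos
    dist = np.linalg.norm(rel, axis=1)
    ahead = rel @ heading_unit > 0                         # heading filter: in front of the vehicle
    cos_angle = (rel @ heading_unit) / np.maximum(dist, 1e-9)
    in_fov = cos_angle > np.cos(np.radians(fov_deg / 2))   # FOV filter around the heading direction
    near = dist <= max_dist                                # image-size / distance filter
    return marker_centers[ahead & in_fov & near]
```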



FIG. 4 shows a flow diagram of operations performed to obtain the observed lane marker and to estimate camera pose as described in FIG. 1. At the segmentation post-process operation (the operation 402), the observed lane marker module (shown as the observed lane marker module 625 in FIG. 6) in a computer located in the autonomous vehicle can process the image obtained from the camera to extract the representation of lane markers in the image. As described in FIG. 1, the segmentation post-process operation (the operation 402) can involve the observed lane marker module using a deep learning lane detection technique to extract the lane markers from the captured image and identify pixel locations of the corners of the lane markers. At the segmentation post-process operation (the operation 402), the observed lane marker module can also generate a binary image from the obtained image.


The observed lane marker module can perform the distance transformation operation (the operation 404) to smooth out the binary image to obtain a gray-scale image. Each pixel in the gray-scale image has a value associated with it: the smaller the value, the closer the pixel is to a pixel of the lane marker (e.g., the pixel location of a corner of the lane marker), and the larger the value, the farther the pixel is from a pixel of the lane marker.
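The distance transformation of the operation 404 can be sketched as follows, assuming the binary image is available as a boolean array; the use of SciPy's Euclidean distance transform is an illustrative choice, not necessarily the patent's implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lane_distance_image(binary_mask):
    """binary_mask: boolean/0-1 array with True (1) at lane-marker pixels.
    Returns a float image: 0 on marker pixels, increasing with distance from them."""
    # distance_transform_edt measures distance to the nearest zero, so invert the mask
    return distance_transform_edt(~binary_mask.astype(bool))
```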


The camera pose module (shown as the camera pose module 630 in FIG. 6) in a computer located in the autonomous vehicle can estimate camera pose (at the operation 406) based on a set of pixels whose locations are associated with a pre-determined reference point, for example, a corner of the lane marker. The location of the pre-determined reference point can be obtained from the HD map (shown as the HD map 615 in FIG. 6). The computer can estimate the 6 DoF parameters by minimizing the sum of the squared distances from the lane markers fetched from the HD map to the closest lane-segmentation pixels of the corresponding lane markers captured by the camera. In some embodiments, the camera pose module can estimate camera pose at the operation 406 by evaluating Equation (1) shown below for lane markers within the pre-determined distance of the vehicle:













argmin_{ξ_cor} (E_cost + E_con)  Equation (1)








where E_cost is the cost term for misalignment and E_con is the cost term for constraint. The cost term for misalignment E_cost is described in Equations (2) to (4) below, and the cost term for constraint E_con is described in Equation (5) below:










E_cost = Σ_i Σ_j ‖e_{i,j}‖₂²  Equation (2)


e_{i,j} = ρ(DT(x_{i,j}))  Equation (3)


x_{i,j} = π(K [I_{3×3} | 0] G(ξ_cor) T^{cam}_{imu} T^{imu}_{enu} X_{i,j})  Equation (4)








where e_{i,j} represents the misalignment error of the j-th corner point from the i-th lane marker fetched from the map, the function ρ(DT(x_{i,j})) indicates a robust loss function, and DT stands for the distance-transformation function. X_{i,j} is a vector representing one point in the homogeneous coordinate system. K represents a 3×3 camera intrinsic matrix, and I_{3×3} represents a 3×3 identity matrix. The function π is used to normalize the homogeneous coordinate.
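As an illustration of Equation (4), the sketch below projects a homogeneous world point through the ENU-to-IMU and IMU-to-camera transforms, the pose correction G(ξ_cor) (supplied here directly as a 4×4 matrix), and the intrinsics K, and then normalizes the homogeneous coordinate (the π function); the frame conventions and the function name are assumptions:

```python
import numpy as np

def project_corner(X_ij, K, G_cor, T_imu_cam, T_enu_imu):
    """X_ij: 4-vector homogeneous world point; K: 3x3 intrinsics;
    G_cor, T_imu_cam, T_enu_imu: 4x4 homogeneous transforms."""
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # K [ I_3x3 | 0 ]
    x = P @ G_cor @ T_imu_cam @ T_enu_imu @ X_ij         # 3-vector in homogeneous image coordinates
    return x[:2] / x[2]                                  # pi(): normalize the homogeneous coordinate
```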


In some embodiments, the computer can determine the 6 DoF variables that describe a pose of the camera by minimizing a distance between the 3D world coordinates of one or more corners of a lane marker obtained from the HD map within a pre-determined distance of the location of the autonomous vehicle and the pixel locations of a corresponding lane marker (e.g., a corner of the lane marker) from the obtained image.


In some embodiments, the computer can estimate camera pose (e.g., at the operation 406) by adding a regularization term to enable smoothness using Equations (5) to (10) as shown below:






E_con = ‖θ(ξ_cor)‖²_{Ω1} + ‖ϕ(ξ_cor)‖²_{Ω1} + ‖θ(ξ_cor) − θ(ξ_prev)‖²_{Ω3} + ‖ϕ(ξ_cor) − ϕ(ξ_prev)‖²_{Ω4} + η(ξ_cor, ξ_prev)  Equation (5)


where θ( ) in Equation (5) is the function to obtain roll, pitch, and yaw from se(3), as further described in Equations (7) to (10). SE(3) is the Special Euclidean group, which represents rigid transformations in 3D space as 4×4 matrices. The Lie algebra of SE(3) is se(3), which has an exponential map to SE(3) and is represented as a 1×6 vector. The function ϕ( ) is designed to obtain the translation vector from se(3). ξ_cor is the se(3) representation of the corrected camera pose, while ξ_prev is the se(3) representation of the camera pose of the previous frame. The Ω_1 to Ω_4 terms are diagonal matrices, where each Ω represents the weight of its term. Boundaries are also set for each degree of freedom, and the function η(ξ_cor, ξ_prev) is defined as










η(ξ_cor, ξ_prev) = { 10⁷ if |r_x| > γ_1 or … or |Δt_x| > γ_12; 0 otherwise }  Equation (6)








The regularization term of Equation (6) can be viewed as a type of constraint that bounds the parameter search space. Adding a regularization term is technically beneficial at least because doing so can minimize the change in all 6 DoF parameters for a single time estimation. As shown in Equation (5), the pose of the camera can be based on a function that adds a constraint to limit the parameter search space. The cost of constraint term shown in Equation (5) is determined by minimizing a difference between the estimated (corrected) camera pose at the current time frame and the pose estimated from the previous time frame (see the sketch following Equations (7) to (10) below).





θ(ξcor)=(rx,ry,rz)T  Equation (7)


where θ(ξcor) is the function to obtain the rotation from the se(3) Lie algebra representation, and rx, ry, and rz are the rotation values with respect to the camera coordinate along the x, y, and z axes, respectively, on the image plane where the camera image is obtained.





ϕ(ξcor)=(tx,ty,tz)T  Equation (8)


where ϕ(ξcor) is the function to obtain the translation, and tx, ty, and tz are the translation values with respect to the camera coordinate along the x, y, and z axes, respectively, on the image plane.





θ(ξcor)−θ(ξprev)=(Δrx,Δry,Δrz)T  Equation (9)


where θ(ξcor)−θ(ξprev) is the difference between the current value for the rotation and the previous rotation calculated from the previous image frame.





ϕ(ξcor)−ϕ(ξprev)=(Δtx,Δty,Δtz)T  Equation (10)


where ϕ(ξcor)−ϕ(ξprev) is the difference between the current value for the translation and the previous translation from the previous image frame.
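Putting Equations (5) to (10) together, the sketch below illustrates one way to compute the constraint term: θ( ) and ϕ( ) are obtained by mapping an se(3) vector to SE(3) with the exponential map and reading off the rotation and translation, and η adds a large penalty when any component exceeds its bound. The se(3) component ordering, the weights, and the thresholds are assumptions, not values from the patent:

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.transform import Rotation

def se3_hat(xi):
    """xi = (w1, w2, w3, v1, v2, v3): build the 4x4 Lie-algebra matrix (ordering assumed)."""
    w, v = xi[:3], xi[3:]
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    return np.block([[W, v.reshape(3, 1)], [np.zeros((1, 4))]])

def theta(xi):
    """Roll, pitch, yaw of the SE(3) transform exp(xi^), as in Equation (7)."""
    T = expm(se3_hat(xi))
    return Rotation.from_matrix(T[:3, :3]).as_euler("xyz")

def phi(xi):
    """Translation (tx, ty, tz) of the SE(3) transform exp(xi^), as in Equation (8)."""
    return expm(se3_hat(xi))[:3, 3]

def e_con(xi_cor, xi_prev, omegas, bounds):
    """Constraint term in the spirit of Equation (5).
    omegas: four 3x3 diagonal weight matrices (the Omega terms);
    bounds: 12 per-component limits (the gammas) on rotations, translations, and their deltas."""
    r, t = theta(xi_cor), phi(xi_cor)
    dr, dt = r - theta(xi_prev), t - phi(xi_prev)
    cost = (r @ omegas[0] @ r + t @ omegas[1] @ t
            + dr @ omegas[2] @ dr + dt @ omegas[3] @ dt)
    # Step penalty of Equation (6): a large constant if any bound is exceeded, else 0
    eta = 1e7 if np.any(np.abs(np.concatenate([r, t, dr, dt])) > np.asarray(bounds)) else 0.0
    return cost + eta
```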



FIG. 5 shows an exemplary flow diagram of operations performed to estimate camera pose. At the obtaining operation 502, an observed lane marker module (shown as the observed lane marker module 625 in FIG. 6) can obtain an image from a camera located on a vehicle, where the image includes a lane marker on a road on which the vehicle is driven. At the estimating operation 504, a camera pose module (shown as the camera pose module 630 in FIG. 6) can estimate a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.


In some embodiments, the first position corresponds to pixel locations associated with a corner of the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the corner of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker. In some embodiments, the distance is minimized by minimizing a sum of squared distances between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.


In some embodiments, the best match according to the criterion is determined by minimizing the function of a combination of the cost of misalignment term and of a cost of constraint term, where the cost of constraint term represents a constraint to limit parameter search space, and where the cost of constraint term is determined by minimizing a difference between the pixel locations and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the method of FIG. 5 further includes generating a binary image from the image obtained from the camera, and generating a gray-scale image from the binary image, where the gray-scale image includes pixels with corresponding values, and a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image. In some embodiments, the method of FIG. 5 further includes generating a gray-scale image from the image, wherein the gray-scale image includes pixels with corresponding values, wherein a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image.


In some embodiments, the second position of the lane marker is determined by the static lane marker module (shown as the static lane marker module 620 in FIG. 6) based on the location of the vehicle, a direction in which the vehicle is driven, and a pre-determined field of view (FOV) of the camera. In some embodiments, the second position of the lane marker is determined by the static lane marker module by obtaining, from the stored map and based on the location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle, obtaining a second set of one or more lane markers from the first set of one or more lane markers based on the direction in which the vehicle is driven, obtaining a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined FOV of the camera, and obtaining the second position of the lane marker from the third set of one or more lane markers. In some embodiments, the third set of one or more lane markers excludes one or more lane markers determined to be obstructed by one or more objects.


In some embodiments, the first position corresponds to at least one location associated with the lane marker, and wherein the second position corresponds to a three-dimensional (3D) world coordinates of the at least one location of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the at least one location of the lane marker to the at least one location of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing the function of the cost of misalignment term and of a cost of constraint term. In some embodiments, the cost of constraint term represents a constraint that limits a search space. In some embodiments, the cost of constraint term is determined by minimizing a difference between the at least one location and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the second position of the lane marker is determined based on at least the location of the vehicle. In some embodiments, the second position of the lane marker is determined by obtaining, from the stored map and based on the location of the vehicle, a first set of lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of lane markers from the first set of lane markers based on a direction in which the vehicle is driven; obtaining a third set of lane markers from the second set of lane markers based on a pre-determined FOV of the camera; and obtaining the second position of the lane marker from the third set of lane markers.


In some implementations, methods described in the various embodiments in this patent document are embodied in a computer readable program stored on a non-transitory computer readable media. The computer readable program includes code that when executed by a processor, causes the processor to perform the methods described in this patent document, including the method described in FIG. 5.


In some embodiments, a system includes a processor and a memory. The memory stores instructions associated with a static lane marker module, an observed lane marker module, and/or a camera pose estimating module, and the instructions are executable by the processor to perform an operation to estimate camera pose. The system may, for example, include an apparatus, such as a computer, wherein the apparatus includes the above-mentioned memory and the above-mentioned processor.



FIG. 6 shows an exemplary block diagram of a computer located in an autonomous vehicle to estimate camera pose as described in this patent document. The computer 600 includes at least one processor 610 and a memory 605 having instructions stored thereupon. The instructions upon execution by the processor 610 configure the computer 600 to perform the operations described in FIGS. 1 to 5, and/or the operations described in the various embodiments in this patent document. The static lane marker module 620 can perform the operations to obtain static lane markers related information described in FIGS. 1 and 3 and in the various embodiments in this patent document. The observed lane marker module 625 can perform operations to obtain observed lane markers related information described in FIGS. 1, 4, and 5 and in the various embodiments in this patent document. The camera pose module 630 can perform operations to estimate camera pose as described in FIGS. 1, 4, and 5 and in the various embodiments in this patent document.


In this document the term “exemplary” is used to mean “an example of” and, unless otherwise stated, does not imply an ideal or a preferred embodiment.


Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media can include a non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.


Some of the disclosed embodiments can be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that is known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols.


While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results.


Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this disclosure.

Claims
  • 1. A method of estimating camera pose, comprising: receiving, by a computer located in a vehicle, an image from a camera located on the vehicle, wherein the image comprises a lane marker on a road; determining pixel locations of a plurality of corners of the lane marker; obtaining, from a database, three-dimensional (3D) coordinates of the plurality of corners of the lane marker; and determining a pose of the camera by minimizing a distance from 3D world coordinates of at least one corner from the plurality of corners of the lane marker and at least one pixel location of the at least one corner of the lane marker.
  • 2. The method of claim 1, wherein the 3D coordinates of the plurality of corners of the lane marker are obtained by: obtaining, from the database and based on a location of the vehicle, 3D coordinates of corners of each lane marker in a first set of lane markers; obtaining, from the first set of lane markers and based on a direction in which the vehicle is driven, a second set of lane markers that are located within a first pre-determined distance from the vehicle; obtaining, from the second set of lane markers, a third set of lane markers that are located within a pre-determined field of view of the camera; obtaining a fourth set of lane markers by removing one or more lane markers from the third set of lane markers located past a second pre-determined distance from the location of the vehicle; and obtaining a fifth set of lane markers by removing one or more lane markers obstructed by one or more objects on the road from the fourth set of lane markers, wherein the fifth set of lane markers includes the lane marker.
  • 3. The method of claim 2, wherein the third set of lane markers are obtained by removing at least one lane marker that is occluded by landscapes.
  • 4. The method of claim 2, wherein the direction in which the vehicle is driven is obtained from an inertial measurement unit (IMU) sensor located on the vehicle.
  • 5. The method of claim 1, further comprising: generating, from the image, a second image that includes a plurality of pixels, wherein each of the plurality of pixels has a value that is directly related to another distance between a pixel and a corner of the lane marker.
  • 6. The method of claim 1, wherein the lane marker has a rectangular shape.
  • 7. The method of claim 1, wherein the pose includes values for an orientation and a position of the camera.
  • 8. The method of claim 1, wherein the pose of the camera is determined as the vehicle is driven on the road.
  • 9. A computer, comprising: a processor configured to: receive an image from a camera located on a vehicle, wherein the image comprises a lane marker on a road, wherein the computer is located in the vehicle; determine pixel locations of a plurality of corners of the lane marker; obtain, from a database, three-dimensional (3D) coordinates of the plurality of corners of the lane marker; and determine a pose of the camera by minimizing a distance from 3D world coordinates of at least one corner from the plurality of corners of the lane marker and at least one pixel location of the at least one corner of the lane marker.
  • 10. The computer of claim 9, wherein the 3D coordinates of the plurality of corners of the lane marker are obtained by the processor configured to: obtain, from the database and based on a location of the vehicle, 3D coordinates of corners of each lane marker in a first set of lane markers; obtain, from the first set of lane markers and based on a direction in which the vehicle is driven, a second set of lane markers that are located within a first pre-determined distance from the vehicle; obtain, from the second set of lane markers, a third set of lane markers that are located within a pre-determined field of view of the camera; obtain a fourth set of lane markers by removal of one or more lane markers from the third set of lane markers located past a second pre-determined distance from the location of the vehicle; and obtain a fifth set of lane markers by removal of one or more lane markers obstructed by one or more objects on the road from the fourth set of lane markers, wherein the fifth set of lane markers includes the lane marker.
  • 11. The computer of claim 10, wherein the third set of lane markers are obtained by the processor configured to remove at least one lane marker that is occluded by landscapes.
  • 12. The computer of claim 10, wherein the direction in which the vehicle is driven is obtained from an inertial measurement unit (IMU) sensor located on the vehicle.
  • 13. The computer of claim 9, wherein the processor is further configured to: generate, from the image, a second image that includes a plurality of pixels, wherein each of the plurality of pixels has a value that is directly related to another distance between a pixel and a corner of the lane marker.
  • 14. The computer of claim 9, wherein the lane marker has a rectangular shape.
  • 15. The computer of claim 9, wherein the pose includes values for an orientation and a position of the camera.
  • 16. The computer of claim 9, wherein the pose of the camera is determined as the vehicle is driven on the road.
  • 17. A non-transitory computer readable program storage medium having code stored thereon, the code, when executed by a processor, causing the processor to implement a method, comprising: receiving, by a computer located in a vehicle, an image from a camera located on the vehicle, wherein the image comprises a lane marker on a road; determining pixel locations of a plurality of corners of the lane marker; obtaining, from a database, three-dimensional (3D) coordinates of the plurality of corners of the lane marker; and determining a pose of the camera by minimizing a distance from 3D world coordinates of at least one corner from the plurality of corners of the lane marker and at least one pixel location of the at least one corner of the lane marker.
  • 18. The non-transitory computer readable program storage medium of claim 17, wherein the 3D coordinates of the plurality of corners of the lane marker are obtained by: obtaining, from the database and based on a location of the vehicle, 3D coordinates of corners of each lane marker in a first set of lane markers; obtaining, from the first set of lane markers and based on a direction in which the vehicle is driven, a second set of lane markers that are located within a first pre-determined distance from the vehicle; obtaining, from the second set of lane markers, a third set of lane markers that are located within a pre-determined field of view of the camera; obtaining a fourth set of lane markers by removing one or more lane markers from the third set of lane markers located past a second pre-determined distance from the location of the vehicle; and obtaining a fifth set of lane markers by removing one or more lane markers obstructed by one or more objects on the road from the fourth set of lane markers, wherein the fifth set of lane markers includes the lane marker.
  • 19. The non-transitory computer readable program storage medium of claim 18, wherein the third set of lane markers are obtained by removing at least one lane marker that is occluded by landscapes.
  • 20. The non-transitory computer readable program storage medium of claim 18, wherein the direction in which the vehicle is driven is obtained from an inertial measurement unit (IMU) sensor located on the vehicle.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/225,396, filed on Apr. 8, 2021, which claims priority to and the benefit of U.S. Provisional Application No. 63/007,895, filed on Apr. 9, 2020. The aforementioned applications are incorporated herein by reference in their entireties.

Provisional Applications (1)
Number Date Country
63007895 Apr 2020 US
Continuations (1)
Number Date Country
Parent 17225396 Apr 2021 US
Child 18461625 US