Camera pose estimation techniques

Information

  • Patent Grant
  • Patent Number: 11,810,322
  • Date Filed: Thursday, April 8, 2021
  • Date Issued: Tuesday, November 7, 2023
  • Field of Search (CPC):
    • G06V20/588
    • G06V20/176
    • G06V20/41
    • G06V20/56
    • G06V20/58
    • G06V10/454
    • G06V10/806
    • G06V10/811
    • G06V10/82
    • G06V2201/10
    • G06V30/19173
    • G06V30/194
    • G06T17/05
    • G06T2200/04
    • G06T2207/10012
    • G06T2207/30244
    • G06T2207/30256
    • G06T7/344
    • G06T7/74
  • International Classifications: G06T7/73
  • Term Extension: 84
Abstract
Techniques are described for estimating pose of a camera located on a vehicle. An exemplary method of estimating camera pose includes obtaining, from a camera located on a vehicle, an image including a lane marker on a road on which the vehicle is driven, and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.
Description
TECHNICAL FIELD

This document describes techniques to estimate pose of a camera located on or in a vehicle.


BACKGROUND

A vehicle may include cameras attached to the vehicle for several purposes. For example, cameras may be attached to a roof of the vehicle for security purposes, for driving aid, or for facilitating autonomous driving. Cameras mounted on a vehicle can obtain images of one or more areas surrounding the vehicle. These images can be processed to obtain information about the road or about the objects surrounding the vehicle. For example, images obtained by a camera can be analyzed to determine distances of objects surrounding the autonomous vehicle so that the autonomous vehicle can be safely maneuvered around the objects.


SUMMARY

This patent document describes exemplary techniques to estimate pose of a camera located on or in a vehicle. A method of estimating camera pose includes obtaining, from a camera located on a vehicle, an image including a lane marker on a road on which the vehicle is driven; and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.


In some embodiments, the first position corresponds to pixel locations associated with a corner of the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the corner of the lane marker. In some embodiments, the first position corresponds to pixel locations associated with the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker. In some embodiments, the distance is minimized by minimizing a sum of squared distances between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.


In some embodiments, the best match according to the criterion is determined by minimizing the function of a combination of the cost of misalignment term and of a cost of constraint term, where the cost of constraint term represents a constraint that limits the parameter search space, and the cost of constraint term is determined by minimizing a difference between the pixel locations and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the method further includes generating a binary image from the image obtained from the camera, and generating a gray-scale image from the binary image, where the gray-scale image includes pixels with corresponding values, and a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image.


In some embodiments, the second position of the lane marker is determined based on the location of the vehicle, a direction in which the vehicle is driven, and a pre-determined field of view (FOV) of the camera. In some embodiments, the second position of the lane marker is determined by: obtaining, from the stored map and based on the location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of one or more lane markers from the first set of one or more lane markers based on the direction in which the vehicle is driven; obtaining a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined FOV of the camera; and obtaining the second position of the lane marker from the third set of one or more lane markers. In some embodiments, the third set of one or more lane markers excludes one or more lane markers determined to be obstructed by one or more objects.


In another exemplary aspect, the above-described methods are embodied in the form of processor-executable code and stored in a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium includes code that, when executed by a processor, causes the processor to implement the methods described in this patent document.


In yet another exemplary embodiment, a device that is configured or operable to perform the above-described methods is disclosed.


The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a block diagram to estimate a pose of a camera located on or in a vehicle.



FIG. 2 shows an exemplary system that includes a vehicle on a road.



FIG. 3 shows a flow diagram of operations performed to obtain the static lane marker.



FIG. 4 shows a flow diagram of operations performed to obtain the observed lane marker and to estimate camera pose.



FIG. 5 shows an exemplary flow diagram of operations to estimate camera pose.



FIG. 6 shows an exemplary block diagram of a computer located in a vehicle to estimate camera pose.





DETAILED DESCRIPTION

An autonomous vehicle includes cameras to obtain images of one or more areas surrounding the autonomous vehicle. These images can be analyzed by a computer on-board the autonomous vehicle to obtain distance or other information about the road or about the objects surrounding the autonomous vehicle. However, a camera's pose needs to be determined so that the computer on-board the autonomous vehicle can precisely or accurately detect an object and determine its distance.



FIG. 1 shows a block diagram to estimate a pose of a camera located on or in a vehicle. A camera's pose can be estimated in real-time as an autonomous vehicle is operated or being driven on a road. In an autonomous vehicle, a plurality of cameras can be coupled to a roof of the cab to capture images of a region towards which the autonomous vehicle is being driven. Due to the non-rigidity of the mechanical structure through which the cameras can be coupled to the autonomous vehicle, the cameras can experience random vibration when the engine is on and/or when the autonomous vehicle is driven on the road, or from wind. FIG. 1 shows a block diagram that can be used to estimate a camera's pose (orientation and position) in six degrees-of-freedom (DoF). The six DoF parameters include three variables for orientation (e.g., roll, pitch, yaw) of the camera and three variables for translation (x, y, z) of the camera. A precise and robust real-time camera pose can have a significant impact on autonomous driving related applications such as tracking, depth estimation, and speed estimation of objects that surround the autonomous vehicle.
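
As a concrete illustration of the six DoF parameters described above, the sketch below holds the three orientation variables and the three translation variables in one structure; the class and field names are assumptions made for this sketch and are not taken from the patent.

```python
# Illustrative six-DoF camera pose container (roll, pitch, yaw, x, y, z).
# The names are assumptions for this sketch, not the patent's notation.
from dataclasses import dataclass

@dataclass
class CameraPose:
    roll: float   # rotation about the x axis
    pitch: float  # rotation about the y axis
    yaw: float    # rotation about the z axis
    x: float      # translation along x
    y: float      # translation along y
    z: float      # translation along z

    def as_vector(self):
        """Return the pose as a flat 6-element list (rotation first)."""
        return [self.roll, self.pitch, self.yaw, self.x, self.y, self.z]
```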


On the top part of FIG. 1, at the operation 102, the exemplary camera pose estimation technique includes a high-definition (HD) map that can store information about the lane markers (shown as 210a, 210b in FIG. 2) on a road. The HD map can store information such as the three-dimensional (3D) world coordinates of the four corners (shown as 212a-212d in FIG. 2) of each lane marker. The HD map can be stored in a computer located in an autonomous vehicle, where the computer performs the camera pose estimation techniques described in this patent document.
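
A minimal sketch of how such an HD-map lane-marker record could be held in memory is shown below; the class name, field names, and the choice of ENU coordinates are illustrative assumptions, not the patent's map format.

```python
# Hypothetical HD-map record: the 3D world coordinates of a lane marker's
# four corners (212a-212d). Names and layout are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class LaneMarker:
    marker_id: int
    corners_enu: np.ndarray  # shape (4, 3): four corners in 3D world (ENU) coordinates

hd_map = [
    LaneMarker(marker_id=0,
               corners_enu=np.array([[10.0, 1.8, 0.0], [13.0, 1.8, 0.0],
                                     [13.0, 2.0, 0.0], [10.0, 2.0, 0.0]])),
]
```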


At the operation 104, the localization can include a global positioning system (GPS) transceiver located in the autonomous vehicle that can provide a position or location of the autonomous vehicle in 3D world coordinates. The computer located in the autonomous vehicle can receive the position of the autonomous vehicle and can query (shown in the operation 106) the HD map (shown in the operation 102) to obtain the 3D positions of the corners of lane markers that are located within a pre-determined distance (e.g., 100 meters) of the autonomous vehicle. Based on the query (the operation 106), the computer can obtain position information about the corners of the lane markers. At the operation 108, the position information of the corners of each lane marker can be considered static lane marker information.
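
The query of the operation 106 can be pictured as a radius search around the GPS position, as in the sketch below; it reuses the hypothetical LaneMarker record from the earlier sketch, and the 100 m default mirrors the example distance in the text.

```python
# Hedged sketch of operation 106: return map lane markers whose centroid lies
# within a pre-determined distance (e.g., 100 m) of the vehicle's GPS position.
import numpy as np

def query_nearby_markers(hd_map, vehicle_xyz, radius_m=100.0):
    vehicle_xyz = np.asarray(vehicle_xyz, dtype=float)
    nearby = []
    for marker in hd_map:
        centroid = marker.corners_enu.mean(axis=0)
        if np.linalg.norm(centroid - vehicle_xyz) <= radius_m:
            nearby.append(marker)
    return nearby
```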


On the bottom part of FIG. 1, at the operation 110, the exemplary camera pose estimation technique includes an image that is obtained from a camera located on or in the autonomous vehicle. The computer located in the autonomous vehicle can obtain the image (the operation 110) from the camera and can perform a deep learning lane detection technique at the operation 112 to identify the lane markers and the two-dimensional positions of the corners of the lane markers in the image (the operation 110). The deep learning lane detection technique (the operation 112) can also identify the two-dimensional (2D) pixel locations of the corners of each lane marker located in the image. Each identified lane marker and the pixel locations of the corners of each lane marker in the image (the operation 110) can be considered an observed lane marker at the operation 114. In an exemplary embodiment, the deep learning lane detection can include using a convolutional neural network (CNN) that can operate based on a base framework.
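
The output of the lane detection step can be pictured as a per-marker list of 2D corner pixel locations, as in the short sketch below; the container name and layout are assumptions, and no particular CNN is implied.

```python
# Hypothetical "observed lane marker": the detected 2D pixel locations (u, v)
# of a lane marker's corners in the camera image.
from dataclasses import dataclass
import numpy as np

@dataclass
class ObservedLaneMarker:
    corners_px: np.ndarray  # shape (4, 2): corner pixel coordinates (u, v)

observed = ObservedLaneMarker(
    corners_px=np.array([[640.0, 520.0], [702.0, 516.0],
                         [704.0, 531.0], [642.0, 535.0]]))
```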


In some embodiments, the computer located in the autonomous vehicle can perform data and image processing to obtain the static lane marker (the operation 108) and the observed lane marker (the operation 114) every 5 milliseconds. The computer located in the autonomous vehicle can perform the matching operation (the operation 116) to minimize the distance between the 3D world coordinates of at least one corner of a lane marker obtained from the HD map and the 2D pixel location of at least one corner of the corresponding lane marker in the image. The matching operation (the operation 116) can provide a best match or best fit between the lane marker obtained from the image and the corresponding lane marker obtained from the HD map. By minimizing the distance between a lane marker from the HD map and a corresponding lane marker from the image, the computer can obtain an estimated camera pose at the operation 118. The estimated camera pose can include values for the six DoF variables that describe a camera's pose.



FIG. 2 shows an exemplary system 200 that includes a vehicle 202 on a road 208, where the vehicle 202 includes a plurality of cameras. In FIG. 2, a single camera 204 is shown for ease of description. However, the plurality of cameras can be located on or positioned on the vehicle 202 to obtain images of the road 208 that includes the lane markers 210a, 210b as the vehicle 202 is driven. The camera pose estimation techniques described herein for the camera 204 can be applied to estimate the pose of other cameras located on the vehicle 202. The vehicle 202 can be an autonomous vehicle.


The road 208 includes lane markers 210a, 210b that can be affixed on either side of the road 208. The lane markers include a first set of lane markers 210a located on a first side of the road 208, and a second set of lane markers 210b located on a second side of the road 208 opposite to the first side. Each lane marker can include a plurality of corners 212a-212d (e.g., four corners for a rectangular lane marker). As described in FIG. 1, a computer can be located in the vehicle 202, where the computer can include an HD map that includes the 3D world coordinates of the corners of each lane marker. In each set of lane markers, one lane marker can be separated from another lane marker by a pre-determined distance to form a set of broken lines on the road. A lane marker 210a, 210b can have a rectangular shape and a white color, or it can have another shape (e.g., square, polygon, etc.) and another color (e.g., black, white, red, etc.). As shown in FIG. 2, the first and second sets of lane markers 210a, 210b can be parallel to each other.



FIG. 3 shows a flow diagram of operations performed to obtain the static lane marker as described in FIG. 1. As described in FIG. 1, the computer located in the autonomous vehicle can obtain position information about the corners of a first set of one or more lane markers from an HD map, where the first set of one or more lane markers can be obtained based at least on the location of the autonomous vehicle obtained from a GPS transceiver. The localization described in FIG. 1 can also include an inertial measurement unit (IMU) sensor located on the vehicle that can provide a heading direction of the autonomous vehicle. At the operation 302, based on the heading direction, the computer located in the autonomous vehicle can filter by heading direction, removing from the first set of one or more lane markers those lane marker(s) that are located to the side of or behind the autonomous vehicle, so that the computer obtains a second set of one or more lane markers located in front of the autonomous vehicle. Next, at the operation 304, the computer can filter by the camera's pre-determined field of view (FOV) to further narrow the second set of one or more lane markers to those lane marker(s) that are estimated to be located within a pre-determined FOV (e.g., within a pre-determined range of degrees of view) of the camera whose pose is being estimated. At this filtering operation (the operation 304), the computer can project the lane marker(s) fetched from the static HD map and filtered by heading direction onto the field of view of the camera to obtain a third set of one or more lane markers. Then, at the operation 306, the computer can optionally perform ray tracing filtering, or other filtering approaches, to remove the lane marker(s) that are geometrically occluded by the landscape. For example, the computer can perform the ray tracing filtering to remove the lane marker(s) hidden behind an upcoming uphill slope, or the lane marker(s) blocked by trees or walls. More specifically, the ray tracing filtering or other filtering approaches can remove the lane markers that exist in 3D world coordinates but cannot be seen in the 2D image.
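
One way to picture the heading-direction and FOV filters of the operations 302 and 304 is the 2D sketch below; reducing the problem to planar angles, the function names, the 60-degree default FOV, and the reuse of the hypothetical LaneMarker record sketched earlier are all simplifying assumptions, and the optional ray tracing filter of the operation 306 is not shown.

```python
# Hedged 2D sketch of operations 302 (filter by heading) and 304 (filter by
# camera FOV) over hypothetical LaneMarker records.
import numpy as np

def filter_by_heading(markers, vehicle_xy, heading_rad):
    """Keep markers whose centroid lies in front of the vehicle."""
    forward = np.array([np.cos(heading_rad), np.sin(heading_rad)])
    kept = []
    for m in markers:
        to_marker = m.corners_enu[:, :2].mean(axis=0) - np.asarray(vehicle_xy)
        if to_marker @ forward > 0:        # in front, not beside or behind
            kept.append(m)
    return kept

def filter_by_fov(markers, vehicle_xy, heading_rad, fov_deg=60.0):
    """Keep markers whose bearing falls inside the camera's horizontal FOV."""
    kept = []
    for m in markers:
        to_marker = m.corners_enu[:, :2].mean(axis=0) - np.asarray(vehicle_xy)
        bearing = np.arctan2(to_marker[1], to_marker[0]) - heading_rad
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        if abs(np.degrees(bearing)) <= fov_deg / 2.0:
            kept.append(m)
    return kept
```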


Next, at the operation 308, the computer can filter the third set of one or more lane markers by image size to remove one or more lane markers that cannot be easily perceived in the image. For example, the computer can filter out or remove one or more lane markers located past a pre-determined distance (e.g., past 50 meters) from the location of the autonomous vehicle. The filtering operation (the operation 308) can yield a fourth set of one or more lane markers that are located within a pre-determined distance (e.g., 50 meters) of the location of the autonomous vehicle. Finally, at the operation 310, the computer can optionally filter by dynamic object occlusion to remove from the fourth set of lane markers one or more lane markers that the computer determines to be occluded or obstructed by one or more objects (e.g., landscape, other vehicles, etc.) on the road, thereby obtaining a fifth set of one or more lane markers with which the computer can perform the matching operation described in FIG. 1. As further described in this patent document, at the matching operation the computer can determine and minimize the difference between the 3D location of a corner of a lane marker obtained from the HD map and the pixel locations of the corner of the lane marker obtained from the image and viewed by the camera, where the lane marker obtained from the HD map corresponds to the lane marker obtained from the image.
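
A similarly simplified sketch of the image-size filter of the operation 308 follows; the 50 m default mirrors the example in the text, and the dynamic-object occlusion filter of the operation 310 is omitted because it depends on the perception stack.

```python
# Hedged sketch of operation 308: drop lane markers beyond a pre-determined
# range (e.g., 50 m), since distant markers occupy too few pixels in the image.
import numpy as np

def filter_by_range(markers, vehicle_xyz, max_range_m=50.0):
    vehicle_xyz = np.asarray(vehicle_xyz, dtype=float)
    return [m for m in markers
            if np.linalg.norm(m.corners_enu.mean(axis=0) - vehicle_xyz) <= max_range_m]
```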



FIG. 4 shows a flow diagram of operations performed to obtain the observed lane marker and to estimate camera pose as described in FIG. 1. At the segmentation post-process operation (the operation 402), the observed lane marker module (shown as the observed lane marker module 625 in FIG. 6) in a computer located in the autonomous vehicle can process the image obtained from the camera to extract the representation of lane markers in the image. As described in FIG. 1, the segmentation post-process operation (the operation 402) can involve the observed lane marker module using a deep learning lane detection technique to extract the lane markers from the captured image and identify the pixel locations of the corners of the lane markers. At the segmentation post-process operation (the operation 402), the observed lane marker module can also generate a binary image from the obtained image.


The observed lane marker module can perform the distance transformation operation (the operation 404) to smooth out the binary image and obtain a gray-scale image. Each pixel in the gray-scale image has a value associated with it: the smaller the value, the closer the pixel is to a lane marker pixel (e.g., the pixel location of a corner of the lane marker), and the larger the value, the farther the pixel is from a lane marker pixel.
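
A hedged sketch of the segmentation post-process and the distance transformation (the operations 402 and 404) is shown below: a lane-segmentation probability map is thresholded into a binary image, and a gray-scale image is built whose pixel values grow with the distance to the nearest lane-marker pixel. The use of SciPy's Euclidean distance transform and the 0.5 threshold are assumptions for the sketch; the patent does not name a specific implementation.

```python
# Hedged sketch of operations 402/404: binary lane mask -> distance-transform
# (gray-scale) image whose pixel value is the distance to the nearest lane pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_transform_image(lane_probability, threshold=0.5):
    binary = lane_probability > threshold   # True at lane-marker pixels
    # For every pixel: Euclidean distance to the nearest lane-marker pixel;
    # lane pixels themselves get 0, far pixels get large values.
    return distance_transform_edt(~binary)
```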


The camera pose module (shown as the camera pose module 630 in FIG. 6) in a computer located in the autonomous vehicle can estimate the camera pose (at the operation 406) based on a set of pixels whose locations are associated with a pre-determined reference point, for example, a corner of the lane marker. The location of the pre-determined reference point can be obtained from the HD map (shown as the HD map 615 in FIG. 6). The computer can estimate the 6 DoF parameters by minimizing the sum of the squared distances from the lane markers fetched from the HD map to the closest lane segmentation pixels, captured by the camera, that are associated with the lane marker. In some embodiments, the camera pose module can estimate the camera pose at the operation 406 by performing Equation (1) shown below for lane markers within the pre-determined distance of the location of the vehicle:













argminξcor (Ecost+Econ)  Equation (1)









where Ecost is the cost term for misalignment and Econ is the cost term for constraint. The cost term for misalignment Ecost is described in Equations (2) to (4) below, and the cost term for constraint Econ is described in Equation (5) below:










Ecost=ΣiΣj∥ei,j∥²₂  Equation (2)

ei,j=ρ(DT(xi,j))  Equation (3)

xi,j=π(K[I3×3|0] G(ξcor) Timucam Tenuimu Xi,j)  Equation (4)









where ei,j represents the misalignment error of the j-th corner point of the i-th lane marker fetched from the map, the function ρ(DT(xi,j)) indicates a robust loss function, and DT stands for the distance-transformation function. Xi,j is a vector representing one point in the homogeneous coordinate system. K represents a 3×3 camera intrinsic matrix and I3×3 represents a 3×3 identity matrix. G(ξcor) is the SE(3) transformation corresponding to the se(3) camera pose correction ξcor, and Timucam and Tenuimu are the transformations from the IMU frame to the camera frame and from the ENU (world) frame to the IMU frame, respectively. The function π is used to normalize the homogeneous coordinates.
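
A hedged sketch of evaluating the misalignment term of Equations (2) to (4) is given below. It lumps the extrinsic chain G(ξcor)·Timucam·Tenuimu into a single camera-from-world transform, reads the distance-transform image at the projected pixel instead of computing an explicit point-to-point distance, and uses the Huber loss as one possible choice for the robust loss ρ; all of these are simplifying assumptions.

```python
# Hedged sketch of the misalignment cost E_cost of Equations (2)-(4).
import numpy as np

def huber(r, delta=1.0):
    """One possible robust loss rho; the patent only requires 'a robust loss'."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a * a, delta * (a - 0.5 * delta))

def misalignment_cost(corners_world, K, T_cam_from_world, dt_image):
    """corners_world: (N, 3) map corner points X_ij in world coordinates.
    T_cam_from_world: 4x4 transform lumping G(xi_cor) and the IMU/ENU chain.
    dt_image: distance-transform (gray-scale) image DT from operation 404."""
    h, w = dt_image.shape
    cost = 0.0
    for X in corners_world:
        X_h = np.append(X, 1.0)                 # homogeneous 3D point
        x_cam = (T_cam_from_world @ X_h)[:3]    # point in the camera frame
        if x_cam[2] <= 0:                       # behind the camera: skip
            continue
        p = K @ x_cam                           # pinhole projection ...
        u, v = int(p[0] / p[2]), int(p[1] / p[2])   # ... normalized by pi
        if 0 <= v < h and 0 <= u < w:
            cost += float(huber(dt_image[v, u]))    # e_ij = rho(DT(x_ij))
    return cost
```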


In some embodiments, the computer can determine the 6 DoF variables that describe a pose of the camera by minimizing a distance between the 3D world coordinates of one or more corners of a lane marker obtained from the HD map within a pre-determined distance of the location of the autonomous vehicle and the pixel locations of a corresponding lane marker (e.g., a corner of the lane marker) from the obtained image.


In some embodiments, the computer can estimate camera pose (e.g., at the operation 406) by adding a regularization term to enable smoothness using Equations (5) to (10) as shown below:

Econ=∥θ(ξcor)∥²Ω1+∥ϕ(ξcor)∥²Ω2+∥θ(ξcor)−θ(ξprev)∥²Ω3+∥ϕ(ξcor)−ϕ(ξprev)∥²Ω4+η(ξcor,ξprev)  Equation (5)

where θ( ) in Equation (5) is the function to obtain roll, pitch, and yaw from se(3), as further described in Equations (7) to (10). SE(3) is the Special Euclidean transformation, which represents rigid transformations in 3D space as a 4×4 matrix. The Lie algebra of SE(3) is se(3), which has an exponential map to SE(3) and is represented as a 1×6 vector. The function ϕ( ) is designed to obtain the translation vector from se(3). The term ξcor is the se(3) representation of the corrected camera pose, while ξprev is the se(3) representation of the camera pose of the previous frame. The Ω1 to Ω4 terms are diagonal matrices, where each omega represents the weight of its term. We also set boundaries for each degree of freedom, and the function η(ξcor, ξprev) is defined as










η(ξcor,ξprev) = 10^7 if |rx|>γ1 or … or |Δtx|>γ12, and 0 otherwise  Equation (6)








The regularization term of Equation (6) can be viewed as a type of constraint that bounds the parameter search space. Adding a regularization term is a technically beneficial feature at least because doing so can minimize the change in all 6 DoF parameters for a single time estimation. As shown in Equation (5), the pose of the camera can be based on a function that adds a constraint to limit the parameter search space. The cost of constraint term shown in Equation (5) is determined by minimizing a difference between the estimated (corrected) camera pose at the current time frame and the pose estimated from the previous time frame.

θ(ξcor)=(rx,ry,rz)  Equation (7)

where θ(ξcor) is the function to obtain rotation from the se(3) in lie algebra, and rx, ry, and rz are the rotation values with respect to the camera coordinate along the x, y, and z axis, respectively, on the image plane where the camera image is obtained.

ϕ(ξcor)=(tx,ty,tz)T  Equation (8)

where ϕ(ξcor) is the function to obtain the translation, and tx, ty, and tz are the translation values with respect to the camera coordinate of the x, y, and z axis, respectively, on the image plane.

θ(ξcor)−θ(ξprev)=(Δrx,Δry,Δrz)  Equation (9)

where θ(ξcor)−θ(ξprev) is the difference between the current value for the rotation and the previous rotation calculated from the previous image frame.

ϕ(ξcor)−ϕ(ξprev)=(Δtx,Δty,Δtz)T  Equation (10)

where ϕ(ξcor)−ϕ(ξprev) is the difference between the current value for the translation and the previous translation from the previous image frame.
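
Putting the pieces together, the sketch below evaluates a constraint term in the spirit of Equations (5) to (10) and minimizes Equation (1) with a derivative-free optimizer. Representing the pose directly as a 6-vector (rx, ry, rz, tx, ty, tz) instead of an se(3) element, the scalar weights standing in for Ω1 to Ω4, the single shared bound vector, and the choice of SciPy's Nelder-Mead method are all simplifying assumptions.

```python
# Hedged sketch of Equation (1): minimize E_cost + E_con over a 6-DoF pose.
import numpy as np
from scipy.optimize import minimize

def constraint_cost(xi_cor, xi_prev, w=(1.0, 1.0, 10.0, 10.0), bounds=None):
    """Approximation of E_con (Equations (5)-(10)) for a 6-vector pose."""
    xi_cor = np.asarray(xi_cor, dtype=float)
    xi_prev = np.asarray(xi_prev, dtype=float)
    rot, trans = xi_cor[:3], xi_cor[3:]                 # theta(xi), phi(xi)
    d_rot = rot - xi_prev[:3]                           # Equation (9)
    d_trans = trans - xi_prev[3:]                       # Equation (10)
    cost = (w[0] * rot @ rot + w[1] * trans @ trans
            + w[2] * d_rot @ d_rot + w[3] * d_trans @ d_trans)
    if bounds is not None:                              # eta, Equation (6)
        if np.any(np.abs(np.concatenate([rot, trans, d_rot, d_trans])) > bounds):
            cost += 1e7
    return cost

def estimate_camera_pose(xi_prev, e_cost_fn, bounds=None):
    """e_cost_fn(xi) should return E_cost (e.g., misalignment_cost above)."""
    objective = lambda xi: e_cost_fn(xi) + constraint_cost(xi, xi_prev, bounds=bounds)
    result = minimize(objective, x0=np.asarray(xi_prev, dtype=float),
                      method="Nelder-Mead")
    return result.x                                     # corrected 6-DoF pose
```

In practice, e_cost_fn would close over the camera intrinsics, the filtered map corners, and the distance-transform image computed for the current frame.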



FIG. 5 shows an exemplary flow diagram of operations performed to estimate camera pose. At the obtaining operation 502, an observed lane marker module (shown as the observed lane marker module 625 in FIG. 6) can obtain an image from a camera located on a vehicle, where the image includes a lane marker on a road on which the vehicle is driven. At the estimating operation 504, a camera pose module (shown as the camera pose module 630 in FIG. 6) can estimate a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.


In some embodiments, the first position corresponds to pixel locations associated with a corner of the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the corner of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker. In some embodiments, the distance is minimized by minimizing a sum of squared distances between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.


In some embodiments, the best match according to the criterion is determined by minimizing the function of a combination of the cost of misalignment term and of a cost of constraint term, where the cost of constraint term represents a constraint to limit parameter search space, and where the cost of constraint term is determined by minimizing a difference between the pixel locations and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the method of FIG. 5 further includes generating a binary image from the image obtained from the camera, and generating a gray-scale image from the binary image, where the gray-scale image includes pixels with corresponding values, and a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image. In some embodiments, the method of FIG. 5 further includes generating a gray-scale image from the image, wherein the gray-scale image includes pixels with corresponding values, and wherein a value of each pixel is a function of a distance between a pixel location in the gray-scale image and the first position of the corner of the lane marker in the gray-scale image.


In some embodiments, the second position of the lane marker is determined by the static lane marker module (shown as the static lane marker module 620 in FIG. 6) based on the location of the vehicle, a direction in which the vehicle is driven, and a pre-determined field of view (FOV) of the camera. In some embodiments, the second position of the lane marker is determined by the static lane marker module by obtaining, from the stored map and based on the location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle, obtaining a second set of one or more lane markers from the first set of one or more lane markers based on the direction in which the vehicle is driven, obtaining a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined FOV of the camera, and obtaining the second position of the lane marker from the third set of one or more lane markers. In some embodiments, the third set of one or more lane markers excludes one or more lane markers determined to be obstructed by one or more objects.


In some embodiments, the first position corresponds to at least one location associated with the lane marker, and the second position corresponds to three-dimensional (3D) world coordinates of the at least one location of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing a function of a cost of misalignment term by minimizing a distance from the 3D world coordinates of the at least one location of the lane marker to the at least one location of the lane marker. In some embodiments, the best match according to the criterion is determined by minimizing the function of the cost of misalignment term and of a cost of constraint term. In some embodiments, the cost of constraint term represents a constraint that limits a search space. In some embodiments, the cost of constraint term is determined by minimizing a difference between the at least one location and a third position of the corner of the lane marker from a previous image obtained as the vehicle is driven.


In some embodiments, the second position of the lane marker is determined based on at least the location of the vehicle. In some embodiments, the second position of the lane marker is determined by obtaining, from the stored map and based on the location of the vehicle, a first set of lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of lane markers from the first set of lane markers based on a direction in which the vehicle is driven; obtaining a third set of lane markers from the second set of lane markers based on a pre-determined FOV of the camera; and obtaining the second position of the lane marker from the third set of lane markers.


In some implementations, methods described in the various embodiments in this patent document are embodied in a computer readable program stored on a non-transitory computer readable media. The computer readable program includes code that when executed by a processor, causes the processor to perform the methods described in this patent document, including the method described in FIG. 5.


In some embodiments, a system includes a processor and a memory. The memory stores instructions, associated with a static lane marker module, an observed lane marker module, and/or a camera pose estimating module, that are executable by the processor to perform operations to estimate camera pose. The system may, for example, include an apparatus, such as a computer, wherein the apparatus includes the above-mentioned memory and the above-mentioned processor.



FIG. 6 shows an exemplary block diagram of a computer located in an autonomous vehicle to estimate camera pose as described in this patent document. The computer 600 includes at least one processor 610 and a memory 605 having instructions stored thereupon. The instructions, upon execution by the processor 610, configure the computer 600 to perform the operations described in FIGS. 1 to 5 and/or the operations described in the various embodiments in this patent document. The static lane marker module 620 can perform the operations to obtain static lane marker related information described in FIGS. 1 and 3 and in the various embodiments in this patent document. The observed lane marker module 625 can perform operations to obtain observed lane marker related information described in FIGS. 1, 4, and 5 and in the various embodiments in this patent document. The camera pose module 630 can perform operations to estimate camera pose as described in FIGS. 1, 4, and 5 and in the various embodiments in this patent document.
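
As an architectural illustration of the computer 600, the skeleton below wires the three modules together; all class and method names are invented for the sketch and only mirror the module names in the figure.

```python
# Hedged skeleton of the modules of computer 600; method bodies are stubs.
class StaticLaneMarkerModule:
    def get_static_markers(self, hd_map, vehicle_state):
        """Query and filter the HD map (FIGS. 1 and 3)."""
        ...

class ObservedLaneMarkerModule:
    def get_distance_transform(self, image):
        """Lane detection, binary image, and distance transform (FIGS. 1 and 4)."""
        ...

class CameraPoseModule:
    def estimate_pose(self, static_markers, dt_image, previous_pose):
        """Minimize Equation (1) to obtain the corrected 6-DoF camera pose."""
        ...

class Computer600:
    def __init__(self):
        self.static_lane_marker_module = StaticLaneMarkerModule()
        self.observed_lane_marker_module = ObservedLaneMarkerModule()
        self.camera_pose_module = CameraPoseModule()
```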


In this document the term “exemplary” is used to mean “an example of” and, unless otherwise stated, does not imply an ideal or a preferred embodiment.


Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media can include a non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.


Some of the disclosed embodiments can be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that is known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols.


While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results.


Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this disclosure.

Claims
  • 1. A method of estimating camera pose, comprising: obtaining, from a camera located on a vehicle, an image comprising a lane marker on a road on which the vehicle is driven; and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road, wherein the first position corresponds to pixel locations associated with a corner of the lane marker, wherein the second position corresponds to a three-dimensional (3D) world coordinates of the corner of the lane marker, wherein the second position of the lane marker is determined by: obtaining, from the stored map and based on a location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of one or more lane markers from the first set of one or more lane markers based on a direction in which the vehicle is driven; obtaining a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined field of view (FOV) of the camera; and obtaining the second position of the lane marker from the third set of one or more lane markers; wherein the best match according to the criterion is determined by minimizing a function of a combination of a cost of misalignment term and of a cost of constraint term, wherein the cost of misalignment term is determined by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker, and wherein the cost of constraint term is determined by minimizing a difference between a first estimated camera pose at a first time when the image is obtained and a second estimated camera pose from a second time, wherein the second time precedes in time the first time.
  • 2. The method of claim 1, wherein the distance is minimized by minimizing a sum of squared distance between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.
  • 3. The method of claim 1, further comprising: generating a binary image from the image obtained from the camera; and generating a gray-scale image from the binary image, wherein the gray-scale image comprises pixels with corresponding values, wherein a value of each pixel is a second function of a second distance between a pixel location in the gray-scale image and the first position of the corner of lane marker in the gray-scale image.
  • 4. The method of claim 1, wherein the third set of one or more lane markers excludes one or more lane markers determined to be obstructed by one or more objects.
  • 5. The method of claim 1, wherein the cost of constraint term is another function of a first set of rotation values of the camera associated with the first time, a second set of translation values of the camera associated with the first time, a second difference between the first set of rotation values and a third set of rotation values of the camera associated with the second time, and a third difference between the second set of translation values and a fourth set of translation values of the camera associated with the second time.
  • 6. A system comprising: a processor; and a memory that stores instructions executable by the processor to: obtain, from a camera located on a vehicle, an image comprising a lane marker on a road on which the vehicle is driven; and estimate a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road, wherein the first position corresponds to pixel locations associated with a corner of the lane marker, wherein the second position corresponds to a three-dimensional (3D) world coordinates of the corner of the lane marker, wherein the second position of the lane marker is determined by the processor configured to: obtain, from the stored map and based on a location of the vehicle, a first set of one or more lane markers that are located within a pre-determined distance from the vehicle; obtain a second set of one or more lane markers from the first set of one or more lane markers based on a direction in which the vehicle is driven; obtain a third set of one or more lane markers from the second set of one or more lane markers based on a pre-determined field of view (FOV) of the camera; and obtain the second position of the lane marker from the third set of one or more lane markers; wherein the best match according to the criterion is determined by minimizing a function of a combination of a cost of misalignment term and of a cost of constraint term, wherein the cost of misalignment term is determined by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker, and wherein the cost of constraint term is determined by minimizing a difference between a first estimated camera pose at a first time when the image is obtained and a second estimated camera pose from a second time, wherein the second time precedes in time the first time.
  • 7. The system of claim 6, wherein the cost of constraint term represents a constraint that limits a search space.
  • 8. The system of claim 6, wherein the cost of constraint term is another function of a first set of rotation values of the camera associated with the first time, a second set of translation values of the camera associated with the first time, a second difference between the first set of rotation values and a third set of rotation values of the camera associated with the second time, and a third difference between the second set of translation values and a fourth set of translation values of the camera associated with the second time.
  • 9. A non-transitory computer readable storage medium having code stored thereon, the code, when executed by a processor, causing the processor to implement a method, comprising: obtaining, from a camera located on a vehicle, an image comprising a lane marker on a road on which the vehicle is driven; and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road, wherein the first position corresponds to pixel locations associated with a corner of the lane marker, wherein the second position corresponds to a three-dimensional (3D) world coordinates of the corner of the lane marker, wherein the second position of the lane marker is determined by: obtaining, from the stored map and based on a location of the vehicle, a first set of lane markers that are located within a pre-determined distance from the vehicle; obtaining a second set of lane markers from the first set of lane markers based on a direction in which the vehicle is driven; obtaining a third set of lane markers from the second set of lane markers based on a pre-determined field of view (FOV) of the camera; and obtaining the second position of the lane marker from the third set of lane markers; wherein the best match according to the criterion is determined by minimizing a function of a combination of a cost of misalignment term and of a cost of constraint term, wherein the cost of misalignment term is determined by minimizing a distance from the 3D world coordinates of the corner of the lane marker to the pixel locations associated with the corner of the lane marker, and wherein the cost of constraint term is determined by minimizing a difference between a first estimated camera pose at a first time when the image is obtained and a second estimated camera pose from a second time, wherein the second time precedes in time the first time.
  • 10. The non-transitory computer readable storage medium of claim 9, wherein the method further comprises: generating a gray-scale image from the image, wherein the gray-scale image comprises pixels with corresponding values, wherein a value of each pixel is a second function of a second distance between a pixel location in the gray-scale image and the first position of lane marker in the gray-scale image.
  • 11. The non-transitory computer readable storage medium of claim 9, wherein the distance is minimized by minimizing a sum of squared distance between the pixel locations associated with the corner of the lane marker and the 3D world coordinates of the corner of the lane marker.
  • 12. The non-transitory computer readable storage medium of claim 9, wherein the third set of lane markers excludes one or more lane markers determined to be obstructed by one or more objects.
  • 13. The non-transitory computer readable storage medium of claim 9, wherein the cost of constraint term is another function of a first set of rotation values of the camera associated with the first time, a second set of translation values of the camera associated with the first time, a second difference between the first set of rotation values and a third set of rotation values of the camera associated with the second time, and a third difference between the second set of translation values and a fourth set of translation values of the camera associated with the second time.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to provisional patent application No. 63/007,895, titled “CAMERA POSE ESTIMATION TECHNIQUES,” filed Apr. 9, 2020, the disclosure of which is hereby incorporated by reference herein.

US Referenced Citations (259)
Number Name Date Kind
6084870 Wooten et al. Jul 2000 A
6263088 Crabtree et al. Jul 2001 B1
6594821 Banning et al. Jul 2003 B1
6777904 Degner et al. Aug 2004 B1
6975923 Spriggs Dec 2005 B2
7103460 Breed Sep 2006 B1
7689559 Canright et al. Mar 2010 B2
7742841 Sakai et al. Jun 2010 B2
7783403 Breed Aug 2010 B2
7844595 Canright et al. Nov 2010 B2
8041111 Wilensky Oct 2011 B1
8064643 Stein et al. Nov 2011 B2
8082101 Stein et al. Dec 2011 B2
8164628 Stein et al. Apr 2012 B2
8175376 Marchesotti et al. May 2012 B2
8271871 Marchesotti Sep 2012 B2
8346480 Trepagnier et al. Jan 2013 B2
8378851 Stein et al. Feb 2013 B2
8392117 Dolgov et al. Mar 2013 B2
8401292 Park et al. Mar 2013 B2
8412449 Trepagnier et al. Apr 2013 B2
8478072 Aisaka et al. Jul 2013 B2
8553088 Stein et al. Oct 2013 B2
8706394 Trepagnier et al. Apr 2014 B2
8718861 Montemerlo et al. May 2014 B1
8788134 Litkouhi et al. Jul 2014 B1
8908041 Stein et al. Dec 2014 B2
8917169 Schofield et al. Dec 2014 B2
8963913 Baek Feb 2015 B2
8965621 Urmson et al. Feb 2015 B1
8981966 Stein et al. Mar 2015 B2
8983708 Choe et al. Mar 2015 B2
8993951 Schofield et al. Mar 2015 B2
9002632 Emigh Apr 2015 B1
9008369 Schofield et al. Apr 2015 B2
9025880 Perazzi et al. May 2015 B2
9042648 Wang et al. May 2015 B2
9081385 Ferguson et al. Jul 2015 B1
9088744 Grauer et al. Jul 2015 B2
9111444 Kaganovich Aug 2015 B2
9117133 Barnes et al. Aug 2015 B2
9118816 Stein et al. Aug 2015 B2
9120485 Dolgov Sep 2015 B1
9122954 Srebnik et al. Sep 2015 B2
9134402 Sebastian et al. Sep 2015 B2
9145116 Clarke et al. Sep 2015 B2
9147255 Zhang et al. Sep 2015 B1
9156473 Clarke et al. Oct 2015 B2
9176006 Stein Nov 2015 B2
9179072 Stein et al. Nov 2015 B2
9183447 Gdalyahu et al. Nov 2015 B1
9185360 Stein et al. Nov 2015 B2
9191634 Schofield et al. Nov 2015 B2
9214084 Grauer et al. Dec 2015 B2
9219873 Grauer et al. Dec 2015 B2
9233659 Rosenbaum et al. Jan 2016 B2
9233688 Clarke et al. Jan 2016 B2
9248832 Huberman Feb 2016 B2
9248835 Tanzmeister Feb 2016 B2
9251708 Rosenbaum et al. Feb 2016 B2
9277132 Berberian Mar 2016 B2
9280711 Stein Mar 2016 B2
9282144 Tebay et al. Mar 2016 B2
9286522 Stein et al. Mar 2016 B2
9297641 Stein Mar 2016 B2
9299004 Lin et al. Mar 2016 B2
9315192 Zhu et al. Apr 2016 B1
9317033 Ibanez-Guzman et al. Apr 2016 B2
9317776 Honda et al. Apr 2016 B1
9330334 Lin et al. May 2016 B2
9342074 Dolgov et al. May 2016 B2
9347779 Lynch May 2016 B1
9355635 Gao et al. May 2016 B2
9365214 Shalom et al. Jun 2016 B2
9399397 Mizutani et al. Jul 2016 B2
9418549 Kang et al. Aug 2016 B2
9428192 Schofield et al. Aug 2016 B2
9436880 Bos et al. Sep 2016 B2
9438878 Niebla, Jr. et al. Sep 2016 B2
9443163 Springer Sep 2016 B2
9446765 Shalom et al. Sep 2016 B2
9459515 Stein Oct 2016 B2
9466006 Duan Oct 2016 B2
9476970 Fairfield et al. Oct 2016 B1
9483839 Kwon et al. Nov 2016 B1
9490064 Hirosawa et al. Nov 2016 B2
9494935 Okumura et al. Nov 2016 B2
9507346 Levinson et al. Nov 2016 B1
9513634 Pack et al. Dec 2016 B2
9531966 Stein et al. Dec 2016 B2
9535423 Debreczeni Jan 2017 B1
9538113 Grauer et al. Jan 2017 B2
9547985 Tuukkanen Jan 2017 B2
9549158 Grauer et al. Jan 2017 B2
9552657 Ueno et al. Jan 2017 B2
9555803 Pawlicki et al. Jan 2017 B2
9568915 Berntorp et al. Feb 2017 B1
9587952 Slusar Mar 2017 B1
9599712 Van Der Tempel et al. Mar 2017 B2
9600889 Boisson et al. Mar 2017 B2
9602807 Crane et al. Mar 2017 B2
9612123 Levinson et al. Apr 2017 B1
9620010 Grauer et al. Apr 2017 B2
9625569 Lange Apr 2017 B2
9628565 Stenneth et al. Apr 2017 B2
9649999 Amireddy et al. May 2017 B1
9652860 Maali et al. May 2017 B1
9669827 Ferguson et al. Jun 2017 B1
9672446 Vallespi-Gonzalez Jun 2017 B1
9690290 Prokhorov Jun 2017 B2
9701023 Zhang et al. Jul 2017 B2
9712754 Grauer et al. Jul 2017 B2
9720418 Stenneth Aug 2017 B2
9723097 Harris et al. Aug 2017 B2
9723099 Chen et al. Aug 2017 B2
9723233 Grauer et al. Aug 2017 B2
9726754 Massanell et al. Aug 2017 B2
9729860 Cohen et al. Aug 2017 B2
9738280 Rayes Aug 2017 B2
9739609 Lewis Aug 2017 B1
9746550 Nath et al. Aug 2017 B2
9753128 Schweizer et al. Sep 2017 B2
9753141 Grauer et al. Sep 2017 B2
9754490 Kentley et al. Sep 2017 B2
9760837 Nowozin et al. Sep 2017 B1
9766625 Boroditsky et al. Sep 2017 B2
9769456 You et al. Sep 2017 B2
9773155 Shotton et al. Sep 2017 B2
9779276 Todeschini et al. Oct 2017 B2
9785149 Wang et al. Oct 2017 B2
9805294 Liu et al. Oct 2017 B2
9810785 Grauer et al. Nov 2017 B2
9823339 Cohen Nov 2017 B2
9842399 Yamaguchi Dec 2017 B2
9953236 Huang et al. Apr 2018 B1
10147193 Huang et al. Dec 2018 B2
10223806 Luo et al. Mar 2019 B1
10223807 Luo et al. Mar 2019 B1
10410055 Wang et al. Sep 2019 B2
10529089 Ahmad Jan 2020 B2
10698100 Becker et al. Jun 2020 B2
10816354 Liu Oct 2020 B2
20010051845 Itoh Dec 2001 A1
20030114980 Klausner et al. Jun 2003 A1
20030174773 Comaniciu et al. Sep 2003 A1
20040264763 Mas et al. Dec 2004 A1
20070088497 Jung Apr 2007 A1
20070183661 El-maleh et al. Aug 2007 A1
20070183662 Wang et al. Aug 2007 A1
20070230792 Shashua et al. Oct 2007 A1
20070286526 Abousleman et al. Dec 2007 A1
20080109118 Schwartz et al. May 2008 A1
20080249667 Horvitz et al. Oct 2008 A1
20090040054 Wang et al. Feb 2009 A1
20090087029 Coleman et al. Apr 2009 A1
20090243825 Schofield Oct 2009 A1
20100049397 Liu et al. Feb 2010 A1
20100082238 Nakamura et al. Apr 2010 A1
20100111417 Ward et al. May 2010 A1
20100226564 Marchesotti et al. Sep 2010 A1
20100281361 Marchesotti Nov 2010 A1
20110142283 Huang et al. Jun 2011 A1
20110206282 Aisaka et al. Aug 2011 A1
20110247031 Jacoby Oct 2011 A1
20120041636 Johnson et al. Feb 2012 A1
20120105639 Stein et al. May 2012 A1
20120120069 Kodaira et al. May 2012 A1
20120140076 Rosenbaum et al. Jun 2012 A1
20120274629 Baek Nov 2012 A1
20120314070 Zhang et al. Dec 2012 A1
20130051613 Bobbitt et al. Feb 2013 A1
20130083959 Owechko et al. Apr 2013 A1
20130182134 Grundmann et al. Jul 2013 A1
20130204465 Phillips et al. Aug 2013 A1
20130266187 Bulan et al. Oct 2013 A1
20130329052 Chew Dec 2013 A1
20140063489 Steffey et al. Mar 2014 A1
20140072170 Zhang et al. Mar 2014 A1
20140104051 Breed Apr 2014 A1
20140142799 Ferguson et al. May 2014 A1
20140143839 Ricci May 2014 A1
20140145516 Hirosawa et al. May 2014 A1
20140198184 Stein et al. Jul 2014 A1
20140314322 Snavely et al. Oct 2014 A1
20140321704 Partis Oct 2014 A1
20140334668 Saund Nov 2014 A1
20150062304 Stein et al. Mar 2015 A1
20150127239 Breed et al. May 2015 A1
20150253428 Holz Sep 2015 A1
20150269437 Maruyama et al. Sep 2015 A1
20150269438 Samarasekera et al. Sep 2015 A1
20150292891 Kojo Oct 2015 A1
20150310370 Burry et al. Oct 2015 A1
20150353082 Lee et al. Dec 2015 A1
20160008988 Kennedy et al. Jan 2016 A1
20160026787 Nairn et al. Jan 2016 A1
20160037064 Stein et al. Feb 2016 A1
20160046290 Aharony et al. Feb 2016 A1
20160094774 Li et al. Mar 2016 A1
20160118080 Chen Apr 2016 A1
20160125608 Sorstedt May 2016 A1
20160129907 Kim et al. May 2016 A1
20160165157 Stein et al. Jun 2016 A1
20160191860 Jung Jun 2016 A1
20160210528 Duan Jul 2016 A1
20160275766 Venetianer et al. Sep 2016 A1
20160321381 English et al. Nov 2016 A1
20160321817 Ratcliff et al. Nov 2016 A1
20160334230 Ross et al. Nov 2016 A1
20160342837 Hong et al. Nov 2016 A1
20160347322 Clarke et al. Dec 2016 A1
20160375907 Erban Dec 2016 A1
20170053169 Cuban et al. Feb 2017 A1
20170061632 Lindner et al. Mar 2017 A1
20170124476 Levinson et al. May 2017 A1
20170134631 Zhao et al. May 2017 A1
20170177951 Yang et al. Jun 2017 A1
20170227647 Baik Aug 2017 A1
20170301104 Qian et al. Oct 2017 A1
20170305423 Green Oct 2017 A1
20170318407 Meister et al. Nov 2017 A1
20170363423 Dormody et al. Dec 2017 A1
20180005407 Browning et al. Jan 2018 A1
20180111274 Seok et al. Apr 2018 A1
20180131924 Jung et al. May 2018 A1
20180149739 Becker et al. May 2018 A1
20180151063 Pun et al. May 2018 A1
20180158197 Dasgupta et al. Jun 2018 A1
20180188043 Chen et al. Jul 2018 A1
20180216943 Hawkins et al. Aug 2018 A1
20180260956 Huang et al. Sep 2018 A1
20180268566 Houts et al. Sep 2018 A1
20180283892 Behrendt et al. Oct 2018 A1
20180284278 Russell et al. Oct 2018 A1
20180312125 Jung et al. Nov 2018 A1
20180315201 Cameron et al. Nov 2018 A1
20180364717 Douillard et al. Dec 2018 A1
20180373254 Song et al. Dec 2018 A1
20180373980 Huval Dec 2018 A1
20190025853 Julian et al. Jan 2019 A1
20190065863 Luo et al. Feb 2019 A1
20190066329 Luo et al. Feb 2019 A1
20190066330 Luo et al. Feb 2019 A1
20190108384 Wang et al. Apr 2019 A1
20190132391 Thomas et al. May 2019 A1
20190132392 Liu et al. May 2019 A1
20190163989 Guo et al. May 2019 A1
20190210564 Han et al. Jul 2019 A1
20190210613 Sun et al. Jul 2019 A1
20190226851 Nicosevici et al. Jul 2019 A1
20190236950 Li et al. Aug 2019 A1
20190266420 Ge et al. Aug 2019 A1
20190271549 Zhang Sep 2019 A1
20190312993 Yamashita et al. Oct 2019 A1
20190339084 Korenaga et al. Nov 2019 A1
20200089973 Efland Mar 2020 A1
20200271473 Wang et al. Aug 2020 A1
20210183099 Fujii Jun 2021 A1
20210373161 Lu Dec 2021 A1
Foreign Referenced Citations (52)
Number Date Country
102815305 Dec 2012 CN
105667518 Jun 2016 CN
105825173 Aug 2016 CN
106340197 Jan 2017 CN
106781591 May 2017 CN
106909876 Jun 2017 CN
107015238 Aug 2017 CN
107111742 Aug 2017 CN
108010360 May 2018 CN
111256693 Jun 2020 CN
2608513 Sep 1977 DE
0890470 Jan 1999 EP
1754179 Feb 2007 EP
2448251 May 2012 EP
2463843 Jun 2012 EP
2761249 Aug 2014 EP
2918974 Sep 2015 EP
2946336 Nov 2015 EP
2993654 Mar 2016 EP
3081419 Oct 2016 EP
3819673 May 2021 EP
2017198566 Nov 2017 JP
100802511 Feb 2008 KR
20170065083 Jun 2017 KR
1991009375 Jun 1991 WO
2005098739 Oct 2005 WO
2005098751 Oct 2005 WO
2005098782 Oct 2005 WO
2010109419 Sep 2010 WO
2013045612 Apr 2013 WO
2014111814 Jul 2014 WO
2014166245 Oct 2014 WO
2014201324 Dec 2014 WO
2015083009 Jun 2015 WO
2015103159 Jul 2015 WO
2015125022 Aug 2015 WO
2015186002 Dec 2015 WO
2016090282 Jun 2016 WO
2016135736 Sep 2016 WO
2017013875 Jan 2017 WO
2017079349 May 2017 WO
2017079460 May 2017 WO
2018132608 Jul 2018 WO
2019040800 Feb 2019 WO
2019084491 May 2019 WO
2019084494 May 2019 WO
2019140277 Jul 2019 WO
2019161134 Aug 2019 WO
2019168986 Sep 2019 WO
2020038118 Feb 2020 WO
2020097512 May 2020 WO
WO-2021017213 Feb 2021 WO
Non-Patent Literature Citations (81)
Entry
Extended European Search Report for European Patent Application No. 18849237.5, dated Apr. 23, 2021.
Extended European Search Report for European Patent Application No. 21166828.0, dated Aug. 5, 2021 (8 pages).
International Application No. PCT/US18/53795, International Search Report and Written Opinion dated Dec. 31, 2018.
International Application No. PCT/US18/57848, International Search Report and Written Opinion dated Jan. 7, 2019.
International Application No. PCT/US19/12934, International Search Report and Written Opinion dated Apr. 29, 2019.
International Application No. PCT/US19/25995, International Search Report and Written Opinion dated Jul. 9, 2019.
International Application No. PCT/US2018/047608, International Search Report and Written Opinion dated Dec. 28, 2018.
International Application No. PCT/US2018/047830, International Search Report and Written Opinion dated Apr. 27, 2017.
International Application No. PCT/US2018/057851, International Search Report and Written Opinion dated Feb. 1, 2019.
International Application No. PCT/US2019/013322, International Search Report and Written Opinion dated Apr. 2, 2019.
International Application No. PCT/US2019/019839, International Search Report and Written Opinion dated May 23, 2019.
International Search Report and Written Opinion for PCT/US19/060547, dated Jun. 25, 2020.
Luo, Yi et al. U.S. Appl. No. 15/684,389 Notice of Allowance, dated Oct. 9, 2019.
Office Action for Chinese Application No. 201810025516.X, dated Sep. 3, 2019.
Ahn, Kyoungho, et al., "The Effects of Route Choice Decisions on Vehicle Energy Consumption and Emissions", Virginia Tech Transportation Institute, date unknown.
Athanasiadis, Thanos, et al., "Semantic Image Segmentation and Object Labeling", IEEE Transactions on Circuits and Systems for Video Technology, 17(3).
Barth, Matthew, et al., "Recent Validation Efforts for a Comprehensive Modal Emissions Model", Transportation Research Record 1750, Paper No. 01-0326, College of Engineering, Center for Environmental Research and Technology, University of California, Riverside, CA 92521, date unknown.
Carle, Patrick J.F., "Global Rover Localization by Matching Lidar and Orbital 3D Maps", IEEE, Anchorage Convention District, Anchorage, Alaska, US, pp. 1-6, May 3-8, 2010.
Caselitz, et al., "Monocular Camera Localization in 3D LiDAR Maps", Germany.
Cordts, Marius, et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding", Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas.
Dai, Jifeng, et al., "Instance-aware Semantic Segmentation via Multi-task Network Cascades", Microsoft Research, CVPR, 10 pp.
Engel, et al., "LSD-SLAM: Large-Scale Direct Monocular SLAM", Munich.
Geiger, Andreas, et al., "Automatic Camera and Range Sensor Calibration using a single Shot", Robotics and Automation (ICRA), IEEE International Conference, 1-8.
Guarneri, P., et al., "A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities", IEEE Transactions on Neural Networks, 19(9), 1549-1563.
Gurghian, A., et al., "DeepLanes: End-to-End Lane Position Estimation using Deep Neural Networks", 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 38-45.
Hillel, Aharon B., et al., "Recent Progress in Road and Lane Detection: A Survey".
Hou, Xiaodi, et al., "Image Signature: Highlighting Sparse Salient Regions", IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1), 194-201.
Hou, Xiaodi, et al., "A Meta-Theory of Boundary Detection Benchmarks", arXiv preprint arXiv:1302.5985, 2013.
Hou, Xiaodi, et al., "A Time-Dependent Model of Information Capacity of Visual Attention", International Conference on Neural Information Processing, Springer Berlin Heidelberg, 127-136.
Hou, Xiaodi, et al., "Boundary Detection Benchmarking: Beyond F-Measures", Computer Vision and Pattern Recognition, CVPR'13, IEEE, 1-8.
Hou, Xiaodi, et al., "Color Conceptualization", Proceedings of the 15th ACM International Conference on Multimedia, ACM, 265-268.
Hou, Xiaodi, "Computational Modeling and Psychophysics in Low and Mid-Level Vision", California Institute of Technology.
Hou, Xiaodi, et al., "Dynamic Visual Attention: Searching for Coding Length Increments", Advances in Neural Information Processing Systems, 21, 681-688.
Hou, Xiaodi, et al., "Saliency Detection: A Spectral Residual Approach", Computer Vision and Pattern Recognition, CVPR'07, IEEE Conference, 1-8.
Hou, Xiaodi, et al., "Thumbnail Generation Based on Global Saliency", Advances in Cognitive Neurodynamics, ICCN 2007, Springer Netherlands, 999-1003.
Huval, Brody, et al., "An Empirical Evaluation of Deep Learning on Highway Driving", arXiv:1504.01716v3 [cs.RO], 7 pp.
Jain, Suyong Dutt, et al., "Active Image Segmentation Propagation", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas.
Kendall, Alex, et al., "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", arXiv:1703.04977v1 [cs.CV].
Levinson, Jesse, et al., "Experimental Robotics, Unsupervised Calibration for Multi-Beam Lasers", 12th Ed., Oussama Khatib, Vijay Kumar, Gaurav Sukhatme (Eds.), Springer-Verlag Berlin Heidelberg, 179-194.
Li, Tian, "Proposal Free Instance Segmentation Based on Instance-aware Metric", Department of Computer Science, Cranberry-Lemon University, Pittsburgh, PA, date unknown.
Li, Yanghao, et al., "Demystifying Neural Style Transfer", arXiv preprint arXiv:1701.01036.
Li, Yanghao, et al., "Factorized Bilinear Models for Image Recognition", arXiv preprint arXiv:1611.05709.
Li, Yanghao, et al., "Revisiting Batch Normalization for Practical Domain Adaptation", arXiv preprint arXiv:1603.04779.
Li, Yin, et al., "The Secrets of Salient Object Segmentation", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 280-287.
Macaodha, Oisin, et al., "Hierarchical Subquery Evaluation for Active Learning on a Graph", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Mur-Artal, et al., “ORB-SLAM: A Versatile and Accurate Monocular SLAM System”, IEEE Transactions on Robotics, Oct. 2015, vol. 31, No. 5, Spain.
Narote, S., et al., “A review of recent advances in lane detection and departure warning system”, Pattern Recognition, Elsevier, 73, pp. 216-234 (2018).
Nguyen, T., “Evaluation of Lane Detection Algorithms based on an Embedded Platform”, Master Thesis, Technische Universität Chemnitz, Jun. 2017, available at https://nbn-resolving.org/urn:nbn:de:bsz:ch1-qucosa-226615.
Niu, J., et al., "Robust Lane Detection using Two-stage Feature Extraction with Curve Fitting", Pattern Recognition, Elsevier, 59, pp. 225-233 (2016).
Norouzi, Mohammad, et al., "Hamming Distance Metric Learning", Departments of Computer Science and Statistics, University of Toronto, date unknown.
Paszke, Adam, et al., "Enet: A deep neural network architecture for real-time semantic segmentation", CoRR, abs/1606.02147.
Ramos, Sebastian, et al., "Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep Learning and Geometric Modeling", arXiv:1612.06573v1 [cs.CV], 8 pp.
Richter, Stephen, et al., "Playing for Data: Ground Truth from Computer Games", Intel Labs, European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands.
Sattler, et al., "Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?", CVPR, IEEE, 2017, 1-10.
Schindler, Andreas, et al., "Generation of high precision digital maps using circular arc splines", 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, doi: 10.1109/IVS.2012.6232124, 246-251.
Schroff, Florian, et al., "FaceNet: A Unified Embedding for Face Recognition and Clustering", Google, CVPR, 10 pp.
Somani, Adhiraj, et al., "DESPOT: Online POMDP Planning with Regularization", Department of Computer Science, National University of Singapore, date unknown.
Spinello, Luciano, et al., "Multiclass Multimodal Detection and Tracking in Urban Environments", Sage Journals, 29(12), Article first published online: Oct. 7, 2010; Issue published: Oct. 1, 2010, 1498-1515.
Szeliski, Richard, "Computer Vision: Algorithms and Applications", http://szeliski.org/Book/.
Wang, Panqu, et al., “Understanding Convolution for Semantic Segmentation”, arXiv preprint arXiv:1702.08502.
Wei, Junqing, et al., “A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving”, 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA.
Welinder, Peter, et al., “The Multidimensional Wisdom of Crowds”, http://www.vision.caltech.edu/visipedia/papers/WelinderEtaINIPS10.pdf.
Yang, C., “Neural Network-Based Motion Control of an Underactuated Wheeled Inverted Pendulum Model”, IEEE Transactions on Neural Networks and Learning Systems, 25(11), 2004-2016.
International Search Report and Written Opinion for PCT/US19/18113, dated May 8, 2019.
Yu, Kai, et al., "Large-scale Distributed Video Parsing and Evaluation Platform", Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, China, arXiv:1611.09580v1 [cs.CV].
Zhang, Z., et al., "A Flexible New Technique for Camera Calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, Issue: 11.
Zhou, Bolei, et al., "A Phase Discrepancy Analysis of Object Motion", Asian Conference on Computer Vision, Springer Berlin Heidelberg, 225-238.
Chinese Patent Office, First Search Report for CN 201980013350.2, dated Feb. 21, 2022, 3 pages with machine translation.
Chinese Patent Office, First Office Action for CN 201980013350.2, dated Feb. 25, 2022, 20 pages with machine translation.
Mingdong Wang et al., U.S. Appl. No. 16/184,926 Notice of Allowance dated Jan. 15, 2021, pp. 1-5.
Harry Y. Oh, U.S. Appl. No. 15/896,077, Non-Final Office Action dated Mar. 13, 2020, pp. 1-21.
Harry Y. Oh, U.S. Appl. No. 15/896,077, Final Office Action dated Jul. 9, 2020, pp. 1-30.
Harry Y. Oh, U.S. Appl. No. 15/896,077, Non-Final Office Action dated Oct. 1, 2020, pp. 1-34.
Harry Y. Oh, U.S. Appl. No. 16/184,926, Non-Final Office Action dated Oct. 5, 2020, pp. 1-17.
Examination Report from corresponding European Patent Application No. 21166828.0, dated Mar. 16, 2023 (8 pages).
Xiao, Zhongyang, et al., "Monocular Vehicle Self-localization Method Based on Compact Semantic Map", 2018 21st International Conference on Intelligent Transportation Systems (ITSC), IEEE, Nov. 4, 2018, pp. 3083-3090.
Siyuan Liu, U.S. Appl. No. 17/074,468 Notice of Allowance dated Oct. 7, 2022, pp. 1-7.
Chinese Patent Office, First Office Action for CN 201880055025.8, dated Dec. 16, 2022, 10 pages.
Schindler, et al., "Generation of High Precision Digital Maps using Circular Arc Splines," 2012 Intelligent Vehicles Symposium, Alcala de Henares, Spain, Jun. 3-7, 2012.
Mingdong Wang, U.S. Appl. No. 17/320,888, Non-Final Office Action dated Jan. 18, 2023, pp. 1-8.
U.S. Patent & Trademark Office, Non-Final Office Action for U.S. Appl. No. 17/308,803, dated Mar. 16, 2023, 25 pages.
Related Publications (1)
Number Date Country
20210319584 A1 Oct 2021 US
Provisional Applications (1)
Number Date Country
63007895 Apr 2020 US