SELF ADAPTIVE ENHANCEMENT FOR AUTOMATED DRIVING WITH MAP AND CAMERA ISSUES

Information

  • Patent Application
  • Publication Number
    20240401970
  • Date Filed
    June 01, 2023
  • Date Published
    December 05, 2024
Abstract
A vehicle, and a system and method of operating the vehicle, are disclosed. The system includes a camera, a map database and a processor. The camera obtains camera data of a location of a road being traversed by the vehicle. The map database provides map data of the location of the road. The processor determines a first curvature of the road at the location from the camera data, determines a second curvature of the road at the location from the map data, identifies a mismatch between the first curvature and the second curvature at the location of the road, generates a case report for one of the map data and the camera data at the location upon occurrence of the mismatch, and adjusts one of the map data and a confidence level in the camera data for the location based on the case report.
Description
INTRODUCTION

The subject disclosure relates to autonomous vehicles and, in particular, to a system and method for correcting map data and/or camera data used in guiding the autonomous vehicles along a section of road.


Automated driving of a vehicle relies on map data and/or camera data to identify a direction of a road and a curvature of the road. When the camera data and the map data do not match each other at a given location, technical personnel are called in to manually identify the causes of the mismatch and propose solutions for improving the data integrity. Such reliance on technical personnel is time-consuming. Accordingly, it is desirable to provide a system that determines which of the map data and the camera data is in error and makes the necessary corrections.


SUMMARY

In one exemplary embodiment, a method of operating a vehicle is disclosed. Camera data of a location of a road is obtained, the camera data being used by the vehicle to traverse the location. Map data of the location of the road is obtained, the map data being used by the vehicle to traverse the location. A first curvature of the road is determined at the location from the camera data. A second curvature of the road is determined at the location from the map data. A mismatch is identified between the first curvature and the second curvature at the location of the road. A case report is generated for one of the map data and the camera data at the location upon occurrence of the mismatch. One of the map data and a confidence level in the camera data is adjusted for the location based on the case report.


In addition to one or more of the features described herein, the method further includes determining a rationality of one of the map data and the camera data by determining one of a swing in one of the first curvature and the second curvature and an oscillation in one of the first curvature and the second curvature. The method further includes determining an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition. The method further includes comparing a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location. The method further includes generating the case report for each time the vehicle traverses the location. The method further includes generating the case report for multiple vehicles traversing the location at different times. The method further includes excluding a case report for the camera data based on an environmental condition.


In another exemplary embodiment, a system for operating a vehicle is disclosed. The system includes a camera, a map database and a processor. The camera is configured to obtain camera data of a location of a road being traversed by the vehicle. The map database is configured to provide map data of the location of the road. The processor is configured to determine a first curvature of the road at the location from the camera data, determine a second curvature of the road at the location from the map data, identify a mismatch between the first curvature and the second curvature at the location of the road, generate a case report for one of the map data and the camera data at the location upon occurrence of the mismatch, and adjust one of the map data and a confidence level in the camera data for the location based on the case report.


In addition to one or more of the features described herein, the processor is further configured to determine a rationality of one of the map data and the camera data by determining one of a swing in one of the first curvature and the second curvature and an oscillation in one of the first curvature and the second curvature. The processor is further configured to determine an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition. The processor is further configured to compare a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location. The processor is further configured to generate the case report for each time the vehicle traverses the location. The processor is further configured to generate the case report for multiple vehicles traversing the location at different times. The processor is further configured to exclude a case report for the camera data based on an environmental condition.


In yet another exemplary embodiment, a vehicle is disclosed. The vehicle includes a camera, a map database and a processor. The camera is configured to obtain camera data of a location of a road being traversed by the vehicle. The map database is configured to provide map data of the location of the road. The processor is configured to determine a first curvature of the road at the location from the camera data, determine a second curvature of the road at the location from the map data, identify a mismatch between the first curvature and the second curvature at the location of the road, generate a case report for one of the map data and the camera data at the location upon occurrence of the mismatch, and adjust one of the map data and a confidence level in the camera data for the location based on the case report.


In addition to one or more of the features described herein, the processor is further configured to determine a rationality of one of the map data and the camera data by determining one of a swing in one of the first curvature and the second curvature and an oscillation in one of the first curvature and the second curvature. The processor is further configured to determine an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition. The processor is further configured to compare a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location. The processor is further configured to generate the case report for each time the vehicle traverses the location. The processor is further configured to generate the case report for multiple vehicles traversing the location at different times.


The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Other features, advantages and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:



FIG. 1 shows a vehicle that can be operated in an autonomous mode or in a cruise control mode, in accordance with an exemplary embodiment;



FIG. 2 shows a flow diagram of a method for enhancing automated driving, in an embodiment;



FIG. 3 shows a flowchart of a method for identifying an issue for enhanced use of map data and camera data in automated driving;



FIG. 4 shows a graph of a timeline of a trajectory of a vehicle following a selected data set, in an embodiment;



FIG. 5 shows a graph of a timeline of a trajectory of a vehicle following a selected data set, in another embodiment;



FIG. 6 shows a flowchart of a method for determining a rationality of a curvature swing of a trajectory;



FIG. 7 shows a flowchart of a method of determining a rationality of a curvature oscillation;



FIG. 8 shows a first curvature for a trajectory of a human-driven vehicle and a second curvature for a trajectory based on a data set; and



FIG. 9 shows a flowchart illustrating a method of location analysis.





DETAILED DESCRIPTION

The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.


In accordance with an exemplary embodiment, FIG. 1 shows a vehicle 100 that can be operated in an autonomous mode or in a cruise control mode. The vehicle 100 includes a driving system 102 that controls motion of the vehicle along a section of road. The driving system 102 can include a camera 104 that obtains information regarding the surroundings of the vehicle 100 including a road on which the vehicle is moving. The driving system 102 also includes a map database 108 that provides a map of the section of road being traversed by the vehicle. The vehicle 100 can obtain both map data and camera data. The vehicle includes a controller 106 that performs various calculations to operate the vehicle using one or more of the map data and the camera data.


The controller 106 may include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. The controller 106 may include a non-transitory computer-readable medium that stores instructions which, when processed by one or more processors of the controller 106, implement a method of determining a difference between map data and camera data and adjusting a parameter with respect to either the map data or the camera data for subsequent use. The method includes selecting one of the map data and camera data for creating a trajectory for the vehicle and maneuvering the vehicle along the trajectory.


The vehicle 100 can also include a communication unit 110 that allows the controller 106 to communicate with a map server 120. In an alternative embodiment, map data can be received from the map database 108 and the results of calculations at the controller 106 can be transferred to the map database for implementation.



FIG. 2 shows a flow diagram 200 of a method for enhancing automated driving, in an embodiment. In box 202, data is obtained at the vehicle 100 for use in automated driving of the vehicle. The data includes camera data obtained from camera 104 and map data from the map database 108 or the map server 120.


In box 204, a first curvature of the road at a selected location of the road is determined from the map data and a second curvature of the road at the selected location is determined from the camera data.


In box 206, the first curvature and the second curvature are compared to each other to determine if there is a mismatch between them. When the first curvature and the second curvature differ from each other by an amount that is less than a match threshold (i.e., the curvatures match), the method can return to box 202 where new sets of data are obtained from the map database 108 and the camera 104. If the first curvature and the second curvature differ from each other by an amount that is greater than a match threshold (i.e., the curvatures do not match), the method proceeds to box 208.
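
As a minimal sketch of the comparison in box 206 (in Python; the function name, signature and default threshold value are illustrative assumptions, since the disclosure does not specify them):

```python
# Minimal sketch of the box 206 comparison. The name, signature and the
# default match threshold are illustrative assumptions, not from the patent.
def curvatures_mismatch(curv_map: float, curv_camera: float,
                        match_threshold: float = 1e-3) -> bool:
    """Return True when the map-based and camera-based curvatures at a
    location differ by more than the match threshold (curvature in 1/m)."""
    return abs(curv_map - curv_camera) > match_threshold
```

When this check returns False, the flow loops back to box 202 for fresh data; when it returns True, the rationality checks of box 208 are applied.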


In box 208, each of the first curvature and the second curvature can be checked for rationality. The rationality of a curvature is determined by quantifying a swing or an oscillation of the curvature of the road into a value and then comparing the value to a threshold. If the value for either the first curvature (i.e., map data) or the second curvature (i.e., camera data) is greater than the threshold, that curvature is determined to be irrational and the method proceeds to box 210. Otherwise, if the value is less than or equal to the threshold, the method proceeds to box 212.


Referring first to box 210, a case report is generated for an irrational data set. A case report can be generated each time the location is crossed. This can include each time the same vehicle traverses the location, traversals of the location by multiple vehicles, or both. In box 222, the case reports associated with the location or section of the road are accumulated and counted. From box 222, the method proceeds either to box 224 or to box 226. If there is a large number of case reports for the map data (i.e., greater than a threshold), the method proceeds to box 224. If there is a large number of case reports for the camera data, the method proceeds to box 226.
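
A hedged sketch of this accumulation step follows; the case-report representation (a list of (location_id, source) tuples) and the report threshold are assumptions made for illustration only.

```python
from collections import Counter

# Illustrative accumulation and triage for boxes 210/222-226. A case report
# is modeled as a (location_id, source) tuple, where source is "map" or
# "camera"; this data model and the threshold are assumptions.
def triage_location(case_reports, location_id, report_threshold=10):
    """Count accumulated case reports for one location and pick a branch."""
    counts = Counter(src for loc, src in case_reports if loc == location_id)
    if counts["map"] > report_threshold:
        return "update_map"          # box 224: install a corrected map
    if counts["camera"] > report_threshold:
        return "check_environment"   # boxes 226/228: weigh environmental
                                     # conditions, then adjust camera confidence
    return "keep_collecting"
```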


In box 224, the map is updated with a corrected map for the location or section of road. In box 226, external environmental conditions that may have an impact on the data are checked. For example, night-time viewing, rainy weather, etc. can cause the camera data to be poor even for a camera that is in good condition; such degraded data is attributable to the environmental condition rather than to the camera. In box 228, a camera confidence level is reduced or otherwise adjusted for the section of road based on the environmental conditions.


Referring now to box 212, the camera data is reviewed to determine if an escalation event has occurred at the same time as the curvature mismatch. An illustrative escalation event includes a lane touch event, for example. If an escalation event has not occurred simultaneously with the curvature mismatch, the method proceeds to box 214. Otherwise, the method proceeds to box 218.


In box 214, the trajectory of a human-driven vehicle is compared to the map data and the camera data to determine whether the human-driven vehicle follows or is generally aligned with the map data or the camera data. In box 216, the data set that is not followed by the human-driven vehicle is marked as potentially incorrect data. The method proceeds from box 216 to box 222 for location analysis.


Returning to box 218, it is determined whether a curvature mismatch occurs after an escalation event. If a mismatch does occur after an escalation event, the method proceeds to box 214. Otherwise, the method proceeds to box 220. In box 220, the number of map data cases and the number of camera data cases are identified. From box 220, the method proceeds to box 222.



FIG. 3 shows a flowchart 300 of a method for identifying an issue for enhanced use of the map data and the camera data. The method begins in box 302, where a mismatch has occurred between a first curvature at a location in the road based on the map data and a second curvature at the location based on the camera data. In box 304, both the first curvature and the second curvature are compared to various rationality thresholds to determine whether either curvature is rational or irrational. In box 306, a decision is made based on the results of the calculations performed in box 304. If the first curvature (map data) is irrational, the method proceeds to box 308. If the second curvature (camera data) is irrational, the method proceeds to box 310. If both the first curvature and the second curvature are rational, the method proceeds to box 320.


In box 308 (first curvature is irrational), a case report indicating a potential map issue is generated. In box 310 (second curvature is irrational), a case report indicating a potential camera issue is generated. In box 312, a location is identified with a large number of case reports. From box 312, the method proceeds to box 314 when there is a potential map issue (i.e., a large number of case reports for the map data). In box 314, the map database is updated for the location. From box 314, the method proceeds to box 330, where the method ends.


Referring again to box 312, the method proceeds to box 316 when there is a potential camera issue (i.e., a large number of case reports for the camera data). In box 316, the impact of weather conditions on the camera data is taken into account. In box 318, a confidence level for the camera data is adjusted (or reduced) based on the camera data issues and the impact of weather conditions on the camera data. From box 318, the method proceeds to box 330, where the method ends.


Referring now to box 320, when both the map data and the camera data are determined to be rational, the processor determines whether an escalation event has occurred. If an escalation event has not occurred, the method proceeds to box 322. In box 322, data indicative of a path of a human-driven vehicle is compared with the map data and the camera data to determine whether the human-driven vehicle follows the map data or the camera data. If the path of the human-driven vehicle aligns with the camera data, the method proceeds to box 308. If the path of the human-driven vehicle aligns with the map data, the method proceeds to box 310. Separately, if the path of the human-driven vehicle does not align with either the map data or the camera data, the method proceeds to box 328, in which further analysis is requested.


Returning to box 320, if an escalation event has occurred, the method proceeds to box 324. In box 324, the data is reviewed to determine if there is a curvature mismatch that occurs after the escalation event. If there is a curvature mismatch after escalation, the method proceeds to box 322. If there is no curvature mismatch after the escalation event, the method proceeds to box 326. In box 326, the map data and the camera data are reviewed to determine if a change in curvature has occurred in either the map data or the camera data. If a change in curvature occurs in the map data, the method proceeds to box 308. If a change in curvature occurs in the camera data, the method proceeds to box 310. If a change does not occur in either the camera data or the map data, the method proceeds to box 328. In box 328, further analysis is requested and/or performed. From box 328, the method proceeds to box 330, where the method ends.



FIG. 4 shows a graph 400 of a timeline of a trajectory 402 of a vehicle following a selected data set. Time in seconds (s) is shown along the abscissa and amplitude (A) is shown along the ordinate axis. A section of the trajectory 402 can be isolated using a sliding window 404 and the curvature within the sliding window can be analyzed. In an embodiment, the swing of the curvature can be reviewed for rationality.


To analyze the swing, three curvature amplitudes are determined. A first curvature amplitude (CurvA) is the curvature at a rear of the sliding window. A second curvature amplitude (CurvB) is the curvature that has the highest value within the sliding window. A third curvature amplitude (CurvC) is the curvature at the front of the sliding window. The rationality of the swing can be determined by comparing the curvatures to various conditions (shown in Eqs. (1)-(4)) based on the amplitudes. The change of curvature between any two curvatures can be compared to a relative threshold, as shown in Eq. (1):

$$\left|\mathrm{Curv}_1 - \mathrm{Curv}_2\right| > \max\left(\left|\mathrm{Curv}\right|\right) \cdot Th_{\mathrm{rel}} \qquad \text{Eq. (1)}$$
where the indices (1, 2) can be (A, B) or (B, C), Curv is the data set of all curvature values and Th_rel is a relative threshold that can be predetermined. The change of curvature can also be compared to an absolute threshold, as shown in Eqs. (2) and (3):

$$\left|\mathrm{Curv}_A - \mathrm{Curv}_B\right| > Th_{\mathrm{abs}} \qquad \text{Eq. (2)}$$

$$\left|\mathrm{Curv}_B - \mathrm{Curv}_C\right| > Th_{\mathrm{abs}} \qquad \text{Eq. (3)}$$
Additionally, the product of the changes in curvature is checked for whether it is positive or negative, as shown in Eq. (4):

$$\left(\mathrm{Curv}_A - \mathrm{Curv}_B\right) \cdot \left(\mathrm{Curv}_B - \mathrm{Curv}_C\right) < 0 \qquad \text{Eq. (4)}$$
When all of Eqs. (1)-(4) are satisfied for a data set, the curvature of the data set is considered to be irrational.
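
Gathering Eqs. (1)-(4) into code, a sketch of the swing test might look as follows; the window representation (a list of curvature samples ordered rear-to-front), the choice of the largest-magnitude sample for CurvB, and the threshold defaults are assumptions.

```python
# Sketch of the curvature-swing rationality test of Eqs. (1)-(4). Window
# layout and threshold defaults are placeholders, not values from the patent.
def is_irrational_swing(window, th_rel=0.5, th_abs=0.002):
    curv_a = window[0]                    # CurvA: rear of the sliding window
    curv_b = max(window, key=abs)         # CurvB: highest-magnitude sample (assumed)
    curv_c = window[-1]                   # CurvC: front of the sliding window
    scale = max(abs(c) for c in window)   # max|Curv| term of Eq. (1)

    rel_ok = (abs(curv_a - curv_b) > scale * th_rel
              and abs(curv_b - curv_c) > scale * th_rel)   # Eq. (1) for (A,B) and (B,C)
    abs_ok = (abs(curv_a - curv_b) > th_abs                # Eq. (2)
              and abs(curv_b - curv_c) > th_abs)           # Eq. (3)
    sign_flip = (curv_a - curv_b) * (curv_b - curv_c) < 0  # Eq. (4)
    return rel_ok and abs_ok and sign_flip                 # all of Eqs. (1)-(4)
```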



FIG. 5 shows a graph 500 of a timeline of a trajectory 502 of a vehicle following a selected data set, in another embodiment. Time in seconds (s) is shown along the abscissa and amplitude (A) is shown along the ordinate axis. A section of the trajectory 502 can be isolated by the sliding window 404. The oscillation of the curvature can then be calculated.


To analyze the oscillation, three curvature amplitudes are determined. A first curvature amplitude (CurvA) is the point having the locally lowest value of curvature within the sliding window. A second curvature amplitude (CurvB) is the curvature that has the highest value of curvature within the sliding window. A third curvature amplitude (CurvC) is the curvature at an inflection point in the curvature after the second curvature amplitude. The rationality of the oscillation can be determined by comparing the curvatures to various conditions (shown in Eqs. (5)-(8)) based on the amplitudes. The change of curvature between any two curvatures can be compared to a relative threshold, as shown in Eq. (5):

$$\left|\mathrm{Curv}_1 - \mathrm{Curv}_2\right| > \max\left(\left|\mathrm{Curv}\right|\right) \cdot Th_{\mathrm{rel}} \qquad \text{Eq. (5)}$$
The change of curvature can also be compared to an absolute threshold, as shown in Eqs. (6) and (7):

$$\left|\mathrm{Curv}_A - \mathrm{Curv}_B\right| > Th_{\mathrm{abs}}/2 \qquad \text{Eq. (6)}$$

$$\left|\mathrm{Curv}_B - \mathrm{Curv}_C\right| > Th_{\mathrm{abs}}/2 \qquad \text{Eq. (7)}$$
Additionally, the product of the changes in curvature is checked for whether it is positive or negative, as shown in Eq. (8):

$$\left(\mathrm{Curv}_A - \mathrm{Curv}_B\right) \cdot \left(\mathrm{Curv}_B - \mathrm{Curv}_C\right) < 0 \qquad \text{Eq. (8)}$$
In addition, the trajectory is reviewed to determine whether the same pattern occurs within a 10 second interval.
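
As with the swing test, the oscillation conditions can be sketched in code; the amplitude selections, the sampling-rate handling, and the thresholds below are assumptions, with the absolute threshold halved per Eqs. (6)-(7) and the pattern required to recur within 10 seconds per the text above.

```python
# Sketch of the oscillation rationality test of Eqs. (5)-(8) plus the
# 10-second repetition check. Amplitude picks and thresholds are assumptions.
def oscillation_in_window(window, th_rel=0.5, th_abs=0.002):
    curv_a = min(window)     # CurvA: locally lowest curvature (assumed selection)
    curv_b = max(window)     # CurvB: highest curvature in the window
    curv_c = window[-1]      # CurvC: stand-in for the trailing inflection point
    scale = max(abs(c) for c in window)
    return (abs(curv_a - curv_b) > scale * th_rel          # Eq. (5), pair (A,B)
            and abs(curv_b - curv_c) > scale * th_rel      # Eq. (5), pair (B,C)
            and abs(curv_a - curv_b) > th_abs / 2          # Eq. (6)
            and abs(curv_b - curv_c) > th_abs / 2          # Eq. (7)
            and (curv_a - curv_b) * (curv_b - curv_c) < 0) # Eq. (8)

def is_irrational_oscillation(samples, window_len, sample_rate_hz):
    """Flag an oscillation when the pattern recurs within a 10 second interval."""
    hits = [i for i in range(len(samples) - window_len + 1)
            if oscillation_in_window(samples[i:i + window_len])]
    horizon = int(10 * sample_rate_hz)    # the 10 second interval from the text
    return any(j - i <= horizon for i, j in zip(hits, hits[1:]))
```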



FIG. 6 shows a flowchart 600 of a method for determining a rationality of a curvature swing. The method starts at box 602. In box 604, a sliding window is identified. If a sliding window having a preselected time duration cannot be identified, the method proceeds to box 610. In an illustrative embodiment, the preselected time duration is 3 seconds. In box 610, an output is generated that indicates that there is no curvature swing. The method then proceeds to box 614, in which the method ends.


Returning to box 604, if a sliding window is identified, curvature points can be located and tested as the method proceeds to box 606. In box 606, the first curvature (CurvA) and the second curvature (CurvB) are identified. If Eq. (2) and Eq. (1) are satisfied for the first curvature and the second curvature, the method proceeds to box 608. Otherwise, the method returns to box 604. In box 608, the third curvature (CurvC) is identified. If Eq. (3) and Eq. (1) are satisfied for the second curvature and the third curvature and Eq. (4) is satisfied, the method proceeds to box 612. Otherwise, the method returns to box 604. In box 612, an output is generated that indicates that there is a curvature swing.



FIG. 7 shows a flowchart 700 of a method of determining a rationality of a curvature oscillation. The method starts at box 702. In box 704, a sliding window is identified. If a sliding window having a preselected time duration cannot be identified, the method proceeds to box 712. In box 712, an output is generated that indicates that there is no curvature oscillation. The method then proceeds to box 716 in which the method ends.


Returning to box 704, if a sliding window is identified, curvature points can be located and tested as the method proceeds to box 706. In box 706, the first curvature (CurvA) and the second curvature (CurvB) are identified. If Eq. (6) and Eq. (5) are satisfied for the first curvature and the second curvature, the method proceeds to box 708. Otherwise, the method returns to box 704. In box 708, the third curvature (CurvC) is identified. If Eq. (7) and Eq. (5) are satisfied for the second curvature and the third curvature and Eq. (8) is satisfied, the method proceeds to box 710. Otherwise, the method returns to box 704. In box 710, a check is made to determine whether these conditions are satisfied for another section of the timeline within a 10 second window. If the conditions are satisfied within the 10 second window, the method proceeds to box 714. Otherwise, the method returns to box 704. In box 714, an output is generated that indicates that there is a curvature oscillation.



FIG. 8 shows a first curvature 802 for a trajectory of a human-driven vehicle and a second curvature 804 for a trajectory based on a data set (either from map data or camera data). Time (t) is shown along the abscissa and curvature (C) is shown along the ordinate axis. These curvatures can be compared to determine if the vehicle has followed the data set. Referring to the first curvature 802, curvature values CV0, CV1, . . . , CVn are calculated at intervals over a sampling period ΔS. Similar curvature values (second curvature 804) include curvature values CM0, CM1, . . . , CMn for map data and/or CC0, CC1, . . . , CCn for camera data, which are calculated at the same intervals as for the human-driven vehicle. An area sum for the vehicle data is shown in Eq. (9):

$$SV = \left(CV_0 + CV_1 + \ldots + CV_n\right) \cdot \Delta S \qquad \text{Eq. (9)}$$
Similarly, an area sum for the map data is shown in Eq. (10):

$$SM = \left(CM_0 + CM_1 + \ldots + CM_n\right) \cdot \Delta S \qquad \text{Eq. (10)}$$
and an area sum for the camera data is shown in Eq. (11):

$$SC = \left(CC_0 + CC_1 + \ldots + CC_n\right) \cdot \Delta S \qquad \text{Eq. (11)}$$
A difference between the vehicle curvature and the map data curvature is calculated by:

$$D_1 = SV - SM \qquad \text{Eq. (12)}$$
and a difference between the vehicle curvature and the camera data curvature is calculated by:

$$D_2 = SV - SC \qquad \text{Eq. (13)}$$
If:

$$D_2 - D_1 > Th_{\mathrm{Curv}} \qquad \text{Eq. (14)}$$
where Th_Curv is a calibratable threshold that can be determined at design time, then the vehicle is following the map data. Otherwise, if:

$$D_1 - D_2 > Th_{\mathrm{Curv}} \qquad \text{Eq. (15)}$$
then the vehicle is following the camera data.
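
A compact sketch of this comparison (Eqs. (9)-(15)) is given below; the list-based inputs and the default threshold value are illustrative assumptions.

```python
# Sketch of the area-sum comparison of Eqs. (9)-(15). Inputs are the sampled
# curvature series for the human-driven vehicle (cv), the map data (cm) and
# the camera data (cc); delta_s is the sampling period. The threshold default
# is a placeholder for the calibratable Th_Curv.
def followed_data_set(cv, cm, cc, delta_s, th_curv=0.05):
    sv = sum(cv) * delta_s    # Eq. (9): vehicle area sum
    sm = sum(cm) * delta_s    # Eq. (10): map area sum
    sc = sum(cc) * delta_s    # Eq. (11): camera area sum
    d1 = sv - sm              # Eq. (12): vehicle vs. map
    d2 = sv - sc              # Eq. (13): vehicle vs. camera
    if d2 - d1 > th_curv:     # Eq. (14): vehicle is following the map data
        return "map"
    if d1 - d2 > th_curv:     # Eq. (15): vehicle is following the camera data
        return "camera"
    return "inconclusive"     # neither margin clears the threshold
```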



FIG. 9 shows a flowchart 900 illustrating a method of location analysis (as performed in box 222 of FIG. 2 or in box 312 of FIG. 3). The method begins at box 902. In box 904, the cases are organized into a case order. In box 906, the cases are reviewed to determine whether there exists a pair of cases that is unprocessed. If there are no unprocessed cases, the method proceeds to box 916, where the method ends.


If there are unprocessed cases, the method proceeds to box 908. In box 908, the distance between the locations of the two cases is compared to a distance threshold. If the distance is greater than the distance threshold, the two cases are not co-located and the method returns to box 906. Otherwise, the method proceeds to box 910. The distance can be determined using the following:









$$\text{distance} = 2r \sin^{-1}\left(\sqrt{\sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cdot \cos\varphi_2 \cdot \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right) \qquad \text{Eq. (16)}$$
where φ1 and φ2 are the latitude of point 1 and the latitude of point 2, respectively, λ1 and λ2 are the longitude of point 1 and the longitude of point 2, respectively, and r is the radius of the Earth.
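
Eq. (16) is the haversine great-circle formula, and a direct transcription follows; the Earth-radius value is a conventional assumption, as the disclosure leaves r unspecified.

```python
from math import asin, cos, radians, sin, sqrt

# Direct transcription of Eq. (16). The mean Earth radius (6,371 km) is a
# conventional choice; the disclosure does not state a value for r.
def case_distance(lat1_deg, lon1_deg, lat2_deg, lon2_deg, r=6_371_000.0):
    """Great-circle distance in meters between two case locations."""
    phi1, phi2 = radians(lat1_deg), radians(lat2_deg)
    lam1, lam2 = radians(lon1_deg), radians(lon2_deg)
    h = (sin((phi2 - phi1) / 2) ** 2
         + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2)
    return 2 * r * asin(sqrt(h))
```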


In box 910, the cases are reviewed to see if they are associated with a same VIN (vehicle identification number) and a same date. If they are associated, the method returns to box 906. Otherwise, the method proceeds to box 912. In box 912, the cases are compared to see if they are from the same side of the road. If they are not from the same side of the road, the method returns to box 906. Otherwise, the method proceeds to box 914. In box 914, the two cases are placed in the same group. If none of the existing groups can accommodate the two cases, then a new group is created. A sketch of this grouping walk appears below.
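
The sketch below reuses the case_distance helper above; the Case record, the 50 m distance threshold, and the group-merging policy are assumptions made for illustration.

```python
from dataclasses import dataclass
from itertools import combinations

# Illustrative Case record and grouping pass for boxes 906-914. Field names
# and the distance threshold are assumptions; case_distance is the haversine
# helper sketched above.
@dataclass
class Case:
    lat: float
    lon: float
    vin: str
    date: str
    road_side: str

def group_cases(cases, distance_threshold_m=50.0):
    groups = []                                         # each group is a list of cases
    for a, b in combinations(cases, 2):                 # box 906: next unprocessed pair
        if case_distance(a.lat, a.lon, b.lat, b.lon) > distance_threshold_m:
            continue                                    # box 908: not co-located
        if a.vin == b.vin and a.date == b.date:
            continue                                    # box 910: same vehicle, same date
        if a.road_side != b.road_side:
            continue                                    # box 912: opposite road sides
        for g in groups:                                # box 914: join a group that already
            if a in g or b in g:                        # holds either case, else open a new one
                if a not in g:
                    g.append(a)
                if b not in g:
                    g.append(b)
                break
        else:
            groups.append([a, b])
    return groups
```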


The terms “a” and “an” do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item. The term “or” means “and/or” unless clearly indicated otherwise by context. Reference throughout the specification to “an aspect” means that a particular element (e.g., feature, structure, step, or characteristic) described in connection with the aspect is included in at least one aspect described herein, and may or may not be present in other aspects. In addition, it is to be understood that the described elements may be combined in any suitable manner in the various aspects.


When an element such as a layer, film, region, or substrate is referred to as being “on” another element, it can be directly on the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present.


Unless specified to the contrary herein, all test standards are the most recent standard in effect as of the filing date of this application, or, if priority is claimed, the filing date of the earliest priority application in which the test standard appears.


Unless defined otherwise, technical and scientific terms used herein have the same meaning as is commonly understood by one of skill in the art to which this disclosure belongs.


While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.

Claims
  • 1. A method of operating a vehicle, comprising: obtaining camera data of a location of a road, the camera data being used by the vehicle to traverse the location; obtaining map data of the location of the road, the map data being used by the vehicle to traverse the location; determining a first curvature of the road at the location from the camera data; determining a second curvature of the road at the location from the map data; identifying a mismatch between the first curvature and the second curvature at the location of the road; generating a case report for one of the map data and the camera data at the location upon occurrence of the mismatch; and adjusting one of the map data and a confidence level in the camera data for the location based on the case report.
  • 2. The method of claim 1, further comprising determining a rationality of one of the map data and the camera data by determining one of: (i) a swing in one of the first curvature and the second curvature; and (ii) an oscillation in one of the first curvature and the second curvature.
  • 3. The method of claim 1, further comprising determining an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition.
  • 4. The method of claim 1, further comprising comparing a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location.
  • 5. The method of claim 1, further comprising generating the case report for each time the vehicle traverses the location.
  • 6. The method of claim 1, further comprising generating the case report for multiple vehicles traversing the location at different times.
  • 7. The method of claim 1, further comprising excluding a case report for the camera data based on an environmental condition.
  • 8. A system for operating a vehicle, comprising: a camera configured to obtain camera data of a location of a road being traversed by the vehicle; a map database configured to provide map data of the location of the road; and a processor configured to: determine a first curvature of the road at the location from the camera data; determine a second curvature of the road at the location from the map data; identify a mismatch between the first curvature and the second curvature at the location of the road; generate a case report for one of the map data and the camera data at the location upon occurrence of the mismatch; and adjust one of the map data and a confidence level in the camera data for the location based on the case report.
  • 9. The system of claim 8, wherein the processor is further configured to determine a rationality of one of the map data and the camera data by determining one of: (i) a swing in one of the first curvature and the second curvature; and (ii) an oscillation in one of the first curvature and the second curvature.
  • 10. The system of claim 8, wherein the processor is further configured to determine an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition.
  • 11. The system of claim 8, wherein the processor is further configured to compare a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location.
  • 12. The system of claim 8, wherein the processor is further configured to generate the case report for each time the vehicle traverses the location.
  • 13. The system of claim 8, wherein the processor is further configured to generate the case report for multiple vehicles traversing the location at different times.
  • 14. The system of claim 8, wherein the processor is further configured to exclude a case report for the camera data based on an environmental condition.
  • 15. A vehicle, comprising: a camera configured to obtain camera data of a location of a road being traversed by the vehicle; a map database configured to provide map data of the location of the road; and a processor configured to: determine a first curvature of the road at the location from the camera data; determine a second curvature of the road at the location from the map data; identify a mismatch between the first curvature and the second curvature at the location of the road; generate a case report for one of the map data and the camera data at the location upon occurrence of the mismatch; and adjust one of the map data and a confidence level in the camera data for the location based on the case report.
  • 16. The vehicle of claim 15, wherein the processor is further configured to determine a rationality of one of the map data and the camera data by determining one of: (i) a swing in one of the first curvature and the second curvature; and (ii) an oscillation in one of the first curvature and the second curvature.
  • 17. The vehicle of claim 15, wherein the processor is further configured to determine an occurrence of an escalation event when one of the first curvature and the second curvature is determined to meet a rationality condition.
  • 18. The vehicle of claim 15, wherein the processor is further configured to compare a trajectory of a human-driven vehicle to the trajectory of one of the camera data and the map data at the location.
  • 19. The vehicle of claim 15, wherein the processor is further configured to generate the case report for each time the vehicle traverses the location.
  • 20. The vehicle of claim 15, wherein the processor is further configured to generate the case report for multiple vehicles traversing the location at different times.