FILTERING PERCEPTION-RELATED ARTIFACTS

Information

  • Patent Application
  • Publication Number
    20230294716
  • Date Filed
    March 17, 2022
  • Date Published
    September 21, 2023
Abstract
The subject disclosure relates to techniques for filtering perception-related artifacts. The disclosed technology can include receiving, by a machine learning model, a first output generated by a perception system model onboard an autonomous vehicle, wherein the first output is based on sensor data received from sensors of the autonomous vehicle at a first time and includes an inaccurate perception of an environment around the autonomous vehicle. The machine learning model can also receive a second output generated by the perception system model, wherein the second output is based on sensor data received at a second time after the first time and includes an accurate perception of the environment around the autonomous vehicle. The machine learning model can then alter the inaccurate perception of the environment from the first output based on the accurate perception of the environment in the second output.
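The correction step described in the abstract can be sketched in code. The sketch below is a simple rule-based stand-in, not the claimed machine learning model: it back-fills false negatives, drops false positives, and overwrites inaccurate labels in an earlier frame using a later, trusted frame. All names (`PerceptionOutput`, `filter_artifacts`) and the track-id/label representation are illustrative assumptions, not taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class PerceptionOutput:
    """Hypothetical, simplified perception-system output for one timestamp."""
    time: float
    # Mapping of object track id -> label perceived at this time.
    objects: dict = field(default_factory=dict)

def filter_artifacts(first: PerceptionOutput,
                     second: PerceptionOutput) -> PerceptionOutput:
    """Alter the earlier (possibly inaccurate) output using the later one.

    A trained model would learn this correction; here a crude rule stands in:
    trust the later frame's object set and labels outright. (A real system
    would have to distinguish artifacts from objects that genuinely left.)
    """
    corrected = dict(first.objects)
    # False negative at the first time: an object seen later is back-filled.
    # This same pass overwrites any inaccurate label with the later one.
    for track_id, label in second.objects.items():
        corrected[track_id] = label
    # False positive at the first time: an object absent later is dropped.
    for track_id in list(corrected):
        if track_id not in second.objects:
            corrected.pop(track_id)
    return PerceptionOutput(time=first.time, objects=corrected)
```

For example, a "ghost" detection present only in the first frame is removed, while an object the first frame missed is restored with the later frame's label.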
Claims
  • 1. A computer-implemented method comprising: receiving, by a machine learning model, a first output generated by a perception system model onboard an autonomous vehicle, wherein the first output is based on sensor data received from sensors of the autonomous vehicle at a first time and includes an inaccurate perception of an environment around the autonomous vehicle; receiving, by the machine learning model, a second output generated by the perception system model onboard the autonomous vehicle, wherein the second output is based on sensor data received from sensors of the autonomous vehicle at a second time after the first time and includes an accurate perception of the environment around the autonomous vehicle; and altering, by the machine learning model, the inaccurate perception of the environment from the first output based on the accurate perception of the environment in the second output.
  • 2. The computer-implemented method of claim 1, wherein the inaccurate perception is a false negative perception that does not include an object that is in the environment around the autonomous vehicle at the first time, and wherein the accurate perception includes the object in the environment around the autonomous vehicle at the second time.
  • 3. The computer-implemented method of claim 1, wherein the inaccurate perception is a false positive perception that includes a perceived object that is not in the environment around the autonomous vehicle at the first time, and wherein the accurate perception does not include the perceived object in the environment around the autonomous vehicle at the second time.
  • 4. The computer-implemented method of claim 1, wherein the inaccurate perception includes an inaccurate label of an object in the environment around the autonomous vehicle, and wherein the accurate perception includes an accurate label of the object in the environment around the autonomous vehicle.
  • 5. The computer-implemented method of claim 1, further comprising: receiving, by the machine learning model, a third output generated by the perception system model onboard the autonomous vehicle, wherein the third output is based on sensor data received from sensors of the autonomous vehicle at a third time before the first time and includes an accurate perception of the environment around the autonomous vehicle, wherein altering the inaccurate perception of the perceived object is further based on the accurate perception of the environment in the third output.
  • 6. The computer-implemented method of claim 1, wherein the first output and the second output are a portion of a series of outputs.
  • 7. The computer-implemented method of claim 1, further comprising: tracking, by the machine learning model, the environment across a series of outputs generated by the perception system model; and determining, by the machine learning model, that the inaccurate perception is inaccurate based on a discrepancy in the environment across the series of outputs.
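The series-based discrepancy check of claims 5-7 can also be sketched. The helper below is a hand-written heuristic standing in for the learned model: it scans a series of per-frame object sets (each a dict of track id to label, an assumed representation) and flags frames that disagree with both the preceding and following frames. The name `find_discrepancies` is illustrative.

```python
def find_discrepancies(series):
    """Flag frames whose perceived object set disagrees with both neighbors.

    series: list of dicts mapping track id -> label, one dict per frame.
    A track absent in one frame but present in both neighbors is a likely
    false negative; a track present in one frame but absent in both
    neighbors is a likely false positive. Returns a list of
    (frame_index, false_negative_ids, false_positive_ids) tuples.
    """
    flags = []
    for i in range(1, len(series) - 1):
        prev, cur, nxt = series[i - 1], series[i], series[i + 1]
        # Seen before and after, but missing now: probable dropped detection.
        false_negatives = {t for t in prev if t in nxt and t not in cur}
        # Seen only now, in neither neighbor: probable spurious detection.
        false_positives = {t for t in cur if t not in prev and t not in nxt}
        if false_negatives or false_positives:
            flags.append((i, false_negatives, false_positives))
    return flags
```

Using outputs both before and after the suspect frame, as claim 5 does, is what lets a single-frame dropout or ghost be told apart from an object genuinely entering or leaving the scene.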
  • 8. A system comprising: a storage configured to store instructions; a processor configured to execute the instructions and cause the processor to: receive, by a machine learning model, a first output generated by a perception system model onboard an autonomous vehicle, wherein the first output is based on sensor data received from sensors of the autonomous vehicle at a first time and includes an inaccurate perception of an environment around the autonomous vehicle; receive, by the machine learning model, a second output generated by the perception system model onboard the autonomous vehicle, wherein the second output is based on sensor data received from sensors of the autonomous vehicle at a second time after the first time and includes an accurate perception of the environment around the autonomous vehicle; and alter, by the machine learning model, the inaccurate perception of the environment from the first output based on the accurate perception of the environment in the second output.
  • 9. The system of claim 8, wherein the inaccurate perception is a false negative perception that does not include an object that is in an environment around the autonomous vehicle at the first time, and wherein the accurate perception includes the object in the environment around the autonomous vehicle at the second time.
  • 10. The system of claim 8, wherein the inaccurate perception is a false positive perception that includes a perceived object that is not in the environment around the autonomous vehicle at the first time, and wherein the accurate perception does not include the perceived object in the environment around the autonomous vehicle at the second time.
  • 11. The system of claim 8, wherein the inaccurate perception includes an inaccurate label of an object in the environment around the autonomous vehicle, and wherein the accurate perception includes an accurate label of the object in the environment around the autonomous vehicle.
  • 12. The system of claim 8, wherein the instructions further cause the processor to: receive, by the machine learning model, a third output generated by the perception system model onboard the autonomous vehicle, wherein the third output is based on sensor data received from sensors of the autonomous vehicle at a third time before the first time and includes an accurate perception of the environment around the autonomous vehicle, wherein altering the inaccurate perception of the perceived object is further based on the accurate perception of the environment in the third output.
  • 13. The system of claim 8, wherein the first output and the second output are a portion of a series of outputs.
  • 14. The system of claim 8, wherein the instructions further cause the processor to: track, by the machine learning model, the environment across a series of outputs generated by the perception system model; and determine, by the machine learning model, that the inaccurate perception is inaccurate based on a discrepancy in the environment across the series of outputs.
  • 15. A non-transitory computer readable medium comprising instructions, the instructions, when executed by a computing system, cause the computing system to: receive, by a machine learning model, a first output generated by a perception system model onboard an autonomous vehicle, wherein the first output is based on sensor data received from sensors of the autonomous vehicle at a first time and includes an inaccurate perception of an environment around the autonomous vehicle; receive, by the machine learning model, a second output generated by the perception system model onboard the autonomous vehicle, wherein the second output is based on sensor data received from sensors of the autonomous vehicle at a second time after the first time and includes an accurate perception of the environment around the autonomous vehicle; and alter, by the machine learning model, the inaccurate perception of the environment from the first output based on the accurate perception of the environment in the second output.
  • 16. The computer readable medium of claim 15, wherein the inaccurate perception is a false negative perception that does not include an object that is in an environment around the autonomous vehicle at the first time, and wherein the accurate perception includes the object in the environment around the autonomous vehicle at the second time.
  • 17. The computer readable medium of claim 15, wherein the inaccurate perception is a false positive perception that includes a perceived object that is not in the environment around the autonomous vehicle at the first time, and wherein the accurate perception does not include the perceived object in the environment around the autonomous vehicle at the second time.
  • 18. The computer readable medium of claim 15, wherein the inaccurate perception includes an inaccurate label of an object in the environment around the autonomous vehicle, and wherein the accurate perception includes an accurate label of the object in the environment around the autonomous vehicle.
  • 19. The computer readable medium of claim 15, wherein the instructions, when executed by the computing system, further cause the computing system to: receive, by the machine learning model, a third output generated by the perception system model onboard the autonomous vehicle, wherein the third output is based on sensor data received from sensors of the autonomous vehicle at a third time before the first time and includes an accurate perception of the environment around the autonomous vehicle, wherein altering the inaccurate perception of the perceived object is further based on the accurate perception of the environment in the third output.
  • 20. The computer readable medium of claim 15, wherein the first output and the second output are a portion of a series of outputs.