A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the disclosure herein and to the drawings that form a part of this document: Copyright 2016-2022, TuSimple, Inc., All Rights Reserved.
This patent document pertains generally to tools (systems, apparatuses, methodologies, computer program products, etc.) for autonomous driving simulation systems, vehicle control systems, and autonomous driving systems, and more particularly, but not by way of limitation, to a system and method for implementing a neural network based vehicle dynamics model.
Autonomous vehicle simulation is an important process for developing and configuring autonomous vehicle control systems. These vehicle simulation systems need to produce vehicle movements and dynamics that mirror the movement and dynamics of vehicles in the real world. However, there are thousands of different types of vehicles operating in the real world, each having different types of components and/or different vehicle characteristics. Conventional simulation systems need detailed information about the engine, transmission, and other component types or characteristics of each specific vehicle being simulated. This detailed information for a large number of vehicle types is very difficult to collect, maintain, and use. As such, conventional vehicle simulation systems are unwieldy, inefficient, and not readily adaptable to new vehicle types.
A system and method for implementing a neural network based vehicle dynamics model are disclosed herein. The vehicle dynamics model is one of the key subsystems for producing accurate vehicle simulation results in an autonomous vehicle simulation system. In various example embodiments as disclosed herein, the data-driven modeling system and method based on a neural network allows the modeling system to predict accurate vehicle accelerations and torque based on recorded historical vehicle driving data. To generate the predicted vehicle accelerations, a control command (e.g., throttle, brake, and steering commands) and vehicle status (e.g., vehicle pitch and speed status) are provided as inputs to the modeling system for each time step. To generate the predicted vehicle torque, a control command (e.g., throttle and brake commands) and vehicle status (e.g., vehicle speed status) are provided as inputs to the modeling system for each time step. The modeling system as described herein can use these inputs to generate the predicted vehicle acceleration and torque.
In contrast to other vehicle dynamics models, the system and method disclosed herein do not need detailed information about the engine, transmission, or other vehicle component types or characteristics of a specific vehicle. This feature of the disclosed embodiments is very useful for vehicle simulation in the simulation system, because the dynamics and status of a specific engine, transmission, or other vehicle component are often difficult to obtain and to model. Moreover, the modeling system of the various example embodiments as disclosed herein can be easily adapted to work with any type of vehicle by simply changing the training data used to configure the neural network. This beneficial attribute of the modeling system as disclosed herein saves model rebuilding time when working with other types of vehicles.
The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which:
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It will be evident, however, to one of ordinary skill in the art that the various embodiments may be practiced without these specific details.
A system and method for implementing a neural network based vehicle dynamics model are disclosed herein. The vehicle dynamics model is one of the key subsystems for producing accurate vehicle simulation results in a simulation system. In various example embodiments as disclosed herein, the data-driven modeling system and method based on a neural network allows the modeling system to predict accurate vehicle accelerations based on recorded historical vehicle driving data. To generate the predicted vehicle accelerations, a control command (e.g., throttle, brake, and steering commands) and vehicle status (e.g., vehicle pitch and speed status) are provided as inputs to the modeling system for each time step. The modeling system as described herein can use these inputs to generate the predicted vehicle acceleration. In an alternative embodiment disclosed herein, the data-driven modeling system and method based on a neural network allows the modeling system to predict accurate vehicle torque based on recorded historical vehicle driving data. To generate the predicted vehicle torque, a control command (e.g., throttle and brake commands) and vehicle status (e.g., vehicle speed status) are provided as inputs to the modeling system for each time step. The modeling system of the alternative embodiment as described herein can use these inputs to generate the predicted vehicle torque.
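To make the input/output relationship described above concrete, the following is a minimal sketch, not the patented implementation: a small feedforward network that maps a control command (throttle, brake, steering) and vehicle status (speed, pitch) at one time step to a predicted acceleration. The layer sizes, weights, and function names are hypothetical; a trained network would obtain its weights from recorded driving data rather than random initialization.

```python
import numpy as np

def init_mlp(layer_sizes, rng):
    # Random small weights stand in for a network trained on driving data.
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict_acceleration(params, command, status):
    # command: [throttle, brake, steering]; status: [speed, pitch]
    x = np.concatenate([command, status])
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)          # hidden-layer nonlinearity
    return float(x[0])              # predicted acceleration for this time step

rng = np.random.default_rng(0)
params = init_mlp([5, 16, 1], rng)  # 5 inputs -> 16 hidden -> 1 output
accel = predict_acceleration(params,
                             command=np.array([0.4, 0.0, 0.02]),
                             status=np.array([22.0, 0.01]))
```

The torque-predicting embodiment differs only in its input vector (throttle, brake, and speed) and output interpretation; the same network shape applies.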
In contrast to other vehicle dynamics models, the system and method disclosed herein do not need detailed information about the engine, transmission, or other vehicle component types or characteristics of a specific vehicle. This feature of the disclosed embodiments is very useful for vehicle simulation in the simulation system, because the dynamics and status of a specific engine, transmission, or other vehicle component are often difficult to obtain and to model. Moreover, the modeling system of the various example embodiments as disclosed herein can be easily adapted to work with any type of vehicle by simply changing the training data used to configure the neural network. This beneficial attribute of the modeling system as disclosed herein saves model rebuilding time when working with other types of vehicles.
As described in various example embodiments, a system and method for implementing a neural network based vehicle dynamics model are described herein. Referring to
As also shown in
In the various example embodiments described herein, a neural network or other machine learning system is used to predict accurate vehicle accelerations based on recorded or otherwise captured historical vehicle driving data. In an example embodiment, vehicle driving data corresponding to real world vehicle operations or simulated vehicle movements is captured over time for a large number of vehicles in a large number of operating environments. The vehicle driving data can be annotated or labeled to enhance the utility of the data in a machine learning training dataset. As this vehicle driving data is captured over a long time period and a wide operating environment, patterns of vehicle dynamics begin to emerge. For example, similar types of vehicles operating in a similar environment tend to operate or move in a similar manner. As such, these patterns of movement, as represented in the training dataset, can be used to predict the dynamics of a vehicle for which the specific vehicle movement is unknown. As shown in
Referring now to
As shown in
Referring still to
In the various example embodiments disclosed herein, the vehicle status data 102 can include speed data and pitch data for a particular vehicle. Pitch data corresponds to the vehicle's degree of inclination or slope. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that other types of vehicle status data may be provided as input to the autonomous vehicle dynamics modeling system 120. In a typical operational scenario, the autonomous vehicle dynamics modeling system 120 periodically receives inputs 101 and 102 for a particular iteration and generates the corresponding simulated vehicle dynamics data 125 for the autonomous vehicle simulation system 140. Each iteration can be configured to occur at or within a particular pre-defined rate. When the autonomous vehicle simulation system 140 receives the simulated vehicle dynamics data 125 for a current iteration, the autonomous vehicle simulation system 140 can generate updated vehicle speed and pitch data corresponding to the received simulated vehicle dynamics data 125 for the current iteration. As shown in
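The per-iteration exchange described in this paragraph, where the modeling system produces dynamics data and the simulation system feeds updated vehicle status back for the next iteration, can be sketched as a simple closed loop. The placeholder model and its gains below are hypothetical stand-ins for the trained neural network, chosen only to make the loop runnable.

```python
def model_step(command, status):
    # Hypothetical stand-in for the trained dynamics model:
    # a linear response to throttle, brake, and pitch.
    throttle, brake, steering = command
    speed, pitch = status
    return 3.0 * throttle - 5.0 * brake - 9.8 * pitch

def simulate(commands, speed=0.0, pitch=0.0, dt=0.05):
    # Each iteration: the model predicts an acceleration, then the
    # "simulation system" integrates it into updated status for the
    # next iteration, closing the loop.
    trace = []
    for cmd in commands:
        accel = model_step(cmd, (speed, pitch))
        speed = max(0.0, speed + accel * dt)
        trace.append((accel, speed))
    return trace

trace = simulate([(0.5, 0.0, 0.0)] * 10)   # ten throttle-only iterations
```

The fixed `dt` corresponds to the pre-defined iteration rate mentioned above.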
For each iteration, the autonomous vehicle dynamics modeling system 120, and the vehicle dynamics modeling module 173 therein, can produce simulated vehicle dynamics data 125 that corresponds to the modeled vehicle dynamics data produced for the input vehicle control command data 101 and the vehicle status data 102 and based on the neural network 175 trained using one or more of the training datasets 135. The simulated vehicle dynamics data 125 can include predicted vehicle acceleration data for the current iteration, based on the vehicle control command data 101, the vehicle status data 102, and the trained neural network 175. The predicted vehicle acceleration data can be used by the autonomous vehicle simulation system 140 to generate corresponding vehicle speed and pitch data, among other values generated for the particular autonomous vehicle simulation environment. As shown in
In various example embodiments as disclosed herein, the data-driven modeling system and method based on a neural network allows the autonomous vehicle dynamics modeling system 120 to predict accurate vehicle accelerations based on recorded historical vehicle driving data as embodied in the trained neural network 175. To generate the predicted vehicle accelerations, the vehicle control command data 101 (e.g., throttle, brake, and steering commands) and the vehicle status data (e.g., vehicle pitch and speed status) are provided as inputs to the autonomous vehicle dynamics modeling system 120 for each time step or iteration. Because the predicted vehicle accelerations are based in part on the trained neural network 175, the particular autonomous vehicle simulation environment can be readily changed and adapted to a new simulation environment by retraining the neural network 175 with a new training dataset 135. In this manner, the autonomous vehicle dynamics modeling system 120 is readily adaptable to desired simulation environments without having to provide detailed vehicle component type information or specific vehicle characteristic information to the autonomous vehicle dynamics modeling system 120. As such, the autonomous vehicle dynamics modeling system 120 of the various example embodiments as disclosed herein can be easily adapted to work with any type of vehicle by simply changing the training data 135 used to configure the neural network 175. This beneficial attribute of the modeling system as disclosed herein saves model rebuilding time when working with other types of vehicles.
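The retraining step described above amounts to supervised regression on recorded (control command, vehicle status) → acceleration pairs. The sketch below uses a linear least-squares fit as a deliberately simplified stand-in for neural-network training, with synthetic data in place of real driving logs; swapping in a different recorded dataset is what adapts the model to a different vehicle type.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "recorded driving data": each row is one time step of
# [throttle, brake, steering, speed, pitch].
X = rng.uniform(0.0, 1.0, (200, 5))
# Observed accelerations: a made-up underlying response plus sensor noise.
y = 3.0 * X[:, 0] - 5.0 * X[:, 1] - 9.8 * X[:, 4] + rng.normal(0.0, 0.01, 200)

# "Training": fit model weights to the recorded data. No engine,
# transmission, or other component details are required -- only logs.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
rms_error = float(np.sqrt(np.mean((X @ w - y) ** 2)))
```

A neural network trained by gradient descent replaces the least-squares solve in the disclosed embodiments, but the data-driven workflow is the same.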
Referring now to
In an alternative embodiment shown in
Referring now to
The example computing system 700 can include a data processor 702 (e.g., a System-on-a-Chip (SoC), general processing core, graphics core, and optionally other processing logic) and a memory 704, which can communicate with each other via a bus or other data transfer system 706. The mobile computing and/or communication system 700 may further include various input/output (I/O) devices and/or interfaces 710, such as a touchscreen display, an audio jack, a voice interface, and optionally a network interface 712. In an example embodiment, the network interface 712 can include one or more radio transceivers configured for compatibility with any one or more standard wireless and/or cellular protocols or access technologies (e.g., 2nd generation (2G), 2.5G, 3rd generation (3G), 4th generation (4G), and future generation radio access for cellular systems, Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), LTE, CDMA2000, WLAN, Wireless Router (WR) mesh, and the like). Network interface 712 may also be configured for use with various other wired and/or wireless communication protocols, including TCP/IP, UDP, SIP, SMS, RTP, WAP, CDMA, TDMA, UMTS, UWB, WiFi, WiMax, Bluetooth™, IEEE 802.11x, and the like. In essence, network interface 712 may include or support virtually any wired and/or wireless communication and data processing mechanisms by which information/data may travel between the computing system 700 and another computing or communication system via network 714.
The memory 704 can represent a machine-readable medium on which is stored one or more sets of instructions, software, firmware, or other processing logic (e.g., logic 708) embodying any one or more of the methodologies or functions described and/or claimed herein. The logic 708, or a portion thereof, may also reside, completely or at least partially within the processor 702 during execution thereof by the mobile computing and/or communication system 700. As such, the memory 704 and the processor 702 may also constitute machine-readable media. The logic 708, or a portion thereof, may also be configured as processing logic or logic, at least a portion of which is partially implemented in hardware. The logic 708, or a portion thereof, may further be transmitted or received over a network 714 via the network interface 712. While the machine-readable medium of an example embodiment can be a single medium, the term “machine-readable medium” should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and computing systems) that store the one or more sets of instructions. The term “machine-readable medium” can also be taken to include any non-transitory medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This patent application is a continuation of U.S. patent application Ser. No. 17/147,836, titled “NEURAL NETWORK BASED VEHICLE DYNAMICS MODEL,” filed on Jan. 13, 2021, which is a continuation of U.S. patent application Ser. No. 15/672,207, titled “NEURAL NETWORK BASED VEHICLE DYNAMICS MODEL,” filed on Aug. 8, 2017, now U.S. Pat. No. 11,029,693. This non-provisional patent application draws priority from the referenced patent applications. The entire disclosure of the referenced patent applications is considered part of the disclosure of the present application and is hereby incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5913576 | Naito | Jun 1999 | A |
6777904 | Degner | Aug 2004 | B1 |
7689559 | Canright | Mar 2010 | B2 |
7844595 | Canright | Nov 2010 | B2 |
8041111 | Wilensky | Oct 2011 | B1 |
8064643 | Stein | Nov 2011 | B2 |
8082101 | Stein | Dec 2011 | B2 |
8164628 | Stein | Apr 2012 | B2 |
8175376 | Marchesotti | May 2012 | B2 |
8271871 | Marchesotti | Sep 2012 | B2 |
8378851 | Stein | Feb 2013 | B2 |
8401292 | Park | Mar 2013 | B2 |
8478072 | Aisaka | Jul 2013 | B2 |
8553088 | Stein | Oct 2013 | B2 |
8908041 | Stein | Dec 2014 | B2 |
8917169 | Schofield | Dec 2014 | B2 |
8963913 | Baek | Feb 2015 | B2 |
8981966 | Stein | Mar 2015 | B2 |
8993951 | Schofield | Mar 2015 | B2 |
9008369 | Schofield | Apr 2015 | B2 |
9025880 | Perazzi | May 2015 | B2 |
9042648 | Wang | May 2015 | B2 |
9117133 | Barnes | Aug 2015 | B2 |
9118816 | Stein | Aug 2015 | B2 |
9120485 | Dolgov | Sep 2015 | B1 |
9122954 | Srebnik | Sep 2015 | B2 |
9145116 | Clarke | Sep 2015 | B2 |
9147255 | Zhang | Sep 2015 | B1 |
9156473 | Clarke | Oct 2015 | B2 |
9176006 | Stein | Nov 2015 | B2 |
9179072 | Stein | Nov 2015 | B2 |
9183447 | Gdalyahu | Nov 2015 | B1 |
9185360 | Stein | Nov 2015 | B2 |
9191634 | Schofield | Nov 2015 | B2 |
9233659 | Rosenbaum | Jan 2016 | B2 |
9233688 | Clarke | Jan 2016 | B2 |
9248832 | Huberman | Feb 2016 | B2 |
9251708 | Rosenbaum | Feb 2016 | B2 |
9277132 | Berberian | Mar 2016 | B2 |
9280155 | Cox | Mar 2016 | B2 |
9280711 | Stein | Mar 2016 | B2 |
9286522 | Stein | Mar 2016 | B2 |
9297641 | Stein | Mar 2016 | B2 |
9299004 | Lin | Mar 2016 | B2 |
9317776 | Honda | Apr 2016 | B1 |
9330334 | Lin | May 2016 | B2 |
9355635 | Gao | May 2016 | B2 |
9365214 | Ben Shalom | Jun 2016 | B2 |
9428192 | Schofield | Aug 2016 | B2 |
9436880 | Bos | Sep 2016 | B2 |
9443163 | Springer | Sep 2016 | B2 |
9446765 | Ben Shalom | Sep 2016 | B2 |
9459515 | Stein | Oct 2016 | B2 |
9466006 | Duan | Oct 2016 | B2 |
9490064 | Hirosawa | Nov 2016 | B2 |
9531966 | Stein | Dec 2016 | B2 |
9555803 | Pawlicki | Jan 2017 | B2 |
10268200 | Fang | Apr 2019 | B2 |
20070230792 | Shashua | Oct 2007 | A1 |
20100226564 | Marchesotti | Sep 2010 | A1 |
20100281361 | Marchesotti | Nov 2010 | A1 |
20110206282 | Aisaka | Aug 2011 | A1 |
20120105639 | Stein | May 2012 | A1 |
20120140076 | Rosenbaum | Jun 2012 | A1 |
20120274629 | Baek | Nov 2012 | A1 |
20140145516 | Hirosawa | May 2014 | A1 |
20140198184 | Stein | Jul 2014 | A1 |
20150062304 | Stein | Mar 2015 | A1 |
20160037064 | Stein | Feb 2016 | A1 |
20160094774 | Li | Mar 2016 | A1 |
20160165157 | Stein | Jun 2016 | A1 |
20160210528 | Duan | Jul 2016 | A1 |
20170132334 | Levinson | May 2017 | A1 |
20180164810 | Luo | Jun 2018 | A1 |
20190228571 | Atsmon | Jul 2019 | A1 |
Number | Date | Country |
---|---|---|
1754179 | Feb 2007 | EP |
2448251 | May 2012 | EP |
2463843 | Jun 2012 | EP |
2463843 | Jul 2013 | EP |
2761249 | Aug 2014 | EP |
2463843 | Jul 2015 | EP |
2448251 | Oct 2015 | EP |
2946336 | Nov 2015 | EP |
2993654 | Mar 2016 | EP |
3081419 | Oct 2016 | EP |
WO2005098739 | Oct 2005 | WO |
WO2005098751 | Oct 2005 | WO |
WO2005098782 | Oct 2005 | WO |
WO2010109419 | Sep 2010 | WO |
WO2013045612 | Apr 2013 | WO |
WO2014111814 | Jul 2014 | WO |
WO2014111814 | Jul 2014 | WO |
WO2014201324 | Dec 2014 | WO |
WO2015083009 | Jun 2015 | WO |
WO2015103159 | Jul 2015 | WO |
WO2015125022 | Aug 2015 | WO |
WO2015186002 | Dec 2015 | WO |
WO2015186002 | Dec 2015 | WO |
WO2016135736 | Sep 2016 | WO |
WO2017013875 | Jan 2017 | WO |
Entry |
---|
Hou, Xiaodi and Zhang, Liqing, “Saliency Detection: A Spectral Residual Approach”, Computer Vision and Pattern Recognition, CVPR'07—IEEE Conference, pp. 1-8, 2007. |
Hou, Xiaodi and Harel, Jonathan and Koch, Christof, “Image Signature: Highlighting Sparse Salient Regions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 1, pp. 194-201, 2012. |
Hou, Xiaodi and Zhang, Liqing, “Dynamic Visual Attention: Searching for Coding Length Increments”, Advances in Neural Information Processing Systems, vol. 21, pp. 681-688, 2008. |
Li, Yin and Hou, Xiaodi and Koch, Christof and Rehg, James M. and Yuille, Alan L., “The Secrets of Salient Object Segmentation”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280-287, 2014. |
Zhou, Bolei and Hou, Xiaodi and Zhang, Liqing, “A Phase Discrepancy Analysis of Object Motion”, Asian Conference on Computer Vision, pp. 225-238, Springer Berlin Heidelberg, 2010. |
Hou, Xiaodi and Yuille, Alan and Koch, Christof, “Boundary Detection Benchmarking: Beyond F-Measures”, Computer Vision and Pattern Recognition, CVPR'13, vol. 2013, pp. 1-8, IEEE, 2013. |
Hou, Xiaodi and Zhang, Liqing, “Color Conceptualization”, Proceedings of the 15th ACM International Conference on Multimedia, pp. 265-268, ACM, 2007. |
Hou, Xiaodi and Zhang, Liqing, “Thumbnail Generation Based on Global Saliency”, Advances in Cognitive Neurodynamics, ICCN 2007, pp. 999-1003, Springer Netherlands, 2008. |
Hou, Xiaodi and Yuille, Alan and Koch, Christof, “A Meta-Theory of Boundary Detection Benchmarks”, arXiv preprint arXiv:1302.5985, pp. 1-4, Feb. 25, 2013. |
Li, Yanghao and Wang, Naiyan and Shi, Jianping and Liu, Jiaying and Hou, Xiaodi, “Revisiting Batch Normalization for Practical Domain Adaptation”, arXiv preprint arXiv:1603.04779, pp. 1-12, Nov. 8, 2016. |
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, “Demystifying Neural Style Transfer”, arXiv preprint arXiv:1701.01036, pp. 1-8, Jan. 4, 2017. |
Hou, Xiaodi and Zhang, Liqing, “A Time-Dependent Model of Information Capacity of Visual Attention”, International Conference on Neural Information Processing, pp. 127-136, Springer Berlin Heidelberg, 2006. |
Wang, Panqu and Chen, Pengfei and Yuan, Ye and Liu, Ding and Huang, Zehua and Hou, Xiaodi and Cottrell, Garrison, “Understanding Convolution for Semantic Segmentation”, arXiv preprint arXiv:1702.08502, pp. 1-10, Feb. 27, 2017. |
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, “Factorized Bilinear Models for Image Recognition”, arXiv preprint arXiv:1611.05709, pp. 1-9, Nov. 17, 2016. |
Hou, Xiaodi, “Computational Modeling and Psychophysics in Low and Mid-Level Vision”, California Institute of Technology, pp. i to xi and 1-114, May 7, 2014. |
Spinello, Luciano, Triebel, Rudolph, Siegwart, Roland, “Multiclass Multimodal Detection and Tracking in Urban Environments”, Sage Journals, vol. 29, issue 12, pp. 1498-1515; article first published online: Oct. 7, 2010; issue published: Oct. 1, 2010. |
Matthew Barth, Carrie Malcolm, Theodore Younglove, and Nicole Hill, “Recent Validation Efforts for a Comprehensive Modal Emissions Model”, Transportation Research Record 1750, Paper No. 01-0326, College of Engineering, Center for Environmental Research and Technology, University of California, Riverside, CA 92521, pp. 13-23, Jan. 1, 2001. |
Kyoungho Ahn, Hesham Rakha, “The Effects of Route Choice Decisions on Vehicle Energy Consumption and Emissions”, Virginia Tech Transportation Institute, Blacksburg, VA 24061, pp. 1-32, May 1, 2008. |
Ramos, Sebastian, Gehrig, Stefan, Pinggera, Peter, Franke, Uwe, Rother, Carsten, “Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep Learning and Geometric Modeling”, arXiv:1612.06573v1 [cs.CV], pp. 1-8, Dec. 20, 2016. |
Schroff, Florian, Dmitry Kalenichenko, James Philbin, (Google), “FaceNet: A Unified Embedding for Face Recognition and Clustering”, pp. 1-10, CVPR Jun. 17, 2015. |
Dai, Jifeng, Kaiming He, Jian Sun, (Microsoft Research), “Instance-aware Semantic Segmentation via Multi-task Network Cascades”, pp. 1-10, CVPR Dec. 14, 2015. |
Huval, Brody, Tao Wang, Sameep Tandon, Jeff Kiske, Will Song, Joel Pazhayampallil, Mykhaylo Andriluka, Pranav Rajpurkar, Toki Migimatsu, Royce Cheng-Yue, Fernando Mujica, Adam Coates, Andrew Y. Ng, “An Empirical Evaluation of Deep Learning on Highway Driving”, arXiv:1504.01716v3 [cs.RO], pp. 1-7, Apr. 17, 2015. |
Tian Li, “Proposal Free Instance Segmentation Based on Instance-aware Metric”, Department of Computer Science, Cranberry-Lemon University, Pittsburgh, PA., pp. 1-2, 2015. |
Mohammad Norouzi, David J. Fleet, Ruslan Salakhutdinov, “Hamming Distance Metric Learning”, Departments of Computer Science and Statistics, University of Toronto, pp. 1-9, 2012. |
Jain, Suyong Dutt, Grauman, Kristen, “Active Image Segmentation Propagation”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-10, Las Vegas, Jun. 2016. |
MacAodha, Oisin, Campbell, Neill D.F., Kautz, Jan, Brostow, Gabriel J., “Hierarchical Subquery Evaluation for Active Learning on a Graph”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-8, 2014. |
Kendall, Alex, Gal, Yarin, “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision”, arXiv:1703.04977v1 [cs.CV], pp. 1-11, Mar. 15, 2017. |
Wei, Junqing, John M. Dolan, Bakhtiar Litkhouhi, “A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving”, 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, pp. 512-517, Jun. 21-24, 2010. |
Peter Welinder, Steve Branson, Serge Belongie, Pietro Perona, “The Multidimensional Wisdom of Crowds”; http://www.vision.caltech.edu/visipedia/papers/WelinderEtalNIPS10.pdf, pp. 1-9, 2010. |
Kai Yu, Yang Zhou, Da Li, Zhang Zhang, Kaiqi Huang, “Large-scale Distributed Video Parsing and Evaluation Platform”, Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, China, arXiv:1611.09580v1 [cs.CV], pp. 1-7, Nov. 29, 2016. |
P. Guarneri, G. Rocca and M. Gobbi, “A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities,” in IEEE Transactions on Neural Networks, vol. 19, No. 9, pp. 1549-1563, Sep. 2008. |
C. Yang, Z. Li, R. Cui and B. Xu, “Neural Network-Based Motion Control of an Underactuated Wheeled Inverted Pendulum Model,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 25, No. 11, pp. 2004-2016, Nov. 2014. |
Stephan R. Richter, Vibhav Vineet, Stefan Roth, Vladlen Koltun, “Playing for Data: Ground Truth from Computer Games”, Intel Labs, European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands, pp. 1-16, 2016. |
Thanos Athanasiadis, Phivos Mylonas, Yannis Avrithis, and Stefanos Kollias, “Semantic Image Segmentation and Object Labeling”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, pp. 298-312, Mar. 2007. |
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, pp. 1-11, 2016. |
Adhiraj Somani, Nan Ye, David Hsu, and Wee Sun Lee, “DESPOT: Online POMDP Planning with Regularization”, Department of Computer Science, National University of Singapore, pp. 1-9, 2013. |
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. CoRR, abs/1606.02147, pp. 1-10, 2016. |
Chinese Office Action for CN application No. 201711329345.1, filed Dec. 13, 2017, publ. No. 2021033002782080, dated Apr. 2, 2021, document includes English abstract serving as a concise explanation. |
Number | Date | Country | |
---|---|---|---|
20230161354 A1 | May 2023 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 17147836 | Jan 2021 | US |
Child | 18094363 | US | |
Parent | 15672207 | Aug 2017 | US |
Child | 17147836 | US |