Advanced Driver Assistance Systems (ADAS) and Autonomous Driving Systems (ADS) can use digital maps as part of various operations, including route planning, navigation, collision and obstacle avoidance, and managing interactions with drivers. However, while an autonomous vehicle may receive information, perform path planning, and make maneuvering decisions based on sensor and map data, the ADS may only be informed about the accuracy, precision and other confidence information regarding vehicle sensor data, and thus may not be able to take into account accuracy, precision and similar confidence information regarding map data.
Various aspects include methods for including and using safety and/or confidence information regarding object and feature map data in autonomous and semi-autonomous driving operations. Various aspects may include methods performed by a processor of an autonomous driving system of a vehicle for using map data in performing an autonomous driving function, including accessing, from a map database accessible by the processor, map data regarding an object or feature in the vicinity of the vehicle, accessing, by the processor, confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle, and using the confidence information by the processor in performing an autonomous or semi-autonomous driving action.
In some aspects, the confidence information may include one or more of: an Automotive Safety Integrity Level (ASIL) autonomous driving level in the vicinity of the object or feature, an indication related to accuracy of the map data regarding the object or feature, an indication related to reliability of the map data regarding the object or feature, a statistical score indicative of a precision of the map data regarding the object or feature, or an age or freshness of the map data regarding the object or feature.
In some aspects, accessing confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle may include obtaining the confidence information by the processor from the map database, in which information in the map database is obtained from one or more of system memory, a remote computing device, or another vehicle. In some aspects, accessing confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle may include obtaining the confidence information based on a location of the object or feature from a data structure accessible by the processor that is different from the map database.
In some aspects, using the confidence information in performing an autonomous or semi-autonomous driving action by the vehicle may include applying, by the processor, a weight to the accessed map data regarding the object or feature based upon the confidence information, and using weighted map data regarding the object or feature by the processor while performing a path planning, object avoidance or steering autonomous driving action. In some aspects, using the confidence information in performing an autonomous or semi-autonomous driving action by the vehicle may include changing an autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle.
In some aspects, changing the autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle may include changing the autonomous driving mode of the vehicle implemented by the processor to a driving mode compatible with the confidence information regarding the object or feature in the vicinity of the vehicle. Some aspects may further include notifying a driver of a need to participate in driving of the vehicle in response to determining that the confidence information regarding the object or feature in the vicinity of the vehicle does not support a fully autonomous driving mode, and changing the autonomous driving mode of the vehicle implemented by the processor after notifying the driver. In some aspects, the confidence information regarding the object or feature may include confidence information regarding objects and features within a defined area, and changing the autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle may include changing the autonomous driving mode of the vehicle implemented by the processor to an autonomous driving mode consistent with the confidence information while the vehicle is in the defined area.
Some aspects may further include obtaining, by the processor from vehicle sensors, sensor data regarding the object or feature in the vicinity of the vehicle, determining, by the processor, whether the obtained sensor data regarding the object or feature in the vicinity of the vehicle differs from the map data regarding the object or feature obtained from the map database by a threshold amount, and uploading, by the processor to a remote computing device, the obtained sensor data regarding the object or feature in the vicinity of the vehicle along with confidence information based on one or more of a type of sensor used to detect or classify the object or feature, a quality of perception of the object or features achieved by the sensor, or an accuracy or precision of the sensor data in response to determining that the obtained sensor data differs from the map data regarding the object or feature obtained from the map database by at least the threshold amount.
Further aspects include a vehicle processing system including a memory and a processor configured to perform operations of any of the methods summarized above. Further aspects may include a vehicle processing system having various means for performing functions corresponding to any of the methods summarized above. Further aspects may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a vehicle processing system to perform various operations corresponding to any of the methods summarized above.
Further aspects include methods performed by a computing device for including safety and confidence information within map data usable by autonomous and semi-autonomous driving systems in vehicles. Various aspects may include receiving, by the computing device from a source, information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature, using the received measure of confidence in the information regarding the object or feature to generate safety and confidence information regarding the object or feature suitable for use by vehicle autonomous and semi-autonomous driving systems in autonomous or semi-autonomous driving operations, in which the safety and confidence information may include one or more of an ASIL autonomous driving level in the vicinity of the object or feature, an indication related to accuracy of the map data regarding the object or feature, a statistical score indicative of a precision of the map data regarding the object or feature, an indication related to reliability of the map data regarding the object or feature, or an age or freshness of the map data regarding the object or feature, and storing the safety and confidence information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems. In some aspects, storing the safety and confidence information may include updating information regarding the object or feature in the map database based at least in part on the received measure of confidence in the received information regarding the object or feature.
In some aspects, receiving information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature may include receiving from one or more vehicles information including: a location of the object or feature, a characteristic of the object or feature, and a measure of confidence in the information regarding either the location or the characteristic of the object or feature.
In some aspects, storing the safety and confidence information regarding the object or feature may include including the safety and confidence information as part of location and other information regarding the object or feature in the map database provided to vehicles for use in autonomous or semi-autonomous driving operations. In some aspects, storing the safety and confidence information regarding the object or feature may include storing the safety and confidence information in a database separate from the map database correlated with location information of the object or feature, and providing the database to vehicles for use in autonomous or semi-autonomous driving operations.
In some aspects, receiving information regarding an object or feature for inclusion in a map database may include receiving, from a plurality of sources, information regarding the object or feature along with measures of confidence in the information regarding the object or feature. Such aspects may further include determining, from information received from the plurality of sources, one set of information regarding the object or feature and consolidated safety and confidence information for the determined set of information regarding the object or feature. In such aspects, storing safety and confidence information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems for use in autonomous or semi-autonomous driving operations may include storing the consolidated safety and confidence information for the determined set of information regarding the object or feature in a manner that enables such access and use.
Further aspects include a computing device, such as a server, including a memory and a processor configured to perform operations of any of the methods summarized above. Further aspects may include a computing device having means for performing functions corresponding to any of the methods summarized above. Further aspects may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform various operations corresponding to any of the methods summarized above.
The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the claims, and together with the general description given and the detailed description, serve to explain the features herein.
Various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.
Various embodiments include methods and processors of a vehicle autonomous driving system (ADS) for using map data that includes safety and/or confidence information in performing an autonomous driving function. Various embodiments may include the vehicle ADS processing system accessing a map database to obtain map data regarding an object or feature in the vicinity of the vehicle, and also accessing information regarding an autonomous driving safety level and/or information regarding a degree of confidence that should be ascribed to the corresponding map data regarding a given object or feature (referred to herein as “confidence information”). Such safety and/or confidence information may be included in or linked to the map data regarding the object or feature in the vicinity of the vehicle in a manner that enables the vehicle ADS processing system to obtain that information in conjunction with accessing or using the corresponding map data. Various embodiments may further include the processor using the safety and/or confidence information in performing an autonomous or semi-autonomous driving action.
In some embodiments, the safety and/or confidence information may include an Automotive Safety Integrity Level (ASIL) autonomous driving level in the vicinity of the object or feature. Additionally or alternatively, in some embodiments the safety and/or confidence information may include an accuracy factor, metric or indication related to the object or feature map data. Additionally or alternatively, in some embodiments the safety and/or confidence information may include a reliability factor or metric. Additionally or alternatively, in some embodiments the safety and/or confidence information may include a statistical score indicative of a precision of the map data regarding the object or feature. Additionally or alternatively, in some embodiments the safety and/or confidence information may include an indication related to reliability of the object or feature map data. Additionally or alternatively, in some embodiments the safety and/or confidence information may include an age or freshness of the object or feature map data. For example, the vehicle ADS processing system may use safety information related to the object or feature map data to determine and implement an appropriate autonomous driving mode when in the vicinity of the object or feature. As another example, the vehicle ADS processing system may use confidence information to determine a weight to apply to the object or feature map data, and use weighted object or feature map data while performing a path planning, object avoidance or steering autonomous driving action. As another example, the safety and/or confidence information may include a statistical score or measure, such as an F-score or F-measure, which has a value between 1 (best) and 0 (worst) and provides a measure of a measurement's accuracy calculated from the precision and recall of the measurements or sensor. An example of an F-score that may be used is known as an F1 score, which is the harmonic mean of the precision and the recall. The F1 score is also referred to as the Sørensen-Dice coefficient or Dice similarity coefficient.
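As a minimal illustrative sketch (not part of any claimed embodiment; the function name and example values are hypothetical), such an F1 score could be computed from precision and recall as follows:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 1.0 is best, 0.0 is worst."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Example: a feature detector with 0.9 precision and 0.8 recall yields
# an F1 confidence score of approximately 0.847.
map_feature_confidence = f1_score(precision=0.9, recall=0.8)
```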
In some embodiments, the safety and/or confidence information associated with the object or feature map data may be included within the map database so that the information can be obtained by the processor in the same or related operations as obtaining the object or feature information. For example, the safety and/or confidence information may be stored in one or more data fields along with position and description information regarding objects and features. In some embodiments, the safety and/or confidence information associated with the object or feature map data may be stored in and obtained from a data structure accessible by the processor that is different from the map database, such as a provisioned or downloaded (or downloadable) data table indexed to locations or an identifier of objects and features.
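For illustration only, the two storage approaches described above might be represented along the following lines; the field names and values are assumptions, not prescribed formats:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapFeature:
    """A map object/feature with confidence fields embedded in the record."""
    feature_id: str
    latitude: float
    longitude: float
    description: str
    asil_level: Optional[str] = None    # e.g., "B" (safe driving level nearby)
    accuracy_m: Optional[float] = None  # estimated position accuracy in meters
    f1_score: Optional[float] = None    # statistical precision/confidence score
    age_days: Optional[int] = None      # freshness of the underlying observation

# Alternative: a separate table, indexed by feature identifier (or location),
# that a processor can consult apart from the map database itself.
confidence_table = {
    "stop_sign_0421": {"asil_level": "C", "accuracy_m": 0.3, "age_days": 12},
}
```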
The map database and/or the safety and/or confidence information database may be stored in system memory and/or obtained from remote data stores, such as road side units, a central database server, and/or other vehicles. The map information stored in a memory-hosted database may come from road side units (e.g., a smart RSU) or from another vehicle. In such embodiments, the confidence assigned to objects and features in the map data may depend on the source of the map data. For example, the confidence level assigned to or associated with objects and features in a map generated from a single other vehicle received via V2X communications may be less than the confidence level assigned to or associated with objects and features in a map generated by map crowd sourcing.
In some embodiments, the vehicle ADS processing system may recognize when vehicle sensor data regarding an object or feature near the vehicle differs from map data by a threshold amount, and upload the sensor data to a remote computing device when that is the case. Such uploaded sensor data regarding the object or feature may include map coordinates along with information regarding a measure or estimate of the accuracy or precision of the sensor data. Some embodiments also include a remote computing device that may receive the object or feature sensor data, and use the received measure of confidence information regarding the object or feature confidence to generate safety and/or confidence information regarding the object or feature in a format (e.g., within a map database or separate database) suitable for use by a vehicle ADS in performing autonomous or semi-autonomous driving operations.
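One possible sketch of this divergence check, assuming a simple planar distance metric and a hypothetical upload callback (the threshold value and names are illustrative assumptions, not a defined interface):

```python
import math

DIVERGENCE_THRESHOLD_M = 1.0  # hypothetical threshold; tunable per deployment

def position_difference_m(sensed_xy, mapped_xy) -> float:
    """Planar distance in meters between sensed and mapped positions."""
    return math.hypot(sensed_xy[0] - mapped_xy[0], sensed_xy[1] - mapped_xy[1])

def maybe_report_observation(sensed_xy, mapped_xy, sensor_confidence, upload):
    """Upload the observation only if it diverges from the map by the threshold.

    sensor_confidence might carry the sensor type, an accuracy estimate,
    and/or a statistical score such as F1, as described above.
    """
    if position_difference_m(sensed_xy, mapped_xy) >= DIVERGENCE_THRESHOLD_M:
        upload({"position": sensed_xy, "confidence": sensor_confidence})
```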
Various embodiments include storing or providing information regarding a safe ASIL (or other measure) autonomous driving level (referred to generally herein as “safety information”) and/or information regarding a level of confidence (e.g., accuracy, precision, reliability, age, etc.) in object or feature map data. Safety information and confidence information may be related in some instances, as it may be appropriate to indicate that fully autonomous driving is not safe in the vicinity of objects or features in which there is low confidence in the map data. However, safety information may be unrelated to confidence information, such as when map objects or features are associated with typical roadway, traffic or pedestrian conditions (i.e., in which there is high confidence in the map data) where full autonomous driving is risky. Also, there may be low confidence in map data for some objects or features without impacting safe autonomous driving levels, such as locations of objects or features alongside but not in the roadway. In various embodiments, safety information and information regarding the level of confidence in object or feature map data may be stored and accessed in the same or similar manners. For these reasons and for ease of reference, safety information and confidence information are referred to herein as “safety and/or confidence information” or collectively as “confidence information.” Thus, references to “confidence information” in the description and some claims are not intended to exclude information limited to safe autonomous driving levels.
As used herein, the term “vehicle” refers generally to any of an automobile, truck, bus, train, boat, and any other type of mobile ADS-capable system that may access map data to perform autonomous or semi-autonomous functions.
The term “system on chip” (SOC) is used herein to refer to a single integrated circuit (IC) chip that contains multiple resources and/or processors integrated on a single substrate. A single SOC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions. A single SOC may also include any number of general purpose and/or specialized processors (digital signal processors, modem processors, video processors, etc.), memory blocks (e.g., ROM, RAM, Flash, etc.), and resources (e.g., timers, voltage regulators, oscillators, etc.). SOCs may also include software for controlling the integrated resources and processors, as well as for controlling peripheral devices.
The term “system in a package” (SIP) may be used herein to refer to a single module or package that contains multiple resources, computational units, cores and/or processors on two or more IC chips, substrates, or SOCs. For example, a SIP may include a single substrate on which multiple IC chips or semiconductor dies are stacked in a vertical configuration. Similarly, the SIP may include one or more multi-chip modules (MCMs) on which multiple ICs or semiconductor dies are packaged into a unifying substrate. A SIP may also include multiple independent SOCs coupled together via high speed communication circuitry and packaged in close proximity, such as on a single motherboard or in a single wireless device. The proximity of the SOCs facilitates high speed communications and the sharing of memory and resources.
Technologies and technical standards are under development in multiple regions of the world for supporting evolving and future highway systems and vehicles, including setting standards for enabling safe autonomous and semi-autonomous vehicle operations. Such technologies include standardizing vehicle-based communication systems and functionality, and developing standards for vehicle autonomous driving systems (ADS).
Among standards being developed for autonomous and semi-autonomous vehicles is International Organization for Standardization (ISO) standard 26262 for the functional safety of road vehicles. ASIL is a risk classification defined by the ISO 26262 standard, which defines functional safety as “the absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical or electronic systems.” ISO 26262 defines Automotive Safety Integrity Levels (ASIL), which are risk classifications that are associated with appropriate levels of performance, accuracy and reliability imposed on vehicle ADS systems and data to ensure acceptable levels of functional safety in different autonomous driving modes. ASILs establish safety requirements, based on the probability and acceptability of harm, for automotive components to be compliant with the standard. There are four ASILs identified in ISO 26262: A, B, C, and D. ASIL-A represents the lowest degree and ASIL-D represents the highest degree of automotive hazard. Systems like airbags, anti-lock brakes, and power steering require an ASIL-D grade, the highest rigor applied to safety assurance, because the risks associated with their failure are the highest. On the other end of the safety spectrum, components like rear lights require only an ASIL-A grade. Headlights and brake lights generally would be ASIL-B, while cruise control would generally be ASIL-C.
ASILs are also referred to in terms of levels. In addition to a level with no functional safety equipment (“L0”), there are five such levels. In L0 the driver is fully responsible for the safe operation of the vehicle and no driver assistance is provided. In L1 the driver can delegate steering or acceleration/braking, but the system performs just one driving task. In L2 the driver must constantly monitor the system, but the ADS performs several driving tasks (e.g., steering, cruise control with safe distance control, and automatic braking). In L3 the driver can turn attention away from the roadway in certain situations and the ADS can autonomously control the vehicle on defined routes (e.g., in highway driving). In L4 the driver can transfer complete control to the system but can take control at any time, as the system is able to perform all driving tasks. Finally, in L5 no driver is needed as the system can control the vehicle autonomously under all conditions.
ASIL levels define not only the type of driving but also the level of confidence, accuracy and reliability required for a vehicle to operate at a given ASIL level of autonomy. Thus, when the safety and/or confidence information associated with map data of nearby objects or features is less than required for a vehicle's current ASIL level of operation (e.g., L4 or L5) or autonomous/semi-autonomous driving mode, the vehicle ADS processor should change the operating mode to an ASIL autonomous driving level consistent with the object/feature safety and/or confidence information (e.g., L3). For example, if the vehicle is operating autonomously (e.g., in L4 or L5) and approaches objects and/or features with safety and/or confidence information that only supports semi-autonomous or driver assistance operating modes (e.g., L3 or L2), the vehicle processor should notify the driver that he/she must pay attention to the roadway or take control of the vehicle. However, currently there are no agreed solutions for informing a vehicle ADS regarding the ASIL level associated with or appropriate for driving in the vicinity of particular objects or features identified in map data used by the ADS.
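As a purely illustrative sketch of this mode-adjustment logic (the level ordering and the ADS interface methods notify_driver and set_level are assumptions, not a defined API):

```python
# Driving levels ordered from least to most autonomous.
LEVELS = ["L0", "L1", "L2", "L3", "L4", "L5"]

def select_driving_level(current_level: str, nearby_supported_levels) -> str:
    """Return the highest level consistent with map safety information.

    nearby_supported_levels: the driving levels supported by the safety
    and/or confidence information of nearby map objects and features.
    """
    most_restrictive = min(nearby_supported_levels,
                           key=LEVELS.index, default=current_level)
    return min(current_level, most_restrictive, key=LEVELS.index)

def adjust_mode(ads, nearby_supported_levels):
    target = select_driving_level(ads.level, nearby_supported_levels)
    if LEVELS.index(target) < LEVELS.index(ads.level):
        ads.notify_driver(target)  # prompt driver participation before the change
        ads.set_level(target)
```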
Among technologies and standards that will support autonomous and semi-autonomous driving are communication technologies and networks for Intelligent Transportation Systems (ITS). Examples include standards being developed by the Institute of Electrical and Electronics Engineers (IEEE) and the Society of Automotive Engineers (SAE) for use in North America, or by the European Telecommunications Standards Institute (ETSI) and European Committee for Standardization (CEN) for use in Europe. For example, the IEEE 802.11p standard is the basis for the Dedicated Short Range Communication (DSRC) and ITS-G5 communication standards. IEEE 1609 is a higher layer standard based on IEEE 802.11p. The Cellular Vehicle-to-Everything (C-V2X) standard is a competing standard developed under the auspices of the 3rd Generation Partnership Project (3GPP). These standards serve as the foundation for vehicle-based wireless communications, and may be used to support intelligent highways, autonomous and semi-autonomous vehicles, and improve the overall efficiency and safety of highway transportation systems. ITS communications may be supported by next-generation 5G NR communication systems. These and other V2X wireless technologies may be used in various embodiments for downloading map data and safety and/or confidence information, as well as uploading observations by vehicle sensors for updating map data according to various embodiments.
The C-V2X protocol defines two transmission modes that, together, provide a 360° non-line-of-sight awareness and a higher level of predictability for enhanced road safety and autonomous driving. A first transmission mode includes direct C-V2X, which includes vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P), and that provides enhanced communication range and reliability in the dedicated Intelligent Transportation System (ITS) 5.9 gigahertz (GHz) spectrum that is independent of a cellular network. A second transmission mode includes vehicle-to-network communications (V2N) in mobile broadband systems and technologies, such as third generation wireless mobile communication technologies (3G) (e.g., global system for mobile communications (GSM) evolution (EDGE) systems, code division multiple access (CDMA) 2000 systems, etc.), fourth generation wireless mobile communication technologies (4G) (e.g., long term evolution (LTE) systems, LTE-Advanced systems, mobile Worldwide Interoperability for Microwave Access (mobile WiMAX) systems, etc.), fifth generation new radio wireless mobile communication technologies (5G NR systems, etc.), etc.
An autonomous vehicle may use map data in conjunction with vehicle sensors and other sources of information to perform autonomous and semi-autonomous functions, such as automatic braking and speed management, path planning, maneuvering, obstacle avoidance, etc. A vehicle ADS processing system typically receives information regarding the external environment (e.g., landmarks, road markings, traffic signs, etc.) as well as other vehicles from a plurality of onboard sensors (e.g., cameras, radar, lidar, Global Positioning System (GPS) receivers, etc.) that the processor can use for navigation and object avoidance. The vehicle processor may also use digital map data that includes locations and information regarding streets, roadway and near-road objects, roadway markings, traffic control signals, and other information useful for safely operating the vehicle autonomously or semi-autonomously. Examples of such digital maps include standard-definition (SD) maps, information-rich high-definition (HD) maps, dynamic maps, and autonomous driving maps. The vehicle processor also may receive location information from a positioning system or a communication network.
Sensor data from various vehicle sensors used in performing autonomous and semi-autonomous functions will exhibit different levels of accuracy and precision. Sensors have inherent limitations on accuracy and precision depending on the nature and design of each sensor (e.g., sensor operational wavelength, aperture dimensions, position on the vehicle and field of view, etc.). Additionally, environmental conditions, such as precipitation, smoke, dust, illumination, sun angle, etc., can affect the accuracy and precision of sensor data. Thus, in performing autonomous and semi-autonomous functions the vehicle processor may take into account inherent and potential inaccuracies in sensor data, such as by consolidating data from multiple sensors to determine locations of the vehicle with respect to features and objects in the environment. In so doing, the processor may take into account a level of confidence associated with each sensor or set of sensor data, relying more heavily on sensor data with a high level of confidence than on sensor data having a lower level of confidence. For example, in consolidating data from multiple sensors for determining location, making steering or braking decisions and path planning, the processor may apply weights to various sensor data based on a confidence metric associated with each sensor to arrive at a weighted or confidence-adjusted location of various objects and features in the environment.
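For example, a confidence-weighted consolidation of position estimates might look like the following sketch (a simple weighted average under assumed (x, y) coordinates; real systems may use more sophisticated filters such as Kalman filtering):

```python
def fuse_positions(observations):
    """Confidence-weighted average of (x, y) position estimates.

    observations: list of ((x, y), weight) pairs, where each weight reflects
    the confidence metric associated with that sensor or data source.
    """
    total_weight = sum(w for _, w in observations)
    x = sum(p[0] * w for p, w in observations) / total_weight
    y = sum(p[1] * w for p, w in observations) / total_weight
    return (x, y)

# Example: a high-confidence lidar fix (weight 0.9) dominates a
# lower-confidence camera estimate (weight 0.3) of the same landmark.
fused = fuse_positions([((10.2, 4.9), 0.9), ((10.8, 5.4), 0.3)])
```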
Similar to sensor data, the information regarding objects and features included in maps used by ADS-equipped vehicle processors may have varying levels of accuracy or precision depending on the sources of such information. For example, position information providing roadway boundaries (e.g., centerline of lanes, lane widths, curb locations, etc.) may be determined through a survey and thus recorded in the map with high accuracy and high confidence. Conversely, position information regarding landmarks and temporary obstacles (e.g., construction sites, moveable barriers, potholes, etc.) may be gathered by vehicle sensors that have varying degrees of accuracy and precision depending on characteristics of the sensors, the conditions under which position information was gathered, viewing perspective at the time the object or feature was measured, and the like. Further, some sources of location information may be more trustworthy than others. However, conventional maps used by vehicle ADS processing systems for autonomous and semi-autonomous driving functionality may provide little or no information regarding the reliability or accuracy of object and feature location data. Thus, while a vehicle ADS processing system may be configured to take into account confidence metrics for vehicle sensor data, conventional maps do not provide equivalent safety and confidence information regarding the map objects and features that the processor can take into account in localization, driving decisions, route planning, and the like.
Various embodiments overcome such limitations in digital map data used by vehicle ADS processing systems for autonomous and semi-autonomous driving functions by including safety and confidence information, such as a confidence metric, associated with objects and features in the digital map. Various embodiments include methods for using safety and confidence information, such as confidence metrics, associated with objects and features in the digital map for performing autonomous and semi-autonomous driving functions. Some embodiments include methods for including safety and confidence information, such as confidence metrics, associated with objects and features in digital maps suitable for use by vehicles equipped with vehicle ADS processing systems configured to perform operations of various embodiments.
In some embodiments, a vehicle processor may be configured to perform operations including accessing, from a map database accessible by the processor, map data regarding an object or feature in the vicinity of the vehicle, and obtaining or accessing safety and/or confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle. In some embodiments, the safety and/or confidence information may be included within the map database, such as part of map data records. In some embodiments, the safety and/or confidence information may be stored in a database separate from the map database, such as with an index or common data element that enables a vehicle processor to find the safety and/or confidence information corresponding to particular object and feature map data. The vehicle processor may then use the safety and/or confidence information in performing an autonomous or semi-autonomous driving action. In various embodiments, in performing an autonomous or semi-autonomous driving action the vehicle ADS processing system may adjust the autonomous driving level being performed by the system consistent with the safety and confidence information (e.g., switching to a lower level of autonomous driving consistent with the safety information), take into account confidence information regarding object or feature map information as part of sensor fusion, navigation, route planning, or object avoidance, discontinue or suspend an autonomous driving function, functionality, feature or action, and the like.
In some embodiments, the vehicle processor of the autonomous driving system may apply a weight to the accessed map data regarding the object or feature based upon the confidence information, and use weighted map data regarding the object or feature by the processor while performing a path planning, object avoidance, steering, and/or other autonomous driving action. In some embodiments, the vehicle processor of the autonomous driving system may perform operations including changing an autonomous driving mode of the vehicle implemented by the vehicle processor based on the safety and/or confidence information regarding the object or feature in the vicinity of the vehicle. In some embodiments or circumstances, the vehicle processor of the autonomous driving system may discontinue or suspend an autonomous driving function, functionality, feature or action.
In some embodiments, the vehicle processor of the autonomous driving system may obtain sensor data from vehicle sensors regarding objects and features in the vicinity of the vehicle, determine whether the sensor data indicates a new object or feature, or differences between sensor data and map data regarding an object or feature, and upload to a remote computing device information regarding the location of the new or changed object or feature, including information regarding confidence (e.g., accuracy, precision, or reliability of the underlying sensor data) in the uploaded location information. In this manner, ADS-equipped vehicles may support the creation of a map database including confidence information.
In some embodiments, a computing device, such as a server, may be configured to receive reports from ADS-equipped vehicles that identify locations of objects and features that are new or differ from what is included in a map or maps used by vehicle ADS processing systems for autonomous and semi-autonomous driving functions, including safety and confidence information (e.g., a confidence metric) associated with each identified location. In such embodiments, the computing device may be configured to perform operations on the received information to determine appropriate confidence information for added map data, and to store the confidence information in a database that is provided to or accessible by ADS-equipped vehicles.
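One hypothetical consolidation rule a server might apply to corroborating vehicle reports is sketched below; the weighting scheme and the corroboration bonus are illustrative assumptions, not a prescribed algorithm:

```python
def consolidate_reports(reports):
    """Merge per-vehicle reports about one object into a single map entry.

    Each report is a dict with a 'position' (x, y) and a 'confidence' weight.
    The consolidated position is confidence-weighted; the consolidated
    confidence is the mean report confidence plus a small bonus for each
    corroborating report, capped at 1.0.
    """
    total_w = sum(r["confidence"] for r in reports)
    x = sum(r["position"][0] * r["confidence"] for r in reports) / total_w
    y = sum(r["position"][1] * r["confidence"] for r in reports) / total_w
    mean_conf = total_w / len(reports)
    consolidated_conf = min(1.0, mean_conf + 0.05 * (len(reports) - 1))
    return {"position": (x, y), "confidence": consolidated_conf}
```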
The communications system 100 may include a heterogeneous network architecture that includes a core network 140, a number of base stations 110, and a variety of mobile devices including a vehicle 102 equipped with an ADS 104 including wireless communication capabilities. The base station 110 may communicate with a core network 140 over a wired network 126. The communications system 100 also may include road side units 112 supporting V2X communications with vehicles 102 via V2X wireless communication links 124.
A base station 110 is a network element that communicates with wireless devices (e.g., the vehicle 102) via wireless communication links, and may be referred to as a Node B, an LTE Evolved nodeB (eNodeB or eNB), an access point (AP), a radio head, a transmit receive point (TRP), a New Radio base station (NR BS), a 5G NodeB (NB), a Next Generation NodeB (gNodeB or gNB), or the like. Each base station 110 may provide communication coverage for a particular geographic area or “cell.” In 3GPP, the term “cell” can refer to a coverage area of a base station, a base station subsystem serving this coverage area, or a combination thereof, depending on the context in which the term is used. The core network 140 may be any type of core network, such as an LTE core network (e.g., an evolved packet core (EPC) network), a 5G core network, or a disaggregated network.
Road side units 112 may be coupled via wired networks 128 to a remote computing device 132 that may store map data and confidence information for communication to vehicles 102 in accordance with various embodiments. Road side units 112 may communicate via V2X wireless communication links 124 with ITS and ADS-equipped vehicles 102 for downloading information useful for ADS autonomous and semi-autonomous driving functions, including downloading map databases and other databases including safety and/or confidence information databases in accordance with some embodiments. V2X wireless communication links 124 may also be used for uploading information regarding objects and features, and associated confidence measures, obtained by vehicle sensors to a remote computing device 132 for use in generating map data in accordance with some embodiments.
Cellular wireless communications, such as 5G wireless communications supported by base stations 110, may also be used for downloading information useful for ADS autonomous and semi-autonomous driving functions, including downloading map databases and other databases including safety and/or confidence information databases, as well as for uploading information regarding objects and features, and associated confidence measures, obtained by vehicle sensors to a remote computing device 132 for use in generating map data in accordance with some embodiments. To support such communications, the remote computing device 132 hosting map and confidence information databases may be coupled to the core network 140 via a communication link 127, such as the Internet, and map data and confidence information may be communicated to a base station 110 via a wired communication link 126 (e.g., Ethernet, fiber optic, etc.) for downloading to vehicles 102 via cellular wireless communication links 122, such as 5G wireless communication links.
Cellular wireless communication links 122 may include a plurality of carrier signals, frequencies, or frequency bands, each of which may include a plurality of logical channels. The wireless communication links 122 and 124 may utilize one or more radio access technologies (RATs). Examples of RATs that may be used in a wireless communication link include 3GPP LTE, 3G, 4G, 5G (e.g., NR), GSM, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMAX), Time Division Multiple Access (TDMA), and other mobile telephony cellular RATs. Further examples of RATs that may be used in one or more of the various wireless communication links within the communication system 100 include medium range protocols such as Wi-Fi, LTE-U, LTE-Direct, LAA, MuLTEfire, and relatively short range RATs such as ZigBee, Bluetooth, and Bluetooth Low Energy (LE).
Each of the units (i.e., CUs 162, DUs 170, RUs 172), as well as the Near-RT RICs 164, the Non-RT RICs 168 and the SMO Framework 166, may include one or more interfaces or be coupled to one or more interfaces configured to receive or transmit signals, data, or information (collectively, signals) via a wired or wireless transmission medium. Each of the units, or an associated processor or controller providing instructions to the communication interfaces of the units, can be configured to communicate with one or more of the other units via the transmission medium. For example, the units can include a wired interface configured to receive or transmit signals over a wired transmission medium to one or more of the other units. Additionally, the units can include a wireless interface, which may include a receiver, a transmitter or transceiver (such as a radio frequency (RF) transceiver), configured to receive or transmit signals, or both, over a wireless transmission medium to one or more of the other units.
In some aspects, the CU 162 may host one or more higher layer control functions. Such control functions may include the radio resource control (RRC), packet data convergence protocol (PDCP), service data adaptation protocol (SDAP), or the like. Each control function may be implemented with an interface configured to communicate signals with other control functions hosted by the CU 162. The CU 162 may be configured to handle user plane functionality (i.e., Central Unit-User Plane (CU-UP)), control plane functionality (i.e., Central Unit-Control Plane (CU-CP)), or a combination thereof. In some implementations, the CU 162 can be logically split into one or more CU-UP units and one or more CU-CP units. The CU-UP unit can communicate bidirectionally with the CU-CP unit via an interface, such as the E1 interface when implemented in an O-RAN configuration. The CU 162 can be implemented to communicate with DUs 170, as necessary, for network control and signaling.
The DU 170 may correspond to a logical unit that includes one or more base station functions to control the operation of one or more RUs 172. In some aspects, the DU 170 may host one or more of a radio link control (RLC) layer, a medium access control (MAC) layer, and one or more high physical (PHY) layers (such as modules for forward error correction (FEC) encoding and decoding, scrambling, modulation and demodulation, or the like) depending, at least in part, on a functional split, such as those defined by the 3rd Generation Partnership Project (3GPP). In some aspects, the DU 170 may further host one or more low PHY layers. Each layer (or module) may be implemented with an interface configured to communicate signals with other layers (and modules) hosted by the DU 170, or with the control functions hosted by the CU 162.
Lower-layer functionality may be implemented by one or more RUs 172. In some deployments, an RU 172, controlled by a DU 170, may correspond to a logical node that hosts RF processing functions, or low-PHY layer functions (such as performing fast Fourier transform (FFT), inverse FFT (iFFT), digital beamforming, physical random access channel (PRACH) extraction and filtering, or the like), or both, based at least in part on the functional split, such as a lower layer functional split. In such an architecture, the RU(s) 172 may be implemented to handle over the air (OTA) communication with one or more UEs 120. In some implementations, real-time and non-real-time aspects of control and user plane communication with the RU(s) 172 may be controlled by the corresponding DU 170. In some scenarios, this configuration may enable the DU(s) 170 and the CU 162 to be implemented in a cloud-based radio access network (RAN) architecture, such as a vRAN architecture.
The SMO Framework 166 may be configured to support RAN deployment and provisioning of non-virtualized and virtualized network elements. For non-virtualized network elements, the SMO Framework 166 may be configured to support the deployment of dedicated physical resources for RAN coverage requirements, which may be managed via an operations and maintenance interface (such as an O1 interface). For virtualized network elements, the SMO Framework 166 may be configured to interact with a cloud computing platform (such as an open cloud (O-Cloud) 176) to perform network element life cycle management (such as to instantiate virtualized network elements) via a cloud computing platform interface (such as an O2 interface). Such virtualized network elements can include, but are not limited to, CUs 162, DUs 170, RUs 172 and Near-RT RICs 164. In some implementations, the SMO Framework 166 may communicate with a hardware aspect of a 4G RAN, such as an open eNB (O-eNB) 174, via an O1 interface. Additionally, in some implementations, the SMO Framework 166 may communicate directly with one or more RUs 172 via an O1 interface. The SMO Framework 166 also may include a Non-RT RIC 168 configured to support functionality of the SMO Framework 166.
The Non-RT RIC 168 may be configured to include a logical function that enables non-real-time control and optimization of RAN elements and resources, Artificial Intelligence/Machine Learning (AI/ML) workflows including model training and updates, or policy-based guidance of applications/features in the Near-RT RIC 164. The Non-RT RIC 168 may be coupled to or communicate with (such as via an A1 interface) the Near-RT RIC 164. The Near-RT RIC 164 may be configured to include a logical function that enables near-real-time control and optimization of RAN elements and resources via data collection and actions over an interface (such as via an E2 interface) connecting one or more CUs 162, one or more DUs 170, or both, as well as an O-eNB, with the Near-RT RIC 164.
In some implementations, to generate AI/ML models to be deployed in the Near-RT RIC 164, the Non-RT RIC 168 may receive parameters or external enrichment information from external servers. Such information may be utilized by the Near-RT RIC 164 and may be received at the SMO Framework 166 or the Non-RT RIC 168 from non-network data sources or from network functions. In some examples, the Non-RT RIC 168 or the Near-RT RIC 164 may be configured to tune RAN behavior or performance. For example, the Non-RT RIC 168 may monitor long-term trends and patterns for performance and employ AI/ML models to perform corrective actions through the SMO Framework 166 (such as reconfiguration via O1) or via creation of RAN management policies (such as A1 policies).
The vehicle ADS processing system 204 may include a processor 205, memory 206, an input module 207, an output module 208 and the radio module 218. The processor 205 may be coupled to the memory 206 (i.e., a non-transitory storage medium), and may be configured with processor-executable instructions stored in the memory 206 to perform operations of the methods according to various embodiments described herein. Also, the processor 205 may be coupled to the output module 208, which may control in-vehicle displays, and to the input module 207 to receive information from vehicle sensors as well as driver inputs.
The vehicle ADS processing system 204 may include a V2X antenna 219 coupled to the radio module 218 that is configured to communicate with one or more ITS participants (e.g., stations), a roadside unit 112, and a base station 110 or another suitable network access point. The V2X antenna 219 and radio module 218 may be configured to receive dynamic traffic flow feature information via vehicle-to-everything (V2X) communications. In various embodiments, the vehicle ADS processing system 204 may receive information from a plurality of information sources, such as the in-vehicle network 210, infotainment system 212, various sensors 214, various actuators 216, and the radio module 218. The vehicle ADS processing system 204 may be configured to perform autonomous or semi-autonomous driving functions using map data in addition to sensor data, as further described below.
Examples of an in-vehicle network 210 include a Controller Area Network (CAN), a Local Interconnect Network (LIN), a network using the FlexRay protocol, a Media Oriented Systems Transport (MOST) network, and an Automotive Ethernet network. Examples of vehicle sensors 214 include a location determining system (such as a Global Navigation Satellite System (GNSS) receiver), a camera, radar, lidar, ultrasonic sensors, infrared sensors, and other suitable sensor devices and systems. Examples of vehicle actuators 216 include various physical control systems such as for steering, brakes, engine operation, lights, directional signals, and the like.
The vehicle ADS processing system stack 220 may include a radar and/or lidar perception layer 222, a camera perception layer 224, a positioning engine layer 226, a map database 228 including safety and/or confidence information (or a linked database storing such information), a map fusion and arbitration layer 230, a route planning layer 232, an ASIL operating mode assessment layer 234, a sensor fusion and road world model (RWM) management layer 236, a motion planning and control layer 238, and a behavioral planning and prediction layer 240. The layers 222-240 are merely examples of some layers in one example configuration of the vehicle ADS processing system stack 220. In other configurations, other layers may be included, such as additional layers for other perception sensors (e.g., a lidar perception layer, etc.), additional layers for planning and/or control, additional layers for modeling, etc., and/or certain of the layers 222-240 may be excluded from the vehicle ADS processing system stack 220. Each of the layers 222-240 may exchange data, computational results and commands as illustrated by the arrows in the accompanying drawings.
The radar and/or lidar perception layer 222 may receive data from one or more detection and ranging sensors, such as radar (e.g., 132) and/or lidar (e.g., 138), and process the data to recognize and determine locations of other vehicles and objects within a vicinity of the vehicle 102. The radar and/or lidar perception layer 222 may include use of neural network processing and artificial intelligence methods to recognize objects and vehicles, and pass such information on to the sensor fusion and RWM management layer 236.
The camera perception layer 224 may receive data from one or more cameras and process the data to recognize and determine locations of other vehicles and objects within a vicinity of the vehicle 102. The camera perception layer 224 may include use of neural network processing and artificial intelligence methods to recognize objects and vehicles, and pass such information on to the sensor fusion and RWM management layer 236.
The positioning engine layer 226 may receive data from the radar and/or lidar perception layer 222, the camera perception layer 224, and various sources of navigation information, and process the data and information to determine a position of the vehicle 102. Various sources of navigation information may include, but are not limited to, a GPS receiver, an inertial measurement unit (IMU), and/or other sources and sensors connected via a CAN bus. The positioning engine layer 226 may also utilize inputs from one or more cameras and/or any other available sensor capable of identifying and determining directions and distances to objects in the vicinity of the vehicle, such as radars, lidars, etc.
The vehicle ADS processing system stack 220 may include or be coupled to a vehicle wireless communication subsystem 218. The wireless communication subsystem 218 may be configured to communicate with highway communication systems, such as via V2X communication links (e.g., 124), and/or with remote information sources (e.g., computing device 132) via cellular wireless communication links (e.g., 122), such as via 5G cellular networks.
The map fusion and arbitration layer 230 may access the map database 228 for location information regarding nearby objects and features as well as safety and/or confidence information, receive localizing/navigation information output from the positioning engine layer 226, and process the data to further determine the position of the vehicle 102 within the map, such as location within a lane of traffic, position within a street map, etc. Sensor data may be stored in a memory (e.g., memory 312).
In determining the position of the vehicle 102 within the map, the positioning engine layer 226 may take into consideration confidence information regarding locations of objects and features within the map as well as confidence (e.g., accuracy and/or precision information) in the sensor data used by the positioning engine layer 226, such as confidence information related to radar, lidar and/or camera sensor data. The locations of objects and features within the map data may have varying levels of confidence, provided by safety and/or confidence information within the map database or a linked database, so the map fusion and arbitration layer 230 may take into account such information as well as confidence in sensor data in developing arbitrated map location information. For example, the map fusion and arbitration layer 230 may convert latitude and longitude information from GPS into locations within a surface map of roads contained in the map database and compare such locations to information received from radar, lidar and/or camera sensors that can identify and locate the objects and features associated with roads in the map data.
Like the location information for some map objects and features, and like sensor accuracy and precision, GPS position fixes include some error, so the map fusion and arbitration layer 230 may function to determine a best-guess location of the vehicle within a roadway based upon an arbitration among the GPS coordinates, sensor data, and map data regarding objects and features in and near the roadway. For example, while GPS coordinates may place the vehicle near the middle of a two-lane road, the map fusion and arbitration layer 230 may determine from the direction of travel that the vehicle is most likely aligned with the travel lane consistent with the direction of travel. The map fusion and arbitration layer 230 may pass arbitrated map location information to the sensor fusion and RWM management layer 236.
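A simplified sketch of such an arbitration, assuming planar coordinates and hypothetical lane records carrying a map confidence value, might look like the following (real implementations typically use probabilistic filters rather than this scoring heuristic):

```python
import math

def arbitrate_lane(gps_xy, heading_deg, lanes):
    """Snap a noisy GPS fix to the lane best matching the direction of travel.

    lanes: list of dicts with 'center_xy', 'direction_deg', and a map
    'confidence' drawn from the safety and/or confidence information.
    """
    def mismatch(lane):
        # Angular difference between vehicle heading and lane direction.
        angular = abs((heading_deg - lane["direction_deg"] + 180.0) % 360.0 - 180.0)
        # Lateral offset of the GPS fix from the lane centerline point.
        lateral = math.hypot(gps_xy[0] - lane["center_xy"][0],
                             gps_xy[1] - lane["center_xy"][1])
        # Lanes with low map confidence are penalized in the arbitration.
        return (angular + lateral) / max(lane["confidence"], 1e-6)
    return min(lanes, key=mismatch)
```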
The route planning layer 232 may utilize map data, as well as inputs from an operator or dispatcher, to plan a route to be followed by the vehicle 102 to a particular destination. The route planning layer 232 may pass map-based location information to the sensor fusion and RWM management layer 236. However, the use of a prior map by other layers, such as the sensor fusion and RWM management layer 236, etc., is not required. For example, other stacks may operate and/or control the vehicle based on perceptual data alone without a provided map, constructing lanes, boundaries, and the notion of a local map as perceptual data is received.
In embodiments including an ASIL operating mode assessment layer 234, that processing layer may use safety and/or confidence information regarding nearby objects and features identified in the map database 228 to select an appropriate ADS driving mode. In some embodiments, the ASIL mode assessment layer 234 may determine whether the current autonomous or semi-autonomous driving mode is consistent with or appropriate in view of safety and/or confidence information regarding nearby objects and features in the driving environment. For example, the ASIL mode assessment layer 234 may compare the current driving mode to ASIL safety level information associated with or linked to nearby objects and features in the map database, and initiate an action to change the driving mode to an ASIL level compatible or consistent with the ASIL safety level information of the nearby objects and features.
The sensor fusion and RWM management layer 236 may receive data and outputs produced by the radar and/or lidar perception layer 222, camera perception layer 224, map fusion and arbitration layer 230, route planning layer 232, and ASIL mode assessment layer 234, and use some or all of such inputs to estimate or refine the location and state of the vehicle 102 in relation to the road, other vehicles on the road, and other objects within a vicinity of the vehicle 102. For example, the sensor fusion and RWM management layer 236 may combine imagery data from the camera perception layer 224 with arbitrated map location information from the map fusion and arbitration layer 230 to refine the determined position of the vehicle within a lane of traffic. As another example, the sensor fusion and RWM management layer 236 may combine object recognition and imagery data from the camera perception layer 224 with object detection and ranging data from the radar and/or lidar perception layer 222 to determine and refine the relative position of other vehicles and objects in the vicinity of the vehicle. As another example, the sensor fusion and RWM management layer 236 may receive information from V2X communications (such as via the CAN bus) regarding other vehicle positions and directions of travel, and combine that information with information from the radar and/or lidar perception layer 222 and the camera perception layer 224 to refine the locations and motions of other vehicles. The sensor fusion and RWM management layer 236 may output refined location and state information of the vehicle 102, as well as refined location and state information of other vehicles and objects in the vicinity of the vehicle, to the motion planning and control layer 238 and/or the behavior planning and prediction layer 240.
As a further example, the sensor fusion and RWM management layer 236 may use dynamic traffic control instructions directing the vehicle 102 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information. The sensor fusion and RWM management layer 236 may output the refined location and state information of the vehicle 102, as well as refined location and state information of other vehicles and objects in the vicinity of the vehicle 100, to the motion planning and control layer 238, the behavior planning and prediction layer 240 and/or devices remote from the vehicle 102, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc.
As a still further example, the sensor fusion and RWM management layer 236 may monitor perception data from various sensors, such as perception data from a radar and/or lidar perception layer 222, camera perception layer 224, other perception layer, etc., and/or data from one or more sensors themselves to analyze conditions in the vehicle sensor data. The sensor fusion and RWM management layer 236 may be configured to detect conditions in the sensor data, such as sensor measurements being at, above, or below a threshold, certain types of sensor measurements occurring, etc., and may output the sensor data as part of the refined location and state information of the vehicle 102 provided to the behavior planning and prediction layer 240 and/or devices remote from the vehicle 100, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc.
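One refinement step of the kind described in these paragraphs could be sketched with inverse-variance weighting, a standard way to combine two noisy estimates of the same quantity. The variances below are illustrative assumptions, not values from any described sensor:

```python
# Fuse a camera-derived lateral position with an arbitrated map-based one,
# weighting each estimate by the inverse of its variance.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two noisy estimates of the same quantity; return (value, variance)."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Camera says 0.40 m left of lane center (variance 0.04); map arbitration says
# 0.25 m (variance 0.09). The fused estimate leans toward the lower-variance input.
pos, var = fuse(0.40, 0.04, 0.25, 0.09)
print(round(pos, 3), round(var, 4))  # 0.354 0.0277
```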
The behavior planning and prediction layer 240 of the autonomous vehicle system stack 220 may use the refined location and state information of the vehicle 102 and location and state information of other vehicles and objects output from the sensor fusion and RWM management layer 236 to predict future behaviors of other vehicles and/or objects. For example, the behavior planning and prediction layer 240 may use such information to predict future relative positions of other vehicles in the vicinity of the vehicle based on the vehicle's own position and velocity and other vehicles' positions and velocities. Such predictions may take into account information from the map data and route planning to anticipate changes in relative vehicle positions as host and other vehicles follow the roadway. The behavior planning and prediction layer 240 may output other vehicle and object behavior and location predictions to the motion planning and control layer 238. Additionally, the behavior planning and prediction layer 240 may use object behavior in combination with location predictions to plan and generate control signals for controlling the motion of the vehicle 102. For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the behavior planning and prediction layer 240 may determine that the vehicle 102 needs to change lanes and accelerate, such as to maintain or achieve minimum spacing from other vehicles, and/or to prepare for a turn or exit. As a result, the behavior planning and prediction layer 240 may calculate or otherwise determine a steering angle for the wheels and a change to the throttle setting to be commanded to the motion planning and control layer 238 and ADS vehicle control unit 242, along with the various parameters necessary to effectuate such a lane change and acceleration. One such parameter may be a computed steering wheel command angle.
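One way such a steering wheel command angle might be derived is the standard pure-pursuit formula; this is a sketch of that general technique, not the document's method, and the wheelbase, lookahead distance, and steering ratio are illustrative assumptions:

```python
# Turn a desired lateral offset (e.g., a lane change) into a steering wheel
# command angle via pure pursuit. All parameters are hypothetical.
import math

def steering_wheel_angle_deg(lateral_offset_m: float, lookahead_m: float,
                             wheelbase_m: float = 2.8,
                             steering_ratio: float = 15.0) -> float:
    """Pure-pursuit road-wheel angle, scaled to a steering wheel angle."""
    curvature = 2.0 * lateral_offset_m / (lookahead_m ** 2)
    road_wheel_rad = math.atan(wheelbase_m * curvature)
    return math.degrees(road_wheel_rad) * steering_ratio

# A 3.5 m lane change executed over a 30 m lookahead yields a gentle command.
print(round(steering_wheel_angle_deg(3.5, 30.0), 1))  # ~18.7 degrees
```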
The motion planning and control layer 238 may receive data and information outputs from the sensor fusion and RWM management layer 236, safety and/or confidence information from the map database 228 (or a separate and linked database), and other vehicle and object behavior as well as location predictions from the behavior planning and prediction layer 240, and use this information to plan and generate control signals for controlling the motion of the vehicle 102 and to verify that such control signals meet safety requirements for the vehicle 102. For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the motion planning and control layer 238 may verify and pass various control commands or instructions to the ADS vehicle control unit 242.
The ADS vehicle control unit 242 may receive the commands or instructions from the motion planning and control layer 238 and translate such information into mechanical control signals for controlling the wheel angle, brake and throttle of the vehicle 102. For example, the ADS vehicle control unit 242 may respond to the computed steering wheel command angle by sending corresponding control signals to the steering wheel controller.
In various embodiments, the wireless communication subsystem 218 may communicate with other V2X system participants via wireless communication links to transmit sensor data, position data, vehicle data and data gathered about the environment around the vehicle by onboard sensors. Such information may be used by other V2X system participants to update stored sensor data for relay to other V2X system participants.
In various embodiments, the vehicle ADS processing system stack 220 may include functionality that performs safety checks or oversight of various commands, planning or other decisions of various layers that could impact vehicle and occupant safety. Such safety check or oversight functionality may be implemented within a dedicated layer or distributed among various layers and included as part of the functionality. In some embodiments, a variety of safety parameters may be stored in memory and the safety checks or oversight functionality may compare a determined value (e.g., relative spacing to a nearby vehicle, distance from the roadway centerline, etc.) to corresponding safety parameter(s), and issue a warning or command if the safety parameter is or will be violated. For example, a safety or oversight function in the behavior planning and prediction layer 240 (or in a separate layer) may determine the current or future separation distance between another vehicle (as defined by the sensor fusion and RWM management layer 236) and the vehicle (e.g., based on the world model refined by the sensor fusion and RWM management layer 236), compare that separation distance to a safe separation distance parameter stored in memory, and issue instructions to the motion planning and control layer 238 to speed up, slow down or turn if the current or predicted separation distance violates the safe separation distance parameter. As another example, safety or oversight functionality in the motion planning and control layer 238 (or a separate layer) may compare a determined or commanded steering wheel command angle to a safe wheel angle limit or parameter, and issue an override command and/or alarm in response to the commanded angle exceeding the safe wheel angle limit.
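The comparison pattern described above could be sketched as follows; the parameter names, values, and returned command strings are illustrative assumptions:

```python
# Compare determined values against stored safety parameters and flag violations.
SAFETY_PARAMS = {"min_separation_m": 30.0, "max_wheel_angle_deg": 25.0}

def check_separation(separation_m):
    """Return a corrective command if the separation parameter is violated."""
    if separation_m < SAFETY_PARAMS["min_separation_m"]:
        return "adjust_speed"        # instruct motion planning to open the gap
    return None

def check_steering(commanded_angle_deg):
    """Return an override command if the commanded angle exceeds the limit."""
    if abs(commanded_angle_deg) > SAFETY_PARAMS["max_wheel_angle_deg"]:
        return "override_and_alarm"  # clamp the command and raise an alarm
    return None

print(check_separation(22.0), check_steering(31.0))  # adjust_speed override_and_alarm
```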
Some safety parameters stored in memory may be static (i.e., unchanging over time), such as maximum vehicle speed. Other safety parameters stored in memory may be dynamic in that the parameters are determined or updated continuously or periodically based on vehicle state information and/or environmental conditions. Non-limiting examples of safety parameters include maximum safe speed, maximum brake pressure, maximum acceleration, and the safe wheel angle limit, all of which may be a function of roadway and weather conditions.
The processing device SOC 300 may include analog circuitry and custom circuitry 314 for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as processing encoded audio and video signals for rendering in a web browser. The processing device SOC 300 may further include system components and resources 316, such as voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and software clients (e.g., a web browser) running on a computing device.
The processing device SOC 300 also includes specialized circuitry for camera actuation and management (CAM) 305 that includes, provides, controls and/or manages the operations of one or more cameras (e.g., a primary camera, webcam, 3D camera, etc.), the video display data from camera firmware, image processing, video preprocessing, video front-end (VFE), in-line JPEG, high definition video codec, etc. The CAM 305 may be an independent processing unit and/or include an independent or internal clock.
In some embodiments, the image and object recognition processor 306 may be configured with processor-executable instructions and/or specialized hardware configured to perform image processing and object recognition analyses involved in various embodiments. For example, the image and object recognition processor 306 may be configured to perform the operations of processing images received from cameras via the CAM 305 to recognize and/or identify other vehicles, and otherwise perform functions of the camera perception layer 224 as described. In some embodiments, the processor 306 may be configured to process radar or lidar data and perform functions of the radar and/or lidar perception layer 222 as described.
The system components and resources 316, analog and custom circuitry 314, and/or CAM 305 may include circuitry to interface with peripheral devices, such as cameras, radar, lidar, electronic displays, wireless communication devices, external memory chips, etc. The processors 303, 304, 306, 307, 308 may be interconnected to one or more memory elements 312, system components and resources 316, analog and custom circuitry 314, CAM 305, and RPM processor 317 via an interconnection/bus module 324, which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on-chip (NoCs).
The processing device SOC 300 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock 318 and a voltage regulator 320. Resources external to the SOC (e.g., clock 318, voltage regulator 320) may be shared by two or more of the internal SOC processors/cores (e.g., a DSP 303, a modem processor 304, a graphics processor 306, an applications processor 308, etc.).
In some embodiments, the processing device SOC 300 may be included in a control unit (e.g., 140) for use in a vehicle (e.g., 100). The control unit may include communication links for communication with a telephone network (e.g., 180), the Internet, and/or a network server (e.g., 184) as described.
The processing device SOC 300 may also include additional hardware and/or software components that are suitable for collecting sensor data from sensors, including motion sensors (e.g., accelerometers and gyroscopes of an IMU), user interface elements (e.g., input buttons, touch screen display, etc.), microphone arrays, sensors for monitoring physical conditions (e.g., location, direction, motion, orientation, vibration, pressure, etc.), cameras, compasses, GPS receivers, communications circuitry (e.g., Bluetooth®, WLAN, WiFi, etc.), and other well-known components of modern electronic devices.
The vehicle ADS processing system 204 may include one or more processors 205, memory 206, a radio module 218, and other components. The vehicle ADS processing system 204 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to the processor 205.
The memory 206 may include non-transitory storage media that electronically stores information. The electronic storage media of memory 206 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with the vehicle ADS processing system 204 and/or removable storage that is removably connectable to the vehicle ADS processing system 204 via, for example, a port (e.g., a universal serial bus (USB) port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). In various embodiments, memory 206 may include one or more of electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), and/or other electronically readable storage media. The memory 206 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Memory 206 may store software algorithms, information determined by processor(s) 205, information received from the one or more other vehicles 220, information received from the roadside unit 112, information received from the base station 110, and/or other information that enables the vehicle ADS processing system 204 to function as described herein.
The processor(s) 205 may include one or more local processors that may be configured to provide information processing capabilities in the vehicle ADS processing system 204. As such, the processor(s) 205 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although the processor(s) 205 is shown in
The vehicle ADS processing system 204 may be configured by machine-readable instructions 332, which may include one or more instruction modules. The instruction modules may include computer program modules. In various embodiments, the instruction modules may include one or more of a map data accessing module 334, a confidence information accessing module 336, one or more autonomous driving modules 338, a sensed object and feature map data upload module 340, and/or other modules.
The map data accessing module 334 may be configured to access a map database, which may be stored in the memory 206 (or other vehicle memory), to obtain map data regarding objects and/or features in the vicinity of the vehicle.
The confidence information accessing module 336 may be configured to access a map database or other database indexed to map data, which may be stored in the memory 206 (or other vehicle memory), to obtain safety and/or confidence information associated with the map data related to objects and/or features in the vicinity of the vehicle. In some embodiments, the confidence information accessing module 336 may access a memory within the vehicle on which the confidence information is stored. In some embodiments, the confidence information accessing module 336 may access a network-accessible memory, such as a server storing the confidence information.
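A minimal sketch of these two lookup paths, assuming simple dictionary-backed stores (the record layouts and field names are illustrative assumptions, not the described data model):

```python
# Retrieve confidence info either embedded in the map record or from a
# separate database linked by feature identifier. Hypothetical layouts.
def get_confidence(feature_id, map_db, confidence_db):
    """Return confidence info for a map feature, trying both storage paths."""
    record = map_db.get(feature_id, {})
    if "confidence" in record:              # stored inside the map data record
        return record["confidence"]
    return confidence_db.get(feature_id)    # stored in a linked, indexed database

map_db = {"stop_sign_9": {"lat": 37.33, "lon": -121.89}}
confidence_db = {"stop_sign_9": {"accuracy_m": 0.5, "age_days": 12, "asil": "B"}}
print(get_confidence("stop_sign_9", map_db, confidence_db))
```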
The autonomous driving system modules 338 may be configured to execute the various functions of autonomous and semi-autonomous driving by a vehicle ADS, including using the confidence information in performing an autonomous or semi-autonomous driving action by the processor, as well as other operations of various embodiments. In some embodiments, the autonomous driving system modules 338 may use the confidence information by weighting corresponding map data and using the weighted map data in driving operations. In some embodiments, the autonomous driving system modules 338 may take one or more actions to change a current autonomous driving mode to a driving mode consistent with a safety level and/or confidence level of map data regarding nearby objects and features.
The sensed object and feature map data upload module 340 may be configured to identify objects and features detected by vehicle sensors that should be uploaded for consideration in generating map data, and to upload that information along with confidence information to a remote computing device. In some embodiments, a processor executing the sensed object and feature map data upload module 340 may obtain sensor data from vehicle sensors regarding one or more objects and/or features in the vicinity of the vehicle, and determine whether the obtained sensor data regarding any object or feature in the vicinity of the vehicle differs from the map data regarding the object or feature obtained from the map database sufficiently to justify reporting the data to the remote computing device, such as by a threshold amount. When such an object and/or feature is identified in vehicle sensor data, the sensed object and feature map data upload module 340 may upload location information regarding the observed object and/or feature in conjunction with confidence information regarding that information.
The processor(s) 205 may be configured to execute the modules 332-340 and/or other modules by software, hardware, firmware, some combination of software, hardware, and/or firmware, and/or other mechanisms for configuring processing capabilities on processor(s) 205.
The description of the functionality provided by the different modules 332-340 is for illustrative purposes, and is not intended to be limiting, as any of modules 332-340 may provide more or less functionality than is described. For example, one or more of modules 332-340 may be eliminated, and some or all of its functionality may be provided by other ones of modules 332-340. As another example, processor(s) 205 may be configured to execute one or more additional modules that may perform some or all of the functionality attributed herein to one of modules 332-340.
Similarly, driving zone or area 422 involves a merger of four lanes of traffic into two lanes, which can be challenging for autonomous driving systems (as well as human drivers) because of the unpredictable nature of merging traffic that will occur within this area. Thus, the driving area 422 provides an example of a roadway feature that may be assigned or associated with an ASIL safety level or an autonomous driving level of L2 or L3, indicating that an ADS-equipped vehicle should not operate fully autonomously, and should engage the driver to pay attention to the roadway and either stand by to take control of the vehicle or begin steering the vehicle (if not take full control). In some embodiments, the ADS of a vehicle 102 operating in full autonomous driving mode in area 420 may receive safety information from the map database or a linked database concerning area 422 as it approaches, and in response to receiving the L2 or L3 safety information, alert the driver that a change in driving mode is commencing and shift to the appropriate semi-autonomous driving mode upon entering area 422. Once past the merging area 422, the roadway may again be suitable for autonomous driving, in which case, as the vehicle 102 approaches the area 424, the ADS may receive safety information for the area from the map database or a linked database and notify the driver that the vehicle can be shifted to an autonomous driving mode. Such a shift from manual or semi-manual driving to a fully autonomous driving mode may require driver agreement.
The objects and features illustrated in
However, some types of objects and features that may be included in map data may be temporary or change over a period of time, such that the date or age of the associated map data is relevant and may be included in the safety and/or confidence information.
In some embodiments, the safety and/or confidence information (including date or age information) may be included as data fields within the data record for individual objects and features included within the map database.
In some embodiments, the safety and/or confidence information (including date or age information) may be stored and made available in a separate database containing data records that are linked to specific data records of the map database. For example,
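A minimal sketch contrasting the two storage layouts just described, with confidence fields either embedded in the map record or held in a separate table linked by feature identifier; all field names and values are illustrative assumptions:

```python
# Two hypothetical layouts for safety/confidence information.
from dataclasses import dataclass

@dataclass
class MapFeature:
    """Option 1: confidence fields embedded directly in the map data record."""
    feature_id: str
    lat: float
    lon: float
    kind: str
    accuracy_m: float          # confidence: estimated position accuracy
    last_surveyed: str         # confidence: date/age of the underlying data

# Option 2: a separate confidence database keyed to map records by feature_id.
confidence_table = {
    "crosswalk_41": {"accuracy_m": 0.3, "reliability": 0.97,
                     "last_surveyed": "2024-06-02", "asil": "C"},
}

feature = MapFeature("crosswalk_41", 37.3318, -121.8916, "crosswalk",
                     0.3, "2024-06-02")
print(feature, confidence_table[feature.feature_id])
```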
In block 502, the vehicle processor may perform operations including accessing, from a map database accessible by the processor, map data regarding an object or feature in the vicinity of the vehicle. In some embodiments, the vehicle processor may access the map database for all data records of objects and features that are within a threshold distance of the current location of the vehicle. As described herein, the vehicle processor may be maintaining position information using a variety of sensors, including GPS coordinate data, dead reckoning, and visual navigation based upon the relative location of objects and features in the vicinity of the vehicle using information stored in already accessed map database records. For example, as the vehicle moves forward on a roadway, the vehicle processor may continually or periodically access the map database to obtain data records of objects and features that are within a threshold distance ahead of the vehicle, thus accessing such data before the vehicle reaches the objects or features to give the vehicle ADS processing system time to conduct route planning and object avoidance processing. Means for performing the operations of block 502 may include memory (e.g., 206) storing a map database, and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the map data accessing module 334.
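The lookahead query described above could be sketched as follows, assuming a flat local coordinate frame and a simple record layout (both assumptions for illustration only):

```python
# Return map records within a threshold distance and roughly ahead of the vehicle.
import math

def features_ahead(vehicle_xy, heading_deg, records, max_dist_m=200.0):
    """Select records within max_dist_m that lie in the vehicle's forward half-plane."""
    hx = math.cos(math.radians(heading_deg))
    hy = math.sin(math.radians(heading_deg))
    out = []
    for rec in records:
        dx, dy = rec["x"] - vehicle_xy[0], rec["y"] - vehicle_xy[1]
        dist = math.hypot(dx, dy)
        if dist <= max_dist_m and (dx * hx + dy * hy) > 0:  # in front of the vehicle
            out.append(rec)
    return out

records = [{"id": "sign_3", "x": 120.0, "y": 5.0},
           {"id": "sign_4", "x": -50.0, "y": 0.0}]
print(features_ahead((0.0, 0.0), 0.0, records))  # only sign_3, which lies ahead
```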
In block 504, the vehicle processor may perform operations including accessing, by the processor, confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle. In some embodiments, the confidence information may include an Automotive Safety Integrity Level (ASIL) autonomous driving level in the vicinity of the object or feature. Additionally or alternatively, in some embodiments the safety and/or confidence information may include an indication related to accuracy of the map data regarding the object or feature. Additionally or alternatively, in some embodiments the safety and/or confidence information may include an indication related to reliability of the map data regarding the object or feature. Additionally or alternatively, in some embodiments the safety and/or confidence information may include a statistical score indicative of a precision of the map data regarding the object or feature (e.g., statistical measure of precision, F1 score, etc.). Additionally or alternatively, in some embodiments the safety and/or confidence information may include an age or freshness of the map data regarding the object or feature. For example, as described with reference to
In block 506, the vehicle processor may perform operations including using the confidence information by the processor in performing an autonomous or semi-autonomous driving action. In some embodiments, the vehicle ADS processing system may adjust the autonomous driving level being performed by the system consistent with safety and confidence information (e.g., switching to a lower level of autonomous driving consistent with the safety information), or take into account confidence information regarding object or feature map data as part of sensor fusion, navigation, route planning, object avoidance, and the like. In some embodiments, the vehicle processor of the autonomous driving system may discontinue an autonomous or semi-autonomous driving function, functionality, feature or action. Means for performing the operations of block 506 may include the in-vehicle network 210, and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the ADS processing system stack 220 and/or autonomous driving system modules 338.
After the operations in block 512, the vehicle processor may perform the operations in block 506 as described.
In block 516, the vehicle processor may perform operations including using weighted map data regarding the object or feature by the processor while performing a path planning, object avoidance or steering autonomous driving action. For example, the vehicle processor may assign a large weight (e.g., 1) to object and feature data for which the confidence information indicates significant confidence in the accuracy, precision and/or reliability of the data. As another example, the vehicle processor may assign a weight less than one to object and feature data for which the confidence information indicates that there is a degree of inaccuracy, imprecision and/or unreliability involved with the map data. As another example, the vehicle processor may assign a weight less than one to object and feature data that is old and thus may no longer be correct. Means for performing the operations of block 516 may include memory (e.g., 206) storing a map database, and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the autonomous driving system module 338.
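A minimal sketch of such a weighting rule; the thresholds and field names are illustrative assumptions, not prescribed values:

```python
# Assign full weight to high-confidence map records, reduced weight to
# imprecise or stale ones. Hypothetical thresholds.
def map_data_weight(confidence: dict) -> float:
    """Compute a multiplicative weight for a map record from its confidence info."""
    weight = 1.0
    if confidence.get("accuracy_m", 0.0) > 1.0:   # coarse localization
        weight *= 0.6
    if confidence.get("age_days", 0) > 365:       # old data may be stale
        weight *= 0.5
    return weight

print(map_data_weight({"accuracy_m": 0.3, "age_days": 20}))   # 1.0
print(map_data_weight({"accuracy_m": 2.0, "age_days": 400}))  # 0.3
```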
In block 524, the vehicle processor may perform operations including changing the autonomous driving mode of the vehicle implemented by the ADS after notifying the driver. For example, if the vehicle ADS processing system is operating in a full autonomous mode (e.g., L4 or L5) and the safety information associated with map data of objects, features or areas in the vicinity of the vehicle indicates that the driving environment is not safe for autonomous operations, the vehicle processor may shift to a driver-assisted or driver-in-charge operating mode based upon the safety information after receiving an acknowledgement or detecting that the driver has taken control. Means for performing the operations of block 524 may include the in-vehicle network 210, and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the ADS processing system stack 220 and/or autonomous driving system modules 338.
In some instances, the confidence information regarding the object or feature may be confidence information regarding objects and features within a defined area. In such instances, in block 526 the vehicle processor may perform operations including changing the autonomous driving mode of the vehicle implemented by the ADS to a driving mode compatible with the confidence information while the vehicle is in the defined area.
In block 528, the vehicle processor may perform operations including determining whether the obtained sensor data regarding the object or feature in the vicinity of the vehicle differs from the map data regarding the object or feature obtained from the map database by a threshold amount. For example, the sensor fusion and RWM management layer 236 executing in the vehicle processor may determine whether differences between information obtained from vehicle sensors and map data are of a magnitude indicating that the sensor data should be reported to a computing device that maintains the map data. Such a magnitude may be in the form of one or more thresholds of difference and/or may be in the form of a table that indicates the types and magnitudes of inconsistency that warrant sending the sensor data to the computing device that maintains the map data. Means for performing the operations of block 528 may include the in-vehicle network 210, and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the sensed object and feature map data upload module 340.
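The reporting decision could be sketched with a per-type threshold table; the feature types and threshold values are illustrative assumptions:

```python
# Report a sensed object when its position differs from the mapped position
# by at least the threshold for that feature type.
import math

REPORT_THRESHOLDS_M = {"sign": 1.0, "lane_boundary": 0.5, "default": 2.0}

def should_report(kind: str, sensed_xy, mapped_xy) -> bool:
    """True if the sensed/mapped discrepancy meets the per-type threshold."""
    threshold = REPORT_THRESHOLDS_M.get(kind, REPORT_THRESHOLDS_M["default"])
    dx = sensed_xy[0] - mapped_xy[0]
    dy = sensed_xy[1] - mapped_xy[1]
    return math.hypot(dx, dy) >= threshold

print(should_report("sign", (10.0, 4.2), (10.0, 2.9)))  # True: off by 1.3 m
```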
In block 530, the vehicle processor may perform operations including uploading, to a remote computing device, the obtained sensor data regarding the object or feature in the vicinity of the vehicle along with confidence information based on one or more of: a type of sensor used to detect or classify the object or feature; a quality of perception of the object or feature achieved by the sensor; and/or an accuracy or precision of the sensor data. The upload may be performed in response to determining that the obtained sensor data differs from the map data regarding the object or feature obtained from the map database by at least the threshold amount. In some embodiments, the vehicle processor may upload the sensor data and confidence information to the computing device using a V2X network (e.g., 124) via a roadside unit (e.g., 112). In some embodiments and/or instances, the vehicle processor may upload the sensor data and confidence information to the computing device via a cellular wireless network (e.g., 122), such as a 5G network, via a base station (110). Means for performing the operations of block 530 may include a radio module 218 and a processor (e.g., 205, 300) of a vehicle ADS processing system (e.g., 104, 204) executing the sensed object and feature map data upload module 340.
In block 602, the computing device may perform operations including receiving, by the computing device from a source, information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature. The coordinate and description information related to each object or feature may be gathered from a variety of sources, including surveys of roadways and roadway features, overhead and satellite imagery, data from survey vehicles equipped to recognize and localize objects and features and report the data to the computing device, and from ADS-equipped vehicles reporting objects and features identified and localized by the vehicle's sensors (e.g., radar, lidar, cameras, etc.). Means for performing the operations of block 602 may include a computing device such as a server illustrated in
In block 604, the computing device may perform operations including using the received measure of confidence in the information regarding the object or feature to generate confidence information regarding the object or feature suitable for use by vehicle autonomous and semi-autonomous driving systems in autonomous or semi-autonomous driving operations. In some embodiments, the confidence information may include one or more of: an Automotive Safety Integrity Level (ASIL) autonomous driving level in the vicinity of the object or feature; an indication related to accuracy of the map data regarding the object or feature; an indication related to reliability of the map data regarding the object or feature; a statistical score indicative of a precision of the map data regarding the object or feature; or an age or freshness of the map data regarding the object or feature. ASIL information regarding map objects and features may be received from an authority or service that assigns safe autonomous driving levels based on the accuracy of map data and/or challenges posed by roadway features.
In some embodiments, the confidence information regarding the accuracy or precision of the map data may be established based upon the accuracy and reliability of the sources of, or methods used to obtain, the map data. The information used to generate the map database may come from a variety of sources, including survey vehicles, highway systems (e.g., cameras, traffic sensors, etc.), roadside units, and vehicles on the highway. In some embodiments, the confidence assigned to objects and features in the map data in block 604 may depend on the source of the map data. For example, the confidence level assigned to or associated with objects and features in a map generated from a single vehicle's report received via V2X communications may be less than the confidence level assigned to or associated with objects and features in a map generated by map crowd sourcing.
Means for performing the operations of block 604 may include a computing device such as a server illustrated in
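A minimal sketch of source-dependent confidence assignment of the kind just described; the source categories and scores are illustrative assumptions:

```python
# Assign a baseline confidence level from the category of the data source.
SOURCE_CONFIDENCE = {
    "survey_vehicle": 0.95,      # purpose-built, calibrated equipment
    "crowd_sourced": 0.85,       # many independent reports
    "single_vehicle_v2x": 0.60,  # one vehicle's report, as in the example above
}

def source_confidence(source: str) -> float:
    """Return a baseline confidence for map data from the given source type."""
    return SOURCE_CONFIDENCE.get(source, 0.5)

print(source_confidence("single_vehicle_v2x") < source_confidence("crowd_sourced"))  # True
```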
In block 606, the computing device may perform operations including storing the confidence information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems. In some embodiments, the computing device may store the confidence information in the map data, such as in the same data records as the location and description information of the corresponding object and feature data. In some embodiments, the computing device may store the confidence information in a database (e.g., a confidence information database) separate from the map database, such as in data records with an index or information linking the confidence information record to the corresponding object and feature data record in the map database. In some embodiments as part of the operations in block 606, the computing device may store the map database including confidence information, or store the confidence information database, in a network location that ADS-equipped vehicles can access, such as to download the databases before beginning a trip or during a trip. In some embodiments as part of the operations in block 606, the computing device may transmit or otherwise distribute the map database including confidence information, or the map database and a confidence information database, to ADS-equipped vehicles, such as via over-the-air updates. Means for performing the operations of block 606 may include a computing device such as a server illustrated in
In some embodiments, receiving information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature may include receiving from one or more vehicles information including: a location of the object or feature; a characteristic of the object or feature; and a measure of confidence in the information regarding either the location or the characteristic of the object or feature.
Referring to
Referring to
Referring to
In block 614 the computing device may perform operations including providing the database to vehicles for use in autonomous or semi-autonomous driving operations. In some embodiments, the computing device may periodically (or episodically upon completing an update) transmit the map database or just updates to the database to ADS-equipped vehicles. For example, the computing device may transmit updates to or updated map databases to ADS-equipped vehicles using a V2X network (e.g., 124) via roadside units (e.g., 112). In some embodiments and/or instances, the computing device may transmit updates to or updated map databases to ADS-equipped vehicles using a cellular wireless network (e.g., 122), such as a 5G network, via a base station (110). Means for performing the operations of block 614 may include a computing device such as a server illustrated in
Referring to
Following the operations in block 604, the computing device may perform operations including determining, from information received from the plurality of sources, one set of information regarding the object or feature and consolidated confidence information for the determined set of information regarding the object or feature in block 616. For example, the computing device may perform a statistical analysis on the information to select one set of object or feature data that best represents the information received from the plurality of sources, such as averaging or taking a weighted average using the confidence information associated with each reported sensor data as a weighting factor. In performing such statistical analysis, the computing device may determine a consolidated confidence level appropriate for the consolidated map data based on the number of sources of information used to generate the consolidated map data as well as the confidence information associated with each source of information. For example, if the one set of information regarding the object or feature stored in the map data is based on information received from a large number of sources (e.g., more than 10) and the various sources indicated high confidence in the reported information, the computing device may reflect a high level of confidence in the one set of object or feature data in the corresponding confidence information. Conversely, if the one set of information regarding the object or feature stored in the map data is based on information received from a small number of sources (e.g., three or fewer) and the various sources indicated low confidence in the reported information, the computing device may reflect a low level of confidence in the one set of object or feature data in the corresponding confidence information. Means for performing the operations of block 616 may include a computing device such as a server illustrated in
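The consolidation step could be sketched as a confidence-weighted average, with the consolidated confidence scaled by the number of sources and their mean confidence; the exact formula is an illustrative assumption, not the described method:

```python
# Consolidate reported positions into one set of data plus a consolidated
# confidence level. Hypothetical scoring.
def consolidate(reports):
    """reports: list of (x, y, confidence) tuples from independent sources."""
    total_w = sum(c for _, _, c in reports)
    x = sum(px * c for px, _, c in reports) / total_w
    y = sum(py * c for _, py, c in reports) / total_w
    mean_conf = total_w / len(reports)
    count_factor = min(1.0, len(reports) / 10.0)  # more sources -> more trust
    return (x, y), mean_conf * count_factor

reports = [(100.2, 5.0, 0.9), (100.4, 5.2, 0.8), (99.9, 4.9, 0.95)]
print(consolidate(reports))  # consolidated position and confidence
```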
In block 618, the computing device may perform operations including storing the consolidated confidence information for the determined set of information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems for use in autonomous or semi-autonomous driving operations. For example, similar to the operations in block 612, the computing device may store the confidence information in a data record of a database or data table along with information or an index linking it to the corresponding data record of the map object or feature in the map database, to enable ADS-equipped vehicles to find and access the confidence information. Means for performing the operations of block 618 may include a computing device such as a server illustrated in
Implementation examples are described in the following paragraphs. While some of the following implementation examples are described in terms of example methods, further example implementations may include: the example methods discussed in the following paragraphs implemented by a vehicle processing device that may be an on-board unit as part of, or coupled to, an autonomous driving system configured with processor-executable instructions to perform operations of the methods 1-10 of the following implementation examples; the example methods discussed in the following paragraphs implemented by a computing device configured with processor-executable instructions to perform operations of the methods 11-16 of the following implementation examples; the example methods discussed in the following paragraphs implemented by a processing device including means for performing functions of the methods of the following implementation examples; and the example methods discussed in the following paragraphs implemented as a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a vehicle processing device or computing device to perform the operations of the methods of the following implementation examples.
Example 1. A method performed by a processor of an autonomous driving system of a vehicle for using map data in performing an autonomous driving function, including: accessing, from a map database accessible by the processor, map data regarding an object or feature in the vicinity of the vehicle; accessing, by the processor, confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle; and using the confidence information by the processor in performing an autonomous or semi-autonomous driving action.
Example 2. The method of example 1, in which the confidence information includes an ASIL autonomous driving level in the vicinity of the object or feature.
Example 3. The method of either of example 1 or 2, in which the confidence information includes an indication related to accuracy of the map data regarding the object or feature.
Example 4. The method of any of examples 1-3, in which the confidence information includes an indication related to reliability of the map data regarding the object or feature.
Example 5. The method of any of examples 1-4, in which the confidence information includes a statistical score indicative of a precision of the map data regarding the object or feature.
Example 6. The method of any of examples 1-5, in which the confidence information includes an age or freshness of the map data regarding the object or feature.
Example 7. The method of any of examples 1-6, in which accessing confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle includes obtaining the confidence information by the processor from the map database, in which information in the map database is obtained from one or more of system memory, a remote computing device, or another vehicle.
Example 8. The method of any of examples 1-7, in which accessing confidence information associated with the map data regarding the object or feature in the vicinity of the vehicle includes obtaining the confidence information based on a location of the object or feature from a data structure accessible by the processor that is different from the map database.
Example 9. The method of any of examples 1-8, in which using the confidence information in performing an autonomous or semi-autonomous driving action by the vehicle includes applying, by the processor, a weight to the accessed map data regarding the object or feature based upon the confidence information, and using weighted map data regarding the object or feature by the processor while performing a path planning, object avoidance or steering autonomous driving action.
Example 10. The method of any of examples 1-9, in which using the confidence information in performing an autonomous or semi-autonomous driving action by the vehicle includes changing an autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle.
Example 11. The method of any of examples 1-10, in which changing the autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle includes changing the autonomous driving mode of the vehicle implemented by the processor to a driving mode compatible with the confidence information regarding the object or feature in the vicinity of the vehicle.
Example 12. The method of any of examples 1-11, further including notifying a driver of a need to participate in driving of the vehicle in response to determining that the confidence information regarding the object or feature in the vicinity of the vehicle does not support a fully autonomous driving mode, and changing the autonomous driving mode of the vehicle implemented by the processor after notifying the driver.
Example 13. The method of any of examples 1-12, in which: the confidence information regarding the object or feature includes confidence information regarding objects and features within a defined area; and changing the autonomous driving mode of the vehicle implemented by the processor based on the confidence information regarding the object or feature in the vicinity of the vehicle includes changing the autonomous driving mode of the vehicle implemented by the processor to an autonomous driving mode consistent with the confidence information while the vehicle is in the defined area.
Example 14. The method of any of examples 1-13, further including: obtaining, by the processor from vehicle sensors, sensor data regarding the object or feature in the vicinity of the vehicle; determining, by the processor, whether the obtained sensor data regarding the object or feature in the vicinity of the vehicle differs from the map data regarding the object or feature obtained from the map database by a threshold amount; and uploading, by the processor to a remote computing device, the obtained sensor data regarding the object or feature in the vicinity of the vehicle along with confidence information based on one or more of a type of sensor used to detect or classify the object or feature, a quality of perception of the object or feature achieved by the sensor, or an accuracy or precision of the sensor data in response to determining that the obtained sensor data differs from the map data regarding the object or feature obtained from the map database by at least the threshold amount.
Example 15. A method performed by a computing device for including safety and confidence information within map data useful to autonomous and semi-autonomous driving systems in vehicles, including: receiving, by the computing device from a source, information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature; using the received measure of confidence in the information regarding the object or feature to generate safety and confidence information regarding the object or feature suitable for use by vehicle autonomous and semi-autonomous driving systems in autonomous or semi-autonomous driving operations, in which the safety and confidence information includes one or more of an ASIL autonomous driving level in the vicinity of the object or feature; an indication related to accuracy of the map data regarding the object or feature; a statistical score indicative of a precision of the map data regarding the object or feature; an indication related to reliability of the map data regarding the object or feature; or an age or freshness of the map data regarding the object or feature; and storing the safety and confidence information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems.
Example 16. The method of example 15, in which receiving information regarding an object or feature for inclusion in a map database including a measure of confidence in the information regarding the object or feature includes receiving from one or more vehicles information including: a location of the object or feature; a characteristic of the object or feature; and a measure of confidence in the information regarding either the location or the characteristic of the object or feature.
Example 17. The method of either of examples 15 or 16, further including updating information regarding the object or feature in the map database based at least in part on the received measure of confidence in the received information regarding the object or feature.
Example 18. The method of any of examples 15-17, in which storing the safety and confidence information regarding the object or feature includes including the safety and confidence information as part of location and other information regarding the object or feature in the map database provided to vehicles for use in autonomous or semi-autonomous driving operations.
Example 19. The method of any of examples 15-18, in which storing the safety and confidence information regarding the object or feature includes: storing the safety and confidence information in a database separate from the map database correlated with location information of the object or feature; and providing the database to vehicles for use in autonomous or semi-autonomous driving operations.
Example 20. The method of any of examples 15-19, in which: receiving information regarding an object or feature for inclusion in a map database includes receiving, from a plurality of sources, information regarding the object or feature along with measures of confidence in the information regarding the object or feature, the method further including determining, from information received from the plurality of sources, one set of information regarding the object or feature and consolidated safety and confidence information for the determined set of information regarding the object or feature; and storing safety and confidence information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems for use in autonomous or semi-autonomous driving operations includes storing the consolidated safety and confidence information for the determined set of information regarding the object or feature in a manner that enables access by vehicle autonomous and semi-autonomous driving systems for use in autonomous or semi-autonomous driving operations.
Various embodiments illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given embodiment are not necessarily limited to the associated embodiment and may be used or combined with other embodiments that are shown and described. Further, the claims are not intended to be limited by any one example embodiment. For example, one or more of the operations of the methods may be substituted for or combined with one or more operations of the methods.
The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art the order of operations in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.
The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.
The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.
In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.