The present disclosure relates generally to feature mapping, and in particular, some implementations may relate to systems and methods for updating map data based on vehicle sensor data.
Autonomous vehicle technology is becoming more commonplace with the introduction of new vehicles each model year. While widespread adoption of fully autonomous vehicles is only now becoming visible on the horizon, autonomous vehicle technology is gaining popularity for assisted driving and other semi-autonomous vehicle operation.
Map data plays an important role in enabling Autonomous Vehicles (AVs) to navigate safely and efficiently. These maps are often referred to as High-Definition (HD) maps or Autonomous Vehicle Maps (AVMs). HD maps, and maps of other resolutions, may include detailed geospatial information about the environment, such as the location of road features including lanes, intersections, traffic signs, signals, curbs, and other roadside features. Lane markings, curbs, and other like data can provide lane-level information, identifying where a road is as well as where the lanes are and their exact dimensions. The maps may also include the location of, and information about, traffic signals, stop signs, yield signs, and other road signs.
Maps may be augmented with high-resolution imagery, including images from cameras mounted on AVs or other sources. This visual data can help AVs recognize landmarks, navigate through complex areas, and verify the environment. Some maps are three-dimensional, providing detailed information about the height and shape of objects in the environment.
Maps, however, need to be updated to reflect changes in the environment, such as new road construction, changes in traffic patterns, new or moved features (e.g., lane markings, road signs, etc.) and temporary road closures.
Various embodiments of the disclosed technology relate to systems and methods for curating map data. Some aspects may relate to using information obtained from vehicle sensors to detect new map features such as, for example, new lane markings, road signs, traffic signals, and so on. Systems and methods disclosed herein may be configured to detect an unmapped feature at a determined position along a vehicle's trajectory using sensor data from one or more vehicle sensors. This information may be used to determine the feature type and its position (e.g., position along a roadway). To refine the new object's position, the systems and methods may be configured to identify data representing one or more existing, already mapped features along the vehicle's trajectory and to compare the position of those features as determined using data from the vehicle sensors with the position of those objects as noted in the map.
This comparison may provide information to refine the positions and orientations (poses) of the vehicle along the trajectory in the map. The refined, well-localized vehicle poses can be used to identify the map-centric position(s) of the new object(s) based on vehicle data. Furthermore, the system can be used to determine whether the position, size, or orientation of an existing, mapped feature has been modified based on vehicle data. In such scenarios, the system may compute the updated map-centric position, size, and orientation of the feature and update the map accordingly. Moreover, the system may use the vehicle data to determine whether an existing, mapped feature has been removed and, if so, update the map accordingly.
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
In one general aspect, a method may include detecting at least one change in a first feature at a determined position along the trajectory of the vehicle based on sensor data from one or more sensors of the vehicle. The method may also include scanning sensor data from the one or more sensors of the vehicle along the trajectory of the vehicle to identify sensor data representing one or more existing map features on a map. The method may furthermore include determining a vehicle-centric position of the one or more existing map features based on the sensor data from one or more sensors of the vehicle. The method may in addition include comparing the determined vehicle-centric position of the one or more existing map features to a map-based position of their corresponding one or more existing map features and determining a difference between the vehicle-centric positions of the one or more existing map features and corresponding map-based positions of the one or more existing map features. The method may moreover include determining a transformation based on the determined difference and applying the transformation to the vehicle-centric position of the first feature to obtain a revised position of the first feature. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features. The method where applying the transformation aligns positions of the vehicle along the trajectory and the vehicle-centric positions of the one or more existing features with the map. The method where applying the transformation to the determined position of the first feature to obtain a revised position of the first feature may include applying the transformation to the determined vehicle-centric position of the first feature to obtain a revised position of the first feature in the map. The method where applying the transformation to the vehicle-centric position of the first feature to obtain a revised position of the first feature may include determining, based on sensor data, whether at least one of a position, orientation, or size of the first feature has changed, and upon detecting a change in at least one of the position, orientation, or size of the first feature, applying the transformation to the vehicle-centric position of the first feature to obtain at least one of a revised position, orientation, or size of the first feature. The method may include updating at least one of the position, orientation, or size of the first feature on the map to reflect the change. The method where determining a transformation may include determining a position offset between the vehicle-centric position of the one or more existing map features and the map-based position of the one or more existing map features. The method may include comparing the first feature to existing features on the map to determine if the first feature is a new feature. The method may include updating the map to include the new feature at the revised position. The method may include comparing a set of one or more vehicle-centric features to existing features on the map to determine if the first feature has been removed from along the trajectory and, upon detection of removal of the first feature, updating the map to reflect removal of the first feature. The method may include updating the map to remove the first feature from the map. The method where the first feature may include at least one of a road sign and lane markings. Implementations of the described techniques may include hardware, a method or process, or a tangible computer-readable medium.
In one general aspect, a system may include one or more processors configured to perform operations of: detecting at least one change in a first feature at a determined position along the trajectory of the vehicle based on sensor data from one or more sensors of the vehicle; scanning sensor data from the one or more sensors of the vehicle along the trajectory of the vehicle to identify sensor data representing one or more existing map features on a map; determining a vehicle-centric position of the one or more existing map features based on the sensor data from the one or more sensors of the vehicle; and comparing the determined vehicle-centric position of the one or more existing map features to a map-based position of their corresponding one or more existing map features and determining a difference between the vehicle-centric positions of the one or more existing map features and corresponding map-based positions of the one or more existing map features. The operations may also include determining a transformation based on the determined difference and applying the transformation to the vehicle-centric position of the first feature to obtain a revised position of the first feature. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
Implementations may include one or more of the following features. The system where applying the transformation aligns positions of the vehicle along the trajectory and the vehicle-centric positions of the one or more existing map features with the map. The system where applying the transformation to the determined position of the first feature to obtain a revised position of the first feature may include applying the transformation to the determined vehicle-centric position of the first feature to obtain a refined position of the first feature in the map. The system where applying the transformation to the vehicle-centric position of the first feature to obtain a revised position of the first feature may include determining, based on sensor data, whether at least one of a position, orientation, or size of the first feature has changed, and upon detecting a change in at least one of the position, orientation, or size of the first feature, applying the transformation to the vehicle-centric position of the first feature to obtain at least one of a revised position, orientation, or size of the first feature. The system may include updating at least one of the position, orientation, or size of the first feature on the map to reflect the change. The system where determining a transformation may include determining a position offset between the vehicle-centric position of the one or more existing map features and the map-based position of the one or more existing map features. The system where the one or more processors are further configured to perform the operation of comparing the first feature to existing features on the map to determine if the first feature is a new feature. The system where the one or more processors are further configured to perform the operation of updating the map to include the new feature at the revised position. The system where the one or more processors are further configured to perform the operation of comparing a set of one or more vehicle-centric features to existing features on the map to determine if the first feature has been removed from along the trajectory and, upon detection of removal of the first feature, updating the map to reflect removal of the first feature. The system where the one or more processors are further configured to perform the operation of updating the map to remove the first feature from the map. The system where the first feature may include at least one of a road sign and lane markings. Implementations of the described techniques may include hardware, a method or process, or a tangible computer-readable medium.
Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosed technology. The summary is not intended to limit the scope of any inventions described herein, which are defined solely by the claims attached hereto.
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
Aspects of the systems and methods disclosed herein relate to using information obtained from vehicle sensors to detect new, moved, removed or otherwise changed map features such as, for example, lane markings, road signs, traffic signals, and so on. Aspects of the technology disclosed herein may be configured to detect unmapped features along a vehicle's trajectory using sensor data from one or more vehicle sensors. Systems and methods may use this information to determine characteristics such as the feature type, its position along a roadway, and other characteristics (e.g., text on a road sign, color of lane markings, and so on).
Because the object's position as determined by the sensor data may be less accurate than is desired for the map (which may include a high-definition, medium-definition, standard-definition or other resolution map), aspects of the systems and methods disclosed herein may be configured to update or refine the object's position. This can be done, for example, by comparing the position of one or more other, known, roadway features detected by the vehicle sensors with their respective positions as cataloged on the map to determine if there is an error or offset between position information determined based on vehicle sensor data and position information as indicated on the map (which may be assumed for this purpose to be more accurate).
Particularly, aspects of the technology disclosed herein may be configured to identify data representing an existing, already mapped feature along the vehicle's trajectory and its vehicle-centric position as determined from vehicle sensor data. That determined vehicle-centric data can be compared to a map-based position of that feature. This comparison may be used to determine a transformation based on a position offset between the vehicle-centric position of the existing map feature and the map-based position of the feature as noted on the map. The systems and methods may apply this same offset to revise the noted position of the new feature. This can be used to adjust or account for inaccuracies or imprecision in position determination based on vehicle sensor data.
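By way of illustration only, the following Python sketch shows one way such a translational offset might be estimated from already-mapped features and applied to a newly detected feature. The coordinate values, variable names, and the choice of a pure translation are assumptions made for this example and do not limit the embodiments described herein.

```python
import numpy as np

# Vehicle-centric (sensor-derived) positions of features that already exist in
# the map, paired with the positions cataloged for those same features in the
# map. Positions here are hypothetical (x, y) coordinates in meters.
detected_known = np.array([[102.1, 50.4], [130.8, 52.0], [160.3, 55.7]])
mapped_known = np.array([[101.5, 49.9], [130.1, 51.4], [159.6, 55.2]])

# A simple transformation: the mean offset between the sensor-derived positions
# and the map-based positions of the known features.
offset = (mapped_known - detected_known).mean(axis=0)

# Applying that same offset to the sensor-derived position of the new, unmapped
# feature yields a revised position that is better aligned with the map.
new_feature_detected = np.array([145.2, 53.3])
new_feature_revised = new_feature_detected + offset
print(new_feature_revised)
```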
Information about the vehicle-centric features available to the system from the sensors may be 2D or 3D data. 2D data may arise, for example, in situations where the vehicle sensors that obtained the features were cameras (as opposed to, e.g., lidar). In such cases, the system may use the refined or unrefined vehicle poses along the trajectory to compute the 3D positions, orientations, and sizes of the vehicle-centric features.
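Where only 2D camera detections are available, one textbook way to recover a 3D feature position from vehicle poses along the trajectory is least-squares ray intersection (triangulation). The sketch below is a minimal illustration under that assumption; it is not necessarily the method used by any particular embodiment, and the function and variable names are hypothetical.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point closest to a set of bearing rays.

    origins: (N, 3) camera positions in the map frame, one per vehicle pose.
    directions: (N, 3) bearing vectors toward the feature, derived from each
    2D image detection together with the corresponding pose's orientation.
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projects orthogonally to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two observations of the same sign from successive vehicle poses (hypothetical).
origins = [[0.0, 0.0, 1.5], [5.0, 0.0, 1.5]]
directions = [[1.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
print(triangulate(origins, directions))  # -> approximately [5.0, 5.0, 1.5]
```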
In some applications, a transformation determined for existing map features can be weighted when it is applied to offset or adjust the determined position of new features. Weightings can be set based on factors such as, for example, bearing to the respective feature, distance between the vehicle and the respective feature, distance between the existing feature and the new feature, and so on.
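As one hypothetical illustration of such weighting, the sketch below combines per-feature offsets using inverse-distance weights, so that existing features closer to the new feature contribute more to the correction. The specific weighting function is an assumption for this example.

```python
import numpy as np

def weighted_offset(offsets, distances_to_new_feature):
    """Combine per-feature position offsets into a single weighted correction.

    offsets: (N, 2) offsets computed for N existing map features.
    distances_to_new_feature: (N,) distances from each existing feature to the
    new feature; closer features receive greater weight.
    """
    offsets = np.asarray(offsets, dtype=float)
    d = np.asarray(distances_to_new_feature, dtype=float)
    w = 1.0 / (d + 1e-6)   # inverse-distance weights (illustrative choice)
    w /= w.sum()           # normalize so the weights sum to one
    return (offsets * w[:, None]).sum(axis=0)

# Offsets from three existing features; the nearest feature dominates.
print(weighted_offset([[0.6, 0.5], [0.7, 0.6], [0.2, 0.1]], [40.0, 25.0, 5.0]))
```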
The systems and methods disclosed herein may be implemented using sensor data from any of a number of different autonomous or semi-autonomous vehicles and vehicle types. Data may even be gathered from sensors on non-autonomous vehicles. For example, the systems and methods disclosed herein may be used with cars, trucks, buses, construction vehicles and other on- and off-road vehicles, including vehicles for transportation of people/personnel, materials or other items. The technology disclosed herein may extend to other vehicle types as well.
Vehicle sensors used to gather data may include a plurality of different sensors to gather data regarding the vehicle and its surrounding environment. For example, sensors may include lidar, radar, infrared sensors, image sensors and other data gathering sensors. GPS or other vehicle positioning systems may be used to gather data regarding a position of the vehicle at a given time, which may be used in conjunction with sensor data to calculate a vehicle-centric feature position.
Distance measuring sensors such as lidar, radar, infrared sensors and other like sensors can be used to gather data to measure distances and closing rates to various external roadway features such as traffic signs, light poles, curbs, lane markings, and other objects. Image sensors can include one or more cameras or other image sensors to capture images of the environment around the vehicle, including objects along the vehicle trajectory. Information from image sensors can be used to determine information about the environment surrounding the vehicle including, for example, information regarding nearby objects. For example, image sensors may be able to recognize features or other objects (including, e.g., street signs, traffic lights, lane markings, curbs, etc.), the slope of the road, objects to be avoided (e.g., other vehicles, pedestrians, bicyclists, etc.) and other objects. Information from the image sensors can be used in conjunction with other information, such as map data or information from a positioning system, to determine, refine or verify information about objects along the vehicle's trajectory, including object type, vehicle-centric object position, object characteristics (e.g., text on a road sign), and so on.
A vehicle-centric position for new objects can be determined using vehicle sensor data. For example, range and bearing from the vehicle to the object can be computed using vehicle sensor data. This range and bearing can be used along with vehicle position data (e.g., from the vehicle's GPS or other position determination system) to calculate the vehicle-centric position. However, this vehicle-centric position may not be as accurate as desired due to imprecision or inaccuracies in the vehicle sensor data. For example, GPS accuracy can suffer from predicted-orbit errors, clock errors, atmospheric errors, interference and so on. Similarly, lidar, while generally very accurate, may suffer from atmospheric conditions, interference, or other error-inducing conditions. Therefore, as described below, it may be desirable to refine the vehicle-centric position determined for the new object.
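By way of example, the sketch below illustrates the range-and-bearing calculation described above, computing a planar, map-frame position for a detected object from the vehicle's position fix, its heading, and the sensed range and bearing to the object. The frame conventions (heading measured from the x-axis, bearing measured relative to the heading) are assumptions for this illustration.

```python
import numpy as np

def feature_global_position(vehicle_xy, vehicle_heading_rad, range_m, bearing_rad):
    """Planar position of a detected object in the map frame.

    vehicle_xy: vehicle position in a local planar frame (e.g., a projected GPS fix).
    vehicle_heading_rad: heading measured counterclockwise from the frame's x-axis.
    bearing_rad: bearing to the object relative to the vehicle heading
    (positive to the left).
    """
    theta = vehicle_heading_rad + bearing_rad
    return np.asarray(vehicle_xy) + range_m * np.array([np.cos(theta), np.sin(theta)])

# An object 25 m away, 10 degrees to the right of a vehicle at (500, 200)
# heading along the +x axis.
print(feature_global_position([500.0, 200.0], 0.0, 25.0, np.radians(-10.0)))
```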
Because GPS or other position data may be somewhat inaccurate, the system may be configured to look at one or more existing features already mapped to refine the position information for the new object. Thus, the system can be configured to search for existing map features along the trajectory. At operation 108 of example process 100, once an existing map feature is identified, the system may compare the determined vehicle-centric position of the existing map feature to a map-based position of the existing map feature (e.g., the position of the same object as indicated on the map). The results of this comparison may be used to determine a transformation (or function) that accurately aligns the positions and orientations of the vehicle along the trajectory and the vehicle-centric positions of the features with the map; that is, the function/transformation computes their map-centric positions and orientations from their vehicle-centric positions and orientations. This may be based on a position offset, or difference, between the vehicle-centric position of the existing map feature and the map-based position of the existing map feature. In some applications, the transformation for a given object may be determined iteratively. An example of this iterative determination is described below.
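One minimal way to sketch such an iterative determination is shown below: a translational correction is re-estimated and accumulated until the residual offset falls below a tolerance. In a fuller implementation, the feature correspondences or weights might be re-evaluated on each pass (as in iterative-closest-point-style alignment), which is what makes repeating the estimate useful; the names and thresholds here are hypothetical.

```python
import numpy as np

def iterative_alignment(vehicle_pts, map_pts, tol=0.05, max_iters=10):
    """Iteratively estimate a translational correction between sensor-derived
    and map-based feature positions, accumulating the total transformation."""
    pts = np.asarray(vehicle_pts, dtype=float).copy()
    map_pts = np.asarray(map_pts, dtype=float)
    total = np.zeros(pts.shape[1])
    for _ in range(max_iters):
        # With fixed correspondences one pass removes the mean offset; in
        # practice, correspondences and weights would be recomputed here
        # before re-estimating.
        offset = (map_pts - pts).mean(axis=0)
        if np.linalg.norm(offset) < tol:
            break
        pts += offset
        total += offset
    return total  # cumulative correction to apply to new-feature positions

print(iterative_alignment([[102.1, 50.4], [130.8, 52.0]],
                          [[101.5, 49.9], [130.1, 51.4]]))
```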
In this example, we can assume that street sign 204 is a new feature that is unmapped—i.e., it is not included on the map. The system may be configured to detect this feature based on sensor data and identify it as being unmapped. For example, it might be identified as being unmapped because there is no corresponding feature that already exists in the map. A user viewing the example dashboard 200 may also make this determination. For example, the user might adjust the transparency of the camera image to see if a corresponding feature appears on the map. A user could also examine the map data to determine whether it exists.
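A simple automated check for whether a detection is unmapped is a nearest-neighbor search against the features already in the map. A sketch under that assumption follows; the match radius is chosen arbitrarily for illustration.

```python
import numpy as np

def is_unmapped(detected_xy, mapped_xy, match_radius_m=2.0):
    """Treat a detection with no mapped feature within the match radius as new."""
    mapped = np.asarray(mapped_xy, dtype=float)
    if mapped.size == 0:
        return True
    dists = np.linalg.norm(mapped - np.asarray(detected_xy, dtype=float), axis=1)
    return bool(dists.min() > match_radius_m)

# No existing map feature lies within 2 m of this detection, so it is flagged new.
print(is_unmapped([145.2, 53.3], [[101.5, 49.9], [130.1, 51.4]]))
```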
As noted above, the system can calculate a transformation based on the offset between the object's position as determined by vehicle sensor data and the object's position as noted in the map data. The system can apply the transformation to correct information representing the position determined based on sensor data to partially or fully remove this offset. An example of the result of removing this offset is illustrated in the upper right-hand screenshot, where representative icons 302 and 304 are at least partially overlapping one another. In this example, however, there is still some position offset, as the representative icons 302 and 304 are not fully overlapping. Accordingly, the offset determination and correction operations can be applied again to further refine the position; an example of the result is shown in the example screenshot at bottom center. Here the position of the object as determined based on vehicle sensor data almost identically matches the position of the object as specified in the map data. Indeed, it can be seen that representative icon 302 (representing the object based on vehicle sensor data) almost perfectly overlaps with representative icon 304 (representing the position of the object as specified by the map data).
As described above with reference to operation 110, the position offset determined for the existing map feature can be applied to the sensor-based, or vehicle-centric, position of the new, unmapped feature to refine its position, with the intention of making this position data more accurate. In simplified terms, the system can be configured to calculate the distance and direction (e.g., in three dimensions) that it “moved” the sensor-based object (represented by icon 302) and apply that same translation to the vehicle-centric position data of the new object to improve the accuracy of the position data for the new object.
According to some aspects of the systems and methods described herein, the position information can be further refined by examining additional existing objects along the vehicle trajectory.
At operation 408, the system may apply the position offsets determined for one or more of the additional existing map features to the refined position of the unmapped feature (e.g., as refined at operation 110 of example process 100).
Aspects of the systems and methods may be implemented in which values for feature attributes of the new object may be identified and included with the map data for the new object. Values for these attributes may be determined automatically, such as by machine-vision recognition of the object, or they may be adjusted or input by the user.
Continuing with the screenshot in the upper left, the text attribute (attribute 5), as detected by the vehicle sensors, is “null” or nonexistent. In other words, the sensors were unable to detect sufficient information for the text attribute, or image/character recognition was unable to determine the text based on the sensor data collected. Similarly, the system was unable to determine the catalog_id value (attribute 3). In this example, the user uses the selection for attribute 3 (see example screenshots on the right-hand side and lower left-hand side) to select “speed_limit” from the drop-down menu. Similarly, the user can type in the appropriate text attribute value, which in this case is “35”, representing that the speed limit posted on the sign is 35 mph. The user may reference the image from the camera to obtain these user-supplied attribute values.
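A hypothetical attribute record of the kind described above might look like the following sketch, in which values the sensors could not determine start as null and are later supplied by the user. The field names mirror the example attributes but are otherwise assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureAttributes:
    """Illustrative attribute record for a detected road-sign feature."""
    feature_id: str
    feature_type: str                 # e.g., "road_sign"
    catalog_id: Optional[str] = None  # e.g., "speed_limit"; None if undetermined
    text: Optional[str] = None        # e.g., "35"; None if unreadable by sensors

# The sensors detected the sign but could not resolve its catalog_id or text,
# so a user supplies those values after reviewing the camera image.
sign = FeatureAttributes(feature_id="sign-204", feature_type="road_sign")
sign.catalog_id = "speed_limit"  # selected from a drop-down menu
sign.text = "35"                 # posted speed limit read from the camera image
print(sign)
```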
Where the disclosed technology is implemented in whole or in part using software, these software elements can be implemented to operate with a computing or processing component capable of carrying out the functionality described with respect thereto. One such example computing component is described below.
Computing component 700 might include, for example, one or more processors, controllers, control components, or other processing devices. Processor 704 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. Processor 704 may be connected to a bus 702. However, any communication medium can be used to facilitate interaction with other components of computing component 700 or to communicate externally.
Computing component 700 might also include one or more memory components, simply referred to herein as main memory 708. For example, random access memory (RAM) or other dynamic memory, might be used for storing information and instructions to be executed by processor 704. Main memory 708 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Computing component 700 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 702 for storing static information and instructions for processor 704.
The computing component 700 might also include one or more various forms of information storage mechanism 710, which might include, for example, a media drive 712 and a storage unit interface 720. The media drive 712 might include a drive or other mechanism to support fixed or removable storage media 714. For example, a hard disk drive, a solid-state drive, a magnetic tape drive, an optical drive, a compact disc (CD) or digital video disc (DVD) drive (R or RW), or other removable or fixed media drive might be provided. Storage media 714 might include, for example, a hard disk, an integrated circuit assembly, magnetic tape, cartridge, optical disk, a CD or DVD. Storage media 714 may be any other fixed or removable medium that is read by, written to or accessed by media drive 712. As these examples illustrate, the storage media 714 can include a computer usable storage medium having stored therein computer software or data.
In alternative embodiments, information storage mechanism 710 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing component 700. Such instrumentalities might include, for example, a fixed or removable storage unit 722 and an interface 720. Examples of such storage units 722 and interfaces 720 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory component) and memory slot. Other examples may include a PCMCIA slot and card, and other fixed or removable storage units 722 and interfaces 720 that allow software and data to be transferred from storage unit 722 to computing component 700.
Computing component 700 might also include a communications interface 724. Communications interface 724 might be used to allow software and data to be transferred between computing component 700 and external devices. Examples of communications interface 724 might include a modem or softmodem, a network interface (such as, for example, Ethernet, a network interface card, IEEE 802.XX or other interface). Other examples include a communications port (such as, for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software/data transferred via communications interface 724 may be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 724. These signals might be provided to communications interface 724 via a channel 728. Channel 728 might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
In this document, the terms “computer program medium” and “computer usable medium” are used generally to refer to transitory or non-transitory media. Such media may be, e.g., memory 708, storage unit 722, storage media 714, and channel 728. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions, embodied on the medium, are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing component 700 to perform features or functions of the present application as discussed herein.
It should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Instead, they can be applied, alone or in various combinations, to one or more other embodiments, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present application should not be limited by any of the above-described exemplary embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing, the term “including” should be read as meaning “including, without limitation” or the like. The term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof. The terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time. Instead, they should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “component” does not imply that the aspects or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various aspects of a component, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.
Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.