Identifying a stopping place for an autonomous vehicle

Information

  • Patent Grant
  • Patent Number
    10,681,513
  • Date Filed
    Thursday, October 20, 2016
  • Date Issued
    Tuesday, June 9, 2020
Abstract
Among other things, stored data is maintained indicative of potential stopping places that are currently feasible stopping places for a vehicle within a region. The potential stopping places are identified as part of static map data for the region. Current signals are received from sensors or one or more other sources, representing perceptions of actual conditions at one or more of the potential stopping places. The stored data is updated based on changes in the perceptions of actual conditions. The updated stored data is exposed to a process that selects a stopping place for the vehicle from among the currently feasible stopping places.
Description
BACKGROUND

This description relates to identifying stopping places for an autonomous vehicle.


As shown in FIG. 1, a typical activity of an autonomous vehicle (AV) 10 is to safely and reliably drive through a road environment 12 to a goal position 14, while avoiding vehicles, pedestrians, cyclists and other obstacles 16 and obeying the rules of the road. We sometimes refer to the AV's ability to perform this activity as an “autonomous driving capability”.


The autonomous driving capability of an AV typically is supported by an array of technology 18, 20 including hardware, software, and stored and real time data that we together sometimes refer to as an AV system 22. Some or all of the technology is onboard the AV and some of the technology may be at a server, for example, in the cloud. Most AV systems include some or all of the following basic components:


1. Sensors 24 able to measure or infer or both properties of the AV's state and condition, such as the vehicle's position, linear and angular velocity and acceleration, and heading (i.e., orientation of the leading end of the AV). Such sensors include but are not limited to, e.g., GPS, inertial measurement units that measure both vehicle linear accelerations and angular rates, individual wheel speed sensors and derived estimates of individual wheel slip ratios, individual wheel brake pressure or braking torque sensors, engine torque or individual wheel torque sensors, and steering wheel angle and angular rate sensors.


2. Sensors 26 able to measure properties of the vehicle's surroundings. Such sensors include but are not limited to, e.g., LIDAR, RADAR, monocular or stereo video cameras in the visible light, infrared, or thermal spectra, ultrasonic sensors, time-of-flight (TOF) depth sensors, as well as temperature and rain sensors.


3. Devices 28 able to communicate the measured or inferred or both properties of other vehicles' states and conditions, such as other vehicles' positions, linear and angular velocities, and accelerations, and headings. These devices include Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication devices, and devices for wireless communications over point-to-point or ad-hoc networks or both. The devices can operate across the electromagnetic spectrum (including radio and optical communications) or other media (e.g., acoustic communications).


4. Data sources 30 providing historical, real-time, or predictive information or combinations of them about the local environment, including traffic congestion updates and weather conditions. Such data may be stored on a memory storage unit 32 on the vehicle or transmitted to the vehicle by wireless communication from a remotely located database 34.


5. Data sources 36 providing digital road map data drawn from GIS databases, potentially including high-precision maps of the roadway geometric properties, maps describing road network connectivity properties, maps describing roadway physical properties (such as the number of vehicular and cyclist travel lanes, lane width, lane traffic direction, lane marker type and location), and maps describing the spatial locations of road features such as crosswalks, traffic signs of various types (e.g., stop, yield), and traffic signals of various types (e.g., red-yellow-green indicators, flashing yellow or red indicators, right or left turn arrows). Such data may be stored on a memory storage unit on the AV or transmitted to the AV by wireless communication from a remotely located database.


6. Data sources 38 providing historical information about driving properties (e.g., typical speed and acceleration profiles) of vehicles that have previously traveled along the local road section at a similar time of day. Such data may be stored on a memory storage unit on the AV or transmitted to the AV by wireless communication from a remotely located database.


7. A computer system 40 located on the AV that is capable of executing algorithms (e.g., processes 42) for the on-line (that is, real-time on board) generation of control actions based on both real-time sensor data and prior information, allowing an AV to execute its autonomous driving capability.


8. A display device 44 that is connected to the computer system to provide information and alerts of various types to occupants of the AV.


9. A wireless communication device 46 to transmit data from a remotely located database 34 to the AV and to transmit vehicle sensor data or data related to driver performance to a remotely located database 34.


10. Functional devices and features 48 of the AV that are instrumented to receive and act on commands for driving (e.g., steering, acceleration, deceleration, gear selection) and for auxiliary functions (e.g., turn indicator activation) from the computer system.


11. A memory 32.


SUMMARY

In general, in an aspect, a vehicle is caused to drive autonomously through a road network toward a defined goal position. Current information is analyzed about potential stopping places in the vicinity of the goal position, to make a choice of a currently selected stopping place that is acceptable and feasible. The vehicle is caused to drive autonomously toward the currently selected stopping place. The activities are repeated until the vehicle stops at a currently selected stopping place.


Implementations may include one or a combination of two or more of the following features. The analyzing of current information includes analyzing a combination of static information that is known prior to the vehicle beginning to drive autonomously through the road network and information that is obtained during the autonomous driving. The analyzing of current information about potential stopping places includes continuously analyzing information from sensors on the vehicle or information from one or more other sources or both. The analyzing of current information about potential stopping places includes applying a predefined strategy for choosing a currently selected stopping place. The strategy includes stopping at the first feasible stopping place in the vicinity of the goal position. The strategy includes stopping at the most desirable among the currently feasible stopping places in the vicinity of the goal position. The strategy includes, if the vehicle has not stopped at a currently selected stopping place within a specified period of time, stopping at the first available stopping place in the vicinity of the goal position. The strategy includes trading off between choosing a best possible stopping place and a cost incurred in continuing to analyze current information to make a choice. If the vehicle has not stopped at a currently selected stopping place, the activities are repeated using a relaxed threshold for acceptability, a relaxed threshold for feasibility, or both. If the vehicle has not stopped at a currently selected stopping place that is acceptable and feasible, the activities are repeated based on a redefined goal position. If the vehicle has not stopped at a currently selected stopping place, the activities are repeated based on an expanded vicinity of the goal position. If the vehicle has not stopped at a currently selected stopping place, the activities are repeated during an extended period of time beyond the specified amount of time. If the vehicle has not stopped at a currently selected stopping place, the vehicle is enabled to be controlled from a remote location. If the vehicle has not stopped at a currently selected stopping place, the stop is aborted. If the vehicle has not stopped at a currently selected stopping place, a threshold for acceptability is relaxed, the goal position is redefined, or the stop is aborted, based on information provided by a passenger. If the vehicle has not stopped at a currently selected stopping place, a threshold for acceptability is automatically relaxed, the goal position is automatically redefined, or the stop is automatically aborted. The analyzing of current information includes analyzing map data about the road network. The analyzing of current information includes analyzing information about features in the vicinity of the goal position as perceived by sensors. The sensors are on the vehicle. The analyzing of current information includes analyzing distances of respective stopping places from the goal position. The analyzing of current information includes analyzing information about respective positions of the vehicle at the potential stopping places. The analyzing of current information includes analyzing information about whether the vehicle can legally stop at the potential stopping places. The analyzing of current information includes analyzing relative desirability of potential stopping places.
The analyzing relative desirability of potential stopping places includes applying a statistical model to predict an expected feasibility state of a stopping place based on historical statistics of level of demand for parking and current traffic volumes. The analyzing of current information includes identifying a set of potential stopping places that are within a proximity region in the vicinity of the goal position. The analyzing of current information includes removing potential stopping places from the set that are not feasible stopping places for the vehicle. If no potential stopping places remain after the removing, the proximity region is enlarged. There may be no stopping place that is currently acceptable. Current information is analyzed about additional potential stopping places in the vicinity of the goal position to make a choice of a currently selected stopping place that is acceptable and feasible. The additional potential stopping places lie within a larger vicinity of the goal position. The larger vicinity is determined based on information provided by a passenger of the vehicle. The larger vicinity is determined automatically based on pre-specified rules. The size of the larger vicinity is less than a predetermined upper limit.


In general, in an aspect, an autonomous vehicle includes steering, acceleration, and deceleration devices that respond to signals from an autonomous driving control system to drive the vehicle autonomously on a specified route through a road network defined by map data and toward a specified goal position. Sensors on the vehicle perceive characteristics of potential stopping places in the vicinity of the goal position. A communication feature sends the currently perceived characteristics to the autonomous driving control system and receives from the autonomous driving control system current information indicative of commands for the steering, acceleration, and deceleration devices to cause the vehicle to drive to and stop at a currently selected stopping place.


Implementations may include one or a combination of two or more of the following features. The autonomous driving control system includes elements onboard the vehicle. The autonomous driving control system includes elements remote from the vehicle. The sensors on the vehicle include video capture, LIDAR, or RADAR devices. The current information received from the autonomous driving control system includes a continuously updated choice of a selected stopping place.


In general, in an aspect, a planning process is associated with an autonomous vehicle. The planning process has inputs including static map data and dynamic data from sensors on the autonomous vehicle and outputs including a route to be driven through a road network to reach a goal position, a continually updated choice of a currently selected stopping place in the vicinity of the goal position, and a trajectory to be executed through a road network to reach the currently selected stopping place. A communication element communicates information about the updated choice of a currently selected stopping place to a device of a passenger, receives from the device of the passenger information about the goal position, and delivers the information to the planning process as an input.


Implementations may include one or a combination of two or more of the following features. The currently selected stopping place is updated based on non-feasibility of a currently selected stopping place. The planning process updates the selection of a stopping place based on the information from the device of the passenger about the stopping place. The communication element communicates to the device of the passenger that the planning process has selected a stopping place. The information received from the device of the passenger includes an indication that a stopping place farther from the goal position would be acceptable. The information received from the device of the passenger includes an indication of a maximum acceptable distance. The information received from the device of the passenger includes an indication of a new goal position to be considered by the planning process. The information received from the device of the passenger includes an indication that further time spent searching for an acceptable stopping place would be acceptable to the passenger.


In general, in an aspect, stored data is maintained indicative of potential stopping places that are currently feasible stopping places for a vehicle within a region. The potential stopping places are identified as part of static map data for the region. Current signals are received from sensors or one or more other sources, representing perceptions of actual conditions at one or more of the potential stopping places. The stored data is updated based on changes in the perceptions of actual conditions. The updated stored data is exposed to a process that selects a stopping place for the vehicle from among the currently feasible stopping places.


Implementations may include one or a combination of two or more of the following features. The potential stopping places are initialized as all of the potential stopping places identified as part of the static map data for the region. The potential stopping places are discretized as a finite number of points within the region corresponding to potential stopping places. A potential stopping place is defined as a shape containing one of the points, the shape corresponding to a footprint of the vehicle. An orientation is attributed to the shape, the orientation corresponding to a direction of traffic flow. The potential stopping places are initialized as potential stopping places expected to be feasible based on prior signals from sensors representing perceptions of actual conditions at one or more of the potential stopping places. The sensors include sensors located on the vehicle. The sensors include sensors located other than on the vehicle. Information is included in the updated stored data about how recently the updating occurred, or the reason for the indication that a potential stopping place is feasible or not feasible. The current signals received from sensors are received through V2V or V2I communication. The one or more other sources include crowd-sourced data sources. The vehicle is part of a fleet of vehicles managed from a central server and the method includes the server distributing information received from sensors at one of the vehicles to other vehicles of the fleet.


In general, in an aspect, a vehicle is caused to drive autonomously through a road network toward a defined goal position, and, if no potential stopping place that is in the vicinity of the goal position and is acceptable and feasible can be identified, a human operator not onboard the vehicle is enabled to drive the vehicle to a stopping place.


Implementations may include one or a combination of two or more of the following features. Information is provided for presenting to the human operator a view associated with a state of the vehicle. The human operator is enabled to provide inputs to the steering, throttle, brake, or other actuators of the vehicle. The human operator is enabled to directly control a trajectory planning process for the vehicle by manually selecting a goal position for the vehicle or influencing the planned trajectory to the goal position. A passenger in the vehicle is informed, or consent of a passenger in the vehicle is requested, for the human operator to drive the vehicle to the stopping place.


In general, in an aspect, a vehicle is caused to drive autonomously through a road network toward a defined goal position at which a stopped activity is to occur. A set of acceptable stopping places is identified in the vicinity of the goal position. The identifying of the set includes: identifying a proximity region in the vicinity of the goal position, identifying a goal region within the proximity region, and discretizing the goal region to identify acceptable stopping places given characteristics of the vehicle and of the stopped activity. Current information is analyzed about the potential stopping places in the goal region in the vicinity of the goal position, to make a choice of a currently selected stopping place that is acceptable and feasible. The vehicle is caused to drive autonomously toward the currently selected stopping place. The activities are repeated until the vehicle stops at a currently selected stopping place.


Implementations may include one or a combination of two or more of the following features. The analyzing of current information includes analyzing a combination of static information that is known prior to the vehicle beginning to drive autonomously through the road network and information that is obtained during the autonomous driving. The analyzing of current information about potential stopping places includes continuously analyzing information from sensors on the vehicle or information from one or more other sources or both. If the vehicle has not stopped at a currently selected stopping place that is acceptable and feasible within a specified amount of time and within specified thresholds for acceptability and feasibility, activities are repeated using a relaxed threshold for acceptability, a relaxed threshold for feasibility, or both. The analyzing of current information includes analyzing map data about the road network. The analyzing of current information includes analyzing information about features in the vicinity of the goal position as perceived by sensors. The analyzing of current information includes analyzing distance of respective stopping places from the goal position. The analyzing of current information includes analyzing information about respective positions of the vehicle at the potential stopping places. The analyzing of current information includes analyzing information about whether the vehicle can legally stop at the potential stopping places. The analyzing of current information includes analyzing relative desirability of potential stopping places. The analyzing of current information includes identifying a set of potential stopping places that are within a proximity region in the vicinity of the goal position. The analyzing of current information includes removing potential stopping places from the set that are not feasible stopping places for the vehicle.


These and other aspects, features, implementations, and advantages, and combinations of them, can be expressed as methods, systems, components, apparatus, program products, methods of doing business, means and steps for performing functions, and in other ways.


Other aspects, features, implementations, and advantages will become apparent from the following description and from the claims.





DESCRIPTION


FIG. 1 is a block diagram.



FIGS. 2, 10, and 13 are schematic views of maps.



FIG. 3 is a screenshot.



FIG. 4 is a schematic diagram.



FIGS. 5 through 9 are schematic views of maps.



FIGS. 11, 12, 14, and 15 are flow diagrams.





The use of the following terms in this description is intended broadly and is meant to include, for example, what is recited after each of the terms.


Annotated map data—conventional road network map data used, for example, for autonomous driving systems that has been augmented with data associated with potential stopping places. Conventional road network map data can include some or all of the data mentioned in item 5. under the Background heading above.


Autonomous vehicle (AV)—a vehicle that has autonomous driving capability.


Autonomous driving capability—an ability to safely and reliably drive through a road environment to a goal position while avoiding vehicles, pedestrians, cyclists, and other obstacles and obeying the rules of the road.


Coordinates—geographic longitude and latitude.


Acceptable—satisfies one or more criteria for evaluating the appropriateness of a stopping place as a place where, for example, the pick up or drop-off or other stopped activity can be done safely or legally or conveniently or expeditiously, among other criteria.


Stop—come to a halt for a limited period of time with the intention of completing a stopped activity.


Stopped activity—an action that may occur at a stopping place (other than, for example, a stop made in the course of driving such as at a stop sign or a traffic light), such as picking up or dropping off a passenger or parcel, or other action.


AV system—a set of elements or components or processes located on an autonomous vehicle or at other locations, or a combination of them, and that together do the things that must be done in order to enable an autonomous vehicle to operate.


Passenger—a human being who is to be picked up or dropped off at a stopping place by an autonomous vehicle and (for convenience and simplicity in the prose) a human being who wants to have a parcel picked up or dropped off by an autonomous vehicle.


Stopping place—an area that the vehicle occupies (identified by a defined shape, typically a rectangle, at a defined location in the world) and a direction in which a vehicle is facing when stopped at the stopping place.


Acceptable stopping place—a stopping place that is acceptable because it is safe, legal, and convenient for the passenger.


Potential stopping place—a stopping place that is under consideration by an AV system.


Feasible stopping place—a stopping place that is possible for the AV to reach and stop at.


Currently selected stopping place—a stopping place that is currently selected by an AV system.


Actual stopping place—a stopping place where an AV actually stops.


Availability layer—a layer of annotated road data that identifies potential stopping places included in the road data.


Goal position—a position that is an intended destination, is identified by coordinates on a road network, and has been specified by a passenger or an element of an AV system.


Proximity region—a region defined around or in the vicinity of the goal position and entirely within a fixed configurable distance of a point within the region (e.g., its center). The proximity region need not be round because the distance metric employed in calculation of the region may be defined in a non-Euclidean metric space.


Goal region—a region defined around or in the vicinity of the goal position such that any stopping place within the goal region is an acceptable stopping place for the AV.


Sub-region—a portion of a region.


As shown in FIG. 2, we describe techniques by which an AV system can determine and continuously update a currently selected stopping place 100 for an AV at or in the vicinity of a goal position 102 to which the AV is driving. At the currently selected stopping place, the AV will stop and engage in or enable a stopped activity, including, for example, picking up or dropping off a passenger 104 or a package 106 or both. We use the term “stop” broadly to include, for example, stop, park, or stand.


At least the following complexities must be taken into account in determining and updating a selected stopping place:


1. The AV's goal position is often specified directly as geographic coordinates (latitude and longitude) or indirectly by an address or some other form of identifier that can be translated into geographic coordinates. Sometimes the goal position is specified by a passenger who wants to be picked up or dropped off by the AV roughly at that position, or by a user who wants to load a parcel onto the AV at that position for delivery to another location, or unload a parcel. For simplicity we use the word passenger to refer not only to a human rider but also to a user who is to load or unload a parcel even though that user may not be a rider in the AV.


As shown in FIG. 3, a passenger 110 can specify a goal position by typing out an address 112 or by dropping a pin 114 on a map 116 using a touch-based user interface 118 of a mobile app running on a user device 120. In some cases, the coordinates that are entered directly or that result from translation of such an address or other identifier may correspond to a goal position that does not lie on a drivable road 122 or other drivable feature of the road network, such as a goal position within a building, waterway, park, or other non-drivable feature.


2. Even when the specified goal position is on a road or other drivable feature, the coordinates may correspond to a location on the road that is not an acceptable stopping place (e.g., because it is not a safe or legal stopping place), such as a stopping place in the middle of the road or at the side of a busy highway. We use the term “acceptable” broadly to refer to satisfying one or more criteria for evaluating the appropriateness of a stopping place as a place where, for example, the pick up or drop-off or other stopped activity can be done safely or legally or conveniently or expeditiously, among other criteria.


3. The coordinates of the goal position may correspond to an acceptable stopping place 130 on a road or other drivable feature where, for example, it is normally safe and legal for the AV to stop (e.g., a taxi stand or a reserved parking space). Yet for a variety of reasons such as the temporary presence of a parked vehicle or other obstacle or construction works 132, it may be temporarily impossible for the AV to stop there. In other words, such an acceptable stopping place is not a feasible stopping place.


4. The coordinates of the goal position may correspond to an acceptable stopping place where a road or driveway exists, but the AV may not have map information sufficient to enable it to determine that the goal position represents such an acceptable stopping place. This circumstance may occur on private roads or grounds, such as private residences, shopping malls, corporate campuses, or other private sites. In other words, such an acceptable stopping place is not a feasible stopping place.


5. The coordinates of the goal position may correspond to an acceptable stopping place that would require passing through a feature that the AV cannot navigate. For example, the acceptable stopping place may lie beyond the entrance of a parking structure that requires human interaction (e.g., to retrieve a parking ticket), beyond a guard post or checkpoint, or beyond a road region that the AV has otherwise identified as impassable (e.g., due to the presence of road construction). In such circumstances, the otherwise acceptable stopping place is not a feasible stopping place.


In all of these cases and others, the AV needs to find a stopping place on the road that is not only acceptable but is also a feasible stopping place.


Although we have been discussing considerations in selecting an acceptable and feasible stopping place that is in the vicinity of a goal position that has been specified by a passenger, circumstances may sometimes require that the AV stop in the vicinity of a goal position that is not a passenger-specified goal position. For example, in the event of a passenger medical emergency, the AV may be automatically re-directed to the closest medical center. Another example is that of a centrally coordinated service, where a central optimization algorithm asks an idle AV to reposition itself (for example, in expectation of higher passenger or freight demand around the new location). The techniques that we describe here are applicable to such other goal positions as well. Thus, we use the term “goal position” broadly to include, for example, any position on the road network to which the AV is driving for the purpose of stopping.


Although we often refer to a goal position as a single specific location defined by geographic coordinates, a goal region 140 of a configurable size and geometry can be defined around or in the vicinity of the goal position coordinates. A goal region is the region within which it would be acceptable for the AV to stop to engage in stopped activities, for a goal position located at the specified coordinates. If the AV were to stop at any stopping place within the goal region, it would be considered to have stopped acceptably near to the goal position, that is, at an acceptable stopping place.


Not all potential stopping places that are acceptable (and therefore within the goal region) and feasible are equally desirable. For example, some of those feasible stopping places may be closer to the goal position than others, making them more desirable.


Given an AV that has an autonomous driving capability, the techniques and systems that we describe here enable the AV to select and then drive to a selected stopping place within the goal region that is acceptable, feasible, and desirable.


Data from the road map data sources mentioned earlier is typically used for the purpose of AV routing and autonomous navigation. We assume, to the extent necessary for the techniques and systems mentioned here to function, that the data is augmented (if necessary) to include the following annotated map data:


1. Data about areas that contain potential stopping places where it is legal for the AV to stop, including the direction in which it is legal for the AV to stop there, restrictions on how long the AV can stop there, restrictions on what types of vehicles can stop there, restrictions on what activities stopped vehicles can engage in (e.g., loading zones), and any other relevant information. For example, on busy roads it might only be legal to stop at the very edge of the road, adjacent to the curb. However, on a quiet residential street, it might be acceptable to double park and stop in the travel lane itself. On certain roads, stopping might be completely disallowed (except during emergencies). On certain roads, it may not be legal to park during snow emergencies.


2. Data about areas that include potential stopping places where AVs might not normally drive, but nevertheless are allowed to stop (e.g., parking lots, driveways) and restrictions associated with those areas. These areas are specifically identified in the data to ensure that the AV system does not use these areas for any purpose other than stopping.


3. Data about areas that contain potential stopping places where it is not legal for a vehicle to stop in any situation (e.g., no stopping zones, in front of certain buildings such as fire stations, etc.) and any exceptions to those rules (e.g., a loading zone might permit cargo drop-offs but not passenger drop-offs.)


The process of gathering the annotated map data normally happens offline before the data is in active use by the AV, or the data may be streamed to the vehicle wirelessly as it travels. Also, some of this data may be gathered in real time using data from the AV's perception system.
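For illustration only, the three kinds of annotated areas described above might be represented along the lines of the following Python sketch. The class, its field names, and the enumeration values are hypothetical choices made for this sketch; they are not part of the map data format described here.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional, Tuple


    class StoppingRule(Enum):
        STOPPING_ALLOWED = 1   # areas where stopping is legal (item 1 above)
        STOP_ONLY_AREA = 2     # drivable only for stopping, e.g., driveways (item 2)
        NO_STOPPING = 3        # no-stopping zones, e.g., fire station frontage (item 3)


    @dataclass
    class StoppingAreaAnnotation:
        """One annotated area of the road map that contains potential stopping places."""
        boundary: List[Tuple[float, float]]         # boundary vertices (lat, lon)
        rule: StoppingRule
        legal_heading_deg: Optional[float] = None   # direction the AV must face when stopped
        max_stop_duration_s: Optional[float] = None # restriction on how long the AV can stop
        allowed_vehicle_types: List[str] = field(default_factory=list)  # e.g., ["car"]
        allowed_activities: List[str] = field(default_factory=list)     # e.g., ["cargo_dropoff"]
        exceptions: List[str] = field(default_factory=list)             # e.g., ["no parking during snow emergency"]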


Given an AV that has access to the annotated map data and has an autonomous driving capability, the process shown in FIG. 4 illustrates an example of how a selected stopping place may be determined. A wide variety of other sequences and components of the process could be used.


1. As shown in FIG. 4, the geographic coordinates of the goal position 200 of an AV are specified directly or inferred by the AV system 202 in one or more of the following ways and in other ways or combinations of them:


a. A passenger indicates a point on a map or an address 206 as the goal position of the AV. This could be done on a mobile app, a kiosk, a notebook, a tablet, or a workstation 207, to name a few examples. The map or address is communicated wirelessly to and received by a communication element 208 of the AV system 202.


b. The AV system itself sets a goal position based on one or more processes, for example, an emergency response process 210 that routes a vehicle to the nearest hospital or a vehicle re-balancing process 212 that routes the vehicle to a different goal position in a city, etc.


2. As shown in FIG. 4, given the coordinates of a goal position, a goal region 214 is defined by the AV system 202. The goal region contains potential stopping places (based on available annotated map data) where it is acceptable for the AV to stop to perform a stopped activity. As noted before, not all of these potential stopping places may be feasible or equally desirable. Also, the goal region may be updated for a variety of reasons, for example, as more information becomes available.


As shown in FIG. 5, to define the goal region, the AV system first creates and stores a proximity region 214. The proximity region comprises both acceptable and unacceptable stopping places. The goal region is defined as the subset of the proximity region where all stopping places are acceptable. However, not all of these acceptable stopping places in the goal region are feasible or equally desirable. So finally, a stopping place is selected from the goal region that is both feasible and desirable, and the AV navigates to the currently selected stopping place.


The proximity region is defined to be within, for example, a fixed, configurable distance 218 from the goal position. The center of this proximity region could be the goal position itself, or a point 222 on the drivable area of the map that is near (e.g., “nearest”) to the goal position, or something similar.


This fixed configurable distance could be calculated by the AV system in a number of ways, including but not restricted to one or a combination of:


1. Straight line distance (i.e., all points within a specific radius from the center).


2. Manhattan or grid distance.


3. Walking distance (i.e., all points within a specific walking distance from the center).


4. Points that are within line of sight of the goal position (based on appropriate sightline data).


5. Other non-Euclidean distance measures.


This allows the proximity region to take a variety of shapes beyond that of a circle. We note that the actual shape of the proximity region is not important, as long as it is well-defined.
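As an illustration of how the choice of metric shapes the proximity region, here is a minimal Python sketch. The function names and the use of a local planar frame measured in meters are assumptions made for this sketch; a walking-distance or sightline metric would additionally need pedestrian path or sightline data from the map.

    import math
    from typing import Callable, List, Tuple

    Point = Tuple[float, float]  # (x, y) in a local metric frame, meters


    def euclidean(a: Point, b: Point) -> float:
        """Straight-line distance (item 1 above)."""
        return math.hypot(a[0] - b[0], a[1] - b[1])


    def manhattan(a: Point, b: Point) -> float:
        """Grid distance (item 2 above)."""
        return abs(a[0] - b[0]) + abs(a[1] - b[1])


    def proximity_region(candidates: List[Point],
                         center: Point,
                         max_distance_m: float,
                         metric: Callable[[Point, Point], float]) -> List[Point]:
        """Return the candidate points within the configurable distance of the center.

        The center may be the goal position itself or a nearby point on the drivable
        area. Swapping the metric (Euclidean, Manhattan, walking distance, ...) changes
        the shape of the resulting region, which is why it need not be circular.
        """
        return [p for p in candidates if metric(p, center) <= max_distance_m]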


The goal region is then determined by excluding from the proximity region sub-regions where stopping is not allowed, and retaining the sub-regions where stopping is allowed, given the nature of the stopped activity, expected duration of stopping, nature of the AV, time of the day, etc. This can be done, for example, by determining the intersection of the areas in the annotated map data where stopping is allowed (given the nature of the stopped activity, expected duration of stopping, nature of the AV, time of the day, etc.) with the proximity region. Such geospatial intersections may be performed using available commercial or open-source software, for example ArcGIS, MongoDB.
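A minimal sketch of such a geospatial intersection, written here with the open-source Shapely library (one of many tools that could perform this step; ArcGIS and MongoDB are the examples named above), might look like the following. The polygon coordinates are hypothetical and stand in for the annotated map data.

    from shapely.geometry import Polygon
    from shapely.ops import unary_union

    # Hypothetical proximity region (a square here, though any shape is possible).
    proximity_region = Polygon([(0, 0), (0, 200), (200, 200), (200, 0)])

    # Hypothetical areas from the annotated map data where stopping is allowed,
    # already filtered for the stopped activity, vehicle type, time of day, etc.
    allowed_stopping_areas = [
        Polygon([(10, 0), (10, 20), (190, 20), (190, 0)]),     # curbside strip
        Polygon([(50, 150), (50, 180), (90, 180), (90, 150)]), # a driveway
    ]

    # The goal region is the part of the proximity region where stopping is allowed.
    goal_region = proximity_region.intersection(unary_union(allowed_stopping_areas))
    print(goal_region.area)  # an area of 0 would mean the goal region is empty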


Examples of such sub-regions include one or more of the following:


a. Certain areas such as loading zones might be used for cargo (we sometimes use the word cargo interchangeably with parcel) pickups and drop-offs, but not for passengers. Similarly, certain driveways might be usable for passengers but not for cargo. Such restrictions could be part of the annotated map data. Therefore, depending on the purpose of the stopped activity (which could be provided by the passenger or the AV system or another source), certain sub-regions can be excluded from the proximity region.


b. Some sub-regions might be excluded because the allowed stopping time in the stopping places in that sub-region is not sufficient to carry out the stopped activity. The estimated duration of the stopped activity can be influenced by a number of factors including one or more of the following. A pickup is generally slower than a drop-off as the passenger might need first to locate the AV and confirm his identity to the AV. Cargo stops might be slower than for passengers because of time needed to load or unload. Multiple passengers getting in or out of the AV might be slower than a single passenger. Certain sub-regions of the goal region might have restrictions (captured in the annotated map data) on how long a vehicle can stop there and therefore might not be suitable for some stopped activities. Independent configurable time parameters can be defined that relate to the expected time required for a pickup, drop-off, loading of cargo, unloading of cargo, and loading and unloading of one or multiple passengers or for other stopped activities. Certain sub-regions can then be excluded from the proximity region if the allowed stopping time in that sub-region is less than the expected stopping time of the stopping activity.


c. Some vehicle types (e.g., trucks or buses) might not be allowed to stop in certain zones. The vehicle type is known to the AV system. Certain sub-regions can then be removed from the proximity region if they correspond to zones where the AV is not allowed to stop.


d. Some large vehicles 240 (e.g., trucks or buses) might not physically fit into a certain zone, depending on the shape and size of the vehicle relative to that of the zone. To ascertain this, an AV footprint area (normally a rectangle) 242 is defined by the AV system corresponding to the footprint of the AV with a margin added in each dimension to account for overhangs and errors and provide a safety margin. Any sub-region of the proximity region that is unable to fully contain this AV footprint area is then excluded from the proximity region.


e. Even though some part of the proximity region, say a parking space at a particular stopping place, is able to accommodate the AV footprint area, the driving approaches to reach that space may be too narrow for the AV footprint area. In other words, the stopping place is not a feasible stopping place because no feasible path exists for the AV to reach that stopping place. This situation is accommodated as part of the trajectory planning process described later.


A proximity region in the vicinity of the road network can be annotated to exclude sub-regions that relate to the particular stopped activity, the AV, or the time of day.


An example of a goal region 250 is shown in a checkerboard pattern in FIG. 6. The exclusion of sub-regions from the proximity region yields the goal region 250.


Although the goal region or the proximity region can be defined as a mathematical set having potentially an infinite number of potential stopping places 252, in some implementations this can be discretized into a set having a finite number of potential stopping places using a number of well-known discretization strategies (e.g., random sampling, uniform sampling). These sampling strategies would yield a finite number of points within the proximity region. A stopping place could be constructed around each of these sampled points, for example, by drawing a rectangle corresponding to the AV's footprint around the sampled point. The sampled point could be the centroid of the rectangle. The size of the rectangle would have to be sufficient to accommodate the footprint of the AV including space for overhangs or a safety buffer on all sides. The orientation of the rectangle would be determined by the direction of the traffic flow at that point. For example, if the sampled point is on a traffic lane, the vehicle would have to stop in the direction of the traffic flow. The rectangle would be oriented accordingly and would be characterized by its size, its boundaries, and its orientation. If the rectangle thus specified falls entirely within the proximity region, the stopping place that it represents would be considered to be a part of the proximity region.
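The following Python sketch (again using the Shapely library) illustrates one possible discretization of the kind just described: sample points within the region, build an oriented footprint rectangle around each point, and keep only rectangles that fall entirely within the region. The footprint dimensions, the sampling density, and the heading_at helper are illustrative assumptions.

    import random
    from shapely.geometry import Polygon, box
    from shapely.affinity import rotate

    FOOTPRINT_LENGTH_M = 5.5   # vehicle length plus overhang/safety buffer (assumed)
    FOOTPRINT_WIDTH_M = 2.5    # vehicle width plus buffer (assumed)


    def footprint_rectangle(x, y, heading_deg):
        """Rectangle centered on the sampled point, oriented with the traffic flow."""
        rect = box(x - FOOTPRINT_LENGTH_M / 2, y - FOOTPRINT_WIDTH_M / 2,
                   x + FOOTPRINT_LENGTH_M / 2, y + FOOTPRINT_WIDTH_M / 2)
        return rotate(rect, heading_deg, origin=(x, y))


    def sample_stopping_places(region: Polygon, heading_at, n_samples=500):
        """Uniform random sampling of candidate stopping places within the region.

        heading_at(x, y) is assumed to return the legal stopping heading in degrees
        at that point, derived from the annotated map data's traffic directions.
        """
        minx, miny, maxx, maxy = region.bounds
        places = []
        for _ in range(n_samples):
            x = random.uniform(minx, maxx)
            y = random.uniform(miny, maxy)
            rect = footprint_rectangle(x, y, heading_at(x, y))
            if region.contains(rect):   # keep only footprints entirely inside the region
                places.append(rect)
        return places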


As mentioned above, each potential stopping place in the goal region is associated with a direction in which it is legal for the leading end of the AV to point when stopped. This direction is inferred from the annotated map data which specifies legal driving directions for all parts of the roads and other drivable features, including traffic lanes.



FIG. 7 illustrates a few sample stopping places 251 (rectangles with arrows pointing in the direction that the leading end of the AV must face). The direction of a stopping place is inferred from the direction of travel 253 in the lanes which are shown with dashed arrows. A goal region might have an infinite number of valid stopping places and the stopping places shown in FIG. 7 are merely a sample.


As noted before, all stopping places within the goal region are considered acceptable stopping places, though they may not all be feasible stopping places or equally desirable.


If the goal region has no acceptable stopping places, i.e., it is empty, then the system may expand the size of the proximity region and re-calculate the goal region. This expansion may be done automatically by the AV system, or by presenting the option to the passenger through the user interface, or by a combination of both.


As shown in FIGS. 9 and 10, in an example of a user interface 400 for presenting options to the passenger, if the initial proximity region 410 comprised stopping places that are within a certain distance, say 100 meters, of the goal position 420, the goal region 430 may be empty (i.e., it contains no acceptable stopping places). The passenger could be prompted (through a smartphone app that was used for booking the ride, or through a touch-screen inside the car in the case of a stop in which the stopped activity is to drop off a passenger, for example) and asked if, for example, double the walking distance associated with the initial proximity region would be acceptable to the passenger, as shown in the user prompt 440. In some cases, the user interface could allow the passenger to specify the maximum walking distance that is acceptable to the passenger. In some instances, the user interface could allow the passenger to specify the proximity region more directly, for instance by allowing the passenger to draw the region or by providing a mechanism for the passenger to relocate the boundaries of the region, for example using a boundary expansion tool 450 (which is an example of a touch-based drag and drop tool). The expanded proximity region 460 and the expanded goal region 470 may also be shown to the passenger.


The expansion may be subject to some upper limit on the size of the proximity region. It is possible that despite expansion up to the upper limit, the goal region may still be empty. This could happen, for example, if the goal position is in the middle of a large field or a military installation. In these cases, the passenger will be informed through the user interface that no acceptable stopping places may be found in the vicinity of the goal position, and the passenger may be requested to select a different goal position. FIG. 15 shows a flowchart of the activities involved in an example of the process used if the goal region is empty.
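One way the expansion logic described above could be realized is sketched below in Python. The doubling factor, the upper limit, and the function names are assumptions made for illustration; in practice the expansion could be driven automatically, by passenger input through the user interface, or by a combination of both.

    def find_goal_region(goal_position, initial_distance_m, max_distance_m,
                         compute_goal_region, ask_passenger_to_expand):
        """Grow the proximity region until the goal region is non-empty or the limit is hit.

        compute_goal_region(goal_position, distance) -> set of acceptable stopping places
        ask_passenger_to_expand(distance)            -> True if the passenger accepts the
                                                        larger walking distance
        """
        distance = initial_distance_m
        while distance <= max_distance_m:
            goal_region = compute_goal_region(goal_position, distance)
            if goal_region:                        # non-empty: acceptable stopping places exist
                return goal_region
            if not ask_passenger_to_expand(2 * distance):
                break                              # passenger declined a larger region
            distance *= 2                          # e.g., double the walking distance
        # No acceptable stopping place within the allowed limit;
        # inform the passenger and request a different goal position.
        return None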


Not all stopping places are equally desirable for stopping. The desirability of a stopping place depends on, for example, quantifiable factors such as, but not limited to, the following:


1. Walking distance to the goal position from the stopping place. This distance 256 (FIG. 4) is computed by the AV system using one or a combination of distance metrics, such as the Euclidean distance, Manhattan distance, or another metric, or using exact information about walking paths obtained from the map data. Generally, a stopping place that has a shorter walking distance is more desirable.


2. Covered walking distance to a goal position from the stopping place (i.e., walking distance with protection from the weather). Again, this covered distance 258 can be computed using one or a combination of distance metrics, such as the Euclidean distance, Manhattan distance, or another metric. Generally, a stopping place that has a shorter covered distance is more desirable. In some scenarios a covered walking path or data about the coverage of walking paths might not exist.


3. Clear sightline to the goal position, because it is generally desirable for a passenger at the goal position to be able to see a stopped AV based on its stopping place. The presence of a clear sightline 260 from the stopping place under consideration to a passenger at the goal position can be inferred given a 3D model of the local environment that includes the dimensions and locations of buildings and other objects, such as is available from sources such as Google Earth.


4. Distance from the curb. Stopping places that are closer to curbs are generally preferred as they allow the passenger to access the AV more easily. Information about curbs may be part of the annotated map data or may be perceived by sensors on the AV.


5. Type of road. Generally, it is desirable for the AV to stop on a road that has fewer lanes or has a lower speed limit or both. The lane configuration and speed limit information may be part of the annotated map data.


6. Expected or actual traffic. Generally, it is desirable for the AV to pick up or drop off a passenger or parcel on a road that is less trafficked as determined, for example, by AV system analysis of historical traffic data or data 270 collected by sensors mounted on the AV or both.


7. Designated stopping area. Generally, it is desirable for the AV to stop at a pre-defined stopping zone such as a taxi stand, a hotel pick-up and drop-off zone, a loading zone or other pre-defined stopping zone, compared to stopping in the travel lane of a road.


Given that the desirability of a stopping place depends on a variety of factors of different types, it is useful to combine them in a way that facilitates comparisons of the relative desirability of different stopping places. One or a combination of two or more of the following approaches can be used by the AV system in taking account of the factors:


1. The AV system creates a generalized cost or utility function 274 for stopping places that normalizes all factors (using calibrated weights or scaling factors) to create a single cost or utility value 276 associated with each stopping place. These costs or utilities, being numbers, can then be directly compared. The cost function can be continuous (e.g., the Euclidean distance from the stopping place to the goal position) or binary (e.g., a certain non-zero cost if a sightline between the stopping place and destination point does not exist, and a cost of zero if a sightline does exist). The cost can also depend on the type of stopped activity, e.g., whether it is a pickup or drop-off of a passenger, multiple passengers, or a parcel.


2. The AV system converts each factor to a 0-1 range and applies fuzzy logic rules to compare stopping places.


3. The AV system uses a prioritized comparison, in which factors that are defined to be more important are compared before factors that are less important. This kind of prioritized comparison can also impose minimum and maximum values on each factor to ensure that the selected stopping place meets the maximum and minimum requirements for each factor.


4. The AV system creates ranks such that a stopping place with a higher rank is more desirable than a stopping place with a lower rank. The goal region might be subdivided into sub-regions each with its own rank, and all stopping places within a sub-region could share the same rank, i.e., desirability.


As shown in FIG. 4, based on the previous activities the AV system can determine and store the goal region 250, which contains a set of stopping places where stopping is acceptable given the nature of the stopped activity and the AV. A desirability index or utility or cost or rank (i.e., a measure of desirability) can be associated with each stopping place in the goal region. FIG. 11 shows a flowchart of the activities involved in an example of the process of defining a goal region.
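To make the combination of desirability factors concrete, here is a minimal Python sketch of a weighted cost function along the lines of approach 1 above. The factor names, the weights, and the normalization to a 0 to 1 range (0 best, 1 worst) are hypothetical; a deployed system would calibrate them.

    from typing import Dict

    # Assumed calibration: a larger weight means the factor matters more.
    WEIGHTS: Dict[str, float] = {
        "walking_distance": 3.0,
        "covered_walking_distance": 1.0,
        "no_sightline": 2.0,         # binary factor: 1.0 if no clear sightline, else 0.0
        "distance_from_curb": 1.5,
        "road_speed_limit": 1.0,
        "expected_traffic": 2.0,
        "not_designated_area": 0.5,  # binary factor: 1.0 if not a designated stopping zone
    }


    def stopping_place_cost(normalized_factors: Dict[str, float]) -> float:
        """Lower cost = more desirable stopping place."""
        return sum(WEIGHTS[name] * value for name, value in normalized_factors.items())


    # Usage: compare two hypothetical stopping places.
    place_a = {"walking_distance": 0.2, "covered_walking_distance": 0.9, "no_sightline": 0.0,
               "distance_from_curb": 0.1, "road_speed_limit": 0.3, "expected_traffic": 0.4,
               "not_designated_area": 1.0}
    place_b = {"walking_distance": 0.6, "covered_walking_distance": 0.2, "no_sightline": 1.0,
               "distance_from_curb": 0.2, "road_speed_limit": 0.3, "expected_traffic": 0.2,
               "not_designated_area": 0.0}
    print(stopping_place_cost(place_a) < stopping_place_cost(place_b))  # True: A is preferred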


The AV system also maintains what we term an availability layer 282 that can be thought of as an overlay on the map's potential stopping places. This availability layer of the annotated map data identifies to the AV system, for each potential stopping place, whether the potential stopping place is a feasible stopping place.


If no prior information on the feasibility of stopping places is available, the availability layer can be initialized by assuming that all potential stopping places are feasible stopping places. Prior data about stopping places 284, if available, for example, from previous trips or from other vehicles or from sensor infrastructure, can be used to initialize the layer with stopping places expected to be feasible, while identifying those that are not expected to be feasible.


The availability layer is continually updated in real time as, for example, the AV perceives new information from its sensors. This information can come, for example, from a variety of sensors such as LIDAR, Radar, ultrasonic, video camera, IR, etc., which allow the AV to determine the shape and position of objects in its environment.


For example, LIDAR data allows an AV to find other vehicles in its environment. The areas occupied by those vehicles (along with a buffer assumed to be around each of the vehicles to account for sensing error or vehicle overhangs) can then be removed from the availability layer as these areas are not available to the AV for stopping. Similarly, objects such as traffic cones or road signs (denoting construction work) can be detected using a combination of LIDAR and video cameras. Sometimes an object can be detected but cannot be classified, for example, a fallen tree or construction debris. In these cases, the AV system may note that the related area is not available and therefore remove it from the availability layer. In some cases, an object may be detected but does not represent an obstacle to stopping (for example, a moving pedestrian). In those cases, the corresponding stopping places can be retained in the availability layer.



FIG. 8 shows an example of what an AV 300 that is approaching its goal position 302 perceives and how the availability layer 282 is updated as a result of that perception. The checkerboard pattern 306 denotes the part of the goal region that includes only stopping places that are feasible.


The availability layer is updated frequently, using both the AV's own perception data and external information. The frequent updating is important because the AV can stop only at a stopping place 308 that it is currently perceiving as being feasible. A stopping place that was assumed to be feasible (because of previous perception or information from another vehicle, etc.) might not actually be available (e.g., feasible) when the AV approaches that stopping place and sees for “itself”. Similarly, a stopping place that was previously thought to be unavailable (and therefore infeasible) might actually be available (and therefore feasible) when the AV perceives it directly.


In addition to the last known feasibility status, the availability layer in the annotated map data could also store, for each potential stopping place:


1. The time of the most recent update of the stopping place status 310, as it is a measure of the current accuracy of the information. A stopping place that was reported to be infeasible two minutes ago is more likely to remain infeasible than a stopping place that was reported to be infeasible twenty minutes ago.


2. The reason for the stopping place's infeasibility 312. For example, if the stopping place was infeasible because of the presence of another car, then the AV system might expect it to become available later. On the other hand, if the stopping place was infeasible because of construction works, the AV system might not expect that stopping place to become feasible for the rest of the day or for several days. Potentially the AV system could modify the map to reflect that circumstance.


The likelihood that a stopping place that is marked as infeasible in the availability layer might become feasible by the time the AV reaches that stopping place (and vice versa) depends on several factors, including among others: freshness of the information (time elapsed since last update); historical statistics on the level of demand for parking relative to supply in that area at that time of the day; the reason for the infeasibility of that stopping place; and the current traffic volumes around that stopping place (derived from the AV's perception system, potentially supplemented by information from other AVs or sensors). The AV system could use a statistical model 314 that predicts the expected feasibility state of a stopping place (or a similar metric), given some or all of the data points, along with a confidence bound for that estimate. Such a metric could contribute to the calculation of the desirability value of a potential stopping place. A stopping place that is more likely to be available is more desirable than an equivalent stopping place that is less likely to be available.
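As an illustration only, the availability-layer bookkeeping and a very simple stand-in for the statistical model described above might look like the following Python sketch. The record fields, the exponential decay of trust in a stale observation, and the half-life constants are assumptions made for the sketch, not the specified model.

    import math
    import time
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class AvailabilityEntry:
        """Per-stopping-place record kept in the availability layer."""
        feasible: bool                  # last known feasibility status
        updated_at: float               # time of the most recent update (epoch seconds)
        reason: Optional[str] = None    # e.g., "parked vehicle", "construction works"

        def expected_feasibility(self, now: Optional[float] = None,
                                 half_life_s: float = 300.0) -> float:
            """Probability-like score in [0, 1] that the place is feasible right now.

            The last observation is trusted fully when fresh and decays toward an
            uninformative 0.5 as it ages; long-lived causes such as construction
            are assumed to persist and therefore decay more slowly.
            """
            now = time.time() if now is None else now
            if self.reason == "construction works":
                half_life_s *= 20.0   # construction tends to persist much longer
            age = max(0.0, now - self.updated_at)
            trust = math.exp(-math.log(2.0) * age / half_life_s)  # 1.0 fresh -> 0.0 stale
            last = 1.0 if self.feasible else 0.0
            return trust * last + (1.0 - trust) * 0.5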


The availability layer can be updated using information received from other AVs (either directly or through a central cloud server). Therefore, as part of an interconnected fleet of AVs or manually driven vehicles that are equipped with V2V (vehicle-to-vehicle) communication capabilities, the AV might have foreknowledge of which stopping places are available without the AV actually having seen them. The availability layer can also be updated using information from sensors that are fixed (e.g., sensors inside parking garages or sensors monitoring city parking spaces), from crowd-sourced data (e.g., apps such as Waze on which people report construction work, etc.) and from a variety of other sources.


Typically, the AV system executes a trajectory planning process 326 as part of its autonomous capability, which attempts to identify a trajectory from the AV's current position to a specified destination on the drivable area of a map. The result of the trajectory planning process is a continuously updated selected stopping place in the goal region, and a feasible trajectory to reach the currently selected stopping place, if one exists. We emphasize that the trajectory planning process routes the vehicle to the currently selected stopping place, and not the goal position specified by the user or the system, as that may not represent an acceptable and feasible stopping place.


If no feasible trajectory exists to the currently selected stopping place, the trajectory planning process updates its choice of selected stopping place. This may happen, for example, if the approaches to the currently selected stopping place are blocked or cannot accommodate the AV. This trajectory planning process continues as long as the AV has not stopped at the currently selected stopping place. FIG. 12 shows a flowchart of the activities involved in an example of the process of trajectory planning.


The trajectory planning process is executed simultaneously and asynchronously with the AV's perception process 328. As the perception process updates the availability layer, the currently selected stopping place may become infeasible, for example, because some other vehicle has now occupied that stopping place. In addition, it is possible that a more desirable stopping place within the goal region that was previously unavailable is now available and therefore feasible. Therefore, the trajectory planning process may be forced to update its selected stopping place and the corresponding trajectory to that selected stopping place, multiple times.


The stopping place in the goal region that is selected by the trajectory planning process depends on: (1) the feasibility of the acceptable stopping places within the goal region, as determined in real time by the availability layer via the perception process; (2) the relative desirability of the acceptable stopping places within the goal region, as computed from static or real-time data or both; and (3) the optimization objective of the algorithm, which determines the trade-off between coming to a stop sooner and spending more time to find a more desirable stopping place.


Some examples of strategies are:


1. Stop at the first feasible stopping place in the goal region (often called the “greedy” approach).


2. Stop at the most desirable among the currently feasible stopping places in the goal region.


3. If the AV has not stopped at a stopping place within a specified amount of time, stop searching for the most desirable stopping place, and instead find the first available stopping place in the goal region and stop there.


These objectives can be used to effect a trade-off between the quality of the selected stopping place and the time and effort spent searching for it.
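

The three example strategies can be captured in a short selection routine, sketched below under the assumptions that `feasible_places` is ordered by the AV's progress through the goal region (so its first element is the first reachable place) and that the place dictionaries, strategy labels, and default time budget are illustrative.

```python
import time
from typing import Optional, Sequence


def select_stopping_place(feasible_places: Sequence[dict],
                          search_started_at: float,
                          strategy: str = "best",
                          time_budget_s: float = 120.0) -> Optional[dict]:
    """Apply one of the three example objectives to the currently feasible places.

    "greedy"  -> stop at the first feasible place in the goal region
    "best"    -> stop at the most desirable currently feasible place
    "timeout" -> behave like "best" until the time budget expires, then like "greedy"
    """
    if not feasible_places:
        return None
    if strategy == "greedy":
        return feasible_places[0]
    if strategy == "timeout" and time.monotonic() - search_started_at > time_budget_s:
        return feasible_places[0]  # stop searching; take the first available place
    return max(feasible_places, key=lambda p: p["desirability"])
```

For instance, `select_stopping_place(places, search_started_at=start, strategy="timeout")` behaves like strategy 2 until the budget expires and like strategy 1 afterwards.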


As shown in FIG. 13, in some cases, the AV system may ask the user to choose a stopping place from among a set of stopping places. A touch-based user interface 500 can show the passenger a choice of, for example, three acceptable and feasible stopping places (A, B and C) 520 that are within the proximity region defined around the goal position 510. The user may select one of the three stopping places by touching it. In some instances, the user may be able to select the desired stopping place using a voice command that references the stopping place name, for example, "Stop at A". As mentioned before, the goal region may contain an infinite number of potential stopping places or a finite number that is still too large to present to the passenger. Therefore, the AV system may choose a limited (likely pre-specified) number of stopping places to present to the passenger. These may be stopping places that are relatively desirable and relatively more likely to be available (e.g., their status was updated relatively recently) while being sufficiently different from each other (e.g., non-overlapping).
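

One way the AV system might winnow the candidates down to a presentable set is sketched below. The dictionary fields and the shapely-style `intersects()` footprint test are assumptions for illustration; the description only asks that the presented places be relatively desirable, recently confirmed, and sufficiently different from one another.

```python
from typing import List


def places_to_present(candidates: List[dict], max_options: int = 3) -> List[dict]:
    """Pick a small set of acceptable, feasible stopping places to show the passenger."""
    # Prefer more desirable places; break ties in favor of more recently updated status.
    ranked = sorted(candidates,
                    key=lambda c: (c["desirability"], -c["minutes_since_update"]),
                    reverse=True)
    chosen: List[dict] = []
    for candidate in ranked:
        # Require the presented options to be sufficiently different (non-overlapping).
        if all(not candidate["footprint"].intersects(sel["footprint"]) for sel in chosen):
            chosen.append(candidate)
        if len(chosen) == max_options:
            break
    return chosen
```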


The user input may be optional, in that the AV system may automatically choose a stopping place if the user does not make a choice within a specified amount of time. When the AV system chooses a stopping place automatically, the selected stopping place may be communicated to the passenger, for example, on a map-based interface on the passenger's mobile device or on a display located in the vehicle. Further, the passenger may be given the choice of changing the system-selected stopping place to one of the passenger's own choosing through an interface, such as the one described for FIG. 13.


As the AV system explores the goal region to select a stopping place, the AV system can revisit stopping places that had been perceived as infeasible in the hope that they might now be feasible.


The trajectory planning process can communicate with a passenger to keep her informed of the AV's progress. A passenger can be informed when the AV has found an acceptable stopping place, when the AV has stopped at a stopping place, or both.


If the AV is unable to stop at an acceptable, feasible stopping place within a specified amount of time, the AV system may adopt strategies to deal with that situation.



FIG. 14 shows a flowchart of the activities involved in an example of the process of expanding the analysis if the AV cannot come to a stop, including the following:


1. Expand the goal region around the goal position, with the understanding that this might require the passenger to walk a greater distance. This may be done automatically by the AV system. The AV system may have stored a specified maximum distance from the goal position that is acceptable for the goal region. The initial proximity region (and goal region) might include only stopping places that are significantly closer to the goal position than this specified maximum distance, to enable the AV system to find as close a stopping place as possible. If the initial search does not yield a feasible stopping place, the proximity region (and correspondingly the goal region) may be incrementally expanded to include stopping places that are farther from the goal position but still within the specified maximum distance (a sketch of this incremental expansion appears after this list). The passenger may provide information or choices that control the expansion of the proximity region (and correspondingly the goal region) in response to requests posed to the passenger through the user interface (such as a smartphone or tablet located in the car or belonging to the user). One piece of information that may be specified by the user, for example, is the maximum distance (or other distance measure) from the goal position that is acceptable to the user. If the system is unable to find a stopping place within this distance, the user may be prompted to increase the specified distance, subject to some specified limit. In some implementations, a combination of processes running on the AV system and input from the passenger can be used to control the expansion of the regions.


2. Search the existing goal region again for a specified amount of time in the hope that a stopping place that was previously unavailable is now available. The user may be given the option, via a user interface located in the car or on a device belonging to the user, to allow the AV more time to search the goal region, subject to some specified limit.


3. Change the AV's goal position to a new goal position where an acceptable, feasible stopping place may be more easily found. This may be done by the passenger using a user interface similar to what was used to specify the initial goal position. The passenger may have, for example, the option of dropping a pin on a map or typing out an address or searching a location service such as Google Maps.


4. The AV may switch to a tele-operation or remote operation mode in which control of the operation of the AV switches, partially or fully, from the AV system to a human operator, typically located at a central operations control center. Tele-operation systems typically stream live data from the AV to the remote location using wireless transmitters located on the AV. This data may include, but is not limited to, some combination of: (1) raw data from the sensors onboard the AV (for example, a live video stream from the on-board cameras), (2) processed data from the computers on board the AV (for example, data about detected and classified objects, or a rendering of the world around the AV created by fusing data from multiple sensors), (3) data from other systems (for example, outputs from the trajectory planning process), (4) data about the actuators on the AV (such as the throttle, brake, and steering), (5) data about the current position, speed, acceleration, and orientation of the car from the localization system, and (6) data from the AV's health monitoring system (for example, sensor health, battery status, etc.). An illustrative sketch of a telemetry frame covering these six categories appears after this list. This data is typically viewed by a tele-operator or other remote operator at the operations control center, who can then decide how to drive and otherwise control the AV. The tele-operator may have the ability to control the AV by directly providing inputs to the steering, throttle, brake, and other actuators (for example, in the manner in which driver training simulators function). The tele-operator may also have the ability to directly control the trajectory planning process by manually selecting a goal position for the vehicle or by influencing the trajectory to the goal position (by specifying the entire trajectory, by providing waypoints, or by some other method). The transition from autonomous to tele-operation mode should be handled with care. Such a transition would normally take place when the AV is not moving, although the AV may not be stopped at an acceptable stopping place. If there is a passenger in the AV, the passenger may be informed about, or asked to approve, the remote operation. This passenger communication may take place through a user interface in a smartphone belonging to the passenger, a smartphone or tablet located in the AV, or some other such device.


5. Providing the passenger the option of switching the AV from an autonomous mode to a partially or fully manual mode (if the AV has a manual mode and there is a passenger in the AV who is legally authorized to drive, willing to drive, and seated in the driver's seat) so that the passenger may locate an acceptable, feasible stopping place. The transition from autonomous to manual mode should be handled with care. A transition would normally take place when the AV is not moving, although the AV may not be stopped at an acceptable stopping place. The passenger may be informed about the option, and her approval sought, through a user interface in a smartphone belonging to the passenger, a smartphone or tablet located in the AV, or some other such device. Furthermore, the passenger may be required to ensure that the manually chosen stopping place is one from which the AV can resume autonomous operation after the passenger has exited the AV. A tele-operator may take control of the AV after the passenger has exited the AV and bring the AV to a position at which autonomous mode may be engaged.


6. In the case of a passenger in the AV, returning toward the known start position and offering to stop at the first feasible stopping place as it travels back toward the start position. This stopping place may or may not be within the goal region.


7. In the case of a parcel in the AV, returning toward the known start position and alerting the sender of the package to unload it from the AV, or giving the sender, through a user interface such as a smartphone app, the option of specifying an alternative time at which the delivery will be attempted.


8. Aborting the stop. If the stopping activity involves a pickup, the AV system may inform the passenger that a stopping place cannot be found and that the pickup request has been canceled.
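

Returning to strategy 1 above, the incremental widening of the proximity region (and goal region) can be sketched as a simple loop. The radius defaults and the `find_feasible_places` callback are illustrative assumptions, not values taken from this description.

```python
from typing import Callable, Optional, Sequence


def expand_goal_region(find_feasible_places: Callable[[float], Sequence[object]],
                       initial_radius_m: float = 50.0,
                       step_m: float = 50.0,
                       max_radius_m: float = 400.0) -> Optional[Sequence[object]]:
    """Incrementally widen the proximity/goal region around the goal position.

    `find_feasible_places(radius)` is assumed to query the availability layer for
    acceptable, feasible stopping places within `radius` meters of the goal position.
    """
    radius = initial_radius_m
    while radius <= max_radius_m:
        places = find_feasible_places(radius)
        if places:
            return places  # the caller can now rank these by desirability
        radius += step_m   # widen the search; the passenger may walk a bit farther
    return None            # allowed distance exhausted; move on to another strategy
```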
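

As a companion to strategy 4 above, the following dataclass sketches one possible shape for the streamed data; the field names and types are assumptions that mirror the six listed categories, not a wire format defined in this description.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class TeleopFrame:
    """One telemetry frame streamed from the AV to the operations control center."""
    camera_jpeg: Optional[bytes] = None                                        # (1) raw sensor data
    tracked_objects: List[Dict[str, Any]] = field(default_factory=list)        # (2) perception output
    planned_trajectory: List[Dict[str, float]] = field(default_factory=list)   # (3) planner output
    actuator_state: Dict[str, float] = field(default_factory=dict)             # (4) throttle, brake, steering
    pose: Dict[str, float] = field(default_factory=dict)                       # (5) position, speed, heading
    health: Dict[str, str] = field(default_factory=dict)                       # (6) sensor health, battery status
```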


Other implementations are also within the scope of the following claims.

Claims
  • 1. A computer-implemented method comprising: receiving, by one or more processors, static map data for a region, wherein the static map data identifies one or more potential stopping places for a vehicle within the region; maintaining, by the one or more processors, stored data indicative of one or more currently feasible stopping places for the vehicle within the region, wherein the one or more currently feasible stopping places are a subset of the one or more potential stopping places, and wherein at least one potential stopping place of the one or more potential stopping places is determined to be a currently feasible stopping place based on: an amount of time elapsed since the potential stopping place was determined to be infeasible for stopping exceeding a first threshold value, a reason for the determination that the potential stopping place is infeasible for stopping, and at least one of a historical level of demand for parking in a vicinity of the potential stopping place being less than a second threshold value or traffic volume in the vicinity of the potential stopping place being less than a third threshold value; receiving from one or more sensors or one or more other sources current signals representing perceptions of actual conditions at the one or more currently feasible stopping places; updating, by the one or more processors, the stored data based on the perceptions of actual conditions to include one or more updated currently feasible stopping places; and exposing, by the one or more processors, the updated stored data to a process that selects a stopping place for the vehicle from among the one or more updated currently feasible stopping places.
  • 2. The method of claim 1 comprising: discretizing, by the one or more processors, the one or more potential stopping places as a finite number of points within the region.
  • 3. The method of claim 2 comprising: defining, by the one or more processors, the potential stopping place as a shape containing at least one of the points, the potential stopping place configured to accommodate a footprint of the vehicle.
  • 4. The method of claim 3 comprising: attributing, by the one or more processors, an orientation to the shape, the orientation corresponding to a direction of traffic flow at the potential stopping place.
  • 5. The method of claim 2 comprising: initializing, by the one or more processors, the one or more potential stopping places as one or more stopping places expected to be feasible based on prior signals from the one or more sensors, the prior signals representing past perceptions of past actual conditions at some of the one or more potential stopping places.
  • 6. The method of claim 1 in which the one or more sensors comprise at least one sensor that is physically located on the vehicle.
  • 7. The method of claim 1 in which the one or more sensors comprise at least one sensor that is physically remote from the vehicle, wherein the at least one sensor is located inside a parking garage.
  • 8. The method of claim 1 in which the current signals received from the one or more sensors are received through vehicle-to-vehicle or vehicle-to-infrastructure communication.
  • 9. The method of claim 1 in which the one or more other sources comprise crowd-sourced data sources.
  • 10. The method of claim 1 in which the vehicle is part of a fleet of vehicles managed from a central server and the method comprises the server distributing information received from sensors at one of the vehicles to other vehicles of the fleet.
  • 11. An autonomous vehicle, comprising: one or more processors; one or more sensors; and one or more data storage devices including instructions that, when executed by the one or more processors, cause the autonomous vehicle to perform functions comprising: receiving static map data for a region, wherein the static map data identifies one or more potential stopping places for a vehicle within the region; maintaining stored data indicative of one or more currently feasible stopping places for the vehicle within the region, wherein the one or more currently feasible stopping places are a subset of the one or more potential stopping places, and wherein at least one potential stopping place of the one or more potential stopping places is determined to be a currently feasible stopping place based on: an amount of time elapsed since the potential stopping place was determined to be infeasible for stopping exceeding a first threshold value, a reason for the determination that the potential stopping place is infeasible for stopping, and at least one of a historical level of demand for parking in a vicinity of the potential stopping place being less than a second threshold value or traffic volume in the vicinity of the potential stopping place being less than a third threshold value; receiving from the one or more sensors or one or more other sources current signals representing perceptions of actual conditions at the one or more currently feasible stopping places; updating the stored data based on the perceptions of actual conditions to include one or more updated currently feasible stopping places; and exposing the updated stored data to a process that selects a stopping place for the vehicle from among the one or more updated currently feasible stopping places.
  • 12. The autonomous vehicle of claim 11, wherein the functions comprise: initializing the potential stopping places as all of the potential stopping places identified as part of the static map data for the region.
  • 13. The autonomous vehicle of claim 11, wherein the functions comprise: discretizing the potential stopping places as a finite number of points within the region corresponding to potential stopping places.
  • 14. The autonomous vehicle of claim 13, wherein the functions comprise: defining a potential stopping place as a shape containing one of the points, the shape corresponding to a footprint of the vehicle.
  • 15. The autonomous vehicle of claim 14, wherein the functions comprise: attributing an orientation to the shape, the orientation corresponding to a direction of traffic flow.
  • 16. The autonomous vehicle of claim 13, wherein the functions comprise: initializing the potential stopping places as potential stopping places expected to be feasible based on prior signals from the one or more sensors representing perceptions of actual conditions at one or more of the potential stopping places.
  • 17. The autonomous vehicle of claim 11 in which the current signals received from the one or more sensors are received through vehicle-to-vehicle or vehicle-to-infrastructure communication.
  • 18. A non-transitory computer readable medium storing instructions thereon that, when executed by one or more processors, cause the one or more processors to perform functions comprising: receiving static map data for a region, wherein the static map data identifies one or more potential stopping places for a vehicle within the region; maintaining stored data indicative of one or more currently feasible stopping places for the vehicle within the region, wherein the one or more currently feasible stopping places are a subset of the one or more potential stopping places, and wherein at least one potential stopping place of the one or more potential stopping places is determined to be a currently feasible stopping place based on: an amount of time elapsed since the potential stopping place was determined to be infeasible for stopping exceeding a first threshold value, a reason for the determination that the potential stopping place is infeasible for stopping, and at least one of a historical level of demand for parking in a vicinity of the potential stopping place being less than a second threshold value or traffic volume in the vicinity of the potential stopping place being less than a third threshold value; receiving from one or more sensors or one or more other sources current signals representing perceptions of actual conditions at the one or more currently feasible stopping places; updating the stored data based on the perceptions of actual conditions to include one or more updated currently feasible stopping places; and exposing the updated stored data to a process that selects a stopping place for the vehicle from among the one or more updated currently feasible stopping places.
US Referenced Citations (446)
Number Name Date Kind
4113046 Arpino Sep 1978 A
5128874 Bhanu et al. Jul 1992 A
5166668 Aoyagi Nov 1992 A
5521579 Bernhard May 1996 A
5913917 Murphy Jun 1999 A
6018806 Cortopassi et al. Jan 2000 A
6026347 Schuster Feb 2000 A
6067501 Vieweg May 2000 A
6126327 Bi et al. Oct 2000 A
6151539 Bergholz et al. Nov 2000 A
6188602 Alexander et al. Feb 2001 B1
6320515 Olsson Nov 2001 B1
6356961 Oprescu-Surcobe Mar 2002 B1
6546552 Peleg Apr 2003 B1
6768813 Nakayama Jul 2004 B1
6782448 Goodman et al. Aug 2004 B2
6836657 Ji et al. Dec 2004 B2
6947554 Freyman et al. Sep 2005 B2
6978198 Shi Dec 2005 B2
7007049 Peng Feb 2006 B2
7218212 Hu May 2007 B2
7260465 Waldis et al. Aug 2007 B2
7292870 Heredia et al. Nov 2007 B2
7350205 Ji Mar 2008 B2
7512516 Widmann Mar 2009 B1
7512673 Miloushev et al. Mar 2009 B2
7516450 Ogura Apr 2009 B2
7562360 Tai et al. Jul 2009 B2
7584049 Nomura Sep 2009 B2
7587433 Peleg et al. Sep 2009 B2
7642931 Sato Jan 2010 B2
7657885 Anderson Feb 2010 B2
7665081 Pavlyushchik Feb 2010 B1
7668871 Cai et al. Feb 2010 B1
7681192 Dietsch et al. Mar 2010 B2
7734387 Young et al. Jun 2010 B1
7802243 Feeser et al. Sep 2010 B1
7805720 Chang et al. Sep 2010 B2
7853405 Yamamoto Dec 2010 B2
7865890 Sumi et al. Jan 2011 B2
7890427 Rao et al. Feb 2011 B1
7904895 Cassapakis et al. Mar 2011 B1
7934209 Zimmer et al. Apr 2011 B2
7941656 Hans et al. May 2011 B2
8010959 Mullis et al. Aug 2011 B2
8078349 Prada Gomez et al. Dec 2011 B1
8095301 Kawamura Jan 2012 B2
8112165 Meyer et al. Feb 2012 B2
8145376 Sherony Mar 2012 B2
8146075 Mahajan Mar 2012 B2
8170739 Lee May 2012 B2
8229618 Tolstedt et al. Jul 2012 B2
8261256 Adler et al. Sep 2012 B1
8266612 Rathi et al. Sep 2012 B2
8271972 Braghiroli Sep 2012 B2
8326486 Moinzadeh et al. Dec 2012 B2
8375108 Aderton et al. Feb 2013 B2
8392907 Oshiumi et al. Mar 2013 B2
8397230 Ewington et al. Mar 2013 B2
8428649 Yan et al. Apr 2013 B2
8429643 Venkatachalam et al. Apr 2013 B2
8437890 Anderson et al. May 2013 B2
8457827 Ferguson et al. Jun 2013 B1
8468243 Ogawa et al. Jun 2013 B2
8495618 Inbaraj et al. Jul 2013 B1
8516142 Lee et al. Aug 2013 B2
8543261 Anderson et al. Sep 2013 B2
8549511 Seki et al. Oct 2013 B2
8578361 Cassapakis et al. Nov 2013 B2
8612153 Nomura et al. Dec 2013 B2
8612773 Nataraj et al. Dec 2013 B2
8676427 Ferguson et al. Mar 2014 B1
8706394 Trepagnier et al. Apr 2014 B2
8744648 Anderson et al. Jun 2014 B2
8781707 Kawawa et al. Jul 2014 B2
8781715 Breed Jul 2014 B2
8813061 Hoffman et al. Aug 2014 B2
8880270 Ferguson et al. Nov 2014 B1
8880272 Ferguson et al. Nov 2014 B1
8996234 Tamari et al. Mar 2015 B1
9008961 Nemec et al. Apr 2015 B2
9045118 Taguchi et al. Jun 2015 B2
9070305 Raman et al. Jun 2015 B1
9081383 Montemerlo et al. Jul 2015 B1
9090259 Dolgov et al. Jul 2015 B2
9096267 Mudalige et al. Aug 2015 B2
9097549 Rao et al. Aug 2015 B1
9110196 Urmson et al. Aug 2015 B2
9120485 Dolgov Sep 2015 B1
9128798 Hoffman et al. Sep 2015 B2
9139199 Harvey Sep 2015 B2
9176500 Teller et al. Nov 2015 B1
9187117 Spero et al. Nov 2015 B2
9188982 Thomson Nov 2015 B2
9196164 Urmson et al. Nov 2015 B1
9202382 Klinger et al. Dec 2015 B2
9218739 Trombley et al. Dec 2015 B2
9243537 Ge Jan 2016 B1
9314924 Laurent et al. Apr 2016 B1
9321461 Silver Apr 2016 B1
9348577 Hoffman et al. May 2016 B2
9349284 Cudak et al. May 2016 B2
9354075 Kim May 2016 B2
9365213 Stennethe et al. Jun 2016 B2
9399472 Minoiu-Enache Jul 2016 B2
9412280 Zwillinger et al. Aug 2016 B1
9493158 Harvey Nov 2016 B2
9519290 Kojo Dec 2016 B2
9523984 Herbach Dec 2016 B1
9534910 Okumura Jan 2017 B2
9547307 Cullinane Jan 2017 B1
9547986 Curlander et al. Jan 2017 B1
9557736 Silver Jan 2017 B1
9568915 Berntorp et al. Feb 2017 B1
9587952 Slusar Mar 2017 B1
9594373 Solyom et al. Mar 2017 B2
9600768 Ferguson Mar 2017 B1
9606539 Kentley et al. Mar 2017 B1
9625261 Polansky Apr 2017 B2
9625906 Meuleau et al. Apr 2017 B2
9645577 Frazzoli et al. May 2017 B1
9648023 Hoffman et al. May 2017 B2
9671785 Bhatia et al. Jun 2017 B1
9682707 Silver Jun 2017 B1
9898008 Xu et al. Feb 2018 B2
9910440 Wei et al. Mar 2018 B2
20030043269 Park Mar 2003 A1
20030060973 Mathews et al. Mar 2003 A1
20030112132 Trajkovic et al. Jun 2003 A1
20030125864 Banno et al. Jul 2003 A1
20030125871 Cherveny et al. Jul 2003 A1
20040054995 Lee Mar 2004 A1
20040093196 Hawthorne et al. May 2004 A1
20040167702 Isogai et al. Aug 2004 A1
20050065711 Dahlgren et al. Mar 2005 A1
20050093720 Yamane et al. May 2005 A1
20050134710 Nomura et al. Jun 2005 A1
20050143889 Isaji et al. Jun 2005 A1
20050206142 Prakah-Asante et al. Sep 2005 A1
20050273256 Takahashi Dec 2005 A1
20050283699 Nomura et al. Dec 2005 A1
20060103590 Divon May 2006 A1
20060174240 Flynn Aug 2006 A1
20060195257 Nakamura Aug 2006 A1
20060217939 Nakata et al. Sep 2006 A1
20060242206 Brezak et al. Oct 2006 A1
20070001831 Raz et al. Jan 2007 A1
20070055446 Schiffmann et al. Mar 2007 A1
20070061074 Safoutin Mar 2007 A1
20070061779 Dowedeit et al. Mar 2007 A1
20070087756 Hoffberg Apr 2007 A1
20070124029 Hattori May 2007 A1
20070142995 Wotlermann Jun 2007 A1
20070162905 Kooijmans Jul 2007 A1
20070185624 Duddles et al. Aug 2007 A1
20070225900 Kropp Sep 2007 A1
20070226726 Robsahm Sep 2007 A1
20070229310 Sato Oct 2007 A1
20070253261 Uchida et al. Nov 2007 A1
20070255764 Sonnier et al. Nov 2007 A1
20070265767 Yamamoto Nov 2007 A1
20080001919 Pascucci Jan 2008 A1
20080005733 Ramachandran et al. Jan 2008 A1
20080046174 Johnson Feb 2008 A1
20080071460 Lu Mar 2008 A1
20080134165 Anderson et al. Jun 2008 A1
20080140278 Breed Jun 2008 A1
20080162027 Murphy et al. Jul 2008 A1
20080184785 Wee Aug 2008 A1
20080201702 Bunn Aug 2008 A1
20080244757 Nakagaki Oct 2008 A1
20080266168 Aso et al. Oct 2008 A1
20080303696 Aso et al. Dec 2008 A1
20090024357 Aso et al. Jan 2009 A1
20090058677 Tseng et al. Mar 2009 A1
20090062992 Jacobs et al. Mar 2009 A1
20090070031 Ginsberg Mar 2009 A1
20090079839 Fischer et al. Mar 2009 A1
20090089775 Zusman Apr 2009 A1
20090174573 Smith Jul 2009 A1
20090177502 Doinoff et al. Jul 2009 A1
20090237263 Sawyer Sep 2009 A1
20090271084 Taguchi Oct 2009 A1
20090312942 Froeberg Dec 2009 A1
20100100268 Zhang et al. Apr 2010 A1
20100198513 Zeng et al. Aug 2010 A1
20100228419 Lee et al. Sep 2010 A1
20100228427 Anderson et al. Sep 2010 A1
20100256836 Mudalige Oct 2010 A1
20100274430 Dolgov et al. Oct 2010 A1
20100286824 Solomon Nov 2010 A1
20100317401 Lee et al. Dec 2010 A1
20110066312 Sung et al. Mar 2011 A1
20110080302 Muthaiah et al. Apr 2011 A1
20110102195 Kushi et al. May 2011 A1
20110106442 Desai et al. May 2011 A1
20110137549 Gupta et al. Jun 2011 A1
20110141242 Fernandez Alvates et al. Jun 2011 A1
20110153166 Yester Jun 2011 A1
20110184605 Neff Jul 2011 A1
20110190972 Timmons Aug 2011 A1
20110197187 Roh Aug 2011 A1
20110231095 Nakada et al. Sep 2011 A1
20110252415 Ricci Oct 2011 A1
20110265075 Lee Oct 2011 A1
20110307879 Ishida et al. Dec 2011 A1
20120010797 Luo et al. Jan 2012 A1
20120016581 Mochizuki et al. Jan 2012 A1
20120017207 Mahajan et al. Jan 2012 A1
20120078504 Zhou Mar 2012 A1
20120109421 Scarola May 2012 A1
20120110296 Harata May 2012 A1
20120112895 Jun May 2012 A1
20120124568 Fallon et al. May 2012 A1
20120124571 Nagai et al. May 2012 A1
20120140039 Ota Jun 2012 A1
20120179362 Stille Jul 2012 A1
20120242167 Zeung et al. Sep 2012 A1
20120266156 Spivak et al. Oct 2012 A1
20120268262 Popovic Oct 2012 A1
20120271510 Seymour et al. Oct 2012 A1
20120275524 Lien et al. Nov 2012 A1
20120323402 Murakami Dec 2012 A1
20130018572 Jang Jan 2013 A1
20130046471 Rahmes et al. Feb 2013 A1
20130054133 Lewis et al. Feb 2013 A1
20130055231 Hyndman et al. Feb 2013 A1
20130079950 You Mar 2013 A1
20130085817 Pinkus Apr 2013 A1
20130099911 Mudalige et al. Apr 2013 A1
20130151058 Zagorski et al. Jun 2013 A1
20130167131 Carson Jun 2013 A1
20130174050 Heinonen et al. Jul 2013 A1
20130184926 Spero et al. Jul 2013 A1
20130223686 Shimizu Aug 2013 A1
20130227538 Maruyama Aug 2013 A1
20130238235 Kitchel Sep 2013 A1
20130245877 Ferguson et al. Sep 2013 A1
20130253754 Ferguson et al. Sep 2013 A1
20130261952 Aso et al. Oct 2013 A1
20130297172 Ariga et al. Nov 2013 A1
20130304349 Davidson Nov 2013 A1
20130304365 Trombley et al. Nov 2013 A1
20130325241 Lombrozo Dec 2013 A1
20130328916 Arikan et al. Dec 2013 A1
20130332918 Aoyagi et al. Dec 2013 A1
20130335569 Einecke et al. Dec 2013 A1
20130338854 Yamamoto Dec 2013 A1
20130339721 Yasuda Dec 2013 A1
20140013015 Chang Jan 2014 A1
20140018994 Panzarella et al. Jan 2014 A1
20140059534 Daum et al. Feb 2014 A1
20140062725 Maston Mar 2014 A1
20140063232 Fairfield et al. Mar 2014 A1
20140067488 James et al. Mar 2014 A1
20140068594 Young et al. Mar 2014 A1
20140088855 Ferguson Mar 2014 A1
20140104077 Engel et al. Apr 2014 A1
20140112538 Ogawa et al. Apr 2014 A1
20140136414 Abhyanker May 2014 A1
20140149153 Cassandras et al. May 2014 A1
20140156182 Nemec et al. Jun 2014 A1
20140168377 Cluff et al. Jun 2014 A1
20140176350 Niehsen et al. Jun 2014 A1
20140195093 Litkouhi et al. Jul 2014 A1
20140204209 Huth et al. Jul 2014 A1
20140207325 Mudalige et al. Jul 2014 A1
20140222280 Salomonsson Aug 2014 A1
20140245285 Krenz Aug 2014 A1
20140249742 Krivacic et al. Sep 2014 A1
20140266665 Haushalter Sep 2014 A1
20140272894 Grimes et al. Sep 2014 A1
20140278052 Slavin et al. Sep 2014 A1
20140278090 Boes et al. Sep 2014 A1
20140288810 Donovan et al. Sep 2014 A1
20140303827 Dolgov et al. Oct 2014 A1
20140309856 Willson-Quayle Oct 2014 A1
20140309885 Ricci Oct 2014 A1
20140327532 Park Nov 2014 A1
20140330479 Dolgov et al. Nov 2014 A1
20140334168 Ehlgen et al. Nov 2014 A1
20140334689 Butler et al. Nov 2014 A1
20140371987 Van Wiemeersch Dec 2014 A1
20150006012 Kammek et al. Jan 2015 A1
20150012204 Breuer et al. Jan 2015 A1
20150032290 Kitahama et al. Jan 2015 A1
20150039211 Stefan et al. Feb 2015 A1
20150046076 Costrello Feb 2015 A1
20150051785 Pal et al. Feb 2015 A1
20150081156 Trepagnier et al. Mar 2015 A1
20150088357 Yopp Mar 2015 A1
20150094943 Yoshihama et al. Apr 2015 A1
20150100216 Rayes Apr 2015 A1
20150120125 Thomson et al. Apr 2015 A1
20150121071 Schwarz Apr 2015 A1
20150123816 Breed May 2015 A1
20150124096 Koravadi May 2015 A1
20150134180 An et al. May 2015 A1
20150149017 Attard et al. May 2015 A1
20150154243 Danaher Jun 2015 A1
20150154323 Koch Jun 2015 A1
20150160024 Fowe Jun 2015 A1
20150161895 You et al. Jun 2015 A1
20150166069 Engel et al. Jun 2015 A1
20150178998 Attard et al. Jun 2015 A1
20150191135 Noon et al. Jul 2015 A1
20150191136 Noon et al. Jul 2015 A1
20150210274 Clarke et al. Jul 2015 A1
20150219463 Kang Aug 2015 A1
20150253778 Rothoff et al. Sep 2015 A1
20150266488 Solyom et al. Sep 2015 A1
20150268665 Ludwick et al. Sep 2015 A1
20150279210 Zafiroglu et al. Oct 2015 A1
20150285644 Pfaff et al. Oct 2015 A1
20150292894 Goddard et al. Oct 2015 A1
20150293534 Takamatsu Oct 2015 A1
20150307131 Froeschl et al. Oct 2015 A1
20150310744 Farrelly et al. Oct 2015 A1
20150319093 Stolfus Nov 2015 A1
20150329107 Meyer et al. Nov 2015 A1
20150332101 Takaki et al. Nov 2015 A1
20150336502 Hillis et al. Nov 2015 A1
20150338849 Nemec et al. Nov 2015 A1
20150339928 Ramanujam Nov 2015 A1
20150345959 Meuleau Dec 2015 A1
20150345966 Meuleau Dec 2015 A1
20150345967 Meuleau Dec 2015 A1
20150345971 Meuleau et al. Dec 2015 A1
20150346724 Jones et al. Dec 2015 A1
20150346727 Ramanujam Dec 2015 A1
20150348112 Ramanujam Dec 2015 A1
20150353082 Lee et al. Dec 2015 A1
20150353085 Lee et al. Dec 2015 A1
20150353094 Harda et al. Dec 2015 A1
20150355641 Choi et al. Dec 2015 A1
20150358329 Noda et al. Dec 2015 A1
20150360692 Ferguson et al. Dec 2015 A1
20150379468 Harvey Dec 2015 A1
20160013934 Smereka et al. Jan 2016 A1
20160016127 Mentzel et al. Jan 2016 A1
20160016525 Chauncey et al. Jan 2016 A1
20160025505 Oh et al. Jan 2016 A1
20160033964 Sato et al. Feb 2016 A1
20160041820 Ricci et al. Feb 2016 A1
20160047657 Caylor et al. Feb 2016 A1
20160047666 Fuchs Feb 2016 A1
20160075333 Sujan et al. Mar 2016 A1
20160078758 Basalamah Mar 2016 A1
20160107655 Desnoyer et al. Apr 2016 A1
20160109245 Denaro Apr 2016 A1
20160117923 Dannenbring Apr 2016 A1
20160121482 Bostick et al. May 2016 A1
20160129907 Kim et al. May 2016 A1
20160137199 Kuhne et al. May 2016 A1
20160137206 Chandraker et al. May 2016 A1
20160138924 An May 2016 A1
20160139594 Okumura et al. May 2016 A1
20160139598 Ichikawa et al. May 2016 A1
20160139600 Delp May 2016 A1
20160147921 VanHolme May 2016 A1
20160148063 Hong et al. May 2016 A1
20160161266 Crawford et al. Jun 2016 A1
20160161270 Okumura Jun 2016 A1
20160161271 Okumura Jun 2016 A1
20160167652 Slusar Jun 2016 A1
20160176398 Prokhorov et al. Jun 2016 A1
20160180707 MacNeille et al. Jun 2016 A1
20160209843 Meuleau et al. Jul 2016 A1
20160231122 Beaurepaire Aug 2016 A1
20160231746 Hazelton et al. Aug 2016 A1
20160239293 Hoffman et al. Aug 2016 A1
20160260328 Mishra et al. Sep 2016 A1
20160266581 Dolgov et al. Sep 2016 A1
20160280264 Baek Sep 2016 A1
20160282874 Kurata et al. Sep 2016 A1
20160288788 Nagasaka Oct 2016 A1
20160291155 Nehmadi et al. Oct 2016 A1
20160318437 Vilakathara Nov 2016 A1
20160318531 Johnson et al. Nov 2016 A1
20160321551 Priness et al. Nov 2016 A1
20160332574 Park et al. Nov 2016 A1
20160334229 Ross et al. Nov 2016 A1
20160334230 Ross et al. Nov 2016 A1
20160355192 James et al. Dec 2016 A1
20160370194 Colijn et al. Dec 2016 A1
20160370801 Fairfield et al. Dec 2016 A1
20160379486 Taylor Dec 2016 A1
20170010613 Fukumoto Jan 2017 A1
20170016730 Gawrilow Jan 2017 A1
20170059335 Levine et al. Mar 2017 A1
20170059339 Sugawara et al. Mar 2017 A1
20170082453 Fischer et al. Mar 2017 A1
20170090480 Ho et al. Mar 2017 A1
20170106871 You et al. Apr 2017 A1
20170110022 Gulash Apr 2017 A1
20170113683 Mudalige et al. Apr 2017 A1
20170122766 Nemec et al. May 2017 A1
20170123429 Levinson et al. May 2017 A1
20170123430 Nath et al. May 2017 A1
20170139701 Lin et al. May 2017 A1
20170153639 Stein Jun 2017 A1
20170154225 Stein Jun 2017 A1
20170219371 Suzuki et al. Aug 2017 A1
20170242436 Creusot Aug 2017 A1
20170245151 Hoffman et al. Aug 2017 A1
20170248949 Moran et al. Aug 2017 A1
20170249848 Niino et al. Aug 2017 A1
20170276502 Fischer et al. Sep 2017 A1
20170277193 Frazzoli et al. Sep 2017 A1
20170277194 Frazzoli et al. Sep 2017 A1
20170277195 Frazzoli et al. Sep 2017 A1
20170286784 Bhatia et al. Oct 2017 A1
20170291608 Engel et al. Oct 2017 A1
20170292843 Wei et al. Oct 2017 A1
20170305335 Wei et al. Oct 2017 A1
20170309179 Kodama Oct 2017 A1
20170132334 Levinson et al. Nov 2017 A1
20170327128 Denaro Nov 2017 A1
20170336788 Iagnemma Nov 2017 A1
20170337819 Wei et al. Nov 2017 A1
20170341652 Sugawara et al. Nov 2017 A1
20170345321 Cross et al. Nov 2017 A1
20170349181 Wei et al. Dec 2017 A1
20170351263 Lambermont et al. Dec 2017 A1
20170356746 Iagnemma Dec 2017 A1
20170356747 Iagnemma Dec 2017 A1
20170356748 Iagnemma Dec 2017 A1
20170356750 Iagnemma Dec 2017 A1
20170356751 Iagnemma Dec 2017 A1
20170369051 Sakai et al. Dec 2017 A1
20180004206 Iagnemma et al. Jan 2018 A1
20180004210 Iagnemma et al. Jan 2018 A1
20180039269 Lambermont et al. Feb 2018 A1
20180050664 Tarte Feb 2018 A1
20180053276 Iagnemma et al. Feb 2018 A1
20180053412 Iagnemma et al. Feb 2018 A1
20180086280 Nguyen Mar 2018 A1
20180105174 Russell et al. Apr 2018 A1
20180113455 Iagnemma et al. Apr 2018 A1
20180113457 Iagnemma et al. Apr 2018 A1
20180113459 Bennie et al. Apr 2018 A1
20180113463 Iagnemma et al. Apr 2018 A1
20180113470 Iagnemma et al. Apr 2018 A1
20180114442 Minemura et al. Apr 2018 A1
20180120845 Lambermont et al. May 2018 A1
20180120859 Eagelberg et al. May 2018 A1
Foreign Referenced Citations (31)
Number Date Country
105652300 Jun 2016 CN
102013010983 Jan 2015 DE
0436213 Jul 1991 EP
2381361 Oct 2011 EP
2639781 Sep 2013 EP
2955077 Dec 2015 EP
2982562 Feb 2016 EP
2005-189983 Jul 2005 JP
2009-102003 May 2009 JP
2009-251759 Oct 2009 JP
2010-086269 Apr 2010 JP
2011-253379 Dec 2011 JP
2013-242737 Dec 2013 JP
2015-044432 Mar 2015 JP
2016-095627 May 2016 JP
2018-012478 Jan 2018 JP
10-2013-0085235 Jul 2013 KR
10-2014-0069749 Jun 2014 KR
10-2014-0130968 Nov 2014 KR
10-1480652 Jan 2015 KR
10-1590787 Feb 2016 KR
2016-0049017 May 2016 KR
WO2007053350 May 2007 WO
WO 2009156186 Dec 2009 WO
WO2014139821 Sep 2014 WO
WO2015008032 Jan 2015 WO
WO2015151055 Oct 2015 WO
WO2016018636 Feb 2016 WO
WO2017205278 Nov 2017 WO
WO2017218563 Dec 2017 WO
WO2018005819 Jan 2018 WO
Non-Patent Literature Citations (90)
Entry
Transaction history and application as filed of U.S. Appl. No. 15/298,935, filed Oct. 20, 2016.
Transaction history and application as filed of U.S. Appl. No. 15/298,984, filed Oct. 20, 2016.
Transaction history and application as filed of U.S. Appl. No. 15/298,970, filed Oct. 20, 2016.
Transaction history and application as filed of U.S. Appl. No. 15/299,028, filed Oct. 20, 2016.
U.S. Appl. No. 15/298,935, filed Oct. 20, 2016, Iagnemma et al.
U.S. Appl. No. 15/298,984, filed Oct. 20, 2016, Iagnemma et al.
U.S. Appl. No. 15/298,970, filed Oct. 20, 2016, Iagnemma et al.
U.S. Appl. No. 15/299,028, filed Oct. 20, 2016, Iagnemma et al.
Aguiar et al., “Path-following for non-minimum phase systems removes performance limitations,” IEEE Transactions on Automatic Control, 2005, 50(2):234-239.
Aguiar et al., “Trajectory-tracking and path-following of under-actuated autonomous vehicles with parametric modeling uncertainty,” Transactions on Automatic Control, 2007, 52(8):1362-1379.
Amidi and Thorpe, “Integrated mobile robot control,” International Society for Optics and Photonics, Boston, MA, 1991, 504-523.
Aoude et al., “Mobile agent trajectory prediction using Bayesian nonparametric reachability trees,” American Institute of Aeronautics and Astronautics, 2011, 1587-1593.
Autoliv.com [online], “Vision Systems—another set of “eyes”,” available on or before Sep. 8, 2012, retrieved Oct. 20, 2016,<https://www.autoliv.com/ProductsAndInnovations/Active SafetySystems/Pages/VisionSystems.aspx>, 2 pages.
Autonomoustuff.com [online], “ibeo Standard Four Layer Multi-Echo LUX Sensor: Bringing together the World's Best Technologies,” available on or before Jul. 2016, retrieved on Feb. 7, 2017, <http://www.autonomoustuff.com/product/ibeo-lux-standard/>, 2 pages.
Bahlmann et al., “A system for traffic sign detection, tracking, and recognition using color, shape, and motion information.” IEEE Intelligent Vehicles Symposium, 2005, 255-260.
Balabhadruni, “Intelligent traffic with connected vehicles: intelligent and connected traffic systems,” IEEE International Conference on Electrical, Electronics, Signals, Communication, and Optimization, 2015, 2 pages (Abstract Only).
Bertozzi et al., “Stereo inverse perspective mapping: theory and applications” Image and Vision Computing, 1999, 16:585-590.
Betts, “A survey of numerical methods for trajectory optimization,” AIAA Journal of Guidance, Control, and Dynamics, Mar.-Apr. 1998, 21(2):193-207.
Bosch-presse.de [online], “Connected horizon—seeing beyond the bends ahead,” Bosch: Invented for Life, available on or before Oct. 2015, retrieved on Nov. 3, 2016, < http://www.bosch-presse.de/pressportal/en/connected-horizon---seeing-beyond-the-bends-ahead-35691.html>, 2 pages.
Castro et al., “Incremental Sampling-based Algorithm for Minimum-violation Motion Planning”, Decision and Control, IEEE 52nd Annual Conference, Dec. 2013, 3217-3224.
Chaudari et al., “Incremental Minimum-Violation Control Synthesis for Robots Interacting with External Agents,” American Control Conference, Jun. 2014, <http://vision.ucla.edu/˜pratikac/pub/chaudhari.wongpiromsarn.ea.acc14.pdf>, 1761-1768.
Chen et al., “Likelihood-Field-Model-Based Dynamic Vehicle Detection and Tracking for Self-Driving,” IEEE Transactions on Intelligent Transportation Systems, Nov. 2016, 17(11):3142-3158.
D'Andrea-Novel et al., “Control of Nonholonomic Wheeled Mobile Robots by State Feedback Linearization,” The International Journal of Robotics Research, Dec. 1995, 14(6):543-559.
De la Escalera et al., “Road traffic sign detection and classification,” IEEE Transactions on Industrial Electronics, Dec. 1997, 44(6):848-859.
Delphi.com [online], “Delphi Electronically Scanning Radar: Safety Electronics,” retrieved on Feb. 7, 2017, <http://delphi.com/manufacturers/auto/safety/active/electronically-scanning-radar>, 4 pages.
Demiris, “Prediction of intent in robotics and multi-agent systems.” Cognitive Processing, 2007, 8(3):151-158.
Dolgov et al. “Path Planning for Autonomous Vehicles in Unknown Semi-structured Environments,” International Journal of Robotics Research, 2010, 29(5):485-501.
Dominguez et al., “An optimization technique for positioning multiple maps for self-driving car's autonomous navigation,” IEEE International Conference on Intelligent Transportation Systems, 2015, 2694-2699.
Fairfield and Urmson, “Traffic light mapping and detection,” In Proceedings of the International Conference on Robotics and Automation (ICRA), 2011, 6 pages.
Falcone et al., “A linear time varying model predictive control approach to the integrated vehicle dynamics control problem in autonomous systems,” IEEE Conference on Decision and Control, 2007, 2980-2985.
Falcone et al., “A Model Predictive Control Approach for Combined Braking and Steering in Autonomous Vehicles”, Ford Research Laboratories, Mediterranean Conference on Control & Automation, 2007, <http://www.me.berkeley.edu/˜frborrel/pdfpub/pub-20.pdf>, 6 pages.
Fong et al., “Advanced Interfaces for Vehicle Teleoperation: Collaborative Control Sensor Fusion Displays, and Remote Driving Tools”, Autonomous Robots 11, 2001, 77-85.
Franke et al., “Autonomous driving goes downtown,” IEEE Intelligent Systems and their Applications, 1998, 6:40-48.
Fraser, “Differential Synchronization,” ACM: DocEng '09, Sep. 2009, <https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35605.pdf>, 13-20.
Garcia et al., “Model predictive control: theory and practice—a survey,” Automatica, 1989, 25(3):335-348.
Gavrila and Philomin, “Real-time object detection for “smart” vehicles,” In Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, 1:87-93.
Golovinsky et al., “Shape-based Recognition of 3D Point Clouds in Urban Environments,” Proceedings of the 12th International Conference on Computer Vision, 2009, 2154-2161.
Hammerschmidt, “Bosch to Focus on Cloud for Connected Car Services”, EE Times Europe, Dec. 2015, retrieved on Aug. 25, 2016, <http://www.electronics-eetimes.com/news/bosch-focus-cloud-connected-car-services>, 4 pages.
He et al., “Color-Based Road Detection in Urban Traffic Scenes,” IEEE Transactions on Intelligent Transportation Systems, Dec. 2004, 5(4):309-318.
Himmelsback et al., “Fast Segmentation of 3D Point Clouds for Ground Vehicles,” IEEE Intelligent Vehicles Symposium, Jul. 21-24, 2010, 6 pages.
IEEE Global Initiative for Ethical Consideration in Artificial Intelligence and Autonomous Systems, “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems,” IEEE Advancing Technology for Humanity, Dec. 13, 2016, 138 pages.
ISO.org, “ISO 14229-1:2006; Road Vehicles—Unified diagnostic services (UDS)—Part 1: Specification and requirements,” International Standard Organization, 2006, retrieved on Apr. 4, 2018, <https://www.iso.org/standard/45293.html>, 2 pages (abstract).
ISO.org, “ISO 15765-3:2004; Road Vehicles—Diagnostics on Controller Area Networks (CAN)—Part 3: Implementation of unified diagnostic services (UDS on CAN),” International Standard Organization, Oct. 2004, retrieved on Apr. 4, 2018, <https://www.iso.org/obp/ui/#iso:std:iso:14229:-1:ed-1:v2:en>, 2 pages (abstract).
Jiang and Nijmeijer, “Tracking control of mobile robots: a case study in backstepping,” Automatica, 1997, 33(7):1393-1399.
Kala, et al: “Motion Planning of Autonomous Vehicles on a Dual Carriageway without Speed Lanes”, Electronics, Jan. 13, 2015 4(1):59-81.
Kanayama, “A Stable Tracking Control Method for an Autonomous Mobile Robot,” International Conference on Robotics and Automation, 1990, 384-389.
Karaman and Frazzoli, “Sampling-based algorithms for optimal motion planning ” Int. Journal of Robotics Research, Jun. 2011, <http://ares.lids.mit.edu/papers/Kamman.Frazzoli.IJRR11.pdf>, 30(7):846-894.
Karaman et al., “Sampling-based Algorithms for Optimal Motion Planning with Deterministic-Calculus Specifications”, 2012 American Control Conference, Jun. 27-Jun. 29, 2012, 8 pages.
Kavraki et al., “Probabilistic roadmaps for path planning in high-dimensional configuration spaces.” IEEE Transactions on Robotics and Automation, 1996, 12(4):566-580.
Kessels et al., “Electronic Horizon: Energy Management using Telematics Information”, IEEE: Vehicle Power and Propulsion Conference, 2007, 6 pages.
Kim, “Robust lane detection and tracking in challenging scenarios.” IEEE Transactions on Intelligent Transportation Systems, 2008, 9(1):16-26.
Larson et al., “Securing Vehicles against Cyber Attacks,” ACM, 2008, retrieved on [date], <http://dl.acm.org/citation.cfm?id=1413174>, 3 pages.
Lindner et al., “Robust recognition of traffic signals,” IEEE Intelligent Vehicles Symposium, 2004, 5 pages.
Liu et al, “Nonlinear Stochastic Predictive Control with Unscented Transformation for Semi_Autonomous Vehicles,” American Control Conference, Jun. 4-6, 2014, 5574-5579.
Liu et al., “Robust semi-autonomous vehicle control for roadway departure and obstacle avoidance,” ICCAS, Oct. 20-23, 2013, 794-799.
Lobdell, “Robust Over-the-air Firmware Updates Using Program Flash Memory Swap on Kinetis Microcontrollers,” Freescal Semiconductor Inc., 2012, retrieved on Apr. 11, 2018, <http://cache.freescale.com/flies/microcontrollers/doc/app_note/AN4533.pdf>, 20 pages.
Luzcando (searcher), “EIC 3600 Search Report,” STIC—Scientific & Technical Information Center, Feb. 14, 2018, 20 pages.
Maldonado-Bascón et al., “Road-sign detection and recognition based on support vector machines,” IEEE Transactions on Intelligent Transportation Systems, 2007, 8(2):264-278.
Mayne et al., “Constrained model predictive control: Stability and optimality,” Automatica, 2000, 36(6):789-814.
Mobileye [online], “Advanced Driver Assistance Systems (ADAS) systems range on the spectrum of passive/active,” Copyright 2017, retrieved on Oct. 20, 2016, <http://www.mobileye.com/our-technology/adas/>, 2 pages.
Møgelmose et al., “Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey,” IEEE Transactions on Intelligent Transportation Systems, 2012, 13(4):1484-1497.
Morris et al., “Learning, modeling, and classification of vehicle track patterns from live video.” IEEE Transactions on Intelligent Transportation Systems, 2008, 9(3):425-437.
Nilsson et al., “A Framework for Self-Verification of Firmware Updates over the Air in Vehicle ECUs,” IEEE. GLOBECOM Workshops, Nov. 2008, 5 pages.
Nilsson et al., “Conducting Forensic Investigations of Cyber Attacks on Automobiles In-Vehicle Networks,” ICST, 2008, retrieved on Mar. 20, 2016, <http://dl.acm.org/citation.cfm?id=1363228>, 6 pages.
Ollero and Amidi, “Predictive path tracking of mobile robots. application to the CMU Navlab,” in 5th International Conference on Advanced Robotics, 1991, 91:1081-1086.
Paik et al., “Profiling-based Log Block Replacement Scheme in FTL for Update-intensive Executions,” IEEE: Embedded and Ubiquitous Computing (EUC), Oct. 2011, 182-188.
Ponomarev, “Augmented reality's future isn't glasses. It's the car,” Venturebeat.com, available on or before, Aug. 2017, retrieved on Mar. 30, 2018, <https://venturebeat.com/2017/08/23/ar-will-drive-the-evolution-of-automated-cars/>, 4 pages.
Premebida et al., “A lidar and vision-based approach for pedestrian and vehicle detection and tracking.” In Proceedings of the IEEE Intelligent Transportation Systems Conference, 2007, 1044-1049.
Premebida et al., “LIDAR and vision-based pedestrian detection system.” Journal of Field Robotics, 2009, 26(9):696-711.
Rankin et al., “Autonomous path planning navigation system used for site characterization,” SPIE—International Society for Optics and Photonics, 1996, 176-186.
Shavel-Shwartz et al., “Avoiding a “Winter of Autonomous Driving”: On a Formal Model of Safe, Scalable, Self-driving Cars,” arXiv preprint, Aug. 17, 2017, 25 pages.
Shen et al., “A Robust Video based Traffic Light Detection Algorithm for Intelligent Vehicles,” Proceedings of the IEEE Intelligent Vehicles Symposium, 2009, 521-526.
Shin, “Hot/Cold Clustering for Page Mapping in NAND Flash Memory,” IEEE: Transactions on Consumer Electronics, Nov. 2011, 57(4):1728-1731.
Spieser et al, “Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in Singapore,” Road Vehicle Automation, 2014, 229-245.
Standards sae.org, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles,” SAE International, Sep. 2016, retrieved on Apr. 18, 2017, <http://standards.sae.org/j3016_201609/>, 3 pages.
Steger et al, “Applicability of IEEE 802.11s for Automotive Wireless Software Updates,” IEEE: Telecommunications (ConTEL), Jul. 2015, 8 pages.
Stokar, “Perform over-the-air updates for car ECUss,” eMedia Asia Ltd., 2013, retrieved on Apr. 11, 2018, <http://www.eetasia.com/STATIC/PDF/201312/EEOL_2013DEC05_NET_EMS_T_01.pdf?SOURCES=DOWNLOAD>, 3 pages.
Strahn et al., “Laser Scanner-Based Navigation for Commercial Vehicles,” IEEE Intelligent Vehicles Symposium, Jun. 13-15, 2007, 969-974.
Tabuada and Pappas, “Linear time logic control of discrete-time linear systems,” IEEE Transactions on Automatic Control, 2006, 51(12):1862-1877.
Wallace et al., “First results in robot road-following,” in IJCAI, 1985, 1089-1095.
Wang et al., “Lane detection and tracking using B-Snake,” Image and Vision Computing, 2004, 22(4):269-280.
Wang et al., “Simultaneous localization, mapping and moving object tracking,” The International Journal of Robotics Research, 2007, 26(9):889-916.
Weiskircher et al., “Predictive Guidance and Control Framework for (Semi-) Autonomous Vehicles in Public Traffic,” IEEE Transactions on Control Systems Technology, 2017, 25(6):2034-2046.
Weiss et al., “Autonomous v. Tele-operated: How People Perceive Human-Robot Collaboration with HRP-2,” Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, 2009, 3 pages.
Wikipedia.org [online],“Gain Scheduling”, Aug. 20, 2006 (first draft), retrieved on Aug. 25, 2016, <https://en.wikipedia.org/wiki/Gain_scheduling>, 1 page.
Wit et al., “Autonomous ground vehicle path tracking,” Journal of Robotic Systems, 2004, 21(8):439-449.
Wu et al., “Data Sorting in Flash Memory,” ACM, 2015, <http://dl.acm.org/citation.cfm?id=2747982.2665067>, 25 pages.
Yilmaz et al., “Object tracking: A survey,” ACM Computing Surveys, 2006, 31 pages.
Zax, “A Software Update for Your Car? Ford reboots it infotainment system, following consumer complaints,” MIT Technology Review, 2012, retrieved on Apr. 11, 2018, <http://www.technologyreview.com/view/427153/a-software-update-for-yourcar?/>, 6 pages.
Zheng et al, “Lane-level positioning system based on RFID and vision,” IET International Conference on Intelligent and Connected Vehicles, 2016, 5 pages.
Related Publications (1)
Number Date Country
20180113456 A1 Apr 2018 US