Autonomous vehicles, such as vehicles that do not require a human driver, may be used to aid in the transport of passengers or items from one location to another. An important component of an autonomous vehicle is the perception system, which allows the vehicle to perceive and interpret its surroundings using cameras, radar, sensors, and other similar devices. The perception system supports numerous decisions made while the autonomous vehicle is in motion, such as speeding up, slowing down, stopping, turning, etc. Autonomous vehicles may also use the cameras, sensors, and global positioning devices to gather and interpret images and sensor data about the vehicle's surrounding environment, e.g., pedestrians, bicyclists, other vehicles, parked cars, trees, buildings, etc.
Information from the perception system may be combined with highly detailed map information in order to allow a vehicle's computer to safely maneuver the vehicle in various environments. This highly detailed map information may describe expected conditions of the vehicle's environment such as the shape and location of roads, parking spots, dead zones, traffic signals, and other objects. In this regard, the information from the perception system and detailed map information may be used to assist a vehicle's computer in making driving decisions involving intersections and traffic signals.
One aspect of the disclosure provides a computer-implemented method. The method includes identifying, by one or more computing devices, an object in a vehicle's environment. The object may be associated with a heading and a location. The method further includes generating, by the one or more computing devices, a set of possible actions for the object. The set of possible actions may be generated using the heading and location of the object and map information that describes the vehicle's environment. The method may then generate a set of possible future trajectories of the object based on the set of possible actions. The method includes receiving contextual information and determining a likelihood value of each trajectory in the set of possible future trajectories based on the contextual information. According to some examples, the contextual information may include a status of the detected object. The method includes determining, by the one or more computing devices, a final future trajectory based on the determined likelihood value for each trajectory of the set of possible future trajectories and maneuvering the vehicle in order to avoid the final future trajectory and the object.
In some examples, determining the final future trajectory may include comparing the likelihood value for each trajectory to a threshold value and discarding trajectories when the likelihood value of that trajectory does not meet the threshold value. Accordingly, the likelihood value of the discarded trajectory may not be used to determine the final future trajectory. When none of the trajectories meet the threshold value, the method may include identifying a plurality of waypoints for each trajectory in the set of trajectories. A waypoint may include at least one of a position, a velocity, and a timestamp. The method may include determining a trajectory of the vehicle that includes a plurality of waypoints. The method may then compare each of the waypoints to a waypoint associated with a trajectory of the vehicle at a same timestamp in order to determine the final future trajectory.
According to other examples, determining the final future trajectory may include identifying a situational relationship between the object and the vehicle. The method may then compare the likelihood value of the trajectories remaining in the set to a second threshold different from the first threshold value and discard a second trajectory from the set of trajectories when the likelihood value of that second trajectory does not meet the second threshold value. Accordingly, the likelihood value of the discarded second trajectory may not be used to determine the final future trajectory. After discarding the second trajectory, the remaining trajectories may be identified as final future trajectories. In this regard, the method includes maneuvering the vehicle to avoid each of the remaining trajectories.
In some examples, the method may include determining the final future trajectory by selecting a trajectory with the highest likelihood value as the final future trajectory. Additionally, the method may include discarding an action from the set of possible actions for failing to comply with a model of possible actions for the object. The method may also include generating the set of possible actions based on a past trajectory of the object. According to other examples, the contextual information may describe a status of a second object in the vehicle's environment.
Another aspect of the disclosure provides a system comprising one or more computing devices. The one or more computing devices are configured to identify an object in a vehicle's environment, the object having a heading and a location, and to generate a set of possible actions for the object using the heading and location of the object and map information. The one or more computing devices are configured to generate a set of possible future trajectories of the object based on the set of possible actions. The one or more computing devices may receive contextual information and use it to determine a likelihood value of each trajectory. In some examples, the contextual information includes a status of the detected object. The one or more computing devices are also configured to determine a final future trajectory based on the likelihood value for each trajectory and maneuver the vehicle in order to avoid the final future trajectory and the object.
In one example, the one or more computing devices may determine the final future trajectory by comparing the likelihood value for each trajectory to a threshold value and discarding trajectories when the likelihood value does not meet the threshold value. The likelihood value of a discarded trajectory may not be used to determine the final future trajectory. The one or more computing devices may also be configured to identify a plurality of waypoints for each trajectory when none of the trajectories meet the threshold value. Each waypoint may include a position, a velocity, or a timestamp of the detected object. The one or more computing devices may be configured to determine a trajectory of the vehicle that includes a plurality of waypoints and to compare each of the trajectory waypoints to a waypoint of the trajectory of the vehicle at the same time in order to determine the final future trajectory. The one or more computing devices may be configured to determine the final future trajectory by identifying a situational relationship between the object and the vehicle and comparing the likelihood value of the remaining trajectories to a second threshold value different from the first threshold value. The one or more computing devices may subsequently discard a second trajectory when the likelihood value of that second trajectory does not meet the second threshold value. Accordingly, the likelihood value of the discarded second trajectory may not be used to determine the final future trajectory. After discarding the second trajectory or trajectories, the one or more computing devices are configured to identify the remaining trajectories as final future trajectories and maneuver the vehicle to avoid each of the remaining trajectories. In some examples, the one or more computing devices are configured to determine the final future trajectory by selecting the trajectory with the highest likelihood value. In other examples, the one or more computing devices are configured to determine the final future trajectory by discarding an action for failing to comply with a model of possible actions for the object. Additionally, the one or more computing devices may be configured to generate the set of possible actions based on a past trajectory of the object. According to some examples, the vehicle is an autonomous vehicle.
A further aspect of the disclosure provides a non-transitory computer-readable medium on which instructions are stored. The instructions, when executed by one or more processors, cause the one or more processors to perform a method. The method includes identifying, by one or more computing devices, an object in a vehicle's environment. The object may be associated with a heading and a location. The method further includes generating, by the one or more computing devices, a set of possible actions for the object. The set of possible actions may be generated using the heading and location of the object and map information that describes the vehicle's environment. The method may then generate a set of possible future trajectories of the object based on the set of possible actions. The method includes receiving contextual information and determining a likelihood value of each trajectory of the set of possible future trajectories based on the contextual information. According to some examples, the contextual information may include a status of the detected object. The method includes determining, by the one or more computing devices, a final future trajectory based on the determined likelihood value for each trajectory of the set of possible future trajectories. Finally, the method includes maneuvering the vehicle in order to avoid the final future trajectory and the object.
Overview
The present disclosure relates to predicting the trajectory of objects and using the predicted trajectories to modify vehicle behavior. For example, a vehicle navigating a roadway may need to keep track of and predict what other objects in the vehicle's environment, such as other vehicles, bikes, pedestrians, animals, etc., are going to do. The failure to do so may result in collisions, reckless and erratic driving, or other hazardous situations. Accordingly, safer driving conditions may be achieved by accurately predicting the trajectory of other objects based on combining detailed map information and contextual information of other objects.
In order to predict the trajectory of other objects, the vehicle's computer may access the detailed map information and contextual information of the detected objects. The map information may include detailed information about intersections, lanes (e.g., turn-only lanes), exit-only lanes, lane locations, parking spots, stop signs, traffic lights, clear zones (explicit or implicit, such as a railroad crossing or an intersection), driveways, parking lot entrances/exits, etc. Additionally, the map information may also include information about speed limits for each lane. The detailed map information may be updated to include changes to road conditions, such as temporary road closures, detours, etc.
The vehicle may detect objects in the vehicle's environment using sensors mounted on the vehicle. For example, the vehicle may have several devices mounted thereon to detect the presence of objects around the vehicle, such as cameras, radar devices, sonar devices, LIDAR devices, etc. These devices may be used to detect objects around the vehicle, including pedestrians, other vehicles, bicycles, traffic signs (e.g., stop signs or yield signs), traffic lights, etc.
After the objects around the vehicle have been detected, a heading, estimated speed, location, size and/or shape for each of the objects may be determined. In some examples, acceleration, curvature, etc. may also be detected for each of the detected objects. The heading of a detected object may include the object's direction of movement. Location for each of the objects may include the position of the detected object in relation to the vehicle.
Additionally, location for each of the objects may also include geographic position (e.g., latitude, longitude). The location information may be used to identify information about the object's location relative to the detailed map information. For example, the detailed map information in conjunction with location information for an object may be used to determine that the object, such as a vehicle, is in a specific lane, such as a turn-only lane or the middle lane of a highway. In another example, the detailed map information in conjunction with location information for an object may be used to determine that the object, such as a bicyclist, is in a bike lane or the rightmost lane of traffic.
A set of possible actions for each detected object may be generated, using the vehicle's computer, based on the detailed map information and the heading, location, size, and shape for that detected object. In this regard, limiting the set of possible actions for each detected object may be accomplished through kinematic or dynamic models of feasible behavior for the detected object. A potential trajectory for each action of the possible set of actions may be generated using the vehicle's computer. Potential trajectories may be generated using the detailed map information and contextual information, such as information about other objects (e.g., vehicles, pedestrians, cyclists, etc.). A potential trajectory may include predicted headings and locations of the detected object for some period of time into the future. In alternative examples, the potential trajectory may also include a predicted speed of the detected object.
The contextual information may also be generated using the vehicle's computer. In this regard, contextual information may include information about the detected object, such as a status of a turn signal or a brake light, as well as information about objects other than the detected object. For example, contextual information may include a type of the detected object (e.g., bike, pedestrian, etc.), the size and/or shape of the detected object, lighted signals from the detected object, etc.
Additionally, the contextual information may include information about other objects. Again, the contextual information may identify a type of the other objects (e.g., bike, pedestrian, other vehicles, etc.), the heading and location of the other objects, the speed of the other objects, and the size and/or shape of the other objects. Additionally, the contextual information may include environmental information, such as lighted signals from other objects, states of traffic signals, weather conditions (e.g., rain), traffic signs, etc.
Based on each potential trajectory and the contextual information, the vehicle's computer may determine a likelihood value for each trajectory for the detected object. For example, the likelihood value may indicate a likelihood of a given potential trajectory actually occurring. Thus, the likelihood value may be determined based on details of a single potential trajectory (e.g., possible future locations and headings) as well as the contextual information.
The vehicle's computer may use the potential trajectories to identify a final future trajectory. The trajectory with the highest likelihood value may be identified as the final future trajectory. In other examples, a second threshold may be used to identify a subset of the trajectories based on a relationship between the vehicle and other objects. The vehicle's computer may react to all the trajectories in the subset of trajectories in planning a route for the vehicle. In still other examples, where there are no trajectories with likelihood values above the predetermined threshold, all of the potential trajectories may be further analyzed. This analysis may include taking a number of points along each of the trajectories. If points from different trajectories are within a predetermined distance from each other at a predetermined time, the likelihood values of those trajectories may be summed and compared to the threshold value. If the sum of the likelihood values meets the threshold value, then all of the trajectories whose likelihood values were summed together may be considered a final future trajectory. The final future trajectory may be used by the vehicle's computer to plan a route for the vehicle that, for example, avoids the vehicle coming too close to the object.
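The following is a minimal, illustrative sketch of the selection logic summarized above. The function name, the data layout, and the exact ordering of the threshold checks are assumptions made for illustration and do not reflect the disclosed implementation:

```python
# Illustrative only: select final future trajectories from a list of
# (trajectory, likelihood) pairs, using a first threshold and, when a
# situational relationship applies, a second threshold.
def choose_final_trajectories(trajectories, threshold, second_threshold=None):
    remaining = [(t, p) for t, p in trajectories if p >= threshold]
    if remaining and second_threshold is not None:
        # Situational relationship (e.g., a path crossing): react to every
        # remaining trajectory that also meets the second threshold.
        return [t for t, p in remaining if p >= second_threshold]
    if remaining:
        # Otherwise take the single most likely remaining trajectory.
        return [max(remaining, key=lambda tp: tp[1])[0]]
    # No trajectory met the threshold: further analysis is needed, such as
    # summing the likelihoods of trajectories that pass through nearby points.
    return []
```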
The aspects described herein may allow a vehicle's computer to make predictions of the trajectories of objects around a vehicle. Such predictions may help the vehicle's computer to navigate, provide notifications to drivers to keep them alert to their surroundings, improve safety, and reduce traffic accidents.
Example Systems
As shown in
The memory 130 stores information accessible by the one or more processors 120, including data 132 and instructions 134 that may be executed or otherwise used by the processor(s) 120. The memory 130 may be of any type capable of storing information accessible by the processor(s), including a computing device-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories. Systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media.
The data 132 may be retrieved, stored or modified by processor(s) 120 in accordance with the instructions 134. For instance, although the claimed subject matter is not limited by any particular data structure, the data may be stored in computing device registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files. The data may also be formatted in any computing device-readable format.
The instructions 134 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
The one or more processors 120 may be any conventional processors, such as commercially available CPUs. Alternatively, the one or more processors may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor, such as a field programmable gate array (FPGA). Although
Computing device 110 may have all of the components normally used in connection with a computing device such as the processor and memory described above, as well as a user input 150 (e.g., a mouse, keyboard, touch screen and/or microphone) and various electronic displays (e.g., a monitor having a screen, a small LCD touch-screen or any other electrical device that is operable to display information). In this example, the vehicle includes an internal electronic display 152. In this regard, internal electronic display 152 may be located within a cabin of vehicle 100 and may be used by computing device 110 to provide information to passengers within the vehicle 100.
In one example, computing device 110 may be an autonomous driving computing system incorporated into vehicle 100. The autonomous driving computing system may be capable of communicating with various components of the vehicle as needed in order to control the vehicle in a fully autonomous driving mode (without input from a driver), as well as in a semi-autonomous driving mode (with some input from a driver).
As an example,
Returning to
In this regard, computing device 110 may be in communication with various systems of vehicle 100, such as deceleration system 160, acceleration system 162, steering system 164, signaling system 166, navigation system 168, positioning system 170, perception system 172, such that one or more systems working together may control the movement, speed, direction, etc. of vehicle 100 in accordance with the instructions 134 stored in memory 130. Although these systems are shown as external to computing device 110, in actuality, these systems may also be incorporated into computing device 110, again as an autonomous driving computing system for controlling vehicle 100.
As an example, computing device 110 may interact with deceleration system 160 and acceleration system 162 in order to control the speed of the vehicle. Similarly, steering system 164 may be used by computing device 110 in order to control the direction of vehicle 100. For example, if vehicle 100 is configured for use on a road, such as a car or truck, the steering system may include components to control the angle of the wheels to turn the vehicle. Signaling system 166 may be used by computing device 110 in order to signal the vehicle's intent to other drivers or vehicles, for example, by lighting turn signals or brake lights when needed.
Navigation system 168 may be used by computing device 110 in order to determine and follow a route to a location. In this regard, the navigation system 168 and/or data 132 may store map information, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, vegetation, bike lanes, or other such objects and information. Additionally, the map information may be updated to include temporary information, such as temporarily blocked lanes due to construction, accident information, etc.
In addition, the detailed map information includes a network of rails 350, 352, and 354, which provide the vehicle's computer with guidelines for maneuvering the vehicle so that the vehicle follows the rails and obeys traffic laws. As an example, a vehicle's computer may maneuver the vehicle from point A to point B (two fictitious locations not actually part of the detailed map information) by following rail 350, transitioning to rail 352, and subsequently transitioning to rail 354 in order to make a right turn at intersection 302.
Positioning system 170 may be used by computing device 110 in order to determine the vehicle's relative or absolute position on a map or on the earth. For example, the positioning system 170 may include a GPS receiver to determine the device's latitude, longitude and/or altitude position. Other location systems such as laser-based localization systems, inertial-aided GPS, or camera-based localization may also be used to identify the location of the vehicle. The location of the vehicle may include an absolute geographical location, such as latitude, longitude, and altitude as well as relative location information, such as location relative to other cars immediately around the vehicle which can often be determined with less noise than absolute geographical location.
The positioning system 170 may also include other devices in communication with computing device 110, such as an accelerometer, gyroscope or another direction/speed detection device to determine the direction and speed of the vehicle or changes thereto. By way of example only, an acceleration device may determine its pitch, yaw or roll (or changes thereto) relative to the direction of gravity or a plane perpendicular thereto. The device may also track increases or decreases in speed and the direction of such changes. The device may provide location and orientation data as set forth herein automatically to the computing device 110, other computing devices, and combinations of the foregoing.
The perception system 172 also includes one or more components for detecting and performing analysis on objects external to the vehicle, such as other vehicles, obstacles in the roadway, pedestrians, bicyclists, traffic signals, signs, trees, etc. For example, the perception system 172 may include lasers, sonar, radar, one or more cameras, or any other detection devices which record and process data that may be further processed by computing device 110. In the case where the vehicle is a small passenger vehicle, such as a car, the car may include a laser mounted on the roof or other convenient location, as well as other sensors, such as cameras, radars, sonars, and additional lasers.
Accordingly, the one or more computing devices 110 may control the direction and speed of the vehicle based on information provided by the vehicle's various systems described above. By way of example, if the vehicle is operating completely autonomously, computing device 110 may navigate the vehicle to a location using data from the detailed map information and navigation system 168. Computing device 110 may use the positioning system 170 to determine the vehicle's location. Information from the perception system 172 may be used to detect and respond to objects when needed to reach a location safely. In order to do so, computing device 110 may cause the vehicle to accelerate (e.g., by increasing fuel or other energy provided to the engine by acceleration system 162), decelerate (e.g., by decreasing the fuel supplied to the engine or by applying brakes by deceleration system 160), change direction (e.g., by turning the front or rear wheels of vehicle 100 by steering system 164), and signal such changes (e.g., by lighting turn signals of signaling system 166).
Example Methods
In addition to the operations described above and illustrated in the figures, various operations will now be described. It should be understood that the following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in a different order or simultaneously, and steps may also be added or omitted.
As noted above, a vehicle's one or more computing devices may maneuver the vehicle using the various systems described above. In the example of
As the vehicle is being maneuvered, the perception system 172 may detect and identify various objects in the vehicle's environment. In the example of
As noted above, the perception system 172 may determine various characteristics of these objects such as the type of each object. As an example, the detected objects may be compared to the detailed map information in order to determine whether the objects have a corresponding feature in the detailed map information. For objects such as the lane lines and crosswalks, this may be used to identify the types of these objects by comparison to the corresponding lane lines and crosswalks of the detailed map information.
By tracking the location of objects over a brief period of time, the perception system may also determine a heading and speed for each of the objects. This brief period of time may be a fraction of a second, or somewhat more or less. In addition, this information may be plotted in order to determine a past trajectory of the object. This information may be used to determine a heading, a speed, and a location for the object, as well as whether there is a change in speed (such as acceleration or deceleration) or heading. As shown in the example of
Likewise, the perception system may also track the location of the bicyclist 570 from the location 572 (shown in
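The heading and speed determination described above can be illustrated with a short sketch. The two-sample differencing, the coordinate frame, and the function name are assumptions for illustration only, not the disclosed implementation:

```python
import math

# Illustrative only: estimate heading and speed from two tracked positions
# (x, y in meters, t in seconds) recorded over a brief tracking interval.
def heading_and_speed(p0, p1):
    (x0, y0, t0), (x1, y1, t1) = p0, p1
    dx, dy, dt = x1 - x0, y1 - y0, t1 - t0
    speed = math.hypot(dx, dy) / dt             # meters per second
    heading = math.degrees(math.atan2(dy, dx))  # degrees, counterclockwise from +x
    return heading, speed

# Example: an object that moved 5 m "north" in 0.5 s is heading ~90 deg at 10 m/s.
print(heading_and_speed((0.0, 0.0, 0.0), (0.0, 5.0, 0.5)))
```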
In addition to moving objects, the perception system 172 may also track stationary objects over time to determine that such objects are in fact stationary. In this regard, some objects, such as the lane lines 510, 512, 514, and 516, the crosswalks 530, 532, and 534, and the turn only lanes 552 and 554, which will not have moved in the brief period of time may be identified as stationary.
The perception system may also identify contextual information. As an example, the perception system may detect a turn signal or a deceleration of the vehicle 580, for example, by identifying the status of a turn signal or brake lights from an image captured by a camera of the vehicle using any known image processing technique. In examples where the rear of vehicle 580 is within a line of sight of the sensors of the perception system, the perception system may detect the turn signal or brake lights directly. Alternatively, the perception system may detect a change in speed using the past trajectory described above.
The perception system may also identify contextual information about humans. For example, the perception system may detect a direction that a pedestrian is facing or gestures, such as hand signals or nodding. Similarly, the perception system may detect the direction, heading, location and speed of a cyclist. Additionally, the perception system may also determine hand signals and other gestures performed by a cyclist.
Other examples of contextual information may include information about other objects in the vehicle 100's environment. Additionally, the contextual information may include detecting the color or shape of a traffic sign, the color of a traffic light, or a flashing signal, for example, from an image captured by a camera of the vehicle using any known image processing technique.
Using the past trajectories and speeds, the vehicle's one or more computing devices may determine a set of possible actions for each non-stationary object.
In addition, the detailed map information may also be used to limit the set of possible actions. For example, the one or more computing devices may detect the speed of a given vehicle, for example, to be 25 MPH. If the given vehicle begins to accelerate, for example at a rate of 2 m/s², the one or more computing devices may determine that the set of possible actions includes the vehicle continuing to accelerate at the same rate for a given period of time, reducing the rate of acceleration after a short period, increasing the rate of acceleration after a short period, continuing to accelerate until a specific speed is reached, etc. However, combining the location information with the speed limits of the detailed map information, the one or more computing devices may determine that the given vehicle is traveling in a 35 MPH zone. Given this information, the one or more computing devices may eliminate possible actions where the vehicle continues to accelerate after a speed of 45 MPH is reached. Thus, any action where the given vehicle would keep accelerating at 2 m/s² for the next 10 seconds to reach approximately 70 MPH may be eliminated from the set of possible actions.
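As a rough numerical illustration of this speed-limit pruning, consider the sketch below. The 10 MPH margin over the posted limit, the helper name, and the unit conversion are assumptions for illustration; the disclosure only gives the 25/35/45 MPH example:

```python
MPH_PER_MPS = 2.23694  # miles per hour gained per 1 m/s of speed

# Illustrative only: reject actions whose predicted speed exceeds the posted
# limit plus an assumed margin (45 MPH in a 35 MPH zone in the example above).
def exceeds_plausible_speed(current_mph, accel_mps2, duration_s,
                            speed_limit_mph, margin_mph=10.0):
    predicted_mph = current_mph + accel_mps2 * duration_s * MPH_PER_MPS
    return predicted_mph > speed_limit_mph + margin_mph

# A vehicle at 25 MPH holding 2 m/s^2 for 10 s would be near 70 MPH, well
# above the 45 MPH cutoff, so the corresponding action is discarded.
print(exceeds_plausible_speed(25.0, 2.0, 10.0, 35.0))  # True
```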
Another example of the detailed map information limiting the set of possible actions may be based on the relative location of the detected object. For example, the detailed map information may be used to identify lane information of the detected object. For instance, referring to
Additionally, the detailed map information may also be used to eliminate possible actions based on the detected object's proximity to landmarks, such as the intersection 502 and stop sign 505 in
Referring to
Additionally, the map information and contextual information may be used to limit or add actions to the set of possible actions. For example, the map information or contextual information may indicate temporary construction blocking a lane. In this regard, the temporary construction may limit the set of possible actions for the vehicle 100. Additionally, the contextual information may indicate a person directing traffic through a construction zone. In this regard, the person directing traffic may be directing vehicles into an oncoming lane to navigate around the construction. The one or more computing devices may add a new action (i.e., driving in the oncoming lane) to the set of possible actions.
In addition, the one or more computing devices may reduce or limit the set of possible actions for each detected object using kinematic or dynamic models of feasible behavior for the detected object. For instance, the capabilities of a typical passenger vehicle would limit certain maneuvers. In this regard, a vehicle traveling in one direction at 25 MPH could not immediately change its heading so as to be traveling 25 MPH in the reverse direction in the next instant. Referring to
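A minimal sketch of one such kinematic feasibility check appears below. The lateral-acceleration bound and the simple yaw-rate reasoning are assumed for illustration and are not the disclosed model of feasible behavior:

```python
import math

# Illustrative only: reject heading changes that would require an implausibly
# tight turn for the object's current speed.
def heading_change_feasible(speed_mps, heading_change_deg, dt_s,
                            max_lateral_accel=3.0):
    if dt_s <= 0:
        return False
    yaw_rate = math.radians(abs(heading_change_deg)) / dt_s  # rad/s
    # Lateral acceleration needed for this yaw rate at this speed: a = v * omega.
    return speed_mps * yaw_rate <= max_lateral_accel

# A 180 degree reversal within one second at 25 MPH (~11 m/s) is rejected.
print(heading_change_feasible(11.2, 180.0, 1.0))  # False
```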
Based on the generated set of possible actions, the vehicle's one or more computing devices may generate a set of possible future trajectories. Each future trajectory may include a path including a set of locations, headings, speeds, accelerations or decelerations, and curvature that the detected object could take for a given action in the set of possible actions. Curvature is the rate of change in heading over an arc length. In this regard, curvature may indicate the turning radius of a vehicle, which affects the position and heading of the vehicle. The set of future trajectories may be based upon the detailed map information, past trajectories, contextual information, and speeds. Given that more than one trajectory could be determined for each possible action, each action may therefore be associated with one or more trajectories.
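For illustration, the quantities listed above could be carried in data structures along the lines of the following sketch; the field names and layout are assumptions, not the disclosed representation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrajectoryPoint:
    x_m: float              # predicted location (meters, map frame)
    y_m: float
    heading_deg: float      # predicted heading
    speed_mps: float        # predicted speed
    accel_mps2: float       # predicted acceleration (negative for deceleration)
    curvature_per_m: float  # rate of change of heading over arc length (1/m)
    t_s: float              # time offset into the future (seconds)

@dataclass
class CandidateTrajectory:
    action: str                                   # the possible action this path realizes
    points: List[TrajectoryPoint] = field(default_factory=list)
    likelihood: float = 0.0                       # filled in later by the classifier
```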
Turning to
The set of future trajectories may also be based upon the detailed map information. For example, when generating a trajectory for the left turn action 673 (from
The one or more computing devices may use the detailed map information to determine where the detected object is likely to stop when generating possible trajectories. For example, when determining trajectories related to stopping for traffic lights, at stop signs, or yielding to oncoming traffic, the one or more computing devices may refer to the detailed map information to determine a location of a stop line or where lanes overlap. Accordingly, the generated trajectory may stop at a stopping point indicated in the detailed map information.
In some examples, contextual information of other detected objects may be used to generate the set of future trajectories. For example, other objects may block, or otherwise obstruct, one or more of the set of possible actions of the detected object. The one or more computing devices may factor in the blocking or obstructing object when determining a trajectory for the action. Accordingly, the one or more computing devices may generate a trajectory for the action that includes stopping or slowing down to account for the other objects blocking the way of the detected object. Further examples include generating trajectories that nudge (i.e., maneuver around) other detected objects. For example, the one or more computing devices may generate a trajectory for a bicyclist that swerves around a parked car blocking the bike lane. Further examples include generating a trajectory for a detected object that may be performing a U-turn. The one or more vehicle computing devices may use the curvature of the turn to determine how fast the U-turn may be performed. For example, a vehicle cannot perform a U-turn at a speed of 35 MPH or greater. Thus, the U-turn trajectory may limit the speed of the detected object to less than 35 MPH.
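The curvature-based speed cap can be illustrated as follows. The lateral-acceleration limit is an assumed value; the disclosure only states that a U-turn cannot be performed at 35 MPH or greater:

```python
import math

MPS_PER_MPH = 0.44704

# Illustrative only: cap a trajectory's speed by its curvature so that
# v^2 * curvature does not exceed an assumed lateral-acceleration limit.
def max_speed_for_curvature(curvature_per_m, max_lateral_accel=4.0):
    if curvature_per_m <= 0:
        return float("inf")  # straight segment: no curvature-based cap
    return math.sqrt(max_lateral_accel / curvature_per_m)

# A U-turn with roughly a 5 m radius (curvature 0.2 1/m) caps speed near
# 4.5 m/s (~10 MPH), comfortably below the 35 MPH bound mentioned above.
print(max_speed_for_curvature(0.2) / MPS_PER_MPH)
```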
After determining one or more future trajectories for each of the possible actions in the set of possible actions, the one or more computing devices may calculate a weight or probability (e.g., likelihood value) for each future trajectory based upon the contextual information. As an example, the vehicle's one or more computing devices may use a machine learning classifier to determine a likelihood value for each trajectory for the detected object based on the future trajectories and the contextual information. Accordingly, the classifier may output a value indicating the likelihood value for each of the potential future trajectories. The classifier may be trained on data recorded by sensors which have captured information about how objects in the environment are likely to act at different locations at different times. In some examples, the classifier may learn from processing data about different objects in a variety of different settings. The one or more computing devices may regularly (e.g., multiple times a second) update the weights or probabilities associated with each of the future trajectories up until the tracked object commits to one of the predicted trajectories (e.g., running a red light or making a turn).
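A hedged sketch of scoring each candidate trajectory with a trained classifier is shown below. The feature set and the suggestion of a logistic-regression model are assumptions for illustration; the disclosure only says a machine learning classifier outputs a likelihood value per trajectory:

```python
from dataclasses import dataclass
from typing import Sequence

# Illustrative feature set only; the disclosed features are not specified.
@dataclass
class TrajectoryFeatures:
    turn_signal_on: bool            # contextual information about the object
    distance_to_stop_line_m: float  # from the detailed map information
    heading_change_deg: float       # net heading change along the trajectory
    mean_speed_mps: float

def to_vector(f: TrajectoryFeatures) -> list:
    return [float(f.turn_signal_on), f.distance_to_stop_line_m,
            f.heading_change_deg, f.mean_speed_mps]

def score_trajectories(classifier, features: Sequence[TrajectoryFeatures]):
    """classifier is any fitted binary model exposing predict_proba
    (e.g., a trained sklearn LogisticRegression); returns one likelihood
    value per candidate trajectory."""
    X = [to_vector(f) for f in features]
    return list(classifier.predict_proba(X)[:, 1])
```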
Turning to
In the example of
As another example, the future trajectories 710 and 730 each have a likelihood value of 20% because in at least some cases, vehicles that approached a stop sign without activating a turn signal did in fact make left or right turns at the same rate. In this regard, the likelihood value of future trajectory 710 or future trajectory 730 may be greater if the vehicle 580 were to have activated a turn signal. Similarly, if the vehicle 580 changes heading and moves closer to the double-line 514, the one or more computing devices may determine that there is a greater likelihood of the vehicle 580 making a left turn and thus, the likelihood value for future trajectory 730 may be increased.
In a further example, the future trajectory 740 has a likelihood value of 10% because in at least some cases, vehicles approaching stop signs without turn signals rolled into an intersection without stopping. Finally, the future trajectory 750 has a likelihood value of 1%, indicating that it is very unlikely that the vehicle 580 would reverse down the road, as very few or no other vehicles at stop signs performed an illegal action and reversed down a road in the wrong direction.
After computing the likelihood values for each of the future trajectories of the detected object, the one or more computing devices may determine a final future trajectory for the detected object. In order to do so, a threshold value may be used to discard or filter unlikely trajectories from the set of possible future trajectories. As an example, if the threshold value is 15% or 0.15, trajectories with a likelihood value of less than 15% or 0.15 may be discarded or filtered. Turning to
If any future trajectories remain in the set of possible future trajectories after any trajectories that do not meet the threshold value are discarded or filtered, the one or more computing devices may select the future trajectory with the highest likelihood value as the final future trajectory as discussed in greater detail with respect to
The final future trajectory may also be determined based upon the relationship between the location of the vehicle and the objects detected in the vehicle's environment. When such situational relationships are identified, a second threshold may be used to eliminate additional trajectories of the set of trajectories. The remaining trajectories may then all be treated as the final future trajectory.
One example of a situational relationship is a path crossing. Crossing the vehicle's path may include, for example, vehicles traveling perpendicular to the vehicle 100 or making a left turn in the vehicle 100's path. In the context of highways, crossing the vehicle 100's path may include lane changes. Because of the dangers involved in a path crossing, the vehicle's one or more computers may use a second threshold and take into consideration multiple trajectories that are likely to occur. In one instance, the one or more computing devices may identify a situational relationship between the vehicle 100 and future trajectories of vehicle 580 in that multiple of the future trajectories indicate that vehicle 580 is likely to cross the path of the vehicle 100. Thus, trajectories in the set of trajectories (that have not already been eliminated) may be compared to a second threshold value. As an example, if this second threshold value is 20%, the remaining trajectories (i.e., 710, 720, and 730) may each be compared to the second threshold; because the likelihood values of future trajectories 710, 720, and 730 are each equal to or greater than 20%, all three remain in the set. The one or more computing devices may consider all three remaining trajectories in determining the final future trajectory of the vehicle 580.
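A short worked sketch of this two-stage thresholding, using the example likelihood values for trajectories 710 through 750, follows. The helper name and dictionary layout are assumptions for illustration:

```python
# Illustrative only: first threshold (15%) discards unlikely trajectories;
# a path-crossing situational relationship triggers a second threshold (20%).
def filter_trajectories(likelihoods, threshold):
    return {tid: p for tid, p in likelihoods.items() if p >= threshold}

likelihoods = {"710": 0.20, "720": 0.55, "730": 0.20, "740": 0.10, "750": 0.01}

# First threshold: trajectories 740 and 750 are discarded.
remaining = filter_trajectories(likelihoods, 0.15)

# Second threshold: trajectories 710, 720, and 730 all remain and are each
# treated as a final future trajectory the vehicle must avoid.
final = filter_trajectories(remaining, 0.20)
print(sorted(final))  # ['710', '720', '730']
```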
Another example of a situational relationship may be based on the relative location of a pedestrian who is crossing a roadway proximate to the vehicle. For example, where a pedestrian is crossing a roadway in a crosswalk, a single trajectory may be appropriate for the final future trajectory. However, if the pedestrian is not crossing in a crosswalk, again given the dangers involved, the vehicle's one or more computers may use a second threshold and take into consideration multiple trajectories that are likely to occur. As another example, if the crosswalk includes a median or sidewalk portion where a pedestrian can stop and wait between lanes of a roadway, this may be another situational relationship where the vehicle's one or more computers may use a second threshold to take into consideration multiple trajectories.
Another example of a situational relationship may include another object located in a two-way left-turn lane, commonly called a suicide lane, where traffic from either direction on a roadway may enter to make a left turn. Because these left turns may be made by a vehicle at various points along a two-way left-turn lane, vehicle 100's one or more computing devices may use a second threshold to take into consideration multiple trajectories of other vehicles in such two-way left-turn lanes. This may be the case even where the other vehicle is not necessarily crossing the path of vehicle 100, such as where vehicle 100 is also located in the two-way left-turn lane and moving in the same or the opposite direction of traffic as the vehicle.
If no trajectories remain after eliminating trajectories from the set of trajectories as described above, then the set of future trajectories may be further analyzed to determine a final future trajectory. In one example, the vehicle's computing devices may determine commonly occurring points or locations between the trajectories. The most commonly occurring locations may be strung together into a new trajectory and identified as the final future trajectory. For example, if the threshold is 15% and each possible trajectory of a set has a likelihood value of less than 15%, all of the trajectories would fall below the threshold value. If at least some of the trajectories include the detected object moving forward in the same general direction, then the common portions of these trajectories may be strung together and identified as the final future trajectory.
The vehicle's one or more computing devices may determine a final future trajectory (or most likely trajectories) of the detected object using waypoints. For instance, each trajectory may include a number of waypoints determined by the vehicle's one or more computing devices. A waypoint may define a position or location and a velocity of the detected object along a trajectory at a given time. Thus, each waypoint may include a projected future position or location of the detected object, a projected future velocity of the detected object, and a time or timestamp for the waypoint. Waypoints may also be determined for a projected future path of vehicle 100. This may be used to determine whether any waypoints of vehicle 100 and the waypoints of the set of trajectories are within a predetermined distance of one another.
Referring to
As noted above, the one or more computing devices of the vehicle 100 may determine waypoints for each of the trajectories 710, 720, 730, and 740. As shown in
In one example, the one or more computing devices of vehicle 100 may determine whether any of the waypoints are within a predetermined distance of each other at a predetermined time. If so, these trajectories may be used to determine a final future trajectory by summing their probabilities and comparing that sum to the threshold value. As an example, a predetermined distance may be between about 2 meters and about 6 meters.
Referring to
The sum of the probabilities may be compared to the threshold value. If the sum of the probabilities meets the threshold value, then the one or more computing devices may determine that the trajectory 720, the trajectory 730, and the trajectory 740 are together a final future trajectory. In this regard, each of these trajectories may be considered a final future trajectory of the vehicle 580.
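The grouping-and-summing fallback described above can be sketched as follows. The data layout, the pairwise grouping strategy, and the 4 m default distance are assumptions for illustration; the disclosure gives roughly 2 to 6 meters as the predetermined distance:

```python
import math
from itertools import combinations

# Illustrative only: group trajectories whose waypoints at the same timestamp
# lie within a predetermined distance, then compare their summed likelihoods
# to the threshold value.
def waypoints_close(wp_a, wp_b, max_dist_m=4.0):
    (xa, ya), (xb, yb) = wp_a, wp_b
    return math.hypot(xa - xb, ya - yb) <= max_dist_m

def grouped_likelihood(trajectories, timestamp, threshold):
    """trajectories: {tid: {'likelihood': p, 'waypoints': {timestamp: (x, y)}}}."""
    ids = list(trajectories)
    close_pairs = [
        (a, b) for a, b in combinations(ids, 2)
        if waypoints_close(trajectories[a]["waypoints"][timestamp],
                           trajectories[b]["waypoints"][timestamp])
    ]
    grouped = {tid for pair in close_pairs for tid in pair}
    total = sum(trajectories[tid]["likelihood"] for tid in grouped)
    # If the summed likelihood meets the threshold, each grouped trajectory is
    # treated as a final future trajectory.
    return sorted(grouped), total, total >= threshold
```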
In another example, the waypoints may be used to determine the final future trajectory based on whether any of the waypoints of a projected future path of vehicle 100 are within a predetermined distance of one of the waypoints of a possible future trajectory of a detected object. If so, these trajectories may be used to determine one or more final future trajectories of the detected object.
For example, the one or more computing devices may determine that the path 780 of the vehicle 100 includes waypoint 780-1 and waypoint 780-2. The one or more computing devices may compare the waypoints of path 780 to the waypoints of the trajectories of the set of possible future trajectories (710, 720, 730, and 740) at various timestamps. If a waypoint of the path 780 for a particular timestamp is within a predetermined distance of a given waypoint with the same particular timestamp of any of the set of possible future trajectories 710, 720, 730, and 740, then the one or more computing devices may identify the possible future trajectory associated with the given waypoint as the final future trajectory for the object.
For example, in
For timestamp T=2, the one or more computing devices may compare waypoint 780-2 to waypoint 710-2, waypoint 720-2, waypoint 730-2, and waypoint 740-2 to determine whether any of these waypoints are within a predetermined distance of waypoint 780-2. In this regard, the one or more computing devices may determine that waypoint 710-2 is within the predetermined distance of waypoint 780-2 at timestamp T=2. Furthermore, the one or more computing devices may determine that waypoint 720-2, waypoint 730-2, and waypoint 740-2 are not within the predetermined distance of waypoint 780-2. Therefore, waypoint 720-2, waypoint 730-2, and waypoint 740-2 may not be used to determine the final future trajectory at timestamp T=2. The one or more computing devices may determine trajectory 710 as the final future trajectory for the timestamp T=2 since waypoint 710-2 is within the predetermined distance of waypoint 780-2 at timestamp T=2.
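A minimal sketch of this per-timestamp comparison between the vehicle's own planned path and the candidate trajectories follows. The data layout and the 4 m distance are illustrative assumptions:

```python
import math

# Illustrative only: flag any candidate trajectory whose waypoint at some
# timestamp lies within a predetermined distance of the vehicle's own path
# waypoint at that same timestamp (e.g., trajectory 710 at timestamp T=2).
def conflicting_trajectories(vehicle_path, candidate_trajectories, max_dist_m=4.0):
    """vehicle_path and each candidate: {timestamp: (x, y)}."""
    conflicts = set()
    for t, (vx, vy) in vehicle_path.items():
        for tid, waypoints in candidate_trajectories.items():
            if t not in waypoints:
                continue
            ox, oy = waypoints[t]
            if math.hypot(vx - ox, vy - oy) <= max_dist_m:
                conflicts.add(tid)
    return conflicts
```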
After determining the final future trajectory (or trajectories), the one or more computing devices of the vehicle 100 may determine how to maneuver the vehicle 100. For example, the one or more computing devices may determine a route that avoids the final future trajectory of the detected object. Accordingly, the one or more computing devices may maneuver the vehicle 100 according to the determined route such that the vehicle 100 avoids intersecting the final future trajectory of the detected object. For example, turning to
In addition to the examples discussed above,
Based on the received information, the one or more computing devices may generate a set of possible actions for the vehicle 800. For example, the vehicle 800 may take various actions, such as turning left 810, continuing straight 820, turning right 830, or traveling in reverse 840. As noted above, some actions, such as traveling in reverse, may be discarded using kinematic or dynamic models of feasible behavior for the detected object.
The one or more computing devices may then calculate future trajectories for each of the actions in the set of actions as shown in
In this example, prior to the vehicle 800 reaching the intersection 502, each of the trajectories 810, 820, and 830 may have an equal probability of occurring. However, once the vehicle 800 is within some distance of the intersection 502, these probabilities may begin to change until they reach the probabilities identified in table 890 of
Again, based on the likelihood values, the vehicle's one or more computing devices may identify a final future trajectory. Turning to
The final future trajectory may be provided to the one or more computing devices to determine how to maneuver the vehicle 100 in order to avoid the final future trajectory of vehicle 800 and continue through the intersection 502. Accordingly, the vehicle 100 may follow the path 880 into the left-turn only lane 594 to maneuver around the vehicle 800 to avoid the trajectory 830 and the vehicle 800.
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
The present application is a continuation of U.S. patent application Ser. No. 15/802,820, filed Nov. 3, 2017, which is a continuation of U.S. patent application Ser. No. 15/278,341, filed Sep. 28, 2016, now issued as U.S. Pat. No. 9,914,452 on Mar. 3, 2018, which is a continuation of U.S. patent application Ser. No. 14/873,647, filed Oct. 2, 2015, now issued as U.S. Pat. No. 9,669,827 on Jun. 6, 2017, which is a continuation of U.S. patent application Ser. No. 14/505,007, filed Oct. 2, 2014, now issued as U.S. Pat. No. 9,248,834 on Feb. 2, 2016, the disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
1924984 | Fageol | Aug 1933 | A |
3186508 | Lamont | Jun 1965 | A |
3324805 | Mulch | Jun 1967 | A |
3411139 | Lynch et al. | Nov 1968 | A |
3596728 | Neville | Aug 1971 | A |
4372414 | Anderson | Feb 1983 | A |
4387783 | Carman | Jun 1983 | A |
4656834 | Elpern | Apr 1987 | A |
4924795 | Ottemann | May 1990 | A |
4970653 | Kenue | Nov 1990 | A |
4982072 | Takigami | Jan 1991 | A |
5187666 | Watanabe | Feb 1993 | A |
5415468 | Latarnik | May 1995 | A |
5448487 | Arai | Sep 1995 | A |
5470134 | Toepfer et al. | Nov 1995 | A |
5521579 | Bernhard | May 1996 | A |
5684696 | Rao et al. | Nov 1997 | A |
5774069 | Tanaka et al. | Jun 1998 | A |
5790403 | Nakayama | Aug 1998 | A |
5906645 | Kagawa et al. | May 1999 | A |
5913376 | Takei | Jul 1999 | A |
5954781 | Slepian et al. | Sep 1999 | A |
6055042 | Sarangapani | Apr 2000 | A |
6064926 | Sarangapani et al. | May 2000 | A |
6070682 | Isogai et al. | Jun 2000 | A |
6151539 | Bergholz et al. | Nov 2000 | A |
6195610 | Kaneko | Feb 2001 | B1 |
6226570 | Hahn | May 2001 | B1 |
6321147 | Takeda et al. | Nov 2001 | B1 |
6332354 | Lalor et al. | Dec 2001 | B1 |
6343247 | Jitsukata et al. | Jan 2002 | B2 |
6385539 | Wilson et al. | May 2002 | B1 |
6414635 | Stewart et al. | Jul 2002 | B1 |
6438472 | Tano et al. | Aug 2002 | B1 |
6438491 | Farmer | Aug 2002 | B1 |
6453056 | Laumeyer et al. | Sep 2002 | B2 |
6470874 | Mertes | Oct 2002 | B1 |
6504259 | Kuroda | Jan 2003 | B1 |
6516262 | Takenaga et al. | Feb 2003 | B2 |
6560529 | Janssen | May 2003 | B1 |
6591172 | Oda et al. | Jul 2003 | B2 |
6606557 | Kotzin | Aug 2003 | B2 |
6643576 | O Connor et al. | Nov 2003 | B1 |
6832156 | Farmer | Dec 2004 | B2 |
6836719 | Andersson et al. | Dec 2004 | B2 |
6847869 | Dewberry et al. | Jan 2005 | B2 |
6862524 | Nagda | Mar 2005 | B1 |
6876908 | Cramer et al. | Apr 2005 | B2 |
6934613 | Yun | Aug 2005 | B2 |
6963657 | Nishigaki et al. | Nov 2005 | B1 |
7011186 | Frentz et al. | Mar 2006 | B2 |
7031829 | Nisiyama | Apr 2006 | B2 |
7085633 | Nishira et al. | Aug 2006 | B2 |
7102496 | Ernst, Jr. et al. | Sep 2006 | B1 |
7177760 | Kudo | Feb 2007 | B2 |
7194347 | Harumoto et al. | Mar 2007 | B2 |
7207304 | Lwatsuki | Apr 2007 | B2 |
7233861 | Van Buer et al. | Jun 2007 | B2 |
7327242 | Holloway et al. | Feb 2008 | B2 |
7340332 | Underdahl | Mar 2008 | B2 |
7346439 | Bodin | Mar 2008 | B2 |
7373237 | Wagner et al. | May 2008 | B2 |
7394046 | Olsson et al. | Jul 2008 | B2 |
7486802 | Hougen | Feb 2009 | B2 |
7499774 | Barrett et al. | Mar 2009 | B2 |
7499776 | Allard et al. | Mar 2009 | B2 |
7499804 | Svendsen et al. | Mar 2009 | B2 |
7515101 | Bhogal et al. | Apr 2009 | B1 |
7565241 | Tauchi | Jul 2009 | B2 |
7579942 | Kalik | Aug 2009 | B2 |
7656280 | Hines et al. | Feb 2010 | B2 |
7694555 | Howell et al. | Apr 2010 | B2 |
7778759 | Tange et al. | Aug 2010 | B2 |
7818124 | Herbst et al. | Oct 2010 | B2 |
7835859 | Bill | Nov 2010 | B2 |
7865277 | Larson et al. | Jan 2011 | B1 |
7894951 | Norris et al. | Feb 2011 | B2 |
7908040 | Howard et al. | Mar 2011 | B2 |
7956730 | White et al. | Jun 2011 | B2 |
7979175 | Allard et al. | Jul 2011 | B2 |
8024102 | Swoboda et al. | Sep 2011 | B2 |
8050863 | Trepagnier et al. | Nov 2011 | B2 |
8078349 | Prada Gomez et al. | Dec 2011 | B1 |
8095313 | Blackburn | Jan 2012 | B1 |
8099213 | Zhang et al. | Jan 2012 | B2 |
8126642 | Trepagnier et al. | Feb 2012 | B2 |
8190322 | Lin et al. | May 2012 | B2 |
8194927 | Zhang et al. | Jun 2012 | B2 |
8195341 | Huang et al. | Jun 2012 | B2 |
8244408 | Lee et al. | Aug 2012 | B2 |
8244458 | Blackburn | Aug 2012 | B1 |
8260515 | Huang et al. | Sep 2012 | B2 |
8280601 | Huang et al. | Oct 2012 | B2 |
8280623 | Trepagnier et al. | Oct 2012 | B2 |
8311274 | Bergmann et al. | Nov 2012 | B2 |
8352111 | Mudalige | Jan 2013 | B2 |
8352112 | Mudalige | Jan 2013 | B2 |
8412449 | Trepagnier et al. | Apr 2013 | B2 |
8452506 | Groult | May 2013 | B2 |
8457827 | Ferguson et al. | Jun 2013 | B1 |
8634980 | Urmson et al. | Jan 2014 | B1 |
8694236 | Takagi | Apr 2014 | B2 |
8706394 | Trepagnier et al. | Apr 2014 | B2 |
8718861 | Montemerlo et al. | May 2014 | B1 |
8724093 | Sakai et al. | May 2014 | B2 |
8775063 | Zeng | Jul 2014 | B2 |
8831813 | Ferguson et al. | Sep 2014 | B1 |
8855860 | Isaji et al. | Oct 2014 | B2 |
8874267 | Dolgov et al. | Oct 2014 | B1 |
8918277 | Niem et al. | Dec 2014 | B2 |
8929604 | Platonov et al. | Jan 2015 | B2 |
8948954 | Ferguson et al. | Feb 2015 | B1 |
8949016 | Ferguson et al. | Feb 2015 | B1 |
8970397 | Nitanda et al. | Mar 2015 | B2 |
8972093 | Joshi | Mar 2015 | B2 |
9008369 | Schofield et al. | Apr 2015 | B2 |
9062979 | Ferguson et al. | Jun 2015 | B1 |
9063548 | Ferguson et al. | Jun 2015 | B1 |
9081383 | Montemerlo et al. | Jul 2015 | B1 |
9182759 | Wimmer et al. | Nov 2015 | B2 |
9248834 | Ferguson | Feb 2016 | B1 |
9669827 | Ferguson | Jun 2017 | B1 |
9914452 | Ferguson | Mar 2018 | B1 |
20010024095 | Fitzgibbon et al. | Sep 2001 | A1 |
20010037927 | Nagler | Nov 2001 | A1 |
20020188499 | Jenkins | Dec 2002 | A1 |
20030014302 | Jablin | Jan 2003 | A1 |
20030016804 | Sheha | Jan 2003 | A1 |
20030037977 | Tatara et al. | Feb 2003 | A1 |
20030055554 | Shioda | Mar 2003 | A1 |
20030093209 | Andersson et al. | May 2003 | A1 |
20030125963 | Haken | Jul 2003 | A1 |
20040243292 | Roy | Dec 2004 | A1 |
20050012589 | Kokubu | Jan 2005 | A1 |
20050099146 | Nishikawa et al. | May 2005 | A1 |
20050125154 | Kawasaki | Jun 2005 | A1 |
20050131645 | Panopoulos | Jun 2005 | A1 |
20050149251 | Donath et al. | Jul 2005 | A1 |
20050273251 | Nix | Dec 2005 | A1 |
20060037573 | Lwatsuki | Feb 2006 | A1 |
20060082437 | Yuhara | Apr 2006 | A1 |
20060089764 | Filippov et al. | Apr 2006 | A1 |
20060089765 | Pack et al. | Apr 2006 | A1 |
20060089800 | Svendsen et al. | Apr 2006 | A1 |
20060116801 | Shirley et al. | Jun 2006 | A1 |
20060173841 | Bill et al. | Aug 2006 | A1 |
20060178240 | Hansel | Aug 2006 | A1 |
20060276942 | Anderson | Dec 2006 | A1 |
20070010942 | Bill | Jan 2007 | A1 |
20070024501 | Yeh | Feb 2007 | A1 |
20070112477 | Van Zanten et al. | May 2007 | A1 |
20070142992 | Gronau et al. | Jun 2007 | A1 |
20070149214 | Walsh | Jun 2007 | A1 |
20070165910 | Nagaoka et al. | Jul 2007 | A1 |
20070193798 | Allard et al. | Aug 2007 | A1 |
20070203617 | Haug | Aug 2007 | A1 |
20070225909 | Sakano | Sep 2007 | A1 |
20070239331 | Kaplan | Oct 2007 | A1 |
20070247281 | Shimomura | Oct 2007 | A1 |
20070279250 | Kume et al. | Dec 2007 | A1 |
20080021628 | Tryon | Jan 2008 | A1 |
20080033615 | Khajepour et al. | Feb 2008 | A1 |
20080039991 | May et al. | Feb 2008 | A1 |
20080040039 | Takagi | Feb 2008 | A1 |
20080056535 | Bergmann et al. | Mar 2008 | A1 |
20080059015 | Whittaker et al. | Mar 2008 | A1 |
20080059048 | Kessler | Mar 2008 | A1 |
20080084283 | Kalik | Apr 2008 | A1 |
20080089556 | Salgian et al. | Apr 2008 | A1 |
20080120025 | Naitou et al. | May 2008 | A1 |
20080120171 | Ikeuchi et al. | May 2008 | A1 |
20080147253 | Breed | Jun 2008 | A1 |
20080161987 | Breed | Jul 2008 | A1 |
20080167771 | Whittaker et al. | Jul 2008 | A1 |
20080183512 | Benzinger | Jul 2008 | A1 |
20080188246 | Sheha | Aug 2008 | A1 |
20080195268 | Sapilewski et al. | Aug 2008 | A1 |
20080277183 | Huang | Nov 2008 | A1 |
20080303696 | Aso et al. | Dec 2008 | A1 |
20080306969 | Mehta et al. | Dec 2008 | A1 |
20090005959 | Bargman et al. | Jan 2009 | A1 |
20090010494 | Bechtel et al. | Jan 2009 | A1 |
20090074249 | Moed et al. | Mar 2009 | A1 |
20090082879 | Dooley et al. | Mar 2009 | A1 |
20090115594 | Han | May 2009 | A1 |
20090164071 | Takeda | Jun 2009 | A1 |
20090198400 | Allard et al. | Aug 2009 | A1 |
20090248231 | Tsuyoshi | Oct 2009 | A1 |
20090276154 | Subramanian | Nov 2009 | A1 |
20090287367 | Salinger | Nov 2009 | A1 |
20090287368 | Bonne | Nov 2009 | A1 |
20090306834 | Hjelm et al. | Dec 2009 | A1 |
20090313077 | Wheeler, IV | Dec 2009 | A1 |
20090313095 | Hurpin | Dec 2009 | A1 |
20090319096 | Offer et al. | Dec 2009 | A1 |
20090319112 | Fregene et al. | Dec 2009 | A1 |
20090322872 | Muehlmann et al. | Dec 2009 | A1 |
20090326799 | Crook | Dec 2009 | A1 |
20100010699 | Taguchi et al. | Jan 2010 | A1 |
20100014714 | Zhang et al. | Jan 2010 | A1 |
20100017056 | Asakura | Jan 2010 | A1 |
20100042282 | Taguchi et al. | Feb 2010 | A1 |
20100052945 | Breed | Mar 2010 | A1 |
20100066587 | Yamauchi et al. | Mar 2010 | A1 |
20100076640 | Maekawa et al. | Mar 2010 | A1 |
20100079590 | Kuehnle et al. | Apr 2010 | A1 |
20100179715 | Puddy | Jul 2010 | A1 |
20100179720 | Lin et al. | Jul 2010 | A1 |
20100191433 | Groult | Jul 2010 | A1 |
20100198491 | Mays | Aug 2010 | A1 |
20100205132 | Taguchi et al. | Aug 2010 | A1 |
20100207787 | Catten et al. | Aug 2010 | A1 |
20100228419 | Lee et al. | Sep 2010 | A1 |
20100241297 | Aoki | Sep 2010 | A1 |
20100253542 | Seder et al. | Oct 2010 | A1 |
20100256836 | Mudalige | Oct 2010 | A1 |
20100265354 | Kameyama | Oct 2010 | A1 |
20110010131 | Miyajima et al. | Jan 2011 | A1 |
20110040481 | Trombley et al. | Feb 2011 | A1 |
20110071718 | Norris et al. | Mar 2011 | A1 |
20110099040 | Felt et al. | Apr 2011 | A1 |
20110137520 | Rector et al. | Jun 2011 | A1 |
20110150348 | Anderson | Jun 2011 | A1 |
20110206273 | Plagemann et al. | Aug 2011 | A1 |
20110210866 | David | Sep 2011 | A1 |
20110213511 | Visconti et al. | Sep 2011 | A1 |
20110239146 | Dutta | Sep 2011 | A1 |
20110246156 | Zecha et al. | Oct 2011 | A1 |
20110254655 | Maalouf | Oct 2011 | A1 |
20110264317 | Druenert et al. | Oct 2011 | A1 |
20120053775 | Nettleton et al. | Mar 2012 | A1 |
20120069185 | Stein | Mar 2012 | A1 |
20120083960 | Zhu et al. | Apr 2012 | A1 |
20120114178 | Platonov et al. | May 2012 | A1 |
20120157052 | Quade | Jun 2012 | A1 |
20120271483 | Samukawa et al. | Oct 2012 | A1 |
20120277947 | Boehringer et al. | Nov 2012 | A1 |
20120283912 | Lee et al. | Nov 2012 | A1 |
20120314070 | Zhang et al. | Dec 2012 | A1 |
20130035821 | Bonne et al. | Feb 2013 | A1 |
20130054049 | Uno | Feb 2013 | A1 |
20130054106 | Schmudderich et al. | Feb 2013 | A1 |
20130054128 | Moshchuk et al. | Feb 2013 | A1 |
20130144520 | Ricci | Jun 2013 | A1 |
20130179382 | Fritsch et al. | Jul 2013 | A1 |
20130282277 | Rubin | Oct 2013 | A1 |
20130321400 | Van Os et al. | Dec 2013 | A1 |
20130321422 | Pahwa et al. | Dec 2013 | A1 |
20140067187 | Ferguson et al. | Mar 2014 | A1 |
20140088855 | Ferguson et al. | Mar 2014 | A1 |
20140139369 | Baba | May 2014 | A1 |
20140156164 | Schuberth | Jun 2014 | A1 |
20140180543 | Ueda | Jun 2014 | A1 |
20140195138 | Stelzig et al. | Jul 2014 | A1 |
20140214255 | Dolgov et al. | Jul 2014 | A1 |
20140350836 | Stettner et al. | Nov 2014 | A1 |
20140369168 | Max et al. | Dec 2014 | A1 |
20150112571 | Schmudderich | Apr 2015 | A1 |
20150153735 | Clarke et al. | Jun 2015 | A1 |
20150177007 | Su et al. | Jun 2015 | A1 |
20150198951 | Thor et al. | Jul 2015 | A1 |
20150203107 | Lippman et al. | Jul 2015 | A1 |
20150293216 | O'Dea et al. | Oct 2015 | A1 |
20150302751 | Strauss et al. | Oct 2015 | A1 |
20160327947 | Ishikawa et al. | Nov 2016 | A1 |
20160334230 | Ross et al. | Nov 2016 | A1 |
20160334797 | Ross | Nov 2016 | A1 |
Number | Date | Country |
---|---|---|
101073018 | Nov 2007 | CN |
101364111 | Feb 2009 | CN |
10218010 | Nov 2003 | DE |
10336986 | Mar 2005 | DE |
0884666 | Dec 1998 | EP |
2692064 | Dec 1993 | FR |
H05-246635 | Sep 1993 | JP |
H08-110998 | Apr 1996 | JP |
09066853 | Feb 1997 | JP |
H09-160643 | Jun 1997 | JP |
H09-161196 | Jun 1997 | JP |
H09-166209 | Jun 1997 | JP |
H11-039598 | Feb 1999 | JP |
H11282530 | Oct 1999 | JP |
2000149188 | May 2000 | JP |
2000305625 | Nov 2000 | JP |
2000338008 | Dec 2000 | JP |
2001101599 | Apr 2001 | JP |
2002236993 | Aug 2002 | JP |
2002251690 | Sep 2002 | JP |
2003081039 | Mar 2003 | JP |
2003162799 | Jun 2003 | JP |
2003-205804 | Jul 2003 | JP |
2004-206510 | Jul 2004 | JP |
2004-326730 | Nov 2004 | JP |
2004-345862 | Dec 2004 | JP |
2005062912 | Mar 2005 | JP |
2005067483 | Mar 2005 | JP |
2005071114 | Mar 2005 | JP |
2005-297621 | Oct 2005 | JP |
2005339181 | Dec 2005 | JP |
2006-264530 | Oct 2006 | JP |
2006322752 | Nov 2006 | JP |
2007001475 | Jan 2007 | JP |
2007-022135 | Feb 2007 | JP |
2000-193471 | Jul 2007 | JP |
2007-331458 | Dec 2007 | JP |
2008087545 | Apr 2008 | JP |
2008117082 | May 2008 | JP |
2008152655 | Jul 2008 | JP |
2008170404 | Jul 2008 | JP |
2008213581 | Sep 2008 | JP |
2008257652 | Oct 2008 | JP |
2008290680 | Dec 2008 | JP |
2009026321 | Feb 2009 | JP |
0070941 | Nov 2000 | WO |
2001088827 | Nov 2001 | WO |
2005013235 | Feb 2005 | WO |
2007145564 | Dec 2007 | WO |
2009028558 | Mar 2009 | WO |
Entry |
---|
Carl Crane, et al., Team Gator Nation's Autonomous Vehicle Development For The 2007 DARPA Urban Challenge, Dec. 2007, 27 pages. |
Chinese Office Action for Application No. 201180057942.8 dated Jun. 3, 2015. |
Chinese Office Action for Application No. 201180057954.0 dated Apr. 29, 2015. |
Extended European Search Report for EP Patent Application No. 11831503.5, dated Dec. 3, 2015. |
Erico Guizzo, How Google's Self-Driving Car Works, IEEE.org, IEEE, Oct. 18, 2011, pp. 1/31-31/31. |
Extended European Search Report for European Patent Application No. 11831362.6, dated Mar. 14, 2017. 11 pages. |
Extended European Search Report for European Patent Application No. 11831505.0, dated Apr. 7, 2017, 13 pages. |
Extended European Search Report for European Patent Application No. 17151573.7, dated Apr. 19, 2017. 7 pages. |
Fact Sheet: Beyond Traffic Signals: A Paradigm Shift Intersection Control For Autonomous Vehicles, [online]. [Retrieved Apr. 27, 2011]. Retrieved from the internet: <http://www.fhwa.dot.gov/advancedresearch/pubs/10023/index.cfm>, 3 pages. |
Google Cars Drive Themselves, in Traffic [online]. [Retrieved Aug. 19, 2011] Retrieved from the internet: <http://www.nytimes.com/2010/10/10/science/10google.html>, 4 pages. |
International Search Report and the Written Opinion for Application No. PCT/US2011/054154, dated Apr. 24, 2012. |
International Search Report and the Written Opinion for Application No. PCT/US2011/054896, dated Apr. 25, 2012. |
International Search Report and Written Opinion for Application No. PCT/US2013/061604 dated Jul. 3, 2014. |
International Search Report and Written Opinion for Application No. PCT/US2011/054899 dated May 4, 2012. |
Jaffe, “The First Look at How Google's Self-Driving Car Handles City Streets”, The Atlantic City Lab, Apr. 28, 2014. |
Japanese Office Action for Application No. 2013-532908 dated Sep. 8, 2015. |
Martin Schonhof, Martin Treiber, Arne Kesting, and Dirk Helbing, Autonomous Detection And Anticipation Of Jam Fronts From Messages Propagated by Intervehicle Communication, 2007, pp. 3-12. |
Matthew McNaughton, Motion Planning for Autonomous Driving with a Conformal Spatiotemporal Lattice, International Conference on Robotics and Automation, May 9-13, pp. 4889-4895. |
Notice of Preliminary Rejection for Korean Patent Application No. 10-2013-7011655, dated May 18, 2017. |
Notice of Preliminary Rejection for Korean Patent Application No. 10-2013-7011657 dated Feb. 1, 2016. |
Notice of Reasons for Rejection for Japanese Patent Application No. 2013-532909 dated May 26, 2016. |
Notice of Reasons for Rejection for Japanese Patent Application No. 2013-532909, dated Nov. 25, 2015. |
Supplementary Partial European Search Report for European Patent Application No. 11831505.0, dated Dec. 20, 2016. |
TomTom GO user manual. Oct. 1, 2007 (Oct. 1, 2007). XP055123040. Retrieved from the Internet: <http://download.tomtom.com/open/manuals/device/refman/TomTom-GO-en-GB.pdf>, 100 pages. |
Vincenzo DiLecce and Marco Calabrese, Experimental System To Support Real-Time Driving Pattern Recognition, 2008, pp. 1192-1199. |
Tiwari, et al., "Survival analysis: Pedestrian risk exposure at signalized intersections.", Transportation Research Part F: Traffic Psychology and Behaviour, Pergamon, Amsterdam, NL, vol. 10, No. 2, Dec. 12, 2006 (Dec. 12, 2006), pp. 77-89, XP005802066, ISSN: 1369-8478, DOI: 10.1016/J.TRF.2006. |
Relation | Number | Date | Country
---|---|---|---
Parent | 15802820 | Nov 2017 | US
Child | 16538063 | | US
Parent | 15278341 | Sep 2016 | US
Child | 15802820 | | US
Parent | 14873647 | Oct 2015 | US
Child | 15278341 | | US
Parent | 14505007 | Oct 2014 | US
Child | 14873647 | | US