The present disclosure relates to a vehicle and, more particularly, to an autonomous vehicle configured to determine a path of motion based on user input via a user interface of a computing device.
Fully or partially autonomous vehicles, such as autonomous consumer automobiles, offer convenience and comfort to passengers. In some examples, a fully or partially autonomous vehicle can maneuver into a parking space in certain conditions. For example, a vehicle may need to be within a specified distance of, and/or at a specified orientation relative to, a parking space prior to commencing a fully or partially autonomous parking operation. In some examples, a parking space for autonomous parking can be indicated with a visual or location-based marker.
The present disclosure relates to a vehicle and, more particularly, to an autonomous vehicle configured to determine a path of motion based on user input via a user interface of a computing device. In some examples, the vehicle can load or generate a three-dimensional (3D) model of its surroundings to be used in a fully or partially autonomous parking operation. For instance, a two-dimensional (2D) “occupancy grid map” can be generated from the 3D model to determine unobstructed sections of road along which the vehicle can proceed. The 2D occupancy grid map can be displayed at a computing device, such as a mobile device (e.g., a smartphone, a tablet, or other mobile device) or a device included in the vehicle (e.g., an infotainment panel or other in-vehicle device). In some examples, the user can provide input to instruct the vehicle where to travel. The user input can include one or more of a path, a final location, a final pose (location and orientation), or a series of waypoints, for example. In some examples, the vehicle can generate a path based on the user input and optimize and smooth the generated path to produce a finalized path. The vehicle can drive in a fully or partially autonomous mode along the finalized path. In some examples, the path can lead to a parking location, and the vehicle can autonomously park at the end of the path.
In the following description, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the examples of the disclosure.
Vehicle control system 100 can include an on-board computer 110 that is coupled to the cameras 106, sensors 107, GNSS receiver 108, and user input devices 109, and that is capable of receiving, for example, image data from the cameras and/or outputs from the sensors 107, the GNSS receiver 108, and the user input devices 109. The on-board computer 110 can be capable of generating a 3D model of the vehicle's surroundings, generating a 2D occupancy grid map, and performing one or more optimization and/or smoothing algorithms on a pre-determined vehicle path, as described in this disclosure. On-board computer 110 can include storage 112, memory 116, and a processor 114. Processor 114 can perform any of the methods as will be described below with reference to
In some examples, the vehicle control system 100 can be connected to (e.g., via controller 120) one or more actuator systems 130 in the vehicle and one or more indicator systems 140 in the vehicle. The one or more actuator systems 130 can include, but are not limited to, a motor 131 or engine 132, battery system 133, transmission gearing 134, suspension setup 135, brakes 136, steering system 137 and door system 138. The vehicle control system 100 can control, via controller 120, one or more of these actuator systems 130 during vehicle operation; for example, to control the vehicle during autonomous driving operations, which can utilize the feature map and driving route stored on the on-board computer 110, using the motor 131 or engine 132, battery system 133, transmission gearing 134, suspension setup 135, brakes 136 and/or steering system 137, etc. Actuator systems 130 can also include sensors that send dead reckoning information (e.g., steering information, speed information, etc.) to on-board computer 110 (e.g., via controller 120) to determine the vehicle's location and orientation. The one or more indicator systems 140 can include, but are not limited to, one or more speakers 141 in the vehicle (e.g., as part of an entertainment system in the vehicle), one or more lights 142 in the vehicle, one or more displays 143 in the vehicle (e.g., as part of a control or entertainment system in the vehicle) and one or more tactile actuators 144 in the vehicle (e.g., as part of a steering wheel or seat in the vehicle). The vehicle control system 100 can control, via controller 120, one or more of these indicator systems 140 to provide visual and/or audio indications that the vehicle detected a cautionary scenario.
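By way of illustration and not limitation, the following Python sketch shows one way an on-board computer might route commands to actuator systems and alerts to indicator systems through a single controller; the class and method names are hypothetical and are not drawn from this disclosure.

```python
# Minimal sketch of routing commands to actuator and indicator systems through
# a single controller. All class and method names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ActuatorCommand:
    steering_angle_rad: float = 0.0   # requested steering angle
    throttle: float = 0.0             # 0.0 .. 1.0
    brake: float = 0.0                # 0.0 .. 1.0

@dataclass
class Controller:
    """Dispatches commands from the on-board computer to vehicle subsystems."""
    actuator_log: list = field(default_factory=list)
    indicator_log: list = field(default_factory=list)

    def apply(self, command: ActuatorCommand) -> None:
        # In a real vehicle this would write to the steering, brake, and
        # powertrain systems; here it is only recorded.
        self.actuator_log.append(command)

    def alert(self, message: str) -> None:
        # Route an alert to speakers, lights, displays, or tactile actuators.
        self.indicator_log.append(message)

controller = Controller()
controller.apply(ActuatorCommand(steering_angle_rad=0.05, throttle=0.2))
controller.alert("Cautionary scenario detected: vehicle stopping.")
```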
In response to the user-provided path, an onboard computer of the vehicle can smooth and optimize the path to prepare the vehicle to autonomously travel in the direction indicated.
In response to the user-provided endpoint, an onboard computer of the vehicle can plan a path from the vehicle's current location to the endpoint.
In response to the user-provided vehicle pose, an onboard computer of the vehicle can plan a path from the vehicle's current location to the provided pose 520.
In response to the user-provided vehicle waypoints, an onboard computer of the vehicle can plan a path from the vehicle's current location along the waypoints 620.
In some examples, the vehicle can generate 702 a 3D model (e.g., model 200) of its surroundings. Generating a 3D model can include collecting perception data at one or more vehicle sensors (e.g., LIDAR, ultrasonic, cameras, etc.) included in the vehicle. In some examples, the 3D model can be built in real-time as the vehicle navigates its surroundings. Additionally or alternatively, a partial or complete 3D model can be obtained from previously-collected data. For example, a memory included in the vehicle can include pre-stored data or a partial or complete 3D model. Pre-stored data or a pre-stored model can be previously obtained by the vehicle itself and saved or it can be downloaded from a second vehicle or from a remote server.
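By way of illustration and not limitation, the following sketch shows one way per-scan sensor returns could be accumulated into a world-frame 3D point model, assuming each scan arrives with a known vehicle pose (e.g., from GNSS and dead reckoning); the function and variable names are hypothetical.

```python
# Minimal sketch of building a 3D model as a world-frame point cloud from
# per-scan sensor returns; assumes a known vehicle pose per scan. Names are
# illustrative only.
import numpy as np

def accumulate_scan(model_points, scan_points, vehicle_xy, vehicle_yaw):
    """Transform a scan from the vehicle frame into the world frame and append it.

    model_points: (N, 3) array of previously accumulated world-frame points.
    scan_points:  (M, 3) array of points in the vehicle frame (x forward, y left).
    vehicle_xy:   (2,) world-frame position of the vehicle at scan time.
    vehicle_yaw:  heading in radians at scan time.
    """
    c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    translation = np.array([vehicle_xy[0], vehicle_xy[1], 0.0])
    world_points = scan_points @ rotation.T + translation
    return np.vstack([model_points, world_points])
```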
In some examples, the vehicle can use the 3D model to generate 704 a 2D occupancy grid map, such as one or more of the occupancy grid maps described with reference to
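By way of illustration and not limitation, the following sketch shows one way a 3D point model could be projected onto a ground-plane grid to produce a 2D occupancy grid map; the cell size, map extent, and height thresholds are illustrative assumptions rather than disclosed values.

```python
# Minimal sketch of converting a 3D point model into a 2D occupancy grid map by
# projecting points onto the ground plane. Thresholds and extents are assumed.
import numpy as np

def to_occupancy_grid(points, cell_size=0.2, extent=50.0,
                      min_height=0.2, max_height=2.5):
    """Return a boolean grid; True marks cells the vehicle cannot occupy.

    points: (N, 3) world-frame points centered on the vehicle.
    """
    size = int(2 * extent / cell_size)
    grid = np.zeros((size, size), dtype=bool)
    # Keep only points at heights that could obstruct the vehicle.
    mask = (points[:, 2] > min_height) & (points[:, 2] < max_height)
    xy = points[mask, :2]
    cols = ((xy[:, 0] + extent) / cell_size).astype(int)
    rows = ((xy[:, 1] + extent) / cell_size).astype(int)
    in_bounds = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    grid[rows[in_bounds], cols[in_bounds]] = True
    return grid
```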
In some examples, the vehicle can receive 706 a user input to create a path for the vehicle to travel along. The user input can include one or more of a path (e.g., path 320), an endpoint (e.g., endpoint 420), a vehicle pose (e.g., vehicle pose 520), or a series of waypoints (e.g., waypoints 620), for example. In some examples, the user input can be provided at a user interface of a computing device, such as a mobile device (e.g., a smartphone, a tablet, etc.) or an in-vehicle device (e.g., an infotainment panel or other in-vehicle device). The device can include a touch screen to allow for touch input using the user's finger, a stylus, or other object, for example. In some examples, other input devices, such as a keyboard or a mouse, are possible.
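By way of illustration and not limitation, the following sketch shows one way the different user input types could be represented uniformly before path planning; the names are hypothetical and are not drawn from this disclosure.

```python
# Sketch of a uniform representation for the user input types described above
# (path, endpoint, pose, or waypoints). Names are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional, Tuple

class InputKind(Enum):
    PATH = auto()        # a full user-drawn path
    ENDPOINT = auto()    # a final location only
    POSE = auto()        # a final location plus orientation
    WAYPOINTS = auto()   # a sparse series of points to pass through

@dataclass
class UserPathInput:
    kind: InputKind
    points: List[Tuple[float, float]]      # grid coordinates from the touch input
    final_heading: Optional[float] = None  # radians, only used for POSE

pose_input = UserPathInput(kind=InputKind.POSE, points=[(42.0, 17.5)],
                           final_heading=1.57)
```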
In some examples, the vehicle can generate 708 a path from its current location to a second location, such as a parking spot. In some examples, a path can be provided by user input in the form of a user-provided path (e.g., path 320) or a series of waypoints (e.g., waypoints 620). Additionally or alternatively, an onboard computer included in the vehicle can use a search algorithm to determine a shortest path to a user-provided endpoint (e.g., endpoint 420) or vehicle pose (e.g., vehicle pose 520). In some examples, the path can be displayed at a user interface including an occupancy grid map of the vehicle's surroundings.
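By way of illustration and not limitation, the following sketch uses A* over an 8-connected occupancy grid to find a shortest unobstructed path to an endpoint; the disclosure does not name a specific search algorithm, so this choice is an assumption.

```python
# Minimal sketch of a grid search for a shortest obstacle-free path to an
# endpoint cell. A* with 8-connected moves is an illustrative choice only.
import heapq
import math

def a_star(grid, start, goal):
    """grid: 2D boolean array/list, True = occupied. start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    heuristic = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    open_set = [(heuristic(start, goal), 0.0, start, None)]
    came_from, best_cost = {}, {start: 0.0}
    while open_set:
        _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                      # already finalized with a lower cost
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]             # start -> goal
        for dr, dc in moves:
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            step = math.hypot(dr, dc)
            if cost + step < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = cost + step
                heapq.heappush(open_set, (cost + step + heuristic(nxt, goal),
                                          cost + step, nxt, node))
    return None                           # no unobstructed path found
```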
In some examples, the vehicle can smooth 710 the generated 708 path. The onboard computer of the vehicle can perform a smoothing algorithm to eliminate sharp turns from the path. In some examples, smoothing can further include applying a constraint to the path to require a predetermined distance between the vehicle and its surroundings to avoid a collision. In some examples, the finalized path can be displayed at a user interface including an occupancy grid map of the vehicle's surroundings.
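By way of illustration and not limitation, the following sketch shows a simple iterative smoother that pulls each interior point toward the midpoint of its neighbors while rejecting updates that would violate a minimum clearance; the weights and the clearance check are illustrative assumptions, not the disclosed smoothing algorithm.

```python
# Minimal sketch of one possible smoothing step: interior points are pulled
# toward the midpoint of their neighbors (removing sharp turns) while staying
# close to their original positions; an update is rejected if it would bring
# the point too close to an obstacle. Weights and the clearance predicate are
# illustrative assumptions.
def smooth_path(path, clearance_ok, weight_data=0.5, weight_smooth=0.3,
                iterations=50):
    """path: list of (x, y). clearance_ok(point) -> bool."""
    new_path = [list(p) for p in path]
    for _ in range(iterations):
        for i in range(1, len(path) - 1):       # endpoints stay fixed
            for d in range(2):
                old = new_path[i][d]
                new_path[i][d] = (old
                                  + weight_data * (path[i][d] - old)
                                  + weight_smooth * (new_path[i - 1][d]
                                                     + new_path[i + 1][d]
                                                     - 2.0 * old))
                if not clearance_ok(tuple(new_path[i])):
                    new_path[i][d] = old        # keep the required clearance
    return [tuple(p) for p in new_path]
```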
In some examples, the vehicle can autonomously drive 712 along the path. Autonomously driving along the path can include, in some examples, performing an autonomous parking operation at the end of the path. The vehicle can rely on one or more sensors (e.g., cameras, LIDAR, ultrasonic, etc.) to avoid a collision while driving autonomously. Avoiding a collision while driving autonomously along a path is described in more detail below with reference to
In some examples, while driving along a pre-determined path, a vehicle can detect 802 an obstacle. The vehicle can detect the obstacle with one or more sensors such as LIDAR, cameras, or ultrasonic sensors, for example. Other sensors are possible. In some examples, a user can provide an input (e.g., via voice command, via a button or switch, via an electroencephalogram or other bioinformatics sensor, or by pressing the brake pedal, for example) that an obstacle is detected. In some examples, detecting 802 an obstacle can include detecting an object located along or in close proximity to the vehicle's pre-determined route. Additionally or alternatively, the vehicle can detect obstacles that do not yet overlap the route but, based on their position, velocity, and acceleration, may collide with the vehicle as it travels forward along the route.
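By way of illustration and not limitation, the following sketch flags an obstacle that does not yet overlap the route but may collide with the vehicle, by extrapolating the obstacle's motion with a constant-acceleration model and comparing it against where the vehicle will be along the route; the constant-speed vehicle model and the safety radius are illustrative assumptions.

```python
# Minimal sketch of predicting a collision with a moving obstacle along the
# route. The vehicle is assumed to travel at constant speed, and the obstacle
# follows a constant-acceleration model; both are illustrative assumptions.
import math

def predicts_collision(route, vehicle_speed, obstacle_pos, obstacle_vel,
                       obstacle_acc=(0.0, 0.0), safety_radius=1.5):
    """route: list of (x, y) points ahead of the vehicle, roughly evenly spaced."""
    distance = 0.0
    for prev, cur in zip(route, route[1:]):
        distance += math.dist(prev, cur)
        t = distance / max(vehicle_speed, 0.1)   # time at which the vehicle reaches `cur`
        # Obstacle position at time t under constant acceleration.
        ox = obstacle_pos[0] + obstacle_vel[0] * t + 0.5 * obstacle_acc[0] * t * t
        oy = obstacle_pos[1] + obstacle_vel[1] * t + 0.5 * obstacle_acc[1] * t * t
        if math.dist((ox, oy), cur) < safety_radius:
            return True
    return False
```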
In response to a detected obstacle, the vehicle can stop 804, for example. In some examples, one or more indicator systems of the vehicle can alert a vehicle user that the vehicle has stopped in response to a detected obstacle. For example, the vehicle can play a sound, provide tactile feedback, and/or display an alert (e.g., at an infotainment panel, a mobile device, or other display).
Optionally, in some examples, the vehicle can recalculate 806 a path in response to the obstacle. Recalculation can occur automatically in response to waiting a predetermined amount of time for the obstacle to move or in response to a user input, for example. In some examples, the vehicle can automatically generate an alternative path. For example, the vehicle can select a path with minimal deviation from the pre-determined path. Additionally or alternatively, the user can provide an input (e.g., a path, an endpoint, a vehicle pose, and/or a series of waypoints) to determine the new path.
In some examples, the vehicle can monitor its surroundings to determine 808 if the path is clear. Determining 808 if the path is clear can include determining if the obstacle has moved away from the pre-determined path, for example. In some examples, determining 808 if the path is clear can include determining if the newly generated path is clear.
In accordance with a determination that the vehicle's path is clear, in some examples, the vehicle can drive 810 autonomously once again. While driving autonomously, the vehicle can continue to monitor its surroundings to avoid a collision.
In accordance with a determination that the vehicle's path is not clear, in some examples, the vehicle can remain stopped 804 to wait for its path to clear. In some examples, the vehicle can generate a notification (e.g., an auditory notification, a visual notification, etc.) and terminate autonomous driving to allow a human driver, such as the user, to take over. Alternatively, in some examples, the process 800 can continue to monitor the vehicle's surroundings to determine when it is safe to drive 810 autonomously. In some examples, a new path can be generated 806.
It should be appreciated that in some embodiments a learning algorithm, such as a neural network (deep or shallow, which may employ a residual learning framework), can be applied instead of, or in conjunction with, another algorithm described herein to solve a problem, reduce error, and increase computational efficiency. Such learning algorithms may implement a feedforward neural network (e.g., a convolutional neural network) and/or a recurrent neural network, with supervised learning, unsupervised learning, and/or reinforcement learning. In some embodiments, backpropagation may be implemented (e.g., by implementing a supervised long short-term memory recurrent neural network, or a max-pooling convolutional neural network which may run on a graphics processing unit). Moreover, in some embodiments, unsupervised learning methods may be used to improve supervised learning methods. Moreover still, in some embodiments, resources such as energy and time may be saved by including spiking neurons in a neural network (e.g., neurons in a neural network that do not fire at each propagation cycle).
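By way of illustration and not limitation, the following minimal sketch trains a small supervised feedforward network with backpropagation on toy data, included only to make the terminology above concrete; the disclosure does not prescribe this architecture or these hyperparameters.

```python
# Deliberately small illustration of supervised learning with backpropagation.
# The data, architecture, and hyperparameters are toy values, not disclosed ones.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                          # toy sensor-derived features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy labels

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
lr = 0.1

for _ in range(200):
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    grad_out = (p - y) / len(X)                  # gradient of cross-entropy loss
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)    # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * grad_W1
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)
```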
In some examples, sensor data can be fused together (e.g., LIDAR data, radar data, ultrasonic data, camera data, etc.). This fusion can occur at one or more electronic control units (ECUs). The particular ECU(s) that are chosen to perform data fusion can be based on an amount of resources (e.g., processing power and/or memory) available to the one or more ECUs, and can be dynamically shifted between ECUs and/or components within an ECU (since an ECU can contain more than one processor) to optimize performance.
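By way of illustration and not limitation, the following sketch assigns the fusion workload to whichever ECU currently reports sufficient memory and the most processing headroom; the resource model and the scoring rule are illustrative assumptions.

```python
# Sketch of dynamically choosing which ECU performs sensor fusion based on the
# resources each ECU reports as available. Names and fields are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class EcuStatus:
    name: str
    free_cpu: float        # fraction of processing headroom, 0.0 .. 1.0
    free_memory_mb: float

def select_fusion_ecu(ecus: List[EcuStatus], required_memory_mb: float) -> EcuStatus:
    # Keep only ECUs that can hold the fusion buffers, then prefer the one with
    # the most processing headroom so the assignment can shift as load changes.
    candidates = [e for e in ecus if e.free_memory_mb >= required_memory_mb]
    if not candidates:
        raise RuntimeError("no ECU currently has enough memory for sensor fusion")
    return max(candidates, key=lambda e: e.free_cpu)

ecus = [EcuStatus("adas_ecu", free_cpu=0.35, free_memory_mb=512.0),
        EcuStatus("gateway_ecu", free_cpu=0.60, free_memory_mb=256.0)]
print(select_fusion_ecu(ecus, required_memory_mb=300.0).name)  # -> adas_ecu
```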
Therefore, according to the above, some examples of the disclosure are directed to a method of determining a path of travel for a vehicle, the method comprising: generating a three-dimensional (3D) model of the vehicle's surroundings; converting the 3D model to a two-dimensional (2D) occupancy grid map; receiving a user input indicative of one or more points along a desired path; determining a finalized path in accordance with the user input; and autonomously driving along the finalized path. Additionally or alternatively to one or more of the examples disclosed above, the 3D model is generated based on data from one or more sensors included in the vehicle. Additionally or alternatively to one or more of the examples disclosed above, the occupancy grid map includes a footprint of one or more objects included in the 3D model. Additionally or alternatively to one or more of the examples disclosed above, the user input includes one or more of a user-defined path, an endpoint, a vehicle pose, and a series of waypoints. Additionally or alternatively to one or more of the examples disclosed above, the user input is received at a computing device operatively coupled to the vehicle. Additionally or alternatively to one or more of the examples disclosed above, the method further comprises generating an optimal path based on the user input. Additionally or alternatively to one or more of the examples disclosed above, the method further comprises generating a smooth path based on the user input. Additionally or alternatively to one or more of the examples disclosed above, the finalized path is at least a pre-defined minimum distance away from any obstacles included in the occupancy grid map at all points. Additionally or alternatively to one or more of the examples disclosed above, the method further comprises, while driving autonomously: in accordance with a determination that there is an obstacle along the finalized path, stopping the vehicle. Additionally or alternatively to one or more of the examples disclosed above, the method further comprises: in accordance with the determination that there is an obstacle along the finalized path, determining a new path and driving along the new path. Additionally or alternatively to one or more of the examples disclosed above, the method further comprises: in accordance with a determination that there is no longer an obstacle along the finalized path, resuming driving along the finalized path.
According to the above, some examples of the disclosure are directed to a vehicle comprising a processor, the processor configured for: generating a three-dimensional (3D) model of the vehicle's surroundings; converting the 3D model to a two-dimensional (2D) occupancy grid map; receiving a user input indicative of one or more points along a desired path; determining a finalized path in accordance with the user input; and autonomously driving along the finalized path.
According to the above, some examples of the disclosure are directed to a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors of a vehicle, cause the one or more processors to perform a method of determining a path of travel for the vehicle, the method comprising: generating a three-dimensional (3D) model of the vehicle's surroundings; converting the 3D model to a two-dimensional (2D) occupancy grid map; receiving a user input indicative of one or more points along a desired path; determining a finalized path in accordance with the user input; and autonomously driving along the finalized path.
Although examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of examples of this disclosure as defined by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/382,175, filed Aug. 31, 2016, the entirety of which is hereby incorporated by reference.