This disclosure relates generally to vehicles that include sensors for assisted driving or autonomous driving.
Presently, there is considerable research and development directed to vehicles, such as cars, sport utility vehicles (SUVs), trucks, and other motorized vehicles, that include sensors configured to obtain data about moving objects surrounding a vehicle. These sensors often include cameras that acquire optical images, ultrasonic sensors that use ultrasound, radar sensors that use radar techniques and technology, and LIDAR sensors that use pulsed infrared lasers. Data from these sensors can be processed both individually and collectively to attempt to recognize (e.g., classify) moving objects surrounding a vehicle that includes the sensors. For example, the data from a camera and a radar or LIDAR system can be processed to recognize other vehicles and pedestrians that move in the environment around the vehicle. A processing system can then use the information about the recognized moving vehicles and pedestrians to provide assisted driving or autonomous driving of the vehicle. For example, while the vehicle is operating under assisted cruise control, the processing system can use information about a recognized vehicle in front of the vehicle to maintain adequate space ahead when the recognized vehicle slows down; normally, assisted cruise control will cause the vehicle to slow down in this situation in order to maintain that space.
The embodiments of this disclosure relate to vehicles, processing systems, methods, and non-transitory machine readable media in which assisted driving or autonomous driving can use a first trained model to recognize moving objects and also use a second trained model to recognize stationary road landmarks, such as road signs, and stationary road obstacles, such as road barriers, etc.
For one embodiment, a method can include the following operations: receiving a first set of data from a set of sensors on a vehicle, the set of sensors configured to obtain data about objects surrounding the vehicle; processing the first set of data using a first trained model to recognize one or more moving objects represented in the first set of data, the first trained model having been trained to recognize moving objects on or near roads; and processing the first set of data using a second trained model to recognize one or more stationary road landmarks or stationary road obstacles represented in the first set of data, the second trained model having been trained to recognize stationary road landmarks or stationary road obstacles on or near roads. For one embodiment, the method can also include providing at least one of assisted driving of the vehicle or autonomous driving of the vehicle based on the recognition of the one or more moving objects and the recognition of the one or more stationary road landmarks or stationary road obstacles. For one embodiment, the assisted driving can include one or more of: automatic lane departure prevention, automatic collision avoidance, automatic stopping, automatic cruise control, etc. For one embodiment, the first trained model and the second trained model can be embodied in a single neural network that includes both of the first trained model and the second trained model; for an alternative embodiment, the first trained model can be embodied in a first neural network, and the second trained model can be embodied in a second neural network that is separate from the first neural network. In addition, conventional computer vision may be used to recognize stationary objects.
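The dual-model processing described above can be sketched as follows. This is a minimal illustration only: the model interfaces, detection format, and label names are assumptions made for the example and are not taken from the disclosure.

```python
# Illustrative sketch: apply two trained models to the same first set of
# sensor data. The stand-in "models" and label names are assumptions.

MOVING_LABELS = {"vehicle", "pedestrian", "bicycle", "motorcycle"}
STATIONARY_LABELS = {"road_sign", "road_barrier", "traffic_cone", "debris"}

def recognize(sensor_frame, moving_model, stationary_model):
    """Run both trained models over one frame of fused sensor data."""
    moving = [d for d in moving_model(sensor_frame)
              if d["label"] in MOVING_LABELS]
    stationary = [d for d in stationary_model(sensor_frame)
                  if d["label"] in STATIONARY_LABELS]
    return moving, stationary

# Stand-in "trained models" returning canned detections for the sketch.
def stub_moving_model(frame):
    return [{"label": "vehicle", "distance_m": 30.0}]

def stub_stationary_model(frame):
    return [{"label": "road_sign", "distance_m": 80.0}]

moving, stationary = recognize({}, stub_moving_model, stub_stationary_model)
```

In a real system each model would be a trained detector network rather than a stub, but the structure of the step is the same: one set of sensor data, two recognition passes, two lists of classified objects.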
For one embodiment, the method can further include: updating data for a first map stored locally and persistently in nonvolatile memory in the vehicle to include a representation of a recognized stationary road landmark or stationary road obstacle in the first map; this updating can store the representation of the recognized stationary road landmark or recognized stationary road obstacle for future assisted driving or autonomous driving by one or more processing systems, which can take the stationary objects into account when performing assisted driving or autonomous driving after the first map has been updated. For one embodiment, the method can further include the operation of: transmitting, to a set of one or more server systems, data to include the representation of the recognized stationary road landmark or stationary road obstacle in a second map maintained by the one or more server systems, wherein the second map can be distributed to other vehicles through transmissions from the one or more server systems.
For one embodiment, the method can further include the operation of: updating the data for the first map to remove the representation of the recognized stationary road landmark or stationary road obstacle in response to the one or more data processing systems determining that the stationary road landmark or stationary road obstacle has been removed from the road.
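The add-and-remove map maintenance described in the preceding paragraphs can be sketched as below. The `LocalMap` class, its location keying, and its method names are illustrative assumptions; an actual implementation would persist the map to nonvolatile memory in the vehicle.

```python
# Minimal sketch of maintaining a locally stored map of recognized
# stationary objects. Class and field names are assumptions.

class LocalMap:
    def __init__(self):
        # Maps a (latitude, longitude) location to a description of a
        # recognized stationary road landmark or stationary road obstacle.
        self._objects = {}

    def add(self, location, description):
        """Store a representation of a newly recognized stationary object."""
        self._objects[location] = description

    def remove(self, location):
        """Drop a representation once sensors determine the object is gone."""
        self._objects.pop(location, None)

    def lookup(self, location):
        return self._objects.get(location)

local_map = LocalMap()
local_map.add((37.33, -122.03), "road_barrier")
local_map.remove((37.33, -122.03))  # obstacle later removed from the road
```

The same add/remove pair mirrors the lifecycle of a temporary obstacle: stored when first recognized, and deleted when later sensor data shows it has been removed from the road.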
For one embodiment, at least a subset of the stationary road landmarks or stationary road obstacles have known static sizes, known static shapes, and known color patterns, which are used when training the second trained model to recognize stationary road landmarks or stationary road obstacles. For one embodiment, the one or more moving objects can include vehicles, bicycles, motorcycles, and pedestrians, and the one or more stationary road landmarks or stationary road obstacles can include one or more of: road signs; road barriers or blockades; abandoned car parts; pylons or traffic cones; debris on a road; rocks; or logs. For one embodiment, the set of sensors can include a combination of: one or more LIDAR sensors; one or more radar sensors; and one or more camera sensors which provide the first set of data to computer vision algorithms that recognize the stationary road landmarks or stationary road obstacles.
A vehicle for one embodiment can include the following: a set of one or more sensors configured to obtain data about objects surrounding the vehicle; a steering system coupled to at least one wheel in a set of wheels; one or more motors coupled to at least one wheel in the set of wheels; a braking system coupled to at least one wheel in the set of wheels; a memory storing a first trained model and a second trained model; and a set of one or more processing systems coupled to the memory and to the set of one or more sensors and to the steering system and to the braking system and to the one or more motors; the set of one or more processing systems can be configured to receive a first set of data from the set of one or more sensors and to process the first set of data using the first trained model to recognize one or more moving objects represented in the first set of data, wherein the first trained model has been trained to recognize moving objects on or near roads; and the set of one or more processing systems is further to process the first set of data using the second trained model to recognize one or more stationary road landmarks or stationary road obstacles represented in the first set of data, wherein the second trained model has been trained to recognize stationary road landmarks or stationary road obstacles on or near roads.
For one embodiment, the vehicle can include one or more processing systems that provide at least one of assisted driving of the vehicle or autonomous driving of the vehicle based upon the recognition of the one or more moving objects and the recognition of the one or more stationary road landmarks or stationary road obstacles. For one embodiment, the assisted driving can include one or more of: automatic lane departure prevention; automatic collision avoidance; assisted parking; vehicle summon; and automatic stopping. For one embodiment, a vehicle can include a first map which is stored locally and persistently in the memory of the vehicle, and the set of one or more processing systems can update data for the first map to include a representation of a recently recognized stationary road landmark or a recently recognized stationary road obstacle in the first map, and the set of one or more processing systems in the vehicle can use the updated map in future assisted driving or autonomous driving to avoid the obstacles based upon their stored locations in the first map. For one embodiment, the set of one or more processing systems can cause a transmission, to a set of one or more server systems, of the updated data to include the representation of the recognized stationary road landmark or stationary road obstacle in a second map maintained by the set of one or more server systems, wherein the second map is configured to be distributed to other vehicles through transmissions from the set of one or more server systems. For one embodiment, the first map can be modified to remove the representation in response to the set of one or more processing systems determining, from data from the set of sensors, that the stationary road landmark or stationary road obstacle has been removed from a location specified in data associated with the representation, and wherein the representation can include an icon displayed on the first map.
For one embodiment, the vehicle can include a single neural network that includes both of the first trained model and the second trained model, while in an alternative embodiment, the first trained model can be embodied in a first neural network and the second trained model can be embodied in a second neural network which is separate and distinct from the first neural network.
The embodiments described herein can include methods and vehicles which use the methods described herein. Moreover, the embodiments described herein can include non-transitory machine readable media that store executable computer program instructions that can cause one or more data processing systems to perform the one or more methods described herein when the computer program instructions are executed by the one or more data processing systems. The instructions can be stored in memory such as nonvolatile flash memory, dynamic random access memory, or other forms of memory.
The above summary does not include an exhaustive list of all embodiments in this disclosure. All systems and methods can be practiced from all suitable combinations of the various aspects and embodiments summarized above and also those disclosed in the Detailed Description below.
The present embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
Various embodiments and aspects will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g. circuitry, dedicated logic, etc.), software, or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
The embodiments described herein can utilize two trained models which have been trained to recognize two different types of objects that can be encountered by a vehicle while the vehicle is operating on the roads. The two trained models can be implemented in two separate neural networks or in one neural network that has been trained to include both trained models. For one embodiment, a first trained model is trained to recognize moving objects such as vehicles, pedestrians, bicycles, motorcycles, and other moving objects on or near roads. The other trained model is trained to recognize stationary road landmarks or stationary road obstacles or both based upon known shapes, sizes, and color patterns of those landmarks and obstacles. The vehicle can use both models together to provide assisted driving and/or autonomous driving, which can benefit from being able to recognize not only moving objects but also stationary road landmarks and stationary road obstacles. For example, when the system has recognized a stationary road landmark such as a construction sign or a road sign which indicates that the vehicle needs to move over by one lane to the left, the assisted driving system or the autonomous driving system can recognize the road landmark and cause the vehicle to move one lane to the left in order to avoid the road landmark. Alternatively, the vehicle can alert the driver to the presence of the road landmark and request that the driver move to the left.
The training system 10 can train models by obtaining two different types of data. The first type of data is data for moving objects, such as moving object data 12. In one embodiment, moving object data 12 can be data obtained from vehicles that observe other vehicles and pedestrians while being driven. The moving object data 12 can be used to train a neural network 14 which in turn, when trained, can produce a first trained neural network 16 for moving objects. The first trained neural network 16 can be used to recognize moving objects. In one embodiment, the YOLO model for a neural network can be used to create the trained neural network 16 using conventional techniques known in the art for creating a YOLO neural network that can recognize moving objects. For one embodiment, stationary road landmark data and stationary road obstacle data 17 can be obtained and used as an input to train a neural network 19 which in turn can produce a trained neural network 21, which can be referred to as the second trained neural network and which can be used for the recognition and classification of stationary road landmarks and stationary road obstacles. For one embodiment, the first trained neural network 16 and the second trained neural network 21 can be stored in memory in a vehicle for use while the vehicle is driving to provide assisted driving and/or autonomous driving. For an embodiment in which a single trained neural network contains both trained models, a single neural network (e.g., neural network 14) can be trained using both data 12 and data 17 to create the single trained neural network.
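The training flow just described can be summarized in code. The `train()` function below is a deliberate stand-in that merely records which data each network was trained on; a real training system would use a deep-learning framework and a detector architecture such as YOLO. The data set names are placeholders.

```python
# Hedged sketch of the training flow: two data sets, two networks,
# or one network trained on both. train() is a stand-in, not a real
# training loop.

def train(network, dataset):
    """Stand-in training loop; records the data and returns the network."""
    network.setdefault("trained_on", []).extend(dataset)
    return network

moving_object_data = ["vehicle_clips", "pedestrian_clips"]       # like data 12
stationary_object_data = ["road_sign_images", "barrier_images"]  # like data 17

# Separate-network embodiment: first and second trained neural networks.
first_trained = train({"name": "moving_net"}, moving_object_data)
second_trained = train({"name": "stationary_net"}, stationary_object_data)

# Single-network embodiment: one network trained on both data sets.
combined = train({"name": "combined_net"},
                 moving_object_data + stationary_object_data)
```

The two branches at the end correspond to the two embodiments: separate networks for moving and stationary objects, or a single network trained on both types of data.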
A vehicle can operate using the method shown in
For one embodiment, the first trained model can be implemented in the same neural network as the second trained model; in an alternative embodiment, the first trained model can be implemented in a first neural network which is separate and distinct from a second neural network which implements the second trained model. The stationary road landmarks and the stationary road obstacles can include all of the objects which were used during the training, such as the training implemented by training system 10; for example, all of the stationary road landmark and stationary road obstacle data 17 which were used to train neural network 19 can be recognized by the second trained model in operation 105. In operation 107, one or more data processing systems in the vehicle can use the recognized objects (including recognized moving objects and recognized stationary objects) to provide assisted driving and/or autonomous driving using the classifications from the first and the second trained models. For example, if the sensors have detected a moving vehicle in front of the vehicle and also detected a road sign indicating that the vehicle is to move to the left lane (from the right lane where the vehicle is currently traveling), the assisted driving and/or autonomous driving system in the vehicle can cause the vehicle to move to the left lane while maintaining an adequate safe distance behind the vehicle in front of it and while also allowing that vehicle to move into the left lane from the right lane.
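The lane-change example above can be sketched as a simple rule over the classified detections. The thresholds, label names, and detection format below are assumptions made for this illustration; they are not the disclosure's actual control logic.

```python
# Illustrative decision logic over recognized moving and stationary
# objects. Labels, thresholds, and fields are assumptions.

SAFE_FOLLOWING_M = 20.0

def plan_maneuver(detections, current_lane):
    """Choose a maneuver from the classified detections."""
    # A recognized sign directing traffic out of the current lane.
    for d in detections:
        if d["label"] == "merge_left_sign" and current_lane == "right":
            return "change_to_left_lane"
    # A recognized moving vehicle ahead that is too close.
    for d in detections:
        if d["label"] == "vehicle" and d["distance_m"] < SAFE_FOLLOWING_M:
            return "slow_down"
    return "maintain_course"

detections = [{"label": "merge_left_sign", "distance_m": 60.0},
              {"label": "vehicle", "distance_m": 35.0}]
action = plan_maneuver(detections, "right")
```

Here the sign takes priority because the vehicle ahead is still outside the safe-following distance; with a closer lead vehicle, the sketch would slow down first, consistent with the example in the text.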
The method shown in
Stationary objects such as road obstacles can often be temporary objects that exist while a road project or construction project is being performed and are then removed from the location of the project when the project is completed. Thus, while a stationary object can be added to the map at one point in time in an embodiment, another embodiment described herein allows the representation of the previously added obstacle or landmark to be removed from both the local map maintained in the vehicle and a remote or second map maintained by one or more remote servers. An example of a method which removes such previously recognized stationary objects is shown in
For example, the one or more processing systems can detect the presence of a road obstacle that is blocking the right lane in which the vehicle is currently driving, and the one or more processing systems can determine that the vehicle needs to move to the left lane but that another vehicle in the left lane blocks the move. Thus the one or more processing systems can cause the vehicle to slow down until the vehicle in the left lane has passed, and can then move the vehicle into the left lane to continue past the road obstacle that blocks its current path of travel. The one or more processing systems can also cause the updating of the local map (or other data structure) in the navigation system 219 to cause a representation of the detected or recognized road obstacle to appear on the map in the display 215. Moreover, the one or more processing systems 203 can cause one or more radio systems 217 to transmit data to one or more servers maintaining a second or central map data source to allow the updating of the map for other vehicles. These one or more servers can then transmit updated map information to those other vehicles, which can be similar to the vehicle shown in
In the foregoing specification, specific exemplary embodiments have been described. It will be evident that various modifications may be made to those embodiments without departing from the broader spirit and scope set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.