This application relates to the field of intelligent driving, and in particular, to a vehicle control technology.
Driving perception, as the first link in autonomous driving or assisted driving, is an important link for the interaction between a vehicle and the outside world. The key to driving perception is to enable the vehicle to better simulate a driver's perception capability, to implement safe driving of the vehicle on a road.
In related art, implementation of vehicle control mainly includes the following operations. First, a road image set that includes a large quantity of road images is constructed. Then, a perception model is trained based on the road image set, and the perception model is deployed at the vehicle for use, so that the vehicle is controlled, based on the perception model, to travel on the road.
According to this manner, the perception model needs to be capable of accurately perceiving all possible situations that may occur during traveling of the vehicle. However, a capacity of the perception model is limited, and the vehicle may face various situations when traveling in the real world. Therefore, it is difficult for this method to achieve accurate perception, consequently affecting driving safety.
In accordance with the disclosure, there is provided a vehicle control method performed by a computer device including obtaining traveling information of a target vehicle that indicates a target road segment that the target vehicle is to travel on and a target environmental condition when the target vehicle is on the target road segment, obtaining a target perception model of the target road segment under the target environmental condition from a perception model library that stores perception models of a plurality of road segments under different environmental conditions, calling the target perception model to perform a perception task on the target road segment to obtain a perception result, and controlling, based on the perception result, the target vehicle to travel on the target road segment.
Also in accordance with the disclosure, there is provided a computer device including at least one processor, and at least one memory storing at least one computer program that, when executed by the at least one processor, causes the computer device to obtain traveling information of a target vehicle that indicates a target road segment that the target vehicle is to travel on and a target environmental condition when the target vehicle is on the target road segment, obtain a target perception model of the target road segment under the target environmental condition from a perception model library that stores perception models of a plurality of road segments under different environmental conditions, call the target perception model to perform a perception task on the target road segment to obtain a perception result, and control, based on the perception result, the target vehicle to travel on the target road segment.
Also in accordance with the disclosure, there is provided a vehicle control method performed by a computer device including obtaining traveling information of a target vehicle that indicates a target road segment that the target vehicle is to travel on and a target environmental condition when the target vehicle is on the target road segment, searching for a target perception model of the target road segment under the target environmental condition from a perception model library that stores perception models of a plurality of road segments under different environmental conditions, and delivering the target perception model to the target vehicle, to enable the target vehicle to call the target perception model to perform a perception task on the target road segment to obtain a perception result and control, based on the perception result, the target vehicle to travel on the target road segment.
To describe the technical solutions of embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes implementations of this application in detail with reference to the accompanying drawings.
In this application, the terms “first,” “second,” and the like are used to distinguish between identical or similar items having substantially the same effects and functions. There is no logical or temporal dependency among “first,” “second,” and “nth” (if any), and no limitation is imposed on quantity or execution order. Although the following descriptions use the terms first, second, and the like to describe various elements, these elements are not to be limited by the terms.
These terms are merely used to distinguish one element from another. For example, a first element may be referred to as a second element, and similarly, the second element may be referred to as the first element without departing from a scope of various examples. Both the first element and the second element may be elements, and in some cases, may be separate and different elements.
“At least one” means one or more. For example, at least one element may be one element, two elements, three elements, or any integer number of elements greater than or equal to one. “A plurality of” means two or more. For example, a plurality of elements may be two elements, three elements, or any integer number of elements greater than or equal to two.
Information (including but not limited to user equipment information, user personal information, and the like), data (including but not limited to data used for analysis, stored data, displayed data, and the like), and signals involved in this application are all authorized by the users or fully authorized by all parties, and collection, use, and processing of related data need to comply with relevant laws, regulations, and standards of relevant countries and regions.
First, abbreviations and key terms described in embodiments of this application are introduced.
In embodiments of this application, the vehicle is a vehicle that requires only driver assistance or requires no driver control at all. For example, the vehicle may include an autonomous driving car, an unmanned vehicle, a computer-driven car, a driverless vehicle, and a self-driving car.
As an automated vehicle, the vehicle mentioned in embodiments of this application can automatically travel on a road without the need for driver control.
As the first link in autonomous driving or assisted driving, driving perception is an important link for the interaction between a vehicle and the outside world. The key to driving perception is to enable the vehicle to better simulate a driver's perception capability, to implement safe driving of the vehicle on a road.
Driving perception is achieved based on a perception algorithm in the field of autonomous driving or assisted driving. In other words, the perception algorithm is an important part of implementing autonomous driving or assisted driving. In recent years, with the development of deep learning technology, perception algorithms have also been greatly improved. Generally, a perception algorithm implements driving perception based on a perception model.
For example, driving perception can detect various obstacles and collect various pieces of information on the road. The obstacles include dynamic obstacles and static obstacles, such as other vehicles, pedestrians, and buildings. The various pieces of information on the road include but are not limited to drivable areas, lane lines, traffic signs, traffic lights, and the like. In addition, driving perception requires the assistance of various sensors. A type of the sensor includes but is not limited to a camera, a laser radar, a millimeter-wave radar, and the like.
In embodiments of this application, slice-based perception is defined relative to the single perception model used in related art.
In related art, a single perception model is used for driving perception. In other words, the same perception model is used for all vehicles, all road segments, and all environmental conditions. This requires that the perception model can accurately perceive all possible situations that may occur during traveling of a vehicle. However, a capacity of the single perception model is limited, for example, a quantity of layers of the model is limited or a quantity of feature channels in each layer is limited. In addition, a vehicle may face various situations when traveling in the real world, including various long tail situations that are difficult to perceive, such as a tree in the middle of the road or a barely visible pedestrian crossing the road at night. In addition, the vehicle also faces different environmental conditions, such as different lighting categories such as day and night, and different weather categories such as rainy days, snowy days, and foggy days.
Therefore, it is difficult for the single perception model to adapt to the various situations mentioned above. In other words, it is difficult to achieve accurate perception based on the single perception model, and perception failure may occur. The perception failure directly affects driving safety. For example, if the perception failure occurs in an autonomous driving mode at high speed, it may cause serious traffic accidents. In conclusion, based on safety considerations, the public's current level of trust in autonomous driving is low, which increases the difficulty of implementing and promoting autonomous driving.
In contrast, for slice-based perception, corresponding perception models are trained for different road segments and different environmental conditions, to form a perception model library. When a perception model is used, the most adapted target perception model is dynamically selected from the perception model library, based on a to-be-traveled road segment of a to-be-controlled vehicle during traveling and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment, to perform a perception task. The slice-based perception can achieve more accurate perception and greatly reduce the perception difficulty of the model, thereby achieving more precise and robust perception effects, improving user experience, and ultimately promoting the implementation and promotion of autonomous driving.
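For illustration only, the following minimal Python sketch shows one way a perception model library keyed by road segment and environmental condition could support such dynamic selection; the class name, identifiers, and string keys are hypothetical and not part of this application.

```python
# Illustrative sketch only: a perception model library keyed by
# (road segment, environmental condition); all names are hypothetical.
from typing import Dict, Tuple


class PerceptionModelLibrary:
    def __init__(self) -> None:
        # Maps (segment_id, condition) to a trained perception model handle.
        self._models: Dict[Tuple[str, str], object] = {}

    def register(self, segment_id: str, condition: str, model: object) -> None:
        self._models[(segment_id, condition)] = model

    def select(self, segment_id: str, condition: str) -> object:
        # Dynamically select the most adapted model for the to-be-traveled
        # road segment under the target environmental condition.
        return self._models[(segment_id, condition)]


library = PerceptionModelLibrary()
library.register("segment-1", "day_sunny", model="perception-model-1-1")
target_model = library.select("segment-1", "day_sunny")
```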
The following describes an implementation environment included in a vehicle control method provided in embodiments of this application.
In this embodiment of this application, the terminal may be in a vehicle, and the server may be a cloud server. Refer to
In a possible implementation, an adapted application client is installed on a terminal in the vehicle 101, and the vehicle 101 communicates with the cloud server 102 through the application client, which is not limited in this application. For example, the terminal may be a smart phone, a tablet computer, an on-board terminal, a smart speaker, or a smartwatch, but is not limited thereto.
The cloud server 102 may be an independent physical server, may be a server cluster or a distributed system including a plurality of physical servers, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), or big data and an artificial intelligence platform.
In another possible implementation, the cloud server 102 included in the implementation environment may be any server that provides background services, which is not limited in this application.
In another possible implementation,
With reference to
Operation 1: Collect data. The operation is configured for obtaining road images from vehicles located on various roads. In other words, in this embodiment of this application, massive road images are reflowed from the vehicles located on the various roads, so that the cloud server can obtain the road images.
Operation 2: Segment a road into segments. The operation is configured for segmenting a road into road segments, for example, as shown in
Operation 3: Divide an environment. The operation is configured for dividing the environment, for example, as shown in
Operation 4: Construct road image sets of the perception units. The operation is configured for constructing road image sets of road segments under the different environmental conditions based on the massive, reflowed road images.
Operation 5: Construct a perception model library. The operation is configured for training perception models of the perception units based on the road image sets of the perception units, for example, as shown in
Operation 6: Obtain vehicle traveling information and select a model. The operation is configured for dynamically selecting and delivering, based on road segment information during traveling of a vehicle and an environmental condition when the vehicle is on a corresponding road segment, the most adapted perception model from the perception model library to perform a perception task.
Operation 7: Reflow data and iterate the model. The vehicle automatically transmits the road images during the traveling, to implement a closed-loop iteration of model training-model delivery-data reflow-model training.
In conclusion, a slice-based perception solution uses a method of breaking the whole into parts, so that different perception units are segmented based on different road segments and different environmental conditions, the road image sets of the perception units are constructed, and customized perception models of the perception units are trained based on the road image sets of the perception units. In this way, a complete perception model library is established in the cloud. When the model is used, the most adapted perception model is dynamically selected, based on the road segment during the traveling of the vehicle and the environmental condition when the vehicle is on the corresponding road segment, from the perception model library of the cloud server to perform the perception task.
In other words, in embodiments of this application, roads in the real world are segmented into a large quantity of road segments. For any one of segmented road segments, road image sets of the road segment under different environmental conditions are constructed, so that perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions, to form the perception model library. In this way, during the traveling, the vehicle can dynamically call the most adapted perception model from the perception model library for driving perception based on the road segment during the traveling and the environmental condition when the vehicle is on the corresponding road segment.
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes the vehicle control method provided in this application in detail with reference to the following implementations. The embodiments described herein are only used for describing this application, instead of limiting this application. In addition, technical features included in the implementations that are described below may be combined with each other provided that no conflict occurs.
301: Obtain current traveling information of a to-be-controlled vehicle, the traveling information being configured for indicating a to-be-traveled road segment of the to-be-controlled vehicle during traveling and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment.
In this embodiment of this application, the to-be-controlled vehicle, also referred to as a “target vehicle,” is generally any vehicle controlled by using a slice-based perception solution. Road segments are obtained by segmenting a road. For example, a road can be segmented into a plurality of road segments to form a plurality of road slices or a plurality of road units. The to-be-traveled road segment, also referred to as a “target road segment,” is a road segment that the to-be-controlled vehicle needs to pass through during traveling.
For example, there may be a plurality of to-be-traveled road segments, and the to-be-traveled road segments may be road segments on a current navigation route planned by the to-be-controlled vehicle. The to-be-controlled vehicle may have just planned the navigation route and not yet passed through the road segments. Alternatively, there is one to-be-traveled road segment, which may be the road segment where the to-be-controlled vehicle is currently located. This is not limited in this application.
In a possible implementation, the environmental condition includes a combination of different lighting categories and different weather categories, or the environmental condition includes different lighting categories, or the environmental condition includes different weather categories, which is not limited in this application. A quantity of categories for lighting and weather can be set based on an actual requirement, which is not limited in this application.
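As a hedged illustration of how lighting and weather categories could be combined into environmental conditions, consider the following sketch; the category names and splits are examples only, since the application leaves the quantity of categories to the actual requirement.

```python
# Example only: combining lighting and weather categories into
# environmental conditions; the category splits are assumptions.
from itertools import product

lighting_categories = ["day", "night"]
weather_categories = ["sunny", "rainy", "snowy", "foggy"]

# Each combination is one environmental condition, so with these splits a
# single road segment maps to 2 * 4 = 8 perception units.
environmental_conditions = [
    f"{lighting}_{weather}"
    for lighting, weather in product(lighting_categories, weather_categories)
]
print(environmental_conditions)  # ['day_sunny', 'day_rainy', ..., 'night_foggy']
```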
302: Obtain a target perception model of the to-be-traveled road segment under the target environmental condition from a perception model library, the perception model library being configured to store perception models of a plurality of road segments under different environmental conditions, and for any one of the road segments, the perception models of the road segment under the different environmental conditions being trained based on road image sets of the road segment under the different environmental conditions.
The to-be-controlled vehicle can actively request the cloud server to deliver a target perception model that matches the traveling information, or the cloud server can actively push, based on the traveling information reported by the to-be-controlled vehicle, the target perception model that matches the traveling information to the to-be-controlled vehicle, which is not limited in this application. The target perception model that matches the traveling information may be a target perception model of the to-be-traveled road segment under the target environmental condition.
As shown in
If the to-be-traveled road segment is road segment 1 and the target environmental condition is environmental condition 1, the obtained target perception model is perception model 1-1 trained based on the road image set of road segment 1 under environmental condition 1.
In a possible implementation, all matching target perception models may be delivered to the to-be-controlled vehicle in advance after path planning. The path planning generally generates the current navigation route. In this case, a plurality of to-be-traveled road segments are obtained. The plurality of to-be-traveled road segments may be the road segments on the current navigation route of the to-be-controlled vehicle. To obtain all the matching target perception models in advance, target perception models of the to-be-traveled road segments under corresponding target environmental conditions can be obtained from the perception model library in response to obtaining of the plurality of to-be-traveled road segments. A target perception model matches a to-be-traveled road segment and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment.
In another possible implementation, the target perception model may be obtained in real time. In other words, the target perception model is obtained when the to-be-controlled vehicle is on the to-be-traveled road segment. In this case, there is one to-be-traveled road segment. In other words, the to-be-traveled road segment is the to-be-traveled road segment where the to-be-controlled vehicle is currently located. A manner of obtaining the target perception model of the to-be-traveled road segment under the target environmental condition from the perception model library may be that the to-be-controlled vehicle obtains the target perception model of the to-be-traveled road segment under the target environmental condition from the perception model library. The obtained target perception model matches the to-be-traveled road segment where the to-be-controlled vehicle is currently located, and the target environmental condition when the to-be-controlled vehicle is on the corresponding to-be-traveled road segment.
303: Call the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result, and control, based on the perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment.
The perception task can be configured for detecting various obstacles and collecting various pieces of information on a road. The perception task is performed to obtain the perception result, and then the to-be-controlled vehicle is controlled, based on the perception result, to travel on the to-be-traveled road segment, thereby implementing safe driving of the vehicle on the road.
In this embodiment of this application, the implementation of 303 may differ depending on the manner of obtaining the target perception model. In a case that all the matching target perception models are obtained in advance, the calling the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result, and controlling, based on the perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment includes: calling, during the traveling of the to-be-controlled vehicle, when the to-be-controlled vehicle reaches a to-be-traveled road segment, a target perception model matching the to-be-traveled road segment where the to-be-controlled vehicle is currently located from the plurality of target perception models to perform the perception task, and controlling, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment where the to-be-controlled vehicle is currently located.
In a case that the target perception model is obtained in real time, the calling the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result, and controlling, based on the perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment includes: calling the obtained target perception model to perform the perception task on the to-be-traveled road segment where the to-be-controlled vehicle is currently located, and controlling, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment where the to-be-controlled vehicle is currently located.
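For illustration, the following sketch outlines the dispatch logic in the pre-delivery case: when the vehicle reaches each to-be-traveled road segment, the matching pre-delivered model is called and its result drives the control step. All names and the callback shapes are hypothetical stand-ins, not the actual vehicle interfaces.

```python
# Illustrative dispatch sketch; all names and callback shapes are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class RoadSegment:
    segment_id: str


def drive_route(
    segments: List[RoadSegment],
    models_by_segment: Dict[str, Callable[[bytes], dict]],
    read_frame: Callable[[], bytes],
    apply_control: Callable[[dict], None],
) -> None:
    for segment in segments:
        # When the vehicle reaches a to-be-traveled segment, call the
        # pre-delivered target perception model matching that segment.
        model = models_by_segment[segment.segment_id]
        perception_result = model(read_frame())  # perform the perception task
        apply_control(perception_result)         # control traveling on the segment


# Minimal usage with stubbed sensors, models, and controls.
segments = [RoadSegment("segment-1"), RoadSegment("segment-2")]
models = {s.segment_id: (lambda frame: {"obstacles": []}) for s in segments}
drive_route(segments, models, read_frame=lambda: b"", apply_control=lambda r: None)
```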
The vehicle control solution provided in embodiments of this application can implement slice-based perception. In detail, for any one of road segments obtained by road segmentation, road image sets of the road segment under different environmental conditions are constructed, so that perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions, to form a perception model library. A perception model is configured to perform a perception task on a vehicle under an environmental condition on a road segment. In this way, during traveling of a to-be-controlled vehicle, the most adapted target perception model can be dynamically called from the perception model library for driving perception based on a to-be-traveled road segment of the to-be-controlled vehicle during the traveling and an environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment. Because different perception models are trained for different road segments and different environmental conditions, during the traveling of the vehicle, the most adapted target perception model can be used for driving perception for different to-be-traveled road segments and the different environmental conditions. Therefore, through a plurality of adapted target perception models, all possible situations that may occur during the traveling of the vehicle can be accurately perceived, providing a good perception effect, and ensuring driving safety.
401: The cloud server obtains current traveling information of a to-be-controlled vehicle, the traveling information being configured for indicating a to-be-traveled road segment of the to-be-controlled vehicle during traveling and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment.
In this embodiment of this application, the to-be-controlled vehicle can automatically report the current traveling information to the cloud server, which is not limited in this application.
402: The cloud server searches for a target perception model of the to-be-traveled road segment under the target environmental condition from a perception model library, the perception model library being configured to store perception models of a plurality of road segments under different environmental conditions, and for any one of the road segments, the perception models of the road segment under the different environmental conditions being trained based on road image sets of the road segment under the different environmental conditions.
In this embodiment of this application, the cloud server trains perception models of the plurality of road segments under the different environmental conditions based on road image sets of the plurality of road segments under the different environmental conditions to obtain the perception model library.
The first operation in constructing the perception model library is to segment a road into segments. A reason why the road needs to be segmented is that vegetation, buildings, road facility styles, and the like in different areas are different. Even for the same area, vegetation, buildings, road facility styles, and the like of different road segments in the area may also be different. To make the perception model have a better perception effect, the road needs to be segmented into a large quantity of small road segments.
For any one of roads within a target geographic area, the road is segmented to obtain road segments of the road. For example, the target geographic area may be a world range or a national range, which is not limited in this application.
In a possible implementation, all types of roads may be segmented using the same segmentation granularity. For example, each type of road may be segmented using a segmentation granularity of a plurality of kilometers (for example, 10 kilometers), to segment each road into a large quantity of small road segments. Alternatively, to make the perception model have a better perception effect, various roads can be segmented based on road types. In other words, different types of roads can alternatively be segmented based on different segmentation granularities. For example, a road with a more complex condition corresponds to a smaller segmentation granularity. Specifically, for any one of the roads, a segmentation granularity matching a road type of the road is obtained; and the road is segmented based on the segmentation granularity to obtain road segments of the road. An overlapping area exists between any two adjacent road segments of the road.
For example, the road types include, but are not limited to, highways, urban expressways, urban roads, and country roads, which is not limited in this application. In addition, to avoid disconnected areas between adjacent road segments, a specific overlapping area (for example, 500 meters) is provided between any two adjacent road segments, which is not limited in this application.
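A minimal sketch of such type-dependent segmentation with overlap follows; the granularity value per road type is an assumption chosen only to illustrate the idea, and the 500-meter overlap mirrors the example above.

```python
# Sketch only: segmentation granularities per road type are assumed values.
GRANULARITY_KM = {
    "highway": 10.0,
    "urban_expressway": 5.0,
    "urban_road": 2.0,
    "country_road": 2.0,
}
OVERLAP_KM = 0.5  # e.g., 500 meters of overlap between adjacent segments


def segment_road(road_length_km: float, road_type: str):
    """Split one road into segments at the granularity matching its type,
    with an overlapping area between any two adjacent segments."""
    granularity = GRANULARITY_KM[road_type]
    segments, start = [], 0.0
    while start < road_length_km:
        end = min(start + granularity, road_length_km)
        segments.append((start, end))
        if end >= road_length_km:
            break
        start = end - OVERLAP_KM  # the next segment overlaps the previous one
    return segments


print(segment_road(23.0, "highway"))
# [(0.0, 10.0), (9.5, 19.5), (19.0, 23.0)]
```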
In another possible implementation, in an example in which environmental factors include lighting and weather, for the same road segment, there are different lighting categories such as day and night, and different weather categories such as sunny days, rainy days, snowy days, and foggy days. Differences in environmental conditions have a large impact on image formation, consequently affecting a perception effect of a perception model. Therefore, each road unit needs to be mapped to a plurality of perception units based on the foregoing different lighting and weather conditions. Accordingly, in this embodiment of this application, the environmental conditions may be further divided. To be specific, for the same road segment, the road segment is mapped to different perception units based on the different environmental conditions. In other words, the different perception units are the road segment under the different environmental conditions.
Then, in this embodiment of this application, road image sets of the perception units are constructed. In other words, the road image sets of the road segment under the different environmental conditions are constructed, and the perception model library is constructed based on the road image sets. To be specific, perception models of the perception units are trained based on the road image sets of the perception units, to form the perception model library. In other words, for the same road segment, perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions. A road image set includes a road image of the road segment under an environmental condition.
In addition, there are generally a plurality of perception tasks related to autonomous driving or assisted driving, such as target detection, semantic segmentation, and image classification. In embodiments of this application, a perception model in the perception model library can be configured to perform any one of these perception tasks, which is not limited in this application. Correspondingly, training the perception models of the road segment under the different environmental conditions based on the road image sets of the road segment under the different environmental conditions includes: training, for a target perception task, the perception models of the road segment under the different environmental conditions based on the road image sets of the road segment under the different environmental conditions and a model training method matching the target perception task, the trained perception model being configured to perform the target perception task, and the target perception task being any perception task related to the autonomous driving or the assisted driving.
403: The cloud server delivers the target perception model to the to-be-controlled vehicle, the to-be-controlled vehicle being configured to: call the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result; and control, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment.
The operation is the same as the foregoing operation 303. Details are not described herein again.
The vehicle control solution provided in embodiments of this application can implement slice-based perception. In detail, for any one of road segments obtained by road segmentation, road image sets of the road segment under different environmental conditions are constructed, so that perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions, to form a perception model library. A perception model is configured to perform a perception task on a vehicle under an environmental condition on a road segment. In this way, during traveling of a to-be-controlled vehicle, the most adapted target perception model can be dynamically called from the perception model library for driving perception based on a to-be-traveled road segment of the to-be-controlled vehicle during the traveling and an environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment. Because different perception models are trained for different road segments and different environmental conditions, during the traveling of the vehicle, the most adapted target perception model can be used for driving perception for different to-be-traveled road segments and the different environmental conditions. Therefore, through a plurality of adapted target perception models, all possible situations that may occur during the traveling of the vehicle can be accurately perceived, providing a good perception effect, and ensuring driving safety.
501: The cloud server obtains road network data and determines roads within a target geographic area based on the road network data.
In this embodiment of this application, the cloud server can pull a map via a map interface, and then obtain the road network data via the map, which is not limited in this application.
502: The cloud server obtains, for any one of the roads within the target geographic area, a segmentation granularity matching a road type of the road; and segments the road based on the segmentation granularity.
The operation is the same as the foregoing operation 402. Details are not described herein again.
503: The cloud server constructs, for any one of segmented road segments, road image sets of the road segment under different environmental conditions.
In an example in which the target geographic area is a world range, because the road network is spread all over the world, it is difficult to use a dedicated collection vehicle to collect samples from all roads, all lighting, and all weather situations in the real world. Therefore, this embodiment of this application may collect road images in a manner of crowdsourcing collection. To be specific, massive road images can be reflowed from vehicles all over the world by cooperating with various device manufacturers and car manufacturers, and the reflowed road images are automatically divided into corresponding road segments in the cloud based on position information when the vehicles collect or reflow the road images, to obtain sample data sets of the road segments. In addition, for the same road segment, there are differences in environmental conditions. Therefore, the sample data sets of the road segments need to be further divided based on the environmental conditions, to form road image sets of the road segments under the different environmental conditions. The device manufacturers are merchants that sell vehicle accessories.
To be specific, constructing the road image sets of the road segment under the different environmental conditions includes: obtaining road images collected by a plurality of vehicles within the target geographic area and position information of the plurality of vehicles when the road images are collected; classifying, based on the position information, the obtained road images into different road segments within the target geographic area to obtain sample data sets of the road segments within the target geographic area; and classifying, for any one of the road segments, the sample data set of the road segment based on a divided environmental condition to obtain road image sets of the road segment under the different environmental conditions.
For the sample data sets of the road segments, relying on manual environment differentiation on massive data is inefficient and costly. In view of this, an environment classification model can be used to automatically identify and classify road images under the different environmental conditions. To be specific, the classifying the sample data set of the road segment based on a divided environmental condition includes: classifying the sample data set of the road segment based on an environment classification model to obtain environmental conditions corresponding to road images in the sample data set; and using road images with a same environmental condition as a road image set of the road segment under an environmental condition.
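For illustration, a short sketch of this grouping step is given below, assuming a `classify_environment` callable that stands in for the environment classification model; the callable and its label format are assumptions.

```python
# Sketch only: classify_environment stands in for the environment
# classification model and is assumed to return a condition label.
from collections import defaultdict
from typing import Callable, Dict, Iterable, List


def build_image_sets(
    sample_data_set: Iterable[object],
    classify_environment: Callable[[object], str],
) -> Dict[str, List[object]]:
    image_sets: Dict[str, List[object]] = defaultdict(list)
    for road_image in sample_data_set:
        condition = classify_environment(road_image)
        image_sets[condition].append(road_image)  # group same-condition images
    return dict(image_sets)
```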
Then, each segmented road image set can be manually annotated with various targets that need to be perceived by autonomous driving or assisted driving, such as pedestrians, other vehicles, lane lines, zebra crossings, drivable areas, traffic lights, and traffic signs, which is not limited in this application.
In a possible implementation, the foregoing environment classification model may use a deep learning model.
For example, the environment classification model includes a convolution layer, a first pooling layer, a plurality of residual blocks connected in sequence, a second pooling layer, and a fully connected layer. In
Correspondingly, for any one of the road images in the sample data set, the convolution layer is configured to perform feature extraction on the road image to obtain a first feature map; the first pooling layer is configured to downsample the first feature map to obtain a second feature map; the plurality of residual blocks connected in sequence are configured to perform feature extraction on the second feature map to obtain a third feature map; the second pooling layer is configured to downsample the third feature map to obtain a fourth feature map; and the fully connected layer is configured to output, based on the fourth feature map, an environmental condition corresponding to the road image. For example, the fully connected layer is responsible for outputting probabilities that the road image belongs to the environmental conditions, and the category with the highest probability is the environmental condition corresponding to the road image.
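A minimal PyTorch sketch consistent with this description (convolution layer, first pooling layer, residual blocks connected in sequence, second pooling layer, fully connected layer) follows; the channel count, block count, and number of conditions are assumptions, not values fixed by this application.

```python
# Sketch only: channel count, block count, and condition count are assumed.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection


class EnvironmentClassifier(nn.Module):
    def __init__(self, num_conditions: int = 8, channels: int = 64, blocks: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 7, stride=2, padding=3)  # convolution layer
        self.pool1 = nn.MaxPool2d(3, stride=2, padding=1)           # first pooling layer
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.pool2 = nn.AdaptiveAvgPool2d(1)                        # second pooling layer
        self.fc = nn.Linear(channels, num_conditions)               # fully connected layer

    def forward(self, x):
        x = self.pool1(self.stem(x))   # first feature map, then downsampled second
        x = self.blocks(x)             # third feature map
        x = self.pool2(x).flatten(1)   # fourth feature map
        return self.fc(x)              # per-condition scores


logits = EnvironmentClassifier()(torch.randn(1, 3, 224, 224))
condition = logits.softmax(dim=1).argmax(dim=1)  # highest-probability category
```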
504: The cloud server trains perception models of the plurality of road segments under the different environmental conditions based on the road image sets of the plurality of road segments under the different environmental conditions to obtain a perception model library.
There are generally a plurality of perception tasks related to autonomous driving or assisted driving, including target detection, semantic segmentation, image classification, and the like. The operation uses dynamic target (also referred to as dynamic obstacle) detection as an example to describe a training process of a perception model. For example, this embodiment of this application defines the dynamic target as three categories: vehicles, pedestrians, and riders.
In a possible implementation, training, for any one of the road segments, perception models of the road segment under the different environmental conditions based on road image sets of the road segment under the different environmental conditions includes:
5041: Generate, for a road image set of the road segment under an environmental condition, anchors on road images included in the road image set.
The anchors are a series of detection boxes set in advance, also referred to as prior boxes. For example, a quantity and sizes of the anchors may be determined by performing k-means clustering on all annotation boxes of all the road images in the road image set. The annotation box is configured to annotate a target contained in a road image. In other words, the annotation box is a real boundary box manually annotated on the road image. In short, in this embodiment of this application, widths and heights of annotation boxes of the foregoing three types of targets are considered as features, k-means clustering is performed to cluster all annotation boxes into B classes, and then the centroid of each class is used as the width and height of a corresponding anchor, where B refers to the quantity of anchors.
Specifically, the generating anchors on road images included in the road image set includes: clustering annotation boxes on the road images included in the road image set to obtain a plurality of classes, each class including a plurality of data points, and the data points being generated based on height values and width values of the annotation boxes; using a quantity of classes obtained by clustering as a quantity of anchors corresponding to the road image set, and using height values and width values corresponding to centroids of the classes as sizes of the anchors corresponding to the road image set; and generating the anchors on the road images included in the road image set based on the determined quantity of anchors and the determined sizes of the anchors. In other words, the anchors are generated at positions on each road image based on the determined quantity of anchors and the determined sizes of the anchors.
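The clustering step could be sketched as follows, using plain NumPy k-means over annotation-box widths and heights; the sample boxes and the iteration count are made up for illustration.

```python
# Sketch only: plain k-means over (width, height) pairs of annotation boxes.
import numpy as np


def anchor_sizes_from_kmeans(box_whs: np.ndarray, num_anchors: int, iters: int = 100):
    """Cluster annotation-box (width, height) pairs into B classes and use the
    class centroids as the widths and heights of the corresponding anchors."""
    rng = np.random.default_rng(0)
    centroids = box_whs[rng.choice(len(box_whs), num_anchors, replace=False)]
    for _ in range(iters):
        # Assign every annotation box to its nearest centroid.
        dists = np.linalg.norm(box_whs[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean width/height of its class.
        for k in range(num_anchors):
            if np.any(labels == k):
                centroids[k] = box_whs[labels == k].mean(axis=0)
    return centroids  # one (width, height) per anchor


boxes = np.array([[30, 60], [32, 64], [120, 80], [118, 76]], dtype=float)
print(anchor_sizes_from_kmeans(boxes, num_anchors=2))
```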
5042: Annotate, for any one of the road images, the anchor on the road image based on an annotation box on the road image, to obtain an annotation category, annotation offset, and annotation confidence of the anchor on the road image.
During the training, each anchor is regarded as a training sample. To train the perception model, each anchor needs to be annotated. For example, annotation content includes a category label, offset, and confidence. The annotation category is configured for indicating a category of a target included in the anchor, the annotation offset is configured for indicating offset between the anchor and a similar annotation box, and the annotation confidence is configured for indicating a degree of confidence that the anchor includes the target. For any one of anchors, a similar annotation box of the anchor is an annotation box having the largest intersection-over-union with the anchor. Then, a category label of the annotation box is used as a category label of the anchor, and offset of the anchor relative to the annotation box is calculated.
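For illustration, a simplified sketch of this annotation step is shown below. Boxes are given as (x1, y1, x2, y2) corners, the offset parameterization is deliberately simplified to raw differences in center/width/height form, and the IoU threshold is an assumption; none of these details is fixed by this application.

```python
# Sketch only: simplified anchor labeling; offset form and threshold assumed.
import numpy as np


def iou(a, b):
    # a, b as (x1, y1, x2, y2): standard intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def to_cxcywh(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2,
            box[2] - box[0], box[3] - box[1])


def annotate_anchor(anchor, annotation_boxes, category_labels, iou_threshold=0.5):
    """Label one anchor with its most similar annotation box (largest IoU):
    annotation category, annotation offset, and annotation confidence."""
    ious = [iou(anchor, box) for box in annotation_boxes]
    best = int(np.argmax(ious))
    if ious[best] < iou_threshold:
        return None, None, 0.0  # background: the anchor contains no target
    acx, acy, aw, ah = to_cxcywh(anchor)
    bcx, bcy, bw, bh = to_cxcywh(annotation_boxes[best])
    offset = (bcx - acx, bcy - acy, bw - aw, bh - ah)
    return category_labels[best], offset, 1.0
```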
Correspondingly, during the target detection, a plurality of anchors are first generated on an input road image, and then prediction categories, prediction offsets, and prediction confidence of the anchors are generated. Positions of corresponding anchors can be adjusted based on the prediction offsets to obtain predicted boxes. Finally, a predicted box that needs to be output can be screened based on non-maximum suppression.
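A compact non-maximum suppression sketch follows, assuming the `iou` helper from the preceding sketch is in scope; the overlap threshold is again an assumption.

```python
# Sketch only: assumes the iou() helper defined in the preceding sketch.
def non_maximum_suppression(boxes, scores, iou_threshold=0.5):
    """Keep predicted boxes in descending confidence order, dropping any box
    that overlaps an already-kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return kept  # indices of the predicted boxes to output
```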
5043: Input the road images included in the road image set into a deep learning model to obtain an output feature map. Each position of the output feature map includes a plurality of anchors.
In this embodiment of this application, a quantity of anchors at each position of the output feature map is B, and resolution of the output feature map is 1/16 of an input image. In addition, a quantity of channels of the output feature map is B*(4+1+c). Four channels are used to provide offsets of center point horizontal coordinates, center point vertical coordinates, width values, and height values of the anchors. One channel is used to provide confidence of whether a corresponding anchor includes a target, and c channels are used to provide a category of the target. Because in this embodiment of this application, detected dynamic targets are divided into three categories, the quantity of channels of the output feature map is B*8. In addition, because in this embodiment of this application, a target detection problem is regarded as a regression problem, the foregoing offsets are also referred to as an offset regressor.
5044: Train a perception model of the road segment under the environmental condition based on the prediction categories, prediction offsets, and prediction confidence of the anchors on the output feature map, and the annotation categories, annotation offsets, and annotation confidence of the anchors.
In this embodiment of this application, a loss value of a loss function is calculated based on the prediction categories, prediction offsets, and prediction confidence of the anchors on the output feature map, and the annotation categories, annotation offsets, and annotation confidence of the anchors. With a goal of minimizing the loss value, the perception model of the road segment under the environmental condition is trained. Subsequently, through the training manner of this operation, a corresponding perception model can be trained on each road image set.
The expression of the loss function is as follows:

$$
\begin{aligned}
L ={} & \alpha \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(x_{ij}-\hat{x}_{ij}\right)^2 + \left(y_{ij}-\hat{y}_{ij}\right)^2 \right] \\
& + \alpha \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(w_{ij}-\hat{w}_{ij}\right)^2 + \left(h_{ij}-\hat{h}_{ij}\right)^2 \right] \\
& + \beta \sum_{i=1}^{S^2} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left(C_{ij}-\hat{C}_{ij}\right)^2 \\
& + \gamma \sum_{i=1}^{S^2} \mathbb{1}_{i}^{obj} \sum_{k=1}^{c} \left(p_i(k)-\hat{p}_i(k)\right)^2
\end{aligned}
$$

The first line is the center point loss, and the second line is the width and height loss. $S$ represents the width and height of the output feature map, and $B$ represents the quantity of anchors at each position of the output feature map. $\mathbb{1}_{ij}^{obj}$ represents whether the target appears in the $j$th anchor at the $i$th position of the output feature map: the value is 1 if there is a target at position $(i,j)$ and 0 otherwise. $(x_{ij}, y_{ij}, w_{ij}, h_{ij})$ represent the prediction offsets of the anchor, namely the prediction offsets of the center horizontal coordinate, the center vertical coordinate, the width value, and the height value of the anchor, respectively. $(\hat{x}_{ij}, \hat{y}_{ij}, \hat{w}_{ij}, \hat{h}_{ij})$ represent the corresponding annotation offsets of the anchor.
The third line is the confidence loss, and the fourth line is the category loss, that is, a sum of per-category losses computed over the output feature map. $\alpha$, $\beta$, and $\gamma$ represent the weight values of the losses; for example, the weight value $\alpha$ is the largest. $i$, $j$, $k$, and $c$ are all integers, and $k, c > 0$; $c$ represents the quantity of target categories. $\mathbb{1}_{i}^{obj}$ represents whether the target appears at the $i$th position of the output feature map. $C_{ij}$ represents the prediction confidence, and $\hat{C}_{ij}$ represents the annotation confidence. $p_i(k)$ represents the prediction category, and $\hat{p}_i(k)$ represents the annotation category.
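As a hedged illustration of the four loss terms, the sketch below computes them with PyTorch tensors. The dict-of-tensors layout and default weight values are assumptions, and the category term is applied per anchor here for simplicity, whereas the formula applies it per position.

```python
# Sketch only: tensor layout and default weights are assumptions.
import torch


def detection_loss(pred, target, alpha=5.0, beta=1.0, gamma=1.0):
    """Four-term loss: center-point and width/height losses (weight alpha),
    confidence loss (beta), and category loss (gamma). pred/target hold
    tensors shaped (S, S, B, ...) per the channel layout described above."""
    obj = target["obj_mask"]  # 1 where an anchor contains a target, else 0
    center = ((pred["xy"] - target["xy"]) ** 2).sum(-1)  # center point loss
    size = ((pred["wh"] - target["wh"]) ** 2).sum(-1)    # width and height loss
    conf = (pred["conf"] - target["conf"]) ** 2          # confidence loss
    cls = ((pred["cls"] - target["cls"]) ** 2).sum(-1)   # category loss over c
    return (alpha * (obj * (center + size)).sum()
            + beta * (obj * conf).sum()
            + gamma * (obj * cls).sum())
```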
505: The cloud server obtains current traveling information of the to-be-controlled vehicle, the traveling information being configured for indicating a to-be-traveled road segment of the to-be-controlled vehicle during traveling and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment.
The operation is the same as the foregoing operation 401. Details are not described herein again.
506: The cloud server searches for a target perception model of the to-be-traveled road segment under the target environmental condition from the perception model library, and sends the target perception model to the to-be-controlled vehicle.
In this embodiment of this application, to obtain all matching target perception models in advance, the cloud server searches for target perception models of the to-be-traveled road segments under corresponding target environmental conditions from the perception model library in response to obtaining of the plurality of to-be-traveled road segments. A target perception model matches a to-be-traveled road segment during traveling of the to-be-controlled vehicle and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment. Alternatively, the target perception model is obtained in real time, so that the to-be-traveled road segment is a to-be-traveled road segment where the to-be-controlled vehicle is currently located. In this case, a manner of obtaining the target perception model of the to-be-traveled road segment under the target environmental condition from a perception model library may be that the cloud server obtains the target perception model of the to-be-traveled road segment under the target environmental condition from the perception model library. The target perception model matches the to-be-traveled road segment where the to-be-controlled vehicle is currently located, and the target environmental condition when the to-be-controlled vehicle is on the corresponding to-be-traveled road segment.
In a possible implementation, the to-be-traveled road segment is a road segment on a current navigation route of the to-be-controlled vehicle. Obtaining the target environmental condition when the to-be-controlled vehicle is on the corresponding to-be-traveled road segment includes: obtaining a road condition of the to-be-controlled vehicle during the traveling based on the current navigation route; determining, based on the road condition and traveling speed of the to-be-controlled vehicle during the traveling, time when the to-be-controlled vehicle is on the to-be-traveled road segments; and determining, for any one of the to-be-traveled road segments, based on the time when the to-be-controlled vehicle is on the to-be-traveled road segment, the target environmental condition when the to-be-controlled vehicle is on the to-be-traveled road segment.
The current navigation route is a traveling path currently and automatically planned by the to-be-controlled vehicle. In an example in which environmental conditions include lighting and weather, a lighting category of the to-be-controlled vehicle when the to-be-controlled vehicle is on each road segment can be determined by estimating time when the to-be-controlled vehicle is on the road segment, and then a weather category of the to-be-controlled vehicle when the to-be-controlled vehicle is on the road segment can be determined by pulling a weather forecast, so that perception models of the road segments under different lighting and weather conditions can be delivered from the cloud server to the to-be-controlled vehicle in advance.
For example, the manner of obtaining target perception models of the to-be-traveled road segments under corresponding target environmental conditions from the perception model library may be to obtain the road condition of the to-be-controlled vehicle during the traveling based on the current navigation route; and determine, based on the road condition and traveling speed of the to-be-controlled vehicle during the traveling, time when the to-be-controlled vehicle is on the to-be-traveled road segments. In this way, the target perception models of the to-be-traveled road segments under the corresponding target environmental conditions are obtained from the perception model library in sequence based on the time when the to-be-controlled vehicle is on the to-be-traveled road segments. Correspondingly, the cloud server can obtain the corresponding perception models from the perception model library and deliver the perception models to the to-be-controlled vehicle in sequence based on the time when the to-be-controlled vehicle is on the to-be-traveled road segments. To be specific, a target perception model of a to-be-traveled road segment that the to-be-controlled vehicle passes first is delivered first, and a target perception model of a road segment that the to-be-controlled vehicle passes later is delivered later.
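A sketch of this ordered, time-based delivery is given below; the segment record fields and the `fetch_model` callable are hypothetical stand-ins for the cloud server's actual interfaces.

```python
# Sketch only: segment fields and fetch_model are hypothetical.
from typing import Callable, Dict, List


def schedule_model_delivery(
    route_segments: List[Dict],
    traveling_speed_kmh: float,
    fetch_model: Callable[[str, str], object],
) -> List[object]:
    """Estimate when the vehicle reaches each to-be-traveled segment from the
    route and traveling speed, then deliver matched models in that order."""
    eta_h, deliveries = 0.0, []
    for seg in route_segments:  # segments ordered along the navigation route
        eta_h += seg["length_km"] / max(traveling_speed_kmh, 1.0)
        condition = seg["expected_condition"]  # e.g., clock + weather forecast
        deliveries.append((eta_h, fetch_model(seg["segment_id"], condition)))
    # A segment passed first has its target perception model delivered first.
    return [model for _, model in sorted(deliveries, key=lambda d: d[0])]
```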
507: The to-be-controlled vehicle calls the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result; and controls, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment.
The operation is the same as the foregoing operation 303. Details are not described herein again.
In a possible implementation, the foregoing environment classification model and various sensors are deployed at the to-be-controlled vehicle. When the target perception model is called, an environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located can be determined based on the environment classification model and the sensors, to select a target perception model that needs to be called currently. In view of this, the calling a target perception model matching a to-be-traveled road segment where the to-be-controlled vehicle is currently located from the plurality of target perception models to perform the perception task includes: obtaining, based on an environment classification model deployed at the to-be-controlled vehicle, a first environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; obtaining, based on a sensor deployed at the to-be-controlled vehicle, a second environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; determining, based on the first environmental condition and the second environmental condition, a final environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; and calling a target perception model of the to-be-traveled road segment where the to-be-controlled vehicle is currently located under the final environmental condition from the plurality of target perception models.
For example, the to-be-controlled vehicle can obtain positioning information in real time based on a global positioning system (GPS) module deployed at the to-be-controlled vehicle, and then determine, based on the obtained positioning information, the to-be-traveled road segment where the to-be-controlled vehicle is currently located. When the environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located is determined, the environmental condition of that road segment, which is referred to as the first environmental condition in this specification, can be determined by the deployed environment classification model. In addition, the environmental condition of that road segment, which is referred to as the second environmental condition in this specification, can alternatively be determined by the sensor deployed at the to-be-controlled vehicle. For example, a rain sensor is used to identify whether it is raining on the road segment where the to-be-controlled vehicle is currently located, and a light sensor is used to identify a lighting category of the road segment where the to-be-controlled vehicle is currently located. Finally, the environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located (that is, the final environmental condition) is comprehensively determined based on the first environmental condition output by the environment classification model and the second environmental condition output by the sensor. For example, in this embodiment of this application, the first environmental condition output by the environment classification model is used to correct the second environmental condition output by the sensor. In other words, the second environmental condition from the sensor is used as a reference.
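For illustration, one possible fusion policy is sketched below: the sensor readings serve as the reference where available, and the classifier output fills in fields the sensors cannot observe. The field names and the policy itself are assumptions; this application does not fix an exact correction rule.

```python
# Sketch only: a possible fusion policy; field names and rule are assumed.
def resolve_environment(classifier_condition: dict, sensor_condition: dict) -> dict:
    """Combine the first environmental condition (environment classification
    model) with the second (sensors), using the sensor values as reference
    and falling back to the classifier where the sensors give no reading."""
    lighting = sensor_condition.get("lighting") or classifier_condition["lighting"]
    weather = sensor_condition.get("weather") or classifier_condition["weather"]
    return {"lighting": lighting, "weather": weather}


final_condition = resolve_environment(
    {"lighting": "night", "weather": "rainy"},  # environment classification model
    {"lighting": "night", "weather": None},     # rain sensor / light sensor
)
print(final_condition)  # {'lighting': 'night', 'weather': 'rainy'}
```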
In addition, the environmental condition of the to-be-controlled vehicle when the to-be-controlled vehicle is on the to-be-traveled road segment may be determined by the cloud server based on related information reported by the to-be-controlled vehicle, or may be determined by the to-be-controlled vehicle based on the related information, which is not limited in this application.
508: The cloud server obtains a plurality of road images collected by the to-be-controlled vehicle during traveling; updates, based on the road images collected by a plurality of to-be-controlled vehicles, the road image sets of the road segments under the different environmental conditions within the target geographic area; and trains, based on the updated road image sets of the road segments under the different environmental conditions, the perception models of the road segments under the different environmental conditions.
When increasingly more vehicles use a slice-based perception solution to perform a perception task, in other words, when the slice-based perception solution is applied to enough vehicles, it is no longer necessary to cooperate with various device manufacturers and car manufacturers to reflow road images. Instead, a data reflow module can be disposed inside a perception model to automatically reflow the road images when a vehicle is in an autonomous driving mode or an assisted driving mode, thereby implementing a closed-loop iteration of model training-model delivery-data reflow-model training.
The vehicle control solution provided in embodiments of this application can implement slice-based perception. In detail, for any one of road segments obtained by road segmentation, road image sets of the road segment under different environmental conditions are constructed, so that perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions, to form a perception model library. A perception model is configured to perform a perception task on a vehicle under an environmental condition on a road segment. In this way, during traveling of a to-be-controlled vehicle, the most adapted target perception model can be dynamically called from the perception model library for driving perception based on a to-be-traveled road segment of the to-be-controlled vehicle during the traveling and an environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment. Because different perception models are trained for different road segments and different environmental conditions, during the traveling of the vehicle, the most adapted target perception model can be used for driving perception for different to-be-traveled road segments and the different environmental conditions. Therefore, through a plurality of adapted target perception models, all possible situations that may occur during the traveling of the vehicle can be accurately perceived, providing a good perception effect, and ensuring driving safety.
In conclusion, in embodiments of this application, the perception difficulty of a single model is greatly reduced, thereby achieving a more accurate and robust perception effect. In addition, this is also conducive to optimizing various long-tail situations, improving user experience, and ultimately promoting the implementation and adoption of autonomous driving.
In an exemplary embodiment, a vehicle control apparatus is further provided. The apparatus includes the following modules, whose cooperation is illustrated by the sketch after the list:

a first obtaining module 801, configured to obtain current traveling information of a to-be-controlled vehicle, the traveling information being configured for indicating a to-be-traveled road segment of the to-be-controlled vehicle during traveling and a target environmental condition when the to-be-controlled vehicle is on a corresponding to-be-traveled road segment;
a second obtaining module 802, configured to obtain a target perception model of the to-be-traveled road segment under the target environmental condition from a perception model library, the perception model library being configured to store perception models of a plurality of road segments under different environmental conditions, and for any one of the road segments, the perception models of the road segment under the different environmental conditions being trained based on road image sets of the road segment under the different environmental conditions; and
a control module 803, configured to call the target perception model to perform a perception task on the to-be-traveled road segment to obtain a perception result, and control, based on the perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment.
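For illustration only, the following is a minimal sketch in Python of how the three modules above could cooperate; the vehicle interface and method names are assumptions of this description rather than part of the embodiments.

```python
# Minimal sketch: the three apparatus modules as one control step.
# The vehicle interface (traveling_info, camera_frames, control) is an
# illustrative assumption.

class VehicleControlApparatus:
    def __init__(self, library, vehicle):
        self.library = library  # perception model library (see earlier sketch)
        self.vehicle = vehicle

    def step(self):
        # First obtaining module: to-be-traveled segment and target condition.
        segment_id, condition = self.vehicle.traveling_info()
        # Second obtaining module: the target perception model for this slice.
        model = self.library.lookup(segment_id, condition)
        # Control module: perform the perception task and act on the result.
        perception_result = model.perceive(self.vehicle.camera_frames())
        self.vehicle.control(perception_result)
```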
In a possible implementation, a quantity of the to-be-traveled road segments is more than one, and the plurality of to-be-traveled road segments are road segments on a current navigation route of the to-be-controlled vehicle. The second obtaining module is configured to obtain target perception models of the to-be-traveled road segments under corresponding target environmental conditions from the perception model library in response to obtaining the plurality of to-be-traveled road segments. Among the plurality of target perception models, each target perception model matches one to-be-traveled road segment and the target environmental condition when the to-be-controlled vehicle is on that road segment.
The control module is configured to: call, during the traveling of the to-be-controlled vehicle, when the to-be-controlled vehicle reaches a to-be-traveled road segment, the target perception model matching the to-be-traveled road segment where the to-be-controlled vehicle is currently located from the plurality of target perception models to perform the perception task; and control, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment where the to-be-controlled vehicle is currently located.
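For illustration only, the following is a minimal sketch in Python of prefetching one target perception model per to-be-traveled road segment on the navigation route and switching to the matching model when the vehicle reaches each segment; all names are assumptions of this description.

```python
# Minimal sketch: prefetch one target model per route segment, then switch
# to the matching model whenever positioning reports a new segment.

def prefetch_models(library, route_segments, predicted_conditions):
    """route_segments: ordered segment ids on the current navigation route;
    predicted_conditions: target environmental condition per segment."""
    return {seg: library.lookup(seg, cond)
            for seg, cond in zip(route_segments, predicted_conditions)}

def on_segment_entered(prefetched, segment_id, frames, vehicle):
    """Called when the vehicle reaches a to-be-traveled road segment."""
    model = prefetched[segment_id]  # the matching target perception model
    vehicle.control(model.perceive(frames))
```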
In a possible implementation, the second obtaining module is configured to: obtain a road condition of the to-be-controlled vehicle during the traveling based on the current navigation route; determine, based on the road condition and a traveling speed of the to-be-controlled vehicle during the traveling, times at which the to-be-controlled vehicle is on the to-be-traveled road segments; and determine, for any one of the to-be-traveled road segments, based on the time at which the to-be-controlled vehicle is on the to-be-traveled road segment, the target environmental condition when the to-be-controlled vehicle is on the to-be-traveled road segment.
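For illustration only, the following is a minimal sketch in Python of estimating, from the road condition and traveling speed, the time at which the vehicle is on each to-be-traveled road segment, and then determining the target environmental condition for that time; the forecast lookup is an assumption of this description.

```python
# Minimal sketch: estimate per-segment arrival times from segment lengths
# and effective speeds (which may already reflect the road condition,
# e.g., congestion lowering the speed), then query a condition forecast.

def estimate_arrival_times(start_time, segment_lengths_m, speeds_mps):
    times, t = [], start_time
    for length, speed in zip(segment_lengths_m, speeds_mps):
        times.append(t)      # time at which this segment is entered
        t += length / speed  # travel time across this segment
    return times

def target_condition(segment_id, timestamp, forecast):
    """forecast: assumed callable, e.g., a weather/lighting service mapping
    (segment, time) to an environmental condition."""
    return forecast(segment_id, timestamp)
```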
In a possible implementation, the to-be-traveled road segments are road segments on the current navigation route of the to-be-controlled vehicle. The second obtaining module is configured to: obtain a road condition of the to-be-controlled vehicle during the traveling based on the current navigation route; determine, based on the road condition and a traveling speed of the to-be-controlled vehicle during the traveling, times at which the to-be-controlled vehicle is on the to-be-traveled road segments; and obtain, based on the times at which the to-be-controlled vehicle is on the to-be-traveled road segments, the target perception models of the to-be-traveled road segments under the corresponding target environmental conditions from the perception model library in sequence.
In a possible implementation, the to-be-traveled road segment is a to-be-traveled road segment where the to-be-controlled vehicle is currently located. The second obtaining module is configured to obtain a target perception model of the to-be-traveled road segment under the target environmental condition from the perception model library. The obtained target perception model matches the to-be-traveled road segment where the to-be-controlled vehicle is currently located and the target environmental condition when the to-be-controlled vehicle is on that road segment.
The control module is configured to call the obtained target perception model to perform the perception task on the to-be-traveled road segment where the to-be-controlled vehicle is currently located to obtain the perception result, and control, based on the obtained perception result, the to-be-controlled vehicle to travel on the to-be-traveled road segment where the to-be-controlled vehicle is currently located.
In a possible implementation, the control module is configured to: obtain, based on an environment classification model deployed at the to-be-controlled vehicle, a first environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; obtain, based on a sensor deployed at the to-be-controlled vehicle, a second environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; determine, based on the first environmental condition and the second environmental condition, a final environmental condition of the to-be-traveled road segment where the to-be-controlled vehicle is currently located; and call a target perception model of the to-be-traveled road segment where the to-be-controlled vehicle is currently located under the final environmental condition from the plurality of target perception models.
Any combination of the foregoing exemplary technical solutions may be used to obtain an exemplary embodiment of the present disclosure. Details are not described herein.
In a possible implementation, the apparatus further includes:
In a possible implementation, the apparatus further includes:
In a possible implementation, for a target perception task, the perception models of the road segment under the different environmental conditions are trained based on the road image sets of the road segment under the different environmental conditions and a model training method matching the target perception task. The perception model is configured to perform the target perception task.
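For illustration only, the following is a minimal sketch in Python of selecting a model training method that matches the target perception task; the task names and trainer functions are assumptions of this description rather than part of the embodiments.

```python
# Minimal sketch: dispatch the road image set to a training method that
# matches the target perception task. Trainers are illustrative stubs.

def train_detector(image_set):
    """Placeholder for a detection-style training procedure."""
    ...

def train_segmenter(image_set):
    """Placeholder for a segmentation-style training procedure."""
    ...

TRAINERS = {
    "obstacle_detection": train_detector,   # detection task
    "drivable_area":      train_segmenter,  # segmentation task
}

def train_for_task(task, image_set):
    return TRAINERS[task](image_set)
```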
In a possible implementation, the apparatus further includes:
In a possible implementation, the training module is further configured to:
In a possible implementation, the training module is further configured to:
In a possible implementation, the construction module is configured to:
Any combination of the foregoing exemplary technical solutions may be used to obtain an exemplary embodiment of the present disclosure. Details are not described herein.
When the vehicle control apparatus provided in the foregoing embodiment performs vehicle control, only division of the foregoing function modules is used as an example for description. In actual application, the foregoing functions may be allocated to different function modules for implementation based on requirements. In other words, an internal structure of the apparatus is divided into different function modules, to complete all or some of the foregoing described functions. In addition, the vehicle control apparatus and vehicle control method embodiments provided in the foregoing embodiments are based on the same conception. For details of the specific implementation process, reference may be made to the method embodiments, and details are not described herein again.
The processor 1001 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA). The processor 1001 may also include a main processor and a coprocessor. The main processor is a processor configured to process data in an awake state, and is also referred to as a central processing unit (CPU). The coprocessor is a low-power processor configured to process data in a standby state. In a possible implementation, the processor 1001 may be integrated with a graphics processing unit (GPU). The GPU is configured to render and draw content that needs to be displayed on a display screen. In some embodiments, the processor 1001 may further include an artificial intelligence (AI) processor. The AI processor is configured to process computing operations related to machine learning.
The memory 1002 may include one or more computer-readable storage media. The computer-readable storage medium may be non-transitory. The memory 1002 may further include a high-speed random access memory and a nonvolatile memory, for example, one or more magnetic disk storage devices or flash storage devices. In a possible implementation, the non-transitory computer-readable storage medium in the memory 1002 is configured to store a computer program, and the computer program is configured to be executed by the processor 1001 to implement the vehicle control method provided in the method embodiments of this application.
In a possible implementation, the computer device 1000 further includes: a peripheral device interface 1003 and at least one peripheral device. The processor 1001, the memory 1002, and the peripheral device interface 1003 may be connected via a bus or a signal cable. Each peripheral device may be connected to the peripheral device interface 1003 via a bus, a signal cable, or a circuit board. Specifically, the peripheral device includes at least one of a radio frequency circuit 1004, a display screen 1005, a camera component 1006, an audio circuit 1007, or a power supply 1008.
A person skilled in the art may understand that the structure shown in the figure does not constitute a limitation on the computer device 1000, and the computer device may include more or fewer components than those shown in the figure, or combine some components, or use a different component arrangement.
The computer device 1100 may be a server. The computer device 1100 may vary greatly due to differences in configuration or performance, and may include one or more central processing units (CPUs) 1101 and one or more memories 1102. The memory 1102 stores a computer program, and the computer program is loaded and executed by the central processing unit 1101 to implement the vehicle control method provided in the foregoing method embodiments. Certainly, the computer device 1100 may further include components such as a wired or wireless network interface, a keyboard, and an input/output (I/O) interface, to facilitate input and output. The computer device 1100 may further include other components configured to implement device functions. Details are not further described herein.
In an exemplary embodiment, a computer-readable storage medium is further provided, such as a memory including program code. The program code may be executed by a processor of a computer device to implement the vehicle control method provided in the foregoing embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, or an optical data storage device.
In an exemplary embodiment, a computer program product is further provided, including a computer program. The computer program is stored in a computer-readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium and executes it, to cause the computer device to perform the foregoing vehicle control method.
A person of ordinary skill in the art may understand that all or some of the operations of the foregoing embodiments may be implemented by hardware, or may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium. The foregoing storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely exemplary embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principle of this application shall fall within the protection scope of this application.
Number | Date | Country | Kind
---|---|---|---
202211186055.7 | Sep 2022 | CN | national
This application is a continuation of International Application No. PCT/CN2023/113976, filed on Aug. 21, 2023, which claims priority to Chinese Patent Application No. 202211186055.7, filed with the China National Intellectual Property Administration on Sep. 27, 2022 and entitled “VEHICLE CONTROL METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM,” the entire contents of both of which are incorporated herein by reference.
 | Number | Date | Country
---|---|---|---
Parent | PCT/CN2023/113976 | Aug 2023 | WO
Child | 18814092 | | US