The present application claims the benefit of Chinese Patent Application No. 202311469457.2, filed on Nov. 6, 2023, which is incorporated herein by reference in its entirety.
The present disclosure relates to an assisted driving technology, in particular to a method and apparatus for verifying a vehicle function, an electronic device, and a storage medium.
In fields such as autonomous driving and assisted driving, a vehicle usually has functions such as Autonomous Emergency Braking (AEB), Adaptive Cruise Control (ACC), Navigate on Autopilot (NOA), and Highway Pilot (HWP), thereby providing users with the services of the corresponding functions. The implementation of these functions usually depends on the perception results of a perception function of the vehicle (also known as an ego vehicle) with respect to the environments of the ego vehicle, and the environments may include dynamic objects, such as other vehicles, pedestrians, and cyclists around the ego vehicle, as well as static objects, such as lane lines and roadsides around the ego vehicle. The perception results need to be obtained based on a corresponding perception algorithm for the perception function of the ego vehicle, and the perception algorithm may include a perception algorithm based on at least one kind of sensor data among vision, laser radar, ultrasonic radar, millimeter-wave radar, etc. The performance of the perception algorithm will be affected by factors such as hardware and environmental changes during application, which easily causes vehicle functions to fail to meet the corresponding expected functional safety requirements or functional performance requirements.
In order to solve the above-mentioned technical problem that the perception algorithm easily causes the vehicle functions to fail to meet the expected functional safety requirements, embodiments of the present disclosure provide a method and apparatus for verifying a vehicle function, an electronic device, and a storage medium. By simulating various perception failure situations of the perception algorithm, the tolerance of the vehicle functions to the performance of the perception algorithm is verified, which may be used to guide the development and optimization of the perception algorithm, so that the perception algorithm may enable the vehicle functions to meet the corresponding expected functional safety requirements or functional performance requirements.
According to a first aspect of the present disclosure, there is provided a method for verifying a vehicle function, including: acquiring operation scenario data respectively corresponding to each of one or more operation scenarios, the operation scenario data including state information of an ego vehicle and surrounding environmental information of the ego vehicle; determining, based on the operation scenario data respectively corresponding to each of the operation scenarios, first perception result information corresponding to a preset vehicle function of the ego vehicle in each of the operation scenarios; performing perception failure simulation processing on the first perception result information based on a perception failure simulation rule to obtain second perception result information respectively corresponding to each of the operation scenarios; and verifying the preset vehicle function based on the second perception result information to obtain a verification result.
According to a second aspect of the present disclosure, there is provided an apparatus for verifying a vehicle function, including: an acquisition module configured to acquire operation scenario data respectively corresponding to each of one or more operation scenarios, the operation scenario data including state information of an ego vehicle and surrounding environmental information of the ego vehicle; a first processing module configured to determine, based on the operation scenario data respectively corresponding to each of the operation scenarios, first perception result information corresponding to a preset vehicle function of the ego vehicle in each of the operation scenarios; a second processing module configured to perform perception failure simulation processing on the first perception result information based on a perception failure simulation rule to obtain second perception result information respectively corresponding to each of the operation scenarios; and a third processing module configured to verify the preset vehicle function based on the second perception result information to obtain a verification result.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium, wherein the storage medium stores a computer program that, when executed by a processor, implements the method for verifying a vehicle function according to any one of the above-mentioned embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided an electronic device, including: a processor; and a memory configured to store an executable instruction; wherein the processor is configured to read the executable instruction from the memory and execute the instruction to implement the method for verifying a vehicle function according to any one of the above-mentioned embodiments of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product which, when instructions thereon are executed by a processor, causes the processor to implement the method for verifying a vehicle function according to any one of the above-mentioned embodiments of the present disclosure.
Based on the method and apparatus for verifying a vehicle function, the electronic device, and the storage medium provided in the above-mentioned embodiments of the present disclosure, the operation scenario data of the one or more operation scenarios may be acquired; the first perception result information corresponding to the preset vehicle function of the ego vehicle in each of the operation scenarios is determined based on the operation scenario data corresponding to each of the operation scenarios; then, perception failure simulation processing may be performed on the first perception result information based on the perception failure simulation rule to obtain the second perception result information corresponding to each of the operation scenarios, so that the second perception result information includes at least one possible situation of perception failure of the perception algorithm in a real scenario; and the preset vehicle function is verified based on the second perception result information to obtain the verification result, by which it may be determined whether the functional performance of the preset vehicle function in the case of perception failure meets a certain expected functional safety requirement or functional performance requirement. In this way, the tolerance of the preset vehicle function to the perception failure of the perception algorithm is determined by simulating various perception failure situations of the perception algorithm, which facilitates guiding the development and optimization of the perception algorithm, thereby ensuring that the perception algorithm may make the preset vehicle function meet the corresponding expected functional safety requirement or functional performance requirement.
In order to explain the present disclosure, exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, rather than all the embodiments. It should be understood that the present disclosure is not limited to the exemplary embodiments described herein.
It should be noted that the relative arrangement of components and steps, numerical expressions, and numerical values described in these embodiments do not limit the scope of the present disclosure unless otherwise specifically defined.
During the implementation of the present disclosure, the inventors found that, in fields such as autonomous driving and assisted driving, a vehicle usually has functions such as Autonomous Emergency Braking (AEB), Adaptive Cruise Control (ACC), Navigate on Autopilot (NOA), and Highway Pilot (HWP), thereby providing users with the services of the corresponding functions. The implementation of these functions usually depends on the perception results of a perception function of the vehicle (also known as an ego vehicle) with respect to the environments of the ego vehicle, and the environments may include dynamic objects, such as other vehicles, pedestrians, and cyclists around the ego vehicle, as well as static objects, such as lane lines and roadsides around the ego vehicle. The perception results need to be obtained based on a corresponding perception algorithm for the perception function of the ego vehicle, and the perception algorithm may include a perception algorithm, such as a target detection algorithm or a semantic segmentation algorithm, based on at least one kind of sensor data among vision, laser radar, ultrasonic radar, millimeter-wave radar, etc. The performance of the perception algorithm will be affected by factors such as hardware and environmental changes during application, which easily causes vehicle functions to fail to meet the corresponding expected functional safety requirements or functional performance requirements.
As shown in the accompanying drawings, the method for verifying a vehicle function provided in the embodiments of the present disclosure includes the following steps.
Step 201: acquiring operation scenario data respectively corresponding to each of one or more operation scenarios.
The operation scenario data includes state information of an ego vehicle and surrounding environmental information of the ego vehicle. The state information of the ego vehicle may include movement track information of the ego vehicle within a preset duration. The movement track information may include a state of the ego vehicle at each of the time points within the preset duration, and the state may include, for example, one or more kinds of information on a position, a posture (an orientation), a speed, an acceleration, and an angular speed. The state information of the ego vehicle may be state information in a first coordinate system. The first coordinate system may be a world coordinate system or a reference coordinate system rigidly connected to the world coordinate system. The surrounding environmental information may include one or more kinds of behavior information of other road participants (which may be known as objects) around the ego vehicle, environmental information, road information, etc. The behavior information of the other road participants may include one or more kinds of information, such as positions, postures, speeds, accelerations, and angular speeds, of the other road participants at each of the time points within a preset duration. The environmental information may include, for example, one or more kinds of time information, weather information, etc. The time information includes daytime, nighttime, etc. The weather information includes sunny days, rainy days, snowy days, etc. The road information may include road types (such as highways and urban roads), lane line information, road conditions (such as flat roads, uphill roads, and downhill roads), etc. The operation scenario data is used for simulating an operation situation of the ego vehicle in a certain environment, and provides simulated perception result information for verifying the preset vehicle function of the ego vehicle.
In some optional embodiments, the surrounding environmental information may further include at least one of positioning information, navigation information, high-precision map information, etc., for assisting in simulating real operation scenarios.
In some optional embodiments, the operation scenario data may include a scenario description file and a scenario map file which are generated in advance. The scenario description file is used for describing the state information of the ego vehicle and behavior information of other dynamic objects around the ego vehicle. The scenario map file is used for describing environmental information and road information of the operation scenarios. A specific expression way of the operation scenario data is not limited.
In some optional embodiments, the number of the operation scenarios and the specific scenarios may be set according to an actual demand of the preset vehicle function. For example, the operation scenarios may include scenarios in which there are other vehicles behind the ego vehicle, there are oncoming vehicles on an adjacent lane, there are low-speed vehicles ahead on a highway, there are other vehicles cutting in ahead, there are pedestrians crossing a road at an intersection ahead, etc. Each operation scenario may be further generalized according to the relationship between the other objects around the ego vehicle and the ego vehicle to obtain more fine-grained operation scenarios, such as a plurality of operation scenarios in which there are other vehicles at different distances behind the ego vehicle, there are oncoming vehicles with different speeds on an adjacent lane, there are other vehicles cutting in ahead in different manners, etc. Each of the fine-grained operation scenarios is used as an operation scenario to obtain at least one operation scenario for vehicle function verification.
Step 202: determining, based on the operation scenario data respectively corresponding to each of the operation scenarios, first perception result information corresponding to a preset vehicle function of the ego vehicle in each of the operation scenarios.
For any operation scenario, the first perception result information of the preset vehicle function of the ego vehicle in the operation scenario may include behavior information and road information of other objects around the ego vehicle.
In some optional embodiments, the first perception result information may be perception result information in a coordinate system of the ego vehicle.
In some optional embodiments, for any operation scenario, the first perception result information may include perception result information of the ego vehicle at at least one trajectory point in the operation scenario. The perception result information of the ego vehicle at any trajectory point includes behavior information and road information of other objects around the ego vehicle at the trajectory point.
In some optional embodiments, the operation scenario data includes the state information and the surrounding environmental information of the ego vehicle, and therefore, traffic flow information around the ego vehicle may be obtained by analyzing the operation scenario data, and then the first perception result information corresponding to the preset vehicle function of the ego vehicle in each of the operation scenarios is obtained. For example, during vehicle function verification, for each operation scenario being verified, the other objects around the ego vehicle may be determined according to the state information and the surrounding environmental information of the ego vehicle at any trajectory point, wherein the other objects may include dynamic objects, such as other vehicles, pedestrians, and cyclists around the ego vehicle, as well as static objects, such as lane lines and roadsides; and object state information of the other objects relative to the ego vehicle may be determined as the first perception result information. That is, the first perception result information may include simulated perceived state information, such as positions, orientations, speeds, accelerations, and angular speeds, of the other objects around the ego vehicle relative to the ego vehicle. Compared with a perception result obtained in a real environment by using a perception algorithm, a perception result obtained based on the operation scenario data is determined based on the states of the ego vehicle and the surrounding objects in a simulated operation scenario rather than based on an actual perception algorithm; it is therefore not affected by factors such as hardware and environmental changes, and is an accurate perception result without perception failure (or perception faults).
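Exemplarily, the derivation of such a failure-free first perception result by transforming ground-truth object states from the simulated operation scenario into the coordinate system of the ego vehicle may be sketched in Python as follows; the ObjectState layout, the field names, and the simplified 2-D kinematics are illustrative assumptions rather than the exact data format of the present disclosure:

    import math
    from dataclasses import dataclass

    @dataclass
    class ObjectState:
        x: float    # position in the world frame, m
        y: float
        yaw: float  # orientation in the world frame, rad
        vx: float   # velocity in the world frame, m/s
        vy: float

    def to_ego_frame(obj: ObjectState, ego: ObjectState) -> ObjectState:
        # Rotate and translate a ground-truth world-frame state into the
        # ego-vehicle frame. Because the inputs come from simulated
        # scenario data rather than a real perception algorithm, the
        # result is an accurate, failure-free first perception result.
        c, s = math.cos(-ego.yaw), math.sin(-ego.yaw)
        dx, dy = obj.x - ego.x, obj.y - ego.y
        dvx, dvy = obj.vx - ego.vx, obj.vy - ego.vy
        return ObjectState(
            x=c * dx - s * dy,
            y=s * dx + c * dy,
            yaw=obj.yaw - ego.yaw,
            vx=c * dvx - s * dvy,
            vy=s * dvx + c * dvy,
        )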
In some optional embodiments, the types and number of other objects included in the first perception result information are set according to specific operation scenario data and a perception demand of the preset vehicle function and are not specifically limited. For example, for the AEB function, obstacle objects around the ego vehicle and affecting the driving of the ego vehicle need to be perceived to determine whether the obstacle objects have a risk of collision with the ego vehicle and whether the AEB function needs to be triggered, etc.
Step 203: performing perception failure simulation processing on the first perception result information based on a perception failure simulation rule to obtain second perception result information respectively corresponding to each of the operation scenarios.
The perception failure simulation rule may be set according to perception failure situations that are easily caused for the preset vehicle function in a real operation scenario, and is used for performing perception failure simulation processing (also known as perception fault injection) on the accurate first perception result information to obtain the second perception result information with perception failure.
In some optional embodiments, the perception failure simulation rule may include a triggering condition for perception failure simulation processing and a failure simulation rule. The triggering condition may be set according to a verification demand of the preset vehicle function. For example, the triggering condition includes that, when a position relationship between another object and the ego vehicle meets a certain condition, perception failure simulation processing on the perception results of that object is triggered. Perception failure simulation processing may include at least one kind of simulation processing, such as missing detection, false detection, detection delay, detection instability, a detection distance error, and a detection speed error simulated for the other objects. For different objects, corresponding simulation rules (i.e., failure simulation rules) may be set. A simulation rule may include simulation ranges, simulation precisions, etc. of parameters (which may be known as failure parameters or fault parameters), such as a missing detection duration, a missing detection frequency, a false detection duration, a false detection frequency, a delay time, a distance error, and a speed error. For example, for the AEB function, after a potential risk object is found, the AEB function is triggered, and perception failure simulation processing is performed on the first perception result information of the risk object according to a simulation rule with a certain missing detection duration and frequency, so that second perception result information with a missing-detection perception fault is obtained.
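Exemplarily, one possible (non-limiting) representation of such a perception failure simulation rule, combining a triggering condition with the simulation ranges and precisions of the fault parameters, may be sketched as follows; all field names and default values are illustrative assumptions:

    from dataclasses import dataclass

    @dataclass
    class FailureSimulationRule:
        # Triggering condition: simulation starts once the target object
        # is closer to the ego vehicle than this distance, in meters.
        trigger_distance_m: float = 50.0
        # Failure simulation rule: simulation ranges and precisions of
        # the fault parameters named in the text.
        miss_duration_range_s: tuple = (0.1, 2.0)
        miss_frequency_range_hz: tuple = (1.0, 10.0)
        distance_error_range_m: tuple = (-2.0, 2.0)
        simulation_precision: float = 0.01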
In some optional embodiments, for the first perception result information of each other object that may be subjected to perception failure simulation, at least one kind of perception failure simulation processing may be performed to obtain the corresponding second perception result information.
Step 204: verifying the preset vehicle function based on the second perception result information to obtain a verification result.
For the second perception result information, verification for the preset vehicle function based on the second perception result information is to verify whether the preset vehicle function may meet the corresponding expected functional safety requirement or functional performance requirement under the second perception result information. The obtained verification result may include at least one of a function response state of the preset vehicle function under the second perception result information and a perception performance threshold of a perception algorithm allowed by the preset vehicle function. The perception performance threshold may be determined based on the function response state under the second perception result information.
Based on the method for verifying a vehicle function provided in the above-mentioned embodiments of the present disclosure, the operation scenario data of the one or more operation scenarios may be acquired; the first perception result information corresponding to the preset vehicle function of the ego vehicle in each of the operation scenarios is determined based on the operation scenario data corresponding to each of the operation scenarios; then, perception failure simulation processing may be performed on the first perception result information based on the perception failure simulation rule to obtain the second perception result information corresponding to each of the operation scenarios, so that the second perception result information includes at least one possible situation of perception failure of the perception algorithm in a real scenario; and the preset vehicle function is verified based on the second perception result information to obtain the verification result, by which it may be determined whether the functional performance of the preset vehicle function in the case of perception failure meets a certain expected functional safety requirement or functional performance requirement. In this way, the tolerance of the preset vehicle function to the perception failure of the perception algorithm is determined by simulating various perception failure situations of the perception algorithm, which facilitates guiding the development and optimization of the perception algorithm, thereby ensuring that the perception algorithm may make the preset vehicle function meet the corresponding expected functional safety requirement or functional performance requirement.
In some optional embodiments, the first perception result information includes first perception results of at least one object around the ego vehicle. Step 203 of performing perception failure simulation processing on the first perception result information based on a perception failure simulation rule to obtain second perception result information respectively corresponding to each of the operation scenarios includes:
step 2031: for any of the operation scenarios, determining, based on the first perception result information in the operation scenario, a target object meeting a first condition.
The first condition is set according to the preset vehicle function. The first condition may include a condition on the relative relationship between each of the objects around the ego vehicle and the ego vehicle in the operation scenario, such as conditions on the position and speed relationships between each of the surrounding objects and the ego vehicle. For example, the first condition may be that the distance from an object to the ego vehicle is smaller than a distance threshold. The target object may be a dynamic object, such as a vehicle or a pedestrian, or a static object, such as a lane line or a roadside.
In some optional embodiments, the first perception result of each of the objects around the ego vehicle may be a perception result in the coordinate system of the ego vehicle. The target object meeting the first condition may be determined by matching the first perception result of each of the objects around the ego vehicle against the first condition. The specific first condition used when different objects are determined as target objects may be different.
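Exemplarily, selecting target objects whose distance to the ego vehicle is below a threshold (one possible form of the first condition) may be sketched as follows; the dictionary layout and the 50 m default are illustrative assumptions:

    import math

    def select_target_objects(first_results: dict, distance_threshold_m: float = 50.0) -> dict:
        # first_results maps an object id to a state dict with ego-frame
        # 'x' and 'y' entries in meters; objects closer to the ego
        # vehicle than the threshold are selected as target objects.
        return {
            obj_id: state
            for obj_id, state in first_results.items()
            if math.hypot(state["x"], state["y"]) < distance_threshold_m
        }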
Step 2032: determining, based on the perception failure simulation rule, a failure mode corresponding to the target object.
The perception failure simulation rule may include mapping rules between different objects and failure modes, as well as simulation rules of the different failure modes. The failure modes of different objects may be different, and different failure modes may correspond to different perception failure simulation processing ways (i.e., the simulation rules). For example, for a vehicle object cutting in ahead, the perception failure simulation processing corresponding to its failure modes may include simulation of missing detection, false detection, a detection distance error, etc. For a lane line object, the perception failure simulation processing corresponding to its failure mode may include a transverse distance error, etc.
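Exemplarily, the mapping rules between objects and failure modes may be sketched as a simple lookup table; the object types and mode names below are illustrative assumptions, and a concrete table would follow from the FTA/DFMEA analysis described hereinafter:

    # Illustrative mapping from object type to injectable failure modes.
    FAILURE_MODE_MAP = {
        "cut_in_vehicle": ["missing_detection", "false_detection", "distance_error"],
        "pedestrian": ["missing_detection", "detection_delay"],
        "lane_line": ["transverse_distance_error"],
    }

    def failure_modes_for(object_type: str) -> list:
        # Unknown object types simply receive no fault injection.
        return FAILURE_MODE_MAP.get(object_type, [])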
In some optional embodiments, according to the Operational Design Domains (ODD) of autonomous driving, assisted driving, and other functions, functional safety and expected functional safety Hazard Analysis and Risk Assessment (HARA) may be performed for the vehicle functions, and functional safety goals (i.e., goals to be achieved at a functional safety level) may be determined, wherein the functional safety goals include avoiding braking loss, avoiding unexpected braking, etc. The functional safety goals are analyzed by way of Fault Tree Analysis (FTA) and Design Failure Mode and Effects Analysis (DFMEA) to obtain the failure modes of the perception algorithm under the corresponding functional safety goals, such as missing detection, false detection, detection delay, detection instability, a detection error, etc. According to the operation scenario and the analysis of the ODD in the HARA analysis, a parameter range of each operation scenario, such as ranges of parameters including the longitudinal speed, longitudinal distance, and transverse speed of the target object, is determined. Moreover, the parameter ranges may be associated with the operation scenarios to generate the operation scenario data of different operation scenarios, so as to verify the vehicle functions and determine the failure modes of the target object in the operation scenarios.
Exemplarily, for the HARA of a Highway Pilot (HWP) function, there is a hazard event in which a vehicle cuts in ahead of the ego vehicle and the ego vehicle does not slow down, and a safety goal that the HWP function needs to prevent braking loss. A low-level failure mode may be obtained by FTA and DFMEA, such as missing detection of cut-in vehicles (that is, the perception algorithm fails to detect the cut-in vehicles) or an excessively large longitudinal ranging result for the cut-in vehicles (that is, a detection distance error leads to the longitudinal distances of the cut-in vehicles perceived by the perception algorithm being large, while the longitudinal distances are not large in practice). For such safety goals and the definition of the ODD, parameter ranges of variables, such as the longitudinal speeds, longitudinal distances, and transverse speeds of the cut-in vehicles in a cut-in process, are set. The specific parameter distribution within the parameter ranges may be determined by means of statistics and collection of natural driving data; for example, it may be obtained by analyzing and extracting road test data and aerial survey data. Missing detection is used as a failure mode in an operation scenario in which there are vehicles cutting in ahead. By generalizing the missing detection duration and the missing detection frequency, various failure sub-modes in the operation scenario, such as a long-time missing detection sub-mode and a short-time high-frequency missing detection sub-mode, may be determined. Fine-grained generalization may be further performed on the specific missing detection duration and missing detection frequency of each failure mode or failure sub-mode to obtain fine-grained fault parameters (i.e., failure parameters) in the various failure sub-modes. For example, any determined missing detection duration value and missing detection frequency value are used as a group of fault parameters in the failure mode or failure sub-mode.
Exemplarily, in the HARA of the HWP function, there is a hazard event that a vehicle on a lateral front lane causes braking (i.e., unexpected braking) of the ego vehicle, and a safety goal of preventing the unexpected braking. A low-level failure mode, such as a mode in which the transverse ranging of a lateral vehicle is inaccurate, i.e., the actual transverse distance is longer while the transverse distance perceived by the perception algorithm is shorter, which leads to the unexpected braking of the ego vehicle, may be obtained by FTA and DFMEA. For such safety goals, parameter ranges of variables, such as the longitudinal distances, transverse distances, and longitudinal speeds of lateral front vehicles, are defined to generate the operation scenario data of the various operation scenarios. A transverse detection error is used as a failure mode, and various fine-grained fault parameters in the failure mode may be obtained by generalizing different transverse detection distance errors. For example, each transverse detection distance error value obtained by generalization is used as a fault parameter in the failure mode.
Step 2033: performing perception failure simulation processing on the first perception result of the target object based on the failure mode to obtain a second perception result of perception failure of the target object.
After the failure mode of the target object is determined, perception failure simulation processing corresponding to the failure mode may be performed on the first perception result of the target object based on the failure mode of the target object to obtain the second perception result of perception failure of the target object.
Exemplarily, second perception results with at least one combination of duration and frequency may be obtained by performing missing detection simulation on the target object for different durations and/or at different frequencies. For example, in the first perception result information, the target object is detected at time points A to B, and the preset vehicle function is triggered. In the generated second perception result, within the time period between time point A and time point B, missing detection is performed on the target object for a missing detection duration T1 and at a missing detection frequency F1, so that whether a successful response is made by the preset vehicle function in the case of missing detection is simulated.
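Exemplarily, such missing detection injection over the time window between A and B, with a missing detection duration T1 and a missing detection frequency F1, may be sketched as follows; the track layout (a timestamp-to-state dictionary) is an illustrative assumption:

    def inject_missing_detection(track: dict, t_a: float, t_b: float,
                                 duration_s: float, frequency_hz: float) -> dict:
        # track maps a timestamp to a perceived state. Within [t_a, t_b],
        # in every 1/frequency_hz period the target object is dropped for
        # duration_s seconds; if duration_s covers the whole period, this
        # degenerates into continuous (long-time) missing detection.
        period = 1.0 / frequency_hz

        def is_missed(t: float) -> bool:
            if not (t_a <= t <= t_b):
                return False
            return (t - t_a) % period < duration_s

        return {t: s for t, s in track.items() if not is_missed(t)}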
Step 2034: determining, based on the second perception result of the target object, the second perception result information corresponding to the operation scenario.
The second perception result information may include at least one second perception result corresponding to the target object in the operation scenario.
In some optional embodiments, for failure modes (also known as perception faults), such as missing detection, false detection, a detection distance error, and a detection speed error, in order to ensure the accuracy and reliability of the vehicle function verification, different second perception results may be generated for different fault parameters (also known as failure parameters, such as a missing detection duration, a missing detection frequency, a false detection duration, and a false detection frequency) of each perception fault. For example, various fault parameters may be set for a missing detection fault, with different fault parameters corresponding to different missing detection durations and/or different missing detection frequencies, and thus the second perception results respectively corresponding to the various fault parameters may be obtained. For any operation scenario, one of the first perception results of the target object may correspond to one or more of the second perception results. That is, the second perception result information corresponding to the operation scenario may include at least one second perception result corresponding to each target object of the one or more target objects.
In the present embodiment, by setting the corresponding failure modes for different objects, when it is determined that there is a target object meeting the first condition, perception fault injection may be performed on the first perception result of the target object based on the failure mode corresponding to the target object to obtain a perception-fault-injected second perception result of the target object, which is beneficial to improving the accuracy and effectiveness of the perception failure simulation.
In some optional embodiments, step 202 of determining, based on the operation scenario data respectively corresponding to each of the operation scenarios, first perception result information corresponding to a preset vehicle function of the ego vehicle in each of the operation scenarios includes:
step 2021: for any operation scenario, determining, based on the operation scenario data corresponding to the operation scenario, first perception result information of the ego vehicle at at least one trajectory point in the operation scenario.
The ego vehicle has a certain driving track when driving in the operation scenario, and points on the driving track are known as trajectory points. The operation scenario data includes data for describing the state information of the ego vehicle, and each trajectory point of the ego vehicle in the operation scenario may be obtained by analyzing the operation scenario data. The first perception result information of the ego vehicle at any trajectory point may be obtained in combination with the surrounding environmental information of the ego vehicle in the operation scenario data. With the movement of the ego vehicle, the relative relationships between the other surrounding objects and the ego vehicle may differ when the ego vehicle is at different trajectory points. That is, the other objects present in different areas around the ego vehicle may be different.
Step 2031 of, for any operation scenario, determining, based on the first perception result information in the operation scenario, a target object meeting a first condition includes:
step 20311: for any operation scenario of all the operation scenarios, determining, based on the first perception result information corresponding to each trajectory point, the target objects in different areas around the ego vehicle.
For each of the operation scenarios, the target objects in the different areas around the ego vehicle may be determined based on the first perception result information corresponding to each of the trajectory points of the ego vehicle in the operation scenario.
In some optional embodiments, the area around the ego vehicle may be divided into different areas according to a preset area division rule. For example, the area around the ego vehicle is divided into the different areas by a grid coordinate system.
Exemplarily, the accompanying drawings illustrate dividing the area around the ego vehicle into numbered grid areas by a grid coordinate system.
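Exemplarily, assigning an object position in the ego-vehicle frame to such a grid area may be sketched as follows; the square cell size and the indexing scheme are illustrative assumptions rather than the preset area division rule itself:

    import math

    def grid_index(x_m: float, y_m: float, cell_m: float = 10.0) -> tuple:
        # The ego vehicle sits at the origin of the grid coordinate
        # system; the surrounding area is divided into square cells of
        # side cell_m, and each position maps to one cell index.
        return (math.floor(x_m / cell_m), math.floor(y_m / cell_m))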
In the present embodiment, the target objects in the different areas around the ego vehicle are determined based on the first perception result information of the ego vehicle at each trajectory point in the operation scenario, which facilitates determining the triggering of the perception failure simulation on the one hand, and is beneficial to finer-grained verification of the tolerance of the preset vehicle function to perception failure of the target objects in the different areas on the other hand.
In some optional embodiments, the driving of the ego vehicle in any operation scenario may be simulated by an in-loop simulation system, and the current first perception result information is obtained in real time during driving, wherein the current first perception result information may include the state information of the target objects in the different areas around the ego vehicle. Moreover, it is determined, based on a preset condition, whether perception fault injection is triggered; for example, it is determined whether the target objects in the different areas meet a perception fault injection triggering condition (such as the perception fault injection corresponding to grid 2 being triggered when the position of the target object relative to the ego vehicle falls within grid 2).
In some optional embodiments, the failure parameters may be traversed in order of their influence on the preset vehicle function from small to large; for example, smaller detection errors, missing detection durations, and false detection durations are verified first. Since the error is small, the preset vehicle function may tolerate the error and still achieve the expected result, and thus the function response state is shown as success. As the failure parameters become larger and larger, once a perception performance index exceeds the allowable boundary of the preset vehicle function, the function response state of the preset vehicle function will be shown as failure, and thus the perception performance threshold that may be allowed by the preset vehicle function may be obtained.
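Exemplarily, traversing the failure parameters from small to large until the function response state turns to failure may be sketched as follows; verify_fn stands for one in-loop simulation run and is an illustrative assumption:

    def find_perception_threshold(verify_fn, values):
        # Traverse candidate failure parameter values in increasing order
        # of influence; verify_fn(value) returns True when the function
        # response state is shown as success. The last value that still
        # succeeds is the perception performance threshold allowed by the
        # preset vehicle function.
        threshold = None
        for value in sorted(values):
            if verify_fn(value):
                threshold = value  # still within the allowable boundary
            else:
                break  # allowable boundary exceeded; stop traversing
        return threshold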
In some optional embodiments, the perception performance index may also be denoted as a perception delay index, a perception recall rate, a detection distance precision, a detection speed precision, etc. For example, a perception performance index corresponding to long-time missing detection may be denoted as the perception delay index. A perception performance index corresponding to short-time high-frequency missing detection may be denoted as the perception recall rate. A perception performance index corresponding to a transverse/longitudinal distance error may be denoted as the detection distance precision. A perception performance index corresponding to a transverse/longitudinal speed error may be denoted as the detection speed precision. Thus, each of a perception delay index threshold, a perception recall rate threshold, a detection distance precision threshold and a detection speed precision threshold is used as the verification result to guide the development and optimization of the perception algorithm.
In some optional embodiments, step 204 of verifying the preset vehicle function based on the second perception result information to obtain a verification result includes:
step 204A: determining, based on the second perception result information, a perception performance threshold corresponding to the preset vehicle function in each of the different areas around the ego vehicle.
The perception performance threshold may include an index threshold corresponding to each failure mode, such as a missing detection duration threshold, a missing detection frequency threshold, a false detection duration threshold, a false detection frequency threshold, a detection delay time threshold, a detection distance error threshold, and a detection speed error threshold. The detection distance error threshold may also be subdivided into a transverse distance error threshold and a longitudinal distance error threshold. The specific division of the perception performance index may be set according to a demand of the preset vehicle function, and is not limited in the present disclosure.
In some optional embodiments, for each area, it may be determined, based on the second perception results of the target object in the area, whether the preset vehicle function is triggered and, in the case that the preset vehicle function is triggered, whether the response state of the preset vehicle function may avoid the hazard events. For the second perception results in any failure mode, if the response state of the preset vehicle function is shown as success, it is determined that the area may meet the expected functional safety requirement or functional performance requirement in the failure mode. The failure modes in which the area may meet the expected functional safety requirement or functional performance requirement may be determined based on the response states of the preset vehicle function in a large number of failure modes in the area, and the perception performance threshold corresponding to the area may be determined based on these failure modes.
Step 204B: taking the perception performance threshold corresponding to the preset vehicle function in each of the different areas around the ego vehicle as the verification result.
After the perception performance threshold corresponding to each of the different areas around the ego vehicle is obtained, the perception performance threshold corresponding to each of the different areas may be used as the verification result to guide the development and optimization of the perception algorithm.
In the present embodiment, the perception performance threshold corresponding to each of the different areas around the ego vehicle is determined as the verification result, which achieves finer-grained determination of the perception performance threshold, is beneficial to providing a more accurate and effective perception performance threshold for the development and optimization of the perception algorithm, and further improves the reliability of the perception algorithm.
In some optional embodiments, the failure modes include at least one type of sub-mode.
In some optional embodiments, step 2033 of performing perception failure simulation processing on the first perception result of the target object based on the failure mode to obtain a second perception result of perception failure of the target object includes:
step 20331: for any sub-mode, performing perception failure simulation processing on the first perception result of the target object based on a simulation sub-rule corresponding to the sub-mode to obtain the second perception result of the target object in the sub-mode.
The sub-modes are obtained by finer-grained division of a failure mode. For example, a false detection failure mode may be subdivided into a plurality of sub-modes, such as long-time false detection and short-time high-frequency false detection, according to different false detection durations and different false detection frequencies. A detection error may be subdivided into sub-modes, such as a detection distance error and a detection speed error; or into sub-modes, such as a transverse distance error, a longitudinal distance error, a transverse speed error, and a longitudinal speed error; or into sub-modes, such as an error that makes the detection result larger and an error that makes the detection result smaller. The sub-modes may be further obtained by finer-grained division. The specific division of the sub-modes may be set according to a specific operation scenario of the preset vehicle function.
In some optional embodiments, for each of the sub-modes, a corresponding simulation sub-rule may be set, so that perception failure simulation processing is performed on the first perception result of the target object to obtain the second perception result of the target object in the sub-mode.
In some optional embodiments, for each of the sub-modes, the simulation sub-rule corresponding to the sub-mode may include a failure parameter range and a failure simulation precision which correspond to the sub-mode, so that a plurality of failure parameters are determined within the failure parameter range; and each of the failure parameters is used for fault injection on the first perception result of the target object to obtain a second perception result corresponding to the failure parameter. The second perception result of the target object in any sub-mode may include the second perception results respectively corresponding to the plurality of failure parameters. If each of the failure parameters is used as a sub-mode, the second perception result of the target object in the sub-mode includes the second perception result corresponding to the failure parameter, which is not specifically limited.
Step 2034 of determining, based on the second perception result of the target object, the second perception result information corresponding to the operation scenario includes:
step 20341: determining, based on the second perception result corresponding to the target object in the sub-mode, the second perception result information corresponding to the sub-mode in the operation scenario.
The second perception result information of the operation scenario in any sub-mode may include the second perception result of each of the one or more target objects in the sub-mode.
In some optional embodiments, for each of the operation scenarios, in-loop verification may be performed on the preset vehicle function based on the second perception result information corresponding to the operation scenario in each of the sub-modes to obtain a degree of tolerance of the preset vehicle function to a perception performance index of the perception algorithm, so that the perception performance threshold of the perception algorithm is obtained.
In the present embodiment, by performing finer-grained division on the failure modes, the second perception result information of each of the operation scenarios in each of the sub-modes is obtained, which is beneficial to more accurate and effective verification of the preset vehicle function, and improves the accuracy and effectiveness of the perception performance threshold of the perception algorithm allowed by the preset vehicle function.
In some optional embodiments, step 204 of verifying the preset vehicle function based on the second perception result information to obtain a verification result includes:
Step 2041a: determining, based on the second perception result information corresponding to the sub-mode in the operation scenario, a function response state of the preset vehicle function in the sub-mode of the operation scenario.
The function response state may include two states: success and failure. Success represents that the preset vehicle function may reach an expected result in the sub-mode; for example, the AEB function may cause the vehicle to be successfully braked to avoid a collision with the target object. Failure represents that the preset vehicle function does not achieve the expected result in the sub-mode; for example, in the HWP function, the longitudinal distance from a front vehicle is smaller than the minimum safety distance threshold. The success of the function response state may also be determined according to results such as the Time to Collision (TTC) being greater than a safety time threshold, or the deviation from the center line of a lane not exceeding a safe distance threshold, which may be set according to the preset vehicle function.
In some optional embodiments, for each of the sub-modes of each of the operation scenarios, it may be determined, based on the second perception result information of the sub-mode, whether the preset vehicle function is triggered; and if the preset vehicle function is triggered, control information of the ego vehicle is calculated based on the functional algorithm of the preset vehicle function, the driving track of the ego vehicle controlled by the control information is determined based on the control information of the ego vehicle, and it is determined, in combination with the surrounding environmental data in the operation scenario data of the operation scenario, whether the driving track of the ego vehicle relative to each of the other surrounding objects may achieve the expected result. If the expected result may be achieved, it is determined that the function response state is shown as success; otherwise, it is determined that the function response state is shown as failure.
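Exemplarily, classifying the function response state from the outcome of one simulated run may be sketched as follows; the gap and TTC thresholds are illustrative placeholder values that would in practice be set according to the preset vehicle function:

    def response_state(min_gap_m: float, min_ttc_s: float,
                       gap_threshold_m: float = 2.0, ttc_threshold_s: float = 1.5) -> str:
        # A run counts as success when the minimum longitudinal gap to
        # the target object stays above the safety distance threshold and
        # the minimum Time to Collision stays above the safety time
        # threshold during the whole simulated driving track.
        ok = min_gap_m > gap_threshold_m and min_ttc_s > ttc_threshold_s
        return "success" if ok else "failure"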
Step 2042a: determining, based on the function response state of the preset vehicle function in the sub-mode of the operation scenario, a perception performance threshold of the preset vehicle function in the operation scenario.
A perception performance index which may be allowed by the preset vehicle function in the operation scenario, i.e., a perception performance index (a failure parameter) for which the function response state is shown as success, such as an allowable missing detection duration, missing detection frequency, false detection duration, false detection frequency, detection distance error, and detection speed error, may be determined based on the function response state of the preset vehicle function in each of the sub-modes of the operation scenario. Then, the perception performance threshold of the preset vehicle function in the operation scenario may be determined based on the allowable perception performance index.
Step 2043a: determining, based on the perception performance threshold of the preset vehicle function in the operation scenario, a target perception performance threshold corresponding to the preset vehicle function as the verification result.
In some optional embodiments, the target perception performance threshold corresponding to the preset vehicle function may be determined as the verification result in combination with the perception performance threshold of the preset vehicle function in each of the operation scenarios.
In some optional embodiments, all the operation scenarios may be classified, and for the same type of operation scenarios, the target perception performance threshold of the preset vehicle function in that type of operation scenario may be determined in combination with a plurality of operation scenarios. For example, the target perception performance threshold in the cut-in scenario type is determined in combination with a plurality of operation scenarios in which there are vehicles cutting in ahead. For example, the minimum perception performance threshold among a plurality of operation scenarios of the same type may be used as the target perception performance threshold of that type of operation scenarios, so that the functional safety of each of the operation scenarios in the type is ensured.
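Exemplarily, taking the minimum perception performance threshold over all operation scenarios of the same type may be sketched as follows; the key layout is an illustrative assumption:

    def target_thresholds_by_type(thresholds: dict) -> dict:
        # thresholds maps (scenario_type, scenario_id) to the perception
        # performance threshold found for that scenario; for each type
        # the minimum value is kept, so the strictest scenario of the
        # type governs the target perception performance threshold.
        result: dict = {}
        for (scenario_type, _scenario_id), value in thresholds.items():
            result[scenario_type] = min(value, result.get(scenario_type, value))
        return result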
In the present embodiment, by determining the function response state of the preset vehicle function in each of the sub-modes of each of the operation scenarios, the tolerance of the preset vehicle function to the perception fault of each of the sub-modes may be determined; then, the perception performance threshold of the preset vehicle function in each of the operation scenarios may be determined, and the target perception performance threshold corresponding to the preset vehicle function may be obtained in combination with the perception performance thresholds of the preset vehicle function in each of the operation scenarios. By in-loop simulation verification on a large number of sub-modes of a large number of operation scenarios, the target perception performance index corresponding to the preset vehicle function is accurately and effectively determined, which is beneficial to increasing the verification efficiency.
In some optional embodiments, step 20331 of performing perception failure simulation processing on the first perception result of the target object based on a simulation sub-rule corresponding to the sub-mode to obtain the second perception result of the target object in the sub-mode includes:
step 203311: determining at least one perception performance indicator value based on the simulation sub-rule corresponding to the sub-mode; and for any perception performance indicator value, updating the first perception result of the target object based on the perception performance indicator value to obtain the second perception result of the target object corresponding to the perception performance indicator value.
The perception performance indicator value may include a missing detection duration, a missing detection frequency, a false detection duration, a false detection frequency, a detection distance error, a detection speed error, detection delay time, etc.
In some optional embodiments, at least one perception performance indicator value may be determined according to a sub-mode and the corresponding simulation sub-rule. For example, if the sub-mode is a long-time missing detection mode, the simulation sub-rule may include a missing detection duration range and a sampling precision; for example, if the missing detection duration range is [t1, t2] s and the sampling precision is 0.01 s, then each of the missing detection durations t1, t1+0.01, t1+0.02, . . . , t2 may be determined within the missing detection duration range, and each of the missing detection durations is used as one of the perception performance indicator values. Alternatively, a combination of a missing detection duration and a missing detection frequency is used as one of the perception performance indicator values, which is not specifically limited. For another example, if the sub-mode is a longitudinal distance error mode, the simulation sub-rule may include a longitudinal distance error range and a sampling precision, and then a plurality of longitudinal distance error values may be sampled as the perception performance indicator values within the longitudinal distance error range based on the sampling precision.
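Exemplarily, enumerating the perception performance indicator values within a range [t1, t2] at a given sampling precision may be sketched as follows:

    def sample_indicator_values(lo: float, hi: float, precision: float = 0.01) -> list:
        # Enumerate lo, lo + precision, lo + 2 * precision, ..., hi, each
        # of which is used as one perception performance indicator value;
        # rounding suppresses floating-point accumulation error.
        n = int(round((hi - lo) / precision))
        return [round(lo + i * precision, 6) for i in range(n + 1)]

For example, sample_indicator_values(0.1, 2.0, 0.01) yields the missing detection durations 0.1 s, 0.11 s, . . . , 2.0 s.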
In some optional embodiments, for any perception performance indicator value, perception fault injection is performed on the first perception result of the target object based on the perception performance indicator value to obtain the second perception result of the target object corresponding to the perception performance indicator value. For example, missing detection fault injection is performed on the first perception result of the target object according to a missing detection duration, so that missing detection is performed on the target object in the second perception result for the corresponding duration. For example, the first perception result may include the state information of the target object at each time point within a period of time, and then the state information of the target object within the missing detection duration is removed from the second perception result, so that the perception fault caused by the perception algorithm missing the target object within the missing detection duration is simulated. For another example, the first perception result may include a longitudinal distance from the target object to the ego vehicle, and error injection is performed on the longitudinal distance of the target object according to the above-mentioned determined longitudinal distance error value to obtain the second perception result. For example, if the longitudinal distance in the first perception result is x1 and the longitudinal distance error value is Δx1, then the longitudinal distance in the second perception result is x1+Δx1. Based on this, the second perception result of the target object corresponding to each of the perception performance indicator values may be obtained.
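Exemplarily, injecting the longitudinal distance error Δx1 so that a perceived longitudinal distance x1 becomes x1+Δx1 may be sketched as follows; the timestamp-to-state-dictionary layout is an illustrative assumption:

    def inject_longitudinal_error(track: dict, error_m: float) -> dict:
        # track maps a timestamp to a state dict whose 'x' entry is the
        # ego-frame longitudinal distance in meters; the error value is
        # added at every time point to form the second perception result.
        return {t: {**state, "x": state["x"] + error_m} for t, state in track.items()}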
In the present embodiment, by determining the at least one perception performance indicator value, it is convenient to perform perception fault injection on the first perception result of the target object to obtain the second perception result at each of the perception performance indicator values, and when the preset vehicle function is verified, the tolerance of the preset vehicle function to the perception performance indicator value may be determined.
In some optional embodiments, perception fault injection may be performed on the first perception results of dynamic or static target objects in the following modes (a code sketch of several of these modes is given after the list):
1. Long-time missing detection for a target object: the target object perceived in the first perception result is continuously discarded for a missing detection duration, and the fault injection ends once the missing detection duration is reached.
2. Short-time high-frequency missing detection for a target object: the target object perceived in the first perception result is discarded at a certain missing detection frequency, each time for the corresponding missing detection duration.
3. Long-time false detection for a target object: continuous false detection perception fault injection is performed on the target object perceived in the first perception result, and the fault injection ends once a false detection duration is reached. For example, state information indicating that a target object is perceived is added, for the false detection duration, into an original time period in which the target object is not perceived in the first perception result, so that perception faults of falsely detecting the target object are simulated.
4. Short-time high-frequency false detection for a target object: short-time high-frequency false detection perception fault injection is performed on the first perception result.
5. Transverse and longitudinal distance errors of a target object: distance error faults are injected into the transverse and longitudinal distances (including at least one of a transverse distance and a longitudinal distance) in the first perception result. For example, error perception fault injection is performed on the transverse distance from each of the surrounding vehicles to the ego vehicle.
6. Transverse and longitudinal speed errors of a target object: error perception faults are injected into the transverse and longitudinal speeds (including at least one of a transverse speed and a longitudinal speed) in the first perception result. For example, error perception fault injection is performed on the longitudinal speed of a vehicle in front of the ego vehicle.
7. Orientation angle errors of a target object: angle error perception fault injection is performed on the orientation angles (i.e., postures or heading angles) in the first perception result.
8. Transverse distance error of a lane line: error perception fault injection is performed on the transverse distance from the lane line to the ego vehicle.
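As one possible reading of modes 2, 6, and 7 above, the following sketch assumes each first perception result is a list of per-frame dictionaries with keys t, vx, vy, and heading; all function and key names are illustrative, not the disclosure's data format.

```python
# A minimal sketch of modes 2, 6, and 7; the frame layout and all names
# are assumptions, not the disclosure's data format.
from typing import Dict, List

Frame = Dict[str, float]  # e.g. {"t": 0.1, "vx": 1.2, "vy": 15.0, "heading": 0.02}

def short_time_high_freq_missing(frames: List[Frame], miss_freq_hz: float,
                                 miss_dur_s: float) -> List[Frame]:
    """Mode 2: drop the first miss_dur_s of every 1/miss_freq_hz-second
    period, so the target disappears briefly at the given frequency."""
    period = 1.0 / miss_freq_hz  # assumes miss_dur_s < period
    return [f for f in frames if (f["t"] % period) >= miss_dur_s]

def speed_error(frames: List[Frame], dvx: float, dvy: float) -> List[Frame]:
    """Mode 6: inject transverse (vx) and longitudinal (vy) speed errors."""
    return [{**f, "vx": f["vx"] + dvx, "vy": f["vy"] + dvy} for f in frames]

def heading_error(frames: List[Frame], dtheta: float) -> List[Frame]:
    """Mode 7: inject an orientation (heading) angle error, in radians."""
    return [{**f, "heading": f["heading"] + dtheta} for f in frames]
```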
During actual applications, more failure modes may be determined according to actual operation scenarios and used for verifying the preset vehicle function through perception fault injection in the in-loop simulation system, so as to obtain the tolerance of the preset vehicle function to the perception performance indicators of the perception algorithm. Such a rich variety of perception fault injections helps obtain the tolerance of the preset vehicle function to more perception performance indicators of the perception algorithm; more accurate and effective perception performance thresholds may then be provided for the development and maintenance of the perception algorithm, further improving the safety of the preset vehicle function.
In some optional embodiments, step 20341 of determining, based on the second perception result corresponding to the target object in the sub-mode, the second perception result information corresponding to the operation scenario in the sub-mode includes:
Step 203411: for any sub-mode, determining, based on the second perception result of the target object corresponding to the perception performance indicator value, the second perception result information corresponding to the perception performance indicator value.
Step 204 of verifying the preset vehicle function based on the second perception result information to obtain a verification result includes:
Step 2041b: determining, based on the second perception result information corresponding to the perception performance indicator value, a function response state corresponding to the preset vehicle function at the perception performance indicator value.
The specific operation principle by which the function response state of the preset vehicle function is determined based on the second perception result information corresponding to any perception performance indicator value may refer to the previously-mentioned embodiment, and will not be repeated herein.
Step 2042b: determining a perception performance threshold of the preset vehicle function in the sub-mode based on the function response states corresponding to the preset vehicle function at the perception performance indicator values.
Each of the sub-modes may include one or more perception performance indicator values. Therefore, the perception performance threshold of the preset vehicle function in the sub-mode may be determined in combination with the function response state of the preset vehicle function at each of the perception performance indicator values.
Step 2043b: determining, based on the perception performance threshold corresponding to the preset vehicle function in the sub-mode, a perception performance threshold of the preset vehicle function in the operation scenario as the verification result.
The specific operation principle of step 2043b may refer to the previously-mentioned embodiment, and will not be repeated herein.
In the present embodiment, by verifying the preset vehicle function based on the second perception result information corresponding to each of the perception performance indicator values in each of the sub-modes, the function response state of the preset vehicle function at each of the perception performance indicator values may be obtained, so that the perception performance threshold that the preset vehicle function can tolerate may be accurately obtained by traversing and verifying the perception performance indicator values, further improving the efficiency and accuracy of determining the perception performance threshold.
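As a hedged sketch of this threshold search (steps 2041b and 2042b), each indicator value of a sub-mode can be verified through the in-loop simulation, abstracted below as a callable, and the largest value at which the function response state is still success taken as the perception performance threshold. All names are assumptions, not the disclosure's interfaces.

```python
# Sketch of steps 2041b-2042b; `verify_at` stands in for one in-loop
# simulation run and is an assumed interface, not the disclosure's API.
from typing import Callable, Iterable, Optional

def perception_performance_threshold(
        indicator_values: Iterable[float],
        verify_at: Callable[[float], bool]) -> Optional[float]:
    """Return the largest indicator value whose function response state is
    success, i.e. the tolerance of the preset vehicle function."""
    passing = [v for v in sorted(indicator_values) if verify_at(v)]
    return passing[-1] if passing else None

# Example usage with a dummy verifier that tolerates errors up to 0.5 m:
# perception_performance_threshold([0.1, 0.3, 0.5, 0.7], lambda v: v <= 0.5)
# returns 0.5.
```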
In some optional embodiments, step 204 of verifying the preset vehicle function based on the second perception result information to obtain a verification result includes:
Step 2041: for any operation scenario, determining map data of the operation scenario based on the operation scenario data of the operation scenario.
The map data may include at least one of positioning data, navigation data, and high-precision map data. The map data may assist the planning-control software in planning and control, so as to provide a better simulation environment for planning and control and more effectively simulate the working situation of the planning-control software in a real scenario.
Step 2042: for the second perception result information of the operation scenario, determining, based on the second perception result information and the map data of the operation scenario, control information of the ego vehicle by using a functional algorithm corresponding to the preset vehicle function.
The functional algorithm corresponding to the preset vehicle function is an application software algorithm for achieving the preset vehicle function, and is used for calculating the control information of the ego vehicle based on the second perception result information, the map data, etc. For example, after the AEB function is triggered, the control information of the ego vehicle, such as a deceleration, may be calculated, a deceleration control instruction is sent to the ego vehicle, and the ego vehicle executes the deceleration control instruction for braking to achieve automatic emergency braking. The control information may include at least one of transverse control information, longitudinal control information, etc. For example, the control information may include a transverse acceleration, a longitudinal acceleration, an angle, etc.
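The disclosure does not fix a concrete functional algorithm; purely as an assumed illustration, an AEB-style computation of longitudinal control information from a perceived gap and closing speed might look like this.

```python
# Assumed illustration of step 2042 for an AEB-like function; the formula,
# names, and default value are not taken from the disclosure.
def aeb_required_deceleration(gap_m: float, closing_speed_mps: float,
                              min_gap_m: float = 2.0) -> float:
    """Deceleration (m/s^2) needed to cancel the closing speed before the
    gap shrinks to min_gap_m, from v^2 = 2*a*d."""
    usable_gap = max(gap_m - min_gap_m, 0.1)  # clamp to avoid division by zero
    return (closing_speed_mps ** 2) / (2.0 * usable_gap)
```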
Step 2043: determining, based on the control information and a dynamic model of the ego vehicle, driving state information of the ego vehicle controlled according to the control information.
The dynamic model of the vehicle is used for simulating the driving track of the vehicle controlled according to the control information. The driving state information of the ego vehicle controlled according to the control information may be determined based on the control information of the ego vehicle and the dynamic model of the vehicle. The driving state information may include state information, such as a position, a posture, a speed, an acceleration, and an angular speed, of the ego vehicle controlled according to the control information at one or more trajectory points (or time points) within a period of time in the future.
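The disclosure leaves the dynamic model unspecified; as one common stand-in, a kinematic single-track ("bicycle") model can propagate the ego state under given control information. Everything below is an assumption for illustration, not the disclosure's model.

```python
# Assumed kinematic bicycle model as a stand-in for the vehicle dynamic
# model of step 2043; the disclosure does not prescribe this model.
import math
from dataclasses import dataclass

@dataclass
class EgoState:
    x: float        # longitudinal position, m
    y: float        # transverse position, m
    heading: float  # heading angle, rad
    v: float        # speed, m/s

def step(state: EgoState, accel: float, steer: float,
         dt: float = 0.02, wheelbase: float = 2.8) -> EgoState:
    """Advance the ego state by dt seconds under constant controls."""
    heading = state.heading + state.v / wheelbase * math.tan(steer) * dt
    v = max(state.v + accel * dt, 0.0)  # this sketch does not model reversing
    x = state.x + v * math.cos(heading) * dt
    y = state.y + v * math.sin(heading) * dt
    return EgoState(x, y, heading, v)
```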
Step 2044: determining, based on the driving state information and the operation scenario data corresponding to the second perception result information, a function response state of the preset vehicle function under the second perception result information.
The driving tracks of other objects around the ego vehicle within a period of time in the future may be determined based on the operation scenario data corresponding to the second perception result information. It may then be determined, in combination with the driving state information of the ego vehicle and the driving tracks of the other objects, whether the change in the relative relationship between the ego vehicle and each of the other objects achieves an expected result, such as whether the AEB function achieves successful braking of the ego vehicle, or whether the HWP function achieves successful obstacle avoidance during a vehicle cut-in. If the expected result is achieved, it is determined that the function response state is success; and if the expected result is not achieved, it is determined that the function response state is failure.
Step 2045: determining the verification result based on the function response state of the preset vehicle function under the second perception result information.
The specific operation principle of step 2045 may refer to the previously-mentioned embodiment, and will not be repeated herein.
In the present embodiment, the preset vehicle function is further verified in combination with the map data and the dynamic model of the vehicle, which is beneficial to further improving the accuracy of the verification result.
In some optional embodiments, step 2044 of determining, based on the driving state information and the operation scenario data corresponding to the second perception result information, a function response state of the preset vehicle function under the second perception result information includes:
determining, based on the driving state information and the operation scenario data corresponding to the second perception result information, a state of an expected result of the ego vehicle under the preset vehicle function; in response to the state of the expected result being achieved, determining the function response state of the preset vehicle function under the second perception result information as success; or in response to the state of the expected result not being achieved, determining the function response state of the preset vehicle function under the second perception result information as failure.
The expected result may be set according to a functional safety goal of the preset vehicle function, for example, the AEB function achieving successful braking, or the HWP function achieving successful obstacle avoidance. It may be determined, in combination with the driving state information of the ego vehicle controlled according to the control information and the driving tracks of other objects in the operation scenario data, whether a collision occurs between the ego vehicle and any of the other objects, whether the ego vehicle successfully avoids obstacles, whether the minimum longitudinal distance from the ego vehicle to each target object is greater than the minimum longitudinal safety distance threshold, and whether the minimum transverse distance from the ego vehicle to each target object is greater than the minimum transverse safety distance threshold; thus, the state of the expected result of the ego vehicle under the preset vehicle function may be determined. If the expected result is achieved, it is determined that the function response state of the preset vehicle function is success; and if the expected result is not achieved, it is determined that the function response state is failure.
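As a hedged sketch of this check, the ego trajectory from the dynamic model can be compared per time step against each target trajectory from the operation scenario data, with the response state being success only when the safety distance conditions hold throughout. Names and threshold defaults below are illustrative assumptions.

```python
# Sketch of the expected-result check; all names and default thresholds
# are assumptions, not values from the disclosure.
from typing import List, Tuple

Point = Tuple[float, float]  # (longitudinal, transverse) position per time step

def function_response_state(ego_track: List[Point],
                            target_tracks: List[List[Point]],
                            min_long_gap: float = 2.0,
                            min_lat_gap: float = 0.5) -> str:
    """Return "success" if, at every time step, each target keeps either the
    minimum longitudinal or the minimum transverse safety gap from the ego."""
    for track in target_tracks:
        for (ex, ey), (tx, ty) in zip(ego_track, track):
            if abs(tx - ex) < min_long_gap and abs(ty - ey) < min_lat_gap:
                return "failure"  # both safety gaps violated simultaneously
    return "success"
```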
The above-mentioned embodiments of the present disclosure may be implemented alone or in any combination, which may be specifically set according to actual demands and is not limited in the present disclosure.
Any method for verifying a vehicle function provided in the embodiments of the present disclosure may be performed by any appropriate device with data processing capacity, including, but not limited to, a terminal device and a server. Alternatively, any method for verifying a vehicle function provided in the embodiments of the present disclosure may be performed by a processor; for example, the processor performs any method for verifying a vehicle function mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory, which will not be repeated hereinafter.
The acquisition module 501 is configured to acquire operation scenario data respectively corresponding to each of one or more operation scenarios, the operation scenario data comprising state information and surrounding environmental information of an ego vehicle.
The first processing module 502 is configured to determine, based on the operation scenario data respectively corresponding to each of the operation scenarios, first perception result information corresponding to a preset vehicle function of the ego vehicle in each of the operation scenarios.
The second processing module 503 is configured to perform perception failure simulation processing on the first perception result information based on a perception failure simulation rule to obtain second perception result information respectively corresponding to each of the operation scenarios.
The third processing module 504 is configured to verify the preset vehicle function based on the second perception result information to obtain a verification result.
In some optional embodiments, the first perception result information includes a first perception result of at least one object around the ego vehicle. The second processing module 503 includes:
In some optional embodiments, the first processing module 502 includes:
The first processing unit 5031 is specifically configured to, for any operation scenario, determine, based on the first perception result information corresponding to each of the trajectory points, the target object in different areas around the ego vehicle.
In some optional embodiments, the third processing module 504 is specifically configured to:
In some optional embodiments, the failure modes include at least one type of sub-mode.
In some optional embodiments, the third processing unit 5033 is specifically configured to:
for any sub-mode, perform perception failure simulation processing on the first perception result of the target object based on a simulation sub-rule corresponding to the sub-mode to obtain the second perception results of the target object in the sub-mode.
The fourth processing unit 5034 is specifically configured to:
In some optional embodiments, the third processing module 504 includes:
In some optional embodiments, the third processing unit 5033 is specifically configured to:
In some optional embodiments, the fourth processing unit 5034 is specifically configured to:
The third processing module 504 is specifically configured to:
In some optional embodiments, the third processing module 504 includes:
In some optional embodiments, the eighth determination unit 5044 is specifically configured to:
The beneficial technical effects corresponding to the exemplary embodiments of the present apparatus may refer to the corresponding beneficial technical effects of the above-mentioned exemplary method, and will not be repeated herein.
The processor 11 may be a central processing unit (CPU) or other forms of processing units with data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to execute expected functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as a volatile memory and/or a nonvolatile memory. The volatile memory may include a random access memory (RAM) and/or a cache memory, etc. The nonvolatile memory may include a read-only memory (ROM), a hard disk, a flash memory, etc. The computer-readable storage media may store one or more computer program instructions, and the one or more computer program instructions may be executed by the processor 11 to implement the method in the embodiments of the present disclosure mentioned above and/or other expected functions.
In an example, the electronic device 10 may further include an input means 13 and an output means 14, and these components are interconnected by a bus system and/or other forms of connecting mechanisms (not shown).
The input means 13 may include a keyboard, a mouse, etc.
The output means 14 may output various information to the outside of the electronic device 10, and may include a display, a loudspeaker, a printer, a communication network, a remote output device connected by the communication network, etc.
Of course, for simplification, only some of the components of the electronic device 10 related to the present disclosure are shown, and components such as a bus and input/output interfaces are omitted. In addition, the electronic device 10 may further include any other appropriate components according to specific application situations.
In addition to the above-mentioned method and device, an embodiment of the present disclosure further provides a computer program product, including computer program instructions, wherein the computer program instructions, when run by a processor, cause the processor to perform the steps of the method in the various embodiments of the present disclosure described in the above-mentioned “exemplary method”.
The computer program product may write program codes for performing the operations in the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages, such as Java and C++, and conventional procedural programming languages, such as the “C” language or similar programming languages. The program codes may be executed entirely on a user computation device, partially on user equipment, as an independent software package, partially on the user computation device and partially on a remote computation device, or entirely on the remote computation device or a server.
In addition, an embodiment of the present disclosure may further provide a computer-readable storage medium, on which computer program instructions are stored, wherein the computer program instructions, when run by a processor, cause the processor to perform the steps of the method in the various embodiments of the present disclosure described in the above-mentioned “exemplary method”.
The computer-readable storage medium may adopt any one of, or any combination of, multiple readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium includes, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include an electric connector with one or more conducting wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination thereof.
The basic principle of the present disclosure has been described above in conjunction with specific embodiments. However, the advantages, superiorities, effects, etc. mentioned in the present disclosure are merely exemplary rather than restrictive, and should not be considered necessary for each embodiment of the present disclosure. In addition, the specific details disclosed above are only for the purposes of illustration and ease of understanding, rather than restriction, and the above-mentioned details do not limit the present disclosure to being necessarily achieved by adopting the above-mentioned specific details.
Those skilled in the art may make various alterations and variations to the present disclosure without departing from the spirit and scope of the present application. In this way, if these alterations and variations of the present application fall within the scope of the claims of the present disclosure and their equivalent technologies, the present disclosure is also intended to include these alterations and variations.
Number | Date | Country | Kind
---|---|---|---
202311469457.2 | Nov. 6, 2023 | CN | national