The present disclosure relates to the field of autonomous driving technology, and particularly to a decision and control technology for an advanced driver assistance system (ADAS+) based on V2X perspective perception technology in a preceding vehicle obstruction scene, as well as an advanced driver assistance system (ADAS+) and a vehicle equipped with the system.
Safe driving is the primary rigid demand of vehicle users. During driving, rear-end collisions are a major cause of traffic accidents. Especially on highways and in other high-speed driving scenes, the vast majority of traffic accidents are caused by rear-end collisions, and they often result in multi-vehicle pile-ups with serious casualties and losses. The causes are often following too closely or driver inattention.
Although existing technologies such as Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB) attempt to solve this problem, they are limited by physical factors and their system response is not timely, so they are often still unable to avoid rear-end collisions. For example, an ego vehicle (SV) is following a preceding vehicle (TV2) at high speed, TV2 is following its preceding vehicle (TV1), and so on. If the frontmost vehicle TV1 performs an emergency braking due to a sudden situation, the adjacent preceding vehicle TV2 may subsequently also perform an emergency braking. Only after TV2 has been braking hard for a certain time can the driver or the ADAS system of the ego vehicle SV perceive it and begin to judge and react. If the vehicle spacing between SV-TV2-TV1 is relatively close, even if the ego vehicle SV is equipped with an ADAS system with an AEB function, it may still be unable to avoid a rear-end collision with TV2. Most of the multi-vehicle pile-up accidents on highways occur in such scenes.
The integration of autonomous driving vehicles, roads, and smart cities is a current cross-industry development trend. The development and maturity of intelligent, connected, big data, and cloud platform technologies are the technical foundation and guarantee for realizing “intelligent vehicles+”. Intelligent driving technology is one of the core technical fields of intelligent connected vehicles. Among them, environmental perception and control decision-making are the core technical bottlenecks of intelligent driving systems. At present, in the field of intelligent driving technology, the environmental perception capability of the system is far from mature, which is the bottleneck of bottlenecks and the key constraint to realizing intelligent driving. Both on-board perception (vehicle-mounted sensors) and vehicle-road collaboration (V2X) have their limitations. Only the combination of the two can achieve a breakthrough and leap in intelligent perception technology, which is currently the most feasible systematic solution and technical route for intelligent driving. In other words, to realize the environmental perception capability that empowers intelligent driving of vehicles, it is necessary to integrate vehicle-mounted sensors and vehicle-road collaborative information technology, thereby greatly enhancing the perception capability of vehicles and ultimately greatly enhancing the functionality, performance, and reliability of intelligent driving. Meanwhile, after vehicle-road collaboration applications become widespread, the cost of single-vehicle intelligent perception can be greatly reduced.
Developing intelligent connected vehicles based on vehicle-road collaboration and realizing intelligent driving technology that can handle extremely complex and ever-changing scenes is a long road and process. Although fully autonomous driving is the direction of development for intelligent connected vehicle technology, it is a long-term goal, and widespread commercial application will take a long time to achieve. Market demand is the decisive factor in promoting technological progress and implementation. Recently, the industry has begun to form a consensus that solving driving safety problems in key dangerous scenes, relieving traffic congestion, and improving traffic efficiency through V2X technology is the most important rigid market demand and addresses the biggest pain point in travel safety. This is a problem that needs to be solved gradually over the next few decades. In other words, solving the driving safety problem in key dangerous scenes is the current key goal and promotes the industrialization and implementation of the technology.
ADAS (Advanced Driver Assistance System) is a typical driver assistance system for solving driving safety problems and is also the technical foundation for realizing autonomous driving. It has been developing rapidly recently and has a huge market. However, although ADAS system products have been applied in the market for many years, their technology is still far from mature, and the functionality and performance of ADAS are also severely limited by the perception capabilities of the system. Especially in some special dangerous scenes, ADAS cannot achieve effective collision avoidance. By using V2X technology, the vehicle-mounted system can achieve fusion perception with roadside perception information, breaking through the technical bottlenecks of the system in perception and decision-making algorithms in some high-risk scenes, and developing an ADAS+ system with expanded functions and enhanced performance.
Existing ADAS technology relies on its on-board perception devices and is limited by its environmental perception range and capabilities. In many key dangerous scenes, it cannot play a role. For example, scenes where vehicles merging at intersections obstruct the field of view, scenes where the preceding vehicle is obstructed, scenes where traffic lights are obstructed, scenes where red lights are run at intersections, etc. In these scenes, physical conditions determine that the ADAS system based on the vehicle's own on-board perception cannot effectively play the role of collision avoidance in case of sudden situations.
V2X technology can help overcome the environmental perception obstacles in the above scenes and systematically improve the vehicle's environmental perception capabilities. However, the current application of V2X technology mainly focuses on helping to realize autonomous driving technology, and the widespread commercial application and implementation of autonomous driving technology are still a long way off. It can be seen that the combination of V2X and ADAS systems has very broad potential and development space in both technology and commercial application implementation.
The purpose of the present disclosure is, based on the systematic and in-depth fusion of V2X and on-board perception, to solve one of the high-risk scenes that traditional ADAS system technology cannot solve, and to develop enhanced and more reliable ADAS+ functional decision and control algorithm technology.
According to one aspect of the present disclosure, a decision and control method for ADAS in a preceding vehicle obstruction scene based on V2X is provided, including:
based on V2X perspective perception technology, collaboratively perceiving driving status information of an obstructed preceding vehicle, and fusing with driving status information of a potentially conflicting vehicle perceived by on-board perception of an ego vehicle;
based on the fused information, performing behavior judgment of the potentially conflicting vehicle;
making a control decision of the ego vehicle according to the judgment result.
Further, the driving status information of the potentially conflicting vehicle can be obtained by achieving real-time V2V communication and information interaction with the potentially conflicting vehicle or by achieving real-time V2I communication and information interaction with roadside unit equipment through V2X perspective perception technology;
the driving status information of the obstructed preceding vehicle can be obtained by achieving real-time V2V communication and information interaction with the obstructed preceding vehicle, or by achieving real-time V2I communication and information interaction with roadside unit equipment through V2X perspective perception technology.
Further, the control decision includes early alerting, mild decelerating, and emergency braking.
Further, when D002<D002-alert, early alerting is initiated to a driver;
wherein, D002 is the relative distance between the ego vehicle and the potentially conflicting vehicle, D002-alert is an early alerting distance, D002-alert=Dstop−D2stop+Dpre1, wherein Dpre1 is a preset constant for early alerting in advance, Dstop is the travel distance for the ego vehicle to decelerate to a stop, and D2stop is the travel distance for the potentially conflicting vehicle to decelerate to a stop.
Further, if the potentially conflicting vehicle undergoes emergency braking and a distance from its preceding vehicle is less than a preset value, an early alerting is initiated to the driver.
Further, when D002<D002-mild, a mild decelerating control is initiated;
wherein, D002 is the relative distance between the ego vehicle and the potentially conflicting vehicle, D002-mild is a triggering distance for mild decelerating control, D002-mild=Dstop−D2stop, Dstop is travel distance for the ego vehicle to decelerate to stop, D2stop is travel distance for the potentially conflicting vehicle to decelerate to stop.
Further, a deceleration of the mild decelerating is adjusted in real-time according to relative distance and relative speed between the two vehicles;
when D002>D002-mild+Dpre2, decelerating is stopped, wherein Dpre2 is a preset constant.
Further, when the minimum value of the absolute value of the deceleration of the ego vehicle |amin|>A_aeb, an emergency braking control is initiated, wherein A_aeb is a set value.
According to another aspect of the present disclosure, an advanced driver assistance system based on V2X perspective perception technology is provided, which can execute the above-mentioned decision and control method.
According to yet another aspect of the present disclosure, an intelligent networked vehicle is provided, comprising the above-mentioned advanced driver assistance system based on V2X perspective perception technology;
an on-board perception unit and a roadside perception unit, which provide target object information in an obstructed area for the advanced driver assistance system through V2X association and fusion perception.
According to the present disclosure, through the application of V2X technology, the fusion of on-board perception technology and roadside perception technology is achieved, more accurate, more reliable, and all-weather environmental perception information is obtained, and the perception obstacles in key dangerous scenes are solved. Especially in the scene where the line of sight to the preceding vehicle is obstructed, it realizes functions that traditional ADAS (based on on-board perception) cannot achieve or enhances the performance and reliability of traditional ADAS functions.
The method of the present disclosure performs behavior judgment of a potentially conflicting vehicle based on V2X perception information, thereby developing control decision algorithm technology for the ego vehicle. On the one hand, it comprehensively considers the requirements of comfort, safety, and collision avoidance, and at the same time it is applicable to both ADAS and autonomous driving systems. The decision and control algorithm strategy for the vehicle of the present disclosure is calculated and optimized in real-time through vehicle motion trajectories.
Before initiating emergency braking, the method of the present disclosure can apply a braking pre-control strategy when necessary to improve system response speed, reduce braking delay, and improve collision avoidance performance.
The present disclosure will be described in more detail through the illustrative embodiments in combination with the drawings, in which the same reference numerals generally represent the same components in the illustrative embodiments of the present disclosure.
The preferred embodiments of the present disclosure will be described in more detail below with reference to the drawings. Although the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided to make the disclosure of the present disclosure more thorough and complete and to fully convey the scope of the present disclosure to those skilled in the art.
The related technical terms in the present disclosure are explained as follows.
V2X: Vehicle to Everything, information communication and interaction between vehicles and the outside world (X represents all things in the outside world, including: V2V, V2I, V2N, V2P, etc.).
V2V: Vehicle to Vehicle, information communication and interaction between vehicles and other vehicles.
V2I: Vehicle to Infrastructure, communication and information interaction between vehicles and surrounding roadside facilities.
V2N: Vehicle to Network, communication and information interaction between vehicles and telecommunication networks.
V2P: Vehicle to Pedestrian, communication and information interaction between vehicles and pedestrians.
AEB: Automatic Emergency Braking.
There are many core technical bottlenecks to realize intelligent driving, among which environmental perception technology is the core of the cores and the constraint factor for the implementation of intelligent driving systems. The purpose of the present disclosure is to obtain the ego vehicle's perspective perception and advance perception capabilities of the environment through the fusion perception of on-board perception and V2X technology (V2V or V2I), predict the behavior of the preceding vehicle TV2 being followed, so that the ego vehicle can make judgments and reactions in advance, and achieve the purpose of avoiding collision with the preceding vehicle to the greatest extent. The present disclosure provides the prediction, judgment, and control decision technology of the preceding vehicle TV2's behavior in this scene.
The present disclosure provides a decision and control method for ADAS in a preceding vehicle obstruction scene based on V2X, including: based on V2X perspective perception technology, collaboratively perceiving driving status information of an obstructed preceding vehicle, and fusing it with driving status information of a potentially conflicting vehicle perceived by on-board perception of an ego vehicle; based on the fused information, performing behavior judgment of the potentially conflicting vehicle; and making a control decision of the ego vehicle according to the judgment result.
To facilitate understanding of the scheme and effect of the embodiment of the present disclosure, a specific application example is given below. Those skilled in the art should understand that this example is for the purpose of facilitating understanding of the present disclosure and is not intended to limit the present disclosure in any way.
As shown in
The ego vehicle SV is equipped with an ADAS system with an emergency braking function (AEB), but the line of sight to vehicle TV1 is obstructed by TV2. If TV1 performs an emergency braking due to a sudden situation, TV2 will subsequently follow with an emergency braking, and SV may fail to respond in time to TV2's emergency braking and cause a rear-end collision; this is, for example, how multi-vehicle pile-up traffic accidents occur on highways. If SV can perceive driving information such as the braking status of TV1 in advance, it can make judgments in advance and take necessary control measures in advance, such as early alerting, early decelerating, or early emergency braking, to achieve the purpose of avoiding a collision with TV2.
As shown in
The ego vehicle SV is equipped with an OBU (On Board Unit) device (V2X information communication equipment) for achieving real-time V2V communication and information interaction with the preceding vehicle TV2, or for achieving real-time V2I communication and information interaction with RSU (Road Side Unit) equipment.
The preceding vehicle TV1 is equipped with an OBU device to achieve real-time V2V communication and information interaction with the ego vehicle SV, which can transmit its driving speed, braking status, deceleration and other information to vehicle SV in real time.
Perception devices and RSU equipment for realizing V2I communication are installed on the roadside. Roadside perception can monitor and identify vehicles on the road in real time, as well as their position and status information (such as driving speed and deceleration, etc.). Vehicle SV realizes real-time information interaction with the roadside RSU equipment through V2I, and the RSU transmits information of TV1 to vehicle SV in real time.
In the present disclosure, the ego vehicle SV, through its on-board OBU device and V2X (V2V or V2I) communication, connects with the preceding vehicle TV1 equipped with an OBU device or with the roadside RSU equipment, collaboratively perceives the driving status information of the obstructed preceding vehicle (such as TV1), and then fuses it with the on-board perception information of the ego vehicle. On the one hand, the ego vehicle perceives the driving status and distance of the vehicle TV2 immediately ahead through its own perception system, and at the same time, obtains the driving behavior information of vehicle TV1 in a timely manner through the perspective perception capability obtained through V2X. If the obstructed preceding vehicle TV1 decelerates or performs emergency braking, the system can obtain the decelerating information of the obstructed preceding vehicle before the ego vehicle perceives the decelerating of the vehicle TV2 immediately ahead through on-board perception devices, make perception and judgment of the environment in advance, make decisions in advance, and take necessary measures in advance, such as alerting, active decelerating, or emergency braking. The ultimate goal is to keep the ego vehicle SV at a safe driving distance from the vehicle TV2 immediately ahead and avoid collisions in emergency situations. To achieve the above system technical goals, the following aspects of technical content mainly need to be solved: real-time communication of V2V or V2I, perception fusion based on V2X, behavior prediction and behavior judgment of the preceding vehicle TV2, and the control decision algorithm of SV.
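The overall per-cycle decision flow described above can be illustrated by the following minimal sketch (written in Python for illustration only; the function and variable names, the 25 m gap threshold, and the ordering of checks are assumptions for clarity, not a definitive implementation, and the threshold distances and required deceleration are computed from the formulas derived later in this description):

```python
G = 9.81  # gravitational acceleration, m/s^2

def decide(d002, d012, a1, d002_alert, d002_mild, a_min_req, a_aeb=0.5 * G):
    """Return the control decision for the current cycle: 'aeb', 'mild_decel',
    'alert', or 'none'. d002 / d012 are the SV-TV2 / TV2-TV1 gaps (m), a1 is
    TV1's acceleration obtained via V2X (negative when braking), and a_min_req
    is the minimum deceleration magnitude SV would need to avoid collision."""
    if a_min_req is None or a_min_req > a_aeb:   # avoidance needs stronger braking than A_aeb
        return "aeb"
    if d002 < d002_mild:                         # inside the mild-deceleration trigger distance
        return "mild_decel"
    if d002 < d002_alert or (a1 < -0.5 * G and d012 < 25.0):   # early alerting conditions 1 / 2
        return "alert"
    return "none"
```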
Referring to
The instantaneous vehicle speeds of the ego vehicle SV, the vehicle TV2 immediately ahead, and the obstructed preceding vehicle TV1 are v, v2 and v1 respectively. The initial vehicle speeds at a certain initial calculation moment (t=0) are V0, V02, V01 respectively.
The instantaneous vehicle distances between SV and TV2, TV2 and TV1 are d02 and d12 respectively. The initial vehicle distances at a certain initial calculation moment (t=0) are D002 and D012 respectively.
The instantaneous accelerations of SV, TV2, and TV1 are a, a2 and a1 respectively (note: deceleration is a negative value).
Under emergency braking, the shortest reaction time of the driver of the preceding vehicle TV2 is Tdd2, and the response delay time of the braking system is Tbd2.
The shortest reaction time of the driver of the ego vehicle SV is Tdd, and the response delay time of the SV braking system is Tbd.
In this disclosure, the main focus is on solving the following problem: when TV2 follows TV1 and TV1 decelerates suddenly, the ego vehicle SV's perception system can obtain perspective perception of TV1 through V2X technology and predict that TV2 may follow TV1 into emergency braking. The ego vehicle SV needs to judge TV2's decelerating behavior and estimate its driving trajectory in advance, thereby taking measures in advance to avoid a rear-end collision with TV2 or to reduce the severity of the collision to the greatest extent possible. If TV1 undergoes mild decelerating, it generally will not trigger the need for emergency braking of TV2 and SV, so this situation is not within the consideration scope of the present disclosure.
Wherein, prediction of the reaction behavior of the preceding vehicle TV2 (reaction delay and prediction of deceleration):
TV2 follows TV1 closely. When TV1 decelerates at high intensity (deceleration |a1|>A1_trig, for example, A1_trig=0.5 g or other adapted values), TV2 will take decelerating behavior following TV1's decelerating, but with a delay from TV1. TV2's reaction behavior is aimed at avoiding collision with its preceding vehicle TV1, so it depends on the relative distance and relative speed between TV2 and TV1 and decelerating intensity of TV1.
From the current moment to a certain moment t, the motion state of TV1:
instantaneous speed of TV1: v1=V01+a1*t
travel distance of TV1: d1=V01*t+(1/2)*a1*t^2
when TV1 decelerates to stop, the required time is T1stop=−V01/a1, thus the travel distance of TV1 can be calculated from the above formula, defined as D1stop.
From the current moment to a certain moment t, the motion state of TV2:
instantaneous speed of TV2: v2=V02+a2*(t−Tdd2−Tbd2)
travel distance of TV2: d2=V02*t+(1/2)*a2*(t−Tdd2−Tbd2)^2 (for t≥Tdd2+Tbd2; before the delay elapses, TV2 travels at V02)
when TV2 decelerates to stop, the required time is T2stop=−V02/a2+(Tdd2+Tbd2), thus the travel distance of TV2 can be calculated from the above formula, defined as D2stop.
Relative distance between TV2 and TV1: d12=D012+d1−d2
The condition for TV2 to avoid collision with TV1 is that when TV2 brakes to stop, d12>0
that is: D012+D1stop−D2stop>0
Thus, from the above formula, the minimum expected deceleration that vehicle TV2 should take can be predicted, defined as a2exp (acceleration is a negative value).
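For clarity, the prediction of the minimum expected deceleration a2exp of TV2 from the stopping-distance condition above can be sketched as follows (a minimal illustration in Python, assuming the piecewise constant-deceleration model and the delay-inclusive stopping distance of TV2 given above; the function name and the handling of the unavoidable-collision case are illustrative assumptions):

```python
def expected_tv2_decel(v01, a1, v02, d012, t_dd2, t_bd2):
    """Minimum deceleration (negative, m/s^2) that TV2 is expected to apply so
    that it just stops without reaching TV1, i.e. the a2exp satisfying
    D012 + D1stop - D2stop > 0 at the limit. Assumes a1 < 0 (TV1 is braking)."""
    d1stop = -v01 ** 2 / (2.0 * a1)                  # travel distance of TV1 to stop
    margin = d012 + d1stop - v02 * (t_dd2 + t_bd2)   # distance left for TV2 to brake in
    if margin <= 0.0:
        return None                                  # no finite deceleration suffices in this model
    return -(v02 ** 2) / (2.0 * margin)              # a2exp (a negative value)
```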
Wherein, prediction and estimation of the relative distance between the ego vehicle SV and the vehicle TV2 immediately ahead:
from the current moment to a certain moment t, the motion state of TV2:
when t<(Tdd2+Tbd2),
instantaneous speed of TV2: v2=V02+a2_1*t (note: deceleration is a negative value in the formula)
travel distance of TV2: d2_1=V02*t+(1/2)*a2_1*t^2
where, a2_1 is the acceleration (deceleration is a negative value) of TV2 measured during this period.
After t≥(Tdd2+Tbd2), instantaneous speed of TV2: v2=V02_2+a2exp*(t−Tdd2−Tbd2)
where, V02_2 is the speed at the moment t=(Tdd2+Tbd2), V02_2=V02+a2_1*(Tdd2+Tbd2).
travel distance of TV2: d2=d2_1+d2_2
where, d2_1=V02*(Tdd2+Tbd2)+(1/2)*a2_1*(Tdd2+Tbd2)^2, and d2_2=V02_2*(t−Tdd2−Tbd2)+(1/2)*a2exp*(t−Tdd2−Tbd2)^2.
If the expected deceleration magnitude |a2exp| is less than the actual deceleration magnitude |a2| of TV2 perceived by on-board sensors, the actual deceleration is used (a2exp=a2). That is, in the calculation, a2exp=min(a2, a2exp) (both being negative values).
The calculation of d2_2 is carried out until v2=0 (stop), that is, until t=(Tdd2+Tbd2)−V02_2/a2exp; for later times, d2_2 remains at the value reached at that moment.
Under the status that the ego vehicle SV is in the driver's driving operation state, from the current moment to a certain moment t, the motion state of SV:
instantaneous speed of SV: v=V0+a*(t−Tdd−Tbd) (note: deceleration is a negative value in the formula)
travel distance of SV: d=V0*t+(1/2)*a*(t−Tdd−Tbd)^2 (for t≥Tdd+Tbd; before the delay elapses, SV travels at V0)
relative distance between SV and TV2: d02=D002+d2−d
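The piecewise trajectory prediction described above can be illustrated with the following sketch (Python, for illustration only; helper names and the stop-clipping behavior are assumptions, and the edge case where TV2 stops during its reaction delay is not handled):

```python
def tv2_distance(v02, a2_1, a2exp, t_delay, t):
    """Predicted travel distance of TV2: measured acceleration a2_1 during the
    delay t_delay = Tdd2 + Tbd2, then the expected deceleration a2exp (< 0)
    afterwards, clipped at standstill (v2 = 0)."""
    t1 = min(t, t_delay)
    d2_1 = v02 * t1 + 0.5 * a2_1 * t1 ** 2
    if t <= t_delay:
        return d2_1
    v02_2 = v02 + a2_1 * t_delay                 # speed when the strong braking phase begins
    tb = min(t - t_delay, -v02_2 / a2exp)        # braking time, clipped at the stopping moment
    return d2_1 + v02_2 * tb + 0.5 * a2exp * tb ** 2

def sv_distance(v0, a, t_delay, t):
    """Travel distance of SV: constant speed during the driver + brake delay
    t_delay = Tdd + Tbd, then constant acceleration a, clipped at standstill."""
    if t <= t_delay:
        return v0 * t
    tb = t - t_delay
    if a < 0.0:
        tb = min(tb, -v0 / a)                    # SV does not move after it has stopped
    return v0 * t_delay + v0 * tb + 0.5 * a * tb ** 2

def predicted_gap(d002, v0, a, sv_delay, v02, a2_1, a2exp, tv2_delay, t):
    """Predicted relative distance d02(t) = D002 + d2(t) - d(t)."""
    return d002 + tv2_distance(v02, a2_1, a2exp, tv2_delay, t) - sv_distance(v0, a, sv_delay, t)
```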
Referring to
When the distance between the ego vehicle SV and the preceding vehicle TV2 is less than a certain safe distance, or when the obstructed preceding vehicle TV1 performs an emergency braking and the distance between TV2 and TV1 is less than a set value, before there is a need to activate the ADAS emergency braking or automatic decelerating function, the system of the ego vehicle SV first initiates an early alerting to its driver, so that the driver can take necessary operations as early as possible to avoid a collision.
Safe distance setting method: if the preceding vehicle performs an emergency braking to stop, the ego vehicle can safely stop under mild decelerating (for example, |amild|<0.2 g, this value can be adjusted and optimized as needed) without the risk of collision with the preceding vehicle TV2.
(i) Calculation of Early Alerting Distance D002-alert;
Initial vehicle speed of the ego vehicle is V0, and its deceleration is a (a negative value); initial vehicle speed of the preceding vehicle is V02, and its deceleration is a2.
Initial distance between the two vehicles is D002, the reaction delay of the driver of the ego vehicle is T_dd, and the response delay of the braking system: T_bd.
Where, prediction of TV2 motion:
if the preceding vehicle TV1 or TV2 brakes sharply, predict the travel distance of the preceding vehicle TV2 to decelerate to stop:
time for TV2 to decelerate to stop: T2stop=−V02/a2exp (deceleration is a negative value); travel distance for TV2 to decelerate to stop: D2stop=−V02^2/(2*a2exp)
(Where, the calculation method of a2exp, see “prediction of the reaction behavior of the preceding vehicle TV2”)
Where, estimation of the motion of the ego vehicle SV (under the condition of mild braking, the control target):
time to decelerate to stop: Tstop=−V0/amild+(T_dd+T_bd)
(wherein, for example, |amild|=0.2 g or other appropriate values)
travel distance for SV to decelerate to stop: Dstop=V0*(T_dd+T_bd)−V0^2/(2*amild)
wherein, estimation of the distance between the ego vehicle SV and the preceding vehicle TV2 after both have stopped: d02=D002+D2stop−Dstop
The principle for triggering early alerting is that after a certain time of early alerting, mild braking to stop can still be performed without collision with the preceding vehicle.
The condition for not colliding with the preceding vehicle is: d02>0, that is, D002>Dstop−D2stop.
Based on the above calculation, the early alerting distance is set to: D002-alert=Dstop−D2stop+Dpre1.
Here, Dpre1 is a preset constant for early alerting in advance (for example, 25 m, the specific value can be adjusted and optimized as needed).
Early alerting triggering condition 1:
when the initial relative distance D002 between SV and TV2 is less than D002-alert, the system of SV initiates an early alerting to the driver.
Additional early alerting triggering condition 2:
in addition to early alerting condition 1, if TV1 undergoes emergency braking with a deceleration |a1|>0.5 g, and the relative distance between TV2 and TV1 D012<D212-alert, the system of SV initiates an early alerting to the driver. (For example, D212-alert=25 m, the specific value can be adjusted and optimized as needed)
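The two early-alerting triggering conditions above can be combined into a single check, sketched below (Python, illustrative only; the delay values, the mild deceleration of 0.2 g, and the 25 m constants for Dpre1 and the TV2-TV1 alert distance are example assumptions that can be adjusted as the description notes):

```python
G = 9.81  # m/s^2

def early_alert_needed(d002, v0, v02, a2exp, d012, a1,
                       t_dd=0.8, t_bd=0.2, a_mild=-0.2 * G,
                       d_pre1=25.0, d12_alert=25.0):
    """Early-alerting decision per triggering conditions 1 and 2. a2exp and a1
    are negative when the corresponding vehicle is braking."""
    # Travel distance for SV to stop under mild braking, including driver and brake delays.
    d_stop = v0 * (t_dd + t_bd) - v0 ** 2 / (2.0 * a_mild)
    # Predicted travel distance for TV2 to stop at its expected deceleration a2exp.
    d2_stop = -v02 ** 2 / (2.0 * a2exp)
    d002_alert = d_stop - d2_stop + d_pre1
    cond1 = d002 < d002_alert                        # condition 1: SV-TV2 gap below alert distance
    cond2 = (a1 < -0.5 * G) and (d012 < d12_alert)   # condition 2: TV1 brakes hard, TV2-TV1 gap small
    return cond1 or cond2
```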
After the early alerting for the decelerating of the preceding vehicle TV2 and TV1, before the need to initiate high-intensity decelerating, mild decelerating can be initiated according to the vehicle distance and relative speed to maintain the vehicle distance and improve vehicle driving comfort. This function is applicable to vehicles with autonomous driving functions (or automatic accelerating and decelerating functions), and is not set for vehicles driven by drivers.
Using the aforementioned method, calculate Dstop and D2stop. The triggering distance of mild decelerating control is: D002-mild=Dstop−D2stop.
When D002<D002-mild, initiate mild decelerating control to maintain the safe distance between SV and TV2. The deceleration of mild decelerating, for example, |amild|<0.2 g, this value can be adjusted in real-time according to the relative distance and relative speed between the two vehicles.
When D002>D002-mild+Dpre2, stop decelerating, Dpre2 is a preset constant.
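The trigger and release thresholds above form a hysteresis band, which could be implemented along the lines of the following sketch (Python, illustrative only; the function name and the constant-deceleration command are assumptions, whereas in practice the deceleration would be adjusted in real time from the relative distance and relative speed):

```python
G = 9.81  # m/s^2

def mild_decel_command(d002, d002_mild, d_pre2, currently_decelerating, a_mild=-0.2 * G):
    """Hysteresis around the mild-deceleration band: start decelerating below
    D002-mild, release only above D002-mild + Dpre2 to avoid chattering.
    Returns the commanded acceleration (0.0 means no deceleration request)."""
    if d002 < d002_mild:
        return a_mild
    if currently_decelerating and d002 <= d002_mild + d_pre2:
        return a_mild                     # keep decelerating inside the hysteresis band
    return 0.0
```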
Following control target principle: in the case of significant decelerating of TV2 (for example, |a2|>0.5 g), according to the relative distance and relative speed between SV and TV2, as well as the deceleration of the preceding TV2, judge the possibility of a potential collision between SV and TV2 and the deceleration required for SV to avoid the collision. If SV needs strong decelerating (for example, a deceleration |a|>0.5 g) to avoid collision with TV2, SV triggers emergency braking (|a|>0.5 g) to achieve the goal of avoiding the collision with TV2 or reducing the severity of the collision.
When TV2 follows the obstructed preceding vehicle TV1 and has V2X conditions, when judging whether SV needs to initiate AEB braking, the V2V perspective perception capability can be used in this scene to perceive the status information of TV1 in advance, and predict in advance that TV2 may enter emergency braking following TV1 after a certain delay time. Then, based on the predictive calculation and judgment, the vehicle SV can take emergency measures in advance when necessary, and may even enter an emergency braking state before TV2.
AEB triggering judgment condition:
According to the calculation in the “prediction of the reaction behavior of the preceding vehicle TV2”, from the current moment to a certain moment t, the predicted distance between SV and TV2 is: d02=D002+d2−d.
When d02=0, it is the collision point between the ego vehicle SV and the preceding vehicle TV2. The condition for SV not to collide with TV2 is that when SV decelerates to stop, d02>0.
The time for vehicle SV to brake to stop is: Tstop=−V0/a+(Tdd+Tbd)
That is, when t=Tstop, d02>0 is the condition to avoid collision. That is: D002+d2(Tstop)−(V0*(Tdd+Tbd)−V0^2/(2*a))>0
From the above two formulas, the minimum value |amin| of the absolute value of the deceleration a of the ego vehicle SV can be calculated. If |amin|>A_aeb (for example 0.5 g or other set value), that is, the AEB function is triggered.
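A minimal closed-form sketch of this AEB trigger check is given below (Python, illustrative only; it assumes TV2 has already come to a stop by the time SV stops, so that d2(Tstop) equals D2stop, and the names and the handling of the case where no braking level suffices are assumptions):

```python
G = 9.81  # m/s^2

def aeb_required(d002, v0, d2_stop, t_dd, t_bd, a_aeb=0.5 * G):
    """Solve D002 + D2stop - Dstop(a) > 0 for the smallest required deceleration
    magnitude |amin| of SV, then compare it with the AEB threshold A_aeb."""
    margin = d002 + d2_stop - v0 * (t_dd + t_bd)   # braking distance available to SV
    if margin <= 0.0:
        return True                                # even very strong braking is predicted to be too late
    a_min = v0 ** 2 / (2.0 * margin)               # minimum required deceleration magnitude |amin|
    return a_min > a_aeb
```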
In this embodiment, through V2X (V2I, V2V) technology, the fusion of on-board perception and roadside perception information is realized, more accurate, more reliable, and all-weather environmental perception information is obtained, and the perception obstacles in key dangerous scenes are overcome. This V2X perception fusion technology serves the ADAS system to form an ADAS+ system, thereby obtaining ADAS functional algorithm technology with enhanced functionality and performance. The control decision functional algorithm technology of the present disclosure on the one hand considers the requirements of comfort, safety, and collision avoidance, and at the same time is applicable to ADAS and autonomous driving systems. The decision and control algorithm strategy of the vehicle can be calculated and optimized in real-time through vehicle motion trajectories. Before initiating emergency braking, when necessary, a braking pre-control strategy can be applied to improve system response speed, reduce braking delay, and improve collision avoidance performance.
In addition, the present disclosure also provides an advanced driver assistance system based on V2X perspective perception technology, which can execute the decision and control method described in the above embodiments.
The present disclosure also provides an intelligent networked vehicle, including the advanced driver assistance system based on V2X perspective perception technology described in the above embodiments;
an on-board perception unit and a roadside perception unit, which, through V2X association and fusion perception, provide target object information in an obstructed area for the advanced driver assistance system.
Those skilled in the art should understand that the purpose of the above description of the embodiments of the present disclosure is only to illustrate and explain the present disclosure, and is not intended to limit the embodiments of the present disclosure described here in any way.
What has been described above are the preferred embodiments of the present disclosure, and the description is exemplary rather than exhaustive, and is not intended to limit the embodiments disclosed here. Under the premise of not deviating from the scope and spirit of the embodiments described, many modifications and changes are obvious to those skilled in the art.