The present invention concerns the field of autonomous vehicles and more specifically computerized equipment intended to control autonomous vehicles.
A vehicle is classified as autonomous if it can be moved without the continuous intervention and oversight of a human operator. According to the United States Department of Transportation, this means that the automobile can operate without a driver intervening for steering, accelerating or braking. Nevertheless, the level of automation of the vehicle remains the most important element. The National Highway Traffic Safety Administration (the American administration responsible for highway traffic safety) thus defines five “levels” of automation:
Driverless vehicles operate by accumulating multiple items of information provided by cameras, sensors, geo-positioning devices (including radar), digital maps, programming and navigation systems, as well as data transmitted by other connected vehicles and networked infrastructures. The operating systems and the software then process all this information and provide coordination of the mechanical functions of the vehicle. These methods reproduce the infinite complexity of tasks carried out by a driver who is required, in order to drive properly, to concentrate on the road, the behavior of his vehicle as well as his own behavior.
The computer architecture of such vehicles must make it possible to manage the multitude of signals produced by sensors and outside sources of information and to process them to extract pertinent data from the signals, eliminating abnormal data and combining data to control the electromechanical members of the vehicle (steering, braking, engine speed, alarms, etc.).
Because of the context of usage, the computer architecture must guarantee absolute reliability, even in the event of an error on an electronic board, a failed sensor or a malfunction of the navigation software, or all three of these at the same time.
The mechanisms to ensure the robustness of the architectures include:
Different solutions of computer architectures intended for autonomous vehicles have been proposed in the prior art.
WO 2014044480 describes a method for operating an automotive vehicle in an automatic driving mode, comprising the steps of:
in a case where the automatic driving of the automotive vehicle is no longer guaranteed, to change over to the safe range (B), the automotive vehicle being guided by the actuator device into the safe range (B).
US 20050021201 describes a method and device for the exchanging and common processing of object data between sensors and a processing unit. According to this prior art solution, position information and/or speed information and/or other attributes (dimension, identification, references) of sensor objects and fusion objects are transmitted and processed.
US 20100104199 describes a method for detecting an available travel path for a host vehicle, by clear path detection by image analysis and detection of an object within an environment of the host vehicle. This solution includes camera-based monitoring, analysis of the image by path detection, analysis to determine a clear path of movement in the image, the monitoring of data from the sensor describing the object, the analysis of the data from the sensor for determining the impact of the object on the path.
U.S. Pat. No. 8,930,060 describes an environment analysis system from a plurality of sensors for detecting predetermined safety risks associated with a plurality of potential destination regions around a vehicle when the vehicle is moving on a road. The system selects one of the potential destination regions as a target area having a substantially lower safety risk. A path determination unit assembles a plurality of plausible paths between the vehicle and the target area, monitors the predetermined safety risks associated with a plurality of plausible paths, and selects one of the plausible paths having a substantially lower risk as a target path. An impact detector detects an impact between the vehicle and another object. A stability control is configured to orient the vehicle autonomously over the target path when the impact is detected.
EP 2865575 describes a driving assistance system comprising a prediction subsystem in a vehicle. The method comprises the steps consisting of accepting an environment representation. The calculation of a confidence estimate is related to the representation of the environment by applying the plausibility rules to the representation of the environment and by furnishing the confidence estimate as contribution for an evaluation of a prediction based on the representation of the environment.
The solutions of the prior art are not completely satisfactory because the proposed architectures involve a “linear” processing of data, coming from sensors and disparate sources, some of which are potentially erroneous or flawed. With the proposed architectures, the processing of such erroneous or doubtful data is deterministic and can lead to unexpected actions.
The solutions proposed in the prior art are not completely adapted to the very high safety constraints for steering autonomous vehicles.
The environment of the vehicle, including meteorological and atmospheric aspects among others, as well as the road environment, is replete with disturbances.
It includes numerous factors that are random and therefore unpredictable, and the safety constraints resulting from these environmental disturbances have an infinite number of variants. For example, meteorological conditions can disturb the sensors, but the context or the road situation can also put the algorithm in a position it cannot or does not know how to manage. The limits of a sensor are known, but the full set of situations in which the sensors and their intelligence will reach those limits is unknown.
The proposed solutions do not involve an intelligent decision stage based simultaneously on functional and dysfunctional safety, without human intervention.
In order to remedy these disadvantages, according to its most general meaning the invention concerns a system for steering an autonomous vehicle according to claim 1 and the dependent claims, as well as a steering method according to the method claim.
Compared to the known solutions, the system is distinguished by independent functional redundancies detailed in the following list, arbitrated by an additional decision module implementing the safety of the intended functionality (SOTIF) principles.
This arbitration takes into account three types of input information:
These safety principles are technically implemented by a rules base recorded in a computer memory. These rules model good practices, for example “stop to allow a pedestrian to pass” or “do not exceed the maximum authorized speed,” and associate decision-making parameters with them. Such rules are grouped, for example, within the ISO 26262 standard.
This rules base is utilized by a processor modifying the calculation of the risk level, and the consequence on the technical choices.
The system makes it possible to respond to the disadvantages of the prior art by a distributed architecture, with specialized computers assigned solely to processing data from sensors, computers of another type specifically assigned to the execution of computer programs for the determination of delegated driving information, and an additional computer constituting the arbitration module for deciding the selection of the said delegated driving information.
The decision of the arbitration module enables the safest result to be identified for any type of object perceived in the scene (status of a traffic light, position of an obstacle, location of the vehicle, distance relative to a pedestrian, maximum authorized speed on the road, etc.).
Any disturbances and anomalies concerning a sensor or a data source are therefore not propagated throughout the system. With the proposed architecture, the system has great flexibility and robustness with regard to local malfunctions.
The arbitration module can consist of a computer applying processing based on a mathematical logic rules base and artificial intelligence, or applying statistical processing (for example Monte Carlo, Gibbs sampling or Bayesian methods) or machine learning. This processing ensures both real-time processing and parallel task processing whose results are subsequently reinjected into the real-time processing.
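By way of illustration, the arbitration described above can be sketched as a simple scoring rule: each redundant module proposes an action with a confidence level, the rules base contributes a risk penalty, and the proposal with the best risk-weighted confidence wins, falling back to emergency braking when nothing is trustworthy. The field names and the scoring formula below are assumptions of this sketch, not the claimed implementation.

```python
def arbitrate(candidates):
    """Pick the safest proposal among redundant modules.

    Each candidate carries a proposed 'action', a 'confidence' in [0, 1]
    from its sensors/algorithms, and a 'risk' penalty in [0, 1] derived
    from the rules base. (Field names are illustrative assumptions.)
    """
    viable = [c for c in candidates if c["confidence"] > 0.0]
    if not viable:
        # No trustworthy proposal left: fall back to the refuge behaviour.
        return {"action": "emergency_braking", "confidence": 1.0, "risk": 0.0}
    # Confidence discounted by rule-based risk; the highest score wins.
    return max(viable, key=lambda c: c["confidence"] * (1.0 - c["risk"]))

candidates = [
    {"action": "keep_lane", "confidence": 0.9, "risk": 0.1},
    {"action": "change_lane", "confidence": 0.8, "risk": 0.5},
]
```

Here keep_lane wins (score 0.81 against 0.40), even though both proposals have a high raw confidence, because the rules base penalizes the riskier manoeuvre.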
Also disclosed is a method of steering an autonomous vehicle comprising:
The present invention will be better understood from the following detailed description of a non-limiting example of the invention, with reference to the appended drawings in which:
The computer architecture illustrated in
All the processing is declarative and non-deterministic: at any time, the items of information used and calculated are associated with confidence levels the value of which is only known during the execution of the programs.
Four robustness mechanisms are implemented:
The system implements the following technical choices:
In this way, the system of the autonomous vehicle tends to be more reliable by making maximum use of its technological and functional capabilities. However, it also becomes more tolerant of failures because it is capable of detecting them and safeguarding against them by continually adapting its behavior.
The first stage (5) comprises the modules (1 to 3) for processing signals from different sensors onboard the vehicle and the connected modules (4 to 6) receiving external data.
A plurality of sensors and sources detect the same object. The merging of these data makes it possible to confirm the perception.
The sources of the autonomous vehicle are a multiple base for detection of the environment. Each sensor and each source is associated with an item of information representative of the reliability and confidence level.
The detection results are then processed in order to be useable by the second stage: production of perception variables.
The hyper-perception stage (15) is broken down into two parts:
The “Production of perception variables” part, grouping together all the perception algorithms that interpret the detections from the sensors and other sources and calculate perception variables representative of an object.
The “Safe supervision” part that groups together a set of cross-tests on reliabilities, software and hardware errors, confidence levels, and algorithmic coherences. This all makes it possible to determine the most competitive object of perception, i.e. the object that is best in terms of representativity, confidence, reliability and integrity.
From these detection results and via numerous algorithms, perception variables are calculated. These variables will allow the system to describe the objects of the scene and thus to define a safe trajectory for the vehicle.
In order to be able to satisfy the safety methodology, an object perception variable should be given by at least two different algorithms. A multi-source merger, when possible, should also be used to produce these variables.
When combined in an intelligent algorithm, all the merger methods involving a plurality of sensors or other sources can improve the different perception variables. All the object perception variables are then cross checked to test their validity and the confidence level that can be assigned to them. This is the third step.
At this stage, a plurality of sets of variables representative of the same object have been calculated. They must therefore be compared to each other in order to be able to select the “best” one or ones.
This selection is carried out in four steps:
The computer executes processing that synthesizes all the results and decides on the best object to send to the planning. This involves answering the question: What are the best objects in terms of coherence, reliability and confidence?
This second stage is duplicated from the hardware point of view (computers and communication bus) as well as from the software point of view.
It therefore comprises two independent computers, receiving the signals from the sensors of the first stage by means of two different communication buses.
This second stage transmits the same data twice to the third stage.
The third hyper-planning stage (35) comprises two planning modules (31, 32) for steering the autonomous vehicle.
The planning process is broken down into three different parts:
This part receives both series of signals from the second stage and decides on the hardware and software reliability of the two series of signals in order to select the most pertinent series of signals.
A plurality of algorithms calculates the trajectories that the autonomous vehicle can take. Each algorithm calculates one type of trajectory specific to the perception objects that it considers. However, it can calculate one or more trajectories of the same type depending on the number of paths that the vehicle can potentially take. For example, if the vehicle is moving over a two-lane road segment, the planning system can calculate a trajectory for each lane.
In order to satisfy the safety methodology utilized, the algorithms calculating trajectories must send the potential trajectory(ies) accompanied by the confidence level and intrinsic reliability associated therewith. Another specific aspect of the safety methodology is to use a multi-perception merger algorithm in order to diversify even more the trajectory calculation means.
At this stage, multiple trajectories have been calculated. They must be compared to each other and to the road context (rules of the road, history, infrastructures, obstacles, navigation) in order to be prioritized.
This prioritization takes place in four steps:
This selection is influenced by the history of the trajectory followed by the autonomous vehicle, traffic, types of infrastructure, following good road safety practices, rules of the road and the criticality of the potential risks associated with each trajectory, such as those defined by the standard ISO 26262, for example. This choice involves the hyper planning of the refuge mode.
The behavioral choice algorithm is the last layer of intelligence that analyzes all the possible strategies and opts for the most secure and the most “comfortable” one. It will therefore choose the most suitable trajectory for the vehicle and the attendant speed.
The refuge hyper-planning module (32) calculates a refuge trajectory in order to ensure all feasible fallback possibilities in case of emergency. This trajectory is calculated from perception objects determined in accordance with the hyper-perception and hyper-planning methodology, but which are considered in this case for an alternative in refuge mode.
The second embodiment concerns a particular case for determining the desired path for the vehicle.
The example concerns an autonomous vehicle that must be classified as “OICA” level 4 or 5 (International Organization of Automobile Manufacturers), i.e. a level of autonomy where the driver is out of the loop. The system alone, with no intervention from the driver, must steer and decide the movements of the car over any infrastructure and in any environment.
The following description concerns the safe functional architecture of the VEDECOM autonomous vehicle “over-system,” designed above an existing vehicle platform, to increase its operational safety and make it more reliable, but also to ensure the integrity of the operating information and decisions made by the intelligence of this “over-system.”
A safe architecture of the autonomous vehicle has been prepared according to the following four robustness mechanisms:
At the perception level, a generic scheme has been prepared from these principles. This is illustrated in
The perception of the path is provided by four algorithms:
The function of Safe perception is:
1) To construct 4 desired paths from perception information from 4 sources (GPS-RTK, SLAM, Marking, Tracking).
2) To select the best information given by these four algorithms.
3) In manual mode, to prevent switching over to auto mode if the paths given by these algorithms do not have a sufficient index of confidence.
4) In autonomous mode, to request emergency braking, associated with a request to regain control, if the paths given by these algorithms do not have a sufficient index of confidence OR if the paths given by the four algorithms are incoherent with each other.
It comprises sensors (40, 41) constituting sources of information.
For example, four sources can be distinguished:
These input functions are handled by the system (functions related to equipment manufacturers or technological components). The outputs of these four sources are therefore very heterogeneous:
From the functions and heterogeneous outputs from the “sources” blocks (40, 41), one or more computers apply perception algorithms (42, 43) to give a homogeneous output of the object: in the example described, the object is the desired path. The desired path is given by a vector (a, b, c) corresponding to the polynomial y = ax² + bx + c of the path in the ego-vehicle reference.
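The (a, b, c) vector representation can be illustrated minimally: evaluating the polynomial gives the lateral offset of the desired path at any longitudinal distance in the ego-vehicle frame. The function name is illustrative only.

```python
def path_y(coeffs, x):
    """Lateral offset y (metres, ego-vehicle frame) of the desired path at
    longitudinal distance x, from its (a, b, c) vector: y = a*x**2 + b*x + c."""
    a, b, c = coeffs
    return a * x**2 + b * x + c

# A straight path offset 0.5 m from the ego axis:
straight = (0.0, 0.0, 0.5)
```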
In this part, a quick description of each perception algorithm is provided.
The “path” perception algorithm (42) by tracking utilizes the position x,y of the shield vehicle. The strong assumption is therefore that the “shield” vehicle is in the desired path of the autonomous vehicle.
The path is constructed in the following way:
1) Retrieval of the position of the tracked vehicle in the vehicle reference,
2) Positioning of the vehicle in a sliding reference,
3) Positioning of the tracked vehicle in the sliding reference,
4) Placing in memory the history (about six seconds) of the position of the tracked vehicle in the sliding reference: this history constitutes a dynamic cartography: vector [xy]Rsliding,
5) Location of the vehicle in the dynamic cartography,
6) Determination of the local trajectory in the dynamic cartography,
7) Switching over from the local trajectory to the vehicle reference: vector [xy]Rego-vehicle,
8) Polynomial interpolation of the vector [xy].
The output is therefore a “path” variable defined by the three variables (a,b,c) of the polynomial interpolation thereof.
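The last step above, the polynomial interpolation of the buffered [xy] history, can be sketched as an ordinary least-squares fit of y = ax² + bx + c. This pure-Python version of the fit is an illustration under that assumption, not the patent's implementation.

```python
def fit_path(history):
    """Least-squares fit of y = a*x^2 + b*x + c to buffered (x, y)
    positions of the tracked vehicle; returns the (a, b, c) vector."""
    # Normal equations M @ [a, b, c]^T = v for the columns (x^2, x, 1).
    s = [sum(x**k for x, _ in history) for k in range(5)]      # sums of x^0..x^4
    t = [sum(y * x**k for x, y in history) for k in range(3)]  # sums of y*x^0..x^2
    M = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    v = [t[2], t[1], t[0]]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for col in range(i, 3):
                M[r][col] -= f * M[i][col]
            v[r] -= f * v[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coeffs[i] = (v[i] - sum(M[i][j] * coeffs[j] for j in range(i + 1, 3))) / M[i][i]
    return tuple(coeffs)

# Example: positions sampled from an exactly quadratic trajectory.
history = [(float(x), 0.1 * x * x + 0.5) for x in range(6)]
a, b, c = fit_path(history)
```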
Indeed, the marking detection algorithm (43) already provides a second-degree polynomial for the white line located to the right and to the left of the vehicle:
Right side: y_right = a_right·x² + b_right·x + c_right
Left side: y_left = a_left·x² + b_left·x + c_left
The polynomial of the path is therefore simply the coefficient-by-coefficient average of the two polynomials: a = (a_right + a_left)/2, b = (b_right + b_left)/2, c = (c_right + c_left)/2.
In the event of loss of one of the two markings by the perception algorithm (identified by a drop in the confidence level received by the safe-perception), an estimate is made by considering the width of the road (the “Lane Width” cartographic input) and an assumed left/right symmetry of form. Thus, for loss of the right marking:
The path perception algorithm by GPS-RTK using the data from the sensor 3 is based on:
S_i = S_(i−1) + sqrt(dx_p² + dy_p²)
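The curvilinear abscissa above accumulates the Euclidean increments between consecutive recorded map points; a minimal sketch:

```python
import math

def curvilinear_abscissa(points):
    """Cumulative arc length S_i = S_(i-1) + sqrt(dx^2 + dy^2) along a
    recorded list of (x, y) map points."""
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    return s
```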
The cartography is produced upstream simply by rolling along the desired path and recording the x,y values given by the GPS. The strong assumption is therefore that the position given by the GPS is always of quality (<20 cm) (therefore RTK correction signal OK), which is not always the case.
Starting from this GPS position, the following steps to construct the path are:
Locating the vehicle in the IMU-RTK cartography.
Construction of the “path” trajectory, absolute reference from the map.
Change of the trajectory in the vehicle reference.
Polynomial interpolation.
The path perception algorithm by SLAM utilizing the data from the sensor 4 relies on the same principle as the GPS-RTK. The only difference pertains to the location reference: in the case of the SLAM, the x,y position and yaw, and therefore the associated cartography, are given in the SLAM reference frame and not in a GPS-type absolute reference.
The confidence indicators are calculated by algorithms (45).
The internal confidence only uses input or output information from the path perception algorithm by tracking; therefore here:
The “tracked target no longer exists” condition is given by reading the identifier. This identifier is equal to “−1” when no object is provided by the tracking function.
The “change of target” condition is normally identified by a change of the identifier to “−1.” Added to this are tests on the discontinuity of the position returned. (For example, if an object is at x=5 m, then at x=30 m in the next step, it can then be considered that it is not the same object). The thresholds of discontinuities have been set at 3 m per sampling period Te (Te=50 ms) in x, and 0.8 m in y by Te.
The “vehicle in the axis” condition is set at 1 if the longitudinal position x of the tracked vehicle is between 1 m and 50 m of the ego-vehicle, and if the lateral position thereof is −1.5 m<y<1.5 m.
To avoid following a fixed target, an additional activation condition consists of verifying that the absolute speed of the object is not zero, particularly when the speed of the ego-vehicle is not.
Ideally, it should be verified that the object in question is characterized as a vehicle (and not a pedestrian).
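Taken together, the internal-confidence conditions of the path-by-tracking algorithm can be sketched as a single Boolean function; the tuple layout of the tracked-object data is an assumption of this sketch.

```python
def tracking_internal_confidence(obj, prev, ego_speed):
    """Internal confidence (0 or 1) of the path-by-tracking algorithm.

    `obj` and `prev` are (identifier, x, y, absolute_speed) tuples for the
    current and previous sampling period Te (layout is an assumption)."""
    oid, x, y, speed = obj
    if oid == -1:
        return 0                 # tracked target no longer exists
    if prev is not None:
        pid, px, py, _ = prev
        # Change of target, or discontinuity beyond 3 m in x / 0.8 m in y per Te.
        if pid != oid or abs(x - px) > 3.0 or abs(y - py) > 0.8:
            return 0
    if not (1.0 <= x <= 50.0 and -1.5 < y < 1.5):
        return 0                 # tracked vehicle not in the ego-vehicle axis
    if ego_speed > 0.0 and speed == 0.0:
        return 0                 # never follow a fixed target
    return 1
```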
The “path” confidence by the marking is simply calculated from the 2 confidences of the 2 markings.
Path Confidence=1 if (Right MarkingConfidence>threshold OR Left MarkingConfidence>threshold)
Path Confidence=0 if (Right MarkingConfidence<threshold AND Left MarkingConfidence<threshold)
Indeed, as previously mentioned, in case of loss of one of the two markings by the perception algorithm, an estimate is made by considering the width of the road (the “Lane Width” cartographic input) and an assumed left/right symmetry of form. Therefore, a single marking is sufficient.
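A sketch of the resulting marking-based path construction, including the single-marking fallback, follows. The lane width value and the sign convention of the half-width offset (y positive to the left) are assumptions of this illustration, since the document does not state the exact fallback formula.

```python
LANE_WIDTH = 3.5  # metres, "Lane Width" cartographic input (illustrative value)

def path_from_markings(left, right, left_conf, right_conf, threshold=0.5):
    """Centre path (a, b, c) from the left/right marking polynomials.

    With both markings, each coefficient is averaged. If one marking is
    lost (confidence below threshold), the surviving polynomial is kept
    and shifted laterally by half the lane width (offset sign convention
    is an assumption of this sketch). Returns None if both are lost."""
    if left_conf > threshold and right_conf > threshold:
        return tuple((l + r) / 2.0 for l, r in zip(left, right))
    if left_conf > threshold:               # right marking lost
        a, b, c = left
        return (a, b, c - LANE_WIDTH / 2.0)
    if right_conf > threshold:              # left marking lost
        a, b, c = right
        return (a, b, c + LANE_WIDTH / 2.0)
    return None                             # no usable marking: confidence 0
```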
The SLAM confidence is a Boolean that drops definitively to 0 when the confidence in the location of the SLAM drops below a certain threshold. Indeed, this VEDECOM SLAM is incapable of calculating a location once the SLAM algorithm is “lost.”
Moreover, the VEDECOM SLAM cannot always be activated at the start of the autonomous vehicle's route. This condition should therefore only be activated once the SLAM has already been through an initialization phase (identified by a specific point on the map).
A condition related to the cartography has been added: in order for the SLAM to have a non-zero confidence, the following condition is added: the vehicle must be no more than 4 meters from the path given by the SLAM. To do this, the LaneShift of the vehicle is retrieved, i.e. the variable “c” of the polynomial (intercept) of the “path” perception given by the SLAM.
Like the SLAM, the confidence is a product of:
The external confidence is related to the environmental conditions.
The environmental conditions pertain to the following conditions:
In some cases the meteorological conditions are not taken into account: in general, demonstrations are suspended in the event of poor conditions.
The geographical conditions are taken into account in the topological cartography: in a very generic way, for each planned geographical portion in the route of the autonomous vehicle, an external confidence (Boolean 0 or 1) is provided, irrespective of the cause (tunnel, steep slope, etc.). There are therefore four columns in the topological cartography:
Tracking Mode external confidence
Marking Mode external confidence
SLAM Mode external confidence
IMU-RTK Mode external confidence
Thus, when entering a tunnel, for example, where positioning by GPS-RTK will not function, the external confidence is set to 0 before the tunnel entrance.
In general, in demonstrations when the vehicle drives several times over a portion of the route and a mode never reaches an internal confidence of 1, the external confidence is forced to 0 on this mode several meters before: this avoids changing to a mode that risks being lost shortly afterwards.
The robustness is the lesser of the internal confidence and the external watchdog confidence.
The reliability of each sensor is derived from a self-diagnostic test of the sensor, currently provided by the sensor suppliers. For example, the Continental camera provides at the output an “extended qualifier” that takes the following states:
A reliability calculation (46) is also performed. The reliability of the sensor is considered OK (camera reliability = 1) only if the extended qualifier equals 0.
Thus, reliability A (reliability of the path by tracking) equals 1 (status OK) if:
(LIDAR sensor reliability OK) AND (Test Watchdog OK)
reliability B (reliability of the path by marking) equals 1 (status OK) if:
(Camera sensor reliability OK) AND (Test Watchdog OK)
reliability C (reliability of the path by SLAM) equals 1 (status OK) if:
(SLAM LIDAR sensor reliability OK) AND (Test Watchdog OK)
reliability D (reliability of the path by IMU-RTK) equals 1 (status OK) if:
(GPS sensor reliability=1 and IMU reliability=1) AND (Test Watchdog OK)
The watchdog test involves verifying that the increment of the watchdog (information coming from the upstream perception calculator) is correctly performed.
The reliability of each algorithm is related to the reliability of each sensor source, associated with a test.
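The four reliability Booleans above can be sketched as follows; the dictionary keys are illustrative names for the source sensors.

```python
def path_reliabilities(sensor_ok, watchdog_ok):
    """Reliability (1 = status OK) of the four path algorithms, derived
    from the self-diagnostics of their source sensors and the watchdog
    test. `sensor_ok` maps illustrative sensor names to booleans."""
    return {
        "A_tracking": int(sensor_ok["lidar"] and watchdog_ok),
        "B_marking":  int(sensor_ok["camera"] and watchdog_ok),
        "C_slam":     int(sensor_ok["slam_lidar"] and watchdog_ok),
        "D_gps_rtk":  int(sensor_ok["gps"] and sensor_ok["imu"] and watchdog_ok),
    }

all_ok = {"lidar": True, "camera": True, "slam_lidar": True, "gps": True, "imu": True}
```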
The coherence function (45) includes two types of tests:
Intrinsic coherence
Coherence by comparison with the other objects.
An objective of intrinsic coherence is to verify the pertinence of the object itself. For example, an intrinsic coherence test of an obstacle verifies that the object seen is well within the visible zone of the sensor.
One possible test would be to verify that over the last N seconds, the path given by an algorithm is close to the path of the vehicle history. For example, the LaneShift (variable “c” of the polynomial of the path) of the algorithm can be checked and verified that it is close to 0 over the last 5 seconds.
The objective is to output a Boolean indicating if the “path” given by one algorithm is coherent with the path given by another one. With 4 paths given by 4 algorithms A, B, C, D, there are therefore 6 Booleans to be calculated: AB, AC, AD, BC, BD, CD.
The comparison of two paths is done roughly by comparing the courses of the algorithms. Specifically, the comparison is achieved as follows:
1) For the “path” polynomial given by each algorithm, the desired course is calculated for three different time horizons (0.5 s, 1 s, 3 s).
The desired course is equal to atan(desired LaneShift/distance to the defined time horizon).
2) Then the difference of the three courses given by two different “path” algorithms is calculated, and the differences are averaged.
3) The result is filtered by a low-pass filter set at 2 seconds (which represents an average over about 2 seconds), then divided by a “CourseCoherence_deg” reference threshold with a default parameter of 10°.
4) If the result is more than 1, the two paths are considered non-coherent.
5) This test is performed 6 times, for the 6 pairs of possible paths AB, AC, AD, BC, BD, CD.
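A simplified sketch of this coherence test follows. Two assumptions of the illustration depart from the text: the horizon distance is taken as speed × horizon, and the 2-second low-pass filtering is omitted.

```python
import math

def courses(path, speed, horizons=(0.5, 1.0, 3.0)):
    """Desired course (rad) of a path polynomial (a, b, c) at each time
    horizon: atan(lateral offset at the horizon / distance to the horizon).
    The horizon distance is taken as speed * horizon (an assumption)."""
    a, b, c = path
    out = []
    for h in horizons:
        d = max(speed * h, 1e-6)           # avoid division by zero at standstill
        out.append(math.atan((a * d * d + b * d + c) / d))
    return out

def paths_coherent(path1, path2, speed, threshold_deg=10.0):
    """Boolean coherence of two paths: mean absolute course difference over
    the three horizons, compared with the CourseCoherence_deg threshold
    (the low-pass filtering of the original is omitted here)."""
    diffs = [abs(c1 - c2) for c1, c2 in zip(courses(path1, speed),
                                            courses(path2, speed))]
    mean_deg = math.degrees(sum(diffs) / len(diffs))
    return mean_deg / threshold_deg <= 1.0
```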
The decision block (47) performs the final choice of the path, as a function of the confidences, coherences, reliability indexes and performance index. In the event of a failure, a confidence index that is too low, or an incoherence between the actual path and the proposed choices, emergency braking can be requested.
The general principle is as follows:
If Default Ranking=[D, A, C, B], the four algorithms are then classified by order of priority: (1: D: GPS-RTK, 2: A: Tracking, 3: C: SLAM, 4: B: Marking)
At the input of this block (47), there is:
The internal/external confidence in the 4 algorithms (A: Tracking, B: Marking, C: SLAM, D: GPS-RTK), i.e. c′A, c′B, c′C, c′D
The reliability index of the 4 algorithms (A: Tracking, B: Marking, C: SLAM, D: GPS-RTK), i.e. fA,fB,fC,fD
The robustness c″ is the lesser of the two, therefore:
c″X=min(c′X,fX).
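As a minimal illustration, the robustness of each of the four algorithms is the element-wise minimum of the two indexes:

```python
def robustness(c_prime, f):
    """Robustness c''_X = min(c'_X, f_X) for each algorithm X in A..D,
    from the confidence indexes c' and the reliability indexes f."""
    return {x: min(c_prime[x], f[x]) for x in c_prime}
```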
The expertise rules consist of preliminary rules imposed from the VEDECOM expertise, in this case, on the path construction algorithms.
Thus, it is known from experience that:
Since for the moment autonomous vehicles are being used in “shuttle” mode, experience achieved by traveling the route with the four modes makes it possible to know which is the most overall efficient mode for a given route.
Also, expertise shows that it is always better to give priority to a particular algorithm based on the history recorded in real time in an information base (48) (even if that means abandoning the current algorithm in order to return to the priority algorithm). Other practitioners, however, prefer to minimize algorithm transitions (which can cause micro-movements of the steering wheel detrimental to safe and comfortable performance) by retaining the current algorithm as long as possible (even if the better-performing algorithm becomes usable again).
Two parameters related to the expertise have therefore been constructed:
The “Transfer Algo Number→Priority Number” will change the numbering of the confidence and coherence variables: referenced by default as (A: Tracking, B: Marking, C: SLAM, D: GPS-RTK), these variables are, via this transfer function, numbered as (1: Highest priority algorithm, 2: 2nd priority algorithm, 3: 3rd highest priority algorithm, 4: Lowest priority algorithm).
For example, if “Default Ranking”=[D, B, A, C], then the confidence “A” becomes the confidence “3,” and the B-A coherence becomes the 2-3 coherence.
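The renumbering performed by the transfer function can be sketched as a simple mapping from the Default Ranking to priority indexes:

```python
def to_priority_numbering(default_ranking, values):
    """Renumber per-algorithm variables (keyed 'A'..'D') into priority
    numbering 1..4 according to the Default Ranking; e.g. with ranking
    ['D', 'B', 'A', 'C'], the value of 'A' becomes the value of 3."""
    return {rank + 1: values[algo] for rank, algo in enumerate(default_ranking)}

conf = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 1.0}
by_priority = to_priority_numbering(["D", "B", "A", "C"], conf)
```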
The sequential logic is a Stateflow system having the following inputs:
The two outputs are:
In manual mode, the objective of the function will be to determine the best algorithm possible when the transition is going to be made to autonomous mode.
More importantly, however, the function must prevent the change to autonomous mode if no algorithm has a sufficient confidence index (not zero here).
In general, this diagram favors the return to mode 1, i.e. the choice of the priority algorithm. Only the confidence indexes are taken into account. The coherences are not, because in the case of manual mode, and unlike autonomous mode, a poor coherence between two paths will not have an impact (such as swerving).
Thus, a priority 3 algorithm will only be selected if the confidence of the algorithms 1 and 2 are zero.
If all the algorithms have a zero confidence, then there is a change to the Safety: EmergencyBraking=1 mode. However, there will not specifically be emergency braking on the vehicle (because it is in manual mode), but only a prevention of changing to autonomous mode (If EmergencyBraking=1 AND If manual mode, then change to autonomous mode is prohibited).
Example considering “Default ranking” = [C, D, A, B], i.e. [SLAM, GPS-RTK, Tracking, Marking].
MODE_AUTO_1 represents the schema when the current algorithm is the priority algorithm (for example SLAM (C) if “Default ranking”=[C, D, A, B]).
In the example:
IF the confidence of the SLAM=1, it remains in SLAM
IF the confidence of the SLAM changes to 0, another mode (2, 3, 4) will be selected:
A change to mode 2 is made (D: GPS-RTK) if the confidence of the path in GPS-RTK equals 1 AND if the path given by the SLAM and that of the GPS-RTK are coherent (coherence_1_2=1)
ELSE: A change is made directly from mode 1 to mode 3 (A: Tracking), IF it is not possible to change to GPS-RTK (cf. condition in the previous sentence) AND if the confidence of the path in Tracking equals 1 AND if the path given by the SLAM and the path from the Tracking are coherent
ELSE: A change is made directly from mode 1 to mode 4 (B: Marking), IF it is not possible to change to GPS-RTK AND IF it is not possible to change to Tracking AND if the confidence of the path by Marking equals 1 AND if the path given by the SLAM and the one from the Marking are coherent
ELSE: a change is made to emergency braking.
It is assumed in the example that a change is made to mode 2 (therefore D: GPS-RTK)
MODE_AUTO_2 represents the schema when the current algorithm is the second priority algorithm (therefore GPS-RTK if “Default ranking”=[C, D, A, B]).
There are two situations according to the “AlgoPrio1” parameterization.
IF “AlgoPrio1=0” AND IF the confidence of the path by GPS-RTK=1, it remains in GPS-RTK.
IF “AlgoPrio1=1” AND IF the confidence of the path by GPS-RTK=1, a change is still made to priority 1 mode (therefore returned to SLAM) IF confidence of the SLAM=1 AND if the path given by the SLAM and the one from GPS-RTK are coherent (coherence_1_2=1).
In the following, the same principle is used as the one previously given.
IF the confidence of the GPS-RTK changes to 0, another mode (3, 4) will be selected:
A change is made from mode 2 to mode 3 (A: Tracking) IF the confidence of the path in Tracking equals 1 AND if the path given by the GPS-RTK and the path from the Tracking are coherent
ELSE: a change is made directly from mode 2 to mode 4 (B: Marking), if it is not possible to change to Tracking AND if the confidence of the path by Marking equals 1 AND if the path given by the GPS-RTK and the path from Marking are coherent
ELSE a change is made to emergency braking.
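The fallback logic of the sequential diagram can be sketched as follows. This simplified version always tries candidates in priority order (which corresponds to the AlgoPrio1 = 1 behaviour of returning to the priority algorithm) and is an illustration, not the Stateflow system itself.

```python
def next_mode(current, conf, coherent):
    """Choose the next (priority-numbered) algorithm in autonomous mode.

    `conf` maps modes 1..4 to their 0/1 confidence; `coherent[(i, j)]`
    (with i < j) is the pairwise path-coherence Boolean. If the current
    algorithm keeps confidence 1, it is kept; otherwise candidates are
    tried in priority order, requiring confidence 1 AND coherence with
    the current path; if none qualifies, 0 (emergency braking) is returned."""
    if conf[current] == 1:
        return current
    for mode in range(1, 5):
        if mode == current:
            continue
        pair = (min(current, mode), max(current, mode))
        if conf[mode] == 1 and coherent[pair] == 1:
            return mode
    return 0  # Safety: EmergencyBraking = 1

coherences = {(1, 2): 1, (1, 3): 1, (1, 4): 0, (2, 3): 1, (2, 4): 1, (3, 4): 1}
```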
In general, the choice of the path is based on a sequential diagram based on:
The Transfer Priority Number→Algo Number function just makes the transfer between ranking by priority (1: the highest priority Algo, 2: the second highest priority Algo, 3: the third highest priority Algo, 4: the lowest priority algorithm) and the default ranking (A: Tracking, B: Marking, C: SLAM, D: GPS-RTK).
Thus, if “Default ranking”=[D, B, A, C] and the sequential logic block has chosen the third highest priority algorithm, then the algorithm chosen is (A: Tracking).
Number | Date | Country | Kind |
---|---|---|---|
1657337 | Jul 2016 | FR | national |
This application is the US National Stage under 35 USC § 371 of International App. No. PCT/FR2017/052049 filed Jul. 25, 2017, which in turn claims the priority of French application 1657337 filed on Jul. 29, 2016, the content of which (text, drawings and claims) is incorporated here by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2017/052049 | 7/25/2017 | WO | 00 |