The present invention lies in the field of the visualization of data in augmented reality in 3D space.
The invention is presented here, by way of illustration and without limitation, in the field of non-destructive testing with an ultrasonic sensor. It will be clearly apparent to those skilled in the art, on the basis of the description of the invention, that the principle of the invention may be applied to other types of sensors in order to visualize data related to the sensor.
Ultrasonic non-destructive testing is a non-invasive method for detecting defects in a part, based on the emission of ultrasound and the detection of its reflections at the acoustic interfaces that are encountered.
A suitable sensor emits ultrasound at a frequency (generally between 500 kHz and 100 MHz) that is chosen depending on the nature of the part to be tested. The sensor must be in direct contact with the part so that the propagated waves are not attenuated by the acoustic impedance mismatch of any air between the point of emission of the sensor and the part.
The waves are reflected from the acoustic interfaces that are encountered: contours of the part, internal defects.
The sensor, placed in contact with the part to be tested, intercepts the waves re-emitted by a potential defect.
The detected waves are converted into signals by an electronic assembly associated with the sensor. Software combines these signals to form an image of the inside of the part. Analyzing the images makes it possible to discriminate echoes caused by a defect from those related to the geometry of the part.
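By way of illustration only, and not forming part of the invention as such, the assembly of such signals into an image may be sketched as the stacking of successive A-scans into a B-scan; all names and parameters below are illustrative assumptions.

```python
import numpy as np

def assemble_bscan(a_scans):
    """Stack successive A-scans (1D echo-amplitude-vs-time signals)
    into a 2D B-scan image: one column per sensor position.
    `a_scans` is a list of equal-length 1D numpy arrays."""
    # Each column of the image is the rectified echo amplitude at one position.
    return np.column_stack([np.abs(s) for s in a_scans])

# Usage with synthetic data: 200 sensor positions, 1024 time samples each.
rng = np.random.default_rng(0)
image = assemble_bscan([rng.standard_normal(1024) for _ in range(200)])
print(image.shape)  # (1024, 200): depth (time of flight) x scan position
```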
In the field of non-destructive testing, an operator scans the sensor over an entire predefined examination area of the part to be tested. It is vital that the entire area is scanned by the sensor. The set of coordinates of the point of emission of the signal is a crucial element for verifying that the entire area has been scanned correctly. Furthermore, in addition to knowing that the entire area has been scanned, it is advantageous for the operator to detect, during the test, whether an area deserves special attention.
The technical problem that arises is that of visualizing data positioned in 3D in augmented reality, while at the same time giving the user the impression of seeing inside the part. Indeed, a 3D surface displayed as a hologram appears to float in space, even if its position has been defined precisely in relation to the 3D volume of the part. This means that a simple hologram display is not enough to give the impression of depth and allow the brain to interpret the data as though they were inside the part.
The invention aims to overcome all or some of the problems cited above by proposing a visualization method and device that make it possible to present, in real time, the acquired data, that is to say the spatialized signals, in the form of a holographic 3D surface positioned in the 3D volume in which they were acquired.
To this end, one subject of the invention is a method for visualizing, in real time, a signal for the non-destructive testing of a mechanical part, the signal being emitted by a non-destructive testing device comprising:
Advantageously, the orientation of the cut-out is related to the location of the augmented reality visualization device.
Advantageously, the holographic 3D representations are visualized in transparency.
Advantageously, the step of determining the intersection surface comprises the following steps:
Advantageously, the visualization method according to the invention comprises, beforehand, a step of calibrating the augmented reality visualization device in the reference coordinate system.
Advantageously, the visualization method according to the invention comprises, prior to the step of calibrating the augmented reality visualization device in the reference coordinate system, a step of calibrating the non-destructive testing device.
The invention also relates to a device for visualizing, in real time, a signal for the non-destructive testing of a mechanical part, the device comprising a non-destructive testing device comprising:
Advantageously, the testing device according to the invention furthermore comprises a pointing device comprising a tip and connected fixedly to a second rigid body, the pointing device being able to determine the position of points on a surface.
The invention also relates to a computer program comprising instructions that cause the device according to the invention to carry out the steps of the method according to the invention.
The invention also relates to a computer-readable recording medium on which said computer program is recorded.
The invention will be better understood and other advantages will become apparent on reading the detailed description of an embodiment given by way of example, which description is illustrated by the appended drawing, in which:
For the sake of clarity, the same elements bear the same reference signs in the various figures. For a better view and for the sake of greater understanding, the elements are not always shown to scale. The terms user, operator and tester are used to designate the natural person who carries out the non-destructive test, that is to say who manipulates and scans the sensor (and sensor holder) over the mechanical part.
The object of the invention is based on a visualization device and method that allow the user to visualize data (ultrasonic signals) positioned in 3D in augmented reality, giving the user the impression of seeing inside the part. Augmented reality consists in superimposing virtual elements on the real world. The device notably comprises an optical positioning system and an augmented reality visualization device. It is then necessary to register the two 3D environments (the real 3D scene and the 3D representation of the data) by applying augmented reality rendering techniques such as transparency, occlusion and cutting planes forming a recess in the material. The aim of the invention is to allow the tester to be completely immersed in the part and thereby focus on the non-destructive test signals positioned precisely in the geometry.
The optical motion tracking system 1 is associated with an orthonormal coordinate system $R_0 = (O, \vec{u_0}, \vec{v_0}, \vec{n_0})$, where $O$ is the origin of the coordinate system and $\vec{u_0}$, $\vec{v_0}$, $\vec{n_0}$ are mutually orthogonal normalized vectors.
The optical motion tracking system 1 determines the Cartesian coordinates and the orientation of a rigid body in the orthonormal coordinate system of the optical motion tracking system 1.
The optical motion tracking system 1 comprises at least two cameras and one or more infrared emitters. Other types of optical system may be used within the scope of the invention, for example a laser-based optical system and/or an optical system with non-3D dot-type markers. It is important to note that the system 1 may be a passive (optical or non-optical) motion tracking system.
The non-destructive testing device 10 comprises a first rigid body 2 connected to a probe, or sensor, 3 and a sensor holder 8. The sensor 3 is integral with the sensor holder 8 connected fixedly to the first rigid body 2. The first rigid body 2, the sensor holder 8 and the sensor 3 are connected fixedly and form an indivisible assembly during the non-destructive test. The first rigid body 2 comprises at least three spherical infrared-reflecting targets located at distinct positions. The first rigid body 2 is associated with an orthonormal coordinate system $R_C = (C, \vec{u_C}, \vec{v_C}, \vec{n_C})$, where $C$ is the origin of the coordinate system and $\vec{u_C}$, $\vec{v_C}$, $\vec{n_C}$ are mutually orthogonal normalized vectors.
In one preferred embodiment, the first rigid body 2 comprises six spherical targets. The sensor 3 is for example a single-element ultrasonic probe. It comprises an emitting and receiving surface, referred to as the active surface, 31. The active surface 31 is a rectangle with a flat surface. As a variant, the sensor 3 is of another type, for example an eddy current probe. Generally speaking, an active surface is any surface of a non-destructive testing sensor that emits or receives physical signals. For example, in the case of a single-element ultrasonic sensor in contact, this corresponds to the surface of the piezoelectric elements. In the case of a single-element ultrasonic sensor with a "Plexiglas" shoe, this corresponds to the surface of the shoe through which the ultrasonic signals are emitted.
The testing device 10 comprises an augmented reality visualization device 16 facing the mechanical part 7, to which an augmented reality coordinate system (RA) is tied. It is through this visualization device 16 that the operator sees the real mechanical part, and it is on this visualization device 16 that the holographic 3D representations (described in detail later) are displayed, superimposed on the view of the mechanical part on the visualization device 16.
These holographic 3D representations may form occlusions of the mechanical part 7 in the sense that they are fully or partially superimposed on the real mechanical part 7. In particular, a holographic 3D representation of the mechanical part may be superimposed on the real mechanical part 7, and thus form an occlusion.
An occlusion may consist of the envelope of the virtual representations of the mechanical part to be tested and/or of the sensor and/or of the sensor holder.
It is possible to add information about the occlusion, such as areas of coverage or welds, and also to cut out this occlusion in order to reveal signals.
The testing device 10 comprises a computer 6 configured to:
The term occlusion means concealing real objects behind virtual objects. Occlusion occurs when an object in a 3D space blocks the view of another object. In augmented reality, computer-generated objects are placed in a real scene so as to provide additional information or change the nature of real objects. Thus, virtual objects and the real scene have to be perfectly aligned in order to maintain high levels of realism and allow objects to behave as they would under normal conditions. In particular, in the context of the invention, a holographic representation of the mechanical part 7 is superimposed on the real part 7, and thus constitutes a total or partial occlusion.
The cut-out of the occlusion corresponds to an intersection between a 3D surface, for example an ellipsoid, and the signal visualization surface.
The computer 6 makes it possible notably to combine the signals received by the sensor in order to form an image of the inside of the part.
And the augmented reality visualization device 16 is configured to display:
The testing device 10 may be calibrated using a method known to those skilled in the art, for example as disclosed in document FR 3 087 254 B1. Hereinafter, and by way of example and without limiting the invention, the testing device 10 is calibrated using another method.
The testing device 10 may comprise a calibration block and, in our calibration example, the computer may be configured to carry out the following steps that will be explained below:
determining, from three points on the first rigid body (2), the length and width of the sensor when it is placed on the flat upper surface of the calibration block;
The computer is furthermore configured to determine a third transformation matrix for changing from the reference coordinate system (R0) to the coordinate system (RS) tied to the sensor.
The invention is described for the tracking of a sensor, but is also applicable to the tracking of multiple sensors, simultaneously and independently.
The non-destructive testing device comprises a computer 6 connected to the optical motion tracking system 1 and to a control module 5.
The computer 6 is, for example, a general-purpose computer or an electronic board. It notably comprises a processor running a computer program implementing the method that will be described, and a memory for storing the results thereof. It also comprises input and output interfaces and may be associated with a visualization screen.
The link between the computer 6 and the optical motion tracking system 1 may be wired or wireless. Likewise, the link between the computer 6 and the control module 5 may be wired or wireless.
Advantageously, the non-destructive testing device comprises a pointing device 4.
In one preferred embodiment, the second rigid body 41 comprises seven spherical targets.
The non-destructive testing device comprises a control module 5 equipped with at least one actuation button 51. Preferably, the control module 5 is mounted on the pointing device 4 so as to facilitate use thereof.
To express a vector $\vec{V_1}$ in the coordinate system $R_A$, the notation ${}^{A}\vec{V_1}$ is used. It should be noted that a vector is used to express both a position and a displacement.
The transformation (formed of a translation and/or a rotation) for changing from a coordinate system $R_A$ to a coordinate system $R_B$ is defined by the transformation matrix ${}^{A}T_B$.
${}^{A}\vec{I_B}$, ${}^{A}\vec{J_B}$ and ${}^{A}\vec{K_B}$ respectively denote the unit vectors along the axes $x_B$, $y_B$ and $z_B$ of the coordinate system $R_B$, expressed in the coordinate system $R_A$.
${}^{A}\vec{C_B}$ is the vector expressing the origin of the coordinate system $R_B$ in the coordinate system $R_A$.
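Using this notation, ${}^{A}T_B$ may conveniently be represented as a 4x4 homogeneous matrix whose columns are the unit vectors ${}^{A}\vec{I_B}$, ${}^{A}\vec{J_B}$, ${}^{A}\vec{K_B}$ and the origin ${}^{A}\vec{C_B}$. A minimal sketch, with illustrative function names not taken from the source:

```python
import numpy as np

def make_transform(i_b, j_b, k_b, c_b):
    """Build the homogeneous matrix A_T_B from the unit vectors of R_B
    and its origin C_B, all expressed in R_A (illustrative helper)."""
    T = np.eye(4)
    T[:3, 0] = i_b  # A_I_B: x-axis of R_B expressed in R_A
    T[:3, 1] = j_b  # A_J_B: y-axis of R_B expressed in R_A
    T[:3, 2] = k_b  # A_K_B: z-axis of R_B expressed in R_A
    T[:3, 3] = c_b  # A_C_B: origin of R_B expressed in R_A
    return T

def transform_point(a_T_b, p_in_b):
    """Express a point given in R_B in the coordinate system R_A."""
    return (a_T_b @ np.append(p_in_b, 1.0))[:3]
```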
The motion tracking system 1 is capable of locating rigid bodies 2, 41 in space. A rigid body is a non-deformable collection of spherical markers reflecting infrared rays, and thus able to be located by the optical positioning system. After calibration, the motion tracking system is able to associate, with a rigid body, an origin reference frame that is used to qualify the position and the orientation of rigid bodies in the coordinate system of the motion tracking system.
It should be noted that, in order to be located correctly, the rigid bodies must be located within the solid angle seen by the motion tracking system.
The motion tracking system is supplied notably with a factory-calibrated tool for obtaining the coordinates of a point in space in the coordinate system of said motion tracking system. Hereinafter, this tool is referred to as second rigid body 41.
Two different rigid bodies are used in the calibration method of the invention: the second rigid body 41 described above and the first rigid body 2 used to locate the sensor holder 8. It is the latter, first rigid body 2 that is the subject of the calibration procedure described in this calibration step. The aim of this calibration step is to be able to determine, quickly, easily and accurately, the geometric transformation (rotation and/or translation) that exists between the origin of the first rigid body 2 as located by the motion tracking system and the point of emission of ultrasound by the sensor 3, independently of the sensor holder that is used. In other words, this calibration makes it possible to determine the geometric transformation, in the form of a transformation matrix, between the origin of the first rigid body 2 and a point of the sensor, regardless of the shape of the sensor holder necessarily placed between these two elements.
The testing device 10 according to the invention comprises:
Prior to the step of calibrating the augmented reality visualization device 16 (described in detail below) in the reference coordinate system (R0), the visualization method of the invention may comprise a step 1000 of calibrating the non-destructive testing device. Step 1000 of calibrating a device for the non-destructive testing of a mechanical part 7 may comprise, by way of example, the following steps:
It will be recalled that the calibration step is compatible with the visualization method described in detail below. However, another calibration step may be implemented instead of step 1000.
Advantageously, the calibration step may furthermore comprise a step 140 of determining a third transformation matrix for changing from the reference coordinate system (R0) to the coordinate system (RS) tied to the sensor. The third transformation matrix is obtained by multiplying the second transformation matrix with the first transformation matrix.
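As a hedged illustration of this composition, under the usual column-vector convention in which coordinate changes chain as ${}^{A}T_C = {}^{A}T_B \cdot {}^{B}T_C$ (the exact operand order used in the source is not reproduced here):

```python
import numpy as np

def compose(a_T_b, b_T_c):
    """Chain two coordinate-system changes: A_T_C = A_T_B @ B_T_C
    (column-vector convention, assumed here)."""
    return a_T_b @ b_T_c

def invert(a_T_b):
    """B_T_A from A_T_B: transpose the rotation, re-express the origin."""
    R, t = a_T_b[:3, :3], a_T_b[:3, 3]
    T = np.eye(4)
    T[:3, :3] = R.T
    T[:3, 3] = -R.T @ t
    return T
```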
The aim of steps 100 and 110 is to calibrate the calibration block 14 itself, that is to say to determine its dimensions (length and width) and to associate therewith a coordinate system RB tied to the calibration block. The motion tracking system and the calibration block must be firmly fixed in order to avoid any change in their relative positions and orientations. An operator then uses the motion tracking system and the second rigid body 41 to acquire three positions on the surface of the calibration block 14. These three positions Q1, Q2 and Q3 are marked beforehand, advantageously but not necessarily, by small holes created for this purpose in the flat upper surface of the calibration block 14, as indicated in
The calibration step may comprise, prior to step 110 of determining the first transformation matrix from the reference coordinate system R0 to a block coordinate system RB tied to the calibration block, a step 105 of determining the block coordinate system RB tied to the calibration block in the reference coordinate system R0.
More precisely, the computer 6 receives three vectors at input:
The computer 6 computes the following data:
The computing steps are described in detail below.
The vector defining the center of the calibration block in the reference coordinate system R0 is computed:
The vector defining the length of the calibration block in the reference coordinate system R0 is computed:
The vector defining the width of the calibration block in the reference coordinate system R0 is computed:
The length of the calibration block may thus be computed: $L_B = \|{}^{0}\vec{L_B}\|$
The width of the calibration block may also be computed: $W_B = \|{}^{0}\vec{W_B}\|$
From these data, it is possible to compute the three unit vectors of the block coordinate system RB in the reference coordinate system R0:
Finally, the transformation matrix for changing from the reference coordinate system R0 to the block coordinate system RB is computed:
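The formulas referred to above are not reproduced in this text. Purely as a hedged reconstruction, assuming that $Q_1$, $Q_2$ and $Q_3$ mark three corners of the flat upper surface, with $Q_1 \to Q_2$ running along the length and $Q_2 \to Q_3$ along the width, the whole of steps 100 to 110 might be sketched as follows; every formula in this sketch is an assumption consistent with the surrounding definitions, not the source's own:

```python
import numpy as np

def calibrate_block(q1, q2, q3):
    """Block calibration sketch: derive the block frame R_B in R_0 from
    three acquired points (assumed layout: q1->q2 = length edge,
    q2->q3 = width edge)."""
    l_vec = q2 - q1                      # assumed 0_L_B: length vector
    w_vec = q3 - q2                      # assumed 0_W_B: width vector
    length = np.linalg.norm(l_vec)       # L_B
    width = np.linalg.norm(w_vec)        # W_B
    i_b = l_vec / length                 # 0_I_B
    j_b = w_vec / width                  # 0_J_B
    k_b = np.cross(i_b, j_b)             # 0_K_B, normal to the upper surface
    c_b = (q1 + q3) / 2.0                # assumed center: midpoint of diagonal
    T = np.eye(4)                        # 0_T_B
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = i_b, j_b, k_b, c_b
    return length, width, T
```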
The aim of steps 120 and 130 is to determine the dimensions of the sensor (length and width), along with the transformation matrix between the first rigid body located on the sensor holder and the center of the sensor, as indicated in
To carry out steps 120 and 130, the sensor 3 is placed (step 115) on the flat upper surface of the calibration block 14. The operator acquires three positions of the first rigid body present on the sensor holder, as may be seen in
It should be noted that the sensor holder is designed such that it is the sensor itself, and not the sensor holder, that comes into contact with the edges (blockers) of the calibration block when the sensor holder is brought into the positions P1, P2 and P3.
In addition to the previously available data, the computer 6 receives three vectors at input:
Vector defining the first position in the reference coordinate system $R_0$: ${}^{0}\vec{P_1}$
Vector defining the second position in the reference coordinate system $R_0$: ${}^{0}\vec{P_2}$
Vector defining the third position in the reference coordinate system $R_0$: ${}^{0}\vec{P_3}$
The computer 6 computes the following data:
The computing steps are described in detail below.
The following vectors are computed:
The length of the sensor is computed:
The width of the sensor is computed:
Finally, the three unit vectors of the sensor coordinate system RS in the reference coordinate system R0 are computed:
${}^{0}\vec{I_S} = {}^{0}\vec{I_B}$
${}^{0}\vec{J_S} = {}^{0}\vec{J_B}$
${}^{0}\vec{K_S} = {}^{0}\vec{K_B}$
It is then possible to compute the transformation matrix for changing from the sensor holder coordinate system RH to the reference coordinate system R0, the sensor holder being in P3:
In other words, the calibration step described in detail here comprises, prior to step 125 of determining the coordinate system RS tied to the sensor in the coordinate system RH tied to the sensor holder, a step 123 of determining a fourth transformation matrix for changing from the coordinate system RH tied to the sensor holder to the reference coordinate system R0.
The following step consists in computing the coordinates of the center of the sensor in the reference coordinate system R0 (the computing is carried out for the sensor holder in P3):
It is also necessary to compute the three unit vectors of the sensor coordinate system RS in the sensor holder coordinate system RH:
Finally, the center of the sensor coordinate system RS is computed in the sensor holder coordinate system RH:
In other words, the calibration step comprises, prior to step 130 of determining the second transformation matrix for changing from a coordinate system RH tied to the sensor holder to a coordinate system RS tied to the sensor, a step 125 of determining the coordinate system RS tied to the sensor in the coordinate system RH tied to the sensor holder.
Finally, it is possible to compute the transformation matrix for changing from the sensor holder coordinate system RH to the sensor coordinate system RS:
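Again, the source's formula is not reproduced above. As a hedged sketch, once the pose ${}^{0}T_H$ of the sensor holder in $P_3$ and the sensor frame ${}^{0}T_S$ are known in $R_0$, the sought matrix ${}^{H}T_S$ follows by composition with an inverse:

```python
import numpy as np

def holder_to_sensor(o_T_h, o_T_s):
    """H_T_S = inverse(0_T_H) @ 0_T_S: the fixed holder-to-sensor
    transformation, valid thereafter for any pose of the holder
    (sketch under the column-vector convention assumed earlier)."""
    return np.linalg.inv(o_T_h) @ o_T_s

# Thereafter, tracking the holder gives the sensor pose in real time:
# 0_T_S(t) = 0_T_H(t) @ H_T_S
```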
It will be recalled that this calibration step is given by way of indication. Other calibrations are also compatible with the visualization method described in detail below. As an alternative to what has just been described, the sensor holder could be calibrated directly using a tool for recording the positions of various characteristic points. Using the calibrated calibration block, however, naturally makes it possible to integrate a coplanarity constraint that increases the accuracy of the calibration. It is the combination of the calibration of the calibration block and of the sensor that gives this calibration method a particularly beneficial aspect. This results in great simplicity and speed of implementation, along with a significant improvement in the accuracy of the calibration compared with a conventional method.
After this preliminary step of calibrating the non-destructive testing device 10, the core of the invention, relating to the display of the data, is described in detail below.
The object of the invention lies in visualizing, in augmented reality, the signals directly inside the mechanical part in the field of non-destructive testing. To this end, the invention is based on two aspects. On the one hand, the elements in the scene must be presented in the form of holographic occlusions so that the brain is able to correctly interpret the position and depth of the holographic 3D surface inside the part. And, on the other hand, a cut-out is produced within the occlusion in order to allow this holographic 3D surface to be visualized. The orientation of this cut-out is related to the relative position of the operator. More precisely, the orientation of the cut-out 80 is related to the location of the augmented reality visualization device. This cut-out and the signal visualization surface constructed from the ultrasonic paths thus face the augmented reality visualization device, and therefore the eyes of the operator. The major advantage of the invention is that of allowing the user to understand the positioning of signals inside the part and to achieve a real-time visualization of spatialized data in augmented reality.
The cut-out 80 is a 3D surface that passes through the depth of the occlusion until reaching an ultrasonic path acquisition surface, which corresponds to an ultrasonic signal visualization surface 81. The cut-out of the occlusion thus corresponds to the intersection between the 3D surface and the visualization surface.
For example, the cut-out 80 is produced in the holographic 3D representation of the mechanical part 7′ as illustrated in
The cut-out 80 is predetermined such that the operator is able to visualize the paths of the ultrasonic signals inside the mechanical part in its holographic 3D representation 7′, that is to say inside this cut-out 80 in the part. The ellipsoid is notably centered on a point of the sensor chosen as a reference. The rotation and dimensions of the ellipsoid are computed based on the position of the operator in relation to the sensor and the part, in order to allow all of the signals to be visualized by this same operator.
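A minimal sketch of this placement, in which both the naming and the sizing rule are illustrative assumptions rather than the computation actually used:

```python
import numpy as np

def place_cutout(sensor_ref_point, operator_pos, signal_depth):
    """Center the cut-out ellipsoid on the sensor reference point and
    orient its opening toward the operator (illustrative rule only)."""
    center = sensor_ref_point
    view_dir = operator_pos - center
    view_dir /= np.linalg.norm(view_dir)   # opening faces the operator
    # Illustrative sizing: deep enough to reach the visualized signals.
    semi_axes = np.array([signal_depth, signal_depth, 1.5 * signal_depth])
    return center, view_dir, semi_axes
```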
The visualization method thus makes it possible to obtain a display in the augmented reality visualization device 16. The operator sees there, through the visualization device 16, the mechanical part 7, the sensor holder 8 and the sensor 3. In addition to these real objects, the display comprises a holographic 3D representation 7′ of the mechanical part, a holographic 3D representation 8′ of the sensor holder, and a holographic 3D representation 3′ of the sensor. These three holographic 3D representations are superimposed on the display of the real objects. In addition to the real objects and these holographic 3D representations superimposed on the real objects, the display comprises the holographic 3D representation of the cut-out of the occlusion 80 and of the intersection surface 81 or ultrasonic signal visualization surface, superimposed on the holographic 3D representations superimposed on the real objects. This surface corresponds to a quadric that is computed to be as close as possible to the mesh formed by the ultrasonic paths.
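The source does not specify the fitting method for this quadric. One hypothetical approach is a least-squares fit of a graph-form quadric $z = ax^2 + by^2 + cxy + dx + ey + f$ to the vertices of the mesh formed by the ultrasonic paths:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    to the mesh vertices (points: array of shape (N, 3))."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # (a, b, c, d, e, f)
```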
In the rest of the document, we will refer to the following coordinate systems:
The visualization method according to the invention comprises, beforehand, a step 190 of calibrating the augmented reality visualization device 16 in the reference coordinate system (R0).
In order to define the position of the elements known in the world coordinate system in the coordinate system of the augmented reality visualization device, it is necessary to calibrate the augmented reality visualization device. The augmented reality visualization device establishes an anchor, that is to say a coordinate system at the location of the QR code. During the calibration, this coordinate system is positioned in the world coordinate system, defined by the optical positioning system. The calibration makes it possible to establish the relationship between the two worlds.
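Concretely, once the QR-code frame is known both in the world coordinate system of the tracking system (as ${}^{0}T_{QR}$) and as an anchor in the visualization device (as ${}^{RA}T_{QR}$), the bridge between the two worlds follows by composition. A sketch under those naming assumptions:

```python
import numpy as np

def world_to_ar(o_T_qr, ra_T_qr):
    """RA_T_0: express world (tracking-system) coordinates in the
    coordinate system of the AR visualization device, via the shared
    QR-code anchor."""
    return ra_T_qr @ np.linalg.inv(o_T_qr)
```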
In the world of the augmented reality visualization device, the coordinate system RQR associated with the QR code 86 is defined as shown in
The vector ${}^{0}\vec{I_{QR}}$ in the world coordinate system is:
The vector ${}^{0}\vec{J_{QR}}$ in the world coordinate system is:
The vector ${}^{0}\vec{K_{QR}}$ in the world coordinate system is:
The calibration step 1000 defines and describes how to obtain the following transformation matrices by calibration:
It is possible to position the origin F of the sensor 3 in the part coordinate system $R_P$:

$${}^{P}\vec{P_F} = {}^{P}T_0 \, ({}^{0}\vec{P_F})$$
The information $I$ spatially related to this point F, such as the ultrasonic paths, the sectorial scan reconstructed therefrom and the physical signals, may then be located in the coordinate system $R_{RA}$ of the augmented reality visualization device:

$${}^{RA}\vec{P_I} = {}^{RA}T_0 \, ({}^{0}\vec{P_I})$$
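Put together, any information located at a point in the world coordinate system may be re-expressed for display. A sketch reusing the homogeneous-matrix helpers introduced earlier (names are illustrative):

```python
import numpy as np

def locate_for_display(ra_T_0, o_p):
    """RA_P_I = RA_T_0 (0_P_I): position of ultrasonic paths, a sectorial
    scan or physical signals in the AR device coordinate system."""
    return (ra_T_0 @ np.append(o_p, 1.0))[:3]
```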
In order to clarify the various elements of the 3D scene to be visualized, the 3D object containing the signals to be visualized by the operator may be denoted hereinafter as “main object”, and the 3D objects with respect to which the main object is positioned (notably the part and the sensor) may be denoted as “secondary objects”. It should be noted that, in the case of augmented reality, each of the objects described above is visualized in the form of a hologram and is superimposed perfectly on a real object, thus creating an occlusion of this real object.
As already mentioned, the objective of the method of the invention is to allow the user to visualize the main object formed from ultrasonic data (sectorial scan for example) and to position it in 3D in augmented reality, giving the user the impression of visualizing it inside the part.
For the human being to correctly visually interpret the positioning of this main object in the 3D visual space, it is not enough to display the hologram of this main object. The hologram will give the impression of floating, and its spatial relationship with the volume of the part will be lost. In our case described here, the volume in question is the volume passed through by the ultrasonic signals used to construct this main object.
The invention consists in displaying the hologram of this main object accompanied by a set of graphic occlusions corresponding to the holograms of the secondary objects. The secondary objects are notably the part and the sensor (accompanied by its sensor holder). For these occlusions not to completely conceal the real objects, the occlusions advantageously have a medium transparency level. The graphic occlusions of the part (possibly comprising a weld to be tested) and of the sensor are therefore produced with textures and colors having a transparency percentage.
In order for the brain to interpret the main object as forming part of the internal integrity of the part (and possibly of the weld), this main object must appear on the graphic occlusion of the part. In order to achieve this objective, a transparent opening of the graphic occlusions of the part is produced. This is the cut-out 80 of the occlusion formed by the virtual representation of the main object. This opening is centered on the sensor (more precisely the point F of the sensor, the point with respect to which the main object to be displayed is located). This opening or cut-out 80 is also a 3D object, and a portion of the surface of its volume is coincident with the surface of the main object.
In order not to disturb the operator with information outside their field of view (for example the back of the part), the visible data, and therefore the direction of the opening, are selected according to the position and orientation of the eye of the operator. Thus, if the operator visualizes the holograms from the front or back of the part, the holograms remain oriented correctly. This is made possible by calibrating the visualization device 16 with respect to the motion tracking device 1.
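One simple, hypothetical way of implementing this selection, not taken from the source, is to keep only the data whose outward normal faces the visualization device:

```python
import numpy as np

def facing_operator(face_normals, face_centers, device_pos):
    """Boolean mask of mesh faces whose normal points toward the AR
    device, so that data outside the operator's view (e.g. the back of
    the part) are not displayed."""
    to_device = device_pos - face_centers              # shape (N, 3)
    dots = np.einsum('ij,ij->i', face_normals, to_device)
    return dots > 0.0
```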
Step 240 of determining the intersection surface 81 or ultrasonic path visualization surface comprises the following steps:
In other words, step 240 makes it possible to visualize a sectorial scan within the augmented reality visualization device 16. This is the intersection surface 81 of the plane containing the emission axis in the cut-out 80 of the occlusion.
The signal visualization surface 81 is determined, on the one hand, from the characteristics of the sensor, such as its position relative to the part to be imaged and the characteristics of the emitted signals, notably their orientations in the part. It is determined, on the other hand, from the cut-out 80 of the part as being included in this cut-out 80 and representing for example the bottom surface of this cut-out 80.
The signal visualization surface 81 changes with each new acquisition of signals, notably if the position of the sensor and/or the orientations of the signals change.
By virtue of the invention, it is thus possible for the operator to visualize secondary objects (that is to say real objects: mechanical part 7 to be tested, the sensor holder 8 and the sensor 3), on which the main object containing the signals to be visualized, that is to say the assembly formed by the cut-out 80 and the surface 81, is superimposed.
In one particular embodiment, the steps of the method according to the invention are implemented by computer program instructions. Consequently, the invention also targets a computer program on an information medium, this program being able to be implemented in a computer, this program comprising instructions designed to implement the steps of a method as described above.
This program may use any programming language, and may be in the form of source code, object code, or intermediate code between source code and object code, such as in a partially compiled form, or any other desirable form. The invention also targets a computer-readable information medium comprising computer program instructions designed to implement the steps of a method as described above.
The information medium may be any entity or device capable of storing the program. For example, the medium may comprise a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or else a magnetic recording means, for example a floppy disk or a hard disk.
Moreover, the information medium may be a transmissible medium such as an electrical or optical signal, which may be routed via an electrical or optical cable, by radio or by other means. The program according to the invention may in particular be downloaded on an Internet type network.
As an alternative, the information medium may be an integrated circuit in which the program is incorporated, the circuit being designed to execute or to be used in the execution of the method according to the invention.
It will be apparent more generally to those skilled in the art that various modifications may be made to the embodiments described above, in the light of the teaching that has just been disclosed to them. In the claims that follow, the terms that are used should not be interpreted as limiting the claims to the embodiments set out in the present description, but should be interpreted so as to include therein all of the equivalents that the claims are intended to cover by virtue of their wording and as may be foreseen by those skilled in the art on the basis of their general knowledge.
This application is a National Stage of International patent application PCT/EP2022/074036, filed on Aug. 30, 2022, which claims priority to foreign French patent application No. FR 2109076, filed on Aug. 31, 2021, the disclosures of which are incorporated by reference in their entireties.