The technical field of the invention is interpretation of non-destructive-testing measurements carried out on a mechanical part or a portion of a structure.
NDT, meaning non-destructive testing, consists in inspecting the quality of mechanical parts or structures, using sensors, in a non-destructive way. The objective is to perform a test and/or to detect and monitor the appearance of structural defects. This involves monitoring the integrity of a tested part, so as to prevent the occurrence of accidents, or to extend the time for which the part may be safely used.
NDT is commonly implemented on sensitive equipment, so as to optimize replacement or maintenance thereof. It has many applications to testing industrial equipment, for example in the petroleum industry, in the nuclear industry, or in the transportation field (aeronautics for example).
The sensors used in NDT are non-destructive sensors that cause no damage to the tested parts. The tested parts may be structural elements of industrial equipment or aircraft, or of civil engineering works (bridges or dams for example). Various methods are implemented. They may for example employ X-rays, ultrasonic waves, or detection via eddy currents.
During use thereof, the sensors are connected to computing means, so as to make it possible to interpret the performed measurements. The presence of a defect, in a part, leads to a defect signature that is measurable by a sensor. The computing means perform an inversion: based on the measurements, quantitative data relating to the defect are obtained (for example its position, its shape, or its dimensions).
The inversion may be carried out using direct analytical models (polynomial models for example) allowing a relationship to be established between the features of a defect and the measurements delivered by a sensor. The inversion of the model allows said features to be estimated based on the measurements carried out.
According to another approach, the features of the defects may be estimated using supervised artificial-intelligence algorithms, neural networks for example. However, one difficulty associated with the use of neural networks is the need to perform a training phase that is as complete as possible, to optimize the performance of the estimation. This takes time and requires a lot of data. The inventors have provided a method that addresses this problem. The objective is to make it easier to train a neural network intended to perform an inversion, while maintaining a good performance in respect of estimation of the features of the defect.
A first subject of the invention is a method for characterizing a part, the part being liable to comprise a defect, the method comprising the following steps:
By measurement, what is meant is a measurement of a physical quantity liable to vary in the presence of a defect in the part. It may be an acoustic quantity, an electric quantity, an electrostatic quantity, a magnetic quantity, an electromagnetic quantity (for example an intensity of a type of radiation), or a mechanical quantity.
The second configuration may notably take into account:
The first database may comprise measurements performed or simulated on a model part comprising the defect.
According to one embodiment, the processing block of the first neural network is a classifying block, configured, in the first training operation, to perform a classification of the features extracted by the extracting block, the first neural network being a convolutional neural network. In the second training operation, the classifying block of the second neural network may be initialized using the classifying block of the first neural network.
According to one embodiment, the first neural network is an autoencoder. Said processing block of the first neural network may be configured, in the first training operation, to reconstruct data obtained from the first database, and forming input data of the first neural network.
The defect may be of the following type: delamination, and/or crack, and/or perforation and/or crack propagating from a perforation and/or presence of a porous region and/or presence of an inclusion and/or presence of corrosion.
The part may be made of a composite, comprising components assembled with one another. The defect may then be an assembly defect between the components.
The measurements may be representative of a spatial distribution:
The defect may be a variation in the spatial distribution with respect to a reference spatial distribution. The reference spatial distribution may have been previously modeled or established experimentally.
The measurements may be of the following type:
The method may be such that:
The first configuration and the second configuration may be parametrized by at least one of the parameters chosen from:
The measurement conditions may comprise at least:
The first model part and the second model part may be identical.
According to one possibility:
The second database may comprise fewer data than the first database.
The characterization of the defect may comprise:
The invention will be better understood on reading the examples of embodiment that are presented, in the rest of the description, with reference to the figures listed below.
The part to be characterized may be a monolithic part, or a more complex part resulting from assembly of a plurality of elementary parts, for example a lifting surface of an airplane wing or a skin of a fuselage.
The characterization may consist in detecting a potential presence of a structural defect 11. In this example, the sensor 1 is configured to perform measurements using eddy currents generated in the part 10 to be tested. The latter is an electrically conductive part. The principle of eddy-current-based non-destructive measurements is known. Under the effect of excitation by a magnetic field 12, eddy currents 13 are induced in the part 10. In the example shown, the sensor 1 is a coil, energized by an amplitude-modulated current. The coil generates a magnetic field 12, the field lines of which have been shown in
In this example, the sensor 1 is used to excite the part 10, and as sensor of the reaction of the part to the excitation. Generally, the sensor is moved along the part, parallel thereto. In the example shown, the part extends along an axis X and an axis Y. The sensor may be moved parallel to each axis, this having been represented by the double-headed arrows. Alternatively, a matrix array of sensors may be employed.
A series of measurements, forming a preferably two-dimensional spatial distribution of a measured quantity, in this case the impedance of sensor 1, is thus obtained. It is conventional to distinguish between the real and imaginary parts of the impedance. The measured impedance is generally compared to an impedance in the absence of defect, so as to obtain a map of the impedance variation ΔH. A measurement matrix M, representing the real part or the imaginary part of the impedance measured at each measurement point, may be formed.
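The construction of the measurement matrices can be sketched as follows; this is a minimal illustration assuming a 41×46 scan grid (as in the first example below), with purely illustrative impedance values and a defect-free reference map:

```python
import numpy as np

# Illustrative sketch only: building measurement matrices from an
# eddy-current scan. Grid size and impedance values are hypothetical.
rng = np.random.default_rng(0)
n_x, n_y = 41, 46                        # scan points along X and Y

# Complex impedance measured at each scan point, and the reference
# impedance map recorded on a defect-free part.
z_measured = rng.normal(size=(n_x, n_y)) + 1j * rng.normal(size=(n_x, n_y))
z_reference = np.zeros((n_x, n_y), dtype=complex)

delta_h = z_measured - z_reference       # impedance variation map ΔH

# One measurement matrix per component; these two "images" may be
# concatenated to form the input of a neural network.
m_real = delta_h.real
m_imag = delta_h.imag
a_in = np.concatenate([m_real, m_imag], axis=0)
```

In practice each matrix would be normalized before being supplied to the network; the concatenation along one axis mirrors the input-image construction described further on.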
The sensor 1 is connected to a processing unit 2, comprising a memory 3 containing instructions allowing implementation of a measurement-processing algorithm, the main steps of which are described below. The processing unit is usually a computer, connected to a display 4.
As described in connection with the prior art, the measurement matrix M corresponds to a spatial distribution of the response of the part to excitation, each measurement being a signature of the part. An inversion must be performed, so as to make it possible to conclude that a defect is present, and, where appropriate, to characterize the latter. The inversion is performed by the processing algorithm implemented by the processing unit 2.
By structural defect, what is meant is a mechanical defect affecting the part. It may notably be a question of a crack, or of a delamination, or of a perforation (for example forming a through-hole), or of a crack propagating from a hole, or of an abnormally porous region. The structural defect may also be a presence of an inclusion of an undesirable material, or of a corroded region. The structural defect may affect the surface of the part 10 located facing the sensor. It may also be located at depth in the part. The type of sensor used is selected depending on the defect to be characterized.
The part 10 to be tested may be formed from a composite. It then comprises components assembled with one another. It may be a question of an assembly of plates or of fibers. The defect may be an assembly defect: it may be a question of local delamination of plates, or of a decohesion of fibers or of fiber strands, or a non-uniformity in the orientation of fibers, or a defect resulting from an impact or shock.
The part 10 to be tested has mechanical properties that are spatially distributed along the part. The defect may be a variation in the spatial distribution of the mechanical properties with respect to a reference spatial distribution. It may for example be a question of one portion of the part, in which portion the mechanical properties do not correspond to reference mechanical properties, or are outside a range of tolerance. The mechanical property may be Young's modulus, or density, or a propagation speed of an acoustic wave. The reference spatial distribution may be obtained from a specification, and correspond to an objective to be achieved. It may result from a model or from experimental measurements.
The preceding paragraph also applies to electrical or magnetic properties, or even to a stress to which the part to be tested is exposed. It may for example be a question of a temperature-related stress or of a mechanical stress (a pressure-related stress for example) to which the part is subjected during its operation. Thus, the characterization of the part may consist in establishing a temperature of the part, or a level of mechanical stress (force, pressure, deformation) to which the part is subjected. The characterization may also consist in establishing a spatial distribution of a temperature of the part or, more generally, of a stress to which the part is subjected. The defect is then a divergence between the spatial distribution and a reference spatial distribution. When considering a spatial distribution of the temperature of the part, a defect may be the appearance of a hot spot, in which the temperature of the part is abnormally high with respect to a reference spatial distribution of temperature.
The characterization of the defect aims to determine the type of defect, among the aforementioned types. It may also comprise location of the defect, and estimation of all or some of its dimensions.
The measurements may be processed using a supervised artificial-intelligence algorithm, for example one based on a neural network. The algorithm may be trained by constructing a database formed from performed or simulated measurements of a part comprising a defect the features of which are known: type of defect, dimensions, location, potentially porosity or another physical quantity allowing the defect to be characterized.
The database may be established by simulation, using a dedicated simulation software package. An example of a dedicated software package is the software package CIVA (supplier: Extende), which notably allows various non-destructive-testing techniques to be simulated: propagation of ultrasound, effects of eddy currents and X-ray radiography. Such a software package allows measurements to be simulated based on modeling of the part. Use of such a software package may allow a database to be constructed allowing the artificial-intelligence algorithm used to be trained.
For this type of application, the inventors have concluded that use of a convolutional neural network is appropriate. Specifically, the input data are in matrix format and may be likened to images. Each image corresponds to a map of a measured physical quantity liable to vary in the presence of a defect in the part.
The convolutional neural network comprises a feature-extracting block A1, connected to a processing block B1. The processing block is configured to process the features extracted by the extracting block A1. In this example, the processing block B1 is a classifying block. It allows a classification on the basis of the features extracted by the extracting block A1. The convolutional neural network is fed with input data Ain, which correspond to one or more images. In the example in question, the input data form an image obtained through concatenation of two images representing the real part and the imaginary part, respectively, of the impedance variation ΔH measured at various measuring points regularly distributed, facing the part, in a matrix-array arrangement.
The feature-extracting block A1 comprises J layers C1 . . . Cj . . . CJ downstream of the input data, J being an integer greater than or equal to 1. Each layer Cj is obtained by applying a convolution filter to the images of a previous layer Cj−1. The index j is the rank of each layer. The layer C0 corresponds to the input data Ain. The parameters of the convolution filters applied in each layer are determined during training. The last layer CJ may comprise a number of terms exceeding several tens, or even several hundreds, or even several thousands. These terms correspond to features extracted from each image forming the input data.
Between two successive layers, the method may comprise dimension-reducing operations (pooling operations for example). It is a question of replacing the values of a group of pixels with a single value—for example the mean, or the maximum value, or the minimum value of the group in question. Max pooling, which corresponds to the replacement of the values of each group of pixels with the maximum value of the pixels in the group, may for example be applied. The last layer CJ is flattened, to make the values of this layer form a vector. These values constitute the features extracted from each image.
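The max-pooling and flattening operations described above can be sketched as follows, on a small illustrative feature map:

```python
import numpy as np

# Minimal sketch of 2x2 max pooling followed by flattening, assuming a
# feature map whose sides are even multiples of the pool size.
def max_pool_2x2(feature_map):
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))       # keep the maximum of each 2x2 group

layer = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(layer)             # each 2x2 group replaced by its max
flat = pooled.ravel()                    # flattened into a feature vector
```

The same reshape trick would use `mean` instead of `max` for average pooling; the final `ravel` corresponds to the flattening of the last layer CJ into a vector of extracted features.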
The classifying block B1 is a fully connected neural network, or multilayer perceptron. It comprises an input layer Bin, and an output layer Bout. The input layer Bin is formed by the features of the vector resulting from the extracting block A1. Between the input layer Bin and the output layer Bout, one or more hidden layers H may be provided. The input layer Bin, each hidden layer H and the output layer Bout are thus placed in succession.
Each layer may be assigned a rank k. The rank k = 0 corresponds to the input layer Bin. Each layer comprises nodes, the number of nodes of a layer possibly being different from the number of nodes of another layer. Generally, the value of a node yn,k of a layer of rank k is such that

yn,k = fn(Σm wm,n,k · ym,k−1 + bn,k)

where ym,k−1 is the value of the node of index m of the layer of rank k−1, wm,n,k is a weight assigned to that node, bn,k is a bias term, and fn is an activation function.
The form of each activation function fn is determined by a person skilled in the art. It may for example be a question of an activation function fn of hyperbolic-tangent or sigmoid type.
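The node-value relationship described above can be sketched as a single dense-layer update; the weights, biases and layer sizes here are illustrative:

```python
import numpy as np

# Sketch of the values of the nodes of a layer of rank k, computed from
# the nodes of the layer of rank k-1. Weights and biases are illustrative.
def dense_layer(y_prev, weights, biases, activation=np.tanh):
    # y_{n,k} = f(sum_m w_{m,n} * y_{m,k-1} + b_{n,k})
    return activation(weights.T @ y_prev + biases)

y_km1 = np.array([0.5, -0.2, 0.1])       # nodes of the layer of rank k-1
w = np.zeros((3, 2))                     # 3 input nodes, 2 output nodes
b = np.array([0.0, 1.0])
y_k = dense_layer(y_km1, w, b)           # with zero weights: tanh of biases
```

With zero weights, the first output node is tanh(0) = 0 and the second is tanh(1); a sigmoid could be substituted for `np.tanh` as mentioned above.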
The output layer Bout contains values allowing a defect identified by the images of the input layer Ain to be characterized. It is the result of the inversion performed by the algorithm.
In the simplest application, the output layer may comprise only a single node, taking the value 1 or 0 depending on whether the analysis reveals the presence or absence of a defect.
In an application to identification, the output layer may comprise as many nodes as there are types of defect to be identified, each node corresponding to a probability of presence of one type of defect among the predetermined types (crack, hole, delamination, etc.).
In an application to determining dimensions, the output layer may comprise as many nodes as there are dimensions of a defect, this assuming employment of a defect geometric model.
In an application to determining location, the output layer may comprise coordinates indicating the two-dimensional or three-dimensional position of a defect in the part.
In an application to characterization, the output layer may contain information on the inspected part, for example a spatial distribution of one or more mechanical properties (Young's modulus for example), or electrical or magnetic or thermal properties (temperature for example) or geometric properties (at least one dimension of the part for example). The dimension of the output layer corresponds to a number of points of the part at which the mechanical property is estimated on the basis of the input data.
These applications may be combined, so as to obtain both a location and dimensions, or a location, an identification and dimensions.
An important element of the invention is that the extracting block A1 may be established, for a given measurement technique, using the most exhaustive possible training operation, called the first training operation, employing a large database. The classifying block may be tailored to various specific cases in which the measurement technique is implemented. In other words, the invention makes it possible to parametrize various classifying blocks B1, B2, for various applications, while preserving the same extracting block A1.
Generally, in a first training phase, the extracting block A1 and a first classifying block B1 are parametrized. The first training phase is implemented using a first database DB1, established in a first configuration, considering a first model part. The method comprises using a second database DB2, on the basis of which a second training operation is performed. The second database is established in a second configuration different from the first configuration.
The first configuration is parametrized by various parameters Pi, i being an integer identifying each parameter. These parameters may notably comprise:
By fidelity, when mentioned in connection with the first parameter P1, what is meant is an ability of the model to replicate reality.
The first database DB1 contains various images representative of measurements of the first model part that were performed or simulated while varying certain parameters Pj, called variable parameters, whereas other parameters Pi≠j remain fixed for all the images of the first database. For example, the first database DB1 is constructed with only the parameter P8, representing the dimensions of the defect in question, varying, the parameters P1 to P7 remaining constant.
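The construction of such a database can be sketched as follows; `simulate_measurement` is a hypothetical stand-in for a real simulation tool, and the grid size, dimension ranges and database size are illustrative:

```python
import numpy as np

# Hypothetical sketch of building the first database DB1: only the defect
# dimensions (parameter P8) vary, all other parameters staying fixed.
rng = np.random.default_rng(1)

def simulate_measurement(defect_dims, grid=(41, 46)):
    # Placeholder for an eddy-current simulation; returns a synthetic
    # image whose statistics depend on the defect dimensions.
    return rng.normal(scale=defect_dims.sum(), size=grid)

db1_images, db1_labels = [], []
for _ in range(100):                     # a real DB1 would hold thousands
    dims = rng.uniform(0.5, 5.0, size=8)     # dimensions X1..X8 of the defect
    db1_images.append(simulate_measurement(dims))
    db1_labels.append(dims)              # known features: the training targets
```

Each image is paired with the known feature vector that generated it, which is what makes the training supervised.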
It will be understood that, in each training operation, the features of the defect that it is sought to estimate are known. Generally, the variable parameters correspond to the features intended to be estimated by the neural network.
The first database DB1 may contain a first number of images N1 that may exceed several hundred or even several thousand. Thus, following the first training operation, the first neural network, formed by combination of blocks A1 and B1, is then assumed to have a satisfactory prediction performance.
One important element of the invention is the ability to use the first training operation to perform the second training operation, in a different configuration. By different configuration, what is meant is that at least one of the fixed parameters of the first configuration is modified. The following examples show various possibilities in respect of modification of a parameter:
When the first and second databases are obtained experimentally, or by simulation, the number of sensors used (or simulated) may be different in the construction of each database.
Generally, the first training operation is carried out with certain parameters fixed. At least one of these parameters is modified in the second training operation, to construct the second database DB2.
One important aspect of the invention is that, in the second training operation, the extracting block A1 resulting from the first training operation is preserved. The first training operation is considered to be sufficiently exhaustive for the performance of the extracting block, in terms of extraction of features of the images supplied as input, to be considered sufficient. The extracting block may thus be used in the second training operation. In other words, the features extracted by the block A1 are a good descriptor of the measurements forming the input layer.
The second training operation is thus limited to an update of the parametrization of the classifying block, to obtain a second classifying block B2 tailored to the configuration of the second training operation. The second training operation may then be implemented with a second database DB2 containing less data than the first database.
In the course of the second training operation, the second classifying block B2 may be initialized taking into account the parameters governing the first classifying block B1. According to one possibility, the second classifying block B2 may comprise the same number of hidden layers as the first classifying block B1. These hidden layers may possess the same number of nodes as the layers of the first classifying block. The dimension of the output layer depends on the features of the defect to be estimated. Thus, the dimension of the output layer of the second classifying block B2 may be different from the dimension of the first classifying block B1. According to one possibility, the number of hidden layers and/or the dimension of the hidden layers of the second classifying block is different from the number of hidden layers and/or the dimension of the hidden layers of the first classifying block.
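The initialization described above can be sketched at the level of the parameter sets; all sizes and values are illustrative (a 1024-to-512 hidden layer, an output layer of size 8 for B1 and, hypothetically, of size 23 for B2):

```python
import numpy as np

# Conceptual sketch of the second training operation: the extracting
# block A1 is kept frozen, and only the classifying block B2 is trained,
# its hidden layer being initialized from B1. Sizes are illustrative.
rng = np.random.default_rng(2)

# Parameters resulting from the first training operation.
a1_params = {"conv_filters": rng.normal(size=(8, 3, 3))}
b1_params = {"hidden": rng.normal(size=(1024, 512)),
             "out": rng.normal(size=(512, 8))}

# Second classifying block: same hidden layer, copied from B1; the
# output layer may change dimension (here 23 features instead of 8).
b2_params = {"hidden": b1_params["hidden"].copy(),
             "out": rng.normal(size=(512, 23))}

trainable = {"B2": b2_params}            # only B2 is updated in training
frozen = {"A1": a1_params}               # A1 is reused as-is
```

In a deep-learning framework, freezing A1 would typically amount to excluding its parameters from the optimizer, so that only the B2 parameters receive gradient updates.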
The advantage of the invention is that, with a sufficiently complete first training operation, the second training operation may be carried out considering an amount of data significantly smaller than the amount of data used to carry out the first training operation. By significantly smaller amount of data, what is meant is at least 10 times or even 100 times less data. The second database DB2, constructed to carry out the second training operation, is smaller than the first database DB1. This is referred to as frugal learning, insofar as the amount of training data required is modest.
The method allows a first training operation to be implemented under laboratory conditions, on the basis of simulations or of optimized experimental conditions. This first training operation is followed by a second training operation that is closer to reality in the field: experimental measurements, more realistic measurement conditions, and/or a part of more complex shape or construction are taken into account. The invention makes it possible to limit the number of measurements necessary for the second training operation, while allowing a neural network that has a good prediction performance to be obtained. This is an important advantage, since acquiring measurements under realistic conditions is usually more complex than obtaining measurements in the laboratory or simulated measurements. The second training operation may allow unmodelable specificities to be taken into account, for example measurement noise, or variations relative to the composition or to the shape of the part.
Another advantage of the invention is that it makes it possible to use a first training operation, carried out on a part made of a certain material, to perform a “frugal” second training operation on a similar part, of a different material.
The first training operation may be thought of as a general training operation, suitable for various particular applications, or for parts of various types or shapes, or for various types of defects. It is essentially intended to provide an extracting block A1 allowing relevant features to be extracted from input data. The second training operation is more targeted, to a particular application, or to a particular type of part, or to a particular type of defect. The invention makes it easier to perform the second training operation, because it requires markedly less input data than the first training operation. Thus, the same first training operation may be used to perform various second training operations, corresponding to different configurations, respectively.
The first training operation may be carried out using a first database that is relatively easy to obtain, with respect to the second database. This makes it possible to provide a more exhaustive first database, for example reflecting a high degree of variability in the dimensions and/or in the shape of the defect.
The main steps of the invention are schematically shown in
Step 100: creating the first database DB1.
In the course of this step, the first database DB1 is built in a first configuration. As described above, the first configuration is parametrized by first parameters, some of these first parameters being fixed. The first database is formed from images representative of measurements obtained (performed or simulated) in the first configuration.
Step 110: first training operation.
In the course of this step, the first database is used to parametrize the blocks A1 and B1, so as to optimize the prediction performance of a first convolutional neural network CNN1.
Step 120: creating the second database DB2.
In the course of this step, the second database DB2 is built in a second configuration. As described above, at least one parameter, considered fixed in the first configuration, is modified. The size of the second database is preferably at least 10 times smaller than the size of the first database.
Step 130: second training operation.
In the course of this step, the second database DB2 is used to train a second convolutional neural network CNN2 formed by the first extracting block A1, resulting from the first training operation, and by a second classifying block B2 specific to the configuration adopted in the second training operation.
The configuration relating to the second training operation may correspond to conditions considered to be close to the measurement conditions. Specifically, the convolutional neural network CNN2 resulting from the second training operation is intended to be used to interpret measurements performed on the parts examined. This is the object of the following step.
Step 200: carrying out measurements.
Measurements are carried out, on an examined part, in the measurement configuration considered in the second training operation.
Step 210: interpreting the measurements.
The convolutional neural network CNN2, resulting from the second training operation, is used to estimate the features of a defect potentially present in the examined part, based on the measurements performed in step 200. These features may be established using the output layer Bout of the convolutional neural network CNN2. This network is therefore used to perform the step of inverting the measurements.
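The inversion step can be sketched as follows; the network is reduced here to a fixed linear readout for illustration, and the grid size and number of outputs are hypothetical:

```python
import numpy as np

# Minimal sketch of the inversion (step 210): the trained network turns
# a measured ΔH image into estimated defect features. A real CNN2 is
# replaced here by a flatten followed by an illustrative linear map.
rng = np.random.default_rng(3)

def cnn2_predict(image, readout):
    features = image.ravel()             # stand-in for the extracting block A1
    return readout @ features            # stand-in for the classifying block B2

measurement = rng.normal(size=(41, 46))  # ΔH map obtained in step 200
readout = rng.normal(size=(8, 41 * 46))  # 8 outputs: dimensions X1..X8
estimated_dims = cnn2_predict(measurement, readout)
```

The output vector plays the role of the output layer Bout: each term is one estimated feature of the defect.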
A first example of implementation of the invention will now be presented with reference to
Measurements performed using an eddy-current technique, in a scan consisting of 41×46 measurement points, at a distance of 0.3 mm above the part, were simulated. The part was a planar metal part. The light-gray line shown in
A first training operation was carried out on the basis of simulations. In the course of the first training operation, 2000 images were used taking into account a very low noise level (signal-to-noise ratio of 40 dB). Each input image was a concatenation of an image of the real part and of an image of the imaginary part of the impedance variation ΔH measured at each measurement point. In the course of this training operation, the dimensions of the defect were varied, its shape remaining the same.
The first training operation allowed a first convolutional neural network CNN1, comprising a first extracting block A1 and a first classifying block B1 such as described above, to be parametrized. The input layer comprised two images, corresponding to the real part and to the imaginary part of the impedance variation ΔH at the various measurement points, respectively. The extracting block A1 comprised four convolution layers C1 to C4 such that:
A max-pooling operation into groups of 2×2 pixels was performed between layers C2 and C3 and between layer C3 and layer C4, the latter being converted into a vector of size 1024.
The vector of size 1024 obtained from the extracting block A1 formed the input layer Bin of a fully connected classifying block B1 comprising a single hidden layer H (512 nodes), which was connected to an output layer Bout. The latter was a vector of size 8, each term corresponding to one estimate of the dimensions X1 to X8, respectively.
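The classifying block B1 of this first example can be sketched at shape level as follows; the weights are illustrative, only the layer sizes (1024, 512, 8) come from the description above:

```python
import numpy as np

# Shape-level sketch of the classifying block B1 of the first example:
# a 1024-feature vector, one hidden layer of 512 nodes, and an output
# layer of size 8 (the dimensions X1..X8). Weights are illustrative.
rng = np.random.default_rng(4)

features = rng.normal(size=1024)             # output of extracting block A1
w_hidden = rng.normal(size=(1024, 512)) * 0.01
b_hidden = np.zeros(512)
w_out = rng.normal(size=(512, 8)) * 0.01
b_out = np.zeros(8)

hidden = np.tanh(features @ w_hidden + b_hidden)
x_estimates = hidden @ w_out + b_out         # estimates of X1..X8
```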
The first convolutional neural network CNN1 was tested to estimate the 8 dimensional parameters X1 to X8 shown in
The inventors trained a second neural network CNN2, using, in a second database, 20 images simulated taking into account a signal-to-noise ratio of 40 dB and 20 images simulated taking into account a signal-to-noise ratio of 5 dB, i.e. a total of 40 images. As described above, the second neural network CNN2 was parametrized while keeping the extracting block A1 of the first neural network CNN1. Only the classifying block B2 of the second neural network was parametrized, while keeping the same number of layers and the same number of nodes per layer as the first neural network CNN1.
The second neural network CNN2 was tested on the same test data as the first neural network, i.e. with test images having signal-to-noise ratios equal to 5 dB, 20 dB and 40 dB, respectively.
This first example demonstrates the relevance of the invention: it allows a neural network to adapt rapidly when passing from a first configuration to a second configuration obtained by modifying a parameter kept fixed in the first configuration, in the present case the signal-to-noise ratio. It will be noted that the second neural network was parametrized using a database of 40 images, i.e. 50 times fewer than the database used during training of the first neural network.
In a second example, the inventors went from a first training configuration, employing a defect of a first predetermined shape, to a second training configuration, based on a second shape different from the first shape. The second defect is shown in
The complex second defect is a crack forming three Ts having 23 positional or dimensional features:
The inventors parametrized a reference neural network CNNref, by first constructing a reference database DBref. The reference database DBref contained 2000 images obtained by simulating measurements at 89×69 measurement points that were regularly distributed in a square grid, each image being obtained through concatenation of images of the real part and of the imaginary part of the impedance variation. The path of the sensor has been shown in
The reference neural network CNNref was a convolutional neural network, of an analogous structure to the neural networks CNN1 and CNN2 described in connection with the first example. The only differences were:
Curve (a) in
The inventors compared the performance of the reference neural network with:
The small database DBaux was formed based on 50 different sets of features X1 . . . X23. The auxiliary neural network was parametrized using this small database. The structure of the auxiliary neural network was identical to the structure of the reference neural network CNNref. A plot of the obtained classification performance is given in
The inventors formed a neural network according to the invention. To do this, a first neural network CNN1 was parametrized, using a first database DB1 containing 2000 images resulting from simulations such as described in connection with the first example, on a “simple” defect such as shown in
The inventors then used the auxiliary database DBaux specific to the complex defect as second database DB2, to parametrize a second neural network CNN2, the latter using the extracting block A1 of the first neural network CNN1. Parametrization of the second neural network thus merely required parametrization of the classifying block B2 of the second neural network CNN2. The latter corresponded to a neural network according to the invention. The classifying block B2 of the convolutional neural network CNN2 was parametrized by modifying parameters that remained fixed during the construction of the extracting block A1 of the first convolutional neural network CNN1. In the present case, it was a question of the shape of the defect. Specifically, the extracting block A1 of the first convolutional neural network CNN1 was parametrized using a defect of simple shape (T-shaped defect shown in
The second neural network CNN2 was a neural network according to the invention. The inventors applied the second neural network to test images. The estimation performance of the neural network CNN2 has been shown in
In the preceding examples, the input data of the neural networks consisted of matrices resulting from simulations of measurements performed using the eddy-current technique. The invention may be applied to other techniques usually employed in the field of non-destructive testing, provided that the input data take matrix form, i.e. a form comparable to an image. More precisely, other techniques that may be envisaged are:
According to another variant, in the first training operation, a first extracting block A1 coupled to a reconstructing block B′1 is employed. Just like the first classifying block B1 described above, the reconstructing block B′1 is a block for processing the data extracted by the first extracting block A1. In this variant the first neural network CNN′1 is an autoencoder. As shown in
In a manner known to those skilled in the art, an autoencoder is a structure comprising an extracting block A1 (called the encoder) that allows relevant information to be extracted from an input datum Ain, defined in an input space. The input datum is thus projected into a space called the latent space. In the latent space, the information extracted by the extracting block is called the code. The autoencoder comprises a reconstructing block B′1 (the decoder), allowing an output datum Aout, defined in a space that is generally identical to the input space, to be reconstructed from the code. Training is performed in such a way as to minimize an error between the input datum Ain and the output datum Aout. Following training, the code extracted by the extracting block is considered to be representative of the main features of the input data. In other words, the extracting block A1 allows the information contained in the input data Ain to be compressed.
The first neural network may notably be a convolutional autoencoder: each layer of the extracting block A1 results from application of a convolution kernel to a previous layer.
Unlike the classifying block B1 described above, the reconstructing block B′1 is not intended to determine the features of a defect. The reconstructing block allows the output data Aout to be reconstructed, based on the code (layer CJ), the reconstruction being as faithful as possible to the input datum Ain. The classifying block B1 and the reconstructing block B′1 are used for the same purpose: to allow parametrization of the first extracting block A1, the latter then being able to be used during the second training operation, to parametrize the second classifying block B2.
In this variant, the method follows steps 100 to 210 described above with reference to
Combining various databases, representative of different situations, to perform the first training operation allows a data-extracting block to be obtained that concentrates the useful information of each image.
Following training, steps 120 to 210 are performed as described above. It is a question of performing a second training operation, on the basis of the second database, so as to parametrize a classifying block B2, using the extracting block A1 resulting from the first training operation.
Number | Date | Country | Kind |
---|---|---|---|
2008564 | Aug 2020 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2021/073061 | 8/19/2021 | WO |