The present invention relates to the field of measurements and of the calibration of measurement sensors. The invention can be applied to any type of sensor.
The invention is presented here, by way of illustration and in a nonlimiting manner, in the field of non-destructive inspection with an ultrasound sensor. Based on the description of the invention, it will be clearly apparent to a person skilled in the art that the principle of the invention can be applied to other types of sensors to calibrate the positioning thereof.
Non-destructive inspection by ultrasounds is a non-invasive method for detecting faults in a part, based on the emission of ultrasounds and the detection of their reflection linked to the acoustic interfaces encountered.
A suitable sensor emits ultrasounds at a frequency (generally between 500 kHz and 100 MHz) chosen as a function of the nature of the part to be inspected. The sensor must be in direct contact with the part so that the propagated waves are not slowed down by the impedance of the air between the point of emission of the sensor and the part.
The waves are reflected at the acoustic interfaces encountered: the outlines of the part and internal defects.
The sensor, placed in contact with the part to be inspected, intercepts the waves re-emitted by any defect.
The detected waves are converted into signals by an electronic assembly associated with the sensor. Software assembles these signals to form an image of the interior of the part. The analysis of the images makes it possible to discriminate the echoes due to a defect from those linked to the geometry of the part.
In order to position and orient a particular point of a sensor (for example a point of emission of the signal) accurately in space, it is necessary to determine the transformation matrix (translation and rotation) between the sensor-holder (located by a positioning device) and the particular point of the sensor. A transformation matrix is a matrix which makes it possible, from the coordinates of an initial point, to find those of its image by a given geometrical transformation. In other words, by knowing the coordinates of the sensor-holder, the transformation matrix makes it possible to determine the coordinates of the point of emission of the signal.
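The principle described above can be sketched in code; the 4×4 matrix and the coordinates below are purely illustrative values (not taken from the invention), chosen only to show how the image of the sensor-holder point is obtained by the transformation:

```python
import numpy as np

# Hypothetical 4x4 homogeneous transformation matrix: a 90-degree rotation
# about z combined with a translation of (10, 0, 5) between the
# sensor-holder and the point of emission (illustrative values only).
T = np.array([
    [0.0, -1.0, 0.0, 10.0],
    [1.0,  0.0, 0.0,  0.0],
    [0.0,  0.0, 1.0,  5.0],
    [0.0,  0.0, 0.0,  1.0],
])

# Sensor-holder position in homogeneous coordinates (x, y, z, 1).
holder = np.array([1.0, 2.0, 3.0, 1.0])

# Image of the point by the transformation: coordinates of the emission point.
emission = T @ holder
print(emission[:3])  # -> [8. 1. 8.]
```

The same matrix applied to any point located by the positioning device yields the corresponding emission-point coordinates.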
In the field of non-destructive inspection, an operator scans the sensor over an entire, previously defined, zone of examination of the part to be inspected. It is vitally important for the entire zone to be scanned by the sensor. The set of the coordinates of the point of emission of the signal is a crucial element which makes it possible to check that all of the zone has indeed been scanned.
All sensors differ from one another in their geometry. It is therefore necessary to design a sensor support (also called sensor-holder) suitable for each new sensor. Thus, the transformation matrix between the sensor-holder and the sensor is, each time, different and unique. It is difficult to obtain this transformation matrix rapidly, simply and accurately.
The document FR 3 087 254 B1 proposes a method for configuring a device for non-destructive inspection of a mechanical part that makes it possible to determine the position and the orientation of the active surface of the sensor in the reference frame linked to the zone of examination of the mechanical part to be inspected. This method comprises a step of learning the origin and the axes of the active surface in the reference frame associated with a first rigid body fixedly linked to the sensor. This step has to be renewed for each different sensor and first rigid body assembly, and also for any new sensor-holder.
The invention aims to mitigate all or part of the problems mentioned above by proposing a calibration method and device which make it possible to determine, simply and rapidly, the transformation matrix between a sensor-holder and the sensor, while guaranteeing that the errors linked to the manipulation are minimized.
To this end, the subject of the invention is a method for calibrating a device for non-destructive inspection of a mechanical part, the device comprising:
Advantageously, the calibration method according to the invention further comprises a step of determination of a third transformation matrix making it possible to switch from the reference reference frame to the reference frame linked to the sensor.
Advantageously, the calibration method according to the invention comprises, prior to the step of determination of the first transformation matrix from the reference reference frame to a reference frame of the block linked to the calibration block, a step of determination of the reference frame of the block linked to the calibration block in the reference reference frame.
Advantageously, the calibration method according to the invention comprises, prior to the step of determination of the second transformation matrix making it possible to switch from a reference frame linked to the sensor-holder to a reference frame linked to the sensor, a step of determination of the reference frame linked to the sensor in the reference frame linked to the sensor-holder.
Advantageously, the calibration method according to the invention comprises, prior to the step of determination of the reference frame linked to the sensor in the reference frame linked to the sensor-holder, a step of determination of a fourth transformation matrix making it possible to switch from the reference frame linked to the sensor-holder to the reference reference frame.
The invention also relates to a device for non-destructive inspection of a mechanical part, comprising:
Advantageously, the computer is further configured to determine a third transformation matrix making it possible to switch from the reference reference frame to the reference frame linked to the sensor.
Advantageously, the inspection device according to the invention further comprises a pointing device comprising a tip and fixedly linked to a second rigid body, the pointing device being capable of determining the position of points on a surface.
The invention also relates to a computer program comprising instructions which cause the device according to the invention to execute the steps of the calibration method according to the invention.
The invention also relates to a computer-readable storage medium on which is stored said computer program.
The invention also relates to a method for real-time visualization of a signal of non-destructive inspection of a mechanical part, the signal being emitted by a non-destructive inspection device comprising:
The invention will be better understood and other advantages will become apparent on reading the detailed description of an embodiment given by way of example, the description being illustrated by the attached drawing in which:
In the interests of clarity, the same elements will bear the same references in the various figures. For better visibility and in the interests of improved understanding, the elements are not always represented to scale.
The object of the invention is to determine, simply and rapidly, the transformation matrix between the sensor-holder and the sensor, in order to calibrate the non-destructive inspection device (by making the mathematical link between the movement tracking device and the point of emission of the sensor).
The optical movement tracking system 1 is associated with an orthonormal reference frame R0=(O, x0, y0, z0) in which O is the origin of the reference frame and x0, y0, z0 are mutually orthogonal normed vectors.
The optical movement tracking system 1 determines the cartesian coordinates and the orientation of a rigid body in the orthonormal reference frame of the optical movement tracking system 1.
The optical movement tracking system 1 comprises at least two cameras and one or more infrared emitters. Other types of optical system can be used in the context of the invention, for example an optical system based on laser and/or with no-volume markers of pad type. It is important to specify that the system 1 can be a passive (optical or non-optical) movement tracking system.
The non-destructive inspection device 10 comprises a first rigid body 2 linked to a probe, or sensor, 3 and a sensor-holder 8. The sensor 3 is secured to the sensor-holder 8, which is fixedly linked to the first rigid body 2. The first rigid body 2, the sensor-holder 8 and the sensor 3 are fixedly linked and form an indivisible assembly during the non-destructive inspection. The first rigid body 2 comprises at least three spherical targets, reflective in the infrared, situated at distinct positions. The first rigid body 2 is associated with an orthonormal reference frame Rc=(C, xc, yc, zc) in which C is the origin of the reference frame and xc, yc, zc are mutually orthogonal normed vectors.
In a preferred embodiment, the first rigid body 2 comprises six spherical targets. The sensor 3 is for example a single-element ultrasound probe. It comprises an emitting and receiving surface, called active surface, 31. The active surface 31 is a flat rectangular surface. In a variant, the sensor 3 is of another type, for example an eddy current probe. Generally, an active surface is any surface emitting or receiving physical signals belonging to a non-destructive inspection sensor. For example, in the case of a contact single-element ultrasound sensor, that corresponds to the surface of the piezoelectric element. In the case of a single-element ultrasound sensor with a “Plexiglass” shoe, that corresponds to the surface of the shoe through which the ultrasound signals are emitted.
The inspection device 10 comprises a calibration block and a computer 6. Furthermore, the computer is configured to perform the following steps which will be explained hereinbelow:
The computer is further configured to determine a third transformation matrix making it possible to switch from the reference reference frame (R0) to the reference frame (RS) linked to the sensor.
The invention is described for the tracking of a sensor, but it applies equally to the tracking of several sensors, simultaneously and independently.
The non-destructive inspection device comprises a computer 6 linked to the optical movement tracking system 1 and to a control module 5. The computer 6 is for example a computer or an electronic circuit board. It notably comprises a processor running a computer program implementing the method which will be described and a memory to store the results thereof. It also comprises input and output interfaces and can be associated with a visualization screen.
The link between the computer 6 and the optical movement tracking system 1 can be wired or wireless. Similarly, the link between the computer 6 and the control module 5 can be wired or wireless.
Advantageously, the non-destructive inspection device comprises a pointing device 4.
In a preferred embodiment, the second rigid body 41 comprises seven spherical targets.
The non-destructive inspection device comprises a control module 5 provided with at least one actuation button 51. Preferably, the control module 5 is mounted on the pointing device 4 to facilitate the use thereof.
To express a vector \( \vec{v} \) in the reference frame RA, the notation \( {}^{A}\vec{v} \) is used. It should be noted that a vector serves equally to express a position and a displacement.
The transformation (composed of a translation and/or of a rotation) making it possible to switch from a reference frame RA to a reference frame RB is defined by the transformation matrix ATB.
This transformation matrix, of dimension 4×4, is composed as follows:

$$ {}^{A}T_{B} = \begin{pmatrix} {}^{A}\vec{x}_{B} & {}^{A}\vec{y}_{B} & {}^{A}\vec{z}_{B} & {}^{A}\vec{t}_{B} \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

\( {}^{A}\vec{x}_{B} \), \( {}^{A}\vec{y}_{B} \) and \( {}^{A}\vec{z}_{B} \) respectively designate the unitary vectors following the axes xB, yB and zB of the reference frame RB, expressed in the reference frame RA; each occupies one 3×1 column of the matrix.
\( {}^{A}\vec{t}_{B} \) is the vector expressing the origin of the reference frame RB in the reference frame RA.
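Assembling a matrix with this column structure is direct; the following sketch uses illustrative unit vectors and an illustrative origin (a 90-degree rotation about zA and a shift along xA, values not taken from the invention):

```python
import numpy as np

def make_transform(x_b, y_b, z_b, t_b):
    """Assemble the 4x4 transformation matrix ATB from the three unit
    vectors of RB expressed in RA (first three columns) and the origin
    of RB expressed in RA (fourth column)."""
    T = np.eye(4)
    T[:3, 0] = x_b
    T[:3, 1] = y_b
    T[:3, 2] = z_b
    T[:3, 3] = t_b
    return T

# Illustrative example: RB rotated 90 degrees about zA, origin at (5, 0, 0).
T = make_transform([0.0, 1.0, 0.0], [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0], [5.0, 0.0, 0.0])

# The origin of RB, expressed in RB as (0, 0, 0), maps to (5, 0, 0) in RA.
print(T @ np.array([0.0, 0.0, 0.0, 1.0]))  # -> [5. 0. 0. 1.]
```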
The movement tracking system 1 is capable of locating the rigid bodies 2, 41 in space. A rigid body is a non-deformable collection of spherical markers that reflect infrared rays and can thus be located by the optical positioning system. After calibration, the movement tracking system can associate an origin reference frame with a rigid body, which serves to qualify the position and the orientation of rigid bodies in the reference frame of the movement tracking system.
It should be noted that, in order to be correctly located, the rigid bodies must be situated within the solid angle seen by the movement tracking system.
Notably, the movement tracking system is supplied with a factory-calibrated tool that makes it possible to obtain the coordinates of a point of the space within the reference frame of said movement tracking system. Hereinbelow, this tool is called second rigid body 41.
Two different rigid bodies are used in the calibration method of the invention: the second rigid body 41 previously described and the first rigid body 2 used to locate the sensor-holder 8. It is this latter first rigid body 2 which forms the object of the calibration procedure described in the present invention. The aim of this calibration method is to be able to determine rapidly, simply and accurately the geometrical transformation (rotation and/or translation) that exists between the origin of the first rigid body 2 as located by the movement tracking system and the point of emission of the ultrasounds by the sensor 3, independently of the sensor-holder used. In other words, the calibration method of the invention makes it possible to determine the geometrical transformation, in transformation matrix form, between the origin of the first rigid body 2 and a point of the sensor regardless of the form of the sensor-holder necessarily disposed between these two elements.
The inspection device 10 according to the invention comprises
The method for calibrating a device for non-destructive inspection of a mechanical part 7 comprises the following steps:
Advantageously, the calibration method according to the invention further comprises a step 140 of determination of a third transformation matrix making it possible to switch from the reference reference frame (R0) to the reference frame (RS) linked to the sensor. The third transformation matrix is obtained by multiplication of the second transformation matrix with the first transformation matrix.
The aim of the steps 100 and 110 is to calibrate the calibration block 14 itself, that is to say determine its dimensions (length and width) and associate with it a reference frame RB linked to the calibration block. The movement tracking system and the calibration block must be firmly fixed in order to avoid any modification of their relative positions and orientations. Then, an operator uses the movement tracking system and the second rigid body 41 in order to acquire three positions on the surface of the calibration block 14. These three positions Q1, Q2 and Q3 are previously marked, advantageously but not mandatorily by small holes created for this purpose on the flat top surface of the calibration block 14, as indicated in
The calibration method according to the invention can comprise, prior to the step 110 of determination of the first transformation matrix from the reference reference frame R0 to a reference frame of the block RB linked to the calibration block, a step 105 of determination of the reference frame of the block RB linked to the calibration block in the reference reference frame R0.
More specifically, the computer 6 receives as input three vectors:
The computer 6 calculates the following data:
The calculation steps are detailed hereinbelow.
The vector defining the center of the calibration block in the reference reference frame R0 is calculated:
The vector defining the length of the calibration block in the world reference frame R0 is calculated:
The vector defining the width of the calibration block in the world reference frame R0 is calculated:
The length of the calibration block can thus be calculated:
The width of the calibration block can also be calculated:
From these data, it is possible to calculate the three unitary vectors of the block reference frame RB in the reference reference frame R0:
Finally, the transformation matrix making it possible to switch from the reference reference frame R0 to the block reference frame RB is calculated:
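These calculations can be sketched in code under an explicit assumption about the roles of the three acquired positions (an assumption made here for illustration: Q1 and Q2 taken as the ends of one long edge of the top surface, Q3 closing the width); the numeric coordinates are purely illustrative:

```python
import numpy as np

# Hypothetical positions acquired with the pointing device, in R0 (mm).
# Assumption: Q1-Q2 spans the length of the block, Q2-Q3 spans its width.
Q1 = np.array([0.0, 0.0, 0.0])
Q2 = np.array([200.0, 0.0, 0.0])
Q3 = np.array([200.0, 100.0, 0.0])

length_vec = Q2 - Q1                          # length vector of the block in R0
width_vec = Q3 - Q2                           # width vector of the block in R0
center = Q1 + 0.5 * (length_vec + width_vec)  # center of the top surface in R0

length = np.linalg.norm(length_vec)           # scalar length of the block
width = np.linalg.norm(width_vec)             # scalar width of the block

# The three unitary vectors of the block reference frame RB expressed in R0.
x_b = length_vec / length
y_b = width_vec / width
z_b = np.cross(x_b, y_b)                      # normal to the top surface

# Transformation matrix 0TB from R0 to RB, in the homogeneous form above.
T0B = np.eye(4)
T0B[:3, :3] = np.column_stack([x_b, y_b, z_b])
T0B[:3, 3] = center
print(length, width, center)  # -> 200.0 100.0 [100.  50.   0.]
```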
The aim of the steps 120 and 130 is to determine the dimensions of the sensor (length and width), and the transformation matrix between the first rigid body situated on the sensor-holder and the center of the sensor, as indicated in
For the steps 120 and 130 to be carried out, the sensor 3 is disposed (step 115) on the flat top surface of the calibration block 14. The operator acquires three positions of the first rigid body present on the sensor-holder, as visible in
Note that the sensor-holder is designed in such a way that it is the sensor itself and not the sensor-holder which comes into contact with the borders (blockers) of the calibration block when the sensor-holder is brought to the positions P1, P2 and P3.
In addition to the data previously available, the computer 6 receives as input three vectors:
The computer 6 calculates the following data:
Transformation matrix making it possible to switch from the sensor-holder reference frame RH to the sensor reference frame RS: HTS
The calculation steps are detailed hereinbelow.
The following vectors are calculated:
The length of the sensor is calculated:
The width of the sensor is calculated:
Finally, the three unitary vectors of the sensor reference frame RS in the reference reference frame R0 are calculated:
It is then possible to calculate the transformation matrix making it possible to switch from the sensor-holder reference frame RH to the reference reference frame R0, the sensor-holder being at P3:
In other words, the calibration method according to the invention comprises, prior to the step 125 of determination of the reference frame RS linked to the sensor in the reference frame RH linked to the sensor-holder, a step 123 of determination of a fourth transformation matrix making it possible to switch from the reference frame RH linked to the sensor-holder to the reference reference frame R0.
The next step consists in calculating the coordinates of the center of the sensor in the reference reference frame R0 (the calculation is made for the sensor-holder at P3):
The three unitary vectors of the sensor reference frame RS in the sensor-holder reference frame RH should also be calculated:
Finally, the center of the sensor reference frame RS in the sensor-holder reference frame RH is calculated:
In other words, the calibration method comprises, prior to the step 130 of determination of the second transformation matrix making it possible to switch from a reference frame RH linked to the sensor-holder to a reference frame RS linked to the sensor, a step 125 of determination of the reference frame RS linked to the sensor in the reference frame RH linked to the sensor-holder.
To finish, it is possible to calculate the transformation matrix making it possible to switch from the sensor-holder reference frame RH to the sensor reference frame RS:
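The chaining underlying this final matrix can be sketched as follows: knowing the pose of the sensor-holder in R0 and the pose of the sensor in R0, the switch from RH to RS is their composition through R0. The matrices below are purely illustrative (pure translations, not values produced by the method):

```python
import numpy as np

def invert_transform(T):
    """Invert a rigid homogeneous transform: R -> R^T, t -> -R^T t."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Illustrative poses in R0: sensor-holder at (10, 0, 0), sensor 25 units
# below it along z (hypothetical values).
T0H = np.eye(4); T0H[:3, 3] = [10.0, 0.0, 0.0]
T0S = np.eye(4); T0S[:3, 3] = [10.0, 0.0, -25.0]

# HTS = (0TH)^-1 . 0TS: pose of the sensor in the sensor-holder frame RH.
THS = invert_transform(T0H) @ T0S
print(THS[:3, 3])  # -> [  0.   0. -25.]
```

With this matrix in hand, every position of the first rigid body reported by the movement tracking system can be converted into the position of the sensor.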
The invention defines a mechanical device composed of a calibration block (in plate form) intended to be calibrated by an optical (movement tracking) positioning system and which makes it possible, in conjunction with mathematical calculations, to calibrate a sensor-holder simply, rapidly and accurately. The calibration of the sensor-holder can be performed directly by using a tool that makes it possible to record the positions of different characteristic points. The use of the calibrated calibration block naturally makes it possible to incorporate a constraint of coplanarity which increases the accuracy of the calibration. It is the combination of the calibration of the calibration block and of the sensor which gives this method a particularly advantageous aspect. The result thereof is great simplicity and rapidity of implementation as well as a significant improvement in the accuracy of the calibration compared to the conventional method.
The invention relates also to a method for real-time visualization of a signal of non-destructive inspection of a mechanical part, which can optionally be coupled to the calibration method.
For the implementation of the visualization method, the inspection device 10 can comprise an augmented reality visualization device 16 facing the mechanical part 7, to which an augmented reality reference frame (RA) is linked. It is through this visualization device 16 that the operator sees the real mechanical part, and it is on this visualization device 16 that the holographic 3D representations (detailed later) are displayed superimposed on the view of the mechanical part on the visualization device 16.
The computer 6 can then be configured to
The term occlusion means the concealment of virtual objects behind real objects. An occlusion occurs when an object in a 3D space blocks the view of another object. In augmented reality, computer-generated objects are placed in a real scene to provide additional information or to modify the nature of the real objects. The virtual objects and the real scene must therefore be perfectly aligned in order to maintain high levels of realism and to allow the objects to behave as if they were in normal conditions.
The computer 6 notably makes it possible to assemble the signals received by the sensor to form an image of the interior of the part.
And the augmented reality visualization device 16 is configured to display:
Thus, the visualization method makes it possible to obtain a display in the augmented reality visualization device 16. The operator sees therein, through the visualization device 16, the mechanical part 7, the sensor-holder 8 and the sensor 3. In addition to these real objects, the display comprises a holographic 3D representation 7′ of the mechanical part, a holographic 3D representation 8′ of the sensor-holder, and a holographic 3D representation 3′ of the sensor. These three holographic 3D representations are superimposed on the display of the real objects. In addition to the real objects and these holographic 3D representations superimposed on the real objects, the display comprises the holographic 3D representation of the occlusion 80 and the surface of intersection 81, superimposed on the holographic 3D representations superimposed on the real objects.
Hereinafter in the document, reference will be made to the following reference frames:
The visualization method according to the invention comprises, beforehand, a step 190 of calibration of the augmented reality visualization device 16 in the reference reference frame (R0).
In order to define the position of the known elements in the world reference frame in the reference frame of the augmented reality visualization device, the augmented reality visualization device must be calibrated. The augmented reality visualization device establishes an anchor, or a marker, at the location of the QR code. During the calibration, this marker is positioned in the world reference frame, defined by the optical positioning system. The calibration makes it possible to establish the relationship between the two worlds.
In the world of the augmented reality visualization device, the reference frame RQR associated with the QR code 86 is defined by its origin and by its three unit vectors, each of which is expressed in the world reference frame.
The calibration step 1000 defines and describes how to obtain by calibration the following transformation matrices:
The origin F of the sensor 3 can be positioned in the part reference frame RP:
All the information I linked spatially to this point F such as the ultrasound paths and the sector scan reconstructed from the latter and from the physical signals can then be located in the reference frame of the augmented reality visualization device RRA:
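This chained change of reference frame can be sketched as follows; the matrices and the point are purely illustrative (pure translations), and the names merely follow the ATB notation introduced earlier:

```python
import numpy as np

# Hypothetical transforms (pure translations for readability):
# RA <- R0 (augmented reality frame from world frame), R0 <- RP (world
# frame from part frame).
T_RA_0 = np.eye(4); T_RA_0[:3, 3] = [1.0, 0.0, 0.0]
T_0_P = np.eye(4);  T_0_P[:3, 3] = [0.0, 2.0, 0.0]

# Origin F of the sensor expressed in the part reference frame RP.
F_in_P = np.array([0.0, 0.0, 3.0, 1.0])

# Chain the transforms to express F in the augmented reality frame RRA;
# any information located relative to F follows the same chain.
F_in_RA = T_RA_0 @ T_0_P @ F_in_P
print(F_in_RA[:3])  # -> [1. 2. 3.]
```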
In order to clarify the different elements of the 3D scene to be visualized, the 3D object containing the signals to be viewed by the operator can hereinafter be designated as “main object” and the 3D objects with respect to which the main object is positioned (notably the part and the sensor) can be designated as “secondary objects”. It should be noted that, with respect to augmented reality, each of the objects described previously is visualized in hologram form and is superimposed perfectly on a real object, thus creating an occlusion of this real object.
As already stated, the objective of the method of the invention is to allow the user to view the main object constructed from ultrasound data (sector scan for example) and to position it in 3D in augmented reality by giving the impression to the user of viewing the interior of the part.
For the human being to correctly visually interpret the positioning of this main object in the 3D visual space, the display of the hologram of this main object is not sufficient. The hologram will give the impression of floating and its spatial relationship with the volume of the part will be lost. In the particular case described here, the volume concerned is the volume passed through by the ultrasound signals that were used to construct this main object.
The invention consists in displaying the hologram of this main object accompanied by a set of graphic occlusions corresponding to the holograms of the secondary objects. The secondary objects are the part and the sensor (accompanied by its sensor-holder) in particular. For these occlusions not to totally conceal the real objects, the occlusions advantageously have a medium level of transparency. The graphic occlusions of the part (comprising possibly a weld to be inspected) and of the sensor are therefore produced with textures and colors that have a percentage of transparency.
In order for the brain to interpret the main object as forming part of the internal integrity of the part (and possibly of the weld), it is essential for this main object to appear on the graphic occlusion of the part. In order to achieve this objective, a transparent opening of the graphic occlusions of the part is produced. This is the occlusion 80. This opening is centered on the sensor (more specifically the point F of the sensor, the point with respect to which the main object to be displayed is located). This opening 80 is also a 3D object and a part of the surface of its volume coincides with the surface of the main object.
So as not to hamper the operator with information situated outside of his or her field of view (for example the back of the part), the visible data, and therefore the direction of the opening, are selected as a function of the position and of the orientation of the eye of the operator. Thus, if the operator views the holograms from the front or the rear of the part, the holograms remain well oriented. That is made possible by virtue of the calibration of the visualization device 16 with respect to the movement tracking device 1.
The step 240 of determination of the surface of intersection 81 comprises the following steps:
In other words, the step 240 allows the visualization of a sector scan in the augmented reality visualization device 16. This is the surface of intersection 81 between the plane containing the axis of emission and the occlusion 80.
By virtue of the invention, it is thus possible for the operator to view the secondary objects (that is to say real objects: mechanical part 7 to be inspected, the sensor-holder 8 and the sensor 3), on which the main object containing the signals to be visualized, that is to say the assembly formed by the occlusion 80 and the surface 81, is superimposed.
In a particular embodiment, the steps of the method according to the invention are implemented by computer program instructions. Consequently, the invention also targets a computer program on an information medium, this program being able to be implemented in a computer, this program comprising instructions suited to the implementation of the steps of a method as described above.
This program can use any programming language, and be in the form of source code, object code, or of intermediate code between source code and object code, such as in a partially compiled form, or in any other desirable form. The invention also targets a computer-readable information medium, comprising computer program instructions suited to the implementation of the steps of a method as described above.
The information medium can be any entity or device capable of storing the program. For example, the medium can comprise a storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or even a magnetic storage means, for example a diskette or a hard disk.
Also, the information medium can be a transmissible medium such as an electrical or optical signal, which can be rerouted via an electrical or optical cable, wirelessly or by other means. The program according to the invention can in particular be downloaded over a network of Internet type.
Alternatively, the information medium can be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method according to the invention.
It will become apparent more generally to the person skilled in the art that various modifications can be made to the embodiments described above, in light of the teaching which has just been disclosed to him or her. In the claims which follow, the terms used should not be interpreted as limiting the claims to the embodiments set out in the present description, but should be interpreted to include therein all the equivalents that the claims aim to cover through the formulation thereof and the planning of which is within the scope of the person skilled in the art based on his or her general knowledge.
This application is a National Stage of International patent application PCT/EP2022/073207, filed on Aug. 19, 2022, which claims priority to foreign French patent application No. FR 2109075, filed on Aug. 31, 2021, the disclosures of which are incorporated by reference in their entireties.