This document relates to a method for estimating a movement of particles in a bone.
To estimate a movement of blood in a body, methods using ultrasound systems have been proposed. These ultrasound systems produce videos based on echoes of ultrasonic waves that have passed through the body. These videos are then processed to extract movements.
These methods known from the state of the art make the following three hypotheses:
However, these methods do not make it possible to estimate the movement of blood inside a bone with sufficient precision.
A purpose of the invention is to precisely estimate the movement of particles in a bone.
For this purpose, according to a first aspect, the method as defined in claim 1 is proposed.
The method according to the first aspect is such that the computing of the movement involves not a single phase velocity (that is to say a velocity of propagation of the ultrasonic wavefront), but several phase velocities in various directions, which takes into account the fact that the bone is an anisotropic elastic medium. The method according to the first aspect thus leads to a much more precise movement estimate than a method whose computation would be based on the hypothesis that the bone is an isotropic elastic medium, in other words on the hypothesis that the phase velocity of a wave in the bone is the same in any direction in space.
The method according to the first aspect may further comprise the optional features set out in the dependent claims, taken alone or in combination where technically possible.
The video filtering mentioned in claims 2 and 3 is very advantageous. The first raw video or the second raw video may show crossing points between multiple superimposed blood vessels containing blood flowing in two different directions. If these vessels are thinner than the resolution cell of the ultrasound system used, computing phase variations between images of the first raw video or the second raw video can lead to erroneous movement computation. Computing phase variations between filtered video images as indicated in claims 2 and 3 overcomes this difficulty.
Other optional but advantageous features are set forth in claims 4 to 9.
Provision is also made of:
Other characteristics, aims and advantages of the invention will emerge from the description which follows, which is purely illustrative and not limiting, and which must be read with reference to the appended drawings in which:
In all the figures, similar elements bear identical references.
With reference to
The probe 2 is conventional. The probe 2 comprises a plurality of ultrasonic wave emitters and receivers. The emitters and receivers are distributed in such a way that an ultrasonic wave emitted by an emitter can be received by a receiver after its reflection at a point located inside a body to be analyzed, in particular a bone as will be seen later.
The imaging device 4 is configured to generate raw videos showing the interior of the structure to be analyzed from ultrasonic echoes received by the probe. The imaging device is known from the state of the art.
As a reminder, the impulse response of the imaging device 4 can be described by the point spread function (PSF). The name of this function illustrates the fact that the response of the imaging device 4 to a point object is a pattern visible in an image generated by the imaging device 4, this pattern having parallel fringes, as illustrated in
There is a close relationship between the fringes of the pattern and the ultrasonic wavefront from which the pattern comes (this wavefront being represented on the left of
On the one hand, the peak-to-peak distance between two successive fringes of the pattern is λ/2, knowing that λ=c0/f0, where c0 is the phase velocity of the ultrasonic wave, and f0 is the temporal frequency of the ultrasonic wave.
On the other hand, the inclination of the fringes of the pattern in an image produced by the imaging device is a function of the direction of propagation of the ultrasonic wave.
When a point object is static relative to the probe, the fringe pattern which constitutes the response is also static in a succession of images generated by the imaging device 4. But when the point object moves relative to the probe, the fringe pattern moves in the image succession. Thus, a signal observed in a given pixel of such a succession of images varies. This signal in particular has a phase which varies from one image of the video to another.
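As an order-of-magnitude illustration of this phase sensitivity, the round-trip phase shift produced in a fixed pixel by a small axial displacement of a point object can be sketched as follows (the frequency, velocity and displacement values are illustrative assumptions, not values prescribed by the method):

```python
import numpy as np

# Illustrative values only (not prescribed by the method).
f0 = 5e6         # temporal frequency of the ultrasonic wave (Hz)
c0 = 1540.0      # assumed phase velocity (m/s)
lam = c0 / f0    # wavelength lambda = c0 / f0 (here 308 µm)

d = 10e-6        # axial displacement of the point object between two frames (m)

# In pulse-echo, the round-trip path changes by 2*d, so the phase of the
# signal observed in a fixed pixel shifts by 2*pi*f0*(2*d)/c0 = 4*pi*d/lambda.
dpsi = 2 * np.pi * f0 * (2 * d) / c0
print(dpsi)  # ≈ 0.41 rad
```

Even a displacement far smaller than the wavelength thus produces a measurable phase variation, which is what phase-based movement estimation exploits.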
Returning to
The memory 12 is configured to store data which will be detailed below. The memory 12 is of any type. The memory 12 may in particular comprise a volatile memory for storing data temporarily (for example of RAM type), and a non-volatile memory for storing data persistently (for example of flash type, EEPROM, HDD, etc.).
The or each processor 10 is configured to execute a program comprising code instructions stored in the memory 12. When this program is executed by the or each processor 10, the device 6 implements a method for estimating the movement of particles in the structure probed by the probe using ultrasonic waves. This method is described below.
The or each processor 10 is of any type (CPU, controller, microcontroller, ASIC, FPGA, etc.). In what follows, the example of an embodiment in which the estimation device comprises a processor will be taken, it being understood that, in other embodiments, several processors 10 can be used, in particular to execute different tasks of the aforementioned program in parallel.
With reference to
In a probing step 100, the probe 2 is used to probe a bone using ultrasonic waves. The probe 2 emits ultrasonic waves at a frequency f0 towards the bone. These ultrasonic waves penetrate the bone.
The temporal frequency f0 has a value between 0.1 MHz and 10 MHz.
Ultrasonic echoes resulting from the reflection of these ultrasonic waves inside the bone are then received by the probe.
It should be noted that ultrasonic waves having followed different trajectories can reach the same point located inside the bone. The ultrasonic echoes resulting from the reflection of these ultrasonic waves at this point also follow different trajectories.
Consider in particular a fixed point located inside the bone, which will be called point P.
The probe 2 emits first ultrasonic waves with a temporal frequency f0 towards this point P, and receives first ultrasonic echoes obtained by reflection of the first ultrasonic waves at the point P.
The first ultrasonic waves and the first echoes have wavefronts oriented differently during their propagation in the bone. Thus:
The probe 2 also emits second ultrasonic waves at the same temporal frequency f0 towards the same point P, and receives second ultrasonic echoes obtained by reflection of the second ultrasonic waves at the point P.
The second ultrasonic waves and the second echoes have differently oriented wavefronts as they propagate through the bone. Thus:
The second ultrasonic waves and the second echoes have the same temporal frequency and are distinguished from the first ultrasonic waves and their echoes in particular by their propagation trajectory, and by the orientation of their wavefronts during their propagation. In other words, at least one of the following two conditions is verified:
Thus, at least two pairs of wavefront directions are formally associated with the same point P located inside the bone, these pairs of different directions characterizing different trajectories of emitted ultrasonic waves and received ultrasonic echoes passing through this point P. As will be seen below, the probe can use more than two pairs of wavefront directions in step 100.
As will be seen in more detail later, the directions normal to the wavefronts discussed previously can be directions in a two-dimensional space or a three-dimensional space. Moreover, these directions can be indicated by data in different forms, in particular angular data or vectors. For example, when considering directions in a plane, the first emission direction may be indicated by a first emission phase angle, and the first reception direction may be indicated by a first reception phase angle.
2.2) Reconstruction of Videos of the Bone from Ultrasonic Echoes
In a reconstruction step 102 known from the state of the art, the imaging device 4 uses the ultrasonic echoes received by the probe to reconstruct raw videos showing the interior of the bone.
Each raw video generated by the imaging device comprises a succession of images showing the interior of the bone at different times. When particles move inside the observed bone, for example red blood cells, these particles can be represented in different locations in different frames of the same video.
Each image represents a layer of the bone extending in a plane defined by two axes X and Z (shown in
In particular, the imaging device 4 generates a first raw video showing the interior of the bone at the point P, from the first ultrasonic echoes.
Moreover, the imaging device generates a second raw video showing the interior of the bone at the point P, from the second ultrasonic echoes.
Each raw video generated in the reconstruction step 102 is stored in memory 12 (in particular the first raw video and the second raw video).
Likewise, each pair of angles used by the probe in the probing step 100 is stored in the memory 12 (in particular the first pair of angles and the second pair of angles). This storage can be carried out before or after the probing step 100 implemented by the probe 2. The determination of these pairs of angles is known to the person skilled in the art.
The estimation device 6 implements a method for estimating the movement of particles in the bone. This method comprises the following steps, relating to
In a step 201, the estimation device obtains data associated with different ultrasonic waves, these data revealing the orientation of the wavefronts of the associated waves and of their echoes.
In particular, in step 201, the estimation device obtains first data associated with the first ultrasonic waves, the first data indicating the first emission direction and the first reception direction. The estimation device also obtains second data, the second data indicating the second emission direction, and the second reception direction. The second data is at least partly different from the first data.
In a step 202, the processor 10 obtains at least one ultrasonic wave phase velocity in the bone. It will be seen below that the number of velocities obtained during this step can vary, depending on the embodiment considered.
In a step 203, the processor 10 computes phase variations at the point P, between two images coming from different videos.
In particular, the processor 10 computes in step 203 a first phase variation ΔΨ11 at the point P between two first images issued directly or indirectly from the first raw video. This computing is a known step in the state of the art.
Moreover, the processor computes a second phase variation ΔΨ22 at the point P, between two second images separated by the same time difference as the two first images discussed previously. The two second images are issued directly or indirectly from the second raw video.
In reality, several phase variations (including the first phase variation ΔΨ11 and the second phase variation ΔΨ22) are computed by the processor for a pixel of fixed coordinates in images (including the two first images and the two second images), this pixel imaging the point P discussed previously. It can therefore be said that the first phase variation ΔΨ11 and the second phase variation ΔΨ22 are computed at the point P.
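Assuming the images are complex-valued (IQ) images, one common way to compute such a per-pixel phase variation is to take the phase of the lag-one product of the two frames; the sketch below illustrates this and is not necessarily the exact computation used by the estimation device:

```python
import numpy as np

def phase_variation(frame_a, frame_b):
    """Per-pixel phase variation between two complex (IQ) images.

    The phase of frame_b * conj(frame_a) is the phase advance of the
    signal at each pixel between the two frames.
    """
    return np.angle(frame_b * np.conj(frame_a))

# Toy example: a single pixel whose phase advances by 0.3 rad.
a = np.array([[1.0 + 0.0j]])
b = np.array([[np.exp(1j * 0.3)]])
dpsi = phase_variation(a, b)
print(dpsi)  # ≈ [[0.3]]
```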
In a step 204, the processor 10 computes a particle movement U at the point P. This movement U is a vector U with several components. The movement U is computed by the processor from the following data:
In a subsequent step, the processor can compute a particle velocity vector at the point P, by dividing the movement vector U by the time difference between the two images of the first video (corresponding to the time difference between the two images of the second video).
This velocity vector can then be displayed superimposed with an image of the interior of the bone by the display screen 8 (the point of origin of this vector being located at the pixel of this image which shows the point P discussed previously).
All the preceding steps are repeated for several points located inside the bone, imaged in different pixels of the videos considered.
In what follows, a conventional approach for estimating a movement of particles in a medium will be described first, before detailing different embodiments of the method 104, which differ from this conventional approach.
A known method consists in implementing steps 201 to 204 on images showing soft tissues only (therefore no bones). In this known method:
Furthermore, in this known method, the processor makes the hypothesis in the computing step 204 that the medium observed in the videos is isotropic, as illustrated in
In this known method, the first data comprise:
Moreover, the second data comprise:
η1=θ1
η2=θ2
μ1=φ1
μ2=φ2
Where:
These equalities between group angle and phase angle come from the fact that the medium traversed by the waves has isotropic elasticity.
In this known method, the computing carried out in step 204 comprises a resolution of the following matrix equation by the processor 10, in which the two components UZ and UX are the unknowns:
Thus, only one phase velocity c0 is involved in this computing.
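The exact matrix equation is not reproduced here; for illustration, a commonly assumed form of this isotropic computation relates each phase variation linearly to the two movement components through the single velocity c0, and can be sketched as follows (all numeric values and the linear relation itself are illustrative assumptions):

```python
import numpy as np

# Assumed isotropic model (sketch, not the exact claimed equation):
#   dPsi_mn = (2*pi*f0/c0) * ((cos(theta_m) + cos(phi_n)) * Uz
#                           + (sin(theta_m) + sin(phi_n)) * Ux)
f0, c0 = 5e6, 1540.0
k = 2 * np.pi * f0 / c0

theta = np.array([-0.2, 0.2])   # two emission group angles (rad), illustrative
phi = np.array([-0.2, 0.2])     # two reception group angles (rad), illustrative

A = k * np.column_stack([np.cos(theta) + np.cos(phi),
                         np.sin(theta) + np.sin(phi)])

U_true = np.array([5e-6, 2e-6])    # (Uz, Ux) in metres, illustrative
dpsi = A @ U_true                  # the two phase variations dPsi_11, dPsi_22
Uz, Ux = np.linalg.solve(A, dpsi)  # invert the 2x2 system to recover (Uz, Ux)
```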
In a first embodiment of the estimation method 104:
The first embodiment differs from the known method discussed previously in that the medium traversed contains a bone, and the processor assumes that the bone is an elastically anisotropic medium, as illustrated in
In the presence of a bone, image reconstruction must correct the refraction effect appearing at the interface between the bone and the surrounding soft tissue. An ultrasound ray changes its direction as it enters or exits a bone, as shown in
In the first embodiment, the first data and the second data comprise the same phase angles η1, μ1, η2 and μ2 as in the known method.
On the other hand, since the hypothesis is made in the first embodiment that the bone is an elastically anisotropic medium, the phase angles η1, μ1, η2 and μ2 are no longer equal to the group angles θ1, φ1, θ2 and φ2, respectively.
For example, the aforementioned phase angles η1, μ1, η2 and μ2 can be determined from the group angles of the corresponding waves or echoes, and from an anisotropy model of the bone known to the person skilled in the art.
The equation below is a model illustrating the relationship between the group angle θ of a wave, the phase angle η of that wave, and the phase velocity v of that wave.
In this equation, v depends on the phase angle η, hence the partial derivative of v with respect to η appearing in it.
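One common form of this group-angle/phase-angle relation, found in the anisotropic-wave literature (the exact equation of the method is not reproduced here), is:

```latex
\tan\theta \;=\;
\frac{\tan\eta + \dfrac{1}{v(\eta)}\dfrac{\partial v}{\partial \eta}}
     {1 - \tan\eta \,\dfrac{1}{v(\eta)}\dfrac{\partial v}{\partial \eta}}
```

In an isotropic medium, ∂v/∂η = 0 and this relation reduces to θ = η, consistent with the angle equalities of the known method.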
This equation can be solved to determine:
This determination of the phase angles from group angles can be implemented by the processor 10 during step 201. Alternatively, the phase angles are determined by other equipment, then supplied to the estimation device 6. In the first embodiment, the processor 10 further obtains different phase velocities of ultrasonic waves in the bone. These phase velocities are no longer equal to one and the same velocity c0, as was the case in the known method. Thus, the phase velocities in the following directions are obtained by the processor 10:
In this embodiment, it will be considered that the waves arriving at the point P are compression waves; their phase velocity is noted vp.
To determine these phase velocities, the processor 10 or other equipment can use an anisotropy model of the bone known to the person skilled in the art, formed by a mathematical function, for example for a compression wave whose phase velocity is noted vp. This mathematical function makes it possible to compute, from a phase angle, the phase velocity of an ultrasonic wave in the bone in the direction indicated by this phase angle.
The ultrasonic wave phase velocities can therefore be as follows, when such an anisotropy model of the bone is used: vp(η1), vp(η2), vp(μ1) and vp(μ2).
An anisotropy model usable by the person skilled in the art to determine these phase velocities is described in document “Measuring anisotropy of elastic wave velocity with ultrasound imaging and an autofocus method: application to cortical bone.”, Guillaume Renaud, Pierre Clouzet, Didier Cassereau, Maryline Talmant, published in November 2020. This anisotropy model is as follows:
The four parameters listed above are called Thomsen parameters, because they were proposed by L. Thomsen in the paper entitled “Weak elastic anisotropy”, published in 1986.
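As an illustration, a weak-anisotropy phase-velocity model of the Thomsen type for a compression (qP) wave can be sketched as below; the functional form and the parameter values are assumptions for illustration, and the model actually used is the one described in the cited references:

```python
import numpy as np

def vp_weak_anisotropy(eta, vp0, epsilon, delta):
    """Phase velocity of a compression (qP) wave at phase angle eta
    (measured from the symmetry axis), in Thomsen's weak-anisotropy
    approximation: vp(eta) = vp0 * (1 + delta*sin^2*cos^2 + epsilon*sin^4)."""
    s, c = np.sin(eta), np.cos(eta)
    return vp0 * (1.0 + delta * s**2 * c**2 + epsilon * s**4)

# Illustrative cortical-bone-like values (assumptions, not measured data).
vp0, eps, dlt = 3500.0, 0.1, 0.05
v_axis = vp_weak_anisotropy(0.0, vp0, eps, dlt)        # along the axis: vp0
v_perp = vp_weak_anisotropy(np.pi / 2, vp0, eps, dlt)  # across it: ≈ vp0*(1+eps)
```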
Another anisotropy model usable by the person skilled in the art to determine these phase velocities, also proposed by L. Thomsen in 1986, is the following:
Where:
In the first embodiment, the computing carried out in step 204 involves the phase velocities of ultrasonic waves in the bone vp(η1), vp(η2), vp(μ1) and vp(μ2), and not a single velocity c0 as is the case in the known method.
The computing 204 comprises in the first embodiment a resolution of the following matrix equation by the processor 10, in which the two components UZ and UX are unknowns:
Making the hypothesis here that the bone is an anisotropic medium is more faithful to reality. Consequently, the estimation of the particle movement is more precise in the first embodiment than in the known method, the principles of which are recalled in section 2.3.1.
In the embodiments described above, a movement at the point P is computed from only two phase variations ΔΨ11 and ΔΨ22, respectively associated with first and second data in the form of pairs of phase angles (η1,μ1) and (η2,μ2). This is sufficient to determine the two unknowns which are the two components UZ and UX.
However, in other embodiments, additional data can be obtained by the estimation device in step 201, giving rise to the computing of complementary phase variations during step 203, which are then used to compute a particle movement in the bone in step 204.
In general, the probe can emit ultrasonic waves at M different emission phase angles η1, …, ηm, …, ηM, and at N different reception phase angles μ1, …, μn, …, μN, which gives rise to NM pairs of angles (ηm, μn). NM phase variations ΔΨmn at the point P are then computed.
The computing of the known method can therefore be generalized to the following equation:
Similarly, the computing of the first embodiment can be generalized to the following equation, involving NM different phase velocities:
These embodiments involve the prior determination of NM ultrasonic wave phase velocities (one velocity per angle ηm and per angle μn).
These embodiments use excess data to determine two unknowns. This excess number nevertheless has the advantage of providing an estimate of the movement that is more robust to noise.
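A hedged sketch of such an overdetermined computation follows; the exact matrix equation is not reproduced here, and both the linear relation and the anisotropy model below are illustrative assumptions:

```python
import numpy as np

# Assumed anisotropic relation (sketch, not the exact claimed equation):
#   dPsi_mn = 2*pi*f0 * ((cos(eta_m)/vp(eta_m) + cos(mu_n)/vp(mu_n)) * Uz
#                      + (sin(eta_m)/vp(eta_m) + sin(mu_n)/vp(mu_n)) * Ux)
f0 = 2.5e6

def vp(angle):
    # Placeholder anisotropy model (illustrative assumption).
    return 3500.0 * (1.0 + 0.1 * np.sin(angle) ** 2)

etas = np.deg2rad([-15.0, 0.0, 15.0])  # M = 3 emission phase angles
mus = np.deg2rad([-10.0, 10.0])        # N = 2 reception phase angles

rows = [[np.cos(e) / vp(e) + np.cos(u) / vp(u),
         np.sin(e) / vp(e) + np.sin(u) / vp(u)]
        for e in etas for u in mus]
A = 2 * np.pi * f0 * np.array(rows)    # (M*N) x 2 system matrix

U_true = np.array([4e-6, 1e-6])        # (Uz, Ux) in metres, illustrative
dpsi = A @ U_true                      # the NM = 6 phase variations
# Least-squares solution of the overdetermined system (robust to noise).
U_est, *_ = np.linalg.lstsq(A, dpsi, rcond=None)
```

With noisy measured phase variations, the least-squares solution averages the redundancy across the NM equations, which is precisely the robustness benefit mentioned above.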
The method can be generalized to allow the computing in step 204 of a movement in a three-dimensional space, where the movement to be estimated has three unknown components Ux, Uy and Uz. It then becomes easier to describe the orientation of the wavefront by the unit vector normal to the surface of the ultrasonic wavefront, E(Ex,Ey,Ez) in emission and R(Rx,Ry,Rz) in reception, rather than with a phase angle. To estimate the three components of the movement in the three spatial dimensions denoted x, y and z, it is then necessary to use at least three pairs of emission and reception directions. These three pairs of emission and reception directions are then defined by the pairs of unit vectors (E1,R1), (E2,R2) and (E3,R3).
As in the two-dimensional case, this system of three linear equations with three unknowns can be extended to an overdetermined system of equations using M emission directions and N reception directions, in order to obtain an estimate of the movement which is more robust to noise.
In other embodiments, for any m and for any n, the two images leading to the phase variation ΔΨmn do not belong to the raw video associated with the pair of angles (θm,φn), but belong to a filtered video generated during an additional decomposition step 200 implemented by the processor 10. This decomposition step 200, optional but very advantageous, is shown in dotted lines in
With reference to
The decomposition 200 of the raw video associated with the pair of angles (θm,φn) comprises the following substeps.
The processor applies a transformation to the raw video associated with the pair of angles (θm,φn), so as to generate a transform (step 300). The transform has the particular property of comprising information relating to movements of particles in the bone in K different directions, which can be distinguished from one another.
The processor applies filtering to the transform (step 302), so as to produce K filtered transforms. Each filtered transform selectively comprises information on the movement of particles in the bone in one direction among the aforementioned K directions, excluding all other particle movement directions.
The processor then applies an inverse transformation to each filtered transform (step 304), so as to obtain the first K filtered videos. The first K filtered videos contain different and complementary movement information.
In particular, the transformation applied in step 300 is a Fourier transform, and the transformation applied in step 304 is an inverse Fourier transform.
As an illustration, consider an example of carrying out decomposition 200 in which K=2 is chosen. The transform then comprises:
In this embodiment, the processor can produce in step 302:
Then, two first filtered videos are obtained by applying the inverse transformation to these two filtered transforms.
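The K = 2 decomposition above can be sketched in the Fourier domain: scatterers moving in opposite directions occupy opposite sign-quadrants of the (spatial frequency, temporal frequency) plane, so masking quadrants separates them. The synthetic signal and the quadrant criterion below are illustrative assumptions, not the exact filter of the method:

```python
import numpy as np

nz, nt = 64, 64                       # pixels along z, frames along t
z = np.arange(nz)[:, None]
t = np.arange(nt)[None, :]

# Two superimposed fringe patterns moving in opposite directions along z.
down = np.cos(2 * np.pi * (6 * z - 3 * t) / 64)  # moves toward +z over time
up = np.cos(2 * np.pi * (6 * z + 3 * t) / 64)    # moves toward -z over time
video = down + up

# 2-D Fourier transform over space (axis 0) and time (axis 1).
F = np.fft.fft2(video)
kz = np.fft.fftfreq(nz)[:, None]      # spatial frequencies
f = np.fft.fftfreq(nt)[None, :]       # temporal frequencies

# Keep only the quadrants where kz and f have opposite signs:
# these contain the "+z" (down) movement component.
F_down = np.where(kz * f < 0, F, 0.0)
down_filtered = np.real(np.fft.ifft2(F_down))

err = np.max(np.abs(down_filtered - down))  # residual of the other component
```

Keeping the complementary quadrants (kz * f > 0) instead would isolate the "up" component, which is how complementary filtered videos are obtained.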
Alternatively, K=4 can be chosen. The first transform then comprises:
As indicated previously, the decomposition step 200 is implemented for any m and for any n, that is to say for each raw video associated with a pair of angles (θm,φn), so as to produce K filtered videos associated with that pair of angles (θm,φn). In total, MNK filtered videos are therefore produced.
Then, for any m and for any n, the processor selects a filtered video among the K filtered videos associated with the pair of angles (θm,φn). MN filtered videos are therefore selected.
The MN filtered and selected videos all result from filtering in the same direction of particle movement (for example the first movement direction mentioned above).
Each filtered and selected video then gives rise to the computing of a phase variation ΔΨmn, as indicated previously.
The filtered videos are much more readable than the raw videos originally generated by the imaging device. Thus, applying steps 203 and 204 to images belonging to these filtered videos improves the precision of the estimated movement.
Furthermore, it is possible to compute up to K−1 complementary movements at the point P by repeating the computing steps for images from filtered videos which have not yet been selected.
Implementation of the method according to the first embodiment detailed in section 2.3.2, and according to the conventional approach presented in section 2.3.1, makes it possible to produce particle movements, from which particle velocity vectors can be deduced, as indicated previously.
As shown in
On the left of
In the middle of
On the right of
It is noted that the simulated raw image contains a certain number of vectors with aberrant orientations, in particular at the central intersection. This is because the spatial resolution of the ultrasound system is not sufficient to spatially distinguish the different vessels. Thanks to the directional filtering in the "up" direction and then in the "down" direction, the velocity vectors that can be determined are much less aberrant.
| Filing Document | Filing Date | Country | Kind |
| --- | --- | --- | --- |
| PCT/FR2022/050283 | 2/16/2022 | WO | |