The present document relates to acoustic resolution photoacoustic imaging and more precisely to a method of processing images acquired by a multispectral photoacoustic imaging system and an associated device.
The technique of photoacoustic imaging (also called opto-acoustic imaging) is based on the generation of ultrasonic waves produced in a sample (typically, a biological tissue) by irradiating the sample with electromagnetic radiation, typically laser pulses within the 300 to 2000 nm spectrum, especially in the 650 to 950 nm window.
In general, a photoacoustic image is reconstructed from the measurement of acoustic signals generated by absorption of the light sent on the sample. For reasons related to the efficiency of photoacoustic generation, short nanosecond pulses are generally used for biomedical imaging for illuminating the biological tissue to be imaged: the absorption of the pulsed light creates a rapid rise in temperature in the soft tissue which generates by thermoelastic effect a rise in pressure, which pressure then relaxes by causing the propagation of pulsed acoustic waves in the biological tissue.
Acoustic resolution photoacoustic imaging differs from optical resolution photoacoustic imaging in that an array of ultrasonic wave sensors is used: knowing the speed of sound, an image indicating the amplitudes and positions of sound sources can be reconstructed from the received acoustic signals. In optical resolution photoacoustic imaging, by contrast, each light pulse is focused point by point on the surface of the sample, and the sound emitted from the optical focal zone is measured by a single ultrasonic wave sensor.
Acoustic resolution photoacoustic imaging is a biomedical imaging technique that provides optical absorption contrast at depth in biological tissues. Photoacoustic imaging of blood is used, e.g., for imaging blood vessels. Blood, via hemoglobin, is indeed a very abundant absorbing element in the human body.
However, fixed photoacoustic imaging devices using piezoelectric sensor arrays to capture ultrasonic waves have visibility limits: when the sensor array does not encompass the imaged medium, certain tissue structures do not appear on the acquired images due to the limitations of the sensors in terms of bandwidth and numerical aperture. The use of acquisition systems with limited numerical aperture and limited bandwidth thus causes limited-visibility artifacts: vessels the axes of which are almost aligned with the axis of the probe are invisible, as are large vessels. More particularly, waves received on and around the 90° side are not accessible, so that only approximately horizontal vascular structures are detected, while vertical structures do not appear.
To solve such a problem, it is possible to use a tomographic approach with broadband detectors. Such non-resonant detectors do not, however, allow ultrasound imaging to be performed, and the associated tomographic approaches are not compatible with all clinical and preclinical problems.
Approaches based on artificial intelligence with deep learning have also been proposed recently for overcoming visibility problems, but such solutions raise problems of availability of training data as well as of generalization.
There is thus a need for a better adapted biological tissue imaging solution.
According to a first aspect, a method of processing photoacoustic images comprises: obtaining a temporal succession of images of a sample acquired by a photoacoustic imaging system for Mλ excitation pulse wavelengths with N images acquired per wavelength; a multispectral spatio-temporal filtering by singular value decomposition applied to all the N*Mλ images acquired so as to obtain N*Mλ filtered images; for each wavelength, a calculation of a filtered variance image from the N filtered images, one pixel of coordinate r in the filtered variance image being equal to the variance of the distribution of pixel values of the same coordinate r in the filtered images obtained for said wavelength; for each wavelength, a correction of the filtered variance image by subtracting a variance of the residual electronic noise, after multispectral spatio-temporal filtering, produced by ultrasonic wave sensors of the photoacoustic imaging system. The corrected variance image thus obtained is proportional at each image point to the square of the energy absorbed at the point in space corresponding to the image point concerned.
In one or a plurality of embodiments, the method comprises, for each wavelength, a determination of a corrected image of fluctuations, each pixel of which is equal to the square root of the corresponding pixel of the corrected variance image obtained by said subtraction for the wavelength considered. The corrected image of fluctuations thus obtained is proportional in each image point to the energy absorbed at a point of the space corresponding to the image point concerned.
In one or a plurality of embodiments, the method further comprises an estimation of the variance of the residual electronic noise produced by the ultrasonic wave sensors on the images, the variance of the residual electronic noise produced by the ultrasonic wave sensors on the images being estimated as a function of a variance of the electronic noise produced in the photoacoustic signals acquired in the absence of a sample corrected by an amount of noise eliminated by multispectral spatio-temporal filtering by decomposition into singular values, the amount of noise eliminated being estimated based on the singular values corresponding to the lowest energy components removed by the multispectral spatio-temporal filtering by singular value decomposition.
In one or a plurality of embodiments, the method comprises, for each wavelength considered, a normalization of the image of fluctuations corrected by a function of the laser pulse fluence of the photoacoustic imaging system, so as to obtain an image of absorption fluctuations representative of the absorption fluctuations due to the sample.
In one or a plurality of embodiments, multispectral spatio-temporal filtering by singular value decomposition comprises a selection of the components corresponding to the highest energy singular values to be removed and a removal of the selected components, the selection being made by choosing, from a set of index values, the index identifying the first component to be kept for which a contrast-to-noise ratio is maximum, the contrast-to-noise ratio determined for an index being calculated from filtered variance images calculated by multispectral spatio-temporal filtering by singular value decomposition applying the index.
In one or a plurality of embodiments, the contrast-to-noise ratio is determined by eliminating the contrast due to the mean value of the acquired images for at least one wavelength.
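By way of non-limiting illustration, the selection of the SVD cutoff index by maximization of a contrast-to-noise ratio can be sketched as follows in Python with NumPy. The function name, the flattened image layout and the abstract `cnr_of` callable are illustrative assumptions: the present document does not prescribe a particular contrast-to-noise metric, so the metric is left as a parameter.

```python
import numpy as np

def select_cutoff(images, candidates, cnr_of):
    """Choose the SVD cutoff index maximizing a contrast-to-noise ratio.

    images:     stack (nT, nPix) of acquired images (pixels flattened).
    candidates: candidate indices b for the first singular component kept
                (components 0..b-1, the highest-energy ones, are removed).
    cnr_of:     callable mapping a filtered variance image (nPix,) to a
                scalar contrast-to-noise ratio (left abstract here).
    """
    C = images.T                                  # Casorati matrix (nPix, nT)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    best_b, best_cnr = None, -np.inf
    for b in candidates:
        s_f = s.copy()
        s_f[:b] = 0.0                             # remove components 0..b-1
        filtered = ((U * s_f) @ Vt).T             # filtered images (nT, nPix)
        cnr = cnr_of(filtered.var(axis=0))        # filtered variance image
        if cnr > best_cnr:
            best_b, best_cnr = b, cnr
    return best_b
```

The SVD is computed once and only the singular values are masked for each candidate index, which keeps the scan over candidate indices inexpensive.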
In one or a plurality of embodiments, the method comprises a calculation of an oxygen saturation rate image from at least two images of absorption fluctuations obtained for at least two corresponding wavelengths. The calculation can be performed on the basis of a model expressing, for each pixel of coordinate r, a relation between a total hemoglobin concentration, an oxygen saturation level and the value at the pixel r of the image of absorption fluctuations.
According to another aspect, a photoacoustic image processing device comprises at least one data storage comprising program code instructions, at least one data processor, the data processor being configured to, when the program code instructions are executed by the data processor, make the photoacoustic image processing device execute a photoacoustic image processing method according to any of the embodiments.
In another aspect, a computer-readable data storage comprises computer program instructions that, when executed by a processor, execute a photoacoustic image processing method according to any of the embodiments.
According to another aspect, a computer program comprises computer program instructions that, when executed by a processor, execute a photoacoustic image processing method according to any of the embodiments.
Other features and advantages will result from the detailed description which follows, made on the basis of embodiments and examples given as an illustration, and not in a limiting way, with reference to the enclosed figures wherein:
A method for processing images acquired by a multispectral photoacoustic imaging system using an array of acoustic wave sensors for the acquisition of ultrasonic waves produced in a sample to be imaged (generally a biological tissue) by laser excitation, will be described in more detail.
The method is applicable to photoacoustic imaging of any type of fluctuations occurring in biological tissues. The application of the method to the quantitative analysis of the blood oxygenation rate will be described as a non-limiting example.
Within the framework of the present document, image of fluctuations refers to an image representative of temporal fluctuations at each image point, each image point (or pixel) representing a standard deviation of the fluctuations considered. An image of fluctuations can thus be obtained from a temporal succession of images, each pixel of coordinate r in the image of fluctuations being calculated from the standard deviation of the statistical distribution of the pixels of the same coordinate r in the temporal succession of images. To each image of fluctuations corresponds a variance image, designating the image of fluctuations squared: each image point of the variance image representing a variance of the fluctuations considered, i.e. a variance of the statistical distribution of pixels of the same coordinate in the temporal succession of images considered. Depending on the context, we will talk about the variance image or the image of fluctuations.
Within the framework of the present document, the images can be 2D or 3D images. For simplicity, the term “image pixel” or “image point” will be used herein in general to refer to a picture element. A pixel or image point corresponds to a voxel in the case of a 3D image. The image processing method uses a multispectral quantitative analysis of the fluctuations detected for a given sample voxel from the different images acquired. Images (usually 3D images) of fluctuations representing, at each pixel, the fluctuations of the pixel values representing a given voxel of the biological tissue, are thus generated for each wavelength. For example, at each pixel describing a blood vessel, the blood flow causes an amplitude fluctuation from one image to another. More particularly, in the case of blood vessels, red blood cells do not have exactly the same spatial conformation from image to image, which modulates local absorption over time. Like standard photoacoustic imaging, it is possible to show (see e.g. the document entitled “Photoacoustic fluctuation imaging: theory and application to blood flow imaging”, by Vilov S., Godefroy G., Arnal B., & Bossy E.; Optica, 7(11), 1495-1505 (2020)) that under ideal conditions (without noise and spurious fluctuations, at constant hematocrit), the photoacoustic image of fluctuations is proportional to the product of the local optical absorption μλ(r) multiplied by the fluence of the laser pulse ϕλ(r):

σλ(r) = A·ϕλ(r)·μλ(r)
where r is a vector identifying a spatial position in an acquired three-dimensional (3D) image, A is a constant that does not depend on λ or r if one can consider a spatial invariance of the point spread function of the ultrasonic imager. If the point spread function varies in space, then the dependence of A on r can be taken into account and corrected.
For each wavelength of a plurality of wavelengths, an image of fluctuations can be determined. This image of fluctuations is representative of temporal fluctuations due to biological tissue (e.g., due to blood flow) but is affected by the pulse-to-pulse fluctuation of the laser and by the electronic noise of the ultrasonic wave sensors.
Indeed, apart from fluctuations in biological tissue, there are other sources of spurious fluctuations: fluctuations in the energy of laser pulses and electronic noise produced in particular by the ultrasonic wave sensors of the multispectral photoacoustic imaging system. It is thereby necessary to carry out a specific processing so that the image of fluctuations is representative only (or mainly) of the fluctuations of interest: herein, the fluctuations of the absorption due to the biological sample. The image of fluctuations obtained by the method described in the present document is in fact proportional to the absorption of the chromophores of the biological sample responsible for the absorption fluctuations. Such an image will be referred to as “image of absorption fluctuations”.
Thereby, due to fluctuations in laser pulse energy, the image of fluctuations, even if obtained by subtracting the average photoacoustic image, always comprises a residual term corresponding to the average photoacoustic image multiplied by the variance of the pulse energy. Multispectral spatio-temporal filtering by Singular Value Decomposition (SVD) effectively eliminates the residual term due to the specific spatio-temporal signature thereof. In general, by removing the components of the image corresponding to the first singular values, not only the fluctuations in laser pulse energy and possible (e.g. physiological) movements occurring in the biological sample are removed, but also the contribution due to the average absorbing element in the image, and only the parts of the image corresponding to temporal fluctuations of interest are kept. Furthermore, it appears that SVD has better performance when performed on a set of multispectral images than when applied to images having the same wavelength.
Filtered images of fluctuations are then generated for each wavelength from a temporal succession of filtered images obtained at the end of the multispectral spatio-temporal filtering. In the filtered image of fluctuations at a given wavelength, a pixel of coordinate r in the image of fluctuations is equal to the standard deviation of the distribution of pixel values of the same coordinate r in the filtered images obtained for said wavelength.
It is also possible to perform a correction of the filtered image of fluctuations obtained for each wavelength so as to keep only the fluctuations of interest and obtain a corresponding image of fluctuations of interest or image of variance of interest.
A first type of correction consists in correcting fluctuations due to the electronic noise of ultrasonic wave sensors. Such correction allows the image of fluctuations to become proportional to ϕλ(r)μλ(r). The noise being additive in the space of variances, the step could consist in estimating the background noise in the image space resulting from the electronic noise picked up by the ultrasonic wave sensors and subtracting same from each pixel of the variance image, so as to obtain a corrected variance image the square root of which, the corrected image of fluctuations, is representative at each pixel only of the fluctuation due to the biological tissue considered in the corresponding voxel of the sample (in the example, by blood flow), weighted by the distribution of fluence. The corrected image of fluctuations is thereby proportional at each image point to the energy absorbed at a point in the space corresponding to the image point concerned.
A second type of correction consists in normalizing by a fluence function, the corrected image of fluctuations obtained after correcting the fluctuations due to the electronic noise of the ultrasonic wave sensors.
The raw, unfiltered variance image obtained from the raw images which are not filtered by SVD for a given wavelength, corresponds at each pixel, to the sum of the variance representing the fluctuations induced in the corresponding voxel of the sample (in the example, the fluctuations induced by a blood flow), weighted by the fluence of the pulse, and of the variance of the electronic noise of ultrasonic wave sensors in the image space.
Mathematically, it can be shown that:

σλ²(r) = (1 + σPEF²)·⟨ϕλ²⟩(r)·σλ,flow²(r) + σPEF²·|mλ(r)|² + σb²

where:

σλ²(r) is the value of the pixel of coordinates r in the raw variance image obtained from the raw images acquired (not filtered by SVD) for the wavelength λ, calculated on the realizations indexed by i, the realizations corresponding to the temporal succession of the images acquired for the wavelength λ;

⟨ϕλ²⟩(r) is the mean square fluence of the laser for the wavelength λ at the pixel of coordinates r;

σPEF² is the variance of the relative fluctuation of the laser pulse-to-pulse energy for the wavelength λ (referred to as “pulse energy fluctuation”, PEF);

|mλ(r)|² is the modulus squared of the value of the pixel of coordinates r in the mean image mλ calculated as the mean of the images Ai,λ(r) acquired at the wavelength λ;

σλ,flow²(r) is the value of the pixel of coordinates r in the image of variance of interest and represents the temporal fluctuations due to biological tissue in a given voxel corresponding to the pixel of coordinate r, normalized by the mean square fluence ⟨ϕλ²⟩(r);

σb² is the variance of electronic noise due to ultrasonic wave sensors in the image.
We can see that the PEF term influences two terms: it biases the coefficient of the variance term of interest and it is a prefactor of the mean image. Since the mean image is one to two orders of magnitude greater than the fluctuation due to biological tissue, a correction is necessary to recover the variance image of interest even if the PEF is low.
Monitoring the PEF by means of a photodiode at the output of the laser can also be an insufficient correction method because of the spatial fluctuations of the laser beam that the photodiode cannot measure. The SVD method eliminates the effects of the PEF in order to obtain fully filtered images. The elimination of the first singular values during multispectral spatio-temporal filtering by the SVD has the effect of removing the contributions of the average absorber subjected to the PEF. SVD filtering also changes the background level of the variance image. Thus, after multispectral spatio-temporal filtering by the SVD, a filtered variance image is obtained, corresponding to the filtered image of fluctuations, such that:

σλ,SVD²(r) = (1 + σPEF²)·⟨ϕλ²⟩(r)·σλ,flow²(r) + σb,SVD²

where σb,SVD² is the variance of residual electronic noise after SVD filtering, and σPEF² is the variance of the relative fluctuation (variance normalized by the square of the mean value) of laser pulse energies at the wavelength λ.
The relative fluctuation of laser pulses is usually a few percent and can be neglected. Indeed, σPEF² << 1, so that (1 + σPEF²) ≈ 1. One can therefore write:

σλ,SVD²(r) ≈ ⟨ϕλ²⟩(r)·σλ,flow²(r) + σb,SVD²
Thereby, it can be seen that an image of absorption fluctuations σλ,flow(r) representative of the absorption fluctuations of the biological tissue can be obtained by applying two corrections successively.
The first correction consists in subtracting, at each pixel of the filtered variance image, the variance of the residual electronic noise after filtering by SVD in order to obtain a corrected variance image and then a corresponding corrected image of fluctuations by taking the square root.
The second correction consists in normalizing the image of fluctuations corrected by the fluence of the laser pulse ϕλ(r), by dividing each pixel by ϕλ(r) so as to obtain a corrected and normalized image of fluctuations, called image of absorption fluctuations σλ,flow(r) or image of fluctuations of interest. The two corrections are applied for each wavelength considered.
The images of absorption fluctuations σλ,flow(r) thereby obtained for each excitation wavelength are representative only (or at least mainly) of the absorption fluctuations induced by the biological tissue. In the case where the absorption fluctuations in the biological tissue are mainly those due to the blood flow, σλ,flow(r) represents the absorption fluctuations due to the blood flow.
Such a fluctuation imaging approach can thereby be used for solving the problems of visibility mentioned in the introduction, as well as for improving the contrast, at the cost of acquiring a series of images. For example, information on fluctuations within large vessels and information on vertically oriented vessels can be obtained.
Such a method is useful in particular in the case of image acquisition by means of acquisition devices that remain fixed during the acquisition, held by hand (called “handheld devices” or “single-sided devices”) or held by a mechanical part, which are placed on the skin of the subject in order to image a given region, without the need for movement during the acquisition. In such acquisition devices, the acoustic wave sensors are e.g. piezoelectric sensors with limited visibility (limited passband and angular reception spectrum).
Furthermore, a quantitative analysis of the blood oxygenation rate is possible. Indeed, in the case of the application to imaging of oxygenation of blood vessels, from the different images of absorption fluctuations obtained for different wavelengths, 2D or 3D oxygenation maps can be determined by inverting a model linking the fluctuation maps due to blood flow at different wavelengths normalized by the fluence, to the rate of oxygenation. A quantification of the optical absorption spectra of blood vessels can be achieved therefrom. More precisely, using tissue excitation light waves at a plurality of wavelengths, the relative or absolute amount of oxyhemoglobin and deoxyhemoglobin can be determined from the differences in absorption profile obtained according to the wavelength.
To obtain an oxygenation map, it is possible to start either from images of fluctuations corrected and normalized by the incident fluence in the case where there is no spectral coloration due to the tissue, or from images of absorption fluctuations which are images of fluctuations corrected and normalized by the spatial fluence within the sample. In the first case, at each point there is proportionality with the images of absorption fluctuations but the attenuation of the light in depth is not corrected (which is assumed to be the same for each wavelength when there is no spectral coloration due to the tissue).
Quantitative imaging of the rate of oxygenation of blood vessels is thereby obtained. The image of absorption fluctuations is specific to blood flows, providing specificity in quantitative evaluations, in particular when under-resolved vessels surrounded by other absorbers are involved, and avoids the complexity of spectral unmixing. The method is particularly applicable to tumor imaging, brain imaging and vascular imaging. As mentioned hereinabove, the image of fluctuations has a better contrast than the conventional image and is not affected by the visibility artifacts present in conventional imaging. The oxygenation rate is thus obtained on an image much richer than the conventional image.
During a step 110, a temporal succession of images is acquired by a photoacoustic imaging system for Mλ wavelengths, i.e. N images per wavelength and N*Mλ images in total. The acquired images are denoted by Ai,j where i is an integer comprised between 1 and N, which represents the index of the acquired image for a given wavelength; j is an integer comprised between 1 and Mλ and represents the index identifying the wavelength concerned. A pixel of an image Ai,j is denoted by Ai,j(r) where r is the coordinate of a pixel of the image.
The acquisition rate is fixed, e.g. at 100 Hz. Typically, e.g. Mλ=6 wavelengths and N=250 images per wavelength are used. More generally, N can be comprised between 10 and 100,000, Mλ between 2 and 100, and the acquisition frequency can be comprised between 10 and 100,000 Hz.
The acquisition can be done e.g. by cyclically varying the wavelength from one image to the next: an image with a first wavelength λ1, then an image with a second wavelength λ2, and so on until the last wavelength λM, then repeating N times the same acquisition cycle with the wavelengths λ1 to λM. In this way, the period between two acquisitions at the same wavelength is fixed and identical regardless of the wavelength. In the case where only two wavelengths are used, the first wavelength λ1 is alternated with the second wavelength λ2.
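The cyclic ordering described above can be made explicit by the mapping from the pair (realization index i, wavelength index j) to the position t of the image in the temporal succession. The following Python helper is given purely as an illustration of that indexing convention; the function name is hypothetical.

```python
def temporal_index(i, j, m_lambda):
    """Temporal index t of image A_{i,j} under cyclic wavelength ordering.

    i: realization index, 1..N;  j: wavelength index, 1..M_lambda.
    The wavelength cycles from one pulse to the next, so two consecutive
    acquisitions at the same wavelength are spaced by exactly M_lambda pulses.
    """
    return (i - 1) * m_lambda + j
```

With Mλ=6 and N=250, the indices t run from 1 (image A1,1) to 1500 (image A250,6), and the period between two acquisitions at the same wavelength is fixed regardless of the wavelength.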
The acquired images Ai,j are subjected to a post-processing comprising the steps 120 to 150 defined hereinbelow.
During a step 120, a multispectral spatio-temporal filtering by Singular Value Decomposition (SVD) is applied to all N*Mλ images, and N*Mλ filtered images are obtained at the end of the step, denoted by Ãi,j. Such multispectral spatio-temporal filtering by SVD is used for effectively removing, in the variance image, the components of the average absorber, in particular the term σPEF²·|mλ(r)|² resulting from fluctuations in the pulse energy of the laser and the mean image, which, by the amplitude thereof, masks the fluctuations of interest. The use of SVD filtering for a plurality of wavelengths makes the SVD statistically more efficient and provides components representing fluctuations of the tissue in response to a plurality of excitation wavelengths. If spatio-temporal filtering by SVD were to be used on images acquired for a single wavelength, the choice of SVD filtering limits would vary from one wavelength to another, which would make the procedure cumbersome.
During a step 140, for each wavelength separately, a filtered image of fluctuations (respectively, a filtered variance image) is determined: the standard deviation (respectively, the variance) of the distribution of pixel values of the same coordinate r in the N filtered images obtained during the step 120 for said wavelength is calculated for obtaining the pixel of coordinate r in the filtered image of fluctuations (respectively, in the filtered variance image). Thereby, at the end of step 140, Mλ filtered variance images are obtained, one per wavelength j, denoted by σλj,SVD²(r), or more simply σλ,SVD²(r). With the above notation, σλ,SVD²(r) is the variance of the distribution of the values Ãi,j(r) of the filtered images at the pixel of coordinate r, calculated on the realizations i=1 to N for the wavelength j.
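The per-wavelength variance calculation of step 140 can be sketched as follows in Python with NumPy, assuming the filtered images are stacked along a first axis in the cyclic acquisition order described earlier (the function name and array layout are illustrative assumptions, not a prescribed implementation):

```python
import numpy as np

def variance_images(filtered, n_wavelengths):
    """Per-wavelength filtered variance images (sketch of step 140).

    filtered: array of shape (N * M_lambda, ...) holding the SVD-filtered
              images, wavelengths interleaved cyclically
              (lambda_1, ..., lambda_M, lambda_1, ...).
    Returns an array of M_lambda variance images: at each pixel, the
    variance over the N realizations acquired at that wavelength.
    """
    n_total = filtered.shape[0]
    n = n_total // n_wavelengths
    # Regroup as (N, M_lambda, ...) following the cyclic acquisition order.
    per_wl = filtered.reshape(n, n_wavelengths, *filtered.shape[1:])
    return per_wl.var(axis=0)                # shape (M_lambda, ...)
```

Taking the element-wise square root of the result would yield the corresponding filtered images of fluctuations.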
During a step 150, a correction of each filtered image of fluctuations is performed so as to obtain an image of absorption fluctuations representative, solely or essentially, of the absorption fluctuations of the biological sample. The correction of the filtered image of fluctuations thereby comprises a removal of fluctuations other than fluctuations of interest due to the sample, so as to obtain an image of absorption fluctuations representing absorption fluctuations due to the sample. Two types of corrections can be used in combination or individually.
A first type of correction (step 150A) consists in a correction of fluctuations due to the electronic noise of ultrasonic wave sensors by subtracting, at each pixel of coordinate r in the filtered variance image, the variance of the residual electronic noise after SVD, denoted by σb,SVD². The variance of the residual electronic noise of the acquisition sensors is estimated and then the variance σb,SVD² of the residual electronic noise is subtracted at each pixel of the filtered variance image obtained during the step 140, so as to obtain a corrected variance image for a given wavelength. The variance of the residual electronic noise after SVD can correspond to the variance of the electronic noise before SVD if the limit b of the SVD filtering is equal to the total number of images.
A corresponding corrected image of fluctuations can be obtained by calculating the square root of the corrected variance image.
The corrected image of fluctuations can then be weighted by the fluence at the wavelength considered. The corrected image of fluctuations is proportional in each image point to the energy absorbed at a point in the space corresponding to the image point concerned.
The electronic noise produced by the ultrasonic wave sensors can be estimated as a function of a variance of the electronic noise produced in the photoacoustic signals acquired in the absence of a sample, the variance being corrected by an amount of noise eliminated by multispectral spatio-temporal filtering by decomposition into singular values, wherein the amount of noise can be estimated based on singular values corresponding to the lowest energy components removed by multispectral spatio-temporal filtering by decomposition into singular values.
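The first correction (step 150A) can be sketched as follows in Python with NumPy. This is only a plausible reading of the estimation described above: the proportional scaling of the sample-free noise variance by the fraction of singular-value energy kept by the filter is an illustrative assumption, and the function name and arguments are hypothetical.

```python
import numpy as np

def corrected_variance(var_filtered, noise_var, singular_values, removed):
    """First correction: subtract residual electronic noise (sketch).

    var_filtered:    filtered variance image for one wavelength.
    noise_var:       variance of the electronic noise, estimated from
                     acquisitions performed without a sample.
    singular_values: singular values of the Casorati matrix.
    removed:         indices of the components removed by the SVD filter.
    """
    energy = singular_values ** 2
    # Assumption for illustration: the noise removed by the filtering is
    # proportional to the singular-value energy removed, so the residual
    # noise variance is the sample-free noise variance reduced accordingly.
    kept_fraction = 1.0 - energy[removed].sum() / energy.sum()
    residual_noise_var = noise_var * kept_fraction
    # Clip at zero: a variance cannot be negative after subtraction.
    return np.clip(var_filtered - residual_noise_var, 0.0, None)
```

The square root of the returned image gives the corrected image of fluctuations mentioned above.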
A second type of correction (step 150B) consists of a normalization by a function of the fluence of laser pulses at the wavelength considered. A variance image normalized by the fluence can be obtained, for each wavelength j=1 to Mλ, by normalization by the squared fluence spatial function ϕλ²(r), so as to obtain a normalized variance image σλ,flow²(r) representative only of absorption fluctuations produced by the sample (e.g. by the blood flow), where σλ,flow²(r) represents the pixel value of the normalized variance image obtained for the wavelength λ at the pixel of coordinate r.
The normalization by fluence can be performed on the corrected variance image using the squared fluence function ϕλ²(r), so as to obtain a normalized variance image and then a corresponding normalized image of fluctuations by calculating the square root of each pixel. Alternatively, and equivalently, the normalization by fluence can be performed on the image of fluctuations corresponding to the corrected variance image, obtained by calculating the square root of each pixel of the corrected variance image resulting from the electronic noise correction, before normalizing by the fluence of the laser pulse ϕλ(r).
Following such normalization, a normalized image of fluctuations is finally obtained, which is the image of absorption fluctuations σλ,flow(r) representing the absorption fluctuations due to the sample.
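The second correction (step 150B), applied after the noise correction, reduces to a square root followed by a division by the fluence. A minimal Python sketch, in which the function name is hypothetical and the fluence may equally be a spatial map or a scalar average:

```python
import numpy as np

def absorption_fluctuation_image(var_corrected, fluence):
    """Second correction: normalization by the laser pulse fluence (sketch).

    var_corrected: noise-corrected variance image for one wavelength.
    fluence:       fluence phi_lambda(r), either a spatial map broadcastable
                   to the image shape or a scalar average fluence.
    Returns the image of absorption fluctuations sigma_lambda,flow(r).
    """
    # Corrected image of fluctuations = square root of the corrected
    # variance, then division by the fluence at the wavelength considered.
    return np.sqrt(var_corrected) / fluence
```

Equivalently, the variance image could be divided by the squared fluence before taking the square root, the two orders giving the same result.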
With regard to the determination of the fluence function to be used for normalization, different methods and approximations are possible.
Instead of using a spatial function of the fluence, an average fluence (independent of the point in space) at a given wavelength can be used. The average fluence at a wavelength can be determined, e.g. on the basis of energy measurements of excitation pulses obtained by means of a photodiode placed at the output of the laser at the moment when an excitation pulse at a given wavelength is sent to the sample. The intensities of the electrical signals produced by the photodiode are converted into an estimation of fluence using calibration coefficients, the estimation of the fluence being used for calculating an average fluence and then performing the correction by normalization.
The fluence spatial function can be defined as follows:

ϕλ(r) = ϕλ,0·fλ(r)

where ϕλ,0 is the fluence measured at the surface of the sample, which can be obtained by a calibrated photodiode, and fλ(r) represents the relative spatial variation of the fluence.
In the general case, an estimation of the spatial distribution of fluence can be obtained by modeling or by measurements.
In some cases, it can be approximated that there is no spectral coloration due to the tissue. In such a case, one can write fλ(r) = f(r) for all the wavelengths. An estimation of oxygenation can then be made simply from the normalization by the fluence measured by the photodiode, because the same spatial function f(r) can be used for all wavelengths. When the fluence is spatially uniform (chicken embryo hypothesis), the fluence can also be estimated directly from the signals produced by the photodiode, i.e. ϕλ(r) ≈ ϕλ,0.
During a step 160, an image α (r) of the oxygen saturation rate is determined from at least two images of absorption fluctuations σλ,flow obtained at the end of the step 150, for at least two wavelengths λ. In an alternative, the step 160 can also be executed from at least two corrected images of fluctuations σλ (with the corrections according to the step 150A but without the corrections according to step 150B) obtained by calculating the square root of each pixel of the corresponding images of variance, obtained at the end of the step 140.
The example case is described where step 160 is executed on the basis of the images of absorption fluctuations σλ,flow obtained at the end of step 150 (the same calculations being usable starting from the corrected images of fluctuations which have not undergone the corrections of step 150B). For this purpose, for each pixel of coordinates r, a model is used which expresses the relation between the value σλ,flow(r) of the image of absorption fluctuations at pixel r and two parameters: K(r), which combines the total hemoglobin concentration and a sensitivity coefficient of the electronic receiving system, and the oxygen saturation rate α(r).
The model consists e.g. of an equation with two unknowns K(r) and α(r) giving the fluctuation σλ,flow(r) as a function of K(r) and α(r), where r is the coordinate of an image pixel. It is thus possible to determine, for each pixel of coordinate r, the oxygen saturation level α(r) and thus to generate an image α comprising, for each pixel, quantitative information on the oxygen saturation (between 0% and 100%). It is likewise possible to generate an image K comprising, for each pixel, quantitative information proportional to the total hemoglobin concentration K(r), also called “blood volume”.
In one embodiment, the model is an analytical model based on the following equation: σλ,flow(r)=K(r)·[α(r)·μHbO2(λ)+(1−α(r))·μHb(λ)], where μHbO2(λ) is the absorption coefficient of oxyhemoglobin at the wavelength λ, μHb(λ) is the absorption coefficient of deoxyhemoglobin at the wavelength λ, and K(r) is a pre-factor that depends on the position r, but not on the wavelength λ.
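Since this equation is linear in the quantities K(r) and K(r)·α(r), the two unknowns can be recovered per pixel by an ordinary least-squares fit over the wavelengths. A minimal sketch (hypothetical function name; tabulated absorption coefficients are assumed given as arrays over the wavelengths):

```python
import numpy as np

def unmix_so2(sigma, mu_hbo2, mu_hb):
    """Estimate K and the oxygen saturation alpha at one pixel from the
    per-wavelength fluctuation values sigma, using the linear model
        sigma(l) = K * (alpha * mu_hbo2(l) + (1 - alpha) * mu_hb(l)).
    Rewriting it as sigma = K*mu_hb + (K*alpha)*(mu_hbo2 - mu_hb) makes it
    linear in (K, K*alpha), so a least-squares fit suffices."""
    M = np.column_stack([mu_hb, mu_hbo2 - mu_hb])  # design matrix
    (k, k_alpha), *_ = np.linalg.lstsq(M, sigma, rcond=None)
    alpha = np.clip(k_alpha / k, 0.0, 1.0)  # keep saturation in [0, 1]
    return k, alpha
```

Applied pixel by pixel, this yields both the image α (saturation between 0% and 100%) and the image K (blood volume) mentioned above; at least two wavelengths are needed for the system to be determined.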
The acquired images Aij obtained as input are, in the present example, three-dimensional (3D) images. We therefore have N*Mλ acquired 3D images, denoted by Ai,j(r). At the end of the step 120 (SVD), we have N*Mλ filtered 3D images. At the end of the step 140 (calculation of the image of the variance), one has Mλ 3D images of fluctuations, denoted by σλ(r), for wavelengths λ1 to λMλ. At the end of the step 150 (noise correction and/or normalization by the fluence), one has Mλ 3D variance images representing the absorption fluctuations, denoted by σλ,flow(r), for wavelengths λ1 to λMλ. Finally, at the end of the step 160 (calculation of the oxygen saturation), a 3D image of the oxygen saturation level is obtained, denoted by α(r).
According to what is illustrated in
From the temporal succession of multispectral images, a two-dimensional (2D) Casorati matrix (301) is formed during a step 310, representing space and time, denoted by A(r,t), where t is a temporal index and r is a scalar corresponding to a position of a pixel in the 3D space wherein the N*Mλ images are defined. The Casorati matrix (301) is of size nR*(N*Mλ), with nR=nX*nY*nZ. The Casorati matrix (301) contains the pixels of the N*Mλ images. The value t of the temporal index can thereby vary from t=1 to N*Mλ such that A(r,1)=A1,1(r) is the first image of the temporal succession for a first wavelength, A(r,2)=A1,2(r) is the second image of the temporal succession for a second wavelength, and so on. It should be noted that in the notation A(r,1), r is a scalar while in the notation Ai,j(r), r is a vector of coordinates (x,y,z) in a three-dimensional space. The scalar r can be calculated from the coordinates of the vector r=(x,y,z) and the size nX*nY*nZ of the acquired images. The use of the Casorati matrix (301) makes it possible to perform the analysis of fluctuations in the spatio-temporal domain.
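The construction of the Casorati matrix (301) and its inverse can be sketched as follows (hypothetical helper names; NumPy is assumed):

```python
import numpy as np

def to_casorati(images):
    """Stack a temporal succession of 3D images (a list of arrays of
    identical shape (nX, nY, nZ)) into a 2D Casorati matrix of shape
    (nR, N*Mlambda), with nR = nX*nY*nZ: one row per voxel (scalar index r),
    one column per temporal index t."""
    return np.stack([im.ravel() for im in images], axis=1)

def from_casorati(casorati, shape):
    """Inverse operation: split the columns back into 3D images of the
    given shape (nX, nY, nZ)."""
    return [casorati[:, t].reshape(shape) for t in range(casorati.shape[1])]
```

The `ravel`/`reshape` pair plays the role of the mapping between the vector coordinates (x,y,z) and the scalar index r described above.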
During a step 320, the Casorati matrix (301) is decomposed according to a singular value decomposition method: A=U S V* where U is a 2D matrix of size nR*nR comprising the spatial singular vectors, S is the 2D matrix of the singular values, of size nR*(N*Mλ), the diagonal coefficients Sk of which are positive real numbers, and V* is the conjugate transpose of V, V being the matrix of temporal singular vectors of size (N*Mλ)*(N*Mλ). An example of matrix S (302) is shown in
According to what is illustrated in
In the above sum, each matrix Uk(r)Vk(t)* defines a component corresponding to the singular value Sk. In order to perform a filtering, only certain components are kept, and hence only certain matrices of the weighted sum are kept. With the values Sk arranged in decreasing order, one selects, during a step 330, a value of the minimum index a and a value of the maximum index b of the index k, so that the matrix obtained after multispectral spatio-temporal filtering is: ASVD(r,t)=Σk=a..b Sk·Uk(r)·Vk(t)*.
The choice of the value of the minimum index a is important in that this index determines the efficiency and relevance of the filtering, by determining which components of higher energy will be eliminated by filtering. The choice of the value of the maximum index b has a less sensitive impact on the filtering, because it concerns the components of lower energy, which include pure noise and, potentially, information embedded therein; b can be chosen e.g. equal to a+100, or lie between a+1 and N*Mλ.
After elimination of the components corresponding to the singular values of highest energy, a transformation is performed during a step 340 which is inverse to the transformation performed during the step 310 from the matrix ASVD(r,t) for obtaining a temporal succession of filtered images denoted by:
The choice of the minimum index a can be delicate in that, by taking values of minimum index a that are too large, there is a risk of removing parts of the objects represented in an image. The method proposed herein makes it possible to select a minimum index value a in an automated way and without any risk of deleting parts of the objects represented in an image.
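A minimal sketch of the spatio-temporal filtering of steps 320 to 340, under the assumption of 0-based component indices and a NumPy implementation (function name hypothetical):

```python
import numpy as np

def svd_filter(casorati, a, b):
    """Keep only the singular components with 0-based index in [a, b),
    removing the highest-energy components below a (e.g. quasi-static
    structures) and the low-energy tail from b onward, then rebuild the
    filtered Casorati matrix."""
    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
    s_filt = s.copy()          # singular values come out in decreasing order
    s_filt[:a] = 0.0           # discard the a strongest components
    s_filt[b:] = 0.0           # discard the tail beyond b
    return (U * s_filt) @ Vh   # weighted sum of the kept components
```

The inverse transformation of step 340 then amounts to reshaping each column of the returned matrix back into a 3D image.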
The method is based on a Contrast-to-Noise Ratio (CNR) estimated for images filtered by SVD (without the noise correction according to step 150A nor the normalization by fluence according to step 150B) for a plurality of values of the index a; the value of the minimum index a which maximizes the contrast-to-noise ratio is then determined.
The evaluation of the contrast-to-noise ratio is performed on the basis of a comparison between images filtered by SVD and masked images, in order to extract the structures that appear therein. The mask is obtained on the basis of the steps 410 and 420.
During the step 410, a reference value ar of the minimum index a is arbitrarily selected. For example, ar=0.03×N×Mλ=0.03×10×100=30. Such value can be selected empirically from an analysis on a few representative samples.
A reference variance image at a single wavelength is then calculated for this reference value ar of the minimum index a and for a given wavelength λ=λjmax, which is e.g. the wavelength for which the SNR (“Signal to Noise Ratio”) on the raw signals is the highest. The calculation of this reference variance image comprises the execution of the steps 120 (SVD) and 140 (filtered variance image) for the images acquired at the wavelength of index jmax, but not the correction step 150.
During the step 420, two binary images, used as binary masks, are calculated:
The determination of the threshold value thσ for obtaining the binary mask Mσ can be performed on the basis of the following formula:
is the spatio-temporal variance of the electronic noise, determined e.g. as described hereinabove for the step 150A.
is the intra-image standard deviation calculated on the pixel distribution of the reference variance image;
The determination of the threshold value thm for obtaining the binary mask Mm can be carried out on the basis of the following formula:
is the mean value of the pixels in the mean image denoted by:
is the intra-image standard deviation calculated on the pixel distribution of the mean image:
The coefficient θ is the same as the coefficient used to obtain the mask Mσ.
A binary mask is used to distinguish, in an image, the background of the image from the rest, in particular from the object or objects represented in the image. In a known way, to calculate a binary mask on the basis of an image and a threshold value, it is determined whether each pixel value of the image is greater than (or equal to) the threshold: if so, the value of the corresponding pixel in the binary mask is set to 1; otherwise, it is set to 0.
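The thresholding just described can be written compactly (hypothetical function name):

```python
import numpy as np

def binary_mask(image, threshold):
    """Return a binary mask: 1 where the pixel value is greater than or
    equal to the threshold (object), 0 elsewhere (background)."""
    return (image >= threshold).astype(np.uint8)
```

Applied with the thresholds thσ and thm, this yields the masks Mσ and Mm respectively.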
The two binary masks obtained are used in the CNR formula, the calculation of which makes it possible to determine the minimum index a to be used for multispectral spatio-temporal filtering by SVD. The filtering is performed on all wavelengths and the minimum index a is the same for all wavelengths.
During a step 430, a plurality of values of the minimum index a are selected, e.g. from a set of predefined values ranging from 1 to 100, and the variance images at all wavelengths are calculated, denoted by:
for each value of the minimum index a.
During a step 440, a contrast-to-noise ratio value CNR(a) is calculated for each value of the minimum index a by using the two binary masks.
During a step 450, the value of the minimum index a for which the contrast-to-noise ratio CNR(a) is maximum is selected as the lower limit. The variance image corresponding to this lower limit is used for carrying out the rest of the processing (in particular the steps 150 and 160 of the method described with reference to
The Contrast-to-Noise Ratio (CNR) can be calculated in different ways on the basis of a binary mask that makes it possible to distinguish which pixels of the image are part of the image background and which pixels are part of the object or objects represented in the image. In general, the contrast-to-noise ratio is the ratio between the contrast, determined as the difference between the mean value of the pixels forming part of the object or objects and the mean value of the pixels forming part of the image background, and the fluctuation of the image background. The choice of the contrast-to-noise ratio as a criterion results in particular from the fact that, in order to visually detect pathologies in an image, the contrast thereof has to be greater than the fluctuation of the image background, and thus the contrast-to-noise ratio has to be greater than 1.
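A sketch of such a CNR computation and of the selection of the minimum index a maximizing it (hypothetical names; the exact mask combination of the embodiment described below is simplified here to a single object mask and a single background mask):

```python
import numpy as np

def cnr(image, object_mask, background_mask):
    """Contrast-to-noise ratio: difference of the mean pixel values over
    the object and over the background, divided by the standard deviation
    of the background pixels."""
    obj = image[object_mask.astype(bool)]
    bkg = image[background_mask.astype(bool)]
    return (obj.mean() - bkg.mean()) / bkg.std()

def select_min_index(candidates):
    """candidates: dict mapping a candidate value of the minimum index a to
    a tuple (variance_image, object_mask, background_mask). Returns the
    value of a that maximizes CNR(a)."""
    return max(candidates, key=lambda a: cnr(*candidates[a]))
```

The loop over candidate values of a corresponds to steps 430 to 450; the same masks, computed once at the reference value ar, are reused for every candidate.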
According to an exemplary embodiment, the contrast-to-noise ratio is calculated for each variance image at each wavelength:
obtained for the value of the minimum index a, then averaged over the wavelengths, as follows:
where
is the operator representing the intersection of the two binary masks, the intersection corresponding to a logical AND function performed pixel by pixel between the two masks;
is the intra-image standard deviation operation calculated on the distribution of pixels;
is the complement of the mask Mσ, i.e. 1−Mσ
is the term corresponding to the pixels belonging to the background of the image
is the intra-image averaging operation calculated on the pixel distribution.
Such a calculation formula is used, by means of the above term, to eliminate from the calculation of the contrast-to-noise ratio the structures present both in the mean image and in the variance image. In the images acquired by a photoacoustic imaging technique, the mean value of the image is about 100 times larger than the rest, which would result in a very high CNR(a) value for the low values of the minimum index a if the contrast due to the mean image were not subtracted. The mask used thereby ensures that the contrast increases when new structures appear. Thereby, in order to be able to compare the values of CNR(a) in a relevant way whatever the minimum index a, said specific calculation formula is used.
The example curve (470) of CNR(a) variations shown in the
For the estimation of the electronic noise of the sensors used during the step 150A, a plurality of methods can be used.
According to a first noise estimation method, applicable in the case of a reconstructed image, the electronic noise in the reconstructed image can be deduced from a spatial noise measurement of the real radio frequency signals, e.g. as 2·σRF2/Nt, where Nt is the number of transducers used in the reconstruction and σRF2 is the variance of the electronic noise on the measured radio frequency signals, determined from raw signals obtained in the absence of a sample.
The factor 2 comes herein from the fact that signals with complex values are used for the reconstruction.
However, the estimation of
can be corrected by considering that the multispectral spatio-temporal filtering by SVD removes an amount of noise equal to:
where b is the maximum index of the weighted sum of the matrices defining
It has been observed that singular values representing the noise, present in the expression of:
are stationary with respect to wavelengths.
Finally, the residual electronic noise after SVD to be subtracted during the step 150A from the variance images in order to obtain corrected variance images will be:
According to a second noise estimation method, if the noise does not follow said law, it is possible to perform an alternative correction, e.g. by minimizing the L2 norm of the reconstructed image subject to the subtraction of a constant. The value of the constant minimizing the norm can be used for calculating the value to be subtracted from the variance images.
Other methods for estimating the level of noise can be used.
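Whatever the estimation method, the correction of step 150A amounts to subtracting the estimated residual noise variance from the variance images; a minimal sketch (hypothetical function name), with clipping at zero added as a safeguard not specified in the text:

```python
import numpy as np

def correct_variance_image(variance_image, residual_noise_variance):
    """Subtract the estimated residual electronic-noise variance (after the
    SVD filtering) from a variance image, clipping at zero so that no pixel
    ends up with a negative variance."""
    return np.clip(variance_image - residual_noise_variance, 0.0, None)
```

The square root of the corrected variance image then gives the corrected image of fluctuations used in the subsequent steps.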
The photoacoustic image processing method described in the present document was applied to volume images acquired in the chicken embryo; in visible structures, the oxygenation rates found are close to those obtained with conventional photoacoustic spectroscopy based on a mean image.
In
In
In the case where the medium is transparent (as in the case of the chicken embryo, with vascularization surrounded by a clear medium), it is not necessary to correct for the spectral coloration induced by the biological tissue when a plurality of wavelengths is used.
The blood flow oxygenation imaging method described herein pushes back the limitations of conventional detectors with minor hardware modifications: in order to extract the fluctuation of interest and perform the post-processing described herein, it suffices to acquire a sufficient number of images (typically at least 50).
The functions, engines, principle diagrams, flow diagrams, state transition diagrams and/or flowcharts presented in the present document represent conceptual views of illustrative circuits implementing the principles of the invention. Similarly, all flowcharts, flow diagrams, state transition diagrams, pseudo-code and the like represent various aspects of the method which can be implemented essentially by computer program instructions stored on a computer-readable medium and thereby executed by a processor or a device including a processor, whether or not the processor or the device is explicitly represented.
According to one embodiment, one or a plurality of or all of the steps of a photoacoustic image processing method are implemented by software or a computer program.
The present description thereby relates to a computer program which can be executed by a data processor, the computer program including computer program instructions for controlling the execution by a device of one or a plurality of or all of the steps of a photoacoustic image processing method according to any of the embodiments described in the present document. The computer program instructions are intended e.g. to be stored in a memory of a device, loaded and then executed by a processor of this device.
The computer program can use any programming language, and may be in the form of source code, object code, or of intermediate code between source code and object code, such as in a partially compiled form, or any other desirable form.
The device can be implemented by one or a plurality of physically distinct machines and, overall, has the architecture of a computer, including components of such an architecture: data memory or memories, processor(s), communication bus, hardware interface(s) for connecting this computing device to a network or another equipment, user interface(s), etc.
The present description further relates to an information medium readable by a data processor and including instructions of a computer program as mentioned hereinabove. The information medium can be any entity or device apt to store the program.
The information medium can be any hardware means, entity or device apt to store a signal. For example, the medium can include a storage means, such as a ROM or RAM memory, e.g. a CD ROM disk or a magnetic recording means, a computer hard disk, optical storage media, flash memory devices and/or other tangible machine-readable media for storing information.
The term “computer-readable medium” can include, but is not limited to, portable or fixed storage devices, optical storage devices and various other media apt to store, contain or transport instructions and/or data.
It can be a computer storage medium and/or communication media, or more generally any medium that facilitates the transfer of a computer program from one place to another.
Examples of computer-readable media include, but are not limited to, a flash drive or other flash memory devices (e.g., memory keys, memory sticks, a USB key drive), a CD-ROM or other optical storage, a DVD, magnetic disk storage or other magnetic storage devices, a solid state memory, a memory chip, a RAM, a ROM, an EEPROM, smart cards, a relational database management system, a company data management system, etc.
The information medium can be a medium transmissible in the form of a carrier wave such as an electromagnetic signal (electrical, radio or optical signal), which can be carried via appropriate wired or non-wired transmission means: electrical or optical cable, radio or infrared link, or by other means.
The term “processor” can e.g. refer to any microprocessor, microcontroller, controller, integrated circuit or central processing unit (CPU) comprising one or a plurality of processing units or one or a plurality of processing kernels based on hardware. Moreover, the term “processor” should not be interpreted as referring exclusively to hardware apt to execute computer program instructions but can refer e.g. to a digital signal processor (DSP), a network processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other circuit whether programmable or not, either specific or not specific. The term “processor” can also correspond to a combination of a plurality of exemplary embodiments mentioned herein.
The present invention relates to a photoacoustic image processing device comprising at least one data memory comprising program code instructions and at least one data processor, the data processor being configured so that, when the program code instructions are executed by the data processor, the photoacoustic image processing device executes one or a plurality of or all of the steps of a photoacoustic image processing method according to any of the embodiments described herein.
In one embodiment, the photoacoustic image processing device comprises:
More generally, the photoacoustic image processing device comprises means for implementing one or a plurality of or all of the steps of a photoacoustic image processing method according to any of the embodiments described in the present document. The means comprise e.g. software and/or hardware means.
Number | Date | Country | Kind |
---|---|---|---|
2108452 | Aug 2021 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2022/051444 | 7/20/2022 | WO |