The present invention relates to an apparatus for acquiring biofunctional information, a method for acquiring biofunctional information, and a program for implementing the method.
Imaging apparatuses using X-rays or ultrasound are used in many fields requiring nondestructive testing, such as the medical field. Particularly in the medical field, diagnosis using ultrasound echo has the advantage of being noninvasive and is therefore used in many situations. Deriving biofunctional information within a living body, that is, physiological information, is important for the discovery of a disease site such as a cancer. However, in conventional diagnosis using X-rays or ultrasound echo, only shape information within the living body is derived. Therefore, Photoacoustic Tomography (PAT), one of the optical imaging techniques, has been proposed as a new noninvasive diagnostic method capable of imaging biofunctional information.
In PAT, in vivo information is imaged by irradiating a subject with pulsed light generated from a light source and detecting an acoustic wave (typically an ultrasound wave) generated when living body tissue absorbs the energy of the light that has propagated and diffused within the subject. Information related to the optical properties inside the subject can be visualized three-dimensionally by detecting the temporal change of the acoustic waves received at a plurality of places surrounding the subject and mathematically analyzing (that is, reconstructing) the derived signals. When a profile of the initial pressure generated within the subject is obtained by this method, information on a profile of optical properties, such as a profile of the light absorption coefficient, can be derived.
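For reference, a relation commonly used in photoacoustic imaging (not stated explicitly above, but standard) connects the initial pressure to the optical properties mentioned here:

p0 = Γ·μa·Φ

where p0 is the initial pressure, Γ is the Grüneisen parameter, μa is the light absorption coefficient, and Φ is the local light fluence. Dividing the reconstructed initial pressure by an estimate of the fluence therefore yields the profile of absorption coefficient referred to in the following description.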
Examples of the detection of biofunctional information using PAT include measurement of oxygen saturation.
Oxygen saturation is the proportion of hemoglobin bound to oxygen relative to the total amount of hemoglobin in blood. Whether the cardiopulmonary function is operating normally can be assessed by detecting oxygen saturation. In addition, oxygen saturation is an indicator for distinguishing the benignancy/malignancy of a tumor, and is therefore expected to serve as a measure for the efficient discovery of a malignant tumor.
Near-infrared light is used for the measurement of oxygen saturation. Near-infrared light has the property of being easily transmitted through water, which constitutes a large portion of a living body, while being easily absorbed by hemoglobin in blood. Hemoglobin in a living body exists in two states, deoxyhemoglobin, which is not bound to oxygen, and oxyhemoglobin, which is bound to oxygen, and the optical absorption spectra of the two states are different. Therefore, oxygen saturation can be found by performing measurement a plurality of times using pulsed lights having different wavelengths in the near-infrared region, and subjecting the calculated light absorption coefficients to a comparison operation. In other words, when a living body is irradiated with near-infrared light, oxygen saturation can also be imaged as biofunctional information, in addition to a blood vessel image as shape information of the living body.
In acquiring biofunctional information by this method, however, it is necessary to compare the results obtained for the same place in a plurality of measurements, and thus, when the measurement positions do not match because of movement of the body or the like, an erroneous result may be derived.
For the problem of comparing a plurality of measurements, a technique such as that disclosed in Patent Literature 1 has been proposed. In the technique of Patent Literature 1, a moving vector between images, measured for a particular region in the images, is extracted. Then, an adjustment of the image, such as zooming, rotation, or shifting, is performed based on the vector to correct the position displacement (i.e., position adjustment), and the plurality of images are compared.
PTL 1: Japanese Patent Application Laid-Open No. 2007-215930
However, position adjustment between images still involves the following problems.
A first problem is that the extraction of a moving vector has low robustness. In position adjustment between images, a point or structure (referred to as a characteristic structure) presumed to be the same place is identified in the plurality of images to be compared, and a moving vector is extracted based on that point or structure. However, since a living body is elastic and deforms in a complicated manner, even if a characteristic structure can be identified in one image, its deformation may prevent it from being extracted in another image. In addition, when no characteristic structure can be identified in the images, the extraction of a moving vector becomes even more difficult.
A second problem is that it is difficult to match all pixels completely. A moving vector is derived only at representative points, such as characteristic structures, and therefore interpolation is necessary to adjust the positions of the other regions. However, since a living body is elastic, it is difficult to align a plurality of images pixel by pixel in the interpolated regions.
In view of the above problems, it is an object of the present invention to provide a technique that can acquire biofunctional information, such as oxygen saturation, without requiring position adjustment between images even if position displacement occurs when the results of a plurality of measurements are compared.
In an aspect of the present invention, an apparatus for acquiring biofunctional information comprises: an acoustic wave detector for receiving a plurality of acoustic waves generated when a subject is irradiated with a plurality of lights having different wavelengths, and for converting the plurality of acoustic waves to a plurality of signals corresponding to the plurality of lights; and a processing apparatus for deriving biofunctional information inside the subject using a plurality of profiles of absorption coefficient which are derived from the plurality of signals and respectively correspond to the plurality of signals, in which the processing apparatus includes: a first unit for deriving, from a signal corresponding to light having a first wavelength, first data showing a profile of a first absorption coefficient corresponding to the light having the first wavelength, and deriving, from a signal corresponding to light having a second wavelength different from the first wavelength, second data showing a profile of a second absorption coefficient corresponding to the light having the second wavelength; and a second unit for deriving the biofunctional information using the first data and the second data, wherein the second data has a lower image spatial resolution than the first data.
In another aspect of the present invention, a method for acquiring biofunctional information by receiving acoustic waves generated when a subject is irradiated with a plurality of lights having different wavelengths and converting the acoustic waves to a plurality of signals corresponding to the plurality of lights with an acoustic wave detector, and deriving biofunctional information using a plurality of profiles of absorption coefficient which are calculated from the plurality of signals and correspond to the plurality of signals, includes the steps of: deriving, from an acoustic wave generated when the subject is irradiated with light having a first wavelength, first data showing a profile of a first absorption coefficient corresponding to the light having the first wavelength; deriving, from an acoustic wave generated when the subject is irradiated with light having a second wavelength, second data showing a profile of a second absorption coefficient corresponding to the light having the second wavelength and having a lower image spatial resolution than the first data; and deriving the biofunctional information using the first data and the second data.
In yet another aspect of the present invention, a program for causing a computer to execute each step of a method for acquiring biofunctional information causes the computer to execute the steps of: deriving, from an acoustic wave generated when the subject is irradiated with light having a first wavelength, first data showing a profile of a first absorption coefficient corresponding to the light having the first wavelength; deriving, from an acoustic wave generated when the subject is irradiated with light having a second wavelength, second data showing a profile of a second absorption coefficient corresponding to the light having the second wavelength and having a lower image spatial resolution than the first data; and deriving the biofunctional information using the first data and the second data.
With the apparatus and the method for acquiring biofunctional information according to the present invention, oxygen saturation can be calculated with a minor error even if the position displacement of a subject occurs during measurements.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The present invention will be described with reference to the drawings. The measurement of oxygen saturation will be described hereinbelow. However, the biofunctional information to be measured with the photoacoustic imaging apparatus of the present invention is not limited to oxygen saturation, and the total amount of hemoglobin or the like may also be measured. As long as biofunctional information inside a subject can be derived by irradiating the subject with at least two lights having different wavelengths and detecting the difference between the acoustic waves generated within the subject, the apparatus for acquiring biofunctional information (photoacoustic imaging apparatus) of the present invention can be used for the measurement of any biofunctional information.
The present invention is not limited to a single apparatus having the following configuration. The present invention is also implemented as a method for realizing the functions described in this embodiment, and by processing in which software (a computer program) implementing these functions is supplied to a system or an apparatus via a network or various storage media, and the computer (CPU, MPU, or the like) of the system or apparatus reads and executes the program.
A photoacoustic imaging apparatus in this embodiment includes a light source 1 which irradiates a subject 3 with light 2 having a single wavelength, optical devices 4, such as lenses, which guide the light 2 from the light source 1 to the subject 3, an acoustic detector 7 which detects an acoustic wave 6 generated when an optical absorber 5 absorbs the energy of the light that has propagated and diffused inside the subject 3, and converts the acoustic wave 6 to an electrical signal, a controlling apparatus 8 which allows the acoustic detector 7 to scan, an electrical signal processing circuit 9 which performs amplification, digital conversion, and the like of the electrical signal, an apparatus 10 for data processing which constructs an image regarding in vivo information (generates image data), an apparatus 11 for inputting misplacement amount which inputs the position displacement amount of the subject, and a display 12 which displays the image. The light source 1 can output the light 2 at two or more different wavelengths.
An implementation method will be described with reference to the drawings.
First, in a measurement using light having wavelength A, data showing profile A of absorption coefficient is calculated in the unit 101 by reconstructing the digital signal sent from the electrical signal processing circuit 9 (S3), and the calculated data showing profile A of absorption coefficient (first data) is stored in a memory A102. Similarly, for a measurement using light having wavelength B, data showing profile B of absorption coefficient (third data) is calculated (S6) and stored in a memory B103. Next, the position displacement amount between the position of the optical absorber 5 in the measurement using the light having the wavelength A and its position in the measurement using the light having the wavelength B is input to the apparatus 11 for inputting misplacement amount, and an amount for changing resolution is determined in the unit 108 based on the value of the position displacement (S7). The unit 104 reduces the image spatial resolution of at least one set of data, among the data showing the profiles of absorption coefficient stored in the memories, by the determined amount for changing resolution, to thereby derive a profile of absorption coefficient after the reduction (S8). In this invention, the data whose image spatial resolution has been reduced (second data) is used in calculating information on the subject such as oxygen saturation.
Although the present invention is explained in this specification mainly in terms of three-dimensional data processing, the invention can be applied to both two-dimensional image data (pixel data) and three-dimensional image data (voxel data). Image spatial resolution in this invention is the resolution in the image space, rather than the resolution determined by the size of the elements of the acoustic detector 7. In this specification, spatial resolution in three-dimensional image data is referred to as voxel spatial resolution, and spatial resolution in two-dimensional image data is referred to as pixel spatial resolution. In addition, voxel spatial resolution and pixel spatial resolution are together defined as image spatial resolution.
When position displacement occurs between the measurement using the light having the wavelength A and the measurement using the light having the wavelength B, images of the same optical absorber usually cannot be compared, as illustrated in the drawings. In the present invention, therefore, the image spatial resolution of at least one profile of absorption coefficient is reduced so that the images of the optical absorber overlap each other.
At this time, since the oxygen saturation is calculated from the image whose resolution is changed, oxygen saturation is derived for a wide region including the periphery of the optical absorber. However, in the imaging of biofunctional information such as oxygen saturation, since (the absolute value of) the derived value indicates the benignancy/malignancy of a tumor or the like, the quantitativeness of how much each of the plurality of lights is absorbed is more important than the resolution of the image (which is important in the imaging of shape information such as a blood vessel image). Therefore, even if the oxygen saturation is derived at a lowered resolution and is an average of the oxygen saturation in the image of the optical absorber (the image of the optical absorber in the profile of absorption coefficient before the resolution is reduced) and the oxygen saturation in the periphery of that image, the utility value of the derived oxygen saturation is still large. In addition, in the present invention, the place where the optical absorber is actually present can be identified (that is, the resolution can be increased) in a subsequent step (S10). Therefore, even if the oxygen saturation is calculated at the cost of the resolution at this stage, the utility value is large as long as the quantitativeness is sufficiently high.
The amount for changing (the extent of reducing) the image spatial resolution at this time is determined according to the position displacement amount input to the apparatus 11 for inputting misplacement amount and the method used for the resolution reduction processing. In an elastic object such as a living body, even if the position displacement of a certain particular place is accurately grasped, the same amount of position displacement cannot always be applied to other places. Therefore, if an attempt were made to accurately align images by position adjustment, an enormous number of measurements of the position displacement amount, one for each voxel, would become necessary. In the present invention, however, the portion where the images of the optical absorber overlap is created by reducing the image spatial resolution, and therefore it is not necessary to grasp a position displacement amount for each voxel. However, in order to create the portion where the images of the optical absorber overlap, the image of the optical absorber after the reduction of the image spatial resolution must be enlarged by an amount greater than the actual position displacement. Thus, while the position displacement amount input to the apparatus 11 for inputting misplacement amount may be a rough amount, a value that is certainly larger than the actual position displacement amount is used.
Regardless of the number of profiles of absorption coefficient whose image spatial resolution is changed, the amount for changing the image spatial resolution with respect to the position displacement amount is determined so that the image of the optical absorber in the profile of absorption coefficient whose resolution is not reduced (or, when the resolution of all profiles of absorption coefficient is reduced, in any one profile of absorption coefficient before the reduction) is at least included in the region of the image of the optical absorber in the profile of absorption coefficient after the reduction. At this time, the amount for changing the image spatial resolution may be determined independently for each profile of absorption coefficient whose image spatial resolution is to be reduced, or the same amount may be used for all such profiles. The method for deriving the position displacement amount is not particularly limited, and the position displacement amount can be derived by any publicly known method. The position displacement amount may be derived from mechanical measurement or from measurement based on images, and the input may be either manual or automatic. The amount of image spatial resolution to be changed with respect to the position displacement amount differs for each method for changing resolution. Therefore, the relationship between the position displacement amount and the amount for changing resolution may be obtained in advance for each method for changing resolution and prepared as a table or a relation, and the amount for changing resolution may be determined using this previously prepared table or relation.
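As a minimal sketch of such a previously prepared table, the following Python snippet looks up the amount for changing resolution from a roughly input position displacement amount. Expressing the amount as a Gaussian-filter sigma is an assumption for illustration, and the numerical values are purely illustrative; in practice the table would be calibrated for the chosen resolution-changing method.

```python
import numpy as np

# Illustrative, pre-calibrated relation for one resolution-changing method
# (Gaussian spatial filter): displacement amount (mm) -> filter sigma (mm).
DISPLACEMENT_MM = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
SIGMA_MM        = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

def amount_for_changing_resolution(displacement_mm: float) -> float:
    """Return a filter sigma that safely exceeds the (rough) input
    position displacement amount, by interpolating the prepared table."""
    return float(np.interp(displacement_mm, DISPLACEMENT_MM, SIGMA_MM))

# A roughly estimated 2 mm displacement maps to a 4 mm sigma.
print(amount_for_changing_resolution(2.0))
```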
The method for reducing the image spatial resolution is not limited; the reduction can be achieved, for example, by convolution with a spatial filter such as a digital filter. In this method, the calculation amount is not large, and the method is practically extendable to three dimensions. As the filter, a resolution-reducing filter such as a moving-average filter or a Gaussian filter is used. The size of the image of the optical absorber in the voxel data can be adjusted by changing the size of the filter. At this time, it is necessary to perform the adjustment so that the images of the optical absorber overlap each other, as illustrated in the drawings.
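The following is a minimal sketch of this spatial-filter convolution in Python, assuming a 3-D voxel grid and using SciPy's Gaussian and moving-average (uniform) filters; the grid size, voxel pitch, and filter widths are illustrative assumptions and not values taken from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def reduce_image_spatial_resolution(mu_a, sigma_vox, method="gaussian"):
    """Reduce the image spatial resolution of a 3-D profile of absorption
    coefficient (voxel data) by convolving a resolution-reducing filter."""
    if method == "gaussian":
        return gaussian_filter(mu_a, sigma=sigma_vox)
    # Moving-average filter: the kernel size plays the role of the filter size.
    return uniform_filter(mu_a, size=int(2 * round(sigma_vox) + 1))

# Illustrative example: a 2 mm spherical absorber on a 0.5 mm voxel grid,
# blurred so that its image is enlarged beyond the expected displacement.
voxel_mm = 0.5
grid = np.zeros((64, 64, 64))
zz, yy, xx = np.indices(grid.shape)
radius_vox = 1.0 / voxel_mm                      # 2 mm diameter -> 1 mm radius
grid[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= radius_vox ** 2] = 1.0
blurred = reduce_image_spatial_resolution(grid, sigma_vox=4.0)  # sigma ~ 2 mm
```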
The profile of absorption coefficient subjected to the resolution reduction processing is stored in a temporary memory B′105. When the resolution of a plurality of profiles of absorption coefficient is reduced, each profile of absorption coefficient is stored in a different temporary memory. Next, in the unit 106 for calculating oxygen saturation, which serves as a unit for calculating biofunctional information, oxygen saturation is derived using at least one profile of absorption coefficient whose resolution is reduced (S9). That is, a profile of absorption coefficient whose image spatial resolution is reduced is used as at least one of the plurality of profiles of absorption coefficient used for obtaining the oxygen saturation. As long as at least one profile of absorption coefficient whose image spatial resolution is reduced is used, the oxygen saturation may be obtained using two or more such profiles, or all of the profiles of absorption coefficient used may be ones whose image spatial resolution is reduced. In this case as well, however, the image of the optical absorber in the profile of absorption coefficient whose resolution is not reduced should be included in the region of the image of the optical absorber in the profile of absorption coefficient whose resolution is reduced. The method for calculating oxygen saturation will be described later.
Since the profile of absorption coefficient whose resolution is reduced is used, the derived oxygen saturation is the value of a region including the periphery of the image of the optical absorber. Therefore, in the unit 107, the derived information on the subject (e.g., oxygen saturation) is composed with the profile of absorption coefficient whose image spatial resolution is not reduced, as illustrated in the drawings, so that only the region of the image of the optical absorber is extracted (S10).
The method for extracting only the region of the image of the optical absorber is not particularly limited. For example, only the portion of the optical absorber can be extracted in the profile of absorption coefficient whose image spatial resolution is not reduced by determining in advance a threshold value for a voxel value that represents the absorption coefficient of a position where the optical absorber is present, and performing threshold processing. In other words, only the portion of the optical absorber can be extracted by substituting the value of oxygen saturation at given spatial coordinates only into the voxel at the same coordinates having a value equal to or more than the predetermined threshold in the profile of absorption coefficient whose image spatial resolution is not reduced, and setting the oxygen saturation to zero in the portions whose value is lower than the threshold in that profile. Also in two-dimensional data, only the portion of the optical absorber can be extracted by substituting the value of oxygen saturation at given spatial coordinates only into the pixel at the same coordinates having a value equal to or more than the threshold in the profile of absorption coefficient whose pixel spatial resolution is not reduced.
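A minimal sketch of this threshold processing in Python, assuming the oxygen saturation volume and the non-reduced profile of absorption coefficient are NumPy arrays of the same shape; the 50%-of-maximum default matches the later examples but is otherwise an assumption.

```python
import numpy as np

def extract_absorber_region(so2, mu_a_full, threshold_ratio=0.5):
    """Keep the oxygen saturation value only in voxels (or pixels) where the
    non-reduced profile of absorption coefficient is at or above the
    threshold, and set it to zero elsewhere (step S10)."""
    threshold = threshold_ratio * mu_a_full.max()
    return np.where(mu_a_full >= threshold, so2, 0.0)

# Usage (hypothetical variable names):
# so2_masked = extract_absorber_region(so2, mu_a_wavelength_A)
```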
At this time, it is also possible to simultaneously extract the profile of absorption coefficient whose image spatial resolution is not changed for the position of the optical absorber, and to assign the value of oxygen saturation and the value of the profile of absorption coefficient to different color attributes selected from hue, saturation, and lightness, to derive spatial data (image data). For example, it is possible to determine the hue from the value of oxygen saturation and the saturation from the value of the profile of absorption coefficient for each voxel, and to perform drawing accordingly.
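As a minimal sketch of such a color assignment (the particular hue range and normalization are assumptions for illustration), the following Python snippet maps oxygen saturation to hue and the normalized absorption coefficient to saturation for a 2-D slice, using Matplotlib's HSV-to-RGB conversion.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def color_code_slice(so2_slice, mu_a_slice):
    """Hue from oxygen saturation (0 % -> red, 100 % -> blue),
    saturation from the normalized absorption coefficient, full lightness."""
    hue = np.clip(so2_slice, 0.0, 1.0) * (240.0 / 360.0)
    sat = mu_a_slice / (mu_a_slice.max() + 1e-12)
    val = np.ones_like(so2_slice)
    return hsv_to_rgb(np.stack([hue, sat, val], axis=-1))  # RGB image, (H, W, 3)
```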
This result is displayed by the display 12 (S11).
Next, the method for calculating oxygen saturation will be described. When the main optical absorbers are deoxyhemoglobin and oxyhemoglobin, the absorption coefficient μa(λ) derived by measurement using light having a wavelength λ is the sum of the product of the absorption coefficient μHb(λ) of deoxyhemoglobin and the abundance ratio CHb of deoxyhemoglobin, and the product of the absorption coefficient μHbO2(λ) of oxyhemoglobin and the abundance ratio CHbO2 of oxyhemoglobin, as shown in formula (1). μHb(λ) and μHbO2(λ) are physical properties having determined values, and are measured in advance by other methods. There are two unknowns in formula (1), CHb and CHbO2. Therefore, by performing measurement at least twice using lights having different wavelengths, the simultaneous equations can be solved to calculate CHb and CHbO2. When more measurements are performed, CHb and CHbO2 can be derived, for example, by fitting using the method of least squares.
μa(λ)=CHb·μHb(λ)+CHbO2·μHbO2(λ) (1)
Oxygen saturation SO2 is the ratio of oxyhemoglobin to total hemoglobin and is therefore calculated by formula (2).

SO2=CHbO2/(CHb+CHbO2) (2)
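The following is a minimal Python sketch of solving formula (1) at two wavelengths and applying formula (2) voxel by voxel. The numerical absorption coefficients for Hb and HbO2 below are illustrative placeholders only (tabulated literature values would be used in practice), and with more than two wavelengths a least-squares solve (np.linalg.lstsq) could replace the exact solve.

```python
import numpy as np

# Illustrative coefficients [mu_Hb(lambda), mu_HbO2(lambda)] per wavelength;
# replace with tabulated values for the wavelengths actually used.
MU = np.array([[0.80, 0.82],   # wavelength A (e.g. 800 nm)
               [0.69, 1.06]])  # wavelength B (e.g. 850 nm)

def oxygen_saturation(mu_a_A, mu_a_B):
    """Solve formula (1) for C_Hb and C_HbO2 in every voxel and return
    SO2 = C_HbO2 / (C_Hb + C_HbO2) from formula (2)."""
    rhs = np.stack([mu_a_A.ravel(), mu_a_B.ravel()])   # shape (2, N)
    c_hb, c_hbo2 = np.linalg.solve(MU, rhs)            # two rows of shape (N,)
    so2 = c_hbo2 / (c_hb + c_hbo2 + 1e-12)
    return so2.reshape(mu_a_A.shape)
```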
A method will now be described in which, instead of the spatial filter described in Embodiment 1, band limitation is placed on the signal derived by the acoustic wave detector in order to reduce the image spatial resolution of the derived profile of absorption coefficient and obtain the second data, with reference to the drawings.
The internal processing of the apparatus 10 for data processing, which is the point of difference, will be described; the remaining apparatus configuration is similar to that of Embodiment 1. The profile A of absorption coefficient is calculated in the unit 101, using the digital signal which is sent from the electrical signal processing circuit 9 and which is obtained in the measurement using the wavelength A. Meanwhile, the amount by which the resolution of the digital signal is to be reduced is determined in the unit 108, based on the value derived from the apparatus 11 for inputting misplacement amount. The amount for changing the resolution is determined as in Embodiment 1. The amount by which the resolution of the digital signal is changed with respect to the misplacement amount differs for each method for changing the resolution. Therefore, the relationship between the position displacement amount and the amount for changing resolution may be obtained in advance for each method for changing resolution and prepared as a table or a relation, and the amount for changing resolution may be determined using this previously prepared table or relation.
The image spatial resolution of the derived profile of absorption coefficient is reduced by processing, in the unit 104, the time-series digital signal sent from the electrical signal processing circuit 9. In the unit 104, the resolution of the signal is reduced according to the amount for changing the resolution, to derive a reduced signal (first reduced signal). In other words, the resolution of a signal corresponding to light having at least one wavelength, among the signals corresponding to lights having a plurality of wavelengths, is reduced more than the resolution of the other signals corresponding to light having a wavelength different from the at least one wavelength, to derive a reduced signal corresponding to the light having the at least one wavelength. Specifically, for example, by limiting the band of a signal, the image spatial resolution is reduced so that the images of the optical absorber are superimposed on each other. Alternatively, a reduced signal can also be calculated by summing the signals of the acoustic detector derived at a plurality of positions and using the summed signal as the signal at one place, whereby the image spatial resolution is reduced. In these processing methods, only signal processing is performed on the time-series signal, and processing in a three-dimensional space is not necessary; therefore, the processing amount in the entire process is small. As described above, the relationship between the position displacement amount and the amount for changing resolution may be prepared in advance as a table or a relation for each method for changing resolution, and the amount for changing resolution may be determined using that table or relation.

By calculating the data showing a profile of absorption coefficient in the unit 101 using the processed signal, data showing a profile of absorption coefficient whose resolution is reduced, compared with the data calculated using the signal before processing, is derived. As in Embodiment 1, profiles of absorption coefficient in which the resolution is reduced by the above-mentioned method for at least one of the measurements using the light having the wavelength A and the light having the wavelength B are calculated, the data showing the calculated profiles of absorption coefficient are stored in the memory A102 and the memory B103, and the average intensity of oxygen saturation is calculated in the unit 106 for calculating oxygen saturation, using both sets of data. It is also possible to calculate oxygen saturation using additional lights having different wavelengths C, D, and so on; the processing is similar to that of Embodiment 1. Next, the data showing the profile of absorption coefficient whose image spatial resolution is not reduced and the intensity of oxygen saturation are composed in the unit 107, and the result is displayed on the display 12.
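As a minimal sketch of the band-limitation approach (the sampling rate, filter order, and cutoff frequency are illustrative assumptions), the following Python snippet low-pass filters the time-series detector signals before reconstruction, which lowers the image spatial resolution of the profile of absorption coefficient reconstructed from them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_limit_signals(rf, fs_hz, cutoff_hz):
    """Reduce the resolution of time-series detector signals (last axis = time)
    by low-pass filtering; reconstructing from the band-limited signals yields
    a profile of absorption coefficient with lower image spatial resolution."""
    b, a = butter(4, cutoff_hz / (fs_hz / 2.0), btype="low")
    return filtfilt(b, a, rf, axis=-1)

# Example: signals sampled at 40 MHz, band-limited to 1 MHz so that the image
# of the optical absorber is enlarged beyond the expected position displacement.
rf = np.random.randn(100, 100, 2048)             # placeholder detector signals
rf_limited = band_limit_signals(rf, fs_hz=40e6, cutoff_hz=1e6)
```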
The calculation of oxygen saturation was simulated for each of a case where the position displacement of an optical absorber between measurements using a plurality of lights did not occur, a case where the position displacement occurred and processing for the position displacement was not performed, and a case where the position displacement occurred and Embodiment 1 was carried out.
A spherical optical absorber having a diameter of 2 mm, in which 40% oxyhemoglobin and 60% deoxyhemoglobin were mixed to simulate blood, was placed at the center of a subject and irradiated with 800 nm and 850 nm lights, and the corresponding signals were derived by simulation. Profiles of absorption coefficient were derived from the two signals, respectively. Oxygen saturation was calculated without displacing the two profiles of absorption coefficient, and the result is illustrated in the drawings.
For comparison, a case will be described in which the position displacement occurred and no processing for the position displacement was performed. When the profiles of absorption coefficient of 800 nm and 850 nm were vertically displaced by 2 mm, the oxygen saturation derived by a conventional method that does not reduce the resolution was as illustrated in the drawings.
The result of carrying out the processing of Embodiment 1 when the position displacement occurred is illustrated in the drawings.
An example will be described in which simulation similar to the simulation of Example 1 was performed and a method for summing acoustic signals derived at a plurality of positions and using the summed signals as a signal at one place was used as a method for reducing the voxel spatial resolution of a profile of absorption coefficient.
Acoustic signals generated by irradiating an optical absorber, in which 40% oxyhemoglobin and 60% deoxyhemoglobin were mixed, with 800 nm and 850 nm lights were derived by simulation. The probe for deriving the acoustic signals consisted of 100×100 square elements having a side of 2 mm, arrayed without gaps in between. Assuming that position displacement occurred between the measurements for 800 nm and 850 nm, the position of the absorber was vertically displaced by 2 mm between the simulations for 800 nm and 850 nm.
For both the 800 nm and 850 nm lights, the signals of 5×5 elements were summed and regarded as the signal of one virtual element, so that signals of 20×20 virtual elements were derived. Therefore, when the signals of the virtual elements were used, the voxel spatial resolution was reduced by a factor of 5 (the effective element size became five times larger), compared with the case where the absorption coefficient was calculated using each element individually. The profiles of absorption coefficient of 800 nm and 850 nm derived using the virtual elements were subjected to a comparison operation to calculate oxygen saturation. Only the oxygen saturation of voxels whose value was equal to or more than 50% of the maximum value in the 800 nm profile of absorption coefficient derived before the individual-element signals were summed was displayed. The displayed oxygen saturation of these voxels was about 40%. In this manner, even when position displacement occurred, the images of the optical absorber were superimposed by processing the signals, and the oxygen saturation was derived with only a minor error. The increase in calculation time was negligible compared with the conventional method.
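A minimal Python sketch of this virtual-element summation (the array shapes follow the example above; the variable names and placeholder data are assumptions):

```python
import numpy as np

def sum_into_virtual_elements(rf, block=5):
    """Sum the time-series signals of block x block neighbouring elements and
    treat each sum as the signal of one virtual element
    (100 x 100 real elements -> 20 x 20 virtual elements for block = 5)."""
    ny, nx, nt = rf.shape
    rf = rf.reshape(ny // block, block, nx // block, block, nt)
    return rf.sum(axis=(1, 3))

rf = np.random.randn(100, 100, 2048)        # placeholder signals: (y, x, time)
rf_virtual = sum_into_virtual_elements(rf)  # shape (20, 20, 2048)
```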
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2010-022892, filed Feb. 4, 2010 and Japanese Patent Application No. 2011-010534, filed Jan. 21, 2011, which are hereby incorporated by reference herein in their entirety.