This disclosure relates to medical imaging techniques for diagnosis, analysis and treatment, and more particularly, to a probe structure, an imaging device and an imaging method for acquiring a fusion image capable of simultaneously providing pathologic and anatomical information based on various medical imaging techniques.
An ultrasound (US) imaging device is equipment for imaging the structure and characteristics of an observation area in a human body by applying an ultrasound signal to the area with an ultrasound probe, receiving the returning ultrasound signal reflected by tissues, and extracting the information contained in the signal. The US imaging device may advantageously obtain an image in real time, without any harm to the human body, and at low cost in comparison to other medical imaging systems such as X-ray, CT, MRI and PET.
A photoacoustic (PA) imaging device applies photons to an observation area in a human body, receives an ultrasound signal generated directly from the photons absorbed in tissues, and extracts image information from the signal. Absorbed photons generate an ultrasound signal because the absorbing tissues are heated: when a pulsed laser irradiates an absorptive tissue structure, the tissue temperature changes and, as a result, the tissue structure expands. A pressure wave propagates outward from the expanded structure and may be detected using an ultrasound transducer. The photoacoustic image has the advantages that an image may be obtained based on optical absorption contrast while ensuring resolution at the level of ultrasound, the cost is very low in comparison to MRI, and patients are not exposed to ionizing radiation.
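As a quantitative illustration that is not part of the original disclosure, the photoacoustics literature commonly models the initial pressure rise as proportional to the absorbed optical energy, namely $p_0 = \Gamma \mu_a F$, where $p_0$ is the initial photoacoustic pressure, $\Gamma$ the Grüneisen parameter (thermoelastic conversion efficiency), $\mu_a$ the optical absorption coefficient of the tissue, and $F$ the local laser fluence.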
A fluorescent (FL) imaging device uses the principle that cells or bacteria expressing a fluorescent protein gene are marked or introduced into a living body, a light source of a specific wavelength is irradiated thereto, the cells or tissues of the living body or a fluorescent material in the living body absorb the irradiated light, become excited, and emit light of another specific wavelength, and this emitted light is detected and imaged. As fluorescent protein genes for acquiring a fluorescent image, green fluorescent protein (GFP), red fluorescent protein (RFP), blue fluorescent protein (BFP) and yellow fluorescent protein (YFP), as well as enhanced GFP (EGFP), a variant of GFP, are widely used, and more diverse fluorescent proteins with increased brightness are being developed. The fluorescent image is generally acquired using a charge-coupled device (CCD) camera, which allows rapid acquisition of a fluorescent image, and animals such as guinea pigs need not be sacrificed.
Such medical diagnostic imaging devices have different observation areas and characteristics, and thus different kinds of devices must be applied to a single observation area depending on purpose and situation. In addition, these imaging techniques may be used together for more accurate diagnosis and richer information. To date, one-shot investigation methods have been reported in which images acquired by different imaging devices for a single lesion area are comparatively investigated for experiments or studies, but no technical means has been proposed for acquiring various kinds of image information simultaneously for use in clinical practice.
The present disclosure is directed to overcoming the limits of the existing practice in which medical imaging devices are utilized individually at diagnostic and medical sites. In the existing practice, due to the absence of a technical measure for simultaneously monitoring a single region in a multilateral way, medical images of an observation area are acquired sporadically depending on the monitoring target and purpose, and the images must then be analyzed individually by experts. The present disclosure is also directed to resolving this inconvenience.
In one general aspect, there is provided a device for acquiring an image, comprising: a sound source configured to apply an ultrasound signal for an ultrasound (US) image to a target; a light source configured to apply an optical signal for a photoacoustic (PA) image and a fluorescent (FL) image to the target; a sound probing unit configured to receive the ultrasound signal generated by the sound source and the photoacoustic signal generated by the light source from the target; a light probing unit configured to receive the optical signal generated by the light source from the target; and an image generating unit configured to generate a fusion image including image information with different probing planes with respect to the target by using at least two signals of the received ultrasound signal, the received photoacoustic signal and the received optical signal.
In the device for acquiring an image according to an embodiment, the image generating unit may generate a single fusion image by: generating a depth image of the target from the received ultrasound signal or the received photoacoustic signal, generating a planar image of the target from the received optical signal, and mapping the generated depth image and the generated planar image.
In the device for acquiring an image according to an embodiment, the image generating unit may determine a feature point from each of the images with different probing planes, and map the determined feature points to generate an image where a relation among the images is visually matched and displayed.
In the device for acquiring an image according to an embodiment, the sound probing unit may be located adjacent to the target, and the light probing unit may be installed to be located relatively far from the target in comparison to the sound probing unit.
In the device for acquiring an image according to an embodiment, the sound probing unit and the light probing unit may be installed along different axes to prevent signal interference from each other.
In the device for acquiring an image according to an embodiment, the device may further include a switch for shifting operation between the sound probing unit and the light probing unit, and a signal corresponding to each probing unit may be received according to a manipulation of the switch by a user.
In another aspect of the present disclosure, there is provided a device for acquiring an image, comprising: a sound source configured to apply an ultrasound signal for an ultrasound image to a target; a light source configured to apply an optical signal for a photoacoustic image and a fluorescent image to the target; a sound probing unit configured to receive the ultrasound signal generated by the sound source and the photoacoustic signal generated by the light source from the target; a light probing unit configured to receive the optical signal generated by the light source from the target; a location control unit configured to adjust physical locations of the sound probing unit and the light probing unit; and an image generating unit configured to generate a fusion image including image information with different probing planes with respect to the target by using at least two signals of the received ultrasound signal, the received photoacoustic signal and the received optical signal according to the adjusted locations.
In the device for acquiring an image according to another embodiment, the image generating unit may generate a single fusion image by: generating a three-dimensional image by moving the sound probing unit along a surface of the target according to the control of the location control unit to laminate a depth image of the target from the received ultrasound signal or the received photoacoustic signal, generating a planar image of the target by fixing the location of the light probing unit according to the control of the location control unit, and mapping the generated three-dimensional image and the generated planar image in consideration of the adjusted location.
In the device for acquiring an image according to another embodiment, the location control unit may move the sound probing unit in a longitudinal direction along a surface of the target based on the light probing unit to guide successive generation of depth images corresponding to the planar image by the light probing unit.
In the device for acquiring an image according to another embodiment, the sound probing unit may be located adjacent to the target, the light probing unit may be installed to be located relatively far from the target in comparison to the sound probing unit, and the sound probing unit may receive a sound signal from the target while changing the location thereof according to the control of the location control unit.
In the device for acquiring an image according to another embodiment, the device may further include an optically and/or acoustically transparent front which is adjacent to the target and permeable to an optical signal and a sound signal.
In another aspect of the present disclosure, there is provided a method for acquiring an image, comprising: applying an ultrasound signal for an ultrasound image or an optical signal for a photoacoustic image to a target, and receiving an ultrasound signal or photoacoustic signal corresponding to a signal applied from the target; applying an optical signal for a fluorescent image to the target, and receiving an optical signal from the target; and generating a fusion image including image information with different probing planes with respect to the target by using at least two signals of the received ultrasound signal, the received photoacoustic signal and the received optical signal, wherein the fusion image includes a depth image generated from the received ultrasound signal or the received photoacoustic signal, a planar image generated from the received optical signal and mapping information between the depth image and the planar image.
In the method for acquiring an image according to an embodiment, the method may further include displaying the generated fusion image on a display device, and the depth image and the planar image included in the fusion image may be switched according to a user manipulation so as to be displayed simultaneously or sequentially.
In the method for acquiring an image according to an embodiment, the method may further include generating a three-dimensional image by moving a probing unit for receiving the ultrasound signal or photoacoustic signal in a longitudinal direction along a surface of the target based on the probing unit for the fluorescent image, so that the depth image is successively laminated corresponding to the planar image, and the generating of a fusion image may generate a single fusion image by mapping the generated three-dimensional image and the generated planar image in consideration of the adjusted location.
In the method for acquiring an image according to an embodiment, the method may further include determining a feature point from each of the images with different probing planes, and mapping the determined feature points, and the generating of a fusion image may generate an image in which a relation among the images is visually matched and displayed.
In the method for acquiring an image according to an embodiment, the method may further include: displaying the ultrasound image, the photoacoustic image and the fluorescent image on a display device simultaneously; and generating an overlay image in which at least two images selected by a user are overlaid, and displaying the overlay image on the display device.
In the method for acquiring an image according to an embodiment, the method may further include: receiving an adjustment value for a location of the image, a parameter for the image and transparency of the image from the user; and generating an image changed according to the input adjustment value and displaying the changed image on the display device.
The embodiments of the present disclosure allow easier analysis of images by providing a probe structure which may utilize various medical imaging techniques, namely ultrasound, photoacoustic and fluorescent imaging, simultaneously; provide pathologic, anatomical and functional information for a single observation area in a multilateral way by generating a fusion image based on planar image information and depth image information having different probing planes with respect to the observation target; and generate a fusion image through simple user manipulation at a medical site.
As an embodiment, the present disclosure provides a device for acquiring an image, which includes: a sound source configured to apply an ultrasound signal for an ultrasound (US) image to a target; a light source configured to apply an optical signal for a photoacoustic (PA) image and a fluorescent (FL) image to the target; a sound probing unit configured to receive the ultrasound signal generated by the sound source and the photoacoustic signal generated by the light source from the target; a light probing unit configured to receive the optical signal generated by the light source from the target; and an image generating unit configured to generate a fusion image including image information with different probing planes with respect to the target by using at least two signals of the received ultrasound signal, the received photoacoustic signal and the received optical signal.
Hereinafter, a basic idea adopted in embodiments of the present disclosure will be described briefly, and then detailed technical features will be described in order.
When a biological tissue absorbs light, radiative and nonradiative relaxation processes occur, and the fluorescent image and the photoacoustic image are formed through these different processes of the absorbed optical energy. The embodiments of the present disclosure allow the degree of light absorption and the occurrence of the radiative/nonradiative processes in a tissue to be observed, and thereby propose a system structure which may obtain the optical characteristics of the tissue as a more accurate quantitative index and may provide elastic ultrasound imaging (elastography) and color flow imaging by processing the ultrasound signal. In addition, the embodiments of the present disclosure may provide a quantitative index with high contrast in applications using a contrast agent that is reactive with a single imaging technique or with multiple imaging techniques, and propose individual information for the various imaging techniques and applications used in existing ultrasound, photoacoustic and fluorescent imaging, as well as a structure required for developing new applications by combining such individual information.
To this end, a fusion probe and system structure is required which can apply the ultrasound, photoacoustic and fluorescent imaging techniques to a single tissue and process the results together in association with one another. In addition, an auxiliary system structure is proposed for improving the shape of the fusion probe, the structure of the system and the quality of the image.
Hereinafter, embodiments of the present disclosure which may be easily implemented by those skilled in the art will be described in detail. However, these embodiments are just for better understanding of the present disclosure, and it is obvious to those skilled in the art that the scope of the present disclosure is not limited thereto.
The source 21 may be classified into a sound source and a light source depending on the kind of generated signal. The sound source applies an ultrasound signal for an ultrasound (US) image to a target 10, and the light source applies an optical signal for a photoacoustic (PA) image and a fluorescent (FL) image to the target 10.
In addition, the probing unit 24 may be classified into a sound probing unit and a light probing unit depending on the kind of received signal. The sound probing unit receives an ultrasound signal generated by the sound source or a photoacoustic signal generated by the light source from the target 10, and the light probing unit receives an optical signal generated by the light source from the target 10.
Further, the sound probing unit and the light probing unit have different minimum distances to the observation target 10 depending on signal observation characteristics. In other words, the sound probing unit may be located adjacent to the observation target 10, and the light probing unit may be installed to be located relatively far from the observation target 10 in comparison to the sound probing unit. This is because a probe based on a sound signal like the ultrasound image is used in contact with the observation target 10, and a probe based on an optical signal like the fluorescent image is used at a predetermined distance to observe a planar structure like the surface of the observation target 10.
The image generating unit 29 generates a fusion image including image information with different probing planes with respect to the target 10 by using at least two signals of the ultrasound signal, the photoacoustic signal and the optical signal received from the probing unit 24. In the embodiments of the present disclosure, a single fusion image including various kinds of information may be generated by collecting a plurality of image information with different probing planes, and the detailed configuration of each embodiment will be described later with reference to the drawings.
In addition, the image acquiring device of this embodiment may further include a location control unit 27 configured to adjust the physical location of the probing unit 24.
In particular, the image generating unit 29 moves the probing unit 24, particularly the sound probing unit, along the surface of the target 10 according to the control of the location control unit 27, so that a depth image of the target 10 is laminated from the received ultrasound signal or photoacoustic signal to generate a three-dimensional image. Further, the image generating unit 29 may generate a planar image of the target 10 by fixing the location of the probing unit 24, particularly the light probing unit, according to the control of the location control unit 27, and may generate a single fusion image by mapping the three-dimensional image and the planar image generated in consideration of a finally adjusted location. In other words, the location control unit 27 is required for generating a three-dimensional image from the depth image by controlling the location of the probing unit 24.
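A minimal sketch of this lamination step follows, assuming depth frames of equal size and a known step between probe positions; the function and variable names (laminate_depth_frames, step_mm) are illustrative and do not come from the disclosure.

```python
# Sketch: laminate successive 2D depth frames (US/PA images) acquired while
# the sound probing unit is stepped along the target surface into a 3D volume.
import numpy as np

def laminate_depth_frames(frames, step_mm):
    """Stack N equally sized depth frames into a (N, depth, width) volume.

    frames  -- list of 2D numpy arrays, one per probe position
    step_mm -- assumed spacing between successive probe positions
    """
    volume = np.stack(frames, axis=0)
    positions = np.arange(len(frames)) * step_mm  # longitudinal slice locations
    return volume, positions

# Usage with fake data: five 64x128 frames spaced 0.5 mm apart
frames = [np.random.rand(64, 128) for _ in range(5)]
volume, positions = laminate_depth_frames(frames, step_mm=0.5)
print(volume.shape)  # (5, 64, 128)
```

The returned positions can then be used to place the volume relative to the fixed planar image when the two are mapped.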
Alternatively, instead of relying on the location control unit 27, a user may directly move the probing unit 24 along the surface of the target 10 so that the light probing unit is located at the same portion where the ultrasound signal or photoacoustic signal was acquired and a fluorescent planar image is acquired there, thereby providing a single fusion image by means of software image mapping.
The data of each imaging technique used in the above image mapping procedure may employ high-contrast imaging methods from the individual techniques to improve image quality. The ultrasound image may utilize, for example, harmonic imaging, perfusion imaging, synthetic aperture, plane wave imaging, blood flow imaging, adaptive beam focusing or the like. The photoacoustic image may utilize, for example, adaptive beam focusing, spectroscopy or the like. The fluorescent image may utilize, for example, stereo 3D imaging, spectroscopy, wavelength separation or the like.
Meanwhile, the image generating unit 29 may determine a feature point from each of the images with different probing planes and map the determined feature points, thereby generating an image where the relation of the images is visually matched and displayed. For this, an image processing technique for extracting feature points from a plurality of images and mapping them may be utilized. When mapping images, basically, an axis of one image is specified based on the target 10 and the images are connected on the basis of the specified axis, so that the relation of common features is displayed. For this, a three-dimensional coordinate system for the target 10 is assumed, a display direction of each image with respect to the x-, y- and z-directions of the coordinate system is set, and the images are mapped based on the feature points to generate a matched image.
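One conventional way to realize such feature-point mapping is sketched below with OpenCV's ORB detector and a RANSAC homography; this is an assumed implementation choice, not a method stated in the disclosure.

```python
# Sketch: extract feature points from two images with different probing
# planes and estimate a transform that maps one onto the other.
import cv2
import numpy as np

def map_feature_points(img_a, img_b, min_matches=10):
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)  # keypoints + descriptors
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None  # one image has no detectable features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # too few common features to relate the images
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography  # 3x3 matrix aligning img_a onto img_b
```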
Hereinafter, characteristics of an individual probe according to a medical imaging technique will be introduced briefly, and then various probe structures mechanically coupled to generate a fusion image will be described in order.
Characteristics of each medical image are shown in Table 1 below.
Considering the above different characteristics, the image generating unit proposed in the embodiments of the present disclosure generates a depth image of the target from the ultrasound signal or photoacoustic signal received through the sound probing unit, generates a planar image of the target from the optical signal received through the light probing unit, and maps the generated depth image and the generated planar image to generate a single fusion image.
Hereinafter, probe structures according to two embodiments are described in order.
In addition, due to the difference in image acquiring structures, the sound probing unit (PAUS array probe) may be located adjacent to the target, the light probing unit (WL/FL probe) may be installed relatively far from the target in comparison to the sound probing unit, and the sound probing unit may receive a sound signal from the target while changing its location according to the control of the location control unit (not shown).
Further, the inside of the probe may be filled with a coupler capable of transmitting light without loss while allowing ultrasound to pass, and the surface of the probe may be made of a material permeable to both ultrasound and light. For this, the probe structure 330 may employ an optically and/or acoustically transparent front which is adjacent to the target and permeable to optical and sound signals.
The image processing system 420 includes a workstation for controlling the overall fusion image diagnosis system, a PAUS system for processing the signals of the photoacoustic and ultrasound images, an FL light source for applying optical energy for the fluorescent image and the photoacoustic image, and a probe location control unit (probe positioner) capable of controlling the location of the probe as desired by the user, and acquires bio data in various aspects through the multi-modal probe 410 serving as a fusion probe.
The multi-modal probe 410 includes a PAUS linear transducer for receiving photoacoustic and ultrasound signals, an optic fiber bundle for applying an optical energy transmitted from a main body, and a CCD sensor for acquiring fluorescent data generated from the human body, and transmits the acquired data to the image processing system 420. The image processing system 420 performs image restoration based on the received data and then displays the restored image on the display device 430.
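The data flow among the probe 410, the image processing system 420 and the display device 430 can be summarized by the following sketch; all class and method names are placeholders invented for illustration.

```python
# Sketch of the 410 -> 420 -> 430 pipeline with fake data.
import numpy as np

class MultiModalProbe:  # fusion probe (410)
    def acquire(self):
        # stand-ins for PAUS transducer frames and CCD fluorescent frames
        return {"paus": np.random.rand(64, 128), "fl": np.random.rand(256, 256)}

class ImageProcessingSystem:  # workstation + PAUS system (420)
    def restore(self, raw):
        # stand-in for image restoration: normalize each channel to [0, 1]
        return {k: (v - v.min()) / ((v.max() - v.min()) or 1.0)
                for k, v in raw.items()}

class Display:  # display device (430)
    def show(self, images):
        for name, img in images.items():
            print(name, img.shape)

Display().show(ImageProcessingSystem().restore(MultiModalProbe().acquire()))
```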
In S510, the image acquiring device applies an ultrasound signal for the ultrasound image and an optical signal for the photoacoustic image to the target, and receives, from the target, an ultrasound signal or a photoacoustic signal corresponding to the applied signal.
In S520, the image acquiring device applies an optical signal for the fluorescent image to the target, and receives an optical signal from the target.
In S530, the image acquiring device generates a fusion image including image information with different probing planes for the target by using at least two signals among the ultrasound signal and the photoacoustic signal received in S510 and the optical signal received in S520. Here, the fusion image may include a depth image generated from the ultrasound signal or photoacoustic signal, a planar image generated from the optical signal, and mapping information between the depth image and the planar image.
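The fusion image produced in S530 can be pictured as a simple container holding the three components named above; the dataclass and field names below are assumptions for illustration only.

```python
# Sketch: the fusion image as depth image + planar image + mapping info.
from dataclasses import dataclass
import numpy as np

@dataclass
class FusionImage:
    depth_image: np.ndarray   # generated from the US or PA signal (S510)
    planar_image: np.ndarray  # generated from the optical signal (S520)
    mapping: np.ndarray       # e.g., a 3x3 homography relating the two planes

def generate_fusion_image(depth_image, planar_image, mapping):
    return FusionImage(depth_image, planar_image, mapping)
```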
Meanwhile, the image acquiring method described above may further include displaying the generated fusion image on a display device, as in the following embodiments.
Referring to the acquisition procedure, imaging is divided into a sound signal-based image acquiring process 610 and an optical signal-based image acquiring process 620, which are shifted between according to a user manipulation.
In more detail, the sound signal-based image acquiring process 610 first acquires US frame data and generates and optimizes a US image therefrom, and likewise acquires PA frame data and generates and optimizes a PA image therefrom. After that, the generated PA image and US image may be mapped to generate image information in the depth direction. The user may then press the FL button to shift into the fluorescent image mode.
The optical signal-based image acquiring process 620 may first acquire a WL image and an FL image respectively, and map the two to generate image information in the front direction. If the user then releases the FL button, the process returns to the PAUS image acquiring process 610.
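The button-driven shift between processes 610 and 620 amounts to a small acquisition loop such as the sketch below; the callables (fl_button_pressed, stop_requested and the two acquisition steps) are hypothetical stand-ins.

```python
# Sketch: run FL acquisition (620) while the FL button is held,
# otherwise run PAUS acquisition (610).
def acquisition_loop(fl_button_pressed, stop_requested,
                     acquire_paus_frame, acquire_fl_frame):
    while not stop_requested():
        if fl_button_pressed():
            acquire_fl_frame()    # WL/FL planar imaging (process 620)
        else:
            acquire_paus_frame()  # US/PA depth imaging (process 610)
```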
Next, an embodiment is described in which a three-dimensional fusion image is generated by successively laminating depth images that correspond to a single planar image.
For this, the image acquiring device available for this embodiment moves the probing unit (PAUS array probe) for receiving an ultrasound signal or a photoacoustic signal in a longitudinal direction along the surface of the observation target on the basis of the probing unit (FL probe) for a fluorescent image, so that a depth image is successively laminated corresponding to the planar image to generate a three-dimensional image. At this time, in the fusion image generating process, the three-dimensional image and the planar image are mapped in consideration of the location adjusted through the location control unit to generate a single fusion image.
First, in the individual image acquiring process 810, a WL image and an FL image are acquired, US frame data and PA frame data are acquired along with them, and a PAUS image of a single frame is obtained. The PAUS array probe is then moved in the longitudinal direction of the observation target to acquire a PAUS image of a neighboring frame. This per-frame depth image acquisition is repeated until a desired number of frames is reached (for example, until the index of the final frame reaches a preset positive integer N), completing the individual image acquiring process 810. The resulting data comprise a single surface image and N depth images of the area of interest.
Next, in the three-dimensional image generating process 820, the N depth images generated along the longitudinal direction are laminated to generate a single three-dimensional PAUS image. The generated PAUS image is then optimized, and the PAUS image and the FL image are mapped and displayed on a single display device. The user may reconfigure the displayed image by adjusting and resetting image parameters as necessary.
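Processes 810 and 820 together amount to the loop sketched below, which gathers N depth frames while stepping the probe and then laminates them as in the earlier sketch; the positioner interface and all names are hypothetical.

```python
# Sketch: acquire N PAUS depth frames while stepping the probe (process 810),
# ready to be laminated into a 3D PAUS image (process 820).
def acquire_depth_stack(positioner, acquire_paus_frame, n_frames, step_mm):
    frames = []
    for _ in range(n_frames):
        frames.append(acquire_paus_frame())    # one PAUS depth frame
        positioner.move_longitudinal(step_mm)  # advance to the neighboring frame
    return frames  # pass to laminate_depth_frames() to build the volume
```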
Referring to the display operation, the acquired ultrasound, photoacoustic and fluorescent images may be presented to the user in various combinations.
For this, in the image acquiring method according to the present disclosure, the ultrasound image, the photoacoustic image and the fluorescent image may be simultaneously displayed on the display device, and an image in which at least two images selected by the user are overlaid may be generated and displayed on the display device. In addition, in the image acquiring method according to the present disclosure, an adjustment value for a location of the image, a parameter for the image and transparency of the image may be input by the user, and an image changed according to the input adjustment value may be generated and displayed on the display device.
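A minimal sketch of the overlay display follows, assuming the selected images are already co-registered and normalized to [0, 1]; the function name and the alpha parameter are illustrative.

```python
# Sketch: alpha-blend two user-selected images with a user-set transparency.
import numpy as np

def overlay(image_a, image_b, alpha=0.5):
    """Blend image_b over image_a; alpha weights image_b's contribution."""
    return (1.0 - alpha) * image_a + alpha * image_b

# Usage: show an FL planar image over a US projection at 30% opacity
# fused = overlay(us_projection, fl_image, alpha=0.3)
```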
According to the embodiments of the present disclosure described above, in preclinical trials, the delivery of a medicine and its resulting effects may be quantitatively assessed by means of the reaction to light, thereby allowing more quantitative evaluation of medicinal effects. In addition, in clinical trials, the optical characteristics of tissues having different clinical meanings may be understood more quantitatively, allowing early diagnosis of diseases and accurate staging, which may be helpful for establishing a plan for treating a disease and for actually treating it.
Further, the embodiments of the present disclosure may be applied in various ways by combining advantages of a photoacoustic/ultrasound imaging technique capable of observing a relatively greater depth for an observation target and a fluorescent imaging technique capable of observing an overall surface at a relatively smaller depth. In addition, if a contrast agent is used, characteristics of materials in or out of the contrast agent may be differently set for photoacoustic and fluorescent images, and then the distribution of the contrast agent and the degree of transfer of medicine included in the contrast agent may be quantitatively figured out.
Meanwhile, a motion control process of a probe structure and an image processing process for processing individual images obtained by the probe structure according to the embodiment of the present disclosure may be implemented as computer-readable codes on a computer-readable recording medium. The computer-readable recording medium may include any kind of storage devices where data readable by a computer system is stored.
The computer-readable recording medium includes, for example, ROM, RAM, CD-ROM, magnetic tapes, floppy disks and optical media, and may also be implemented in the form of carrier waves (for example, transmission through the Internet). In addition, the computer-readable recording medium may be distributed over computer systems connected through a network so that computer-readable codes may be stored and executed in a distributed manner. Also, functional programs, codes and code segments for implementing the present disclosure may be easily inferred by programmers in the related art.
While the exemplary embodiments have been shown and described, it will be understood by those skilled in the art that various changes in form and details may be made thereto without departing from the spirit and scope of this disclosure as defined by the appended claims. In addition, many modifications can be made to adapt a particular situation or material to the teachings of this disclosure without departing from the essential scope thereof. Therefore, the spirit of the present disclosure should not be limited to the embodiments described above, and the appended claims and their equivalents or modifications should also be regarded as falling within the scope of the present disclosure.
According to the embodiments of the present disclosure, there is provided a probe structure which may utilize various medical imaging techniques such as ultrasound imaging, photoacoustic imaging and fluorescent imaging simultaneously; generate a fusion image based on planar image information and depth image information with different probing planes with respect to an observation target, so that pathologic, anatomical and functional information may be provided in a multilateral way for a single observation area; and allow a user to generate a fusion image at a medical site through a simple manipulation, thereby allowing easier image analysis.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/KR2013/006943 | 8/1/2013 | WO | 00 |