The present disclosure relates to a method of measuring a fluorescence signal in a tissue of a body part, to which a fluorescent agent has been added, and of imaging a surface of the body part, wherein the tissue to which the fluorescent agent has been added forms part of the body part. Furthermore, the present disclosure relates to an image capturing and processing device configured to measure a fluorescence signal in a tissue of a body part, to which a fluorescent agent has been added, and configured to image a surface of the body part, wherein the tissue to which the fluorescent agent has been added forms part of the body part. The present disclosure also relates to an endoscope or laparoscope comprising such an image capturing and processing device.
Furthermore, the present disclosure relates to a method of diagnosing lymphedema and to a method of long-term therapy of lymphedema.
The method and the image capturing and processing device can relate to imaging and measuring of the lymphatic function, for example in view of the diagnosis, treatment and/or prevention of lymphedema.
Lymphedema is an accumulation of lymphatic fluid in the body's tissue. While oxygenated blood is pumped via the arteries from the heart to the tissue, deoxygenated blood returns to the heart via the veins. Because the pressure level on the arterial side is much higher than on the venous side, a colorless fluid portion of the blood is pushed into the space between the cells. Typically, more fluid is pushed out than is reabsorbed on the venous side. The excess fluid is transported by the lymphatic capillaries. Furthermore, the fluid carries away local and foreign substances such as larger proteins and cellular debris. Once in the lymphatic system, this fluid, including the transported substances, is referred to as lymph or lymph fluid.
The lymphatic system comprises lymphatic vessels having one-way valves, similar to vein valves, for transporting the lymph to the next lymph node. The lymph node removes certain substances and cleans the fluid before it drains back into the bloodstream.
If the lymphatic system becomes obstructed, i.e. the lymph flow is blocked or does not take place at the desired level, the lymph fluid accumulates in the interstitial space between the tissue cells. This accumulation, which is due to an impairment of lymphatic transport, is called lymphedema. The accumulation of lymph can cause an inflammatory reaction which damages the cells surrounding the affected areas. It can further cause fibrosis, which can turn into a hardening of the affected tissue.
Because lymphedema is a lifelong condition for which no cure or medication exists, early diagnosis and appropriate early countermeasures for improving drainage and reducing the fluid load are of high importance for patients' well-being and recovery. Possible treatments, ranging from lymphatic massage and compression bandages up to surgery, depend on the level of severity, which is graded in a four-stage system defined by the World Health Organization (WHO) as follows:
For diagnosis of the function of the lymphatic system, a commonly used technique is manual inspection of the affected limb or body part by a physician. A known imaging technique is lymphoscintigraphy. In this technique, a radiotracer is injected into the tissue of the affected body part and subsequently MRI (Magnetic Resonance Imaging), CT (Computed Tomography), a PET-CT scan (Positron Emission Tomography) or ultrasound imaging is performed.
A relatively new imaging technique is infrared fluorescence imaging using a fluorescent dye, for example ICG (Indocyanine Green). ICG is a green-colored medical dye that has been used for over 40 years. The dye emits fluorescent light when excited with near-infrared light having a wavelength between 600 nm and 800 nm. Due to this excitation, ICG emits fluorescence light between 750 nm and 950 nm. The fluorescence of the ICG dye can be detected using a CCD or CMOS sensor or camera. The fluorescent dye is administered to the tissue of an affected limb or body part, and the concentration and flow of the lymphatic fluid can be traced on the basis of the detected fluorescence light.
An object is to provide an enhanced method of measuring a fluorescence signal and an enhanced image capturing and processing device as well as an enhanced endoscope or laparoscope, wherein an enhanced fluorescence imaging output can be provided.
Furthermore, an object is to provide an enhanced method of diagnosing lymphedema and an enhanced method of long-term therapy of lymphedema.
Such object can be solved by a method of measuring a fluorescence signal in a tissue of a body part, to which a fluorescent agent has been added, and of imaging a surface of the body part, wherein the tissue to which the fluorescent agent has been added forms part of the body part, the method comprising:
Two images, namely the large visible light image and the large fluorescence image, are output, wherein in contrast to traditional methods these two images are linked to each other. In other words, the two images share a common coordinate system, which means that objects that can be seen in one of the images, for example in the fluorescence image, can be found in the visible light image at the same position. The visible light image reproduces the surface of the body part as it can be observed with the human eye. In the fluorescence image, an intensity of the fluorescence light is reproduced. Both images can be displayed using the same image scale and image orientation and show the same region of interest of the body part. This is due to the fact that both images can be captured with identical viewing direction and identical field of view. Based on the image information that is provided, the user is enabled to spot, on the real body part, exactly the point or position at which a certain fluorescence phenomenon is observed. This can provide unparalleled advantages with respect to diagnosis and therapy.
Within the context of this specification, the terms “fluorescence image” and “visible light image” are not limited to 2D images. Images within the meaning of this specification can be 2D images but also 3D images or 2D images comprising additional information. 2D images can be captured using specialized camera equipment. 3D images can be captured using, for example, stereo camera equipment. The 2D or 3D images can also be line scans, which can be captured using an image line scanner. The images can also be LIDAR scans, which, similar to 3D images, also comprise depth information. The data from the LIDAR scanner can be added to 2D images as additional information. The combination of the 2D image data and the LIDAR scan data leads to information similar to that of a 3D image, because the 2D image data can be combined with corresponding depth information. Furthermore, the term “stitching” is not limited to the combination of two or more 2D images. Stitching can also be performed on the basis of the other above-mentioned image data, for example on the basis of 3D image data. Further information, for example the depth information that is captured by the LIDAR scanner, can also be taken into account when executing the stitching algorithm.
Within the context of this specification, a “visible light image” is an image of the real world situation. It reproduces an image impression similar to what can be seen by the human eye. Unlike the human eye, however, the visible light image can be a color image, a greyscale image or even a false color scale plot. The visible light image shows the surface of the body part comprising the tissue to which the fluorescent agent has been administered. If the tissue is arranged at the surface of the body part, the imaging of the surface of the body part includes imaging of the surface of the tissue.
Stitching can be performed on the basis of two or more 2D images and results in a larger 2D image. When the stitching algorithm is executed on the basis of a plurality of 3D images, the result is a large 3D image. It is also possible to perform the stitching in that a plurality of 2D images plus additional information (for example from a LIDAR scanner, a time-of-flight sensor or a similar device) is processed, and the result of this is a large 3D image. In this case, the stitching includes the reconstruction of a 3D image from 2D image data plus additional information. Stitching of images comprises identifying unique special features that correspond to each other and are shown in the two images that are to be stitched together. This requires that the fields of view of the two images that are combined in the stitching process overlap at least slightly.
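Purely as an illustration of this overlap prerequisite, and not as part of the disclosed method, the following sketch checks whether two consecutive frames share enough corresponding features to be stitched. It assumes the OpenCV library and 8-bit images; the threshold values are arbitrary assumptions.

```python
# Illustrative sketch only: accept two consecutive frames for stitching
# only if enough feature correspondences are found between them.
import cv2

def frames_overlap(img_a, img_b, min_matches=25):
    """Return True if the fields of view overlap enough for stitching."""
    if img_a.ndim == 3:
        img_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    if img_b.ndim == 3:
        img_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < 40]  # close descriptor matches
    return len(good) >= min_matches
```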
In view of this prerequisite for the stitching process, according to an embodiment, the repeating of the capturing of the fluorescence image and the visible light image to provide a series of fluorescence images and a series of visible light images can be executed in that subsequent images of said series are captured with fields of view that are at least slightly overlapping.
The previously mentioned additional information, which can be used for the stitching process, is, however, not limited to depth information assigned to, for example, every individual pixel in the 2D image. It is also possible that, during the image acquisition of the series of fluorescence images and visible light images, the image capturing device acquires, as further data, data about the spatial orientation of the image capturing device. The spatial orientation can be, for example, an orientation of the image capturing device in the coordinate system of an examination room. This orientation can be characterized by the three space coordinates x, y and z together with a viewing direction (for example a vector in the coordinate system of the examination room) and a tilt angle of the image capturing device about the viewing direction (for example an angle of rotation of the image capturing device about the viewing direction). This information can be captured for every single image (or image pair comprising a fluorescence image and a visible light image) of the series of visible light images and fluorescence images. When performing the stitching, this additional information indicating the orientation of the image capturing device in space can be used. The orientation of the image capturing device in, for example, a coordinate system of the examination room can be recalculated into an orientation of the image capturing device in relation to the body part. The information can be used when performing the 3D reconstruction of the body part during the stitching.
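A minimal data-structure sketch of such an acquisition record is given below; the field names are illustrative assumptions and do not stem from the disclosure.

```python
# Illustrative sketch: one record per captured image pair, storing the
# spatial orientation of the image capturing device alongside the images.
from dataclasses import dataclass
import numpy as np

@dataclass
class CapturedPair:
    visible: np.ndarray         # visible light image
    fluorescence: np.ndarray    # fluorescence image, identical field of view
    position_xyz: np.ndarray    # x, y, z in the examination-room coordinate system
    view_direction: np.ndarray  # unit vector of the viewing direction
    roll_deg: float             # tilt angle (rotation about the viewing direction)
```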
Furthermore, the stitching algorithm can also be applied in any other application to generate a large visible light image and a large fluorescence image. In other words, the method is not limited to the assessment of the lymphatic system. The assessment can be performed, for example, on blood vessels and blood flow, or it can be a perfusion assessment of organs and tissues. The assessment can also encompass visually locating tissue with certain characteristics (e.g. tumorous tissue), locating glands (e.g. parathyroid glands) and nerves. This list is not exhaustive.
The stitching algorithm can be applied to images obtained in open surgery or in minimally invasive surgery. Images on which stitching is performed and which are captured during minimally invasive surgery can be obtained using an endoscope, for example an endoscope having a rigid shaft or an endoscope having a flexible shaft.
According to these further embodiments, there is a method of measuring a fluorescence signal in a body part, to which a fluorescent agent has been added, and of imaging a surface of the body part, the method comprising:
The method can further comprise outputting the large visible light image and the large fluorescence image.
The method is not limited to the use of a fluorescent agent and/or dye. The method can be performed without the use of a dye, by exploiting the effect of auto-fluorescence of certain tissue, for example of parathyroid glands or intestinal tissue. Furthermore, the absence of auto-fluorescence can be used, for example, to detect lesions.
According to an embodiment, a method of measuring an auto-fluorescence signal in a tissue of a body part or in a body part is provided. It is not necessary to add or administer a fluorescent agent to the tissue or to the body part. The method comprises imaging a surface of the body part, wherein the tissue in which auto-fluorescence is measured forms part of the body part. The method further comprises:
The method can comprise outputting the large visible light image and the large auto-fluorescence image.
According to still another embodiment, the method comprises:
In other words, it is possible to apply the set of stitching parameters, which have been determined when stitching the visible light images, to the fluorescence images and vice versa.
Furthermore, it is within the scope of the above-explained method that the fluorescence image can be an image detected in the visible light spectrum. This applies, for example, if Patent Blue, methylene blue or isosulfan blue is applied as the fluorescent agent or dye, because the light emission of these substances can be seen with the naked eye.
According to an embodiment, the method can further comprise superimposing the large visible light image and the large fluorescence image to provide an overlay image of the body part and outputting the overlay image as output of the large visible light image and the large fluorescence image.
The method can provide an overlay image showing the visible light image of the body part, which is for example affected by lymphedema, and the corresponding fluorescence image, which is indicative of a concentration of lymph in the respective region of the affected body part. In other words, the overlay image can be a combination of a visible light image and a fluorescence image. The fluorescence image can be for example an image, in which the intensity of the fluorescence light is shown as a false color plot. The overlay image can enhance and simplify the analysis of the situation of the lymphatic system. In traditional measurement methods for detecting a fluorescence signal, there is typically no visible light image combined with the fluorescence image. This, however, makes it very difficult to pinpoint exact locations, at which abnormalities in the lymphatic transport are detected, because they are for example visible as intensity spots in the fluorescence image. This drawback becomes even more relevant if different measurements are performed at different points in time or by different operators. Without the link to the visible light image, it is very difficult to compare fluorescence measurements, which have been taken at different points in time or by different persons. If no visible light image is combined with the fluorescence image or linked thereto, it is difficult to perform patient follow up or to monitor any progress of a therapy.
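One plausible realization of such an overlay, given merely as a sketch under the assumption that OpenCV is available and that the fluorescence intensity is rendered as a false color plot, is an alpha blend of the two large images:

```python
# Illustrative sketch: false-color the fluorescence intensity and blend it
# onto the visible light image; colormap and weighting are arbitrary choices.
import cv2
import numpy as np

def overlay_image(visible_bgr, fluorescence, alpha=0.5):
    norm = cv2.normalize(fluorescence, None, 0, 255, cv2.NORM_MINMAX)
    false_color = cv2.applyColorMap(norm.astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, false_color, alpha, 0.0)
```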
The stitching of the fluorescence images can include the reconstruction of the images into a 3D image. Stitching of the series of fluorescence images can be performed based on the set of stitching parameters which have been previously determined when performing the stitching of the visible light images.
Fluorescence images typically offer only few special features that are suitable for performing the stitching process. Because the visible light image and the fluorescence image are linked by a known and constant relationship with respect to viewing direction and/or perspective, the parameters of the stitching algorithm used for the visible light images can also be applied for the stitching of the fluorescence images. In other words, due to the fact that the visible light image and the fluorescence image are captured via optically aligned sensors, the same stitching parameters can be used for stitching of the visible light images and for stitching of the fluorescence images. This can significantly enhance the stitching process and results in a large fluorescence image of better quality.
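The reuse of parameters can be pictured as follows; this sketch assumes that the stitching parameters take the form of per-image homography matrices, which is one common choice and is not mandated by the disclosure. The merge step and the canvas dimensions w, h are placeholders.

```python
# Illustrative sketch: homographies estimated on the visible light series
# are applied unchanged to the fluorescence series, which is justified here
# because both channels are captured through the same objective lens.
import cv2

def warp_series(images, homographies, canvas_size):
    """Warp every image of a series into the common panorama frame."""
    return [cv2.warpPerspective(img, H, canvas_size)
            for img, H in zip(images, homographies)]

# large_vis  = merge(warp_series(vis_series,  Hs, (w, h)))
# large_fluo = merge(warp_series(fluo_series, Hs, (w, h)))  # same Hs reused
```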
The fluorescent agent is for example ICG (Indocyanine Green) or methylene blue. Within the context of this specification, the term “fluorescent dye” or “dye” (also referred to as “fluorochrome” or “fluorophore”) refers to a component of a molecule, which causes the molecule to be fluorescent. The component is a functional group in the molecule that absorbs energy of a specific wavelength and re-emits energy at a different specific wavelength. In various embodiments, the fluorescent agent can comprise a fluorescence dye, an analogue thereof, a derivative thereof, or a combination of these. Appropriate fluorescent dyes include, but are not limited to, indocyanine green (ICG), fluorescein, methylene blue, isosulfan blue, Patent Blue, cyanine5 (Cy5), cyanine5.5 (Cy5.5), cyanine7 (Cy7), cyanine7.5 (Cy7.5), cypate, silicon rhodamine, 5-ALA, IRDye 700, IRDye 800CW, IRDye 800RS, IRDye 800BK, porphyrin derivatives, Illuminare-1, ALM-488, GCP-002, GCP-003, LUM-015, EMI-137, SGM-101, ASP-1929, AVB-620, OTL-38, VGT-309, BLZ-100, ONM-100, BEVA800.
In embodiments where fluorescence is derived from autofluorescence, one or more of the fluorophores or agents giving rise to the autofluorescence may be an endogenous tissue fluorophore (e.g., NADH, thyroid gland, parathyroid gland etc.).
The method can be performed with different fluorescent agents and dyes. Basically, any combination of an appropriate dye and the suitable or matching excitation light source can be applied. The method can also be performed without the use of a dye, exploiting the effect of auto-fluorescence of certain tissue (e.g. parathyroid glands) or molecules.
Examples of dyes that can be applied and are mainly used for the lymphatic system are: Indocyanine Green (ICG), methylene blue, isosulfan blue and Patent Blue. Patent Blue can even be seen with the naked eye. Hence, no fluorescence is needed, but the stitching of the images can be performed anyway.
The stitching algorithm can be, for example, a panorama stitching algorithm, in which the images are analyzed to extract special and distinguishing features. These features can then be linked to each other across the multiple images, and an image transformation (for example shifting, rotation, stretching along one or more axes or a keystone correction) is performed. The locations of the linked features can be used to determine the image transformation parameters, which are also referred to as the stitching parameters. Subsequent to image orientation, the images can be merged and thereby “stitched” together. This transformation can be executed in the same way on both the visible light images and the fluorescence images. Finally, the two images can be output. This output can include displaying the images, for example on a screen. For example, the two images are shown side-by-side. According to the above-referred embodiment, the two images are shown in an overlay image.
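A compact sketch of the parameter-determination step described above is given below. It uses standard OpenCV feature matching with RANSAC outlier rejection, which is one well-known realization of such a panorama algorithm and not necessarily the one employed here; 8-bit grayscale input is assumed.

```python
# Illustrative sketch: estimate the stitching parameters (here: a single
# homography) from linked features of two overlapping images.
import cv2
import numpy as np

def stitching_parameters(img_a, img_b):
    """Estimate the homography mapping img_b into the frame of img_a."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    if len(matches) < 4:
        return None  # not enough linked features for a homography
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects wrongly linked features; H encodes shifting, rotation,
    # stretching and keystone distortion in a single 3x3 matrix.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```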
As previously mentioned, the performing of the stitching of the series of images is not limited to the stitching of 2D images. The stitching can also comprise a reconstruction of a 3D image based on for example a series of 3D images or based on a series of 2D images plus additional information.
Often, the function of the lymphatic system is affected in a limb of the body. However, the method of measuring the fluorescence signal is not limited to the inspection of a limb. The method generally refers to the inspection of a body part, which can be a limb of a patient but also the corpus, the head, the neck, the back or any other part of the body. It is also possible that the body part is an organ. The method of measuring the fluorescence signal as an indicator for the lymphatic flow is in this case performed during open surgery. The same applies to a situation in which the surgery is minimally invasive surgery, which is performed using an endoscope or laparoscope.
According to another embodiment, the viewing direction and the perspective of the fluorescence image and the visible light image can be identical and the fluorescence image and the visible light image can be captured through one and the same, such as through one single, objective lens.
The fluorescence image and the visible light image can be captured by an image capturing device, which can comprise a prism assembly and a plurality of image sensors assigned thereto. Fluorescent light and visible light enter the prism assembly as a common light bundle and through one and the same entrance surface of the prism assembly. The prism assembly can comprise filters for separating the visible wavelength range from the infrared wavelength range, in which the excited emission of the fluorescent agent typically takes place. The different wavelength bands, i.e. the visible light (also abbreviated Vis) and the infrared light (also abbreviated IR), can be directed to different sensors. The capturing of the visible light image and the fluorescence image through one single objective lens can allow a perfect alignment of the viewing direction and perspective of the two images. The viewing direction and the perspective of the visible light image and the fluorescence image can be identical.
In an embodiment, capturing of the fluorescence image and capturing of the visible light image can be performed simultaneously in absence of time-switching between a signal of the fluorescence image and a signal of the visible light image.
The method can dispense with time-switching of signals. In this way, the infrared image, which is the fluorescence image, and the visible light image can be captured exactly at the same time using separate image sensors. Hence, the images can also be captured with a high frame repeat rate, which allows the method to be applied in situations where reliable hand-eye coordination is necessary. High frame rates of 60 fps or even higher are possible. Such high frame rates can typically not be achieved when time-switching is applied. Furthermore, when the fluorescence image and the visible light image are captured on individual sensors, the sensors can be arranged exactly in focus. Furthermore, the settings of the sensors can be adjusted to the individual requirements for image acquisition of the visible light image and the fluorescence image. This pertains, for example, to an adjustment of the sensor gain, the noise reduction, the exposure time, etc.
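Because each channel has its own sensor, the acquisition settings can differ per channel. The following fragment only illustrates this idea; the parameter names and values are assumptions and do not correspond to a real camera API.

```python
# Illustrative sketch: per-sensor acquisition settings. IR fluorescence
# signals are typically much weaker than visible light signals, so the IR
# sensor receives a longer exposure and a higher gain.
from dataclasses import dataclass

@dataclass
class SensorSettings:
    exposure_ms: float
    gain_db: float
    noise_reduction: bool

vis_settings = SensorSettings(exposure_ms=8.0, gain_db=0.0, noise_reduction=False)
ir_settings = SensorSettings(exposure_ms=16.0, gain_db=12.0, noise_reduction=True)
```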
According to yet another embodiment, the capturing of the fluorescence image, illuminating the tissue with excitation light and simultaneously capturing the visible light image can be performed by a single image capturing device. When illumination and image acquisition are integrated in one device, the overall process of measurement of the fluorescence signal and simultaneous acquisition of visible images can be enhanced.
According to an embodiment, the method can further comprise measuring a distance between a surface of the body part, which is captured in the visible light image, and the image capturing device, and outputting a signal by the image capturing device, which is indicative of the measured distance. Measurements at different distances can be performed to optimize the illumination and image capture and to find the best image acquisition conditions. The distance of this best fit can then be stored in the imaging system as a target distance for subsequent measurements.
According to an embodiment, the method can further comprise outputting a signal by the image capturing device, which is indicative of the measured distance. For example, a visual signal and/or an audio signal can be output by the image capturing device. This signal can guide the operator when handling the image capturing device such that image acquisition is performed at an at least approximately constant distance to the surface of the body part. The integration of the distance sensor in the image capturing device and the user-supporting output (visual or audio signal) can enable the operator to capture images with more homogeneous illumination. This can enhance the quality of the measurement of the fluorescence signal.
For determination of the best fit distance, the method can further comprise repeatedly capturing the fluorescence image and the visible light image of the same section of the surface of the body part while measuring the distance. A plurality of sets of fluorescence and visible light images can be captured at different distances. Subsequently, an analysis of the sets of images in view of imaging quality can be performed and a best matching distance resulting in the highest quality of images can be determined.
The output signal, which can be an audio signal or an optical signal, or which can be generated on the basis of a measurement of a time-of-flight sensor, can also be indicative of a deviation of the measured distance from the best matching distance. Hence, the operator can be directly informed whether or not the optimum image capturing conditions, such as with respect to illumination, are applied during the measurement.
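The best-fit search and the operator feedback could look as follows. This is a sketch in which sharpness (variance of the Laplacian) stands in for the unspecified image quality analysis, and the tolerance value is an arbitrary assumption.

```python
# Illustrative sketch: pick the distance whose capture scores the highest
# image quality, then report the deviation from that target distance.
import cv2

def sharpness(gray):
    """Variance of the Laplacian as a simple stand-in quality metric."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_matching_distance(captures):
    """captures: list of (distance_mm, grayscale_image) pairs at varying distances."""
    return max(captures, key=lambda c: sharpness(c[1]))[0]

def distance_feedback(measured_mm, target_mm, tolerance_mm=10.0):
    """Guidance for the operator (or input for a robotic camera holder)."""
    offset = measured_mm - target_mm
    if abs(offset) <= tolerance_mm:
        return "hold distance"
    return "move closer" if offset > 0 else "move back"
```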
Furthermore, the image capturing can be performed by a robot or another automatic camera holder. In this embodiment, the output signal can be used as an input for the robot or the automatic camera holder to adjust to the best matching distance. No signal output is necessary in this case; instead, the robot or camera holder can automatically move according to a stored best matching distance.
According to still another embodiment, the measurement of the fluorescence signal can be performed on a tissue, to which at least a first and a second fluorescent agent has been added, wherein the capturing of the fluorescence image can comprise:
According to the above embodiment, the fluorescent agent can comprise a first and a second fluorescent dye. The first fluorescent dye can be, for example, methylene blue, and the second dye can be ICG. The capturing of the fluorescence image, according to this embodiment, can comprise capturing a first fluorescence image of the fluorescent light emitted by the first fluorescent dye and capturing a second fluorescence image of the fluorescent light emitted by the second fluorescent dye. Capturing of the two images can be performed without time-switching. The first fluorescence image can be captured in a wavelength range between 700 nm and 800 nm, if methylene blue is used as the first fluorescent dye. The second fluorescence image can be captured in a wavelength range between 800 nm and 900 nm, if ICG is used as the second fluorescent dye. Fluorescence imaging based on two different fluorescent agents offers new possibilities for measurements and diagnosis.
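Purely as an illustration, with the band edges taken from the numbers above and a hypothetical routing function, the two emission bands map to the two IR sensors as follows:

```python
# Illustrative sketch: route each dye's emission band to the matching IR
# sensor; both bands are captured simultaneously on separate sensors.
EMISSION_BAND_NM = {
    "methylene blue": (700, 800),  # first fluorescence image
    "ICG": (800, 900),             # second fluorescence image
}

def ir_sensor_for(dye):
    low, high = EMISSION_BAND_NM[dye]
    return "first IR sensor" if high <= 800 else "second IR sensor"
```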
Such object can be further solved by an image capturing and processing device configured to measure a fluorescence signal in a tissue of a body part, to which a fluorescent agent has been added, and configured to image a surface of the body part, wherein the tissue to which the fluorescent agent has been added forms part of the body part, the image capturing and processing device comprising an imaging unit, which further comprises:
Same or similar advantages, which have been mentioned with respect to the method of measuring the fluorescence signal, apply to the image capturing and processing device in a same or similar way and will therefore not be repeated.
The device can be configured in that the stitching unit applies the same stitching algorithm for stitching of the visible light images and for stitching of the fluorescence images. The stitching algorithm can use the same stitching parameters, which have been determined and used for the stitching of the visible light images, for the stitching of the fluorescence images. The device can also be configured in that the stitching unit applies the stitching algorithm in that the same stitching parameters, which have been determined and used for the stitching of the fluorescence images, are applied for stitching of the visible light images. As previously mentioned, the stitching is not limited to the stitching of 2D images. It is also possible to perform stitching on the basis of 3D images. Furthermore, the stitching can include an image reconstruction of 3D images. This can also be performed by the stitching unit. The stitching unit can be further configured to take into account additional information for the performance of, for example, a 3D reconstruction. This additional information can be, for example, an orientation of the image capturing device or depth information, which forms part of the 3D image or is the result of a scanning procedure. According to an embodiment, the image capturing device can comprise a scanner for scanning of a surface of the body part, for example a LIDAR scanner, a time-of-flight camera or any other comparable device.
The device is not limited to the assessment of the lymphatic system. The device can be configured for the assessment of, for example, blood vessels and blood flow, or for a perfusion assessment of organs and tissues. The assessment can also encompass visually locating tissue with certain characteristics (e.g. tumorous tissue), locating glands (e.g. parathyroid glands) and nerves. This list is not exhaustive.
Furthermore, the device is not limited to the use of a fluorescent agent and/or dye. The device can be operated without the use of a dye, by exploiting the effect of auto-fluorescence of certain tissue, for example of parathyroid glands. The image capturing and processing device can be configured to measure an auto-fluorescence signal in a tissue of a body part or on a body part, wherein the image capturing and processing device can be further configured to image a surface of the body part, wherein the tissue forms part of the body part, the image capturing and processing device comprising an imaging unit, which can further comprise:
According to another embodiment, the image capturing and processing device can further comprise a superimposing unit configured to superimpose the large visible light image and the large fluorescence image to provide an overlay image of the body part, wherein the output unit can be further configured to output the overlay image as output of the large visible light image and the large fluorescence image.
According to yet another embodiment, the fluorescence imaging unit and the visible light imaging unit can be configured in that the viewing direction and the perspective of the fluorescence image and the visible light image are identical, wherein the fluorescence imaging unit and the visible light imaging unit can be configured in that the fluorescence image and the visible light image are captured through one and the same objective lens.
Furthermore, the fluorescence imaging unit and the visible light imaging unit can be configured to capture the fluorescence image and the visible light image simultaneously, in absence of time-switching between a signal of the fluorescence image and a signal of the visible light image.
Furthermore, the image capturing device, which can comprise the fluorescence imaging unit and the visible light imaging unit can further comprise a dichroic prism assembly configured to receive fluorescent light and visible light through an entrance face, comprising: a first prism, a second prism, a first compensator prism located between the first prism and the second prism,
The above-referred five-prism assembly can allow capturing two fluorescence imaging wavelengths and the three colors for visible light imaging, for example red, blue and green. In the five-prism assembly, the optical paths of the light traveling from the entrance surface to a respective one of the sensors can have identical length. Hence, all sensors can be in focus and, furthermore, there can be no timing gap between the signals of the sensors. The device, as configured, would not require time-switching of the received signals. This can allow image capture using a high frame rate and enhanced image quality.
According to still another embodiment, the image capturing device, which can comprise the fluorescence imaging unit and the visible light imaging unit, can define a first, a second, and a third optical path for directing fluorescence light and visible light to a first, a second, and a third sensor, respectively, and the image capturing device can further comprise a dichroic prism assembly, configured to receive the fluorescent light and the visible light through an entrance face, the dichroic prism assembly comprising: a first prism, a second prism and a third prism, each prism having a respective first, second, and third exit face, wherein: the first exit face is provided with the first sensor, the second exit face is provided with the second sensor, and the third exit face is provided with the third sensor, wherein the first optical path can be provided with a first filter, the second optical path can be provided with a second filter, and the third optical path can be provided with a third filter, wherein
Furthermore, according to another embodiment, the first, second, and third filters, in any order, can be a red/green/blue patterned filter (RGB filter), a first infrared filter, and a second infrared filter, wherein the first and second infrared filter can have different transmission wavelengths.
In other words, the first and second infrared filters can be for filtering IR light in different IR wavelength intervals, for example in a first IR band in which typical fluorescent dyes emit a first fluorescence peak and in a second IR band in which a typical fluorescent dye emits a second fluorescence peak. Typically, the second IR band is located at higher wavelengths compared to the first IR band. The first and second infrared filters can also be adjusted to emission bands of different fluorescent agents. Hence, the emission of, for example, a first fluorescent agent passes the first filter (and can be blocked by the second filter) and can be detected on the corresponding first sensor, and the emission of the second fluorescent agent passes the second filter (and can be blocked by the first filter) and can be detected on the corresponding second sensor. For example, the first filter can be configured to measure the fluorescence emission of methylene blue and the second filter can be configured to measure the fluorescence emission of ICG.
The illumination unit, the fluorescence imaging unit and the visible light imaging unit can be arranged in a single image capturing device, which can further comprise a measurement unit configured to measure a distance between the surface of the body part, which can be captured in the visible light image, and the image capturing device, wherein the image capturing device can be configured to output a signal, which can be indicative of the measured distance.
Also with respect to these embodiments, the same or similar advantages apply that have been mentioned with respect to the method of measuring the fluorescence signal.
Such object can also be further solved by an endoscope or laparoscope being configured as the image capturing device in an image capturing and processing device according to one or more of the previously mentioned embodiments. The device for image capturing and processing can be used in open surgery as well as during surgery using an endoscope or laparoscope. Same or similar advantages, which have been mentioned with respect to the method and/or the device apply to the endoscope or laparoscope in a same or similar way.
Such object can be further solved by a method of diagnosing lymphedema, comprising:
The method of diagnosing lymphedema can be performed with higher precision and reliability and therefore can provide better results. This entirely new approach can replace the classical way of diagnosing lymphedema. The traditional way to diagnose lymphedema is to perform a manual inspection of the affected body parts by a physician. This method of performing the diagnosis, however, inevitably includes a non-reproducible and random component, which is due to the individual experience and qualification of the physician. Furthermore, the method of diagnosing lymphedema provides the same or similar advantages that have been previously mentioned with respect to the method of measuring the fluorescence signal.
The fluorescent agent can be administered to an arm or leg of a patient by injecting the fluorescent agent in tissue between phalanges of the foot or hand of the patient. Arms and/or legs are typically affected by lymphedema. Hence, the application of a new and successful method of diagnosing lymphedema can be useful when performed with respect to these limbs.
Such object can also be solved by a method of long-term therapy of lymphedema, comprising diagnosing a severity of lymphedema by performing the methods according to the previously mentioned method of diagnosing lymphedema on a patient. Furthermore, the method of long-term therapy can comprise
The method of long-term therapy can be useful because the diagnosis of lymphedema provides—in contrast to traditional methods—objective results with respect to the severity of the disease. The success of a long-term therapy can be analyzed from an objective point of view. The analysis and diagnosis can therefore be much more valuable when looking at the success of the therapy.
Further characteristics will become apparent from the description of the embodiments together with the claims and the included drawings. Embodiments can fulfill individual characteristics or a combination of several characteristics.
The embodiments are described below, without restricting the general intent of the invention, based on exemplary embodiments, wherein reference is made expressly to the drawings with regard to the disclosure of all details that are not explained in greater detail in the text. In the drawings:
In the drawings, the same or similar types of elements or respectively corresponding parts are provided with the same reference numbers in order to avoid the need to reintroduce them.
Before the measurement initially starts, a fluorescent agent 8 is administered, i.e. injected, into the tissue of the patient's body part 4. The method for measuring a fluorescence signal in the tissue of the body part 4, which will also be explained when making reference to the figures illustrating the image capturing and processing device 2, does not include the administering of the fluorescent agent 8.
The fluorescent agent 8 is for example ICG. ICG (Indocyanine Green) is a green-colored medical dye that has been used for over 40 years. ICG emits fluorescent light when excited with near-infrared light having a wavelength between 600 nm and 800 nm. The emitted fluorescence light is between 750 nm and 950 nm. It is also possible that the fluorescent agent 8 comprises two different medical dyes. For example, the fluorescent agent 8 can be a mixture of methylene blue and ICG.
Subsequent to the administration of the fluorescent agent 8, as it is indicated by an arrow in
The image capturing device 10 is configured to image a surface 11 of the body part 4 and to detect the fluorescence signal, which results from illumination of the fluorescent agent 8 with excitation light. When the image capturing device 10 is applied in surgery, the surface 11 of the body part 4 is a surface of for example an inner organ. In this case, the surface 11 of the body part 4 is identical to the surface of the tissue, to which the fluorescent agent 8 has been administered. For emission of light having a suitable excitation wavelength, the image capturing device 10 comprises an illumination unit 16 (e.g., a light source emitting the light having a suitable excitation wavelength) (not shown in
The captured images are communicated to a processing device 12 (i.e., a processor comprising hardware, such as a hardware processor operating on software instructions or a hardware circuit), which also forms part of the image capturing and processing device 2. The results of the analysis are output, for example displayed on a display 14 of the processing device 12. The image capturing device 10 can be handled by a physician 3.
The image capturing device 10 further comprises an objective lens 18 through which visible light and fluorescence light are captured. Light is guided through the objective lens 18 to a prism assembly 20. The prism assembly 20 is configured to separate fluorescent light, which can be in a wavelength range between 750 nm and 950 nm, from visible light that results in the visible light image. The fluorescent light is directed onto a fluorescence imaging unit 22, which is an image sensor, such as a CCD or CMOS sensor plus additional wavelength filters and electronics, if necessary. The fluorescence imaging unit 22 is configured to capture a fluorescence image by spatially resolved measurement of the emitted light, i.e. the excited emission of the fluorescent agent 8, so as to provide the fluorescence image. Furthermore, there is a visible light imaging unit 24, which can be another image sensor, such as a CCD or CMOS sensor plus an additional different wavelength filter and electronics, if necessary. The prism assembly 20 is configured to direct visible light onto the visible light imaging unit 24 so as to allow the unit to capture the visible light image of a section of a surface 11 of the patient's body part 4. Similarly, the prism assembly 20 is configured to direct fluorescent light onto the fluorescence imaging unit 22. The prism assembly 20, the fluorescence imaging unit 22 and the visible light imaging unit 24 will be explained in detail further below.
In an embodiment, the image capturing device 10 is a scanning unit, for example an image line scanning unit or a LIDAR scanning unit. The image capturing device 10 can also be a 3D camera, which is suitable to capture a pair of stereoscopic images from which a 3D image including depth information can be calculated. Naturally, the image capturing device 10 can be a combination of these devices.
The image data is communicated from the image capturing device 10 to the processing device 12 via a suitable data link 26, which can be a wireless data link or a wired data link, for example a data cable.
The image capturing device 10 is configured in that the fluorescence imaging unit 22 and the visible light imaging unit 24 are operated to simultaneously capture the visible light image and the fluorescence image. For example, the image capturing device 10 does not perform time-switching between the signal of the fluorescence image and the signal of the visible light image. In other words, the sensors of the fluorescence imaging unit 22 and the visible light imaging unit 24 are exclusively used for capturing images in the respective wavelength range, which means that the sensors of the imaging units 22, 24 are used for either capturing a fluorescence image in the IR spectrum or for capturing a visible light image in the visible spectrum. The sensors 22, 24 are not used for capturing images in both wavelength ranges. This can result in significant advantages. For example, the sensors can be exactly positioned in focus, which is not possible when an image sensor is used for both purposes, i.e. to capture visible light and infrared light, because the focus points for these different wavelengths typically differ in position. Furthermore, the sensor parameters can be adjusted individually, for example with respect to a required exposure time or sensor gain. Individual settings can be used because IR signals are typically lower than visible light signals.
The fluorescence imaging unit 22 and the visible light imaging unit 24 have a fixed spatial relationship to each other. This is because the units are arranged in one single mounting structure or frame of the image capturing device 10. Furthermore, the fluorescence imaging unit 22 and the visible light imaging unit 24 use the same objective lens 18 and prism assembly 20 for imaging of the fluorescence image and the visible light image, respectively. Due to these measures, the fluorescence imaging unit 22 and the visible light imaging unit 24 are configured in that a viewing direction and a perspective of the fluorescence image and the visible light image are linked via a known and constant relationship. In the given embodiment, the viewing directions of the two images are identical because both units 22, 24 image via the same objective lens 18.
The image capturing device 10 is further configured to operate the fluorescence imaging unit 22 and the visible light imaging unit 24 to repeat the capturing of the fluorescence image and the visible light image so as to provide a series of fluorescence images and a series of visible light images. This operation can be performed by the processing device 12 operating the image sensor of the fluorescence imaging unit 22 and the image sensor of the visible light imaging unit 24. The series of images is typically captured while an operator or physician 3 (see
The image capturing device 10 can be further configured to acquire a position and orientation of the image capturing device 10 during this movement. For example, a position and orientation of the image capturing device 10 in a reference system of the examination room or in a reference system of the patient 6 can be determined for each image or image pair that is captured. This information can be stored and communicated together with the image or image pair comprising the visible image and the fluorescence image. This information can be useful for the subsequent reconstruction of images so as to generate a 3D image from the series of 2D images.
Once the two series of images (i.e. a first series of visible light images and a second series of fluorescence images) or the series of image pairs (each image pair comprising a fluorescence image and a visible light image) are captured by the image capturing device 10 and received in the processing device 12, the series of visible light images is processed by a stitching unit 28 (see
Within the context of this specification, the term “stitching” shall not be understood in that the process of stitching is limited to a combination of two or more 2D images. Stitching can also be performed on the basis of 3D images, wherein the result of this process is a larger 3D image. The process of stitching can also be performed on the basis of 2D images plus additional information on the direction of view from which the 2D images have been captured. Further information on the position of the image capturing device 10 can also be taken into account. As mentioned before, on the basis of these data sets, a larger 3D image can be generated, i.e. stitched together from a series of 2D images plus information on the position and orientation of the image capturing device 10. It is also possible to combine 3D scanning data, for example from a LIDAR sensor, with 2D image information. Also in this case, the result of the stitching process is a larger 3D image.
The stitching algorithm starts with stitching of the visible light images. The stitching algorithm generates and applies a set of stitching parameters when performing the stitching operation. The detailed operation of the stitching unit 28 will be described further below. The stitching unit 28 is configured to apply the stitching algorithm not only on the series of visible light images but also on the series of fluorescence images so as to generate a large fluorescence image. Also in this case, the process of stitching is not limited to the combination of two or more 2D images. It is also possible to generate a 3D fluorescence image in a similar way as described above for the visible light images.
The stitching algorithm which is applied for stitching of the fluorescence images is the same algorithm which is used for stitching of the visible light images. Furthermore, the stitching of the fluorescence images is performed using the same set of stitching parameters which was determined when performing the stitching of the visible light images. This is possible because there is a fixed relationship between the viewing direction and perspective of the visible light images and the fluorescence images. Naturally, if the viewing direction and perspective of the visible light images and the fluorescence images are not identical, a fixed offset or a shift in the stitching parameters has to be applied. This takes into account the known and fixed spatial relationship between the IR and Vis image sensors and the corresponding optics.
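In matrix terms, again under the assumption of homography-style stitching parameters, the fixed offset amounts to composing each visible light transform with a constant calibration transform; the names below are illustrative.

```python
# Illustrative sketch: A is a constant alignment transform (fluorescence
# sensor frame -> visible light sensor frame) obtained from a one-time
# calibration; composing it with each visible light homography yields the
# transform to apply to the corresponding fluorescence image.
import numpy as np

def fluorescence_transform(H_vis: np.ndarray, A: np.ndarray) -> np.ndarray:
    # first map fluorescence pixels into the visible frame (A),
    # then into the panorama frame (H_vis)
    return H_vis @ A
```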
Subsequent to the stitching, the large visible light image and the large fluorescence image are output. For example, the images are displayed side-by-side on the display 14. Unlike traditional inspection systems, the display 14 shows a visible light image and a fluorescence image that correspond to each other. In other words, details that can be seen on the fluorescence image, for example a high fluorescence intensity that indicates an accumulation of lymphatic fluid, can be found in the patient's body part 4 exactly on the corresponding position, which is shown in the visible light image. This enables the physician 3 to exactly spot areas in which an accumulation of lymphatic fluid is present. This is very valuable information for example for a tailored and specific therapy of the patient 6.
It is also possible that the visible light image and the fluorescence image, such as the large visible light image and the large fluorescence image, are superimposed so as to provide an overlay image, such as a large overlay image, of the body part 4. This is performed by a superimposing unit 30 of the processing device 12 (the superimposing unit 30 can also be a processor integral with or separate from the processing device 12). The overlay image can also be output via the display 14.
In
An exemplary single visible light image 5 and fluorescence image 7 can also be seen in
In
From P2, the beam D enters a second pentagonal prism P3. As in prism P1, inward reflection is used to make the beam cross itself. For brevity, the description of the beam will not be repeated, except to state that in prism P3, the beam parts E, F and G correspond to beam parts A, B and C in prism P1, respectively. Prism P3 can also be designed without internal reflection for reflecting the incoming beam towards sensor D2; two non-internal reflections can instead be used to direct the incoming beam E via beams F and G towards sensor D2.
After prism P3, there is another compensating prism P4. Finally, beam H enters the dichroic prism assembly comprising prisms P5, P6, and P7, with sensors D3, D4 and D5, respectively. The dichroic prism assembly serves to split visible light into red, green and blue components towards the respective sensors D3, D4 and D5. The light enters the prism assembly through beam I. Between P5 and P6, an optical coating C1 is placed, and between prisms P6 and P7 another optical coating C2 is placed. Each optical coating C1 and C2 has a different reflectance and wavelength sensitivity. At C1, the incoming beam I is partially reflected back to the same face of the prism as through which the light entered (beam J). At that same face, the beam, now labelled K, is once again reflected towards sensor D3. The reflection from J to K is an internal reflection. Thus, sensor D3 receives light reflected by coating C1, and in analogous fashion sensor D4 receives light from beam L reflected by coating C2 (beams M and N), and sensor D5 receives light from beam O that has traversed the prism unhindered.
Between prism P4 and prism P5 there is an air gap. In the prism assembly 20, the following total path lengths can be defined for each endpoint channel (defined in terms of the sensor at the end of the channel):
The path lengths are matched, so that A+B+C = A+D+E+F+G = A+D+E+H+I+J+K = A+D+E+H+I+O = A+D+E+H+I+M+N.
The matching of path lengths can comprise an adjustment for focal-plane position differences between the wavelengths to be detected at the sensors D1-D5. That is, for example, the path length towards the sensor for blue (B) light may not be exactly the same as the path length towards the sensor for red (R) light, since the ideal distances for creating a sharp, focused image are somewhat dependent on the wavelength of the light. The prisms can be configured to allow for these dependencies. The D and H path lengths can be adjusted and act as focus compensators for wavelength-dependent focus shifts, by lateral displacement of the compensator prisms P2, P4.
A larger air gap in path I can be used for additional filters or can be filled with a glass compensator for focus shifts and compensation. An air gap needs to exist at that particular bottom surface of the prism P5 because of the internal reflection in the path from beam J to beam K. A space can be reserved between the prism output faces and each of the sensors D1-D5 to provide an additional filter, or should be filled up with glass compensators accordingly.
The sensors D1 and D2 are IR sensors, configured for capturing the fluorescence image 7. By way of an example, the sensors D1 and D2 plus suitable electronics are a part of the fluorescence imaging unit 22. The sensors D3, D4 and D5 are for capturing the three components of the visible light image 5. By way of an example, the sensors D3, D4 and D5 plus suitable electronics are a part of the visible light imaging unit 24. It is also possible to consider the corresponding prisms that direct the light beams on the sensors, a part of the respective unit, i.e. the fluorescence imaging unit 22 and the visible light imaging unit 24, respectively.
The endoscope 50 comprises an image capturing device 10 that has been explained in further detail above. The image capturing device 10 comprises an objective lens 18 through which the fluorescence image 7 and the visible light image 5 are captured. The objective lens 18 focuses the incoming light through the entrance face S1 of the prism assembly 20 onto the sensors D1 to D5. The objective lens 18 can also be integrated in the last part of the endoscope to match the prism back focal length.
The endoscope 50 comprises an optical fiber 52 connected to a light source 54 that couples light into the endoscope 50. The light source 54 can provide white light for illumination of the surface 11 of the body part 4 and for capturing of the visible light image 5. Furthermore, the light source 54 can be configured to emit excitation light which is suitable to cause the fluorescent dye that is applied as the fluorescent agent to emit fluorescence light. In other words, the light source 54 can be configured to emit both visible light and light in the IR spectrum.
Inside a shaft 56 of the endoscope 50, the optical fiber 52 splits off into several fibers 51. The endoscope 50 can have a flexible shaft 56 or a rigid shaft 56. In a rigid shaft 56, a lens system consisting of lens elements and/or relay rod lenses can be used to guide the light through the shaft 56. If the endoscope 50 has a flexible shaft 56, the fiber bundle 51 can be used for guiding the light of the light source 54 to the tip of the endoscope shaft 56. For guiding light from the distal tip of the endoscope shaft 56 (not shown in
The image capturing device 10, which is applied for capturing the visible light images 5 and the fluorescence images 7, can further comprise a measurement unit 32 (which can also be a processor integral with or separate from the processing device 12) which, together with a distance sensor 33, is configured to measure a distance d (see
In addition to the distance sensor 33, the image capturing device 10 can include an inertial measurement unit (IMU), which can be used to collect data about a rotation in pitch, yaw, and roll as well as acceleration data in the three spatial axes (x, y and z). This information of the IMU may be used as additional data to enhance the performance of the stitching algorithm or to provide feedback to the operator to position and rotate the camera, providing better images for the stitching algorithm.
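A sketch of how the IMU data might drive such operator feedback is given below; the threshold and the field names are assumptions.

```python
# Illustrative sketch: warn the operator when the device has rotated too much
# between consecutive captures for the stitching to link features reliably.
from dataclasses import dataclass

@dataclass
class ImuSample:
    pitch_deg: float
    yaw_deg: float
    roll_deg: float

def rotation_too_fast(prev: ImuSample, cur: ImuSample, max_step_deg=5.0):
    step = max(abs(cur.pitch_deg - prev.pitch_deg),
               abs(cur.yaw_deg - prev.yaw_deg),
               abs(cur.roll_deg - prev.roll_deg))
    return step > max_step_deg  # True -> prompt the operator to move more slowly
```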
In
The light enters the prism assembly 20 as indicated by the arrow. Between P5 and P6, an optical coating C1 is placed and between prisms P6 and P7 an optical coating C2 is placed, each optical coating C1 and C2 having a different reflectance and wavelength sensitivity. At C1, the incoming beam I is partially reflected back to the same face of the prism P5 as through which the light entered (beam J). At that same face, the beam, now labelled K, is once again reflected towards filter F3 and sensor D3. The reflection from J to K is an internal reflection. Thus, filter F3 and sensor D3 receive light reflected by coating C1, and in analogous fashion filter F4 and sensor D4 receive light from beam L reflected by coating C2 (beams M and N). Filter F5 and sensor D5 receive light from beam O that has traversed the prisms unhindered.
When making reference to the embodiment in which the incoming light is split up in a red, green and blue component, the coatings and filters are selected accordingly.
In the embodiment in which the incoming light is separated into a green component, a red/blue component and an infrared component, the filter F3 can be a patterned filter (red/blue). For example, there is an array of red and blue filters in an alternating pattern. The pattern can consist of groups of 2×2 pixels, which are filtered for one particular color. Filter F4 can be a green filter, i.e. a filter comprising only green filter elements: there is a single pixel grid with the light received at each pixel being filtered with a green filter. Filter F5 can be an IR filter. Each pixel is filtered with an IR filter.
In general, the coatings C1, C2 should match the filters F3, F4, F5. For example, the first coating C1 may transmit visible light while reflecting IR light, so that IR light is guided towards IR filter F3. The second coating C2 may be transparent for green light while reflecting red and blue light, so that filter F4 should be the red/blue patterned filter and F5 should be the green filter.
According to the further embodiment, in which the incoming light is split up into the visible light component (RGB), the first infrared component and the second infrared component, the coatings C1, C2 and the filters F3, F4, F5 are configured in that, for example, the sensor D4 is a color sensor (RGB sensor) for detecting the visible light image in all three colors. Furthermore, the sensor D3 can be configured for detecting fluorescence light of the first wavelength and the sensor D5 can be configured for detecting fluorescence light of the second wavelength.
Similarly, when making reference to the prism assembly 20 in
While there has been shown and described what is considered to be embodiments of the invention, it will, of course, be understood that various modifications and changes in form or detail could readily be made without departing from the spirit of the invention. It is therefore intended that the invention be not limited to the exact forms described and illustrated, but should be construed to cover all modifications that may fall within the scope of the appended claims.
The present application is based upon and claims the benefit of priority from U.S. Provisional Application No. 63/425,393 filed on Nov. 15, 2022, and EP 23205166 filed on Oct. 23, 2023, the entire contents of each of which is incorporated herein by reference.