The present invention relates to an endoscope apparatus.
In surgical treatment of living tissue, it is important for an operator to accurately recognize the presence of a blood vessel hidden inside the living tissue and to perform treatment while avoiding the blood vessel. In response to this need, a surgical treatment device provided with a function for optically detecting a blood vessel that is present in living tissue has been proposed (refer to, for example, Patent Literature 1 below). According to Patent Literature 1, it is possible to measure the volume of blood in living tissue and to determine whether or not a blood vessel is present on the basis of the measured volume of blood, thus calling for the attention of the operator.
Publication of Japanese Patent No. 4490807
One aspect of the present invention provides an image processing apparatus including: an image acquisition device for observing a living body; a blood vessel recognition device for irradiating the living body with an exploration laser beam and recognizing a blood vessel via a laser Doppler method; an arithmetic operation unit that calculates a first laser spot position on a first frame acquired by the image acquisition device when a blood vessel is recognized by the blood vessel recognition device and that adds a mark corresponding to the first laser spot position to a position on a second frame corresponding to the first frame, the second frame being acquired at a time different from the time of the first frame; and a display unit for displaying, on an image acquired by the image acquisition device, a plurality of the marks that are added by the arithmetic operation unit as a result of the blood vessel being recognized at a plurality of different times.
An endoscope apparatus 200 according to one embodiment of the present invention includes: a blood vessel recognition device 100; an image acquisition device 500; an arithmetic operation unit 60; and a display unit 70.
The endoscope apparatus 200 will be described below with reference to the drawings.
As shown in
The blood vessel recognition device 100 includes: an exploration laser light source 8 for outputting an exploration laser beam L; the light-emitting section 9 that is provided at a leading end of a probe body section and that emits the exploration laser beam L supplied from the exploration laser light source 8; a light receiving section 10 that is provided in the vicinity of the light-emitting section 9 and that receives scattered light S coming from the area in front of the leading end of the treatment tool 1; a light detection unit 11 for detecting the scattered light S received by the light receiving section 10; a frequency analysis unit 12 that acquires time-series data of the intensity of the scattered light S detected by the light detection unit 11 and that frequency-analyzes the time-series data; a determination unit 13 for determining the presence/absence of a blood vessel to be detected, which has a diameter in a predetermined range, on the basis of the frequency analysis result of the frequency analysis unit 12; and a visible light source 16.
The exploration laser light source 8 outputs the exploration laser beam L with a wavelength range that is barely absorbed into blood (e.g., near-infrared region). The exploration laser light source 8 is connected to the light-emitting section 9 via an optical fiber 14 that runs along the interior of a body section 3. The exploration laser beam L that is emitted from the exploration laser light source 8 and incident on the optical fiber 14 is guided to the light-emitting section 9 by the optical fiber 14 and is emitted towards the living tissue A from the light-emitting section 9. The exploration laser beam L radiated onto the living tissue A scatters in the interior of the living tissue but scatters differently depending on the presence/absence of a blood vessel.
The light receiving section 10 is, for example, a wavelength-selective photosensor for selectively outputting the amount of received light in response to the wavelength range of the exploration laser beam and is connected to the light detection unit 11 via an optical fiber 15 that runs along the interior of the body section 3. The scattered light S received by the light receiving section 10 is guided to the light detection unit 11 by the optical fiber 15 and is incident upon the light detection unit 11.
The light detection unit 11 converts, into a digital value, the intensity of the scattered light S that is incident thereon via the optical fiber 15 and sequentially transmits this digital value to the frequency analysis unit 12.
The frequency analysis unit 12 acquires the time-series data representing changes over time in the intensity of the scattered light S by recording the digital values received from the light detection unit 11 in a time series manner over a predetermined time period. The frequency analysis unit 12 applies fast Fourier transformation to the acquired time-series data and calculates the mean frequency of an obtained Fourier spectrum.
The image acquisition device 500 is composed of an endoscope body section 51, as well as an illuminating unit 52 and an image capturing unit 53 installed at a leading end section of the endoscope body section 51. White light W is radiated from the illuminating unit 52 towards the living tissue A. An observation image I of the living tissue A is acquired by capturing, with the image capturing unit 53, the observation image light WI resulting from the white light W.
Image information of the observation image I is sent to the arithmetic operation unit 60. A control signal from the control unit 2 of the blood vessel recognition device 100 is input to the arithmetic operation unit 60, and different arithmetic operations are performed on the basis of the control signal. Image information based on each arithmetic operation is added to the observation image I, and a display image IDisp is generated and displayed on the display unit 70.
Here, time-series data and a Fourier spectrum acquired in the image acquisition device 500 will be described.
As shown in
Therefore, when the blood vessel B is included in a radiation area of the exploration laser beam L in the living tissue A, scattered light S having a frequency f+Δf as a result of being scattered by the blood in the blood vessel B, as well as scattered light S having a frequency f as a result of being scattered by static components other than the blood in the blood vessel B, are simultaneously received by the light receiving section 10. This results in interference between the scattered light S with a frequency f and the scattered light S with a frequency f+Δf, leading to a beat causing the intensity of the entire scattered light S to change with Δf in the time-series data, as shown in
Because the laser beam radiated on the living tissue A undergoes multiple scattering in the static components and dynamic components, when the laser beam is incident on the red blood cells, the incident angle between the traveling direction of the light and the moving direction of the red blood cells (blood flow direction) is not a single value but exhibits a distribution. Therefore, the amount of frequency shift, Δf, due to the Doppler shift also exhibits a distribution. Therefore, the beat of the intensity of the entire scattered light S includes superposition of multiple frequency components in accordance with the distribution of Δf. In addition, as the blood flow velocity becomes higher, the distribution of Δf extends to the higher frequency side.
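The beat described above can be reproduced numerically. The following sketch (Python with NumPy; the sampling rate and the Doppler shift Δf = 200 Hz are arbitrary illustrative values, not taken from the embodiment) superimposes an unshifted and a Doppler-shifted field and confirms that the detected intensity fluctuates at the difference frequency Δf:

```python
import numpy as np

fs = 10_000.0                     # sampling rate of the light detection unit (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
f_shift = 200.0                   # hypothetical Doppler shift Δf in Hz

# Interference of the unshifted scattered light (frequency f, taken as the
# baseband reference) with the Doppler-shifted component (f + Δf): the
# detected intensity beats at the difference frequency Δf.
field = 1.0 + np.exp(2j * np.pi * f_shift * t)
intensity = np.abs(field) ** 2    # = 2 + 2*cos(2π·Δf·t)

# The dominant component of the intensity fluctuation sits at Δf.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(intensity.size, 1.0 / fs)
beat = freqs[np.argmax(spectrum)]
```

With a distribution of Δf, as in real tissue, the single line at `f_shift` broadens into the Doppler spectrum discussed below.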
As shown in
There is a relationship between the shape of the Doppler spectrum, and the presence/absence of the blood vessel B and the blood flow velocity in the blood vessel B, as shown in
Furthermore, it is known that the blood flow velocity in the blood vessel B is substantially proportional to the diameter of the blood vessel B.
The frequency analysis unit 12 obtains a function F(ω) representing the relationship between the frequency ω and the intensity of the Doppler spectrum, calculates the mean frequency of the Doppler spectrum F(ω) on the basis of expression (1) below, and transmits the calculated mean frequency to the determination unit 13.
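A discrete analogue of this calculation can be sketched as follows (Python with NumPy; the discrete power spectrum stands in for F(ω), and excluding the DC term so that the static background does not dominate the mean is an illustrative choice):

```python
import numpy as np

def mean_frequency(signal, fs):
    # Spectrum-weighted mean frequency of the Doppler spectrum F(ω),
    # i.e. the discrete form of  ω̄ = Σ ω·F(ω) / Σ F(ω).
    # Subtracting the mean removes the DC (static-tissue) term.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)
```

The comparison performed by the determination unit then reduces to testing `mean_frequency(...)` against the first threshold value.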
The determination unit 13 compares the mean frequency received from the frequency analysis unit 12 with a first threshold value. The first threshold value is the mean frequency corresponding to the minimum value of the diameter of the blood vessel B to be detected.
If the mean frequency received from the frequency analysis unit 12 is equal to or more than the first threshold value, then the determination unit 13 determines that the blood vessel B to be detected is present. On the other hand, if the mean frequency received from the frequency analysis unit 12 is less than the first threshold value, then the determination unit 13 determines that the blood vessel B to be detected is not present in the radiation area of the exploration laser beam L. By doing so, a blood vessel B having a diameter in a predetermined range is specified as a blood vessel to be detected, and it is determined whether or not this blood vessel B to be detected is present. The determination unit 13 outputs the determination result to the control unit 2 and the arithmetic operation unit 60.
The minimum value of the diameter of the blood vessel B to be detected is input by, for example, an operator using an input unit, which is not shown in the figure. For example, the determination unit 13 holds a function associating diameters of the blood vessel B with mean frequencies, obtains from this function the mean frequency corresponding to the input minimum diameter, and sets the obtained mean frequency as the first threshold value.
If it is determined by the determination unit 13 that the blood vessel B to be detected is present, then the control unit 2 causes the visible light V to be output from the visible light source 16, thereby emitting the visible light V from the light-emitting section 9 together with the exploration laser beam L. On the other hand, if it is determined by the determination unit 13 that the blood vessel B to be detected is not present, then the control unit 2 stops outputting the visible light V from the visible light source 16, thereby emitting only the exploration laser beam L from the light-emitting section 9.
Next, the operation of the endoscope apparatus 200 according to this embodiment with the above-described structure will be described.
An image display method using the endoscope apparatus 200 includes: a step of, if it is determined by the blood vessel recognition device 100 that a blood vessel is present (True), calculating a blood-vessel determination position on a measurement image; a step of calculating a displacement in the field of view by establishing a correlation between two (arbitrary) images in the time series; a step of setting a mark position from the calculated position information on the basis of the displacement information; and a step of displaying a mark (trace) on a real-time image on the basis of the information about the mark position.
By doing so, even if the field of view shifts, the operator can be informed of where the blood vessel was located.
In short, the image display method according to this embodiment includes the following steps.
One example for performing each of the above-described steps will be given below.
It is determined whether or not a blood vessel is present (step 0).
A spot position on an image at the time when an exploration laser beam and/or an index laser beam is radiated is calculated using at least one of methods (1) to (3) below (step 1).
The amount of shift in the field of view is obtained using methods (1) to (3) below (step 2).
A correlation image can be acquired using at least one of methods (a-1), (a-2), and (a-3) below.
A correlation image simply representing the image relationship between frames is acquired (a-1).
A group of enlarged images, reduced images, and rotated images is generated, and a correlation image is acquired (a-2). The ranges and steps of the enlargement, reduction, and rotation angle are not restricted and may be adjusted as appropriate.
A group of images obtained by affine transformation is generated, and a correlation image is acquired (a-3).
Method (2) described above is performed using one of means (a) and (b) below.
In place of the uncorrected spot, a spot whose position has been corrected by the amount of shift obtained in step 2 is displayed on the image in which the field of view has shifted (step 3).
According to the flow from step 0 to step 3 described above, the position at which the blood vessel is recognized is calculated, the amount of displacement is corrected, and the shift-corrected position is displayed on the display unit. In this way, even if the field of view shifts, a past blood vessel position corrected for the amount of shift can be constantly displayed as an afterimage, thereby making it possible to inform the operator of where the blood vessel was located on a real-time display screen.
In order to recognize the blood vessel B in the living tissue A using the blood vessel recognition device 100 according to this embodiment, the light-emitting section 9 is placed in the vicinity of the living tissue A, the exploration laser beam L is radiated on the living tissue A, and the exploration laser beam L is moved so as to scan the living tissue A, as shown in
If it is determined by the determination unit 13 that the blood vessel B to be detected is not present in the radiation area of the exploration laser beam L, the control unit 2 emits only the exploration laser beam L from the light-emitting section 9. If it is determined by the determination unit 13 that the blood vessel B to be detected is present in the radiation area of the exploration laser beam L, the control unit 2 emits the visible light V, together with the exploration laser beam L, from the light-emitting section 9. In short, this radiation area is irradiated with visible light V only when the blood vessel B to be detected is present in the radiation area of the exploration laser beam L.
A case in which the above-described situation is observed on an image acquired with the image acquisition device 500 will be described.
The region in which the blood vessel B is present can be recognized on the image displayed on a screen display unit by scanning the exploration laser beam L over the living tissue A and capturing, with an image capturing device, the visible light V radiated at the blood vessel position.
When a region in which the blood vessel is present can be identified with one scan operation, it is desirable that the region be held on the display screen, since this allows the operator to easily identify the region in which the blood vessel is present. However, because the endoscope body section 51 is not fixed during actual scope-assisted surgery, the image acquisition device 500 and the blood vessel recognition device 100 can move independently of each other, readily causing a shift between the field of view acquired during scanning and the field of view being observed in real time.
Therefore, simply holding the scanning trajectory on the display is ineffective when the field of view shifts.
In order to correct the shift in the field of view, it is advisable to calculate the displacement in the field of view by calculating a correlation between the two images in time series.
Here, a case in which a green laser beam is used as the visible light V will be described to explain the principle. Hereinafter, a description will be given assuming that the visible light V is an index laser beam V.
As described above, it is desirable that a laser beam with a wavelength (near-infrared region) that undergoes only a small amount of absorption/scattering in the living body be selected as the exploration laser beam L. At this time, if the R-ch of the image capturing unit 53 has sensitivity to the exploration laser beam L, the spot formed when the exploration laser beam is radiated on the living body is observed on the R-ch image of the image capturing unit 53.
In addition, the spot formed when the index laser beam V is radiated on the living body is observed on the G-ch image of the image capturing unit 53.
Because the proportion at which the spots of these laser beams contribute to the B-ch image is small, a displacement in the field of view is evaluated using the B-ch image.
Therefore, the amount of displacement (Δx(i+1), Δy(i+1)) can be obtained by calculating the peak position on the correlation image ICorr,B(i+1).
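The patent does not fix a particular correlation algorithm; one common realization is phase correlation, sketched below in Python with NumPy (function names are hypothetical). The peak of the correlation image directly yields the displacement (Δx(i+1), Δy(i+1)):

```python
import numpy as np

def correlation_image(prev_frame, next_frame):
    # Normalized cross-power spectrum (phase correlation): its inverse
    # transform has a sharp peak at the displacement between the frames.
    fa = np.fft.fft2(prev_frame)
    fb = np.fft.fft2(next_frame)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12
    return np.real(np.fft.ifft2(cross))

def displacement_from_peak(corr):
    # Peak coordinates past the midpoint wrap around to negative shifts.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```

Applied to the B-ch images of two successive frames, `displacement_from_peak(correlation_image(IB_i, IB_i1))` returns the field-of-view displacement.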
A method for calculating a blood vessel position from an image acquired with the image acquisition device 500 will be described below with reference to
In addition, it is recognized that the color image I(i) contains a region that appears white as a result of being saturated due to radiation of the white light W.
In order to accurately calculate the position irradiated with the laser beams from the observation image, this saturation region due to the white light W needs to be excluded.
Therefore, it is difficult to calculate a laser position from the brightness values of the R/G channels alone; instead, it is necessary to obtain a region that has high brightness on the R/G channels and low brightness on the B channel, and to calculate the spot position from the center of gravity of that region.
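A minimal sketch of this extraction (Python with NumPy; the threshold of 0.5 on normalized brightness and the function name are illustrative assumptions):

```python
import numpy as np

def spot_position(img_r, img_g, img_b, thresh=0.5):
    # Candidate pixels are bright on both R and G but dark on B; the
    # white-saturated region is bright on all three channels and is
    # therefore excluded by the B-channel condition.
    mask = (img_r > thresh) & (img_g > thresh) & (img_b < thresh)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None   # no spot region found
    # Spot position = center of gravity of the extracted region.
    return xs.mean(), ys.mean()
```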
More specifically, in order to realize this, it is advisable to use the procedure shown in
When it is determined that there is a blood vessel (True), the following processing is performed in the arithmetic operation unit 60, as shown in
The problem can be solved by performing processing and display according to the flow in
Flow (1) will be described below. First, an observation image I(i) is acquired in the i-th iteration.
This observation image I(i) is stored in the k-th element MI(k) of an array MI.
MI(k) is decomposed into the channels R, G, and B to acquire MR(k), MG(k), and MB(k).
Next, different processing is performed depending on the result of the blood vessel determination. First, if it is determined that there is a blood vessel (YES, i.e., True in the figure), then MinvB(k), which is the inversion of MB(k), is generated, and the position at which it is determined that a blood vessel is present, i.e., the spot position (Sx(i), Sy(i)), is calculated from the three images MR(k), MG(k), and MinvB(k). This spot position (Sx(i), Sy(i)) is substituted into the k-th element MS(k, X, Y) of an array MS. Here, for the sake of convenience, the coordinates (Sx(k), Sy(k)) of the spot are denoted as MS(k, X, Y) to indicate the k-th element of MS. On the other hand, if it is determined that there is no blood vessel (NO, i.e., False in the figure), then nan, indicating that no corresponding spot position is present, is substituted into the k-th element MS(k, X, Y) of the array MS.
Next, displacement is calculated.
A correlation image ICorr,B(i) is acquired on the basis of the images MB(k) and MB(k−1).
The displacement (Δx(i), Δy(i)) of measurement (i) relative to measurement (i−1) is calculated from the correlation image ICorr,B(i) and is stored in an array MΔ(k−1, x, y) of displacements.
Subsequently, the accumulated amount of displacement is calculated for each element. The accumulated amount of displacement is obtained by adding the newest displacement MΔ(k−1, x, y) to the amount of displacement MΣ accumulated up to that point. Note that all elements of MΣ are initialized to 0.
MΣ(k−1, x, y) = 0 + MΔ(k−1, x, y), MΣ(k−2, x, y) = MΔ(k−2, x, y) + MΔ(k−1, x, y), . . . , MΣ(1, x, y) = MΔ(1, x, y) + MΔ(2, x, y) + . . . + MΔ(k−1, x, y)
Note that, because the index indicating each array element is shifted down by one at the end of the loop, the smaller the index (k−2, k−3, . . . , 1), the more frame-to-frame displacements have been added, so that the accumulated displacement of each earlier frame is reflected.
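The per-loop additions above unroll into a reverse cumulative sum, sketched here (Python with NumPy; the flat array layout is an illustrative simplification of MΔ and MΣ):

```python
import numpy as np

def accumulated_displacements(deltas):
    # deltas[j] holds MΔ(j): the displacement of frame j+1 relative to
    # frame j. The accumulated displacement MΣ(j) of frame j relative to
    # the newest frame is the sum of all later frame-to-frame
    # displacements, i.e. a reverse cumulative sum.
    deltas = np.asarray(deltas, dtype=float)
    return np.cumsum(deltas[::-1], axis=0)[::-1]
```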
Next, the accumulated amount of displacement is added to the spot position obtained on each frame (at the time of each measurement), and the result is stored in an array MSΣ(k−1, x, y).
A correction image MISΣ(k) is produced by placing a mark of a desired size on the image of each frame on the basis of this array information. The mark may be sized in accordance with the actual spot diameter, or may be given a hue that is easy to identify.
By superimposing each element of these correction images MISΣ(k), a trace image ITr(k−1) of the marks for indices 1 to (k−1) is generated. Here, if k = 2, a single mark is produced; if k > 2, a plurality of marks are superimposed, and therefore a trace image of the spot positions (marks) on the frames is obtained.
Next, by superimposing the trace image ITr(k−1) on MI(k), i.e., the observation image I(i), a display image IDisp(k) is acquired.
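The mark placement and superimposition can be sketched as follows (Python with NumPy; grayscale images, a square mark of fixed half-width, and the function name are illustrative simplifications):

```python
import numpy as np

def render_trace(observation, spots, sigmas, radius=2):
    # Place a mark at each shift-corrected past spot position and
    # superimpose the resulting trace on the current observation image.
    out = observation.copy()
    h, w = out.shape[:2]
    for (sx, sy), (dx, dy) in zip(spots, sigmas):
        if sx is None:
            continue  # nan entry: no blood vessel detected on that frame
        cx, cy = int(round(sx + dx)), int(round(sy + dy))
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
        out[y0:y1, x0:x1] = 1.0  # mark with an easily identified value
    return out
```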
The display image IDisp(k) is output to the display unit 70 (IDisp(i)=IDisp(k) in the case of measurement i).
The index indicating each element of array data is shifted to the side smaller by one at the end of the processing loop. By doing so, information acquired on the previous frame is accumulated.
The next observation image I(i+1) is acquired via the loop, and the same processing is continued.
The length of the accumulated information is specified by setting the size k of the array as appropriate, and hence the time period for which the trace image of blood vessel detection positions is displayed can be set.
It is preferable that an ON/OFF button for turning laser beam radiation for blood vessel recognition ON and OFF be provided so as to allow the operation of blood vessel recognition to be temporarily interrupted. This is useful in cases where the treatment tool 1 provided with the blood vessel recognition device 100 and/or the endoscope apparatus 200 is moved to another affected area or observation field of view before blood vessel recognition is resumed, or where only treatment of the affected area with the treatment tool 1 is performed before, after, or during the operation of blood vessel recognition. In this way, even if a surgery assistant secures a stable field of view, display of a trace image with low correlation can be prevented by setting the button to OFF when the treatment tool 1 provided with the blood vessel recognition device 100 and/or the endoscope apparatus 200 is moved for purposes other than blood vessel recognition. When laser beam radiation is set ON again, the above-described flow in
Although flow (2) in
Furthermore, downstream in the flow, a trace is displayed only when the determination constant Jves is 0. This affords an advantage in that the real-time determination spot is easy to identify, because the historical record of determinations is not superimposed while a determination that there is a blood vessel is maintained.
In the case of the complementary color optical system, it is possible to calculate a spot position on the image by performing the same processing as in step 1.1 using information obtained by conventional RGB conversion.
For (2) of step 1, processing can be performed in the same manner as in (1) of step 1 using information about RGB conventionally converted in the complementary color optical system.
For example, image processing of the complementary color optical system (refer to the Publication of Japanese Patent No. 4996773) is performed.
First, the following processing is performed:

Magenta (Mg), green (G), cyan (Cy), yellow (Ye)
↓ Y/C separation circuit
Luminance signal Y, color-difference signals Cr′, Cb′
↓ RGB conversion
R1, G1, B1
Thereafter, processing can be performed by using the images obtained with these signals for IR(i), IG(i), and IB(i) in step 1.
As (3)(a) of step 1, a spot position may be calculated by causing the index laser beam V to take the shape of cross-hairs, as shown in
When an inverse Fourier image in the shape of cross-hairs is placed on the surface of the lens mounted on an emission unit 81 for the index laser beam V and a laser beam is radiated, a laser spot in the shape of cross-hairs is projected on a sample.
This spot in the shape of cross-hairs is observed on a laparoscopic image.
Correlations between a group of reference images, which are prepared to contain cross-hairs having sizes (enlarged/reduced) and angles shifted little by little relative to one another, and the above-described observed laparoscopic image are established, and the correlation image having the maximum peak value is selected. A spot position is calculated from the coordinates of the central portion of the cross-hairs.
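A sketch of this reference-image matching (Python with NumPy; template size, line thickness, and the FFT-based correlation are illustrative choices, and enlargement/reduction of the references is omitted for brevity):

```python
import numpy as np

def cross_template(size, angle, half_thick=1):
    # Binary cross-hairs pattern of two orthogonal lines rotated by
    # `angle` (radians) about the template centre.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    u = xx * np.cos(angle) + yy * np.sin(angle)
    v = -xx * np.sin(angle) + yy * np.cos(angle)
    return ((np.abs(u) <= half_thick) | (np.abs(v) <= half_thick)).astype(float)

def match_cross(image, angles, size=21):
    # Correlate the image with each rotated reference template and keep
    # the one whose correlation image has the maximum peak; the spot
    # position is the peak offset plus the template-centre offset.
    best_peak, best_angle, best_pos = -np.inf, None, None
    for a in angles:
        t = cross_template(size, a)
        t -= t.mean()
        corr = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                    np.conj(np.fft.fft2(t, s=image.shape))))
        if corr.max() > best_peak:
            best_peak = corr.max()
            best_angle = a
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            best_pos = (dx + size // 2, dy + size // 2)
    return best_angle, best_pos
```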
The spots in the shape of cross-hairs have shapes of lines intersecting at substantially uniform angles. Hence, even in a case where the index laser beam V leaks not only into the B-ch but also into another ch, the spot position can be accurately calculated by differentiating an overexposed white spot from the index laser spot on the image via pattern matching of this specific shape, which is not present in the living body. Although, in this embodiment, the index laser beam V is made to exhibit a spot in the shape of cross-hairs composed of two orthogonal straight lines, the index laser beam V may instead be made to exhibit a spot composed of three or more lines intersecting at substantially uniform angles.
As (3)(b) of step 1, an embodiment in which the spot of the exploration laser beam L has a concentric-ring shape will be described with reference to
When a laser beam is radiated from an emission unit 82 for the exploration laser beam L, a spot in a concentric-ring shape is observed on the laparoscopic image.
Correlations between a group of reference images, which are prepared to contain images of spots in a concentric-ring shape shifted little by little relative to one another, and the above-described observed laparoscopic image are established, and the correlation image having the maximum peak value is selected. For example, as shown in
As a result of using a spot in a concentric-ring shape, even in a case where the index laser beam V leaks not only into the B-ch but also into another ch, a spot position can be accurately calculated by differentiating an overexposed white spot from the index laser spot on an image via pattern matching of the specific shape, which is not present in the living body.
Compared with the case where a spot in the shape of cross-hairs is used, this embodiment affords an advantage in that the group of reference images contains a smaller number of images because this group does not require the variable representing the angle direction, thus enhancing the processing speed at the time of exploration. Although, in this embodiment, the exploration laser beam L is made to exhibit a spot composed of a plurality of rings arranged on concentric circles, a concentric-ring shaped spot formed of a spiral having a known curvature is also acceptable.
As a modification of (1) (a-1) of step 2, an example in which the amount of displacement is calculated using a correlation image based on images in which a laser spot undesirably intrudes will be described with reference to
Here, a case in which a laser spot is contained (a case of undesirable intrusion thereof) in the B-ch image is described. The laser spot is contained at different positions in the i-th frame IB(i) and the (i+1)-th frame IB(i+1). On the correlation image ICorr,B(i+1) between these images, a peak indicating a displacement is also observed. The displacement can be obtained by calculating the position of this peak.
Therefore, the displacement can also be calculated using an image in which a laser spot undesirably intrudes.
Although a filter with a high OD value would otherwise need to be used in order to prevent undesirable intrusion of a laser spot, this embodiment affords an advantage in that the displacement can be calculated as described above without having to replace the filter.
As a modification of (1) (a-1) of step 2, an example in which a spot position is calculated on the basis of a correlation image using images of other channels will be described with reference to
Here, an example is given in which a laser spot is contained at different positions in the i-th frame IR(i) of the R-ch and the (i+1)-th frame IB(i+1) of the B-ch. On the correlation image ICorr,R-B(i+1) between these images, a peak indicating the displacement is also observed. This peak is sharper than the peak at the center. Therefore, the peak can be enhanced by performing image processing using a high-pass filter that removes low-frequency changes, consequently making it possible to calculate the peak position. The displacement can be calculated from this peak position.
Therefore, a displacement can be calculated by acquiring a correlation image between images of other channels.
This embodiment affords an advantage in that arbitrary images can be used according to the processing speed.
As (1)(a-2) of step 2, a method for acquiring a correlation image by generating a group of enlarged images, reduced images, and rotated images will be described with reference to
The figure shows an example in which the field of view on the i+1-th frame is rotated and moved to the upper left relative to the i-th frame.
If rotation or enlargement takes place in addition to parallel translation in this manner, a group of images formed by enlarging, reducing, and rotating the i-th frame is generated, and a correlation between this group and the (i+1)-th frame is established. Each of the constituent components of the image group is endowed with information about an enlargement factor and a rotational angle. Of the correlation images, the correlation image with the maximum peak (the image with the maximum correlation) is selected, and the amount of displacement is calculated. Furthermore, how much the (i+1)-th field of view has been subjected to enlargement/reduction and rotation can be obtained from the enlargement factor and the rotational angle of the component of the image group that has generated the correlation image with the maximum peak.
On the basis of the above-described information, the amount of shift in spot position is obtained.
In this manner, the amount of shift in spot position can be calculated even in the case where the field of view is subjected to displacement, enlargement/reduction, and rotation.
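One way to sketch this search over the enlarged/reduced/rotated image group (Python with NumPy; nearest-neighbour resampling and phase correlation are illustrative implementation choices, and all names are hypothetical):

```python
import numpy as np

def transform(img, scale, angle):
    # Nearest-neighbour resampling of img enlarged by `scale` and rotated
    # by `angle` (radians) about the image centre (inverse mapping).
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    c, s = np.cos(angle), np.sin(angle)
    xs = ((xx - cx) * c + (yy - cy) * s) / scale + cx
    ys = (-(xx - cx) * s + (yy - cy) * c) / scale + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def best_transform(prev_frame, next_frame, scales, angles):
    # Generate the group of enlarged/reduced/rotated images of the
    # previous frame, phase-correlate each with the next frame, and keep
    # the member with the maximum peak together with its displacement.
    best = (-np.inf, None, None, None)
    for sc in scales:
        for an in angles:
            cand = transform(prev_frame, sc, an)
            fa, fb = np.fft.fft2(cand), np.fft.fft2(next_frame)
            cross = fb * np.conj(fa)
            corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
            if corr.max() > best[0]:
                dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
                h, w = corr.shape
                best = (corr.max(), sc, an,
                        (dx - w if dx > w // 2 else dx,
                         dy - h if dy > h // 2 else dy))
    return best[1], best[2], best[3]
```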
As (1)(a-3) of step 2, a method for acquiring a correlation image by generating a group of affine transformed images will be described with reference to
The figure shows an example in which the field of view on the i+1-th frame is rotated and moved to the upper left relative to the i-th frame.
If rotation or enlargement takes place in addition to parallel translation in this manner, a group of images formed by applying affine transformation to the i-th frame is generated, and a correlation between this group and the (i+1)-th frame is established. Affine transformation includes enlargement/reduction, angle transformation, and skew transformation. Each of the constituent components of this image group is endowed with information about an enlargement factor, a rotational angle, and a skew. Of the correlation images, the correlation image with the maximum peak (image with the maximum correlation) is selected, and the amount of displacement is calculated. Furthermore, how much the i+1-th field of view has been subjected to enlargement/reduction, rotation, and skew can be obtained from the enlargement factor, the rotational angle, and the skew information of the component of the image group that has generated the correlation image with the maximum peak.
On the basis of the above-described information, the amount of shift in spot position is obtained.
In this manner, the amount of shift in spot position can be calculated even in the case where the field of view is subjected to displacement, enlargement/reduction, rotation, and skew.
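Generating the affine-transformed image group can be sketched as follows; the peak-selection step is the same correlation search as before. The parameter grids and the shear model (shear along one axis only) are assumptions for illustration, and `scipy.ndimage.affine_transform` is used with the inverse matrix because it maps output coordinates back to input coordinates.

```python
import numpy as np
from scipy.ndimage import affine_transform

def affine_candidates(img, scales, angles_deg, shears):
    """Yield (params, transformed image) for every scale/rotation/shear
    combination, each transform applied about the image centre."""
    c = (np.array(img.shape) - 1) / 2.0
    for s in scales:
        for a in np.deg2rad(angles_deg):
            for k in shears:
                R = np.array([[np.cos(a), -np.sin(a)],
                              [np.sin(a),  np.cos(a)]])
                S = np.array([[1.0, k], [0.0, 1.0]])  # skew along one axis
                M = s * (R @ S)
                # affine_transform maps output coords through the given matrix
                # to input coords, so pass the inverse to realise the forward map
                Minv = np.linalg.inv(M)
                offset = c - Minv @ c
                yield ((s, float(np.rad2deg(a)), k),
                       affine_transform(img, Minv, offset=offset, order=1))
```

Each candidate carries its (scale, angle, skew) tuple, so the winning correlation image directly reports how the field of view was transformed.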
As (1)(b) of step 2, an example in which the amount of shift is calculated from a feature point obtained by the Lucas-Kanade method will be described, as shown in
In this manner, the amount of shift in spot position can be calculated even in the case where the field of view is subjected to displacement, enlargement/reduction, and rotation.
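The core least-squares step of the Lucas-Kanade method can be sketched as below. A real tracker solves this system per feature point over small windows and pyramids; this single-window version, with an assumed function name, only illustrates how a translation estimate falls out of the image gradients.

```python
import numpy as np

def lk_shift(I, J):
    """Estimate the translation (dy, dx) of frame J relative to frame I by
    solving the Lucas-Kanade least-squares system over a single window
    (here, the whole image)."""
    Iy, Ix = np.gradient(I)          # spatial gradients
    It = J - I                       # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)   # normal equations of the LK system
    return dy, dx
```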
As (2)(a) of step 2, an embodiment in which the amount of shift is acquired with the insertion length/rotational position sensors 84 and 85 will be described with reference to
In this embodiment, the amount of relative movement of the trocar 83 is calculated with the rotational position sensor 84 and the insertion length sensor 85 installed between the rigid endoscope 500 and the trocar 83.
Because how much the rigid endoscope 500 has moved towards or away from the living sample is known from the insertion length, the relative enlargement/reduction factor between frames can be calculated.
In addition, the amount of relative rotation between frames can be calculated with the rotational position sensor 84.
A spot is displayed at the corrected position on the basis of the amount of this shift (enlargement/reduction factor and amount of rotation).
A position shift can be calculated more accurately by using the sensors.
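Correcting a spot position from the sensor readings amounts to applying the relative rotation and enlargement/reduction factor to the stored coordinates. A minimal sketch, assuming the transform acts about a fixed image centre (the patent does not state the pivot):

```python
import numpy as np

def correct_spot(spot, centre, scale, angle_deg):
    """Map a spot position from one frame to the next given the relative
    enlargement/reduction factor (from the insertion length sensor) and the
    relative rotation (from the rotational position sensor)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    c = np.asarray(centre, dtype=float)
    return c + scale * (R @ (np.asarray(spot, dtype=float) - c))
```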
As (2)(b) of step 2, an embodiment in which the amount of shift is acquired with the electromagnetic position sensor 86 will be described with reference to
The amount of relative movement (the amount of shift) of the rigid endoscope 500 is acquired by placing the electromagnetic position sensor 86 at the leading end section of the rigid endoscope 500 and acquiring position information with an electromagnetic detector, which is not shown in the figure.
By doing so, the amount of shift can be acquired without having to install a new sensor on the trocar 83 itself.
As (3) of step 2, (1) acquisition of the amount of shift via image processing may be combined with (2) acquisition of the amount of shift with a sensor.
By doing so, the amount of shift can be calculated more accurately.
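The patent does not specify how the two estimates are combined; inverse-variance weighting is one standard choice and serves here as an illustrative sketch.

```python
def fuse_shifts(image_shift, sensor_shift, image_var, sensor_var):
    """Combine the image-derived and sensor-derived shift estimates with
    inverse-variance weights, trusting the lower-variance source more."""
    w_img, w_sen = 1.0 / image_var, 1.0 / sensor_var
    return (w_img * image_shift + w_sen * sensor_shift) / (w_img + w_sen)
```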
Calculation of the amount of shift in this embodiment is not limited to the calculation methods described above. Instead, other known calculation methods may be used. For example, in (3) of step 1, the index laser beam in (a) may be made to exhibit a spot in a concentric-ring shape, instead of the shape of intersecting lines (e.g., cross-hairs), and furthermore, the exploration laser beam in (b) may be made to exhibit a spot in the shape of intersecting lines (e.g., cross-hairs), instead of a concentric-ring shape.
In addition, although this embodiment has been described by way of an example of the endoscope apparatus 200 in which the blood vessel recognition device 100 and the image acquisition device 500 are independent of each other, the endoscope apparatus 200 is not limited to this. Instead, the blood vessel recognition device 100 and the image acquisition device 500 may be integrated with each other.
In addition, the blood vessel recognition device 100 of the endoscope apparatus 200 according to this embodiment may include a gripping means 150 for gripping the living body, as shown in
In this case, blood vessel recognition can be performed independently of a treatment tool 101, without having to additionally mount blood vessel recognition means on the treatment tool 101 and without increasing its outer diameter. In particular, in an energy device for surgical treatment, in which living tissue, fat, and a small-diameter blood vessel from which bleeding can be easily arrested are resected or coagulated and sealed via ultrasound and/or high frequency as described above, there is an advantage in that blood vessel recognition can be performed while avoiding the risk that mist originating from the affected area as a result of the motion of the treatment tool 101 adheres to the blood vessel recognition device 100.
In addition, in this embodiment, items of information about the acquired blood vessel may be joined to one another and may be displayed on the display unit 70 as an independent blood vessel image.
Because the image acquisition device 500 is generally held by a surgery assistant, a smaller amount of shift in the field of view may occur in this case than in a case in which the image acquisition device 500 is held by the operator himself/herself. If this is the case, the operator may be informed of the blood vessel position by continuously displaying the blood vessel recognition position while omitting the correction of a shift in the field of view.
This can be achieved by performing processing and display according to the flow in
Flow (1) will be described below. First, an observation image I(i) is acquired in the i-th iteration.
This observation image I(i) is stored in the k-th element MI(k) of the array MI.
MI(k) is decomposed into the channels R, G, and B to acquire MR(k), MG(k), and MB(k).
Next, different processing is performed depending on the result of the blood vessel determination. First, if it is determined that there is a blood vessel (YES, i.e., True in the figure), then MinvB(k), which is the inversion of MB(k), is generated, and the position at which it is determined that a blood vessel is present, i.e., the spot position (Sx(i), Sy(i)), is calculated from the three images MR(k), MG(k), and MinvB(k). This spot position (Sx(i), Sy(i)) is substituted into the k-th element MS(k, X, Y) of the array MS. Here, for the sake of convenience, the coordinates (Sx(k), Sy(k)) of the spot are denoted as MS(k, X, Y), to indicate the k-th element of MS. On the other hand, if it is determined that there are no blood vessels (NO, i.e., False in the figure), then nan, indicating that no corresponding spot position is present, is substituted into the k-th element MS(k, X, Y) of the array MS.
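The patent states only that the spot position is calculated from MR(k), MG(k), and MinvB(k); the combination used below (pixelwise product, then an intensity-weighted centroid) is an assumption for illustration.

```python
import numpy as np

def spot_position(R, G, B):
    """Locate the laser spot from the R and G channels and the inverted B
    channel, returning (Sx, Sy) or (nan, nan) if no spot is found."""
    invB = B.max() - B                       # MinvB: inversion of the B channel
    m = R.astype(float) * G * invB           # bright where the spot is expected
    total = m.sum()
    if total == 0.0:
        return float('nan'), float('nan')    # no spot: corresponds to storing nan
    yy, xx = np.mgrid[0:m.shape[0], 0:m.shape[1]]
    return (m * xx).sum() / total, (m * yy).sum() / total   # (Sx, Sy)
```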
Next, the spot position to be superimposed is stored in the array MSΣ′(k−1, x, y).
A correction image MISΣ′(k) is produced by placing a mark of a desired size on the image of each frame on the basis of this array information. The size of the mark may be set in accordance with the actual spot diameter, and the mark may be given a hue that is easy to identify.
By superimposing each element of these correction images MISΣ′(k), a trace image ITr(k−1) of the marks over frames 1 to (k−1) is generated. Here, if k is 2, then a single mark is produced, and if k > 2, then a plurality of marks are superimposed; therefore, a trace image of the spot positions (marks) on the frames is obtained.
Next, by superimposing the trace image ITr(k−1) on the MI(k), i.e., observation image I(i), a display image IDisp(k) is acquired.
The display image IDisp(k) is output to the display unit 70 (IDisp(i)=IDisp(k) in the case of measurement i).
The index indicating each element of the array data is shifted down by one at the end of the processing loop. By doing so, information acquired on the previous frame is accumulated.
The next observation image I(i+1) is acquired via the loop, and the same processing is continued.
The length of the accumulated information is specified by setting the size k of the array as appropriate, and hence the time period for which the trace image of blood vessel detection positions is displayed can be set.
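The bookkeeping of flow (1) above can be sketched in a few lines. This is a simplified illustration: the function and parameter names are assumptions, spot positions are stored as (Sx, Sy) tuples with None standing in for nan, and marks are drawn as single bright pixels rather than sized, coloured marks.

```python
import numpy as np

def run_flow(frames, detect, K=5):
    """Sketch of flow (1): for each observation image I(i), superimpose the
    trace of previously stored spot positions, then run the blood vessel
    determination `detect` (returning (Sx, Sy) or None) and store its result.
    K sets the array length, i.e. how long the trace is retained."""
    MS = []                                   # spot positions of earlier frames
    display_images = []
    for I in frames:
        disp = I.astype(float).copy()
        for s in MS:                          # trace image over frames 1..k-1
            if s is not None:
                disp[s[1], s[0]] = disp.max() + 1.0   # superimpose a mark
        display_images.append(disp)
        MS.append(detect(I))                  # store this frame's determination
        if len(MS) >= K:
            MS.pop(0)                         # shift array indices down by one
    return display_images
```

Choosing K larger lengthens the displayed trace of blood vessel detection positions, matching the role of the array size k in the text.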
It is preferable that an ON/OFF button (operating button) for blood vessel recognition be provided so as to allow the operation of blood vessel recognition to be temporarily interrupted. This is useful for cases in which the treatment tool 101 provided with the blood vessel recognition device 100 and/or the endoscope apparatus 200 is moved to another affected area or observation field of view and blood vessel recognition is then resumed, or for cases in which only the operation for treatment of the affected area with the treatment tool 101 is performed before, after, or during the operation of blood vessel recognition for the use of the trace image to which the flow in
In addition, the trace image to be displayed in the loop does not need to display all information based on the measured images. By thinning out the displayed information, the processing speed can be enhanced, and furthermore, the display becomes more visually recognizable because excessive trace marks are eliminated.
Although flow (2) in
Furthermore, a trace is displayed only when the determination constant Jves is 0 downstream of the flow. By doing so, it is possible to add a function for prohibiting superimposition of historical records of determination (prohibiting tracing at the positions at which it is determined that there was a blood vessel in the past) while a determination that there is a blood vessel is maintained. Therefore, even if the operator repeatedly scans the same site of living tissue, only newly detected blood vessel information is progressively added while preventing a crowded trace display. Such prohibition of superimposition display can maintain accuracy, as long as the resolution of the blood vessel recognition device 100 is not exceeded. The treatment tool 101 that is provided with the blood vessel recognition device 100 having a function for prohibiting such superimposition display affords an advantage in that it becomes even easier to identify a real-time determination spot because the blood vessel near the affected area can be clearly and comprehensively displayed merely by the operator or the surgery assistant scanning the blood vessel recognition device 100 randomly.
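The Jves latch described above can be sketched as follows: while the "blood vessel present" determination is maintained, further marks are suppressed, and tracing is permitted again once the determination is released. The function name is an assumption.

```python
def filter_marks(determinations):
    """Keep only the first frame index of each continuous run of 'blood
    vessel present' determinations, prohibiting superimposition of marks
    while the determination is maintained (Jves-style latch)."""
    jves = 0
    kept = []
    for i, vessel in enumerate(determinations):
        if vessel and jves == 0:
            kept.append(i)   # newly detected: allow a trace mark
            jves = 1
        elif not vessel:
            jves = 0         # determination released: tracing permitted again
    return kept
```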
In the case of the complementary color optical system, it is possible to calculate a spot position on the image by performing the same processing as in step 1.1 using information obtained by conventional RGB conversion.
In addition, the above-described embodiments may be provided with an interface for allowing the operator or the surgery assistant to switch among the determination methods illustrated in
Although a laser beam is used for blood vessel detection in the above-described embodiments, the embodiments are not limited to this. Instead, other coherent light with matched phases may be used.
In addition, although the near-infrared region is used as the wavelength of the laser beam in the above-described embodiments, the embodiments are not limited to this. Instead, NBI (Narrow Band Imaging) may be used.
This is a continuation of International Application PCT/JP2016/062830 which is hereby incorporated by reference herein in its entirety. This application is based on U.S. provisional patent application 62/151,585, the contents of which are incorporated herein by reference.
Number | Date | Country
---|---|---
62151585 | Apr 2015 | US

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2016/062830 | Apr 2016 | US
Child | 15789004 | | US