WO 2013/115323 discloses a method by which to separately capture reflected light in first to third wavelength bands according to absorption characteristics of carotene and hemoglobin to acquire first to third reflected light images, and display a combined image formed by combining the first to third reflected light images in different colors, thereby improving the visibility of the subject of a specific color (carotene in this case) in the body cavity.
In addition, WO 2016/151676 discloses a method by which to acquire a plurality of spectral images, calculate the amount of a separation target component using the plurality of spectral images, and perform a highlighting process on an RGB color image based on the amount of the separation target component. In the highlighting process, the luminance signal and the color difference signals are attenuated more strongly as the amount of the separation target component, that is, the component of the subject whose visibility is to be enhanced, decreases, thereby improving the visibility of the specific color of the subject.
As described above, there are known methods for improving the visibility of a specific color of the subject, either by highlighting the specific color in the body or by attenuating the colors of regions containing a smaller amount of the specific-color component.
According to one aspect of the invention, there is provided an image processing device comprising a processor including hardware,
the processor being configured to perform:
executing a color attenuation process on a region other than a yellow region in a captured image including a subject image to relatively enhance visibility of the yellow region in the captured image;
detecting a blood region that is a region of blood in the captured image based on color information of the captured image; and
suppressing or stopping the attenuation process on the blood region based on a detection result of the blood region.
According to another aspect of the invention, there is provided an endoscope apparatus comprising an image processing device, wherein
the image processing device includes
a processor including hardware,
the processor being configured to perform:
executing a color attenuation process on a region other than a yellow region in a captured image including a subject image to relatively enhance visibility of the yellow region in the captured image;
detecting a blood region that is a region of blood in the captured image based on color information of the captured image; and
suppressing or stopping the attenuation process on the blood region based on a detection result of the blood region.
According to another aspect of the invention, there is provided an operating method of an image processing device, comprising:
executing a color attenuation process on a region other than a yellow region in a captured image including a subject image to relatively enhance visibility of the yellow region in the captured image;
detecting a blood region that is a region of blood in the captured image based on color information of the captured image; and
suppressing or stopping the attenuation process on the blood region based on a detection result of the blood region.
The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. These are, of course, merely examples and are not intended to be limiting. In addition, the disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, when a first element is described as being “connected” or “coupled” to a second element, such description includes embodiments in which the first and second elements are directly connected or coupled to each other, and also includes embodiments in which the first and second elements are indirectly connected or coupled to each other with one or more other intervening elements in between.
Exemplary embodiments are described below. Note that the following exemplary embodiments do not in any way limit the scope of the content defined by the claims laid out herein. Note also that all of the elements described in the present embodiment should not necessarily be taken as essential elements.
For example, hereinafter, an application example of the present disclosure to a rigid scope used for surgery or the like will be described. However, the present disclosure is also applicable to a flexible scope used in an endoscope for digestive tract and the like.
1. Endoscope Apparatus and Image Processing Section
Thus, in the present embodiment, a captured image is subjected to a process of attenuating color differences in colors other than yellow (specific color) so that the visibility of the subject is relatively improved (the yellow subject is highlighted) as illustrated in
As indicated with BR in
Thus, in the present embodiment, a region where blood exists is detected from a captured image, and a display mode of a display image is controlled based on the detection result (for example, the process of attenuating the colors other than yellow is controlled). Hereinafter, an image processing device and an endoscope apparatus including the image processing device according to the present embodiment will be described.
The insertion section 2 includes an illumination optical system 7 that emits light input from the light source section 3 toward a subject and an imaging optical system 8 (imaging device, imaging section) that captures reflected light from the subject. The illumination optical system 7 is a light guide cable that is arranged along the entire length of the insertion section 2 to guide light incident from the light source section 3 at the proximal end to the distal end.
The imaging optical system 8 includes an objective lens 9 that collects reflected light from the subject, i.e., the reflection of the light emitted from the illumination optical system 7, and an image sensor 10 that captures the light collected by the objective lens 9. The image sensor 10 is a single-plate color image sensor, such as a CCD image sensor or a CMOS image sensor, for example. As illustrated in
The light source section 3 includes a xenon lamp 11 (light source) that emits white light (normal light) in a wide wavelength band. As illustrated in
The signal processing section 4 includes an interpolation section 15 that processes an image signal acquired by the image sensor 10 and an image processing section 16 (image processing device) that processes the image signal processed by the interpolation section 15. The interpolation section 15 turns a color image acquired by pixels of the image sensor 10 corresponding to the individual colors (so-called Bayer array image) into a three-channel image by a publicly known demosaicing process (generating a color image with pixel values of RGB in pixels).
The control section 17 synchronizes the timing for capturing by the image sensor 10 and the timing for image processing by the image processing section 16, based on an instructive signal from the external I/F section 13.
Hereinafter, the case where the subject to be improved in visibility is carotene in fat will be described. As illustrated in
In the image processing section 16 illustrated in
The preprocessing section 14 performs an optical black (OB) clamp process, a gain correction process, and a white balance (WB) correction process on the three-channel image signals input from the interpolation section 15, using an OB clamp value, a gain correction value, and a WB coefficient value saved in advance in the control section 17. Hereinafter, the image processed and output by the preprocessing section 14 (RGB color image) will be called captured image.
The detection section 19 includes a blood image generation section 23 that generates a blood image based on the captured image from the preprocessing section 14 and a blood region detection section 22 (outflowing blood region detection section) that detects a blood region (outflowing blood region in a narrow sense) based on the blood image.
As described above, the image signals after the preprocessing include three types (three channels) of image signals of blue, green, and red. The blood image generation section 23 generates one channel of image signal from the two types (two channels) of image signals of green and red and forms the blood image from that image signal. In the blood image, pixels corresponding to a larger amount of hemoglobin contained in the subject have higher pixel values (signal values). For example, the blood image generation section 23 generates the blood image by determining the difference between the red pixel value and the green pixel value at each pixel. Alternatively, the blood image generation section 23 generates the blood image by dividing the red pixel value by the green pixel value at each pixel.
In the example described above, the blood image is generated from the two channels of signals. However, the present disclosure is not limited to this, and the blood image may be generated by calculating luminance (Y) and color differences (Cb, Cr) from the three channels of RGB signals, for example. In that case, the blood image generation section 23 generates the blood image from the color difference signals such that a region where the chroma of red is sufficiently high, or a region where the luminance signal is low to some extent, is treated as a region where blood exists. For example, the blood image generation section 23 determines, for each pixel, an index value corresponding to the chroma of red based on the color difference signals, and generates the blood image from the index values. Alternatively, the blood image generation section 23 determines, for each pixel, an index value that becomes larger as the luminance signal becomes lower, and generates the blood image from the index values.
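For illustration, the difference-based (or ratio-based) blood image described above can be sketched as follows. This is a simplified Python sketch for explanation only; the function name and clipping behavior are hypothetical and not part of the disclosed embodiment.

```python
import numpy as np

def blood_image(rgb):
    # Hemoglobin absorbs green more strongly than red, so the
    # red-minus-green difference grows where more blood is present.
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    return np.clip(r - g, 0.0, None)  # the ratio r / (g + eps) is an alternative

# A reddish (blood-like) pixel scores higher than a neutral gray one.
img = np.array([[[200, 40, 40], [120, 120, 120]]], dtype=np.uint8)
s = blood_image(img)
print(s[0, 0] > s[0, 1])  # True
```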
The blood region detection section 22 sets a plurality of local regions (divided regions, blocks) in the blood image. For example, the blood region detection section 22 divides the blood image into a plurality of rectangular areas, and sets the divided rectangular areas as local regions. The size of rectangular areas can be set as appropriate but one local region is set to 16×16 pixels, for example. For example, as illustrated in
The local regions are not necessarily rectangular. It is obvious that the blood image can be divided into any polygonal shape and the divided regions can be set as local regions. In addition, the local regions may be appropriately settable in response to the operator's instruction. In the present embodiment, a region formed from a group of a plurality of adjacent pixels is set as one local region for the sake of reducing the amount of calculation later and removing noise. However, one pixel can be set as one local region. In this case, the following process is the same.
The blood region detection section 22 sets the blood region where blood exists in the blood image. That is, the blood region detection section 22 sets the region with a large amount of hemoglobin as the blood region. For example, the blood region detection section 22 performs a threshold process on all the local regions to extract local regions with sufficiently large values of blood image signals, performs an integration process on adjacent local regions, and sets the resultant regions as the blood region. In the threshold process, for example, the blood region detection section 22 compares values obtained by averaging the pixel values in the local regions with a given threshold, and extracts local regions with the averaged values larger than the given threshold. The blood region detection section 22 calculates the positions of all the pixels included in the blood region from coordinates a(m, n) of the local regions included in the blood region and information about the pixels included in the local regions, and outputs the calculated information to the visibility enhancement section 18 as blood region information indicating the blood region.
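The local-region threshold step described above can be sketched as follows. The block size and threshold here are illustrative placeholders, and the final integration of adjacent extracted regions is only noted in a comment.

```python
import numpy as np

def detect_blood_regions(blood_img, block=16, thresh=50.0):
    # Divide the blood image into block x block local regions, average
    # the pixel values in each region, and flag regions whose mean
    # exceeds the given threshold as candidate blood regions.
    h, w = blood_img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for m in range(h // block):
        for n in range(w // block):
            tile = blood_img[m*block:(m+1)*block, n*block:(n+1)*block]
            mask[m, n] = tile.mean() > thresh
    return mask  # adjacent True regions would then be merged into one blood region

img = np.zeros((32, 32))
img[0:16, 0:16] = 120.0            # one strongly "bloody" local region
print(detect_blood_regions(img))   # only the top-left region is flagged
```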
The visibility enhancement section 18 subjects the captured image from the preprocessing section 14 to a process of decreasing the chromas of the regions other than the yellow region in a color difference space. Specifically, the visibility enhancement section 18 converts the image signals of RGB pixels in the captured image into a YCbCr signal of luminance color difference. The conversion equations are the following (1) to (3):
Y=0.2126×R+0.7152×G+0.0722×B (1)
Cb=−0.114572×R−0.385428×G+0.5×B (2)
Cr=0.5×R−0.454153×G−0.045847×B (3)
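For reference, the forward conversion of equations (1) to (3) can be written as a single matrix multiplication. This Python sketch is illustrative only; a white input maps to Y = 1 with zero color differences, which confirms the coefficients.

```python
import numpy as np

# RGB -> YCbCr matrix taken directly from equations (1) to (3).
M = np.array([
    [ 0.2126,     0.7152,     0.0722  ],
    [-0.114572,  -0.385428,   0.5     ],
    [ 0.5,       -0.454153,  -0.045847],
])

def rgb_to_ycbcr(rgb):
    return M @ np.asarray(rgb, dtype=np.float64)

y, cb, cr = rgb_to_ycbcr([1.0, 1.0, 1.0])  # white: Y = 1, Cb = Cr = 0
print(round(y, 6), round(cb, 6), round(cr, 6))
```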
Next, as illustrated in
Specifically, as shown in the following equations (4) to (6), the visibility enhancement section 18 controls the amount of attenuation according to the signal value of the blood image in the blood region detected by the blood region detection section 22. In the regions other than the blood region (excluding the yellow region), the coefficients α, β, and γ are fixed to values smaller than 1, for example. Alternatively, in the regions other than the blood region (excluding the yellow region), the amount of attenuation may also be controlled by the following equations (4) to (6):
Y′=α(SHb)×Y (4)
Cb′=β(SHb)×Cb (5)
Cr′=γ(SHb)×Cr (6)
where SHb represents the signal value (pixel value) of the blood image. As illustrated in
According to the foregoing equations (4) to (6), the coefficients approach 1 in the region where blood exists, and thus the amount of attenuation becomes small. That is, pixels with larger signal values in the blood image are attenuated less in color (color difference). In other words, in the blood region detected by the blood region detection section 22, the amount of attenuation is smaller than outside the blood region, and thus the colors (color differences) are less likely to be attenuated.
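One possible shape for the coefficients α(SHb), β(SHb), and γ(SHb) in equations (4) to (6) is a curve that rises from a fixed attenuation value toward 1 as the blood-image signal value grows. The linear ramp, the base value, and the normalization below are all illustrative assumptions, since the exact curve is given in the drawings.

```python
def attenuation_coefficient(s_hb, base=0.5, s_max=255.0):
    # base < 1 gives full attenuation where no blood is detected;
    # the coefficient approaches 1 (no attenuation) as SHb grows.
    t = min(max(s_hb / s_max, 0.0), 1.0)
    return base + (1.0 - base) * t

print(attenuation_coefficient(0))    # 0.5 -> colors fully attenuated
print(attenuation_coefficient(255))  # 1.0 -> colors left unchanged
```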
Further, as illustrated in
The visibility enhancement section 18 converts the attenuated YCbCr signal into RGB signals by the equations (7) to (9) shown below. The visibility enhancement section 18 outputs the converted RGB signals (color image) to the postprocessing section 20.
R=Y′+1.5748×Cr′ (7)
G=Y′−0.187324×Cb′−0.468124×Cr′ (8)
B=Y′+1.8556×Cb′ (9)
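The inverse conversion of equations (7) to (9) undoes equations (1) to (3): applying the forward conversion and then the inverse recovers the original RGB values (up to the rounding of the published coefficients). The following Python sketch is illustrative only.

```python
def ycbcr_to_rgb(y, cb, cr):
    # Inverse conversion from equations (7) to (9).
    r = y + 1.5748 * cr
    g = y - 0.187324 * cb - 0.468124 * cr
    b = y + 1.8556 * cb
    return r, g, b

# Round trip: forward equations (1) to (3), then the inverse above.
r0, g0, b0 = 0.8, 0.3, 0.1
y  =  0.2126 * r0 + 0.7152 * g0 + 0.0722 * b0
cb = -0.114572 * r0 - 0.385428 * g0 + 0.5 * b0
cr =  0.5 * r0 - 0.454153 * g0 - 0.045847 * b0
r, g, b = ycbcr_to_rgb(y, cb, cr)
print(round(r, 4), round(g, 4), round(b, 4))  # recovers 0.8 0.3 0.1
```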
In the example described above, the color difference signals and the luminance signals in the regions other than the yellow region are attenuated. Alternatively, only the color difference signals in the regions other than the yellow region may be attenuated. In this case, the foregoing equation (4) is not executed, and Y′=Y in the foregoing equations (7) to (9).
In the example described above, the process of attenuating the colors other than yellow is suppressed in the blood region. However, the control method of the process of attenuating the colors other than yellow is not limited to this. For example, when the ratio of the blood region to the image exceeds a specific ratio (that is, the number of pixels in the blood region/the number of all the pixels exceeds a threshold), the process of attenuating the colors other than yellow may be suppressed in the entire image.
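The image-wide control variant described above can be sketched as a simple ratio test. The threshold value below is a hypothetical placeholder.

```python
import numpy as np

def suppress_globally(blood_mask, ratio_thresh=0.3):
    # When blood pixels exceed a given fraction of the whole image,
    # suppress the attenuation process over the entire image rather
    # than only inside the detected blood region.
    ratio = blood_mask.mean()  # blood pixels / all pixels
    return ratio > ratio_thresh

mask = np.zeros((10, 10), dtype=bool)
mask[:4, :] = True               # 40% of pixels flagged as blood
print(suppress_globally(mask))   # True
```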
The postprocessing section 20 performs postprocessing such as a grayscale transformation process, a color process, and a contour highlighting process on the image from the visibility enhancement section 18 (the image in which the colors other than yellow are attenuated) using a grayscale transformation coefficient, a color conversion coefficient, and a contour highlighting coefficient saved in the control section 17, thereby generating a color image to be displayed on the image display section 6.
According to the foregoing embodiment, the image processing device (the image processing section 16) includes the image acquisition section (for example, the preprocessing section 14) and the visibility enhancement section 18. The image acquisition section acquires a captured image including a subject image obtained by applying illumination light from the light source section 3 to the subject. Then, as described above with reference to
This makes it possible to attenuate the chroma of tissue of the subject having colors other than yellow as seen in the captured image, as compared to tissue in yellow (for example, fat containing carotene). As a result, the tissue in yellow is highlighted so that its visibility can be enhanced relative to the tissue in colors other than yellow. In addition, the attenuation process is performed using the captured image (for example, an RGB color image) acquired by the image acquisition section, which simplifies the configuration and the processes as compared to a case where a plurality of spectral images are prepared and the attenuation process is performed using the plurality of spectral images.
Here, yellow refers to colors that belong to a predetermined region corresponding to yellow in a color space. For example, in the CbCr plane of a YCbCr space, yellow refers to colors whose hue angle, measured about the origin with reference to the Cb axis, falls within a predetermined angle range. Alternatively, yellow refers to colors that belong to a predetermined angle range in the hue (H) plane of an HSV space. In addition, yellow refers to colors between red and green in the color space, which lie counterclockwise of red and clockwise of green in the CbCr plane, for example. However, yellow is not limited to the foregoing definitions and may be defined by the spectral characteristics of a yellow substance (for example, carotene, bilirubin, or stercobilin) or by the region that substance occupies in the color space. The colors other than yellow refer to colors that do not belong to the predetermined region corresponding to yellow in the color space (i.e., colors belonging to regions other than the predetermined region), for example.
The color attenuation process is a process of decreasing the chroma of colors. For example, the color attenuation process is a process of attenuating the color difference signals (Cb signal and Cr signal) in the YCbCr space as illustrated in
In the present embodiment, the image processing device (the image processing section 16) includes the detection section 19 that detects the blood region as a region of blood in the captured image, based on color information of the captured image. The visibility enhancement section 18 suppresses or stops the attenuation process on the blood region based on the result of detection by the detection section 19.
As described above with reference to
Here, the blood region is a region where it is estimated that blood exists in the captured image. Specifically, the blood region is a region with the spectral characteristics (colors) of hemoglobin (HbO2, Hb). As described above with reference to
The color information in the captured image refers to information that indicates the colors of pixels or regions of the captured image (for example, the local regions as illustrated in
In the present embodiment, the detection section 19 includes the blood region detection section 22 that detects the blood region based on at least one of the color information and brightness information of the captured image. The visibility enhancement section 18 suppresses or stops the attenuation process on the blood region based on the result of detection by the blood region detection section 22. The suppression of the attenuation process means that the amount of attenuation is larger than zero (for example, the coefficients β and γ in the foregoing equations (5) and (6) are smaller than 1). The stoppage of the attenuation process means that the attenuation process is not performed or the amount of attenuation is zero (for example, the coefficients β and γ in the foregoing equations (5) and (6) are 1).
The blood accumulating on the surface of the subject appears dark due to light absorption (for example, the blood is captured in a darker color as the accumulated blood becomes thicker). Thus, using the brightness information of the captured image makes it possible to detect the blood accumulating on the surface of the subject, thereby suppressing or preventing a decrease in the chroma of the accumulating blood.
The brightness information of the captured image here refers to information that indicates the brightness of a pixel or region (for example, the local region as illustrated in
In the present embodiment, the blood region detection section 22 divides the captured image into a plurality of local regions (for example, the local regions illustrated in
This makes it possible to determine whether each local region of the captured image is a blood region. For example, it is possible to set the region obtained by combining adjacent local regions that have been determined to be blood regions as the final blood region. Determining whether each local region is a blood region makes it possible to reduce the influence of noise, thereby improving the accuracy of the blood region determination.
In the present embodiment, based on the captured image, the visibility enhancement section 18 performs the color attenuation process on the regions other than the yellow region in the captured image. Specifically, the visibility enhancement section 18 determines the amount of attenuation (calculates the attenuation coefficient) based on the color information (color information of pixels or regions) of the captured image, and performs the color attenuation process on the regions other than the yellow region by the amount of attenuation.
Accordingly, the attenuation process is controlled (the amount of attenuation is controlled) based on the captured image. This makes it possible to simplify the configuration and the process as compared to a case where a plurality of spectral images are captured and the attenuation process is controlled based on the plurality of spectral images, for example.
In the present embodiment, the visibility enhancement section 18 performs the attenuation process by determining a color signal corresponding to the blood for the pixel or region of the captured image and multiplying the color signals in the regions other than the yellow region by the coefficient that changes in value according to the signal value of the color signal. Specifically, when the color signal corresponding to the blood is a color signal that has a signal value becoming larger in the region where the blood exists, the color signals in the regions other than the yellow region are multiplied by the coefficient that becomes larger (approaches 1) with an increase in the signal value.
For example, according to the foregoing equations (5) and (6), the color signal corresponding to the blood has a signal value SHb that is a difference value or a division value between R signal and G signal, the coefficients are β(SHb) and γ(SHb), and the color signals to be multiplied by the coefficients are color difference signals (Cb signal and Cr signal). The signal corresponding to the blood is not limited to this and may be a color signal in a given color space, for example. In addition, the color signal to be multiplied by the coefficient is not limited to the color difference signal and may be a chroma (S) signal in the HSV space or may be a component of RGB (channel signal).
This makes it possible to increase the value of the coefficient as there is a higher possibility of the existence of blood (for example, as the signal value of the color signal corresponding to the blood is larger). Multiplying the color signals in the regions other than the yellow region by the coefficient makes it possible to suppress the attenuation amount of colors as there is a higher possibility of the existence of the blood.
In the present embodiment, the visibility enhancement section 18 performs the color conversion process on the pixel values of pixels in the yellow region so as to rotate toward green in the color space.
For example, the color conversion process is a process of converting a color so as to rotate counterclockwise in the CbCr plane of the YCbCr space. Alternatively, the color conversion process is a process of converting a color so as to rotate counterclockwise in the hue (H) plane of the HSV space. For example, the visibility enhancement section 18 performs the rotational conversion at an angle smaller than the angular difference between yellow and green in the CbCr plane or the hue plane.
This converts the yellow region in the captured image so as to come closer to green. Since the color of blood is red and its complementary color is green, bringing the yellow region closer to green improves the color contrast between the blood region and the yellow region, thereby further enhancing the visibility of the yellow region.
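The rotational conversion in the CbCr plane can be sketched as a standard 2D rotation of the color difference vector. The rotation direction, angle, and sample color below are illustrative assumptions; in practice the angle would be kept smaller than the yellow-green hue difference, as described above.

```python
import numpy as np

def rotate_cbcr(cb, cr, angle_deg):
    # Rotate the (Cb, Cr) color difference vector counterclockwise,
    # nudging yellow hues toward green.
    a = np.deg2rad(angle_deg)
    cb2 = cb * np.cos(a) - cr * np.sin(a)
    cr2 = cb * np.sin(a) + cr * np.cos(a)
    return cb2, cr2

cb, cr = rotate_cbcr(-0.3, 0.1, 30.0)  # a yellowish color, rotated by 30 degrees
print(round(cb, 4), round(cr, 4))
```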
In the present embodiment, the color of the yellow region is the color of carotene, bilirubin, or stercobilin.
Carotene is a substance contained in fat, cancer, and others, for example. Bilirubin is a substance contained in bile and others. Stercobilin is a substance contained in stool, urine, and others.
This makes it possible to detect the region where the existence of carotene, bilirubin, or stercobilin is estimated as the yellow region, and to perform the attenuation process on the colors other than the color of the yellow region. Accordingly, it is possible to relatively improve the visibility of the region where there exists fat, cancer, bile, stool, urine, or the like in the captured image.
The image processing device according to the present embodiment may be configured as described below. That is, the image processing device includes a memory that stores information (for example, programs and various types of data) and a processor that operates based on the information stored in the memory (a processor including hardware). The processor performs an image acquisition process of acquiring a captured image including a subject image obtained by applying illumination light from a light source section 3 to a subject and a visibility enhancement process of relatively enhancing the visibility of a yellow region in the captured image by performing a color attenuation process on regions other than a yellow region in the captured image.
For example, the processor may have functions of its sections each implemented by individual hardware, or may have the functions of its sections each implemented by integrated hardware. For example, the processor may include hardware, and the hardware may include at least one of a circuit that processes a digital signal and a circuit that processes an analog signal. For example, the processor may include one or more circuit devices (e.g., an integrated circuit (IC)) mounted on a circuit board, or one or more circuit elements (e.g., a resistor or a capacitor). The processor may be a central processing unit (CPU), for example. Note that the processor is not limited to the CPU, and various other processors such as a graphics processing unit (GPU) and a digital signal processor (DSP) may also be used. Alternatively, the processor may be a hardware circuit based on an application-specific integrated circuit (ASIC). The processor may include, e.g., an amplifier circuit or a filter circuit that processes an analog signal. The memory may be a semiconductor memory (e.g., SRAM or DRAM), or may be a register. The memory may be a magnetic storage device such as a hard disk drive (HDD), or may be an optical storage device such as an optical disc device. For example, the memory stores computer-readable instructions. When the instructions are executed by the processor, the functions of components of the image processing device are implemented. The instructions described herein may be an instruction set included in a program, or may be instructions that instruct the hardware circuit included in the processor to operate.
For example, operations according to the present embodiment are implemented as follows. The image captured by an image sensor 10 is processed by a preprocessing section 14 and is stored as a captured image in the memory. The processor reads the captured image from the memory, performs the attenuation process on the captured image, and stores the image having undergone the attenuation process in the memory.
The components of the image processing device according to the present embodiment may be implemented as modules of programs that run on the processor. For example, the image acquisition section is implemented as an image acquisition module that acquires a captured image including a subject image obtained by applying illumination light from the light source section 3 to a subject. A visibility enhancement section 18 is implemented as a visibility enhancement module that performs the color attenuation process on the regions other than the yellow region in the captured image to relatively enhance the visibility of the yellow region in the captured image.
2. Second Detailed Configuration Example of the Image Processing Section
The blood vessel region detection section 21 detects a blood vessel region based on structural information of a blood vessel and a blood image. The method of generating the blood image by the blood image generation section 23 is the same as in the first detailed configuration example. The structural information of the blood vessel is detected based on a captured image from the preprocessing section 14. Specifically, the blood vessel region detection section 21 performs a direction smoothing process (noise suppression) and a high-pass filter process on the B channel (a channel in which hemoglobin has a high absorption rate) of the pixel values (image signals). In the direction smoothing process, the blood vessel region detection section 21 determines an edge direction in the captured image. The edge direction is determined as one of the horizontal, vertical, and oblique directions, for example. Next, the blood vessel region detection section 21 performs the smoothing process along the detected edge direction. The smoothing process is a process of averaging the pixel values of pixels arrayed in the edge direction, for example. The blood vessel region detection section 21 then performs the high-pass filter process on the image having undergone the smoothing process, thereby extracting the structural information of the blood vessel. A region in which the extracted structural information and the pixel value of the blood image are both at high levels is set as the blood vessel region. For example, pixels in which the signal value of the structural information is larger than a first given threshold and the pixel value of the blood image is larger than a second given threshold are determined to be pixels in the blood vessel region. The blood vessel region detection section 21 outputs information on the detected blood vessel region (the coordinates of the pixels belonging to the blood vessel region) to the visibility enhancement section 18.
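The structure-extraction and two-threshold steps above can be loosely sketched as follows. This Python sketch is a simplification: it uses a plain horizontal smoothing instead of true edge-direction-adaptive smoothing, a Laplacian-style filter instead of a tuned high-pass filter, and hypothetical threshold values.

```python
import numpy as np

def vessel_structure(b_channel):
    # Smooth along one direction (here: horizontal, as a stand-in for
    # edge-direction-adaptive smoothing), then apply a high-pass filter
    # to extract vessel-like structural information.
    smooth = (np.roll(b_channel, 1, axis=1) + b_channel +
              np.roll(b_channel, -1, axis=1)) / 3.0
    high = np.abs(4 * smooth
                  - np.roll(smooth, 1, 0) - np.roll(smooth, -1, 0)
                  - np.roll(smooth, 1, 1) - np.roll(smooth, -1, 1))
    return high

def vessel_mask(structure, blood_img, t_struct, t_blood):
    # Pixels where both the edge response and the blood signal are high.
    return (structure > t_struct) & (blood_img > t_blood)

img = np.zeros((8, 8))
img[4, :] = 100.0                  # a thin bright line, vessel-like
s = vessel_structure(img)
print(s[4, 3] > s[0, 0])           # line pixels respond strongly: True
```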
The visibility enhancement section 18 controls the amount of attenuation according to the signal value of the blood image in the blood vessel region detected by the blood vessel region detection section 21. The method for controlling the amount of attenuation is the same as in the first detailed configuration example.
According to the embodiment described above, the detection section 19 includes the blood vessel region detection section 21 that detects the blood vessel region as the region of the blood vessel in the captured image based on the color information and structural information of the captured image. The visibility enhancement section 18 suppresses or stops the attenuation process on the blood vessel region based on the result of detection by the blood vessel region detection section 21.
Since a blood vessel is within tissue, the image of the blood vessel may be low in contrast depending on its thickness, depth, and position in the tissue. When the color attenuation process is performed on the regions other than the yellow region, the already low contrast of the blood vessel may become even lower. In this respect, according to the present embodiment, the attenuation process on the blood vessel region can be suppressed or stopped, which makes it possible to suppress or prevent a decrease in the contrast of the blood vessel region.
The structural information of the captured image here refers to information extracted on the structure of the blood vessel. For example, the structural information refers to the edge quantity of the image. The edge quantity refers to an edge quantity extracted by performing the high-pass filter process or the bandpass filter process on the image, for example. The blood vessel region refers to a region in the captured image where a blood vessel is estimated to exist. Specifically, the blood vessel region is a region that has the spectral characteristics (colors) of hemoglobin (HbO2, Hb) and structural information (for example, an edge quantity). As described above, the blood vessel region is a kind of blood region.
In the present embodiment, the visibility enhancement section 18 may enhance the structure of the blood vessel region in the captured image based on the result of detection by the blood vessel region detection section 21, and perform the attenuation process on the captured image after enhancement.
For example, the visibility enhancement section 18 may perform the structural enhancement and the attenuation process on the blood vessel region without suppressing or stopping the attenuation process on the blood region (blood vessel region). Alternatively, the visibility enhancement section 18 may suppress or stop the attenuation process on the blood region (blood vessel region) and perform the structural enhancement and attenuation processes on the blood vessel region.
Here, the process of enhancing the structure of the blood vessel region can be implemented by adding the edge quantity (edge image) extracted from an image to the captured image, for example. The structural enhancement is not limited to this.
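As a sketch of this edge-addition form of structural enhancement, the following adds a weighted edge image back onto the captured image; the `strength` factor and the restriction to the detected vessel region are assumptions made for illustration.

```python
import numpy as np

def enhance_vessel_structure(image, edge_image, vessel_mask, strength=0.5):
    # Add the extracted edge quantity (edge image) back to the captured image,
    # here only within the detected vessel region, then clip to valid range.
    enhanced = image.astype(float).copy()
    enhanced[vessel_mask] += strength * edge_image[vessel_mask]
    return np.clip(enhanced, 0.0, 1.0)
```

The attenuation process described above would then be applied to the image returned by this step.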
Accordingly, the contrast of the blood vessel can be improved by the structural enhancement, and the color attenuation process is then performed on the regions other than the yellow region in the captured image whose blood vessel contrast has been improved. This makes it possible to suppress or prevent a decrease in the contrast of the blood vessel region.
3. Modifications
As illustrated in
The light from the light emitting diodes 31a, 31b, 31c, and 31d enters an illumination optical system 7 (light guide cable) by means of the mirror 32 and the three dichroic mirrors 33. The light emitting diodes 31a, 31b, 31c, and 31d emit light at the same time such that white light is applied to the subject. An image sensor 10 is a single-plate color image sensor, for example. The wavelength bands of 400 to 500 nm of the light emitting diodes 31a and 31b correspond to the wavelength band of blue, the wavelength band of 520 to 570 nm of the light emitting diode 31c corresponds to the wavelength band of green, and the wavelength band of 600 to 650 nm of the light emitting diode 31d corresponds to the wavelength band of red.
The configurations of the light emitting diodes and their wavelength bands are not limited to the foregoing ones. That is, the light source section 3 is merely required to include one or more light emitting diodes such that the one or more light emitting diodes emit light to generate white light. The wavelength bands of the light emitting diodes may be set arbitrarily as long as the light emitted from the one or more light emitting diodes covers the wavelength band of white light as a whole. For example, the light emission from the one or more light emitting diodes only needs to cover the wavelength bands of red, green, and blue.
As illustrated in
White light emitted from the xenon lamp 11 passes through the filters B2, G2, and R2 of the rotating filter turret 12 in sequence, and the illumination light of blue B2, green G2, and red R2 is applied to the subject in a time-division manner.
The control section 17 synchronizes the timing for capturing by the image sensor 27, the rotation of the filter turret 12, and the timing for image processing by the image processing section 16. The memory 28 stores the image signals acquired by the image sensor 27 in each of the wavelengths of the emitted illumination light. The image processing section 16 combines the image signals in the individual wavelengths stored in the memory 28 to generate a color image.
Specifically, when the illumination light of blue B2 is applied to the subject, the image sensor 27 captures an image and stores the image as a blue image (B channel) in the memory 28. When the illumination light of green G2 is applied to the subject, the image sensor 27 captures an image and stores the image as a green image (G channel) in the memory 28. When the illumination light of red R2 is applied to the subject, the image sensor 27 captures an image and stores the image as a red image (R channel) in the memory 28. Then, when the images corresponding to the illumination light of three colors are acquired, these images are sent from the memory 28 to the image processing section 16. The image processing section 16 performs image processing at the preprocessing section 14 and combines the images corresponding to the illumination light of three colors to acquire one RGB color image. Thus, the image of normal light (white light image) is acquired and output as the captured image to the visibility enhancement section 18.
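The store-then-combine behavior of the memory 28 and the image processing section 16 can be sketched as follows; the class name and its interface are hypothetical, and the sketch only models the channel bookkeeping, not the preprocessing.

```python
import numpy as np

class FrameSequentialMemory:
    # Minimal model of the frame-sequential scheme: the image sensor captures
    # one channel per illumination color (B2, G2, R2), and an RGB color image
    # is assembled only once all three channels have been stored.
    def __init__(self):
        self.channels = {}

    def store(self, color, frame):
        # color is one of "B", "G", "R"; frame is a 2-D intensity image.
        self.channels[color] = np.asarray(frame, dtype=float)

    def combine(self):
        # Returns an H x W x 3 RGB image, or None until all channels arrive.
        if not all(c in self.channels for c in ("R", "G", "B")):
            return None
        return np.stack([self.channels["R"], self.channels["G"],
                         self.channels["B"]], axis=-1)
```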
The color separation prism 34 separates the reflected light from the subject into the wavelength bands of blue, green, and red according to transmittance characteristics illustrated in
4. Notification Process
Specifically, when the detection section 19 detects the blood region, the notification processing section 25 performs the notification process of notifying the user of the detection of the blood region. For example, the notification processing section 25 superimposes an alert indication on a display image and outputs the display image to the image display section 6. For example, the display image includes a region where the captured image is displayed and its peripheral region where the alert indication is displayed. The alert indication is a blinking icon or the like, for example.
Alternatively, the notification processing section 25 performs the notification process of notifying the user that the blood vessel region exists near a treatment tool based on positional relationship information (for example, distance) indicating the positional relationship between the treatment tool and the blood vessel region. The notification process is a process of displaying an alert indication similar to the one described above, for example.
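A minimal sketch of such a distance-based check follows, assuming the treatment tool position is available as pixel coordinates and using a simple nearest-pixel Euclidean distance; the `alert_distance` threshold is an assumption for illustration.

```python
import numpy as np

def tool_near_vessel(tool_xy, vessel_mask, alert_distance=20.0):
    # Returns True when the treatment tool position (x, y) in pixel
    # coordinates lies within alert_distance pixels of any pixel in the
    # detected blood vessel region.
    ys, xs = np.nonzero(vessel_mask)
    if ys.size == 0:
        return False
    d = np.hypot(xs - tool_xy[0], ys - tool_xy[1])
    return bool(d.min() <= alert_distance)
```

When this check returns True, the notification processing section would trigger the alert indication described above.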
The notification process is not limited to a process of displaying an alert and may be a process of highlighting the blood region (blood vessel region) or a process of displaying characters (text or the like) for attracting attention. Alternatively, the notification process is not limited to notification by image display and may be notification by light, sound, or vibration. In that case, the notification processing section 25 may be provided as a constituent element separate from the image processing section 16. Furthermore, the notification process is not limited to a process of notification to the user and may be a process of notification to a device (for example, a robot in a surgery support system described later). For example, an alert signal may be output to the device.
As described above, the visibility enhancement section 18 suppresses the process of attenuating the colors other than yellow in the blood region (blood vessel region). Even so, there is a possibility that the chroma of the blood region becomes lower than in a case where the process of attenuating the colors other than yellow is not performed at all. According to the present embodiment, it is possible, based on the detection result of the blood region (blood vessel region), to perform a process of notifying the user of the existence of blood in the captured image or a process of notifying the user that the treatment tool has approached the blood vessel.
5. Surgery Support System
The endoscope apparatus (endoscope system) according to the present embodiment is assumed to be a type in which a control device is connected to an insertion section (scope) so that the user operates the scope to capture the inside of a body as illustrated in
Although the embodiments to which the present disclosure is applied and the modifications thereof have been described in detail above, the present disclosure is not limited to the embodiments and the modifications thereof, and various modifications and variations in components may be made in implementation without departing from the spirit and scope of the present disclosure. The plurality of elements disclosed in the embodiments and the modifications described above may be combined as appropriate to implement the present disclosure in various ways. For example, some of the elements described in the embodiments and the modifications may be deleted. Furthermore, elements in different embodiments and modifications may be combined as appropriate. Thus, various modifications and applications can be made without departing from the spirit and scope of the present disclosure. Any term cited with a different term having a broader meaning or the same meaning at least once in the specification and the drawings can be replaced by the different term in any place in the specification and the drawings.
This application is a continuation of International Patent Application No. PCT/JP2017/022795, having an international filing date of Jun. 21, 2017, which designated the United States, the entirety of which is incorporated herein by reference.
| Number | Date | Country
---|---|---|---
Parent | PCT/JP2017/022795 | Jun 2017 | US
Child | 16718464 | | US