The present invention relates to an image processing device, an endoscope system, an information storage device, an image processing method, and the like.
In the fields of endoscopes, microscopes, and the like, a special light image obtained using special light having specific spectral characteristics has been used in addition to a normal light image obtained using normal white light.
For example, JP-A-63-122421 discloses an endoscope apparatus that alternately acquires a normal light image obtained using normal white light and a fluorescent image obtained using given excitation light from an object to which a fluorescent substance has been administered, and stores the normal light image and the fluorescent image in a storage device to simultaneously display the normal light image and the fluorescent image. According to the technology disclosed in JP-A-63-122421, it is possible to improve the capability to specify an attention area (e.g., lesion area) within the normal light image.
JP-A-2004-321244 discloses an endoscope apparatus that alternately acquires a normal light image obtained using normal white light and a special light image obtained using special light having a specific wavelength, stores the normal light image and the special light image in a storage device, subjects the normal light image and the special light image to different image processing, and displays the normal light image and the special light image either independently or in a blended state. According to the technology disclosed in JP-A-2004-321244, it is possible to obtain an optimum normal light image and special light image, and improve the capability to specify an attention area (e.g., lesion area) within the normal light image.
According to one aspect of the invention, there is provided an image processing device comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
a display state setting section that performs a display state setting process that sets a display state of a display image generated based on the first image; and
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area,
the display state setting section performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting section.
Another aspect of the invention relates to an information storage device storing a program that causes a computer to function as each section described above.
According to another aspect of the invention, there is provided an image processing device comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
an alert information output section that outputs alert information about the attention area detected by the attention area detection section; and
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area,
the alert information output section outputting the alert information until the designated elapsed time set by the designated elapsed time setting section elapses.
Another aspect of the invention relates to an information storage device storing a program that causes a computer to function as each section described above.
According to another aspect of the invention, there is provided an endoscope system comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
a display state setting section that performs a display state setting process that sets a display state of a display image generated based on the first image;
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
a display section that displays the display image,
the display state setting section performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting section.
According to another aspect of the invention, there is provided an image processing method comprising:
acquiring a first image, the first image being an image that has information within a wavelength band of white light;
acquiring a second image, the second image being an image that has information within a specific wavelength band;
detecting an attention area within the second image based on a feature quantity of each pixel within the second image;
performing a display state setting process that sets a display state of a display image generated based on the first image;
performing a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting process.
According to another aspect of the invention, there is provided an image processing method comprising:
acquiring a first image, the first image being an image that has information within a wavelength band of white light;
acquiring a second image, the second image being an image that has information within a specific wavelength band;
detecting an attention area within the second image based on a feature quantity of each pixel within the second image;
outputting alert information about the attention area that has been detected;
performing a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
outputting the alert information until the designated elapsed time set by the designated elapsed time setting process elapses.
According to one embodiment of the invention, there is provided an image processing device comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
a display state setting section that performs a display state setting process that sets a display state of a display image generated based on the first image; and
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area,
the display state setting section performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting section.
Another embodiment of the invention relates to an information storage device storing a program that causes a computer to function as each section described above, or a computer-readable information storage medium that stores the program.
According to one embodiment of the invention, the first image and the second image are acquired, and the attention area is detected within the second image. The designated elapsed time setting process is performed based on the detection result for the attention area, and the display state setting process is performed based on the designated elapsed time. According to the configuration, since the designated elapsed time is set when the attention area has been detected, and the display state setting process that reflects the designated elapsed time is performed, it is possible to provide an image processing device that can prevent a situation in which the attention area is missed, and allows the user to reliably specify the attention area.
According to another embodiment of the invention, there is provided an image processing device comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
an alert information output section that outputs alert information about the attention area detected by the attention area detection section; and
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area,
the alert information output section outputting the alert information until the designated elapsed time set by the designated elapsed time setting section elapses.
Another embodiment of the invention relates to an information storage device storing a program that causes a computer to function as each section described above, or a computer-readable information storage medium that stores the program.
According to the above embodiment of the invention, the first image and the second image are acquired, and the attention area is detected within the second image. The designated elapsed time setting process is performed based on the detection result for the attention area, and the alert information is output until the designated elapsed time elapses. It is possible to provide an image processing device that can prevent a situation in which the attention area is missed, and allows the user to reliably specify the attention area by thus outputting the alert information until the designated elapsed time elapses.
According to another embodiment of the invention, there is provided an endoscope system comprising:
a first image acquisition section that acquires a first image, the first image being an image that has information within a wavelength band of white light;
a second image acquisition section that acquires a second image, the second image being an image that has information within a specific wavelength band;
an attention area detection section that detects an attention area within the second image based on a feature quantity of each pixel within the second image;
a display state setting section that performs a display state setting process that sets a display state of a display image generated based on the first image;
a designated elapsed time setting section that performs a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
a display section that displays the display image,
the display state setting section performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting section.
According to another embodiment of the invention, there is provided an image processing method comprising:
acquiring a first image, the first image being an image that has information within a wavelength band of white light;
acquiring a second image, the second image being an image that has information within a specific wavelength band;
detecting an attention area within the second image based on a feature quantity of each pixel within the second image;
performing a display state setting process that sets a display state of a display image generated based on the first image;
performing a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
performing the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting process.
According to another embodiment of the invention, there is provided an image processing method comprising:
acquiring a first image, the first image being an image that has information within a wavelength band of white light;
acquiring a second image, the second image being an image that has information within a specific wavelength band;
detecting an attention area within the second image based on a feature quantity of each pixel within the second image;
outputting alert information about the attention area that has been detected;
performing a designated elapsed time setting process that sets a designated elapsed time based on a detection result for the attention area; and
outputting the alert information until the designated elapsed time set by the designated elapsed time setting process elapses.
Exemplary embodiments of the invention are described below. Note that the following exemplary embodiments do not in any way limit the scope of the invention laid out in the claims. Note also that all elements of the following exemplary embodiments should not necessarily be taken as essential elements of the invention.
An outline of several embodiments of the invention is described below.
A method that simultaneously displays the normal light image and the special light image (see JP-A-63-122421 and JP-A-2004-321244) may be employed so that the user can specify an attention area within the normal light image.
However, even when employing such a method, the doctor may miss the attention area (e.g., lesion area) when the doctor is paying attention to manipulation of the equipment.
Several embodiments of the invention employ the following method in order to prevent a situation in which an attention area is missed, and make it possible to reliably specify an attention area.
Specifically, a normal light image (first image in a broad sense) and a special light image (second image in a broad sense) are acquired (see A1 and A2).
When an attention area has been detected within the special light image (see A2), the designated elapsed time is set, and the display state of the display image generated based on the normal light image is changed until the designated elapsed time elapses.
When an attention area (e.g., lesion area) has been detected within the special light image IMS1 (see B1), the designated elapsed time that starts from the detection timing of the attention area is set (see B2), and the normal light images IMN2 to IMN6 acquired within the designated elapsed time are subjected to the display state change process.
When the attention area has been detected within the special light image IMS2 (see B3) acquired within the designated elapsed time (see B2), the designated elapsed time is reset (see B4). More specifically, a new designated elapsed time that starts from the detection timing of the attention area indicated by B3 is set to extend the designated elapsed time. Therefore, the normal light image IMN7 is also subjected to the display state change process in addition to the normal light images IMN2 to IMN6, and the normal light images IMN2 to IMN7 for which the display state is changed are displayed as the display image. Since the attention area has not been detected within the special light image IMS3 (see B5), the designated elapsed time is not reset, and the normal light image IMN8 (see B6) is not subjected to the display state change process (i.e., the display state of the normal light image IMN8 is not changed).
According to the above method, the normal light image (first image) and the special light image (second image) are acquired, and an attention area is detected based on the feature quantity of each pixel of the special light image, for example. The designated elapsed time setting process that sets the designated elapsed time based on the detection result is performed, and the display state setting process that changes the display state of the display image is performed based on the set designated elapsed time. Therefore, the user can determine that the attention area is present near the current observation position even when the attention area is detected only for a short time during observation in a moving state. This makes it possible to prevent a situation in which the attention area is missed, and reliably specify the attention area. In particular, even after the attention area has become undetectable within the special light image (see A5), the display state of the display image is changed until the designated elapsed time elapses, so that the user does not miss the attention area.
According to several embodiments of the invention, the normal light image and the special light image are acquired every given cycle period so that the normal light image is acquired in a high ratio as compared with the special light image (details thereof are described later). This makes it possible to prevent a decrease in temporal resolution of the normal light image, and obtain a high-quality normal light image. Moreover, since the alert area is set to the peripheral area of the normal light image, and the alert information (alert image) is set (added) to the alert area (see A4), it is possible to prevent a situation in which observation of the normal light image is hindered.
An image signal acquired via a lens system 100 (optical system in a broad sense) and a CCD 101 (image sensor in a broad sense) provided on the end (insertion section) of the endoscope is amplified by a gain amplifier 104, and converted into a digital signal by an A/D conversion section 105. Illumination light emitted from an illumination light source 102 passes through a filter (F1 and F2) attached to a rotary filter 103 provided on the end of the endoscope, and is applied to an object via an optical fiber. The digital image signal output from the A/D conversion section 105 is transmitted to a white balance (WB) section 107, a photometrical evaluation section 108, and a switch section 109 through a buffer 106. The WB section 107 is connected to the gain amplifier 104, and the photometrical evaluation section 108 is connected to the illumination light source 102 and the gain amplifier 104.
The image processing device 90 according to the first configuration example includes a first image acquisition section 110, a second image acquisition section 111, an attention area detection section 112, a designated elapsed time setting section 113, and a display state setting section 114.
The switch section 109 is connected to the first image acquisition section 110 and the second image acquisition section 111. The first image acquisition section 110 is connected to a display section 115 (e.g., liquid crystal display) (output section in a broad sense) through the display state setting section 114. The second image acquisition section 111 is connected to the attention area detection section 112. The attention area detection section 112 is connected to the designated elapsed time setting section 113. The designated elapsed time setting section 113 is connected to the display state setting section 114.
A control section 116 that is implemented by a microcomputer or the like is bidirectionally connected to the rotary filter 103, the gain amplifier 104, the A/D conversion section 105, the WB section 107, the photometrical evaluation section 108, the switch section 109, the first image acquisition section 110, the second image acquisition section 111, the attention area detection section 112, the designated elapsed time setting section 113, the display state setting section 114, and the display section 115. An external I/F (interface) section 117 is bidirectionally connected to the control section 116. The external I/F section 117 includes a power switch, a shutter release button, and an interface for setting (changing) various modes during imaging.
An operation implemented by the first configuration example is described below.
The following description is given taking an example in which the illumination light source 102 is a normal white light source (e.g., xenon lamp), and two filters are attached to the rotary filter 103. The two filters include a normal light image filter F1 and a special light image filter F2.
The normal light image filter F1 and the special light image filter F2 are attached to the rotary filter 103 so that the area ratio of the normal light image filter F1 to the special light image filter F2 in the circumferential direction is 29:1. The rotary filter 103 rotates one revolution per second, for example. In this case, a cycle period (T) is 1 second, a period (T1) in which normal light image illumination light is applied is 29/30th of a second, and a period (T2) in which special light image illumination light is applied is 1/30th of a second. Since the image signal is acquired at intervals of 1/30th of a second, twenty-nine normal light images and one special light image are obtained within one cycle period (1 second). Therefore, the temporal resolution of the normal light image is sufficiently maintained.
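As a rough check of this timing arithmetic, the following Python sketch (illustrative only; the constant names are not from the specification) derives the frame interval and the per-cycle frame counts from the 29:1 area ratio.

```python
# Illustrative timing arithmetic for the 29:1 rotary filter.
CYCLE_PERIOD_S = 1.0                    # T: one revolution of the rotary filter 103
NORMAL_FRAMES, SPECIAL_FRAMES = 29, 1   # area ratio of filter F1 to filter F2

frames_per_cycle = NORMAL_FRAMES + SPECIAL_FRAMES      # 30 frames per cycle
frame_interval_s = CYCLE_PERIOD_S / frames_per_cycle   # 1/30 s per image signal
t1 = frame_interval_s * NORMAL_FRAMES                  # 29/30 s of normal light (T1)
t2 = frame_interval_s * SPECIAL_FRAMES                 # 1/30 s of special light (T2)

# One cycle period therefore yields 29 normal light images and 1 special light image.
schedule = ["normal"] * NORMAL_FRAMES + ["special"] * SPECIAL_FRAMES
assert abs(t1 + t2 - CYCLE_PERIOD_S) < 1e-9
```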
The normal light images and the special light image are thus acquired in each cycle period. The length of the designated elapsed time is set using the method described later in connection with the designated elapsed time setting section 113.
The rotary filter 103 provided on the end of the endoscope is rotated in synchronization with the imaging operation of the CCD 101 under control of the control section 116. The analog image signal obtained by the imaging operation is amplified by the gain amplifier 104 by a given amount, converted into a digital signal by the A/D conversion section 105, and transmitted to the buffer 106. The buffer 106 can store (record) data of one normal light image or one special light image, and the stored image is overwritten with each new image acquired by the imaging operation.
The normal light image stored in the buffer 106 is intermittently transmitted to the WB section 107 and the photometrical evaluation section 108 at given time intervals under control of the control section 116. The WB section 107 integrates each color signal that corresponds to the color filter to calculate a white balance coefficient. The WB section 107 transmits the white balance coefficient to the gain amplifier 104. The gain amplifier 104 multiplies each color signal by a different gain to implement a white balance adjustment. The photometrical evaluation section 108 controls the intensity of light emitted from the illumination light source 102, the amplification factor of the gain amplifier 104, and the like so that a correct exposure is achieved.
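A minimal sketch of this kind of white balance coefficient calculation is shown below, assuming a gray-world style integration (averaging) of each color signal; the specification does not fix the integration method, and the function name is illustrative.

```python
import numpy as np

def white_balance_coefficients(image):
    """image: float array of shape (H, W, 3); returns per-channel gains (R, G, B)."""
    means = image.reshape(-1, 3).mean(axis=0)  # integrate (average) each color signal
    reference = means[1]                       # normalize against the G signal
    return reference / means                   # gain by which each color signal is multiplied
```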
The switch section 109 transmits the normal light image stored in the buffer 106 to the first image acquisition section 110, or transmits the special light image stored in the buffer 106 to the second image acquisition section 111 under control of the control section 116.
The first image acquisition section 110 reads the normal light image from the switch section 109, performs an interpolation process, a grayscale process, and the like on the normal light image, and transmits the resulting normal light image to the display state setting section 114 under control of the control section 116.
The second image acquisition section 111 reads the special light image from the switch section 109, and performs an interpolation process, a grayscale process, and the like on the special light image under control of the control section 116. The second image acquisition section 111 also performs a process that generates a pseudo-color image from the B signal that corresponds to the narrow band of blue light and the G signal that corresponds to the narrow band of green light (see JP-A-2002-95635). The resulting special light image is transmitted to the attention area detection section 112.
The attention area detection section 112 reads the special light image from the second image acquisition section 111, and performs a process that detects a given attention area (e.g., a lesion area in which blood vessels are densely present) under control of the control section 116. The detection result is transmitted to the designated elapsed time setting section 113.
The designated elapsed time setting section 113 reads the attention area detection result from the attention area detection section 112, and determines the designated elapsed time under control of the control section 116, the designated elapsed time being a period of time in which the alert information that indicates information about the detection result is set to the normal light image.
In one embodiment of the invention, the designated elapsed time is set to 5 seconds (five cycle periods) when an attention area has been detected, and the alert information is set for at least 5 seconds even if the attention area has become undetectable, for example. When the attention area has been detected within the designated elapsed time, the designated elapsed time (5 seconds) is reset to start from the detection timing (see B4).
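A minimal sketch of this 5-second behavior is shown below, assuming a wall-clock implementation; the class and method names are illustrative, and the concrete embodiment described later actually counts cycle periods instead of seconds.

```python
import time

class DesignatedElapsedTime:
    """Wall-clock sketch of the 5-second designated elapsed time (illustrative)."""

    DURATION_S = 5.0  # five cycle periods of 1 second each

    def __init__(self):
        self._deadline = None

    def on_detection(self, now=None):
        # (Re)start the designated elapsed time from the detection timing;
        # re-detection within the period extends it (see B4).
        now = time.monotonic() if now is None else now
        self._deadline = now + self.DURATION_S

    def is_active(self, now=None):
        # True while the alert information should remain set to the display image.
        now = time.monotonic() if now is None else now
        return self._deadline is not None and now < self._deadline
```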
The display state setting section 114 (display state determination section or display control section) reads the determination result as to whether or not the current time point is within the designated elapsed time from the designated elapsed time setting section 113, and selects a process that sets (adds or superimposes) the alert information (alert area) to the normal light image when the current time point is within the designated elapsed time under control of the control section 116. The display state setting section 114 does not perform a process when the current time point is not within the designated elapsed time. Note that the alert information is set to the normal light image acquired after the special light image within which the attention area has been detected. For example, when an attention area has been detected within the special light image IMS1 (see B1), the alert information is set to the normal light image IMN2 and the subsequent normal light images.
The display image generated by the display state setting section 114 is transmitted to the display section 115, and sequentially displayed on the display section 115. The normal light image to which the alert information is set is transmitted as the display image when it has been selected to set the alert information (change the display state), and the normal light image is directly transmitted as the display image when it has been selected not to set the alert information (not to change the display state).
According to one embodiment of the invention, the first image acquisition section 110 acquires the normal light image corresponding to at least one frame (K frames) every cycle period (T), and the second image acquisition section 111 acquires the special light image corresponding to at least one frame (L frames) every cycle period.
When an attention area has been detected within the special light image in a cycle period TN (Nth cycle period) (see C1), the display state change process is performed on the display image in the subsequent cycle periods.
When an attention area has been detected within the special light image in the cycle period TN (see C1), and the attention area has not been detected within the special light image in the cycle period TN+1 (see C2), the display state change process is nevertheless performed at least until the designated elapsed time elapses (see C3).
When an attention area has been detected within the special light image in the cycle period TN (see C4), and the attention area has also been detected within the special light image in a subsequent cycle period, a new designated elapsed time is set so that the designated elapsed time is extended.
The attention area detection section 112 includes a buffer 200, a hue/chroma calculation section 201, an attention area determination section 202, a threshold value ROM 203, a buffer 204, and a reliability calculation section 205.
The second image acquisition section 111 transmits the special light image (pseudo-color image) to the buffer 200. The hue/chroma calculation section 201 reads the special light image (pseudo-color image) from the buffer 200 under control of the control section 116. The special light image (pseudo-color image) is expressed using an R signal, a G signal, and a B signal. The R signal, the G signal, and the B signal are converted into a luminance signal Y and color difference signals Cb and Cr using the following expressions (1) to (3), for example.
Y=0.29900R+0.58700G+0.11400B (1)
Cb=−0.16875R−0.33126G+0.50000B (2)
Cr=0.50000R−0.41869G−0.08131B (3)
The hue H and the chroma C are calculated using the following expressions (4) and (5).
H=tan^(−1)(Cb/Cr) (4)
C=(Cb·Cb+Cr·Cr)^(1/2) (5)
The hue H and the chroma C thus calculated are sequentially transmitted to the attention area determination section 202 on a pixel basis. The attention area determination section 202 reads the hue H and the chroma C from the hue/chroma calculation section 201, and reads hue threshold values and chroma threshold values from the threshold value ROM 203 under control of the control section 116.
−70°<hue H<30° (6)
16<chroma C<128 (7)
The upper-limit value and the lower-limit value of the hue H and the upper-limit value and the lower-limit value of the chroma C (see the expressions (6) and (7)) are stored in the threshold value ROM 203. The attention area determination section 202 reads these four threshold values. The attention area determination section 202 outputs a label value “1” to the buffer 204 corresponding to a pixel that satisfies the expressions (6) and (7), and outputs a label value “0” to the buffer 204 corresponding to a pixel that does not satisfy the expressions (6) and (7). The label value that indicates whether or not each pixel of the special light image belongs to the attention area is thus stored in the buffer 204.
An area determination section 206 included in the reliability calculation section 205 reads the label values from the buffer 204, and calculates the total number of pixels that belong to the attention area to calculate the area of the attention area under control of the control section 116. In one embodiment of the invention, the area of the attention area is used as the reliability, which is an index that indicates the likelihood that the attention area is a lesion. Specifically, the reliability is calculated based on the area of the attention area. The attention area is determined to have high reliability when the calculated area of the attention area exceeds a given threshold value (i.e., it is determined that an attention area has been detected). For example, it is determined that an attention area has been detected when the calculated area of the attention area exceeds 1% (i.e., threshold value) of the area of the entire image. The attention area is determined to have low reliability when the calculated area of the attention area is equal to or less than the given threshold value (i.e., it is determined that the attention area has not been detected). Detection information that indicates the attention area detection result is transmitted to the designated elapsed time setting section 113.
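A minimal Python sketch of this detection pipeline, implementing expressions (2) to (7) and the 1% area check with NumPy, is shown below; the function name and the 8-bit signal range are assumptions.

```python
import numpy as np

def detect_attention_area(rgb):
    """rgb: float array of shape (H, W, 3), 8-bit signal range (0-255) assumed."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = -0.16875 * r - 0.33126 * g + 0.50000 * b   # expression (2)
    cr = 0.50000 * r - 0.41869 * g - 0.08131 * b    # expression (3)
    hue = np.degrees(np.arctan2(cb, cr))            # expression (4): tan(H) = Cb/Cr
    chroma = np.hypot(cb, cr)                       # expression (5)

    # Label value "1" for pixels that satisfy expressions (6) and (7).
    labels = (hue > -70) & (hue < 30) & (chroma > 16) & (chroma < 128)

    # Reliability check: the attention area counts as detected only when it
    # exceeds 1% of the entire image, which suppresses small noise regions.
    detected = labels.mean() > 0.01
    return labels, detected
```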
The designated elapsed time setting section 113 includes an update section 300, a detection information recording section 301, and a control information output section 302.
When the image processing device has been initialized upon power on, for example, the control section 116 initializes the detection information recording section 301 to set the value of the detection information to an initial value “0”. The update section 300 reads the detection information that indicates whether or not an attention area has been detected within the special light image from the attention area detection section 112 under control of the control section 116.
The update section 300 outputs a given value (e.g., “5”) to the detection information recording section 301 as the detection information when an attention area has been detected. The update section 300 decrements the value of the detection information recorded in the detection information recording section 301 by one when an attention area has not been detected. When the value of the detection information has become a negative number, the value of the detection information is set to 0.
The control information output section 302 reads the value of the detection information recorded in the detection information recording section 301 under control of the control section 116. When the value of the detection information is equal to or larger than 1, the control information output section 302 outputs control information that instructs the display state setting section 114 to change the display state (set the alert information) to the display state setting section 114. When the value of the detection information is 0, the control information output section 302 outputs control information that instructs the display state setting section 114 not to change the display state (not to set the alert information) to the display state setting section 114.
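A minimal Python sketch of this update and control logic is shown below; the class and attribute names are illustrative, not from the specification.

```python
class DesignatedElapsedTimeSetter:
    """Cycle-period counter sketch of sections 300 to 302 (names illustrative)."""

    FIRST_VALUE = 5    # VD1: corresponds to the 5-second designated elapsed time
    SECOND_VALUE = 0   # VD2: the designated elapsed time has elapsed

    def __init__(self):
        self.detection_info = 0  # initialized to "0" at power on

    def update(self, attention_area_detected):
        # Update process performed once per cycle period by the update section 300.
        if attention_area_detected:
            self.detection_info = self.FIRST_VALUE  # set (or reset) to VD1
        else:
            # Decrement by one; negative values are clamped to 0.
            self.detection_info = max(self.detection_info - 1, self.SECOND_VALUE)

    def control_info(self):
        # True instructs the display state setting section 114 to change the
        # display state (value of the detection information >= 1).
        return self.detection_info >= 1
```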
In one embodiment of the invention, the length of the cycle period (T) is 1 second, and one special light image is acquired in each cycle period.
For example, when an attention area has been detected within the special light image in the cycle period TN (see D1), the update section 300 sets the value VD of the detection information to the first value VD1 (=5).
When the attention area has not been detected within the special light image in the cycle periods TN+1 and TN+2 (see D4 and D5), the update section 300 decrements the value VD of the detection information by one in each cycle period.
The control information output section 302 outputs the control information that instructs the display state setting section 114 to change the display state to the display state setting section 114 until the value VD of the detection information reaches the second value VD2 (=0). When the value VD of the detection information has reached the second value VD2 (=0) (see D8), the control information output section 302 outputs the control information that instructs the display state setting section 114 not to change the display state.
When the attention area has also been detected within the special light image in the cycle period TN+1 (see D10), the update section 300 resets the value VD of the detection information to the first value VD1 (=5), so that the designated elapsed time is extended.
According to one embodiment of the invention, the designated elapsed time setting process is thus implemented by the method described above.
The display state setting section 114 includes a selection section 401, an alert information addition section 402, and a buffer 410.
The normal light image output from the first image acquisition section 110 is transmitted to and stored (recorded) in the buffer 410. The selection section 401 reads the control information that instructs the display state setting section 114 to change or not to change the display state from the designated elapsed time setting section 113 under control of the control section 116. The selection section 401 also reads the normal light image from the buffer 410 under control of the control section 116. When the control information that instructs the display state setting section 114 to change the display state (add the alert information or superimpose the alert area) has been read from the designated elapsed time setting section 113, the selection section 401 transmits the normal light image read from the buffer 410 to the alert information addition section 402. When the control information that instructs the display state setting section 114 not to change the display state has been read from the designated elapsed time setting section 113, the selection section 401 transmits the normal light image read from the buffer 410 to the display section 115.
The alert information addition section 402 adds the alert information to the normal light image transmitted from the selection section 401 under control of the control section 116.
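A minimal sketch of the alert information addition is shown below, assuming the alert information takes the form of a colored frame superimposed on the peripheral area of the normal light image; the exact rendering and the function name are assumptions, not fixed by the specification.

```python
import numpy as np

def add_alert_information(image, border=16, color=(255, 0, 0)):
    """image: uint8 array of shape (H, W, 3); returns a copy with an alert frame
    superimposed on the peripheral area (rendering is an assumption)."""
    out = image.copy()
    out[:border, :] = color   # top edge of the peripheral area
    out[-border:, :] = color  # bottom edge
    out[:, :border] = color   # left edge
    out[:, -border:] = color  # right edge
    return out
```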
Although an example in which each section of the image processing device 90 is implemented by hardware has been described above, the configuration is not limited thereto. For example, a CPU may perform the process of each section on an image acquired using an imaging device such as a capsule endoscope. Specifically, the process of each section may be implemented by software by causing the CPU to execute a program. Alternatively, part of the process of each section may be implemented by software.
When implementing the process of each section of the image processing device 90 by software, a known computer system (e.g., work station or personal computer) may be used as the image processing device. A program (image processing program) that implements the process of each section of the image processing device 90 may be provided in advance, and executed by the CPU of the computer system.
The computer system 600 that implements the image processing device includes a LAN interface 618 and the like.
The computer system 600 is connected to a modem 650 that is used to connect to a public line N3 (e.g., Internet). The computer system 600 is also connected to a personal computer (PC) 681 (i.e., another computer system), a server 682, a printer 683, and the like via the LAN interface 618 and the local area network or the wide area network N1.
The computer system 600 implements the functions of the image processing device by reading an image processing program (e.g., an image processing program that implements a process described below with reference to the flowcharts) recorded in a given recording medium, and executing the image processing program.
Specifically, the image processing program is recorded in a recording medium (e.g., portable physical medium, stationary physical medium, or communication medium) so that the image processing program can be read by a computer. The computer system 600 implements the functions of the image processing device by reading the image processing program from such a recording medium, and executing the image processing program. Note that the image processing program need not necessarily be executed by the computer system 600. The invention may be similarly applied to the case where the computer system (PC) 681 or the server 682 executes the image processing program, or the computer system (PC) 681 and the server 682 execute the image processing program in cooperation.
A process performed when implementing the process of the image processing device 90 by software using an image acquired in advance is described below using the flowcharts.
The normal light image/special light image switch process is then performed (step S2). A first image acquisition process (step S3) is performed when the normal light image has been input, and a second image acquisition process (step S4) is performed when the special light image has been input. In the first image acquisition process (step S3), an interpolation process, a grayscale process, and the like are performed on the normal light image (first image).
In the second image acquisition process (step S4), an interpolation process, a grayscale process, and the like are performed on the special light image (second image), and a pseudo-color image is generated. The attention area detection process is then performed (step S5). The designated elapsed time in which the alert information (alert area) that indicates information about the detection result is set to (superimposed on) the normal light image is then set (determined) (step S6).
In the display state setting process (step S7), the alert information is set to the normal light image when the current time point is within the designated elapsed time, and the alert information is not set when the current time point is not within the designated elapsed time. The display image thus generated is output (step S8). Whether or not all of the image signals have been processed is then determined. The process is performed again from the step S2 when all of the image signals have not been processed. The process ends when all of the image signals have been processed.
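Assembled from the sketches above, a minimal Python outline of steps S2 to S8 might look as follows; the acquisition and output helpers and the `kind`/`image` attributes are hypothetical stand-ins, not names from the specification.

```python
# Hypothetical stand-ins for the acquisition and output sections (steps S3, S4, S8).
def first_image_acquisition(signal):
    return signal.image       # interpolation and grayscale processes omitted

def second_image_acquisition(signal):
    return signal.image       # pseudo-color generation omitted

def output_display_image(display):
    pass                      # transmission to the display section omitted

def process_stream(image_signals):
    designated = DesignatedElapsedTimeSetter()           # counter sketch from above
    for signal in image_signals:
        if signal.kind == "normal":                      # step S2: switch process
            normal = first_image_acquisition(signal)     # step S3
            if designated.control_info():                # step S7: display state setting
                display = add_alert_information(normal)
            else:
                display = normal
            output_display_image(display)                # step S8
        else:
            special = second_image_acquisition(signal)   # step S4
            _, detected = detect_attention_area(special) # step S5
            designated.update(detected)                  # step S6
```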
According to one embodiment of the invention, the image processing device 90 includes the first image acquisition section 110, the second image acquisition section 111, the attention area detection section 112, the designated elapsed time setting section 113, and the display state setting section 114.
The first image acquisition section 110 acquires the first image (normal light image in a narrow sense), the first image being an image that has information (signal) within the wavelength band of white light. The second image acquisition section 111 acquires the second image (special light image in a narrow sense), the second image being an image that has information (signal) within a specific wavelength band (the wavelength band of narrow-band light, fluorescence, or the like in a narrow sense). The attention area detection section 112 detects an attention area within the second image based on the feature quantity (hue, chroma, luminance, or the like in a narrow sense) of each pixel within the second image. The display state setting section 114 performs the display state setting process that sets (determines or changes) the display state of the display image (i.e., an image displayed on the display section 115) generated based on the first image, and the designated elapsed time setting section 113 performs the designated elapsed time setting process that sets (determines or changes) the designated elapsed time based on the attention area detection result (detection information) of the attention area detection section 112.
The display state setting section 114 performs the display state setting process based on the designated elapsed time that has been set by the designated elapsed time setting section 113. For example, the display state setting section 114 performs the display state setting process that changes the display state of the display image until the designated elapsed time elapses.
According to the above configuration, the user can determine that the attention area is present near the current observation position even when the attention area is detected only for a short time during observation in a moving state. This makes it possible to prevent a situation in which the attention area is missed, and reliably specify the attention area. Since the alert information is set to the peripheral area or the like of the first image, it is possible to prevent a situation in which observation of the first image is hindered, for example.
It suffices that the display image be generated using at least the first image. The display image may be an image obtained by blending the first image and the second image, for example. The display state of the display image may be changed by a method other than the alert information addition method.
The term “attention area” used herein refers to an area for which the observation priority for the user is relatively higher than that of other areas. For example, when the user is a doctor, and desires to perform treatment, the attention area refers to an area that includes a mucosal area or a lesion area. If the doctor desires to observe bubbles or feces, the attention area refers to an area that includes a bubble area or a feces area. Specifically, the attention area for the user differs depending on the objective of observation, but necessarily has an observation priority relatively higher than that of other areas. The attention area can be detected using the feature quantity (e.g., hue or chroma) of each pixel of the second image (see the expressions (1) to (7)). For example, the threshold value of the feature quantity (see the expressions (6) and (7)) differs depending on the type of attention area. For example, the threshold value of the feature quantity (e.g., hue or chroma) of a first-type attention area differs from the threshold value of the feature quantity of a second-type attention area. When the type of attention area has changed, it suffices to change the threshold value of the feature quantity, and the process (e.g., designated elapsed time setting process and display state setting process) performed after detection of the attention area can be implemented by a process similar to the process described above.
The display state setting section 114 may perform the display state setting process that changes the display state of the display image at least until the designated elapsed time elapses even when the attention area has not been detected within the second image within the designated elapsed time (see A5 and A6).
According to the above configuration, since the display image for which the display state is changed is displayed for a while even after the attention area has become undetectable, it is possible to more effectively prevent a situation in which the user misses the attention area.
The designated elapsed time setting section 113 may set a new designated elapsed time starting from the detection timing of the attention area when the attention area has been detected within the second image within the designated elapsed time (see B3 and B4).
According to the above configuration, since the designated elapsed time is reset (extended) each time the attention area is detected, it is possible to implement an appropriate designated elapsed time setting process corresponding to detection of the attention area. Note that the new designated elapsed time may be set at a timing within the cycle period subsequent to the cycle period in which the attention area has been detected, or may be set at a timing within the cycle period in which the attention area has been detected.
The first image acquisition section 110 may acquire the first image corresponding to at least one frame (i.e., at least one first image) every cycle period (image acquisition period), and the second image acquisition section 111 may acquire the second image corresponding to at least one frame (i.e., at least one second image) every cycle period.
It is possible to notify the user of detection of the attention area by thus performing the attention area detection process in each cycle period, and changing the display state of the display image in the next and subsequent cycle periods when the attention area has been detected. This makes it possible to minimize a delay time when displaying the alert information about the attention area or the like to the user, and more effectively prevent a situation in which the user misses the attention area.
The first image acquisition section 110 may acquire the first images corresponding to K frames (K is a natural number) every cycle period, and the second image acquisition section 111 may acquire the second images corresponding to L frames (L is a natural number) every cycle period. For example, the second image acquisition section 111 acquires the second image corresponding to one frame (L=1) every cycle period. The relationship “K>L” may be satisfied.
It is possible to acquire the normal light image in a high ratio as compared with the special light image by thus acquiring the first images corresponding to K frames and the second images corresponding to L frames every cycle period so that the relationship “K>L” is satisfied. This makes it possible to prevent a decrease in temporal resolution of the normal light image, and obtain a high-quality display image (e.g., moving image).
The relationship “TE>T” may be satisfied when the length of each cycle period is referred to as T, and the length of the designated elapsed time is referred to as TE.
In this case, since the designated elapsed time can be made longer than the cycle period (i.e., attention area detection unit period), a period in which the display state is changed increases. This makes it possible to effectively prevent a situation in which the user misses the attention area.
The display state setting section 114 may perform the display state setting process that changes the display state of the display image in (N+1)th to Mth cycle periods (TN+1 to TN+5; M is an integer that satisfies “M>N+1”) when the attention area has been detected within the second image in the Nth cycle period (TN), and the attention area has not been detected within the second image in the (N+1)th cycle period (TN+1) (see C1, C2, and C3).
In this case, when the attention area has been detected in the (N+1)th cycle period (TN+1), the display state of the display image is changed at least until the Mth cycle period (TN+5) elapses (i.e., the designated elapsed time can be made longer than the cycle period).
The designated elapsed time setting section 113 may include the detection information recording section 301 that records the detection information about the attention area, the update section 300 that performs the update process that updates the detection information based on the detection result output from the attention area detection section 112, and the control information output section 302 that outputs the control information that controls the display state setting process performed by the display state setting section 114 based on the detection information recorded in the detection information recording section 301.
This makes it possible to implement the designated elapsed time setting process based on the detection information about the attention area. Since the detection information about the attention area is recorded (stored) in the detection information recording section 301 when the attention area has been detected, it is possible to set the designated elapsed time by utilizing the detection information stored in the detection information recording section 301 even when the attention area has become undetectable.
The update section 300 may set the value VD of the detection information to the first value VD1 (e.g., VD1=5) that corresponds to the designated elapsed time when the attention area has been detected within the second image, and may update the value VD of the detection information so that the value VD approaches the second value VD2 (e.g., VD2=0) in each cycle period in which the attention area is not detected.
The control information output section 302 may output the control information (control signal or control flag) that instructs the display state setting section 114 to change the display state of the display image until the value VD of the detection information reaches the second value VD2, and may output the control information that instructs the display state setting section 114 not to change the display state of the display image when the value VD of the detection information has reached the second value VD2.
It is possible to efficiently implement the designated elapsed time setting process by thus updating the value VD of the detection information, and controlling the display state setting process based on the updated value.
The update section 300 may reset the value VD of the detection information to the first value VD1 (e.g., VD=VD1=5) when the attention area has been detected within the second image in the cycle period (TN+1, TN+2, ...) subsequent to the Nth cycle period (TN).
This makes it possible to implement the process that sets the designated elapsed time when the attention area has been detected within the designated elapsed time by resetting the value of the detection information. The designated elapsed time can be extended by resetting the value of the detection information until the attention area becomes undetectable.
The display state setting section 114 may include the processing section 400 that processes the first image based on the control information output from the control information output section 302.
It is possible to implement the display state change process based on detection of the attention area and the designated elapsed time by thus causing the control information output section 302 included in the designated elapsed time setting section 113 to output the control information, and causing the processing section 400 included in the display state setting section 114 to process the first image based on the control information. Since it suffices that the processing section 400 process the first image in accordance with the control information output from the designated elapsed time setting section 113, the configuration and the process of the processing section 400 can be simplified.
The attention area detection section 112 may include the reliability calculation section 205 that calculates the reliability, the reliability being an index that indicates the likelihood that the detected attention area is a lesion, and the attention area detection section 112 may detect the attention area based on the calculated reliability.
This makes it possible to improve the accuracy of the attention area (e.g., accuracy when determining a lesion area as an attention area) as compared with a method that does not utilize the reliability. Since a large area is detected as the attention area, and a small area is not detected as the attention area, the effects of noise can be reduced, for example.
Although an example in which the designated elapsed time is set to 5 cycle periods (5 seconds) when an attention area has been detected has been described above, the designated elapsed time may be set to an arbitrary value. The user may set an arbitrary designated elapsed time via the external I/F section 117. Although an example in which the rotary filter 103 rotates one revolution per second, and the area ratio of the normal light image filter F1 to the special light image filter F2 in the circumferential direction is 29:1 has been described above, the rotation speed of the rotary filter 103 and the area ratio of the normal light image filter F1 to the special light image filter F2 may be set arbitrarily. For example, the ratio of the special light image (i.e., the ratio of L to K) may be increased while giving priority to the temporal resolution of the normal light image.
Although an example in which a single-chip CCD in which a Bayer primary color filter is disposed on the front side is used as the imaging system has been described above, the imaging system is not limited thereto. For example, a double-chip or triple-chip CCD may also be used. Although an example in which the special light image is acquired using the narrow band of blue light and the narrow band of green light as disclosed in JP-A-2002-95635 has been described above, another configuration may also be employed. For example, the special light image may be acquired using fluorescence (see JP-A-63-122421), infrared light, or the like. Although an example in which the rotary filter 103 is used to capture the normal light image and the special light image has been described above, another configuration may also be employed. For example, white light and narrow-band light may be applied by changing the light source (e.g., LED light source).
Although an example in which the image processing device is integrated with the imaging section (lens system 100, CCD 101, illumination light source 102, rotary filter 103, gain amplifier 104, A/D conversion section 105, WB section 107, and photometrical evaluation section 108) has been described above, another configuration may also be employed. For example, an image signal acquired using a separate imaging section (e.g., capsule endoscope) may be stored in a recording medium in raw data format, and the image signal read from the recording medium may be processed.
The above embodiments may also be applied to a program that causes a computer to function as each section (e.g., first image acquisition section, second image acquisition section, attention area detection section, display state setting section, designated elapsed time setting section, and motion amount detection section) described above.
In this case, it is possible to store image data in advance (e.g., capsule endoscope), and process the stored image data by software using a computer system (e.g., PC).
The above embodiments may also be applied to a computer program product that stores a program code that implements each section (e.g., first image acquisition section, second image acquisition section, attention area detection section, display state setting section, designated elapsed time setting section, and motion amount detection section) described above.
The term “computer program product” used herein refers to an information storage medium, a device, an instrument, a system, or the like that stores a program code, such as an information storage medium (e.g., optical disk medium (e.g., DVD), hard disk medium, and memory medium) that stores a program code, a computer that stores a program code, or an Internet system (e.g., a system including a server and a client terminal), for example. In this case, each element and each process according to the above embodiments are implemented by respective modules, and a program code that includes these modules is recorded in the computer program product.
A second configuration example in which the image processing device is applied to a microscope is described below.
An image signal obtained via the lens system 100 and the CCD 101 of the microscope is amplified by the gain amplifier 104, and converted into a digital signal by the A/D conversion section 105. Illumination light emitted from the illumination light source 102 passes through the filter attached to the rotary filter 103, and is guided to an objective stage of the microscope. The first image acquisition section 110 is connected to the designated elapsed time setting section 500 and the display state setting section 501. The display state setting section 501 is connected to the display section 115. The attention area detection section 112 is connected to the designated elapsed time setting section 500, and the designated elapsed time setting section 500 is connected to the display state setting section 501. The control section 116 is bidirectionally connected to the designated elapsed time setting section 500 and the display state setting section 501.
An operation according to the second configuration example is described below. Note that the operation according to the second configuration example is basically the same as the operation according to the first configuration example. The differences from the first configuration example are mainly described below.
The first image acquisition section 110 reads the normal light image from the switch section 109, performs an interpolation process, a grayscale process, and the like on the normal light image, and transmits the resulting normal light image to the designated elapsed time setting section 500 and the display state setting section 501 under control of the control section 116. The attention area detection section 112 reads the special light image from the second image acquisition section 111, and performs the attention area detection process that detects an attention area (e.g., a lesion area in which blood vessels are densely present) under control of the control section 116. The attention area detection result is transmitted to the designated elapsed time setting section 500.
The designated elapsed time setting section 500 reads the attention area detection result from the attention area detection section 112, and reads two normal light images from the first image acquisition section 110 under control of the control section 116. In one embodiment of the invention, twenty-nine normal light images and one special light image are obtained in one cycle period (1 second), and the motion amount is calculated from the two normal light images.
The designated elapsed time setting section 500 determines the designated elapsed time while taking account of the motion amount calculated from the two normal light images in addition to the detection result read from the attention area detection section 112. More specifically, the designated elapsed time setting section 500 determines whether or not the current time point is within the set designated elapsed time, and transmits the determination result to the display state setting section 501. The display state setting section 501 reads the determination result as to whether or not the current time point is within the designated elapsed time from the designated elapsed time setting section 500, and selects the alert information addition process when the current time point is within the designated elapsed time under control of the control section 116. An edge/chroma enhancement process is performed on the normal light image as the alert information addition process (alert area superimposition process). The display state setting section 501 does not change the display state when the current time point is not within the designated elapsed time. The display image output from the display state setting section 501 is transmitted to the display section 115, and sequentially displayed on the display section 115.
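The data flow described above can be summarized in a short sketch. The following Python fragment is a minimal illustration under assumed placeholder logic; the detector threshold, the enhancement gain, and the default frame count are invented for the example and are not taken from the embodiment:

```python
import numpy as np

def process_cycle(normal_images, special_image, remaining, designated_frames=60):
    """One cycle period: 29 normal light images and 1 special light image.

    `remaining` counts the frames left in the designated elapsed time;
    `designated_frames` stands in for the value set by the designated
    elapsed time setting section 500.
    """
    # Attention area detection section 112 (placeholder feature test).
    detected = special_image.mean() > 128
    # Designated elapsed time setting section 500: restart the countdown
    # on detection, otherwise let it run down toward zero.
    remaining = designated_frames if detected else max(0, remaining - 1)

    display = []
    for img in normal_images:
        if remaining > 0:
            # Alert information addition process (a uniform gain standing
            # in for the edge/chroma enhancement described below).
            img = np.clip(img.astype(np.float64) * 1.15, 0, 255).astype(np.uint8)
        display.append(img)
    return display, remaining
```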
When the designated elapsed time is set based on the motion amount of the normal light image, the designated elapsed time decreases (i.e., the period of time in which the display state is changed decreases) when the imaging section (camera gaze point) moves at high speed with respect to the object. Since the attention area is not likely to be present near the current observation position in this case, shortening the period improves convenience to the user.
A detailed configuration example of the designated elapsed time setting section 500 is described below.
The first image acquisition section 110 is connected to the motion amount detection section 304 via the buffer 303. The motion amount detection section 304 and the designated elapsed time ROM 306 are connected to the designated elapsed time calculation section 305. The designated elapsed time calculation section 305 is connected to the detection information recording section 301. The update section 300 is connected to the detection information recording section 301 and the motion amount detection section 304. The control information output section 302 is connected to the display state setting section 501. The control section 116 is bidirectionally connected to the motion amount detection section 304, the designated elapsed time calculation section 305, and the designated elapsed time ROM 306.
When the image processing device has been initialized upon power on, for example, the control section 116 initializes the detection information recording section 301 to set the value of the detection information to an initial value “0”. The update section 300 reads the detection information that indicates whether or not an attention area has been detected within the special light image from the attention area detection section 112 under control of the control section 116.
The update section 300 transmits a control signal that instructs the motion amount detection section 304 to calculate the motion amount when an attention area has been detected. The update section 300 decrements the value of the detection information recorded in the detection information recording section 301 by one when an attention area has not been detected. When the value of the detection information would thereby become a negative number, the value of the detection information is set to 0.
The motion amount detection section 304 calculates the motion amount of the normal light image stored in the buffer 303 only when the control signal has been transmitted from the update section 300. The buffer 303 stores the first normal light image IN1 and the second normal light image IN2 in each cycle period.
The designated elapsed time calculation section 305 calculates the designated elapsed time with respect to the motion amount calculated by the motion amount detection section 304 based on the relationship table read from the designated elapsed time ROM 306, and outputs the calculated designated elapsed time to the detection information recording section 301 as the value of the detection information.
The control information output section 302 reads the value of the detection information from the detection information recording section 301 under control of the control section 116. The control information output section 302 outputs the control information that instructs the display state setting section 501 to change the display state to the display state setting section 501 when the value of the detection information is equal to or larger than 1. The control information output section 302 outputs the control information that instructs the display state setting section 501 not to change the display state to the display state setting section 501 when the value of the detection information is 0.
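Putting the pieces together, the counter mechanism described above (update section 300, motion amount detection section 304, designated elapsed time calculation section 305, designated elapsed time ROM 306, and control information output section 302) can be sketched as follows. The relationship table values are hypothetical stand-ins for the ROM contents, and the mean absolute difference is a simple placeholder for whatever motion estimation the embodiment actually uses:

```python
import numpy as np

# Hypothetical relationship table standing in for the designated elapsed
# time ROM 306: a larger motion amount maps to a shorter designated
# elapsed time (the first value VD1, in frames).
ELAPSED_TIME_TABLE = [(0.0, 90), (2.0, 60), (5.0, 30), (10.0, 10)]

def motion_amount(in1, in2):
    """Motion amount detection section 304: mean absolute difference
    between the two buffered normal light images IN1 and IN2 (a simple
    stand-in for block matching or similar motion estimation)."""
    return float(np.mean(np.abs(in1.astype(np.float64) -
                                in2.astype(np.float64))))

def lookup_vd1(amount):
    """Designated elapsed time calculation section 305: select the table
    entry corresponding to the calculated motion amount."""
    vd1 = ELAPSED_TIME_TABLE[0][1]
    for threshold, frames in ELAPSED_TIME_TABLE:
        if amount >= threshold:
            vd1 = frames
    return vd1

def update_detection_info(vd, detected, in1, in2):
    """Update section 300: on detection, record the motion-dependent
    first value VD1; otherwise decrement, clamping at 0 (= VD2)."""
    if detected:
        return lookup_vd1(motion_amount(in1, in2))
    return max(0, vd - 1)

def change_display_state(vd):
    """Control information output section 302: instruct the display state
    setting section 501 to change the display state while VD >= 1."""
    return vd >= 1
```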
A detailed configuration example of the display state setting section 501 is described below. The designated elapsed time setting section 500 is connected to the selection section 401. The selection section 401 is connected to the luminance/color difference separation section 403 and the display section 115. The luminance/color difference separation section 403 is connected to the edge enhancement section 404 and the chroma enhancement section 405. The edge enhancement section 404 and the chroma enhancement section 405 are connected to the luminance/color difference blending section 406. The luminance/color difference blending section 406 is connected to the display section 115. The control section 116 is bidirectionally connected to the luminance/color difference separation section 403, the edge enhancement section 404, the chroma enhancement section 405, and the luminance/color difference blending section 406.
The selection section 401 reads the control information that instructs whether or not to change the display state from the designated elapsed time setting section 500, and reads the normal light image from the buffer 410 under control of the control section 116. When the control information that instructs to change the display state has been read from the designated elapsed time setting section 500, the selection section 401 transmits the normal light image to the luminance/color difference separation section 403. When the control information that instructs not to change the display state has been read from the designated elapsed time setting section 500, the selection section 401 transmits the normal light image to the display section 115.
The luminance/color difference separation section 403 converts the R signal, the G signal, and the B signal of the normal light image into the luminance signal Y and the color difference signals Cb and Cr (see the expressions (1) to (3)) under control of the control section 116 when the normal light image has been transmitted from the selection section 401. The luminance signal Y is transmitted to the edge enhancement section 404, and the color difference signals Cb and Cr are transmitted to the chroma enhancement section 405.
The edge enhancement section 404 performs a known edge enhancement process on the luminance signal Y under control of the control section 116. A luminance signal Y′ obtained by the edge enhancement process is transmitted to the luminance/color difference blending section 406. The chroma enhancement section 405 performs a known chroma enhancement process on the color difference signals Cb and Cr under control of the control section 116. Color difference signals Cb′ and Cr′ obtained by the chroma enhancement process are transmitted to the luminance/color difference blending section 406.
The luminance/color difference blending section 406 blends the luminance signal Y′ from the edge enhancement section 404 and the color difference signals Cb′ and Cr′ from the chroma enhancement section 405 to generate an R′ signal, a G′ signal, and a B′ signal (see the expressions (7) to (9)), and outputs the resulting signal as the display image under control of the control section 116.
R′=Y′+1.40200Cr′ (7)
G′=Y′−0.34414Cb′−0.71414Cr′ (8)
B′=Y′+1.77200Cb′ (9)
In the second configuration example, the alert information is thus set to the entire normal light image.
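The pipeline of sections 403 to 406 can be sketched in a few lines of Python. Expressions (1) to (3) are not reproduced in this excerpt, so the sketch uses the standard luminance/color difference forward transform that expressions (7) to (9) invert; the unsharp mask and the chroma gain are generic stand-ins for the "known" enhancement processes named above:

```python
import numpy as np

def separate(rgb):
    """Luminance/color difference separation section 403 (standard
    Y/Cb/Cr transform; the counterpart of expressions (7) to (9))."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.29900 * r + 0.58700 * g + 0.11400 * b
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    cr = 0.50000 * r - 0.41869 * g - 0.08131 * b
    return y, cb, cr

def enhance_edges(y, gain=1.0):
    """Edge enhancement section 404: a basic unsharp mask (edges wrap at
    the image border via np.roll, acceptable for a sketch)."""
    blurred = (np.roll(y, 1, axis=0) + np.roll(y, -1, axis=0) +
               np.roll(y, 1, axis=1) + np.roll(y, -1, axis=1)) / 4.0
    return y + gain * (y - blurred)

def enhance_chroma(cb, cr, gain=1.2):
    """Chroma enhancement section 405: scale the color difference
    signals about zero."""
    return cb * gain, cr * gain

def blend(y, cb, cr):
    """Luminance/color difference blending section 406: expressions
    (7) to (9)."""
    r = y + 1.40200 * cr
    g = y - 0.34414 * cb - 0.71414 * cr
    b = y + 1.77200 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

def alert_display_image(rgb):
    """Full alert information addition process on one normal light image."""
    y, cb, cr = separate(rgb.astype(np.float64))
    return blend(enhance_edges(y), *enhance_chroma(cb, cr))
```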
According to the second configuration example, the designated elapsed time setting section 500 includes the motion amount detection section 304. The motion amount detection section 304 detects the motion amount of the first image acquired by the first image acquisition section 110.
The designated elapsed time setting section 500 performs the designated elapsed time setting process based on the motion amount detected by the motion amount detection section 304.
According to the above configuration, since the designated elapsed time is set based on the motion amount of the first image, the designated elapsed time (i.e., a period of time in which the display state is changed) is reduced when the motion amount of the first image is large (i.e., when it is determined that the imaging section moves at high speed). This makes it possible to provide a convenient image processing device that can prevent a situation in which the alert information is frequently displayed when the imaging section moves at high speed.
The motion amount detection section 304 may detect the motion amount of the first image when an attention area has been detected within the second image.
According to the above configuration, the motion amount of the first image is not detected when an attention area has not been detected. This makes it possible to prevent a situation in which the heavy-load motion amount detection process is unnecessarily performed. Therefore, an intelligent designated elapsed time setting process using the motion amount of the first image can be implemented with reduced processing load.
The designated elapsed time setting section 500 may include the detection information recording section 301, the update section 300, and the control information output section 302 in addition to the motion amount detection section 304.
More specifically, the update section 300 may set the value VD of the detection information to the first value VD1, which changes depending on the detected motion amount, when an attention area has been detected within the second image.
The control information output section 302 may output the control information that instructs to change the display state of the display image until the value VD of the detection information reaches the second value VD2 (=0), and may output the control information that instructs not to change the display state of the display image when the value VD of the detection information has reached the second value VD2.
It is possible to efficiently implement the designated elapsed time setting process corresponding to the motion amount by thus updating the value of the detection information recorded in the detection information recording section 301. Since this process can be implemented merely by recording the motion-dependent first value VD1 in the detection information recording section 301, the process and the configuration can be simplified.
The display state setting section 501 may perform, as the display state change process, a process that enhances the edge/chroma of the entire normal light image to generate the display image.
It is possible to provide an image processing device that is convenient to the user by thus performing the display state change process that enhances the edge/chroma of the entire normal light image to generate the display image.
Although an example in which the designated elapsed time is set using the motion amount of the normal light image has been described above, another configuration may also be employed. For example, the designated elapsed time may be fixed in the same manner as in the first configuration example. The motion amount of the normal light image may also be used in the first configuration example.
Although an example in which the display state change process enhances the edge/chroma of the entire normal light image has been described above, another configuration may also be employed. For example, the alert area may be added to the peripheral area of the normal light image in the same manner as in the first configuration example. The edge/chroma of the entire normal light image may also be enhanced in the first configuration example.
Although an example in which the process is implemented by hardware has been described above, another configuration may also be employed. For example, the process may be implemented by software as described above in connection with the first configuration example.
In the third configuration example, an image signal obtained via the lens system 100 and the CCD 550 provided on the end of the endoscope is amplified by the gain amplifier 104, and converted into a digital signal by the A/D conversion section 105. Illumination light emitted from the illumination light source 102 passes through the filter attached to the rotary filter 551 provided on the end of the endoscope, and is applied to an object via an optical fiber. The control section 116 is bidirectionally connected to the rotary filter 551 and the switch section 552.
An operation according to the third configuration example is described below. Note that the operation according to the third configuration example is basically the same as the operation according to the first configuration example. The differences from the first configuration example are mainly described below.
The image signal obtained via the lens system 100 and the CCD 550 is output as an analog signal. In one embodiment of the invention, the CCD 550 is a single-chip monochromatic CCD, and the illumination light source 102 is a normal white light source (e.g., xenon lamp).
The rotary filter 551 is provided with twenty-nine sets of filters respectively having the R, G, and B spectral characteristics of the normal light image. The rotary filter 551 is also provided with one set of a filter that allows light within the narrow band (390 to 445 nm) of blue light to pass through, a filter that allows light within the narrow band (530 to 550 nm) of green light to pass through, and a light-shielding filter in the same manner as in the first configuration example.
Twenty-nine normal light images (R, G, and B signals) are acquired while the rotary filter 551 rotates one revolution, each normal light image being captured in 1/30th of a second (=3×1/90). The special light image is captured using the light-shielding filter (R signal), the green-light narrow-band filter (G2) (G signal), and the blue-light narrow-band filter (B2) (B signal). Therefore, one special light image is acquired while the rotary filter 551 rotates one revolution, the special light image also being captured in 1/30th of a second (=3×1/90).
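As a quick cross-check of these timing figures, one image takes three filter positions at the 1/90th-of-a-second position rate implied by the factor 3×1/90, and thirty image sets (twenty-nine normal plus one special) therefore take one second per revolution. A small sketch of the arithmetic, under that assumption:

```python
# Timing cross-check for the rotary filter 551 (assumes the 1/90 s per
# filter position implied by the factor 3 x 1/90 in the text).

POSITIONS_PER_SECOND = 90        # one filter position every 1/90 s
FILTERS_PER_IMAGE = 3            # R, G, B (or shield, G2, B2)
SETS_PER_REVOLUTION = 29 + 1     # 29 normal light sets + 1 special set

seconds_per_image = FILTERS_PER_IMAGE / POSITIONS_PER_SECOND        # 1/30 s
seconds_per_revolution = SETS_PER_REVOLUTION * seconds_per_image    # 1.0 s

print(f"one image every {seconds_per_image:.4f} s; "
      f"one revolution (cycle period) every {seconds_per_revolution:.1f} s")
```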
The buffer 106 can store (record) one normal light image or one special light image, and the stored image is overwritten each time a new image is acquired by the imaging operation. The switch section 552 transmits the normal light image (R, G, and B signals) stored in the buffer 106 to the first image acquisition section 110 under control of the control section 116. When the special light image that includes a blue-light narrow-band component and a green-light narrow-band component is stored in the buffer 106, the switch section 552 transmits the special light image to the second image acquisition section 111.
The first image acquisition section 110 reads the normal light image from the switch section 552, performs a grayscale process and the like on the normal light image, and transmits the resulting normal light image to the display state setting section 114 under control of the control section 116. The second image acquisition section 111 reads the special light image from the switch section 552, and performs a grayscale process and the like on the special light image under control of the control section 116. The second image acquisition section 111 also performs a process that generates a pseudo-color image. The subsequent process is performed in the same manner as in the first configuration example.
According to the third configuration example, the normal light image and the special light image are acquired, and an attention area is detected based on the feature quantity of each pixel of the special light image. The designated elapsed time is determined (set) based on the detection result, and the display state of the display image is set based on the determined designated elapsed time. Therefore, the user can determine that the attention area is present near the current observation position even when the attention area is detected only for a short time during observation in a moving state. This makes it possible to prevent a situation in which the attention area is missed, and to reliably specify the attention area. Moreover, the normal light image and the special light image are acquired every given cycle period so that the normal light image is acquired at a higher ratio than the special light image. This makes it possible to prevent a decrease in temporal resolution of the normal light image, and obtain a high-quality display image. Since the alert information is set to the peripheral area of the normal light image, it is possible to provide an image processing device that ensures excellent operability, and does not hinder observation of the normal light image.
Although the first to third configuration examples according to several embodiments of the invention have been described in detail above, those skilled in the art would readily appreciate that many modifications are possible in the above embodiments.
For example, the specific wavelength band may be narrower than the wavelength band of white light (narrow band imaging (NBI)). The normal light image and the special light image may be an in vivo image, and the specific wavelength band included in the in vivo image may be the wavelength band of light absorbed by hemoglobin in blood, for example. The wavelength band of light absorbed by hemoglobin may be 390 to 445 nm (first narrow-band light or a B2 component of narrow-band light) or 530 to 550 nm (second narrow-band light or a G2 component of narrow-band light), for example.
This makes it possible to observe the structure of blood vessels positioned in a surface area and a deep area of tissue. A lesion area (e.g., epidermoid cancer) that is difficult to observe using normal light can be displayed in brown or the like by inputting the resulting signal to a given channel (G2→R, B2→G and B), so that the lesion area can be reliably detected. A wavelength band of 390 to 445 nm or 530 to 550 nm is selected from the viewpoint of absorption by hemoglobin and the ability to reach a surface area or a deep area of tissue. Note that the wavelength band is not limited thereto. For example, the lower limit of the wavelength band may decrease by about 0 to 10%, and the upper limit of the wavelength band may increase by about 0 to 10%, depending on a variation factor (e.g., experimental results for absorption by hemoglobin and the ability to reach a surface area or a deep area of tissue).
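As an illustration of the channel assignment stated above (G2 to the R channel, B2 to the G and B channels), a minimal sketch follows; it assumes g2 and b2 are already demosaiced 8-bit narrow-band images of equal size:

```python
import numpy as np

def nbi_pseudo_color(g2, b2):
    """Map the narrow-band signals to display channels (G2 -> R,
    B2 -> G and B) so that capillary-dense lesion areas appear
    brownish in the pseudo-color image."""
    return np.stack([g2, b2, b2], axis=-1).astype(np.uint8)
```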
The specific wavelength band included in the in vivo image may be the wavelength band of fluorescence emitted from a fluorescent substance. For example, the specific wavelength band may be 490 to 625 nm.
This makes it possible to implement autofluorescence imaging (AFI). Intrinsic fluorescence (490 to 625 nm) from a fluorescent substance (e.g., collagen) can be observed by applying excitation light (390 to 470 nm). In this case, the lesion area can be highlighted in a color differing from that of a normal mucous membrane, so that the lesion area can be reliably detected, for example. A wavelength band of 490 to 625 nm is the wavelength band of fluorescence produced by a fluorescent substance (e.g., collagen) when excitation light is applied. Note that the wavelength band is not limited thereto. For example, the lower limit of the wavelength band may decrease by about 0 to 10%, and the upper limit of the wavelength band may increase by about 0 to 10% depending on a variation factor (e.g., experimental results for the wavelength band of fluorescence produced by a fluorescent substance). A pseudo-color image may be generated by simultaneously applying light within a wavelength band (540 to 560 nm) that is absorbed by hemoglobin.
The specific wavelength band included in the in vivo image may be the wavelength band of infrared light. For example, the specific wavelength band may be 790 to 820 nm or 905 to 970 nm.
This makes it possible to implement infrared imaging (IRI). Information about a blood vessel or a blood flow in a deep area of a mucous membrane that is difficult to observe visually can be highlighted by intravenously injecting indocyanine green (ICG) (an infrared marker that easily absorbs infrared light), and applying infrared light within the above wavelength band, so that the depth of gastric cancer invasion or the therapeutic strategy can be determined, for example. An infrared marker exhibits maximum absorption in a wavelength band of 790 to 820 nm, and exhibits minimum absorption in a wavelength band of 905 to 970 nm. Note that the wavelength band is not limited thereto. For example, the lower limit of the wavelength band may decrease by about 0 to 10%, and the upper limit of the wavelength band may increase by about 0 to 10%, depending on a variation factor (e.g., experimental results for absorption by the infrared marker).
The second image acquisition section 111 may generate the second image (special light image) based on the acquired first image (white light image).
More specifically, the second image acquisition section 111 may include a signal extraction section that extracts a signal within the wavelength band of white light from the acquired white light image, and the second image acquisition section may generate the special light image that includes a signal within the specific wavelength band based on the signal within the wavelength band of white light extracted by the signal extraction section. For example, the signal extraction section may estimate the spectral reflectance characteristics of the object from the RGB signals of the white light image at intervals of 10 nm, and the second image acquisition section 111 may integrate the estimated signal components within the specific wavelength band to generate the special light image.
The second image acquisition section 111 may include a matrix data setting section that sets matrix data for calculating a signal within the specific wavelength band from a signal within the wavelength band of white light. The second image acquisition section 111 may calculate a signal within the specific wavelength band from a signal within the wavelength band of white light using the matrix data set by the matrix data setting section to generate the special light image. For example, the matrix data setting section may set table data as the matrix data, the spectral characteristics of illumination light within the specific wavelength band being described in the table data at intervals of 10 nm. The estimated spectral reflectance characteristics of the object may be multiplied by the spectral characteristics (coefficient) described in the table data to generate the special light image.
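The estimation-and-integration approach described in the last two paragraphs can be sketched as follows. The Gaussian sensitivity curves and the 530 to 550 nm band table are invented placeholders (a real system would use calibrated spectral data), and the pseudo-inverse is one simple way to realize the reflectance estimation; none of this is taken verbatim from the embodiment:

```python
import numpy as np

WAVELENGTHS = np.arange(400, 701, 10)           # 10 nm intervals

def gaussian(center, width):
    """Smooth placeholder curve for a camera channel sensitivity."""
    return np.exp(-0.5 * ((WAVELENGTHS - center) / width) ** 2)

# Hypothetical camera spectral sensitivities (3 x 31).
SENSITIVITY = np.stack([gaussian(600, 40),      # R
                        gaussian(540, 40),      # G
                        gaussian(460, 40)])     # B

def estimate_reflectance(rgb):
    """Signal extraction section: least-squares estimate of the object's
    spectral reflectance at 10 nm intervals from the RGB signals."""
    pinv = np.linalg.pinv(SENSITIVITY)           # (31 x 3)
    return rgb @ pinv.T                          # (..., 31)

# Matrix data setting section: table data describing the spectral
# characteristics of illumination light within the specific wavelength
# band (here the 530-550 nm narrow band) at 10 nm intervals.
NARROW_BAND_TABLE = ((WAVELENGTHS >= 530) & (WAVELENGTHS <= 550)).astype(float)

def special_light_signal(rgb):
    """Multiply the estimated reflectance by the table coefficients and
    integrate over the specific wavelength band."""
    return estimate_reflectance(rgb) @ NARROW_BAND_TABLE
```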
In this case, since the special light image can be generated based on the normal light image, it is possible to implement a system using only one light source that emits normal light and one image sensor that captures normal light. This makes it possible to reduce the size of the insertion section of a scope-type endoscope or a capsule endoscope. Moreover, since the number of parts can be reduced, a reduction in cost can be achieved.
Although an example in which the alert information is an image has been described above, another configuration may also be employed. For example, sound or the like may also be used as the alert information.
When the first image acquisition section 110 has acquired the first image, the second image acquisition section 111 has acquired the second image, and the attention area detection section 112 has detected an attention area based on the feature quantity of each pixel within the second image, the alert information output section (display state setting section 114) may output the alert information about the detected attention area. More specifically, the designated elapsed time setting section 113 may perform the designated elapsed time setting process based on the attention area detection result, and the alert information output section may output the alert information until the designated elapsed time elapses. When using sound as the alert information, a sound output section (e.g., speaker) may be used instead of, or in addition to, the display section 115.
Although only some embodiments of the invention have been described in detail above, those skilled in the art would readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of the invention. Accordingly, such modifications are intended to be included within the scope of the invention. Any term (e.g., normal light image or special light image) cited with a different term (e.g., first image or second image) having a broader meaning or the same meaning at least once in the specification and the drawings may be replaced by the different term in any place in the specification and the drawings. The configurations and the operations of the image processing device and the endoscope system are not limited to those described in connection with the above embodiments. Various modifications and variations may be made of the above embodiments.
Foreign Application Priority Data:
Number | Date | Country | Kind
2010-023750 | Feb. 5, 2010 | JP | National
This application is a continuation of International Patent Application No. PCT/JP2011/50951, having an international filing date of Jan. 20, 2011, which designated the United States, the entirety of which is incorporated herein by reference. Japanese Patent Application No. 2010-023750 filed on Feb. 5, 2010 is also incorporated herein by reference in its entirety.
Related U.S. Application Data:
Relation | Application Number | Date | Country
Parent | PCT/JP2011/050951 | Jan. 20, 2011 | US
Child | 13/548,390 | | US