The present invention relates to an endoscope system that displays a virtual scale to be used to measure the size of a subject and a method of operating the endoscope system.
A distance to a subject, the size of a subject, or the like is acquired in an endoscope system that includes a light source device, an endoscope, and a processor device. For example, in WO2018/051680A (corresponding to US2019/0204069A1), a subject is irradiated with illumination light and measurement light, and a measurement light-irradiation region, such as a spotlight, appears on the subject due to the irradiation with the measurement light. Then, a virtual scale used to measure the size of the subject is displayed in an image to correspond to the position of the spotlight.
As in WO2018/051680A, a length measurement-compatible endoscope, which can emit measurement light, is necessary to display a virtual scale using measurement light. In order to execute a length measurement mode in which a virtual scale is displayed in an endoscope system, it is necessary to determine whether or not a length measurement-compatible endoscope is connected.
An object of the present invention is to provide an endoscope system that can determine whether or not a length measurement mode can be executed depending on connection of an endoscope and a method of operating the endoscope system.
An endoscope system according to an aspect of the present invention comprises an endoscope and a processor device that includes an image control processor. The image control processor determines whether or not the endoscope is a length measurement-compatible endoscope in a case where the endoscope is connected to the processor device, and enables switching of a mode to a length measurement mode in a case where the endoscope is the length measurement-compatible endoscope.
It is preferable that, in a case where the endoscope is the length measurement-compatible endoscope, the endoscope is capable of emitting measurement light and causing a display to display a length measurement image displaying a virtual scale based on the measurement light. It is preferable that, in a state where the switching of a mode to the length measurement mode is enabled, the image control processor performs at least one of switching of ON or OFF of the measurement light, switching of ON or OFF of length measurement image-display settings related to the length measurement image, switching of ON or OFF of length measurement function-operation state display showing that the virtual scale is being displayed on the display, or switching of ON or OFF of display of the virtual scale or display aspect change of the virtual scale by an operation for switching a mode to the length measurement mode.
It is preferable that the image control processor switches the measurement light to ON, switches the length measurement image-display settings to ON, switches the length measurement function-operation state display to ON, and switches the display of the virtual scale to ON by the operation for switching a mode to the length measurement mode. It is preferable that, in a case where conditions in switching a mode are not satisfied in the operation for switching a mode to the length measurement mode, the image control processor prohibits the switching of the measurement light to ON, prohibits the switching of the length measurement image-display settings to ON, prohibits the switching of the length measurement function-operation state display to ON, and prohibits the switching of the display of the virtual scale to ON. It is preferable that length measurement function-operation state-unavailability display showing that the virtual scale is not being displayed is switched to ON instead of prohibiting the switching of the length measurement function-operation state display to ON. It is preferable that, in a case where the length measurement image-display settings are switched to ON, the image control processor stores image display settings before a mode is switched to the length measurement mode.
It is preferable that display aspect change of the virtual scale is performed according to a selection from a plurality of scale patterns. It is preferable that the image control processor switches the measurement light to OFF, switches the length measurement image-display settings to OFF, switches the length measurement function-operation state display to OFF, and switches the display of the virtual scale to OFF by an operation for switching the length measurement mode to another mode. It is preferable that, in a case where the length measurement image-display settings are switched to OFF, the image control processor switches image display settings to image display settings stored before a mode is switched to the length measurement mode.
According to another aspect of the present invention, there is provided a method of operating an endoscope system that includes an endoscope and a processor device including an image control processor. The image control processor determines whether or not the endoscope is a length measurement-compatible endoscope in a case where the endoscope is connected to the processor device, and enables switching of a mode to a length measurement mode in a case where the endoscope is the length measurement-compatible endoscope.
According to the present invention, it is possible to determine whether or not a length measurement mode can be executed depending on the connection of an endoscope.
Further, the operation part 12b is provided with an observation mode selector switch 12f that is used for an operation for switching an observation mode, a static image-acquisition instruction switch 12g that is used to give an instruction to acquire a static image of the object to be observed, and a zoom operation part 12h that is used for an operation of a zoom lens 21b.
The processor device 14 is electrically connected to the display 15 and the user interface 16. The display 15 outputs and displays an image, information, or the like of the object to be observed that is processed by the processor device 14. The user interface 16 includes a keyboard, a mouse, a touch pad, a microphone, and the like and has a function to receive an input operation, such as function settings. The augmented processor device 17 is electrically connected to the processor device 14. The augmented display 18 outputs and displays an image, information, or the like that is processed by the augmented processor device 17.
The endoscope 12 has a normal observation mode, a special observation mode, and a length measurement mode, and these modes are switched by the observation mode selector switch 12f. The normal observation mode is a mode in which an object to be observed is illuminated with illumination light. The special observation mode is a mode in which an object to be observed is illuminated with special light different from the illumination light. In the length measurement mode, an object to be observed is illuminated with the illumination light or measurement light, and a virtual scale to be used for the measurement of the size and the like of the object to be observed is displayed in a subject image obtained from the image pickup of the object to be observed. A subject image on which the virtual scale is not superimposed is displayed on the display 15, while a subject image on which the virtual scale is superimposed is displayed on the augmented display 18.
The illumination light is light that is used to apply brightness to the entire object to be observed to observe the entire object to be observed. The special light is light that is used to enhance a specific region of the object to be observed. The measurement light is light that is used for the display of the virtual scale. Further, a virtual scale to be displayed in an image will be described in the present embodiment, but a real scale may be provided in a real lumen so that the real scale can be checked using an image. In this case, it is conceivable that the real scale is inserted through a forceps channel of the endoscope 12 and is made to protrude from the distal end part 12d.
In a case where a user operates the static image-acquisition instruction switch 12g, the screen of the display 15 is frozen and displayed, and an alert sound (for example, a “beep”) informing of the acquisition of a static image is generated together. Then, the static images of the subject image, which are obtained before and after the operation timing of the static image-acquisition instruction switch 12g, are stored in a static image storage unit 42.
A static image-acquisition instruction may be given using an operation device other than the static image-acquisition instruction switch 12g. For example, a foot pedal may be connected to the processor device 14, and may be adapted to give a static image-acquisition instruction in a case where a user operates the foot pedal (not shown) with a foot. A static image-acquisition instruction may also be given by a foot pedal that is used to switch a mode. Further, a gesture recognition unit (not shown), which recognizes the gestures of a user, may be connected to the processor device 14, and may be adapted to give a static image-acquisition instruction in a case where the gesture recognition unit recognizes a specific gesture of a user. The gesture recognition unit may also be used to switch a mode.
Further, a sight line input unit (not shown), which is provided close to the display 15, may be connected to the processor device 14, and may be adapted to give a static image-acquisition instruction in a case where the sight line input unit recognizes that a user's sight line is in a predetermined region of the display 15 for a predetermined time or longer. Furthermore, a voice recognition unit (not shown) may be connected to the processor device 14, and may be adapted to give a static image-acquisition instruction in a case where the voice recognition unit recognizes a specific voice generated by a user. The voice recognition unit may also be used to switch a mode. Moreover, an operation panel (not shown), such as a touch panel, may be connected to the processor device 14, and may be adapted to give a static image-acquisition instruction in a case where a user performs a specific operation on the operation panel. The operation panel may also be used to switch a mode.
A balloon 19 as a fixing member is attachably and detachably mounted on the insertion part 12a. The balloon 19 is a disposable type balloon, is discarded after being used one time or a small number of times, and is replaced with a new one. The number of times mentioned here means the number of cases, and a small number of times refers to 10 times or less.
The balloon 19 is formed in a substantially tubular shape, of which end portions are narrowed, using an elastic material, such as rubber. The balloon 19 includes a distal end portion 19a and a proximal end portion 19b, which have a small diameter, and an intermediate bulging portion 19c. The insertion part 12a is inserted into the balloon 19, the balloon 19 is disposed at a predetermined position, and rings 20a and 20b made of rubber are then fitted to the distal end portion 19a and the proximal end portion 19b, so that the balloon 19 is fixed to the insertion part 12a.
Accordingly, a user sets the balloon 19 to an inflated state.
The flat surface 28b is provided with a through-hole 27a through which a distal end surface 21c of the image pickup optical system 21 is exposed to the outside and through-holes 27b through which distal end surfaces 22b of the pair of illumination optical systems 22 are exposed to the outside. The distal end surface 21c, the distal end surfaces 22b, and the flat surface 28b are disposed on the same plane.
Through-holes 27c and 27d are disposed on the flat surface 28a. The air/water supply nozzle 25 is exposed from the through-hole 27c. That is, the flat surface 28a is a mounting position for the air/water supply nozzle 25 in the axial direction Z. A jetting tube portion 25a is formed on the distal end side of the air/water supply nozzle 25. The jetting tube portion 25a is formed in the shape of a tube that protrudes in a direction bent by, for example, 90° from the proximal end portion of the air/water supply nozzle 25, and includes a jetting port 25b at the distal end thereof. The jetting tube portion 25a is disposed to protrude from the through-hole 27c to the distal end side in the axial direction Z.
The jetting port 25b is disposed toward the image pickup optical system 21. Accordingly, the air/water supply nozzle 25 jets a washing solution or gas, which is fluid, to the distal end surface 21c of the image pickup optical system 21 and the peripheral portion of the distal end surface 21c.
In a case where washing water or gas is jetted from the air/water supply nozzle 25 to the image pickup optical system 21, it is preferable that a flow speed F1 of the washing water at a position where the washing water reaches the image pickup optical system 21, that is, at an outer peripheral edge of the image pickup optical system 21, is 2 m/s or more and a flow speed F2 of the gas at the outer peripheral edge of the image pickup optical system 21 is 40 m/s or more. It is preferable that the flow speeds F1 and F2 satisfy the above-mentioned values regardless of the orientation of the distal end part 12d. For example, in a case where the air/water supply nozzle 25 is positioned immediately below the image pickup optical system 21 in a vertical direction, the flow speed of washing water or gas is reduced due to an influence of gravity, but it is preferable that the flow speeds F1 and F2 satisfy the above-mentioned values even in this case.
The distal end surface of the measurement light-emitting unit 23, which is exposed from the through-hole 27d, is disposed on the flat surface 28a. That is, the mounting position for the air/water supply nozzle 25 and the distal end surface of the measurement light-emitting unit 23 are disposed at the same position in the axial direction Z. The measurement light-emitting unit 23 is disposed between the image pickup optical system 21 and the air/water supply nozzle 25 in a range where fluid is jetted from the air/water supply nozzle 25. In the present embodiment, the measurement light-emitting unit 23 is disposed in a region that connects the jetting port 25b of the air/water supply nozzle 25 to the outer peripheral edge of the image pickup optical system 21 in a case where the distal end surface 28 is viewed in the axial direction Z. Accordingly, in a case where fluid is jetted from the air/water supply nozzle 25 to the image pickup optical system 21, fluid can also be jetted to the measurement light-emitting unit 23 at the same time.
The guide surface 28c is formed of a continuous surface that connects the flat surface 28a to the flat surface 28b. The guide surface 28c is an inclined surface that is formed in a flat shape from a position where the guide surface 28c is in contact with the outer peripheral edge of the measurement light-emitting unit 23 to a position where the guide surface 28c is in contact with the outer peripheral edge of the image pickup optical system 21. Since the guide surface 28c is disposed in the range where fluid is jetted from the air/water supply nozzle, fluid is jetted to even the guide surface 28c in a case where fluid is jetted from the air/water supply nozzle 25. The fluid jetted to the guide surface 28c is diffused and blown to the image pickup optical system 21. In this case, the entire guide surface 28c may be included in the range where fluid is jetted from the air/water supply nozzle or only a part of the guide surface 28c may be included in the range where fluid is jetted from the air/water supply nozzle. In the present embodiment, the entire guide surface 28c is included in the region that connects the jetting port 25b of the air/water supply nozzle 25 to the outer peripheral edge of the image pickup optical system 21.
The light source processor 31 controls the light source unit 30 on the basis of an instruction given from a system controller 41. The system controller 41 not only gives an instruction related to light source control to the light source processor 31 but also controls a light source 23a of the measurement light-emitting unit 23.
The illumination optical system 22 includes the illumination lens 22a, and the object to be observed is irradiated with light, which is emitted from the light guide LG, through the illumination lens 22a. The image pickup optical system 21 includes an objective lens 21a, a zoom lens 21b, and an image pickup element 32. Light reflected from the object to be observed is incident on the image pickup element 32 through the objective lens 21a and the zoom lens 21b. Accordingly, the reflected image of the object to be observed is formed on the image pickup element 32.
The zoom lens 21b has an optical zoom function that enlarges or reduces the subject by moving between a telephoto end and a wide end. ON/OFF of the optical zoom function can be switched by the zoom operation part 12h.
The image pickup element 32 is a color image pickup sensor, and picks up the reflected image of an object to be examined and outputs image signals. It is preferable that the image pickup element 32 is a charge coupled device (CCD) image pickup sensor, a complementary metal-oxide semiconductor (CMOS) image pickup sensor, or the like. The image pickup element 32 used in the present invention is a color image pickup sensor that is used to obtain red images, green images, and blue images corresponding to three colors of R (red), G (green), and B (blue). The red image is an image that is output from red pixels provided with red color filters in the image pickup element 32. The green image is an image that is output from green pixels provided with green color filters in the image pickup element 32. The blue image is an image that is output from blue pixels provided with blue color filters in the image pickup element 32. The image pickup element 32 is controlled by an image pickup controller 33.
Image signals output from the image pickup element 32 are transmitted to a CDS/AGC circuit 34. The CDS/AGC circuit 34 performs correlated double sampling (CDS) or auto gain control (AGC) on the image signals that are analog signals. The image signals, which have been transmitted through the CDS/AGC circuit 34, are converted into digital image signals by an analog/digital converter (A/D converter) 35. The digital image signals, which have been subjected to A/D conversion, are input to a communication interface (I/F) 37 of the light source device 13 through a communication interface (I/F) 36.
In the processor device 14, programs related to various types of processing, control, or the like are incorporated into a program storage memory (not shown). The system controller 41 formed of an image control processor operates the programs incorporated into the program storage memory, so that the functions of a reception unit 38 connected to the communication interface (I/F) 37 of the light source device 13, a signal processing unit 39, and a display controller 40 are realized.
The reception unit 38 receives the image signals, which are transmitted from the communication I/F 37, and transmits the image signals to the signal processing unit 39. A memory, which temporarily stores the image signals received from the reception unit 38, is built in the signal processing unit 39, and the signal processing unit 39 processes an image signal group, which is a set of the image signals stored in the memory, to generate the subject image. The reception unit 38 may directly transmit control signals, which are related to the light source processor 31, to the system controller 41.
In a case where the endoscope 12 is set to the normal observation mode, signal assignment processing for assigning the blue image of the subject image to B channels of the display 15, assigning the green image of the subject image to G channels of the display 15, and assigning the red image of the subject image to R channels of the display 15 is performed in the signal processing unit 39. As a result, a color subject image is displayed on the display 15. The same signal assignment processing as that in the normal observation mode is performed even in the length measurement mode.
On the other hand, in a case where the endoscope 12 is set to the special observation mode, the red image of the subject image is not used for the display of the display 15, the blue image of the subject image is assigned to the B channels and the G channels of the display 15, and the green image of the subject image is assigned to the R channels of the display 15 in the signal processing unit 39. As a result, a pseudo-color subject image is displayed on the display 15. Further, in a case where the endoscope 12 is set to the length measurement mode, the signal processing unit 39 transmits a subject image, which includes the irradiation position of the measurement light, to a data transmission/reception unit 43. The data transmission/reception unit 43 transmits data, which are related to the subject image, to the augmented processor device 17. The data transmission/reception unit 43 can receive data and the like from the augmented processor device 17. The received data can be processed by the signal processing unit 39 or the system controller 41.
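The channel assignments described above can be illustrated with a short sketch. The following is a minimal example, not the actual implementation of the signal processing unit 39; the function name and the array representation of the red, green, and blue images are assumptions.

```python
import numpy as np

def assign_channels(red, green, blue, mode):
    """Compose a display image by assigning the captured red/green/blue
    images to the R/G/B channels of the display for each observation mode."""
    if mode in ("normal", "length_measurement"):
        # Normal observation mode (and length measurement mode):
        # red -> R channel, green -> G channel, blue -> B channel.
        return np.dstack([red, green, blue])
    if mode == "special":
        # Special observation mode: the red image is not used; the green
        # image is assigned to the R channel, and the blue image is assigned
        # to both the G and B channels (pseudo-color display).
        return np.dstack([green, blue, blue])
    raise ValueError(f"unknown mode: {mode}")
```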
In a case where a digital zoom function is set to ON as a zoom function by the user interface 16, the signal processing unit 39 cuts out a portion of the subject image and enlarges or reduces the cut portion. As a result, the subject is enlarged or reduced at a specific magnification.
The display controller 40 causes the display 15 to display the subject image that is generated by the signal processing unit 39. The system controller 41 performs various controls on the endoscope 12, the light source device 13, the processor device 14, and the augmented processor device 17. The system controller 41 performs the control of the image pickup element 32 via the image pickup controller 33 provided in the endoscope 12. The image pickup controller 33 also performs the control of the CDS/AGC circuit 34 and the A/D converter 35 in accordance with the control of the image pickup element 32.
The augmented processor device 17 receives data, which are transmitted from the processor device 14, by a data transmission/reception unit 44. A signal processing unit 45 performs processing related to the length measurement mode on the basis of the data that are received by the data transmission/reception unit 44. Specifically, the signal processing unit 45 performs processing for determining the size of a virtual scale from the subject image including the irradiation position of the measurement light and superimposing and displaying the determined virtual scale on the subject image. A display controller 46 causes the augmented display 18 to display the subject image on which the virtual scale is superimposed and displayed. The data transmission/reception unit 44 can transmit data and the like to the processor device 14.
For example, red (the color of beam light) laser light having a wavelength of 600 nm or more and 650 nm or less is used as the light emitted from the light source 23a in the present embodiment, but light having other wavelength ranges, for example, green light having a wavelength of 495 nm or more and 570 nm or less, may be used. The light source 23a is controlled by the system controller 41, and emits light on the basis of an instruction given from the system controller 41. The DOE 23b converts the light, which is emitted from the light source 23a, into the measurement light that is used to obtain measurement information. It is preferable that the amount of measurement light is adjusted from the standpoint of protecting a human body, eyes, and internal organs and is adjusted to an amount of light that is enough to cause halation (pixel saturation) in the observation range of the endoscope 12.
The prism 23c is an optical member that is used to change the travel direction of the measurement light converted by the DOE 23b. The prism 23c changes the travel direction of the measurement light such that the measurement light intersects with the visual field of the image pickup optical system 21 including the objective lens 21a. The details of the travel direction of the measurement light will also be described later. The subject is irradiated with measurement light emitted from the prism 23c.
An example has been described in which, in a case where a refractive index of the prism 23c is denoted by “n1” and a refractive index of the prism 49 is denoted by “n2”, “n1<n2” is satisfied and a light-emitting surface of the prism 23c is inclined toward the optical axis Ax; however, a configuration contrary to this example may be provided. That is, “n1>n2” may be satisfied and the light-emitting surface of the prism 23c may be provided on a side opposite to the optical axis Ax. However, since there is a possibility that light is totally reflected by the light-emitting surface of the prism 23c in this case, it is necessary to impose a limitation on the light-emitting surface of the prism 23c.
The prism 23c may be formed of an auxiliary measurement slit, which is formed at the distal end part 12d of the endoscope, instead of being formed of an optical member. Further, in a case where the prism 23c is formed of an optical member, it is preferable that an anti-reflection coating (AR coating) (anti-reflection portion) is provided on an emission surface of the prism 23c. The reason why the anti-reflection coating is provided as described above is that, in a case where the measurement light is reflected without being transmitted through the emission surface of the prism 23c and the proportion of the measurement light with which the subject is irradiated is reduced, it is difficult for an irradiation position detector 61 to be described later to recognize the position of a spot SP formed on the subject by the measurement light.
The measurement light-emitting unit 23 has only to be capable of emitting the measurement light to the visual field of the image pickup optical system 21. For example, the light source 23a may be provided in the light source device and light emitted from the light source 23a may be guided to the DOE 23b by optical fibers. Further, the prism 23c may not be used and the orientations of the light source 23a and the DOE 23b may be inclined with respect to the optical axis Ax of the image pickup optical system 21 so that the measurement light is emitted in a direction crossing the visual field of the image pickup optical system 21.
With regard to the travel direction of the measurement light, the measurement light is emitted in a state where an optical axis Lm of the measurement light intersects with the optical axis Ax of the image pickup optical system 21.
Since the measurement light is emitted in a state where the optical axis Lm of the measurement light intersects with the optical axis Ax as described above, the size of the subject can be measured from the movement of the position of the spot with respect to a change in observation distance. Then, the image of the subject illuminated with the measurement light is picked up by the image pickup element 32, so that a subject image including the spot SP is obtained. In the subject image, the position of the spot SP varies depending on a relationship between the optical axis Ax of the image pickup optical system 21 and the optical axis Lm of the measurement light and an observation distance. The number of pixels showing the same actual size (for example, 5 mm) is increased in the case of a short observation distance, and the number of pixels showing the same actual size is reduced in the case of a long observation distance.
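This relationship can be sketched as an interpolation over calibration data that associates the spot position with a pixels-per-millimeter value. The calibration points below are invented for illustration; only the tendency (a spot lower in the image corresponds to a shorter observation distance and to more pixels per millimeter) follows the description above.

```python
import numpy as np

# Hypothetical calibration points: spot y-coordinate (pixels, increasing
# downward in the image) versus the number of pixels corresponding to 1 mm.
calib_spot_y = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
calib_px_per_mm = np.array([6.0, 11.0, 18.0, 28.0, 40.0])

def pixels_for_actual_size(spot_y, size_mm=5.0):
    """Interpolate how many pixels represent `size_mm` at this spot position."""
    return size_mm * np.interp(spot_y, calib_spot_y, calib_px_per_mm)

print(pixels_for_actual_size(850.0))  # short observation distance -> many pixels
print(pixels_for_actual_size(150.0))  # long observation distance -> few pixels
```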
The length measurement-compatible endoscope-availability determination unit 140 determines whether or not the endoscope 12 is a length measurement-compatible endoscope in a case where the endoscope 12 is connected to the processor device 14. In a case where the endoscope 12 is a length measurement-compatible endoscope, the switching of a mode to the length measurement mode is enabled. A length measurement-compatible endoscope refers to an endoscope that can emit measurement light and cause the augmented display 18 (or the display 15) to display a length measurement image displaying a virtual scale based on the measurement light. The length measurement-compatible endoscope-availability determination unit 140 comprises a scope ID table (not shown) in which a scope ID given to the endoscope 12 and a flag for the presence or absence of length measurement compatibility (for example, the flag is set to “1” in the case of a length measurement-compatible endoscope and is set to “0” in the case of other endoscopes) are associated with each other. Further, in a case where the endoscope 12 is connected, the length measurement-compatible endoscope-availability determination unit 140 reads out the scope ID of the endoscope. The length measurement-compatible endoscope-availability determination unit 140 determines whether or not the read-out scope ID is a scope ID of a length measurement-compatible endoscope with reference to the flag of the scope ID table.
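A minimal sketch of this determination, assuming the scope ID table is a simple mapping (the IDs below are placeholders, not actual model numbers):

```python
# Scope ID table: flag 1 marks a length measurement-compatible endoscope,
# flag 0 marks other endoscopes; the IDs themselves are illustrative only.
SCOPE_ID_TABLE = {"SCOPE-A": 1, "SCOPE-B": 0}

def length_measurement_mode_enabled(scope_id: str) -> bool:
    """Return True (switching to the length measurement mode is enabled)
    only when the connected endoscope's flag marks it as compatible."""
    return SCOPE_ID_TABLE.get(scope_id, 0) == 1
```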
The measurement light-ON/OFF switching unit 141 controls the light source 23a to switch the turning-on (ON) or turning-off (OFF) of the measurement light. The length measurement image-display setting-ON/OFF switching unit 142 allows the various image display settings in the length measurement mode, such as the display settings (a color tone and the like) of a length measurement image, to be available (ON) or unavailable (OFF) via the user interface 16 or the like. The virtual scale-display switching controller 144 switches the virtual scale to any one of display (ON), non-display (OFF), or the change of a display aspect on the augmented display 18.
In a state where the switching of a mode to the length measurement mode is enabled, the system controller 41 performs at least one of the switching of ON or OFF of measurement light, the switching of ON or OFF of length measurement image-display settings, the switching of ON or OFF of length measurement function-operation state display, and the switching of ON or OFF of the display of a virtual scale or display aspect change of the virtual scale by an operation for switching a mode to the length measurement mode using the observation mode selector switch 12f.
For example, it is preferable that the system controller 41 switches the measurement light to ON, switches the length measurement image-display settings to ON, switches the length measurement function-operation state display to ON, and switches the display of a virtual scale to ON by an operation for switching a mode to the length measurement mode. On the other hand, it is preferable that the system controller 41 switches the measurement light to OFF, switches the length measurement image-display settings to OFF, switches the length measurement function-operation state display to OFF, and switches the display of a virtual scale to OFF by an operation for switching the length measurement mode to the other mode (the normal observation mode or the special observation mode).
It is preferable that the length measurement function-operation state display is displayed in an accessory information display region 18a of the augmented display 18 by a scale display icon 146.
The virtual scale 147 comprises a virtual scale 147a of 5 mm, a virtual scale 147b of 10 mm, and a virtual scale 147c of 20 mm. Each of the virtual scales 147a, 147b, and 147c includes a circular scale (displayed with a dotted line) and a line segment scale (displayed with a solid line). “5” of the virtual scale 147a indicates a scale of 5 mm, “10” of the virtual scale 147b indicates a scale of 10 mm, and “20” of the virtual scale 147c indicates a scale of 20 mm.
The display aspect of a virtual scale is changed according to a selection from a plurality of predetermined scale patterns. Examples of the plurality of scale patterns include, in addition to a scale pattern that is formed of a combination of the three virtual scales 147a, 147b, and 147c each of which includes a circular scale and a line segment scale, a scale pattern that is formed of a combination of the two virtual scales 147b and 147c each of which includes a circular scale and a line segment scale, a scale pattern that is formed of a combination of three virtual scales each of which includes only a line segment among the virtual scales 147a, 147b, and 147c, and the like.
Further, in a case where the length measurement image-display settings are switched to ON, it is preferable that image display settings before the switching of a mode to the length measurement mode are stored in the unswitched image display setting-storage unit 149. For example, in a case where an observation mode before the switching of a mode to the length measurement mode is the normal observation mode, it is preferable that image display settings in the normal observation mode set in the signal processing unit 39 are stored in the unswitched image display setting-storage unit 149. Further, in a case where the length measurement image-display settings are switched to OFF, it is preferable that image display settings are switched to the image display settings stored in the unswitched image display setting-storage unit 149. For example, in a case where image display settings in the normal observation mode are stored in the unswitched image display setting-storage unit 149, the signal processing unit 39 sets image display settings to the image display settings in the normal observation mode, which are stored in the unswitched image display setting-storage unit 149, according to the switching of a mode to the normal observation mode.
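The ON/OFF switching and the saving and restoring of image display settings can be summarized in the following sketch; the class and attribute names are illustrative and do not reflect the actual interfaces of the system controller 41.

```python
class LengthMeasurementController:
    """Entering the length measurement mode turns the four items ON and
    stores the previous image display settings; leaving it turns them OFF
    and restores the stored settings."""

    def __init__(self, image_display_settings):
        self.image_display_settings = image_display_settings
        self._stored_settings = None
        self.measurement_light = False
        self.lm_display_settings = False
        self.operation_state_display = False
        self.virtual_scale = False

    def enter_length_measurement_mode(self, lm_settings):
        self._stored_settings = self.image_display_settings  # store pre-switch settings
        self.image_display_settings = lm_settings
        self.measurement_light = True
        self.lm_display_settings = True
        self.operation_state_display = True   # e.g., scale display icon 146
        self.virtual_scale = True

    def exit_length_measurement_mode(self):
        self.measurement_light = False
        self.lm_display_settings = False
        self.operation_state_display = False
        self.virtual_scale = False
        if self._stored_settings is not None:
            self.image_display_settings = self._stored_settings  # restore
```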
On the other hand, in a case where conditions in switching a mode are not satisfied in an operation for switching a mode to the length measurement mode, the system controller 41 prohibits the switching of the measurement light to ON, prohibits the switching of the length measurement image-display settings to ON, prohibits the switching of the length measurement function-operation state display to ON, and prohibits the switching of the display of a virtual scale to ON. The conditions in switching a mode are conditions that are suitable for the execution of the length measurement mode under setting conditions related to the endoscope 12, the light source device 13, the processor device 14, and the augmented processor device 17. It is preferable that the conditions in switching a mode are conditions not corresponding to the following prohibition setting conditions. In a case where the conditions in switching a mode are not satisfied, it is preferable that length measurement function-operation state-unavailability display showing that the virtual scale 147 is not being displayed on the augmented display 18 is displayed (ON) instead of prohibiting the displaying of the scale display icon 146. It is preferable that the length measurement function-operation state-unavailability display is displayed in the accessory information display region 18a by a scale non-display icon 148.
The setting condition related to the light source device 13 includes an illumination condition for the illumination light that is used in the normal observation mode or the length measurement mode, an illumination condition for the special light that is used in the special observation mode, or an illumination condition for the measurement light that is used in the length measurement mode. The illumination condition includes, for example, the amount of illumination light and the like. The setting condition related to the endoscope 12 includes an image pickup condition related to the image pickup of the subject. The image pickup condition includes, for example, a shutter speed and the like. The setting condition related to the processor device 14 includes a processing condition, such as image processing related to the subject image. The processing condition includes, for example, color balance, brightness correction, various types of enhancement processing, and the like. In the length measurement mode, it is preferable that the setting conditions (the amount of illumination light, a shutter speed, color balance, brightness correction, and various types of enhancement processing) are set to conditions that optimize the detection of the position of the spot SP and that ensure visibility in the user's dimensional measurement.
The prohibition setting conditions include a first prohibition setting condition that causes the detection of the irradiation position of the measurement light from the subject image in the length measurement mode to be hindered, and a second prohibition setting condition that causes the accurate display of a virtual scale corresponding to an observation distance in a length measurement image to be hindered. The first prohibition setting condition includes, for example, the special observation mode, brightness enhancement or red emphasis in the subject image, and the like. Since a red image used to detect the spot SP or the like in the length measurement mode is not used for the display of an image in the special observation mode, it is difficult to detect the irradiation position of the measurement light. It is preferable that the brightness of the subject image is set to be low and redness is suppressed in the length measurement mode as compared to the normal observation mode or the special observation mode.
Further, for example, the use (ON) of a zoom function, such as the optical zoom function or the digital zoom function, is included as the second prohibition setting condition. The reason for this is that a virtual scale displayed in a length measurement image is determined according to the position of the spot SP and is not determined according to the magnification of the zoom function; accordingly, in a case where the zoom function is turned on, it is difficult for the virtual scale to be displayed to correspond to an observation distance.
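A condensed sketch of the condition check implied by the first and second prohibition setting conditions (the parameter names are paraphrases, not the device's actual settings):

```python
def may_switch_to_length_measurement(mode, brightness_enhancement,
                                     red_emphasis, zoom_on):
    # First prohibition setting condition: settings that hinder detection of
    # the measurement light irradiation position (e.g., the special
    # observation mode, brightness enhancement, red emphasis).
    if mode == "special" or brightness_enhancement or red_emphasis:
        return False
    # Second prohibition setting condition: settings that hinder accurate
    # display of a virtual scale (e.g., optical/digital zoom turned ON).
    if zoom_on:
        return False
    return True
```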
For example, in a case where an operation for switching a mode to the length measurement mode is performed by the observation mode selector switch 12f in a state where the endoscope 12 is set to the special observation mode, the length measurement mode controller 50 performs the first control that prohibits the switching of a mode to the length measurement mode and maintains the state of the special observation mode.
Further, in a case where a setting change operation for turning on the zoom function by the operation of the zoom operation part 12h is performed in the length measurement mode, the length measurement mode controller 50 performs the second control that disables the setting change operation for turning on the zoom function.
Furthermore, in a case where a setting change operation for turning on the zoom function by the operation of the zoom operation part 12h is performed in the length measurement mode, the length measurement mode controller 50 performs the third control that cancels the length measurement mode and switches a mode to the normal observation mode as the other mode.
The first light emission control table 55 is used for the control of the amount of measurement light, and stores a first relationship between the coordinate information of the spot SP and the light amount level of measurement light. Specifically, coordinate areas 1 to 5 are set in the captured image in order from the lower side toward the upper side. In a case where the spot SP belongs to Coordinate area 1, the spot SP is present at a position corresponding to the shortest observation distance, and the lowest Level 1 is therefore assigned as the light amount level of measurement light. Coordinate area 2 is provided above Coordinate area 1; in a case where the spot SP belongs to Coordinate area 2, Level 2 higher than Level 1 is assigned as the light amount level of measurement light.
Likewise, Coordinate area 3 is provided above Coordinate area 2. In a case where the spot SP belongs to Coordinate area 3, the spot SP is present at a position corresponding to an observation distance that is long as compared to the case of Coordinate area 2. Accordingly, Level 3 higher than Level 2 is assigned as the light amount level of measurement light. Further, Coordinate area 4 is provided above Coordinate area 3. In a case where the spot SP belongs to Coordinate area 4, the spot SP is present at a position corresponding to an observation distance that is long as compared to the case of Coordinate area 3. Accordingly, Level 4 higher than Level 3 is assigned as the light amount level of measurement light. Furthermore, Coordinate area 5 is an area that is set on the highest side. In a case where the spot SP belongs to Coordinate area 5, the spot SP is present at a position corresponding to an observation distance that is longest as compared to the cases of the other coordinate areas 1 to 4. Accordingly, the highest Level 5 is assigned as the light amount level of measurement light.
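The first light emission control table 55 can be sketched as follows; the y-coordinate boundaries are invented, and only the ordering (a coordinate area higher in the image corresponds to a longer observation distance and a higher light amount level) follows the description above.

```python
# Hypothetical boundaries between Coordinate areas 1-5, in image coordinates
# where y increases downward; a spot above a boundary is in a farther area.
COORDINATE_AREA_BOUNDS = [800, 600, 400, 200]

def light_amount_level(spot_y: float) -> int:
    """Map the y-coordinate of the spot SP to light amount Levels 1-5."""
    area = 1  # Coordinate area 1 = lowest in the image, shortest distance
    for bound in COORDINATE_AREA_BOUNDS:
        if spot_y < bound:  # spot lies above this boundary -> farther area
            area += 1
    return area  # here the Level equals the coordinate area number
```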
The second light emission control table 56 is used for the control of the amount of measurement light, and stores a second relationship among the coordinate information of the spot SP, the light amount level of illumination light, and the light amount level of measurement light.
The system controller 41 specifies the light amount level of measurement light from the coordinate area to which the position of the spot SP belongs and the light amount level of illumination light with reference to the second light emission control table 56. The system controller 41 controls the light source 23a to control the amount of measurement light so that the amount of measurement light is set to the specified light amount level.
In the second light emission control table 56, the light amount level of illumination light and the light amount level of measurement light are set to ratios that are required to specify the position of the spot SP. The reason for this is that it is difficult to specify the position of the spot SP since the contrast of the spot SP is lowered in a case where a ratio of the amount of illumination light to the amount of measurement light is not proper.
In the length measurement mode, the light source processor 31 continuously emits illumination light, which is used to illuminate the entire object to be observed, but emits the measurement light in the form of a pulse.
The patterns of light emission and image pickup in the length measurement mode are as follows. A first pattern is a case where a global shutter type image pickup element (CCD), which performs exposure and reads out electric charges in the respective pixels in the same timing to output image signals, is used as the image pickup element 32. Further, in the first pattern, measurement light is emitted at intervals of two frames as a specific frame interval.
In the first pattern, only the illumination light is emitted in a timing T1, and a second captured image N including only components of the illumination light is obtained on the basis of exposure to the illumination light in the timing T1.
Further, illumination light and measurement light are emitted in the timing T2. Electric charges are simultaneously read out in switching from the timing T2 to a timing T3 on the basis of exposure to illumination light and measurement light in this timing T2, so that a first captured image N+Lm including components of illumination light and measurement light is obtained. The position of the spot SP is detected on the basis of this first captured image N+Lm. A virtual scale corresponding to the position of the spot SP is displayed in the second captured image N displayed in the timing T2. Accordingly, a length measurement image S in which a virtual scale is displayed in the second captured image N displayed in the timing T2 is displayed in the timing T3.
The second captured image N obtained in the timing T2 (first timing) is displayed on the augmented display 18 not only in the timing T2 but also in the timing T3. That is, the second captured image obtained in the timing T2 is continuously displayed for two frames until a timing T4 (second timing) in which the next second captured image is obtained (the same subject image is displayed in the timings T2 and T3). The first captured image N+Lm is not displayed on the augmented display 18 in the timing T3. Here, the second captured image N is displayed while being changed every frame in the normal observation mode, but the same second captured image N is continuously displayed for two frames as described above in the first pattern of the length measurement mode. Accordingly, a frame rate of the first pattern of the length measurement mode is substantially ½ of that of the normal observation mode.
The same applies to the timing T4 or later. That is, a second captured image obtained in the timing T4 is continuously displayed in the length measurement image S in the timings T4 and T5, and a second captured image N obtained in a timing T6 is continuously displayed in the length measurement image S in the timings T6 and T7. On the other hand, the first captured image N+Lm is not displayed on the augmented display 18 in the timings T4, T5, T6, and T7. Since the second captured image N not including components of measurement light is displayed in the display of the length measurement image S as described above, a frame rate is slightly lowered but hindrance to the visibility of an object to be observed, which is likely to occur due to the emission of measurement light, does not occur.
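The first pattern can be sketched as the following loop, which abstracts away the exposure/readout offset of the global shutter; the callback names are placeholders, not actual interfaces of the devices.

```python
def first_pattern(num_frames, capture, detect_spot, overlay_scale, show):
    """Sketch of the first pattern: measurement light fires every second
    frame; the image containing it (N+Lm) is used only for spot detection,
    while the latest illumination-only image N stays on the display."""
    last_n = None
    for t in range(num_frames):
        if t % 2 == 0:
            last_n = capture(measurement_light=False)  # second captured image N
            show(last_n)                               # displayed as-is
        else:
            n_lm = capture(measurement_light=True)     # first captured image N+Lm
            spot = detect_spot(n_lm)                   # detection only; never shown
            show(overlay_scale(last_n, spot))          # length measurement image S
```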
A second pattern is a case where a rolling shutter type image pickup element (CMOS), which includes a plurality of lines for picking up an image of an object to be observed illuminated with illumination light or measurement light, performs exposure in an exposure timing different for each line, and reads out electric charges in a readout timing different for each line to output image signals, is used as the image pickup element 32. Further, in the second pattern, measurement light is emitted at intervals of three frames as a specific frame interval.
In the second pattern, only the illumination light is emitted in a timing T1, and a second captured image N including only components of the illumination light is obtained on the basis of the rolling shutter performed during illumination with the illumination light.
Further, illumination light and measurement light are emitted in the timing T2. A rolling shutter is performed on the basis of illumination with illumination light from the timing T1 to the timing T2 and illumination with measurement light in the timing T2, so that a first captured image N+Lm including components of illumination light and measurement light is obtained in switching from the timing T2 to the timing T3. Furthermore, a first captured image N+Lm including components of illumination light and measurement light is obtained even in switching from the timing T3 to the timing T4. The position of the spot SP is detected on the basis of the first captured images N+Lm described above. In addition, measurement light is not emitted in the timings T3 and T4.
A virtual scale corresponding to the position of the spot SP is displayed in the second captured image N displayed in the timing T2. Accordingly, a length measurement image S in which a virtual scale is displayed in the second captured image N displayed in the timing T2 is displayed in the timings T3 and T4. The second captured image N obtained in the timing T2 (first timing) is displayed on the augmented display 18 not only in the timing T2 but also in the timings T3 and T4. That is, the second captured image obtained in the timing T2 is continuously displayed for three frames until the timing T5 (second timing) at which the next second captured image is obtained (the same subject image is displayed in the timings T2, T3, and T4). On the other hand, the first captured image N+Lm is not displayed on the augmented display 18 in the timings T3 and T4. Since the same second captured image N is continuously displayed for three frames in the second pattern of the length measurement mode, a frame rate of the second pattern of the length measurement mode is substantially ⅓ of that of the normal observation mode.
The same applies to the timing T5 or later. That is, a second captured image obtained in the timing T5 is displayed in the length measurement image S in the timings T5, T6, and T7. On the other hand, the first captured image N+Lm is not displayed on the augmented display 18 in the timings T5, T6, and T7. Since the second captured image not including components of measurement light is displayed in the display of the length measurement image S as described above, a frame rate is lowered but hindrance to the visibility of an object to be observed, which is likely to occur due to the emission of planar measurement light, does not occur.
The first signal processing unit 59 comprises an irradiation position detector 61 that detects the irradiation position of the spot SP from the captured image. It is preferable that the coordinates of the position of the centroid of the spot SP are acquired in the irradiation position detector 61 as the irradiation position of the spot SP.
The second signal processing unit 60 sets a first virtual scale as a virtual scale, which is used to measure the size of a subject, on the basis of the irradiation position of the spot SP, and sets a scale display position of the first virtual scale. The second signal processing unit 60 sets a virtual scale corresponding to the irradiation position of the spot SP with reference to a scale table 62 in which a virtual scale image, of which the display aspect varies depending on the irradiation position of the spot SP and the scale display position, is stored in association with the irradiation position of the spot SP. For example, the size or shape of the virtual scale varies depending on the irradiation position of the spot SP and the scale display position. The display of the virtual scale image will be described later. Further, the contents stored in the scale table 62 are maintained even in a case where the power of the augmented processor device 17 is turned off. The virtual scale image and the irradiation position are stored in the scale table 62 in association with each other, but a distance to the subject (a distance between the distal end part 12d of the endoscope 12 and the subject) corresponding to the irradiation position and the virtual scale image may be stored in the scale table 62 in association with each other.
Since a virtual scale image is required for each irradiation position, the amount of data is increased. For this reason, considering the standpoint of the storage capacity of a memory, startup, a processing time, and the like in the endoscope 12, it is preferable that the virtual scale images are stored in the augmented processor device 17 (or the processor device 14) rather than in a memory (not shown) in the endoscope 12. Further, virtual scale images are created from representative points of a virtual scale image obtained from calibration as described later, but a loss time occurs and the real-time property of processing is impaired in a case where virtual scale images are created from representative points in the length measurement mode. For this reason, after the endoscope 12 is connected to an endoscope connection portion and virtual scale images are created from representative points once to update the scale table 62, virtual scale images are not created from representative points and virtual scale images are displayed using the updated scale table 62. Further, in the second signal processing unit 60, in the case of an emergency where it is difficult to superimpose and display images, a reference scale, which is used to determine the size of a scale, is displayed in a length measurement image from a relationship between the irradiation position of a spot SP and the number of pixels corresponding to the actual size of a subject, instead of a virtual scale image that is to be superimposed and displayed on the length measurement image.
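A sketch of the scale table 62 as a lookup keyed by the irradiation position; quantizing positions into bins and the `render` callback are assumptions made for brevity.

```python
scale_table = {}  # (x_bin, y_bin) -> pre-rendered virtual scale image

def update_scale_table(representative_points, render, bin_size=16):
    """Re-create the table once when an endoscope is connected, so that no
    image has to be rendered at display time (preserving real-time display)."""
    for (x, y), points in representative_points.items():
        scale_table[(x // bin_size, y // bin_size)] = render(points)

def scale_image_for_spot(x, y, bin_size=16):
    """Look up the stored virtual scale image for the irradiation position."""
    return scale_table.get((x // bin_size, y // bin_size))
```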
Furthermore, the second signal processing unit 60 comprises a table updating unit 64 that updates the scale table 62 in a case where the endoscope 12 is connected to the endoscope connection portion. The reason why the scale table 62 is adapted to be capable of being updated is that a positional relationship between the optical axis Lm of measurement light and the image pickup optical system 21 varies depending on the model and the serial number of the endoscope 12 and the display aspect of a virtual scale image is also changed according to the positional relationship. A representative point data table 66 in which representative point data related to representative points extracted from a virtual scale image are stored in association with irradiation positions is used in the table updating unit 64. Details of the table updating unit 64 and the representative point data table 66 will be described later. In the representative point data table 66, a distance to a subject corresponding to an irradiation position (a distance between the distal end part 12d of the endoscope 12 and the subject) and representative point data may be stored in association with each other.
In a case where a length measurement image in which a virtual scale is superimposed on a captured image is displayed on the augmented display 18, the display controller 46 performs a control where the display aspect of the virtual scale varies depending on the irradiation position of the spot SP and a scale display position. Specifically, the display controller 46 causes the augmented display 18 to display a length measurement image in which the first virtual scale is superimposed to be centered on the spot SP. For example, a circular measurement marker is used as the first virtual scale. In this case, a virtual scale M1 corresponding to an actual size of 5 mm is superimposed and displayed to be centered on a spot SP formed on a tumor tm1 of the subject.
Since the scale display position of the virtual scale M1 is positioned at the peripheral portion of the captured image that is affected by distortion caused by the image pickup optical system 21, the virtual scale M1 has an elliptical shape due to an influence of the distortion or the like. Since the above-mentioned virtual scale M1 substantially coincides with the range of the tumor tm1, the size of the tumor tm1 can be measured as about 5 mm. In the captured image, the spot may not be displayed and only the first virtual scale may be displayed.
The first virtual scale corresponding to the actual size of the subject, which is 5 mm, is displayed in each case.
In a case where the endoscope 12 is connected to the endoscope connection portion, the table updating unit 64 creates a virtual scale image corresponding to the model and/or the serial number of the endoscope 12 with reference to the representative point data table 66 and updates the scale table 62.
Representative point data related to representative points of a virtual scale image obtained in calibration are stored in the representative point data table 66 in association with the irradiation position of the spot SP. The representative point data table 66 is created by a calibration method to be described later.
In a case where the endoscope 12 is connected to the endoscope connection portion, the table updating unit 64 acquires information about a positional relationship between the optical axis Lm of measurement light and the image pickup optical system 21 and updates the scale table 62 using the positional relationship and the representative point data table 66. Specifically, the table updating unit 64 calculates difference values of the coordinate information of the representative points RP from a difference between a positional relationship between the optical axis Lm of measurement light and the image pickup optical system 21 in the endoscope 12 connected to the endoscope connection portion and a default positional relationship. Then, the table updating unit 64 creates a virtual scale image M* on the basis of representative points RP* that are obtained in a case where the coordinates of default representative points RP are shifted by the calculated difference values.
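A sketch of this update, assuming the difference between the positional relationships reduces to a two-dimensional shift of the representative point coordinates; `render` stands in for whatever interpolation turns representative points into a virtual scale image.

```python
import numpy as np

def updated_scale_image(default_points, default_axis_pos, actual_axis_pos, render):
    """Shift the default representative points RP by the difference between
    the connected endoscope's positional relationship and the default one,
    then render the virtual scale image M* from the shifted points RP*."""
    diff = np.asarray(actual_axis_pos, float) - np.asarray(default_axis_pos, float)
    shifted = np.asarray(default_points, float) + diff  # representative points RP*
    return render(shifted)                              # virtual scale image M*
```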
Light, which forms a spot in a case where a subject is irradiated with the light, is used as the measurement light, but other light may be used. For example, planar measurement light, which is formed on the subject as an intersection line 67, may be used.
Further, the measurement light may be formed of planar light that includes at least two first feature lines CL.
The measurement information processing unit 70 calculates the measurement information from the positions of the first spots SPk1 or the second spots SPk2. The calculated measurement information is displayed in the captured image by the display controller 46. In a case where the measurement information is calculated on the basis of the positions of the two first spots SPk1, the measurement information can be accurately calculated even under a situation where the subject has a three-dimensional shape.
As shown in
First straight-line distance = ((xp2 − xp1)^2 + (yp2 − yp1)^2 + (zp2 − zp1)^2)^0.5 Equation)
The calculated first straight-line distance is displayed as measurement information 71 (“20 mm” in
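As a minimal sketch of this calculation, assuming the three-dimensional positions (x, y, z) of the two spots are already known, the straight-line distance follows directly from the equation above; the coordinate values are illustrative.

```python
import math

def first_straight_line_distance(p1, p2):
    """Distance between positions P1 (xp1, yp1, zp1) and P2 (xp2, yp2, zp2)."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))

d = first_straight_line_distance((10.0, 5.0, 40.0), (26.0, 17.0, 40.0))  # 20.0 (mm)
```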
Further, a plurality of spotlights arranged in the form of a grid at predetermined intervals in a vertical direction and a horizontal direction may be used as the measurement light. In a case where an image of a tumor tm or the like present in a subject is picked up with spotlights arranged in the form of a grid, an image of diffraction spots DS1 is acquired as shown in
In a case where an interval between the diffraction spots DS1 is measured, a direction and a distance to a subject are calculated on the basis of a measurement result. A relationship between an interval between the diffraction spots DS1 (the number of pixels) and a distance to a subject is used in this processing. Specifically, a direction (α, β) and a distance (r) to a diffraction spot DS1 as an object to be measured are calculated as shown in
X=r×cos α×cos β Equation A)
Y=r×cos α×sin β Equation B)
Z=r×sin α Equation C)
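Equations A) to C) amount to a spherical-to-Cartesian conversion; a minimal sketch follows, assuming the angles are expressed in radians, with illustrative input values.

```python
import math

def spot_to_xyz(r, alpha, beta):
    """Convert the direction (alpha, beta) and distance r of a diffraction
    spot DS1 into coordinates (X, Y, Z)."""
    x = r * math.cos(alpha) * math.cos(beta)  # Equation A)
    y = r * math.cos(alpha) * math.sin(beta)  # Equation B)
    z = r * math.sin(alpha)                   # Equation C)
    return x, y, z

x, y, z = spot_to_xyz(30.0, math.radians(10), math.radians(25))
```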
As shown in
The position specification unit 72 includes a noise component removal unit 74 that removes noise components hindering the specification of the position of the spot SP. In a case where the first captured image includes a color that is different from, but approximate to, the color of the measurement light forming the spot SP, it may not be possible to accurately specify the position of the spot SP. Accordingly, the noise component removal unit 74 removes components of the color approximate to the color of the measurement light from the first captured image as the noise components. The position specification unit 72 specifies the position of the spot SP on the basis of a noise-removed first captured image from which the noise components have been removed.
The noise component removal unit 74 comprises a color information conversion unit 75, a binarization processing unit 76, a mask image generation unit 77, and a removal unit 78. A flow of processing for obtaining the noise-removed first captured image will be described with reference to
The binarization processing unit 76 binarizes the first color information image to generate a binarized first color information image, and binarizes the second color information image to generate a binarized second color information image. A threshold value whose range includes the color of the measurement light is used for the binarization. As shown in
The mask image generation unit 77 removes color information of noise components from the first captured image and generates a mask image to be used to extract color information of the measurement light, on the basis of the binarized first color information image and the binarized second color information image. As shown in
The removal unit 78 extracts color information from the first color information image using the mask image, so that a noise-removed first color information image, from which the color information of the noise components has been removed and in which the color information of the measurement light has been extracted, is obtained. The noise-removed first color information image is changed to a noise-removed first captured image by being subjected to RGB conversion processing for returning the color information to an RGB image. The position specification unit 72 specifies the position of the spot SP on the basis of the noise-removed first captured image. Since the noise components have been removed from the noise-removed first captured image, the position of the spot SP can be accurately specified.
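A minimal sketch of this pipeline follows, assuming the color information images are single-channel arrays and that a range [lo, hi] brackets the color of the measurement light; the thresholds and function names are illustrative assumptions, not from the source.

```python
import numpy as np

def noise_removed_first_image(first_color_img, second_color_img, lo, hi):
    """Binarize both color information images, build a mask, and extract the
    measurement-light color from the first color information image."""
    bin_first = ((first_color_img >= lo) & (first_color_img <= hi)).astype(np.int8)
    bin_second = ((second_color_img >= lo) & (second_color_img <= hi)).astype(np.int8)
    # Colors that also appear in the second captured image cannot come from the
    # measurement light (which is absent there), so they are masked out as noise.
    mask = np.clip(bin_first - bin_second, 0, 1)
    return first_color_img * mask
```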
The image processing unit 73 includes an image selection unit 82 and the scale table 62. The image selection unit 82 selects a processing target image, which is an image to be subjected to processing based on the position of the spot SP, from the first captured image or the second captured image. The image processing unit 73 performs the processing, which is based on the position of the spot SP, on the image selected as the processing target image. The image selection unit 82 selects the processing target image on the basis of a state related to the position of the spot SP. The image selection unit 82 may be adapted to select the processing target image according to a user's instruction. For example, the user interface 16 is used for a user's instruction.
Specifically, in a case where the spot SP stays in a specific range for a specific period, it is considered that the subject or the distal end part 12d of the endoscope moves little. Accordingly, the second captured image is selected as the processing target image. In a case where the subject or the distal end part 12d of the endoscope moves little as described above, a virtual scale can be easily aligned with a lesion portion included in the subject even though there is no spot SP. Further, since color components of the measurement light are not included in the second captured image, the color reproducibility of the subject is not impaired. On the other hand, in a case where the position of the spot SP does not stay in the specific range for the specific period, it is considered that the subject or the distal end part 12d of the endoscope moves significantly. Accordingly, the first captured image is selected as the processing target image. In a case where the subject or the distal end part 12d of the endoscope moves significantly as described above, a user operates the endoscope 12 such that the spot SP is positioned at a lesion portion. Therefore, a virtual scale is easily aligned with the lesion portion.
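The selection logic reduces to a stability test on recent spot positions; below is a minimal sketch, assuming the positions observed over the specific period are available as (x, y) tuples. The range bounds and names are illustrative.

```python
def select_processing_target(spot_history, specific_range, first_img, second_img):
    """Sketch of the image selection unit 82's movement-based selection."""
    x0, y0, x1, y1 = specific_range
    stable = all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in spot_history)
    # Little movement -> second captured image (no measurement light color);
    # much movement -> first captured image (spot visible for aiming).
    return second_img if stable else first_img
```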
The image processing unit 73 generates a first virtual scale, which shows the actual size of the subject, as a virtual scale on the basis of the position of the spot SP in the first captured image. The image processing unit 73 calculates the size of the virtual scale from the position of the spot SP with reference to the scale table 62 in which a relationship between the position of the spot SP in the first captured image and the first virtual scale showing the actual size of the subject is stored. Then, the image processing unit 73 generates a first virtual scale corresponding to the size of the virtual scale.
As shown in
The first signal processing unit 84 comprises a mask processing unit 86, a binarization processing unit 87, a noise component removal unit 88, and an irradiation position detector 89. Processing for removing noise components in the first signal processing unit 84 will be described with reference to
Next, the binarization processing unit 87 obtains a binarized red image PRy (binarized first spectral image) by performing first binarization processing on pixels present in the illumination position-movable range in the red image PRx subjected to the mask processing. In the first binarization processing, as a threshold value condition, pixels having a pixel value equal to or larger than “225” are defined as “1” and pixels having a pixel value less than “225” are defined as “0”. The spot SP, which is a component of the measurement light, is detected by this first binarization processing. However, in the first binarization processing, a second noise component N2, which is halation (pixel saturation) caused by illumination light, is also detected in addition to a first noise component N1 that is a high-luminance red component of the illumination light. These first and second noise components are factors that hinder the detection of the irradiation position of the spot SP. The threshold value condition defines not only the threshold value at the boundary between the pixel values binarized to “0” and those binarized to “1” but also the ranges of pixel values assigned to “0” and to “1”.
Then, in order to remove the first noise component, the noise component removal unit 88 performs first difference processing between the binarized red image PRy and a binarized green image PGy (binarized second spectral image), which is the green image PGx binarized by second binarization processing. In the second binarization processing, as a threshold value condition, pixels having a pixel value in the range of “30” to “220” are defined as “1” and pixels having a pixel value in other ranges, that is, equal to or larger than “0” and less than “30” or exceeding “220”, are defined as “0”. The pixel value of a pixel that becomes “0” or less as a result of the first difference processing is set to “0”. The first noise component N1 is removed in the first difference image PD1 obtained from the first difference processing; however, the second noise component N2 often remains in the first difference image PD1 without being removed. The first noise component is removed here by the first difference processing of the binarized red image and the binarized green image, but it may instead be removed by other first arithmetic processing.
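A minimal sketch of the two binarizations and the first difference processing follows, using the threshold conditions quoted above (red ≥ 225 → “1”; green in 30 to 220 → “1”); the array handling is an illustrative assumption.

```python
import numpy as np

def first_difference_image(red_img, green_img):
    """Sketch of first/second binarization and first difference processing."""
    bin_red = (red_img >= 225).astype(np.int16)                            # PRy
    bin_green = ((green_img >= 30) & (green_img <= 220)).astype(np.int16)  # PGy
    pd1 = bin_red - bin_green  # first difference processing
    pd1[pd1 < 0] = 0           # pixels of "0" or less are set to "0"
    return pd1                 # first difference image PD1 (N1 removed)
```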
Further, in order to remove the second noise component, as shown in
The irradiation position detector 89 detects the irradiation position of the spot SP from the first difference image or the second difference image. It is preferable that the coordinates of the position of the centroid of the spot SP are acquired in the irradiation position detector 89 as the irradiation position of the spot SP.
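The centroid computation can be sketched as follows, assuming the irradiation position is taken as the centroid of the non-zero pixels of the difference image; the unweighted averaging is a simplifying assumption.

```python
import numpy as np

def spot_centroid(difference_img):
    """Sketch of the irradiation position detector 89's centroid step."""
    ys, xs = np.nonzero(difference_img)
    if xs.size == 0:
        return None  # no spot detected in this frame
    return float(xs.mean()), float(ys.mean())  # (x, y) coordinates of the centroid
```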
The second signal processing unit 85 sets a first virtual scale, which shows the actual size of the subject, as a virtual scale on the basis of the position of the spot SP. The second signal processing unit 85 calculates the size of the virtual scale from the position of the spot with reference to the scale table 62 in which a relationship between the position of the spot SP and the first virtual scale showing the actual size of the subject is stored. Then, the second signal processing unit 85 sets a first virtual scale corresponding to the size of the virtual scale.
As shown in
The irradiation region recognition unit 90 can recognize the spot SP that has the specific shape and the feature quantity described above. Specifically, it is preferable that the irradiation region recognition unit 90 includes a learning model 91 for recognizing the spot SP by outputting the spot SP, which is a measurement light-irradiation region, in response to the input of the captured image as shown in
Since the spot SP is recognized using the learning model 91, not only the circular spot SP (see
As shown in
The position specification unit 92 includes a distance calculation unit 94. The position specification unit 92 specifies the position of the spot SP, which is formed on the subject by the measurement light, on the basis of the captured image of the subject that is illuminated with the illumination light and the measurement light. The distance calculation unit 94 obtains an observation distance from the position of the spot SP.
The image processing unit 93 includes an image selection unit 95, a scale table 62, an offset setting unit 97, an offset distance calculation unit 98, and an offset virtual scale generation unit 99. The image selection unit 95 selects an image that is to be subjected to processing based on the position of the spot SP. The offset setting unit 97 sets an offset amount, which corresponds to the height of the spot SP of the convex polyp 100, for the observation distance. The offset distance calculation unit 98 adds the offset amount to the observation distance to calculate an offset distance. The offset virtual scale generation unit 99 generates an offset virtual scale on the basis of the offset distance.
An offset will be described below. First, the convex shape of the subject refers to a shape in which the subject protrudes from a peripheral portion. Accordingly, the convex shape has only to be a shape in which some portion protrudes from a peripheral portion; other properties, such as the size or area of the shape, the height and/or number of protruding portions, and the continuity of the height or the like, are not limited.
More specifically, for example, as shown in
Next, the height of the spot SP of the convex polyp 100 will be described. In the present embodiment, the height of the spot SP of the polyp 100 is a distance between the spot SP of the polyp 100 and the flat portion 100b of the polyp 100 in the vertical direction. More specifically, a spot SP1 is formed at the apex portion 100a of the polyp 100 as shown in
Further, in
The observation distance and the offset amount will be described below. As shown in
Accordingly, the offset setting unit 97 sets the height HT1 of the spot SP of the polyp 100 as an offset amount for the observation distance D5. Then, the offset distance calculation unit 98 adds the height HT1 of the spot SP1 of the polyp 100, which is an offset amount, to the observation distance D5 to calculate an offset distance D6. Accordingly, the offset distance calculation unit 98 calculates the offset distance D6 using the following equation OS). HT1 is a distance between the position P2 and a position P3.
D6=D5+HT1 Equation OS)
After that, the offset virtual scale generation unit 99 generates a virtual scale, which is based on the observation distance D6, as an offset virtual scale. More specifically, the offset virtual scale generation unit 99 generates a virtual scale, which is obtained in a case where an observation distance is a distance D6, as an offset virtual scale with reference to the scale table 62. The offset virtual scale shows an actual distance to the subject positioned on the extension surface 101 or the size of the subject.
The image processing unit 93 performs processing for superimposing the generated offset virtual scale on the captured image to generate a length measurement image. For more accurate measurement, it is preferable that the offset virtual scale is superimposed and displayed at the position where the spot SP is formed. Accordingly, even in a case where the offset virtual scale is to be displayed at a position away from the spot SP, the offset virtual scale is displayed as close to the spot SP as possible. The length measurement image on which the offset virtual scale is superimposed is displayed on the augmented display 18 by the display controller 46.
As shown in
In an example shown in
In an example shown in
Since the lines of the virtual scales M11 to M15 are changed according to an observation distance as described above, a user, such as a medical doctor, can easily and accurately measure the dimensions of the subject. Further, since each of the widths W11 to W15 of the lines of the virtual scales M11 to M15 is set to a value inversely proportional to an observation distance, it is possible to recognize the magnitude of a dimensional error from the width of the line. For example, in consideration of the recognized error, it is understood that the size of a tumor tm is certainly smaller than the set actual sizes (5 mm or less in the examples shown in
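As a minimal sketch of the inverse-proportional width setting, assuming a proportionality constant k that would come from calibration (the value below is illustrative, not from the source):

```python
def scale_line_width(observation_distance_mm, k=60.0):
    """Line width of the virtual scale, inversely proportional to the
    observation distance; k is an assumed calibration factor."""
    return k / observation_distance_mm

w_near = scale_line_width(10.0)  # 6.0 px at a near observation distance
w_far = scale_line_width(30.0)   # 2.0 px at a far observation distance
```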
Furthermore, a concentric circular virtual scale M2 including three concentric circles having different sizes may be set on the basis of the position of one spot SP as shown in
A width W22 of the concentric circle M22, which is positioned immediately outside the concentric circle M21 positioned on the innermost side, is set to be larger than a width W21 of the concentric circle M21, and a width W23 of the concentric circle M23 positioned on the outermost side is set to be larger than the width W22 of the concentric circle M22. In an example shown in
Each of the widths W21 to W23 is set to a value inversely proportional to an observation distance while a ratio of the widths W21 to W23 (a ratio of W21:W22:W23=1:√2:2 in the example shown in
As shown in
As shown in
In an example shown in
Since each of the gaps G1 to G3 of the broken lines of the virtual scales M41 to M43 is set to a value inversely proportional to an observation distance as described above, it is possible to recognize the magnitude of a dimensional error from the gap of the broken line.
A virtual scale is formed of the same number of lines regardless of an observation distance. As shown in
In an example shown in
As shown in
The functions of the virtual scale setting unit 105 and the length measurement image creation unit 107 will be described below. As shown in
The position specification unit 92 specifies the position of the spot SP on the basis of the captured image 109 input to the signal processing unit 45. The virtual scale setting unit 105 sets, with reference to the scale table 62, a virtual scale that shows the actual size of the object to be observed corresponding to the position of the spot SP and that includes gradations of which an end portion serves as a base point. The end portion is a portion of the shape of the virtual scale closer to the outer side than to the middle, such as a starting point or an end point.
As shown in
Various types of virtual scales are used depending on settings. For example, a virtual scale having the shape of a straight line segment or a shape in which straight line segments are combined with each other, a virtual scale having a circular shape or a shape in which circles are combined with each other, a virtual scale having a shape in which a circle and a straight line segment are combined with each other, and the like are used.
As shown in
As shown in
As shown in
As shown in
The region of interest is a region which is included in the subject and to which a user is to pay attention. The region of interest is, for example, a polyp or the like, and is a region that is likely to need to be measured. Further, a measurement portion is a portion, of which the length or the like is to be measured, of the region of interest. For example, in a case where the region of interest is a reddened portion, a measurement portion is the longest portion or the like of the reddened portion. Alternatively, in a case where the region of interest has a circular shape, a measurement portion is a diameter portion or the like of the region of interest.
The length measurement image generation unit 122 creates a length measurement image in which the measured value scale is superimposed on the captured image. The measured value scale is superimposed on the captured image to be aligned with the measurement portion of the region of interest. The length measurement image is displayed on the augmented display 18.
As shown in
The reference scale 131 includes, for example, a line segment that has the number of pixels corresponding to an actual size of 20 mm, and a numerical value and a unit that represent the actual size. The reference scale 131 is not usually displayed on the augmented display 18; in a case where it is displayed on the augmented display 18, it appears superimposed on the captured image 124.
As shown in
For example, in a case where the actual size of the reference scale is denoted by L0, the number of pixels of the reference scale 131 in the captured image 124 is denoted by Aa, the number of pixels of the measurement portion in a case where the reference scale 131 is superimposed on the region 129 of interest in the captured image 124 is denoted by Ba, and the actual size of a measured value scale 132 is denoted by L1, the measured value calculation unit 128 generates the measured value scale 132 such that the following equation (K1) is satisfied.
L1=L0×Ba/Aa Equation (K1)
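Equation (K1) transcribes directly to code; the pixel counts below are illustrative values.

```python
def measured_value_scale_size(l0_mm, aa_pixels, ba_pixels):
    """L1 = L0 x Ba / Aa: actual size of the measured value scale 132 from the
    reference scale's actual size L0 and the pixel counts Aa and Ba."""
    return l0_mm * ba_pixels / aa_pixels

l1 = measured_value_scale_size(20.0, aa_pixels=160, ba_pixels=52)  # 6.5 mm
```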
As shown in
The length measurement image generation unit 122 generates a length measurement image 133 in which the measured value scale 132 is superimposed on the captured image 124. For example, as shown in
The type of the measured value scale 132 can be selected from a plurality of types. The measurement content reception unit 127 receives the setting of the contents of the measured value scale and sends the contents of the measured value scale to the measured value scale generation unit 121, and the length measurement image generation unit 122 generates the length measurement image 133 using the measured value scale 132 that is generated by the measured value scale generation unit 121 on the basis of the contents of the measured value scale.
It is preferable that the region-of-interest extraction unit 125 extracts a region of interest using a trained model trained using captured images acquired in the past. Various models suitable for image recognition using machine learning can be used as a model used as the trained model. A model using a neural network can be preferably used for the purpose of recognizing a region of interest in an image. In a case where these models are to be trained, these models are trained using captured images, which include information about the region of interest, as teacher data. Examples of the information about the region of interest include the presence or absence of the region of interest, the position or range of the region of interest, and the like. Some models may be trained using captured images not including the information about the region of interest.
Further, it is preferable that the measurement portion determination unit 126 also determines a measurement portion using a trained model trained using captured images acquired in the past. Models and the like used as the trained model are the same as those of the region-of-interest extraction unit. However, in a case where these models are to be trained, these models are trained using captured images that include information about the measurement portion. The information about the measurement portion includes a measured value and the measurement portion. Some models may be trained using captured images not including the information about the measurement portion. The trained model used by the region-of-interest extraction unit 125 and the trained model used by the measurement portion determination unit 126 may be common. In a case where a purpose is to extract the measurement portion, one trained model may be adapted to extract the measurement portion without extracting the region of interest from the captured image 124.
In the second signal processing unit 60, the scale table 62, which is used to display a virtual scale deformed according to the position of the spot SP, is updated from the representative point data table 66 in which the irradiation position of measurement light and the representative points of a virtual scale are stored (see
In consideration of the distortion of the image pickup optical system 21, the display aspect of a virtual scale may be changed between a region in which measurement using the virtual scale is effective and other regions. Specifically, in a case where a spot SP is present outside a range of an effective measurement region (near end Px side) as shown in
Furthermore, the type of a line of the virtual scale may be changed depending on whether the spot SP is present inside or outside the range of the effective measurement region. In this case, it is preferable that a movement locus MT of the spot SP is displayed as shown in
The details of the acquisition of a static image in the length measurement mode will be described. In a case where a static image-acquisition instruction is not given, the system controller 41 controls the light source device 13 to emit illumination light and measurement light. As shown in
A second captured image, which is obtained from the image pickup of the subject illuminated with illumination light, is obtained in the first timing. A first captured image, which is obtained from the image pickup of the subject illuminated with illumination light and measurement light, is obtained in the second timing and the third timing. Then, as shown in
As shown in
Further, a second timing and a third timing may be timings different from each other as shown in
As shown in
The diagnostic information acquisition unit 136 acquires diagnostic information about the first captured image or the second captured image from a diagnostic information management device 138. The diagnostic information acquisition unit 136 acquires a medical chart of a patient who is an object to be examined as the diagnostic information. The medical chart is information in which the progress and the like of medical care or examination for a patient are recorded, and includes, for example, a record, such as the name, the gender and age, the name of a disease, major symptoms, the contents of prescription or treatment, or the medical history of a patient. Information about the lesion portion that is subjected to recognition processing by the lesion recognition unit 135, and diagnostic information about the first captured image or the second captured image, which is acquired by the diagnostic information acquisition unit 136, are stored in the static image storage unit 42 as attached data of a data set DS in association with the first captured image or the second captured image.
The learning unit 137 performs machine learning using the first captured image or the second captured image that is stored in the static image storage unit 42 and the attached data (data set) that are associated with these first and second captured images. Specifically, the learning unit 137 performs machine learning on the learning model of the lesion recognition unit 135. It is preferable that the second captured image is used as a teacher data candidate for machine learning. Since the second captured image is an image that is obtained in response to a static image-acquisition instruction during the measurement of a tumor tm or the like, the second captured image is an image in which a region of interest as an object to be observed is highly likely to be included. Further, since the second captured image is a normal endoscopic image that is obtained in a case where measurement light is not emitted, the second captured image is highly useful as teacher data for machine learning. Furthermore, since information about a lesion portion, diagnostic information, and the like are also attached as the attached data, a user does not need to input the information about a lesion portion, the diagnostic information, and the like in a case where machine learning is performed. Since the second captured image as a teacher data candidate is accumulated, the accuracy of recognition processing performed by the lesion recognition unit 135 is improved as machine learning is performed. In a case where the first captured image is to be used as a teacher data candidate for machine learning, the first captured image may be used as it is but it is more preferable that a portion other than an irradiation region of measurement light is used as a teacher data candidate.
A calibration method of creating the representative point data table 66 using a calibration apparatus 200 shown in
The moving mechanism 202 includes a holding unit (not shown) that holds the distal end part 12d of the endoscope 12 toward the calibration display 201, and moves the holding unit at specific intervals to change a distance Z between the distal end part 12d of the endoscope 12 and the calibration display 201. Whenever the distance Z is changed by the moving mechanism 202, the calibration display controller 204 displays an image of a virtual scale of a first display aspect, which is not affected by the image pickup optical system 21, at the irradiation position of measurement light on the calibration display 201. Since an influence of distortion or the like caused by the image pickup optical system 21 is not considered for the image of the virtual scale of the first display aspect, the image of the virtual scale of the first display aspect is not displayed with a size, a shape, or the like corresponding to a scale display position in a case where the image of the virtual scale of the first display aspect is displayed on the augmented display 18.
The calibration image acquisition unit 206 acquires a calibration image, which is obtained from the image pickup of the virtual scale of the first display aspect displayed on the calibration display 201, by the endoscope 12. In a case where the endoscope 12 picks up an image whenever the distance Z is changed, that is, whenever the virtual scale of the first display aspect is displayed, the calibration image is acquired. For example, in a case where the virtual scale of the first display aspect is displayed n times, n calibration images are obtained.
An image of a virtual scale of a second display aspect, which is affected by the image pickup optical system 21, is included at the irradiation position of measurement light in the calibration image. Since an influence of distortion or the like caused by the image pickup optical system 21 is considered for the image of the virtual scale of the second display aspect, the image of the virtual scale of the second display aspect is displayed with a size, a shape, or the like corresponding to a scale display position.
The calibration unit 208 calibrates the display of the virtual scale on the augmented display 18 on the basis of the calibration image acquired by the calibration image acquisition unit 206. Specifically, in the calibration unit 208, a representative point data table, which is created by representative point extraction processing and table creation processing, is sent to the augmented processor device 17 and is stored in the representative point data table 66. The representative point extraction processing is to extract representative points from the image of the virtual scale of the second display aspect included in the calibration image, and the table creation processing is to create a representative point data table by associating representative point data related to the representative points with an irradiation position in a timing when the calibration image is acquired.
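A minimal sketch of the table creation step follows; `extract_representative_points` and `irradiation_position_of` stand in for image-analysis steps not detailed here and are hypothetical names.

```python
def build_representative_point_table(calibration_images,
                                     extract_representative_points,
                                     irradiation_position_of):
    """Associate representative point data with the irradiation position at the
    timing when each calibration image was acquired."""
    table = {}
    for image in calibration_images:
        key = irradiation_position_of(image)                # spot position at acquisition
        table[key] = extract_representative_points(image)   # representative point data
    return table  # contents for the representative point data table 66
```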
As shown in
As shown in
As shown in
For example, in a case where the irradiation position of measurement light is present at the inspection reference position 308 in the inspection image and the virtual scale M enters the inspection region 306a, a user determines that the virtual scale M is properly displayed. On the other hand, in a case where even a part of the virtual scale M does not enter the inspection region 306a, such as a case where even a part of the virtual scale M protrudes from the inspection region 306a, as shown in
The scale table 62 may be created as follows. A relationship between the position of a spot and the size of a virtual scale can be obtained from the image pickup of a chart in which a pattern having an actual size is regularly formed. For example, spot-like measurement light is emitted to the chart; the image of a graph paper-shaped chart including ruled lines (5 mm) having the same size as the actual size or ruled lines (for example, 1 mm) having a size smaller than the actual size is picked up while an observation distance is changed to change the position of a spot; and a relationship between the position of the spot (the pixel coordinates of the spot on the image pickup surface of the image pickup element 32) and the number of pixels corresponding to the actual size (how many pixels are used to represent an actual size of 5 mm?) is acquired.
As shown in
Since the X-coordinate and the Y-coordinate of a spot have a one-to-one correspondence, basically the same results (the same number of pixels for the same spot position) are obtained regardless of which of the functions g1 and g2 is used. Accordingly, in a case where the size of the first virtual scale is to be calculated, either function may be used, and the function of g1 and g2 in which the number of pixels is more sensitive to a change in position may be selected. Further, in a case where the values of g1 and g2 are significantly different from each other, it may be determined that “the position of the spot could not be recognized”.
The functions g1, g2, h1, and h2 obtained as described above are stored in the scale table 62 in a look-up table format. The functions g1 and g2 may be stored in the scale table 62 in a function format.
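The look-up-table form of g1 can be sketched with interpolation over hypothetical calibration samples mapping the spot's X-coordinate (in pixels) to the number of pixels that represent an actual 5 mm at that position; all sample values below are assumptions for illustration.

```python
import numpy as np

# Hypothetical calibration samples for function g1.
xs = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
px_per_5mm = np.array([180.0, 150.0, 120.0, 95.0, 70.0])

def g1(spot_x):
    """Interpolate the pixel count of a 5 mm virtual scale from the spot's
    X-coordinate (look-up-table form of g1 in the scale table 62)."""
    return float(np.interp(spot_x, xs, px_per_5mm))

size_px = g1(420.0)  # pixel size of the first virtual scale at this spot position
```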
Stripe-pattern light ZPL, which is formed as light having a stripe pattern on a subject as shown in
For example, a subject is alternately irradiated with stripe-pattern light having a phase X, a phase Y, and a phase Z, the vertical stripe patterns of which are shifted from each other by 120° (2π/3). In this case, the three-dimensional shape of the subject is measured using three types of images obtained on the basis of the respective types of stripe-pattern light. For example, it is preferable that the subject is irradiated with the three types of stripe-pattern light while these are switched every frame (or every few frames) as shown in
Grid-pattern measurement light LPL, which is formed as a grid pattern as shown in
In a case where the grid-pattern measurement light LPL is used as the measurement light, in the length measurement mode, the subject may be constantly irradiated with the illumination light and the grid-pattern measurement light LPL; alternatively, the subject may be constantly irradiated with the illumination light while the grid-pattern measurement light LPL is repeatedly turned on and off (dimmed) every frame (or every few frames) as shown in
Three-dimensional planar light TPL, which is represented in a subject image by mesh lines as shown in
In a case where the three-dimensional planar light TPL is used as the measurement light, in the length measurement mode, the subject may be constantly irradiated with the illumination light and the three-dimensional planar light TPL; alternatively, the subject may be constantly irradiated with the illumination light while the three-dimensional planar light TPL is repeatedly turned on and off (or dimmed) every frame (or every few frames) as shown in
In the embodiment, the hardware structures of processing units, which perform various types of processing, such as the reception unit 38, the signal processing unit 39, the display controller 40, the system controller 41, the static image storage unit 42, the data transmission/reception unit 43, the data transmission/reception unit 44, the signal processing unit 45, and the display controller 46 (including various controllers or processing units provided in these controllers and the like (for example, the length measurement mode controller 50, the first signal processing unit 59, and the like)), are various processors to be described below. Various processors include: a central processing unit (CPU) that is a general-purpose processor functioning as various processing units by executing software (program); a programmable logic device (PLD) that is a processor of which the circuit configuration can be changed after manufacture, such as a field programmable gate array (FPGA); a dedicated electrical circuit that is a processor having circuit configuration designed exclusively to perform various types of processing; and the like.
One processing unit may be formed of one of these various processors, or may be formed of a combination of two or more processors of the same kind or different kinds (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). Further, a plurality of processing units may be formed of one processor. As an example where a plurality of processing units are formed of one processor, first, there is an aspect where one processor is formed of a combination of one or more CPUs and software, as typified by a computer such as a client or a server, and functions as a plurality of processing units. Second, there is an aspect where a processor that fulfills the functions of the entire system, including the plurality of processing units, with one integrated circuit (IC) chip, as typified by System On Chip (SoC) or the like, is used. In this way, various processing units are formed using one or more of the above-mentioned various processors as hardware structures.
In addition, the hardware structures of these various processors are more specifically electrical circuitry where circuit elements, such as semiconductor elements, are combined. Further, the hardware structure of the storage unit is a storage device, such as a hard disc drive (HDD) or a solid state drive (SSD).
Number | Date | Country | Kind
---|---|---|---
2020-147691 | Sep 2020 | JP | national
This application is a Continuation of PCT International Application No. PCT/JP2021/008993 filed on 8 Mar. 2021, which claims priority under 35 U.S.C. § 119(a) to Japanese Patent Application No. 2020-147691 filed on 2 Sep. 2020. The above application is hereby expressly incorporated by reference, in its entirety, into the present application.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2021/008993 | Mar 2021 | US
Child | 18177537 | | US