The present invention relates to an examination support device, an examination support method, and an examination support program.
There is known an endoscope system that captures the inside of a subject's stomach, for example, with an endoscope and displays the image on a monitor. These days, examination support devices that analyze an image captured by an endoscope system and notify a doctor of the analysis result are becoming widespread (for example, see Patent Literature 1). Furthermore, a mode of use in which an examination support device is connected to an endoscope system and is caused to analyze, in real time, an in-body image captured by the endoscope system is also reaching a practical level.
Patent Literature 1: Japanese Patent Laid-Open No. 2019-42156
If an examination is performed while a diagnosis result calculated by an examination support device is displayed on a display unit, the doctor gains the advantage of being able to observe while focusing on a specific part according to the displayed diagnosis result, for example. On the other hand, there are also cases where display of a diagnosis result, which takes some time to appear, is preferably turned off due to a demand to reduce the examination time during which the endoscope is inserted inside the body of the subject. However, if the diagnosis function is completely turned off, the risk of overlooking a lesion increases.
The present invention has been made to solve the problems described above, and aims to provide an examination support device and the like that prevent a lesion from being overlooked while allowing display to be switched according to the state of an examination.
An examination support device according to a first mode of the present invention is an examination support device that is used by being connected to an endoscope system including a camera unit for being inserted into an inside of a body of a subject, the examination support device including: an acquisition unit for sequentially acquiring a captured image that is captured by the camera unit; a diagnosis unit for performing diagnosis of the inside of the body based on the captured image; a display unit for displaying a diagnosis result that is output by the diagnosis unit; a reception unit for receiving, from a user, selection of an on mode in which the diagnosis result is displayed on the display unit or an off mode in which display is not performed; and a control unit for causing the diagnosis result to be displayed on the display unit in real time in a case where selection of the on mode is received by the reception unit, and causing the diagnosis result to be stored in a storage unit without being displayed on the display unit at least while examination of the inside of the body by the camera unit is being performed, in a case where selection of the off mode is received.
An examination support method according to a second mode of the present invention is an examination support method performed using an examination support device that is used by being connected to an endoscope system including a camera unit for being inserted into an inside of a body of a subject, the examination support method including: a reception step of receiving, from a user, selection of an on mode in which a diagnosis result is displayed on a display unit or an off mode in which display is not performed; an acquisition step of sequentially acquiring a captured image that is captured by the camera unit; a diagnosis step of performing diagnosis of the inside of the body based on the captured image; a display control step of causing the diagnosis result to be displayed in real time on the display unit in a case where selection of the on mode is received in the reception step; and a storage control step of causing the diagnosis result to be stored in a storage unit without being displayed on the display unit at least while examination of the inside of the body by the camera unit is being performed, in a case where selection of the off mode is received in the reception step.
A storage medium storing an examination support program according to a third mode of the present invention is a storage medium storing an examination support program for controlling an examination support device that is used by being connected to an endoscope system including a camera unit for being inserted into an inside of a body of a subject, the examination support program being for causing a computer to perform: a reception step of receiving, from a user, selection of an on mode in which a diagnosis result is displayed on a display unit or an off mode in which display is not performed; an acquisition step of sequentially acquiring a captured image that is captured by the camera unit; a diagnosis step of performing diagnosis of the inside of the body based on the captured image; a display control step of causing the diagnosis result to be displayed in real time on the display unit in a case where selection of the on mode is received in the reception step; and a storage control step of causing the diagnosis result to be stored in a storage unit without being displayed on the display unit at least while examination of the inside of the body by the camera unit is being performed, in a case where selection of the off mode is received in the reception step.
According to the present invention, it is possible to provide an examination support device and the like that prevent a lesion from being overlooked while allowing display to be switched according to the state of an examination.
Hereinafter, the present invention will be described based on an embodiment of the invention, but the invention according to the claims is not limited to the embodiment described below. Moreover, not all the configurations described in the embodiment are essential as means for solving the problems.
The endoscope system 200 includes a system monitor 220 that is configured by a liquid crystal panel, for example. The endoscope system 200 processes the image signal transmitted from the camera unit 210 and displays it on the system monitor 220 as a viewable captured image 221. Furthermore, the endoscope system 200 displays, on the system monitor 220, examination information 222 including subject information, camera information about the camera unit 210, and the like.
The examination support device 100 is connected to the endoscope system 200 by a connection cable 250. The endoscope system 200 transmits a display signal that is transmitted to the system monitor 220, also to the examination support device 100 via the connection cable 250. That is, the display signal in the present embodiment is an example of an image signal that is provided to an external device by the endoscope system 200. The examination support device 100 includes, as a display unit, a display monitor 120 that is configured by a liquid crystal panel, for example, and extracts an image signal corresponding to the captured image 221 from the display signal that is transmitted from the endoscope system 200, and sequentially displays the same as a captured image 121 that can be viewed, on the display monitor 120.
In the case where the examination support device 100 is set to the on mode, a doctor performs an examination while viewing the captured image 121 that is displayed in real time according to operation of the camera unit 210. The doctor operates the camera unit 210 such that a target part that is to be diagnosed by the examination support device 100 is displayed as the captured image 121.
In the on mode, for example, when a diagnosis trigger from the doctor is detected, the examination support device 100 generates, from the captured image 121 at that time point, image data to be input to a neural network for analysis described later. A diagnosis result generated from the output of the neural network for analysis is then converted into computer graphics and displayed on the display monitor 120 as a CG indicator 122. In the illustrated example, a diagnosis result indicating that the region is estimated to be cancer with a probability of 75% is displayed. Furthermore, the region for which this determination is made is indicated by a diagnosis region frame 121a superimposed on the captured image 121. The diagnosis result is displayed for a specific period of time, after which the display switches back to the captured image 121 captured by the camera unit 210 in real time.
As described above, the display monitor 120 is a monitor including a liquid crystal panel, for example, and the display monitor 120 displays the captured image 121, the CG indicator 122, and the like in a visible manner. The input/output interface 130 is a connection interface that includes a connector for connecting the connection cable 250, and that is for exchanging information with an external appliance. The input/output interface 130 includes a LAN unit, for example, and takes in the examination support program and update data for a neural network for analysis 151 described later from an external appliance and transfers the same to the arithmetic processing unit 110.
The input device 140 is a keyboard, a mouse, or a touch panel that is superimposed on the display monitor 120, for example, and a doctor or an assistant may operate the same to change settings of the examination support device 100 and to input information necessary for examination.
The storage unit 150 is a non-volatile storage medium, and is configured by a hard disk drive (HDD), for example. The storage unit 150 is capable of storing, in addition to programs for controlling the examination support device 100 and for executing processes, various parameter values to be used for control and calculation, functions, display element data, look-up tables, and the like. In particular, the storage unit 150 stores the neural network for analysis 151. The neural network for analysis 151 is a trained model that, when image data captured by the camera unit 210 is input, calculates a region in the image where a lesion is estimated to be present and the estimated probability that the region is a lesion. Additionally, the storage unit 150 may be configured by a plurality of pieces of hardware; for example, a storage medium for storing programs and a storage medium for storing the neural network for analysis 151 may be configured by separate pieces of hardware.
The arithmetic processing unit 110 also serves as an arithmetic functional unit that performs various calculations according to processes instructed by the examination support program. The arithmetic processing unit 110 may function as an acquisition unit 111, a diagnosis unit 112, a control unit 113, a reception unit 114, and a detection unit 115. The acquisition unit 111 sequentially acquires display signals transmitted from the endoscope system 200, and develops each into a regenerated signal image. Furthermore, the image region indicating the captured image captured by the camera unit 210 is established by retrieving a region of change where the amount of difference between successive regenerated signal images is equal to or greater than a reference amount. Once the image region is established, the acquisition unit 111 sequentially generates the captured image 121 as a frame image from that image region of each display signal acquired thereafter.
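The region-of-change retrieval performed by the acquisition unit can be sketched as follows, assuming each regenerated signal image is available as a grayscale array. The function name, the fixed reference amount, and the bounding-box representation are illustrative assumptions, not details given in the embodiment.

```python
import numpy as np

def find_image_region(prev_frame, curr_frame, reference_amount=30):
    """Locate the sub-region of the regenerated signal image that changes
    between successive frames, i.e. the live captured-image region.

    prev_frame / curr_frame: HxW grayscale arrays (assumed representation).
    Returns (top, bottom, left, right) of the bounding box, or None when
    no pixel changes by at least the reference amount.
    """
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    changed = diff >= reference_amount  # pixels whose change reaches the reference amount
    if not changed.any():
        return None                     # static page: no live image region yet
    rows = np.where(changed.any(axis=1))[0]
    cols = np.where(changed.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```

Once a box is established, subsequent display signals could simply be sliced with it to produce each frame of the captured image.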
The diagnosis unit 112 inputs the captured image 121 as a diagnosis image to the neural network for analysis 151 read from the storage unit 150, and causes a region, in the image, where a lesion is possibly present to be output together with its estimated probability. The diagnosis unit 112 generates a diagnosis result by associating an input image or the like with the output.
The control unit 113 controls display on the display monitor 120 by generating a display signal for a display page, controls reading/writing of data with respect to the storage unit 150, and controls transmission/reception of various control signals to/from an external appliance via the input/output interface 130, for example. In particular, when the on mode is executed, the diagnosis result generated by the diagnosis unit 112 is caused to be displayed in real time on the display monitor 120. When a background mode in the off mode is executed, the diagnosis result generated by the diagnosis unit 112 is stored in the storage unit 150 without being displayed on the display monitor 120 at least while examination of the inside of the body of a subject is being performed.
The reception unit 114 receives selection of the on mode or the off mode from a doctor, who is a user, via the input device 140. The detection unit 115 detects end of examination that is performed by inserting the camera unit 210 inside the body of a subject.
With a conventional examination support device, captured images captured by a camera unit inserted inside the body of a subject are sequentially taken in, and when a diagnosis trigger from a doctor is detected, diagnosis is performed based on the captured image at that time point and the result is immediately displayed. That is, a use mode in which a doctor views the diagnosis result in real time according to operation of the camera unit is generally adopted. However, depending on the medical condition of the subject, the nature of the examination, and the like, the examination time during which the camera unit is inserted inside the body is desired to be reduced as much as possible; this is the case, for example, if the subject is elderly, has a pre-existing condition, or is taking medication. In such a case, neither the time during which the camera unit is stopped to provide a diagnosis trigger nor the time required for the calculation process of the estimated probability of a lesion may be acceptable, and the examination support device is therefore turned off.
However, when the examination support device is turned off, the original function of supporting diagnosis by the doctor is lost, and the risk of overlooking a lesion increases. Furthermore, although there is a demand to actively use the examination support device in small- to medium-sized clinics where biopsy often cannot be performed, turning the device off makes it impossible to meet this demand. Moreover, a simple mistake is also conceivable in which the doctor turns off the examination support device in one examination and forgets to turn it back on in the next.
Accordingly, the examination support device 100 according to the present embodiment adopts two off modes in addition to the existing on mode. The two off modes are the background mode and a complete off mode, and in an initial setting, switching to the background mode is performed in a case where the off mode is selected by the doctor.
The on mode is a mode of displaying the diagnosis result in real time. Of the off modes, the background mode is a mode of performing the diagnosis process by the diagnosis unit 112 and storing the diagnosis result in the storage unit 150 without displaying it while the in-body examination is being performed. Of the off modes, the complete off mode is a mode in which the diagnosis process by the diagnosis unit 112 is not performed. Additionally, in the present embodiment, real-time display tolerates a delay from real time corresponding to the processing time required for image processing or diagnosis processing, and may be immediate display after a continuous process. For example, a case where the progress of the diagnosis process is indicated by a progress bar and the diagnosis result is displayed immediately after completion of the process is also referred to as real-time display.
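The behavior of the three modes described above can be summarized as a small dispatch sketch. The mode names and the injected `diagnose`, `display`, and `store` callables are assumptions used purely for illustration.

```python
from enum import Enum, auto

class Mode(Enum):
    ON = auto()            # display diagnosis results in real time
    BACKGROUND = auto()    # diagnose and store silently for review after the examination
    COMPLETE_OFF = auto()  # no diagnosis processing at all

def handle_frame(mode, frame, diagnose, display, store):
    """Route one captured frame according to the selected mode.
    `diagnose`, `display`, and `store` are injected callables (assumed names)."""
    if mode is Mode.COMPLETE_OFF:
        return None                  # complete off: skip diagnosis entirely
    result = diagnose(frame)
    if mode is Mode.ON:
        display(result)              # on mode: real-time display on the monitor
    else:
        store(result)                # background mode: store without displaying
    return result
```

The point of the design is that the background mode keeps the diagnosis pipeline running unchanged and differs from the on mode only in where the result is routed.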
A left diagram in
A right diagram in
When it is established in the above manner which region of the regenerated signal image 225 is the image region indicating the captured image, the acquisition unit 111 cuts out the established image region and takes it as the captured image 121. The captured image 121 can practically be said to be an image obtained by the examination support device 100 reproducing the captured image captured by the camera unit 210.
In the case where the captured image 121 that is cut out is not an image of a diagnosis target, the acquisition unit 111 transfers the captured image 121 to the control unit 113. To obtain the display page in the standby display state described with reference to
The diagnosis unit 112 adjusts the captured image 121 and generates a diagnosis image according to specifications of the neural network for analysis 151 and the like. Particularly, in the on mode, there is a demand to reduce a delay time until the diagnosis result is displayed as much as possible, and thus, a diagnosis image with a reduced image size obtained by thinning out pixels of the captured image 121 is generated to reduce an amount of calculation by the neural network for analysis 151. Then, the diagnosis unit 112 inputs image data of the diagnosis image that is generated to the neural network for analysis 151, and causes the estimated probability of existence of a lesion in the diagnosis image to be calculated. The neural network for analysis 151 outputs, as a diagnosis result, a region, in the image, where there is a possibility of a lesion together with its estimated probability. In the example in
The control unit 113 develops and arranges captured image data of the captured image 121 received from the acquisition unit 111 and the diagnosis result received from the diagnosis unit 112 according to a display criterion set in advance, and displays the same on the display monitor 120. More specifically, display is performed in the mode of the result display state described with reference to
In the case where the captured image 121 that is cut out is an image of a diagnosis target, the acquisition unit 111 transfers the captured image 121 to the diagnosis unit 112. The diagnosis unit 112 adjusts the captured image 121 and generates a diagnosis image according to specifications of the neural network for analysis 151 and the like. In the background mode, a diagnosis result does not have to be immediately displayed, and thus, the image size may be adjusted to be greater than the image size in the on mode. That is, a thinning rate for the captured image 121 may be reduced, and a high-definition diagnosis image closer to the captured image 121 may be generated. If processing performance of the arithmetic processing unit 110 is high enough, the captured image 121 may be used as it is as the diagnosis image without being thinned out. If a high-definition diagnosis image is used, diagnosis accuracy may be expected to be increased.
The diagnosis unit 112 inputs image data of the diagnosis image that is generated to the neural network for analysis 151, and causes the estimated probability of existence of a lesion in the diagnosis image to be calculated. The neural network for analysis 151 outputs, as a diagnosis result, a region, in the image, where there is a possibility of a lesion together with its estimated probability. The diagnosis unit 112 transfers the diagnosis result to the control unit 113.
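The mode-dependent generation of the diagnosis image described above (stronger pixel thinning in the on mode for low latency, weaker or no thinning in the background mode for accuracy) can be sketched as follows. The stride-based decimation, the display threshold value, and the `model` callable returning a region and an estimated probability are illustrative assumptions.

```python
import numpy as np

def make_diagnosis_image(captured, thinning_step=2):
    """Thin out pixels of the captured image by keeping every
    `thinning_step`-th pixel along each axis (simple stride decimation).
    A larger step shrinks the network input, reducing the delay before
    the result can be displayed; step 1 keeps the full-definition image."""
    return captured[::thinning_step, ::thinning_step]

def run_diagnosis(captured, model, thinning_step=2, display_threshold=0.40):
    """Run the analysis network on the thinned image. `model` is an
    assumed callable returning (lesion_region, estimated_probability)."""
    diagnosis_image = make_diagnosis_image(captured, thinning_step)
    region, probability = model(diagnosis_image)
    return {
        "region": region,
        "probability": probability,
        "display": probability >= display_threshold,  # display criterion, e.g. 40 %
    }
```

Under this sketch, the on mode might use `thinning_step=2` or more, while the background mode could use `thinning_step=1` to feed the network a higher-definition image.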
The control unit 113 stores captured image data of the captured image 121 that is received from the acquisition unit 111 and the diagnosis result that is received from the diagnosis unit 112 in the storage unit 150 according to a storage criterion set in advance. Details of the storage criterion will be given later. Additionally, in the on mode, the captured image 121 from which the diagnosis image is generated is displayed as a still image for a specific period of time together with the diagnosis result, but in the background mode, the diagnosis result is not displayed, and thus, the captured image 121 from which the diagnosis image is generated is not displayed as a still image.
In the background mode, when the detection unit 115 detects the end of the examination of the inside of the body by the camera unit 210, the control unit 113 reads out the diagnosis result stored in the storage unit 150 and causes it to be displayed on the display monitor 120. For example, the detection unit 115 detects the timing when the camera unit 210 is removed from the mouth by detecting a drastic change in brightness of the captured image 121. Alternatively, the end of an examination may be detected by reading the examination information 222 included in the regenerated signal image 225.
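The brightness-based end-of-examination detection mentioned above can be sketched as a comparison of mean brightness between successive captured images; the jump threshold is an assumed illustrative value, not one specified by the embodiment.

```python
import numpy as np

def examination_ended(prev_frame, curr_frame, brightness_jump=80.0):
    """Detect the camera being withdrawn from the body by a drastic
    change in mean brightness between successive captured images.
    Frames are assumed to be grayscale arrays."""
    prev_mean = float(np.mean(prev_frame))
    curr_mean = float(np.mean(curr_frame))
    return abs(curr_mean - prev_mean) >= brightness_jump
```

In practice one would likely require the jump to persist over several frames to avoid false positives from momentary glare, but the single-frame comparison conveys the idea.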
The top drawing in
When wanting to check a specific thumbnail image 161 in detail, the doctor selects the thumbnail image 161 using the input device 140. When selection of a specific thumbnail image 161 is received, the control unit 113 generates a high-definition image 160 corresponding to the thumbnail image 161 from the original captured image 121, and displays the same on the display monitor 120 in a greater size than the thumbnail image 161 and together with a diagnosis result 163.
A scroll indicator 127 is arranged on lower right of the display page, and in the example in the drawing, it is indicated that thumbnail images 161 for “no abnormality observed” continue below. A doctor may check other thumbnail images 161 by scrolling the display page by selecting the scroll indicator 127. Additionally, as in the example in
Additionally, examples in
Next, an operation of setting a mode performed by a doctor before start of an examination will be described with reference to several setting pages. The reception unit 114 receives, via the input device 140, selection by a doctor related to mode setting.
In the example in the drawing, “Specify Trigger” and “Periodic Observation” are displayed in a selectable manner under a title 123 “On Mode Setting”, together with respective checkboxes 126. According to “Specify Trigger”, when a doctor keeps pressing a still button provided on the endoscope system, for example, the captured image 121 is made a still image, and when the acquisition unit 111 detects such a still image period, this is taken as a trigger for the diagnosis unit 112 to start the diagnosis process. That is, it is set such that the diagnosis process is started when it is determined that diagnosis is required by the doctor. Accordingly, in a case where “Specify Trigger” is selected, the diagnosis result is desirably displayed with no exception regardless of the contents.
In “Periodic Observation”, the diagnosis process is continuously performed according to a cycle that is set. In relation to the setting “Periodic Observation”, input boxes 128 “Cycle” and “Display Criterion” are provided, and a cycle (seconds) of performance of the diagnosis process and a threshold (%) for the estimated probability regarding whether to display the diagnosis result or not may be respectively set. In the example in the drawing, it is set such that the diagnosis process is performed every three seconds, and such that the diagnosis result is displayed in a case where there is a lesion part, in the diagnosis result, for which the estimated probability is calculated to be 40% or more. Additionally, when selecting the checkboxes 126 of both “Specify Trigger” and “Periodic Observation”, the doctor may cause the diagnosis process to be performed both based on the trigger from the doctor and at the cycle that is set.
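The periodic-observation behavior, i.e. diagnosing once per configured cycle and displaying only results that meet the display criterion, can be sketched as follows. The frame-stream abstraction, the per-frame interval, and the callable names are assumptions for illustration.

```python
def periodic_observation(frames, diagnose, display, cycle_s=3.0,
                         frame_interval_s=1.0, display_threshold=0.40):
    """Run the diagnosis process once per `cycle_s` seconds over a frame
    stream sampled every `frame_interval_s` seconds, and display only
    results whose estimated probability meets the display criterion."""
    elapsed = 0.0
    shown = []
    for frame in frames:
        elapsed += frame_interval_s
        if elapsed >= cycle_s:                    # one diagnosis per cycle
            elapsed = 0.0
            probability = diagnose(frame)
            if probability >= display_threshold:  # display criterion, e.g. 40 %
                display(frame, probability)
                shown.append(probability)
    return shown
```

With a three-second cycle and a 40% threshold, as in the example in the drawing, only every third sampled frame is diagnosed and only sufficiently probable lesion findings reach the monitor.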
When the radio button 124 “Complete Off” is selected, a transition to a display page on the top right takes place. In the complete off mode, a region regarding setting related to the background mode is displayed while being shaded. The complete off mode has to be more consciously selected because, in this mode, the diagnosis process is not performed by the diagnosis unit 112 and the diagnosis result cannot be reviewed after end of the examination. Accordingly, in an initial setting, it is set such that the background mode is first set when the off mode is selected, and such that selection of the complete off mode is received in a layer deeper than a layer where the background mode is set. By adopting such a user interface, the doctor is prevented from inadvertently selecting the complete off mode.
The display page on the top left will be referred to again. In the background mode, the doctor can set the cycle at which the diagnosis process by the diagnosis unit 112 is performed. For example, when "2" seconds is input in the input box 128 as shown in the diagram, the diagnosis unit 112 generates a diagnosis image and performs the diagnosis process every two seconds. In this manner, the cycle in the background mode may be made different from the cycle used for periodic observation in the on mode. That is, the proportion of captured images 121 acquired by the acquisition unit 111 on which the diagnosis unit 112 performs the diagnosis process may be changed. Real-time display of the diagnosis result in the on mode has the aspect of interrupting the progress of the examination, and thus it can be said that a lower frequency of the diagnosis process is preferable there. In contrast, in the background mode of the off mode, the progress of the examination is not interrupted, and thus the diagnosis process may be performed more often. In this manner, the convenience of the examination support device 100 may be increased by changing the proportion of performance of the diagnosis process as appropriate according to the state of the examination.
Moreover, the doctor is allowed to set the storage criterion. The storage criterion is a criterion set for at least one of the captured image and the diagnosis result, and the control unit 113 causes a diagnosis result to be stored in the storage unit 150 when the set storage criterion is satisfied. In relation to the storage criterion, first, it is possible to select between a case where all the diagnosis results obtained by performing the diagnosis process are stored and a case where only diagnosis results indicating an estimated probability equal to or greater than a threshold are stored. The value of the threshold for the estimated probability may be specified using an input box 128. If only diagnosis results indicating an estimated probability equal to or greater than the threshold are stored, the work required when the doctor checks the diagnosis results after the examination may be reduced. On the other hand, the threshold for the estimated probability set as the storage criterion may be set to a small value compared with the threshold for the estimated probability set as the display criterion in the on mode. In the background mode, real-time display of the examination result is not performed, and thus the progress of the examination is not obstructed even when the threshold for the estimated probability is set to a small value. In this manner, even for setting items that are related to each other, the display criterion and the storage criterion may be made different according to their respective properties.
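A minimal sketch of the storage criterion described above, with a storage threshold set deliberately lower than an on-mode display threshold; both numeric values are assumptions for illustration.

```python
def should_store(probability, storage_threshold=0.20, store_all=False):
    """Storage criterion for the background mode: store everything, or
    only results at or above a threshold. Because nothing is displayed
    during the examination, the storage threshold can safely be lower
    than the on-mode display threshold (e.g. 0.20 vs. 0.40)."""
    return store_all or probability >= storage_threshold
```

A result with an estimated probability of 25% would thus be stored for post-examination review even though it would not have met a 40% display criterion in the on mode.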
When the scroll indicator 127 is selected on the display page on the top left, a transition to a display page on the middle left takes place, and the doctor is able to set the storage criterion for the captured image. In the example in the drawing, “Specify Resolution” and “Specify Observation Part” may be set. “Specify Resolution” is for specifying that the diagnosis process is performed only when the captured image 121 acquired by the acquisition unit 111 is a high-resolution image. Such a restriction is applied when a radio button 124 “Yes” is selected, and the diagnosis process is performed with no restriction when a radio button 124 “No” is selected. For example, the resolution is evaluated based on high-frequency components included in the captured image 121, and “high resolution” is determined in a case where a proportion of high-frequency components is equal to or greater than a threshold that is set in advance. When the captured image 121 as an original image based on which the diagnosis image is generated is limited to a high-resolution image, reliability of the estimated probability output by the neural network for analysis 151 is also increased.
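The high-frequency-based resolution evaluation described above can be sketched with a 2-D FFT: the proportion of spectral energy outside a low-frequency disc is compared against a preset threshold. The cutoff radius and the threshold value are illustrative assumptions.

```python
import numpy as np

def is_high_resolution(image, hf_proportion_threshold=0.3, cutoff_frac=0.25):
    """Judge 'high resolution' by the proportion of spectral energy held
    in high-frequency components, as the text suggests."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(np.float64))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    low = radius <= cutoff_frac * min(h, w)     # low-frequency disc around DC
    total = spectrum.sum()
    if total == 0:
        return False
    hf_ratio = spectrum[~low].sum() / total     # energy share of high frequencies
    return bool(hf_ratio >= hf_proportion_threshold)
```

A flat, detail-free frame concentrates all energy at DC and fails the test, while a sharply textured frame passes it; blurred or defocused captures would naturally be excluded from diagnosis under this criterion.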
Additionally, with respect to the storage criterion regarding the captured image 121, whether the captured image 121 satisfies the storage criterion or not may be determined at the time point of storing the diagnosis result in the storage unit 150, or whether the storage criterion is satisfied or not may be determined at the time point of acquisition of the captured image 121 by the acquisition unit 111. In the case where determination is to be performed at the time point of storing the diagnosis result in the storage unit 150, if it is determined that storage is not to be performed in the first place according to another storage criterion (for example, the threshold for the estimated probability), calculation of evaluation of resolution or the like for the captured image may be omitted. In the case where determination is to be performed at the time point of acquisition of the captured image 121 by the acquisition unit 111, the diagnosis process by the diagnosis unit 112 may be omitted.
“Specify Observation Part” is for specifying that the diagnosis process is to be performed only in the case where the captured image 121 that is acquired by the acquisition unit 111 is an image of an observation part that is specified. That is, the diagnosis part is limited to a specific organ. Such a restriction is applied in the case where a radio button 124 “Yes” is selected, and the diagnosis process is performed with no restriction in the case where a radio button 124 “No” is selected. In the case where “Yes” is selected, a window 129 for specifying the observation part appears as shown in a diagram on the middle right.
The window 129 includes a title 123 “Specify Observation Part”, a selection item 125 indicating an observation part, a radio button 124 allowing exclusive selection of each selection item, and a scroll indicator 127. As the selection items 125, “Automatic”, “Esophagus”, “Stomach”, and the like are prepared. In “Automatic”, for example, whether or not the neural network for analysis 151 that is prepared is a learning model that takes a specific observation part as a target is checked, and in the case of the learning model that takes the specific observation part as the target, the observation part in question is taken as a specified part. Alternatively, in the case where an observation part is specified by the endoscope system 200, if character information regarding the observation part is included in the examination information 222 in the regenerated signal image 225, the character information is read and the observation part is taken as the specified part. In the case where a specific observation part such as “Esophagus” or “Stomach” is specified, such specification is followed. When an observation part is specified in the window 129, a transition to the display page on the middle left takes place, and a text of the specified observation part appears in the manner of “Specified Part: Stomach”.
When an observation part is specified in this manner, the diagnosis unit 112 determines whether the captured image 121 transferred from the acquisition unit 111 is that obtained by capturing the observation part that is set. Then, the control unit 113 causes the diagnosis result to be stored in the storage unit 150 only when capturing of the observation part that is set is determined by the diagnosis unit 112. Additionally, the diagnosis unit 112 may determine, before the diagnosis process, whether the captured image 121 transferred from the acquisition unit 111 is that obtained by capturing the observation part that is set or not, and does not have to perform the diagnosis process in the case where other than the observation part that is set is determined. In the case of determining whether the captured image 121 transferred from the acquisition unit 111 is that obtained by capturing the observation part that is set or not, the diagnosis unit 112 may use a neural network for part determination that is prepared separately from the neural network for analysis 151. For example, in the case where “Stomach” is specified as the observation part, the diagnosis unit 112 performs determination regarding the captured image 121 transferred from the acquisition unit 111, using the neural network for part determination that outputs a probability of an input image being the stomach. Then, the captured image is input to the neural network for analysis 151 and the diagnosis process is performed only in the case where the probability that is output by the neural network for part determination is equal to or greater than a threshold.
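The two-stage flow described above, running the part-determination network first and invoking the analysis network only when the part probability reaches a threshold, can be sketched as follows. The model callables and the threshold value are assumed for illustration.

```python
def diagnose_with_part_check(image, part_model, analysis_model,
                             part_threshold=0.5):
    """Two-stage diagnosis: `part_model` estimates the probability that
    the image shows the specified observation part (e.g. the stomach);
    only when that probability reaches the threshold is the image passed
    to `analysis_model`, which returns the lesion region and its
    estimated probability."""
    part_probability = part_model(image)
    if part_probability < part_threshold:
        return None                      # not the specified part: skip diagnosis
    return analysis_model(image)
```

Gating on the part check like this saves the cost of the analysis network on frames that could not satisfy the storage criterion anyway.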
When the scroll indicator 127 is selected on the display page on the middle left, a transition to a display page on the bottom left takes place. The display page on the bottom left relates to settings for list display at the end of the examination and for the storage destination of the diagnosis result. “Display List” specifies whether to display diagnosis results in a list at the end of the examination by using the thumbnail display described with reference to
Next, an example of a processing procedure of main processes performed by the arithmetic processing unit 110 will be described with reference to a flowchart.
In step S101, the reception unit 114 receives various settings from the doctor via the input device 140. The doctor performs setting according to the purpose of the examination, via the user interfaces described with reference to
In step S102, the control unit 113 checks whether the on mode is set or the off mode is set. When the on mode is set, step S103 is performed next, and when the off mode is set, step S121 in
When proceeding to step S103, the acquisition unit 111 acquires a display signal from the endoscope system 200, generates the regenerated signal image 225, and then, cuts out the captured image 121 from the regenerated signal image 225 in step S104 and transfers the same to the control unit 113. In step S105, the control unit 113 performs on-mode live display of sequentially updating and displaying the captured image 121 on the display monitor 120 as in the example in the left diagram in
In step S106, the acquisition unit 111 determines whether the generated captured image 121 is a diagnosis target image. For example, in the case where trigger specification is set, it is determined whether the generated captured image 121 has remained in a still image state continuously for a specific period of time. If the captured image 121 is determined to be a diagnosis target image, it is transferred to the diagnosis unit 112 and step S107 is performed next; if not, step S110 is performed next.
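The still-image trigger of step S106 can be sketched as a small state machine: a frame becomes a diagnosis target once consecutive captured images remain unchanged for a set duration. The frame comparison by equality and the 1.0-second window are illustrative assumptions, not the actual implementation.

```python
# Illustrative sketch of the still-image trigger: a frame is selected as a
# diagnosis target after it stays unchanged for `still_seconds`.
# The duration and frame representation are assumptions.

class StillImageTrigger:
    def __init__(self, still_seconds=1.0):
        self.still_seconds = still_seconds
        self._last_frame = None
        self._still_since = None

    def is_diagnosis_target(self, frame, timestamp):
        """Return True once `frame` has been unchanged for still_seconds."""
        if frame != self._last_frame:
            # Frame changed: restart the still-state timer.
            self._last_frame = frame
            self._still_since = timestamp
            return False
        return (timestamp - self._still_since) >= self.still_seconds
```

A usage pattern would feed each generated captured image and its timestamp to `is_diagnosis_target` and forward the frame to the diagnosis unit only when it returns True.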
When proceeding to step S107, the diagnosis unit 112 generates a diagnosis image from the captured image 121 transferred from the acquisition unit 111, and performs the diagnosis process. More specifically, a region that possibly includes a lesion and its estimated probability are calculated using the neural network for analysis 151. Then, the diagnosis result is transferred to the control unit 113. In step S108, the control unit 113 determines whether the diagnosis result that is received meets the display criterion that is set in step S101 or that is specified in advance.
For example, when the on mode is being performed under periodic observation and the display criterion is specified as 40% or more, the display criterion is determined to be met when the estimated probability included in the diagnosis result is 40% or more, and not to be met when the estimated probability is less than 40%. When it is determined that the display criterion is not met, step S110 is performed next without displaying the diagnosis result. When it is determined that the display criterion is met, step S109 is performed next, and the control unit 113 displays the diagnosis result in the manner of the right diagram in
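The display-criterion check of steps S108 and S109 reduces to a simple threshold comparison; a minimal sketch, with an assumed result dictionary shape and the 40% example value as the default:

```python
# Illustrative sketch of the display-criterion check (steps S108-S109):
# the diagnosis result is displayed only when its estimated probability
# reaches the criterion. The dictionary key is an assumption.

def meets_display_criterion(diagnosis_result, criterion=0.40):
    """True when the estimated probability is at or above the criterion."""
    return diagnosis_result["estimated_probability"] >= criterion
```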
In step S110, the detection unit 115 detects whether the in-body examination by the camera unit 210 has ended. If the end is not detected, step S103 is performed again and the examination is continued in the on mode. If the end is detected, a specified end process is performed and the series of processes is ended.
In step S121, the control unit 113 checks whether the background mode is set or the complete off mode is set. When the background mode is set, step S122 is performed next, and when the complete off mode is set, step S131 is performed next.
When proceeding to step S122, the acquisition unit 111 acquires a display signal from the endoscope system 200, generates the regenerated signal image 225, and then, cuts out the captured image 121 from the regenerated signal image 225 in step S123 and transfers the same to the control unit 113. In step S124, the control unit 113 performs off-mode live display of sequentially updating and displaying the captured image 121 on the display monitor 120 while applying shading as in the example in
In step S125, the acquisition unit 111 determines whether the generated captured image 121 is a diagnosis target image. For example, in the case where the cycle is set to two seconds, it is determined whether the generated captured image 121 matches that timing. Furthermore, in the case where an observation part is specified, the acquisition unit 111 transfers the captured image 121 to the diagnosis unit 112 so that it can be determined whether the part shown in the captured image 121 is the specified observation part. If the captured image 121 is determined to be a diagnosis target image, it is transferred to the diagnosis unit 112 and step S126 is performed next; if not, step S129 is performed next.
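The cycle-based selection of step S125 can be sketched as a timer that marks one frame per cycle as a diagnosis target. The timestamp-driven implementation below is an assumption; the two-second default mirrors the example above.

```python
# Illustrative sketch of cycle-based selection (step S125): in the
# background mode, one captured image per cycle becomes a diagnosis target.
# The timestamp mechanism is an assumption.

class CycleSelector:
    def __init__(self, cycle_seconds=2.0):
        self.cycle_seconds = cycle_seconds
        self._next_due = 0.0

    def is_diagnosis_target(self, timestamp):
        """True for the first frame at or after each cycle boundary."""
        if timestamp >= self._next_due:
            self._next_due = timestamp + self.cycle_seconds
            return True
        return False
```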
When proceeding to step S126, the diagnosis unit 112 generates a diagnosis image from the captured image 121 transferred from the acquisition unit 111, and performs the diagnosis process. More specifically, a region that possibly includes a lesion and its estimated probability are calculated using the neural network for analysis 151. Then, the diagnosis result is transferred to the control unit 113. In step S127, the control unit 113 determines whether the diagnosis result that is received meets the storage criterion that is set in step S101 or that is specified in advance.
For example, when specification of resolution is set to “Yes” and the captured image 121 that is the original image of the diagnosis image is determined not to have a high resolution, it is determined that the storage criterion is not met. When the storage criterion is determined not to be met, the diagnosis result is not stored and step S129 is performed next. When the storage criterion is determined to be met, step S128 is performed next, and the control unit 113 stores the diagnosis result in a storage unit such as the storage unit 150. When the storage process is complete, step S129 is performed next.
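The resolution branch of the storage-criterion check in steps S127 and S128 can be sketched as below. The 1920x1080 cutoff for “high resolution” is purely an assumption for illustration; the document does not define the cutoff.

```python
# Illustrative sketch of the storage-criterion check (steps S127-S128):
# when resolution is specified as "Yes", a result is stored only if the
# original captured image is high resolution. The cutoff is an assumption.

MIN_RESOLUTION = (1920, 1080)  # assumed "high resolution" cutoff

def meets_storage_criterion(width, height, require_high_resolution):
    """True when the result should be stored in the storage unit."""
    if not require_high_resolution:
        return True  # resolution not specified: always store
    return width >= MIN_RESOLUTION[0] and height >= MIN_RESOLUTION[1]
```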
In step S129, the detection unit 115 detects whether the in-body examination by the camera unit 210 has ended. If the end is not detected, step S122 is performed again and the examination is continued in the background mode. If the end is detected, step S130 is performed next.
When proceeding to step S130, the control unit 113 reads out the diagnosis result that is stored in the storage unit, and causes the same to be displayed on the display monitor 120 as in the example in
When proceeding from step S121 to step S131, the control unit 113 performs display in the complete off mode as in
Heretofore, the present embodiment has been described, but the user interfaces described herein are merely examples and are not restrictive, and any mode is allowed as long as reception of various setting items and provision of information are enabled. For example, audio input that recognizes voice of the doctor as a user, or augmented reality (AR) technology of superimposing provision information on wearable glasses may be adopted. Moreover, the setting items above are merely examples, and may be selected as appropriate according to specifications and the like of an examination device.
Furthermore, as a modification of the present embodiment, it is also possible to adopt an embodiment in which the background mode is executed in parallel with the on mode described above. For example, for a diagnosis process performed when a diagnosis trigger from the doctor is detected, there is a demand to reduce the delay until the diagnosis result is displayed as much as possible, and thus a diagnosis image whose size is reduced by greatly thinning out the pixels of the captured image 121 is used. By contrast, for a periodic diagnosis process whose result is checked after the end of the examination and is therefore not displayed during the examination, securing diagnosis accuracy takes priority, and a diagnosis image whose size is reduced without greatly thinning out the pixels of the captured image 121 is used.
Convenience of the examination support device may be increased by executing in parallel a calculation process for a diagnosis result to be displayed in real time on the display monitor 120 (a first diagnosis process), and a calculation process for a diagnosis result that is stored in the storage unit during the examination without being displayed on the display unit and that requires a greater amount of calculation (a second diagnosis process). Such an examination support device can be configured as follows.
An examination support device that is used by being connected to an endoscope system including a camera unit for being inserted into an inside of a body of a subject, the examination support device including:
Such an examination support device may satisfy both a demand to swiftly check the diagnosis result during an examination and a demand to obtain a highly accurate diagnosis result. Obtaining a highly accurate diagnosis result increases the amount of calculation and thus the processing time, but as long as the captured image 121 serving as the diagnosis target can be acquired, the diagnosis process can be completed because it is not bound by the constraint of real-time display. The process for obtaining a highly accurate diagnosis result does not depend solely on the amount of thinning applied to the diagnosis image; switching the neural network for analysis may also be adopted, for example. More specifically, there may be prepared a first neural network for the first diagnosis process that places a higher priority on calculation speed than on accuracy, and a second neural network for the second diagnosis process that places a higher priority on accuracy than on calculation speed.
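The parallel first/second diagnosis processes described above can be sketched with two workers: a fast process whose result is displayed immediately, and a slower process whose result is only stored. The thinning factors, the stub diagnosis functions, and the thread-based parallelism are assumptions standing in for the two neural networks.

```python
# Illustrative sketch of the modification: a fast first diagnosis process
# (real-time display) and a heavier second diagnosis process (stored only)
# run in parallel. Stubs and thinning factors are assumptions.

from concurrent.futures import ThreadPoolExecutor

def first_diagnosis(image):
    """Fast process: heavy pixel thinning, speed over accuracy (stub)."""
    thinned = image[::4]  # aggressively thin pixels (illustrative)
    return {"source": "display", "pixels_used": len(thinned)}

def second_diagnosis(image):
    """Accurate process: light pixel thinning, accuracy over speed (stub)."""
    thinned = image[::2]
    return {"source": "storage", "pixels_used": len(thinned)}

def run_both(image, storage):
    """Display the first result in real time; store the second when done."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        fast = pool.submit(first_diagnosis, image)
        slow = pool.submit(second_diagnosis, image)
        display_result = fast.result()   # shown on the monitor immediately
        storage.append(slow.result())    # kept for review after examination
    return display_result
```

The design point is that the second process is free of the real-time constraint: its result lands in storage whenever it finishes, so accuracy can be prioritized without delaying the displayed result.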
Additionally, the storage criterion and the display criterion described above may also be set in an embodiment according to such a modification. In the embodiment described above, settings are received for the “on mode” or the “off mode”, but settings may instead be received for the “first diagnosis process” or the “second diagnosis process”. In this case, specification of the resolution or the observation part may be received for the second diagnosis process, for example.
In the present embodiment described above, a case is assumed where the endoscope system 200 and the examination support device 100 are connected by the connection cable 250, but wireless connection may also be adopted instead of wired connection. Furthermore, an embodiment is described where the endoscope system 200 outputs a display signal to outside, and the examination support device 100 uses the display signal, but the format of an output signal is not particularly specified as long as an image signal that is provided to an external device by the endoscope system 200 includes an image signal of a captured image that is captured by the camera unit 210. Moreover, in the present embodiment described above, a description is given assuming that the camera unit 210 of the endoscope system 200 is a flexible endoscope, but the configuration and processing procedures of the examination support device 100 are no different even when the camera unit 210 is a rigid endoscope. Additionally, diagnosis by the diagnosis unit in the present embodiment is only for aiding diagnosis by the doctor, and final decision is made by the doctor.
Number | Date | Country | Kind
---|---|---|---
2021-104550 | Jun 2021 | JP | national
The present application is based upon and claims the benefit of priority from Japanese Patent Application No. 2021-104550, filed on Jun. 24, 2021, and International application No. PCT/JP2022/025111 filed on Jun. 23, 2022, the entire contents of which are hereby incorporated by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2022/025111 | Jun 2022 | US
Child | 18508212 | | US