The present invention relates to an image processing apparatus, an endoscope apparatus, and an image processing method for controlling display of an unobserved region.
In recent years, endoscope systems have been widely used in the medical and industrial fields. For example, in the medical field, an endoscope may be inserted into an organ having a complex luminal shape inside a subject, and be used for detailed observation and examination of the inside of the organ. Some such endoscope systems have a function that allows an operator to grasp which site in a luminal organ the operator has observed with the endoscope.
For example, in order to present a region observed with an endoscope, some endoscope systems obtain a shape of inner cavities of an organ from an endoscopic image obtained by image pickup with the endoscope, generate a three-dimensional shape model image on the spot, and display an observation position on the generated three-dimensional shape model image during observation.
Japanese Patent Application Laid-Open Publication No. 2020-154234 discloses a technology for displaying, during observation such as a prescribed examination with an endoscope, a region having been observed (hereinafter referred to as an observed region) and a region not having been observed (hereinafter referred to as an unobserved region) on a three-dimensional shape model image in an identifiable manner. According to the proposal in Japanese Patent Application Laid-Open Publication No. 2020-154234, the unobserved region is displayed on a three-dimensional model or within an examination screen of a monitor that displays an examination image acquired with the endoscope. The display inside the examination screen and the display on the three-dimensional shape model image allow an operator to grasp to some extent, for example, which position in the body into which the endoscope is inserted is under observation, and to confirm whether or not observation of all the regions in a subject body is completed.
An image processing apparatus according to one aspect of the present invention includes a processor. The processor is configured to acquire image information from an endoscope during observation of an inside of a subject, generate an organ model from the acquired image information, identify an unobserved region that is not observed by the endoscope in the organ model, estimate a top and a bottom and an azimuth of an image pickup visual field of the endoscope with respect to the organ model, set a display direction of the organ model based on the top and the bottom and the azimuth of the image pickup visual field, and output the organ model to a monitor, the organ model being associated with the unobserved region identified.
An endoscope apparatus according to one aspect of the present invention includes an endoscope, an image processing apparatus including a processor, and a monitor. The processor is configured to acquire image information from the endoscope during observation of an inside of a subject, generate an organ model from the acquired image information, identify an unobserved region that is not observed by the endoscope in the organ model, estimate a position and posture of the endoscope with respect to the organ model, set a display direction of the organ model based on the position and posture of the endoscope, and output the organ model to the monitor, the organ model being associated with the unobserved region identified.
An image processing method according to one aspect of the present invention includes acquiring image information from an endoscope during observation of an inside of a subject, generating an organ model from the image information acquired, identifying an unobserved region that is not observed by the endoscope in the organ model, estimating a position and posture of the endoscope with respect to the organ model, setting a display direction of the organ model based on the position and posture of the endoscope, and outputting the organ model to a monitor, the organ model being associated with the unobserved region identified.
The present invention has an effect that the position of an unobserved region can be easily grasped.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
In the present embodiment, the unobserved region may be displayed in an easy-to-understand manner on the three-dimensional shape organ model that is already generated before observation. Note that existing organ models may be organ models generated in previous examinations or observations, or general-purpose organ models created for a prescribed luminal organ or the like. The present embodiment is applicable to both the case where the organ model is already created before observation and the case where the organ model is created concurrently with the observation.
As shown in
The insertion portion 33 includes, from its proximal end to its distal end, a flexible tube portion 33a, a bending portion 33b that is bendable, and a distal end portion 33c. The insertion portion 33 is inserted into a lumen of a patient as a subject. A proximal end portion of the distal end portion 33c is connected to a distal end of the bending portion 33b, and a proximal end portion of the bending portion 33b is connected to a distal end of the flexible tube portion 33a. The distal end portion 33c, that is, the distal end portion of the insertion portion 33 and of the endoscope 30, is a rigid distal end portion.
The bending portion 33b is bendable in a desired direction in response to an operation performed on a bending operation member 35 (a left and right bending operation knob 35a and an upper and lower bending operation knob 35b) provided in the operation portion 32. The bending operation member 35 additionally includes a fixing knob 14c that fixes a position of the bent bending portion 33b. When an operator bends the bending portion 33b in various directions while inserting the insertion portion 33 into the large intestine or pulling the insertion portion 33 from the large intestine, the operator can thoroughly observe the large intestine of the patient. Note that the operation portion 32 is provided with various operation buttons, such as a release button and an air/water feeding button, in addition to the bending operation member 35.
In the present embodiment, a direction in which the distal end portion 33c of the insertion portion 33 (hereinafter also referred to as a distal end of the endoscope) moves (bends) when upward operation is performed by the upper and lower bending operation knob 35b is defined as a top (upper) direction. A direction in which the distal end of the endoscope moves (bends) when downward operation is performed by the upper and lower bending operation knob 35b is defined as a bottom (lower) direction. A direction in which the distal end of the endoscope moves (bends) when rightward operation is performed by the left and right bending operation knob 35a is defined as a right direction. A direction in which the distal end of the endoscope moves (bends) when leftward operation is performed by the left and right bending operation knob 35a is defined as a left direction.
At the distal end portion 33c of the insertion portion 33, an image pickup device 31 is provided as an image pickup apparatus. During image pickup, illumination light from the light source device is directed by the light guide to irradiate a subject through an illuminating window (not shown) provided on a distal end surface of the distal end portion 33c. Reflected light from the subject is incident on an image pickup surface of the image pickup device 31 through an observation window (not shown) provided on the distal end surface of the distal end portion 33c. The image pickup device 31 obtains an image pickup signal by photoelectrically converting an optical image of the subject that is incident on the image pickup surface via an image pickup optical system which is not shown. The image pickup signal is supplied to the image generation circuit 40 via a signal line, which is not shown, in the insertion portion 33 and the universal cable 34.
The image pickup device 31 is fixed to the distal end portion 33c of the insertion portion 33 in the endoscope 30, and a top and bottom moving direction at the distal end of the endoscope matches with a vertical scanning direction of the image pickup device 31. In other words, the image pickup device 31 is arranged so that a start side of vertical scanning by the image pickup device 31 is matched with a top direction (upward direction) at the distal end of the endoscope, and an end side of the vertical scanning matches with a bottom direction (downward direction) at the distal end of the endoscope. In other words, the top and the bottom of an image pickup visual field of the image pickup device 31 match with the top and the bottom of the distal end of the endoscope (the distal end portion 33c), respectively. In addition, the top and the bottom of the image pickup device 31, that is, the top and the bottom of the distal end of the endoscope, match with the top and the bottom (the upper and lower sides) of an examination image based on the image pickup signal from the image pickup device 31, respectively.
The image generation circuit 40 is a video processor that performs prescribed image processing on a received image pickup signal and generates an examination image. The image pickup signal of the generated examination image is outputted from the image generation circuit 40 to the monitor 60, so that a live examination image is displayed on the monitor 60. For example, in a case of performing examination of a large intestine, a doctor who performs the examination can insert the distal end portion 33c of the insertion portion 33 through an anus of a patient and observe the inside of the large intestine of the patient with use of the examination image displayed on the monitor 60.
The image processing apparatus 10 includes an image acquisition unit 11, a position and posture detection unit 12, a display interface (hereinafter referred to as an I/F) 13, and the processor 20. The image acquisition unit 11, the position and posture detection unit 12, the display I/F 13, and the processor 20 are connected to each other by a bus 14.
The image acquisition unit 11 takes in examination images from the image generation circuit 40. The processor 20 takes in the examination images via the bus 14. Based on the taken-in examination images, the processor 20 detects an unobserved region, generates an organ model, and generates display data for displaying an image indicating the unobserved region that is superimposed on the organ model. The display I/F 13 takes in the display data from the processor 20 via the bus 14, converts the data to a format that can be displayed on a display screen of the monitor 60, and then outputs the data to the monitor 60.
The monitor 60 as a notification unit displays the examination image from the image generation circuit 40 on the display screen, and displays the organ model from the image processing apparatus 10 on the display screen. For example, the monitor 60 may include a PinP (picture in picture) function, so that both the examination image and the organ model can be displayed at the same time. The notification unit is not limited to notification means using visual information, and may be configured to notify position information by voice or issue operation instructions, for example.
In the present embodiment, the processor 20 creates display data so that the operator can easily grasp the position of the unobserved region.
The processor 20 includes a central processing unit (hereinafter referred to as a CPU) 21, a storage unit 22, an input/output unit 23, a position and posture estimation unit 24, a model generation unit 25, an unobserved region determination unit 26, and a display content control unit 27. The storage unit 22 is constituted of, for example, a ROM, a RAM, or the like. The CPU 21 operates according to a program stored in the storage unit 22 to control each unit of the processor 20 and the image processing apparatus 10 as a whole.
Note that the position and posture estimation unit 24, the model generation unit 25, the unobserved region determination unit 26, and the display content control unit 27 included in the processor 20 may include a CPU not shown, and the CPU may operate according to programs stored in the storage unit 22 to implement desired processing, or some or all of the respective functions may be implemented by an electronic circuit. The CPU 21 may be configured to implement all the functions of the processor 20.
The input/output unit 23 is an interface to take in the examination images at a fixed cycle. The input/output unit 23 acquires the examination images at a frame rate of, for example, 30 fps. Note that the frame rate of the examination images taken in by the input/output unit 23 is not limited to 30 fps.
The position and posture estimation unit 24 takes in the examination images via the bus 28 and estimates the position and the posture of the image pickup device 31. The model generation unit 25 takes in the examination images via the bus 28 and generates an organ model. Since the image pickup device 31 is fixed to the distal end side of the distal end portion 33c, it may be said that the position and posture of the image pickup device 31 is the position and the posture of the distal end portion 33c. It may also be said that the position and the posture of the image pickup device 31 is the position and the posture of the distal end of the endoscope.
Using Visual SLAM makes it possible to estimate the position and the posture of the image pickup device 31, that is, the position and the posture of the distal end portion 33c (the position and the posture of the distal end of the endoscope), and also makes it possible to generate the organ model. Since Visual SLAM using SfM (structure from motion) can acquire both the position and the posture of the image pickup device 31 and a three-dimensional image of the subject, that is, an organ model, the following description assumes, for convenience, that the CPU 21 implements the functions of the position and posture estimation unit 24 and the model generation unit 25 by processing programs.
First, the CPU 21 performs initialization. It is assumed that setting values of each portion of the endoscope 30 relating to the position and posture estimation are already known to the CPU 21 through calibration. By initialization, the CPU 21 also recognizes an initial position and posture of the distal end portion 33c.
In step S11 of
The examination images I1, I2, . . . are sequentially supplied to the CPU 21, and the CPU 21 detects feature points from each of the examination images I1, I2, . . . For example, the CPU 21 can detect, as feature points, corner portions and edge portions where luminance gradients are equal to or more than a prescribed threshold in the images. The example in
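Although the present disclosure does not specify a particular detector, feature point detection of this kind is commonly realized with an off-the-shelf corner detector. The following is a minimal sketch using OpenCV's Shi-Tomasi detector; the file name and parameter values are illustrative assumptions, not part of the disclosure.

```python
import cv2

# Illustrative assumption: one examination image is available as a file.
img = cv2.imread("I1.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corner detection: retains points whose local luminance
# gradients are strong, i.e., corner and edge portions of the image.
corners = cv2.goodFeaturesToTrack(img, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
print(f"{len(corners)} feature points detected")
```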
The CPU 21 finds corresponding feature points by collating each feature point in an examination image with each feature point in other examination images. The CPU 21 acquires the coordinates of the feature points (a pair of feature points) associated with each other in two examination images (the positions in the examination images), and calculates the position and the posture of the image pickup device 31 based on the acquired coordinates (step S12). Note that in calculation (tracking) of the position and the posture, the CPU 21 may use a fundamental matrix that holds relative positions and postures among the distal end portions 33cA, 33cB, . . . , that is, relative positions and postures among the image pickup devices 31 that have acquired the respective examination images.
The positions and the postures of the image pickup devices 31 are interrelated with the attention points corresponding to the feature points in the examination images, so that when one is known, the other can be estimated. The CPU 21 executes three-dimensional shape reconstruction processing of a subject based on the relative positions and postures of the image pickup devices 31. In other words, by using the corresponding feature points in the examination images obtained by the respective image pickup devices 31 at the distal end portions 33cA, 33cB, . . . , of which positions and postures are already known, the CPU 21 obtains the positions (hereinafter referred to as attention points) corresponding to the respective feature points on a three-dimensional image, based on the principle of triangulation (hereinafter referred to as mapping). The example of
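As one concrete reading of the tracking and mapping steps above, the relative position and posture can be recovered from the paired feature points via an essential matrix (the calibrated counterpart of the fundamental matrix), and the attention points can then be obtained by triangulation. A minimal sketch with OpenCV, assuming the intrinsic matrix K of the image pickup optical system is known by calibration; all names are illustrative.

```python
import numpy as np
import cv2

def track_and_map(pts1, pts2, K):
    """Tracking and mapping from one pair of examination images.

    pts1, pts2: Nx2 float arrays of corresponding feature points found by
    collating the two examination images. K: 3x3 intrinsic matrix of the
    image pickup optical system, known by calibration (assumption).
    """
    # Tracking: relative position and posture of the image pickup device.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Mapping: attention points by the principle of triangulation.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first pose at origin
    P2 = K @ np.hstack([R, t])                         # second, relative pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    attention_points = (pts4d[:3] / pts4d[3]).T        # homogeneous to 3-D
    return R, t, attention_points
```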
The CPU 21 repeats tracking and mapping with use of the examination images that are picked up and obtained by the image pickup device 31 while the image pickup device 31 is moving, so as to acquire image data on the organ model that is a three-dimensional image (step S13). Thus, the position and posture estimation unit 24 sequentially estimates the position and the posture of the distal end portion 33c (the position of the distal end of the endoscope), and the model generation unit 25 sequentially creates the organ model.
The unobserved region determination unit 26 detects an unobserved region in the organ model generated by the model generation unit 25 (step S14) and outputs the information on the position of the unobserved region on the organ model to the display content control unit 27. Note that the unobserved region determination unit 26 detects a region surrounded with the organ model that is sequentially generated by the model generation unit 25 as an unobserved region. The display content control unit 27 receives image data from the model generation unit 25 and also receives the information on the position of the unobserved region from the unobserved region determination unit 26. The display content control unit 27 generates and outputs display data so as to display an organ model display that is obtained by synthesizing an image indicating the unobserved region on the image of the organ model. In this way, the organ model on which the image of the unobserved region is superimposed is displayed on the display screen of the monitor 60 (step S15).
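One way to implement the detection of "a region surrounded with the organ model" is to look for holes in the surface mesh that the model generation unit builds up: an edge referenced by only one triangle lies on the boundary of a surface that has not been reconstructed yet. A minimal sketch under that interpretation:

```python
from collections import Counter

def unobserved_boundary_edges(triangles):
    """Edges bordering holes in a triangle mesh of the organ model.

    triangles: iterable of (i, j, k) vertex-index triples. Edges used by
    exactly one triangle enclose regions that are surrounded by the model
    but not yet reconstructed, i.e., candidate unobserved regions.
    """
    edge_count = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[tuple(sorted((a, b)))] += 1
    return [edge for edge, n in edge_count.items() if n == 1]

# Two triangles forming a square: only the four outer edges are boundaries.
print(unobserved_boundary_edges([(0, 1, 2), (0, 2, 3)]))
```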
(Relation of Position and Posture of Image Pickup Device 31 with Examination Screen and Organ Model Display)
As described above, the image pickup device 31 is fixed to the distal end portion 33c of the insertion portion 33 in the endoscope 30, and the top and the bottom of the distal end of the endoscope in the moving direction match with the top and the bottom of the image pickup device 31, respectively. The top and the bottom of the examination image acquired by the image pickup device 31 also match with the top and the bottom of the image pickup device 31 (the distal end of the endoscope) in the moving direction, respectively. The examination image acquired by the image pickup device 31 is subjected to image processing and is then supplied to the monitor 60. In the present embodiment, the terms “the top and the bottom of the distal end of the endoscope”, “the top and the bottom of the distal end portion 33c”, and “the top and the bottom of the image pickup device 31” are used synonymously. The terms “the position and the posture of the distal end of the endoscope”, “the position and the posture of the distal end portion 33c”, and “the position and the posture of the image pickup device 31” are also used synonymously.
The monitor 60 displays an examination image on the display screen. In the display screen of the monitor 60, an image displayed in a region where the examination image is displayed is defined as an examination screen. The top and bottom direction of the examination screen matches with a vertical scanning direction of the monitor 60, and the start side of vertical scanning (the top of the display screen) is defined as the top of the examination screen, and the end side of the vertical scanning (the bottom of the display screen) is defined as the bottom of the examination screen. The monitor 60 matches the top and the bottom of the examination image with the top and the bottom of the display screen for display. In other words, the top and the bottom of the examination image match with the top and the bottom of the examination screen, respectively. Therefore, the top and the bottom of the distal end portion 33c of the endoscope in the direction of movement by the upper and lower bending operation knob 35b match with the top and the bottom of the examination screen, respectively. However, the top and the bottom of the organ model display may not match with the top and the bottom of the examination screen.
The example on a left side in
Organ model displays IT1 and IT2 in
Therefore, if the operator should wish to photograph an upper region on the page rather than the photographing region Ri in the lumen corresponding to the organ model P1i in
Therefore, in the present embodiment, the display content control unit 27 is configured to display the image of the organ model P1i by rotating the image of the organ model P1i so that the top and the bottom of the examination screen are matched with the top and the bottom of the image of the organ model P1i, respectively. Note that the top and the bottom (the upper side and the lower side) of the organ model are defined based on the top and the bottom of the examination screen currently provided by the image pickup device 31 and arranged in the organ model image. In other words, the display content control unit 27 displays an organ model image by rotating the organ model image so that the top and the bottom of the examination screen I4 in
The display content control unit 27 displays the organ model display IT2 in a lower stage of
In this way, the display content control unit 27 creates the display data in which the organ model image is arranged so as to match the top and the bottom of the examination image with the top and the bottom of the organ model, respectively. As a result, even in the unobserved region determined by the unobserved region determination unit 26, the top and the bottom of the distal end portion 33c of the endoscope match with the top and the bottom of the display of the unobserved region on the organ model, respectively. Therefore, the organ model display allows the operator to easily and intuitively recognize the position of the unobserved region.
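The rotation described above can be computed, for example, from the posture of the image pickup device. A sketch under the assumption that the posture is available as a rotation matrix whose columns are the camera's right, top, and viewing axes in model coordinates; the display plane convention is likewise an assumption for illustration only.

```python
import numpy as np

def model_roll_angle(R_cam):
    """Roll angle (radians) to apply to the organ model image so that its
    top matches the top of the examination screen.

    R_cam: 3x3 posture of the image pickup device in model coordinates;
    column 1 is taken to be the top direction of the distal end of the
    endoscope (assumption for illustration).
    """
    top = R_cam[:, 1]
    # Project the endoscope's top direction onto the display plane
    # (here the model x-y plane) and measure its angle from screen-up.
    return float(np.arctan2(top[0], top[1]))
```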
In the above description, an example of detecting the position and the posture of the distal end portion 33c by image processing has been described. However, other methods may be employed to detect the position and the posture of the distal end portion 33c. For example, a method using a magnetic sensor may be considered. For example, a magnetic sensor 36 shown by a dashed line in
Outside the subject in the vicinity of the magnetic sensor 36, the magnetic field generation apparatus 50 (dashed line in
The position and posture detection unit 12 causes the magnetic field generation apparatus 50 to generate a prescribed magnetic field. The position and posture detection unit 12 detects the magnetic field by the magnetic sensor 36, and generates in real time data on position coordinates (x, y, z) and direction (Euler angles (Ψ, θ, φ)), that is, position and posture information, based on a detection signal of the detected magnetic field. In other words, the position and posture detection unit 12 is a detector that detects three-dimensional arrangement including at least part of the position and direction information on the image pickup device 31, based on the detection signal from the magnetic sensor 36. More specifically, the position and posture detection unit 12 detects three-dimensional arrangement time change information that is information on changes in the three-dimensional arrangement over time. Therefore, the position and posture detection unit 12 acquires three-dimensional arrangement information on the insertion portion 33 at a plurality of time points.
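For reference, the position coordinates (x, y, z) and the Euler angles (Ψ, θ, φ) generated by the position and posture detection unit 12 can be combined into a single pose matrix. A sketch assuming a Z-Y-X rotation order; the convention actually used depends on the magnetic tracking system.

```python
import numpy as np

def pose_matrix(x, y, z, psi, theta, phi):
    """4x4 pose from position (x, y, z) and Euler angles (psi, theta, phi).

    Assumes a Z-Y-X rotation order for illustration; the convention of an
    actual magnetic tracking system must be taken from its documentation.
    """
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # combined orientation
    T[:3, 3] = (x, y, z)       # sensor position
    return T
```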
In the above description, the organ model is sequentially generated based on examination images that are sequentially inputted in the model generation unit 25. However, an existing organ model may also be used.
Note that in the present embodiment, even in the case of using the existing organ model, the display content control unit 27 creates display data in which the organ model image is arranged so as to match the top and the bottom of the examination image with the top and the bottom of the organ model, respectively.
Next, the operation of the thus-configured embodiment is described by referring to
After power is applied to an endoscope apparatus 1, the insertion portion 33 is inserted into an examination target, and examination is started. The image pickup device 31 is driven by the image generation circuit 40 to pick up an image of the inside of the patient and acquire an endoscope image (step S1). An image signal from the image pickup device 31 is supplied to the image generation circuit 40 for prescribed image processing. The image generation circuit 40 generates an examination image (endoscope image) based on the image signal and outputs the examination image to the monitor 60. In this way, the examination image is displayed on the display screen 60a of the monitor 60.
The examination image is also supplied to the image processing apparatus 10. The image acquisition unit 11 supplies the received examination image to the processor 20. The examination image is supplied to the position and posture estimation unit 24 and the model generation unit 25 by the input/output unit 23 of the processor 20. The position and posture estimation unit 24 and the model generation unit 25 perform generation of an organ model and estimation of the position and the posture of the distal end portion 33c (the distal end of the endoscope) in steps S2 and S3. When the model generation unit 25 receives the examination images, the model generation unit 25 generates an organ model for the observed region.
The unobserved region determination unit 26 detects an unobserved region surrounded with the organ model generated by the model generation unit 25 (step S4), and outputs the determination result to the display content control unit 27.
The display content control unit 27 superimposes an image of the unobserved region on an image of the organ model from the model generation unit 25, and generates display data for matching the top and the bottom of the organ model with the top and the bottom of the distal end portion 33c, that is, the top and the bottom of the examination screen, respectively (step S5). The display data from the display content control unit 27 is supplied to the display I/F 13 via the input/output unit 23, converted to a format that can be displayed on the monitor 60, and supplied to the monitor 60. In this way, the examination screen and the organ model display including the organ model, on which the unobserved region is superimposed, are displayed on the display screen 60a of the monitor 60. The top and the bottom of the organ model and the top and the bottom of the examination screen match with the top and the bottom of the distal end of the endoscope, respectively, so that the operator can easily and intuitively grasp the position of the unobserved region from the display on the display screen 60a.
The display content control unit 27 may further perform viewpoint direction control for the organ model.
The examination screen I5 is obtained when the image pickup device 31 performs image pickup with an image pickup visual field in a lumen direction. In other words, a line-of-sight direction of the image pickup device 31 (an optical axis direction of an image pickup optical system) is directed in the lumen direction. In the organ model display IT3, an image 33ci1 of the distal end portion 33c superimposed on an organ model P2ai indicates that the line-of-sight direction of the image pickup device 31 is the lumen direction. In other words, as shown on an upper right side in
The examination screen I6 is obtained when the image pickup device 31 performs image pickup with an image pickup visual field in a lumen wall direction. In this case, as shown on a lower right side in
For example, assume a case where the operator operates the left and right bending operation knob 35a to bend the distal end portion 33c to the right side while the organ model display IT3 is displayed on the display screen 60a, so that the examination screen I6 and the organ model display IT4 in
When the viewpoint direction control shown in
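A sketch of this viewpoint direction control, assuming the endoscope pose is given as a position p and a unit line-of-sight vector f in model coordinates: the virtual viewpoint for rendering the organ model is placed behind the distal end and aimed along the line of sight, so that the organ model display follows the direction in which the image pickup device is looking. The pull-back distance is an illustrative value.

```python
import numpy as np

def model_viewpoint(p, f, back_off=50.0):
    """Virtual camera (eye, look-at target) for the organ model display.

    p: 3-D position of the distal end of the endoscope (model coordinates).
    f: line-of-sight vector of the image pickup device.
    back_off: how far behind the distal end to place the viewpoint
    (an illustrative value, in model units).
    """
    f = f / np.linalg.norm(f)
    eye = p - back_off * f   # pull the viewpoint back along the axis
    target = p + f           # look where the image pickup device looks
    return eye, target
```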
Thus, in the present embodiment, when the top and the bottom of the organ model display to be displayed is set based on the top and the bottom of the examination screen, the position of the unobserved region can be grasped easily and intuitively. This facilitates the bending operation of the endoscope for observation of the unobserved region. In addition, the organ model display is displayed according to the viewpoint direction of the image pickup device, which makes it easier to confirm the unobserved region.
The display content control unit 27 changes the display magnification rate of the organ model according to the moving speed of the image pickup device 31, in addition to display control similar to the display control in the first embodiment. In step S21 in
The organ model displays IT5S and IT5L are based on, for example, the organ model of an intestinal tract of the same subject. For example, when the operator inserts and removes the insertion portion 33 to and from the intestinal tract, the processor 20 creates the organ model of the intestinal tract. The operator examines the inside of the intestinal tract while removing the insertion portion 33 from the intestinal tract. In
The organ model display IT5S indicates that the organ model in a relatively wide range, from the distal end of the organ model to substantially the position of the image pickup device 31, is displayed at a relatively small display magnification rate. The organ model display IT5L indicates that the organ model in a relatively narrow range in the vicinity of the image pickup device 31 is displayed at a relatively large display magnification rate.
For example, when the insertion portion 33 is inserted and removed at relatively high speed, a relatively wide range of the organ model is displayed as in the case of the organ model display IT5S, so that the movement is easily confirmed. Conversely, in a case where, for example, a desired observation target region is confirmed in detail with the image pickup device 31, the speed of insertion and removal of the insertion portion 33 is relatively low, and a relatively small range of the organ model is displayed at a large magnification rate as in the case of the organ model display IT5L. This makes it possible to confirm the desired observation target region in detail.
Thus, in the modification, the organ model is displayed at a display magnification rate in accordance with the moving speed of the image pickup device 31, which makes it easy to observe the observation target region.
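The speed-dependent magnification described in this modification might be realized, for example, by interpolating between a wide view for fast movement and a zoomed view for slow, detailed confirmation. All threshold speeds and magnification limits below are illustrative assumptions.

```python
def display_magnification(speed, v_low=2.0, v_high=10.0,
                          m_min=0.5, m_max=2.0):
    """Display magnification rate as a function of insertion/removal speed.

    The faster the image pickup device moves, the wider (smaller
    magnification) the displayed range of the organ model; thresholds
    and limits here are illustrative assumptions.
    """
    if speed >= v_high:
        return m_min    # fast motion: wide view, as in display IT5S
    if speed <= v_low:
        return m_max    # slow, detailed check: zoom in, as in display IT5L
    # Linear interpolation between the two regimes.
    ratio = (v_high - speed) / (v_high - v_low)
    return m_min + ratio * (m_max - m_min)
```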
In addition to display control similar to the display control in the first embodiment, the display content control unit 27 also determines changeover of organs according to the examination image from the image pickup device 31, and switches the organ model display. In step S31 in
When the display content control unit 27 detects that the distal end portion 33c (the image pickup device 31) has passed the changeover portion (step S33), the display content control unit 27 generates display data for displaying an organ model about an organ after the changeover in place of the organ model displayed before the changeover (step S34).
Thus, in the present modification, every time the image pickup device 31 moves between organs, the organ model, corresponding to the organ to which the image pickup device 31 moves, is displayed, and this makes it easy to observe the observation target region. Note that the example in
Although a display direction of the organ model display is not shown in the example in
The display content control unit 27 creates display data for organ model displays that are shown in
In
In step S41 of
In step S43, the display content control unit 27 determines whether a current phase is an observation phase for observing the organ and searching for candidates of a lesion portion, a diagnosis phase for diagnosing the lesion portion, or a treatment phase for treating the lesion portion. For example, the display content control unit 27 may determine the diagnosis phase when a distance between the image pickup device 31 and the photographing target is equal to or less than a prescribed threshold. The display content control unit 27 may also determine the diagnosis phase or the treatment phase when the moving speed of the image pickup device 31 is equal to or less than a prescribed threshold speed.
When the display content control unit 27 determines the observation phase as a result of the phase determination (YES is determined in S44), the display content control unit 27 displays an image of the unobserved region superimposed on the organ model image in step S45. When the display content control unit 27 determines the diagnosis phase or the treatment phase (NO is determined in S44), the display content control unit 27 does not display the image of the unobserved region.
This can prevent the display of the unobserved region from hindering recognition of the lesion portion when diagnosis or treatment is performed.
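A minimal sketch of the phase determination and the resulting display switch, using the distance and speed criteria mentioned above; the threshold values and names are illustrative assumptions.

```python
def determine_phase(distance_to_target, speed,
                    dist_threshold=5.0, speed_threshold=1.0):
    """Classify the current phase; thresholds are illustrative assumptions.

    Returns "observation" when the image pickup device is far from the
    photographing target and moving, otherwise "diagnosis_or_treatment".
    """
    if distance_to_target <= dist_threshold or speed <= speed_threshold:
        return "diagnosis_or_treatment"
    return "observation"

def should_display_unobserved(phase):
    # The unobserved region is superimposed only in the observation phase
    # so that it does not hinder recognition of the lesion portion.
    return phase == "observation"
```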
In the present embodiment, the unobserved regions are divided into four classification items to optimize the display. The four classification items include (1) an occlusion region, (2) a short observation time region, (3) a photographed region, and (4) an examination screen outside region. This allows the operator to grasp a cause of an unobserved region or the like, and may help determine, for example, the position to be observed next.
(1) The occlusion region is an unobserved region due to shielding objects. Examples of the occlusion region may include regions behind folds or regions occluded by residues or bubbles.
(2) The short observation time region is a region that could not be observed due to a high moving speed of the distal end of the scope.
(3) The photographed region is a region that has been photographed from among the unobserved regions.
(4) The examination screen outside region is an unobserved region that exists outside the current image pickup range, that is, outside the examination screen.
The CPU 21 in the processor 20 classifies an unobserved region into at least one or more of the regions (1) to (4). The CPU 21 acquires information on an unobserved region from the unobserved region determination unit 26, and acquires the position and posture information on the image pickup device 31 from the position and posture estimation unit 24 so as to classify the unobserved region based on the position and the posture of the image pickup device 31. The CPU 21 provides the classification result of each region to the display content control unit 27. The display content control unit 27 creates the display data in a display form that is set for each of the unobserved regions (1) to (4).
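The four classification items and their per-class display forms can be organized, for example, as follows. The colors and line types are illustrative assumptions; the description above only requires that each classification be displayed in a distinguishable form.

```python
from enum import Enum, auto

class UnobservedClass(Enum):
    OCCLUSION = auto()           # (1) hidden behind folds, residues, bubbles
    SHORT_OBSERVATION = auto()   # (2) passed too fast to observe
    PHOTOGRAPHED = auto()        # (3) formerly unobserved, now photographed
    OUTSIDE_SCREEN = auto()      # (4) outside the current image pickup range

# Illustrative display forms per classification (colors/lines assumed).
DISPLAY_FORM = {
    UnobservedClass.OCCLUSION:         {"color": "red",    "line": "solid"},
    UnobservedClass.SHORT_OBSERVATION: {"color": "yellow", "line": "dotted"},
    UnobservedClass.PHOTOGRAPHED:      {"color": "green",  "line": "solid"},
    UnobservedClass.OUTSIDE_SCREEN:    {"color": "blue",   "line": "dotted"},
}
```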
The CPU 21 detects the regions (hereafter referred to as occlusion regions) that are likely to be occluded due to the presence of occlusion elements that shield the visual field, such as folds. For example, the CPU 21 detects folds inside the lumen to be examined, and elements such as residues, bubbles, and bleeding present inside the lumen as occlusion elements. For example, the CPU 21 may use AI to determine the occlusion elements. For example, an inference model is generated in advance by acquiring a plurality of examination images including occlusion elements, and performing deep learning using the examination images as teacher data. The CPU 21 may use the inference model to determine the occlusion elements and the occlusion regions in the examination images.
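A sketch of how such an inference model might be applied per frame. The segmentation model `occlusion_model` is hypothetical: it stands for a network trained in advance with examination images containing occlusion elements as teacher data, and is assumed to return a per-pixel score map in [0, 1].

```python
import numpy as np

def detect_occlusion_regions(frame, occlusion_model, score_threshold=0.5):
    """Run a previously trained inference model on an examination image.

    occlusion_model: hypothetical callable returning an H x W score map
    in [0, 1]. Pixels above the threshold are treated as occlusion
    elements (folds, residues, bubbles, bleeding) that may shield
    unobserved regions. The threshold is an illustrative assumption.
    """
    score_map = occlusion_model(frame)
    return np.asarray(score_map) > score_threshold
```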
There is also a method of detecting the occlusion region using a region already determined as an unobserved region. With reference to
A left side in
The CPU 21 calculates the moving speed of the image pickup device 31 from the frame rate of the examination image and changes in position of the distal end portion 33c, for example. The CPU 21 acquires information on the position of the image pickup device 31 from the position and posture estimation unit 24, and acquires information on a position of an unobserved region from the unobserved region determination unit 26 to obtain the moving speed of the image pickup device 31 that passes the unobserved region.
When the moving speed of the image pickup device 31 that passes the unobserved region is equal to or more than a prescribed threshold, the CPU 21 classifies the unobserved region as a short observation time region. The CPU 21 provides information about the short observation time region to the display content control unit 27. The display content control unit 27 displays a display indicating the short observation time region in the examination screen.
The display content control unit 27 displays the short observation time region in a display format consistent with the display methods of the other classifications. In this case, the display content control unit 27 changes a display color or a line type (solid line/dotted line) so that the short observation time region can be recognized as such.
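The passing speed and the short-observation-time decision can be sketched as below from the frame rate and the successive position estimates; the threshold value is an illustrative assumption.

```python
import numpy as np

def passing_speed(p_prev, p_curr, fps=30.0):
    """Moving speed of the image pickup device from successive positions.

    p_prev, p_curr: estimated 3-D positions of the distal end in two
    consecutive frames; fps: frame rate of the examination images.
    """
    return np.linalg.norm(np.asarray(p_curr) - np.asarray(p_prev)) * fps

def is_short_observation(speed, threshold=20.0):
    # Threshold is an illustrative assumption (model units per second).
    return speed >= threshold
```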
The CPU 21 acquires information about an unobserved region from the unobserved region determination unit 26. The unobserved region determination unit 26 sequentially determines the unobserved region. Based on the information from the unobserved region determination unit 26, the CPU 21 can grasp that the unobserved region has changed to the photographed region. The CPU 21 provides information about the photographed region to the display content control unit 27. The display content control unit 27 displays a display indicating the photographed region. The CPU 21 may also notify a user at predetermined timing that the unobserved region has changed to the photographed region.
Note that the CPU 21 may be configured to inform the operator of the position of the unobserved region. When the operator moves the insertion portion 33 and photographs an unobserved region with the image pickup device 31, the unobserved region is classified to the photographed region.
A left side in
When the CPU 21 is provided with information about the photographing region and the unobserved region from the model generation unit 25 and the unobserved region determination unit 26, the CPU 21 classifies the unobserved region outside the photographing region as the examination screen outside region. The CPU 21 outputs the classification result of the examination screen outside region to the display content control unit 27. The display content control unit 27 displays a display indicating the examination screen outside region.
An upper stage of
A lower stage in
The display content control unit 27 may also display various types of information about the examination screen outside region on the examination screen. The CPU 21 acquires the various types of information about the examination screen outside region by acquiring information from the unobserved region determination unit 26, and outputs the acquired information to the display content control unit 27. The display content control unit 27 displays on the examination screen a display based on the information from the CPU 21.
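As one example of such information, a direction and a distance for an examination screen outside region can be derived from the region's position in the camera coordinate system of the image pickup device. A sketch under an assumed axis convention:

```python
import numpy as np

def offscreen_indicator(region_pos_cam):
    """Direction and distance cue for an examination screen outside region.

    region_pos_cam: position of the unobserved region in the camera
    coordinate system of the image pickup device (x right, y down,
    z along the line of sight), an illustrative convention.
    Returns the angle (radians) on the screen plane for the direction
    display and the Euclidean distance for the distance display.
    """
    x, y, z = region_pos_cam
    angle = float(np.arctan2(y, x))
    distance = float(np.linalg.norm(region_pos_cam))
    return angle, distance
```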
An upper stage in
A middle stage in
A lower stage in
Next, the operation of the thus-configured embodiment is described by referring to
After power is applied to the endoscope apparatus 1, the insertion portion 33 is inserted into an examination target, and examination is started. The image pickup device 31 is driven by the image generation circuit 40 to pick up images of an inside of a patient and acquire a plurality of endoscope images (step S51). Image signals from the image pickup device 31 are supplied to the image generation circuit 40 for prescribed image processing. The image generation circuit 40 generates examination images (endoscope images) based on the image signals and outputs the examination images to the monitor 60. In this way, the examination images are displayed on the display screen 60a of the monitor 60.
The position and posture detection unit 12 estimates the position of the distal end of the endoscope using the detection signal of the magnetic sensor 36, which detects the magnetic field generated by the magnetic field generation apparatus 50 (step S52). The position and the posture of the distal end of the endoscope estimated by the position and posture detection unit 12 are supplied to the processor 20. The examination image from the image generation circuit 40 is also supplied to the image processing apparatus 10. The image acquisition unit 11 supplies the received examination image to the processor 20. The examination image is supplied to the model generation unit 25 by the input/output unit 23 of the processor 20. The model generation unit 25 generates an organ model (step S53).
The unobserved region determination unit 26 determines an unobserved region surrounded with the organ model generated by the model generation unit 25 (step S54), and outputs the determination result to the CPU 21 and the display content control unit 27. The CPU 21 classifies the unobserved region into an occlusion region, a short observation time region, a photographed region, or an examination screen outside region, based on the positional relation between the unobserved region and the distal end of the endoscope, and outputs the classification result to the display content control unit 27. The display content control unit 27 displays the examination screen and the organ model on the display screen 60a of the monitor 60 according to the classification result (step S56).
Thus, in the present embodiment, the unobserved regions are classified into the four classification items including the occlusion region, the short observation time region, the photographed region, and the examination screen outside region, and are displayed accordingly. This allows the operator to grasp a cause or the like of the unobserved region, and can help determine, for example, a position to be observed next.
The CPU 21 calculates a distance (Euclidean distance) between an unobserved region and the distal end of the endoscope for display control of the unobserved region. The CPU 21 acquires information on a position of the distal end of the endoscope from the position and posture estimation unit 24, and acquires information on a position of the unobserved region from the unobserved region determination unit 26, so as to obtain the distance between the unobserved region and the distal end of the endoscope.
The CPU 21 also determines diagnosis and treatment phases for display control of the unobserved region, and determines places where insertion and removal of the insertion portion 33 is difficult. For example, the CPU 21 generates an inference model by performing deep learning with examination images of places having high operation difficulty, such as insertion and removal, as teacher data, and uses the inference model so that the places having high operation difficulty can be determined. The CPU 21 may also determine the diagnosis or treatment phase, which requires intensive work by the operator, by such methods as detecting treatment instruments using an operating signal from the endoscope 30 or using AI. The CPU 21 also acquires information on the observation route for display control of the unobserved region. For example, the CPU 21 can determine which position on the observation route is currently under observation by storing the information on the observation route in the storage unit 22 and using outputs of the position and posture estimation unit 24, the model generation unit 25, and the unobserved region determination unit 26. The CPU 21 may also output, to the user, information on an operation method to reach an unobserved region, such as raising the distal end of the endoscope, pulling the endoscope back, or pushing the endoscope in.
The CPU 21 outputs acquired various types of information to the display content control unit 27. The display content control unit 27 controls the display of the unobserved region based on the various types of information from the CPU 21.
In a process of organ model generation, a large number of unobserved regions emerge in the vicinity of the image pickup device 31, and when the regions in the vicinity of the image pickup device 31 are displayed on the examination screen as unobserved regions, visibility of an observation site may deteriorate, and this may cause difficulty in observation. Therefore, in the present embodiment, unobserved regions present at a distance closer than a threshold θ are controlled so as not to be displayed.
The display content control unit 27 is provided with the information about unobserved regions from the unobserved region determination unit 26, and is also provided with the distance d and the threshold θ for each of the unobserved regions from the CPU 21. The display content control unit 27 displays the unobserved region on the examination screen when the distance d to the pertinent unobserved region exceeds the threshold θ.
The CPU 21 also reduces the threshold value θ before the image pickup device 31 passes through a site having high operation difficulty. For example, in examination of a colon, the threshold θ is reduced in front of sites having difficulty in insertion or removal of the insertion portion 33, such as a sigmoid colon and splenic flexure portions. As a result, the unobserved regions at a relatively short distance from the image pickup device 31 are also displayed on the examination screen. The operator performs bending operation of the insertion portion 33 so that the unobserved regions are eliminated. As a result, the unobserved regions are less overlooked in such sites, and repeated insertion and removal of the insertion portion 33 can be prevented.
The CPU 21 also reduces the threshold θ immediately after diagnosis or treatment. During operation such as diagnosis and treatment, the operator often concentrates on work in a fixed place for a fixed period of time. In such a case, the operator may forget an unobserved region that the operator had intended to observe before starting the work on such a place. Accordingly, immediately after such a treatment work, the unobserved regions at a relatively short distance from the image pickup device 31 are also displayed. For example, the CPU 21 recognizes these phases, and reduces the threshold θ, by such methods as detecting switching between normal-light observation and NBI (narrow band imaging) observation, detecting zoom-in and zoom-out operations, and detecting treatment instruments using AI.
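A minimal sketch of this distance-based display control, combining the d > θ test with the threshold reductions described above; the base threshold and reduction factor are illustrative assumptions.

```python
import numpy as np

def display_threshold(base_theta, near_difficult_site=False,
                      just_after_treatment=False, reduction=0.5):
    """Threshold θ for hiding nearby unobserved regions.

    θ is reduced before sites having high operation difficulty (e.g., a
    sigmoid colon) and immediately after diagnosis or treatment, so that
    unobserved regions close to the image pickup device are also shown.
    The base value and reduction factor are illustrative assumptions.
    """
    theta = base_theta
    if near_difficult_site or just_after_treatment:
        theta *= reduction
    return theta

def visible_unobserved(regions, endoscope_tip, theta):
    """Keep only unobserved regions farther than θ from the distal end."""
    tip = np.asarray(endoscope_tip)
    return [r for r in regions
            if np.linalg.norm(np.asarray(r) - tip) > theta]
```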
An upper stage in
The organ model display IT12 shown in the lower stage in
An organ model display IT13L in an upper stage in
The organ model displays IT13S and IT13L are based on, for example, the organ model of an intestinal tract of the same subject. The organ model display IT13S indicates that the organ model in a relatively wide range, from the distal end of the organ model to substantially the position of the image pickup device 31, is displayed at a relatively small display magnification rate. The organ model display IT13S includes an image IT13Si of the organ model, an image 31bSi of the image pickup device 31, and an image Ru13S of an unobserved region.
The organ model display IT13L indicates that a relatively narrow range of the organ model in the vicinity of the image pickup device 31 is displayed at a relatively large display magnification rate. The organ model display IT13L includes an image IT13Li of the organ model, an image 31bLi of the image pickup device 31, and an image Ru13L of an unobserved region.
A left side in
It is assumed that the distance d is further increased by the movement of the image pickup device 31. A right side in
Such emphasis display of the unobserved region prevents the operator from overlooking the unobserved region. Note that in addition to the blinking control, various emphasis displays, such as changing brightness or changing a thickness of the square frame, may also be adopted.
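The distance-dependent switch between hiding, normal display, and emphasis (blinking) display might be organized as below; the factor that triggers blinking is an illustrative assumption.

```python
def emphasis_style(distance, theta, far_factor=3.0):
    """Display style of an unobserved region based on its distance d.

    Hidden while d <= θ, shown normally once d exceeds θ, and blinked
    when the image pickup device has moved far away (far_factor is an
    illustrative assumption) so that the region is not overlooked.
    """
    if distance <= theta:
        return "hidden"
    if distance > far_factor * theta:
        return "blinking"
    return "normal"
```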
(Deviation from Observation Route)
The CPU 21 acquires information on the observation route of an organ and outputs the acquired information to the display content control unit 27. The display content control unit 27 compares the observation route order of the region at the position of the distal end of the endoscope with the observation route order at the position of the unobserved region, and when the observation route order at the position of the unobserved region comes later, the pertinent unobserved region is not displayed.
Thus, an unobserved region is present in each of section 2 and section 5. Observation by the image pickup device 31 is performed in order of the section numbers starting from section 1. Assume that the image pickup device 31 is now located in section 2 and is observing the region of section 2. In this case, as shown in the middle stage of
Assume that as the observation proceeds, the image pickup device 31 reaches section 5 and is observing the region in section 5. In this case, as shown in the lower stage in
In this way, since the display of the unobserved regions is controlled according to the observation route, smooth observation is easily performed.
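A minimal sketch of this route-order filtering, using the numbered sections of the example above: an unobserved region whose section comes later on the observation route than the section currently observed is not displayed yet.

```python
def visible_by_route(unobserved_sections, current_section):
    """Hide unobserved regions that lie later on the observation route.

    Sections are numbered along the observation route (1, 2, ...); an
    unobserved region in a section after the one currently observed is
    not displayed yet.
    """
    return [s for s in unobserved_sections if s <= current_section]

print(visible_by_route([2, 5], 2))  # at section 2 -> [2]
print(visible_by_route([2, 5], 5))  # at section 5 -> [2, 5]
```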
In the example shown in
Next, the operation of the thus-configured embodiment is described by referring to
After power is applied to the endoscope apparatus 1, the insertion portion 33 is inserted into an examination target, and examination is started. The image pickup device 31 is driven by the image generation circuit 40 to pick up images of an inside of a patient and acquire a plurality of endoscope images (step S61). Image signals from the image pickup device 31 are supplied to the image generation circuit 40 for prescribed image processing. The image generation circuit 40 generates examination images (endoscope images) based on the image signals and outputs the examination images to the monitor 60. In this way, the examination images are displayed on the display screen 60a of the monitor 60.
The examination images from the image generation circuit 40 are also supplied to the image processing apparatus 10. The image acquisition unit 11 supplies the received examination images to the processor 20. The examination images are supplied to the position and posture estimation unit 24 and the model generation unit 25 by the input/output unit 23 of the processor 20. The model generation unit 25 generates an organ model (step S62), and the position and posture estimation unit 24 obtains a position of the distal end of the endoscope (step S63).
The unobserved region determination unit 26 determines an unobserved region surrounded with the organ model generated by the model generation unit 25 (step S64), and outputs the determination result to the CPU 21 and the display content control unit 27. The CPU 21 calculates the distance d between the unobserved region and the distal end of the endoscope based on the positional relation between the unobserved region and the distal end of the endoscope, and obtains the threshold θ. The CPU 21 outputs the distance d and the threshold θ to the display content control unit 27 to control the display (step S65).
The CPU 21 determines an examination phase and provides the determination result to the display content control unit 27 to control the display (step S66). In addition, the CPU 21 determines whether each of the unobserved regions is an unobserved region that deviates from the observation route, and provides the determination result to the display content control unit 27 to control the display (step S67). The display content control unit 27 controls the display based on the output of the CPU 21 (step S68). For steps S65 to S68, at least one of the processes may be executed, and the execution order is not particularly limited.
Thus, in the present embodiment, the display of the unobserved regions is controlled based on the distance to the distal end of the endoscope, the examination phase, and the observation route, which provides an effect of making it easier for the operator to perform observation based on the examination screen and the organ model.
The present invention is not limited to the above embodiments as they are, and may be embodied by modifying the component members at the implementation stage without departing from the scope of the invention. Moreover, various modes of the present invention may be formed by appropriately combining a plurality of the component members disclosed in each of the embodiments. For example, some component members out of all the component members shown in the embodiments may be deleted. Furthermore, component members in different embodiments may properly be combined.
This application is a continuation application of PCT/JP2021/026430 filed on Jul. 14, 2021, the entire contents of which are incorporated herein by this reference.
|  | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/JP2021/026430 | Jul 2021 | US |
| Child | 18385532 |  | US |