The present invention relates to an endoscope system, a controller, a control method, and a recording medium.
The present application claims priority to U.S. Provisional Patent Application No. 63/076,408, filed on Sep. 10, 2020, which is incorporated herein by reference. This application is a continuation of International Application PCT/JP2021/033210, which is hereby incorporated by reference herein in its entirety.
Conventionally, an endoscope system that controls an electric holder so as to move an endoscope held by the holder has been known (for example, see PTL 1).
The endoscope system of PTL 1 stores time-series variations in the rotation angle of each joint of the holder in a manual mode while an operator moves the endoscope, and reversely reproduces those time-series variations in an automatic return mode. Thus, the endoscope moves backward along the movement path of the manual mode and automatically returns to the initial position and orientation.
{PTL 1} Japanese Patent No. 6161687
An aspect of the present invention is an endoscope system including: an endoscope that is inserted into a subject (into the body cavity of a patient) and captures an endoscope image in the subject; a moving device that holds the endoscope and moves the endoscope; a storage unit; and a controller including at least one processor, wherein the storage unit stores first position information and first rotation angle information on a first region in the subject and second position information and second rotation angle information on a second region different from the first region in the subject, the first rotation angle information defining a rotation angle of the endoscope image of the first region, the second rotation angle information defining a rotation angle of the endoscope image of the second region, the at least one processor calculates third rotation angle information on a third region in the subject on the basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region, the third region being different from the first and second regions, the at least one processor rotates the endoscope image on the basis of the third rotation angle information if the third region includes a current imaging region that is currently being imaged by the endoscope, and the at least one processor outputs the rotated endoscope image to a display device.
Another aspect of the present invention is a controller configured to control an endoscope image that is captured by an endoscope and is displayed on a display device, the controller including: a storage unit; and at least one processor, wherein the storage unit stores first position information and first rotation angle information on a first region in a subject and second position information and second rotation angle information on a second region different from the first region in the subject, the first rotation angle information defining a rotation angle of the endoscope image of the first region, the second rotation angle information defining a rotation angle of the endoscope image of the second region, the at least one processor calculates third rotation angle information on a third region in the subject on the basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region, the third region being different from the first and second regions, and the at least one processor rotates the endoscope image on the basis of the third rotation angle information if the third region includes a current imaging region that is currently being imaged by the endoscope, and the at least one processor outputs the rotated endoscope image to the display device.
Another aspect of the present invention is a control method for controlling an endoscope image that is captured by an endoscope and is displayed on a display device, by using first position information and first rotation angle information on a first region in a subject and second position information and second rotation angle information on a second region different from the first region in the subject, the first rotation angle information defining a rotation angle of the endoscope image of the first region, the second rotation angle information defining a rotation angle of the endoscope image of the second region, the control method including the steps of: calculating third rotation angle information on a third region in the subject on the basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region, the third region being different from the first and second regions; rotating the endoscope image on the basis of the third rotation angle information if the third region includes a current imaging region that is currently being imaged by the endoscope, and outputting the rotated endoscope image to the display device.
Another aspect of the present invention is a computer-readable non-transitory recording medium in which a control program for causing a computer to perform the control method is recorded.
An endoscope system, a controller, a control method, and a recording medium according to a first embodiment of the present invention will be described below with reference to the accompanying drawings.
As illustrated in the drawings, an endoscope system 10 according to the present embodiment includes an endoscope 2, a moving device 3 that holds the endoscope 2 and moves the endoscope 2, a controller 1, an endoscope processor 4, and a display device 5.
As illustrated in the drawings, the endoscope 2 and a surgical instrument 6 are inserted into a subject through holes H opened in a body wall of the subject.
The endoscope 2 and the surgical instrument 6 may be inserted into the subject through a cannula passing through the hole H. The cannula is a cylindrical instrument opened at both ends. In this case, the endoscope 2 is supported by the cannula at the position of the hole H.
As illustrated in the drawings, the endoscope 2 is a direct-view endoscope having a visual axis (optical axis) C coaxial with a longitudinal axis I of the endoscope 2. The endoscope 2 is, for example, a rigid endoscope. The endoscope 2 includes an image sensor 2a, captures an image in the subject X, for example, in an abdominal cavity, and acquires the endoscope image E including the tip of the surgical instrument 6.
The endoscope image E is transmitted from the endoscope 2 to the endoscope processor 4, is subjected to necessary processing in the endoscope processor 4, is transmitted from the endoscope processor 4 to the display device 5, and is displayed on a display screen 5a of the display device 5. The display device 5 may be any display, for example, a liquid crystal display or an organic electroluminescent display. A surgeon operates the surgical instrument 6 in a body while observing the endoscope image E displayed on the display screen 5a. The display device 5 may include an audio system, for example, a speaker.
In addition to the display device 5, a user terminal for communications with the controller 1 and the endoscope processor 4 via a communication network may be provided to display the endoscope image E at the terminal. The terminal is, for example, a notebook computer, a laptop computer, a tablet computer, or a smartphone but is not particularly limited thereto.
The moving device 3 includes a robot arm 3a (including an electric scope holder) that holds the endoscope 2 and three-dimensionally controls the position and orientation of the endoscope 2. The moving device 3 includes a plurality of joints 3b and 3c that operate to move the endoscope 2 with the pivot axis P1 serving as a supporting point, thereby three-dimensionally changing the position and orientation of the endoscope 2.
As illustrated in the drawings, the moving device 3 includes a plurality of angle sensors 3d that detect the rotation angles of the joints 3b and 3c. Each angle sensor 3d is, for example, an encoder, a potentiometer, or a Hall sensor provided at the corresponding one of the joints 3b and 3c.
As illustrated in the drawings, the controller 1 includes at least one processor 11, a memory 12, a storage unit 13, an input interface 14, an output interface 15, and a user interface 16. The processor 11 may be a single processor, a multiprocessor, or a multicore processor. The processor 11 reads and executes a program stored in the storage unit 13.
The memory 12 is, for example, a semiconductor memory including a ROM (Read-Only Memory) area or a RAM (Random Access Memory) area. Like the storage unit 13, which will be described later, the memory 12 may store data necessary for the processing of the processor 11 (that is, the memory 12 may operate as a storage unit).
The storage unit 13 is a computer-readable non-transitory recording medium, e.g., a hard disk or a nonvolatile recording medium including a semiconductor memory such as a flash memory. The storage unit 13 stores various programs, including a follow-up control program (not illustrated) and an image control program (control program) 1a, and data necessary for the processing of the processor 11. Processing performed by the processor 11 may instead be implemented by dedicated logic circuits or hardware, for example, an FPGA (Field Programmable Gate Array), a SoC (System-on-a-Chip), an ASIC (Application Specific Integrated Circuit), or a PLD (Programmable Logic Device).
The storage unit 13 may be a server, e.g., a cloud server connected via a communication network to the controller 1 provided with a communication interface, instead of a recording medium integrated in the controller 1. The communication network may be, for example, a public network such as the Internet, a dedicated line, or a LAN (Local Area Network). The connection of the devices may be wired connection or wireless connection.
The endoscope processor 4, which processes the endoscope image E, may also be provided with a processor like the processor 11. Specifically, like the controller 1, the endoscope processor 4 may be provided with processors, dedicated logic circuits, or hardware that perform processing like that of the processor 11, which will be described later. The endoscope processor 4 and the controller 1 may be integrated into one unit. Each of the endoscope processor 4 and the controller 1 may be provided with at least one processor.
Any one of the at least one processor 11, the memory 12, the storage unit 13, the input interface 14, the output interface 15, and the user interface 16 of the controller 1 may be provided in a user terminal separate from the endoscope processor 4 and the controller 1. The controller 1 may be integrated with the moving device 3.
The input interface 14 and the output interface 15 are connected to the endoscope processor 4. The controller 1 can acquire the endoscope image E from the endoscope 2 via the endoscope processor 4 and output the endoscope image E to the display device 5 via the endoscope processor 4. The input interface 14 may be directly connected to the endoscope 2 and the output interface 15 may be directly connected to the display device 5 such that the controller 1 can directly acquire the endoscope image E from the endoscope 2 and directly output the endoscope image E to the display device 5.
The input interface 14 and the output interface 15 are connected to the moving device 3. The controller 1 acquires, from the moving device 3, information on rotation angles detected by the angle sensors 3d at the joints 3b and 3c and transmits, to the moving device 3, a control signal for driving the joints 3b and 3c.
The user interface 16 has input devices, for example, a button, a mouse, a keyboard, and a touch panel, and receives inputs from users such as a surgeon.
Moreover, the user interface 16 has a means, for example, a switch, that allows a user to switch between a manual mode and an autonomous mode, which will be described later.
The user interface 16 is configured to receive a first instruction and a second instruction from a user. The first instruction and the second instruction are instructions for causing the controller 1 to register position information and rotation angle information, which will be described later. For example, the user interface 16 has a button operated by an operator. The user interface 16 receives the first instruction in response to a first button operation and receives the second instruction in response to a second button operation.
The processor 11 operates in either the manual mode or the autonomous mode.
The manual mode is a mode that permits users such as a surgeon to operate the endoscope 2. In the manual mode, a surgeon can manually move the endoscope 2 with a hand holding the proximal end portion of the endoscope 2. Furthermore, the surgeon can remotely operate the endoscope 2 by using an operating device connected to the moving device 3. The operating device can include a button, a joystick, and a touch panel.
The autonomous mode is a mode that causes the endoscope 2 to automatically follow the surgical instrument 6 by controlling the moving device 3 on the basis of the position of the surgical instrument 6 in the endoscope image E. In the autonomous mode, the processor 11 acquires the three-dimensional position of the tip of the surgical instrument 6 from the endoscope image E and controls the moving device 3 on the basis of the three-dimensional position of the tip of the surgical instrument 6 and the three-dimensional position of a predetermined target point set in the field of view of the endoscope 2. The target point is, for example, a point that is located on the optical axis C and corresponds to the center point of the endoscope image E. Thus, the controller 1 controls the movement of the endoscope 2 so that the endoscope 2 follows the surgical instrument 6 and the tip of the surgical instrument 6 is placed at the center point of the endoscope image E.
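Although the embodiments do not specify a particular control law, the follow-up control can be pictured with a minimal sketch such as the following, in which the function name, the proportional gain, and the velocity-command interface are all hypothetical:

```python
import numpy as np

def follow_step(tip_position, target_point, gain=0.5):
    """One hypothetical control step of the autonomous follow-up mode.

    tip_position: 3D position of the surgical instrument tip,
        estimated from the endoscope image E.
    target_point: 3D position of the predetermined target point on the
        optical axis C (corresponding to the center of the image).
    Returns a velocity command for the moving device 3 that moves the
    field of view so that the tip approaches the image center.
    """
    error = np.asarray(tip_position, dtype=float) - np.asarray(target_point, dtype=float)
    return gain * error  # simple proportional control; a real system adds limits and filtering
```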
In the autonomous mode, by performing the control method described below, the processor 11 also automatically adjusts the rotation angle of the endoscope image E displayed on the display device 5 according to the position of the current imaging region.
The control method performed by the processor 11 will be described below.
As indicated in the drawings, in the manual mode, the user performs steps SA1 to SA7 and the processor 11 performs steps SB1 to SB9. As indicated in the drawings, in the autonomous mode, the processor 11 performs steps SC1 to SC10.
A user, e.g., a surgeon, inserts the endoscope 2 held by the moving device 3 into an abdominal cavity, switches to the manual mode (SA1, SB1), and starts panning around by moving the endoscope 2 in the abdominal cavity (SA3). Panning around is an operation for observing the overall abdominal cavity to confirm the positions and the like of organs and tissues. The positions of organs and tissues vary among patients, so this operation is required each time the endoscope 2 is inserted. When panning around, the surgeon rotates the endoscope 2 about the pivot axis P1 so as to observe, through the endoscope 2, a range including at least two specific tissues having anatomical characteristics. In the present embodiment, the specific tissues are the aorta F and the pelvis G.
As indicated in the drawings, when panning around is started, the processor 11 sets the position φ of the endoscope 2 to the initial position φ=0°, which corresponds to O-point (SA2, SB2).
Subsequently, as illustrated in the drawings, the surgeon adjusts the rotation angle ω of the endoscope 2 at O-point such that the aorta F is horizontally placed in the endoscope image E (SA4), and inputs the first instruction to the user interface 16 (SA5).
After the first instruction is inputted, the surgeon observes the overall aorta F through the endoscope 2 by rotating the endoscope 2 from O-point about the pivot axis P1 while keeping the rotation angle ω adjusted at O-point. As illustrated in the drawings, the aorta F makes a rotational movement in the endoscope image E as the endoscope 2 rotates from O-point to B-point. B-point is the end point of the observation range of the aorta F in the endoscope image E.
In response to the first instruction received by the user interface 16, the processor 11 determines, on the basis of the endoscope image E, the first position information and the first rotation angle information on the first region including the aorta (first specific tissue) F (SB3, SB4). The first rotation angle information is information that defines the rotation angle of the endoscope image E of the first region.
Specifically, the storage unit 13 stores a learned model 1b obtained by machine learning of the correspondence between an image including a specific tissue and the type of the specific tissue. In step SB3, the processor 11 recognizes the aorta F in the endoscope image E by using the learned model 1b and determines, as the first position information, the range of the position φ of the endoscope 2 over which the aorta F is included in the endoscope image E. In other words, the first region is a region between O-point and B-point.
For example, the first position information is φ=0° to 20°. As described above, instead of the setting of the initial position in steps SA2 and SB2, the processor 11 may set the position φ of the endoscope 2 at the time of the reception of the first instruction to the initial position φ=0°. In other words, the initial position is determined at a time and a location requested by the user.
Alternatively, the processor 11 may set the position φ of the endoscope 2 at the time of the reception of the first instruction, as the first position information without processing using the learned model 1b. In other words, the first position information is determined at a time and a location as requested by the user.
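As a rough illustration of step SB3, determining the first position information amounts to finding the range of positions φ over which the learned model recognizes the specific tissue. The sketch below is a hypothetical rendering of that idea; `recognize` stands in for inference with the learned model 1b, whose actual interface is not specified in the text:

```python
def determine_region(positions, images, recognize):
    """Hypothetical version of step SB3.

    positions: endoscope positions phi (degrees) visited while panning.
    images: the endoscope image E captured at each position.
    recognize(image): returns True when the specific tissue
        (e.g., the aorta F) is recognized in the image.
    Returns (phi_min, phi_max), the position information of the region,
    or None if the tissue was never recognized.
    """
    seen = [phi for phi, image in zip(positions, images) if recognize(image)]
    return (min(seen), max(seen)) if seen else None
```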
Subsequently, in step SB4, the processor 11 sets the endoscope image E and the rotation angle ω of the endoscope 2 at the time of the reception of the first instruction by the user interface 16, as a first reference endoscope image and a first reference rotation angle, and the processor 11 determines the first rotation angle information on the basis of the first reference endoscope image and the first reference rotation angle.
Specifically, the processor 11 calculates the first reference rotation angle corresponding to a predetermined initial rotation angle ω=0°, as a target rotation angle θt of the endoscope image E at the position φ at the time of the reception of the first instruction. The calculated target rotation angle θt represents a required rotation amount of the endoscope image E when the aorta F is to be horizontally placed in the endoscope image E at the position φ at the time of the reception of the first instruction. In the present embodiment, the first reference rotation angle ω is set at the initial rotation angle 0°.
The processor 11 then calculates a required rotation amount Δθ of the endoscope image E, which is obtained at another position φ included in the first position information, when the aorta F in the endoscope image E is to be aligned with the aorta F in the first reference endoscope image. Subsequently, the processor 11 calculates a target rotation angle θt at another position φ by adding the rotation amount Δθ to the first reference rotation angle. The calculated target rotation angle θt represents a required rotation amount of the endoscope image E when the aorta F is to be horizontally placed in the endoscope image E at another position φ.
As described above, the processor 11 calculates the target rotation angle θt of the endoscope image E when the aorta F is to be horizontally placed at each position φ=0°, . . . , 20° included in the first position information, and the processor 11 determines the target rotation angle θt at each position φ=0°, . . . , 20° as the first rotation angle information.
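In other words, the first rotation angle information can be viewed as a lookup table from position φ to target rotation angle θt, built by aligning each image against the first reference endoscope image. The following sketch is hypothetical: `estimate_rotation` stands in for whatever image-registration routine estimates the in-plane rotation Δθ between two endoscope images.

```python
def build_rotation_table(positions, images, ref_index, ref_target_angle,
                         estimate_rotation):
    """Hypothetical construction of rotation angle information (step SB4).

    ref_index: index of the first reference endoscope image, captured
        when the first instruction was received; ref_target_angle is
        the target rotation angle theta_t at that position.
    estimate_rotation(image, ref_image): in-plane rotation (degrees)
        that aligns the tissue in image with the reference image.
    Returns {phi: target rotation angle theta_t}.
    """
    ref_image = images[ref_index]
    table = {}
    for phi, image in zip(positions, images):
        delta = estimate_rotation(image, ref_image)  # rotation amount (corresponds to Δθ)
        table[phi] = ref_target_angle + delta        # θt = reference rotation angle + Δθ
    return table
```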
Thereafter, as illustrated in the drawings, the surgeon moves the endoscope 2 until the pelvis G is included in the endoscope image E, adjusts the rotation angle ω of the endoscope 2 at D-point such that the pelvis G is placed in an upper part of the endoscope image E (SA6), and inputs the second instruction to the user interface 16 (SA7).
After the second instruction is inputted, the surgeon observes the overall pelvis G through the endoscope 2 by rotating the endoscope 2 from D-point about the pivot axis P1 while keeping the rotation angle ω adjusted at D-point. Also at this point, the pelvis G makes a rotational movement in the endoscope image E as the endoscope 2 rotates from D-point to A-point. A-point is the end point of the observation range of the pelvis G in the endoscope image E.
In response to the second instruction received by the user interface 16, the processor 11 determines, on the basis of the endoscope image E, the second position information and the second rotation angle information on the second region including the pelvis (second specific tissue) G (SB5, SB6). The second rotation angle information is information that defines the rotation angle of the endoscope image E of the second region.
Specifically, in step SB5, the processor 11 recognizes the pelvis G in the endoscope image E by using the learned model 1b and determines, as the second position information, the range of the position φ of the endoscope 2 over which the pelvis G is included in the endoscope image E. In other words, the second region is a region between D-point and A-point. For example, the second position information is φ=70° to 90°.
Also for the second position information, the processor 11 may set the position φ of the endoscope 2 at the time of the reception of the second instruction, as the second position information without processing using the learned model 1b. In other words, the second position information is determined at a time and a location as requested by the user.
Subsequently, in step SB6, the processor 11 sets the endoscope image E and the rotation angle ω of the endoscope 2 at the time of the reception of the second instruction by the user interface 16, as a second reference endoscope image and a second reference rotation angle, and the processor 11 determines the second rotation angle information on the basis of the second reference endoscope image and the second reference rotation angle.
Specifically, the processor 11 calculates the second reference rotation angle corresponding to the initial rotation angle ω=0°, as a target rotation angle θt of the endoscope image E at the position φ at the time of the reception of the second instruction. The calculated target rotation angle θt represents a required rotation amount of the endoscope image E when the pelvis G is to be placed in an upper part of the endoscope image E at the position φ at the time of the reception of the second instruction.
The processor 11 then calculates a required rotation amount Δθ of the endoscope image E, which is obtained at another position φ included in the second position information, when the pelvis G in the endoscope image E is to be aligned with the pelvis G in the second reference endoscope image. Subsequently, the processor 11 calculates a target rotation angle θt at another position φ by adding the rotation amount Δθ to the second reference rotation angle. The calculated target rotation angle θt represents a required rotation amount of the endoscope image E when the pelvis G is to be placed in an upper part of the endoscope image E at another position φ.
As described above, the processor 11 calculates the target rotation angle θt of the endoscope image E when the pelvis G is to be placed in an upper part at each position φ=70°, . . . , 90° included in the second position information, and the processor 11 determines the target rotation angle θt at each position φ=70°, . . . , 90° as the second rotation angle information.
The processor 11 then calculates third position information and third rotation angle information on a third region on the basis of the first position information, the first rotation angle information, the second position information, and the second rotation angle information (SB7, SB8). The third region is different from the first region and the second region and is located between A-point and B-point in the present embodiment.
In step SB7, the processor 11 determines, as the third position information, the range of the position φ between the first position information and the second position information. For example, the third position information is φ=20° to 70°.
In step SB8, the processor 11 then calculates the third rotation angle information on the basis of the first, second, and third position information and the first and second rotation angle information. The third rotation angle information is information that defines the rotation angle of the endoscope image E of the third region.
Specifically, the processor 11 calculates the positional relationship between the third position information and the first and second position information and calculates the third rotation angle information on the basis of the positional relationship, the first rotation angle information, and the second rotation angle information.
For example, it is assumed that each position φ (M-point) of the third position information is an internally dividing point that internally divides the path between A-point and B-point in an m:n ratio. The processor 11 calculates the target rotation angle θt at each position φ on the basis of the ratio m:n, the rotation angle of 100° at A-point, and the rotation angle of −10° at B-point. For example, the position φ=45° internally divides the path between B-point and A-point in a 1:1 ratio, so that the target rotation angle θt at the position φ=45° is 45°, the median value between −10° and 100°.
Thus, the target rotation angle θt is calculated so as to gradually change from −10° to 100° as the position φ changes from B-point to A-point.
The processor 11 determines the target rotation angle θt at each position φ=20°, . . . , 70° as the third rotation angle information.
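The internal-division calculation of step SB8 is, in effect, linear interpolation between the target rotation angles at the two boundary points. A minimal sketch under the numerical example given above (boundary positions and angles taken from the text; the function name is hypothetical):

```python
def interpolate_third_region(phi, phi_b=20.0, phi_a=70.0,
                             theta_b=-10.0, theta_a=100.0):
    """Target rotation angle theta_t at a position phi in the third region.

    phi internally divides the path from B-point (phi_b) to A-point
    (phi_a) in an m:n ratio, so theta_t is the value that divides the
    interval [theta_b, theta_a] in the same ratio.
    """
    m = phi - phi_b   # distance from B-point
    n = phi_a - phi   # distance to A-point
    return (n * theta_b + m * theta_a) / (m + n)

# Example from the text: phi=45 internally divides B-A in a 1:1 ratio,
# giving (-10 + 100) / 2 = 45 degrees.
assert interpolate_third_region(45.0) == 45.0
```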
In other words, the third region is a region where the endoscope image does not include a specific tissue that serves as an index of the rotation angle of the endoscope image E, unlike the aorta F in the first region and the pelvis G in the second region. In such a region, it is difficult for the learned model 1b to recognize a specific tissue and for a user to determine a desired rotation angle. The third position information and the third rotation angle information are therefore calculated on the basis of the first and second position information and the first and second rotation angle information of the first region and the second region.
Subsequently, in step SB9, the processor 11 stores the first position information, the first rotation angle information, the second position information, the second rotation angle information, the third position information, and the third rotation angle information, which are determined in steps SB3 to SB8, in the storage unit 13. Thus, as indicated in the drawings, data is generated in the storage unit 13, the data including the position φ of the endoscope 2, which indicates a position of the imaging region, and the target rotation angle θt of the endoscope image E at each position φ.
After the completion of panning, the surgeon switches from the manual mode to the autonomous mode and performs treatment on the aorta F and the pelvis G with the surgical instrument 6. As indicated in the drawings, in the autonomous mode, the processor 11 controls the moving device 3 such that the endoscope 2 follows the tip of the surgical instrument 6, and performs the following image control.
While the devices 1 and 3 are in operation, the processor 11 sequentially receives the rotation angles of the joints 3b and 3c from the moving device 3 and calculates the current position φ of the endoscope 2 from the rotation angles of the joints 3b and 3c (SC1).
The processor 11 determines which one of the first region, the second region, and the third region includes the current imaging region on the basis of the current position of the endoscope 2, the first position information, and the second position information (SC4, SC6, SC8).
Specifically, if the current position φ is included in the first position information (φ=0° to 20°), the processor 11 determines that the current imaging region is included in the first region (YES at SC4). The processor 11 then rotates the endoscope image E in the plane of the endoscope image E on the basis of the first rotation angle information stored in the storage unit 13 (SC5). Specifically, the processor 11 reads the target rotation angle θt of the current position φ from the storage unit 13 and rotates the endoscope image E by the target rotation angle θt through image processing. The processor 11 then outputs the rotated endoscope image E from the controller 1 to the display device 5 and displays the image on the display screen 5a (SC10).
In the rotated endoscope image E, the aorta F is horizontally placed. Thus, while the endoscope 2 moves in the range of φ=0° to 20° and captures the endoscope image E including the aorta F, the aorta F in the endoscope image E displayed on the display screen 5a is kept in a horizontal position. For example, if the endoscope 2 pivots 20° from O-point to B-point about the pivot axis P1, the endoscope image E rotates from 0° to −10°.
If the current position φ is included in the second position information (φ=70° to 90°), the processor 11 determines that the current imaging region is included in the second region (NO at SC4 and YES at SC6). The processor 11 then rotates the endoscope image E in the plane of the endoscope image E on the basis of the second rotation angle information stored in the storage unit 13 (SC7). Specifically, the processor 11 reads the target rotation angle θt of the current position φ from the storage unit 13 and rotates the endoscope image E by the target rotation angle θt through image processing. The processor 11 then outputs the rotated endoscope image E from the controller 1 to the display device 5 and displays the image on the display screen 5a (SC10).
In the rotated endoscope image E, the pelvis G is placed in an upper part. Thus, while the endoscope 2 moves in the range of φ=70° to 90° and captures the endoscope image E including the pelvis G, the pelvis G in the endoscope image E displayed on the display screen 5a is kept in the upper part. For example, if the endoscope 2 pivots 20° from A-point to D-point about the pivot axis P1, the endoscope image E rotates from 100° to 90°.
If the current position φ is not included in the first position information or the second position information (NO at SC4 and NO at SC6), the processor 11 determines that the current imaging region is included in the third region (SC8). The processor 11 then rotates the endoscope image E in the plane of the endoscope image E on the basis of the third rotation angle information stored in the storage unit 13 (SC9). Specifically, the processor 11 reads the target rotation angle θt of the current position φ from the storage unit 13 and rotates the endoscope image E by the target rotation angle θt through image processing. The processor 11 then outputs the rotated endoscope image E from the controller 1 to the display device 5 and displays the image on the display screen 5a (SC10).
The endoscope image E displayed on the display screen 5a is rotated by the target rotation angle θt corresponding to the position φ. The target rotation angle θt gradually changes from the target rotation angle of the first region to the target rotation angle of the second region as the position φ changes from the first region to the second region. Thus, for example, if the endoscope 2 pivots from B-point to A-point about the pivot axis P1, the endoscope image E displayed on the display screen 5a rotates from −10° to 100° in one direction.
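Steps SC4 to SC10 can be summarized as a small dispatch: classify the current position φ into one of the three regions and apply the stored target rotation angle θt. The sketch below assumes the data generated in step SB9 is held as tables mapping discretized positions to angles, and `rotate_image` stands in for the image-processing rotation; both are hypothetical:

```python
def control_step(phi, first_table, second_table, third_table, image, rotate_image):
    """One hypothetical pass of steps SC4 to SC10.

    Each table maps a position phi (degrees, discretized to the table's
    keys) to a target rotation angle theta_t (degrees), as stored in
    the storage unit 13 during the manual mode.
    """
    if phi in first_table:           # SC4: current imaging region is in the first region
        theta_t = first_table[phi]   # SC5: use the first rotation angle information
    elif phi in second_table:        # SC6: in the second region
        theta_t = second_table[phi]  # SC7: use the second rotation angle information
    else:                            # SC8: otherwise, in the third region
        theta_t = third_table[phi]   # SC9: use the interpolated third rotation angle information
    return rotate_image(image, theta_t)  # SC10: output the rotated endoscope image E
```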
As described above, according to the present embodiment, the storage unit 13 stores the first position information on the first region including a specific tissue F and the first rotation angle information for defining the target rotation angle θt of the endoscope image E, the target rotation angle θt being defined for placing the specific tissue F at a desired rotation angle by the surgeon. Furthermore, the storage unit 13 stores the second position information on the second region including a specific tissue G and the second rotation angle information for defining the target rotation angle θt of the endoscope image E, the target rotation angle θt being defined for placing the specific tissue G at a desired rotation angle by the surgeon. Moreover, as the third rotation angle information of the third region between the first region and the second region, the target rotation angle θt that gradually changes between the target rotation angle θt of the first rotation angle information and the target rotation angle θt of the second rotation angle information is interpolated and is stored in the storage unit 13.
Thereafter, in the autonomous mode, the endoscope image E is rotated by the target rotation angle θt corresponding to the position φ of the current imaging region, thereby automatically adjusting the vertical direction of the endoscope image E. Specifically, when the current imaging region is the first or second region including the specific tissues F and G, the endoscope image E is automatically rotated by the target rotation angle θt that places the specific tissues F and G at a predetermined rotation angle. When the current imaging region is the third region that does not include the specific tissues F and G, the endoscope image E is automatically rotated by a proper target rotation angle θt that is estimated from the first and second rotation angle information.
As described above, the operator can be provided with the endoscope image E in a proper vertical direction according to the position of the current imaging region in an abdominal cavity.
Moreover, an automatic adjustment to the vertical direction of the endoscope image E can relieve the stress of the surgeon and shorten the treatment time. Specifically, if the surgeon adjusts the vertical direction of the endoscope image E, the surgeon needs to take a hand off from the surgical instrument 6 during an operation and then manually rotate the endoscope 2. According to the present embodiment, the surgeon does not need to operate the endoscope 2 to adjust the vertical direction, so that the surgeon can continue treatment without being interrupted.
An endoscope system, a controller, a control method, and a recording medium according to a second embodiment of the present invention will be described below with reference to the accompanying drawings.
The present embodiment is different from the first embodiment in that a processor 11 rotates an endoscope image E by a rotation of an endoscope 2 instead of image processing. In the present embodiment, configurations different from those of the first embodiment will be described. Configurations in common with the first embodiment are indicated by the same reference numerals and an explanation thereof is omitted.
An endoscope system 10 according to the present embodiment includes a controller 1, the endoscope 2, a moving device 3, an endoscope processor 4, and a display device 5 as in the first embodiment.
As indicated in the drawings, the control method according to the present embodiment includes steps SB4′, SB6′, and SB8′ instead of steps SB4, SB6, and SB8 in the manual mode.
As indicated in the drawings, the control method according to the present embodiment includes steps SC5′, SC7′, and SC9′ instead of steps SC5, SC7, and SC9 in the autonomous mode.
As in the first embodiment, a user performs steps SA1 to SA5. In response to a first instruction received by a user interface 16, the processor 11 determines the first position information and the first rotation angle information on the first region on the basis of the endoscope image E (SB3, SB4′).
Specifically, in step SB4′ subsequent to step SB3, the processor 11 sets the endoscope image E and the rotation angle ω of the endoscope 2 at the time of the reception of the first instruction by the user interface 16, as a first reference endoscope image and a first reference rotation angle.
Subsequently, the processor 11 calculates the first reference rotation angle corresponding to a predetermined initial rotation angle ω=0°, as a target rotation angle ωt of the endoscope 2 at the position φ at the time of the reception of the first instruction.
The processor 11 then calculates a required rotation amount Δθ of the endoscope image E, which is obtained at another position φ included in the first position information, when the aorta F in the endoscope image E is to be aligned with the aorta F in the first reference endoscope image. Subsequently, the processor 11 calculates a target rotation angle ωt of the endoscope 2 at another position φ by adding the rotation amount Δθ to the first reference rotation angle.
As described above, the processor 11 calculates the target rotation angle ωt of the endoscope 2 when the aorta F is to be horizontally placed at each position φ=0°, . . . , 20° included in the first position information, and the processor 11 determines the target rotation angle ωt at each position φ=0°, . . . , 20° as the first rotation angle information.
The user then performs steps SA6 and SA7. In response to a second instruction received by the user interface 16, the processor 11 determines the second position information and the second rotation angle information on the second region on the basis of the endoscope image E (SB5, SB6′).
Specifically, in step SB6′ subsequent to step SB5, the processor 11 sets the endoscope image E and the rotation angle ω of the endoscope 2 at the time of the reception of the second instruction by the user interface 16, as a second reference endoscope image and a second reference rotation angle.
Subsequently, the processor 11 calculates the second reference rotation angle corresponding to the initial rotation angle ω=0°, as a target rotation angle ωt of the endoscope 2 at the position φ at the time of the reception of the second instruction.
The processor 11 then calculates a required rotation amount Δθ of the endoscope image E, which is obtained at another position φ included in the second position information, when the pelvis G in the endoscope image E is to be aligned with the pelvis G in the second reference endoscope image. Subsequently, the processor 11 calculates a target rotation angle ωt of the endoscope 2 at another position φ by adding the rotation amount Δθ to the second reference rotation angle.
As described above, the processor 11 calculates the target rotation angle ωt of the endoscope 2 when the pelvis G is to be placed in an upper part at each position φ=70°, . . . , 90° included in the second position information, and the processor 11 determines the target rotation angle ωt at each position φ=70°, . . . , 90° as the second rotation angle information.
The processor 11 then calculates the third position information and the third rotation angle information on the third region on the basis of the first position information, the first rotation angle information, the second position information, and the second rotation angle information (SB7, SB8′). Specifically, in step SB8′ subsequent to step SB7, the processor 11 determines the target rotation angle ωt at each position φ=20°, . . . , 70° of the third position information as the third rotation angle information, as in step SB8.
Subsequently, in step SB9, the processor 11 stores the position information and the rotation angle information, which are determined in steps SB3, SB4′, SB5, SB6′, SB7, and SB8′, in the storage unit 13. Thus, data is generated in the storage unit 13, the data including the position φ of the endoscope 2, which indicates a position of the imaging region, and the target rotation angle ωt of the endoscope 2 at each position φ.
As indicated in the drawings, in the autonomous mode, the processor 11 calculates the current position φ of the endoscope 2 and determines which one of the first region, the second region, and the third region includes the current imaging region, as in the first embodiment (SC1, SC4, SC6, SC8).
If the processor 11 determines that the current imaging region is included in the first region (YES at SC4), the processor 11 rotates the endoscope 2 on the basis of the first rotation angle information stored in the storage unit 13 (SC5′). Specifically, the processor 11 reads the target rotation angle ωt of the current position φ from the storage unit 13 and rotates the endoscope image E by rotating the endoscope 2 to the target rotation angle ωt.
If the processor 11 determines that the current imaging region is included in the second region (NO at SC4 and YES at SC6), the processor 11 rotates the endoscope 2 on the basis of the second rotation angle information stored in the storage unit 13 (SC7′). Specifically, the processor 11 reads the target rotation angle ωt of the current position φ from the storage unit 13 and rotates the endoscope image E by rotating the endoscope 2 to the target rotation angle ωt.
If the processor 11 determines that the current imaging region is included in the third region (SC8), the processor 11 rotates the endoscope 2 on the basis of the third rotation angle information stored in the storage unit 13 (SC9′). Specifically, the processor 11 reads the target rotation angle ωt of the current position φ from the storage unit 13 and rotates the endoscope image E by rotating the endoscope 2 to the target rotation angle ωt.
Subsequent to step SC5′, SC7′, or SC9′, the processor 11 outputs the rotated endoscope image E from the controller 1 to the display device 5 and displays the image on a display screen 5a (SC10).
As described above, in the autonomous mode according to the present embodiment, the endoscope 2 is rotated to the target rotation angle ωt corresponding to the position φ of the current imaging region, thereby automatically adjusting the vertical direction of the endoscope image E as in the first embodiment. Specifically, when the current imaging region is the first or second region including the specific tissues F and G, the endoscope 2 is automatically rotated to the target rotation angle ωt that places the specific tissues F and G at a predetermined rotation angle. When the current imaging region is the third region that does not include the specific tissues F and G, the endoscope 2 is automatically rotated to a proper target rotation angle ωt that is estimated from the first and second rotation angle information.
As described above, an operator can be provided with the endoscope image E in a proper vertical direction according to the position of the current imaging region in an abdominal cavity. Moreover, an automatic adjustment to the vertical direction of the endoscope image E can relieve the stress of the surgeon and shorten the treatment time.
According to the present embodiment, rotating the endoscope image E by rotating the endoscope 2 about the optical axis C eliminates the need for image processing for rotating the endoscope image E, thereby reducing the load on the processor 11. Moreover, the user can intuitively recognize the vertical direction of the endoscope image E by confirming the rotation angle ω of the portion of the endoscope 2 outside the body.
In the present embodiment, the endoscope image E is rotated by rotating the overall endoscope 2 about the optical axis C. Alternatively, the image sensor 2a may be rotated about the optical axis C while the rotation angle ω of the endoscope 2 about the optical axis C is kept unchanged. In this case, the endoscope 2 includes a rotating mechanism for rotating the image sensor 2a.
A rotation of the image sensor 2a relative to the body of the endoscope 2 can rotate the endoscope image E in the same manner as a rotation of the overall endoscope 2.
An endoscope system, a controller, a control method, and a recording medium according to a third embodiment of the present invention will be described below with reference to the accompanying drawings.
The present embodiment is different from the first and second embodiments in that an endoscope image E is rotated by a combination of a rotation of an endoscope 2 about an optical axis C and image processing. In the present embodiment, configurations different from those of the first and second embodiments will be described. Configurations in common with the first and second embodiments are indicated by the same reference numerals and an explanation thereof is omitted.
An endoscope system 10 according to the present embodiment includes a controller 1, the endoscope 2, a moving device 3, an endoscope processor 4, and a display device 5 as in the first embodiment.
After step SB9, as indicated in the drawings, the processor 11 performs the control of the autonomous mode as in the second embodiment, with the following additional processing.
In steps SC5′, SC7′, and SC9′, the processor 11 determines whether the rotation angle ω of the endoscope 2 has reached a critical angle of the rotatable range of the endoscope 2 on the basis of the rotation angle detected by the angle sensor 3d at the rotary joint 3c (SC11). The rotatable range of the endoscope 2 may be limited by physical constraints or the like. For example, cables in the endoscope 2 and the moving device 3 are twisted by a rotation of the endoscope 2, and thus the rotatable range is set so as not to cause an excessive twist.
If the endoscope 2 rotates to the target rotation angle ωt before the rotation angle ω reaches the critical angle (NO at SC11), the processor 11 outputs the rotated endoscope image E to the display device 5 (SC10).
If the rotation angle ω reaches the critical angle before the target rotation angle ωt is reached (YES at SC11), the processor 11 stops the rotation of the endoscope 2 at the critical angle, rotates the endoscope image E through image processing by the remaining rotation angle required to reach the target rotation angle ωt (SC12), and outputs the rotated endoscope image E to the display device 5 (SC10).
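The division performed in steps SC11 and SC12 can be sketched as follows: the endoscope is physically rotated only as far as the critical angle allows, and the remainder is applied through image processing. The limit values and names here are hypothetical placeholders:

```python
def split_rotation(target_angle, critical_min=-170.0, critical_max=170.0):
    """Hypothetical division of a target rotation (steps SC11 and SC12).

    target_angle: target rotation angle omega_t (degrees).
    critical_min/critical_max: critical angles of the rotatable range,
        set, for example, so that internal cables are not excessively twisted.
    Returns (physical_angle, image_angle) such that
    physical_angle + image_angle == target_angle.
    """
    physical = max(critical_min, min(critical_max, target_angle))  # stop at the critical angle (SC11)
    image = target_angle - physical  # remaining rotation applied by image processing (SC12)
    return physical, image
```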
As described above, according to the present embodiment, the endoscope image E can be rotated by a combination of a rotation of the endoscope 2 about the optical axis C and image processing even when the endoscope image E cannot be sufficiently rotated by a rotation of the endoscope 2 alone.
Other effects of the present embodiment are identical to those of the first and second embodiments and thus an explanation thereof is omitted.
A first modification of the endoscope system 10, the controller 1, the control method, and the recording medium according to the first to third embodiments will be described below.
As illustrated in the drawings, the endoscope 2 according to the present modification is an oblique endoscope whose visual axis C tilts with respect to the longitudinal axis I.
The oblique endoscope 2 includes a long insertion portion 2b that is inserted along the longitudinal axis I into a subject, and an imaging portion 2c that includes the image sensor 2a and is connected to the proximal end of the insertion portion 2b. The insertion portion 2b and the imaging portion 2c are integrally rotated about the longitudinal axis I by a rotation of the rotary joint 3c. In a separable oblique endoscope, the camera head (imaging portion 2c) and the optical viewing tube (insertion portion 2b) have different pieces of rotation angle information; in the present modification, the camera head and the optical viewing tube are integrally rotated so that processing is performed using common rotation angle information.
In the case of the direct-view endoscope 2, the visual axis (optical axis) C is coaxial with the longitudinal axis I, so that the position of the visual axis C is maintained even if the endoscope 2 rotates about the longitudinal axis I. In the case of the oblique endoscope 2, the visual axis C tilts with respect to the longitudinal axis I and thus makes a rotational movement about the longitudinal axis I in response to a rotation of the endoscope 2 about the longitudinal axis I, thereby moving the imaging region.
In step SB2′, the processor 11 sets the current position φ of the endoscope 2 to the initial position φ=0° and sets the current orientation ω of the endoscope 2 to the initial orientation ω=0°. The orientation ω of the endoscope 2 is a rotation angle about the longitudinal axis I and corresponds to the orientation of the visual axis C with respect to the longitudinal axis I.
In response to the first instruction received by the user interface 16 (SA5), the processor 11 determines the first position information and the first rotation angle information (SB3, SB4) and holds information on a first orientation of the endoscope 2 when the first instruction is received.
Subsequently, in response to the second instruction received by the user interface 16 (SA7), the processor 11 determines the second position information and the second rotation angle information (SB5, SB6) and holds information on a second orientation of the endoscope 2 when the second instruction is received.
In step SB9, the processor 11 stores the first orientation and the second orientation in the storage unit 13 in addition to the position information and the rotation angle information. Thus, data is generated in the storage unit 13, the data including a rotation angle φ of the endoscope 2, the target rotation angle θt of the endoscope image E at each rotation angle φ, and the first orientation and the second orientation of the endoscope 2, the rotation angle φ indicating the position of the imaging region, the first and second orientations corresponding to each imaging region.
Subsequently, in the autonomous mode, the processor 11 controls the position and orientation of the endoscope 2 by controlling the moving device 3 and causes the endoscope 2 to follow the tip of the surgical instrument 6 (SC3′). At this point, the processor 11 controls the position and orientation of the endoscope 2 on the basis of the first and second position information and the first and second orientations stored in the storage unit 13, so that an orientation ω of the endoscope 2 is controlled to the first orientation when the imaging region is included in the first region, whereas the orientation ω of the endoscope 2 is controlled to the second orientation when the imaging region is included in the second region.
As in the first embodiment, the processor 11 rotates the endoscope image E by the target rotation angle θt according to the current imaging region through image processing (SC4 to SC9).
As described above, in the case of the oblique endoscope 2, the imaging region is moved by a rotation of the endoscope 2 about the longitudinal axis I. Thus, the vertical direction of the endoscope image E is hard to control only by the control method of the second embodiment, in which the endoscope image E is rotated by a rotation of the endoscope 2.
According to the present modification, in the manual mode, the first orientation of the endoscope 2 at the time of imaging of the first region and the second orientation of the endoscope 2 at the time of imaging of the second region are stored. At the time of imaging of the first region in the autonomous mode, the orientation of the endoscope 2 is controlled to the first orientation and the vertical direction of the endoscope image E is adjusted by a rotation through image processing. At the time of imaging of the second region in the autonomous mode, the orientation of the endoscope 2 is controlled to the second orientation and the vertical direction of the endoscope image E is adjusted by a rotation through image processing. This can properly control the vertical direction of the endoscope image E captured by the oblique endoscope 2.
A second modification of the endoscope system 10, the controller 1, the control method, and the recording medium according to the first to third embodiments will be described below.
As illustrated in the drawings, the endoscope 2 according to the present modification includes a curved portion 2d.
The endoscope 2 includes the long insertion portion 2b that is inserted into a subject and the curved portion 2d that is provided at the tip portion of the insertion portion 2b and can be curved in a direction that crosses the longitudinal axis I of the insertion portion 2b. When the curved portion 2d is bent, the visual axis C tilts with respect to the longitudinal axis I and thus makes a rotational movement about the longitudinal axis I in response to a rotation of the endoscope 2 about the longitudinal axis I, thereby moving the imaging region. Moreover, the tilt direction and the tilt angle of the visual axis C with respect to the longitudinal axis I change according to the curving direction and the curving angle of the curved portion 2d.
The control method performed by the processor 11 in the present modification includes steps SB2′ and SB3 to SB9 and steps SC3′ and SC4 to SC10 as in the first modification. As the orientation of the endoscope 2, the curving direction and the curving angle of the curved portion 2d are used instead of the rotation angle ω about the longitudinal axis I.
Specifically, in step SB2′, the processor 11 sets the current curving direction and curving angle of the curved portion 2d as an initial orientation. Subsequently, in step SB9, the curving direction and the curving angle of the curved portion 2d at the time of the reception of the first instruction are stored as a first orientation in the storage unit 13 by the processor 11, and the curving direction and the curving angle of the curved portion 2d at the time of the reception of the second instruction are stored as a second orientation in the storage unit 13 by the processor 11.
In step SC3′ of the autonomous mode, the processor 11 controls the position and orientation of the endoscope 2 on the basis of the first and second position information and the first and second orientations stored in the storage unit 13, so that the curving direction and the curving angle of the curved portion 2d are controlled to the first orientation when the imaging region is included in the first region, whereas the curving direction and the curving angle of the curved portion 2d are controlled to the second orientation when the imaging region is included in the second region.
As described above, in the case of the endoscope 2 including the curved portion 2d, the imaging region makes a rotational movement by a rotation of the endoscope 2 according to the curving direction and the curving angle of the curved portion 2d. Thus, the vertical direction of the endoscope image E is hard to control by the control method of the second embodiment, in which the endoscope image E is rotated by a rotation of the endoscope 2.
According to the present modification, at the time of imaging of the first region in the autonomous mode, the orientation of the endoscope 2 is controlled to the first orientation stored in the manual mode and the vertical direction of the endoscope image E is adjusted by a rotation through image processing as in the first modification. At the time of imaging of the second region in the autonomous mode, the orientation of the endoscope 2 is controlled to the second orientation stored in the manual mode and the vertical direction of the endoscope image E is adjusted by a rotation through image processing. This can properly control the vertical direction of the endoscope image E captured by the endoscope 2 including the curved portion 2d.
In the embodiments and the modifications, the processor 11 calculates the third rotation angle information in the manual mode and stores the information in the storage unit 13. Alternatively, as indicated in the drawings, the third rotation angle information may be calculated in real time in the autonomous mode.
Specifically, in the autonomous mode of the embodiments and the modifications, if it is determined that the current imaging region is included in the third region (not included in the first region or the second region), the processor 11 may calculate the target rotation angle θt or ωt at the current position φ of the endoscope 2 in real time on the basis of the current position φ, the first position information, the first rotation angle information, the second position information, and the second rotation angle information (SC13). If the current imaging region is included in one of the first region and the second region (not included in the third region), the processor 11 may match the target rotation angle θt or ωt with the first rotation angle information or the second rotation angle information without calculating the target rotation angle θt or ωt in real time. This can reduce the amount of position information and rotation angle information to be stored in the storage unit 13 in the manual mode and limits the calculation of the third position information and the third rotation angle information to what is required for an operation in the autonomous mode, thereby reducing the load on the system.
If the current imaging region is included in the first region or the second region, the processor 11 may update the stored first or second position information and the stored first or second rotation angle information to the current position information and rotation angle information. After the update, when the endoscope 2 is moved and it is determined that the current imaging region is included in the first region or the second region, the updated first or second position information and the updated first or second rotation angle information can be used. The user may provide an instruction for the update from the user interface 16. Thus, even if the body of a patient is deformed by, for example, an adjustment to pneumoperitoneum or a change in body posture, the position information and the rotation angle information can be updated to correct information according to the current circumstances.
In the embodiments and the modifications, the processor 11 recognizes a specific tissue in the endoscope image E and determines the position information and the rotation angle information on the basis of the recognized specific tissue. Alternatively, the position information and the rotation angle information may be determined on the basis of the position φ and the rotation angle ω of the endoscope 2 at the time of the reception of the instruction.
Specifically, in the manual mode, the surgeon places the endoscope 2 at a desired position at a desired rotation angle ω and inputs the first instruction. The processor 11 determines, as the first position information, a range around the position φ of the endoscope 2 at the time of the reception of the first instruction by the user interface 16 and determines, as the first rotation angle information, the rotation angle ω of the endoscope 2 at the time of the reception of the first instruction by the user interface 16.
Similarly, the surgeon places the endoscope 2 at another desired position at a desired rotation angle ω and inputs the second instruction. The processor 11 determines, as the second position information, a range around the position φ of the endoscope 2 at the time of the reception of the second instruction by the user interface 16 and determines, as the second rotation angle information, the rotation angle ω of the endoscope 2 at the time of the reception of the second instruction by the user interface 16.
With this configuration, the surgeon can register any regions in a subject as the first region and the second region, thereby determining position information and rotation angle information that are better adapted to the surgeon's preferences. Also, when the first and second regions do not include a specific tissue, any position information and rotation angle information can be determined and stored for the first and second regions without performing the processing of the learned model 1b.
The determination of the position information and the rotation angle information on the basis of a specific tissue in the endoscope image E may be used in combination with the determination of the position information and the rotation angle information on the basis of the position φ and the rotation angle ω of the endoscope 2 at the time of the reception of an instruction.
For example, after determining the first and second position information and the first and second rotation angle information on the basis of the specific tissues F and G in the endoscope image E as described in the first to third embodiments and the modifications thereof, the processor 11 may further determine position information and rotation angle information on any region different from the first and second regions on the basis of an instruction of the surgeon.
In the embodiments and the modifications, the specific tissues are the aorta F and the pelvis G. However, the specific tissues may be any organs or tissues having anatomical characteristics; for example, a uterus may be used.
In the embodiments and the modifications, the position information and the rotation angle information on the two regions are stored. Position information and rotation angle information on three or more regions may be stored instead. This can improve accuracy when position information and rotation angle information are calculated on the basis of stored information.
In the embodiments and the modifications, the position φ of the endoscope 2 is expressed by a two-dimensional polar coordinate system with the pivot point H serving as an origin, the position φ indicating the position of the imaging region. The position φ may be expressed by a three-dimensional polar coordinate system. Specifically, the endoscope 2 may be supported so as to pivot about a second pivot axis P2 that passes through the pivot point H and is orthogonal to the first pivot axis P1, and the position of the imaging region may be expressed as (φ1, φ2), where φ1 is a rotation angle about the first pivot axis P1 and φ2 is a rotation angle about the second pivot axis P2. In this case, the first position information, the second position information, and the third position information are three-dimensional information including rotation angles φ1 and φ2.
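For the three-dimensional case, the position information simply carries two pivot angles. A minimal sketch of the representation follows, with an illustrative angular distance that a distance-weighted version of the interpolation above could use; treating (φ1, φ2) as a plain angle pair is an assumption made for this sketch.

```python
import math
from dataclasses import dataclass


@dataclass
class PivotPosition:
    phi1: float  # rotation angle about the first pivot axis P1 (rad)
    phi2: float  # rotation angle about the second pivot axis P2 (rad)


def angular_distance(a: PivotPosition, b: PivotPosition) -> float:
    """Illustrative distance between two imaging-region positions.

    Using the Euclidean norm of the angle differences is an assumption;
    it lets the one-dimensional interpolation rule be generalized by
    weighting stored rotation angles with this distance.
    """
    return math.hypot(a.phi1 - b.phi1, a.phi2 - b.phi2)
```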
In the embodiments and the modifications, the position of the imaging region may be expressed by other kinds of coordinate systems instead of a polar coordinate system. For example, the position of the imaging region may be expressed by a Cartesian coordinate system with the pivot point H serving as an origin.
In the embodiments and the modifications, the coordinate system of the position φ of the imaging region is a global coordinate system fixed relative to a subject. A relative coordinate system for the tip of the endoscope 2 may be used instead.
In the embodiments and the modifications, the first and second position information are determined in the manual mode and are stored in the storage unit 13. Alternatively, the first and second position information may be stored in advance in the storage unit 13 before a surgical operation.
Before a surgical operation, an examination image of a range including the affected part, for example a CT image of the abdominal region, may be captured. Three-dimensional reconstruction from multiple CT images generates a three-dimensional image of the abdominal cavity. The first and second position information may be determined and stored in the storage unit 13 on the basis of such a three-dimensional image before the surgical operation. In this case, steps SB4 and SB6 are omitted in the manual mode.
This configuration can reduce the computational complexity of the processor 11 in the manual mode.
In the embodiments and the modifications, the processor 11 in the manual mode may store a first endoscope image and a second endoscope image in the storage unit 13. The first endoscope image is the endoscope image E of the first region, and the second endoscope image is the endoscope image E of the second region. For example, in step SB3, the processor 11 stores at least one endoscope image E, in which the aorta F is recognized, as the first endoscope image in the storage unit 13. In step SB6, the processor 11 stores at least one endoscope image E, in which the pelvis G is recognized, as the second endoscope image in the storage unit 13.
In this case, the processor 11 in the autonomous mode may determine which one of the first region, the second region, and the third region includes the current imaging region on the basis of the first endoscope image and the second endoscope image. In other words, the processor 11 compares the current endoscope image E with the first endoscope image and the second endoscope image. If a first endoscope image identical or similar to the current endoscope image E exists, the processor 11 determines that the current imaging region is included in the first region; if a second endoscope image identical or similar to the current endoscope image E exists, the processor 11 determines that the current imaging region is included in the second region.
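"Identical or similar" could be operationalized with any image-similarity measure; the text does not prescribe one. A minimal sketch using normalized cross-correlation over same-sized grayscale frames, with an illustrative threshold:

```python
import numpy as np


def similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Normalized cross-correlation of two same-sized grayscale frames;
    one simple stand-in for 'identical or similar'."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())


def classify_region(current, first_images, second_images, threshold=0.8):
    """Decide which region the current imaging region belongs to by
    comparing the current endoscope image with the stored first and
    second endoscope images. `threshold` is illustrative only."""
    if any(similarity(current, f) >= threshold for f in first_images):
        return "first"
    if any(similarity(current, s) >= threshold for s in second_images):
        return "second"
    return "third"
```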
In the embodiments and the modifications, if a specific tissue is included in the endoscope image E, the processor 11 may read information on the rotation angle of the specific tissue from a database 1c stored in the storage unit 13 and then rotate the endoscope image E on the basis of the read information on the rotation angle. The rotation angle is an angle around the center point of the endoscope image E. This configuration can rotate the endoscope image E such that a specific tissue in the endoscope image E is placed at a predetermined rotation angle.
For example, the database 1c registers at least one type of specific tissue other than the aorta F and the pelvis G, together with a rotation angle for that type. The processor 11 recognizes a specific tissue in the endoscope image E, reads the rotation angle of the recognized tissue from the database 1c, and rotates the endoscope image E such that the tissue is placed at that rotation angle.
For example, a uterus J as a specific tissue is preferably placed in an upper part of the endoscope image E, and thus 90°, corresponding to the 12 o'clock position, is registered as the rotation angle of the uterus J. The processor 11 rotates the endoscope image E such that the recognized uterus J is placed at the position of 90°. Thus, if the endoscope image E includes the uterus J, the vertical direction of the endoscope image E is automatically adjusted such that the uterus J is placed at the position of 90°.
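A minimal sketch of the database lookup and image rotation, assuming the tissue has already been recognized and its current angle around the image center measured (for example, by the learned model 1b); the database contents and function names below are illustrative, not the patent's:

```python
import cv2

# Hypothetical contents of the database 1c: target display angle in
# degrees around the image center (90 deg = 12 o'clock position).
TISSUE_ANGLE_DB = {"uterus": 90.0}


def rotate_to_registered_angle(image, tissue_type, current_angle_deg):
    """Rotate the endoscope image so that a recognized tissue, currently
    at `current_angle_deg` around the image center, lands at the angle
    registered in the database. Recognition itself is outside this sketch.
    """
    target_deg = TISSUE_ANGLE_DB[tissue_type]
    h, w = image.shape[:2]
    # In OpenCV, a positive angle rotates the image counter-clockwise.
    m = cv2.getRotationMatrix2D((w / 2, h / 2),
                                target_deg - current_angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h))
```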
In the embodiments and the modifications, the rotation of the endoscope image E is controlled on the basis of the specific tissues F and G in the endoscope image E. Additionally, the rotation of the endoscope image E may be controlled on the basis of the surgical instrument 6 in the endoscope image E.
For example, the processor 11 can operate in a first rotation mode for controlling the rotation of the endoscope image E on the basis of the specific tissues F and G and a second rotation mode for controlling the rotation of the endoscope image E on the basis of the surgical instrument 6. A user, for example, a surgeon can switch between the first rotation mode and the second rotation mode by using the user interface 16.
In the second rotation mode, the processor 11 detects the angle of the surgical instrument 6 in the current endoscope image E and rotates the endoscope image E, by rotating the endoscope 2 or by image processing, such that the angle of the surgical instrument 6 is equal to a predetermined target angle. The processor 11 then outputs the rotated endoscope image E to the display device 5 and displays the image on the display screen 5a. The angle of the surgical instrument 6 is, for example, the angle of the longitudinal axis of the shaft of the surgical instrument 6 with respect to the horizontal direction of the endoscope image E.
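As one illustration of the second rotation mode, the shaft angle might be estimated from the dominant straight line in the frame and the image then rotated by the signed difference to the target angle. Hough-line detection is one plausible detector, not necessarily the patent's; all parameters below are illustrative.

```python
import math

import cv2
import numpy as np


def detect_shaft_angle(gray):
    """Estimate the instrument-shaft angle (deg, against the image
    horizontal) as the longest straight edge in the frame."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: math.hypot(l[2] - l[0], l[3] - l[1]))
    # Image y grows downward, so negate dy for a conventional angle.
    return math.degrees(math.atan2(y1 - y2, x2 - x1))


def rotation_correction(shaft_deg, target_deg):
    """Smallest signed rotation (deg) that brings the detected shaft
    angle to the predetermined target angle."""
    delta = (target_deg - shaft_deg) % 360.0
    return delta - 360.0 if delta > 180.0 else delta
```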
For a proper operation of the surgical instrument 6 by the surgeon who is observing the endoscope image E, it is important to properly set the angle of the surgical instrument 6 in the endoscope image E displayed on the display screen 5a. However, a movement of the surgical instrument 6 by the surgeon or a change of the orientation of the endoscope 2 following the surgical instrument 6 leads to a change of the angle of the surgical instrument 6 in the endoscope image E.
The surgeon can therefore switch from the first rotation mode to the second rotation mode as needed, so that the surgical instrument 6 in the endoscope image E is displayed at the target angle on the display screen 5a.
In the embodiments and the modifications, the surgeon manually operates the surgical instrument 6 held with his/her hand. Alternatively, as illustrated in
| Number | Date | Country |
|---|---|---|
| 63076408 | Sep 2020 | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2021/033210 | Sep 2021 | US |
| Child | 18105300 | | US |