ENDOSCOPE IMAGE PROCESSING DEVICE

Abstract
An image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, the endoscope including one or more sensors including an image capturing device, the image processing device including a processing unit operationally connectable to the one or more sensors of the endoscope. The processing unit is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone.
Description
TECHNICAL FIELD

The present disclosure relates to an image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, a display unit comprising an image processing device, an endoscope system comprising an endoscope and an image processing device, and a method of performing a medical procedure.


BACKGROUND

Endoscopes are widely used in hospitals for visually examining body cavities and obtaining samples of tissue identified as potentially pathological. An endoscope typically comprises an image capturing unit arranged at the distal end of the endoscope either looking forward or to the side. An endoscope is further typically provided with a working channel allowing a medical device such as a gripping device, a suction device, or a catheter to be introduced.


For a number of medical procedures, however, it may be difficult to ensure that the medical device reaches its intended destination.


Endoscopic retrograde cholangiopancreatography (ERCP) is an example of such a medical procedure. In ERCP the major duodenal papilla is catheterized using a catheter advanced from the tip of a duodenoscope. The duodenoscope is provided with a guiding element in the form of an elevator element for guiding the catheter in a particular direction. By controlling the elevator element and positioning the duodenoscope correctly, the catheter may be introduced into the major duodenal papilla. This may however be a challenging and time-consuming process.


Thus it remains a problem to provide an improved device/system for performing endoscopic procedures.


SUMMARY

According to a first aspect, the present disclosure relates to an image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, the endoscope comprising one or more sensors including an image capturing device, the image processing device comprising a processing unit operationally connectable to the one or more sensors of the endoscope, wherein the processing unit is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone.
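

Purely by way of illustration, the following Python sketch shows one way the three configured steps could be chained together; the helper function estimate_landing_zone(), the video source, and the drawing parameters are hypothetical stand-ins, not part of this disclosure.

```python
# Illustrative sketch only: estimate_landing_zone() is a hypothetical
# placeholder; a real estimator would process the first sensor data as
# described in this disclosure.
import cv2


def estimate_landing_zone(frame, sensor_data):
    """Dummy estimator returning an (x, y) pixel position."""
    h, w = frame.shape[:2]
    return w // 2, h // 2


def run(video_source=0):
    cap = cv2.VideoCapture(video_source)  # stands in for the endoscope feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sensor_data = None  # e.g. a guiding element sensor reading
        x, y = estimate_landing_zone(frame, sensor_data)
        # Provide the image with a visual indication of the landing zone.
        cv2.circle(frame, (x, y), 12, (0, 255, 255), 2)
        cv2.imshow("endoscope", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run()
```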


Consequently, the operator of the endoscope may be provided directly with a visual aid assisting in performing a successful procedure.


The image processing device may be built into a display unit or it may be a standalone device operationally connectable to both the one or more sensors of the endoscope and a display unit. The image capturing device may be the only sensor of the endoscope, i.e. the one or more sensors may consist of the image capturing device. The landing zone may be the position the medical device will reach when extended a predetermined length from the endoscope.


The processing unit of the image processing device may be any processing unit, such as a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller unit (MCU), a field-programmable gate array (FPGA), or any combination thereof. The processing unit may comprise one or more physical processors and/or may be constituted by a plurality of individual processing units.


In some embodiments, the processing unit is further configured to obtain an identification of the type of medical instrument, and the estimated landing zone is further dependent on the identification of the type of medical instrument.


The processing unit may obtain the identification via a user input and/or automatically, e.g. by processing sensor data from the one or more sensors.


The visual indication may be a visual element overlaid on the stream of images.


The visual element may comprise parts that are either fully or partly transparent. Thus, the visual content of the stream of images is covered by the visual element only to a small degree.


The visual element may be arranged at the estimated landing zone. Alternatively, the visual element may be arranged next to the estimated landing zone. Arranging the visual element next to the estimated landing zone may be beneficial as the landing zone typically is at a point of interest that optimally should be freely visible to the user.
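

As a minimal sketch of such an overlay, assuming an OpenCV-based renderer and purely illustrative offset, radius, and transparency values, a partly transparent marker can be alpha-blended next to the estimated landing zone:

```python
# Minimal overlay sketch (OpenCV); offset, radius, and alpha values
# are illustrative assumptions.
import cv2
import numpy as np


def draw_indication(frame, landing_zone, offset=(30, -30), alpha=0.4):
    """Draw a partly transparent marker next to the landing zone."""
    x, y = landing_zone
    marker = (x + offset[0], y + offset[1])  # next to, not on, the zone
    overlay = frame.copy()
    cv2.circle(overlay, marker, 10, (0, 255, 255), -1)
    cv2.arrowedLine(overlay, marker, (x, y), (0, 255, 255), 2)
    # Alpha-blend so the underlying image content stays visible.
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)


frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in endoscope image
shown = draw_indication(frame, (320, 240))
```

Blending the element rather than drawing it opaquely keeps the point of interest visible, in line with the consideration above.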


In some embodiments, the processing unit is further configured to estimate a part of the trajectory of the medical device by processing the first sensor data, and the visual indication further indicates the estimated trajectory.


By further estimating and indicating the trajectory of the medical device, procedures requiring a precise orientation between the medical device and the point of interest may be performed more easily and safely. ERCP is an example of such a medical procedure. In ERCP, the major duodenal papilla is catheterized using a catheter advanced from the tip of a duodenoscope, where the orientation between the catheter and the major duodenal papilla is crucial for the success of the procedure. Removal of polyps may also require a precise orientation between the polyp and a cutting tool.


In some embodiments, the processing unit is further configured to: obtain an identification of an anatomic landmark in the stream of images; determine the location of the anatomic landmark in the stream of images; determine if the estimated landing zone intersects with the determined location of the anatomic landmark.


Consequently, a user may be provided with further assistance.


In some embodiments, the processing unit is further configured to: obtain the identification of the anatomic landmark by processing one or more images of the stream of images.


Consequently, the system may be at least partly automated.


In some embodiments, the processing unit is configured to obtain the identification of the anatomic landmark by processing the stream of images using a machine learning data architecture trained to identify the anatomic landmark in endoscope images.


The machine learning data architecture may be a supervised machine learning architecture, trained by being provided with a training data set of images from a large number of patients, where a first subset of images of the training data set includes the anatomic landmark and a second subset of images does not include the anatomic landmark.
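

A minimal sketch of assembling such a training set, assuming images have been sorted into hypothetical with_landmark/ and without_landmark/ folders (the folder names and file format are assumptions), could be:

```python
# Sketch only: folder names and file format are assumptions.
from pathlib import Path


def build_dataset(root):
    """Return (path, label) pairs: 1 = landmark present, 0 = absent."""
    samples = []
    for path in sorted(Path(root, "with_landmark").glob("*.png")):
        samples.append((path, 1))
    for path in sorted(Path(root, "without_landmark").glob("*.png")):
        samples.append((path, 0))
    return samples
```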


In some embodiments, the machine learning data architecture is an artificial neural network such as a deep structured learning architecture.


Additionally/alternatively, in some embodiments, the processing unit is further configured to: obtain the identification of the anatomic landmark through a user input.


As an example, the processing unit may be operationally connected to a touch screen displaying the stream of images live to the user, where the user may manually identify the anatomic landmark, e.g. by clicking on it on the touch screen. The processing unit may then use motion tracking techniques in order to keep track of the anatomic landmark as the endoscope moves around.
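

As a sketch of such motion tracking, assuming an OpenCV build that ships the CSRT tracker (e.g. opencv-contrib-python) and using selectROI() as a stand-in for the touch input, one could write:

```python
# Sketch only: cv2.selectROI() stands in for a touch-screen tap, and
# TrackerCSRT availability depends on the OpenCV build.
import cv2

cap = cv2.VideoCapture(0)  # stands in for the endoscope feed
ok, frame = cap.read()

bbox = cv2.selectROI("select landmark", frame)  # user marks the landmark
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # follow landmark as scope moves
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()
```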


In some embodiments, the processing unit is further configured to: provide the stream of images with a visual indication, indicating that the estimated landing zone intersects with the determined location of the anatomic landmark.
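

One way to realize the intersection test and the corresponding indication, modelling the landing zone as a circle and the landmark as an axis-aligned bounding box (both modelling choices being assumptions, not mandated by the disclosure), is sketched below:

```python
# Sketch only: the circle/box representations are assumptions.
def landing_zone_intersects_landmark(zone_centre, zone_radius, bbox):
    """bbox is (x, y, w, h); returns True if circle and box overlap."""
    cx, cy = zone_centre
    x, y, w, h = bbox
    # Clamp the circle centre to the box to find the nearest box point.
    nx = min(max(cx, x), x + w)
    ny = min(max(cy, y), y + h)
    return (cx - nx) ** 2 + (cy - ny) ** 2 <= zone_radius ** 2


# Colour the indication green on overlap, yellow otherwise (BGR).
hit = landing_zone_intersects_landmark((320, 240), 12, (300, 220, 60, 60))
colour = (0, 255, 0) if hit else (0, 255, 255)
```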


Consequently, the user may be provided with further assistance when performing a medical procedure. This may make the medical procedure easier and safer.


In some embodiments, the first sensor data is one or more images of the stream of images.


Consequently, by utilizing one or more images from the stream of images for estimating the landing zone, the endoscope does not need to be provided with further sensors.


In some embodiments, the one or more images of the stream of images are processed using a machine learning data architecture trained to estimate the landing zone of the medical device.
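

A minimal sketch of such an architecture, here an assumed small convolutional network regressing a normalised (x, y) landing zone position (the layer sizes are illustrative, not an architecture specified by the disclosure), could be:

```python
# Sketch only: layer sizes are illustrative assumptions (PyTorch).
import torch
import torch.nn as nn


class LandingZoneNet(nn.Module):
    """Regress a normalised (x, y) landing zone from an RGB frame."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 2),
            nn.Sigmoid(),  # coordinates in [0, 1] x [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))


model = LandingZoneNet()
frame = torch.rand(1, 3, 224, 224)  # stand-in endoscope frame
xy = model(frame)  # predicted normalised landing zone position
```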


The machine learning data architecture may be a supervised machine learning architecture.


In some embodiments, the machine learning data architecture is an artificial neural network such as a deep structured learning architecture.


In some embodiments, the endoscope further comprises a guiding element configured to guide the medical device in a particular direction, and wherein the one or more sensors further comprises a guiding element sensor for detecting the position and/or orientation of the guiding element, and wherein the first sensor data is recorded by the guiding element sensor.


In some embodiments, the endoscope is a duodenoscope and the guiding element is an elevator element configured to elevate a catheter.


According to a second aspect the disclosure relates to a display unit for displaying images obtained by an image capturing device of an endoscope, wherein the display unit comprises an image processing device as disclosed in relation to the first aspect of the disclosure.


According to a third aspect the disclosure relates to an endoscope system comprising an endoscope and an image processing device as disclosed in relation to the first aspect of the disclosure, wherein the endoscope has an image capturing device and the processing unit of the image processing device is operationally connectable to the image capturing device of the endoscope.


The endoscope may comprise a working channel for allowing a medical device to extend from its tip. The endoscope may further comprise a guiding element for guiding the medical device in a particular direction. The guiding element may be movable from a first position to a second position. The guiding element may be provided with or without a guiding element sensor. The guiding element may be controllable from the handle of the endoscope. The guiding element may be configured to guide the medical device in a particular direction relative to the tip of the endoscope. Thus, dependent on the position of the guiding element, the landing zone of the medical device will be arranged at different parts of the endoscope image.


The endoscope may be a duodenoscope, and the guiding element may be an elevator element configured to elevate a catheter.


According to a fourth aspect the disclosure relates to a method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope, comprising the steps of: providing a training data set comprising a plurality of images captured by an image capturing device of an endoscope, and providing for each image the position of the resulting landing zone of the medical device.


In some embodiments, the endoscope comprises a guiding element for guiding the medical device in a particular direction, wherein the guiding element is movable from a first position to a second position, and wherein the guiding element is arranged in different positions in the plurality of images.


In some embodiments, a part of the medical device can be seen in at least some of the plurality of images.


In some embodiments, the plurality of images includes images of different medical devices.


According to a fifth aspect the disclosure relates to a method of performing a medical procedure using an endoscope system as disclosed in relation to the third aspect of the disclosure, comprising the steps of: advancing the endoscope through a body to a position near a point of interest, extending a medical device from the endoscope to a treatment position based on a stream of visual images provided with a visual indication, indicating an estimated landing zone of the medical device, and performing a medical procedure at the treatment position, such as removing a polyp, catheterizing the major duodenal papilla or cutting a larger opening in the major duodenal papilla.


The different aspects of the present disclosure can be implemented in different ways including image processing devices, display units, endoscope systems, methods of training a machine learning data architecture, and methods of performing a medical procedure described above and in the following, each yielding one or more of the benefits and advantages described in connection with at least one of the aspects described above, and each having one or more preferred embodiments corresponding to the preferred embodiments described in connection with at least one of the aspects described above and/or disclosed in the dependent claims. Furthermore, it will be appreciated that embodiments described in connection with one of the aspects described herein may equally be applied to the other aspects.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and/or additional objects, features and advantages of the present disclosure will be further elucidated by the following illustrative and non-limiting detailed description of embodiments of the present disclosure, with reference to the appended drawings, wherein:



FIG. 1 shows an example of an endoscope.



FIG. 2 shows an example of a display unit that can be connected to the endoscope shown in FIG. 1.



FIG. 3 shows a schematic drawing of an endoscope system according to an embodiment of the disclosure.



FIG. 4 shows a schematic drawing of an endoscope system according to an embodiment of the disclosure.



FIGS. 5a-d schematically show images captured by an image capturing device of an endoscope according to an embodiment of the disclosure.



FIGS. 6a-b show images captured by an image capturing device of an endoscope according to an embodiment of the disclosure.



FIG. 7 shows a flowchart of a method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope according to an embodiment of the disclosure.



FIG. 8 shows a flowchart of a method of performing a medical procedure using an endoscope system according to an embodiment of the disclosure.





DETAILED DESCRIPTION

In the following description, reference is made to the accompanying figures, which show by way of illustration how the embodiments of the present disclosure may be practiced.



FIG. 1 illustrates an example of an endoscope 100. This endoscope may be adapted for single use. The endoscope 100 is provided with a handle 102 attached to an insertion tube 104 provided with a bending section 106. The insertion tube 104 as well as the bending section 106 may be provided with one or several working channels such that instruments, such as a gripping device or a catheter, may extend from the tip and be inserted into a human body via the endoscope. One or several exit holes of the one or several channels may be provided in a tip part 108 of the endoscope 100. In addition to the exit holes, a camera sensor, such as a CMOS sensor or any other image capturing device, as well as one or several light sources, such as light emitting diodes (LEDs), optical fibers, or any other light emitting devices, may be placed in the tip part 108. By having the camera sensor, the light sources, and a monitor 200 (illustrated in FIG. 2) configured to display images based on image data captured by the camera sensor, an operator is able to see and analyze the inside of the human body in order to, for instance, localize a position for taking a sample. In addition, the operator will be able to control the instrument in a precise manner due to the provided visual feedback. Further, since some diseases or health issues may result in a shift in natural colors or other visual symptoms, the operator is provided with valuable input for making a diagnosis based on the image data provided via the image capturing device and the monitor.


The endoscope may further comprise a guiding element 121 for guiding the medical device in a particular direction (only schematically shown). The guiding element 121 may be movable from a first position to a second position. The guiding element may be controllable from the handle of the endoscope via an actuation element 120 (only schematically shown). The guiding element may be configured to guide the medical device in a particular direction relative to the tip of the endoscope. Thus, dependent on the position of the guiding element 121, the landing zone of the medical device will be arranged at different parts of the endoscope image.


The endoscope may further optionally comprise a guiding element sensor 122. The guiding element sensor 122 may detect the position of the guiding element, which may then be used to estimate the landing zone of a medical device extendable from the tip of the endoscope. However, in some embodiments the endoscope does not comprise a guiding element sensor 122, and the landing zone of the medical device is estimated using only image data recorded by the image capturing device.
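

A minimal sketch of using such a sensor reading, assuming a per-device calibration table that maps elevator angles to landing zone pixel positions (all numbers below are invented for illustration), could be:

```python
# Sketch only: calibration values are invented illustrative numbers.
import numpy as np

# Calibration table: elevator angle (degrees) -> landing zone pixels.
CAL_ANGLE = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
CAL_X = np.array([420.0, 390.0, 350.0, 300.0, 250.0])
CAL_Y = np.array([300.0, 270.0, 240.0, 210.0, 190.0])


def landing_zone_from_elevator(angle_deg):
    """Interpolate the landing zone between calibration points."""
    x = np.interp(angle_deg, CAL_ANGLE, CAL_X)
    y = np.interp(angle_deg, CAL_ANGLE, CAL_Y)
    return float(x), float(y)


print(landing_zone_from_elevator(22.5))  # e.g. a reading from sensor 122
```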


In order to make it possible for the operator to direct the camera sensor such that different fields of view can be achieved, the bending section 106 can be bent in different directions with respect to the insertion tube 104. The bending section 106 may be controlled by the operator by using a knob 110 placed on the handle 102. The handle 102 illustrated in FIG. 1 is designed such that the knob 110 is controlled by a thumb of the operator, but other designs are also possible. In order to control a gripping device or other device provided via a working channel, a push button 112 may be used. The handle 102 illustrated in FIG. 1 is designed such that an index finger of the operator is used for controlling the gripping device, but other designs are also possible.



FIG. 3 shows a schematic drawing of an endoscope system 301 according to an embodiment of the disclosure. The endoscope system 301 comprises an endoscope 302 and an image processing device 304. The endoscope 302 has a working channel allowing a medical instrument to extend from the tip of the endoscope 302. The image processing device 304 comprises a processing unit 307. The endoscope 302 comprises one or more sensors including an image capturing device 303. The processing unit 307 of the image processing device 304 is operationally connectable to the image capturing device 303 of the endoscope 302. The processing unit 307 is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone. In this embodiment, the image processing device 304 is integrated in a display unit 305. Consequently, the user may be assisted in real time when performing complex medical procedures such as catheterization of the major duodenal papilla. The landing zone may be the position the medical device will reach when extended a predetermined length from the endoscope.


The position of the landing zone may depend on the type of medical instrument and/or the orientation of the medical instrument. As an example, if the medical instrument, e.g. a catheter, is provided with a tip having a pre-bend, then the landing zone will depend on the orientation of the medical instrument within the working channel. The one or more sensors of the endoscope may include a sensor configured to detect the type of medical instrument in the working channel and/or the orientation of the medical instrument. However, the processing unit may also be configured to estimate the landing zone by analyzing one or more images of the stream of images, e.g. one or more images showing the type of medical instrument and/or the orientation of the medical instrument, e.g. after the medical instrument has propagated a short distance out of the working channel.
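

By way of a sketch, the dependence on instrument type and roll orientation could be modelled as an instrument-specific tip offset rotated by the measured roll angle; the instrument names and offsets below are invented for illustration:

```python
# Sketch only: offsets and instrument names are illustrative assumptions.
import math

# Tip offset (pixels) of each instrument type relative to a straight tool.
TIP_OFFSET = {"straight": 0.0, "pre_bent_catheter": 40.0}


def landing_zone(base_xy, instrument, roll_rad):
    """Rotate the instrument's tip offset by its roll in the channel."""
    r = TIP_OFFSET[instrument]
    bx, by = base_xy
    return bx + r * math.cos(roll_rad), by + r * math.sin(roll_rad)


print(landing_zone((320, 240), "pre_bent_catheter", math.pi / 2))
```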


The one or more images of the stream of images may be processed using a machine learning data architecture trained to estimate the landing zone of the medical device. The machine learning data architecture may be trained by being provided with a training data set comprising a large number of images showing medical instruments extended a short distance from the tip of the endoscope, together with data on the resulting landing zones.


Additionally/alternatively, the position of the landing zone may be dependent on the position of a guiding element configured to guide the medical device in a particular direction. As an example, if the endoscope is a duodenoscope, the guiding element may be an elevator element configured to elevate a catheter, enabling the catheter to catheterize the major duodenal papilla. The one or more sensors of the endoscope may include a guiding element sensor configured to detect the position and/or orientation of the guiding element.


However, the processing unit may also be configured to estimate the landing zone by analyzing one or more images of the stream of images, e.g. one or more images showing the medical instrument and/or the guiding element, e.g. after the medical instrument has propagated a short distance out of the working channel.


The one or more images of the stream of images may be processed using a machine learning data architecture trained to estimate the landing zone of the medical device. The machine learning data architecture may be trained by being provided with a training data set comprising a large number of images showing medical instruments extended a short distance from the tip of the endoscope, where the guiding element is arranged in different positions and/or orientations, together with data on the resulting landing zones. If the position/orientation of the guiding element is visible to the image capturing device, then the images of the training data set may not include a medical instrument.



FIGS. 5a-d schematically show images captured by an image capturing device of an endoscope according to an embodiment of the disclosure. FIG. 5a shows a traditional unaltered image captured by the image capturing device of the endoscope. In the image, a medical device 501, e.g. a cutting tool or a catheter, and an area of interest 502 can be seen. The area of interest may be a polyp that needs to be removed using a cutting tool or an opening that needs to be catheterized.



FIG. 5b shows an image captured by the image capturing device of the endoscope provided with a visual indication 503, indicating the estimated landing zone of the medical device 501, e.g. the position the medical device 501 will reach when extended a predetermined length from the endoscope. In this embodiment, the visual indication 503 marks the center of the landing zone. Thus, since the visual indication 503 is not positioned at the area of interest 502, the user will have to reposition the endoscope in order to ensure that the landing zone will be positioned at the area of interest 502. This may be done by moving the entire endoscope. Additionally/alternatively, if the endoscope comprises a guiding element configured to guide the medical device 501 in a particular direction, the user may move the guiding element so that the medical device is guided in a different direction.



FIG. 5c shows an image captured by the image capturing device of the endoscope provided with the visual indication 503, indicating the estimated landing zone of the medical device 501 after a guiding element of the endoscope has been moved. As a result, the medical device 501 is now arranged at a different angle in the image, and the visual indication 503 has now moved and is positioned at the area of interest 502. The user can now safely extend the medical device 501 to the area of interest 502.



FIG. 5d shows an image captured by the image capturing device of the endoscope provided with the visual indication 503 indicating the estimated landing zone of the medical device 501. In this image, the medical device 501 has been extended a distance and has reached the estimated landing zone. By providing the images with the visual indication 503 indicating the estimated landing zone, procedures may be conducted faster and more safely.



FIGS. 6a-b show images captured by an image capturing device of an endoscope according to an embodiment of the disclosure. In this embodiment the endoscope is a duodenoscope and the medical doctor is catheterizing the major duodenal papilla as part of an ERCP procedure.


In FIG. 6a a catheter 601 and the major duodenal papilla 602 are shown. The catheter 601 extends a distance from the duodenoscope. The image has further been provided with visual indications 604, 605, and 606 indicating an estimated trajectory and landing zone of the catheter 601. The image has further been provided with a visual indication 603 showing the location of the major duodenal papilla 602 in the image. In FIG. 6a the visual indications 604, 605, and 606 indicate that the estimated trajectory and landing zone of the catheter are not aligned with the major duodenal papilla 602. This may be signalled to the user by displaying the visual indication 603 with a first colour e.g. a yellow colour. If the user then moves the duodenoscope and/or an elevator element of the duodenoscope so that the estimated trajectory and landing zone of the catheter are aligned with the major duodenal papilla 602, then the visual indication 603 may be displayed with a second colour e.g. a green colour.



FIG. 6b shows an image after the major duodenal papilla 602 has been catheterized with the catheter 601.



FIG. 7 shows a flowchart 700 of a method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope according to an embodiment of the disclosure. In step 701 a training data set comprising a plurality of images captured by an image capturing device of an endoscope is provided. The images may show a part of the medical device. Next, in step 702, for each image, the position of the resulting landing zone of the medical device is provided.
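

Steps 701 and 702 amount to a supervised regression set-up; the following sketch wires them into a training loop using synthetic stand-in data and an assumed toy model (PyTorch), neither of which is prescribed by the disclosure:

```python
# Sketch only: data is synthetic and the model is a toy stand-in.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Step 701: a plurality of endoscope images (synthetic stand-ins here).
images = torch.rand(64, 3, 224, 224)
# Step 702: per-image landing zone positions, normalised to [0, 1].
targets = torch.rand(64, 2)
loader = DataLoader(TensorDataset(images, targets), batch_size=8, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=4, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2), nn.Sigmoid(),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2):  # a couple of passes for illustration
    for x, y in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
```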



FIG. 8 shows a flowchart 800 of a method of performing a medical procedure using an endoscope system according to an embodiment of the disclosure. In step 801 the endoscope is advanced through a body to a position near a point of interest. Then in step 802, a medical device is extended from the endoscope to a treatment position based on a stream of visual images provided with a visual indication, indicating an estimated landing zone of the medical device. Finally, in step 803 a medical procedure is performed at the treatment position, such as removing a polyp, catheterizing the major duodenal papilla or cutting a larger opening in the major duodenal papilla.


The following items correspond to the features recited in the original claims:


1. An image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, the endoscope comprising one or more sensors including an image capturing device, the image processing device comprising a processing unit operationally connectable to the one or more sensors of the endoscope, wherein the processing unit is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone.


2. An image processing device according to item 1, wherein the processing unit is further configured to estimate a part of the trajectory by processing the first sensor data, and wherein the visual indication further indicates the estimated trajectory.


3. An image processing device according to items 1 or 2, wherein the processing unit is further configured to: obtain an identification of an anatomic landmark in the stream of images; determine the location of the anatomic landmark in the stream of images; determine if the estimated landing zone intersects with the determined location of the anatomic landmark.


4. An image processing device according to item 3, wherein the processing unit is further configured to: obtain the identification of the anatomic landmark by processing one or more images of the stream of images.


5. An image processing device according to item 4, wherein the processing unit is configured to obtain the identification of the anatomic landmark by processing the stream of images using a machine learning data architecture trained to identify the anatomic landmark in endoscope images.


6. An image processing device according to item 3, wherein the processing unit is further configured to: obtain the identification of the anatomic landmark through a user input.


7. An image processing device according to any one of items 3 to 6, wherein the processing unit is further configured to: provide the stream of images with a visual indication, indicating that the estimated landing zone intersects with the determined location of the anatomic landmark.


8. An image processing device according to any one of items 1 to 7, wherein the first sensor data is one or more images of the stream of images.


9. An image processing device according to item 8, wherein the one or more images of the stream of images are processed using a machine learning data architecture trained to estimate the landing zone of the medical device.


10. An image processing device according to any one of items 1 to 9, wherein the endoscope further comprises a guiding element configured to guide the medical device in a particular direction, and wherein the one or more sensors further comprises a guiding element sensor for detecting the position and/or orientation of the guiding element, and wherein the first sensor data is recorded by the guiding element sensor.


11. A display unit for displaying images obtained by an image capturing device of an endoscope, wherein the display unit comprises an image processing device according to any one of items 1 to 10.


12. An endoscope system comprising an endoscope and an image processing device according to any one of items 1 to 10, wherein the endoscope has an image capturing device and the processing unit of the image processing device is operationally connectable to the image capturing device of the endoscope.


13. An endoscope system according to item 12, wherein the endoscope is a duodenoscope and wherein the guiding element is an elevator element configured to elevate a catheter.


14. A method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope, comprising the steps of: providing a training data set comprising a plurality of images captured by an image capturing device of an endoscope, providing for each image the position of the resulting landing zone of the medical device.


15. A method according to item 14, wherein the endoscope comprises a guiding element for guiding the medical device in a particular direction, wherein the guiding element is movable from a first position to a second position, and wherein the guiding element is arranged in different positions in the plurality of images.


16. A method of performing a medical procedure using an endoscope system according to any one of items 12 or 13, comprising the steps of: advancing the endoscope through a body to a position near a point of interest, extending a medical device from the endoscope to a treatment position based on a stream of visual images provided with a visual indication, indicating an estimated landing zone of the medical device, performing a medical procedure at the treatment position, such as removing a polyp, catheterizing the major duodenal papilla or cutting a larger opening in the major duodenal papilla.


Although some embodiments have been described and shown in detail, the invention is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilised and structural and functional modifications may be made without departing from the scope of the present invention.


In device claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.


It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

Claims
  • 1-16. (canceled)
  • 17. A method of performing a medical procedure, the method comprising: advancing an endoscope through a body of a patient; obtaining, from an image sensor of the endoscope, images including a treatment position; processing the images to locate, in the images, a portion of a medical device in a present condition of the endoscope, the present condition comprising a present position of the endoscope and/or a present articulation angle of the endoscope and/or a present angle of an elevator at a tip of the endoscope; presenting with a display an image including the treatment position, the portion of the medical device, and a visual indication of an estimated landing zone of the portion of the medical device, the estimated landing zone representing a contact point of the portion of the medical device in the present condition of the endoscope; adjusting the present condition of the endoscope to move the estimated landing zone to the treatment position; and extending the medical device through the endoscope to the treatment position.
  • 18. The method of claim 17, wherein the medical procedure consists of removing a polyp, catheterizing a major duodenal papilla, and/or cutting a larger opening in the major duodenal papilla.
  • 19. The method of claim 17, further comprising locating the treatment position in the images.
  • 20. The method of claim 19, wherein locating the treatment position in the images comprises processing the images to locate the treatment position.
  • 21. The method of claim 20, wherein the treatment position corresponds to an anatomical landmark, and wherein said processing comprises using a machine learning data architecture trained to identify a landmark in the images to locate the anatomic landmark.
  • 22. The method of claim 21, wherein the machine learning data architecture comprises a supervised artificial neural network.
  • 23. The method of claim 19, wherein locating the treatment position comprises receiving a user input.
  • 24. The method of claim 23, wherein receiving the user input comprises a user touching a touch-screen display to identify the treatment position.
  • 25. The method of claim 24, further comprising tracking the treatment position as the condition of the endoscope changes.
  • 26. The method of claim 25, wherein tracking the treatment position comprises motion-tracking the treatment position in subsequent images.
  • 27. The method of claim 17, wherein the visual indication includes a portion of a trajectory of the medical device.
  • 28. The method of claim 17, further comprising processing signals from a guiding element sensor of the endoscope to determine the present angle of the elevator.
  • 29. The method of claim 17, wherein presenting the image including the treatment position, the portion of the medical device, and the visual indication of the estimated landing zone comprises presenting the visual indication (a) in a first color when the estimated landing zone overlaps the treatment position or (b) in a second color different from the first color when the estimated landing zone does not overlap the treatment position.
  • 30. An image processing device comprising a processor configured to execute a method according to claim 17.
  • 31. The image processing device of claim 30, wherein the image processing device comprises a display configured to present an image including a treatment position in accordance with claim 17.
  • 32. A system for performing a medical procedure, the system comprising: an image processing device according to claim 30; and a display configured to present an image including a treatment position in accordance with claim 17.
  • 33. The system of claim 32, further comprising an endoscope including an image sensor configured to provide images including the treatment position.
  • 34. The system of claim 33, wherein the endoscope further comprises an elevator and a guiding element sensor configured to determine a present angle of the elevator.
  • 35. The system of claim 32, wherein the image processing device comprises the display.
  • 36. The system of claim 32, the image processing device further comprising a machine learning data architecture trained to identify a landmark in images provided by the endoscope.
Priority Claims (1)
Number       Date      Country  Kind
20184995.7   Jul 2020  EP       regional
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a National Stage filing under 35 U.S.C. § 371 of International Patent Application No. PCT/EP2021/068628, filed Jul. 6, 2021, which claims priority from and the benefit of European Patent Application No. EP 20184995.7, filed Jul. 9, 2020; said applications are incorporated by reference herein in their entirety.

PCT Information
Filing Document     Filing Date  Country  Kind
PCT/EP2021/068628   7/6/2021     WO