The present disclosure relates to an image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, a display unit comprising an image processing device, an endoscope system comprising an endoscope and an image processing device, and a method of performing a medical procedure.
Endoscopes are widely used in hospitals for visually examining body cavities and obtaining samples of tissue identified as potentially pathological. An endoscope typically comprises an image capturing unit arranged at the distal end of the endoscope either looking forward or to the side. An endoscope is further typically provided with a working channel allowing a medical device such as a gripping device, a suction device, or a catheter to be introduced.
It may, however, be difficult in a number of medical procedures to ensure that the medical device reaches its intended destination.
Endoscopic retrograde cholangiopancreatography (ERCP) is an example of such a medical procedure. In ERCP the major duodenal papilla is catheterized using a catheter advanced from the tip of a duodenoscope. The duodenoscope is provided with a guiding element in the form of an elevator element for guiding the catheter in a particular direction. By controlling the elevator element and positioning the duodenoscope correctly, the catheter may be introduced into the major duodenal papilla. This may, however, be a challenging and time-consuming process.
Thus it remains a problem to provide an improved device/system for performing endoscopic procedures.
According to a first aspect, the present disclosure relates to an image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, the endoscope comprising one or more sensors including an image capturing device, the image processing device comprising a processing unit operationally connectable to the one or more sensors of the endoscope, wherein the processing unit is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone.
Consequently, the operator of the endoscope may be provided directly with a visual aid assisting in performing a successful procedure.
The image processing device may be built into a display unit or it may be a standalone device operationally connectable to both the one or more sensors of the endoscope and a display unit. The image capturing device may be the only sensor of the endoscope, i.e. the one or more sensors may consist of the image capturing device. The landing zone may be the position the medical device will reach when extended a predetermined length from the endoscope.
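By way of a non-limiting illustration, the processing loop described above may be sketched in Python using OpenCV, where estimate_landing_zone() is a hypothetical placeholder for the actual estimator and the video source index is an assumption:

```python
import cv2

def estimate_landing_zone(frame):
    # Hypothetical estimator returning (x, y) pixel coordinates of the
    # point the medical device would reach when extended a predetermined
    # length from the endoscope tip. A fixed placeholder is used here.
    h, w = frame.shape[:2]
    return (w // 2, int(h * 0.6))

capture = cv2.VideoCapture(0)  # assumed index of the endoscope video feed
while True:
    ok, frame = capture.read()
    if not ok:
        break
    x, y = estimate_landing_zone(frame)
    # Provide the image with a visual indication of the estimated zone.
    cv2.circle(frame, (x, y), radius=12, color=(0, 255, 0), thickness=2)
    cv2.imshow("endoscope", frame)
    if cv2.waitKey(1) == 27:  # Esc key stops the loop
        break
capture.release()
cv2.destroyAllWindows()
```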
The processing unit of the image processing device may be any processing unit, such as a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller unit (MCU), a field-programmable gate array (FPGA), or any combination thereof. The processing unit may comprise one or more physical processors and/or may be a combination of a plurality of individual processing units.
In some embodiments, the processing unit is further configured to obtain an identification of the type of medical instrument, and wherein the estimated landing zone is further dependent on the identification of the type of medical instrument.
The processing unit may obtain the identification via a user input and/or automatically, e.g. by processing sensor data from the one or more sensors.
The visual indication may be a visual element overlaid on the stream of images.
The visual element may comprise parts that are either fully or partly transparent. Thus, the visual content of the stream of images may be covered by the visual element only to a small degree.
The visual element may be arranged at the estimated landing zone. Alternatively, the visual element may be arranged next to the estimated landing zone. Arranging the visual element next to the estimated landing zone may be beneficial as the landing zone typically is at a point of interest that optimally should be freely visible to the user.
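A non-limiting sketch of such an overlay follows, blending a partly transparent marker into a frame either at the estimated landing zone or next to it; the function name, colours, and blending weights are illustrative assumptions:

```python
import cv2

def draw_landing_zone(frame, zone_xy, next_to_zone=False):
    """Overlay a partly transparent landing-zone indicator on a frame."""
    overlay = frame.copy()
    x, y = zone_xy
    if next_to_zone:
        # Arrange the element next to the landing zone so that the point
        # of interest itself remains freely visible: an arrow pointing at it.
        cv2.arrowedLine(overlay, (x + 60, y - 60), (x + 12, y - 12),
                        (0, 255, 0), 3, tipLength=0.3)
    else:
        cv2.circle(overlay, (x, y), 15, (0, 255, 0), -1)
    # Blend so the underlying tissue remains visible through the element.
    return cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)
```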
In some embodiments, the processing unit is further configured to estimate a part of the trajectory of the medical device by processing the first sensor data, and wherein the visual indication further indicates the estimated trajectory.
By further estimating and indicating the trajectory of the medical device, procedures requiring a precise orientation between the medical device and the point of interest may be performed more easily and safely. ERCP is an example of such a medical procedure. In ERCP the major duodenal papilla is catheterized using a catheter advanced from the tip of a duodenoscope, where the orientation between the catheter and the major duodenal papilla is crucial for the success of the procedure. Removal of polyps may also require a precise orientation between the polyp and a cutting tool.
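By way of a non-limiting sketch, an estimated trajectory may be indicated by drawing a polyline through predicted intermediate points ending at the landing zone; all coordinates below are illustrative:

```python
import cv2
import numpy as np

def draw_trajectory(frame, points, color=(0, 255, 255)):
    """Draw the estimated trajectory as an open polyline whose last
    point is the estimated landing zone."""
    pts = np.array(points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=False, color=color, thickness=2)
    cv2.circle(frame, tuple(points[-1]), 10, color, 2)  # landing zone
    return frame

# Illustrative intermediate points from the working channel exit to the zone.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
draw_trajectory(frame, [(600, 460), (520, 380), (430, 310), (360, 270)])
```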
In some embodiments, the processing unit is further configured to: obtain an identification of an anatomic landmark in the stream of images; determine the location of the anatomic landmark in the stream of images; determine if the estimated landing zone intersects with the determined location of the anatomic landmark.
Consequently, a user may be provided with further assistance.
In some embodiments, the processing unit is further configured to: obtain the identification of the anatomic landmark by processing one or more images of the stream of images.
Consequently, the system may be at least partly automated.
In some embodiments, the processing unit is configured to obtain the identification of the anatomic landmark by processing the stream of images using a machine learning data architecture trained to identify the anatomic landmark in endoscope images.
The machine learning data architecture may be a supervised machine learning architecture, trained by being provided with a training data set of images from a large number of patients, where a first subset of images of the training data set includes the anatomic landmark and a second subset of images does not include the anatomic landmark.
In some embodiments, the machine learning data architecture is an artificial neural network such as a deep structured learning architecture.
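A minimal, non-limiting training sketch of such a supervised architecture is given below in Python/PyTorch; the folder layout (a hypothetical train_images/ directory with landmark/ and no_landmark/ subfolders), the ResNet-18 backbone, and all hyperparameters are assumptions, not features of the disclosure:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder maps each subfolder ("landmark", "no_landmark") to a class.
train_set = datasets.ImageFolder("train_images/", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # landmark / no landmark
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```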
Additionally/alternatively, in some embodiments, the processing unit is further configured to: obtain the identification of the anatomic landmark through a user input.
As an example, the processing unit may be operationally connected to a touch screen displaying the stream of images live to the user, where the user may manually identify the anatomic landmark, e.g. by clicking on it on the touch screen. The processing unit may then use motion tracking techniques in order to keep track of the anatomic landmark as the endoscope moves around.
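By way of a non-limiting illustration, this click-and-track interaction may be sketched with OpenCV's ROI selector and CSRT tracker; the snippet assumes the opencv-contrib-python package (where TrackerCSRT is available) and an accessible video source:

```python
import cv2

capture = cv2.VideoCapture(0)  # assumed index of the endoscope video feed
ok, frame = capture.read()
# The user identifies the landmark once by drawing a box around it.
x, y, w, h = cv2.selectROI("select landmark", frame)
tracker = cv2.TrackerCSRT_create()  # requires opencv-contrib-python
tracker.init(frame, (x, y, w, h))

while True:
    ok, frame = capture.read()
    if not ok:
        break
    found, box = tracker.update(frame)  # motion tracking of the landmark
    if found:
        bx, by, bw, bh = map(int, box)
        cv2.rectangle(frame, (bx, by), (bx + bw, by + bh), (255, 0, 0), 2)
    cv2.imshow("endoscope", frame)
    if cv2.waitKey(1) == 27:
        break
```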
In some embodiments, the processing unit is further configured to: provide the stream of images with a visual indication, indicating that the estimated landing zone intersects with the determined location of the anatomic landmark.
Consequently, the user may be provided with further assistance when performing a medical procedure. This may make the medical procedure easier and safer.
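A minimal sketch of such an intersection test, assuming (purely for illustration) that the landmark location is available as a bounding box in image coordinates:

```python
def zone_intersects_landmark(zone_xy, landmark_box):
    """Return True if the estimated landing zone (x, y) falls inside the
    landmark's bounding box (x, y, width, height)."""
    x, y = zone_xy
    bx, by, bw, bh = landmark_box
    return bx <= x <= bx + bw and by <= y <= by + bh

# Example: switch the indicator colour when the zone hits the landmark.
hit = zone_intersects_landmark((320, 260), (300, 240, 60, 50))  # True
color = (0, 0, 255) if hit else (0, 255, 0)
```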
In some embodiments, the first sensor data is one or more images of the stream of images.
Consequently, by utilizing one or more images from the stream of images for estimating the landing zone, the endoscope does not need to be provided with further sensors.
In some embodiments, the one or more images of the stream of images are processed using a machine learning data architecture trained to estimate the landing zone of the medical device.
The machine learning data architecture may be a supervised machine learning architecture.
In some embodiments, the machine learning data architecture is an artificial neural network such as a deep structured learning architecture.
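As a non-limiting illustration, such an architecture could be realised as a convolutional backbone with a two-value regression head predicting normalised landing-zone coordinates; the untrained ResNet-18 and the frame sizes below are placeholder assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class LandingZoneNet(nn.Module):
    """Backbone with a regression head predicting normalised (x, y)
    image coordinates of the landing zone."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, image):
        return torch.sigmoid(self.backbone(image))  # coordinates in [0, 1]

model = LandingZoneNet().eval()
with torch.no_grad():
    xy = model(torch.randn(1, 3, 224, 224))[0]  # a dummy frame
    # Scale back to pixel coordinates of, e.g., a 640x480 frame.
    x_px, y_px = int(xy[0] * 640), int(xy[1] * 480)
```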
In some embodiments, the endoscope further comprises a guiding element configured to guide the medical device in a particular direction, and wherein the one or more sensors further comprises a guiding element sensor for detecting the position and/or orientation of the guiding element, and wherein the first sensor data is recorded by the guiding element sensor.
In some embodiments, the endoscope is a duodenoscope and the guiding element is an elevator element configured to elevate a catheter.
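Where a guiding element sensor is present, the mapping from sensor reading to landing zone may, by way of non-limiting illustration, be an interpolated calibration table; all angles and pixel values below are assumed calibration data:

```python
import numpy as np

# Calibration table: elevator angles (degrees) measured by the guiding
# element sensor, and the landing-zone pixel coordinates observed for a
# device extended a predetermined length. Values are illustrative.
angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
zone_x = np.array([340, 330, 322, 316, 312])
zone_y = np.array([400, 340, 285, 236, 195])

def landing_zone_from_sensor(angle_deg):
    """Interpolate the landing zone from the guiding element angle."""
    x = float(np.interp(angle_deg, angles, zone_x))
    y = float(np.interp(angle_deg, angles, zone_y))
    return int(x), int(y)

print(landing_zone_from_sensor(22.5))  # between the 15 and 30 degree rows
```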
According to a second aspect the disclosure relates to a display unit for displaying images obtained by an image capturing device of an endoscope, wherein the display unit comprises an image processing device as disclosed in relation to the first aspect of the disclosure.
According to a third aspect the disclosure relates to an endoscope system comprising an endoscope and an image processing device as disclosed in relation to the first aspect of the disclosure, wherein the endoscope has an image capturing device and the processing unit of the image processing device is operationally connectable to the image capturing device of the endoscope.
The endoscope may comprise a working channel for allowing a medical device to extend from its tip. The endoscope may further comprise a guiding element for guiding the medical device in a particular direction. The guiding element may be movable from a first position to a second position. The guiding element may be provided with or without a guiding element sensor. The guiding element may be controllable from the handle of the endoscope. The guiding element may be configured to guide the medical device in a particular direction relative to the tip of the endoscope. Thus, dependent on the position of the guiding element, the landing zone of the medical device will be arranged at different parts of the endoscope image.
The endoscope may be a duodenoscope, and the guiding element may be an elevator element configured to elevate a catheter.
According to a fourth aspect the disclosure relates to a method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope, comprising the steps of: providing a training data set comprising a plurality of images captured by an image capturing device of an endoscope, and providing for each image the position of the resulting landing zone of the medical device.
In some embodiments, the endoscope comprises a guiding element for guiding the medical device in a particular direction, wherein the guiding element is movable from a first position to a second position, and wherein the guiding element is arranged in different positions in the plurality of images.
In some embodiments, a part of the medical device can be seen in at least some of the plurality of images.
In some embodiments, the plurality of images includes images of different medical devices.
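A non-limiting sketch of this training method is given below, pairing each image with its annotated landing zone; the random tensors, the deliberately small placeholder network, and all hyperparameters are assumptions standing in for real annotated frames and a real architecture:

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class LandingZoneDataset(Dataset):
    """Pairs each endoscope image with its annotated landing zone."""
    def __init__(self, images, zones):
        self.images = images  # tensors of shape (3, H, W)
        self.zones = zones    # normalised (x, y) target positions

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        return self.images[i], self.zones[i]

# Dummy data standing in for annotated frames captured with varying
# guiding element positions and different medical devices.
images = [torch.randn(3, 224, 224) for _ in range(64)]
zones = [torch.rand(2) for _ in range(64)]
loader = DataLoader(LandingZoneDataset(images, zones), batch_size=8, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()  # distance between predicted and annotated zones

for epoch in range(5):
    for batch_images, batch_zones in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_images), batch_zones)
        loss.backward()
        optimizer.step()
```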
According to a fifth aspect the disclosure relates to a method of performing a medical procedure using an endoscope system as disclosed in relation to the third aspect of the disclosure, comprising the steps of: advancing the endoscope through a body to a position near a point of interest, extending a medical device from the endoscope to a treatment position based on a stream of visual images provided with a visual indication, indicating an estimated landing zone of the medical device, and performing a medical procedure at the treatment position, such as removing a polyp, catheterizing the major duodenal papilla or cutting a larger opening in the major duodenal papilla.
The different aspects of the present disclosure can be implemented in different ways including image processing devices, display units, endoscope systems, methods of training a machine learning data architecture, and methods of performing a medical procedure described above and in the following, each yielding one or more of the benefits and advantages described in connection with at least one of the aspects described above, and each having one or more preferred embodiments corresponding to the preferred embodiments described in connection with at least one of the aspects described above and/or disclosed in the dependent claims. Furthermore, it will be appreciated that embodiments described in connection with one of the aspects described herein may equally be applied to the other aspects.
The above and/or additional objects, features and advantages of the present disclosure will be further elucidated by the following illustrative and non-limiting detailed description of embodiments of the present disclosure, with reference to the appended drawings.
In the following description, reference is made to the accompanying figures, which show by way of illustration how the embodiments of the present disclosure may be practiced.
The endoscope may further comprise a guiding element 121 for guiding the medical device in a particular direction (only schematically shown). The guiding element 121 may be movable from a first position to a second position. The guiding element may be controllable from the handle of the endoscope via an actuation element 120 (only schematically shown). The guiding element may be configured to guide the medical device in a particular direction relative to the tip of the endoscope. Thus, dependent on the position of the guiding element 121, the landing zone of the medical device will be arranged at different parts of the endoscope image.
The endoscope may further optionally comprise a guiding element sensor 122. The guiding element sensor 122 may detect the position of the guiding element, which may then be used to estimate the landing zone of a medical device extendable from the tip of the endoscope. However, in some embodiments the endoscope does not comprise a guiding element sensor 122, and the landing zone of the medical device is estimated only using image data recorded by the image capturing device.
In order to make it possible for the operator to direct the camera sensor such that different fields of view can be achieved, the bending section 106 can be bent in different directions with respect to the insertion tube 104. The bending section 106 may be controlled by the operator by using a knob 110 placed on the handle 102.
The position of the landing zone may depend on the type of medical instrument and/or the orientation of the medical instrument. As an example, if the medical instrument, e.g. a catheter, is provided with a tip having a pre-bend, then the landing zone will depend on the orientation of the medical instrument within the working channel. The one or more sensors of the endoscope may include a sensor configured to detect the type of medical instrument in the working channel and/or the orientation of the medical instrument. However, the processing unit may also be configured to estimate the landing zone by analyzing one or more images of the stream of images, e.g. one or more images showing the type of medical instrument and/or the orientation of the medical instrument, for instance after the medical instrument has propagated a short distance out of the working channel.
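As a non-limiting numeric illustration of the pre-bend example, the landing zone may be modelled as a fixed pixel offset rotated by the instrument's roll within the working channel; every value below is an assumption:

```python
import math

def pre_bent_landing_zone(base_xy, offset_px, roll_deg):
    """Rotate the pre-bend offset by the instrument's roll within the
    working channel to shift the landing zone accordingly."""
    theta = math.radians(roll_deg)
    dx = offset_px * math.cos(theta)
    dy = offset_px * math.sin(theta)
    x, y = base_xy
    return int(x + dx), int(y + dy)

# A catheter with a pre-bent tip rolled by 90 degrees lands above the
# straight-orientation landing zone instead of to its right.
print(pre_bent_landing_zone((320, 240), 40, 0))    # (360, 240)
print(pre_bent_landing_zone((320, 240), 40, -90))  # (320, 200)
```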
The one or more images of the stream of images may be processed using a machine learning data architecture trained to estimate the landing zone of the medical device. The machine learning data architecture may be trained by being provided with a training data set comprising a large number of images showing medical instruments extended a short distance from the tip of the endoscope, together with data of the resulting landing zones.
Additionally/alternatively, the position of the landing zone may be dependent on the position of a guiding element configured to guide the medical device in a particular direction. As an example, if the endoscope is a duodenoscope, the guiding element is an elevator element configured to elevate a catheter, enabling the catheter to catheterize the major duodenal papilla. The one or more sensors of the endoscope may include a guiding element sensor configured to detect the position and/or orientation of the guiding element.
However, the processing unit may also be configured to estimate the landing zone by analyzing one or more images of the stream of images, e.g. one or more images showing the medical instrument and/or the guiding element, for instance after the medical instrument has propagated a short distance out of the working channel.
The one or more images of the stream of images may be processed using a machine learning data architecture trained to estimate the landing zone of the medical device. The machine learning data architecture may be trained by being provided with a training data set comprising a large number of images showing medical instruments extended a short distance from the tip of the endoscope, where the guiding element is arranged in different positions and/or orientations, together with data of the resulting landing zones. If the position/orientation of the guiding element is visible to the image capturing device, then the images of the training data set may not include a medical instrument.
The following items correspond to the features recited in the original claims:
1. An image processing device for estimating a landing zone of a medical device extendable from a tip of an endoscope, the endoscope comprising one or more sensors including an image capturing device, the image processing device comprising a processing unit operationally connectable to the one or more sensors of the endoscope, wherein the processing unit is configured to: obtain a stream of images captured by the image capturing device of the endoscope; process first sensor data recorded by the one or more sensors to estimate the landing zone of the medical device; and provide the stream of images with a visual indication, indicating the estimated landing zone.
2. An image processing device according to item 1, wherein the processing unit is further configured to estimate a part of the trajectory by processing the first sensor data, and wherein the visual indication further indicates the estimated trajectory.
3. An image processing device according to items 1 or 2, wherein the processing unit is further configured to: obtain an identification of an anatomic landmark in the stream of images; determine the location of the anatomic landmark in the stream of images; determine if the estimated landing zone intersects with the determined location of the anatomic landmark.
4. An image processing device according to item 3, wherein the processing unit is further configured to: obtain the identification of the anatomic landmark by processing one or more images of the stream of images.
5. An image processing device according to item 4, wherein the processing unit is configured to obtain the identification of the anatomic landmark by processing the stream of images using a machine learning data architecture trained to identify the anatomic landmark in endoscope images.
6. An image processing device according to item 3, wherein the processing unit is further configured to: obtain the identification of the anatomic landmark through a user input.
7. An image processing device according to any one of items 3 to 6, wherein the processing unit is further configured to: provide the stream of images with a visual indication, indicating that the estimated landing zone intersects with the determined location of the anatomic landmark.
8. An image processing device according to any one of items 1 to 7, wherein the first sensor data is one or more images of the stream of images.
9. An image processing device according to item 8, wherein the one or more images of the stream of images are processed using a machine learning data architecture trained to estimate the landing zone of the medical device.
10. An image processing device according to any one of items 1 to 9, wherein the endoscope further comprises a guiding element configured to guide the medical device in a particular direction, and wherein the one or more sensors further comprises a guiding element sensor for detecting the position and/or orientation of the guiding element, and wherein the first sensor data is recorded by the guiding element sensor.
11. A display unit for displaying images obtained by an image capturing device of an endoscope, wherein the display unit comprises an image processing device according to any one of items 1 to 10.
12. An endoscope system comprising an endoscope and an image processing device according to any one of items 1 to 10, wherein the endoscope has an image capturing device and the processing unit of the image processing device is operationally connectable to the image capturing device of the endoscope.
13. An endoscope system according to item 12, wherein the endoscope is a duodenoscope and wherein the guiding element is an elevator element configured to elevate a catheter.
14. A method of training a machine learning data architecture for estimating a landing zone of a medical device extendable from a tip of an endoscope, comprising the steps of: providing a training data set comprising a plurality of images captured by an image capturing device of an endoscope, providing for each image the position of the resulting landing zone of the medical device.
15. A method according to item 14, wherein the endoscope comprises a guiding element for guiding the medical device in a particular direction, wherein the guiding element is movable from a first position to a second position, and wherein the guiding element is arranged in different positions in the plurality of images.
16. A method of performing a medical procedure using an endoscope system according to any one of items 12 or 13, comprising the steps of: advancing the endoscope through a body to a position near a point of interest, extending a medical device from the endoscope to a treatment position based on a stream of visual images provided with a visual indication, indicating an estimated landing zone of the medical device, performing a medical procedure at the treatment position, such as removing a polyp, catheterizing the major duodenal papilla or cutting a larger opening in the major duodenal papilla.
Although some embodiments have been described and shown in detail, the invention is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilised and structural and functional modifications may be made without departing from the scope of the present invention.
In device claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.
It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
This application is a National Stage filing under 35 U.S.C. § 371 of International Patent Application No. PCT/EP2021/068628, filed Jul. 6, 2021, which claims priority from and the benefit of European Patent Application No. EP 20184995.7, filed Jul. 9, 2020; said applications are incorporated by reference herein in their entirety.