The present disclosure relates to devices, systems, and methods for surgical tool identification in images, and more particularly, to enhancing aspects of discernable features of objects during surgical procedures.
Endoscopes are introduced through an incision or a natural body orifice to observe internal features of a body. Conventional endoscopes are used for visualization during endoscopic or laparoscopic surgical procedures. During such surgical procedures, it is possible for the view of the instrument to be obstructed by tissue or other instruments.
During minimally invasive surgery, and especially in robotic surgery, knowledge of the exact surgical tools appearing in the endoscopic video feed can be useful for facilitating features that enhance the surgical experience. While electrical or wireless communication with a component attached to or embedded in the tool is one possible means of identification, another means is needed when that infrastructure is unavailable or impractical. Accordingly, there is interest in improving imaging technology.
The disclosure relates to devices, systems, and methods for surgical tool identification in images. In accordance with aspects of the disclosure, a system for object enhancement in endoscopy images is presented. The system includes a light source, an imaging device, and an imaging device control unit. The light source is configured to provide light within a surgical operative site. The imaging device control unit includes a processor and a memory storing instructions. The instructions, when executed by the processor, cause the system to capture an image of an object within the surgical operative site, by the imaging device. The image includes a plurality of pixels. Each of the plurality of pixels includes color information. The instructions, when executed by the processor, further cause the system to access the image, access data relating to depth information about each of the pixels in the image, input the depth information to a machine learning algorithm, emphasize a feature of the image based on an output of the machine learning algorithm, generate an augmented image based on the emphasized feature, and display the augmented image on a display.
In an aspect of the present disclosure, emphasizing the feature may include augmenting a 3D aspect of the image, emphasizing a boundary of the object, changing the color information of the plurality of pixels of the object, and/or extracting 3D features of the object.
In another aspect of the present disclosure, the instructions, when executed, may further cause the system to perform real-time image recognition on the augmented image to detect an object and classify the object.
In an aspect of the present disclosure, the image may include a stereographic image. The stereographic image may include a left image and a right image. The instructions, when executed, may further cause the system to calculate depth information based on determining a horizontal disparity mismatch between the left image and the right image. The depth information may include pixel depth.
In yet another aspect of the present disclosure, the instructions, when executed, may further cause the system to calculate depth information based on structured light projection. The depth information may include pixel depth.
In a further aspect of the present disclosure, the machine learning algorithm may include a convolutional neural network, a feed forward neural network, a radial basis function neural network, a multilayer perceptron, a recurrent neural network, and/or a modular neural network.
In an aspect of the present disclosure, the machine learning algorithm may be trained based on tagging objects in training images. The training may further include augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images.
In a further aspect of the present disclosure, the training may include supervised, unsupervised, and/or reinforcement learning.
In yet another aspect of the present disclosure, the instructions, when executed, may further cause the system to: process a time series of the augmented image based on a learned video magnification, phase-based video magnification, and/or Eulerian video magnification.
In a further aspect of the present disclosure, the instructions, when executed, may further cause the system to perform tracking of the object based on an output of the machine learning algorithm.
In accordance with aspects of the disclosure, a computer-implemented method of object enhancement in endoscopy images is presented. The method includes capturing an image of an object within a surgical operative site, by an imaging device. The image includes a plurality of pixels. Each of the plurality of pixels includes color information. The method further includes accessing the image, accessing data relating to depth information about each of the pixels in the image, inputting the depth information to a machine learning algorithm, emphasizing a feature of the image based on an output of the machine learning algorithm, generating an augmented image based on the emphasized feature, and displaying the augmented image on a display.
In an aspect of the present disclosure, emphasizing the feature may include augmenting a 3D aspect of the image, emphasizing a boundary of the object, changing the color information of the plurality of pixels of the object, and/or extracting 3D features of the object.
In yet a further aspect of the present disclosure, the computer-implemented method may further include performing real-time image recognition on the augmented image to detect an object and classify the object.
In yet another aspect of the present disclosure, the image may include a stereographic image. The stereographic image may include a left image and a right image. The computer-implemented method may further include calculating depth information based on determining a horizontal disparity mismatch between the left image and the right image. The depth information may include pixel depth.
In a further aspect of the present disclosure, the computer-implemented method may further include calculating depth information based on structured light projection. The depth information may include pixel depth.
In yet a further aspect of the present disclosure, the machine learning algorithm may include a convolutional neural network, a feed forward neural network, a radial basis function neural network, a multilayer perceptron, a recurrent neural network, and/or a modular neural network.
In yet another aspect of the present disclosure, the machine learning algorithm may be trained based on tagging objects in training images. The training may further include augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images.
In a further aspect of the present disclosure, the computer-implemented method may further include processing a time series of the augmented image based on a learned video magnification, phase-based video magnification, and/or Eulerian video magnification.
In an aspect of the present disclosure, the computer-implemented method may further include performing tracking of the object based on an output of the machine learning algorithm.
In accordance with aspects of the present disclosure, a non-transitory storage medium that stores a program causing a computer to execute a computer-implemented method of object enhancement in endoscopy images is presented. The computer-implemented method includes capturing an image of an object within a surgical operative site, by an imaging device. The image includes a plurality of pixels, each of which includes color information. The method further includes accessing the image, accessing data relating to depth information about each of the pixels in the image, inputting the depth information to a machine learning algorithm, emphasizing a feature of the image based on an output of the machine learning algorithm, generating an augmented image based on the emphasized feature, and displaying the augmented image on a display.
In accordance with aspects of the present disclosure, a system for object detection in endoscopy images is presented. The system includes a light source configured to provide light within a surgical operative site, an imaging device configured to acquire stereographic images, and an imaging device control unit configured to control the imaging device. The control unit includes a processor and a memory storing instructions. The instructions, when executed by the processor, cause the system to capture a stereographic image of an object within a surgical operative site, by the imaging device. The stereographic image includes a first image and a second image. The instructions, when executed by the processor, further cause the system to: access the stereographic image; perform real-time image recognition on the first image to detect the object, classify the object, and produce a first image classification probability value; perform real-time image recognition on the second image to detect the object, classify the object, and produce a second image classification probability value; and compare the first image classification probability value and the second image classification probability value to produce a classification accuracy value. In a case where the classification accuracy value is above a predetermined threshold, the instructions, when executed by the processor, further cause the system to: generate a first bounding box around the detected object, generate a first augmented view of the first image based on the classification, generate a second augmented view of the second image based on the classification, and display the first and second augmented views on a display. The first augmented view includes the bounding box and a tag indicating the classification. The second augmented view includes the bounding box and a tag indicating the classification.
In an aspect of the present disclosure, in a case where the classification accuracy value is below the predetermined threshold, the instructions, when executed, may further cause the system to display on the display an indication that the classification accuracy value is not within an expected range.
In another aspect of the present disclosure, the real-time image recognition may include: detecting the object in the first image, detecting the object in the second image, generating a first silhouette of the object in the first image, generating a second silhouette of the object in the second image, comparing the first silhouette to the second silhouette, and detecting inconsistencies between the first silhouette and the second silhouette based on the comparing.
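By way of a non-limiting illustration, such a silhouette comparison might be carried out as in the following Python sketch, where the right-image silhouette is shifted by an assumed horizontal disparity and the mismatch is scored as the disagreeing fraction of the silhouette union; the function name and the metric are assumptions for the example.

```python
import numpy as np

def silhouette_inconsistency(mask_left: np.ndarray, mask_right: np.ndarray,
                             disparity_px: int = 0) -> float:
    """Score the disagreement between left/right object silhouettes."""
    # Shift the right silhouette horizontally to roughly align the views.
    shifted = np.roll(mask_right, disparity_px, axis=1)
    union = np.logical_or(mask_left, shifted)
    if union.sum() == 0:
        return 0.0  # no object in either view
    mismatch = np.logical_xor(mask_left, shifted)
    return float(mismatch.sum()) / float(union.sum())
```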
In an aspect of the present disclosure, the real-time image recognition may include detecting the object based on a convolutional neural network. In various embodiments, the detecting may include generating a segmentation mask for the object, detecting the object, and classifying the object based on the detecting.
In yet another aspect of the present disclosure, the convolutional neural network may be trained based on tagging objects in training images, and wherein the training further includes augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images.
In a further aspect of the present disclosure, the real-time image recognition may include detecting the object based on a region based neural network. The detecting may include dividing the first image and second image into regions, predicting bounding boxes for each region based on a feature of the object, predicting an object detection probability for each region, weighting the bounding boxes based on the predicted object detection probability, detecting the object, and classifying the object based on the detecting.
In an aspect of the present disclosure, the region based neural network may be trained based on tagging objects in training images, and wherein the training further includes augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, changing a background, and/or stretching the training images.
In a further aspect of the present disclosure, the instructions, when executed, may further cause the system to: perform tracking of the object based on an output of the region based neural network.
In yet another aspect of the present disclosure, the first and second augmented views each may further include an indication of the classification accuracy value.
In accordance with aspects of the present disclosure, a computer-implemented method of object detection in endoscopy images is presented. The computer-implemented method includes accessing a stereographic image of an object within a surgical operative site, by an imaging device. The stereographic image includes a first image and a second image. The method further includes performing real-time image recognition on the first image to detect the object, classify the object, and produce a classification probability value; performing real-time image recognition on the second image to detect the object, classify the object, and produce a classification probability value; and comparing the classification probability value of the first image and the classification probability value of the second image based on the real-time image recognition to produce a classification accuracy value. In a case where the classification accuracy value is above a predetermined threshold, the method further includes generating a first bounding box around the detected object, generating a first augmented view of the first image based on the classification, generating a second augmented view of the second image based on the classification, and displaying the first and second augmented views on a display. The first augmented view includes the bounding box and a tag indicating the classification. The second augmented view includes the bounding box and a tag indicating the classification.
In a further aspect of the present disclosure, in a case where the classification accuracy value is below the predetermined threshold, the method may further include displaying on the display an indication that the classification accuracy value is not within an expected range.
In yet a further aspect of the present disclosure, the real-time image recognition may include detecting the object in the first image, detecting the object in the second image, generating a first silhouette of the object in the first image, generating a second silhouette of the object in the second image, comparing the first silhouette to the second silhouette, and detecting inconsistencies between the first silhouette and the second silhouette based on the comparing.
In yet another aspect of the present disclosure, the real-time image recognition may include detecting the object based on a convolutional neural network. The detecting may include generating a segmentation mask for the object, detecting the object, and classifying the object based on the detecting.
In a further aspect of the present disclosure, the convolutional neural network may be trained based on tagging objects in training images. The training may further include augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images.
In yet a further aspect of the present disclosure, the real-time image recognition may include detecting the object based on a region based neural network. The detecting may include dividing the image into regions, predicting bounding boxes for each region based on a feature of the object, predicting an object detection probability for each region, weighting the bounding boxes based on the predicted object detection probability, detecting the object, and classifying the object based on the detecting.
In yet another aspect of the present disclosure, the region based neural network may be trained based on tagging objects in training images. The training may further include augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, changing a background, and/or stretching the training images.
In a further aspect of the present disclosure, the method may further include performing tracking of the object based on an output of the region based neural network.
In an aspect of the present disclosure, the first and second augmented views each may further include an indication of the classification probability value.
In accordance with aspects of the present disclosure, a non-transitory storage medium that stores a program causing a computer to execute a computer-implemented method of object detection in endoscopy images is presented. The computer-implemented method includes accessing a stereographic image of an object within a surgical operative site, by an imaging device.
The stereographic image includes a first image and a second image. The computer-implemented method further includes performing real-time image recognition on the first image to detect the object, classify the object, and produce a classification probability value; performing real-time image recognition on the second image to detect the object, classify the object, and produce a classification probability value; and comparing the classification probability value of the first image and the classification probability value of the second image based on the real-time image recognition to produce a classification accuracy value. In a case where the classification accuracy value is above a predetermined threshold, the method further includes generating a first bounding box around the detected object, generating a first augmented view of the first image based on the classification, generating a second augmented view of the second image based on the classification, and displaying the first and second augmented views on a display. The first augmented view includes the bounding box and a tag indicating the classification. The second augmented view includes the bounding box and a tag indicating the classification.
Further details and aspects of various embodiments of the disclosure are described in more detail below with reference to the appended figures.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Embodiments of the disclosure are described herein with reference to the accompanying drawings, wherein:
Further details and aspects of exemplary embodiments of the disclosure are described in more detail below with reference to the appended figures. Any of the above aspects and embodiments of the disclosure may be combined without departing from the scope of the disclosure.
Embodiments of the presently disclosed devices, systems, and methods of treatment are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views. As used herein, the term “distal” refers to that portion of a structure that is farther from a user, while the term “proximal” refers to that portion of a structure that is closer to the user. The term “clinician” refers to a doctor, nurse, or other care provider and may include support personnel.
The disclosure is applicable where images of a surgical site are captured. Endoscope systems are provided as an example, but it will be understood that such description is exemplary and does not limit the scope and applicability of the disclosure to other systems and procedures.
Convolutional neural network-based machine learning may be used in conjunction with minimally invasive endoscopic surgical video for surgically useful purposes, such as discerning potentially challenging situations, which requires that the networks be trained on clinical video. The anatomy seen in these videos can be complex and subtle, and the interaction of surgical tools with that anatomy can be equally difficult to discern in detail. Means by which the observed actions are enhanced or emphasized would therefore be desirable, to help the machine learning yield better insights with less training.
Referring initially to
With reference to
With reference to
Referring to
In various embodiments, the memory 454 can be random access memory, read-only memory, magnetic disk memory, solid state memory, optical disc memory, and/or another type of memory. In various embodiments, the memory 454 can be separate from the imaging device controller 450 and can communicate with the processor 452 through communication buses of a circuit board and/or through communication cables such as serial ATA cables or other types of cables. The memory 454 includes computer-readable instructions that are executable by the processor 452 to operate the imaging device controller 450. In various embodiments, the imaging device controller 450 may include a network interface 540 to communicate with other computers or a server.
Referring now to
Initially, at step 502, an image of a surgical site is captured via the objective lens 36 and forwarded to the image sensor 32 of endoscope system 1. The term “image” as used herein may include still images or moving images (for example, video). The image includes a plurality of pixels, wherein each of the plurality of pixels includes color information. In various embodiments, the captured image is communicated to the video system 30 for processing. For example, during an endoscopic procedure, a surgeon may cut tissue with an electrosurgical instrument. When the image is captured, it may include objects such as the tissue and the instrument. For example, the image may contain several frames of a surgical site. At step 504, the video system 30 accesses the image for further processing.
At step 506, the video system 30 accesses data relating to depth information about each of the pixels in the image. For example, the system may access depth data relating to the pixels of an object in the image, such as an organ or a surgical instrument. In various embodiments, the image includes a stereographic image. In various embodiments, the stereographic image includes a left image and a right image. In various embodiments, the video system 30 may calculate depth information based on determining a horizontal disparity mismatch between the left image and the right image. In various embodiments, the depth information may include pixel depth. In various embodiments, the video system 30 may calculate depth information based on structured light projection.
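By way of non-limiting illustration, the following Python sketch shows one way pixel depth might be computed from the horizontal disparity mismatch between rectified left and right images, using OpenCV's semi-global block matcher and the pinhole relation Z = f·B/d; the matcher parameters and the calibration values (focal length in pixels, baseline in millimeters) are illustrative assumptions rather than part of the disclosure.

```python
import cv2
import numpy as np

def pixel_depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray,
                            focal_px: float, baseline_mm: float) -> np.ndarray:
    """Estimate per-pixel depth (mm) from a rectified 8-bit stereo pair.

    Depth follows the pinhole relation Z = f * B / d, where d is the
    horizontal disparity in pixels.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    # StereoSGBM returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, np.float32)
    valid = disparity > 0  # zero/negative disparity means no reliable match
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```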
At step 508, the video system 30 inputs the depth information to a neural network. In various embodiments, the neural network includes a convolutional neural network (CNN). CNNs are often thought of as operating on images, but they can just as well be configured to handle additional data inputs. The "C" in CNN stands for "convolutional," which refers to applying matrix processing operations to localized portions of an image; the results of those operations (which can involve dozens of different parallel and serial calculations) are sets of many features that are used to train neural networks. In various embodiments, additional information may be included in the operations that generate these features. Providing such unique information yields features that give the neural networks an aggregate way to differentiate between the different data input to them. In various embodiments, the neural network may include a feed forward neural network, a radial basis function neural network, a multilayer perceptron, a recurrent neural network, and/or a modular neural network.
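Purely as an example of supplying depth to such a network alongside the color information, the following PyTorch sketch stacks the depth map as a fourth input channel ahead of an ordinary stack of convolution, activation, and pooling layers; the layer widths and the class count are arbitrary assumptions of this sketch.

```python
import torch
import torch.nn as nn

class RGBDNet(nn.Module):
    """Toy CNN whose input stacks the color image with its depth map."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 = R, G, B, depth
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)  # N x 4 x H x W
        return self.classifier(self.features(x).flatten(1))

# Example: one 256 x 256 frame with a matching single-channel depth map.
logits = RGBDNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
```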
In various embodiments, the depth information now associated with the pixels can be input to the image processing path to feed the neural network. At this point, the neural networks may start with various mathematical operations extracting and/or emphasizing 3D features. It is contemplated that the extraction of depth does not need to be real-time for training the neural networks. In various embodiments, a second source of enhancement of the images input to the neural networks is to amplify the change in color of the pixels over time. This is a technique by which subtle color changes can be magnified, for example, making it possible to discern a person's pulse from the change in the color of the person's face as a function of cyclic cardiac output. In various embodiments, the change in tissue color as a result of various types of tool-tissue interactions, such as grasping, cutting, and joining, may be amplified. Such color change is a function of the change in blood circulation, which is cyclical as well as affected by tool effects on tissue. These enhanced time series videos can replace normal videos in the training and intraoperative monitoring process. It is contemplated that color change enhancement does not need to be real-time to train the networks.
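As a non-limiting sketch of such temporal color amplification, the following Python example applies an Eulerian-style temporal band-pass filter over a stack of frames and adds the amplified band back; the pass band (roughly pulse-rate frequencies), filter order, and gain are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_color_changes(frames: np.ndarray, fps: float,
                          low_hz: float = 0.8, high_hz: float = 3.0,
                          gain: float = 20.0) -> np.ndarray:
    """Magnify subtle per-pixel color changes over time.

    frames: (T, H, W, 3) array of floats in [0, 1]. A temporal band-pass
    around typical pulse frequencies is amplified and added back.
    """
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    pulsatile = filtfilt(b, a, frames, axis=0)  # band-passed color signal
    return np.clip(frames + gain * pulsatile, 0.0, 1.0)
```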
In various embodiments, the neural network is trained based on tagging objects in training images, and the training further includes augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images. In various embodiments, the training includes supervised, unsupervised, and/or reinforcement learning. It is contemplated that training images may be generated via other means that do not involve modifying existing images.
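By way of example only, the listed augmentations might be expressed with torchvision transforms roughly as follows; every parameter value here is an illustrative assumption.

```python
import torch
import torchvision.transforms as T

# One possible pipeline covering the listed augmentations.
augment = T.Compose([
    T.RandomRotation(degrees=15),                                   # rotating
    T.RandomResizedCrop(224, scale=(0.7, 1.0), ratio=(0.8, 1.25)),  # scaling/stretching
    T.ColorJitter(brightness=0.3, saturation=0.3, hue=0.05),        # changing colors
    T.ToTensor(),
    T.RandomErasing(p=0.5),                                         # hiding portions
    T.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0, 1)),  # adding noise
])
# Usage: tensor = augment(pil_training_image)
```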
At step 510, the video system 30 emphasizes a feature of the image based on an output of the neural network. In various embodiments, emphasizing the feature includes augmenting a 3D aspect of the image, emphasizing a boundary of the object, changing the color information of the plurality of pixels of the object, and/or extracting 3D features of the object. In various embodiments, the video system 30 performs real-time image recognition on the augmented image to detect an object and classify the object. In various embodiments, the video system 30 processes a time series of the augmented image based on a learned video magnification, phase-based video magnification, and/or Eulerian video magnification. For example, the video system 30 may change the color of a surgical instrument to emphasize the boundary of the surgical instrument. In various embodiments, the enhanced image may be fed as an input into the neural network of
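As one non-limiting illustration of emphasizing an object boundary, the following Python sketch outlines depth discontinuities, which tend to coincide with instrument and tissue edges, in a contrasting color; the edge thresholds and overlay color are arbitrary choices for the example.

```python
import cv2
import numpy as np

def emphasize_boundaries(bgr: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Overlay depth discontinuities as a bright outline on the image."""
    # Sharp jumps in depth approximate object boundaries.
    d8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(d8, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # thicken the outline
    out = bgr.copy()
    out[edges > 0] = (0, 255, 255)  # draw the boundary in yellow (BGR)
    return out
```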
At step 512, the video system 30 generates an augmented image based on the emphasized feature. For example, the video system may generate an augmented image
At step 514, the video system 30 displays the augmented image on a display device 40. In various embodiments, the video system 30 performs tracking of the object based on an output of the neural network.
With reference to
Referring now to
Initially, at step 702, a stereographic image of a surgical site is captured via the objective lens 36 and forwarded to the image sensor 32 of endoscope system 1. The term “image” as used herein may include still images or moving images (for example, video). The stereographic image includes a first image and a second image (e.g., a left and a right image). The stereographic image includes a plurality of pixels, wherein each of the plurality of pixels includes color information. In various embodiments, the captured stereographic image is communicated to the video system 30 for processing. For example, during an endoscopic procedure, a surgeon may cut tissue with an electrosurgical instrument. When the image is captured, it may include objects such as the tissue and the instrument.
With reference to
With continued reference to
In various embodiments, to perform the real-time image recognition, the video system 30 may detect the object based on a convolutional neural network. A convolutional neural network typically includes convolution layers, activation function layers, and pooling (typically max-pooling) layers to reduce dimensionality without losing significant features. The detection may include initially generating a segmentation mask for the object, detecting the object, and then classifying the object based on the detection.
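Purely as an illustration of this mask-first pipeline, the following Python sketch derives a bounding region and a class label from a binary segmentation mask and a vector of per-class probabilities; the minimum-area test and the function signature are assumptions of the sketch.

```python
import numpy as np

def detect_from_mask(mask: np.ndarray, class_scores: np.ndarray,
                     class_names: list, min_area: int = 500):
    """Turn a binary segmentation mask into a detection plus a label.

    mask: (H, W) boolean output of a segmentation network.
    class_scores: per-class probabilities from a classification head.
    """
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:
        return None  # nothing detected
    box = (xs.min(), ys.min(), xs.max(), ys.max())  # x0, y0, x1, y1
    label = class_names[int(class_scores.argmax())]
    return box, label, float(class_scores.max())
```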
In various embodiments, to perform the real-time image recognition, the video system 30 may detect the object based on a region based neural network. The video system 30 may detect the object by initially dividing the first image and second image into regions. Next, the video system 30 may predict bounding boxes for each region based on a feature of the object. Next, the video system 30 may predict an object detection probability for each region and weight the bounding boxes based on the predicted object detection probability. Next, the video system 30 may detect the object based on the bounding boxes and the weights and classify the object based on the detecting. In various embodiments, the region based or convolutional neural network may be trained based on tagging objects in training images. In various embodiments, the training may further include augmenting the training images to include adding noise, changing colors, hiding portions of the training images, scaling of the training images, rotating the training images, and/or stretching the training images.
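The following Python sketch illustrates, by way of example only, decoding such region-wise predictions in the style of a single-shot grid detector; the per-cell prediction layout (box center, size, and objectness) is an assumption of this sketch, not a requirement of the disclosure.

```python
import numpy as np

def decode_grid_predictions(pred: np.ndarray, conf_thresh: float = 0.5):
    """Decode an S x S grid of box predictions into one weighted detection.

    pred: (S, S, 5) array of (cx, cy, w, h, objectness), all in [0, 1],
    with cx/cy relative to the cell. Low-confidence regions are dropped
    and the surviving boxes are ranked by their objectness weight.
    """
    S = pred.shape[0]
    boxes = []
    for r in range(S):
        for c in range(S):
            cx, cy, w, h, p = pred[r, c]
            if p < conf_thresh:
                continue
            x = (c + cx) / S  # cell-relative -> image-relative center
            y = (r + cy) / S
            boxes.append(((x - w / 2, y - h / 2, x + w / 2, y + h / 2), p))
    return max(boxes, key=lambda b: b[1]) if boxes else None
```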
Next, at step 706, the video system 30 performs real-time image recognition on the second image to detect the object, classify the object, and produce a second image classification probability value. For example, the video system 30 may detect a surgical instrument such as a stapler in the second image.
With reference to
With continued reference to
Next at step 710, the video system 30 determines whether the classification accuracy value is above a predetermined threshold. For example, the threshold may be about 80%. If the classification accuracy value is about 90%, then it would be above the predetermined threshold of 80%. If the video system 30 at step 710 determines that the classification accuracy value is above the predetermined threshold, then at step 712, the video system 30 generates a first bounding box around the detected object.
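As a non-limiting sketch, the comparison of the two per-view classification probability values and the threshold test of step 710 might look as follows; the disclosure does not prescribe how the accuracy value is computed, so the agreement formula below is purely an illustrative assumption.

```python
def classification_accuracy(p_first: float, p_second: float) -> float:
    """One plausible agreement score: near 1 only when both per-view
    probabilities are high and close to each other (an assumption)."""
    return min(p_first, p_second) * (1.0 - abs(p_first - p_second))

THRESHOLD = 0.80
accuracy = classification_accuracy(0.93, 0.91)  # ~0.89
if accuracy > THRESHOLD:
    print("annotate and display both augmented views")  # steps 712-718
else:
    print("warn: classification accuracy not within expected range")
```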
Next at step 714, the video system 30 generates a first augmented view of the first image based on the classification. The first augmented view includes the bounding box and a tag indicating the classification. For example, the tag may be “stapler.”
Next at step 716, the video system 30 generates a second augmented view of the second image based on the classification. The second augmented view includes the bounding box and a tag indicating the classification. In various embodiments, the first and second augmented views each include an indication of the classification probability value.
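Purely by way of illustration, generating such an augmented view with the bounding box, a classification tag, and the probability indication could be done with OpenCV as follows; the tag text, colors, and coordinates are illustrative.

```python
import cv2
import numpy as np

def annotate(view: np.ndarray, box: tuple, tag: str, prob: float) -> np.ndarray:
    """Draw the bounding box plus a tag showing the classification."""
    x0, y0, x1, y1 = box
    out = view.copy()
    cv2.rectangle(out, (x0, y0), (x1, y1), (0, 255, 0), 2)
    cv2.putText(out, f"{tag} ({prob:.0%})", (x0, max(y0 - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return out

# Hypothetical example: tag a detected stapler in a 640 x 480 view.
view = annotate(np.zeros((480, 640, 3), np.uint8),
                (200, 150, 360, 300), "stapler", 0.90)
```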
Next at step 718, the video system 30 displays the first and second augmented images on a display device 40. In various embodiments, the video system 30 performs tracking of the object based on an output of the region based neural network.
With reference to
With reference to
With reference to
The embodiments disclosed herein are examples of the disclosure and may be embodied in various forms. For instance, although certain embodiments herein are described as separate embodiments, each of the embodiments herein may be combined with one or more of the other embodiments herein. Specific structural and functional details disclosed herein are not to be interpreted as limiting, but as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the disclosure in virtually any appropriately detailed structure. Like reference numerals may refer to similar or identical elements throughout the description of the figures.
The terms “artificial intelligence,” “data models,” or “machine learning” may include, but are not limited to, neural networks, convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN), Bayesian Regression, Naive Bayes, nearest neighbors, least squares, means, and support vector regression, among other data science and artificial intelligence techniques.
The phrases “in an embodiment,” “in embodiments,” “in some embodiments,” or “in other embodiments” may each refer to one or more of the same or different embodiments in accordance with the disclosure. A phrase in the form “A or B” means “(A), (B), or (A and B).” A phrase in the form “at least one of A, B, or C” means “(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).” The term “clinician” may refer to a clinician or any medical professional, such as a doctor, physician assistant, nurse, technician, medical assistant, or the like, performing a medical procedure.
The systems described herein may also utilize one or more controllers to receive various information and transform the received information to generate an output. The controller may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory. The controller may include multiple processors and/or multicore central processing units (CPUs) and may include any type of processor, such as a microprocessor, digital signal processor, microcontroller, programmable logic device (PLD), field programmable gate array (FPGA), or the like. The controller may also include a memory to store data and/or instructions that, when executed by the one or more processors, causes the one or more processors to perform one or more methods and/or algorithms.
Any of the herein described methods, programs, algorithms or codes may be converted to, or expressed in, a programming language or computer program. The terms “programming language” and “computer program,” as used herein, each include any language used to specify instructions to a computer, and include (but are not limited to) the following languages and their derivatives: Assembler, Basic, Batch files, BCPL, C, C+, C++, Delphi, Fortran, Java, JavaScript, machine code, operating system command languages, Pascal, Perl, PL1, Python, scripting languages, Visual Basic, metalanguages which themselves specify programs, and all first, second, third, fourth, fifth, or further generation computer languages. Also included are database and other data schemas, and any other meta-languages. No distinction is made between languages which are interpreted, compiled, or use both compiled and interpreted approaches. No distinction is made between compiled and source versions of a program. Thus, reference to a program, where the programming language could exist in more than one state (such as source, compiled, object, or linked) is a reference to any and all such states. Reference to a program may encompass the actual instructions and/or the intent of those instructions.
Any of the herein described methods, programs, algorithms, or codes may be contained on one or more machine-readable media or memory. The term “memory” may include a mechanism that provides (for example, stores and/or transmits) information in a form readable by a machine such as a processor, computer, or a digital processing device. For example, a memory may include a read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or any other volatile or non-volatile memory storage device. Code or instructions contained thereon can be represented by carrier wave signals, infrared signals, digital signals, and by other like signals.
It should be understood that the foregoing description is only illustrative of the disclosure. Various alternatives and modifications can be devised by those skilled in the art without departing from the disclosure. Accordingly, the disclosure is intended to embrace all such alternatives, modifications, and variances. The embodiments described with reference to the attached drawing figures are presented only to demonstrate certain examples of the disclosure. Other elements, steps, methods, and techniques that are insubstantially different from those described above and/or in the appended claims are also intended to be within the scope of the disclosure.
Filing Document: PCT/US2020/053790; Filing Date: Oct. 1, 2020; Country: WO.

Related Application: U.S. Provisional Application No. 62/910,514, filed October 2019 (US).