Surgical systems may incorporate an imaging system, which may allow the clinician(s) to view the surgical site and/or one or more portions thereof on one or more displays such as a monitor. The display(s) may be local and/or remote to a surgical theater. An imaging system may include a scope with a camera that views the surgical site and transmits the view to a display that is viewable by the clinician. Scopes include, but are not limited to, laparoscopes, robotic laparoscopes, arthroscopes, angioscopes, bronchoscopes, choledochoscopes, colonoscopes, cystoscopes, duodenoscopes, enteroscopes, esophagogastro-duodenoscopes (gastroscopes), endoscopes, laryngoscopes, nasopharyngoscopes, nephroscopes, sigmoidoscopes, thoracoscopes, ureteroscopes, and exoscopes.
Surgical imaging systems may also involve stereo vision, which can allow for 3D reconstruction of patient anatomy captured by the imaging systems. Scene reconstruction, or 3D reconstruction, is the process of capturing the shape and appearance of real objects, thus allowing medical professionals to use a scope-based imaging system to capture, reconstruct, track, and potentially measure an internal area of a patient as well as any tools present in the images.
While various kinds of surgical instruments and image capture systems have been made and used, it is believed that no one prior to the inventor(s) has made or used the invention described herein.
While the specification concludes with claims which particularly point out and distinctly claim the invention, it is believed the present invention will be better understood from the following description of certain examples taken in conjunction with the accompanying drawings, in which like reference numerals identify the same elements and in which:
The drawings are not intended to be limiting in any way, and it is contemplated that various embodiments of the invention may be carried out in a variety of other ways, including those not necessarily depicted in the drawings. The accompanying drawings incorporated in and forming a part of the specification illustrate several aspects of the present invention, and together with the description serve to explain the principles of the invention; it being understood, however, that this invention is not limited to the precise arrangements shown.
The following description of certain examples of the invention should not be used to limit the scope of the present invention. Other examples, features, aspects, embodiments, and advantages of the invention will become apparent to those skilled in the art from the following description, which is, by way of illustration, one of the best modes contemplated for carrying out the invention. As will be realized, the invention is capable of other different and obvious aspects, all without departing from the invention. Accordingly, the drawings and descriptions should be regarded as illustrative in nature and not restrictive.
For clarity of disclosure, the terms “proximal” and “distal” are defined herein relative to a surgeon, or other operator, grasping a surgical device. The term “proximal” refers to the position of an element arranged closer to the surgeon, and the term “distal” refers to the position of an element arranged further away from the surgeon. Moreover, to the extent that spatial terms such as “top,” “bottom,” “upper,” “lower,” “vertical,” “horizontal,” or the like are used herein with reference to the drawings, it will be appreciated that such terms are used for exemplary description purposes only and are not intended to be limiting or absolute. In that regard, it will be understood that surgical instruments such as those disclosed herein may be used in a variety of orientations and positions not limited to those shown and described herein.
Furthermore, the terms “about,” “approximately,” and the like as used herein in connection with any numerical values or ranges of values are intended to encompass the exact value(s) referenced as well as a suitable tolerance that enables the referenced feature or combination of features to function for the intended purpose(s) described herein.
Similarly, the phrase “based on” should be understood as referring to a relationship in which one thing is determined at least in part by what it is specified as being “based on.” This includes, but is not limited to, relationships where one thing is exclusively determined by another, which relationships may be referred to using the phrase “exclusively based on.”
Disclosed herein are various systems and/or methods that relate generally to acquiring laparoscope stereo images and using image processing techniques to perform a digital measurement. The digital measurement may be based on a variety of factors, which will be discussed in greater detail herein.
The benefits of using minimally invasive surgery (MIS) are extensive and well known. Thus, improving the ability of surgeons to perform MIS and/or expanding the scope of procedures capable of being performed using MIS can lead to improved patient care.
Referring now to
In some embodiments, once the stereo imaging device 102 captures at least two stereo images, those images may be sent to the control system 101. The control system 101 may then process the captured images. In one embodiment, the control system 101 may have one or more computer processing algorithms installed to allow for processing the pair of stereo images. In general, the captured images are considered to be red-green-blue (RGB) images; however, it should be understood that various other color spaces may be used, such as, for example, cylindrical-coordinate color models, color models that separate the luma (or lightness) from the chroma signals, and/or grayscale. In a further embodiment, the system may also utilize hyperspectral wavelengths for imaging.
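By way of non-limiting illustration, the following Python sketch shows how captured frames might be converted between such color spaces using the OpenCV library. This is a minimal sketch only; the file names are hypothetical placeholders, and any suitable acquisition pipeline could supply the frames.

    import cv2

    # Load a captured stereo pair (hypothetical file names; any source of
    # frames from the stereo imaging device would serve).
    left_bgr = cv2.imread("left_frame.png")    # OpenCV loads images as BGR
    right_bgr = cv2.imread("right_frame.png")

    # Stereo matching is commonly run on single-channel images, so convert
    # each frame to grayscale before computing disparity.
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Other color models mentioned above are equally accessible, e.g. a
    # luma/chroma separation:
    left_ycrcb = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2YCrCb)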
Once two or more stereo images are captured by the stereo imaging device 102, the control system 101 may utilize one or more processing algorithms to derive depth information for the surgical scene. For example, a system such as shown in
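One conventional way of deriving such depth information is stereo matching over a rectified image pair. The following sketch, which assumes the grayscale frames from the previous sketch are already rectified and that the focal length and baseline are known from calibration (the constants shown are illustrative only), uses OpenCV's semi-global block matcher to compute a disparity map and converts it to depth:

    import cv2
    import numpy as np

    # Illustrative calibration constants; real values come from calibrating
    # the stereo imaging device.
    FOCAL_LENGTH_PX = 800.0   # focal length, in pixels
    BASELINE_MM = 4.0         # distance between the two image sensors

    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,  # multiple of 16
                                    blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Pinhole relation Z = f * B / d; non-positive disparities are invalid.
    depth_mm = np.where(disparity > 0,
                        FOCAL_LENGTH_PX * BASELINE_MM
                        / np.maximum(disparity, 1e-6),
                        0.0)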
Once the depth information (e.g., a depth map) has been derived, the system may, in some embodiments, generate or construct a 3D representation of the surgical scene, including any objects that are present in the scene. For example, in some embodiments, there may be surgical tools present in the scene. In other embodiments, there may be specific patient anatomy (e.g., cysts, tumors, lesions, etc.). As will be discussed in greater detail herein, using the depth information and/or 3D reconstruction, the system can determine and/or measure distances between points in the surgical scene.
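Continuing the sketch above, a pixel with known depth can be back-projected into camera coordinates using pinhole intrinsics (the values shown are placeholders for calibration results), after which the straight-line distance between two specified points follows directly:

    import numpy as np

    # Placeholder pinhole intrinsics (fx, fy, cx, cy), normally obtained
    # from camera calibration.
    FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

    def pixel_to_3d(u, v, depth_map):
        """Back-project pixel (u, v) into 3D camera coordinates (mm)."""
        z = depth_map[v, u]
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        return np.array([x, y, z])

    # Euclidean (straight-line) distance between two specified points
    # (hypothetical pixel coordinates).
    p1 = pixel_to_3d(150, 200, depth_mm)
    p2 = pixel_to_3d(400, 260, depth_mm)
    distance_mm = np.linalg.norm(p2 - p1)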
In some embodiments, and as shown in
As will be discussed herein, various methods exist for determining the location of the plurality of specified points 202. Stated differently, systems and methods, as disclosed herein, may be capable of receiving inputs from various input-output (I/O) devices, such as, for example, a touch screen, keyboard, mouse, gesture recognition, audio recording, and the like. By way of non-limiting example, the system may, in some embodiments, display one of the stereo images of the surgical scene 201 (e.g., the left image or right image) on a touchscreen device 104, while in other embodiments, the system may display a 3D reconstruction image of the surgical scene 201 on a touchscreen device 104. A user may then use their finger or a stylus to provide user input to the interface (e.g., by dragging the stylus across the two-dimensional image shown on a touch display) in order to select and/or specify two or more points 202 to be measured. These measurements may then be displayed on the image on which they were specified (e.g., a snapshot of a surgical scene), or may be displayed on real time images of a surgical scene during a procedure (e.g., locations of points may be tracked over time, and data such as Euclidean distances between points may be overlaid on a display which is updated with new information regarding a surgical scene as it is available). In a further embodiment, the user may also perform additional actions, such as navigating and/or drawing on the displayed image.
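By way of non-limiting example of such a display, and continuing the earlier sketches, the selected points and the computed distance might be overlaid on the displayed image as follows (the pixel coordinates are the hypothetical selections used above):

    import cv2

    overlay = left_bgr.copy()
    pt1, pt2 = (150, 200), (400, 260)   # user-selected pixels
    cv2.line(overlay, pt1, pt2, (0, 255, 0), 2)
    cv2.putText(overlay, f"{distance_mm:.1f} mm", (pt1[0], pt1[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("measurement", overlay)
    cv2.waitKey(0)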
In another embodiment, the system may overlay additional information, such as, for example, a 3D representation 203 of the area to be measured. Referring to
Thus, the 3D surgical view 203 and/or cross-sectional views 303 may be used in some embodiments to help orient a user with the surface topography and associated measurements of the surgical scene, and to help the user ensure that a measurement reflects the proper points in the scene. In a further embodiment, either display 203/303 may be rotated and/or zoomed to help a user view and/or understand how the selected points and measurements relate to the scene topography.
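As a non-limiting illustration of how such a cross-sectional view might be derived, the sketch below samples the depth map along the line between the two selected points (reusing the pixel_to_3d helper defined above) to produce a surface profile, and also sums the lengths of the resulting 3D segments to approximate the distance along the surface, as opposed to the straight chord between the endpoints:

    import numpy as np

    def surface_profile(depth_map, pt1, pt2, samples=200):
        """Sample the reconstructed surface along the line pt1 -> pt2."""
        us = np.linspace(pt1[0], pt2[0], samples).astype(int)
        vs = np.linspace(pt1[1], pt2[1], samples).astype(int)
        return np.array([pixel_to_3d(u, v, depth_map)
                         for u, v in zip(us, vs)])

    profile = surface_profile(depth_mm, (150, 200), (400, 260))

    # Along-surface distance: summed length of consecutive 3D segments.
    surface_dist_mm = np.sum(np.linalg.norm(np.diff(profile, axis=0), axis=1))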
Moreover, as shown in
As shown in
Referring now to
As shown in
In some embodiments, if a user specifies a margin which should be displayed around an object, a system implemented based on this disclosure may use a depth map of the surgical scene to automatically generate the margin and display it in an interface such as shown in
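A minimal sketch of one such automatic margin generation follows. It assumes a binary mask of the specified object is available, and converts the requested margin from millimeters to pixels using a rough, depth-derived scale estimate; a real system might instead trace the margin along the reconstructed surface:

    import cv2
    import numpy as np

    def margin_mask(object_mask, depth_map, margin_mm, fx=800.0):
        """Return a ring around object_mask approximating margin_mm."""
        # Rough millimeters-per-pixel scale at the object's median depth
        # (pinhole relation: ground scale = Z / f).
        median_depth = np.median(depth_map[object_mask > 0])
        mm_per_px = median_depth / fx
        radius_px = max(1, int(round(margin_mm / mm_per_px)))
        kernel = cv2.getStructuringElement(
            cv2.MORPH_ELLIPSE, (2 * radius_px + 1, 2 * radius_px + 1))
        dilated = cv2.dilate(object_mask, kernel)
        return cv2.subtract(dilated, object_mask)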
It should be understood that the above description of interfaces illustrating particular objects and methods of identifying those objects and their resection margins is illustrative only, and that other variations are also possible. For example, in some cases, in addition to, or as an alternative to, highlighting the border of a specified object, a system implemented based on this disclosure may automatically calculate (e.g., using a depth map in a manner similar to that discussed in the context of
In addition to measuring points specified by a user and/or points that are specified automatically by the system, in some embodiments, the system may detect one or more tools within the surgical scene and determine a measurement utilizing their relative locations. Accordingly, in some embodiments, the system may analyze at least two images (e.g., a first image from a first image capture device and a second image from a second image capture device) in order to identify a first surgical tool and a second surgical tool. It should be understood that although the figures and disclosure generally refer to two tools, the system can utilize more than two tools when conducting a tool-based measurement.
Referring now to
In order for the system to accurately identify the two tools 601/602 and determine the specified points to measure (e.g., the tool tips 603/604), the system may obtain or already have information associated with the specific tools. For example, the system may have been given a surgical plan that included a listing of all possible tools, and their specific characteristics, which would be used during the procedure. In an alternative example, the system may have, or obtain, a database of surgical tools, including their specific characteristics. Stated differently, the system may know, prior to analyzing the images, what tools are expected to be present in the surgical scene 600 and what their specific characteristics should be. Specifically, the system may know or obtain the tool size, shape, color, construction material, and the like. In a further embodiment, the tool may have an identifying marker that the system can use to associate it with a known tool’s characteristics. In an alternative embodiment, the system may use a deep-learning neural network that has been trained on various surgical tools to track and/or identify the tools.
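By way of non-limiting illustration, such known characteristics might be organized as a simple lookup keyed by an identifying marker; the marker identifiers and characteristics below are hypothetical:

    # Hypothetical registry associating identifying markers with known
    # tool characteristics; a surgical plan or tool database could
    # populate this structure.
    TOOL_REGISTRY = {
        "marker_017": {"name": "grasper", "tip_offset_mm": 3.0},
        "marker_042": {"name": "shears", "tip_offset_mm": 5.5},
    }

    def lookup_tool(marker_id):
        return TOOL_REGISTRY.get(marker_id)   # None if the tool is unknown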
Once the tool tips 603/604 are identified, the system uses their locations as the specified points and, as discussed herein, calculates a measured distance between them. In some embodiments, the system may display (e.g., on a display device 103/104) a measurement value 605. Stated differently, in some embodiments, the system may analyze the stereo images to identify a first surgical tool and a second surgical tool and then determine a set of specified points based on a first point associated with the first surgical tool and a second point associated with the second surgical tool. A measurement is then calculated based on one of the methods disclosed herein (e.g., direct triangulation, using the depth map, and/or using a 3D reconstruction) and provided to the user (e.g., via display or audio device).
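Assuming a detector (e.g., the trained network mentioned above, not shown here) has returned pixel coordinates for the two tool tips, the measurement reduces to the point-to-point case sketched earlier; the tip coordinates below are hypothetical detector outputs:

    import numpy as np

    tip1_px = (210, 310)   # detected tip of the first tool (hypothetical)
    tip2_px = (480, 295)   # detected tip of the second tool (hypothetical)

    p1 = pixel_to_3d(*tip1_px, depth_mm)
    p2 = pixel_to_3d(*tip2_px, depth_mm)
    tool_gap_mm = np.linalg.norm(p2 - p1)
    print(f"Tip-to-tip distance: {tool_gap_mm:.1f} mm")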
While the examples set forth herein may be implemented using 3D reconstructions generated based on combining left and right images from the overlapping fields of view of a stereo camera’s image sensors, it should be understood that the disclosed technology may also be used in embodiments which provide reconstructions based on images extending beyond individual fields of view of a stereo camera’s image sensors, such as images captured over time as a stereo camera pans across a surgical scene. Examples of approaches which may be taken to support this type of panoramic reconstruction are described below in the context of
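As a non-limiting sketch of the core merging step in such a panoramic reconstruction, per-frame 3D representations (here, Nx3 point arrays) can be transformed into a common world frame using each frame's estimated pose and then concatenated; the poses themselves would come from approaches such as those described below:

    import numpy as np

    def merge_clouds(clouds, poses):
        """Combine per-frame point clouds given (R, t) poses per frame.

        clouds: list of (N, 3) arrays in each frame's camera coordinates.
        poses:  list of (R, t) pairs, with R a 3x3 rotation and t a
                length-3 translation mapping camera to world coordinates.
        """
        world_points = [points @ R.T + t
                        for points, (R, t) in zip(clouds, poses)]
        return np.vstack(world_points)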
Turning first to
Other approaches to determining an imaging device’s pose by combining multiple types of data are also possible, and could be used in some embodiments which provide panoramic reconstruction functionality. An example of such an alternative approach is provided in
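While the referenced approaches are described with respect to the accompanying drawings, the following generic sketch illustrates the basic idea of blending an image-derived pose estimate with one integrated from IMU data. The simple weighted blend of translations shown is illustrative only; production systems would more likely use a Kalman filter or factor-graph formulation:

    import numpy as np

    def fuse_translations(visual_t, imu_t, alpha=0.7):
        """Blend translation estimates; alpha weights the visual estimate."""
        return alpha * np.asarray(visual_t) + (1 - alpha) * np.asarray(imu_t)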
The keyframe based approach of
Whatever approaches are used to determine a 3D reconstruction, whether panoramic or otherwise, in some embodiments, such a reconstruction may be used in a method as illustrated in
The following examples relate to various non-exhaustive ways in which the teachings herein may be combined or applied. It should be understood that the following examples are not intended to restrict the coverage of any claims that may be presented at any time in this application or in subsequent filings of this application. No disclaimer is intended. The following examples are being provided for nothing more than merely illustrative purposes. It is contemplated that the various teachings herein may be arranged and applied in numerous other ways. It is also contemplated that some variations may omit certain features referred to in the below examples. Therefore, none of the aspects or features referred to below should be deemed critical unless otherwise explicitly indicated as such at a later date by the inventors or by a successor in interest to the inventors. If any claims are presented in this application or in subsequent filings related to this application that include additional features beyond those referred to below, those additional features shall not be presumed to have been added for any reason relating to patentability.
A system comprising: a) a first image capture device configured to capture a first image of an interior of a cavity of a patient; b) a second image capture device configured to capture a second image of the interior of the cavity of the patient; c) a two-dimensional display; d) a processor; and e) a non-transitory computer readable medium storing instructions operable to, when executed, cause the processor to perform a set of acts comprising: i) display, on the two-dimensional display, a two-dimensional image of the interior of the cavity of the patient; ii) determine a three-dimensional distance between a plurality of points on the two-dimensional image; and iii) display, on the two-dimensional display, the three-dimensional distance.
The system of example 1, wherein: a) the two-dimensional display is a touch display; b) the plurality of points on the two-dimensional image comprises a first point and a second point; and c) the non-transitory computer readable medium further stores instructions operable to, when executed, cause the processor to: i) receive the plurality of points on the two-dimensional image as user input provided by touching the touch display; ii) determine a three-dimensional location of each of the plurality of points using triangulation based on the first image and the second image; iii) determine the three-dimensional distance as the length of a straight line connecting, and having endpoints at, the first point and the second point in the three-dimensional space; iv) identify a cutting plane comprising the straight line connecting, and having endpoints at, the first point and the second point; and v) display, on the two-dimensional display simultaneously with the two-dimensional image of the interior of the cavity of the patient, a depiction of the straight line connecting, and having endpoints at, the first point and the second point in three-dimensional space on an image selected from a group consisting of: A) a cross-sectional view of a portion of the interior of the cavity of the patient taken on the cutting plane; and B) a three-dimensional reconstruction of the interior of the cavity of the patient which highlights a surface of the interior of the cavity of the patient intersecting the cutting plane.
The system of example 2, wherein the system comprises a laparoscope housing the first image capture device and the second image capture device, and wherein the cutting plane is selected from a group consisting of: a) a plane parallel to a direction of view of the laparoscope; and b) a plane perpendicular to a plane defined by an average surface of the cavity of the patient.
The system of example 1, wherein the two-dimensional image of the interior of the cavity of the patient is selected from a group consisting of: a) the first image of the interior of the cavity of the patient; b) the second image of the interior of the cavity of the patient; and c) a three-dimensional reconstruction of the interior of the cavity of the patient.
The system of example 1, wherein the instructions stored on the non-transitory computer readable medium are operable to, when executed, cause the processor to: a) create a depth map based on the first image and the second image; and b) determine, using the depth map, the three-dimensional distance between the plurality of points on the two-dimensional image as a distance between a first point and a second point from the plurality of points along a surface of the interior of the cavity of the patient on a plane comprising a straight line connecting the first point and the second point.
The system of example 1, wherein: a) the plurality of points on the two-dimensional image comprise: i) a point on a border of an anatomical object in the interior of the cavity of the patient; and ii) a point on an outer edge of a resection margin surrounding the anatomical object in the interior of the cavity of the patient; and b) the non-transitory computer readable medium further stores instructions operable to, when executed, cause the processor to: i) highlight the border of the anatomical object in the two-dimensional image of the interior of the cavity of the patient; and ii) highlight the outer edge of the resection margin surrounding the anatomical object in the two-dimensional image of the interior of the cavity of the patient.
The system of example 1, wherein: a) the system further comprises: i) a laparoscope housing the first image capture device; and ii) an inertial measurement unit (“IMU”) coupled to the laparoscope; and b) the non-transitory computer readable medium further stores instructions operable to, when executed, cause the processor to: i) generate a plurality of representations of the interior of the cavity of the patient, wherein each of the plurality of representations corresponds to a time from a plurality of times; and ii) for each time from the plurality of times, determine a pose corresponding to that time, based on: A) movement information captured from the IMU at that time; and B) the representation from the plurality of representations corresponding to that time; and iii) generate a panoramic view of the interior of the cavity of the patient based on combining the plurality of representations of the interior of the cavity of the patient using the poses corresponding to the times corresponding to those representations.
The system of example 7, wherein for each time from the plurality of times, determining the pose corresponding to that time comprises: a) determining a set of potential poses by, for each potential pose from the set of potential poses, determining that potential pose based on: i) the representation corresponding to that time, and ii) a different representation from the plurality of representations corresponding to a previous time; b) determining a representation pose corresponding to that time based on the set of potential poses; and c) determining the pose corresponding to that time based on: i) the representation pose corresponding to that time; and ii) an IMU pose based on the movement information captured from the IMU at that time.
The system of example 7, wherein: a) each representation from the plurality of representations is a three-dimensional representation of the interior of the cavity of the patient; and b) the non-transitory computer readable medium stores instructions operable to, when executed, cause the processor to generate each representation from the plurality of representations based on a pair of images captured by the first and second image capture devices at the corresponding time for that representation.
The system of example 1, wherein: a) the non-transitory computer readable medium further stores instructions operable to, when executed, cause the processor to: i) analyze the first image and the second image to identify a first surgical tool and a second surgical tool; and ii) determine, based on the analyzing, a first point associated with the first surgical tool and a second point associated with the second surgical tool; and b) the plurality of points on the two-dimensional image comprises the first point associated with the first surgical tool, and the second point associated with the second surgical tool.
The system of example 10, wherein the non-transitory computer readable medium further stores instructions operable to, when executed, cause the processor to display, on the two-dimensional display simultaneously with the two-dimensional image of the interior of the cavity of the patient, a depiction of a straight line connecting, and having endpoints at, the first point associated with the first surgical tool and the second point associated with the second surgical tool.
A method comprising: a) capturing a first image of an interior of a cavity of a patient and a second image of the interior of the cavity of the patient; b) displaying, on a two-dimensional display, a two-dimensional image of the interior of the cavity of the patient; c) determining a three-dimensional distance between a plurality of points on the two-dimensional image; and d) displaying, on the two-dimensional display, the three-dimensional distance.
The method of example 12, wherein: a) the two-dimensional display is a touch display; b) the plurality of points on the two-dimensional image comprises a first point and a second point; and c) the method further comprises: i) receiving the plurality of points on the two-dimensional image as user input provided by touching the touch display; ii) determining a three-dimensional location of each of the plurality of points using triangulation based on the first image and the second image; iii) determining the three-dimensional distance as the length of a straight line connecting, and having endpoints at, the first point and the second point in the three-dimensional space; iv) identifying a cutting plane comprising the straight line connecting, and having endpoints at, the first point and the second point; and v) displaying, on the two-dimensional display simultaneously with the two-dimensional image of the interior of the cavity of the patient, a depiction of the straight line connecting, and having endpoints at, the first point and the second point in three-dimensional space on an image selected from a group consisting of: A) a cross-sectional view of a portion of the interior of the cavity of the patient taken on the cutting plane; and B) a three-dimensional reconstruction of the interior of the cavity of the patient which highlights a surface of the interior of the cavity of the patient intersecting the cutting plane.
The method of example 13, wherein: a) the first and second images of the interior of the cavity of the patient are captured, respectively, by first and second image capture devices; b) the first and second image capture devices are housed within a laparoscope; and c) the cutting plane is selected from a group consisting of: i) a plane parallel to a direction of view of the laparoscope; and ii) a plane perpendicular to a plane defined by an average surface of the cavity of the patient.
The method of example 12, wherein the two-dimensional image of the interior of the cavity of the patient is selected from a group consisting of: a) the first image of the interior of the cavity of the patient; b) the second image of the interior of the cavity of the patient; and c) a three-dimensional reconstruction of the interior of the cavity of the patient.
The method of example 12, wherein the method further comprises: a) creating a depth map based on the first image and the second image; and b) determining, using the depth map, the three-dimensional distance between the plurality of points as a distance between a first point and a second point from the plurality of points along a surface of the interior of the cavity of the patient on a plane comprising a straight line connecting the first point and the second point.
The method of example 12, wherein: a) the plurality of points on the two-dimensional image comprise: i) a point on a border of an anatomical object in the interior of the cavity of the patient; and ii) a point on an outer edge of a resection margin surrounding the anatomical object in the interior of the cavity of the patient; and b) the method further comprises: i) highlighting the border of the anatomical object in the two-dimensional image of the interior of the cavity of the patient; and ii) highlighting the outer edge of the resection margin surrounding the anatomical object in the two-dimensional image of the interior of the cavity of the patient.
The method of example 12, wherein the method further comprises: a) generating a plurality of representations of the interior of the cavity of the patient, wherein each of the plurality of representations corresponds to a time from a plurality of times; and b) for each time from the plurality of times, determining a pose corresponding to that time, based on: i) movement information captured from an inertial measurement unit (“IMU”) coupled to a laparoscope housing first and second image capture devices used to capture the first and second images of the interior of the cavity of the patient at that time; and ii) the representation from the plurality of representations corresponding to that time; and c) generating a panoramic view of the interior of the cavity of the patient based on combining the plurality of representations of the interior of the cavity of the patient using the poses corresponding to the times corresponding to those representations.
The method of example 18, wherein for each time from the plurality of times, determining the pose corresponding to that time comprises: a) determining a set of potential poses by, for each potential pose from the set of potential poses, determining that potential pose based on: i) the representation corresponding to that time, and ii) a different representation from the plurality of representations corresponding to a previous time; b) determining a representation pose corresponding to that time based on the set of potential poses; and c) determining the pose corresponding to that time based on: i) the representation pose corresponding to that time; and ii) an IMU pose based on the movement information captured from the IMU at that time.
The method of example 18, wherein the method comprises generating each representation from the plurality of representations based on a pair of images captured by the first and second image capture devices at the corresponding time for that representation.
The method of example 12, wherein: a) the method further comprises: i) analyzing the first image and the second image to identify a first surgical tool and a second surgical tool; and ii) determining, based on the analyzing, a first point associated with the first surgical tool and a second point associated with the second surgical tool; and b) the plurality of points on the two-dimensional image comprises the first point associated with the first surgical tool, and the second point associated with the second surgical tool.
The method of example 21, wherein the method further comprises displaying, on the two-dimensional display simultaneously with the two-dimensional image of the interior of the cavity of the patient, a depiction of a straight line connecting, and having endpoints at, the first point associated with the first surgical tool and the second point associated with the second surgical tool.
It should be understood that any one or more of the teachings, expressions, embodiments, examples, etc. described herein may be combined with any one or more of the other teachings, expressions, embodiments, examples, etc. that are described herein. The above-described teachings, expressions, embodiments, examples, etc. should therefore not be viewed in isolation relative to each other. Various suitable ways in which the teachings herein may be combined will be readily apparent to those of ordinary skill in the art in view of the teachings herein. Such modifications and variations are intended to be included within the scope of the claims.
It should be appreciated that any patent, publication, or other disclosure material, in whole or in part, which is said to be incorporated by reference herein is incorporated herein only to the extent that the incorporated material does not conflict with existing definitions, statements, or other disclosure material set forth in this disclosure. As such, and to the extent necessary, the disclosure as explicitly set forth herein supersedes any conflicting material incorporated herein by reference. Any material, or portion thereof, that is said to be incorporated by reference herein, but which conflicts with existing definitions, statements, or other disclosure material set forth herein will only be incorporated to the extent that no conflict arises between that incorporated material and the existing disclosure material.
Having shown and described various embodiments of the present invention, further adaptations of the methods and systems described herein may be accomplished by appropriate modifications by one of ordinary skill in the art without departing from the scope of the present invention. Several of such potential modifications have been mentioned, and others will be apparent to those skilled in the art. For instance, the examples, embodiments, geometries, materials, dimensions, ratios, steps, and the like discussed above are illustrative and are not required. Accordingly, the scope of the present invention should be considered in terms of the following claims and is understood not to be limited to the details of structure and operation shown and described in the specification and drawings.