This disclosure relates to a pointer tool for use in endoscopic surgical procedures that includes features allowing the location of a tip of the pointer tool to be determined in two or more dimensions within imaging data captured when the pointer tool is within a field of view of a medical imaging device.
Medical imaging involves the use of a high-definition camera, often coupled to an endoscope inserted into a patient, to provide a surgeon with a clear and precise view within the body. In many instances, the video data collected at the camera will be transmitted to a display device that will render the collected video data on a display so that the surgeon can visualize the internal area of the body that is being viewed by the camera. In many instances, the camera can serve as the eyes of the surgeon during the surgery, since the camera may provide the only view of the internal area of the patient. In many instances, the surgeon may depend on the camera to perform procedures in the internal area of the patient using one or more tools that are specifically configured to aid the surgeon as they perform the medical procedure. The surgeon can view the imaging feed displayed to them during a surgery to manipulate the tool and to navigate the tool within the internal area of the patient.
Medical imaging data such as an endoscopic video feed and/or image can also be used by the surgeon to measure distances within the internal portion of a patient. For instance, if the scale of the image shown on the screen is known, as well as depth information, then the surgeon can use the endoscopic imaging data to measure distances of the internal portion of the patient in two dimensions, three dimensions, or both. In the instance where sufficient information about an endoscopic image exists to measure distances or determine the position of an object in the image, the tools that are used in the endoscopic procedure can be used to measure distances and/or determine the position of a feature in the internal area of the patient. A pointer tool is an example of a tool that can be used by a surgeon during an endoscopic procedure. A pointer tool can include a tip that the surgeon can use to palpate the anatomy of the patient, acting as the “fingers” of the surgeon during an endoscopic surgery. Thus, the surgeon can use the tip of a pointer tool to measure distance in the anatomy or otherwise determine the precise two- or three-dimensional location of a feature of the patient's anatomy. For instance, the tip of the pointer tool can be used to indicate the start point and the end point of a measurement. With respect to determining the location of a feature, the tip of the pointer tool can be placed at a feature of interest, and the position of the tip can be recorded.
However, in order to use the pointer in the manner described above, the endoscopic imaging system, and more specifically the device processing the imaging data, must be able to recognize the location of the tip in the endoscopic imaging data. The imaging data can be represented by a plurality of digital pixels, and thus, in order to determine the tip of the tool in the image, the device must first determine the presence of the pointer tool in the imaging data, and then determine the exact pixels that are associated with the tip of the pointer tool. Determining the position of the tip can be challenging in its own right, but the challenge becomes even more difficult when the tip is obscured or hidden from the view of the camera during the surgical procedure. For instance, if the tip gets buried in the anatomy of the patient such that it is not visible in the endoscopic images, then determining its two- or three-dimensional position can be difficult.
Disclosed herein is a pointer tool that is configured to help an imaging analysis device/process determine the location of the tip in two or three dimensions using medical imaging data. In one or more examples, the pointer tool can include a shaft that on one end includes a pointer. In one or more examples, the pointer includes a first portion that is axially aligned with the shaft, and a second portion that is oriented angularly with respect to the shaft. In one example, the pointer includes a tip located on the distal end of the pointer. In one or more examples, the pointer includes one or more sets of fiducial markers, wherein each fiducial marker can be used to determine location information regarding the tip of the pointer. In one or more examples, the fiducial markers can include one or more ArUco markers. The ArUco markers can be configured to be identified in an endoscopic image, and can be used to determine the location of the tip in two-dimensional or three-dimensional space.
In one or more examples, the sets of ArUco markers can include ArUco markers disposed on faces of one or more fiducial rings. In one or more examples, the ArUco markers can be used to identify the locations of features of the ArUco markers, features of the fiducial rings, and/or other features of the tool. The systems and methods can use the features to then determine the location of the tip by accessing a look-up table which provides geometric relationships between the features and the tip. Thus, the ArUco markers can be used to identify the location of the features for determining the position of the tip.
In one or more examples, a tool, such as for use in endoscopic surgical procedures, comprises: a shaft, wherein the shaft includes a first end, a second end, and a first axis; and a pointer at the first end of the shaft, wherein the pointer comprises: a tip; and a plurality of fiducial markers disposed proximally of the tip, wherein at least one of the fiducial markers is disposed on a surface that extends transversely to the first axis, and wherein the plurality of fiducial markers are configured for providing information for locating the tip of the pointer in an endoscopic image captured by an endoscopic imaging device.
Optionally, the shaft and the pointer are machined from a single piece of material.
Optionally, the pointer is generated using an injection molding process, and wherein the pointer is attached to the shaft.
Optionally, the pointer is over-molded onto the shaft.
Optionally, the plurality of fiducial markers are ArUco markers.
Optionally, each fiducial marker is disposed on a face of a first fiducial ring.
Optionally, each fiducial marker is configured to uniquely identify itself.
Optionally, the pointer comprises a first portion that is axially aligned with the first axis of the shaft; and a second portion aligned with a second axis that is oriented angularly with respect to the first axis of the shaft.
Optionally, the pointer comprises at least one fiducial ring that comprises the plurality of fiducial markers.
Optionally, the pointer comprises first and second fiducial rings.
Optionally, the first and second fiducial rings are spaced apart from the tip of the pointer.
Optionally, the plurality of fiducial markers disposed on the first and second fiducial rings are ArUco markers.
Optionally, each ArUco marker is disposed on a face of the at least one fiducial ring.
Optionally, each ArUco marker comprises a unique identifier that can be used to obtain a distance between one or more features of the fiducial marker and the tip.
Optionally, the pointer comprises a first set of fiducial markers spaced apart from a second set of fiducial markers.
Optionally, the first set of fiducial markers is axially aligned with the first axis of the shaft.
Optionally, the second set of fiducial markers is aligned with a second axis that is oriented angularly with respect to the first axis of the shaft.
Optionally, the tool comprises a controller disposed on the second end of the shaft, and wherein the controller is configured to receive one or more inputs from a user of the tool.
Optionally, the controller comprises one or more buttons that are configured to be pushed by a user to indicate a user's desired action.
Optionally, the buttons are configured to allow a user to navigate a graphical user interface displayed on an external display.
Optionally, the controller comprises a printed circuit board.
Optionally, the controller is configured to provide a message through a cable to an external computing system identifying the tool to the external computing system.
Optionally, the controller is configured to wirelessly communicate with an external computing system.
Optionally, the controller communicates wirelessly with the external computing system using a pre-defined wireless transmission standard.
Optionally, the tool comprises an interface component attached to the second end of the shaft, and wherein the interface component is configured to interface with an external controller.
Optionally, the tool comprises a memory configured to identify the tool to the external controller.
Optionally, the tool comprises a memory configured to identify the tool to the external computing system.
Optionally, at least a portion of the pointer is part of the shaft.
In one or more examples, a method for determining the three-dimensional location of a tip of a tool in an image, such as an endoscopic image, comprises: receiving an image, such as an endoscopic image, wherein the image comprises an image of the tool, optionally wherein the tool comprises any of the tools described above; determining an identity of at least one fiducial marker of a plurality of fiducial markers of the tool; and determining a location (for example, a three-dimensional location) of the tip of the tool based on the determined identity of the at least one fiducial marker.
Optionally, the method includes determining locations of corners of the at least one fiducial marker of the plurality of fiducial markers in the received endoscopic image, wherein the three-dimensional location of the tip of the tool is determined based on the locations of the corners.
Optionally, determining an identity of the at least one fiducial marker of the plurality of fiducial markers comprises: determining the location of the at least one fiducial marker in the received endoscopic image, determining one or more visual patterns of the at least one fiducial marker, and extracting the identity of the at least one fiducial marker based on the one or more visual patterns.
Optionally, determining the three-dimensional location of the tip of the tool based on the determined identity of the at least one fiducial marker comprises: identifying at least one corner of a plurality of corners of the at least one fiducial marker, and determining a three-dimensional location of the at least one corner of the plurality of corners relative to the tip.
Optionally, determining the three-dimensional location of the at least one corner of the plurality of corners relative to the tip comprises: accessing a look-up table comprising a list of the plurality of fiducial markers and a corresponding plurality of corners, wherein each entry pertaining to a corner in the look-up table comprises a three-dimensional location of the respective corner relative to the tip, and extracting the three-dimensional location of the at least one corner relative to the tip.
Optionally, the three-dimensional location of the tip of the tool is determined based on the determined three-dimensional location of the at least one corner of the plurality of corners relative to the tip.
Optionally, the tool comprises a first set of fiducial markers arranged on a first fiducial ring and a second set of fiducial markers arranged on a second fiducial ring.
Optionally, the first and second sets of fiducial markers are spaced apart from the tip of the pointer.
Optionally, the first set of fiducial markers is disposed on a first portion of the pointer that is axially aligned with a first axis of the shaft.
Optionally, the second set of fiducial markers is disposed on a second portion of the pointer that is aligned with a second axis that is oriented angularly with respect to the first axis of the shaft.
Optionally, the plurality of fiducial markers are ArUco markers.
Optionally, the method comprises applying one or more machine learning models to the received endoscopic image if an identity of the plurality of fiducial markers cannot be determined.
Optionally, the method includes applying one or more machine learning models to the received endoscopic image to determine the location of the first fiducial marker in the received endoscopic image.
Optionally, determining the location of the tip of the tool comprises: applying one or more machine learning models to the received endoscopic image to segment the tool from the received endoscopic image, and determining the location of the tip of the tool based on the determined identity of the first fiducial marker and the segmented tool from the received endoscopic image.
In one or more examples, a system for determining the three-dimensional location of a tip of a tool in an endoscopic image comprises: a memory and one or more processors, wherein the memory stores one or more programs that, when executed by the one or more processors, cause the one or more processors to perform any of the methods described above.
In one or more examples, a non-transitory computer readable storage medium stores one or more programs for execution by one or more processors of a computing system for performing any of the methods described above.
It will be appreciated that any of the variations, aspects, features, and options described in view of the systems apply equally to the methods and vice versa. It will also be clear that any one or more of the above variations, aspects, features, and options can be combined.
The invention will now be described, by way of example only, with reference to the accompanying drawings.
Reference will now be made in detail to implementations and examples of various aspects and variations of systems and methods described herein. Although several exemplary variations of the systems and methods are described herein, other variations of the systems and methods may include aspects of the systems and methods described herein combined in any suitable manner having combinations of all or some of the aspects described.
Described herein is a pointer tool that is configured to provide information to an image processing algorithm that can use the information to determine the two-dimensional and/or three-dimensional location of a tip of the pointer tool, for example, within the internal area of a patient during an endoscopic surgical procedure. In one or more examples, the pointer tool can include a shaft that on one end includes a pointer. In one or more examples, the pointer includes a first portion that is axially aligned with the shaft, and a second portion that is oriented angularly with respect to the shaft. In one example, the pointer includes a tip located on the distal end of the pointer. In one or more examples, the pointer includes one or more fiducial rings that have a plurality of fiducial markers disposed on them, wherein each fiducial marker is configured to provide information that can be used by an image processing algorithm to locate the tip of the pointer. In one or more examples, the fiducial markers can include one or more ArUco markers. The ArUco markers can be configured to be identified in an image (e.g., an endoscopic image), and can be used to determine the location of the tip in two-dimensional or three-dimensional space.
In one or more examples, the ArUco markers can be disposed on a face of the fiducial rings, and each fiducial ring can have one or more corners or other geometric features. In one or more examples, the ArUco markers can be used to identify the locations of the geometric features or of features of the ArUco markers themselves. The systems and methods can use the features to then determine the location of the tip by accessing a look-up table which provides geometric relationships between the features and the tip. Thus, the ArUco markers can help to identify the location of the features of the fiducial ring and can also help to identify the position of the tip.
In one or more examples, the pointer tool can include a controller located on an end of the shaft of the pointer tool that is configured to allow a user to provide input commands with the tool, for instance to initiate a measurement using the pointer tool. In one or more examples, the controller of the pointer tool can be communicatively coupled to a cable interface that can be configured to transmit the user inputs received at the controller to a processing system that can process the user input to perform measurements or other procedures involving the tool. In one or more examples, the shaft can include an interface on one end of the tool that can be configured to interface with an external controller. In one or more examples, the pointer tool can include an RFID component that can be configured to identify the tool to the external controller.
In the following description of the various embodiments, it is to be understood that the singular forms “a,” “an,” and “the” used in the following description are intended to include the plural forms as well, unless the context clearly indicates otherwise. It is also to be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It is further to be understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or units but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, units, and/or groups thereof.
Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware, or hardware and, when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that, throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” “generating” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
The present disclosure in some embodiments also relates to a device for performing the operations herein. This device may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, USB flash drives, external hard drives, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each connected to a computer system bus. Furthermore, the computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs, such as for performing different functions or for increased computing capability. Suitable processors include central processing units (CPUs), graphical processing units (GPUs), field programmable gate arrays (FPGAs), and ASICs.
The methods, devices, and systems described herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.
The one or more imaging sensors 110 generate pixel data that can be transmitted to a camera control unit 112 that is communicatively connected to the camera head 108. The camera control unit 112 generates a video feed from the pixel data that shows the tissue being viewed by the camera at any given moment in time. In one or more examples, the video feed can be transmitted to an image processing unit 116 for further image processing, storage, display, and/or routing to an external device (not shown). The images can be transmitted to one or more displays 118, from the camera control unit 112 and/or the image processing unit 116, for visualization, such as by medical personnel (such as a surgeon) for visualizing the surgical cavity 104 during a surgical procedure on a patient.
In one or more examples, the images generated by the system 100 described above can, for example, be used to create two-dimensional and/or three-dimensional maps of the internal anatomy of a patient. For instance, in one or more examples, the images are represented on a screen in two-dimensions and thus can be represented using an (x,y) coordinate system, in which each location or point in the internal portion can correspond to a specific (x,y) coordinate. Even as the camera is repositioned throughout the surgery, the images created by the camera can be stitched together to create an overall two-dimensional mapping of the internal anatomy of the patient, such that no two points in the internal anatomy of the patient viewed by the camera will have the same (x,y) coordinate.
In one or more examples, the two-dimensional model created by the endoscopic video feed during or after a surgical procedure can, for example, be transformed into a three-dimensional model by adding depth information to the two-dimensional model. In one or more examples, depth information pertaining to an endoscopic image or endoscopic video feed can be obtained by using hardware-based methods such as employing the use of stereo cameras, time-of-flight sensors, etc. Additionally or alternatively, the depth information can be acquired algorithmically, for instance by using a structure-from-motion process in conjunction with a camera to acquire depth information. Additionally or alternatively, the depth information can be acquired using external data acquired on the patient such as magnetic resonance images (MRIs), etc. Similar to two-dimensional mappings, the above techniques can be employed to create a three-dimensional map of the internal anatomy of the patient, such that every point visualized by an endoscopic camera can have a unique (x,y,z) coordinate.
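By way of a non-limiting illustration, the following sketch shows how a depth value can convert a two-dimensional pixel coordinate into a three-dimensional camera-space coordinate under an assumed pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) stand in for a prior camera calibration and, like the function itself, are assumptions rather than details taken from this disclosure.

```python
# Minimal sketch: back-project a pixel with known depth into 3D camera space.
# Assumes a pinhole camera model with calibrated intrinsics (fx, fy, cx, cy);
# these names are illustrative and not taken from the disclosure.
def back_project(u: float, v: float, depth: float,
                 fx: float, fy: float, cx: float, cy: float):
    """Map pixel (u, v) at the given depth (distance along the optical axis)
    to an (x, y, z) point in the camera coordinate frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```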
The two- and/or three-dimensional mappings discussed above can be used to generate two- or three-dimensional measurements within the internal portion of the patient. For instance, the distance between two (x,y,z) points within the patient's internal anatomy can be measured in real-time using the three-dimensional mappings acquired using the systems and processes described above. In order to take a measurement, a surgeon may need to accurately identify the start point and end point of such a measurement, and/or the contours of the measurement to be taken. In one or more examples, a surgeon can, for example, utilize a pointer tool 122 to point to the specific points in the internal anatomy of a patient to use in a two- or three-dimensional measurement taken using images from an endoscopic imaging device. In one or more examples, the pointer tool 122 can include a pointer that has a tip 138 located at an end of the pointer tool 122 that can be captured in imaging data generated by the camera head 108 and used by the surgeon to mark or point to a specific point of interest 124 in the imaging data of the patient's internal anatomy. One challenge associated with using a pointer tool to mark points in a patient's anatomy is identifying the precise location of the tip in the endoscopic image.
In order to use the pointer as a “marking” device, the image processor (which maps two- and three-dimensional points in a patient's anatomy to a two/three-dimensional model of the patient's anatomy) must determine precisely where the tip of the pointer is located. The task of finding the tip of the pointer tool can be even more complicated when the tip is obscured by a patient's anatomy (for instance, by being buried in the patient's tissue) or otherwise not completely visible in the endoscopic image due to other occlusions or obfuscations. To this end, the pointer tool 122 can be specifically configured to allow for easy and robust identification of the tip 138 of a pointer tool by an image processing system, such as image processing unit 116, for the purposes of marking a portion of a patient's anatomy or any other context in which the precise two- and/or three-dimensional location of the tip may be required. The pointer tool 122 can include multiple features that collectively enable an image processing system to acquire the precise location of the tip 138 regardless of the orientation of the pointer tool 122, and regardless of whether the tip 138 is visible in the image. For example, the pointer tool 122 can include one or more fiducial markers 128 that can be captured in imaging data and used by an image processing system, such as image processing unit 116, to not only identify the pointer tool 122, but also identify its orientation and the precise two- or three-dimensional location of the tip of the tool, which the image processing system can use to take two- or three-dimensional measurements.
In some examples, the pointer tool 122 can include one or more buttons 132 or other user interface that a user can use to instruct the image processing unit 116 to determine the position of the location of interest 124 based on the position of the tip 138 of the pointer tool 122. For example, the user can position the pointer tool 122 at or near the location of interest 124 and press the button 132 on the pointer tool 122 to indicate that the image processing unit 116 should determine the position of the location of interest 124. The pointer tool 122 can be directly connected to the image processing unit 116 or can be connected to a tool controller 126 configured to receive input from the pointer tool 122. The tool controller 126 can receive a signal from the pointer tool 122 responsive to a button press. The tool controller 126 can send a notification to the image processing unit 116 indicative of the user's instruction to determine the location of interest 124. The image processing unit 116 can then analyze one or more endoscopic images to determine the three-dimensional position of the location of interest 124. The user can reposition the pointer tool 122 and provide another button press to control the system 100 to determine a new location of interest based on the repositioned position of the pointer tool 122. This can be repeated any number of times by the user. In some examples, the pointer tool 122 may include a memory storing identifying information for the pointer tool 122 that the pointer tool 122 may provide to the image processing unit 116 and/or the tool controller 126 so that the image processing unit 116 and/or the tool controller 126 can determine how to interpret communications from the pointer tool 122.
In some examples, the pointer tool 122 does not include any user input features. Instead, the pointer tool 122 may include a shaft extending from a simple handpiece or simply a shaft grasped at one end by a user. In such examples, a user input instructing the image processing unit 116 to determine the three-dimensional position of the location of interest 124 can be provided via any other user interface of system 100, including, for example, a voice control system, a remote control, another tool, or a foot switch. For example, the tool controller 126 may include or be connected to a user interface 140, such as a foot switch, to which a user may provide an input to instruct the image processing unit 116 to determine the three-dimensional position of the location of interest 124. Optionally, the tool controller 126 and user interface 140 can be used to communicate with tools other than the pointer tool 122, such as a cutting tool, and the tool controller 126 can change how it responds to inputs to the user interface 140 based on which tool is being used. The image processing unit 116 may detect the presence of the pointer tool 122 in imaging data, such as by detecting the fiducial marker 128, and may inform the tool controller 126 that the pointer tool 122 is being used. The tool controller 126 may then respond to inputs to the user interface 140 based on configuration data associated with the pointer tool 122 (instead of, for example, configuration data associated with a cutter). Optionally, the configuration data may be customizable based on user preferences so that, for example, mappings of user interface 140 inputs to tool controller 126 outputs can be different for different users.
In one or more examples, the pointer tool 200 can include a pointer 204 which can be located on one end of the shaft. In one or more examples, the pointer 204 can refer to a plurality of features located on the end of the shaft 202 that include a tip 206 that can be used by the surgeon to mark points in the internal portion of the patient's anatomy during a minimally invasive surgical procedure. As will be described in further detail below, the pointer 204 can also include a plurality of features and can be shaped in a manner that can aid an image analysis tool or computer vision tool in acquiring the two-dimensional and/or three-dimensional location of the tip 206 in an endoscopic image. In one or more examples, the pointer 204 can be part of the shaft 202 (i.e., the shaft 202 and the pointer 204 are part of the same body), or alternatively, in one or more examples, the pointer 204 (including the features of the pointer) can be formed as a separate component that is then adhered to the shaft 202. For instance, in one or more examples, the pointer 204 can be made using an injection molding process using a thermoplastic that can be press fit to the shaft 202. In one or more examples, the tip 206 can be over-molded to the shaft to give the pointer 204 additional structural reinforcement.
In one or more examples, the pointer tool 200 can include a controller 208 that is located on an end of the shaft opposite to the end where the pointer 204 is disposed. In one or more examples, the controller 208 (described in further detail below) can include a plurality of components configured to accept an input from the user. For instance, in one or more examples, the controller 208 can be configured to allow the user to indicate the starting points and end points of a measurement to be taken using the pointer. Thus, in one or more examples, when a user engages a button on the controller, an image processing or computer vision process that is used to take the measurement can determine the location of the tip 206 of the pointer 204, and can track the motion of the tip, until the user engages the same or another button on the controller 208 indicating the end of the measurement. In one or more examples, the controller 208 can be used to facilitate user inputs to an external controller and/or display that the pointer tool 200 is communicatively coupled to. Thus, in one or more examples, the controller 208 is not limited to operations involving the pointer 204 of the pointer tool 200 and can be generally used to facilitate interaction between a user and a computing device in much the same way that a mouse or keyboard would.
In one or more examples, the pointer tool 200 can also include a cable assembly 210 that is connected to the controller 208 and is configured to transmit electronic messages/inputs or other data processed by the controller 208 to an external device, such as image processing unit 116 (directly or via one or more intermediate computing systems). In one or more examples, the cable assembly 210 can be configured to transmit messages using one or more varieties of communications protocols/standards, including universal serial bus (USB) or any other interface protocol that can carry communications from the controller to an external device as well as provide any needed power to the controller 208. Alternatively, the controller 208 can be powered using an internal battery.
In one or more examples, the pointer 204 of the pointer tool 200 can be configured to include one or more fiducial markers, which can be captured in endoscopic images and used by an image processing system, such as image processing unit 116, to determine the precise location of the tip 206 of the pointer 204. In one or more examples, the pointer 204 can be configured so that at least one fiducial marker (whose operation is described in further detail below) can be visible to the endoscopic camera no matter the orientation of the pointer tool 200 in relation to the camera that is being used to view the pointer tool in the internal portion of the patient during a minimally invasive procedure.
An example of the use of a pointer tool for facilitating interaction between a user and a computing device is illustrated in FIG. 12.
The image processing unit 116 can track the location of the tip 1208 of the pointer tool 1206 based on the one or more fiducial markers 1210, enabling the image processing unit 116 to determine where the tip 1208 of the pointer tool 1206 is at any given time. Based on this knowledge, the image processing unit 116 can automatically respond to particular positioning of the tip 1208 of the pointer tool 1206 with respect to the GUI 1200, which can enable the user to use the pointer tool 1206 to directly interact with the GUI 1200.
In one or more examples, the pointer 300 can include a second portion 304 that is aligned with a second axis that is oriented angularly with respect to the axis of the shaft and the first portion 302.
In one or more examples, the pointer 300 can include one or more fiducial rings 308A-B located proximally of the tip 306.
In one or more examples, a first fiducial ring 308A can be disposed on the first portion 302 of pointer 300, while a second fiducial ring 308B can be disposed on the second portion 304 of pointer 300. In this way, first fiducial ring 308A can be aligned with axis 310, while second fiducial ring 308B can be aligned with axis 312, thus making the first fiducial ring 308A and second fiducial ring 308B angularly oriented with respect to one another. Thus, in one or more examples, the orientation of the fiducial rings with respect to one another can help to ensure that at least one face of a single fiducial ring is visible to the camera at any orientation of the tool in relation to the camera. As discussed in detail below, the faces of the fiducial rings can have one or more fiducial markers disposed on them that are configured to allow for the location of the tip 306 to be determined from an image of the tool. Thus, in one or more examples, ensuring that at least one face of at least one fiducial ring is visible to the camera at any given time may allow the tip of the tool to be located in any orientation of the pointer tool.
As described above, the fiducial rings that form part of the pointer can be configured to have one or more fiducial markers disposed on them that can aid an image processing tool in locating the precise location of the tip of the pointer tool in an image. In one or more examples, the fiducial markers can be used to identify features of the pointer, which can then be used to locate the tip as described below. Since the shape of the tip may not lend itself to having a fiducial marker placed directly onto it, providing one or more fiducial rings on the pointer of the pointer tool can provide adequate surface area to place a fiducial marker. Additionally, the fiducial ring(s) can provide multiple surfaces on which to locate fiducial markers to help improve the accuracy of the tip location estimates as well as make the overall image processing algorithm used to detect the location of the tip robust to the various orientations that the pointer tool can be placed in with respect to a camera.
In the example of pointer 400, the fiducial rings 402 and 404 can include five sides, thus making each fiducial ring a pentagon that is “wrapped around” the axis of the pointer on which the fiducial ring is disposed. Cross-section 406 illustrates a cross-sectional view of a fiducial ring. Cross-section 406 illustrates that the pentagonal shape of the fiducial ring includes five faces. In one or more examples, the five faces may collectively form two separate rings of 10 corners. In one or more examples, the 10 corners of a single ring are shown in cross-sectional view 406. In one or more examples, each fiducial ring 402 and 404 can form two rings of corners. For instance, fiducial ring 402 can form a first ring of corners labeled “A” in the drawing, and a second ring of corners labeled “B” in the drawing. In one or more examples, and as shown by cross-sectional view 406, each ring of corners can include 10 separate corners, with each of the five faces of the fiducial ring contributing two corners to the ring. Likewise, fiducial ring 404 can also form two rings of 10 corners each, labeled “C” and “D” in the figure.
In one or more examples, each corner of each ring of corners can have its distance from the tip 408 measured a priori so that anytime the location of a corner is determined in an endoscopic image, the location of the tip 408 can be automatically derived based on the a priori measured distance. In one or more examples, the distance from each corner to the tip can be empirically measured at the time of manufacture of the pointer tool. Thus, in one or more examples, when the tool is in use during a surgical procedure, an image processing system analyzing images that include the pointer tool can have beforehand knowledge about the distance relationships between each corner of the fiducial rings and the tip 408. The distance from a corner to the tip can be based on a three-dimensional (x,y,z) coordinate system in which the tip 408 of the pointer is assumed to be the origin (i.e., (0,0,0)). Thus, in one or more examples, the distance of each corner of the rings of corners can be measured in terms of displacement along a particular coordinate axis from the tip 408. Table 410 illustrates an exemplary table of distances for the corners of fiducial rings 402 and 404. Table 410 can include sections 412A-D, with each section representing a ring of corners associated with a fiducial ring. For instance, section 412A can represent the distances from each corner 1-10 of the ring of corners labeled “A”. Section 412B can represent the distances from each corner 1-10 of the ring of corners labeled “B”. Section 412C can represent the distances from each corner 1-10 of the ring of corners labeled “C”. Section 412D can represent the distances from each corner 1-10 of the ring of corners labeled “D”.
Looking at an example entry of table 410, the ring of corners denoted as “A”, and specifically corner “1” of ring A, can have an (x,y,z) entry that represents the distance along each respective coordinate axis from the corner to the tip 408. Thus, in the example of A-1, the corner can be 0.221763 inches away from the tip 408 in the X-direction, 0.069937 inches away from the tip in the Y-direction, and 0.012287 inches away from the tip in the Z-direction. Thus, in one or more examples, if a three-dimensional location of a corner is determined (described in detail below), the entries in table 410 corresponding to the corner can be added to and/or subtracted from the three-dimensional location of the corner to derive the location of the tip 408.
In one or more examples, the corners associated with each entry in table 410 are corners of a fiducial marker. In these examples, each (x,y,z) entry represents the distance from the respective corner of the fiducial marker to tip 408.
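A minimal sketch of such a look-up table is shown below, populated with the single A-1 entry given above; all other entries, the data layout, and the sign convention (the disclosure states only that entries are added and/or subtracted) are illustrative assumptions.

```python
# Sketch of the corner-to-tip look-up table described above. Only the A-1
# entry comes from table 410; the layout and remaining entries are assumed.
import numpy as np

# (ring of corners, corner index) -> (x, y, z) offset of the corner from the
# tip, in inches, measured a priori at the time of manufacture.
CORNER_TO_TIP_OFFSETS = {
    ("A", 1): np.array([0.221763, 0.069937, 0.012287]),
    # ... remaining entries for rings A-D, corners 1-10 ...
}

def derive_tip(corner_xyz: np.ndarray, ring: str, corner: int) -> np.ndarray:
    """Derive the tip's 3D location from one corner's 3D location, assuming
    the tip is the origin of the tool coordinate frame (see above)."""
    return corner_xyz - CORNER_TO_TIP_OFFSETS[(ring, corner)]
```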
In one or more examples, a fiducial marker can be placed on each face of the fiducial rings in order to aid an image processing algorithm in determining the three-dimensional location of one or more corners associated with the fiducial rings.
In one or more examples, each side (i.e., face) of a fiducial ring can have a fiducial marker disposed on it that can be used not only to identify the specific fiducial ring, but also to identify the particular face of the plurality of faces of the fiducial ring. The fiducial marker can provide the locations of the corners of the fiducial ring face or fiducial marker so that the location of the tip can be ultimately determined as described above. In one or more examples, the fiducial marker disposed on each fiducial ring face can be an ArUco marker. As described in detail below, an ArUco marker can provide a visible pattern that can be efficiently found in an endoscopic image and can provide the information needed to uniquely identify the ArUco marker and, thereby, determine the two-dimensional and/or three-dimensional location in space of the tip of the pointer tool.
In one or more examples, the ArUco marker 604 can include a black border 608 that frames the ArUco marker 604. In one or more examples, the border 608 can be disposed on the first row, the last row, the first column, and the last column of the marker. In the example of an 8×8 matrix, the border 608 can be arranged as described above to leave an internal 6×6 matrix. In one or more examples, each block 606 of the internal 6×6 matrix can have either a black or white block placed on it. The examples of an 8×8 matrix and 6×6 internal matrix are meant as examples only and should not be seen as limiting to the disclosure. Thus, in one or more examples, a particular ArUco marker can be configured in a variety of dimensions and grid layouts without departing from the scope of the present disclosure.
In one or more examples, the white and black blocks can be arranged on the 6×6 internal matrix to provide each ArUco marker with a unique pattern which can be used to uniquely identify each fiducial marker and, thereby, each face of each fiducial ring of the pointer tool. In one or more examples, an image processing system can determine the pattern of the blocks of the ArUco marker and obtain the location of the corners of the marker (i.e., the corners of the 8×8 matrix). The image processing system can use the determined pattern to extract the identity of the ArUco marker and can use the determined location of the corners to then find the location of the tip of the pointer tool based on a table correlating ArUco marker identities with locations of the corners of the ArUco markers relative to the tip of the pointer tool, as described above with respect to table 410 of FIG. 4.
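As a point of reference, detecting and identifying such markers can be done with standard computer-vision tooling; the following is a hedged sketch using OpenCV's aruco module (OpenCV 4.7+ API), in which the 6×6 dictionary is chosen only to match the 6×6 internal matrix described above and the file name is hypothetical.

```python
# Sketch: locate and identify ArUco markers in a captured frame with OpenCV.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("endoscopic_frame.png")  # hypothetical captured image
corners, ids, _rejected = detector.detectMarkers(frame)

if ids is not None:
    for marker_id, marker_corners in zip(ids.flatten(), corners):
        # marker_corners has shape (1, 4, 2): the four (x, y) pixel locations
        # of the marker's outer corners. The id can key a look-up that maps
        # the marker to a specific face of a specific fiducial ring.
        print(int(marker_id), marker_corners.reshape(4, 2))
```

The error correction built into standard ArUco dictionaries is consistent with the disclosure's note, below, that fiducial markers may encode error detection and correction information.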
In one or more examples, the pointer tool described above can be used in conjunction with a process 700 for determining the location of the tip of the pointer tool in an endoscopic image. At step 702, an endoscopic image that includes an image of the pointer tool can be received.
At step 704, one or more of the fiducial markers of the pointer tool (described above) can be located within the image received at step 702. At step 706, the fiducial marker can be identified. As described above, in the case of an ArUco marker, identifying the one or more fiducial markers can include identifying one or more of the unique patterns (e.g., bit patterns) of the marker and determining the identity of the specific ArUco marker that is present within the image received at step 702 based on the identified pattern. In one or more examples, the image processing system performing the process 700 can access a look-up table that stores the association between ArUco markers and the specific faces of the fiducial rings that each pattern on the marker is associated with, or the locations of the corners of the ArUco markers. In one or more examples, if the image processing system cannot identify any of the fiducial markers in an image, then the image processing system can apply one or more machine learning models to identify the pointer tool in an image and its orientation. Additionally or alternatively, fiducial markers may encode error detection and correction information such that any discrepancies and/or errors in detecting the bit pattern of the fiducial marker may be minimized to determine the correct information associated with a given fiducial marker.
In one or more examples, one or more machine learning models can be applied in parallel to and in combination with the process 700 of FIG. 7, for instance to segment the pointer tool from the received endoscopic image.
In one or more examples, both the process 700 and the machine learning model processes described above can be used to find the location of the tip in pixel space (i.e., the (x,y) coordinates in the image where the tip is located) and/or in three-dimensional space. In one or more examples, the process 700 can also determine the size of the fiducial marker, and use the determined size as a scale to generate measurements in the endoscopic image. Similarly, the segmented tool generated by the one or more machine learning models can also be used to determine the size of the tool in the image, which can then be used to generate a scale in the image that can be used to generate measurements in the endoscopic image.
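The scale-based measurement described above could, under stated assumptions, look like the following sketch; the physical side length is a hypothetical value, and a single-marker scale is only reliable for points at roughly the marker's depth in a roughly fronto-parallel view.

```python
# Sketch: derive an image scale from a detected marker of known physical size
# and use it for 2D measurements. MARKER_SIDE_MM is an assumed example value.
import numpy as np

MARKER_SIDE_MM = 3.0  # hypothetical physical side length of one marker face

def pixels_per_mm(marker_corners: np.ndarray) -> float:
    """Image scale from the mean pixel length of the marker's four edges."""
    pts = marker_corners.reshape(4, 2)
    side_px = np.mean([np.linalg.norm(pts[i] - pts[(i + 1) % 4])
                       for i in range(4)])
    return side_px / MARKER_SIDE_MM

def measure_mm(p0, p1, scale_px_per_mm: float) -> float:
    """Distance between two image points, converted from pixels to mm."""
    return float(np.linalg.norm(np.asarray(p0, dtype=float)
                                - np.asarray(p1, dtype=float)) / scale_px_per_mm)
```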
Returning to the example of process 700 of FIG. 7, once a fiducial marker has been identified, the two-dimensional locations of the corners of the fiducial marker can be determined in the image, and at step 710 it can be determined which corners of the fiducial rings have been located.
In one or more examples, at step 712, the location of the tip is determined based on the two-dimensional locations of the corners of the fiducial marker in the image and the determination of which corners have been located in step 710. In one or more examples, step 712 can include accessing a look-up table such as table 410 of FIG. 4 to extract the three-dimensional locations of the located corners relative to the tip, from which the location of the tip of the tool can be derived.
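One way such a step could be realized, assuming calibrated camera intrinsics and the tool-frame corner coordinates of table 410 (tip at the origin), is a standard perspective-n-point solve; the sketch below uses OpenCV's solvePnP and is offered as an illustration, not as the disclosed algorithm.

```python
# Sketch: recover the tip's 3D camera-space location from one identified
# marker's detected 2D corners and its a priori 3D tool-frame corners.
import cv2
import numpy as np

def locate_tip_3d(corners_2d, corners_3d_tool, camera_matrix, dist_coeffs):
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(corners_3d_tool, dtype=np.float64),  # (N, 3) tool frame
        np.asarray(corners_2d, dtype=np.float64),       # (N, 2) pixels
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    # With the tip at the tool-frame origin, its camera-space location is
    # simply the translation component of the estimated pose: R @ 0 + t = t.
    return tvec.reshape(3)
```

Where several markers are visible, their corner correspondences could be pooled into a single solve, consistent with the note above that multiple marker surfaces can improve the accuracy of the tip location estimates.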
As noted above, one or more machine learning models can be used in combination with one or more steps of process 700 of FIG. 7.
In some examples, an “instrument-verb-target” machine learning model can continuously process incoming video to detect use of the pointer tool to point to tissue. For example, referring to process 700 of FIG. 7, the detection of the pointer tool being used to point to tissue can automatically trigger the locating of the one or more fiducial markers at step 704.
In some examples, the “instrument-verb-target” machine learning model can identify a region of the image containing the tip region of the pointer tool and the fiducial marker(s), and this information can be used in step 704 to reduce the amount of image data that is processed to locate the fiducial marker(s), which can make locating the fiducial marker faster than processing an entire image. In other words, instead of step 704 including the processing of an entire image to locate the fiducial marker(s), processing may be limited to the region(s) of the image identified by the “instrument-verb-target” machine learning model. Optionally, one or more image enhancement techniques may be applied to the region(s) to improve the identification of the fiducial marker(s), which may also reduce the amount of processing relative to a process that applies image enhancement techniques to the entire image.
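A minimal sketch of this region-limited detection is shown below, assuming an (x, y, w, h) bounding box proposed by the upstream model and the detector object from the earlier detection sketch; both are assumptions for illustration.

```python
# Sketch: run marker detection only inside a model-proposed region, then map
# detected corners back to full-frame pixel coordinates.
import numpy as np

def detect_in_region(frame, region, detector):
    x, y, w, h = region                     # assumed (x, y, w, h) box
    roi = frame[y:y + h, x:x + w]           # detect in the sub-image only
    corners, ids, _rejected = detector.detectMarkers(roi)
    if ids is not None:
        # Shift corner coordinates back into full-frame pixel space.
        corners = [c + np.array([x, y], dtype=np.float32) for c in corners]
    return corners, ids
```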
Although the above refers to the “instrument-verb-target” machine learning model detecting the use of a pointer tool, this is merely exemplary, and it should be understood that the “instrument-verb-target” machine learning model can be trained to detect the use of any tool, including, for example, a cutter, drill, or any other surgical tool. Additionally, the detection of a suitable action need not lead to (or only to) step 704. In some examples, a notification associated with the detection of the action can be provided to the user. For example, the detection of the use of a pointer tool can lead to a display, on a graphical user interface (e.g., GUI 1200 of FIG. 12), of a notification that the pointer tool has been detected.
In one or more examples, instead of the pointer tool including a dedicated controller, the pointer tool can be inserted into an external controller that can be configured to operate with a variety of surgical tools. However, in such a scenario, the external controller may need to identify the tool that is inserted into it, so that it can translate the user's input appropriately to indicate the desired action initiated by the user pressing a button on the external controller. As an example, and with respect to the pointer tool, if a user pushes a specific button on the external controller to initiate a measurement, then in one or more examples, the controller recognizing that a pointer tool is plugged into it can either send a signal (e.g., to an image processing system) indicating the beginning of a measurement or can send a signal (e.g., to an image processing system) that includes the identity of the tool that caused the transmission (described below).
In one or more examples, in addition to being configured to mechanically interface with the external controller 904, interface 910 can also include one or more components that can identify the pointer tool 902 to the external controller 904 that it is connected to. For instance, in one or more examples, interface 910 can include a radio frequency identification (RFID) transmitter 912 that can transmit identification information stored in a memory to the external controller 904, allowing the external controller 904 to determine that it is connected to the pointer tool 902. In one or more examples, the RFID transmitter 912 can transmit a signal to the external controller 904 to cause the external controller 904 to alter the function of the one or more buttons 914 located on the external controller 904 to reflect the operation of the pointer tool 902. The external controller 904 includes an RFID antenna, and an RFID signal received via the RFID antenna can be relayed to a connected system, such as tool controller 126 of FIG. 1.
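Purely as a hypothetical illustration of this behavior (none of the names or mappings below are from the disclosure), a controller's tool-dependent button remapping might be organized as follows:

```python
# Sketch: remap controller buttons based on the identity reported by the
# connected tool (e.g., via RFID). All names and actions are hypothetical.
BUTTON_MAPS = {
    "pointer_tool": {"button_1": "start_measurement",
                     "button_2": "end_measurement"},
    "cutter":       {"button_1": "start_cut",
                     "button_2": "stop_cut"},
}

def handle_button(tool_id: str, button: str) -> str:
    """Translate a physical button press into a tool-specific action."""
    return BUTTON_MAPS.get(tool_id, {}).get(button, "unmapped")
```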
Input device 1020 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, gesture recognition component of a virtual/augmented reality system, or voice-recognition device. Output device 1030 can be or include any suitable device that provides output, such as a display, touch screen, haptics device, virtual/augmented reality display, or speaker.
Storage 1040 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory including a RAM, cache, hard drive, removable storage disk, or other non-transitory computer readable medium. Communication device 1060 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computing system 1000 can be connected in any suitable manner, such as via a physical bus or wirelessly.
Processor(s) 1010 can be any suitable processor or combination of processors, including any of, or any combination of, a central processing unit (CPU), field programmable gate array (FPGA), and application-specific integrated circuit (ASIC). Software 1050, which can be stored in storage 1040 and executed by one or more processors 1010, can include, for example, the programming that embodies the functionality or portions of the functionality of the present disclosure (e.g., as embodied in the devices described above).
Software 1050 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 1040, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 1050 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
System 1000 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
System 1000 can implement any operating system suitable for operating on the network. Software 1050 can be written in any suitable programming language, such as C, C++, Java, or Python. In various embodiments, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated. For the purpose of clarity and a concise description, features are described herein as part of the same or separate embodiments; however, it will be appreciated that the scope of the disclosure includes embodiments having combinations of all or some of the features described.
Although the disclosure, aspects, features, options, and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure, aspects, features, options, and examples as defined by the claims. Finally, the entire disclosure of the patents and publications referred to in this application are hereby incorporated herein by reference.
This application claims the benefit of U.S. Provisional Application No. 63/363,106, filed Apr. 15, 2022, the entire contents of which are hereby incorporated by reference herein.