In recent years, with the advent of advanced volumetric segmentation techniques, raw preoperative imaging data from computed tomography (CT), magnetic resonance imaging (MRI), and other modalities has been used to generate 3D models of the surgical site. Such 3D models provide preoperative planning guidance that helps the surgeon plan the surgical approach for a procedure, e.g., resecting a tumor. There is an unmet need to provide surgically relevant guidance derived from the 3D model of the surgical site in both the preplanning and intraoperative stages of the surgical procedure.
The present disclosure provides a system and method for providing surgically relevant preplanning and intraoperative guidance derived from a 3D model of a surgical site. In particular, the system and method provide preoperative guidance in the form of perfusion zones generated from 3D models, as well as guidance on which blood vessels, i.e., arteries, need to be clamped or clipped. In the preoperative stage, the system presents the user with a user interface that also allows for modification of the automatically generated selective clamping locations as well as the perfusion and ischemic zones. The system and method also provide intraoperative guidance to help the surgeon identify the blood vessels to be clamped while performing the surgical procedure.
According to one embodiment of the present disclosure, an imaging system is disclosed. The imaging system includes an endoscopic camera configured to acquire an intraoperative image of tissue and a blood vessel. The system also includes an image processing device coupled to the endoscopic camera. The image processing device includes a processor configured to: receive a perfusion zone model of the tissue and an operative plan that includes at least one clamp location; and generate an overlay of the perfusion zone model and the at least one clamp location over the intraoperative image of the tissue and the blood vessel, respectively. The system also includes a screen configured to display the overlay and the intraoperative image.
Implementations of the above embodiment may include one or more of the following features. According to one aspect of the above embodiment, the processor may be further configured to generate a depth map and a point cloud based on the intraoperative image. The processor may be further configured to register the perfusion zone model with the intraoperative image based on the depth map and the point cloud. The perfusion zone model may include an ischemic volume zone and a perfused volume zone. The processor may be further configured to register the ischemic volume zone and the perfused volume zone with an ischemic surface and a perfused surface of the tissue, respectively.
According to another embodiment of the present disclosure, a surgery planning device is disclosed. The surgery planning device includes a processor configured to receive a 3D preoperative tissue image having a 3D arterial tree and to generate a 3D perfusion model based on the 3D arterial tree. The device also includes a screen configured to display the 3D perfusion model and a graphical user interface configured to generate a selective clamping guidance plan based on the 3D perfusion model.
Implementations of the above embodiment may include one or more of the following features. According to one aspect of the above embodiment, the processor may be further configured to verify the 3D arterial tree by generating a voxel count bounded by a vessel boundary of the 3D arterial tree. The processor may be further configured to compute a normalized vessel voxel ratio based on the voxel count. Generation of the 3D perfusion model by the processor may further include generating a skeleton model of the 3D arterial tree. Generation of the 3D perfusion model by the processor may also include generating bifurcation points for vessels of the 3D arterial tree. Generation of the 3D perfusion model by the processor may further include computing a volumetric multi-label distance transform map based on the skeleton model. Generation of the 3D perfusion model by the processor may further include generating a tumor volume zone, an ischemic volume zone, and/or a perfused volume zone. The graphical user interface may be further configured to display at least one virtual clamp. The graphical user interface may be further configured to update at least one parameter of the tumor volume zone, the ischemic volume zone, and/or the perfused volume zone based on a location of the at least one virtual clamp. Furthermore, the graphical user interface may allow the user to accept or modify the selective clamping location as well as the tumor volume zone, the ischemic volume zone, and/or the perfused volume zone during the preplanning phase.
According to a further embodiment of the present disclosure, a surgical robotic system is disclosed. The system includes a robotic arm having an endoscopic camera configured to acquire an intraoperative image of tissue and a blood vessel. The system also includes an image processing device coupled to the endoscopic camera. The image processing device includes a processor configured to receive a perfusion zone model of the tissue and an operative plan having at least one clamp location. The processor is further configured to generate an overlay of the perfusion zone model and the at least one clamp location over the intraoperative image of the tissue and the blood vessel, respectively. The system also includes a screen configured to display the overlay and the intraoperative image.
Implementations of the above embodiment may include one or more of the following features. According to one aspect of the above embodiment, the processor may be further configured to generate a depth map and a point cloud based on the intraoperative image. The processor may be further configured to register the perfusion zone model with the intraoperative image based on the depth map and the point cloud. The processor may be further configured to register the perfusion zone model with the intraoperative image based on a fusion of kinematics data of the robotic arm and visual simultaneous localization and mapping (SLAM). The perfusion zone model may include an ischemic volume zone and a perfused volume zone. The processor may be further configured to register the ischemic volume zone and the perfused volume zone with an ischemic surface and a perfused surface of the tissue, respectively.
The present disclosure may be understood by reference to the accompanying drawings, when considered in conjunction with the subsequent, detailed description, in which:
Embodiments of the presently disclosed system are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views. In the following description, well-known functions or constructions are not described in detail to avoid obscuring the present disclosure in unnecessary detail. Those skilled in the art will understand that the present disclosure may be adapted for use with any imaging system.
The present disclosure provides for a system and method for generating preoperative perfusion zones and planning surgery, which may then be used intraoperatively to guide clamping. Clamping during a surgical procedure is commonly used in resection to cut off the blood supply to the portion being resected. An exemplary procedure that involves clamping is a partial nephrectomy, during which a tumor is removed from a kidney. Global clamping during partial nephrectomy results in a large ischemic volume; thus, clamping only the arteries supplying blood to the tumor minimizes the ischemic volume. The system and method generate a 3D model of the vasculature from preoperative images (e.g., CT/MRI) and estimate perfusion zones based on detailed arterial trees. The system also provides planning-stage guidance for clamping selective arteries and subsequently uses the perfusion zones to provide clamping guidance during the surgery to minimize ischemia.
Perfusion zone modeling enables identification of the sub-arterial trees that feed different sub-volumes of organs and tumors, as well as identification of the sub-volume regions fed by each sub-arterial tree. The system simulates the selective clamping process and identifies the set of sub-arterial trees that feed the tumor and the set of sub-arterial trees that perfuse the healthy tissue. Selective clamping also allows for marking the sub-arterial trees that should be clamped while maintaining healthy tissue perfusion. Selective clamping guidance may be used intraoperatively to reduce the ischemic volume while keeping the healthy tissue perfused. The guidance may also be implemented in surgical robotic systems.
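By way of illustration only — the disclosure does not provide an implementation — the following is a minimal sketch of the selective clamping simulation, assuming the arterial tree is represented as a directed graph whose edges are vessel segments and whose leaf nodes are the sub-volumes they perfuse; all identifiers are hypothetical.

```python
# Hypothetical sketch: simulate selective clamping on an arterial tree graph.
import networkx as nx

tree = nx.DiGraph()
tree.add_edges_from([
    ("main_artery", "branch_a"), ("main_artery", "branch_b"),
    ("branch_a", "subvolume_1"), ("branch_a", "subvolume_2"),
    ("branch_b", "tumor_subvolume"),   # branch_b feeds the tumor
])

def ischemic_subvolumes(tree: nx.DiGraph, clamped: str) -> set:
    """Leaf sub-volumes that lose blood supply when `clamped` is occluded."""
    downstream = nx.descendants(tree, clamped) | {clamped}
    return {n for n in downstream if tree.out_degree(n) == 0}

# Clamping branch_b cuts off only the tumor; branch_a keeps the healthy
# sub-volumes 1 and 2 perfused.
print(ischemic_subvolumes(tree, "branch_b"))  # {'tumor_subvolume'}
```

Marking a vessel as clamped and collecting its downstream leaves is exactly the bookkeeping that lets the system separate tumor-feeding sub-trees from those that must remain open.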
With reference to the figures, the surgery planning device 100 includes a processor 141, a memory 142, a storage device 144, an input device 145, and a screen 146, and may communicate over a network 150.
The input device 145 may be any suitable user input device, such as a keyboard, a touch screen, or a pointing device, that can be operated by the operator to send input signals to the processor 141. The processor 141 may be configured to perform the operations, calculations, and/or sets of instructions described in the disclosure and may be a hardware processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a central processing unit (CPU), a microprocessor, and combinations thereof. When an instruction is input by an operator, such as a physician, operating the input device 145, the processor 141 executes a program stored in the memory 142. The processor 141 is configured to load software instructions stored in the storage device 144, and/or transferred from the network 150 or a removable storage device (not shown), into the memory 142 and execute them. The memory 142 may be a transitory storage device, such as RAM (random access memory), that serves as working memory for the processor 141 and temporarily stores data.
The storage device 144 is a non-transitory storage device, e.g., a hard disk drive, flash storage, etc., in which the programs installed on the surgery planning device 100 (including an application program as well as an OS (operating system)) and data are stored. The OS provides a GUI (graphical user interface) that displays information to the operator so that the operator can perform operations through the input device 145. The screen 146 may be any suitable monitor and may include a touchscreen configured to display the GUI for planning surgery.
The surgery planning device 100 is configured to receive a 3D tissue or organ model 160, which includes a 3D vessel model 170 of the arterial tree, generated from preoperative images.
The method of generating the perfusion zone model begins with verifying that the 3D vessel model 170 includes a sufficiently detailed arterial tree.
At step 222, the surgery planning device 100 computes the normalized vessel voxel ratio as a measure of vessel segmentation density. The surgery planning device 100 also predicts whether the 3D vessel model 170 may be used to generate acceptable perfusion zones at step 224 by comparing the resulting vessel voxel ratio to a preset threshold at step 226. If the voxel ratio is below the threshold, then at step 228, the surgery planning device 100 determines that a high-resolution arterial tree cannot be generated, and thus, perfusion zones cannot be generated either. The surgery planning device 100 may then request re-creation of the 3D tissue model 160 with a detailed arterial tree, i.e., a suitable 3D vessel model 170. If the voxel ratio is above the threshold, the surgery planning device 100 proceeds to generate the perfusion zone model 190.
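The disclosure does not specify the normalization or the threshold value; the following is a minimal sketch of one plausible reading (vessel voxels per organ voxel, with a hypothetical threshold), assuming binary NumPy masks for the segmented vessels and organ.

```python
import numpy as np

def normalized_vessel_voxel_ratio(vessel_mask: np.ndarray,
                                  organ_mask: np.ndarray) -> float:
    """Voxels bounded by the vessel walls, normalized per organ voxel."""
    vessel_voxels = np.count_nonzero(vessel_mask & organ_mask)
    organ_voxels = np.count_nonzero(organ_mask)
    return vessel_voxels / max(organ_voxels, 1)

VOXEL_RATIO_THRESHOLD = 0.01   # hypothetical preset threshold

vessel_mask = np.zeros((64, 64, 64), dtype=bool)   # toy segmentation volumes
vessel_mask[30:34, 10:50, 30:34] = True
organ_mask = np.zeros_like(vessel_mask)
organ_mask[5:60, 5:60, 5:60] = True

ratio = normalized_vessel_voxel_ratio(vessel_mask, organ_mask)
if ratio < VOXEL_RATIO_THRESHOLD:
    print("vessel density too low: request a more detailed arterial tree")
else:
    print("vessel density acceptable: proceed to perfusion zone generation")
```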
With reference to the general method of generating the perfusion zone model 190, the surgery planning device 100 generates a skeleton model of the 3D arterial tree along with bifurcation points for the vessels of the tree, and then computes a volumetric multi-label distance transform map based on the skeleton model.
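For illustration, a sketch of skeletonization, bifurcation detection, and a multi-label distance transform, assuming binary NumPy volumes and a recent scikit-image whose skeletonize handles 3D; connected-component labels stand in, hypothetically, for true per-branch labels (which would split the skeleton at bifurcations).

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Synthetic vessel mask: a single tube through the volume.
vessel_mask = np.zeros((40, 40, 40), dtype=bool)
vessel_mask[18:22, 5:35, 18:22] = True

skeleton = skeletonize(vessel_mask)          # centerline of the arterial tree
branch_labels, _ = ndimage.label(skeleton)   # crude stand-in per-branch labels

# Bifurcation points: skeleton voxels with three or more skeleton neighbors.
neighbors = ndimage.convolve(skeleton.astype(int), np.ones((3, 3, 3), int))
bifurcations = skeleton & (neighbors - 1 >= 3)

# Multi-label distance transform: every voxel inherits the label of the
# nearest skeleton voxel, partitioning the volume into perfusion sub-zones.
_, indices = ndimage.distance_transform_edt(branch_labels == 0,
                                            return_indices=True)
zone_map = branch_labels[tuple(indices)]
```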
At step 237, the surgery planning device 100 performs graph clustering to combine low-level nodes (i.e., volumetric regions) perfused by combined edges (i.e., vessel segments). The surgery planning device 100 may use any suitable classical graph clustering algorithm or graph neural network (GNN) to generate the perfusion zone model 190.
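As one example of a classical algorithm that could fill this role — not necessarily the one used — greedy modularity clustering over a hypothetical region graph whose edge weights reflect shared vessel supply:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph()
# Hypothetical low-level volumetric regions r1..r6.
g.add_weighted_edges_from([
    ("r1", "r2", 0.9), ("r2", "r3", 0.8), ("r1", "r3", 0.7),   # one sub-tree
    ("r4", "r5", 0.9), ("r5", "r6", 0.8),                      # another sub-tree
    ("r3", "r4", 0.1),                                         # weak cross link
])

# Strongly co-supplied regions merge into higher-level perfusion zones.
for i, zone in enumerate(greedy_modularity_communities(g, weight="weight")):
    print(f"perfusion zone {i}: {sorted(zone)}")
```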
With reference to the general method of generating the zones of the perfusion zone model 190, the surgery planning device 100 first generates the tumor volume zone 192.
The surgery planning device 100 further generates the ischemic volume zone 194 and the perfused volume zone 196 at step 246 and displays the perfusion zone model 190 on the screen 146 at step 248. The surgery planning device 100 may also display selective clamping guidance for preoperative planning. This may include displaying preferred locations for placing virtual clamps 197 based on the location of the tumor volume zone 192. The surgery planning device 100 may automatically identify the tumor volume zone 192 (e.g., using image processing algorithms), or the tumor volume zone 192 may be identified by the user of the surgery planning device 100 using the GUI, e.g., by drawing boundaries around the tumor volume zone 192 with the input device 145. The GUI and the input device 145 may be used to place, move, and/or remove the virtual clamps 197; the surgery planning device 100 then updates the zones 192-196 in real time, i.e., the boundaries of the zones 192-196 are updated based on the placement of the virtual clamps 197. After adjusting the placement of one or more virtual clamps 197 to achieve the desired size and shape of the zones 192-196, at step 249 the surgery planning device 100 generates an operative plan based on the preoperative planning.
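A minimal sketch of such a real-time zone update, assuming a zone_map volume that assigns each voxel the label of its feeding branch (as in the distance-transform sketch above) and that each virtual clamp resolves to the set of branch labels downstream of it; the mapping from a clamp's 3D location to branch labels is elided here.

```python
import numpy as np

def update_zones(zone_map, organ_mask, clamped_branch_labels):
    """Recompute ischemic/perfused masks for the current clamp placement."""
    ischemic = np.isin(zone_map, list(clamped_branch_labels)) & organ_mask
    perfused = organ_mask & ~ischemic
    return ischemic, perfused

# Toy volume: branch 1 feeds one half of the organ, branch 2 the other.
zone_map = np.zeros((4, 4, 4), dtype=int)
zone_map[:2], zone_map[2:] = 1, 2
organ_mask = np.ones(zone_map.shape, dtype=bool)

# Virtually clamping the vessel feeding branch 2 makes its half ischemic.
ischemic, perfused = update_zones(zone_map, organ_mask, {2})
print(ischemic.sum(), perfused.sum())   # 32 32
```

Because the update is a cheap mask operation over precomputed labels, the zone boundaries can be redrawn interactively as the user drags a virtual clamp in the GUI.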
With reference to the general method of providing intraoperative guidance, the operative plan and the perfusion zone model 190 generated during preoperative planning are then used during the surgical procedure, as described below, to guide selective clamping.
With reference to the figures, the imaging system 10 includes an endoscope 14 having cameras 12 and 13, which are coupled to an image processing unit 20.
The image processing unit 20 is configured to receive and process raw image data signals from the cameras 12 and 13 and to generate blended white light and NIR images for recording and/or real-time display. The image processing unit 20 is also configured to blend images using various AI image augmentations.
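The blending pipeline is not detailed in the disclosure; a minimal sketch, assuming aligned 8-bit frames and an illustrative green colormap and blend weight, might look as follows.

```python
import numpy as np

def blend_white_light_nir(white_light: np.ndarray, nir: np.ndarray,
                          alpha: float = 0.4) -> np.ndarray:
    """Blend a single-channel NIR frame over an aligned RGB white-light frame."""
    overlay = np.zeros_like(white_light)
    overlay[..., 1] = nir                       # render the NIR signal in green
    blended = (1.0 - alpha) * white_light + alpha * overlay
    return blended.astype(np.uint8)

white_light = np.full((480, 640, 3), 120, dtype=np.uint8)   # dummy frames
nir = np.zeros((480, 640), dtype=np.uint8)
nir[200:280, 300:340] = 255                                 # fluorescing region
frame = blend_white_light_nir(white_light, nir)
```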
The imaging system 10 may also be integrated with the surgical robotic system 11, which includes a control tower 21, a surgeon console 30, and one or more robotic arms 40.
The surgeon console 30 includes a first screen 32, which displays a video feed of the surgical site provided by the camera 12, and a second screen 34, which displays a user interface for controlling the surgical robotic system 11. The first screen 32 and the second screen 34 may be touchscreens (e.g., monitors 72) allowing for display of various graphical user inputs. In embodiments, ultrasound images may also be displayed on the first and second screens 32 and 34. The surgeon console 30 also includes a plurality of user interface devices, such as foot pedals 36 and a pair of hand controllers 38a and 38b, which are used by a user to remotely control the robotic arms 40 and the endoscopic camera 12.
The control tower 21 also acts as an interface between the surgeon console 30 and the one or more robotic arms 40. In particular, the control tower 21 is configured to control the robotic arms 40, such as to move the robotic arms 40 and the attached devices, based on a set of programmable instructions and/or input commands from the surgeon console 30, in such a way that the robotic arms 40 and the attached devices execute a desired movement sequence in response to input from the foot pedals 36 and the hand controllers 38a and 38b. The foot pedals 36 may be used to enable and lock the hand controllers 38a and 38b and to reposition the endoscopic camera 12. In particular, the foot pedals 36 may be used to perform a clutching action on the hand controllers 38a and 38b. Clutching is initiated by pressing one of the foot pedals 36, which disconnects (i.e., prevents movement inputs from) the hand controllers 38a and/or 38b from the robotic arm 40 and the attached device. This allows the user to reposition the hand controllers 38a and 38b without moving the robotic arm(s) 40 and the endoscopic camera 12, which is useful when reaching the control boundaries of the surgical space.
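For illustration only, the clutching behavior reduces to a small piece of state logic; this sketch is not the system's actual control code.

```python
class Clutch:
    """Suppress hand-controller motion while a foot pedal is pressed."""

    def __init__(self):
        self.engaged = False   # True while a foot pedal is held down

    def pedal(self, pressed: bool):
        self.engaged = pressed

    def arm_command(self, hand_controller_delta):
        # While clutched, controller motion is not forwarded to the arm,
        # letting the user re-center the controllers in their workspace.
        return None if self.engaged else hand_controller_delta

clutch = Clutch()
clutch.pedal(True)
assert clutch.arm_command((1.0, 0.0, 0.0)) is None    # controller moves, arm does not
clutch.pedal(False)
assert clutch.arm_command((1.0, 0.0, 0.0)) == (1.0, 0.0, 0.0)
```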
The method of providing intraoperative guidance begins with the image processing unit 20 generating a depth map and a point cloud based on the intraoperative image acquired by the endoscope 14, which are used to register the perfusion zone model 190 with the intraoperative image.
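A sketch of the back-projection from a depth map to a point cloud under a pinhole camera model; the intrinsic parameters are hypothetical values, and how the depth map itself is estimated (stereo, learned, etc.) is left open by this sketch.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (meters) into an Nx3 camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth estimate

depth = np.full((480, 640), 0.05)    # flat scene 5 cm away, for illustration
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```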
At step 256, the image processing unit 20 also locally registers the externally visible ischemic and perfused surfaces with the ischemic volume zone 194 and the perfused volume zone 196, respectively, based on the selective clamping guidance provided by the operative plan. The image processing unit 20 then localizes the endoscope 14 at each frame using visual simultaneous localization and mapping (SLAM) at step 258. Visual SLAM may use robotic arm kinematics data as well as the previous and current sets of images to estimate the location and pose of the endoscope 14.
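One way such local registration could be refined — an assumption, not the disclosed method — is point-to-point ICP (e.g., via Open3D), seeded with the pose estimate from the fused kinematics/SLAM localization.

```python
import numpy as np
import open3d as o3d

def register_model(model_points: np.ndarray, scene_points: np.ndarray,
                   init_pose: np.ndarray) -> np.ndarray:
    """Refine the model-to-camera pose with point-to-point ICP."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.005,   # 5 mm correspondence distance (assumed)
        init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 homogeneous transform

model = np.random.rand(500, 3) * 0.05         # synthetic 5 cm organ surface
scene = model + np.array([0.002, 0.0, 0.0])   # slightly offset observation
T = register_model(model, scene, np.eye(4))
```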
At step 260, the image processing unit 20 outputs a visual, augmented reality overlay 300 over the video feed of the endoscope 14. The overlay 300 may include the registered perfusion zone model 190 and the clamp location(s) of the operative plan, allowing the surgeon to identify the blood vessels to be clamped.
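A sketch of one way the overlay could be rendered, projecting registered 3D zone points into the frame with OpenCV; the intrinsics, distortion coefficients, and colors are illustrative placeholders.

```python
import cv2
import numpy as np

def draw_overlay(frame, zone_points_3d, rvec, tvec, K, dist, color=(0, 0, 255)):
    """Project 3D zone points into the frame and blend a colored region over it."""
    pts, _ = cv2.projectPoints(zone_points_3d, rvec, tvec, K, dist)
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts.astype(np.int32)), 255)
    tinted = np.zeros_like(frame)
    tinted[:] = color
    blended = cv2.addWeighted(frame, 0.6, tinted, 0.4, 0)
    return np.where(mask[..., None] > 0, blended, frame)

frame = np.zeros((480, 640, 3), np.uint8)                        # dummy frame
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
zone = np.array([[0.01, 0.01, 0.1], [-0.01, 0.01, 0.1], [0.0, -0.01, 0.1]])
out = draw_overlay(frame, zone, np.zeros(3), np.zeros(3), K, np.zeros(5))
```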
While several embodiments of the disclosure have been shown in the drawings and/or described herein, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope of the claims appended hereto.
This application claims the benefit of, and priority to, U.S. Provisional Patent Application Ser. No. 63/452,201 filed on Mar. 15, 2023. The entire contents of the foregoing application are incorporated by reference herein.