This disclosure is directed to systems and methods of annotating anatomical tree structures in 3D images. In particular, the disclosure is directed to a software application configured to generate three-dimensional models from computed tomography (CT) and other image data sets.
During a surgical procedure, clinicians often use CT images for determining a plan or pathway for navigating through the luminal network of a patient. Absent software solutions, it is often difficult for the clinician to effectively plan a pathway based on CT images alone. This challenge in creating paths to certain targets is especially true in the smaller branches of the bronchial tree where CT images typically do not provide sufficient resolution for accurate navigation.
While software solutions for planning a pathway through the luminal networks of, for example, the lungs are well suited to their intended purpose, they do not assist clinicians in planning for thoracic surgeries. Thoracic surgeries are typically performed laparoscopically or via open surgery through the patient's chest. A lobectomy is one such thoracic procedure, in which an entire lung lobe is removed. One reason for performing a lobectomy is that the lobes of the lung are readily discernable and separated from one another via a fissure. As a result, the vasculature of the lobe is also relatively distinct and can be planned for and addressed during the surgery with reasonable certainty. However, in many instances a lobectomy removes too much tissue, particularly healthy lung tissue. This can be critical in determining whether a patient is even a candidate for surgery.
Each lung lobe is composed of either three or four lung segments. These segments are generally independently vascularized. This means that if the individual segments can be identified, and the vasculature related to the segments distinguished from that of other lobes, a segmentectomy may be undertaken. A segmentectomy procedure can increase the number of patients that are surgical candidates because it enables the surgeon to remove the diseased tissue while leaving all other tissue intact. The problem with segmentectomy procedures is that, while they are more tissue efficient, determining the locations of the relevant vascular structures can be very challenging even for highly trained professionals.
The instant disclosure is directed to addressing the shortcomings of current imaging and planning systems.
One aspect of the disclosure is directed to a system for generating a three-dimensional (3D) model of vasculature of a patient, including a memory in communication with a processor and a display, the memory storing instructions that when executed by the processor: cause a display to display a plurality of images from an image data set in a user interface, the images including at least an axial, sagittal, and coronal view; receive instructions to scroll through at least one of the axial, sagittal, and coronal images; receive an indication of a position of one of the axial, sagittal, and coronal images being within a first portion of a vasculature; snap the remaining images to the position of the received indication; display crosshairs on the images at the position of the received indication; depict the position as a first point in a three-dimensional (3D) view; receive inputs to adjust a level of zoom or a location of the crosshairs in the images; receive an indication that all three crosshairs are located in the center of the first portion of the vasculature; depict a second point in a 3D view at the location of all three crosshairs; depict the first point in an oblique view of the image data set; depict a circle in the oblique view around the first point; receive an input to adjust the size of the circle to match a diameter of the first portion of the vasculature at the second point; receive an input to add a segment; and display the segment in the 3D view, where the segment extends from the first point to a first node at the location of the second point. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The system where a depiction of the segment is also presented in the axial, sagittal, and coronal images. The system where when further segments of the first portion of the vasculature remain unmodeled, the processor executes instructions to: receive an input to scroll through the images in at least one of the axial, sagittal, and coronal images; receive an input identifying a third point in the first portion of the vasculature in at least one of the axial, sagittal, and coronal images; depict a circle in the oblique view around the second point; receive an input to adjust the size of the circle to match a diameter of the first portion of the vasculature; receive an input to add a segment; and display the segment in the 3D view, where the segment extends from the first node to a second node at the location of the third point. The instructions are executed in a repeating fashion until the entirety of the first portion of the vasculature is modeled. Following modeling of all the segments of the first portion of the vasculature, the processor executes instructions to: receive instructions to scroll through at least one of the axial, sagittal, and coronal images; receive an indication of a position of one of the axial, sagittal, and coronal images being within a second portion of the vasculature; snap the remaining images to the position of the received indication; display crosshairs on the images at the position of the received indication; depict the position as a first point in the 3D view; receive inputs to adjust a level of zoom or a location of the crosshairs in the images; and receive an indication that all three crosshairs are located in the center of the vasculature. The segment extends from the first point to a first node at the location of the second point. The first portion of the vasculature includes arteries and the second portion of the vasculature includes veins.
The processor executes instructions to export a 3D model formed of a plurality of the segments to an application for planning a thoracic surgery. The system further including identifying an error in at least one segment of a 3D model formed of a plurality of the segments and inserting a segment before the segment with the error. Following identification of the error, a node is defined between the nodes of the segment containing the error. A diameter of the inserted segment is defined in the oblique view. The segment has a diameter matching the size of the circle around the first point. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
A second aspect of the disclosure is directed to a system for correcting a 3D model of vasculature of a patient, including a memory in communication with a processor and a display, the memory storing instructions that when executed by the processor: select a 3D model for presentation on a display; present the 3D model and axial, coronal and sagittal images from which the 3D model is derived on a user interface; receive an input to scroll or zoom one or more of the images, or receive a selection of a segment of the 3D model; receive an indication of a first point in a first segment in the 3D model in need of correction; depict the first point in an oblique view of the images; depict a circle in the oblique view around the first point; receive an input to adjust the size of the circle to match a diameter of the vasculature in the oblique view; receive an input to add a segment; and display the added segment in the 3D model, where the added segment extends from a point defining a beginning of the first segment to the first point and corrects an error in the 3D model. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The system where the processor executes the instructions until the entirety of the 3D model is reviewed and corrected. The system where segments of the 3D model depict arterial vasculature in a first color and venous vasculature in a second color. The processor further executes an instruction to export the corrected 3D model to a thoracic surgery planning application. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
Yet a further aspect of the disclosure is directed to a method of generating a 3D model of a vasculature of lungs. The method includes displaying a plurality of images from an image data set in a user interface, the images including at least an axial, sagittal, and coronal view; receiving instructions to scroll through at least one of the axial, sagittal, and coronal images; receiving an indication of a position of one of the axial, sagittal, and coronal images being within a first portion of a vasculature; displaying crosshairs on the axial, coronal and sagittal images at the position of the received indication; depicting the position as a first point in a three-dimensional (3D) view; receiving an input to adjust a level of zoom or a location of the crosshairs in the images; receiving an indication that all three crosshairs are located in the center of the first portion of the vasculature; depicting a second point in a 3D view at the location of all three crosshairs; depicting the first point in an oblique view of the image data set; depicting a circle in the oblique view around the first point; receiving an input to adjust the size of the circle to match a diameter of the first portion of the vasculature around the first point; receiving an input to add a segment; and displaying the segment in the 3D view, where the segment extends from the first point to a first node at the location of the second point. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The method where a depiction of the segment is also presented in the axial, sagittal, and coronal images. The method where the segment has a diameter matching the size of the circle around the first point. Where further segments of the first portion of the vasculature remain unmodeled: receiving an input to scroll through the images in at least one of the axial, sagittal, and coronal images; receiving an input identifying a third point in the first portion of the vasculature in at least one of the axial, sagittal, and coronal images; depicting a circle in the oblique view around the second point; receiving an input to adjust the size of the circle to match a diameter of the first portion of the vasculature; receiving an input to add a segment; and displaying the segment in the 3D view, where the segment extends from the first node to a second node at the location of the third point. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
Objects and features of the presently disclosed system and method will become apparent to those of ordinary skill in the art when descriptions of various embodiments thereof are read with reference to the accompanying drawings, of which:
This disclosure is directed to a system and method of receiving image data and generating 3D models from the image data. In one example, the image data is CT image data, though other forms of image data such as Magnetic Resonance Imaging (MRI), fluoroscopy, ultrasound, and others may be employed without departure from the instant disclosure.
In one aspect of the disclosure, a user navigates to the portion of the image data set such that the patient's heart is in view. This allows the user to identify important vascular features around the heart, such as the right and left pulmonary arteries, the left and right pulmonary veins, the aorta, the descending aorta, and the inferior and superior vena cava. These larger vascular features are generally quite distinct and relatively uniform in location from patient to patient. These methods and the 3D models generated may be used for a variety of purposes, including for importation into a thoracic surgery planning system, as outlined below.
In a further aspect the disclosure is directed to an annotation method allowing for manually tracing pulmonary blood vessels from the mediastinum towards the periphery. The manual program described herein can be used for a number of purposes including generating 3D models, performing peer review, algorithm training, algorithm evaluation, and usability sessions, as well as allowing for user correction and verification of algorithm-based 3D model generation from CT image data sets.
The tool enables manual annotation of anatomical trees. Separate trees may be generated for each blood vessel that enters/exits the heart. Each tree model is decomposed into a set of cylinder-shaped segments. In one aspect the user marks segment start and end points. An oblique view is displayed, in which the radius is marked accurately. The segment's cylinder is then added to the tree and displayed.
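The decomposition described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions and do not reflect the actual implementation of the disclosed system.

```python
from dataclasses import dataclass, field

@dataclass
class CylinderSegment:
    """One cylinder-shaped segment of a manually annotated vessel tree."""
    start: tuple            # (x, y, z) of the user-marked start point
    end: tuple              # (x, y, z) of the user-marked end point
    radius: float           # radius marked in the oblique view
    children: list = field(default_factory=list)

    def add_child(self, child: "CylinderSegment") -> "CylinderSegment":
        """Attach a downstream segment to this one."""
        self.children.append(child)
        return child

# A separate tree is built for each vessel entering/exiting the heart,
# e.g. a pulmonary artery (coordinates and radii here are invented):
root = CylinderSegment(start=(0.0, 0.0, 0.0), end=(0.0, 0.0, 10.0), radius=8.0)
branch = root.add_child(CylinderSegment((0.0, 0.0, 10.0), (3.0, 0.0, 18.0), 4.5))
```

Each `add_child` call mirrors the user adding one more marked segment to the tree; a renderer would then draw one cylinder per node.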
As is known to those of skill in the art, the vasculature of the lungs generally follows the airways until reaching the periphery of the lungs, where the blood-air barrier (alveolar-capillary barrier) permits the gas exchange that allows carbon dioxide to be eliminated from the blood stream and oxygen to enter the blood stream as part of normal respiration. However, while the vasculature generally follows the airways, there are instances where portions of the same blood vessel supply two or more segments. In particular, the more central vasculature can be expected to supply multiple segments.
Further, in instances where the segmentectomy is the result of a tumor, the tumor, which is a very blood-rich tissue, is fed by multiple blood vessels. These blood vessels may in fact be supplying blood to the tumor from different segments of the lungs. As a result, it is critical that the thoracic surgeon be able to identify all of the blood vessels entering the tumor and ensure that they are either sutured closed prior to resection or that a surgical stapler is employed, to limit the possibility of the surgeon experiencing an unexpected bleeding blood vessel during the procedure.
The application generating the 3D model 202 may include a CT image viewer (not shown) enabling a user to view the CT images (e.g., 2D slice images from the CT image data) prior to generation of the 3D model 202. By viewing the CT images the clinician or other user may utilize their knowledge of the human anatomy to identify one or more tumors in the patient. The clinician may mark the position of this tumor or suspected tumor in the CT images. If the tumor is identified in, for example, an axial slice CT image, that location may also be displayed in, for example, the sagittal and coronal views. The user may then adjust the identification of edges of the tumor in all three views to ensure that the entire tumor is identified. As will be appreciated, other views may be viewed to assist in this process without departing from the scope of the disclosure. The application utilizes this indication of location provided by the clinician to generate and display an indicator of the location of the tumor 210 in the 3D model 202. In addition to manual marking of the location of the tumor, there are a variety of known automatic tumor identification tools that are configured to automatically process the CT image scan and identify suspected tumors.
The user interface 200 includes a variety of features that enable the clinician to better understand the physiology of the patient and to either enhance or reduce the volume of information presented such that the clinician is better able to understand it. A first tool is the tumor tool 212 which provides information regarding the tumor or lesion that was identified in the 2D CT image slices, described above. The tumor tool 212 provides information regarding the tumor such as its dimensions. Further, the tumor tool 212 allows for creation of a margin 214 around the tumor 210 at a desired distance from edges of the tumor 210. The margin 214 identifies that portion of healthy tissue that should be removed to ensure that all of the cancerous or otherwise diseased tissue is removed to prevent future tumor growth. In addition, by providing an indicator of the margin 214, the user may manipulate the 3D model 202 to understand the vasculature which intersects the tumor 210. Since tumors are blood-rich tissue there are often multiple blood vessels which lead to or from the tumor. Each one of these needs to be identified and addressed during the segmentectomy procedure to ensure complete closure of the blood vessels serving the tumor. Additionally, the margin may be adjusted or changed to limit the impact of the procedure on adjacent tissue that may be supplied by common blood vessels. For example, the margin may be reduced to ensure that only one branch of a blood vessel is transected and sealed, while the main vessel is left intact so that it can continue to feed other non-tumorous tissue. The identification of these blood vessels is an important feature of the disclosure.
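As a rough sketch of how vessels implicated by the margin 214 might be found, the function below treats the tumor as a sphere and the margin as a uniform radial expansion. The function name, the spherical approximation, and the sample coordinates are assumptions for illustration only; a real system would test against the segmented tumor surface from the 3D model.

```python
import math

def vessels_in_margin(tumor_center, tumor_radius, margin, vessel_points):
    """Return the vessel sample points falling inside the tumor-plus-margin
    region (tumor simplified to a sphere of radius `tumor_radius`)."""
    limit = tumor_radius + margin
    return [p for p in vessel_points if math.dist(p, tumor_center) <= limit]

# Shrinking the margin can drop a vessel point from the implicated set,
# mirroring the margin-adjustment behavior described above:
points = [(12.0, 0.0, 0.0), (30.0, 0.0, 0.0)]
near = vessels_in_margin((0.0, 0.0, 0.0), 10.0, 5.0, points)   # 15 mm limit
wide = vessels_in_margin((0.0, 0.0, 0.0), 10.0, 25.0, points)  # 35 mm limit
```

With the 5 mm margin only the closer point is implicated; widening the margin pulls the second vessel point into the set to be addressed during the procedure.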
The next tool depicted in
Both a venous blood vessel generation tool 220 and an arterial blood vessel generation tool 222 are depicted in
While these blood vessel generation tools 220 and 222 and the airway generation tool 216 are described here as being a global number of generations of blood vessels and airways displayed in the 3D model 202, they may also be employed to depict the number of generations distal to a given location or in an identified segment of the 3D model 202. In this manner the clinician can identify a particular branch of an airway or blood vessel and have the 3D model 202 updated to show a certain number of generations beyond an identified point in that airway or blood vessel.
In accordance with the disclosure, a generation algorithm has been developed to further assist in providing useful and clear information to the clinician when viewing 3D models having airways and blood vessels both displayed in the UI 200. Traditionally, in luminal network mapping, each bifurcation is treated as creation of a new generation of the luminal network. The result is that a 3D model 202 may have up to 23 generations of, for example, the airways to the alveolar sacs. However, in accordance with one aspect of the disclosure a generation is defined differently by the software application generating the 3D model. The application employs a two-step model. The first step identifies the bifurcation in a luminal network. In a second step, at the bifurcation both subsequent branching lumens are measured and if one of the branching lumens has a diameter that is similar in size to the lumen leading to the bifurcation, that branching lumen segment is considered the same generation as the preceding segment. As an example, a branching lumen of “similar size” is one that is at least 50% of the size of the lumen leading to the bifurcation. The result is that a clearer indication of the luminal network from the root lumen is depicted in the 3D model at lower levels of generation. Again, this eliminates much of the clutter in the 3D model, providing better actionable data for the clinician.
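The two-step generation rule can be sketched as follows. The 50% threshold comes from the example in the text, while the function name and data layout are illustrative assumptions rather than the disclosed implementation.

```python
def child_generations(parent_diameter, child_diameters,
                      parent_generation, threshold=0.5):
    """Assign a generation number to each child lumen at a bifurcation.

    A child whose diameter is at least `threshold` times the diameter of
    the lumen leading into the bifurcation is treated as a continuation
    of the parent's generation; otherwise it begins a new generation.
    """
    generations = []
    for diameter in child_diameters:
        if diameter >= threshold * parent_diameter:
            generations.append(parent_generation)       # "similar size"
        else:
            generations.append(parent_generation + 1)   # new generation
    return generations

# A 10 mm lumen bifurcating into 6 mm and 3 mm branches: the 6 mm branch
# continues generation 2, while the 3 mm branch starts generation 3.
result = child_generations(10.0, [6.0, 3.0], parent_generation=2)
```

Applied recursively down the tree, this rule keeps the dominant trunk of a vessel or airway in a low generation number, which is what declutters the 3D model at low generation settings.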
Additional features of the user interface 200 include a CT slice viewer 226. When selected, as shown in
A hidden tissue feature 234 allows for tissue that is hidden from the viewer in the current view of the 3D model 202 to be displayed in a ghosted or outlined form. Further, toggles 236 and 238 allow for the 3D model 202 to be flipped or rotated.
As described herein there are a variety of tools that are enabled via the UI 200. These tools may be in the form of individual buttons that appear on the UI 200, in a banner associated with the UI 200, or as part of a menu that may appear in the UI 200 when right or left clicking the UI 200 or the 3D model 202. Each of these tools or the buttons associated with them is selectable by a user employing the pointing device to launch features of the application described herein.
Additional features of the user interface 200 include an orientation compass 240. The orientation compass provides an indication of the orientation of the three primary axes (axial, sagittal, and coronal) with respect to the 3D model. As shown, the axes are defined as axial in green, sagittal in red, and coronal in blue. An anchoring tool 241, when selected by the user, ties the pointing tool (e.g., mouse or finger on touch screen) to the orientation compass 240. The user may then use a mouse or other pointing tool to move the orientation compass 240 to a new location in the 3D model and anchor the 3D model 202 in this location. Upon release of the pointing tool, the new anchor point is established and all future commands to manipulate the 3D model 202 will be centered on this new anchor point. The user may then drag one of the axes of the orientation compass 240 to alter the display of the 3D model 202 in accordance with the change in orientation of the axis selected.
A related axial tool 242 can also be used to change the depicted orientation of the 3D model. As shown, axial tool 242 includes three axes: axial (A), sagittal (S), and coronal (C). Though shown with the axes extending just to a common center point, the axes extend through to the related dot 244 opposite the dot 246 with the lettering. By selecting any of the lettered or unlettered dots, the 3D model is rotated automatically to the view along that axis from the orientation of the dot 244 or 246. Alternatively, any of the dots 244, 246 may be selected and dragged and the 3D model will alter its orientation to the corresponding viewpoint of the selected dot. In this way the axial tool 242 can be used in both free rotation and snap modes.
A single axis rotation tool 248 allows for selection of just a single axis of the three axes shown in the orientation compass 240; by dragging that axis in the single axis rotation tool 248, rotation of the 3D model 202 is achieved about just that single axis. This is different from the free rotation described above, where rotation about one axis impacts the other two depending on the movements of the pointing device.
A 3D model orientation tool 250 depicts an indication of the orientation of the body of a patient relative to the orientation of the 3D model 202. A reset button 252 enables the user to automatically return the orientation of the 3D model 202 to the expected surgical position with the patient lying on their back.
A zoom indicator 254 indicates the focus of the screen. By default, the inner white rectangle will be the same size as the outer grey rectangle. As the user zooms in on the 3D model 202, the relative size of the white rectangle to the grey indicates the level of zoom. In addition, once zoomed in, the user may select the white rectangle and drag it left or right to pan the view of the 3D model displayed in the user interface 200. The inner white rectangle can also be manipulated to adjust the level of the zoom. The plus and minus tags can also be used to increase or decrease the level of zoom.
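The relationship between the two indicator rectangles and the zoom level can be expressed as a simple ratio. This sketch, including the function name, is an assumption about one plausible implementation rather than the disclosed one.

```python
def zoom_level(outer_width: float, inner_width: float) -> float:
    """Zoom implied by the indicator: the smaller the inner (white)
    rectangle relative to the outer (grey) one, the greater the zoom."""
    if inner_width <= 0 or outer_width <= 0:
        raise ValueError("rectangle widths must be positive")
    return outer_width / inner_width

# By default the rectangles coincide (1x zoom); halving the inner
# rectangle's width corresponds to a 2x zoom.
default = zoom_level(200.0, 200.0)
zoomed = zoom_level(200.0, 100.0)
```

Dragging the white rectangle then maps its offset within the grey rectangle to a pan of the zoomed view, consistent with the panning behavior described above.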
A crop tool 404 is also provided in the menu 402. When selected, the crop tool defines a region 406 around the tumor 210 as shown in
One of the benefits of this tool is to be able to identify the root branches of the airways and blood vessels leading to the tumor 210. This is made possible by removing all of the clutter caused by the other objects (e.g., airways and blood vessels) of the 3D model that are not related to the cropped region. This allows the user to consider the airways and blood vessels leading to the tumor 210 and determine which segments are implicated by the tumor 210 and which airways and blood vessels might need resection in order to achieve a successful segmentectomy. In this manner the clinician can adjust the size of the margin to identify the relevant blood vessels and airways to minimize the area for resection.
The region 406 may be depicted in the CT image slices 228, 230, 232. Similarly, the tissue that has been cropped from the 3D model may also be cropped in the CT image slices. Further, the tissue that is hidden by the crop selection may not be completely hidden but may be ghosted out to limit the visual interference but leave the clinician able to ascertain where that structure is in the 3D model 202.
A second menu 410 may be displayed by the user using the pointing tool to select any location within the 3D model 202. The menu 410 includes a depth slider 412 which, enabled by selecting a button 414 shaped like a palm tree, allows the user to change the number of generations related to a tissue at the selected point. This allows for local decluttering around the point selected. Additional features in menu 410 include a clip button 416 which provides an indication of the tissue to be excised in the surgical procedure. By selecting the clip button 416, the user may then use the pointing device to select a location on the 3D model 202. A resection line 418 is drawn on the model at that point and the portions of the 3D model to be resected are presented in a different color. A hide tissue button 420 allows for the selection of tissue using the pointing device and hiding the selected tissue from view to again assist in decluttering the 3D model. A flag button 422 allows for placement of a flag at a location in the 3D model with the pointing device and for the insertion of notes related to that flag.
Though described generally herein in the context of thoracic surgical planning, the software applications described herein are not so limited. As one example, the UI 200 may be shown in the surgical room on one or more monitors. The clinician may then direct surgical staff to select screenshots 426 so that the clinician can again observe the 3D model 202 and familiarize themselves with the structures displayed in the screenshot 426 to advise them on conducting further steps of the procedure.
In accordance with another aspect of the disclosure, the UI 200 may be displayed as part of an augmented reality (AR) or virtual reality (VR) system. For example, the UI 200, and particularly the 3D model 202, may be displayed on a headset or goggles worn by the clinician. The display of the 3D model 202 may be registered to the patient. Registration allows for the display of the 3D model 202 to be aligned with the physiology of the patient. Again, this provides greater context for the clinician when performing the procedure and allows for incorporating the plan into the surgical procedure. Alternatively, the UI 200 and the 3D model 202 may be projected such that the 3D model 202 overlays the actual tissue of the patient. This may be achieved in both open and laparoscopic procedures such that the 3D model provides guidance to the clinician during the procedure. As will be appreciated, such projection requires an image projector in the surgical suite or associated with the laparoscopic tools.
Reference is now made to
Application 718 may further include a user interface 716 such as UI 200 described in detail above. Image data 714 may include CT image scans or MRI image data. Processor 704 may be coupled with memory 702, display 706, input device 710, output module 712, network interface 708 and imaging device 715. Workstation 701 may be a stationary computing device, such as a personal computer, or a portable computing device such as a tablet computer. Workstation 701 may comprise a plurality of computing devices.
Memory 702 may include any non-transitory computer-readable storage media for storing data and/or software including instructions that are executable by processor 704 and which control the operation of workstation 701 and, in some embodiments, may also control the operation of imaging device 715. In an embodiment, memory 702 may include one or more storage devices such as solid-state storage devices, e.g., flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, memory 702 may include one or more mass storage devices connected to the processor 704 through a mass storage controller (not shown) and a communications bus (not shown).
Although the description of computer-readable media contained herein refers to solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 704. That is, computer readable storage media may include non-transitory, volatile, and non-volatile, removable, and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by workstation 701.
Application 718 may, when executed by processor 704, cause display 706 to present user interface 716. An example of the user interface 716 is UI 200 shown, for example, in
Network interface 708 may be configured to connect to a network such as a local area network (LAN) consisting of a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet. Network interface 708 may be used to connect between workstation 701 and imaging device 715. Network interface 708 may be also used to receive image data 714. Input device 710 may be any device by which a user may interact with workstation 701, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface. Output module 712 may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port known to those skilled in the art. From the foregoing and with reference to the various figures, those skilled in the art will appreciate that certain modifications can be made to the disclosure without departing from the scope of the disclosure.
Though the systems described hereinabove are very useful for planning of thoracic surgeries, the 3D model information employed in such a thoracic surgery planning system must first be established with confidence. There are a number of methods of generating a 3D model from a CT or MRI data set. Some of these methods employ various neural networks, machine learning, and artificial intelligence (AI) to process the image data set from, for example, a CT scan and to recognize the patterns to create a 3D model. However, due to the highly overlapping nature of the vasculature, and the limits of image processing, manual methods of analyzing the data set and generating the 3D model or updating/correcting the 3D model are desirable. A further aspect of the disclosure is directed to a tool 800 that allows for expert annotation of pre-procedure images (e.g., a CT scan or an MRI data set) to define all or a portion of the vasculature of the patient, particularly the vasculature around the lungs and heart in the thoracic cavity.
In accordance with one method of use 900 as outlined in
At step 912 the level of zoom and the position of the crosshairs 814 are adjusted in each view window 804, 806, 808 so that the crosshairs 814 are positioned in the center of the selected vasculature. Once so positioned, at step 914 an input is received indicating that all three crosshairs 814 are in the center of the vasculature in the three view windows 804, 806, 808. This input may be, for example, clicking again in the view window where the vasculature was originally identified (e.g., axial view window 808, step 904). Following receipt of the input, at step 916 a second point 818 is placed in the 3D view 810 depicting the location of the three crosshairs 814 in the 3D volume of the image scan data. At step 918, the first point 815 is depicted as a cross 817 in an oblique view 816. The oblique view 816 is the view from within the CT image data set from the first point 815 along an axis that would connect with the second point 818.
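The manner in which the three crosshairs 814 jointly define one location in the 3D volume can be illustrated with a short sketch. Each orthogonal view pins two in-plane coordinates plus its slice index, so every coordinate is reported twice; the sketch below reconciles them into a single voxel coordinate. The function name `point_from_crosshairs` and the averaging of the redundant coordinates are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def point_from_crosshairs(axial, coronal, sagittal):
    """Combine crosshair positions from three orthogonal view windows
    into a single 3D voxel coordinate.

    Each view contributes two in-plane coordinates plus its slice index:
      axial    -> (x, y) in-plane, z = slice index
      coronal  -> (x, z) in-plane, y = slice index
      sagittal -> (y, z) in-plane, x = slice index

    Redundant coordinates are averaged so a small placement error in
    one window does not dominate the result.
    """
    ax_x, ax_y, ax_z = axial       # (x, y, slice z)
    co_x, co_z, co_y = coronal     # (x, z, slice y)
    sa_y, sa_z, sa_x = sagittal    # (y, z, slice x)
    x = np.mean([ax_x, co_x, sa_x])
    y = np.mean([ax_y, co_y, sa_y])
    z = np.mean([ax_z, co_z, sa_z])
    return np.array([x, y, z])

# Example: all three views agree on voxel (120, 88, 45)
p = point_from_crosshairs((120, 88, 45), (120, 45, 88), (88, 45, 120))
```

In practice the tool would verify that the redundant coordinates agree within a tolerance before accepting the input at step 914.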
With the first point 815 depicted in the oblique view 816 as a cross 817 as shown in
Following sizing of the circle 820, a segment name is optionally added via the input device (e.g., a keyboard) in naming box 822 at step 926 and the add button 824 is selected at step 928. The segment 826 is displayed in the 3D view 810 at step 930 and in the axial, coronal, and sagittal views at step 932 (
Following depiction of the segment 826 in the 3D view 810, the application asks the user to determine whether all segments have been marked and, if not, the user is directed in step 936 to scroll the images similar to step 904, but within the same branch of the selected vasculature as the first segment 826. The process of steps 904-936 repeats to identify a next point and to generate the next segment to depict in the 3D view 810 and the view windows 804, 806, 808 (Step 938) as depicted in
Those of skill in the art considering the process will understand that in a second segment between a second and a third point, the diameter of the segment will be based on the diameter of the second point 818. If the diameter of the vasculature at the second point is similar to the first point, the segment will be substantially cylindrical. If the diameter at the second point is less than the diameter of the first point, then the segment 826 may be adjusted to reflect the change in diameter that decreases from the first point to the second. This process continues with the subsequent segment updating the diameter of the preceding segment until all segments of the selected vasculature have been marked and depicted in the 3D view 810.
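The diameter behavior described above can be sketched as a tapered cylinder whose radius is interpolated linearly between the two annotated points, so a vessel that narrows between points is rendered as a cone-like segment rather than a uniform cylinder. The function `tapered_segment` and its surface-sampling strategy are hypothetical illustrations, not the application's actual rendering code.

```python
import numpy as np

def tapered_segment(p1, r1, p2, r2, n_rings=8, n_sides=12):
    """Sample points on the surface of a tapered cylinder (frustum)
    running from point p1 (radius r1) to point p2 (radius r2).

    The radius varies linearly along the segment axis, modeling a
    vessel whose diameter changes between two annotated points.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    axis = p2 - p1
    length = np.linalg.norm(axis)
    axis /= length
    # Build two unit vectors perpendicular to the segment axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, axis)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    pts = []
    for t in np.linspace(0.0, 1.0, n_rings):
        center = p1 + t * length * axis
        radius = (1 - t) * r1 + t * r2   # linear taper along the segment
        for a in np.linspace(0, 2 * np.pi, n_sides, endpoint=False):
            pts.append(center + radius * (np.cos(a) * u + np.sin(a) * v))
    return np.array(pts)
```

Updating a preceding segment when the next point is placed then amounts to regenerating its surface with the newly measured downstream radius.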
If it is believed that all segments of the originally identified vasculature have been marked (Yes at step 934), the method moves to step 938 where the user must determine whether all the vasculature has been identified and incorporated into the 3D model. For example, if only the arteries extending from the right pulmonary artery have been identified and modeled, the answer to question 940 is no, and the process returns to step 904 so that additional vasculature may be identified and mapped in the image data set. For example, the user may employ the processes described above to generate a 3D map of the left pulmonary artery. Subsequently, the left inferior pulmonary vein, the left superior pulmonary vein, the right superior pulmonary vein, and the right inferior pulmonary vein may all be mapped using the processes described above. If the user, viewing the 3D model and considering the image data set, believes that all such vasculature has been modeled, then the answer to question 940 is yes. The application 718 may receive input from an input device to save the 3D model at step 940 and the process ends. The saved 3D model may be imported as the 3D model 202 and analyzed using the user interface 200 of a thoracic surgery planning system of
While the foregoing describes the basic functionality, additional functionality of the application depicted in the user interface 802 is available. For example, at any point during the process a previously defined segment may be observed to extend outside the boundaries of the vasculature as depicted by the cursor 828 in
At any point during the modeling, a bifurcation in the vasculature may be identified in the view windows 804, 806, 808. For example, the cursor 828 in
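One natural way to represent a bifurcation of this kind is a tree of annotated points, in which a node with two or more children marks the branch point and each parent-child pair corresponds to one renderable segment. The `VesselNode` structure below is a hypothetical sketch, not the disclosed application's data model.

```python
from dataclasses import dataclass, field

@dataclass
class VesselNode:
    """One annotated point in a vasculature model.

    Each node stores its 3D position and the vessel diameter measured
    there; children represent downstream continuations, so a node with
    two or more children models a bifurcation.
    """
    position: tuple          # (x, y, z) in image coordinates
    diameter: float          # vessel diameter at this point
    label: str = ""
    children: list = field(default_factory=list)

    def add_branch(self, child):
        """Attach a downstream node and return it for chaining."""
        self.children.append(child)
        return child

    def segments(self):
        """Yield (parent, child) pairs, one per renderable segment."""
        for c in self.children:
            yield (self, c)
            yield from c.segments()

# Sketch: a trunk that bifurcates into two daughter vessels
# (positions and diameters are made-up illustrative values).
root = VesselNode((120, 88, 45), 8.0, "right pulmonary artery")
bif = root.add_branch(VesselNode((118, 92, 50), 7.0))
bif.add_branch(VesselNode((110, 95, 55), 4.0))
bif.add_branch(VesselNode((125, 98, 56), 3.5))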
Any node or segment may be selected and rotated in the 3D view 810 as shown by comparison of
Described herein are a variety of interactions with the windows 804-808 and 816. As described herein, these are undertaken with a mouse, touchpad, or other pointing device useable with a computing device. Additionally or alternatively, the UI 802 may also receive input via a keyboard or a touchscreen, both of which can be particularly useful when annotating the 3D model or the images. The use of these tools for interacting with the UI 802 eases navigation through the views and the 3D model and enables translation, rotation, and zoom of the 3D model.
Ultimately the entirety of the image data set and all, or substantially all, of the vasculature in the image data set can be modeled using the method of
Though described above with respect to a fully manual operation, the instant disclosure is not so limited. In accordance with one aspect of the disclosure, instead of manual modeling of the vasculature, the vasculature in the image scan data may first be automatically processed by an algorithm, neural network, machine learning, or artificial intelligence to produce an initial 3D model. Known techniques for this initial 3D model creation can be employed. These techniques generally use contrast and edge detection methods of image processing to identify the different portions of the vasculature, identify other structures in the image scan data, and make determinations regarding which are veins, arteries, and airways based on a variety of factors. These systems may also employ connected component algorithms to seek to form boundaries on the identified structures and limit the bleeding of the modeled segments into neighboring but separate vasculature.
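A heavily simplified sketch of such an automatic first pass appears below, assuming an intensity threshold followed by SciPy's `ndimage.label` for the connected-component step. The threshold and minimum-size values are illustrative assumptions; real systems combine far more sophisticated edge detection and classification stages.

```python
import numpy as np
from scipy import ndimage

def initial_vessel_mask(volume, threshold=150, min_voxels=50):
    """Sketch of an automatic first pass over a contrast-enhanced volume:
    threshold by intensity, label connected components, and discard
    components too small to be candidate vasculature (speckle)."""
    mask = volume > threshold                 # crude contrast threshold
    labels, n = ndimage.label(mask)           # 6-connected components in 3D
    if n == 0:
        return np.zeros_like(mask)
    # Size of each component, indexed by label 1..n
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1
    return np.isin(labels, keep)
```

The connected-component step is what keeps a modeled segment from bleeding into a neighboring but separate vessel: two structures that never touch in the thresholded volume receive different labels.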
Regardless of the techniques employed, a method of reviewing the 3D model is described with reference to
Still further, those of ordinary skill in the art will recognize that the corrected 3D models, and their differences from the 3D models that were automatically generated, may be used to train the neural networks, algorithms, AI, etc. to improve the output of the automatic 3D model generation systems.
Another aspect of the disclosure is directed to a partial automation of the process described above. The
This process, described generally above, can be observed in
The method 1100 starts at step 1102 with the receipt of a user selection of a point 854 in the 3D model 850 (
At step 1106 the tool 800 receives a selection of a point (858
In
The results can be seen in
Referring back to method 1100, if at step 1116 it is determined there are no more blood vessels 860 to generate in the 3D model, the method ends; however, if there is a desire to add more blood vessels 860 to the 3D model 850, the method returns to step 1106. As depicted in
Those of skill in the art will recognize that though the method 1100 is described above in connection with quickly expanding an existing 3D model, the disclosure is not so limited, and instead of receiving the selection of point 854 in the 3D model, the selection may be made in the axial image viewer 808 (or any other viewer) to identify the one point within the blood vessel 860. In this manner, the 3D model may be entirely generated using the method 1100.
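Expansion from a single selected point within a blood vessel can be sketched as a simple region grow: starting at the seed voxel, 6-connected neighbors are accepted while their intensity stays within a window. The function `grow_vessel` and the intensity-window criterion are assumptions for illustration only, not the disclosed method.

```python
import numpy as np
from collections import deque

def grow_vessel(volume, seed, low, high):
    """Grow a vessel mask outward from one user-selected seed voxel,
    accepting 6-connected neighbors whose intensity lies in [low, high].
    A breadth-first flood fill bounded by the intensity window."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([tuple(seed)])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        v = queue.popleft()
        if mask[v] or not (low <= volume[v] <= high):
            continue
        mask[v] = True
        for d in offsets:
            n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            in_bounds = all(0 <= n[i] < volume.shape[i] for i in range(3))
            if in_bounds and not mask[n]:
                queue.append(n)
    return mask
```

Whether the seed is picked in the 3D model 850 or in one of the image viewers, the same expansion applies once the seed is mapped to a voxel coordinate.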
Any of the above aspects and embodiments of the present disclosure may be combined without departing from the scope of the present disclosure.
While detailed embodiments are disclosed herein, the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms and aspects. For example, embodiments of an electromagnetic navigation system, which incorporates the target overlay systems and methods, are disclosed herein; however, the target overlay systems and methods may be applied to other navigation or tracking systems or methods known to those skilled in the art. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the disclosure in virtually any appropriately detailed structure.
Number | Date | Country
---|---|---
63166114 | Mar 2021 | US
63110271 | Nov 2020 | US