This disclosure relates to image processing systems and methods to improve 3D segmentation and anatomy classification. In particular, the disclosure is directed to improved techniques and methods for identifying structures within computed tomography (CT) images and three-dimensional (3D) models derived therefrom, to improve surgical or treatment planning.
In many domains there is a need to segment and/or classify voxels in volumetric data. In medical imaging, there are many open-source and proprietary systems that enable manual segmentation and/or classification of medical images such as CT images. These systems typically require a clinician, or a technician in support of a clinician, to manually review the CT images and to effectively paint in the blood vessels or other structures, sometimes pixel by pixel. The user normally must scroll through many 2D slices and mark many pixels in order to obtain an accurate 3D segmentation/classification. As can be appreciated, such manual efforts are tedious and time-consuming, rendering such methods very difficult to utilize for any type of surgical planning.
One aspect of the disclosure is directed to a method of image processing including: acquiring a computed tomography (CT) image data set of the lungs; segmenting the CT image data set to identify airways and/or blood vessels in the CT image data set; skeletonizing the segmented CT image data by identifying the center points of the airways and/or blood vessels and forming a skeleton; graphing the skeleton to identify branches of the skeleton; assigning a branch identification (ID) to each branch of the skeleton; and associating each voxel of the segmented CT image data set with a branch ID, where the branch ID of the voxel is the same as the branch ID of the closest center point. The method of image processing also includes generating a three-dimensional (3D) mesh model from the graph of the skeleton. The method of image processing also includes associating each vertex of the 3D mesh model with a branch ID, and displaying in a user interface the 3D mesh model and a slice image from the image data set, where portions of the 3D mesh model that appear in the slice image are highlighted. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
A further aspect of the disclosure is directed to a method of image processing including: segmenting an image data set, skeletonizing the segmented image data set, graphing the skeletonized image data set, assigning a branch identification (ID) for each branch in the graph, and associating each voxel of the segmented image data set with a branch ID. The method of image processing also includes generating a three-dimensional (3D) mesh model from the graphed skeletonized image data set; and associating each vertex of the 3D mesh model with a branch ID. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The method where a plurality of vertices with the same branch ID form an object. The method where each pixel of the segmented image data set is associated with an object in the 3D mesh model based on their branch ID. The method further including presenting the 3D mesh model and the image data set on a user interface. The method where the image data set is presented on the user interface as a slice image of the image data set. The method where any portion of the slice image that corresponds to a portion of an object of the 3D mesh model is colored the same as the corresponding object. The method where the color of an object in the 3D mesh model may be changed upon receipt of an appropriate command. The method where a change of color of the object results in a change in color of a corresponding portion of the image. The method where the slice images of the image data set are scrollable. The method further including receiving a selection of a pixel in the displayed image of the image data set. The method may also include determining if a branch ID is associated with the selected pixel. The method may also include highlighting all pixels in the displayed image having the same branch ID in a common color when the selected pixel is associated with a branch ID. The method further including highlighting in the user interface an object of the 3D mesh model having the same branch ID as the selected pixel in the image data set. The method further including receiving a selection of an object in a displayed 3D mesh model and determining the branch ID of the object. The method may also include displaying all pixels in a displayed image of the image data set having the same branch ID as the selected branch in a common color. The method further including highlighting the object in the 3D mesh model in a contrasting color. The method further including defining a cluster ID for each branch. The method further including displaying all objects having a common cluster ID in a common color. The method where the cluster ID is based on a commonality of the objects of the cluster. The method where the commonality of the objects of the cluster is based on selecting the objects by the smallest angle of intersection between connected branches and objects. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
One aspect of the disclosure is directed to a system including: a processor in communication with a display, and a computer-readable recording medium storing thereon instructions that, when executed by the processor: read an image data set from the computer-readable recording medium, segment the image data set, skeletonize the segmented image data set, graph the skeletonized image data set, assign a branch identification (ID) for each branch in the graph, associate each voxel of the segmented image data set with a branch ID, and generate a three-dimensional (3D) mesh model from the graphed skeletonized image data set. The instructions also cause the processor to associate each vertex of the 3D mesh model with a branch ID, and to display, in a user interface, the 3D mesh model and a slice image from the image data set, where portions of the 3D mesh model that appear in the slice image are highlighted. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The system where the instructions, when executed by the processor, further: receive a selection of a pixel in the displayed slice image, determine if a branch ID is associated with the selected pixel, and highlight all pixels in the displayed image having the same branch ID in a common color when the selected pixel is associated with a branch ID; or receive a selection of an object in a displayed 3D mesh model, determine the branch ID of the object, and display all pixels in the slice image having the same branch ID as the selected branch in a common color. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
Various exemplary embodiments are illustrated in the accompanying figures. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures referenced below are not necessarily drawn to scale. Also, where considered appropriate, reference numerals may be repeated among the figures to indicate like, corresponding, or analogous elements.
The disclosure is directed to improved techniques and methods for identifying structures within CT images and 3D models derived therefrom. The improved identification of structures allows for additional analysis of the images and 3D models and enables accurate surgical or treatment planning.
One aspect of the disclosure is described with respect to the steps outlined in
Those of skill in the art will understand that while generally described in conjunction with CT image data, that is, a series of slice images that make up a 3D volume, the instant disclosure is not so limited and may be implemented in a variety of imaging techniques including magnetic resonance imaging (MRI), fluoroscopy, X-ray, ultrasound, positron emission tomography (PET), and other imaging techniques that generate 3D image volumes, without departing from the scope of the disclosure. Further, those of skill in the art will recognize that a variety of different algorithms may be employed to segment the CT image data set, including connected components, region growing, thresholding, clustering, watershed segmentation, edge detection, and others.
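As a rough illustration of one of the simpler approaches named above, the following Python sketch combines intensity thresholding with connected-component filtering. The Hounsfield-unit window and the function names are illustrative assumptions, not values taken from the disclosure.

```python
# A minimal threshold-plus-connected-component segmentation sketch.
# The HU window below is an assumed, illustrative value.
import numpy as np
from scipy import ndimage

def segment_vessels(ct_volume: np.ndarray, low: float = -600.0, high: float = 200.0) -> np.ndarray:
    """Return a binary mask of voxels whose intensity falls in [low, high]."""
    mask = (ct_volume >= low) & (ct_volume <= high)
    # Group touching voxels into connected components and keep the largest,
    # suppressing isolated noise that also falls inside the window.
    labels, num = ndimage.label(mask)
    if num == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```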
After the CT image data set is segmented (step 104), a skeleton is formed from the volumetric segmentation at step 106. A skeleton is a thinned shape that represents the general form of an object; here it traces the center points of the segmented airways and blood vessels.
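A minimal sketch of this step, assuming scikit-image is acceptable: `skeletonize` accepts 3D input in recent releases (older releases expose the same operation as `skeletonize_3d`). Treating its output as the disclosure's skeleton is an assumption.

```python
# Thin a binary volumetric segmentation to a one-voxel-wide skeleton.
import numpy as np
from skimage.morphology import skeletonize

def skeletonize_mask(mask: np.ndarray) -> np.ndarray:
    return skeletonize(mask.astype(bool))
```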
Following skeletonization, a graph 210 is created at step 108 as shown in
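One plausible construction of such a graph, sketched here with networkx, joins 26-neighboring skeleton voxels with edges; this decomposition is an assumption about step 108, not code from the disclosure.

```python
# Build a graph whose nodes are skeleton voxels and whose edges join
# voxels that touch within a 3x3x3 neighborhood (26-connectivity).
import itertools
import numpy as np
import networkx as nx

def skeleton_to_graph(skeleton: np.ndarray) -> nx.Graph:
    points = set(map(tuple, np.argwhere(skeleton)))
    graph = nx.Graph()
    graph.add_nodes_from(points)
    offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    for p in points:
        for o in offsets:
            q = (p[0] + o[0], p[1] + o[1], p[2] + o[2])
            if q in points:
                graph.add_edge(p, q)
    return graph
```

Nodes of degree one are end points and nodes of degree greater than two are junctions; each simple path between two such nodes can then be treated as one branch and given its own branch ID.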
At step 110, each voxel of the volumetric segmentation is associated with a branch ID. The branch ID associated with each voxel is the branch ID of the closest skeleton point, which is derived from the graphing process of step 108. In this way, every voxel of the volumetric segmentation is assigned a branch ID.
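A sketch of this nearest-point rule using scipy's KD-tree; `skeleton_points` (an (N, 3) array of skeleton coordinates) and `point_branch_ids` (one branch ID per skeleton point) are assumed outputs of the graphing step above.

```python
# Give every segmented voxel the branch ID of its nearest skeleton point.
import numpy as np
from scipy.spatial import cKDTree

def label_voxels(mask: np.ndarray, skeleton_points: np.ndarray,
                 point_branch_ids: np.ndarray) -> np.ndarray:
    tree = cKDTree(skeleton_points)
    voxels = np.argwhere(mask)            # coordinates of all segmented voxels
    _, nearest = tree.query(voxels)       # index of the closest skeleton point
    labels = np.zeros(mask.shape, dtype=np.int32)
    labels[tuple(voxels.T)] = point_branch_ids[nearest]
    return labels
```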
Next, at step 112, a 3D mesh is generated from the graph. A 3D mesh is the structural representation of a 3D model, consisting of polygons. 3D meshes use reference points, here the voxels identified in the graph, to define shapes with height, width, and depth. There are a variety of different methods and algorithms for creating a 3D mesh; one such algorithm is the marching cubes algorithm. The result of the 3D mesh generation is again a 3D model that is similar in outward appearance to a 3D model generated via segmentation techniques, see
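Since marching cubes is named explicitly, a sketch using scikit-image's implementation is natural; the iso-level of 0.5 for a binary mask is an assumption.

```python
# Extract a triangle mesh (vertices, faces) around the binary segmentation.
import numpy as np
from skimage.measure import marching_cubes

def mask_to_mesh(mask: np.ndarray):
    verts, faces, _normals, _values = marching_cubes(mask.astype(np.float32), level=0.5)
    return verts, faces
```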
At step 114, each vertex (point of connection of the polygons) in the 3D mesh is then associated with a branch ID. The branch ID may be assigned by finding the closest skeleton point to the vertex. In this way, every vertex of the 3D mesh is associated with a specific branch ID.
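A matching sketch for this step applies the same nearest-skeleton-point rule to the mesh vertices; the array names are assumptions carried over from the sketches above.

```python
# Give each mesh vertex the branch ID of its closest skeleton point.
import numpy as np
from scipy.spatial import cKDTree

def label_vertices(verts: np.ndarray, skeleton_points: np.ndarray,
                   point_branch_ids: np.ndarray) -> np.ndarray:
    _, nearest = cKDTree(skeleton_points).query(verts)
    return point_branch_ids[nearest]
```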
From the original volumetric segmentation at step 104, a variety of additional data related to each branch has also been developed, including branch size (diameter), branch class, branch type (e.g., artery or vein), branch status, and others. These data may be used to limit the data in the 3D mesh model and to perform other analytical steps as described herein. For example, the view of the 3D mesh model may be limited to only those vessels larger than a pre-defined diameter, e.g., >1 mm.
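One hedged way to realize such a diameter filter: hide every mesh face whose vertices belong to branches narrower than the threshold. The per-branch lookup `branch_diameter` is an assumed by-product of segmentation, not an API from the disclosure.

```python
# Keep only faces whose three vertices all belong to branches at least
# min_diameter_mm wide; everything else is culled from the rendered mesh.
import numpy as np

def visible_faces(faces: np.ndarray, vertex_branch_ids: np.ndarray,
                  branch_diameter: dict, min_diameter_mm: float = 1.0) -> np.ndarray:
    keep = np.array([branch_diameter.get(int(b), 0.0) >= min_diameter_mm
                     for b in vertex_branch_ids])
    return faces[keep[faces].all(axis=1)]
```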
With each voxel and each vertex of the mesh associated with a specific branch ID, a user interface (UI) 300 may be presented to a clinician such as seen in
The 3D mesh model can now be overlaid on the original image data set (e.g., the CT image data set). Because the 3D mesh model and the image data set are registered to one another, overlaying them means that as a user scrolls through the images of the original image data set, portions of the 3D mesh model that align with that slice of the original image data set are revealed. By using different colors, such as blue for veins and red for arteries, the locations of these blood vessels can be seen in the image data set. The nature of the portions of the 3D mesh model (e.g., vein, airway, or artery) may be determined using a variety of algorithms and techniques based on the physiology of the patient, the size and shape of the structure in question, and its connectedness to other components, as well as other criteria known to those of skill in the art. As will further be appreciated, other colors may be used for identifying aspects of the 3D mesh model and providing that information to a user via a user interface.
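A sketch of that overlay for a single displayed slice, assuming the per-voxel branch labels from step 110 and an arbitrary per-branch color table (e.g., red tuples for arteries, blue for veins); none of these names come from the disclosure.

```python
# Render one CT slice in grayscale and tint every labeled pixel with the
# color assigned to its branch.
import numpy as np

def color_slice(ct_slice: np.ndarray, label_slice: np.ndarray, colors: dict) -> np.ndarray:
    span = float(np.ptp(ct_slice)) or 1.0          # guard against a constant slice
    gray = np.clip((ct_slice - ct_slice.min()) / span, 0.0, 1.0)
    rgb = np.stack([gray] * 3, axis=-1)
    for branch_id, color in colors.items():        # e.g. {7: (1.0, 0.0, 0.0)}
        rgb[label_slice == branch_id] = color
    return rgb
```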
As shown in
In one embodiment, the clinician can click on a branch in the 3D mesh model 302. The application driving the user interface 300 can then synchronize the second pane 308 such that the selected branch is visible in the displayed CT image 306. The pane may be centered on the CT image 306 that contains the closest skeleton point to the branch selected in the 3D mesh model 302. The portions of the branch which are visible in that CT image 306 may be highlighted as depicted in
Alternatively, the clinician, when scrolling through the CT images 306 in pane 308, may select a point in the CT image 306. If that point, a pixel, corresponds to a segmentation (e.g., an airway, an artery, or a vein), all of the voxels that belong to the same branch can be highlighted in both the CT image as shown in
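A sketch of that pick interaction, reusing the assumed label volume from the earlier sketches: a clicked pixel either carries no branch ID (background) or yields a mask of every voxel sharing its ID, which the UI can highlight in both panes.

```python
# Map a clicked pixel to a highlight mask for its whole branch, or None.
from typing import Optional
import numpy as np

def pick_branch(labels: np.ndarray, slice_index: int, row: int, col: int) -> Optional[np.ndarray]:
    branch_id = labels[slice_index, row, col]
    if branch_id == 0:                 # background: no branch ID associated
        return None
    return labels == branch_id
```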
In an alternative option depicted in
Another aspect of the disclosure is the use of clustering of branches. In accordance with one aspect of the disclosure, the nature of the cluster may be selected or defined by a user via a user interface, either with respect to the CT image data set or the 3D mesh model. In one example of clustering, a clinician may be interested in the blood vessels which feed or are in proximity to a tumor or lesion. While identifying the blood vessels that are visible within a small window or margin around the tumor may be useful, a better indication of blood flow and the related vasculature can be gained by considering all of the blood vessels within the lobe of the lung where the tumor or lesion appears. By viewing all of the blood vessels (all branches) in the single lobe of the lung where the lesion or tumor is found, determinations can be made on how to proceed with the treatment, the ordering of resection steps, and the approximate locations of critical structures (e.g., arteries for clamping, suturing, or sealing), so that sufficient access to manipulate tools is provided prior to the resection steps. Further, particular complications related to the resection may be understood (e.g., proximity to the aorta, the heart, or other anatomical features) long before the surgery is attempted.
In this example, all of the branches which are considered a portion of the cluster (e.g., the lung lobe in question) are associated with a cluster ID. When clustering is utilized, in the example described above with respect to
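As a hedged illustration of this cluster-ID association, the sketch below assumes a mapping from branch ID to cluster ID (for example, a per-lobe label) and recovers every voxel of a chosen cluster for display in a common color.

```python
# Mask of all voxels whose branch belongs to the requested cluster.
import numpy as np

def cluster_mask(labels: np.ndarray, branch_to_cluster: dict, cluster_id: int) -> np.ndarray:
    members = [b for b, c in branch_to_cluster.items() if c == cluster_id]
    return np.isin(labels, members)
```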
A further example of clustering can be useful in pathway generation as depicted in
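The summary above mentions selecting objects by the smallest angle of intersection between connected branches. A sketch of that rule might compare branch direction vectors as below; the direction vectors themselves are assumed inputs rather than quantities the disclosure defines.

```python
# From a parent branch's direction, pick the connected child branch whose
# direction deviates least, extending a pathway cluster branch by branch.
import numpy as np

def straightest_child(parent_dir: np.ndarray, child_dirs: dict) -> int:
    p = parent_dir / np.linalg.norm(parent_dir)
    def angle(v: np.ndarray) -> float:
        v = v / np.linalg.norm(v)
        return float(np.arccos(np.clip(np.dot(p, v), -1.0, 1.0)))
    return min(child_dirs, key=lambda b: angle(child_dirs[b]))
```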
A further aspect of the disclosure relates to the use of neural networks or other appropriate learning software or algorithms in connection with the methods described herein. Before use, a neural network must be trained. This is done by allowing the neural network to analyze images (e.g., from CT image data sets) in which the locations and identities of the vessels, airways, and structures are known and which have been analyzed and annotated to depict the locations of these structures in accordance with the methods described herein. Thus, the expedited and highly accurate analysis and identification of the blood vessels and airways provides a high-quality baseline to determine the efficacy and completeness of the neural network training.
During training of a neural network, a score is provided following each analysis of each frame by the neural network. Over time and training, the neural network becomes more adept at distinguishing the structures based on their size, changes in shape, location in the CT image data set, their interconnections, etc. The result is a segmented image data set where the distinctions between airways, blood vessels, and other tissues can be identified and displayed without the need for manual marking and identification of the structures. There are a number of methods of training neural networks for use in the methods of the instant disclosure. As will be appreciated, in at least one embodiment, regardless of how robust the neural network becomes, a UI 300 may include a prompt requesting that the clinician confirm the analyses of the neural network.
In order to help physicians plan the treatment, a pulmonary vasculature analysis tool has been developed for the analysis of CT images, in which the following steps are performed. First, the pulmonary vasculature is segmented from the patient CT. Next, in order to simplify the representation, the segmentation is skeletonized and a graph is generated based on the skeleton. In the third step, a deep learning classifier is employed to separate arteries and veins. The resulting classified segmentation is visualized to the user with the ability to edit.
Blood vessel classification relies on analyzing the local environment of each vessel and on tracking the vessel origin. To achieve this, a convolutional neural network classifies each vessel independently based on its surroundings, followed by a post-processing algorithm that incorporates global knowledge to refine the classification.
The input to the neural network is a 3D patch extracted around the main axis of the vessel, and the output is the probability of the vessel being an artery or a vein. The post-processing includes tracking anatomical pathways along the vasculature and performing majority voting. The neural network was trained and evaluated in a leave-one-out cross-validation scheme on 10 fully annotated CT scans from the CARVE dataset found at https://arteryvein.grand-challenge.org/.
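A minimal sketch of the majority-voting refinement, assuming per-branch artery probabilities from the network and a list of branch IDs along one tracked pathway; the data layout is an assumption.

```python
# Collapse per-branch network outputs along one anatomical pathway into a
# single artery/vein label by majority vote.
def majority_vote(path_branch_ids: list, artery_prob: dict) -> str:
    votes = [artery_prob[b] > 0.5 for b in path_branch_ids]
    return "artery" if sum(votes) > len(votes) / 2 else "vein"
```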
The performance was measured using two methods. First, the accuracy of individual vessel classification was calculated, with an average result of 86%. In addition, the accuracy specifically on the segmental blood vessels was evaluated, and the accuracy in this case was 87%. The tool achieves significant classification performance that can be further improved by additional annotated training data or by a more accurate input skeletonization.
Performing lung cancer therapy such as nodule ablation or surgery requires a physician to study the patient anatomy, specifically the blood vessels in the vicinity of the lesion or the area of interest. For example, when performing a lobectomy, a surgeon is interested in the blood vessels that enter and leave the specific lobe. Physicians usually look at a CT scan beforehand and use it to plan the therapeutic procedure.
A tool has been developed to automatically segment lung blood vessels using deep learning to assist a physician in visualizing and planning a therapeutic procedure in the lung. The suggested network architecture is a 3D fully convolutional neural network based on V-Net. (Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016). The network input is 3D CT patches with a size of 64×64×64 voxels, normalized for pixel spacing. The output is a corresponding blood vessel segmentation of the same size. The network was trained on 40 scans from the CARVE14 dataset, found at https://arteryvein.grand-challenge.org/, and was validated using 15 different scans from the same dataset.
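The stated 64×64×64 input and output sizes suggest a tiling inference loop along these lines; `model` stands in for the trained V-Net-style network, and the edge-padding strategy is an assumption.

```python
# Tile a CT volume into 64^3 patches, run the network on each patch, and
# stitch the per-patch segmentations back into a full-size volume.
import numpy as np

def segment_by_patches(ct: np.ndarray, model, patch: int = 64) -> np.ndarray:
    pad = [(0, (-s) % patch) for s in ct.shape]    # pad up to a multiple of 64
    vol = np.pad(ct, pad, mode="edge")
    out = np.zeros_like(vol, dtype=np.float32)
    for z in range(0, vol.shape[0], patch):
        for y in range(0, vol.shape[1], patch):
            for x in range(0, vol.shape[2], patch):
                out[z:z + patch, y:y + patch, x:x + patch] = model(vol[z:z + patch, y:y + patch, x:x + patch])
    return out[: ct.shape[0], : ct.shape[1], : ct.shape[2]]
```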
The network achieved an average validation Dice score of 0.922. The network was also compared to an existing rule-based algorithm. Visual inspection revealed that the network was far better than the rule-based algorithm and was even able to correct some mistakes in the ground truth. In terms of computational cost, the network was able to fully segment a new CT in an average time of ~15 seconds, while the classical algorithm's average time was ~10 minutes. The neural network can be further trained to be more robust to pathologies and to serve as a basis for a blood vessel classification network that distinguishes between arteries and veins.
Those of ordinary skill in the art will recognize that the methods and systems described herein may be embodied in one or more applications operable on a computer system (
Reference is now made to
Application 1018 may further include a user interface 1016. Image data 1014 may include image data sets such as CT image data sets and others useable herein. Processor 1004 may be coupled with memory 1002, display 1006, input device 1010, output module 1012, network interface 1008 and fluoroscope 1015. Workstation 1001 may be a stationary computing device, such as a personal computer, or a portable computing device such as a tablet computer. Workstation 1001 may embed a plurality of computer devices.
Memory 1002 may include any non-transitory computer-readable storage media for storing data and/or software including instructions that are executable by processor 1004 and which control the operation of workstation 1001 and, in some embodiments, may also control the operation of imaging device 1015. In an embodiment, memory 1002 may include one or more storage devices such as solid-state storage devices, e.g., flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, memory 1002 may include one or more mass storage devices connected to the processor 1004 through a mass storage controller (not shown) and a communications bus (not shown).
Although the description of computer-readable media contained herein refers to solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 1004. That is, computer readable storage media may include non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by workstation 1001.
Application 1018 may, when executed by processor 1004, cause display 1006 to present user interface 1016. User interface 1016 may be configured to present to the user a variety of images and models as described herein. User interface 1016 may be further configured to display and mark aspects of the images and 3D models in different colors depending on their purpose, function, importance, etc.
Network interface 1008 may be configured to connect to a network such as a local area network (LAN) consisting of a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet. Network interface 1008 may be used to connect between workstation 1001 and imaging device 1015. Network interface 1008 may be also used to receive image data 1014. Input device 1010 may be any device by which a user may interact with workstation 1001, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface. Output module 1012 may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port known to those skilled in the art.
While several aspects of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular aspects.
This application is a continuation of U.S. patent application Ser. No. 17/108,843, filed on Dec. 1, 2020, which claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 62/965,288, filed on Jan. 24, 2020, the entire content of each of which is hereby incorporated by reference herein.