This disclosure relates generally to the use of machine learning-based methods and apparatus for predicting or segmenting regions of interest in medical images. Disclosed embodiments include methods for training deep learning models that can be used to segment structures in brain images.
Prediction or segmentation involves identifying regions of interest in images. The accurate segmentation of structures or other regions of interest in patients' medical images can enhance clinical outcomes of procedures using those images. For example, deep brain stimulation (DBS) therapy has shown clinical efficacy in the mediation of symptomatic motoric behavior associated with Parkinson's disease, essential tremor, dystonia, and other conditions. DBS therapy makes use of electrodes inserted into certain target regions of interest in the patient's brain, such as the subthalamic nucleus (STN), internal segment of the globus pallidus (GPi), and/or the external segment of the globus pallidus (GPe). Images of the patient's brain, such as those produced using magnetic resonance imaging (MRI) scanners, are commonly used by clinicians to guide and accurately place the electrodes. The precise identification of these target regions of interest in the images can enhance the accuracy of the electrode placement and the efficacy of the therapy.
Automated processes for segmenting brain images offer certain advantages. They can, for example, streamline clinical workflow, eliminate human bias associated with the segmentation process, and provide consistent results. The use of machine learning processes such as deep learning, including convolutional neural networks (CNNs), for purposes of image analysis and computer vision is generally known and disclosed, for example, in the following articles: Krizhevsky et al., ImageNet classification with deep convolutional neural networks, http://code.google.com/p/cuda-convnet/; and LeCun et al., Deep Learning, Nature, vol. 521, issue 7553, pp. 436-444. Methods and systems for generating images of patients' brains are disclosed in the Sapiro U.S. Pat. No. 9,600,778.
Standard clinical images are generated at many health care facilities by MRI scanners operating at 1.5 Tesla and 3 Tesla (i.e., 1.5 T and 3 T) field strengths. Unfortunately, it may be difficult to clearly visualize regions of interest such as the STN, GPi and GPe in clinical images taken using these 1.5 T and 3 T MRI scanners. Although MRI scanners operating at higher field strengths, such as 7 T, are known and can produce images with higher contrast resolution than those of 1.5 T and 3 T scanners, which may enhance the ability of users to visualize and distinguish between adjacent structures or regions, these higher field strength scanners are not widely available for clinical applications.
There remains a need for improved methods and apparatus for accurately segmenting regions of interest in medical images. In particular, there is a need for improved methods for segmenting regions of interest in clinical images produced using commonly available MRI scanners such as 1.5 T and 3 T scanners. Efficacy of therapy guided by these clinical images, such as DBS electrode placement, may be enhanced by such methods and apparatus.
One example relates to a ground truth image generation method. Embodiments include a method for generating image file pairs usable to train deep learning models, comprising: receiving access to a first image file of a three-dimensional body portion of a subject including structures of interest (SOI), wherein the first image file is defined by a first coordinate space and a first effective resolution such as a contrast resolution; receiving access to a second image file of the three-dimensional body portion of the subject including the SOI, wherein the second image file is defined by a second coordinate space and a second effective resolution such as a contrast resolution that is greater than the first resolution; segmenting the SOI in the second image file; transforming the segmented SOI into the first coordinate space to create a third image file of the three-dimensional body portion of the subject including the SOI; and wherein the first image file and the third image file are usable to train deep learning models for segmentation of SOI in patient images corresponding to the SOI in the first image file and the third image file.
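By way of a non-limiting illustration, the following sketch outlines one possible software expression of this ground-truth pair generation flow in Python using the NiBabel library; the helper functions segment_soi and resample_labels_to_space are hypothetical placeholders (not part of the disclosed embodiments) standing in for the segmentation step and the coordinate-space transformation step.

```python
# Hedged sketch of ground-truth training-pair generation, assuming NiBabel for
# image I/O. segment_soi() and resample_labels_to_space() are hypothetical
# helpers standing in for the (possibly manual) high-contrast segmentation step
# and the affine transformation into the first coordinate space.
import nibabel as nib
import numpy as np

def make_training_pair(first_path, second_path):
    first_img = nib.load(first_path)    # first image file (e.g., 1.5 T / 3 T clinical image)
    second_img = nib.load(second_path)  # second image file (e.g., 7 T image of the same subject)

    # Segment the structures of interest in the higher-contrast image
    # (in embodiments this step may be performed manually by experts).
    soi_labels = segment_soi(second_img)

    # Transform the segmented SOI into the first coordinate space to form the
    # third image file that is paired with the first image file.
    third_labels = resample_labels_to_space(soi_labels, src=second_img, dst=first_img)

    third_img = nib.Nifti1Image(third_labels.astype(np.int16), first_img.affine)
    return first_img, third_img  # usable as a (source, labeled) training pair
```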
In embodiments, receiving access to the first image file includes receiving access to a first image file produced by a scanner defined by a field strength less than or equal to about three Tesla. In any or all of the above embodiments, receiving access to the second image file includes receiving access to a second image file produced by a scanner defined by a field strength greater than or equal to about five Tesla. In any or all of the above embodiments, receiving access to the second image file includes receiving access to a second image file produced by a scanner defined by a field strength greater than or equal to about seven Tesla. In any or all of the above embodiments, receiving the first image file includes receiving a first image file produced by a first scanner; and receiving the second image file includes receiving a second image file produced by a second scanner different than the first scanner.
In any or all of the above embodiments, the three dimensional body portion includes a head of the subject. In any or all of the above embodiments, the SOI includes one or more of a subthalamic nucleus, globus pallidus, red nucleus, substantia nigra, thalamus or caudate nucleus.
In any or all of the above embodiments, segmenting the SOI in the second image includes manually segmenting the SOI. In any or all of the above embodiments, transforming the SOI includes affine transforming the SOI. In any or all of the above embodiments, transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about one voxel. In any or all of the above embodiments, transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about 1 mm. In any or all of the above embodiments, transforming the SOI includes electronically transforming the SOI.
Any or all of the above embodiments may further include quality checking the SOI in the third image file. In any or all of the above embodiments, quality checking the SOI includes: transforming the first image file into the second coordinate space to produce a fourth image file; transforming the third image file into the second coordinate space to produce a fifth image file; and comparing the SOI in fourth image file and the fifth image file. Comparing SOI in the fourth image file and the fifth image file may include a manual visualization.
In any or all of the above embodiments, segmenting the SOI comprises segmenting a region of interest including the SOI, wherein the region of interest comprises a portion of the body portion defined by the second image file.
Embodiments include a computer system operable to provide any or all of the functionality of any of the ground truth image generation method embodiments described above.
Another example relates to a model training method. Embodiments include a method for training a deep learning model, comprising: receiving access to a plurality of pairs of training image files, wherein one or more of the pairs of training image files is optionally generated by any of the ground truth image generation method embodiments described above, and includes: a first image file of a three dimensional body portion of a subject including structures of interest (SOI), wherein the first image file is defined by a first coordinate system and a first effective resolution; and a second image file of the three dimensional body portion of the subject including the SOI, wherein SOI of the second image file are defined by a second effective resolution that is greater than the first resolution, and are defined to the first coordinate system; and iteratively processing the plurality of pairs of training image files by a first deep learning model to train the first deep learning model, including producing a plurality of partially-trained iterations of the first deep learning model and a first trained segmentation deep learning model optimized for segmentation of the SOI.
In any or all of the above embodiments, iteratively processing the plurality of pairs of training images includes iteratively processing the plurality of pairs of training set images by a neural network deep learning model, and optionally a convolutional neural network deep learning model.
Any or all of the above embodiments may further include validating the training of the deep learning model. Validating the training of the deep learning model may include: receiving access to a third image file of a three dimensional body portion of a subject different than the subjects of the pairs of training image files and including the SOI, wherein the third image file is defined by a third effective resolution that is less than the second resolution; receiving access to a fourth image file of the three dimensional body portion of the subject of the third image file and including the SOI, wherein the fourth image file is defined by a fourth effective resolution greater than the third resolution and the SOI is segmented; processing the third image file by each of one or more of the plurality of partially-trained iterations of the first deep learning model to produce one or more associated iteration image files including the segmented SOI; and comparing the segmented SOI in each of the one or more of the iteration image files to the segmented SOI in the fourth image file to assess a level of optimization of each of the one or more of the partially-trained iterations of the first deep learning model.
In any or all of the above embodiments, the first image file of the set of training image files was produced by a scanner defined by a field strength less than or equal to about three Tesla. In any or all of the above embodiments, the second image file of the set of training image files includes SOI produced by a scanner defined by a field strength greater than or equal to about five Tesla. In any or all of the above embodiments, the second image file of the set of training image files includes SOI produced by a scanner defined by a field strength greater than or equal to about seven Tesla. In any or all of the above embodiments, the first image file of the set of training image files includes a first image file produced by a first scanner; and the second image file of the set of training image files includes a second image file produced by a second scanner different than the first scanner.
In any or all of the above embodiments, the three dimensional body portion includes a head of the subject. In any or all of the above embodiments, the SOI includes one or more of a subthalamic nucleus, globus pallidus, red nucleus, substantia nigra, thalamus or caudate nucleus.
In any or all of the above embodiments, the second image file of the set of training image files includes the SOI defined to the first coordinate system to an accuracy of less than or equal to about one voxel. In any or all of the above embodiments, the second image file of the set of training image files includes the SOI defined to the first coordinate system to an accuracy of less than or equal to about one mm. In any or all of the above embodiments, the second image file of the set of training image files includes a region of interest of the body portion defined by the second image file, and wherein the region of interest includes the SOI and comprises a portion of the body portion defined by the second image file.
Any or all of the above embodiments may further include augmentation transforming one or both of the first image file and the second image file of the set of training image files before processing the one or both of the first image file and the second image file by the first deep learning model, optionally including one or more of a canonical orientation transformation, a normalize intensity transformation, a region of interest crop size transformation, a training crop transformation, a resample transformation, or an add noise transformation.
Any or all of the above embodiments may further include iteratively processing the first image file and the second image file by one or more additional deep learning models to train each of the one or more additional deep learning models and produce one or more associated trained segmentation deep learning models optimized for segmentation of the SOI, wherein each of the one or more additional trained deep learning models is different than the first trained deep learning model and different than others of the one or more additional trained deep learning models.
Embodiments include a computer system operable to provide any or all of the functionality of embodiments of the model training method described above.
Another example relates to segmentation prediction. Embodiments include a method for segmenting structures of interest (SOI) in a patient image file, comprising: receiving access to a clinical patient image file of a three dimensional body portion including the SOI, wherein the clinical patient image file is defined by a first effective resolution; processing the patient image file by a first deep learning model trained using a plurality of pairs of training image files to produce a first segmented image file of the body portion with the SOI segmented, wherein (1) one or more of the pairs of training image files was generated by the method or using the computer system of any one or more of the ground truth image generation embodiments described above, and/or (2) the first deep learning model was trained by the method or using the computer system of any one or more of the embodiments of the model training method described above.
Embodiments may further include processing the patient image file by one or more additional deep learning models to produce one or more additional and associated segmented image files with the SOI segmented, wherein each of the one or more additional deep learning models is different than the first deep learning model and different than others of the one or more additional deep learning models.
Embodiments may further include combining the SOI of each of the first segmented image file and one or more additional segmented image files into a multiple model patient image file, wherein the step of combining is optionally performed using a voting algorithm.
In any or all of the above embodiments, the clinical patient image file was produced by a scanner defined by a field strength less than or equal to about three Tesla. In any or all of the above embodiments, the first deep learning model was trained using a set of training image files including images produced by a scanner defined by a field strength greater than or equal to about seven Tesla.
In any or all of the above embodiments, the body portion includes a head and the SOI includes one or more of a subthalamic nucleus, globus pallidus, red nucleus, substantia nigra, thalamus or caudate nucleus.
Embodiments include a computer system operable to provide any or all of the functionality of the segmentation prediction embodiments described above.
Although depicted as two dimensional for purposes of example in
In embodiments, segmentation method 10 is used to segment brain structures of interest (e.g., SOI 14S) such as the subthalamic nucleus (STN), internal segment of the globus pallidus (GPi), external segment of the globus pallidus (GPe), red nucleus (RN), substantia nigra (SN), thalamus and/or caudate nucleus. Therapies for treatment of diseases such as Parkinson's disease, dystonia and essential tremor (ET) may involve surgical procedures including the placement of electrodes into target brain structures of these types, and images may be used to guide these surgical procedures or other therapies. In general, the efficacy of the therapies may be enhanced by the accurate positioning of the electrodes in the target structures of interest. However, some or all of these brain structures may be difficult to visualize on clinical images 14 (e.g., images produced on 1.5 T or 3 T scanners). By the use of the enhanced visualization provided by method 10 and segmented brain images such as 14′, efficacy of the therapy can be enhanced. For example, as shown diagrammatically in
Imaging systems such as so-called 7 T MRI scanners are generally known and are capable of producing images having effective resolutions, such as, for example, contrast resolutions, greater than those of certain clinical images such as 14. However, relatively high contrast resolution imaging systems of these types are less common than clinical imaging systems, and may be available, for example, at certain research institutions. Segmentation method 10 thereby effectively enhances the resolution of at least key portions of clinical images 14 (e.g., SOI 14S) to produce the segmented images 14′ with relatively high contrast resolution SOI 14S′, and thereby enhances the efficacy of treatments guided by images produced by relatively low contrast resolution clinical imaging systems. The segmentation method 10 can be efficiently performed at the locations of the clinical scanners, or remotely with the clinical images 14 and segmented images 14′ electronically transmitted over networks between the facilities operating the clinical imaging systems and providers of the segmentation process 18 (e.g., as software as a service (SaaS)).
The source image file 34S and labeled image file 34L of each pair of training image files 34 are based on images of the same training subject (e.g., the same individual). In embodiments, one or more of the pairs of training image files 34 includes a source image file 34S produced by a relatively low contrast resolution clinical imaging system such as a 1.5 T and/or 3 T MRI scanner, and an associated labeled image file 34L in which the labeled structures of interest were identified and segmented in an image of the same training subject produced by a higher contrast resolution imaging system such as a 7 T scanner. Optimization of the trained deep learning model 18 during the training method 30, and the ability of the trained deep learning model to accurately segment (e.g., identify or predict) the SOI 14S in the clinical image 14, may be enhanced by the use of these pairs of training image files 34 including a source image 34S produced by a relatively low contrast resolution imaging system and an associated ground truth or labeled image file 34L produced by a relatively high contrast resolution imaging system.
As shown diagrammatically in
A relationship between the first and second coordinate spaces 46 and 48 is known and defined by coordinate system relationship information. Using the coordinate system relationship information, locations of each of one or more voxels or portions of voxels in the coordinate space 46 or 48 of one of the source image files 33S or 35S can be transformed or mapped to a corresponding location of one or more voxels or portions of voxels in the coordinate space 46 or 48 of the other of the source image files 33S or 35S.
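By way of a hedged illustration, the coordinate system relationship information may be represented, for example, by composing the voxel-to-world affine matrices stored with each image file (e.g., in NIfTI headers); the sketch below maps a voxel index from the second coordinate space 48 into the first coordinate space 46, and the file names shown are illustrative only.

```python
import numpy as np
import nibabel as nib

first = nib.load("first_source_33S.nii.gz")    # first coordinate space 46 (illustrative path)
second = nib.load("second_source_35S.nii.gz")  # second coordinate space 48 (illustrative path)

# Coordinate system relationship: voxel (second space) -> world -> voxel (first space).
second_vox_to_first_vox = np.linalg.inv(first.affine) @ second.affine

def map_voxel(ijk):
    """Map a voxel index in the second coordinate space to the first coordinate space."""
    homogeneous = np.append(np.asarray(ijk, dtype=float), 1.0)
    return (second_vox_to_first_vox @ homogeneous)[:3]

print(map_voxel((64, 80, 42)))  # corresponding (possibly fractional) voxel location
```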
As shown in
In embodiments, the segmentation process 50 is performed manually by neuroanatomy experts using computer visualization tools displaying the image 45 based on the second source image file 35S. Commercially available computer visualization tools such as Avizo, Amira or 3DSlicer may be used in connection with the segmentation process 50, for example. In embodiments, for example, a group of one or more (e.g., five) experts in the neuroanatomy of the basal ganglia, such as neurosurgeons, neurologists and/or neuroimaging experts, reviewed the relatively high contrast resolution (e.g., 7 T) images 45 of the second source image file 35S and determined the anatomical borders of each of the left and right portions of the STN, the left and right portions of the GPi, the left and right portions of the GPe, and/or other SOI such as the RN and SN (e.g., defining in what image (slice number) the SOI starts and ends in axial, coronal and sagittal views). The group of experts determined by consensus the final borders of the structures. In embodiments, consensus concerning the locations and borders of the SOI was determined by mutual agreement of experts in the fields of basal ganglia anatomy, physiology, neurosurgery, neurology and imaging. The criteria for determining the locations of SOI such as the STN included prior anatomical studies that determine its location relative to other adjacent structures such as the SN, RN and thalamus, for example. Borders may be determined based on image contrast relative to that of the SOI and other structures, as well as nearby fiber pathways (e.g., internal capsule and Fields of Forel). In embodiments, the segmented second source image file 35S′ may be labeled to identify each segmented SOI 45S′ with a unique class label identifier (e.g., 1 and 2 for the left and right hemisphere portions of the STN, 3 and 4 for the left and right portions of the RN, 5 and 6 for the left and right portions of the SN, 7 and 8 for the left and right portions of the GPi, and 9 and 10 for the left and right portions of the GPe, respectively). The segmented second source image file 35S′ may be labeled to identify portions of the image 45′ that do not correspond to a SOI 45S′ with a unique identifier such as 0.
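One possible representation of such class label identifiers, mirroring the example values given above, is a simple lookup table such as the following non-limiting sketch.

```python
# Example class-label map mirroring the identifiers described above
# (0 is background, i.e., voxels not belonging to any SOI).
SOI_LABELS = {
    0: "background",
    1: "STN (left)",  2: "STN (right)",
    3: "RN (left)",   4: "RN (right)",
    5: "SN (left)",   6: "SN (right)",
    7: "GPi (left)",  8: "GPi (right)",
    9: "GPe (left)", 10: "GPe (right)",
}
```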
As shown in
In embodiments, the training image file generation method 40 is performed separately on the source image files such as 33S and 35S to generate the pairs of training image files such as 34 for each of the STN and the GP (e.g., including both the GPi and GPe). Such pairs of training image files 34 may then be used by the training method 30 (
ROI segmentation method 60 may be performed, for example, using conventional or otherwise known image processing and/or software tools of the types described above. In embodiments, ROI segmentation method 60 makes use of a certain or particular SOI 45S, referred to herein as the target SOI 45ST, of a group of the SOI 45S in the image 45 (e.g., a subset of the SOI 45S). In embodiments, the target SOI 45ST may be selected based on factors such as its ability to be relatively easily and/or accurately identified by automated image processing and software tools, and/or its generally known positional relationships to the other SOI 45S to be segmented from the image 45. For example, in connection with embodiments of the types described herein for segmenting SOI 45S such as the STN, SN, RN, GPi and GPe, the RN (red nucleus) may be selected as the target SOI 45ST. The other SOI 45S, such as the STN, SN, GPi and GPe, may be expected to be within a certain predetermined distance of the RN, for example based on known brain physiology.
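As a hedged illustration only, a region of interest crop around a target SOI such as the RN might be computed as in the following sketch; the margin and voxel size values are assumptions chosen for illustration and are not specified by the embodiments described above.

```python
import numpy as np

def crop_roi_around_target(image, label_map, target_labels=(3, 4),
                           margin_mm=25, voxel_size_mm=0.5):
    """Crop a region of interest centered on a target SOI (e.g., the RN).

    Illustrative sketch: the margin and voxel size are assumed values, chosen
    only to reflect that the other SOI (STN, SN, GPi, GPe) are expected to lie
    within a known distance of the red nucleus.
    """
    # Locate the centroid of the target SOI (labels 3 and 4 per the example
    # class-label map above).
    target_mask = np.isin(label_map, target_labels)
    center = np.round(np.argwhere(target_mask).mean(axis=0)).astype(int)

    # Crop a cube of +/- margin_mm around the centroid, clipped to the volume.
    half = int(margin_mm / voxel_size_mm)
    lo = np.maximum(center - half, 0)
    hi = np.minimum(center + half, np.array(image.shape))
    roi = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return image[roi], label_map[roi]
```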
As shown in
Referring back to
Convolutional neural network (CNN) models are used as deep learning models 18 in embodiments. Types of untrained CNN deep learning models 36 used to produce the trained deep learning models 18 include Nested-UNet architectures and Fully Convolutional DenseNet (DenseFCNet) architectures in embodiments. Alternative and/or additional untrained deep learning models 36 are used in other embodiments. Versions of untrained deep learning models 36 are commercially or otherwise publicly available. Embodiments of method 10, for example, may use as untrained deep learning models 36 a Nested-UNet architecture described at https://arxiv.org/abs/1807.10165, with code available at https://github.com/4uiiurz1/pytorch-nested-unet/blob/master/archs.py, and a DenseFCNet architecture described in the Khened, Kollerathu and Krishnamurthi article entitled “Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers,” in the publication Medical Image Analysis, 51 (2019) 21-45 and available at https://doi.org/10.1016/j.media.2018.10.004. The base architectures may include conventional or otherwise known substitutions and/or additions for purposes of experimentation. Examples of such substitutions and/or additions include additional linear downsampling and upsampling layers to enable acceptance of 0.5 mm isotropic inputs and to produce 0.5 mm isotropic outputs while operating internally at 1.0 mm isotropic, replacement of transposed convolutions with simple linear upsampling, and ELU (exponential linear unit) activation functions prior to a final softmax layer. Such substitutions and/or additions may, for example, minimize memory consumption during the training method 30. By way of example, Nested-UNet untrained deep learning models 36 of these types may have about 1.6 M parameters, and DenseFCNet untrained deep learning models of these types may have about 800K parameters.
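By way of a non-limiting sketch, substitutions of the kinds described above might be expressed as a thin PyTorch wrapper around a base segmentation network; the wrapper below is illustrative only and is not the specific Nested-UNet or DenseFCNet architecture used in embodiments (the base network is assumed to be supplied separately).

```python
import torch.nn as nn
import torch.nn.functional as F

class ResolutionWrapper(nn.Module):
    """Illustrative sketch: wrap a base 3D segmentation network (assumed given)
    so it accepts 0.5 mm isotropic inputs and returns 0.5 mm isotropic outputs
    while operating internally at 1.0 mm isotropic, with an ELU activation
    prior to the final softmax, as discussed above."""

    def __init__(self, base_net: nn.Module):
        super().__init__()
        self.base_net = base_net
        self.head = nn.Sequential(nn.ELU(), nn.Softmax(dim=1))

    def forward(self, x):
        # Linear downsampling layer: 0.5 mm -> 1.0 mm isotropic.
        x = F.interpolate(x, scale_factor=0.5, mode="trilinear", align_corners=False)
        logits = self.base_net(x)
        # Simple linear upsampling in place of transposed convolutions:
        # 1.0 mm -> 0.5 mm isotropic.
        logits = F.interpolate(logits, scale_factor=2.0, mode="trilinear",
                               align_corners=False)
        return self.head(logits)
```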
Training process 32, which can be performed by conventional or otherwise known processes, produces the trained deep learning models 18 by passing the pairs of training image files such as 341-34N through the mathematical operations defined by the large number of parameters of the associated untrained deep learning models 36. Good values for the parameters, e.g., values that enable the trained deep learning models 18 to produce desired outputs that accurately segment SOI, are determined by the training process 32. Random values are initially assigned to the parameters, and those values are adjusted during the iterative process 38 to move them closer toward the good values. The differences between the actual and desired outputs of the partially trained deep learning models during the iterative process 38 are defined by values referred to as losses. During the training process 32 the losses are calculated using a mathematical loss function. In embodiments, training process 32 uses a Sørensen-Dice coefficient (DC)-based loss function. For example, by this approach, similarity between the segmentations produced by partially-trained iterations of the untrained deep learning models 36 and the ground truth images (e.g., the labeled source images such as 45′) is determined. The loss may, for example, be calculated as an average of 1-DC over the SOI, so that maximizing segmentation similarity corresponds to minimizing the loss. In embodiments, weighted averages may be used to emphasize the relative importance or difficulty of certain SOI.
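A minimal sketch of a Dice-coefficient-based loss of the kind described above is shown below, assuming softmax probabilities and one-hot ground-truth labels; the optional per-class weighting is an illustrative assumption.

```python
def dice_loss(probs, target_onehot, weights=None, eps=1e-6):
    """Average of (1 - Dice coefficient) over the SOI classes.

    probs:          (B, C, D, H, W) softmax probabilities from the model
    target_onehot:  (B, C, D, H, W) one-hot ground-truth labels
    weights:        optional per-class weights emphasizing certain SOI
    (The background class may be excluded from the channel dimension before
    calling this function, depending on the embodiment.)
    """
    dims = (0, 2, 3, 4)  # sum over batch and spatial dimensions
    intersection = (probs * target_onehot).sum(dims)
    cardinality = probs.sum(dims) + target_onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)

    loss_per_class = 1.0 - dice_per_class          # minimize 1 - DC
    if weights is not None:
        loss_per_class = loss_per_class * weights  # weighted-average variant
        return loss_per_class.sum() / weights.sum()
    return loss_per_class.mean()
```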
Individual iterations or passes of the iterative process 38 may be referred to as epochs. Each epoch may iterate through all of the pairs of training image files 341-34N of each set of training image files 34. The training image files 34 may be processed in batches of pairs of image files; in embodiments, the batch size may be 1, 2, 4 or 8, and other embodiments of training method 30 use batches having other numbers of pairs of training image files. The data of the pairs of training image files 34 may be augmented by various transformations before being applied to the deep learning model during each of one or more epochs of the iterative process 38. Examples of such augmentation transformations that may be used during the iterative process 38 include the transformations described in Table 1 below. One or more of the transformations in Table 1 may be used in embodiments.
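By way of illustration only, the epoch and batch structure described above might be expressed with the PyTorch DataLoader as in the following sketch, which assumes a Dataset yielding augmented (source, one-hot label) volumes and reuses the dice_loss sketch above; it is not the specific training process 32.

```python
import torch
from torch.utils.data import DataLoader

def train(model, training_pairs, num_epochs=100, batch_size=4, lr=1e-3):
    """Illustrative training loop: training_pairs is assumed to be a Dataset
    that yields (source volume, one-hot label volume) pairs with augmentation
    transformations applied on the fly; dice_loss is the sketch given above."""
    loader = DataLoader(training_pairs, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(num_epochs):            # one epoch = one pass over all pairs
        for source, target_onehot in loader:   # batches of pairs of image files
            probs = model(source)
            loss = dice_loss(probs, target_onehot)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```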
Alternative or additional transformations are used in other embodiments. Examples may include canonical orientation, normalize intensity, resample and image to tensor transformations, and left/right flips.
Use of the transformations may enhance the ability of the deep learning model 18 to learn from a plurality of images during each epoch of the training process 32. In effect, the partially trained deep learning model is exposed to a greater variety of training images during the training process 32 by applying the transformations to the pairs of training image files 34. The transformations include rotations, contrast adjustments and simulated noise, and their use helps prevent the training process 32 from “memorizing” the training inputs and enhances the generality of deep learning models 18 trained by the training process 32.
Embodiments of training method 30 use pairs of validation images (not separately shown) to assess the level of optimization of each iteration or partially-trained deep learning model during the training process 32. Each validation image pair may include a source image file and a labeled image file similar to the source image file 34S and labeled image file 34L, respectively, of a training image pair 34. The labeled image file of each validation image pair may be produced by a method substantially the same as or similar to the method 40 by which the labeled image files 34L of the pairs of training image files 34 are generated. In embodiments, the validation image pairs are produced using images of one or more individual subjects that are different than the one or more individual subjects associated with the pairs of training image files 34 used during the training method 30 of the associated trained deep learning model 18.
By way of example, seventy-five to two hundred epochs may be used during the training process 32 to produce deep learning models 18. As described above, during these epochs the deep learning model is optimized on the pairs of training image files 34, reducing the loss function by adjusting its parameters toward lower loss, epoch by epoch. In embodiments, to avoid overfitting the deep learning model to the pairs of training image files 34, the quality or validity of the partially-trained deep learning models is assessed by applying the partially-trained deep learning models to the source image file of each of one or more pairs of validation images, and comparing the resulting segmented validation image to the labeled image of the associated labeled image file. Segmentation processes that are substantially the same as or similar to those of the segmentation method 10 described above can be used during these quality or validity assessments (e.g., on images not seen by the deep learning model during the training process 32). In embodiments, the determination that the deep learning model is optimized by the training process 32 (e.g., during the iterative process 38 shown in
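A hedged sketch of such a validation-based stopping check is shown below; dice_coefficient is a hypothetical helper returning the mean Dice coefficient over the SOI classes, and the patience value is an illustrative assumption rather than a value specified by the embodiments.

```python
def validation_dice(model, validation_pairs):
    """Mean validation Dice of a partially-trained model over held-out pairs.

    dice_coefficient() is a hypothetical helper comparing a predicted
    segmentation to the labeled validation image."""
    scores = [dice_coefficient(model(source), labels)
              for source, labels in validation_pairs]
    return sum(scores) / len(scores)

def training_is_optimized(dice_history, patience=10):
    """True when the best validation Dice has not improved for `patience` epochs."""
    if len(dice_history) <= patience:
        return False
    best_before = max(dice_history[:-patience])
    return max(dice_history[-patience:]) <= best_before
```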
The training method 30 described above in connection with
By the segmentation processes 161-16N, the clinical image file 12 is applied to and processed by the corresponding trained deep learning model 181-18N. Because of differences between the trained deep learning models 181-18N, the associated segmented clinical image files 201-20N may have different associated and labeled SOI 14S1′-14SN′ that were identified during the respective segmentation processes 161-16N. For example, because each of the trained deep learning models 181-18N may differ from the others in one or more of model type, the pairs of training image files used, the augmentation transforming and/or the random initialization, each of the models may produce different outputs. By step 112, the images 141′-14N′ of the clinical image files 201-20N are combined or merged into a final segmented clinical image 14″ defined by a final segmented clinical image file 21. Step 112 may, for example, use a voting algorithm. Combining the results of the different models 181-18N may retain the results on which certain (e.g., most) of the models agree (e.g., high confidence segmentation features), while discarding features that were produced by certain (e.g., a minority) of the models (e.g., low confidence segmentation features). Segmentation methods such as 110 have been demonstrated to produce high quality segmented versions of clinical images.
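By way of a non-limiting example, a per-voxel majority voting algorithm of the kind that may be used in step 112 can be sketched as follows; ties are broken toward the lowest label value in this illustrative version.

```python
import numpy as np

def majority_vote(segmentations):
    """Combine label maps from multiple trained models by per-voxel voting.

    Illustrative sketch of one possible voting algorithm: each voxel is assigned
    the label chosen by the most models; ties default to the lowest label value.
    segmentations: list of integer label volumes of identical shape.
    """
    stacked = np.stack(segmentations, axis=0)          # (num_models, D, H, W)
    num_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0)       # per-voxel count for each class
                      for c in range(num_classes)], axis=0)
    return votes.argmax(axis=0).astype(stacked.dtype)  # per-voxel majority label
```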
The final segmented clinical image file 21 may be validated or quality checked in embodiments. For example, in embodiments the SOI 14S″ may be validated based on factors such as their volume, shape and position (e.g., distances and angles between the STN/RN/SN objects and distances between GPi and GPe objects) relative to expected volumes, shapes and positions. The expected volumes, shapes and positions may, for example, be calculated in advance using ground truth data similar to the data used to train the models. In embodiments, for example, if any of the analyzed measurements relating to the predicted SOI 14S″ are over a first threshold such as three standard deviations from the expected mean values, the predicted segmented clinical image file may be flagged with a warning. If the analyzed measurements relating to the predicted SOI 14S″ are over a second threshold such as four standard deviations from the expected mean value, the predicted segmented clinical image file may be flagged as an error.
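As an illustration only, the threshold-based flagging described above might be implemented as in the following sketch, where the measurement layout and the helper's return values are assumptions rather than part of the disclosed embodiments.

```python
import numpy as np

def quality_check(measurements, expected_mean, expected_std,
                  warn_sd=3.0, error_sd=4.0):
    """Illustrative sketch of the validation thresholds described above.

    measurements / expected_mean / expected_std: arrays of per-SOI metrics
    (e.g., volumes, or distances and angles between the STN/RN/SN objects and
    between the GPi and GPe objects) and their expected values calculated in
    advance from ground-truth data.
    Returns "ok", "warning" (> warn_sd standard deviations from the expected
    mean), or "error" (> error_sd standard deviations).
    """
    z = np.abs((np.asarray(measurements) - np.asarray(expected_mean))
               / np.asarray(expected_std))
    if np.any(z > error_sd):
        return "error"
    if np.any(z > warn_sd):
        return "warning"
    return "ok"
```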
In embodiments, the segmentation method 110 may be performed on a clinical image file 12 using a first set of trained deep learning models 181-18N trained to segment first SOI such as the STN, RN and/or SN, to produce a first segmented clinical image file 21 with the first SOI segmented. The segmentation method 110 may be performed on the same clinical image file 12 using a second set of trained deep learning models 181-18N trained to segment second SOI such as the GPi and GPe, to produce a second segmented clinical image file 21 with the second SOI segmented. A composite final segmented clinical image file including both the segmented first and second SOI can be produced by transforming and combining the SOI of the first and second segmented clinical image files into a common image with a common coordinate space. Transformation processes of the types described above can be used in connection with the generation of such a composite final segmented clinical image file.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. It is contemplated that features described in association with one embodiment are optionally employed in addition or as an alternative to features described in or associated with another embodiment. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. For example, although described in connection with certain brain SOI, embodiments may be used in connection with other brain SOI and images of other parts of a body.
This application is a national phase application of PCT Application No. PCT/US2022/020895, internationally filed on March 18, 2022, which claims the benefit of Provisional Application No. 63/163,507, filed Mar. 19, 2021, and also claims the benefit of Provisional Application No. 63/212,957, filed Jun. 21, 2021, which are all incorporated herein by reference in their entireties for all purposes.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2022/020895 | 3/18/2022 | WO |
Number | Date | Country
---|---|---
63212957 | Jun 2021 | US
63163507 | Mar 2021 | US