Cancers harbor mutations, which may be somatic and tumor-specific. The immune system detects these cancer-specific mutations by identifying peptides that are derived from them. The peptides can be identified by the immune system when they bind proteins encoded by a major histocompatibility complex (MHC) gene and are presented on a surface of a cell. For example, a peptide corresponding to a mutated gene can bind to a specific MHC molecule (e.g., a human leukocyte antigen (HLA) protein) and be presented on the cell surface. Predicting peptides expressed on a tumor cell surface can inform development of precision cancer therapeutics and diagnostics. For example, genomic variants corresponding to these peptides can be identified to analyze immune-system responses and resistance to certain cancer immunotherapies. As another example, the peptides presented on the tumor cell surface can be analyzed to create personalized immuno-oncology (I-O) therapies and/or neoantigen cancer vaccines.
Techniques for predicting such peptides expressed on a tumor cell surface, also known as "neoantigens," require in-depth analysis of many technical factors, including, but not limited to, the quality of peptide sequencing data, availability of paired tumor and normal samples, HLA typing, and identification of other peptide characteristics. For example, a neoantigen can be identified based on a prediction of a peptide that will bind to an MHC molecule and be presented on a cell surface. To identify neoantigens, determining peptides encoded by somatic variants and identifying HLA molecules that bind the peptides are only the initial steps in a very complex process. This is because each peptide identified from the sequence data may or may not be: processed by the proteasome; transported for MHC binding; presented on a tumor cell surface; and ultimately recognized by the immune system. Because of this complex process, many peptides that bind to an HLA molecule (for example) may not be expressed on the cell surface.
Further, one or more binding motifs of the MHC molecules can be identified to determine whether a given peptide will bind to the MHC molecule. While binding motifs for some MHC molecules (e.g., HLA-A molecule) are known, there are many MHC molecules for which binding motifs are yet to be identified. For example, binding motifs of MHC Class II molecules are relatively unknown due to limited availability of experimental data. Without this information, it would be difficult to determine whether a peptide will bind to a corresponding MHC molecule. Conventional techniques have attempted to address this issue by training machine-learning models using known MHC binding motifs to predict whether a peptide will bind to one of the various types of MHC molecules. However, even when such peptides are identified, some of them may not be presented on a cell surface. In other words, conventional techniques may identify MHC-binding peptides, but only a small fraction of them can be successfully presented on a cell surface. Since an immune system response is triggered when MHC-binding peptides are presented on the cell surface, identifying MHC-binding peptides alone cannot provide all the details on how the immune system responds to tumor cells, foreign proteins, etc.
Thus, conventional techniques for predicting MHC-binding peptides do not address whether the peptides are actually presented and expressed on a cell surface. Conventional techniques also fall short of identifying peptide characteristics that are indicative of a given peptide being presented on the cell surface. Accordingly, there is a need for accurately predicting peptides that bind to their corresponding MHC molecules and are presented on a cell surface.
In some embodiments, a method of predicting surface-presenting peptides is provided. The method can include accessing a trained machine-learning model, which was trained using a training data set that included, for each peptide of a plurality of peptides identified by the training data set, protein characteristics of a major histocompatibility complex (MHC) molecule that binds and presents the peptide, one or more expression levels representing an expression level of a gene encoding the peptide, and one or more peptide-presentation metrics representing a quantity of peptides detected as having been presented by the MHC molecule. The machine-learning model can be configured to generate an output that indicates an extent to which the one or more expression levels and the one or more peptide-presentation metrics are related in accordance with a population-level relationship between expression and presentation.
The method can also include accessing genomic and transcriptomic data corresponding to a biological sample of a subject. The genomic and transcriptomic data can identify one or more MHC molecules from the biological sample and include, for each peptide of a set of peptides identified from the biological sample, one or more values representing the peptide. The one or more values can be determined based on processing of the biological sample. The method can also include determining, for each peptide of the set of peptides, a score using the machine-learning model, the one or more MHC molecules identified from the biological sample, and the one or more values representing the peptide. The method can include generating a result based on the score and outputting the result.
Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein. Some embodiments of the present disclosure include a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.
The terms and expressions which have been employed are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof, but it is recognized that various modifications are possible within the scope of the invention claimed. Thus, it should be understood that although the present invention as claimed has been specifically disclosed by embodiments and optional features, modification and variation of the concepts herein disclosed may be resorted to by those skilled in the art, and that such modifications and variations are considered to be within the scope of this invention as defined by the appended claims.
The present disclosure is described in conjunction with the appended figures:
To address at least the above deficiencies of conventional systems, the present techniques can be used to predict surface-presenting peptides. As used herein, a "surface-presenting peptide" can refer to a peptide that binds to an MHC molecule (e.g., an HLA-A protein) and is presented on a corresponding cell surface. One or more somatic variants can be identified by sequencing DNA from normal and tumor samples. A somatic variant includes one or more gene mutations present in the tumor sample but absent from the normal sample. The somatic variant of the tumor sample can be processed using a trained machine-learning model to predict whether a peptide encoded by the somatic variant will bind to an MHC molecule (e.g., MHC Class I) and be presented on a cell surface. The machine-learning model can include a binding model that predicts whether a peptide encoded by the somatic variant will bind to an MHC molecule. In some embodiments, the machine-learning model includes a presentation model that predicts whether the peptide encoded by the somatic variant will be expressed on a cell surface.
The machine-learning model can be trained using a training data set derived from: (i) genetically engineered mono-allelic cell lines; and (ii) multi-allelic data from tissue samples of other subjects. In some instances, the machine-learning model was trained using binding array data (e.g., IEDB data). The training data set can include, for each peptide identified by the training data set, one or more expression levels representing an expression level of a gene encoding the peptide and one or more peptide-presentation metrics representing a quantity of peptides detected as having been presented by the MHC molecule. The training data set can include immunopeptidomics data of peptides generated from a plurality of genetically engineered cell lines (e.g., K562 cells) that express a single allele of interest (e.g., HLA-A). In particular, MHC-peptide complexes in these cell lines may be immunoprecipitated using W6/32 antibody, followed by peptide elution and peptide sequencing using tandem mass spectrometry. The training data set corresponding to multi-allelic data from other tissue samples can be obtained using curated public data.
The prediction of surface-presenting peptides can be performed in a manner that biases the selection towards peptides associated with scores predicting presentation to be more probable relative to a probability expected by a population-level relationship between expression and presentation of the peptides. Additionally or alternatively, the prediction of surface-presenting peptides can be performed in a manner that biases the selection towards peptides associated with a region in a space, the region being associated with outlier peptides in the training data set for which expression levels and peptide-presentation metrics were related in a manner that departed from the population-level relationship.
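The bias towards outlier peptides can be illustrated with a minimal, non-limiting sketch. Here the population-level relationship is modeled, for illustration only, as a simple least-squares fit; the function name and z-score cutoff are hypothetical assumptions, not the claimed method:

```python
import statistics

def outlier_peptides(expression, presentation, z_cutoff=2.0):
    # Fit a simple population-level linear relationship between expression
    # and presentation, then flag peptides whose residuals depart from it.
    n = len(expression)
    mx = sum(expression) / n
    my = sum(presentation) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(expression, presentation))
    var = sum((x - mx) ** 2 for x in expression)
    slope = cov / var
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept)
                 for x, y in zip(expression, presentation)]
    sd = statistics.pstdev(residuals)
    # Indices of peptides whose presentation departs from the trend.
    return [i for i, r in enumerate(residuals) if abs(r) > z_cutoff * sd]
```

For example, a peptide presented far more often than its expression level predicts would be flagged as an outlier of interest by such a scheme.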
Accordingly, embodiments of the present disclosure provide a technical advantage over conventional systems by accurately predicting peptides that bind to their corresponding MHC molecules and are presented on a cell surface. As noted above, binding and expression of peptides on a tumor cell surface can be predictive of how immune systems will respond to neoantigens and/or certain cancer immunotherapies. Thus, an accurate prediction of surface-presenting peptides facilitates selection or development of immunotherapies that would be most effective for a given subject. Further, based on the model evaluations, the embodiments demonstrate a significantly higher positive predictive value compared to conventional techniques such as NetMHCpan 4.0. As such, the high sensitivity and specificity of the embodiments enable accurate identification of MHC-binding peptides that are presented on a cell surface, thereby facilitating applications to the development of personalized immunotherapies and biomarker discovery.
The following examples are provided to introduce certain embodiments. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of examples of the disclosure. However, it will be apparent that various examples may be practiced without these specific details. For example, devices, systems, structures, assemblies, methods, and other components may be shown as components in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, well-known devices, processes, systems, structures, and techniques may be shown without necessary detail in order to avoid obscuring the examples. The figures and description are not intended to be restrictive. The terms and expressions that have been employed in this disclosure are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof. The word “example” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as an “example” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
Neoantigens can be found in tumor samples, in which the neoantigens indicate one or more peptides that are presented on the tumor-cell surface, thereby triggering an immune-system response. The immune system can be conditioned to target pathogens, including cancer cells, and thus has the capacity to eliminate cancer. The immune system can distinguish self from non-self antigens. Because tumors are caused by genetic mutations (e.g., somatic variants), peptides corresponding to these genetic mutations and expressed on a cell surface can be considered as neoantigens. Because these peptides are considered "new" to the immune system, the immune system can ideally recognize tumor cells based on detecting the neoantigens presented on a tumor-cell surface and eliminate the tumor cells. As explained above, a tumor sample can be analyzed to identify sequence data and the sequence data can be compared with those from a normal sample to identify somatic variants. The somatic variants can be further analyzed to determine which subset of the variants will be manifested as peptides. Neoantigens can be predicted by identifying peptides that bind to an MHC molecule and are presented on the cell surface. Thus, a peptide's ability to be presented on a cell surface can be a key component for developing immunotherapies against cancer.
Surface-presented peptides can be identified in the context of autoimmune diseases, in which the peptides are encoded based on genetic alterations resulting from particular immunotherapies.
The machine-learning models for predicting surface-presenting peptides can be trained using a supervised training algorithm. The machine-learning models can be trained using a training data set. The training data set for training the machine-learning models can include sequence data from various sources: (i) peptides identified as binding to HLA molecules based on in vitro experiments; (ii) peptides identified by performing mass spectrometry from tumor samples; (iii) HLA alleles; and (iv) non-tumor samples. However, some training sequence data may be inaccurate for training the machine-learning models. For example, training sequence data generated from tissue samples would require a difficult process of mapping peptides to one of several types of HLA proteins (e.g., HLA-A, HLA-B) that are being simultaneously expressed on the cell surface. In another example, sequence data generated using in vitro methods might not model surface presentation. Embodiments of the present disclosure systematically resolve such inconsistencies in the training data set, which is then used to train the machine-learning model to predict, from somatic variants called from sequence data, a peptide that is likely to be "shuttled" to the cell surface.
Additionally or alternatively, the training data set may further include data corresponding to somatic variants, in which each somatic variant is labeled to indicate whether a peptide encoded by the somatic variant will bind to an MHC molecule (e.g., an HLA-A protein) and be presented on a cell surface. The training data set can also include one or more features derived from the somatic variants (e.g., peptide sequence, peptide length, expression of peptide in the tumor sample).
To prepare the training data set, a tumor sample and a corresponding normal, control sample can be sequenced to generate tumor-normal paired sequence data. The tumor-normal paired sequence data can be compared to identify somatic variants, including altered genes that include single-nucleotide variants (SNVs), indels, and/or copy number variations. In some instances, a machine-learning model is used to process the tumor-normal paired sequence data to identify the somatic variants in the tumor sample.
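As a simplified, hypothetical illustration of the tumor-normal comparison (real variant calling operates on aligned reads with quality filters; this toy caller compares single base calls against a reference):

```python
def call_somatic_variants(tumor_bases, normal_bases, reference):
    # Toy single-site caller: a variant is somatic if the tumor allele
    # differs from the reference while the matched normal sample still
    # carries the reference allele (germline variants are excluded).
    variants = []
    for pos, ref in enumerate(reference):
        t, n = tumor_bases[pos], normal_bases[pos]
        if t != ref and n == ref:
            variants.append((pos, ref, t))
    return variants
```

A variant shared by the tumor and normal samples is germline rather than somatic and is not reported by this sketch.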
In some instances, at least some of the training data correspond to peptides identified from genetically engineered mono-allelic cell lines.
Mono-allelic immunopeptidomics data identifying various characteristics of the HLA-binding peptides can be identified and included as part of the training data set. Examples of training data from mono-allelic immunopeptidomics data can include, for a given HLA-binding peptide, a type of peptide, length of the peptide, amino-acid sequence of the peptide, an HLA allele that binds the peptide, a number of transcripts that correspond to the peptide, and expression of a gene region that encodes the peptide. In order to optimize the performance of the machine-learning model, the training data can be generated to be representative of HLA genotypes in the general population.
The training data corresponding to the HLA-binding peptides can thus facilitate training of the machine-learning model by using mono-allelic immunopeptidomics data from cell lines that express one particular type of HLA at a time. Further, allelic diversity in the mono-allelic immunopeptidomics training data enables the machine-learning model to predict surface-presenting peptides derived from various alleles that may be absent from the training data.
In some instances, at least some of the training data correspond to peptides identified from sequencing tissue samples of other subjects. Cell lines of different tissue samples or tissue samples of subjects can be sequenced to identify a plurality of peptides that bind to different types of HLA molecules (e.g., HLA-A, HLA-B, HLA-C). In some instances, the cell lines and tissue samples are processed using mass spectrometry. Multi-allelic immunopeptidomics data derived from the identified plurality of peptides can be used as part of the training data. The multi-allelic immunopeptidomics data may include various characteristics corresponding to the identified peptides, including peptide length and allelic diversity.
The multi-allelic immunopeptidomics data generated from diverse tissues and cell lines can be integrated into the training data set to improve the performance of the trained machine-learning model. In particular, training the machine-learning model with multi-allelic immunopeptidomics data may reduce overfitting and/or underfitting. For example, both mono-and multi-allelic immunopeptidomics data from several publicly available data sources can be added into the training data set. The mono-allelic immunopeptidomics data from genetically engineered cell lines and mono- and multi-allelic immunopeptidomics data from tissue samples can all be combined into the training data set to expand its scale (e.g., a greater quantity of unique peptide counts).
As explained above, the immunopeptidomics data from the training data set identify various characteristics of an HLA-binding peptide, including peptide sequence, peptide length, a binding pocket sequence, left flanking region, and right flanking region. In some instances, the training data set also includes antigen presentation features such as expression level of peptides measured in terms of TPM. In addition to the above, two additional features can be generated from the immunopeptidomics data, which can be used to enhance the training data set.
A first feature generated from the immunopeptidomics data can include comparison data between expected peptide counts based on gene expression levels and actual observed peptide counts. By including the first feature in the training data set, the trained machine-learning model can improve prediction of surface-presenting peptides such that the prediction is biased towards peptides associated with scores predicting presentation to be more probable relative to a probability expected by a population-level relationship between expression and presentation of the peptides. In addition, the trained machine-learning model can facilitate prediction of the surface-presenting peptides such that the prediction biases the selection towards peptides associated with a region in a space, the region being associated with outlier peptides in the training data set for which expression levels and peptide-presentation metrics were related in a manner that departed from the population-level relationship.
At block 710, an expected number of peptides is calculated for a gene identified in the immunopeptidomics data. In particular, the expected number of peptides is calculated based on a number of transcripts (e.g., TPM) and a sequence length of the gene. At block 715, a ratio between the expected number of peptides and an observed number of peptides can be calculated to generate the gene propensity score (e.g., log10(observed/expected)). In some instances, the gene propensity score is added as an additional feature of the training data set.
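A minimal, non-limiting sketch of blocks 710 and 715 follows. The log10(observed/expected) ratio tracks the description above; the scaling constant and pseudocount are illustrative assumptions:

```python
import math

def expected_peptide_count(tpm, seq_length, scale=1e-4):
    # Expected peptides grow with both transcript abundance (TPM) and
    # the number of candidate peptides, which scales with gene length.
    # The scale factor is a hypothetical calibration constant.
    return tpm * seq_length * scale

def gene_propensity_score(observed, tpm, seq_length, pseudocount=1.0):
    # log10 ratio of observed to expected peptide counts; positive values
    # mean the gene yields more presented peptides than expression predicts.
    expected = expected_peptide_count(tpm, seq_length)
    return math.log10((observed + pseudocount) / (expected + pseudocount))
```

The pseudocount keeps the ratio defined for genes with zero observed or expected peptides; a score near zero indicates a gene whose observed peptide yield matches its expression-based expectation.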
A second feature generated from the immunopeptidomics data can include comparison data between expected peptide counts based on expression levels within one or more regions in a given gene and actual observed peptide count corresponding to the one or more regions. In contrast to the first feature that identifies gene expression levels across various genes, the second feature identifies expression levels of regions within a single gene. An expected quantity of peptides can be generated based on the identified expression levels. The expected quantity can be compared with the observed quantity of peptides to identify the second feature for the training data set, in which the second feature is indicative of one or more surface-presentation characteristics of regions within the corresponding gene.
In some instances, the first feature and the second feature are combined into the training data set. The trained machine-learning model trained from the training data with the combined features can further facilitate prediction of surface-presenting peptides such that the prediction is biased towards peptides associated with scores predicting presentation to be more probable relative to a probability expected by a population-level relationship between expression and presentation of the peptides. In addition, the trained machine-learning model can facilitate prediction of the surface-presenting peptides such that the prediction biases the selection towards peptides associated with a region in a space, the region being associated with outlier peptides in the training data set for which expression levels and peptide-presentation metrics were related in a manner that departed from the population-level relationship.
At step 910, a predicted number of peptides for each region of a particular gene is compared to an actual number of peptides for the region. At step 915, a hotspot score is calculated for the particular gene, in which the hotspot score identifies a distribution of observed peptide counts across the regions of the gene (e.g., ACTB gene, ACTC1 gene).
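One possible, non-limiting way to quantify such a distribution is a smoothed divergence between observed and expected per-region peptide fractions; the use of KL divergence and the add-one smoothing here are illustrative assumptions rather than the claimed computation:

```python
import math

def hotspot_score(observed_by_region, expected_by_region):
    # Compare the distribution of observed peptide counts across regions of
    # a gene with the distribution expected from per-region expression.
    # A higher score indicates peptides concentrate in "hotspot" regions.
    obs_total = sum(observed_by_region)
    exp_total = sum(expected_by_region)
    k = len(observed_by_region)
    score = 0.0
    for obs, exp in zip(observed_by_region, expected_by_region):
        p = (obs + 1) / (obs_total + k)  # smoothed observed fraction
        q = (exp + 1) / (exp_total + k)  # smoothed expected fraction
        score += p * math.log(p / q)     # KL divergence term
    return score
```

A gene whose observed peptides track its per-region expectations scores near zero, while a gene whose peptides cluster in a single region scores higher.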
The training data set can be used to train a machine-learning model for predicting surface-presenting peptides. The machine-learning model includes one or more sub-models configured to identify binding characteristics and surface-presentation characteristics of peptides in a sample. These sub-models can be separately trained with a corresponding subset of the training data set, such that each sub-model can predict the surface-presenting peptides based on parameters learned from features that correspond to the subsets.
In some instances, the machine-learning model includes a binding model and a presentation model, each of which is trained to process different features of the input data.
The presentation model 1010 can be trained not only using the information associated with peptides (e.g., peptide sequence, sequence of the MHC molecules that bind the peptides, peptide length), but also expression levels of the source protein from which the peptide was derived, surface-presentation characteristics of the peptides, gene propensity scores, and hotspot scores. The trained presentation model 1010 can thus identify binding characteristics of a given peptide and its surface-presentation characteristics, i.e., whether the peptide will be presented on a cell surface. Similar to the binding model 1005, the presentation model 1010 can include one or more trained gradient-boosting algorithms.
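A hedged sketch of a gradient-boosting presentation model follows, using scikit-learn's GradientBoostingClassifier on synthetic data; the feature layout and dimensions are hypothetical, not the trained model 1010 itself:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic training matrix with illustrative columns: [peptide length,
# gene expression (log TPM), gene propensity score, hotspot score].
# Labels mark peptides observed as presented in immunopeptidomics data.
X = rng.normal(size=(500, 4))
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

presentation_model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
presentation_model.fit(X, y)

# Score a candidate peptide: probability of binding and surface presentation.
score = presentation_model.predict_proba(X[:1])[0, 1]
```

In practice the model would be trained on the immunopeptidomics features described above rather than random data; the sketch only shows the shape of the fit/score workflow.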
The training data set from the database that includes the deconvoluted sets of peptides for each HLA allele can be used to train a third set of presentation and binding models 1120 ("final models"). The trained final models 1120 can be deployed to predict surface-presenting peptides. A purpose of building the training database to train the final models is to obtain as much allelic diversity as possible and avoid issues caused by underfitting or overfitting. Additionally or alternatively, trained intermediate models can also be deployed to predict the surface-presenting peptides, although performance levels of the trained final models tend to be superior to those of the trained intermediate models.
To evaluate performance of the trained machine-learning model, a test data set was generated comprising several experimentally observed peptides that were not part of the training process and synthetic decoys. The trained machine-learning model processed these candidate test peptides to output a score predictive of MHC Class I binding and cell-surface presentation, in which the machine-learning model was trained using large-scale immunopeptidome training data sets as described above. The scores are then compared with corresponding data derived from verified MHC-binding peptides that are presented on a cell surface to identify performance levels of the trained machine-learning model. The output scores were also benchmarked against NetMHCpan 4.0 (an existing platform for predicting binding of peptides to MHC molecules), and the trained machine-learning model demonstrated higher overall sensitivity and specificity. Based on the output scores, an antigen burden score of the predicted peptides can be calculated using peptides having output scores that pass a confidence threshold.
In another example, the trained machine-learning model was tested and evaluated using experimentally generated peptides, obtained using the mass spectrometry-based immunopeptidomics approaches described above from tissue samples, mixed with decoys in a 1:999 ratio. The positive predictive value in the top 0.1% of predicted ligands by the trained machine-learning model is significantly higher compared to NetMHCpan 4.0, a publicly available tool regarded as the gold standard for prediction of MHC-binding peptides. In yet another example, the trained machine-learning model was also evaluated using a leave-one-out analysis, in which a high degree of agreement was observed between motifs in the raw data and motifs predicted by the trained machine-learning model.
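The positive-predictive-value evaluation can be sketched as follows (the function name is illustrative). With a 1:999 hit-to-decoy ratio, the top 0.1% of 10,000 candidates comprises 10 calls, so a perfect model that ranks all 10 true ligands first achieves a PPV of 1.0:

```python
def ppv_at_top_fraction(scores, labels, fraction=0.001):
    # Rank candidates by model score (descending) and compute the positive
    # predictive value (fraction of true ligands) among the top calls.
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(label for _, label in ranked[:k]) / k
```

Here `labels` is 1 for an experimentally observed ligand and 0 for a decoy; the metric rewards models that concentrate true ligands at the very top of the ranking.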
The performance levels of the trained machine-learning model using multi-allelic samples further demonstrate an improvement of predicting surface-presenting peptides over conventional techniques such as NetMHCpan.
For example, the fraction value corresponding to NetMHCpan is approximately 0.65. This fraction value indicates that NetMHCpan was able to predict approximately 65% of the surface-presenting peptides that are actually present in the tissue sample. In comparison, the trained binding models perform better than NetMHCpan: the binding model trained with mono-allelic data achieved a fraction value of approximately 0.81, and the binding model trained with mono- and multi-allelic data achieved a fraction value of approximately 0.85. The presentation model trained with mono-allelic data and the presentation model trained with mono- and multi-allelic data performed even better, both corresponding to fraction values of approximately 0.9. Thus, the trained presentation models identified approximately 90% of the surface-presenting peptides experimentally identified in the tissue samples. Similar improvements in prediction of surface-presenting peptides were shown in other tissue samples.
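The fraction values above correspond to the share of experimentally observed surface-presenting peptides that a model also predicted, which can be computed as follows (a minimal sketch; the function name is illustrative):

```python
def recovered_fraction(predicted, observed):
    # Fraction of experimentally observed surface-presenting peptides that
    # the model also predicted; e.g., 0.9 means the model recovered 90% of
    # the peptides actually detected on the cell surface.
    predicted = set(predicted)
    observed = set(observed)
    return len(predicted & observed) / len(observed)
```

For instance, a model that predicts three of four observed peptides yields a fraction value of 0.75.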
At operation 1910, a computer system accesses a machine-learning model. The machine-learning model was trained using a training data set that included, for each peptide of a plurality of peptides identified by the training data set: protein characteristics of an MHC molecule (e.g., an HLA allele) that binds and presents the peptide on a cell surface; one or more expression levels representing an expression level of a gene encoding the peptide; and one or more peptide-presentation metrics representing a quantity of peptides detected as having been presented by the MHC molecule. The machine-learning model is configured to generate an output that indicates an extent to which the one or more expression levels and the one or more peptide-presentation metrics are related in accordance with a population-level relationship between expression and presentation.
At operation 1920, the computer system accesses genomic and transcriptomic data corresponding to a biological sample of a subject. The genomic data and transcriptomic data of the biological sample can be processed to identify candidate neoantigens (peptides). The genomic and transcriptomic data identify one or more MHC molecules from the biological sample and include, for each peptide of a set of peptides identified from the tissue sample (e.g., candidate neoantigens), one or more values representing the peptide. At least one of the one or more values can be determined based on processing of the tissue sample. The one or more values can correspond to a type of the peptide, a length of the peptide, an allele that binds the peptide, and expression of a gene region that encodes the peptide.
At operation 1930, the computer system determines, for each peptide of the set of peptides, a score using the machine-learning model, the one or more MHC molecules identified from the biological sample, and the one or more values representing the peptide in the genomic and transcriptomic data. In some instances, the computer system uses the trained machine-learning model to process the one or more values and output, for a given peptide, a score predictive of MHC-molecule binding and presentation.
At operation 1940, the computer system generates a result based on the score. The result may include an incomplete subset of peptides whose scores exceed a predefined threshold, in which the subset of peptides is predicted to be surface-presenting peptides. In some instances, the result may include motifs that correspond to each peptide of the subset. Additionally or alternatively, the result can include a subset of peptides having scores above a particular ranking percentile of scores (e.g., 0.02). In some instances, the result indicates, for each peptide of the set of peptides, whether the peptide is a surface-presenting peptide, i.e., a peptide that binds to a corresponding MHC molecule and is presented on a cell surface.
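The threshold- and percentile-based selection at operation 1940 can be sketched as follows (the function name and defaults are illustrative assumptions; exactly one of the two selection modes is used per call):

```python
def select_surface_presenting(peptide_scores, threshold=None, top_fraction=None):
    # Return the subset of peptides predicted to be surface-presenting,
    # either by an absolute score threshold or by keeping the top-ranked
    # fraction of candidates (e.g., top_fraction=0.02 keeps the top 2%).
    items = sorted(peptide_scores.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:
        return [p for p, s in items if s > threshold]
    k = max(1, int(len(items) * top_fraction))
    return [p for p, _ in items[:k]]
```

Either mode yields an incomplete subset of the candidate peptides, ordered from most to least likely to be presented.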
In some instances, the computer system selects an incomplete subset of the set of peptides, in which an identification of the incomplete subset is performed in a manner that biases the selection towards peptides associated with a region in a space, the region being associated with outlier peptides in the training data set for which expression levels and peptide-presentation metrics were related in a manner that departed from the population-level relationship.
At operation 1950, the computer system outputs the result. Process 1900 terminates thereafter.
Further, memory 2004 includes an operating system, programs, and applications. Processor 2002 is configured to execute the stored instructions and includes, for example, a logical processing unit, a microprocessor, a digital signal processor, and other processors. Memory 2004 and/or processor 2002 can be virtualized and can be hosted within another computing system of, for example, a cloud network or a data center. I/O peripherals 2008 include user interfaces, such as a keyboard, screen (e.g., a touch screen), microphone, speaker, other input/output devices, and computing components, such as graphical processing units, serial ports, parallel ports, universal serial buses, and other input/output peripherals. I/O peripherals 2008 are connected to processor 2002 through any of the ports coupled to interface bus 2012. Communication peripherals 2010 are configured to facilitate communication between computer system 2000 and other computing devices over a communications network and include, for example, a network interface controller, modem, wireless and wired interface cards, antenna, and other communication peripherals.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. Indeed, the methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the present disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the present disclosure.
Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computing systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain examples include, while other examples do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more examples or that one or more examples necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular example.
The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Similarly, the use of “based at least in part on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based at least in part on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of the present disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed examples. Similarly, the example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed examples.
The present application is a continuation of International Application No. PCT/US2021/037902, filed on Jun. 17, 2021, and claims priority to U.S. Provisional Application No. 63/040,943, entitled “Composite Biomarkers for Immunotherapy for Cancer,” filed Jun. 18, 2020, and U.S. Provisional Application No. 63/111,007, entitled “Machine-Learning Techniques For Predicting Surface-Presenting Peptides,” filed Nov. 7, 2020, the entire contents of which are incorporated herein by reference for all purposes.
| Number | Date | Country |
| --- | --- | --- |
| 63040943 | Jun 2020 | US |
| 63111007 | Nov 2020 | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/US2021/037902 | Jun 2021 | WO |
| Child | 18065410 | | US |