Atherosclerotic cardiovascular disease (ASCVD) is the primary cause of morbidity and mortality worldwide. Almost 60% of myocardial infarctions (MIs) occur in people with 0 or 1 risk factor. That is, the majority of people who experience a cardiac event are in the low-intermediate or intermediate risk categories as assessed by current methods.
A combination of genetic and environmental factors is responsible for the initiation and progression of the disease. Atherosclerosis is often asymptomatic and goes undetected by current diagnostic methods. In fact, for many, the first symptom of atherosclerotic cardiovascular disease is heart attack or sudden cardiac death.
An assay and method that can accurately predict and diagnose cardiovascular disease and its development are highly desirable.
The disclosure provides methods, assays and kits for assessing the cardiovascular health of a human. In one embodiment, a method for assessing the cardiovascular health of a human is provided comprising: a) obtaining a biological sample from a human; b) determining levels of at least 2 miRNA markers selected from the miRNAs listed in Table 20 in the biological sample; c) obtaining a dataset comprised of the levels of each miRNA marker; d) inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and e) determining a treatment regimen for the human based on the classification in step (d); wherein the cardiovascular health of the human is assessed.
In another embodiment, a method for assessing the cardiovascular health of a human is provided comprising: a) obtaining a biological sample from a human; b) determining levels of at least 3 protein markers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; c) obtaining a dataset comprised of the levels of each protein marker; d) inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and e) determining a treatment regimen for the human based on the classification in step (d); wherein the cardiovascular health of the human is assessed.
In a further embodiment, a method for assessing the cardiovascular health of a human to determine the need for or effectiveness of a treatment regimen is provided comprising: obtaining a biological sample from a human; determining levels of at least 2 miRNA markers selected from the miRNAs listed in Table 20 in the biological sample; determining levels of at least 3 protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; obtaining a dataset comprised of the individual levels of the miRNA markers and the protein biomarkers; inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
In yet another embodiment, a kit for assessing the cardiovascular health of a human to determine the need for or effectiveness of a treatment regimen is provided. The kit comprises: an assay for determining levels of at least two miRNA markers selected from the miRNAs listed in Table 20 in the biological sample and/or for determining the levels of at least three protein markers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; and instructions for (1) obtaining a dataset comprised of the levels of each miRNA and/or protein marker, (2) inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification, and (3) determining a treatment regimen for the human based on the classification.
In yet another embodiment, a method for assessing the risk of a cardiovascular event in a human is provided comprising: a) obtaining a biological sample from a human; b) determining levels of three or more protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF and/or two or more of the miRNAs in Table 20 in the sample; c) obtaining a dataset comprised of the levels of each protein and/or miRNA biomarker; d) inputting the data into a risk prediction analysis process to determine the risk of a cardiovascular event based on the dataset; and e) determining a treatment regimen for the human based on the predicted risk of a cardiovascular event in step (d); wherein the risk of a cardiovascular event of the human is assessed.
The disclosure provides methods, assays and kits for assessing the cardiovascular health of a human, and particularly for predicting, diagnosing, and monitoring atherosclerotic cardiovascular disease (ASCVD) in a human. The disclosed methods, assays and kits use circulating micro ribonucleic acid (miRNA) biomarkers and/or protein biomarkers, alone or in combination, to assess the cardiovascular health of a human.
In one embodiment, the disclosure provides a method for assessing the cardiovascular health of a human to determine the need for, or effectiveness of, a treatment regimen comprising: obtaining a biological sample from a human; determining levels of at least 2 miRNA markers selected from the miRNAs listed in Table 20 in the biological sample; obtaining a dataset comprised of the levels of each miRNA marker; inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
In certain embodiments, a method for assessing the cardiovascular health of a human to determine the need for, or effectiveness of, a treatment regimen is disclosed comprising: obtaining a biological sample from a human; determining levels of at least 3 protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; obtaining a dataset comprised of the levels of each protein marker; inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
In another embodiment, a method is provided for assessing the cardiovascular health of a human. In certain embodiments, the assessment can be used to determine the need for or effectiveness of a treatment regimen. The method comprises: obtaining a biological sample from a human; determining levels of at least two miRNA markers selected from the miRNAs listed in Table 20 in the biological sample; determining levels of at least three protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; obtaining a dataset comprised of the levels of the individual miRNA markers and the protein biomarkers; inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, and a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
In yet another embodiment, methods for assessing the risk of a cardiovascular event in a human are provided. The method comprises obtaining a biological sample from a human; and determining the levels of (1) three or more protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF and/or (2) two or more of the miRNAs in Table 20 in the sample. In the method, a dataset is obtained comprised of the levels of each protein and/or miRNA biomarker. The data is input into a risk prediction analysis process to predict the risk of a cardiovascular event based on the dataset; and a treatment regimen can be determined for the human based on the predicted risk of a cardiovascular event. The risk of a cardiovascular event can be predicted for about 1 year, about 2 years, about 3 years, about 4 years, about 5 years or more from the date on which the sample is obtained and/or analyzed. The predicted cardiovascular event, as described below, can be development of atherosclerotic disease, an MI, etc.
The terms “marker” and “biomarker” are used interchangeably throughout the disclosure.
In the disclosed methods, the number of miRNA markers that are detected and whose levels are determined, can be 1, or more than 1, such as 2, 3, 4, 5, 6, 7, 8, 9, 10, or more. In certain embodiments, the number of miRNA markers detected is 3, or 5, or more. The number of protein biomarkers that are detected, and whose levels are determined, can be 1, or more than 1, such as 2, 3, 4, 5, 6, 7, 8, 9, 10, or more. In certain embodiments, 1, 2, 3, or 5 or more miRNA markers are detected and levels are determined and 1, 2, 3, or 5 or more protein biomarkers are detected and levels are determined.
The methods of this disclosure are useful for diagnosing and monitoring atherosclerotic disease. Atherosclerotic disease is also known as atherosclerosis, arteriosclerosis, atheromatous vascular disease, arterial occlusive disease, or cardiovascular disease, and is characterized by plaque accumulation on vessel walls and vascular inflammation. Vascular inflammation is a hallmark of active atherosclerotic disease, unstable plaque, or vulnerable plaque. The plaque consists of accumulated intracellular and extracellular lipids, smooth muscle cells, connective tissue, inflammatory cells, and glycosaminoglycans. Certain plaques also contain calcium. Unstable, active, or vulnerable plaques are enriched with inflammatory cells.
By way of example, the present disclosure includes methods for generating a result useful in diagnosing and monitoring atherosclerotic disease by obtaining a dataset associated with a sample, where the dataset at least includes quantitative data about miRNA markers alone or in combination with protein biomarkers that have been identified as predictive of atherosclerotic disease, and inputting the dataset into an analytic process that uses the dataset to generate a result useful in diagnosing and monitoring atherosclerotic disease. This quantitative data can include DNA, RNA, or protein expression levels, or a combination thereof.
The methods, assays and kits disclosed are also useful for diagnosing and monitoring complications of cardiovascular disease, including myocardial infarction (MI), acute coronary syndrome, stroke, heart failure, and angina. An example of a common complication is MI, which refers to ischemic myocardial necrosis usually resulting from abrupt reduction in coronary blood flow to a segment of myocardium. In the great majority of patients with acute MI, an acute thrombus, often associated with plaque rupture, occludes the artery that supplies the damaged area. Plaque rupture occurs generally in arteries previously partially obstructed by an atherosclerotic plaque enriched in inflammatory cells. Another example of a common atherosclerotic complication is angina, a condition with symptoms of chest pain or discomfort resulting from inadequate blood flow to the heart.
The present disclosure identifies profiles of biomarkers of inflammation that can be used for diagnosis and classification of atherosclerotic cardiovascular disease as well as prediction of the risk of a cardiovascular event (e.g., MI) within a specific period of time from blood draw for a given individual. The miRNA and protein biomarkers assayed in the present disclosure are those identified using a learning algorithm as being capable of distinguishing between different atherosclerotic classifications, e.g., diagnosis, staging, prognosis, monitoring, therapeutic response, and prediction of pseudo-coronary calcium score. Other data useful for making atherosclerotic classifications, such as clinical indicia (e.g., traditional risk factors) may also be a part of a dataset used to generate a result useful for atherosclerotic classification.
Datasets containing quantitative data for the various miRNA and protein biomarkers markers disclosed herein, alone or in combination, and quantitative data for other dataset components (e.g., DNA, RNA, measures of clinical indicia) can be input into an analytical process and used to generate a result. The analytic process may be any type of learning algorithm with defined parameters, or in other words, a predictive model. Predictive models can be developed for a variety of atherosclerotic classifications or risk prediction by applying learning algorithms to the appropriate type of reference or control data. The result of the analytical process/predictive model can be used by an appropriate individual to take the appropriate course of action. For example, if the classification is “healthy” or “atherosclerotic cardiovascular disease”, then a result can be used to determine the appropriate clinical course of treatment for an individual.
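By way of illustration only, the classification step can be sketched in code. The example below, which assumes hypothetical marker levels and is not the disclosed predictive model itself, trains a simple nearest-centroid classifier (the rule underlying "prediction analysis of microarray" style classifiers) on levels of three protein markers and assigns a new sample to the class whose centroid is nearest:

```python
import math

def train_centroids(samples, labels):
    """Compute per-class mean vectors (centroids) from training data."""
    sums, counts = {}, {}
    for vec, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Hypothetical levels of three protein markers (values are illustrative only)
train = [[5.1, 2.0, 8.3], [4.9, 2.2, 7.9], [1.2, 0.8, 1.1], [1.0, 0.9, 1.4]]
labels = ["atherosclerotic cardiovascular disease"] * 2 + ["healthy"] * 2
model = train_centroids(train, labels)
print(classify(model, [4.8, 2.1, 8.0]))  # nearest the ASCVD centroid
```

In practice, the predictive models described herein would be trained with a learning algorithm on appropriate reference data rather than on a toy dataset of this kind.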
MicroRNA (also referred to herein as miRNA, μRNA, or mi-R) is a form of single-stranded RNA molecule of about 17-27 nucleotides in length that regulates gene expression. miRNAs are encoded by genes and transcribed from DNA, but they are not translated into protein (i.e., they are non-coding RNAs); instead, each primary transcript (a pri-miRNA) is processed into a short stem-loop structure called a pre-miRNA and finally into a functional, mature miRNA.
miRNA markers associated with inflammation and useful for assessing the cardiovascular health of a human include, but are not limited to, one or more of miR-26a, miR-16, miR-222, miR-10b, miR-93, miR-192, miR-15a, miR-125a-5p, miR-130a, miR-92a, miR-378, miR-20a, miR-20b, miR-107, miR-186, hsa-let-7f, miR-19a, miR-150, miR-106b, miR-30c, and let-7b. In certain embodiments, the miRNA markers include one or more of miR-26a, miR-16, miR-222, miR-10b, miR-93, miR-192, miR-15a, miR-125a-5p, miR-130a, miR-92a, miR-378, and let-7b. In particular, the miRNAs listed in Table 20 are useful in assessing the cardiovascular health of a human.
Protein biomarkers associated with inflammation and useful for assessing the cardiovascular health of a human include, but are not limited to, one or more of RANTES, TIMP1, MCP-1, MCP-2, MCP-3, MCP-4, eotaxin, IP-10, M-CSF, IL-3, TNFa, Ang-2, IL-5, IL-7, IGF-1, sVCAM, sICAM-1, E-selectin, P-selectin, interleukin-6, interleukin-18, creatine kinase, LDL, oxLDL, LDL particle size, Lipoprotein(a), troponin I, troponin T, LPPLA2, CRP, HDL, triglycerides, insulin, BNP, fractalkine, osteopontin, osteoprotegerin, oncostatin-M, myeloperoxidase, ADMA, PAI-1 (plasminogen activator inhibitor), SAA (serum amyloid A), t-PA (tissue-type plasminogen activator), sCD40 ligand, fibrinogen, homocysteine, D-dimer, leukocyte count, heart-type fatty acid binding protein, MMP1, plasminogen, folate, vitamin B6, leptin, soluble thrombomodulin, PAPPA, MMP9, MMP2, VEGF, PIGF, HGF, vWF, and cystatin C. In certain embodiments, the protein biomarkers include one or more of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF. In addition to the specific biomarkers, the disclosure further includes biomarker variants that are about 90%, about 95%, or about 97% identical to the exemplified sequences. Variants, as used herein, include polymorphisms, splice variants, mutations, and the like.
Protein biomarkers can be detected in a variety of ways. For example, in vivo imaging may be utilized to detect the presence of atherosclerosis-associated proteins in heart tissue. Such methods may utilize, for example, labeled antibodies or ligands specific for such proteins. In these embodiments, a detectably-labeled moiety, e.g., an antibody, ligand, etc., which is specific for the polypeptide is administered to an individual (e.g., by injection), and labeled cells are located using standard imaging techniques, including, but not limited to, magnetic resonance imaging, computed tomography scanning, and the like. Detection may utilize one, or a cocktail of, imaging reagents.
Additional markers can be selected from one or more clinical indicia, including but not limited to, age, gender, LDL concentration, HDL concentration, triglyceride concentration, blood pressure, body mass index, CRP concentration, coronary calcium score, waist circumference, tobacco smoking status, previous history of cardiovascular disease, family history of cardiovascular disease, heart rate, fasting insulin concentration, fasting glucose concentration, diabetes status, and use of high blood pressure medication. Additional clinical indicia useful for making atherosclerotic classifications can be identified using learning algorithms known in the art, such as linear discriminant analysis, support vector machine classification, recursive feature elimination, prediction analysis of microarray, logistic regression, CART, FlexTree, LART, random forest, MART, and/or survival analysis regression, which are known to those of skill in the art and are further described herein.
The analytical classification process disclosed herein can comprise the use of a predictive model. The predictive model can have a quality metric of at least about 0.68 or higher for classification. In certain embodiments, the quality metric is at least about 0.70 or higher for classification. In certain embodiments, the quality metric is selected from area under the curve (AUC), hazard ratio (HR), relative risk (RR), reclassification, positive predictive value (PPV), negative predictive value (NPV), accuracy, sensitivity, specificity, Net Reclassification Index, and Clinical Net Reclassification Index. These and other metrics can be used as described herein. Further, various terms can be selected to provide a quality metric.
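For illustration, several of these quality metrics can be computed directly from classifier scores and known class labels. The sketch below uses hypothetical scores and labels (not data from the disclosure) and computes AUC by the rank-sum identity, together with sensitivity and specificity at a chosen score cutoff:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a randomly chosen positive scores higher than a random negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) at a score cutoff."""
    tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= threshold)
    fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s < threshold)
    tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s < threshold)
    fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical classifier scores (1 = ASCVD, 0 = healthy)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))                           # 8/9, about 0.889
print(sensitivity_specificity(scores, labels, 0.5))  # (2/3, 2/3)
```

A quality metric "of at least about 0.68" would here correspond to, for example, an AUC of 0.68 or higher on an appropriate validation set.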
Quantitative data is obtained for each component of the dataset and input into an analytic process with previously defined parameters (the predictive model) and then used to generate a result.
The data may be obtained via any technique that results in an individual receiving data associated with a sample. For example, an individual may obtain the dataset by generating the dataset himself by methods known to those in the art. Alternatively, the dataset may be obtained by receiving a dataset or one or more data values from another individual or entity. For example, a laboratory professional may generate certain data values while another individual, such as a medical professional, may input all or part of the dataset into an analytic process to generate the result.
One of skill should understand that although reference is made to “a sample” throughout the disclosure, the quantitative data may be obtained from multiple samples varying in any number of characteristics, such as the method of procurement, time of procurement, tissue origin, etc.
In methods of generating a result useful for atherosclerotic classification, the expression pattern in blood, serum, etc. of the protein markers provided herein is obtained. The quantitative data associated with the protein markers of interest can be any data that allows generation of a result useful for atherosclerotic classification, including measurement of DNA or RNA levels associated with the markers but is typically protein expression patterns. Protein levels can be measured via any method known to those of skill in the art that generates a quantitative measurement either individually or via high-throughput methods as part of an expression profile. For example, a blood-derived patient sample, e.g., blood, plasma, serum, etc. may be applied to a specific binding agent or panel of specific binding agents to determine the presence and quantity of the protein markers of interest.
Blood samples, or samples derived from blood, e.g. plasma, serum, etc. are assayed for the presence of expression levels of the miRNA markers alone or in combination with protein markers of interest. Typically a blood sample is drawn, and a derivative product, such as plasma or serum, is tested. In addition, the sample can be derived from other bodily fluids such as saliva, urine, semen, milk or sweat. Samples can further be derived from tissue, such as from a blood vessel, such as an artery, vein, capillary and the like. Further, when both miRNA and protein biomarkers are assayed, they can be derived from the same or different samples. That is, for example, an miRNA biomarker can be assayed in a blood derived sample and a protein biomarker can be assayed in a tissue sample.
The quantitative data associated with the miRNA and protein markers of interest typically takes the form of an expression profile. Expression profiles constitute a set of relative or absolute expression values for a number of miRNA or protein products corresponding to the plurality of markers evaluated. In various embodiments, expression profiles containing expression patterns for at least about 2, 3, 4, 5, 6, 7 or more markers are produced. The expression pattern for each differentially expressed component member of the expression profile may provide a particular specificity and sensitivity with respect to predictive value, e.g., for diagnosis, prognosis, monitoring treatment, etc.
Numerous methods for obtaining expression data are known, and any one or more of these techniques, singly or in combination, are suitable for determining expression patterns and profiles in the context of the present disclosure.
For example, DNA and RNA (mRNA, pri-miRNA, pre-miRNA, miRNA, precursor hairpin RNA, microRNP, and the like) expression patterns can be evaluated by northern analysis, PCR, RT-PCR, Taq Man analysis, FRET detection, monitoring one or more molecular beacon, hybridization to an oligonucleotide array, hybridization to a cDNA array, hybridization to a polynucleotide array, hybridization to a liquid microarray, hybridization to a microelectric array, cDNA sequencing, clone hybridization, cDNA fragment fingerprinting, serial analysis of gene expression (SAGE), subtractive hybridization, differential display and/or differential screening. These and other techniques are well known to those of skill in the art.
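As one concrete example of turning RT-PCR measurements into expression values, the comparative Ct (2^-ΔΔCt) method of relative quantification is commonly used. The sketch below assumes hypothetical Ct values and a hypothetical stably expressed reference miRNA; it is illustrative only and not a required step of the disclosed methods:

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_ref_cal):
    """Comparative Ct (2^-ddCt) relative quantification:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(calibrator).
    Returns the fold change of the target relative to the calibrator sample."""
    ddct = (ct_target - ct_reference) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)

# Hypothetical Ct values: an miRNA of interest vs. a reference miRNA,
# measured in a patient sample and in a healthy calibrator sample.
fold = relative_expression(ct_target=24.0, ct_reference=20.0,
                           ct_target_cal=26.0, ct_ref_cal=20.0)
print(fold)  # 4.0 (the miRNA is 4-fold higher than in the calibrator)
```

Note that this method assumes approximately equal amplification efficiencies for the target and reference assays; efficiency-corrected variants exist where that assumption does not hold.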
The present disclosure includes nucleic acid molecules, preferably in isolated form. As used herein, a nucleic acid molecule is said to be “isolated” when the nucleic acid molecule is substantially separated from contaminant nucleic acid molecules encoding other polypeptides. The term “nucleic acid” is defined as coding and noncoding RNA or DNA. Nucleic acids that are complementary to, that is, hybridize to, and remain stably bound to the molecules under appropriate stringency conditions are included within the scope of this disclosure. Such sequences exhibit at least 50%, 60%, 70% or 75%, preferably at least about 80-90%, more preferably at least about 92-94%, and even more preferably at least about 95%, 98%, 99% or more nucleotide sequence identity with the RNAs disclosed herein, and include insertions, deletions, wobble bases, substitutions and the like. Further contemplated are sequences sharing at least about 50%, 60%, 70% or 75%, preferably at least about 80-90%, more preferably at least about 92-94%, and most preferably at least about 95%, 98%, 99% or more identity with the protein biomarker sequences disclosed herein.
Specifically contemplated within the scope of the disclosure are genomic DNA, cDNA, RNA (mRNA, pri-miRNA, pre-miRNA, miRNA, hairpin precursor RNA, RNP, etc.) molecules, as well as nucleic acids based on alternative backbones or including alternative bases, whether derived from natural sources or synthesized.
Homology or identity at the nucleotide or amino acid sequence level is determined by BLAST (Basic Local Alignment Search Tool) analysis using the algorithm employed by the programs blastp, blastn, blastx, tblastn and tblastx which are tailored for sequence similarity searching. The approach used by the BLAST program is to first consider similar segments, with and without gaps, between a query sequence and a database sequence, then to evaluate the statistical significance of all matches that are identified and finally to summarize only those matches which satisfy a preselected threshold of significance. The search parameters for histogram, descriptions, alignments, expect (i.e., the statistical significance threshold for reporting matches against database sequences), cutoff, matrix and filter (low complexity) are at the default settings. The default scoring matrix used by blastp, blastx, tblastn, and tblastx is the BLOSUM62 matrix, recommended for query sequences over 85 nucleotides or amino acids in length.
For blastn, the scoring matrix is set by the ratios of M (i.e., the reward score for a pair of matching residues) to N (i.e., the penalty score for mismatching residues), wherein the default values for M and N are 5 and −4, respectively. Four blastn parameters were adjusted as follows: Q=10 (gap creation penalty); R=10 (gap extension penalty); wink=1 (generates word hits at every winkth position along the query); and gapw=16 (sets the window width within which gapped alignments are generated). The equivalent blastp parameter settings were Q=9; R=2; wink=1; and gapw=32. A Bestfit comparison between sequences, available in the GCG package version 10.0, uses DNA parameters GAP=50 (gap creation penalty) and LEN=3 (gap extension penalty), and the equivalent settings in protein comparisons are GAP=8 and LEN=2.
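The blastn default match/mismatch scores can be illustrated on a simple ungapped, equal-length comparison. The sketch below applies M=5 and N=−4 to count a raw alignment score and percent identity; this is a simplified illustration, not BLAST's full seeded, gapped heuristic:

```python
def ungapped_score_and_identity(query, subject, m=5, n=-4):
    """Score an ungapped, equal-length nucleotide comparison with
    blastn-style match/mismatch values (defaults M=5, N=-4),
    and report the percent identity of the two sequences."""
    assert len(query) == len(subject), "sequences must be equal length"
    matches = sum(1 for a, b in zip(query, subject) if a == b)
    score = matches * m + (len(query) - matches) * n
    return score, 100.0 * matches / len(query)

# Two hypothetical 10-nt sequences differing at one position
score, ident = ungapped_score_and_identity("ACGTACGTAC", "ACGTTCGTAC")
print(score, ident)  # 9 matches, 1 mismatch -> score 41, 90.0% identity
```

Percent identity computed this way corresponds to the sequence-identity thresholds (e.g., at least about 95%, 98%, or 99%) recited above for variant nucleic acids.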
“Stringent conditions” are those that (1) employ low ionic strength and high temperature for washing, for example, 0.015 M NaCl/0.0015 M sodium citrate/0.1% SDS at 50° C., or (2) employ during hybridization a denaturing agent such as formamide, for example, 50% (vol/vol) formamide with 0.1% bovine serum albumin/0.1% Ficoll/0.1% polyvinylpyrrolidone/50 mM sodium phosphate buffer at pH 6.5 with 750 mM NaCl, 75 mM sodium citrate at 42° C. Another example is hybridization in 50% formamide, 5×SSC (0.75 M NaCl, 0.075 M sodium citrate), 50 mM sodium phosphate (pH 6.8), 0.1% sodium pyrophosphate, 5×Denhardt's solution, sonicated salmon sperm DNA (50 μg/ml), 0.1% SDS, and 10% dextran sulfate at 42° C., with washes at 42° C. in 0.2×SSC and 0.1% SDS. A skilled artisan can readily determine and vary the stringency conditions appropriately to obtain a clear and detectable hybridization signal.
The present disclosure further provides fragments of the disclosed nucleic acid molecules. As used herein, a fragment of a nucleic acid molecule refers to a small portion of the coding or non-coding sequence. The size of the fragment will be determined by the intended use. For example, if the fragment is chosen so as to encode an active portion of the protein, the fragment will need to be large enough to encode the functional region(s) of the protein. For instance, fragments which encode peptides corresponding to predicted antigenic regions may be prepared. If the fragment is to be used as a nucleic acid probe or PCR primer, then the fragment length is chosen so as to obtain a relatively small number of false positives during probing/priming.
Protein expression patterns can be evaluated by any method known to those of skill in the art which provides a quantitative measure and is suitable for evaluation of multiple markers extracted from samples such as one or more of the following methods: ELISA sandwich assays, flow cytometry, mass spectrometric detection, colorimetric assays, binding to a protein array (e.g., antibody array), or fluorescence-activated cell sorting (FACS).
In one embodiment, an approach involves the use of labeled affinity reagents (e.g., antibodies, small molecules, etc.) that recognize epitopes of one or more protein products in an ELISA, antibody-labelled fluorescent bead array, antibody array, or FACS screen. Methods for producing and evaluating antibodies are well known in the art.
A number of suitable high throughput formats exist for evaluating expression patterns and profiles of the disclosed biomarkers. Typically, the term high throughput refers to a format that performs at least about 100 assays, or at least about 500 assays, or at least about 1000 assays, or at least about 5000 assays, or at least about 10,000 assays, or more per day. When enumerating assays, either the number of samples or the number of markers assayed can be considered.
Numerous technological platforms for performing high throughput expression analysis are known. Generally, such methods involve a logical or physical array of either the subject samples, or the protein markers, or both. Common array formats include both liquid and solid phase arrays. For example, assays employing liquid phase arrays, e.g., for hybridization of nucleic acids, binding of antibodies or other receptors to ligand, etc., can be performed in multiwell or microtiter plates. Microtiter plates with 96, 384 or 1536 wells are widely available, and even higher numbers of wells, e.g., 3456 and 9600, can be used. In general, the choice of microtiter plates is determined by the methods and equipment, e.g., robotic handling and loading systems, used for sample preparation and analysis. Exemplary systems include, e.g., xMAP® technology from Luminex (Austin, Tex.), the SECTOR® Imager with MULTI-ARRAY® and MULTI-SPOT® technologies from Meso Scale Discovery (Gaithersburg, Md.), the ORCA™ system from Beckman-Coulter, Inc. (Fullerton, Calif.), the ZYMATE™ systems from Zymark Corporation (Hopkinton, Mass.), and the miRCURY LNA™ microRNA Arrays from Exiqon (Woburn, Mass.).
Alternatively, a variety of solid phase arrays can favorably be employed to determine expression patterns in the context of the disclosed methods, assays and kits. Exemplary formats include membrane or filter arrays (e.g., nitrocellulose, nylon), pin arrays, and bead arrays (e.g., in a liquid “slurry”). Typically, probes corresponding to nucleic acid or protein reagents that specifically interact with (e.g., hybridize to or bind to) an expression product corresponding to a member of the candidate library, are immobilized, for example by direct or indirect cross-linking, to the solid support. Essentially any solid support capable of withstanding the reagents and conditions necessary for performing the particular expression assay can be utilized. For example, functionalized glass, silicon, silicon dioxide, modified silicon, any of a variety of polymers, such as (poly)tetrafluoroethylene, (poly)vinylidenedifluoride, polystyrene, polycarbonate, or combinations thereof can all serve as the substrate for a solid phase array.
In one embodiment, the array is a “chip” composed, e.g., of one of the above-specified materials. Polynucleotide probes, e.g., RNA or DNA, such as cDNA, synthetic oligonucleotides, and the like, or binding proteins such as antibodies or antigen-binding fragments or derivatives thereof, that specifically interact with expression products of individual components of the candidate library are affixed to the chip in a logically ordered manner, i.e., in an array. In addition, any molecule with a specific affinity for either the sense or anti-sense sequence of the marker nucleotide sequence (depending on the design of the sample labeling) can be fixed to the array surface without loss of specific affinity for the marker; examples include proteins that specifically recognize the specific nucleic acid sequence of the marker, ribozymes, peptide nucleic acids (PNA), and other chemicals or molecules with specific affinity.
Microarray expression may be detected by scanning the microarray with a variety of laser or CCD-based scanners, and extracting features with numerous software packages, for example, IMAGENE™ (Biodiscovery), Feature Extraction Software (Agilent), SCANLYZE™ (Stanford Univ., Stanford, Calif.), GENEPIX™ (Axon Instruments).
High-throughput protein systems include commercially available systems from Ciphergen Biosystems, Inc. (Fremont, Calif.) such as PROTEIN CHIP™ arrays, and the FASTQUANT™ human chemokine protein microspot array (S&S Biosciences Inc., Keene, N.H., US).
Quantitative data regarding other dataset components, such as clinical indicia, metabolic measures, and genetic assays, can be determined via methods known to those of skill in the art.
The quantitative data thus obtained about the miRNA, protein markers and other dataset components (i.e., clinical indicia and the like) is subjected to an analytic process with parameters previously determined using a learning algorithm, i.e., inputted into a predictive model. The parameters of the analytic process may be those disclosed herein or those derived using the guidelines described herein. Learning algorithms such as linear discriminant analysis, recursive feature elimination, prediction analysis of microarrays, logistic regression, CART, FlexTree, LART, random forest, MART, or another machine learning algorithm are applied to the appropriate reference or training data to determine the parameters for analytical processes suitable for a variety of atherosclerotic classifications.
The analytic process used to generate a result (classification, survival/time-to-event, etc.) may be any type of process capable of providing a result useful for classifying a sample, for example, comparison of the obtained dataset with a reference dataset, a linear algorithm, a quadratic algorithm, a decision tree algorithm, or a voting algorithm.
Various analytic processes for obtaining a result useful for making an atherosclerotic classification are described herein, however, one of skill in the art will readily understand that any suitable type of analytic process is within the scope of this disclosure.
Prior to input into the analytical process, the data in each dataset is collected by measuring the values for each marker, usually in duplicate or triplicate or in multiple replicates. The data may be manipulated, for example, raw data may be transformed using standard curves, and the average of replicate measurements used to calculate the average and standard deviation for each patient. These values may be transformed before being used in the models, e.g. log-transformed, Box-Cox transformed, etc. This data can then be input into the analytical process with defined parameters.
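As an illustration, the replicate-averaging and transformation steps described above can be sketched as follows; the triplicate values and the choice of a natural-log transform are hypothetical.

```python
import math

def summarize_replicates(replicates):
    """Average replicate measurements for one marker in one patient
    and report the mean and sample standard deviation."""
    n = len(replicates)
    mean = sum(replicates) / n
    var = sum((x - mean) ** 2 for x in replicates) / (n - 1)
    return mean, math.sqrt(var)

def log_transform(value):
    """Log-transform a raw marker level before it enters the model."""
    return math.log(value)

# Hypothetical triplicate measurement of a single marker
mean, sd = summarize_replicates([4.0, 4.4, 4.2])
feature = log_transform(mean)
```

Other transforms (standard-curve interpolation, Box-Cox, etc.) would slot in at the same point in the pipeline.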
The analytic process may set a threshold for determining the probability that a sample belongs to a given class. The probability preferably is at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, or higher.
In other embodiments, the analytic process determines whether a comparison between an obtained dataset and a reference dataset yields a statistically significant difference. If so, then the sample from which the dataset was obtained is classified as not belonging to the reference dataset class. Conversely, if such a comparison is not statistically significantly different from the reference dataset, then the sample from which the dataset was obtained is classified as belonging to the reference dataset class.
In general, the analytical process will be in the form of a model generated by a statistical analytical method such as those described below. Examples of such analytical processes may include a linear algorithm, a quadratic algorithm, a polynomial algorithm, a decision tree algorithm, or a voting algorithm. A linear algorithm may have the form:

R=C0+C1x1+C2x2+ . . . +CNxN
where R is the useful result obtained. C0 is a constant that may be zero. Ci and xi are, respectively, the coefficient and the measured value of the i-th biomarker or clinical indicium, and N is the total number of markers.
A quadratic algorithm may have the form:

R=C0+C1x1^2+C2x2^2+ . . . +CNxN^2
where R is the useful result obtained. C0 is a constant that may be zero. Ci and xi are, respectively, the coefficient and the measured value of the i-th biomarker or clinical indicium, and N is the total number of markers.
A polynomial algorithm is a more generalized form of a linear or quadratic algorithm that may have the form:

R=C0+C1x1^y1+C2x2^y2+ . . . +CNxN^yN
where R is the useful result obtained. C0 is a constant that may be zero. Ci and xi are, respectively, the coefficient and the measured value of the i-th biomarker or clinical indicium; yi is the power to which xi is raised; and N is the total number of markers.
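The three forms above can be evaluated directly once the constants are known. The sketch below uses hypothetical coefficients and marker values; the quadratic case is obtained from the polynomial form by setting every yi to 2.

```python
def linear_result(c0, coeffs, values):
    """R = C0 + C1*x1 + C2*x2 + ... + CN*xN"""
    return c0 + sum(c * x for c, x in zip(coeffs, values))

def polynomial_result(c0, coeffs, values, powers):
    """R = C0 + C1*x1**y1 + ... + CN*xN**yN; quadratic when every yi = 2."""
    return c0 + sum(c * x ** y for c, x, y in zip(coeffs, values, powers))

# Hypothetical two-marker example: C0 = 1, C = (2, 3), x = (4, 5)
r_linear = linear_result(1.0, [2.0, 3.0], [4.0, 5.0])
r_quadratic = polynomial_result(1.0, [2.0, 3.0], [4.0, 5.0], [2, 2])
```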
Using any suitable learning algorithm, an appropriate reference or training dataset can be used to determine the parameters of the analytical process to be used for classification, i.e., develop a predictive model. The reference or training dataset to be used will depend on the desired atherosclerotic classification to be determined. The dataset may include data from two, three, four or more classes. For example, to use a supervised learning algorithm to determine the parameters for an analytic process used to diagnose atherosclerosis, a dataset comprising control and diseased samples is used as a training set. Alternatively, if a supervised learning algorithm is to be used to develop a predictive model for atherosclerotic staging, then the training set may include data for each of the various stages of cardiovascular disease.
The following are examples of the types of statistical analysis methods that are available to one of skill in the art to aid in the practice of the disclosed methods, assays and kits. The statistical analysis may be applied for one or both of two tasks. First, these and other statistical methods may be used to identify preferred subsets of markers and other indicia that will form a preferred dataset. In addition, these and other statistical methods may be used to generate the analytical process that will be used with the dataset to generate the result. Several of the statistical methods presented herein or otherwise available in the art will perform both of these tasks and yield a model that is suitable for use as an analytical process for the practice of the methods disclosed herein.
Biomarkers whose corresponding feature values (e.g., concentration, expression level) are capable of discriminating between, e.g., healthy and atherosclerotic subjects, are identified herein. The identity of these markers and their corresponding features (e.g., concentration, expression level) can be used to develop an analytical process, or plurality of analytical processes, that discriminate between classes of patients. The examples below illustrate how data analysis algorithms can be used to construct a number of such analytical processes. Each of the data analysis algorithms described in the examples uses features (e.g., expression values) of a subset of the markers identified herein across a training population that includes healthy and atherosclerotic patients. Specific data analysis algorithms for building an analytical process, or plurality of analytical processes, that discriminate between subjects disclosed herein will be described in the subsections below. Once an analytical process has been built using these exemplary data analysis algorithms or other techniques known in the art, the analytical process can be used to classify a test subject into one of the two or more phenotypic classes (e.g. a healthy or atherosclerotic patient) and/or predict survival/time-to-event. This is accomplished by applying one or more analytical processes to one or more marker profile(s) obtained from the test subject. Such analytical processes, therefore, have enormous value as diagnostic indicators.
The disclosed methods, assays and kits provide, in one aspect, for the evaluation of one or more marker profile(s) from a test subject to marker profiles obtained from a training population. In some embodiments, each marker profile obtained from subjects in the training population, as well as the test subject, comprises a feature for each of a plurality of different markers. In some embodiments, this comparison is accomplished by (i) developing an analytical process using the marker profiles from the training population and (ii) applying the analytical process to the marker profile from the test subject. As such, the analytical process applied in some embodiments of the methods disclosed herein is used to determine whether a test subject has atherosclerosis. In alternate embodiments, the methods disclosed herein determine whether or not a subject will experience a MI, and/or can predict time-to-event (e.g. MI and/or survival).
In some embodiments of the methods disclosed herein, when the results of the application of an analytical process indicate that the subject will likely experience a MI, the subject is diagnosed/classified as a “MI” subject. Alternately, if, for example, the results of the analytical process indicate that a subject will likely develop atherosclerosis, the subject is diagnosed as an “atherosclerotic” subject. If the results of an application of an analytical process indicate that the subject will not develop atherosclerosis, the subject is diagnosed as a healthy subject. Thus, in some embodiments, the result in the above-described binary decision situation has four possible outcomes: (i) truly atherosclerotic, where the analytical process indicates that the subject will develop atherosclerosis and the subject does in fact develop atherosclerosis during the definite time period (true positive, TP); (ii) falsely atherosclerotic, where the analytical process indicates that the subject will develop atherosclerosis and the subject, in fact, does not develop atherosclerosis during the definite time period (false positive, FP); (iii) truly healthy, where the analytical process indicates that the subject will not develop atherosclerosis and the subject, in fact, does not develop atherosclerosis during the definite time period (true negative, TN); or (iv) falsely healthy, where the analytical process indicates that the subject will not develop atherosclerosis and the subject, in fact, does develop atherosclerosis during the definite time period (false negative, FN).
It will be appreciated that other definitions for TP, FP, TN, FN can be made. While all such alternative definitions are within the scope of the disclosed methods, assays and kits, for ease of understanding, the definitions for TP, FP, TN, and FN given by definitions (i) through (iv) above will be used herein, unless otherwise stated.
As will be appreciated by those of skill in the art, a number of quantitative criteria can be used to communicate the performance of the comparisons made between a test marker profile and reference marker profiles (e.g., the application of an analytical process to the marker profile from a test subject). These include positive predictive value (PPV), negative predictive value (NPV), specificity, sensitivity, accuracy, and certainty. In addition, other constructs, such as receiver operating characteristic (ROC) curves, can be used to evaluate analytical process performance. As used herein: PPV=TP/(TP+FP), NPV=TN/(TN+FN), specificity=TN/(TN+FP), sensitivity=TP/(TP+FN), and accuracy=certainty=(TP+TN)/N.
Here, N is the number of samples compared (e.g., the number of test samples for which a determination of atherosclerotic or healthy is sought). For example, consider the case in which there are ten subjects for which this classification is sought. Marker profiles are constructed for each of the ten test subjects. Then, each of the marker profiles is evaluated by applying an analytical process, where the analytical process was developed based upon marker profiles obtained from a training population. In this example, N, from the above equations, is equal to 10. Typically, N is a number of samples, where each sample was collected from a different member of a population. This population can, in fact, be of two different types. In one type, the population comprises subjects whose samples and phenotypic data (e.g., feature values of markers and an indication of whether or not the subject developed atherosclerosis) was used to construct or refine an analytical process. Such a population is referred to herein as a training population. In the other type, the population comprises subjects that were not used to construct the analytical process. Such a population is referred to herein as a validation population. Unless otherwise stated, the population represented by N is either exclusively a training population or exclusively a validation population, as opposed to a mixture of the two population types. It will be appreciated that scores such as accuracy will be higher (closer to unity) when they are based on a training population as opposed to a validation population. Nevertheless, unless otherwise explicitly stated herein, all criteria used to assess the performance of an analytical process (or other forms of evaluation of a biomarker profile from a test subject) including certainty (accuracy) refer to criteria that were measured by applying the analytical process corresponding to the criteria to either a training population or a validation population.
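The performance criteria defined above follow directly from the four outcome counts. A minimal sketch, using hypothetical counts for N=10 test subjects:

```python
def classification_metrics(tp, fp, tn, fn):
    """Performance criteria computed from the four binary-decision outcomes."""
    n = tp + fp + tn + fn  # N, the number of samples compared
    return {
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
        "accuracy": (tp + tn) / n,  # accuracy = certainty
    }

# Hypothetical outcome counts for N = 10 validation subjects
metrics = classification_metrics(tp=4, fp=1, tn=4, fn=1)
```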
In some embodiments, N is more than 1, more than 5, more than 10, more than 20, between 10 and 100, more than 100, or less than 1000 subjects. An analytical process (or other forms of comparison) can have at least about 99% certainty, or even more, in some embodiments, against a training population or a validation population. In other embodiments, the certainty is at least about 97%, at least about 95%, at least about 90%, at least about 85%, at least about 80%, at least about 75%, at least about 70%, at least about 65%, or at least about 60% against a training population or a validation population. The useful degree of certainty may vary, depending on the particular method. As used herein, “certainty” means “accuracy.” In one embodiment, the sensitivity and/or specificity is at least about 97%, at least about 95%, at least about 90%, at least about 85%, at least about 80%, at least about 75%, or at least about 70% against a training population or a validation population. In some embodiments, such analytical processes are used to predict the development of atherosclerosis with the stated accuracy. In some embodiments, such analytical processes are used to diagnose atherosclerosis with the stated accuracy. In some embodiments, such analytical processes are used to determine a stage of atherosclerosis with the stated accuracy.
The number of features that may be used by an analytical process to classify a test subject with adequate certainty is 2 or more. In some embodiments, it is 3 or more, 4 or more, 10 or more, or between 10 and 200. Depending on the degree of certainty sought, however, the number of features used in an analytical process can be more or less, but in all cases is at least 2. In one embodiment, the number of features that may be used by an analytical process to classify a test subject is optimized to allow a classification of a test subject with high certainty.
In certain embodiments, analytical processes are utilized to predict survival. Survival analyses involve modeling time-to-event data. Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes before some event occurs to one or more covariates that may be associated with that quantity. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. Survival models can be viewed as consisting of two parts: the underlying hazard function, often denoted λ0(t), describing how the hazard (risk) changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age, gender, and the presence of other diseases in order to reduce variability and/or control for confounding.
The proportional hazards assumption is the assumption that covariates act multiplicatively on the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may halve a subject's hazard at any given time t, while the baseline hazard may vary. Note, however, that the covariate is not restricted to binary predictors; in the case of a continuous covariate x, the hazard responds log-linearly: each unit increase in x results in proportional scaling of the hazard. Typically, under the fully general Cox model, the baseline hazard is “integrated out”, or heuristically removed from consideration, and the remaining partial likelihood is maximized. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios. Under the Cox model, if the proportional hazards assumption holds, it is possible to estimate the effect parameters without consideration of the hazard function.
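The multiplicative structure described above can be illustrated numerically. The baseline hazard function and coefficient below are hypothetical, chosen so that a unit increase in the covariate halves the hazard; note that the hazard ratio is the same at every time t even though the baseline hazard varies.

```python
import math

def hazard(t, x, beta, baseline):
    """Proportional hazards form: h(t | x) = h0(t) * exp(beta * x)."""
    return baseline(t) * math.exp(beta * x)

def baseline(t):
    # Hypothetical, time-varying baseline hazard h0(t)
    return 0.01 + 0.001 * t

beta = math.log(0.5)  # hypothetical coefficient: treatment halves the hazard

# The hazard ratio for a unit increase in x is exp(beta), constant over time
ratio_early = hazard(1.0, 1, beta, baseline) / hazard(1.0, 0, beta, baseline)
ratio_late = hazard(50.0, 1, beta, baseline) / hazard(50.0, 0, beta, baseline)
```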
Relevant data analysis algorithms for developing an analytical process include, but are not limited to, discriminant analysis including linear, logistic, and more flexible discrimination techniques; tree-based algorithms such as classification and regression trees (CART) and variants; generalized additive models; neural networks, penalized regression methods, and the like.
In one embodiment, comparison of a test subject's marker profile to marker profiles obtained from a training population is performed, and comprises applying an analytical process. The analytical process is constructed using a data analysis algorithm, such as a computer pattern recognition algorithm. Other suitable data analysis algorithms for constructing an analytical process include, but are not limited to, logistic regression or a nonparametric algorithm that detects differences in the distribution of feature values (e.g., a Wilcoxon Signed Rank Test (unadjusted and adjusted)). The analytical process can be based upon 2, 3, 4, 5, 10, 20 or more features, corresponding to measured observables from 1, 2, 3, 4, 5, 10, 20 or more markers. In one embodiment, the analytical process is based on hundreds of features or more. An analytical process may also be built using a classification tree algorithm. For example, each marker profile from a training population can comprise at least 3 features, where the features are predictors in a classification tree algorithm. The analytical process predicts membership within a population (or class) with an accuracy of at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 97%, at least about 98%, at least about 99%, or about 100%.
Suitable data analysis algorithms are known in the art. In one embodiment, a data analysis algorithm of the disclosure comprises Classification and Regression Tree (CART), Multiple Additive Regression Tree (MART), Prediction Analysis for Microarrays (PAM), or Random Forest analysis. Such algorithms classify complex spectra from biological materials, such as a blood sample, to distinguish subjects as normal or as possessing biomarker levels characteristic of a particular disease state. In other embodiments, a data analysis algorithm of the disclosure comprises ANOVA and nonparametric equivalents, linear discriminant analysis, logistic regression analysis, nearest neighbor classifier analysis, neural networks, principal component analysis, quadratic discriminant analysis, regression classifiers and support vector machines. While such algorithms may be used to construct an analytical process and/or increase the speed and efficiency of the application of the analytical process and to avoid investigator bias, one of ordinary skill in the art will realize that computer-based algorithms are not required to carry out the methods of the present disclosure.
Analytical processes can be used to evaluate biomarker profiles regardless of the method that was used to generate the marker profile. For example, suitable analytical processes can be used to evaluate marker profiles generated by gas chromatography; spectra obtained by static time-of-flight secondary ion mass spectrometry (TOF-SIMS); MALDI-TOF-MS spectra, which have been used to distinguish between bacterial strains with high certainty (79-89% correct classification rates); and profiles of biomarkers in complex biological samples classified by MALDI-TOF-MS and liquid chromatography-electrospray ionization mass spectrometry (LC/ESI-MS).
One approach to developing an analytical process using expression levels of markers disclosed herein is the nearest centroid classifier. Such a technique computes, for each class (e.g., healthy and atherosclerotic), a centroid given by the average expression levels of the markers in the class, and then assigns new samples to the class whose centroid is nearest. This approach is similar to k-means clustering except clusters are replaced by known classes. This algorithm can be sensitive to noise when a large number of markers are used. One enhancement to the technique uses shrinkage: for each marker, differences between class centroids are set to zero if they are deemed likely to be due to chance. This approach is implemented in Prediction Analysis of Microarrays (PAM). Shrinkage is controlled by a threshold below which differences are considered noise. Markers that show no difference above the noise level are removed. A threshold can be chosen by cross-validation. As the threshold is decreased, more markers are included and estimated classification errors decrease, until they reach a minimum and then begin to climb again as a result of noise markers, a phenomenon known as overfitting.
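A minimal sketch of a nearest shrunken centroid classifier in the spirit of the approach above follows. The marker values, class labels, and threshold are hypothetical, and the shrinkage step here simply zeroes small centroid differences rather than reproducing PAM's full soft-thresholding.

```python
def centroid(samples):
    """Per-marker average expression for one class."""
    n = len(samples)
    return [sum(s[j] for s in samples) / n for j in range(len(samples[0]))]

def shrink_differences(healthy_c, diseased_c, threshold):
    """Zero out per-marker centroid differences below the threshold (treated as noise)."""
    return [d - h if abs(d - h) > threshold else 0.0
            for h, d in zip(healthy_c, diseased_c)]

def classify(sample, healthy_c, diffs):
    """Assign the class whose shrunken centroid is nearest, ignoring removed markers."""
    d_healthy = sum((x - h) ** 2 for x, h, d in zip(sample, healthy_c, diffs) if d != 0.0)
    d_diseased = sum((x - (h + d)) ** 2 for x, h, d in zip(sample, healthy_c, diffs) if d != 0.0)
    return "atherosclerotic" if d_diseased < d_healthy else "healthy"

# Hypothetical three-marker training profiles
healthy = [[1.0, 5.0, 2.0], [1.2, 5.1, 2.1]]
diseased = [[3.0, 5.0, 2.1], [3.2, 5.2, 2.0]]
hc, dc = centroid(healthy), centroid(diseased)
diffs = shrink_differences(hc, dc, threshold=0.5)  # markers 2 and 3 drop out as noise
label = classify([2.9, 5.0, 2.0], hc, diffs)
```

Only the first marker survives shrinkage in this toy example, so classification rests on it alone.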
Multiple additive regression trees (MART) represent another way to construct an analytical process that can be used in the methods disclosed herein. A generic algorithm for MART is:

1. Initialize f0(x)=arg min(γ) Σ(i=1 to N) L(yi, γ).

2. For m=1 to M:

(a) For i=1, 2, . . . , N compute the pseudo-residuals

rim=−[∂L(yi, f(xi))/∂f(xi)], evaluated at f=fm−1

(b) Fit a regression tree to the targets rim giving terminal regions Rjm, j=1, 2, . . . , Jm

(c) For j=1, 2, . . . , Jm compute

γjm=arg min(γ) Σ(xi∈Rjm) L(yi, fm−1(xi)+γ)

(d) Update fm(x)=fm−1(x)+Σ(j=1 to Jm) γjm I(x∈Rjm)

3. Output f(x)=fM(x).
Specific algorithms are obtained by inserting different loss criteria L(y,f(x)). The first line of the algorithm initializes to the optimal constant model, which is just a single terminal node tree. The components of the negative gradient computed in line 2(a) are referred to as generalized pseudo-residuals, rim. Gradients for commonly used loss functions are known in the art. Tuning parameters associated with the MART procedure are the number of iterations M and the sizes of each of the constituent trees Jm, m=1, 2, . . . , M.
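The generic MART procedure can be sketched for the common case of squared-error loss, where the pseudo-residuals are simply the current residuals yi−f(xi). The depth-1 "stump" trees, the shrinkage factor, and the one-marker data below are simplifying assumptions for illustration, not part of the disclosure.

```python
def fit_stump(xs, targets):
    """Depth-1 regression tree on one marker: choose the split minimizing squared error."""
    best = None
    for split in sorted(set(xs)):
        left = [t for x, t in zip(xs, targets) if x <= split]
        right = [t for x, t in zip(xs, targets) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((t - lm) ** 2 for t in left) + sum((t - rm) ** 2 for t in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, left_mean, right_mean = best
    return lambda x, s=split, a=left_mean, b=right_mean: a if x <= s else b

def mart_fit(xs, ys, rounds=10, shrinkage=0.5):
    """Gradient boosting with squared-error loss: pseudo-residuals are plain residuals."""
    f0 = sum(ys) / len(ys)                 # step 1: optimal constant model
    preds = [f0] * len(xs)
    trees = []
    for _ in range(rounds):                # step 2
        residuals = [y - p for y, p in zip(ys, preds)]                # 2(a)
        tree = fit_stump(xs, residuals)                               # 2(b)-(c)
        preds = [p + shrinkage * tree(x) for p, x in zip(preds, xs)]  # 2(d)
        trees.append(tree)
    return lambda x: f0 + shrinkage * sum(t(x) for t in trees)  # step 3: f_M(x)

# Hypothetical one-marker data with a step at x = 3
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
f = mart_fit(xs, ys)
```

Each round fits a tree to what the current model still gets wrong, so the fit sharpens geometrically on this separable data.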
In some embodiments, an analytical process used to classify subjects is built using regression. In such embodiments, the analytical process can be characterized as a regression classifier, preferably a logistic regression classifier. Such a regression classifier includes a coefficient for each of the markers (e.g., the expression level for each such marker) used to construct the classifier. In such embodiments, the coefficients for the regression classifier are computed using, for example, a maximum likelihood approach. In such a computation, the features for the biomarkers (e.g., RT-PCR, microarray data) are used. In certain embodiments, molecular marker data from only two trait subgroups is used (e.g., healthy patients and atherosclerotic patients) and the dependent variable is absence or presence of a particular trait in the subjects for which marker data is available.
In another embodiment, the training population comprises a plurality of trait subgroups (e.g., three or more trait subgroups, four or more specific trait subgroups, etc.). These multiple trait subgroups can correspond to discrete stages in the phenotypic progression from healthy, to mild atherosclerosis, to medium atherosclerosis, etc. in a training population. In this embodiment, a generalization of the logistic regression model that handles multi-category responses can be used to develop a decision rule that discriminates between the various trait subgroups found in the training population. For example, measured data for selected molecular markers can be applied to any of the multi-category logit models in order to develop a classifier capable of discriminating between any of a plurality of trait subgroups represented in a training population.
In some embodiments, the analytical process is based on a regression model, preferably a logistic regression model. Such a regression model includes a coefficient for each of the markers in a selected set of markers disclosed herein. In such embodiments, the coefficients for the regression model are computed using, for example, a maximum likelihood approach. In particular embodiments, molecular marker data from the two groups (e.g., healthy and diseased) is used and the dependent variable is the status of the patient corresponding to the marker characteristic data.
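A minimal maximum-likelihood fit of such a two-group logistic regression model can be sketched with batch gradient ascent on the log-likelihood. The one-marker training data, learning rate, and iteration count are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(samples, labels, steps=5000, lr=0.1):
    """Maximum-likelihood logistic regression via batch gradient ascent.
    `labels` are 0 (healthy) or 1 (diseased)."""
    k = len(samples[0])
    w = [0.0] * (k + 1)                    # intercept plus one coefficient per marker
    for _ in range(steps):
        grad = [0.0] * (k + 1)
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            err = y - p                    # component of the log-likelihood gradient
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * x[j]
        w = [wi + lr * g for wi, g in zip(w, grad)]
    return w

def predict_prob(w, x):
    """Modeled probability that the subject belongs to the diseased group."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

# Hypothetical one-marker data: the marker runs higher in diseased subjects
samples = [[0.1], [0.3], [0.4], [0.9], [1.1], [1.3]]
labels = [0, 0, 0, 1, 1, 1]
w = fit_logistic(samples, labels)
```

In practice a closed-form update (e.g., Newton-Raphson/IRLS) is typical; gradient ascent is used here only to keep the maximum-likelihood idea visible.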
Some embodiments of the disclosed methods, assays and kits provide generalizations of the logistic regression model that handle multi-category (polychotomous) responses. Such embodiments can be used to classify a subject into one of three or more classifications. Such regression models use multicategory logit models that simultaneously refer to all pairs of categories, and describe the odds of response in one category instead of another. Once the model specifies logits for a certain set of (J−1) pairs of categories, the rest are redundant.
Linear discriminant analysis (LDA) attempts to classify a subject into one of two categories based on certain object properties. In other words, LDA tests whether object attributes measured in an experiment predict categorization of the objects. LDA typically requires continuous independent variables and a dichotomous categorical dependent variable. For use with the disclosed methods, the expression values for the selected set of markers across a subset of the training population serve as the requisite continuous independent variables. The group classification of each of the members of the training population serves as the dichotomous categorical dependent variable.
LDA seeks the linear combination of variables that maximizes the ratio of between-group variance and within-group variance by using the grouping information. Implicitly, the linear weights used by LDA depend on how the expression of a marker across the training set separates in the two groups (e.g., a group that has atherosclerosis and a group that does not have atherosclerosis) and how this expression correlates with the expression of other markers. In some embodiments, LDA is applied to the data matrix of the N members in the training sample by K genes in a combination of genes described in the present disclosure. Then, the linear discriminant of each member of the training population is plotted. Ideally, those members of the training population representing a first subgroup (e.g. those subjects that do not have atherosclerosis) will cluster into one range of linear discriminant values (e.g., negative) and those members of the training population representing a second subgroup (e.g. those subjects that have atherosclerosis) will cluster into a second range of linear discriminant values (e.g., positive). The LDA is considered more successful when the separation between the clusters of discriminant values is larger.
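The LDA computation described above can be sketched for the two-marker case, where the within-class scatter matrix can be inverted in closed form. The training profiles are hypothetical; negative discriminant values cluster with the healthy group and positive values with the diseased group, per the convention above.

```python
def mean_vector(group):
    n = len(group)
    return [sum(s[j] for s in group) / n for j in range(len(group[0]))]

def scatter_2x2(group, mu):
    """Within-class scatter matrix for two-marker profiles."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x in group:
        d = [x[0] - mu[0], x[1] - mu[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def lda_fit(healthy, diseased):
    """Fisher direction w = Sw^-1 (mu_d - mu_h), which maximizes the ratio of
    between-group to within-group variance; threshold at the midpoint projection."""
    mh, md = mean_vector(healthy), mean_vector(diseased)
    sh, sd = scatter_2x2(healthy, mh), scatter_2x2(diseased, md)
    sw = [[sh[i][j] + sd[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [md[0] - mh[0], md[1] - mh[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    mid = [(mh[0] + md[0]) / 2, (mh[1] + md[1]) / 2]
    threshold = w[0] * mid[0] + w[1] * mid[1]
    return w, threshold

def discriminant(w, threshold, x):
    """Negative: clusters with healthy; positive: clusters with diseased."""
    return w[0] * x[0] + w[1] * x[1] - threshold

# Hypothetical two-marker training profiles
healthy = [[1.0, 2.0], [1.2, 2.2], [0.8, 1.9]]
diseased = [[3.0, 4.0], [3.2, 4.1], [2.9, 3.8]]
w, thr = lda_fit(healthy, diseased)
```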
Quadratic discriminant analysis (QDA) takes the same input parameters and returns the same results as LDA. QDA uses quadratic equations, rather than linear equations, to produce results. LDA and QDA are roughly interchangeable (though there are differences related to the number of subjects required), and which to use is a matter of preference and/or availability of software to support the analysis. Logistic regression takes the same input parameters and returns the same results as LDA and QDA.
One type of analytical process that can be constructed using the expression level of the markers identified herein is a decision tree. Here, the “data analysis algorithm” is any technique that can build the analytical process, whereas the final “decision tree” is the analytical process. An analytical process is constructed using a training population and specific data analysis algorithms. Tree-based methods partition the feature space into a set of rectangles, and then fit a model (like a constant) in each one.
The training population data includes the features (e.g., expression values, or some other observable) for the markers across a training set population. One specific algorithm that can be used to construct an analytical process is a classification and regression tree (CART). Other specific decision tree algorithms include, but are not limited to, ID3, C4.5, MART, and Random Forests. All such algorithms are known in the art.
In some embodiments of the disclosed methods, assays and kits, decision trees are used to classify patients using expression data for a selected set of markers. Decision tree algorithms belong to the class of supervised learning algorithms. The aim of a decision tree is to induce an analytical process (a tree) from real-world example data. This tree can be used to classify unseen examples which have not been used to derive the decision tree.
A decision tree is derived from training data. An example contains values for the different attributes and the class to which the example belongs. In one embodiment, the training data is expression data for a combination of markers described herein across the training population.
The following algorithm describes a decision tree derivation:
Tree (Examples, Class, Attributes)
Create a root node
If all Examples have the same Class value, give the root this label
Else if Attributes is empty label the root according to the most common value
Else begin
Calculate the information gain for each attribute
Select the attribute A with highest information gain and make this the root attribute
For each possible value, v, of this attribute
Add a new branch below the root, corresponding to A=v
Let Examples(v) be those examples with A=v
If Examples (v) is empty, make the new branch a leaf node labeled with the most common value among Examples
Else let the new branch be the tree created by Tree (Examples (v), Class, Attributes-{A})
End.
A more detailed description of the calculation of information gain is shown in the following. If the possible classes vi of the examples have probabilities P(vi) then the information content I of the actual answer is given by:

I(P(v1), . . . , P(vn))=Σi −P(vi) log2 P(vi)
The I-value shows how much information is needed in order to be able to describe the outcome of a classification for the specific dataset used. Supposing that the dataset contains p positive (e.g. has atherosclerosis) and n negative (e.g. healthy) examples (e.g. individuals), the information contained in a correct answer is:

I(p/(p+n), n/(p+n))=−(p/(p+n)) log2(p/(p+n))−(n/(p+n)) log2(n/(p+n))
where log2 is the logarithm using base two. By testing single attributes the amount of information needed to make a correct classification can be reduced. The remainder for a specific attribute A (e.g. a marker) shows how much the information that is needed can be reduced:
Remainder(A)=Σi=1 to v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
where “v” is the number of unique attribute values for attribute A in a certain dataset, “i” is a certain attribute value, “pi” is the number of examples with attribute value i whose classification is positive (e.g. atherosclerotic), and “ni” is the number of examples with attribute value i whose classification is negative (e.g. healthy).
The information gain of a specific attribute A is calculated as the difference between the information content for the classes and the remainder of attribute A:
Gain(A)=I(p/(p+n), n/(p+n))−Remainder(A)
The information gain is used to evaluate how important the different attributes are for the classification (how well they split up the examples), and the attribute with the highest information gain is selected to split the examples.
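The formulas above can be checked numerically. The sketch below is illustrative only; the class counts (p=6, n=6) and the two-valued split are made-up numbers chosen to give an uneven split.

```python
import math

def I(p, n):
    """Information content (in bits) of a dataset with p positive and n
    negative examples; a zero count contributes zero by convention."""
    def term(x):
        return 0.0 if x == 0 else -x * math.log2(x)
    return term(p / (p + n)) + term(n / (p + n))

# Dataset: 6 atherosclerotic (positive) and 6 healthy (negative) examples.
p, n = 6, 6
total_info = I(p, n)          # 1.0 bit for an even class split

# Attribute A takes 2 values; value 1 covers (p1=4, n1=1) examples and
# value 2 covers (p2=2, n2=5) examples.
splits = [(4, 1), (2, 5)]
remainder = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in splits)
gain = total_info - remainder
```

The attribute reduces the information needed by about 0.2 bits, so it would be preferred over any attribute with a smaller gain.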
In general there are a number of different decision tree algorithms, including but not limited to, classification and regression trees (CART), multivariate decision trees, ID3, and C4.5.
In one embodiment when a decision tree is used, the expression data for a selected set of markers across a training population is standardized to have mean zero and unit variance. The members of the training population are randomly divided into a training set and a test set. For example, in one embodiment, two thirds of the members of the training population are placed in the training set and one third of the members of the training population are placed in the test set. The expression values for a select combination of markers described herein are used to construct the analytical process. Then, the ability of the analytical process to correctly classify members in the test set is determined. In some embodiments, this computation is performed several times for a given combination of markers. In each iteration of the computation, the members of the training population are randomly assigned to the training set and the test set. Then, the quality of the combination of molecular markers is taken as the average of each such iteration of the analytical process computation.
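The standardize/split/score/average loop described above can be sketched as follows. This is a minimal illustration under stated assumptions: a hypothetical one-marker threshold rule stands in for the decision tree, and the expression values are invented toy data.

```python
import random
import statistics

def standardize(values):
    """Scale a list of expression values to mean zero and unit variance."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def evaluate_split(data, labels, train_frac=2/3, iterations=10, seed=0):
    """Average test-set accuracy over repeated random 2/3-1/3 splits.

    A threshold at the training-set mean plays the role of the analytical
    process; any classifier fit on the training set could be substituted.
    """
    rng = random.Random(seed)
    accuracies = []
    idx = list(range(len(data)))
    for _ in range(iterations):
        rng.shuffle(idx)
        cut = int(len(idx) * train_frac)
        train, test = idx[:cut], idx[cut:]
        threshold = statistics.fmean(data[i] for i in train)
        correct = sum((data[i] > threshold) == labels[i] for i in test)
        accuracies.append(correct / len(test))
    return statistics.fmean(accuracies)

# Toy single-marker dataset: high values labelled True (atherosclerotic).
values = standardize([1.0, 1.2, 1.1, 0.9, 5.0, 5.2, 4.8, 5.1, 4.9, 5.3, 1.05, 0.95])
labels = [False, False, False, False, True, True, True, True, True, True, False, False]
avg_acc = evaluate_split(values, labels)
```

Because the toy groups are well separated, every random split classifies the held-out third perfectly.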
In addition to univariate decision trees in which each split is based on an expression level for a corresponding marker, among the set of markers disclosed herein, or the expression level of two such markers, multivariate decision trees can be implemented as an analytical process. In such multivariate decision trees, some or all of the decisions actually comprise a linear combination of expression levels for a plurality of markers. Such a linear combination can be trained using known techniques such as gradient descent on a classification error criterion or the use of a sum-squared-error criterion.
To illustrate such an analytical process, consider the expression: 0.04x1+0.16x2<500. Here, x1 and x2 refer to two different features for two different markers from among the markers disclosed herein. To poll the analytical process, the values of features x1 and x2 are obtained from the measurements obtained from the unclassified subject. These values are then inserted into the equation. If a value of less than 500 is computed, then a first branch in the decision tree is taken. Otherwise, a second branch in the decision tree is taken.
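The split just described translates directly into code. The coefficients and threshold come from the text; the function name and example feature values are illustrative.

```python
def branch(x1, x2):
    """Multivariate split from the text: take the first branch when
    0.04*x1 + 0.16*x2 < 500, otherwise take the second branch."""
    return "branch 1" if 0.04 * x1 + 0.16 * x2 < 500 else "branch 2"

# Hypothetical feature values measured from an unclassified subject:
# 0.04*1000 + 0.16*2000 = 360 < 500, so the first branch is taken.
result = branch(1000, 2000)
```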
Another approach that can be used in the present disclosure is multivariate adaptive regression splines (MARS). MARS is an adaptive procedure for regression, and is well suited for the high-dimensional problems addressed by the methods disclosed herein. MARS can be viewed as a generalization of stepwise linear regression or a modification of the CART method to improve the performance of CART in the regression setting.
In some embodiments, the expression values for a selected set of markers are used to cluster a training set. For example, consider the case in which ten markers are used. Each member m of the training population will have expression values for each of the ten markers. Such values from a member m in the training population define the vector:
(x1m, x2m, x3m, x4m, x5m, x6m, x7m, x8m, x9m, x10m)
where xim is the expression level of the ith marker in subject m. If there are m subjects in the training set, selection of i markers will define m vectors. Note that the methods disclosed herein do not require that the expression value of every marker used in the vectors be represented in every vector m. In other words, data from a subject in which one of the i markers is not measured can still be used for clustering. In such instances, the missing expression value is assigned either a “zero” or some other normalized value. In some embodiments, prior to clustering, the expression values are normalized to have a mean value of zero and unit variance.
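Vector construction with zero-filled missing values and per-marker normalization can be sketched as below; the expression levels are hypothetical and the helper name is illustrative.

```python
import statistics

# Hypothetical expression levels for 3 markers across 4 training members;
# None marks a marker that was not measured for that member.
raw = [
    [2.0, 5.0, None],
    [4.0, 7.0, 1.0],
    [6.0, 9.0, 3.0],
    [8.0, None, 5.0],
]

def build_vectors(rows):
    """Replace missing values with zero, then z-score each marker column
    to mean zero and unit variance."""
    filled = [[0.0 if v is None else v for v in row] for row in rows]
    cols = list(zip(*filled))
    means = [statistics.fmean(c) for c in cols]
    sds = [statistics.pstdev(c) for c in cols]
    return [
        [(v - m) / s for v, m, s in zip(row, means, sds)]
        for row in filled
    ]

vectors = build_vectors(raw)
```

After normalization, every marker column has mean zero and unit variance, so no single marker dominates the distance computations used in clustering.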
Those members of the training population that exhibit similar expression patterns across the training group will tend to cluster together. A particular combination of markers is considered to be a good classifier in this aspect of the methods disclosed herein when the vectors cluster into the trait groups found in the training population. For instance, if the training population includes healthy patients and atherosclerotic patients, a clustering classifier will cluster the population into two groups, with each group uniquely representing either healthy patients or atherosclerotic patients.
The clustering problem is described as one of finding natural groupings in a dataset. To identify natural groupings, two issues are addressed. First, a way to measure similarity (or dissimilarity) between two samples is determined. This metric (similarity measure) is used to ensure that the samples in one cluster are more like one another than they are to samples in other clusters. Second, a mechanism for partitioning the data into clusters using the similarity measure is determined.
One way to begin a clustering investigation is to define a distance function and to compute the matrix of distances between all pairs of samples in a dataset. If distance is a good measure of similarity, then the distance between samples in the same cluster will be significantly less than the distance between samples in different clusters. However, clustering does not require the use of a distance metric. For example, a nonmetric similarity function s(x, x′) can be used to compare two vectors x and x′. Conventionally, s(x, x′) is a symmetric function whose value is large when x and x′ are somehow “similar.”
Once a method for measuring “similarity” or “dissimilarity” between points in a dataset has been selected, clustering requires a criterion function that measures the clustering quality of any partition of the data. Partitions of the data set that extremize the criterion function are used to cluster the data. Particular exemplary clustering techniques that can be used with the methods disclosed herein include, but are not limited to, hierarchical clustering (agglomerative clustering using nearest-neighbor algorithm, farthest-neighbor algorithm, the average linkage algorithm, the centroid algorithm, or the sum-of-squares algorithm), k-means clustering, fuzzy k-means clustering algorithm, and Jarvis-Patrick clustering.
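Of the listed techniques, k-means is the simplest to sketch. The implementation below uses Euclidean distance as the similarity measure and the within-cluster mean update; the toy two-marker profiles and the seed are illustrative assumptions.

```python
import math
import random

def kmeans(points, k, iterations=50, seed=1):
    """Plain k-means: partition vectors into k clusters by repeatedly
    assigning each point to its nearest center and recomputing centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)            # initial centers from the data
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            [sum(c) / len(cluster) for c in zip(*cluster)] if cluster
            else centers[i]                    # keep an empty cluster's center
            for i, cluster in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated toy groups (e.g. "healthy" vs "atherosclerotic" profiles).
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, 2)
```

With separable groups the algorithm recovers the natural partition; on real marker data, a good marker combination is one for which the recovered clusters coincide with the trait groups.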
Principal component analysis (PCA) has been proposed to analyze biomarker data. More generally, PCA can be used to analyze feature value data of markers disclosed herein in order to construct an analytical process that discriminates one class of patients from another (e.g., those who have atherosclerosis and those who do not). Principal component analysis is a classical technique to reduce the dimensionality of a data set by transforming the data to a new set of variables (principal components) that summarize the features of the data.
A few non-limiting properties of PCA are as follows. Principal components (PCs) are uncorrelated and are ordered such that the kth PC has the kth largest variance among all PCs. The kth PC can be interpreted as the direction that maximizes the variation of the projections of the data points subject to being orthogonal to the first k−1 PCs. The first few PCs capture most of the variation in the data set. In contrast, the last few PCs are often assumed to capture only the residual “noise” in the data.
PCA can also be used to create an analytical process as disclosed herein. In such an approach, vectors for a selected set of markers can be constructed in the same manner described for clustering. In fact, the set of vectors, where each vector represents the expression values for the select markers from a particular member of the training population, can be considered a matrix. In some embodiments, this matrix is represented in a Free-Wilson method of qualitative binary description of monomers, and distributed in a maximally compressed space using PCA so that the first principal component (PC) captures the largest amount of variance information possible, the second principal component (PC) captures the second largest amount of all variance information, and so forth until all variance information in the matrix has been accounted for.
Then, each of the vectors (where each vector represents a member of the training population) is plotted. Many different types of plots are possible. In some embodiments, a one-dimensional plot is made. In this one-dimensional plot, the value for the first principal component from each of the members of the training population is plotted. In this form of plot, the expectation is that members of a first group (e.g. healthy patients) will cluster in one range of first principal component values and members of a second group (e.g., patients with atherosclerosis) will cluster in a second range of first principal component values (one of skill in the art would appreciate that the distribution of the marker values need to exhibit no elongation in any of the variables for this to be effective).
In one example, the training population comprises two groups: healthy patients and patients with atherosclerosis. The first principal component is computed using the marker expression values for the selected markers across the entire training population data set. Then, each member of the training set is plotted as a function of the value for the first principal component. In this example, those members of the training population in which the first principal component is positive are the healthy patients and those members of the training population in which the first principal component is negative are atherosclerotic patients.
In some embodiments, the members of the training population are plotted against more than one principal component. For example, in some embodiments, the members of the training population are plotted on a two-dimensional plot in which the first dimension is the first principal component and the second dimension is the second principal component. In such a two-dimensional plot, the expectation is that members of each subgroup represented in the training population will cluster into discrete groups. For example, a first cluster of members in the two-dimensional plot will represent subjects with mild atherosclerosis, a second cluster of members in the two-dimensional plot will represent subjects with moderate atherosclerosis, and so forth.
In some embodiments, the members of the training population are plotted against more than two principal components and a determination is made as to whether the members of the training population are clustering into groups that each uniquely represents a subgroup found in the training population. In some embodiments, principal component analysis is performed by using the mva package of R (a statistical analysis language), which is known to those of skill in the art.
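For the two-marker case, the first principal component has a closed form from the 2x2 covariance matrix, which is enough to sketch the one-dimensional plot described above. The toy profiles and function name are hypothetical.

```python
import math
import statistics

def first_principal_component(data):
    """First PC of two-marker data via the closed-form eigenvector of the
    2x2 covariance matrix (direction of the larger eigenvalue), plus the
    projection (score) of each centered data point onto that direction."""
    xs = [d[0] for d in data]
    ys = [d[1] for d in data]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    centered = [(x - mx, y - my) for x, y in data]
    n = len(data)
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]] via trace/determinant.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    vx, vy = lam - syy, sxy            # unnormalized eigenvector for lam
    norm = math.hypot(vx, vy)
    pc = (vx / norm, vy / norm)
    scores = [x * pc[0] + y * pc[1] for x, y in centered]
    return pc, scores

# Toy two-marker profiles: first three "healthy", last three "atherosclerotic".
data = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (4.0, 4.2), (4.1, 3.9), (3.9, 4.1)]
pc, scores = first_principal_component(data)
```

Plotting the scores on one axis separates the two toy groups: one group has negative first-PC values and the other positive, matching the expectation described in the text.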
Nearest neighbor classifiers are memory-based and require no model to be fit. Given a query point x0, the k training points x(r), r=1, . . . , k closest in distance to x0 are identified, and then the point x0 is classified using its k nearest neighbors. Ties can be broken at random. In some embodiments, Euclidean distance in feature space is used to determine distance as:
d(i)=∥x(i)−x0∥
Typically, when the nearest neighbor algorithm is used, the expression data used to compute the distances are standardized to have mean zero and variance 1. For the disclosed methods, the members of the training population are randomly divided into a training set and a test set. For example, in one embodiment, two thirds of the members of the training population are placed in the training set and one third of the members of the training population are placed in the test set. Profiles of a selected set of markers disclosed herein represent the feature space into which members of the test set are plotted. Next, the ability of the training set to correctly characterize the members of the test set is computed. In some embodiments, nearest neighbor computation is performed several times for a given combination of markers. In each iteration of the computation, the members of the training population are randomly assigned to the training set and the test set. Then, the quality of the combination of markers is taken as the average of each such iteration of the nearest neighbor computation.
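A minimal k-nearest-neighbor sketch using Euclidean distance and simple majority voting follows; the standardized two-marker training points and labels are invented toy data.

```python
import math
from collections import Counter

def knn_classify(query, points, labels, k=3):
    """Classify a query point by majority vote among its k nearest training
    points, using Euclidean distance in the marker feature space."""
    ranked = sorted(range(len(points)), key=lambda i: math.dist(query, points[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical standardized two-marker profiles and their classifications.
points = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
labels = ["healthy", "healthy", "healthy",
          "atherosclerotic", "atherosclerotic", "atherosclerotic"]
```

A query near either group is assigned that group's label; weighted voting refinements simply replace the plain `Counter` tally with distance-dependent weights.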
The nearest neighbor rule can be refined to deal with issues of unequal class priors, differential misclassification costs, and feature selection. Many of these refinements involve some form of weighted voting for the neighbors.
Inspired by the process of biological evolution, evolutionary methods of classifier design employ a stochastic search for an analytical process. In broad overview, such methods create several analytical processes—a population—from measurements such as the biomarker generated datasets disclosed herein. Each analytical process varies somewhat from the other. Next, the analytical processes are scored on data across the training datasets. In keeping with the analogy with biological evolution, the resulting (scalar) score is sometimes called the fitness. The analytical processes are ranked according to their score and the best analytical processes are retained (some portion of the total population of analytical processes). Again, in keeping with biological terminology, this is called survival of the fittest. The analytical processes are stochastically altered in the next generation—the children or offspring. Some offspring analytical processes will have higher scores than their parent in the previous generation, some will have lower scores. The overall process is then repeated for the subsequent generation: The analytical processes are scored and the best ones are retained, randomly altered to give yet another generation, and so on. In part, because of the ranking, each generation has, on average, a slightly higher score than the previous one. The process is halted when the single best analytical process in a generation has a score that exceeds a desired criterion value.
Bagging, boosting, the random subspace method, and additive trees are data analysis algorithms known as combining techniques that can be used to improve weak analytical processes. These techniques are designed for, and usually applied to, decision trees, such as the decision trees described above. In addition, such techniques can also be useful in analytical processes developed using other types of data analysis algorithms such as linear discriminant analysis.
In bagging, one samples the training datasets, generating random independent bootstrap replicates, constructs the analytical processes on each of these, and aggregates them by a simple majority vote in the final analytical process. In boosting, analytical processes are constructed on weighted versions of the training set, which are dependent on previous analytical process results. Initially, all objects have equal weights, and the first analytical process is constructed on this data set. Then, weights are changed according to the performance of the analytical process. Erroneously classified objects get larger weights, and the next analytical process is boosted on the reweighted training set. In this way, a sequence of training sets and classifiers is obtained, which is then combined by simple majority voting or by weighted majority voting in the final decision.
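The bagging procedure (bootstrap replicates, one weak process per replicate, simple majority vote) can be sketched as follows. A hypothetical mean-threshold stump serves as the weak analytical process; the single-marker values and seed are illustrative.

```python
import random
import statistics
from collections import Counter

def train_stump(values, labels):
    """Hypothetical weak analytical process: threshold at the sample mean,
    oriented to best fit the given (bootstrap) sample."""
    t = statistics.fmean(values)
    acc_gt = sum((v > t) == y for v, y in zip(values, labels)) / len(values)
    direction = acc_gt >= 0.5        # True: predict positive when v > t
    return (t, direction)

def stump_predict(model, v):
    t, direction = model
    return (v > t) == direction

def bagging(values, labels, n_models=15, seed=0):
    """Bagging: train each weak process on an independent bootstrap
    replicate, then combine them by simple majority vote."""
    rng = random.Random(seed)
    n = len(values)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap replicate
        models.append(train_stump([values[i] for i in idx],
                                  [labels[i] for i in idx]))
    def predict(v):
        votes = Counter(stump_predict(m, v) for m in models)
        return votes.most_common(1)[0][0]
    return predict

# Toy single-marker data: True marks the atherosclerotic class.
values = [1.0, 1.1, 0.9, 1.2, 5.0, 5.1, 4.9, 5.2]
labels = [False, False, False, False, True, True, True, True]
predict = bagging(values, labels)
```

Boosting differs from this sketch in that later models are trained on reweighted (not independently resampled) data, as described next.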
To illustrate boosting, consider the case where there are two phenotypic groups exhibited by the population under study, phenotype 1 (e.g., poor prognosis patients), and phenotype 2 (e.g., good prognosis patients). Given a vector of molecular markers X, a classifier G(X) produces a prediction taking one of the type values in the two value set: {phenotype 1, phenotype 2}. The error rate on the training sample is:
err=(1/N) Σi=1 to N I(yi≠G(xi))
where N is the number of subjects in the training set (the sum total of the subjects that have either phenotype 1 or phenotype 2). For example, if there are 35 healthy patients and 46 atherosclerotic patients, N is 81.
A weak analytical process is one whose error rate is only slightly better than random guessing. In the boosting algorithm, the weak analytical process is repeatedly applied to modified versions of the data, thereby producing a sequence of weak classifiers Gm(x), m=1, 2, . . . , M. The predictions from all of the classifiers in this sequence are then combined through a weighted majority vote to produce the final prediction:
G(x)=sign[Σm=1 to M αmGm(x)]
Here α1, α2, . . . , αm are computed by the boosting algorithm and their purpose is to weigh the contribution of each respective Gm(x). Their effect is to give higher influence to the more accurate classifiers in the sequence.
The data modifications at each boosting step consist of applying weights w1, w2, . . . , wN to each of the training observations (xi, yi), i=1, 2, . . . , N. Initially all the weights are set to wi=1/N, so that the first step simply trains the analytical process on the data in the usual manner. For each successive iteration m=2, 3, . . . , M the observation weights are individually modified and the analytical process is reapplied to the weighted observations. At step m, those observations that were misclassified by the analytical process Gm-1(x) induced at the previous step have their weights increased, whereas the weights are decreased for those that were classified correctly. Thus as iterations proceed, observations that are difficult to correctly classify receive ever-increasing influence. Each successive analytical process is thereby forced to concentrate on those training observations that are missed by previous ones in the sequence.
The exemplary boosting algorithm is summarized as follows:
1. Initialize the observation weights wi=1/N, i=1, 2, . . . , N.
2. For m=1 to M:
(a) Fit an analytical process Gm(x) to the training set using weights wi.
(b) Compute errm=[Σi=1 to N wiI(yi≠Gm(xi))]/[Σi=1 to N wi].
(c) Compute αm=log((1−errm)/errm).
(d) Set wi←wi exp[αmI(yi≠Gm(xi))], i=1, 2, . . . , N.
3. Output G(x)=sign[Σm=1 to M αmGm(x)].
In the algorithm, the current classifier Gm(x) is induced on the weighted observations at line 2a. The resulting weighted error rate is computed at line 2b. Line 2c calculates the weight αm given to Gm(x) in producing the final classifier G(x) (line 3). The individual weights of each of the observations are updated for the next iteration at line 2d. Observations misclassified by Gm(x) have their weights scaled by a factor exp(αm), increasing their relative influence for inducing the next classifier Gm+1(x) in the sequence. In some embodiments, boosting or adaptive boosting methods are used.
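The boosting steps above can be sketched directly. This is an illustrative AdaBoost implementation under stated assumptions: the weak classifiers are threshold stumps chosen from a small pool, the +1/−1 labels are toy data, and the large fixed vote weight assigned to an error-free stump is a practical convention, not part of the disclosure.

```python
import math

def adaboost(xs, ys, stumps, M=10):
    """AdaBoost following the listed algorithm. ys are +1/-1 labels;
    stumps is a pool of candidate weak classifiers (x -> +1 or -1)."""
    N = len(xs)
    w = [1.0 / N] * N                              # 1. initialize weights
    ensemble = []
    for _ in range(M):                             # 2. for m = 1 to M
        def weighted_err(g):                       # (b) weighted error rate
            return sum(wi for wi, x, y in zip(w, xs, ys) if g(x) != y) / sum(w)
        g = min(stumps, key=weighted_err)          # (a) fit: best stump in pool
        err = weighted_err(g)
        if err == 0.0:                             # perfect weak classifier:
            ensemble.append((10.0, g))             # large fixed vote weight
            break
        alpha = math.log((1 - err) / err)          # (c) classifier weight
        w = [wi * (math.exp(alpha) if g(x) != y else 1.0)
             for wi, x, y in zip(w, xs, ys)]       # (d) reweight observations
        ensemble.append((alpha, g))
    def G(x):                                      # 3. weighted majority vote
        return 1 if sum(a * g(x) for a, g in ensemble) >= 0 else -1
    return G, ensemble

# Toy 1-D data not separable by any single threshold stump.
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, -1, -1, 1, 1]
stumps = [lambda x, t=t, s=s: s * (1 if x > t else -1)
          for t in [1.5, 2.5, 3.5, 4.5, 5.5] for s in (1, -1)]
G, ensemble = adaboost(xs, ys, stumps)
```

On this data the first chosen stump has weighted error 1/3 (so α1=log 2) and the second, fitted to the reweighted observations, has weighted error 1/4 (so α2=log 3), showing how more accurate classifiers receive higher influence in the final vote.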
In some embodiments, feature preselection is performed using a technique such as the nonparametric scoring method. Feature preselection is a form of dimensionality reduction in which the markers that discriminate between classifications the best are selected for use in the classifier. Then, the LogitBoost procedure is used rather than the boosting procedure. In some embodiments, the boosting and other classification methods are used in the disclosed methods.
In the random subspace method, classifiers are constructed in random subspaces of the data feature space. These classifiers are usually combined by simple majority voting in the final decision rule (i.e., analytical process).
As indicated, the statistical techniques described herein are merely examples of the types of algorithms and models that can be used to identify a preferred group of markers to include in a dataset and to generate an analytical process that can be used to generate a result using the dataset. Further, combinations of the techniques described above and elsewhere can be used either for the same task or each for a different task. Some combinations, such as the use of the combination of decision trees and boosting, have been described. However, many other combinations are possible. By way of example, other statistical techniques in the art such as Projection Pursuit and Weighted Voting can be used to identify a preferred group of markers to include in a dataset and to generate an analytical process that can be used to generate a result using the dataset.
An optimum number of dataset components to be evaluated in an analytical process can be determined. When using the learning algorithms described above to develop a predictive model, one of skill in the art may select a subset of markers, e.g., at least 3, at least 4, at least 5, at least 6, up to the complete set of markers, to define the analytical process. Usually a subset of markers will be chosen that provides for the needs of the quantitative sample analysis, e.g. availability of reagents, convenience of quantitation, etc., while maintaining a highly accurate predictive model.
The selection of a number of informative markers for building classification models requires the definition of a performance metric and a user-defined threshold for producing a model with useful predictive ability based on this metric. For example, the performance metric may be the AUC, the sensitivity and/or specificity of the prediction as well as the overall accuracy of the prediction model.
The predictive ability of a model may be evaluated according to its ability to provide a quality metric, e.g. AUC or accuracy, of a particular value, or range of values. In some embodiments, a desired quality threshold is a predictive model that will classify a sample with an accuracy of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, at least about 0.95, or higher. As an alternative measure, a desired quality threshold may refer to a predictive model that will classify a sample with an AUC of at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, or higher.
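The AUC quality metric mentioned above equals the probability that a randomly chosen positive sample receives a higher classifier score than a randomly chosen negative one, which can be computed directly. The scores below are hypothetical classifier outputs, not data from the disclosure.

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs in which the
    positive sample scores higher; ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical continuous outputs for 4 diseased (True) and 4 healthy samples.
scores = [0.9, 0.8, 0.75, 0.4, 0.6, 0.3, 0.2, 0.1]
labels = [True, True, True, True, False, False, False, False]
model_auc = auc(scores, labels)
```

Here 15 of the 16 positive/negative pairs are ranked correctly, giving an AUC of 0.9375, which would clear each of the quality thresholds listed above.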
As is known in the art, the relative sensitivity and specificity of a predictive model can be “tuned” to favor either the specificity metric or the sensitivity metric, where the two metrics have an inverse relationship. The limits in a model as described above can be adjusted to provide a selected sensitivity or specificity level, depending on the particular requirements of the test being performed. One or both of sensitivity and specificity may be at least about 0.7, at least about 0.75, at least about 0.8, at least about 0.85, at least about 0.9, or higher.
Various methods may be used in training a model. The selection of a subset of markers may be via a forward selection or a backward selection of a marker subset. The number of markers to be selected is that which will optimize the performance of a model without the use of all the markers. One way to define the optimum number of terms is to choose the number of terms that produce a model with desired predictive ability (e.g. an AUC>0.75, or equivalent measures of sensitivity/specificity) that lies no more than one standard error from the maximum value obtained for this metric using any combination and number of terms used for the given algorithm.
As described above, quantitative data for components of the dataset are inputted into an analytic process and used to generate a result. The result can be any type of information useful for making an atherosclerotic classification, e.g. a classification, a continuous variable, or a vector. For example, the value of a continuous variable or vector may be used to determine the likelihood that a sample is associated with a particular classification.
Atherosclerotic classification refers to any type of information, or the generation of any type of information, associated with an atherosclerotic condition, for example: diagnosis; staging; assessing extent of atherosclerotic progression; prognosis; monitoring; therapeutic response to treatments; screening to identify compounds that act via similar mechanisms as known atherosclerotic treatments; prediction of a pseudo-coronary calcium score; distinguishing stable (e.g., angina) from unstable (e.g., myocardial infarction) disease; identifying complications of atherosclerotic disease; etc.
In a preferred embodiment, the result is used for diagnosis or detection of the occurrence of atherosclerosis, particularly where such atherosclerosis is indicative of a propensity for myocardial infarction, heart failure, etc. In this embodiment, a reference or training set containing “healthy” and “atherosclerotic” samples is used to develop a predictive model. A dataset, preferably containing protein expression levels of markers indicative of the atherosclerosis, is then inputted into the predictive model in order to generate a result. The result may classify the sample as either “healthy” or “atherosclerotic”. In other embodiments, the result is a continuous variable providing information useful for classifying the sample, e.g., where a high value indicates a high probability of being an “atherosclerotic” sample and a low value indicates a high probability of being a “healthy” sample.
In other embodiments, the result is used for atherosclerosis staging. In this embodiment, a reference or training dataset containing samples from individuals with disease at different stages is used to develop a predictive model. The model may be a simple comparison of an individual dataset against one or more datasets obtained from disease samples of known stage or a more complex multivariate classification model. In certain embodiments, inputting a dataset into the model will generate a result classifying the sample from which the dataset is generated as being at a specified cardiovascular disease stage. Similar methods may be used to provide atherosclerosis prognosis, except that the reference or training set will include data obtained from individuals who develop disease and those who fail to develop disease at a later time.
In other embodiments, the result is used to determine response to atherosclerotic disease treatments. In this embodiment, the reference or training dataset and the predictive model are the same as those used to diagnose atherosclerosis (samples from individuals with disease and those without). However, instead of inputting a dataset composed of samples from individuals with an unknown diagnosis, the dataset is composed of samples from individuals with known disease which have been administered a particular treatment, and it is determined whether the samples trend toward or lie within a normal, healthy classification versus an atherosclerotic disease classification.
Treatment as used herein can include, without limitation, a follow-up checkup in 3, 6, or 12 months; pharmacologic intervention such as beta-blocker, calcium channel blocker, aspirin, cholesterol lowering agents, etc; and/or further testing to determine the existence or degree of cardiovascular condition/disease. In certain instances, no immediate treatment will be required.
In another embodiment, the result is used for drug screening, i.e., identifying compounds that act via similar mechanisms as known atherosclerotic drug treatments. In this embodiment, a reference or training set containing individuals treated with a known atherosclerotic drug treatment and those not treated with the particular treatment can be used to develop a predictive model. A dataset from individuals treated with a compound with an unknown mechanism is input into the model. If the result indicates that the sample can be classified as coming from a subject dosed with a known atherosclerotic drug treatment, then the new compound is likely to act via the same mechanism.
In preferred embodiments, the result is used to determine a “pseudo-coronary calcium score,” which is a quantitative measure that correlates to coronary calcium score (CCS). CCS is a clinical cardiovascular disease screening technique which measures overall atherosclerotic plaque burden. Various different types of imaging techniques can be used to quantitate the calcium area and density of atherosclerotic plaques. When electron-beam CT and multidetector CT are used, CCS is a function of the x-ray attenuation coefficient and the area of calcium deposits. Typically, a score of 0 is considered to indicate no atherosclerotic plaque burden, >0 to 10 to indicate minimal evidence of plaque burden, 11 to 100 to indicate at least mild evidence of plaque burden, 101 to 400 to indicate at least moderate evidence of plaque burden, and over 400 as being extensive evidence of plaque burden. CCS used in conjunction with traditional risk factors improves predictive ability for complications of cardiovascular disease. In addition, the CCS is also capable of acting as an independent predictor of cardiovascular disease complications.
A reference or training set containing individuals with high and low coronary calcium scores can be used to develop a model for predicting the pseudo-coronary calcium score of an individual. This predicted pseudo-coronary calcium score is useful for diagnosing and monitoring atherosclerosis. In some embodiments, the pseudo-coronary calcium score is used in conjunction with other known cardiovascular diagnosis and monitoring methods, such as actual coronary calcium score derived from imaging techniques to diagnose and monitor cardiovascular disease.
One of skill will also recognize that the results generated using these methods can be used in conjunction with any number of the various other methods known to those of skill in the art for diagnosing and monitoring cardiovascular disease.
Also provided are reagents and kits thereof for practicing one or more of the above-described methods. The subject reagents and kits thereof may vary greatly. Reagents of interest include reagents specifically designed for use in production of the above described expression profiles of circulating miRNA markers, protein biomarkers, or a combination of miRNA and protein markers associated with atherosclerotic conditions.
In one embodiment a kit for assessing the cardiovascular health of a human to determine the need for or effectiveness of a treatment regimen is provided, which comprises: an assay for determining levels of at least two miRNA markers selected from the miRNAs in Table 20 in the biological sample; instructions for obtaining a dataset comprised of the levels of each miRNA marker, inputting the data into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
In certain embodiments, the kit further comprises an assay for determining levels of at least three protein biomarkers selected from the group consisting of IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF in the biological sample; and instructions for obtaining a dataset comprised of the individual levels of the protein markers, inputting the data of the miRNA and protein markers into an analytical classification process that uses the data to classify the biological sample, wherein the classification is selected from the group consisting of an atherosclerotic cardiovascular disease classification, a healthy classification, a medication exposure classification, a no medication exposure classification; and classifying the biological sample according to the output of the classification process and determining a treatment regimen for the human based on the classification.
One type of such reagent is an array or kit of antibodies that bind to a marker set of interest. A variety of different array formats are known in the art, with a wide variety of different probe structures, substrate compositions and attachment technologies. Representative array or kit compositions of interest include or consist of reagents for quantitation of at least 2, at least 3, at least 4, at least 5 or more miRNA markers alone or in combination with protein markers. In this regard, the reagent can be for quantitation of at least 1, at least 2, at least 3, at least 4, at least 5 miRNA markers selected from the miRNAs listed in Table 1 and preferably, the miRNAs listed in Table 20.
Alternatively, or in addition to, the reagent can be for quantitation of at least 1, at least 2, at least 3, at least 4, at least 5, at least 6, at least 7, at least 8, at least 9 or at least 10 protein biomarkers selected from Table 2.
In certain embodiments, the protein biomarkers are selected from IL-16, sFas, Fas ligand, MCP-3, HGF, CTACK, EOTAXIN, adiponectin, IL-18, TIMP.4, TIMP.1, CRP, VEGF, and EGF.
The kits may further include a software package for statistical analysis of one or more phenotypes, and may include a reference database for calculating the probability of classification. The kit may include reagents employed in the various methods, such as devices for withdrawing and handling blood samples, second stage antibodies, ELISA reagents, tubes, spin columns, and the like.
In addition to the above components, the subject kits will further include instructions for practicing the subject methods. These instructions may be present in the subject kits in a variety of forms, one or more of which may be present in the kit. One form in which these instructions may be present is as printed information on a suitable medium or substrate, e.g., a piece or pieces of paper on which the information is printed, in the packaging of the kit, in a package insert, etc. Yet another means would be a computer readable medium, e.g., diskette, CD, etc., on which the information has been recorded. Yet another means that may be present is a website address which may be used via the Internet to access the information at a removed site. Any convenient means may be present in the kits.
In an additional embodiment, the methods, assays and kits disclosed herein can be used to detect a biomarker in a pooled sample. This method is particularly useful when only small amounts of multiple samples are available (for example, archived clinical sample sets) and/or to create useful datasets relevant to a disease or control population. In this regard, equal amounts (for example, about 10 μL, about 15 μL, about 20 μL, about 30 μL, about 40 μL, about 50 μL, or more) of a sample can be obtained from multiple (about 2, 5, 10, 15, 20, 30, 50, 100 or more) individuals. The individuals can be matched by various indicia. The indicia can include age, gender, history of disease, time to event, etc. The equal amounts of sample obtained from each individual can be pooled and analyzed for the presence of one or more biomarkers. The results can be used to create a reference set, make predictions, determine biomarkers associated with a given condition, etc., by using the prediction and classifying models described herein. One of skill in the art will readily appreciate the many uses of this method and that it is in no way limited to the miRNAs, proteins, and disease states disclosed herein. In fact, this method can be used to detect DNA, RNA (mRNA, miRNA, hairpin precursor RNA, RNP), proteins, and the like, associated with a variety of diseases and conditions.
Terms used herein are defined as set forth below unless otherwise specified.
The term “monitoring” as used herein refers to the use of results generated from datasets to provide useful information about an individual or an individual's health or disease status. “Monitoring” can include, for example, determination of prognosis, risk-stratification, selection of drug therapy, assessment of ongoing drug therapy, determination of effectiveness of treatment, prediction of outcomes, determination of response to therapy, diagnosis of a disease or disease complication, following of progression of a disease or providing any information relating to a patient's health status over time, selecting patients most likely to benefit from experimental therapies with known molecular mechanisms of action, selecting patients most likely to benefit from approved drugs with known molecular mechanisms where that mechanism may be important in a small subset of a disease for which the medication may not have a label, screening a patient population to help decide on a more invasive/expensive test, for example, a cascade of tests from a non-invasive blood test to a more invasive option such as biopsy, or testing to assess side effects of drugs used to treat another indication. In particular, the term “monitoring” can refer to atherosclerosis staging, atherosclerosis prognosis, vascular inflammation levels, assessing extent of atherosclerosis progression, monitoring a therapeutic response, predicting a coronary calcium score, or distinguishing stable from unstable manifestations of atherosclerotic disease.
The term “quantitative data” as used herein refers to data associated with any dataset components (e.g., miRNA markers, protein markers, clinical indicia, metabolic measures, or genetic assays) that can be assigned a numerical value. Quantitative data can be a measure of the DNA, RNA, or protein level of a marker and expressed in units of measurement such as molar concentration, concentration by weight, etc. For example, if the marker is a protein, quantitative data for that marker can be protein expression levels measured using methods known to those of skill in the art and expressed in mM or mg/dL concentration units.
The term “mammal” as used herein includes both humans and non-humans and includes, but is not limited to, humans, non-human primates, canines, felines, murines, bovines, equines, and porcines.
The term “pseudo coronary calcium score” as used herein refers to a coronary calcium score generated using the methods as disclosed herein rather than through measurement by an imaging modality. One of skill in the art would recognize that a pseudo coronary calcium score may be used interchangeably with a coronary calcium score generated through measurement by an imaging modality.
The term percent “identity,” in the context of two or more nucleic acid or polypeptide sequences, refers to two or more sequences or subsequences that have a specified percentage of nucleotides or amino acid residues that are the same, when compared and aligned for maximum correspondence, as measured using one of the sequence comparison algorithms described below (e.g., BLASTP and BLASTN or other algorithms available to persons of skill) or by visual inspection. Depending on the application, the percent “identity” can exist over a region of the sequence being compared, e.g., over a functional domain, or, alternatively, exist over the full length of the two sequences to be compared.
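As a simple illustration of the percent identity calculation (a sketch only: it assumes the two sequences have already been aligned for maximum correspondence, e.g., by BLASTN, and directly counts matching positions rather than implementing an alignment algorithm):

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.

    Columns that are gaps ('-') in both sequences are skipped.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' and b == '-':
            continue  # skip columns gapped in both sequences
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Example: 18 of 20 aligned nucleotides are identical -> 90.0% identity
print(percent_identity("ACGTACGTACGTACGTACGT",
                       "ACGTACGTACGTACGTAAAT"))
```

In practice the alignment step itself (BLAST or another algorithm) determines which positions are compared; this sketch only covers the counting step.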
In certain embodiments, the “effectiveness” of a treatment regimen is determined. A treatment regimen is considered effective based on an improvement, amelioration, reduction of risk, or slowing of progression of a condition or disease. Such a determination is readily made by one of skill in the art.
The pooling approach utilized in this study accomplished two goals: a) to investigate the ability of the Exiqon Locked Nucleic Acid (LNA™) technology to identify miRNAs in serum and b) to utilize minimum volumes from precious archived clinical samples for testing.
In order to evaluate the ability of the LNA™ technology to identify miRNAs in serum, 52 pools were created using archived serum samples from a prospective study (Marshfield Clinic Personalized Medicine Research Project (PMRP), Personalized Medicine, 2(1): 49-79 (2005)). Twenty-six of the pools represented cases and 26 pools represented controls. Each pool contained equivalent volumes (50 μL) of serum sample from each of 5 individuals that were matched for age (selected from the eight 5-year ranges between ages 40 and 80), gender, and time to event for cases (i.e., MI within 0-6 mos, MI within 6-12 mos, etc.). The matching for the latter was approximate. Cases were subjects with an MI or hospitalized unstable angina within five years from blood draw. Controls were subjects that did not have either of these events within five years from blood draw. The sample was evaluated as a classification problem and the test performance was judged using the area under the curve (AUC).
The performance of the test in terms of AUC depends on the distribution of measured values (for individual markers) or of the score, which at the time of the experimental design was unknown. In order to estimate the expected performance of the test for a set of similar sample size with the actual experimental design (26 cases and 26 controls), a number of simulations were performed using different assumed distributions for the variables and different numbers of samples in a pool. The assumed distributions used were: a) normal, b) chi-squared and c) log-normal. For each distribution and number of samples in a pool, the appropriate number of “controls” was randomly selected and the corresponding number of cases was selected from a distribution with a known shift in the mean, in order to represent differences between the populations. Therefore, for a pool of size M, 26*M controls and 26*M cases were selected, and each pooled sample was created by averaging the values of M samples. The process was repeated 500 times and a distribution of expected AUCs was estimated for a given number of pooled samples and population distance.
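The simulation described above can be sketched as follows. This is a hedged illustration: the mean shift of 0.5, the chi-squared degrees of freedom, and the random seed are assumptions for demonstration, not values from the study.

```python
import numpy as np

def pooled_auc_distribution(n_pools=26, pool_size=5, shift=0.5,
                            n_sims=500, dist="normal", seed=0):
    """Simulate the expected AUC when classifying pooled samples.

    Each simulation draws pool_size * n_pools controls from a baseline
    distribution and the same number of cases from a mean-shifted
    distribution, averages every pool_size values into one pooled
    measurement, and computes the AUC (Mann-Whitney statistic) over pools.
    """
    rng = np.random.default_rng(seed)
    aucs = []
    n = n_pools * pool_size
    for _ in range(n_sims):
        if dist == "normal":
            controls, cases = rng.normal(0, 1, n), rng.normal(shift, 1, n)
        elif dist == "lognormal":
            controls = rng.lognormal(0, 1, n)
            cases = rng.lognormal(shift, 1, n)
        else:  # chi-squared baseline with a shifted mean (dof assumed)
            controls = rng.chisquare(3, n)
            cases = rng.chisquare(3, n) + shift
        pooled_ctrl = controls.reshape(n_pools, pool_size).mean(axis=1)
        pooled_case = cases.reshape(n_pools, pool_size).mean(axis=1)
        # AUC = P(case score > control score), by pairwise comparison
        diff = pooled_case[:, None] - pooled_ctrl[None, :]
        aucs.append(((diff > 0).sum() + 0.5 * (diff == 0).sum())
                    / (n_pools * n_pools))
    return np.array(aucs)

aucs = pooled_auc_distribution()
print(aucs.mean(), aucs.std())
```

Because pooling averages M samples, the within-group variance shrinks by a factor of M, which is why pooled measurements can separate the groups better than individual ones for the same mean shift.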
Thirty-eight miRNAs on 52 pooled samples were analyzed using EXIQON UniRT® LNA technology. Total RNA was extracted from the supplied serum samples (described above) using the QIAGEN RNEASY® Mini Kit Protocol (QIAGEN, Valencia, Calif.) with a slightly modified protocol.
Total RNA was extracted from serum using the QIAGEN RNEASY® Mini Kit. Serum was thawed on ice and centrifuged at 1000×g for 5 min in a 4° C. microcentrifuge. An aliquot of 200 μL of serum per sample was transferred to a new microcentrifuge tube and 750 μL of Qiazol mixture containing 0.94 μg/μL of MS2 bacteriophage was added to the serum. The tube was mixed and incubated for 5 min, followed by the addition of 200 μL chloroform. The tube was mixed, incubated for 2 min and centrifuged at 12,000×g for 15 min in a 4° C. microcentrifuge. The upper aqueous phase was collected into a new microcentrifuge tube and 1.5 volumes of 100% ethanol were added. The tube was mixed thoroughly and 750 μL of the sample was transferred to a QIAGEN RNEASY® Mini spin column in a collection tube, followed by centrifugation at 15,000×g for 30 sec at room temperature. The process was repeated until the remaining sample was loaded. The QIAGEN RNEASY® Mini spin column was rinsed with 700 μL QIAGEN RWT buffer and centrifuged at 15,000×g for 1 min at room temperature, followed by another rinse with 500 μL QIAGEN RPE buffer and centrifugation at 15,000×g for 1 min at room temperature. Rinsing with 500 μL QIAGEN RPE buffer was repeated twice. The QIAGEN RNEASY® Mini spin column was transferred to a new collection tube and centrifuged at 15,000×g for 2 min at room temperature. The QIAGEN RNEASY® Mini spin column was then transferred to a new microcentrifuge tube and the lid was uncapped for 1 min to dry. RNA was eluted by adding 50 μL of RNase-free water to the membrane of the QIAGEN RNEASY® Mini spin column and incubating for 1 min before centrifugation at 15,000×g for 1 min at room temperature. RNA was stored in a −70° C. freezer until shipment on dry ice. Thirty-eight miRNAs were selected for analysis (Table 3).
Each RNA sample was reverse transcribed (RT) into cDNA in three independent RT reactions and run as a singlicate real-time PCR (qPCR) reaction.
Each 384-well plate contained reactions for all the samples for 2 miRNA assays. Negative controls were included in the experiment: a no-template control (RNA replaced with water) in the RT step, and a no-enzyme control in the RT step (pooled RNA as template). All assays passed this quality control step in that the no-template control and no-enzyme control were negative.
An additional step in the real-time PCR analysis was performed to evaluate the specificity of the assays by generating a melting curve for each reaction. The appearance of a single peak during melting curve analysis is an indication that a single specific product was amplified during the qPCR process. The appearance of multiple melting curve peaks correspondingly provides an indication of multiple qPCR amplification products and is evidence of a lack of specificity. Any assays that showed multiple peaks were excluded from the data set. The amplification curves were analyzed using the LIGHTCYCLER® software (Roche, Indianapolis, Ind.) both for determination of Cp (crossing point, i.e., the point where the measured signal crosses above a predesignated threshold value, indicating a measurable concentration of the target sequence) by the 2nd derivative method and for melting curve analysis.
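The 2nd derivative method for Cp determination can be sketched numerically as follows. This is a minimal illustration on a clean synthetic sigmoid; the LIGHTCYCLER® implementation is proprietary and will differ in detail.

```python
import numpy as np

def crossing_point(cycles, fluorescence):
    """Estimate Cp as the cycle at the maximum of the second derivative
    of the amplification curve (a common approximation of the 2nd
    derivative maximum method used by real-time PCR software)."""
    second = np.gradient(np.gradient(fluorescence, cycles), cycles)
    return cycles[int(np.argmax(second))]

# Synthetic sigmoidal amplification curve with midpoint near cycle 25;
# the second-derivative maximum falls shortly before the midpoint
cycles = np.arange(1, 41, dtype=float)
signal = 1.0 / (1.0 + np.exp(-(cycles - 25.0) / 1.5))
print(crossing_point(cycles, signal))
```

On real, noisy fluorescence data the curve would typically be smoothed before differentiation; this sketch omits that step.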
PCR efficiency was also assessed by analysis of the PCR amplification curve with the LINREG® software (Open Source Software). The performance of five housekeeping miRNAs (miR-16, miR-93, miR-103, miR-192 & miR-451) was used to evaluate the quality of the RNA extracted from the supplied serum samples.
Twenty-four of the 38 miRNA targets were detected in the samples. Fifty of the samples (26 cases and 24 controls) were used to evaluate the expected performance of a classification analysis on these samples and to select miRNAs that predict status. The following methodologies were employed for building a model: a) a logistic regression approach and b) a penalized logistic regression approach using the L1 penalty (lasso). The selection of the terms that provided the best classification in a model was completed by a) conducting forward selection using the Bayesian Information Criterion for the unpenalized logistic regression approach and b) a cross-validation based selection of the optimum penalty for the penalized approach. In the latter, since the penalty parameter drives the coefficients of the available parameters to zero, the resulting model contains only a reduced number of predictive miRNAs. In order to obtain an objective measure of the performance, AUC was calculated using a prevalidated score. Prevalidation is very similar to a cross-validation approach: the association of a “score” with a given outcome is based on values that, for a given subject, have been predicted from a model that was fit without using that subject in the training set. For this analysis, prevalidated scores were calculated based on two approaches: a) k-fold cross-validation and b) leave-one-out cross-validation. The prevalidation iteration was repeated N times (where N is usually equal to 100-1000). The complete sequence of the analysis is as follows:
1) Fit a model on a subset of the data using logistic regression with BIC for model selection, or penalized logistic regression estimating the penalty function through a nested cross-validation in the training set;
2) For a k-fold cross-validation, the model is fitted on k-1 groups of samples;
3) For a leave-one-out cross-validation, the model is fitted in the M-1 samples where here M=50;
4) Using the fitted model, predict the score for the left-out samples (group k for the cross-validation and the single left-out sample for the leave-one-out cross-validation);
5) Once all the scores have been predicted for all the samples, calculate the AUC for the classification problem;
6) Repeat steps 1-5 N times to evaluate the variability of the AUC.
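The leave-one-out variant of the steps above can be sketched as follows. The data are synthetic and the fixed penalty strength C is an assumption for illustration (the text describes selecting it by nested cross-validation).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def prevalidated_auc(X, y, C=0.5):
    """Leave-one-out prevalidation: each sample's score comes from an
    L1-penalized logistic model fit without that sample, so the AUC
    computed on the scores is an out-of-sample estimate."""
    n = len(y)
    scores = np.empty(n)
    for i in range(n):
        train = np.ones(n, dtype=bool)
        train[i] = False  # leave sample i out of the training set
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X[train], y[train])
        scores[i] = model.predict_proba(X[i:i + 1])[0, 1]
    return roc_auc_score(y, scores)

# Synthetic example: 50 samples, 30 candidate markers, 3 informative
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=50) > 0).astype(int)
print(prevalidated_auc(X, y))
```

The L1 penalty drives most coefficients to exactly zero, so each per-fold model retains only a handful of markers, matching the behavior described in the text.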
Table 5 presents the count of biomarkers selected using the leave-one-out (LOOV) cross-validation in combination with an L1 penalized logistic regression approach. The two methods provide highly overlapping sets of biomarkers, selected in approximately the same order. The difference in the counts is due to the number of samples in the set. The corresponding AUC is 0.66.
A follow-up experiment concentrated on evaluating the detection and performance of miRNAs in individual serum samples (26 cases and 26 controls) using the EXIQON LNA™ technology described in Example 1. A total of 90 miRNAs (see Table 6) were screened, which included the miRNAs screened in the pooled samples. Forty-four of the 90 miRNA targets were detected in the individual serum samples. The 24 miRs detected in the pooled samples were also detected in the individual samples and 20 additional miRNAs were detected in the individual samples. Five miRNAs were used for data normalization and were removed from the analysis.
The same methodology described in Example 1 was utilized for analysis of this data set. Using a penalized logistic regression with leave-one-out cross-validation produced an AUC equal to 0.778. The number of times individual miRNAs were selected in the models used in the prevalidated score calculation is shown in Table 7 (50 models total, since there were 50 samples). The average model size was ˜8 terms (the top 8 miRNAs are indicated by “*”). This expected value is higher than the corresponding value obtained for the pooled data.
Table 8 provides the miRNAs selected when an L1 penalized logistic regression approach with 4-fold cross validation was applied to 50 individual samples. Again, considerable overlap in the markers and order is observed between the two methods.
Models were developed that included protein only data (from the Marshfield cohort utilized in Examples 1 and 2). A total of 47 unique protein biomarkers (Table 9) were analyzed. Serum samples were collected and kept frozen at −80° C., then thawed immediately prior to use. Each sample was analyzed in duplicate using two distinct detection technologies: xMAP® technology from Luminex (Austin, Tex.) and the SECTOR® Imager with MULTI-SPOT® technology from Meso Scale Discovery (MSD, Gaithersburg, Md.).
The Luminex xMAP technology utilizes analyte-specific antibodies that are pre-coated onto color-coded microparticles. Microparticles, standards and samples are pipetted into wells and the immobilized antibodies bind the analytes of interest. After an appropriate incubation period, the particles are re-suspended in wash buffer multiple times to remove any unbound substances. A biotinylated antibody cocktail specific to the analytes of interest is added to each well. Following a second incubation period and a wash to remove any unbound biotinylated antibody, streptavidin-phycoerythrin conjugate (Streptavidin-PE), which binds to the biotinylated detection antibodies, is added to each well. A final wash removes unbound Streptavidin-PE and the microparticles are resuspended in buffer and read using the Luminex analyzer. The analyzer uses a flow cell to direct the microparticles through a multi-laser detection system. One laser is microparticle-specific and determines which analyte is being detected. The other laser determines the magnitude of the phycoerythrin-derived signal, which is in direct proportion to the amount of analyte bound. Curves are constructed using the signals generated by the standards and protein biomarker concentrations of the samples are read off each curve. Sensitivity (Limit of Detection, LOD) and precision (intra- and inter-assay % CV) of the 47 Luminex protein biomarker assays are shown in Table 10.
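Reading sample concentrations off a standard curve is commonly done by fitting a four-parameter logistic (4PL) model to the standards and inverting it. The following sketch uses made-up standard concentrations and curve parameters purely to illustrate the idea; it is not the vendor's curve-fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose, d = response
    at infinite dose, c = inflection concentration, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Read a concentration off the fitted curve from a measured signal."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Fit a standard curve from known standards, then back-calculate an unknown
standards = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # pg/mL
true = (50.0, 1.2, 80.0, 4000.0)           # hypothetical curve parameters
signals = four_pl(standards, *true)        # noiseless synthetic signals
params, _ = curve_fit(four_pl, standards, signals, p0=(100, 1, 50, 3000))
print(inverse_four_pl(four_pl(25.0, *true), *params))  # close to 25 pg/mL
```

With real, noisy standards the fit would not recover the parameters exactly, and signals outside the range bracketed by a and d cannot be inverted.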
Ten of the 47 unique protein biomarkers were analyzed with a 10-plex assay on the MSD platform (Table 11).
The MSD technology utilizes specialized 96-well microtiterplates constructed with a carbon surface on the bottom of each plate. Antibodies specific for each protein biomarker are spotted in spatial arrays on the bottom of each well of the microtiterplate. Standards and samples are pipetted into the wells of the precoated plates and the immobilized antibodies bind the analytes of interest. After an appropriate incubation period, the plates are washed multiple times to remove any unbound substances. A cocktail of analyte-specific secondary antibodies labeled with a SULFO-TAG™ is added to each well. Following a second incubation period, the plates are again washed multiple times to remove any unbound materials and a specialized Read Buffer is added to each well. The plates are then placed into the SECTOR® Imager where an electric current is applied to the carbon electrode on the bottom of the microtiterplate. The SULFO-TAG™ labels bound to the specific secondary antibodies at each spot emit light upon this electrochemical stimulation, which is detected using a sensitive CCD camera. Curves are constructed using the signals generated by the standards and protein biomarker concentrations of the samples are read off each curve. Sensitivity (Limit of Detection, LOD) and precision (intra- and inter-assay % CV) of the 10 MSD protein biomarker assays are shown in Table 12.
The models were built and performance was evaluated using the logistic regression approach with LOOV or k-fold cross-validation for the calculation of the prevalidated score as described above.
Models were developed that included both protein and miRNA data (from Examples 1 and 2). The protein data across 47 biomarkers (from Example 3) were obtained using two distinct detection technologies: Luminex (Luminex Corp, Austin, Tex.) and the Meso Scale Discovery system. Since the protein and miRNA data were combined, the number of candidate explanatory variables exceeded the number of samples. In this situation, the use of the unpenalized methods is not appropriate; thus, models were built and performance was evaluated using the penalized logistic regression with LOOV or k-fold cross-validation for the calculation of the prevalidated score as described above.
In this study, the levels of the miRNAs describe the risk of an event (here MI) occurring over time. Univariate and multivariate classification and survival analyses of 112 candidate miRNA markers were performed. Classification results were obtained based on the methodologies described in Examples 2 and 3. Survival analysis was performed using a Cox proportional hazard regression approach. The response variables for the latter analysis included the time when an event took place or the time to the end of the study, and an index indicating whether the time corresponds to an event or to the end of the study (censoring). For the 52 samples described in Example 2, the time of event or end of follow-up time was known. For the 26 subjects that had an event before the end of the study, the indicator variable for an event was set to 1, and for the 26 subjects without an event within the duration of the study the indicator variable was set to 0. Explanatory variables included in the analysis were: a) the protein levels alone, b) the miRNA levels alone and c) either the miRNA and/or protein levels. Model fitting was accomplished using both penalized and unpenalized versions of the Cox proportional hazard model. The L1 penalty (lasso) was used whenever the penalized version of the model was applied. The variable selection for each model was performed using the same approaches described in Example 1, i.e., using a) the Bayesian information criterion with forward selection for the unpenalized version of the models and b) a cross-validation based selection of the optimum penalty for the penalized approach. In order to evaluate the performance of these models in an objective way, the calculation of a prevalidated score obtained in a manner similar to the one described in Example 1 was employed.
In the first analysis (classification), survival time was ignored and all cases were treated the same, regardless of time-to-event. Table 16 shows the results for the univariate classification analysis. The markers in this table have been ordered by the predicted AUC. Table 17 shows the selection frequency of miRNAs in multivariate classification models. Multiple logistic regression models were built during the prevalidation process on training sets obtained through a LOOV approach, providing a score for the left-out sample. The model size was determined by the use of the Bayesian Information Criterion. The average classification performance was based on the vector of prevalidated classification scores and was equal to 0.7.
Table 18 shows the results from the univariate survival analysis. Again, the markers in this table have been ordered by the predicted AUC. Top selected markers were almost identical to those obtained from the classification analysis and overall performance, as measured by time-dependent AUC, was comparable to that obtained from the classification approach. Table 19 shows the selection frequency of the miRNA markers in a multivariate survival analysis using a Cox proportional hazard regression approach. The expected performance, for miRNA-only based models, was estimated using prevalidation (AUC=0.78). Training sets were constructed through a leave-one-out approach and the model size within each fold was determined based on the Bayesian information criterion. The average model size was 8.
In order to further investigate the ability of miRNA biomarkers to distinguish case versus control, RNA extracts previously obtained from the fifty-two serum samples from Example 2 were screened for the presence of the 720 miRNA target sequences shown in Table 1, using Exiqon's miRCURY LNA™ Universal RT microRNA PCR array technology platform, currently updated to miRBase 13.
A number of analyses were combined to provide an overall significance of each miRNA biomarker. Univariate classification and survival analyses provided AUC values for each individual miRNA target which were used to rank each target in order of significance. Multivariate analysis was also conducted to generate 47 multivariate models. miRNA targets were ranked by the number of models for which they were selected. A t-test analysis (1-tailed) was also conducted comparing Cp values measured for each miRNA target in the case and control populations. Lastly, a quartile analysis was conducted for the data set. For each miRNA target, all samples (combined case and control populations) were ranked according to Cp value (low to high). The ranked population was then divided into four quartiles, each containing 25% of the total population. The number of case and control subjects in each quartile was then recorded. If greater than 65% or less than 35% of the total number of 26 cases were ranked in the “low” quartile, then that miRNA target was considered significant.
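The quartile analysis can be sketched as follows. The Cp values are synthetic: cases are given systematically higher Cp than controls, so cases are depleted from the “low” quartile and the marker is flagged as significant under the <35% rule.

```python
import numpy as np

def quartile_significant(cp_values, is_case, low_frac=0.35, high_frac=0.65):
    """Rank all samples by Cp (low to high) and flag the miRNA target as
    significant if the fraction of all cases falling in the lowest
    quartile is above high_frac or below low_frac."""
    order = np.argsort(cp_values)
    low_quartile = order[:len(cp_values) // 4]   # lowest 25% of samples
    frac = is_case[low_quartile].sum() / is_case.sum()
    return bool(frac > high_frac or frac < low_frac)

# 52 synthetic samples (26 cases, 26 controls); cases have higher Cp,
# i.e., lower abundance, so few cases land in the low-Cp quartile
rng = np.random.default_rng(2)
cp = np.concatenate([rng.normal(31, 1, 26), rng.normal(28, 1, 26)])
case = np.concatenate([np.ones(26, bool), np.zeros(26, bool)])
print(quartile_significant(cp, case))
```

With a real data set the thresholds (35%/65%) and quartile definition would follow the description above; they are passed as parameters here so they can be varied.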
Based on the analysis of the expanded set of 720 miRNA biomarkers, an overall significance score was generated and a final overall rank was assigned, by which the entire set of miRNA targets was ranked. Table 20 shows the top 50 scoring miRNAs.
The development of a cardiovascular risk score was based on a sample of 1123 individuals from the PMRP (Personalized Medicine, 2(1): 49-79 (2005)). The set was selected based on a case-cohort design. Subjects from the PMRP cohort were considered “cases” if they were 40-80 years old at the time of baseline blood draw and if they had an incident MI or had been hospitalized for unstable angina (UA) during the 5 years of follow-up. There were 385 total cases (164 subjects with initial MI and 221 subjects with UA) and 838 controls. The available data included 59 (47 unique) protein biomarkers measured for each individual and 107 clinical characteristics, including demographic variables (age, gender, race, diabetes status, family history of MI, smoking, etc.), laboratory measurements (total cholesterol, HDL, LDL, etc.) and medication use (statin, antihypertensive medication, hypoglycemic medication, etc.).
Univariate Analysis. The association of each biomarker with patient outcome was evaluated using a Cox proportional hazard regression and the time-dependent area under the curve (AUC) using the Kaplan-Meier method of Heagerty et al. (Survival Model Predictive Accuracy and ROC Curves, Biometrics, 61:92-105 (2005)). In order to present the hazard ratio (HR) across all protein biomarkers with different concentration ranges on a common scale, the values for all subjects were normalized, after log-transforming the data, by subtracting the mean of the controls' concentrations and dividing by the standard deviation of the controls. The hazard ratios were thus expressed per one standard deviation unit.
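The normalization described above amounts to a control-referenced z-score on the log scale; a minimal sketch with toy concentrations:

```python
import numpy as np

def standardize_to_controls(values, is_control):
    """Log-transform, then center and scale by the control group's mean
    and standard deviation, so a Cox hazard ratio for the standardized
    variable is expressed per one control-SD unit."""
    logged = np.log(values)
    mu = logged[is_control].mean()
    sd = logged[is_control].std(ddof=1)
    return (logged - mu) / sd

# Toy concentrations (e.g., pg/mL): first three subjects are controls
conc = np.array([12.0, 15.0, 9.0, 40.0, 55.0, 38.0])
ctrl = np.array([True, True, True, False, False, False])
z = standardize_to_controls(conc, ctrl)
print(z.round(2))
```

Because the reference mean and SD come from the controls only, case values are expressed in units of control-group variability rather than of the pooled population.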
The same analysis was repeated while adjusting each of the biomarkers for the following traditional risk factors (TRFs): age, sex, systolic BP, diastolic BP, cholesterol, HDL, hypertension, use of hypertension drug, hyperlipidemia, diabetes, smoking (
Multivariate analysis: development of prognostic score for MI and/or UA. The development of a prognostic score was based on the inclusion of TRFs as well as protein biomarkers. Given the known association of age, gender, diabetes, and family history with cardiovascular events, these four parameters were included in the model. The inclusion of these 4 parameters was confirmed by running a number of forward marker selection algorithms. All of the algorithms selected the four variables in the final multivariate algorithms. The determination of the optimum model size was based on the use of the following criteria: (a) Akaike information criterion, (b) Bayesian information criterion, (c) Drop-in-deviance criterion. The first 2 are known in-sample error estimators and the third utilizes a cross-validation loop to estimate the goodness-of-fit. In all three cases, the model size was selected for the model that best fit the data, avoiding overfitting. A characteristic drop-in-deviance curve for model selection (a plot of the absolute value of the quantity) is shown in
Table 21 shows the frequency selection, average, minimum and maximum rank of each biomarker over 4 repeats of a 5-fold prevalidation (a form of cross-validation) process. The 4 TRFs were included in each of the models.
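Forward selection under the Bayesian information criterion, as used for the marker selection in these examples, can be sketched as follows. The data are synthetic, and the BIC is approximate because scikit-learn's logistic regression is mildly L2-regularized by default rather than a pure maximum-likelihood fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bic(model, X, y):
    """Approximate BIC = -2 log-likelihood + k * log(n) for a fitted model."""
    p = np.clip(model.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    k = X.shape[1] + 1  # coefficients plus intercept
    return -2.0 * loglik + k * np.log(len(y))

def forward_select(X, y, max_terms=8):
    """Greedily add the marker that most lowers BIC; stop when none does."""
    selected, best = [], np.inf
    while len(selected) < max_terms:
        candidates = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            m = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            candidates[j] = bic(m, X[:, cols], y)
        j_best = min(candidates, key=candidates.get)
        if candidates[j_best] >= best:
            break  # no candidate improves BIC
        best = candidates[j_best]
        selected.append(j_best)
    return selected

# Synthetic data: markers 4 and 9 carry the signal, the rest are noise
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = (2 * X[:, 4] - 1.5 * X[:, 9] + rng.normal(size=100) > 0).astype(int)
print(forward_select(X, y))
```

The log(n) penalty in BIC grows with sample size, which is why this criterion tends to pick compact models such as the ~8-term models reported above.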
Using the optimum model size predicted by the drop-in-deviance approach, a Cox proportional hazard model was fit to all available data in order to obtain a model that could be used for validation on a different population. This final protein-based model contained the following protein biomarkers in the order selected: IL-16, eotaxin, Fas ligand, CTACK, MCP-3, HGF, and sFas.
The transportability of the disclosed model for predicting risk of a cardiovascular event (i.e., MI or UA) was assessed in a second multi-ethnic cohort selected from the U.S. population, ages 45-84 years old (Multi-Ethnic Study of Atherosclerosis Cohort) [Bild D E, Bluemke D A, Burke G L, Detrano R, Diez Roux A V, Folsom A R, Greenland P, Jacob D R, Jr., Kronmal R, Liu K, Nelson J C, O'Leary D, Saad M F, Shea S, Szklo M, Tracy R P. Multi-ethnic study of atherosclerosis: objectives and design. Am J Epidemiol. 2002; 156(9):871-881.]
In order to establish the expected performance of the model on a different sample similar to the one used for development, the method of prevalidation was used again before applying the model to the second population. Two performance metrics were used: the Net Reclassification Index (NRI) and the Clinical Net Reclassification Index (CNRI). The definition of the net reclassification index is given by the following equation:

NRI=[P(up|case) - P(down|case)] - [P(up|control) - P(down|control)]

where “up” and “down” denote reclassification into a higher or lower risk category, respectively, by the disclosed model relative to the existing model.
The equation measures the improvement for the cases and controls separately in terms of a percentage and combines the results into a single number. A positive percentage for the cases and a negative percentage for the controls represent improvement in performance introduced by the disclosed model. The risk category is defined by establishing appropriate thresholds for the risk scores predicted by the existing and disclosed models. The CNRI is defined in the same way but applies to a subset of the population that can gain from an improved method of identifying the true risk within the group. For cardiovascular disease, application of the NRI metric in the intermediate risk population, as defined by the Framingham score for example, satisfies this criterion. The calculated value represents the CNRI performance for the intermediate risk category.
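The NRI computation over discrete risk categories can be sketched as follows (toy data; the three-level category coding is an assumption for illustration):

```python
import numpy as np

def net_reclassification_index(old_cat, new_cat, is_case):
    """NRI = [P(up|case) - P(down|case)] - [P(up|control) - P(down|control)],
    where "up"/"down" mean the new model placed a subject in a higher/lower
    risk category than the old model."""
    old_cat, new_cat = np.asarray(old_cat), np.asarray(new_cat)
    cases = np.asarray(is_case)
    ctrls = ~cases
    up, down = new_cat > old_cat, new_cat < old_cat
    nri_cases = ((up & cases).sum() - (down & cases).sum()) / cases.sum()
    nri_ctrls = ((up & ctrls).sum() - (down & ctrls).sum()) / ctrls.sum()
    return nri_cases - nri_ctrls

# Risk categories: 0 = low, 1 = intermediate, 2 = high.
# Two cases move up, one case moves down; two controls move down, one up.
old = [1, 1, 1, 1, 1, 1, 1, 1]
new = [2, 2, 1, 0, 0, 0, 1, 2]
case = np.array([True, True, True, True, False, False, False, False])
print(net_reclassification_index(old, new, case))  # -> 0.5
```

Restricting the same computation to subjects whose existing-model risk falls in the intermediate category gives the CNRI described above.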
Traditionally, the intermediate risk category, as calculated by the Framingham score for 10 year risk, has been defined as those individuals with a risk score between 10% and 20%. The results presented here are based on the following cutoffs for defining the intermediate risk category: a lower bound of 3.5% and an upper bound of 7.5%. The use of these lower cutoffs is justified because: a) the disclosed model focuses on a time horizon of 5 years, and b) the event rate in the current population is lower than the one observed when the Framingham score was developed.
The reclassification comparison required the calculation of an absolute risk, from each model, for a given subject. The calculation of an absolute risk for each individual using a Cox Proportional Hazard (Cox PH) model required the calculation of the relative risk for this individual based on their characteristics and the estimation of a baseline hazard. The Cox PH model is designed to predict relative risk and does not require specification of the hazard function. To produce absolute risk estimates from a Cox PH model, the absolute risk for a reference individual, or for an "average" individual, is needed first; the absolute risk for any other individual can then be computed from their risk relative to this reference. The average is a hypothetical individual with the population average value for each predictor. Given that the true baseline hazard for the population and the corresponding "average" person are not known (because the correct model for the calculation of the risk of a cardiovascular event is unknown), an estimate needed to be provided. The R language [R: A Language and Environment for Statistical Computing, R Development Core Team, R Foundation for Statistical Computing, Vienna, Austria, 2010] survfit function was used to calculate the baseline hazard for the average individual. The survfit function uses weights for the calculation: each member of the population receives a weight depending on their estimated risk score relative to the average, and a weighted hazard estimate is then used for the baseline hazard. The estimation of a baseline hazard therefore depends on the model used, and hence also upon the predicted relative risk. In order to make fair comparisons of the reclassification performance of the disclosed model vs. the FRS and TRF-based models, an appropriate baseline hazard estimate was needed that did not unduly favor any one model.
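The conversion from a Cox PH relative risk to an absolute risk can be sketched as below. The baseline survivor value passed in is a hypothetical placeholder standing in for the survfit output; this is a sketch of the standard Cox PH relationship, not the disclosed implementation:

```python
# Sketch: absolute risk from a Cox PH model via the relation
# S(t|x) = S0(t) ** exp(lp - lp_mean), where S0(t) is the baseline survivor
# probability for the "average" individual at the time horizon t.
import math

def absolute_risk(lp, lp_mean, s0_t):
    """lp: subject's linear predictor (sum of beta_i * x_i).
    lp_mean: linear predictor evaluated at the population-mean covariates.
    s0_t: baseline survivor probability at horizon t (placeholder for an
    estimate such as the one produced by the R survfit function)."""
    relative_risk = math.exp(lp - lp_mean)
    survival = s0_t ** relative_risk      # subject-specific survival at t
    return 1.0 - survival                 # absolute risk of an event by t

# Hypothetical values: 5-year baseline survival of 0.95 for the average person.
print(absolute_risk(lp=1.2, lp_mean=0.8, s0_t=0.95))
```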
Described below is the preferred approach for the calculation of the baseline hazard that used a risk score that is the average score from the two models being compared. In addition, the survfit function implemented two alternative estimators: Kaplan-Meier and Aalen. Both estimators were tested and the difference observed was negligible. In order to extend our conclusions to the population, the baseline survivor function was evaluated at the population mean of the covariates using the case-cohort weights of the study.
The selection of a baseline hazard estimate for comparing two models in terms of absolute risk score is a difficult problem, and one not addressed in the literature. Because the true baseline hazard for the population is unknown, the use of a different estimate by each model can have a significant effect on the results of the comparison. To investigate the effect of the baseline hazard estimate, all calculations were performed using two different methods: 1.) the absolute risk score for each model based on the individual baseline survivor estimate using the linear predictor scores calculated by each model; and 2.) the absolute risk score based on a common baseline survivor estimate obtained by calculating the average linear predictor from the two scores, centered at the population mean.
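The two baseline-survivor strategies described above can be sketched side by side. The baseline survivor values here are hypothetical placeholders (estimation of the common baseline from the averaged score, e.g. via survfit, is not shown):

```python
# Sketch comparing the two methods for placing two models on an absolute-risk
# scale: (1) each model uses its own baseline survivor estimate; (2) both
# models share a common baseline estimated from the average of the two
# mean-centered linear predictors.
import math

def abs_risk(lp, lp_mean, s0):
    """Absolute risk at horizon t from a Cox linear predictor centered at
    the population mean."""
    return 1.0 - s0 ** math.exp(lp - lp_mean)

def compare_baselines(lp_a, lp_b, s0_a, s0_b, s0_common):
    """lp_a, lp_b: linear predictor scores from models A and B for the same
    subjects. s0_a, s0_b: each model's own baseline survivor estimate.
    s0_common: shared baseline derived from the averaged score (its
    estimation is outside this sketch)."""
    mean_a = sum(lp_a) / len(lp_a)
    mean_b = sum(lp_b) / len(lp_b)
    # Method 1: individual baseline survivor estimates
    risks_a1 = [abs_risk(lp, mean_a, s0_a) for lp in lp_a]
    risks_b1 = [abs_risk(lp, mean_b, s0_b) for lp in lp_b]
    # Method 2: common baseline, scores centered at the population mean
    risks_a2 = [abs_risk(lp, mean_a, s0_common) for lp in lp_a]
    risks_b2 = [abs_risk(lp, mean_b, s0_common) for lp in lp_b]
    return risks_a1, risks_b1, risks_a2, risks_b2
```

Under Method 2 a subject at the population-mean score receives the same absolute risk from both models, which is why the shared baseline cannot unduly favor either model.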
Tables 22, 23, and 24 present the NRI and CNRI expected performance of the pre-validated models containing biomarkers against three alternative models: 1.) the Framingham risk score (“FRS”); 2.) a model fitted on the Marshfield data using 4 TRFs (“4-TRF”; age, gender, diabetes, and family history of MI) as covariates; and 3.) an alternate model fitted on the Marshfield data using 9 TRFs (“9-TRF”; age, gender, diabetes, family history of MI, smoking, total cholesterol, HDL, hypertension medication, and systolic pressure) as covariates.
Overall, the models that included protein biomarkers provided a better reclassification over the FRS or TRF-based models in both the 3.5-7.5% and 3.5-10% ranges of 5 year risk for a cardiovascular event. Table 22 shows the expected reclassification performance of the disclosed model score against the calibrated FRS score based on pre-validation (Marshfield data set). Tables 23 and 24 show the expected reclassification score against the 4-TRF and 9-TRF model scores, respectively, based on pre-validation (Marshfield data set).
The overall reclassification in terms of both NRI and CNRI was comparable using either of the two methods for calculating the baseline survivor function. There was, however, a difference between the two methods in the reclassification balance of the cases and controls that make up the total NRI or CNRI. The common baseline survivor function method provided a more balanced reclassification. This result was consistent with the results obtained for the relative risk prediction of the models.
The common baseline survivor function method (using the average score) was also consistent with many statistical approaches that use a voting scheme (i.e. weighted averaging) for improving prediction accuracy.
Expected Reclassification performance of Aviir score against the 4-TRF model score based on pre-validation (Marshfield data set)
Expected Reclassification performance of Aviir score against the 9-TRF model score based on pre-validation (Marshfield data set)
The question of transportability of a prognostic model across multiple populations provides the ultimate test for the usefulness of the prediction model. A model's statistical and clinical validity are equally important facets of its transportability. A three-step validation approach has been proposed for a new test: 1) internal validation, 2) temporal validation, and 3) external validation. The completion of the first step by using a pre-validation approach (a form of cross-validation) to validate the modeling methods was described above. The second step requires the testing of the algorithm on a different patient set from the same population or clinical center. Given that there is only a short period of time (about 2 years) between the time that the last event took place within the Marshfield study and the current time, the number of subsequent events was too small for validation within the same population. Therefore, the external validation step was conducted by testing the disclosed protein model on the MESA sample set as a demonstration of the disclosed protein model's transportability.
To evaluate the disclosed model's performance on the MESA cohort, 824 samples (222 cases and 602 controls) were assayed using the panel of protein biomarkers described in Example 7 (IL-16, eotaxin, Fas ligand, CTACK, MCP-3, HGF, and sFas).
The Marshfield-trained model was used to predict a score for each subject of the MESA sample, with marker selection and model fitting performed on the Marshfield population without any knowledge of or input from the MESA results.
The calculations of the absolute risk scores for all models were based on the approaches described above. Due to some missing values for some of the risk factors and the biomarkers, the cohort weights were modified for the combination of status and gender in each of the comparisons. The calculations of the reclassifications also accounted for the same modified weights, because the reclassification of a female and a male case or control does not carry the same weight. This was done in an attempt to properly extend the results to the total population, assuming that the missing values were missing at random.
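The weight modification by status and gender can be sketched as a stratum-wise renormalization. This is a simplification of full case-cohort weighting, offered only to illustrate the idea, and the stratum counts are hypothetical:

```python
# Sketch: re-normalize weights within status-by-gender strata after
# excluding subjects with missing biomarker or risk-factor values, so each
# stratum still represents its share of the full sample (assuming the
# exclusions are missing at random).

def modified_weights(full_counts, observed_counts):
    """full_counts / observed_counts: dicts keyed by (status, gender) giving
    subject counts before and after exclusions. Returns the per-subject
    weight for each stratum with at least one remaining subject."""
    return {stratum: full_counts[stratum] / observed_counts[stratum]
            for stratum in full_counts if observed_counts.get(stratum)}

# Hypothetical counts: some subjects in each stratum lack complete data.
full = {("case", "F"): 90, ("case", "M"): 132,
        ("control", "F"): 310, ("control", "M"): 292}
seen = {("case", "F"): 81, ("case", "M"): 120,
        ("control", "F"): 280, ("control", "M"): 262}
print(modified_weights(full, seen))
```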
Tables 25 and 26 present the comparison between the disclosed model vs. the 3 other models in terms of the NRI and CNRI presented earlier, as well as a comparison against the Reynolds score [Ridker P M, Buring J E, Rifai N, et al. Development and validation of improved algorithms for the assessment of global cardiovascular risk in women: the Reynolds Risk Score. JAMA 2007; 297:611-619]. The comparisons were consistent with the predicted performance from the Marshfield set. The disclosed model provided better clinical net reclassification than any other transported model presented here. The method using the average of the scores for estimating the baseline survivor function also provided a better balance in reclassification between cases and controls, when compared to the method using the individual estimates. This was again consistent with the relative risk predictions for these models on the MESA samples.
NRI and CNRI results for the MESA data set comparing the Aviir score against FRS, 4-TRF, 9-TRF and Reynolds score models. The CNRI is based on a baseline range of risk of 3.5-10% of the reference model. Subjects with missing biomarker data have been excluded from the comparison.
NRI and CNRI results for the MESA data set comparing the Aviir score against FRS, 4-TRF, 9-TRF and Reynolds score models. The CNRI is based on a baseline range of risk of 3.5-7.5% of the reference model. Subjects with missing biomarker data have been excluded from the comparison.
NRI and CNRI results for the MESA data set comparing the Aviir score against FRS, 4-TRF and 9-TRF models for non-diabetic individuals in the MESA set. The CNRI is based on a baseline range of risk of 3.5-7.5% of the reference model. Subjects with missing biomarker data have been excluded from the comparison.
In addition to the protein biomarker/TRF models, miRNAs can be measured in a human body fluid, such as blood, and used to predict future cardiovascular events in a subject.
The prognostic power of a hybrid miRNA/protein biomarker set is determined by building a hybrid prognostic model with covariates selected from the miRNA set presented in Table 28 and the disclosed protein biomarker model (see Examples 7-9) as a single score, using a case-cohort study design. The cohort contains all of the cases that developed MI within the time frame of interest (n=200) and 200 controls. In order to efficiently utilize the smaller cohort, the TRFs and protein predictors are treated as a single calculated score (a single variable), unless the univariate association of the miRNA biomarkers is stronger than that observed for the protein biomarkers or TRFs. If the miRNA associations are stronger, multivariate models are built using penalized regression methods that select variables from all available biomarkers (TRFs, protein biomarkers, miRNAs); otherwise, the score calculation is performed using the coefficients previously estimated on the larger cohort, described above. Cross-validation and penalized regression techniques are used to select the model size and miRNA markers for three types of models: a) a miRNA-only model; b) a TRF+miRNA-based model; and c) a TRF+protein+miRNA biomarker-based model. The expected performance of the fitted models is evaluated based on the time-dependent AUC, NRI, and CNRI characteristics of the hybrid models vs. the FRS, as well as the previously disclosed TRF+protein-based model (see Examples 8-9).
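The score-as-covariate construction described above can be sketched as follows. The coefficients and miRNA names are hypothetical placeholders, and the cross-validated penalized-regression selection of the miRNA terms is not shown:

```python
# Sketch: treat the previously fitted TRF+protein model as a single frozen
# score covariate alongside candidate miRNA markers when building the hybrid
# linear predictor.

def hybrid_linear_predictor(protein_score, mirna_levels, beta_score, beta_mirna):
    """protein_score: pre-computed score from the protein/TRF model, whose
    coefficients are frozen from the larger cohort. mirna_levels and
    beta_mirna: dicts keyed by (hypothetical) miRNA name; markers without a
    fitted coefficient contribute nothing."""
    lp = beta_score * protein_score
    for name, level in mirna_levels.items():
        lp += beta_mirna.get(name, 0.0) * level
    return lp

# Hypothetical inputs: one subject's protein-model score and two miRNA levels.
lp = hybrid_linear_predictor(
    protein_score=1.8,
    mirna_levels={"miR-A": 0.4, "miR-B": 1.1},
    beta_score=1.0,
    beta_mirna={"miR-A": 0.3, "miR-B": -0.2})
print(lp)  # 1.8 + 0.12 - 0.22, i.e. approximately 1.7
```

Collapsing the TRF and protein predictors into one variable spends only a single degree of freedom on them, which is the efficiency argument for the smaller cohort made above.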
Unless otherwise indicated, all numbers expressing quantities of ingredients, properties such as molecular weight, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by the present disclosure. At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the disclosure are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in its respective testing measurements.
The terms “a,” “an,” “the” and similar referents used in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member may be referred to and claimed individually or in any combination with other members of the group or other elements found herein. It is anticipated that one or more members of a group may be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.
Certain embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Of course, variations on these described embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
Specific embodiments disclosed herein may be further limited in the claims using consisting of or consisting essentially of language. When used in the claims, whether as filed or added per amendment, the transition term “consisting of” excludes any element, step, or ingredient not specified in the claims. The transition term “consisting essentially of” limits the scope of a claim to the specified materials or steps and those that do not materially affect the basic and novel characteristic(s). Embodiments of the invention so claimed are inherently or expressly described and enabled herein.
Furthermore, numerous references have been made to patents and printed publications throughout this specification. Each of the above-cited references and printed publications are individually incorporated herein by reference in their entirety.
In closing, it is to be understood that the embodiments of the invention disclosed herein are illustrative of the principles of the present invention. Other modifications that may be employed are within the scope of the invention. Thus, by way of example, but not of limitation, alternative configurations of the present invention may be utilized in accordance with the teachings herein. Accordingly, the present invention is not limited to that precisely as shown and described.
This application is a continuation application of U.S. patent application Ser. No. 12/964,719 with a filing date of Dec. 9, 2010, which is incorporated by reference in its entirety, and which claims benefit to U.S. Provisional Application No. 61/285,121, filed on Dec. 9, 2009, which is incorporated by reference in its entirety.
Number | Date | Country
---|---|---
61285121 | Dec 2009 | US
Number | Date | Country
---|---|---
Parent 12964719 | Dec 2010 | US
Child 14788828 | | US