This document describes a practical blood-based test for determining whether an early stage non-small-cell lung cancer (NSCLC) patient is at high risk of recurrence of the cancer after surgery to remove it. The test can be performed before, at, and/or after the time of surgery. Where the test determines that the patient is at a high risk of recurrence of the cancer, it indicates that the patient should be considered for more aggressive treatment, such as adjuvant chemotherapy or radiation in addition to the surgery.
Lung cancer is the leading cause of cancer death in the United States. It is estimated that there were in excess of 200,000 new cases diagnosed and more than 150,000 lung cancer deaths in 2018. See https://seer.cancer.gov/statfacts/html/lungb.html. Approximately 80-85% of lung cancers are non-small cell lung cancer (NSCLC). See https://www.cancer.org/cancer/non-small-cell-lung-cancer/about/what-is-non-small-cell-lung-cancer.html. Currently, around 16% of lung cancers are diagnosed as localized disease. However, this proportion may increase in the future as lung cancer screening programs gain wider adoption.
Patients with Stage I disease are generally treated with surgical resection, although radiotherapy is recommended for patients who are inoperable or refuse surgery. See National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology (NCCN Guidelines), Non-Small Cell Lung Cancer, Version 3.2019 (Jan. 18, 2019). Adjuvant therapy is currently not recommended in the NCCN guidelines for Stage IA disease. It is recommended that positive margins from surgery be followed by re-resection (preferred) or by radiotherapy. Observation is indicated as follow up for Stage IA with negative margins. NCCN recommended follow up for Stage IB (and Stage IIA) disease with negative margins from surgery is observation, or chemotherapy for high-risk patients. Factors that indicate high risk include poorly differentiated tumors, vascular invasion, wedge resection, tumor size >4 cm, visceral pleural involvement and unknown lymph node status. Positive margins in surgery for Stage IB and Stage IIA disease call for re-resection (preferred) or radiotherapy, with or without adjuvant chemotherapy. It is recommended that if radiotherapy is given for Stage IIA disease with positive margins, it should be accompanied by adjuvant chemotherapy.
Prognosis for Stage I patients varies from a 5-year survival rate of 92% for Stage IA1 and 83% for Stage IA2 to 77% for Stage IA3. See https://www.cancer.org/cancer/non-small-cell-lung-cancer/detection-diagnosis-staging/survival-rates.html. Five-year survival for patients with Stage IB disease is about 68%. Id.
Hence, although many patients may be cured by surgical intervention, a significant proportion of patients recur. If it were possible to identify the patients with early stage NSCLC at highest risk of recurrence, it could potentially be advantageous for their survival to treat them more aggressively. It is of note, however, that the Lung Adjuvant Cisplatin Evaluation meta-analysis contraindicated adjuvant chemotherapy in the general stage IA population by indicating potentially worse outcomes with adjuvant chemotherapy than without. J-P. Pignon et al., "Lung Adjuvant Cisplatin Evaluation: A Pooled Analysis by the LACE Collaborative Group," J Clin Oncol, pp. 3552-3559, 2008. Therefore, accurate identification of patients at highest risk of recurrence is essential before advocating more aggressive therapies.
Currently, there is no validated test able to reliably identify patients at highest risk of lung cancer recurrence either from tissue collected at surgery or from blood-based samples. Here, we describe a test, based on mass spectrometry of serum collected from patients at or prior to surgery, able to stratify patients by risk of recurrence.
In one aspect, a method for performing a risk assessment of recurrence of cancer in an early stage non-small-cell lung cancer patient is described. The method includes a step of performing mass spectrometry on a blood-based sample obtained from the patient and obtaining mass spectrometry data. The method further includes the step of, in a computing machine, performing a hierarchical classification procedure on the mass spectrometry data. In particular, the computing machine implements a hierarchical classifier schema including a first classifier (Classifier A in the following description) producing a class label in the form of high risk or low risk or the equivalent. The class label of “high risk” indicates that the patient providing the sample is at high risk of recurrence of the cancer after surgery, whereas the class label “low risk” indicates that the patient providing the sample is at a relatively low risk of recurrence. In one possible embodiment, if the Classifier A produces the high risk label the sample is classified by a second classifier (Classifier B in the following description) generating a classification label of highest risk or high/intermediate risk or the equivalent. If Classifier B produces the label of highest risk or the equivalent the patient is likewise predicted to have a high risk of recurrence of the cancer following surgery.
In one configuration, the computing machine implements a hierarchical classifier schema including a third classifier (Classifier C in the discussion below) wherein if the Classifier A produces a “low risk” classification label the sample is classified by the third Classifier C and wherein classifier C produces a class label of lowest risk or low/intermediate risk, or the equivalent.
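By way of illustration only, the following is a minimal sketch, in Python, of the hierarchical classification logic just described. The classifier objects and their classify() method are hypothetical placeholders; the actual classifiers are the Diagnostic Cortex classifiers described in Section 3, and the label strings follow the terminology used in this document.

```python
# Minimal sketch (not the production implementation) of the hierarchical schema:
# Classifier A is applied first; Classifier B refines the high risk group and
# Classifier C refines the low risk group. The classifier objects are assumed
# to expose a classify(feature_values) method returning the labels used here.
def hierarchical_risk_label(feature_values, classifier_a, classifier_b, classifier_c):
    first_split = classifier_a.classify(feature_values)    # "high risk" or "low risk"
    if first_split == "high risk":
        return classifier_b.classify(feature_values)        # "highest" or "high/int"
    return classifier_c.classify(feature_values)            # "lowest" or "low/int"
```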
In one configuration, the computing machine stores a reference set of mass spectrometry data obtained from blood-based samples obtained from a multitude of early stage non-small-cell lung cancer patients used in classifier development. The mass spectrometry data includes feature values for features listed in Appendix A.
In another aspect, a programmed computer is described configured for making a prediction of the risk of recurrence of cancer in an early stage non-small-cell lung cancer patient. The programmed computer includes a processing unit and a memory storing code and classifier parameters such that the computer is configured as a hierarchical classifier in accordance with the classification schema described in this document.
In another aspect, a method for detecting a class label in an early stage non-small-cell lung cancer patient is disclosed. The method includes steps of (a) conducting mass spectrometry on a blood-based sample obtained from the patient and obtaining integrated intensity values in the mass spectral data of a multitude of pre-determined mass-spectral features, and (b) operating on the mass spectral data with a programmed computer implementing a classifier, wherein the programmed computer performs a hierarchical classification procedure on the mass spectrometry data, including a first classifier (Classifier A) producing a class label in the form of high risk or low risk or the equivalent, and if the Classifier A produces the high risk label the sample is classified by a second classifier (Classifier B) generating a classification label of highest risk or high/intermediate risk or the equivalent. In the operating step the classifier compares the integrated intensity values obtained in step (a) with feature values of a reference set of class-labeled mass spectral data obtained from blood-based samples obtained from a multitude of other early stage non-small-cell lung cancer patients with a classification algorithm and detects a class label for the sample in accordance with the hierarchical classification schema.
In another aspect, a method is described for performing a risk assessment of recurrence of cancer in an early stage non-small-cell lung cancer patient having surgery to treat the cancer. The method includes steps of: (1) obtaining a pre-surgery blood-based sample from the patient, performing mass spectrometry on the sample and obtaining the integrated intensity values of the features listed in Appendix A, and then classifying the mass spectrum of the sample with a computer-based classifier developed from a set of blood-based samples obtained from other early stage NSCLC patients, the classifier producing a label of high or highest risk of recurrence or the equivalent and low or lowest risk of recurrence or the equivalent; (2) if the sample is not classified as high or highest risk of recurrence in accordance with the classification produced in step (1), obtaining a further blood-based sample from the patient after the surgery and conducting mass spectrometry on the blood-based sample including obtaining integrated intensity values of the features listed in Appendix A; and (3) classifying the mass spectrum of the sample obtained in (2) in accordance with a computer-based classifier developed from a set of blood-based samples obtained from other early stage NSCLC patients after surgery, wherein the classifier of this paragraph (3) generates a class label of either G1 or the equivalent or G2 or the equivalent, with G2 class label associated with a prediction that the patient will have a lower risk of recurrence as compared to risk of recurrence associated with the class label G1.
Overview
This document will describe the development of a blood-based test and related machine-implemented classifier which makes a prediction of whether a blood sample for an early stage NSCLC patient indicates that the patient is at high risk of recurrence of the cancer. The classifier is developed from mass spectral data obtained from serum samples from a multitude of early stage NSCLC patients. Once the classifier is developed, as explained in this document, it is used to generate a class label for mass spectral data of a blood sample for an early stage NSCLC patient indicating, i.e., predicting, whether the patient providing that blood sample is at high risk of recurrence of the cancer after surgery. The blood sample can be obtained prior to, at the time of, or after surgery to remove the cancer.
Section 1 provides a description of a set of serum samples obtained from early stage (IA or IB) NSCLC patients which were used to develop the test of this disclosure.
Section 2 explains our methods of obtaining mass spectral data from the serum samples. The methods of Section 2 make use of mass spectral data acquisition and processing steps which are described extensively in the prior patent applications and issued patents of the Assignee Biodesix, Inc. Reference is made to such patents and applications for further details.
Section 3 describes a deep learning classifier development method we used to generate a classifier from the mass spectral data in a classifier development set, which is known as the “Diagnostic Cortex” method of the Assignee and described in previous patent literature. The methodology was performed on the mass spectral data obtained as explained in Section 2 and makes use of mass spectral feature definitions (m/z ranges) in the data which are described in Appendix A.
Section 4 describes a hierarchical combination of classifiers that are used to classify a blood-based sample as either high risk of cancer recurrence, intermediate risk, or low risk. A first classifier ("Classifier A" in the following discussion) was developed which is a binary classifier which splits the development sample set as either High Risk or Low Risk. A practical test could be implemented using just Classifier A. A second classifier ("Classifier B") stratifies the high risk group defined by the first classifier into two groups with highest ("highest") and intermediate ("high/int") risk of recurrence. In a practical testing environment, in one possible implementation, the blood sample is subject to mass spectrometry and, if Classifier A returns a High Risk classification label, it is subject to classification by Classifier B; if Classifier B returns a Highest Risk label (or the equivalent), the patient is predicted to have a high risk of recurrence and is guided towards more aggressive treatment. If the sample is classified as Low Risk by Classifier A, or as "high/int" risk by Classifier B, the patient is not guided towards the more aggressive treatment. However, intermediate or low risk classification labels may still be used to guide treatment or plan surgery on the cancer.
An optional third classifier is described (“Classifier C”) which stratifies the low risk group defined by the first classifier into two groups with lowest (“lowest”) and intermediate (“low/int”) risk of recurrence.
In one possible embodiment a practical test employs the hierarchical combination of all three classifiers using program logic implementing the hierarchical schema described in Section 4.
In Section 4 we also show that the stratification produced by classifiers A, B and C remained significant in multivariate analysis including histology, tumor size, gender and age. This indicates that the stratification provides information that is additional and complementary to these clinicopathological factors.
Section 5 describes our work associating the test classifications with biological processes using a method known as protein set enrichment analysis (PSEA). Using multivariate techniques we defined specific states of host-biology-related phenotypes associated with risk of recurrence from pre-surgery measurements of the circulating proteome. Biology underlying these disease states was investigated. Patients in the highest risk classification group had significantly elevated levels of acute phase response, acute inflammatory response, wound healing and complement. Data indicate that systemic host effects related to the circulating proteome measurable from pre-surgery samples may play an important role in assessing risk of recurrence in early stage NSCLC independent of type of recurrence, including new primaries. The associated biological processes have previously been shown to be related to immune checkpoint resistance in metastatic melanoma and lung cancer, and may be related to a particular state of the host's immune system.
Section 6 describes a practical laboratory testing environment in which the methods of this disclosure can be practiced.
Section 7 describes a redevelopment of the test described in Sections 1-6 but using additional samples from a validation set that we had available. Our work described in this section envisions a ternary or three-way classification schema (described in detail in that section).
Section 8 describes a classifier developed from samples obtained from NSCLC patients post-surgery. This classifier stratifies patients into a group with higher risk of recurrence or lower risk. The classifier of Section 8 could be used in conjunction with the classifier (or combination of classifiers) described in Sections 4 or 7.
Section 9, Further Considerations, describes additional details on how practical tests in accordance with this disclosure can be implemented in practice.
Section 1: Classifier Development Sample Set
Serum samples taken either at or before surgery were available from 124 patients with Stage IA or IB NSCLC. No patients received adjuvant therapy following surgery. Median follow up of these patients was 5.1 years (median (range) for patients alive: 4.9 (0.5-10.1) years). Patient characteristics are summarized in Table 1.
Eleven of the 27 patients who recurred died while under follow up: 10 from lung cancer and the remaining 1 from unspecified causes.
Of the 27 recurrences, 6 (22%) were distant, 11 (41%) were locoregional, and 10 (37%) were new primaries. Four recurrences were observed within 1 year (2 new primary, 2 locoregional); a further 13 were observed between 1 and 2 years after surgery (3 distant, 6 locoregional, and 4 new primaries).
Section 2: Mass Spectral Data Acquisition and Processing
The serum samples explained in Section 1 were subject to mass spectrometry as explained in this section. Once the classifiers were developed and fully defined, the feature values of features listed in Appendix A were then saved as a reference set in computer memory for use in conducting a classification procedure on a new (previously unseen) sample, for example at the time of use to make a prediction as to a given early stage NSCLC patient.
Sample Preparation
Samples were thawed and 3 μl aliquots of each test sample and quality control serum (a pooled sample obtained from serum of thirteen healthy patients, purchased from Conversant Bio, "SerumP4") were spotted onto VeriStrat serum cards (Therapak). The cards were allowed to dry for 1 hour at ambient temperature after which the whole serum spot was punched out with a 6 mm skin biopsy punch (Acuderm). Each punch was placed in a centrifugal filter with 0.45 μm nylon membrane (VWR). One hundred μl of HPLC grade water (JT Baker) was added to the centrifugal filter containing the punch. The punches were vortexed gently for 10 minutes then spun down at 14,000 rcf for two minutes. The flow-through was removed and transferred back onto the punch for a second round of extraction. For the second round of extraction, the punches were vortexed gently for three minutes then spun down at 14,000 rcf for two minutes. Twenty microliters of the filtrate from each sample was then transferred to a 0.5 ml Eppendorf tube for MALDI analysis.
All subsequent sample preparation steps were carried out in a custom designed humidity and temperature control chamber (Coy Laboratory). The temperature was set to 30° C. and the relative humidity to 10%.
An equal volume of freshly prepared matrix (25 mg of sinapinic acid per 1 ml of 50% acetonitrile:50% water plus 0.1% TFA) was added to each 20 μl serum extract and the mix vortexed for 30 sec. The first three aliquots (3×2 μl) of sample:matrix mix were discarded into the tube cap. Eight aliquots of 2 μl sample:matrix mix were then spotted onto a stainless steel MALDI target plate (SimulTOF). The MALDI target was allowed to dry in the chamber before placement in the MALDI mass spectrometer.
QC samples (SerumP4) were added to the beginning (two preparations) and end (two preparations) of each batch run.
Spectral Acquisition
MALDI spectra were obtained using a MALDI-TOF mass spectrometer (SimulTOF 100, s/n: LinearBipolar 11.1024.01, or SimulTOF One, s/n: ClinicalAnalyzer 15.1032.01; from SimulTOF Systems, Marlborough, Mass., USA). The instruments were operated in positive ion mode, with ions generated using a 349 nm, diode-pumped, frequency-tripled Nd:YLF laser firing at a laser repetition rate of 0.5 kHz (SimulTOF 100) or 1 kHz (SimulTOF One). External calibration was performed using the following peaks in the QC serum spectra: m/z = 3320, 4158.7338, 6636.7971, 9429.302, 13890.4398, 15877.5801 and 28093.951.
Spectra from each MALDI spot were collected as 800 shot spectra that were 'hardware averaged' as the laser fires continuously across the spot while the stage is moving at a speed of 0.25 mm/sec (SimulTOF 100) or 0.5 mm/sec (SimulTOF One). A minimum intensity threshold of 0.01 V or 0.003 V for the SimulTOF 100 and SimulTOF One, respectively, was used to discard any 'flat line' spectra. All 800 shot spectra with intensity above this threshold were acquired without any further processing.
The spectral acquisition made use of the techniques described in the Biodesix U.S. Pat. No. 9,279,798, a technique which is referred to as “Deep MALDI” in this document.
Spectral Processing
Each raster spectrum of 800 shots was processed through an alignment workflow to align prominent peaks to a set of 43 alignment points (see table 2). A filter was applied that essentially smooths noise and the spectra were background subtracted for peak identification. Once peaks had been identified, the filtered spectra (without background subtraction) were aligned. Additional filtering parameters required that raster spectra have at least 20 peaks and used at least 5 alignment points to be included in the pool of rasters used to assemble the average spectrum.
Averages were created from the pool of aligned and filtered raster spectra. A random selection of 500 raster spectra was averaged to create a final analysis spectrum of 400,000 shots for each sample.
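The following is a minimal sketch, assuming NumPy, of the raster filtering and averaging step described above; the array layout, the per-raster peak and alignment-point counts, and the function name are assumptions made for illustration.

```python
import numpy as np

def average_analysis_spectrum(raster_spectra, n_peaks, n_alignment_points,
                              n_select=500, seed=0):
    """Average a random selection of aligned raster spectra that pass filtering.

    raster_spectra: array (n_rasters, n_mz_bins) of aligned 800-shot raster spectra
    n_peaks / n_alignment_points: per-raster counts used for the filtering criteria
    """
    raster_spectra = np.asarray(raster_spectra, dtype=float)
    # Keep rasters with at least 20 peaks and at least 5 alignment points used.
    keep = (np.asarray(n_peaks) >= 20) & (np.asarray(n_alignment_points) >= 5)
    pool = raster_spectra[keep]
    # Randomly select 500 rasters; 500 x 800 shots gives the 400,000-shot average.
    rng = np.random.default_rng(seed)
    chosen = rng.choice(pool.shape[0], size=n_select, replace=False)
    return pool[chosen].mean(axis=0)
```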
Although the m/z range is collected from 3-75 kDa, the range for spectral processing, including feature generation, is limited to 3-30 kDa, as features above 30 kDa have poor resolution and were not found to be reproducible at a feature value level.
We performed background estimation and subtraction, and normalization of the spectra, including a partial ion current normalization, the details of which are not particularly important. We also performed an average spectra alignment to address minor differences in peak positions in the spectra by defining a set of calibration points (m/z positions) used to align spectral averages. We defined a set of 282 features (see Appendix A) that had been discovered and well established from our previous Deep MALDI spectral analysis work relating to blood-based samples in cancer patients.
We further performed a batch correction step making use of quality control reference sample spectra similar to the methodology described in our prior U.S. Pat. No. 9,279,798, the details of which are not particularly important. Following batch correction, a final partial ion current by feature normalization step was applied to the feature tables to account for changes related to m/z dependent corrections, similar to the method described in U.S. Pat. No. 10,007,766, the details of which are not particularly important. The normalization scalars used for partial ion current normalization were not found to be associated with the time to recurrence groups.
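As an illustration of the partial ion current normalization idea referenced above, here is a minimal sketch assuming a pandas feature table; which features define the normalization window is part of the actual test design and is not specified here, so the pic_features argument is an assumption.

```python
import pandas as pd

def partial_ion_current_normalize(feature_table: pd.DataFrame, pic_features) -> pd.DataFrame:
    """Normalize each sample (row) by the summed intensity of a chosen feature subset.

    feature_table: rows = samples, columns = mass spectral features (Appendix A)
    pic_features: columns whose summed intensity defines the normalization scalar
    """
    scalars = feature_table[pic_features].sum(axis=1)
    # Divide every feature value of a sample by that sample's normalization scalar.
    return feature_table.div(scalars, axis=0)
```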
In a final step, a trim or pruning of the feature list of Appendix A was done. In particular, eight features of Appendix A were included in the preprocessing that are ill-suited for inclusion in new classifier development in this situation as they are related to hemolysis. It has been observed that these large peaks are useful for stable batch corrections because once in the serum, they appear stable over time and resistant to modifications. However, these peaks are related to the amount of red blood cell shearing during the blood collection procedure and should not be used for test development beyond feature table corrections in preprocessing. The features listed in Appendix A marked with an asterisk (*) were removed from the final feature table, yielding a total of 274 features used for classifier development.
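A small sketch of this feature pruning step, assuming the feature table is a pandas DataFrame and that the eight hemolysis-related (asterisked) feature names are supplied as a list; the function name is illustrative.

```python
# Drop the hemolysis-related features (marked with "*" in Appendix A) before
# classifier development; 282 features minus these 8 leaves the 274 features used.
def prune_feature_table(feature_table, hemolysis_features):
    return feature_table.drop(columns=hemolysis_features)
```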
Section 3: Classifier Development Method (Diagnostic Cortex)
The new classifier development process was carried out using the "Diagnostic Cortex"® procedure shown in the accompanying figure and described in this section.
This document describes three different classifiers, Classifier A, Classifier B, and Classifier C, which are used in a hierarchical manner to generate a class label indicating the risk of recurrence for a patient blood sample. See the hierarchical combination described in Section 4.
Since the generation of Classifiers A, B and C each used the same methodology, that methodology is described once in this section; the details specific to each classifier are given in Section 4.
In contrast to standard applications of machine learning, which focus on developing classifiers when large training data sets are available (the big data challenge), the problem setting in bio-life-sciences is different. Here we have the problem that the number (n) of available samples, arising typically from clinical studies, is often limited, and the number of attributes (measurements) (p) per sample usually exceeds the number of samples. Rather than obtaining information from many instances, in these "deep data" problems one attempts to gain information from a deep description of individual instances. The present methods were designed with this deep data setting in mind.
The method includes a first step of obtaining measurement data for classification from a multitude of samples, i.e., measurement data reflecting some physical property or characteristic of the samples. The data for each of the samples consists of a multitude of feature values and a class label. In this example, the data takes the form of mass spectrometry data, in the form of feature values (integrated peak intensity values at a multitude of m/z ranges or peaks, see Appendix A). This is indicated by "development set" 100 in the accompanying figure.
At step 102, a label associated with some attribute of the sample is assigned (for example, patient high risk or low risk of recurrence, "Group1", "Group2", etc.; the precise moniker of the label is not important). In this example, the class labels were assigned by a human operator to each of the samples after investigation of the clinical data associated with the sample. In this example, the sample set is split into two groups, "Group1" (104) being the label assigned to patients at a relatively high risk of recurrence and "Group2" (106) being the label assigned to patients with a relatively lower risk of recurrence, based on the clinical data associated with the samples. This results in a class-labeled development set shown at 108.
Then, at step 110, the class-labeled development sample set 108 is split into a training set 112 and a test set 114. The training set is used in the following steps 116, 118 and 120.
In the training step, the process continues with a step 116 of constructing a multitude of individual mini-Classifiers using sets of feature values from the samples up to a pre-selected feature set size s (s = integer 1 . . . p). For example, a multitude of individual mini- (or "atomic") Classifiers could be constructed using a single feature (s=1), or pairs of features (s=2), or three of the features (s=3), or even higher order combinations containing more than 3 features. The selection of a value of s will normally be small enough to allow the code implementing the method to run in a reasonable amount of time, but could be larger in some circumstances or where longer code run-times are acceptable. The selection of a value of s also may be dictated by the number of measured variables (p) in the data set, and where p is in the hundreds, thousands or even tens of thousands, s will typically be 1, 2 or possibly 3, depending on the computing resources available. In the present work, s took a value of 1, 2 or 3 as explained later. The mini-Classifiers of step 116 execute a supervised learning classification algorithm, such as k-nearest neighbors (kNN), in which the values for a feature, pair or triplet of features of a sample instance are compared to the values of the same feature or features in a training set, the nearest neighbors (e.g., k=9) in an s-dimensional feature space are identified, and by majority vote a class label is assigned to the sample instance for each mini-Classifier. In practice, there may be thousands of such mini-Classifiers depending on the number of features which are used for classification.
The method continues with a filtering step 118, namely testing the performance, for example the accuracy, of each of the individual mini-Classifiers to correctly classify the sample, or measuring the individual mini-Classifier performance by some other metric (e.g. the Hazard Ratios (HRs) obtained between groups defined by the classifications of the individual mini-Classifier for the training set samples), and retaining only those mini-Classifiers whose classification accuracy, predictive power, or other performance metric exceeds a pre-defined threshold to arrive at a filtered (pruned) set of mini-Classifiers. The class label resulting from the classification operation may be compared with the class label for the sample known in advance if the chosen performance metric for mini-Classifier filtering is classification accuracy. However, other performance metrics may be used and evaluated using the class labels resulting from the classification operation. Only those mini-Classifiers that perform reasonably well under the chosen performance metric for classification are maintained in the filtering step 118. Alternative supervised classification algorithms could be used, such as linear discriminants, decision trees, probabilistic classification methods, margin-based Classifiers like support vector machines, and any other classification method that trains a Classifier from a set of labeled training data.
To overcome the problem of being biased by some univariate feature selection method depending on subset bias, we take a large proportion of all possible features as candidates for mini-Classifiers. We then construct all possible kNN classifiers using feature sets up to a pre-selected size (parameter s). This gives us many "mini-Classifiers": e.g. if we start with 100 features for each sample (p=100), we would get 4950 "mini-Classifiers" from all different possible combinations of pairs of these features (s=2), 161,700 mini-Classifiers using all possible combinations of three features (s=3), and so forth. Other methods of exploring the space of possible mini-Classifiers and features defining them are of course possible and could be used in place of this hierarchical approach. Of course, many of these "mini-Classifiers" will have poor performance, and hence in the filtering step 118 we only use those "mini-Classifiers" that pass predefined criteria. These filtering criteria are chosen dependent on the particular problem: if one has a two-class classification problem, one would select only those mini-Classifiers whose classification accuracy exceeds a pre-defined threshold, i.e., are predictive to some reasonable degree. Even with this filtering of "mini-Classifiers" we end up with many thousands of "mini-Classifier" candidates with performance spanning the whole range from borderline to decent to excellent performance.
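To make the construction and filtering of mini-Classifiers concrete, the following is a minimal sketch assuming scikit-learn and NumPy arrays. The accuracy window used for filtering and the helper name are assumptions; as noted above, other performance metrics (e.g. hazard ratios) can be used as the filtering criterion.

```python
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier

def build_and_filter_mini_classifiers(X_train, y_train, s=2, k=9,
                                      acc_min=0.6, acc_max=1.0):
    """Construct kNN mini-classifiers on all feature subsets of size 1..s and keep
    those whose training-set accuracy lies within an assumed filtering window.

    X_train: (n_samples, n_features) array of feature values
    y_train: class labels ("Group1"/"Group2")
    Returns a list of (feature_subset, fitted_classifier) pairs.
    """
    kept = []
    n_features = X_train.shape[1]
    for size in range(1, s + 1):
        for subset in combinations(range(n_features), size):
            cols = list(subset)
            mc = KNeighborsClassifier(n_neighbors=k).fit(X_train[:, cols], y_train)
            accuracy = mc.score(X_train[:, cols], y_train)
            if acc_min <= accuracy <= acc_max:
                kept.append((cols, mc))
    return kept
```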
The method continues with step 120 of generating a Master Classifier (MC) by combining the filtered mini-Classifiers using a regularized combination method. In one embodiment, this regularized combination method takes the form of repeatedly conducting a logistic training of the filtered set of mini-Classifiers to the class labels for the samples. This is done by randomly selecting a small fraction of the filtered mini-Classifiers as a result of carrying out an extreme dropout from the filtered set of mini-Classifiers (a technique referred to as drop-out regularization herein), and conducting logistic training on such selected mini-Classifiers. While similar in spirit to standard classifier combination methods (see e.g. S. Tulyakov et al., Review of Classifier Combination Methods, Studies in Computational Intelligence, Volume 90, 2008, pp. 361-386), we have the particular problem that some “mini-Classifiers” could be artificially perfect just by random chance, and hence would dominate the combinations. To avoid this overfitting to particular dominating “mini-Classifiers”, we generate many logistic training steps by randomly selecting only a small fraction of the “mini-Classifiers” for each of these logistic training steps. This is a regularization of the problem in the spirit of dropout as used in deep learning theory. In this case, where we have many mini-Classifiers and a small training set we use extreme dropout, where in excess of 99% of filtered mini-Classifiers are dropped out in each iteration.
In more detail, the result of each mini-Classifier is one of two values, either "Group1" or "Group2" in this example. We can then combine the results of the mini-Classifiers by defining the probability of obtaining a "Group2" label via standard logistic regression (see e.g. http://en.wikipedia.org/wiki/Logistic_regression):

P(\text{Group2} \mid \text{feature values}) = \frac{\exp\left(\sum_{mc} w_{mc}\, I\left(mc(\text{feature values})\right)\right)}{1 + \exp\left(\sum_{mc} w_{mc}\, I\left(mc(\text{feature values})\right)\right)} \qquad \text{(Eq. 1)}

where I(mc(feature values)) = 1 if the mini-Classifier mc applied to the feature values of a sample returns "Group2", and 0 if the mini-Classifier returns "Group1". The weights w_mc for the mini-Classifiers are unknown and need to be determined from a regression fit of the above formula for all samples in the training set using +1 for the left hand side of the formula for the Group2-labeled samples in the training set, and 0 for the Group1-labeled samples, respectively. As we have many more mini-Classifiers, and therefore weights, than samples, typically thousands of mini-Classifiers and only tens of samples, such a fit will always lead to nearly perfect classification, and can easily be dominated by a mini-Classifier that, possibly by random chance, fits the particular problem very well. We do not want our final test to be dominated by a single special mini-Classifier which only performs well on this particular set and is unable to generalize well. Hence we designed a method to regularize such behavior: instead of one overall regression to fit all the weights for all mini-Classifiers to the training data at the same time, we use only a few of the mini-Classifiers for a regression, but repeat this process many times in generating the master classifier. For example, we randomly pick three of the mini-Classifiers, perform a regression for their three weights, pick another set of three mini-Classifiers, and determine their weights, and repeat this process many times, generating many random picks, i.e. realizations of three mini-Classifiers. The final weights defining the master classifier are then the averages of the weights over all such realizations. The number of realizations should be large enough that each mini-Classifier is very likely to be picked at least once during the entire process. This approach is similar in spirit to "drop-out" regularization, a method used in the deep learning community to add noise to neural network training to avoid being trapped in local minima of the objective function.
In a variation of the above method, which was used in the present classifier generation exercises, we saved all of the weights w_mc for each dropout iteration and averaged the P from Eq. 1 calculated for a sample over all the dropout iterations (instead of averaging the weights for the mini-Classifiers over the dropout iterations, storing only those, and then working out the result for a new sample from the averaged weights). We have some description of this difference in U.S. Provisional patent application Ser. No. 62/649,762, filed Mar. 29, 2018, where some of the classifiers use the original weight averaging method and others use the new probability averaging method. The interested reader is directed to that description, which is incorporated by reference herein. The probability averaging technique has some technical advantages when the regression does not converge ("separable" cases for a dropout iteration) or converges slowly, as the probabilities can converge (or can converge faster) even though the weights do not (or converge slowly).
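The following is a minimal sketch, assuming scikit-learn, of the drop-out regularized logistic combination of Eq. 1, using the probability-averaging variant described in the preceding paragraph. The number of mini-Classifiers retained per dropout iteration, the number of iterations, and the use of a large C value to approximate an unpenalized logistic fit are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def master_classifier_probability(filtered, X_train, y_train, X_new,
                                  n_dropout=1000, n_keep=10, seed=0):
    """Average the Eq. 1 probability of a "Group2" label over dropout iterations.

    filtered: list of (feature_subset, mini_classifier) pairs passing filtering
    X_train, y_train: training set feature values and class labels
    X_new: feature values of new sample(s), shape (n_new, n_features)
    """
    rng = np.random.default_rng(seed)

    def indicators(X):
        # I(mc(feature values)): 1 if the mini-classifier returns "Group2", else 0.
        return np.column_stack([(mc.predict(X[:, cols]) == "Group2").astype(int)
                                for cols, mc in filtered])

    I_train, I_new = indicators(X_train), indicators(X_new)
    y = (np.asarray(y_train) == "Group2").astype(int)
    probs = []
    for _ in range(n_dropout):
        # Extreme dropout: keep only a small random subset of the mini-classifiers.
        pick = rng.choice(I_train.shape[1], size=n_keep, replace=False)
        # Large C approximates a plain (unpenalized) logistic regression fit of Eq. 1.
        lr = LogisticRegression(C=1e6, max_iter=1000).fit(I_train[:, pick], y)
        probs.append(lr.predict_proba(I_new[:, pick])[:, 1])
    # Probability-averaging variant: average P over all dropout iterations.
    return np.mean(probs, axis=0)
```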
Other methods for performing the regularized combination method in step 120 that could be used include:
General regularized neural networks (Girosi F. et al, Neural Computation, (7), 219 (1995)).
The above-cited publications are incorporated by reference herein. Our approach of using drop-out regularization has shown promise in avoiding over-fitting and increasing the likelihood of generating generalizable tests, i.e., tests that can be validated in independent sample sets.
"Regularization" is a term known in the art of machine learning and statistics which generally refers to the addition of supplementary information or constraints to an underdetermined system to allow selection of one of the multiplicity of possible solutions of the underdetermined system as the unique solution of an extended system. Depending on the nature of the additional information or constraint applied to "regularize" the problem (i.e. specify which one or subset of the many possible solutions of the unregularized problem should be taken), such methods can be used to select solutions with particular desired properties (e.g. those using the fewest input parameters or features) or, in the present context of classifier training from a development sample set, to help avoid overfitting and the associated lack of generalization (i.e., selection of a particular solution to a problem that performs very well on training data but performs very poorly or not at all on other datasets). See e.g., https://en.wikipedia.org/wiki/Regularization_(mathematics). One example is repeatedly conducting extreme dropout of the filtered mini-Classifiers with logistic regression training to classification group labels. However, as noted above, other regularization methods are considered equivalent. Indeed it has been shown analytically that dropout regularization of logistic regression training can be cast, at least approximately, as L2 (Tikhonov) regularization with a complex, sample set dependent regularization strength parameter λ (S. Wager, S. Wang, and P. Liang, Dropout Training as Adaptive Regularization, Advances in Neural Information Processing Systems 25, pages 351-359, 2013, and D. Helmbold and P. Long, On the Inductive Bias of Dropout, JMLR, 16:3403-3454, 2015). In the term "regularized combination method" the "combination" simply refers to the fact that the regularization is performed over combinations of the mini-Classifiers which pass filtering. Hence, the term "regularized combination method" is used to mean a regularization technique applied to combinations of the filtered set of mini-Classifiers so as to avoid overfitting and domination by a particular mini-Classifier.
Still referring to the classifier development procedure, the steps of splitting the development set into training and test sets and of constructing, filtering and combining mini-Classifiers (steps 110, 116, 118 and 120) are repeated for many different realizations of the separation of the development sample set into training and test sets, resulting in a plurality of master classifiers, one per realization.
The performance of the master classifier is evaluated for all the realizations of the separation of the development set of samples into training and test sets in step 126. If there are some samples which persistently misclassify when in the test set, as indicated by the block 128 the process optionally loops back as indicated at loop 127 and steps 102, 110, 116, 118, and 120 are repeated with flipped class labels for such misclassified samples.
The method continues with step 130 of defining a final classifier from one or a combination of more than one of the plurality of master classifiers. In the present example, the final classifier is defined as a majority vote or ensemble average of all the master classifiers resulting from each separation of the sample set into training and test sets, or alternatively by an average probability cutoff, selecting one Master Classifier that has typical performance, or some other procedure. At step 132, the classifier (or test) developed from this procedure is stored and made available for use in classifying new, previously unseen samples.
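A minimal sketch of step 130, defining the final classifier from the master classifiers obtained over all training/test split realizations; the 0.5 cutoff on the ensemble-average probability is an assumed choice standing in for the majority vote or average probability cutoff mentioned above.

```python
import numpy as np

def final_classification(master_probabilities, cutoff=0.5):
    """Combine master classifiers from all training/test split realizations.

    master_probabilities: array (n_realizations, n_samples) of the Eq. 1
        probability of "Group2" produced by each master classifier.
    A sample is labeled "Group2" when the ensemble-average probability exceeds
    the cutoff, which is similar in spirit to a majority vote of the masters.
    """
    avg = np.asarray(master_probabilities).mean(axis=0)
    return np.where(avg > cutoff, "Group2", "Group1")
```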
Section 4: Hierarchical Combination of Classifiers
As explained previously, the methodology of Section 3 was used to generate three different classifiers, Classifier A, Classifier B and Classifier C, which are combined in a hierarchical manner as described in this section.
A. Classifier A—First Split of the Sample Set.
A first split of the sample set was achieved using a classifier developed in accordance with the procedure of Section 3.
A “label-flip” approach was used (loop 127), in which the training class labels (at step 102), and master classifiers (resulting from step 120) were simultaneously iteratively refined.
B. Classifier B: Second split of the high risk outcome group from the first split (Classifier A)
The first split of the sample set from Classifier A resulted in a high risk or "poor" outcome group of 56 patients, with 20 recurrers. To further stratify by outcome, the samples in this high risk or "poor" outcome group were split with a second classifier, "Classifier B", developed in accordance with the procedure of Section 3.
C. Classifier C: Second split of the low risk outcome group from the first split (Classifier A)
The first split of the sample set performed by Classifier A resulted in a "good" or low risk outcome group of 68 patients, with 7 recurrers. To further stratify by outcome, this low risk outcome group was split using a third classifier (Classifier C) developed in accordance with the procedure of Section 3.
Results
1. First Split of the Sample Set, Classifier A (Binary Classification)
This classifier ("Classifier A") stratifies the development set into two groups with higher and lower risk of recurrence (or worse and better outcomes). Fifty-six patients (45%) were classified to the high risk group and the remaining 68 (55%) to the low risk group. Twenty patients in the high risk group recurred (35% recurrence rate in this group, which includes 74% of the recurrers). Fourteen patients in the high risk group died (25% of this group and 100% of all death events). Time-to-recurrence and overall survival are shown by test classification in the accompanying figures.
Patient characteristics by test classification are shown in table 5.
Table 6 shows the ability of the test to predict outcome when adjusted for other patient characteristics.
Reproducibility
Reproducibility was assessed by comparing the test classifications obtained during development by out-of-bag estimate with the results obtained from two reruns of the development sample set on the ST100 and the ST1 machines. The data showed concordance of between 94 and 97 percent on the reruns.
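The concordance figures quoted here and in later sections are simply the percentage of samples receiving the same classification label in two runs; a trivial sketch of that calculation, with illustrative names, is:

```python
import numpy as np

def concordance(labels_run1, labels_run2):
    """Percentage of samples assigned the same classification label in two runs."""
    a, b = np.asarray(labels_run1), np.asarray(labels_run2)
    return 100.0 * float(np.mean(a == b))
```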
2. Second Split of the Sample Set, Classifier B (Split of High Risk Group from First Stratification)
This classifier ("Classifier B") stratifies the high risk group defined by the first Classifier (A) into two groups with highest ("highest") and intermediate ("high/int") risk of recurrence. Twenty-one patients (37.5% of the high risk group) were classified to the highest risk group and the remaining 35 (62.5%) to the high/int risk group. Ten patients in the highest risk group recurred (48% recurrence rate); ten patients in the high/int group recurred (29% recurrence rate). Eight patients in the highest risk group had an OS event (38% of this group); six patients in the high/int group had an OS event (17%). Time-to-recurrence and overall survival are shown by second split test classification for patients classified as high risk by the first split in the accompanying figures.
Patient characteristics by test classification are shown in table 11.
Table 12 shows ability of the test to predict outcome when adjusted for other patient characteristics.
Reproducibility
3. Second Split of the Sample Set, Classifier C (Split of Low Risk Group from First Stratification)
This classifier (“Classifier C”) stratifies the low risk group defined by the first classifier (Classifier A) (N=68 with 7 recurrences) into two groups with lowest (“lowest”) and intermediate (“low/int”) risk of recurrence. This classifier was constructed using spectra acquired on the ST1 and ST100 machines. Hence, we can look at out-of-bag estimators for classification of the development set using either ST100 spectra or ST1 spectra.
For ST100 out-of-bag analysis, 40 patients (59% of the low risk group) were classified to the lowest risk group and the remaining 28 (41%) to the low/int risk group. Two patients in the lowest risk group recurred (5% recurrence rate); five patients in the low/int group recurred (18% recurrence rate). Time-to-recurrence is shown by second split test classification from ST100 spectra for patients classified as low risk by the first split in the accompanying figure.
For ST1 out-of-bag analysis, 33 patients (49% of the low risk group) were classified to the lowest risk group and the remaining 35 (51%) to the low/int risk group. Two patients in the lowest risk group recurred (6% recurrence rate); five patients in the low/int group recurred (14% recurrence rate). Time-to-recurrence is shown by second split test classification from ST1 spectra for patients classified as low risk by the first split in the accompanying figure.
Reproducibility
Reproducibility was assessed by comparing the test classifications obtained during development for the ST100 spectra by out-of-bag estimate with the results obtained from two reruns of the development sample set on the ST100 and for the rerun of the development sample set on the ST1. To compare between the results for the ST1 original run (also used in development) and the ST100 original run, out-of-bag estimates were used for both classifications. The data showed concordance of between 87 and 91 percent.
Four-Way Split of the Cohort
A procedure for combining the three classifiers in a hierarchical manner to give a four-way classification of patients is illustrated in the accompanying figure: Classifier A is applied first, Classifier B is applied to samples classified as high risk by Classifier A, and Classifier C is applied to samples classified as low risk by Classifier A, yielding the labels highest, high/int, low/int and lowest.
Time-to-recurrence and overall survival for the whole development cohort stratified by four-way test classification are shown in the accompanying figures.
Reproducibility
Reproducibility of the 4-way classification was also assessed; as noted in the Conclusions below, the preliminary assessment showed concordance of 85% or better.
In terms of a practical test, in one embodiment the classification is carried out in the hierarchical manner described above, and the resulting four-way risk label (highest, high/int, low/int or lowest) is reported for the patient's sample.
As another alternative, it is possible that a test could be performed using only Classifier A, or the combination of Classifiers A and B, in a simplified version of the hierarchical schema described above.
Section 5: Association of Test Classifications with Biological Processes Using Protein Set Enrichment Analysis (PSEA)
When building tests using the Diagnostic Cortex procedure of Section 3, the mass spectral features used for classification are not themselves identified proteins, and so the biological processes associated with the resulting test classifications are not immediately apparent.
We used a method known as Gene Set Enrichment Analysis (GSEA) applied to protein expression data, which is referred to as Protein Set Enrichment Analysis (PSEA). Background information on this method is set forth in Mootha, et al., PGC-1α-responsive genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes. Nat Genet. 2003; 34(3):267-73 and Subramanian, et al., Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci USA 2005; 102(43):15545-50, the contents of which are incorporated by reference herein. Further details are explained at length in the patent literature, see U.S. Pat. No. 10,007,766, therefore a detailed discussion is omitted for the sake of brevity.
High risk vs low risk (Classifier A)
Classifier A was applied to two sample sets with matched mass spectral and protein panel data (see the discussions in the literature cited above) and the resulting test classifications used as the phenotype for set enrichment analysis. These results were then merged to produce an overall p value of association with a set of 26 biological processes. These results are tabulated below, together with the false discovery rates (FDRs) calculated by the Benjamini-Hochberg method.
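The PSEA methodology itself is described in the cited literature and in U.S. Pat. No. 10,007,766; as a small illustration of the last step only, the following sketch computes Benjamini-Hochberg false discovery rates from the per-process association p values (the function name is illustrative).

```python
import numpy as np

def benjamini_hochberg_fdr(p_values):
    """Benjamini-Hochberg adjusted p values (FDRs), e.g. for the 26 per-process
    association p values produced by the set enrichment analysis."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p value downward and cap at 1.
    adjusted = np.minimum(1.0, np.minimum.accumulate(scaled[::-1])[::-1])
    fdr = np.empty(n)
    fdr[order] = adjusted
    return fdr
```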
Highest vs High/Int (Classifier B)
Classifiers A and B were applied to the two sample sets with matched mass spectral and protein panel data. Samples classified as highest risk and high/int risk were identified and these classifications used as the phenotype for set enrichment analysis. PSEA was carried out and results were then merged to produce an overall p value of association with a set of 26 biological processes. These results are tabulated below, together with the false discovery rates (FDRs) calculated by the Benjamini-Hochberg method.
Highest vs Lowest Risk
Classifiers A, B, and C were applied to the sample sets. Samples classified as highest risk and lowest risk were identified and these classifications used as the phenotype for set enrichment analysis. PSEA was carried out and the results were then merged to produce an overall p value of association with a set of 26 biological processes. These results are tabulated below, together with the false discovery rates (FDRs) calculated by the Benjamini-Hochberg method.
Low/int vs Lowest Risk
Classifiers A and C were applied to the sample sets. Samples classified as lowest risk and low/int risk were identified and these classifications used as the phenotype for set enrichment analysis. PSEA was carried out and the results were then merged to produce an overall p value of association with a set of 26 biological processes. These results are tabulated below, together with the false discovery rates (FDRs) calculated by the Benjamini-Hochberg method.
Section 6: Laboratory Testing Environment
We further contemplate a laboratory test center for conducting tests on blood-based samples to assess an early stage NSCLC patient's risk of recurrence of the cancer. The lab test center is configured as per Example 5 and the accompanying figure.
Conclusions
We were able to create a suite of three classifiers stratifying patients with early stage lung cancer by risk of recurrence. Seventeen percent of patients in the development set were assigned to the highest risk group, 23% to the high/intermediate risk group, 28% to the low/intermediate risk group and 32% to the lowest risk group. The percentage of patients recurrence-free at two years varied from 65% in the highest risk group to 100% in the lowest risk group; the percentage of patients alive at five years was 55% in the highest risk group and 100% in the lowest risk group. Although sample sizes were too small, given the few events, to reach statistical significance except for the first split of the cohort into low and high risk groups, multivariate analysis indicated that hazard ratios for all three classifiers were stable on adjustment for other patient characteristics. It is noteworthy that the tests were able to stratify all three kinds of recurrence: distant, locoregional and new primary.
Protein set enrichment analysis indicated that test classifications were associated with acute phase response, complement activation, acute inflammatory response and wound healing. Immune tolerance and glycolytic processes could also be potentially relevant. These observations, together with our experience showing the relevance of complement, wound healing, acute phase response and acute inflammatory response in metastatic cancer treated with immunotherapies and the fact that the classifiers are able to stratify risk of new primary lesions, could indicate that the test is accessing information on the host's immune response to cancer.
Reproducibility of the test classifications was very good and the test transferred well between mass spectrometer instruments. The preliminary assessment of reproducibility of the four-way classifications was 85% or better.
Section 7: Redevelopment of Test Using Additional Samples from Validation Set
We decided to redevelop the test described above. As a sample development set we combined the original development set of samples described in Section 1 above with some initial validation samples we had from the same source. As there are relatively few recurrers in this indication, we needed to boost the dataset to improve the reliability of the test beyond a first split of the dataset, namely for the second and third splits of the sample sets by Classifiers B and C. This section describes this redevelopment work, including a new ternary or three-way hierarchical combination of the classifiers A, B and C, described below.
Sample Set Description
Serum samples taken pre-surgery were available from 314 patients with Stage IA or IB NSCLC. No patients received adjuvant therapy following surgery. Median follow up of these patients was 4.92 years. Patient characteristics are summarized in Table 27.
Fifteen recurrences were observed within 1 year (4 new primary, 5 locoregional, 6 systemic); a further 24 were observed between 1 and 2 years after surgery (5 distant, 13 locoregional, and 6 new primaries).
Sample preparation and spectral acquisition was the same as described previously.
Spectral processing was the same as described previously.
Classifier development for classifiers A, B and C used the "Diagnostic Cortex" procedure of Section 3.
First split of the sample set (Classifier A) into High and Low risk groups.
A first split of the 314 sample set was achieved using a Diagnostic Cortex classifier (Classifier A) with the following parameters and design:
Classifier B: a split of the poor outcome group (“high risk”) resulting from the first split produced by Classifier A
The first split of the sample set produced by Classifier A resulted in a poor outcome group (i.e., those patients with a high risk of recurrence) of 137 patients, with 47 recurrers (34%).
To further stratify by outcome, this poor outcome group was further split using a Diagnostic Cortex classifier (classifier B) with the following parameters and design:
The performance of this Classifier B is described below in the Results section.
Classifier C: a split of the good outcome group from the first split produced by Classifier A.
The first split of the sample set produced by Classifier A resulted in a good outcome group (i.e., a group of patients with a low risk of recurrence) of 177 patients, with 33 recurrers (19%).
To further stratify by outcome, this good outcome group was split using a Diagnostic Cortex classifier (Classifier C) with the following parameters and design:
Redevelopment Results
1. First Split of the Sample Set (Binary Classification), Classifier A
This classifier ("Classifier A") stratifies the development set into two groups with higher and lower risk of recurrence (or, equivalently, worse/poor and better/good outcomes). 137 patients (44%) were classified to the high risk group and the remaining 177 (56%) to the low risk group. Forty-seven patients in the high risk group recurred (34% recurrence rate in this group, which includes 59% of the recurrers). Thirty-one patients in the high risk group died (23% of this group and 76% of all death events). Recurrence-free survival and overall survival are shown by test classification in the accompanying figures.
Patient characteristics by test classification are shown in table 31.
Tables 32 and 33 show the ability of the test to predict RFS and OS when adjusted for other patient characteristics.
Reproducibility was assessed by comparing the test classifications obtained during development by out-of-bag estimate with the results obtained from two reruns of 124 samples from the development sample set on the ST100. The results showed a concordance of test classifications of 94% and 89%.
2. Second Split of the Sample Set (Split Of High Risk Group from First Stratification), Classifier B
This classifier ("Classifier B") stratifies the high risk group defined by the first classifier (N=137) into two groups with highest ("highest") and intermediate ("high/int") risk of recurrence. Fifty-six patients (41% of the high risk group) were classified to the highest risk group and the remaining 81 (59%) to the high/int risk group. Twenty-six patients in the highest risk group had a documented recurrence (46% recurrence rate); twenty-one patients in the high/int group had a documented recurrence (26% recurrence rate). Fourteen patients in the highest risk group had an OS event (25% of this group); seventeen patients in the high/int group had an OS event (21%). Recurrence-free and overall survival are shown by second split test classification for patients classified as high risk by the first split in the accompanying figures.
Patient characteristics by test classification are shown in table 38.
Tables 39 and 40 show the ability of the test (highest vs high/int) to predict outcome when adjusted for other patient characteristics.
Reproducibility was assessed by comparing the test classifications obtained during development by out-of-bag estimate (on the 62 samples classified as high risk by Classifier A on the development run) with the results obtained from two reruns of the same samples on the ST100. Concordance of the test classifications was 85% and 89%.
3. Second split of the sample set (Split of low risk group from first stratification), Classifier C
This classifier (“Classifier C”) stratifies the low risk group defined by the first classifier (N=177 with 33 recurrences) into two groups with lowest (“lowest”) and intermediate (“low/int”) risk of recurrence.
Eighty-eight patients (50% of the low risk group) were classified to the low/int risk group and the remaining 89 (50%) to the lowest risk group. Fourteen patients in the lowest risk group recurred (16% recurrence rate); nineteen patients in the low/int group recurred (21% recurrence rate). RFS and OS are shown by second split test classification (lowest vs low/int) for patients classified as low risk by the first stratification (Classifier A) in the accompanying figures.
Reproducibility was assessed by comparing the test classifications obtained during development by out-of-bag estimate for samples classified as low risk by Classifier A (N=62) with the results obtained from two additional runs of these samples on the ST100. Concordance of the test classifications was 85% and 89%.
Hierarchical combination of classifiers A, B and C in a testing regime.
As explained previously, Classifiers A, B and C can be combined in a hierarchical manner, with Classifier A applied first, Classifier B applied to samples classified as high risk, and Classifier C applied to samples classified as low risk, to produce a four-way classification of a patient's sample (highest, high/int, low/int or lowest risk).
For the development sample set in this Section 7 (see above) the patient characteristics by classification label are shown in Table 46.
Recurrence-free survival and overall survival for the whole development cohort stratified by four-way test classification are shown in the accompanying figures.
Reproducibility of the 4 way classification was assessed comparing reruns of 124 of the development samples on the ST100 with out-of-bag estimates for the development run of the same samples. Concordance of the classification labels was 80% and 81%.
Alternative hierarchical combination of Classifiers A, B and C: ternary split of the cohort
Inspection of the outcome data for the four-way classification indicated that the high/int and low/int groups showed similar outcomes. These two groups can therefore be merged into a single intermediate risk group, yielding a ternary classification of highest, intermediate, and lowest risk.
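A trivial sketch of the ternary relabeling, assuming the four-way labels used in this document:

```python
# Merge the two intermediate groups of the four-way classification into a single
# "intermediate" label to obtain the ternary (three-way) classification.
FOUR_WAY_TO_TERNARY = {
    "highest": "highest",
    "high/int": "intermediate",
    "low/int": "intermediate",
    "lowest": "lowest",
}

def ternary_label(four_way_label):
    return FOUR_WAY_TO_TERNARY[four_way_label]
```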
Reproducibility of the ternary classification was assessed comparing reruns of 124 of the development samples on the ST100 with out-of-bag estimates for the development run of the same samples. Concordance of 84% and 86% was observed.
Associations of test classifications with biological processes using PSEA
We performed Protein Set Enrichment Analysis to discover the associations between biological processes and the test classifications produced in the ternary classification regime of this section.
Conclusions of Redevelopment of Risk of Recurrence Test (Section 7)
We were able to create a suite of three classifiers (A, B and C) stratifying patients with early stage lung cancer by risk of recurrence. Eighteen percent of patients were assigned to the highest risk group, 54% to the intermediate risk group (26% to the high/intermediate risk group, 28% to the low/intermediate risk group) and 28% to the lowest risk group. The percentage of patients recurrence-free at two years varied from 67% in the highest risk group to 95% in the lowest risk group; the percentage of patients alive at five years was 69% in the highest risk group and 93% in the lowest risk group. RFS and OS were significantly different between highest risk, intermediate risk and lowest risk classifications and they remained predictive of RFS and OS (trend for intermediate vs highest risk for OS) in multivariate analysis, adjusting for other prognostic factors. It is noteworthy that the tests were able to stratify all three kinds of recurrence: distant, locoregional and new primary, although performance was best for distant and locoregional recurrences.
Set enrichment analysis indicated that test classifications were associated with acute phase response, complement activation, acute inflammatory response, and wound healing. Immune tolerance could also be relevant. These observations, together with our experience showing the relevance of complement, wound healing, acute phase response and acute inflammatory response in metastatic cancer treated with immunotherapies, and the fact that the classifiers are able to stratify risk of new primary lesions, could indicate that the test is accessing information on the host's immune response to cancer.
Reproducibility of the test classifications was good, at around 85% for the ternary classification into highest, intermediate and lowest risk.
While the ternary test appeared to work well on plasma (i.e., it produced concordant classifications between serum and plasma within the inherent reproducibility of the serum test itself), the first split of the dataset (binary classification) did not. Further investigations should be undertaken to assess whether the apparent improvement in concordance on moving from the four-way to the ternary classification is reliable if the ternary test is to be run on plasma samples.
Analysis of test performance in the large subgroup of patients with adenocarcinoma demonstrated similar performance to that in the whole cohort.
Section 8: Development and Use of a Classifier Developed from Samples Obtained Post-Surgery
We had post-surgery samples, collected between 30 and 120 days after surgery, in addition to pre-surgery samples from 114 patients. We found that applying the above-described redeveloped risk of recurrence test, developed on 300+ patients (described in Section 7), to these post-surgery samples was not very useful. However, we did discover that if we excluded the patients we had identified as at highest risk of recurrence from their pre-surgery sample, we could make a test using post-surgery samples that allowed a better stratification of these patients into intermediate and lowest risk groups.
In practical terms, one could implement the test (or classifier) described in this section after surgery, in addition to performing a test from a blood-based sample prior to surgery. In particular, one would test a patient pre-surgery using the test of Section 7 (e.g., the ternary classification routine described in that section). If the pre-surgery sample is classified as highest risk, that test result could inform and guide the patient's treatment; for example, it could lead to adjuvant chemotherapy, or perhaps immunotherapy if such treatment is approved in the future, or to more intensive follow up with the patient. If the pre-surgery sample is classified as lowest or intermediate risk, we could obtain a post-surgery serum sample and generate an improved stratification based on it, using the classifier developed as described in this section.
As the classifier developed in this section was trained only on samples collected 30-120 days post-surgery, we do not presently know whether that is an optimal timeframe in which to collect a second sample. In one possible strategy, stratification could be improved by collecting a series of post-surgery samples (e.g., at 6 months, 9 months and 1 year post-surgery) and conducting the test described in this section on each of those samples.
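As an illustrative sketch only (not a definitive implementation), the two-stage testing strategy just described could be organized along the following lines, where ternary_classifier and post_surgery_classifier stand in for the trained classifiers of Section 7 and of this section, respectively; the function and label names are placeholders.

```python
# Illustrative sketch of the two-stage testing workflow described above.
# The classifier arguments are placeholders for the trained classifiers of
# Section 7 (pre-surgery, ternary) and Section 8 (post-surgery, G1/G2).

def assess_recurrence_risk(pre_surgery_features, post_surgery_features,
                           ternary_classifier, post_surgery_classifier):
    pre_label = ternary_classifier(pre_surgery_features)   # "highest", "intermediate" or "lowest"
    if pre_label == "highest":
        # Highest-risk patients are identified from the pre-surgery sample alone;
        # the post-surgery classifier is not applied to them.
        return {"pre_surgery": pre_label, "post_surgery": None,
                "assessment": "highest risk of recurrence"}
    # For the remaining patients, refine the stratification with the post-surgery test,
    # which could be repeated on a series of post-surgery samples if desired.
    post_label = post_surgery_classifier(post_surgery_features)  # "G1" (higher) or "G2" (lower)
    return {"pre_surgery": pre_label, "post_surgery": post_label,
            "assessment": "higher residual risk" if post_label == "G1" else "lower residual risk"}
```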
One observation we have made is that the serum proteome changes from pre-surgery to post-surgery, and that the post-surgery proteome contains information that allows us to improve our recurrence risk stratification. We have conducted an analysis of PSEA scores, which supports the realization that there are significant changes between pre- and post-surgery sampling.
A post-surgery classifier was developed by training on the post-surgery feature values derived from the first spectral acquisition using instrument “ST100”, as mentioned earlier. Patients whose pre-surgery samples were classified as highest risk by the pre-surgery classifier were excluded, leaving 95 post-surgery samples for classifier development. The resulting classifier stratifies patients into a group with higher risk of recurrence (class label “G1”) and a group with lower risk (class label “G2”). In this section, the highest-risk pre-surgery patients are shown alongside the plots for the patients having class labels G1 and G2 for purposes of comparison, despite the fact that samples from these patients were not used in the post-surgery classifier development.
Details of Classifier Development
A classifier was developed using the procedure shown in
Results
After classifier development, the matched samples were classified using the post-surgery classifier, using out-of-bag classifications, with those patients designated highest risk based on their pre-surgery ST100 classification excluded. Of the 114 matched samples, 24 (21%) were classified as highest risk by the pre-surgery classifier, 49 (43%) were classified as G1, and 41 (36%) were classified as G2 (Table 63). Of the 22 recurrences in the matched sample cohort, eight belonged to the highest-risk group (33% recurrence rate in that group), 12 were assigned to G1 (24% recurrence rate), and two to G2 (5% recurrence rate).
The concordance between the post-surgery classifier (using the post-surgery samples) and the original pre-surgery risk of recurrence (ROR) classifier (using the pre-surgery samples) is shown in Table 64 for patients not classified as at highest risk of recurrence from their pre-surgery sample. Thirteen of the patients whose pre-surgery samples were classified as low risk were classified as G1 (higher risk) post-surgery, two of whom had recurrences. Twelve patients were classified as intermediate risk pre-surgery and as G2 (lower risk) after surgery, none of whom recurred.
Recurrence-free survival is shown by test classification in
Cox proportional hazard ratios and p values comparing G1 vs G2 are shown in Table 65.
Some key time-to-event landmarks are summarized in Table.
Table 67 shows patient characteristics by test classification.
Table 68 shows the ability of the test to predict recurrence-free survival when adjusted for other patient characteristics. Among the recurrences, G1 and G2 both contained roughly equal proportions of locoregional recurrences and new primaries, although the total number of recurrences in G2 is very small, making comparisons difficult. Table 69 shows the types of recurrences by test classification.
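As a hedged illustration of the kind of survival analysis summarized in Tables 65 and 68 above, the following sketch fits a Cox proportional hazards model for recurrence-free survival with the G1/G2 test label as a covariate; the lifelines package, the column names and the toy data are assumptions for illustration, and additional clinical covariate columns would be included for the adjusted analysis.

```python
# Hedged sketch (not the study's analysis code) of a Cox proportional hazards
# fit comparing G1 vs G2 for recurrence-free survival. Column names and the
# toy data below are hypothetical; adding further covariate columns to the
# DataFrame would yield an analysis adjusted for other patient characteristics.
import pandas as pd
from lifelines import CoxPHFitter

toy = pd.DataFrame({
    "rfs_months": [6, 36, 48, 9, 60, 24, 18, 54, 30, 12],   # time to recurrence or censoring
    "recurred":   [1, 0, 0, 1, 0, 1, 0, 0, 0, 1],           # 1 = recurrence observed
    "g1_label":   [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],           # 1 = G1 (higher risk), 0 = G2
})

cph = CoxPHFitter()
cph.fit(toy, duration_col="rfs_months", event_col="recurred")
cph.print_summary()   # hazard ratio and p value for g1_label, analogous to Table 65
```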
Reproducibility was assessed by comparing the test classifications obtained during development by out-of-bag estimate with the results obtained from a rerun of the same samples on the ST100. Eighty-nine out of 90 samples (99%) received the same classification in both runs.
Conclusions
A test developed using post-surgery samples, collected from patients not classified as at highest risk of recurrence based on pre-surgery samples, was able to effectively stratify these patients into two groups (G1 and G2) with worse and better RFS and OS, respectively. This stratification appeared to be better than that obtained from pre-surgery samples using the risk of recurrence test described in Section 7. As the post-surgery test can only be effectively applied to patients not classified as at highest risk based on pre-surgery samples, it would be necessary to have tested a patient's pre-surgery sample in order to provide this improved prognostication of likelihood of recurrence after surgery.
This result indicates the presence of outcome-associated differences in the serum proteome between samples collected before and after surgery. This observation was confirmed by comparing the PSEA scores before and after surgery, the details of which are omitted for the sake of brevity.
Thus, we contemplate a testing methodology as follows.
Section 9: Further Considerations
Practical implementations of the test of this document could take several forms.
In one embodiment, a method for performing a risk assessment of recurrence of cancer in an early stage non-small-cell lung cancer patient includes the steps of:
(a) performing mass spectrometry on a blood-based sample obtained from the patient and obtaining mass spectrometry data, and
(b) in a computing machine, performing a hierarchical classification procedure on the mass spectrometry data wherein the computing machine implements a hierarchical classifier schema including a first classifier (Classifier A) producing a class label in the form of high risk or low risk or the equivalent (See
Alternatively, the test could be performed in accordance with a method in which the computing machine implements a hierarchical classifier schema including a third classifier (Classifier C), see
As described in conjunction with
As an alternative, the test could be conducted as a binary classification procedure using just Classifier A to produce High Risk or Low Risk classification labels (or the equivalent). In this regard, a method for performing a risk assessment of recurrence of cancer in an early stage non-small-cell lung cancer patient includes the steps of: performing mass spectrometry on a blood-based sample obtained from the patient prior to surgery to treat the cancer and obtaining mass spectrometry data, and, in a computing machine, performing a binary classification procedure on the mass spectrometry data, wherein the computing machine implements a first classifier (Classifier A) producing a class label in the form of high risk or low risk or the equivalent, and wherein if the class label is high risk or the equivalent the patient is predicted to have a high risk of recurrence of the cancer following surgery.
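As an illustrative sketch only (not a definitive implementation of the classifier internals), the hierarchical, ternary and binary labeling options described above could be organized in software roughly as follows, with classifier_a, classifier_b and classifier_c standing in for the trained Classifiers A, B and C:

```python
# Illustrative sketch of the hierarchical classification schema; the internals
# of Classifiers A, B and C (features, training, parameters) are not shown.

def hierarchical_label(feature_values, classifier_a, classifier_b, classifier_c,
                       ternary=True):
    first = classifier_a(feature_values)            # "high" or "low" (or the equivalent)
    if first == "high":
        second = classifier_b(feature_values)       # "highest" or "high/int"
        if ternary:
            return "highest" if second == "highest" else "intermediate"
        return second                               # four-way label
    second = classifier_c(feature_values)           # "lowest" or "low/int"
    if ternary:
        return "lowest" if second == "lowest" else "intermediate"
    return second                                   # four-way label

def binary_label(feature_values, classifier_a):
    # Binary alternative using just Classifier A
    return classifier_a(feature_values)             # "high" or "low" (or the equivalent)
```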
In the above methods, in one embodiment the computing machine stores a reference set of mass spectrometry data obtained from blood-based samples from a multitude of early stage non-small-cell lung cancer patients for use in classification of the mass spectrum of the sample, and wherein the mass spectrometry data includes feature values for the features listed in Appendix A.
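Purely as a hedged sketch, and assuming for illustration a simple k-nearest-neighbor comparison of a new sample's feature values against the stored reference set (the actual classification algorithm, the feature list of Appendix A and its parameters are defined elsewhere in this document), classification against the reference set might look like the following:

```python
# Hedged sketch assuming, for illustration only, a k-nearest-neighbor comparison
# of a new sample's feature values against the stored reference set; this is not
# a definitive description of the classifier used in this document.
import numpy as np

def classify_against_reference(sample_features, reference_features, reference_labels, k=5):
    """Assign a class label by majority vote among the k nearest reference samples."""
    distances = np.linalg.norm(reference_features - sample_features, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [reference_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical usage: reference_features is an (n_patients, n_features) array of
# feature values (features per Appendix A) and reference_labels holds the
# corresponding class labels for the reference set of early stage NSCLC patients.
```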
As another example of how the present disclosure can be practiced, a programmed computer is provided with machine-readable code and memory storing parameters for at least Classifier A, and optionally Classifiers B and C (and code for implementing an associated hierarchical classification schema, shown in
In one possible implementation, the classifiers A, B and C are generated from performing the method of
It will be appreciated that the terms assigned to class labels, such as “high risk” or “highest”, are descriptive and offered by way of example and not limitation; other labels could of course be chosen, such as “good”, “bad”, “1”, “2”, “G1” (or Group 1), “G2”, etc. The particular nomenclature used in practice is not important.
As noted, in one possible configuration just Classifier A is used to stratify the patient into high and low risk groups. The cases in which one might use just Classifier A for high/low risk stratification, rather than defining a “highest” risk group (using Classifier B), would be:
1. A scenario where the highest risk identification (produced by Classifier B) does not validate well. Usually our tests validate well, but in this risk of recurrence setting we are dealing with relatively small numbers of recurrences, and this increases the risk of the classifier not generalizing well. This can be due to some overfitting, misjudging performance on a small development set, or not having a population-representative set to train with.
2. A scenario where this option would extend better to other indications. As this “first split” of the dataset looks less deeply into the proteome and the specifics of the training set, it might be more portable to other indications, such as stage II NSCLC, other lung cancers or possibly other early stage cancers.
The appended claims are offered as further descriptions of the disclosed inventions.
This application claims priority benefits of U.S. provisional application serial no. 62/806,254 filed Feb. 15, 2019, the content of which is incorporated by reference herein.
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/US2020/015626 | 1/29/2020 | WO |
Number | Date | Country
--- | --- | ---
62806254 | Feb 2019 | US