Rectal cancer is a disease in which malignant cells form in the tissues of the rectum.
At 200, rectal cancer diagnosis is performed. Most people in the early stages of colon or rectal cancer do not experience symptoms of the disease. Thus, screening tests are recommended to detect and diagnose the cancer before it progresses further. One or more of the following tests are used to detect and diagnose colon and rectal cancer:
At 204, staging is performed. Staging is the process of determining the spread and extent of the cancer once it has been diagnosed. It is based on the results of the physical exam, biopsies, and blood and imaging tests. The American Joint Committee on Cancer (AJCC) staging system, also known as the TNM system, is the staging tool most commonly used for colorectal cancer. The TNM system consists of three key elements:
Once the patient's T, N and M categories have been determined, a stage grouping (from stage I to stage IV) is determined, from the least advanced to the most advanced stage.
At 205, treatment options are determined. There are different types of treatment for rectal cancer; some are standard practice and others are being tested in clinical trials. According to the National Cancer Institute (NCI), four types of standard treatment are used: surgery, radiation therapy (RT), chemotherapy, and targeted therapy. These treatments can be performed separately or in combination.
The primary treatment for rectal cancer is surgical resection. According to the NCI, local excision is commonly used for selected patients with clinical stage T1 rectal cancer. For higher stages of rectal cancer, total mesorectal excision (TME) is the treatment of choice. Since the introduction of TME for rectal cancer, reduced local recurrence rates and improved oncologic outcomes have been observed. Depending on the surgeon's experience, the rate of complications, such as blood loss and anastomotic leaks, is low. Furthermore, radiotherapy before surgery appears to benefit patient outcomes even with improvements in surgical technique.
Radiation therapy (RT) is one of the most commonly prescribed treatments for rectal cancer. Approximately 50% of cancer patients will receive RT alone or in combination with other treatments. When used before surgery, the goal is to shrink the tumor and make surgery or chemotherapy more effective. When used afterward, the goal is to destroy any cancer cells that might remain after surgery. There are two basic types of RT:
A combination of radiation and chemotherapy before surgery (also known as preoperative chemoradiation (CRT) or neoadjuvant therapy) has become the standard of care for patients with clinically staged T3-T4 or node-positive disease, based on the results of clinical trials. CRT may be given before surgery to shrink the tumor, make it easier to remove the cancer, and lessen problems with bowel control after surgery. Even if all the cancer that can be seen at the time of surgery is removed, some patients may be given radiation therapy or chemotherapy after surgery to kill any cancer cells that remain. Treatment given after surgery to lower the risk that the cancer will come back is called adjuvant therapy.
For patients with stage II and III rectal cancer, neoadjuvant treatment with RT and 5-FU-based chemotherapy is preferred over adjuvant therapy for reducing local recurrence and minimizing toxicity. However, there are specific challenges and adverse effects associated with RT in rectal cancer patients. These include:
RT before or after surgery adds toxicity and can reduce the patient's quality of life; therefore, treatment options should be discussed with the patient.
Personalized medicine refers to the use and implementation of a patient's unique biologic, clinical, genetic and environmental information to make decisions about their treatment or course of action. Cancer therapy is implemented on a watch-and-wait basis for most patients: although an individual's clinical information (cancer stage) is used to decide which regimen is likely to work best, only data referring to the outcomes of larger groups of patients is considered.
Under the umbrella of personalized medicine is genomic medicine, which refers to "the use of information from genomes (from humans and other organisms) and their derivatives (RNA, proteins, and metabolites) to guide medical decision making," as described by G. S. Ginsburg and H. F. Willard, "Genomic and personalized medicine: foundations and applications," Transl. Res., vol. 154, no. 6, pp. 277-87, Dec. 2009. The discovery of patterns in gene expression data and the examination of a person's genome make it possible to make individualized risk predictions and treatment decisions. A patient's predisposition to treatment and health states can now be characterized by their molecular information, and useful classifiers and prognostic models can be developed to make decisions more strategically.
There has been a significant improvement in sensitivity as DNA microarray technology continues to advance. DNA microarray and gene expression profile data have made it possible to understand and make new discoveries at the molecular level regarding human conditions and diseases, especially cancer. However, a challenge facing this area of study is the complexity and amount of data across multiple samples.
This research is motivated by the question of whether it is possible to determine which patients are more likely to benefit from using RT as part of their cancer treatment. Clinical decision-making regarding RT is still based on the estimated overall level of tumor aggressiveness, but current decision models are not personalized for predicting the benefit of RT for a specific patient, as described by J. F. Torres-Roca and C. W. Stevens, "Predicting response to clinical radiotherapy: past, present, and future directions," Cancer Control, vol. 15, no. 2, pp. 151-6, Apr. 2008 (herein "Torres-Roca"). Torres-Roca developed and validated a systems biology model of cellular radiosensitivity that would lead to the discovery of novel radiation-specific predictive biomarkers. The clinical applications of this type of personalized predictive model have the potential to identify patients likely to benefit from a given treatment and to determine a more effective treatment strategy.
There has been an increasing trend of patients moving from being passive actors in their disease management process to actively making decisions regarding their treatment. It can now be expected that patients will at least give true informed consent to their treatment, if not actually make such treatment decisions themselves. Depending on the stage of the cancer, the decision to receive a treatment is a matter of several factors and implications that influence the patient to accept or reject treatment. Further treatment may prolong life or relieve symptoms, but in some cases will not eradicate the disease. A trade-off must be made between possible benefits and likely side effects.
The decision-making process should consider the individual patient's preferences for which treatment, if any, should be selected. Significant predictors for overall survival, quality of life, cost-effectiveness, and response to treatment include individual patient genomic profile factors, prognostic biomarkers, and socio-economic patient characteristics. This information can help the patient make a decision based on their individual preferences and personal situation.
As patients continue to gain control over their treatment strategies, more support is needed to help them make good decisions. It is still unclear to what extent patients are involved in their decision making and how they can resolve their personal uncertainty regarding their treatment options. D. J. Kiesler and S. M. Auerbach, "Optimal matches of patient preferences for information, decision-making and interpersonal behavior: evidence, models and interventions," Patient Educ. Couns., vol. 61, no. 3, pp. 319-41, Jun. 2006, reviewed studies regarding the involvement of patients in the decision-making process. They found that although a large proportion of patients want to be fully informed and to actively participate in treatment decisions with their physicians, a considerable proportion of patients prefer to have little to no detailed information about their condition or involvement in medical decisions. This shared decision process is dynamic in the sense that it will vary depending on patient preferences.
Other literature concentrates on decision models used to select which treatment should be chosen for patients with cancer. A large proportion of articles focus on determining which prognostic factors and biomarkers are the most significant predictors in the assessment of different outputs (e.g., survival, recurrence rate and chance of metastasis). The information, criteria, methods and objectives used in the models to make the treatment selection decision are listed in Table 2.
The objectives and criteria used in cancer treatment selection models involve intrinsic trade-offs between survival and quality of life. Summers (2007) assessed trade-offs between quantity and quality of life particular to prostate cancer patients, as well as among different side effects, to determine which treatment would be optimal for a specific patient [20]. References [21], [22], [23], [24] used a utility score, defined as the relative value patients assign to potential health states. Utility values were obtained from interviews or the literature. Some of the treatment complications considered include sexual dysfunction, urinary symptoms, bowel dysfunction, and death. Szumacher (2005) [25] implemented a decision model based mainly on patient preferences regarding convenience of the treatment plan, pain relief, overall quality of life, the individual's chances of survival, and out-of-pocket costs. Survival, chance of metastasis and risk of relapse are usually compared to quality-of-life measures: [26], [27] evaluated models based on the probability of the cancer relapsing after an amount of time, and [20], [24], [27] assessed the chance of the cancer spreading to other organs as decision criteria. A number of articles concentrated specifically on the cost-effectiveness of various strategies [28], [29], [27]. Van Gerven (2007) [30] focused on maximizing patient benefit while simultaneously minimizing the cost of treatment.
Among the methods utilized in the literature, variants of the Markov decision analysis framework were the most used [29], [21], [20], [22], [30], [23]. A Markov decision process extends a Markov chain by adding actions and rewards, thereby incorporating both choice and motivation; the Markov property ensures that the future state is independent of the past states given the current state of the random process. References [28], [29], [27] used decision trees and cost-effectiveness analysis to select strategies. Multi-criteria optimization models were used in [31], [32] to find the best dose-volume histogram (DVH) values by varying the dose-volume constraints on each of the organs at risk (OARs). Other methods used include neural networks [25] and multivariate statistical analysis [25]. In most cases, individual patient risks and preferences are not considered in these models to make individual recommendations. Therefore, future analyses need to provide outcomes stratified by more specific risks and preferences.
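The Markov decision framework described above can be illustrated with a small value-iteration sketch. The states, transition probabilities, and rewards below are made-up placeholders for illustration only, not clinical data or any model from the cited studies.

```python
import numpy as np

# Toy Markov decision process: three abstract "health states" (the third is
# absorbing) and two actions ("treat", "wait"). All numbers are illustrative.
# transitions[a][s, s'] = P(next state s' | current state s, action a)
transitions = {
    "treat": np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.5, 0.2],
                       [0.0, 0.0, 1.0]]),
    "wait":  np.array([[0.5, 0.3, 0.2],
                       [0.1, 0.5, 0.4],
                       [0.0, 0.0, 1.0]]),
}
# rewards[a][s] = immediate reward (e.g., quality-adjusted time) for action a in state s
rewards = {"treat": np.array([1.0, 0.5, 0.0]),
           "wait":  np.array([1.2, 0.6, 0.0])}

def value_iteration(transitions, rewards, gamma=0.95, tol=1e-8):
    """Standard value iteration: returns the optimal value of each state and
    the greedy policy, exploiting the Markov property (future depends only on
    the current state)."""
    n = next(iter(transitions.values())).shape[0]
    v = np.zeros(n)
    while True:
        q = {a: rewards[a] + gamma * transitions[a] @ v for a in transitions}
        v_new = np.max(np.stack(list(q.values())), axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    policy = [max(transitions, key=lambda a: q[a][s]) for s in range(n)]
    return v, policy

v, policy = value_iteration(transitions, rewards)
```

The absorbing third state has zero reward, so its optimal value converges to zero; the policy maps each state to the action with the highest expected discounted reward.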
The data used as inputs in the models include tumor anatomy factors, patient characteristics, and cost estimates. Tumor anatomy is considered using the TNM staging system in various studies [30], [28], [24], [29]. Gleason score and prostate-specific antigen (PSA) are important inputs for prostate cancer treatment selection [21], [20], [22], [24]. Age is the patient characteristic most commonly considered in the models [21], [20], [22], [24], [30], [23], [28], [26], [25]. Other patient and health factors include gender, race, treatment history, comorbidities, and laboratory results.
Below is a key to the references noted in Table 2 and discussed above:
Each of the above is incorporated herein by reference in its entirety.
Radiation therapy (RT) is the most commonly prescribed single agent in cancer therapeutics. Approximately half of cancer patients receive RT as part of their treatment. There has been great improvement in the quality and effectiveness of RT delivery in recent years.
Unfortunately, neoadjuvant CRT is not beneficial for all patients. The treatment response ranges from a pathologic complete response (pCR) to resistance. It is reported that only 10 to 20 percent of patients with advanced rectal cancer show pCR to neoadjuvant CRT. Currently, patients who will show no response or minimal tumor response to neoadjuvant CRT are not identified before treatment begins.
Identifying patients who could potentially benefit from CRT and justifying a given treatment path will hopefully minimize the side effects caused by current treatment practices. We are entering a new era of personalized, patient-specific care, and with the advent of low-cost individual genomic and proteomic analysis, we are on the path to employing patients' biologic data to systematically predict the best course of therapy.
Treatment decision making for cancer is complex. Every patient is unique, with their own genetic traits, predisposition to side effects, and preferences. The patient's and clinician's subjective judgment plays a vital role in making sound treatment decisions. Furthermore, various patient-specific factors make it difficult to objectively and quantitatively compare treatment decisions.
Described herein is a prediction model, based on the gene expression profiles of a sample of cell lines, for the response of a patient to RT (radiosensitivity) using their genomic information. Measures of the patient's individual clinical information, biological characteristics and anticipated quality of life are integrated into a patient-centered prescriptive model that determines the most appropriate course of action at a given stage (II or III) of rectal cancer.
Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.
The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.
Radiation therapy (RT) is the most commonly prescribed cancer treatment and can be effective in curing cancer. The success rates for RT are comparable with those achieved with surgery in some cancers (prostate, head and neck, and cervical cancer). Over the past decades, RT effectiveness has improved through the discovery of physical approaches that optimize the radiation dose to tumors and spare normal tissues. With the introduction of microarrays and the use of gene expression to identify features in medical outcomes, identification of gene signatures and pathways activated in the response of cells to radiation can result in the development of treatment options in which gene expression is controlled within the irradiated tumor (e.g., BUdR and IUdR were among the first classes of biological agents analyzed as radiosensitizers to enhance the effects of radiotherapy treatment).
Decision making and treatment selection in radiation oncology is subjective and based on the clinico-pathological features of large groups of patient outcomes. In personalized medicine, the objective is to select the most appropriate course of treatment for an individual patient's needs and characteristics. Technological advancements in genomic medicine now have the potential to predict a patient's predisposition to RT. Microarray technology is one of the most widely adopted methods of genomic analysis. Microarray experiments generate functional data on a genome-wide scale and can provide important data for the biological interpretation of genes and their functions.
The complexity and dimensionality of the data generated by gene expression microarray technology require advanced computational approaches. Machine learning and supervised learning methods provide tools to develop predictive models from available data and are effective when dealing with large amounts of biological data. In this dissertation, we present a methodology to organize and analyze gene expression data and test whether it results in an accurate predictive model of tumor radiosensitivity.
Machine learning refers to the type of computational techniques used to develop a "model" from a set of observations of a system. The term "model" assumes that there exists an approximate relationship between the parameters considered in the system. The goal is to predict a quantitative (regression) or qualitative (classification) outcome using a set of attributes or features. Supervised learning refers to the subset of machine learning methods where the input-output relationship is assumed to be known.
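The regression/classification distinction above can be sketched in a few lines. This is an illustrative example with made-up numbers, using scikit-learn as a stand-in for the modeling tools discussed later (the work itself uses R and SAS).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[0.1], [0.4], [0.5], [0.9]])   # one feature per sample
y_quant = np.array([1.2, 2.1, 2.4, 3.8])     # quantitative outcome -> regression
y_qual = np.array([0, 0, 1, 1])              # qualitative outcome -> classification

# Supervised learning: both models are fit on known input-output pairs.
reg = LinearRegression().fit(X, y_quant)     # predicts a continuous value
clf = LogisticRegression().fit(X, y_qual)    # predicts a class label
```

The regression model predicts a number for a new feature value, while the classifier predicts a label; both rely on the assumed input-output relationship in the training observations.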
Supervised learning is commonly used in computational biology, in applications ranging from gene expression data to the analysis of interactions between biological subjects. Some of the most commonly used supervised learning methods in computational biology include neural networks, support vector machines, logistic regression, multivariate linear regression, decision tree-based models, and ensembles (random forests). A review of these methods is presented in the following section.
Below is a discussion of the development of a personalized diagnostic tool to predict radiotherapy (RT) efficacy using a patient's genomic information and to estimate the likelihood of response to RT of an individual patient. Later, the results of this model are implemented into a decision model with the objective of guiding the patient's and physician's decision on the selection of a cancer treatment strategy.
A summary of the methods, relevant literature, strengths, limitations and opportunities is presented in Table 3. Artificial neural networks (ANN) and support vector machines are among the most commonly used black-box machine learning tools in the literature. ANN-based approaches may be applied for classification, predictive modelling and biomarker identification within data sets of high complexity.
Below is a key to the references noted in Table 3:
Each of the above is incorporated herein by reference in its entirety.
More recent studies using ANN approaches in systems biology include: a validated, reduced (from 70 to 9 genes) gene signature capable of accurately predicting distant metastases by Lancashire et al. [40]; a model to predict Parkinson's disease using microarray gene expression data by Sateesh Babu et al. [41]; and a gene expression-based model to select 20 genes that are closely related to breast cancer recurrence by Chou et al. [42].
The support vector machine (SVM) algorithm constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which are then used for classification or regression [43]. SVMs have a number of mathematical features that make them attractive for gene expression analysis, including the ability to deal with large data sets of high dimensionality, the ability to identify outliers, flexibility in choosing a similarity function, and sparseness of the solution [44]. According to Statnikov et al., multi-category SVMs are the most effective classifiers for performing accurate cancer diagnosis using gene expression data [45]. Most studies conclude that the main limitations of SVMs are the lack of interpretability of the results and the heuristic determination of the kernel parameters.
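As a sketch of the SVM use case described above, the following fits a kernel SVM to synthetic "expression-like" data in which one class carries a mean shift in a few features. All data here are simulated placeholders, not gene expression measurements from the studies cited.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 40 samples x 50 features; class 1 carries a mean shift in the first 5 features
X = rng.normal(size=(40, 50))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0

# RBF-kernel SVM: the kernel choice and C are the "heuristically determined"
# parameters noted in the text as a practical limitation.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
train_acc = clf.score(X, y)
```

With a clear mean shift the classifier separates the training classes almost perfectly; in practice, kernel parameters would be tuned by cross-validation.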
In models using logistic regression for classification, the outcome of interest is assumed to be binomially distributed, with the logistic function f(y)=1/(1+e^(−y)). The variable y is a measure of the contributions of the parameters, y=β0+β1x1+ . . . +βnxn, where β0 is a constant term and β1, β2, . . . , βn are regression coefficients. Models [65]-[74] include [paragraph still in process]
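The logistic link just described can be computed directly. The coefficients and inputs below are made-up numbers used only to show the mapping from the linear predictor to a probability.

```python
import numpy as np

# Logistic function f(y) = 1 / (1 + e^(-y)), applied to the linear predictor
# y = beta0 + beta1*x1 + ... + betan*xn. Coefficients here are illustrative.
def logistic(y):
    return 1.0 / (1.0 + np.exp(-y))

beta = np.array([0.5, 1.2, -0.8])   # beta0, beta1, beta2 (made up)
x = np.array([1.0, 0.3, 0.6])       # leading 1 multiplies the intercept beta0
p = logistic(beta @ x)              # predicted probability of the outcome
```

The linear predictor here is 0.5 + 1.2(0.3) - 0.8(0.6) = 0.38, and the logistic function maps it into (0, 1), which is why the model suits a binomially distributed outcome.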
The origin of tree-based learning methods is often credited to Hunt [75], but the method became recognized in the field of statistics with Breiman et al.'s [76] Classification And Regression Trees (CART). Since then, more decision-tree-based methods have been proposed to improve prediction accuracy by aggregating the predictions given by several decision trees for the same outcome. Although decision tree models were originally designed to address classification problems, they have been extended to handle univariate and multivariate regression. The random forests (RF) model [77] is a randomization method that modifies the node splitting of the CART procedure as follows: at each node, K candidate variables are selected at random among all input candidate variables, an optimal candidate test is found for each of these variables, and the best test among them is selected to split the node [78].
Below is a comparison of supervised learning methods appropriate to the structure and objectives of the models. Based on the performance of the models, a prediction model trained on tumor cell gene expression data is validated in two independent clinical outcome datasets for patients who received pre-operative RT.
With reference to the figures, an operational flow 1900 for building the prediction model is described. Radiosensitivity is defined based on cellular clonogenic survival after 2 Gy (SF2) for 48 cell lines (see Table 4). Since gene expression profiles are available for all cell lines, gene expression is used as the basis of the prediction model. The operational flow 1900 may be predicated on two hypotheses. The first is that a radiosensitivity cell-based prediction model can be validated using clinical patient data from rectal and esophagus cancer patients that received RT before surgery. The second is that a radiosensitivity genomic-based prediction model could identify patients with rectal cancer that may benefit from RT treatment by assigning higher values of SF2 to radio-resistant patients and lower values of SF2 to radio-sensitive patients.
As evidence, radiosensitivity is defined based on cellular clonogenic survival after 2 Gy (SF2) for 48 cell lines (1902). Since gene expression profiles are available for all cell lines, gene expression is used as the basis of the prediction model. Radiosensitivity prediction has been studied previously, and a clinically validated radiosensitivity index (RSI) has been defined to estimate radiosensitivity. The approach herein differs from conventional methods in that the SF2 response transformation process and the gene expression selection process use a statistically based procedure rather than a biological feature selection approach.
Sample: Cell lines are used to construct the prediction model and were obtained from the NCI [35]. Cells were cultured as recommended by the NCI in Roswell Park Memorial Institute medium (RPMI) 1640 supplemented with glutamine (2 mmol/L), antibiotics (penicillin/streptomycin, 10 units/mL) and heat-inactivated fetal bovine serum (10%) at 37° C. with an atmosphere of 5% CO2.
Microarrays: Analyses using microarray technology have been widely adopted for generating gene expression data on a genomic scale. Gene expression profiles were obtained from Affymetrix U133 Plus chips from a previously published study by S. Eschrich, H. Zhang, H. Zhao, D. Boulware, J.-H. Lee, G. Bloom, and J. F. Torres-Roca, "Systems biology modeling of the radiation sensitivity network: a biomarker discovery platform," Int. J. Radiat. Oncol. Biol. Phys., vol. 75, no. 2, pp. 497-505, October 2009.
Output: The survival fraction at 2 Gy (SF2) of the 48 human cancer cell lines used in the classifier was obtained from Torres-Roca, 2005 and is presented in Table 4.
The procedure used to obtain these values consisted of cells being plated so that 50 to 100 colonies would form per plate and incubated overnight at 37° C. to allow for adherence. Cells were then irradiated with 2 Gy using a cesium irradiator. Exposure time was adjusted for decay every 3 months. After irradiation, cells were incubated for 10 to 14 days at 37° C. before being stained with crystal violet. Only colonies with at least 50 cells were counted. The values for SF2 were determined using equation 1:
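Equation 1 is not reproduced above. As a hedged reference, the standard clonogenic-assay formulation of the surviving fraction (an assumption based on common practice, not taken from the source) is:

```latex
SF_2 \;=\; \frac{\text{colonies counted after 2 Gy}}{\text{cells plated} \times \text{plating efficiency}},
\qquad
\text{plating efficiency} \;=\; \frac{\text{colonies counted (unirradiated control)}}{\text{cells plated (control)}}
```

Under this formulation, SF2 lies between 0 and 1, consistent with the transformation described next.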
Output transformation: A transformation function (equation 2) is applied to SF2. Originally, SF2 ranges between 0 and 1; with the transformation function, SF2 can range between −∞ and ∞. The objective of this transformation is to enhance the extreme values of SF2 (radio-sensitive and radio-resistant responses). The transformation follows equation 2 and is represented in FIG. 4, which illustrates SF2 and transformed SF2.
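Equation 2 is not reproduced above, but a logit-style transform has exactly the stated properties: it maps values in (0, 1) onto (−∞, ∞) and stretches both extremes of the spectrum. The function below is therefore a hypothetical stand-in, not the document's actual equation 2.

```python
import numpy as np

# Hypothetical stand-in for equation 2: the logit maps SF2 in (0, 1) to
# (-inf, inf) and magnifies the radio-sensitive and radio-resistant extremes.
def transform_sf2(sf2):
    sf2 = np.asarray(sf2, dtype=float)
    return np.log(sf2 / (1.0 - sf2))

t = transform_sf2([0.05, 0.5, 0.95])   # extreme, middle, extreme
```

The midpoint 0.5 maps to 0, while the two extreme values map to symmetric large-magnitude values, illustrating how such a transform enhances the extremes.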
Standard prediction models and variable reduction methods face an important challenge with the dimensionality of the data. This is the case in genomic applications, where the number of genes is considerably higher than the number of samples available to study them. In this problem, a total of m=54,675 potential candidates (gene expression probesets) are considered as inputs to the prediction models, with a total of n=48 observations (tumor cell lines). The most commonly used approaches, such as PCA, require n>m. However, this problem has m>>n. Thus, a methodology to reduce the number of candidate features and to identify features that are statistically independent (low correlation values) is recommended. The objectives of the dimension reduction procedure presented here are to:
The approach herein is a univariate method that selects the most relevant (statistically significant) features one by one, excluding the rest. This technique is computationally simple, is fast at processing high-dimensional datasets, and is independent of the classification/regression models. When using this procedure, feature dependencies are ignored; thus, a step to extract independent features has to be included (step 5 below).
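The univariate screening step can be sketched as follows. The data are simulated at a reduced scale (2,000 candidate features instead of 54,675) and only one feature is truly informative; the threshold mirrors the p ≤ 0.001 cutoff used later in the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m = 48, 2000                          # samples x candidate features (scaled down)
X = rng.normal(size=(n, m))
y = X[:, 0] * 2.0 + rng.normal(size=n)   # only feature 0 truly drives the response

# Univariate step: regress the response on each candidate feature one at a
# time, keeping features whose p-value clears the significance threshold.
pvals = np.array([stats.linregress(X[:, j], y).pvalue for j in range(m)])
selected = np.where(pvals <= 0.001)[0]
```

As the text notes, this ignores dependencies between features, so a follow-up step to remove correlated survivors is still required.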
Thus, with reference to the figures, the dimension reduction procedure is performed as follows.
The dimension reduction process presented in this study is also compared with two other feature selection methods: random forests and support vector machines. Since the subset of selected features is different for each method, there is no evidence to support one method over the others.
Predictive models are developed and compared based on their performance. The experimental design of the models is presented in the figures. The process used to build, test and validate the models has been used in the supervised learning literature in computational and systems biology, and it can be summarized as follows:
In the selection of a prediction model after 1914, there is a trade-off between simplicity and wholeness. Simpler models can be more understandable and computationally tractable. On the other hand, more complex models tend to fit the data better and capture more information from the available data. Two simple models (a multivariate regression model and a decision tree model) and a more complex model (a random forest) are created and compared to select the most appropriate model for the prediction of radiation sensitivity.
Model 1: Multivariate regression with 2-way interactions (1918)
Linear regression is a method used to build models from data for which dependencies can be closely approximated and to predict the value of a response (y) from a set of predictors (xi). Let x1, x2, . . . , x169 be a set of 169 predictors believed to be associated with the transformed response T_SF2. The linear regression model for the jth observation has the form given by (3):
T_SF2j=β0+β1xj1+β2xj2+ . . . +β169xj,169+∈j (3)
In matrix notation, ŷ=Xβ, where ∈ is a random error with E(∈j)=0, Var(∈j)=σ2, Cov(∈j, ∈k)=0 ∀j≠k, and βi, i=0, 1, . . . , 169, are the regression coefficients. The approach used to estimate the vector β in this study is least squares estimation: the value of β that minimizes the sum of squared residuals (Y−Xβ)′(Y−Xβ), with the decomposition given by (4):
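Since equation 4 is not reproduced above, the sketch below assumes the standard least squares estimator β̂ = (X′X)⁻¹X′Y that minimizes the residual sum of squares, computed on simulated data at a much smaller scale (3 predictors rather than 169).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 48, 3                        # far fewer predictors than in the text
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept column
beta_true = np.array([1.0, 2.0, -1.0, 0.5])                 # made-up coefficients
y = X @ beta_true + 0.01 * rng.normal(size=n)

# Least squares estimate beta_hat = (X'X)^{-1} X'y, via a numerically
# stable solver rather than an explicit matrix inverse.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With low noise the estimated coefficients recover the generating values closely, and R2, the goodness-of-fit measure discussed next, is near 1.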
The goodness of fit (GOF) of the model is measured by the proportion of the variability that the model can explain, given by R2. The formulation and motivation of the use of R2 and other performance measures of GOF have been extensively addressed in the literature [84].
The creation of the multivariate regression model allowed 2-way interactions to be considered as predictors in the regression model. The steps to build the models are as follows: (1) The model was coded using proc glmselect in SAS 9.3. (2) The selection process consisted of stepwise forward selection (effects already in the model do not necessarily stay, as the fit is iteratively tested considering all candidate variables). The decision criteria consider the optimal value of the Akaike information criterion (AIC) and the adjusted R2 to assess the trade-off between the GOF of the model and the number of predictors in the system. The AIC value is given by AIC=2k−2 ln(L), where k is the number of parameters and L is the value of the likelihood function.
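The AIC-guided selection performed by proc glmselect can be sketched in simplified form. The version below is a plain greedy forward search (chosen effects stay, unlike the stepwise variant described above, where they may leave), with the Gaussian-model AIC written up to an additive constant; the data are simulated.

```python
import numpy as np

def aic(X, y):
    """AIC = 2k - 2 ln(L) for a Gaussian linear model fit by least squares,
    up to an additive constant that does not affect model comparison."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return 2 * k + n * np.log(rss / n)

def forward_select(X, y, names):
    """Simplified greedy forward selection on AIC (no removal step)."""
    chosen, best = [0], aic(X[:, [0]], y)       # start from the intercept
    improved = True
    while improved:
        improved = False
        for j in range(1, X.shape[1]):
            if j in chosen:
                continue
            cand = aic(X[:, chosen + [j]], y)
            if cand < best:                     # track the best single addition
                best, add_j, improved = cand, j, True
        if improved:
            chosen.append(add_j)
    return [names[j] for j in chosen]

rng = np.random.default_rng(3)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = 3 * X[:, 1] - 2 * X[:, 2] + 0.1 * rng.normal(size=n)   # x1, x2 informative
names = ["intercept", "x1", "x2", "x3", "x4"]
picked = forward_select(X, y, names)
```

Because adding an informative predictor lowers the residual sum of squares far more than the 2-per-parameter penalty, the two generating variables are selected.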
The value of the adjusted R2 is also presented in the figures.
Decision tree induction is a method of data analysis that maps the dependency relationships in the data, and it is sometimes subsumed under the category of cluster analysis. The goal with CART is to build a regression tree and predict radiosensitivity (SF2) based on the available gene expression profiles using recursive partitioning (rpart) in R. The following steps are followed to build the tree in rpart:
1. Splitting criteria: a split of a node A into two sons AL and AR is accepted if (5):
P(AL)r(AL)+P(AR)r(AR)≤P(A)r(A) (5)
where P(A) is the probability of A for future observations and r(A) is the risk of A. However, rpart considers measures of impurity or diversity for the node splitting criteria. Let f be the impurity function defined by (6):
where piA is the proportion of the elements in A that belong to class i. Therefore, since I(A)=0 when A is pure, f must be concave with f(0)=f(1)=0. The split with the maximal impurity reduction (by the Gini or information index) is used.
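The Gini impurity just mentioned can be computed directly from the class proportions in a node, as the following small sketch shows.

```python
# Gini impurity of a node: i(A) = sum_i p_iA * (1 - p_iA), where p_iA is the
# proportion of node A's elements in class i. It is 0 when the node is pure
# and maximal when the classes are evenly mixed.
def gini(class_counts):
    total = sum(class_counts)
    props = [c / total for c in class_counts]
    return sum(p * (1 - p) for p in props)

pure = gini([10, 0])    # node containing a single class
mixed = gini([5, 5])    # evenly mixed two-class node
```

A candidate split is scored by how much it reduces the weighted impurity of the child nodes relative to the parent, matching inequality (5) above.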
Supervised learning provides techniques to learn predictive models only from observations of a system and is therefore well suited to deal with the highly experimental nature of biological knowledge.
Breiman's Random Forests algorithm [77] builds each tree from a bootstrap sample like Bagging but modifies the node splitting procedure as follows: at each test node, K attributes are selected at random among all input attributes, an optimal candidate test is found for each of these attributes, and the best test among them is eventually selected to split the node.
The prediction model for radiosensitivity was built using the randomForest package in R (1922). The selected predictors (gene expression profiles), ranked in the order in which each variable reduced the prediction error, are presented in the figures.
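The random-forest construction described above (the document itself uses the R randomForest package) can be sketched with the scikit-learn equivalent. The data are simulated stand-ins matching the problem's shape (48 samples, 169 selected probesets), with only the first two features informative; max_features plays the role of the K candidate variables drawn at each node.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
# Stand-in for the expression matrix: 48 "cell lines" x 169 selected probesets
X = rng.normal(size=(48, 169))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=48)  # transformed-SF2 stand-in

# Each tree is grown on a bootstrap sample; at each node only a random subset
# of features (max_features, i.e., K) is considered for the split.
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt",
                           random_state=0).fit(X, y)
importances = rf.feature_importances_   # impurity-based ranking of predictors
```

The importance ranking mirrors the ordering of predictors by reduction in prediction error reported for the fitted model.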
The predictive models were validated in three independent datasets. Clinical outcomes are classified into responder (R) and non-responder (NR).
Rectal Cancer Dataset
Esophageal Cancer Dataset
Discussion
Herein, the microarray gene expression data processing and prediction model is built following four steps:
(1) Response variable transformation: SF2 for 48 cancer cell lines was transformed using a mathematical function to augment the lower and upper extremes (related to radiosensitive and radioresistant cell lines) of the radiosensitivity/radioresistance spectrum.
(2) Dimensionality reduction: candidate gene expression probesets were selected using a univariate regression analysis with statistical significance (p<=0.001)
(3) Model building: Breiman's Random Forest algorithm [77], which is an ensemble of decision trees, was trained using the learning sample of the 48 human cancer cell lines to predict the transformed SF2.
(4) Model calibration: statistically significant differences (p<0.05) were found between the median of the training set of the cell lines and the validation set of patients. We estimated the calibration parameters based on the calculated difference in medians.
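Steps (1) and (4) can be sketched as follows. The exact SF2 transformation is not specified above, so a logit-style transform is used purely as a placeholder assumption; the calibration follows step (4) literally, shifting predictions by the difference in medians.

```python
# Minimal sketch of steps (1) and (4). transform_sf2 is a PLACEHOLDER for the
# unspecified transformation that stretches the extremes of SF2 in (0, 1);
# calibrate implements the median-shift calibration described in step (4).
import math
from statistics import median

def transform_sf2(sf2):
    """Placeholder logit transform augmenting the extremes of SF2 in (0, 1)."""
    return math.log(sf2 / (1.0 - sf2))

def calibrate(predictions, training_scores):
    """Shift predictions so their median matches the training-set median."""
    shift = median(training_scores) - median(predictions)
    return [p + shift for p in predictions]

train = [-2.0, -0.5, 0.0, 0.5, 2.0]   # transformed SF2, training cell lines
preds = [1.0, 1.5, 2.0, 2.5, 3.0]     # raw model output on patients
assert median(calibrate(preds, train)) == median(train)
```

The median shift preserves the ordering of patient predictions, so the responder/non-responder discrimination is unchanged; only the operating point is aligned across cohorts.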
Thus, the above provides clinical support for a practical and novel assay to predict tumor radiosensitivity. Because DNA microarray gene expression values are measured differently among cohorts, calibration methods may be created to standardize validation across different sites. Further testing of this technology in larger clinical populations is also supported.
An implementation of the above is the model-based design and decision making of a multiple-input/multiple-output (MIMO) fuzzy logic controller (FLC). An FLC defines a static nonlinear control law by employing a set of fuzzy if-then rules (also known as fuzzy rules). A set of fuzzy rules is derived via knowledge acquisition and reflects the knowledge of an expert in the area where the decision making is made. Below is an introduction to basic FLC concepts, including the definitions of fuzzy sets, fuzzy input and output variables, and the fuzzy state space. Next, the types of FLCs are presented, which include the Takagi-Sugeno, Mamdani and sliding mode FLC models. Finally, the decision model is presented to select the most appropriate treatment based on the individual characteristics of the patient.
Classical sets are referred to as crisp sets in fuzzy set theory to differentiate them from fuzzy sets. A crisp set C of the universe of discourse, or domain D, can be represented by its characteristic function μc:
The function μc: D→{0,1} is the characteristic function of the set C if and only if, for all d∈D, μc(d)=1 when d∈C and μc(d)=0 otherwise.
Therefore, for crisp sets, every element d of D satisfies either d∈C or d∉C. This is not the case for fuzzy sets: given a fuzzy set F, it is not necessary that either d∈F or d∉F. The characteristic function can be generalized to a membership function, which assigns every d∈D a value from the unit interval [0,1] instead of from the two-element set {0,1}.
The membership function μF of a fuzzy set F is a function defined as μF:D→[0,1]. Every element d∈D has a membership degree μF(d)∈[0,1]. Thus, the fuzzy set F is completely determined by:
F={(d, μF(d))|d∈D}
Where D is a continuous domain and μF is a continuous membership function.
Herein, only fuzzy sets with convex membership functions are considered. A fuzzy set F is convex if and only if:
∀x, y∈D, ∀λ∈[0,1]: μF(λ·x+(1−λ)·y)≧min(μF(x), μF(y))
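A small numeric illustration of the convexity condition above: a triangular membership function is convex in the fuzzy-set sense, because the membership at any point between x and y never drops below the smaller of μ(x) and μ(y). The function name and parameters here are illustrative.

```python
# Triangular membership function with support (a, c) and peak at b;
# checks the convexity inequality mu(lam*x + (1-lam)*y) >= min(mu(x), mu(y)).
def triangular(a, b, c):
    """Membership function of a triangular fuzzy set with peak at b."""
    def mu(d):
        if d <= a or d >= c:
            return 0.0
        if d <= b:
            return (d - a) / (b - a)
        return (c - d) / (c - b)
    return mu

mu = triangular(0.0, 1.0, 2.0)
x, y, lam = 0.5, 1.5, 0.25
z = lam * x + (1 - lam) * y
assert mu(z) >= min(mu(x), mu(y))  # the convexity inequality holds
```

A bimodal membership function, by contrast, would violate the inequality in the valley between its two peaks, which is why only convex membership functions are considered here.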
The FLC described here uses input and output variables whose state variables are x1, x2, . . . , xn. Let X be a given closed interval of reals. A state variable xi taking values in fuzzy sets is a fuzzy state variable, and the set of its fuzzy values, called its term-set, is denoted TXi. The j-th fuzzy value of the i-th fuzzy state variable is denoted LXij. Each LXij is defined by a membership function:
LXij=∫XμLXij(x)/x
Where μLXij(x)/x denotes the degree of membership of the crisp value x of xi to the fuzzy value LXij of xi.
The fuzzy values LXij−1 and LXij+1 are referred to as the left and right neighbors of the fuzzy value LXij, respectively. It is also required that each fuzzy value share a certain degree of membership with its left and right neighbors:
supp(LXij−1)∩supp(LXij)≠∅
supp(LXij)∩supp(LXij+1)≠∅
Given a fuzzy state vector x=(x1, x2, . . . , xn)T, each xi takes some fuzzy value LXi∈TXi. Therefore, a fuzzy state vector can be written as LX=(LX1, LX2, . . . , LXn)T. Each fuzzy state variable takes its fuzzy values among the elements of a finite term-set; therefore, there is a finite number of different fuzzy state vectors, denoted LXi (for i=1, 2, . . . , M). The center of a fuzzy region LXi=(LX1i, LX2i, . . . , LXni)T is defined by the crisp state vector xi=(x1i, x2i, . . . , xni)T∈Xn, where the xki are crisp values of maximal membership, i.e., μLXki(xki)=1.
The general form of a model is given as {dot over (x)}=f(x, u), where x is the n×1 state vector and u is the input vector, and let u=g(x) be the control law. The closed-loop system is then {dot over (x)}=f(x, g(x)).
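The closed-loop form can be made concrete with a toy simulation. The plant f and control law g below are illustrative placeholders (a stable scalar system under proportional control), not the controller derived in this disclosure; the loop is integrated with forward Euler.

```python
# Sketch of the closed loop x_dot = f(x, g(x)), simulated with forward Euler.
# f and g are TOY examples, chosen only to show the substitution u = g(x).
def f(x, u):
    return -x + u          # toy plant dynamics: x_dot = -x + u

def g(x):
    return -0.5 * x        # toy control law: u = g(x)

def simulate(x0, dt=0.01, steps=1000):
    """Integrate x_dot = f(x, g(x)) from x0 with forward Euler."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x, g(x))
    return x

# The closed loop here is x_dot = -1.5x, so the state decays to the origin.
assert abs(simulate(1.0)) < 0.01
```

Substituting g into f is exactly the step performed symbolically above: the controlled system is analyzed as an autonomous system in x alone.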
Bayesian decision theory models are appropriate for groups of patients but are difficult to apply to individual patient factors. Fuzzy set theory effectively handles the deterministic uncertainty and subjective information of clinical decision making. Other decision-making approaches include neural networks, utility theory, statistical pattern matching, decision trees, rule-based systems, and model-based schemes. Fuzzy set theory has been successfully used alone or combined with neural networks and expert systems to solve challenging biomedical problems in practice.
Thus, in view of the above, the present disclosure seeks to develop an expert decision knowledge-based system that is able to effectively depict patient preferences and evaluate rectal cancer treatment options. The present disclosure further seeks to integrate patient-centered measures into a decision model that considers multiple criteria. This may be based on the following, non-limiting hypotheses:
A focus herein may be the selection of three cancer treatment regimens for stage II and stage III rectal cancer patients that will receive treatment for the first time (no metastasis):
There are 27 possible combinations (3×3×3=27) and 9 transition matrices for the 3 regimens. Semi-Gaussian functions are used to produce gradual changes of membership/probability (see Table 6). The essential elements of an effective cancer treatment regimen include:
E(h)=α·WS+β·WA+γ·WE (3)
where WS, WA and WE are the weight vectors for survival, adverse effects and treatment efficacy.
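Equation (3) can be transcribed directly. For illustration the weight vectors are reduced to scalars, and the coefficients α, β, γ are assumed to be nonnegative and to sum to 1; these example values are ours, not from the disclosure.

```python
# Transcription of equation (3): E(h) = alpha*WS + beta*WA + gamma*WE.
# Weight vectors are reduced to scalars here purely for illustration.
def expected_value(ws, wa, we, alpha, beta, gamma):
    """Weighted score of a regimen from survival (WS), adverse-effect (WA)
    and treatment-efficacy (WE) weights, per equation (3)."""
    return alpha * ws + beta * wa + gamma * we

# Example: survival weighted most heavily (alpha + beta + gamma = 1).
score = expected_value(ws=0.9, wa=0.4, we=0.7, alpha=0.5, beta=0.2, gamma=0.3)
assert abs(score - 0.74) < 1e-9
```

With vector-valued WS, WA and WE the same expression is applied componentwise, and regimens are ranked by the resulting E(h).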
In accordance with the methods above, the mathematical model to predict radiosensitivity is able to discriminate between responders and non-responders using expression data for 14 genes, as listed below. In addition, a subset of these 14 genes was also able to predict radiotherapy sensitivity with statistical significance. It is noted that the number of genes in the model is selected based on model performance, and the best model was achieved with the 14 genes below.
The 14 genes are:
For the random forest, all 14 genes are used to run the prediction, since several (random) trees with different subsets of genes are grown in order to obtain an aggregate prediction. However, we can rank the variables that are the best predictors (those that most reduce the prediction error).
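One common way to rank predictors by how much they reduce prediction error is permutation importance: shuffle one gene's values and measure the resulting increase in error. This is a hedged sketch of that general technique, not the specific ranking procedure of the randomForest package; all names are illustrative.

```python
# Permutation importance sketch: the increase in mean squared error after
# randomly permuting a single attribute (gene) column of X.
import random

def permutation_importance(predict, X, y, attr, rng=None):
    """Increase in MSE after permuting column `attr` of X."""
    rng = rng or random.Random(0)
    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    col = [row[attr] for row in X]
    rng.shuffle(col)
    permuted = [row[:attr] + [v] + row[attr + 1:] for row, v in zip(X, col)]
    return mse(permuted) - base

# A column the model ignores (or that is constant) has importance 0.
```

Genes whose permutation degrades the prediction most are the strongest predictors, which is the sense in which the 14 genes are ranked above.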
For the regression model, one can see every step of the modeling and how the performance changes as new variables are added to the model. A model may be built that considers only the first 5 steps.
The 14 genes are the output after running the multivariate regression (see,
Models are built on data from 48 cell lines of different tumors (breast, colon, etc.). Once a final model is selected, it is tested on patients that received radiation and, based on the gene expression of the tumor, evaluated for its ability to discriminate between responders and non-responders.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims priority to U.S. Provisional Patent Application No. 62/049,431, filed Sep. 12, 2014 and U.S. Provisional Patent Application No. 62/085,922, filed Dec. 1, 2014, each entitled “Supervised Learning Methods for the Prediction of Tumor Radiosensivity to Preoperative Radiochemotherapy.” The disclosures of the aforementioned U.S. Patent Applications are incorporated by reference in their entireties.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/049665 | 9/11/2015 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
62049431 | Sep 2014 | US | |
62085922 | Dec 2014 | US |