SYSTEMS AND METHODS FOR GENERATING IMPROVED DECISION TREES

Information

  • Patent Application 20230186106
  • Publication Number: 20230186106
  • Date Filed: June 30, 2016
  • Date Published: June 15, 2023
  • CPC: G06N5/01
  • International Classifications: G06N5/01
Abstract
A system and method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, comprising: generating a node including: receiving i) training data including data instances, each data instance having a plurality of attributes and a corresponding label, ii) instance weightings, iii) a valid domain for each attribute generated, and iv) an accumulated weighted sum of predictions for a branch of the decision tree; and associating one of a plurality of binary predictions of an attribute with each node including selecting the one of the plurality of binary predictions having a least amount of error; in accordance with a determination that the node includes child nodes, repeating the generating the node step for the child nodes; and in accordance with a determination that the node is a terminal node, associating the terminal node with an outcome classifier; and displaying the decision tree including the plurality of nodes arranged hierarchically.
Description
BACKGROUND

The present invention generally relates to generating decision trees and, more particularly, to systems and methods for generating improved decision trees.


SUMMARY

In one embodiment, there is a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, comprising: generating a node of the decision tree, including: receiving i) training data including data instances, each data instance having a plurality of attributes and a corresponding label, ii) instance weightings, iii) a valid domain for each attribute generated, and iv) an accumulated weighted sum of predictions for a branch of the decision tree; and associating one of a plurality of binary predictions of an attribute with each node including selecting the one of the plurality of binary predictions having a least amount of weighted error for the valid domain, the weighted error being based on the instance weightings and the accumulated weighted sum of predictions for the branch of the decision tree associated with the node; in accordance with a determination that the node includes child nodes, repeating the generating the node step for the child nodes; and in accordance with a determination that the node is a terminal node, associating the terminal node with an outcome classifier; and displaying the decision tree including the plurality of nodes arranged hierarchically.


In some embodiments, generating the node includes: foregoing generating the node that has a binary prediction that is inconsistent with a parent node.


In some embodiments, generating the node includes: updating instance weightings for child nodes including incorporating an acceleration term to reduce consideration for data instances having labels that are inconsistent with the tree branch and utilizing the instance weightings during the generating the node step repeated for the child nodes.


In some embodiments, generating the node includes: updating the valid domain and utilizing the valid domain during generation of the child nodes.


In some embodiments, generating the node includes: foregoing generating the node that has a sibling node with an identical prediction.


In one embodiment, there is a system for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, comprising: one or more memory units each operable to store at least one program; and at least one processor communicatively coupled to the one or more memory units, in which the at least one program, when executed by the at least one processor, causes the at least one processor to perform the steps of any of the preceding embodiments.


In one embodiment, there is a non-transitory computer readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform the steps of any of the preceding embodiments.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The foregoing summary, as well as the following detailed description of embodiments of the invention, will be better understood when read in conjunction with the appended drawings of an exemplary embodiment. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.


In the drawings:



FIG. 1A illustrates an exemplary data set plotted on a Cartesian graph.



FIG. 1B illustrates an exemplary decision tree generated based on the exemplary data set of FIG. 1A.



FIG. 2 illustrates an exemplary decision tree based on an exemplary ensemble method (e.g., AdaBoost) represented as an interpretable tree in accordance with at least some of the embodiments of the invention.



FIG. 3 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention.



FIG. 4 illustrates an exemplary decision tree generated according to at least one of the embodiments of the invention.



FIG. 5 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention.



FIG. 6 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention.



FIGS. 7A-7B illustrate a graphical example of the comparison of the absolute error values between LMB and different machine learning algorithms.



FIGS. 8A-8B illustrate a graphical example of the comparison of the absolute error values between MAB and different machine learning algorithms.



FIGS. 9A-9B illustrate a graphical example of the effects of the acceleration parameter on the training error (e.g., FIG. 9A) and testing error (e.g., FIG. 9B).



FIG. 10 illustrates an exemplary computing system for implementing at least some of the methods in accordance with at least some of the embodiments of the invention.



FIG. 11 illustrates a graphical representation of CART, Tree-Structured Boosting and Boosting, each given four training instances, according to at least one embodiment of the invention.



FIG. 12 illustrates in-sample and out-of-sample classification errors for different values of λ, according to at least one embodiment of the invention.



FIG. 13 illustrates a heatmap linearly interpolating the weights associated with each instance for a disjoint region defined by one of the four leaf nodes of the trained tree, according to at least one embodiment of the invention.





DETAILED DESCRIPTION

Referring to the drawings in detail, wherein like reference numerals indicate like elements throughout, there are shown in the Figures systems and methods for generating improved decision trees, generally designated, in accordance with an exemplary embodiment of the present invention.


Machine Learning has evolved dramatically in recent years, and now is being applied to a broad spectrum of problems from computer vision to medicine. Specifically in medicine, a query of “machine learning” on www.pubmed.gov returns approximately 10,000 articles. The transition to the clinic, however, has seen limited success, and there has been little dissemination into clinical practice. Machine learning algorithms generally have some degree of inaccuracy, which leaves a user (e.g., a physician) with the question of what to do when their intuition and experience disagree with an algorithm's prediction. Most users might ignore the algorithm in these cases, without being able to interpret how the algorithm computed its result. For this reason, some of the most widely used machine-learning based scoring or classification systems are highly interpretable. However, these systems generally trade off accuracy for interpretability. In medicine and other fields where misclassification has a high cost, average prediction accuracy is a desirable trait, but interpretability is as well. This is the reason why decision trees such as C4.5, ID3, and CART are popular in medicine. They can simulate the way physicians think by finding subpopulations of patients that all comply with certain rules and have the same classification. In a decision tree, these rules may be represented by nodes organized in a hierarchy, leading to a prediction.



FIG. 1A illustrates an exemplary data set plotted on a Cartesian graph. FIG. 1B illustrates an exemplary decision tree generated based on the exemplary data set of FIG. 1A. In FIG. 1A, data instances (e.g., data instance 101) in the data set are classified as either “Class A” or “Class B.” In FIG. 1B, a decision tree is generated by generating three decision nodes 102a-102c to partition the data instances into four subregions 103a-103d. Following the decision tree, a user can determine the proper classification for a data instance in a data set. Using the data instance 101 as an example, a user would first determine whether data instance 101 has an X2 value greater than 5. After determining that the X2 value for the data instance 101 exceeds 5 (because the X2 value is 9.5), the user would determine whether the data instance 101 has an X1 value less than 8. After determining that the X1 value for data instance 101 is less than 8 (because the X1 value is 5), the user would classify the data instance 101 as “Class A.”
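For concreteness, the traversal just described can be written as two nested conditionals. The sketch below is only a hypothetical rendering of the tree of FIG. 1B; the labels of the branches not discussed in the text are placeholders.

def classify(x1: float, x2: float) -> str:
    """Walk the decision tree of FIG. 1B for a single data instance."""
    if x2 > 5:              # first decision node
        if x1 < 8:          # second decision node on this branch
            return "Class A"   # subregion containing data instance 101
        return "Class B"       # placeholder label for the remaining subregion
    return "Class B"           # placeholder label for the x2 <= 5 subtree


print(classify(5, 9.5))  # prints "Class A", matching the walkthrough above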


The interpretability of decision trees can allow users to understand why a prediction is being made, providing an account of the reasons behind the prediction in case they want to override it. This interaction between users and algorithms can provide more accurate and reliable determinations (e.g., diagnoses) than either method alone, but it offers a challenge to machine learning: a tradeoff between accuracy and interpretability. Decision trees, although interpretable, are generally not among the most accurate algorithms. Instead, decision trees are generally outperformed by ensemble methods (the combination of multiple models) such as AdaBoost, Gradient boosting, and Random forests. Random forests in particular are widely used in medicine for their predictive power, although they can lack interpretability. In some embodiments described herein, there are systems and methods for generating decision trees that can have similar accuracy to ensemble methods while still being interpretable by users. In some embodiments, the systems and methods described herein are applicable in the field of medicine. However, it is contemplated that these systems and methods can be applicable to other fields besides medicine.


In some embodiments, ensemble methods, such as AdaBoost, combine weak learners (i.e., classifiers whose predictions may only be required to be slightly better than random guessing) via a weighted sum to produce a strong classifier. These ensemble methods may receive, as input, a set of labeled data X = {x_1, . . . , x_N} with corresponding binary labels y = {y_1, . . . , y_N} such that y_i ∈ {−1, +1}. Each instance x_i ∈ X lies in some d-dimensional feature space, which may include a mix of real-valued, discrete, or categorical attributes. In these embodiments, the labels are given according to some “true” function F*: X → {−1, +1}, with the goal being to obtain an approximation F of that true function from the labeled training data under some loss function L(y, F(x)).


When evaluating different processes, a notion of interpretability that is common in the medical community is used that considers a classifier to be interpretable if its classification can be explained by a conjunction of a few simple questions about the data. Under this definition, standard decision trees (such as those learned by ID3 or CART) are considered interpretable. In contrast, boosting methods and Random Forests produce an unstructured set of weighted hypotheses that can obfuscate correlations among the features, sacrificing interpretability for improved predictive performance. As described below, it is shown that embodiments of the invention generate trees that are interpretable, while obtaining predictive performance comparable to ensemble methods.


Representing a Model Produced by an Exemplary Ensemble Method as a Decision Tree


Generally, ensemble methods iteratively train a set of T decision stumps as the weak learners {h_1, . . . , h_T} in a stage-wise approach, where each subsequent learner favors correct classification of those data instances that are misclassified by previous learners. Each decision stump h_t may focus on a particular feature a_t of the vector x with a corresponding threshold to split the observations (e.g., a_t ≡ “x_j > 3.411”), and outputs a prediction h_t(x, a_t) ∈ {−1, +1}. Given a new data instance characterized by an observation vector x, these ensemble methods may predict the class label F(x) ∈ {−1, +1} for that instance as:






$$F(x) = \operatorname{sign}\!\left(\sum_{t=1}^{T} \beta_t\, h_t(x, a_t)\right). \tag{1}$$

where the weight β_t ∈ ℝ of each decision stump h_t depends upon its classification (training) error on the training data.


In some embodiments, the model produced by an exemplary ensemble method (e.g., AdaBoost) with decision stumps (one-node decision trees) can be represented as a decision tree. In some embodiments, such a model may be represented as an interpretable tree in accordance with at least some of the embodiments of the invention by constructing a tree with 2^T branches, where each path from the root to a terminal node contains T nodes. At each branch, from the top node to a terminal node, the stumps h_1, . . . , h_T from the ensemble method may be assigned, pairing each node of the decision tree with a particular attribute of the data and corresponding threshold. The final classification at each terminal node can be represented by Equation 1.


Since each h_t outputs a binary prediction, the model learned by the exemplary ensemble method can be rewritten as a complete binary tree with height T by assigning h_t to all internal nodes at depth t−1 with a corresponding weight of β_t. The decision at each internal node may be given by h_t(x, a_t), and the prediction at each terminal node may be given by F(x). Essentially, each path from the root to a terminal node may represent the same ensemble, but tracking the unique combination of predictions made by each h_t. FIG. 2 illustrates an exemplary decision tree based on an exemplary ensemble method (e.g., AdaBoost) represented as an interpretable tree in accordance with at least some of the embodiments of the invention.
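The rewriting just described can be sketched as a short recursion. The following is a minimal, hypothetical illustration (the names Node and ensemble_as_tree are not from the source), assuming each stump returns a prediction in {−1, +1} and that each terminal node stores sign(Σ_t β_t h_t(x)) for its particular combination of stump outcomes.

from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence

Stump = Callable[[Sequence[float]], int]   # h_t(x) in {-1, +1}


@dataclass
class Node:
    stump: Optional[Stump] = None        # internal node: decision h_t
    left: Optional["Node"] = None        # subtree where h_t(x) = -1
    right: Optional["Node"] = None       # subtree where h_t(x) = +1
    prediction: Optional[int] = None     # terminal node: sign of accumulated sum


def ensemble_as_tree(stumps: List[Stump], betas: List[float],
                     t: int = 0, acc: float = 0.0) -> Node:
    """Unroll a boosted stump ensemble into a complete binary tree of height T.

    Every root-to-terminal path contains the same stumps h_1..h_T; `acc` carries
    the weighted sum of the stump outcomes fixed by the path so far, so each
    terminal node stores F(x) = sign(sum_t beta_t * h_t(x)) for that path.
    """
    if t == len(stumps):
        return Node(prediction=1 if acc >= 0 else -1)
    return Node(
        stump=stumps[t],
        left=ensemble_as_tree(stumps, betas, t + 1, acc - betas[t]),
        right=ensemble_as_tree(stumps, betas, t + 1, acc + betas[t]),
    )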


The trivial representation of the model in the exemplary ensemble method as a tree, however, likely results in trees that are accurate but too large to be interpretable. Embodiments of the invention remedy this issue by 1) introducing diversity into the ensemble represented by each path through the tree via a membership function that accelerates convergence to a decision, and 2) pruning the tree in a manner that does not affect the tree's predictions, as explained below.


Generating a Decision Tree Using an Exemplary Method



FIG. 3 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention. In one embodiment, the method is referred to as MediBoost.


In some embodiments, the method may include generating a node. The step of generating a node may include receiving i) training data including data instances, each data instance having a plurality of attributes and a corresponding label, ii) instance weightings, iii) a valid domain for each attribute generated, and iv) an accumulated weighted sum of predictions for a branch of the decision tree. The step of generating a node may also include associating one of a plurality of binary predictions of an attribute with each node including selecting the one of the plurality of binary predictions having a least amount of error. The step of generating a node may also include determining whether a node includes child nodes or whether the node is a terminal node. The step of generating a node may also include, in accordance with a determination that the node includes child nodes, repeating the generating the node step for the child nodes. The step of generating a node may also include, in accordance with a determination that the node is a terminal node, associating the terminal node with an outcome classifier.
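A compact sketch of this recursion is given below. It is a hypothetical illustration rather than the claimed method itself: it assumes numeric attributes, axis-aligned threshold stumps as the binary predictions, a weighted-error criterion, and a dictionary-based node representation. The instance-weighting update with the acceleration term and the sibling-pruning step are omitted here and discussed further below.

import numpy as np


def generate_node(X, y, weights, valid_domain, acc_prediction, depth, max_depth):
    """Recursively generate one node of the decision tree (illustrative sketch).

    X: (N, d) attribute matrix; y: labels in {-1, +1}; weights: instance weightings
    (assumed to sum to 1); valid_domain: {attribute index: (lo, hi)} interval still
    reachable on this branch; acc_prediction: accumulated weighted sum of predictions
    for this branch.
    """
    if depth == max_depth:
        # Terminal node: outcome classifier from the accumulated weighted sum.
        return {"leaf": True, "label": 1 if acc_prediction >= 0 else -1}

    best = None
    for attr, (lo, hi) in valid_domain.items():
        for thr in np.unique(X[:, attr]):
            if not (lo < thr < hi):
                continue                                  # forgo splits outside the valid domain
            pred = np.where(X[:, attr] > thr, 1, -1)
            err = float(np.sum(weights * (pred != y)))    # weighted error of this binary prediction
            if best is None or err < best[0]:
                best = (err, attr, thr)

    if best is None:                                      # no admissible split remains
        return {"leaf": True, "label": 1 if acc_prediction >= 0 else -1}

    err, attr, thr = best
    beta = 0.5 * np.log(max(1.0 - err, 1e-12) / max(err, 1e-12))  # stump weight, as in boosting
    node = {"leaf": False, "attr": attr, "thr": thr}
    for side, sign in (("left", -1), ("right", +1)):
        domain = dict(valid_domain)
        lo, hi = domain[attr]
        domain[attr] = (lo, thr) if sign < 0 else (thr, hi)       # update the valid domain
        node[side] = generate_node(X, y, weights, domain,
                                   acc_prediction + sign * beta,
                                   depth + 1, max_depth)
    return node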


In some embodiments, the method may include displaying the decision tree including the plurality of nodes arranged hierarchically. FIG. 4 illustrates an exemplary decision tree generated according to at least one of the embodiments of the invention.


Turning back to FIG. 3, in some embodiments, the step of generating the node may include foregoing generating the node having a binary prediction that is inconsistent with a parent node.


In some embodiments, the step of generating the node may include updating instance weightings for child nodes including incorporating an acceleration term to reduce consideration for data instances having labels that are inconsistent with the tree branch and utilizing the instance weightings during the generating the node step repeated for the child nodes.


In some embodiments, the step of generating the node may include updating the valid domain and utilizing the valid domain during generation of the child nodes.


In some embodiments, the step of generating the node may include foregoing generating the node having a sibling node with an identical outcome classifier.


To users who want interpretable models, the fact that a decision tree is generated in accordance with embodiments of the invention via boosting and not the maximization of information gain (as standard in decision tree induction) is irrelevant. As long as the decision nodes represent disjoint subpopulations and all observations within a terminal node have the same classification, the trees can be highly interpretable. Traditional decision trees do recursive partitioning; each node of the tree further subdivides the observed data, so that as one goes farther down the tree, each branch has fewer and fewer observations. This strongly limits the possible depth of the tree as the number of available observations typically shrinks exponentially with tree depth. In this ‘greedy search’ over data partitions, assigning an observation on the first nodes of the tree to incorrect branches can greatly reduce their accuracy. In AdaBoost, and in its trivial representation as a tree, although different observations are weighted differently at each depth (based on classification errors made at the previous level), no hard partitioning is performed; all observations contribute to all decision nodes. Having all observations contribute equally at each branch, as is done by boosting methods, however, might result in trees that are accurate but too large to be interpretable. In fact, it is not unusual for AdaBoost or Gradient Boosting (an AdaBoost generalization to different loss functions) to combine hundreds of stump decisions.


To remedy these issues, at least some embodiments of the invention (e.g., embodiments implementing MediBoost) weight how much each observation contributes to each decision node, forming a relative “soft” recursive partition, similar to decision trees grown with fuzzy logic in which observations have a “degree of membership” in each node. These embodiments merge the concepts of decision trees, boosting and fuzzy logic by growing decision trees using boosting with the addition of a membership function that accelerates its convergence at each individual branch, and enables pruning of the resulting tree in a manner that does not affect its accuracy. These embodiments thus give the best of both worlds: they do not do a hard recursive partitioning, but they still grow a single interpretable tree via boosting. It is the combination of the soft assignment of observations to decision tree splits through the membership function and the boosting framework to minimize a loss function that provides the improvement in accuracy over regular decision trees.


Because at least some embodiments at their core are a boosting framework, different boosting methods, including Gradient Boosting and Additive Logistic Regression with different loss functions, can be used to construct specific decision tree induction algorithms. As discussed in more detail below, two exemplary embodiments of the invention are described in further detail: 1) MediAdaBoost (MAB), using Additive Logistic Regression, and 2) Likelihood MediBoost (LMB), using Gradient Boosting. MAB, similar to AdaBoost, can be obtained by minimizing an exponential loss function using Additive Logistic Regression with the addition of a membership function. MAB can find each node of the decision tree not only by focusing on the data instances that previous nodes have misclassified, as in AdaBoost, but also by focusing more on instances with higher probability of belonging to that node, as in fuzzy logic. Conversely, LMB can be obtained using Gradient Boosting by finding the split that minimizes the quadratic error of the first derivative of the binomial log-likelihood loss function and determining the coefficients according to the same framework (see supplementary materials). Reinterpreting MediBoost using Gradient Boosting can not only allow different loss functions, but also provide the necessary mechanisms to add regularization beyond penalizing for the size of the tree (as is sometimes done in regular decision trees) in order to obtain better generalization accuracy. Additionally, embodiments of the invention can easily be extended to regression.


Generating a Decision Tree Using an Alternative Exemplary Method



FIG. 5 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention. In one embodiment, the method is referred to as MediAdaBoost (MAB).


At each node of the tree, MAB can train a weak learner to focus on the data instances that previous nodes have misclassified, as in AdaBoost. In addition, MAB can incorporate an acceleration term (second terms in lines 6a and 6b) to penalize instances whose labels disagree with the tree branch, focusing each branch more on instances that seem to have higher probability of following the corresponding path, as in fuzzy logic. While growing the tree, MAB can also prune (line 11) impossible paths based on previous decisions on the path to the root (lines 7-8).


This algorithm can be obtained if the expected value of the exponential loss function L(F) = E(exp(−yF(x))) is minimized with respect to the ensemble classification rule F(x) using an additive logistic regression model via Newton-like updates. Some embodiments of the invention include the acceleration term (A) based on a membership function to diversify the ensembles and speed their convergence.


In some embodiments, L(F) = E(e^{−yF(x)}) is the loss function of the tree at an arbitrary terminal node N_T. Assuming a current estimate of the function F_{T−1}(x) corresponding to a tree of depth T−1, the estimate can be improved by adding an additional split at one of the terminal nodes N_{T−1} that will define two more terminal nodes, children of N_{T−1}, using an additive step:






$$F_T(x) = F_{T-1}(x) + \beta_T\, h_T(x, a_T), \tag{2}$$

where β_T is a constant, and h_T(x, a_T) ∈ {−1, +1} represents the classification of each observation with decision predicate a_T to split the observations at N_T. The new loss function can be:






$$L\big(F_{T-1}(x) + \beta_T h_T(x, a_T)\big) = \mathbb{E}\Big(\exp\big(-y F_{T-1}(x) - y \beta_T h_T(x, a_T)\big)\Big), \tag{3}$$


Taking into account that F(x) is fixed and expanding exp(−yF_{T−1}(x) − yβ_T h_T(x, a_T)) around h_T = h_T(x, a_T) = 0 (for some predicate a_T) as a second-order polynomial (for a fixed β_T and x) we obtain:






$$L\big(F_{T-1}(x) + \beta_T h_T\big) \approx \mathbb{E}\Big(e^{-y F_{T-1}(x)}\big(1 - y \beta_T h_T + \beta_T^2 y^2 h_T^2/2\big)\Big), \tag{4}$$


Since y ∈ {−1, +1} and h_T ∈ {−1, +1}, we have y² = 1 and h_T² = 1, so:






$$L\big(F_{T-1}(x) + \beta_T h_T\big) \approx \mathbb{E}\Big(e^{-y F_{T-1}(x)}\big(1 - y \beta_T h_T + c^2/2\big)\Big), \tag{5}$$


where c is a constant. Minimizing Equation 5 with respect to hT for a fixed x yields:














$$h_T = \arg\min_{h_T}\; \mathbb{E}_w\!\left(1 - y\,\beta_T\, h_T(x, a) + c^2/2 \;\middle|\; x\right), \tag{6}$$




where 𝔼_w(⋅|x) refers to the weighted conditional expectation in which the weight of each instance (x_i, y_i) is given by






$$w(i) = e^{-y_i F_{T-1}(x_i)}\, M(x_i, T-1),$$


with an acceleration term M(x, T−1) that emphasizes instances with predicted labels that agree with the corresponding branch of the tree. The introduction of this acceleration term can be a key step that leads to MediAdaBoost, differentiating these embodiments of the invention from Discrete AdaBoost, and making each path through the tree converge to a different ensemble of nodes.
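A small sketch of this weighting follows. It assumes the per-instance count of path disagreements has already been tallied; the function name and arguments are illustrative only.

import numpy as np


def mab_instance_weights(y, F_prev, disagreements, A):
    """w(i) = exp(-y_i * F_{T-1}(x_i)) * M(x_i, T-1), with the membership term M
    penalizing an instance by exp(-A) for every node on the path whose prediction
    disagrees with the branch the instance would follow."""
    membership = np.exp(-A * disagreements)     # acceleration/membership term M(x_i, T-1)
    w = np.exp(-y * F_prev) * membership        # exponential boosting weight times membership
    return w / w.sum()                          # normalize to a distribution


# With A = 0 the membership term is 1 everywhere, the weights reduce to the usual
# AdaBoost weights, and every path converges to the same ensemble.
y = np.array([1, -1, 1, 1])
F_prev = np.array([0.4, 0.4, -0.2, 0.4])
print(mab_instance_weights(y, F_prev, disagreements=np.array([0, 1, 2, 0]), A=1.0))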


If β_T > 0, Equation 6 is equivalent to











$$h_T = \arg\min_{h_T}\; \mathbb{E}_w\!\left(\big(y - h_T(x, a)\big)^2 \;\middle|\; x\right), \tag{7}$$




where we use the fact that y² = 1 and (h_T(x, a))² = 1.


Equation 7 indicates that in order to minimize the expected loss, h_T(x, a_T) can be obtained using a weighted least squares minimization over the training data. Given h_T(x, a_T), β_T is obtained as:











$$\beta_T = \arg\min_{\beta_T}\; \mathbb{E}_w\!\left(e^{-y\,\beta_T\, h_T(x, a_T)}\right), \tag{8}$$




which can be shown to be:











$$\beta_T = \frac{1}{2}\,\log\!\left(\frac{1 - \mathrm{err}_T}{\mathrm{err}_T}\right), \qquad \mathrm{err}_T = \mathbb{E}_w\big(\mathbb{1}\big[y \neq h_T(x, a_T)\big]\big), \tag{9}$$




Therefore, the new function at NT is given by








$$F_T(x) = F_{T-1}(x) + \beta_T\, h_T(x, a_T),$$




where h_T(x, a_T) is the decision stump that results from solving Equation 7. Let {N_1, . . . , N_T} denote the path from the root node to N_T. To yield MAB, the acceleration term is set to be:











$$M(x, T-1) = \exp\!\left(-A \sum_{t=1}^{T-1} \mathbb{1}\big[h_t(x, a_t) \neq b_t\big]\right), \tag{10}$$




where A is an acceleration constant and








b_t ∈ {−1, +1} denotes the direction of the branch taken at node N_t along the path from the root to N_T,




thereby penalizing the weight of x by e^{−A} each time the instance may be predicted to belong to a different path. If A is set to 0, then every path through the resulting tree can be identical to the AdaBoost ensemble for the given problem. As the constant A increases, the resulting MAB tree can converge faster and the paths through the tree represent increasingly diverse ensembles. MAB may also prune branches that are impossible to reach by tracking the valid domain for every attribute and eliminating impossible-to-follow paths during the training process. As a final step, post-pruning the tree bottom-up can occur by recursively eliminating the parent nodes of leaves with identical predictions, further compacting the resulting decision tree.
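The final post-pruning step can be sketched as a short bottom-up recursion over the dictionary-based nodes used in the earlier sketch; since both children of a collapsed parent carry the same outcome classifier, the pruning cannot change any prediction. The function name is illustrative.

def prune_identical_leaves(node):
    """Bottom-up post-pruning: collapse any parent whose two children are terminal
    nodes with identical predictions into a single terminal node."""
    if node["leaf"]:
        return node
    node["left"] = prune_identical_leaves(node["left"])
    node["right"] = prune_identical_leaves(node["right"])
    if (node["left"]["leaf"] and node["right"]["leaf"]
            and node["left"]["label"] == node["right"]["label"]):
        return {"leaf": True, "label": node["left"]["label"]}
    return node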


In this section, MediBoost can be generalized to any loss function using the gradient boosting framework. As in the case of MAB, assuming a current estimate of the function F_{T−1}(x) corresponding to a tree of depth T−1, this estimate can be improved by adding an additional split at one of the terminal nodes N_{T−1} that will define two more terminal nodes, children of N_{T−1}, using an additive step. The function at depth T is then given by F_T(x) = F_{T−1}(x) + β_T h_T(x, a_T). Additionally, we can define a loss function over one observation (x_i, y_i) as:






$$\ell\big(y_i, F_T(x_i)\big) = \ell\big(y_i, F_{T-1}(x_i) + \beta_T h_T(x_i, a_T)\big), \tag{11}$$


and a loss function over all observations as











$$L = \sum_{i=1}^{N} \ell\big(y_i, F_{T-1}(x_i) + \beta_T h_T(x_i, a_T)\big)\, M(x_i, T-1), \tag{12}$$




where M(xi, T−1) is a membership function of the observation xi at node NT-1 as defined in the previous section. There is interest in finding the {βT, aT} that minimize Equation 12, which can be interpreted as the expected value of the loss function over a discrete number of observations.


Using a greedy stage-wise approach to minimize Equation 12, β_T h_T(x_i, a_T) can be interpreted as the best greedy step to minimize Equation 12 under the constraint that the step direction h_T(x_i, a_T) is a decision stump parameterized by a_T. Therefore, using gradient steepest descent, the β_T h_T(x_i, a_T) is found that is most correlated to the negative gradient












(


y
i

,


F
T

(

x
i

)


)






F
T

(

x
i

)



.




One solution is to find the predicate a_T and weight β_T by solving











$$\{a_T, \beta_T\} = \arg\min_{a, \beta} \sum_{i=1}^{N}\left[-\frac{\partial \ell\big(y_i, F_T(x_i)\big)}{\partial F_T(x_i)} - \beta\, h(x_i, a)\right]^2 M(x_i, T-1); \tag{13}$$







Equation 13 is equivalent to finding aT that minimizes the quadratic loss function of a regression tree fitted to the pseudo-response












$$-\,\frac{\partial \ell\big(y_i, F_T(x_i)\big)}{\partial F_T(x_i)}.$$




Once a_T has been found to yield the weak learner h_T(x_i) = h_T(x_i, a_T), the quadratic Taylor expansion of Equation 11 can be used:












$$\ell\big(y_i, F_T(x_i)\big) = \ell\big(y_i, F_{T-1}(x_i)\big) + \frac{\partial \ell\big(y_i, F_{T-1}(x_i)\big)}{\partial F_{T-1}(x_i)}\,\beta_T h_T(x_i) + \frac{1}{2}\,\frac{\partial^2 \ell\big(y_i, F_{T-1}(x_i)\big)}{\partial F_{T-1}(x_i)^2}\,\big(\beta_T h_T(x_i)\big)^2 \tag{14}$$







in combination with Equation 12 to obtain the value of β_T. Additionally, defining









$$g_i = \frac{\partial \ell\big(y_i, F(x_i)\big)}{\partial F(x_i)} \qquad \text{and} \qquad k_i = \frac{\partial^2 \ell\big(y_i, F(x_i)\big)}{\partial F(x_i)^2},$$




Equation 12 can be rewritten as:









$$L = \sum_{i=1}^{N}\left[\ell\big(y_i, F(x_i)\big) + g_i\,\beta\, h(x_i, a) + \frac{1}{2}\, k_i\,\big(\beta\, h(x_i, a)\big)^2\right] M(x_i). \tag{15}$$







Finally, realizing that:










$$\beta\, h(x_i, a) = \sum_{j=1}^{2} c_j\, \mathbb{1}\big(x_i \in R_j\big) \tag{16}$$







where R_j denotes the two regions represented by the stump h(x_i, a) and c_j are constants. Therefore, substituting Equation 16 into Equation 15 gives the following:









$$L = \sum_{i=1}^{N}\left[\ell\big(y_i, F(x_i)\big) + g_i \sum_{j=1}^{2} c_j\, \mathbb{1}\big(x_i \in R_j\big) + \frac{1}{2}\, k_i \left(\sum_{j=1}^{2} c_j\, \mathbb{1}\big(x_i \in R_j\big)\right)^{2}\right] M(x_i). \tag{17}$$







There is interest in finding the c_j that minimize Equation 17 given the split obtained using Equation 13. The terms that do not depend on c_j can be removed from this optimization problem. After a few rearrangements, a new loss function is obtained:










$$L = \sum_{j=1}^{2}\left[\left(\sum_{i \in I_j} g_i\, M(x_i)\right) c_j + \frac{1}{2}\left(\sum_{i \in I_j} k_i\, M(x_i)\right) c_j^{2}\right], \tag{18}$$







where I_j represents the observations that belong to R_j. Finally, writing the optimization problem explicitly and calling G_j = Σ_{i∈I_j} g_i M(x_i) and K_j = Σ_{i∈I_j} k_i M(x_i):











$$\{c_j\}_{j=1}^{2} = \arg\min_{c_j} \sum_{j=1}^{2}\left[G_j\, c_j + \frac{1}{2}\, K_j\, c_j^{2}\right] \tag{19}$$







whose solution is:










$$c_{-} = -\frac{G_{-}}{K_{-}} \qquad\qquad c_{+} = -\frac{G_{+}}{K_{+}}, \tag{20}$$




where in the pair of Equation 20 we have substituted j by (+) = Right and (−) = Left corresponding to the observations on the right or left nodes that are children of N_{T−1}.


In some embodiments, regularization may be utilized. In these embodiments, regularization generally limits the depth of the decision tree. An L2-norm penalization on c_j can be added to Equation 18 as follows:









$$\arg\min_{c_j} \sum_{j=1}^{2}\left[G_j\, c_j + \frac{1}{2}\, K_j\, c_j^{2} + \lambda\, c_j^{2}\right] \tag{21}$$







with the subsequent coefficients given by:







$$c_{-} = -\frac{G_{-}}{K_{-} + \lambda} \qquad\qquad c_{+} = -\frac{G_{+}}{K_{+} + \lambda},$$




Finally, the concept of shrinkage or learning rate, LR, regularly used in gradient Boosting can also be applied to MediBoost. In this case, the pair of Equations above will be given by:










$$c_{-} = -\,\mathrm{LR}\,\frac{G_{-}}{K_{-} + \lambda} \qquad\qquad c_{+} = -\,\mathrm{LR}\,\frac{G_{+}}{K_{+} + \lambda}, \tag{22}$$




where LR, the shrinkage or learning rate constant, can be, for example, 0.1. Each of these regularization methods can be used independently of the others.
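As a minimal sketch (with illustrative names not taken from the source), the regularized coefficient of one child node can be computed from the per-instance first and second derivatives g_i and k_i and the membership values M(x_i) of the observations falling in that node:

import numpy as np


def leaf_coefficient(g, k, membership, learning_rate=1.0, lam=0.0):
    """c_j = -LR * G_j / (K_j + lambda), with G_j = sum_i g_i M(x_i) and
    K_j = sum_i k_i M(x_i) taken over the observations in region R_j."""
    G = float(np.sum(g * membership))
    K = float(np.sum(k * membership))
    return -learning_rate * G / (K + lam)

Setting learning_rate = 1.0 and lam = 0.0 recovers the unregularized coefficients of Equation 20.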


Generating a Decision Tree Using an Alternative Exemplary Method



FIG. 6 illustrates a method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, according to at least one embodiment of the invention. In one embodiment, the method is referred to as LikelihoodMediBoost (LMB).


Gradient Boosting with binomial log-likelihood as the loss function typically outperforms embodiments employing AdaBoost, resulting in a more accurate algorithm with fewer branches. Using the embodiments described above, LMB can be derived using deviance as the loss function:






$$\ell\big(y_i, F(x_i)\big) = \log\big(1 + e^{-2 y_i F(x_i)}\big), \qquad y \in \{-1, +1\}$$


Because a log-likelihood loss was assumed, F(xi) can also be interpreted as:











$$F(x_i) = \frac{1}{2}\,\log\!\left(\frac{\Pr(y = 1 \mid x_i)}{\Pr(y = -1 \mid x_i)}\right), \tag{23}$$







where Pr(y=1|xi) and Pr(y=−1|xi) are the probabilities of y being equal to 1 or −1, respectively. This can justify classifying yi as sign(F(xi)).
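Inverting Equation 23 gives the class probability that corresponds to a node's accumulated value; the helper below is a small illustrative check, not part of the claimed method.

import math


def prob_positive(F_x):
    """Invert Equation 23: F(x) = 0.5*log(p/(1-p))  =>  p = Pr(y = 1 | x) = 1/(1 + exp(-2 F(x)))."""
    return 1.0 / (1.0 + math.exp(-2.0 * F_x))


print(prob_positive(0.0))  # 0.5: F(x) = 0 means both classes are equally likely
print(prob_positive(1.0))  # ~0.88, classified as +1 since sign(F(x)) = +1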


Finding the first and second derivative of the loss function provides the following:












$$g_i = \frac{\partial \ell\big(y_i, F(x_i)\big)}{\partial F(x_i)} = \frac{-2\, y_i}{1 + e^{2 y_i F(x_i)}} \tag{24}$$

$$k_i = \frac{\partial^2 \ell\big(y_i, F(x_i)\big)}{\partial F(x_i)^2} = \lvert g_i\rvert\,\big(2 - \lvert g_i\rvert\big) \tag{25}$$




and using the definitions of Gj and Kj together with Equation 22, the coefficients can be rewritten at each split of LMB as:










$$c_{-} = -\,\mathrm{LR}\,\frac{G_{-}}{K_{-} + \lambda} \tag{26}$$

$$c_{+} = -\,\mathrm{LR}\,\frac{G_{+}}{K_{+} + \lambda},$$




where 0<LR≤1 is the shrinkage or learning rate regularization parameter and λ is the regularization on the weights assuming L2 norm penalty. If LR=1 and λ=0 then no regularization is applied. With these clarifications and using a similar membership function as in the case of MAB we are ready to write the LMB algorithm as shown below.
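A minimal sketch of an LMB leaf value follows, assuming the instances reaching the child node and their membership values M(x_i) have already been identified; names and arguments are illustrative. It combines the derivatives of Equations 24-25 with the regularized update of Equation 26, analogous to the earlier coefficient sketch.

import numpy as np


def lmb_leaf_value(y, F_prev, membership, learning_rate=1.0, lam=0.0):
    """Leaf coefficient c for the instances reaching one child node.

    g_i = -2 y_i / (1 + exp(2 y_i F_{T-1}(x_i)))   (Equation 24, evaluated at the current model)
    k_i = |g_i| (2 - |g_i|)                        (Equation 25)
    c   = -LR * G / (K + lambda), with G = sum g_i M(x_i), K = sum k_i M(x_i)   (Equation 26)
    """
    g = -2.0 * y / (1.0 + np.exp(2.0 * y * F_prev))
    k = np.abs(g) * (2.0 - np.abs(g))
    G = float(np.sum(g * membership))
    K = float(np.sum(k * membership))
    return -learning_rate * G / (K + lam)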


Results


To demonstrate that embodiments of the invention have similar accuracy as ensemble methods while maintaining interpretability like regular decision trees, the two exemplary embodiments were evaluated on 13 diverse classification problems from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/) covering different subfields of medicine (see supplementary materials); these datasets represented all available binary medical classification problems in the UCI Repository.


For these evaluations, MediAdaBoost (MAB) was chosen because of its simplicity, and LikelihoodMediBoost (LMB) because the binomial log-likelihood loss function has been shown to outperform AdaBoost in classification problems. The performances of LMB and MAB were compared with ID3 (our own implementation), CART, LogitBoost and Random Forests (Matlab R2015). All results were averaged over 5-fold cross-validation on the data sets, with hyper-parameters chosen in an additional 5-fold cross-validation on the training folds (see the supplementary materials for details).


As shown in Table 1, LMB with its default settings is better than its decision tree cousins (ID3 and CART) on 11 out of the 13 problems.













TABLE 1

LMB vs      ID3    CART    LogitBoost    Random Forests
wins         12      11             6                 4
losses        1       1             5                 8
ties          0       1             2                 1










FIGS. 7A-7B illustrate a graphical example of the comparison of the absolute error values between LMB and different machine learning algorithms. These results are statistically significant in a two-way sign-to-sign test. In one of the problems where the default LMB was not superior, the standard decision trees also outperformed the ensemble methods. In a three-way ANOVA comparison of the cross-validation errors between LMB, ID3 and CART across all problems, LMB was significantly better than ID3 (p = 10^−8) and CART (p = 0.014). In comparison to the ensemble methods, LMB was indistinguishable from LogitBoost (p = 0.44) and worse than Random Forests (p = 0.0004). Comparison of algorithms using ANOVA tests has been pointed out to be less robust than the Friedman test, and so we also performed that test. LMB was significantly better than ID3 (p = 0.006) and CART (p = 0.09) at the 90% confidence interval in a three-way Friedman test, but not significantly different from either LogitBoost (p = 0.97) or Random Forests (p = 0.30). Similar results were obtained when LMB was run with a learning rate of 0.1.


Additionally, as shown in Table 2, MAB gave results similar to, though slightly worse (not statistically significantly so) than, those obtained using LMB.













TABLE 2

MAB vs      ID3    CART    LogitBoost    Random Forests
wins         11      10             5                 4
losses        1       2             6                 8
ties          1       1             2                 1










FIGS. 8A-8B illustrate a graphical example of the comparison of the absolute error values between MAB and different machine learning algorithms. In these figures, points above the black line indicate results where MAB was better. As illustrated, MAB is significantly better than decision tree algorithms and indistinguishable from ensemble methods.


Therefore, based on both ANOVA and Friedman tests, we conclude that MediBoost in its various forms is significantly better than current decision tree algorithms and has comparable accuracy to ensemble methods.



FIGS. 9A-9B illustrate a graphical example of the effects of the acceleration parameter on the training error (e.g., FIG. 9A) and testing error (e.g., FIG. 9B). This parameter was applied for four different data sets and two different algorithms (MAB and LMB). In all cases, the training error decreases as the acceleration parameter increases (accelerating the convergence of the algorithm) while the testing error improves or remains the same.


Additionally, some embodiments of the invention retain the interpretability of conventional decision trees (see, e.g., FIG. 4). The interpretability of some embodiments of the invention is not only the result of representing the model as a tree, but also of the significant shrinkage obtained compared to boosting. This shrinkage is due to the introduction of the membership function controlled by the acceleration parameter, the elimination of impossible paths that results from the introduction of fuzzy logic, and a post-training pruning approach that does not change the model's accuracy. Once a deep decision tree is grown (e.g., with a depth of 15) in accordance with embodiments of the invention, all branches that do not change the sign of the classification of their parent nodes can be pruned without loss of accuracy. This pruning approach has been used to represent the decision tree in FIG. 4. This tree is not only accurate, but also interpretable; physicians can readily incorporate it into their practice without knowledge of the underlying machine learning algorithm.


To summarize, utilizing embodiments of the invention results in trees that retain all the desirable traits of their decision tree cousins while obtaining accuracy similar to ensemble methods. The approach thus has the potential to be the best off-the-shelf classifier in fields such as medicine, where both interpretability and accuracy are of paramount importance, and as such to change the way clinical decisions are made.



FIG. 10 illustrates an exemplary computing system for implementing at least some of the methods in accordance with at least some of the embodiments of the invention. Some embodiments of the present invention may be implemented as programmable code for execution by computer system 1000. However, it is contemplated that other embodiments of the present invention may be implemented using other computer systems and/or computer architectures.


Computer system 1000 may include communication infrastructure 1011, processor 1012, memory 1013, user interface 1014 and/or communication interface 1015.


Processor 1012 may be any type of processor, including but not limited to a special purpose or a general-purpose digital signal processor. Processor 1012 may be connected to a communication infrastructure (e.g. a data bus or computer network) either via a wired connection or a wireless connection. Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the art how to implement the invention using other computer systems and/or computer architectures.


Memory 1013 may include at least one of: random access memory (RAM), a hard disk drive and a removable storage drive, such as a floppy disk drive, a magnetic tape drive, or an optical disk drive, etc. The removable storage drive reads from and/or writes to a removable storage unit. The removable storage unit can be a floppy disk, a magnetic tape, an optical disk, etc., which is read by and written to a removable storage drive. Memory 1013 may include a computer usable storage medium having stored therein computer software programs and/or data to perform any of the computing functions of computer system 1000. Computer software programs (also called computer control logic), when executed, enable computer system 1000 to implement embodiments of the present invention as discussed herein. Accordingly, such computer software programs represent controllers of computer system 1000.


Memory 1013 may include one or more datastores, such as flat file databases, hierarchical databases or relational databases. The one or more datastores act as a data repository to store data such as flat files or structured relational records. While embodiments of the invention may include one or more of the memory or datastores listed above, it is contemplated that embodiments of the invention may incorporate different memory or data stores that are suitable for the purposes of the described data storage for computer system 1000.


User interface 1014 may be a program that controls a display (not shown) of computer system 1000. User interface 1014 may include one or more peripheral user interface components, such as a keyboard or a mouse. The user may use the peripheral user interface components to interact with computer system 1000. User interface 1014 may receive user inputs, such as mouse inputs or keyboard inputs from the mouse or keyboard user interface components.


Communication interface 1015 may allow data to be transferred between computer system 1000 and an external device. Examples of communication interface 1015 may include a modem, a network interface (such as an Ethernet card), a communication port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc. Data transferred via communication interface 1015 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being transmitted or received by communication interface. These signals are provided to or received from communication interface 1015 and the external device via a network.


In at least one embodiment, there is included one or more computers having one or more processors and memory (e.g., one or more nonvolatile storage devices). In some embodiments, memory or computer readable storage medium of memory stores programs, modules and data structures, or a subset thereof for a processor to control and run the various systems and methods disclosed herein. In one embodiment, a non-transitory computer readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform one or more of the methods disclosed herein.


Exemplary Embodiments

Additive models, such as produced by gradient boosting, and full interaction models, such as classification and regression trees (CART), are algorithms that have been investigated largely in isolation. However, these models exist along a spectrum, identifying deep connections between these two approaches. In some embodiments, there is a technique or method called tree-structured boosting for creating hierarchical ensembles, and this method can produce models equivalent to CART or gradient boosting by varying a single parameter. Notably, tree-structured boosting can produce hybrid models between CART and gradient boosting that can outperform either of these approaches.


1 Introduction

CART analysis is a statistical learning technique, which can be applicable to numerous other fields for its model interpretability, scalability to large data sets, and connection to rule-based decision making. CART can build a model by recursively partitioning the instance space, labeling each partition with either a predicted category (in the case of classification) or real value (in the case of regression). CART models can often have lower predictive performance than other statistical learning models, such as kernel methods and ensemble techniques. Among the latter, boosting methods were developed as a means to train an ensemble of weak learners (often CART models) iteratively into a high-performance predictive model, albeit with a loss of model interpretability. In particular, gradient boosting methods focus on iteratively optimizing an ensemble's prediction to increasingly match the labeled training data. These two categories of approaches, CART and gradient boosting, have been studied separately, connected primarily through CART models being used as the weak learners in boosting. However, there is a deeper and surprising connection between full interaction models like CART and additive models like gradient boosting, showing that the resulting models exist upon a spectrum. In particular, described herein are the following contributions:

    • Introduction of tree-structured boosting (TSB) as a new mechanism for creating a hierarchical ensemble model that recursively partitions the instance space, forming a perfect binary tree of weak learners. Each path from the root node to a leaf represents the outcome of a gradient-boosted ensemble for a particular partition of the instance space.
    • Proof that TSB generates a continuum of single tree models with accuracy between CART and gradient boosting, controlled via a single tunable parameter of the algorithm. In effect, TSB bridges between CART and gradient boosting, identifying new connections between additive and full interaction models.


This result is verified empirically, showing that this hybrid combination of CART and gradient boosting can outperform either approach individually in terms of accuracy and/or interpretability. The experiments also provide further insight into the continuum of models revealed by TSB.


2 Background on CART and Boosting

Assume there is a training set (X, y) = {(x_i, y_i)}_{i=1}^{N}, where each d-dimensional x_i ∈ X ⊆ 𝒳 has a corresponding label y_i ∈ Y, drawn i.i.d. from an unknown distribution D. In a classification setting, Y = {±1}; in regression, Y = ℝ. One goal is to learn a function F: X → Y that will perform well in predicting the label on new examples drawn from D. CART analysis recursively partitions 𝒳, with F assigning a single label in Y to each partition. In this manner, there can be full interaction between each component of the model. Different branches of the tree are trained with disjoint subsets of the data, as shown in FIG. 11.


In contrast, boosting iteratively trains an ensemble of T weak learners {h_t: X → Y}_{t=1}^{T}, such that the model is a weighted sum of the weak learners' predictions F(x) = Σ_{t=1}^{T} ρ_t h_t(x) with weights ρ ∈ ℝ^T. Each boosted weak learner is trained with a different weighting of the entire data set, unlike CART, repeatedly emphasizing mispredicted instances to induce diversity (FIG. 11). Gradient boosting with decision stumps or simple regression can create a pure additive model, since each new ensemble member serves to reduce the residual of previous members. Interaction terms can be included in the overall ensemble by using more complex weak learners, such as deeper trees.


Classifier ensembles with decision stumps as the weak learners, ht(x), can be trivially rewritten as a complete binary tree of depth T, where the decision made at each internal node at depth t−1 is given by ht(x), and the prediction at each leaf is given by F(x). Intuitively, each path through the tree represents the same ensemble, but one that tracks the unique combination of predictions made by each member.


3 Tree-Structured Boosting

This interpretation of boosting lends itself, however, to the creation of a tree-structured ensemble learner that bridges between CART and gradient boosting. The idea in tree-structured boosting (TSB) is to grow the ensemble recursively, introducing diversity through the addition of different sub-ensembles after each new weak learner. At each step, TSB first trains a weak learner on the current training set {(x_i, y_i)}_{i=1}^{N} with instance weights w ∈ ℝ^N, and then creates a new sub-ensemble for each of the weak learner's outputs. Each sub-ensemble can be subsequently trained on the full training set, but instances corresponding to the respective branch are more heavily weighted during training, yielding diverse sub-ensembles (FIG. 11). This process can proceed recursively until the depth limit is reached. This approach can identify clear connections between CART and gradient boosting: as the re-weighting ratio is varied, tree-structured boosting produces a spectrum of models with accuracy between CART and gradient boosting at the two extremes.


This concept of tree-structured boosting was previously discussed above. In this context, each of the weak learners could be a decision stump classifier, allowing the resulting tree-structured ensemble to be written explicitly as a decision tree by replacing each internal node with the attribute test of its corresponding decision stump. The resulting MediBoost decision tree was fully interpretable for its use in medical applications (since it was just a decision tree), but it retained the high performance of ensemble methods (since the decision tree was grown via boosting). Described further below is a general formulation of this idea of tree-structured boosting, focusing on its connections to CART and gradient boosting. For ease of analysis, there is a focus on weak learners that induce binary partitions of the instance space.


TSB can maintain a perfect binary tree of depth n, with 2^n − 1 internal nodes, each of which corresponds to a weak learner. Each weak learner h_k along the path from the root node to a leaf prediction node l induces two disjoint partitions of 𝒳, namely P_k and P_k^c = 𝒳 \ P_k, so that h_k(x_i) ≠ h_k(x_j) ∀ x_i ∈ P_k and x_j ∈ P_k^c. Let {R_1, . . . , R_n} be the corresponding set of partitions along that path to l, where each R_k is either P_k or P_k^c. We can then define the partition of 𝒳 associated with l as R_l = ∩_{k=1}^{n} R_k. TSB predicts a label for each x ∈ R_l via the ensemble consisting of all weak learners along the path to l, so that F(x ∈ R_l) = Σ_{k=1}^{n} ρ_k h_k(x). To focus each branch of the tree on corresponding instances, thereby constructing diverse ensembles, TSB can maintain a set of weights w ∈ ℝ^N over all training data. Let w_{n,l} denote the weights associated with training a weak learner h_{n,l} at the leaf node l at depth n.


In some embodiments, the tree is trained as follows. At each boosting step, there can be a current estimate of the function F_{n−1}(x) corresponding to a perfect binary tree of height n−1. This estimate can be improved by replacing each of the 2^{n−1} leaf prediction nodes with additional weak learners {h′_{n,l}}_{l=1}^{2^{n−1}} with corresponding weights ρ_n ∈ ℝ^{2^{n−1}}, growing the tree by one level. This yields a revised estimate of the function at each terminal node as











$$F_n(x) = F_{n-1}(x) + \sum_{l=1}^{2^{n-1}} \rho_{n,l}\, \mathbb{1}\big[x \in R_l\big]\, h'_{n,l}(x). \tag{1}$$







where 1[p] is a binary indicator function that is 1 if predicate p is true, and 0 otherwise. Since the partitions {R_1, . . . , R_{2^{n−1}}} are disjoint, Equation (1) is equivalent to 2^{n−1} separate functions






$$F_{n,l}(x \in R_l) = F_{n-1,l}(x) + \rho_{n,l}\, h'_{n,l}(x),$$


one for each leaf's corresponding ensemble. The goal is to minimize the loss over the data












L
n

(

X
,
y

)

=




l
=
1


2

n
-
1







i
=
1

N



w

n
,
l
,
i






(


y
i

,



F


n
-
1

,
l


(

x
i

)

+


ρ

n
,
l




𝟙
[


x
i




l


]




h

n
,
l



(

x
i

)




)





,




(
2
)







by choosing ρ_n and the h′_{n,l}'s at each leaf. Taking advantage again of the independence of the leaves, Equation (2) can be minimized by independently minimizing the inner summation for each l, i.e.,












$$L_{n,l}(X, y) = \sum_{i=1}^{N} w_{n,l,i}\; \ell\Big(y_i,\; F_{n-1}(x_i) + \rho_{n,l}\, h'_{n,l}(x_i)\Big), \qquad \forall\, l \in \{1, \ldots, 2^{n-1}\}. \tag{3}$$






Note that (3) can be solved efficiently via gradient boosting of each Ln,l(⋅) in a level-wise manner through the tree.


Next, there is a derivation of TSB where the weak learners are binary regression trees with least squares as the loss function l(⋅). The negative unconstrained gradient can be estimated at each data instance








$$\left\{\tilde{y}_i = -\,\frac{\partial \ell\big(y_i, F_{n-1}(x_i)\big)}{\partial F_{n-1}(x_i)}\right\}_{i=1}^{N},$$
,




which are equivalent to the residuals (i.e., ỹ_i = y_i − F_{n−1}(x_i)). Then, the optimal parameters can be determined for L_{n,l}(⋅) by solving






$$\arg\min_{\rho_{n,l},\, h'_{n,l}} \sum_{i=1}^{N} w_{n,l,i}\, \big(\tilde{y}_i - \rho_{n,l}\, h'_{n,l}(x_i)\big)^2. \tag{4}$$






Gradient boosting can solve Equation (4) by first fitting h′_{n,l} to the residuals (X, ỹ), then solving for the optimal ρ_{n,l}. Adapting TSB to the classification setting, for example using logistic regression base learners and negative binomial log-likelihood as the loss function ℓ(⋅), follows directly from Equation (4) by using the gradient boosting procedure for classification in place of regression.












Algorithm 1 TreeStructuredBoosting(X, y, ω, λ, n, T, R, F_{n−1})

Inputs: training data (X, y) = {(x_i, y_i)}_{i=1}^{N}; instance weights ω ∈ ℝ^N (default: ω_i = 1/N); λ ∈ [0, +∞]; node depth n (default: 0); max height T; node domain R (default: 𝒳); prediction function F_{n−1}(x) (default: F_0(x) = ȳ)

Outputs: the root node of a hierarchical ensemble

1: If n > T, return a prediction node l_n that predicts the weighted average of y with weights ω
2: Create a new subtree root l_n to hold a weak learner
3: Compute negative gradients {ỹ_i = −∂ℓ(y_i, F_{n−1}(x_i))/∂F_{n−1}(x_i)}_{i=1}^{N}
4: Fit weak classifier h_n′(x): 𝒳 → Y by solving h_n′ ← arg min_{h,β} Σ_{i=1}^{N} ω_i (ỹ_i − β h(x_i))²
5: Let {P_n, P_n^c} be the partitions induced by h_n′
6: ρ_n ← arg min_ρ Σ_{i=1}^{N} ω_i (y_i − F_{n−1}(x_i) − ρ h_n′(x_i))²
7: Update the current function estimate F_n(x) = F_{n−1}(x) + ρ_n h_n′(x)
8: Update the left and right subtree instance weights, and normalize them:
   ω_i^(left) ∝ ω_i (λ + 1[x_i ∈ P_n]),   ω_i^(right) ∝ ω_i (λ + 1[x_i ∈ P_n^c])
9: If R ∩ P_n ≠ Ø, compute the left subtree recursively:
   l_n.left ← TreeStructuredBoosting(X, y, ω^(left), λ, n + 1, T, R ∩ P_n, F_n)
10: If R ∩ P_n^c ≠ Ø, compute the right subtree recursively:
    l_n.right ← TreeStructuredBoosting(X, y, ω^(right), λ, n + 1, T, R ∩ P_n^c, F_n)
11: If R ∩ P_n = Ø, prune the impossible left branch by returning l_n.right
    Else if R ∩ P_n^c = Ø, prune the impossible right branch by returning l_n.left
    Else return the subtree root l_n
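A compact, self-contained Python sketch of Algorithm 1 for the regression case with squared loss is given below. It is an illustration under simplifying assumptions, not a reference implementation: the weak learner is a weighted least-squares regression stump whose two output values absorb the β and ρ scalings of lines 4 and 6, leaves predict the weighted average of y as in line 1, and the domain-based pruning of line 11 is omitted.

import numpy as np


def fit_stump(X, residuals, w):
    """Weighted least-squares regression stump: returns (feature, threshold,
    value_left, value_right), or None if no split with weight on both sides exists."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:
            right = X[:, j] > thr
            if w[right].sum() == 0 or w[~right].sum() == 0:
                continue
            v_r = np.average(residuals[right], weights=w[right])
            v_l = np.average(residuals[~right], weights=w[~right])
            err = np.sum(w * (residuals - np.where(right, v_r, v_l)) ** 2)
            if err < best_err:
                best, best_err = (j, thr, v_l, v_r), err
    return best


def tsb(X, y, w, lam, depth, max_depth, F):
    """Tree-structured boosting sketch: every node takes one boosting step fitted
    with this branch's instance weights, then recurses with re-weighted instances
    (factor lambda + 1 inside the induced partition, lambda outside)."""
    split = None if depth >= max_depth else fit_stump(X, y - F, w)
    if split is None:
        return {"leaf": True, "value": float(np.average(y, weights=w))}  # line 1
    j, thr, v_l, v_r = split                              # lines 3-6: fit to residuals
    right = X[:, j] > thr
    F_new = F + np.where(right, v_r, v_l)                 # line 7: F_n = F_{n-1} + rho*h
    w_r = w * (lam + right);  w_r = w_r / w_r.sum()       # line 8: child weights
    w_l = w * (lam + ~right); w_l = w_l / w_l.sum()
    return {"leaf": False, "feature": j, "threshold": thr,
            "left":  tsb(X, y, w_l, lam, depth + 1, max_depth, F_new),
            "right": tsb(X, y, w_r, lam, depth + 1, max_depth, F_new)}


# Usage: uniform initial weights and F_0 = mean(y); lam = 0 mimics CART-style hard
# partitioning, while a very large lam approaches plain gradient boosting.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
tree = tsb(X, y, np.full(4, 0.25), lam=1.0, depth=0, max_depth=2, F=np.full(4, y.mean()))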


If all instance weights w remain constant, this approach would build a perfect binary tree of height T, where each path from the root to a leaf represents the same ensemble, and so would be exactly equivalent to gradient boosting of (X, y). To focus each branch of the tree on corresponding instances, thereby constructing diverse ensembles, the weights can be updated separately for each of h_{n,l}'s two children: instances in the corresponding partition have their weight multiplied by a factor of 1+λ, and instances outside the partition have their weights multiplied by a factor of λ, where λ ∈ [0, ∞]. The update rule for the weight w_{n,l}(x_i) of x_i for R_{n,l} ∈ {P_{n,l}, P_{n,l}^c} (the two partitions induced by h_{n,l}) is given by












$$w_{n,l}(x_i) \;=\; \frac{w_{n-1,l}(x_i)}{z_n}\,\big(\lambda + \mathbb{1}[x_i \in R_{n,l}]\big), \tag{5}$$







where zn ∈ ℝ normalizes wn,l to be a distribution. The initial weights w0 are typically uniform. The complete TSB approach is detailed as Algorithm 1, which also incorporates pruning of any impossible branches where Rl = ∅.
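The recursive structure of Algorithm 1, including the weight update of Equation (5) and the pruning of empty branches, can be sketched as follows. This is a minimal illustration for the squared-loss case that reuses the boosting_step helper sketched above; the class and function names are hypothetical.

```python
import numpy as np

class Node:
    """A TSB tree node: internal nodes hold a split; leaves hold a prediction."""
    def __init__(self, prediction=None, split=None, left=None, right=None):
        self.prediction, self.split = prediction, split
        self.left, self.right = left, right

def tsb(X, y, w, lam, n, T, domain, F_prev):
    """Recursive TreeStructuredBoosting sketch (Algorithm 1, squared loss).
    boosting_step is the helper sketched after Algorithm 1."""
    if n > T:  # step 1: leaf predicts the weighted average of y
        return Node(prediction=np.average(y, weights=w))
    F_new, (j, t) = boosting_step(X, y, w, F_prev)      # steps 3-7
    in_partition = X[:, j] <= t                         # P_n induced by the stump
    # step 8 / Eq. (5): multiply weights by (lambda + 1) inside the partition,
    # by lambda outside, then renormalize to a distribution
    w_left = w * (lam + in_partition); w_left /= w_left.sum()
    w_right = w * (lam + ~in_partition); w_right /= w_right.sum()
    dom_left = domain & in_partition                    # each child's valid domain
    dom_right = domain & ~in_partition
    # step 11: prune branches whose domain is empty
    if not dom_left.any():
        return tsb(X, y, w_right, lam, n + 1, T, dom_right, F_new)
    if not dom_right.any():
        return tsb(X, y, w_left, lam, n + 1, T, dom_left, F_new)
    return Node(split=(j, t),                           # steps 9-10: recurse
                left=tsb(X, y, w_left, lam, n + 1, T, dom_left, F_new),
                right=tsb(X, y, w_right, lam, n + 1, T, dom_right, F_new))
```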


4 Theoretical Analysis

This section analyzes TSB to show that it is equivalent to CART when λ=0 and equivalent to gradient boosting as λ→∞. Therefore, these theoretical results establish the intrinsic connections between CART and gradient boosting identified by TSB. Provided below are proof sketches for the four lemmas used to prove the main result, Theorem 1, below; full proofs of the lemmas are illustrated in FIGS. 14A-14D.


Lemma 1 The weight of xi at leaf l ∈ {1, . . . , 2^n} at the nth boosting iteration is given by














$$w_{n,l}(x_i) \;=\; \frac{w_0(x_i)\,(\lambda+1)^{\sum_{k=1}^{n}\mathbb{1}[x_i \in R_k]}\;\lambda^{\sum_{k=1}^{n}\mathbb{1}[x_i \notin R_k]}}{\prod_{k=1}^{n} z_k}, \qquad n = 1, 2, \ldots \tag{6}$$




where {R1, . . . , Rn} is the sequence of partitions along the path from the root to l.


Proof Sketch: This lemma can be shown by induction based on Equation (5).
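As a brief illustration of the induction, applying the update rule (5) to the inductive hypothesis gives

$$w_{n,l}(x_i) = \frac{w_{n-1,l}(x_i)}{z_n}\big(\lambda + \mathbb{1}[x_i \in R_n]\big)
= \frac{w_0(x_i)\,(\lambda+1)^{\sum_{k=1}^{n-1}\mathbb{1}[x_i \in R_k]}\,\lambda^{\sum_{k=1}^{n-1}\mathbb{1}[x_i \notin R_k]}}{\prod_{k=1}^{n-1} z_k}\cdot\frac{\lambda + \mathbb{1}[x_i \in R_n]}{z_n},$$

and since λ + 𝟙[xi ∈ Rn] equals λ + 1 when xi ∈ Rn and λ otherwise, collecting the factors over k = 1, . . . , n yields the closed form (6).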


Lemma 2 Given the weight distribution formula (6) of xi at leaf l ∈ {1, . . . , 2^n} at the nth boosting iteration, the following limits hold,












$$\lim_{\lambda\to 0} w_{n,l}(x_i) \;=\; \frac{w_0(x_i)}{\sum_{x_j \in R_{n,l}} w_0(x_j)}\;\mathbb{1}[x_i \in R_{n,l}], \tag{7}$$

$$\lim_{\lambda\to\infty} w_{n,l}(x_i) \;=\; w_0(x_i), \tag{8}$$







where Rn,l = R1 ∩ · · · ∩ Rn is the intersection of the partitions along the path from the root to l.


Proof Sketch: Both parts follow directly by taking the corresponding limits of Lemma 1.
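To see why, the unnormalized weight in (6) can be rewritten as a function of λ,

$$w_{n,l}(x_i) \;\propto\; w_0(x_i)\,(\lambda+1)^{a_i}\,\lambda^{\,n-a_i}
\;=\; w_0(x_i)\,\lambda^{\,n}\Big(1+\tfrac{1}{\lambda}\Big)^{a_i},
\qquad a_i = \sum_{k=1}^{n}\mathbb{1}[x_i\in R_k].$$

As λ→∞ the factor λ^n (1 + 1/λ)^{a_i} becomes the same for every instance, so after normalization w_{n,l}(x_i) → w_0(x_i), which is (8); as λ→0 only instances with a_i = n (i.e., x_i ∈ R_{n,l}) retain nonzero weight, which after normalization gives (7).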


Lemma 3 The optimal simple regressor h*n,l(x) that minimizes the loss function (3) at the nth iteration at node l ∈ {1, . . . , 2^n} is given by,













$$h^{*}_{n,l}(x) = \begin{cases}
\dfrac{\sum_{i:\,x_i \in R_n} w_{n,l}(x_i)\,\big(y_i - F_{n-1}(x_i)\big)}{\sum_{i:\,x_i \in R_n} w_{n,l}(x_i)} & \text{if } x \in R_n, \\[2.5ex]
\dfrac{\sum_{i:\,x_i \notin R_n} w_{n,l}(x_i)\,\big(y_i - F_{n-1}(x_i)\big)}{\sum_{i:\,x_i \notin R_n} w_{n,l}(x_i)} & \text{otherwise.}
\end{cases} \tag{9}$$




Proof Sketch: For a given region Rn ⊆ X at the nth boosting iteration, the simple regressor has the form











$$h_n(x) = \begin{cases}
h_{n1} & \text{if } x \in R_n, \\
h_{n2} & \text{otherwise,}
\end{cases} \tag{10}$$







with constants hn1, hn2 ∈ ℝ. We take the derivative of the loss function (3) in each of the two regions Rn and Rnc, and solve for where the derivative is equal to zero, obtaining (9).
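For example, within the region Rn the first-order condition for the weighted squared loss reads

$$\frac{\partial}{\partial h_{n1}} \sum_{i:\,x_i\in R_n} w_{n,l}(x_i)\,\big(y_i - F_{n-1}(x_i) - h_{n1}\big)^2
= -2\sum_{i:\,x_i\in R_n} w_{n,l}(x_i)\,\big(y_i - F_{n-1}(x_i) - h_{n1}\big) = 0,$$

so hn1 is the weighted average of the residuals yi − Fn-1(xi) over Rn; the analogous computation over Rnc gives hn2, and together these give (9).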


Lemma 4 The TSB update rule is given by Fn(x) = Fn-1(x) + hn,l(x). If hn,l(x) is defined as

$$h_{n,l}(x) \;=\; \frac{\sum_{i:\,x_i \in R_{n,l}} w_{n,l}(x_i)\,\big(y_i - F_{n-1}(x_i)\big)}{\sum_{i:\,x_i \in R_{n,l}} w_{n,l}(x_i)}, \tag{11}$$

with constant F0(x) = ȳ0, then Fn(x) = ȳn is constant, with

$$\bar{y}_n \;=\; \frac{\sum_{i:\,x_i \in R_{n,l}} w_{n,l}(x_i)\,y_i}{\sum_{i:\,x_i \in R_{n,l}} w_{n,l}(x_i)}, \qquad n = 1, 2, \ldots





Proof Sketch: The proof is by induction on n, building upon (10). It is shown that each hn(xi) is constant and so ȳn is constant, and therefore the lemma holds under the given update rule.
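The inductive step can be written in one line: if Fn-1(x) = ȳn-1 is constant, then by (11)

$$F_n(x) = \bar{y}_{n-1} + \frac{\sum_{i:\,x_i\in R_{n,l}} w_{n,l}(x_i)\,(y_i - \bar{y}_{n-1})}{\sum_{i:\,x_i\in R_{n,l}} w_{n,l}(x_i)}
= \frac{\sum_{i:\,x_i\in R_{n,l}} w_{n,l}(x_i)\,y_i}{\sum_{i:\,x_i\in R_{n,l}} w_{n,l}(x_i)} = \bar{y}_n,$$

which is again constant.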


Building upon these four lemmas, the result is presented in the following theorem, and explained in the subsequent two remarks:


Theorem 1 Given the TSB optimal simple regressor (9) that minimizes the loss function (3), the following limits hold with respect to the parameter λ of the weight update rule (5):














$$\lim_{\lambda\to 0} h_{n,l}(x) \;=\; \frac{\sum_{i:\,x_i \in R_{n,l}} w_0(x_i)\,y_i}{\sum_{i:\,x_i \in R_{n,l}} w_0(x_i)} \;-\; \bar{y}_{n-1}, \tag{12}$$

$$\lim_{\lambda\to\infty} h_{n,l}(x) = \begin{cases}
\dfrac{\sum_{i:\,x_i \in R_n} w_0(x_i)\,\big(y_i - F_{n-1}(x_i)\big)}{\sum_{i:\,x_i \in R_n} w_0(x_i)} & \text{if } x \in R_n, \\[2.5ex]
\dfrac{\sum_{i:\,x_i \notin R_n} w_0(x_i)\,\big(y_i - F_{n-1}(x_i)\big)}{\sum_{i:\,x_i \notin R_n} w_0(x_i)} & \text{otherwise,}
\end{cases} \tag{13}$$




where w0(xi) is the initial weight for the i-th training sample.


Proof The limit (12) follows from applying (7) from Lemma 2 to (9) from Lemma 3, together with the result Fn(x) = ȳn, where ȳn is the constant defined by (11) in Lemma 4. Similarly, the limit (13) follows from applying (8) from Lemma 2 to (9) from Lemma 3.
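For instance, in the λ→0 limit, substituting the limiting weights (7) into (9) and using Fn-1(xi) = ȳn-1 from Lemma 4 gives

$$\lim_{\lambda\to 0} h_{n,l}(x)
= \frac{\sum_{i:\,x_i\in R_{n,l}} w_0(x_i)\,(y_i - \bar{y}_{n-1})}{\sum_{i:\,x_i\in R_{n,l}} w_0(x_i)}
= \frac{\sum_{i:\,x_i\in R_{n,l}} w_0(x_i)\,y_i}{\sum_{i:\,x_i\in R_{n,l}} w_0(x_i)} - \bar{y}_{n-1},$$

which is (12); the λ→∞ case follows analogously from (8).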


Remark 1 The simple regressor given by (12) calculates a weighted average of the difference between the random output variables yi and the previous estimate ȳn-1 of the function F*(x) in the disjoint regions defined by Rn,l. This formally defines the behavior of the CART algorithm.


Remark 2 The simple regressor given by (13) calculates a weighted average of the difference between the random output variables yi and the previous estimate of the function F*(x) given by the piece-wise constant function Fn-1(xi). Fn-1(xi) is defined in the overlapping region determined by the latest stump, namely Rn. This can formally define the behavior of the gradient boosting algorithm.


Based on Remark 1 and Remark 2, it can be concluded that TSB is equivalent to CART as λ→0 and to gradient boosting as λ→∞. Besides identifying connections between these two algorithms, TSB can provide the flexibility to train a hybrid model that lies between CART and gradient boosting, with potentially improved performance over either, as shown empirically in the next section.
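To make the interpolation concrete, the tsb sketch above could be swept over λ as follows; the data, depth, and parameter values here are purely illustrative and are not the experimental setup of Section 5.

```python
import numpy as np

# Assumes the tsb(...) sketch above (and its helpers) are in scope.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

w0 = np.full(len(y), 1.0 / len(y))          # uniform initial instance weights
domain = np.ones(len(y), dtype=bool)        # the root's valid domain covers all of X
F0 = lambda Z: np.full(len(Z), y.mean())    # constant initial prediction

for lam in (1e-3, 0.5, 2.0, 1e3):           # small lam ~ CART, large lam ~ gradient boosting
    tree = tsb(X, y, w0.copy(), lam, 0, 3, domain, F0)
    print(lam, "root split:", tree.split)
```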


5 Experiments

In this section, the experimental validation of TSB is presented. In a first experiment, real world data is used to carry out a numerical evaluation of the classification error of TSB for different values of λ. The behavior of the instance weights is then examined as λ varies in a second experiment.


5.1 Assessment of TSB Model Performance Versus CART and Gradient Boosting


In this experiment, four life science data sets are used from the UCI repository: Breast Tissue, Indian Liver Patient Dataset (ILPD), SPECTF Heart Disease, and Wisconsin Breast Cancer. All these data sets contain numeric attributes with no missing values and are binary classification tasks. The classification error is measured as the value of λ increases from 0 to ∞. In particular, 10 equidistant error points corresponding to the in-sample and out-of-sample errors of the generated TSB trees are assessed, and the transient behavior of the classification errors as functions of λ is plotted. The goal is to illustrate the trajectory of the classification errors of TSB, which is expected to approximate the performance of CART as λ→0, and to converge asymptotically to gradient boosting as λ→∞.


To ensure fair comparison, the classification accuracy of CART and gradient boosting is assessed for different depth and learning rate values by performing 5-fold cross-validation. As a result, it is concluded that a tree/ensemble depth of 10 offers near-optimal accuracy, and this depth is therefore used for all algorithms. The binary classification is carried out using the negative binomial log-likelihood as the loss function, similar to the LogitBoost approach, which requires an additional learning rate (shrinkage) factor, but under the scheme described by Algorithm 1. The learning rate values are provided in the third column of Table 1.
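A rough sketch of this baseline-tuning protocol, using scikit-learn's CART and gradient boosting implementations, is shown below. The candidate grids and the mapping of "ensemble depth" to the number of boosted stumps are assumptions for illustration; the patent does not specify these details.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

def tune_baselines(X, y, depths=(2, 4, 6, 8, 10, 12), rates=(0.1, 0.3, 0.5, 0.7)):
    """5-fold CV over tree depth (CART) and depth x learning rate (gradient boosting).
    The grids here are illustrative placeholders."""
    cart_scores = {d: cross_val_score(DecisionTreeClassifier(max_depth=d),
                                      X, y, cv=5).mean()
                   for d in depths}
    gb_scores = {(d, r): cross_val_score(
                     GradientBoostingClassifier(max_depth=1, n_estimators=d,
                                                learning_rate=r),
                     X, y, cv=5).mean()
                 for d in depths for r in rates}
    # return the best CART depth and the best (ensemble depth, learning rate) pair
    return max(cart_scores, key=cart_scores.get), max(gb_scores, key=gb_scores.get)
```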









TABLE 1
Data Set Specifications

Data Set            # Instances    # Attributes    TSB Learning Rate
Breast Tissue            106             9                0.3
ILPD                     583             9                0.3
SPECTF                    80            44                0.3
Wisconsin Breast         569            30                0.7
Synthetic                100             0                0.1









For each data set, the experimental results were averaged over 20 trials of 10-fold cross-validation over the data, using 90% of the samples for training and the remaining 10% for testing in each experiment. The error bars in the plots denote the standard error at each sample point.


The results are presented in FIG. 12, showing that the in-sample and out-of-sample classification errors of TSB for different values of λ approximate the CART and gradient boosting errors in the limits λ→0 and λ→∞, respectively. As expected, increasing λ generally reduces overfitting. However, note that for each data set except ILPD, the lowest test error is achieved by a TSB model between the extremes of CART and gradient boosting. This reveals that hybrid TSB models can outperform either CART or gradient boosting alone.


5.2 Effect of λ on the Instance Weights


In a second experiment, a synthetic binary-labeled data set is used to graphically illustrate the behavior of the instance weights as functions of λ. The synthetic data set consists of 100 points in R2, out of which 58 belong to the red class and the remaining 42 belong to the green class, as shown in FIG. 13. The learning rate was chosen to be 0.1 based on classification accuracy, as in the previous experiment. The instance weights produced by TSB at different values of λ were recorded.



FIG. 13 shows a heatmap linearly interpolating the weights associated with each instance for a disjoint region defined by one of the four leaf nodes of the trained tree. The chosen leaf node corresponds to the logical function (X2>2.95)∧(X1<5.55).


When λ=0, the weights have binary normalized values that produce a sharp differentiation of the surface defined by the leaf node, similar to the behavior of CART, as illustrated in FIG. 13(a). As λ increases in value, the weights become more diffuse in FIGS. 13(b) and 13(c), until λ becomes significantly greater than 1. At that point, the weights approximate the initial values, as anticipated by the theory. Consequently, the ensembles along each path to a leaf are trained using equivalent instance weights, and therefore are the same and equivalent to gradient boosting.
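The diffusion of the weights with λ can also be reproduced numerically from the closed form (6); the sketch below computes normalized leaf weights for an arbitrary illustrative path of partition memberships (the normalizers zk cancel under the final normalization).

```python
import numpy as np

def leaf_weights(memberships, lam, w0=None):
    """Normalized weights from Eq. (6): memberships is an (n, N) boolean array,
    row k indicating whether each instance falls in partition R_k on the path."""
    n, N = memberships.shape
    w0 = np.full(N, 1.0 / N) if w0 is None else w0
    a = memberships.sum(axis=0)                  # times each instance was inside a partition
    w = w0 * (lam + 1.0) ** a * lam ** (n - a)   # unnormalized Eq. (6), z_k factors cancel
    return w / w.sum()

memberships = np.random.default_rng(1).random((2, 100)) < 0.5   # illustrative path of depth 2
for lam in (1e-3, 0.1, 1.0, 10.0, 1e3):
    w = leaf_weights(memberships, lam)
    print(lam, w.min(), w.max())   # weights sharpen as lam -> 0 and flatten as lam -> infinity
```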


6 Conclusions

As described above, it was shown that tree-structured boosting reveals the intrinsic connections between additive models (gradient boosting) and full interaction models (CART). As the parameter λ varies from 0 to ∞, the models produced by TSB vary between CART and gradient boosting, respectively. This has been shown both theoretically and empirically. Notably, the experiments revealed that a hybrid model between these two extremes of CART and gradient boosting can outperform either of these alone.


It will be appreciated by those skilled in the art that changes could be made to the exemplary embodiments shown and described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the exemplary embodiments shown and described, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the claims. For example, specific features of the exemplary embodiments may or may not be part of the claimed invention and features of the disclosed embodiments may be combined. Unless specifically set forth herein, the terms “a”, “an” and “the” are not limited to one element but instead should be read as meaning “at least one”.


It is to be understood that at least some of the figures and descriptions of the invention have been simplified to focus on elements that are relevant for a clear understanding of the invention, while eliminating, for purposes of clarity, other elements that those of ordinary skill in the art will appreciate may also comprise a portion of the invention. However, because such elements are well known in the art, and because they do not necessarily facilitate a better understanding of the invention, a description of such elements is not provided herein.


Further, to the extent that the method does not rely on the particular order of steps set forth herein, the particular order of the steps should not be construed as limitation on the claims. The claims directed to the method of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the steps may be varied and still remain within the spirit and scope of the present invention.

Claims
  • 1. A method for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, comprising: generating a node of the decision tree including: receiving i) training data including data instances, each data instance having a plurality of attributes and a corresponding label, ii) instance weightings, iii) a valid domain for each attribute generated, and iv) an accumulated weighted sum of predictions for a branch of the decision tree; and associating one of a plurality of binary predictions of an attribute with each node including selecting the one of the plurality of binary predictions having a least amount of weighted error for the valid domain, the weighted error being based on the instance weightings and the accumulated weighted sum of predictions for the branch of the decision tree associated with the node; in accordance with a determination that the node includes child nodes, repeat the generating the node step for the child nodes; and in accordance with a determination that the node is a terminal node, associating the terminal node with an outcome classifier; and displaying the decision tree including the plurality of nodes arranged hierarchically.
  • 2. The method of claim 1, wherein generating the node includes: foregoing generating the node having a binary prediction that is inconsistent with a parent node.
  • 3. The method of claim 1, wherein generating the node includes: updating instance weightings for child nodes including incorporating an acceleration term to reduce consideration for data instances having labels that are inconsistent with the tree branch and utilizing the instance weightings during the generating the node step repeated for the child nodes.
  • 4. The method of claim 1, wherein generating the node includes: updating the valid domain and utilizing the valid domain during generation of the child nodes.
  • 5. The method of claim 1, wherein generating the node, for each node, includes: foregoing generating the node having a sibling node with an identical prediction.
  • 6. A system for generating a decision tree having a plurality of nodes, arranged hierarchically as parent nodes and child nodes, comprising: one or more memory units each operable to store at least one program; and at least one processor communicatively coupled to the one or more memory units, in which the at least one program, when executed by the at least one processor, causes the at least one processor to perform the steps of claim 1.
  • 7. A non-transitory computer readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform the steps of claim 1.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority to U.S. Provisional Application No. 62/357,250 filed Jun. 30, 2016, the entirety of which is incorporated herein by reference.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2017/040353 6/30/2016 WO
Provisional Applications (1)
Number Date Country
62357250 Jun 2016 US