The present disclosure generally relates to optimal decision tree learning, and more particularly, to the use of mixed-integer programs (MIPs) in decision tree learning.
Decision trees are one of the more popular machine learning models because the tree structure is visually easy to comprehend. Learning an optimal decision tree is Non-deterministic Polynomial-time hard (NP-hard). Popular algorithms rely on greedy heuristic-based methods into which it is challenging to incorporate constraints. Existing MIP methods, which build on an arc-based formulation, are used to handle sample-level constraints and linear metrics.
In one embodiment, a computer-implemented decision tree machine learning method includes accessing a decision tree associated with a path-based machine learning model. The decision tree is split into a plurality of multiway decision trees, each of the plurality of decision trees having an attribute not occurring more than once in each of the plurality of decision trees. A problem associated with the machine learning model is solved using one or more of the plurality of decision trees in which one or more decision rules of the decision tree are mapped using a mixed-integer program (MIP).
According to an embodiment, the decision tree in the splitting operation is a multiway decision tree.
According to an embodiment, solving the problem associated with the path-based machine learning model further includes finding multiway regression trees using mixed-integer programs (MIPs).
According to an embodiment, solving the problem includes incorporating rule constraints associated with solving the problem.
According to an embodiment, the incorporated rule constraints include an intra-rule constraint and/or an inter-rule constraint.
According to an embodiment, the incorporated rule constraints include a monotonic prediction output constraint and/or a fairness constraint.
According to an embodiment, solving the problem further comprises analyzing metrics including precision and/or recall for imbalanced datasets.
According to an embodiment, solving the problem further includes performing column generation to provide a restricted master program version of the multiway decision trees.
According to an embodiment, solving the problem further includes generating a feature graph in which each decision rule is mapped to a distinct independent path in the feature graph.
According to an embodiment, generating the feature graph includes providing an acyclic multi-level digraph comprising multiple features, and each feature indicates a level in the feature graph represented by multiple nodes corresponding to its distinct feature values.
In one embodiment, a computing device configured to perform decision tree machine learning includes a processor, and a memory coupled to the processor. The memory stores instructions to cause the processor to perform acts including accessing a decision tree associated with a path-based machine learning model; splitting the decision tree into a plurality of multiway decision trees, each of the plurality of decision trees having an attribute not occurring more than once in each of the plurality of decision trees; and solving a problem associated with the machine learning model using one or more of the plurality of decision trees in which one or more decision rules of the decision tree are mapped using a mixed-integer program (MIP).
In one embodiment, a non-transitory computer readable storage medium tangibly embodying a computer readable program code having computer readable instructions that, when executed, cause a computer device to carry out a method of decision tree machine learning. The method includes accessing a decision tree associated with a path-based machine learning model. The decision tree is split into a plurality of decision trees, each of the plurality of decision trees having an attribute not occurring more than once in each of the plurality of decision trees. A problem associated with the machine learning model is solved using one or more of the plurality of decision trees in which one or more decision rules of the decision tree are mapped using a mixed-integer program (MIP). The plurality of decision trees are multiway decision trees provided by the splitting operation, and solving the problem further includes performing column generation.
The drawings are of illustrative embodiments. They do not illustrate all embodiments. Other embodiments may be used in addition to or instead. Details that may be apparent or unnecessary may be omitted to save space or for more effective illustration. Some embodiments may be practiced with additional components or steps and/or without all the components or steps that are illustrated. When the same numeral appears in different drawings, it refers to the same or like components or steps.
In the following detailed description, numerous specific details are set forth by way of examples to provide a thorough understanding of the relevant teachings. However, it is to be understood that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, components, and/or circuitry have been described at a relatively high level, without detail, to avoid unnecessarily obscuring aspects of the present teachings. It is also to be understood that the present disclosure is not limited to the depictions in the drawings, as there may be fewer elements or more elements than shown and described.
In discussing the present technology, it may be helpful to describe various salient terms. In one aspect, spatially related terminology such as “front,” “back,” “top,” “bottom,” “beneath,” “below,” “lower,” “above,” “upper,” “side,” “left,” “right,” and the like, is used with reference to the direction of the Figures being described. Since components of embodiments of the disclosure can be positioned in a number of different directions, the directional terminology is used for purposes of illustration and is in no way limiting. Thus, it will be understood that the spatially relative terminology is intended to encompass different directions of the device in use or operation in addition to the direction depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, for example, the term “below” can encompass both an orientation that is above, as well as, below. The device may be otherwise oriented (rotated 90 degrees or viewed or referenced at other directions) and the spatially relative descriptors used herein should be interpreted accordingly.
Although the terms first, second, third, etc., may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Example embodiments are described herein with reference to schematic illustrations of idealized or simplified embodiments (and intermediate structures). As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, may be expected. Thus, the regions illustrated in the figures are schematic in nature and their shapes do not necessarily illustrate the actual shape of a region of a device and do not limit the scope.
It is to be understood that other embodiments may be used and structural or logical changes may be made without departing from the spirit and scope defined by the claims. The description of the embodiments is not limiting. In particular, elements of the embodiments described hereinafter may be combined with elements of different embodiments.
As used herein, the term “multiway,” when referring to decision trees and/or split trees, is to be understood to mean that a node can have more than two children.
The present disclosure provides a novel method of learning optimal decision trees using mixed-integer programs (MIPs) in a path-based formulation. Typically, MIP methods are built on an arc-based formulation and do not scale well because the number of binary variables and constraints is on the order of O(2^d N), where d and N refer to the depth of the tree and the size of the dataset, respectively. Moreover, such MIP methods built on an arc-based formulation can only handle sample-level constraints and linear metrics. In contrast, in one aspect, the present disclosure teaches a path-based MIP formulation where the number of decision variables is independent of both d and N.
More particularly, the present disclosure presents a scalable column generation framework that provides solutions to decision tree problems by producing an optimal multiway-split tree (OMT), which is more interpretable and informative than typical binary-split trees due to its shorter rules. In addition, the framework is more general and can handle nonlinear metrics and incorporate a broader class of constraints during training.
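By way of illustration only, the following Python sketch outlines how such a column generation loop may be organized: a restricted master program is repeatedly solved over the current pool of candidate rules (columns), and a pricing step searches for new rules to add until none are found. The helper names solve_restricted_master and price_rules are hypothetical placeholders and do not correspond to any specific routine of the present disclosure.

```python
# Hypothetical column-generation skeleton (illustrative only).
def column_generation(initial_rules, solve_restricted_master, price_rules,
                      max_rounds=50):
    rules = list(initial_rules)   # initial pool of candidate rules (columns)
    solution = None
    for _ in range(max_rounds):
        # Solve the restricted master program over the current columns and
        # recover dual (shadow price) estimates for its constraints.
        solution, duals = solve_restricted_master(rules)
        # Pricing step: search for candidate rules with improving reduced cost,
        # e.g., via a feature graph search.
        new_rules = price_rules(duals)
        if not new_rules:         # no improving column exists; stop
            break
        rules.extend(new_rules)
    return solution, rules
```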
Through the use of multiway-split trees, an attribute rarely appears more than once in any path from root to leaf, making such trees easier to comprehend than their binary counterparts. In the path-based formulation, each feasible rule is mapped to a distinct path in a graph. As the cardinality of paths becomes prohibitive for larger graphs with many numerical features, the path-based MIP formulation addresses this issue using an enhanced column generation technique.
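As a non-limiting illustration of the rule-to-path mapping, the short Python sketch below builds a toy feature graph in which each feature forms a level and each distinct feature value forms a node, and then enumerates every path through the levels as a candidate rule. The function names and the tiny categorical dataset are invented for illustration; in practice, exhaustively enumerating all paths is exactly what the enhanced column generation technique avoids.

```python
# Illustrative only: each feature is a level, each distinct value is a node,
# and each path through the levels corresponds to one candidate decision rule.
from itertools import product

def build_feature_levels(data, features):
    """Return, per feature (level), the sorted list of its distinct values (nodes)."""
    return {f: sorted({row[f] for row in data}) for f in features}

def enumerate_rules(levels, features):
    """Yield each path as a conjunction of (feature == value) conditions."""
    for path in product(*(levels[f] for f in features)):
        yield dict(zip(features, path))

if __name__ == "__main__":
    data = [
        {"color": "red", "size": "small"},
        {"color": "red", "size": "large"},
        {"color": "blue", "size": "small"},
    ]
    levels = build_feature_levels(data, ["color", "size"])
    for rule in enumerate_rules(levels, ["color", "size"]):
        print(rule)   # e.g., {'color': 'blue', 'size': 'small'}
```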
The embodiments of the computer-implemented method and computing device of the present disclosure provide for an improvement in the field of providing solutions to decision tree problems in a variety of different applications, as more accurate decisions can be made based on analyzing the predicted model while taking into consideration constraints and interdependencies. In addition, there is an improvement in computer operations, as the computer-implemented method and system according to the present disclosure reduce the processing power and storage used to achieve results, while increasing the accuracy of those results. Results of extensive experiments on datasets demonstrate the efficiency and superiority over existing MIP-based decision tree models. In one embodiment, there is a 24× reduction in run time.
In one aspect, the teachings herein are based on the inventors' insight that the techniques described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following figures.
In the path-based MIP formulation of the present disclosure, each feasible rule is mapped to a distinct path in a graph. As the cardinality of paths becomes prohibitive for large graphs (e.g., many numerical features), the method addresses this issue by using an enhanced column generation technique. This formulation allows constraints to be incorporated, e.g., a monotonic predicted output, a fairness constraint, precision and/or recall for imbalanced datasets, etc. Data 205 along with constraints and metrics 210 are provided to construct a feature graph 215. The data 205 may include transactions in the training data, such as a list of features sorted in order of importance suggested by any black box prediction model. In an embodiment, a decision tree module 540 (
With continued reference to
With reference to the subgradient solver input 220, a Lagrangian Relaxation (LR) of the CL Tree Master Program is constructed by dualizing the cover, capacity, and cardinality constraints, incorporating them within the objective via Lagrange multipliers (E). The LR decomposes into simple bound-constrained problems separable by market and rule. To solve the LR, the resultant objective L_k is computed, the best (maximum) value L* found so far is recorded, the capacity constraint violations (negative subgradients) are computed to form the resultant capacity violation error vector d_k, and the deflected subgradient g_{k+1} = α·d_k + (1−α)·g_k is computed, where α is an algorithmic parameter. Optionally, primal estimates s*_{k+1} = α·s_k + (1−α)·s*_k may be obtained. The dual vector E is then updated as E_{k+1} = E_k + 2α(θ−L*)·g/∥g∥², where θ is a carefully chosen target value. Finally, E_{k+1} is projected onto its bounds to preserve dual feasibility. The procedure stops when g is small or a time limit expires, and returns E_k as the best shadow price estimate.
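The update described above may be sketched as follows. The sketch assumes a user-supplied routine solve_lr(E) that solves the bound-constrained Lagrangian relaxation at the current duals and returns the objective L_k together with the capacity-violation vector d_k; the routine name, the bounds, and the target value θ are placeholders rather than elements of the disclosure.

```python
# Illustrative deflected-subgradient sketch for estimating shadow prices E.
import numpy as np

def deflected_subgradient(solve_lr, lo, hi, theta, alpha=0.5,
                          max_iter=200, tol=1e-8):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    E = np.clip(np.zeros_like(lo), lo, hi)   # start from a dual-feasible point
    g = np.zeros_like(E)                     # deflected subgradient
    L_best = -np.inf                         # best (maximum) LR value L* so far
    for _ in range(max_iter):
        L_k, d_k = solve_lr(E)               # LR objective and capacity violations
        L_best = max(L_best, L_k)
        g = alpha * np.asarray(d_k, float) + (1.0 - alpha) * g   # deflection step
        norm2 = float(g @ g)                 # ||g||^2
        if norm2 < tol:                      # stop when the subgradient is small
            break
        E = E + 2.0 * alpha * (theta - L_best) * g / norm2       # dual update
        E = np.clip(E, lo, hi)               # project onto bounds (dual feasibility)
    return E, L_best
```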
At 225, an efficient feature graph search technique is performed, which outputs a next set of feasible candidate rules. Enhancements to this technique reduce storage requirements and computation time. These enhancements include: (i) creating only symbolic representations of cumulative nodes, which reduces storage to the original O(κ) by sorting the feature nodes (done only once at the start of the CG algorithm) with individual (x) nodes first followed by cumulative nodes; the intersections of a partial path's training samples are performed with the samples of the individual nodes first. (ii) Computing set intersections is reduced to linear time O(m) by always operating on pre-sorted training sample data in the partial path and nodes. (iii) Applying the distributive law of set intersections, the computation of expensive intersections is reduced to O(κ)×O(κ); this allows the intersections for cumulative nodes to be computed by performing unions of intersections of individual sets and the prior level of accumulation, e.g., a union of no more than two sets of training samples. Discretization quality is improved by initializing the κ-means algorithm using quantiles. For example, the data are sorted and partitioned into κ equal intervals, and the mean value of each interval is used to generate the starting points for the κ-means procedure.
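Two of these enhancements lend themselves to compact illustration. The sketch below shows a linear-time intersection of pre-sorted training-sample lists (enhancement (ii)) and the quantile-based initialization of the κ-means procedure; both functions are illustrative stand-ins rather than the disclosure's implementation.

```python
# Illustrative only: linear-time intersection of sorted sample-id lists and
# quantile-based seeding of k-means for feature discretization.
import numpy as np

def sorted_intersection(a, b):
    """Intersect two pre-sorted lists of training-sample ids in O(m) time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def quantile_kmeans_init(values, kappa):
    """Sort the data, split into kappa equal intervals, and return each
    interval's mean as a starting centroid for the k-means procedure."""
    x = np.sort(np.asarray(values, dtype=float))
    return np.array([c.mean() for c in np.array_split(x, kappa) if c.size])
```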
At 230 the decision tree module (e.g.,
With the foregoing overview of the example architecture, it may be helpful now to consider a high-level discussion of an example process. To that end,
At operation 402, a decision tree associated with a path-based machine learning model is split by a decision tree module (See
At operation 404, the decision trees are populated such that, for most attributes, an attribute does not occur more than once in each of the plurality of multiway decision trees. Because each feature is mapped to a node, this operation is more efficient than an arc-based operation.
At operation 406, a computing device having modules such as illustrated in
The computer platform 500 may include a central processing unit (CPU) 504, a hard disk drive (HDD) 506, random access memory (RAM) and/or read-only memory (ROM) 508, a keyboard 510, a mouse 512, a display 514, and a communication interface 516, which are connected to a system bus 502. The HDD 506 can include data stores.
In one embodiment, the HDD 506 has capabilities that include storing a program that can execute various processes, such as machine learning, predictive modeling, classification, and updating model parameters. The ML model generation module 540 is configured to generate a machine learning model based on at least one of the generated candidate machine learning pipelines.
With continued reference to
A machine learning module 548 is configured to assist in generating multiway decision trees from path-based formulations that are more interpretable and informative than binary-split trees. A feature graph module 556 is configured to generate a multi-level digraph, where each feature indicates a level in the graph represented by multiple nodes corresponding to its distinct feature values.
As discussed above, functions relating to the present teachings may include a cloud. It is to be understood that although this disclosure includes a detailed description of cloud computing as discussed herein below, the implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
A cloud computing environment is service-oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
Referring now to
Hardware and software layer 760 includes hardware and software components. Examples of hardware components include: mainframes 761; RISC (Reduced Instruction Set Computer) architecture-based servers 762; servers 763; blade servers 764; storage devices 765; and networks and networking components 766. In some embodiments, software components include network application server software 767 and database software 768.
Virtualization layer 770 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 771; virtual storage 772; virtual networks 773, including virtual private networks; virtual applications and operating systems 774; and virtual clients 775.
In one example, management layer 780 may provide the functions described below. Resource provisioning 781 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 782 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 783 provides access to the cloud computing environment for consumers and system administrators. Service level management 784 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 785 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer 790 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 791; software development and lifecycle management 792; virtual classroom education delivery 793; data analytics processing 794; transaction processing 795; and an optimal decision tree generation module 796 configured to generate multiway decision trees that are more interpretable and informative than binary-split trees due to shorter rules, as discussed herein above.
The descriptions of the various embodiments of the present teachings have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications, and variations that fall within the true scope of the present teachings.
The components, operations, steps, features, objects, benefits, and advantages that have been discussed herein are merely illustrative. None of them, nor the discussions relating to them, are intended to limit the scope of protection. While various advantages have been discussed herein, it will be understood that not all embodiments necessarily include all advantages. Unless otherwise stated, all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. They are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain.
Numerous other embodiments are also contemplated. These include embodiments that have fewer, additional, and/or different components, steps, features, objects, benefits and advantages. These also include embodiments in which the components and/or steps are arranged and/or ordered differently.
While the foregoing has been described in conjunction with exemplary embodiments, it is understood that the term “exemplary” is merely meant as an example, rather than the best or optimal. Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.
It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any such actual relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.