The present disclosure relates generally to artificial intelligence, and relates more particularly to devices, non-transitory computer-readable media, and methods for automatically recommending predefined scripts for constructing machine learning models.
Machine learning is a subcategory of artificial intelligence that uses statistical models, executed on computers, to perform specific tasks. Rather than provide the computers with explicit instructions, the statistical models are used by the computers to learn patterns and predict the correct tasks to perform. The statistical models may be trained using a set of sample or training data (which may be labeled or unlabeled), which helps the computers to learn the patterns. At run time, new data is processed based on the learned patterns to predict the correct tasks from the new data. Machine learning therefore may be used to automate tasks in a wide variety of applications, including virtual personal assistants, email filtering, computer vision, customer support, fraud detection, and other applications.
The teachings of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
To facilitate understanding, similar reference numerals have been used, where possible, to designate elements that are common to the figures.
The present disclosure broadly discloses methods, computer-readable media, and systems for automatically recommending predefined scripts for constructing machine learning models. In one example, a method performed by a processing system including at least one processor includes building a set of test data for a machine learning model, wherein the building is performed in response to receiving a target data set from a user, wherein the target data set is a data set on which the machine learning model is to be trained to operate, identifying a subset of predefined features engineering action scripts from among a plurality of predefined features engineering action scripts, wherein the subset of predefined features engineering action scripts is determined to be applicable to the set of test data, and automatically generating a recommended features engineering action script for operating on the target data set, wherein the automatically generating comprises customizing at least one parameter of at least one predefined features engineering action script of the subset of predefined features engineering action scripts to extract data values from at least one location in the target data set, and wherein the recommended features engineering action script is recommended to the user for inclusion in a features engineering component of the machine learning model.
In another example, a non-transitory computer-readable medium may store instructions which, when executed by a processing system in a communications network, cause the processing system to perform operations. The operations may include building a set of test data for a machine learning model, wherein the building is performed in response to receiving a target data set from a user, wherein the target data set is a data set on which the machine learning model is to be trained to operate, identifying a subset of predefined features engineering action scripts from among a plurality of predefined features engineering action scripts, wherein the subset of predefined features engineering action scripts is determined to be applicable to the set of test data, and automatically generating a recommended features engineering action script for operating on the target data set, wherein the automatically generating comprises customizing at least one parameter of at least one predefined features engineering action script of the subset of predefined features engineering action scripts to extract data values from at least one location in the target data set, and wherein the recommended features engineering action script is recommended to the user for inclusion in a features engineering component of the machine learning model.
In another example, a device may include a processing system including at least one processor and a non-transitory computer-readable medium storing instructions which, when executed by the processing system when deployed in a communications network, cause the processing system to perform operations. The operations may include building a set of test data for a machine learning model, wherein the building is performed in response to receiving a target data set from a user, wherein the target data set is a data set on which the machine learning model is to be trained to operate, identifying a subset of predefined features engineering action scripts from among a plurality of predefined features engineering action scripts, wherein the subset of predefined features engineering action scripts is determined to be applicable to the set of test data, and automatically generating a recommended features engineering action script for operating on the target data set, wherein the automatically generating comprises customizing at least one parameter of at least one predefined features engineering action script of the subset of predefined features engineering action scripts to extract data values from at least one location in the target data set, and wherein the recommended features engineering action script is recommended to the user for inclusion in a features engineering component of the machine learning model.
As discussed above, machine learning uses statistical models, executed on computers, to perform specific tasks. Rather than provide the computers with explicit instructions, the statistical models are used by the computers to learn patterns and to predict the correct tasks to perform. The statistical models may be trained using a set of sample or training data (which may be labeled or unlabeled), which helps the computers to learn the patterns. At run time, new data (test data) is processed based on the learned patterns to predict the correct tasks from the new data. Machine learning therefore may be used to automate tasks in a wide variety of applications, including virtual personal assistants, email filtering, computer vision, customer support, fraud detection, and other applications.
The construction of machine learning models is a complicated process that is typically performed by data scientists who have advanced software and programming knowledge. However, these data scientists may lack the specific domain expertise needed to ensure that the machine learning models perform effectively for their intended purposes. For instance, a machine learning model that is constructed to function as a virtual personal assistant should behave differently than a machine learning model that is constructed to function as a customer support tool. An effective machine learning model must be able to learn how the specific types of data the model receives as input (e.g., an incoming text message from a specific phone number, versus keywords in a query posed to a customer support chat bot) map to specific tasks or actions (e.g., silencing a text message alert, versus identifying a department to which to direct a customer query).
Moreover, even within the same domain, separate machine learning models are often constructed for each machine learning problem. In some cases, multiple versions of machine learning models may even be created for the same machine learning problem, where each version may include different experimental feature engineering code logic. The various combinations of copies of model code may therefore become very expensive to store and maintain.
Examples of the present disclosure expose the internal logic of machine learning modeling in order to make the construction of machine learning models a more configuration-driven, and therefore more user-friendly, task. In other words, the exposure of the internal logic makes it possible for an individual who may possess expertise in a particular domain, but who may lack knowledge of data science and programming, to construct an effective machine learning model for a domain problem. In one particular example, the portions of the internal logic that are exposed comprise the feature engineering portions of the machine learning model, e.g., the logic blocks that define the features that will be extracted from raw input data and processed by a machine learning algorithm in order to generate a prediction.
In one example, the present disclosure defines a set of rules (e.g., standards) as atomic building blocks for constructing a machine learning model. From these building blocks, a user may “programmatically” construct a machine learning model by manipulating the configuration file of the model with a human language-like syntax (e.g., a syntax that is closer to human language—such as English—than to computer syntax) in order to tailor the machine learning model to a specific problem or use case.
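To illustrate the idea of a configuration-driven, human-language-like model definition, the following is a hypothetical sketch. All block names, field names, and the rendering function are illustrative assumptions, not the actual syntax of the disclosure.

```python
# Hypothetical sketch: a configuration that a user might edit to assemble a
# machine learning model from predefined building blocks. Every name here
# (fe_date_diff, fe_word_count, field names) is an illustrative assumption.

MODEL_CONFIG = {
    "use_case": "fraud detection for phone replacement claims",
    "features": [
        # each entry customizes a generic, predefined action script
        {"script": "fe_date_diff", "col_a": "ActDate", "col_b": "ClaimDate",
         "unit": "days", "output": "days_to_claim"},
        {"script": "fe_word_count", "col": "Description",
         "output": "description_length"},
    ],
    "algorithm": "random_forest",
}

def describe(config):
    """Render the configuration as readable text, illustrating a syntax
    that is closer to human language than to computer syntax."""
    lines = [f"Model for: {config['use_case']}"]
    for feature in config["features"]:
        lines.append(f"  extract {feature['output']} using {feature['script']}")
    lines.append(f"  then predict with {config['algorithm']}")
    return "\n".join(lines)
```

A domain expert could then tailor the model to a new problem by editing only the configuration entries, without touching the underlying script logic.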
In further examples, the present disclosure provides a system and user interface that allows the scripts (logic blocks) for the atomic building blocks to be crowdsourced. In other words, examples of the present disclosure allow users to create, upload, and save generic scripts, including scripts for features engineering, to a library of scripts. Other users may later access these generic scripts and customize the generic scripts by setting the values for various parameters of the scripts. Customization of a generic script may result in the creation of an action block for performing a specific features engineering task. The action block may then be used to populate a configuration file for a machine learning model, where the action block defines the features that the machine learning model will extract from test data and apply a machine learning algorithm to.
In still further examples, the present disclosure may assist a user who is constructing a machine learning model by recommending predefined scripts based on the test data to be processed by the machine learning model. For instance, the user may provide a set of test data to a recommendation engine. The recommendation engine may have access to an inventory of available scripts and the histories of usage of those scripts (e.g., which scripts have been used in which machine learning models). The recommendation engine may train on the available scripts and then feed the test data to the trained recommendation engine in order to determine which of the available scripts may be applicable to the test data. These and other aspects of the present disclosure are discussed in greater detail below in connection with the examples of
To further aid in understanding the present disclosure,
In one example, the system 100 may comprise a core network 102. The core network 102 may be in communication with one or more access networks 120 and 122, and with the Internet 124. In one example, the core network 102 may functionally comprise a fixed mobile convergence (FMC) network, e.g., an IP Multimedia Subsystem (IMS) network. In addition, the core network 102 may functionally comprise a telephony network, e.g., an Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) backbone network utilizing Session Initiation Protocol (SIP) for circuit-switched and Voice over Internet Protocol (VoIP) telephony services. In one example, the core network 102 may include at least one application server (AS) 104, at least one database (DB) 106, and a plurality of edge routers 128-130. For ease of illustration, various additional elements of the core network 102 are omitted from
In one example, the access networks 120 and 122 may comprise Digital Subscriber Line (DSL) networks, public switched telephone network (PSTN) access networks, broadband cable access networks, Local Area Networks (LANs), wireless access networks (e.g., an IEEE 802.11/Wi-Fi network and the like), cellular access networks, 3rd party networks, and the like. For example, the operator of the core network 102 may provide a cable television service, an IPTV service, or any other types of telecommunication services to subscribers via access networks 120 and 122. In one example, the access networks 120 and 122 may comprise different types of access networks, may comprise the same type of access network, or some access networks may be the same type of access network and others may be different types of access networks. In one example, the core network 102 may be operated by a telecommunication network service provider. The core network 102 and the access networks 120 and 122 may be operated by different service providers, the same service provider, or a combination thereof, or the access networks 120 and/or 122 may be operated by entities having core businesses that are not related to telecommunications services, e.g., corporate, governmental, or educational institution LANs, and the like.
In one example, the access network 120 may be in communication with one or more user endpoint devices 108 and 110. Similarly, the access network 122 may be in communication with one or more user endpoint devices 112 and 114. The access networks 120 and 122 may transmit and receive communications between the user endpoint devices 108, 110, 112, and 114, between the user endpoint devices 108, 110, 112, and 114 and the server(s) 126, the AS 104, other components of the core network 102, devices reachable via the Internet in general, and so forth. In one example, each of the user endpoint devices 108, 110, 112, and 114 may comprise any single device or combination of devices that may comprise a user endpoint device. For example, the user endpoint devices 108, 110, 112, and 114 may each comprise a mobile device, a cellular smart phone, a gaming console, a set top box, a laptop computer, a tablet computer, a desktop computer, an application server, a bank or cluster of such devices, and the like.
In one example, one or more servers 126 may be accessible to user endpoint devices 108, 110, 112, and 114 via Internet 124 in general. The server(s) 126 may operate in a manner similar to the AS 104, which is described in further detail below.
In accordance with the present disclosure, the AS 104 may be configured to provide one or more operations or functions in connection with examples of the present disclosure for automatically recommending predefined scripts for constructing machine learning models, as described herein. For instance, the AS 104 may be configured to operate as a Web portal or interface via which a user endpoint device, such as any of the UEs 108, 110, 112, and/or 114, may access various predefined logic blocks and machine learning algorithms. The AS 104 may further allow the user endpoint device to manipulate the predefined logic blocks and machine learning algorithms in order to construct a machine learning model that is tailored for a specific use case. For instance, as discussed in further detail below, manipulation of the predefined logic blocks may involve setting parameters of the logic blocks and/or arranging the logic blocks in a pipeline-style execution sequence in order to accomplish desired feature engineering for the machine learning model. Manipulation of the machine learning algorithms may involve selecting one or more specific machine learning algorithms to process features extracted from raw test data (e.g., in accordance with the desired feature engineering) and/or specifying a manner in which to combine the outputs of multiple machine learning models to generate a single prediction.
In some examples, the AS 104 may further function as a recommendation engine that recommends predefined logic blocks which might be implemented as part of a machine learning model to process a set of test data, based on features of the test data. The recommendation feature may provide additional assistance to users who may lack software engineering and/or programming expertise, as the recommendation engine knows what predefined logic blocks are available and how the predefined logic blocks have been used in the past (e.g., what sorts of features the predefined logic blocks have been used to extract).
In accordance with the present disclosure, the AS 104 may comprise one or more physical devices, e.g., one or more computing systems or servers, such as computing system 800 depicted in
The AS 104 may have access to at least one database (DB) 106, where the DB 106 may store the predefined logic blocks that may be manipulated in order to perform feature engineering for a machine learning model. In one example, at least some of these predefined logic blocks are atomic and generic, which allows the predefined logic blocks to be reused for various different use cases (e.g., for various machine learning models that are programmed to carry out various different tasks). Metadata associated with the predefined logic blocks may indicate machine learning models in which the predefined logic blocks have previously been used. In one example, at least some of the predefined logic blocks may be crowdsourced, e.g., contributed by individual users of the system 100 who may have software engineering and/or programming expertise.
The DB 106 may also store a plurality of different machine learning algorithms that may be selected for inclusion in a machine learning model. Some of these machine learning algorithms are discussed in further detail below; however, the DB 106 may also store additional machine learning algorithms that are not explicitly specified. In addition, the DB 106 may store constructed machine learning models. This may help the AS 104, for instance, to identify the most frequently reused predefined logic blocks, to recommend predefined logic blocks for particular uses (based on previous uses of the predefined logic blocks), and to allow for sharing of the constructed machine learning models among users.
In one example, DB 106 may comprise a physical storage device integrated with the AS 104 (e.g., a database server or a file server), or attached or coupled to the AS 104, to store predefined logic blocks, machine learning algorithms, and/or machine learning models, in accordance with the present disclosure. In one example, the AS 104 may load instructions into a memory, or one or more distributed memory units, and execute the instructions for automatically recommending predefined scripts for constructing machine learning models, as described herein. An example method for automatically recommending predefined scripts for constructing machine learning models is described in greater detail below in connection with
It should be noted that the system 100 has been simplified. Thus, those skilled in the art will realize that the system 100 may be implemented in a different form than that which is illustrated in
In one example, the machine learning algorithm 202 is an algorithm that takes test data as input, and, based on processing of the test data, generates a prediction as an output. The prediction may comprise an appropriate action to be taken in response to the test data. As the machine learning algorithm 202 is exposed to more data over time, the machine learning algorithm 202 may adjust the manner in which incoming test data is processed (e.g., by adjusting one or more parameters of the machine learning algorithm 202) in order to improve the quality of the predictions. For instance, the machine learning algorithm 202 may receive feedback regarding the quality of the predictions, and may adjust one or more parameters in response to the feedback in order to ensure that high-quality predictions are generated more consistently. In one example, the machine learning algorithm 202 may initially be trained on a set of training data (which may be labeled or unlabeled). However, even after training, the machine learning algorithm 202 may continue to adjust the parameters as more test data is processed. In one example, the machine learning algorithm 202 may be any machine learning algorithm, such as a gradient boosting machine (GBM) algorithm, an extreme gradient boosting (XGBoost) algorithm, a LightGBM algorithm, or a random forest algorithm, for instance.
In one example, the features engineering component 204 utilizes at least one data mining technique in order to extract useful features from the test data. The features engineering component 204 may rely on domain knowledge (e.g., knowledge of the domain for which the machine learning model 200 is being constructed) in order to define the features that should be extracted from the test data. In one example, the features engineering component 204 comprises a set of configurable logics 206 and a runtime execution component 208.
The set of configurable logics 206 may generally comprise components of the machine learning model 200 that can be configured by a user. For instance, examples of the present disclosure may present a system and user interface that allow a user to configure and customize certain parameters of the machine learning model 200 for a particular use. As discussed in further detail below, some of these parameters may be encoded in programming blocks. The programming blocks may be reusable in the sense that the programming blocks generally define certain aspects of the corresponding parameters, while allowing the user to customize these aspects through the definition of specific values. In one example, the set of configurable logics 206 may include a set of core parameters 210 and a set of tunable parameters 212.
In one example, the set of core parameters 210 may include programmable operation logic blocks for basic operations (e.g., load data, save data, fetch remote data, etc.), where the operation logic blocks can be combined, and the values for the operation logic blocks can be defined, to construct more complex operations. For instance, a sequence of the basic operations, when executed in order, may result in a more complex operation being performed.
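The composition of basic operation logic blocks into a more complex operation can be sketched as follows. This is a minimal illustration under assumed names; the actual operation blocks and execution mechanism of the disclosure are not specified here.

```python
# Hypothetical sketch of core-parameter operation blocks: each basic
# operation (load, transform, save) is a small reusable function, and a
# more complex operation is an ordered sequence of basic operations.

def load_data(state, source):
    state["data"] = list(source)          # e.g., read rows from a file or table
    return state

def filter_rows(state, predicate):
    state["data"] = [r for r in state["data"] if predicate(r)]
    return state

def save_data(state, sink):
    sink.extend(state["data"])            # e.g., write rows out
    return state

def run_pipeline(steps):
    """Execute basic operation blocks in order to perform a complex operation."""
    state = {}
    for operation, argument in steps:
        state = operation(state, argument)
    return state

out = []
run_pipeline([
    (load_data, [1, 2, 3, 4]),
    (filter_rows, lambda r: r % 2 == 0),  # keep even values only
    (save_data, out),
])
# out now holds [2, 4]
```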
In one example, the set of tunable parameters 212 may include blocks of predefined code logic that may be used to extract common feature types. For instance, a common feature type may comprise a number of days elapsed between two events, a total number of words in a string of text, or some other feature type. The specifics of the feature type may vary based on application. For instance, for a machine learning model that is designed to detect fraudulent claims for mobile phone replacements, the number of days elapsed between the activation date of a mobile phone and a date a claim for replacement of the mobile phone was submitted may be a feature that one would want to extract. However, for a machine learning model that is designed to remind a user to take a prescribed medication (e.g., a virtual personal assistant), the number of days elapsed between the last time the user took the prescribed medication and the current day may be a feature that one would want to extract. Thus, a predefined code logic block to extract the number of days elapsed between events may be customized by specifying the events for which the dates are to be extracted. The events may be specified by indicating a column of a data set in which the dates of the events are recorded.
The method 300 begins in step 302 and proceeds to step 304. At step 304, the processing system may present a first user interface to a first user. In one example, the first user interface may be presented via a user endpoint device operated by the first user. The first user interface may comprise an interface that allows the first user to configure a generic features engineering action script, where the generic features engineering action script is configured to assist a second user (different from the first user) in constructing a customized script for the features engineering component of a machine learning model.
As illustrated, the first user interface 400 may include a parameter definition section 402. The parameter definition section 402 may allow the parameters of the first features engineering action script (e.g., the number and/or format of the first features engineering action script's inputs and outputs) to be defined. In one example, the parameter definition section 402 may include a first field 404 that allows an attribute name to be defined. The attribute name may correspond to a data item on which the first features engineering action script is to act or a format of the first features engineering action script's output. The first user interface 400 may also include a second field 406 that allows an input type (e.g., text string, dropdown list, etc.) of the data item defined in the first field 404 to be defined.
Referring back to
For instance, referring again to
Referring back to
The example second user interface 410 may further include second and third fields 414 and 416, respectively, to specify the locations (columns) in an input dataset that correspond to the “Col A” and “Col B” parameters discussed above (i.e., locations from which to retrieve data to be processed by the features engineering action script). As discussed above, the fe_date_diff script may compute a time elapsed between two dates, e.g., by subtracting a value in a first column of the input dataset (column A) from a value in a second column of the input dataset (column B).
A fourth field 418 of the example second user interface 410 may provide a drop down menu that allows the user to select the unit of measure for the features engineering action script's output, e.g., a computed difference (which in the example of
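A minimal sketch of what the fe_date_diff action script described above might compute is shown below. The parameter names (col_a, col_b, unit) follow the fields of the example second user interface; the implementation itself is an assumption for illustration.

```python
from datetime import date

# Illustrative sketch of the fe_date_diff action script: compute the time
# elapsed between two date columns of an input record, in a selectable unit.
# The implementation details are assumed, not taken from the disclosure.

def fe_date_diff(record, col_a, col_b, unit="days"):
    delta = record[col_b] - record[col_a]   # column B minus column A
    if unit == "days":
        return delta.days
    if unit == "weeks":
        return delta.days // 7
    raise ValueError(f"unsupported unit: {unit}")

# e.g., days between a phone's activation date and a replacement claim date
claim = {"ActDate": date(2020, 1, 1), "ClaimDate": date(2020, 1, 15)}
elapsed = fe_date_diff(claim, "ActDate", "ClaimDate", unit="days")  # 14
```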
In use, when the example second user interface 410 is configured as shown in
Referring back to
The method 300 may end in step 312.
It should be noted that although the method 300 provides a first user interface via which the first user may configure a second user interface and corresponding features engineering action script, the first user may also write the features engineering action script without using the first user interface (particularly if the first user has some software programming expertise). However the generic features engineering action scripts are generated, the generic features engineering action scripts may be stored (along with corresponding user interfaces that allow the generic features engineering action scripts to be customized) for use (and reuse) by other users.
Thus, additional generic features engineering action scripts can be generated in a manner similar to that described in connection with
In some cases, examples of the present disclosure may go a step further and provide recommendations to a user regarding generic features engineering action scripts that the user may wish to use when constructing a machine learning model.
The method 500 begins in step 502 and proceeds to step 504. At step 504, the processing system may build a set of training data, based on an inventory of predefined features engineering action scripts available in a library and on prior usages of the predefined features engineering action scripts in machine learning models. For instance, in one example, the predefined features engineering action scripts may be indexed according to machine learning models in which the predefined features engineering action scripts have been used. Examining the machine learning models in which the predefined features engineering action scripts have been used may allow the processing system to identify potential use cases for the features engineering action scripts. As discussed above, features engineering action scripts may be written in a generic manner, such that the features engineering action scripts can be reused in various different contexts via customization (e.g., defining different values for the attributes of the features engineering action scripts). For instance, a generic features engineering action script that computes a difference between two values may be used to compute a spread between zip codes, a difference between a maximum computing resource usage and an actual computing resource usage, a difference between a number of miles driven by a car as of a first date and a number of miles driven by the car as of a second, subsequent date, or any other numerical difference depending on the use case of a machine learning model.
Building the set of training data may comprise identifying the types of data (e.g., dates, integers, floating-point numbers, text strings, etc.) on which the predefined features engineering action scripts have been used to operate. For instance, Table 1, below, illustrates an example set of training data that may be built based on an examination of the inventory of predefined features engineering action scripts:
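The construction of such a training set can be sketched as follows. The script names, prior-use records, and attribute set are illustrative assumptions standing in for the actual inventory and Table 1.

```python
# Hedged sketch of step 504: each predefined script in the inventory is
# reduced to one training record describing the data types the script has
# operated on in prior machine learning models. All inventory contents
# here are illustrative assumptions.

SCRIPT_INVENTORY = {
    "fe_date_diff":  {"prior_uses": [("date", "date")]},
    "fe_word_count": {"prior_uses": [("string",)]},
    "fe_zip_spread": {"prior_uses": [("integer", "integer")]},
}

ALL_TYPES = ("date", "integer", "string")

def build_training_data(inventory):
    """One record per script: a boolean attribute per data type it consumes."""
    records = []
    for name, meta in inventory.items():
        used = {t for signature in meta["prior_uses"] for t in signature}
        records.append({"fe_script": name,
                        **{t: t in used for t in ALL_TYPES}})
    return records

training_data = build_training_data(SCRIPT_INVENTORY)
```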
It should be noted that, in practice, the set of training data may comprise a larger number of attributes (data types) and records than is shown in Table 1 (which is simplified for ease of explanation). For instance, additional attributes or data types that may be extracted from the records may include categorical indicators for tags that may be popular for a specific use case (e.g., in the example of Table 1, such tags may include “tag_is_fraud,” “tag_is_churn,” “tag is network,” “tag_is_finance,” and the like). Additional features engineering scripts may be available once additional contributions are submitted to the scripts inventory to solve additional use cases. In general, the accuracy of the recommendation system will improve as the recommendation system is exposed to more attributes and more records.
In step 506, the processing system may feed the set of training data to the recommendation system for training, in order to generate a trained classification model. The recommendation system in this case may function as a classification model that can classify a data type according to the types of predefined features engineering action scripts that may operate on the data type.
The method 500 may end in step 508. It should be noted, however, that steps 504 and 506 may be repeated any number of times in order to improve the accuracy of the classification model. For instance, steps 504 and 506 may be repeated periodically (e.g., daily, weekly, etc.), randomly, or in response to the occurrence of a predefined action (e.g., a threshold number of new features engineering scripts being added to the library).
The method 600 begins in step 602 and proceeds to step 604. At step 604, the processing system may build a set of test data, based on a target data set provided by a user. For instance, the target data set may be a set of data for which the user wishes to build a machine learning model (i.e., a data set on which the machine learning model is to operate). The user may also specify a use case associated with the target data set, where the use case defines the information that the user wishes to extract from the target data set. As an example, the user may want to examine a target data set of records relating to customer claims for replacement mobile phones in order to detect which claims are potentially fraudulent (use case). Table 2, below, illustrates an example target data set relating to customer claims for replacement mobile phones (where the target data set has been simplified for ease of explanation, similar to Table 1 above):
In one example, building the set of test data based on the target data set involves formatting the test data set to specify the data types of the data contained in the target data set. For instance, the ActDate and Claim Date columns of the example target data set of Table 2 contain dates, while the Description column contains text strings, and the BillZip and ShipZip columns contain integers. Thus, an example set of test data that may be built from the example target data set of Table 2 may be represented as shown in Table 3A, below:
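The mapping of target data set columns to data types (the Table 2 to Table 3A transformation) can be sketched as follows. The column names follow the example above; the type-inference rules are assumptions for illustration.

```python
from datetime import date

# Illustrative sketch of building the set of test data in step 604: map each
# column of the user's target data set to a data type. The inference rules
# below are assumptions, not the actual logic of the disclosure.

def infer_type(value):
    if isinstance(value, date):
        return "date"
    if isinstance(value, int):
        return "integer"
    return "string"

def build_test_data(target_rows):
    """Map each column name of the target data set to the data type of its values."""
    first_row = target_rows[0]
    return {column: infer_type(value) for column, value in first_row.items()}

target = [{"ActDate": date(2020, 1, 1), "ClaimDate": date(2020, 1, 15),
           "Description": "cracked screen", "BillZip": 30301, "ShipZip": 30309}]
column_types = build_test_data(target)
# e.g., {"ActDate": "date", ..., "Description": "string", "BillZip": "integer", ...}
```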
In one example, the set of test data is built against all possible values of the predefined features engineering action scripts in a library of predefined features engineering action scripts.
In step 606, the processing system may identify a subset of the predefined features engineering action scripts that are applicable to the set of test data. In one example, the applicable subset of the predefined features engineering action scripts may be identified by feeding the set of test data built in step 604 to a recommendation system (which may have been trained in accordance with the method 500, described above).
For instance, in Table 3A each record (ID) in the target data set may be compared against each predefined features engineering action script (FE Script) in order to determine whether the data types contained in the record are the same as the data types on which the predefined features engineering action script operates. If the data types contained in the record are the same as the data types on which the predefined features engineering action script operates, then the predefined features engineering action script may be considered potentially applicable to the record. In this case, the Apply column of Table 3A may be updated, as indicated in Table 3B, below, to indicate whether or not the predefined features engineering action script is applicable to the record.
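The type-matching test described above can be sketched as a simple multiset check. The script registry below is hypothetical (the script names and their declared input types are assumptions, not taken from the library disclosed here); the logic shows only the stated rule, namely that a script is potentially applicable when the record supplies every data type the script operates on.

```python
# Hypothetical registry: each predefined features engineering action
# script declares the data types its parameters operate on.
FE_SCRIPTS = {
    "fe_date_diff": ["date", "date"],        # difference between two dates
    "fe_zip_match": ["integer", "integer"],  # compare two ZIP-code columns
    "fe_text_len": ["text"],                 # length of a text field
}

def applicable_scripts(record_types: dict) -> dict:
    """Fill the 'Apply' flag of Table 3B: True when the record's column
    types can satisfy every input type the script requires."""
    available = list(record_types.values())
    result = {}
    for name, needed in FE_SCRIPTS.items():
        remaining = available.copy()
        ok = True
        for t in needed:
            if t in remaining:
                remaining.remove(t)  # consume one column of this type
            else:
                ok = False
                break
        result[name] = ok
    return result

record = {"ActDate": "date", "ClaimDate": "date",
          "Description": "text", "BillZip": "integer", "ShipZip": "integer"}
apply_flags = applicable_scripts(record)
```

A record with two date columns, two integer columns, and a text column would mark all three hypothetical scripts as applicable; a record lacking date columns would exclude `fe_date_diff`.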
In step 608, the processing system may automatically generate a recommended features engineering action script for operating on the target data set, by customizing at least one parameter of at least one predefined features engineering action script of the subset to extract data values from at least one location in the target data set. The recommended features engineering action script may be recommended for inclusion in a features engineering component of the machine learning model that is to operate on the target data set.
For instance, in one example, the processing system may select a first predefined features engineering action script from the subset of the predefined features engineering action scripts, where the first predefined features engineering action script was determined to be applicable to the test data as discussed above. The processing system may then map the first predefined features engineering action script back to the target data set in order to determine which locations (columns) of the data set can provide values for the parameters of the first predefined features engineering action script. For instance, referring back to the example fe_date_diff script, the “col A” parameter may map to the “ActDate” column of Table 2, while the “col B” parameter may map to the “ClaimDate” column of Table 2. Thus, the automatically generated recommended features engineering action script may read as:
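Sketched in Python, the customization might look like the following. The script name `fe_date_diff`, the "col A"/"col B" parameter names, and the ActDate/ClaimDate bindings come from the example above; everything else, including the rendered call syntax, is an illustrative assumption rather than the exact script text.

```python
# The customized recommendation: the generic "col A" / "col B" parameters
# of the predefined fe_date_diff script bound to columns of Table 2.
recommended = {"script": "fe_date_diff", "col A": "ActDate", "col B": "ClaimDate"}

def render_script(rec: dict) -> str:
    """Render the customized script as a call string (hypothetical syntax)."""
    return f'{rec["script"]}({rec["col A"]}, {rec["col B"]})'

rendered = render_script(recommended)
```

The resulting call would compute, for each record, the difference between the activation date and the claim date, a feature that might help flag suspiciously quick claims in the fraud-detection use case.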
The automatically generated recommended features engineering action script may be used to populate a configuration file for a machine learning model that may be trained to operate on the target data set and similar data sets.
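The configuration-file step could be sketched as below. The file layout, key names, and JSON format are assumptions for illustration; the disclosure does not specify the configuration schema.

```python
import json

def populate_config(recommended_scripts: list) -> str:
    """Write the recommended features engineering action scripts into a
    model configuration (hypothetical schema; key names are assumptions)."""
    config = {
        "model": {"type": "classifier"},
        "features_engineering": {"actions": recommended_scripts},
    }
    return json.dumps(config, indent=2)

config_text = populate_config(
    [{"script": "fe_date_diff", "col A": "ActDate", "col B": "ClaimDate"}]
)
```

A model trained from such a configuration could then apply the same features engineering actions to similar data sets without re-deriving the recommendations.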
The method 600 may end in step 610.
An automatically recommended features engineering action script that is generated according to
It should be noted that the methods 300, 500, and 600 may be expanded to include additional steps or may be modified to include additional operations with respect to the steps outlined above. In addition, although not expressly specified, one or more steps, functions, or operations of the methods 300, 500, and 600 may include a storing, displaying, and/or outputting step as required for a particular application. In other words, any data, records, fields, and/or intermediate results discussed in the method can be stored, displayed, and/or outputted either on the device executing the method or to another device, as required for a particular application. Furthermore, steps, blocks, functions or operations in
Furthermore, one or more hardware processors can be utilized in supporting a virtualized or shared computing environment. The virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtualized environments, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented. The hardware processor 802 can also be configured or programmed to cause other devices to perform one or more operations as discussed above. In other words, the hardware processor 802 may serve the function of a central controller directing other devices to perform the one or more operations as discussed above.
It should be noted that the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a programmable gate array (PGA) including a Field PGA, or a state machine deployed on a hardware device, a computing device or any other hardware equivalents, e.g., computer readable instructions pertaining to the method discussed above can be used to configure a hardware processor to perform the steps, functions and/or operations of the above disclosed methods 300, 500, and 600. In one example, instructions and data for the present module or process 805 for automatically recommending predefined scripts for constructing machine learning models (e.g., a software program comprising computer-executable instructions) can be loaded into memory 804 and executed by hardware processor element 802 to implement the steps, functions, or operations as discussed above in connection with the illustrative methods 300, 500, and 600. Furthermore, when a hardware processor executes instructions to perform “operations,” this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component (e.g., a co-processor and the like) to perform the operations.
The processor executing the computer readable or software instructions relating to the above described method can be perceived as a programmed processor or a specialized processor. As such, the present module 805 for automatically recommending predefined scripts for constructing machine learning models (including associated data structures) of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, non-volatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette, and the like. Furthermore, a “tangible” computer-readable storage device or medium comprises a physical device, a hardware device, or a device that is discernible by the touch. More specifically, the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server.
While various examples have been described above, it should be understood that they have been presented by way of illustration only, and not limitation. Thus, the breadth and scope of any aspect of the present disclosure should not be limited by any of the above-described examples, but should be defined only in accordance with the following claims and their equivalents.