This U.S. patent application claims priority under 35 U.S.C. § 119 to Indian Patent Application No. 202021040930, filed on Sep. 21, 2020. The entire contents of the aforementioned application are incorporated herein by reference.
The disclosure herein generally relates to labelling unlabeled datasets, and, more particularly, to a method and system for generating a labelled dataset using a training data recommender technique.
Recommender systems are among the most pervasive machine learning paradigms, and many enterprise solutions rely on them to drive sales. They facilitate most e-commerce and retail businesses by capturing the complexities of daily B2C (Business to Customer) interactions and providing meaningful and timely recommendations to customers. Sudden disruptions in such enterprise solutions affect customer preferences drastically and render historical data ineffective for modeling. In such scenarios, enterprise solutions face major challenges in handling dynamic data on machine learning based recommender systems, leading to inaccurate and delayed recommendations, so that timely adaptation gains prime significance. Typically, personalized recommendations are provided to customers by training machine learning models on large amounts of historical labeled data. Such historical data prove inefficient at capturing current user preferences, and the machine learning model falls short when there is a new user in the system, or a new product is being launched, for which no prior data is available. Another major challenge is the lack of sufficient and timely availability of labeled data.
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for generating a labelled dataset using a training data recommender is provided. The system is configured for receiving, by a labeling function generator, (i) an unlabeled dataset, and (ii) a labelled dataset comprising a training data and a test data. Further, a plurality of feature subsets is extracted from the labelled dataset. The plurality of feature subsets extracted from the labelled dataset is then fed to one or more machine learning models. Further, a plurality of labelling functions is generated for the corresponding labelled dataset using the one or more trained machine learning models. The plurality of labelling functions is executed for processing the unlabeled dataset to generate a sparse matrix. Then, a snorkel generative model is constructed for the sparse matrix to label the unlabelled dataset. Further, the one or more machine learning models are trained with an adequate amount of the labelled dataset for labelling the unlabeled dataset based on a labelled data prediction threshold which is determined using a training data recommender technique.
In one embodiment, the training data recommender technique comprises obtaining a plurality of labeled dataset threshold parameters comprising (i) an initial labeled dataset, (ii) a reduction factor, (iii) the test data, and (iv) a labelled data prediction threshold. Then, a plurality of prediction accuracy metrics of the test data associated with the labelled dataset is determined based on the one or more machine learning models. Then, a selected labeled data is computed for each machine learning model based on at least one of (i) the initial labeled dataset, and (ii) the reduction factor. Further, the adequate amount of the labeled dataset for training the one or more machine learning models is determined based on (i) the selected labeled data, (ii) the prediction accuracy metrics of the test data, and (iii) the labelled data prediction threshold.
In another aspect, a method for generating a labelled dataset using a training data recommender is provided. The method includes receiving, by a labeling function generator, (i) an unlabeled dataset, and (ii) a labelled dataset comprising a training data and a test data. Further, a plurality of feature subsets is extracted from the labelled dataset. The plurality of feature subsets extracted from the labelled dataset is then fed to one or more machine learning models. Further, a plurality of labelling functions is generated for the corresponding labelled dataset using the one or more trained machine learning models. The plurality of labelling functions is executed for processing the unlabeled dataset to generate a sparse matrix. Then, a snorkel generative model is constructed for the sparse matrix to label the unlabelled dataset. Further, the one or more machine learning models are trained with an adequate amount of the labelled dataset for labelling the unlabeled dataset based on a labelled data prediction threshold which is determined using a training data recommender technique.
In one embodiment, the training data recommender technique comprises obtaining a plurality of labeled dataset threshold parameters comprising (i) an initial labeled dataset, (ii) a reduction factor, (iii) the test data, and (iv) a labelled data prediction threshold. Then, a plurality of prediction accuracy metrics of the test data associated with the labelled dataset is determined based on the one or more machine learning models. Then, a selected labeled data is computed for each machine learning model based on the initial labeled dataset and the reduction factor. Further, the adequate amount of the labeled dataset for training the one or more machine learning models is determined based on (i) the selected labeled data, (ii) the prediction accuracy metrics of the test data, and (iii) the labelled data prediction threshold.
In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors perform actions comprising receiving, by a labeling function generator, (i) an unlabeled dataset, and (ii) a labelled dataset comprising a training data and a test data. Further, a plurality of feature subsets is extracted from the labelled dataset. The plurality of feature subsets extracted from the labelled dataset is then fed to one or more machine learning models. Further, a plurality of labelling functions is generated for the corresponding labelled dataset using the one or more trained machine learning models. The plurality of labelling functions is executed for processing the unlabeled dataset to generate a sparse matrix. Then, a snorkel generative model is constructed for the sparse matrix to label the unlabelled dataset. Further, the one or more machine learning models are trained with an adequate amount of the labelled dataset for labelling the unlabeled dataset based on a labelled data prediction threshold which is determined using a training data recommender technique.
In one embodiment, the training data recommender technique comprises obtaining a plurality of labeled dataset threshold parameters comprising (i) an initial labeled dataset, (ii) a reduction factor, (iii) the test data, and (iv) a labelled data prediction threshold. Then, a plurality of prediction accuracy metrics of the test data associated with the labelled dataset is determined based on the one or more machine learning models. Then, a selected labeled data is computed for each machine learning model based on at least one of (i) the initial labeled dataset, and (ii) the reduction factor. Further, the adequate amount of the labeled dataset for training the one or more machine learning models is determined based on (i) the selected labeled data, (ii) the prediction accuracy metrics of the test data, and (iii) the labelled data prediction threshold.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
Embodiments herein provide a method and system for generating a labelled dataset using a training data recommender technique. The disclosed method enables determining the adequate amount of labeled dataset required for training one or more machine learning models. The method of the present disclosure is based on a training data recommender technique suitably constructed with one or more newly defined parameters, such as the labelled data prediction threshold, to determine the adequate amount of labelled training data required for training the one or more machine learning models. This labelled data prediction threshold leads to a significant reduction in training time while executing the one or more machine learning models and thus enables recommender systems to quickly adapt to disruptions. Additionally, the method enables auto generation of labels for a large amount of training dataset in a timely manner. Also, the method and system, when implemented, enable a reduction in the latency for training the one or more machine learning models using the labelled data prediction threshold parameters. The method facilitates training the one or more machine learning models to label the unlabeled dataset based on a labelled data prediction threshold which is determined using the training data recommender technique. The disclosed system is further explained with the method as described in conjunction with the figures below.
Referring now to the drawings, and more particularly to the figures, where similar reference characters denote corresponding features consistently throughout the figures.
Referring to the components of the system 100, in an embodiment, the processor(s) 104 can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 104 is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.
The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) 106 can include one or more ports for connecting a number of devices (nodes) of the system 100 to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The modules 108 can be an Integrated Circuit (IC) (not shown), external to the memory 102, implemented using a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The names (or expressions or terms) of the modules or functional blocks within the modules 108 referred to herein are used for explanation and are not to be construed as limitation(s).
Referring now to the steps of the method 300, at step 302, the one or more hardware processors 104 receive, via a labeling function generator, (i) an unlabeled dataset, and (ii) a labelled dataset comprising a training data and a test data. As a preprocessing step, the one or more machine learning models are trained over the labelled dataset, and the test data is utilized for testing prediction accuracy. The present disclosure is further explained considering an example, where the system 100 is initiated to generate the labelled dataset using the training data recommender technique of the system 100.
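By way of a non-limiting illustration, a minimal Python sketch of this receiving step is given below. The file names, the 80/20 split ratio, and the use of pandas data frames are illustrative assumptions and are not prescribed by the present disclosure.

```python
# Sketch of step 302: the labeling function generator receives (i) an
# unlabeled dataset and (ii) a labelled dataset split into training data
# and test data. File names and split ratio are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

unlabeled_df = pd.read_csv("unlabeled.csv")   # hypothetical unlabeled dataset
labeled_df = pd.read_csv("labeled.csv")       # hypothetical labelled dataset

# The test data is later used to measure the prediction accuracy of the
# trained machine learning models.
train_df, test_df = train_test_split(labeled_df, test_size=0.2, random_state=42)
```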
Referring now to the steps of the method 300, at step 304, the one or more hardware processors 104 extract a plurality of feature subsets from the labeled dataset. Here, for the plurality of inputs received, the plurality of feature subsets is extracted from the labelled dataset.
Referring now to the steps of the method 300, at step 306, the one or more hardware processors 104 feed the plurality of feature subsets extracted from the labeled dataset to the one or more machine learning models, as sketched below.
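A minimal sketch of steps 304 and 306 follows, assuming numeric features, a label column named "label", and randomly sampled feature subsets; the disclosure does not prescribe a particular subset strategy or model family.

```python
# Sketch of steps 304-306: extract a plurality of feature subsets from the
# labelled training data and feed each subset to one machine learning model.
# Random half-sized subsets and these two estimators are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
feature_cols = [c for c in train_df.columns if c != "label"]

subsets, models = [], []
for estimator in (LogisticRegression(max_iter=1000), DecisionTreeClassifier()):
    # Sample a feature subset (here, half of the columns) for this model.
    cols = list(rng.choice(feature_cols,
                           size=max(1, len(feature_cols) // 2), replace=False))
    estimator.fit(train_df[cols], train_df["label"])  # train on the subset
    subsets.append(cols)
    models.append(estimator)
```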
Referring now to the steps of the method 300, at step 308, the one or more hardware processors 104 generate a plurality of labelling functions for the labelled dataset using the one or more trained machine learning models.
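One possible realization of this step, sketched below, wraps each trained model as a labelling function using the open-source Snorkel library; the 0.6 confidence cut-off below which a function abstains is an illustrative assumption.

```python
# Sketch of step 308: wrap each trained model as a Snorkel labeling function
# that votes on a single record and abstains (-1) at low confidence.
from snorkel.labeling import LabelingFunction

ABSTAIN = -1

def make_lf(model, cols, name):
    def lf(x):  # x is one row (a pandas Series) of the unlabeled dataset
        proba = model.predict_proba([x[cols]])[0]
        return int(proba.argmax()) if proba.max() >= 0.6 else ABSTAIN
    return LabelingFunction(name=name, f=lf)

lfs = [make_lf(m, cols, f"auto_lf_{i}")
       for i, (m, cols) in enumerate(zip(models, subsets))]
```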
Referring now to the steps of the method 300, at step 310, the one or more hardware processors 104 execute the plurality of labelling functions for processing the unlabeled dataset to generate a sparse matrix.
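Continuing the sketch, the labelling functions may be executed over the unlabeled dataset with Snorkel's PandasLFApplier; each row of the resulting matrix holds one vote per labelling function, with -1 marking abstentions, which makes the matrix sparse.

```python
# Sketch of step 310: execute the labelling functions over the unlabeled
# dataset to obtain the (n_rows x n_lfs) sparse vote matrix.
from snorkel.labeling import PandasLFApplier

applier = PandasLFApplier(lfs=lfs)            # lfs from the previous sketch
L_unlabeled = applier.apply(df=unlabeled_df)
```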
Referring now to the steps of the method 300, at step 312, the one or more hardware processors 104 construct, via a snorkel generative model executed by the hardware processor(s) 104, a generative model for the sparse matrix to label the unlabelled dataset.
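A minimal sketch of this step with Snorkel's generative LabelModel follows; the binary task (cardinality of 2) and the training hyperparameters are illustrative assumptions.

```python
# Sketch of step 312: fit the snorkel generative model over the sparse vote
# matrix and use it to assign labels to the unlabeled dataset.
from snorkel.labeling.model import LabelModel

label_model = LabelModel(cardinality=2, verbose=False)   # binary task assumed
label_model.fit(L_train=L_unlabeled, n_epochs=500, seed=42)
unlabeled_df["label"] = label_model.predict(L=L_unlabeled)
```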
Referring now to the steps of the method 300, at step 314, the one or more hardware processors 104 train the one or more machine learning models with an adequate amount of the labelled dataset for labelling the unlabeled dataset based on a labelled data prediction threshold which is determined using a training data recommender technique. Referring now to the above example, the method enables labelling the unlabeled dataset by generating additional labelled dataset when the available labelled dataset is inadequate to train the one or more machine learning models. This method also extends to scenarios where there is zero labelled dataset or the dataset is insufficient to generate the labelled dataset. The training data recommender technique comprises the following steps: obtaining a plurality of labeled dataset threshold parameters comprising (i) an initial labeled dataset, (ii) a reduction factor, (iii) the test data, and (iv) the labelled data prediction threshold. Further, a plurality of prediction accuracy metrics of the test data associated with the labelled dataset is determined based on the one or more machine learning models. Then, a selected labeled data is computed for each machine learning model as the product of the initial labeled dataset and the reduction factor. The required amount of the labeled dataset for training the one or more machine learning models is determined based on (i) the selected labeled data, (ii) the prediction accuracy metrics of the test data, and (iii) the labelled data prediction threshold.
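A sketch of the training data recommender technique under the stated parameters is given below; the concrete parameter values and the 1% stopping bound are illustrative assumptions.

```python
# Sketch of the training data recommender technique: shrink the selected
# labeled data by the reduction factor until the test accuracy falls below
# the labelled data prediction threshold; the last adequate fraction wins.
from sklearn.metrics import accuracy_score

def recommend_training_size(model, train_df, test_df, feature_cols,
                            initial_fraction=1.0, reduction_factor=0.5,
                            prediction_threshold=0.9):
    adequate_fraction = initial_fraction
    fraction = initial_fraction
    while fraction > 0.01:                        # illustrative lower bound
        # Selected labeled data: current fraction of the initial labeled dataset.
        selected = train_df.sample(frac=fraction, random_state=42)
        model.fit(selected[feature_cols], selected["label"])
        accuracy = accuracy_score(test_df["label"],
                                  model.predict(test_df[feature_cols]))
        if accuracy < prediction_threshold:       # threshold crossed; stop
            break
        adequate_fraction = fraction
        fraction *= reduction_factor              # apply the reduction factor
    return adequate_fraction
```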
For the experiments, a scenario has been emulated in which there is not enough labeled data, i.e., operation lies on the left of the labelled data prediction threshold.
In one embodiment, a discriminative model (XGBoost) is trained over these portions of data ((X+d1) %, (X+d2) %, and so on) to measure the accuracy metrics over the held-out test data to draw the curve. The technique described in the present disclosure is compared with the trained discriminative model over increasing portions of the actual labeled dataset (again (X+d1) %, (X+d2) %, and so on). Further, iterative executions are performed on each experiment to plot the averages. The closer the "Auto labelling functions X" curve is to the "No LFs" curve, the better is the labeling achieved (by using only X % of the available labeled data). The starting point X % of each graph lies on the left of the labelled data prediction threshold for each dataset, which has been determined by the training data recommender technique. Then, the Time to Accuracy (TTA) metric is plotted on the secondary Y-axis. This metric illustrates that the desired accuracy (similar to "No labelling functions") is obtained with less training data in reduced time. The experimented amount of gold data (e.g., 0.8%, 1%, 2% of the entire dataset) is utilized to generate the labelling functions. The labelling functions are then applied on the unlabeled dataset to generate labels.
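For intuition only, the following sketch reproduces the shape of such an experiment with an XGBoost discriminative model; the portion grid and the hyperparameters are illustrative assumptions, not the reported experimental setup.

```python
# Sketch of the experiment: train XGBoost over increasing portions of the
# training data and record accuracy and Time to Accuracy (TTA) per portion.
import time
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

for portion in (0.01, 0.02, 0.05, 0.10, 0.25, 0.50, 1.00):  # assumed grid
    sample = train_df.sample(frac=portion, random_state=42)
    clf = XGBClassifier(n_estimators=100, eval_metric="logloss")
    start = time.time()
    clf.fit(sample[feature_cols], sample["label"])
    tta = time.time() - start                     # training time for this run
    acc = accuracy_score(test_df["label"], clf.predict(test_df[feature_cols]))
    print(f"portion={portion:.2f} accuracy={acc:.3f} tta={tta:.1f}s")
```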
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of the present disclosure herein address the unresolved problem of determining the adequate amount of labeled dataset required for training the one or more machine learning models. The embodiments thus provide a method and system for generating a labelled dataset using a training data recommender technique. Moreover, the embodiments herein further provide a time-efficient, accurate, and scalable system for generating labelled data using the training data recommender technique. The method of the present disclosure addresses reducing the training time required to train the one or more machine learning models required for labelling the data using the proposed training data recommender technique. The method of the present disclosure is based on a training data recommender technique suitably constructed with newly defined parameters, such as the labelling data threshold, to determine the sufficiency of labelled training data required for the one or more machine learning models. Additionally, the method enables auto generation of labels for a large amount of training dataset in a timely manner.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Number: 202021040930
Date: Sep. 2020
Country: IN
Kind: national