The present invention relates to the field of data processing, in particular, to methods and apparatuses for a learner for resource constrained devices.
Many machine learning and data mining techniques are designed to operate in devices with sufficient resources to handle large amounts of data and models. With the popularity of mobile devices like smartphones and personal digital assistants (PDAs), the number of applications running on these devices is also increasing rapidly. These devices introduce severe storage and time constraints for any learning algorithm. Typically, a fast online algorithm is required. Moreover, the model needs to be updated continuously since the instance space is limited.
For example, mobile context learning has been pursued under the banners of human-computer interaction (HCI) and ubiquitous and pervasive computing. Context is inferred from user activity, the environment, and the state of the mobile device. The model needs to be updated upon receipt of new data. These devices, unlike desktops, do not enjoy an abundance of resources, and no learner has been designed with the constrained environment of these devices in mind.
The present invention will be described by way of exemplary embodiments, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which:
Illustrative embodiments of the present invention include, but are not limited to, methods and apparatuses for registering one or more votes to predict a value for an attribute of a received instance. In various embodiments, the registering is performed in a weighted manner based at least in part on a weight and predicted target values associated with at least one of one or more rules whose antecedent has been determined to have been met. In various embodiments, the meeting of the antecedent is determined based at least in part on one or more attributes values of one or more other attributes of the received instance. In various embodiments, the invention further includes determining whether the predicted target value for which one or more votes are registered correctly predicted the attribute value of the received instance, and adjusting the associated weight of the rule accordingly. In various embodiments, the adjustment may include incrementing the weight if the predicted target value for which the one or more votes are registered correctly predicted the attribute value of the received instance, and decrementing the weight if the predicted target value for which the one or more votes are registered incorrectly predicted the attribute value of the received instance.
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that alternate embodiments may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternate embodiments may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments.
Further, various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
The phrase “in one embodiment” is used repeatedly. The phrase generally does not refer to the same embodiment; however, it may. The terms “comprising,” “having,” and “including” are synonymous, unless the context dictates otherwise. The phrase “A/B” means “A or B”. The phrase “A and/or B” means “(A), (B), or (A and B)”. The phrase “at least one of A, B and C” means “(A), (B), (C), (A and B), (A and C), (B and C) or (A, B and C)”. The phrase “(A) B” means “(B) or (A B)”, that is, A is optional.
In some embodiments, the resource constrained device 102 may be any sort of computing device known in the art, including smartphones, PDAs, and mobile devices, as well as numerous other computing systems. The device 102 may have a viewer, as is shown in
Referring again to
In various embodiments, server 108 may comprise any sort of computing device known in the art, such as a workstation, server, mainframe, or PC. The server 108 may be capable of receiving requests from a client of device 102 and of answering some or all of such requests. As shown, the server may be capable of performing some or all of the below described operations or storage of learner 104.
As illustrated, learner 104 may generate one or more rules by randomly selecting matching attribute values of one or more instances, first choosing the one or more instances randomly based on matching target values. In various embodiments, an instance may represent any logical concept composed of a plurality of attributes. For example, the logical concept “meeting” might be composed of attributes such as attendees, date, time, and location. Each attribute may then have one or more associated attribute values. Thus, the attribute “attendees” may be associated with the attribute values “Rene” and “Pascal.” A plurality of such instances may be stored on resource constrained device 102, in some embodiments as tables of a database or as data structures of a file. In other embodiments, instances may instead be stored on server 108 and retrieved from server 108. Retrieval of instances from server 108 or from storage of device 102 may be based on one or more target values. Instances having a target value as an attribute value may comprise an instance space, and a number of instances may be randomly chosen from the instance space. For example, if “Conference Room” is a target value, the instances having “Conference Room” as an attribute value may comprise an instance space. A number of instances may then be randomly chosen from the instance space, such as “Lunch,” “Meeting,” and “Seminar,” each instance having “Conference Room” as an attribute value, perhaps of an attribute such as “Location.”
In some embodiments, learner 104 may then generate one or more rules by randomly selecting matching attribute values of the one or more randomly selected instances, the motivation being that matching attribute values may capture the correlation between the various attributes of the randomly selected one or more instances. The rules constructed may have an antecedent comprised of one or more attributes, each attribute associated with one or more attribute values. The rules may also include a consequent comprising a target and one or more target values associated with the target. In some embodiments, rules may be constructed in an “if-then” form, with “if” beginning the antecedent, and “then” beginning the consequent. An exemplary rule of such embodiments is “if sponsor-attendees=mitchell and department-attendees=scs then location=weh5309, weh5311, oakland.” In this example, the antecedent is comprised of two attributes, “sponsor-attendees” and “department-attendees”, with each attribute having one associated attribute value, and the consequent is comprised of a target, “location”, the target having three associated target values. In some embodiments, rules may have fewer or more attributes and/or attribute values comprising the antecedent, and fewer or more target values. Accordingly, rules generated by learner 104 may be variable length rules. Exemplary variable length rules are illustrated by
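To make the rule-generation step concrete, the following Python sketch samples pairs of instances that share a target value and forms an antecedent from the attribute values on which the pair agrees. All function and field names are illustrative assumptions, and for brevity the sketch keeps every matching attribute value rather than a random subset of them.

```python
import random

def generate_rules(instance_space, target, num_rules):
    """Sketch of rule generation: sample pairs of instances drawn from an
    instance space sharing a target value, and keep the attribute values
    on which the pair agrees as the antecedent. Illustrative names only."""
    rules = []
    for _ in range(num_rules):
        a, b = random.sample(instance_space, 2)
        antecedent = {attr: a[attr] for attr in a
                      if attr != target and attr in b and a[attr] == b[attr]}
        if antecedent:  # antecedent size depends on overlap: variable-length rules
            rules.append({"antecedent": antecedent,
                          "consequent": {target: {a[target]}},
                          "weight": 1.0})
    return rules
```

Because the antecedent keeps only the attributes the two sampled instances happen to share, the generated rules naturally vary in length, consistent with the variable length rules described above.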
The generated rules may comprise a ruleset, and may be stored in storage of the resource constrained device 102 or on server 108, the server 108 in some embodiments generating the rules. The rules may be implemented as classes of a programming language or may have their component attributes, attribute values, targets, and target values stored in a table of a database or in a data structure of a file, facilitating dynamic creation of the rules on an as-needed basis.
As shown, after forming the rules, learner 104 may remove redundant rules. In removing redundant rules, more general rules may be preferred to more specific rules.
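One way to read the preference for more general rules is antecedent subsumption: a rule is redundant when another rule's antecedent is a subset of its own. A hedged sketch follows, assuming single hashable attribute values and ignoring consequents for brevity.

```python
def remove_redundant(rules):
    """Keep only the most general rules: skip a rule when an equally or
    more general rule is already kept, and evict previously kept rules
    that a newly seen, more general rule subsumes."""
    kept = []
    for rule in rules:
        items = set(rule["antecedent"].items())
        if any(set(k["antecedent"].items()) <= items for k in kept):
            continue  # an equally or more general rule is already kept
        kept = [k for k in kept if not items < set(k["antecedent"].items())]
        kept.append(rule)
    return kept
```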
In various embodiments, learner 104 may then update the rules over the above described instance space, incorporating attribute values and target values not present in the randomly chosen one or more instances. For example, a rule initially formed as “if date=120205 and personnel=rpe, ata then location=Conference Room” may be updated to include additional attribute values of an instance found in the instance space. Thus, if an instance has date, personnel, and location as attributes, but has an additional attribute value associated with personnel, “khf,” then the rule may be updated as “if date=120205 and personnel=rpe, ata, khf then location=Conference Room.”
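A sketch of this update step, assuming antecedent values are stored as sets so a rule can accumulate additional values such as "khf" (the representation and names are assumptions, not prescribed by the text):

```python
def update_rule(rule, instance):
    """Fold attribute values found in an instance of the instance space
    into the rule's antecedent, as in the 'personnel=rpe, ata, khf' example."""
    for attr, values in rule["antecedent"].items():
        if attr in instance:
            values.add(instance[attr])
    return rule
```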
Upon generating and updating the rules, learner 104 may associate a weight with each rule. In some embodiments, each rule is initially assigned the same weight. For example, upon initialization, each rule may be assigned a weight of “one.” The weight may be a feature of the rule stored with the rule. For example, if the rule is a class, the weight may be a member variable of that class. If the rule is a table, the weight may be a field of the table. In alternate embodiments, the weights and rules may be stored apart, with weights stored on resource constrained device 102 and rules on server 108, or vice versa.
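For the class-based storage option mentioned above, a rule might be sketched as a small Python class whose weight is a member variable; the fields shown are assumptions rather than a prescribed layout.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """Illustrative class-based rule storage with the weight as a member."""
    antecedent: dict        # attribute -> set of matching attribute values
    target: str             # the consequent's target attribute
    target_counts: dict = field(default_factory=dict)  # target value -> times predicted
    weight: float = 1.0     # every rule starts with the same initial weight
```

The `target_counts` field accommodates the optional per-value prediction counts described below.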
In some embodiments, additional features of a rule or its values may be stored with the rule and/or the weight. For example, learner 104 may count the number of times each target value is predicted.
As mentioned, the selected operations of choosing instances, forming rules, removing redundant rules, updating the rules, and associating each rule with a weight, may be performed entirely or in part on server 108. In some embodiments, however, each and every one of the above selected operations may be performed on resource constrained device 102. The above described rules facilitate the storage of a minimal amount of instance data by predicting attribute values, by updating rules, and by removing rules that inaccurately predict attribute values, these operations described in greater detail below. Also, by keeping only those rules which accurately predict values, learner 104 ensures that the maintained ruleset is compact.
As illustrated, once rules have been generated and updated, learner 104 may wait for a new instance. The new instance may be received by an application of resource constrained device 102, the application enhanced with an embodiment of the learner 104. Learner 104 may be used with any sort of application. For purposes of simplifying the following description, reference will be made to an exemplary calendar application enhanced with learner 104. However, in alternate embodiments, any number of applications may be enhanced with learner 104.
In some embodiments, a calendar application may operate on resource constrained device 102. After forming the rules or receiving them from server 108, an executing calendar application may wait to receive instances as input. Upon receiving a new instance, then, learner 104 may evaluate the instance in light of the rules. For example, a user of the calendar application might create a new meeting object to be displayed by the calendar. To create a meeting, the calendar application may require a user to enter at least a date, a time, an attendee, and a location. The creation of a new meeting object for a calendar may thus be considered the receipt of a new instance by the calendar application, and in turn by the learner 104.
Prior to checking rules of the ruleset, the learner 104 may determine which attribute of the new instance is the target for which the rules will be used to predict one or more values. In some embodiments, the target may be predetermined. For example, if the new instance is a meeting and a user must enter values for three attributes to store the meeting as an object of the calendar, the last attribute for which the user must enter a value may be considered the target, and one or more values may be predicted to the user for that attribute/target. In such embodiments, the rules may be checked after the user has entered values for the attributes prior to the target. In alternate embodiments, each attribute of the new instance may be iteratively treated as a target, with the rules being checked for each attribute of the new instance. For the first attribute, when no attribute values have been entered and, thus, none of the rules may be met, the learner 104 may, for example, check all rules for values associated with the first attribute, and return either the most frequently predicted value or a list of some or all of the values.
In checking each rule, learner 104 may first determine which rules are comprised of attributes matching the attributes of the new instance for which values have been entered. Referring to the above example of a new meeting, if a user has entered values for date, time, and attendees, the learner 104 may first search the ruleset for rules whose antecedents are comprised of some or all of those three attributes. Of the rules found to match, the attribute values of the rules' antecedents are compared to the attribute values of the new instance. In some embodiments, only rules whose antecedents have all the same values as the new instance may have votes registered for them. In other embodiments, rules having one or more matching attribute values may also have votes registered for them.
In various embodiments, the number of votes registered for each rule may correspond to the weight of each rule, with each rule voting its full weight. For example, a rule with a weight of one may have one vote registered on its behalf, and a rule with a weight of three may have three votes registered on its behalf. In some embodiments, where a rule has more than one target value present in its consequent, all its votes may be registered for its most frequently voted target value. In other embodiments, votes may be cast for each of a rule's target values, with each vote cast equal to the weight of each rule multiplied by the fractional frequency of each target value. For example, if a rule has a weight of six and predicts three target values, “conference room,” “room 417,” and “room 321,” and the values have corresponding frequencies of one, four, and one, then one vote may be cast for “conference room” (weight of six times fractional frequency of one-sixth equals one vote), four for “room 417,” and one for “room 321.” In yet other embodiments, some combination of the above two methods of vote registering may be used.
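The fractional-frequency voting scheme of the worked example can be sketched as follows; the `frequencies` field name is an assumption.

```python
def register_votes(rule, tally):
    """Cast a rule's votes across its target values: each vote equals
    the rule's weight times the value's fractional frequency."""
    total = sum(rule["frequencies"].values())
    for value, freq in rule["frequencies"].items():
        tally[value] = tally.get(value, 0.0) + rule["weight"] * freq / total
    return tally
```

With a weight of six and frequencies of one, four, and one, this reproduces the one/four/one vote split described above.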
As illustrated, once votes have been registered, the learner 104 may aggregate the votes and predict one or more target values to a user of the resource constrained device 102. In some embodiments, the learner 104 may aggregate the votes registered on behalf of the rules, and predict only one value, the value receiving the highest plurality of votes. In other embodiments, the learner 104 may aggregate the votes and predict a plurality of values corresponding to the values receiving the highest pluralities of votes. The number of predictions made for a target may vary from embodiment to embodiment, and may be determined by a user- or program-defined threshold metric.
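Aggregation then reduces to ranking the tally; in this sketch the `top_n` parameter stands in for the user- or program-defined threshold metric mentioned above.

```python
def predict(tally, top_n=1):
    """Return the target value(s) with the most aggregated votes,
    most-voted first."""
    ranked = sorted(tally.items(), key=lambda kv: kv[1], reverse=True)
    return [value for value, _ in ranked[:top_n]]
```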
The predicted value or values may then be presented to a user of the resource constrained device 102, in some embodiments through a viewer of device 102, such as the viewer depicted by
Referring again to
In some embodiments, the learner 104 may decrease the weight of a rule if the local prediction by that rule is incorrect, irrespective of the correctness of the global outcome. The learner 104 may, for example, decrement the weight of the rule by half its total weight. Further, when the local prediction is correct but the global outcome is incorrect, the learner 104 may measure the vote deficit for the actual prediction. After that, the learner 104 may increase the weights for rules that had the correct local prediction. In one embodiment, the weights of the correct-predicting rules are increased equally. This may boost the vote for the correct target value.
Additionally, when a rule (and the global outcome) predicts correctly, the learner 104 may increment the weight of the correctly predicting rules conservatively. Such an embodiment conjectures that this reward raises the confidence (weight) of the rule(s) for future predictions. In various embodiments, 0.1 is employed as the reward value. In other embodiments, different reward values may be employed. Liberally rewarding the rules may eventually lead to a drop in performance, so this parameter may be selected carefully. Moreover, experiments appear to suggest that a small linear increase in weight performs much better than an exponential increase. In various embodiments, if the weight of any rule falls below a user-defined threshold, the rule may be removed from the ruleset.
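The penalty and reward rules of the two preceding paragraphs might be sketched as follows. The halving penalty, the 0.1 reward, and removal below a threshold come from the text; the pruning threshold value and the names are assumptions, and the correct-local/incorrect-global case (boosting correct rules by the vote deficit) is omitted for brevity.

```python
def adjust_weights(rules, locally_correct, globally_correct,
                   reward=0.1, prune_below=0.05):
    """Sketch of the reward/penalty scheme: halve the weight of rules whose
    local prediction was wrong, conservatively reward rules that predicted
    correctly when the global outcome was also correct, and drop rules whose
    weight falls below the threshold. `locally_correct` maps each rule's id
    to whether its own prediction matched the actual value."""
    surviving = []
    for rule in rules:
        if locally_correct[id(rule)]:
            if globally_correct:
                rule["weight"] += reward  # small linear reward (0.1 in the text)
            # (when the global outcome is wrong, the text instead boosts
            #  correctly predicting rules enough to cover the vote deficit)
        else:
            rule["weight"] /= 2           # penalty: lose half the weight
        if rule["weight"] >= prune_below:
            surviving.append(rule)
    return surviving
```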
In some embodiments, if the antecedent of any rule matches the current instance but the target value selected or provided by the user is not present in the consequent, the learner 104 may update the rule by replacing the target value having the lowest frequency of correctly predicting an outcome with the current, user selected/provided target value. Further, in various embodiments, if the prediction is incorrect, the learner 104 may update the instance space by replacing the target value having the lowest frequency of correctly predicting an outcome with the current, user selected/provided target value. New rules may be generated in the same way as the initial rules, and redundancy may be removed. New rules may each be assigned a weight, for example, a weight of one.
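A sketch of the replacement step, assuming each rule tracks, per target value, how often that value has predicted correctly (the `frequencies` field name is an assumption):

```python
def replace_target_value(rule, new_value):
    """Replace the consequent's target value having the lowest
    correct-prediction frequency with the user-provided value."""
    weakest = min(rule["frequencies"], key=rule["frequencies"].get)
    del rule["frequencies"][weakest]
    rule["frequencies"][new_value] = 0  # new value starts with no correct predictions
    return rule
```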
Learner 104 may then use this updated ruleset for subsequent instances. The ruleset is thus updated incrementally.
Experimental usage of the learner 104 has shown that, at a relatively low cost in accuracy, a relatively significant reduction in storage requirements may be realized. Further, even in this reduced storage environment, the learner 104 may execute at a relatively high rate, making it appropriate for online usage.
In some embodiments, the learner method may then generate one or more rules by randomly selecting matching attribute values of the one or more randomly selected instances, block 204, the motivation being that matching attribute values may capture the correlation between the various attributes of the randomly selected one or more instances. The rules constructed may have an antecedent comprised of one or more attributes, each attribute associated with one or more attribute values. The rules may also include a consequent comprising a target and one or more target values associated with the target. In some embodiments, rules may be constructed in an “if-then” form, with “if” beginning the antecedent, and “then” beginning the consequent. An exemplary rule of such embodiments is “if sponsor-attendees=mitchell and department-attendees=scs then location=weh5309, weh5311, oakland.” In this example, the antecedent is comprised of two attributes, “sponsor-attendees” and “department-attendees”, with each attribute having one associated attribute value, and the consequent is comprised of a target, “location”, the target having three associated target values. In some embodiments, rules may have fewer or more attributes and/or attribute values comprising the antecedent, and fewer or more target values. Accordingly, rules generated by embodiments of the learner method may be variable length rules. Exemplary variable length rules are illustrated by
The generated rules may comprise a ruleset, and may be stored in storage of the resource constrained device or on a server, the server in some embodiments generating the rules. The rules may be implemented as classes of a programming language or may have their component attributes, attribute values, targets, and target values stored in a table of a database or in a data structure of a file, facilitating dynamic creation of the rules on an as-needed basis.
As shown, after forming the rules, the learner method may remove redundant rules, block 206. In removing redundant rules, more general rules may be preferred to more specific rules.
In various embodiments, the learner method may then update the rules over the above described instance space, block 208, incorporating attribute values and target values not present in the randomly chosen one or more instances. For example, a rule initially formed as “if date=120205 and personnel=rpe, ata then location=Conference Room” may be updated to include additional attribute values of an instance found in the instance space. Thus, if an instance has date, personnel, and location as attributes, but has an additional attribute value associated with personnel, “khf,” then the rule may be updated as “if date=120205 and personnel=rpe, ata, khf then location=Conference Room.”
Upon generating and updating the rules, the learner method may associate a weight with each rule, block 210. In some embodiments, each rule is initially assigned the same weight. For example, upon initialization, each rule may be assigned a weight of “one.” The weight may be a feature of the rule stored with the rule. For example, if the rule is a class, the weight may be a member variable of that class. If the rule is a table, the weight may be a field of the table. In alternate embodiments, the weights and rules may be stored apart, with weights stored on a resource constrained device and rules on a server, or vice versa.
In some embodiments, additional features of a rule or its values may be stored with the rule and/or the weight. For example, the learner method may count the number of times each target value is predicted.
As mentioned, the selected operations of choosing instances, block 202, forming rules, block 204, removing redundant rules, block 206, updating the rules, block 208, and associating each rule with a weight, block 210, may be performed entirely or in part on a server. In some embodiments, however, each and every one of the above selected operations may be performed on a resource constrained device. The above described rules facilitate the storage of a minimal amount of instance data by predicting attribute values, by updating rules, and by removing rules that inaccurately predict attribute values, these operations described in greater detail below. Also, by keeping only those rules which accurately predict values, the learner method ensures that the maintained ruleset is compact.
As illustrated, once rules have been generated and updated, the learner method may wait for a new instance, block 212. The new instance may be received by an application of a resource constrained device, the application enhanced with an embodiment of the learner method of the present invention. Learner methods may be used with any sort of application. An application may benefit from enhancement or communicative coupling to a process or device implementing a learning method by requiring the storage of less data. For purposes of simplifying the following description, reference will be made to an exemplary calendar application enhanced with an embodiment of the learner method. However, in alternate embodiments, any number of applications may be enhanced with the learner method.
In some embodiments, a calendar application may operate on a resource constrained device, such as the device described above. After forming the rules or receiving them from a server, an executing calendar application may wait to receive instances as input, block 212. Upon receiving a new instance, then, the learner method of the calendar application may evaluate the instance in light of the rules, block 214. For example, a user of the calendar application might create a new meeting object to be displayed by the calendar. To create a meeting, the calendar application may require a user to enter at least a date, a time, an attendee, and a location. The creation of a new meeting object for a calendar may thus be considered the receipt of a new instance by the calendar application.
Prior to checking rules of the ruleset, block 214, a learner method of the calendar application may determine which attribute of the new instance is the target for which the rules will be used to predict one or more values. In some embodiments, the target may be predetermined. For example, if the new instance is a meeting and a user must enter values for three attributes to store the meeting as an object of the calendar, the last attribute for which the user must enter a value may be considered the target, and one or more values may be predicted to the user for that attribute/target. In such embodiments, the rules may be checked, block 214, after the user has entered values for the attributes prior to the target. In alternate embodiments, each attribute of the new instance may be iteratively treated as a target, with the rules being checked, block 214, for each attribute of the new instance. For the first attribute, when no attribute values have been entered and, thus, none of the rules may be met, a learner method of the calendar may, for example, check all rules for values associated with the first attribute, and return either the most frequently predicted value or a list of some or all of the values.
In checking each rule, block 214, a learner method may first determine which rules are comprised of attributes matching the attributes of the new instance for which values have been entered. Referring to the above example of a new meeting, if a user has entered values for date, time, and attendees, the method may first search the ruleset for rules whose antecedents are comprised of some or all of those three attributes. Of the rules found to match, the attribute values of the rules' antecedents are compared to the attribute values of the new instance. In some embodiments, only rules whose antecedents have all the same values as the new instance may have votes registered for them. In other embodiments, rules having one or more matching attribute values may also have votes registered for them.
In various embodiments, the number of votes registered for each rule may correspond to the weight of each rule, with each rule voting its full weight. For example, a rule with a weight of one may have one vote registered on its behalf, and a rule with a weight of three may have three votes registered on its behalf. In some embodiments, where a rule has more than one target value present in its consequent, all its votes may be registered for its most frequently voted target value. In other embodiments, votes may be cast for each of a rule's target values, with each vote cast equal to the weight of each rule multiplied by the fractional frequency of each target value. For example, if a rule has a weight of six and predicts three target values, “conference room,” “room 417,” and “room 321,” and the values have corresponding frequencies of one, four, and one, then one vote may be cast for “conference room” (weight of six times fractional frequency of one-sixth equals one vote), four for “room 417,” and one for “room 321.” In yet other embodiments, some combination of the above two methods of vote registering may be used.
As illustrated, once votes have been registered, the learner method may aggregate the votes and predict one or more target values to a user of the resource constrained device, block 216. In some embodiments, the learner method may aggregate the votes registered on behalf of the rules, and predict only one value, the value receiving the highest plurality of votes. In other embodiments, the learner method may aggregate the votes and predict a plurality of values corresponding to the values receiving the highest pluralities of votes. The number of predictions made for a target may vary from embodiment to embodiment, and may be determined by a user- or program-defined threshold metric.
The predicted value or values may then be presented to a user of the resource constrained device, in some embodiments through a viewer of the device, such as the viewer depicted by
Referring again to
In some embodiments, the learner method may decrease the weight of a rule if the local prediction by that rule is incorrect, irrespective of the correctness of the global outcome. The learner method may, for example, decrement the weight of the rule by half its total weight. Further, when the local prediction is correct but the global outcome is incorrect, the learner method may measure the vote deficit for the actual prediction. After that, the learner method may increase the weights for rules that had the correct local prediction. In one embodiment, the weights of the correct-predicting rules are increased equally. This may boost the vote for the correct target value.
Additionally, when a rule (and the global outcome) predicts correctly, the learner method may increment the weight of the correctly predicting rules conservatively. Such a method conjectures that this reward raises the confidence (weight) of the rule(s) for future predictions. In various embodiments, 0.1 is employed as the reward value. In other embodiments, different reward values may be employed. Liberally rewarding the rules may eventually lead to a drop in performance, so this parameter may be selected carefully. Moreover, experiments appear to suggest that a small linear increase in weight performs much better than an exponential increase. In various embodiments, if the weight of any rule falls below a user-defined threshold, the rule is removed from the ruleset.
In some embodiments, if the antecedent of any rule matches the current instance but the target value selected or provided by the user is not present in the consequent, the learner method may update the rule by replacing the target value having the lowest frequency of correctly predicting an outcome with the current, user selected/provided target value. Further, in various embodiments, if the prediction is incorrect, the learner method may update the instance space by replacing the target value having the lowest frequency of correctly predicting an outcome with the current, user selected/provided target value. New rules may be generated in the same way as the initial rules, and redundancy may be removed. New rules may each be assigned a weight, for example, a weight of one.
The learner method of the calendar may then use this updated ruleset for subsequent instances. The ruleset is thus updated incrementally.
Experimental usage of the learner method has shown that, at a relatively low cost in accuracy, a relatively significant reduction in storage requirements may be realized. Further, even in this reduced storage environment, the learner method may execute at a relatively high rate, making it appropriate for online usage.
In alternate embodiments, all or portions of the learner 520 may be implemented in hardware, firmware, or any combination thereof. Hardware implementations may be in the form of an application specific integrated circuit (ASIC), a reconfigurable circuit (such as a Field Programmable Gate Array (FPGA)), and so forth.
The constitution of elements 502-514 is known in the art, and accordingly will not be further described.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described, without departing from the scope of the present invention. Those with skill in the art will readily appreciate that the present invention may be implemented in a very wide variety of embodiments or extended there from. For example, in various embodiments, the system may also be extended to provide confidence metrics for the predictions. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof.
The present application claims priority to U.S. Provisional Patent Application No. 60/734,840, entitled “A Learner for Resource Constrained Devices,” filed on Nov. 9, 2005. The specification of the 60/734,840 provisional application is hereby fully incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
5109352 | O'Dell | Apr 1992 | A |
5371673 | Fan | Dec 1994 | A |
5608846 | Mitsubuchi et al. | Mar 1997 | A |
5848396 | Gerace | Dec 1998 | A |
5852814 | Allen | Dec 1998 | A |
5875108 | Hoffberg et al. | Feb 1999 | A |
5946375 | Pattison et al. | Aug 1999 | A |
5952942 | Balakrishnan et al. | Sep 1999 | A |
6009444 | Chen | Dec 1999 | A |
6018738 | Breese et al. | Jan 2000 | A |
6054941 | Chen | Apr 2000 | A |
6104317 | Panagrossi | Aug 2000 | A |
6112186 | Bergh et al. | Aug 2000 | A |
6169538 | Nowlan et al. | Jan 2001 | B1 |
6172625 | Jin et al. | Jan 2001 | B1 |
6182070 | Megiddo et al. | Jan 2001 | B1 |
6202058 | Rose et al. | Mar 2001 | B1 |
6204848 | Nowlan et al. | Mar 2001 | B1 |
6311173 | Levin et al. | Oct 2001 | B1 |
6334127 | Bieganski et al. | Dec 2001 | B1 |
6362752 | Guo et al. | Mar 2002 | B1 |
6370513 | Kolawa et al. | Apr 2002 | B1 |
6424743 | Ebrahimi | Jul 2002 | B1 |
6438579 | Hosken | Aug 2002 | B1 |
6502118 | Chatterjee | Dec 2002 | B1 |
6603489 | Edlund et al. | Aug 2003 | B1 |
6636836 | Pyo | Oct 2003 | B1 |
6655963 | Horvitz et al. | Dec 2003 | B1 |
6686852 | Guo | Feb 2004 | B1 |
6711290 | Sparr et al. | Mar 2004 | B2 |
6757544 | Rangarajan et al. | Jun 2004 | B2 |
6801659 | O'Dell | Oct 2004 | B1 |
6801909 | Delgado et al. | Oct 2004 | B2 |
6807529 | Johnson et al. | Oct 2004 | B2 |
6864809 | O'Dell et al. | Mar 2005 | B2 |
6873990 | Oblinger | Mar 2005 | B2 |
6912581 | Johnson et al. | Jun 2005 | B2 |
6947771 | Guo et al. | Sep 2005 | B2 |
6955602 | Williams | Oct 2005 | B2 |
6956968 | O'Dell et al. | Oct 2005 | B1 |
6973332 | Mirkin et al. | Dec 2005 | B2 |
6982658 | Guo | Jan 2006 | B2 |
6983216 | Lam et al. | Jan 2006 | B2 |
7057607 | Mayoraz et al. | Jun 2006 | B2 |
7075520 | Williams | Jul 2006 | B2 |
7095403 | Lyustin et al. | Aug 2006 | B2 |
7113917 | Jacobi et al. | Sep 2006 | B2 |
7139430 | Sparr et al. | Nov 2006 | B2 |
7256769 | Pun et al. | Aug 2007 | B2 |
7257528 | Ritchie et al. | Aug 2007 | B1 |
7272564 | Phillips et al. | Sep 2007 | B2 |
7313277 | Morwing et al. | Dec 2007 | B2 |
7349576 | Holtsberg | Mar 2008 | B2 |
7389235 | Dvorak | Jun 2008 | B2 |
7437001 | Morwing et al. | Oct 2008 | B2 |
7466859 | Chang et al. | Dec 2008 | B2 |
7881995 | Grimberg | Feb 2011 | B2 |
20020018074 | Buil et al. | Feb 2002 | A1 |
20020059202 | Hadzikadic et al. | May 2002 | A1 |
20020065721 | Lema et al. | May 2002 | A1 |
20020131565 | Scheuring et al. | Sep 2002 | A1 |
20020147695 | Khedkar et al. | Oct 2002 | A1 |
20030023426 | Pun et al. | Jan 2003 | A1 |
20030037041 | Hertz | Feb 2003 | A1 |
20030054830 | Williams et al. | Mar 2003 | A1 |
20030097186 | Gutta et al. | May 2003 | A1 |
20030144830 | Williams | Jul 2003 | A1 |
20030149675 | Ansari et al. | Aug 2003 | A1 |
20030152904 | Doty, Jr. | Aug 2003 | A1 |
20040076936 | Horvitz et al. | Apr 2004 | A1 |
20040093290 | Doss et al. | May 2004 | A1 |
20040153963 | Simpson et al. | Aug 2004 | A1 |
20040153975 | Williams et al. | Aug 2004 | A1 |
20040181512 | Burdick et al. | Sep 2004 | A1 |
20050017954 | Kay et al. | Jan 2005 | A1 |
20050091098 | Brodersen et al. | Apr 2005 | A1 |
20050114284 | Wrobel et al. | May 2005 | A1 |
20050114770 | Sacher et al. | May 2005 | A1 |
20050137819 | Lam et al. | Jun 2005 | A1 |
20050165596 | Adar et al. | Jul 2005 | A1 |
20050165782 | Yamamoto | Jul 2005 | A1 |
20060010217 | Sood | Jan 2006 | A1 |
20060015421 | Grimberg | Jan 2006 | A1 |
20060026203 | Tan et al. | Feb 2006 | A1 |
20060047650 | Freeman et al. | Mar 2006 | A1 |
20060129928 | Qiu | Jun 2006 | A1 |
20060136408 | Weir et al. | Jun 2006 | A1 |
20060143093 | Brandt et al. | Jun 2006 | A1 |
20060155536 | Williams et al. | Jul 2006 | A1 |
20060158436 | LaPointe et al. | Jul 2006 | A1 |
20060173807 | Weir et al. | Aug 2006 | A1 |
20060193519 | Sternby | Aug 2006 | A1 |
20060224259 | Buil et al. | Oct 2006 | A1 |
20060236239 | Simpson et al. | Oct 2006 | A1 |
20060237532 | Scott-Leikach et al. | Oct 2006 | A1 |
20060239560 | Sternby | Oct 2006 | A1 |
20060247915 | Bradford et al. | Nov 2006 | A1 |
20060266830 | Horozov et al. | Nov 2006 | A1 |
20070083504 | Britt et al. | Apr 2007 | A1 |
20070094718 | Simpson | Apr 2007 | A1 |
20070203879 | Templeton-Steadman et al. | Aug 2007 | A1 |
20070276814 | Williams | Nov 2007 | A1 |
20070285397 | LaPointe et al. | Dec 2007 | A1 |
20080103859 | Yokota et al. | May 2008 | A1 |
20080130996 | Sternby | Jun 2008 | A1 |
Number | Date | Country |
---|---|---|
H01-288926 | Nov 1989 | JP |
Entry |
---|
Avrim Blum (Machine Learning, 1997). |
Avrim Blum (CMU, Machine Learning, 1997). |
Weighted Majority—Calendar Scheduling Domain, Avrim Blum, Machine Learning 26, 1997. |
International Preliminary Report on Patentability and Written Opinion from PCT/US2006/042622, mailed May 22, 2008, 7 pgs. |
Ishibuchi H. et al, “Voting in Fuzzy Rule-Based Systems for Pattern Classification Problems”, Fuzzy Sets and Systems, Elsevier Science Publishers, Amsterdam, NL, LNKD-DOI:10.1016/S0165-0114(98)00223-1, vol. 103, No. 2, Apr. 16, 1999, pp. 223-238, XP004157916, ISSN: 0165-0114. |
Davison, Brian D. and Haym Hirsh, “Predicting Sequences of User Actions” Proceedings of AAAI/ICML 1998 Workshop on Predicting the Future: AI Approaches to Time-Series Analysis, Jul. 31, 1998, XP002597337. |
Bark Cheung Chiu and Geoffrey I. Webb, “Using Decision Trees for Agent Modeling: Improving Prediction Performance”, User Modeling and User-Adapted Interaction, vol. 8, No. 1-2, Dec. 31, 1998, pp. 131-152, XP002597338 DOI: 10.1023/A:1008296930163. |
Yingjiu Li et al, “Discovering Calendar-Based Temporal Association Rules” Temporal Representation and Reasoning, 2001, Time 2001, Proceedings. 8th International Symposium on Jun. 14-16, 2001, pp. 111-118, XP010548129, ISBN 978-0-7695-1107-08. |
Gangardiwala A. et al, “Dynamically Weighted Majority Voting for Incremental Learning and Comparison of Three Boosting Based Approaches”, Neural Networks 2005, Proceedings 2005, IEEE International Joint Conference, Montreal, Que. Canada Jul. 31-Aug. 4, 2005. Piscataway, NJ, USA, IEEE, US LNKD-DOI:10.1109/IJCNN.2005.1556012, vol. 2, Jul. 31, 2005, pp. 1131-1136, XP010866158, ISBN: 978-07803-9048-5. |
Number | Date | Country | |
---|---|---|---|
20070106785 A1 | May 2007 | US |
Number | Date | Country | |
---|---|---|---|
60734840 | Nov 2005 | US |