Claims
- 1. A method executed in a physical system that constitutes an incremental supervised learner for adding new learning tasks to said learner, comprising: receiving a training instance; if the training instance received reflects a new learning task, initializing a new learning task state representation in a memory of said learner for the new learning task based on a learning task state representation of a hypothetical learning task; updating in said memory each learning task state representation except the hypothetical learning task using a target value stored for that learning task in the training instance; and updating in said memory the state representation for the hypothetical learning task using a default target value for the training instance.
- 2. The method of claim 1, further including producing predictors for each learning task based on each learning task state representation.
- 3. The method of claim 2, further including an applier that produces a prediction based on the predictor.
- 4. The method of claim 2 wherein the predictors are at least one of Boolean functions, regression models, and neural networks.
- 5. The method of claim 2 where the predictors are used by another learning system.
- 6. The method of claim 1, wherein default target values reflect negative examples.
- 7. A method executed in a learning system apparatus for communicating accumulated state information between learning tasks stored in a memory of said learning system, comprising: initiating an initial state representation for a hypothetical learning task; receiving a training instance; if the training instance received reflects a new learning task, initializing a new learning task state representation in said memory for the new learning task based on the state representation of the hypothetical learning task; updating in said memory each learning task state representation except the hypothetical learning task using a target value stored for that learning task in the training instance; and updating in said memory the state representation for the hypothetical learning task using a default target value for the training instance.
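The steps recited in claims 1 and 7 can be sketched in code. Everything below is an illustrative assumption rather than the patent's actual implementation: the class and method names, the weight-vector state representation, and the LMS-style update are stand-ins chosen only to make the claimed control flow concrete (new tasks inherit the hypothetical task's state; real tasks update from stored targets; the hypothetical task updates from the default target).

```python
# Hedged sketch of the method in claims 1 and 7. The dict-of-weight-vectors
# state representation and the online update rule are assumptions for
# illustration, not the patent's implementation.

class IncrementalLearner:
    HYPOTHETICAL = "__hypothetical__"

    def __init__(self, n_features, default_target=-1.0):
        # Per claim 6, the default target reflects a negative example.
        self.default_target = default_target
        # One state representation per learning task; here, a weight vector.
        self.states = {self.HYPOTHETICAL: [0.0] * n_features}

    def _update(self, task, features, target, lr=0.1):
        # Simple online least-mean-squares step (an assumed stand-in for
        # whatever update the real learner uses).
        w = self.states[task]
        err = target - sum(wi * xi for wi, xi in zip(w, features))
        for i, xi in enumerate(features):
            w[i] += lr * err * xi

    def receive(self, features, targets):
        """Process one training instance.

        targets: {task_name: target_value} stored in the instance.
        """
        # New learning task: initialize its state by copying the
        # hypothetical task's state (claim 1, step 2).
        for task in targets:
            if task not in self.states:
                self.states[task] = list(self.states[self.HYPOTHETICAL])
        # Update every task except the hypothetical one, using the stored
        # target; falling back to the default target when none is stored
        # is an assumption consistent with claim 6.
        for task in self.states:
            if task != self.HYPOTHETICAL:
                self._update(task, features,
                             targets.get(task, self.default_target))
        # Update the hypothetical task with the default target, so tasks
        # created later inherit the accumulated (negative) history.
        self._update(self.HYPOTHETICAL, features, self.default_target)
```

Because the hypothetical task accumulates a default (negative) update for every instance seen, a task created late behaves as if every earlier instance had been a negative example for it, which is the state communication the claims describe.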
Parent Case Info
This application is a continuation of application Ser. No. 09/563,506, filed May 3, 2000, now U.S. Pat. No. 6,523,017.
This non-provisional application claims the benefit of U.S. Provisional Application No. 60/132,490, entitled “AT&T Information Classification System,” filed on May 4, 1999, and U.S. Provisional Application No. 60/134,369, entitled “AT&T Information Classification System,” filed May 14, 1999, both of which are hereby incorporated by reference in their entirety. The applicants of the Provisional Applications are David D. Lewis, Amitabh Kumar Singhal, and Daniel L. Stern for the former, and David D. Lewis and Daniel L. Stern for the latter.
US Referenced Citations (6)
Non-Patent Literature Citations (2)
- Xu et al., “Adaptive Supervised Learning Decision Networks for Traders and Portfolios”, IEEE/IAFE, Mar. 1997.
- Kogiantis et al., “Operations and Learning in Neural Networks for Robust Prediction”, IEEE Transactions on Systems, Man, and Cybernetics, Jun. 1997.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60/132490 | May 1999 | US |
| 60/134369 | May 1999 | US |
Continuations (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09/563506 | May 2000 | US |
| Child | 10/325911 | | US |