FEEDBACK DRIVEN DECISION SUPPORT IN PARTIALLY OBSERVABLE SETTINGS

Abstract
A novel formulation called the Context-Attentive Bandit with Observations (CABO) is described, in which only a limited number of features can be accessed by the learner. The present invention is applicable to many problems, including problems arising in clinical settings and dialog systems where it is not possible to reveal the whole feature set. The present invention adapts the standard contextual bandit algorithm known as Thompson Sampling into a novel algorithm we call Context-Attentive Thompson Sampling with Observations (CATSO). Experimental results are included to demonstrate its effectiveness, including a regret analysis and an empirical evaluation showing advantages of the disclosed approach on several real-life datasets.
Description
BACKGROUND

The present application relates generally to an improved data processing apparatus and method and more specifically to machine learning systems using reinforcement learning.


The contextual bandit problem is a variant of the extensively studied multi-armed bandit problem, where at each iteration, before choosing an arm, the agent observes an N-dimensional context, or feature vector, and uses it to predict the next best arm to play. The agent uses this context, along with the rewards of the arms played in the past, to choose which arm to play in the current iteration. Over time, the agent's aim is to collect enough information about the relationship between the context vectors and rewards, so that it can predict the next best arm to play by looking at the corresponding context.
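To make this interaction loop concrete, the following minimal Python sketch runs a contextual bandit against a synthetic linear environment. The epsilon-greedy exploration rule, the per-arm ridge-regression estimates, and all constants are illustrative assumptions chosen for brevity, not a prescribed method:

import numpy as np

# Minimal contextual bandit loop on a synthetic linear environment.
rng = np.random.default_rng(0)
K, N, T, eps = 5, 10, 2000, 0.1
true_u = rng.normal(size=(K, N))                 # unknown arm weight vectors

A = np.stack([np.eye(N) for _ in range(K)])      # per-arm ridge design matrices
b = np.zeros((K, N))

for t in range(T):
    c = rng.normal(size=N)                       # observe the context c(t)
    u_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(K)])
    # Exploit the current estimates most of the time; explore occasionally.
    k = int(np.argmax(u_hat @ c)) if rng.random() > eps else int(rng.integers(K))
    r = float(true_u[k] @ c + rng.normal(scale=0.1))  # reward of the arm played
    A[k] += np.outer(c, c)                       # update only the played arm
    b[k] += r * c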


SUMMARY

The present invention is a novel formulation called the Context-Attentive Bandit with Observations (CABO), in which only a limited number of features can be accessed by the learner. The present invention is applicable to many problems, including problems arising in clinical settings and dialog systems where it is not possible to reveal the whole feature set. The present invention adapts the standard contextual bandit algorithm known as Thompson Sampling into a novel algorithm we call Context-Attentive Thompson Sampling with Observations (CATSO). Experimental results are included to demonstrate its effectiveness, including a regret analysis and an empirical evaluation showing advantages of the disclosed approach on several real-life datasets. This application references several teachings from non-patent literature listed in the information disclosure statement filed herewith. The teachings of each of these non-patent literature references are hereby incorporated herein by reference in their entirety.


The present invention discloses a feedback driven methodology that can exploit a small fixed subset of known features to choose additional unknown features that together best inform a decision process. The present invention includes utilizing observable feature values and a feature budget size to decide which initially unobservable features are revealed using a reinforcement learning (RL) policy, utilizing all observed features to make a decision using an RL policy, and updating the RL policies with feedback on the suggested action/decision.


More specifically, the present invention is a system, computer program product, and method for making decisions using limited features in a reinforcement learning system. The method begins with obtaining initially observable feature values corresponding to a list of initially observable features, a list of initially unobservable features, and a maximum number of observable features relating to a computer system being interacted with by a reinforcement learning agent. The initially observable feature values are one or more of numeric feature values, structural feature values, string values, and graph values. In one example, the initially observable features represent patient history and the initially unobservable features are available patient tests. In another example, the initially observable features are one or more of speech dialog utterances and user history and the initially unobservable features are available responses. In yet another example, the initially observable features are one or more of user information history and the initially unobservable features are available responses.


The reinforcement learning agent selects actions from a set of actions to cause the computer system to move from one state to another state, whereby the selection of actions is modeled as a reinforcement learning policy. The selected action is then performed, causing the computer system to move from one state to another state. The initially observable feature values and the maximum number of observable features are used to select which initially unobservable features to reveal, using a first reinforcement learning policy for feature selection. Next, the initially observable features, along with the initially unobservable features which have been revealed, are used to select a next action from the set of actions using a second reinforcement learning policy for decision selection. The first reinforcement learning policy and the second reinforcement learning policy are updated with feedback based on the next action which has been selected. The above steps are repeated over a settable number of iterations. In one example these steps are applied to a contextual bandit algorithm, but in other examples other algorithms are contemplated. A schematic sketch of this loop appears below.
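The following schematic Python sketch illustrates the claimed loop. The env, feature_policy, and decision_policy objects and their methods are hypothetical interfaces introduced only for illustration:

# Schematic sketch of the claimed method; all interfaces are hypothetical.
def decision_loop(env, feature_policy, decision_policy, budget, iterations):
    for t in range(iterations):
        observed = env.observe_initial_features()    # initially observable values
        # First RL policy: decide which initially unobservable features to
        # reveal, given the observed values and the feature budget.
        to_reveal = feature_policy.select(observed, budget)
        revealed = env.reveal(to_reveal)
        # Second RL policy: select the next action using all observed features.
        full_view = {**observed, **revealed}
        action = decision_policy.select(full_view)
        feedback = env.perform(action)               # feedback on the action taken
        # Update both policies with the feedback.
        feature_policy.update(observed, to_reveal, feedback)
        decision_policy.update(full_view, action, feedback)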


The present invention, in another example, is applied to single-stage or multi-stage settings. In the case of stationary data, the claimed invention further provides obtaining a total number of features and a total number of desired features to observe over a settable number of stages. The steps above include a further step of selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach. This additional step can be repeated for more than one iteration or a settable number of iterations.


In the case of non-stationary data, the claimed invention further provides obtaining a total number of features and a total number of desired features to observe over a settable number of stages. The steps above include two further steps: selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach, and updating at least one of the first reinforcement learning policy and the second reinforcement learning policy using a decay parameter computed by a GP-UCB algorithm. As in the stationary case, these additional steps can be repeated for more than one iteration or a settable number of iterations.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention, as well as a preferred mode of use and further objectives and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:



FIG. 1 is an example diagram of a distributed data processing system in which aspects of the invention may be implemented;



FIG. 2 is an example block diagram of a computing device in which aspects of the invention may be implemented;



FIG. 3 depicts an example functional block diagram of the present invention's Context-Attentive Thompson Sampling with Observations (CATSO);



FIG. 4 is a table of experimental results using the CATSO approach with stationary datasets;



FIG. 5 is a table of experimental results using the CATSO approach with non-stationary datasets;



FIG. 6 is a graph of experimental results comparing the CATSO approach with the Weighted TSRC (WTSRC) approach for a stationary Covertype dataset; and



FIG. 7 is a graph of experimental results comparing the CATSO approach with the WTSRC approach for a non-stationary Covertype dataset.





DETAILED DESCRIPTION

Context-attentive bandits are contextual bandits that predict which arm to play based on partially observable contexts. A challenge with context-attentive bandits is that observing the full context is impossible: the agent is allowed to observe just a small subset of information at the onset. For instance, in clinical settings, a doctor conducts a first-stage exploration of patient information during a medical examination, uses this information to explore more features with medical tests, and then decides on a treatment plan. In this clinical setting a small subset of features is observed during an exam, and the results of all possible tests are unobserved. The true label, or best possible treatment plan, can never be known; only the reward or feedback on the treatment plan chosen is available. Conducting all possible tests is at the very least impractical and costly, if not altogether impossible, necessitating an intelligent selection of unobserved context features to maximize reward. Notice that when there are more than two treatment plans or potential responses, the ground truth is unavailable. The doctor can never know the “best treatment” or “best response,” only the patient or user's feedback to the action chosen.


Similar problems can arise in multi-skill orchestration for Artificial Intelligence (AI) agents. In dialog orchestration, a user's query is often directed to a number of domain specific agents and the best response is returned. In this case, the query can be immediately observed, but the set of features or responses from the domain specific agents cannot. For multi-purpose dialog systems, like personal home assistants, retrieving features or responses from every domain specific agent is computationally expensive or intractable, with the potential to cause a poor user experience, again underscoring the need for effective feature selection. Using the query to shortlist potential responses is necessary.


Determining how to exploit a small fixed subset of known features to choose additional unknown features that together best inform a decision process is a common problem. A challenge is to automate this decision process when the ground truth is unavailable.


Disclosed is a special case of the context-attentive bandit, called the Context-Attentive Bandit with Observations (CABO), where observing the full context vector at each iteration is impossible, but a small subset of context features is observable and a fixed number of the unobserved features can be revealed. The number of unknown context features that can be revealed is fixed for all iterations, and the agent can explore any subset of the unobserved context of this given size. The goal in this problem is to select the best feature subset at each iteration in order to maximize overall reward, which in this setting involves exploring both the feature space and the arms space.


Disclosed is a new variant of the context-attentive bandit problem, CABO, motivated by practical applications with limited information access. Also disclosed are two new algorithms, for the stationary and non-stationary settings of the CABO problem, and an empirical evaluation demonstrating advantages of the proposed methods over a range of datasets and parameter settings.


Data Processing Environment

Thus, the illustrative embodiments may be utilized in many different types of data processing environments. In order to provide a context for the description of the specific elements and functionality of the illustrative embodiments, FIG. 1 and FIG. 2 are provided hereafter as example environments in which aspects of the illustrative embodiments may be implemented. It should be appreciated that FIG. 1 and FIG. 2 are only examples and are not intended to assert or imply any limitation with regard to the environments in which aspects or embodiments of the present invention may be implemented. Many modifications to the depicted environments may be made without departing from the spirit and scope of the present invention.



FIG. 1 depicts a pictorial representation of an example distributed data processing system in which aspects of the illustrative embodiments may be implemented. Distributed data processing system 100 may include a network of computers in which aspects of the illustrative embodiments may be implemented. The distributed data processing system 100 contains at least one network 102, which is the medium used to provide communication links between various devices and computers connected together within distributed data processing system 100. The network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.


In the depicted example, server 104 and server 106 are connected to network 102 along with storage unit 108. In addition, clients 110, 112, and 114 are also connected to network 102. These clients 110, 112, and 114 may be, for example, personal computers, network computers, or the like. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to the clients 110, 112, and 114. Clients 110, 112, and 114 are clients to server 104 in the depicted example. Distributed data processing system 100 may include additional servers, clients, and other devices not shown.


In the depicted example, distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages. Of course, the distributed data processing system 100 may also be implemented to include a number of different types of networks, such as for example, an intranet, a local area network (LAN), a wide area network (WAN), or the like. As stated above, FIG. 1 is intended as an example, not as an architectural limitation for different embodiments of the present invention, and therefore, the particular elements shown in FIG. 1 should not be considered limiting with regard to the environments in which the illustrative embodiments of the present invention may be implemented.



FIG. 2 is a block diagram of an example data processing system in which aspects of the illustrative embodiments may be implemented. Data processing system 200 is an example of a computer, such as client 110 in FIG. 1, in which computer usable code or instructions implementing the processes for illustrative embodiments of the present invention may be located.


In the depicted example, data processing system 200 employs a hub architecture including north bridge and memory controller hub (NB/MCH) 202 and south bridge and input/output (I/O) controller hub (SB/ICH) 204. Processing unit 206, main memory 208, and graphics processor 210 are connected to NB/MCH 202. Graphics processor 210 may be connected to NB/MCH 202 through an accelerated graphics port (AGP).


In the depicted example, local area network (LAN) adapter 212 connects to SB/ICH 204. Audio adapter 216, keyboard and mouse adapter 220, modem 222, read only memory (ROM) 224, hard disk drive (HDD) 226, CD-ROM drive 230, universal serial bus (USB) ports and other communication ports 232, and PCI/PCIe devices 234 connect to SB/ICH 204 through bus 238 and bus 240. PCI/PCIe devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not. ROM 224 may be, for example, a flash basic input/output system (BIOS).


HDD 226 and CD-ROM drive 230 connect to SB/ICH 204 through bus 240. HDD 226 and CD-ROM drive 230 may use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. Super I/O (SIO) device 236 may be connected to SB/ICH 204.


An operating system runs on processing unit 206. The operating system coordinates and provides control of various components within the data processing system 200 in FIG. 2. As a client, the operating system may be a commercially available operating system such as Microsoft® Windows 10®. An object-oriented programming system, such as the Java™ programming system, may run in conjunction with the operating system and provides calls to the operating system from Java™ programs or applications executing on data processing system 200.


As a server, data processing system 200 may be, for example, an IBM eServer™ System P® computer system, Power™ processor based computer system, or the like, running the Advanced Interactive Executive (AIX®) operating system or the LINUX® operating system. Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors in processing unit 206. Alternatively, a single processor system may be employed.


Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as HDD 226, and may be loaded into main memory 208 for execution by processing unit 206. The processes for illustrative embodiments of the present invention may be performed by processing unit 206 using computer usable program code, which may be located in a memory such as, for example, main memory 208, ROM 224, or in one or more peripheral devices 226 and 230, for example.


A bus system, such as bus 238 or bus 240 as shown in FIG. 2, may be comprised of one or more buses. Of course, the bus system may be implemented using any type of communication fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture. A communication unit, such as modem 222 or network adapter 212 of FIG. 2, may include one or more devices used to transmit and receive data. A memory may be, for example, main memory 208, ROM 224, or a cache such as found in NB/MCH 202 in FIG. 2.


Those of ordinary skill in the art will appreciate that the hardware in FIG. 1 and FIG. 2 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash memory, equivalent non-volatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIG. 1 and FIG. 2. Also, the processes of the illustrative embodiments may be applied to a multiprocessor data processing system, other than the SMP system mentioned previously, without departing from the spirit and scope of the present invention.


Moreover, the data processing system 200 may take the form of any of a number of different data processing systems including client computing devices, server computing devices, a tablet computer, laptop computer, telephone or other communication device, a personal digital assistant (PDA), or the like. In some illustrative examples, data processing system 200 may be a portable computing device that is configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data, for example. Essentially, data processing system 200 may be any known or later developed data processing system without architectural limitation.


Background Concepts

This section introduces background concepts on which the present invention builds, such as the contextual bandit and the contextual combinatorial bandit.


The contextual bandit problem. This problem is defined as follows. At each time point (iteration) t∈{1, . . . , T}, an agent is presented with a context (feature vector) c(t)∈R^N before choosing an arm k∈A={1, . . . , K}. We will denote by C={C_1, . . . , C_N} the set of features (variables) defining the context. Let r(t)={r_1(t), . . . , r_K(t)} denote a reward vector, where r_k(t) is the reward at time t associated with the arm k∈A. Herein, we primarily focus on the Bernoulli bandit with binary reward, i.e. r_k(t)∈{0,1}. Let π: R^N→A denote a policy, mapping a context c(t)∈R^N into an action k∈A. We assume some probability distribution P_c(c) over the contexts in C, and a distribution of the reward given the context and the action taken in that context. We will assume that the expected reward (with respect to the distribution P_r(r|c, k)) is a linear function of the context,








i.e.,

E[ r_k(t) | c(t) ] = u_k^T c(t),




where uk is an unknown weight vector associated with the arm k; the agent's objective is to learn uk from the data so it can optimize its cumulative reward over time.


Contextual Combinatorial Bandit (CCB)

The feature subset selection approach in this invention builds upon the Contextual Combinatorial Bandit (CCB) problem, specified as follows. Each arm k∈A={1, . . . , K} is associated with a corresponding variable x_k(t)∈R, which indicates the reward obtained when choosing the k-th arm at time t, for t>1. Let us consider a constrained set of arm subsets S⊆P(K), where P(K) is the power set of K, associated with a set of variables {r_M(t)}, for all M∈S and t>1. Variable r_M(t)∈R indicates the reward associated with selecting a subset of arms M at time t, where r_M(t)=h(x_k(t), k∈M), for some function h(·). The contextual combinatorial bandit setting can be viewed as a game where the agent sequentially observes a context c, selects subsets in S, and observes rewards corresponding to the selected subsets. Here we define the reward function h(·) used to compute r_M(t) as the sum of the outcomes of the arms in M, i.e. r_M(t)=Σ_{k∈M} x_k(t), although one can also use nonlinear rewards. The objective of the CCB algorithm is to maximize the reward over time. We consider here a stochastic model, where the expectation of x_k(t) observed for an arm k is a linear function of the context,







i.e.,

E[ x_i(t) | c(t) ] = u_i^T c(t),




where u_i is an unknown weight vector (to be learned from the data) associated with the arm i. The distribution of outcomes can be different for each arm. The global rewards r_M(t) are also independent random variables distributed according to some unknown distribution with some expectation u_M. Because the reward h(·) is additive, selecting the best subset of a given size reduces to picking the arms with the highest estimated scores, as in the sketch below.
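Since h(·) sums the arm outcomes, maximizing the estimated expected reward over all subsets of a fixed size reduces to ranking arms by their estimated scores. A minimal sketch, assuming per-arm coefficient estimates have already been learned:

import numpy as np

def select_subset(c, u_hat, subset_size):
    # c      : context vector, shape (N,)
    # u_hat  : estimated per-arm coefficient vectors, shape (num_arms, N)
    # With r_M(t) = sum over k in M of x_k(t), the best size-`subset_size`
    # subset is simply the arms with the largest estimated scores u_i^T c.
    scores = u_hat @ c                          # estimated E[x_i(t) | c(t)]
    top = np.argsort(scores)[-subset_size:]     # indices of the top-scoring arms
    return sorted(int(i) for i in top)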


Problem Setting: Context-Attentive Bandit with Observations (CABO)


In this section, we introduce a novel type of contextual bandit problem, called the Context-Attentive Bandit with Observations (CABO), present an algorithm for solving this problem, and derive theoretical results (regret bounds) for the algorithm.


As mentioned above, c(t)∈R^N will denote a vector of values assigned to an (ordered) set of random context variables, or features, C={C_1, . . . , C_N}, at time t. Let C_X⊆C, |C_X|=X, 0<X≤N, denote a subset of features of size X. Let c_X(t)∈S_CX, where S_CX⊆R^N is the subspace of R^N containing all sparse vectors whose only nonzero entries appear at features in C_X.


We will now introduce the following problem setting, outlined in Algorithm 1 (a minimal code sketch of this protocol follows the algorithm). We assume that at each time point t the environment generates a feature vector c(t)∈R^N which the agent cannot observe fully. However, unlike the contextual bandit with restricted context (CBRC) setting, the proposed setting provides a partial observation of the context, i.e. it reveals the values c_V(t) of a subset of observed features C_V⊂C, V<N. Based on this partial observation, the agent is allowed to request U additional unobserved features. The goal of the agent is to maximize its total reward over time via (1) the optimal choice of the additional observations, given the initial ones, and (2) the optimal choice of an action k∈A based on all observations.












Algorithm 1 CABO Problem Setting

 1: for t = 1 to T do
 2:   Context c(t) is drawn from distribution P_c(c)
 3:   The values c_V(t) of a subset C_V ⊂ C are observed
 4:   The agent selects a subset C_U = h(c_V(t)), C_U ⊆ C
 5:   The values c_U(t) are revealed; c_V+U(t) will denote the observed values of the features in C_V ∪ C_U
 6:   The agent chooses an arm k(t) = π̂(c_V+U(t))
 7:   The reward r_k(t) is revealed
 8:   The agent updates the policy π ∈ Π
 9:   t = t + 1
10: end for
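For concreteness, the following minimal Python sketch plays one round of the above protocol against a synthetic linear environment. The noise scale and the policy interface (select_features, select_arm, update) are illustrative assumptions, not part of the disclosure:

import numpy as np

def cabo_round(policy, c, observed_idx, U, true_u, rng):
    # One round of the Algorithm 1 protocol on a synthetic linear environment.
    c_v = {i: c[i] for i in observed_idx}             # step 3: partial observation
    chosen = policy.select_features(c_v, U)           # step 4: request U features
    c_uv = dict(c_v, **{i: c[i] for i in chosen})     # step 5: values are revealed
    k = policy.select_arm(c_uv)                       # step 6: choose an arm
    x = np.zeros(len(c))
    for i, v in c_uv.items():                         # unrevealed entries stay zero
        x[i] = v
    r = float(true_u[k] @ x + rng.normal(scale=0.1))  # step 7: reward is revealed
    policy.update(c_v, chosen, k, r)                  # step 8: policy update
    return r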









We will consider a set Π of all compound-function policies which first choose a feature subset CU(t) given the observation cV(t), obtain cU+V(t), and then map the feature subset values to actions:





Π = ∪_{C_U∈S} { π: R^N → A | π(c) = π̂(c_{U+V}) }.  (Eq. 1)


where S = {C_U | C_U ⊆ C \ C_V, |C_U| = U} is the set of all possible subsets of U additional features the agent may request to observe, given the partial context c_V, and where π̂: S_CV+U → A is a function mapping the final observed feature vector c_V+U ∈ S_CV+U into an action (a.k.a. bandit arm) k(t)∈A, which results in a reward r_k(t).


The objective of a contextual bandit algorithm is to find an optimal policy π∈Π over T iterations or time points, so that the total reward is maximized. Assuming a stochastic setting, where contexts and rewards are random variables drawn from the probability distributions P_C(c) and P_r(r|c, k), respectively, we can formally define our novel problem setting as follows:


Definition 1 (Cumulative regret). Let Π be a set of policies in eq. 1. Then the cumulative regret over T iterations of an algorithm solving the CABO problem, i.e. optimizing its policy π∈Π, is defined as






R(T) = max_{π∈Π} Σ_{t=1}^T r_{π(c(t))}(t) − Σ_{t=1}^T r_{k(t)}(t).  (Eq. 2)


The objective of a contextual bandit algorithm, which learns the hypothesis π∈Π over T iterations, is to minimize the cumulative regret R(T). In simulation, this quantity can be estimated empirically, as sketched below.
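A minimal sketch of this empirical estimate, assuming the rewards of the best policy in the class are computable (true only in simulation, where the environment is fully known):

import numpy as np

def cumulative_regret(achieved_rewards, oracle_rewards):
    # achieved_rewards : rewards r_k(t) collected by the learning algorithm
    # oracle_rewards   : rewards the best policy in the class would have collected
    return float(np.sum(oracle_rewards) - np.sum(achieved_rewards))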


Before introducing an algorithm for solving the above CABO problem, we will derive a lower bound on the expected regret of any algorithm used to solve this problem.


Theorem 1. For any policy π∈Π computed by a particular algorithm for solving the CABO problem, given the initial feature subset C_V with V<N, N²≤T, and T ≥ (N−V choose V), there exist probability distributions P_C(c) and P_r(r|c, k) such that the lower bound of the expected regret accumulated by the algorithm over T iterations is:





Ω( √(T(U+V)) + √(TV) ).


The first term comes from the lower bound of the linear bandit with the (U+V) best components, and the second from the lower bound of the linear bandit with the V best components. The condition on N and T comes from Lemma 7. Proof details can be found in the supplemental material; see online URL <<http://tinyurl.com/y37r9chm>>, the teachings of which are hereby incorporated by reference in their entirety. Note that this lower bound is worse than that of the classical contextual bandit, since the algorithm needs to explore both the feature space and the arms space.


New Method: Context-Attentive Thompson Sampling with Observations (CATSO)












Algorithm 2 Context-Attentive Thompson Sampling with Observations (CATSO)

 1: Require: Total number of features N; initially observed number of features V; the set of those features C_V; the number of additional features to observe U; the distribution parameter v; and the function λ(t), computed differently for the stationary and nonstationary cases.
 2: Initialize: ∀k ∈ {1, . . . , K}: A_k = I, g_k = 0, û_k = 0; and ∀i ∈ {1, . . . , N}: B_i = I, z_i = 0, θ̂_i = 0.
 3: Foreach t ∈ 1, 2, . . . , T do
 4:   Observe c_V(t), given feature subset C_V
 5:   Foreach context feature i = 1, . . . , N do
 6:     If i ∉ C_V then
 7:       Sample θ_i from N(θ̂_i, α² B_i⁻¹)
 8:     End if
 9:   End do
10:   Select C_U = argmax_{C′⊆C\C_V, |C′|=U} Σ_{i∈C′} c_V(t)^T θ_i
11:   C_U+V(t) = C_V ∪ C_U(t)
12:   Observe values c_U+V(t) of the features in C_U+V(t)
13:   c(t) = c_U+V(t)
14:   Foreach arm k = 1, . . . , K do
15:     Sample u_k from N(û_k, α² A_k⁻¹)
16:   End do
17:   Select k(t) = argmax_{k∈{1, . . . , K}} c(t)^T u_k
18:   Observe r_k(t)
19:   A_k = A_k + c(t) c(t)^T
20:   g_k = g_k + c(t) r_k(t)
21:   û_k = A_k⁻¹ g_k
22:   Foreach i ∈ C_U do
23:     B_i = λ(t) B_i + c_V(t) c_V(t)^T
24:     z_i = z_i + c_V(t) r_k(t)
25:     θ̂_i = λ(t) B_i⁻¹ z_i
26:   End do
27: End do









We now propose a method for solving the CABO problem, called Context-Attentive Thompson Sampling with Observations (CATSO), summarized in Algorithm 2 (background on Thompson Sampling is given below). The combinatorial task of selecting the best subset of features is treated as a contextual combinatorial bandit (CCB) problem, and the subsequent decision-making (action selection) task is treated as a contextual bandit problem solved by Thompson Sampling.


The algorithm takes as inputs the total number of features N, the initially observed number of features V, and the number of additional features to observe, U. We also provide as an input the distribution parameter v used in Contextual Thompson Sampling (CTS).


We iterate over T steps, at each iteration t first observing the values c_V(t) of the features in the original observed subset C_V. At each iteration t, we sample the vector parameter θ_i∈R^V (step 7) from the corresponding multivariate Gaussian distribution, separately for each feature i not yet observed, using the current estimate θ̂_i. At each stage we select the best subset of features C_U ⊆ C\C_V, such that C_U = argmax_{C′⊆C\C_V, |C′|=U} Σ_{i∈C′} c_V(t)^T θ_i.


Once a subset of features is selected using the contextual combinatorial bandit approach, we switch to the contextual bandit setting in steps 14-26, choosing an arm based on the context that now consists of the selected subset of features. A compact Python sketch of the full procedure follows.
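The sketch below is a simplified Python rendering of Algorithm 2, not a definitive implementation: the CTS scale alpha stands in for the value derived from the distribution parameter v, the priors are identity matrices, and lam(t)=1 recovers the stationary case. Here c_v is the vector of observed values ordered by observed_idx and c is the full N-dimensional context with unrevealed entries set to zero.

import numpy as np

class CATSO:
    # Minimal sketch of Algorithm 2 (stationary when lam(t) = 1).
    def __init__(self, N, K, observed_idx, U, alpha=0.25, lam=lambda t: 1.0):
        self.K, self.U = K, U
        self.V = sorted(observed_idx)                   # indices of C_V
        self.hidden = [i for i in range(N) if i not in self.V]
        self.alpha, self.lam = alpha, lam
        d = len(self.V)
        self.A = [np.eye(N) for _ in range(K)]          # arm-level matrices A_k
        self.g = [np.zeros(N) for _ in range(K)]        # arm-level vectors g_k
        self.B = {i: np.eye(d) for i in self.hidden}    # feature-level B_i
        self.z = {i: np.zeros(d) for i in self.hidden}  # feature-level z_i
        self.rng = np.random.default_rng(0)

    def select_features(self, c_v):
        # Steps 5-10: sample theta_i per hidden feature; the additive objective
        # makes the argmax over size-U subsets the top-U scored features.
        # (The lam(t) factor of step 25 is folded into B_i and omitted here.)
        scores = {}
        for i in self.hidden:
            mean = np.linalg.solve(self.B[i], self.z[i])
            cov = self.alpha ** 2 * np.linalg.inv(self.B[i])
            theta = self.rng.multivariate_normal(mean, cov)
            scores[i] = float(c_v @ theta)
        return sorted(scores, key=scores.get, reverse=True)[: self.U]

    def select_arm(self, c):
        # Steps 14-17: sample u_k per arm, pick the maximizer of c(t)^T u_k.
        samples = []
        for k in range(self.K):
            mean = np.linalg.solve(self.A[k], self.g[k])
            cov = self.alpha ** 2 * np.linalg.inv(self.A[k])
            samples.append(float(c @ self.rng.multivariate_normal(mean, cov)))
        return int(np.argmax(samples))

    def update(self, t, c, c_v, chosen, k, r):
        # Steps 19-26: update the arm model, then the chosen features' models.
        self.A[k] += np.outer(c, c)
        self.g[k] += r * c
        for i in chosen:
            self.B[i] = self.lam(t) * self.B[i] + np.outer(c_v, c_v)
            self.z[i] += r * c_v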


We will assume that the expected reward is a linear function of the restricted context,

E[ r_k(t) | c(t) ] = u_k^T c(t);

note that this assumption is different from the linear reward assumption in which a single parameter u was considered for all arms, there was no restricted context, and, for each arm, a separate context vector c_k(t) was assumed.


Besides this difference, we follow the approach of Contextual Thompson Sampling (CTS). We assume that the reward r_k(t) for choosing arm k at time t follows a parametric likelihood function P(r(t) | u_k), and that the posterior distribution at time t+1, P(u | r(t)) ∝ P(r(t) | u) P(u), is given by a multivariate Gaussian distribution N(û_k(t+1), α² A_k(t+1)⁻¹), where A_k(t) = I_N + Σ_{r=1}^{t−1} c(r) c(r)^T, with N the size of the context vectors c,

α = R √( (24/ϵ) N ln(1/γ) )

with R > 0 and ϵ∈[0,1], γ∈[0,1] constants, and û_k(t) = A_k(t)⁻¹ ( Σ_{r=1}^{t−1} c(r) r_k(r) ).


At each time point t, and for each arm k, we sample u_k from N(û_k(t+1), α² A_k(t+1)⁻¹), choose the arm maximizing c(t)^T u_k (step 17 of Algorithm 2), obtain the reward r_k(t) for choosing arm k, and finally update the relevant parameters.


We now derive an upper bound on the regret of the policy computed by the CATSO algorithm just presented. However, to make the analysis more convenient, we assume that the algorithm has to choose between independent subsets of features. This means the algorithm solves a contextual bandit problem at the feature level rather than a combinatorial contextual bandit problem. We start with a useful definition:


Definition 2 (Optimal Policy for CABO). The optimal policy for solving the CABO problem selects, at time t, the arm

k(t) = argmax_{k∈A, C_U+V⊆C} u_k*^T c*_U+V(t) + θ*_CV(t)^T c_V(t),

where u_k* is the optimal coefficient vector for arm k, θ*_CV(t) is the optimal coefficient vector for the features C_V, C*_U+V(t) denotes the optimal feature set of size U+V, and c*_U+V(t) denotes the observation of the optimal feature set.


We also need the following assumption: for every subset C_U+V ⊆ C there exists a positive function p(C_U+V) > 0 such that the noise n_{k,t} = r_k(t) − c_U+V(t)^T û_k is conditionally p(C_U+V)-subgaussian; that is, for all t ≥ 1,

∀λ ∈ R:  E[ e^{λ n_t} ] ≤ exp( λ² p(C_U+V)² / 2 ).  (Eq. 3)
Note that this condition implies that the noise has zero mean and depends on C_U+V, where C_U is chosen at each iteration. This heteroscedastic noise arises because the reward observed at the feature subset selection step depends on the performance of the arm selection step; the latter step creates time variance in the noise that the former algorithm observes, and vice versa.


Using the above definition of regret, we can now derive the following result.


Theorem 2. Assume that the measurement noise n_t satisfies assumption (Eq. 3), and that p>0, ∥c_t∥≤1, ∥u*∥≤1, and ∥θ*∥≤1. Then, with probability 1−δ, where 0<δ<1, the regret R(T) accumulated by the CATSO algorithm in T iterations is upper-bounded by







R(T) ≤ ( p(U+V) √(T log K) + √( T(N−V) log(N−V) ) ) · ( ln T + √( ln T · ln(1/δ) ) ),

where p = argmax_{C_U+V ⊆ C} p(C_U+V).










Theorem 2 states that the regret of the proposed algorithm depends on two components: the error incurred while learning with the best feature vector (upper-bounded by the regret of CTS) and the error caused by choosing a suboptimal feature subset (upper-bounded by the regret of CTS in a contextual bandit with heteroscedastic reward noise).


EXPERIMENTS

Empirical evaluation of the proposed methodology was performed on three publicly available datasets: Covertype, CNAE-9, and Warfarin. The details of each dataset are summarized in Table 1.









TABLE 1

Datasets

Dataset       Instances   Features   Classes
Covertype       500,000         95         7
CNAE-9            1,080        856         9
Warfarin          5,228         93         3










We assess Context-Attentive Thompson Sampling with Observations (CATSO) with respect to the current state of the art for context-attentive bandits, Thompson Sampling with Restricted Context (TSRC). TSRC solves the contextual bandit with restricted context (CBRC) problem discussed above, selecting a set of unknown features at each event while assuming no observable features exist. For a total number of features N and total feature budget D, we refer to the O observed features as the known context and the N−O unobserved context features as the unknown context. In our use of the TSRC algorithm, at each iteration, the known context is observed, the TSRC decision mechanism independently chooses U=D−O unknown context features to reveal, and Contextual Thompson Sampling (CTS) is invoked.


For the stationary setting, we randomly fix 10% of the context feature space of each dataset to be known at the onset and explore a subset of U unknown features. Notice that when U=0, both approaches reduce to CTS with just the fixed known context, and when U is equivalent to 90% of N, the total number of context features, each approach reduces to CTS with the full context. For CATSO we fix λ(t)=1 to reflect the stationary setting and choose S=1. For both CATSO and TSRC we fix v=0.25. We report the total average reward across a range of U corresponding to various percentages of N for each algorithm in Table 2 in FIG. 4. The results in Table 2 are promising, with CATSO outperforming the TSRC baseline in the majority of cases. CATSO sometimes outperforms and other times nearly matches TSRC performance on the CNAE-9 dataset. This outcome is somewhat expected: in the original work on TSRC, the mean error rate of TSRC on CNAE-9 was only 0.03% lower than randomly fixing a subset of unknown features to reveal for each event. This suggests that the operating premise of TSRC, that some features are more predictive of reward than others, does not hold on this dataset. On top of this assumption, CATSO also assumes that there exist relationships between the known and unknown context features, likely causing a small compounding of error. CATSO outperforms TSRC on both Warfarin and Covertype.


Next we examine the case where the unknown feature space is nonstationary. To simulate nonstationarity in the unknown feature space, we duplicate each dataset, randomly fix the known context in the same manner as above, and shuffle the unknown feature set and label pairs. Then we stochastically replace events in the original dataset with their shuffled counterparts, with the probability of replacement increasing uniformly with each additional event (a sketch of this procedure follows). For this nonstationary setting, which we refer to as NCATSO, we fix S=1 and use λ(t) defined by the GP-UCB algorithm. We compare NCATSO to Weighted TSRC (WTSRC), the nonstationary version of TSRC also developed in the cited Bouneffouf et al. 2017 reference. WTSRC makes updates to its feature selection model based only on recent events, where recent events are defined by a time period, or “window,” w. We choose w=100 for WTSRC, fix v=0.25 for both algorithms, randomly fix 10% of the context to be known, and contrast the approaches' total average reward in Table 3 in FIG. 5. Here we observe that NCATSO again outperforms the state of the art, in this case WTSRC, the majority of the time, and notice that the improvement gain over the baseline is even greater than in the stationary case. Regarding the drop in reward when 100% of the context is revealed: this is the case where both methods reduce to CTS. Without any adaptations, CTS does not handle nonstationary data any differently than stationary data, which can sometimes result in non-optimal performance.
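A sketch of this simulation, under the assumption that the dataset is held as numpy arrays; the exact replacement schedule used in the experiments may differ in detail:

import numpy as np

def make_nonstationary(features, labels, known_idx, rng):
    # Keep the known context of each event, but with probability increasing
    # uniformly over time replace its unknown features and label with those
    # of a shuffled counterpart event.
    n = len(labels)
    unknown_idx = [j for j in range(features.shape[1]) if j not in known_idx]
    perm = rng.permutation(n)                    # shuffled counterpart dataset
    out_f, out_l = features.copy(), labels.copy()
    for t in range(n):
        p_replace = t / max(n - 1, 1)            # grows uniformly with each event
        if rng.random() < p_replace:
            s = perm[t]
            out_f[t, unknown_idx] = features[s, unknown_idx]
            out_l[t] = labels[s]
    return out_f, out_l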


We conclude our experiments by performing a deeper analysis on the Covertype dataset, examining the case where S=U. When S=1, the known context is used to select all U additional context features at once, whereas when S=U, the known context is updated and used to select each of the U additional context features one at a time. Maintaining v=1 and λ(t)=1 for the stationary case, we denote these two cases of the CATSO algorithm as CATSO-1 and CATSO-U, respectively, and report their performance when 10% of the context is randomly fixed, across various U, in FIG. 6. CATSO-1 consistently outperforms CATSO-U across all U tested. CATSO-U likely suffers because incremental feature selection adds nonstationarity to the known context: CATSO-1 learns relationships between a fixed known context and the unknown features, while CATSO-U must learn those relationships as the known context grows. Nonetheless, both methods outperform TSRC. For the nonstationary case we use the GP-UCB algorithm for λ(t), refer to the S=1 and S=U cases as NCATSO-1 and NCATSO-U, and illustrate their performance in FIG. 7. Here we observe that NCATSO-1 and NCATSO-U have comparable performance, and the improvement gain over the baseline, in this case WTSRC, is even greater than in the stationary case.


Non-Limiting Examples of Applications

This disclosure described CABO, a new variant of context-attentive bandits, focused on contextual bandit problems where the context is not fully observable. We show that by knowing part of the context the agent can make a better decision on selecting the next set of features to be revealed. The motivation comes from problem scenarios where part of the context features are easier and cheaper to obtain, while the rest might require time and/or incur cost.


In our experiments we used a random fixed context, available beforehand, for each dataset. However, it is important to note that in actual applications of this problem setting, the known context feature space is likely nonrandom. It is instead the information most readily at hand, available at little to no cost, and likely highly informative. In our motivating example of clinical settings, known contexts of patient information are likely highly informative of which medical tests need to be performed, and in the case of AI agent orchestration, known contexts of queries and users are likely highly predictive of which potential agent responses need to be revealed in dialog orchestration. As an immediate next step, we hope to evaluate our algorithm on datasets where such a natural partition between the known and unknown context features exists, and expect our method to have an even greater edge in these scenarios. Another direction for future work is exploring nonstationarity in the known context space, where the set of known context features changes over time.


Non-Limiting Definitions

Before beginning the discussion of the various aspects of the illustrative embodiments, it should first be appreciated that throughout this description the term “mechanism” will be used to refer to elements of the present invention that perform various operations, functions, and the like. A “mechanism,” as the term is used herein, may be an implementation of the functions or aspects of the illustrative embodiments in the form of an apparatus, a procedure, or a computer program product. In the case of a procedure, the procedure is implemented by one or more devices, apparatus, computers, data processing systems, or the like. In the case of a computer program product, the logic represented by computer code or instructions embodied in or on the computer program product is executed by one or more hardware devices in order to implement the functionality or perform the operations associated with the specific “mechanism.” Thus, the mechanisms described herein may be implemented as specialized hardware, software executing on general purpose hardware, software instructions stored on a medium such that the instructions are readily executable by specialized or general purpose hardware, a procedure or method for executing the functions, or a combination of any of the above.


The present description and claims may make use of the terms “a”, “at least one of”, and “one or more of” with regard to particular features and elements of the illustrative embodiments. It should be appreciated that these terms and phrases are intended to state that there is at least one of the particular feature or element present in the particular illustrative embodiment, but that more than one can also be present. That is, these terms/phrases are not intended to limit the description or claims to a single feature/element being present or require that a plurality of such features/elements be present. To the contrary, these terms/phrases only require at least a single feature/element with the possibility of a plurality of such features/elements being within the scope of the description and claims.




Context-Attentive Decision Support System

FIG. 3 depicts an example functional block diagram of the present invention's Context-Attentive Thompson Sampling with Observations (CATSO) process. The functional block diagram of FIG. 3 may be implemented, for example, by one or more of the computing devices illustrated in FIG. 1 and/or data processing system 200 of FIG. 2.


Software and Hardware Implementation Examples

The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.


The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.


Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.


Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.


Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.


These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.


The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.


Thus, the illustrative embodiments provide mechanisms for making decisions using a limited number of features in a reinforcement learning system. In particular, the illustrative embodiments obtain the values of a set of initially observable features, a list of initially unobservable features, and a budget on the maximum number of features that may be observed. A first reinforcement learning policy uses the observable feature values and the feature budget to decide which of the initially unobservable features to reveal; a second reinforcement learning policy uses all of the observed features to select the next action; and both policies are updated with the feedback received for the selected action. A minimal illustrative sketch of this loop is given below.
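

By way of example only, the following Python sketch shows one possible realization of this feedback loop, using linear Thompson Sampling for both the feature-selection policy and the decision policy on a synthetic task. The LinearTS class, the toy environment, and all parameter values are hypothetical choices made to keep the example self-contained; they are not the claimed implementation.

    # Illustrative sketch only: a feedback-driven loop with two linear
    # Thompson Sampling policies, one that reveals hidden features and
    # one that chooses actions. The environment is a synthetic stand-in.
    import numpy as np

    rng = np.random.default_rng(0)

    N, K = 10, 5          # total features, actions (arms)
    OBS = [0, 1, 2]       # indices of initially observable features
    HIDDEN = [i for i in range(N) if i not in OBS]
    BUDGET = 3            # maximum number of hidden features to reveal
    T = 2000              # iterations

    class LinearTS:
        """Per-arm linear Thompson Sampling with a Gaussian posterior."""
        def __init__(self, n_arms, dim, v=0.25):
            self.A = [np.eye(dim) for _ in range(n_arms)]    # precision matrices
            self.b = [np.zeros(dim) for _ in range(n_arms)]  # reward-weighted sums
            self.v = v                                       # exploration scale

        def sample_scores(self, x):
            scores = []
            for A, b in zip(self.A, self.b):
                mu = np.linalg.solve(A, b)
                theta = rng.multivariate_normal(mu, self.v**2 * np.linalg.inv(A))
                scores.append(theta @ x)
            return np.array(scores)

        def update(self, arm, x, reward):
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x

    # Policy 1 scores hidden features from the observable sub-context;
    # policy 2 chooses an action from the full (partially revealed) context.
    feature_policy = LinearTS(n_arms=len(HIDDEN), dim=len(OBS))
    action_policy = LinearTS(n_arms=K, dim=N)

    true_theta = rng.normal(size=(K, N))  # hidden reward model of the toy task

    for t in range(T):
        ctx = rng.normal(size=N)          # full context (mostly unobserved)
        x_obs = ctx[OBS]
        # (d) reveal the BUDGET highest-scoring hidden features
        scores = feature_policy.sample_scores(x_obs)
        revealed = [HIDDEN[i] for i in np.argsort(scores)[-BUDGET:]]
        x_full = np.zeros(N)
        x_full[OBS] = ctx[OBS]
        x_full[revealed] = ctx[revealed]
        # (e) choose the next action on the combined context
        arm = int(np.argmax(action_policy.sample_scores(x_full)))
        reward = true_theta[arm] @ ctx + rng.normal(scale=0.1)
        # (f) feed the observed reward back into both policies
        action_policy.update(arm, x_full, reward)
        for i in revealed:
            feature_policy.update(HIDDEN.index(i), x_obs, reward)

In this sketch the same scalar reward updates both policies, so the feature-selection policy learns which hidden features, when revealed, lead to better downstream decisions; any other pair of reinforcement learning policies could be substituted.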


As noted above, it should be appreciated that the illustrative embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In one example embodiment, the mechanisms of the illustrative embodiments are implemented in software or program code, which includes but is not limited to firmware, resident software, microcode, etc.


A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.


Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters. Disk drives, DVDs, CDs and memory sticks are just a few of the computer readable storage devices.


The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims
  • 1. A method for making decisions using limited features in a reinforcement learning system, the method comprising: a) obtaining initially observable feature values corresponding to a list of initially observable features, a list of initially unobservable features, and a maximum number of observable features relating to a computer system being interacted with by a reinforcement learning agent; b) selecting, by the reinforcement learning agent, actions from a set of actions to cause the computer system to move from one state to another state, whereby selection of the actions is modeled as a reinforcement learning policy; c) performing, by the reinforcement learning agent, the actions which have been selected to cause the computer system to move from one state to another state; d) using the initially observable feature values and the maximum number of observable features to select which initially unobservable features to reveal using a first reinforcement learning policy for feature selection; e) using the initially observable features along with the initially unobservable features which have been revealed to select a next action from the set of actions using a second reinforcement learning policy for decision selection; f) updating the first reinforcement learning policy and the second reinforcement learning policy with feedback based on the next action which has been selected; and g) repeating step d) through step f) over a settable number of iterations.
  • 2. The method of claim 1, wherein steps d) through f) are applied to a contextual bandit algorithm.
  • 3. The method of claim 2, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; and step d) through step g) further including iterating, for the settable number of stages, over selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach.
  • 4. The method of claim 2, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; step d) through step g) further including iterating, for the settable number of stages, over each of selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach; and updating at least one of the first reinforcement learning policy and the second reinforcement learning policy using a decay parameter computed by a GP-UCB algorithm.
  • 5. The method of claim 1, wherein the initially observable feature values are one or more of numeric feature values, structural feature values, string values, and graph values.
  • 6. The method of claim 1, wherein the initially observable features represent patient history and the initially unobservable features are available patient tests.
  • 7. The method of claim 1, wherein the initially observable features are one or more of speech dialog utterances and user history and the initially unobservable features are available responses.
  • 8. The method of claim 1, wherein the initially observable features comprise user information history and the initially unobservable features are available responses.
  • 9. A computer program product for making decisions using limited features in a reinforcement learning system, comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to perform: a) obtaining initially observable feature values corresponding to a list of initially observable features, a list of initially unobservable features, and a maximum number of observable features relating to a computer system being interacted with by a reinforcement learning agent; b) selecting, by the reinforcement learning agent, actions from a set of actions to cause the computer system to move from one state to another state, whereby selection of the actions is modeled as a reinforcement learning policy; c) performing, by the reinforcement learning agent, the actions which have been selected to cause the computer system to move from one state to another state; d) using the initially observable feature values and the maximum number of observable features to select which initially unobservable features to reveal using a first reinforcement learning policy for feature selection; e) using the initially observable features along with the initially unobservable features which have been revealed to select a next action from the set of actions using a second reinforcement learning policy for decision selection; f) updating the first reinforcement learning policy and the second reinforcement learning policy with feedback based on the next action which has been selected; and g) repeating step d) through step f) over a settable number of iterations.
  • 10. The computer program product of claim 9, wherein steps d) through f) are applied to a contextual bandit algorithm.
  • 11. The computer program product of claim 10, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; and step d) through step g) further including iterating, for the settable number of stages, over selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach.
  • 12. The computer program product of claim 10, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; step d) through step g) further including iterating, for the settable number of stages, over each of selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach; and updating at least one of the first reinforcement learning policy and the second reinforcement learning policy using a decay parameter computed by a GP-UCB algorithm.
  • 13. The computer program product of claim 9, wherein the initially observable feature values are one or more of numeric feature values, structural feature values, string values, and graph values.
  • 14. The computer program product of claim 9, wherein the initially observable features represent patient history and the initially unobservable features are available patient tests.
  • 15. The computer program product of claim 9, wherein the initially observable features are one or more of speech dialog utterances and user history and the initially unobservable features are available responses.
  • 16. The computer program product of claim 9, wherein the initially observable features comprise user information history and the initially unobservable features are available responses.
  • 17. An apparatus for making decisions using limited features in a reinforcement learning system, comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to perform: a) obtaining initially observable feature values corresponding to a list of initially observable features, a list of initially unobservable features, and a maximum number of observable features relating to a computer system being interacted with by a reinforcement learning agent; b) selecting, by the reinforcement learning agent, actions from a set of actions to cause the computer system to move from one state to another state, whereby selection of the actions is modeled as a reinforcement learning policy; c) performing, by the reinforcement learning agent, the actions which have been selected to cause the computer system to move from one state to another state; d) using the initially observable feature values and the maximum number of observable features to select which initially unobservable features to reveal using a first reinforcement learning policy for feature selection; e) using the initially observable features along with the initially unobservable features which have been revealed to select a next action from the set of actions using a second reinforcement learning policy for decision selection; f) updating the first reinforcement learning policy and the second reinforcement learning policy with feedback based on the next action which has been selected; and g) repeating step d) through step f) over a settable number of iterations.
  • 18. The apparatus of claim 17, wherein steps d) through f) are applied to a contextual bandit algorithm.
  • 19. The apparatus of claim 18, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; and step d) through step g) further including iterating, for the settable number of stages, over selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach.
  • 20. The apparatus of claim 18, further comprising: obtaining a total number of features and a total number of desired features to observe over a settable number of stages; step d) through step g) further including iterating, for the settable number of stages, over each of selecting a subset of features using a contextual combinatorial bandit approach followed by a contextual bandit approach; and updating at least one of the first reinforcement learning policy and the second reinforcement learning policy using a decay parameter computed by a GP-UCB algorithm.
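
For illustration only, and forming no part of the claims: the decay-parameter update recited in claims 4, 12, and 20 could, for example, be realized by maximizing a GP-UCB acquisition over a grid of candidate decay values. The sketch below assumes scikit-learn's Gaussian process regressor; the candidate grid, kernel, and beta schedule are assumptions made solely for this example.

    # Illustrative sketch only: choosing a decay parameter per stage by
    # maximizing a GP-UCB acquisition (posterior mean + sqrt(beta) * std)
    # over candidate values. The stage reward here is a synthetic stand-in.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    candidates = np.linspace(0.0, 1.0, 21).reshape(-1, 1)  # candidate decay values
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
    tried, stage_rewards = [], []

    def next_decay(t):
        """GP-UCB: argmax over candidates of posterior mean + sqrt(beta_t) * std."""
        if not tried:                      # no data yet: pick at random
            return float(candidates[rng.integers(len(candidates)), 0])
        gp.fit(np.array(tried).reshape(-1, 1), np.array(stage_rewards))
        mu, sigma = gp.predict(candidates, return_std=True)
        beta = 2.0 * np.log(len(candidates) * (t + 1) ** 2)
        return float(candidates[np.argmax(mu + np.sqrt(beta) * sigma), 0])

    for t in range(50):
        gamma = next_decay(t)              # decay parameter for this stage
        # ... one stage of the bandit loop would run here, with at least one
        # of the two policies updated using forgetting factor gamma; a
        # synthetic stage reward stands in for the observed feedback ...
        reward = np.exp(-(gamma - 0.6) ** 2) + rng.normal(scale=0.05)
        tried.append(gamma)
        stage_rewards.append(reward)

Over the stages, the Gaussian process posterior concentrates on decay values that yielded higher stage rewards, so the amount of forgetting applied to the policies adapts to the observed non-stationarity of the task.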