This application claims the benefit under 35 U.S.C. §119(a) of Korean Patent Application No. 10-2008-0107149, filed on Oct. 30, 2008, the disclosure of which is incorporated by reference in its entirety for all purposes.
1. Field
The following description relates to a technology that can provide a personalized service, and more particularly, to a technology that can provide a personalized service based on situation recognition.
2. Description of the Related Art
The development of information technology (IT) and the increased use of the Internet have resulted in an exponential increase in the information available to users. However, this exponential increase presents users with the challenge of searching through a vast amount of information to find and select desired information. To address this problem, research is being performed into content recommendation systems which can provide a service personalized to a user by filtering out information that is not desired by the user and by recommending useful information. Conventional research has focused on recommending content by utilizing user profile information according to the clear needs of each user. That is, conventional research is based on the assumption that refined information can be received from the user in a static environment, such as a customer relationship management (CRM) environment.
Conventional personalization techniques that are widely used include content-based techniques and collaborative filtering techniques, and most of these techniques require prior information about users or detailed information about items the user would consider as recommended items. However, meta information of services provided by service providers is not fully defined, and it is difficult to collect, in advance, information about users due to security or privacy matters. Therefore, using the conventional techniques, provision of personalized services can be very limited.
The following description relates to an apparatus and method for modeling a user's service use pattern, the apparatus and method capable of providing personalized content service to a user without requiring prior information about the user or detailed information about items the user would consider as recommended items.
According to an exemplary aspect, there is provided an apparatus for modeling a user's service use pattern. The apparatus includes: a user model database storing a user model which is composed of context-service pairs and records a learning value of each of the context-service pairs; a service information collection unit collecting information about a service selected by the user; a situation information collection unit collecting situation information of the user when selecting the service; and a learning unit learning the user's service use pattern based on the information about the service selected by the user and the situation information of the user and updating learning values of one or more corresponding context-service pairs, wherein the situation information of the user includes one or more contexts.
The apparatus further includes a recommendation unit creating a service prediction table, which comprises services that the user is expected to use in a current situation of the user, based on the user model and the situation information collected by the situation information collection unit and recommending a service based on the created service prediction table.
According to another exemplary aspect, there is provided a method of modeling a user's service use pattern. The method includes: collecting information about a service selected by the user and situation information of the user when selecting the service; learning the user's service use pattern based on the collected information; and updating a learning value of a corresponding context-service pair in a user model, which is composed of context-service pairs, based on the learning result, wherein the situation information of the user includes one or more contexts.
The method further includes determining a domain based on the collected situation information before the learning of the user's service use pattern, wherein in the updating of the learning value, the learning value of the corresponding context-service pair is updated using a reward defined in a context profile which corresponds to the determined domain.
The method further includes: creating a service prediction table, which comprises services that the user is expected to use in a current situation of the user, based on the user model and the situation information of the user; and recommending a service based on the created service prediction table.
The method further includes: receiving feedback on whether the user used the recommended service; and updating the learning value of the corresponding context-service pair in the user model based on the feedback result.
Other objects, features and advantages will be apparent from the following description, the drawings, and the claims.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention, and together with the description serve to explain aspects of the invention.
The invention is described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Referring to
The user terminal 200 may be a mobile phone, a personal digital assistant (PDA), or any other type of communication equipment. For communication with the modeling server 100, an application program is installed on the user terminal 200. The application program transmits information about the current situation (described in the following paragraph in more detail) of the user and information about a service selected by the user to the modeling server 100 over a network. Then, the modeling server 100 recommends at least one service suitable for the current situation of the user. Accordingly, the application program informs the user of the service recommended by the modeling server 100.
More specifically, the application program installed on the user terminal 200 obtains information about a service (such as watching digital multimedia broadcasting (DMB), listening to the radio, MP3 playback, or Internet access) selected by the user and information regarding the current situation (hereinafter referred to as situation information) of the user when selecting the service. Here, the situation information of the user is information about the current environment of the user, such as the user's location, the user's activity, and current time. In a house, for example, a noise sensor, a radio-frequency identification (RFID) sensor, a biosensor, and physical environment sensors for measuring temperature and humidity may be installed, and the user terminal 200 may obtain the situation information of the user from the above sensors.
As described above, the application program of the user terminal 200 obtains information about a service selected by the user and the situation information of the user when selecting the service and transmits the obtained information to the modeling server 100. Then, the user terminal 200 displays a service recommended by the modeling server 100 on the screen thereof to inform the user of the recommended service. If the user selects the recommended service, the user terminal 200 provides the service directly or receives the service from an external source in order to provide the service.
The modeling server 100 learns a user model which is composed of context-service pairs. Referring to
“Reward” is a value that must be reflected in a user model based on the result of learning a user's service use pattern. Rewards are divided into a reward used when a user actively selects a service, a reward used when the user uses a recommended service, and a reward used when the user does not use the recommended service. The reward used when a user actively selects a service is defined as “Selection-rs,” the reward used when the user reacts positively to a recommended service is defined as “Positive Feedback-rp,” and the reward used when the user reacts negatively to the recommended service is defined as “Negative Feedback-rn.” Since a context profile may exist for each domain, rewards may be included in each context profile, so that different rewards can be set for each domain. Domains do not represent all environments; instead, each domain represents one of a number of groups into which various environments are categorized. Three domains, e.g., house, inside a car, and outdoor, may be modeled.
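By way of a non-limiting illustration, the per-domain context profiles with the three reward types described above could be organized as a simple mapping. The domain names and reward values here are hypothetical examples, not values defined by this disclosure:

```python
# Hypothetical sketch: one context profile per domain, each carrying the
# three reward types described above (Selection-rs, Positive Feedback-rp,
# Negative Feedback-rn). Values are illustrative only.
CONTEXT_PROFILES = {
    "house":   {"rs": 1.0, "rp": 0.8, "rn": -0.5},
    "car":     {"rs": 1.0, "rp": 0.6, "rn": -0.3},
    "outdoor": {"rs": 1.0, "rp": 0.7, "rn": -0.4},
}

def reward_for(domain, event):
    # event is one of "selection", "positive", "negative"
    profile = CONTEXT_PROFILES[domain]
    return {"selection": profile["rs"],
            "positive":  profile["rp"],
            "negative":  profile["rn"]}[event]
```

Because the reward values live in the per-domain profile rather than in the learning code, the same learning procedure can behave differently in, say, a house than inside a car, as the description contemplates.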
Referring to
A user model (C-TBL) includes context-service pairs. When the situation information includes three contexts, e.g., activity, location, and time, if a location-service pair already exists in another context profile, it is not created again. That is, only context-service pairs that do not exist in the user model are added to the user model. Then, the user model is updated according to the service clearly requested by the user (operation 460).
For example, if a user wakes up (c2: Wakeup) at seven o'clock in the morning (c3) and requests a news service (ac1: ListeningNews) in the bedroom (c1: Bedroom), the modeling server 100 updates a user model using a reward (Selection-rs) defined in a corresponding context profile. That is, learning values of C-TBL[c1][ac1] and C-TBL[c2][ac1] are updated. As for time information, a learning value of C-TBL[c3][ac1] is updated after the normalization process. This updating process is defined by the following equation:
for each ci in State do
    C-TBL[ci,k(t)][ac(t)] ← C-TBL[ci,k(t)][ac(t)] + γR(t),

where γ is a discount factor, R(t) is the reward applied at time t, and ci ∈ State.
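The update above can be sketched in a few lines of Python. This is a non-authoritative illustration: the dictionary layout of C-TBL and the Selection reward value are assumptions, and the context strings follow the bedroom example in the text:

```python
# Sketch of the C-TBL update: for each context in the current situation,
# add the discounted reward to the learning value of the
# (context, selected service) pair.
from collections import defaultdict

c_tbl = defaultdict(lambda: defaultdict(float))  # C-TBL[context][service]

def update_c_tbl(contexts, service, reward, gamma=0.9):
    # contexts: the user's situation information,
    # e.g. ["Bedroom", "Wakeup", "07:00"]
    for c in contexts:
        c_tbl[c][service] += gamma * reward

# The example from the text: the user wakes up at seven o'clock in the
# bedroom and requests a news service, reinforced with the Selection
# reward rs (assumed here to be 1.0).
update_c_tbl(["Bedroom", "Wakeup", "07:00"], "ListeningNews", reward=1.0)
```

Each call reinforces every context observed at selection time, so contexts that co-occur with a service repeatedly accumulate larger learning values.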
Referring to
Here, M(cs) is used to normalize each value into the range of 0 to 1 when the service prediction table (P-TBL) is created from the user model (C-TBL), and wi is a weight given to a context ci for each user. In general, the weight wi is a fixed value. However, the entropy of each context is calculated in order to give a different weight to each context according to the characteristics of users. The entropy of each context provides the information gain needed to select a service. In the following equation, p(I) indicates the ratio of the number of entities included in ActionClass I to the total number of entities in the set S over which the entropy is computed.
For example, when the set S includes two classes ac1 and ac2, the ratio of the number of entities included in ac1 to the total number of entities may be p(ac1), and the ratio of the number of entities included in ac2 to the total number of entities may be p(ac2). In this case, the entropy of a context may be calculated as −p(ac1)log2(p(ac1))−p(ac2)log2(p(ac2)). Using the calculated entropy of the context, an information gain for the context is calculated. In the following equation, gain(ck) indicates the information gain for a context ck, and Sv indicates the subset of S corresponding to each attribute value v that the context ck can have. The calculated information gain of each context is applied to the weight wi thereof. When there are many contexts, the contexts may be prioritized based on the calculated information gains, and a context which affects the selection of a service may be selected.
P-TBL[ack] for the current situation of the user is calculated by applying the weight wi of each context, and a service corresponding to the P-TBL[ack] having the highest value is recommended, or a list of recommended services corresponding respectively to a plurality of P-TBL[ack], prioritized in order of highest to lowest value, is provided to the user terminal (operation 520).
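A non-limiting sketch of how the service prediction table might be assembled from the user model is shown below. The choice of the per-context maximum as the normalization constant M(c), and the weight dictionary, are assumptions made for illustration:

```python
def build_p_tbl(c_tbl, current_contexts, weights):
    # c_tbl: the user model, {context: {service: learning_value}}
    # current_contexts: contexts observed in the user's current situation
    # weights: {context: wi}, e.g. derived from each context's information gain
    p_tbl = {}
    for c in current_contexts:
        services = c_tbl.get(c, {})
        m = max(services.values(), default=0.0)  # M(c): maps values into [0, 1]
        for service, value in services.items():
            norm = value / m if m else 0.0
            p_tbl[service] = p_tbl.get(service, 0.0) + weights.get(c, 1.0) * norm
    return p_tbl

def recommend(p_tbl, top_k=1):
    # services ordered from highest to lowest predicted value
    return sorted(p_tbl, key=p_tbl.get, reverse=True)[:top_k]
```

With top_k=1 this yields the single highest-valued service; a larger top_k yields the prioritized list of recommended services described above.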
Referring to
Referring to
A context profile unit 710 stores one or more context profiles. As illustrated in
A service information collection unit 730 collects information about a service selected by a user, and a situation information collection unit 740 collects situation information of the user. When a user selects a service, the user terminal 200 may transmit information about the service selected by the user and situation information of the user to the modeling server 100. Accordingly, the modeling server 100 may simultaneously collect the information about the service selected by the user and the situation information synchronized with the information about the selected service. Alternatively, the modeling server 100 may continuously monitor the user terminal 200 to identify the situation of the user when selecting a service.
A user model database 720 stores a user model (C-TBL) for each user. A user model includes context-service pairs, and a learning value is reflected in each of the context-service pairs. For example, a user model may include a location-service pair, an activity-service pair, and a time-service pair. In this case, a learning value resulting from the learning operation of a learning unit 750 is reflected in each of the pairs.
The learning unit 750 learns from the service information collected by the service information collection unit 730 and the situation information collected by the situation information collection unit 740. The learning unit 750 determines a domain based on one or more contexts that are included in the situation information. Then, the learning unit 750 learns the user's service use pattern with reference to a context profile corresponding to the determined domain. Based on the result of learning the user's service use pattern, the learning unit 750 updates a learning value of a corresponding context-service pair in a user model by using a reward stored in the context profile.
When the result of learning the service use pattern of a user, who is managed using a user model, exceeds a predetermined level at which it is determined that the service use pattern of the user has been fully learned, a recommendation unit 760 identifies a service frequently used by the user in the current situation and recommends the service to the user. Specifically, the recommendation unit 760 creates a service prediction table (P-TBL) by using the user model and recommends a service based on the created service prediction table. The service prediction table may be created by reflecting the weight of each context. Here, the weight of each context may be calculated as described above. Alternatively, the recommendation unit 760 may create a service prediction table by reflecting the weight of each context stored in the user profile unit 700.
Once a service is recommended, the learning unit 750 receives feedback on whether the user used the recommended service and learns the feedback. As described above, the learning unit 750 updates the user model using a reward (Positive Feedback-rp or Negative Feedback-rn) defined in the context profile according to whether the user used the recommended service.
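The feedback step can be folded into the same update rule, selecting rp or rn according to whether the user used the recommended service. This is a sketch only; the reward values and context strings are illustrative assumptions:

```python
# Feedback-driven update: apply Positive Feedback (rp) if the user used
# the recommended service, or Negative Feedback (rn) if the user did not,
# as defined in the context profile for the current domain.
from collections import defaultdict

c_tbl = defaultdict(lambda: defaultdict(float))  # C-TBL[context][service]

def apply_feedback(contexts, service, used, rp=0.8, rn=-0.5, gamma=0.9):
    reward = rp if used else rn
    for c in contexts:
        c_tbl[c][service] += gamma * reward

# The user accepted a recommended news service but ignored a recommended
# MP3 playback service in the same situation.
apply_feedback(["Bedroom"], "ListeningNews", used=True)
apply_feedback(["Bedroom"], "MP3", used=False)
```

Over time, ignored recommendations accumulate negative learning values and drop out of the prediction table, while accepted ones are reinforced.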
In the above example, a case where the user terminal 200 collects all situation information and provides the collected situation information to the modeling server 100 has been described. However, the modeling server 100 may also obtain situation information about the user terminal 200 from external sensors in a ubiquitous environment, while still receiving from the user terminal 200 any situation information that the user terminal 200 itself can collect.
The present invention makes it possible to actively and accurately provide a personalized service to a user by learning the user's service use pattern through interactions with the user. Flexibility is allowed in the situation information: the situation information is composed of contexts (such as time and location) extracted from sensors installed in the user's environment. In addition, the concept of domains, into which various environments are grouped, is introduced. Thus, a context profile is created for each domain, and two-dimensional (context-service pair) information is maintained for the user in each domain. For service recommendation, a domain is determined first. Then, a set of contexts that can be accessed in the determined domain is extracted, and a service is recommended based on a subset of that set. That is, a user model can be configured, through a learning process, using pairs of currently accessible contexts and their corresponding services. Hence, the situation information is not limited to information about a specified environment: contexts can easily be added to or removed from the situation information, and when a service recommendation is required, the service can be recommended based only on the accessible situation information.
While this invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.
Number | Date | Country | Kind |
---|---|---|---|
10-2008-0107149 | Oct 2008 | KR | national |