The Next Basket Recommendation (NBR) problem and the Next Basket Repurchase Recommendation (NBRR) problem are related, active research fields that deal with recommending items to users during an item selection session based on sequences of their prior selections. Previous solutions for these problems analyze ordered sequences of historical sets of selected items to generate recommendations.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A computerized method for providing item selection recommendations using prediction scores based on a user's selection cycle of an item is described. A set of filter weights is generated using a trained hypernetwork. The set of filter weights is specific to a user profile and an item (a user-item pair). Each filter weight of the set is indicative of a probability that the user profile will select the item at the associated time period. A prediction score is generated for the item using the set of filter weights and item selection history data of the user profile, including a time period at which the user profile last selected the item. A selection recommendation is then provided to a user interface associated with the user profile based at least in part on the generated prediction score during a current time period. Further, in some examples, the prediction score is based on item correlation data indicative of selection correlations between items for the user profile.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Corresponding reference characters indicate corresponding parts throughout the drawings. In
Aspects of the disclosure provide a computerized method and system for generating prediction scores and providing recommendations for item selection during an item selection session. The disclosure describes generating sets of filter weights for specific user-item pairs that are representative of the user's item selection cycle for that item over time. The sets of filter weights are used with the user's selection history data to generate item selection cycle prediction scores for the user. Those scores can be used to make recommendations to the user, or they can be combined with other predictive data to provide improved recommendations. For instance, the disclosure describes generating an item correlation matrix from the user's selection history data to represent the likelihood that the user will select a pair of items together. The item correlation matrix can be combined with the item selection cycle prediction scores as described herein to generate combined prediction scores that accurately predict user selection behavior.
The disclosure operates in an unconventional manner at least by using hypernetwork architecture and explicit timestamps of past selection data to train a model to predict next basket selections (where a basket is a group or set of items that are selected at substantially the same time or otherwise selected together during a selection session) based on a specific user's selection cycle (or buy cycle in examples where items are being bought) for individual items. Further, the disclosure combines this selection cycle modeling with an item correlation model that generates item correlation values in a matrix. By combining both the selection cycle model and the item correlation model, the disclosure performs efficient, accurate predictions of next basket selections that are individualized to specific users, enabling the providing of personalized recommendations to those users. Further, by using explicit timestamps, the disclosure enables the trained models to account for selection cycle patterns of users that are difficult or impossible to detect using implicit timestamps, as are used by many other prior art solutions.
Additionally, the disclosure specifically describes the use of hypernetwork architecture for modeling selection cycle behavior of specific users. The use of the hypernetwork to generate filter weights for use by the second network in generating item reselection predictions is an efficient alternative to other possible implementations in terms of both resource usage and time. Further, the use of the hypernetwork architecture enables the explicit timestamps to be used more efficiently and effectively than other possible implementations, thus overcoming challenges associated with methods used by prior art solutions (e.g., the use of explicit timestamps with recurrent neural networks is more challenging).
The NBRR problem may initially appear to be an easy task. However, different users exhibit different selection cycles for different items. For example, a user who is single probably has a longer selection cycle for toilet paper than a parent who buys for an entire family. However, both have similarly short selection cycles for bananas due to bananas' short shelf life. In the former case, the selection cycle is determined by the consumption rates of different users, whereas in the latter case, it is determined by the item's durability. Unfortunately, both users' metadata and items' durability are often unknown to the recommender system, and in any case, they are not the sole factors that determine reselection cycles. Hence, the main difficulty in NBRR stems from the need to estimate a different reselection cycle for every combination of user and item based on historical selections only. The disclosure addresses these differences by analyzing selection cycles at the user and item level and doing so in the context of explicit timestamps, such that differing selection cycles become apparent.
In some examples, the system 100 includes a computing device (e.g., the computing apparatus of
The selection cycle network 102 includes hardware, firmware, and/or software configured to generate selection cycle prediction scores 116 using a selection cycle hypernetwork 110 and item selection history data 114. In some examples, user vectors 106 and item vectors 108 are used by the selection cycle hypernetwork 110 to generate filter weight sets 112 that are applied to the item selection history data 114 to generate the selection cycle prediction scores 116, which are indicative of a likelihood that an item will be selected on a future day from the current time (e.g., the likelihood that a user will select an item in 2 days from today). In some such examples, the user vectors 106 and/or item vectors 108 are learned by the selection cycle network 102 during machine learning processes. Alternatively, or additionally, the vectors 106 and/or 108 are initialized from other vector representations of users and/or items (e.g., a vector representation of an item's metadata based on a belief that items with similar metadata would exhibit a similar selection cycle). The operations of the selection cycle network 102 are described in greater detail below with respect to at least
The item correlation network 104 includes hardware, firmware, and/or software configured to generate an item correlation matrix 124 that is indicative of relationships between each item pair of the item selection history data 118. In some examples, item data of the item selection history data 118 is processed using a first mapping function 120 and a second mapping function 122 to generate the item correlation matrix 124 as described herein. Additionally, in some such examples, the item correlation matrix 124 is configured to include item correlation values that are indicative of a likelihood that a pair of items will be selected (e.g., purchased) by the user together (e.g., during the same shopping or item purchasing session). Thus, in such examples, the item correlation matrix 124 includes an item correlation value for each pair of items that are represented in the matrix 124. These item correlation values can be combined to generate combined item correlation values, which can be used in combination with the selection cycle prediction scores 116 to generate the combined prediction scores 126 as described herein. The operations of the item correlation network 104 are described in greater detail below with respect to at least
Further, in some examples, the selection cycle prediction scores 116 are combined with the item correlation matrix 124 to generate combined prediction scores 126 as described herein. In some examples, the combined prediction scores 126 of multiple items are compared and one or more items are chosen to be recommended to a user profile 130 in a selection recommendation 128. In some such examples, the selection recommendation 128 is displayed to the user profile 130 via a user interface that enables the user profile 130 to then select one or more recommended items. Alternatively, in other examples, the selection recommendation 128 is provided to the user profile 130 in other formats and/or via other interfaces without departing from the description. For instance, in some examples, provided recommendations take the form of recommendations for coupons that anticipate the users' needs and/or reminders for potentially forgotten items.
In some examples, the user profile 130 includes data that identifies a particular user and the user profile 130 is associated with data that is indicative of the user's behavior on a website, on a platform, and/or in other contexts, such as the item selection history data 114 and/or 118. In some such examples, the item selection history data 114 and 118, and other data that is specific to a user, is linked to the user profile 130 of that user. Further, in some examples, the user profile 130 includes security information, such as a password or other security measures, that is configured to enable the user with which the user profile 130 is associated to use the user profile 130 and to prevent other users from using the user profile 130. Additionally, or alternatively, in some examples, the user profile 130 is required and/or otherwise used to make item selections (e.g., on a website of a store that enables the user with the user profile 130 to make item selections that are then associated with the user profile 130 in item selection history data 114). The user profile 130 is configured to enable the user to “sign in” to the user profile 130 and then interact with user interface(s) that are associated with the user profile 130 to make item selections, view selection recommendations, view item selection history data 114 and/or related data, or the like. Still further, in some examples where a user-item pair is used to represent a relationship between a specific user and a specific item, the user profile 130 and/or a user identifier associated with the profile 130 is used to represent the user in that relationship, such that the user-item pair includes the user profile 130 and an identifier of the specific item in the pair.
In some examples, the system 100 is configured to operate as or otherwise include a hyper-convolutional model for predicting when items will be selected again (e.g., repurchased) which employs a hypernetwork architecture (e.g., the selection cycle hypernetwork 110). The hypernetwork 110 is configured to generate the filter weights as described herein and the second network (e.g., the selection cycle network 102), which is a 1-dimensional convolutional neural network in some examples, makes the ultimate prediction in the form of the selection cycle prediction scores 116. In such examples, the use of hypernetwork architecture provides an efficient alternative to neural attention architecture.
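A hypernetwork of the kind described above can be sketched as a small network that maps a user-item pair to the weights of a one-dimensional filter. The following Python sketch is illustrative only: the embedding sizes, the hidden-layer width, the ReLU activation, and the filter length `T` are all assumptions for the example rather than values specified by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; real embedding sizes and filter length are
# configured or learned during training.
D_USER, D_ITEM, T = 8, 8, 14  # T = look-back window length in days
W1 = rng.normal(size=(D_USER + D_ITEM, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, T))
b2 = np.zeros(T)

def hypernetwork_filter(user_vec, item_vec):
    """Map a (user, item) pair to T filter weights, one per past day.

    Each weight is read as the contribution of a selection m days ago to
    the likelihood of reselecting the item now; weights may be negative
    (e.g., reselection is unlikely right after a purchase).
    """
    x = np.concatenate([user_vec, item_vec])
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer (an assumption)
    return h @ W2 + b2                # unconstrained weights, can be < 0

w = hypernetwork_filter(rng.normal(size=D_USER), rng.normal(size=D_ITEM))
print(w.shape)  # (14,) -- one weight per day of the look-back window
```

In a trained system, `W1`, `b1`, `W2`, and `b2` would be fit by the optimization described later, and the resulting filter would be convolved with the user's explicit-timestamp history.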
Further, in some examples, the system 100 is configured to account for users' repurchasing behavior, including repetitiveness and loyalty to an item or brand, quasi-stationarity of users' preferences, selection cycles, and item correlations. Repetitiveness and/or loyalty is a notable behavioral pattern with respect to repurchasing items in that users tend to prefer to repeatedly select the same items rather than switch to another brand or an otherwise alternative item. Switching does occur occasionally, but “loyalty” to previously selected items tends to be a much more dominant trend. The strength of such loyalty behavior in predicting reselections can also depend on the types of selections being made (e.g., loyalty is a very strong predictor with respect to online grocery purchases).
Quasi-stationarity of users' preferences includes selection behavioral patterns such as seasonal changes in purchasing of items. Such seasonality or otherwise drifting of the repurchasing behavior can be caused by the item itself (e.g., whether the item is available or not), the user (e.g., a user's dietary changes or need to cut expenses), or some common confounder (e.g., an upcoming holiday or sporting event). For instance, an example of item-related quasi-stationarity is the seasonality of some fresh produce which significantly affects the popularity of the produce at the peak of the season, while availability is significantly reduced when off-season. These behaviors can be observed by looking at selection histories of users using explicit timestamps, which enable seasonality and other similar patterns to be visualized and/or analyzed. In some examples, the system 100 is configured to account for such patterns when detected in the selection histories of users (e.g., changing recommendations for a first type of produce to a second type of produce as the first type of produce goes out of season).
Selection cycles are behavioral patterns that users exhibit as they shop multiple times over a time period. Although users tend to exhibit “loyalty” toward specific products, as discussed above, they do not necessarily select these items in every visit. A user's selection cycle for an item represents the frequency with which they select that item over time. For some items (e.g., bread) a user has a weekly selection cycle, indicating that they tend to purchase the item once a week (and sometimes on the same day each week). For other items, longer or shorter selection cycles are identified and used by the system 100 (e.g., a user has a longer selection cycle for buying paper towels than for bread). Such selection cycles depend on the durability and/or perishability of such products, the quantity in which they are selected, and/or other factors. Further, in some examples, the determined selection cycles are specific to users (e.g., a user that purchases groceries for a family has a shorter selection cycle for many items than a user that purchases groceries for only themselves).
Item correlations are behavior patterns that users exhibit when they tend to select multiple items at the same time (e.g., users tend to select pasta and tomato sauce at the same time). Item correlations may also be called “frequently co-occurring items”. The use of such item-to-item relationships as determined by the item correlation network 104 improves the predictions made by the system 100 as described herein. Such item correlations are determined from selection history data and can arise from sets of items that are all part of a recipe, sets of items that are common at specific events, such as picnics, or the like. Further, some correlations do not stem from a specific need or intent of the user. Instead, some correlations are related to the context of the user's selection session or visit. For example, weekly selection sessions may include large coalitions of correlated items which are not related to a specific need or intent, such that these types of correlations can only be observed at the level of a single user. Additionally, or alternatively, in some examples, some pairs of items even exhibit negative correlations, such that the selection of one item makes it less likely that the other item will also be selected (e.g., similar products from different brands or the like).
In some of the description below, formula notation is used to represent the relationships between various elements of the described systems and processes. In examples where such notation is used, let U={u}u=1N
Further, let Hu be the set of all items that appear in the selection history of a user u, i.e., Hu:=∪r=1B
Additionally, or alternatively, in some examples, a user's selection history (e.g., item selection history data 114 and/or 118) is denoted in at least one of two different ways. The first representation is based on implicit timestamps. This representation is Huim∈N
where Huex[i,tuj]=1 if an item i was selected on the tujth day (in basket buj) and zero otherwise. In some examples, in both cases, the datasets represented by H are lists of bits in which each bit is associated with a timestamp (either a basket as an implicit timestamp or a datetime as an explicit timestamp) and the value of each bit indicates whether an item i was selected at that timestamp. It should be understood that the explicit timestamp representation Huex is considered sparse because a user may go multiple days without making any selections, resulting in entire columns of the data set being all zeroes.
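The two history representations can be illustrated with a small sketch. The basket contents, item count, and day count below are hypothetical; the point is that the implicit representation indexes columns by basket order, while the explicit representation indexes columns by actual day, leaving all-zero columns for days with no selections (the sparsity noted above).

```python
import numpy as np

# Hypothetical example: 3 items, a 10-day window, and two baskets given
# as (day index, set of selected item ids).
baskets = [(2, {0, 1}), (9, {1, 2})]

num_items, num_days = 3, 10
H_im = np.zeros((num_items, len(baskets)), dtype=np.int8)  # implicit
H_ex = np.zeros((num_items, num_days), dtype=np.int8)      # explicit
for b, (day, items) in enumerate(baskets):
    for i in items:
        H_im[i, b] = 1    # column = basket order only
        H_ex[i, day] = 1  # column = the actual day of selection

print(int(H_ex.sum()))  # 4 -- all other day-columns remain zero
```

The implicit matrix `H_im` cannot distinguish baskets one day apart from baskets a week apart, while `H_ex` preserves the gap between days 2 and 9.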
Other solutions use an ordered sequence of historical selection sessions or visits (e.g., an implicit timestamp representation) but ignore the exact timestamps of the selections. Such solutions ignore the time that passed between two consecutive selection sessions and, as a result, are oblivious to whether a selection session occurred on the previous day or two weeks ago. In contrast, the described solution uses the exact timestamps of selection sessions, enabling it to better distinguish selection cycles that cannot be detected otherwise. Many users make repeat selections based on the timing of such selection cycles, resulting in periodic patterns of reselection likelihood (e.g., see the illustrated filter weight set 212 in
The illustrated filter weight set 212 displays three phenomena that demonstrate the importance of modeling selection cycles based on explicit timestamps as described herein. First, a periodic pattern emerges that represents users' tendencies to shop at regular intervals, such as weekly intervals, bi-weekly intervals, or the like. Second, the periodic patterns exhibit a time-decay, indicating that prediction of reselections of recently selected items should be prioritized. Third, reselection patterns include portions where the reselection likelihood is negative immediately after the item was selected (e.g., a user is very unlikely to reselect an item in the three days following a previous selection of the same item).
In an example, T denotes a hyperparameter indicating the length of time that is used by the models (e.g., the length of time from which historical data is used by the models). The hypernetwork 210 is configured for a personalized filter weight prediction function that predicts a 1-dimensional filter that matches a personalized selection cycle (e.g., the cycle in which the user of the user vector 206 reselects the item of the item vector 208). In this example, this hypernetwork 210 function is parameterized by θf and denoted by fθ
Further, in this example, user collaborative filtering (CF) representations are represented by Ψ={ψu}u=1N
(e.g., item selection history data 214), each item in the catalog is used with the hypernetwork 210 function to generate the corresponding filter weights {wu,i=fθ
where {tilde over (S)}u[i, n]=Σm=1THuex[i,n−m]wu,i[m]. In some examples, the input is padded such that {tilde over (S)}u has the same dimensions as Huex. An nth column of {tilde over (S)}u is denoted as {tilde over (s)}un, which represents the prediction scores for the selection of each item from the complete set of items on the nth day. Note that {tilde over (s)}un[i]>0 only when the item i is present in the user's selection history. It should be understood that, in some such examples, {tilde over (S)}u is a 2-dimensional data structure that stores prediction scores associated with a user u for items on specific days or other timestamps, wherein each row is associated with a specific item and each column is associated with a specific day or other timestamp. {tilde over (s)}un[i] represents a prediction score for an item i on a specific day n.
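The 1-dimensional convolution above can be sketched directly from its definition. The loop-based form below is a minimal illustration, not an optimized implementation; the filter values and history in the usage example are hypothetical.

```python
import numpy as np

def cycle_scores(H_ex, W, T):
    """Selection cycle scores: S[i, n] = sum_{m=1..T} H_ex[i, n-m] * W[i, m-1].

    H_ex: (num_items, num_days) binary explicit-timestamp history.
    W:    (num_items, T) personalized filter weights, one row per item.
    """
    num_items, num_days = H_ex.shape
    S = np.zeros((num_items, num_days))
    # Left-pad the history with T zero-days so S matches H_ex's dimensions.
    padded = np.concatenate([np.zeros((num_items, T)), H_ex], axis=1)
    for n in range(num_days):
        for m in range(1, T + 1):
            S[:, n] += padded[:, T + n - m] * W[:, m - 1]
    return S

# Usage sketch: an item bought on days 0 and 7, with a filter peaking at a
# lag of 7 days, scores highest exactly one week after each purchase.
T = 7
H_ex = np.zeros((1, 15)); H_ex[0, 0] = H_ex[0, 7] = 1
W = np.zeros((1, T)); W[0, 6] = 1.0  # weight for selections 7 days ago
S = cycle_scores(H_ex, W, T)
```

Here `S[0, 7]` and `S[0, 14]` are high (one week after each purchase) while scores in between stay at zero, matching the weekly-cycle intuition.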
In an example, the item correlation network 304 is configured to leverage the pairwise correlations between items bought by the user as described herein. The mapping functions 320 and 322 are denoted by qθ
Further, in some examples, the matrix 324 is combined with the prediction scores {tilde over (S)}u as described above to generate the combined prediction scores 126. In some such examples, the predicted scores for a user u to reselect each item from the catalog on the nth day are given by the multiplication of the nth column of {tilde over (S)}u with the personalized item correlation matrix 324: sun=Aun{tilde over (s)}un.
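The combination step is a matrix-vector product: the personalized correlation matrix applied to the nth column of cycle scores. The toy catalog and correlation values below are hypothetical, chosen to show how a correlated item can gain a positive combined score even with a zero cycle score, and how a negative correlation can suppress a score.

```python
import numpy as np

# Hypothetical 4-item catalog; cycle scores for one day n.
s_tilde = np.array([0.9, 0.0, 0.4, 0.0])  # nth column of cycle scores

A = np.eye(4)       # personalized item correlation matrix (toy values)
A[1, 0] = 0.6       # item 1 is often selected together with item 0
A[3, 2] = -0.3      # item 3 is negatively correlated with item 2

# Combined prediction scores for day n: s = A @ s_tilde.
s = A @ s_tilde     # [0.9, 0.54, 0.4, -0.12]
```

Item 1 receives a combined score of 0.54 purely through its correlation with item 0, and item 3 is pushed negative by its negative correlation with item 2.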
In some examples, the parameters of the described models (e.g., the selection cycle hypernetwork 110, the selection cycle network 102, and/or the item correlation network 104) are optimized using machine learning techniques. For instance, in some examples, the mapping functions 320 and 322 (e.g., qθ
The average loss term for a user u is given by
and Θ is given by the solution to the following optimization problem: Θ*=arg min Σu=1N
At 402, a set of filter weights is generated for a user-item pair using a trained hypernetwork. In some examples, the generation of the set of filter weights includes generating sets of filter weights for each item that the user of the user-item pair has previously selected (e.g., see
At 404, a prediction score is generated for the item using the generated set of filter weights and based at least in part on item selection history data of the user. In some examples, generating the prediction score includes applying a 1-dimensional convolution between the selection history data of the item and the generated set of filter weights.
At 406, a selection recommendation of the item is provided to the user based on the generated prediction score during a current time period. In some examples, the current time period includes an item selection session (e.g., an online shopping session) in which the user is engaging. In some such examples, the recommendation of the item is displayed to the user via a webpage or other similar interface, providing the user a chance to select the recommended item for selection. In other examples, the recommendation is provided to the user in other forms and/or through other interfaces without departing from the description.
Further, in some examples, the set of filter weights includes filter weight values that are indicative of a probability that the user will select the item in the associated time period. In some examples, the length of the time period is one day, such that each filter weight is associated with a future day-long time period. However, in other examples, other lengths of time periods are used without departing from the description (e.g., a time period of six hours, one week, or the like).
Additionally, or alternatively, in some examples, new item selection history data for a user is received. In some such examples, the set of filter weights for the user-item pair is updated based on the new item selection history data and a new prediction score for the item is generated using the updated set of filter weights and/or the new item selection history data. Then, in some examples, a new selection recommendation for the item is provided to the user based on the new prediction score. Alternatively, in some examples, the new prediction score results in a different item being recommended to the user (e.g., the new prediction score of the item is less than a prediction score of the different item).
It should be understood that, while many examples described herein refer to item selection or purchase in the context of shopping sessions, in other examples, the described systems and methods are used to make predictions and/or recommendations in other contexts. For instance, in an example, the described systems and methods are used to provide travel destination recommendations based on past travel data. Alternatively, in another example, the described systems and methods are used to provide sales contact recommendations or predictions based on past sales contact data.
At 502, an item correlation matrix is generated based on the item selection history data of the user. In some examples, the generation of the item correlation matrix is based on the use of two mapping functions as described herein (e.g., see
At 504, a set of other selected items associated with the current item selection session is identified. In some examples, this includes identifying the items that are already in the user's online shopping cart or basket.
At 506, a combined item correlation value of the item is determined based on item correlation values of the item correlation matrix that are associated with items of the set of other selected items and the item of the user-item pair. In some examples, this includes determining the correlation data values associated with correlations between the item of the user-item pair and each item of the set of other selected items. Those determined correlation data values are combined to arrive at the combined item correlation value.
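One simple way to combine the relevant correlation values, used here purely for illustration, is to sum the candidate item's correlations with each item already in the basket. The matrix values, basket contents, and candidate item below are hypothetical.

```python
import numpy as np

# Hypothetical personalized correlation matrix over a 5-item catalog.
A = np.array([
    [1.0, 0.2, 0.1, 0.0, 0.3],
    [0.2, 1.0, 0.5, 0.0, 0.0],
    [0.1, 0.5, 1.0, 0.2, 0.4],
    [0.0, 0.0, 0.2, 1.0, 0.0],
    [0.3, 0.0, 0.4, 0.0, 1.0],
])

basket = [1, 4]   # items already selected in the current session
candidate = 2     # item of the user-item pair being scored

# Combine the correlations between the candidate and each basket item.
combined = sum(A[candidate, j] for j in basket)  # 0.5 + 0.4
```

The combined value (0.9 here) would then feed into the prediction score at 508 alongside the filter weights and selection history.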
At 508, the prediction score is generated based on the combined item correlation value and on the set of filter weights and the item selection history data. In some examples, the set of filter weights and the item selection history data are combined to obtain a prediction score as described above with respect to
In some examples, that prediction score is then used to provide a selection recommendation of the item as described above with respect to
At 602, a user vector and an item vector are learned, or otherwise generated, based on user-specific item selection history data (e.g., item selection history data 114). In some examples, the user vector and the item vector are initialized using external machine learning techniques.
At 604, the function of the trained hypernetwork is applied to the user vector and the item vector and, at 606, the set of filter weights is generated as a result of the application of that function.
In some examples, the trained hypernetwork is used to generate a set of filter weights for each item associated with a user, such that, for each user, there are a plurality of sets of filter weights, with one set of filter weights for each item (e.g., see
At 702, a first mapping function is applied to data of each item in the item selection history data to generate a first set of mapping results and, at 704, a second mapping function is applied to the data of each item in the item selection history data to generate a second set of mapping results.
At 706, the first and second sets of mapping results are combined to generate the item correlation matrix. In some examples, this process is done using an attention-like procedure. Further, in some examples, the first mapping function and the second mapping function are fully connected neural networks with single Leaky Rectified Linear Unit (Leaky-ReLU) activated hidden layers. In some such examples, parameters of the first mapping function and the second mapping function include a weight matrix and a bias vector which are trained during application of the first mapping function and the second mapping function to the data of each item in the item selection history data of the user.
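The two mapping functions and their attention-like combination can be sketched as follows. The dimensions, random initialization, and Leaky-ReLU slope are assumptions for illustration; in the described system these parameters would be trained, and the dot-product combination below is one plausible reading of the "attention-like procedure".

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_HID, D_OUT, num_items = 6, 16, 8, 5

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

# Two fully connected networks, each with a single Leaky-ReLU-activated
# hidden layer; the weight matrices and bias vectors are the trained
# parameters of the mapping functions.
Q1, q1 = rng.normal(size=(D_IN, D_HID)), np.zeros(D_HID)
Q2, q2 = rng.normal(size=(D_HID, D_OUT)), np.zeros(D_OUT)
K1, k1 = rng.normal(size=(D_IN, D_HID)), np.zeros(D_HID)
K2, k2 = rng.normal(size=(D_HID, D_OUT)), np.zeros(D_OUT)

def first_map(x):
    return leaky_relu(x @ Q1 + q1) @ Q2 + q2

def second_map(x):
    return leaky_relu(x @ K1 + k1) @ K2 + k2

item_data = rng.normal(size=(num_items, D_IN))  # per-item input features
# Attention-like combination: the correlation of items (i, j) is the dot
# product of the first mapping of i with the second mapping of j.
A = first_map(item_data) @ second_map(item_data).T  # (num_items, num_items)
```

Because the two mappings are distinct networks, the resulting matrix `A` is generally asymmetric, which permits the directional and negative correlations discussed earlier.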
The method 800 includes 802-806, which are performed for each item in the user's history data. In this way, a combined prediction score is generated for each item. At 802, a set of filter weights is generated that is indicative of the user's selection cycle for the associated item. In some examples, the generation of the set of filter weights is done using a trained hypernetwork as described herein.
At 804, a prediction score for the item is generated based on the set of filter weights and item selection history data. In some examples, the generation of this prediction score is performed in substantially the same way as described above with respect to at least method 400 of
At 806, a combined prediction score for the item is generated based on the generated prediction score and the item correlation matrix. In some examples, this combined prediction score is generated in substantially the same way as described above with respect to at least method 500 of
At 808, the item with the highest combined prediction score is selected and, at 810, a recommendation for the selected item is provided to the user. In some examples, the recommendation is provided to the user in substantially the same way as described above with respect to at least method 400 of
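The selection at 808 reduces to taking the highest-scoring item. The scores and item names below are hypothetical.

```python
# Hypothetical combined prediction scores for three candidate items; the
# recommendation at 810 is simply the highest-scoring one.
combined_scores = {"bread": 0.82, "milk": 0.64, "eggs": 0.31}
recommended = max(combined_scores, key=combined_scores.get)
print(recommended)  # bread
```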
The present disclosure is operable with a computing apparatus according to an embodiment as a functional block diagram 900 in
In some examples, computer executable instructions are provided using any computer-readable media that are accessible by the computing apparatus 918. Computer-readable media include, for example, computer storage media such as a memory 922 and communications media. Computer storage media, such as a memory 922, include volatile and non-volatile, removable, and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or the like. Computer storage media include, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), persistent memory, phase change memory, flash memory or other memory technology, Compact Disk Read-Only Memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, shingled disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing apparatus. In contrast, communication media may embody computer readable instructions, data structures, program modules, or the like in a modulated data signal, such as a carrier wave, or other transport mechanism. As defined herein, computer storage media do not include communication media. Therefore, a computer storage medium should not be interpreted to be a propagating signal per se. Propagated signals per se are not examples of computer storage media. Although the computer storage medium (the memory 922) is shown within the computing apparatus 918, it will be appreciated by a person skilled in the art, that, in some examples, the storage is distributed or located remotely and accessed via a network or other communication link (e.g., using a communication interface 923).
Further, in some examples, the computing apparatus 918 comprises an input/output controller 924 configured to output information to one or more output devices 925, for example a display or a speaker, which are separate from or integral to the electronic device. Additionally, or alternatively, the input/output controller 924 is configured to receive and process an input from one or more input devices 926, for example, a keyboard, a microphone, or a touchpad. In one example, the output device 925 also acts as the input device. An example of such a device is a touch sensitive display. The input/output controller 924 may also output data to devices other than the output device, e.g., a locally connected printing device. In some examples, a user provides input to the input device(s) 926 and/or receives output from the output device(s) 925.
According to an embodiment, the computing apparatus 918 is configured by the program code when executed by the processor 919 to execute the embodiments of the operations and functionality described. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
At least a portion of the functionality of the various elements in the figures may be performed by other elements in the figures, or an entity (e.g., processor, web service, server, application program, computing device, etc.) not shown in the figures.
Although described in connection with an exemplary computing system environment, examples of the disclosure are capable of implementation with numerous other general purpose or special purpose computing system environments, configurations, or devices.
Examples of well-known computing systems, environments, and/or configurations that are suitable for use with aspects of the disclosure include, but are not limited to, mobile or portable computing devices (e.g., smartphones), personal computers, server computers, hand-held (e.g., tablet) or laptop devices, multiprocessor systems, gaming consoles or controllers, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, mobile computing and/or communication devices in wearable or accessory form factors (e.g., watches, glasses, headsets, or earphones), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. In general, the disclosure is operable with any device with processing capability such that it can execute instructions such as those described herein. Such systems or devices accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, proximity input (such as by hovering), and/or via voice input.
Examples of the disclosure may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions, or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
In examples involving a general-purpose computer, aspects of the disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
An example system comprises: a processor; and a memory comprising computer program code, the memory and the computer program code configured to, with the processor, cause the processor to: generate a set of filter weights for a user-item pair using a trained hypernetwork, wherein the set of filter weights includes filter weights associated with each time period of a set of time periods after an item of the user-item pair was last selected by a user profile of the user-item pair, and wherein each filter weight is indicative of a probability that the user profile will select the item at the associated time period; generate a prediction score for the item using the generated set of filter weights based at least in part on item selection history data of the user profile and a timestamp at which the user profile last selected the item; and provide a selection recommendation of the item to a user interface associated with the user profile based at least in part on the generated prediction score during a current time period.
An example computerized method comprises: generating a set of filter weights for a user-item pair using a trained hypernetwork, wherein the set of filter weights includes filter weights associated with each time period of a set of time periods after an item of the user-item pair was last selected by a user profile of the user-item pair, and wherein each filter weight is indicative of a probability that the user profile will select the item at the associated time period; generating a prediction score for the item using the generated set of filter weights based at least in part on item selection history data of the user profile and a timestamp at which the user profile last selected the item; and providing a selection recommendation of the item to the user profile based at least in part on the generated prediction score during a current time period.
One or more computer storage media having computer-executable instructions that, upon execution by a processor, cause the processor to at least: generate a set of filter weights for a user-item pair using a trained hypernetwork, wherein the set of filter weights includes filter weights associated with each time period of a set of time periods after an item of the user-item pair was last selected by a user profile of the user-item pair, and wherein each filter weight is indicative of a probability that the user profile will select the item at the associated time period; generate a prediction score for the item using the generated set of filter weights based at least in part on item selection history data of the user profile and a timestamp at which the user profile last selected the item; and provide a selection recommendation of the item to the user profile based at least in part on the generated prediction score during a current time period.
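As a non-limiting illustration of the scoring flow summarized above, the following sketch shows a toy hypernetwork that maps a user-item embedding pair to a set of per-time-period filter weights, and a scoring function that reads out the weight for the time elapsed since the item was last selected. The architecture, embedding dimensions, and all names (e.g., `HyperNetwork`, `prediction_score`, `NUM_PERIODS`) are illustrative assumptions for exposition only, not the claimed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_PERIODS = 8   # assumed number of time periods tracked after a last selection
EMBED_DIM = 16    # assumed embedding size for user profiles and items


class HyperNetwork:
    """Toy stand-in for the trained hypernetwork: maps a (user, item)
    embedding pair to one filter weight per time period, where each weight
    is indicative of the probability that the user profile selects the
    item at that period after its last selection."""

    def __init__(self, embed_dim: int, num_periods: int) -> None:
        # Untrained random parameters; a real hypernetwork would be learned.
        self.w = rng.normal(scale=0.1, size=(2 * embed_dim, num_periods))
        self.b = np.zeros(num_periods)

    def filter_weights(self, user_emb: np.ndarray, item_emb: np.ndarray) -> np.ndarray:
        logits = np.concatenate([user_emb, item_emb]) @ self.w + self.b
        exp = np.exp(logits - logits.max())   # numerically stable softmax
        return exp / exp.sum()                # weights form a distribution over periods


def prediction_score(filter_weights: np.ndarray,
                     last_selected_period: int,
                     current_period: int) -> float:
    """Score for the current period: the filter weight associated with the
    time elapsed since the user profile last selected the item."""
    elapsed = current_period - last_selected_period
    if elapsed < 0 or elapsed >= len(filter_weights):
        return 0.0   # outside the tracked window: no cycle-based signal
    return float(filter_weights[elapsed])


# Generate filter weights for one user-item pair and score the current period.
hyper = HyperNetwork(EMBED_DIM, NUM_PERIODS)
user_emb = rng.normal(size=EMBED_DIM)
item_emb = rng.normal(size=EMBED_DIM)

weights = hyper.filter_weights(user_emb, item_emb)
score = prediction_score(weights, last_selected_period=3, current_period=6)
```

Items whose scores exceed a threshold (or the top-k items by score) could then be surfaced as selection recommendations; the score may also be combined with other signals, such as item correlation data, before ranking.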
Alternatively, or in addition to the other examples described herein, examples include any combination of the following:
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
Examples have been described with reference to data monitored and/or collected from the users (e.g., user identity data with respect to profiles). In some examples, notice is provided to the users of the collection of the data (e.g., via a dialog box or preference setting) and users are given the opportunity to give or deny consent for the monitoring and/or collection. The consent takes the form of opt-in consent or opt-out consent.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The embodiments illustrated and described herein as well as embodiments not specifically described herein but within the scope of aspects of the claims constitute an exemplary means for generating a set of filter weights for a user-item pair using a trained hypernetwork, wherein the set of filter weights includes filter weights associated with each time period of a set of time periods after an item of the user-item pair was last selected by a user profile of the user-item pair, and wherein each filter weight is indicative of a probability that the user profile will select the item at the associated time period; exemplary means for generating a prediction score for the item using the generated set of filter weights based at least in part on item selection history data of the user profile and a timestamp at which the user profile last selected the item; and exemplary means for providing a selection recommendation of the item to a user interface associated with the user profile based at least in part on the generated prediction score during a current time period.
The term “comprising” is used in this specification to mean including the feature(s) or act(s) followed thereafter, without excluding the presence of one or more additional features or acts.
In some examples, the operations illustrated in the figures are implemented as software instructions encoded on a computer readable medium, in hardware programmed or designed to perform the operations, or both. For example, aspects of the disclosure are implemented as a system on a chip or other circuitry including a plurality of interconnected, electrically conductive elements.
The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and examples of the disclosure may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.
When introducing elements of aspects of the disclosure or the examples thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. The term “exemplary” is intended to mean “an example of.” The phrase “one or more of the following: A, B, and C” means “at least one of A and/or at least one of B and/or at least one of C.”
Having described aspects of the disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.