Content display and delivery systems exist to provide users of computer devices with information and entertainment. Content comprises a large number of different kinds of presentational materials, including images and text. Content includes dynamic media such as weather and news updates, social media such as Twitter and Facebook, information such as email, and entertainment such as video. It is increasingly problematic for a user to efficiently and successfully navigate their way through this vast proliferation of content to receive and view only that which is relevant to them. This wastes a user's time and network resources, as well as local processing resources.
Recommendation engines aim to provide users with content that they will find interesting. This measure of relevance depends of course on the individual user; what one user finds interesting may not be attractive at all to another.
Such a system for recommending content is able to estimate how interesting an item of content is to a user by using a number of techniques. These might include looking at how a user has responded to other content in the past; fitting users into groups characterised by shared interests or other properties and using this to infer interest in particular items (i.e. inferring a microscopic trend from a macroscopic one); looking at what friends are watching and so on.
These concepts are clearly understood within the realm of recommendation systems.
An aspect of the invention provides a content delivery server configured to select from multiple content items a set of content items for display to a user at a user terminal, the content delivery server having: access to content identifiers identifying content items for delivery; and a processor operating a content selection program which is arranged to receive context data for different contexts and to select a set of content items in dependence on the context data, wherein the content items in the set vary with the context data, such that the content of items in a first set for a user in a first context is different from the content of items in a second set for the same user in a second context, and to transmit a recommendation message to the user terminal comprising a set of content identifiers.
Another aspect of the invention provides a computer device operating as a user terminal and comprising: a display for displaying content items to a user, at least one context sensor configured to sense a context of the user terminal and generate a context data item, a context collector configured to receive the at least one context data item and to generate context data, an interface for transmitting the context data to a content delivery server and for receiving a recommendation message from the content delivery server comprising a set of content item identifiers for content items selected based on the context data, wherein the display is operable to display the selected content items.
The invention also extends to a computer program product for implementing the methods and processes described herein, and a system comprising combinations of the computer devices/servers described herein.
The present disclosure recognises that the prior art approaches are limited by how the needs of an individual user are viewed. A user is not an immutable summary of their history, nor are they permanent members of any collection of groups or necessarily in tune with their friends.
A given user may sometimes enjoy sitcoms, but at other times prefer watching the news. They might tune into a YouTube channel for opinion based news programming around certain topics, but opt for a network TV channel for mainstream news.
Such a user may be typical of one collection of users when it comes to the movies they like, but be at odds with the same group when it comes to documentaries. When they are watching TV with the family, their choices might suggest one set of group memberships, while watching alone or with friends might suggest a different set. No user exists as a single consistent persona.
People behave inconsistently and variably depending on their mood, their environment, their history or just on a whim. In the present disclosure that collection of measurable or inferable properties that describe the situation of the user is defined as their Context—an instantaneous evaluation or snapshot of a user's circumstance.
When a recommendation system selects content it thinks will interest a user, it is therefore important to consider not just what the user has enjoyed before or what other similar users have liked, but also to track and utilise intelligence about how these tastes change with context. Doing so avoids the mistake of averaging out the user's changing preferences and enables the system to bring to light content that is exactly right for the moment rather than being modestly relevant.
Thus, since user preferences and tastes change with context, so too does the assessment of relevance. And since the level of interest is synonymous with relevance, the recommendations that result from it vary as well.
For a better understanding of the present invention and to show how the same may be carried into effect, reference will now be made by way of example only to the accompanying drawings.
In the present disclosure, recommendations are made based on the context of a user, wherein the context defines user behaviour and provides insight into the kind of asset which a user may prefer in that context. The context can be time of day, available time, location, device type, etc. Either or both of content and type of asset can vary with context.
Adapting Recommendation Systems for Context
Typically, any current engine that provides content suggestions, ranked by assessing relevance, can be described as follows.
R=f(L, p(U), h(U), n(U, A))
This can be read as: Recommendations R are a function f of the Library L of content available, the profile p(U) of the User, the history h(U) of the User, and the network n(U, A) of relationships that exists between the User and the entire Audience.
This declaration understates the complexity of what the network of relationships means in practice and says nothing about what the user history is or how it is compiled. Likewise, it makes no assumptions about exactly what is stored in the user profile.
Simplifying the definition in this way serves one purpose: it illustrates that it is necessary only to identify the user for whom recommendations are required in order to generate a result-set. There is no context input upon which to vary the response.
A context-sensitive recommendation system, however, could be described as follows.
R=f′(L, p(U, C), h(U, C), n(U, C, A))
Reading this through again, we can see that Recommendations are now a different function f′ of the Library of content available, the profile of the User given their current Context, the history of the User weighted for relevance to their current Context, and the network of associations appropriate to the current Context between the User and the full Audience.
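Purely by way of illustration, the contrast between f( ) and f′( ) can be sketched in Python as follows. The data shapes, the toy scoring rule and the helper names are assumptions introduced for this example only and are not the recommendation engine described herein.

    # Illustrative sketch only: the same scoring core used with and without context.
    def score(item, interests, history, network):
        # Toy relevance: tag overlap with the user's interests, boosted by how
        # often similar items appear in their history and social network.
        tags = set(item["tags"])
        s = len(tags & set(interests))
        s += sum(1 for h in history if tags & set(h["tags"]))
        s += sum(1 for n in network if tags & set(n["tags"]))
        return s

    def recommend(library, interests, history, network, top=3):
        # R = f(L, p(U), h(U), n(U, A)): no context input at all.
        ranked = sorted(library, key=lambda i: score(i, interests, history, network),
                        reverse=True)
        return ranked[:top]

    def recommend_in_context(library, profile, history, network, context, top=3):
        # R = f'(L, p(U, C), h(U, C), n(U, C, A)): the same core, but each input
        # is filtered or weighted for the current context before scoring.
        interests = profile.get(context, profile.get("default", []))
        ctx_history = [h for h in history if context in h.get("contexts", [])]
        ctx_network = [n for n in network if context in n.get("contexts", [])]
        return recommend(library, interests, ctx_history, ctx_network, top)

Sketched this way, an existing scoring core (recommend) can remain untouched while the context handling is layered around it, which is the possibility discussed further below.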
Certainly, the first and simpler statement could describe a system that tracks which recommended items receive positive feedback by time of day and uses this to inform future requests, but as we have already seen, time of day is just one element of context. After all, users do not always do the same things at the same time every day.
What is important about the second statement is not simply that it gives the engine the ability to track a user's feedback to recommendations according to their context. It is also an opportunity to know the context under which recommendations are required in the first place.
The two statements above, which have been construed by the inventors to define the difference between non-context-based and context-based recommendations, share some similarities. While f( ) and f′( ) are defined to be different functions, it is possible that they could in fact be the same function. They both accept the same kinds of data, albeit data that has been differently filtered and weighted first.
In the following description a custom-made context-based recommendation engine is described, but it will be apparent from this analysis that it would be possible to wrap or modify an existing engine. There is an advantage in that existing engines have a wealth of sophistication to match content with users. This can be retained while the additional context handling is layered on top of it.
Few recommendation systems offer this kind of direct access to their matching routines by default. However, if this access were permitted, it would be possible to adapt the surrounding functionality to feed in the required data, suitably weighted, filtered and modified according to context. This would allow the core content scoring and matching functions to remain largely unchanged.
One aim of the concepts described herein is to provide a new navigation paradigm that breaks from the traditional one. For VOD (video-on-demand) content the traditional paradigm is typically static poster images with associated metadata, and for linear channels (LC) it is a traditional EPG (electronic programming guide).
According to the concepts described herein, the new navigation paradigm is based on users navigating using a mosaic of tiles with video playback. The layout of the tiles can vary depending on the available screen size. For example, a 3×3 or 2×2 layout could be provided for tablets, and a 4×1 layout for smartphones. For VOD content, the video displayed in the tile could be the first 60 seconds of the video displayed in a loop, and for live content it could be the actual live signal on an ongoing basis. Other features can be incorporated, such as email, chat, social media feeds like Facebook, video, etc.
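As a hedged illustration, the choice of tile layout could be driven by device class and screen size roughly as follows; the class names, sizes and thresholds are assumptions for the example only.

    # Illustrative only: pick a tile grid from the device class and screen size.
    def tile_layout(device_class, screen_inches):
        if device_class == "smartphone":
            return (4, 1)                     # 4x1 strip for small screens
        if device_class == "tablet":
            return (3, 3) if screen_inches >= 9 else (2, 2)
        return (4, 4)                         # TVs and larger displays

    print(tile_layout("tablet", 10.1))        # -> (3, 3)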
Another aim of the concepts described herein is to provide an architecture which constitutes an “ecosystem” for a service provider. At present, different kinds of devices run different kinds of operating systems, and therefore any provider of content is dependent on providing content to a number of different devices. Aggregation of content such as email, weather updates, social updates and other forms of dynamic media is possible at the device, governed by the OS of the particular device. In contrast, with the principles described herein, an application is provided which runs on top of the operating system and provides a complete aggregation of content and a display function based on recommendations from a server. The server also governs how content is displayed at the client device, by not only recommending content items but also delivering the content items to the device with an associated ordering. That ordering can be interpreted differently at different devices depending on the display resources. The content items include not only dynamic media, but other assets such as short-form and long-form video assets, including video on demand (VoD) and linear channel (LC) assets. The server can also receive content from different content sources, and these content sources can both drive recommendations made by the server and deliver assets directly to the device. This allows the service provider to “shape” the content which is delivered to a user, as it gains some control over the aggregation of the content sources as well as over the content sources themselves and the recommendations based on them.
According to another feature described herein, tiles for a specific user can be based on a recommended set of videos or channels based on user preferences and history. Recommended content is displayed in a manner dependent on the consuming device.
The app described herein introduces a new User Interface (UI) style using tile based navigation and presenting highly personalised content to the user. This content can take the form of video, written word, and potentially music. In these embodiments, video is the main focus, with support from news articles and social media feeds.
While displayed items are the main focus, the concepts described herein extend to audible output such as voice delivered email and music.
The main page of the application is a trending topics page. This page presents topics of interest to the user. Each is presented as a still image with a title and subheading. The user has the ability to enter pre-defined topics into a list in their context settings. The topics are also personalised using information from the user's Facebook feed, Twitter feed, their location, time of day, etc. Initial information gathering can be done through access to Facebook, Twitter, etc.
Once a topic has been selected, a user is presented with a number of items of content relating to that topic arranged in a tile formation. The arrangement of these tiles can be specified, and numbered 1 to 9, for example. The number of tiles the application is capable of presenting is also dependent on the size and resolution of the screen being used. On a television there may be 9 items presented, with videos running concurrently. On a mobile phone there may be 1 item presented, though in both cases more content will be visible by scrolling to the right (or in any other direction).
It is intended for the application to be highly personalised. A user will have specific input, but certain elements will be learnt by the application, for example the schedule of the user and their viewing habits at different points in the day. The user may only ever want to read the news at breakfast, watch YouTube videos at lunch, and watch a movie after dinner. The app will respond by suggesting content on topics of interest within these parameters. Of course, a mixture of all types of content can be presented at any time of day. User feedback can take the form of a “don't like now” button which allows a user to defer a recommended action to another context. Existing recommendation engines allow their decision logic to be affected by both positive and negative feedback from users. However, because existing engines lack context awareness, there is no real concept of a user being able to respond to a recommendation that they like but which they don't like right now. This is distinct from watch-list and favourites features, which require a user to decide when to pull things out of these lists.
The deferral of a recommendation is something different—it's the ability of the user to say that they like something but would prefer it in another context. The result of this signal in the described embodiment is that the recommendation engine reschedules the item for when the user is next in that context and adjusts its decision logic so that future similar items are similarly targeted.
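One possible sketch of how such a deferral signal could be handled is given below; the queue structure and the names used are assumptions for illustration only, not the described engine's implementation.

    # Illustrative sketch of the "don't like now" deferral signal.
    from collections import defaultdict

    class DeferralQueue:
        def __init__(self):
            # items the user liked but deferred, keyed by their preferred context
            self.deferred = defaultdict(list)

        def defer(self, item, preferred_context):
            # the user says: "I like this, but not in my current context"
            self.deferred[preferred_context].append(item)

        def resurface(self, active_contexts):
            # called whenever the context engine reports the active contexts;
            # returns any deferred items whose preferred context is now active
            due = []
            for ctx in active_contexts:
                due.extend(self.deferred.pop(ctx, []))
            return due

    queue = DeferralQueue()
    queue.defer({"title": "Nature documentary"}, "at_home_evening")
    print(queue.resurface(["at_home_evening"]))  # resurfaces in the right context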
The form of the content can be described as long-form (movies, longer television programs), short-form (YouTube clips, etc.), or articles. All content can be ordered by type within the tile view. This could be done by the user or the provider. This does not change the specific content presented, simply the type. There is no content from Twitter or Facebook presented at this point, though hashtags etc. from the user's feeds will be used to determine the content presented. Once a specific video has been selected, the feed information relating to its content can be presented via a screen separation to the right or in any other direction. It is then possible to move to a related article presented in the news feed, from the video the user was previously watching. In addition, dynamic content such as weather updates or social media can be provided. Tiles can display Twitter/Facebook, etc., e.g., latest tweets from known contacts or reminders about accepted Facebook events.
Notifications will be given when new topics of interest become available. This could be as a result of a breaking news event for example. The notification will appear against the reload button in the top left-hand corner of the screen. In one example, it resembles a small blue speech bubble with the number of notifications within it. Reloading will add this topic to the trending topics page. Articles can be presented alongside the news event. The article becomes full screen text on selection.
The layout could be mirrored from a smaller handheld device used for selection (mobile or tablet), while using a larger device to view the same layout of selections (TV, computer). It could then be possible to watch an item of content on the larger device, while continuing to browse content on the hand-held device, e.g. articles and feeds. There is an option for motion gestures (some kind of swipe, two fingers or pinch) to then move the content of choice from the hand-held device to the main screen for viewing, in place of the currently displayed content.
It is also intended for motion to be used as a discriminator for current activity. The pattern and speed of movement of the user could be used to determine whether they are driving, on a train, running, or walking. Relevant content would then be presented. These would vary from each other greatly, as one may not wish to watch a long form video on a bus trip, but may on a train journey. Also a user would not be able to view content while driving, but may wish to listen to music or news, have an article read to them, or listen to the commentary of a sports event.
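For illustration, such a discrimination could be approximated from average speed and step rate along the lines sketched below; the thresholds and category names are assumptions only.

    # Illustrative only: classify the current activity from coarse motion metrics.
    def classify_activity(avg_speed_kmh, step_rate_hz, track_is_regular=False):
        if step_rate_hz < 0.3 and avg_speed_kmh > 15:
            # conveyed rather than walking; a very regular track may suggest rail
            return "train" if track_is_regular else "driving"
        if step_rate_hz > 2.5:
            return "running"
        if step_rate_hz > 0.5:
            return "walking"
        return "stationary"

    print(classify_activity(70, 0.0, track_is_regular=True))   # -> "train"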
Metadata could contain cues for display of promoted items within the content being viewed. For example the user is watching James Bond, and an advert for the watch he is wearing appears. The cues within the metadata could also be filtered depending on the personalisation of the application.
A user can “roll forward” a clock. This would be useful in certain instances. For example the user wishes to choose or browse content they may view in the evening in advance, either out of curiosity or to make a selection beforehand. This would avoid the introduction of an anomalous event within their schedule, which could potentially jeopardise the previously learnt schedule. The same may occur if the user is ill, and therefore not at work as usual, or on holiday.
The audio of the concurrently playing videos displayed within the tile view can be controlled. A swiping motion up/down across the face of any tile controls that tile's audio volume. This allows a user to view one item while listening to another, which is particularly useful if viewing content on a television while browsing on another device, or where content has audio deemed not to be desirable, e.g. sports commentary.
The application can allow control of the ‘ecosystem’ of a household service provider who already provides a content based service to that household (or community of users).
The user terminal 4 is labelled “Device 1”. A user 35 may own multiple devices, which are indicated in
In some of the examples described herein, the system is capable of delivering context recommendations based on the type of device that a user is currently logged in to.
The user 35 has a profile 36 in the user profile 30. In this user profile are stored preferences and other information about the user 35 to allow recommendations to be made based on information personal to that user. In the present system, the user can set up individual sub-profiles 36a, 36b, 36c, etc. which allow him to have different preferences in the different situations that he may find himself in. This means that recommendations based on the user sub-profiles could vary even for the same user when that user is in different settings. It will readily be appreciated that a single user is being discussed, but in practice the system operates with a large number of different users, where all users have profiles and sub-profiles set up for them respectively. Only a single profile and its sub-profiles are shown in
In addition to providing recommendations based on device type, the system provides recommendations based on other context parameters including location, time and available time as will become evident from the examples discussed later.
The multiple content sources 14 to 22 are also accessible to the user terminal 4 itself as denoted by the various arrows. The purpose of these connections is to allow the user terminal 4 to access content from the multiple sources 14 to 22 when invited to do so on the instructions received from the control server 2. Thus, these sources operate in two ways. Firstly, they provide content to the data aggregator 12 for driving the recommendation engine 10, and secondly they provide content items for display to a user at the user terminal, when they are recommended to the user terminal.
The context engine module 24 influences the recommendation engine so that the recommendations are based on the context of a user. The context of a user is perceived here to govern the behaviour of a user and therefore to affect their likely preferences for engaging with content. The likely context based preferences for a user can be determined by monitoring historical behaviour of a user, or can default to certain conditions based on information about the user, for example, in his user profile. A user can set or override context parameters associated with the context engine module 24 should they wish to do so. The context engine module 24 also influences the recommendation engine to define the number n and type of assets to be recommended to a user, based on context.
The user device 4 executes a client application 38 which cooperates with the context engine 24 to deliver context-based recommendations. The client application 38 comprises the following components:
Layout manager 380,
Feed reader 382,
Email adaptor 384,
Facebook service 386,
Event manager 388,
Render engine 390,
Twitter service 392,
Location service 394,
Notification service 396, and
Analytic service 398.
The client device 4 also includes an accelerometer 400 and has the following software components installed: Facebook app 402, Twitter app 404, notification manager 406, native video player 408, and location manager 410. In addition to the data aggregator 12, the server 2 includes a stream adaptor component 502. The adaptor component 502 includes a YouTube adaptor, Facebook adaptor, Google News adaptor and Twitter adaptor. Although not shown in
The adaptor component 502 operates according to the common adaptor principle. Data from a wide range of disparate sources is used by the system. In order to deal with this variety of sources, the interfaces which are presented are generalised so that the system need only be aware of one type of interface. This interface contains a superset of possible data structure options to accommodate each type of data likely to be communicated over it. When a new data source is added to the system, gaining access to this data is then a matter of creating a wrapper around the data source to conform it to this common interface. Once in the system, data received from such a source can be weighted, analysed, recommended, rejected, prioritised, etc. using the same functions and processes as every other piece of data.
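A minimal sketch of the common adaptor principle is given below; the fields of the common structure and the wrapped sources are assumptions chosen for illustration, not the actual interface.

    # Illustrative sketch: every source is wrapped so that the rest of the system
    # only ever sees one generalised item structure.
    def common_item(source, item_id, title, url=None, text=None, tags=()):
        # a superset of fields; unused ones simply stay empty
        return {"source": source, "id": item_id, "title": title,
                "url": url, "text": text, "tags": list(tags)}

    class YouTubeAdaptor:
        def wrap(self, raw_items):
            # 'raw_items' stands in for whatever the real video API returns
            return [common_item("youtube", r["id"], r["title"], url=r.get("url"))
                    for r in raw_items]

    class TwitterAdaptor:
        def wrap(self, raw_tweets):
            return [common_item("twitter", t["id"], t["text"][:40], text=t["text"])
                    for t in raw_tweets]

    # Downstream weighting, analysis and recommendation code can then treat every
    # source identically.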
The content delivery system is capable of compiling video snippets based on various context parameters including: location; time (possibly short-form in the day and long-form in the evening); device (flat screen TV, laptop, mobile device); and available time (that is, the time available to a user to engage with particular content). The terms short-form and long-form define different types of assets—other types of content include news articles, linear news and social content. As mentioned above, different types of assets can be stored in the asset server 6, or available from the multiple sources 14 to 22. In addition, other assets can be available from different sources (not shown), for example, static news articles. Herein, the term “content” refers to any type of images or text displayed to a user; a content item is a piece of content. The term “asset” is used herein to denote video assets and also other types of content items without limitation.
Thus, the content, type and number of the recommended assets varies with context.
A user may add his own source of content, subject to permission from the service provider.
Reference will now be made to
There are two parts: a client side part installed on the consumer's device 4 within our ecosystem app 38, and a server side part embodied in the module 24.
The Context Engine System (CES) (which includes both parts) is designed to provide a list of contexts within which it believes a given user exists at any particular moment.
Because the CES cannot know for sure what context a user is in, it provides its assessment as a list of probabilities. Any context assessed with a sufficiently high probability is considered to be ‘active’ for that user. Users can be in more than one context at once: for example, they could be at home and with family; or, at work but about to go on vacation; or, at a bar with work colleagues etc.
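For illustration, the assessment could be represented as a probability per context with a threshold defining which contexts are considered active; the threshold value and the context names are assumptions only.

    # Illustrative only: probabilities per context, with a threshold for 'active'.
    ACTIVE_THRESHOLD = 0.6                    # assumed value

    def active_contexts(probabilities, threshold=ACTIVE_THRESHOLD):
        # a user can be in several contexts at once
        return [ctx for ctx, p in probabilities.items() if p >= threshold]

    assessment = {"at_home": 0.85, "with_family": 0.70, "commuting": 0.05}
    print(active_contexts(assessment))        # -> ['at_home', 'with_family']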
A user always has visibility of the contexts the CES thinks they are in, as shown by the oval context display component 50 which shows context data to a user on the display 46. This presentation also gives the user the option to correct their context. Let's say the CES had thought they were at home enjoying some leisure time, but actually they are working from home; or they're on a business trip rather than a holiday. A user can engage with the display through a user interface (UI) touch screen, mouse, etc. to adapt their context.
The Context Engine logic 52, 54 is present within the consumer app 38 as well as the server so that the app is able to determine context even if there is limited access to the Internet. The whole idea of the ecosystem context is to make the app valuable to users. One way is to reduce its bandwidth footprint when on holiday using expensive cellular data.
The ‘Context Collection Agent’ 54 is a software service that resides within the consumer app 38, on the device 4, which collects information and intelligence from the sensors available to it. Some example sensors are shown, including a device sensor 56, location (GPS) 58, Bluetooth 80, Wi-Fi 62, motion sensors 64, and an ambient light sensor 66.
The Context Collection Agent does not simply record the raw data arising from these sensors but performs some basic calculations on it. The device sensor 56 provides local information about the device, e.g. the device type and its current time zone. For example, the agent tracks changes in time zone from the ‘Device’ and records this change as a significant event.
Likewise, it summarises rates of change of motion from the motion sensor to determine whether it believes the user is walking or being conveyed in some way.
Similarly, changes in WiFi network name, the security settings of a network, the rate of movement amongst local Bluetooth devices are all metrics to be tracked beyond the raw data any of these sensors provide.
This is what the Context Collection Agent collects and sends to the server side component Context Collector 70 whenever a network connection exists to do so.
It also makes this information available directly to the local Consumer App Context Engine 52.
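A sketch of the kind of derivation the Context Collection Agent performs is given below; the event names and thresholds are assumptions for illustration rather than the actual implementation.

    # Illustrative only: derive significant events from raw sensor readings
    # instead of forwarding the raw data itself.
    def derive_events(previous, current):
        events = []
        if previous["timezone"] != current["timezone"]:
            events.append(("timezone_change", current["timezone"]))
        if previous["wifi_ssid"] != current["wifi_ssid"]:
            events.append(("wifi_change", current["wifi_ssid"]))
        if abs(current["accel_rms"] - previous["accel_rms"]) > 1.0:   # assumed threshold
            moving_on_foot = current["accel_rms"] > 1.5
            events.append(("motion_change",
                           "on_foot" if moving_on_foot else "conveyed_or_still"))
        return events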
The Context Collector 70 acts as a data collection endpoint for all users' context information. It is used by the server side service Server Context Engine 72 when it performs more detailed context assessments, as well as by a Context Summarisation Service 74.
The Context Summarisation Service 74 takes all the data collected about all users and summarises it into recognisable groups and patterns.
In this way, anonymised patterns can be used by the Server Context Engine 72 to decide whether a particular user's context information is a better match for one behaviour or another when calculating its probability list for them.
Different users commute at different times, for example. The Context Summarisation Service 74 will look at motion, GPS, pedometer and time of day information and summarise patterns for distinct groups of users. This information is used by the Server Context Engine 72 to fine tune its assessments.
Similarly, appropriate summary data sets will occasionally be provided to the consumer app so that it can use them to make rapid context assessments if it finds itself bandwidth constrained. Appropriate summary data sets are those which the server believes best match a user's typical behaviour, which the Consumer App Context Engine 52 can use to make a best effort assessment while it waits for a better assessment from the server.
The Server Context Engine is a more functional version of the Consumer App Context Engine. It is able to perform more detailed analysis of a user's context inputs before making a determination of what it believes are the most probable contexts within which the user finds themselves. It has full access to anonymous data sets from the Context Summarisation Service 74 with which it can compare its assessments for a given user and adjust according to expected behaviours.
The Consumer App Context Engine is a pared down version of this, capable of operating on a handheld device or set top box (STB). It uses information provided directly by the Context Collection Agent 54 to make assessments of what it thinks the user is doing. It balances this with information it may or may not receive from its server based counterpart.
The context display component 50 makes the current context assessments visible to the user so that they can see what has been determined and so that they can provide their feedback on this.
Feedback provided in this way is used to inform the context engines on both the consumer app and the server, to allow them to adjust future assessments.
For example, suppose the system guesses a context that's wrong and the user corrects this to say ‘I'm travelling to work’. The system will learn from this when the user works and when they're likely to be home and commuting. This allows it to adjust its probability graph of work/other as shown in
As the system learns, it can use the gradient of the graph to infer a commute and a flat gradient to infer time at work or elsewhere—a distinction it can fine tune from other information.
This graph therefore becomes another input to the Context Engine; the steepness of the line is proportional to the probability that the user is commuting at a given time and therefore weighs in the calculations performed when determining the most likely contexts.
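By way of illustration, the gradient of the learnt work/other probability curve could be turned into a commuting likelihood roughly as follows; the curve values and the scaling factor are assumptions only.

    # Illustrative only: a steep gradient in the learnt P(at work | hour) curve
    # suggests a commute; a flat gradient suggests settled time at work or elsewhere.
    def commute_likelihood(p_work_by_hour, hour):
        gradient = abs(p_work_by_hour[hour] - p_work_by_hour[hour - 1])
        return min(1.0, gradient * 4.0)       # assumed scaling into [0, 1]

    # toy curve: home overnight, sharp rise around 08:00, at work by 09:00
    curve = [0.05] * 7 + [0.2, 0.7, 0.95] + [0.95] * 8 + [0.6, 0.2] + [0.05] * 4
    print(commute_likelihood(curve, 8))       # steep slope -> high likelihood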
It is important to note that the Context Engine does not decide what content is relevant within a given context. It just provides an assessment of the likely contexts which can be used as inputs to the recommendation engine 10.
Moreover it is clear that no one sensor provides a definitive answer about any context. For example (the following is not an exhaustive list),
In each case, the accumulation of evidence for a given context increases its probability, but no one piece of information decides a context definitively. The process is a best effort attempt that is fine-tuned by a comparison against anonymous data from other similar users and by user feedback and machine learning derived from this.
The following section discusses the nature and format of context data as a context vector.
As discussed above, context is derived in part from data received from device sensors (location, time, network connection type etc. . . . ), partly from historical data (what to infer from sensor data, what type of place the user might be in at a given time, typical working hours for the user etc. . . . ) and finally also from other devices (e.g. who else you're with, are you surrounded by people you don't know such as in a bar, concert, tube-train etc. . . . )
In this way Context can be viewed as a collection of input data and a vector of derived, processed output data.
Elements of this vector might be of a range of different variable types, from continuous (e.g. time of day), to discrete (e.g. day of week), to categorical (e.g. at home, at work, commuting to work, etc.).
Context is an assessment of the likely meaning of the input data, but unless explicitly acknowledged and approved by the user it is only an approximation. As such, any statement of context would normally be associated with the input data on which it is based so that further offline analysis can be done on it to improve future assessments. Likewise, when the user does explicitly approve the assessment this signal can also be used to improve future assessments.
Context is an instantaneous capture of the user's predicament at a given moment. This is distinct from a typical profile of a user which simply collates preferences rather than context-based trends.
An example of a context vector might be:
Context={location; motion; place; time; network; user; enumeration of nearby devices; temperature; altitude; current activity; pending activities}
Location: Where the user is right now, i.e. their longitude and latitude.
Motion: The user's velocity (i.e. speed and direction) together with their type of motion (e.g. walking, running, car, train, etc.).
Place: At work, at home, at the shopping mall, in a favoured coffee shop, on a train. Note that places are not simply a look-up of what is at the user's current location. If a user is driving past their place of work at the weekend, or walking past their favourite coffee shop on their way somewhere else, they would not think of themselves as being at either place. A place is a venue the context engine believes the user to be at and is a function of location together with other metrics.
For example: if the location suggests a user is at work but in fact they are in a car moving at 30 mph in a direction away from work and it is a weekend, then the system would not indicate the user as “at work”. On the other hand, if the user is on foot approaching their place of work after a train or car journey and it is 8.30 am on a weekday, then the system would be configured to assess them to be at work.
So the variable “place” is a function of several other inputs, including other variables that may be in the context vector, such as location and motion.
Time: The date, time and time zone of the user.
Network: The connection type (3G, 4G, WiFi, Wired) and IP address (which the system might use to check if a user is on the same WiFi network as their home STB to deduce that the user is at home).
User: A statement of who the user is (e.g. a user ID).
Enumeration of nearby devices: A list of devices (see for example, Device x, Device y in
Altitude: Often this is detected indirectly via a barometric measurement. This can be used, together with particular accelerometer patterns, to determine a flight in progress, since cabin pressure changes occur in a well-known way.
Current activity: Running, walking, relaxing, watching TV, eating, sleeping, commuting. Mostly derived or inferred from other sensors and data sets.
Pending activities: About to go on vacation; a commute expected. These are deduced from previous patterns or other data sources, but are useful for pre-empting other activities such as downloading the user's usual podcasts before the commute starts or offering movies to download and watch while flying off on holiday, etc.
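A minimal sketch of such a context vector as a concrete data structure is given below; the values shown are invented examples and the exact structure is an assumption made for illustration.

    # Illustrative example of a context vector as a plain data structure.
    context_vector = {
        "location":           {"lat": 51.5074, "lon": -0.1278},
        "motion":             {"speed_kmh": 90.0, "type": "train"},
        "place":              "on a train",          # derived, not a raw look-up
        "time":               "2014-08-05T08:30:00+01:00",
        "network":            {"type": "4G", "ip": "198.51.100.23"},
        "user":               "user-1234",
        "nearby_devices":     ["Device x", "Device y"],
        "temperature":        18.0,                   # degrees C
        "altitude":           35.0,                   # metres, e.g. from barometer
        "current_activity":   "commuting",
        "pending_activities": ["about_to_go_on_vacation"],
    }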
The recommendation engine 10 receives context information from the context engine, for example in the form of a context vector as discussed above, and based on that context information makes a recommendation for assets to be displayed at the user terminal 4. The recommendation engine supplies information about these assets to the API 8, which formulates a recommendation message for transmission to the user device 4. The number, content and type of the assets vary depending on the context. The recommendation message comprises a sequence of asset tiles presented in a particular order.
The asset tiles can include content identifiers as mentioned above. Alternatively, the asset tiles can include content itself, such as news overlay or descriptive text for a content item. Such content is displayed at the user device.
In addition, each asset tile includes a weighting which denotes the perceived importance of that tile to the user. The weighting also governs how the tile is displayed. For example, assets with a higher weighting can be shown in a tile of a larger size than assets with lower weightings. Weightings are not obligatory—it is possible to have a system in which weightings are not utilised, and wherein the display is controlled only by the order in which assets are received. Each asset tile further comprises information about where the client can obtain the asset. This could be for example an asset locator for accessing the asset server 6 to return a particular type of asset from the asset server. Alternatively it could be a URL identifying one of the content sources 14 to 22 which (as described earlier) are accessible directly to the user terminal 4. It does not have to be a URL. In some cases, it may be an instruction to access a service of a particular type (e.g. Twitter or email) which the device then interprets.
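An illustrative sketch of the kind of recommendation message this implies is given below; the field names and values are assumptions introduced for the example, not the actual message format.

    # Illustrative only: an ordered sequence of asset tiles, each with an optional
    # weighting and a locator telling the client where to obtain the asset.
    recommendation_message = {
        "user": "user-1234",
        "tiles": [
            {"order": 1, "weight": 0.9, "type": "long_form_vod",
             "locator": {"kind": "asset_server", "asset_id": "vod-42"}},
            {"order": 2, "weight": 0.6, "type": "short_form",
             "locator": {"kind": "url", "url": "https://example.com/clip.mp4"}},
            {"order": 3, "weight": 0.3, "type": "service",
             "locator": {"kind": "service", "service": "twitter"},
             "overlay_text": "Latest tweets from known contacts"},
        ],
    }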
A user can decide to “pin” a certain content item to a certain location, e.g. a weather update is always shown in the top right hand corner. This is managed in his user profile.
As described later, the display component 50 presents at the user terminal 4 a settings panel so a user can configure their context parameters. For example, they could override tile placements to replace a video with a Twitter feed output, or they could select topics as part of their settings. It could include a “more like this” tile, and it could allow for reordering of the tiles on their display. Tiles could also be rearranged and resized by user input at the UI 49, in a manner emulating operation.
As mentioned in the introduction, the client terminal 4 has a responsive UI which changes the layout based on device resolution (phone and tablet) and orientation. Moreover, it can include a number of conceptual representations of video navigational layouts, for example, a grid where tiles are varied based on available screen space.
In addition, the system provides a different look and feel based on various context parameters including location, time, device and available time.
Reference will now be made to
In this configuration, the companion device 4a can be controlled by a user in the following way. A set of assets may be on display at the companion device 4a. A particular tile format is presented to a user. This format can be mirrored on the display 46b of the second device 4b. Thus, a user can now see on his companion device and his larger device the same display format. The user can configure the format to his taste on his companion device by suitable user input (for example, with a touch screen he can change the size of tiles by gesture, or drag tiles to different locations). Once he is satisfied with the new configuration, this can be uploaded to the second device 4b so that the new configuration is shown on the screen 46b. Then, the companion device can be reset into an independent mode whereby it can continue to recommend assets and content using its default display configuration, or another configuration selected by the user. The other device 4b will no longer follow the configuration once the user device 4a has been put back into an independent mode.
The recommendation engine is responsive to changes in context parameters provided by the context engine module 24 to update the content/layout of the tiles in real time based on time and location (and other context parameters). Thus, the display provided to the user at the user terminal 4 will change automatically depending on the time of day or the location where the user is, or in dependence on the user manually activating different settings of his context.
The recommendation message received from the control server 2 includes asset locators which enable the user terminal 4 to locate assets at the asset server 6, which are then displayed in accordance with the order received from the control server 2. The approach supports tiles with initial choices of 4×4, 4×6 or a freely configurable number of tiles. The order is interpreted differently depending on the type of device. Depending on the number of tiles and the screen size of the display at the device 4, rectangle dimensions are calculated. A double-click on a tile which is empty by default points to a list of sources: Internet, social media, live TV, email, other. Email is a dynamic feed (reference 26) that pushes updates every ten minutes. The tiles can be made adjustable in size by the user using two fingers.
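For illustration only, the calculation of rectangle dimensions from the received order and weightings might proceed roughly as sketched below; the sizing rule (double width for highly weighted tiles) is an assumption.

    # Illustrative only: map ordered, weighted tiles onto a device-sized grid.
    def layout_tiles(tiles, columns, rows, screen_w, screen_h):
        cell_w, cell_h = screen_w // columns, screen_h // rows
        placed, used = [], 0
        for tile in sorted(tiles, key=lambda t: t["order"]):
            # a highly weighted tile gets a double-width cell where space allows
            wide = tile.get("weight", 0) >= 0.8 and used % columns <= columns - 2
            span = 2 if wide else 1
            placed.append({"order": tile["order"],
                           "x": (used % columns) * cell_w,
                           "y": (used // columns) * cell_h,
                           "w": cell_w * span, "h": cell_h})
            used += span
            if used >= columns * rows:
                break
        return placed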
It is intended that the size of display of the asset will be equivalent to its importance to the user, as governed by the context parameters driving the recommendation engine 10. This is controlled by the weightings described above.
The context engine learns from and considers user behaviour to modify and optimize the recommendations. Here, different “inputs” (device, time, location, . . . ) are used to detect what situation the user is currently in, in order to make the best recommendation and to configure the actual experience.
The user sub-profiles can allow the user to set the situation himself, to get the correct recommendations related to his mood/situation/general preferences, but also to set general no-gos.
A user profile could define generally what someone likes and would get recommended, but perhaps more importantly what someone does not want to see at all (violence, pornography, soap operas, . . . ). A user sub-profile could additionally allow different preferences to be defined related to the current situation.
Actions which a user takes when they are using one of their sub-profiles do not affect recommendations when using another of their sub-profiles, unless they specifically request that the sub-profiles are modified together.
Note that the server updates any of the user's connected devices of a given profile if just one of them senses a different context. If a user carries their smartphone and their tablet, but only the phone detects a change in context, that change in context can nevertheless be updated for the tablet as well. When the user starts using the tablet, the context is updated on that device, and also, for example, on the television at home.
Notwithstanding this update, note that all the devices could show the same or different content depending on the settings on each device.
An important feature of the app is to have the capability of general settings that have direct applicability to the profile, but also to have sub-profiles, selectable by the user, that cater for different situations in which a user has different preferences. All this helps to optimize the recommendations and to speed up the learning process of the recommendation engine, and also helps the recommendation and the respective learning process to focus on the right situation and not to get distracted and interfered with by “different preferences” in different situations which are not already being detected by the several input mechanisms (shown in
Reference will now be made to
Each asset can have its volume independently adjusted—there may be multiple audio output streams running simultaneously.
There follow examples of five user stories.
Whilst at work, Mark has 30 minutes to browse the Internet in his lunch break. He is interested in short-form content appropriate to his work environment. This means being recommended short-form news items (both VOD and linear), possibly based on trending topics derived from his Facebook and Twitter feeds. Additionally, Mark will want to watch the typical “kitten” videos we all share in the office.
The application can do this as it knows that Mark is at work and it knows that Mark takes a 30 minute break between 1 pm and 1.30 pm.
Whilst Leigh is using the U-TV 38 to browse content in the evening, a fire breaks out downtown and a breaking news article trends within his Twitter feed. The U-TV 38 will now update his display to include,
The system can do this as it integrates with Twitter and Facebook and assesses trending news articles; keywords in news articles can additionally be used to “find” associated video and VOD content based on content tagging. Additionally, the system has a hook into Leigh's social graph and can promote items based on his specifics (as in his user profile).
Kevin wants to watch television at home and loads the U-TV app to discover some content. The system knows that Kevin is at home and that Kevin likes to watch long-form content of an evening. The system will promote VOD content based on Kevin's previous viewing habits. The system will include trending VOD content but will not include Twitter, Facebook or any other non-video content.
The system can do this as it knows that Kevin is at home and that Kevin watches films on Monday evening.
Sian likes to use U-TV whilst she is watching television at home on an evening. She is mostly focussed on the television but the television programming does not require her undivided attention and she casually browses U-TV to spot any short-form and social content that can complement her casual approach to watching television on an evening. The U-TV app listens to the television programming and promotes content based on her television programme at that time. This can be achieved where the TV feed is supplied as a content source for matching purposes.
Peter enjoys watching any sport in the evening although he is relatively indiscriminate in the sport that he wants to watch. He uses U-TV to surface linear streams running sports and will “zoom” in on a game when the action interests him. However, Peter is also a social hound and knows that his friends know when something great is happening in a game. U-TV displays a good mix of sports content based on Peter's preferences, but additionally a tile maintains a list of sports-related content that is trending, and his friends' Twitter posts are surfaced higher than public posts.
The application can do this as it knows that Peter is a sports fiend, especially on an evening, and he is really looking for the good bits of a game. This means short-form and highlights or a chance to jump to the hot part of a game when his friends tell him to.
There follows a description of three use cases:
Start the application by entering the Android menu and selecting the U-TV MIX icon.
After selecting the U-TV MIX icon 90 the user is met with a loading screen,
The ‘Trending Topics’ page is then displayed,
More topics can be found by scrolling to the right, as shown in
Clicking on the settings button 132 takes you to the settings page. The initial section of this is the MY U-TV page. Here the user can select certain topics of interest allowing the further refinement of the metrics used to build the trending topics.
The Social Settings section is where a user logs into their social networks. It is intended to have the user also log into news sources e.g. Huffington Post, BBC News, Google News, etc. in order to pull in news articles.
The Configuration page is the ‘Cheat Key’ for use in the PoC. It allows demonstration of how the application's trending topics change when the user is at work/home, in another city/country, the weather is good/bad, etc.
Going back to the Trending Topics page,
Begin by loading the application as before and seeing the standard loading page; the user is then presented with the Trending Topics page, as shown in
A significant news story breaks, and the topics available are updated. The user is notified of this by the appearance of a small blue notification next to the ‘refresh’ button in the top left-hand corner of the screen. There is a single notification, therefore the number shown is 01.
Clicking on the notification updates the Trending Topics page and the new topic is inserted into the page with a ‘News Alert’ 210 highlight on it to grab the user's attention.
Clicking on the new topic brings up the tile display for that topic, as shown in
Scrolling to the right brings more content into view.
Scrolling back to the left and clicking on the largest video tile 222 with the heading ‘Wildfires in Southern California . . . ’ takes the user into a single item view to watch the linear stream of this video.
Clicking on the ‘information’ button 242 or the ‘conversation’ button 244 brings in associated content. The ‘information’ button brings in an informative description to accompany the video.
The ‘conversation’ button provides the user with a selection of feeds associated with the video via news sources, Facebook, Twitter, and Google plus, etc. The user can scroll down to display more items.
Begin by loading the application, seeing the standard loading page,
The user then selects the World Cup topic 1114. The tile view of this topic is then presented to the user, shown in
Twitter and Facebook content has not been brought in-line at this point. This has been left until the user has selected a specific item of content to view. Though Twitter and Facebook are being used to decide tile content. The live Brazil vs England feed is selected to be viewed by the user 270. The video fills the screen and a bar appears at the bottom as described in
Selecting the ‘information’ button 242 in the bottom right-hand corner brings up a description of the video being watched, including live score.
The ‘conversation’ button 244 allows the user to view content from news feeds and social media feeds as previously shown in
Clicking on the ‘conversation’ button 244 again removes the screen section displaying the feeds. The display is then as shown in
The ‘exit’ button 246 is then selected, and returns the user to the tile view for the previously selected World Cup topic as shown in
The following sets out information about the API 8. Each table has a heading which describes the function of the API:
Recommendations and list—Tables II/III
Each function can be activated at an endpoint which is defined in the table. Note that for recommendations and lists multiple endpoints are possible and this has been separated into two tables, one table (Table II) relating to recommendation and list of articles, and the second table (Table III) being related to recommendations and lists for video assets. A response always contains an array of recommendations tiles.
The endpoint in Table II is a source of articles, and the parameters include: q (query term); a (number of articles); sv (number of short-form videos); lv (number of long-form videos); pid (location id); ls (location radius).
The endpoints in Table III are a source, VOD store, live feed and YouTube/vod (short form video).
When the function is implemented, the response is determined by the function name (action) and the defined endpoint. The response includes ‘n’ tiles, where n can include sv, a and lv.
Below the table of “Trending topic” are exemplary response items, each having a corresponding tile id equal to 1, 2, 3, 4. A response always contains an array of topics.
Below the “Social feed” Table IV a sample response lists a number of different articles from the source “article” and postings from the source “Twitter” all sharing the subject content “Tracy Morgan”. Note that the items returned from the source “article” have a specific URL associated with them to allow the user terminal to access these articles from the article asset server itself. Postings from the Twitter source which are returned in the response do not have a separate URL—instead they are taken directly from the Twitter source to the user terminal.
Below the tables II and III “Recommendations and lists” is an example response containing an array of recommendations tiles, each including a video URL and information about how the tile is to be presented at the user terminal.
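By way of illustration only (the tables themselves are not reproduced here), a recommendations-and-lists request and its response might take roughly the following shape; the endpoint path, parameter values and field names are assumptions.

    # Illustrative only: a hypothetical recommendations-and-lists call and response.
    request = {
        "endpoint": "/recommendations",               # assumed path
        "params": {"q": "world cup", "a": 2, "sv": 3, "lv": 1,
                   "pid": "loc-123", "ls": 10},       # query, counts, location id/radius
    }

    response = {
        "tiles": [
            {"id": 1, "type": "lv", "title": "Brazil vs England (live)",
             "video_url": "https://example.com/live/123",
             "presentation": {"order": 1, "size": "large"}},
            {"id": 2, "type": "sv", "title": "Top goals so far",
             "video_url": "https://example.com/clips/456",
             "presentation": {"order": 2, "size": "small"}},
        ],
    }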
Aspects of the inventions described herein include any or all of the following features used in any combination. In addition, methods, and computer programs for implementing the method, are contemplated.
A content delivery server configured to select from multiple content items a set of content items for display to a user at a user terminal, the content delivery server having access to content identifiers, identifying a context for delivery of the set of assets;
A computer device operating as a user terminal and comprising:
A content delivery system comprising:
A video content delivery system comprising;
A computer device having a display for displaying to a user at least one content item;
A content delivery system comprising:
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/033,445, filed on Aug. 5, 2014. The entire teachings of the above application(s) are incorporated herein by reference.