Embodiments of the present invention relate to methods, apparatuses and computer program products for contextual grouping of media items.
It is now common for a person to use one or more devices to access media content such as music tracks and/or photographs. The content may be stored in the device as media items such as MP3 files, JPEG files, etc.
Cameras, mobile telephones, personal computers, personal music players and even gaming consoles may store many different media items and it may be difficult for a user to access a preferred content item.
According to one embodiment of the invention there is provided an apparatus comprising: a memory for recording a first context output, which is contemporaneous with when a media item was operated on, and a second context output, which is also contemporaneous with when the media item was operated on but different from the first context output; and processing circuitry operable to associate the media item with a combination of at least the recorded first context output and the recorded second context output, and operable to create at least a set of media items using the associated combinations of first and second context outputs.
This provides the advantage that the apparatus is able to categorize media items based on, for example, their historic use and the context in which they were used. The apparatus is then able to match a current context with one of several possible contexts and use this match to make intelligent suggestions of media items for use.
The media items suggested for use may be those that have historically been used in similar contexts.
Thus an in-car music player may make different suggestions for one's drive to work, one's drive from work and driving during one's leisure time.
Thus a personal music player may make different suggestions when a user is exercising, relaxing etc.
According to another embodiment of the invention there is provided a computer program product comprising computer program instructions for: recording a first context output, which is contemporaneous with when a media item was operated on; recording a second context output, which is also contemporaneous with when the media item was operated on but different from the first context output; associating the media item with a combination of at least the recorded first context output and the recorded second context output; and creating at least a set of media items using the associated combinations of first and second context outputs.
According to another embodiment of the invention there is provided a method comprising: recording a first context output, which is contemporaneous with when a media item was operated on; recording a second context output, which is also contemporaneous with when the media item was operated on but different from the first context output; associating the media item with a combination of at least the recorded first context output and the recorded second context output; and creating at least a set of media items using the associated combinations of first and second context outputs.
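The recording and association steps of the method above could be sketched as follows. This is a minimal illustration only; the class, function and context names are hypothetical and not taken from the description.

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    name: str
    # Each entry is one combination of context outputs recorded
    # contemporaneously with the item being operated on.
    context_combinations: list = field(default_factory=list)

def record_contexts(item, first_context, second_context, *extra):
    """Associate the media item with the combination of at least the
    first and second context outputs recorded when it was used."""
    combination = (first_context, second_context, *extra)
    item.context_combinations.append(combination)
    return combination

track = MediaItem("track_1")
record_contexts(track, ("time", "Mon 08:15"), ("location", "motorway"))
record_contexts(track, ("time", "Mon 17:40"), ("location", "motorway"))
```

Each use of the item appends a further combination, so repeated use in similar contexts accumulates the evidence on which the later grouping steps rely.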
For a better understanding of the present invention reference will now be made by way of example only to the accompanying drawings in which:
The illustrated apparatus 10 comprises: a memory 20; a context generator 40; an input/output device 14; a user input device 4 and an input port 2.
The memory 20 stores a plurality of media items 22 including a first media item 221 and a second media item 222, a database 26, a computer program 25 and a collection 30 of context outputs 32 from the context generator 40 including, at least, a first context output 321 and a second context output 322.
A media item 22 is a data structure which records media content such as visual and/or audio content. A media item 22 may, for example, be a music track, a video, an image or similar. Media items may be created using the apparatus 10 or transferred into the apparatus 10.
In the illustrated example, the first media item 221 is for a music track and includes music metadata 23 including, for example, genre metadata 241 identifying the music genre of the music track such as ‘rock’, ‘classical’ etc and including tempo metadata 242 identifying the tempo or beat of the music track. The music metadata 23 may include other metadata types such as, for example, metadata indicating the ‘energy’ of the music.
The music metadata 23 may be integrated as a part of the first media item 221 when the media item is transferred into the apparatus 10, or added after processing the first media item 221 to identify the ‘genre’, ‘tempo’ or ‘energy’.
The context outputs 32 stored in the memory 20 may, for example, be generated by the context generator 40 or received at the apparatus 10 via the input port 2.
The context generator 40 generates at least one data value (a context output) that identifies a ‘context’ or environment at a particular time. In the example illustrated, the context generator is capable of producing multiple different context outputs. It should, however, be appreciated that the context generator may not be present in all embodiments, context outputs being received via the input port 2 instead. It should also be appreciated that the context outputs described are merely examples, and different numbers and types of context outputs may be produced.
The context generator 40 may, for example, include a real-time clock device 421 for generating as a context output the time and/or the day.
The context generator 40 may, for example, include a location device 422 for generating as a context output a location or position of the apparatus 10. The location device 422 may, for example, include satellite positioning circuitry that positions the apparatus 10 by receiving transmissions from multiple satellites. The location device 422 may, for example, be cellular mobile telephone positioning circuitry that positions the apparatus 10 by identifying a current radio cell.
The context generator 40 may, for example, include an accelerometer device 423 for generating as a context output the current acceleration of the apparatus. The accelerometer device 423 may be a gyroscope device or a solid state accelerometer.
The context generator 40 may, for example, include a weather device 424 for generating as a context output an indication of the current weather such as the temperature and/or the humidity.
The context generator 40 may, for example, include a proximity device 425 for generating as a context output an indication of which other apparatuses are nearby. The proximity device, e.g. a Bluetooth transceiver, may, for example, use low power radio frequency transmissions to discover and identify other proximity devices nearby, for example within a few metres or a few tens of metres.
It should be appreciated that by providing suitable sensors 40, different activities of a person carrying the apparatus 10 may be discriminated.

For example, a context parameter output by the real-time clock device 421 may be used to determine whether the apparatus is being used during work-time or leisure time. A context parameter output by the location device 422 may be used to determine whether the apparatus is being used while the user is stationary or moving, or while the user is in particular locations. A context parameter output by the accelerometer device 423 may be used to determine whether the apparatus is being used while the user is exercising; as an example, jogging may produce a characteristic acceleration and deceleration signature in the output parameter. A context parameter output by the weather device 424 may be used to determine whether the apparatus is being used inside or outside. A context parameter output by the proximity device 425 may be used to determine whether the apparatus is being used while the user is in the company of identifiable individuals or near a particular location.
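The jogging example could be discriminated with a very crude heuristic on the accelerometer output, for instance by counting regular acceleration peaks. The function, threshold and peak count below are hypothetical choices for illustration, not values taken from the description.

```python
def looks_like_jogging(accel_samples, threshold=1.5, min_peaks=10):
    """Crude heuristic: jogging tends to produce a regular train of
    acceleration peaks (in g) above a threshold. A local maximum
    above the threshold counts as one peak."""
    peaks = sum(
        1
        for prev, cur, nxt in zip(accel_samples, accel_samples[1:], accel_samples[2:])
        if cur > threshold and cur >= prev and cur >= nxt
    )
    return peaks >= min_peaks

# A periodic high-amplitude signal looks like jogging; a flat one does not.
jogging_signal = [0.1, 2.0, 0.1] * 12
resting_signal = [0.1] * 36
```

A real classifier would also inspect the periodicity of the peaks, but the point is only that a context parameter stream can be reduced to an activity label.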
The collection of output contexts produced or received at a moment in time define a vector that defines the current context in a multi-dimensional context space 60 (schematically illustrated in
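One way to picture such a vector is to let each context output contribute one numeric coordinate. The encoding and the distance measure below are hypothetical illustrations; the description does not mandate any particular similarity measure.

```python
import math

# Hypothetical numeric encoding of a current context as a point in a
# multi-dimensional context space:
# (hour-of-day, day-of-week, latitude, longitude, temperature in C)
morning_commute = (8.25, 1.0, 51.50, -0.12, 14.0)
evening_drive = (17.50, 1.0, 51.50, -0.12, 12.0)

def context_distance(a, b):
    """Euclidean distance between two context vectors: nearby points
    correspond to similar contexts."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Two uses of a media item during the same daily commute would map to nearby points, while the morning and evening drives above are clearly separated along the time dimension.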
The input/output device 14 is used to operate on a media item. It may, for example, include an audio output device 15 such as a loudspeaker or ear phone jack for playing a music track. The input/output device 14 may, for example, include a camera 16 for capturing an image or video. The input/output device 14 may, for example, include a display 17 for displaying an image or video.
The memory 20 stores computer program instructions 25 that control the operation of the apparatus 10 when loaded into the processor 12. The computer program instructions 25 provide the logic and routines that enable the apparatus 10 to perform the methods illustrated in
The computer program instructions may arrive at the apparatus 10 via an electromagnetic carrier signal or be copied from a physical entity 6 such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD.
The operation of the apparatus 10 will now be described with reference to
Referring to
After providing the first media item 221 to the input/output device 14, the processor 12 at block 104 receives a first context output 321 from the context generator 40 (or input port 2) and stores it in the memory 20. The first context output 321 is a first parameter of the current context of the apparatus 10 i.e. the context that is contemporaneous with playing the first media item 221.
After providing the first media item 221 to the input/output device 14, the processor 12 at block 106 receives a second context output 322 from the context generator 40 (or input port 2) and stores it in the memory 20. The second context output 322 is a second parameter of the current context of the apparatus 10 i.e. the context that is contemporaneous with playing the first media item 221. The second parameter is different from the first parameter.
The processor 12 may also receive and store additional context parameters of the current context of the apparatus 10 i.e. the context that is contemporaneous with playing the first media item 221. The types of context outputs recorded as context parameters may be dependent upon the type of media item being operated on.
At block 110, the processor 12 associates the first media item 221 with a combination of context parameters for the current context of the apparatus 10 i.e. the context that is contemporaneous with playing the first media item 221. The collection of output contexts produced or received at a moment in time define a vector, composed of context parameters, that defines the current context in the multi-dimensional context space 60.
At block 108, the operation of the input/output device 14 on the first media item 221 is terminated.
The method 100 is repeated when the same or different media items are used by the input/output device 14.
In the figure, the first media item 221 is associated 521 with a combination 5011 of context parameters 321, 322 that were current when the first media item 221 was being used. A different combination of context parameters will be created each time the first media item 221 is used and will be associated with the first media item 221. The associations between the first media item 221 and the combination or combinations of context parameters 32 are stored in the database 26. A combination of context parameters 32 defines a vector in a multi-dimensional context space 60.
In the figure, the second media item 222 is associated 522 with a combination 5021 of context parameters 321, 322 that were current at a time T1 when the second media item 222 was being used. The second media item 222 is also associated 523 with a combination 5022 of context parameters 323, 324 that were current at a time T2 when the second media item 222 was being used. The associations between the second media item 222 and the combinations 50 of context parameters are stored in the database 26. A combination of context parameters 32 defines a vector in a multi-dimensional context space 60.
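The stored associations could be held in a simple mapping from media item to its recorded context combinations. This is a hypothetical in-memory stand-in for the database 26, for illustration only.

```python
# Maps a media-item identifier to every combination of context
# parameters recorded while that item was being used.
database = {}

def associate(db, item_id, combination):
    """Store one association between a media item and a combination
    of context parameters (one row of the database 26, sketched)."""
    db.setdefault(item_id, []).append(combination)

# The second media item used at two different times T1 and T2, each
# with its own combination of context parameters:
associate(database, "media_item_2", (("time", "T1"), ("location", "A")))
associate(database, "media_item_2", (("time", "T2"), ("location", "B")))
```

Each association is kept rather than overwritten, so a media item accumulates one context vector per use.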
As an example, for music track media items, the first context parameter may be the time and/or day (of playing the music track) and the second context parameter may be a location (of playing the music track).
As another example, for image media items, the first context parameter may be the time and/or day (of capturing/viewing the image) and the second context parameter may be a location (of capturing/viewing the image).
Referring to
At block 114, a set 63 of media items 22 is created by searching the database 26 to identify media items 22 that have associated contexts that lie within the defined context space 62.
At block 116, the set 63 of media items 22 may be adjusted by the processor 12 using, for example, a threshold criterion or criteria. For example, the set may be reduced by the processor 12 to include only those media items 22 that have multiple (i.e. greater than N) associated contexts that are within the defined context space 62. For example, the processor 12 may reduce the set 63 by including only those media items 22 that have similar metadata 23. For example, in the case of music tracks the set 63 may be restricted to music tracks of similar genre and/or tempo and/or energy as identified by the processor 12. The processor 12 may, in some embodiments, augment the set 63 by including media items that have similar metadata but do not have associated contexts that are within the defined context space.
At block 118, following optional block 116, a definition of the set 63 of media items 22 is stored in the database 26 in association with the definition 70 of the context space 62 as illustrated in
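Blocks 114 and 116 could be sketched as a search over the stored associations followed by a threshold filter. The representation of a context volume as per-dimension (low, high) bounds, and all names below, are hypothetical.

```python
def within(context, volume):
    """True if every coordinate of a context vector falls inside the
    corresponding (low, high) bound of the context volume."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(context, volume))

def create_set(db, volume, min_hits=1):
    """Block 114 sketched, with the block 116 threshold criterion:
    collect media items having at least `min_hits` associated
    contexts inside the defined context volume."""
    return [
        item
        for item, contexts in db.items()
        if sum(1 for c in contexts if within(c, volume)) >= min_hits
    ]

# Hypothetical associations: (hour-of-day, latitude) per use.
db = {
    "track_a": [(8.0, 51.5), (8.3, 51.5), (17.6, 51.5)],
    "track_b": [(22.0, 48.1)],
}
commute_volume = ((7.5, 9.0), (51.0, 52.0))
```

With `min_hits=2`, only items used repeatedly in the commute context survive, which is the reduction described for block 116.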
Referring to
If the current context does lie within a defined context volume 62, then at block 124, the processor 12 accesses the set 63 of media items 22 associated with that context volume 62.
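The matching step could be sketched as checking the current context vector against each stored (context volume, media set) pair. The function and data below are hypothetical illustrations of blocks 120 to 124.

```python
def suggest(current, stored):
    """If the current context vector lies within a defined context
    volume, return the set of media items stored in association with
    that volume; otherwise return None."""
    for volume, media_set in stored:
        if all(lo <= x <= hi for x, (lo, hi) in zip(current, volume)):
            return media_set
    return None

# Hypothetical stored definition: a commute-time context volume
# (hour-of-day bounds, latitude bounds) and its associated set 63.
stored = [(((7.5, 9.0), (51.0, 52.0)), ["track_a", "track_c"])]
```

A context recorded mid-commute retrieves the associated set; a lunchtime context matches no volume and yields no suggestion.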
The processor 12 may present the set 63 of media items as a contextual playlist. The playlist may be presented as suggestions for user selection of individual media items for use, or as a playlist for automatic use of the set of media items without further user intervention, e.g. as a music compilation or image slide show.
The playlists may then be stored and referenced.
Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed. For example, although association of a media item with a vector of context parameters may be achieved automatically using a processor 12 as illustrated in
Examples of how embodiments of the invention may be used include:
Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.