The present invention relates generally, as indicated, to a system and method for tracking music or other audio metadata from a number of sources.
As the proliferation of electronic devices continues, there is now also a huge diversity in the features and accessories associated with such devices. More specifically, many electronic devices have video playback capability, audio playback capability and image display capability. Exemplary accessories may also include headphones, music and video players, etc. Taken together, the above features and accessories are often used by owners of electronic devices to store, stream and listen to a range of audio and music related media, which can then be consumed by the owners at any time and/or location.
To match this ever-growing demand by consumers to store, stream and listen to a range of audio and music media, a number of digital music content providers have emerged over the past decade. Following the introduction of downloadable digital music files, the music industry evolved from the peer-to-peer network platforms which facilitated the illegal sharing of such files for free (eg Napster and Kazaa) to fully licensed alternatives (eg iTunes). The next significant evolution in digital music occurred as web based internet radio providers offered listeners the ability to listen to music online (eg Pandora Radio). Music subscription services then emerged as a means for users to consume large libraries of music for a flat subscription fee (eg Rhapsody and Spotify).
As a result of the ever-changing mediums that consumers use to listen to digital music, the music space is severely fragmented, with users divided between downloading music (both legally and illegally), streaming through internet-based radio stations and/or using online subscription services. Accompanying this fragmentation is the overwhelming song choice that consumers now face, as there are over 20 million tracks available from most of the established content providers. This means that consumers are becoming increasingly confused by both the number of content providers available and the amount of music that is available to consume.
In order to combat this ‘search bar paralysis’ when looking for music, a number of services have been introduced which have tried to tackle the problem of discovering music in such a disjointed environment. Traditionally, such services have concentrated on analysing the listening history of a user and providing recommended artists based on a recommender system. Sentiment analysis has also been used to filter music listening habits based on, for example, the time of day or the mood of the consumer. These approaches neglect the human curation side of music discovery.
Furthermore, the lack of interoperability means that a user is unlikely to use the same content provider as their friends, which limits the amount of social interactivity between the two parties. In the current state of the art, it is therefore increasingly difficult for users to share audio and music content information with their friends due to the fragmented nature of the industry.
Existing content providers may offer a social music discovery tool; however, the content that is shared on such services is limited to users of that particular service. The music service that provides for the sharing of content is typically also in control of the actual music content. Akin to a toll-bridge business, such content providers facilitate the movement of traffic provided it passes through their own gateways. Users must therefore pay the toll in order to benefit from the full service. This results in a rather limited and sandboxed experience for the user, who can only discover the music listening habits of other users on that same platform.
It is therefore an object of the present invention to provide a new and improved system and method of tracking music and/or other audio metadata.
The present invention, as set out in the appended claims, relates to a system and method for tracking music or other audio metadata from a number of sources in real time on an electronic device and displaying this information as a unified music feed using a graphical and textual interface, and more particularly, to sharing such information within a social network or other conveyance system in order to aggregate crowd-sourced, location-based and real-time information by combining the location, timestamp and metadata of users' listening histories on such electronic devices.
Many owners of portable electronic devices have their own collection of music, which is often sourced from a variety of different locations and music services including, but not limited to, mp3 files, mp4 files, other downloads and streaming services. It is very common for electronic devices to be used in a manner that allows the user to side-load, store and play such music. The metadata related to the playing of such audio and music content is therefore accessible, as it sits agnostically on the electronic device.
The invention specifically targets this information through a system and method which interacts with a user's electronic device and is able to access this metadata at the time the content is played, so that it is possible to know what music or other audio content people are actually listening to in real time.
Furthermore, the invention can access this metadata from a number of sources including native music players on the electronic devices, third party music players, internet radio and streaming services. The invention is not therefore limited to tracking the music or other audio metadata of any one content provider. The invention allows for a more holistic view of what people are listening to across a range of platforms on their electronic devices. In turn, this unified feed is displayed in a graphical and textual interface so that the user can easily see what other listeners within their network are listening to.
In addition, once a song or other audio metadata has been played by a user, it is now possible to determine the location of the electronic device through the use of GPS, wireless triangulation, system networks, or a combination of the same. This means that it is also possible to determine the location at which a song or other audio metadata is played on an electronic device. Despite this advance in technology, traditional music services tend not to utilise this location-based information when sharing content between users.
It is also possible to know the exact timestamp of when a song or other audio metadata is played on the majority of electronic devices. Often this information is then relayed on the music services' social network or other conveyance system to other users of the music service. However, this real-time application of the listening habits of an individual user is not often used in the aggregate to see what a group of people have been listening to over a specific time frame (eg in the last hour, during the previous week or over the course of a year).
Accordingly, there exists a need for a system and method for sharing information about music or other audio metadata which is extracted from an electronic device that remains independent and which sits agnostically above any particular music service. This will in turn allow for the aggregation of crowd-sourced listening habits of users by combining the location, timestamp and music or other audio information of multiple users' listening histories in order to display a unified music feed to assist in the music discovery process.
The present invention is an improvement over conventional systems in that it provides a system and method for tracking music or other audio metadata from a number of sources in real time on an electronic device and displaying this information as a unified feed using a graphical and/or textual interface.
There is also provided a computer program comprising program instructions for causing a computer to carry out the above method, which may be embodied on a record medium, carrier signal or read-only memory.
It is therefore an object of the present invention to provide a new and improved system and method of tracking music or other audio metadata that allows users to actually see what their friends and family are listening to as they listen to their music (on their platform of choice).
It is another object of the present invention to provide a new and improved tracking system and method that displays this information as a unified feed and in an efficient manner targeted to users who are the most likely to be interested in the information.
It is yet another object of the present invention to provide a new and improved tracking system and method that allows mobile users to use the system by way of multiple platforms and across multiple content providers.
It is still yet another object of the present invention to provide a new and improved tracking system and method that is capable of working with real-time GPS location-based systems as well as pre-loaded mapping software.
It is still yet another object of the present invention to provide a new and improved tracking system and method that is capable of working with temporal-based systems so that users can search and filter this information by time.
It is another object of the present invention to provide a new and improved tracking system and method using a graphical and textual interface to facilitate the discovery of new music.
Other objects, features and advantages of the invention will be apparent from the following detailed disclosure, taken in conjunction with the accompanying sheets of drawings, wherein like reference numerals refer to like parts.
The invention will be more clearly understood from the following description of an embodiment thereof, given by way of example only, with reference to the accompanying drawings, in which:
A system and method of tracking music or other audio metadata from a number of sources in real time on an electronic device and displaying this information as a unified feed using a graphical and textual interface is disclosed.
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments, with the understanding that the present disclosure is to be considered merely an exemplification of the principles of the invention and that the scope of the invention is limited only by the appended claims.
Although several embodiments of the invention are discussed with respect to music or other audio metadata at different devices and from different content sources, in communication with a network, it is recognized by one of ordinary skill in the art that the embodiments of the invention have applicability to any type of content playback (eg video, books, games) involving any device (wired or wireless local devices, or both local and remote wired or wireless devices) capable of playing content that can be tracked, or capable of communication with such a device.
The system set out in
In the illustrated embodiment, the services include played content tracker processes 2 and 5 to track played music or other audio metadata and to use the database interface process 3 to store and retrieve the event data that describes what is being played, where it is being played and when.
In step 2, the event generator process detects the initial operation of the device, such as during power up or movement to a cell of a different base station or access point. An event geolocation message is sent for receipt by the content service system. The geolocation event message indicates the geographic location of the mobile device, determined in any manner known in the art. For example, in some embodiments, the mobile terminal includes a Global Positioning System (GPS) receiver and logic to determine the geographic location of the mobile terminal. In some embodiments, the geolocation message is omitted.
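By way of example only, and assuming an iOS implementation, one possible way of obtaining such a geolocation event is sketched below using Apple's CoreLocation framework; the class name and the hand-off closure are illustrative and do not form part of the claims.

```swift
import CoreLocation

// Illustrative sketch only: obtain the device's geographic location for an event
// geolocation message. Location-usage strings must be declared in Info.plist.
final class GeolocationEventSource: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onGeolocationEvent: ((CLLocation) -> Void)?   // hypothetical hand-off to the event generator

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    // Forward each location fix so it can be packaged into the geolocation event message.
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let latest = locations.last {
            onGeolocationEvent?(latest)
        }
    }
}
```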
In some embodiments of steps 2 and 5, a user ID field may be used, such as a node identifier for the device used for playback, a user-supplied name, an email address or an ID assigned to a user who registers with a content service system (eg Facebook). In steps 2 and 5, the timestamp field is also retrieved, which holds data that indicates when the event occurred on the device that plays the content. In some embodiments, the timestamp is omitted. The content duration field (not shown) in steps 2 and 5 holds data that indicates the time needed to play the content fully for appreciation by a human user. This field can be omitted in certain embodiments. The content ID in steps 2 and 5 holds data that uniquely identifies the content being played (eg the music or audio metadata). In some embodiments, the field holds data that indicates a name of the content and a name of an artist who generated the content, such as song title and singer name. If the content is a music file, this content ID often contains the genre of the music played together with the song duration and other related metadata.
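By way of example only, the event fields described above could be modelled on the device as a simple record such as the following; the type and field names are illustrative, and the optional fields may be omitted as noted above.

```swift
import Foundation

// Illustrative event record only; names are hypothetical and optional fields may be omitted.
struct PlayEvent: Codable {
    let userID: String                  // node identifier, user-supplied name, email or social-network ID
    let timestamp: Date?                // when the content was played (omitted in some embodiments)
    let contentDuration: TimeInterval?  // time needed to play the content fully (may be omitted)
    let contentID: String               // uniquely identifies the content being played
    let title: String?                  // song title, if present in the metadata
    let artist: String?                 // artist or singer name, if present
    let genre: String?                  // genre, often carried with a music file's metadata
    let latitude: Double?               // appended later from the geolocation event, if available
    let longitude: Double?
}
```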
In circumstances where the music or audio metadata is not stored on the device 1 and pushed 2 to the database 3, a Content Distribution Network (CDN) as embodied in 6 is often the source of the music or audio metadata. Typically, the music store authorizes the CDN to deliver the content to the client and then directs a link on the user's browser client to request the content from the CDN. The content is delivered to the user through the user's browser client as data formatted, for example, according to HTTP or the real-time messaging protocol (RTMP). As a result, the content is stored as local content 6 on the user's device 1. The local content arrives on the device either directly from the CDN or indirectly through some other device (eg a wired node such as another host) using a temporary connection (not shown) between the mobile terminal, for example, and the other host.
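By way of example only, delivery of a track from a CDN to local storage over HTTP could be sketched as follows; the URL and filename are purely illustrative and are not those of any particular content provider.

```swift
import Foundation

// Illustrative sketch only: fetch a track from a CDN over HTTP and store it as local content.
// The URL and filename are hypothetical.
let trackURL = URL(string: "https://cdn.example.com/tracks/12345.m4a")!
let task = URLSession.shared.downloadTask(with: trackURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else { return }
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let destination = documents.appendingPathComponent("12345.m4a")
    try? FileManager.default.moveItem(at: tempURL, to: destination)   // now available as local content
}
task.resume()
```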
Once this information has been added to the database 3 and stored locally, the application itself 4 on a user's mobile device can then be used to access and retrieve the music or other audio metadata in a graphical and textual interface. Depending on the availability of the location, metadata, user details and timestamp, the user can then distinguish what music or other audio file was played, when it was played, where it was played and by whom.
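By way of example only, building the textual portion of such a unified feed from stored play events could be sketched as follows, reusing the illustrative PlayEvent record sketched earlier; the function names are hypothetical.

```swift
import Foundation

// Illustrative sketch only: order stored play events newest-first and describe each entry
// as "who played what, when and where" for the textual feed.
func unifiedFeed(from events: [PlayEvent]) -> [PlayEvent] {
    return events.sorted { ($0.timestamp ?? .distantPast) > ($1.timestamp ?? .distantPast) }
}

func describe(_ event: PlayEvent) -> String {
    let what = event.title ?? event.contentID
    let when = event.timestamp.map { " at \($0)" } ?? ""
    let place: String
    if let lat = event.latitude, let lon = event.longitude {
        place = String(format: " near (%.3f, %.3f)", lat, lon)
    } else {
        place = ""
    }
    return "\(event.userID) played \(what)\(when)\(place)"
}
```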
In circumstances where the device network 38 changes as set out in step 39, the system acknowledges this through a network change receiver 40. Assuming that the network is connected 41 and that there are songs stored in the queue 37, the queue is then pushed in step 42 and the song play is captured as outlined in step 36. The result for a user is that the music or other audio metadata played is tracked by the system and shows up in their activity feed within the application 4. A visual representation of this is set out in the example of a user's activity feed in
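By way of example only, the network change receiver 40 and the pushing of the queue 42 could be implemented on iOS along the lines of the following sketch; the class name and the hand-off closure are illustrative.

```swift
import Network
import Foundation

// Illustrative sketch only of a network change receiver (step 40): when the network
// becomes connected (step 41), any queued song plays are pushed (step 42).
final class NetworkChangeReceiver {
    private let monitor = NWPathMonitor()
    private let queue = DispatchQueue(label: "network.change.receiver")

    // `pushQueuedSongPlays` is a hypothetical hand-off to the part of the service
    // that empties the local queue of unsent song plays.
    func start(pushQueuedSongPlays: @escaping () -> Void) {
        monitor.pathUpdateHandler = { path in
            if path.status == .satisfied {
                pushQueuedSongPlays()
            }
        }
        monitor.start(queue: queue)
    }
}
```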
If the app is opened for the first time 47, the service saves the last synced time as the current time 48. The next step involves the iPhone library being read 52 to query what the last played songs have been in the phone's library, and proceeds to step 53 described below.
If the app is opened (any time after being opened for the first time), the service then checks what the now playing song is and if this has changed 49. If it has, then the service reads the iPhone library 52 and proceeds to step 53 described below.
If the app is closed or if the app is dormant in the background 43A, the service will start monitoring the region 45 of the device 1. If and when the user then breaks the region as outlined in step 46, the service assesses if the now playing song has changed since the last query 49. If the now playing song has changed 49, the service reads the iPhone library 52 and proceeds to step 53 described below. If the now playing song has not changed, the service does not proceed again until the user breaks the region that is being monitored 46. This step will reoccur until the now playing song actually changes.
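By way of example only, reading the iPhone library 52 for recently played songs, checking whether the now-playing song has changed 49, and monitoring a region 45 while the app is closed or dormant could be sketched with Apple's MediaPlayer and CoreLocation frameworks as follows; the function names, radius and region identifier are illustrative.

```swift
import MediaPlayer
import CoreLocation

// Illustrative sketch only: query the music library for songs played since the last sync (step 52).
func recentlyPlayedSongs(since lastSynced: Date) -> [MPMediaItem] {
    let allSongs = MPMediaQuery.songs().items ?? []
    return allSongs.filter { item in
        guard let lastPlayed = item.lastPlayedDate else { return false }
        return lastPlayed > lastSynced
    }
}

// Illustrative sketch only: has the now-playing item changed since the last query (step 49)?
func nowPlayingHasChanged(since previousID: MPMediaEntityPersistentID?) -> Bool {
    return MPMusicPlayerController.systemMusicPlayer.nowPlayingItem?.persistentID != previousID
}

// Illustrative sketch only: monitor a small region around the device (step 45) so that
// breaking the region (step 46) wakes the service to repeat the now-playing check via the
// delegate's locationManager(_:didExitRegion:) callback.
func monitorRegion(around location: CLLocation, using manager: CLLocationManager) {
    let region = CLCircularRegion(center: location.coordinate,
                                  radius: 100,   // metres; illustrative value
                                  identifier: "currentListeningRegion")
    region.notifyOnExit = true
    manager.startMonitoring(for: region)
}
```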
In addition, according to another embodiment, the service subscribes to Apple's location monitoring 44 and, if there is a change in location 50, the location and time of this change are added to the location database 51, which is then used to append location to the song play 58 in advance of being sent to a server 59.
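By way of example only, keeping a time-ordered log of location changes 51 and appending the fix closest in time to a song play 58 could be sketched as follows; the types and function names are illustrative.

```swift
import CoreLocation
import Foundation

// Illustrative sketch only: record each location change with its time, then pick the fix
// closest in time to the song play before the play is sent to the server.
struct LocationFix {
    let coordinate: CLLocationCoordinate2D
    let timestamp: Date
}

func locationFix(closestTo playTime: Date, in fixes: [LocationFix]) -> LocationFix? {
    return fixes.min { a, b in
        abs(a.timestamp.timeIntervalSince(playTime)) < abs(b.timestamp.timeIntervalSince(playTime))
    }
}
```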
For every song queried in the iPhone library, if the last played time is more recent than the last synced time 53 then it is stored in the local database 54. An example of this would be when the last sync takes place at 11 am. If the last played song is tracked at 1 pm (which is two hours after the last sync), then we store this song in the Local Song Play DB 55. Taking another example, if the last played song is tracked at 10 am, then the song will not be stored in the Local Song Play DB 55 as the last sync occurred later than the last played song. The next step involves a scan of the Local Song Play Database 55 and, if this song has not already been sent to the server 56, it will be sent to the server 59. As outlined above, before step 59, the system uses the location database to calculate the location at the time that the song was played 57. If this query is successful, we then add the location to the song information 58. For the purposes of this
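By way of example only, the rule described above (store only plays more recent than the last sync, and send only plays not already sent) could be sketched as follows; the types and helper names are illustrative.

```swift
import Foundation

// Illustrative sketch only of the sync rule: last sync at 11 am, play tracked at 1 pm -> store;
// play tracked at 10 am -> skip. Only plays not yet sent are forwarded to the server.
struct SongPlay {
    let contentID: String
    let playedAt: Date
    var sentToServer: Bool = false
}

func shouldStore(_ play: SongPlay, lastSynced: Date) -> Bool {
    return play.playedAt > lastSynced
}

func playsToSend(from localSongPlayDB: [SongPlay]) -> [SongPlay] {
    return localSongPlayDB.filter { !$0.sentToServer }
}
```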
Referring now to
Furthermore it should be noted by reference to
It is through this social network and conveyance system that a user can also share any music or other audio metadata as outlined in
Referring now to
Referring now to
While the above description contains many specificities, these should not be construed as limitations on the scope, but rather as an exemplification of one or several embodiments thereof. Many other variations are possible. For example, cloud lockers that store music can also be tracked using a different embodiment of the system and such platforms are likely to become more and more common as storage moves away from hardware to the cloud. Thus, a further embodiment could add cloud lockers of music as another source of metadata which can also be displayed, consumed and/or shared by the end user. Accordingly, the scope should be determined not by the embodiments illustrated, but by the appended claims and their legal equivalents.
The embodiments in the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus. However, the invention also extends to computer programs, particularly computer programs stored on or in a carrier adapted to bring the invention into practice. The program may be in the form of source code, object code, or a code intermediate source and object code, such as in partially compiled form or in any other form suitable for use in the implementation of the method according to the invention. The carrier may comprise a storage medium such as ROM, e.g. CD ROM, or magnetic recording medium, e.g. a floppy disk or hard disk. The carrier may be an electrical or optical signal which may be transmitted via an electrical or an optical cable or by radio or other means.
In the specification the terms “comprise, comprises, comprised and comprising” or any variation thereof and the terms “include, includes, included and including” or any variation thereof are considered to be totally interchangeable and they should all be afforded the widest possible interpretation and vice versa.
The invention is not limited to the embodiments hereinbefore described but may be varied in both construction and detail.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2015/057322 | 4/2/2015 | WO | 00

Number | Date | Country
---|---|---
61974453 | Apr 2014 | US