Promotional content targeting based on recognized audio

Information

  • Patent Grant
  • 10832287
  • Patent Number
    10,832,287
  • Date Filed
    Tuesday, September 18, 2018
  • Date Issued
    Tuesday, November 10, 2020
Abstract
An audio recognition system provides for delivery of promotional content to its user. A user interface device, locally or with the assistance of a network-connected server, performs recognition of audio in response to queries. Recognition can be through a method such as processing features extracted from the audio. Audio can comprise recorded music, singing or humming, instrumental music, vocal music, spoken voice, or other recognizable types of audio. Campaign managers provide promotional content for delivery in response to audio recognized in queries.
Description
BACKGROUND

The present disclosure relates to systems and methods that recognize audio queries and select related information to return in response to recognition of the audio queries. The technology disclosed facilitates easy designation of aggregate user experience categories and custom audio references to be recognized. It facilitates linking and return of selected information in response to recognition of audio queries that match the designated aggregate user experience categories or custom audio references to be recognized.


Song recognition is related to humming and voice recognition. Algorithms have been developed that allocate audio processing steps between a hand-held device and a remote server. The team working on the technology disclosed in this application has contributed to this art, including development of technology described in US 2012/0036156 A1, published Feb. 9, 2012, entitled “System and Method for Storing and Retrieving Non-Text-Based Information;” and US 2012/0029670 A1, published Feb. 2, 2012, entitled “System and Methods for Continuous Audio Matching.” These patent publications are hereby incorporated herein by reference. In some technologies, audio samples are relayed from a hand-held device to a server for processing. In others, features are extracted from the audio for processing. Sometimes, the features are processed locally. Other times, the features are processed by a server. Traditionally, recognition technology has been used only on demand with hand-held devices, due to battery, bandwidth and transmission cost considerations. New technology described by this development team has opened the door to continuous audio recognition using a battery-powered hand-held device, such as a smartphone, tablet or laptop.


Song recognition has been used as a trigger for metadata presentation. The technology disclosed explores other connections that can be made to provide information to a user following recognition of a song or, more generally, of an audio or multimedia segment.


SUMMARY

The present disclosure relates to systems and methods that recognize audio queries and select related information to return in response to recognition of the audio queries. The technology disclosed facilitates easy designation of aggregate user experience categories and custom audio references to be recognized. It facilitates linking and return of selected information in response to recognition of audio queries that match the designated aggregate user experience categories or custom audio references to be recognized. Particular aspects of the technology disclosed are described in the claims, specification and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an example system that uses audio recognition and classification of recognized content to deliver promotional content.



FIG. 2 is a block diagram illustrating example modules within the self-service campaign configuration server.



FIG. 3 is a block diagram illustrating example modules within the distribution server.



FIG. 4 is a block diagram illustrating example modules within the computing device app or application.



FIG. 5 is a flow chart illustrating an example process for self-service campaign configuration.



FIG. 6 is a flow chart illustrating an example process for server-based recognition.



FIG. 7 is a flow chart illustrating an example process for local recognition.



FIG. 8 is an example graphical interface for adding a new campaign to an account.



FIG. 9 is an example graphical interface for adding media, such as promotional or informational content, to a campaign.



FIG. 10 is an example graphical interface for adding a group that connects target recognition events to media, such as promotional or informational content, in a campaign.



FIG. 11 is an example graphical interface for finishing adding a new group.



FIG. 12 depicts an example implementation for a device to show promotional content based on the recognized audio.





DETAILED DESCRIPTION

The following detailed description is made with reference to the figures. Preferred embodiments are described to illustrate the technology disclosed, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows.


The technology disclosed provides a variety of examples of relating recognition of audio to delivery of promotional content. The examples can be extended from audio recognition to image recognition, such as recognizing segments of a movie or television show. The following list of applications of the technology disclosed is not intended to define or limit the claims, which speak for themselves.


The technology disclosed can be applied to a variety of technical problems. Applying some implementations of the technology described, the problem of enabling campaign managers to deliver promotional content based on non-textual user experiences can be solved by a self-service, bid-based system that uses audio recognition of aggregate experience categories as a signal to target delivery of promotional content to hand-held devices for successful bidders.


Applying some implementations of the technology described, the problem of delivering promotional content based on non-textual user experiences can be solved by recognizing audio and categorizing it into an aggregate experience category, then combining the aggregate experience category with priority information derived from bidding for content delivery to select among promotional content to be delivered to a user experiencing the recognized audio.


Applying some implementations of the technology described, the problem of receiving useful content based on non-textual user experiences can be solved by sending features of audio and geo-location information to a server and receiving responsive content based on an aggregate experience category to which the audio features belong, localized to a location of a user experiencing the audio using the geo-location information.


Applying some implementations of the technology described, the problem of enabling campaign managers to deliver promotional content based on non-textual user experiences can be solved by a self-service, bid-based system that uses audio recognition of uploaded audio content or features of audio content as a signal to target delivery of promotional content to hand-held devices for successful bidders.



FIG. 1 is an example system 100 that uses audio recognition and classification of recognized content to deliver promotional content. One classification method includes classifying songs by aggregate experience category. Examples of aggregate experience categories are artist, album, versions of the song, similar artists, recommended songs in a cluster, or tagging patterns. This classification generalizes from a single recognized performance of a song to broader, related bodies of work that can, as a group, be used as a signal for promotional content targeting. Another classification includes recognizing and classifying custom audio references that would not be found in a database of songs. For instance, commercial voiceovers and sound effects or movie dialogues are custom audio references that would not be found in a music recognition database. Of course, the technology disclosed also could be applied to recognition of individual songs and delivery of promotional content responsive to the recognition.


A campaign manager 123 refers to a device that interacts through the network(s) 125 with the self-service campaign configuration server 115. The campaign manager 123 may be a computer, workstation, tablet, laptop, smartphone, consumer appliance or other device running an application or browser. Either a local or remotely based interface, such as a web-based interface, allows the campaign manager 123 to select among options for configuring a campaign. The campaign may be a promotional campaign or an informational campaign. It can promote or inform about a product, candidate, cause, referendum or other messaging interest. The options are further described below.


The campaign manager 123 may be provided access through the network(s) 125 to a reference database 113 that includes audio content references and metadata. The metadata may organize the audio content by aggregate experience categories. Metadata also may organize any custom audio references uploaded by the campaign manager 123.


The self-service campaign configuration server 115 receives choices that a user or an automated system makes and relays to the server using the campaign manager 123 or another device. The self-service campaign configuration server 115 communicates over one or more network(s) 125 with an account database 114 that maintains account information and with a reference database 113 that contains audio content references that the overall system 100 matches to captured audio. It also communicates over the network(s) 125 with a content database 117 that contains information, messages, ads and the like that can be presented to a user of a computing device 135 following an audio recognition.



FIG. 2 is a block diagram illustrating example modules within the self-service campaign configuration server 115. While this server is referred to as “self-service,” in some implementations human assistance is available, either immediately or on request, as by initiation of an online chat or call back. In this example, the configuration server includes one or more of a target module 211, bid module 221, reference uploader 231 and content uploader 241. Some implementations may have different and/or additional modules than those shown in FIG. 2. Moreover, the functionalities can be distributed among the modules in a different manner than described herein.


The self-service campaign configuration server 115 recognizes accounts of campaign manager 123 users, stored in the account database 114. Accounts contain contact and billing information for a network information provider. In addition, each account contains at least one campaign with one or more references to targeted multimedia, such as an audio fingerprint, a group of multimedia (for example, a genre of songs or all songs by a particular artist), or the textual content of an audio file. Targeted audio or multimedia can be selected from or added to the reference database 113.
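

A minimal sketch of how the account, campaign, and target records described above might be organized. All class and field names are illustrative assumptions, not identifiers from this disclosure.

```python
# Illustrative data model for the account database 114; names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TargetReference:
    """One targeted item that can be matched against in reference database 113."""
    reference_id: str
    kind: str                 # e.g. "fingerprint", "artist", "genre", "custom_upload"
    description: str = ""


@dataclass
class Campaign:
    campaign_id: str
    targets: List[TargetReference] = field(default_factory=list)
    bid_per_delivery: float = 0.0   # amount offered per display
    bid_per_click: float = 0.0      # optional separate click-through bid
    content_ids: List[str] = field(default_factory=list)  # rows in content database 117


@dataclass
class Account:
    """Record for one network information provider."""
    account_id: str
    contact_info: str
    billing_info: str
    campaigns: List[Campaign] = field(default_factory=list)
```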


The target module 211 accepts parameters for information, message and ad delivery. A campaign manager user may add, delete, or modify the parameters of each campaign after logging into his or her account via an authentication process. The network information provider can select the type of multimedia from an existing database, such as a particular song, all songs by an artist, or all songs in a given set of genres. Alternatively, the network information provider may upload, or provide a link to, a new multimedia item, such as an audio file, for addition to the database. Other types of multimedia may also be selected or uploaded, such as images, melodies, or videos. The network information provider may also provide the content of the multimedia, such as the corresponding text or melody.


Options for selecting an aggregate user experience category include an artist (all songs by this artist), an album (all songs on this album), all versions of this song by this artist, all versions of this song by other artists, all versions of this song by any artist, all songs by “similar artists,” all “recommended songs” based on this artist, and all songs tagged by at least N people that tagged this song. The N people may be from a general population or from a population restricted to friends, contacts, followers of the campaign or the sponsor of the campaign. Options also include genres such as news shows, TV news shows, comedy, drama or science fiction TV shows. An aggregate user experience category is more than just a single song.
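

One of the options above, "all songs tagged by at least N people that tagged this song," can be computed directly from tag data. The sketch below assumes a simple mapping from user to tagged songs; the data layout and function name are illustrative, not part of the disclosure.

```python
# Sketch of the "tagged by at least N co-taggers" category; layout is assumed.
def songs_tagged_by_n_cotaggers(song_id, tags_by_user, n):
    """tags_by_user maps user_id -> set of song_ids that the user tagged."""
    cotaggers = {u for u, songs in tags_by_user.items() if song_id in songs}
    counts = {}
    for u in cotaggers:
        for s in tags_by_user[u]:
            if s != song_id:
                counts[s] = counts.get(s, 0) + 1
    return {s for s, c in counts.items() if c >= n}
```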


Another option for selecting an aggregate user experience category is by identification of a broadcast stream. Technology for identifying a broadcast stream from a query is described in the pending application Ser. No. 13/401,728, filed Feb. 21, 2012 entitled “System and Method for Matching a Query against a Broadcast Stream,” which is hereby incorporated herein by reference. This option processes a query to a database compiled in real time that includes broadcast streams, such as radio stations, television stations, Internet radio or TV stations, and live performances. In this example, the aggregate user experience is an ongoing broadcast stream of a performance.


Any of the foregoing options can be modified or restricted by geo-location data, which may reflect either the location where the sound was captured, the location of the sound source or both.
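

A hedged sketch of how a geo-location restriction might be applied, assuming a campaign target stores a center point and radius and that either the capture location or the sound-source location can satisfy the restriction. None of these names or field layouts come from the disclosure.

```python
# Illustrative geo restriction check; field names and region shape are assumptions.
import math


def within_radius(lat1, lon1, lat2, lon2, radius_km):
    """Great-circle (haversine) distance test between two points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_km


def target_applies(target, capture_location=None, source_location=None):
    """A target with no geo restriction always applies; otherwise either the
    capture location or the sound-source location must fall inside the region."""
    region = target.get("geo_region")  # e.g. {"lat": ..., "lon": ..., "radius_km": ...}
    if region is None:
        return True
    for loc in (capture_location, source_location):
        if loc and within_radius(loc[0], loc[1], region["lat"], region["lon"], region["radius_km"]):
            return True
    return False
```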


The campaign manager 123 also may select targeting of at least one custom audio, such as an advertisement or informational piece. This audio can be uploaded. If it is uploaded in a video format, the reference uploader 231 can extract the audio portion from the multimedia content, saving time and effort for a user of the campaign manager 123. The target module 211 can associate the custom audio target with any of the aggregate user experience categories above. For instance, background music often is combined with an announcer's voice. A custom audio reference allows recognition of this mix of sources and treatment of the custom audio as part of a user experience category. Multiple custom audio targets can be grouped into a campaign. The campaign manager 123 can be configured to request that the campaign configuration server 115 locate and fingerprint particular audio content, such as a commercial broadcast by a competitor. The campaign configuration server, or another component cooperating with it, can create fingerprints without needing to persist a copy of the target content in storage.
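

A hedged sketch of the extraction and fingerprinting step. The ffmpeg invocation and the toy peak-hash fingerprint are assumptions for illustration only, not the recognition method actually used; the point is that fingerprints can be computed from a temporary copy so the target content itself is never persisted.

```python
# Illustrative only: extract the audio track from an uploaded video and compute
# a toy fingerprint in memory, without keeping a copy of the target content.
import hashlib
import os
import subprocess
import tempfile
import wave

import numpy as np


def extract_audio(video_path, wav_path):
    """Strip the video stream and write 16 kHz mono PCM audio using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn",
         "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", wav_path],
        check=True,
    )


def toy_fingerprint(wav_path, frame=4096):
    """Hash the dominant frequency bin of each frame; a stand-in for a real
    landmark-style fingerprint scheme."""
    with wave.open(wav_path, "rb") as w:
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    hashes = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        peak_bin = int(np.argmax(spectrum))
        hashes.append(hashlib.sha1(str(peak_bin).encode()).hexdigest()[:16])
    return hashes


def fingerprint_upload(video_path):
    """Fingerprints are derived from a temporary file that is then discarded."""
    with tempfile.TemporaryDirectory() as tmpdir:
        wav_path = os.path.join(tmpdir, "audio.wav")
        extract_audio(video_path, wav_path)
        return toy_fingerprint(wav_path)
```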


The bid module 221 accepts offers to pay for display of promotional or informational content according to selected triggers. A bidding process occurs when more than one information provider desires to deliver promotional content responsive to targeted multimedia content. The network information provider may enter a bid amount, which may be a money amount, for that particular targeted multimedia. The bid amount can be for display of or clicking through an ad, or both with different values assigned to display and click-through. The system and method of the present invention then compares this bid amount with all other bid amounts for the same targeted multimedia, and generates a rank value for all campaigns with this target. The rank value generated by the bidding process determines which campaign manager user's promotional content is delivered to the device. A higher bid by a network information provider will result in a higher rank. When a targeted multimedia is recognized, either automatically or initiated by a user, the promotional content corresponding to this targeted multimedia from the highest bidding campaign is delivered to the device. A minimum bidding amount can be enforced by the system, which may or may not be visible to the campaign manager user.
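

A minimal sketch of the ranking step described above: for one targeted multimedia item, campaigns are ordered by bid so the highest bidder's content is delivered, and an assumed floor price is enforced. Field names and the floor value are illustrative.

```python
# Illustrative bid ranking for a single targeted multimedia item.
MINIMUM_BID = 0.05  # assumed floor; may or may not be visible to campaign managers


def rank_campaigns(bids):
    """bids: list of dicts like {"campaign_id": ..., "bid_per_delivery": ...}.
    Returns campaigns ranked highest bid first, dropping bids under the floor."""
    eligible = [b for b in bids if b["bid_per_delivery"] >= MINIMUM_BID]
    return sorted(eligible, key=lambda b: b["bid_per_delivery"], reverse=True)


def winning_campaign(bids):
    """The campaign whose promotional content is delivered on recognition."""
    ranked = rank_campaigns(bids)
    return ranked[0] if ranked else None
```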


In some cases, campaign manager users may want to target their promotional content when the device detects audio multimedia from a competitor. In this case, the information provider can bid on the “fingerprint” of the multimedia if they don't own the rights to the original content.


The reference uploader 231 accepts new audio or multimedia for targeting. Added target segments are analyzed by a processing unit to extract features and are made searchable by a recognition unit (not explicitly shown). If a custom audio file is uploaded or a link is provided, the system can optionally search the existing database to confirm that it is a new, unique audio reference. If a collision takes place, the system can prompt for a higher bid. If the audio matches a song, the system can automatically switch to targeting a reference already in the reference database 113. After references are added to the reference database 113, end users can use their hand-held devices to recognize the referenced audio or multimedia. The informational or promotional materials (such as an advertisement provided by a campaign manager user) can then be delivered along with the search results to recognition users operating hand-held devices and initiating audio queries.
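

A sketch of the optional uniqueness check on upload, under the assumption that the reference database exposes a match lookup; the function and field names are illustrative, not the system's actual interface.

```python
# Illustrative collision handling for an uploaded custom reference.
def register_custom_reference(fingerprints, reference_db):
    match = reference_db.find_match(fingerprints)  # assumed lookup API
    if match is None:
        return {"action": "added", "reference_id": reference_db.add(fingerprints)}
    if match["kind"] == "song":
        # Switch to targeting the reference already in reference database 113.
        return {"action": "retargeted", "reference_id": match["reference_id"]}
    # Collision with another custom target: the campaign manager can be
    # prompted for a higher bid.
    return {"action": "collision", "reference_id": match["reference_id"]}
```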


The content uploader 241 accepts information and promotional material to be displayed to recognition users. Uploaded content is persisted to the content database 117. The system 100 delivers promotional content from the original owner and the highest bidders to the device. When the promotional content is delivered, it could be in the form of a banner ad, a half page ad, a full page takeover ad, or a listing. Delivery of the content is influenced by the bid amount. For example, a banner ad can be selected, together with an associated destination URL, to which a user will be directed upon clicking on the banner ad. Banner text and/or image are then uploaded. The preferred text to appear in a history log on a user device can be uploaded with the content. This content is associated with a bid amount per delivery and/or click-through.


The content uploaded can be synchronized with lyrics of the target audio. Synchronization of content to lyrics is described in U.S. patent application Ser. No. 13/310,630, filed Dec. 2, 2011, entitled “Displaying Text to End Users in Coordination with Audio Playback,” which is hereby incorporated herein by reference. In addition to the technology described in that application, content beyond the lyrics themselves can be synchronized with the song. For instance, in a music video, the artist could blow a kiss or throw a ball to the audience, and the kiss or ball could end up on the display of the smartphone, tablet or laptop computer.


When the target relates to a competitor, the distribution server can offer a user experiencing the audio query both uploaded content and a link back to the competitor's content. In this way, the user can override the sponsored content and return to the content being experienced. If multiple content alternatives have been uploaded to the distribution server, the server can offer the user links to alternative content, in addition to content selected for display, applying steps described below.


Referring again to FIG. 1, the distribution server 127 is connected via one or more network(s) 125 to one or more of the reference database 113, account database 114, and content database 117. The distribution server 127 is further connected via the network(s) 125 to one or more computing devices 135 used by end users or recognition users. The distribution server receives a multiplicity of recognition requests from the recognition users.



FIG. 3 is a block diagram illustrating example modules within the distribution server 127. In this example, the distribution server includes one or more of a recognition module 311, categorization module 321, prioritization module 331 and download module 341. Some implementations may have different and/or additional modules than those shown in FIG. 3. Moreover, the functionalities can be distributed among the modules in a different manner than described herein.


The recognition module 311 handles an incoming query and attempts to recognize a reference in the reference database 113 as matching the query. These references can be called audio references to distinguish them from locator references, or can simply be referred to as references. The query includes a sequence of audio samples, a plurality of features extracted from audio samples, a plurality of fingerprints extracted from audio samples, a locator reference to samples, features or fingerprints, or another format of data derived from audio samples of an audio passage to be recognized. The query further may include location data that geographically identifies where the sample was collected or, if remotely collected, where the sample originated. Either the distribution server 127 or the computing device app (application) 136 may timestamp the query. Alternatively, the query may include a locator reference that directs the recognition module 311 to a location where data on which the query is based can be found.
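

An illustrative shape for an incoming query, assuming JSON-style fields: exactly one form of audio data (samples, features, fingerprints, or a locator) plus optional geo-location, a timestamp, and local-recognition status. The field names are assumptions, not a defined wire format.

```python
# Example query payload; all field names are illustrative.
example_query = {
    "query_id": "q-0001",
    "timestamp": "2020-11-10T12:34:56Z",   # added by the device app or the server
    "audio": {
        # typically exactly one of the following would be present
        "samples": None,                   # e.g. base64-encoded PCM
        "features": None,                  # e.g. extracted feature frames
        "fingerprints": ["a1b2c3d4e5f60718", "293a4b5c6d7e8f90"],
        "locator": None,                   # URL where the query data can be fetched
    },
    "geo": {"lat": 37.39, "lon": -122.08}, # where the sample was collected or originated
    "local_recognition": {"attempted": True, "succeeded": False},
}
```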


The recognition module 311 can implement any audio recognition technology. Two examples of audio recognition technology previously disclosed by this development team are US 2012/0036156 A1, published Feb. 9, 2012, entitled “System and Method for Storing and Retrieving Non-Text-Based Information” and US 2012/0029670 A1, published Feb. 2, 2012, entitled “System and Methods for Continuous Audio Matching,” both of which are incorporated herein by reference. As indicated in FIG. 4 and in the Continuous Audio Matching publication, the recognition module 311 can cooperate with a local recognition module 411 in computing device app 136 on computing device 135. When recognition is accomplished locally on computing device 135, the recognition module 311 on the distribution server 127 may be bypassed and the local recognition accepted.


The categorization module 321 assigns recognized audio to one or more aggregate experience categories, including the categories described above in the discussion of the target module 211. A recognized song, for instance, will be assigned to an artist category, an album category, a category for versions of the song, a genre category, and other categories for which targeting is supported. A recognized custom audio will be assigned to categories as selected during interaction with the target module.
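

A sketch of how a recognized track might be expanded into aggregate experience category keys usable as targeting signals; the catalog lookups and key format are assumed helpers, not part of the disclosure.

```python
# Illustrative category expansion for a recognized track.
def categorize(track, catalog):
    categories = {
        "artist:" + track["artist_id"],
        "album:" + track["album_id"],
        "versions_of:" + track["song_id"],
        "genre:" + track["genre"],
    }
    for similar in catalog.similar_artists(track["artist_id"]):   # assumed API
        categories.add("similar_artist:" + similar)
    for rec in catalog.recommended_songs(track["song_id"]):       # assumed API
        categories.add("recommended:" + rec)
    return categories
```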


The prioritization module 331 prioritizes among campaigns that have bid to provide promotional material in response to the aggregate experience categories that correspond to the recognized audio. This prioritization may be done in advance of recognizing the query and, optionally, transmitted to the computing device app 136 before the query. Either the distribution server 127 or computing device app 136 can select among the prioritized promotional or informational information available to display. The price bid for displaying the information is one factor used in selection. Other factors may include whether the same information has recently been displayed, whether there is a limit on the number of exposures to a particular device that the campaign sponsor will pay for, and whether a device user has responded positively or negatively to the same information when previously presented.
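

A hedged sketch of combining those factors: bid price is the base score, discounted for recent repeats, capped exposures, and prior negative responses. The weights, field names, and history layout are assumptions.

```python
# Illustrative content selection combining bid with per-device history.
import time


def score(campaign, device_history, now=None):
    now = now or time.time()
    hist = device_history.get(campaign["campaign_id"], {})
    cap = campaign.get("max_exposures_per_device", float("inf"))
    if hist.get("exposures", 0) >= cap:
        return 0.0                               # sponsor will not pay for more exposures
    s = campaign["bid_per_delivery"]
    if now - hist.get("last_shown", 0) < 3600:   # shown within the last hour
        s *= 0.5
    if hist.get("negative_response"):
        s *= 0.1
    elif hist.get("positive_response"):
        s *= 1.2
    return s


def select_content(campaigns, device_history):
    scored = [(score(c, device_history), c) for c in campaigns]
    scored = [sc for sc in scored if sc[0] > 0]
    return max(scored, key=lambda sc: sc[0])[1] if scored else None
```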


The download module 341 provides promotional or informational content to the computing device app 136, which can be displayed to a user. This may include content responsive to a particular query, content predicted to be responsive to future queries by the user, or both. Content can be sent for immediate display or cached for future display.



FIG. 4 is a block diagram illustrating example modules within the computing device app 136 running on a computing device 135, such as a smartphone, tablet or laptop. In this example, the computing device app 136 includes one or more of a local recognition module 411, local categorization module 421, local content selection module 431 and content display module 441. Some implementations may have different and/or additional modules than those shown in FIG. 4. Moreover, the functionalities can be distributed among the modules in a different manner than described herein. A query-forming module (not shown) forms a query as described above for processing by the local recognition module 411 or transmission to distribution server 127.


The local recognition module 411 optionally performs or attempts recognition of a query. This can be done on demand or continuously. On-demand local recognition is a local version of the server-based recognition described above, typically with fallback to server-based recognition if local recognition is unsuccessful and the server is available.


The local categorization module 421 is a local version of the server-based categorization described above, typically with fallback to server-based categorization if local categorization is not successful and the server is available.


The local content selection module 431 optionally uses priority information provided by the server to select among promotional or informational messages available for display. The local content selection module 431 controls timing of display. It may limit the number of displays in a time period, such as one display per three minutes. It may limit the frequency with which particular content is repeated, such as once per day or five times total. It may combine locally available information about computing device usage to select content to display.
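

A sketch of the local timing rules, using the example values from the text (one display per three minutes, particular content at most once per day); the class shape is an illustrative assumption.

```python
# Illustrative local display throttling.
import time


class LocalContentSelector:
    MIN_INTERVAL = 180          # at most one display per three minutes
    MAX_REPEATS_PER_DAY = 1     # particular content at most once per day (example)

    def __init__(self):
        self.last_display = 0.0
        self.shown_today = {}   # content_id -> display count

    def may_display(self, content_id, now=None):
        now = now or time.time()
        if now - self.last_display < self.MIN_INTERVAL:
            return False
        if self.shown_today.get(content_id, 0) >= self.MAX_REPEATS_PER_DAY:
            return False
        return True

    def record_display(self, content_id, now=None):
        self.last_display = now or time.time()
        self.shown_today[content_id] = self.shown_today.get(content_id, 0) + 1
```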


The content display module 441 provides content for the computing device 135 to display. This may include adapting content provided by the distribution server 127 to the available display format of the computing device 135.



FIG. 5 is a flow chart illustrating an example process for self-service campaign configuration. Other embodiments may perform the steps in different orders and/or perform different or additional steps than the ones illustrated in FIG. 5. For convenience, FIG. 5 will be described with reference to a system of one or more computers that perform the process. The system can include, for example, the campaign manager 123 and self-service campaign configuration server 115 described above with reference to FIG. 1. The actions described in this system are actions of computer-based systems, some of which can be responsive to human user input. In claims, the steps can be expressed for a system as a whole or from the perspective of one of the system components, such as the campaign manager 123 or the self-service campaign configuration server 115.


At step 511, the campaign manager 123 transmits and the self-service campaign configuration server 115 receives one or more target identifications. The targets identified can be from the reference database 113 or uploaded custom audio or multimedia with audio. The targets can be limited to audio recognitions with a selected country of origin, geographic location, device type, operating system, time of day, user age, user gender or other demographic characteristic.


At step 521, the campaign manager 123 transmits and the self-service campaign configuration server 115 receives one or more bids for delivering promotional or informational content as targeted. Multiple bids may be entered for display in response to recognitions in combination with alternative demographic characteristics. Budgets can be set for an overall campaign or for each bid within a campaign.


At step 531, the campaign manager 123 transmits and the self-service campaign configuration server 115 receives one or more custom audio or multimedia segments to be recognized.


At step 533, self-service campaign configuration server 115 searches the reference database 113 to determine whether the uploaded custom audio or multimedia segments are already available for recognition. Prior availability of custom targeted audio may impact bidding or may cause generation of an alert. An alert may advise the campaign manager 123 that the custom targeted audio already has been uploaded and may identify one or more campaigns in which it already is being used.


At step 535, self-service campaign configuration server 115 readies the uploaded custom audio or multimedia segments for recognition.


At step 541, the campaign manager 123 transmits and the self-service campaign configuration server 115 receives one or more content items for display during the campaign.



FIG. 6 is a flow chart illustrating an example process for server-based recognition. Other embodiments may perform the steps in different orders and/or perform different or additional steps than the ones illustrated in FIG. 6. For convenience, FIG. 6 will be described with reference to a system of one or more computers that perform the process. The system can include, for example, the computing device 135 and distribution server 127 described above with reference to FIG. 1. The actions described in this system are actions of computer-based systems, some of which can be responsive to human user input. In claims, the steps can be expressed for a system as a whole or from the perspective of one of the system components, such as the computing device app 136 or the distribution server 127.


At step 605, the computing device 135 transmits and distribution server 127 receives a query. The query includes data derived from audio capture or a location reference to the derived data. It also may include location information and other information that identifies the computing device 135 or user of the device. If local recognition has been performed by the computing device app 136, the query also may include status information regarding the local recognition.


At step 611, the distribution server 127 recognizes the query.


At step 621, the distribution server 127 categorizes the recognized reference that matches the query. The reference can be assigned to multiple categories of aggregate user experience or custom targeting.


At step 631, the distribution server 127 prioritizes promotional and informational content triggered by the categories that match the query and the custom targets that match the query. This can include assigning value information to content available for display.


At step 641, the distribution server 127 downloads and the computing device 135 receives promotional and informational content.


At step 651, the computing device 135 readies the promotional and informational content for display.
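

The steps above can be wired together in one server-side handler. The sketch below reuses the categorize and select_content helpers sketched earlier and assumes dictionary shapes for campaigns and the recognized track; it is a composition of those earlier assumptions, not the system's actual code.

```python
# Illustrative end-to-end handling of a query (steps 605 through 641).
def handle_query(query, reference_db, campaigns, device_history):
    track = reference_db.recognize(query["audio"])             # step 611, assumed API
    if track is None:
        return None
    categories = categorize(track, reference_db.catalog)       # step 621
    matching = [c for c in campaigns
                if categories & set(c["target_categories"])]   # campaigns triggered
    chosen = select_content(matching, device_history)          # step 631
    if chosen is None:
        return {"track": track, "content_id": None}
    return {"track": track, "content_id": chosen["content_ids"][0]}  # step 641
```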



FIG. 7 is a flow chart illustrating an example process for local recognition. Other embodiments may perform the steps in different orders and/or perform different or additional steps than the ones illustrated in FIG. 7. For convenience, FIG. 7 will be described with reference to a system of one or more computers that perform the process. The system can include, for example, the computing device 135 that has interacted with a distribution server 127 in preparation for recognition. The actions described in this system are actions of computer-based systems, some of which can be responsive to human user input. In claims, the steps can be expressed for a system as a whole or from the perspective of one of the system components, such as the computing device app 136 or the distribution server 127.


At step 711, the computing device app 136 recognizes the query.


At step 721, the computing device app 136 categorizes the recognized reference that matches the query. The reference can be assigned to multiple categories of aggregate user experience or custom targeting.


At step 731, the computing device app 136 selects among promotional and informational content triggered by the categories that match the query and the custom targets that match the query. This can include applying a value function and other selection criteria.


At step 741, the computing device app 136 provides the computing device 135 promotional or informational content to display.



FIGS. 8-11 are example graphical interfaces for establishing a campaign to display media on computing device apps. The interface components may collect information in a different order and/or using different or additional interfaces than the ones illustrated in FIGS. 8-11.


The interface in FIG. 8 is an example of adding a new campaign to an account. This interface allows adding, deleting, searching and sorting of campaigns in the account. One or more filters can be provided to select campaigns of interest for display. An “add” button can invoke additional interfaces for adding a new campaign.


The interface in FIG. 9 is an example of adding media, such as promotional or informational content, to a campaign. An “add media” button can invoke additional interfaces for adding content.


The interface in FIG. 10 is an example of adding a group that connects target recognition events to media, such as promotional or informational content, in a campaign. One or more groups are added until, in FIG. 11, “finish” is selected.



FIG. 12 depicts an example implementation for a device to show promotional content based on the recognized audio. In this case, when the recognized audio is unique to a specific campaign, the targeted content is delivered to the device. If the recognized audio is a song, the song information is shown to the user, and then, depending on the type of ad in the campaign, the ad is either shown alongside the song information or a full-page takeover takes place. If the recognized audio is not unique to a specific campaign, a bidding process takes place. The system can optionally implement anti-fraud functionality to count only a limited number of recognitions per device within a time period, such as one day.
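

A minimal sketch of the anti-fraud counting rule, assuming a per-device daily cap; the cap value and counter layout are illustrative.

```python
# Illustrative anti-fraud counter: only the first DAILY_LIMIT recognitions per
# device per day are counted.
import time

DAILY_LIMIT = 10  # assumed cap


def count_recognition(counters, device_id, now=None):
    """counters maps (device_id, day_index) -> count; returns True if counted."""
    day = int((now or time.time()) // 86400)
    key = (device_id, day)
    counters[key] = counters.get(key, 0) + 1
    return counters[key] <= DAILY_LIMIT
```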


Computer system 1210 typically includes at least one processor 1214, which communicates with a number of peripheral devices via bus subsystem 1212. These peripheral devices may include a storage subsystem 1224, comprising for example memory devices and a file storage subsystem, user interface input devices 1222, user interface output devices 1220, and a network interface subsystem 1216. The input and output devices allow user interaction with computer system 1210. Network interface subsystem 1216 provides an interface to outside networks, including an interface to communication network 125, and is coupled via communication network 125 to corresponding interface devices in other computer systems.


User interface input devices 1222 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touch screen incorporated into the display, audio input devices such as voice recognition systems, microphones, and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 1210 or onto communication network 125.


User interface output devices 1220 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 1210 to the user or to another machine or computer system.


Storage subsystem 1224 stores programming and data constructs that provide the functionality of some or all of the modules described herein, including the logic to recognize audio queries and select related content according to the processes described herein. These software modules are generally executed by the at least one processor 1214 alone or in combination with additional processors.


Memory 1226 used in the storage subsystem can include a number of memories including a main random access memory (RAM) 1230 for storage of instructions and data during program execution and a read only memory (ROM) 1232 in which fixed instructions are stored. A file storage subsystem 1228 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain embodiments may be stored by file storage subsystem 1228 in the storage subsystem 1224, or in additional machines accessible by the processor.


Bus subsystem 1212 provides a mechanism for letting the various components and subsystems of computer system 1210 communicate with each other as intended. Although bus subsystem 1212 is shown schematically as a single bus, some embodiments of the bus subsystem may use multiple busses.


Computer system 1210 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 1210 depicted in FIG. 12 is intended only as a specific example for purposes of illustrating the preferred embodiments. Many configurations of computer system 1210 are possible having more or fewer components than the computer system depicted in FIG. 12.


Some Particular Implementations


In one implementation, a method is described that includes receiving a selection of a target audio reference. It further includes receiving a selection of at least one aggregate experience category to which the target audio reference belongs, and linking the specified aggregate experience category to one or more bids to deliver one or more promotional or informational content items to users of portable devices upon recognition of audio queries originating from the portable devices that match the target audio reference or the additional audio references in the aggregate experience category. At least the specified target audio reference, the linked aggregate experience category, the bids and the promotional or informational content items are stored as a campaign.


This method and other implementations of the technology disclosed can each optionally include one or more of the following features.


The aggregate experience category can be a genre of music. The linking further includes linking the selected genre of music to delivery of promotional content responsive to recognition of an audio query that matches any audio reference in the selected genre.


The aggregate experience category can be multiple renditions by multiple artists of a selected song. The linking further includes identifying multiple renditions by multiple artists of the selected song and linking the selected song aggregate experience category to delivery of promotional content responsive to recognition of an audio query that matches any audio reference of the multiple renditions of a particular song.


The aggregate experience category can be all songs by a selected artist. The linking further includes linking the selected artist aggregate experience category to delivery of promotional content responsive to recognition of an audio query that matches any audio reference of the selected artist.


The aggregate experience category can be all songs by similar artists. The linking further includes identifying the similar artists from the target audio reference and linking the selected similar artists aggregate experience category to delivery of promotional content responsive to recognition of an audio query that matches any audio reference of the similar artists.


The aggregate experience category can be all recommended songs. The linking further includes identifying the recommended songs from the target audio reference and linking the recommended songs aggregate experience category to delivery of promotional content responsive to recognition of an audio query that matches any audio reference of the recommended songs.


The aggregate experience category can be a broadcast channel. The linking further includes linking the broadcast channel aggregate experience category to delivery of promotional content responsive to recognition of an audio query that matches live broadcast content from the selected broadcast channel.


Additional examples of aggregate experience categories above can similarly be combined with this method implementation.


The promotional or informational content can be synchronized to lyrics of the target audio reference.


Other implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform a method as described above. Yet another implementation may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform a method as described above.


In another implementation, a method is described that includes recognizing an audio query and categorizing it into at least one aggregate experience category. It further includes combining the aggregate experience category with at least priority information derived from bidding for content delivery to select among promotional or informational content to be delivered to a user experiencing the recognized audio query.


This method and other implementations of the technology disclosed can each optionally include one or more of the following features.


The aggregate experience category can be any of the categories described above. It can be multiple renditions by multiple artists of a selected song. The recognizing further includes categorizing the recognized audio query as one of multiple renditions by multiple artists of the selected song. Similarly, it can be all songs by similar artists. The recognizing further includes categorizing the recognized audio query as performed by one of a group of artists similar to the artist of the recognized audio query.


The promotional or informational content can be synchronized to lyrics of the target audio reference.


Other implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform a method as described above. Yet another implementation may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform a method as described above.


In another implementation, a method is described that includes receiving an uploaded target audio reference from a promoter's system. It further includes processing the target audio reference, preparing it to be recognized and linking the uploaded target audio reference to one or more bids by the promoter to deliver one or more promotional or informational content items to users of portable devices upon recognition of audio queries originating from the portable devices that match the uploaded target audio reference. At least the link to the uploaded target audio reference, the bids and the promotional or informational content items are stored as a campaign.


This method and other implementations of the technology disclosed can each optionally include one or more of the following features.


A specification of one or more target user locations can limit delivery of the one or more promotional items based on an origination location of the audio query. The origination location can be where a computing device is located or an origin of audio being experienced. This feature further includes storing the target user locations with the campaign.


The uploaded target audio reference can include a mix of background music and other sounds. With the upload, the method can include receiving a selection of a song corresponding to the background music and a selection of at least one aggregate experience category to which the song and background music belong. The selected aggregate experience category can be stored with the campaign.


Additional examples of aggregate experience categories above can similarly be combined with this method implementation.


The promotional or informational content can be synchronized to lyrics of the target audio reference.


Other implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform a method as described above. Yet another implementation may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform a method as described above.


While the present technology is disclosed by reference to the embodiments and examples detailed above, it is understood that these examples are intended in an illustrative rather than in a limiting sense. Computer-assisted processing is implicated in the described embodiments. Accordingly, the present technologies may be embodied in methods for initializing or executing recognition of non-textual user queries and related information to return, systems including logic and resources to process audio query recognition, systems that take advantage of computer-assisted methods to process audio query recognition, non-transitory, computer-readable storage media impressed with logic to process audio query recognition, data streams impressed with logic to process audio query recognition, or computer-accessible services that carry out computer-assisted methods to process audio query recognition. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the technology disclosed and the scope of the following claims.

Claims
  • 1. An audio recognition device enabled to deliver promotional content, the audio recognition device comprising: a user interface output device enabled to provide information and corresponding related promotional content to a user; a network interface subsystem for providing the information to a server and obtaining the corresponding related promotional content; a module for retrieving the information from a local storage device in response to audio queries from the user requesting the information; and a local recognition module that recognizes the audio queries received from the user.
  • 2. The audio recognition device of claim 1, wherein at least one of the audio queries is vocal music.
  • 3. The audio recognition device of claim 1, wherein at least one of the audio queries is spoken voice.
  • 4. The audio recognition device of claim 1, wherein the local recognition module performs recognition by processing features extracted from the audio queries.
  • 5. The audio recognition device of claim 1, wherein the user interface output device is a display subsystem.
  • 6. The audio recognition device of claim 1, wherein the user interface output device is an audio output device and the corresponding related promotional content comprises audio messages.
  • 7. The audio recognition device of claim 1, wherein the information comprises a name of a song.
  • 8. A method for computer-assisted processing of audio queries, the method comprising: receiving an audio query and a request for information about the audio query from a user; recognizing the audio query; determining a category of the recognized audio query; searching a database corresponding to the category to obtain information responsive to the recognized audio query; selecting a promotional item corresponding to the category; and providing the obtained information and promotional item to the user.
  • 9. The method of claim 8, wherein: the method is performed by a server; the audio query is received through a network; and the obtained information and promotional item are sent to the user through a network.
  • 10. The method of claim 8, wherein: the method is performed by a user device; the audio query is received through a microphone; and the obtained information and promotional item are displayed on a visual display.
  • 11. The method of claim 8, wherein: the method is performed by a user device; the audio query is received through a microphone; and the obtained information and promotional item are provided to the user through an audio output device.
  • 12. The method of claim 8, wherein the audio query is vocal music.
  • 13. The method of claim 8, wherein the audio query is spoken voice.
  • 14. The method of claim 8, wherein the recognizing of the audio query is performed by processing features extracted from the audio query.
  • 15. The method of claim 8, wherein the information comprises a name of a song.
  • 16. A method of managing promotional campaigns, the method comprising: receiving, from a campaign manager, a reference to a category for recognized audio; receiving, from the campaign manager, promotional content; receiving, from the campaign manager, a bid to deliver the promotional content in response to an audio query that matches the category for recognized audio; and storing, on a campaign configuration server, a link between the category and the promotional content.
  • 17. The method of claim 16, wherein the category for recognized audio is a genre of vocal music.
  • 18. The method of claim 16, wherein the category for recognized audio is a name of a song.
  • 19. The method of claim 16, wherein the category for recognized audio is a topic spoken by voice.
RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/455,083, entitled “SYSTEM AND METHOD FOR TARGETING CONTENT BASED ON IDENTIFIED AUDIO AND MULTIMEDIA”, filed Mar. 9, 2017 which is a continuation of U.S. application Ser. No. 14/696,308, entitled “SYSTEM AND METHOD FOR TARGETING CONTENT BASED ON IDENTIFIED AUDIO AND MULTIMEDIA”, filed Apr. 24, 2015, now U.S. Pat. No. 9,633,371, issued Apr. 25, 2017, which is a continuation of U.S. application Ser. No. 13/468,975, entitled “SYSTEM AND METHOD FOR TARGETING CONTENT BASED ON IDENTIFIED AUDIO AND MULTIMEDIA,” by Aaron Master and Keyvan Mohajer, filed May 10, 2012, now U.S. Pat. No. 9,035,163, issued May 19, 2015, which is related to and claims the benefit of U.S. Provisional Patent Application No. 61/484,609, entitled “System and Method for Targeting Content Based on Identified Audio and Multimedia,” by Aaron Master and Keyvan Mohajer, filed May 10, 2011. All of these related applications are incorporated herein by reference.

US Referenced Citations (244)
Number Name Date Kind
3919479 Moon et al. Nov 1975 A
4450531 Kenyon et al. May 1984 A
4697209 Kiewit et al. Sep 1987 A
4739398 Thomas et al. Apr 1988 A
4843562 Kenyon et al. Jun 1989 A
4918730 Schulze Apr 1990 A
4928249 Vermesse May 1990 A
4959850 Marui Sep 1990 A
5019899 Boles et al. May 1991 A
5164915 Blyth Nov 1992 A
5436653 Ellis et al. Jul 1995 A
5437050 Lamb et al. Jul 1995 A
5511000 Kaloi et al. Apr 1996 A
5542138 Williams et al. Aug 1996 A
5577249 Califano Nov 1996 A
5581658 O'Hagan et al. Dec 1996 A
5664270 Bell et al. Sep 1997 A
5687279 Matthews Nov 1997 A
5708477 Forbes et al. Jan 1998 A
5862260 Rhoads Jan 1999 A
5874686 Ghias et al. Feb 1999 A
5880386 Wachi et al. Mar 1999 A
5907815 Grimm et al. May 1999 A
5918223 Blum et al. Jun 1999 A
5956683 Jacobs et al. Sep 1999 A
5963957 Hoffberg Oct 1999 A
5969283 Looney et al. Oct 1999 A
5974409 Sanu et al. Oct 1999 A
5991737 Chen Nov 1999 A
6049710 Nilsson Apr 2000 A
6067516 Levay et al. May 2000 A
6092039 Zingher Jul 2000 A
6108626 Cellario et al. Aug 2000 A
6121530 Sonoda Sep 2000 A
6122403 Rhoads Sep 2000 A
6182128 Kelkar et al. Jan 2001 B1
6188985 Thrift et al. Feb 2001 B1
6201176 Yourlo Mar 2001 B1
6209130 Rector, Jr. et al. Mar 2001 B1
6233682 Fritsch May 2001 B1
6292767 Jackson et al. Sep 2001 B1
6314577 Pocock Nov 2001 B1
6345256 Milsted et al. Feb 2002 B1
6363349 Urs et al. Mar 2002 B1
6385434 Chuprun et al. May 2002 B1
6405029 Nilsson Jun 2002 B1
6408272 White et al. Jun 2002 B1
6434520 Kanevsky et al. Aug 2002 B1
6453252 Laroche Sep 2002 B1
6504089 Negishi et al. Jan 2003 B1
6505160 Levy et al. Jan 2003 B1
6507727 Henrick Jan 2003 B1
6510325 Mack, II et al. Jan 2003 B1
6519564 Hoffberg et al. Feb 2003 B1
6535849 Pakhomov et al. Mar 2003 B1
6542869 Foote Apr 2003 B1
6594628 Jacobs et al. Jul 2003 B1
6611607 Davis et al. Aug 2003 B1
6614914 Rhoads et al. Sep 2003 B1
6629066 Jackson et al. Sep 2003 B1
6631346 Karaorman et al. Oct 2003 B1
6633845 Logan et al. Oct 2003 B1
6633846 Bennett et al. Oct 2003 B1
6640306 Tone et al. Oct 2003 B1
6808272 Kuo Oct 2004 B1
6834308 Ikezoye et al. Dec 2004 B1
6850288 Kurokawa Feb 2005 B2
6879950 Mackie et al. Apr 2005 B1
6931451 Logan et al. Aug 2005 B1
6941275 Swierczek Sep 2005 B1
6967275 Ozick Nov 2005 B2
6990453 Wang et al. Jan 2006 B2
6995309 Samadani et al. Feb 2006 B2
7017208 Weismiller et al. Mar 2006 B2
7058376 Logan et al. Jun 2006 B2
7085716 Even et al. Aug 2006 B1
7174293 Kenyon et al. Feb 2007 B2
7174346 Gharachorloo et al. Feb 2007 B1
7190971 Kawamoto Mar 2007 B1
7206820 Rhoads et al. Apr 2007 B1
7209892 Galuten et al. Apr 2007 B1
7233321 Larson et al. Jun 2007 B1
7257536 Finley et al. Aug 2007 B1
7266343 Yli-juuti et al. Sep 2007 B1
7323629 Somani et al. Jan 2008 B2
7328153 Wells et al. Feb 2008 B2
7373209 Tagawa et al. May 2008 B2
7379875 Burges et al. May 2008 B2
7444353 Chen et al. Oct 2008 B1
7490107 Kashino et al. Feb 2009 B2
7516074 Bilobrov Apr 2009 B2
7562392 Rhoads et al. Jul 2009 B1
7567899 Bogdanov Jul 2009 B2
7580832 Allamanche et al. Aug 2009 B2
7672916 Poliner et al. Mar 2010 B2
7693720 Kennewick et al. Apr 2010 B2
7743092 Wood Jun 2010 B2
7756874 Hoekman et al. Jul 2010 B2
7783489 Kenyon et al. Aug 2010 B2
7853664 Wang et al. Dec 2010 B1
7858868 Kemp et al. Dec 2010 B2
7881657 Wang et al. Feb 2011 B2
7899818 Stonehocker et al. Mar 2011 B2
7904297 Mirkovic et al. Mar 2011 B2
7908135 Shishido Mar 2011 B2
8013230 Eggink Sep 2011 B2
8073684 Sundareson Dec 2011 B2
8086171 Wang et al. Dec 2011 B2
8116746 Lu et al. Feb 2012 B2
8296179 Rennison Oct 2012 B1
8358966 Zito et al. Jan 2013 B2
8433431 Master et al. Apr 2013 B1
8452586 Master et al. May 2013 B2
8583675 Haahr et al. Nov 2013 B1
8634947 Kleinpeter et al. Jan 2014 B1
8658966 Ha Feb 2014 B2
8694537 Mohajer Apr 2014 B2
8762156 Chen Jun 2014 B2
9035163 Mohajer May 2015 B1
9047371 Mohajer et al. Jun 2015 B2
9196242 Master et al. Nov 2015 B1
9390167 Mont-Reynaud et al. Jul 2016 B2
9633371 Mohajer Apr 2017 B1
20010005823 Fischer et al. Jun 2001 A1
20010014891 Hoffert et al. Aug 2001 A1
20010049664 Kashino Dec 2001 A1
20010053974 Lucke et al. Dec 2001 A1
20020023020 Kenyon et al. Feb 2002 A1
20020042707 Zhao et al. Apr 2002 A1
20020049037 Christensen et al. Apr 2002 A1
20020072982 Barton et al. Jun 2002 A1
20020083060 Wang et al. Jun 2002 A1
20020138630 Solomon et al. Sep 2002 A1
20020163533 Trovato et al. Nov 2002 A1
20020174431 Bowman et al. Nov 2002 A1
20020181671 Logan Dec 2002 A1
20020193895 Qian et al. Dec 2002 A1
20020198705 Burnett Dec 2002 A1
20020198713 Franz et al. Dec 2002 A1
20020198789 Waldman Dec 2002 A1
20030023437 Fung Jan 2003 A1
20030050784 Hoffberg et al. Mar 2003 A1
20030078928 Dorosario et al. Apr 2003 A1
20030106413 Samadani et al. Jun 2003 A1
20030192424 Koike Oct 2003 A1
20040002858 Attias et al. Jan 2004 A1
20040019497 Volk et al. Jan 2004 A1
20040143349 Roberts et al. Jul 2004 A1
20040167779 Lucke et al. Aug 2004 A1
20040193420 Kennewick et al. Sep 2004 A1
20040231498 Li et al. Nov 2004 A1
20050016360 Zhang Jan 2005 A1
20050016361 Ikeya et al. Jan 2005 A1
20050027699 Awadallah et al. Feb 2005 A1
20050086059 Bennett Apr 2005 A1
20050254366 Amar Nov 2005 A1
20050273326 Padhi et al. Dec 2005 A1
20060003753 Baxter Jan 2006 A1
20060059225 Stonehocker et al. Mar 2006 A1
20060106867 Burges et al. May 2006 A1
20060122839 Li-Chun Wang et al. Jun 2006 A1
20060155694 Chowdhury et al. Jul 2006 A1
20060169126 Ishiwata et al. Aug 2006 A1
20060189298 Marcelli Aug 2006 A1
20060242017 Libes et al. Oct 2006 A1
20060277052 He et al. Dec 2006 A1
20070010195 Brown et al. Jan 2007 A1
20070016404 Kim et al. Jan 2007 A1
20070055500 Bilobrov Mar 2007 A1
20070120689 Zerhusen et al. May 2007 A1
20070168409 Cheung Jul 2007 A1
20070168413 Barletta et al. Jul 2007 A1
20070204319 Ahmad et al. Aug 2007 A1
20070239676 Stonehocker et al. Oct 2007 A1
20070260634 Makela et al. Nov 2007 A1
20070282860 Athineos Dec 2007 A1
20070288444 Nelken et al. Dec 2007 A1
20080022844 Poliner et al. Jan 2008 A1
20080026355 Petef Jan 2008 A1
20080082510 Wang et al. Apr 2008 A1
20080134264 Narendra et al. Jun 2008 A1
20080154951 Martinez et al. Jun 2008 A1
20080190272 Taub et al. Aug 2008 A1
20080208891 Wang et al. Aug 2008 A1
20080215319 Lu et al. Sep 2008 A1
20080215557 Ramer et al. Sep 2008 A1
20080235872 Newkirk et al. Oct 2008 A1
20080249982 Lakowske Oct 2008 A1
20080255937 Chang et al. Oct 2008 A1
20080256115 Beletski et al. Oct 2008 A1
20080281787 Arponen et al. Nov 2008 A1
20080301125 Alves et al. Dec 2008 A1
20090030686 Weng et al. Jan 2009 A1
20090031882 Kemp et al. Feb 2009 A1
20090037382 Ansari et al. Feb 2009 A1
20090063147 Roy Mar 2009 A1
20090063277 Bernosky Mar 2009 A1
20090064029 Corkran et al. Mar 2009 A1
20090119097 Master et al. May 2009 A1
20090125298 Master et al. May 2009 A1
20090125301 Master et al. May 2009 A1
20090144273 Kappos Jun 2009 A1
20090165634 Mahowald Jul 2009 A1
20090228799 Verbeeck et al. Sep 2009 A1
20090240488 White et al. Sep 2009 A1
20100014828 Sandstrom et al. Jan 2010 A1
20100017366 Robertson et al. Jan 2010 A1
20100049514 Kennewick et al. Feb 2010 A1
20100145708 Master et al. Jun 2010 A1
20100158488 Roberts et al. Jun 2010 A1
20100205166 Boulter et al. Aug 2010 A1
20100211693 Master et al. Aug 2010 A1
20100235341 Bennett Sep 2010 A1
20100241418 Maeda et al. Sep 2010 A1
20100250497 Redlich et al. Sep 2010 A1
20110046951 Suendermann et al. Feb 2011 A1
20110071819 Miller et al. Mar 2011 A1
20110078172 LaJoie et al. Mar 2011 A1
20110082688 Kim et al. Apr 2011 A1
20110116719 Bilobrov May 2011 A1
20110132173 Shishido Jun 2011 A1
20110132174 Shishido Jun 2011 A1
20110173185 Vogel Jul 2011 A1
20110173208 Vogel Jul 2011 A1
20110213475 Herberger et al. Sep 2011 A1
20110244784 Wang Oct 2011 A1
20110247042 Mallinson Oct 2011 A1
20110276334 Wang et al. Nov 2011 A1
20110288855 Roy Nov 2011 A1
20120029670 Mont-Reynaud et al. Feb 2012 A1
20120036156 Mohajer et al. Feb 2012 A1
20120047156 Jarvinen et al. Feb 2012 A1
20120078894 Jiang et al. Mar 2012 A1
20120095958 Pereira et al. Apr 2012 A1
20120143679 Bernosky et al. Jun 2012 A1
20120232683 Master et al. Sep 2012 A1
20120239175 Mohajer et al. Sep 2012 A1
20130024442 Santosuosso et al. Jan 2013 A1
20130044885 Master et al. Feb 2013 A1
20130052939 Anniballi et al. Feb 2013 A1
20140019483 Mohajer Jan 2014 A1
20140316785 Bennett et al. Oct 2014 A1
20160103822 George Apr 2016 A1
20160292266 Mont-Reynaud et al. Oct 2016 A1
Foreign Referenced Citations (11)
Number Date Country
0944033 Sep 1999 EP
1367590 Dec 2003 EP
H11-272274 Oct 1999 JP
2000187671 Jul 2000 JP
9517746 Jun 1995 WO
9918518 Apr 1999 WO
03061285 Jul 2003 WO
2004091307 Oct 2004 WO
2008004181 Jan 2008 WO
2010018586 Feb 2010 WO
2013177213 Nov 2013 WO
Non-Patent Literature Citations (107)
Entry
U.S. Appl. No. 14/696,308—Office Action dated Aug. 11, 2016, 6 pages.
U.S. Appl. No. 14/696,308—Response to Aug. 11 Office Action filed Nov. 11, 2016, 9 pages.
U.S. Appl. No. 13/468,975—Office Action dated Jun. 19, 2014, 6 pages.
U.S. Appl. No. 13/468,975—Response to Jun. 19 Office Action filed Sep. 17, 2014, 10 pages.
U.S. Appl. No. 13/468,975—Notice of Allowance dated Jan. 6, 2015, 7 pages.
U.S. Appl. No. 14/696,308—Notice of Allowance dated Dec. 6, 2016, 5 pages.
PCT/US2009/066458—International Search Report, dated Jun. 23, 2010, 16 pages.
InData Corporation, DepoView Video Review Software Product Description, “InData's Newest Video Deposition Viewer”, Dec. 2007, 2 pgs.
InData Corporation, DepoView DVD, Video Review Software Product Brochure, Jun. 2008, 4 Pgs.
InData Corporation, DepoView Video Review Software Product Description, http://indatacorp.com/depoview.html, accessed Nov. 8, 2011, 2 Pgs.
Sony Ericsson's W850i Walkman Phone Now Available in the Middle East. Al-Bawaba News, 2006 Al-Bawaba. Dec. 11, 2006. Factiva, Inc. <www.albawaba.com>. 2 pages.
Blackburn, S G., "Content Based Retrieval and Navigation of Music," University of Southampton, Department of Electronics and Computer Science, Faculty of Engineering and Applied Science, Mar. 10, 1999, 41 Pages.
Blackburn, S., et al. “A Tool for Content Based Navigation of Music,” University of Southampton, Department of Electronics and Computer Science, Multimedia Research Group, Copyright 1998 ACM 1-58113-036-8/98/0008, pp. 361-368.
Blackburn, Steven G. “Content Based Retrieval and Navigation of Music Using Melodic Pitch Contours”. University of Southampton, Department of Electronics and Computer Science, Faculty of Engineering and Applied Science. Sep. 26, 2000. 136 Pages.
Blackburn, S G. “Search by Humming”. University of Southampton, Department of Electronics and Computer Science, Faculty of Engineering, May 8, 1997, 69 Pages.
Hum That Tune, Then Find it on the Web. NPR: Weekend Edition—Saturday, WKSA. Copyright 2006 National Public Radio. Dec. 23, 2006. Factiva, Inc. 2 pages.
Casey, M. A., et al., “Content-Based Music Information Retrieval: Current Directions and Future Challenges”. Apr. 2008, vol. 96, No. 4, Copyright 2008 IEEE, Retrieved from IEEE Xplore [retrieved on Dec. 29, 2008 at 18:02], 29 Pages.
Wagstaff, J., “Loose Wire: New Service Identifies Songs You Hum,” WSJA Weekend Journal. Copyright 2006, Dow Jones & Company, Inc. Dec. 25, 2006. Factiva, Inc. 2 pages.
Saltzman, M., “The Best Things in life are Free—For Your iPhone,” Home Electronics and Technology, For Canwest News Service. Copyright 2008 Edmonton Journal. Nov. 12, 2008. Factiva, Inc. 2 pages.
First Products with Gracenote Technology to Ship in 2008. Warren's Consumer Electronics Daily. Copyright 2007 Warren Publishing, Inc. Sep. 18, 2007. Factiva, Inc. 2 pages.
Gracenote Readies New Services, But Video Initiative Stalls. Warren's Consumer Electronics Daily. Copyright 2005 Warren Publishing, Inc. vol. 5; Issue 122. Jun. 24, 2005. Factiva, Inc. 2 pages.
Furui, S., “Digital Speech Processing, Synthesis, and Recognition”. Second Edition, Revised and Expanded. Nov. 17, 2000. ISBN 978-0824704520. 17 pages.
Ghias, A., et al. “Query By Humming,” Musical Information Retrieval in an Audio Database, Cornell University 1995, 6 Pages.
Mobile Music: Comcast Cellular First in U.S. to Trial Breakthrough Interactive Music Service Called *CD. Copyright PR Newswire, New York. ProQuest LLC. Feb. 11, 1999. Retrieved from the Internet: <http://proquest.umi.com.libproxy.mit.edu/pqdwb?did+38884944&sid=3&Fmt=3&clientld=5482&RQT=309&VName=PQD>. 3 pages.
Typke, R., et al., “A Survey of Music Information Retrieval Systems,” Universiteit Utrecht, The Netherlands. Copyright 2005 Queen Mary, University of London. 8 Pages.
Wang, A., “The Shazam Music Recognition Service”. Communications of the ACM, vol. 49, No. 8. Aug. 2006. ACM 0001-0782/06/0800. pp. 44-48. 5 pages.
Melodis Rolls Out midomi mobile. Wireless News. Copyright 2008 M2 Communications, Ltd. Mar. 6, 2008. 1 Page.
Zhu, Y., et al. “Warping Indexes with Envelope Transforms for Query by Humming”. New York University, New York. SIGMOD Copyright 2003, San Diego, CA. Jun. 9-12, 2003. ACM 1-58113-634-X/03/06. pp. 181-192. 12 Pages.
PCT/US2009/066458—International Preliminary Report on Patentability dated Jun. 7, 2011, 7 pages.
Wang et al., “Method and Apparatus for Recognizing Sound and Music Signals in High Noise and Distortion”, U.S. Appl. No. 60/222,023, dated Jul. 31, 2000, 26 pages.
Rhoads, G., “Methods and Systems Employing Digital Watermarking”, U.S. Appl. No. 60/134,782, dated May 19, 1999, 47 pages.
Finley, Michael, et al., “Broadcast Media Purchasing System”, U.S. Appl. No. 60/166,965, dated Nov. 23, 1999, 21 pages.
Swierczek, Remi, “Music Identification System”, U.S. Appl. No. 60/158,087 dated Oct. 7, 1999, 12 pages.
Swierczek, Remi, “Music Identification System”, U.S. Appl. No. 60/186,565, dated Mar. 2, 2000, 14 pages.
Chou, Ta-Chun, et al., “Music Databases: Indexing Techniques and Implementation”, Proceedings of International Workshop on Multimedia Database Management Systems, IEEE, dated Aug. 14-16, 1996, pp. 46-53, 8 pages.
McPherson, John R. and Bainbridge, David, “Usage of the MELDEX Digital Music Library”, 1999, in Proceedings of the International Symposium on Music Information Retrieval, (Bloomington, IN, USA, 2001), pp. 19-20, 2 pages.
Wold, Erling, et al., "Classification, Search, and Retrieval of Audio", Muscle Fish, Berkeley, CA, USA, CRC Handbook of Multimedia Computing 1999, pp. 1-19, 18 pages.
Wold et al., “Content-Based Classification, Search and Retrieval of Audio”, IEEE Multimedia 1070-986X/96, vol. 3, No. 3: Fall 1996, pp. 27-36 (17 pages).
Horn, Patricia, “What was that song? With a wireless phone, find out what you heard on the radio.”, The Inquirer, Philadelphia, Pennsylvania, USA, dated Feb. 11, 1999, 3 pages.
Kenyon, Stephen, et al., U.S. Appl. No. 60/218,824 for Audio Identification System and Method, Jul. 18, 2000, 45 pages.
Kenyon, Stephen, U.S. Appl. No. 60/155,064 for Automatic Program Identification System and Method, Sep. 21, 1999, 49 pages.
U.S. Appl. No. 13/401,728—Response to Jul. 17 Office Action filed Oct. 16, 2014, 16 pages.
U.S. Appl. No. 13/401,728—Notice of Allowance dated Mar. 4, 2015, 8 pages.
U.S. Appl. No. 13/401,728—Office Action dated Jul. 17, 2014, 11 pages.
StagePrompt Pro (formerly Music Scroller), <http://www.softpedia.com/get/Multimedia/Audio/Other-AUDIO-Tools/StagePrompt-Pro.shtml> last accessed Sep. 10, 2015, 2 pages.
U.S. Appl. No. 13/193,514—Office Action dated Jul. 17, 2015, 15 pages.
U.S. Appl. No. 13/193,514—Office Action dated Aug. 22, 2014, 20 pages.
U.S. Appl. No. 13/193,514—Office Action dated Jan. 6, 2014, 20 pages.
U.S. Appl. No. 13/310,630—Office Action dated Apr. 7, 2014, 14 pages.
U.S. Appl. No. 13/310,630—Office Action dated Nov. 19, 2014, 22 pages.
U.S. Appl. No. 13/310,630—Office Action dated Jun. 19, 2015, 24 pages.
U.S. Appl. No. 13/310,630—Office Action dated Nov. 2, 2015, 12 pages.
U.S. Appl. No. 13/372,399—Office Action dated May 18, 2012, 16 pages.
U.S. Appl. No. 13/372,399—Office Action dated Sep. 25, 2012, 18 pages.
U.S. Appl. No. 12/629,821—Office Action dated May 31, 2012, 10 pages.
U.S. Appl. No. 13/372,381—Office Action dated May 15, 2014, 15 pages.
U.S. Appl. No. 13/372,381—Office Action dated Nov. 5, 2014, 23 pages.
U.S. Appl. No. 13/482,792—Office Action dated Jun. 4, 2015, 48 pages.
U.S. Appl. No. 13/193,514—Notice of Allowance dated Mar. 11, 2016, 8 pages.
U.S. Appl. No. 13/310,630—Office Action dated Mar. 2, 2016, 17 pages.
U.S. Appl. No. 13/310,630—Response to Apr. 7 Office Action filed Oct. 6, 2014, 14 pages.
U.S. Appl. No. 13/310,630—Response to Nov. 19 Office Action filed Feb. 19, 2015, 15 pages.
U.S. Appl. No. 13/310,630—Response to Jun. 19 Office Action filed Sep. 18, 2015, 15 pages. (Meld 1014-4).
U.S. Appl. No. 13/310,630—Response to Nov. 2 Office Action filed Jan. 25, 2016, 17 pages.
U.S. Appl. No. 13/193,514—Response to Jan. 6 Office Action filed May 6, 2014, 11 pages.
U.S. Appl. No. 13/193,514—Response to Aug. 22 Office Action filed Jan. 21, 2015, 9 pages.
U.S. Appl. No. 13/193,514—Response to Jul. 17 Office Action filed Nov. 17, 2015, 16 pages.
U.S. Appl. No. 13/372,399—Response to May 18 Office Action filed Aug. 17, 2012, 12 pages.
U.S. Appl. No. 13/372,399—Response to Sep. 25 Office Action dated Sep. 25, 2012, 9 pages.
U.S. Appl. No. 13/372,381—Response to May 15 Office Action filed Oct. 9, 2014, 17 pages.
U.S. Appl. No. 12/629,821—Response to May 31 Office Action filed Nov. 30, 2012, 23 pages.
U.S. Appl. No. 13/372,399—Notice of Allowance dated Dec. 12, 2012, 8 pages.
U.S. Appl. No. 14/884,650—Office Action dated Dec. 7, 2015, 16 pages.
U.S. Appl. No. 13/482,792—Response to Feb. 13 Office Action filed May 13, 2015, 12 pages.
U.S. Appl. No. 14/884,650—Response to Dec. 7 Office Action filed Feb. 23, 2016, 15 pages.
U.S. Appl. No. 14/884,650—Office Action dated Jun. 13, 2016, 39 pages.
U.S. Appl. No. 13/482,792—Office Action dated Feb. 13, 2015, 28 pages.
U.S. Appl. No. 13/482,792—Office Action dated Jun. 26, 2014, 49 pages.
U.S. Appl. No. 14/884,650—Notice of Allowance dated Nov. 29, 2016, 19 pages.
U.S. Appl. No. 13/482,792—Notice of Allowance dated Jul. 16, 2015, 16 pages.
U.S. Appl. No. 13/482,792—Response to Jun. 26 Office Action filed Oct. 24, 2014, 17 pages.
U.S. Appl. No. 14/884,650—Response to Jun. 13 Office Action filed Aug. 30, 2016, 14 pages.
U.S. Appl. No. 15/182,300—Office Action dated Oct. 5, 2017, 58 pages.
U.S. Appl. No. 12/629,821, filed Dec. 2, 2009, U.S. Pat. No. 8,452,586, May 28, 2013, Issued.
U.S. Appl. No. 13/372,381, filed Feb. 13, 2012, 2013-0044885, Feb. 21, 2013, Abandoned.
U.S. Appl. No. 13/193,514, filed Jul. 28, 2011, U.S. Pat. No. 9,390,167, Jul. 12, 2016, Issued.
U.S. Appl. No. 15/182,300, filed Jun. 14, 2016, U.S. Pat. No. 10,055,490, Aug. 21, 2018, Issued.
U.S. Appl. No. 13/310,630, filed Dec. 2, 2009, Abandoned.
U.S. Appl. No. 13/372,399, filed Feb. 13, 2012, U.S. Pat. No. 8,433,431, Apr. 30, 2013, Issued.
U.S. Appl. No. 13/482,792, filed May 29, 2012, U.S. Pat. No. 9,196,242, Nov. 24, 2015, Issued.
U.S. Appl. No. 14/884,650, filed Oct. 15, 2015, U.S. Pat. No. 9,619,560, Apr. 11, 2017, Issued.
U.S. Appl. No. 13/468,975, filed May 2012, U.S. Pat. No. 9,035,163, May 19, 2015, Issued.
U.S. Appl. No. 14/696,308, filed Apr. 24, 2015, U.S. Pat. No. 9,633,371, Apr. 25, 2017, Issued.
U.S. Appl. No. 15/455,083, filed Mar. 9, 2017, U.S. Pat. No. 10,121,165, Nov. 6, 2018, Issued.
U.S. Appl. No. 13/401,728, filed Feb. 21, 2012, U.S. Pat. No. 9,047,371, Jun. 2, 2015, Issued.
Wold, Erling, et al., "Classification, Search, and Retrieval of Audio", Muscle Fish, Berkeley, CA, USA, CRC Handbook of Multimedia Computing 1999, pp. 1-19, 18 pages.
Ghias, A., et al. “Query by Humming—Musical Information Retrieval in an Audio Database,” ACM Multimedia 95—Electronic Proceedings, San Francisco, CA, Nov. 5-9, 1995, 13 Pages.
Han, B., et al. “M-Musics: Mobile Content-Based Music Retrieval System”. Copyright 2007, Augsburg, Bavaria, Germany. ACM 978-1-59593-01-8/07/0009. Sep. 23-28, 2007. pp. 469-470. 2 Pages.
Jang, J.R., et al. "A General Framework of Progressive Filtering and its Application to Query by Singing/Humming". IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, No. 2, Feb. 2008. pp. 350-358. 9 Pages.
Kosugi, N., et al. "A Practical Query-By-Humming System for a Large Music Database". NTT Laboratories, Japan. ACM Multimedia Los Angeles, CA, USA. Copyright ACM 2000 1-58113-198-4/00/10. pp. 333-342. 10 Pages.
McNab, R. J., et al. “Towards the Digital Music Library: Tune Retrieval from Acoustic Input”. University of Waikato, Department of Computer Science, School of Education. DL 1996, Bethesda MD USA. Copyright 1996 ACM 0-89791-830-4/96/03. pp. 11-18. 8 Pages.
McNab, R. J., et al. “The New Zealand Digital Library MELody inDEX”. University of Waikato, Department of Computer Science. D-Lib Magazine, May 1997 [retrieved on Jun. 12, 2011 at 11:25:49 AM]. ISSN 1082-9873. Retrieved from the Internet <http://dlib.org/dlib/may97/meldex/05written.html>, 13 pages.
Pardo, B., et al. “The VocalSearch Music Search Engine”. EECS, Northwestern University. JCDL 2008, Pittsburgh, Pennsylvania, USA. Jun. 16-20, 2008, ACM 978-1-59593-998-2/08/06. p. 430. 1 Page.
Mobile Music: Comcast Cellular First in U.S. to Trial Breakthrough Interactive Music Service Called *CD. Copyright PR Newswire, New York. ProQuest LLC. Feb. 11, 1999. Retrieved from the Internet: <http://proquest.umi.com.libproxy.mit.edu/pqdwb?did+38884944&sid=3&Fmt=3&clientId=5482&RQT=309&VName=PQD>. 3 pages.
Song, J., et al. "Query by Humming: Matching Humming Query to Polyphonic Audio," LG Electronics, Seoul, Korea. Copyright 2002 IEEE. 0-7803-7304-9/02. pp. 329-332. 4 Pages.
Taylor, C., “Company Lets Listeners Dial for CDs,” Billboard, vol. 1, No. 26, General Interest Module, Jun. 26, 1999, pp. 86-87, 2 pages.
“Can't Get That Song Out of Your Head,” Copyright 2007, The Jakarta Post, May 20, 2007, Factiva, Inc, 2 Pages.
Related Publications (1)
Number Date Country
20190019220 A1 Jan 2019 US
Provisional Applications (1)
Number Date Country
61484609 May 2011 US
Continuations (3)
Number Date Country
Parent 15455083 Mar 2017 US
Child 16134890 US
Parent 14696308 Apr 2015 US
Child 15455083 US
Parent 13468975 May 2012 US
Child 14696308 US