Providing audio content to a device

Information

  • Patent Grant
  • 10958781
  • Patent Number
    10,958,781
  • Date Filed
    Monday, May 6, 2013
  • Date Issued
    Tuesday, March 23, 2021
Abstract
The present disclosure describes receiving a trigger operation indication that content has been selected by a user device, and determining whether the content offers recurring audio content data. The operation may also include retrieving a first audio content and transmitting the first audio content to the user device.
Description
FIELD

The present disclosure is generally related to a communications network, and more particularly to a system, method, and non-transitory computer readable medium for providing audio announcements for content selections.


BACKGROUND

Automatic Number Identification (ANI) is a system utilized by telephone companies to identify the Directory Number (DN) of a calling subscriber. ANI serves a function similar to Caller ID, but may utilize different underlying technology; unlike ANI, Caller ID can be blocked by prefixing a call with *67. ANI was originally developed for telephone company billing purposes and is now offered to commercial customers who may benefit from knowing who is calling them. In addition, ANI is one of the core technologies behind the 911 emergency services.


In commercial applications, a user may have an integrated or external display affixed to a telephone. Such a display presents the ANI or telephone number of the calling party. In addition, the display may present the caller's name or calling name, also known as CNAM. Similarly, in the case of a Short Messaging Service (SMS) message, the display may present the sender's name. However, the user may prefer to hear the information as audio rather than watching the display. As such, audio may be provided to a user device based on certain content selection operations and corresponding settings.


SUMMARY

The present disclosure describes a system, method, and non-transitory computer-readable storage medium storing instructions that, when executed, cause a processor to provide an audio announcement of communications to a called party in a communication network. In one embodiment, a method includes receiving a communication from a calling party and performing a lookup of information relating to the calling party in a database via an Internet Protocol connection based on an identifier of at least one of the calling party and the called party. The information comprises one or more audio files. The audio announcement is then provided to the called party based on the audio files.


In another embodiment, a system comprises at least one device for receiving a communication from a calling party and at least one database for storing information associated with the calling party. The at least one device is operable to perform a lookup of information relating to the calling party in the database via an Internet Protocol connection based on an identifier of at least one of the calling party and the called party, wherein the information comprises one or more audio files, and to provide an audio announcement to a called party based on the audio files.


In a further embodiment, a computer-readable medium comprises instructions executable by a device for receiving a communication from a calling party, performing a lookup of information relating to the calling party in a database via an Internet Protocol connection based on an identifier of at least one of the calling party and the called party, wherein the information comprises one or more audio files, and providing an audio announcement to a called party based on the audio files.


In another embodiment, a method for providing audio content to a user device comprises receiving a trigger operation indication that a content source has been selected by a user device, determining whether the content source offers recurring audio content data, retrieving a first audio content and transmitting the first audio content to the user device, retrieving user preferences for receiving the recurring audio content data, and transmitting additional audio content that is different from the first audio content to the user device based on the user preferences.


In a further embodiment, a method comprises receiving a trigger operation indication that content has been selected by a user device, determining whether the content offers recurring audio content data, and retrieving a first audio content and transmitting the first audio content to the user device.


In another embodiment, an apparatus comprises a receiver configured to receive a trigger operation indication that a content source has been selected, a processor configured to determine whether the content source offers recurring audio content data, retrieve a first audio content, transmit the first audio content, and retrieve user preferences for receiving the recurring audio content data, and a transmitter configured to transmit additional audio content that is different from the first audio content based on the user preferences.


In a further embodiment, an apparatus comprises a receiver configured to receive a trigger operation indication that content has been selected and a processor configured to determine whether the content offers recurring audio content data, and to retrieve a first audio content and transmit the first audio content. The term apparatus can also refer to a system herein where the receiver and processor are not co-located.


In another embodiment, a non-transitory computer readable storage medium stores instructions that when executed cause a processor to perform: receiving a trigger operation indication that content has been selected by a user device, determining whether the content offers recurring audio content data, and retrieving a first audio content and transmitting the first audio content to the user device.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 depicts a system 100 for providing audio response in accordance with one embodiment of the present disclosure;



FIGS. 2A, 2B and 2C depict a flowchart of a method or computer readable medium comprising instructions for providing audio announcement of communications to a called party in accordance with one embodiment of the present disclosure;



FIG. 3 depicts a flowchart of a process (which includes a method or computer readable medium comprising instructions) for playing an audio announcement based on playback preference in accordance with an embodiment of the present disclosure;



FIG. 4 depicts a system 400 for providing audio response to a wireless called party in accordance with one embodiment of the present disclosure;



FIGS. 5A, 5B and 5C depict a flowchart of a method or computer readable medium comprising instructions for providing audio announcement of communications to a wireless called party in accordance with one embodiment of the present disclosure;



FIG. 6 illustrates an example flow chart method of operation according to other example embodiments; and



FIG. 7 illustrates an example system for performing one or more example processes according to example embodiments.





DETAILED DESCRIPTION

The present disclosure provides audio announcement of communications to a called party. In the context of the present disclosure, a calling party is a party initiating or sending a call or a message, and a called party is a party receiving the call or the message. Such a process is initiated when a calling party contacts or communicates with a called party, wherein the calling party and the called party each use at least one electronic device and the called party is able to receive ANI or Caller ID service and is able to display/play information related to such service. The message comprises a text message or a multimedia message. The at least one electronic device is at least one of a computer, an audio file database, a wireless phone, an Internet Protocol (IP) enabled phone, a wireless IP-enabled phone, or a device that can receive and/or transmit information. The computer readable medium (or software) of the present disclosure is stored on and/or runs on at least one of the above-mentioned electronic devices.



FIG. 1 depicts a system 100 for providing audio announcement of communications to a called party in accordance with one embodiment of the present disclosure. The system 100 includes a device 102, which is able to communicate with a called party device 104. The device 102 and the called party device 104 may communicate by calling or sending messages. The called party device 104 may be communicably coupled to the device 102 via a wired connection, such as a land line telephone, or a wireless connection, including but not limited to a cellular device, a Wi-Fi connection, a PDA, a Bluetooth connection, etc. The device 102 may be communicably coupled to, but not limited to, an RJ11 (telephone wire) communication address 106, 120, and/or an RJ11/wireless communication address 130. The device 102 can specify any communication address such as 106, 120, and 130 to receive information. The device 102 has a display 108 and a speaker 109 for presenting the information. In this embodiment, the display 108 and the speaker 109 are integrated within the device 102. However, the display 108 and the speaker 109 may be implemented as standalone devices without departing from the spirit and scope of the present disclosure. Moreover, the device 102 may contain modules such as a headset jack or a Bluetooth module to play audio. Further, the device 102 may be connected to one or more displays and/or speakers via a wired and/or wireless connection.


In addition, the device 102 may receive information from a plurality of sources including but not limited to a communications network 110 such as a public switched telephone network (PSTN), a code division multiple access (CDMA) network, or a global system for mobile communication (GSM) network. For example, a public switched telephone network (PSTN) may include a central office 112 that is coupled to a calling party device 114. The information may be received through at least an RJ11 (telephone wire) communication address 106 of the device 102. Other sources include a wireless network or data network (not shown) supporting other devices such as computers or IP-enabled phones.


Aspects of the present disclosure provide information to the called parties, such as the called party device 104, by providing an ability to retrieve information of a calling party from a CNAM database 116 and/or an other database 118. The device 102 communicates with the CNAM database 116 and/or the other database 118 via an IP connection. The CNAM database 116 and the other database 118 comprise information relating to the calling party, for example, the calling party name, telephone number, messages, location, and other information associated with the calling party. In addition, the information relating to the calling party may be a city, a state, an address, a hyperlink, a photo, a video, an announcement, a short film, one or more audio files, and any information that can be sent via an Internet Protocol (IP) connection.


When a calling party communicates by calling or sending a message using the calling party device 114, the device 102 receives a caller ID, Automatic Number Identification (ANI) or other identifier from the calling party. An example of the identifier may include an IP address of the calling party device 114 or a unique identifier of the calling party that can be received and transmitted by the calling party device 114. The identifier may include information related to at least one of a calling party and a called party.


In response to receiving the caller ID, ANI or other identifier, the device 102 sends a query for the calling party name to the CNAM database 116 and/or the other database 118. The query may be sent via at least one communication address such as 106, 120, and 130 (the communication address may include, but is not limited to, a wired communication and/or a wireless communication such as a cellular device, a Wi-Fi connection, a PDA, a Bluetooth connection, or the like) of the device 102 to the CNAM database 116 and/or the other database 118 via a direct connection or via a data network (not shown). Once the query is received, a lookup is performed in the CNAM database 116 and/or the other database 118 for the calling party name and other information. If the calling party name is found, the CNAM database 116 and/or the other database 118 returns the calling party name and other information associated with the calling party to the device 102. Thereafter, the device 102 may store the information associated with the calling party in cache 131. In an embodiment, the cache 131 may be implemented as local storage on the device 102. Further, the information may be stored based on configurable parameters such as, but not limited to, a number of audio files, a time duration, a size, and so forth. Moreover, the cache 131 may not include any duplicate information or records. For example, the information may be maintained for a total of 100 non-duplicate audio files for names of calling parties. In an embodiment of the disclosure, the size of the cache 131 may be limited to a predefined limit. For example, the predefined limit may be 200 KB. Further, the cache 131 may be maintained on a rolling basis. For example, after the size of the cache 131 reaches the predefined limit and new information is received, the earliest information in the cache 131 may be deleted.
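The rolling-cache behavior described above (no duplicate records, a predefined size limit such as 200 KB, and oldest-first eviction when new information arrives) can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the class and method names are assumptions:

```python
from collections import OrderedDict

class AnnouncementCache:
    """Sketch of cache 131: keyed by a caller identifier, rejects
    duplicates, and evicts the earliest entries once the stored
    size would exceed a predefined limit (rolling basis)."""

    def __init__(self, max_bytes=200 * 1024):  # e.g. the 200 KB limit
        self.max_bytes = max_bytes
        self.entries = OrderedDict()  # identifier -> audio bytes, oldest first
        self.used = 0

    def add(self, identifier, audio_bytes):
        if identifier in self.entries:  # no duplicate records are kept
            return False
        # Delete the earliest information until the new file fits.
        while self.entries and self.used + len(audio_bytes) > self.max_bytes:
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)
        self.entries[identifier] = audio_bytes
        self.used += len(audio_bytes)
        return True

    def lookup(self, identifier):
        return self.entries.get(identifier)
```

With a 10-byte limit for illustration, adding a third entry evicts the oldest one, so a lookup of the first identifier then returns nothing.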


The device 102 can maintain the cache 131 by adding, deleting or modifying information corresponding to a calling party or a called party. For example, the device 102 may delete audio files after a predefined number of days. Alternatively, the device 102 may allow a calling party or called party to modify or delete a file or clear data stored on the cache 131. Also, the device 102 ensures the integrity of the data stored in the cache 131. To maintain integrity, the device 102 may generate a key on the fly using attributes of the calling party/called party and encrypt the information, including the audio response, with the key. Alternatively, the device 102 may include software that detects and prevents malicious attacks and ensures data safety.
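One way to realize the integrity protection described above is to derive a key on the fly from party attributes and attach a keyed tag to each cached record so tampering is detectable. The disclosure does not fix an algorithm; SHA-256 key derivation and HMAC tagging here are assumptions, shown instead of full encryption for brevity:

```python
import hashlib
import hmac

def derive_key(calling_number, called_number):
    # Key generated on the fly from calling/called party attributes
    # (choice of SHA-256 is an assumption).
    return hashlib.sha256(f"{calling_number}|{called_number}".encode()).digest()

def seal(record_bytes, key):
    # Prepend a 32-byte HMAC tag so a modified cache record is detected.
    return hmac.new(key, record_bytes, hashlib.sha256).digest() + record_bytes

def verify(sealed, key):
    tag, record = sealed[:32], sealed[32:]
    expected = hmac.new(key, record, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("cache record failed integrity check")
    return record
```

A record sealed with the derived key verifies cleanly; flipping any bit of the sealed blob makes `verify` raise.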


Therefore, when a new communication such as a call or a message is received, the device 102 checks the cache 131 to determine whether the calling party information is located in the cache 131. If the information is present in the cache 131, then the device 102 verifies the status of an indicator for audio announcement. The indicator can be set by a called party or configured by the device 102 to an active or an inactive state. If the status of the indicator is active, for example, then the device 102 looks up the audio file included in the information in the cache 131. Subsequently, the audio announcement is played based on the audio file. For example, at least the name of the calling party may be announced as the audio. Otherwise, if the indicator is inactive, then the device 102 looks up the information excluding the audio file in the cache 131. Thereafter, the information is displayed on the display 108 of the device 102.
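The branch just described can be sketched as a small dispatch function: on a cache hit, the audio-announcement indicator decides whether the cached audio file is played or the remaining information is displayed; a cache miss signals that the database lookup path should be taken instead. Names and the dictionary layout of a cache entry are illustrative assumptions:

```python
def handle_communication(identifier, cache, indicator_active, play, display):
    """Play the cached audio announcement when the indicator is active,
    otherwise display the information excluding the audio file.
    Returns True on a cache hit, False when the CNAM/other database
    must be queried instead."""
    info = cache.get(identifier)
    if info is None:
        return False  # not cached; fall back to the database lookup
    if indicator_active and info.get("audio"):
        play(info["audio"])  # e.g. announce the calling party's name
    else:
        display({k: v for k, v in info.items() if k != "audio"})
    return True
```

For instance, with the indicator active a cached entry's audio bytes are handed to `play`; with it inactive only the non-audio fields reach `display`.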


In one embodiment of the disclosure, if the information is not available in the cache 131, then the device 102 verifies the status of an indicator. Thereafter, if the status of the indicator is active, then the device 102 sends a query, for example, with the calling party number or other identifier to the CNAM database 116 and/or the other database 118 for lookup of information including the audio file. The CNAM database 116 and/or the other database 118 returns calling party information to the device 102 if the calling party name/number and a corresponding audio file are found in the respective database. Thereafter, the audio announcement is played based on the audio file. In an embodiment of the disclosure, the audio file is streamed to the device 102 for playing the audio announcement. For example, the audio announcement may be played while the audio file is being downloaded and/or stored on the device 102. In another embodiment of the disclosure, the audio file is downloaded and stored in the cache 131. Therefore, the device 102 may not be required to connect to the CNAM database 116 and/or the other database 118 when the information is available in the cache 131. However, if the status of the indicator is set as inactive, then the device 102 looks up the information excluding the audio file in the CNAM database 116 and/or the other database 118. Thereafter, the information is displayed on the display 108 of the device 102.


The device 102 may provide a playback preference to the called party for selecting a module for playback of the audio announcement. The modules include, for example, but are not limited to, a headset, a speaker, or a Bluetooth device, such as an external device capable of playing audio through Bluetooth pairing. The device 102 captures the playback preference of a module for the called party. For example, the playback preference may include a language control/selection option from the service provider site 408, and the language control/selection option may further be provided by the central office 112, the device 102, the calling party, or the called party. In another embodiment of the disclosure, the CNAM request can also include a language indicator to let the service provider site 408 and/or the central office 112 know the spoken language in which to generate the audio file. Further, the device 102 may have a default module in case the module selected by the user is not available. For example, the default module may be the speaker 109 of the device 102. Thereafter, the audio announcement may be played through the selected module; however, if the selected module is not available, the default module may be selected for playback. For example, the called party device 104 may enable the audio response to be played over Bluetooth and over the ringer speaker with a ringer interrupt. Alternatively, the called party device 104 may enable the textual name to be displayed over Bluetooth on external displays such as in automobiles.


In addition to displaying/playing the information and audio response, the device 102 may send the information to other user devices, such as called party devices 104, 122, and 126, to be contemporaneously displayed on displays 109, 124, and 128, respectively, and played on their speakers 111, 123, and 129, respectively. In an embodiment, while the audio announcement is played, the audio announcement may refer to links that are displayed on the display 108. Further, the links displayed may be clickable. To this end, a URL may be embedded in the information displayed on the display 108. Further, when the called party clicks the URL, a browser is launched with relevant information of the called party.


In this example, displays 109, 124, and 128 are implemented as standalone devices. In other embodiments, the displays 109, 124, and 128 or speakers 111, 123 and 129 can be communicably coupled to called party devices 104, 122, and 126 or may be integrated with called party devices 104, 122 and 126 without departing from the spirit and scope of the present disclosure. For example, display 128 may be integrated as part of the called party device 126 and the device 102 may send information directly to the called party device 126 to be displayed on display 128. The information may be sent from at least one communication address such as 106, 120, 130 of the device 102 or via wireless connection 130.


The information/audio response received at the device 102 may include number(s) indicating the sender's phone number, as well as the sender's name, city, and/or state. In addition, the information/audio response may include, for example, alerts in response to an occurrence of an event, informative and promotional information from a service provider, and situational information from an emergency service provider. Furthermore, the information/audio response may include information relating to the calling party, such as an address, a hyperlink, a photo, a video, and any information that can be sent via an Internet Protocol connection.


Referring to FIGS. 2A, 2B, and 2C, a flowchart of a method or computer readable medium comprising instructions for providing audio response to a called party is depicted in accordance with one embodiment of the present disclosure. In this example, process 200 may be implemented as instructions executed within the device 102. Process 200 begins at step 202 with receiving a caller ID or identifier of the calling party from a calling party device. The caller ID, ANI or other identifier may be received via at least one communication address such as 106, 120, or 130 of the device 102. Thereafter, at step 204 it is determined whether the status of an indicator for audio response is set as active. If the indicator is set to active, then the process continues to step 212; if inactive, then the process continues to step 206.


At step 212, a lookup is performed in the cache 131 of the device 102 for the audio files corresponding to the identifier. Thereafter, at step 214, if the audio file is available in the cache 131 then the audio announcement is played at step 216. Otherwise, if the information or the audio file is not available, then the process 200 continues to step 218. At step 218, a lookup is performed in the CNAM database 116 and/or the other database 118. Subsequently, at step 220, the information is downloaded and stored in the cache 131. Further, the audio announcement is played based on the audio file at step 216. In an embodiment of the disclosure, the audio announcement is streamed or played while being downloaded. As a result, the waiting time of downloading and then playing the file is reduced. In another embodiment of the disclosure, the audio file is downloaded on the device 102 and then the announcement is played.
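The streaming variant of steps 220 and 216 can be sketched as playing each chunk of the audio file as it arrives while accumulating the full file for the cache, so the announcement starts before the download completes. The chunk iterable standing in for the network transfer is an assumption; the disclosure does not specify a transport:

```python
def stream_and_cache(chunks, play_chunk, cache, identifier):
    """Play audio chunks as they are received (streamed announcement)
    and store the assembled file in the cache so the next call for the
    same identifier can be served locally."""
    received = bytearray()
    for chunk in chunks:
        play_chunk(chunk)          # announcement plays during the download
        received.extend(chunk)
    cache[identifier] = bytes(received)
    return cache[identifier]
```

After streaming two chunks, the cache holds the concatenated file and every chunk has already been handed to the player in order.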


As discussed above, if the audio indicator status is not set to active, then the lookup is performed in the cache 131 for information excluding the audio file. For example, the lookup may be performed for text data such as the name of the calling party, but excluding the audio file. The process 200 then continues to step 208, where it is determined whether the information is available in the cache 131. If the information is available, then the information is displayed on the device 102 at step 210. Otherwise, the process continues to step 222, where the information is looked up excluding the audio file. Subsequently, the information is displayed at step 210. In an embodiment of the disclosure, the information displayed at step 210 is clickable. For example, the text displayed from the information can be clicked to open a browser for additional information.


Referring to FIG. 3, a flowchart of a process (which includes a method or computer readable medium comprising instructions) is depicted for playing an audio announcement based on playback preference, in accordance with an embodiment of the present disclosure. Process 300 begins at step 302, where a playback preference of the user is determined. For example, the user may select a module from preferences such as a headset, a speaker, a Bluetooth device, and so forth. Thereafter, at step 304, it is determined whether the selected module based on the playback preference of the user is available.


If the selected module is available, then the audio announcement is played through the selected module at step 306. Otherwise, if the selected module is not available, then a default module is selected at step 308. For example, a default module may be the speaker of the device 102. Subsequently, the audio announcement is played through the default module at step 310.
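Process 300 reduces to a small selection rule: honor the called party's preferred playback module when it is available, otherwise fall back to a default such as the device speaker. The function below is a minimal sketch of that rule with assumed names:

```python
def select_playback_module(preferred, available, default="speaker"):
    """Return the preferred playback module (headset, speaker,
    Bluetooth device, ...) if available; otherwise fall back to the
    default module, e.g. the speaker of the device."""
    if preferred in available:
        return preferred
    return default
```

For example, preferring a Bluetooth device while only the speaker is present yields the speaker, matching steps 308 and 310.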


Referring to FIG. 4, a system 400 for providing audio response to a wireless called party is depicted in accordance with an alternative embodiment of the present disclosure. System 400 is similar to system 100 in FIG. 1, except that the device 102 is implemented as a wireless communication enabled device, such as a mobile phone 402, a smart phone 404, or a Personal Digital Assistant (PDA) 406. In an embodiment of the disclosure, the software of the device 102 is implemented on called party devices such as the mobile phone 402, the smart phone 404, or the PDA 406. To send and receive information to and from the CNAM database 116 or other database 118, one or more of the mobile devices 402, 404, and 406 can wirelessly communicate with a service provider site 408, which is also communicably coupled to the CNAM database 116 and the other databases 118 via a data network (not shown) and to the calling party device(s) 114 via at least one communication network such as a public switched telephone network (PSTN) 110, a code division multiple access (CDMA) network, or a global system for mobile communication (GSM) network. The calling party device 114 can be, but is not limited to, a mobile phone, a smart phone, a PDA, a landline, and so forth.


In one embodiment of the present disclosure, a calling party device 114 connects to a receiving party device such as a mobile phone 402, a smart phone 404, or a PDA 406. At least one of the receiving party devices includes software to obtain information based on the caller ID, ANI or other identifier. The receiving party devices such as 402, 404, and 406 may receive the calling party 114 phone number via a service provider 408. The receiving party device, such as the mobile phone 402, retrieves the phone number through the software and sends it to the service provider 408 through internet connectivity including but not limited to FTP, HTTP, TELNET, etc. The service provider 408 may function as a web server, listening for information and requests from the software. When the service provider 408 receives a request with the calling party 114 phone number, it sends the request to at least the CNAM DB 116 for the name or the Message DB 132 for a message. In another embodiment of the present disclosure, when at least the calling party 114 name or other information is returned by at least the CNAM database 116, an audio file is generated based on at least the name from the CNAM database 116. The generated file may then be stored at least in the mobile phone 402 as a table for later matching of name requests or other information. The information gathered from at least the databases 116 and 132 by the service provider 408 is sent to the receiving party device such as the mobile phone 402 (not shown).
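The service provider 408's role as a lookup server can be sketched as a handler that, given the calling party's number, consults at least the CNAM database for a name and the message database for any message and returns whatever was found. The dictionary-style database interfaces and field names below are assumptions for illustration:

```python
def handle_lookup_request(phone_number, cnam_db, message_db):
    """Sketch of the service provider 408 handling a request carrying
    the calling party's phone number: query the CNAM DB 116 for the
    name and the Message DB 132 for a message, and return the gathered
    information for the receiving party device."""
    response = {"number": phone_number}
    name = cnam_db.get(phone_number)
    if name is not None:
        response["name"] = name
    message = message_db.get(phone_number)
    if message is not None:
        response["message"] = message
    return response
```

A number present in both databases yields a response with name and message; an unknown number yields just the echoed number, leaving the device to fall back to its own handling.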


In accordance with one embodiment of the present disclosure, aspects of the present disclosure are provided within the called party devices. Thus, when a calling party communicates by calling or sending a message using the calling party device 114, the called party device, such as the mobile device 402, receives a caller ID, Automatic Number Identification (ANI) or other identifier from the calling party. An example of the identifier may include an IP address of the calling party device 114 or a unique identifier of the calling party that can be received and transmitted by the calling party device 114. The identifier may include information related to at least one of a calling party and a called party.


In response to receiving the caller ID, ANI or other identifier, the called party device 402 sends a query for the calling party name to the CNAM database 116 and/or the other database 118. The query may be sent wirelessly from the called party device 402 to the CNAM database 116 and/or the other database 118 via a direct connection or via a data network (not shown). Once the query is received, a lookup is performed in the CNAM database 116 and/or the other database 118 for the calling party name and other information. If the calling party name is found, the CNAM database 116 and/or the other database 118 returns the calling party name and other information associated with the calling party to the called party device 402. Thereafter, the called party device 402 may store the information associated with the calling party in cache 131. In an embodiment, the cache 131 may be implemented as local storage on the called party device 402. Further, the information may be stored based on configurable parameters such as, but not limited to, a number of audio files, a time duration, a size, and so forth. Moreover, the cache 131 may not include any duplicate information or records. For example, the information may be maintained for a total of 100 non-duplicate audio files for names of calling parties. In an embodiment of the disclosure, the size of the cache 131 may be limited to a predefined limit. For example, the predefined limit may be 200 KB. Further, the cache 131 may be maintained on a rolling basis. For example, after the size of the cache 131 reaches the predefined limit and new information is received, the earliest information in the cache 131 may be deleted.


The called party device 402 can maintain the cache 131 by adding, deleting or modifying information corresponding to a calling party or a called party. For example, the called party device 402 may delete audio files after a predefined number of days. Alternatively, the called party device 402 may allow a calling party or called party to modify or delete a file or clear data stored on the cache 131. Also, the called party device 402 ensures the integrity of the data stored in the cache 131. To maintain integrity, the called party device 402 may generate a key on the fly using attributes of the calling party/called party and encrypt the information, including the audio response, with the key. Alternatively, the called party device 402 may include software that detects and prevents malicious attacks and ensures data safety.


Therefore, when a new communication such as a call or a message is received, the called party device 402 checks the cache 131 to determine whether the calling party information is located in the cache 131. If the information is present in the cache 131, then the called party device 402 verifies the status of an indicator for audio announcement. The indicator can be set by a called party or configured by the called party device 402 to an active or an inactive state. If the status of the indicator is active, for example, then the called party device 402 looks up the audio file included in the information in the cache 131. Subsequently, the audio announcement is played based on the audio file. For example, the name of the calling party may be announced as the audio. Otherwise, if the indicator is inactive, then the called party device 402 looks up the information excluding the audio file in the cache 131. Thereafter, the information is displayed on the called party device 402.


In one embodiment of the disclosure, if the information is not available in the cache 131, then the called party device 402 verifies the status of an indicator. Thereafter, if the status of the indicator is active, then the called party device 402 sends a query, for example, with the calling party number or other identifier to the CNAM database 116 and/or the other database 118 for lookup of information including the audio file. The CNAM database 116 and/or the other database 118 returns calling party information to the called party device 402 if the calling party name/number and a corresponding audio file are found in the respective database. Thereafter, the audio announcement is played based on the audio file. In an embodiment of the disclosure, the audio file is streamed to the called party device 402 for playing the audio announcement. For example, the audio announcement may be played while the audio file is being downloaded and/or stored on the called party device 402. In another embodiment of the disclosure, the audio file is downloaded and stored in the cache 131. Therefore, the called party device 402 may not be required to connect to the CNAM database 116 and/or the other database 118 when the information is available in the cache 131. However, if the status of the indicator is set as inactive, then the called party device 402 looks up the information excluding the audio file in the CNAM database 116 and/or the other database 118. Thereafter, the information is displayed on the called party device 402.


Referring to FIGS. 5A, 5B, and 5C, a flowchart of a method or computer readable medium comprising instructions for providing an audio response to a wireless called party is depicted in accordance with one embodiment of the present disclosure. In this example, process 500 may be implemented as instructions executed within the called party device 402. Process 500 begins at step 502 with receiving a caller ID or other identifier of the calling party from a calling party device. The caller ID, ANI, or other identifier may be received wirelessly at the called party device 402. Thereafter, at step 504, it is determined whether the status of an indicator for audio response is set as active. If the indicator is set to active, the process continues to step 512; if inactive, the process continues to step 506.


At step 512, a lookup is performed in the cache 131 of the called party device 402 for the audio file corresponding to the identifier. Thereafter, at step 514, if the audio file is available in the cache 131, the audio announcement is played at step 516. Otherwise, if the information or the audio file is not available, the process 500 continues to step 518. At step 518, a lookup is performed in the CNAM database 116 and/or the other database 118. Subsequently, at step 520, the information is downloaded and stored in the cache 131. Further, the audio announcement is played based on the audio file at step 516. In an embodiment of the disclosure, the audio announcement is streamed, or played while being downloaded. As a result, the wait otherwise incurred by downloading the file and then playing it is reduced. In another embodiment of the disclosure, the audio file is downloaded on the called party device 402 and then the announcement is played.


As discussed above, if the audio indicator status is not set to active, then the lookup is performed in the cache 131 for the information excluding the audio file. For example, the lookup may be performed for text data, such as the name of the calling party, but excluding the audio file. The process 500 then continues to step 508, where it is determined whether the information is available in the cache 131. If the information is available, it is displayed on the called party device 402 at step 510. Otherwise, the process continues to step 522, where the information is looked up excluding the audio file. Subsequently, the information is displayed at step 510. In an embodiment of the disclosure, the information displayed at step 510 is clickable. For example, the text displayed from the information can be clicked to open a browser for additional information.


Although an exemplary embodiment of the system, method, and computer readable medium of the present disclosure has been illustrated in the accompanied drawings and described in the foregoing detailed description, it will be understood that the disclosure is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit and scope of the present disclosure as set forth and defined by the following claims. For example, a greater or lesser number of elements, modules, hardware, software, and/or firmware can be used to provide information delivery without departing from the spirit and scope of the present disclosure. Also, the device 102 may be a wireless mobile phone, a personal digital assistant, a cellular phone, an IP-enabled caller ID device, or a wired telephone that has IP communication capabilities. Further, the device 102 may include a memory (not shown) and a processor (not shown) to execute the process or the instructions. The memory may be, for example, a Read Only Memory (ROM), a Random Access Memory (RAM), disc media, or any other computer readable medium comprising instructions executable by the processor. Although the device 102 is shown separate from the receiving party device 104, a person skilled in the art will appreciate that they can be co-located. Moreover, the receiving party device 104 may include all the functionalities of the device 102, without departing from the scope of this disclosure.


According to another example embodiment, a voice or audio file (e.g., a .wav, .aiff, .au, or other file type) can be retrieved and activated during the operation of another mobile phone, computing device, or wireless device function. For example, a user may access a web browser or email and select a link to an advertisement web page or other advertisement on their personal display. Alternatively, the advertisement may initiate automatically based on a first trigger operation. For example, a call may be received, an email may be received, a file or browser navigation operation may be detected, etc., and the advertisement may be initiated as a result of the trigger operation. Also, audio may be initiated based on a first trigger operation. The audio may include an audio advertisement, an audio announcement of a caller identification, or other audio information, which can be based on the recipient device location. As a result, a geolocation technique, such as GPS or triangulation, may be used to modify the dialect of the voice-recorded audio data (e.g., southern English, northern English, Canadian English, Hispanic English, or another language dialect), and certain regional words or phrases can also be added to the audio data received by the user device.
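As a rough illustration of location-influenced dialect selection, the sketch below maps a device coordinate to a regional audio variant. The region boundaries, file names, and `select_dialect_file` helper are invented for illustration; a real implementation would use an actual geolocation service rather than a crude latitude test:

```python
DIALECT_FILES = {
    "southern_english": "caller_name_southern.wav",
    "northern_english": "caller_name_northern.wav",
    "canadian_english": "caller_name_canadian.wav",
    "default": "caller_name_default.wav",
}

def region_for(lat, lon):
    """Very coarse, hypothetical bucketing of North American latitudes."""
    if lat >= 49.0:
        return "canadian_english"
    if lat >= 39.0:
        return "northern_english"
    if lat >= 25.0:
        return "southern_english"
    return "default"

def select_dialect_file(lat, lon):
    """Pick the audio variant for the device's reported location."""
    return DIALECT_FILES[region_for(lat, lon)]
```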


In operation, a user device can receive a voice message the first time an advertisement or any data is displayed on a user's display. Thereafter, further voice data updates may be provided without any further action from the user. For example, a user device may display an advertisement banner on a browser or other application. In one example, a grocery store advertisement may be displayed on a browser or application window, and if a user engages with the advertisement, or if the advertisement simply pops up automatically, then an audio message may be retrieved separately and played by the user device. At a later time in the same day, a different day, etc., another audio message and/or advertisement from the same grocery store may be sent to the user device indicating a 20% off limited offer on a perishable grocery item. Another example may be an emergency service “advertisement” or “alert” which is displayed during inclement weather. If the user engages with the advertisement, or if it just pops up automatically, then an audio message about a new alert regarding the weather/emergency event is played by the user device. On the same day, a different day, etc., another audio message and/or advertisement from the emergency services entity, which can be fire, police, ambulance, FEMA, etc., can be sent to the user device indicating further actions to take and/or information to be consumed by the end user of the user device.


In addition to or apart from the audio, a text message, email, etc., can be sent so the user can hear a tone and determine the type of message (e.g., a coupon, an emergency, etc.). As a result, the tone can imply further information, such as what type of coupon or what type of emergency, or information that is even unrelated to the original message content (e.g., an advertisement within an advertisement).
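One way to realize the tone-per-message-type idea is a simple lookup table on the device; the tone names below are illustrative placeholders, not files the disclosure defines:

```python
MESSAGE_TONES = {
    "coupon": "tone_two_note_rising.wav",
    "emergency": "tone_siren_burst.wav",
    "weather": "tone_chime_triple.wav",
}

def tone_for(message_type, default="tone_generic.wav"):
    """Return the tone that identifies a message category by sound alone."""
    return MESSAGE_TONES.get(message_type, default)
```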


In selecting a content source, the user may click, select, perform a cursor roll-over operation, hover a mouse cursor for a predetermined amount of time, touch a touch screen device, use a hand or other body part to make a selection, and/or speak to create an audible interaction with any content source, such as an advertisement, which may be a photo, video, web link, etc. At the point any of these operations is performed, the audio content may be retrieved and “overlaid” with the data as a voice-over type of function. Also, the audio may be retrieved and provided to the user after a certain portion of the data being consumed from the original source content has been identified. For example, a user may roll over a banner advertisement's predefined window area on a display surface of a computing device with a cursor. As a result of a 2, 3 . . . n second cursor roll-over operation, the first audio content may be provided to the user. Thereafter, the user preferences may be identified to determine whether subsequent audio content is acceptable. The user preferences may include a yes or no to subsequent audio content. Assuming the user has permitted such subsequent audio content to be received, the operation or process which provides the first audio may then set up a process that begins operating according to a specified amount of time.
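The n-second roll-over trigger and the yes/no preference gate described above might be sketched as follows; the 2-second threshold and the function names are assumptions made for illustration:

```python
def rollover_triggers_audio(hover_seconds, threshold_seconds=2.0):
    """True once the cursor has dwelt on the ad's window long enough."""
    return hover_seconds >= threshold_seconds

def plan_audio(hover_seconds, allow_subsequent, threshold_seconds=2.0):
    """Return which audio deliveries the operation should schedule."""
    if not rollover_triggers_audio(hover_seconds, threshold_seconds):
        return []
    plan = ["first_audio_content"]
    if allow_subsequent:  # user preference: yes/no to subsequent audio
        plan.append("recurring_audio_process")
    return plan
```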


In one example, a grocery store advertisement (such as a banner ad, rich media ad, social network ad, online classified ad, and the like) may be selected, and a user may hear a list of items on sale as audio content at their nearest grocery store ‘XYZ.’ Later that same day, the user may receive a text message, email, or additional audio notification that the same store has just marked down all apples, seafood, or dairy with any purchase. Since the original content was accepted, the subsequent content was continually supplied for a predefined amount of time (e.g., 24 hours). A similar example may include a hurricane notification or emergency message. This may be especially important if a user goes to sleep and receives an audible indicator in the middle of the night that the “wind speeds are higher than expected, stay away from windows, consider staying in the basement.” Such an audible message may alert the user to do something, assuming a text message indicator was missed by a sleeping user. Also, it is important to note that other data (e.g., text, images, video, links, etc.) can be presented with the audio or after the audio.
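A minimal sketch of the predefined delivery window (e.g., 24 hours) follows; the parameter names are assumptions:

```python
from datetime import datetime, timedelta

def within_delivery_window(accepted_at, now, window_hours=24):
    """True while follow-up content may still be supplied after acceptance."""
    return accepted_at <= now <= accepted_at + timedelta(hours=window_hours)
```

A delivery process would consult this check before sending each subsequent notification.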



FIG. 6 illustrates an example method of operation according to example embodiments. Referring to FIG. 6, the flow diagram 600 includes a first operation of receiving a trigger operation indication that a content source has been selected 602. The user may perform a roll-over operation and trigger the indication to be created and to launch the content source's active operations. A determination may then be made whether the content source offers recurring data at operation 604. If not, then non-recurring or one-time data may be retrieved responsive to the trigger operation at operation 614. The non-recurring data may then be displayed on the display device of the user device at operation 616. Assuming the user has elected to receive audio content, then the first set of audio data may then be provided to the user device at operation 606. Thereafter, user preferences may be retrieved from memory to identify the user's more specific preferences regarding subsequent audio data. For instance, the user may desire to receive audio data but only for certain types of information, such as coupons for stores, emergency alerts, other content types, etc. As a result, the subsequent audio may be subject to one or more restrictions identified by the user preferences and thus may not be permitted to be transmitted to the user device.
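The branches of flow 600 can be summarized in a compact sketch. The dictionary layout for the content source and preferences is hypothetical, but the branch structure mirrors the figure: no recurring offer leads to one-time data being retrieved and displayed (operations 614/616); otherwise the first audio is sent (operation 606) and follow-ups are filtered by user preferences:

```python
def handle_trigger(source, prefs):
    """Return the ordered delivery events for one trigger indication."""
    events = []
    if not source.get("offers_recurring"):            # operation 604 -> 614/616
        events.append(("display", source["one_time_data"]))
        return events
    events.append(("audio", source["first_audio"]))   # operation 606
    for item in source.get("recurring", []):
        # Subsequent audio is restricted to user-approved content types;
        # disallowed items are simply not transmitted.
        if item["type"] in prefs.get("allowed_types", []):
            events.append(("audio", item["audio"]))
    return events
```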



FIG. 7 illustrates an example system 700 according to example embodiments. In FIG. 7, the system 700 may be one device, such as a computer or server, or multiple devices operating as a single system. In one example of operation, the trigger reception module 710 may receive a trigger operation indication that a content source has been selected by a user device. In response, the content and user preference information databank 740 may reference pre-stored content source information and determine whether the content source offers recurring audio content data. If so, the first audio content may be retrieved from memory 740 and transmitted to the user device via the adaptation module 720. Next, user preferences may be retrieved from memory 740, and the recurring audio content data may also be retrieved. The additional audio content data may be transmitted to the user device and may be different from the first audio content data. Also, the subsequent or additional data may be delivered to the user device based on the user preferences stored in memory 740.


The trigger operation may include an advertisement selection operation, such as an item selection operation, a cursor roll-over operation, a touch screen input operation, or an audible input selection operation in which a user speaks a command or makes a voice-based selection. Alternatively, the content source may be determined not to offer the recurring audio content data, in which case non-recurring data may be retrieved responsive to the trigger operation and displayed on a display device of the user device. The user preferences for receiving the recurring audio content data may include preferences for predetermined content types, including at least one of consumer advertisement data, emergency alert data, and local weather data.


The user preference data may include a fixed time interval during which multiple audio content data messages may be received. For example, the user may elect to receive notifications for 5, 10, 24, or ‘n’ hours. The additional audio content data may be based on the same content type as the first audio content data. For example, a first notification may be a coupon for store XYZ, and subsequent content may be coupons for the same store or similar items. Any changes to the user preferences may be detected and applied by the user preference update module 730.
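Combining the two preference dimensions above (a fixed interval of n hours and a same-content-type constraint relative to the first notification), a hypothetical follow-up filter might look like:

```python
def allow_followup(first_type, followup_type, hours_since_first, interval_hours):
    """Permit a follow-up only if it matches the first notification's
    content type and the user-elected interval has not yet elapsed."""
    return followup_type == first_type and hours_since_first <= interval_hours
```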


The embodiments of the disclosure are described above with reference to block diagrams and schematic illustrations of methods and systems according to embodiments of the disclosure. It will be understood that each block of the diagrams and combinations of blocks in the diagrams can be implemented by computer program instructions. These computer program instructions can be loaded onto one or more general purpose computers, or other programmable data processing apparatus to produce machines, such that the instructions which execute on the computers or other programmable data processing apparatus create means for implementing the functions specified in the block or blocks. Such computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the block or blocks.

Claims
  • 1. A method, comprising: receiving, from a user device, an indication of a trigger operation that includes a selection of an advertisement by the user device, wherein the indication of the trigger operation indicates that a content source has been selected more than once by the user device during a selection operation;retrieving first audio content associated with the selected advertisement and overlaying the first audio content with advertisement content, wherein the overlaid audio content is provided to the user device after a certain portion of the advertisement content has been consumed based on a predefined period of time associated with the selection of the advertisement;determining, in response to the trigger operation, whether the content source offers a recurring audio content data associated with the trigger operation;retrieving, based on a user preference indicating that subsequent audio content is acceptable, and if the content source offers recurring audio content data, the recurring audio content from the content source; andtransmitting, in accordance with a recurring schedule without any further action from the user device, the recurring audio content data to the user device during a fixed time interval specified by the user preference, wherein the recurring audio content comprises audio content identifying a same retailer identified in the first audio content.
  • 2. The method of claim 1, wherein the trigger operation comprises an advertisement selection operation wherein the advertisement selection operation comprises an item selection operation, a cursor roll-over operation, a touch screen input operation, and an audible input selection operation.
  • 3. The method of claim 1, wherein the user preference for receiving the recurring audio content data include preferences for predetermined content types comprising at least one of consumer advertisement data, emergency alert data, and local weather data.
  • 4. The method of claim 3, wherein the preferences further include a fixed time interval during which multiple audio content data messages may be received.
  • 5. The method of claim 4, wherein the additional audio content data is based on a same content type as the non-duplicate audio content data.
  • 6. An apparatus, comprising: a receiver configured to receive an indication of a trigger operation that includes a selection of an advertisement by the user device, wherein the indication of the trigger operation indicates that a content source has been selected more than once by the user device during a selection operation; anda processor configured to: retrieve first audio content associated with the selected advertisement and overlay the first audio content with advertisement content, wherein the overlaid audio content is provided to the user device after a certain portion of the advertisement content has been consumed based on a predefined period of time associated with the selection of the advertisement;determine, in response to the trigger operation, whether the content source offers a recurring audio content data associated with the trigger operation;retrieve, based on a user preference indicating that subsequent audio content is acceptable, and if the content source offers recurring audio content data, the recurring audio content from the content source; andtransmit, in accordance with a recurring schedule without any further action from the user device, the recurring audio content data to the user device during a fixed time interval specified by the user preference, wherein the recurring audio content comprises audio content which identifies a same retailer identified in the first audio content.
  • 7. The apparatus of claim 6, wherein the trigger operation comprises an advertisement selection operation, wherein the advertisement selection operation comprises an item selection operation, a cursor roll-over operation, a touch screen input operation, and an audible input selection operation.
  • 8. The apparatus of claim 6, wherein the user preferences for receiving the recurring audio content data include preferences for predetermined content types comprising at least one of consumer advertisement data, emergency alert data, and local weather data.
  • 9. The apparatus of claim 8, wherein the user preference data comprises a fixed time interval during which multiple audio content data messages may be received.
  • 10. The apparatus of claim 9, wherein the additional audio content data is based on a same content type as the non-duplicate audio content data.
  • 11. A non-transitory computer readable storage medium storing instructions that when executed by a processor cause the processor to perform: receiving, from a user device, an indication of a trigger operation that includes a selection of an advertisement by the user device, wherein the indication of the trigger operation indicates that a content source has been selected more than once by the user device during a selection operation;retrieving first audio content associated with the selected advertisement and overlaying the first audio content with advertisement content, wherein the overlaid audio content is provided to the user device after a certain portion of the advertisement content has been consumed based on a predefined period of time associated with the selection of the advertisement;determining, in response to the trigger operation, whether the content source offers a recurring audio content data associated with the trigger operation;retrieving, based on the user preference indicating that subsequent audio content is acceptable, and if the content source offers recurring audio content data, the recurring audio content from the content source; andtransmitting, in accordance with a recurring schedule without any further action from the user device, the recurring audio content data to the user device during a fixed time interval specified by the user preference, wherein the recurring audio content comprises audio content identifying a same retailer identified in the first audio content.
  • 12. The non-transitory computer readable storage medium of claim 11, wherein the trigger operation comprises an advertisement selection operation, wherein the advertisement selection operation comprises an item selection operation, a cursor roll-over operation, a touch screen input operation, and an audible input selection operation.
  • 13. The non-transitory computer readable storage medium of claim 11, wherein the user preference for receiving the recurring audio content data include preferences for predetermined content types comprising at least one of consumer advertisement data, emergency alert data, and local weather data.
  • 14. The non-transitory computer readable storage medium of claim 13, wherein the user preference comprises a fixed time interval during which multiple audio content data messages may be received, and the additional audio content data is based on a same content type as the non-duplicate audio content data.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a Continuation-In-Part (CIP) of U.S. Non-Provisional application Ser. No. 12/890,829 entitled ‘PROVIDING AUDIO ANNOUNCEMENT TO CALLED PARTIES’ filed on Sep. 27, 2010 which is a Continuation-In-Part (CIP) of U.S. Non-Provisional application Ser. No. 11/974,983 entitled ‘PROVIDING ADDITIONAL INFORMATION TO CALLED PARTIES’ and filed on Oct. 17, 2007, which is a non-provisional of U.S. Provisional application Ser. No. 60/934,407, entitled ‘SYSTEM, METHOD, AND COMPUTER READABLE MEDIUM FOR PROVIDING ENHANCED AUTOMATIC NUMBER IDENTIFICATION FUNCTIONALITY’ filed on Jun. 13, 2007. The above applications are incorporated herein by reference.

Related Publications (1)
Number Date Country
20130243176 A1 Sep 2013 US
Provisional Applications (1)
Number Date Country
60934407 Jun 2007 US
Continuation in Parts (2)
Number Date Country
Parent 12890829 Sep 2010 US
Child 13887810 US
Parent 11974983 Oct 2007 US
Child 12890829 US