Supplementing audio recorded in a media file

Information

  • Patent Grant
  • 9318100
  • Patent Number
    9,318,100
  • Date Filed
    Wednesday, January 3, 2007
  • Date Issued
    Tuesday, April 19, 2016
Abstract
Methods, systems, and computer program products are provided for supplementing audio recorded in a media file. Embodiments include receiving a media file; identifying the subject matter of the audio portion of the media file; identifying supplemental content for supplementing the subject matter recorded in the audio portion of the media file; and inserting in the media file markup for rendering the supplemental content.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The field of the invention is data processing, or, more specifically, methods, systems, and products for supplementing audio recorded in a media file.


2. Description of Related Art


Managers are increasingly isolated from one another and from their employees. One reason for this isolation is that managers are often time constrained, their communication occurs across many different devices, and communication often requires two or more managers or employees to be available at the same time. There is therefore a need for improvement in communications among users such as managers and employees that reduces the number of devices used to communicate and reduces the requirement for more than one user to be available at the same time.


SUMMARY OF THE INVENTION

Methods, systems, and computer program products are provided for supplementing audio recorded in a media file. Embodiments include receiving a media file; identifying the subject matter of the audio portion of the media file; identifying supplemental content for supplementing the subject matter recorded in the audio portion of the media file; and inserting in the media file markup for rendering the supplemental content.


The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 sets forth a network diagram of a system for asynchronous communications using messages recorded on handheld devices according to embodiments of the present invention.



FIG. 2 sets forth a block diagram of automated computing machinery comprising an exemplary library management system useful in asynchronous communications according to embodiments of the present invention.



FIG. 3 sets forth a flow chart illustrating an exemplary method for asynchronous communications according to embodiments of the present invention.



FIG. 4 sets forth a flow chart illustrating an exemplary method for associating the message with content under management by a library management system in dependence upon the text converted from a recorded message.



FIG. 5 sets forth a flow chart illustrating another method for associating the message with content under management by a library management system in dependence upon the text converted from a recorded message.



FIG. 6 sets forth a flow chart illustrating another method for associating the message with content under management by a library management system in dependence upon the text converted from a recorded message.



FIG. 7 sets forth a flow chart illustrating an exemplary method for supplementing audio recorded in a media file.



FIG. 8 sets forth a flow chart illustrating further device-side aspects of embodiments of supplementing audio recorded in a media file according to the present invention.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Exemplary methods, systems, and products for asynchronous communications and for supplementing audio recorded in a media file according to the present invention are described with reference to the accompanying drawings, beginning with FIG. 1. FIG. 1 sets forth a network diagram of a system for asynchronous communications using messages recorded on handheld devices and for supplementing audio recorded in a media file according to embodiments of the present invention. Asynchronous communications means communications among parties that occur with some time delay. Asynchronous communications according to the present invention may allow participants to send, receive, and respond to communications at their own convenience, with no requirement to be available simultaneously.


The system of FIG. 1 includes two personal computers (106 and 112) coupled for data communications to a wide area network (‘WAN’) (102). Each of the personal computers (106 and 112) of FIG. 1 has installed upon it a local library application (232). A local library application (232) includes computer program instructions capable of transferring media files containing recorded messages to a handheld recording device (108 and 114). The local library application (232) also includes computer program instructions capable of receiving media files containing messages from the handheld recording device (108 and 114) and transmitting the media files to a library management system (104).


The example of FIG. 1 also includes a library management system (104). The library management system of FIG. 1 is capable of asynchronous communications by receiving a recorded message having been recorded on a handheld device (108); converting the recorded message to text; identifying a recipient (116) of the message in dependence upon the text; associating the message with content under management by a library management system in dependence upon the text; and storing the message for transmission to another handheld device (114) for the recipient. The exemplary library management system (104) of FIG. 1 manages asynchronous communications using recorded messages according to the present invention, as well as additional content associated with those recorded messages. Such associated content under management includes, for example, other recorded messages created by senders and recipients, emails, media files containing media content, spreadsheets, presentations, RSS (‘Really Simple Syndication’) feeds, web pages, as well as any other content that will occur to those of skill in the art. Maintaining the content as well as managing asynchronous communications relating to that content may provide tight coupling between the communications between users and the content related to those communications. Such tight coupling provides the ability to determine that content under management is the subject of the communications and therefore to provide an identification of such content to a recipient. Such tight coupling also provides the ability to attach that content to the message, delivering together the content which is the subject of the communications and the communications themselves.


The library management system (104) of FIG. 1 is also capable of supplementing audio recorded in a media file by receiving a media file; identifying the subject matter of the audio portion of the media file; identifying supplemental content for supplementing the subject matter recorded in the audio portion of the media file; and inserting in the media file markup for rendering the supplemental content. Supplementing audio recorded in a media file according to the present invention may improve the use and enjoyment of the audio recorded in the media file.


The exemplary system of FIG. 1 is capable of asynchronous communications according to the present invention by recording a message from a sender (110) on handheld device (108). The handheld recording device of FIG. 1 includes a microphone for receiving speech of the message and is capable of recording the message in a media file. One handheld recording device useful according to embodiments of the present invention is the WP-U2J available from Samsung.


The exemplary system of FIG. 1 is capable of transferring the media file containing the recorded message from the handheld recording device (108) to a local library application (232). Media files containing one or more messages may be transferred to the local library application by periodically synchronizing the handheld recording device with the local library application, allowing a sender to begin transmission of the message at the convenience of the sender.


The exemplary system of FIG. 1 is also capable of transferring the media file containing the recorded message to a library management system (104). The library management system comprises computer program instructions capable of receiving a recorded message; converting the recorded message to text; identifying a recipient of the message in dependence upon the text; associating the message with content under management by a library management system in dependence upon the text; and storing the message for transmission to another handheld device for the recipient.


The exemplary system of FIG. 1 is also capable of transferring the media file containing the recorded message to a local library application (232) installed on a personal computer (112). The system of FIG. 1 is also capable of transmitting the message to the handheld recording device (114) of the recipient (116), who may listen to the message using headphones (112) or speakers on the device. A recipient may transfer messages to the handheld device by synchronizing the handheld recording device with the local library application (232), allowing the recipient to obtain messages at the recipient's convenience. The recipient may now respond to the sender in the same manner, providing two-way asynchronous communications between sender and recipient.


The handheld recording devices (108 and 114) of FIG. 1 are also useful in supplementing audio recorded in a media file. The handheld recording devices are also capable of receiving a media file; extracting markup from the media file; rendering supplemental content in dependence upon the markup; and playing the audio portion of the media file.


The arrangement of devices making up the exemplary system illustrated in FIG. 1 is for explanation, not for limitation. Data processing systems useful according to various embodiments of the present invention may include additional servers, routers, other devices, and peer-to-peer architectures, not shown in FIG. 1, as will occur to those of skill in the art. Networks in such data processing systems may support many data communications protocols, including for example TCP (Transmission Control Protocol), IP (Internet Protocol), HTTP (HyperText Transfer Protocol), WAP (Wireless Application Protocol), HDTP (Handheld Device Transport Protocol), and others as will occur to those of skill in the art. Various embodiments of the present invention may be implemented on a variety of hardware platforms in addition to those illustrated in FIG. 1.


Asynchronous communications in accordance with the present invention is generally implemented with computers, that is, with automated computing machinery. In the system of FIG. 1, for example, all the nodes, servers, and communications devices are implemented to some extent at least as computers. For further explanation, therefore, FIG. 2 sets forth a block diagram of automated computing machinery comprising an exemplary library management system (104) useful in asynchronous communications according to embodiments of the present invention. The library management system (104) of FIG. 2 includes at least one computer processor (156) or ‘CPU’ as well as random access memory (168) (‘RAM’) which is connected through a system bus (160) to processor (156) and to other components of the library management system.


Stored in RAM (168) is a library management application (202) for asynchronous communications according to the present invention including computer program instructions for receiving a recorded message, the message recorded on a handheld device; converting the recorded message to text; identifying a recipient of the message in dependence upon the text; associating the message with content under management by a library management system in dependence upon the text; and storing the message for transmission to another handheld device for the recipient.


The library management application (202) of FIG. 2 includes a speech recognition engine (203), computer program instructions for converting a recorded message to text. Examples of speech recognition engines capable of modification for use with library management applications according to the present invention include SpeechWorks available from Nuance Communications, Dragon NaturallySpeaking also available from Nuance Communications, ViaVoice available from IBM®, Speech Magic available from Philips Speech Recognition Systems, iListen from MacSpeech, Inc., and others as will occur to those of skill in the art.


The library management application (202) of FIG. 2 includes a speech synthesis engine (204), computer program instructions for creating speech identifying the content associated with the message. Examples of speech engines capable of creating speech identifying the content associated with the message include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class.


The library management application (202) of FIG. 2 includes a content management module (206), computer program instructions for receiving a recorded message; identifying a recipient of the message in dependence upon text converted from the message; associating the message with content under management by a library management system in dependence upon the text; and storing the message for transmission to another handheld device for the recipient.


The library management application (202) of FIG. 2 also includes a content supplement module (207), computer program instructions for supplementing audio recorded in a media file. The content supplement module includes computer program instructions capable of receiving a media file; identifying the subject matter of the audio portion of the media file; identifying supplemental content for supplementing the subject matter recorded in the audio portion of the media file; and inserting in the media file markup for rendering the supplemental content.


Also stored in RAM (168) is an application server (155), a software platform that provides services and infrastructure required to develop and deploy business logic necessary to provide web clients with access to enterprise information systems. Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include UNIX™, Linux™, Microsoft XP™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154) and library management module (202) in the example of FIG. 2 are shown in RAM (168), but many components of such software typically are stored in non-volatile memory (166) also.


Library management system (104) of FIG. 2 includes non-volatile computer memory (166) coupled through a system bus (160) to processor (156) and to other components of the library management system (104). Non-volatile computer memory (166) may be implemented as a hard disk drive (170), optical disk drive (172), electrically erasable programmable read-only memory space (so-called ‘EEPROM’ or ‘Flash’ memory) (174), RAM drives (not shown), or as any other kind of computer memory as will occur to those of skill in the art.


The exemplary library management system of FIG. 2 includes one or more input/output interface adapters (178). Input/output interface adapters in library management systems implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices (180) such as computer display screens, as well as user input from user input devices (181) such as keyboards and mice.


The exemplary library management system (104) of FIG. 2 includes a communications adapter (167) for implementing data communications (184) with other computers (182). Such data communications may be carried out serially through RS-232 connections, through external buses such as USB, through data communications networks such as IP networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a network. Examples of communications adapters useful for asynchronous communications and supplementing audio according to embodiments of the present invention include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired network communications, and 802.11b adapters for wireless network communications.


Asynchronous Communications

For further explanation, FIG. 3 sets forth a flow chart illustrating an exemplary method for asynchronous communications according to embodiments of the present invention that includes recording (302) a message (304) on handheld device (108). Recording (302) a message (304) on handheld device (108) may be carried out by recording a message in the data format the handheld device supports. Examples of media files useful in asynchronous communications according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art.


The method of FIG. 3 includes transferring (308) a media file (306) containing the recorded message (304) to a library management system (104). As discussed above, one way of transferring (308) a media file (306) containing the recorded message (304) to a library management system (104) includes synchronizing the handheld recording device (108) with a local library application (232), which in turn uploads the media file to the library management system. Synchronizing the handheld recording device (108) with a local library application (232) may allow a sender to record messages at the sender's convenience and also to initiate the sending of those messages at the sender's convenience.


The method of FIG. 3 also includes receiving (310) the recorded message (304). In the example of FIG. 3, a library management system (104) receives the recorded message in a media file from a local library application (232). Local library applications (232) according to the present invention may be configured to upload messages from a sender to a library management system (104) and download messages for a recipient from a library management system (104) periodically, such as daily, hourly and so on, upon synchronization with handheld recording devices, or in any other manner as will occur to those of skill in the art.


The method of FIG. 3 also includes converting (312) the recorded message (304) to text (314). Converting (312) the recorded message (304) to text (314) may be carried out by a speech recognition engine. Speech recognition is the process of converting a speech signal to a set of words, by means of an algorithm implemented as a computer program. Different types of speech recognition engines currently exist. Isolated-word speech recognition systems, for example, require the speaker to pause briefly between words, whereas continuous speech recognition systems do not. Furthermore, some speech recognition systems require a user to provide samples of his or her own speech before using them, whereas other systems are said to be speaker-independent and do not require a user to provide samples.


To accommodate larger vocabularies, speech recognition engines use language models or artificial grammars to restrict the combination of words and increase accuracy. The simplest language model can be specified as a finite-state network, where the permissible words following each word are explicitly given. More general language models approximating natural language are specified in terms of a context-sensitive grammar.
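By way of illustration only, a finite-state network of this kind may be sketched as a mapping from each word to the words explicitly permitted to follow it. The following Python sketch, including its small vocabulary, is a hypothetical example and not part of the specification.

    # Minimal sketch of a finite-state language model: each word maps to the
    # set of words explicitly permitted to follow it (hypothetical vocabulary).
    FINITE_STATE_NETWORK = {
        '<start>': {'send', 'play'},
        'send': {'message', 'file'},
        'play': {'message'},
        'message': {'to', '<end>'},
        'file': {'to', '<end>'},
        'to': {'bob', 'alice'},
        'bob': {'<end>'},
        'alice': {'<end>'},
    }

    def is_permissible(words):
        """Return True if the word sequence is allowed by the network."""
        previous = '<start>'
        for word in words:
            if word not in FINITE_STATE_NETWORK.get(previous, set()):
                return False
            previous = word
        return '<end>' in FINITE_STATE_NETWORK.get(previous, set())

    print(is_permissible(['send', 'message', 'to', 'alice']))  # True
    print(is_permissible(['send', 'alice']))                   # False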


Examples of commercial speech recognition engines currently available include SpeechWorks available from Nuance Communications, Dragon NaturallySpeaking also available from Nuance Communications, ViaVoice available from IBM®, Speech Magic available from Philips Speech Recognition Systems, iListen from MacSpeech, Inc., and others as will occur to those of skill in the art.


The method of FIG. 3 also includes identifying (319) a recipient (116) of the message (304) in dependence upon the text (314). Identifying (319) a recipient (116) of the message (304) in dependence upon the text (314) may be carried out by scanning the text for previously identified names or user identifications. Upon finding a match, identifying (319) a recipient (116) of the message (304) may be carried out by retrieving a user profile for the identified recipient including information facilitating sending the message to the recipient.
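For illustration only, identifying a recipient by scanning converted text against previously identified names may be sketched as follows; the user directory and profile fields are hypothetical examples, not data prescribed by the specification.

    # Sketch: identify a recipient by scanning the converted text for known
    # names or user identifications, then retrieve the matching user profile.
    USER_PROFILES = {                      # hypothetical directory of users
        'bob':   {'user_id': 'bob',   'device_address': 'device-114'},
        'alice': {'user_id': 'alice', 'device_address': 'device-120'},
    }

    def identify_recipient(text):
        """Return the profile of the first known user mentioned in the text."""
        for word in text.lower().split():
            profile = USER_PROFILES.get(word.strip('.,!?'))
            if profile is not None:
                return profile
        return None

    print(identify_recipient('Please forward this update to Bob today.'))
    # {'user_id': 'bob', 'device_address': 'device-114'}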


The method of FIG. 3 also includes associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314). Associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314) may be carried out by creating speech identifying the content associated with the message; and associating the speech with the recorded message for transmission with the recorded message as discussed below with reference to FIG. 4. Associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314) may also be carried out by extracting keywords from the text; and searching content under management for the keywords as discussed below with reference to FIG. 5. Associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314) may also be carried out by extracting an explicit identification of the associated content from the text; and searching content under management for the identified content as discussed below with reference to FIG. 6.


The method of FIG. 3 also includes storing (320) the message (304) for transmission to another handheld device (114) for the recipient (116). In the example of FIG. 3, a library management system (104) stores the message for downloading to local library application (232) for the recipient.


The method of FIG. 3 also includes transmitting (324) the message (304) to another handheld device (114). Transmitting (324) the message (304) to another handheld device (114) according to the method of FIG. 3 may be carried out by downloading the message to a local library application (232) for the recipient (116) and synchronizing the handheld recording device (114) with the local library application (232). Local library applications (232) according to the present invention may be configured to download messages for a recipient from a library management system (104) periodically, such as daily, hourly and so on, upon synchronization with handheld recording devices, or in any other manner as will occur to those of skill in the art.


To aid users in communication, content identified as associated with communications among users may be identified, described in speech, and presented to those users, thereby seamlessly supplementing the existing communications among the users. For further explanation, FIG. 4 sets forth a flow chart illustrating an exemplary method for associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314). The method of FIG. 4 includes creating (408) speech (412) identifying the content (318) associated with the message (304). Creating (408) speech (412) identifying the content (318) associated with the message (304) may be carried out by processing the text using a text-to-speech engine in order to produce a speech presentation of the text and then recording the speech produced by the text-to-speech engine in the audio portion of a media file. Examples of speech engines capable of converting text to speech for recording in the audio portion of a media file include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class. Each of these text-to-speech engines is composed of a front end that takes input in the form of text and outputs a symbolic linguistic representation to a back end that outputs the received symbolic linguistic representation as a speech waveform.
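As an illustrative sketch only, producing such a speech presentation with an off-the-shelf text-to-speech library (here pyttsx3, which the specification does not name) might be carried out as follows; the output file name is a hypothetical placeholder.

    # Sketch: render text identifying associated content as speech and save it
    # to an audio file that can be associated with the recorded message.
    import pyttsx3

    def create_identifying_speech(text, out_path='content_identification.wav'):
        engine = pyttsx3.init()              # initialize the TTS engine (front end + back end)
        engine.save_to_file(text, out_path)  # queue synthesis of the text to a file
        engine.runAndWait()                  # run the queued synthesis
        return out_path

    create_identifying_speech('This message refers to the Jones presentation of May 2.')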


Typically, speech synthesis engines operate by using one or more of the following categories of speech synthesis: articulatory synthesis, formant synthesis, and concatenative synthesis. Articulatory synthesis uses computational biomechanical models of speech production, such as models for the glottis and the moving vocal tract. Typically, an articulatory synthesizer is controlled by simulated representations of muscle actions of the human articulators, such as the tongue, the lips, and the glottis. Computational biomechanical models of speech production solve time-dependent, 3-dimensional differential equations to compute the synthetic speech output. Typically, articulatory synthesis has very high computational requirements, and has lower results in terms of natural-sounding fluent speech than the other two methods discussed below.


Formant synthesis uses a set of rules for controlling a highly simplified source-filter model that assumes that the glottal source is completely independent from a filter which represents the vocal tract. The filter that represents the vocal tract is determined by control parameters such as formant frequencies and bandwidths. Each formant is associated with a particular resonance, or peak in the filter characteristic, of the vocal tract. The glottal source generates stylized glottal pulses for periodic sounds and noise for aspiration. Formant synthesis often generates highly intelligible, but not completely natural sounding speech. However, formant synthesis typically has a low memory footprint and only moderate computational requirements.


Concatenative synthesis uses actual snippets of recorded speech that are cut from recordings and stored in an inventory or voice database, either as waveforms or as encoded speech. These snippets make up the elementary speech segments such as, for example, phones and diphones. Phones are composed of a vowel or a consonant, whereas diphones are composed of phone-to-phone transitions that encompass the second half of one phone plus the first half of the next phone. Some concatenative synthesizers use so-called demi-syllables, in effect applying the diphone method to the time scale of syllables. Concatenative synthesis then strings together, or concatenates, elementary speech segments selected from the voice database, and, after optional decoding, outputs the resulting speech signal. Because concatenative systems use snippets of recorded speech, they often have the highest potential for sounding like natural speech, but concatenative systems typically require large amounts of database storage for the voice database.
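The following sketch is a simplified, hypothetical illustration of the concatenative approach: elementary speech segments are selected from a voice database and concatenated into a single waveform. The snippet inventory shown is a stand-in rather than real recorded speech.

    # Sketch: naive concatenative synthesis by concatenating waveform snippets
    # (e.g., diphones) selected from a hypothetical voice database.
    import numpy as np

    VOICE_DATABASE = {                 # hypothetical inventory of snippets
        'h-e': np.random.randn(800),   # stand-ins for recorded diphone waveforms
        'e-l': np.random.randn(800),
        'l-o': np.random.randn(900),
    }

    def concatenate(units):
        """String together the elementary speech segments for the given units."""
        segments = [VOICE_DATABASE[u] for u in units]
        return np.concatenate(segments)    # resulting speech signal (samples)

    waveform = concatenate(['h-e', 'e-l', 'l-o'])
    print(waveform.shape)                  # (2500,)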


The method of FIG. 4 also includes associating (410) the speech (412) with the recorded message (304) for transmission with the recorded message (304). Associating (410) the speech (412) with the recorded message (304) for transmission with the recorded message (304) may be carried out by including the speech in the same media file as the recorded message, creating a new media file containing both the recorded message and the created speech, or any other method of associating the speech with the recorded message as will occur to those of skill in the art.


As discussed above, associating messages with content under management often requires identifying the content. For further explanation, FIG. 5 sets forth a flow chart illustrating another method for associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314). The method of FIG. 5 includes extracting (402) keywords (403) from the text (314). Extracting (402) keywords (403) from the text (314) may be carried out by extracting words from the text that elicit information about content associated with the subject matter of the message such as, for example, ‘politics,’ ‘work,’ ‘movies,’ and so on. Extracting (402) keywords (403) from the text (314) also may be carried out by extracting words from the text identifying types of content such as, for example, ‘email,’ ‘file,’ ‘presentation,’ and so on. Extracting (402) keywords (403) from the text (314) also may be carried out by extracting words from the text having temporal semantics, such as ‘yesterday,’ ‘Monday,’ ‘10:00 am,’ and so on. The examples of extracting words indicative of subject matter, content type, or temporal semantics are presented for explanation and not for limitation. In fact, associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314) may be carried out in many ways as will occur to those of skill in the art, and all such ways are within the scope of the present invention.
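For illustration only, keyword extraction of the kind described above may be sketched by checking the converted text against small word lists for subject matter, content type, and temporal semantics; the word lists are hypothetical examples.

    # Sketch: extract keywords that indicate subject matter, content type,
    # or temporal semantics from the converted text (hypothetical word lists).
    SUBJECT_WORDS = {'politics', 'work', 'movies'}
    CONTENT_TYPE_WORDS = {'email', 'file', 'presentation'}
    TEMPORAL_WORDS = {'yesterday', 'monday', '10:00'}

    def extract_keywords(text):
        words = {w.strip('.,!?').lower() for w in text.split()}
        return {
            'subject':  sorted(words & SUBJECT_WORDS),
            'type':     sorted(words & CONTENT_TYPE_WORDS),
            'temporal': sorted(words & TEMPORAL_WORDS),
        }

    print(extract_keywords('Please review the presentation I emailed yesterday about work.'))
    # {'subject': ['work'], 'type': ['presentation'], 'temporal': ['yesterday']}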


The method of FIG. 5 also includes searching (404) content (318) under management for the keywords (403). Searching (404) content (318) under management for the keywords (403) may be carried out by searching the titles, metadata, and content itself for the keywords and identifying as a match content having the most matching keywords or content having the best matching keywords according to predefined algorithms for selecting matching content from potential matches.
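As an illustrative sketch, and assuming each item of content under management exposes a title, metadata, and body text, searching for the keywords might score candidates by counting matches and select the best-scoring item.

    # Sketch: search content under management for keywords by counting matches
    # across title, metadata, and body, then return the best-scoring item.
    def score(item, keywords):
        haystack = ' '.join([item['title'], item['metadata'], item['body']]).lower()
        return sum(1 for kw in keywords if kw.lower() in haystack)

    def search_content(content_items, keywords):
        best = max(content_items, key=lambda item: score(item, keywords), default=None)
        return best if best is not None and score(best, keywords) > 0 else None

    library = [   # hypothetical content under management
        {'title': 'Jones Presentation May 2, 2006', 'metadata': 'presentation', 'body': '...'},
        {'title': 'Budget Spreadsheet',             'metadata': 'spreadsheet',  'body': '...'},
    ]
    print(search_content(library, ['presentation', 'Jones']))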


In some cases, the messages comprising communications among users may contain an explicit identification of content under management. For further explanation, FIG. 6 sets forth a flow chart illustrating another method for associating (316) the message (304) with content (318) under management by a library management system in dependence upon the text (314) that includes extracting (502) an explicit identification (506) of the associated content from the text and searching content (318) under management for the identified content (506). Extracting (502) an explicit identification (506) of the associated content from the text may be carried out by identifying one or more words in the text matching a title, or closely matching a title or metadata identification, of specific content under management. For example, the phrase ‘the Jones Presentation’ may be extracted as an explicit identification of a PowerPoint™ presentation entitled ‘Jones Presentation May 2, 2006.’ Similarly, the phrase ‘your message of yesterday’ may be extracted as an explicit identification of a message from the intended recipient sent a day earlier than the current message from which the text was converted according to the present invention.


Supplementing Audio Recorded in a Media File

As mentioned above, the content of messages stored in the audio portion of media files for asynchronous communications, and other content stored in the audio portion of a media file, may be supplemented to provide additional information and often added enjoyment for a user. For further explanation, therefore, FIG. 7 sets forth a flow chart illustrating an exemplary method for supplementing audio recorded in a media file. The method of FIG. 7 includes receiving (702) a media file (306). In the example of FIG. 7, the media file (306) is received in a library management system (104). Receiving (702) a media file (306) may be carried out by synchronizing a handheld recording device (108) with a local library application (232), which in turn uploads the media file to the library management system (104). Receiving (702) a media file (306) may also be carried out by transmitting the media file to the library management system from another server, through, for example, purchase or transfer, or from another computer as will occur to those of skill in the art.


The exemplary media file of FIG. 7 has content stored in the audio portion (712) of the media file. The content stored in the media file may be messages recorded by users for asynchronous communications, content such as songs, speech recordings, and other content as will occur to those of skill in the art. Examples of media files useful in supplementing audio recorded in a media file according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art.


The method of FIG. 7 also includes identifying (704) the subject matter of the audio portion (712) of the media file (306). Identifying (704) the subject matter of the audio portion (712) of the media file (306) may be carried out by retrieving an identification of the subject matter of the media file stored as metadata in a header of the media file. For example, in an MPEG file, such a header may be an ID3v2 tag designed to store metadata about the content on the media file. An ID3v2 tag is prepended to the audio portion of the media file. An ID3v2 tag provides a container for metadata associated with the media file. An ID3v2 tag includes one or more frames supporting the inclusion of text, images, files, and other information. ID3v2 tags are flexible and expandable because parsers that do not support specific functions of the ID3v2 tag will ignore those functions. ID3v2 supports Unicode, thereby providing the ability to include metadata in text of many different languages. The maximum tag size of an ID3v2 tag is typically 256 megabytes and the maximum frame size is typically 16 megabytes.
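For illustration only, retrieving such metadata from an ID3v2 tag might be carried out with a tagging library such as mutagen, which the specification does not name; the frame description ‘subject-matter’ and the file name are hypothetical placeholders.

    # Sketch: read subject-matter metadata from the ID3v2 tag prepended to an
    # MPEG media file.
    from mutagen.id3 import ID3

    def read_subject_matter(path):
        tag = ID3(path)                          # parse the ID3v2 container
        for frame in tag.getall('TXXX'):         # user-defined text frames
            if frame.desc == 'subject-matter':   # hypothetical frame description
                return str(frame.text[0])
        return None

    print(read_subject_matter('message.mp3'))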


Identifying (704) the subject matter of the audio portion (712) of the media file (306) may also be carried out by converting the audio portion to text and determining a subject matter from the text. As discussed above, converting the audio portion to text may be carried out by a speech recognition engine such as, for example, SpeechWorks available from Nuance Communications, Dragon NaturallySpeaking also available from Nuance Communications, ViaVoice available from IBM®, Speech Magic available from Philips Speech Recognition Systems, iListen from MacSpeech, Inc., and others as will occur to those of skill in the art.
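As an illustrative sketch, and assuming the audio portion has been exported to a WAV file, conversion to text might use an off-the-shelf recognizer such as the Python speech_recognition package rather than the commercial engines named above.

    # Sketch: convert the audio portion of a media file to text with an
    # off-the-shelf speech recognition engine (offline CMU Sphinx backend).
    import speech_recognition as sr

    def audio_to_text(wav_path):
        recognizer = sr.Recognizer()
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)    # read the entire audio portion
        return recognizer.recognize_sphinx(audio)

    print(audio_to_text('message.wav'))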


Determining a subject matter from the text may be carried out by parsing the text in dependence upon rules predetermined to identify words indicative of the subject matter of the content recorded on the audio portion of the media file. Such rules may for example ignore commonly used words such as ‘a,’ ‘the,’ ‘and’ and so on and then use a weighted algorithm to extract words that may be used to determine the subject matter of the content recorded on the audio portion of the media file.
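The following sketch illustrates one such rule set: commonly used words are ignored and the remaining words are weighted by frequency of occurrence. The stop-word list and weighting are illustrative assumptions rather than rules defined by the specification.

    # Sketch: determine subject matter by dropping common words and weighting
    # the remaining words by how often they occur in the converted text.
    from collections import Counter

    STOP_WORDS = {'a', 'the', 'and', 'of', 'to', 'in', 'is'}   # illustrative list

    def determine_subject_matter(text, top_n=3):
        words = [w.strip('.,!?').lower() for w in text.split()]
        counts = Counter(w for w in words if w and w not in STOP_WORDS)
        return [word for word, _ in counts.most_common(top_n)]

    print(determine_subject_matter(
        'The budget meeting covered the budget and the hiring plan.'))
    # ['budget', 'meeting', 'covered']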


The method of FIG. 7 also includes identifying (707) supplemental content (708) for supplementing the subject matter recorded in the audio portion (712) of the media file (306). Identifying (707) supplemental content (708) for supplementing the subject matter recorded in the audio portion (712) of the media file (306) may be carried out by retrieving markup from a database (318) that is indexed by subject matter. Such markup, when rendered by a browser, provides additional content for supplementing the recorded audio portion of a media file.


The markup may include markup for visual rendering by a browser such as, for example, HyperText Markup Language (‘HTML’). The markup may also include voice markup to be rendered by a multimodal browser such as for example VoiceXML, X+V (‘XHTML plus Voice’) and so on as will occur to those of skill in the art.


The method of FIG. 7 also includes inserting (710) in the media file (306) markup for rendering the supplemental content (708). Inserting (710) in the media file (306) the markup for rendering the supplemental content (708) may be carried out by including the markup in a header in the media file. For example, markup may be included in an ID3v2 tag of an MPEG media file.
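For illustration only, inserting markup into the ID3v2 tag of an MPEG media file might be carried out with the mutagen library, which the specification does not name; the frame description and the markup shown are hypothetical placeholders.

    # Sketch: insert markup for rendering supplemental content into the ID3v2
    # tag of an MPEG media file.
    from mutagen.id3 import ID3, TXXX

    def insert_markup(path, markup):
        tag = ID3(path)
        tag.add(TXXX(encoding=3,                     # UTF-8
                     desc='supplemental-markup',     # hypothetical frame description
                     text=[markup]))
        tag.save(path)

    insert_markup('message.mp3',
                  '<html><body><p>Supplemental content goes here.</p></body></html>')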


Having included the markup for supplementing the content recorded in the audio portion of the media file, the markup may now be rendered to supplement in real time the audio played on a handheld recording device. For further explanation, therefore, FIG. 8 sets forth a flow chart illustrating further aspects of some embodiments of supplementing audio recorded in a media file according to the present invention. The method of FIG. 8 includes receiving (802) the media file (306). Receiving (802) the media file (306) may be carried out by downloading the media file from a library management system (104) to a local library application (232) for the user (700) and synchronizing the handheld recording device (108) with the local library application (232). Local library applications (232) according to the present invention may be configured to download media files from the library management system (104) periodically, such as daily, hourly and so on, upon synchronization with handheld recording devices, or in any other manner as will occur to those of skill in the art.


The method of FIG. 8 also includes extracting (804) markup (714) from the media file (306). Extracting (804) markup (714) from the media file (306) may be carried out by extracting markup from an ID3v2 tag in, for example, an MPEG media file.
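Continuing the illustrative mutagen sketch above, extracting the markup on the device side might read the same hypothetical frame back out of the ID3v2 tag.

    # Sketch: extract previously inserted markup from the ID3v2 tag.
    from mutagen.id3 import ID3

    def extract_markup(path):
        tag = ID3(path)
        for frame in tag.getall('TXXX'):
            if frame.desc == 'supplemental-markup':   # hypothetical frame description
                return str(frame.text[0])
        return None

    print(extract_markup('message.mp3'))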


The method of FIG. 8 also includes rendering (806) supplemental content in dependence upon the markup (714) and playing (808) the audio portion (712) of the media file (306). Rendering (806) supplemental content in dependence upon the markup (714) may be carried out by rendering the supplemental content on a browser installed on a handheld recording device. Similarly, playing (808) the audio portion (712) of the media file (306) may be carried out by playing the audio portion of a media file by a handheld recording device.


For improved user experience, the supplemental content may be rendered in synchronization with the playback of the content on the media file. For example, content rendered according to the markup on the browser may be timed according to the duration of the audio recorded on the media file. In some embodiments of the method of FIG. 8, therefore, extracting markup from the media file includes extracting synchronization markup from the media file. Synchronization markup is markup dictating to a browser timing information for synchronizing supplemental content with the playback of the media file. In such embodiments, rendering supplemental content in dependence upon the markup is carried out by synchronizing the rendering of the supplemental content with the playing of the audio portion of the media file in dependence upon the synchronization markup.
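As a purely illustrative sketch, synchronization markup could be represented as a list of cue times and content fragments that the device renders as playback reaches each cue; the cue format shown is an assumption rather than a format defined by the specification.

    # Sketch: render supplemental content at cue points synchronized with playback.
    import time

    SYNCHRONIZATION_MARKUP = [            # hypothetical (seconds-offset, content) cues
        (0.0,  'Title card for the recorded message'),
        (5.0,  'Photo of the Jones presentation'),
        (12.0, 'Link to the related spreadsheet'),
    ]

    def play_with_supplemental_content(audio_duration, cues):
        start = time.time()
        pending = sorted(cues)
        while time.time() - start < audio_duration:
            elapsed = time.time() - start
            while pending and pending[0][0] <= elapsed:
                _, content = pending.pop(0)
                print(f'[{elapsed:5.1f}s] render: {content}')  # stand-in for a browser render
            time.sleep(0.1)               # the audio itself would be playing concurrently

    play_with_supplemental_content(audio_duration=15.0, cues=SYNCHRONIZATION_MARKUP)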


Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for asynchronous communications using messages recorded on handheld devices and supplementing audio recorded in a media file. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on computer readable media for use with any suitable data processing system. Such computer readable media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web as well as wireless transmission media such as, for example, networks implemented according to the IEEE 802.11 family of specifications. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.


It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.

Claims
  • 1. A method for supplementing audio recorded in a media file, the method comprising: receiving a media file; extracting keywords from text of an audio portion of the media file, and determining a subject matter from text based on the keywords; identifying, from the audio portion of the media file, the subject matter of the audio portion of the media file; searching a content source for supplemental content, wherein the content source is a managed library comprising a database indexed by subject matter, wherein the supplemental content relates to and supplements the identified subject matter of the audio portion of the media file; identifying, automatically without user intervention, the supplemental content for supplementing the subject matter recorded in the audio portion of the media file; inserting, in the media file, markup for rendering the supplemental content, wherein the supplemental content comprises one or more image files, video files or text files; and rendering supplemental content in dependence upon the markup.
  • 2. The method of claim 1 wherein identifying the subject matter of audio portion of the media file further comprises converting the audio portion to text.
  • 3. The method of claim 1 further comprising: extracting the markup from the media file including extracting synchronization markup from the media file; and wherein the rendering supplemental content in dependence upon the markup includes synchronizing the rendering of the supplemental content with the playing of the audio portion of the media file in dependence upon the synchronization markup.
  • 4. The method of claim 1 further comprising: rendering supplemental content in dependence upon the markup including rendering the supplemental content on a browser installed on a handheld recording device; and playing the audio portion of the media file including playing the audio portion of a media file by the handheld recording device.
  • 5. The method of claim 1 wherein the audio portion of the media file contains speech recorded on a handheld recording device for asynchronous communications between users.
  • 6. The method of claim 1, the method further comprising: receiving the media file; extracting the markup from the media file; searching the database for database markup that matches the markup from the media file, the database markup being associated with said supplemental content from the database; retrieving the database markup and said supplemental content from said database; rendering the supplemental content in dependence upon the markup and the database markup; and playing the audio portion of the media file.
  • 7. A system for supplementing audio recorded in a media file, the system comprising a computer processor, a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions, wherein the processor, upon executing the computer program instructions, is capable of: receiving a media file; extracting keywords from text of an audio portion of the media file, and determining a subject matter from text based on the keywords; identifying, from the audio portion of the media file, the subject matter of audio portion of the media file; searching a content source for supplemental content, wherein the content source is a managed library comprising a database indexed by subject matter, wherein the supplemental content relates to and supplements the identified subject matter of the audio portion of the media file; identifying, automatically without user intervention, the supplemental content for supplementing the subject matter recorded in the audio portion of the media file; inserting, in the media file, markup for rendering the supplemental content, wherein the supplemental content comprises one or more image files, video files or text files; and rendering supplemental content in dependence upon the markup.
  • 8. The system of claim 7 wherein identifying the subject matter of audio portion of the media file further comprises converting the audio portion to text.
  • 9. The system of claim 7 wherein the processor is capable of: extracting the markup from the media file including extracting synchronization markup from the media file; and wherein the rendering supplemental content in dependence upon the markup includes synchronizing the rendering of the supplemental content with the playing of the audio portion of the media file in dependence upon the synchronization markup.
  • 10. The system of claim 7 wherein the processor is capable of: rendering supplemental content in dependence upon the markup including rendering the supplemental content on a browser installed on a handheld recording device; and playing the audio portion of the media file including playing the audio portion of a media file by the handheld recording device.
  • 11. The system of claim 7 wherein the audio portion of the media file contains speech recorded on a handheld recording device for asynchronous communications between users.
  • 12. The system of claim 7, the system further comprising another computer processor, another computer memory operatively coupled to the another computer processor, the another computer memory having disposed within it other computer program instructions, wherein the another computer processor, upon executing the other computer program instructions, is capable of: receiving the media file; extracting the markup from the media file; searching the database for any database markup that matches the markup from the media file, the database markup being associated with said supplemental content from the database; retrieving the database markup and said supplemental content from said database; rendering the supplemental content in dependence upon the markup and the database markup; and playing the audio portion of the media file.
  • 13. A computer program product for supplementing audio recorded in a media file, the computer program product embodied on a non-transitory computer-readable recordable medium, the computer program product comprising: computer program instructions for receiving a media file; computer program instructions for extracting keywords from text of an audio portion of the media file, and determining a subject matter from text based on the keywords; computer program instructions for identifying, from the audio portion of the media file, the subject matter of audio portion of the media file; computer program instructions for searching a content source for supplemental content, wherein the content source is a managed library comprising a database indexed by subject matter, wherein the supplemental content relates to and supplements the identified subject matter of the audio portion of the media file; computer program instructions for identifying, automatically without user intervention, supplemental content for supplementing the subject matter recorded in the audio portion of the media file; computer program instructions for inserting, in the media file, markup for rendering the supplemental content, wherein the supplemental content comprises one or more image files, video files or text files; and rendering supplemental content in dependence upon the markup.
  • 14. The computer program product of claim 13 wherein computer program instructions for identifying the subject matter of audio portion of the media file further comprise computer program instructions for converting the audio portion to text.
  • 15. The computer program product of claim 13 further comprising: computer program instructions for extracting the markup from the media file including computer program instructions for extracting synchronization markup from the media file; and wherein the computer program instructions for rendering supplemental content in dependence upon the markup includes computer program instructions for synchronizing the rendering of the supplemental content with the playing of the audio portion of the media file in dependence upon the synchronization markup.
  • 16. The computer program product of claim 13 further comprising: computer program instructions for rendering supplemental content in dependence upon the markup including computer program instructions for rendering the supplemental content on a browser installed on a handheld recording device; and computer program instructions for playing the audio portion of the media file including computer program instructions for playing the audio portion of a media file by the handheld recording device.
  • 17. The computer program product of claim 13 wherein the audio portion of the media file contains speech recorded on a handheld recording device for asynchronous communications between users.
  • 18. The computer program product of claim 13, the computer program product further comprising: computer program instructions for receiving the media file; computer program instructions for extracting the markup from the media file; computer program instructions for searching the database for any database markup that matches the markup from the media file, the database markup being associated with said supplemental content from the database; computer program instructions for retrieving the database markup and said supplemental content from said database; computer program instructions for rendering supplemental content in dependence upon the markup and the database markup; and computer program instructions for playing the audio portion of the media file.
  • 19. The method of claim 1, wherein the audio portion of the media file is a voice message recorded on a handheld device configured to communicate over a wireless network.
  • 20. The system of claim 7, wherein the audio portion of the media file is a voice message recorded on a handheld device configured to communicate over a wireless network.
  • 21. The computer program product of claim 13, wherein the audio portion of the media file is a voice message recorded on a handheld device configured to communicate over a wireless network.
20070005339 Jaquinta Jan 2007 A1
20070027692 Sharma Feb 2007 A1
20070027958 Haslam Feb 2007 A1
20070043462 Nakayama Feb 2007 A1
20070043735 Bodin Feb 2007 A1
20070043758 Bodin Feb 2007 A1
20070043759 Bodin et al. Feb 2007 A1
20070061132 Bodin Mar 2007 A1
20070061229 Ramer et al. Mar 2007 A1
20070061266 Moore et al. Mar 2007 A1
20070061371 Bodin Mar 2007 A1
20070061401 Bodin Mar 2007 A1
20070061711 Bodin Mar 2007 A1
20070061712 Bodin Mar 2007 A1
20070073728 Klein et al. Mar 2007 A1
20070077921 Hayashi et al. Apr 2007 A1
20070078655 Semkow et al. Apr 2007 A1
20070083540 Gundia et al. Apr 2007 A1
20070091206 Bloebaum Apr 2007 A1
20070100628 Bodin May 2007 A1
20070100629 Bodin May 2007 A1
20070100787 Lim May 2007 A1
20070100836 Eichstaedt et al. May 2007 A1
20070101274 Kurlander May 2007 A1
20070101313 Bodin May 2007 A1
20070112844 Tribble et al. May 2007 A1
20070118426 Barnes, Jr. May 2007 A1
20070124458 Kumar May 2007 A1
20070124802 Anton et al. May 2007 A1
20070130589 Davis et al. Jun 2007 A1
20070138999 Lee Jun 2007 A1
20070147274 Vasa et al. Jun 2007 A1
20070165538 Bodin Jul 2007 A1
20070168191 Bodin Jul 2007 A1
20070168194 Bodin Jul 2007 A1
20070174326 Schwartz et al. Jul 2007 A1
20070174388 Williams Jul 2007 A1
20070191008 Bucher et al. Aug 2007 A1
20070192327 Bodin Aug 2007 A1
20070192672 Bodin Aug 2007 A1
20070192673 Bodin Aug 2007 A1
20070192674 Bodin Aug 2007 A1
20070192675 Bodin Aug 2007 A1
20070192676 Bodin Aug 2007 A1
20070192683 Bodin Aug 2007 A1
20070192684 Bodin et al. Aug 2007 A1
20070198267 Jones Aug 2007 A1
20070208687 O'Connor et al. Sep 2007 A1
20070213857 Bodin Sep 2007 A1
20070213986 Bodin Sep 2007 A1
20070214147 Bodin et al. Sep 2007 A1
20070214148 Bodin Sep 2007 A1
20070214149 Bodin Sep 2007 A1
20070214485 Bodin Sep 2007 A1
20070220024 Putterman et al. Sep 2007 A1
20070239837 Jablokov Oct 2007 A1
20070253699 Yen et al. Nov 2007 A1
20070276837 Bodin et al. Nov 2007 A1
20070276865 Bodin et al. Nov 2007 A1
20070276866 Bodin et al. Nov 2007 A1
20070277088 Bodin Nov 2007 A1
20070277233 Bodin Nov 2007 A1
20080034278 Tsou et al. Feb 2008 A1
20080052415 Kellerman et al. Feb 2008 A1
20080082576 Bodin Apr 2008 A1
20080082635 Bodin Apr 2008 A1
20080155616 Logan Jun 2008 A1
20080161948 Bodin Jul 2008 A1
20080162131 Bodin Jul 2008 A1
20080162559 Bodin Jul 2008 A1
20080275893 Bodin et al. Nov 2008 A1
20090271178 Bodin Oct 2009 A1
20100223223 Sandler et al. Sep 2010 A1
Foreign Referenced Citations (9)
Number Date Country
1123075 May 1996 CN
1298173 Jun 2001 CN
1368719 Sep 2002 CN
1197884 Apr 2002 EP
236995 Dec 2002 GB
20010071517 Jul 2001 KR
20040078888 Sep 2004 KR
WO 0182139 Nov 2001 WO
WO 2005106846 Nov 2005 WO
Non-Patent Literature Citations (127)
Entry
ID3 draft specification copyright 2000.
Office Action Dated May 24, 2006 in U.S. Appl. No. 11/420,018.
Office Action Dated Sep. 29, 2006 in U.S. Appl. No. 11/536,733.
Office Action Dated Jan. 3, 2007 in U.S. Appl. No. 11/619,253.
Office Action Dated May 24, 2006 in U.S. Appl. No. 11/420,016.
Office Action Dated May 24, 2006 in U.S. Appl. No. 11/420,015.
Office Action Dated Mar. 9, 2006 in U.S. Appl. No. 11/372,318.
Office Action Dated Mar. 9, 2006 in U.S. Appl. No. 11/372,329.
Office Action Dated Mar. 9, 2006 in U.S. Appl. No. 11/372,325.
Office Action Dated Mar. 9, 2006 in U.S. Appl. No. 11/372,323.
Office Action Dated Feb. 13, 2006 in U.S. Appl. No. 11/352,679.
Office Action Dated Feb. 13, 2006 in U.S. Appl. No. 11/352,824.
Office Action Dated Feb. 13, 2006 in U.S. Appl. No. 11/352,760.
Office Action Dated Jun. 23, 2009 in U.S. Appl. No. 11/352,680.
Office Action Dated Jul. 8, 2009 in U.S. Appl. No. 11/372,317.
Final Office Action Dated Jul. 22, 2009 in U.S. Appl. No. 11/536,733.
Office Action Dated Jul. 9, 2009 in U.S. Appl. No. 11/420,017.
Office Action Dated Jul. 17, 2009 in U.S. Appl. No. 11/536,781.
Office Action Dated Jul. 23, 2009 in U.S. Appl. No. 11/420,014.
Final Office Action Dated Jul. 21, 2009 in U.S. Appl. No. 11/420,018.
Text to Speech MP3 with Natural Voices 1.71, Published Oct. 5, 2004.
Managing multimedia content and delivering services across multiple client platforms using XML, London Communications Symposium, Sep. 10, 2002, pp. 1-7.
PCT Search Report and Written Opinion, International Application No. PCT/EP2007/050594.
U.S. Appl. No. 11/352,760, filed Feb. 2006, Bodin, et al.
U.S. Appl. No. 11/352,824, filed Feb. 2006, Bodin, et al.
U.S. Appl. No. 11/352,680, filed Feb. 2006, Bodin, et al.
U.S. Appl. No. 11/352,679, filed Feb. 2006, Bodin et al.
U.S. Appl. No. 11/372,323, filed Mar. 2006, Bodin et al.
U.S. Appl. No. 11/372,318, filed Mar. 2006, Bodin et al.
U.S. Appl. No. 11/372,319, filed Mar. 2006, Bodin et al.
U.S. Appl. No. 11/536,781, filed Sep. 2006, Bodin et al.
U.S. Appl. No. 11/420,014, filed May 2006, Bodin et al.
U.S. Appl. No. 11/420,015, filed May 2006, Bodin et al.
U.S. Appl. No. 11/420,016, filed May 2006, Bodin et al.
U.S. Appl. No. 11/420,017, filed May 2006, Bodin et al.
U.S. Appl. No. 11/420,018, filed May 2006, Bodin et al.
U.S. Appl. No. 11/536,733, filed Sep. 2006, Bodin et al.
U.S. Appl. No. 11/619,216, filed Jan. 2007, Bodin et al.
U.S. Appl. No. 11/619,253, filed Jan. 2007, Bodin, et al.
U.S. Appl. No. 12/178,448, filed Jul. 2008, Bodin, et al.
Office Action Dated Apr. 15, 2009 in U.S. Appl. No. 11/352,760.
Final Office Action Dated Nov. 16, 2009 in U.S. Appl. No. 11/352,760.
Notice of Allowance Dated Jun. 5, 2008 in U.S. Appl. No. 11/352,824.
Office Action Dated Jan. 22, 2008 in U.S. Appl. No. 11/352,824.
Final Office Action Dated Dec. 21, 2009 in U.S. Appl. No. 11/352,680.
Office Action Dated Apr. 30, 2009 in U.S. Appl. No. 11/352,679.
Final Office Action Dated Oct. 29, 2009 in U.S. Appl. No. 11/352,679.
Office Action Dated Oct. 28, 2008 in U.S. Appl. No. 11/372,323.
Office Action Dated Mar. 18, 2008 in U.S. Appl. No. 11/372,318.
Final Office Action Dated Jul. 9, 2008 in U.S. Appl. No. 11/372,318.
Final Office Action Dated Nov. 6, 2009 in U.S. Appl. No. 11/372,329.
Office Action Dated Feb. 25, 2009 in U.S. Appl. No. 11/372,325.
Office Action Dated Feb. 27, 2009 in U.S. Appl. No. 11/372,329.
Final Office Action Dated Jan. 15, 2010 in U.S. Appl. No. 11/536,781.
Office Action Dated Mar. 20, 2008 in U.S. Appl. No. 11/420,015.
Final Office Action Dated Sep. 3, 2008 in U.S. Appl. No. 11/420,015.
Office Action Dated Dec. 2, 2008 in U.S. Appl. No. 11/420,015.
Office Action Dated Mar. 3, 2008 in U.S. Appl. No. 11/420,016.
Final Office Action Dated Aug. 29, 2008 in U.S. Appl. No. 11/420,016.
Final Office Action Dated Dec. 31, 2009 in U.S. Appl. No. 11/420,017.
Office Action Dated Mar. 21, 2008 in U.S. Appl. No. 11/420,018.
Final Office Action Dated Aug. 29, 2008 in U.S. Appl. No. 11/420,018.
Office Action Dated Dec. 3, 2008 in U.S. Appl. No. 11/420,018.
Office Action Dated Dec. 30, 2008 in U.S. Appl. No. 11/536,733.
Office Action Dated Jan. 26, 2010 in U.S. Appl. No. 11/619,216.
Office Action Dated Apr. 2, 2009 in U.S. Appl. No. 11/619,253.
Buchanan et al., "Representing Aggregated Works in the Digital Library", ACM, 2007, pp. 247-256.
Office Action, U.S. Appl. No. 11/352,760, Sep. 16, 2010.
Office Action, U.S. Appl. No. 11/352,680, Jun. 10, 2010.
Final Office Action, U.S. Appl. No. 11/352,680, Sep. 7, 2010.
Office Action, U.S. Appl. No. 11/352,679, May 28, 2010.
Final Office Action, U.S. Appl. No. 11/352,679, Nov. 15, 2010.
Office Action, U.S. Appl. No. 11/372,317, Sep. 23, 2010.
Final Office Action, U.S. Appl. No. 11/372,329, Nov. 6, 2009.
Office Action, U.S. Appl. No. 11/372,319, Apr. 21, 2010.
Final Office Action, U.S. Appl. No. 11/372,319, Jul. 2, 2010.
Final Office Action, U.S. Appl. No. 11/420,014, Apr. 3, 2010.
Final Office Action, U.S. Appl. No. 11/420,017, Sep. 23, 2010.
Final Office Action, U.S. Appl. No. 11/619,216, Jun. 25, 2010.
Final Office Action, U.S. Appl. No. 11/619,236, Oct. 22, 2010.
Office Action, U.S. Appl. No. 12/178,448, Apr. 2, 2010.
Final Office Action, U.S. Appl. No. 12/178,448, Sep. 14, 2010.
Barbara et al.; "The Audio Web"; Bell Communications Research; NJ; USA; pp. 97-104; 1997.
Monahan et al.; “Adapting Multimedia Internet Content for Universal Access”; IEEE Transactions on Multimedia, vol. 1, No. 1; pp. 104-114; 1999.
Lu et al.; “Audio Ticker”; Computer Networks and ISDN Systems, vol. 30, Issue 7, pp. 721-722, Apr. 1998.
U.S. Appl. No. 11/352,698 Office Action mailed Apr. 29, 2009.
Advertisement of EastBay Technologies, San Jose, CA; (author unknown); “IM Speak, Text to Speech Instant Messages”; from http://www.eastbaytech.com/im.htm website; Dec. 2005.
Advertisement of Odiogo, Inc., 410 Park Avenue, 15th floor, New York City, NY; (author unknown); "Create Text-To-Speech Podcast from RSS Feed with Odiogo for iPod, MP3 Player and Mobile Phone"; website; pp. 1-2; Dec. 13, 2006.
FeedForAll. Hanover, MA; (author unknown); “iTune Tutorial Tags”; from www.feedforall.com website; pp. 1-9; Jul. 11, 2006.
U.S. Appl. No. 11/331,692 Office Action mailed Aug. 17, 2009.
Advertisement of Audioblog.com Audio Publishing Services, Flower Mound, Texas; (author unknown); “Service Features”; from www.audioblog.com website; pp. 1-2; Sep. 23, 2004.
Zhang et al.; “XML-Based Advanced UDDI Search Mechanism for B2B Integration”, Electronic Commerce Research, vol. 3, Nos. 1-2, 25-42, DOI: 10.1023/A:1021573226353; Kluwer Academic Publishers, The Netherlands, 2003.
Tian He et al., University of Virginia, Charlottesville, VA; "AIDA: Adaptive Application-Independent Data Aggregation in Wireless Sensor Networks"; pp. 426-429, 432-449, and 451-457; 2003.
Braun et al.; Fraunhofer Institute for Computer Graphics, Darmstadt, DE; ICAD 1998 “Using Sonic Hyperlinks in Web-TV”; pp. 1-10; 1998.
Braun et al.; IEEE Computer Society, Washington, DC; “Temporal Hypermedia for Multimedia Applications in the World Wide Web”; pp. 1-5; 1999.
James, Frankie, Computer Science Department, Stanford University; "AHA: Audio HTML Access"; pp. 1-13; 2007.
International Business Machines Corporation; PCT Search Report; Sep. 2, 2007; PCT Application No. PCT/EP2007/051260.
Hoschka et al.; "Synchronized Multimedia Integration Language (SMIL) 1.0 Specification"; pp. 1-43; found at website http://www.w3.org/TR/1998/PR-smil-19980409; Apr. 9, 1998.
Casalaina et al., “BMRC Procedures: RealMedia Guide”; pp. 1-7; Berkeley Multimedia Research Center, Berkeley, CA; found at http://web.archive.org/web/20030218131051/http://bmrc.berkeley.edu/info/procedures/rm.html; Feb. 13, 1998.
Advertisement of TextToSpeechMP3.com; “Text to Speech MP3 with Natural Voices 1.71”; (author unknown); website; pp. 1-5; 2004; London.
Andrade et al.; “Managing Multimedia Content and Delivering Services Across Multiple Client Platforms using XML”; Symposium; London Communications; London; pp. 1-8; 2002.
International Business Machines Corporation; PCT Search Report; Mar. 27, 2007; Application No. PCT/EP2007/050594.
U.S. Appl. No. 11/352,710 Office Action mailed Jun. 11, 2009.
U.S. Appl. No. 11/352,727 Office Action mailed May 19, 2009.
U.S. Appl. No. 11/266,559 Final Office Action mailed Apr. 20, 2009.
U.S. Appl. No. 11/266,662 Final Office Action mailed Oct. 30, 2008.
U.S. Appl. No. 11/266,675 Final Office Action mailed Apr. 6, 2009.
U.S. Appl. No. 11/266,698 Final Office Action mailed Dec. 19, 2008.
U.S. Appl. No. 11/352,709 Office Action mailed May 14, 2009.
U.S. Appl. No. 11/207,911 Final Office Action mailed Apr. 29, 2008.
U.S. Appl. No. 11/207,911 Final Office Action mailed Apr. 15, 2009.
U.S. Appl. No. 11/226,747 Final Office Action mailed Sep. 25, 2008.
U.S. Appl. No. 11/266,744 Final Office Action mailed May 7, 2008.
U.S. Appl. No. 11/207,912 Final Office Action mailed May 7, 2008.
U.S. Appl. No. 11/207,912 Final Office Action mailed Apr. 28, 2009.
U.S. Appl. No. 11/266,663 Final Office Action mailed Sep. 16, 2008.
U.S. Appl. No. 11/331,694 Final Office Action mailed Mar. 30, 2009.
U.S. Appl. No. 11/331,692 Final Office Action mailed Feb. 9, 2009.
U.S. Appl. No. 11/207,914 Final Office Action mailed May 7, 2008.
U.S. Appl. No. 11/207,914 Final Office Action mailed Apr. 14, 2009.
U.S. Appl. No. 11/207,913 Final Office Action mailed Dec. 23, 2008.
U.S. Appl. No. 11/226,746 Final Office Action mailed Sep. 15, 2008.
U.S. Appl. No. 11/207,912 Office Action mailed Jan. 25, 2010.
U.S. Appl. No. 11/207,911 Notice of Allowance mailed Feb. 3, 2010.
U.S. Appl. No. 11/226,746 Final Office Action mailed Jul. 31, 2009.
U.S. Appl. No. 11/226,746 Office Action mailed Jan. 25, 2010.
U.S. Appl. No. 11/352,709 Final Office Action mailed Nov. 5, 2009.
Related Publications (1)
Number Date Country
20080161948 A1 Jul 2008 US