Invoking an audio hyperlink

Information

  • Patent Grant
  • Patent Number
    9,135,339
  • Date Filed
    Monday, February 13, 2006
  • Date Issued
    Tuesday, September 15, 2015
Abstract
Methods, systems, and computer program products are provided for invoking an audio hyperlink. Embodiments include identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink; playing an audio indication of the audio hyperlink at the predetermined playback time; receiving from a user an instruction to invoke the audio hyperlink; identifying a URI associated with the audio hyperlink; and invoking the URI. The audio file may include an audio subcomponent of a file also including video.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The field of the invention is data processing, or, more specifically, methods, systems, and products for invoking an audio hyperlink.


2. Description of Related Art


A ‘hyperlink’ is a reference to a URI which, when invoked, requests access to a resource identified by the URI. The term ‘hyperlink’ often includes links to URIs effected through conventional markup elements for visual display, as well as ‘Back’ and ‘Forward’ buttons on a toolbar in the GUI of a software application program. Users are typically made aware of hyperlinks by displaying the text associated with the hyperlink, or the URI itself, with highlighting, underscoring, special coloring, or some other treatment that sets the hyperlink apart from other screen text and identifies it as an available hyperlink. In addition, the screen display area of the anchor is often sensitized to user interface operations such as GUI pointer operations, for example, mouse clicks. Such conventional hyperlinks require a visual screen display to make a user aware of the hyperlink and a device for GUI pointer operations to invoke the hyperlink. Audio files, however, are typically played on devices with no visual display and with no device for GUI pointer operations.


SUMMARY OF THE INVENTION

Methods, systems, and computer program products are provided for invoking an audio hyperlink. Embodiments include identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink; playing an audio indication of the audio hyperlink at the predetermined playback time; receiving from a user an instruction to invoke the audio hyperlink; identifying a URI associated with the audio hyperlink; and invoking the URI. The audio file may include an audio subcomponent of a file also including video.


Identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink may be carried out by retrieving from an audio hyperlink data structure a playback time in the audio file pre-designated as having an audio hyperlink. Playing an audio indication of the audio hyperlink may be carried out by retrieving from an audio hyperlink data structure an audio indication ID identifying an audio indication of the audio hyperlink and augmenting the sound of the audio file in accordance with the audio indication ID.


Receiving from a user an instruction to invoke the audio hyperlink may be carried out by receiving speech from a user; converting the speech to text; and comparing the text to a grammar. Identifying a URI associated with the audio hyperlink may be carried out by retrieving from a data structure a URI in dependence upon a user speech instruction.


The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 sets forth a network diagram illustrating an exemplary system of computers each of which is capable of invoking an audio hyperlink according to the present invention and of annotating an audio file with an audio hyperlink according to the present invention.



FIG. 2 sets forth a line drawing of an exemplary audio file player capable of invoking an audio hyperlink according to the present invention.



FIG. 3 sets forth a block diagram of automated computing machinery comprising an exemplary computer useful in both invoking an audio hyperlink according to the present invention and annotating an audio file with an audio hyperlink according to the present invention.



FIG. 4 sets forth a flow chart illustrating an exemplary method for invoking an audio hyperlink.



FIG. 5 sets forth a flow chart illustrating an exemplary method for playing an audio indication of an audio hyperlink.



FIG. 6 sets forth a flow chart illustrating an exemplary method for receiving from a user an instruction to invoke the audio hyperlink.



FIG. 7 sets forth a flow chart illustrating an exemplary method for identifying a URI associated with the audio hyperlink.



FIG. 8 sets forth a flow chart illustrating an exemplary method for annotating an audio file with an audio hyperlink.



FIG. 9 sets forth a flow chart illustrating another exemplary method for annotating an audio file with an audio hyperlink.



FIG. 10 sets forth a line drawing of an audio hyperlink file annotation tool useful in annotating an audio file with an audio hyperlink according to the present invention.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Exemplary methods, systems, and products for invoking an audio hyperlink and for annotating an audio file with an audio hyperlink according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with FIG. 1. FIG. 1 sets forth a network diagram illustrating an exemplary system of computers each of which is capable of invoking an audio hyperlink according to the present invention and of annotating an audio file with an audio hyperlink according to the present invention. In the example of FIG. 1, a personal computer (108) is connected to a wide area network (‘WAN’) (101) through a wireline connection (120). A PDA (112) connects to the WAN (101) through a wireless connection (114). A workstation (104) connects to the WAN (101) through a wireline connection (122). A mobile phone (110) connects to the WAN (101) through a wireless connection (116). An MP3 audio file player (119) connects to the WAN (101) through a wireline connection (125). A laptop (126) connects to the WAN (101) through a wireless connection (118). A compact disc player (105) connects to the WAN (101) through a wireline connection (123).


Each of the computers (108, 112, 104, 110, 119, 126, 105) of FIG. 1 is capable of playing an audio file and of supporting an audio file player according to the present invention, which in turn supports an audio hyperlink module, computer program instructions for invoking an audio hyperlink. Such an audio hyperlink module is generally capable of identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink; playing an audio indication of the audio hyperlink at the predetermined playback time; receiving from a user an instruction to invoke the audio hyperlink; identifying a URI associated with the audio hyperlink; and invoking the URI.


An ‘audio hyperlink’ is a reference to a URI which, when invoked, requests access to a resource identified by the URI and whose existence is identified to users through an audio indication of the audio hyperlink. Audio hyperlinks according to the present invention are typically invoked through speech by a user, although audio hyperlinks may also be invoked by a user through an input device such as a keyboard, mouse, or other device as will occur to those of skill in the art.


A “URI” or “Uniform Resource Identifier” is an identifier of an object. Such an object may be in any namespace accessible through a network, a file accessible by invoking a filename, or any other object as will occur to those of skill in the art. URIs are functional for any access scheme, including for example, the File Transfer Protocol or “FTP,” Gopher, and the web. A URI as used in typical embodiments of the present invention usually includes an internet protocol address, or a domain name that resolves to an internet protocol address, identifying a location where a resource, particularly a web page, a CGI script, or a servlet, is located on a network, usually the Internet. URIs directed to particular resources, such as particular HTML files, JPEG files, or MPEG files, typically include a path name or file name locating and identifying a particular resource in a file system coupled to a network. To the extent that a particular resource, such as a CGI file or a servlet, is executable, for example to store or retrieve data, a URI often includes query parameters, or data to be stored, in the form of data encoded into the URI. Such parameters or data to be stored are referred to as ‘URI encoded data.’
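For concreteness, the following is a minimal sketch of producing ‘URI encoded data’ with Python's standard library; the endpoint and parameter names are hypothetical, not taken from this description.

    from urllib.parse import urlencode

    # Hypothetical CGI endpoint and query parameters, for illustration only.
    base_uri = "http://www.example.com/cgi-bin/lookup"
    params = {"item": "blue jeans", "store id": 42}

    # urlencode percent-encodes the parameters into 'URI encoded data'.
    uri = base_uri + "?" + urlencode(params)
    print(uri)  # http://www.example.com/cgi-bin/lookup?item=blue+jeans&store+id=42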


Each of the computers (108, 112, 104, 110, 119, 126, 105) of FIG. 1 is capable of supporting an audio file annotation tool comprising computer program instructions for annotating an audio file with an audio hyperlink. Such an audio file annotation tool is generally capable of receiving an identification of a playback time in an audio file to associate with an audio hyperlink; receiving a selection of a URI identifying a resource to be accessed upon the invocation of the audio hyperlink; receiving a selection of one or more keywords for invoking the audio hyperlink; and associating with the playback time in the audio file the URI and the one or more keywords.


The arrangement of servers and other devices making up the exemplary system illustrated in FIG. 1 is for explanation, not for limitation. Data processing systems useful according to various embodiments of the present invention may include additional servers, routers, other devices, and peer-to-peer architectures, not shown in FIG. 1, as will occur to those of skill in the art. Networks in such data processing systems may support many data communications protocols, including for example TCP (Transmission Control Protocol), IP (Internet Protocol), HTTP (HyperText Transfer Protocol), WAP (Wireless Access Protocol), HDTP (Handheld Device Transport Protocol), and others as will occur to those of skill in the art. Various embodiments of the present invention may be implemented on a variety of hardware platforms in addition to those illustrated in FIG. 1.


For further explanation, FIG. 2 sets forth a line drawing of an exemplary audio file player (304) capable of invoking an audio hyperlink according to the present invention. An ‘audio hyperlink’ is a reference to a URI which, when invoked, requests access to a resource identified by the URI and whose existence is identified to users through an audio indication of the audio hyperlink. Audio hyperlinks according to the present invention are typically invoked through speech by a user, although audio hyperlinks may also be invoked by a user through an input device such as a keyboard, mouse, or other device as will occur to those of skill in the art.


The audio file player (304) of FIG. 2 also includes a speech synthesis module (308), computer program instructions capable of receiving speech from a user, converting the speech to text, and comparing the text to a grammar so as to receive from the user a speech instruction to invoke the audio hyperlink. Examples of speech synthesis modules useful in invoking audio hyperlinks according to the present invention include IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and other speech synthesis modules as will occur to those of skill in the art.


The audio file player (304) of FIG. 2 includes an audio hyperlink module (302), computer program instructions for identifying a predetermined playback time in an audio file (402) pre-designated as having an associated audio hyperlink; playing an audio indication of the audio hyperlink at the predetermined playback time; receiving from a user an instruction to invoke the audio hyperlink; identifying a URI associated with the audio hyperlink; and invoking the URI.


Audio files useful in invoking audio hyperlinks and capable of being annotated with audio hyperlinks according to the present invention include standalone audio files as well as audio subcomponents of files that also include video. Examples of audio files useful with the present invention include wave files (‘.wav’), MPEG layer-3 files (‘.mp3’), and others as will occur to those of skill in the art.


The audio hyperlink in the example of FIG. 2 is implemented as a data structure (404) made available to the audio hyperlink module (302) in an audio file player (304). The audio hyperlink data structure (404) of FIG. 2 includes an audio file ID (405) uniquely identifying the audio file having an associated audio hyperlink. The audio hyperlink data structure (404) of FIG. 2 also includes a playback time (406) identifying a playback time in the audio file having an associated audio hyperlink.


The audio hyperlink data structure (404) of FIG. 2 includes an audio indication ID (407) uniquely identifying the audio indication for the audio hyperlink. An audio indication is a predetermined sound for augmenting the playback of the audio file that is designed to make a user aware of the existence of the audio hyperlink. Audio indications may be predetermined earcons designed to inform users of the existence of audio hyperlinks, pitch-shifts or phase-shifts during the playback of the audio file designed to inform users of the existence of audio hyperlinks, or any other audio indication that will occur to those of skill in the art. An audio player supporting more than one type of audio indication of audio hyperlinks in audio files may be informed of one of a plurality of supported audio indications through the use of an audio indication ID (407) in an audio hyperlink data structure (404) as in the example of FIG. 2.


The audio hyperlink data structure (404) of FIG. 2 includes a grammar (408). A grammar is a collection of one or more keywords, recognized by an audio player supporting audio files with audio hyperlinks, that when received trigger the invocation of the URI for the audio hyperlink. The audio hyperlink data structure (404) of FIG. 2 also includes a URI (410) identifying a resource referenced by the audio hyperlink. The URI identifies a resource accessed by invoking the audio hyperlink.
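As a sketch, the fields just described map naturally onto a simple record type; the field names follow the description above, while the concrete Python types are assumptions.

    from dataclasses import dataclass

    @dataclass
    class AudioHyperlink:
        """Sketch of the audio hyperlink data structure (404) described above."""
        audio_file_id: str        # (405) uniquely identifies the audio file
        playback_time: float      # (406) playback time (seconds is an assumption)
        audio_indication_id: str  # (407) identifies the audio indication to play
        grammar: list             # (408) keywords that trigger invocation
        uri: str                  # (410) resource referenced by the hyperlink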


Invoking an audio hyperlink and annotating an audio file with an audio hyperlink in accordance with the present invention is generally implemented with computers, that is, with automated computing machinery. In the system of FIG. 1, for example, all the nodes, servers, and communications devices are implemented to some extent at least as computers. For further explanation, therefore, FIG. 3 sets forth a block diagram of automated computing machinery comprising an exemplary computer (152) useful in both invoking an audio hyperlink according to the present invention and annotating an audio file with an audio hyperlink according to the present invention. The computer (152) of FIG. 3 includes at least one computer processor (156) or ‘CPU’ as well as random access memory (168) (‘RAM’) which is connected through a system bus (160) to processor (156) and to other components of the computer. Stored in RAM (168) is an audio file player (304) including an audio hyperlink module (302), computer program instructions for invoking an audio hyperlink that are capable of identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink; playing an audio indication of the audio hyperlink at the predetermined playback time; receiving from a user an instruction to invoke the audio hyperlink; identifying a URI associated with the audio hyperlink; and invoking the URI.


The audio file player (304) of FIG. 3 also includes a speech synthesis module (308), computer program instructions capable of receiving speech from a user, converting the speech to text, and comparing the text to a grammar so as to receive from the user a speech instruction to invoke the audio hyperlink. Examples of speech synthesis modules useful in invoking audio hyperlinks according to the present invention include IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and other speech synthesis modules as will occur to those of skill in the art.


Also stored in RAM (168) is an audio hyperlink file annotation tool (306), computer program instructions for annotating an audio file with an audio hyperlink that are capable of receiving an identification of a playback time in an audio file to associate with an audio hyperlink; receiving a selection of a Uniform Resource Identifier (‘URI’) identifying a resource to be accessed upon the invocation of the audio hyperlink; receiving a selection of one or more keywords for invoking the audio hyperlink; and associating with the playback time in the audio file the URI and the one or more keywords. Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include UNIX™, Linux™, Microsoft XP™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. The operating system (154), audio file player (304), audio hyperlink module (302), speech synthesis module (308), and audio hyperlink annotation tool (306) in the example of FIG. 3 are shown in RAM (168), but many components of such software are typically stored in non-volatile memory (166) as well.


Computer (152) of FIG. 3 includes non-volatile computer memory (166) coupled through a system bus (160) to processor (156) and to other components of the computer (152). Non-volatile computer memory (166) may be implemented as a hard disk drive (170), optical disk drive (172), electrically erasable programmable read-only memory space (so-called ‘EEPROM’ or ‘Flash’ memory) (174), RAM drives (not shown), or as any other kind of computer memory as will occur to those of skill in the art.


The example computer of FIG. 3 includes one or more input/output interface adapters (178). Input/output interface adapters in computers implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices (180) such as computer display screens, as well as user input from user input devices (181) such as keyboards and mice.


The exemplary computer (152) of FIG. 3 includes a communications adapter (167) for implementing data communications (184) with other computers (182). Such data communications may be carried out serially through RS-232 connections, through external buses such as USB, through data communications networks such as IP networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a network. Examples of communications adapters useful according to embodiments of the present invention include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired network communications, and 802.11b adapters for wireless network communications.


For further explanation, FIG. 4 sets forth a flow chart illustrating an exemplary method for invoking an audio hyperlink. As discussed above, an ‘audio hyperlink’ is a reference to a URI which, when invoked, requests access to a resource identified by the URI and whose existence is identified to users through an audio indication of the audio hyperlink. Audio hyperlinks according to the present invention are typically invoked through speech by a user, although audio hyperlinks may also be invoked by a user through an input device such as a keyboard, mouse, or other device as will occur to those of skill in the art. Audio files useful in invoking audio hyperlinks and capable of being annotated with audio hyperlinks according to the present invention include standalone audio files as well as audio subcomponents of files that also include video.


The audio hyperlink in the example of FIG. 4 is implemented as a data structure (404) made available to an audio hyperlink module in an audio file player. The audio hyperlink data structure (404) of FIG. 4 includes an audio file ID (405) uniquely identifying the audio file having an associated audio hyperlink. The audio hyperlink data structure (404) of FIG. 4 also includes a playback time (406) identifying a playback time in the audio file having an associated audio hyperlink.


The audio hyperlink data structure (404) of FIG. 4 includes an audio indication ID (407) uniquely identifying the audio indication for the audio hyperlink. An audio indication is a predetermined sound for augmenting the playback of the audio file that is designed to make a user aware of the existence of the audio hyperlink. Audio indications may be predetermined earcons designed to inform users of the existence of audio hyperlinks, pitch-shifts or phase-shifts during the playback of the audio file designed to inform users of the existence of audio hyperlinks, or any other audio indication that will occur to those of skill in the art. An audio player supporting more than one type of audio indication of audio hyperlinks in audio files may be informed of one of a plurality of supported audio indications through the use of an audio indication ID (407) in an audio hyperlink data structure (404) as in the example of FIG. 4.


The audio hyperlink data structure (404) of FIG. 4 includes a grammar (408). A grammar is a collection of one or more keywords, recognized by an audio player supporting audio files with audio hyperlinks, that when received trigger the invocation of the URI (410) for the audio hyperlink. The audio hyperlink data structure (404) of FIG. 4 also includes a URI identifying a resource referenced by the audio hyperlink. The URI identifies a resource accessed by invoking the audio hyperlink.


The method of FIG. 4 includes identifying (412) a predetermined playback time (406) in an audio file (402) pre-designated as having an associated audio hyperlink (404). Identifying (412) a predetermined playback time (406) in an audio file (402) pre-designated as having an associated audio hyperlink may be carried out by retrieving from an audio hyperlink data structure (404) a playback time (406) in the audio file (402) pre-designated as having an audio hyperlink (404).


The playback time (406) may be targeted to the playback of a single word, phrase, or sound that is conceptually related to the subject of the audio file. Consider, for further explanation, an audio file of an advertisement for a clothing store. The playback time of the audio file corresponding with the word “pants” may be associated with an audio hyperlink to a pants manufacturer. Playing an audio indication of the existence of the audio hyperlink informs a user of the existence of the audio hyperlink, allowing the user to navigate to the pants manufacturer through speech invocation of the URI if the user so desires.


The method of FIG. 4 also includes playing (414) an audio indication (416) of the audio hyperlink (404) at the predetermined playback time (406). Playing (414) an audio indication (416) of the audio hyperlink (404) at the predetermined playback time (406) may be carried out by playing an earcon designed to inform a user of the existence of the audio hyperlink, by pitch-shifting the playback of the audio file at the playback time having the associated audio hyperlink, by phase-shifting the playback of the audio file at the playback time having the associated audio hyperlink or any other way of playing an audio indication of an audio hyperlink that will occur to those of skill in the art.


The method of FIG. 4 also includes receiving (418) from a user (100) an instruction (420) to invoke the audio hyperlink (404). Receiving (418) from a user (100) an instruction (420) to invoke the audio hyperlink (404) may be carried out by receiving speech from a user (100), converting the speech to text, and comparing the text to a grammar (408), as discussed below with reference to FIG. 6. Receiving (418) from a user (100) an instruction (420) to invoke the audio hyperlink (404) may also be carried out by receiving an instruction through a user input device such as a keyboard, mouse, GUI input widget, or other device as will occur to those of skill in the art.


The method of FIG. 4 also includes identifying (422) a URI (424) associated with the audio hyperlink (404) and invoking (426) the URI (424). Identifying (422) a URI (424) associated with the audio hyperlink (404) may be carried out by retrieving from an audio hyperlink data structure a URI. Invoking (426) the URI (424) makes available the resource or resources referenced by the audio hyperlink.
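Taken together, the steps of FIG. 4 suggest a playback loop along the following lines. This is a sketch under assumptions: the `player.playback_times()` iterator, the keyboard stand-in for speech input, and the use of Python's `webbrowser` module to invoke the URI are all illustrative choices, not part of this description.

    import webbrowser

    def play_audio_indication(indication_id):
        # Placeholder: a real player would augment playback per the indication ID.
        print(f"[audio indication: {indication_id}]")

    def listen_for_instruction(grammar):
        # Placeholder for speech recognition (see FIG. 6): here the 'speech'
        # is typed text, compared against the grammar's keywords.
        text = input("instruction> ").strip().lower()
        return text if text in {kw.lower() for kw in grammar} else None

    def run_playback(audio_file_id, hyperlinks, player):
        """Sketch of the FIG. 4 flow for one audio file."""
        # Step 412: index hyperlinks by their pre-designated playback times.
        by_time = {h.playback_time: h for h in hyperlinks
                   if h.audio_file_id == audio_file_id}
        for current_time in player.playback_times():  # assumed player API
            link = by_time.get(current_time)
            if link is None:
                continue
            play_audio_indication(link.audio_indication_id)   # step 414
            if listen_for_instruction(link.grammar):          # step 418
                webbrowser.open(link.uri)                     # steps 422 and 426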


As discussed above, audio file players may be capable of supporting more than one type of audio indication designed to inform a user of the existence of an audio hyperlink. For further explanation, FIG. 5 sets forth a flow chart illustrating an exemplary method for playing an audio indication of an audio hyperlink. In the method of FIG. 5, playing (414) an audio indication (416) of the audio hyperlink (404) includes retrieving (504) from an audio hyperlink data structure (404) an audio indication ID (407) identifying an audio indication of the audio hyperlink (404). An audio indication ID may identify a particular type of audio indication, such as, for example, an earcon or an instruction to phase-shift or pitch-shift the playback of the audio file at the associated playback time; alternatively, an audio indication ID may identify a particular audio indication of an audio hyperlink, such as one of a plurality of supported earcons.


Playing (414) an audio indication (416) of the audio hyperlink (404) according to the method of FIG. 5 also includes augmenting (506) the sound of the audio file (402) in accordance with the audio indication ID (407). Augmenting (506) the sound of the audio file (402) in accordance with the audio indication ID (407) may be carried out by phase-shifting the playback of the audio file at the associated playback time, pitch-shifting the playback of the audio file at the associated playback time, or other ways of changing the normal playback of the audio file at the predetermined playback time. Augmenting (506) the sound of the audio file (402) in accordance with the audio indication ID (407) may also be carried out by adding an earcon, such as a ring or other sound, to the playback of the audio file.
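A sketch of augmenting playback in accordance with an audio indication ID follows. The ID values and the signal-processing choices (a crude resampling pitch-shift, polarity inversion as a stand-in for a phase-shift, a generated beep as the earcon) are assumptions for illustration only.

    import numpy as np

    def augment_playback(samples, rate, indication_id):
        """Sketch: alter a block of audio samples per the indication ID (407)."""
        if indication_id == "pitch-shift":
            # Crude pitch shift up ~one semitone by resampling (also shortens it).
            idx = np.arange(0, len(samples), 1.059).astype(int)
            return samples[np.clip(idx, 0, len(samples) - 1)]
        if indication_id == "phase-shift":
            # Polarity inversion as a simple stand-in for a phase shift.
            return -samples
        # Default: mix a short 880 Hz earcon over the start of the block.
        t = np.arange(int(0.2 * rate)) / rate
        beep = 0.3 * np.sin(2 * np.pi * 880 * t)
        out = samples.astype(float).copy()
        n = min(len(beep), len(out))
        out[:n] += beep[:n]
        return out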


As discussed above, audio hyperlinks are typically invoked by speech instructions from a user. For further explanation, therefore, FIG. 6 sets forth a flow chart illustrating an exemplary method for receiving from a user an instruction to invoke the audio hyperlink that includes receiving (508) speech (510) from a user (100) and converting (512) the speech (510) to text (514). Receiving (508) speech (510) from a user (100) and converting (512) the speech (510) to text (514) may be carried out by a speech synthesis engine in an audio file player supporting audio hyperlinking according to the present invention. Examples of such speech synthesis modules include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and other speech synthesis modules as will occur to those of skill in the art.


The method of FIG. 6 also includes comparing (516) the text (514) to a grammar (408). As discussed above, a grammar is a collection of one or more keywords, recognized by an audio player supporting audio files with audio hyperlinks, that when received trigger the invocation of the URI for the audio hyperlink. Text conversions of speech instructions matching keywords in the grammar are recognized as instructions to invoke the audio hyperlink.
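A sketch of step 516, comparing recognized text to the grammar, follows; treating the match as a case-insensitive substring test is an assumption, since no particular matching rule is specified above.

    def matches_grammar(recognized_text, grammar):
        """Return True if the text conversion of the user's speech contains
        any keyword from the grammar (case-insensitive)."""
        text = recognized_text.strip().lower()
        return any(keyword.lower() in text for keyword in grammar)

    # Example against a grammar used later in this description:
    assert matches_grammar("go to link", ["Play link", "Invoke", "Go to Link", "Play"])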


As discussed above, invoking an audio hyperlink is typically carried out by invoking a URI to access a resource referenced by the audio hyperlink. For further explanation, FIG. 7 sets forth a flow chart illustrating an exemplary method for identifying (422) a URI (424) associated with the audio hyperlink (404). Identifying (422) a URI (424) associated with the audio hyperlink (404) according to the method of FIG. 7 includes retrieving (520) from a data structure a URI (410) in dependence upon an instruction (420). Upon receiving an instruction (420) to invoke the audio hyperlink (404), the method of FIG. 7 continues by retrieving from an audio hyperlink data structure (404) the URI associated with the audio hyperlink and requesting access to the resource identified by the URI.


The use of an audio hyperlink data structure is for explanation and not for limitation. In fact, audio hyperlinks may be implemented in a number of ways. Audio hyperlinks may also be implemented through an improved anchor element, a markup language element improved to invoke audio hyperlinks. Consider for further explanation the following exemplary anchor element improved to implement an audio hyperlink:

    <audioHyperlink href=\\SrvrX\ResourceY playBackTime=00:08:44:44
            file=someFile.mp3 grammarID=grammar123>
        Some_Audio_Sound_ID
    </audioHyperlink>

This example anchor element includes a start tag <audioHyperlink>, an end tag </audioHyperlink>, an href attribute that identifies the target of the audio hyperlink as a resource named ‘ResourceY’ on a web server named ‘SrvrX,’ and an audio anchor. The ‘audio anchor’ is an audio indication of the existence of the audio hyperlink, the identification of which is set forth between the start tag and the end tag. That is, in this example, the anchor is an audio sound identified by the identification ‘Some_Audio_Sound_ID.’ Such an audio indication, when played, is designed to make a user aware of the audio hyperlink. The anchor element also identifies a playback time of 00:08:44:44 in the file someFile.mp3 as the playback time for playing the audio indication and identifies grammarID=grammar123 as a grammar including keywords for speech invocation of the audio hyperlink.
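A sketch of extracting the parts of such an anchor element follows. Because the example uses unquoted attribute values, the sketch leans on a regular expression tied to the attribute order shown above rather than an XML parser; a real implementation would need something more forgiving.

    import re

    PATTERN = (r"<audioHyperlink\s+href=(\S+)\s+playBackTime=(\S+)\s+"
               r"file=(\S+)\s+grammarID=(\S+)\s*>\s*(\S+)\s*</audioHyperlink>")

    def parse_audio_hyperlink(markup):
        """Pull href, playback time, file, grammar ID, and the audio anchor
        out of an <audioHyperlink> element shaped like the example above."""
        m = re.search(PATTERN, markup)
        if m is None:
            return None
        keys = ("href", "playBackTime", "file", "grammarID", "audioAnchor")
        return dict(zip(keys, m.groups()))

    example = (r"<audioHyperlink href=\\SrvrX\ResourceY playBackTime=00:08:44:44 "
               r"file=someFile.mp3 grammarID=grammar123> Some_Audio_Sound_ID "
               r"</audioHyperlink>")
    print(parse_audio_hyperlink(example))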


Audio hyperlinks advantageously provide added functionality to audio files, allowing users to access additional resources by invoking the audio hyperlinks. To provide users with those additional resources, audio files may be annotated with an audio hyperlink. For further explanation, FIG. 8 sets forth a flow chart illustrating an exemplary method for annotating an audio file with an audio hyperlink. The method of FIG. 8 includes receiving (602) an identification of a playback time (406) in an audio file (402) to associate with an audio hyperlink. Receiving an identification of a playback time in an audio file to have an associated audio hyperlink may include receiving a user instruction during the recording of the audio file. Receiving (602) an identification of a playback time (406) in an audio file (402) in such cases may be carried out by receiving a user instruction through an input device, such as, for example, buttons on an audio file recorder, to indicate a playback time for associating an audio hyperlink. Receiving (602) an identification of a playback time (406) in an audio file (402) for an associated audio hyperlink may also be carried out through the use of a tool such as an audio hyperlink file annotation tool on a computer, such as the audio hyperlink file annotation tool discussed below with reference to FIG. 10.


Receiving an identification of a playback time in an audio file to associate with an audio hyperlink may also include receiving a user instruction after the recording of the audio file. Receiving (602) an identification of a playback time (406) in an audio file (402) to associate with an audio hyperlink in such cases may be facilitated by use of a tool running on a computer such as the audio hyperlink file annotation tool discussed below with reference to FIG. 10. Such tools may include input widgets designed to receive from a user an identification of a playback time to associate with an audio hyperlink.


The method of FIG. 8 also includes receiving (604) a selection of a URI (410) identifying a resource to be accessed upon the invocation of the audio hyperlink. Receiving (604) a selection of a URI (410) identifying a resource to be accessed upon the invocation of the audio hyperlink may be carried out by use of a tool running on a computer such as the audio hyperlink file annotation tool discussed below with reference to FIG. 10. Such tools may include input widgets designed to facilitate a user's entry of a URI identifying a resource to be accessed upon invoking the audio hyperlink.


The method of FIG. 8 also includes receiving (606) a selection of one or more keywords (608) for invoking the audio hyperlink. Receiving (606) a selection of one or more keywords (608) for invoking the audio hyperlink may be carried out by use of a tool running on a computer such as the audio hyperlink file annotation tool discussed below with reference to FIG. 10. Such tools may include input widgets designed to facilitate a user's entry of one or more keywords creating a grammar for invoking an audio hyperlink.


The method of FIG. 8 also includes associating (610) with the playback time (406) in the audio file (402) the URI (410) and the one or more keywords (608). Associating (610) with the playback time (406) in the audio file (402) the URI (410) and the one or more keywords (608) may be carried out by creating an audio hyperlink data structure (404) including an identification of the playback time (406), a grammar (408), and the URI (410). As discussed above, an audio hyperlink data structure (404) is a data structure available to an audio file player that supports audio hyperlinking according to the present invention, containing information useful in invoking an audio hyperlink. The audio hyperlink data structure (404) of FIG. 8 includes an audio file ID (405) uniquely identifying the audio file having an associated audio hyperlink. The audio hyperlink data structure (404) of FIG. 8 also includes a playback time (406) identifying a playback time in the audio file having an associated audio hyperlink. The audio hyperlink data structure (404) of FIG. 8 includes an audio indication ID (407) uniquely identifying the audio indication for the audio hyperlink. The audio hyperlink data structure (404) of FIG. 8 includes a grammar (408) including keywords for speech invocation of the audio hyperlink. The audio hyperlink data structure (404) of FIG. 8 also includes a URI (410) identifying a resource referenced by the audio hyperlink.
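A sketch of the associating step (610) follows, building on the `AudioHyperlink` record sketched earlier; the function name and the default indication ID are assumptions.

    def annotate_audio_file(audio_file_id, playback_time, uri, keywords,
                            audio_indication_id="earcon-ring"):
        """Sketch of FIG. 8: associate the URI and keywords with the playback
        time by creating an audio hyperlink data structure (404)."""
        return AudioHyperlink(
            audio_file_id=audio_file_id,               # (405)
            playback_time=playback_time,               # (406), step 602
            audio_indication_id=audio_indication_id,   # (407)
            grammar=list(keywords),                    # (408), step 606
            uri=uri,                                   # (410), step 604
        )

    # Usage, mirroring the clothing-store example above (values hypothetical):
    link = annotate_audio_file("ad.mp3", 12.5, "http://www.example.com/pants",
                               ["Play link", "Invoke", "Go to Link", "Play"])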


Associating (610) with the playback time (406) in the audio file (402) the URI (410), and the one or more keywords (608) may also be carried out through an improved markup language anchor element. Such an anchor element may be improved to invoke audio hyperlinks as discussed above.


Associating (610) with the playback time (406) in the audio file (402) the URI (410) and the one or more keywords (608) may also include creating an audio hyperlink markup document including an identification of the playback time, a grammar, and a URI. An audio hyperlink markup document is any collection of text and markup that associates with a playback time in an audio file a URI and one or more keywords for invoking the audio hyperlink. For further explanation, consider the following exemplary audio hyperlink markup document:

















    <audio hyperlink markup document>
      <Audio Hyperlink ID = 1>
        <Playback Time>
          00:03:14:45
        </Playback Time>
        <Grammar>
          “Play link” “Invoke” “Go to Link” “Play”
        </Grammar>
        <URI>
          http://www.someURI.com
        </URI>
      </Audio Hyperlink ID = 1>
      <Audio Hyperlink ID = 2>
        <Playback Time>
          00:14:02:33
        </Playback Time>
        <Grammar>
          “Go” “Do It” “Play link” “Invoke” “Go to Link” “Play”
        </Grammar>
        <URI>
          http://www.someOtherWebSite.com
        </URI>
      </Audio Hyperlink ID = 2>
      ...
    </audio hyperlink markup document>










The audio hyperlink markup document in the example above includes a plurality of audio hyperlinks, including two audio hyperlinks identified as audio hyperlink ID=1 and audio hyperlink ID=2 by the tags <Audio Hyperlink ID=1></Audio Hyperlink ID=1> and <Audio Hyperlink ID=2></Audio Hyperlink ID=2>. Audio hyperlink ID=1 is an audio hyperlink associated with the playback time 00:03:14:45 in an audio file. The audio hyperlink references the URI ‘http://www.someURI.com’, which may be invoked by use of the speech keywords “Play link,” “Invoke,” “Go to Link,” and “Play” that make up a grammar for speech invocation of the audio hyperlink.


Audio hyperlink ID=2 is an audio hyperlink associated with the playback time 00:14:02:33 in an audio file. The audio hyperlink references the URI ‘http://www.someOtherWebSite.com’, which may be invoked by use of the speech keywords “Go,” “Do It,” “Play link,” “Invoke,” “Go to Link,” and “Play” in an associated grammar for speech invocation of the audio hyperlink.
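A sketch of consuming a document shaped like this example follows. Since the tags shown are not well-formed XML (the tag names contain spaces and embedded identifiers), the sketch uses regular expressions keyed to the layout above; a production format would warrant a proper schema and parser.

    import re

    def parse_markup_document(doc):
        """Extract (playback time, grammar keywords, URI) for each audio
        hyperlink in a markup document shaped like the example above."""
        hyperlinks = []
        for block in re.findall(r"<Audio Hyperlink ID = \d+>(.*?)</Audio Hyperlink",
                                doc, re.DOTALL):
            time = re.search(r"<Playback Time>\s*([\d:]+)", block)
            grammar = re.search(r"<Grammar>\s*(.*?)\s*</Grammar>", block, re.DOTALL)
            uri = re.search(r"<URI>\s*(\S+)\s*</URI>", block, re.DOTALL)
            hyperlinks.append({
                "playback_time": time.group(1) if time else None,
                # Keywords appear as curly-quoted strings within <Grammar>.
                "grammar": re.findall(r"“([^”]+)”", grammar.group(1)) if grammar else [],
                "uri": uri.group(1) if uri else None,
            })
        return hyperlinks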


The exemplary audio hyperlink markup document is for explanation and not for limitation. In fact, audio hyperlink markup documents may be implemented in many forms and all such forms are well within the scope of the present invention.


For further explanation, FIG. 9 sets forth a flow chart illustrating another exemplary method for annotating an audio file with an audio hyperlink. The method of FIG. 9 is similar to the method of FIG. 8 in that the method of FIG. 9 includes receiving (602) an identification of a playback time (406) in an audio file (402) to associate with an audio hyperlink; receiving (604) a selection of a URI (410) identifying a resource to be accessed upon the invocation of the audio hyperlink; receiving (606) a selection of one or more keywords (608) for invoking the audio hyperlink; and associating (610) with the playback time (406) in the audio file (402) the URI (410) and the one or more keywords (608). The method of FIG. 9, however, also includes receiving (702) a selection of an associated audio indication (704) for identifying the existence of the audio hyperlink (404) during playback of the audio file (402).


In the method of FIG. 9, associating (610) with the playback time (406) in the audio file (402) the URI (410), and the one or more keywords (608) also includes associating with the playback time (406) the audio indication (704). Associating with the playback time (406) the audio indication (704) may be carried out through the use of an audio hyperlink data structure, an improved anchor element, an audio hyperlink markup document and in other ways as will occur to those of skill in the art.


As discussed above, annotating an audio file with an audio hyperlink may be facilitated by the use of audio hyperlink GUI screens. For further explanation, therefore, FIG. 10 sets forth a line drawing of an audio hyperlink file annotation tool (802) useful in annotating an audio file with an audio hyperlink according to the present invention. The audio hyperlink file annotation tool (802) of FIG. 10 includes a GUI input widget (804) for receiving a user selection of an audio file to be annotated by inclusion of an audio hyperlink. In the example of FIG. 10, the audio file called ‘SomeAudioFileName.mp3’ has been selected for annotation to include an audio hyperlink.


The audio hyperlink file annotation tool (802) of FIG. 10 includes a GUI input widget (806) for receiving a user selection of a playback time in the audio file to have an associated audio hyperlink. In the example of FIG. 10, the audio file called ‘SomeAudioFileName.mp3’ has been selected for annotation to include an audio hyperlink at playback time 00:00:34:04.


The audio hyperlink file annotation tool (802) of FIG. 10 includes a GUI input widget (808) for receiving a user selection of a URI identifying a resource accessible by invoking the audio hyperlink. In the example of FIG. 10, the URI ‘http://www.someURI.com’ has been selected for association with the audio hyperlink.


The audio hyperlink file annotation tool (802) of FIG. 10 also includes a GUI selection widget (810) for receiving a user selection of one or more keywords creating a grammar for speech invocation of the audio hyperlink. In the example of FIG. 10, the available predetermined keywords include ‘invoke,’ ‘Do It,’ ‘Go to,’ and ‘Link.’ The pre-selected keywords are presented in the example of FIG. 10 for explanation and not for limitation. In fact, any keywords may be associated with an audio hyperlink, either by providing a list of such words for user selection or by providing for user input of keywords, as will occur to those of skill in the art.


The audio hyperlink file annotation tool (802) of FIG. 10 also includes a GUI selection widget (812) for receiving a user selection of an audio indication identifying to a user the existence of the audio hyperlink. In the example of FIG. 10, possible audio indications include a bell sound, a whistle sound, pitch-shifting the playback, and phase-shifting the playback of the audio file.
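A sketch of such a screen using Tkinter from the Python standard library follows; the widget layout mirrors the description of FIG. 10, and all names and defaults are illustrative only.

    import tkinter as tk

    root = tk.Tk()
    root.title("Audio Hyperlink File Annotation Tool")

    # Entry widgets corresponding to (804) audio file, (806) playback time,
    # and (808) URI, pre-filled with the values from the FIG. 10 example.
    entries = {}
    for row, (label, default) in enumerate([
            ("Audio file", "SomeAudioFileName.mp3"),
            ("Playback time", "00:00:34:04"),
            ("URI", "http://www.someURI.com")]):
        tk.Label(root, text=label).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root, width=40)
        entry.insert(0, default)
        entry.grid(row=row, column=1, sticky="w")
        entries[label] = entry

    # Selection widget (810): predetermined keywords forming the grammar.
    tk.Label(root, text="Keywords").grid(row=3, column=0, sticky="nw")
    keywords = tk.Listbox(root, selectmode=tk.MULTIPLE, height=4,
                          exportselection=False)
    for kw in ("invoke", "Do It", "Go to", "Link"):
        keywords.insert(tk.END, kw)
    keywords.grid(row=3, column=1, sticky="w")

    # Selection widget (812): audio indication identifying the hyperlink.
    tk.Label(root, text="Audio indication").grid(row=4, column=0, sticky="w")
    indication = tk.StringVar(value="bell")
    tk.OptionMenu(root, indication, "bell", "whistle",
                  "pitch-shift", "phase-shift").grid(row=4, column=1, sticky="w")

    root.mainloop()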


Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for invoking an audio hyperlink. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.


It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.

Claims
  • 1. A method for invoking an audio hyperlink, the method comprising: providing a user with an option to either enter a first keyword of the user's choice or select a second keyword provided to the user, either the first keyword or the second keyword being configured for use in a speech instruction to invoke the audio hyperlink; storing a plurality of keywords to be included in a grammar, the plurality of keywords comprising either the first keyword or the second keyword; identifying, through an audio anchor element, a predetermined playback time in an audio file pre-designated as having an association with the audio hyperlink; wherein the audio anchor element is a markup language element that identifies the audio file pre-designated as having an association with the audio hyperlink, a Uniform Resource Identifier (‘URI’) identifying a target resource associated with the audio hyperlink, an audio indication of the audio hyperlink, the predetermined playback time in the audio file pre-designated as having an association with the audio hyperlink, and the grammar including the plurality of keywords for speech invocation of a respective plurality of hyperlinks including the audio hyperlink; playing the audio indication of the audio hyperlink at the predetermined playback time; receiving from the user the speech instruction to invoke the audio hyperlink; identifying, through the audio anchor element, the URI associated with the audio hyperlink; and invoking the URI.
  • 2. The method of claim 1 wherein the audio file comprises an audio subcomponent of a file also including video.
  • 3. The method of claim 1 wherein identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink further comprises retrieving from the audio anchor element a playback time in the audio file pre-designated as having an audio hyperlink.
  • 4. The method of claim 1 wherein playing an audio indication of the audio hyperlink further comprises: retrieving from the audio anchor element an audio indication ID identifying an audio indication of the audio hyperlink; and augmenting the sound of the audio file in accordance with the audio indication ID.
  • 5. The method of claim 1 wherein receiving from a user a speech instruction to invoke the audio hyperlink further comprises: receiving speech from a user; converting the speech to text; and comparing the text to the grammar.
  • 6. The method of claim 1 wherein identifying a URI associated with the audio hyperlink includes retrieving from the audio anchor element the URI in dependence upon a user speech instruction.
  • 7. The method of claim 1, further comprising: receiving user input entering the second keyword for use in a speech instruction to invoke the audio hyperlink; and associating the second keyword with the audio hyperlink which points to the URI.
  • 8. A system for invoking an audio hyperlink, the system comprising a computer processor, a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions capable of: providing a user with an option to either enter a first keyword of the user's choice or select a second keyword provided to the user, either the first keyword or the second keyword being configured for use in a speech instruction to invoke the audio hyperlink; storing a plurality of keywords to be included in a grammar, the plurality of keywords comprising either the first keyword or the second keyword; identifying, through an audio anchor element, a predetermined playback time in an audio file pre-designated as having an association with the audio hyperlink; wherein the audio anchor element is a markup language element that identifies the audio file pre-designated as having an association with the audio hyperlink, a Uniform Resource Identifier (‘URI’) identifying a target resource associated with the audio hyperlink, an audio indication of the audio hyperlink, the predetermined playback time in the audio file pre-designated as having an association with the audio hyperlink, and the grammar including the plurality of keywords for speech invocation of a respective plurality of hyperlinks including the audio hyperlink; playing the audio indication of the audio hyperlink at the predetermined playback time; receiving from the user the speech instruction to invoke the audio hyperlink; identifying, through the audio anchor element, the URI associated with the audio hyperlink; and invoking the URI.
  • 9. The system of claim 8 wherein the audio file comprises an audio subcomponent of a file also including video.
  • 10. The system of claim 8 wherein the computer memory also has disposed within it computer program instructions capable of retrieving from the audio anchor element a playback time in the audio file pre-designated as having the audio hyperlink.
  • 11. The system of claim 8 wherein the computer memory also has disposed within it computer program instructions capable of: retrieving from the audio anchor element an audio indication ID identifying an audio indication of the audio hyperlink; and augmenting the sound of the audio file in accordance with the audio indication ID.
  • 12. The system of claim 8 wherein the computer memory also has disposed within it computer program instructions capable of: receiving speech from a user; converting the speech to text; and comparing the text to a grammar.
  • 13. The system of claim 8 wherein the computer memory also has disposed within it computer program instructions capable of retrieving from the audio anchor element the URI in dependence upon a user speech instruction.
  • 14. The system of claim 8, wherein the computer memory also has disposed within it computer program instructions capable of: receiving user input entering the second keyword for use in a speech instruction to invoke the audio hyperlink; and associating the second keyword with the audio hyperlink which points to the URI.
  • 15. A computer program product for invoking an audio hyperlink, the computer program product embodied on a non-transitory computer-readable medium, the computer program product comprising: computer program instructions for providing a user with an option to either enter a first keyword of the user's choice or select a second keyword provided to the user, either the first keyword or the second keyword being configured for use in a speech instruction to invoke the audio hyperlink; computer program instructions for storing a plurality of keywords to be included in a grammar, the plurality of keywords comprising either the first keyword or the second keyword; computer program instructions for identifying, through an audio anchor element, a predetermined playback time in an audio file pre-designated as having an association with the audio hyperlink; wherein the audio anchor element is a markup language element that identifies the audio file pre-designated as having an association with the audio hyperlink, a Uniform Resource Identifier (‘URI’) identifying a target resource associated with the audio hyperlink, an audio indication of the audio hyperlink, the predetermined playback time in the audio file pre-designated as having an association with the audio hyperlink, and the grammar including the plurality of keywords for speech invocation of a respective plurality of hyperlinks including the audio hyperlink; computer program instructions for playing the audio indication of the audio hyperlink at the predetermined playback time; computer program instructions for receiving from a user the speech instruction to invoke the audio hyperlink; computer program instructions for identifying, through an audio anchor element, the URI associated with the audio hyperlink; and computer program instructions for invoking the URI.
  • 16. The computer program product of claim 15 wherein the audio file comprises an audio subcomponent of a file also including video.
  • 17. The computer program product of claim 15 wherein computer program instructions for identifying a predetermined playback time in an audio file pre-designated as having an associated audio hyperlink further comprise computer program instructions for retrieving from the audio anchor element a playback time in the audio file pre-designated as having an audio hyperlink.
  • 18. The computer program product of claim 15 wherein computer program instructions for playing an audio indication of the audio hyperlink further comprise: computer program instructions for retrieving from the audio anchor element an audio indication ID identifying an audio indication of the audio hyperlink; and computer program instructions for augmenting the sound of the audio file in accordance with the audio indication ID.
  • 19. The computer program product of claim 15 wherein computer program instructions for receiving from the user the speech instruction to invoke the audio hyperlink further comprise: computer program instructions for receiving speech from the user; computer program instructions for converting the speech to text; and computer program instructions for comparing the text to the grammar.
  • 20. The computer program product of claim 15 wherein computer program instructions for identifying a URI associated with the audio hyperlink includes computer program instructions for retrieving from the audio anchor element the URI in dependence upon the user speech instruction.
  • 21. The computer program product of claim 15 wherein computer program instructions for playing an audio indication of the audio hyperlink further comprise: receiving user input entering the second keyword for use in a speech instruction to invoke the audio hyperlink; and associating the second keyword with the audio hyperlink which points to the URI.
20050144002 Ps Jun 2005 A1
20050144022 Evans Jun 2005 A1
20050152344 Chiu et al. Jul 2005 A1
20050154580 Horowitz Jul 2005 A1
20050154969 Bodin et al. Jul 2005 A1
20050190897 Eberle et al. Sep 2005 A1
20050195999 Takemura et al. Sep 2005 A1
20050203887 Joshi et al. Sep 2005 A1
20050203959 Muller Sep 2005 A1
20050203960 Suarez et al. Sep 2005 A1
20050232242 Karaoguz Oct 2005 A1
20050234727 Chiu Oct 2005 A1
20050251513 Tenazas Nov 2005 A1
20050261905 Pyo et al. Nov 2005 A1
20050262119 Mawdsley Nov 2005 A1
20050286705 Contolini et al. Dec 2005 A1
20050288926 Benco Dec 2005 A1
20060008252 Kim Jan 2006 A1
20060008258 Kawana Jan 2006 A1
20060020662 Robinson Jan 2006 A1
20060031447 Holt Feb 2006 A1
20060041549 Gundersen et al. Feb 2006 A1
20060048212 Tsuruoka Mar 2006 A1
20060050794 Tan Mar 2006 A1
20060050996 King et al. Mar 2006 A1
20060052089 Khurana et al. Mar 2006 A1
20060075224 Tao Apr 2006 A1
20060085199 Jain Apr 2006 A1
20060095848 Naik May 2006 A1
20060100877 Zhang et al. May 2006 A1
20060112844 Hiller Jun 2006 A1
20060114987 Roman Jun 2006 A1
20060123082 Digate et al. Jun 2006 A1
20060129403 Liao et al. Jun 2006 A1
20060136449 Parker Jun 2006 A1
20060140360 Crago Jun 2006 A1
20060149781 Blankinship Jul 2006 A1
20060155698 Vayssiere Jul 2006 A1
20060159109 Lamkin Jul 2006 A1
20060165104 Kaye Jul 2006 A1
20060168507 Hansen Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060184679 Izdepski Aug 2006 A1
20060190616 Mayerhofer Aug 2006 A1
20060193450 Flynt Aug 2006 A1
20060200743 Thong et al. Sep 2006 A1
20060206533 MacLaurin Sep 2006 A1
20060224739 Anantha Oct 2006 A1
20060233327 Roberts Oct 2006 A1
20060242663 Gogerty Oct 2006 A1
20060253699 Della-Libera Nov 2006 A1
20060265503 Jones et al. Nov 2006 A1
20060282317 Rosenberg Dec 2006 A1
20060282822 Weng Dec 2006 A1
20060287745 Richenstein et al. Dec 2006 A1
20060288011 Gandhi Dec 2006 A1
20070005339 Jaquinta Jan 2007 A1
20070027692 Sharma et al. Feb 2007 A1
20070027859 Harney et al. Feb 2007 A1
20070027958 Haslam Feb 2007 A1
20070043462 Terada et al. Feb 2007 A1
20070043735 Bodin et al. Feb 2007 A1
20070043758 Bodin et al. Feb 2007 A1
20070043759 Bodin et al. Feb 2007 A1
20070061132 Bodin Mar 2007 A1
20070061229 Ramer Mar 2007 A1
20070061266 Moore Mar 2007 A1
20070061371 Bodin et al. Mar 2007 A1
20070061401 Bodin et al. Mar 2007 A1
20070061711 Bodin Mar 2007 A1
20070061712 Bodin Mar 2007 A1
20070073728 Klein, Jr. Mar 2007 A1
20070077921 Hayashi Apr 2007 A1
20070078655 Semkow Apr 2007 A1
20070083540 Gundla Apr 2007 A1
20070091206 Bloebaum Apr 2007 A1
20070100628 Bodin et al. May 2007 A1
20070100629 Bodin et al. May 2007 A1
20070100787 Lim et al. May 2007 A1
20070100836 Eichstaedt et al. May 2007 A1
20070101274 Kurlander May 2007 A1
20070101313 Bodin et al. May 2007 A1
20070112844 Tribble May 2007 A1
20070118426 Barnes, Jr. May 2007 A1
20070124458 Kumar May 2007 A1
20070124802 Anton, Jr. May 2007 A1
20070130589 Davis Jun 2007 A1
20070138999 Lee et al. Jun 2007 A1
20070147274 Vasa Jun 2007 A1
20070165538 Bodin Jul 2007 A1
20070168191 Bodin et al. Jul 2007 A1
20070168194 Bodin et al. Jul 2007 A1
20070174326 Schwartz Jul 2007 A1
20070191008 Bucher Aug 2007 A1
20070192327 Bodin Aug 2007 A1
20070192672 Bodin Aug 2007 A1
20070192673 Bodin Aug 2007 A1
20070192674 Bodin Aug 2007 A1
20070192675 Bodin Aug 2007 A1
20070192676 Bodin Aug 2007 A1
20070192683 Bodin Aug 2007 A1
20070192684 Bodin Aug 2007 A1
20070198267 Jones et al. Aug 2007 A1
20070208687 O'Conor Sep 2007 A1
20070213857 Bodin Sep 2007 A1
20070213986 Bodin Sep 2007 A1
20070214147 Bodin Sep 2007 A1
20070214148 Bodin Sep 2007 A1
20070214149 Bodin Sep 2007 A1
20070214485 Bodin Sep 2007 A1
20070220024 Putterman Sep 2007 A1
20070239837 Jablokov Oct 2007 A1
20070253699 Yen Nov 2007 A1
20070276837 Bodin Nov 2007 A1
20070276865 Bodin Nov 2007 A1
20070276866 Bodin Nov 2007 A1
20070277088 Bodin Nov 2007 A1
20070277233 Bodin Nov 2007 A1
20080034278 Tsou Feb 2008 A1
20080052415 Kellerman Feb 2008 A1
20080082576 Bodin Apr 2008 A1
20080082635 Bodin Apr 2008 A1
20080155616 Logan et al. Jun 2008 A1
20080161948 Bodin Jul 2008 A1
20080162131 Bodin Jul 2008 A1
20080162559 Bodin Jul 2008 A1
20080275893 Bodin Nov 2008 A1
20090271178 Bodin Oct 2009 A1
Foreign Referenced Citations (9)
Number Date Country
1123075 May 1996 CN
1298173 Jun 2001 CN
1368719 Apr 2008 CN
1197884 Apr 2002 EP
2369955 Jun 2002 GB
2001-0071517 Jul 2001 KR
2004-0078888 Sep 2004 KR
WO 0182139 Nov 2001 WO
WO 0106846 Nov 2005 WO
Non-Patent Literature Citations (96)
Entry
Braun N. & Doner R., "Using Sonic Hyperlinks in Web-TV", International Conf. on Auditory Displays (ICAD '98), Nov. 1, 1998, XP002428659, Glasgow, UK.
Hoschka et al., “Synchronized Multimedia Integration Language (SMIL) 1.0 Specification”, Apr. 9, 1998, doi: http://www.w3.org/TR/1998/PR-smil-19980409/#anchor.
Casalaina et al., “BMRC Procedures: RealMedia Guide”, doi: http://web.archive.org/web/20030218131051/http://bmrc.berkeley.edu/info/procedures/rm.html.
Barbara, et al.; “The Audio Web”; Proc. 6th Int. Conf. on Information and Knowledge Management; Jan. 1997; XP002352519; Las Vegas; USA; pp. 97-104.
Office Action Dated Jun. 11, 2009 in U.S. Appl. No. 11/352,710.
Office Action Dated May 19, 2009 in U.S. Appl. No. 11/352,727.
Final Office Action Dated Apr. 20, 2009 in U.S. Appl. No. 11/266,559.
Final Office Action Dated Oct. 30, 2008 in U.S. Appl. No. 11/266,662.
Final Office Action Dated Apr. 6, 2009 in U.S. Appl. No. 11/266,675.
Final Office Action Dated Dec. 19, 2008 in U.S. Appl. No. 11/266,698.
Office Action Dated May 14, 2009 in U.S. Appl. No. 11/352,709.
Final Office Action Dated Apr. 29, 2008 in U.S. Appl. No. 11/207,911.
Final Office Action Dated Apr. 15, 2009 in U.S. Appl. No. 11/207,911.
Final Office Action Dated Sep. 25, 2008 in U.S. Appl. No. 11/226,747.
Final Office Action Dated May 7, 2008 in U.S. Appl. No. 11/226,744.
Final Office Action Dated May 7, 2008 in U.S. Appl. No. 11/207,912.
Final Office Action Dated Apr. 28, 2009 in U.S. Appl. No. 11/207,912.
Final Office Action Dated Sep. 16, 2008 in U.S. Appl. No. 11/266,663.
Final Office Action Dated Mar. 30, 2009 in U.S. Appl. No. 11/331,694.
Final Office Action Dated Feb. 9, 2009 in U.S. Appl. No. 11/331,692.
Final Office Action Dated May 7, 2008 in U.S. Appl. No. 11/207,914.
Final Office Action Dated Apr. 14, 2009 in U.S. Appl. No. 11/207,914.
Final Office Action Dated Dec. 23, 2008 in U.S. Appl. No. 11/207,913.
Final Office Action Dated Sep. 15, 2008 in U.S. Appl. No. 11/226,746.
U.S. Appl. No. 11/619,236 Final Office Action mailed Oct. 22, 2010.
U.S. Appl. No. 12/178,448 Office Action mailed Apr. 2, 2010.
U.S. Appl. No. 12/178,448 Final Office Action mailed Sep. 14, 2010.
Office Action Dated Jan. 25, 2010 in U.S. Appl. No. 11/207,912.
Notice of Allowance Dated Feb. 3, 2010 in U.S. Appl. No. 11/207,911.
Final Office Action Dated Jul. 31, 2009 in U.S. Appl. No. 11/226,746.
Office Action Dated Jan. 25, 2010 in U.S. Appl. No. 11/226,746.
Final Office Action Dated Nov. 5, 2009 in U.S. Appl. No. 11/352,709.
Final Office Action Dated Nov. 5, 2009 in U.S. Appl. No. 11/352,746 (cited Braun & Doner reference not provided).
Office Action Dated Apr. 29, 2009 in U.S. Appl. No. 11/352,698.
Office Action Dated Aug. 17, 2009 in U.S. Appl. No. 11/331,692.
Mohan et al.; "Adapting Multimedia Internet Content for Universal Access"; IEEE Transactions on Multimedia, vol. 1, No. 1; pp. 104-114; Mar. 1999.
Buchanan et al.; “Representing Aggregated Works in the Digital Library”, University College London; London; pp. 247-256; Jun. 18, 2007.
Advertisement of TextToSpeechMP3.com; "Text to Speech MP3 with Natural Voices 1.71"; (author unknown); ZDNet.co.uk, London; from website http://downloads.zdnet.co.uk/0,1000000375,39148337s,00.htm; pp. 1-5; Oct. 5, 2004.
Andrade et al.; “Managing Multimedia Content and Delivering Services Across Multiple Client Platforms using XML”; Symposium; London Communications; London; pp. 1-8; Sep. 2002.
International Business Machines Corporation; PCT Search Report; Mar. 27, 2007; Application No. PCT/EP2007/050594.
U.S. Appl. No. 11/352,680 Office Action mailed Jun. 23, 2006.
U.S. Appl. No. 11/372,317 Office Action mailed Jul. 8, 2009.
U.S. Appl. No. 11/536,733 Final Office Action mailed Jul. 22, 2009.
U.S. Appl. No. 11/420,017 Office Action mailed Jul. 9, 2009.
U.S. Appl. No. 11/536,781 Office Action mailed Jul. 17, 2009.
U.S. Appl. No. 11/420,014 Office Action mailed Jul. 23, 2009.
U.S. Appl. No. 11/420,018 Final Office Action mailed Jul. 21, 2009.
U.S. Appl. No. 11/352,760 Office Action mailed Apr. 15, 2009.
U.S. Appl. No. 11/352,760 Final Office Action mailed Nov. 16, 2009.
U.S. Appl. No. 11/352,824 Notice of Allowance mailed Jun. 5, 2008.
U.S. Appl. No. 11/352,824 Office Action mailed Jan. 22, 2008.
U.S. Appl. No. 11/352,680 Final Office Action mailed Dec. 21, 2009.
U.S. Appl. No. 11/352,679 Office Action mailed Apr. 30, 2009.
U.S. Appl. No. 11/352,679 Final Office Action mailed Oct. 29, 2009.
U.S. Appl. No. 11/372,323 Office Action mailed Oct. 28, 2008.
U.S. Appl. No. 11/372,318 Office Action mailed Mar. 18, 2008.
U.S. Appl. No. 11/372,318 Final Office Action mailed Jul. 9, 2008.
U.S. Appl. No. 11/372,329 Final Office Action mailed Nov. 6, 2009.
U.S. Appl. No. 11/372,325 Office Action mailed Feb. 25, 2009.
U.S. Appl. No. 11/372,329 Office Action mailed Feb. 27, 2009.
U.S. Appl. No. 11/536,781 Final Office Action mailed Jan. 15, 2010.
U.S. Appl. No. 11/420,015 Office Action mailed Mar. 20, 2008.
U.S. Appl. No. 11/420,015 Final Office Action mailed Sep. 3, 2008.
U.S. Appl. No. 11/420,015 Office Action mailed Dec. 2, 2008.
U.S. Appl. No. 11/420,016 Office Action mailed Mar. 3, 2008.
U.S. Appl. No. 11/420,016 Final Office Action mailed Aug. 29, 2008.
U.S. Appl. No. 11/420,017 Final Office Action mailed Dec. 31, 2009.
U.S. Appl. No. 11/420,018 Office Action mailed Mar. 21, 2008.
U.S. Appl. No. 11/420,018 Final Office Action mailed Aug. 29, 2008.
U.S. Appl. No. 11/420,018 Office Action mailed Dec. 3, 2008.
U.S. Appl. No. 11/536,733 Office Action mailed Dec. 30, 2008.
U.S. Appl. No. 11/619,216 Office Action mailed Jan. 26, 2010.
U.S. Appl. No. 11/619,253 Office Action mailed Apr. 2, 2009.
U.S. Appl. No. 11/352,760 Office Action mailed Sep. 16, 2010.
U.S. Appl. No. 11/352,680 Office Action mailed Jun. 10, 2010.
U.S. Appl. No. 11/352,680 Final Office Action mailed Sep. 7, 2010.
U.S. Appl. No. 11/352,679 Office Action mailed May 28, 2010.
U.S. Appl. No. 11/352,679 Final Office Action mailed Nov. 15, 2010.
U.S. Appl. No. 11/372,317 Office Action mailed Sep. 23, 2010.
U.S. Appl. No. 11/372,319 Office Action mailed Apr. 21, 2010.
U.S. Appl. No. 11/372,319 Final Office Action mailed Jul. 2, 2010.
U.S. Appl. No. 11/420,014 Final Office Action mailed Apr. 3, 2010.
U.S. Appl. No. 11/420,017 Final Office Action mailed Sep. 23, 2010.
U.S. Appl. No. 11/619,216 Final Office Action mailed Jun. 25, 2010.
Mohan et al. "Adapting Multimedia Internet Content for Universal Access." IBM T.J. Watson Research Center, pp. 1-35.
Lu et al., “Audio Ticker”. WWW7 / Computer Networks 30(1-7): 721-722 (1998).
http://web.archive.org/web/20031203063919/http://eastbaytech.com/im.htm.
http://www.odiogo.com.
FeedForAll at http://web.archive.org/web/20050813012521/http://www.feedforall.com/itune-tutorial-tags.htm.
Internet Archive for FeedForAll at http://web.archive.org/web/*/http://www.feedforall.com/itune-tutorial-tags.htm.
Audioblog at http://web.archive.org/web/20040923235033.
Zhang, Liang-Jie, et al., “XML-Based Advanced UDDI Search Mechanism for B2B Integration”, Electronic Commerce Research, vol. 3, Nos. 1-2, Jan. 2003, pp. 24-42.
He, Tian, et al., “AIDA: Adaptive Application-Independent Data Aggregation in Wireless Sensor Networks”, TECS, vol. 3, Issue 2, May 2004, pp. 426-457.
Braun N. et al.: "Temporal hypermedia for multimedia applications in the World Wide Web", Computational Intelligence and Multimedia Applications, 1999 (ICCIMA '99), Third International Conference, New Delhi, India, Sep. 23, 1999; Los Alamitos, CA, USA: IEEE Comput. Soc.; XP010355646; ISBN: 0-7695-0300-4.
Frankie James: "AHA: audio HTML access", Computer Networks and ISDN Systems, vol. 29, No. 8-13, Sep. 1997, pp. 1395-1404, XP002430654.
PCT Search Report, Sep. 2, 2007; PCT Application No. PCT/EP2007/051260.
Related Publications (1)
Number Date Country
20070192672 A1 Aug 2007 US