1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, systems, and products for creating an audio menu describing media content of a media player.
2. Description of Related Art
Portable media players are often lightweight, making such players user friendly and popular. In addition to playing media files, many conventional portable media players include display screens for displaying metadata associated with those media files. To read the metadata from the display screen, users must either be able to see or be in a position to look at the display screen of the portable media player. Blind users, and users whose eyes are otherwise occupied, cannot use the display screen to read the metadata associated with the media files supported by the portable media player.
Methods, systems, and computer program products are provided for creating an audio menu describing media content of a media player. Embodiments include retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.
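The four steps above can be sketched in outline. The following Python sketch is illustrative only: the function names (`retrieve_metadata`, `convert_to_speech`, `create_audio_menu`) and the record layout are assumptions, not elements of the disclosed system, and the text-to-speech step is stubbed out with plain announcement text standing in for rendered audio.

```python
# Illustrative pipeline only; all names and record layouts are assumptions.

def retrieve_metadata(media_files):
    """Step 1: collect metadata describing each managed media file."""
    return [{"title": f["title"], "artist": f["artist"]} for f in media_files]

def convert_to_speech(metadata):
    """Step 2: convert a portion of the metadata to speech.
    A real system would invoke a text-to-speech engine; here the
    announcement text stands in for the rendered audio."""
    return ["{0}, by {1}".format(m["title"], m["artist"]) for m in metadata]

def create_audio_menu(media_files):
    """Steps 3 and 4: create media files for the audio menu and save
    the speech in their audio portions (plain records here)."""
    announcements = convert_to_speech(retrieve_metadata(media_files))
    return [{"audio": a} for a in announcements]
```

For example, `create_audio_menu([{"title": "Song A", "artist": "X"}])` yields one menu entry whose audio portion announces "Song A, by X".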
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, systems, and products for creating audio menus describing media contents of media players are described with reference to the accompanying drawings, beginning with
The audio menu creation module (232) is an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu. The digital media player application (234) is capable of transferring the one or more media files having the speech of the metadata describing the other media files managed by the media player to a portable media player (108).
A portable media player (108) is a device capable of rendering media files and other content. Examples of portable media players include the iPod® from Apple and the Creative Zen Vision from Creative Labs. The portable media player (108) of
The arrangement of devices making up the exemplary system illustrated in
Creating an audio menu describing media content of a media player according to the present invention is generally implemented with computers, that is, with automated computing machinery. In the system of
Stored in RAM (168) is a digital media player application (234). As mentioned above, a digital media player application (234) is an application that manages media content in media files such as audio files and video files. Such digital media player applications are typically capable of storing the managed media files on a portable media player. Examples of digital media player applications include Music Match™, iTunes®, Songbird™ and others as will occur to those of skill in the art.
Also stored in RAM (168) is the audio menu creation module (232), an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.
Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include UNIX™, Linux™, Microsoft Windows NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art.
The exemplary computer (114) of
The exemplary computer (114) of
The exemplary computer (114) of
For further explanation,
Retrieving (302) metadata (304) describing the media files managed by the media player alternatively may be carried out by individually retrieving metadata describing each media file from each of the media files managed by the media player themselves as discussed below with reference to
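Individually retrieving metadata from each media file can be illustrated with a minimal reader for ID3v1 tags, one common per-file tag format for MP3 audio that stores a fixed 128-byte record at the end of the file. The disclosure does not name a specific tag format; the function name and the returned fields here are assumptions for illustration.

```python
def read_id3v1(path):
    """Read the 128-byte ID3v1 tag, if present, from the end of a
    media file.  Returns a dict of title/artist/album, or None when
    no tag is found.  (ID3v1 layout: 'TAG' marker, then 30-byte
    title, artist, and album fields, a 4-byte year, a 30-byte
    comment, and a 1-byte genre.)"""
    with open(path, "rb") as f:
        f.seek(0, 2)                 # seek to end of file
        if f.tell() < 128:
            return None              # file too small to hold a tag
        f.seek(-128, 2)
        tag = f.read(128)
    if tag[:3] != b"TAG":
        return None                  # no ID3v1 tag present

    def field(lo, hi):
        # Fields are NUL-padded fixed-width byte strings.
        return tag[lo:hi].split(b"\x00")[0].decode("latin-1").strip()

    return {"title": field(3, 33),
            "artist": field(33, 63),
            "album": field(63, 93)}
```

A media player application would apply such a reader to each managed media file in turn, accumulating the metadata that the audio menu will describe.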
The method of
Examples of speech engines capable of converting at least a portion of the metadata to speech for recording in the audio portion of a media file include IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class. Each of these text-to-speech engines is composed of a front end that takes input in the form of text and outputs a symbolic linguistic representation to a back end that outputs the received symbolic linguistic representation as a speech waveform.
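The front-end/back-end split can be illustrated with a deliberately toy sketch: the "symbolic linguistic representation" here is just a list of uppercase word tokens, and the "waveform" is faked as one length value per symbol. All names are invented; a real engine would emit phonemes, stress, and prosody marks, and produce audio samples.

```python
# Toy illustration of the text-to-speech front-end/back-end split.
# Names and representations are invented for illustration only.

def front_end(text):
    """Front end: text in, symbolic linguistic representation out."""
    return [w.upper() for w in text.replace(",", " ").split()]

def back_end(symbols):
    """Back end: symbolic representation in, 'waveform' out.
    The waveform is faked here as one length value per symbol."""
    return [len(s) for s in symbols]

def synthesize(text):
    """Full engine: front end feeds its output to the back end."""
    return back_end(front_end(text))
```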
Typically, speech synthesis engines operate by using one or more of the following categories of speech synthesis: articulatory synthesis, formant synthesis, and concatenative synthesis. Articulatory synthesis uses computational biomechanical models of speech production, such as models for the glottis and the moving vocal tract. Typically, an articulatory synthesizer is controlled by simulated representations of muscle actions of the human articulators, such as the tongue, the lips, and the glottis. Computational biomechanical models of speech production solve time-dependent, 3-dimensional differential equations to compute the synthetic speech output. Typically, articulatory synthesis has very high computational requirements and produces less natural-sounding fluent speech than the other two methods discussed below.
Formant synthesis uses a set of rules for controlling a highly simplified source-filter model that assumes that the glottal source is completely independent from a filter which represents the vocal tract. The filter that represents the vocal tract is determined by control parameters such as formant frequencies and bandwidths. Each formant is associated with a particular resonance, or peak in the filter characteristic, of the vocal tract. The glottal source generates either stylized glottal pulses for periodic sounds or noise for aspiration. Formant synthesis generates highly intelligible, but not completely natural-sounding, speech. However, formant synthesis has a low memory footprint and only moderate computational requirements.
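A minimal sketch of the source-filter idea behind formant synthesis follows. It assumes typical textbook formant frequencies for an /a/-like vowel (roughly 700, 1200, and 2600 Hz); the "filter" is a crude resonance-peak weighting applied to each harmonic of the glottal source, not a production formant filter, and none of the parameters come from the disclosure.

```python
import math
import struct
import wave

def formant_vowel(path, formants=(700, 1200, 2600), f0=120,
                  seconds=0.5, rate=16000):
    """Write a vowel-like sound to a mono 16-bit WAV file by summing
    harmonics of the glottal source frequency f0, each weighted by
    its proximity to the given formant (resonance) frequencies."""
    n = int(seconds * rate)
    samples = []
    for i in range(n):
        t = i / rate
        s = 0.0
        for k in range(1, 40):          # harmonics of the glottal source
            freq = k * f0
            # Source-filter idea: gain peaks when the harmonic lies
            # near a formant frequency (crude resonance curve).
            gain = sum(1.0 / (1.0 + ((freq - fm) / 100.0) ** 2)
                       for fm in formants)
            s += gain * math.sin(2 * math.pi * freq * t)
        samples.append(s)
    peak = max(abs(x) for x in samples)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)               # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(30000 * x / peak))
                               for x in samples))
```

The low memory footprint noted above is visible here: the only stored "voice data" is a handful of formant frequencies, in contrast to the voice database a concatenative system requires.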
Concatenative synthesis uses actual snippets of recorded speech that are cut from recordings and stored in an inventory or voice database, either as waveforms or as encoded speech. These snippets make up the elementary speech segments such as, for example, phones and diphones. Phones are composed of a vowel or a consonant, whereas diphones are composed of phone-to-phone transitions that encompass the second half of one phone plus the first half of the next phone. Some concatenative synthesizers use so-called demi-syllables, in effect applying the diphone method to the time scale of syllables. Concatenative synthesis then strings together, or concatenates, elementary speech segments selected from the voice database, and, after optional decoding, outputs the resulting speech signal. Because concatenative systems use snippets of recorded speech, they have the highest potential for sounding like natural speech, but concatenative systems require large amounts of database storage for the voice database.
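A toy sketch of concatenative selection follows, with an invented three-entry "voice database" mapping diphone names to short waveform snippets represented as sample lists. Real systems store recorded or encoded speech, select among many candidate snippets, and smooth the joins; none of that is modeled here.

```python
# Invented voice database: diphone name -> waveform snippet
# (lists of samples stand in for recorded speech).
VOICE_DB = {
    "h-e": [0.1, 0.3, 0.2],
    "e-l": [0.2, 0.4],
    "l-o": [0.3, 0.1, 0.0],
}

def concatenate(diphones, db=VOICE_DB):
    """String together, i.e. concatenate, the snippets selected from
    the voice database for the requested diphone sequence."""
    signal = []
    for d in diphones:
        signal.extend(db[d])   # real systems also smooth the joins
    return signal
```

The database-size trade-off noted above shows up directly: every diphone the synthesizer may ever need must already exist as a stored snippet.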
The method of
As discussed above, retrieving metadata describing the media files managed by the media player may be carried out by retrieving a metadata file describing the media files. For further explanation, therefore,
In the method of
In the method of
In the method of
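Retrieving a single metadata file describing all of the managed media files might look like the following sketch. The `<library>`/`<track>` XML schema is invented for illustration; real digital media player applications use their own formats, such as iTunes' library XML.

```python
import xml.etree.ElementTree as ET

# Invented metadata-file format describing all managed media files.
SAMPLE = """<library>
  <track><title>Song A</title><artist>X</artist></track>
  <track><title>Song B</title><artist>Y</artist></track>
</library>"""

def parse_metadata_file(xml_text):
    """Parse one metadata file describing the media files managed by
    the media player, returning one metadata record per track."""
    root = ET.fromstring(xml_text)
    return [{"title": t.findtext("title"), "artist": t.findtext("artist")}
            for t in root.findall("track")]
```

Compared with opening every media file individually, a single metadata file lets the audio menu creation module gather all of the descriptive text in one pass.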
As discussed above, retrieving metadata describing the media files managed by a media player may also include individually retrieving metadata describing each media file from each of the media files managed by the media player themselves. For further explanation, therefore,
Creating an audio menu describing media content of a media player according to the method of
In the method of
As an aid to users of audio menus according to the present invention, some embodiments of the present invention provide not only a description of the media files managed by the media player, but also a description of the organization of those media files, so that a user is informed of that organization and thereby empowered to navigate the media files using the audio menu. For further explanation,
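Describing the organization of the media files along with the files themselves can be sketched as a walk over a nested playlist structure. The nested-dict layout and the announcement wording below are assumptions for illustration; the resulting strings are the text a text-to-speech engine would render into the audio menu.

```python
def describe_organization(node):
    """Emit announcement strings describing both the media files and
    their organization.  Playlists are dicts; tracks are leaf values."""
    lines = []
    for name, child in node.items():
        if isinstance(child, dict):
            lines.append("Playlist: {0}, containing {1} entries"
                         .format(name, len(child)))
            lines.extend(describe_organization(child))
        else:
            lines.append("Track: {0}".format(name))
    return lines
```

Hearing "Playlist: Road Trip, containing 2 entries" before the tracks themselves tells the listener where in the structure the following announcements belong.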
The method of
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for creating an audio menu describing media content of a media player. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5819220 | Sarukkai et al. | Oct 1998 | A |
6032260 | Sasmazel et al. | Feb 2000 | A |
6178511 | Cohen et al. | Jan 2001 | B1 |
6240391 | Ball et al. | May 2001 | B1 |
6266649 | Linden et al. | Jul 2001 | B1 |
6311194 | Sheth et al. | Oct 2001 | B1 |
6771743 | Butler et al. | Aug 2004 | B1 |
6944591 | Raghunandan | Sep 2005 | B1 |
6976082 | Ostermann et al. | Dec 2005 | B1 |
6993476 | Dutta et al. | Jan 2006 | B1 |
7039643 | Sena et al. | May 2006 | B2 |
7062437 | Kovales et al. | Jun 2006 | B2 |
7130850 | Russell-Falla et al. | Oct 2006 | B2 |
7171411 | Lewis et al. | Jan 2007 | B1 |
7356470 | Roth et al. | Apr 2008 | B2 |
7454346 | Dodrill et al. | Nov 2008 | B1 |
20010027396 | Sato | Oct 2001 | A1 |
20010047349 | Easty et al. | Nov 2001 | A1 |
20010049725 | Kosuge | Dec 2001 | A1 |
20010054074 | Hayashi | Dec 2001 | A1 |
20020013708 | Walker et al. | Jan 2002 | A1 |
20020032564 | Ehsani et al. | Mar 2002 | A1 |
20020032776 | Hasegawa et al. | Mar 2002 | A1 |
20020054090 | Silva et al. | May 2002 | A1 |
20020062216 | Guenther et al. | May 2002 | A1 |
20020062393 | Borger et al. | May 2002 | A1 |
20020095292 | Mittal et al. | Jul 2002 | A1 |
20020152210 | Johnson et al. | Oct 2002 | A1 |
20020178007 | Slotznick et al. | Nov 2002 | A1 |
20020194286 | Matsuura et al. | Dec 2002 | A1 |
20020198720 | Takagi et al. | Dec 2002 | A1 |
20030028380 | Freeland et al. | Feb 2003 | A1 |
20030033331 | Sena et al. | Feb 2003 | A1 |
20030055868 | Fletcher et al. | Mar 2003 | A1 |
20030103606 | Rhie et al. | Jun 2003 | A1 |
20030110272 | du Castel et al. | Jun 2003 | A1 |
20030110297 | Tabatabai et al. | Jun 2003 | A1 |
20030115056 | Gusler et al. | Jun 2003 | A1 |
20030115064 | Gusler et al. | Jun 2003 | A1 |
20030126293 | Bushey | Jul 2003 | A1 |
20030158737 | Csicsatka | Aug 2003 | A1 |
20030160770 | Zimmerman | Aug 2003 | A1 |
20030167234 | Bodmer et al. | Sep 2003 | A1 |
20030172066 | Cooper et al. | Sep 2003 | A1 |
20040003394 | Ramaswamy | Jan 2004 | A1 |
20040041835 | Lu | Mar 2004 | A1 |
20040068552 | Kotz et al. | Apr 2004 | A1 |
20040088349 | Beck et al. | May 2004 | A1 |
20040199375 | Ehsani et al. | Oct 2004 | A1 |
20040201609 | Obrador | Oct 2004 | A1 |
20040254851 | Himeno et al. | Dec 2004 | A1 |
20050015254 | Beaman | Jan 2005 | A1 |
20050045373 | Born | Mar 2005 | A1 |
20050071780 | Muller et al. | Mar 2005 | A1 |
20050076365 | Popov et al. | Apr 2005 | A1 |
20050108521 | Silhavy et al. | May 2005 | A1 |
20050232242 | Karaoguz et al. | Oct 2005 | A1 |
20060008258 | Kawana et al. | Jan 2006 | A1 |
20060020662 | Robinson | Jan 2006 | A1 |
20060048212 | Tsuruoka et al. | Mar 2006 | A1 |
20060050794 | Tan et al. | Mar 2006 | A1 |
20060052089 | Khurana et al. | Mar 2006 | A1 |
20060075224 | Tao | Apr 2006 | A1 |
20060095848 | Naik | May 2006 | A1 |
20060123082 | Digate et al. | Jun 2006 | A1 |
20060136449 | Parker et al. | Jun 2006 | A1 |
20060140360 | Crago et al. | Jun 2006 | A1 |
20060155698 | Vayssiere | Jul 2006 | A1 |
20060159109 | Lamkin et al. | Jul 2006 | A1 |
20060173985 | Moore | Aug 2006 | A1 |
20060184679 | Izdepski et al. | Aug 2006 | A1 |
20060190616 | Mayerhofer et al. | Aug 2006 | A1 |
20060224739 | Anantha | Oct 2006 | A1 |
20060233327 | Roberts et al. | Oct 2006 | A1 |
20060265503 | Jones et al. | Nov 2006 | A1 |
20060288011 | Gandhi et al. | Dec 2006 | A1 |
20070027958 | Haslam | Feb 2007 | A1 |
20070061266 | Moore et al. | Mar 2007 | A1 |
20070073728 | Klein et al. | Mar 2007 | A1 |
20070083540 | Gundia et al. | Apr 2007 | A1 |
20070091206 | Bloebaum | Apr 2007 | A1 |
20070100836 | Eichstaedt et al. | May 2007 | A1 |
20070112844 | Tribble et al. | May 2007 | A1 |
20070118426 | Barnes, Jr. | May 2007 | A1 |
20070124458 | Kumar | May 2007 | A1 |
20070124802 | Anton et al. | May 2007 | A1 |
20070130589 | Davis et al. | Jun 2007 | A1 |
20070174326 | Schwartz et al. | Jul 2007 | A1 |
20070191008 | Bucher et al. | Aug 2007 | A1 |
20070192327 | Bodin | Aug 2007 | A1 |
20070192674 | Bodin | Aug 2007 | A1 |
20070192683 | Bodin | Aug 2007 | A1 |
20070192684 | Bodin et al. | Aug 2007 | A1 |
20070208687 | O'Conor et al. | Sep 2007 | A1 |
20070213857 | Bodin | Sep 2007 | A1 |
20070213986 | Bodin | Sep 2007 | A1 |
20070214147 | Bodin et al. | Sep 2007 | A1 |
20070214148 | Bodin | Sep 2007 | A1 |
20070214149 | Bodin | Sep 2007 | A1 |
20070214485 | Bodin | Sep 2007 | A1 |
20070220024 | Putterman et al. | Sep 2007 | A1 |
20070253699 | Yen et al. | Nov 2007 | A1 |
20070276837 | Bodin et al. | Nov 2007 | A1 |
20070276865 | Bodin et al. | Nov 2007 | A1 |
20070276866 | Bodin et al. | Nov 2007 | A1 |
20070277088 | Bodin | Nov 2007 | A1 |
20070277233 | Bodin | Nov 2007 | A1 |
20080034278 | Tsou et al. | Feb 2008 | A1 |
20080052415 | Kellerman et al. | Feb 2008 | A1 |
20080082576 | Bodin | Apr 2008 | A1 |
20080082635 | Bodin | Apr 2008 | A1 |
20080161948 | Bodin | Jul 2008 | A1 |
20080162131 | Bodin | Jul 2008 | A1 |
20080275893 | Bodin et al. | Nov 2008 | A1 |
Number | Date | Country |
---|---|---|
WO 0182139 | Nov 2001 | WO |
WO 2005106846 | Nov 2005 | WO |
Number | Date | Country | |
---|---|---|---|
20080082576 A1 | Apr 2008 | US |