Audio menus describing media contents of media players

Information

  • Patent Grant
  • Patent Number
    7,831,432
  • Date Filed
    Friday, September 29, 2006
  • Date Issued
    Tuesday, November 9, 2010
Abstract
Methods, systems, and computer program products are provided for creating an audio menu describing media content of a media player. Embodiments include retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The field of the invention is data processing, or, more specifically, methods, systems, and products for creating an audio menu describing media content of a media player.


2. Description of Related Art


Portable media players are often lightweight, making such players user-friendly and popular. Many conventional portable media players include display screens for displaying metadata associated with the media files supported by the portable media players, in addition to being capable of playing the media files themselves. To read the metadata from the display screen, users must either be able to see or be in a position to look at the display screen of the portable media player. Blind users and users who are currently visually occupied cannot use the display screen to read the metadata associated with the media files supported by the portable media player.


SUMMARY OF THE INVENTION

Methods, systems, and computer program products are provided for creating an audio menu describing media content of a media player. Embodiments include retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.


The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 sets forth a block diagram of an exemplary system for creating an audio menu describing media content of a media player according to the present invention.



FIG. 2 sets forth a block diagram of automated computing machinery including a computer useful in creating an audio menu describing media content of a media player according to the present invention.



FIG. 3 sets forth a flow chart illustrating an exemplary method for creating an audio menu describing media content of a media player.



FIG. 4 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player that includes retrieving a metadata file describing the media files managed by the media player.



FIG. 5 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player.



FIG. 6 sets forth a flow chart illustrating an exemplary method for creating an audio file organization menu.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Exemplary methods, systems, and products for creating audio menus describing media contents of media players are described with reference to the accompanying drawings, beginning with FIG. 1. FIG. 1 sets forth a block diagram of an exemplary system for creating an audio menu describing media content of a media player according to the present invention. The system of FIG. 1 includes a personal computer (106) having installed upon it a digital media player application (234) and an audio menu creation module (454). A digital media player application (234) is an application that manages media content in media files such as audio files and video files. Such digital media player applications are typically capable of storing the media files on a portable media player. Examples of digital media player applications include Music Match™, iTunes®, Songbird™ and others as will occur to those of skill in the art.


The audio menu creation module (454) is an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu. The digital media player application (234) is capable of transferring the one or more media files having the speech of the metadata describing the other media files managed by the media player to a portable media player (108).


A portable media player (108) is a device capable of rendering media files and other content. Examples of portable media players include the iPod® from Apple and Creative Zen Vision from Creative labs. The portable media player (108) of FIG. 1 includes a display screen (110) for rendering video content and visually rendering metadata describing media files stored on the portable media player (108). The portable media player (108) of FIG. 1 also includes headphones (112) for rendering audio content of media files stored on the portable media player.


The arrangement of devices making up the exemplary system illustrated in FIG. 1 is for explanation, not for limitation. Data processing systems useful according to various embodiments of the present invention may include additional servers, routers, other devices, and peer-to-peer architectures, not shown in FIG. 1, as will occur to those of skill in the art. Networks in such data processing systems may support many data communications protocols, including for example TCP (Transmission Control Protocol), IP (Internet Protocol), HTTP (HyperText Transfer Protocol), WAP (Wireless Access Protocol), HDTP (Handheld Device Transport Protocol), and others as will occur to those of skill in the art. Various embodiments of the present invention may be implemented on a variety of hardware platforms in addition to those illustrated in FIG. 1.


Creating an audio menu describing media content of a media player according to the present invention is generally implemented with computers, that is, with automated computing machinery. In the system of FIG. 1, for example, all the devices are implemented to some extent at least as computers. For further explanation, therefore, FIG. 2 sets forth a block diagram of automated computing machinery comprising a computer useful in creating an audio menu describing media content of a media player according to the present invention. The computer (114) of FIG. 2 includes at least one computer processor (156) or ‘CPU’ as well as random access memory (168) (‘RAM’), which is connected through a system bus (160) to the processor (156) and to other components of the computer (114).


Stored in RAM (168) is a digital media player application (234). As mentioned above, a digital media player application (234) is an application that manages media content in media files such as audio files and video files. Such digital media player applications are typically capable of storing the managed media files on a portable media player. Examples of digital media player applications include Music Match™, iTunes®, Songbird™ and others as will occur to those of skill in the art.


Also stored in RAM (168) is the audio menu creation module (232), an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.


Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include UNIX, Linux, Microsoft Windows NT, AIX, IBM's i5/OS, and others as will occur to those of skill in the art.


The exemplary computer (114) of FIG. 2 includes non-volatile computer memory (166) coupled through a system bus (160) to a processor (156) and to other components of the computer (114). Non-volatile computer memory (166) may be implemented as a hard disk drive (170), an optical disk drive (172), an electrically erasable programmable read-only memory space (so-called ‘EEPROM’ or ‘Flash’ memory) (174), RAM drives (not shown), or as any other kind of computer memory as will occur to those of skill in the art.


The exemplary computer (114) of FIG. 2 includes one or more input/output interface adapters (178). Input/output interface adapters in computers implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices (180) such as computer display screens, as well as user input from user input devices (181) such as keyboards and mice.


The exemplary computer (114) of FIG. 2 includes a communications adapter (167) for implementing data communications (184) with rendering devices (202). Such data communications may be carried out serially through RS-232 connections, through external buses such as USB, through data communications networks such as IP networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a network. Examples of communications adapters useful for creating an audio menu describing media content of a media player include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired network communications, 802.11b adapters for wireless network communications, and others as will occur to those of skill in the art.


For further explanation, FIG. 3 sets forth a flow chart illustrating an exemplary method for creating an audio menu describing media content of a media player. The method of FIG. 3 includes retrieving (302) metadata (304) describing the media files managed by the media player as discussed below with reference to FIG. 4. Retrieving (302) metadata (304) describing the media files managed by the media player according to the method of FIG. 3 may be carried out by retrieving a metadata file describing the media files. iTunes®, for example, maintains an eXtensible Markup Language (‘XML’) library file describing the media files managed by iTunes®.
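Retrieving such a library file can be sketched in Python using the standard-library `plistlib` module. The XML below is a simplified, hypothetical stand-in for a real iTunes-style library file, which contains many more keys and entries; the track names shown are illustrative only:

```python
import plistlib

# Simplified, hypothetical iTunes-style XML library file (real library
# files contain many more keys and entries than shown here).
LIBRARY_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Tracks</key>
  <dict>
    <key>1001</key>
    <dict>
      <key>Name</key><string>Take Five</string>
      <key>Artist</key><string>Dave Brubeck</string>
      <key>Genre</key><string>Jazz</string>
    </dict>
  </dict>
</dict>
</plist>
"""

def retrieve_metadata(library_xml):
    """Retrieve per-track metadata dictionaries from an XML library file."""
    library = plistlib.loads(library_xml)
    return list(library.get("Tracks", {}).values())

tracks = retrieve_metadata(LIBRARY_XML)
```

Each returned dictionary corresponds to the metadata (304) describing one media file and can then be handed to the text-to-speech step described below.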


Retrieving (302) metadata (304) describing the media files managed by the media player alternatively may be carried out by individually retrieving metadata describing each media file from each of the media files managed by the media player themselves as discussed below with reference to FIG. 5. Some media file formats, such as for example, the MPEG file format, provide a portion of the file for storing metadata. MPEG file formats support, for example, an ID3v2 tag prepended to the audio portion of the file for storing metadata describing the file.
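Reading such an embedded tag can be sketched as follows, assuming an ID3v2.3 tag whose text frames use the Latin-1 encoding; real tags support several text encodings, unsynchronization, and extended headers, which a production reader must also handle:

```python
def read_id3v2_title(data):
    """Return the TIT2 (title) text from an ID3v2.3 tag, or None."""
    if data[:3] != b"ID3":
        return None
    # The tag size is stored as four 7-bit "syncsafe" bytes.
    tag_size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
    pos, end = 10, 10 + tag_size
    while pos + 10 <= end:
        frame_id = data[pos:pos + 4]
        frame_size = int.from_bytes(data[pos + 4:pos + 8], "big")
        body = data[pos + 10:pos + 10 + frame_size]
        if frame_id == b"TIT2":          # title frame; first byte is encoding
            return body[1:].decode("latin-1")
        pos += 10 + frame_size
    return None

# Build a minimal ID3v2.3 tag holding one TIT2 frame, as it would appear
# prepended to the audio portion of an MPEG file (title is illustrative).
frame_body = b"\x00Take Five"            # leading 0x00 selects Latin-1 text
frame = b"TIT2" + len(frame_body).to_bytes(4, "big") + b"\x00\x00" + frame_body
header = b"ID3\x03\x00\x00" + bytes([0, 0, 0, len(frame)])
title = read_id3v2_title(header + frame)
```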


The method of FIG. 3 also includes converting (306) at least a portion of the metadata (304) to speech (308). Converting (306) at least a portion of the metadata (304) to speech (308) may be carried out by processing the extracted metadata using a text-to-speech engine in order to produce a speech presentation of the extracted metadata and then recording the speech produced by the text-to-speech engine in the audio portion of a media file.


Examples of speech engines capable of converting at least a portion of the metadata to speech for recording in the audio portion of a media file include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class. Each of these text-to-speech engines is composed of a front end that takes input in the form of text and outputs a symbolic linguistic representation to a back end that outputs the received symbolic linguistic representation as a speech waveform.
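The front-end/back-end split can be illustrated with a toy Python sketch. The grapheme-to-phoneme table and the tone-per-symbol "waveform" below are deliberately simplistic placeholders, not a depiction of how any of the engines named above actually works:

```python
import math

# Toy grapheme-to-phoneme table (illustrative only; real front ends use
# dictionaries and letter-to-sound rules).
G2P = {"a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW"}

def front_end(text):
    """Front end: text in, symbolic linguistic representation out."""
    return [G2P.get(ch, ch.upper()) for ch in text.lower() if ch.isalpha()]

def back_end(phonemes, rate=8000):
    """Back end: symbols in, waveform samples out. Here each symbol
    simply becomes a short fixed-frequency tone, standing in for real
    articulatory, formant, or concatenative synthesis."""
    samples = []
    for i, _ in enumerate(phonemes):
        freq = 200.0 + 25.0 * i          # placeholder pitch contour
        samples += [math.sin(2 * math.pi * freq * t / rate)
                    for t in range(rate // 50)]  # 20 ms per symbol
    return samples

waveform = back_end(front_end("menu"))
```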


Typically, speech synthesis engines operate by using one or more of the following categories of speech synthesis: articulatory synthesis, formant synthesis, and concatenative synthesis. Articulatory synthesis uses computational biomechanical models of speech production, such as models for the glottis and the moving vocal tract. Typically, an articulatory synthesizer is controlled by simulated representations of muscle actions of the human articulators, such as the tongue, the lips, and the glottis. Computational biomechanical models of speech production solve time-dependent, 3-dimensional differential equations to compute the synthetic speech output. Typically, articulatory synthesis has very high computational requirements, and has lower results in terms of natural-sounding fluent speech than the other two methods discussed below.


Formant synthesis uses a set of rules for controlling a highly simplified source-filter model that assumes that the glottal source is completely independent from a filter which represents the vocal tract. The filter that represents the vocal tract is determined by control parameters such as formant frequencies and bandwidths. Each formant is associated with a particular resonance, or peak in the filter characteristic, of the vocal tract. The glottal source generates either stylized glottal pulses for periodic sounds or noise for aspiration. Formant synthesis generates highly intelligible, but not completely natural sounding, speech. However, formant synthesis has a low memory footprint and only moderate computational requirements.


Concatenative synthesis uses actual snippets of recorded speech that are cut from recordings and stored in an inventory or voice database, either as waveforms or as encoded speech. These snippets make up the elementary speech segments such as, for example, phones and diphones. Phones are composed of a vowel or a consonant, whereas diphones are composed of phone-to-phone transitions that encompass the second half of one phone plus the first half of the next phone. Some concatenative synthesizers use so-called demi-syllables, in effect applying the diphone method to the time scale of syllables. Concatenative synthesis then strings together, or concatenates, elementary speech segments selected from the voice database, and, after optional decoding, outputs the resulting speech signal. Because concatenative systems use snippets of recorded speech, they have the highest potential for sounding like natural speech, but concatenative systems require large amounts of database storage for the voice database.


The method of FIG. 3 also includes creating (310) one or more media files (312) for the audio menu and saving (314) the speech (308) in the audio portion of the one or more media files (312) for the audio menu. Examples of media file formats useful in creating an audio menu describing media content of a media player according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art. To further aid users of audio menus according to the present invention, some embodiments also include prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file. Prepending such speech provides additional functionality to each media file managed by the media player and advantageously provides a speech description of the media file prior to the content of the media file itself. This speech description prepended to the audio content allows users to determine from the description whether to play the content.
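Prepending a spoken description to the audio portion of a media file can be sketched with the standard-library `wave` module, assuming for simplicity that both the synthesized speech and the content are mono 16-bit WAV data in the same format (the byte patterns below are stand-ins for real audio):

```python
import io
import wave

def make_wav(frames, rate=8000):
    """Build a minimal mono 16-bit WAV file in memory."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(frames)
    return buf.getvalue()

def prepend_speech(speech_wav, content_wav):
    """Prepend a spoken description to the audio portion of a media file."""
    out_frames = b""
    params = None
    for data in (speech_wav, content_wav):
        with wave.open(io.BytesIO(data), "rb") as w:
            params = w.getparams()
            out_frames += w.readframes(w.getnframes())
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setparams(params)          # assumes both inputs share one format
        w.writeframes(out_frames)
    return buf.getvalue()

speech = make_wav(b"\x00\x01" * 100)    # stand-in for synthesized speech
content = make_wav(b"\x02\x03" * 300)   # stand-in for the song itself
combined = prepend_speech(speech, content)
```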


As discussed above, retrieving metadata describing the media files managed by the media player may be carried out by retrieving a metadata file describing the media files. For further explanation, therefore, FIG. 4 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player that includes retrieving a metadata file describing the media files managed by the media player. The method of FIG. 4 is similar to the method of FIG. 3 in that the method of FIG. 4 includes retrieving (302) metadata (304) describing the media files managed by the media player; converting (306) at least a portion of the metadata (304) to speech (308); creating (310) one or more media files (312) for the audio menu; and saving (314) the speech (308) in the audio portion of the one or more media files (312) for the audio menu.


In the method of FIG. 4, retrieving (302) metadata includes retrieving (402) a metadata file (404) describing the media files managed by the media player. As mentioned above, one example of such a metadata file is an eXtensible Markup Language (‘XML’) library file describing the media files managed by iTunes®.


In the method of FIG. 4, retrieving (302) metadata also includes identifying (406) in dependence upon the metadata file (404) an organization (408) of the media files managed by the media player. Identifying (406) such an organization (408) may include determining a logical structure, such as for example a tree structure, of the organization of the media files, identifying playlists, determining the organization of media files by artist or genre, or identifying any other organization of the media files as will occur to those of skill in the art. Identifying (406) in dependence upon the metadata file (404) an organization (408) of the media files managed by the media player may be carried out by parsing the markup of a metadata file such as, for example, the XML library file describing the media files managed by iTunes®, to determine a logical structure of the organization of the media files, to identify playlists, to determine any organization of media files by artist or genre, or to identify any other organization of the media files as will occur to those of skill in the art.
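Identifying playlists from such a metadata file can be sketched as follows. The plist layout and key names (‘Playlists’, ‘Playlist Items’, ‘Track ID’) mirror the iTunes library convention, but the entries themselves are hypothetical:

```python
import plistlib

# Hypothetical iTunes-style library with one playlist referencing two tracks.
LIBRARY_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Tracks</key>
  <dict>
    <key>1001</key>
    <dict><key>Name</key><string>Take Five</string></dict>
    <key>1002</key>
    <dict><key>Name</key><string>So What</string></dict>
  </dict>
  <key>Playlists</key>
  <array>
    <dict>
      <key>Name</key><string>Jazz Favorites</string>
      <key>Playlist Items</key>
      <array>
        <dict><key>Track ID</key><integer>1001</integer></dict>
        <dict><key>Track ID</key><integer>1002</integer></dict>
      </array>
    </dict>
  </array>
</dict>
</plist>
"""

def identify_organization(library_xml):
    """Map each playlist name to the titles of its member tracks."""
    library = plistlib.loads(library_xml)
    tracks = {int(k): v for k, v in library.get("Tracks", {}).items()}
    organization = {}
    for playlist in library.get("Playlists", []):
        ids = [item["Track ID"] for item in playlist.get("Playlist Items", [])]
        organization[playlist["Name"]] = [tracks[i]["Name"] for i in ids]
    return organization

org = identify_organization(LIBRARY_XML)
```

The resulting mapping gives the logical sequence in which speech for the audio menu could then be saved.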


In the method of FIG. 4, saving (314) the speech (308) in the audio portion of the one or more media files (312) for the audio menu also includes saving (410) the speech (308) according to the organization (408) of the media files (502) managed by the media player. Saving (410) the speech (308) according to the organization (408) of the media files (502) according to the method of FIG. 4 may be carried out by saving the speech in a logical sequence corresponding with any logical structure of the organization of the media files, identified playlists, organization of media files by artist or genre, or any other organization of the media files as will occur to those of skill in the art.


As discussed above, retrieving metadata describing the media files managed by a media player may also include individually retrieving metadata describing each media file from each of the media files managed by the media player themselves. For further explanation, therefore, FIG. 5 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player. The method of FIG. 5 is similar to the methods of FIG. 3 and FIG. 4 in that the method of FIG. 5 also includes retrieving (302) metadata (304) describing the media files managed by the media player; converting (306) at least a portion of the metadata (304) to speech (308); creating (310) one or more media files (312) for the audio menu; and saving (314) the speech (308) in the audio portion of the one or more media files (312) for the audio menu. In the method of FIG. 5, however, retrieving (302) metadata includes retrieving (506) from each of the media files (502) managed by the media player metadata (508) describing the media file (502). As described above, some media file formats, such as, for example, the MPEG file format, provide a portion of the file for storing metadata. MPEG file formats support, for example, an ID3v2 tag prepended to the audio portion of the file for storing metadata describing the file. Retrieving (506) metadata (508) from each of the media files (502) managed by the media player may therefore be carried out by retrieving metadata from an ID3v2 tag or other header or container for metadata of each of the media files managed by the media player.


Creating an audio menu describing media content of a media player according to the method of FIG. 5 also includes identifying (510) in dependence upon a file system (504) in which the media files (502) are stored an organization (512) of the media files (502) managed by the media player. Identifying (510) such an organization (512) may be carried out by identifying, in dependence upon the logical tree structure of the file system (504), an organization of the media files representing that logical structure of the file system. Such an organization may provide for playlists, organization of media files by artist or genre, or other organization by logical structure in a file system.
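Deriving an organization from the file system can be sketched by walking an assumed artist/album/track directory layout; the directory and track names below are hypothetical:

```python
import os
import tempfile

def identify_organization(root):
    """Derive an artist -> album -> tracks organization from the
    directory tree in which the media files are stored."""
    organization = {}
    for artist in sorted(os.listdir(root)):
        artist_dir = os.path.join(root, artist)
        if not os.path.isdir(artist_dir):
            continue
        albums = {}
        for album in sorted(os.listdir(artist_dir)):
            album_dir = os.path.join(artist_dir, album)
            if os.path.isdir(album_dir):
                albums[album] = sorted(
                    f for f in os.listdir(album_dir) if f.endswith(".mp3"))
        organization[artist] = albums
    return organization

# Build a small throwaway library to walk (assumed artist/album/track layout).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Miles Davis", "Kind of Blue"))
open(os.path.join(root, "Miles Davis", "Kind of Blue", "So What.mp3"),
     "w").close()
org = identify_organization(root)
```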


In the method of FIG. 5, saving (314) the speech (308) in the audio portion of the one or more media files (312) for the audio menu also includes saving (514) the speech (308) according to the organization (512) of the media files (502) managed by the media player. Saving (514) the speech (308) according to the organization (512) of the media files (502) according to the method of FIG. 5 may be carried out by saving the speech in a logical sequence corresponding with the identified logical structure of the file system of the media files.


As an aid to users of audio menus according to the present invention, some embodiments of the present invention also include providing not only a description of the media files managed by the media player but also a description of the organization of those media files, such that a user is informed of that organization and empowered to navigate the media files using the audio menu. For further explanation, FIG. 6 sets forth a flow chart illustrating an exemplary method for creating an audio file organization menu including identifying (602) an organization (604) of the media files managed by the media player and creating (606) speech (608) describing the organization (604) of the media files managed by the media player. Identifying (602) an organization of the media files may be carried out in dependence upon a metadata file as described above with reference to FIG. 4, in dependence upon the logical organization of the media files in a file system as described above with reference to FIG. 5, or in other ways as will occur to those of skill in the art. Creating (606) speech (608) describing the organization (604) of the media files managed by the media player may be carried out by using a speech synthesis engine to create speech describing the identified organization (604) as discussed above.
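Composing the text such an organization menu would speak, before handing it to a speech synthesis engine, might be sketched as follows; the playlist name and track titles are hypothetical:

```python
def describe_organization(organization):
    """Compose the text an audio file organization menu would speak,
    one sentence per group (playlist, artist, genre, and so on)."""
    lines = []
    for group, titles in organization.items():
        lines.append(f"{group} contains {len(titles)} tracks: "
                     + ", ".join(titles) + ".")
    return " ".join(lines)

text = describe_organization({"Jazz Favorites": ["Take Five", "So What"]})
```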


The method of FIG. 6 also includes creating (610) one or more media files (312) and saving (614) the created speech (608) describing the organization (604) of the media files managed by the media player in the one or more media files (312). Examples of media files useful in creating an audio menu describing media content of a media player according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art.


Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for creating an audio menu describing media content of a media player. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.


It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.

Claims
  • 1. A computer-implemented method for creating an audio menu describing media content of a media player, the method comprising: retrieving metadata describing the media files managed by the media player, further comprising: retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player;identifying in dependence upon the XML metadata file an organization of the media files managed by the media player;converting at least a portion of the metadata to speech, including converting metadata describing a particular media file managed by media player to speech;creating one or more media files for the audio menu; andsaving the speech in the audio portion of the one or more media files for the audio menu, further comprising saving the speech according to the organization of the media files managed by the media player.
  • 2. The method of claim 1 wherein retrieving metadata further comprises: retrieving from each of the media files managed by the media player metadata describing the media file;identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; andwherein saving the speech in the audio portion of the one or more media files for the audio menu further comprises:saving the speech according to the organization of the media files managed by the media player.
  • 3. The method of claim 2 further comprising prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.
  • 4. The method of claim 1 further comprising creating an audio file organization menu including: identifying an organization of the media files managed by the media player;creating speech describing the organization of the media files managed by the media player; andcreating one or more media files; andsaving the created speech describing the organization of the media files managed by the media player in the one or more media files.
  • 5. A system for creating an audio menu describing media content of a media player, the system comprising: a computer processor;a computer memory operatively coupled to the computer processor;the computer memory having disposed within it computer program instructions capable of:retrieving metadata describing the media files managed by the media player, further comprising: retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player;identifying in dependence upon the XML metadata file an organization of the media files managed by the media player;converting at least a portion of the metadata to speech including converting metadata describing a particular media file managed by media player to speech;creating one or more media files for the audio menu; andsaving the speech in the audio portion of the one or more media files for the audio menu, further comprising saving the speech according to the organization of the media files managed by the media player.
  • 6. The system of claim 5 wherein computer program instructions capable of retrieving metadata further comprise computer program instructions capable of: retrieving from each of the media files managed by the media player metadata describing the media file;identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; andwherein computer program instructions capable of saving the speech in the audio portion of the one or more media files for the audio menu further comprise computer program instructions capable of:saving the speech according to the organization of the media files managed by the media player.
  • 7. The system of claim 6 wherein the computer memory also has disposed with it computer program instructions capable of prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.
  • 8. The system of claim 5 wherein the computer memory also has disposed with it computer program instructions capable of creating an audio file organization menu including computer program instructions capable of: identifying an organization of the media files managed by the media player;creating speech describing the organization of the media files managed by the media player; andcreating one or more media files; andsaving the created speech describing the organization of the media files managed by the media player in the one or more media files.
  • 9. A computer program product for creating an audio menu describing media content of a media player, the computer program product embodied on a computer-readable recording medium, the computer program product comprising: computer program instructions for retrieving metadata describing the media files managed by the media player, further comprising: computer program instructions for retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player;computer program instructions for identifying in dependence upon the XML metadata file an organization of the media files managed by the media player,computer program instructions for converting at least a portion of the metadata to speech, including computer program instructions for converting metadata describing a particular media file managed by media player to speech;computer program instructions for creating one or more media files for the audio menu; andcomputer program instructions for saving the speech in the audio portion of the one or more media files for the audio menu, further comprising computer program instructions for saving the speech according to the organization of the media files managed by the media player.
  • 10. The computer program product of claim 9 wherein computer program instructions for retrieving metadata further comprise: computer program instructions for retrieving from each of the media files managed by the media player metadata describing the media file; computer program instructions for identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; and wherein computer program instructions for saving the speech in the audio portion of the one or more media files for the audio menu further comprise: computer program instructions for saving the speech according to the organization of the media files managed by the media player.
  • 11. The computer program product of claim 10 further comprising computer program instructions for prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.
  • 12. The computer program product of claim 9 further comprising computer program instructions for creating an audio file organization menu including: computer program instructions for identifying an organization of the media files managed by the media player; computer program instructions for creating speech describing the organization of the media files managed by the media player; and computer program instructions for creating one or more media files; and computer program instructions for saving the created speech describing the organization of the media files managed by the media player in the one or more media files.
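The metadata-retrieval and speech-conversion steps of claim 9 can be sketched in a few lines. The XML schema below (a `<media>` element per file with `title`, `artist`, and `playlist` attributes) is purely hypothetical, chosen only for illustration; the sketch produces the text that a text-to-speech engine would render, since the choice of speech engine is outside the claims.

```python
import xml.etree.ElementTree as ET
from collections import OrderedDict

def retrieve_metadata(xml_text):
    """Parse a (hypothetical) XML metadata file and return the media
    files grouped by the organization -- here, playlist -- to which
    each belongs, preserving the order they appear in the file."""
    root = ET.fromstring(xml_text)
    organization = OrderedDict()
    for media in root.iter('media'):
        playlist = media.get('playlist', 'Unsorted')
        organization.setdefault(playlist, []).append(
            {'title': media.get('title'), 'artist': media.get('artist')})
    return organization

def metadata_to_text(organization):
    """Render the metadata as text for a text-to-speech engine, one
    announcement per media file, saved in the order identified from
    the XML metadata file (the organization of the media files)."""
    lines = []
    for playlist, tracks in organization.items():
        for track in tracks:
            lines.append('%s by %s, in playlist %s'
                         % (track['title'], track['artist'], playlist))
    return lines
```

Ordering the announcements by the parsed organization is what lets the resulting audio menu mirror the player's own navigation structure.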
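Claims 10 and 12 identify the organization of media files from the file system in which they are stored and create speech describing that organization. A minimal sketch, assuming the common convention that each directory holds one album or playlist (the directory layout and file extensions here are illustrative assumptions, not part of the claims):

```python
import os

def identify_organization(root_dir, extensions=('.mp3', '.wav')):
    """Infer the organization of media files from the file system:
    each directory under root_dir that contains media files is
    treated as one group (album, playlist, etc.)."""
    organization = {}
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        media = sorted(f for f in filenames
                       if f.lower().endswith(extensions))
        if media:
            group = os.path.relpath(dirpath, root_dir)
            organization[group] = media
    return organization

def describe_organization(organization):
    """Create the text of an audio file organization menu: one
    sentence per group, ready to be converted to speech and saved
    in the audio portion of a menu media file."""
    return ['%s contains %d items: %s.'
            % (group, len(titles), ', '.join(titles))
            for group, titles in sorted(organization.items())]
```

The sentences returned by `describe_organization` correspond to the "speech describing the organization of the media files" of claim 12, before conversion by a speech engine.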
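Claims 7 and 11 prepend speech converted from a media file's metadata to the audio portion of that media file. A sketch using uncompressed WAV audio (chosen only because Python's standard `wave` module can manipulate it directly; the claims are not limited to any codec), assuming the announcement and the media file share the same sample rate, sample width, and channel count:

```python
import wave

def prepend_announcement(announcement_path, media_path, out_path):
    """Prepend spoken metadata (already rendered as a WAV file) to
    the audio portion of a media file, writing the combined audio to
    out_path.  Both inputs must share identical WAV parameters."""
    with wave.open(announcement_path, 'rb') as ann, \
         wave.open(media_path, 'rb') as media:
        params = ann.getparams()
        ann_frames = ann.readframes(ann.getnframes())
        media_frames = media.readframes(media.getnframes())
    with wave.open(out_path, 'wb') as out:
        out.setparams(params)
        # Announcement frames first, then the original audio.
        out.writeframes(ann_frames + media_frames)
```

A player that then plays the modified file speaks its own metadata before the content begins, which is the effect the claims describe for blind or visually occupied users.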
US Referenced Citations (114)
Number Name Date Kind
5819220 Sarukkai et al. Oct 1998 A
6032260 Sasmazel et al. Feb 2000 A
6178511 Cohen et al. Jan 2001 B1
6240391 Ball et al. May 2001 B1
6266649 Linden et al. Jul 2001 B1
6311194 Sheth et al. Oct 2001 B1
6771743 Butler et al. Aug 2004 B1
6944591 Raghunandan Sep 2005 B1
6976082 Ostermann et al. Dec 2005 B1
6993476 Dutta et al. Jan 2006 B1
7039643 Sena et al. May 2006 B2
7062437 Kovales et al. Jun 2006 B2
7130850 Russell-Falla et al. Oct 2006 B2
7171411 Lewis et al. Jan 2007 B1
7356470 Roth et al. Apr 2008 B2
7454346 Dodrill et al. Nov 2008 B1
20010027396 Sato Oct 2001 A1
20010047349 Easty et al. Nov 2001 A1
20010049725 Kosuge Dec 2001 A1
20010054074 Hayashi Dec 2001 A1
20020013708 Walker et al. Jan 2002 A1
20020032564 Ehsani et al. Mar 2002 A1
20020032776 Hasegawa et al. Mar 2002 A1
20020054090 Silva et al. May 2002 A1
20020062216 Guenther et al. May 2002 A1
20020062393 Borger et al. May 2002 A1
20020095292 Mittal et al. Jul 2002 A1
20020152210 Johnson et al. Oct 2002 A1
20020178007 Slotznick et al. Nov 2002 A1
20020194286 Matsuura et al. Dec 2002 A1
20020198720 Takagi et al. Dec 2002 A1
20030028380 Freeland et al. Feb 2003 A1
20030033331 Sena et al. Feb 2003 A1
20030055868 Fletcher et al. Mar 2003 A1
20030103606 Rhie et al. Jun 2003 A1
20030110272 du Castel et al. Jun 2003 A1
20030110297 Tabatabai et al. Jun 2003 A1
20030115056 Gusler et al. Jun 2003 A1
20030115064 Gusler et al. Jun 2003 A1
20030126293 Bushey Jul 2003 A1
20030158737 Csicsatka Aug 2003 A1
20030160770 Zimmerman Aug 2003 A1
20030167234 Bodmer et al. Sep 2003 A1
20030172066 Cooper et al. Sep 2003 A1
20040003394 Ramaswamy Jan 2004 A1
20040041835 Lu Mar 2004 A1
20040068552 Kotz et al. Apr 2004 A1
20040088349 Beck et al. May 2004 A1
20040199375 Ehsani et al. Oct 2004 A1
20040201609 Obrador Oct 2004 A1
20040254851 Himeno et al. Dec 2004 A1
20050015254 Beaman Jan 2005 A1
20050045373 Born Mar 2005 A1
20050071780 Muller et al. Mar 2005 A1
20050076365 Popov et al. Apr 2005 A1
20050108521 Silhavy et al. May 2005 A1
20050232242 Karaoguz et al. Oct 2005 A1
20060008258 Kawana et al. Jan 2006 A1
20060020662 Robinson Jan 2006 A1
20060048212 Tsuruoka et al. Mar 2006 A1
20060050794 Tan et al. Mar 2006 A1
20060052089 Khurana et al. Mar 2006 A1
20060075224 Tao Apr 2006 A1
20060095848 Naik May 2006 A1
20060123082 Digate et al. Jun 2006 A1
20060136449 Parker et al. Jun 2006 A1
20060140360 Crago et al. Jun 2006 A1
20060155698 Vayssiere Jul 2006 A1
20060159109 Lamkin et al. Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060184679 Izdepski et al. Aug 2006 A1
20060190616 Mayerhofer et al. Aug 2006 A1
20060224739 Anantha Oct 2006 A1
20060233327 Roberts et al. Oct 2006 A1
20060265503 Jones et al. Nov 2006 A1
20060288011 Gandhi et al. Dec 2006 A1
20070027958 Haslam Feb 2007 A1
20070061266 Moore et al. Mar 2007 A1
20070073728 Klein et al. Mar 2007 A1
20070083540 Gundia et al. Apr 2007 A1
20070091206 Bloebaum Apr 2007 A1
20070100836 Eichstaedt et al. May 2007 A1
20070112844 Tribble et al. May 2007 A1
20070118426 Barnes, Jr. May 2007 A1
20070124458 Kumar May 2007 A1
20070124802 Anton et al. May 2007 A1
20070130589 Davis et al. Jun 2007 A1
20070174326 Schwartz et al. Jul 2007 A1
20070191008 Bucher et al. Aug 2007 A1
20070192327 Bodin Aug 2007 A1
20070192674 Bodin Aug 2007 A1
20070192683 Bodin Aug 2007 A1
20070192684 Bodin et al. Aug 2007 A1
20070208687 O'Conor et al. Sep 2007 A1
20070213857 Bodin Sep 2007 A1
20070213986 Bodin Sep 2007 A1
20070214147 Bodin et al. Sep 2007 A1
20070214148 Bodin Sep 2007 A1
20070214149 Bodin Sep 2007 A1
20070214485 Bodin Sep 2007 A1
20070220024 Putterman et al. Sep 2007 A1
20070253699 Yen et al. Nov 2007 A1
20070276837 Bodin et al. Nov 2007 A1
20070276865 Bodin et al. Nov 2007 A1
20070276866 Bodin et al. Nov 2007 A1
20070277088 Bodin Nov 2007 A1
20070277233 Bodin Nov 2007 A1
20080034278 Tsou et al. Feb 2008 A1
20080052415 Kellerman et al. Feb 2008 A1
20080082576 Bodin Apr 2008 A1
20080082635 Bodin Apr 2008 A1
20080161948 Bodin Jul 2008 A1
20080162131 Bodin Jul 2008 A1
20080275893 Bodin et al. Nov 2008 A1
Foreign Referenced Citations (2)
Number Date Country
WO 0182139 Nov 2001 WO
WO 2005106846 Nov 2005 WO
Related Publications (1)
Number Date Country
20080082576 A1 Apr 2008 US