Techniques for presenting sound effects on a portable media player

Information

  • Patent Grant
  • 8300841
  • Patent Number
    8,300,841
  • Date Filed
    Friday, June 3, 2005
  • Date Issued
    Tuesday, October 30, 2012
Abstract
Improved techniques for presenting sound effects at a portable media device are disclosed. The sound effects can be output as audio sounds to an internal speaker, an external speaker, or both. In addition, the audio sounds for the sound effects can be output together with other audio sounds pertaining to media assets (e.g., audio tracks being played). In one embodiment, the sound effects can serve to provide auditory feedback to a user of the portable media device. A user interface can facilitate a user's selection of sound effect usages, types or characteristics.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates to audio sound effects and, more particularly, to providing audio sound effects on a portable media device.


2. Description of the Related Art


Conventionally, portable media players have user input devices (buttons, dials, etc.) and a display screen for user output. Sometimes the display screen updates as user inputs are provided via the user input devices, thereby providing visual feedback to users regarding their user input. However, the display screen does not always provide visual feedback and the user is not always able to view the display screen to receive the visual feedback. Still further, some portable media players do not include a display screen. Portable media players can also provide auditory feedback as user inputs are provided via the user input devices. For example, to provide auditory feedback for a rotation user input, the iPod® media player, which is available from Apple Computer, Inc. of Cupertino, Calif., outputs a “click” sound using a piezoelectric device provided within the media player.


Unfortunately, however, users often interact with media players while wearing earphones or headphones. In such cases, the users will likely not be able to hear any auditory feedback, such as “click” sounds from a piezoelectric device. Moreover, the user might also be listening to audio sounds via the earphones or headphones when the user interaction occurs. Consequently, any user interaction with the media player while wearing earphones or headphones will be without the advantage of auditory feedback. The lack of auditory feedback degrades the user experience and renders the media player less user friendly.


Thus, there is a need for improved techniques to facilitate auditory feedback on portable media players.


SUMMARY OF THE INVENTION

The invention pertains to techniques for presenting sound effects at a portable media device. The sound effects can be output as audio sounds to an internal speaker, an external speaker, or both. In addition, the audio sounds for the sound effects can be output together with other audio sounds pertaining to media assets (e.g., audio tracks being played). In one embodiment, the sound effects can serve to provide auditory feedback to a user of the portable media device. A user interface can facilitate a user's selection of sound effect usages, types or characteristics.


The invention can be implemented in numerous ways, including as a method, system, device, apparatus (including graphical user interface), or computer readable medium. Several embodiments of the invention are discussed below.


As a method for providing auditory feedback to a user of a portable media device, one embodiment of the method includes at least the acts of: outputting first audio data pertaining to a digital media asset to an audio output device associated with the portable media device; detecting an event at the portable media device; and outputting second audio data after the event has been detected, the second audio data pertaining to a sound effect associated with the event that has been detected, the second audio data being output to the audio output device.


As a method for outputting a sound effect from an external speaker associated with a portable media device, one embodiment of the method includes at least the acts of: determining whether a sound effect is to be output to the external speaker; identifying sound effect data for the sound effect to be output; retrieving the identified sound effect data; mixing the identified sound effect data with audio data being output, if any, to produce mixed audio data; and outputting the mixed audio data to the external speaker.


As a method for providing auditory feedback to a user of a portable media device, one embodiment of the method includes at least the acts of: detecting an event at the portable media device; determining whether device feedback is enabled; producing an auditory feedback at the portable media device in response to the event when it is determined that the device feedback is enabled; determining whether earphone feedback is enabled; and producing an auditory feedback at one or more earphones coupled to the portable media device in response to the event when it is determined that the earphone feedback is enabled.


As a portable media device, one embodiment of the invention includes at least: an audio output device; a first memory device for storing a plurality of sound effects; computer program code for determining when to output at least one of the sound effects; and a processor for determining when to output at least one of the sound effects and for processing the at least one of the sound effects to produce output sound effect data for the audio output device.


As a graphical user interface for a media device adapted to provide auditory feedback, one embodiment of the invention includes at least: a list of auditory feedback options; and a visual indicator that indicates a selected one of the auditory feedback options. The media device thereafter provides auditory feedback in accordance with the selected one of the auditory feedback options.


As a computer readable medium including at least computer program code for outputting a sound effect from an external speaker associated with a portable media device, one embodiment of the invention includes at least: computer program code for determining whether a sound effect is to be output to the external speaker; computer program code for identifying sound effect data for the sound effect to be output; computer program code for retrieving the identified sound effect data; computer program code for mixing the identified sound effect data with audio data being output, if any, to produce mixed audio data; and computer program code for outputting the mixed audio data to the external speaker.


Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:



FIG. 1 is a block diagram of an audio system according to one embodiment of the invention.



FIG. 2 is a flow diagram of an audio output process according to one embodiment of the invention.



FIG. 3 is a block diagram of an audio processing system according to one embodiment of the invention.



FIG. 4 is a flow diagram of an audio mixing process according to one embodiment of the invention.



FIG. 5 is an audio processing system according to one embodiment of the invention.



FIG. 6 is a block diagram of a multi-channel audio mixing system according to one embodiment of the invention.



FIG. 7 is a block diagram of a media player according to one embodiment of the invention.



FIG. 8 illustrates a media player having a particular user input device according to one embodiment.



FIG. 9 is a flow diagram of a sound effect event process according to one embodiment of the invention.



FIG. 10 illustrates a graphical user interface according to one embodiment of the invention.





DETAILED DESCRIPTION OF THE INVENTION

The invention pertains to techniques for presenting sound effects at a portable media device. The sound effects can be output as audio sounds to an internal speaker, an external speaker, or both. In addition, the audio sounds for the sound effects can be output together with other audio sounds pertaining to media assets (e.g., audio tracks being played). In one embodiment, the sound effects can serve to provide auditory feedback to a user of the portable media device. A user interface can facilitate a user's selection of sound effect usages, types or characteristics.


The invention is well suited for audio sounds pertaining to media assets (media items), such as music, audiobooks, meeting recordings, and other speech or voice recordings.


The improved techniques are also resource efficient and are therefore well suited for use with portable electronic devices having audio playback capabilities, such as portable media devices. Portable media devices, such as media players, are small and highly portable and have limited processing resources. Often, portable media devices are hand-held media devices, such as hand-held audio players, which can be easily held by and within a single hand of a user.


Embodiments of the invention are discussed below with reference to FIGS. 1-10. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.



FIG. 1 is a block diagram of an audio system 100 according to one embodiment of the invention. The audio system 100 is depicted with a data flow under the control of an application 102. Typically, the audio system 100 is provided by a computing device. Often, the computing device is a portable computing device especially designed for audio usage. One example of such a portable computing device is a portable media player (e.g., music player or MP3 player). Another example is a mobile telephone (e.g., cell phone) or a Personal Digital Assistant (PDA).


The application 102 is, for example, a software application that operates on the computing device. The application 102 has access to audio data 104 and sound effect data 106. The application 102 can utilize the audio data 104 when the application 102 desires to output the audio data 104. The sound effect data 106 can represent audio sounds pertaining to sound effects that can be utilized by the computing device. For example, the sound effects may correspond to sounds (actual or synthetic) for mouse clicks, button presses, and the like. The sound effect data 106 is audio data and can be stored in a wide variety of formats. For example, the sound effect data 106 can simply be Pulse Code Modulation (PCM) data or can be encoded data, such as MP3 or MPEG-4 format. PCM data is typically either raw data (e.g., a block of samples) or formatted (e.g., WAV or AIFF file formats).
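For illustration only, a stored sound effect of this kind might be described in code by a small structure that records its audio characteristics alongside the sample data; the structure and field names below are assumptions made for the sake of example and are not taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for one stored sound effect (illustrative only).
 * Raw PCM keeps the samples directly; encoded data (e.g., MP3 or MPEG-4)
 * would be decoded before it reaches the mixer channel. */
typedef struct {
    const void *data;        /* PCM samples or an encoded bitstream */
    size_t      size_bytes;  /* length of the data buffer           */
    uint32_t    sample_rate; /* e.g., 22050 Hz                      */
    uint8_t     bit_depth;   /* e.g., 8 or 16 bits per sample       */
    uint8_t     channels;    /* 1 = mono, 2 = stereo                */
    bool        encoded;     /* false for raw/WAV/AIFF-style PCM    */
} sound_effect_t;
```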


The application 102 controls when a sound effect is to be output by the audio system 100. The application 102 also understands that it may or may not already be outputting audio data 104 at the time at which a sound effect is to be output. In the embodiment shown in FIG. 1, the application 102 can control an audio device 108. The audio device 108 is a hardware component that is capable of producing a sound, such as a sound effect. For example, the audio device 108 can pertain to an audio output device (e.g., speaker or piezoelectric device) that can be briefly activated to provide a sound effect. The sound effect can serve to inform the user of the computing device of a condition, status or event.


In addition, the application 102 produces an audio channel 110 and a mixer channel 112. The audio channel 110 is a virtual channel over which the application 102 can send audio data 104 such that it can be directed to an audio output device. For example, the audio output device can be a speaker that outputs the corresponding audio sounds. In addition, the application 102 can utilize a mixer channel 112 to output sound effects to the audio output device. The mixer channel 112 and the audio channel 110 can be mixed together downstream (see FIG. 3). Hence, the audio system 100 can not only output audio data 104 over the audio channel 110 but can also output sound effects over the mixer channel 112. As discussed in greater detail below, the audio data on the audio channel 110 can be mixed with any sound effect data on the mixer channel 112.



FIG. 2 is a flow diagram of an audio output process 200 according to one embodiment of the invention. The audio output process 200 is performed by an audio system. For example, the audio output process 200 can be performed by the application 102 of the audio system 100 illustrated in FIG. 1.


The audio output process 200 begins with a decision 202 that determines whether an audio play request has been issued. For example, an audio play request can be issued as a result of a system action or a user action with respect to the audio system. When the decision 202 determines that an audio play request has been issued, audio data is output 204 to an audio channel. By outputting the audio data to the audio channel, the audio data is directed to an audio output device, namely, a speaker, wherein audible sound is output.


Following the operation 204, or following the decision 202 when an audio play request has not been issued, a decision 206 determines whether a sound effect request has been issued. When the decision 206 determines that a sound effect request has been issued, then sound effect data is output 208 to a mixer channel. The mixer channel carries other audio data, such as audio data pertaining to sound effects (sound effect data). The mixer channel allows the sound effect data to mix with the audio data on the audio channel. After the sound effect data has been output 208 to the mixer channel, or directly following the decision 206 when a sound effect request has not been issued, the audio output process 200 returns to repeat the decision 202 and subsequent operations so that subsequent requests can be similarly processed.


It should be understood that audio data is often output for a longer duration than sound effect data, which tends to be brief. Hence, during the output of the audio data to the audio channel, sound effect data for one or more sound effects can be output to the mixer channel and thus combined with the audio data.
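A minimal sketch of one pass of this request loop is shown below, assuming hypothetical request flags and stub channel writers; none of these names come from the patent.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical pending-request flags, set elsewhere by the system or user. */
static bool audio_play_requested   = true;
static bool sound_effect_requested = true;

/* Stubs standing in for the audio channel and the mixer channel. */
static void output_to_audio_channel(void) { puts("audio data -> audio channel"); }
static void output_to_mixer_channel(void) { puts("sound effect data -> mixer channel"); }

/* One pass of the audio output process 200: route pending audio data to the
 * audio channel, then route any pending sound effect to the mixer channel.
 * On a device this step would run repeatedly. */
static void audio_output_process_step(void)
{
    if (audio_play_requested) {
        output_to_audio_channel();
        audio_play_requested = false;
    }
    if (sound_effect_requested) {
        output_to_mixer_channel();
        sound_effect_requested = false;
    }
}

int main(void)
{
    audio_output_process_step();
    return 0;
}
```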



FIG. 3 is a block diagram of an audio processing system 300 according to one embodiment of the invention. The audio processing system 300 includes an audio channel 302 and a mixer channel 304. The audio channel 302 typically includes a decoder and a buffer. The mixer channel 304 typically includes resolution and/or sample rate converters.


The audio channel 302 receives audio data 306 that is to be output by the audio processing system 300. After the audio data 306 passes through the audio channel 302, it is provided to a mixer 308. The mixer channel 304 receives sound effect data 310. After the sound effect data 310 has passed through the mixer channel 304, it is provided to a mixer 308. The mixer 308 serves to combine the audio data from the audio channel 302 with the sound effect data 310 from the mixer channel 304. The combined data is then supplied to a Digital-to-Analog Converter (DAC) 312. The DAC 312 converts the combined data to an analog audio output. The analog audio output can be supplied to an audio output device, such as a speaker.
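The ordering in FIG. 3 can be illustrated with a toy sketch: mixing is performed on digital samples, and only the combined stream is handed to the DAC. The buffer sizes, names, and stub DAC below are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stub standing in for the DAC 312; real hardware would convert the mixed
 * digital samples to an analog signal for the speaker. */
static void dac_write(const int16_t *mixed, size_t frames)
{
    printf("DAC received %zu mixed frames\n", frames);
}

/* Combine one block of decoded audio with one block of (already adapted)
 * sound effect data, then hand the result to the DAC. */
static void mix_and_output(const int16_t *audio, const int16_t *effect,
                           int16_t *mixed, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        int32_t sum = (int32_t)audio[i] + (int32_t)effect[i];
        if (sum >  32767) sum =  32767;   /* clamp to the 16-bit range */
        if (sum < -32768) sum = -32768;
        mixed[i] = (int16_t)sum;
    }
    dac_write(mixed, frames);
}

int main(void)
{
    int16_t audio[4]  = { 1000, -2000, 3000, -4000 };
    int16_t effect[4] = {  500,   500,  -500,  -500 };
    int16_t mixed[4];
    mix_and_output(audio, effect, mixed, 4);
    return 0;
}
```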



FIG. 4 is a flow diagram of an audio mixing process 400 according to one embodiment of the invention. The audio mixing process 400 is, for example, performed by the audio processing system 300 illustrated in FIG. 3.


The audio mixing process 400 begins with a decision 402 that determines whether a sound effect is to be output. When the decision 402 determines that a sound effect is not to be output, then the audio mixing process 400 awaits the need to output a sound effect. For example, the decision 206 of the audio output process 200 illustrated in FIG. 2 indicates that an audio system can make the determination of whether a sound effect is to be output. Accordingly, the audio mixing process 400 is invoked when a sound effect is to be output.


Once the decision 402 determines that a sound effect is to be output, a desired sound effect to be output is determined 404. Here, in one embodiment, the audio system can support a plurality of different sound effects. In such an embodiment, the audio system needs to determine which of the plurality of sound effects is the desired sound effect. The sound effect data for the desired sound effect is then retrieved 406.


A decision 408 then determines whether audio data is also being output. When the decision 408 determines that audio data is also being output, audio characteristics for the audio data being output are obtained 410. In one implementation, the audio characteristics pertain to metadata corresponding to the audio data being output. The sound effect data is then modified 412 based on the audio characteristics. In one embodiment, the audio characteristics can pertain to one or more of: audio resolution (e.g., bit depth), sample rate, and stereo/mono. For example, the audio resolution for the sound effect data can be modified 412 to match the audio resolution (e.g., bit depth) of the audio data. As another example, the sample rate for the sound effect can be modified 412 based on the sample rate of the audio data. In any case, after the sound effect data has been modified 412, the modified sound effect data is then mixed 414 with the audio data. Thereafter, the mixed audio data is output 416. As an example, the mixed audio data can be output 416 to an audio output device (e.g., speaker) associated with the audio system.


On the other hand, when the decision 408 determines that audio data is not being output, sound effect data is output 418. Here, since there is no audio data being output, the sound effect data can be simply output 418. If desired, the sound effect data can be modified before being output 418, such as to change its audio resolution or sample rate. Here, the output 418 of the sound effect data can also be provided to the audio output device. Following the operations 416 and 418, the audio mixing process 400 is complete and ends.
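A hedged sketch of the overall decision flow of the audio mixing process is given below; the characteristic matching and mixing are left as stubs, and the structure and names are assumptions made only for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Audio characteristics consulted when adapting a sound effect; the fields
 * mirror those named in the text (bit depth, sample rate, channel count),
 * but the struct itself is illustrative. */
typedef struct {
    uint8_t  bit_depth;
    uint32_t sample_rate;
    uint8_t  channels;
} audio_format_t;

static bool audio_is_playing(void) { return true; }

static audio_format_t current_audio_format(void)
{
    audio_format_t fmt = { 16, 44100, 2 };
    return fmt;
}

static void adapt_effect_to(audio_format_t target)
{
    printf("adapt effect to %u-bit / %u Hz / %u channel(s)\n",
           (unsigned)target.bit_depth, (unsigned)target.sample_rate,
           (unsigned)target.channels);
}

static void mix_effect_with_audio(void)  { puts("mix effect with audio data"); }
static void output_effect_directly(void) { puts("output effect alone"); }

/* Handle one sound effect request, mirroring the flow of FIG. 4. */
static void handle_sound_effect(void)
{
    if (audio_is_playing()) {
        adapt_effect_to(current_audio_format()); /* match bit depth, rate, channels */
        mix_effect_with_audio();                 /* then combine the two streams    */
    } else {
        output_effect_directly();                /* nothing to mix with             */
    }
}

int main(void)
{
    handle_sound_effect();
    return 0;
}
```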



FIG. 5 is an audio processing system 500 according to one embodiment of the invention. The audio processing system 500 includes an audio channel 502. The audio channel 502 includes a decoder 504 and a buffer 506. The decoder 504 receives incoming audio data. The decoder 504 decodes the audio data (which was previously encoded). The decoded audio data is then temporarily stored in the buffer 506. As needed for transmission, the decoded audio data is supplied from the buffer 506 to a mixer 508.


The audio processing system 500 also includes a mixer channel 510. The mixer channel 510 receives sound effect data that is to be output. Since the audio processing system 500 can process audio data of various bit depths, sample rates, and other criteria, the mixer channel 510 can serve to modify the sound effect data. One benefit of providing the mixer channel 510 with conversion or adaptation capabilities is the ability to modify the audio characteristics of the sound effect data. By doing so, the sound effect data does not have to be stored by the audio system in a large number of different audio formats. Indeed, for efficient use of storage resources, only a single file for each sound effect need be stored. As needed, sound effect data can have its audio characteristics altered so as to closely match those of the audio data also being output by the audio processing system 500. In this regard, the mixer channel 510 can include a bit depth converter 512, a channel count adapter 514, and a sample rate converter 516. The bit depth converter 512 can convert the bit depth (i.e., resolution) of the sound effect data. As one example, if the sound effect data has a bit depth of eight (8) bits, the bit depth converter 512 could change the bit depth to sixteen (16) bits. The channel count adapter 514 can modify the sound effect data to provide mono or stereo audio components. The sample rate converter 516 converts the sample rate for the sound effect data. To assist the mixer channel 510 in converting or adapting the audio characteristics, the audio characteristics of the audio data provided to the audio channel 502 can be provided to the mixer channel 510, so as to inform the mixer channel 510 of the audio characteristics of the audio data in the audio channel 502.
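A minimal sketch of the three adaptations follows, assuming 8-bit unsigned mono PCM as the stored form and a 16-bit target; the simple nearest-neighbour resampler is only an illustrative stand-in for whatever converter a real device would use.

```c
#include <stddef.h>
#include <stdint.h>

/* Bit depth conversion: 8-bit unsigned PCM -> 16-bit signed PCM. */
static int16_t widen_8_to_16(uint8_t s)
{
    return (int16_t)(((int)s - 128) << 8);
}

/* Channel count adaptation: duplicate a mono sample into both stereo slots. */
static void mono_to_stereo(int16_t sample, int16_t out[2])
{
    out[0] = sample;
    out[1] = sample;
}

/* Sample rate conversion by nearest-neighbour pick-up; a real converter
 * would typically interpolate and filter, this only shows where the step
 * fits in the chain. */
static size_t resample_nearest(const int16_t *in, size_t in_frames,
                               uint32_t in_rate, uint32_t out_rate,
                               int16_t *out, size_t out_capacity)
{
    size_t out_frames = (size_t)((uint64_t)in_frames * out_rate / in_rate);
    if (out_frames > out_capacity)
        out_frames = out_capacity;
    for (size_t i = 0; i < out_frames; i++) {
        size_t src = (size_t)((uint64_t)i * in_rate / out_rate);
        out[i] = in[src];
    }
    return out_frames;
}

int main(void)
{
    uint8_t src8[4] = { 0, 128, 200, 255 };
    int16_t mono[4];
    int16_t stereo[2];
    int16_t resampled[8];

    for (int i = 0; i < 4; i++)
        mono[i] = widen_8_to_16(src8[i]);
    mono_to_stereo(mono[0], stereo);
    resample_nearest(mono, 4, 22050, 44100, resampled, 8);
    return 0;
}
```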


The modified sound effect data output by the mixer channel 510 is supplied to the mixer 508. The mixer 508 adds or sums the decoded audio data from the audio channel 502 with the modified sound effect data from the mixer channel 510. The result of the mixer 508 is mixed audio data that is supplied to a buffer 518. The mixed audio data is digital data stored in the buffer 518. The audio processing system 500 also includes a Digital-to-Analog Converter (DAC) 520. The DAC 520 receives the mixed audio data from the buffer 518, which is digital data, and converts it into an analog audio output. The analog audio output can be supplied to an audio output device, such as a speaker.


Although the audio processing system 500 illustrated in FIG. 5 depicts a single audio channel and a single mixer channel, it should be understood that the audio processing system 500 can include more than one mixer channel. The advantage of having more than one mixer channel is that multiple sound effects can be output concurrently, thereby providing a polyphonic audio effect.



FIG. 6 is a block diagram of a multi-channel audio mixing system 600 according to one embodiment of the invention. The multi-channel audio mixing system 600 includes an audio channel 602 that receives audio data and outputs decoded audio data. The decoded audio data being output by the audio channel 602 is supplied to a mixer 604. The multi-channel audio mixing system 600 also includes a plurality of mixer channels 606-1, 606-2, . . . , 606-N. Each of the mixer channels 606 is capable of receiving a different sound effect. For example, the mixer channel 606-1 can receive a sound effect A, the mixer channel 606-2 can receive a sound effect B, and the mixer channel 606-N can receive a sound effect N. If desired, the mixer channels 606 can each carry a sound effect at the same time, or at least with partial temporal overlap, so that the various sound effects can be output without substantial distortion amongst one another. Regardless of the number of sound effects being processed by the mixer channels 606, the sound effect data output from the mixer channels 606 are provided to the mixer 604. The mixer 604 combines the sound effect data from one or more of the mixer channels 606 with the decoded audio data from the audio channel 602. The result of the mixer 604 is a mixed audio output that can be supplied to an audio output device.
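A sketch of this multi-channel summation is shown below: one decoded audio block and up to a fixed number of concurrent sound effect blocks are accumulated and then saturated back into the output range, which helps keep concurrent effects intelligible. The channel count, buffer sizes, and names are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_EFFECT_CHANNELS 3   /* illustrative number of mixer channels */

/* Clamp a wide accumulator back into the 16-bit output range. */
static int16_t saturate16(int32_t v)
{
    if (v >  32767) return  32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

/* Mix decoded audio with up to NUM_EFFECT_CHANNELS concurrent sound effects;
 * a NULL channel pointer means that mixer channel is currently idle. */
static void mix_multi(const int16_t *audio,
                      const int16_t *effects[NUM_EFFECT_CHANNELS],
                      int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        int32_t acc = audio ? audio[i] : 0;
        for (int c = 0; c < NUM_EFFECT_CHANNELS; c++) {
            if (effects[c])
                acc += effects[c][i];
        }
        out[i] = saturate16(acc);
    }
}

int main(void)
{
    int16_t audio[4] = { 12000, -12000, 30000, -30000 };
    int16_t click[4] = {  4000,   4000,  4000,   4000 };
    int16_t beep[4]  = { -2000,  -2000, -2000,  -2000 };
    const int16_t *effects[NUM_EFFECT_CHANNELS] = { click, beep, NULL };
    int16_t out[4];

    mix_multi(audio, effects, out, 4);
    return 0;
}
```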



FIG. 7 is a block diagram of a media player 700 according to one embodiment of the invention. The media player 700 can implement the audio system 100 of FIG. 1 or the audio processing systems 300, 500 of FIGS. 3 and 5. The media player 700 includes a processor 702 that pertains to a microprocessor or controller for controlling the overall operation of the media player 700. The media player 700 stores media data pertaining to media items in a file system 704 and a cache 706. The file system 704 is, typically, a storage disk or a plurality of disks. The file system 704 typically provides high capacity storage capability for the media player 700. The file system 704 can store not only media data but also non-media data (e.g., when operated in a disk mode). However, since the access time to the file system 704 is relatively slow, the media player 700 can also include a cache 706. The cache 706 is, for example, Random-Access Memory (RAM) provided by semiconductor memory. The relative access time to the cache 706 is substantially shorter than for the file system 704. However, the cache 706 does not have the large storage capacity of the file system 704. Further, the file system 704, when active, consumes more power than does the cache 706. The power consumption is often a concern when the media player 700 is a portable media player that is powered by a battery (not shown). The media player 700 also includes a RAM 720 and a Read-Only Memory (ROM) 722. The ROM 722 can store programs, utilities or processes to be executed in a non-volatile manner. The RAM 720 provides volatile data storage, such as for the cache 706.


The media player 700 also includes a user input device 708 that allows a user of the media player 700 to interact with the media player 700. For example, the user input device 708 can take a variety of forms, such as a button, keypad, dial, etc. In one implementation, the user input device 708 can be provided by a dial that physically rotates. In another implementation, the user input device 708 can be implemented as a touchpad (i.e., a touch-sensitive surface). In still another implementation, the user input device 708 can be implemented as a combination of one or more physical buttons as well as a touchpad. Regardless of how implemented, as the user interacts with the user input device 708, a piezoelectric device 724 can provide auditory feedback to the user. For example, the piezoelectric device 724 can be controlled by the processor 702 to emit a sound in response to a user action (e.g., user selection or button press). Still further, the media player 700 includes a display 710 (screen display) that can be controlled by the processor 702 to display information to the user. A data bus 711 can facilitate data transfer between at least the file system 704, the cache 706, the processor 702, and the CODEC 712.


In one embodiment, the media player 700 serves to store a plurality of media items (e.g., songs) in the file system 704. When a user desires to have the media player play a particular media item, a list of available media items is displayed on the display 710. Then, using the user input device 708, a user can select one of the available media items. The processor 702, upon receiving a selection of a particular media item, supplies the media data (e.g., audio file) for the particular media item to a coder/decoder (CODEC) 712. The CODEC 712 then produces analog output signals for a speaker 714. The speaker 714 can be a speaker internal to the media player 700 or external to the media player 700. For example, headphones or earphones that connect to the media player 700 would be considered an external speaker. The speaker 714 can not only be used to output audio sounds pertaining to the media item being played, but also to output sound effects. The sound effects can be stored as audio data on the media player 700, such as in the file system 704, the cache 706, the RAM 720 or the ROM 722. A sound effect can be output in response to a user input or a system request. When a particular sound effect is to be output to the speaker 714, the associated sound effect audio data can be retrieved by the processor 702 and supplied to the CODEC 712 which then supplies audio signals to the speaker 714. In the case where audio data for a media item is also being output, the processor 702 can process the audio data for the media item as well as the sound effect. In such case, the audio data for the sound effect can be mixed with the audio data for the media item. The mixed audio data can then be supplied to the CODEC 712 which supplies audio signals (pertaining to both the media item and the sound effect) to the speaker 714.


The media player 700 also includes a network/bus interface 716 that couples to a data link 718. The data link 718 allows the media player 700 to couple to a host computer. The data link 718 can be provided over a wired connection or a wireless connection. In the case of a wireless connection, the network/bus interface 716 can include a wireless transceiver.


In one embodiment, the media player 700 is a portable computing device dedicated to processing media such as audio. For example, the media player 700 can be a music player (e.g., MP3 player), a game player, and the like. These devices are generally battery operated and highly portable so as to allow a user to listen to music, play games or video, record video or take pictures wherever the user travels. In one implementation, the media player 700 is a handheld device that is sized for placement into a pocket or hand of the user. By being handheld, the media player 700 is relatively small and easily handled and utilized by its user. By being pocket sized, the user does not have to directly carry the device and therefore the device can be taken almost anywhere the user travels (e.g., the user is not limited by carrying a large, bulky and often heavy device, as in a portable computer). Furthermore, since the device may be operated by the user's hands, no reference surface such as a desktop is needed.


The user input device 708 can take a variety of forms, such as a button, keypad, dial, etc. (physical or soft implementations), each of which can be programmed, individually or in combination, to perform any of a suite of functions. FIG. 8 illustrates a media player 800 having a particular user input device 802 according to one embodiment. The media player 800 can also include a display 804. The user input device 802 includes a number of input devices 806, which can be either physical or soft devices. Such input devices 806 can take the form of a rotatable dial 806-1, such as in the form of a wheel, capable of rotation in either a clockwise or counterclockwise direction. A depressible input button 806-2 can be provided at the center of the dial 806-1 and arranged to receive a user input event such as a press event. Other input buttons 806 include input buttons 806-3 through 806-6, each available to receive user supplied input action.


As noted above, the audio system can be utilized to mix sound effects with player data such that the mixed audio can be output to an audio output device. The audio system can be system or user configurable as to sound effect processing. For example, a user may desire sound effects to be output to a particular audio output device of the audio system. As one example, the audio output device can be an in-device speaker. As another example, a user may desire sound effects to be output to a headphone (earphone) instead of or in addition to any in-device speaker.



FIG. 9 is a flow diagram of a sound effect event process 900 according to one embodiment of the invention. The sound effect event process 900 begins with a decision 902 that determines whether a sound effect event has been initiated. An audio system, or its user, can initiate a sound effect event. When the decision 902 determines that a sound effect event has not been issued, then the sound effect event process 900 awaits such an event. On the other hand, once the decision 902 determines that a sound effect event has been issued, a decision 904 determines whether a device effect is enabled. When the decision 904 determines that the device effect is enabled, then a device effect is activated 906. The device effect corresponds to an audio output device which can be activated to physically produce the sound effect. For example, the device effect can be produced by an in-device speaker. One type of speaker is a loudspeaker. Another type of speaker is a piezoelectric speaker (e.g., piezoelectric device 724).


A user or system can configure the audio system to provide a given sound effect, the device effect, via an audio output device. For example, if the audio output device is a piezoelectric speaker, the system can control the audio output device to provide the device effect that corresponds to the sound effect event that has been issued. For example, if the sound effect event issued was a “mouse click” event, then the device effect could be a click sound that is physically generated by an electrical control signal supplied to the piezoelectric speaker.


On the other hand, when the decision 904 determines that the device effect is not enabled, or following the activation 906 if the device effect was enabled, a decision 908 determines whether an earphone effect is enabled. Here, the system or user can configure the audio system to provide a sound effect to the user via one or more earphones coupled to the audio system. When the decision 908 determines that the earphone effect is enabled, then an earphone effect is activated 910. By activation 910 of the earphone effect, the appropriate sound effect is output to the user by way of the one or more earphones. As a result, should the user be wearing earphones, the sound effect is able to be perceived in an audio manner by the user. Following the operation 910, or following the decision 908 when the earphone effect is not enabled, the sound effect event process 900 returns to repeat the decision 902 and subsequent operations so that additional sound effect events can be processed.
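A minimal sketch of this routing, assuming two boolean configuration flags and stub activation functions (all names are illustrative), might look as follows.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical configuration flags; compare the feedback settings discussed
 * with FIG. 10 below. */
static bool device_feedback_enabled   = true;
static bool earphone_feedback_enabled = true;

static void activate_device_effect(void)
{
    puts("click via in-device speaker or piezoelectric device");
}

static void activate_earphone_effect(void)
{
    puts("sound effect mixed into the earphone output");
}

/* Handle one sound effect event as in FIG. 9: each destination is driven
 * independently, so the effect can go to the device, the earphones, both,
 * or neither. */
static void on_sound_effect_event(void)
{
    if (device_feedback_enabled)
        activate_device_effect();
    if (earphone_feedback_enabled)
        activate_earphone_effect();
}

int main(void)
{
    on_sound_effect_event();
    return 0;
}
```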


In one embodiment, the audio system makes use of a graphical user interface to assist the user with configuring audible sound effects. For example, the user may desire to have little or no sound effects active. On the other hand, when sound effects are at least partially active, the user may desire that the sound effects be provided at an in-device speaker of the audio system. Alternatively, or in addition, the user may also desire sound effects to be provided in an audio manner via an earphone or headphone.



FIG. 10 illustrates a graphical user interface 1000 according to one embodiment of the invention. The graphical user interface 1000 allows a user to configure a portable computing device for auditory feedback. More particularly, the graphical user interface 1000 includes a header or title 1002 designating that the graphical user interface pertains to “Feedback”. The graphical user interface 1000 also displays a menu or list 1004 of user selectable items. In this example, the menu or list 1004 includes four user selectable items, namely, “Speaker”, “Headphone”, “Both” and “Off”. The “Speaker” selection causes the configuration to provide auditory feedback via a speaker (e.g., piezoelectric device 724). The “Headphone” selection causes the configuration to provide auditory feedback via earphone(s) or headphone(s) (e.g., speaker 714 when external). The “Both” selection causes the configuration to provide auditory feedback via a speaker (e.g., piezoelectric device 724) and earphone(s) or headphone(s) (e.g., speaker 714 when external). The “Off” selection causes the configuration to provide no auditory feedback. A selector 1006 indicates current selection of the “Headphone” item.
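The four menu items map naturally onto the two enable flags consulted when a sound effect event is processed; the small mapping below is an illustrative sketch, not code from the patent.

```c
#include <stdbool.h>

typedef enum {
    FEEDBACK_SPEAKER,    /* in-device speaker only           */
    FEEDBACK_HEADPHONE,  /* earphone(s) or headphone(s) only */
    FEEDBACK_BOTH,       /* both destinations                */
    FEEDBACK_OFF         /* no auditory feedback             */
} feedback_option_t;

typedef struct {
    bool device_feedback;   /* in-device speaker / piezoelectric device */
    bool earphone_feedback; /* headphones or earphones                  */
} feedback_config_t;

/* Translate the selected menu item into the enable flags consulted when a
 * sound effect event occurs. */
static feedback_config_t apply_feedback_option(feedback_option_t opt)
{
    feedback_config_t cfg = { false, false };

    switch (opt) {
    case FEEDBACK_SPEAKER:
        cfg.device_feedback = true;
        break;
    case FEEDBACK_HEADPHONE:
        cfg.earphone_feedback = true;
        break;
    case FEEDBACK_BOTH:
        cfg.device_feedback = true;
        cfg.earphone_feedback = true;
        break;
    case FEEDBACK_OFF:
        break;
    }
    return cfg;
}

int main(void)
{
    feedback_config_t cfg = apply_feedback_option(FEEDBACK_HEADPHONE);
    (void)cfg;
    return 0;
}
```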


One example of a media player is the iPod® media player, which is available from Apple Computer, Inc. of Cupertino, Calif. Often, a media player acquires its media assets from a host computer that serves to enable a user to manage media assets. As an example, the host computer can execute a media management application to utilize and manage media assets. One example of a media management application is iTunes®, produced by Apple Computer, Inc.


The various aspects, embodiments, implementations or features of the invention can be used separately or in any combination.


The invention is preferably implemented by software, hardware or a combination of hardware and software. The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, optical data storage devices, and carrier waves. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.


The advantages of the invention are numerous. Different aspects, embodiments or implementations may yield one or more of the following advantages. One advantage of the invention is that processing resources required to implement audio sound effects can be substantially reduced. A media device that is highly portable can make use of audio sound effects. Another advantage of the invention is that sound effects can be output even while a media device is outputting other media (e.g., music). Another advantage of the invention is that the audio data for sound effects can be stored in a single format and converted to other formats as appropriate to substantially match audio data of a media item being played. Still another advantage of the invention is that multiple sound effects can be output concurrently with substantial preservation of their intelligibility.


The many features and advantages of the present invention are apparent from the written description and, thus, it is intended by the appended claims to cover all such features and advantages of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, the invention should not be limited to the exact construction and operation as illustrated and described. Hence, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.

Claims
  • 1. In a computing device having limited processing resources, a method for adding an original sound effect file to a first media item, the method comprising: determining one or more formatting characteristics of the first media item; receiving an indication of the one or more formatting characteristics of the first media item at a conversion unit; receiving the original sound effect file at the conversion unit; creating a modified sound effect file by updating the selected formatting characteristics of the original sound effect file to match the one or more formatting characteristics of the first media item, wherein the modified sound effect file is created by the conversion unit; wherein the updating selected formatting characteristics includes formatting a copy of the original sound effect file so as to have a substantially similar format as the first media item including (i) altering a bit depth of the original sound effect file to match a bit depth of the first media item, (ii) altering a sample rate of the original sound effect file to match a sample rate of the first media item, and (iii) when necessary, altering a channel count of the original sound effect file to match a channel count of the first media item such that the bit depth, the sample rate and the channel count of the modified sound effect file match the bit depth, the sample rate and the channel count of the first media item; adding the first media item and the modified sound effect file to create a modified first media item by an adder unit; and outputting the modified first media item or the sound effect file through a first and a second audio output device at essentially the same time, wherein one of the first and second audio output devices is a headphone jack of the portable computing device and the other of the first and second audio output devices is a speaker of the portable computing device.
  • 2. In a computing device having limited processing resources, a method for adding an original sound effect file to a first media item, the method comprising: determining one or more characteristics of the first media item; receiving an indication of the one or more characteristics of the first media item at a conversion unit; receiving the original sound effect file at the conversion unit; creating a modified sound effect file by updating the selected characteristics of the original sound effect file to match the one or more characteristics of the first media item, wherein the modified sound effect file is created by the conversion unit; adding the first media item and the modified sound effect file to create a modified first media item by an adder unit; and outputting the modified first media item or the sound effect file through a first audio output device and a second audio output device at essentially the same time, wherein the first audio output device is a headphone jack of the portable computing device and the second audio output device is a speaker of the portable computing device.
  • 3. The method of claim 1, wherein the first audio output device is a single speaker on the computing device.
  • 4. The method of claim 1, wherein the creating a modified sound effect file includes: formatting a copy of the original sound effect file so as to have a substantially similar format as the first media item.
  • 5. The method of claim 4, wherein the formatting includes altering a bit depth of the audio file to match a bit depth of the first media item.
  • 6. The method of claim 4, wherein the formatting includes altering a sample rate of the audio file to match a sample rate of the first media item.
  • 7. The method of claim 4, wherein when the first media item is in stereo, the formatting includes changing the audio file from mono to stereo.
  • 8. The method of claim 4, wherein when the first media item is in mono, the formatting includes changing the audio file from stereo to mono.
  • 9. A method for adding an original sound effect file to a first media item on a portable computing device having limited processing resources, the method comprising: determining a bit rate, sample rate, and stereo characteristics of the first media item; receiving an indication of the bit rate, sample rate and stereo characteristics of the first media item at a conversion unit; receiving the original sound effect file at the conversion unit; creating a modified sound effect file by modifying a bit rate, sample rate, and stereo characteristics of the original sound effect file to match the bit rate, sample rate, and stereo characteristics, respectively, of the first media item; adding the first media item and the modified sound effect file to create a modified first media item by an adder unit; retrieving a configuration profile indicating a first audio output device to play the first media item and a second audio output device to play the original sound effect file; playing the modified first media item or the sound effect file through a first audio output device and a second audio output device at essentially the same time, wherein the first audio output device is a headphone jack of the portable computing device and the second audio output device is a speaker of the portable computing device.
  • 10. The method of claim 9, wherein the configuration profile includes configuration settings that are based partially on whether headphones are currently connected to the headphone jack of the portable computing device.
  • 11. The method of claim 9, further comprising: storing the modified first media item in a buffer prior to playing it.
  • 12. A portable media device having limited processing resources, comprising: a memory storing a plurality of media items and a plurality of original sound effect files; a processor configured to determine one or more formatting characteristics of a first media item; a conversion unit configured to receive an indication of the one or more formatting characteristics, receive the original sound effect file, and create a modified sound effect file by updating selected formatting characteristics of the original sound effect file to match the one or more formatting characteristics of the first media item; wherein the conversion unit includes a bit depth converter, a channel count adapter and a sample rate converter, the bit depth converter being arranged to adjust the bit depth of the original sound effect file to match the bit depth of the first media item in the modified sound effect file, and the sample rate converter being arranged to adjust the sample rate of the original sound effect file to match the sample rate of the first media item in the modified sound effect file; an adder unit configured to add the first media item and the modified sound effect file to create a modified first media item; and a digital-to-analog converter configured to convert the modified first media item into an analog audio data format and send the converted modified first media item to a first audio output device; and wherein the portable media device is configured to play the modified first media item or the sound effect file through a first audio output device and a second audio output device at essentially the same time, wherein one of the first and second audio output devices is a headphone jack of the portable computing device and the other of the first and second audio output devices is a speaker of the portable computing device.
  • 13. The portable media device of claim 12, further comprising a memory buffer configured to temporarily store the modified first media item prior to it being delivered to the digital-to-analog converter and played.
  • 14. The portable media device of claim 12, wherein the portable media device is a mobile phone.
  • 15. The portable media device of claim 12, wherein the conversion unit further comprises: a bit depth converter;a channel count adapter; anda sample rate converter.
  • 16. An apparatus for seamlessly integrating an original sound effect file with a first media item, the apparatus comprising: means for determining a bit rate, sample rate, and stereo characteristics of the first media item; means for receiving an indication of the bit rate, sample rate and stereo characteristics at a conversion unit; means for receiving the original sound effect file at the conversion unit; means for creating a modified sound effect file by modifying a bit rate, sample rate, and stereo characteristics of the original sound effect file to match the bit rate, sample rate, and stereo characteristics, respectively, of the first media item; means for adding the first media item and the modified sound effect file to create a modified first media item by an adder unit; means for retrieving a configuration profile indicating a first audio output device to play the first media item or the sound effect file and a second audio output device to play the original sound effect file; means for playing the modified first media item or the original sound effect file through the first audio output device and a second audio output device at essentially the same time, wherein one of the first and second audio output devices is a headphone jack of a portable media device and the other of the first and second audio output devices is a speaker of the portable media device.
  • 17. The apparatus of claim 16, wherein the means for adding includes means for digitally adding the first media item and the modified sound effect file.
  • 18. The apparatus of claim 16, further comprising means for converting the modified first media item to analog format for playing on the first audio output device.
  • 19. The apparatus of claim 16, wherein the second audio output device is a piezoelectric speaker.
  • 20. A non-transitory program storage device readable by a machine tangibly embodying a program of instructions executable by the machine to add an original sound effect file to a first media item, the program storage device being arranged to: determine one or more formatting characteristics of the first media item; receive an indication of the one or more formatting characteristics at a conversion unit; receive the original sound effect file at the conversion unit; create a modified sound effect file by updating formatting characteristics of the original sound effect file to match the formatting characteristics of the first media item by the conversion unit; wherein the updating selected formatting characteristics includes formatting a copy of the original sound effect file so as to have a substantially similar format as the first media item including (i) altering a bit depth of the original sound effect file to match a bit depth of the first media item, (ii) altering a sample rate of the original sound effect file to match a sample rate of the first media item, and (iii) when necessary, altering a channel count of the original sound effect file to match a channel count of the first media item such that the bit depth, the sample rate and the channel count of the modified sound effect file match the bit depth, the sample rate and the channel count of the first media item; add the first media item and the modified sound effect file to create a modified first media item by an adder unit; and play the modified first media item or the original sound effect file through a first audio output device and a second audio output device at essentially the same time, wherein one of the first and second audio output devices is a headphone jack of a portable media device and the other of the first and second audio output devices is a speaker of the portable media device.
  • 21. The program storage device of claim 20, wherein the first audio output device is a headphone jack of a portable media device and the second audio output device is a single speaker of the portable media device.
  • 22. The program storage device of claim 20, wherein the method further comprises: playing the original sound effect file through a second audio output device at essentially the same time as the playing of the modified first media item through the first audio output device.
  • 23. The program storage device of claim 20, wherein the first audio output device is a single speaker on the portable media device.
  • 24. A portable media device having limited processing resources, comprising: a memory for digitally storing a plurality of media items and a plurality of original sound effect files; an audio decoder arranged to decode a first media item having a first set of media format characteristics; a conversion unit arranged to receive an original sound effect file having at least one selected media format characteristic that is different from one or more associated media format characteristics of the first media item, and to create a modified sound effect file by updating the selected media format characteristic(s) of the original sound effect file to match the associated characteristic(s) of the first media item; wherein the conversion unit includes a bit depth converter, a channel count adapter and a sample rate converter, the bit depth converter being arranged to adjust the bit depth of the original sound effect file to match the bit depth of the first media item in the modified sound effect file, and the sample rate converter being arranged to adjust the sample rate of the original sound effect file to match the sample rate of the first media item in the modified sound effect file; an adder unit configured to add the first media item and the modified sound effect file to create a modified first media item; and a digital-to-analog converter configured to convert the modified first media item into an analog audio data format and send the converted modified first media item to a first audio output device; and wherein the portable media device is configured to play the modified first media item or the sound effect file through a first audio output device and a second audio output device at essentially the same time, wherein one of the first and second audio output devices is a headphone jack of the portable media device and the other of the first and second audio output devices is a speaker of the portable media device.
  • 25. A portable media device as recited in claim 24 further comprising: a memory buffer configured to temporarily store the modified first media item prior to it being played; and wherein the conversion unit includes a bit depth converter, a channel count adapter, and a sample rate converter; and wherein the portable media device is selected from the group consisting of a mobile phone, a hand-held media player that can easily be held by and within a single hand of a user, and a hand-held audio player that can easily be held by and within a single hand of a user.
US Referenced Citations (262)
Number Name Date Kind
4090216 Constable May 1978 A
4386345 Narveson et al. May 1983 A
4451849 Fuhrer May 1984 A
4589022 Prince et al. May 1986 A
4908523 Snowden et al. Mar 1990 A
4928307 Lynn May 1990 A
4951171 Tran et al. Aug 1990 A
5185906 Brooks Feb 1993 A
5293494 Saito et al. Mar 1994 A
5379057 Clough Jan 1995 A
5406305 Shimomura et al. Apr 1995 A
5559945 Beaudet et al. Sep 1996 A
5566337 Szymanski et al. Oct 1996 A
5583993 Foster et al. Dec 1996 A
5596260 Moravec et al. Jan 1997 A
5608698 Yamanoi et al. Mar 1997 A
5616876 Cluts Apr 1997 A
5617386 Choi Apr 1997 A
5670985 Cappels, Sr. et al. Sep 1997 A
5675362 Clough Oct 1997 A
5684513 Decker Nov 1997 A
5710922 Alley et al. Jan 1998 A
5712949 Kato et al. Jan 1998 A
5717422 Fergason Feb 1998 A
5721949 Smith et al. Feb 1998 A
5726672 Hernandez et al. Mar 1998 A
5739451 Winksy et al. Apr 1998 A
5740143 Suetomi Apr 1998 A
5760588 Bailey Jun 1998 A
5778374 Dang et al. Jul 1998 A
5803786 McCormick Sep 1998 A
5815225 Nelson Sep 1998 A
5822288 Shinada Oct 1998 A
5835721 Donahue et al. Nov 1998 A
5835732 Kikinis et al. Nov 1998 A
5838969 Jacklin et al. Nov 1998 A
5864868 Contois Jan 1999 A
5867163 Kurtenbach Feb 1999 A
5870710 Ozawa et al. Feb 1999 A
5918303 Yamaura et al. Jun 1999 A
5920728 Hallowell et al. Jul 1999 A
5923757 Hocker et al. Jul 1999 A
5952992 Helms Sep 1999 A
5982902 Terano Nov 1999 A
5998972 Gong Dec 1999 A
6006274 Hawkins et al. Dec 1999 A
6009237 Hirabayashi et al. Dec 1999 A
6011585 Anderson Jan 2000 A
6018705 Gaudet et al. Jan 2000 A
6041023 Lakhansingh Mar 2000 A
6052654 Gaudet et al. Apr 2000 A
6108426 Stortz Aug 2000 A
6122340 Darley et al. Sep 2000 A
6158019 Squibb Dec 2000 A
6161944 Leman Dec 2000 A
6172948 Keller et al. Jan 2001 B1
6179432 Zhang et al. Jan 2001 B1
6185163 Bickford et al. Feb 2001 B1
6191939 Burnett Feb 2001 B1
6208044 Viswanadham et al. Mar 2001 B1
6216131 Liu et al. Apr 2001 B1
6217183 Shipman Apr 2001 B1
6222347 Gong Apr 2001 B1
6248946 Dwek Jun 2001 B1
6295541 Bodnar et al. Sep 2001 B1
6297795 Kato et al. Oct 2001 B1
6298314 Blackadar et al. Oct 2001 B1
6332175 Birrell et al. Dec 2001 B1
6336365 Blackadar et al. Jan 2002 B1
6336727 Kim Jan 2002 B1
6341316 Kloba et al. Jan 2002 B1
6357147 Darley et al. Mar 2002 B1
6377530 Burrows Apr 2002 B1
6452610 Reinhardt et al. Sep 2002 B1
6467924 Shipman Oct 2002 B2
6493652 Ohlenbusch et al. Dec 2002 B1
6536139 Darley et al. Mar 2003 B2
6549497 Miyamoto et al. Apr 2003 B2
6560903 Darley May 2003 B1
6587403 Keller et al. Jul 2003 B1
6587404 Keller et al. Jul 2003 B1
6605038 Teller et al. Aug 2003 B1
6606281 Cowgill et al. Aug 2003 B2
6611607 Davis et al. Aug 2003 B1
6611789 Darley Aug 2003 B1
6617963 Watters et al. Sep 2003 B1
6621768 Keller et al. Sep 2003 B1
6623427 Mandigo Sep 2003 B2
6631101 Chan et al. Oct 2003 B1
6658577 Huppi et al. Dec 2003 B2
6693612 Matsumoto et al. Feb 2004 B1
6731312 Robbin May 2004 B2
6760536 Amir et al. Jul 2004 B1
6762741 Weindorf Jul 2004 B2
6781611 Richard Aug 2004 B1
6794566 Pachet Sep 2004 B2
6799226 Robbin et al. Sep 2004 B1
6801964 Mahdavi Oct 2004 B1
6832373 O'Neill Dec 2004 B2
6844511 Hsu et al. Jan 2005 B1
6870529 Davis Mar 2005 B1
6871063 Schiffer Mar 2005 B1
6876947 Darley et al. Apr 2005 B1
6882955 Ohlenbusch et al. Apr 2005 B1
6886749 Chiba et al. May 2005 B2
6898550 Blackadar et al. May 2005 B1
6911971 Suzuki et al. Jun 2005 B2
6918677 Shipman Jul 2005 B2
6931377 Seya Aug 2005 B1
6934812 Robbin et al. Aug 2005 B1
6950087 Knox et al. Sep 2005 B2
7010365 Maymudes Mar 2006 B2
7028096 Lee Apr 2006 B1
7046230 Zadesky May 2006 B2
7062225 White Jun 2006 B2
7076561 Rosenberg et al. Jul 2006 B1
7084856 Huppi Aug 2006 B2
7084921 Ogawa Aug 2006 B1
7092946 Bodnar Aug 2006 B2
7124125 Cook et al. Oct 2006 B2
7131059 Obrador Oct 2006 B2
7143241 Hull Nov 2006 B2
7146437 Robbin et al. Dec 2006 B2
7171331 Vock et al. Jan 2007 B2
7191244 Jennings et al. Mar 2007 B2
7213228 Putterman et al. May 2007 B2
7234026 Robbin et al. Jun 2007 B2
7277928 Lennon Oct 2007 B2
7301857 Shah et al. Nov 2007 B2
7356679 Le et al. Apr 2008 B1
7508535 Hart et al. Mar 2009 B2
20010013983 Izawa et al. Aug 2001 A1
20010029178 Criss et al. Oct 2001 A1
20010037367 Iyer Nov 2001 A1
20010041021 Boyle et al. Nov 2001 A1
20010042107 Palm Nov 2001 A1
20020002413 Tokue Jan 2002 A1
20020013784 Swanson Jan 2002 A1
20020028683 Banatre et al. Mar 2002 A1
20020045961 Gibbs et al. Apr 2002 A1
20020046315 Miller et al. Apr 2002 A1
20020055934 Lipscomb et al. May 2002 A1
20020059440 Hudson et al. May 2002 A1
20020059499 Hudson May 2002 A1
20020090912 Cannon et al. Jul 2002 A1
20020116082 Gudorf Aug 2002 A1
20020116517 Hudson et al. Aug 2002 A1
20020122031 Maglio et al. Sep 2002 A1
20020123359 Wei et al. Sep 2002 A1
20020152045 Dowling et al. Oct 2002 A1
20020156833 Maurya et al. Oct 2002 A1
20020161865 Nguyen Oct 2002 A1
20020173273 Spurgat et al. Nov 2002 A1
20020189426 Hirade et al. Dec 2002 A1
20020189429 Qian et al. Dec 2002 A1
20020199043 Yin Dec 2002 A1
20030002688 Kanevsky et al. Jan 2003 A1
20030007001 Zimmerman Jan 2003 A1
20030018799 Eyal Jan 2003 A1
20030037254 Fischer et al. Feb 2003 A1
20030046434 Flanagin et al. Mar 2003 A1
20030050092 Yun Mar 2003 A1
20030074457 Kluth Apr 2003 A1
20030076301 Tsuk et al. Apr 2003 A1
20030076306 Zadesky Apr 2003 A1
20030079038 Robbin et al. Apr 2003 A1
20030095096 Robbin et al. May 2003 A1
20030097379 Ireton May 2003 A1
20030104835 Douhet Jun 2003 A1
20030127307 Liu et al. Jul 2003 A1
20030128192 van Os Jul 2003 A1
20030133694 Yeo Jul 2003 A1
20030153213 Siddiqui et al. Aug 2003 A1
20030156503 Schilling et al. Aug 2003 A1
20030167318 Robbin et al. Sep 2003 A1
20030176935 Lian et al. Sep 2003 A1
20030182100 Plastina et al. Sep 2003 A1
20030221541 Platt Dec 2003 A1
20030229490 Etter Dec 2003 A1
20030236695 Litwin, Jr. Dec 2003 A1
20040001395 Keller et al. Jan 2004 A1
20040001396 Keller et al. Jan 2004 A1
20040012556 Yong et al. Jan 2004 A1
20040055446 Robbin et al. Mar 2004 A1
20040066363 Yamano et al. Apr 2004 A1
20040069122 Wilson Apr 2004 A1
20040076086 Keller Apr 2004 A1
20040086120 Akins, III et al. May 2004 A1
20040094018 Ueshima et al. May 2004 A1
20040103411 Thayer May 2004 A1
20040125522 Chiu et al. Jul 2004 A1
20040165302 Lu Aug 2004 A1
20040177063 Weber et al. Sep 2004 A1
20040198436 Alden Oct 2004 A1
20040210628 Inkinen et al. Oct 2004 A1
20040216108 Robbin Oct 2004 A1
20040224638 Fadell et al. Nov 2004 A1
20040242224 Janik et al. Dec 2004 A1
20040246275 Yoshihara et al. Dec 2004 A1
20040255135 Kitaya et al. Dec 2004 A1
20040267825 Novak et al. Dec 2004 A1
20050015254 Beaman Jan 2005 A1
20050053365 Adams et al. Mar 2005 A1
20050060240 Popofsky Mar 2005 A1
20050060542 Risan et al. Mar 2005 A1
20050108754 Carhart et al. May 2005 A1
20050111820 Matsumi et al. May 2005 A1
20050122315 Chalk et al. Jun 2005 A1
20050123886 Hua et al. Jun 2005 A1
20050146534 Fong et al. Jul 2005 A1
20050149213 Guzak et al. Jul 2005 A1
20050152294 Yu et al. Jul 2005 A1
20050156047 Chiba et al. Jul 2005 A1
20050160270 Goldberg et al. Jul 2005 A1
20050166153 Eytchison et al. Jul 2005 A1
20050216855 Kopra et al. Sep 2005 A1
20050218303 Poplin Oct 2005 A1
20050234983 Plastina et al. Oct 2005 A1
20050245839 Stivoric et al. Nov 2005 A1
20050246324 Paalasmaa et al. Nov 2005 A1
20050248555 Feng et al. Nov 2005 A1
20050257169 Tu Nov 2005 A1
20050259064 Sugino et al. Nov 2005 A1
20050259524 Yeh Nov 2005 A1
20060013414 Shih Jan 2006 A1
20060025068 Regan et al. Feb 2006 A1
20060026424 Eto Feb 2006 A1
20060061563 Fleck Mar 2006 A1
20060068760 Hameed et al. Mar 2006 A1
20060071899 Chang et al. Apr 2006 A1
20060088228 Marriott et al. Apr 2006 A1
20060092122 Yoshihara et al. May 2006 A1
20060094409 Inselberg May 2006 A1
20060095502 Lewis et al. May 2006 A1
20060098320 Koga et al. May 2006 A1
20060135883 Jonsson et al. Jun 2006 A1
20060145053 Stevenson et al. Jul 2006 A1
20060152382 Hiltunen Jul 2006 A1
20060155914 Jobs et al. Jul 2006 A1
20060170535 Watters et al. Aug 2006 A1
20060173974 Tang Aug 2006 A1
20060190577 Yamada Aug 2006 A1
20060190980 Kikkoji et al. Aug 2006 A1
20060221057 Fux et al. Oct 2006 A1
20060221788 Lindahl et al. Oct 2006 A1
20060259758 Deng et al. Nov 2006 A1
20060265503 Jones et al. Nov 2006 A1
20060272483 Honeywell Dec 2006 A1
20060277336 Lu et al. Dec 2006 A1
20070014536 Hellman Jan 2007 A1
20070028009 Robbin et al. Feb 2007 A1
20070061759 Klein, Jr. Mar 2007 A1
20070089057 Kindig Apr 2007 A1
20070106660 Stern et al. May 2007 A1
20070124679 Jeong et al. May 2007 A1
20070129062 Pantalone et al. Jun 2007 A1
20070135225 Nieminen et al. Jun 2007 A1
20070248311 Wice et al. Oct 2007 A1
20070255163 Prineppi Nov 2007 A1
20080055228 Glen Mar 2008 A1
20080134287 Gudorf et al. Jun 2008 A1
20100077338 Matthews et al. Mar 2010 A1
Foreign Referenced Citations (67)
Number Date Country
0 127 139 May 1984 EP
0578604 Jan 1994 EP
0 757 437 Feb 1997 EP
0 813 138 Dec 1997 EP
0 863 469 Sep 1998 EP
0 917 077 May 1999 EP
0 982 732 Mar 2000 EP
1 028 425 Aug 2000 EP
1028426 Aug 2000 EP
1 076 302 Feb 2001 EP
1 213 643 Jun 2002 EP
1 289 197 Mar 2003 EP
1 503 363 Feb 2005 EP
1536612 Jun 2005 EP
1 566 743 Aug 2005 EP
1566948 Aug 2005 EP
1 372 133 Dec 2005 EP
1 686 496 Aug 2006 EP
2 370 208 Jun 2002 GB
2384399 Jul 2003 GB
2399639 May 2005 GB
59-023610 Feb 1984 JP
03-228490 Oct 1991 JP
04-243386 Aug 1992 JP
6-96520 Apr 1994 JP
8-235774 Sep 1996 JP
9-50676 Feb 1997 JP
9-259532 Oct 1997 JP
2000-90651 Mar 2000 JP
2000-224099 Aug 2000 JP
2000-285643 Oct 2000 JP
2000-299834 Oct 2000 JP
2000-311352 Nov 2000 JP
2000-339864 Dec 2000 JP
2001-236286 Aug 2001 JP
2001-312338 Nov 2001 JP
2002-076977 Mar 2002 JP
2002-175467 Jun 2002 JP
2003-188792 Jul 2003 JP
2003-259333 Sep 2003 JP
2003-319365 Nov 2003 JP
2004-021720 Jan 2004 JP
2004-219731 Aug 2004 JP
2004-220420 Aug 2004 JP
20010076508 Aug 2001 KR
WO 9516950 Jun 1995 WO
9817032 Apr 1998 WO
WO 9928813 Jun 1999 WO
WO 0022820 Apr 2000 WO
WO 0133569 May 2001 WO
WO 0165413 Sep 2001 WO
WO 0167753 Sep 2001 WO
WO 0225610 Mar 2002 WO
WO 03023786 Mar 2003 WO
WO 03036457 May 2003 WO
WO 03067202 Aug 2003 WO
2004061850 Jul 2004 WO
WO 2004055637 Jul 2004 WO
WO2004084413 Sep 2004 WO
WO 2004104815 Dec 2004 WO
WO 2005031737 Apr 2005 WO
2005048644 May 2005 WO
WO 2005048644 May 2005 WO
WO 2005008505 Jul 2005 WO
2005109781 Nov 2005 WO
WO 2006040737 Apr 2006 WO
2006071364 Jun 2006 WO
Related Publications (1)
Number Date Country
20060274905 A1 Dec 2006 US