The present invention relates generally to a system for processing audio and music signals, and more particularly, to a dynamic processing system to produce enhanced audio and music signals.
A consistent stream of technological developments has changed the way people listen to and enjoy audio and musical performances. For example, sound digitization has provided a way for large volumes of sound information to be stored in a small, lightweight package known as a compact disk (CD). It is now possible for people to have home sound systems that rival even the best theater systems.
When music is played through the speakers, it is possible for the listener 110, who is facing front, to perceive spatial positions relating to sound components within the music. For example, the listener 110 may perceive that a singer's voice 112 is directly in front of him. The listener may also perceive that the sound of a piano 114 is to his front and right, and that the sound of a guitar 116 is behind him and to the left.
However, a significant problem exists in that the spatial positions and sound qualities of the sound components in a recording, such as on a CD, are determined when the recording is created. Thus, it may not be possible for the sound components of a sound signal to be associated with different spatial positions or sound qualities that may be more enjoyable to the listener.
The present invention provides a system for processing a sound signal that allows listeners to dynamically customize perceived spatial positions and sound qualities of sound components associated with the sound signal. For example, the listener may configure the system to reposition the perceived position of a singer's voice or may cause the perceived position of the singer's voice to dynamically change in accordance with a preprogrammed script. The listener may also use the system to automatically reposition the perceived spatial positions of the sound components based on events detected within the sound signal itself. For example, the detected beat of a drum may be used to change the perceived spatial position of the singer's voice. It is also possible to use the system to change the sound qualities of the sound components as desired.
One embodiment of the present invention includes an apparatus for processing a sound signal that comprises an input to receive the sound signal, a sound unmixer coupled to the input to receive the sound signal and unmix at least one sound stream from the sound signal based on at least one unmixing instruction, and an output coupled to the sound unmixer to output the at least one sound stream.
Another embodiment of the present invention provides a method of processing a sound signal. The method comprises the steps of receiving the sound signal, unmixing at least one sound stream from the sound signal based on at least one unmixing instruction, and outputting the at least one sound stream.
A further understanding of the nature and the advantages of the inventions disclosed herein may be realized by reference to the remaining portions of the specification and the attached drawings.
The present invention provides a system for processing sound signals that allows listeners to dynamically customize perceived spatial positions and/or sound qualities of components associated with the sound signals.
The sound source 202 has a sound output 212 that couples to the sound unmixer 204. In one embodiment, the sound source may be any type of sound source, such as a CD player or a cassette tape player. The sound source may also be a device that outputs sound data, such as a computer or a musical instrument like an electronic keyboard. Even a microphone picking up a live performance is suitable for use as a sound source in the present system.
The sound output 212 includes digital data representative of the sounds to be processed. If the sound source 202 is a CD, then digital data on the CD would be transmitted on the sound output 212. If the sound source is a cassette tape, wherein an analog signal represents the sounds to be processed, an analog to digital (A/D) converter could be included in the sound source to produce digital sound data for transmission on the sound output 212.
In another embodiment, the sound source 202 is a modified sound source that is capable of operating with modified media, such as modified CDs or cassette tapes that have sound data and control data stored on them. Thus, the modified sound source would be able to output both the digital sound data 212 and the control data 226 when playing back the modified media.
The sound signal can be a single signal or a combination of signals. For example, the sound source may be a CD player and the sound signal may be two signals representing the left and right channels, or four signals representing left and right channels for both front and back speaker locations.
The sound unmixer 204 is coupled to receive the sound output 212. The sound unmixer 204 also receives unmix instructions 214. The sound unmixer unmixes sound streams from the sound signal based on the unmix instructions. The unmix instructions are provided by the instruction generator 210. A later section of this document provides a complete description of the unmix instructions.
Using the unmix instructions, the sound unmixer produces one or more sound streams 216, which are also referred to as “voices.” Each of the sound streams may represent various portions of the sound signal. For example, one stream may represent high frequency components of the sound signal 212, while a second stream represents low frequency components. However, the sound unmixer is very flexible in the way that it unmixes sound streams to represent portions of the input sound signal. For example, special processing may be performed on the sound signal to produce an unmixed stream that contains only certain spectral components of the sound signal. It is also possible to output unmixed sound streams directly from the sound unmixer 204 as shown at 232.
The stream processor 206 is coupled to receive the unmixed streams 216. The stream processor also receives processing instructions 218 from the instruction generator 210. The stream processor processes the unmixed streams from the sound unmixer based on the processing instructions. A later section of this document provides a complete description of the processing instructions.
Using the processing instructions, the stream processor produces processed streams 220. The stream processor 206 processes the sound streams 216 in a number of ways. For example, frequency domain processing, like pitch-shifting, may be performed. Other processes include three-dimensional (3D) position processing, wherein the perceived spatial positions of sounds represented by a stream are changed. Other types of processing performed by the stream processor 206, such as time domain processing, are described in greater detail in a later section of this document. It is also possible to output processed streams directly, as shown at 234.
The mixer 208 receives the processed streams 220 and combines them to form an output signal 222. The mixer comprises logic to combine the processed streams in accordance with mixing instructions 224 received from the instruction generator 210. The mixer may include delay lines or storage buffers to time synchronize the processed streams when forming the output signal 222. The output signal 222 may then be input to a sound system, such as the sound system in the listening room described above.
The instruction generator 210 provides unmixing instructions 214, processing instructions 218 and mixing instructions 224. In one embodiment, the instruction generator 210 generates the instructions based on a control script received at control input 228. In another embodiment, the instruction generator generates the instructions based on information received at user input 230. In another embodiment, the instruction generator generates the instructions based on control data 226 received from the sound source 202, wherein the sound source is a modified sound source capable of outputting both sound 212 and control data 226. In another embodiment, the instruction generator generates the instructions based on information detected in the sound signal 212.
The processor 206 is shown comprising a number of subprocessors 304 and a corresponding number of 3D position processors 306. The subprocessors and 3D position processors are used to process the unmixed streams 216.
The subprocessors 304 are used to process the unmixed streams in ways that generally do not change their perceived spatial position. For example, a subprocessor may perform pitch-shifting or signal harmonizing on an unmixed stream. While such processes may change audible characteristics of the stream as perceived by a listener, they generally do not change the perceived spatial position; however, the subprocessors could be programmed to do so if desired. Thus, the subprocessors can perform all manner of signal processing on the unmixed streams to produce subprocessed streams 308. When the subprocessing is complete, the subprocessed streams 308 are input to the 3D position processors 306.
The 3D position processors 306 operate to reposition the perceived spatial position of the sounds in the unmixed streams. For example, assuming the listener is seated in the listening room 100 and facing front, one unmixed stream may represent the singer's voice 112. The singer's voice may be perceived to be directly in front of the listener. The 3D position processors may operate on that stream to change the perceived position of the singer's voice. For example, the singer's voice may be repositioned to be behind the listener. A more detailed example is provided in a later section of this document.
To change the perceived position of a stream, the 3D position processors produce positioning outputs 314 utilizing any 3D or 2D positioning technique. For example, in one embodiment the 3D position processors provide a portion of the unmixed stream to each speaker. By changing the portions of the sound stream provided to each speaker, the perceived spatial position of the stream may be repositioned around the listening room.
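As an illustration of this per-speaker apportioning, the sketch below pans a mono stream across four speakers using a constant-power gain law. The speaker azimuths, the gain law, and the function name are illustrative assumptions, not the specific positioning technique mandated by the invention.

```python
import numpy as np

def pan_to_speakers(stream, azimuth_deg, speaker_azimuths=(-45.0, 45.0, 135.0, -135.0)):
    """Apportion a mono stream across speakers so its perceived position
    matches the target azimuth (0 degrees = directly in front of the listener).

    Constant-power panning between the two speakers nearest the target;
    the remaining speakers receive silence.  Returns one signal per speaker.
    """
    stream = np.asarray(stream, dtype=float)
    spk = np.asarray(speaker_azimuths, dtype=float)
    # Angular distance from the target to each speaker, wrapped to [0, 180].
    diffs = np.abs((spk - azimuth_deg + 180.0) % 360.0 - 180.0)
    nearest = np.argsort(diffs)[:2]                 # two closest speakers
    d0, d1 = diffs[nearest[0]], diffs[nearest[1]]
    frac = d0 / (d0 + d1) if (d0 + d1) > 0 else 0.0
    gains = np.zeros(len(spk))
    gains[nearest[0]] = np.cos(frac * np.pi / 2.0)  # constant-power gain law
    gains[nearest[1]] = np.sin(frac * np.pi / 2.0)
    return gains[:, None] * stream[None, :]
```

For example, `pan_to_speakers(voice, 180.0)` would move a center-panned voice behind the listener by feeding it, at equal constant-power gains, to the two rear speakers.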
The processor instructions 218 determine what processes and positioning to perform on the streams 216. The processor instructions 218 include subprocessor instructions 310 and position processor instructions 312. The subprocessor instructions 310 are used by the subprocessors 304 to determine what signal processing functions are to be performed on the unmixed streams, for example, processes to produce pitch-shifting or echo effects. The position processor instructions 312 are used by the 3D position processors 306 to determine how to change the perceived spatial position of the subprocessed streams 308. Thus, the instruction generator 210 is capable of controlling the operation of both the subprocessors 304 and the 3D position processors 306.
The processor outputs 220 of the processor 206 are coupled to the mixer 208. Assuming that the sound processing system is designed to produce results for playback on a four-speaker system, each of the 3D position processors produces four position signals. The position signals will produce the desired spatial position for the stream when input into a four-speaker sound system. It will be apparent to one with skill in the art that any number of speakers may be located in the listening room, and that, based on the speaker arrangement, the perceived position of the unmixed streams may be changed to virtually any position.
The mixer 208 mixes together the processed signals 220 representing all the streams to produce four output signals 222 suitable for use with a four-speaker sound system. The mixer 208 receives mixing instructions 224 to determine how to mix together the streams. Thus, it is possible to adjust the relative signal level of one stream with respect to another when forming the output signals 222. As a result, when played on a four-speaker system, all of the streams will be perceived by a listener to have the desired processing and corresponding spatial positions.
The unmixer 204 creates and outputs the unmixed streams 216 using an unmixing process described in a later section of this document. The unmixer 204 is capable of outputting multiple unmixed streams, wherein each stream may be input to a separate subprocessor included in the processor 206.
The instruction generator 210 produces instructions for the sound unmixer 204, the subprocessors 304, the 3D processors 306 and the mixer 208. The instruction generator 210 includes a control sequencer 316, a sound analyzer 318 and a communication interface 350.
The script input 228 couples to the communication interface 350. The communication interface 350 receives the script data from an external source and provides it to the control sequencer 316 via script channel 352. The communication interface may include a modem for connecting to other computers or computer networks. The communication interface may also include additional memory for storage of received script data. Other types of communication devices may be contained in the communication interface 350. For example, an infra-red (IR), radio frequency (RF), or other type of communication device may be included in the communication interface 350 so that script data may be received from a variety of sources.
The control sequencer is also coupled to receive control data 226 that may be included as part of the sound source, when the sound source is a modified sound source that outputs both sound signals and control data. For example, the control script information may be embedded on a modified CD containing both music and script data. In that case, a single CD would contain music and a control script defining how the music is to be processed to achieve a specific effect on playback.
The control sequencer also includes a memory 322 having script presets. The script presets are determined before processing begins and are stored in the memory 322 for future use.
The sound analyzer 318 is also part of the instruction generator 210. The sound analyzer 318 is coupled to the sound source 202 to receive the sound signal 212 and to detect selected events within the sound signal. For example, the beat of a drum or a crash of a cymbal may be events that are detected by the sound analyzer. The control sequencer 316 instructs the sound analyzer 318 to detect selected events via an event channel 320. The event channel 320 is also used by the sound analyzer to transmit indications to the control sequencer 316 that the selected events have been detected. The control sequencer uses these detected events to control the generation of instructions to the components of the sound processing system 200.
The user input 230 couples to the control sequencer 316 to allow a user to interact with the instruction generator 210 to control operation of the sound processing system 200. For example, the user may use the user input to select whether the external script input 228 or the control data input 226 are used to receive scripts for processing the sound signal 212. The user may also specify operation of any of the other components of the sound processing system by using the user input. In one embodiment, the user can instruct the control sequencer 316 to activate the sound analyzer 318 to detect selected events in the sound signal 212. Further, upon detection of the selected events, the control sequencer will use the presets stored in the memory 322 to generate instructions for the components of the sound processing system. The user input 230 may also be used to enter control script information directly into the instruction generator 210.
In another embodiment of the present invention, the unmixer 204 and the instruction generator 210 provide unmixed streams 216 and control instructions 214, 310, 312, 224 to an external system (not shown) that may include subprocessors, 3D position processors and mixers. The external system may be another computer program or computer system including hardware and software. The external system may also be located at a different location from the components of the system 200. As a result, it is possible to distribute the processing of the unmixed streams to one or more systems. However, it will be apparent to one with skill in the art that merely distributing the processing does not deviate from the scope of the invention, which includes ways to produce unmixed streams which may be processed in accordance with instructions based on a control script.
Therefore, it is possible to process sounds in a variety of ways using the sound processing system 200. In one method, the sound is processed using events detected within the sound itself. In another method, sound is processed using script information embedded with the sound at the sound source. In another method, the script information is independent from the sound source, for example, a separate data file that can be input to the control sequencer 316 to control how the sounds are processed.
The invention is related to the use of the sound processing system 200 for dynamic sound processing. According to one embodiment of the invention, dynamic sound processing is provided by the sound processing system 200 in response to the control sequencer 316 executing one or more sequences of one or more instructions. Such instructions may be read into the control sequencer 316 from another computer-readable medium, such as the sound source 202. Execution of the sequences of instructions causes the control sequencer to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to the control sequencer 316 for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as those that may be used in conjunction with the sound source 202. Volatile media include dynamic memory, such as dynamic memory that may be associated with the presets 322. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise the script input 228. Transmission media can also take the form of radio or light waves, such as those generated during radio frequency (RF) and infra-red (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a computer data storage structure, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the control sequencer 316 for execution. For example, the instructions may initially be borne on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to the sound processing system 200 can receive the data on the telephone line via the script input 228. The communication interface 350 receives the data and forwards the data over the channel 352 to the control sequencer 316 which executes instructions included in the data. The instructions received by the control sequencer 316 may optionally be stored in an internal memory within the control sequencer either before or after execution by the control sequencer 316.
The communication interface 350 provides a two-way data communication coupling to a script input 228 that may be connected to a local network (not shown). For example, the communication interface 350 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface 350 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface 350 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
If the script input 228 is to be coupled to a data network, a connection may be established through a local network (not shown) to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network, now commonly referred to as the “Internet.” The local network and the Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks, on the script input 228, and through the communication interface 350, which carry the digital data to and from the sound processing system 200, are exemplary forms of carrier waves transporting the information.
The sound processing system 200 can send messages and receive data, including program codes, through the network(s), the script input 228 and the communication interface 350. In the Internet example, an Internet server might transmit code for an application program through the Internet, ISP, local network, and communication interface 350. In accordance with the invention, one such downloaded application provides for dynamic sound processing as described herein.
The received code may be executed by the control sequencer 316 as it is received, and/or stored in the memory 322 as presets, or in other non-volatile storage, for later execution. In this manner, the sound processing system 200 obtains application code in the form of a carrier wave.
The outputs of the DFT blocks 402L and 402R are the frequency domain spectra of the left and right stereo channels. Peak detection blocks 404L and 404R detect the frequencies at which peaks occur in the frequency domain spectra. This information is then passed to a subtraction block 406, which generates a difference spectra signal having values equal to the difference of the left and right frequency domain spectra at each peak frequency. If voice signals are panned to center, then the magnitudes and phases of the frequency domain spectra for each channel at voice frequencies will be almost identical. Accordingly, the magnitude of the difference spectra at those frequencies will be small.
The difference signal as well as the left and right peak frequencies and frequency domain spectra are input to an amplitude adjustment block 410. The amplitude adjustment block utilizes the magnitudes of the difference spectra and frequency domain spectra of each channel to modify the magnitudes of the frequency domain spectra of each channel and output a modified spectra. The magnitude of the modified spectra depends on the magnitude of the difference spectra. Accordingly, the magnitude of the modified frequency domain spectra will be low for frequencies corresponding to voice.
The modified frequency domain spectra for each channel is input to inverse discrete Fourier transform (IDFT) blocks 412L and 412R, which output time domain signals based on the modified spectra. Since the modified spectra was attenuated at frequencies corresponding to voice, the modified stereo channels (L′ and R′) output by the IDFT blocks 412L and 412R will have the voice removed. However, the instruments and other sounds not panned to the center will remain, so that the stereo quality of the recording is preserved. Additionally, a center output containing the unmixed spectra is input to IDFT block 412C, which outputs a time domain signal (C′) based on the unmixed spectra.
The time domain signals L′, C′ and R′ are input to a mixer 414 that combines the received signals to produce seven “voices.” Each voice represents some combination of the L′, C′ and R′ signals. For example, V0 might represent only the C′ signal while V1 comprises some proportion of L′ and C′.
The unmixing instructions 214 are received by the unmixer 204 and used to determine how to unmix the input signal 212 to form the output voices (V0-V6). For example, the unmixing instructions specify how to combine the L′, C′ and R′ outputs to form the voice outputs. The unmixing instructions also provide unmixing parameters that can be used by the subtracter 406 and the amplitude adjustor 410 to select a portion of the input signal 212 to be unmixed and provided to the IDFT block 412C. For example, the unmixing parameters may be used to select the center portion of the input signal 212 to be unmixed. Thus, frequency peaks that occur with equal amplitudes in both the left and right stereo channels would be unmixed. The effect of this operation can be demonstrated by considering a case where a singer's voice is spatially centered between the left and right channels. Since the singer's voice so positioned would produce identical frequency peaks in the left and right channels, equal amounts of these frequency peaks are removed and, as a result, the singer's voice is unmixed from the sound signal.
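Purely as an illustration of coefficient-driven voice formation, the sketch below combines the L′, C′ and R′ signals into voices using one hypothetical (cL, cC, cR) triple per voice; the coefficient values and layout are assumptions, not the patent's actual instruction encoding.

```python
import numpy as np

# Hypothetical coefficients supplied by the unmixing instructions:
# one (cL, cC, cR) triple per output voice.
VOICE_COEFFS = np.array([
    [0.0, 1.0, 0.0],   # V0: the C' signal only (e.g., the unmixed voice)
    [0.7, 0.3, 0.0],   # V1: some proportion of L' and C'
    [0.0, 0.3, 0.7],   # V2: some proportion of R' and C'
    [1.0, 0.0, 0.0],   # V3: the L' signal only
])

def form_voices(l_prime, c_prime, r_prime, coeffs=VOICE_COEFFS):
    """Combine the unmixed time domain signals into output voices."""
    channels = np.stack([l_prime, c_prime, r_prime])  # shape (3, n_samples)
    return coeffs @ channels                          # shape (n_voices, n_samples)
```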
In another embodiment, the unmixing parameters include amplitude weighting parameters that may be used to unmix signals that do not appear equally in both the left and right channels. For example, the singer's voice of the above example may be spatially positioned off center, and thus more toward either the left or the right channel. As a result, the frequency peaks representing the singer's voice would have greater amplitude on the side where the singer's voice is located. The amplitude weighting parameters are used by the subtracter 406 and the amplitude adjustor 410 to unmix the singer's voice by compensating for the greater amplitude of the frequency peaks that appear on that channel (either left or right). As a result, the larger amplitude frequency peaks on that channel are unmixed along with the corresponding lower amplitude frequency peaks on the other channel. Thus, even if the singer's voice appears to be spatially off center, given the appropriate unmixing parameters the singer's voice can still be unmixed by the unmixer 204.
The above described unmixing process can be used to unmix virtually any part of the input signal to produce one or more of the voice outputs. The unmixing is performed by hardware and/or software that receives the unmixing instructions and performs the above defined functions accordingly. The various operations performed by the blocks described above are presented in greater detail below.
A frequency-domain representation of the input signal 212 can be obtained by use of a phase vocoder, a process in which the incoming signal is split into overlapping, windowed, short-term frames that are then processed by a Fourier transform, resulting in a series of short-term frequency domain spectra representing the spectral content of the input signal in each short-term frame. The frequency domain representation can then be altered and a modified time-domain signal reconstructed by use of overlapping windowed inverse Fourier transforms. The phase vocoder is a standard and well known tool that has been used for years in many contexts (voice coding, high-quality time-scaling, frequency-domain effects, and so on).
Assuming the incoming stereo signal is processed by the phase-vocoder, for each stereo input frame there is a pair of frequency-domain spectra that represent the spectral content of the short-term left and right signals. The short-term spectrum of the left signal is denoted by XL(Ωk,t), where Ωk is the frequency channel and t is the time corresponding to the short-time frame. Similarly, the short-term spectrum of the right signal is denoted by XR(Ωk,t). Both XL(Ωk,t) and XR(Ωk,t) are arrays of complex numbers with amplitudes and phases.
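As a concrete sketch of this front end, the short-term spectra XL(Ωk,t) and XR(Ωk,t) can be computed with any standard phase-vocoder analysis; the example below uses SciPy's STFT, with the 2048-sample frame and 75% overlap being assumed values rather than parameters prescribed by the method.

```python
from scipy.signal import stft

FRAME = 2048            # assumed short-term frame length
HOP = 512               # assumed hop size (75% overlap)

def short_term_spectra(left, right, fs=44100):
    """Return X_L(Omega_k, t) and X_R(Omega_k, t): complex arrays of shape
    (n_frequency_channels, n_frames) built from overlapping windowed frames."""
    _, _, XL = stft(left, fs=fs, nperseg=FRAME, noverlap=FRAME - HOP)
    _, _, XR = stft(right, fs=fs, nperseg=FRAME, noverlap=FRAME - HOP)
    return XL, XR
```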
The first step consists of identifying peaks in the magnitudes of the short-term spectra. These peaks indicate sinusoidal components that can belong either to the singer's voice or to background instruments. To find the peaks, one calculates the magnitude of XL(Ωk,t), of XR(Ωk,t), or of XL(Ωk,t)+XR(Ωk,t), and performs a peak detection process. One such peak detection scheme consists of declaring as peaks those channels whose amplitude is larger than that of the two neighbor channels on the left and the two neighbor channels on the right. Associated with each peak is a so-called region of influence composed of all the frequency channels around the peak. Consecutive regions of influence are contiguous, and the limit between two adjacent regions can be set to be exactly mid-way between two consecutive peaks or to be located at the channel of smallest amplitude between the two consecutive peaks.
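A minimal sketch of this peak-detection scheme, using the mid-way rule for splitting adjacent regions of influence (the channel-of-smallest-amplitude rule mentioned above would work equally well):

```python
def detect_peaks(mag):
    """Return indices k where mag[k] exceeds its two neighbor channels on
    the left and its two neighbor channels on the right."""
    return [k for k in range(2, len(mag) - 2)
            if mag[k] > mag[k - 1] and mag[k] > mag[k - 2]
            and mag[k] > mag[k + 1] and mag[k] > mag[k + 2]]

def regions_of_influence(peaks, n_channels):
    """Map each peak to a contiguous (low, high) channel range, with the
    limit between adjacent regions set mid-way between consecutive peaks."""
    regions = {}
    for i, p in enumerate(peaks):
        lo = 0 if i == 0 else (peaks[i - 1] + p) // 2 + 1
        hi = n_channels - 1 if i == len(peaks) - 1 else (p + peaks[i + 1]) // 2
        regions[p] = (lo, hi)
    return regions
```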
The Left-Right difference signal in the frequency domain is obtained next by calculating the difference between the left and right spectra using:
D(Ωk,t) = XL(Ωk,t) − XR(Ωk,t)  (Eq. 1)

for each peak frequency Ωk. For peaks that correspond to components belonging to the voice (or any instrument panned in the center), the magnitude of this difference will be small relative to either XL(Ωk,t) or XR(Ωk,t). Unlike in the standard Left-Right technique, however, the difference signal itself is not used as the voice-removed output. Rather, the key idea is to calculate how much of a gain reduction it takes to bring XL(Ωk,t) or XR(Ωk,t) down to the level of the difference D(Ωk,t). To this end, the following gains are computed:

ΓL(Ωk,t) = min(1, |D(Ωk,t)| / |XL(Ωk,t)|)

and

ΓR(Ωk,t) = min(1, |D(Ωk,t)| / |XR(Ωk,t)|)

which are the left gain and the right gain for each peak frequency. The min( ) function assures that these gains are not allowed to become larger than 1. Peaks for which ΓL(Ωk,t) or ΓR(Ωk,t) are small correspond to components panned to the center, such as the singer's voice.

To remove the voice, one applies a real gain GL,R(Ωk,t) to each peak frequency and to the frequency channels in its region of influence:

YL(Ωk,t) = GL(Ωk,t) XL(Ωk,t)

YR(Ωk,t) = GR(Ωk,t) XR(Ωk,t)

The gains GL,R(Ωk,t) are derived from the gains ΓL,R(Ωk,t) defined above. To remove the voice, GL,R(Ωk,t) should be small for the peaks at which ΓL,R(Ωk,t) is small (the center-panned components) and close to 1 for all other peaks.
One choice is to define
GL,R(Ωk,t) = ΓL,R(Ωk,t)

in which case the modified channels YL,R(Ωk,t) have their peak magnitudes brought down to the magnitude of the difference spectra, which removes the same amount as the standard Left-Right technique.
Another choice is to define
GL,R(Ωk,t) = [ΓL,R(Ωk,t)]^α

with α>0, where the exponent α controls the amount of reduction brought by the algorithm: α close to 0 does not remove much, while large values of α remove more, and α=1 removes exactly the same amount as the standard Left-Right technique. Using large values of α makes it possible to attain a larger amount of voice removal than is possible with the standard technique.
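Collecting Eq. (1) and the gain definitions above, a per-frame sketch of the gain computation and its application might read as follows; the eps guard against division by zero and the dictionary layout are implementation details assumed here, not part of the equations.

```python
def peak_gains(XL_frame, XR_frame, peaks, alpha=1.0, eps=1e-12):
    """Gamma_{L,R} = min(1, |D| / |X_{L,R}|), raised to the exponent alpha;
    alpha = 1 removes the same amount as the standard Left-Right technique."""
    GL, GR = {}, {}
    for k in peaks:
        D = XL_frame[k] - XR_frame[k]
        GL[k] = min(1.0, abs(D) / (abs(XL_frame[k]) + eps)) ** alpha
        GR[k] = min(1.0, abs(D) / (abs(XR_frame[k]) + eps)) ** alpha
    return GL, GR

def apply_gains(X_frame, gains, regions):
    """Scale every channel in each peak's region of influence by that peak's
    real gain; because the gain is real, phases are left unmodified."""
    Y = X_frame.copy()
    for k, (lo, hi) in regions.items():
        Y[lo:hi + 1] = gains[k] * Y[lo:hi + 1]
    return Y
```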
In general, the gain function is a function based on the magnitude of the difference spectra.
To amplify the voice and attenuate the background instruments, the gains GL,R(Ωk,t) can instead be set, for example, to

GL,R(Ωk,t) = 1 − ΓL,R(Ωk,t)

or

GL,R(Ωk,t) = 1 − [ΓL,R(Ωk,t)]^α

etc. Because GL,R(Ωk,t) is real, the phases of the left and right channels are left unmodified.
It is often desirable to perform time-domain smoothing of the gain values to avoid erratic gain variations that can be perceived as a degradation of the signal quality. Any type of smoothing can be used to prevent such erratic variations. For example, one can generate a smoothed gain by setting
ĜL,R(Ωk,t) = β GL,R(Ωk,t) + (1 − β) ĜL,R(Ωk,t−1)

where β is a smoothing parameter between 0 (a lot of smoothing) and 1 (no smoothing), (t−1) denotes the time of the previous frame, and Ĝ is the smoothed version of G. Other types of linear or non-linear smoothing can be used.
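As a one-function sketch (variable names assumed), the smoothing recursion is simply:

```python
def smooth_gain(g_current, g_smoothed_prev, beta=0.5):
    """G_hat(t) = beta * G(t) + (1 - beta) * G_hat(t - 1); beta near 0
    smooths heavily, while beta = 1 disables smoothing."""
    return beta * g_current + (1.0 - beta) * g_smoothed_prev
```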
Because the voice signal typically lies in a reduced frequency range (for example, from 100 Hz to 4 kHz for a male voice), it is possible, when removing the voice, to set the gains GL,R(Ωk,t) to 1 for all frequencies outside that range:

GL,R(Ωk,t) = 1 for Ωk < Ωmin or Ωk > Ωmax

where Ωmin and Ωmax denote the lower and upper limits of the voice frequency range.
Thus, components belonging to an instrument panned in the center (such as a bass guitar or a kick drum), but whose spectral content does not overlap that of the voice, will not be attenuated as they would be with the standard method.
For voice amplification one could set those gains to 0:
GL,R(Ωk,t) = 0 for Ωk < Ωmin or Ωk > Ωmax
so that instruments falling outside the voice range would be removed automatically regardless of where they are panned.
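Both range-based rules can be captured in one helper; the 100 Hz to 4 kHz defaults follow the male-voice example above, and the mode flag is an assumed convention rather than anything prescribed by the method:

```python
import numpy as np

def gate_gains_by_voice_range(gains, freqs_hz, f_lo=100.0, f_hi=4000.0,
                              mode="remove"):
    """Outside [f_lo, f_hi], force the gain to 1 when removing the voice
    (leave out-of-band instruments untouched) or to 0 when amplifying the
    voice (discard out-of-band instruments)."""
    g = np.array(gains, dtype=float, copy=True)
    outside = (np.asarray(freqs_hz) < f_lo) | (np.asarray(freqs_hz) > f_hi)
    g[outside] = 1.0 if mode == "remove" else 0.0
    return g
```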
Sometimes the voice is not panned directly in the center but might appear in both channels with a small amplitude difference. This would happen, for example, if both channels were transmitted with slightly different gains. In that case, the gain mismatch can easily be incorporated in Eq. (1):
D′(Ωk,t) = XL(Ωk,t) − δ XR(Ωk,t)

where δ is a gain adjustment factor that represents the gain ratio between the left and right channels. Thus, by using the appropriate δ it is possible to unmix sound components that are not centered between the left and right channels, but are panned to one side or the other. The appropriate δ will result in the frequency components of interest having a very small difference spectra.
Once YL(Ωk,t) and YR(Ωk,t) have been calculated, the modified time-domain signals are reconstructed by use of the overlapping windowed inverse Fourier transforms described above, yielding the modified stereo channels and, where desired, the unmixed center channel.
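Tying the pieces together, the following end-to-end sketch attenuates center-panned content and extracts a center estimate. For brevity, it applies the gains at every frequency channel rather than only at detected peaks and their regions of influence, and the frame length, hop size, α default, and the 0.5 weighting of the center estimate are all assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def unmix_center(left, right, fs=44100, alpha=1.0, frame=2048, hop=512):
    """Return voice-removed left/right channels plus an unmixed center
    estimate, reconstructed with overlapping windowed inverse transforms."""
    _, _, XL = stft(left, fs=fs, nperseg=frame, noverlap=frame - hop)
    _, _, XR = stft(right, fs=fs, nperseg=frame, noverlap=frame - hop)

    eps = 1e-12
    D = XL - XR                                       # Eq. (1), per channel
    GL = np.minimum(1.0, np.abs(D) / (np.abs(XL) + eps)) ** alpha
    GR = np.minimum(1.0, np.abs(D) / (np.abs(XR) + eps)) ** alpha

    YL, YR = GL * XL, GR * XR                         # voice-removed spectra
    C = 0.5 * ((1.0 - GL) * XL + (1.0 - GR) * XR)     # center (voice) estimate

    _, l_out = istft(YL, fs=fs, nperseg=frame, noverlap=frame - hop)
    _, r_out = istft(YR, fs=fs, nperseg=frame, noverlap=frame - hop)
    _, c_out = istft(C, fs=fs, nperseg=frame, noverlap=frame - hop)
    return l_out, r_out, c_out
```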
At block 702, a sound source provides a sound signal to the sound processing system of the present invention. For example, the sound source 202 provides the sound signal 212 for processing.
At block 704, a control script for processing the sound signal is determined. In one embodiment, the user instructs the control sequencer where to find the control script. For example, the user indicates via the user input 230 that an external script is to be received from the script input 228 or that a script accompanying the sound signal at script data input 226 is to be used.
At block 706, the control sequencer 316 begins obtaining script instructions from the selected script input.
At block 708, the control sequencer decodes the script and generates unmixing instructions to the sound unmixer 204. For example, the unmixing instructions provide coefficients for forming one or more voices 216 output from the unmixer.
At block 710, one or more voices 216 are output from the unmixer 204 in response to the unmixing instructions.
At block 712, the control sequencer 316 generates processing instructions 310 to transmit to the subprocessors 304 for processing the voices 216 created by the unmixer 204. The processing instructions instruct the subprocessors 304 to perform, for example, frequency based processing, such as pitch-shifting or signal harmonizing. The processing may also include time based processing, such as signal filtering.
At block 714, the control sequencer 316 generates positioning instructions 312 to transmit to the position processors 306 to adjust the perceived spatial positions of the subprocessed voices 308. For example, assuming the sound processing system is to be used with a four-speaker system, the position processors output a signal for each of the four speakers to produce a perceived position of the voice for the listener. As a result, varying amounts of the voice appear in the 3D processor outputs 220.
At block 716, the control sequencer 316 generates mixing instructions to mix the processed signals 220 together. This is achieved by the mixer 208, which mixes the signals received from the processor 206, according to the mixing instructions 224, to form mixer outputs 222. The mixer outputs are transmitted to the speakers to produce sounds corresponding to the processing and spatial repositioning which can be perceived by the listener.
At block 718, the method continues by processing any remaining script instructions that exist. For example, if the sound signal is a song that lasts three minutes, the script may include a list of instructions to be processed for the three minute duration.
In order to correctly process the sound signals, time synchronization exists between the components of the processing system 200 and the sound signal. For example, if a sound signal is three minutes in duration, and spatial repositioning is to occur at two minutes into the sound signal, the instruction generator 210, the unmixer 204 and the stream processors 206 are synchronized to achieve this.
In one embodiment, the sound signal and the control scripts include time stamps. The control sequencer 316 generates instructions to the components of the processing system by reading the time stamps on the control script and sending the instructions at the appropriate time in the processing. Likewise, the subprocessors 304 and the position processors 306, read the time stamps on the instructions they receive and match those time stamps with time stamps accompanying the sound signal. Thus, it is possible to know exactly when processing is to be applied to a particular stream.
The mixer 208 also receives time stamp information with its instructions from the control sequencer 316. The mixer uses the time stamp information to determine when to apply selected mixing functions. The mixer can also obtain time stamp information from each received stream and align the received streams based on the time stamps before combining them, so that no distortion is introduced by combining misaligned streams.
In one embodiment of the present invention, a master clock is coupled to the components of the processing system 200, and is used to synchronize the components with the time stamps accompanying the sound signal and script file. In another embodiment of the present invention, a time stamp accompanying the sound signal is used to synchronize the system. In that case, each component reads the time stamp on the sound signal it is to process in order to determine when to apply the script instructions.
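A minimal sketch of time-stamp-driven dispatch, assuming each script entry carries a time stamp and a callable that issues the corresponding unmixing, processing, or mixing instructions; the data layout is hypothetical, not the patent's script format:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScriptInstruction:
    time_stamp: float           # seconds into the sound signal
    apply: Callable[[], None]   # issues unmix/processing/mixing instructions

class ControlSequencer:
    def __init__(self, instructions: List[ScriptInstruction]):
        self.pending = sorted(instructions, key=lambda i: i.time_stamp)

    def on_frame(self, signal_time: float) -> None:
        """Dispatch every instruction whose time stamp has been reached,
        keyed to the time stamps accompanying the sound signal."""
        while self.pending and self.pending[0].time_stamp <= signal_time:
            self.pending.pop(0).apply()
```

For example, an instruction with time_stamp=120.0 would fire when playback of a three-minute song reaches the two-minute mark, matching the repositioning scenario described above.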
In another embodiment, the sound source provides an analog signal that is converted to a digital signal and tagged with a time stamp which can then be used by the components in the sound processing system 200.
A sound processing example will now be provided to demonstrate how sounds may be processed by the sound processing system 200 using an exemplary script to achieve desired spatial effects.
For the following discussion, it will be assumed that the music track is approximately three minutes in duration and begins at time 0:00 and ends at time 3:00. The exemplary script 800 will be assumed to be the script embedded on the CD with the music track. Thus, when the CD player is activated and playback of the CD begins, the music track and the script file are output to the sound processing system. Therefore, the music data is input to the sound unmixer and the script data is input to the instruction generator. The music track contains sounds representative of a singer's voice, a piano and a guitar. As playback begins, it is assumed that the perceived spatial positions, relative to the listener 110, of the voice 908, piano 910 and guitar 912 are as shown in the accompanying figure.
Referring now to the exemplary script 800, the first instruction 801 commands the sound processing system to create a first voice that, like the two instructions that follow, maintains a portion of the original sound field.
The second instruction 802 commands the sound processing system to execute a create voice command (893), to create voice ID 3 (894) using the center unmixing technique (895). The center unmixing technique uses coefficients 0, 1, and 2 (896). The voice becomes active 0.1 seconds (897) after the time stamp 0:00. This instruction maintains the position of sound components located at the center as provided by the original source.
The third instruction 803 commands the sound processing system to execute a create voice command (870), to create voice ID 4 (872) using the center unmixing technique (874). The center unmixing technique uses coefficients 0, 1, and 2 (876). The command begins at time stamp 0:00 (878) and produces a perceived voice at an angle of 0 degrees (880) at a radius of 1 meter (882). The voice becomes active 0.1 seconds (884) after the time stamp 0:00. This instruction maintains the position of sound components located at the left side as provided by the original source.
Therefore, at the end of the first three instructions 801, 802 and 803, the sound processing system essentially produces sound components having spatial positions corresponding to the spatial positions initially provided by the sound source.
The subsequent instructions of the exemplary script 800, described with reference to the remaining figures, then dynamically reposition the perceived spatial positions of the voice, the piano and the guitar at scripted times as playback of the music track proceeds.
Therefore, the above example demonstrates that by providing script instructions to the sound processing system 200 included in the present invention, the perceived spatial position of sounds can be manipulated in a variety of ways given a particular speaker arrangement.
The present invention provides a method and apparatus for processing sound signals to produce enhanced sound signals. It will be apparent to those with skill in the art that modifications to the above methods and embodiments can occur without deviating from the scope of the present invention. Accordingly, the disclosure and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
This application is a continuation in part of U.S. application Ser. No. 09/405,941 filed Sep. 27, 1999, now U.S. Pat. No. 6,405,163 entitled PROCESS FOR REMOVING VOICE FROM STEREO RECORDINGS. This application also claims priority from U.S. Provisional Patent Application 60/165,058 filed Nov. 12, 1999, entitled DYNAMIC REPROCESSING FOR ENHANCED AUDIO AND MUSIC, the disclosure of which is incorporated in its entirety herein for all purposes. This application also claims the benefit of PCT Patent Application No. PCT/US00/26601, which claims priority from the above mentioned applications.