The present invention relates in general to audio systems and, more particularly, to an audio system and method of using adaptive intelligence to distinguish the dynamic content of an audio signal generated by a consumer audio source and to control a signal processing function associated with the audio signal.
Audio sound systems are commonly used to amplify signals and reproduce audible sound. A sound generation source, such as a cellular telephone, mobile sound system, multi-media player, home entertainment system, internet streaming, computer, notebook, video gaming, or other electronic device, generates an electrical audio signal. The audio signal is routed to an audio amplifier, which controls the magnitude and performs other signal processing on the audio signal. The audio amplifier can perform filtering, modulation, distortion enhancement or reduction, sound effects, and other signal processing functions to enhance the tonal quality and frequency properties of the audio signal. The amplified audio signal is sent to a speaker to convert the electrical signal to audible sound and reproduce the sound generation source with enhancements introduced by the signal processing function.
In one example, the sound generation source may be a mobile sound system. The mobile sound system receives wireless audio signals from a transmitter or satellite, or recorded sound signals from compact disk (CD), memory drive, audio tape, or internal memory of the mobile sound system. The audio signals are routed to an audio amplifier. The audio amplifier provides features such as amplification, filtering, tone equalization, and sound effects. The user adjusts the knobs on the front panel of the audio amplifier to dial-in the desired volume, acoustics, and sound effects. The output of the audio amplifier is connected to a speaker to generate the audible sounds. In some cases, the audio amplifier and speaker are separate units. In other systems, the units are integrated into one chassis.
In audio reproduction, it is common to use a variety of signal processing techniques depending on the content of the audio signal to achieve better sound quality and otherwise enhance the listener's enjoyment and appreciation of the audio content. For example, the listener can adjust the audio amplifier settings and sound effects for different music styles. The audio amplifier can use different compressors and equalization settings to enhance sound quality, e.g., to optimize the reproduction of classical, pop, or rock music.
Audio amplifiers and other signal processing equipment are typically controlled with front panel switches and control knobs. To accommodate the processing requirements for different audio content, the user listens and manually selects the desired functions, such as amplification, filtering, tone equalization, and sound effects, by setting the switch positions and turning the control knobs. When the audio content changes, the user must manually make adjustments to the audio amplifier or other signal processing equipment to maintain an optimal sound reproduction of the audio signal. In some digital or analog audio sound systems, the user can configure and save preferred settings as presets and then later manually select the saved settings or factory presets for the system.
In most if not all cases, there is an inherent delay between changes in the audio content from the sound generation source and optimal reproduction of the sound due to the time required for the user to make manual adjustments to the audio amplifier or other signal processing equipment. If the audio content changes from one composition to another, or even during playback of a single composition, and the user wants to change the signal processing function, e.g., increase volume or add more bass, then the user must manually change the audio amplifier settings. Frequent manual adjustments to the audio amplifier are typically required to maintain optimal sound reproduction over the course of multiple musical compositions or even within a single composition. Most users quickly tire of constantly making manual adjustments to the audio amplifier settings in an attempt to keep up with the changing audio content. The audio amplifier is rarely optimized to the audio content, either because the user gives up making manual adjustments or because the user cannot make adjustments quickly enough to track the changing audio content.
A need exists to dynamically control an audio amplifier or other signal processing equipment in realtime. Accordingly, in one embodiment, the present invention is a consumer audio system comprising a signal processor coupled for receiving an audio signal from a consumer audio source. The dynamic content of the audio signal controls operation of the signal processor.
In another embodiment, the present invention is a method of controlling a consumer audio system comprising the steps of providing a signal processor adapted for receiving an audio signal from a consumer audio source, and controlling operation of the signal processor using dynamic content of the audio signal.
In another embodiment, the present invention is a consumer audio system comprising a signal processor coupled for receiving an audio signal from a consumer audio source. A time domain processor is coupled for receiving the audio signal and generating time domain parameters of the audio signal. A frequency domain processor is coupled for receiving the audio signal and generating frequency domain parameters of the audio signal. A signature database includes a plurality of signature records each having time domain parameters and frequency domain parameters and control parameters. A recognition detector matches the time domain parameters and frequency domain parameters of the audio signal to a signature record of the signature database. The control parameters of the matching signature record control operation of the signal processor.
In another embodiment, the present invention is a method of controlling a consumer audio system comprising the steps of providing a signal processor adapted for receiving an audio signal from a consumer audio source, generating time domain parameters of the audio signal, generating frequency domain parameters of the audio signal, providing a signature database including a plurality of signature records each having time domain parameters and frequency domain parameters and control parameters, matching the time domain parameters and frequency domain parameters of the audio signal to a signature record of the signature database, and controlling operation of the signal processor based on the control parameters of the matching signature record.
FIGS. 4a-4b illustrate musical instruments and vocals connected to a recording device;
FIGS. 5a-5b illustrate waveform plots of the audio signal;
FIGS. 8a-8b illustrate time sequence frames of the sampled audio signal;
The present invention is described in one or more embodiments in the following description with reference to the figures, in which like numerals represent the same or similar elements. While the invention is described in terms of the best mode for achieving the invention's objectives, it will be appreciated by those skilled in the art that it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and their equivalents as supported by the following disclosure and drawings.
Referring to
For a given sound source, the user can use front control panel 30 to manually select between a variety of signal processing functions, such as amplification, filtering, equalization, sound effects, and user-defined modules that enhance the signal properties of the audio signal. Front control panel 30 can be fully programmable, menu driven, and use software to configure and control the sound reproduction features with visual display 26 and control knobs, switches, and rotary dials 28. The combination of visual display 26 and control knobs, switches, and dials 28 located on front control panel 30 provide control for the user interface over the different operational modes, access to menus for selecting and editing functions, and configuration of automobile sound system 20. The audio signals are routed to an audio amplifier within automobile sound system 20. The signal conditioned audio signal is routed to one or more speakers 46 mounted within automobile 24. The power amplification increases or decreases the power level and signal strength of the audio signal to drive the speaker and reproduce the sound content with the enhancements introduced into the audio signal by the audio amplifier.
In audio reproduction, it is common to use a variety of signal processing techniques depending on the content of the audio source, e.g., performance or playing style, to achieve better sound quality and otherwise enhance the listener's enjoyment and appreciation of the audio content. For example, the audio amplifier can use different compressors and equalization settings to enhance sound quality, e.g., to optimize the reproduction of classical or rock music.
Automobile sound system 20 receives audio signals from audio sound source 12, e.g., antenna 32, CD 34, memory drive 36, audio tape 38, or internal memory. The audio signal can originate from a variety of audio sources, such as musical instruments or vocals which are recorded and transmitted to automobile sound system 20, or digitally recorded on CD 34, memory drive 36, or audio tape 38 and inserted into slots 40, 42, and 44 of automobile sound system 20 for playback. The digitally recorded audio signal can be stored in internal memory of automobile sound system 20. The instrument can be an electric guitar, bass guitar, violin, horn, brass, drums, wind instrument, piano, electric keyboard, or percussion. The audio signal can originate from an audio microphone handled by a male or female with voice ranges including soprano, mezzo-soprano, contralto, tenor, baritone, and bass. In many cases, the audio sound signal contains sound content associated with a combination of instruments, e.g., guitar, drums, piano, and voice, mixed together according to the melody and lyrics of the composition. Many compositions contain multiple instruments and multiple vocal components.
In one example, the audio signal contains in part sound originally created by electric bass guitar 50, as shown in
The artist can use a variety of playing styles when playing bass guitar 50. For example, the artist can place his or her hand near the neck pickup or bridge pickup and excite strings 52 with a finger pluck, known as “fingering style”, for modern pop, rhythm and blues, and avant-garde styles. The artist can slap strings 52 with the fingers or palm, known as “slap style”, for modern jazz, funk, rhythm and blues, and rock styles. The artist can excite strings 52 with the thumb, known as “thumb style”, for Motown rhythm and blues. The artist can tap strings 52 with two hands, each hand fretting notes, known as “tapping style”, for avant-garde and modern jazz styles. In other playing styles, artists are known to use fingering accessories such as a pick or stick. In each case, strings 52 vibrate with a particular amplitude and frequency and generate a unique audio signal in accordance with the string vibration phases, such as shown in
The audio signal from bass guitar 50 is routed through audio cable 56 to recording device 58. Recording device 58 stores the audio signal in digital or analog format on CD 34, memory drive 36, or audio tape 38 for playback on automobile sound system 20. Alternatively, the audio signal is stored on recording device 58 for transmission to automobile sound system 20 via antenna 32. The audio signal generated by guitar 50 and stored in recording device 58 is shown by way of example. In many cases, the audio signal contains sound content associated with a combination of instruments, e.g., guitar 60, drums 62, piano 64, and voice 66, mixed together according to the melody and lyrics of the composition, e.g., by a band or orchestra, as shown in
Returning to
The pre-filter block 72, pre-effects block 74, non-linear effects block 76, user-defined modules 78, post-effects block 80, post-filter block 82, and power amplification block 84 within audio amplifier 70 are selectable and controllable with front control panel 30 in
A feature of audio amplifier 70 is the ability to control the signal processing function in accordance with the dynamic content of the audio signal. Audio amplifier 70 employs a dynamic adaptive intelligence feature involving frequency domain analysis and time domain analysis of the audio signal on a frame-by-frame basis to automatically and adaptively control operation of the signal processing functions and settings within the audio amplifier to achieve an optimal sound reproduction. The dynamic adaptive intelligence feature of audio amplifier 70 detects and isolates the frequency domain characteristics and time domain characteristics of the audio signal on a frame-by-frame basis and uses that information to control operation of the signal processing function of the amplifier.
The output of block 90 is routed to frame signature block 92 where the incoming sub-frames of the audio signal are compared to a database of established or learned frame signatures to determine a best match or closest correlation of the incoming sub-frame to the database of frame signatures. The frame signatures from the database contain control parameters to configure the signal processing components of audio amplifier 70.
The output of block 92 is routed to adaptive intelligence control block 94 where the best matching frame signature controls audio amplifier 70 in realtime to continuously and automatically make adjustments to the signal processing functions for an optimal sound reproduction. For example, based on the frame signature, the amplification of the audio signal can be increased or decreased automatically for that particular sub-frame of the audio signal. Presets and sound effects can be engaged or removed automatically for the note being played. The next sub-frame in sequence may be associated with the same note and matches with the same frame signature in the database, or the next sub-frame in sequence may be associated with a different note and matches with a different corresponding frame signature in the database. Each sub-frame of the audio signal is recognized and matched to a frame signature that in turn controls operation of the signal processing function within audio amplifier 70 for optimal sound reproduction. The signal processing function of audio amplifier 70 is adjusted in accordance with the best matching frame signature corresponding to each individual incoming sub-frame of the audio signal to enhance its reproduction.
The adaptive intelligence feature of audio amplifier 70 can learn attributes of each note of the audio signal and make adjustments based on user feedback. For example, if the user desires more or less amplification or equalization, or insertion of a particular sound effect for a given note, then audio amplifier 70 builds those user preferences into the control parameters of the signal processing function to achieve the optimal sound reproduction. The database of frame signatures with correlated control parameters makes realtime adjustments to the signal processing function. The user can define audio modules, effects, and settings which are integrated into the database of audio amplifier 70. With adaptive intelligence, audio amplifier 70 can detect and automatically apply tone modules and settings to the audio signal based on the present frame signature. Audio amplifier 70 can interpolate between similar matching frame signatures as necessary to select the best choice for the instant signal processing function.
The sampled audio signal 112 is routed to source separation blocks 98-104 to isolate sound components associated with specific types of sound sources. The source separation blocks 98-104 separate the sampled audio signal 112 into sub-frames n,s, where n is the frame number and s is the separated sub-frame number. Assume the sampled audio signal includes sound components associated with a variety of instruments and vocals. For example, audio sound block 71 provides an audio signal containing sound components from guitar 60, drums 62, piano 64, and vocals 66, see
In another embodiment, source separation block 98 identifies sound content within a particular frequency band 1, e.g., 100-500 Hz, and separates the sampled audio signal 112 according to frequency content within frequency band 1. The sound content of the sampled audio signal 112 can be isolated and identified by analyzing its amplitude and frequency content, e.g., with a bandpass filter. The output of source separation block 98 is separated sub-frame n,1 containing the isolated frequency content within frequency band 1. In a similar manner, source separation block 100 identifies frequency characteristics associated with frequency band 2, e.g., 500-1000 Hz, and separates the sampled audio signal 112 according to frequency content within frequency band 2. The output of source separation block 100 is separated sub-frame n,2 containing the isolated frequency content within frequency band 2. Source separation block 102 identifies frequency characteristics associated with frequency band 3, e.g., 1000-1500 Hz, and separates the sampled audio signal 112 according to frequency content within frequency band 3. The output of source separation block 102 is separated sub-frame n,3 containing the isolated frequency content within frequency band 3. Source separation block 104 identifies frequency characteristics associated with frequency band 4, e.g., 1500-2000 Hz, and separates the sampled audio signal 112 according to frequency content within frequency band 4. The output of source separation block 104 is separated sub-frame n,4 containing the isolated frequency content within frequency band 4.
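By way of illustration, this band-splitting form of source separation blocks 98-104 can be sketched as follows; the sample rate, filter order, and use of scipy bandpass filters are assumptions of the sketch, not requirements of the embodiment.

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 44100  # assumed sample rate in Hz
    BANDS = [(100, 500), (500, 1000), (1000, 1500), (1500, 2000)]  # frequency bands 1-4

    def separate_bands(frame, fs=FS, bands=BANDS):
        # Split one frame of sampled audio signal 112 into separated
        # sub-frames n,1 through n,4, each holding only the content of
        # its frequency band.
        sub_frames = []
        for lo, hi in bands:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            sub_frames.append(sosfilt(sos, frame))
        return sub_frames  # sub_frames[s-1] is separated sub-frame n,s

    frame = np.random.randn(2048)  # one frame of the sampled audio signal
    subs = separate_bands(frame)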
The time domain analysis block 108 of
Summer 140 accumulates the difference in energy levels E(m,n) of each frequency band 1-m of separated sub-frame n−1,s and separated sub-frame n,s. The onset of a note will occur when the total of the differences in energy levels E(m,n) across the entire monitored frequency bands 1-m for the separated sub-frames n,s exceeds a predetermined threshold value. Comparator 142 compares the output of summer 140 to a threshold value 144. If the output of summer 140 is greater than threshold value 144, then the accumulation of differences in the energy levels E(m,n) over the entire frequency spectrum for the separated sub-frames n,s exceeds the threshold value 144 and the onset of a note is detected in the instant separated sub-frame n,s. If the output of summer 140 is less than threshold value 144, then no onset of a note is detected.
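A minimal sketch of summer 140 and comparator 142, assuming the band energy levels of the previous and current separated sub-frames are already available; the threshold value is a placeholder.

    import numpy as np

    def onset_detected(E_prev, E_curr, threshold=1.0):
        # E_prev[m-1] and E_curr[m-1] hold E(m,n-1) and E(m,n) for bands 1-m.
        # Summer 140 accumulates the band-by-band differences; comparator 142
        # tests the total against threshold value 144.
        total = np.sum(E_curr - E_prev)
        return total > threshold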
At the conclusion of each separated sub-frame n,s, attack detector 132 will have identified whether the instant separated sub-frame contains the onset of a note, or whether the instant separated sub-frame contains no onset of a note. For example, based on the summation of differences in energy levels E(m,n) of the separated sub-frames n,s over the entire spectrum of frequency bands 1-m exceeding threshold value 144, attack detector 132 may have identified separated sub-frame 1,s of
At the conclusion of each frame, attack detector 132 will have identified whether the instant separated sub-frame contains the onset of a note, or whether the instant separated sub-frame contains no onset of a note. For example, based on the summation of energy levels E(m,n) of the separated sub-frames n,s within frequency bands 1-m exceeding threshold value 164, attack detector 132 may have identified separated sub-frame 1,s of
Equation (1) provides another illustration of onset detection of a note.
g(m,n)=max(0,[E(m,n)/E(m,n−1)]−1) (1)
where: E(m,n) is the energy level of frequency band m in separated sub-frame n,s, and E(m,n−1) is the energy level of frequency band m in the preceding separated sub-frame n−1,s
The function g(m,n) has a value for each frequency band 1-m and each separated sub-frame n,s. If the ratio of E(m,n)/E(m,n−1), i.e., the energy level of band m in separated sub-frame n,s to the energy level of band m in separated sub-frame n−1,s, is less than one, then [E(m,n)/E(m,n−1)]−1 is negative. The energy level of band m in separated sub-frame n,s is not greater than the energy level of band m in separated sub-frame n−1,s. The function g(m,n) is zero indicating no initiation of the attack phase and therefore no detection of the onset of a note. If the ratio of E(m,n)/E(m,n−1), i.e., the energy level of band m in separated sub-frame n,s to the energy level of band m in separated sub-frame n−1,s, is greater than one (say value of two), then [E(m,n)/E(m,n−1)]−1 is positive, i.e., value of one. The energy level of band m in separated sub-frame n,s is greater than the energy level of band m in separated sub-frame n−1,s. The function g(m,n) is the positive value of [E(m,n)/E(m,n−1)]−1 indicating initiation of the attack phase and a possible detection of the onset of a note.
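Equation (1) translates directly to code; the small epsilon guarding the division is an added safety not present in the equation itself.

    import numpy as np

    def g(E_curr, E_prev, eps=1e-12):
        # g(m,n) = max(0, E(m,n)/E(m,n-1) - 1) for each frequency band m
        return np.maximum(0.0, E_curr / (E_prev + eps) - 1.0)

    # example: the energy of a band doubles, so g = 1, a possible attack phase
    print(g(np.array([2.0]), np.array([1.0])))  # [1.]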
Returning to
Repeat gate 168 monitors the number of onset detections occurring within a time period. If multiple onsets of a note are detected within a repeat detection time period, e.g., 50 milliseconds (ms), then only the first onset detection is recorded. That is, any subsequent onset of a note that is detected, after the first onset detection, within the repeat detection time period is rejected.
Noise gate 170 monitors the energy levels E(m,n) of the separated sub-frame n,s about the onset detection of a note. If the energy levels E(m,n) of the separated sub-frame n,s about the onset detection of a note are generally in the low noise range, e.g., the energy levels E(m,n) are −90 dB, then the onset detection is considered suspect and rejected as unreliable. A valid onset detection of a note for the instant separated sub-frame n,s is stored in runtime matrix 174.
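The two gates can be sketched together as a filter over candidate onsets; the onset timestamps, dB energies, and exact comparison logic are assumptions of the sketch.

    def gate_onsets(onsets_ms, energies_db, repeat_window_ms=50.0, noise_floor_db=-90.0):
        # Repeat gate 168: keep only the first onset within each 50 ms window.
        # Noise gate 170: reject onsets whose energy sits in the low noise range.
        valid = []
        last_kept = None
        for t, e in zip(onsets_ms, energies_db):
            if last_kept is not None and t - last_kept < repeat_window_ms:
                continue  # subsequent onset inside the repeat detection time period
            if e <= noise_floor_db:
                continue  # suspect onset detection, rejected as unreliable
            valid.append(t)
            last_kept = t
        return valid  # valid onset detections, stored in runtime matrix 174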
The time domain analysis block 108 of
Loudness detector block 176 uses the energy function E(m,n) to determine the power spectrum of the separated sub-frames n,s. The power spectrum can be an average or root mean square (RMS) of the energy function E(m,n) of the separated sub-frames n,s. The loudness is a time domain parameter or characteristic of each separated sub-frame n,s for all frequency bands 1-m and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
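An RMS form of the loudness measure might look as follows; treating the frame's band energies as the samples of the energy function is an assumption of the sketch.

    import numpy as np

    def loudness(E):
        # RMS of the energy function E(m,n) across frequency bands 1-m
        # for one separated sub-frame n,s
        return float(np.sqrt(np.mean(np.square(E))))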
Note temporal block 178 determines the time periods of the attack phase, sustain phase, decay phase, and release phase of the separated sub-frames n,s. The note temporal is a time domain parameter or characteristic of each separated sub-frame n,s for all frequency bands 1-m and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
The frequency domain analysis block 106 includes block 180, which performs a time domain to frequency domain conversion of the separated sub-frames 116 on a frame-by-frame basis.
In another embodiment, block 180 performs a time domain to frequency domain conversion of the separated sub-frames 116 using an autoregressive function on a frame-by-frame basis.
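As a concrete sketch of block 180, assuming the conversion is an FFT (the autoregressive function above being the stated alternative); the window choice and bin-energy definition are illustrative.

    import numpy as np

    def to_frequency_bins(sub_frame):
        # Convert separated sub-frame n,s to frequency bins 1-m and return
        # the energy level E(m,n) of each bin.
        windowed = sub_frame * np.hanning(len(sub_frame))
        spectrum = np.fft.rfft(windowed)
        return np.abs(spectrum) ** 2  # per-bin energy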
The frequency domain analysis block 106 of
The energy levels E(m,n) of one separated sub-frame n−1,s are stored in block 196 of attack detector 194, as shown in
Summer 202 accumulates the difference in energy levels E(m,n) of each frequency bin 1-m of separated sub-frame n−1,s and separated sub-frame n,s. The onset of a note will occur when the total of the differences in energy levels E(m,n) across the entire monitored frequency bins 1-m for the separated sub-frames n,s exceeds a predetermined threshold value. Comparator 204 compares the output of summer 202 to a threshold value 206. If the output of summer 202 is greater than threshold value 206, then the accumulation of differences in energy levels E(m,n) over the entire frequency spectrum for the separated sub-frames n,s exceeds the threshold value 206 and the onset of a note is detected in the instant separated sub-frame n,s. If the output of summer 202 is less than threshold value 206, then no onset of a note is detected.
At the conclusion of each sub-frame, attack detector 194 will have identified whether the instant separated sub-frame n,s contains the onset of a note, or whether the instant separated sub-frame n,s contains no onset of a note. For example, based on the summation of differences in energy levels E(m,n) of the separated sub-frame n,s over the entire spectrum of frequency bins 1-m exceeding threshold value 206, attack detector 194 may have identified sub-frame 1,s of
At the conclusion of each separated sub-frame n,s, attack detector 194 will have identified whether the instant separated sub-frame n,s contains the onset of a note, or whether the instant separated sub-frame n,s contains no onset of a note. For example, based on the summation of energy levels E(m,n) of the separated sub-frames n,s within frequency bins 1-m exceeding threshold value 212, attack detector 194 may have identified sub-frame 1,s of
Equation (1) provides another illustration of the onset detection of a note. The function g(m,n) has a value for each frequency bin 1-m and each separated sub-frame n,s. If the ratio of E(m,n)/E(m,n−1), i.e., the energy level of bin m in separated sub-frame n,s to the energy level of bin m in separated sub-frame n−1,s, is less than one, then [E(m,n)/E(m,n−1)]−1 is negative. The energy level of bin m in separated sub-frame n,s is not greater than the energy level of bin m in separated sub-frame n−1,s. The function g(m,n) is zero indicating no initiation of the attack phase and therefore no detection of the onset of a note. If the ratio of E(m,n)/E(m,n−1), i.e., the energy level of bin m in separated sub-frame n,s to the energy level of bin m in separated sub-frame n−1,s, is greater than one (say value of two), then [E(m,n)/E(m,n−1)]−1 is positive, i.e., value of one. The energy level of bin m in separated sub-frame n,s is greater than the energy level of bin m in separated sub-frame n−1,s. The function g(m,n) is the positive value of [E(m,n)/E(m,n−1)]−1 indicating initiation of the attack phase and a possible detection of the onset of a note.
Returning to
Repeat gate 216 monitors the number of onset detections occurring within a time period. If multiple onsets of a note are detected within the repeat detection time period, e.g., 50 ms, then only the first onset detection is recorded. That is, any subsequent onset of a note that is detected, after the first onset detection, within the repeat detection time period is rejected.
Noise gate 218 monitors the energy levels E(m,n) of the separated sub-frames n,s about the onset detection of a note. If the energy levels E(m,n) of the separated sub-frames n,s about the onset detection of a note are generally in the low noise range, e.g., the energy levels E(m,n) are −90 dB, then the onset detection is considered suspect and rejected as unreliable. A valid onset detection of a note for the instant separated sub-frame n,s is stored in runtime matrix 174.
Returning to
Note spectral block 222 determines the fundamental frequency and 2nd-nth harmonics of the frequency domain separated sub-frames n,s to analyze the tristimulus of the audio signal. The first tristimulus (tr1) measures the power spectrum of the fundamental frequency. The second tristimulus (tr2) measures an average power spectrum of the 2nd harmonic, 3rd harmonic, and 4th harmonic of the frequency domain separated sub-frames n,s. The third tristimulus (tr3) measures an average power spectrum of the 5th harmonic through the nth harmonic of the frequency domain separated sub-frames n,s. The note spectral is a frequency domain parameter or characteristic of each separated sub-frame n,s and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
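A sketch of the three tristimulus values, assuming the per-harmonic power spectrum has already been measured and at least five harmonics are present.

    import numpy as np

    def tristimulus(harm_power):
        # harm_power[0] is the power of the fundamental frequency;
        # harm_power[k] is the power of the (k+1)th harmonic.
        tr1 = float(harm_power[0])             # fundamental frequency
        tr2 = float(np.mean(harm_power[1:4]))  # 2nd, 3rd, and 4th harmonics
        tr3 = float(np.mean(harm_power[4:]))   # 5th through nth harmonics
        return tr1, tr2, tr3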
Note partial block 224 determines the brightness (amplitude) of the frequency domain separated sub-frames n,s. Brightness B can be determined by equation (3). The note partial is a frequency domain parameter or characteristic of each separated sub-frame n,s and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
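Equation (3) is not reproduced in this text; as a stand-in, one common brightness measure weights each harmonic amplitude by its harmonic number, and that assumed formulation is sketched here.

    import numpy as np

    def brightness(a):
        # a[k] is the amplitude of harmonic k+1; the result rises as the
        # higher harmonics carry more of the amplitude (assumed formulation).
        n = np.arange(1, len(a) + 1)
        return float(np.sum(n * a) / np.sum(a))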
Note inharmonicity block 226 determines the fundamental frequency and 2nd-nth harmonics of the frequency domain separated sub-frames n,s. Ideally, the 2nd-nth harmonics are integer multiples of the fundamental frequency. Some musical instruments can be distinguished and identified by determining whether the integer multiple relationship holds between the fundamental frequency and 2nd-nth harmonics. If the 2nd-nth harmonics are not integer multiples of the fundamental frequency, then the degree of separation from the integer multiple relationship is indicative of the type of instrument. For example, the 2nd harmonic of piano 64 is typically not an integer multiple of the fundamental frequency. The note inharmonicity is a frequency domain parameter or characteristic of each separated sub-frame n,s and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
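The inharmonicity test might be sketched as a mean relative deviation from exact integer multiples; the specific deviation measure is an illustrative choice.

    import numpy as np

    def inharmonicity(f0, harmonic_freqs):
        # harmonic_freqs holds the measured 2nd-nth harmonics. Zero means the
        # integer multiple relationship holds exactly; a larger value, as with
        # the 2nd harmonic of piano 64, hints at the type of instrument.
        k = np.arange(2, len(harmonic_freqs) + 2)  # harmonic numbers 2..n
        ideal = k * f0
        return float(np.mean(np.abs(harmonic_freqs - ideal) / ideal))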
Attack frequency block 228 determines the frequency content of the attack phase of the separated sub-frames n,s. In particular, the brightness (amplitude) of the higher frequency components is measured and recorded. The attack frequency is a frequency domain parameter or characteristic of each separated sub-frame n,s and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
Harmonic derivative block 230 determines the harmonic derivatives of the 2nd-nth harmonics of the frequency domain separated sub-frame n,s in order to measure the rate of change of the frequency components. The harmonic derivative is a frequency domain parameter or characteristic of each separated sub-frame n,s and is stored as a value in runtime matrix 174 on a frame-by-frame basis.
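Taken per frame, the harmonic derivative reduces to a frame-to-frame difference of the harmonic amplitudes; reading the derivative that way is an assumption of the sketch.

    import numpy as np

    def harmonic_derivative(h_prev, h_curr):
        # Rate of change of the 2nd-nth harmonic amplitudes between
        # separated sub-frames n-1,s and n,s
        return np.asarray(h_curr) - np.asarray(h_prev)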
Runtime matrix 174 contains the frequency domain parameters determined in frequency domain analysis block 106 and the time domain parameters determined in time domain analysis block 108. Each time domain parameter and frequency domain parameter 1-j has a numeric parameter value PVn,j stored in runtime matrix 174 on a frame-by-frame basis, where n is the frame along the time sequence 112 and j is the parameter. For example, the beat detector parameter 1 has value PV1,1 in sub-frame 1,s, value PV2,1 in sub-frame 2,s, and value PVn,1 in sub-frame n,s; pitch detector parameter 2 has value PV1,2 in sub-frame 1,s, value PV2,2 in sub-frame 2,s, and value PVn,2 in sub-frame n,s; loudness factor parameter 3 has value PV1,3 in sub-frame 1,s, value PV2,3 in sub-frame 2,s, and value PVn,3 in sub-frame n,s; and so on. Table 1 shows runtime matrix 174 with the time domain and frequency domain parameter values PVn,j generated during the runtime analysis. The time domain and frequency domain parameter values PVn,j are characteristic of specific sub-frames and therefore useful in distinguishing between the sub-frames.
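The runtime matrix can be pictured as a simple table indexed by sub-frame and parameter; the dimensions below are placeholders.

    import numpy as np

    NUM_FRAMES = 100  # frames n along time sequence 112
    NUM_PARAMS = 8    # time domain and frequency domain parameters 1-j

    # runtime_matrix[n-1, j-1] holds parameter value PVn,j on a frame-by-frame basis
    runtime_matrix = np.zeros((NUM_FRAMES, NUM_PARAMS))
    runtime_matrix[0, 0] = 68.0  # e.g., beat detector parameter 1 of sub-frame 1,s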
Table 2 shows one separated sub-frame n,s of runtime matrix 174 with the time domain and frequency domain parameters generated by frequency domain analysis block 106 and time domain analysis block 108 assigned sample numeric values for an audio signal originating from a classical style. Runtime matrix 174 contains time domain and frequency domain parameter values PVn,j for other sub-frames of the audio signal originating from the classical style, as per Table 1.
Table 3 shows one separated sub-frame n,s of runtime matrix 174 with the time domain and frequency domain parameters generated by frequency domain analysis block 106 and time domain analysis block 108 assigned sample numeric values for an audio signal originating from a rock style. Runtime matrix 174 contains time domain and frequency domain parameter values PVn,j for other sub-frames of the audio signal originating from the rock style, as per Table 1.
Returning to
The time domain parameters and frequency domain parameters in frame signature database 92 contain values preset by the manufacturer, or entered by the user, or learned over time from one or more instruments and one or more vocals. The factory or manufacturer of audio amplifier 70 can initially preset the values of time domain and frequency domain parameters 1-j, as well as weighting factors 1-j and control parameters 1-k. The user can change time domain and frequency domain parameters 1-j, weighting factors 1-j, and control parameters 1-k for each frame signature 1-i in database 92 directly using computer 236 with user interface screen or display 238, see
In another embodiment, time domain and frequency domain parameters 1-j, weighting factors 1-j, and control parameters 1-k can be learned by the artist playing guitar 60, drums 62, or piano 64, or singing into microphone 66. The artist sets audio amplifier 70 to a learn mode. The artist repetitively plays the instruments or sings into the microphone. The frequency domain analysis 106 and time domain analysis 108 of
The artist can make manual adjustments to audio amplifier 70 via front control panel 30. Audio amplifier 70 learns control parameters 1-k associated with the separated sub-frame n,s by the settings of the signal processing blocks 72-84 as manually set by the artist. When learn mode is complete, the frame signature records in database 92 are defined with the frame signature parameters being an average of the frequency domain parameters and time domain parameters 1-j accumulated in database 92, and an average of the control parameters 1-k taken from the manual adjustments of the signal processing blocks 72-84 of audio amplifier 70 in database 92. In one embodiment, the average is a root mean square of the series of accumulated frequency domain and time domain parameters 1-j and accumulated control parameters 1-k in database 92.
Weighting factors 1-j can be learned by monitoring the learned time domain and frequency domain parameters 1-j and increasing or decreasing the weighting factors based on the closeness or statistical correlation of the comparison. If a particular parameter exhibits a consistent statistical correlation, then the weighting factor for that parameter can be increased. If a particular parameter exhibits a diverse statistical correlation, then the weighting factor for that parameter can be decreased.
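One way to realize this rule is a bounded additive update; the correlation target, gain, and bounds are assumptions of the sketch.

    def update_weight(weight, correlation, target=0.9, gain=0.05, w_min=0.0, w_max=1.0):
        # Increase the weighting factor when the learned parameter shows a
        # consistent statistical correlation; decrease it when the
        # correlation is diverse.
        weight = weight + gain if correlation >= target else weight - gain
        return min(w_max, max(w_min, weight))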
Once the time domain and frequency domain parameters 1-j, weighting factors 1-j, and control parameters 1-k of frame signatures 1-i are established for database 92, the parameters 1-j in runtime matrix 174 can be compared on a frame-by-frame basis to each frame signature 1-i to find a best match or closest correlation. In normal play mode, the artists sing lyrics and play instruments to generate an audio signal having a time sequence of frames. For each frame, runtime matrix 174 is populated with time domain parameters and frequency domain parameters determined from a time domain analysis and frequency domain analysis of the audio signal, as described in
The time domain and frequency domain parameters 1-j for each separated sub-frame n,s in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the differences are recorded.
Next, for each parameter of separated sub-frame 1,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 2 and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 and the parameters 1-j of frame signature 2 are summed to determine a total difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2.
The time domain parameters and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are compared to the time domain and frequency domain parameters 1-j in the remaining frame signatures 3-i in database 92, as described for frame signatures 1 and 2. The minimum total difference between the parameters 1-j of separated sub-frame 1,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation and the frame associated with separated sub-frame 1,1 of runtime matrix 174 is identified with the frame signature having the minimum total difference between corresponding parameters. In this case, the time domain and frequency domain parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 1.
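Compare block 242 and recognition memory 244 can be sketched for one separated sub-frame as follows; taking the magnitude of each difference is an assumption that keeps the minimum total difference well defined.

    import numpy as np

    def best_matching_signature(params, signatures):
        # Sum the parameter-by-parameter differences against each frame
        # signature 1-i and return the index of the minimum total
        # difference, i.e., the best match or closest correlation.
        totals = [np.sum(np.abs(np.asarray(params) - np.asarray(sig)))
                  for sig in signatures]
        return int(np.argmin(totals))

    # usage with hypothetical parameter vectors
    idx = best_matching_signature([0.5, 1.2, 3.0], [[0.4, 1.0, 2.9], [2.0, 0.1, 5.0]])
    print(idx)  # 0, so the control parameters of frame signature 1 are applied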
With time domain parameters and frequency domain parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 matched to frame signature 1, adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature 1 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The process is repeated for separated sub-frames 1,2 through 1,s. In one embodiment, the control parameters 1,k of sub-frames 1,1 through 1,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 1,1 through 1,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 1,1 through 1,s.
The time domain and frequency domain parameters 1-j for each separated sub-frame 2,1 through 2,s in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the differences are recorded. For each parameter 1-j of separated sub-frame 2,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature i and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 2,1 in runtime matrix 174 and the parameters 1-j of frame signature i are summed to determine a total difference value between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i. The minimum total difference between the parameters 1-j of separated sub-frame 2,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation and the frame associated with separated sub-frame 2,1 of runtime matrix 174 is identified with the frame signature having the minimum total difference between corresponding parameters. In this case, the time domain and frequency domain parameters 1-j of separated sub-frame 2,1 in runtime matrix 174 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 2. Adaptive intelligence control block 94 uses the control parameters 1-k associated with the matching frame signature 2 in database 92 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The process is repeated for separated sub-frames 2,2 through 2,s. In one embodiment, the control parameters 1,k of sub-frames 2,1 through 2,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 2,1 through 2,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 2,1 through 2,s. The process continues for each separated sub-frame n,s of runtime matrix 174.
In another embodiment, the time domain and frequency domain parameters 1-j for each separated sub-frame n,s in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the weighted differences are recorded. For each parameter of separated sub-frame 1,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 1 as determined by weight 1,j and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 and the parameters 1-j of frame signature 1 are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 1.
Next, for each parameter of separated sub-frame 1,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 2 by weight 2,j and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2 are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2.
The time domain parameters and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are compared to the time domain and frequency domain parameters 1-j in the remaining frame signatures 3-i in database 92, as described for frame signatures 1 and 2. The minimum total weighted difference between the parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation and the frame associated with separated sub-frame 1,1 of runtime matrix 174 is identified with the frame signature having the minimum total weighted difference between corresponding parameters. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature to control operation of the signal processing blocks 72-84 of audio amplifier 70.
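The weighted variant differs only in scaling each difference by weight i,j before summation, as sketched below under the same magnitude assumption.

    import numpy as np

    def best_weighted_match(params, signatures, weights):
        # weights[i-1][j-1] is weight i,j for parameter j of frame signature i
        totals = [np.sum(np.asarray(w) * np.abs(np.asarray(params) - np.asarray(sig)))
                  for sig, w in zip(signatures, weights)]
        return int(np.argmin(totals))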
The process is repeated for separated sub-frames 1,2 through 1,s. In one embodiment, the control parameters 1,k of sub-frames 1,1 through 1,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 1,1 through 1,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 1,1 through 1,s.
The time domain and frequency domain parameters 1-j for separated sub-frame 2,1 in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the weighted differences are recorded. For each parameter 1-j of separated sub-frame 2,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature i by weight i,j and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i. The minimum total weighted difference between the parameters 1-j of separated sub-frame 2,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation and the frame associated with separated sub-frame 2,1 of runtime matrix 174 is identified with the frame signature having the minimum total weighted difference between corresponding parameters. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The process is repeated for separated sub-frames 2,2 through 2,s. In one embodiment, the control parameters 1,k of sub-frames 2,1 through 2,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 2,1 through 2,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 2,1 through 2,s. The process continues for each separated sub-frame n,s of runtime matrix 174.
In an illustrative numeric example of the parameter comparison process to determine a best match or closest correlation between the time domain and frequency domain parameters 1-j for each frame in runtime matrix 174 and parameters 1-j for each frame signature 1-i, Table 4 shows time domain and frequency domain parameters 1-j with sample parameter values for frame signature 1 (classical style) of database 92. Table 5 shows time domain and frequency domain parameters 1-j with sample parameter values for frame signature 2 (rock style) of database 92.
The time domain and frequency domain parameters 1-j for separated sub-frames n,s in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the differences are recorded. For example, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 68 (see Table 2) and the beat detector parameter in frame signature 1 has a value of 60 (see Table 4). Compare block 242 determines the difference 68−60 and stores the difference between separated sub-frame 1,1 and frame signature 1 in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 428 (see Table 2) and the pitch detector parameter in frame signature 1 has a value of 440 (see Table 4). Compare block 242 determines the difference 428−440 and stores the difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 1 and stores the difference in recognition memory 244. The differences between the parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 and the parameters 1-j of frame signature 1 are summed to determine a total difference value between the parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 and the parameters 1-j of frame signature 1.
Next, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 68 (see Table 2) and the beat detector parameter in frame signature 2 has a value of 120 (see Table 5). Compare block 242 determines the difference 68−120 and stores the difference between separated sub-frame 1,1 and frame signature 2 in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 428 (see Table 2) and the pitch detector parameter in frame signature 2 has a value of 250 (see Table 5). Compare block 242 determines the difference 428−250 and stores the difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 2 and stores the difference in recognition memory 244. The differences between the parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 and the parameters 1-j of frame signature 2 are summed to determine a total difference value between the parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 and the parameters 1-j of frame signature 2.
The time domain and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are compared to the time domain and frequency domain parameters 1-j in the remaining frame signatures 3-i in database 92, as described for frame signatures 1 and 2. The minimum total difference between the parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. In this case, the time domain and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 1. Separated sub-frame 1,1 of runtime matrix 174 is identified as a frame of a classical style composition.
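Restricted to the two parameters shown, and taking magnitudes, the totals work out as follows.

    sig1 = abs(68 - 60) + abs(428 - 440)   # 8 + 12 = 20
    sig2 = abs(68 - 120) + abs(428 - 250)  # 52 + 178 = 230
    assert sig1 < sig2  # frame signature 1 (classical style) is the best match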
With time domain parameters and frequency domain parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 generated from the audio signal matched to frame signature 1, adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature 1 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 1,k of sub-frames 1,1 through 1,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 1,1 through 1,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 1,1 through 1,s.
Next, the time domain and frequency domain parameters 1-j for separated sub-frame 2,1 in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the differences are recorded. For each parameter 1-j of separated sub-frame 2,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature i and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i are summed to determine a total difference value between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i. The minimum total difference between the parameters 1-j of separated sub-frame 2,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. Separated sub-frame 2,1 of runtime matrix 174 is identified with the frame signature having the minimum total difference between corresponding parameters. In this case, the time domain and frequency domain parameters 1-j of separated sub-frame 2,1 in runtime matrix 174 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 1. Separated sub-frame 2,1 of runtime matrix 174 is identified as another frame of a classical style composition. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature 1 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 1,k of sub-frames 2,1 through 2,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 2,1 through 2,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 2,1 through 2,s. The process continues for each separated sub-frame n,s of runtime matrix 174.
In another numeric example, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 113 (see Table 3) and the beat detector parameter in frame signature 1 has a value of 60 (see Table 4). The difference 113−60 between separated sub-frame 1,1 and frame signature 1 is stored in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 267 (see Table 3) and the pitch detector parameter in frame signature 1 has a value of 440 (see Table 4). Compare block 242 determines the difference 267−440 and stores the difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 1 and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 and the parameters 1-j of frame signature 1 are summed to determine a total difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 1.
Next, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 113 (see Table 3) and the beat detector parameter in frame signature 2 has a value of 120 (see Table 5). Compare block 242 determines the difference 113−120 and stores the difference in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 267 (see Table 3) and the pitch detector parameter in frame signature 2 has a value of 250 (see Table 5). Compare block 242 determines the difference 267−250 and stores the difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 2 and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2 are summed to determine a total difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2.
The time domain and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are compared to the time domain and frequency domain parameters 1-j in the remaining frame signatures 3-i in database 92, as described for frame signatures 1 and 2. The minimum total difference between the parameters 1-j of separated sub-frame 1,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. Separated sub-frame 1,1 of runtime matrix 174 is identified with the frame signature having the minimum total difference between corresponding parameters. In this case, the time domain and frequency domain parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 2. Separated sub-frame 1,1 of runtime matrix 174 is identified as a frame of a rock style composition.
With time domain parameters and frequency domain parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 generated from the audio signal matched to frame signature 2, adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature 2 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 2,k of sub-frames 1,1 through 1,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 1,1 through 1,s occur within the same time period, the control parameters 2,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 1,1 through 1,s.
The time domain and frequency domain parameters 1-j for separated sub-frame 2,1 in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the differences are recorded. For each parameter 1-j of separated sub-frame 2,1, compare block 242 determines the difference between the parameter value in runtime matrix 174 and the parameter value in frame signature i and stores the difference in recognition memory 244. The differences between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i are summed to determine a total difference value between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i. The minimum total difference between the parameters 1-j of separated sub-frame 2,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. Separated sub-frame 2,1 of runtime matrix 174 is identified with the frame signature having the minimum total difference between corresponding parameters. In this case, the time domain and frequency domain parameters 1-j of separated sub-frame 2,1 in runtime matrix 174 are more closely aligned to the time domain and frequency domain parameters 1-j in frame signature 2. Separated sub-frame 2,1 of runtime matrix 174 is identified as another frame of a rock style composition. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature 2 to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 2,k of sub-frames 2,1 through 2,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 2,1 through 2,s occur within the same time period, the control parameters 2,k can be an average or other combination of the control parameters determined for each of the separated sub-frames 2,1 through 2,s. The process continues for each separated sub-frame n,s of runtime matrix 174.
In another embodiment, the time domain and frequency domain parameters 1-j for each separated sub-frame n,s in runtime matrix 174 and the parameters 1-j in each frame signature 1-i are compared on a one-by-one basis and the weighted differences are recorded. For example, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 68 (see Table 2) and the beat detector parameter in frame signature 1 has a value of 60 (see Table 4). Compare block 242 determines the weighted difference (68−60)*weight 1,1 and stores the weighted difference in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 428 (see Table 2) and the pitch detector parameter in frame signature 1 has a value of 440 (see Table 4). Compare block 242 determines the weighted difference (428−440)*weight 1,2 and stores the weighted difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 1 as determined by weight 1,j and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 1 are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 1.
Next, the beat detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 68 (see Table 2) and the beat detector parameter in frame signature 2 has a value of 120 (see Table 5). Compare block 242 determines the weighted difference (68−120)*weight 2,1 and stores the weighted difference in recognition memory 244. The pitch detector parameter of separated sub-frame 1,1 in runtime matrix 174 has a value of 428 (see Table 2) and the pitch detector parameter in frame signature 2 has a value of 250 (see Table 5). Compare block 242 determines the weighted difference (428−250)*weight 2,2 and stores the weighted difference in recognition memory 244. For each parameter of separated sub-frame 1,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature 2, scaled by weight 2,j, and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 1,1 in runtime matrix 174 and the parameters 1-j of frame signature 2 are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 1,1 and the parameters 1-j of frame signature 2.
The time domain and frequency domain parameters 1-j in runtime matrix 174 for separated sub-frame 1,1 are compared to the time domain and frequency domain parameters 1-j in the remaining frame signatures 3-i in database 92, as described for frame signatures 1 and 2. The minimum total weighted difference between the parameters 1-j of separated sub-frame 1,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. The separated sub-frame 1,1 of runtime matrix 174 is identified with the frame signature having the minimum total weighted difference between corresponding parameters. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 1,k of sub-frames 1,1 through 1,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 1,1 through 1,s occur within the same time period, the control parameters 1,k can be an average or other combination of the control parameters determined for each of separated sub-frames 1,1 through 1,s.
The time domain and frequency domain parameters 1-j for separated sub-frame 2,1 in runtime matrix 174 and the parameters 1-j in each of frame signatures 1-i are compared on a one-by-one basis and the weighted differences are recorded. For each parameter 1-j of separated sub-frame 2,1, compare block 242 determines the weighted difference between the parameter value in runtime matrix 174 and the parameter value in frame signature i, scaled by weight i,j, and stores the weighted difference in recognition memory 244. The weighted differences between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i are summed to determine a total weighted difference value between the parameters 1-j of separated sub-frame 2,1 and the parameters 1-j of frame signature i. The minimum total weighted difference between the parameters 1-j of separated sub-frame 2,1 of runtime matrix 174 and the parameters 1-j of frame signatures 1-i is the best match or closest correlation. The separated sub-frame 2,1 of runtime matrix 174 is identified with the frame signature having the minimum total weighted difference between corresponding parameters. Adaptive intelligence control block 94 uses the control parameters 1-k in database 92 associated with the matching frame signature to control operation of the signal processing blocks 72-84 of audio amplifier 70.
The control parameters 2,k of sub-frames 2,1 through 2,s each control different functions within signal processing blocks 72-84 of audio amplifier 70. Alternatively, since the separated sub-frames 2,1 through 2,s occur within the same time period, the control parameters 2,k can be an average or other combination of the control parameters determined for each of separated sub-frames 2,1 through 2,s. The process continues for each separated sub-frame n,s of runtime matrix 174.
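The weighted variant can be sketched in the same manner; the weight table weights[i][j] below is an illustrative stand-in for the weights i,j applied by compare block 242, and absolute differences remain an assumption.

```python
# A minimal sketch of the weighted matching step, assuming a
# per-signature, per-parameter weight table weights[i][j].

def best_weighted_match(runtime_params, signatures, weights):
    """Return the index of the frame signature with the minimum total
    weighted difference from the runtime sub-frame parameters."""
    best_index = None
    best_total = float("inf")
    for i, signature in enumerate(signatures):
        # Scale each per-parameter difference by weight i,j before summing.
        total = sum(abs(p - s) * w
                    for p, s, w in zip(runtime_params, signature, weights[i]))
        if total < best_total:
            best_index, best_total = i, total
    return best_index

# Example: weight the beat detector more heavily than the pitch detector.
weights = [[2.0, 0.5], [2.0, 0.5]]
print(best_weighted_match([68, 428], [[60, 440], [120, 250]], weights))  # 0
```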
In another embodiment, a probability of correlation between corresponding parameters in runtime matrix 174 and frame signatures 1-i is determined. In other words, a probability of correlation is determined as a percentage that a given parameter in runtime matrix 174 is likely the same as the corresponding parameter in frame signature i. The percentage is a likelihood of a match. As described above, the time domain parameters and frequency domain parameters in runtime matrix 174 are stored on a frame-by-frame basis. The parameters 1-j of each separated sub-frame n,s in runtime matrix 174 are represented by Pn,j=[Pn1, Pn2, . . . Pnj].
A probability ranked list R is determined between the parameters 1-j of each separated sub-frame n,s in runtime matrix 174 and the parameters 1-j of each frame signature i. The probability value ri can be determined by a root mean square analysis of Pn,j and the frame signature database Si,j in equation (4):

ri=√[((Pn,1−Si,1)²+(Pn,2−Si,2)²+ . . . +(Pn,j−Si,j)²)/j] (4)

The probability value for frame signature i is (1−ri)×100%. The overall ranking value for Pn,j and the frame signature database Si,j is given in equation (5):

R=[(1−r1)×100%, (1−r2)×100%, . . . (1−ri)×100%] (5)
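A sketch of equations (4) and (5) follows, under the assumption that the parameters are normalized so that each ri falls between 0 and 1 and the resulting percentages are meaningful; the function and variable names are illustrative.

```python
# A minimal sketch of equations (4) and (5), assuming parameters
# normalized to [0, 1] so that each r_i lies in [0, 1].
import math

def probability_ranking(P_n, S):
    """Return R, the percentage likelihood that the runtime sub-frame
    parameters P_n match each frame signature S[i] (equation (5))."""
    R = []
    for S_i in S:
        # Root mean square deviation between corresponding parameters
        # (equation (4)).
        r_i = math.sqrt(sum((p - s) ** 2 for p, s in zip(P_n, S_i)) / len(P_n))
        R.append((1.0 - r_i) * 100.0)
    return R

# Example with parameters scaled to [0, 1]:
print(probability_ranking([0.4, 0.6], [[0.5, 0.7], [0.9, 0.1]]))
# [90.0, 50.0] -- the first signature is the closer match
```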
In some cases, the matching process identifies two or more frame signatures that are close to the present frame. For example, a frame in runtime matrix 174 may have a 52% probability of matching frame signature 1 and a 48% probability of matching frame signature 2. In this case, an interpolation is performed between control parameters 1,1 through 1,k and control parameters 2,1 through 2,k, weighted by the probability of the match. The net effective control parameter 1 is 0.52*control parameter 1,1+0.48*control parameter 2,1. The net effective control parameter 2 is 0.52*control parameter 1,2+0.48*control parameter 2,2. The net effective control parameter k is 0.52*control parameter 1,k+0.48*control parameter 2,k. The net effective control parameters 1-k control operation of the signal processing blocks 72-84 of audio amplifier 70. The audio signal is processed through pre-filter block 72, pre-effects block 74, non-linear effects block 76, user-defined modules 78, post-effects block 80, post-filter block 82, and power amplification block 84, each operating as set by the net effective control parameters 1-k. The audio signal is routed to speakers 46 in automobile 24. The listener hears the reproduced audio signal enhanced in real time with characteristics determined by the dynamic content of the audio signal.
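The interpolation above amounts to a probability-weighted blend of the matching signatures' control parameters, sketched below with illustrative names and values.

```python
# A minimal sketch of the probability-weighted interpolation of control
# parameters; control_db and the (index, probability) pairs are
# illustrative, not values from the text's database 92.

def net_effective_controls(matches, control_db):
    """Blend the control parameters 1-k of closely matching frame
    signatures, weighted by each signature's match probability.

    matches: list of (signature_index, probability) pairs whose
             probabilities sum to 1.0, e.g. [(0, 0.52), (1, 0.48)].
    control_db: control_db[i][k] is control parameter k of signature i.
    """
    num_params = len(control_db[matches[0][0]])
    return [sum(prob * control_db[i][k] for i, prob in matches)
            for k in range(num_params)]

# Example from the text: 52% match to signature 1, 48% to signature 2.
control_db = [[10.0, 0.5], [20.0, 0.9]]
print(net_effective_controls([(0, 0.52), (1, 0.48)], control_db))
# [14.8, 0.692] -- each net parameter k is 0.52*param 1,k + 0.48*param 2,k
```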
The signal processing functions can be associated with equipment other than automobile sound system 20.
To accommodate the signal processing requirements for the dynamic content of the audio source, cellular phone 250 employs the dynamic adaptive intelligence feature, involving frequency domain analysis and time domain analysis of the audio signal on a frame-by-frame basis, to automatically and adaptively control operation of the signal processing functions and settings within the cellular phone and achieve an optimal sound reproduction, as described above for blocks 90-94.
To accommodate the signal processing requirements for the dynamic content of the audio source, audio equipment rack 262 employs the dynamic adaptive intelligence feature, involving frequency domain analysis and time domain analysis of the audio signal on a frame-by-frame basis, to automatically and adaptively control operation of the signal processing functions and settings within the audio equipment rack and achieve an optimal sound reproduction, as described above for blocks 90-94.
To accommodate the signal processing requirements for the dynamic content of the audio source, computer 270 employs the dynamic adaptive intelligence feature, involving frequency domain analysis and time domain analysis of the audio signal on a frame-by-frame basis, to automatically and adaptively control operation of the signal processing functions and settings within the computer and achieve an optimal sound reproduction, as described above for blocks 90-94.
While one or more embodiments of the present invention have been illustrated in detail, the skilled artisan will appreciate that modifications and adaptations to those embodiments may be made without departing from the scope of the present invention as set forth in the following claims.
The present application is a continuation-in-part of U.S. patent application Ser. No. 13/109,665, filed May 17, 2011, and claims priority to the foregoing parent application pursuant to 35 U.S.C. §120.
Relation | Application Number | Date | Country
---|---|---|---
Parent | 13/109,665 | May 17, 2011 | US
Child | 13/189,414 | | US