This application relates to the field of audio processing technologies, and in particular, to a virtual stereo synthesis method and apparatus.
Currently, headsets are widely used to enjoy music and videos. When a stereo signal is replayed by a headset, an effect of head orientation often appears, causing an unnatural listening effect. Research shows that the effect of head orientation appears for two reasons: 1) the headset directly transmits, to both ears, a virtual sound signal that is synthesized from left and right channel signals; unlike a natural sound, the virtual sound signal is not scattered or reflected by the head, auricles, body, and the like of a person, and the left and right channel signals in the synthetic virtual sound signal are not superimposed in a cross manner, which damages space information of an original sound field; and 2) the synthetic virtual sound signal lacks early reflection and late reverberation in a room, thereby affecting a listener's perception of sound distance and space size.
To reduce the effect of head orientation, in the prior art, data that can express a comprehensive filtering effect from a physiological structure or an environment on a sound wave is obtained by means of measurement in an artificially simulated listening environment. A common manner is that a head-related transfer function (HRTF) is measured in an anechoic chamber using an artificial head, to express the comprehensive filtering effect from the physiological structure on the sound wave. For a sound input signal s(n) from a sound source at a horizontal angle θ and an elevation angle φ, the virtual sound signals that are input to the left ear and the right ear are synthesized as:

sl(n)=conv(hθ,φl(n),s(n)), and

sr(n)=conv(hθ,φr(n),s(n)),

where conv(x,y) represents a convolution of vectors x and y, and hθ,φl(n) and hθ,φr(n) are respectively the left-ear component and the right-ear component of the HRTF data measured for that sound source position. In this manner, each input signal needs to be convolved with both the left-ear and the right-ear HRTF components, and the HRTF filtering changes the timbre of the original signals, which introduces a coloration effect.
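For illustration only (not part of the claimed subject matter), a minimal numerical sketch of this prior-art HRTF convolution, assuming the HRTF components and the source signal are available as one-dimensional arrays:

```python
import numpy as np

def prior_art_hrtf_binaural(s, h_left, h_right):
    """Prior-art style binaural synthesis: the source signal s(n) is convolved
    with the left-ear and right-ear HRTF impulse responses separately."""
    s_l = np.convolve(h_left, s)   # virtual sound signal delivered to the left ear
    s_r = np.convolve(h_right, s)  # virtual sound signal delivered to the right ear
    return s_l, s_r
```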
In the prior art, stereo simulation is further performed on the signals that are input from the left and right channels using binaural room impulse response (BRIR) data in place of the HRTF data, where the BRIR data further includes the comprehensive filtering effect from the environment on the sound wave. Although the BRIR data provides an improved stereo effect compared with the HRTF data, its calculation complexity is higher, and the coloration effect still exists.
The present application provides a virtual stereo synthesis method and apparatus, which can alleviate a coloration effect and reduce calculation complexity.
To resolve the foregoing technical problem, a first aspect of this application provides a virtual stereo synthesis method, where the method includes acquiring at least one sound input signal on one side and at least one sound input signal on the other side, separately performing ratio processing on a preset HRTF left-ear component and a preset HRTF right-ear component of each sound input signal on the other side, to obtain a filtering function of each sound input signal on the other side, separately performing convolution filtering on each sound input signal on the other side and the filtering function of the sound input signal on the other side, to obtain the filtered signal on the other side, and synthesizing all of the sound input signals on the one side and all of the filtered signals on the other side into a virtual stereo signal.
With reference to the first aspect, a first possible implementation manner of the first aspect of this application is the step of separately performing ratio processing on a preset HRTF left-ear component and a preset HRTF right-ear component of each sound input signal on the other side, to obtain a filtering function of each sound input signal on the other side includes separately using a ratio of a left-ear frequency domain parameter to a right-ear frequency domain parameter of each sound input signal on the other side as a frequency-domain filtering function of each sound input signal on the other side, where the left-ear frequency domain parameter indicates the preset HRTF left-ear component of the sound input signal on the other side, and the right-ear frequency domain parameter indicates the preset HRTF right-ear component of the sound input signal on the other side, and separately transforming the frequency-domain filtering function of each sound input signal on the other side to a time-domain function, and using the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first possible implementation manner of the first aspect, a second possible implementation manner of the first aspect of this application is the step of separately transforming the frequency-domain filtering function of each sound input signal on the other side to a time-domain function, and using the time-domain function as the filtering function of each sound input signal on the other side includes separately performing minimum phase filtering on the frequency-domain filtering function of each sound input signal on the other side, then transforming the frequency-domain filtering function to the time-domain function, and using the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first or the second possible implementation manner of the first aspect, a third possible implementation manner of the first aspect of this application is, before the step of separately using a ratio of a left-ear frequency domain parameter to a right-ear frequency domain parameter of each sound input signal on the other side as a frequency-domain filtering function of each sound input signal on the other side, the method further includes separately using a frequency domain of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately using a frequency domain of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately using a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately using a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately using a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately using a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side.
With reference to the first aspect or any one of the first to the third possible implementation manners, a fourth possible implementation manner of the first aspect of this application is the step of separately performing convolution filtering on each sound input signal on the other side and the filtering function of the sound input signal on the other side, to obtain a filtered signal on the other side includes separately performing reverberation processing on each sound input signal on the other side, and then using the processed signal as a sound reverberation signal on the other side, and separately performing convolution filtering on each sound reverberation signal on the other side and the filtering function of the corresponding sound input signal on the other side, to obtain the filtered signal on the other side.
With reference to the fourth possible implementation manner of the first aspect, a fifth possible implementation manner of the first aspect of this application is the step of separately performing reverberation processing on each sound input signal on the other side, and then using the processed signal as a sound reverberation signal on the other side includes separately passing each sound input signal on the other side through an all-pass filter, to obtain a reverberation signal of each sound input signal on the other side, and separately synthesizing each sound input signal on the other side and the reverberation signal of the sound input signal on the other side into the sound reverberation signal on the other side.
With reference to the first aspect or any one of the first to the fifth possible implementation manners, a sixth possible implementation manner of the first aspect of this application is the step of synthesizing all of the sound input signals on the one side and all of the filtered signals on the other side into a virtual stereo signal includes summating all of the sound input signals on the one side and all of the filtered signals on the other side to obtain a synthetic signal, and performing, using a fourth-order infinite impulse response (IIR) filter, timbre equalization on the synthetic signal, and then using the timbre-equalized synthetic signal as the virtual stereo signal.
To resolve the foregoing technical problem, a second aspect of this application provides a virtual stereo synthesis apparatus, where the apparatus includes an acquiring module, a generation module, a convolution filtering module, and a synthesis module, where the acquiring module is configured to acquire at least one sound input signal on one side and at least one sound input signal on the other side, and send the at least one sound input signal on the one side and at least one sound input signal on the other side to the generation module and the convolution filtering module. The generation module is configured to separately perform ratio processing on a preset HRTF left-ear component and a preset HRTF right-ear component of each sound input signal on the other side, to obtain a filtering function of each sound input signal on the other side, and send the filtering function of each sound input signal on the other side to the convolution filtering module. The convolution filtering module is configured to separately perform convolution filtering on each sound input signal on the other side and the filtering function of the sound input signal on the other side, to obtain the filtered signal on the other side, and send all of the filtered signals on the other side to the synthesis module, and the synthesis module is configured to synthesize a virtual stereo signal from all of the sound input signals on the one side and all of the filtered signals on the other side.
With reference to the second aspect, a first possible implementation manner of the second aspect of this application is the generation module which includes a ratio unit and a transformation unit, where the ratio unit is configured to separately use a ratio of a left-ear frequency domain parameter to a right-ear frequency domain parameter of each sound input signal on the other side as a frequency-domain filtering function of each sound input signal on the other side, and send the frequency-domain filtering function of each sound input signal on the other side to the transformation unit, where the left-ear frequency domain parameter indicates the preset HRTF left-ear component of the sound input signal on the other side, and the right-ear frequency domain parameter indicates the preset HRTF right-ear component of the sound input signal on the other side, and the transformation unit is configured to separately transform the frequency-domain filtering function of each sound input signal on the other side to a time-domain function, and use the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first possible implementation manner of the second aspect, a second possible implementation manner of the second aspect of this application is the transformation unit which is further configured to separately perform minimum phase filtering on the frequency-domain filtering function of each sound input signal on the other side, then transform the frequency-domain filtering function to the time-domain function, and use the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first or the second possible implementation manner of the second aspect, a third possible implementation manner of the second aspect of this application is the generation module which includes a processing unit, where the processing unit is configured to separately use a frequency domain of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately use a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately use a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, and send the left ear and right-ear frequency domain parameters to the ratio unit.
With reference to the second aspect or any one of the first to the third possible implementation manners, a fourth possible implementation manner of the second aspect of this application is a reverberation processing module. The reverberation processing module is configured to separately perform reverberation processing on each sound input signal on the other side, then use the processed signal as a sound reverberation signal on the other side, and output all of the sound reverberation signals on the other side to the convolution filtering module, and the convolution filtering module is further configured to separately perform convolution filtering on each sound reverberation signal on the other side and the filtering function of the corresponding sound input signal on the other side, to obtain the filtered signal on the other side.
With reference to the fourth possible implementation manner of the second aspect, a fifth possible implementation manner of the second aspect of this application is the reverberation processing module which is further configured to separately pass each sound input signal on the other side through an all-pass filter, to obtain a reverberation signal of each sound input signal on the other side, and separately synthesize each sound input signal on the other side and the reverberation signal of the sound input signal on the other side into the sound reverberation signal on the other side.
With reference to the second aspect or any one of the first to the fifth possible implementation manners, a sixth possible implementation manner of the second aspect of this application is the synthesis module which includes a synthesis unit and a timbre equalization unit, where the synthesis unit is configured to summate all of the sound input signals on the one side and all of the filtered signals on the other side to obtain a synthetic signal, and send the synthetic signal to the timbre equalization unit, and the timbre equalization unit is configured to perform, using a fourth-order IIR filter, timbre equalization on the synthetic signal and then use the timbre-equalized synthetic signal as the virtual stereo signal.
To resolve the foregoing technical problem, a third aspect of this application provides a virtual stereo synthesis apparatus, where the apparatus includes a processor, where the processor is configured to acquire at least one sound input signal on one side and at least one sound input signal on the other side, separately perform ratio processing on a preset HRTF left-ear component and a preset HRTF right-ear component of each sound input signal on the other side, to obtain a filtering function of each sound input signal on the other side, separately perform convolution filtering on each sound input signal on the other side and the filtering function of the sound input signal on the other side, to obtain the filtered signal on the other side, and synthesize all of the sound input signals on the one side and all of the filtered signals on the other side into a virtual stereo signal.
With reference to the third aspect, a first possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to separately use a ratio of a left-ear frequency domain parameter to a right-ear frequency domain parameter of each sound input signal on the other side as a frequency-domain filtering function of each sound input signal on the other side, where the left-ear frequency domain parameter indicates the preset HRTF left-ear component of the sound input signal on the other side, and the right-ear frequency domain parameter indicates the preset HRTF right-ear component of the sound input signal on the other side, and separately transform the frequency-domain filtering function of each sound input signal on the other side to a time-domain function, and use the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first possible implementation manner of the third aspect, a second possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to separately perform minimum phase filtering on the frequency-domain filtering function of each sound input signal on the other side, then transform the frequency-domain filtering function to the time-domain function, and use the time-domain function as the filtering function of each sound input signal on the other side.
With reference to the first or the second possible implementation manner of the third aspect, a third possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to separately use a frequency domain of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately use a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain, after diffuse-field equalization or subband smoothing, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side, or separately use a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF left-ear component of each sound input signal on the other side as the left-ear frequency domain parameter of each sound input signal on the other side, and separately use a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF right-ear component of each sound input signal on the other side as the right-ear frequency domain parameter of each sound input signal on the other side.
With reference to the third aspect or any one of the first to the third possible implementation manners, a fourth possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to separately perform reverberation processing on each sound input signal on the other side and then use the processed signal as a sound reverberation signal on the other side, and separately perform convolution filtering on each sound reverberation signal on the other side and the filtering function of the corresponding sound input signal on the other side, to obtain the filtered signal on the other side.
With reference to the fourth possible implementation manner of the third aspect, a fifth possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to separately pass each sound input signal on the other side through an all-pass filter, to obtain a reverberation signal of each sound input signal on the other side, and separately synthesize each sound input signal on the other side and the reverberation signal of the sound input signal on the other side into the sound reverberation signal on the other side.
With reference to the third aspect or any one of the first to the fifth possible implementation manners, a sixth possible implementation manner of the third aspect of this application is the processor, and the processor is further configured to summate all of the sound input signals on the one side and all of the filtered signals on the other side to obtain a synthetic signal, perform, using a fourth-order IIR filter, timbre equalization on the synthetic signal, and then use the timbre-equalized synthetic signal as the virtual stereo signal.
By means of the foregoing solutions, in this application, ratio processing is performed on left-ear and right-ear components of preset HRTF data of each sound input signal on the other side, to obtain a filtering function that retains orientation information of the preset HRTF data. During synthesis of a virtual stereo, convolution filtering therefore needs to be performed on only the sound input signal on the other side using the filtering function, and then the sound input signal on the other side and an original sound input signal on the one side are synthesized to obtain the virtual stereo, without a need to simultaneously perform convolution filtering on the sound input signals on the two sides, which greatly reduces calculation complexity. In addition, because convolution processing does not need to be performed on the sound input signal on the one side during synthesis, the original audio on that side is retained, which further alleviates a coloration effect and improves sound quality of the virtual stereo.
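As an illustrative sketch only (function and variable names are assumptions, not taken from the application), the asymmetric structure described above can be expressed as follows: the one-side signals pass through unchanged, and only the other-side signals are convolved with their ratio-derived filtering functions before summation.

```python
import numpy as np

def synthesize_virtual_stereo_one_ear(one_side_signals, other_side_signals, ratio_filters):
    """Sum the untouched one-side signals with the filtered other-side signals.
    Only len(other_side_signals) convolutions are needed, instead of one per
    input signal and ear as in conventional HRTF/BRIR rendering."""
    filtered = [np.convolve(h, s) for s, h in zip(other_side_signals, ratio_filters)]
    parts = list(one_side_signals) + filtered
    out = np.zeros(max(len(p) for p in parts))
    for p in parts:
        out[:len(p)] += p            # one-side signals are added without any filtering
    return out
```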
Descriptions are provided in the following with reference to the accompanying drawings and specific implementation manners.
Referring to
Step S201: A virtual stereo synthesis apparatus acquires at least one sound input signal sl
In the present disclosure, an original sound signal is processed to obtain an output sound signal that has a stereo sound effect. In this implementation manner, there are a total of M simulated sound sources located on one side, which accordingly generate M sound input signals on the one side, and there are a total of K simulated sound sources located on the other side, which accordingly generate K sound input signals on the other side. The virtual stereo synthesis apparatus acquires the M sound input signals s1
Generally, in the present disclosure, the sound input signals on the one side and the other side simulate sound signals that are sent from left side and right side positions of an artificial head center in order to be distinguished from each other. For example, if the sound input signal on the one side is a left-side sound input signal, the sound input signal on the other side is a right-side sound input signal, or if the sound input signal on the one side is a right-side sound input signal, the sound input signal on the other side is a left-side sound input signal, where the left-side sound input signal is a simulation of a sound signal that is sent from the left side position of the artificial head center, and the right-side sound input signal is a simulation of a sound signal that is sent from the right side position of the artificial head center. For example, in a dual-channel mobile terminal, a left channel signal is a left-side sound input signal, and a right channel signal is a right-side sound input signal. When a sound is played by a headset, the virtual stereo synthesis apparatus separately acquires the left and right channel signals that are used as original sound signals, and separately uses the left and the right channel signals as the sound input signals on the one side and the other side. Alternatively, for some mobile terminals whose replay signal sources include four channel signals, horizontal angles between simulated sound sources of the four channel signals and the front of the artificial head center are separately ±30° and ±110°, and elevation angles of the simulated sound sources are 0°. It is generally defined that, channel signals whose horizontal angles are positive angles (+30° and +110°) are right-side sound input signals, and channel signals whose horizontal angles are negative angles (−30° and −110°) are left-side sound input signals. When a sound is played by a headset, the virtual stereo synthesis apparatus acquires the left-side and right-side sound input signals that are separately used as the sound input signals on the one side and the other side.
Step S202: The virtual stereo synthesis apparatus separately performs ratio processing on a preset function HRTF left-ear component hθ
A preset HRTF is briefly described herein. HRTF data hθ,φ(n) is filter model data, measured in a laboratory, of transmission paths that are from a sound source at a position to two ears of an artificial head, and expresses a comprehensive filtering function of a human physiological structure on a sound wave from the position of the sound source, where a horizontal angle between the sound source and the artificial head center is θ, and an elevation angle between the sound source and the artificial head center is φ. Different HRTF experimental measurement databases can already be provided in the prior art. In the present disclosure, HRTF data of a preset sound source may be directly acquired, without performing measurement, from the HRTF experimental measurement databases in the prior art, and a simulated sound source position is a sound source position during measurement of corresponding preset HRTF data. In this implementation manner, each sound input signal correspondingly comes from a different preset simulated sound source, and therefore a different piece of HRTF data is correspondingly preset for each sound input signal. The preset HRTF data of each sound input signal can express a filtering effect on the sound input signal that is transmitted from a preset position to the two ears. Furthermore, preset HRTF data hθ
The virtual stereo synthesis apparatus performs ratio processing on the left-ear component hθ
Step S203: The virtual stereo synthesis apparatus separately performs convolution filtering on each sound input signal s2
The virtual stereo synthesis apparatus calculates the filtered signal s2
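A minimal sketch of this per-signal convolution filtering step (the names and the use of FFT-based convolution are illustrative assumptions, not mandated by the application):

```python
import numpy as np
from scipy.signal import fftconvolve

def filter_other_side_signal(s2_k, h_k):
    """Convolve the k-th other-side sound input signal with its tailored
    filtering function; fftconvolve keeps the cost low for long signals."""
    return fftconvolve(s2_k, h_k)
```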
Step S204: The virtual stereo synthesis apparatus synthesizes all of the sound input signals s1
The virtual stereo synthesis apparatus synthesizes, according to
all of the sound input signals s1
In this implementation manner, ratio processing is performed on left-ear and right-ear components of preset HRTF data of each sound input signal on the other side, to obtain a filtering function that retains orientation information of the preset HRTF data. During synthesis of a virtual stereo, convolution filtering processing therefore needs to be performed on only the sound input signal on the other side using the filtering function, and the sound input signal on the other side and a sound input signal on one side are synthesized to obtain the virtual stereo, without a need to simultaneously perform convolution filtering on the sound input signals that are on the two sides, which greatly reduces calculation complexity. In addition, because convolution processing does not need to be performed on the sound input signal on the one side during synthesis, an original audio is retained, which further alleviates a coloration effect and improves sound quality of the virtual stereo.
It should be noted that, in this implementation manner, the generated virtual stereo is a virtual stereo that is input to an ear on one side, for example, if the sound input signal on the one side is a left-side sound input signal, and the sound input signal on the other side is a right-side sound input signal, the virtual stereo signal obtained according to the foregoing steps is a left-ear virtual stereo signal that is directly input to the left ear, or if the sound input signal on the one side is a right-side sound input signal, and the sound input signal on the other side is a left-side sound input signal, the virtual stereo signal obtained according to the foregoing steps is a right-ear virtual stereo signal that is directly input to the right ear. In the foregoing manner, the virtual stereo synthesis apparatus can separately obtain a left-ear virtual stereo signal and a right-ear virtual stereo signal, and output the signals to the two ears using a headset, to achieve a stereo effect that is like a natural sound.
In addition, in an implementation manner in which positions of virtual sound sources are all fixed, it is not limited that the virtual stereo synthesis apparatus executes step S202 each time virtual stereo synthesis is performed (for example, each time replay is performed using a headset). HRTF data of each sound input signal indicates filter model data of paths for transmitting the sound input signal from a sound source to two ears of an artificial head, and in a case in which a position of the sound source is fixed, the filter model data of the path for transmitting the sound input signal, generated by the sound source, from the sound source to the two ears of the artificial head is fixed. Therefore, step S202 may be separated out, and step 202 is executed in advance to acquire and save a filtering function of each sound input signal, and when the virtual stereo synthesis is performed, the filtering function, saved in advance, of each sound input signal is directly acquired to perform convolution filtering on a sound input signal on the other side generated by a virtual sound source on the other side. The foregoing case still falls within the protection scope of the virtual stereo synthesis method in the present disclosure.
Referring to
Step S301: A virtual stereo synthesis apparatus acquires at least one sound input signal s1
The virtual stereo synthesis apparatus acquires the at least one sound input signal s1
Step S302: Separately perform ratio processing on a preset HRTF left-ear component hθ
The virtual stereo synthesis apparatus performs ratio processing on the left-ear component hθ
A specific method for obtaining the filtering function of each sound input signal on the other side is described using an example. Referring to
Step S401: The virtual stereo synthesis apparatus performs diffuse-field equalization on preset HRTF data hθ
A preset HRTF data of the kth sound input signal on the other side is represented by hθ
(1) Furthermore, it is calculated that a frequency domain of the preset HRTF data hθ
(2) An average energy spectrum DF _avg(n), in all directions, of the preset HRTF data frequency domain Hθ
where |Hθ
(3) The average energy spectrum DF _avg(n) is inverted, to obtain an inversion DF _inv(n) of the average energy spectrum of the preset HRTF data frequency domain Hθ
(4) The inversion DF _inv(n) of the average energy spectrum of the preset HRTF data frequency domain Hθ
df _inv(n)=real(InvFT(DF _inv(n))),
where InvFT( ) represents inverse Fourier transform, and real(x) represents calculation of the real part of a complex number x.
(5) Convolution is performed on the preset HRTF data hθ
where conv(x,y) represents a convolution of vectors x and y, and
The virtual stereo synthesis apparatus performs the foregoing processing (1) to (5) on the preset HRTF data hθ
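Because parts of the formulas above are elided, the following is only a sketch of diffuse-field equalization under common assumptions (amplitude averaging over all measured directions, guarded inversion, an illustrative FFT length); the authoritative definitions are those given in steps (1) to (5).

```python
import numpy as np

def diffuse_field_equalize(hrtf_set, nfft=512):
    """hrtf_set: array of shape (num_directions, ir_length) holding the measured HRTF
    impulse responses of one ear. Returns the diffuse-field-equalized responses."""
    H = np.fft.fft(hrtf_set, n=nfft, axis=1)                    # (1) frequency domain
    df_avg = np.sqrt(np.mean(np.abs(H) ** 2, axis=0))           # (2) average energy spectrum over all directions
    df_inv = 1.0 / np.maximum(df_avg, 1e-12)                    # (3) inversion, guarded against division by zero
    df_inv_time = np.real(np.fft.ifft(df_inv))                  # (4) real part of the inverse Fourier transform
    return np.array([np.convolve(h, df_inv_time) for h in hrtf_set])  # (5) convolve each response
```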
Step S402: Perform subband smoothing on the diffuse-field-equalized preset HRTF data
The virtual stereo synthesis apparatus transforms the diffuse-field-equalized preset HRTF data
The virtual stereo synthesis apparatus performs subband smoothing on the frequency domain
bw(n)=└0.2*n┘, └x┘ represents a maximum integer that is not greater than x, and hann(j)=0.5*(1−cos(2*π*j/(2*bw(n)+1))), j=0 . . . (2*bw(n)+1).
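A sketch of this subband smoothing, assuming the frequency-domain magnitude is held in a one-dimensional array; the window indexing at the band edges is an assumption where the formula above is ambiguous.

```python
import numpy as np

def subband_smooth(H):
    """Smooth |H| with a Hann-weighted window whose half-width grows with the
    bin index, bw(n) = floor(0.2 * n), as described above."""
    mag = np.abs(H)
    out = np.empty_like(mag)
    N = len(mag)
    for n in range(N):
        bw = int(0.2 * n)
        if bw == 0:
            out[n] = mag[n]                       # no smoothing at the lowest bins
            continue
        idx = np.arange(n - bw, n + bw + 1)       # 2*bw(n)+1 neighbouring bins
        j = np.arange(1, 2 * bw + 2)
        w = 0.5 * (1.0 - np.cos(2.0 * np.pi * j / (2 * bw + 2)))  # Hann weights
        valid = (idx >= 0) & (idx < N)            # clip the window at the spectrum edges
        out[n] = np.sum(w[valid] * mag[idx[valid]]) / np.sum(w[valid])
    return out
```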
Step S403: Use a preset HRTF left-ear frequency domain component Ĥθ
Step S404: Separately use a ratio of the left-ear frequency domain parameter of the sound input signal on the other side to the right-ear frequency domain parameter of the sound input signal on the other side as a frequency-domain filtering function Hθ
The ratio of the left-ear frequency domain parameter of the sound input signal on the other side to the right-ear frequency domain parameter of the sound input signal on the other side further includes a modulus ratio and an argument difference between the left-ear frequency domain parameter and the right-ear frequency domain parameter, where the modulus ratio and the argument difference are correspondingly used as a modulus and an argument in the frequency-domain filtering function of the sound input signal on the other side, and the obtained filtering function can retain orientation information of the preset HRTF left-ear component and the preset HRTF right-ear component of the sound input signal on the other side.
In this implementation manner, the virtual stereo synthesis apparatus performs a ratio operation on the left-ear frequency domain parameter and the right-ear frequency domain parameter of the sound input signal on the other side. Further, the modulus of the frequency-domain filtering function Hθ
the argument of the frequency-domain filtering function Hθ
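Since the modulus ratio and the argument difference together are exactly the complex ratio of the two spectra, a minimal sketch of this step is the following (the guard value is an assumption):

```python
import numpy as np

def ratio_filter_frequency_domain(H_left, H_right, eps=1e-12):
    """Frequency-domain filtering function: its modulus is |H_left| / |H_right| and
    its argument is arg(H_left) - arg(H_right), i.e. the complex ratio H_left / H_right."""
    denom = np.where(np.abs(H_right) < eps, eps, H_right)   # avoid division by zero
    return H_left / denom
```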
It should be noted that, in the foregoing description, when diffuse-field equalization and subband smoothing are performed, the preset HRTF data hθ
Step S405: Separately perform minimum phase filtering on the frequency-domain filtering function Hθ
The obtained frequency-domain filtering function Hθ
(1) The virtual stereo synthesis apparatus extends the modulus of the obtained frequency-domain filtering function Hθ
where ln(x) is a natural logarithm of x, N1 is a time-domain transformation length of a time domain hθ
(2) Hilbert transform is performed on the modulus |Hθ
Hθ
where Hilbert( ) represents Hilbert transform.
(3) A minimum phase filter Hθ
where n=1 . . . N2.
(4) A delay τ(θk,φk) is calculated:
(5) The minimum phase filter Hθ
hθ
where InvFT( ) represents inverse Fourier transform, and real(x) represents the real part of a complex number x.
(6) The time domain hθ
Relatively large coefficients of the minimum phase filter Hθ
A tailored filtering function hθ
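Several constants in steps (1) to (6) are elided above, so the following is only a sketch of the minimum-phase construction under common assumptions: the ratio magnitude is sampled on a full, conjugate-symmetric FFT grid, and the tap count and delay are placeholders.

```python
import numpy as np
from scipy.signal import hilbert

def ratio_filter_time_domain(H_ratio, delay_samples=0.0, taps=64):
    """Turn the frequency-domain ratio into a short time-domain filter:
    minimum phase from the Hilbert transform of the log magnitude, plus a
    broadband delay, then inverse FFT and truncation to `taps` coefficients."""
    nfft = len(H_ratio)                                       # assumed full FFT grid
    mag = np.abs(H_ratio)
    log_mag = np.log(np.maximum(mag, 1e-12))
    min_phase = -np.imag(hilbert(log_mag))                    # minimum-phase argument
    k = np.arange(nfft)
    linear_phase = -2.0 * np.pi * k * delay_samples / nfft    # delay tau as a linear phase term
    H_min = mag * np.exp(1j * (min_phase + linear_phase))
    h = np.real(np.fft.ifft(H_min))                           # back to the time domain
    return h[:taps]                                           # keep the leading (largest) coefficients
```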
It should be noted that, the foregoing example of obtaining the filtering function hθ
arg(Hθ
formula arg(Hθ
the left-ear component and the right-ear component of the subband-smoothed preset HRTF data are separately used as the left-ear frequency domain parameter and the right-ear frequency domain parameter, ratio calculation is performed according to a formula
arg(Hθ
Step S303: Separately perform reverberation processing on each sound input signal s2
After acquiring the at least one sound input signal s2
(1) As shown in
where conv(x,y) represents a convolution of vectors x and y, dk is a preset delay of the kth sound input signal on the other side, hk(n) is an all-pass filter of the kth sound input signal on the other side, and a transfer function thereof is
where gk1, gk2, and gk3 are preset all-pass filter gains corresponding to the kth sound input signal on the other side, and Mk1, Mk2, and Mk3 are preset all-pass filter delays corresponding to the kth sound input signal on the other side.
(2) Separately add each sound input signal s2
ŝ2
where wk is a preset weight of the reverberation signal
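The exact all-pass transfer function and its constants are given above (and partly elided); as a sketch only, the same structure can be realized with standard Schroeder all-pass sections, which is an assumption about the form of hk(n).

```python
import numpy as np
from scipy.signal import lfilter

def allpass_section(x, g, M):
    """One Schroeder all-pass section, H(z) = (-g + z^-M) / (1 - g*z^-M)."""
    b = np.zeros(M + 1); a = np.zeros(M + 1)
    b[0], b[M] = -g, 1.0
    a[0], a[M] = 1.0, -g
    return lfilter(b, a, x)

def sound_reverberation_signal(s2_k, d_k, gains, delays, w_k):
    """Sketch of steps (1)-(2): delay the other-side input by d_k samples, pass it
    through a cascade of three all-pass sections (gains gk1..gk3, delays Mk1..Mk3),
    and add the result back to the dry signal with weight w_k."""
    r = np.concatenate([np.zeros(d_k), s2_k])       # preset delay of the k-th input
    for g, M in zip(gains, delays):
        r = allpass_section(r, g, M)                # reverberation signal
    out = np.zeros(len(r))
    out[:len(s2_k)] += s2_k                         # dry other-side input
    return out + w_k * r                            # weighted sum -> sound reverberation signal
```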
Step S304: Separately perform convolution filtering on each sound reverberation signal
After separately performing reverberation processing on each of the at least one sound input signal on the other side to obtain the sound reverberation signal ŝ2
Step S305: Summate all of the sound input signals s1
Furthermore, the virtual stereo synthesis apparatus obtains the synthetic signal s−1(n) corresponding to the one side according to a formula
For example, if the sound input signal on the one side is a left-side sound input signal, a left-ear synthetic signal is obtained, or if the sound input signal on the one side is a right-side sound input signal, a right-ear synthetic signal is obtained.
Step S306: Perform, using a fourth-order IIR filter, timbre equalization on the synthetic signal s−1(n) and then use the timbre-equalized synthetic signal as a virtual stereo signal s1(n).
The virtual stereo synthesis apparatus performs timbre equalization on the synthetic signal s−1(n), to reduce a coloration effect, on the synthetic signal, from the convolution-filtered sound input signal on the other side. In this implementation manner, timbre equalization is performed using a fourth-order IIR filter eq(n). Furthermore, the virtual stereo signal s1(n) that is finally output to the ear on the one side is obtained according to a formula s1(n)=conv(eq(n),s−1(n)).
A transfer function of eq(n) is
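The concrete transfer function of eq(n) is specified in the application (its expression is elided above); as a placeholder-only sketch, a fourth-order IIR filter can be applied as shown below, where the coefficient arrays are assumptions and would have to be replaced by the application's values.

```python
import numpy as np
from scipy.signal import lfilter

# Placeholder fourth-order IIR coefficients (b0..b4 / a0..a4); not the application's values.
b_eq = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
a_eq = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

def timbre_equalize(synthetic_signal):
    """s1(n) = conv(eq(n), synthetic signal), realized as an IIR filtering step."""
    return lfilter(b_eq, a_eq, synthetic_signal)
```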
For better comprehension of practical use of the virtual stereo synthesis method of this application, descriptions are further provided using an example in which a sound generated by a dual-channel terminal is replayed by a headset, where a left channel signal is a left-side sound input signal sl(n), and a right channel signal is a right-side sound input signal sr(n), where preset HRTF data of the left-side sound input signal sl(n) is hθ,φl(n), and preset HRTF data of the right-side sound input signal sr(n) is hθ,φr(n).
A virtual stereo synthesis apparatus separately processes the preset HRTF data hθ,φl(n) of the left-side sound input signal and the preset HRTF data hθ,φr(n) of the right-side sound input signal separately according to steps S401 to S405 above, to obtain a tailored filtering function hθ,φc
The virtual stereo synthesis apparatus acquires the left-side sound input signal sl(n) as a sound input signal on one side, and the right-side sound input signal sr(n) as a sound input signal on the other side. The virtual stereo synthesis apparatus executes step S303 to perform reverberation processing on the right-side sound input signal. A reverberation signal
and a right-side sound reverberation signal ŝr(n) is obtained according to ŝr(n)=sr(n)+wr·
and a left-side sound reverberation signal ŝl(n) is obtained according to ŝl(n)=sl(n)+wl·
Values of constants in the foregoing example are:
The values of the constants are numerical values that are obtained by means of multiple experiments and that provide an optimal replay effect for a virtual stereo signal. Certainly, in another implementation manner, other numerical values may also be used. The values of the constants in this implementation manner are not further limited herein.
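Putting the pieces together for the two-channel example, the flow can be sketched as follows; all helper names refer to the illustrative sketches above and are assumptions, not the application's code.

```python
import numpy as np

def add_aligned(a, b):
    """Sum two signals of different lengths by zero-padding the shorter one."""
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

def dual_channel_example(sl, sr, h_filter_for_right, h_filter_for_left,
                         reverberate, timbre_equalize):
    """Left-ear output: left channel untouched + reverberated, ratio-filtered right channel.
    Right-ear output: right channel untouched + reverberated, ratio-filtered left channel."""
    left_ear = timbre_equalize(add_aligned(sl, np.convolve(h_filter_for_right, reverberate(sr))))
    right_ear = timbre_equalize(add_aligned(sr, np.convolve(h_filter_for_left, reverberate(sl))))
    return left_ear, right_ear
```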
In this implementation manner, which is used as an optimized implementation manner, steps S303, S304, S305, and S306 are executed to perform reverberation processing, a convolution filtering operation, virtual stereo synthesis, and timbre equalization in sequence, to finally obtain a virtual stereo. However, in another implementation manner, steps S303 and S306 may be selectively performed; for example, if steps S303 and S306 are not executed, convolution filtering is directly performed on the sound input signal on the other side using the filtering function of the sound input signal on the other side, to obtain the filtered signal ŝ2
In this implementation manner, reverberation processing is performed on a sound input signal on the other side, which enhances a sense of space of a synthetic virtual stereo, and during synthesis of a virtual stereo, timbre equalization is performed on the virtual stereo using a filter, which reduces a coloration effect. In addition, in this implementation manner, existing HRTF data is improved. Diffuse-field equalization is first performed on the HRTF data, to eliminate interference data from the HRTF data, and then a ratio operation is performed on a left-ear component and a right-ear component that are in the HRTF data, to obtain improved HRTF data in which orientation information of the HRTF data is retained, that is, a filtering function in this application such that corresponding convolution filtering needs to be performed on only the sound input signal on the other side, and then a virtual stereo with a relatively good replay effect can be obtained. Therefore, virtual stereo synthesis in this implementation manner is different from that in the prior art, in which the convolution filtering is performed on sound input signals on both sides, and therefore, calculation complexity is greatly reduced. Moreover, an original input signal is completely retained on one side, which reduces a coloration effect. Further, in this implementation manner, the filtering function is further processed by means of subband smoothing and minimum phase filtering, which reduces a data length of the filtering function, and therefore further reduces the calculation complexity.
Referring to
The acquiring module 610 is configured to acquire at least one sound input signal s1
In the present disclosure, an original sound signal is processed to obtain an output sound signal that has a stereo sound effect. In this implementation manner, there are a total of M simulated sound sources located on one side, which accordingly generate M sound input signals on the one side, and there are a total of K simulated sound sources located on the other side, which accordingly generate K sound input signals on the other side. The acquiring module 610 acquires the M sound input signals s1
Generally, in the present disclosure, the sound input signals on the one side and the other side simulate sound signals that are sent from left side and right side positions of an artificial head center in order to be distinguished from each other, for example, if the sound input signal on the one side is a left-side sound input signal, the sound input signal on the other side is a right-side sound input signal, or if the sound input signal on the one side is a right-side sound input signal, the sound input signal on the other side is a left-side sound input signal, where the left-side sound input signal is a simulation of a sound signal that is sent from the left side position of the artificial head center, and the right-side sound input signal is a simulation of a sound signal that is sent from the right side position of the artificial head center.
The generation module 620 is configured to separately perform ratio processing on a preset HRTF left-ear component hθ
Different HRTF experimental measurement databases can already be provided in the prior art. The generation module 620 may directly acquire, without performing measurement, HRTF data from the HRTF experimental measurement databases in the prior art, to perform presetting, and a simulated sound source position of a sound input signal is a sound source position during measurement of corresponding preset HRTF data. In this implementation manner, each sound input signal correspondingly comes from a different preset simulated sound source, and therefore a different piece of HRTF data is correspondingly preset for each sound input signal. The preset HRTF data of each sound input signal can express a filtering effect on the sound input signal that is transmitted from a preset position to the two ears. Furthermore, preset HRTF data hθ
The generation module 620 performs ratio processing on the left-ear component hθ
The convolution filtering module 630 is configured to separately perform convolution filtering on each sound input signal s2
The convolution filtering module 630 calculates the filtered signal s2
The synthesis module 640 is configured to synthesize all of the sound input signals s1
The synthesis module 640 is configured to synthesize, according to
all of the received sound input signals s1
In this implementation manner, ratio processing is performed on left-ear and right-ear components of preset HRTF data of each sound input signal on the other side, to obtain a filtering function that retains orientation information of the preset HRTF data. During synthesis of a virtual stereo, convolution filtering processing therefore needs to be performed on only the sound input signal on the other side using the filtering function, and the sound input signal on the other side and a sound input signal on one side are synthesized to obtain the virtual stereo, without a need to simultaneously perform convolution filtering on the sound input signals that are on the two sides, which greatly reduces calculation complexity. In addition, because convolution processing does not need to be performed on the sound input signal on the one side during synthesis, an original audio is retained, which further alleviates a coloration effect and improves sound quality of the virtual stereo.
It should be noted that, in this implementation manner, the generated virtual stereo is a virtual stereo that is input to an ear on one side, for example, if the sound input signal on the one side is a left-side sound input signal, and the sound input signal on the other side is a right-side sound input signal, the virtual stereo signal obtained by the foregoing module is a left-ear virtual stereo signal that is directly input to the left ear, or if the sound input signal on the one side is a right-side sound input signal, and the sound input signal on the other side is a left-side sound input signal, the virtual stereo signal obtained by the foregoing module is a right-ear virtual stereo signal that is directly input to the right ear. In the foregoing manner, the virtual stereo synthesis apparatus can separately obtain a left-ear virtual stereo signal and a right-ear virtual stereo signal, and output the signals to the two ears using a headset, to achieve a stereo effect that is like a natural sound.
Referring to
The acquiring module 710 is configured to acquire at least one sound input signal s1
The generation module 720 is configured to separately perform ratio processing on a preset HRTF left-ear component hθ
Further optimized, the generation module 720 includes a processing unit 721, a ratio unit 722, and a transformation unit 723.
The processing unit 721 is configured to separately use a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF left-ear component hθ
a. The processing unit 721 performs diffuse-field equalization on preset HRTF data hθ
(1) Furthermore, the processing unit 721 calculates that a frequency domain of the preset HRTF data hθ
(2) The processing unit 721 calculates an average energy spectrum DF _avg(n), in all directions, of the preset HRTF data frequency domain Hθ
where |Hθ
(3) The processing unit 721 inverts the average energy spectrum DF _avg(n), to obtain an inversion DF _inv(n) of the average energy spectrum of the preset HRTF data frequency domain Hθ
(4) The processing unit 721 transforms the inversion DF _inv(n) of the average energy spectrum of the preset HRTF data frequency domain Hθ
df _inv(n)=real(InvFT(DF_inv(n))),
where InvFT( ) represents inverse Fourier transform, and real(x) represents calculation of a real number part of a complex number x.
(5) The processing unit 721 performs convolution on the preset HRTF data hθ
where conv(x,y) represents a convolution of vectors x and y, and
The processing unit 721 performs the foregoing processing (1) to (5) on the preset HRTF data hθ
b. The processing unit 721 performs subband smoothing on the diffuse-field-equalized preset HRTF data
The processing unit 721 performs subband smoothing on the frequency domain
bw(n)=└0.2*n┘, └x┘ represents a maximum integer that is not greater than x, and
hann(j)=0.5*(1−cos(2*π*j/(2*bw(n)+1))),j=0 . . . (2*bw(n)+1).
c. The processing unit 721 uses a preset HRTF left-ear frequency domain component Ĥθ
It should be noted that, in the foregoing description, when diffuse-field equalization and subband smoothing are performed, the preset HRTF data hθ
The ratio unit 722 is configured to separately use a ratio of the left-ear frequency domain parameter of the sound input signal on the other side to the right-ear frequency domain parameter of the sound input signal on the other side as a frequency-domain filtering function Hθ
In this implementation manner, the ratio unit 722 performs a ratio operation on the left-ear frequency domain parameter and the right-ear frequency domain parameter of the sound input signal on the other side. Further, the modulus of the frequency-domain filtering function Hθ
the argument of the frequency-domain filtering function Hθ
The transformation unit 723 is configured to separately perform minimum phase filtering on the frequency-domain filtering function Hθ
(1) The transformation unit 723 extends the modulus of the frequency-domain filtering function Hθ
where ln(x) is a natural logarithm of x, N1 is a time-domain transformation length of a time domain hθ
(2) The transformation unit 723 performs Hilbert transform on the modulus |Hθ
Hθ
where Hilbert( ) represents Hilbert transform.
(3) The transformation unit 723 obtains a minimum phase filter Hθ
where n=1 . . . N2.
(4) The transformation unit 723 calculates a delay τ(θk,φk):
(5) The transformation unit 723 transforms the minimum phase filter Hθ
hθ
where InvFT( ) represents inverse Fourier transform, and real(x) represents the real part of a complex number x.
(6) The transformation unit 723 truncates the time domain hθ
Relatively large coefficients of the minimum phase filter Hθ
It should be noted that, the foregoing example in which the generation module obtains the filtering function hθ
The reverberation processing module 750 is configured to separately perform reverberation processing on each sound input signal s2
After acquiring the at least one sound input signal s2
(1) As shown in
where conv(x, y) represents a convolution of vectors x and y, dk is a preset delay of the kth sound input signal on the other side, hk(n) is an all-pass filter of the kth sound input signal on the other side, and a transfer function thereof is:
where gk1, gk2, and gk3 are preset all-pass filter gains corresponding to the kth sound input signal on the other side, and Mk1, Mk2, and Mk3 are preset all-pass filter delays corresponding to the kth sound input signal on the other side.
(2) The reverberation processing module 750 separately adds each sound input signal s2
ŝ2
where wk is a preset weight of the reverberation signal
The convolution filtering module 730 is configured to separately perform convolution filtering on each sound reverberation signal ŝ2
After receiving all the sound reverberation signals ŝ2
The synthesis unit 741 is configured to summate all of the sound input signals s1
Furthermore, the synthesis unit 741 obtains the synthetic signal
For example, if the sound input signal on the one side is a left-side sound input signal, a left-ear synthetic signal is obtained, or if the sound input signal on the one side is a right-side sound input signal, a right-ear synthetic signal is obtained.
The timbre equalization unit 742 is configured to perform, using a fourth-order IIR filter, timbre equalization on the synthetic signal
The timbre equalization unit 742 performs timbre equalization on the synthetic signal
A transfer function of eq(n) is
In this implementation manner, which is used as an optimized implementation manner, reverberation processing, convolution filtering operation, virtual stereo synthesis, and timbre equalization are performed in sequence, to finally obtain a virtual stereo. However, in another implementation manner, reverberation processing and/or timbre equalization may not be performed, which is not limited herein.
It should be noted that, the virtual stereo synthesis apparatus of this application may be an independent sound replay device, for example, a mobile terminal such as a mobile phone, a tablet computer, or an MP3 player, and the foregoing functions are also performed by the sound replay device.
Referring to
The memory 820 is configured to store a computer instruction executed by the processor 810 and data that the processor 810 needs to store at work.
The processor 810 executes the computer instruction stored in the memory 820, to acquire at least one sound input signal s1
Further, the processor 810 acquires the at least one sound input signal s1
The processor 810 is configured to separately perform ratio processing on a preset HRTF left-ear component hθ
Further optimized, the processor 810 separately uses a frequency domain, after diffuse-field equalization and subband smoothing is performed in sequence, of the preset HRTF left-ear component hθ
The processor 810 separately uses a ratio of the left-ear frequency domain parameter of the sound input signal on the other side to the right-ear frequency domain parameter of the sound input signal on the other side as a frequency-domain filtering function Hθ
an argument of the frequency-domain filtering function Hθ
The processor 810 separately performs minimum phase filtering on the frequency-domain filtering function Hθ
It should be noted that, the foregoing example in which the processor obtains the filtering function hθ
The processor 810 is configured to separately perform reverberation processing on each sound input signal s2
The processor 810 is configured to separately perform convolution filtering on each sound reverberation signal ŝ2
The processor 810 is configured to summate all of the sound input signals s1
Further, the processor 810 obtains the synthetic signal
For example, if the sound input signal on the one side is a left-side sound input signal, a left-ear synthetic signal is obtained, or if the sound input signal on the one side is a right-side sound input signal, a right-ear synthetic signal is obtained.
The processor 810 is configured to perform, using a fourth-order IIR filter, timbre equalization on the synthetic signal
In this implementation manner, which is used as an optimized implementation manner, reverberation processing, convolution filtering operation, virtual stereo synthesis, and timbre equalization are performed in sequence, to finally obtain a left-ear or right-ear virtual stereo. However, in another implementation manner, the processor may not perform reverberation processing and/or timbre equalization, which is not limited herein.
By means of the foregoing solutions, in this application, ratio processing is performed on left-ear and right-ear components of preset HRTF data of each sound input signal on the other side, to obtain a filtering function that retains orientation information of the preset HRTF data. During synthesis of a virtual stereo, convolution filtering therefore needs to be performed on only the sound input signal on the other side using the filtering function, and then the sound input signal on the other side and an original sound input signal on the one side are synthesized to obtain the virtual stereo, without a need to simultaneously perform convolution filtering on the sound input signals on the two sides, which greatly reduces calculation complexity. In addition, because convolution processing does not need to be performed on the sound input signal on the one side during synthesis, the original audio on that side is retained, which further alleviates a coloration effect and improves sound quality of the virtual stereo.
In the several implementation manners provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the module or unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or a part of the technical solutions may be implemented in the form of a software product. The software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or a part of the steps of the methods described in the implementation manners of this application. The foregoing storage medium includes any medium that can store program code, such as a universal serial bus (USB) flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
This application is a continuation of International Application No. PCT/CN2014/076089, filed on Apr. 24, 2014, which claims priority to Chinese Patent Application No. 201310508593.8, filed on Oct. 24, 2013, both of which are hereby incorporated by reference in their entireties.