Reproducing different acoustic scenes in multiple sound zones located nearby without acoustic barriers in between is a well-known task in audio signal processing, which is often referred to as multizone reproduction (see [1]). From the technical point of view, multizone reproduction is closely related to loudspeaker beamforming or spotforming (see [2]) when nearfield scenarios are considered, where the loudspeaker array aperture may also enclose the listener.
A problem in a multizone reproduction scenario may, for example, be to provide substantially different acoustic scenes (e.g. different pieces of music or audio content of different movies) to the listeners occupying individual sound zones.
A simplified ideal example of multizone reproduction is shown in
When reproducing multiple signals in a real-world enclosure, a perfect separation is impossible since acoustic waves cannot be stopped without an acoustic barrier. Hence, there will be cross-talk between the individual sound zones, which are occupied by individual listeners.
y1(k)=y1,1(k)+y1,2(k)=u1(k)*h1,1(k)+u2(k)*h1,2(k),  (1)
y2(k)=y2,2(k)+y2,1(k)=u2(k)*h2,2(k)+u1(k)*h2,1(k),  (2)
where * denotes the convolution, as defined by
Here, y1,2(k) and y2,1(k) are considered to be unwanted interfering signal components, in contrast to the desired components y1,1(k) and y2,2(k). When u1(k) and u2(k) describe entirely different acoustic scenes, only a very small contribution of u2(k) in y1(k) compared to the contribution of u1(k) in y1(k) is acceptable. The same holds for y2(k) with reversed indices.
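The signal model of Equations (1) and (2) can be sketched numerically. The 3-tap impulse responses below are hypothetical examples, chosen only so that the in-zone paths h1,1 and h2,2 carry more energy than the cross-zone paths h1,2 and h2,1:

```python
import numpy as np

# Hypothetical impulse responses: h11/h22 model in-zone paths,
# h12/h21 model the (weaker) cross-zone leakage paths.
h11 = np.array([1.0, 0.5, 0.25])
h22 = np.array([1.0, 0.4, 0.16])
h12 = np.array([0.1, 0.05, 0.0])
h21 = np.array([0.08, 0.04, 0.0])

rng = np.random.default_rng(0)
u1 = rng.standard_normal(1000)  # acoustic scene for zone 1
u2 = rng.standard_normal(1000)  # acoustic scene for zone 2

# Equations (1) and (2): each zone signal is the sum of the desired
# component and the interfering (cross-talk) component.
y1 = np.convolve(u1, h11) + np.convolve(u2, h12)
y2 = np.convolve(u2, h22) + np.convolve(u1, h21)

# For the two scenes to remain separable, the interfering component
# must carry much less power than the desired one.
p_desired = np.mean(np.convolve(u1, h11) ** 2)
p_interference = np.mean(np.convolve(u2, h12) ** 2)
```

With these example responses, p_interference is roughly two orders of magnitude below p_desired, which is the situation the acceptable-contribution condition above describes.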
A straightforward way to achieve this is to design the loudspeaker setup such that h1,1(k) and h2,2(k) exhibit a higher energy compared to h1,2(k) and h2,1(k), which describe cross-zone reproduction. One example for this would be to use loudspeakers located near the listeners (US 2003/0103636, US 2003/0142842), where using headphones can be seen as an extreme case of such a setup. However, placing loudspeakers too close to the listeners is often unacceptable, because this can interfere with the listeners' movement, so this approach is limited in practical applications.
An approach to overcome this is to use directional loudspeakers, where the loudspeaker directivity is typically higher for higher frequencies (see [35]: JP 5345549, and [21]: US 2005/0190935 A1). Unfortunately, this approach is only suitable for higher frequencies (see [1]).
Another approach is to utilize a loudspeaker array in conjunction with suitable prefilters for a personalized audio reproduction.
In the example of
It should be noted that multizone reproduction is generally not limited to providing two signals to two zones. In fact, the numbers of sources, loudspeakers and listening zones can be arbitrary. The following explanations and definitions can be used for a general scenario with NS signal sources, NL loudspeakers, and NM considered positions in the NZ listening zones. In such a scenario, it is possible that multiple signals are reproduced in an individual zone to achieve a spatial sound reproduction. The corresponding signal model is shown in
u(k)=(u1(k),u2(k), . . . ,uNS(k))T,  (4)
x(k)=(x1(k),x2(k), . . . ,xNL(k))T,  (5)
y(k)=(y1(k),y2(k), . . . ,yNM(k))T,  (6)
x(k)=G(k)*u(k), (7)
y(k)=H(k)*x(k). (8)
Here, a representation of Equation (3) is given by
assuming that the impulse responses captured in G(k) are limited to be non-zero only for 0≤k<LG.
The matrices G(k) and H(k) describe the prefilter impulse responses and the room impulse responses according to
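The matrix convolutions of Equations (7) and (8) expand, per output channel, into a sum of scalar convolutions. A minimal sketch (the helper name matrix_convolve is ours, not from the text):

```python
import numpy as np

def matrix_convolve(H, x):
    """Convolve a matrix of impulse responses H[m][l] (each a 1-D array)
    with a bank of equal-length signals x[l], as in y(k) = H(k) * x(k):
    output m is the sum over l of conv(h_{m,l}, x_l)."""
    n_out = len(H)
    # Output length of a 'full' convolution with the longest response.
    L = max(len(h) for row in H for h in row) + len(x[0]) - 1
    y = [np.zeros(L) for _ in range(n_out)]
    for m, row in enumerate(H):
        for l, h in enumerate(row):
            c = np.convolve(x[l], h)
            y[m][: len(c)] += c
    return y
```

With H chosen as an identity-like matrix of unit impulses, each output reproduces the corresponding input unchanged, which is a quick sanity check of the model.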
For each source signal there are sound zones in which the signal should be reproduced, the so called “bright zones”. At the same time, there are zones where the individual signal should not be reproduced, the “dark zones”.
For example, in
For multizone reproduction, the prefilters are typically designed such that the ratio between the acoustic energy radiated into the bright zones and the acoustic energy radiated into the dark zones is maximized. This ratio is often termed acoustic contrast (see [3]) and can be measured by defining Bq(k) and Dq(k), which capture the room impulse responses from each loudspeaker to the considered sampling points in the bright and dark zones, respectively. Since this assignment is different for every source signal, both matrices depend on the source signal index q. Additionally, the matrix G(k) may be decomposed into
G(k)=(g1(k),g2(k), . . . ,gNS(k)),  (12)
where
gq(k)=(g1,q(k),g2,q(k), . . . ,gNL,q(k))T  (13)
captures the individual filter coefficients gl,q(k) that are associated with loudspeaker l and source q. Eventually, the acoustic contrast achieved for source q can be defined according to
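Assuming the acoustic contrast Cq of Equation (14) is the ratio of the acoustic energy observed at the bright-zone sampling points to that at the dark-zone sampling points, it can be sketched as follows (the function name and the toy impulse responses in the test are illustrative only):

```python
import numpy as np

def acoustic_contrast(B, D, g):
    """Ratio of acoustic energy radiated into the bright zone over the
    dark zone for one source.  B[m][l] / D[m][l] hold the room impulse
    responses from loudspeaker l to sampling point m in the bright /
    dark zone; g[l] holds the prefilter of loudspeaker l (1-D arrays)."""
    def zone_energy(Z):
        e = 0.0
        for row in Z:
            # Pressure at one sampling point: sum of filtered contributions.
            p = np.zeros(max(len(h) + len(gl) - 1 for h, gl in zip(row, g)))
            for h, gl in zip(row, g):
                c = np.convolve(h, gl)
                p[: len(c)] += c
            e += np.sum(p ** 2)  # energy over time at this point
        return e
    return zone_energy(B) / zone_energy(D)
```

For a setup where one loudspeaker reaches the bright point with gain 1 and the dark point only with gain 0.1, the contrast evaluates to 100 (20 dB), matching the energy-ratio definition.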
An example of the reproduction levels in bright and dark zone with resulting acoustic contrast is shown in
It should be noted that if every impulse response in H(k) is assigned either to the dark zone or to the bright zone for a source, the following holds:
H(k)=(BqT(k),DqT(k))T ∀q,k.  (15)
There are many methods known to determine G(k) such that Cq achieves a high value (see [1], [3], [4], [5] and [6]).
Difficulties exist when directional sound reproduction is conducted.
Some of the approaches mentioned above try to achieve multizone reproduction by directional sound radiation. Such an approach faces major physical challenges, which are described below.
When a wave is emitted through a finite-size aperture, the ratio of aperture size to wavelength determines how well the radiation direction can be controlled. Better control is achieved for smaller wavelengths and larger apertures. For the angular resolution of a telescope, this is described by the approximation
Θ≈1.22·λ/D,
where Θ is the minimum angle between two points that can be distinguished, λ is the wavelength and D the diameter of the telescope, see:
Since acoustic waves obey the same wave equation, this rule is also applicable to acoustic waves. Technical reasons limit the size of loudspeaker membranes or horn apertures, which implies a lower limit for the frequencies at which directional reproduction is effectively possible. The same holds for loudspeaker arrays, where not the size of the individual loudspeakers is relevant but the dimensions of the entire loudspeaker array. Unlike the drivers of individual loudspeakers, array dimensions are primarily constrained by economic rather than technical reasons.
When using loudspeaker arrays for directional sound reproduction, the minimum inter-loudspeaker distance implies an upper frequency limit. This is because the sampling theorem, see:
is also relevant in the spatial domain, where two sampling points per wavelength may be used in order to achieve a controlled directional radiation. Placing loudspeakers sufficiently close to control the directional radiation within the audible frequency range is typically not a problem. However, the resulting minimum aperture size (see above) and a minimum inter-loudspeaker distance imply a minimum number of loudspeakers that depends quadratically on the frequency range in which the radiation direction should be controlled. Since the expenses for a loudspeaker array are proportional to the number of loudspeakers, there are effective frequency limits for commercially viable loudspeaker array reproduction solutions.
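This sizing argument can be sketched for a uniform linear array: the aperture needed for a given angular resolution is set by the lowest frequency (cf. the telescope approximation), while the maximum element spacing (half a wavelength, by the spatial sampling theorem) is set by the highest frequency. The function below is only an illustrative back-of-the-envelope estimate under these simplifying assumptions, not a design rule from the text:

```python
import math

C_SOUND = 343.0  # speed of sound in air, m/s

def min_loudspeakers(f_low, f_high, theta_rad):
    """Rough lower bound on the element count of a uniform linear array
    that should (a) resolve an angle theta_rad at the lowest controlled
    frequency (aperture >= lambda_low / theta) and (b) avoid spatial
    aliasing at the highest frequency (spacing <= lambda_high / 2)."""
    lam_low = C_SOUND / f_low
    lam_high = C_SOUND / f_high
    aperture = lam_low / theta_rad   # minimum array length
    spacing = lam_high / 2.0         # maximum inter-element distance
    return math.ceil(aperture / spacing) + 1
```

Widening the controlled band by a factor r at both ends (f_low/r to f_high·r) multiplies the estimate by roughly r², illustrating the quadratic cost growth mentioned above.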
Furthermore, the enclosure where the multiple sound zones should be created can influence the achieved radiation pattern itself. For higher frequencies, large enclosures, and straight walls, models can be found to analytically consider the enclosure geometry in the design of directional loudspeakers or prefilters for loudspeaker array reproduction. However, this is no longer possible when the enclosure exhibits a (general) curvature, when arbitrarily shaped obstacles are placed in the enclosure, or when the dimensions of the enclosure are on the order of magnitude of the wavelength. Such a setup exists, e.g., in a car cabin and will be referred to as a complex setup in the following. Under such conditions, exciting a controlled sound field by directional loudspeakers or electrically steered arrays is very challenging because of the sound reflected from the enclosure, which cannot be exactly modeled. Under such conditions, even non-directional individually driven loudspeakers may effectively exhibit an uncontrolled directional pattern.
Some of the known documents relate to (cross-) signal dependent gain control.
US 2005/0152562 A1 (see [8]) relates to in-car surround sound reproduction with different operation modes related to different loudness patterns on the individual seats and different equalization patterns.
US 2013/170668 A1 (see [9]) describes mixing an announcement sound to an entertainment signal. The mix between both signals is individual for each of two zones.
US 2008/0071400 A1 (see [10]) discloses signal processing depending on source or content information considering two different signals to relieve the driver from being “acoustically overloaded”.
US 2006/0034470 A1 (see [11]) relates to equalization, compression, and “mirror image” equalization to reproduce audio in high-noise conditions with increased quality.
US 2011/0222695 A1 (see [12]) discloses audio compression of subsequently played audio tracks, also considering the ambient noise and psychoacoustic models.
US 2009/0232320 A1 (see [13]) describes compression to have an announcement sound louder than an entertainment program, with user interaction.
US 2015/0256933 A1 (see [14]) discloses a balance level of telephone and entertainment content to minimize acoustic leakage of content.
U.S. Pat. No. 6,674,865 B1 (see [15]) relates to automatic gain control for hands-free telephony.
DE 30 45 722 A1 (see [16]) discloses parallel compression to noise level and level increase for announcement.
Other known documents relate to multizone reproduction.
US 2012/0140945 A1 (see [17]) relates to an explicit sound zones implementation. High frequencies are reproduced by a loudspeaker; low frequencies use constructive and destructive interference by manipulating amplitude, phase, and delay. To determine how amplitude, phase, and delay have to be manipulated, [17] proposes to use special techniques, the “Tan Theta” method or solving an eigenvalue problem.
US 2008/0273713 A1 (see [18]) discloses sound zones, array of speakers located near each seat, wherein a loudspeaker array is explicitly assigned to each of the zones.
US 2004/0105550 A1 (see [19]) relates to sound zones, directional close to head, non-directional away from listener.
US 2006/0262935 A1 (see [20]) relates to personal sound zones explicitly.
US 2005/0190935 A1 (see [21]) relates to headrest or seat back loudspeakers for personalized playback.
US 2008/0130922 A1 (see [22]) discloses a sound zones implementation with directional loudspeakers near the front seat, non-directional loudspeakers near the back seat, and signal processing such that front and back cancel the leakage of each other.
US 2010/0329488 A1 (see [23]) describes sound zones in a vehicle with at least one loudspeaker and one microphone associated with each zone.
DE 10 2014 210 105 A1 (see [24]) relates to sound zones realized with binaural reproduction, also using crosstalk-cancellation (between ears), and also to a reduction of cross-talk between zones.
US 2011/0286614 A1 (see [25]) discloses sound zones with binaural reproduction based on crosstalk-cancellation and head tracking.
US 2007/0053532 A1 (see [26]) describes headrest loudspeakers.
US 2013/0230175 A1 (see [27]) relates to sound zones, explicitly using microphones.
WO 2016/008621 A1 (see [28]) discloses a head and torso simulator.
Further known documents relate to directional reproduction.
US 2008/0273712 A1 (see [29]) discloses a directional loudspeaker mounted to a vehicle seat.
U.S. Pat. No. 5,870,484 (see [30]) describes stereo reproduction with directional loudspeakers.
U.S. Pat. No. 5,809,153 (see [31]) relates to three loudspeakers pointing in three directions with circuitry to use them as arrays.
US 2006/0034467 A1 (see [32]) discloses sound zones that relate to the excitation of the headliner by special transducers.
US 2003/0103636 A1 (see [33]) relates to a personalized reproduction and silencing and to headrest arrays to produce the sound field at the listener's ears, including silencing.
US 2003/0142842 A1 (see [34]) relates to headrest loudspeakers.
JP 5345549 (see [35]) describes parametric loudspeakers in front seats pointing back.
US2014/0056431 A1 (see [36]) relates to directional reproduction.
US 2014/0064526 A1 (see [37]) relates to producing a binaural and localized audio signal to a user.
US 2005/0069148 A1 (see [38]) discloses the use of loudspeakers in the headlining with an according delay.
U.S. Pat. No. 5,081,682 (see [39]), DE 90 15 454 (see [40]), U.S. Pat. No. 5,550,922 (see [41]), U.S. Pat. No. 5,434,922 (see [42]), U.S. Pat. No. 6,078,670 (see [43]), U.S. Pat. No. 6,674,865 B1 (see [44]), DE 100 52 104 A1 (see [45]) and US 2005/0135635 A1 (see [46]) relate to gain adaptation or spectral modification of signals according to measured ambient noise or estimated ambient noise, e.g., from speed.
DE 102 42 558 A1 (see [47]) discloses an antiparallel volume control.
US 2010/0046765 A1 (see [48]) and DE 10 2010 040 689 (see [49]) relate to an optimized cross-fade between subsequently reproduced acoustic scenes.
US 2008/0103615 A1 (see [50]) describes a variation of panning dependent on an event.
U.S. Pat. No. 8,190,438 B1 (see [51]) describes an adjustment of spatial rendering depending on a signal in an audio stream.
WO 2007/098916 A1 (see [52]) describes reproducing a warning sound.
US 2007/0274546 A1 (see [53]) determines which piece of music can be played in combination with another.
US 2007/0286426 A1 (see [54]) describes the mixing of one audio signal (e.g. from a telephone) to another (e.g. music).
Some known documents describe audio compression and gain control.
U.S. Pat. No. 5,018,205 (see [55]) relates to band-selective adjustment of gain in presence of ambient noise.
U.S. Pat. No. 4,944,018 (see [56]) discloses speed controlled amplification.
DE 103 51 145 A1 (see [57]) relates to frequency-dependent amplification to overcome a frequency-dependent threshold.
Some known documents relate to noise cancellation.
JP 2003-255954 (see [58]) discloses active noise cancellation using loudspeakers located near listeners.
U.S. Pat. No. 4,977,600 (see [59]) discloses attenuation of picked-up noise for an individual seat. U.S. Pat. No. 5,416,846 (see [60]) describes active noise cancellation with an adaptive filter.
Further known documents relate to array beamforming for audio.
US 2007/0030976 A1 (see [61]) and JP 2004-363696 (see [62]) disclose array beamforming for audio reproduction, delay and sum beamformer.
It would be highly desirable if improved concepts were provided that achieve multizone reproduction within a sufficient range of the audible frequency spectrum.
According to an embodiment, an apparatus for generating a plurality of loudspeaker signals from two or more audio source signals, wherein each of the two or more audio source signals shall be reproduced in one or more of two or more sound zones, and wherein at least one of the two or more audio source signals shall not be reproduced in at least one of the two or more sound zones, may have: an audio preprocessor configured to modify each of two or more initial audio signals to obtain two or more preprocessed audio signals, and a filter configured to generate the plurality of loudspeaker signals depending on the two or more preprocessed audio signals, wherein the audio preprocessor is configured to use the two or more audio source signals as the two or more initial audio signals, or wherein the audio preprocessor is configured to generate for each audio source signal of the two or more audio source signals an initial audio signal of the two or more initial audio signals by modifying said audio source signal, wherein the audio preprocessor is configured to modify each initial audio signal of the two or more initial audio signals depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals, and wherein the filter is configured to generate the plurality of loudspeaker signals depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
According to another embodiment, a method for generating a plurality of loudspeaker signals from two or more audio source signals, wherein each of the two or more audio source signals shall be reproduced in one or more of two or more sound zones, and wherein at least one of the two or more audio source signals shall not be reproduced in at least one of the two or more sound zones, may have the steps of: modifying each of two or more initial audio signals to obtain two or more preprocessed audio signals, and generating the plurality of loudspeaker signals depending on the two or more preprocessed audio signals, wherein the two or more audio source signals are used as the two or more initial audio signals, or wherein for each audio source signal of the two or more audio source signals an initial audio signal of the two or more initial audio signals is generated by modifying said audio source signal, wherein each initial audio signal of the two or more initial audio signals is modified depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals, and wherein the plurality of loudspeaker signals is generated depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
Another embodiment may have a non-transitory digital storage medium having a computer program stored thereon to perform the method for generating a plurality of loudspeaker signals from two or more audio source signals, wherein each of the two or more audio source signals shall be reproduced in one or more of two or more sound zones, and wherein at least one of the two or more audio source signals shall not be reproduced in at least one of the two or more sound zones, the method including: modifying each of two or more initial audio signals to obtain two or more preprocessed audio signals, and generating the plurality of loudspeaker signals depending on the two or more preprocessed audio signals, wherein the two or more audio source signals are used as the two or more initial audio signals, or wherein for each audio source signal of the two or more audio source signals an initial audio signal of the two or more initial audio signals is generated by modifying said audio source signal, wherein each initial audio signal of the two or more initial audio signals is modified depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals, and wherein the plurality of loudspeaker signals is generated depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced, when said computer program is run by a computer.
An apparatus for generating a plurality of loudspeaker signals from two or more audio source signals is provided. Each of the two or more audio source signals shall be reproduced in one or more of two or more sound zones, and at least one of the two or more audio source signals shall not be reproduced in at least one of the two or more sound zones. The apparatus comprises an audio preprocessor configured to modify each of two or more initial audio signals to obtain two or more preprocessed audio signals. Moreover, the apparatus comprises a filter configured to generate the plurality of loudspeaker signals depending on the two or more preprocessed audio signals. The audio preprocessor is configured to use the two or more audio source signals as the two or more initial audio signals, or the audio preprocessor is configured to generate for each audio source signal of the two or more audio source signals an initial audio signal of the two or more initial audio signals by modifying said audio source signal. Moreover, the audio preprocessor is configured to modify each initial audio signal of the two or more initial audio signals depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals. The filter is configured to generate the plurality of loudspeaker signals depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
Moreover, a method for generating a plurality of loudspeaker signals from two or more audio source signals is provided. Each of the two or more audio source signals shall be reproduced in one or more of two or more sound zones, and at least one of the two or more audio source signals shall not be reproduced in at least one of the two or more sound zones. The method comprises:
The two or more audio source signals are used as the two or more initial audio signals, or, for each audio source signal of the two or more audio source signals, an initial audio signal of the two or more initial audio signals is generated by modifying said audio source signal. Each initial audio signal of the two or more initial audio signals is modified depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals. The plurality of loudspeaker signals is generated depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
Moreover, computer programs are provided, wherein each of the computer programs is configured to implement one of the above-described methods when being executed on a computer or signal processor.
Some embodiments provide a signal-dependent level modification to reduce the perceived acoustic leakage when using measures for directional reproduction of independent entertainment signals.
In embodiments, optionally, a combination of different reproduction concepts for different frequency bands is employed.
Optionally, some embodiments use least-squares optimized FIR filters (FIR=finite impulse response) based on impulse responses that have been measured once. Details of some embodiments are described below, when a prefilter according to embodiments is described.
Some of the embodiments are optionally employed in an automotive scenario, but are not limited to such a scenario.
Some embodiments relate to concepts that provide individual audio content to listeners occupying the same enclosure without the use of headphones or alike. Inter alia, these embodiments differ from the state-of-the-art by a smart combination of different reproduction approaches with a signal-dependent preprocessing such that a large perceptual acoustic contrast is achieved while retaining a high level of audio quality.
Some embodiments provide a filter design.
Some of the embodiments employ additional signal-dependent processing.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
The apparatus comprises an audio preprocessor 110 configured to modify each of two or more initial audio signals to obtain two or more preprocessed audio signals. Moreover, the apparatus comprises a filter 140 configured to generate the plurality of loudspeaker signals depending on the two or more preprocessed audio signals. The audio preprocessor 110 is configured to use the two or more audio source signals as the two or more initial audio signals, or the audio preprocessor 110 is configured to generate for each audio source signal of the two or more audio source signals an initial audio signal of the two or more initial audio signals by modifying said audio source signal. Moreover, the audio preprocessor 110 is configured to modify each initial audio signal of the two or more initial audio signals depending on a signal power or a loudness of another initial audio signal of the two or more initial audio signals.
The filter 140 is configured to generate the plurality of loudspeaker signals depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
While the approaches of the state of the art can achieve a considerable acoustic contrast, the contrast achieved by conventional methods is typically not sufficient to provide multiple unrelated acoustic scenes to inhabitants of the same enclosure whenever high-quality audio reproduction is desired.
The acoustic contrast perceived by the listeners shall be improved; this perceived contrast depends on the acoustic contrast as defined in Equation (14) above, but is not identical to it. The aim is to increase the acoustic contrast perceived by the listeners rather than to maximize the contrast in acoustic energy. The perceived acoustic contrast will be referred to as subjective acoustic contrast, while the contrast in acoustic energy will be referred to as objective acoustic contrast in the following. Some embodiments employ measures to facilitate directional audio reproduction and measures to shape the acoustic leakage such that it becomes less noticeable.
In addition to
According to some embodiments, the apparatus may, e.g., further comprise two or more band splitters 121, 122 being configured to conduct band splitting on the two or more preprocessed audio signals to obtain a plurality of band-splitted audio signals. The filter 140 may, e.g., be configured to generate the plurality of loudspeaker signals depending on the plurality of band-splitted audio signals.
In some embodiments, the apparatus may, e.g., further comprise one or more spectral shapers 131, 132, 133, 134 being configured to modify a spectral envelope of one or more of the plurality of band-splitted audio signals to obtain one or more spectrally shaped audio signals. The filter 140 may, e.g., be configured to generate the plurality of loudspeaker signals depending on the one or more spectrally shaped audio signals.
In
There are two signal sources shown in
The (optional) band splitters 121, 122 realize the (optional) band-splitting processing step and split the signal into multiple frequency bands, just like an audio crossover in a multi-way loudspeaker. However, unlike an audio crossover in a loudspeaker, maximizing the radiated acoustic power is only a secondary objective of this band splitter. Its primary objective is to distribute the individual frequency bands to individual reproduction measures such that the acoustic contrast is maximized, given certain quality constraints. For example, the signal w1(k) will later be fed to a single loudspeaker as signal x1(k). Given that this loudspeaker is a directional loudspeaker, w1(k) would be high-pass filtered because the directivity of this loudspeaker is low at low frequencies. On the other hand, w2(k) will later be filtered to obtain x2(k) and x3(k) such that the according loudspeakers are used as an electrically steered array. In a more complex scenario, there can be more outputs of the band splitter such that the signals are distributed to multiple reproduction methods according to the needs of the application (see also below, where a loudspeaker-enclosure-microphone system according to embodiments is described).
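A minimal sketch of such a band splitter uses a windowed-sinc FIR low-pass (e.g. routed to an electrically steered array) and its spectral complement as the high-pass (e.g. routed to a directional loudspeaker). The cut-off frequency, tap count, and sample rate below are arbitrary illustration values, not parameters from the text:

```python
import numpy as np

def crossover(signal, fc, fs, ntaps=101):
    """Split a signal into a low band and a complementary high band.
    The high-pass is delta minus the low-pass, so both bands sum back
    to a delayed copy of the input (a perfectly complementary split)."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h_lp = np.sinc(2 * fc / fs * n) * (2 * fc / fs) * np.hamming(ntaps)
    h_lp /= h_lp.sum()                 # unit gain at DC
    h_hp = -h_lp
    h_hp[(ntaps - 1) // 2] += 1.0      # delta minus low-pass
    return np.convolve(signal, h_lp), np.convolve(signal, h_hp)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
low, high = crossover(x, fc=2000.0, fs=48000.0)

# Complementarity check: low + high equals the input delayed by
# (ntaps - 1) / 2 = 50 samples, up to numerical rounding.
delay = 50
err = np.max(np.abs((low + high)[delay:delay + 400] - x[:400]))
```

Since only the distribution of bands to reproduction measures matters here, a complementary (rather than power-optimized) split is the simplest consistent choice for a sketch.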
As discussed above, the measures for directional reproduction applied later will exhibit a certain leakage from one zone to the other. This leakage can be measured as a breakdown in acoustic contrast between the zones. In a complex setup, these breakdowns can occur at multiple points in the frequency spectrum for each of the envisaged directional reproduction methods, which constitutes a major obstacle in the application of those methods. It is well known that timbre variations are acceptable to a certain extent. These degrees of freedom can be used to attenuate contrast-critical frequency bands.
Thus, the (optional) spectral shapers 131, 132, 133, 134 are designed such that the signals reproduced later are attenuated in those parts of the frequency spectrum where a low acoustic contrast is expected. Unlike the band splitters, the spectral shapers are intended to modify the timbre of the reproduced sound. Moreover, this processing stage can also involve delays and gains such that the intentionally reproduced acoustic scene can spatially mask the acoustic leakage.
The blocks denoted by G1(k) and G2(k) may, e.g., describe linear time-invariant filters that are optimized to maximize the objective acoustic contrast given subjective quality constraints. There are various possibilities to determine those filters, which include (but are not limited to) ACC, pressure matching (see [4] and [6]), and loudspeaker beamforming. It was found that a least-squares pressure matching approach as described below, when a prefilter according to embodiments is described, is especially suitable when measured impulse responses are considered for the filter optimization. This can be an advantageous concept for implementation.
Other embodiments employ the above approach by operating on calculated impulse responses. In particular embodiments, impulse responses are calculated to represent the free-field impulse responses from the loudspeakers to the microphones.
Further embodiments employ the above approach by operating on calculated impulse responses that have been obtained using an image source model of the enclosure.
It should be noted that the impulse responses are measured once, such that no microphones have to be used during operation. Unlike ACC, the pressure matching approach prescribes a given magnitude and phase in the respective bright zone. This results in a high reproduction quality. Traditional beamforming approaches are also suitable when high frequencies should be reproduced.
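A single-frequency-bin sketch of least-squares pressure matching: the transfer matrix, its dimensions, and the target pressures below are hypothetical; a real design would solve such a problem per frequency bin using measured responses:

```python
import numpy as np

# Hypothetical single-bin transfer matrix H (n_mics x n_louds): complex
# responses from each loudspeaker to each sampling point.
rng = np.random.default_rng(2)
n_mics, n_louds = 6, 4
H = (rng.standard_normal((n_mics, n_louds))
     + 1j * rng.standard_normal((n_mics, n_louds)))

# Desired pressure: unit magnitude (zero phase) at the three bright-zone
# points, zero pressure at the three dark-zone points.
p_des = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0], dtype=complex)

# Least-squares prefilter weights for this bin: minimize ||H g - p_des||.
g, *_ = np.linalg.lstsq(H, p_des, rcond=None)
p_achieved = H @ g

bright_energy = np.sum(np.abs(p_achieved[:3]) ** 2)
dark_energy = np.sum(np.abs(p_achieved[3:]) ** 2)
```

Because the target prescribes both magnitude and phase in the bright zone, the solution controls the reproduced sound field there directly, which is the quality advantage over pure contrast maximization noted above.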
The block denoted by H(k) represents the LEMS, where each input is associated with one loudspeaker. Each of the outputs is associated with an individual listener that receives the superposition of all loudspeaker contributions in his individual sound zone. The loudspeakers that are driven without using the prefilters G1(k) and G2(k) are either directional loudspeakers radiating primarily into one sound zone or loudspeakers that are arranged near (or in) an individual sound zone such that they primarily excite sound in that zone. For higher frequencies, directional loudspeakers can be built without significant effort. Hence, these loudspeakers can be used to provide the high-range frequencies to the listeners, where the loudspeakers do not have to be placed directly at the listeners' ears.
In the following, embodiments of the present invention are described in more detail.
At first, preprocessing according to embodiments is described. In particular, an implementation of the block denoted by “Preprocessing” in
The two input signals u1(k) and u2(k) are also referred to as audio source signals in the following.
In a first, optional stage, the power of both input signals u1(k) and u2(k) (the audio source signals) is normalized to alleviate the parameter choice for the following processing.
Thus, according to an optional embodiment, the audio preprocessor 110 may, e.g., be configured to generate the two or more initial audio signals d1(k) and d2(k) by normalizing a power of each of the two or more audio source signals u1(k) and u2(k).
The obtained power estimates b1(k) and b2(k) typically describe a long-term average, in contrast to the estimators used in a later stage that typically consider a smaller time span. The update of b1(k) and b2(k) can be connected with an activity detection for u1(k) and u2(k), respectively, such that the update of b1(k) or b2(k) is held when there is no activity in u1(k) or u2(k). The signals c1(k) and c2(k) may, e.g., be inversely proportional to b1(k) and b2(k), respectively, such that a multiplication of c1(k) and c2(k) with u1(k) and u2(k), respectively, yields signals d1(k) and d2(k) that exhibit comparable signal power. While using this first stage is not absolutely necessary, it ensures a reasonable working point for the relative processing of the signals d1(k) and d2(k), which alleviates finding suitable parameters for the following steps. It should be noted that if multiple instances of this processing block are placed after the “Band splitter” blocks or the “Spectral shaper” blocks, the power normalization still has to be applied before the “Band splitter” blocks.
By a normalization of the signals, their relative level difference is already reduced. However, this is typically not enough for the intended effect, because the power estimates are long-term, while the level variations of typical acoustic scenes are rather short-term processes. In the following, it is explained how the difference in relative power of the individual signals is explicitly reduced on a short-term basis, which constitutes the primary objective of the preprocessing block.
The two signals d1(k) and d2(k) that are supposed to be scaled and reproduced, are also referred to as initial audio signals in the following.
As described above, the audio preprocessor 110 may, e.g., be configured to generate for each audio source signal of the two or more audio source signals u1(k), u2(k) an initial audio signal of the two or more initial audio signals d1(k), d2(k) by modifying said audio source signal, e.g., by conducting power normalization.
In alternative embodiments, however, the audio preprocessor 110 may, e.g., be configured to use the two or more audio source signals u1(k), u2(k) as the two or more initial audio signals d1(k), d2(k).
In
These signals may, e.g., be used to determine the scaling factors g′1(k) and g′2(k) according to
g′1=f(e1,e2), (17)
g′2=f(e2,e1), (18)
where, in some embodiments, f(x,y) is a function that is monotonically increasing with respect to y and monotonically decreasing with respect to x, while its value may, for example, be limited to an absolute range.
As a consequence, the value of f(x,y) may, e.g., also be monotonically increasing with the ratio y/x.
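As an illustration, one hypothetical choice satisfying the stated properties is f(x, y) = clip(√(y/x), lo, hi); this concrete function and its bounds are illustrative assumptions, not a form prescribed by the embodiments:

```python
import numpy as np

def f(x, y, lo=0.5, hi=2.0):
    """Hypothetical example of a scaling function f(x, y):
    monotonically decreasing in x, increasing in y (hence
    increasing in the ratio y/x), limited to the range [lo, hi]."""
    return float(np.clip(np.sqrt(y / x), lo, hi))

# Monotonicity in the ratio y/x:
assert f(1.0, 4.0) > f(1.0, 1.0) > f(4.0, 1.0)
# The value is limited to an absolute range:
assert f(1.0, 1e6) == 2.0 and f(1e6, 1.0) == 0.5
```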
The factors g′1(k) and g′2(k) are then used to scale the signals d1(k) and d2(k), respectively, to obtain the output signals h1(k) and h2(k). The output signals h1(k) and h2(k) may, e.g., be fed into one or more modules which are configured to conduct multizone reproduction, e.g., according to an arbitrary multizone reproduction method.
Thus, in some embodiments, the audio preprocessor 110 may, e.g., be configured to modify each initial audio signal of the two or more initial audio signals depending on the signal power or the loudness of another initial audio signal of the two or more initial audio signals by modifying said initial audio signal of the two or more initial audio signals depending on a ratio of a first value (y) to a second value (x). The second value (x) may, e.g., depend on the signal power of said initial audio signal, and the first value (y) may, e.g., depend on the signal power of said another initial audio signal of the two or more initial audio signals. Or, the second value (x) may, e.g., depend on the loudness of said initial audio signal, and the first value (y) may, e.g., depend on the loudness of said another initial audio signal of the two or more initial audio signals.
According to some embodiments, the audio preprocessor 110 may, e.g., be configured to modify each initial audio signal of the two or more initial audio signals depending on the signal power or the loudness of another initial audio signal of the two or more initial audio signals by determining a gain for said initial audio signal and by applying the gain on said initial audio signal. Moreover, the audio preprocessor 110 may, e.g., be configured to determine the gain depending on the ratio between the first value and the second value, said ratio being a ratio between the signal power of said another initial audio signal of the two or more initial audio signals and the signal power of said initial audio signal as the second value, or said ratio being a ratio between the loudness of said another initial audio signal of the two or more initial audio signals and the loudness of said initial audio signal as the second value.
In some embodiments, the audio preprocessor 110 may, e.g., be configured to determine the gain depending on a function that monotonically increases with the ratio between the first value and the second value.
According to some embodiments, e.g., none of the signals u1(k), d1(k), or h1(k) is mixed to any of the signals u2(k), d2(k), or h2(k).
In the following, the implementation of the processing step is explained in more detail. Since the processing steps for u1(k) and u2(k) are identical, only the processing steps for u1(k) will be described, which are also applied to u2(k) by exchanging the indices 1 and 2.
A rule to obtain b1(k) may, e.g., be given by
b1(k)=λ1b1(k−1)+(1−λ1)Σl=1Lu12(k,l), (19)
where λ1 may, e.g., be chosen close to but less than 1.
In the above formula, u1(k,l) denotes channel l of u1(k), which is assumed to comprise one or more audio channels; L indicates the number of audio channels of u1(k).
In a simple case, u1(k) comprises only a single channel and formula (19) becomes:
b1(k)=λ1b1(k−1)+(1−λ1)u12(k) (19a)
λ1 may be in the range 0<λ1<1. Advantageously, λ1 may, e.g., be close to 1. For example, λ1 may, e.g., be in the range 0.9<λ1<1.
In other cases, u1(k), for example, comprises two or more channels.
The scaling factor c1(k) can then be determined according to
such that
d1(k,l)=c1(k)u1(k,l) (21)
describes the scaled audio signal.
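The normalization of Equations (19) to (21) can be sketched as follows for a single-channel signal. The initialization of b(k), the regularization constant eps, and the concrete choice c(k) = 1/√b(k) (one scaling that yields comparable signal power; formula (20) itself is given as a figure in the source) are assumptions:

```python
import numpy as np

def normalize_power(u, lam1=0.99, eps=1e-8):
    """Sketch of the long-term power normalization for a single-channel
    signal u(k):
      b(k) = lam1*b(k-1) + (1-lam1)*u(k)**2   # Eq. (19a)
      c(k) = 1/sqrt(b(k))                     # assumed concrete scaling
      d(k) = c(k)*u(k)                        # Eq. (21)
    The initialization from the first samples and eps are assumptions."""
    b = np.mean(u[:100] ** 2) + eps
    d = np.empty_like(u, dtype=float)
    for k in range(len(u)):
        b = lam1 * b + (1.0 - lam1) * u[k] ** 2
        d[k] = u[k] / np.sqrt(b + eps)
    return d

rng = np.random.default_rng(0)
loud = 10.0 * rng.standard_normal(20000)   # high-power source signal
d = normalize_power(loud)
# After convergence, the normalized signal has roughly unit power:
assert 0.5 < np.mean(d[5000:] ** 2) < 2.0
```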
A rule to obtain e1(k) may, e.g., be given by
e1(k)=λ2e1(k−1)+(1−λ2)Σl=1Ld12(k,l), (22)
λ2 may be in the range 0<λ2<1.
In embodiments, for λ1 of formula (19) and λ2 of formula (22): λ1>λ2.
There is a variety of other options. One of them, according to an embodiment, is the mean square value of d12(k,l) in a window of K samples, given by
Another definition, according to another embodiment, is the maximum squared value in such a window
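The three short-term estimators mentioned above (the recursive rule of Equation (22), the windowed mean square, and the windowed maximum square) can be sketched for a single-channel signal as follows; the parameter values are illustrative:

```python
import numpy as np

def e_recursive(d, lam2=0.9):
    """Recursive short-term power estimate, Eq. (22), single channel."""
    e = np.zeros(len(d))
    for k in range(1, len(d)):
        e[k] = lam2 * e[k - 1] + (1.0 - lam2) * d[k] ** 2
    return e

def e_mean_window(d, K=64):
    """Mean square value in a window of K samples (one alternative)."""
    return np.array([np.mean(d[max(0, k - K + 1):k + 1] ** 2)
                     for k in range(len(d))])

def e_max_window(d, K=64):
    """Maximum squared value in a window of K samples (another alternative)."""
    return np.array([np.max(d[max(0, k - K + 1):k + 1] ** 2)
                     for k in range(len(d))])

d = np.concatenate([np.zeros(200), np.ones(200)])  # step in signal power
for est in (e_recursive, e_mean_window, e_max_window):
    e = est(d)
    assert e[150] < 0.1 and e[-1] > 0.9  # all estimators track the power step
```

All three react on a time scale much shorter than the long-term estimate b1(k), which is the intended contrast described above.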
According to some embodiments, to determine g′1(k), the value e2(k) also has to be determined as described above. However, the actual method to determine e2(k), as well as the parameters, may differ from those chosen for e1(k) (for example, depending on the needs of the application). The actual gain g′1(k) can, e.g., be determined similar to the gaining rule that would be used for a conventional audio compressor, see:
but considering both e1(k) and e2(k).
According to an embodiment, a gaining rule of a corresponding downward compressor for the signal d1(k) would be
where T1 defines the compression threshold in dB and R the compression ratio, as used in a standard audio compressor. E.g., 1≤R≤100. For example, 1<R<100. For example, 2<R<100. E.g., 2<R<50.
In contrast to formulae (25) and (25′), a standard audio compressor according to the state of the art would not consider e2(k) for determining a gain for d1(k).
Another option is the implementation of an upward compressor defined by
which is similar except for the operating range (note the different condition) and different parameters. It should be noted that T2 defines a lower threshold in contrast to T1.
Some embodiments, where T2<T1, combine both gaining rules.
In embodiments, the resulting rule to obtain g′1(k) and g′2(k) can be any combination of upward and downward compressors, where practical implementations will typically involve setting bounds on the considered ranges of e1(k) and e2(k).
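Since the gaining rules (25) and (25′) themselves are given as figures in the source, the following sketch only illustrates one plausible combination of a downward and an upward compressor; taking the power ratio of the two zones in dB as the compressor level, and all parameter values, are assumptions:

```python
import numpy as np

def gain_db(e1, e2, T1=6.0, R1=4.0, T2=-6.0, R2=4.0):
    """Hedged sketch of a cross-zone compressor gain g'_1 in dB.
    The level is assumed to be the power ratio of the own signal to the
    other zone's signal in dB. Above T1 a downward compressor attenuates;
    below T2 an upward compressor amplifies (T2 < T1 combines both
    gaining rules, as described in the text)."""
    level = 10.0 * np.log10(e1 / e2)
    if level > T1:                      # downward compression range
        return (T1 - level) * (1.0 - 1.0 / R1)
    if level < T2:                      # upward compression (note the different condition)
        return (T2 - level) * (1.0 - 1.0 / R2)
    return 0.0                          # linear operating range in between

assert gain_db(1.0, 1.0) == 0.0     # equal power: no gain change
assert gain_db(100.0, 1.0) < 0.0    # much louder than the other zone: attenuate
assert gain_db(1.0, 100.0) > 0.0    # much quieter than the other zone: amplify
```

Unlike a standard compressor, the gain for one zone depends on the estimate e2(k) of the other zone, which is the key difference noted in the text.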
When more than two signals e1(k), e2(k), e3(k), . . . , eN(k), for example, N signals, are considered, formula (25) may, e.g., become:
For other gains g′2(k), g′3(k), . . . , g′N(k), formula (25) may, e.g., become:
Formula (25a) may, e.g., become:
For other gains g′2(k), g′3(k), . . . , g′N(k), formula (25a) may, e.g., become:
Further alternative rules can be defined to reduce the energy difference between both scenes as given by
where α=1 would cause the signal h1(k) to have the same energy as the signal d2(k), while α=0 would have no effect. A parameter chosen as 0<α<1 can be used to vary the intended influence of that step.
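One assumed concrete form realizing such a rule is the gain g′1 = (e2/e1)^(α/2); with this choice, α=1 matches the energy of h1(k) to that of d2(k) and α=0 leaves the signal unchanged:

```python
import numpy as np

def alpha_gain(e1, e2, alpha):
    """Assumed concrete form of the energy-difference reduction rule:
    g'_1 = (e2/e1)**(alpha/2). alpha=1 scales d1(k) to the energy of
    d2(k); alpha=0 has no effect; 0<alpha<1 varies the influence."""
    return (e2 / e1) ** (alpha / 2.0)

e1, e2 = 4.0, 1.0
assert alpha_gain(e1, e2, 0.0) == 1.0                     # no effect
assert np.isclose(e1 * alpha_gain(e1, e2, 1.0) ** 2, e2)  # full energy match
g_half = alpha_gain(e1, e2, 0.5)
assert alpha_gain(e1, e2, 1.0) < g_half < 1.0             # intermediate influence
```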
Another option is the use of a sigmoid function to limit the energy overshoot of h2(k) compared to d1(k)
where f(x) can be one of
which are all limited by −1<f(x)<1 while f′(0)=1 holds.
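The concrete list of functions is given as a figure in the source; the following are common sigmoid candidates that satisfy the stated properties −1<f(x)<1 and f′(0)=1, verified numerically:

```python
import numpy as np

# Typical sigmoid candidates with -1 < f(x) < 1 and f'(0) = 1
# (assumed examples; the concrete list in the source is a figure):
candidates = [
    np.tanh,
    lambda x: x / np.sqrt(1.0 + x * x),
    lambda x: x / (1.0 + np.abs(x)),
    lambda x: (2.0 / np.pi) * np.arctan(np.pi * x / 2.0),
]

x = np.linspace(-8.0, 8.0, 1001)
h = 1e-6
for f in candidates:
    assert np.all(np.abs(f(x)) < 1.0)          # bounded by (-1, 1)
    slope0 = (f(h) - f(-h)) / (2.0 * h)        # central difference at 0
    assert np.isclose(slope0, 1.0, atol=1e-4)  # f'(0) = 1
```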
In some embodiments, the audio preprocessor 110 may, e.g., be configured to modify an initial audio signal of the two or more initial audio signals depending on the signal power or the loudness of another initial audio signal of the two or more initial audio signals by determining a gain g′1(k) for said initial audio signal and by applying the gain g′1(k) on said initial audio signal, and the audio preprocessor 110 may, e.g., be configured to determine the gain g′1(k) according to one or more of the above formulae.
In the following, further features of preprocessing according to embodiments are described.
According to an embodiment, the branch of the signals e1(k) and e2(k) that is fed to the respectively opposite side may, e.g., be filtered through a filter describing the actual acoustic coupling of the two zones.
Moreover, according to an embodiment, the power estimators may, e.g., operate on signals that have been processed by a weighting filter, for example, that have been processed by a weighting filter described in:
According to an embodiment, the power estimators may, e.g., be replaced by loudness estimators as, e.g., described by ITU-R Recommendation BS.1770-4. This will allow for an improved reproduction quality because the perceived loudness is better matched by this model.
Furthermore, according to an embodiment, a level threshold may, e.g., be used to exclude silence from being taken into account for the estimates b1(k) and b2(k) in the absolute power normalization.
Moreover, in an embodiment, a positive time-derivative of the separately estimated power can be used as an indicator for activity of the input signals u1(k) and u2(k). The estimates b1(k) and b2(k) are then only updated when activity is detected.
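A minimal sketch of such an activity-gated update of b1(k) is given below; the smoothing factor of the activity detector and the level threshold are assumed parameters:

```python
import numpy as np

def update_b(u, lam1=0.99, lam_act=0.9, thresh=1e-3):
    """Sketch: the long-term estimate b(k) is only updated while
    activity is detected. Activity is indicated here by a positive
    time-derivative of a separately smoothed power estimate together
    with a level threshold (parameter values are assumptions)."""
    b, p_prev, p = 1.0, 0.0, 0.0
    for k in range(len(u)):
        p_prev, p = p, lam_act * p + (1.0 - lam_act) * u[k] ** 2
        active = (p > p_prev) and (p > thresh)  # rising power above threshold
        if active:
            b = lam1 * b + (1.0 - lam1) * u[k] ** 2
        # otherwise the update of b is held
    return b

assert update_b(np.zeros(1000)) == 1.0                     # held during silence
rng = np.random.default_rng(1)
assert update_b(4.0 * rng.standard_normal(1000)) != 1.0    # updated under activity
```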
In the following, a band splitter according to embodiments is described. In particular, an implementation of the block denoted by “Band splitter” shown in
The desired frequency response of the input to output paths may, e.g., be a band pass with a flat frequency response in the pass band and a high attenuation in the stop band. The borders of pass bands and stop bands are chosen depending on the frequency range in which the reproduction measures connected to individual outputs can achieve a sufficient acoustics contrast between the respective sound zones.
As can be seen from
Various concepts may be employed to realize the actual implementation of the one or more band splitters. For example, some embodiments employ FIR filters, other embodiments employ an IIR filter, and further embodiments employ analog filters. Any possible concept for realizing band splitters may be employed, for example any concept that is presented in general literature on that topic.
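As one illustrative FIR realization (a windowed-sinc design; the embodiments do not prescribe any particular design method), a two-band splitter can be built from a linear-phase lowpass and its complementary highpass, so that both branches sum to a pure delay:

```python
import numpy as np

def fir_band_splitter(fc, fs, num_taps=101):
    """Minimal FIR band-splitter sketch (windowed-sinc design, an
    illustrative choice): a linear-phase lowpass with cutoff fc and
    its complementary highpass (delta minus lowpass)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h_lp = np.sinc(2.0 * fc / fs * n) * np.hamming(num_taps)
    h_lp /= np.sum(h_lp)                 # unit gain in the lowpass passband
    h_hp = -h_lp
    h_hp[(num_taps - 1) // 2] += 1.0     # complementary: delta - lowpass
    return h_lp, h_hp

h_lp, h_hp = fir_band_splitter(fc=300.0, fs=8000.0)
w = np.fft.rfftfreq(4096, d=1.0 / 8000.0)
H_lp = np.abs(np.fft.rfft(h_lp, 4096))
H_hp = np.abs(np.fft.rfft(h_hp, 4096))
assert H_lp[0] > 0.99 and H_lp[w > 1000].max() < 0.05  # flat pass band, attenuated stop band
assert H_hp[0] < 0.05 and H_hp[w > 1000].min() > 0.9   # complementary highpass
```

The cutoff would be chosen at the frequency where the reproduction measure connected to each output achieves sufficient acoustic contrast, as stated above.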
Some of the embodiments may, for example, comprise a spectral shaper for conducting spectral shaping. When spectral shaping is conducted on an audio signal, the spectral envelope of that audio signal may, e.g., be modified and a spectrally-shaped audio signal may, e.g., be obtained.
In the following, a spectral shaper according to embodiments is described, in particular, a spectral shaper as illustrated in
However, the eventual frequency responses of spectral filters are designed in a completely different way compared to equalizers: Spectral filters consider the maximum spectral distortion that will be accepted by the listener, and the spectral filters are designed such that they attenuate those frequencies which are known to produce acoustic leakage.
The rationale behind this is that human perception is sensitive to spectral distortions of acoustic scenes to different degrees at certain frequencies, depending on the excitation of the surrounding frequencies and depending on whether the distortion is an attenuation or an amplification.
For example, if a notch filter with a small bandwidth is applied to a broadband audio signal, the listeners will only perceive a small difference, if any. However, if a peak filter with the same bandwidth is applied to the same signal, the listeners will most likely perceive a considerable difference.
Embodiments are based on the finding that this fact can be exploited because a band-limited breakdown in acoustic contrast results in a peak in acoustic leakage (see
An example of the corresponding filter response is shown in
As outlined above, the filter 140 is configured to generate the plurality of loudspeaker signals depending on in which of the two or more sound zones the two or more audio source signals shall be reproduced and depending on in which of the two or more sound zones the two or more audio source signals shall not be reproduced.
In the following, a filter 140, e.g., a prefilter, according to embodiments is described.
In an embodiment, for example, one or more audio source signals shall be reproduced in a first sound zone, but not in a second sound zone and at least one further audio source signal shall be reproduced in the second sound zone but not in the first sound zone.
See, for example,
As each of the two or more preprocessed audio signals h1(k), h2(k) has been generated based on one of the two or more audio source signals u1(k), u2(k), it follows that in such an embodiment, one or more preprocessed audio signals h1(k) shall be reproduced in the sound zone 1, but not in the sound zone 2 (namely those one or more preprocessed audio signals h1(k) that have been generated by modifying the one or more sound source signals u1(k) that shall be reproduced in the sound zone 1, but not in the sound zone 2). Moreover, it follows that at least one further preprocessed audio signal h2(k) shall be reproduced in the sound zone 2, but not in the sound zone 1 (namely those one or more preprocessed audio signals h2(k) that have been generated by modifying the one or more sound source signals u2(k) that shall be reproduced in the sound zone 2, but not in the sound zone 1).
Suitable means may be employed that achieve that an audio source signal is reproduced in a first sound zone but not in a second sound zone, or that at least achieve that the audio source signal is reproduced in the first sound zone with a greater loudness than in the second sound zone (and/or that at least achieve that the audio source signal is reproduced in the first sound zone with a greater signal energy than in the second sound zone).
For example, a filter 140 may be employed, and the filter coefficients may, e.g., be chosen such that a first audio source signal that shall be reproduced in the first sound zone, but not in the second sound zone is reproduced in the first sound zone with a greater loudness (and/or with a greater signal energy) than in the second sound zone. Moreover, the filter coefficients may, e.g., be chosen such that a second audio source signal that shall be reproduced in the second sound zone, but not in the first sound zone is reproduced in the second sound zone with a greater loudness (and/or with a greater signal energy) than in the first sound zone.
For example, an FIR filter (finite impulse response filter) may, e.g., be employed and the filter coefficients may, e.g., be suitably chosen, for example, as described below.
Or, Wave Field Synthesis (WFS), well-known in the art of audio processing may, e.g., be employed (for general information on Wave Field Synthesis, see, for example, as one of many examples [69]).
Or, Higher-Order Ambisonics, well-known in the art of audio processing, may e.g., be employed (for general information on Higher-Order Ambisonics, see, for example, as one of many examples [70]).
Now, a filter 140 according to some particular embodiments, is described in more detail.
In particular, an implementation of the block denoted by G1(k) and G2(k) shown in FIG. 7 is presented. A prefilter may, e.g., be associated with an array of loudspeakers. A set of multiple loudspeakers is considered as a loudspeaker array, whenever a prefilter feeds at least one input signal to multiple loudspeakers that are primarily excited in the same frequency range. It is possible that an individual loudspeaker is part of multiple arrays and that multiple input signals are fed to one array, which are then radiated towards different directions.
There are different well-known methods to determine linear prefilters such that an array of non-directional loudspeakers will exhibit a directional radiation pattern, see, e.g., [1], [3], [4], [5] and [6].
Some embodiments realize a pressure matching approach based on measured impulse responses. Some of those embodiments, which employ such an approach, are described in the following, where only a single loudspeaker array is considered. Other embodiments use multiple loudspeaker arrays. The application to multiple loudspeaker arrays is straightforward.
For the description of these embodiments, a notation is used that is more suitable to obtain FIR filters compared to the notation above, which would also cover IIR filters. To this end, the filter coefficients gl,q(k) are captured in the vectors
gq=(gq,1(0), . . . ,gq,1(LG−1),gq,2(0), . . . ,gq,2(LG−1), . . . ,gq,NL(0), . . . ,gq,NL(LG−1))T, (26)
For the optimization, the convolved impulse response of the prefilters and the room impulse response (RIR) may be considered, which is given by
where gl(k) and hm,l(k) are assumed to be zero for k<0 and k≥LG or k≥LH, respectively.
As a result, the overall impulse responses zm(k) have a length of LG+LH−1 samples and can be captured by the vector
z=(z1(0),z1(1), . . . ,z1(LG+LH−2),z2(0),z2(1), . . . ,z2(LG+LH−2), . . . ,zNM(0), . . . ,zNM(LG+LH−2))T, (28)
Now, it is possible to define the convolution matrix H, such that
ẑ=Hg (29)
describes the same convolution as Equation (27) does. For the optimization, the desired impulse dm,q(k) can be defined according to needs of the application.
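The construction of the convolution matrix H from an impulse response, such that the matrix-vector product in Equation (29) equals the linear convolution of Equation (27), can be sketched for a single loudspeaker-microphone path as follows:

```python
import numpy as np

def convolution_matrix(h, L_G):
    """Build the convolution matrix of an impulse response h of length
    L_H acting on a filter of length L_G, so that C @ g equals
    np.convolve(h, g) (length L_G + L_H - 1), mirroring how a submatrix
    of H acts on g in Eq. (29)."""
    L_H = len(h)
    C = np.zeros((L_G + L_H - 1, L_G))
    for i in range(L_G):
        C[i:i + L_H, i] = h   # shifted copy of h in each column
    return C

rng = np.random.default_rng(2)
h = rng.standard_normal(5)   # room impulse response h_{m,l}(k), L_H = 5
g = rng.standard_normal(3)   # prefilter g_l(k), L_G = 3
assert np.allclose(convolution_matrix(h, 3) @ g, np.convolve(h, g))
```

The full matrix H stacks such submatrices over all microphone-loudspeaker pairs, as described further below.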
A way to define dm,q(k) is to consider each loudspeaker as a potential source to be reproduced with its original sound field in the bright zone but with no radiation into the dark zone. This is described by
where the delay Δk is used to ensure causality. A perfect reproduction is described by
dq=Hgq (31)
but will typically not be possible due to physical constraints. It should be noted that this definition is just one among many, which has some practical merit due to its simplicity, while other definitions may be more suitable, depending on the application scenario.
Now, the least-squares reproduction error can be defined as:
where Wq is a matrix that can be chosen such that a frequency-dependent weighting and/or a position-dependent weighting is achieved.
When Bq and Dq are derived from Bq(k) and Dq(k), respectively, in the same way as H was derived from hm,l(k), Equation (14) can be represented by
It should be noted that maximizing Equation (34) can be solved as a generalized eigenvalue problem [3].
The error Eq can be minimized by determining the complex gradient of Equation (33) and setting it to zero [7]. The complex gradient of Equation (33) is given by
Resulting in
gq=(HHWqHWqH)−1HHWqHWqdq (36)
as the least-squares optimal solution.
Although many algorithms are formulated for non-weighted least squares, they can be used to implement weighted least squares by simply replacing H and dq with WqH and Wqdq, respectively.
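This substitution can be verified numerically with toy dimensions and a diagonal weighting matrix (a simple special case of Wq; the sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((40, 10))   # stacked convolution matrix (toy sizes)
d = rng.standard_normal(40)         # desired response d_q
w = np.concatenate([2.0 * np.ones(20), 0.5 * np.ones(20)])
W = np.diag(w)                      # diagonal weighting, a special case of W_q

# Closed form (36): g = (H^H W^H W H)^{-1} H^H W^H W d
A = H.T @ W.T @ W @ H
g_closed = np.linalg.solve(A, H.T @ W.T @ W @ d)

# Same solution from a non-weighted least-squares solver via the
# substitution H -> W H, d -> W d described in the text:
g_lstsq, *_ = np.linalg.lstsq(W @ H, W @ d, rcond=None)
assert np.allclose(g_closed, g_lstsq)
```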
The weighting matrix Wq is in general a convolution matrix similar to H defined by (26) to (29).
The matrix H consists of several submatrices Hm,l:
An example for Hm,l can be given assuming
From that scheme it is clear to the expert how (27) and (29) define the structure of H.
To facilitate a frequency-dependent and microphone-dependent weighting through Wq, the impulse responses wm,q(k) may, e.g., be designed according to well-known filter design methods. Here, wm,q(k) defines the weight for source q and microphone m. Unlike H, Wq is a block-diagonal matrix:
where Wm,q is structured like Hm,l.
Regarding the computation of the filter coefficients, it should be noted that although (36) gives the filter coefficients explicitly, its computation is very demanding in practice. Due to the similarity of this problem to the problem solved for listening room equalization, the methods used there can also be applied.
Hence, a very efficient algorithm to compute (36) is described in [71]: SCHNEIDER, Martin; KELLERMANN, Walter. Iterative DFT-domain inverse filter determination for adaptive listening room equalization. In: Proceedings of IWAENC 2012, International Workshop on Acoustic Signal Enhancement. VDE, 2012, pp. 1-4.
In the following, a loudspeaker-enclosure-microphone system (LEMS) according to embodiments is described. In particular, the design of an LEMS according to embodiments is discussed. In some embodiments, the measures described above may, e.g., rely on the distinct properties of the LEMS.
The two loudspeaker arrays denoted by “Array 1” and “Array 2” are used in conjunction with accordingly determined prefilters (see above). In this way, it is possible to electrically steer the radiation of those arrays towards “Zone 1” and “Zone 2”. Assuming that both arrays exhibit an inter-loudspeaker distance of a few centimeters while the arrays exhibit an aperture size of a few decimeters, effective steering is possible for midrange frequencies.
Although it is not obvious, the omni-directional loudspeakers “LS 1”, “LS 2”, “LS 3”, and “LS 4”, which may, e.g., be located 1 to 3 meters apart from each other, can also be driven as a loudspeaker array when considering frequencies below, e.g., 300 Hz. Corresponding prefilters can be determined using the method described above.
The loudspeakers “LS 5” and “LS 6” are directional loudspeakers that provide high-frequency audio to Zones 3 and 4, respectively.
As described above, measures for directional reproduction may sometimes not lead to sufficient results for the whole audible frequency range. To compensate for this issue, there may, for example, be loudspeakers located in the close vicinity of, or within, the respective sound zones. Although this positioning is suboptimal with respect to the perceived sound quality, the difference in distance of the loudspeakers to the assigned zone compared to the distance to the other zones allows for a spatially focused reproduction, independent of frequency. Thus, these loudspeakers may, e.g., be used in frequency ranges where the other methods do not lead to satisfying results.
In the following, further aspects according to some of the embodiments are described:
In some of the embodiments, the “Preprocessing” block is placed after the “Band splitter” blocks or after the “Spectral shaper” blocks. In that case, one preprocessing block may, e.g., be implemented for each of the split frequency bands. In the example shown in
Since the acoustic leakage depends on the reproduction method which is chosen differently for each frequency band, such an implementation has the advantage that the preprocessing parameters can be matched to the demands of the reproduction method. Moreover, when choosing such an implementation, compensating for the leakage in one frequency band will not affect another frequency band. Since the “Preprocessing” block is not an LTI system this exchange implies a change in the functionality of the overall system, even though the resulting system will still reliably solve the same problem.
Additionally, it should be noted that some of the embodiments may measure the impulse responses from all loudspeakers to multiple microphones prior to operation. Hence, no microphones have to be used during operation.
The proposed method is generally suitable for any multizone reproduction scenario, for example, in-car scenarios.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software or at least partially in hardware or at least partially in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.
The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
16164984.3 | Apr 2016 | EP | regional |
This application is a continuation of copending International Application No. PCT/EP2017/058611, filed Apr. 11, 2017, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 16 164 984.3, filed Apr. 12, 2016, which is incorporated herein by reference in its entirety. The present invention relates to audio signal processing and, in particular, to an apparatus and method for providing individual sound zones.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2017/058611 | Apr 2017 | US |
Child | 16157827 | US |