The invention relates to audio apparatus for providing different audio outputs in a plurality of zones of a single enclosed space, for example, within a vehicle.
According to one aspect of the invention there is provided apparatus for providing different audio outputs in a plurality of zones of a single enclosed space, comprising loudspeakers associated with, and positioned in, each zone to radiate an audio output, means capable of supplying a different audio signal to the loudspeakers in each zone, signal processing means comprising means for dividing the audio frequency spectrum of each audio signal into higher and lower parts, means for directing the higher frequencies so that they are radiated in their respective zones, and means for varying any of the amplitude, phase and delay of the lower frequencies to tend to cancel radiation outside their respective zones.
According to another aspect of the invention, there is provided a method of providing different audio signals in a plurality of zones of a single enclosed space, comprising arranging loudspeakers in or adjacent to each zone to radiate an audio output in the associated zone, supplying a different audio signal to the loudspeakers in each zone, processing the audio signals including dividing the audio frequency spectrum of each audio signal into higher and lower parts, directing the higher frequencies so that they are radiated in their respective zones, and varying the phase and delay of the lower frequencies to tend to cancel sound radiation outside their respective zones.
In both complementary aspects, different listeners in the single enclosed space may be simultaneously presented with different desired listening sensations. The different sensations include different audio channels and the possibility of one of the listeners choosing an audio-free experience (i.e. quiet relative to fellow passengers). In other words, tending to cancel sound radiation outside their respective zones means that sound radiation from the loudspeakers in a given zone is reduced (or preferably minimised) in at least one other zone compared to the sound radiation in their associated zone. Elsewhere in the cabin, a combination of the audio signals may be experienced, but this is unimportant. The following features apply to both aspects.
The enclosed space may be the cabin of a vehicle, e.g. an automobile or aeroplane. The cabin may be trimmed internally with at least one resilient panel, and at least one of the loudspeakers in each zone may be coupled to drive a portion of the at least one trim panel as an acoustic diaphragm. The cabin may be trimmed internally with a headlining, e.g. the resilient panel may form part or all of the headlining. At least one of the loudspeakers in each zone may be coupled to drive a portion of the headlining as an acoustic diaphragm. In this way, the loudspeaker apparatus would not present any visual disturbance to the interior décor.
The loudspeakers in each zone may comprise a cluster having at least one lower frequency driver and an array of higher frequency drivers. The audio frequency dividing means may be arranged so that the division occurs around 1500 Hz, i.e. higher frequencies are above 1500 Hz and lower frequencies below 1500 Hz.
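By way of non-limiting illustration, the following minimal Python sketch shows one way the division around 1500 Hz might be realised in software; the 4th-order Butterworth filters, the 48 kHz sample rate and the function names are illustrative assumptions rather than features of the described apparatus.

# Minimal crossover sketch: split one zone's audio channel into lower and
# higher parts around 1.5 kHz (filter type, order and sample rate assumed).
import numpy as np
from scipy import signal

FS = 48000          # sample rate in Hz (assumed)
F_SPLIT = 1500.0    # crossover frequency from the description, Hz

def split_bands(audio, fs=FS, f_split=F_SPLIT, order=4):
    """Return (low, high) band signals for one zone's audio channel."""
    sos_lo = signal.butter(order, f_split, btype='lowpass', fs=fs, output='sos')
    sos_hi = signal.butter(order, f_split, btype='highpass', fs=fs, output='sos')
    return signal.sosfilt(sos_lo, audio), signal.sosfilt(sos_hi, audio)

# Example: a 200 Hz tone plus a 4 kHz tone separates cleanly into the two bands.
t = np.arange(FS) / FS
test = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 4000 * t)
low, high = split_bands(test)
print(low.shape, high.shape)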
The signal processing means may comprise means for processing the higher frequency signal to the array of higher frequency drivers to control the directivity of the radiation from the array. The signal processing means may employ linear superposition for the lower frequencies to tend to cancel radiation outside their respective zones. The at least one lower frequency driver in or associated with each zone may be a bending wave diaphragm positioned in the near field with respect to a listener in the same zone.
Sound pressure levels at the respective zones may be detected at one or more test positions by measurement and/or modelling. The detected sound pressure levels may be processed to determine (i.e. by measurement) a transfer function of the input signal, i.e. a function which measures the transfer of force applied at the test position to each loudspeaker. The processing may further comprise inferring the inverse of this transfer function, i.e. the transfer function necessary to produce a pure impulse at the test position from each loudspeaker.
The inferring step may be by direct calculation so that measurement of the transfer function ipT is followed by inversion to obtain (ipT)−1. Alternatively, the inferring step may be indirect, e.g. using feedback adaptive filter techniques to implicitly invert ipT. Alternatively, the inferring step may be heuristic, e.g. using parametric equalisation processing, and adjusting the parameters to estimate the inverse transfer function.
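As a purely illustrative sketch of the direct-calculation route, the following Python fragment inverts a measured impulse response in the frequency domain; the regularisation term eps and the FFT length are assumptions added for numerical robustness and are not specified above.

# Direct inversion sketch: measure ipT as an impulse response, invert it in
# the frequency domain, and return an FIR approximation of (ipT)^-1.
import numpy as np

def invert_transfer_function(ip_t, n_fft=None, eps=1e-6):
    n_fft = n_fft or 4 * len(ip_t)
    H = np.fft.rfft(ip_t, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularised 1/H (assumption)
    return np.fft.irfft(H_inv, n_fft)

# Convolving the inverse filter with the measured response approximates a pure impulse.
ip_t = np.array([1.0, 0.5, 0.25, 0.1])
h_inv = invert_transfer_function(ip_t)
print(np.round(np.convolve(ip_t, h_inv)[:6], 3))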
Alternatively, the inferring step may be approximated by reversing the measured time responses, which in the frequency domain is equivalent to complex conjugation, thus generating the matched filter response. In this case, the result of applying the filter is not a pure impulse, but the autocorrelation function.
The resulting inverse transfer functions may be stored for later use by the apparatus, for example in a transfer function matrix with the inverse transfer function for each of the plurality of loudspeakers stored at an associated coordinate in the matrix. The spatial resolution of the transfer function matrix may be increased by interpolating between the calibration test points.
The time-reversed responses may be generated by adding a fixed delay which is at least as long as the duration of the detected signal. The fixed delay may be at least 5 ms, at least 7.5 ms or at least 10 ms. The measured time response may be normalised before filtering, e.g. by dividing by the sum of all measured time responses, to render the response more spectrally white.
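The following Python sketch illustrates the matched-filter alternative, generating the time-reversed response behind a fixed delay and applying a simple scalar normalisation; the exact normalisation used in practice may differ and is an assumption here.

# Matched filter sketch: z(t) = x(T - t) with a fixed delay T at least as
# long as the measured response, normalised by the sum of absolute values.
import numpy as np

def matched_filter(h, fs=48000, delay_ms=10.0):
    n_delay = int(round(delay_ms * 1e-3 * fs))
    if n_delay < len(h):
        raise ValueError("fixed delay must be at least the response duration")
    z = np.zeros(n_delay)
    z[n_delay - len(h):] = h[::-1]      # time-reversed copy placed behind the delay
    return z / np.sum(np.abs(h))        # simple normalisation (assumption)

h = np.array([0.0, 1.0, 0.6, 0.3, 0.1])
f = matched_filter(h, fs=1000, delay_ms=10.0)   # 10 samples of delay at 1 kHz
print(np.round(np.convolve(h, f), 3))           # approximates the autocorrelation of h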
The audio signal for a particular zone (i.e. the desired listening sensation) may be a maximum response at a given test point. Thus, the output signals for each loudspeaker may be in-phase with each other, whereby all the displacements generated by the loudspeakers add up to the maximum displacement at the given test point. It is noted that, at other test points, there may be phase cancellation.
Alternatively, the audio signal for a particular zone (i.e. the desired listening sensation) may be a minimum response at a given test point. Thus, the output signals for each loudspeaker may be selected so that the displacements provided at the test position (i.e. the appropriate transfer functions) sum to zero. With two loudspeakers, this may be achieved by inverting one output signal relative to the other.
The desired listening sensation may be a maximum at a first test point and a minimum at a second test point (e.g. a maximum for the driver location and a minimum for the passenger location or vice versa). Alternatively, the desired listening sensation may be a response which is between the minimum and the maximum at a given test position, for example, where the responses at multiple test positions are to be taken into account.
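A toy numerical illustration of these maximum and minimum recipes is given below for two loudspeakers at a single frequency; the complex transfer functions H1 and H2 are invented for illustration only.

# With transfer functions H1, H2 from two loudspeakers to one test point,
# in-phase (conjugate) drive maximises the response there, while scaling
# and inverting one output cancels it.
import numpy as np

H1 = 0.8 * np.exp(1j * 0.3)     # loudspeaker 1 -> test point (assumed value)
H2 = 0.5 * np.exp(-1j * 0.7)    # loudspeaker 2 -> test point (assumed value)
H = np.array([H1, H2])

d_max = np.conj(H)              # in-phase combination: contributions add
p_max = d_max @ H               # = |H1|^2 + |H2|^2

d_min = np.array([H2, -H1])     # one output inverted and scaled: contributions cancel
p_min = d_min @ H               # = H2*H1 - H1*H2 = 0

print(abs(p_max), abs(p_min))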
One or more of the loudspeakers may comprise a vibration exciter for applying a bending wave vibration to a diaphragm, e.g. the resilient panel. The vibration exciter may be electro-mechanical. The exciter may be an electromagnetic exciter. Such exciters are well known in the art e.g. from WO97/09859, WO98/34320 and WO99/13684, belonging to the applicant and incorporated herein by reference. Alternatively, the exciter may be a piezoelectric transducer, a magneto-strictive exciter or a bender or torsional transducer (e.g. of the type taught in WO 00/13464). The exciter may be a distributed mode actuator, as described in WO01/54450, incorporated herein by reference. A plurality of exciters (perhaps of different types) may be selected to operate in a co-ordinated fashion. The or each exciter may be inertial.
One or more of the loudspeakers may be a panel-form member which is a bending wave device, for example, a resonant bending wave device. For example, one or more of the loudspeakers may be a resonant bending wave mode loudspeaker as described in International Patent Application WO97/09842 which is incorporated by reference. Thus, as explained in more detail below, the exciters in each source driving the bending wave devices, particularly the low frequency devices, may be driven by signals which are processed in phase and amplitude using the theory of linear superposition to provide directional and localised different audio signals to listeners in the relative near field.
The invention further provides processor control code to implement the above-described methods, in particular on a data carrier such as a disk, CD- or DVD-ROM, programmed memory such as read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier. Code (and/or data) to implement embodiments of the invention may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language such as Verilog (Trade Mark) or VHDL (Very High speed integrated circuit Hardware Description Language). As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another.
The invention is diagrammatically illustrated, by way of example, in the accompanying drawings in which:
FIGS. 1a and 1b are schematic illustrations of two variations of audio apparatus;
FIG. 1c is a schematic illustration of a detail of the apparatus of FIGS. 1a and 1b;
FIG. 1d is a block diagram of the components of the audio apparatus of FIGS. 1a and 1b;
FIG. 1e is a schematic illustration of the principle of linear superposition;
FIGS. 3a to 3c show the pressure response against frequency for the driver source, the passenger source and the rear source, respectively;
FIGS. 4a and 4b show the sound pressure level at 800 Hz on the listening plane;
FIG. 5a shows the transfer functions for each of the sources;
FIG. 5b shows the mean response for each of the filtered sources;
FIG. 5c shows the pressure response against frequency at each of the three locations;
FIGS. 6a to 6d show the sound pressure level at 283 Hz, 400 Hz, 576 Hz and 800 Hz on the listening plane;
FIG. 7a is a block diagram of a parallel solver;
FIG. 7b is a block diagram of a recursive solver;
FIG. 8a is a block diagram of a variation of the audio apparatus; and
FIG. 8b is a flow chart showing the training mode of the system of FIG. 8a.
FIGS. 1a and 1b show two embodiments of audio apparatus which generate separate listening experiences in an enclosed space (namely a vehicle cabin) whereby different listeners are simultaneously presented with different audio channels.
FIG. 1c shows one arrangement for each of the sources of FIGS. 1a and 1b.
FIG. 1d shows the system components. A processor 20 provides signals to two signal generators 22,23 which provide the independent audio signals for each loudspeaker. A first signal generator 22 provides independent audio signals to each low frequency loudspeaker 14. A second signal generator 23 provides independent audio signals to each cluster 16 of high frequency loudspeakers. Three loudspeakers are shown but there could be any number of loudspeakers.
Due to the wide range of acoustic wavelengths present in the audible spectrum, it is envisaged that more than one approach will be required to generate the desired listening experience. Accordingly, the processor 20 comprises a filter 24 for dividing the audio frequency spectrum of each audio signal into higher and lower parts. At high frequencies, a combination of directivity control and array processing techniques is employed to direct beams of sound to each listener. Control of side-lobes means that other listeners would receive much less sound. This functionality is provided by the high frequency controller 26.
Control of high frequency arrays is well known. The main limitation is that the array should be large enough compared with the wavelength of sound that it is attempting to steer. Example arrays are taught in:
http://gow.epsrc.ac.uk/ViewGrant.aspx?GrantRef=GR/S63915/01
http://en.wikipedia.org/wiki/Directional_Sound#Speaker_arrays
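By way of background illustration only (and not taken from the documents linked above), the following Python sketch shows textbook delay-and-sum steering for a uniform line array of higher frequency drivers; the element count, spacing, steering angle and sample rate are assumptions.

# Delay-and-sum steering sketch for a uniform line array.
import numpy as np

C = 343.0   # speed of sound, m/s

def steering_delays(n_elems, spacing_m, angle_deg):
    """Per-element delays (seconds) steering the main lobe to angle_deg."""
    n = np.arange(n_elems)
    d = n * spacing_m * np.sin(np.radians(angle_deg)) / C
    return d - d.min()                        # keep all delays non-negative

def apply_delays(audio, delays, fs=48000):
    """One integer-sample delayed copy of the audio per array element."""
    outs = [np.concatenate([np.zeros(int(round(d * fs))), audio]) for d in delays]
    n_max = max(len(o) for o in outs)
    return np.stack([np.pad(o, (0, n_max - len(o))) for o in outs])

feeds = apply_delays(np.random.randn(480), steering_delays(8, 0.05, 25.0))
print(feeds.shape)   # (8, samples): one feed per high frequency driver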
As explained in more detail below, at low frequencies, all sources would be energised with appropriate amplitude, phase and delay variation to result in the desired listening experiences, including cancellation at designated quiet zones. This functionality is provided by the low frequency controller 28, e.g. using linear superposition which allows the generation of multiple audio zones, for example as shown schematically in FIG. 1e.
The arrangement shown in
As expected, the sound pressure level for the driver is greatest, with a drop-off at the passenger location and a further drop-off in sound pressure level at the backseat. As shown in
The next step is to calculate a transfer function for each source, namely (ipT)<0> for the driver source, (ipT)<1> for the passenger source and (ipT)<2> for the rear source.
FIG. 5a shows the transfer functions (ipT)<u> for each source which are necessary to maximise the driver SPL and minimise the passenger SPL. To reverse the roles of the driver and passenger, it is merely necessary to swap the solid and dotted traces.
As shown in
Experimental results in structural acoustics suggest that approximately 15-20 dB separation is more realistic. It is also noted that although the separation between zones remains good to high frequencies, the area over which separation applies reduces in proportion to the wavelength. There is one problematic frequency, just above 1 kHz, which, based on the results in structural acoustics, is due to the presence of a modal anti-node at the listener location at this frequency.
The transfer functions may be calculated formally by the various methods detailed below. For any multi-region system, there are a number of inputs and a number of measurement points. The simplest case is two inputs and one target position, but as described above the problem may be considerably more complicated, involving more inputs, and extended target areas. The various methods of solving both the simple and more complex problems are described below:
A Simple Minimisation Problem & Solution by “Tan Theta” Approach
Consider a system with two inputs and one output. Let the transfer function from input 1 (e.g. the first low frequency source) to the output be P1, and the transfer function from input 2 to the output be P2. If gains a and b are applied to the two inputs, the combined output is
T = a·P1 − b·P2
where a, b, P1, P2 and T are all complex functions of frequency.
The problem to be solved is minimising T for all frequencies. There is no unique solution to the problem, but it is clear from observation that a and b should be related; specifically
b=a.P1/P2, or a=b.P2/P1
Using these ratios is generally not a good idea, as either P1 or P2 may contain zeros. One simple solution is to set a=P2 and b=P1. It is also general practice to normalise the solution to unit energy, that is |a|² + |b|² = 1. As P1 and P2 are in general complex quantities, the absolute values are important. Thus, T is minimised by setting a = P2/√(|P1|² + |P2|²) and b = P1/√(|P1|² + |P2|²).
Incidentally, T is maximised to unity by setting a = conj(P1)/(|P1|² + |P2|²) and b = −conj(P2)/(|P1|² + |P2|²).
If P1 or P2 are measured remote from the input, as is generally the case in acoustics, the transfer function will include excess phase in the form of delay. Consequently, these values of a and b may not be the best choice. If we set a = cos(θ) and b = sin(θ), then tan(θ) = P1/P2. This solution may be described as the “tan theta” solution and produces a and b with much less excess phase. It is clear that a² + b² = 1 due to the trigonometric identity, but as θ is in general complex, |a|² + |b|² ≠ 1, so normalisation would still be required.
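A minimal Python sketch of this “tan theta” solution is given below; the sample values of P1 and P2 are invented, and the final renormalisation to unit energy follows the remark above.

# "Tan theta" weights for the two-input case: tan(theta) = P1/P2,
# a = cos(theta), b = sin(theta), then renormalise to unit energy.
import numpy as np

def tan_theta_weights(P1, P2):
    theta = np.arctan(P1 / P2)                  # complex-valued angle
    a, b = np.cos(theta), np.sin(theta)
    norm = np.sqrt(np.abs(a) ** 2 + np.abs(b) ** 2)
    return a / norm, b / norm

P1 = np.array([0.9 + 0.2j, 0.4 - 0.6j])         # illustrative values
P2 = np.array([0.5 - 0.1j, 0.8 + 0.3j])
a, b = tan_theta_weights(P1, P2)
print(np.abs(a * P1 - b * P2))                  # ~0 in every frequency bin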
In this simple example, the minimisation problem was solved by inspection. As this may not be possible in general, it would be of advantage to have a systematic method of finding the solution.
The minimisation of energy functions is a key process in many branches of physical modelling with mathematics, and for example forms the foundation of finite element analysis. The task at hand is to determine values of parameters that lead to stationary values of a function (i.e. to find nodal points, lines or pressures). The first step of the process is forming the energy function. For our example, the squared modulus of T may be used, i.e. E = |T|² = |a·P1 − b·P2|². The stationary values occur at the maximum and the minimum of E.
E = (a·P1 − b·P2)·conj(a·P1 − b·P2)
There is a constraint on the values of a and b—they cannot both be zero. This constraint may be expressed using a so called “Lagrange multiplier” to modify the energy equation, thus;
E = (a·P1 − b·P2)·conj(a·P1 − b·P2) + λ·(1 − a·conj(a) − b·conj(b))
It is common in these types of problem to consider the complex conjugate of each variable as an independent variable. We shall follow the practice here, and differentiate E with respect to each conjugate variable in turn, thus;
(a·P1 − b·P2)·conj(P1) − λ·a = 0 (1)
−(a·P1 − b·P2)·conj(P2) − λ·b = 0 (2)
At the stationary points, both of these must be zero. It is possible to see straight away that the solutions found in the previous section apply here too. However, continuing to solve the system of equations formally, first the equations are combined to eliminate λ by finding:
(1)·b − (2)·a:
(a·P1 − b·P2)·(b·conj(P1) + a·conj(P2)) = 0
The resulting equation is quadratic in a and b, the two solutions corresponding to the maximum and the minimum values of E. Introducing a=cos(θ) and b=sin(θ)—although strictly speaking this does not satisfy the Lagrange constraint—yields a quadratic equation in tan(θ):
P1·conj(P2) + tan(θ)·(|P1|² − |P2|²) − tan(θ)²·P2·conj(P1) = 0
Noting that in many cases (|P1|² − |P2|²)² + 4·|P1|²·|P2|² = (|P1|² + |P2|²)², the solutions are
tan(θ) = P1/P2
for the minimum, and
tan(θ) = −conj(P2)/conj(P1)
for the maximum.
For completeness, it is noted that this identity might not apply in the general case, where P1 and P2 are sums or integrals of responses. Nevertheless, it is possible to systematically find both stationary values using this variation of the “tan theta” approach. One application is explained in more detail below to illustrate how these solutions may be used in the examples described above.
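The two stationary values may be checked numerically, as in the following illustrative Python fragment (the values of P1 and P2 are invented).

# Solve the quadratic in t = tan(theta) and confirm the roots t = P1/P2
# (minimum of E) and t = -conj(P2)/conj(P1) (maximum of E).
import numpy as np

P1, P2 = 0.9 + 0.2j, 0.5 - 0.4j

coeffs = [-P2 * np.conj(P1), abs(P1) ** 2 - abs(P2) ** 2, P1 * np.conj(P2)]
roots = np.roots(coeffs)

def energy(t):
    """|a*P1 - b*P2|^2 with b/a = t, normalised so that |a|^2 + |b|^2 = 1."""
    return abs(P1 - t * P2) ** 2 / (1 + abs(t) ** 2)

print(np.round(roots, 6))
print(np.round([P1 / P2, -np.conj(P2) / np.conj(P1)], 6))   # expected roots
print([round(energy(t), 6) for t in roots])                 # ~0 and |P1|^2 + |P2|^2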
It is possible to simultaneously specify a minimal response at one location and a non-zero response at another position. This might be very useful in dual region systems.
We have two inputs (for example), and wish to produce a nodal point at one position and audio at another. Define transfer functions Pi_j from input i to output j.
Simultaneously solve a·P1_1 + b·P2_1 = 0 and a·P1_2 + b·P2_2 = g, giving a = −g·P2_1/(P1_1·P2_2 − P2_1·P1_2) and b = g·P1_1/(P1_1·P2_2 − P2_1·P1_2).
Provided the denominator is never zero, this pair of transfer functions will produce a nodal response at point 1, and a complex transfer function exactly equal to g at point 2.
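This exact dual-region solution amounts to solving a 2×2 linear system, as in the illustrative Python fragment below; the numerical transfer admittances are those of the worked example later in the description, and g is chosen arbitrarily.

# Exact dual-region solve: a*P1_1 + b*P2_1 = 0 (node at point 1) and
# a*P1_2 + b*P2_2 = g (prescribed response at point 2).
import numpy as np

P1_1, P2_1 = 0.472 + 0.00344j, 0.479 - 0.129j     # inputs -> point 1
P1_2, P2_2 = -0.206 - 0.195j, 0.262 + 0.000274j   # inputs -> point 2
g = 1.0 + 0.0j                                    # desired response at point 2 (assumed)

A = np.array([[P1_1, P2_1],
              [P1_2, P2_2]])
a, b = np.linalg.solve(A, np.array([0.0, g]))     # fails only if the determinant is zero

print(np.round(a * P1_1 + b * P2_1, 12))          # ~0 at point 1
print(np.round(a * P1_2 + b * P2_2, 12))          # ~g at point 2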
Alternatively, simultaneously solve |a·P1_1 + b·P2_1|² = 0 and |a·P1_2 + b·P2_2|² = |g|².
Use the variational methods discussed below to solve the first minimisation for a and b, and then normalise the result to satisfy the second equation.
a = r·cos(θ), b = −r·sin(θ), and
r²·|cos(θ)·P1_2 − sin(θ)·P2_2|² = |g|², hence r may be determined.
Provided the denominator is never zero, this pair of transfer functions will produce a nodal response at point 1, and a power transfer function equal to |g|² at point 2. The resulting output at point 2 will not necessarily have the same phase response as g, so the coercion is not as strong.
There are other extensions to the methods described above that are particularly relevant when considering more than two input channels. These extensions are general, and would equally well apply to the two-channel case. Additionally, by using eigenvalue analysis as a tool, we get the “best” solution when no exact solution is available.
Relationship between the variational method and the eigenvalue problem.
When minimising an energy function of the form E = |Σ_i a_i·P_i|², we arrive at a set of simultaneous equations;
conj(P_n)·(Σ_i a_i·P_i) = 0, for all n
where P_i are the inputs to the system and a_i the constants applied to these inputs, i.e. a and b in the previous two channel system.
We may write this system of equations in matrix form, thus
M·v = 0, where M_ij = conj(P_i)·P_j and v_i = a_i (1)
Note that M is conjugate symmetric, i.e. M_i,j = conj(M_j,i).
We wish to find a non-trivial solution; that is a solution other than the trivial v=0, which although mathematically valid, is not of much use.
As any linear scaling of v is also a solution to the equation, the a_i are not uniquely defined. We need an additional equation to constrain the scaling. Another way of viewing things is to say that for an exact solution, the number of input variables must be greater than the number of measurement points. Either way, there is one more equation than free variables, so the determinant of M will be zero.
Consider the matrix eigenvalue problem, where we wish to find a non-trivial solution to the equation
M·v − λ·v = 0, where λ is an eigenvalue, and the associated v is the eigenvector (2)
As M is conjugate symmetric, all the eigenvalues will be real and non-negative. If λ=0 is a solution to the eigenvalue problem, it should be clear that we have our original equation. So v is the eigenvector for λ=0.
What is particularly powerful about this method is that, even when there is no solution to (1), the solution to (2) with the smallest value of λ is the closest approximate answer.
For example, using the problem posed above, equation (2) has a solution λ = 0, b/a = P1/P2.
The other eigenvalue corresponds to the maximum; λ = |P1|² + |P2|², b/a = −conj(P2)/conj(P1).
When using an eigenvalue solver to find the values of a_i, the scaling used is essentially arbitrary. It is normal practice to normalise the eigenvector, and doing so will set the amplitudes;
For example, the normalisation may be Σ_i |a_i|² = 1.
The reference phase, however, is still arbitrary—if v is a normalised solution to the eigen-problem, then so is v·e^jθ. What constitutes the “best” value for θ, and how to find it is the subject of a later section.
The value of the eigenvalue λ is just the energy associated with that choice of eigenvector. The proof follows;
From our eigenvalue equation and normalisation of the eigenvector, we can continue by stating E = conj(v)·M·v = conj(v)·(λ·v) = λ·(conj(v)·v) = λ.
In principle, a system of order n has n eigenvalues, which are found by solving an nth order polynomial equation. However, we don't need all the eigenvalues—only the smallest.
M·v − λ·v = 0 leads to |M − λ·I| = 0, which leads to an nth order polynomial in λ.
If there is an exact solution to the problem, the determinant will have λ as a factor. For example, with M = ( a, b ; conj(b), c ), the characteristic equation is
a·c − |b|² − (a+c)·λ + λ² = 0
If a·c − |b|² = 0, then there is an exact solution.
As the number of equations is greater than the number of unknowns, there is more than one possible set of solutions for v, but they are all equivalent;
For example
a = 2, b = 1+1j, c = 3; 6 − 2 − 5·λ + λ² = 0; λ = 1, 4
(λ−2)/(1+1j)=(−1+1j)/2 or 1−1j
(1−1j)/(λ−3)=(−1+1j)/2 or 1−1j
So the best solution to the pair of equations (that for the smallest eigenvalue, λ = 1) is given by v1/v0 = (−1+1j)/2
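This small example is easily checked with a numerical eigenvalue solver, as the purely illustrative Python fragment below shows.

# Check of the 2x2 example: M = (a, b; conj(b), c) with a=2, b=1+1j, c=3
# has eigenvalues 1 and 4, and the smallest-eigenvalue eigenvector gives
# v1/v0 = (-1+1j)/2.
import numpy as np

a, b, c = 2.0, 1.0 + 1.0j, 3.0
M = np.array([[a, b],
              [np.conj(b), c]])

w, V = np.linalg.eigh(M)          # Hermitian solver, eigenvalues in ascending order
v = V[:, 0]                       # eigenvector for the smallest eigenvalue
print(np.round(w, 6))             # [1. 4.]
print(np.round(v[1] / v[0], 6))   # (-0.5+0.5j)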
Choosing the “Best” Scaling for the Solution
Mathematically speaking, any solution to the problem is as good as any other. However we are trying to solve an engineering problem. Both the matrix, M, and its eigenvectors, v, are functions of frequency. We wish to use the components of v as transfer functions, so having sudden changes of sign or phase is not preferred.
M(ω)·v(ω) = 0
For the two-variable problem, we used the substitution a = cos(θ) and b = sin(θ), and then solved for tan(θ). This method seems to produce values of a and b with low excess phase. However, using this method quickly becomes unwieldy, as the equations get more and more complicated to form, never mind solve. For example, for 3 variables we have 2 angles and can use the spherical polar mapping to give a = cos(θ)·cos(φ), b = cos(θ)·sin(φ), c = sin(θ).
Instead, let us use the variational method to determine the “best” value for θ. We will define best to mean having the smallest total imaginary component.
Now, let v′ = v·e^jθ, let v = vr + j·vi, and define our error energy as the total squared imaginary part, SSE = Σ Im(v′_i)².
Let
rr = Re(v)·Re(v) = Σ vr_i², ii = Im(v)·Im(v) = Σ vi_i², ri = Re(v)·Im(v) = Σ vr_i·vi_i
Then
SSE = cos(θ)²·ii + 2·cos(θ)·sin(θ)·ri + sin(θ)²·rr
(For θ=0, SSE=ii, which is our initial cost. We want to reduce this, if possible).
Now differentiate with respect to θ to give our equation
2·(cos(θ)² − sin(θ)²)·ri + 2·cos(θ)·sin(θ)·(rr − ii) = 0
Dividing through by 2·cos(θ)², we get the following quadratic in tan(θ);
ri + tan(θ)·(rr − ii) − tan(θ)²·ri = 0
Of the two solutions, the one that gives the minimum of SSE is tan(θ) = ((rr − ii) − √((rr − ii)² + 4·ri²))/(2·ri).
If ri=0, then we have two special cases;
If ri=0 and rr>=ii, then θ=0.
If ri=0 and rr<ii, then θ=π/2.
The final step in choosing the best value for v is to make sure that the real part of the first component is positive (any component could be used for this purpose), i.e.
Step 1: v′ = v·e^jθ
Step 2: if Re(v′_0) < 0, v′ = −v′
For example, rr = 2.534, ii = 1.466, ri = −1.204; solving gives θ = 0.577
After scaling, rr′ = 3.318, ii′ = 0.682, ri′ = 0
Note that minimising ii simultaneously maximises rr and sets ri to zero.
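An illustrative Python implementation of this scaling step is given below; the helper name best_scaling is mine, and the demonstration re-uses the eigenvector quoted before and after scaling in the comparison example that follows.

# "Best" scaling: rotate a complex eigenvector by e^{j*theta} to minimise
# the total imaginary energy, then flip the sign if the real part of the
# first component is negative.
import numpy as np

def best_scaling(v):
    vr, vi = v.real, v.imag
    rr, ii, ri = vr @ vr, vi @ vi, vr @ vi
    if np.isclose(ri, 0.0):
        theta = 0.0 if rr >= ii else np.pi / 2
    else:
        # minimum-SSE root of ri + tan(theta)*(rr - ii) - tan(theta)^2*ri = 0
        theta = np.arctan(((rr - ii) - np.hypot(rr - ii, 2 * ri)) / (2 * ri))
    v_out = v * np.exp(1j * theta)
    return -v_out if v_out[0].real < 0 else v_out

v = np.array([-0.698 + 0.195j, 0.689 - 0.0013j])   # eigenvector before scaling (quoted below)
print(np.round(best_scaling(v), 3))                # ~(0.718-0.093j, -0.682-0.098j)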
Comparison of Techniques—a Worked Example
Consider a two-input device with two outputs. There will be exact solutions for minimising each output individually, but only an approximate solution to simultaneous minimisation.
Output 1 transfer admittances: P1_1 = 0.472+0.00344j, P2_1 = 0.479−0.129j
Output 2 transfer admittances: P1_2 = −0.206−0.195j, P2_2 = 0.262+0.000274j
Form two error contribution matrices, M1_ij = conj(Pi_1)·Pj_1 for output 1 and M2_ij = conj(Pi_2)·Pj_2 for output 2. Each matrix has a zero determinant, i.e. an exact solution is possible for each output individually.
We now use the “tan theta” method to solve the three cases.
Now for the eigenvector method. I have two eigenvector solvers; one solves for all vectors simultaneously, and the other solves for a specific eigenvalue. They give numerically different answers when the vectors are complex (both answers are correct), but after applying the “best” scaling algorithm, both solvers give the same results as those above.
M1: eigenvalues, 0 and 0.469:
Eigenvector before scaling: (−0.698+0.195j, 0.689−0.0013j) or (0.724, −0.664-0.184j)
Eigenvector after scaling: (0.718−0.093j, −0.682−0.098j)
M2: eigenvalues, 0 and 0.149:
Eigenvector before scaling: (−0.5+0.46j, 0.734−0.0030j) or (0.498−0.462j, 0.724)
Eigenvector after scaling: (0.623−0.270j, 0.692+0.244j)
M1+M2: eigenvalues, 0.137 and 0.480:
Eigenvector before scaling: (−0.717+0.051j, 0.695−0.0007j) or (0.719, −0.693-0.049j)
Eigenvector after scaling: (0.719−0.024j, −0.694−0.025j)
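For illustration, the error contribution matrices of this example can be formed and solved directly with a standard Hermitian eigenvalue routine, as sketched below; the eigenvector printed here still carries an arbitrary phase until the “best” scaling step above is applied.

# Form M1 and M2 from the quoted transfer admittances and solve the joint
# (M1 + M2) problem; the lowest eigenvalue should be ~0.137.
import numpy as np

p_out1 = np.array([0.472 + 0.00344j, 0.479 - 0.129j])     # (P1_1, P2_1)
p_out2 = np.array([-0.206 - 0.195j, 0.262 + 0.000274j])   # (P1_2, P2_2)

M1 = np.outer(np.conj(p_out1), p_out1)    # M_ij = conj(P_i)*P_j for output 1
M2 = np.outer(np.conj(p_out2), p_out2)    # likewise for output 2

w, V = np.linalg.eigh(M1 + M2)            # Hermitian solver, ascending eigenvalues
print(np.round(w, 3))                     # ~[0.137, 0.48]
print(np.round(V[:, 0], 3))               # lowest-eigenvalue eigenvector (phase arbitrary)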
Adding a 3rd Input
Now consider the contributions from a third input channel.
Output 1 transfer admittance: P3_1 = −0.067−0.180j
Output 2 transfer admittance: P3_2 = 0.264+0.0014j
Add These Contributions to the Error Matrices
Now there is an exact solution to the joint problem, and M1+M2 has a zero eigenvalue.
(Note that M1 and M2 individually have two zero eigenvalues each—in other words they have a degenerate eigenvalue. There are two completely orthogonal solutions to the problem, and any linear sum of these two solutions is also a solution).
M1+M2: eigenvalues are 0, 0.218 and 0.506:
Eigenvector after scaling: (0.434−0.011j, −0.418+0.199j, 0.764+0.115j)
As illustrated above, for two inputs the “tan theta” method is quicker and simpler to implement; however, for three or four inputs the “scaled eigenvector” method is easier. Both methods produce the same result. For an exact solution, the number of input variables must be greater than the number of measurement points. By using eigenvalue analysis as a tool for the general problem, we get the “best” solution when no exact solution is available.
For the general ‘m’ input, ‘n’ output minimisation problem there are two principal variations on an algorithm to find the best m inputs. These may be referred to as the parallel “all at once” method and the serial “one at a time” method. In general, these may be combined at will. If m>n, then all routes end up with the same, exact answer (within rounding errors). If m<=n, then there are only approximate answers, and the route taken will affect the final outcome. The serial method is useful if m<=n, and some of the n outputs are more important than others. The important outputs are solved exactly, and those remaining get a best fit solution.
The Parallel, “All At Once” Algorithm
FIG. 7a is a block diagram of a parallel solver. One error matrix is formed, and the eigenvector corresponding to the lowest eigenvalue is chosen. If m>n, then the eigenvalue will be zero, and the result exact.
The Recursive or Sequential, “One At A Time” Algorithm
FIG. 7b is a block diagram of a recursive solver. An error matrix for the most important output is formed, and the eigenvectors corresponding to the (m−1) lowest eigenvalues are formed. These are used as new input vectors, and the process is repeated. The process ends with a 2×2 eigenvalue solution. Backtracking then reassembles the solution to the original problem.
As with all recursive algorithms, this process could be turned into an iterative (or sequential) process. For the first m−2 cycles, all the outputs have exact solutions. For the remaining cycle, the best linear combination of these solutions is found to minimise the remaining errors.
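A compact, purely illustrative Python sketch of both routes is given below; the function names are mine, priority is taken as the row order of the transfer matrix, and the demonstration uses the three-input, two-output data of the worked example that follows.

# Parallel "all at once" and serial "one at a time" solvers for the
# m-input, n-output minimisation problem.
import numpy as np

def solve_all_at_once(P):
    """P: (n_outputs, m_inputs). Return the unit eigenvector of the summed
    error matrix sum_k conj(p_k)^T p_k with the smallest eigenvalue."""
    w, V = np.linalg.eigh(P.conj().T @ P)
    return V[:, 0]

def solve_one_at_a_time(P):
    """Outputs in priority order (row 0 first): exact nulls for the leading
    outputs, best fit for the remainder."""
    n, m = P.shape
    if m <= 2 or n == 1:
        return solve_all_at_once(P)
    # Eigenvectors of the rank-1 error matrix of the most important output
    # for its (m-1) zero eigenvalues: an exact-null basis for that output.
    w, V = np.linalg.eigh(np.outer(P[0].conj(), P[0]))
    basis = V[:, :m - 1]
    coeffs = solve_one_at_a_time(P[1:] @ basis)   # repeat on the reduced problem
    v = basis @ coeffs                            # backtrack to the original inputs
    return v / np.linalg.norm(v)

P = np.array([[0.472 + 0.00344j, 0.479 - 0.129j, -0.067 - 0.180j],
              [-0.206 - 0.195j, 0.262 + 0.000274j, 0.264 + 0.0014j]])
v = solve_one_at_a_time(P)
print(np.round(np.abs(P @ v), 6))   # both outputs nulled (3 inputs > 2 outputs)
print(np.round(np.abs(v), 3))       # magnitudes match the quoted solution, ~(0.434, 0.463, 0.773)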
Output 1 transfer admittances: P1_1 = 0.472+0.00344j
Output 2 transfer admittances: P1_2 = −0.206−0.195j
Output 1 transfer admittances: P2_1 = 0.479−0.129j
Output 2 transfer admittances: P2_2 = 0.262+0.000274j
Output 1 transfer admittance: P3_1 = −0.067−0.180j
Output 2 transfer admittance: P3_2 = 0.264+0.0014j
All at once
M1+M2: eigenvalues are 0, 0.218 and 0.506:
Eigenvector after scaling: (0.434−0.011j, −0.418+0.199j, 0.764+0.115j)
One at a Time
Solve output 1, and then output 2. As 3>2 we should get the same answer.
M1: eigenvalues are 0, 0 and 0.506:
Eigenvector V1: (0.748, −0.596−0.165j, 0.085−0.224j)
Eigenvector V2: (−0.062+0.026j, 0.096+0.350j, 0.929)
New problem; select a and b such that a.V1+b.V2 minimises output 2.
New transfer admittances are;
pv1 = (P1_2 P2_2 P3_2)·V1 = −0.287−0.250j
pv2 = (P1_2 P2_2 P3_2)·V2 = 0.287+0.100j
We now repeat the process using these two transfer admittances as the outputs.
The new error matrix is M1′_ij = conj(pv_i)·pv_j, which has a zero determinant, i.e. an exact solution is possible.
M1′ eigenvalues, 0 and 0.237
Eigenvector after scaling: (0.608−0.145j, 0.772+0.114j)
Now combine V1 and V2 to get the inputs
(0.608−0.145j) V1 + (0.772+0.114j) V2 = (0.404−0.095j, −0.352+0.268j, 0.737−0.042j)
Normalise and scale the result: (0.434−0.011j, −0.418+0.199j, 0.764+0.115j)
Notice that this is the same as before, just as it should be.
Here we have 1 acoustic pressure output and a number of velocity outputs.
Acoustic scaled error matrix is M1, summed velocity scaled error matrix is M2.
All at Once
All n output error matrices are summed and the eigenvector corresponding to the lowest eigenvalue is found.
Eigenvalues(M1+M2)=1.146, 3.869, 13.173
Solution=(0.739−0.235j, 0.483+0.306j, 0.246+0.104j)
One at a Time
Actually, we solve just the acoustics problem, then do the rest all at once. That way, the acoustics problem is solved exactly.
Eigenvalues(M1)=0, 0, 10.714
V1=(0.770−0.199j, 0.376+0.202j, 0.377+0.206j)
V2=(0.097−0.071j, 0.765+0.010j, −0.632+0.0016j)
As V1 and V2 both correspond to a zero eigenvalue, a.V1+b.V2 is also an eigenvector corresponding to a zero eigenvalue—i.e. it is an exact solution to the acoustics problem.
Form the “all at once” minimisation for the structural problem using a and b.
M1′ eigenvalues, 1.222 and 4.172
Eigenvector after scaling: (0.984−0.016j, 0.113+0.115j)
Now combine V1 and V2 to get the inputs
(0.984−0.016j) V1+(0.113+0.115j) V2=(0.776−0.207j, 0.473+0.283j, 0.290−0.124j)
Normalise and scale the result: (0.755−0.211j, −0.466+0.270j, 0.246+0.104j)
Notice that this is similar, but not identical, to the “all at once” solution. (When extended to cover a range of frequencies, it gives a precise result to the acoustics problem, where numerical rounding causes the very slight non-zero pressure in the sequential case.)
As set out above, the two methods are not mutually exclusive, and the parallel method may be adopted at any point in the sequential process, particularly to finish the process. The sequential method is useful where the number of inputs does not exceed the number of outputs, particularly when some of the outputs are more important than others. The important outputs are solved exactly, and those remaining get a best fit solution.
As an alternative to the formal methodologies detailed above, the system may be self-calibrating.
The processor generates output signals for each exciter which are the results of filtering the input responses (i.e. measured responses). The input responses are filtered by matched filters which are created by the system processor 20 by inverting (time-reversing) the impulse responses. In other words, a first filtered signal tt1i is created by filtering the first input signal h1i using the time-reversed input signal h1i. Similarly, a second filtered signal tt2i is created by filtering the second input signal h2i using the time-reversed input signal h2i. The sum of the normalized matched filter responses (i.e. in-phase combination) reinforces the signal at the measurement point and the difference of the normalized matched filter responses (i.e. out-of-phase combination) results in cancellation at the measurement point.
As shown in
The spectrum of the time-reversed signal is the complex conjugate of the original:
original: x(t) −> X(f)
filter: y(t)=x(−t); Y(f)=conj(X(f))
This is approximated by adding a fixed delay, so
z(t)=x(T−t) if t<=T, or z(t)=0 if t>T
When the filter is applied to the signal (ignoring the approximation for now), the phase information is removed, but the amplitude information is reinforced.
y(t)*x(t) −> X(f)×Y(f) = |X(f)|²
(In fact, the resulting time response is the autocorrelation function).
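A toy numerical check of the in-phase and out-of-phase combinations described above is given below; the two impulse responses are invented, and the normalisation is a simple scalar divisor for illustration.

# Matched-filter two impulse responses, then sum (in-phase) and difference
# (out-of-phase) the filtered results at the measurement point.
import numpy as np

h1 = np.array([0.0, 1.0, 0.5, 0.2])     # measured response, source 1 (assumed)
h2 = np.array([0.0, 0.8, -0.4, 0.3])    # measured response, source 2 (assumed)

def matched(h):
    return h[::-1] / np.sum(np.abs(h))  # time reversal + simple normalisation

tt1 = np.convolve(h1, matched(h1))      # each is an autocorrelation peaking at the same lag
tt2 = np.convolve(h2, matched(h2))

print(np.round(tt1 + tt2, 3))           # in-phase sum: central peak reinforced
print(np.round(tt1 - tt2, 3))           # out-of-phase difference: central peak largely cancelled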
As shown in step S212, the filter amplitude may be adjusted, e.g. using a snapshot of 5 ms, 10 ms or other times. The filter is then applied to each impulse response to generate an output signal to be applied at each source (S214).
No doubt many other effective alternatives will occur to the skilled person. It will be understood that the invention is not limited to the described embodiments and encompasses modifications apparent to those skilled in the art lying within the spirit and scope of the claims appended hereto.
Number | Date | Country | Kind
0912919.8 | Jul 2009 | GB | national

Filing Document | Filing Date | Country | Kind | 371(c) Date
PCT/GB2010/051195 | 7/21/2010 | WO | 00 | 2/24/2012