Acoustic control device

Information

  • Patent Grant
  • 11700499
  • Patent Number
    11,700,499
  • Date Filed
    Sunday, February 27, 2022
  • Date Issued
    Tuesday, July 11, 2023
Abstract
An acoustic control device includes a control signal generating unit configured to perform signal processing on a sound source signal by a control filter, which is an adaptive FIR filter, to generate a control signal that controls speakers, and a control filter updating unit configured to update sequentially and adaptively the control filter in a manner that a difference between a sound pressure in a viewing area and a sound pressure in a quiet area becomes a predetermined level.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2021-044966 filed on Mar. 18, 2021, the contents of which are incorporated herein by reference.


BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to an acoustic control device.


Description of the Related Art

JP 2019-083408 A discloses a sound reproduction system. The sound reproduction system divides a vehicle compartment into a first region to a fourth region, and simultaneously outputs from speakers a first sound to be heard in the first region and a second sound to be heard in other regions. The sound reproduction system includes a first control filter and a second control filter. The first control filter is set to have a control characteristic in which the sound pressure of the first sound output from the speaker in each of other regions is lower than the sound pressure of the first sound output from the speaker in the first region. The second control filter is set to have a control characteristic in which the sound pressure of the second sound output from the speaker in each of the other regions is higher than the sound pressure of the second sound output from the speaker in the first region.


SUMMARY OF THE INVENTION

In the sound reproduction system disclosed in JP 2019-083408 A, the control filter is fixed. Therefore, when the sound field characteristic in the vehicle compartment changes, there is concern that the sound reproduction system cannot perform appropriate control.


An object of the present invention is to solve the above-described problems.


According to an aspect of the present invention, an acoustic control device controls a plurality of speakers in order for a sound pressure in a quiet area to become lower than a sound pressure in a viewing area, based on a sound source signal output from a sound source, and the acoustic control device includes sound detectors each configured to detect a sound in an installed area and output a detection sound signal, at least one of the sound detectors installed in the viewing area and at least one of the sound detectors installed in the quiet area, a control signal generating unit configured to perform signal processing on the sound source signal by a control filter, which is an adaptive finite impulse response filter, to generate a control signal that controls the speakers, a sound field characteristic learning unit configured to learn as a sound field filter a sound field characteristic between the speakers and the sound detectors, a reference signal generating unit configured to perform signal processing on the sound source signal by the sound field filter to generate a reference signal, and a control filter updating unit configured to update sequentially and adaptively the control filter based on the reference signal and the detection sound signal in a manner that a difference between the sound pressure in the viewing area and the sound pressure in the quiet area becomes a predetermined level.


The acoustic control device according to the present invention can perform appropriate control even when the sound field characteristic in the vehicle compartment has changed.


The above and other objects, features, and advantages of the present invention will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which a preferred embodiment of the present invention is shown by way of illustrative example.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating an overview of acoustic control performed by an acoustic control device;



FIG. 2 is a control block diagram of the acoustic control device;



FIGS. 3A and 3B are graphs each illustrating changes over time of a sound signal detected by a microphone;



FIGS. 4A and 4B are graphs each illustrating changes over time of a sound signal detected by the microphone;



FIG. 5 is a graph illustrating changes over time of the difference between the sound pressure of the viewing area and the sound pressure of the quiet area;



FIG. 6 is a control block diagram of the acoustic control device;



FIGS. 7A and 7B are diagrams each illustrating a distribution of sound pressure in the vehicle compartment;



FIG. 8 is a control block diagram of the acoustic control device; and



FIGS. 9A and 9B are diagrams each illustrating an equivalent sound field characteristic.





DESCRIPTION OF THE INVENTION
First Embodiment


FIG. 1 is a diagram illustrating an overview of sound control performed by an acoustic control device 10.


The acoustic control device 10 according to the present embodiment controls a plurality of speakers 14 provided in a vehicle 12. As a result, the sound pressure in each quiet area Q set in the vehicle compartment 16 is made lower than the sound pressure in a viewing (looking and listening) area V set in the vehicle compartment 16. A vehicle occupant in the viewing area V can therefore clearly hear the sound of a sound source 18, whereas the sound of the sound source 18 can hardly be heard by vehicle occupants in the quiet areas Q. The sound source 18 is a device for playing, for example, music, radio broadcasting, television broadcasting, voice guidance of a car navigation system, or the like.



FIG. 2 is a control block diagram of the acoustic control device 10. The acoustic control device 10 controls N speakers 14. Hereinafter, it is assumed that each of the N speakers 14 is assigned a unique number among 1 to N. When the speaker 14 of a specific number is described, for example, it is described as an n-th speaker 14.


Detection sound signals p_V are input to the acoustic control device 10 from M microphones 20 installed in the viewing area V. In addition, detection sound signals p_Q are input to the acoustic control device 10 from Z microphones 20 installed in the quiet areas Q. The microphone 20 corresponds to a sound detector of the present invention.


Hereinafter, it is assumed that a unique number among V1 to VM is assigned to each of the M microphones 20 installed in the viewing area V. When a microphone 20 with a unique number is described, for example, it is described as a Vm-th microphone 20. Similarly, it is assumed that a unique number among Q1 to QZ is assigned to the Z microphones 20 installed in the quiet areas Q. When the microphone 20 with a unique number is described, for example, it is described as a Qz-th microphone 20.


Hereinafter, a sound field characteristic between each speaker 14 and each microphone 20 is denoted by G. For example, the sound field characteristic between the n-th speaker 14 and the Vm-th microphone 20 is denoted by G_n,Vm. Similarly, the sound field characteristic between the n-th speaker 14 and the Qz-th microphone 20 is denoted by G_n,Qz.


The acoustic control device 10 includes a plurality of signal processing units 22 corresponding to the speakers 14. The respective signal processing units 22 output control signals u_1 to u_N for controlling the respective speakers 14. Each of the signal processing units 22 includes a control signal generating unit 24, a reference signal generating unit 26, a control filter updating unit 28, an estimated detection sound signal generating unit 30, a signal extraction unit 32, a differential signal generating unit 34, and a sound field filter updating unit 36. The estimated detection sound signal generating unit 30, the signal extraction unit 32, the differential signal generating unit 34, and the sound field filter updating unit 36 constitute a sound field characteristic learning unit 37.


Each of the control signal generating units 24 performs signal processing on a sound source signal s input from the sound source 18 using the control filters W_1 to W_N to generate the control signals u_1 to u_N. The control filters W_1 to W_N are FIR (Finite Impulse Response) filters. For example, the control filter W_n is denoted by the following vector, where the superscript T denotes transposition.

W_n = [W_n1, W_n2, …, W_nL]^T  (1)


The sound source signal s can be indicated by the following time series vector. In the expression, t denotes a discrete time.

s = [s_t, s_t−1, …, s_t−L+1]^T  (2)
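A minimal sketch of this step is given below, assuming a random stand-in source signal and a trivial pass-through control filter (names such as `u_n` and the sample rate are illustrative, not taken from the patent): the control signal for the n-th speaker is the FIR filtering of s by W_n, i.e. u_n(t) = W_n^T [s_t, …, s_t−L+1].

```python
# Sketch: applying an L-tap FIR control filter W_n to the source signal s,
# in the spirit of expressions (1) and (2). Values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
L = 64                         # number of filter taps (assumed)
fs = 8000                      # assumed sample rate [Hz]
s = rng.standard_normal(fs)    # stand-in sound source signal s

W_n = np.zeros(L)
W_n[0] = 1.0                   # trivial initial control filter (pass-through)

# Control signal for the n-th speaker: u_n = W_n filtered over s (causal FIR)
u_n = np.convolve(s, W_n)[: len(s)]
print(u_n.shape)
```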


The reference signal generating units 26 respectively generate reference signals r_1 to r_N by performing signal processing on the sound source signal s input from the sound source 18 using sound field filters Ĝ_1 to Ĝ_N. The sound field filters Ĝ_1 to Ĝ_N are FIR filters, and the sound field filter Ĝ_n can be expressed by the following vector.

Ĝ_n = [Ĝ_n,V1, Ĝ_n,V2, …, Ĝ_n,VM, Ĝ_n,Q1, Ĝ_n,Q2, …, Ĝ_n,QZ]  (3)


For example, the element Ĝ_n,V1 of the above vector indicates an identified value of the sound field characteristic G_n,V1 between the n-th speaker 14 and the V1-th microphone 20.


The respective control filter updating units 28 update the control filters W_1 to W_N based on the reference signals r_1 to r_N and the detection sound signals p_V1 to p_VM and p_Q1 to p_QZ detected by the microphones 20. For example, the control filter W_n is updated such that the following evaluation function J is minimized.









J = ej² = [(1/M) Σ_{m=1}^{M} |p_Vm| − (1/Z) Σ_{z=1}^{Z} |p_Qz| − D]²  (4)







When the evaluation function J becomes the minimum (J=0), the difference between the average value of the sound pressure in the viewing area V and the average value of the sound pressure in the quiet areas Q becomes a predetermined target value D.
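A minimal sketch of the scalar error inside expression (4) follows; the helper name `error_ej` and the toy values are assumptions for illustration, not the patent's implementation.

```python
# Sketch: the error ej of expression (4) — mean |p_Vm| over the viewing-area
# microphones minus mean |p_Qz| over the quiet-area microphones minus the target D.
import numpy as np

def error_ej(p_V: np.ndarray, p_Q: np.ndarray, D: float) -> float:
    """p_V: current samples of the M viewing-area microphones,
       p_Q: current samples of the Z quiet-area microphones."""
    return float(np.mean(np.abs(p_V)) - np.mean(np.abs(p_Q)) - D)

# J = ej**2 is driven toward zero by the adaptive update of W_1..W_N.
print(error_ej(np.array([0.5, 0.6]), np.array([0.1, 0.2]), D=0.3))
```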


The detection sound signal p_Vm of the Vm-th microphone 20 in the viewing area V and the detection sound signal p_Qz of the Qz-th microphone 20 in the quiet area Q are expressed by the following expressions. In the following expressions, “*” indicates a convolution operation.













p_Vm = Σ_{n=1}^{N} s * W_n * G_n,Vm,  p_Qz = Σ_{n=1}^{N} s * W_n * G_n,Qz  (5)







In order to update the control filters W_1 to W_N so as to minimize the evaluation function J, the control filters W_1 to W_N may be updated along the negative direction of the gradient of the evaluation function J with respect to the control filters W_1 to W_N. For example, the gradient ∂J/∂W_n of the evaluation function J with respect to the control filter W_n can be expressed by the following expression. Note that sgn(p_Vm) and sgn(p_Qz) in the expression are sign functions.












∂J/∂W_n = 2 ej [(1/M) Σ_{m=1}^{M} sgn(p_Vm) · s(t) * G_n,Vm − (1/Z) Σ_{z=1}^{Z} sgn(p_Qz) · s(t) * G_n,Qz]  (6)







When the sound field filters Ĝ_n,Vm are substituted for the sound field characteristics G_n,Vm, and the sound field filters Ĝ_n,Qz are substituted for the sound field characteristics G_n,Qz, an update expression for the control filter W_n can be expressed by the following expression, where μw is a step size parameter.










W_n(t+1) = W_n(t) − 2 μw ej(t) [(1/M) Σ_{m=1}^{M} sgn(p_Vm(t)) × s(t) * Ĝ_n,Vm(t) − (1/Z) Σ_{z=1}^{Z} sgn(p_Qz(t)) × s(t) * Ĝ_n,Qz(t)]  (7)
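A minimal sketch of this sign-based gradient step follows, assuming the filtered reference vectors s*Ĝ_n,Vm and s*Ĝ_n,Qz are already available as L-sample arrays; the function name and data layout are illustrative assumptions, not the patented implementation.

```python
# Sketch of one update step of expression (7) for a single control filter W_n.
import numpy as np

def update_W_n(W_n, ej, p_V, p_Q, r_V, r_Q, mu_w):
    """W_n: (L,) control filter, ej: scalar error of expression (4),
       p_V: (M,) viewing-area mic samples, p_Q: (Z,) quiet-area mic samples,
       r_V: (M, L) filtered-reference vectors s*Ĝ_n,Vm,
       r_Q: (Z, L) filtered-reference vectors s*Ĝ_n,Qz."""
    grad_term = (np.sign(p_V)[:, None] * r_V).mean(axis=0) \
              - (np.sign(p_Q)[:, None] * r_Q).mean(axis=0)
    return W_n - 2.0 * mu_w * ej * grad_term

# One step with toy data
rng = np.random.default_rng(1)
L, M, Z = 8, 2, 2
W = update_W_n(np.zeros(L), 0.4, rng.standard_normal(M), rng.standard_normal(Z),
               rng.standard_normal((M, L)), rng.standard_normal((Z, L)), mu_w=1e-3)
print(W)
```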







The evaluation function J can also be set as follows.









J = ej² = [(1/M) Σ_{m=1}^{M} p_Vm² − (1/Z) Σ_{z=1}^{Z} p_Qz² − D]²  (8)







When the evaluation function J becomes minimum (J=0), the difference between the average value of the acoustic energy in the viewing area V and the average value of the acoustic energy in the quiet areas Q becomes the predetermined target value D. In this case, the update expression for the control filter W_n can be expressed by the following expression.










W_n(t+1) = W_n(t) − 2 μw ej(t) [(1/M) Σ_{m=1}^{M} p_Vm(t) × s(t) * Ĝ_n,Vm(t) − (1/Z) Σ_{z=1}^{Z} p_Qz(t) × s(t) * Ĝ_n,Qz(t)]  (9)







By minimizing the evaluation function J of the above expression (4), the difference between the average value of the sound pressure in the viewing area V and the average value of the sound pressure in the quiet areas Q can be brought closer to the predetermined target value D. Further, by minimizing the evaluation function J of the above expression (8), the difference between the average value of the acoustic energy of the viewing area V and the average value of the acoustic energy of the quiet areas Q can be brought closer to the predetermined target value D. However, the sound pressure itself in the viewing area V is not limited.
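For comparison with the magnitude-based error shown earlier, a minimal sketch of the energy-based error inside expression (8) is given below; the helper name is an assumption for illustration.

```python
# Sketch: the alternative error of expression (8), based on mean acoustic
# energy (squared sound pressure) instead of mean magnitude.
import numpy as np

def error_ej_energy(p_V: np.ndarray, p_Q: np.ndarray, D: float) -> float:
    return float(np.mean(p_V**2) - np.mean(p_Q**2) - D)

print(error_ej_energy(np.array([0.5, 0.6]), np.array([0.1, 0.2]), D=0.2))
```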


Therefore, the magnitudes of the control filters W_1 to W_N may be limited such that the sound pressure in the viewing area V does not become excessive. For example, the magnitude of the control filter W_n is limited as follows, where η is an attenuation coefficient (0 < η < 1) and Wth is a predetermined threshold value.

If |W_n(t)|² > Wth, then W_n(t) = W_n(t) × η  (10)
|W_n(t)|² = W_n^T W_n
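A minimal sketch of this norm limit follows; the default η value is an illustrative assumption.

```python
# Sketch of expression (10): if the squared norm of W_n exceeds the threshold Wth,
# the filter is scaled down by the attenuation coefficient η.
import numpy as np

def limit_W_n(W_n: np.ndarray, Wth: float, eta: float = 0.99) -> np.ndarray:
    if W_n @ W_n > Wth:        # |W_n|^2 = W_n^T W_n
        return W_n * eta
    return W_n

print(limit_W_n(np.array([2.0, 2.0]), Wth=1.0))
```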


The respective estimated detection sound signal generating units 30 perform signal processing on the control signals u_1 to u_N by the sound field filters Ĝ_1 to Ĝ_N to generate estimated detection sound signals ŷ_1 to ŷ_N.


The respective signal extraction units 32 extract components of the reproduced sounds output from the speakers 14 from the detection sound signals p_V1 to p_VM and p_Q1 to p_QZ detected by the microphones 20 and output the extracted components as target detection sound signals h_1 to h_N.


For example, the detection sound signal p_Vm detected by the Vm-th microphone 20 is generated in accordance with a synthetic sound obtained by synthesizing the reproduced sounds output from the first to N-th speakers 14 at the position of the Vm-th microphone 20. Therefore, the component p_Vm,n of the reproduced sound output from the n-th speaker 14, contained in the detection sound signal p_Vm, can be obtained by the following expression.









p_Vm,n = p_Vm − Σ_{i=1, i≠n}^{N} s * W_i * G_i,Vm  (11)







Here, when the sound field filters Ĝ_i,Vm are substituted for the sound field characteristics G_i,Vm, the component p_Vm,n can be obtained by the following expression.









p_Vm,n = p_Vm − Σ_{i=1, i≠n}^{N} s * W_i * Ĝ_i,Vm  (12)







The target detection sound signals h_1 to h_N can be expressed by vectors. For example, h_n can be expressed as follows.

h_n = [p_V1,n, p_V2,n, …, p_VM,n, p_Q1,n, p_Q2,n, …, p_QZ,n]  (13)
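A minimal sketch of the extraction in expressions (12)-(13) follows: the contribution of the other speakers, estimated through the learned filters Ĝ_i,Vm, is subtracted from the microphone signal so that only the n-th speaker's component remains. Names such as `estimate_contribution` are illustrative assumptions, not the patent's terminology.

```python
# Sketch: extracting the n-th speaker's component from one microphone sample.
import numpy as np

def estimate_contribution(s_vec: np.ndarray, W_i: np.ndarray, G_hat_i: np.ndarray) -> float:
    """One sample of s * W_i * Ĝ_i,Vm (cascade of two FIR filters, truncated)."""
    cascade = np.convolve(W_i, G_hat_i)[: len(s_vec)]
    return float(s_vec[: len(cascade)] @ cascade)

def target_component(p_Vm: float, s_vec, W_all, G_hat_all, n: int) -> float:
    """p_Vm,n = p_Vm - sum over i != n of s * W_i * Ĝ_i,Vm (expression (12))."""
    others = sum(estimate_contribution(s_vec, W_all[i], G_hat_all[i])
                 for i in range(len(W_all)) if i != n)
    return p_Vm - others

rng = np.random.default_rng(2)
print(target_component(0.7, rng.standard_normal(16),
                       [rng.standard_normal(4) for _ in range(3)],
                       [rng.standard_normal(4) for _ in range(3)], n=0))
```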


The respective differential signal generating units 34 generate differential signals d_1 to d_N based on the estimated detection sound signals ŷ_1 to ŷ_N and the target detection sound signals h_1 to h_N.


Each of the differential signal generating units 34 includes an inverting amplifier 34a and an adder 34b. The estimated detection sound signals −ŷ_1 to −ŷ_N, whose polarities are inverted by the respective inverting amplifiers 34a, and the target detection sound signals h_1 to h_N are added by the adders 34b to generate the differential signals d_1 to d_N.


The respective sound field filter updating units 36 update the sound field filters Ĝ_1 to Ĝ_N based on the control signals u_1 to u_N and the differential signals d_1 to d_N. For example, the sound field filter Ĝ_n,Vm is updated such that the following evaluation function I is minimized.









I = ei² = (p_Vm − Σ_{i=1, i≠n}^{N} s * W_i * Ĝ_i,Vm − s * W_n * Ĝ_n,Vm)²  (14)







In order to update the sound field filters Ĝ_1 to Ĝ_N such that the evaluation function I is minimized, the sound field filters Ĝ_1 to Ĝ_N may be adaptively updated along the negative direction of the gradient of the evaluation function I with respect to the sound field filters Ĝ_1 to Ĝ_N. For example, an update expression for the sound field filter Ĝ_n,Vm can be expressed by the following expression, where μG is a step size parameter.

Ĝ_n,Vm(t+1) = Ĝ_n,Vm(t) + 2 μG ei(t) × s(t) * W_n(t)  (15)
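A minimal sketch of this identification step follows, assuming `s_Wn` holds the L most recent samples of s filtered by W_n (an assumption for illustration, used as the update regressor).

```python
# Sketch of expression (15): the sound field filter Ĝ_n,Vm is nudged along the
# correlation between the error ei and the control signal s * W_n.
import numpy as np

def update_G_hat(G_hat: np.ndarray, ei: float, s_Wn: np.ndarray, mu_G: float) -> np.ndarray:
    """Ĝ_n,Vm(t+1) = Ĝ_n,Vm(t) + 2 μG ei(t) (s*W_n)(t)."""
    return G_hat + 2.0 * mu_G * ei * s_Wn

rng = np.random.default_rng(3)
print(update_G_hat(np.zeros(8), ei=0.2, s_Wn=rng.standard_normal(8), mu_G=1e-3))
```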


Advantageous Effects


FIGS. 3A and 3B are graphs each showing changes over time of a detection sound signal detected by the microphone 20 installed in the viewing area V and the microphone 20 installed in the quiet area Q when the sound control of the present embodiment is OFF. FIG. 3A is a graph illustrating changes over time of a detection sound signal detected by the microphone 20 installed in the viewing area V. FIG. 3B is a graph illustrating changes over time of a detection sound signal detected by the microphone 20 installed in the quiet area Q.



FIGS. 4A and 4B are graphs each showing changes over time of a detection sound signal detected by the microphone 20 installed in the viewing area V and the microphone 20 installed in the quiet area Q when the sound control of the present embodiment is ON. FIG. 4A is a graph illustrating changes over time of a detection sound signal detected by the microphone 20 installed in the viewing area V. FIG. 4B is a graph illustrating changes over time of a detection sound signal detected by the microphone 20 installed in the quiet area Q.


As shown in FIGS. 3A and 3B, when the sound control is OFF, the difference between the amplitude of the detection sound signals detected by the microphones 20 installed in the viewing area V and the amplitude of the detection sound signals detected by the microphones 20 installed in the quiet areas Q is small. On the other hand, as shown in FIGS. 4A and 4B, when the sound control is ON, the difference between the amplitude of the detection sound signals detected by the microphones 20 installed in the viewing area V and the amplitude of the detection sound signals detected by the microphones 20 installed in the quiet areas Q is large.



FIG. 5 is a graph illustrating changes over time of the difference between the sound pressure of the viewing area V and the sound pressure of the quiet areas Q. As shown in FIG. 5, when the sound control is OFF, the change in the difference between the sound pressure of the viewing area V and the sound pressure of the quiet areas Q is small even as time elapses. On the other hand, when the sound control is ON, the difference between the sound pressure of the viewing area V and the sound pressure of the quiet areas Q increases with time.


Since it takes time until the control filters W_1 to W_N converge after the acoustic control is started, as illustrated in FIG. 5, a difference between the sound pressure of the viewing area V and the sound pressure of the quiet areas Q is small immediately after the acoustic control is started. This can be improved by setting initial values of the control filters W_1 to W_N.


Assuming that the number of speakers 14 is N, the number of microphones 20 in the viewing area V is M, and the number of microphones 20 in the quiet areas Q is Z, the acoustic control device 10 of the present embodiment requires N control filters W_1 to W_N and N×(M+Z) sound field filters Ĝ_1 to Ĝ_N.


In the acoustic control device 10 of the present embodiment, the sound field filter updating unit 36 learns the sound field characteristics G_1 to G_N as the sound field filters Ĝ_1 to Ĝ_N. Therefore, even when the sound field characteristics G_1 to G_N change, the sound field filters Ĝ_1 to Ĝ_N can follow the change, and the performance of acoustic control can be maintained.


Further, in the acoustic control device 10 of the present embodiment, the control filter updating unit 28 sequentially and adaptively updates the control filters W_1 to W_N such that the difference between the average of the magnitudes (sound pressures) of the detection sound signals p_V1 to p_VM detected by the microphones 20 installed in the viewing area V and the average of the magnitudes (sound pressures) of the detection sound signals p_Q1 to p_QZ detected by the microphones 20 installed in the quiet areas Q, becomes the target value D. Thus, the sound pressures in the quiet areas Q can be made smaller than the sound pressure in the viewing area V.


Each of the sound field characteristic learning units 37 of the acoustic control device 10 according to the present embodiment includes a signal extraction unit 32. The respective signal extraction units 32 extract components of the reproduced sounds output from the speakers 14 from the detection sound signals p_V1 to p_VM and p_Q1 to p_QZ detected by the microphones 20 and output the extracted components as target detection sound signals h_1 to h_N. Accordingly, the respective sound field characteristic learning units 37 can generate the sound field filters Ĝ_1 to Ĝ_N based on the components (the target detection sound signals h_1 to h_N) of the reproduced sounds output from the respective speakers 14. Therefore, it is possible to increase the learning accuracy of the sound field characteristics.


Second Embodiment


FIG. 6 is a control block diagram of the acoustic control device 10. The acoustic control device 10 according to the present embodiment is different from the acoustic control device 10 according to the first embodiment in that a corrected sound source signal generating unit 38 is provided in each signal processing unit 22.


In the present embodiment, it is assumed that Mv viewing areas V are set and one microphone 20 is installed in each viewing area V. In addition, it is assumed that Zq quiet areas Q are set and one microphone 20 is installed in each quiet area Q.


The respective corrected sound source signal generating units 38 perform signal processing on the sound source signal s using fixed filters F_1 to F_N to generate corrected sound source signals s′_1 to s′_N.
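A minimal sketch of this step follows; the filter lengths and values are placeholders, not design data, and the corrected source signal is simply s'_n = F_n * s.

```python
# Sketch: each channel filters the source signal with its fixed filter F_n to
# obtain the corrected sound source signal s'_n (second embodiment).
import numpy as np

rng = np.random.default_rng(4)
s = rng.standard_normal(4096)                      # stand-in sound source signal
F = [rng.standard_normal(32) for _ in range(4)]    # assumed fixed filters F_1..F_4

s_corrected = [np.convolve(s, F_n)[: len(s)] for F_n in F]   # s'_1..s'_4
print(len(s_corrected), s_corrected[0].shape)
```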


The respective fixed filters F_1 to F_N are set according to the sound field characteristic G_V between the speakers 14 and grid points set in the viewing areas V and the sound field characteristic G_Q between the speakers 14 and grid points set in the quiet areas Q.


When the target area of acoustic control in the vehicle compartment 16 is divided using a grid and M grid points are set, the sound field characteristic G can be expressed by the following matrix.









G = [ G_1,1 ⋯ G_1,M
        ⋮   ⋱   ⋮
      G_N,1 ⋯ G_N,M ]  (16)







Elements corresponding to each grid point in the viewing area V are extracted from the sound field characteristic G to set a matrix G_V, and elements corresponding to each grid point in the quiet area Q are extracted to set a matrix G_Q.


The acoustic energy Eω_V of a frequency ω in the viewing areas V can be expressed by the following expression. In the expression, H denotes a conjugate transpose.











Eω_V = Σ_{m=1}^{Mv} |pω_Vm|² = Fω^H Gω_V^H Gω_V Fω  (17)







That is, the acoustic energy Eω_V can be expressed in a matrix form indicating the sum of the squares of the sound pressure. The acoustic energy Eω_Q of the frequency ω in the quiet area Q can be expressed similarly.


The frequency characteristic Fω at the frequency ω is set such that the following evaluation function K becomes maximum.









K = Eω_V / Eω_Q = (Fω^H Gω_V^H Gω_V Fω) / (Fω^H Gω_Q^H Gω_Q Fω)  (18)







The frequency characteristic Fω is an eigenvector corresponding to the maximum eigenvalue of the matrix Gω_V^H Gω_V [Gω_Q^H Gω_Q]⁻¹. Each frequency characteristic Fω is obtained for all frequencies of the control object, and the fixed filters F_1 to F_N are obtained by performing an inverse fast Fourier transform on each frequency characteristic Fω.
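A minimal sketch of this fixed-filter design follows. The transfer-function matrices here are random placeholders rather than measured data, and the grid sizes are assumed; per frequency the eigenvector with the largest eigenvalue of Gω_V^H Gω_V [Gω_Q^H Gω_Q]⁻¹ is taken, and the time-domain filters are obtained by an inverse FFT.

```python
# Sketch: eigenvector-based design of the fixed filters F_1..F_N (expression (18)).
import numpy as np

rng = np.random.default_rng(5)
n_freq, N, Mg, Qg = 128, 4, 6, 6            # frequency bins, speakers, grid points
F_freq = np.zeros((n_freq, N), dtype=complex)

for w in range(n_freq):
    G_V = rng.standard_normal((Mg, N)) + 1j * rng.standard_normal((Mg, N))
    G_Q = rng.standard_normal((Qg, N)) + 1j * rng.standard_normal((Qg, N))
    A = G_V.conj().T @ G_V                  # Gω_V^H Gω_V
    B = G_Q.conj().T @ G_Q                  # Gω_Q^H Gω_Q
    vals, vecs = np.linalg.eig(A @ np.linalg.inv(B))
    F_freq[w] = vecs[:, np.argmax(vals.real)]   # eigenvector of the largest eigenvalue

F_time = np.fft.ifft(F_freq, axis=0).real   # one fixed-filter impulse response per column
print(F_time.shape)
```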


An update expression for the control filter W_n updated in the control filter updating unit 28 can be expressed by the following expression, where μw is a step size parameter.










W_n(t+1) = W_n(t) − 2 μw ej(t) [(1/Mv) Σ_{m=1}^{Mv} sgn(p_Vm(t)) × s(t) * F_n * Ĝ_n,Vm(t) − (1/Zq) Σ_{z=1}^{Zq} sgn(p_Qz(t)) × s(t) * F_n * Ĝ_n,Qz(t)]  (19)







In the expression, ej(t) is expressed by the following expression.










ej(t) = (1/M) Σ_{m=1}^{M} |p_Vm(t)| − (1/Z) Σ_{z=1}^{Z} |p_Qz(t)| − D  (20)







In the expression, D is a predetermined target value of a difference between an average value of sound pressure in the viewing areas V and an average value of sound pressure in the quiet areas Q.


An update expression for the sound field filter Ĝ_n,Vm updated by the sound field filter updating unit 36 can be expressed by the following expression. In the expression, μG is a step size parameter.

Ĝ_n,Vm(t+1) = Ĝ_n,Vm(t) + 2 μG × ei(t) × s(t) * F_n * W_n(t)  (21)


In the above expression, ei(t) is expressed by the following expression.








ei(t) = p_Vm(t) − Σ_{i=1, i≠n}^{N} s(t) * W_i(t) * Ĝ_i,Vm(t) − s(t) * W_n(t) * Ĝ_n,Vm(t)  (22)






Advantageous Effects


FIGS. 7A and 7B are diagrams illustrating the distribution of sound pressure levels in the vehicle compartment 16. The darker the color is, the higher the sound pressure level is. FIG. 7A shows the distribution of sound pressure level in the vehicle compartment 16 when the acoustic control according to the present embodiment is OFF. FIG. 7B shows the distribution of sound pressure level in the vehicle compartment 16 when the acoustic control according to the present embodiment is ON.


As shown in FIG. 7A, when the acoustic control according to the present embodiment is OFF, the sound pressure levels in the viewing area V and the quiet areas Q are high. On the other hand, as shown in FIG. 7B, when the sound control according to the present embodiment is ON, the sound pressure level in the viewing area V is high, but the sound pressure levels in the quiet areas Q are low.


Assuming that the number of speakers 14 is N, the number of viewing areas V is Mv, the number of quiet areas Q is Zq, and one microphone 20 is provided in each of the viewing areas V and the quiet areas Q, the acoustic control device 10 according to the present embodiment requires N control filters W_1 to W_N and N×(Mv+Zq) sound field filters Ĝ_1 to Ĝ_N.


As a result, the number of microphones 20 per viewing area V and quiet area Q can be reduced, and the configuration of the acoustic control device 10 can be simplified. In addition, the range of acoustic control per microphone 20 can be expanded.


Third Embodiment


FIG. 8 is a control block diagram of an acoustic control device 10. In the present embodiment, a common signal processing unit 22 is provided for N speakers 14. A control signal generating unit 24 of the signal processing unit 22 includes a common signal generating unit 25 and a control signal correcting unit 40.


The single common signal generating unit 25 is provided for the N speakers 14. The common signal generating unit 25 performs signal processing on a sound source signal s by a control filter W to generate a common control signal v.


One control signal correcting unit 40 is provided for each of the N speakers 14. The respective control signal correcting units 40 perform signal processing on the common control signal v by the fixed filters F_1 to F_N to generate the control signals u_1 to u_N. The method of obtaining the fixed filters F_1 to F_N is the same as that in the second embodiment.
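A minimal sketch of this signal path follows; the filter values are placeholders. One common control filter W produces the common control signal v = W * s, and each speaker's control signal is then u_n = F_n * v.

```python
# Sketch of the third embodiment's signal path: common control filter, then
# per-speaker fixed filters.
import numpy as np

rng = np.random.default_rng(6)
s = rng.standard_normal(4096)                     # sound source signal
W = rng.standard_normal(64)                       # single adaptive control filter
F = [rng.standard_normal(32) for _ in range(4)]   # fixed filters F_1..F_4

v = np.convolve(s, W)[: len(s)]                   # common control signal v
u = [np.convolve(v, F_n)[: len(s)] for F_n in F]  # control signals u_1..u_4
print(len(u), u[0].shape)
```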


In the present embodiment, it is assumed that Mv viewing areas V are set and one microphone 20 is installed in each viewing area V. In addition, it is assumed that Zq quiet areas Q are set and one microphone 20 is installed in each quiet area Q.


An update expression for the control filter W updated by the control filter updating unit 28 can be expressed as follows.










W(t+1) = W_n(t) − 2 μw ej(t) [(1/Mv) Σ_{m=1}^{Mv} sgn(p_Vm(t)) × s(t) * Ĝ_n,Vm(t) − (1/Zq) Σ_{z=1}^{Zq} sgn(p_Qz(t)) × s(t) * Ĝ_n,Qz(t)]  (23)







In the expression, ej(t) is expressed by the following expression.










ej(t) = (1/M) Σ_{m=1}^{M} |p_Vm(t)| − (1/Z) Σ_{z=1}^{Z} |p_Qz(t)| − D  (24)







In the expression, D is a predetermined target value of a difference between an average value of sound pressure in the viewing areas V and an average value of sound pressure in the quiet areas Q.


An update expression for the sound field filter Ĝ updated by the sound field filter updating unit 36 can be expressed as follows.

Ĝ(t+1) = Ĝ(t) + 2 μG ei(t) × s(t) * W(t)  (25)


In the above expression, ei(t) is expressed by the following expression.











ei(t) = p_Vm(t) − Σ_{i=1, i≠n}^{N} s(t) * W_i(t) * Ĝ_i,Vm(t) − s(t) * W_n(t) * Ĝ_n,Vm(t)  (26)






By updating the sound field filter Ĝ based on this expression, the sound field filter Ĝ converges to an equivalent sound field characteristic Geq.



FIGS. 9A and 9B are diagrams illustrating the equivalent sound field characteristic Geq. As illustrated in FIG. 9A, sound transfer paths between the speakers 14 and the Vm-th microphone 20 have sound field characteristics G_1,Vm to G_N,Vm. As illustrated in FIG. 9B, when a combination of the fixed filters F_1 to F_N and the N speakers 14 is set as one virtual speaker 42, the sound field characteristic of a transfer path between the virtual speaker 42 and the Vm-th microphone 20 can be expressed by the equivalent sound field characteristic Geq.
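The following is a minimal sketch of one way to read FIG. 9B; the assumption that the virtual speaker's equivalent characteristic toward the Vm-th microphone is the sum over speakers of F_n convolved with G_n,Vm is an illustration, not stated explicitly in the text.

```python
# Sketch (assumed reading of FIG. 9B): equivalent characteristic of the virtual
# speaker toward one microphone, as the sum of the fixed-filter/path cascades.
import numpy as np

rng = np.random.default_rng(7)
N, Lf, Lg = 4, 32, 64
F = [rng.standard_normal(Lf) for _ in range(N)]      # fixed filters F_1..F_N
G_Vm = [rng.standard_normal(Lg) for _ in range(N)]   # characteristics G_1,Vm..G_N,Vm

G_eq_Vm = sum(np.convolve(F[n], G_Vm[n]) for n in range(N))   # equivalent path Geq
print(G_eq_Vm.shape)
```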


Advantageous Effects

In the acoustic control device 10 according to the present embodiment, a common signal processing unit 22 is provided for the plurality of speakers 14. The signal processing unit 22 generates the common control signal v, and the control signal correcting units 40 provided corresponding to the speakers 14 perform signal processing on the common control signal v by the fixed filters F_1 to F_N to generate the control signals u_1 to u_N. Thus, it is sufficient for the signal processing unit 22 to have a single control filter W and a single sound field filter Ĝ. That is, in the acoustic control device 10 according to the present embodiment, the number of control filters W and sound field filters Ĝ that need to be updated can be reduced, and thus the amount of calculation in acoustic control can be reduced.


[Technical Concepts Obtained from Embodiments]


A description will be given below concerning technical concepts that are capable of being grasped from the above-described embodiments.


The acoustic control device (10) that controls the plurality of speakers (14) in order for a sound pressure in the quiet area (Q) to become lower than a sound pressure in the viewing area (V), based on a sound source signal output from the sound source (18), the acoustic control device including the sound detectors (20) each configured to detect a sound in an installed area and output a detection sound signal, at least one of the sound detectors installed in the viewing area and at least one of the sound detectors installed in the quiet area, the control signal generating unit (24) configured to perform signal processing on the sound source signal by a control filter, which is an adaptive finite impulse response filter, to generate a control signal that controls the speakers, the sound field characteristic learning unit (37) configured to learn as a sound field filter a sound field characteristic between the speakers and the sound detectors, the reference signal generating unit (26) configured to perform signal processing on the sound source signal by the sound field filter to generate a reference signal, and the control filter updating unit (28) configured to update sequentially and adaptively the control filter based on the reference signal and the detection sound signal in a manner that a difference between the sound pressure in the viewing area and the sound pressure in the quiet area becomes a predetermined level.


In the acoustic control device, the plurality of sound detectors may be installed in the viewing area and the plurality of sound detectors may be installed in the quiet area, and the control filter updating unit may be configured to update sequentially and adaptively the control filter in a manner that a difference between an average of magnitudes of the detection sound signals detected by the plurality of sound detectors installed in the viewing area and an average of magnitudes of the detection sound signals detected by the plurality of sound detectors installed in the quiet area becomes a predetermined level.


In the acoustic control device, the sound field characteristic learning unit may include the estimated detection sound signal generating unit (30) configured to perform signal processing on the control signal corresponding to each of the speakers by the sound field filter corresponding to each of the speakers and each of the sound detectors to generate an estimated detection sound signal corresponding to each of the speakers and each of the sound detectors, the signal extraction unit (32) configured to extract the detection sound signal corresponding to a sound output from each of the speakers from the detection sound signals detected by the respective sound detectors, to output a target detection sound signal, the differential signal generating unit (34) configured to generate a differential signal corresponding to each of the speakers and each of the sound detectors, from the target detection sound signal corresponding to each of the speakers and each of the sound detectors and the estimated detection sound signal corresponding to each of the speakers and each of the sound detectors, and the sound field filter updating unit (36) configured to update sequentially and adaptively the sound field filter corresponding to each of the speakers and each of the sound detectors based on the control signal corresponding to each of the speakers and the differential signal corresponding to each of the speakers and each of the sound detectors in a manner that the differential signal is minimized.


The acoustic control device may further include the corrected sound source signal generating unit (38) configured to perform signal processing on the detection sound signal by a fixed filter that is set in advance in accordance with a sound field characteristic between each of the speakers and a grid point set in the viewing area and a sound field characteristic between each of the speakers and a grid point set in the quiet area, to generate a corrected sound source signal, wherein the control signal generating unit may be configured to perform signal processing on the corrected sound source signal by the control filter to generate the control signal, and the reference signal generating unit may be configured to perform signal processing on the corrected sound source signal by the sound field filter to generate the reference signal.


In the acoustic control device, the control signal generating unit may include the common signal generating unit (25) installed corresponding to the plurality of speakers in common and the plurality of control signal correcting units (40) installed corresponding to the plurality of speakers, and the common signal generating unit may be configured to perform signal processing on the sound source signal by the control filter to generate a common control signal, and each of the control signal correcting units may be configured to perform signal processing on the common control signal by a fixed filter set in advance in accordance with a sound field characteristic to generate the control signal corresponding to each of the speakers.


The present invention is not particularly limited to the embodiments described above, and various modifications are possible without departing from the essence and gist of the present invention.

Claims
  • 1. An acoustic control device that controls a plurality of speakers in order for a sound pressure in a quiet area to become lower than a sound pressure in a viewing area, based on a sound source signal output from a sound source, the acoustic control device comprising: sound detectors each configured to detect a sound in an installed area and output a detection sound signal, at least one of the sound detectors installed in the viewing area and at least one of the sound detectors installed in the quiet area; andone or more processors that execute computer-executable instructions stored in a memory,wherein the one or more processors execute the computer-executable instructions to cause the acoustic control device to:perform signal processing on the sound source signal by a control filter, which is an adaptive finite impulse response filter, to generate a control signal that controls the speakers;learn as a sound field filter a sound field characteristic between the speakers and the sound detectors;perform signal processing on the sound source signal by the sound field filter to generate a reference signal; andupdate sequentially and adaptively the control filter based on the reference signal and the detection sound signal in a manner that a difference between the sound pressure in the viewing area and the sound pressure in the quiet area becomes a predetermined level.
  • 2. The acoustic control device according to claim 1, wherein a plurality of the sound detectors are installed in the viewing area and a plurality of the sound detectors are installed in the quiet area, andwherein the one or more processors cause the acoustic control device to update sequentially and adaptively the control filter in a manner that a difference between an average of magnitudes of the detection sound signals detected by the plurality of sound detectors installed in the viewing area and an average of magnitudes of the detection sound signals detected by the plurality of sound detectors installed in the quiet area becomes a predetermined level.
  • 3. The acoustic control device according to claim 1, wherein the one or more processors cause the acoustic control device to: perform signal processing on the control signal corresponding to each of the speakers by the sound field filter corresponding to each of the speakers and each of the sound detectors to generate an estimated detection sound signal corresponding to each of the speakers and each of the sound detectors;extract the detection sound signal corresponding to a sound output from each of the speakers from the detection sound signals detected by the respective sound detectors, to output a target detection sound signal;generate a differential signal corresponding to each of the speakers and each of the sound detectors, from the target detection sound signal corresponding to each of the speakers and each of the sound detectors and the estimated detection sound signal corresponding to each of the speakers and each of the sound detectors; andupdate sequentially and adaptively the sound field filter corresponding to each of the speakers and each of the sound detectors based on the control signal corresponding to each of the speakers and the differential signal corresponding to each of the speakers and each of the sound detectors in a manner that the differential signal is minimized.
  • 4. The acoustic control device according to claim 1, wherein the one or more processors cause the acoustic control device to: perform signal processing on the detection sound signal by a fixed filter that is set in advance in accordance with a sound field characteristic between each of the speakers and a grid point set in the viewing area and a sound field characteristic between each of the speakers and a grid point set in the quiet area, to generate a corrected sound source signal;perform signal processing on the corrected sound source signal by the control filter to generate the control signal; andperform signal processing on the corrected sound source signal by the sound field filter to generate the reference signal.
  • 5. The acoustic control device according to claim 1, wherein the one or more processors cause the acoustic control device to: perform signal processing on the sound source signal by the control filter to generate a common control signal; andperform signal processing on the common control signal by a fixed filter set in advance in accordance with a sound field characteristic to generate the control signal corresponding to each of the speakers.
Priority Claims (1)
Number Date Country Kind
2021-044966 Mar 2021 JP national
US Referenced Citations (10)
Number Name Date Kind
6980663 Linhard Dec 2005 B1
8199923 Christoph Jun 2012 B2
9711131 Christoph Jul 2017 B2
10152962 MacNeille Dec 2018 B2
11034211 Fridman Jun 2021 B2
11211061 Li Dec 2021 B2
11575990 Herbig Feb 2023 B2
20140314256 Fincham Oct 2014 A1
20190014430 Christoph Jan 2019 A1
20190132668 Seki May 2019 A1
Foreign Referenced Citations (1)
Number Date Country
2019-083408 May 2019 JP
Related Publications (1)
Number Date Country
20220303705 A1 Sep 2022 US