The present disclosure relates to an acoustic signal processing device, an acoustic signal processing method, and a program.
A technology for canceling external noise when music or the like is reproduced with a headset device, that is, a technology pertaining to what is called noise canceling, has been known (for example, see PTL 1).
In this field, it has been desired that external noise that leaks into a headset device can be cancelled.
One object of the present disclosure is to provide an acoustic signal processing device, an acoustic signal processing method, and a program for effectively canceling noise.
For example, the present disclosure is an acoustic signal processing device including a noise cancelling section that is provided for each of multiple microphones and that generates a signal for cancelling noise according to a sound signal inputted from the corresponding microphone, and a digital filter that processes an external input signal.
For example, the present disclosure is an acoustic signal processing method including generating a signal for canceling noise according to a sound signal inputted from a corresponding one of multiple microphones, by a noise cancelling section that is provided for each of the microphones, and processing an external input signal, by a digital filter.
For example, the present disclosure is a program for causing a computer to perform an acoustic signal processing method, including generating a signal for canceling noise according to a sound signal inputted from a corresponding one of multiple microphones, by a noise cancelling section that is provided for each of the microphones, and processing an external input signal, by a digital filter.
Hereinafter, embodiments for carrying out the present disclosure will be explained with reference to the drawings. It is to be noted that the explanation will be given in accordance with the following order.
<First embodiment>
<Second embodiment>
<Third embodiment>
The embodiments etc. explained hereinbelow are preferred specific examples of the present disclosure. However, the present disclosure is not limited to these embodiments etc.
First, in order to facilitate understanding of the present disclosure, a summary of the present embodiment will be given while the problems to be taken into consideration in the present embodiment are explained.
In recent years, regarding music reproduction, headphone reproduction of 3D audio has been attracting attention. Headphone reproduction of 3D audio can give a more realistic feeling because reproducibility of the sound source arrival direction can be enhanced compared with conventional stereo content reproduction. However, the reproduction accuracy of 3D audio deteriorates due to masking by ambient noise when the audio is listened to in an environment where the ambient noise is large. For this reason, reproduction using a digital noise canceling (hereinafter referred to as DNC, as appropriate) technology is very effective.
In the case depicted in
A case where 3D audio is reproduced by the system depicted in
In view of the above problems, a headset 1A according to the present embodiment is equipped with multiple microphones LM in a left housing 2A and multiple microphones RM in a right housing 3A, as depicted in
When 3D audio is reproduced by the multi-microphone FFNC, noise arrival directions can be robustly handled, as depicted in
Audio data which is an external input signal is supplied to the headset 1A. The audio data is supplied wirelessly or wiredly. The audio data may be music data, or data including a speaker's voice only. The present embodiment will be explained on the assumption that the audio data is music data MS of 3D audio. It is to be noted that the music data MS may be monophonic type audio data or may be stereophonic type audio data. Further, the explanation will be given on the assumption that an external noise N can intrude into the headset 1A. For example, the external noise N is noise generated by a mobile body such as an airplane or a vehicle, or is noise generated by an air conditioner or the like.
The microphones LM1 to LMN (N represents an arbitrary natural number) are FFNC microphones, and are disposed in the left housing 2A of the headset 1A. It is to be noted that, in a case where it is not necessary to distinguish the microphones LM1 to LMN from one another, they are referred to as microphones LM, as appropriate. The number and arrangement positions of the microphones LM can be appropriately decided, but it is preferable to decide the number and the positions in such a way that the external noise N that can intrude from the ambient environment of the listener L can be detected.
The DNC filters 11 include DNC filters 11-1 to 11-N (N represents an arbitrary natural number). The microphones LM are connected to the respective DNC filters 11. For example, the microphone LM1 is connected to the DNC filter 11-1, and the microphone LM2 is connected to the DNC filter 11-2.
The DNC filters 11 each generate a noise cancelling signal that, when the sound outputted by the driver 14 arrives at the ear of the listener, cancels the external noise N and thus has the effect of making only the sound of the audio signal audible to the listener. That is, each of the DNC filters 11 generates a noise cancelling signal having a characteristic that is in reversed phase with the external noise N (the sound signal collected by the corresponding microphone LM) arriving at the ear of the listener. The DNC filters 11 output the generated noise cancelling signals to the adder section 13.
The DNC filters 11 are formed as FIR (Finite Impulse Response) filters or IIR (Infinite Impulse Response) filters, for example. In addition, in the present embodiment, the DNC filter 11 for use and a filter coefficient of the DNC filter 11 can be changed according to a control parameter generated by the control section 35.
The microphone LFB is a feedback microphone that is disposed in the housing 2A. The microphone LFB is disposed near the driver 14.
The DNC filter 12 generates a noise cancelling signal for cancelling the external noise N on the basis of a sound signal inputted to the microphone LFB. The DNC filter 12 is formed as an FIR filter or an IIR filter, for example. In addition, a filter coefficient of the DNC filter 12 may be changed according to a control parameter generated by the control section 35, although the filter coefficient is fixed in the present embodiment.
The adder section 13 adds up the noise cancelling signals generated by the DNC filters 11, the noise cancelling signal generated by the DNC filter 12, and the music data MS processed by the digital filter 31. The resultant signal is supplied to the driver 14.
The driver 14 outputs the music data MS and the noise cancelling signal supplied from the adder section 13. The signal outputted from the driver 14 is supplied to the adder section 15.
The adder section 15 adds up the music data MS, the noise cancelling signal, and the external noise N. Accordingly, the music data MS in which the external noise N has been cancelled arrives at the left ear of the listener.
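As an illustrative, non-limiting sketch of the left-channel signal flow described above (the feedforward DNC filters 11, the feedback DNC filter 12, and the adder section 13), the following Python code models each DNC filter as an FIR filter and mixes the results with the music data MS. The function name, block layout, and coefficient handling are assumptions introduced for this sketch, not the actual implementation.

```python
# Minimal sketch of the left-channel processing chain (assumed block-wise FIR model).
import numpy as np
from scipy.signal import lfilter

def left_channel_block(ff_mic_blocks, fb_mic_block, music_block,
                       ff_coeffs, fb_coeffs):
    """ff_mic_blocks: sample blocks from microphones LM1..LMN.
    ff_coeffs: per-microphone FIR coefficients (roughly the DNC filters 11-1..11-N).
    fb_coeffs: FIR coefficients for the feedback path (DNC filter 12)."""
    out = np.asarray(music_block, dtype=float).copy()   # music data MS processed by digital filter 31
    for x_m, f_m in zip(ff_mic_blocks, ff_coeffs):      # feedforward DNC filters 11
        out += lfilter(f_m, [1.0], x_m)                 # anti-phase noise contribution
    out += lfilter(fb_coeffs, [1.0], fb_mic_block)      # feedback contribution (DNC filter 12)
    return out                                          # signal supplied to the driver 14
```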
The microphones RM1 to RMN (N represents an arbitrary natural number) are FFNC microphones, and are disposed in the right housing 3A of the headset 1A. It is to be noted that, in a case where it is not necessary to distinguish the microphones RM1 to RMN from one another, they are referred to as microphones RM, as appropriate. The number and arrangement positions of the microphones RM may be appropriately decided, but it is preferable to decide the number and the positions in such a way that the external noise N that can intrude from the ambient environment of the listener L can be detected.
The DNC filters 21 include DNC filters 21-1 to 21-N (N represents an arbitrary natural number). The microphones RM are connected to the respective DNC filters 21. For example, the microphone RM1 is connected to the DNC filter 21-1, and the microphone RM2 is connected to the DNC filter 21-2.
The DNC filters 21 each generate a noise cancelling signal that, when the sound outputted by the driver 24 arrives at the ear of the listener, cancels the external noise N and thus has the effect of making only the sound of the audio signal audible to the listener. That is, each of the DNC filters 21 generates a noise cancelling signal having a characteristic that is in reversed phase with the external noise N (the sound signal collected by the corresponding microphone RM) arriving at the ear of the listener. The DNC filters 21 output the generated noise cancelling signals to the adder section 23.
The DNC filters 21 are formed as FIR filters or IIR filters, for example. In addition, in the present embodiment, the DNC filter 21 for use and a filter coefficient of the DNC filter 21 can be changed according to a control parameter generated by the control section 35.
The microphone RFB is a feedback microphone that is disposed in the housing 3A. The microphone RFB is disposed near the driver 24.
The DNC filter 22 generates a noise cancelling signal for cancelling the external noise N on the basis of a sound signal inputted to the microphone RFB. The DNC filter 22 is formed as an FIR filter or an IIR filter, for example. In addition, a filter coefficient of the DNC filter 22 may be changed according to a control parameter generated by the control section 35, although the filter coefficient is fixed in the present embodiment.
The adder section 23 adds up the noise cancelling signals generated by the DNC filters 21, the noise cancelling signal generated by the DNC filter 22, and the music data MS processed by the digital filter 32. The resultant signal is supplied to the driver 24.
The driver 24 outputs the music data MS and the noise cancelling signal supplied from the adder section 23. A signal outputted from the driver 24 is supplied to the adder section 25.
The adder section 25 adds up the music data MS, the noise cancelling signal, and the external noise N. Accordingly, the music data MS in which the external noise N has been cancelled arrives at the right ear of the listener.
The digital filters 31 and 32 each process an external input signal (the music data MS in the present embodiment). The digital filters 31 and 32 each have an equalizing function of changing a frequency characteristic of the music data MS that has been converted to a digital form by an A/D (Analog to Digital) conversion section (not illustrated), and a rendering function of localizing an audio object at a prescribed position by controlling a delay or the phase of the audio object, as appropriate, for example. The filter characteristics such as filter coefficients of the digital filters 31 and 32 are defined by control parameters from the control section 35.
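As a hedged sketch of the two roles of the digital filters 31 and 32 described above, the following Python code applies an assumed equalizer and a crude delay-and-gain rendering to localize an audio object. The interaural time/level model and all parameter values are illustrative assumptions, not the rendering method of the disclosure.

```python
# Minimal sketch of the equalizing and rendering functions (assumed models).
import numpy as np
from scipy.signal import lfilter

def equalize(signal, eq_b, eq_a=(1.0,)):
    # eq_b / eq_a: assumed equalizer coefficients set via control parameters
    return lfilter(eq_b, eq_a, signal)

def render_object(obj_samples, azimuth_deg, fs=48000):
    # crude interaural time/level difference model (illustrative assumption)
    itd = 0.0006 * np.sin(np.deg2rad(azimuth_deg))       # seconds, at most about 0.6 ms
    delay_l = int(max(+itd, 0.0) * fs)                   # delay the ear farther from the source
    delay_r = int(max(-itd, 0.0) * fs)
    gain_l = 1.0 - 0.3 * np.sin(np.deg2rad(azimuth_deg))
    gain_r = 1.0 + 0.3 * np.sin(np.deg2rad(azimuth_deg))
    left = gain_l * np.concatenate([np.zeros(delay_l), np.asarray(obj_samples, float)])
    right = gain_r * np.concatenate([np.zeros(delay_r), np.asarray(obj_samples, float)])
    return left, right
```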
The control section 35 controls operations of the DNC filters 11 and 21 by generating and supplying control parameters for the DNC filters 11 and 21. In addition, the control section 35 controls operations of the digital filters 31 and 32 by generating and supplying control parameters for the digital filters 31 and 32.
In the present embodiment, the microphones LM, the microphone LFB, the microphones RM, and the microphone RFB serve as multiple microphones. Further, the DNC filters 11 and 12 and the DNC filters 21 and 22 serve as the noise cancelling section that is provided for each of the microphones, and that generates a signal for canceling noise according to an input sound signal collected by the corresponding microphone. It is to be noted that the headset 1A may include a gain adjustment section (not depicted) for adjusting the volume of a sound, for example.
Next, an operation example of the headset 1A will be explained. The DNC filters 11 generate noise cancelling signals for canceling the external noise N according to input sound signals collected by the microphones LM. The DNC filters 21 generate noise cancelling signals for canceling the external noise N according to input sound signals collected by the microphones RM.
The noise cancelling signals are added to the music data MS. As a result, the external noise N is cancelled. Accordingly, a sound corresponding to the music data MS in which the external noise N has been canceled is reproduced to the listener.
According to the first embodiment explained so far, multiple microphones are disposed in housings of a headset. Consequently, noise that arrives from any direction can be effectively cancelled.
Next, a second embodiment will be explained. It is to be noted that, in the explanation of the second embodiment, a section that is identical or similar to that in the above explanation is denoted by the same reference numeral, and a repeated explanation thereof will be omitted, as appropriate. In addition, a matter having been explained in the first embodiment may be applied to the second embodiment unless otherwise stated.
Sound signals corresponding to the external noise N collected by the microphones LM and the microphones RM are inputted to the noise arrival direction estimating section 401. On the basis of the inputted sound signals, the noise arrival direction estimating section 401 generates noise arrival direction information which indicates the arrival direction of the noise. Specifically, the noise arrival direction information is a set of indexes indicating the respective noise intensities in multiple directions. The noise arrival direction information is supplied to both the optimal audio object arrangement position calculating section 402 and the optimal NC filter calculating section 403.
The optimal audio object arrangement position calculating section 402 calculates an optimal arrangement position of an audio object in accordance with the noise arrival direction information. The optimal audio object arrangement position calculating section 402 calculates the optimal arrangement position of the audio object by additionally referring to information written in meta information corresponding to the audio object. The manner of referring to the information will be explained in detail later.
The optimal NC filter calculating section 403 calculates optimal control parameters for the DNC filters 11 and 21 in accordance with the noise arrival direction information. Thereafter, the optimal NC filter calculating section 403 outputs the calculation result to the control section 35.
Next, a specific example of a process that is performed by the noise arrival direction estimating section 401 will be explained.
Outputs from the filters 45 are supplied to the respective dB calculation sections 46. The dB calculation sections 46 each calculate the level (dB) of the inputted sound signal. The calculation results obtained by the dB calculation sections 46 are supplied to an average value calculating section 47. The average value calculating section 47 calculates an average value of the calculation results. Thereafter, the difference between each calculation result and the average value is computed by the adder section 48, and the computation result is used as the noise intensity in the three-dimensional direction corresponding to the prescribed filter. For example, an output from the filter 45-1 is supplied to the dB calculation section 46-1. A calculation result obtained by the dB calculation section 46-1 is supplied to the average value calculating section 47 and the adder section 48. The adder section 48 calculates the difference between the output from the dB calculation section 46-1 and the output from the average value calculating section 47. The output from the adder section 48 is used as the noise intensity index in the direction corresponding to the filter 45-1, that is, φ=90 degrees. In this manner, noise intensity indexes corresponding to the respective three-dimensional directions are obtained.
The noise arrival direction estimating section 401 generates noise arrival direction information on the basis of the obtained noise intensity indexes.
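A minimal Python sketch of the estimation process just described follows, with each directional filter 45 modeled, as an assumption, by a fixed weight vector applied to the microphone signals; the dB calculation sections 46, the average value calculating section 47, and the adder section 48 correspond to the commented steps.

```python
# Sketch of per-direction noise intensity indexes (directional filters assumed
# to be fixed beamforming-style weight vectors).
import numpy as np

def noise_direction_indexes(mic_blocks, direction_weights):
    """mic_blocks: (num_mics, num_samples) array of LM/RM samples.
    direction_weights: dict {(theta, phi): weight vector of length num_mics} - assumed."""
    levels_db = {}
    for direction, w in direction_weights.items():
        beam = w @ mic_blocks                              # output of a filter 45-k
        power = np.mean(beam ** 2) + 1e-12
        levels_db[direction] = 10.0 * np.log10(power)      # dB calculation section 46-k
    average = np.mean(list(levels_db.values()))            # average value calculating section 47
    return {d: lvl - average for d, lvl in levels_db.items()}  # adder section 48: difference from average
```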
As depicted in
The sound source direction deciding section 402A decides a direction in which an audio object is to be localized, that is, a sound source direction, in accordance with the noise arrival direction information. The sound source direction may be defined by a specific three-dimensional position, or may be defined by a direction with respect to the listener L. The sound source direction deciding section 402A supplies the decided sound source direction to the filter coefficient converting section 402B.
The filter coefficient converting section 402B performs a conversion process to convert the sound source direction decided by the sound source direction deciding section 402A into a filter coefficient. For example, the filter coefficient converting section 402B holds, in a table, filter coefficients corresponding to multiple sound source directions. The filter coefficient converting section 402B reads out, from the table, the filter coefficient corresponding to the sound source direction supplied from the sound source direction deciding section 402A. Thereafter, the filter coefficient converting section 402B supplies the read filter coefficient to the control section 35. The control section 35 sets the filter coefficient as a control parameter for the digital filters 31 and 32. Accordingly, the audio object is localized in the sound source direction decided by the optimal audio object arrangement position calculating section 402.
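A minimal sketch of the table lookup performed by the filter coefficient converting section 402B follows; the table contents and the nearest-direction selection rule are hypothetical placeholders.

```python
# Sketch of converting a decided sound source direction into a filter coefficient.
import numpy as np

# assumed table: (theta, phi) in degrees -> FIR coefficients for the digital filters 31/32
COEFF_TABLE = {
    (0, 0):   np.array([1.0, 0.0, 0.0]),
    (90, 0):  np.array([0.7, 0.2, 0.1]),
    (180, 0): np.array([0.5, 0.3, 0.2]),
}

def direction_to_coefficients(theta, phi, table=COEFF_TABLE):
    # pick the stored direction closest to the requested one (assumed rule)
    nearest = min(table, key=lambda d: (d[0] - theta) ** 2 + (d[1] - phi) ** 2)
    return table[nearest]   # supplied to the control section 35 as a control parameter
```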
Hereinafter, some examples of a sound source direction deciding process that is performed by the sound source direction deciding section 402A will be explained.
At step ST11, the sound source direction deciding section 402A determines whether or not a change of the sound source direction is permitted on the basis of the change permission/prohibition information included in the meta information. In a case where the determination result is No, the process proceeds to step ST12.
At step ST12, the sound source direction deciding section 402A outputs the recommended sound source direction to the filter coefficient converting section 402B because a change of the sound source direction is not permitted. As a result, the audio object is reproduced in the recommended sound source direction. Then, the process is finished.
In a case where the determination result of step ST11 is Yes, the process proceeds to step ST13. At step ST13, a computation of convolving a prescribed smoothing filter with the noise arrival direction information is performed. Then, the process proceeds to step ST14.
At step ST14, the sound source direction deciding section 402A outputs a direction (θ, φ) in which the noise intensity index is minimized in accordance with the smoothed noise arrival direction information. As a result, the audio object is reproduced in the direction (θ, φ) in which the noise intensity index is minimized. Then, the process is finished.
It is to be noted that the computation of convolving the smoothing filter at step ST13 may be omitted.
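The single-object flow of steps ST11 to ST14 can be summarized by the following Python sketch; the dictionary keys, the neighbour-averaging used for the optional smoothing of step ST13, and the layout of the noise arrival direction information are assumptions made for this sketch.

```python
# Sketch of the single-object decision flow (steps ST11 to ST14).
# noise_map: dict {(theta, phi): noise intensity index} (assumed layout).
def decide_single_object_direction(meta, noise_map, neighbours=None):
    if not meta["change_permitted"]:                       # ST11 -> ST12: change prohibited
        return meta["recommended_direction"]
    if neighbours:                                         # ST13: optional smoothing (may be omitted)
        noise_map = {
            d: sum(noise_map[n] for n in neighbours[d]) / len(neighbours[d])
            for d in noise_map
        }
    return min(noise_map, key=noise_map.get)               # ST14: minimum-noise direction (theta, phi)
```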
Meta information includes an identification number for identifying an audio object group, recommended reproduction position information regarding a reference audio object (abbreviated as reference object, as appropriate), change permission/prohibition information, and a list of constituent audio objects that belong to the same group. The list of constituent audio objects includes identification numbers for identifying respective constituent audio objects (also referred to as constituent objects), and relative sound source directions (relative angles with respect to the reproduction position of the reference object) of the constituent audio objects.
At step ST21, the sound source direction deciding section 402A determines whether a change of a sound source direction is permitted or not for the audio object group on the basis of the change permission/prohibition information included in the meta information. In a case where the determination result is No, the process proceeds to step ST22.
At step ST22, a sound source direction of the reference object is set to the sound source direction indicated by the recommended reproduction position information. Then, the process proceeds to step ST26.
The relative sound source directions of the constituent objects are described in the meta information. Accordingly, when the sound source direction of the reference object is set, the sound source direction of each constituent object can also be determined. Therefore, at step ST26, the sound source direction deciding section 402A outputs a list of the sound source directions of the reference object and all the constituent objects. The reference object and the constituent objects are reproduced in the respective sound source directions indicated by the list. Then, the process is finished.
In a case where the determination result of step ST21 is Yes, the process proceeds to step ST23. At step ST23, a convolution computation is performed on the noise arrival direction information. For example,
At step ST24, the sound source direction deciding section 402A sets the sound source direction of the reference object to a direction (θ, φ) in which the noise intensity index is minimized in the noise arrival direction information obtained by the computation. Then, the process proceeds to step ST25.
Since the sound source direction of the reference object has been set, the sound source direction of each constituent object is set to (the angle of the sound source direction of the reference object+the relative angle of the constituent object) at step ST25. Then, the process proceeds to step ST26.
At step ST26, the sound source direction deciding section 402A outputs a list of the sound source directions of the reference object and all the constituent objects. The reference object and the constituent objects are reproduced in the respective sound source directions indicated by the list. Then, the process is finished.
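The group flow of steps ST21 to ST26 can be summarized by the following Python sketch; the field names of the meta information and the omission of the smoothing of step ST23 are assumptions made for brevity.

```python
# Sketch of the group decision flow (steps ST21 to ST26).
from dataclasses import dataclass
from typing import Dict, List, Tuple

Direction = Tuple[float, float]          # (theta, phi) in degrees

@dataclass
class GroupMeta:
    group_id: int
    recommended_direction: Direction     # recommended reproduction position of the reference object
    change_permitted: bool               # change permission/prohibition information
    constituents: Dict[int, Direction]   # constituent object id -> relative angles to the reference object

def decide_group_directions(meta: GroupMeta,
                            noise_map: Dict[Direction, float]) -> List[Tuple[int, Direction]]:
    if not meta.change_permitted:                                   # ST21 -> ST22
        ref_dir = meta.recommended_direction
    else:                                                           # ST23/ST24 (smoothing omitted here)
        ref_dir = min(noise_map, key=noise_map.get)
    out = [(0, ref_dir)]                                            # id 0 assumed for the reference object
    for obj_id, (d_theta, d_phi) in meta.constituents.items():      # ST25: add relative angles
        out.append((obj_id, (ref_dir[0] + d_theta, ref_dir[1] + d_phi)))
    return out                                                      # ST26: list of sound source directions
```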
The order of the multiple audio objects is defined. The order may be randomly decided, or may be based on the importance of each audio object. For example, the importance of an audio object that is a human voice is high, while the importance of an audio object that is BGM (background music) is low. Alternatively, if a content classification is described in the meta information, the order may be based on the content classification. For example, the audio objects may be sorted in accordance with a priority order that is previously defined for each content classification.
At step ST31, the sound source direction deciding section 402A decides the order of processing the audio objects or audio object groups (hereinafter, abbreviated as audio objects etc., as appropriate). Then, the process proceeds to step ST32.
At step ST32, a process loop regarding the audio objects etc. is started. Then, the process proceeds to step ST33.
At step ST33, the sound source direction deciding section 402A performs the process regarding the abovementioned pattern PT1 or PT2 on the audio objects etc. in accordance with an order corresponding to the decided order. Then, the process proceeds to step ST34.
At step ST34, the noise arrival direction information is updated each time a reproduction position of a prescribed audio object is decided.
At step ST35, whether or not there are any more audio objects etc. left to be processed is determined. In a case where there are no more audio objects etc. to be processed, the process is finished.
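The order-based flow of steps ST31 to ST35 can be summarized by the following Python sketch. The decision at step ST33 is modeled here with the single-object rule of pattern PT1 (use the recommended direction when a change is prohibited, otherwise the minimum-noise direction); since the text above does not detail how the noise arrival direction information is updated at step ST34, the sketch assumes, purely for illustration, that the index of a chosen direction is raised so that later objects avoid it.

```python
# Sketch of the order-based flow (steps ST31 to ST35); keys and the update rule are assumptions.
def place_objects_in_order(objects, noise_map, penalty=6.0):
    """objects: list of dicts with assumed keys 'id', 'priority' and 'meta'
    ('meta' holding 'change_permitted' and 'recommended_direction')."""
    ordered = sorted(objects, key=lambda o: o["priority"], reverse=True)   # ST31: decide the order
    noise_map = dict(noise_map)
    placements = []
    for obj in ordered:                                                    # ST32: loop over objects etc.
        meta = obj["meta"]
        direction = (meta["recommended_direction"]                         # ST33: PT1-style decision
                     if not meta["change_permitted"]
                     else min(noise_map, key=noise_map.get))
        placements.append((obj["id"], direction))
        noise_map[direction] = noise_map.get(direction, 0.0) + penalty     # ST34: update noise map (assumed rule)
    return placements                                                      # ST35: finish when none remain
```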
The optimal NC filter calculating section 403 selects an optimal filter coefficient for canceling noise or selects a DNC filter 11 to be operated, by using the noise arrival direction information and the meta information. For example, the optimal NC filter calculating section 403 calculates a DNC filter 11 to be operated, or the noise canceling intensity of each DNC filter 11 on the basis of the noise arrival direction information. Then, the optimal NC filter calculating section 403 generates an optimal control parameter on the basis of the calculation result, as depicted in
In a case where a residual noise in a listener's ear is defined as e(t), the intra-ear residual noise can be expressed by the following expression (1) (in the expression (1), l(t) represents leakage noise that is previously measured, x_m(t) represents an input to all FFNC microphones, f_m(t) represents a DNC filter characteristic, and d(t) represents an acoustic characteristic in the headset).
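The expression (1) itself is not reproduced in this text; a plausible reconstruction from the variable definitions above, given here only as an assumption, is the following, where * denotes convolution and the sum runs over all FFNC microphones m.

```latex
% Assumed reconstruction of expression (1); not taken from the original document.
\[
  e(t) = l(t) + \sum_{m} \bigl( d * f_m * x_m \bigr)(t) \tag{1}
\]
```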
The optimal NC filter calculating section 403 may calculate control parameters for the DNC filters 11 and 21 in such a way that the intra-ear residual noise in the expression (1) is minimized.
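Assuming expression (1) takes the convolutive form reconstructed above, the following Python sketch evaluates the intra-ear residual noise energy for candidate filter sets and selects the set that minimizes it; the candidate sets and the measured signals are hypothetical inputs.

```python
# Sketch of choosing DNC filter coefficients that minimize the residual noise energy.
import numpy as np

def residual_energy(leak, mic_inputs, filters, acoustic_path):
    """leak: l(t); mic_inputs: list of x_m(t); filters: list of f_m(t); acoustic_path: d(t)."""
    e = np.asarray(leak, dtype=float).copy()
    for x_m, f_m in zip(mic_inputs, filters):
        contrib = np.convolve(np.convolve(x_m, f_m), acoustic_path)
        if len(contrib) < len(e):                       # pad or trim to the length of l(t)
            contrib = np.pad(contrib, (0, len(e) - len(contrib)))
        e = e + contrib[:len(e)]
    return float(np.sum(e ** 2))

def select_best_filters(leak, mic_inputs, candidate_sets, acoustic_path):
    # candidate_sets: iterable of per-microphone filter lists (assumed precomputed)
    return min(candidate_sets,
               key=lambda fs: residual_energy(leak, mic_inputs, fs, acoustic_path))
```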
Next, a third embodiment will be explained. It is to be noted that, in the explanation of the third embodiment, a section that is identical or similar to that in the above explanation is denoted by the same reference numeral, and a repeated explanation thereof will be omitted, as appropriate. In addition, a matter having been explained in the first or second embodiment may be applied to the third embodiment unless otherwise stated.
In the third embodiment, roughly, a part of the process that is performed in the headset in the first or second embodiment is performed in an external device (e.g. a smartphone or server device that can communicate with the headset).
The communication section 51 includes an antenna and a modulation-demodulation circuit appropriate for the communication system, for example. It is assumed that wireless communication is performed, but wired communication may be performed. The wireless communication is performed through a LAN (Local Area Network), Bluetooth (registered trademark), Wi-Fi (registered trademark), or a WUSB (Wireless USB), for example. As a result of communication performed by the communication section 51, the headset 10 and an external device such as a smartphone are paired.
The CPU 82 generally controls the smartphone 81. The digital filters 83A and 83B of the DSP 83 perform a rendering process of localizing an audio object at a prescribed position, for example.
The first communication section 84 communicates with a server device 71. As a result of this communication, data on an audio object is downloaded from the server device 71 to the smartphone 81.
The second communication section 85 communicates with the communication section 51 of the headset 10. As a result of this communication, noise arrival direction information is supplied from the headset 10 to the smartphone 81. In addition, an audio object having undergone a process which will be explained later is supplied from the smartphone 81 to the headset 10.
The (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86 has the function of the abovementioned optimal audio object arrangement position calculating section 402 and the function of the abovementioned optimal NC filter calculating section 403.
The object filter control circuit 87 sets, for the digital filters 83A and 83B, filter coefficients for implementing an arrangement position of an audio object calculated by the (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86.
The storage section 88 stores various types of data. For example, the storage section 88 stores a filter coefficient for implementing an arrangement position of an audio object. The storage section 88 is a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device, for example.
<Process that is Performed Between Headset and Smartphone>
Next, a process that is performed between the headset 10 and the smartphone 81 will be explained. First, for example, short-range wireless communication between the headset 10 and the smartphone 81 is performed, whereby the headset 10 and the smartphone 81 are paired.
The headset 10 generates noise arrival direction information, as previously explained in the second embodiment. The noise arrival direction information is supplied from the communication section 51 of the headset 10 to the second communication section 85 of the smartphone 81. The noise arrival direction information is supplied from the second communication section 85 to the (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86.
By communicating with the server device 71, the first communication section 84 of the smartphone 81 acquires an audio object and meta information corresponding to the audio object from the server device 71. The data on the audio object is supplied to the DSP 83. The meta information is supplied to the (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86.
The (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86 decides an arrangement position (sound source direction) of the audio object in a manner similar to that in the second embodiment. The (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86 supplies the decided sound source direction to the object filter control circuit 87. The object filter control circuit 87 reads out, from the storage section 88, a filter coefficient for implementing the sound source direction, and sets the read coefficient for the digital filters 83A and 83B.
A filter process using the digital filters 83A and 83B is performed on the data on the audio object. The resultant data is transmitted to the headset 10 via the second communication section 85. In addition, optimal control parameters for the DNC filters 11 and 21 calculated by the (optimal audio object arrangement position calculating section and optimal NC filter calculating section) 86 are transmitted to the headset 10 via the second communication section 85.
The data on the audio object received by the communication section 51 of the headset 10 undergoes an equalizing process at the EQ 53, and then, is supplied to the adder section 13. Further, the data on the audio object received by the communication section 51 of the headset 10 undergoes an equalizing process at the EQ 54, and then is supplied to the adder section 23.
Further, the optimal control parameters for the DNC filters 11 and 21 received by the communication section 51 of the headset 10 are supplied to the control section 35, and are respectively set for the DNC filters 11 and 21. The remaining processes are similar to those previously explained in the first or second embodiment.
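The smartphone-side portion of the exchange described above can be summarized by the following Python sketch; the parameter names stand in for block 86, the object filter control circuit 87, the storage section 88, and the digital filters 83A and 83B, and are hypothetical placeholders rather than an actual API.

```python
# Sketch of the smartphone-side processing in the third embodiment (assumed interfaces).
def smartphone_side(noise_info, audio_object, meta,
                    calc_block_86, render_83ab, coeff_table_88):
    """noise_info: noise arrival direction information received from the headset 10.
    calc_block_86, render_83ab, coeff_table_88: hypothetical stand-ins for block 86,
    the digital filters 83A/83B, and the storage section 88."""
    direction, nc_params = calc_block_86(noise_info, meta)   # decide sound source direction and NC parameters
    coeffs = coeff_table_88[direction]                       # object filter control circuit 87 + storage section 88
    rendered_lr = render_83ab(audio_object, coeffs)          # rendering by digital filters 83A/83B
    # payload returned to the headset 10 via the second communication section 85
    return {"rendered_audio": rendered_lr, "nc_control_parameters": nc_params}
```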
In the manner explained so far, a part of the headset according to the first or second embodiment may be implemented by an external device such as a smartphone. That is, an acoustic signal processing device according to the present disclosure is not limited to a headset and may be realized by an electronic device such as a smartphone. It is to be noted that which function is implemented by an external device can be changed, as appropriate. For example, in the above third embodiment, the smartphone 81 may have the function of the noise arrival direction estimating section 401 that generates noise arrival direction information.
Some of the embodiments according to the present disclosure have been explained above. The present disclosure is not limited to the above embodiments, and various modifications can be made on the basis of the technical concept of the present disclosure.
The arrangement of the abovementioned sections in the headset can be changed, as appropriate. For example, in a case where on-ear headphones or neckband headphones are used, a circuit configuration including the digital filters, the control section, the analysis section, etc. is installed in either the left or right housing, while a data cable connecting both housings is provided so as to allow transmission/reception of data to/from the side having no circuit configuration. In neckband headphones, the circuit may be installed in one of the housings in the abovementioned manner, or the circuit configuration including the digital filters, the control section, the analysis section, etc. may be disposed in the neckband part. Meanwhile, in what are called left-right independent headphones, such as canal-type or open-ear headphones, it is desirable that a circuit including the digital filters, the control section, the analysis section, etc. be independently installed on each of the left and right sides (not depicted).
The abovementioned DNC filters, digital filters, and the EQ 53 can be formed so as to be included in a DSP. In addition, the control section and the analysis section can be included in a circuit of a DSP or a processor. Alternatively, the control section and the analysis section may be configured to operate in accordance with a computer program (software) that is operated by a DSP or a processor.
The configurations, methods, steps, shapes, materials, and numerical values described in the above embodiments and modifications are just examples. In place of them, any other configurations, methods, steps, shapes, materials, and numerical values may be used, if needed. The configurations, methods, steps, shapes, materials, and numerical values described in the above embodiments and modifications may be replaced by those that are publicly known. In addition, the configurations, methods, steps, shapes, materials, and numerical values described in the above embodiments and modifications can be combined as long as no technical inconsistency occurs.
It is to be noted that the interpretation of the present disclosure should not be limited by the effects described here.
The present technology can also have the following configurations.
(1)
An acoustic signal processing device including:
a noise cancelling section that is provided for each of multiple microphones and that generates a signal for cancelling noise according to a sound signal inputted from the corresponding microphone; and
a digital filter that processes an external input signal.
(2)
The acoustic signal processing device according to (1), further including:
a control section that generates a control parameter for the noise cancelling section; and
an analysis section that analyzes sound signals inputted from the microphones.
(3)
The acoustic signal processing device according to (2), in which
the analysis section has a noise arrival direction estimating section that, according to the sound signals inputted from the microphones, generates noise arrival direction information which indicates an arrival direction of the noise.
(4)
The acoustic signal processing device according to (3), in which
the control section further generates a control parameter for the digital filter, the analysis section further analyzes the external input signal, and the external input signal includes an audio object and meta information corresponding to the audio object, and
the analysis section has an optimal audio object reproduction position calculating section that, in accordance with the noise arrival direction information, calculates an optimal reproduction position of the audio object.
(5)
The acoustic signal processing device according to (4), in which
the audio object is a single audio object,
the meta information includes recommended reproduction position information and change permission/prohibition information which indicates whether a change of a sound source direction is permitted or not, and
in a case where the change permission/prohibition information indicates that the change is permitted, a process of reproducing the audio object at the optimal reproduction position is performed, and in a case where the change permission/prohibition information indicates that the change is not permitted, a process of reproducing the audio object at the recommended reproduction position is performed.
(6)
The acoustic signal processing device according to (4), in which
the audio object includes multiple audio objects relative reproduction positions of which are defined, and
in accordance with the noise arrival direction information, the analysis section calculates the optimal reproduction position in such a way that a noise intensity index with respect to the multiple audio objects is minimized.
(7)
The acoustic signal processing device according to (4), in which
the audio object includes multiple audio objects an order of which is defined,
in a case where the change permission/prohibition information indicates that the change is permitted, a process of reproducing the audio object at the optimal reproduction position is performed, and in a case where the change permission/prohibition information indicates that the change is not permitted, a process of reproducing the audio object at the recommended reproduction position is performed, and
the process is performed in accordance with an order corresponding to the defined order, and the noise arrival direction information is updated each time the process is performed.
(8)
The acoustic signal processing device according to (7), in which
the order is randomly decided, is based on a priority level, or is based on a content classification.
(9)
The acoustic signal processing device according to any one of (3) to (8), in which
the analysis section generates an optimal control parameter for the noise cancelling section in accordance with the noise arrival direction information.
(10)
The acoustic signal processing device according to any one of (4) to (8), in which
the digital filter performs a process of localizing the audio object at a prescribed position.
(11)
The acoustic signal processing device according to any one of (1) to (10), further including:
the multiple microphones.
(12)
The acoustic signal processing device according to (11), in which
the multiple microphones include a feedforward microphone and a feedback microphone.
(13)
The acoustic signal processing device according to any one of (1) to (12), in which
the external input signal includes audio data supplied wirelessly or wiredly.
(14)
The acoustic signal processing device according to any one of (1) to (13), the acoustic signal processing device being formed as a headset device.
(15)
An acoustic signal processing method including:
generating a signal for canceling noise according to a sound signal inputted from a corresponding one of multiple microphones, by a noise cancelling section that is provided for each of the microphones; and
processing an external input signal, by a digital filter.
(16)
A program for causing a computer to perform an acoustic signal processing method including:
generating a signal for canceling noise according to a sound signal inputted from a corresponding one of multiple microphones, by a noise cancelling section that is provided for each of the microphones; and
processing an external input signal, by a digital filter.
Priority application: JP 2020-108595, filed June 2020 (national).
International application: PCT/JP2021/020229, filed May 27, 2021 (WO).