The present disclosure relates to a processing device, a processing method, and a program.
An out-of-head localization technique localizes sound images outside the head by canceling characteristics from headphones to the ears and giving four transfer characteristics from a stereo speaker to the ears. Patent Literature 1 (Japanese Unexamined Patent Application Publication No. 2002-209300) discloses a method that uses a head-related transfer function (HRTF) of a listener to localize sound images outside the head. Further, it is known that the HRTF varies widely from person to person, and in particular the variation of the HRTF due to differences in auricle shape is significant.
Thus, it is preferable to measure spatial acoustic transfer characteristics (hereinafter also referred to as transfer characteristics) such as the HRTF in a state where the listener is wearing microphones on the left and right ears. With recent increases in memory capacity and processing speed, it has become possible to perform audio signal processing such as out-of-head localization on a mobile terminal such as a smartphone or a tablet. This has enabled measurement and computation of the spatial acoustic transfer characteristics through the microphone input terminal provided on a mobile terminal.
In most mobile terminals, the microphone input terminal is for monophonic input, not stereo input. In some personal computers as well, the microphone input terminal is for monophonic input. When measuring the spatial acoustic transfer characteristics from a speaker to the left and right ears with a mobile terminal or the like, if the distances from the speaker to the left and right ears differ, a difference (time difference) occurs in the time needed for an acoustic signal to reach each ear from the speaker. With a monophonic microphone input terminal, it is not possible to simultaneously record audio with microphones placed on the left and right ears, and therefore it is not possible to acquire this time difference. Thus, with a monophonic microphone input terminal, it has been difficult to obtain spatial acoustic transfer characteristics that reflect the difference in time of arrival at the left and right ears.
A technique to solve the above problem is disclosed in Patent Literature 2 (Japanese Unexamined Patent Application Publication No. 2017-28365). Patent Literature 2 discloses a sound field reproduction device capable of appropriately measuring transfer characteristics even with monophonic microphone input. This sound field reproduction device includes a microphone unit having left and right microphones, a monophonic input terminal, and a switch unit for switching the output of the microphone unit.
By switching of the switch unit, a first sound pickup signal picked up only by the left microphone, a second sound pickup signal picked up only by the right microphone, and a third sound pickup signal picked up by the left and right microphones are measured. A processing device calculates a difference in time of arrival of a sound from a speaker at the left and right microphones. The processing device calculates transfer characteristics that reflect the time difference based on the first and second sound pickup signals. This enables acquisition of transfer characteristics in consideration of a time difference even with a monophonic input terminal.
The out-of-head localization technique localizes sound images outside the head by giving four transfer characteristics from a stereo speaker to the ears. To perform the out-of-head localization technique, it is necessary to perform measurement where a speaker is placed ahead on the left of a listener and measurement where a speaker is placed ahead on the right of the listener. In Patent Literature 2, it is necessary to perform measurement three times in order to measure the first to third sound pickup signals for one speaker position. Thus, it is necessary to perform measurement six times in total in order to acquire the first to third sound pickup signals for each of the left and right speakers.
Further, there is a demand to perform measurement with different placement of a speaker with respect to a listener. For example, the feeling of localization that suits a listener's preference can be achieved by using transfer characteristics at a different opening angle with respect to the front direction of the listener. An increase in the number of placements causes an increase in the number of times of measurement.
A processing device according to an embodiment is a processing device for processing sound pickup signals obtained by picking up sound output from a sound source by left and right microphones worn on a listener, the device including a measurement signal generation unit configured to generate a measurement signal to be output from the sound source in order to perform characteristics measurement in a state where the sound source is placed in a direction at an angle θ from front of the listener, a monophonic input terminal configured to receive input of sound pickup signals picked up by the left and right microphones, a sound pickup signal acquisition unit configured to acquire the sound pickup signals picked up by the left and right microphones through the monophonic input terminal, a switch unit configured to switch a connection state so that each of a first sound pickup signal picked up only by the left microphone and a second sound pickup signal picked up only by the right microphone is input to the monophonic input terminal, an interaural distance acquisition unit configured to acquire an interaural distance of the listener, a front time difference acquisition unit configured to acquire, as a front time difference, a difference in time of arrival from the sound source placed in front of the listener to the left and right microphones, an incident time difference calculation unit configured to calculate an incident time difference based on the angle θ, the front time difference, and the interaural distance, and a transfer characteristics generation unit configured to calculate transfer characteristics from the sound source to the left and right microphones by applying a delay corresponding to the incident time difference to the first and second sound pickup signals acquired in the characteristics measurement.
A processing method according to an embodiment is a processing method in a processing device for processing sound pickup signals obtained by picking up sound output from a sound source by left and right microphones worn on a listener, where the processing device performs characteristics measurement by outputting a measurement signal to the sound source placed in a direction at an angle θ from front of the listener, the processing device has a monophonic input terminal, a switch unit is placed between the monophonic input terminal and the left and right microphones, and the switch unit switches input to the monophonic input terminal so that each of a first sound pickup signal picked up only by the left microphone and a second sound pickup signal picked up only by the right microphone is input to the monophonic input terminal, the processing method including a step of acquiring an interaural distance of the listener, a step of acquiring, as a front time difference, a difference in time of arrival from the sound source placed in front of the listener to the left and right microphones, a step of calculating an incident time difference based on the angle θ, the front time difference, and the interaural distance, and a step of calculating transfer characteristics from the sound source to the left and right microphones by applying a delay corresponding to the incident time difference to the first and second sound pickup signals acquired in the characteristics measurement.
A program according to an embodiment is a program causing a computer to execute a processing method for processing sound pickup signals obtained by picking up sound by left and right microphones, where the computer performs characteristics measurement by outputting a measurement signal to the sound source placed in a direction at an angle θ from front of the listener, the computer has a monophonic input terminal, a switch unit is placed between the monophonic input terminal and the left and right microphones, and the switch unit switches input to the monophonic input terminal so that each of a first sound pickup signal picked up only by the left microphone and a second sound pickup signal picked up only by the right microphone is input to the monophonic input terminal, the processing method including a step of acquiring an interaural distance of the listener, a step of acquiring, as a front time difference, a difference in time of arrival from the sound source placed in front of the listener to the left and right microphones, a step of calculating an incident time difference based on the angle θ, the front time difference, and the interaural distance, and a step of calculating transfer characteristics from the sound source to the left and right microphones by applying a delay corresponding to the incident time difference to the first and second sound pickup signals acquired in the characteristics measurement.
According to the present disclosure, there are provided a processing device, a processing method and a program capable of measuring transfer characteristics in a simplified way.
The overview of a sound localization process using a filter generated by a processing device according to an embodiment is described hereinafter. An out-of-head localization process according to this embodiment performs out-of-head localization by using spatial acoustic transfer characteristics and ear canal transfer characteristics. The spatial acoustic transfer characteristics are transfer characteristics from a sound source such as a speaker to the ear canal. The ear canal transfer characteristics are transfer characteristics from a speaker unit of headphones or earphones to the eardrum. In this embodiment, out-of-head localization is implemented by measuring the spatial acoustic transfer characteristics while headphones or earphones are not worn and using the measurement data.
Out-of-head localization according to this embodiment is performed by a user terminal such as a personal computer, a smartphone, or a tablet PC. The user terminal is an information processor including a processing means such as a processor, a storage means such as a memory or a hard disk, a display means such as a liquid crystal monitor, and an operating means such as a touch panel, a button, a keyboard or a mouse. The user terminal may have a communication function to transmit and receive data. Further, an output means (output unit) such as headphones or earphones is connected to the user terminal. As an out-of-head localization device, a general-purpose processing device having a monophonic input terminal may be used.
(Out-of-Head Localization Device)
The out-of-head localization device 100 includes an out-of-head localization unit 10, a filter unit 41, a filter unit 42, and headphones 43. The out-of-head localization unit 10, the filter unit 41 and the filter unit 42 can be implemented by a processor or the like, to be specific.
The out-of-head localization unit 10 includes convolution calculation units 11 to 12 and 21 to 22, and adders 24 and 25. The convolution calculation units 11 to 12 and 21 to 22 perform convolution processing using the spatial acoustic transfer characteristics. Stereo input signals XL and XR from a CD player or the like are input to the out-of-head localization unit 10. The spatial acoustic transfer characteristics are set in the out-of-head localization unit 10. The out-of-head localization unit 10 convolves a filter of the spatial acoustic transfer characteristics (hereinafter also referred to as a spatial acoustic filter) into each of the stereo input signals XL and XR of the respective channels. The spatial acoustic transfer characteristics may be a head-related transfer function (HRTF) measured on the head or auricle of the person being measured, or may be the head-related transfer function of a dummy head or a third person.
The spatial acoustic transfer characteristics are a set of four spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs. Data used for convolution in the convolution calculation units 11 to 12 and 21 to 22 is a spatial acoustic filter. The spatial acoustic filter is generated by cutting out the spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs with a specified filter length.
Each of the spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs is acquired in advance by impulse response measurement or the like. For example, the listener U wears microphones on the left and right ears, respectively. Left and right speakers placed ahead of the listener U output impulse sounds for performing impulse response measurement. Then, the microphones pick up measurement signals such as the impulse sounds output from the speakers. The spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs are acquired based on sound pickup signals in the microphones. The spatial acoustic transfer characteristics Hls between the left speaker and the left microphone, the spatial acoustic transfer characteristics Hlo between the left speaker and the right microphone, the spatial acoustic transfer characteristics Hro between the right speaker and the left microphone, and the spatial acoustic transfer characteristics Hrs between the right speaker and the right microphone are measured.
The convolution calculation unit 11 then convolves the spatial acoustic filter in accordance with the spatial acoustic transfer characteristics Hls to the L-ch stereo input signal XL. The convolution calculation unit 11 outputs convolution calculation data to the adder 24. The convolution calculation unit 21 convolves the spatial acoustic filter in accordance with the spatial acoustic transfer characteristics Hro to the R-ch stereo input signal XR. The convolution calculation unit 21 outputs convolution calculation data to the adder 24. The adder 24 adds the two convolution calculation data and outputs the data to the filter unit 41.
The convolution calculation unit 12 convolves the spatial acoustic filter in accordance with the spatial acoustic transfer characteristics Hlo to the L-ch stereo input signal XL. The convolution calculation unit 12 outputs convolution calculation data to the adder 25. The convolution calculation unit 22 convolves the spatial acoustic filter in accordance with the spatial acoustic transfer characteristics Hrs to the R-ch stereo input signal XR. The convolution calculation unit 22 outputs convolution calculation data to the adder 25. The adder 25 adds the two convolution calculation data and outputs the data to the filter unit 42.
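The signal flow through the convolution calculation units 11 to 12 and 21 to 22 and the adders 24 and 25 can be sketched as follows. This is an illustrative sketch only (the function and variable names are hypothetical and not part of the disclosed device), assuming the signals and filters are discrete sequences.

```python
import numpy as np

def out_of_head_localize(xl, xr, hls, hlo, hro, hrs):
    """Convolve the four spatial acoustic filters and mix, as in units 11-12, 21-22.

    xl, xr: stereo input signals XL and XR (1-D arrays)
    hls, hlo, hro, hrs: spatial acoustic filters (impulse responses)
    Returns the L-ch and R-ch signals output to the filter units 41 and 42.
    """
    # Adder 24: (unit 11) Hls convolved into XL plus (unit 21) Hro convolved into XR
    yl = np.convolve(xl, hls) + np.convolve(xr, hro)
    # Adder 25: (unit 12) Hlo convolved into XL plus (unit 22) Hrs convolved into XR
    yr = np.convolve(xl, hlo) + np.convolve(xr, hrs)
    return yl, yr
```

With unit impulses as the direct-path filters and zeros as the cross-path filters, the input passes through unchanged, which is a quick way to sanity-check the wiring of the four paths.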
An inverse filter that cancels out the headphone characteristics (characteristics between a reproduction unit of the headphones and a microphone) is set in the filter units 41 and 42. The inverse filter is then convolved into the reproduced signals (convolution calculation signals) on which the processing in the out-of-head localization unit 10 has been performed. The filter unit 41 convolves the inverse filter into the L-ch signal from the adder 24. Likewise, the filter unit 42 convolves the inverse filter into the R-ch signal from the adder 25. The inverse filter cancels out the characteristics from the headphone unit to the microphone when the headphones 43 are worn. The microphone may be placed at any position between the entrance of the ear canal and the eardrum. The inverse filter may be calculated from a result of measuring the characteristics of the listener U, or from characteristics measured on another listener or a dummy head.
The filter unit 41 outputs a processed L-ch signal to a left unit 43L of the headphones 43. The filter unit 42 outputs a processed R-ch signal to a right unit 43R of the headphones 43. The user U is wearing the headphones 43. The headphones 43 output the L-ch signal and the R-ch signal toward the user U. It is thereby possible to reproduce sound images localized outside the head of the user U.
As described above, the out-of-head localization device 100 performs out-of-head localization by using the spatial acoustic filters in accordance with the spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs and the inverse filters of the headphone characteristics. In the following description, the spatial acoustic filters in accordance with the spatial acoustic transfer characteristics Hls, Hlo, Hro and Hrs and the inverse filters of the headphone characteristics are referred to collectively as an out-of-head localization filter. In the case of 2ch stereo reproduced signals, the out-of-head localization filter is composed of four spatial acoustic filters and two inverse filters. The out-of-head localization device 100 then carries out convolution calculation on the stereo reproduced signals by using these six out-of-head localization filters in total and thereby performs out-of-head localization.
A measurement device that measures the spatial acoustic transfer characteristics is described hereinafter with reference to
The processing device 210 is an information processor such as a personal computer, a smartphone or a tablet PC. The processing device 210 performs measurement by executing a program stored in a memory 61 or the like. The processing device 210 includes the memory 61 that stores sound pickup signals, an operating unit 62 that receives an operation of the listener U, and a processing unit 63 that processes each signal. The operating unit 62 is a touch panel, for example.
To be specific, the processing device 210 executes an application program (app), and thereby generates an impulse signal and starts measurement of the transfer characteristics. Note that the processing device 210 may be the same device as or a different device from the out-of-head localization device 100 shown in
In
Further, the monophonic input terminal 8 and the audio output terminal 9 may be a common input/output terminal. In this case, a sound can be input and output by connecting a 3-pole or 4-pole plug. Further, the processing device 210 may output a measurement signal to the stereo speaker 5 by wireless communication such as Bluetooth (registered trademark).
The processing device 210 generates an impulse signal to be output from each of the left speaker 5L and the right speaker 5R. Specifically, the measurement device 200 measures each of the transfer characteristics Hls from the left speaker 5L to a left microphone 2L and the transfer characteristics Hlo from the left speaker 5L to a right microphone 2R. Note that, although the left speaker 5L is placed ahead on the left of the listener U, and the right speaker 5R is placed ahead on the right of the listener U in
Further, the microphone 2L for sound pickup is placed at the entrance of the ear canal or the eardrum position of a left ear 3L of the listener U. A microphone 2R for sound pickup is placed at the entrance of the ear canal or the eardrum position of a right ear 3R of the listener U. Note that the listener U may be a person or a dummy head. Thus, in this embodiment, the user U is a concept that includes not only a person but also a dummy head. The microphone unit 2 that includes the left microphone 2L and the right microphone 2R is connected to the switch unit 7. Note that the switch unit 7 may be included in the microphone unit 2.
The switch unit 7 is connected to the monophonic input terminal 8 on the processing device 210 through a cable. Thus, the left microphone 2L and the right microphone 2R are connected to the monophonic input terminal 8 through the switch unit 7. Further, the microphone unit 2 is connected to the processing device 210 through the monophonic input terminal 8. Thus, the sound pickup signal picked up by the microphone unit 2 is input to the processing device 210 through the switch unit 7 and the monophonic input terminal 8.
The switch unit 7 switches the output of the microphone unit 2 so that a sound pickup signal picked up by one or both of the left and right microphones 2L and 2R is input to the monophonic input terminal 8. The adder 7b adds a signal from the left microphone 2L and a signal from the right microphone 2R. The switch 7a switches the output of only the left microphone 2L, the output of only the right microphone 2R, and the output from the adder 7b. The control of the switch unit 7 may be done by the processing device 210 or by the listener U.
The listener U or the processing unit 63 controls the switch 7a, and thereby the connection state is switched. The state where the switch 7a is connected to the left microphone 2L is referred to as a first connection state. The state where the switch 7a is connected to the right microphone 2R is referred to as a second connection state. The state where the switch 7a is connected to the adder 7b is referred to as a third connection state. In the first to third connection states, the microphone unit 2 picks up the sound generated by the speaker. A signal picked up in the first connection state is referred to as a first sound pickup signal sL. A signal picked up in the second connection state is referred to as a second sound pickup signal sR. A signal picked up in the third connection state is referred to as a third sound pickup signal sC.
A signal picked up only by the left microphone 2L is the first sound pickup signal sL. A signal picked up only by the right microphone 2R is the second sound pickup signal sR. A signal obtained by adding two signals picked up by the left and right microphones 2L and 2R is the third sound pickup signal sC. The third sound pickup signal sC is a signal where the first sound pickup signal sL and the second sound pickup signal sR are superimposed on one another.
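The three connection states of the switch unit 7 and the resulting sound pickup signals can be sketched as follows. This is only an illustrative model (the function name and state encoding are assumptions for illustration), not a description of the actual circuit.

```python
def switch_output(state, s_left, s_right):
    """Model of the switch unit 7: route one or both microphones to the mono terminal.

    state 1: only the left microphone 2L  -> first sound pickup signal sL
    state 2: only the right microphone 2R -> second sound pickup signal sR
    state 3: adder 7b superimposes both   -> third sound pickup signal sC = sL + sR
    """
    if state == 1:
        return list(s_left)
    if state == 2:
        return list(s_right)
    if state == 3:
        # the adder 7b adds the two microphone signals sample by sample
        return [l + r for l, r in zip(s_left, s_right)]
    raise ValueError("unknown connection state")
```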
When viewed from above, the angle of an incident sound relative to the front of the user U is an incident angle φ (see
As shown in
Further, the processing device 210 calculates a time difference ITD in the arrival time of the sound from the speaker at the left and right ears (see
Thus, the processing device 210 calculates a time difference ITDθ (which is hereinafter referred to also as an incident time difference ITDθ) when the speaker is placed at an arbitrary angle θ based on the angle θ, a front time difference ITD0, and an interaural distance D. It is thereby possible to accurately obtain the transfer characteristics Hls and Hlo without measuring the third sound pickup signal in the characteristics measurement where the speaker is placed in the direction of the angle θ.
Note that the interaural distance D is the distance from the left ear to the right ear of the listener U (see
The same measurement is performed also for the right speaker 5R, and thereby the processing device 210 records the first and second sound pickup signals for the right speaker 5R. The processing device 210 obtains the transfer characteristics Hro and Hrs based on the first and second sound pickup signals for the right speaker 5R.
This embodiment eliminates the need to acquire the third sound pickup signal in the state where the speakers 5L and 5R are placed at the angle θ. This enables measurement of the transfer characteristics with a smaller number of times of sound pickup compared with Patent Literature 2. For example, in the case of measuring a plurality of sets of the transfer characteristics Hls, Hlo, Hro and Hrs with different placements of the speakers 5L and 5R, the increase in the number of times of sound pickup is suppressed.
The above-described processing is described hereinafter in detail with reference to
As described above, the processing device 210 is an information processor having the monophonic input terminal 8, and it includes the memory 61, the operating unit 62 and the processing unit 63 (see also
The measurement signal generation unit 211 generates a measurement signal. The measurement signal generated by the measurement signal generation unit 211 is converted from digital to analog by a D/A converter (not shown) and output to the left speaker 5L. The measurement signal may be an impulse signal, a TSP (time-stretched pulse) signal, or the like. The measurement signal contains a measurement sound such as an impulse sound.
The sound pickup signal acquisition unit 212 acquires sound pickup signals from the left microphone 2L and the right microphone 2R. The sound pickup signals from the microphones 2L and 2R are converted from analog to digital by A/D converters (not shown) and input to the sound pickup signal acquisition unit 212. The sound pickup signal acquisition unit 212 may perform synchronous addition of signals obtained by a plurality of times of measurement. Further, the switch unit 7 switches which sound pickup signal of the sound from the speaker 5L is input to the monophonic input terminal 8. The sound pickup signal acquisition unit 212 thereby acquires each of the first to third sound pickup signals.
The front time difference acquisition unit 213 acquires the front time difference ITD0 of the listener U. Front measurement for acquiring the front time difference ITD0 is described hereinafter with reference to
In the front measurement, a speaker is placed at the middle position between left and right, and it is shown as a speaker 5C as in
If the shape of the face and ears were completely bilaterally symmetric, the time of arrival from the speaker 5C placed straight in front of the listener U to the left ear 3L and the time of arrival from the speaker 5C to the right ear 3R would be the same. In practice, however, a slight difference in path length arises due to differences in head and auricle shape, which causes the front time difference ITD0. Thus, the front time difference ITD0 is a time difference caused by reflection and diffraction at the face and ears of the individual listener U.
The processing device 210 performs measurement of an Lch signal that is input to the microphone 2L (S11). To be specific, the switch unit 7 is switched into the first connection state, and the measurement signal generation unit 211 causes the speaker 5C to output an impulse signal. The sound pickup signal acquisition unit 212 then picks up the first sound pickup signal sL. The first sound pickup signal sL corresponds to transfer characteristics CHls from the speaker 5C to the left ear 3L (microphone 2L). The processing device 210 stores data of the first sound pickup signal sL into the memory 61 or the like.
Next, the processing device 210 performs measurement of an Rch signal that is input to the microphone 2R (S12). To be specific, the switch unit 7 is switched into the second connection state, and the measurement signal generation unit 211 causes the speaker 5C to output an impulse signal. The sound pickup signal acquisition unit 212 then picks up the second sound pickup signal sR. The second sound pickup signal sR corresponds to transfer characteristics CHrs from the speaker 5C to the right ear 3R (microphone 2R). The processing device 210 stores data of the second sound pickup signal sR into the memory 61 or the like.
Further, the processing device 210 performs measurement of a signal where the Lch signal that is input to the microphone 2L and the Rch signal that is input to the microphone 2R are added together (S13). To be specific, the switch unit 7 is switched into the third connection state, and the measurement signal generation unit 211 causes the speaker 5C to output an impulse signal. The sound pickup signal acquisition unit 212 then picks up the third sound pickup signal sC (=sL+sR). The processing device 210 stores data of the third sound pickup signal sC into the memory 61 or the like. Note that the order of measuring the first to third sound pickup signals is not particularly limited. S11 to S13 are performed in the state where the speaker 5C is placed in front of the listener U.
Based on the first to third sound pickup signals, the front time difference acquisition unit 213 calculates a time difference (the front time difference ITD0) for a sound from the speaker 5C to reach the left and right microphones 2L and 2R (S14). The front time difference acquisition unit 213 calculates, as an addition signal y, a signal obtained by adding the first sound pickup signal sL and the second sound pickup signal sR with a delay time dt applied between them. The front time difference acquisition unit 213 then calculates a cross-correlation function of the addition signal y and the third sound pickup signal sC. When the measurement time (filter length) of the sound pickup signal is Lf and the delay time dt is varied from −Lf to Lf, the delay time dt at which the cross-correlation function is greatest is the front time difference ITD0.
In the front measurement, it is unknown which of the first sound pickup signal sL and the second sound pickup signal sR lags behind the other, and it is therefore necessary to calculate the addition signal y both in the case where a delay is applied to the first sound pickup signal sL and in the case where a delay is applied to the second sound pickup signal sR. In other words, the cross-correlation function is calculated both for the case where the first sound pickup signal sL lags behind the second sound pickup signal sR and for the case where the second sound pickup signal sR lags behind the first sound pickup signal sL. Therefore, the range of the delay time is from −Lf to +Lf. Further, with the delay time dt=0, the timing of appearance of the first sound pickup signal sL and the second sound pickup signal sR (i.e., the timing of the direct sound that first reaches the ear) coincides.
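The search over the delay time dt described above can be sketched as follows, assuming discrete signals aligned at the direct sound and a delay expressed in whole samples. The function name and the use of a normalized cross-correlation are assumptions for illustration; the disclosure only specifies maximizing a cross-correlation function over dt.

```python
import numpy as np

def front_time_difference(sl, sr, sc):
    """Estimate the front time difference ITD0 in samples (unit 213, step S14).

    sl, sr: first and second sound pickup signals, each of length Lf
    sc: third sound pickup signal (sL and sR superimposed with the true delay)
    A positive result means sR lags behind sL; a negative result, the opposite.
    """
    lf = len(sl)
    best_dt, best_corr = 0, -np.inf
    # sweep dt over the range -Lf..+Lf, delaying sR for dt >= 0 and sL for dt < 0
    for dt in range(-lf + 1, lf):
        if dt >= 0:
            y = sl + np.concatenate([np.zeros(dt), sr])[:lf]   # delay sR by dt
        else:
            y = np.concatenate([np.zeros(-dt), sl])[:lf] + sr  # delay sL by -dt
        # normalized cross-correlation of the addition signal y with sC
        corr = np.dot(y, sc) / (np.linalg.norm(y) * np.linalg.norm(sc) + 1e-12)
        if corr > best_corr:
            best_dt, best_corr = dt, corr
    return best_dt
```

For example, if sL and sR each start at sample 0 after alignment but sC shows the right-ear component arriving 3 samples late, the search returns dt = 3.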
Referring back to
In the lateral measurement shown in
The interaural distance acquisition unit 214 calculates the maximum time difference ITDmax by using the first sound pickup signal sL, the second sound pickup signal sR and the third sound pickup signal sC in the lateral measurement. To be specific, the interaural distance acquisition unit 214 calculates the maximum time difference ITDmax according to the flowchart shown in
Just like in S14 of
In the lateral measurement, it is apparent that the second sound pickup signal sR lags behind the first sound pickup signal sL, and therefore a delay is applied only to the second sound pickup signal sR. Therefore, the range of the delay time is from 0 to +Lf. Further, with the delay time dt=0, the timing of appearance of the first sound pickup signal sL and the second sound pickup signal sR (i.e., the timing of the direct sound that reaches the ear) coincides.
Next, the interaural distance acquisition unit 214 calculates the interaural distance D from the maximum time difference ITDmax. Using an interaural time difference model, which is described later, a relational expression of the interaural distance D and the time difference ITD is the following expression (1).
φ+sin φ=2c×ITD/D (1)
In the above expression, φ is the incident angle [rad], c is the acoustic velocity, and D is the interaural distance. The expression (1) uses an interaural time difference model where the sound path length from the nose to the cheek of the listener U is approximated by a straight line, and the sound path length from the cheek to the ear is approximated by a circular arc. As shown in the approximate expression (1), the interaural time difference ITD varies depending on the incident angle φ and the interaural distance D.
When the shape of the head viewed from above is approximated as a circle with a radius r and the interaural distance D equals the diameter 2r, the following expression (2) is obtained from the expression (1).
ITD=r(φ+sin φ)/c (2)
It is assumed that c=340 m/sec. Since φ=π/2 (=90°) in the lateral measurement, the interaural distance D is obtained by substituting π/2 for φ and ITDmax for ITD. In this manner, the interaural distance D is obtained by applying the maximum time difference ITDmax to the interaural time difference model. Note that the lateral measurement is not limited to φ=90°. The interaural distance D can be calculated from the expression (1) also when φ is an arbitrary value.
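Rearranging the expression (1) gives D = 2c·ITD/(φ + sin φ), so the interaural distance follows directly from a measured time difference. A minimal sketch of this calculation (the function name is hypothetical; c = 340 m/sec as assumed above, with φ defaulting to π/2 for the lateral measurement):

```python
import math

C = 340.0  # acoustic velocity c [m/s], as assumed in the text

def interaural_distance(itd, phi=math.pi / 2.0):
    """Interaural distance D [m] from expression (1): phi + sin(phi) = 2*c*ITD/D.

    itd: measured time difference [s] (ITDmax in the lateral measurement)
    phi: incident angle [rad]; pi/2 corresponds to the 90-degree lateral case
    """
    return 2.0 * C * itd / (phi + math.sin(phi))
```

As a round-trip check, a head with D = 0.17 m gives ITDmax = D(π/2 + 1)/(2c) ≈ 0.64 ms in this model, and feeding that time difference back into the function recovers D = 0.17 m.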
Referring back to
To be specific, the incident time difference calculation unit 215 obtains the estimated time difference by setting φ=θ in the calculating formula of the expression (1) derived from the interaural time difference model. Specifically, the incident time difference calculation unit 215 calculates, as the estimated time difference, the time difference ITD when φ=θ×2π/360 [rad] (i.e., θ in degrees converted to radians) in the above expression (1). Further, the incident time difference calculation unit 215 adds the front time difference ITD0 to the estimated time difference and thereby obtains the incident time difference ITDθ. The incident time difference ITDθ that is most appropriate for the listener U is obtained in this manner.
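The calculation of the incident time difference can be sketched as follows, assuming θ is given in degrees and converted to radians for the expression (1). The function names and the example values of θ, D, and ITD0 are hypothetical.

```python
import math

C = 340.0  # acoustic velocity c [m/s]


def estimated_itd(theta_deg, d):
    """Estimated time difference from expression (1) with phi = theta [rad].

    phi + sin(phi) = 2c * ITD / D  ->  ITD = D * (phi + sin(phi)) / (2c)
    """
    phi = math.radians(theta_deg)
    return d * (phi + math.sin(phi)) / (2.0 * C)


def incident_time_difference(theta_deg, d, itd0):
    # Add the front time difference ITD0 measured for the listener U.
    return estimated_itd(theta_deg, d) + itd0


# Hypothetical values: theta = 30 degrees, D = 0.16 m, ITD0 = 20 microseconds.
itd_theta = incident_time_difference(30.0, 0.16, 20e-6)
```

The estimated time difference reflects the speaker angle and the measured interaural distance, while adding ITD0 folds in the individual asymmetry of the listener's face and auricles captured by the front measurement.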
The transfer characteristics generation unit 216 applies a delay corresponding to the incident time difference ITDθ between the first sound pickup signal sL and the second sound pickup signal sR picked up in the characteristics measurement and thereby generates the transfer characteristics Hls and Hlo. The characteristics measurement is performed in the state where the speaker 5L is placed in the direction at the angle θ as shown in
To be specific, from the state where the timing of appearance of the first sound pickup signal sL and the second sound pickup signal sR coincides, the transfer characteristics generation unit 216 delays the second sound pickup signal sR by the incident time difference ITDθ. Then, the transfer characteristics generation unit 216 acquires the transfer characteristics Hls based on the first sound pickup signal sL, and acquires the transfer characteristics Hlo based on the second sound pickup signal sR to which the delay time has been applied. Further, the transfer characteristics Hls and Hlo may be calculated by cutting out the transfer characteristics with a specified filter length.
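The delay application described above can be sketched as follows, assuming the incident time difference is rounded to a whole number of samples and that the transfer characteristics are cut out of the sound pickup signals with a specified filter length. The function name and signature are hypothetical.

```python
def generate_left_characteristics(s_l, s_r, itd_theta, fs, filter_len):
    """Sketch: delay sR by ITD_theta (rounded to whole samples at sampling
    rate fs), then cut both signals to the specified filter length.
    Hls is taken from sL, and Hlo from the delayed sR."""
    delay = int(round(itd_theta * fs))
    # Prepend zeros so that sR is delayed relative to sL.
    s_r_delayed = [0.0] * delay + list(s_r)
    h_ls = list(s_l)[:filter_len]
    h_lo = s_r_delayed[:filter_len]
    return h_ls, h_lo


# Hypothetical example: a 2-sample delay at fs = 48 kHz.
h_ls, h_lo = generate_left_characteristics(
    [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], 2 / 48000, 48000, 4)
```

For the Rch speaker the roles are mirrored: the first sound pickup signal sL is the one delayed, yielding Hro, while Hrs is taken from sR.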
The same processing is performed for the Rch speaker. To be specific, the characteristics measurement is performed by using the right speaker 5R placed in the position ahead on the right of the listener U at the angle θ. Just like in the processing for the left speaker 5L, the incident time difference calculation unit 215 calculates the incident time difference ITDθ based on the angle θ, the interaural distance D and the front time difference ITD0. Note that the interaural distance D and the front time difference ITD0 can be common to the left and right transfer characteristics.
From the state where the timing of appearance of the first sound pickup signal sL and the second sound pickup signal sR coincides, the transfer characteristics generation unit 216 delays the first sound pickup signal sL by the incident time difference ITDθ. The transfer characteristics generation unit 216 acquires the transfer characteristics Hro based on the first sound pickup signal sL to which the delay time has been applied, and acquires the transfer characteristics Hrs based on the second sound pickup signal sR. Further, the transfer characteristics Hrs and Hro may be calculated by cutting out the transfer characteristics with a specified filter length. In this manner, one set of the transfer characteristics Hls, Hlo, Hrs and Hro to be used for out-of-head localization is acquired. The out-of-head localization device 100 shown in
As described above, the values of the interaural distance D and the front time difference ITD0 may be common between the transfer characteristics Hls and Hlo and the transfer characteristics Hro and Hrs. Thus, the lateral measurement for acquiring the interaural distance D is performed only once for one listener U. Likewise, the front measurement for acquiring the front time difference ITD0 is performed only once for one listener U.
As described above, the processing device 210 acquires the first to third sound pickup signals in the front measurement and the lateral measurement, and acquires the first and second sound pickup signals in the characteristics measurement. Thus, when there is a need to increase the number of transfer characteristics, that is, when there is a need to place speakers in various positions and measure transfer characteristics, the total number of times of sound pickup is reduced compared with Patent Literature 2.
To be specific, when the number of speaker placements is N, it is necessary to pick up 3N sound pickup signals in Patent Literature 2 because the first to third sound pickup signals are measured at each placement. On the other hand, since the front measurement and the lateral measurement do not need to be repeated for each placement or for each of the left and right speakers, it is necessary to pick up only 2N+6 sound pickup signals in this embodiment: two signals per placement, plus three signals each for the one-time front and lateral measurements. This allows the transfer characteristics to be measured in a simplified way even when the number of speaker placements increases.
In this embodiment, the incident time difference ITDθ is calculated by using the front time difference ITD0 obtained in the front measurement. Since the front time difference ITD0 has a value that reflects the shape of the face or auricle of the listener U as described above, the transfer characteristics are calculated more accurately. Further, since the interaural distance D measured for the listener U and the first and second sound pickup signals are used, the transfer characteristics that reflect the shape of the face or auricle of the listener U are obtained. This enables out-of-head localization suitable for the listener U to be performed.
This embodiment reduces the number of times of sound pickup, which reduces errors due to measurement. For example, as the number of times of sound pickup increases, the posture of the listener U may change during measurement, and such a change of posture prevents appropriate transfer characteristics from being acquired. Since this embodiment reduces the number of times of sound pickup, the measurement time is shortened, and measurement errors are accordingly reduced.
A processing method according to this embodiment is described hereinafter with reference to
The interaural distance acquisition unit 214 acquires the interaural distance D (S21). To be specific, the lateral measurement is performed in the speaker placement shown in
The interaural distance D may be acquired by measurement other than the lateral measurement. For example, the interaural distance D may be obtained from a camera image. A camera of the processing device 210 takes an image of the head of the listener U. The processing unit 63 may calculate the interaural distance D by image processing.
Alternatively, the listener U or another person may measure the interaural distance D by using measuring equipment such as a scale. In this case, the listener U or the like inputs the measured value by using the operating unit 62. Further, the interaural distance D of the listener U may be measured in advance by another device. In this case, the measured value may be transmitted in advance from the other device to the processing device 210, or the processing device 210 may read the value each time.
The front time difference acquisition unit 213 acquires the front time difference ITD0 (S22). In this step, the front measurement is performed in the speaker placement shown in
In the case where the interaural distance D and the front time difference ITD0 are measured in advance by another device, the switch unit 7 does not need to switch the connection to the third connection state. The switch unit 7 may be configured so as to switch between the first connection state and the second connection state.
The incident time difference calculation unit 215 calculates the incident time difference ITDθ (S23). As described above, the incident time difference calculation unit 215 calculates the incident time difference ITDθ by using the angle θ, the front time difference ITD0 and the interaural distance D.
Next, the sound pickup signal acquisition unit 212 acquires the first and second sound pickup signals by the characteristics measurement (S24). Then, the transfer characteristics generation unit 216 applies a delay time corresponding to the incident time difference ITDθ between the first and second sound pickup signals and generates the transfer characteristics (S25). The above-described process is repeated for each speaker placement.
The transfer characteristics suitable for the individual listener U are thereby generated. Note that the order of the lateral measurement, the characteristics measurement, and the front measurement is not limited to the order shown in the flowchart of
Note that the interaural time difference model for obtaining the interaural distance D and the front time difference ITD0 is not limited to the calculating formula shown in the expression (1). For example, the whole outline of the face of the listener U may be approximated by a circular arc. Alternatively, the whole outline of the face may be approximated by a straight line or a polynomial.
Although the measurement configuration where the stereo speaker 5 is placed ahead of the listener U is shown in
Note that, in the front measurement shown in
The first camera 251 takes an image of the listener U, and the second camera 252 takes an image of the speaker 5C that is placed ahead of the listener U. Then, the processing device 210 performs image processing of the image taken by the first camera 251 and the image taken by the second camera 252, and thereby determines whether the speaker 5C is placed straight in front of the listener U. For example, by the image processing, the processing device 210 obtains the angle φ at which the speaker 5C is placed. The processing device 210 determines whether or not the speaker 5C is placed straight in front of the listener U depending on whether the angle φ is equal to or less than a threshold.
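The threshold decision described above can be sketched as a simple check, assuming the speaker angle φ has already been obtained by image processing. The function name and the threshold value are hypothetical illustrations.

```python
def speaker_is_front(phi_deg, threshold_deg=5.0):
    """Determine whether the speaker 5C is placed straight in front of
    the listener U: the angle phi obtained by image processing must be
    equal to or less than the threshold.  The 5-degree default is a
    hypothetical value, not specified in the disclosure."""
    return abs(phi_deg) <= threshold_deg
```

When this check passes, the processing device 210 can enable the front measurement, for example by displaying the front measurement button.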
As shown in
When the angle φ of the speaker is equal to or less than the threshold, the processing device 210 enables the front measurement. For example, the processing device 210 displays a front measurement button on the display screen. The front measurement is initiated when the listener U touches this front measurement button. This allows more accurate measurement of the front time difference ITD0.
A part or the whole of the above-described processing may be executed by a computer program. The above-described program can be stored and provided to the computer using any type of non-transitory computer readable medium. The non-transitory computer readable medium includes any type of tangible storage medium. Examples of the non-transitory computer readable medium include magnetic storage media (such as floppy disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g. magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (such as mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory), etc.). The program may be provided to a computer using any type of transitory computer readable medium. Examples of the transitory computer readable medium include electric signals, optical signals, and electromagnetic waves. The transitory computer readable medium can provide the program to a computer via a wired communication line such as an electric wire or optical fiber or a wireless communication line.
Although embodiments of the invention made by the present inventors are described in the foregoing, the present invention is not restricted to the above-described embodiments, and various changes and modifications may be made without departing from the scope of the invention.
The present disclosure is applicable to a processing device that processes sound pickup signals.
Number | Date | Country | Kind |
---|---|---|---
JP2018-53764 | Mar 2018 | JP | national |
This application is a Bypass Continuation of PCT/JP2019/009619, filed on Mar. 11, 2019, which is based upon and claims the benefit of priority from Japanese patent application No. 2018-53764 filed on Mar. 22, 2018, the disclosure of which is incorporated herein in its entirety by reference.
Number | Date | Country |
---|---|---
H08-111899 | Apr 1996 | JP |
2002-209300 | Jul 2002 | JP |
2017-028365 | Feb 2017 | JP |
Entry |
---
Fujii et al., Translation of JP2017028365A, 2017 (Year: 2017). |
Kageyama et al., Translation of JPH08111899A, 1996 (Year: 1996). |
Number | Date | Country
---|---|---
20200413190 A1 | Dec 2020 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2019/009619 | Mar 2019 | US |
Child | 17016674 | US |