This Nonprovisional application claims priority under 35 U.S.C. § 119(a) on Patent Application No. 2016-104063 filed in Japan on May 25, 2016, the entire contents of which are hereby incorporated by reference.
Some preferred embodiments of the present invention relate to a sound effect producing apparatus that produces a sound effect to provide an audio signal with a sound field effect.
Conventionally, as an apparatus for providing sound content with a sound field effect, a sound field controller is described in, for example, JP H08-275300 A. The sound field effect reproduces pseudo sound reflections (a sound effect) that simulate sound reflections generated in an acoustic space such as a concert hall, thereby causing listeners to experience a feeling of presence as if they were in a separate space, such as a real concert hall, while being in a room.
The sound field controller produces, from an inputted audio signal, audio signals that correspond to the pseudo sound reflections, based on sound field effect information corresponding to an acoustic space selected by a user (for example, a concert hall), and supplies the produced audio signals to respective speakers.
The sound field effect information includes an impulse response of an acoustic space generating a group of sound reflections (see
However, the sound source positions of the group of sound reflections are predetermined in correspondence with the chosen acoustic space. Therefore, for example, even when the sound source position of the direct sound moves, the sound source positions of the group of sound reflections do not move to follow it.
Thus, some preferred embodiments of the present invention are directed to providing a sound effect producing apparatus capable of moving sound source positions of a group of sound reflections.
A sound effect producing apparatus according to preferred embodiments of the present invention includes an input portion, a memory portion, a pseudo sound reflection producing portion, and an effect provision portion. The input portion performs inputting of an audio signal. The memory portion stores sound effect information that includes production information for producing a pseudo sound reflection corresponding to a sound reflection generated in a predetermined acoustic space and sound source position information showing a sound source position of the pseudo sound reflection. The pseudo sound reflection producing portion produces a pseudo sound reflection based on the production information. The effect provision portion performs a process of localizing the pseudo sound reflection using a predetermined direction as a reference, based on the audio signal and the sound source position information.
Thus, the sound effect producing apparatus according to preferred embodiments of the present invention makes it possible to move sound source positions of a group of sound reflections.
A sound effect producing apparatus according to an embodiment of the present invention includes an input portion, a memory portion, a pseudo sound reflection producing portion, and an effect provision portion. The input portion performs inputting of an audio signal. The memory portion stores sound effect information that includes production information for producing pseudo sound reflections corresponding to sound reflections generated in a predetermined acoustic space and sound source position information showing sound source positions of the pseudo sound reflections. The pseudo sound reflection producing portion produces pseudo sound reflections based on the production information. The effect provision portion performs a process of localizing the pseudo sound reflections using predetermined directions as references, based on the audio signal and the sound source position information.
In this manner, the sound effect producing apparatus localizes a position of a pseudo sound reflection using a predetermined direction as a reference. For example, as for a right channel, a pseudo sound reflection is localized using a direction of 45° to the right as a reference. As for a left channel, a pseudo sound reflection is localized using a direction of 45° to the left as a reference. For the purpose of localizing the pseudo sound reflections using predetermined directions as references, the effect provision portion may offset the sound source position information, or the memory portion may store a plurality of pieces of the sound source position information that have been respectively offset beforehand in the right direction and in the left direction. This enables the sound source positions of a group of sound reflections to change, thereby producing a sense of direction also in a sound field effect.
The audio system includes an audio signal processing apparatus 1, a content reproduction device 5, and a plurality of speakers 10 (speaker 10L, speaker 10R, speaker 10SL, speaker 10SR and speaker 10C).
The plurality of speakers 10 are installed, as shown in
The audio signal processing apparatus 1 corresponds to the sound effect producing apparatus of the present invention and is, for example, an audio receiver. Apart from an audio receiver, the sound effect producing apparatus of the present invention can also be realized by, for example, an information processing apparatus such as a personal computer.
The audio signal processing apparatus 1 includes an input portion 11, a processing portion 12, an output portion 13, a CPU 14 and a memory 15. The processing portion 12 includes a DSP 121 and a CPU 122.
The input portion 11 receives content data from the content reproduction device 5, and outputs an audio signal that is extracted from the content data to the DSP 121. To the input portion 11, as an example, multi-channel audio signals for left front (LF) channel, right front (RF) channel, center (C) channel, left surround (SL) channel and right surround (SR) channel are inputted. Here, the input portion 11, when receiving an analog signal, also has a function of converting it into a digital signal for output.
The CPU 122 reads out a program stored in the memory 15, and controls the input portion 11 and the DSP 121. Through the program, the CPU 122 causes the DSP 121 to perform the process of producing the sound effect, whereby the sound effect producing apparatus is realized.
Under the control of the CPU 122, the DSP 121 applies predetermined processing to the audio signal that is inputted from the input portion 11. Here, the DSP 121 carries out the process of producing the sound field effect, as mentioned above.
The pseudo sound reflection producing portion 101L and the pseudo sound reflection producing portion 101R produce pseudo sound reflections from the inputted audio signals.
The pseudo sound reflections are produced based on sound field effect information stored in the memory portion 102. The sound field effect information includes: production information (impulse response) for producing pseudo sound reflections corresponding to sound reflections generated in a predetermined acoustic space; and the sound source position information showing localization positions of a group of pseudo sound reflections. The impulse response includes, specifically, delay times from the direct sound (information showing the timing of occurrence), and information showing ratios of levels of the sound reflections to the level of the direct sound (information showing levels). Also, in the memory portion 102, information showing positions of the respective sound reflections (sound source position information) is stored. Further, although the memory portion 102 is built into the processing portion 12 (DSP 121 or CPU 122) in this example, in practice another storage medium, such as the memory 15, corresponds to the memory portion 102.
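By way of illustration only, the sound field effect information held in the memory portion 102 might be modeled as follows; the field names and values here are assumptions for the sketch, not part of this disclosure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reflection:
    delay_ms: float      # delay from the direct sound (timing of occurrence)
    level_ratio: float   # level relative to the direct sound
    azimuth_deg: float   # sound source position around the listening position

@dataclass
class SoundFieldEffectInfo:
    space_name: str              # acoustic space chosen by the user
    reflections: List[Reflection]

# Example entry for one acoustic space (illustrative values only).
hall = SoundFieldEffectInfo(
    space_name="concert hall",
    reflections=[
        Reflection(delay_ms=12.0, level_ratio=0.5, azimuth_deg=-15.0),
        Reflection(delay_ms=25.0, level_ratio=0.3, azimuth_deg=20.0),
    ],
)
```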
The pseudo sound reflection producing portion 101L and the pseudo sound reflection producing portion 101R read out, from the memory portion 102, an impulse response corresponding to an acoustic space chosen by a user, and produce pseudo sound reflections based on the read-out impulse response.
The pseudo sound reflection producing portion 101L accepts audio signals for left channels (in this example, FL channel and SL channel) as input, and produces pseudo sound reflections for the left channels by convoluting the audio signals for the left channels with the impulse response. The produced pseudo sound reflections for the left channels are inputted to the vector decomposition processing portion 103L.
The pseudo sound reflection producing portion 101R accepts audio signals for right channels (in this example, FR channel and SR channel) as input, and produces pseudo sound reflections for the right channels by convoluting the audio signals for the right channels with the impulse response. The produced pseudo sound reflections for the right channels are inputted to the vector decomposition processing portion 103R.
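The convolution performed by the pseudo sound reflection producing portions can be sketched as follows; constructing the impulse response from delay/level pairs is an illustrative simplification, and the function name and sample values are assumptions:

```python
import numpy as np

def produce_pseudo_reflections(signal, delays_samples, level_ratios):
    """Build a sparse impulse response from stored delay/level pairs and
    convolve the input audio signal with it (a sketch of the role of the
    pseudo sound reflection producing portions 101L/101R)."""
    ir = np.zeros(max(delays_samples) + 1)
    for d, g in zip(delays_samples, level_ratios):
        ir[d] += g
    return np.convolve(signal, ir)

# A unit impulse produces echoes at the stored delays with the stored levels.
x = np.zeros(8)
x[0] = 1.0
y = produce_pseudo_reflections(x, delays_samples=[2, 5], level_ratios=[0.5, 0.25])
```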
The vector decomposition processing portion 103L and the vector decomposition processing portion 103R correspond to the effect provision portion of the present invention. The vector decomposition processing portion 103L and the vector decomposition processing portion 103R perform a process of localizing the pseudo sound reflections by changing distribution gain ratios for the audio signals to be supplied to the respective speakers (channels), based on the sound source position information that has been read-out from the memory portion 102.
For example, as shown in
In this manner, the vector decomposition processing portion 103L and the vector decomposition processing portion 103R perform the process of localizing the pseudo sound reflections at predetermined positions by distributing the inputted pseudo sound reflections to the respective channels with predetermined gain ratios. Then, the synthesis portion 104 combines the audio signals output from the vector decomposition processing portion 103L and the vector decomposition processing portion 103R with the audio signals inputted from the input portion 11, for the respective channels. The synthesis portion 104 outputs audio signals that have been synthesized for the respective channels to the output portion 13.
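The gain distribution performed by the vector decomposition processing portions can be illustrated with a simple constant-power two-speaker panning sketch; this is a generic panning law chosen for illustration, not necessarily the exact computation of the embodiment:

```python
import math

def pan_gains(azimuth_deg, left_deg, right_deg):
    """Distribute one pseudo sound reflection between two adjacent speakers
    so that it is localized at azimuth_deg (constant-power panning sketch
    of the gain-ratio distribution in portions 103L/103R)."""
    # Map the target angle to a position between the two speakers (0..1).
    t = (azimuth_deg - left_deg) / (right_deg - left_deg)
    t = min(max(t, 0.0), 1.0)
    gl = math.cos(t * math.pi / 2)   # gain toward the left speaker
    gr = math.sin(t * math.pi / 2)   # gain toward the right speaker
    return gl, gr

# A source midway between speakers at -30 deg and +30 deg gets equal gains.
gl, gr = pan_gains(0.0, left_deg=-30.0, right_deg=30.0)
```

The constant-power property (the squared gains sum to one) keeps the perceived level stable as the localization position changes.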
The output portion 13 supplies the audio signals for the respective channels that are output from the synthesis portion 104 to the speakers 10L, 10R, 10C, 10SL, and 10SR that correspond to the respective channels. This causes the pseudo sound reflections to be localized at predetermined positions. Thus, a domain 100 having a sound field effect is formed in front of the listening position G.
Then, the vector decomposition processing portion 103L and the vector decomposition processing portion 103R of this embodiment perform a process of changing the sound source position information using predetermined directions as references.
The vector decomposition processing portion 103L causes the sound source position information that has been read-out from the memory portion 102 to be rotated around the listening position by 45° in the left direction, thereby offsetting localization positions of the group of pseudo sound reflections in the left direction. This causes the pseudo sound reflection 101 having been localized on the left side in front of the listening position G (at an angle near −15°) to be moved in the left direction (to an angle near −60°). Accordingly, the domain 100 having the sound field effect in front of the listening position G, as shown in
The vector decomposition processing portion 103R causes the sound source position information that has been read-out from the memory portion 102 to be rotated around the listening position by 45° in the right direction, thereby offsetting localization positions of the group of pseudo sound reflections in the right direction. This causes the pseudo sound reflection 101 having been localized on the left side in front of the listening position G (at an angle near −15°) to be moved in the right direction (to an angle near 30°). Accordingly, the domain 100 having the sound field effect in front of the listening position G, as shown in
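The ±45° offset of the sound source position information can be sketched as a rotation of the stored azimuths around the listening position; the angle conventions here (0° straight ahead, positive to the right, negative to the left) are assumptions for illustration:

```python
def offset_positions(azimuths_deg, offset_deg):
    """Rotate stored sound source positions around the listening position
    by offset_deg (negative = left, positive = right), wrapping the result
    into the range (-180, 180]. Sketch of the offset in 103L/103R."""
    return [((a + offset_deg + 180.0) % 360.0) - 180.0 for a in azimuths_deg]

stored = [-15.0, 20.0]                    # positions read from memory portion 102
left = offset_positions(stored, -45.0)    # 103L: rotate 45 deg to the left
right = offset_positions(stored, 45.0)    # 103R: rotate 45 deg to the right
```

Under these conventions a reflection stored near −15° moves to near −60° on the left path and to near 30° on the right path, matching the movements described above.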
In a case where there is a high level of input in the audio signal for the FR channel, the sound source 201 of the direct sound is localized in a direction where the speaker 10R is installed. In this case, because a high level of audio signal is inputted to the vector decomposition processing portion 103R, as shown in
Then, in a case where there are high levels of inputs in both of the audio signals for the FL channel and for the FR channel, that is, where audio signals having in-phase components are inputted to both of the channels, as shown in
Therefore, as shown in
In this manner, the audio signal processing apparatus 1 of this embodiment makes it possible to produce a sense of direction also in the sound field effect, by changing the sound source positions of the group of sound reflections.
Also, since the audio signal processing apparatus 1 changes the localization positions of the pseudo sound reflections through the process of changing the sound source position information, preparation of separate sound source position information for each direction is not required. In other words, because the vector decomposition processing portion 103L and the vector decomposition processing portion 103R respectively read out the same sound source position information, it is not necessary to prepare separate sound source position information for each of the vector decomposition processing portion 103L and the vector decomposition processing portion 103R. Thus, the audio signal processing apparatus 1 of this embodiment makes it possible to produce a sense of direction in the sound field effect by changing the sound source positions of the group of sound reflections, without increasing the amount of data on the sound source position information (and impulse response).
However, the memory portion 102 may store separate sound source position information for each direction. For example, the memory portion 102 stores, as shown in
Subsequently,
Then, the processing portion 12 reads out the sound field effect information (s12). The pseudo sound reflection producing portion 101L and the pseudo sound reflection producing portion 101R read out the impulse response, and the vector decomposition processing portion 103L and the vector decomposition processing portion 103R read out the sound source position information.
Next, the pseudo sound reflection producing portion 101L and the pseudo sound reflection producing portion 101R produce pseudo sound reflections, based on the respectively read-out impulse response (s13). After that, the vector decomposition processing portion 103L and the vector decomposition processing portion 103R change the respectively read-out sound source position information (s14).
As mentioned above, the vector decomposition processing portion 103L causes the read-out sound source position information to be rotated around the listening position G by 45° in the left direction. The vector decomposition processing portion 103R causes the read-out sound source position information to be rotated around the listening position G by 45° in the right direction.
However, procedure to offset the sound source position information is not necessarily limited to this example. For example, the sound source position information may be rotated in ideal directions for the installation of speakers as defined by the ITU recommendations (for example, 30° in the right direction and 30° in the left direction). Also, the sound source position information may be rotated in directions manually set by a user.
Still, it is ideal to carry out the offset in the directions in which the speakers are actually installed. To this end, the audio signal processing apparatus 1 can also determine the arrangement of the speakers by outputting a sound for measurement from the speakers of the respective channels and picking up the sound for measurement with a microphone (not shown) installed at the listening position G. For example, as disclosed in JP 2009-037143 A, by carrying out the measurement at at least three positions, the audio signal processing apparatus 1 can determine the exact locations of the respective speakers. In this case, the audio signal processing apparatus 1 can offset the sound source position information according to the arrangement of the speakers determined from the sound for measurement.
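The idea of locating a speaker from measurements at three positions can be illustrated with generic two-dimensional trilateration; the math below is a textbook sketch under assumed coordinates, not the procedure of JP 2009-037143 A:

```python
import math

def locate_speaker(mics, dists):
    """Estimate a speaker's 2-D position from distances measured at three
    known microphone positions, by subtracting pairs of circle equations
    to obtain a linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = mics
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Illustrative check: distances from three mic positions to a known point.
mics = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = (3.0, 4.0)
dists = [math.hypot(true_pos[0] - mx, true_pos[1] - my) for mx, my in mics]
x, y = locate_speaker(mics, dists)
```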
Further, instead of rotation, inversion may be embodied as another procedure for offsetting, for example, as shown in
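With 0° taken as straight ahead (an assumed convention, positive to the right), the inversion alternative amounts to mirroring each stored azimuth about the front axis:

```python
def invert_positions(azimuths_deg):
    """Mirror each stored sound source azimuth about the front axis
    (0 deg = straight ahead): a sketch of the inversion alternative
    to rotation for offsetting the sound source position information."""
    return [-a for a in azimuths_deg]

mirrored = invert_positions([-15.0, 20.0])  # left-side positions become right-side
```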
Returning to
Here, in this embodiment, the pseudo sound reflections are produced and localized for the right side channels and for the left side channels, respectively. However, even when a monaural signal down-mixed from the audio signals of all the channels is used, the sound source position information is still offset in the right direction and in the left direction respectively, so that pseudo sound reflections for the right side channels and for the left side channels are produced separately. Thus, even when the inputted signal is a monaural signal, the audio signal processing apparatus 1 can produce a sense of direction in the sound field effect. Although an example in which 5-channel audio signals are inputted is shown in this embodiment, the present invention can also be implemented in cases where 2-channel or 7-channel audio signals or the like are inputted. The present invention can be applied to inputted audio signals of any number of channels, as long as the number of speakers is more than one.
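A monaural down-mix of all input channels can be sketched as a simple per-sample average; equal weights are an assumption here, as the text does not specify down-mix coefficients:

```python
def downmix_mono(channels):
    """Down-mix a list of equal-length channel signals to one monaural
    signal by averaging corresponding samples (equal-weight sketch)."""
    n = len(channels)
    return [sum(samples) / n for samples in zip(*channels)]

# Five illustrative 2-sample channel signals down-mixed to mono.
mono = downmix_mono([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
```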
Here, the audio signal processing apparatus 1 of this embodiment performs the process of producing, for a first channel, the pseudo sound reflections that are offset in the left direction by combining an audio signal for the FL channel with an audio signal for the SL channel, and producing, for a second channel, the pseudo sound reflections that are offset in the right direction by combining an audio signal for the FR channel with an audio signal for the SR channel. However, the same sound source position information may be offset toward front and rear, by combining the FL channel with the FR channel as a first channel, and combining the SL channel with the SR channel as a second channel, respectively. Also, for example, a process of offsetting the sound source position information may be performed by separately producing pseudo sound reflections for respective channels.
However, because the distance between the front side speakers and the surround side speakers is longer than the distance between the right side speakers and the left side speakers, a feeling of connection between the front and the rear may be diluted when the front side and the surround side are processed separately. Therefore, it is preferable for the audio signal processing apparatus 1 to produce the pseudo sound reflections by combining the audio signal for the front side with the audio signal for the surround side to represent a connection between the front and the rear more naturally.
Also, the audio signal processing apparatus 1, as shown in
In this case, the audio signal for the C channel is distributed to a gain adjuster 151L and a gain adjuster 151R. The distributed audio signal for the C channel undergoes gain adjustment at the gain adjuster 151L and the gain adjuster 151R, respectively, and is inputted to the pseudo sound reflection producing portion 101L and the pseudo sound reflection producing portion 101R, respectively.
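The distribution of the C-channel signal through the gain adjusters 151L and 151R can be sketched as follows; the −3 dB (≈0.7071) gains are an assumption, as the text does not specify the adjustment values:

```python
def distribute_center(c_signal, gain_l=0.7071, gain_r=0.7071):
    """Split the C-channel audio signal into the left and right
    pseudo-reflection paths through gain adjusters 151L/151R (sketch;
    the default -3 dB gains are illustrative assumptions)."""
    to_left = [s * gain_l for s in c_signal]
    to_right = [s * gain_r for s in c_signal]
    return to_left, to_right

to_l, to_r = distribute_center([1.0, 0.5])
```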
This results in the formation of a steady sound field effect also in front of the listening position, in addition to the sound field effect offset in the left direction and the sound field effect offset in the right direction. Thus, the connection between the right and the left is further strengthened.
Moreover, in a case where an audio signal for a surround back channel is inputted in addition to the audio signals for the SL channel and the SR channel, the audio signal for the surround back channel may also be distributed to the SL channel and the SR channel in the same manner as in the case of C channel. This also results in formation of a steady sound field effect in the rear of the listening position G.
Further, the sound field effect is not limited to one that is produced on the same plane. For example, as shown in a pictorial drawing in
Foreign Application Priority Data

| Number | Date | Country | Kind |
|---|---|---|---|
| 2016-104063 | May 2016 | JP | national |

U.S. Patent Documents

| Number | Name | Date | Kind |
|---|---|---|---|
| 5680464 | Iwamatsu | Oct 1997 | A |
| 20060177074 | Ko | Aug 2006 | A1 |
| 20080279389 | Yoo | Nov 2008 | A1 |
| 20100260355 | Muraoka | Oct 2010 | A1 |
| 20100296658 | Ohashi | Nov 2010 | A1 |
| 20150312690 | Yuyama | Oct 2015 | A1 |
| 20160227342 | Yuyama | Aug 2016 | A1 |

Foreign Patent Documents

| Number | Date | Country |
|---|---|---|
| H08-275300 | Oct 1996 | JP |
| 2009037143 | Feb 2009 | JP |

Publication

| Number | Date | Country |
|---|---|---|
| 20170345409 A1 | Nov 2017 | US |