This Nonprovisional application claims priority under 35 U.S.C. § 119(a) on Patent Application No. 2020-012289 filed in Japan on Jan. 29, 2020, the entire contents of which are hereby incorporated by reference.
An embodiment according to the present disclosure relates to an audio signal processing apparatus and an audio signal processing method that perform predetermined processing on an audio signal.
Japanese Unexamined Patent Application Publication No. 2016-181834 discloses an audio mixer that is able to assign an operation object to each channel strip.
It is an object of the present disclosure to provide an audio signal processing apparatus and an audio signal processing method that are able to easily control an amount of feed to a different bus.
An audio signal processing apparatus includes a first bus, a second bus, and a controller that assigns a first input channel to the first bus, assigns a second input channel to the second bus, receives an amount of feed from the first input channel to the first bus, and receives an amount of feed from the second input channel to the second bus.
An audio signal processing apparatus is able to assign each of a plurality of input channels to any different bus, and set an amount of feed for each of the plurality of input channels.
A user of an audio mixer, in a case of sending audio signals of a plurality of input channels to a certain bus (a MIX bus for a monitor, for example), may wish to change the content of the signal processing for each input channel. For example, for a MIX bus for a monitor of a vocal, there may be a case in which the user would like to leave the voice of the vocal unprocessed but to perform localization processing on other sounds (the voice of a chorus, for example).
In view of the foregoing, an audio signal processing apparatus according to an embodiment of the present disclosure assigns a first input channel to a first bus, assigns a second input channel to a second bus, and sets an amount of feed for each of a plurality of input channels. Accordingly, the user can easily set the amount of feed of the audio signal of the first input channel, on which localization processing is performed, and the amount of feed of the audio signal of the second input channel, on which the localization processing is not performed.
The send level adjustment circuit 14 adjusts the amount of feed of the audio signal of the first input channel to the first bus. The send level adjustment circuit 16 adjusts the amount of feed of the audio signal of the second input channel to the second bus.
The amounts of feed of the send level adjustment circuit 14 and the send level adjustment circuit 16 are received through a knob (a rotary operation element) provided on a channel strip or through a physical operation element (see
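For illustration only (the following is not part of the disclosed embodiment), the role of a send level adjustment circuit can be sketched as a per-channel gain applied to the signal before it reaches a bus. This minimal sketch assumes the amount of feed is expressed in decibels; the function names are hypothetical.

```python
def db_to_linear(db: float) -> float:
    """Convert a send level in decibels to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def apply_send_level(samples: list[float], send_db: float) -> list[float]:
    """Scale an input-channel signal by its amount of feed before the bus."""
    gain = db_to_linear(send_db)
    return [s * gain for s in samples]
```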
The signal processor 15 performs predetermined signal processing on the audio signal of the first input channel. The predetermined signal processing may be any type of processing, but includes, for example, processing that adds localization. Accordingly, localization addition processing is performed on the audio signal of the first input channel. The localization processing is, for example, processing that convolves an audio signal with a head-related transfer function. The head-related transfer function represents a transfer function between a predetermined position and an ear of a listener. The head-related transfer function corresponds to an impulse response expressing the loudness, the arrival time, the frequency characteristics, and the like of a sound emitted from a virtual sound source placed in a certain position to each of the right and left ears. The signal processor 15 applies a head-related transfer function for an L channel and a head-related transfer function for an R channel to an inputted audio signal. As a result, the signal processor 15 generates a stereo signal to be sent out to the first bus.
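For illustration only (not part of the disclosed embodiment), the localization processing described above amounts to convolving the mono input signal with a head-related impulse response for each ear, producing the stereo pair sent to the first bus. This is a minimal sketch assuming short finite impulse responses; the impulse-response data and function names are hypothetical.

```python
def convolve(signal: list[float], ir: list[float]) -> list[float]:
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def localize(mono: list[float],
             hrir_l: list[float],
             hrir_r: list[float]) -> tuple[list[float], list[float]]:
    """Apply left- and right-ear head-related impulse responses to a
    mono signal, producing a stereo (L, R) pair."""
    return convolve(mono, hrir_l), convolve(mono, hrir_r)
```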
The PAN 17 is a circuit that adjusts the distribution ratio (level balance) of the audio signal that the second input channel sends out to the second bus 13, which is a stereo bus. The level balance is received through the knob (see
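For illustration only (not part of the disclosed embodiment), the distribution ratio adjusted by a PAN circuit can be sketched with a pan law. The sketch below assumes a constant-power pan law, a common choice that the disclosure does not specify; the names are hypothetical.

```python
import math

def pan(sample: float, position: float) -> tuple[float, float]:
    """Distribute a mono sample to L and R with a constant-power pan law.
    position ranges from -1.0 (full left) to +1.0 (full right)."""
    angle = (position + 1.0) * math.pi / 4.0  # maps to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```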
Therefore, when the voice of a chorus is inputted into the first input channel and the voice of a vocal is inputted into the second input channel, for example, the localization addition processing is performed on the voice of the chorus. When a listener listens to the monitor sound using headphones, the voice of the chorus is localized around the listener, while the voice of the vocal is localized inside the head of the listener. A user of the audio mixer 1 can easily adjust the respective amounts of feed to the bus that sends out the voice of the vocal and to the bus that sends out the voice of the chorus.
It is to be noted that the audio signal of the first bus and the audio signal of the second bus may be outputted to respectively different output destinations, or may be mixed and outputted to the same output destination (an AUX Send, for example).
A further specific configuration will be hereinafter described.
These components are connected to each other through a bus 171. In addition, the audio I/O 203 and the signal processor 204 are also connected to a waveform bus 172 for transmitting a digital audio signal.
The CPU 206 is a controller that controls the operation of the audio mixer 100. The CPU 206 reads a predetermined program stored in the flash memory 207, which is a storage medium, into the RAM 208 and performs various types of operations. It is to be noted that the program may be stored in a server, in which case the CPU 206 may download the program from the server through a network and execute it.
The signal processor 204 is configured by a DSP that performs various types of signal processing such as mixing processing or effect processing. The signal processor 204 performs signal processing such as mixing or equalizing on an audio signal received through the network I/F 205 or the audio I/O 203. The signal processor 204 outputs the digital audio signal on which the signal processing has been performed through the audio I/O 203 or the network I/F 205.
The input patch 301 supplies an audio signal to each channel of the input channel 302.
The first input channel includes an input signal processor 350, a FADER 351, a PAN 352, a send level adjustment circuit 353, a PAN 354, and a level adjustment circuit 357. The second input channel includes an input signal processor 3501, a FADER 3511, a PAN 3521, a send level adjustment circuit 3531, a signal processor 355, and a level adjustment circuit 3571.
The input signal processor 350 and the input signal processor 3501 perform signal processing such as equalizing or compressing. The FADER 351 and the FADER 3511, in a first mode, each adjust the gain of the corresponding input channel. The FADER 351 and the FADER 3511 each correspond to an operation element provided on an operation panel shown in
The knob corresponds to the PANs 352 and 3521 of
The stereo bus 303 is a bus corresponding to a main speaker in a hall or a conference room. The stereo bus 303 mixes the audio signals sent out from the respective input channels. The stereo bus 303 outputs the mixed audio signal to the output channel 306. The output channel 306 performs signal processing such as equalizing or compressing on the audio signal that the stereo bus 303 has outputted. The output channel 306 outputs the audio signal on which the signal processing has been performed to the output patch 307.
The output patch 307 assigns each channel of the output channel to any one of a plurality of ports serving as an analog output port or a digital output port. As a result, the audio signal on which the signal processing has been performed is supplied to the audio I/O 203.
In addition, each channel of the input channel 302 sends out the audio signal on which the signal processing has been performed, to the MIX bus 304.
The MIX bus 304 is a bus for sending out an audio signal of one or more input channels to a specific location such as a monitor speaker or monitor headphones. However, in the present embodiment, audio signals are sent out to different buses even when they are ultimately delivered to the same monitor speaker or the same headphones.
In the example of
The audio signal of the second input channel on which the signal processing has been performed in the signal processor 355 is sent out to the MIX3 bus and the MIX4 bus.
The MIX bus 304 is routed to the matrix bus 305. In the example of
The matrix1 bus is patched to the L channel of headphones, for example. The matrix2 bus is patched to the R channel of headphones, for example. As a result, the audio signals of the first input channel and the second input channel are supplied to the L channel and the R channel of the headphones.
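For illustration only (not part of the disclosed embodiment), the routing from the MIX buses to the matrix buses can be sketched as a sample-wise sum of the DRY pair (MIX1/MIX2) and the WET pair (MIX3/MIX4) into the headphone L and R feeds. The equal-gain summation and the function names are assumptions for illustration.

```python
def mix(a: list[float], b: list[float]) -> list[float]:
    """Sum two bus signals sample by sample."""
    return [x + y for x, y in zip(a, b)]

def route_to_matrix(mix1: list[float], mix2: list[float],
                    mix3: list[float], mix4: list[float]):
    """Route MIX1+MIX3 to the matrix1 bus (headphone L) and
    MIX2+MIX4 to the matrix2 bus (headphone R)."""
    matrix1 = mix(mix1, mix3)
    matrix2 = mix(mix2, mix4)
    return matrix1, matrix2
```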
Therefore, when the voice of a vocal is inputted into the first input channel and the voice of a chorus is inputted into the second input channel, for example, the localization addition processing is performed on the voice of the chorus. When a listener listens to the monitor sound using the headphones, the voice of the chorus is localized around the listener, while the voice of the vocal is localized inside the head of the listener.
A user of the audio mixer 100 according to the present embodiment can easily adjust each amount of feed to a bus that sends out the voice of a vocal and a bus that sends out the voice of a chorus. Further, as shown in the following
As shown in
In the example of
A destination (a MIX bus) is displayed for each input channel of the channel strips 61. For example, the MIX1-2 bus is displayed as the destination of the first input channel, and the MIX3-4 bus is displayed as the destination of the second input channel.
An audio signal to be sent out to the MIX1-2 bus is distributed in the PAN 354, as shown in
The user, as with the example of
When the user selects “SETUP” corresponding to four MIX buses in the setup screen of a MIX bus, the touch screen 51 shifts to the setup screen shown in
As shown in
The MIX1-2 bus is a DRY bus, to which an audio signal on which signal processing such as localization processing is not performed is sent out, and the MIX3-4 bus is a WET bus, to which an audio signal on which such signal processing is performed is sent out. In the case of WET, the display mode is changed. For example, the second input channel is displayed in a darker shade since WET is selected.
When the user sets DRY or WET for each input channel and then selects ENTER, each input channel is assigned to the MIX1-2 bus, which is DRY, or to the MIX3-4 bus, which is WET. As described above, the audio mixer 100 can assign DRY or WET to a plurality of input channels simultaneously.
The audio mixer 100 then determines whether a fader operation has been received (S13). When the fader operation has been received, the audio mixer 100 changes the amount of feed to the MIX bus of the input channel corresponding to the operated fader (S14). For example, in the example of
Finally, the audio mixer 100 determines whether a release operation of the SENDS ON FADER mode has been received (S15). When no release operation has been received, the audio mixer 100 repeats the processing from the determination of S13.
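For illustration only (not part of the disclosed embodiment), the loop of S13 to S15 can be sketched as follows. `FakeMixer` is a hypothetical stand-in for the mixer's event source, not an API of the disclosure.

```python
class FakeMixer:
    """Minimal stand-in for the mixer's event source (hypothetical).
    Each event is ("fader", channel, value) or ("release",)."""
    def __init__(self, events):
        self.events = list(events)
        self.feeds = {}  # channel -> amount of feed to the MIX bus

    def next_event(self):
        return self.events.pop(0) if self.events else ("release",)

def sends_on_fader_loop(mixer: FakeMixer) -> None:
    """S13-S15: apply fader operations to feed amounts until the
    SENDS ON FADER mode is released."""
    while True:
        event = mixer.next_event()
        if event[0] == "release":      # S15: release operation received
            return
        _, channel, value = event      # S13: fader operation received
        mixer.feeds[channel] = value   # S14: change the amount of feed
```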
It is to be noted that, although the above embodiment describes an example of performing mixing by routing a plurality of MIX buses to the matrix bus, the mixing may be performed in an output channel, for example.
In addition, although the present embodiment describes an example of sending out the audio signal of each input channel to the stereo bus, sending out to the stereo bus is not essential in the present disclosure. For example, when sending out an audio signal to one monitor speaker, the MIX bus may be a monaural bus. At least the audio signal of the input channel on which signal processing such as localization processing is not performed may be sent out to a monaural bus.
In addition, in the above-described embodiment, the audio mixer 100 receives the setup of a destination in the setup screen shown in
The description of the present embodiment is illustrative in all points and should not be construed to limit the present disclosure. The scope of the present disclosure is defined not by the foregoing embodiments but by the following claims. Further, the scope of the present disclosure is intended to include all modifications within the scope of the claims and within the meaning and scope of equivalents thereof.
For example, as shown in
In addition, as shown in
In addition, the above embodiment describes an example of sending out the audio signal of the first input channel to the MIX1-2 bus and sending out the audio signal of the second input channel to the MIX3-4 bus. As a matter of course, the number of input channels and the number of buses are not limited to these examples. For example, the audio mixer 100 may send out the audio signal of a third input channel to a MIX5-6 bus and may perform signal processing such as applying a head-related transfer function. Herein, the signal processing to be performed for the MIX5-6 bus may be the same as or different from the signal processing to be performed for the MIX3-4 bus. The signal processing to be performed for the MIX5-6 bus may provide an even stronger effect (a more distant localization effect) than the signal processing to be performed for the MIX3-4 bus. In such a case, a listener can concentrate on listening to the sound of the MIX1-2 bus (the main vocal, for example), hear the sound of the MIX3-4 bus (the sub vocal, for example) as if it surrounds the main vocal, and hear the sound of the MIX5-6 bus (the chorus, for example) as if it surrounds the listener from even farther away.
In addition, a setup of whether to perform localization processing is not limited to the example of
The audio mixer 100 may also automatically set each channel as WET or DRY based on the past history or on a setup that a user has previously inputted. The audio mixer 100 may also automatically set each channel as WET or DRY based on a learned algorithm obtained by machine learning from the past history. For example, the audio mixer 100 may automatically set each channel as WET or DRY based on a channel name set to each channel. The elements of the input data used for the machine learning include the number of an input channel, the name of an input channel (the name of a performer, the name of a musical instrument, or the like), an image associated with an input channel (an image of a musical instrument, for example), the number of a destination, and the name of a destination (the name of a performer, the name of a speaker, or the like). The element of the output data is WET or DRY. The audio mixer 100 learns from the past history by using combinations of the input data and the output data as teaching data. As the algorithm, a support vector machine or a neural network is able to be used. The neural network may also be a deep neural network, including a convolutional neural network or a recurrent neural network.
The audio mixer 100, based on the learned algorithm, for example, sets a channel whose channel name is vocal as DRY and sets a channel whose channel name is chorus as WET. In addition, in a case in which the name of a performer, the name of a musical instrument, an image of a musical instrument, or the like is associated with each channel, the audio mixer 100 may set each channel as WET or DRY based on the name of the performer, the name of the musical instrument, or the image of the musical instrument. As a result, the user can shorten the setup time for deciding whether to perform the localization processing by a head-related transfer function or the like for each channel. The audio mixer 100 may also set each channel as WET or DRY based on the name of a destination in addition to information including the name of an input channel. In such a case, the audio mixer 100 is able to set each channel as WET or DRY in consideration of the preference of a performer corresponding to the destination or the characteristics of a speaker corresponding to the destination.
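For illustration only (not part of the disclosed embodiment), a simple non-learned stand-in for such an algorithm is a name-based heuristic that sets WET or DRY from a channel name, as in the vocal/chorus example above. The keyword lists and names below are hypothetical.

```python
# Hypothetical keyword tables: names suggesting the performer's own part
# stay DRY; accompanying parts are localized (WET).
WET_KEYWORDS = ("chorus", "backing")
DRY_KEYWORDS = ("vocal", "lead")

def classify_channel(channel_name: str, default: str = "DRY") -> str:
    """Return 'WET' or 'DRY' for a channel based on its name."""
    name = channel_name.lower()
    if any(k in name for k in WET_KEYWORDS):
        return "WET"
    if any(k in name for k in DRY_KEYWORDS):
        return "DRY"
    return default
```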
Number | Date | Country | Kind |
---|---|---|---|
JP2020-012289 | Jan 2020 | JP | national |
Number | Date | Country |
---|---|---|
2016181834 | Oct 2016 | JP |
Number | Date | Country
---|---|---
20210235211 A1 | Jul 2021 | US