This application is a U.S. continuation application of International Application No. PCT/JP2016/078752, filed Sept. 29, 2016, which claims priority to Japanese Patent Application No. 2015-195237, filed Sept. 30, 2015. The entire disclosures of International Application No. PCT/JP2016/078752 and Japanese Patent Application No. 2015-195237 are hereby incorporated herein by reference.
Field of the Invention
The present invention relates to an audio processing device and an audio processing method.
Background Information
A mixer is known that allocates audio signals input from many devices on a stage, such as microphones and musical instruments, to respective channels, and that controls various parameters, such as the signal level (volume value), for each channel. When a large number of devices are connected to the mixer, it takes time to check the wiring that connects the mixer and the devices. In view of this, Japanese Laid-Open Patent Publication No. 2010-34983 (hereinafter referred to as Patent Document 1) discloses an audio signal processing system in which identification information of each device is superimposed on the audio signal as watermark information, so that the wiring conditions between the devices and the mixer can be easily checked.
However, while Patent Document 1 described above makes it possible to confirm the wiring conditions between the devices and the mixer, the user must still understand each function of the mixer, such as the input gain and the faders, and carry out the desired settings according to the given venue.
An object of the present invention is to realize an audio signal processing device in which each audio signal is automatically adjusted according to, for example, the combination of the connected musical instruments.
One aspect of the audio processing device according to the present invention comprises an identification means that identifies each of the musical instruments that correspond to each of the audio signals, and an adjustment information acquisition means that acquires adjustment information for adjusting each of the audio signals described above according to the combination of the identified musical instruments.
The audio processing method according to the present invention is characterized by identifying each of the musical instruments that correspond to each of the audio signals, and acquiring adjustment information for adjusting each of the audio signals described above according to the combination of the identified musical instruments.
Another aspect of the audio processing device according to the present invention comprises an identification means that identifies each of the musical instruments that correspond to each of the audio signals, an adjustment means that adjusts each of the audio signals according to the combination of the identified musical instruments, and a mixing means that mixes the adjusted audio signals.
An embodiment of the present invention will be described below with reference to the drawings. Moreover, identical elements have been assigned the same reference symbols in the drawings, and redundant descriptions have been omitted.
The keyboard 101 is, for example, a synthesizer or an electronic piano, which outputs audio signals according to the performance of a performer. The microphone 104, for example, picks up the voice of a singer and outputs the picked-up sound as an audio signal. The drums 102 comprise, for example, a drum set, and microphones that pick up the sounds generated by striking the percussion instruments (for example, a bass drum, a snare drum, etc.) included in the drum set. A microphone is provided for each percussion instrument and outputs the picked-up sound as an audio signal. The guitar 103 comprises, for example, an acoustic guitar and a microphone, and the microphone picks up the sound of the acoustic guitar and outputs the sound as an audio signal. The guitar 103 can be an electric acoustic guitar or an electric guitar, in which case it is not necessary to provide a microphone. The top microphone 105 is a microphone installed above a plurality of musical instruments, for example, the drum set; it picks up the sound of the entire drum set and outputs the sound as an audio signal. The top microphone 105 will inevitably pick up sound, albeit at low volume, from other musical instruments besides the drum set.
The mixer 106 comprises a plurality of input terminals, and electrically adds, processes, and outputs audio signals from the keyboard 101, the drums 102, the guitar 103, the microphone 104, and the like, input to each of the input terminals. The specific configuration of the mixer 106 will be described further below.
The amplifier 107 amplifies the audio signals that are output from the output terminal of the mixer 106 and outputs them to the speaker 108. The speaker 108 outputs sounds in accordance with the amplified audio signals.
Next, the configuration of the mixer 106 according to the present embodiment will be described.
The electronic controller 201 includes, for example, a CPU, an MPU, or the like, which operates according to a program stored in the memory 202. The electronic controller 201 includes at least one processor. The memory 202 is configured from information storage media such as ROM, RAM, and a hard disk, and is an information storage medium that holds the programs executed by the at least one processor of the electronic controller 201. In other words, the memory 202 is any computer storage device or any computer readable medium, with the sole exception of a transitory, propagating signal. For example, the memory 202 can be a computer memory device, which can include nonvolatile memory and volatile memory.
The memory 202 also operates as a working memory of the electronic controller 201. Moreover, the programs can be provided by downloading via a network (not shown), or provided by various information storage media that can be read by a computer, such as a CD-ROM or a DVD-ROM.
The user interface 203, which includes, for example, volume sliders, buttons, knobs, and the like, outputs the content of the user's instruction operation to the electronic controller 201 according to the instruction operation.
The display 204 is, for example, a liquid-crystal display, an organic EL display, or the like, and displays information in accordance with instructions from the electronic controller 201.
The input/output terminal 205 comprises a plurality of input terminals and an output terminal. Audio signals from each of the musical instruments, such as the keyboard 101, the drums 102, the guitar 103, and the microphone 104, and from the top microphone 105 are input to each of the input terminals. In addition, audio signals obtained by electrically adding and processing the input audio signals described above are output from the output terminal. The above-described configuration of the mixer 106 is merely an example, and the invention is not limited thereto.
Next, one example of the functional configuration of the electronic controller 201 according to the present embodiment will be described. As shown in
The identification module 301 identifies each musical instrument corresponding to each audio signal. Then, the identification module 301 outputs the identified musical instruments to the adjustment module 302 as identification information. Specifically, based on each audio signal, the identification module 301 generates feature data of the audio signal and compares the feature data with feature data registered in the memory 202 to thereby identify the type of each musical instrument corresponding to each audio signal. Registration of the feature data of each musical instrument in the memory 202 is carried out, for example, using a learning algorithm such as an SVM (Support Vector Machine). The identification of the musical instruments can be carried out using the techniques disclosed in Japanese Patent Application No. 2015-191026 and Japanese Patent Application No. 2015-191028; a detailed description thereof will be omitted.
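For illustration only, a minimal sketch of such feature-based identification is given below, assuming Python with NumPy and scikit-learn; the feature extraction, the placeholder training data, and the function names are illustrative assumptions and are not taken from the disclosure or from the cited applications.

```python
# Minimal sketch of feature-based instrument identification (illustrative only).
import numpy as np
from sklearn.svm import SVC

def extract_features(signal, sr=48000):
    """Hypothetical feature vector: spectral centroid, RMS level, zero-crossing rate."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    return np.array([centroid, rms, zcr])

# Feature data "registered in the memory 202": here random placeholders with labels,
# standing in for feature vectors learned from recordings of each instrument type.
train_X = np.array([extract_features(np.random.randn(48000)) for _ in range(4)])
train_y = ["Vocal", "Guitar", "Drums", "Keyboard"]

classifier = SVC(kernel="rbf")
classifier.fit(train_X, train_y)

def identify_instrument(signal):
    """Return the instrument label for one channel's audio signal."""
    return classifier.predict([extract_features(signal)])[0]
```

In practice, the feature data registered in the memory 202 would be obtained from recordings of each instrument type rather than from placeholder vectors.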
Based on the combination of the identified musical instruments, the adjustment information acquisition module 304 acquires adjustment information associated with the combination from the memory 202. Here, for example, the memory 202 stores combination information representing combinations of musical instruments and adjustment information for adjusting each audio signal associated with each combination of musical instruments, as shown in
Specifically, for example, the memory 202 stores musical instrument information (Inst.) representing vocal (Vocal) and guitar (Guitar) as combination information, and stores volume information (Vol), pan information (Pan), reverb information (Reverb.), and compression information (comp.), as adjustment information, as shown in
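As a minimal sketch, the combination information and adjustment information held in the memory 202 could be represented as a dictionary keyed by the set of identified instruments; the numeric values below are placeholders and are not values from the disclosure.

```python
# Illustrative combination-keyed adjustment table (placeholder values).
ADJUSTMENT_TABLE = {
    frozenset({"Vocal", "Guitar"}): {
        "Vocal":  {"vol": 0.0,  "pan": 0.0,  "reverb": 0.2, "comp": True},
        "Guitar": {"vol": -3.0, "pan": -0.3, "reverb": 0.1, "comp": False},
    },
    frozenset({"Vocal", "Guitar", "Drums", "Keyboard"}): {
        "Vocal":    {"vol": 0.0,  "pan": 0.0,  "reverb": 0.2,  "comp": True},
        "Guitar":   {"vol": -4.0, "pan": -0.4, "reverb": 0.1,  "comp": False},
        "Drums":    {"vol": -2.0, "pan": 0.0,  "reverb": 0.0,  "comp": True},
        "Keyboard": {"vol": -5.0, "pan": 0.4,  "reverb": 0.15, "comp": False},
    },
}

def acquire_adjustment_info(identified_instruments):
    """Return per-instrument adjustment info for the identified combination."""
    return ADJUSTMENT_TABLE.get(frozenset(identified_instruments), {})
```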
The adjustment module 302 adjusts each audio signal based on the adjustment information acquired by the adjustment information acquisition module 304. Each audio signal can be directly input to the adjustment module 302, or can be input via the identification module 301. As shown in
Here, the level control module 305 controls the level of each input audio signal. The pan control module 306 adjusts the sound localization of each audio signal. The reverberation control module 307 adds reverberation to each audio signal. The compression control module 308 compresses the width of change in volume (the dynamic range) of each audio signal. The side chain control module 309 uses the intensity of the sound and the timing at which sound is emitted from a certain musical instrument to turn on and off effects that are controlled so as to affect the sounds of other musical instruments. The functional configuration of the adjustment module 302 is not limited to the foregoing; for example, it can include a function to control the level of audio signals in specific frequency bands, such as an equalizer function, a function to add amplification and distortion, such as a booster function, a low-frequency modulation function, and other functions provided by effectors. In addition, a portion of the level control module 305, the pan control module 306, the reverberation control module 307, the compression control module 308, and the side chain control module 309 can be provided externally.
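A minimal sketch of how the level control module 305, the pan control module 306, and the compression control module 308 might act on one channel is given below, assuming mono floating-point signals in NumPy arrays; the constant-power panning and the soft-clipping stand-in for compression are illustrative simplifications, not the disclosed implementation.

```python
# Illustrative per-channel adjustment (level, compression stand-in, pan).
import numpy as np

def adjust_channel(signal, info):
    gain = 10.0 ** (info.get("vol", 0.0) / 20.0)   # level control (305), "vol" in dB
    out = signal * gain
    if info.get("comp", False):                     # compression (308): crude soft-clip stand-in
        out = np.tanh(out)
    pan = info.get("pan", 0.0)                      # pan control (306), -1.0 (left) .. +1.0 (right)
    theta = (pan + 1.0) * np.pi / 4.0               # constant-power panning
    left, right = out * np.cos(theta), out * np.sin(theta)
    return np.stack([left, right])                  # stereo pair, shape (2, n_samples)
```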
Here, for example, when the adjustment information shown in
The mixing module 303 mixes each of the audio signals adjusted by the adjustment module 302 and outputs the mixed signal to the amplifier 107.
Next, one example of the process flow of the mixer 106 according to the present embodiment will be described, using
According to the present embodiment, for example, it is possible to set the mixer more easily. More specifically, according to the present embodiment, by connecting the musical instruments to the mixer, each musical instrument is identified, and the mixer is automatically set according to the instrument composition formed by those musical instruments.
The present invention is not limited to the embodiment described above, and can be replaced by a configuration that is substantially the same, a configuration that provides the same action and effect, or a configuration that is capable of achieving the same object as the configuration shown in the above-described embodiment.
For example, when the identification module 301 identifies the same musical instrument during a performance, the adjustment module 302 can appropriately adjust the position of said same musical instrument. Specifically, for example, regarding the composition of the musical instruments shown in
In addition to the configuration of the above-described embodiment, for example, a duet determination unit can be provided in the mixer 106, which determines the occurrence of a duet, in which a male and a female sing alternately or together, based on the pitch trajectories of the channels identified as vocals. Then, for example, when the duet determination unit determines the occurrence of a duet, the sounds of the low pitch range can be raised by an equalizer included in the mixer 106 and the pan can be shifted slightly to the right with respect to the male vocals, and the sounds of the mid pitch range can be raised and the pan can be shifted slightly to the left with respect to the female vocals. It thereby becomes possible to more easily distinguish between male and female vocals.
Here, an example of a pitch trajectory in the case of a duet is shown in
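A minimal sketch of such a duet determination is given below, assuming that a per-frame fundamental-frequency estimate (in Hz, with 0 for unvoiced frames) is already available for each channel identified as vocals; the 250 Hz split between the low and mid pitch ranges and the ratio threshold are illustrative assumptions, not values from the disclosure.

```python
# Illustrative duet determination from pitch trajectories.
import numpy as np

def is_duet(pitch_tracks, split_hz=250.0, min_ratio=0.2):
    """Return True if both low-range and mid-range vocals appear across the
    vocal channels' pitch trajectories (singers alternating or singing together)."""
    voiced = np.concatenate([p[p > 0] for p in pitch_tracks])  # keep voiced frames only
    if voiced.size == 0:
        return False
    low_ratio = np.mean(voiced < split_hz)   # fraction of frames in the low pitch range
    mid_ratio = np.mean(voiced >= split_hz)  # fraction of frames in the mid pitch range
    return low_ratio > min_ratio and mid_ratio > min_ratio
```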
Additionally, a case in which adjustment information is held for each combination of the musical instruments was described above; however, a plurality of pieces of adjustment information can be stored, one for each music category, even with respect to the same combination of musical instruments, such that the audio signals are adjusted according to the music category specified by the user. Music categories can include genre (pop, classical, etc.) or mood (romantic, funky, etc.). The user can thereby enjoy music adjusted according to the composition of the musical instruments by simply specifying the music category, without performing complicated settings.
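As a minimal sketch, the lookup shown earlier could be extended so that the key also includes the music category specified by the user; the categories and values below are illustrative placeholders.

```python
# Illustrative adjustment table keyed by (instrument combination, music category).
CATEGORY_TABLE = {
    (frozenset({"Vocal", "Guitar"}), "pop"):       {"Vocal": {"reverb": 0.2}, "Guitar": {"reverb": 0.1}},
    (frozenset({"Vocal", "Guitar"}), "classical"): {"Vocal": {"reverb": 0.5}, "Guitar": {"reverb": 0.4}},
}

def acquire_by_category(instruments, category):
    """Return adjustment info for the identified combination and the specified category."""
    return CATEGORY_TABLE.get((frozenset(instruments), category), {})
```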
Furthermore, the mixer 106 can be configured to have a function to give advice on the state of the microphones (each of the microphones provided for each instrument). Specifically, for example, with respect to setting a microphone, the technique disclosed in Japanese Laid-Open Patent Application No. 2015-080076 can be used to advise the user by displaying the sound pickup state of each microphone (for example, whether the arrangement position of each microphone is good or bad) on the display 204, or to give advice by means of sound from the mixer 106 by using a monitor speaker (not shown) connected to the mixer 106, or by another technique. In addition, howling can be detected and pointed out, or the level control module can adjust the level of the audio signal so as to automatically suppress the howling. Furthermore, the amount of crosstalk between microphones can be calculated from cross-correlation, and advice can be given to move the microphone or change its direction, as sketched below. The audio processing device in the scope of the claims corresponds, for example, to the mixer 106 described above, but is not limited to the mixer 106, and can be realized, for example, with a computer.
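A minimal sketch of estimating the amount of crosstalk between two microphone channels from normalized cross-correlation is given below, assuming equal-length mono NumPy arrays; the advice threshold is an illustrative assumption.

```python
# Illustrative crosstalk estimate via normalized cross-correlation.
import numpy as np

def crosstalk_amount(sig_a, sig_b):
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + 1e-12
    return np.max(np.abs(corr)) / denom  # ~0.0 (independent) .. ~1.0 (identical signals)

def crosstalk_advice(sig_a, sig_b, threshold=0.5):
    if crosstalk_amount(sig_a, sig_b) > threshold:
        return "High crosstalk detected: consider moving or re-aiming one microphone."
    return "Crosstalk within acceptable range."
```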
Number | Date | Country | Kind
--- | --- | --- | ---
2015-195237 | Sep. 2015 | JP | national
Number | Name | Date | Kind
--- | --- | --- | ---
5,406,022 | Kobayashi | Apr. 1995 | A
5,896,358 | Endoh | Apr. 1999 | A
8,737,638 | Sakurada et al. | May 2014 | B2
Number | Date | Country
--- | --- | ---
S55-035558 | Mar. 1980 | JP
H04-306697 | Oct. 1992 | JP
H09-46799 | Feb. 1997 | JP
2004-328377 | Nov. 2004 | JP
2006-259401 | Sep. 2006 | JP
2010-034983 | Feb. 2010 | JP
2011-217328 | Oct. 2011 | JP
2014-049885 | Mar. 2014 | JP
2015-011245 | Jan. 2015 | JP
2015-012592 | Jan. 2015 | JP
2015-080076 | Apr. 2015 | JP
2017-067901 | Apr. 2017 | JP
2017-067903 | Apr. 2017 | JP
Entry
---
International Search Report in PCT/JP2016/078752 dated Dec. 20, 2016.
Number | Date | Country
--- | --- | ---
20180219638 A1 | Aug. 2018 | US