The present application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2014-110810, filed May 29, 2014, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a specific performance technique for an electronic musical instrument, and more particularly, to a technique of controlling generation of a tone to be generated by the specific performance technique for the electronic musical instrument.
2. Description of the Related Art
In an electronic musical instrument realizing a wind instrument (for instance, a saxophone) by an electronic technique, a conventional technique is disclosed in Japanese Patent Publication No. 2605761. This technique allows a player to use his/her blowing intensity and/or the strength of biting the mouthpiece of the wind instrument as musical parameters, and to give a blowing performance of the wind instrument in accordance with the characteristic values of those parameters.
Further, another conventional technique employed in electronic musical instruments is disclosed in Japanese Patent Publication Nos. 2712406 and 3389618. This technique detects a position and/or movement of the player's tongue (tonguing) to control a sound being generated by the wind instrument.
There are several playing techniques for typical wind instruments, such as simply blowing into the wind instrument, tonguing, and a specific performance in which the player utters a voice while blowing into the wind instrument, thereby generating growling tones.
The conventional technique in the electronic musical instrument does not allow the player to give the specific performance by uttering a voice while he/she is blowing into the wind instrument.
The present invention provides an electronic musical instrument which detects that the player has uttered a voice while he/she is blowing into the wind instrument, and generates tones specific to the wind instrument.
According to one aspect of the invention, there is provided an electronic musical instrument which comprises a voice sensor which detects a voice uttered by a user, when the user blows into the musical instrument with a voice, a breath sensor which detects at least one of a blow pressure and a blow volume in a body of the musical instrument, when the user blows into the musical instrument with a voice, and a musical tone controlling unit which controls generation of a musical tone based on at least one of outputs of the voice sensor and the breath sensor.
According to another aspect of the invention, there is provided a method of controlling generation of a tone, in an electronic musical instrument having a breath sensor and a voice sensor, the method which comprises a step of detecting a voice of a user by a voice sensor, when the user blows into the musical instrument with a voice, a step of detecting at least one of a blow pressure and a blow volume in a body of the musical instrument by a breath sensor, when the user blows into the musical instrument with a voice, and a step of controlling generation of a musical tone based on at least one of outputs of the voice sensor and the breath sensor.
An electronic musical instrument (electronic wind instrument) according to the embodiments of the invention will be described in detail with reference to the accompanying drawings.
The mouthpiece 100 of the electronic wind instrument is provided with a pressure sensor 101 in the depth part thereof. When a player of the electronic wind instrument blows into the blowing aperture 103 of the mouthpiece 100, the pressure sensor 101 detects a blow pressure and generates an analog signal representing the detected blow pressure.
Further, the mouthpiece 100 is provided with a microphone (voice sensor) 102. The voice sensor 102 detects a human voice uttered by the player while he/she is blowing into the wind instrument, and generates an analog signal representing the detected human voice.
The analog signal generated by the pressure sensor 101 is sent to an Analog/Digital converter 203, wherein the analog signal is converted into a digital signal representing a sound volume (a digital sound volume signal). The digital sound volume signal is further sent to CPU (Central Processing Unit) 201 (musical-tone controlling unit).
Meanwhile, the analog signal generated by the microphone (voice sensor) 102 is sent to an Analog/Digital converter 204, wherein the analog signal is converted into a digital signal representing a human voice (a digital human voice signal). The digital human voice signal is further sent to CPU (Central Processing Unit) 201 (musical-tone controlling unit).
A waveform ROM (Read Only Memory) 202 stores various sorts of waveform data to be used to generate instrument tones.
When the player presses an operation key(s) 205 of the electronic wind instrument, key data corresponding to the pressed operation key(s) is generated as pitch information and sent to CPU 201. The pitch information is used as an element to determine a pitch of the instrument tone.
Upon receipt of the sound volume signal sent from the pressure sensor 101 through Analog/Digital converter 203, the human voice signal sent from the microphone (voice sensor) 102 through Analog/Digital converter 204, and the pitch information corresponding to the pressed operation key(s), CPU 201 reads waveform data from the waveform ROM 202 as musical-tone waveform information to generate digital voice data. The digital voice data is supplied to a Digital/Analog converter 206, wherein the digital voice data is converted into an analog audio signal. The analog audio signal is supplied to an audio system 207, amplified to such a level as to be heard by the player, and then output.
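Although the specification provides no program code, the signal path just described can be summarized in a short sketch. The following Python fragment is an illustration only; the class, the function process_frame, and the sample values are assumptions and do not appear in the original description.

```python
# Minimal sketch (assumed names) of the first embodiment's signal path:
# pressure sensor 101 -> A/D 203 -> CPU 201 <- A/D 204 <- microphone 102,
# with waveform data taken from waveform ROM 202 and the result handed to
# D/A converter 206 and audio system 207.
from dataclasses import dataclass

@dataclass
class InstrumentInputs:
    blow_volume: float   # digital sound volume signal from A/D converter 203
    voice_sample: float  # digital human voice signal from A/D converter 204
    key_value: int       # key data from operation keys 205 (pitch information)

def process_frame(inputs: InstrumentInputs, waveform_rom: dict) -> float:
    """Stand-in for CPU 201: pick waveform data by pitch and scale it by the
    blow volume. The actual processing (steps S301-S308) is detailed below."""
    sample = waveform_rom.get(inputs.key_value, 0.0)  # musical-tone waveform information
    return sample * inputs.blow_volume                # level follows the blow pressure

# Example frame: key value 60 pressed, moderate blow pressure, no voice yet.
rom = {60: 0.8, 62: 0.7}
print(process_frame(InstrumentInputs(blow_volume=0.5, voice_sample=0.0, key_value=60), rom))
```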
CPU 201 performs the following process (steps S301 through S308).
CPU 201 first reads a value of the pressed operation key 205 (step S301).
CPU 201 acquires the pitch information from the value of the pressed operation key 205 to determine a pitch of the instrument tone to be generated (step S302).
CPU 201 reads the blow pressure detected by the pressure sensor 101 to acquire the sound volume signal (step S303).
Then, CPU 201 sets a boundary value on the basis of the sound volume signal acquired from the pressure sensor 101 (step S304). For example, the boundary value may be made proportional to the sound volume signal acquired from the pressure sensor 101, so that the boundary value increases as the acquired sound volume signal increases. Further, it is possible to allow the user to adjust the boundary value manually, independently of the level of the sound volume signal.
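As an illustration of step S304 only, the boundary value could be derived as follows; the proportionality constant k and the manual offset are assumed values, not figures given in the specification.

```python
def boundary_value(sound_volume: float, k: float = 0.5, manual_offset: float = 0.0) -> float:
    """Step S304 sketch: a boundary proportional to the sound volume signal from
    pressure sensor 101, optionally shifted by a manual user adjustment.
    Both k and manual_offset are illustrative assumptions."""
    return k * sound_volume + manual_offset

print(boundary_value(0.4))                       # boundary grows with blow pressure
print(boundary_value(0.4, manual_offset=0.1))    # user-adjusted boundary
```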
CPU 201 acquires the human voice signal from the microphone (voice sensor) 102 (step S305).
CPU 201 rectifies the human voice signal, thereby obtaining plural harmonic components. Then, CPU 201 compares the envelope(s) of one or more harmonic components with the boundary value set at step S304 (step S306).
When it is determined that the envelope(s) of one or more harmonic components is not larger than the boundary value, CPU 201 reads musical-tone waveform information of a normal tone from the waveform ROM 202 in accordance with the pitch determined at step S302 and a sound volume determined based on the sound volume signal acquired from the pressure sensor 101 at step S303, and outputs the musical-tone waveform information of the normal tone to D/A converter 206 (step S307). Thereafter, CPU 201 returns to step S301.
Meanwhile, when it is determined that the envelope(s) of one or more harmonic components is larger than the boundary value, CPU 201 reads musical-tone waveform information of a special tone (a growling tone) from the waveform ROM 202 in accordance with the pitch determined at step S302, a sound volume determined based on the sound volume signal acquired from the pressure sensor 101 at step S303, and the envelope(s), and outputs the musical-tone waveform information of the special tone to D/A converter 206 (step S308). Thereafter, CPU 201 returns to step S301.
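The decision made in steps S305 through S308 can be sketched as follows. This is a minimal illustration under assumed helper names (envelope, select_waveform) and assumed stand-in waveforms; it is not the firmware of CPU 201.

```python
# Sketch of steps S305-S308 (assumed helper names; not taken from the specification).
import numpy as np

def envelope(voice: np.ndarray) -> float:
    """Crude envelope estimate: rectify the human voice signal and average it."""
    return float(np.mean(np.abs(voice)))

def select_waveform(voice: np.ndarray, boundary: float,
                    normal_wave: np.ndarray, growl_wave: np.ndarray) -> np.ndarray:
    """If the voice envelope exceeds the boundary value, return the special
    (growling) tone waveform, otherwise the normal tone waveform (S306-S308)."""
    return growl_wave if envelope(voice) > boundary else normal_wave

# Example: a clearly voiced hum versus near silence.
t = np.linspace(0.0, 1.0, 1000)
normal = np.sin(2 * np.pi * 440 * t)            # stand-in normal tone
growl = np.sign(np.sin(2 * np.pi * 440 * t))    # stand-in growling tone
quiet_voice = 0.01 * np.random.randn(1000)
loud_voice = 0.5 * np.sin(2 * np.pi * 120 * t)
print(select_waveform(quiet_voice, 0.1, normal, growl) is normal)  # True
print(select_waveform(loud_voice, 0.1, normal, growl) is growl)    # True
```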
Using the electronic instrument according to the first embodiment of the invention, the player can give the specific performance technique by uttering a voice while he/she is blowing into the wind instrument (electronic instrument), thereby generating sampled growling tones specific to the wind instrument.
In the second embodiment, as shown in the accompanying drawings, tones based on the specific performance technique are produced by process circuit blocks surrounded by a broken line 602.
Meanwhile, the instrument-tone signal generated from Wave Generator (sound-generation block) 601 is supplied to plural band-pass filters (BPF) 605 and divided into plural signals.
The divided signals are further supplied to plural VCAs (Voltage Controlled Amplifiers) 607, wherein the divided signals are modulated with the harmonic components of the human voice output from the rectifiers 608, respectively.
The signals output from the VCAs 607, each modulated with a harmonic component of the human voice, are combined into one tone of the specific performance technique (specific-performance technique tone), and this specific-performance technique tone is then sent to a selector 604. The instrument-tone signal from Wave Generator (sound-generation block) 601 is input to the other input terminal of the selector 604. Meanwhile, the sound volume signal from A/D converter 203 is amplified by an amplifier 603 and supplied as the boundary value to a control input terminal of the selector 604.
When one of the envelopes, or the sum of the plural envelopes, output from the rectifiers 608 is not larger than the boundary value, the selector 604 outputs the instrument tone as a digital sound signal to D/A converter 206.
When one of the envelopes, or the sum of the plural envelopes, output from the rectifiers 608 is larger than the boundary value, the selector 604 outputs the specific-performance technique tone as a digital sound signal to D/A converter 206.
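A rough software analogue of the circuit blocks 601 through 608 is sketched below. The band edges, filter orders, and smoothing cutoff are assumptions chosen for illustration; the patent describes the blocks themselves, not these parameter values.

```python
# Rough software analogue (assumed parameters) of Wave Generator 601, BPFs 605,
# rectifiers 608, VCAs 607 and selector 604.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000
BANDS = [(200, 800), (800, 2400), (2400, 6000)]  # assumed pass bands for BPFs 605

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def envelope(x, fs=FS, cutoff=50.0):
    """Rectifier 608: full-wave rectify, then low-pass filter to get the envelope."""
    b, a = butter(2, cutoff / (fs / 2))
    return lfilter(b, a, np.abs(x))

def specific_tone(instrument, voice):
    """VCAs 607: each band of the instrument tone is gain-controlled by the
    envelope of the matching voice band, then the bands are summed."""
    out = np.zeros(len(instrument))
    for lo, hi in BANDS:
        out += bandpass(instrument, lo, hi) * envelope(bandpass(voice, lo, hi))
    return out

def selector(instrument, voice, boundary):
    """Selector 604: pass the specific-performance technique tone only while the
    summed voice envelopes exceed the boundary derived from the blow pressure."""
    total_env = sum(envelope(bandpass(voice, lo, hi)).mean() for lo, hi in BANDS)
    return specific_tone(instrument, voice) if total_env > boundary else instrument

# Example: a 440 Hz instrument tone with a noisy "voice" superimposed.
t = np.arange(FS) / FS
out = selector(np.sin(2 * np.pi * 440 * t), 0.3 * np.random.randn(FS), boundary=0.05)
```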
As described above, in the electronic wind instrument according to the second embodiment of the invention, when it is determined that the envelope is larger than the boundary value, it is assumed that the player has given the specific performance, and the selector 604 switches the instrument tone to the specific-performance technique tone. This boundary value is calculated based on, and is proportional to, the blow pressure detected by the pressure sensor 101.
As described above, in the electronic wind instrument according to the second embodiment of the invention, since it can be confirmed that the player is blowing into the instrument while uttering a low voice, the player can give the specific performance to generate tones specific to the wind instrument.
In the electronic instruments according to the first and second embodiments of the invention, the tone to be output is switched from the normal instrument tone to the specific-performance technique tone based on whether or not the envelope of the human voice detected by the microphone (voice sensor) 102 is larger than the boundary value calculated from the blow pressure detected by the pressure sensor 101. Alternatively, it is also possible to mix the normal instrument tone with the specific-performance technique tone at a rate of the envelope to the boundary value and to output the mixed tone.
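One plausible reading of this mixing variant is sketched below; the clamped linear crossfade law is an assumption, since the specification only states that the two tones are combined at a rate of the envelope to the boundary value.

```python
def mixed_tone(normal, special, envelope_level, boundary):
    """Crossfade the normal tone and the specific-performance technique tone at
    the ratio of the voice envelope to the boundary value, clamped to 0..1.
    The exact mixing law is an assumption."""
    w = min(1.0, max(0.0, envelope_level / boundary)) if boundary > 0 else 1.0
    return (1.0 - w) * normal + w * special

print(mixed_tone(1.0, 0.2, envelope_level=0.05, boundary=0.1))  # mostly the normal tone
print(mixed_tone(1.0, 0.2, envelope_level=0.15, boundary=0.1))  # entirely the special tone
```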
In the case where the normal instrument tone is switched to the specific-performance technique tone by comparing the envelope with the boundary value, it is possible to use a boundary value with hysteresis in place of a fixed boundary value.
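A minimal sketch of such a hysteresis comparison follows; the hysteresis width is an assumed value.

```python
def make_hysteresis_switch(boundary, width=0.02):
    """Switch to the special tone above boundary + width and back to the normal
    tone only below boundary - width, so small fluctuations of the envelope do
    not cause chattering between the two tones. The width is an assumed value."""
    state = {"special": False}
    def update(envelope_level):
        if envelope_level > boundary + width:
            state["special"] = True
        elif envelope_level < boundary - width:
            state["special"] = False
        return state["special"]
    return update

switch = make_hysteresis_switch(boundary=0.10)
print([switch(e) for e in (0.09, 0.13, 0.11, 0.09, 0.07)])
# [False, True, True, True, False]
```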
Further, in the electronic instruments according to the first and second embodiments of the invention, the blow pressure is detected by the pressure sensor 101; however, a flow sensor can be used in place of the pressure sensor 101 to detect the blow volume of the player's breath.
Furthermore, it is possible for the musical instrument to employ a structure consisting of both the pressure sensor 101 and the flow sensor.
Although specific circuit configurations and structures of the invention have been described in the foregoing detailed description, it will be understood that the invention is not limited to the particular embodiments described herein, but modifications and rearrangements may be made to the disclosed embodiments while remaining within the scope of the invention as defined by the following claims. It is intended to include all such modifications and rearrangements in the following claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
2014-110810 | May 2014 | JP | national |
Number | Date | Country |
---|---|---|
2605761 | Apr 1997 | JP |
2712406 | Feb 1998 | JP |
3389618 | Mar 2003 | JP |
Number | Date | Country | |
---|---|---|---|
20150348525 A1 | Dec 2015 | US |