This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2013-001420, filed Jan. 8, 2013, the entire contents of which are incorporated herein by reference.
Field of the Invention
The present invention relates to a musical sound control device, a musical sound control method and a storage medium.
Related Art
A musical sound control device is conventionally known that produces tapping harmonics according to a state of a switch on the left-hand side (refer to Japanese Patent No. 3704851). This musical sound control device determines the pitch difference between the pitch specified by the pitch specification operator on which tapping is detected by a tapping determination unit and the pitch specified by the preceding pitch specification operator; a harmonics generation unit determines whether or not the pitch difference coincides with a predetermined pitch difference, and generates predetermined harmonics corresponding to the pitch difference.
However, the musical sound control device of Japanese Patent No. 3704851 cannot change the frequency characteristic of a musical sound so as to generate a musical sound having a frequency characteristic with reduced high-frequency components, such as that of muting.
The present invention has been realized in consideration of this situation, and it is an object of the present invention to change the frequency characteristic of a musical sound so as to generate a musical sound with a muted timbre having a frequency characteristic with reduced high-frequency components.
In order to achieve the above-mentioned object, a musical sound control device according to an aspect of the present invention includes:
an acquisition unit that acquires a string vibration signal in a case where a string picking operation is performed with respect to a stretched string;
an analysis unit that analyzes a frequency characteristic of the string vibration signal acquired by the acquisition unit;
a determination unit that determines whether or not the analyzed frequency characteristic satisfies a condition; and
a change unit that changes a frequency characteristic of a musical sound generated in a sound source according to a determination result by the determination unit.
Descriptions of embodiments of the present invention are given below, using the drawings.
Overview of Musical Sound Control Device 1
First, a description for an overview of a musical sound control device 1 as an embodiment of the present invention is given with reference to
The head 30 has threaded screws 31 mounted thereon for winding one end of each steel string 22, and the neck 20 has a fingerboard 21 with a plurality of frets 23 embedded therein. It is to be noted that in the present embodiment, six strings 22 and 22 frets 23 are provided. The six strings 22 are each associated with a string number: the thinnest string 22 is numbered "1", and the string numbers increase as the strings 22 become thicker. The 22 frets 23 are each associated with a fret number: the fret 23 closest to the head 30 is numbered "1", and the fret numbers increase with distance from the head 30.
The body 10 is provided with: a bridge 16 having the other end of the string 22 attached thereto; a normal pickup 11 that detects vibration of the string 22; a hex pickup 12 that independently detects vibration of each of the strings 22; a tremolo arm 17 for adding a tremolo effect to sound to be emitted; electronics 13 built into the body 10; a cable 14 that connects each of the strings 22 to the electronics 13; and a display unit 15 for displaying the type of timbre and the like.
Additionally, the electronics 13 include a DSP (Digital Signal Processor) 46 and a D/A (digital/analog converter) 47.
The CPU 41 executes various processing according to a program recorded in the ROM 42 or a program loaded into the RAM 43 from a storage unit (not shown in the drawing).
In the RAM 43, data and the like required for executing various processing by the CPU 41 are appropriately stored.
The string-pressing sensor 44 detects which fret number is pressed on which string number. The string-pressing sensor 44 includes a type that detects electrical contact of the string 22 (refer to
The sound source 45 generates waveform data of a musical sound instructed to be generated, for example, through MIDI (Musical Instrument Digital Interface) data, and outputs an audio signal obtained by D/A converting the waveform data to an external sound source 53 via the DSP 46 and the D/A 47, thereby giving an instruction to generate and mute the sound. It is to be noted that the external sound source 53 includes an amplifier circuit (not shown in the drawing) for amplifying the audio signal output from the D/A 47 for outputting, and a speaker (not shown in the drawing) for emitting a musical sound by the audio signal input from the amplifier circuit.
The normal pickup 11 converts the detected vibration of the string 22 (refer to
The hex pickup 12 converts the detected independent vibration of each of the strings 22 (refer to
The switch 48 outputs to the CPU 41 an input signal from various switches (not shown in the drawing) mounted on the body 10 (refer to
The display unit 15 displays the type of timbre and the like to be generated.
In the type of the string-pressing sensor 44 for detecting an electrical contact location of the string 22 with the fret 23 as a string-pressing position, a Y signal control unit 52 supplies a signal received from the CPU 41 to each of the strings 22. An X signal control unit 51 outputs, in response to reception of a signal supplied to each of the strings 22 in each of the frets 23 by time division, a fret number of the fret 23 in electrical contact with each of the strings 22 to the CPU 41 (refer to
In the type of the string-pressing sensor 44 for detecting a string-pressing position based on output from an electrostatic sensor, the Y signal control unit 52 sequentially specifies any of the strings 22 to specify an electrostatic sensor corresponding to the specified string. The X signal control unit 51 specifies any of the frets 23 to specify an electrostatic sensor corresponding to the specified fret. In this way, only the simultaneously specified electrostatic sensor of both the string 22 and the fret 23 is operated to output a change in an output value of the operated electrostatic sensor to the CPU 41 (refer to
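The time-division scanning by the Y signal control unit 52 and the X signal control unit 51 described above can be sketched as follows. This is an illustrative sketch only: the sensor-reading function, the threshold value, and the returned data format are assumptions for illustration, not the device's actual firmware.

```python
# Sketch of time-division scanning of a 6-string x 22-fret electrostatic
# sensor matrix. Only one (string, fret) sensor is selected at a time,
# mirroring the simultaneous specification by the Y and X control units.

NUM_STRINGS = 6
NUM_FRETS = 22
THRESHOLD = 100  # assumed capacitance threshold for "pressed"

def scan_pressed_positions(read_sensor):
    """Activate one (string, fret) sensor at a time and report presses.

    read_sensor(string_no, fret_no) returns the output value of the
    electrostatic sensor selected by the Y (string) and X (fret) lines.
    String and fret numbers are 1-based, as in the embodiment.
    """
    pressed = []
    for string_no in range(1, NUM_STRINGS + 1):      # Y signal control unit 52
        for fret_no in range(1, NUM_FRETS + 1):      # X signal control unit 51
            value = read_sensor(string_no, fret_no)  # only this sensor operates
            if value > THRESHOLD:
                pressed.append((string_no, fret_no))
    return pressed
```

A change in the output value of the operated sensor above the assumed threshold is reported as a press of that string at that fret.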
In
In
Main Flow
Initially, in step S1, the CPU 41 is powered on and initialized. In step S2, the CPU 41 executes switch processing (described below in
Switch Processing
Initially, in step S11, the CPU 41 executes timbre switch processing (described below in
Timbre Switch Processing
Initially, in step S21, the CPU 41 determines whether or not a timbre switch (not shown in the drawing) is turned on. When it is determined that the timbre switch is turned on, the CPU 41 advances processing to step S22, and when it is determined that the switch is not turned on, the CPU 41 finishes the timbre switch processing. In step S22, the CPU 41 stores in a variable TONE a timbre number corresponding to timbre specified by the timbre switch. In step S23, the CPU 41 supplies an event based on the variable TONE to the sound source 45. Thereby, timbre to be generated is specified in the sound source 45. After the processing of step S23 is finished, the CPU 41 finishes the timbre switch processing.
Musical Performance Detection Processing
Initially, in step S31, the CPU 41 executes string-pressing position detection processing (described below in
String-Pressing Position Detection Processing
Initially, in step S41, the CPU 41 acquires an output value from the string-pressing sensor 44. In a case of the type of the string-pressing sensor 44 for detecting electrical contact of the string 22 with the fret 23, the CPU 41 receives, as an output value of the string-pressing sensor 44, a fret number of the fret 23 in electrical contact with each of the strings 22 together with the number of the string in contact therewith. In a case of the type of the string-pressing sensor 44 for detecting contact of the string 22 with the fret 23 based on output from an electrostatic sensor, the CPU 41 receives, as an output value of the string-pressing sensor 44, the value of electrostatic capacity corresponding to a string number and a fret number. Additionally, the CPU 41 determines, in a case where the received value of electrostatic capacity corresponding to a string number and a fret number exceeds a predetermined threshold, that a string is pressed in an area corresponding to the string number and the fret number.
In step S42, the CPU 41 executes processing for confirming a string-pressing position. Specifically, the CPU 41 determines that a string is pressed with respect to the fret 23 corresponding to the highest fret number among a plurality of frets 23 corresponding to each of the pressed strings 22.
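The confirmation of step S42, in which only the highest fret number among the frets pressed on each string is kept, can be sketched as follows; the function name and data representation are illustrative assumptions.

```python
def confirm_pressed_positions(detections):
    """For each string, keep only the highest pressed fret number (step S42).

    detections: iterable of (string_no, fret_no) pairs reported by the
    string-pressing sensor 44 in step S41.
    Returns a mapping {string_no: confirmed_fret_no}.
    """
    confirmed = {}
    for string_no, fret_no in detections:
        # A higher fret number on the same string supersedes a lower one,
        # matching the confirmation rule of step S42.
        if fret_no > confirmed.get(string_no, 0):
            confirmed[string_no] = fret_no
    return confirmed
```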
In step S43, the CPU 41 executes preceding trigger processing (described below in
Preceding Trigger Processing
Initially, in step S51, the CPU 41 receives output from the hex pickup 12 to acquire a vibration level of each string. In step S52, the CPU 41 executes preceding trigger propriety processing (described below in
In step S54, the CPU 41 sends a signal of a sound generation instruction to the sound source 45 based on timbre specified by a timbre switch and velocity decided in step S63 of preceding trigger propriety processing. At the time, in a case where a mute flag described below is turned on with reference to
Preceding Trigger Propriety Processing
Initially, in step S61, the CPU 41 determines whether or not a vibration level of each string based on output from the hex pickup 12 received in step S51 in
In step S62, the CPU 41 turns on the preceding trigger flag to allow preceding trigger. In step S63, the CPU 41 executes velocity confirmation processing.
Specifically, in the velocity confirmation processing, the following processing is executed. The CPU 41 detects the acceleration of the change of the vibration level based on sampling data of the three vibration levels prior to the point when a vibration level based on the output of the hex pickup exceeds Th1 (referred to below as the "Th1 point"). Specifically, a first velocity of the change of the vibration level is calculated based on the first and second preceding sampling data from the Th1 point. Further, a second velocity of the change of the vibration level is calculated based on the second and third preceding sampling data from the Th1 point. Then, the acceleration of the change of the vibration level is detected based on the first velocity and the second velocity. Additionally, the CPU 41 applies interpolation so that the velocity falls into a range from 0 to 127 within the dynamics of acceleration obtained in an experiment.
Specifically, where velocity is “VEL”, the detected acceleration is “K”, dynamics of acceleration obtained in an experiment are “D” and a correction value is “H”, velocity is calculated by the following expression (1).
VEL = (K/D) × 128 × H (1)
Data of a map (not shown in the drawing) indicating a relationship between the acceleration K and the correction value H is stored in the ROM 42 for every one of pitch of respective strings. In a case of observing a waveform of certain pitch of a certain string, there is a unique characteristic in a change of the waveform immediately after the string is distanced from a pick. Therefore, data of a map of the characteristic is stored in the ROM 42 beforehand for every one of pitch of respective strings so that the correction value H is acquired based on the detected acceleration K.
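The velocity confirmation of step S63 and expression (1) can be sketched as follows. The clamping of the result into the MIDI range 0 to 127 stands in for the interpolation described above, and the sample ordering and the correction-map interface are assumptions for illustration.

```python
def compute_velocity(samples, D, correction_map):
    """Sketch of the velocity confirmation of step S63 using expression (1).

    samples: the three vibration-level samples immediately preceding the
        Th1 point, oldest first: (third, second, first preceding sample).
    D: dynamics of acceleration obtained in an experiment.
    correction_map: callable mapping the detected acceleration K to the
        per-pitch correction value H (stored in the ROM 42 in the embodiment).
    """
    third, second, first = samples
    v1 = first - second      # first velocity: first/second preceding samples
    v2 = second - third      # second velocity: second/third preceding samples
    K = v1 - v2              # acceleration of the vibration-level change
    H = correction_map(K)    # correction value from the per-pitch map
    vel = (K / D) * 128 * H  # expression (1): VEL = (K/D) x 128 x H
    # Clamp into the MIDI velocity range 0-127 (assumed form of the
    # interpolation described in the embodiment).
    return max(0, min(127, int(vel)))
```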
In step S64, the CPU 41 executes mute detection processing (described below in
Mute Processing
Initially, in step S71, FFT (Fast Fourier Transform) is applied to a waveform based on the vibration level of each string derived from the output of the hex pickup 12 that is received in step S51 in
In step S73, data of a curve of pitch corresponding to the string-pressing position decided in step S42 in
Additionally,
Returning to
In step S75, the CPU 41 compares the data of the FFT curve generated in step S72 to the data of the FFT curve in muting that is selected in step S73, to determine whether or not the value indicating correlation is a predetermined value or more. In a case where it is determined that the value indicating correlation is a predetermined value or more, it is determined that muting is performed, and the CPU 41 advances processing to step S76. In step S76, the CPU 41 turns on a mute flag. On the other hand, in a case where it is determined in step S75 that the value indicating correlation is less than a predetermined value, it is determined that muting is not performed, and the CPU 41 finishes the mute processing.
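The comparison of steps S75 and S76 can be sketched as follows, assuming that the "value indicating correlation" is a Pearson correlation coefficient between the two FFT curves and using an illustrative threshold of 0.8; the embodiment specifies neither the correlation measure nor the predetermined value.

```python
import math

def correlation(a, b):
    """Pearson correlation between two FFT magnitude curves of equal length."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def detect_mute(fft_curve, mute_curve, threshold=0.8):
    """Steps S75-S76 sketch: muting is detected when the measured FFT curve
    correlates with the stored in-muting FFT curve at or above the threshold."""
    return correlation(fft_curve, mute_curve) >= threshold
```

When `detect_mute` returns true, the mute flag of step S76 would be turned on.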
Mute Processing (First Variation)
Initially, in step S81, peak values corresponding to frequencies of 1.5 kHz or more are extracted from among the peak values of the vibration level of each string derived from the output of the hex pickup 12 that is received in step S51 in
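The peak extraction of step S81 can be sketched as follows. Since the continuation of this step is not shown, the decision rule (muting is assumed when no strong peak remains at or above 1.5 kHz, consistent with muting suppressing high-frequency components) and the level threshold are assumptions for illustration.

```python
def peaks_above(freqs, mags, cutoff_hz=1500.0):
    """Extract local spectral peaks at or above cutoff_hz (step S81 sketch).

    freqs, mags: parallel lists from an FFT of the string vibration signal.
    Returns (frequency, magnitude) pairs for peaks in the cutoff band.
    """
    peaks = []
    for i in range(1, len(mags) - 1):
        is_local_peak = mags[i] > mags[i - 1] and mags[i] > mags[i + 1]
        if is_local_peak and freqs[i] >= cutoff_hz:
            peaks.append((freqs[i], mags[i]))
    return peaks

def is_muted(freqs, mags, level_threshold=0.2):
    """Assumed decision rule: muting leaves no strong peak above 1.5 kHz."""
    return all(m < level_threshold for _, m in peaks_above(freqs, mags))
```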
Mute Processing (Second Variation)
Initially, in step S91, the CPU 41 determines whether or not sound is being generated. In a case where sound is being generated, in step S92, the CPU 41 applies FFT (Fast Fourier Transform) to a waveform based on the vibration level of each string derived from the output of the hex pickup 12 that is received in step S51 in
String Vibration Processing
Initially, in step S101, the CPU 41 receives output from the hex pickup 12 to acquire a vibration level of each string. In step S102, the CPU 41 executes normal trigger processing (described below in
Normal Trigger Processing
Initially, in step S111, the CPU 41 determines whether preceding trigger is not allowed. That is, the CPU 41 determines whether or not a preceding trigger flag is turned off. In a case where it is determined that preceding trigger is not allowed, the CPU 41 advances processing to step S112. In a case where it is determined that preceding trigger is allowed, the CPU 41 finishes the normal trigger processing. In step S112, the CPU 41 determines whether or not a vibration level of each string based on output from the hex pickup 12 that is received in step S101 in
Pitch Extraction Processing
In step S121, the CPU 41 extracts pitch by means of known art to decide pitch. Here, the known art includes, for example, a technique described in Japanese Unexamined Patent Application, Publication No. H1-177082.
Sound Muting Detection Processing
Initially, in step S131, the CPU 41 determines whether or not the sound is being generated. In a case where determination is YES in this step, the CPU 41 advances processing to step S132, and in a case where determination is NO in this step, the CPU 41 finishes the sound muting detection processing. In step S132, the CPU 41 determines whether or not a vibration level of each string based on output from the hex pickup 12 that is received in step S101 in
Integration Processing
Initially, in step S141, the CPU 41 determines whether or not sound is generated in advance. That is, in the preceding trigger processing (refer to
On the other hand, in a case where it is determined in step S141 that a sound generation instruction is not given to the sound source 45 in the preceding trigger processing, the CPU 41 advances processing to step S143. In step S143, the CPU 41 determines whether or not a normal trigger flag is turned on. In a case where the normal trigger flag is turned on, the CPU 41 sends a sound generation instruction signal to the sound source 45 in step S144. At the time, in a case where a mute flag is turned on, the CPU 41 changes timbre to mute timbre to send data of the timbre to the sound source 45. Thereafter, the CPU 41 advances processing to step S145. In a case where a normal trigger flag is turned off in step S143, the CPU 41 advances processing to step S145.
In step S145, the CPU 41 determines whether or not a sound muting flag is turned on. In a case where the sound muting flag is turned on, the CPU 41 sends a sound muting instruction signal to the sound source 45 in step S146. In a case where the sound muting flag is turned off, the CPU 41 finishes the integration processing. After the processing of step S146 is finished, the CPU 41 finishes the integration processing.
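The flag handling of the integration processing (steps S141 to S146) can be sketched as follows; the sound-source object with `note_on`/`note_off` methods is an assumed stand-in for the MIDI-style events sent to the sound source 45, and the flag names mirror those of the embodiment.

```python
def integrate(sound_source, preceding_sounded, normal_trigger, mute_flag,
              sound_muting_flag, tone, mute_tone):
    """Sketch of steps S141-S146: route the flags to sound-source instructions.

    sound_source: object with note_on(tone) and note_off() methods
        (assumed interface standing in for events sent to sound source 45).
    preceding_sounded: whether a sound generation instruction was already
        given in the preceding trigger processing (step S141).
    """
    if not preceding_sounded and normal_trigger:
        # Steps S143-S144: generate sound via the normal trigger,
        # switching to the mute timbre when the mute flag is on.
        sound_source.note_on(mute_tone if mute_flag else tone)
    if sound_muting_flag:
        # Steps S145-S146: send the sound muting instruction.
        sound_source.note_off()
```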
A description has been given above concerning the configuration and processing of the musical sound control device 1 of the present embodiment.
In the present embodiment, the CPU 41 acquires a string vibration signal in a case where a string picking operation is performed with respect to the stretched string 22, analyzes a frequency characteristic of the acquired string vibration signal, determines whether or not the analyzed frequency characteristic satisfies a predetermined condition, and changes a frequency characteristic of a musical sound generated in the connected sound source 45 depending on a case where it is determined that the predetermined condition is satisfied or determined that the predetermined condition is not satisfied.
Therefore, in a case where the predetermined condition is satisfied, it is possible to generate a musical sound having a frequency characteristic with reduced high-frequency components, such as that of muting, by changing the frequency characteristic of the musical sound.
Further, in the present embodiment, in a case where it is determined that the predetermined condition is satisfied, the CPU 41 changes the musical sound into one having a frequency characteristic with fewer high-frequency components than in a case where it is determined that the predetermined condition is not satisfied.
Therefore, in a case where the predetermined condition is satisfied, it is possible to generate a musical sound having a frequency characteristic with reduced high-frequency components, such as that of muting.
Additionally, in the present embodiment, the CPU 41 determines that the predetermined condition is satisfied in a case where there is correlation at a certain level or above between a predetermined frequency characteristic model prepared beforehand and the analyzed frequency characteristic.
Therefore, it is possible to easily realize muting by appropriately setting a predetermined condition.
Moreover, in the present embodiment, the CPU 41 extracts a frequency component in a predesignated part of the acquired string vibration signal to determine that the predetermined condition is satisfied in a case where the extracted frequency component includes a specific frequency component.
Therefore, it is possible to easily realize muting by appropriately setting a predetermined condition.
Further, in the present embodiment, the CPU 41 extracts a frequency component in an interval from the vibration start time of the acquired string vibration signal until a predetermined time elapses.
Therefore, it is possible to determine whether or not muting is performed before a musical sound is first generated.
Furthermore, in the present embodiment, the CPU 41 extracts a frequency component in an interval from the vibration end time of the acquired string vibration signal until a predetermined time elapses.
Therefore, in a case where sound is being successively generated during musical performance, it is possible to determine whether or not muting is performed immediately after a musical sound being generated is muted and until a next musical sound is generated.
A description has been given above concerning embodiments of the present invention, but these embodiments are merely examples and are not intended to limit the technical scope of the present invention. The present invention can have various other embodiments, and in addition various types of modification such as abbreviations or substitutions can be made within a range that does not depart from the scope of the invention. These embodiments or modifications are included in the range and scope of the invention described in the present specification and the like, and are included in the invention and an equivalent range thereof described in the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
2013-001420 | Jan 2013 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
3813473 | Terymenko | May 1974 | A |
4041783 | Shimauchi | Aug 1977 | A |
4823667 | Deutsch | Apr 1989 | A |
5024134 | Uchiyama | Jun 1991 | A |
5025703 | Iba et al. | Jun 1991 | A |
5033353 | Fala | Jul 1991 | A |
5990408 | Hasebe | Nov 1999 | A |
6111186 | Krozack et al. | Aug 2000 | A |
20120294457 | Chapman et al. | Nov 2012 | A1 |
Number | Date | Country |
---|---|---|
102790932 | Nov 2012 | CN |
01279297 | Nov 1989 | JP |
3704851 | Oct 2005 | JP |
Entry |
---|
Japanese Office Action (and English translation thereof) dated Oct. 25, 2016, issued in counterpart Japanese Application No. 2013-001420. |
Chinese Office Action (and English translation thereof) dated Mar. 31, 2016, issued in counterpart Chinese Application No. 201410051518.8. |
Number | Date | Country | |
---|---|---|---|
20140190336 A1 | Jul 2014 | US |