This application is a non-provisional application that claims priority benefits under Title 35, United States Code, Section 119(a)-(d) from Japanese Patent Application entitled “TONE CONTROL DEVICE” by Mizuki NAKAGAWA and Shun TAKAI, having Japanese Patent Application Serial No. 2011-050107, filed on Mar. 8, 2011, which Japanese Patent Application is incorporated herein by reference in its entirety.
1. Field of the Invention
The present invention relates to a method, computer storage device, and tone control device for generating tones with a vibrato effect.
2. Description of the Related Art
Electronic musical instruments such as synthesizers can generate tones with various kinds of tone colors. When an electronic musical instrument is used to imitate the performance of a natural musical instrument, the tone colors must faithfully reproduce those of the natural musical instrument. In addition, the performer needs to understand the characteristics peculiar to the imitated instrument and must operate the user interfaces of the electronic musical instrument (such as, for example, the keyboard, the pitch-bend lever, the modulation lever, the HOLD pedal and the like) during his or her performance. Therefore, when the performer attempts to imitate the performance of a certain musical instrument using an electronic musical instrument, the performer needs a good understanding of the characteristics of the instrument to be imitated, and is required to have high-level performance skills to make full use of the user interfaces according to those characteristics during performance.
For example, when the vibrato effect is added to tones, the LFO (Low Frequency Oscillator) depth in pitch and level is assigned to a user interface such as the modulation lever. The performer manually operates the user interface according to the performance state, thereby adjusting the amount of vibrato. Therefore, to add the vibrato effect, the performer needs high-level performance skills.
In this respect, in recent years, some electronic musical instruments automatically add the vibrato effect to tones according to the state of performance. Among these, there are electronic musical instruments that variably control the amount of vibrato according to velocity. However, these electronic musical instruments add the vibrato effect per note even in polyphonic performance. Therefore, when the performance of a solo musical instrument capable of polyphony, such as a violin, is imitated, the performance sounds artificial. One reason for this artificiality is that on a solo musical instrument capable of polyphony, such as a violin, vibrato that would be added to a single note tends not to be added to polyphonic notes, due to the structure of the instrument.
Japanese Patent Publication No. JP2199500 describes an electronic musical instrument that is capable of changing the effect according to the number of keys depressed, when the tone color of a piano is selected.
Provided are a method, computer storage device, and tone control device for generating tones with a vibrato effect. A current note-on event at a current time is received. A determination is made of a key depression interval comprising a difference of the current time from a previous time of a previous note. A performance mode in the memory device is set to a single tone mode, in which only one note is generated, or a polyphonic mode, in which multiple notes are simultaneously generated, based on the determined key depression interval. A tone is generated to control a sound source to output the current note with a first modulation magnitude in response to determining that the performance mode is the single tone mode. A tone is generated to control the sound source to output the current note with a second modulation magnitude in response to determining that the performance mode is the polyphonic mode, wherein the first modulation magnitude is greater than the second modulation magnitude.
Described embodiments relate to tone control devices and, more particularly, to a tone control device that is capable of highly sophisticated imitation of performance of a solo musical instrument capable of polyphony (for example, a violin and the like) based on real-time performance operation by the performer.
In prior art systems, such as the electronic musical instrument described in Japanese Patent Publication No. JP2199500, the decision of whether to add an effect, such as a vibrato effect, to the tones is based solely on the number of keys depressed. Therefore, when multiple keys are depressed, no discrimination is made as to whether the key-depression of these multiple keys is intended for polyphony (i.e., a chord) or for individual notes, and the same effect is added in both cases. Therefore, such prior art technology is not sufficient for imitating a solo musical instrument capable of polyphony.
Described embodiments provide a tone control device that is capable of imitating the performance of a solo musical instrument capable of polyphony, such as a violin, to a higher degree.
In described tone control device embodiments, an interval judgment device judges as to whether or not an input interval between a current tone generation instruction and a last tone generation instruction is equal to a predetermined time or less. When the interval judgment device judges that the input interval between the current tone generation instruction and the last tone generation instruction is not equal to the predetermined time or less, it is assumed that the current tone generation instruction has been inputted with an intention not to perform polyphony with respect to the last tone generation instruction (in other words, with an intention to perform a single tone). Then, an instruction device instructs a tone generation device to set the amount of tone modulation to a predetermined value. On the other hand, when the interval judgment device judges that the input interval between the current tone generation instruction and the last tone generation instruction is equal to the predetermined time or less, it is assumed that the current and last tone generation instructions are given based on polyphony performance. Then, the instruction device instructs the tone generation device to set the amount of tone modulation to a value smaller than the predetermined value. In other words, based on the input interval between the current tone generation instruction and the last tone generation instruction, it is discriminated as to whether the current tone generation instruction has been made in polyphonic performance together with the last tone generation instruction, or not in polyphonic performance (in other words, made in performance of a single tone). When it is judged to be of polyphonic performance, the amount of tone modulation is made smaller (suppressed), compared to the case where a single tone is performed.
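The interval judgment and modulation suppression described above can be sketched in the following illustrative Python fragment. All identifiers are hypothetical, and the 20 msec threshold and depth value are assumptions drawn from the embodiment described later in this specification:

```python
# Illustrative sketch of the interval judgment device and instruction
# device described above.  Names and constants are hypothetical.

POLYPHONY_THRESHOLD_MS = 20   # the "predetermined time"
SINGLE_TONE_DEPTH = 64        # the "predetermined value" of tone modulation

def modulation_amount(current_time_ms, last_time_ms):
    """Return the amount of tone modulation for the current
    tone generation instruction."""
    if last_time_ms is None:
        return SINGLE_TONE_DEPTH            # no last instruction: single tone
    interval = current_time_ms - last_time_ms
    if interval <= POLYPHONY_THRESHOLD_MS:
        return 0                            # polyphony: suppress modulation
    return SINGLE_TONE_DEPTH                # single tone: full modulation
```

A usage note: an interval exactly equal to the threshold is treated as polyphony, matching the "equal to the predetermined time or less" wording above.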
Therefore, with certain described embodiments, when the performer plays notes with an intention to perform polyphony, the amount of tone modulation (for example, the magnitude of the vibrato effect) is made smaller. Accordingly, when imitating performance of a solo musical instrument capable of polyphony such as a violin through real-time performance operation by the performer, highly sophisticated imitation that sufficiently reflects the characteristic of the solo musical instrument can be realized.
In further embodiments, when the number of tones being generated, which is counted by a count device, is counted down to one, the instruction device instructs the tone generation device to set the amount of tone modulation to a predetermined value. More specifically, when tones that are generated in polyphony are gradually silenced, and when there remains only a single tone being generated, the amount of tone modulation for the tone being generated is set to the same amount of modulation as that for a single tone. Therefore, the amount of tone modulation that is suppressed as a result of the judgment of polyphony is returned to the modulation for a single tone when there remains only one single tone being generated. Therefore, it is possible to realize highly sophisticated imitation that sufficiently reflects the characteristic of a solo musical instrument in which vibrato tends to be given to a single tone (a solo musical instrument capable of polyphony), such as a violin.
In further embodiments, when the interval judgment device judges that the input interval between the current tone generation instruction and the last tone generation instruction is equal to the predetermined time or less, it is assumed that the current tone generation instruction and the last tone generation instruction are given based on polyphony performance. Then, the instruction device instructs the tone generation device to set a poly mode that is capable of concurrently generating two or more tones. In other words, it is effective in that a tone based on the current tone generation instruction can be generated together with a tone being generated based on the last tone generation instruction. On the other hand, when the interval judgment device judges that the input interval between the current tone generation instruction and the last tone generation instruction is not equal to the predetermined time or less, it is assumed that the current tone generation instruction has been inputted with an intention not to perform polyphony with respect to the last tone generation instruction (in other words, with an intention to perform a single tone). Then, the instruction device instructs the tone generation device to set a mono mode that prohibits concurrent generation of two or more tones. Therefore, while tones are being generated based on the performance of polyphony, when a tone generation instruction newly inputted by the performer (the current tone generation instruction) is inputted at an input interval exceeding the predetermined time from the last tone generation instruction, the tone generation device is set to the mono mode.
In other words, when it is judged, based on the input interval between the current tone generation instruction and the last tone generation instruction, that the current tone generation instruction has been inputted (performed) with an intention to play a single tone, a tone based on the current tone generation instruction (in other words, a tone performed with an intention to play a single tone) alone is generated. Therefore, more sophisticated imitation that better reflects the characteristic of a solo musical instrument capable of polyphony such as a violin can be realized.
In a further embodiment, when the interval judgment device judges that the input interval between the current tone generation instruction and the last tone generation instruction is equal to the predetermined time or less, it is assumed that the current tone generation instruction and the last tone generation instruction are given based on polyphony performance. Then, the instruction device instructs the tone generation device to set a poly mode that is capable of concurrently generating two or more tones. In other words, it is effective in that a tone based on the current tone generation instruction can be generated together with a tone being generated based on the last tone generation instruction. Meanwhile, when the number of tones being generated, which is counted by the count device, is counted down to one, the instruction device instructs the tone generation device to set a mono mode that prohibits concurrent generation of two or more tones. Therefore, when tones that are generated in polyphony are gradually silenced, and when there remains only a single tone being generated, the tone being generated can be generated as a single tone in the mono mode. Therefore, it is possible to realize highly sophisticated imitation that better reflects the characteristic of a solo musical instrument capable of polyphony such as a violin.
Embodiments are described with reference to the accompanying drawings.
The keyboard 2 is one of the user interfaces operated by the performer. The keyboard 2 outputs to a CPU 11 (see
When the performer performs a key-depression intended for a single tone, the electronic musical instrument 1 may add a vibrato effect with a predetermined vibrato depth to a tone corresponding to the key depressed. On the other hand, when the performer performs a key-depression intended for polyphony, the electronic musical instrument 1 may correct the vibrato depth to a value smaller than the value to be added to a single tone, such as zero. As the electronic musical instrument 1 has such a configuration, performance of a solo musical instrument capable of polyphony such as a violin can be realized to a highly sophisticated level.
The CPU 11 is a central control unit that controls each of the components of the electronic musical instrument 1 according to fixed value data and a control program stored in the ROM 12 and the RAM 13. The CPU 11 includes a built-in timer that counts clock signals, thereby measuring the time.
Upon receiving a note-on (a piece of performance information indicating that one of the keys 2a is depressed) from the keyboard 2, the CPU 11 outputs a tone generation instruction to the sound source 14, thereby causing the sound source 14 to start generating a tone (an audio signal) according to the note-on. Also, upon receiving a note-off (a piece of performance information indicating that one of the keys 2a that has been depressed is released) from the keyboard 2, the CPU 11 outputs a silencing instruction to the sound source 14, thereby performing a silencing control. By this, the tone that is being generated by the sound source 14 is stopped.
The ROM 12 is a non-rewritable memory, and stores a control program 12a to be executed by the CPU 11, fixed value data (not shown) to be referred to by the CPU 11 when the control program 12a is executed, and the like. The fixed value data includes values of vibrato depth to be applied to single tones. The processes shown in the flow chart of
The RAM 13 is a rewritable memory, and has a temporary storage area for temporarily storing various kinds of data for the CPU 11 to execute the control program 12a. The temporary area of the RAM 13 is provided with a polyphony start flag 13a, a generating tone number counter 13b, a previous tone voice information memory 13c, a vibrato depth memory 13d, and a key-depression time memory 13e.
The polyphony start flag 13a is a flag that is used to identify the note-on to be processed first when a performance intended for polyphony (in other words, key-depression of a chord) is executed by the performer. The polyphony start flag 13a is initialized (set to OFF) when the electronic musical instrument 1 is powered on. Then, the polyphony start flag 13a is set to ON upon key-depression of the key 2a each time the key-depression interval between the current key-depression and the last key-depression exceeds a predetermined time (20 milliseconds (msec) in the present embodiment), or when no last key-depression exists. When the key 2a is depressed, if the key-depression interval between the current key-depression and the last key-depression is equal to the predetermined time or less, it is identified that the current key-depression and the last key-depression are intended as key-depressions for polyphony. In this instance, if the polyphony start flag 13a is ON, the note-on by the last key-depression is identified as the note-on that is processed first among the key-depressions intended for polyphony. When the note-on that is processed first among the key-depressions intended for polyphony is identified in this manner, the polyphony start flag 13a is set to OFF.
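The life cycle of the polyphony start flag might be modeled as follows. This is a hypothetical sketch, not the actual firmware; all names are illustrative, and only the 20 msec threshold is taken from the embodiment:

```python
# Hypothetical model of the polyphony start flag 13a described above.

THRESHOLD_MS = 20

class PolyphonyStartFlag:
    def __init__(self):
        self.flag = False          # initialized to OFF at power-on

    def on_key_depression(self, interval_ms):
        """Update the flag for a key-depression.

        interval_ms is the interval from the last key-depression, or
        None when no last key-depression exists.  Returns True when
        the *last* note-on is identified as the note-on processed
        first among the key-depressions intended for polyphony."""
        if interval_ms is None or interval_ms > THRESHOLD_MS:
            self.flag = True       # possible start of a new chord
            return False
        first_of_chord = self.flag # was the last note-on the chord start?
        self.flag = False          # the chord start has now been identified
        return first_of_chord
```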
The generating tone number counter 13b is a counter for counting the number of tones being generated. The generating tone number counter 13b is initialized (zeroed) when the electronic musical instrument 1 is powered on, is incremented by one each time the key 2a is key-depressed, and is decremented by one each time the key 2a is key-released. Also, when the key-depression interval between the current key-depression and the last key-depression exceeds the predetermined time, the generating tone number counter 13b is first zeroed before the increment based on the current key-depression is executed.
The previous tone voice information memory 13c is a memory for managing information relating to the voice assigned based on the last key-depression (hereafter referred to as "previous tone voice information") when the key 2a is key-depressed. The previous tone voice information memory 13c is initialized (zeroed) when the electronic musical instrument 1 is powered on. Each time the CPU 11 receives a note-on from the keyboard 2 upon key-depression of the key 2a, previous tone voice information is stored in the previous tone voice information memory 13c. The previous tone voice information includes information indicative of the voice assigned (voice-assigned) to the received note-on among the 128 tone generation channels provided in the sound source 14, and the like. The voice may be composed of one or a plurality of tone generation channels.
The vibrato depth memory 13d is a memory for storing vibrato depth indicative of the depth of vibrato effect. The vibrato depth stored in the vibrato depth memory 13d is supplied to the sound source 14, whereby a vibrato effect with a magnitude according to the supplied vibrato depth is given to a tone to be generated. The vibrato depth memory 13d is initialized (zeroed) when the electronic musical instrument 1 is powered on.
The vibrato depth memory 13d stores a vibrato depth stored in the ROM 12 (the vibrato depth applied to a single tone) as an initial value each time the CPU 11 receives a note-on from the keyboard 2 upon key-depression of the key 2a. The vibrato depth stored in the vibrato depth memory 13d as an initial value may be corrected to zero when the key-depression interval between the current key-depression and the last key-depression is equal to 20 msec or less. By this, the vibrato effect to be added to a tone becomes zero. Therefore, according to the electronic musical instrument 1, when the performer depresses keys of a chord, intended for polyphony, vibrato is not added to each of the tones composing polyphony, such that the characteristic of a solo musical instrument capable of polyphony (for example, a violin) can be imitated.
Also, the vibrato depth stored in the vibrato depth memory 13d is corrected again to the initial value (in other words, the vibrato depth to be applied to a single tone) when the number of remaining tones being generated is reduced to one due to key-release of the keys 2a. By this, the vibrato effect to be applied to the tone changes again, from zero, to a magnitude according to the vibrato depth to be applied to a single tone. In other words, the electronic musical instrument 1 in accordance with the present embodiment is configured such that, when multiple tones being generated as polyphony are gradually reduced to one remaining tone (i.e., a single tone), vibrato with a magnitude according to the vibrato depth to be applied to a single tone is again added to the single tone. Therefore, according to the electronic musical instrument 1, it is possible to imitate the characteristic of a solo musical instrument, such as a violin, in which vibrato is not added to polyphony but is added to a single tone.
The key-depression time memory 13e is a memory for storing key-depression times sequentially in the order of key-depressions. The key-depression time memory 13e is initialized when the electronic musical instrument 1 is powered on. Then, each time the CPU 11 receives a note-on from the keyboard 2, the time measured by the timer 11a is stored as a key-depression time in the key-depression time memory 13e, together with the note (the note number) indicated by the received note-on. The present embodiment is configured to store a predetermined number of (for example, 10) latest key-depression times in the order of key-depressions. However, the present embodiment may be configured such that, when the key 2a is key-released, the key-depression time of the corresponding note is erased. The key-depression interval of keys that are successively depressed is calculated as the difference between the key-depression time of the current key-depression and the key-depression time of the last key-depression, based on the stored content of the key-depression time memory 13e.
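A minimal sketch of the key-depression time memory, assuming a 10-entry ring buffer per the "predetermined number" mentioned above; all identifiers are hypothetical:

```python
# Illustrative model of the key-depression time memory 13e.

from collections import deque

class KeyDepressionTimeMemory:
    def __init__(self, size=10):
        # (note_number, time_ms) pairs, newest last; old entries drop off
        self.times = deque(maxlen=size)

    def record(self, note, time_ms):
        """Store the key-depression time together with its note number."""
        self.times.append((note, time_ms))

    def last_interval(self):
        """Difference between the current and last key-depression times,
        or None when fewer than two key-depressions are stored."""
        if len(self.times) < 2:
            return None
        return self.times[-1][1] - self.times[-2][1]
```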
Also, the temporary area of the RAM 13 is provided with a note-on map (not shown). The note-on map is a map that indicates as to whether or not a tone corresponding to each of the keys 2a is being generated. More specifically, the note-on map is composed of tone generation flags associated with notes (note numbers) corresponding to the respective keys 2a, and indicates a voice of the sound source 14 assigned to each of the notes. When a tone generation instruction is outputted to the sound source 14, a tone generation flag of a note corresponding to the tone generation instruction is set to ON, and information indicative of an assigned voice is stored. On the other hand, when a silencing instruction is outputted to the sound source 14, a tone generation flag of a note corresponding to the silencing instruction is set to OFF, and information indicative of the corresponding voice is erased.
The sound source 14 generates (emanates) tones with a tone color set by the performer at pitches corresponding to those of the keys 2a depressed or stops tones that are being generated, based on tone generation instructions or silencing instructions received from the CPU 11, respectively. Upon receiving a tone generation instruction from the CPU 11, the sound source 14 generates a tone (an audio signal) with a pitch, a volume and a tone color according to the tone generation instruction. Tone signals outputted from the sound source 14 are supplied to the DAC 15, converted into analog signals by the DAC 15, and outputted (emanated) from the speaker 32 through the amplifier 31. On the other hand, upon receiving a silencing instruction from the CPU 11, the sound source 14 stops the tone being generated according to the silencing instruction. The tone being emanated from the speaker 32 is silenced accordingly.
The sound source 14 has 128 tone generation channels (not shown). Each of the voices is provided with an LFO (not shown) that outputs a low frequency oscillation signal. The tone generated through each voice is modified (in other words, a vibrato effect is added) by the low frequency oscillation signal outputted from the LFO of that voice. The magnitude of the vibrato effect, in other words, the magnitude of the modulation can be changed by changing LFO_Pitch_Depth and LFO_TVA_Depth among setting values set for each voice. In certain embodiments, a parameter deciding table (see
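How an LFO might modulate pitch to produce vibrato can be illustrated as follows. The 5 Hz rate and the cents scaling are assumptions for illustration only; here LFO_Pitch_Depth simply scales the pitch excursion, so a depth of zero yields no vibrato:

```python
# Simplified sketch of pitch modulation by a low frequency oscillator.
# Rate and scaling are assumed values, not taken from the embodiment.

import math

def vibrato_pitch_offset(t_sec, lfo_pitch_depth, rate_hz=5.0):
    """Pitch offset in cents at time t for the given LFO depth.
    A depth of 0 yields no vibrato."""
    return lfo_pitch_depth * math.sin(2.0 * math.pi * rate_hz * t_sec)
```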
Referring to
As shown in
In the example shown in
Next, referring to
As shown in
After the processing in S2, the key-depression time memory 13e is looked up to judge as to whether or not the key-depression interval between the current note (the note key-depressed this time) and the last note (the note key-depressed last time) is equal to 20 msec or less (S3).
When it is judged in S3 that the key-depression time interval exceeds 20 msec (S3: No), the polyphony start flag 13a is set to ON (S14). In S3, if the key-depression time of the previous note does not exist in the key-depression time memory 13e, the process also proceeds to S14 and the polyphony start flag 13a is set to ON. Then, it is judged as to whether or not the mode set at the sound source 14 is a poly mode (a mode that is capable of concurrently generating two or more tones) (S15).
When it is judged in S15 that the mode set at the sound source 14 is the poly mode (S15: Yes), the mode at the sound source 14 is set to a mono mode (a mode that prohibits concurrent generation of two or more tones) (S16), and the process proceeds to S17.
In other words, when the key-depression interval between the current note and the last note exceeds 20 msec and the mode of the sound source 14 is the poly mode, the current note is specified as a note that is key-depressed as a single tone after the performer key-depressed multiple notes as polyphony (the last and earlier notes). Then, the processing in S16 is executed, whereby the mode of the sound source 14 is set to the mono mode. In this case (S3: No, and S15: Yes), the mode of the sound source 14 is switched from the poly mode to the mono mode. Therefore, even when tones corresponding to all or a part of the notes composing polyphony are being generated, the sound source 14 forcefully silences the tones being generated and generates only a tone corresponding to the current note. Accordingly, the tone corresponding to the current note can be generated as a single tone, as intended by the performer.
On the other hand, when it is judged in S15 that the mode set at the sound source 14 is the mono mode (S15: No), the process proceeds to S17, leaving the mode of the sound source 14 in the mono mode. In this case (S3: No, and S15: No), the mode of the sound source 14 is also the mono mode. Therefore, even when there are tones being generated, the sound source 14 can forcefully silence the tones being generated and generate only a tone corresponding to the current note as a single tone.
In S17, the generating tone number counter 13b is zeroed (S17). After the processing in S17, a tone generation processing according to the note-on received from the keyboard 2 is executed (S10). More specifically, the tone generation instruction is outputted after a vacant voice in the sound source 14 is assigned for the tone generation instruction corresponding to the note-on received. In this instance, in the note-on map (not shown) in the RAM 13, the tone generation flag corresponding to the current note is set to ON, and information indicative of the voice-assigned voice is stored.
After executing the tone generation processing (S10), LFO_Pitch_Depth and LFO_TVA_Depth corresponding to the vibrato depth stored in the vibrato depth memory 13d are set for the voice assigned with respect to the tone generation instruction (S11). It is noted that the set values of LFO_Pitch_Depth and LFO_TVA_Depth may be decided by referring to the parameter deciding table (see
As the operations in S10 and S11 are executed, the sound source 14 adds the vibrato effect corresponding to the vibrato depth stored in the vibrato depth memory 13d to a tone corresponding to the current note, and generates the tone. Therefore, when the current note is key-depressed at an interval exceeding 20 msec from the last note (in other words, S3: No), vibrato according to the vibrato depth set in S2, in other words, the vibrato depth to be applied to a single tone, is added to the tone corresponding to the current note.
After the processing in S11, information indicative of the voice assigned with respect to the tone generation instruction of the current note (in other words, the current voice) is stored, as previous tone voice information, in the previous tone voice information memory 13c (S12). Then, 1 is added to the generating tone number counter 13b (S13), and the note event process is ended.
On the other hand, when it is judged in S3 that the key-depression interval is equal to 20 msec or less (S3: Yes), the current note is specified as a note that is key-depressed by the performer, together with the last note, with an intention for polyphony (key-depressed as a chord). Then, it is judged as to whether or not the mode set at the sound source 14 is the mono mode (S4).
When it is judged in S4 that the mode set at the sound source 14 is the mono mode (S4: Yes), the mode of the sound source 14 is set to the poly mode (S5), and the process proceeds to S6. In other words, when the key-depression interval between the current note and the last note is equal to 20 msec or less, the current note is specified as a note key-depressed by the performer for polyphony together with the last note. Then, the processing in S5 is executed, thereby setting the mode of the sound source 14 to the poly mode. As the mode of the sound source 14 is switched to the poly mode, the sound source 14 can generate a tone corresponding to the current note concurrently with a tone corresponding to the last note. Therefore, the tone corresponding to the current note can be generated in polyphony as intended by the performer.
On the other hand, when it is judged in S4 that the mode set at the sound source 14 is the poly mode (S4: No), the mode of the sound source 14 is not changed, and the process proceeds to S6.
In S6, the vibrato depth stored in the vibrato depth memory 13d is corrected to zero (S6). After the processing in S6, it is judged as to whether or not the polyphony start flag 13a is ON (S7). When the polyphony start flag 13a is ON (S7: Yes), the LFO_Pitch_Depth and LFO_TVA_Depth are set for the voice assigned to the last note (hereafter this voice is referred to as the “last tone voice”) based on the last tone voice information (S8). It is noted that the last tone voice information is stored in the previous tone voice information memory 13c. Further, LFO_Pitch_Depth and LFO_TVA_Depth are set according to the vibrato depth stored in the vibrato depth memory 13d.
The vibrato depth stored in the vibrato depth memory 13d has been corrected to zero by the processing in S6. Accordingly, as the processing in S8 is executed, the vibrato effect added to the tone corresponding to the last note being generated becomes zero. In other words, no vibrato is added to the tone corresponding to the last note.
As described above, the polyphony start flag 13a is set to ON by the processing in S14 when the key-depression interval between the current note and the last note exceeds 20 msec. Therefore, when the key-depression interval between the current note and the last note is equal to 20 msec or less, and the polyphony start flag 13a is ON (S3: Yes, and S7: Yes), the last note can be specified as a note first processed among notes composing polyphony. However, the last note, despite being a note composing polyphony, is judged as No in S3 at the time when the note-on is inputted (in other words, processed as a single tone), and is therefore outputted with a vibrato effect added thereto according to the vibrato depth to be applied to a single tone.
Therefore, when the key-depression interval between the current note and the last note is equal to 20 msec or less, and the polyphony start flag 13a is ON (S3: Yes, and S7: Yes), the current note is specified as a note to be processed second among notes composing polyphony. Then, the vibrato effect for a single note added to the tone that has been processed first and is being generated (in other words, the tone corresponding to the last note) is automatically set to zero. Accordingly, awkwardness in which vibrato is added only to a tone of a note processed first among notes composing polyphony can be eliminated. By this, the characteristic of a violin (a solo musical instrument capable of polyphony) having a tendency in which vibrato is not added at the time of performing polyphony can be imitated to a high level.
After the processing in S8, the polyphony start flag 13a is set to OFF (S9), and the process proceeds to S10. Also, when it is judged in S7 that the polyphony start flag 13a is OFF (S7: No), the current note is a note that is processed third or later among notes composing polyphony, and the vibrato effect to be added to tones being generated as polyphony has already been set to zero. Therefore, the process proceeds to S10 without executing the operations in S8 and S9.
In S10, as the tone generation processing described above, the tone generation instruction is outputted after a vacant voice in the sound source 14 is assigned for the tone generation instruction corresponding to the note-on of the current note (S10). After executing the processing in S10, the LFO_Pitch_Depth and LFO_TVA_Depth corresponding to the vibrato depth stored in the vibrato depth memory 13d are set (S11). When the current note is key-depressed at an interval equal to 20 msec or less from the last note (in other words, S3: Yes), the values of the LFO_Pitch_Depth and LFO_TVA_Depth set for the tone corresponding to the current note are those according to the vibrato depth corrected in S6. Because the vibrato depth is corrected to zero in S6, as described above, the values of the LFO_Pitch_Depth and LFO_TVA_Depth are also set to zero, whereby vibrato is not added to the tone corresponding to the current note.
After the processing in S11, information about the present voice is stored in the previous tone voice information memory 13c as last tone voice information (S12). Then, 1 is added to the generating tone number counter 13b (S13), and the note event process is ended.
Also, when it is judged in S1 that the received note event is a note-off (S1: No), a silencing process according to the received note-off is executed (S18). More specifically, the note-on map (not shown) in the RAM 13 is looked up, and a silencing instruction according to the received note-off is outputted to a corresponding voice. By this, the tone generation corresponding to the note that has been key-released is silenced.
After the processing in S18, one is subtracted from the generating tone number counter 13b (S19), and it is judged as to whether or not the value of the generating tone number counter 13b is a negative value (S20). At this time, when it is judged that the value of the generating tone number counter 13b is a negative value (S20: Yes), the value of the generating tone number counter 13b is set to zero (S21), and the process proceeds to S22. On the other hand, when it is judged that the value of the generating tone number counter 13b is not a negative value (S20: No), the process proceeds to S22.
It is judged in S22 as to whether or not the value of the generating tone number counter 13b is one (S22). When it is judged that the value of the generating tone number counter 13b is not one (S22: No), then the note event process is ended. On the other hand, when it is judged that the value of the generating tone number counter 13b is one (S22: Yes), the vibrato depth stored in the ROM 12 (in other words, the vibrato depth to be applied to a single tone) is stored as an initial value in the vibrato depth memory 13d. By this, the vibrato depth is set (S23).
After the processing in S23, the note-on map (not shown) in the RAM 13 is looked up, and the LFO_Pitch_Depth and LFO_TVA_Depth are set for the voice assigned to the note being generated (S24). Here, the LFO_Pitch_Depth and LFO_TVA_Depth are set according to the vibrato depth stored in the vibrato depth memory 13d. Then, the note event process is ended.
The vibrato depth stored in the vibrato depth memory 13d is set to the vibrato depth to be applied to a single tone by the processing in S23. Therefore, when the processing in S24 is executed, the vibrato effect according to the vibrato depth to be applied to a single tone is added to the tone being generated. In other words, vibrato according to the vibrato depth to be applied to a single tone is added again to the tone being generated.
In other words, when a note-off is received and the value of the generating tone number counter 13b becomes one as a result (S1: No, and S22: Yes), it is specified that, after having key-depressed multiple notes as polyphony in the last and prior notes, the performer has gradually released the keys until only the last one key remains depressed. Then, the vibrato effect according to the vibrato depth to be applied to a single tone is automatically resumed for the remaining tone. By this, the characteristic of a violin (a solo musical instrument capable of polyphony), which tends not to have vibrato added at the time of performing polyphony but to have vibrato added to a single tone, can be imitated in a highly sophisticated manner.
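The note-off path described above (S19-S24) can be sketched as follows; the function and callback names are illustrative assumptions, not the embodiment's actual implementation.

```python
def on_note_off(counter, single_tone_depth, set_all_vibrato):
    """Sketch of S19-S24: decrement the generating-tone counter, clamp it at
    zero, and restore the single-tone vibrato depth when exactly one tone
    remains sounding. Names are assumptions."""
    counter -= 1          # S19: one is subtracted from counter 13b
    if counter < 0:       # S20/S21: a negative value is clamped to zero
        counter = 0
    if counter == 1:      # S22-S24: one tone remains after the release
        # Re-apply the single-tone vibrato depth (the initial value held in
        # ROM 12) to the one voice still sounding.
        set_all_vibrato(single_tone_depth)
    return counter
```

Releasing a chord one key at a time thus re-enables single-tone vibrato exactly when the second-to-last key is released.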
As described above, according to the electronic musical instrument 1 of the described embodiments, it is discriminated, based on the key-depression interval of successive key-depressions (in other words, the input interval between note-ons), whether those successive key-depressions are intended for polyphony or for single tones, and the vibrato effect to be added to the tone is controlled accordingly. Therefore, it is possible to achieve highly sophisticated imitation of the performance characteristic of a solo musical instrument capable of polyphony, such as a violin, based on real-time performance operation by the performer.
In particular, when it is specified, based on the key-depression interval, that the current key-depression is intended by the performer for polyphony, and the sound source 14 is set to the mono mode, the setting of the sound source 14 is changed to the poly mode. In this manner, by discriminating between key-depressions intended for polyphony and key-depressions intended for single tones, the tone generation characteristic of a solo musical instrument capable of polyphony, such as a violin, can be imitated in a highly sophisticated manner.
Also, when it is specified, based on the key-depression interval, that the current key-depression is intended by the performer for a single tone, and the sound source 14 is set in the poly mode, the setting of the sound source 14 is changed to the mono mode. Therefore, when a key-depression intended for a single tone is made by the performer, even while tones based on polyphony are being generated (in other words, while the corresponding keys are still depressed), the tones being generated are forcefully silenced, and only the tone based on the current key-depression is generated. Accordingly, the tone corresponding to the current note can be sounded like a single tone, as intended by the performer. In other words, even when the performer makes a key-depression intended for a single tone before releasing the keys that were depressed for polyphony, tone generation in the manner intended by the performer can be achieved.
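The mode switching described in the two paragraphs above can be summarized in a short sketch; `mode` and `silence_all` are assumed names for the sound source 14 interface, not the embodiment's actual API.

```python
def update_mode(sound_source, is_poly_intended):
    """Sketch of the mono/poly switching: align the sound source's mode with
    the performer's discriminated intent. Attribute names are assumptions."""
    if is_poly_intended and sound_source.mode == "mono":
        # A polyphony-intended key-depression arrived in mono mode.
        sound_source.mode = "poly"
    elif not is_poly_intended and sound_source.mode == "poly":
        # A single-tone key-depression arrived in poly mode: forcefully
        # silence the tones still sounding so only the current tone remains.
        sound_source.silence_all()
        sound_source.mode = "mono"
```

Any key-depression that already matches the current mode leaves the sound source untouched.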
The invention has been described with respect to various embodiments, but the invention is not limited to the embodiments described above, and it can be readily presumed that various changes and improvements can be made to the described embodiments that do not depart from the subject matter of the invention.
For example, in certain of the embodiments described above, the process shown in
In certain of the embodiments described above, the process shown in
Also, in certain of the embodiments described above, when the key-depression interval between the current note and the last note exceeds 20 msec, the mode of the sound source 14 is changed from the poly mode to the mono mode. Instead, the mode of the sound source 14 may be changed from the poly mode to the mono mode when the value of the generating tone number counter 13b becomes one, along with key-release of the keys 2a.
More specifically, in accordance with the modified embodiments, the operations in S15-S17 in the note event process in the present embodiment (see
According to the embodiment above, when a key-depression intended for a single tone is made before the performer releases keys that were depressed for polyphony, the tone based on the current note is generated with vibrato added, together with the tones being generated based on polyphony without vibrato added. Even so, the characteristic of a violin (i.e., a solo musical instrument capable of polyphony), which tends not to have vibrato added at the time of performing polyphony but to have vibrato added to a single tone, can be sufficiently imitated. According to the above embodiments, occasions of mode switching of the sound source 14 can be reduced, which is advantageous in reducing the control load.
Also, in the note event process of the embodiment described above (see
Also, according to the note event process of the present embodiment described above, when the key-depression interval between the current note and the last note is equal to 20 msec or less, the processing in S6 is executed to correct the vibrato depth stored in the vibrato depth memory 13d to zero, whereby vibrato is not added to each of the tones composing polyphony. Instead, in S6, the vibrato depth stored in the vibrato depth memory 13d may be multiplied by a predetermined ratio (for example, 0.05), thereby making the vibrato depth smaller. Even with such a configuration, the vibrato added to each of the tones composing polyphony can be suppressed to a smaller level than the vibrato to be applied to a single tone. Therefore, with such embodiments, the characteristic of a solo musical instrument capable of polyphony, such as a violin, which tends not to have vibrato added at the time of performing polyphony, can be imitated in a highly sophisticated manner.
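The modification above amounts to replacing the zeroing in S6 with a scaling step. A minimal sketch, assuming a stored depth in the range 0.0-1.0 and the example ratio of 0.05 (the constant and function names are illustrative):

```python
VIBRATO_SCALE = 0.05  # example ratio from the text

def corrected_depth(stored_depth, is_polyphony):
    # Instead of forcing the depth to zero in S6, scale it down so a small
    # amount of vibrato survives on notes that compose polyphony.
    return stored_depth * VIBRATO_SCALE if is_polyphony else stored_depth
```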
Also, in the note event process of the embodiment described above, when it is judged in S7 that the polyphony start flag 13a is ON (S7: Yes), the values of the LFO_Pitch_Depth and LFO_TVA_Depth of the last tone voice are modified according to the vibrato depth corrected by the processing in S6. In this case, when a note-on is inputted, only the content of the voice corresponding to the note first processed among the notes composing the polyphony being generated is changed. Instead, each time the number of notes composing the polyphony increases, the value of the vibrato depth corrected by the processing in S6 may be made smaller, and the values of the LFO_Pitch_Depth and LFO_TVA_Depth of the voices corresponding to the notes of the polyphony processed as the last and prior notes may be changed accordingly. By this, the greater the number of tones in the polyphony, the less vibrato is added.
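One possible depth schedule for this modification is sketched below; the falloff factor, its value, and the function name are purely illustrative assumptions, not part of the described embodiment.

```python
def depth_for_poly_count(base_depth, n_notes, falloff=0.5):
    """Hypothetical depth schedule: each additional note in the polyphony
    reduces the vibrato depth applied to all sounding voices by a fixed
    factor, so more simultaneous tones means less vibrato."""
    if n_notes <= 1:
        return base_depth          # single tone keeps the full depth
    return base_depth * (falloff ** (n_notes - 1))
```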
Also, according to the embodiment described above, when the current note composes polyphony together with the last note, the vibrato depth of the current voice and the vibrato depth of the last voice are set to the same value (i.e., zero in the embodiment described above). However, the vibrato depth of the current voice and the vibrato depth of the last voice do not need to be the same value and may be different values, as long as they are smaller than the initial value of the vibrato depth (the vibrato depth stored as fixed value data in the ROM 12).
In certain described embodiments, when it is judged, based on the key-depression interval between the current note and the last note, that the current note composes polyphony together with the last note, and the polyphony start flag 13a is ON (S3: Yes, and S7: Yes), the vibrato depth of the last voice is made smaller. Instead, when it is judged that the current note composes polyphony together with the last note (S3: Yes), the vibrato depth for all of the tones being generated may be made smaller. However, tones being generated with another tone color may be excluded from the change of the vibrato depth. With such a configuration, only the LFO_Pitch_Depth and LFO_TVA_Depth need to be instructed as modulation amounts to the sound source 14, and information concerning voices becomes unnecessary. In this case, for voices in the sound source 14 whose vibrato depth (LFO_Pitch_Depth and LFO_TVA_Depth) has already been set smaller, if the vibrato depth remains the same as before, the processing to change the vibrato depth may be omitted. By this configuration, for voices whose vibrato depth has already been set smaller, the sound source 14 does not make the vibrato depth smaller again.
In certain described embodiments, the previous tone voice information memory 13c stores information indicative of a voice corresponding to a note-on received. In the processing in S8 executed when the current note and the last note compose polyphony, the vibrato depth (LFO_Pitch_Depth and LFO_TVA_Depth) of the last voice is set based on the information indicative of the voice stored in the previous tone voice information memory 13c. Instead, the pitch of a note-on received may be stored in the previous tone voice information memory 13c and, in S8, the vibrato depth of the last tone voice may be set based on the pitch stored in the previous tone voice information memory 13c. In this way, information on voices does not need to be stored.
In the embodiment described above, when it is judged, based on the key-depression interval between the current note and the last note, that the current note composes polyphony together with the last note, and the polyphony start flag 13a is ON (S3: Yes, and S7: Yes), the vibrato depth of the last voice is made smaller. Instead, when it is judged that the current note composes polyphony together with the last note (S3: Yes), the vibrato depth of the last voice may always be made smaller. With such a configuration, the polyphony start flag 13a need not be used. In this case, for voices in the sound source 14 whose vibrato depth (LFO_Pitch_Depth and LFO_TVA_Depth) has already been set smaller, if the vibrato depth remains the same as before, the processing to change the vibrato depth may be omitted. By this configuration, when three or more tones are to be generated as polyphony, for voices whose vibrato depth has already been set smaller, the sound source 14 does not make the vibrato depth smaller again.
Also, in the embodiment described above, in the note event process shown in
Also, the note event process of the embodiment described above is configured such that, when the key-depression interval is equal to 20 msec or less, it is judged, based on the setting of the polyphony start flag 13a, as to whether or not the current note is a note that is processed first among notes composing polyphony. Instead, it can be configured to specify as to whether or not the current note is a note that is processed first among notes composing polyphony, according to whether or not the setting of the sound source 14 is the mono mode. More specifically, in the note event process shown in
In the described embodiments, the tone control device may be mounted on the electronic musical instrument 1 constructed in one piece with the keyboard 2. Alternatively, the tone control device may be configured to be mounted on a sound source module to which a keyboard that outputs note-on and note-off signals can be detachably connected, like the keyboard 2. Yet further, the tone control device may be configured independently from the sound source 14.
Number | Date | Country | Kind |
---|---|---|---|
2011-050107 | Mar 2011 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
4558624 | Tomisawa et al. | Dec 1985 | A |
5272274 | Kimura | Dec 1993 | A |
5763807 | Clynes | Jun 1998 | A |
6946595 | Tamura et al. | Sep 2005 | B2 |
20090249943 | Tanaka et al. | Oct 2009 | A1 |
Number | Date | Country |
---|---|---|
1198797 | Aug 1989 | JP |
2199500 | Aug 1990 | JP |
8286665 | Nov 1996 | JP |
Entry |
---|
English machine translation of JP06-083334 published Mar. 25, 1994 by Yamaha Corp. |
English machine translation of JP06-348267 published Dec. 22, 1994 by Kawai Musical Instr Mfg Co Ltd. |
English machine translation of JP08-286665 published Nov. 1, 1996 by Yamaha Corp. |
“How to Use Vibrato in Electronic Music Production”, Sonic Transfer, [online][retrieved Jan. 15, 2012] http://sonictransfer.com/vibrato-electronic-music-production-tutorial.shtml, pp. 1-5. |
Number | Date | Country | |
---|---|---|---|
20120227573 A1 | Sep 2012 | US |