The present invention relates to tone control apparatus and methods for controlling generation of tones while imparting various types of rendition styles (or articulation) to musical tones, voices or other desired sounds in response to operation by a user, as well as computer programs for such tone generation. More particularly, the present invention relates to an improved tone control apparatus and method, which, in response to operation, by a user, of only a single operator, can control tone generation in real time while imparting the tones with any of a plurality of different release rendition styles (or attack rendition styles) that faithfully express tone color variations specific to natural musical instruments or tone color variations based on various types of articulation, as well as a computer program for such tone generation. The present invention can be extensively applied to not only electronic musical instruments but also all fields of other equipment, apparatus and methods, such as automatic performance apparatus, computers, electronic game apparatus and other multimedia equipment, which have functions of generating tones, voices or other desired sounds.
Today, various apparatus are known which are intended to achieve realistic reproduction and control of various rendition styles etc. that faithfully express tone color variations specific to natural musical instruments or tone color variations based on various types of articulation. Among examples of such apparatus is one that employs a tone waveform control technique commonly known as “SAEM” (Sound Articulation Element Modeling), which is disclosed, for example, in Japanese Patent Application Laid-open Publication No. 2004-78095 corresponding to U.S. Patent Application Publication No. 2004/0055449 A1. In the apparatus employing the SAEM technique, whole waveforms corresponding to various rendition styles are prestored for individual partial sections, such as attack, release and body sections, of a tone, so that the tone can be formed by time-serially combining the prestored waveforms for the partial sections. Note that the term “tone” is used in this specification to refer to not only a musical tone but also a voice or any other type of sound.
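By way of illustration, the following C fragment sketches the general idea of time-serially combining prestored partial-section waveforms. It is a schematic sketch only, not the actual SAEM implementation (which, among other things, smoothly connects the sections), and the function and parameter names are assumptions.

```c
/* A schematic sketch (not the actual SAEM implementation) of forming a
 * tone by time-serially joining prestored partial-section waveforms.
 * The function and parameter names are assumptions for illustration. */
#include <stddef.h>
#include <string.h>

/* Copy the attack, body and release waveforms one after another into out[],
 * which must be large enough to hold na + nb + nr samples. */
static size_t combine_sections(const float *attack, size_t na,
                               const float *body, size_t nb,
                               const float *release, size_t nr,
                               float *out)
{
    memcpy(out, attack, na * sizeof *attack);
    memcpy(out + na, body, nb * sizeof *body);
    memcpy(out + na + nb, release, nr * sizeof *release);
    return na + nb + nr;    /* total number of samples in the combined tone */
}
```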
With the conventionally-known technique, it is possible for the user to control tone generation while imparting tones with rendition styles, by appropriately operating any of a plurality of rendition style designating operators assigned to various rendition styles. For release-related rendition styles (i.e., release rendition styles), for example, rendition style designating operators (e.g., switches and/or pedals), functioning like rendition style switches, are assigned to various different release rendition styles, and generation of a tone can be controlled, through appropriate ON/OFF operation of any one of the rendition style designating operators, such that the tone is silenced (or released) by being imparted with the corresponding release rendition style. Similarly, for attack rendition styles, rendition style designating operators are assigned to various attack rendition styles, and generation of a tone can be controlled, through appropriate ON/OFF operation of any one of the attack rendition style designating operators, such that the tone starts to be audibly generated (i.e., sounded) by being imparted with the corresponding attack rendition style. Namely, in the case where a release rendition style or attack rendition style is imparted by identifying only the ON or OFF state of the corresponding rendition style designating operator, there must be provided a multiplicity of operators for selecting any desired one of a plurality of different release rendition styles, and thus the user has to appropriately select and operate a necessary one of the multiplicity of rendition style designating operators. However, it is extremely difficult for the user to control generation of tones while selecting and operating, at appropriate timing, the necessary rendition style designating operators, in addition to executing performance operation by operating a performance operator unit, such as a keyboard. Consequently, with the conventionally-known technique, it has been difficult for the user to play the performance operator unit while imparting release or attack rendition styles in real time.
In view of the foregoing, it is an object of the present invention to provide an improved tone control apparatus, method and program which allow a user to control generation of a tone with an appropriate release rendition style (or attack rendition style) reflected therein while readily controlling any one of a plurality of release rendition styles (or attack rendition styles) in real time.
According to a first aspect of the present invention, there is provided a tone control apparatus, which comprises: a performance device that instructs generation of a tone; an operator operable by a human player; a storage device that stores one or more rendition style parameters each for realizing a particular release rendition style in a release section of a tone; a detection section that, on the basis of an output of the operator, detects an operation-related time length of the operator when the operator has been operated in a predetermined manner; a selection section that, on the basis of the operation-related time length detected by the detection section, selects any one of the rendition style parameters from the storage device; and a tone generation control section that generates a tone in accordance with a tone generation instruction by the performance device and controls the generated tone to be silenced with a characteristic of a release rendition style corresponding to the rendition style parameter selected by the selection section.
In the present invention, a detection is made, on the basis of the output of the operator, of an operation-related time length of the operator when the operator has been operated in a predetermined manner, and any one of the rendition style parameters is selected from the storage device on the basis of the detected operation-related time length. The storage device has prestored therein one or more rendition style parameters each intended to realize a particular rendition style in a release section of a tone. Then, control is performed to silence a generated tone in accordance with the release rendition style corresponding to the selected rendition style parameter. Namely, the tone, having been started to be generated by the performance device, is silenced (released) in accordance with the rendition style parameter. In the aforementioned manner, any one of the plurality of rendition style parameters is selected in accordance with the detected operation-related time length of the operator, and the tone being generated is silenced on the basis of the selected rendition style parameter. Consequently, by only manipulating the single operator, the user is allowed to control generation of a tone with an appropriate release rendition style reflected therein while readily controlling in real time any one of the plurality of release rendition styles.
According to a second aspect of the present invention, there is provided a tone generation apparatus, which comprises: a performance device that instructs generation of a tone; an operator operable by a human player; a storage device that stores one or more rendition style parameters each for realizing a particular release rendition style in a release section of a tone; a generation section that, on the basis of an output of the operator, generates velocity data corresponding to at least one of turning-on operation and turning-off operation of the operator; a selection section that, on the basis of the velocity data generated by the generation section, selects any one of the rendition style parameters from the storage device; and a tone generation control section that generates a tone in accordance with a tone generation instruction by the performance device and controls the generated tone to be silenced with a characteristic of a release rendition style corresponding to the rendition style parameter selected by the selection section.
In the present invention, velocity data corresponding to turning-on operation or turning-off operation of the operator is generated on the basis of the output of the operator, and any one of the rendition style parameters is selected from the storage device on the basis of the generated velocity data. In the aforementioned manner, any one of the plurality of rendition style parameters is selected in accordance with ON velocity data or OFF velocity data of the operator, and the tone being generated is silenced on the basis of the selected rendition style parameter. Consequently, by only manipulating the single operator, the user is allowed to control generation of a tone with an appropriate release rendition style reflected therein while readily controlling in real time any one of the plurality of release rendition styles.
According to a third aspect of the present invention, there is provided a tone generation apparatus, which comprises: a performance device that instructs generation of a tone; an operator operable by a human player; a storage device that stores one or more rendition style parameters each for realizing a particular attack rendition style in an attack section of a tone; a generation section that, on the basis of an output of the operator, generates velocity data corresponding to turning-on operation of the operator; a selection section that, on the basis of the velocity data generated by the generation section, selects any one of the rendition style parameters from the storage device; and a tone generation control section that controls a tone, corresponding to a tone generation instruction by the performance device, to start to be generated with a characteristic of the attack rendition style corresponding to the rendition style parameter selected by the selection section. Consequently, by only manipulating the single operator, the user is allowed to control generation of a tone with an appropriate attack rendition style reflected therein while readily controlling in real time any one of the plurality of attack rendition styles.
Thus, the present invention allows the user to select an appropriate release or attack rendition style, from among the plurality of release or attack rendition styles, by just operating the single operator. As a result, the user can control in real time a plurality of release or attack rendition styles faithfully representing tone color variations specific to natural musical instruments or tone color variations based on various types of articulation, and thereby control generation of a tone with an appropriate release or attack rendition style reflected therein.
The present invention may be constructed and implemented not only as the apparatus invention as discussed above but also as a method invention. Also, the present invention may be arranged and implemented as a software program for execution by a processor such as a computer or DSP, as well as a storage medium storing such a software program. Further, the processor used in the present invention may comprise a dedicated processor with dedicated logic built in hardware, not to mention a computer or other general-purpose type processor capable of running a desired software program.
The following will describe embodiments of the present invention, but it should be appreciated that the present invention is not limited to the described embodiments and various modifications of the invention are possible without departing from the basic principles. The scope of the present invention is therefore to be determined solely by the appended claims.
For better understanding of the objects and other features of the present invention, its preferred embodiments will be described herein below in greater detail with reference to the accompanying drawings, in which:
In the electronic musical instrument of
The ROM 2 stores therein various programs to be executed by the CPU 1 and various data. The RAM 3 is used as a working memory for temporarily storing various data generated as the CPU 1 executes predetermined programs, and as a memory for storing a currently-executed program and data related to the currently-executed program. Predetermined address regions of the RAM 3 are allocated to various functions and used as various registers, flags, tables, memories, etc. The external storage device 4 stores therein a parameter table (see
The performance operator unit 5 is, for example, a keyboard including a plurality of keys operable to select pitches of tones to be generated and key switches corresponding to the keys. The performance operator unit (keyboard) 5 generates performance information for a tone performance. Namely, for each of the keys, the performance operator unit 5 generates keyboard event information, such as key-on/key-off event information and note information, in response to ON/OFF operation, by the user, of the key. It should be obvious that the performance operator unit 5 may be of any other type than the keyboard type, such as a neck-like device having tone-pitch-selecting strings provided thereon. The performance controlling operation pedal 6 is an operator operable by the user using, for example, a foot; in the instant embodiment, the pedal 6 functions as a rendition style selecting operator for selecting a release rendition style to be used for silencing a tone. The pedal 6 generates operator event information, such as pedal-on event information responsive to turning-on (pedal-on) operation by the user, pedal-off event information responsive to turning-off (pedal-off) operation by the user and a velocity value corresponding to a velocity or acceleration with which the pedal 6 is stepped on. The other operator unit 7 includes various operators for changing or entering rendition style parameters, general-purpose switches, etc. The other operator unit 7 also includes various other operators, such as a numeric keypad, character (text)-data entering keyboard and mouse, for selecting, setting and controlling a tone pitch, tone color, effect, etc. Note that part of the keyboard 5 may be used as operators of the other operator unit 7. The display unit 8 comprises a liquid crystal display (LCD) panel, CRT (Cathode Ray Tube) and/or the like, which displays selected rendition style parameters and control states of the CPU 1.
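For illustration, the keyboard event information and operator event information described above might be represented by records such as the following. This is a minimal C sketch; the type and field names are hypothetical, not part of the disclosed apparatus.

```c
#include <stdbool.h>

/* Keyboard event information: key-on/key-off state plus the note. */
typedef struct {
    bool on;        /* true for a key-on event, false for a key-off event */
    int  note;      /* note information assigned to the operated key */
} KeyboardEvent;

/* Operator (pedal) event information: pedal-on/pedal-off plus velocity. */
typedef struct {
    bool on;        /* true for a pedal-on event, false for a pedal-off event */
    int  velocity;  /* value derived from the stepping velocity/acceleration */
} PedalEvent;
```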
The tone generator 9, which is capable of simultaneously generating tone signals in a plurality of tone generation channels, receives performance information supplied via the communication bus 1D and synthesizes a tone on the basis of the received performance information to generate a tone signal. For example, once a key-on signal is received in response to ON (i.e., depressing) operation, by the user, of a key on the keyboard 5, the tone generator 9 starts generation of a tone at a tone pitch corresponding to the depressed key. Further, once a key-off signal is received in response to OFF (i.e., releasing) operation, by the user, of a key on the keyboard 5, the tone generator 9 silences a tone of a tone pitch corresponding to the released key. Also, in the instant embodiment, the tone generator 9 can silence a tone in accordance with a supplied rendition style parameter. Each tone signal generated by the tone generator 9 is subjected to predetermined digital signal processing performed by a not-shown effect circuit etc., and the tone signal having undergone the digital signal processing is supplied to a sound system 9A including an amplifier, speaker, etc. for audible generation or sounding. The tone generator 9 and sound system 9A may be constructed in any conventionally-known manner. For example, the tone generator 9 may employ any of the conventionally-known tone synthesis methods, such as the FM, PCM, physical model and formant synthesis methods. Further, the tone generator 9 may be implemented by either dedicated hardware or software processing performed by the CPU 1.
The interface 10, which is an input/output interface for communicating performance information between the electronic musical instrument and external equipment (not shown), is, for example, a MIDI interface for communicating performance information of the MIDI standard (i.e., MIDI information) between the electronic musical instrument and external MIDI equipment or other MIDI equipment. In this case, the other MIDI equipment may be of any type (or operating type), such as the keyboard type, guitar type, wind instrument type, percussion instrument type or gesture type, as long as it can generate MIDI information in response to operation by a user of the MIDI equipment. The MIDI interface may be a general-purpose interface, such as RS-232C, USB (Universal Serial Bus) or IEEE 1394, rather than a dedicated MIDI interface, in which case other data than MIDI information may be communicated at the same time. In the case where a general-purpose interface as mentioned above is used as the MIDI interface, the other MIDI equipment may be arranged to transmit and receive other data than MIDI information. Also, the interface 10 may be a communication interface connected to a wired or wireless communication network (not shown), such as a LAN, the Internet or a telephone line network, via which the interface 10 is connected to an external server computer or the like so as to input a desired control program, various data, etc. to the electronic musical instrument. Such a communication interface may be capable of both wired and wireless communication rather than just one of wired and wireless communication.
The following paragraphs describe the parameter table stored in the ROM 2, RAM 3, external storage device 4 or the like.
In order to realize a variety of release rendition styles, the parameter table is created by compiling rendition style parameters for the release rendition styles into a database and storing the databased parameters in the ROM 2, external storage device 4 or the like. As illustrated in
The parameter sets corresponding to the various types of rendition styles each comprise a plurality of rendition style parameters corresponding to various tone pitches, such as “C1”, “C#1” and “D1”. Namely, even in each of the rendition styles classified in the above-described manner, there are included a plurality of different variations according to the width over which to lower the pitch, the pitch varying speed, the performance intensity, etc. Thus, the illustrated example of
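One possible in-memory layout of such a parameter table is sketched below in C. This is a minimal sketch under stated assumptions: the struct names, the per-note indexing and the concrete parameter fields (fall width, fall speed, intensity) are illustrative stand-ins for whatever the actual table stores.

```c
/* A minimal sketch of one possible parameter table layout; the struct
 * names, per-note indexing and parameter fields are assumptions. */
#include <string.h>

#define NUM_NOTES 128   /* one entry per note: "C1", "C#1", "D1", ... */

/* One rendition style parameter: per-note data realizing the style. */
typedef struct {
    float fall_width;   /* width over which to lower the pitch */
    float fall_speed;   /* pitch varying speed */
    float intensity;    /* performance intensity */
} StyleParam;

/* One parameter set: a rendition style ID plus one parameter per pitch. */
typedef struct {
    const char *style_id;           /* e.g. "FastFall", "SlowFall" */
    StyleParam  per_note[NUM_NOTES];
} ParamSet;

/* The parameter table: one parameter set per release rendition style type. */
static ParamSet param_table[] = {
    { .style_id = "FastFall" },
    { .style_id = "SlowFall" },
};

/* Select one rendition style parameter by rendition style ID and note. */
static const StyleParam *lookup_param(const char *style_id, int note)
{
    for (size_t i = 0; i < sizeof param_table / sizeof param_table[0]; i++)
        if (strcmp(param_table[i].style_id, style_id) == 0)
            return &param_table[i].per_note[note];
    return NULL;    /* unknown rendition style ID */
}
```

A caller would then, for example, invoke lookup_param("SlowFall", note) to obtain the one rendition style parameter corresponding to the note in question.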
Next, a general description will be given about a first embodiment of the tone control processing performed in the electronic musical instrument of
In
The operator information output section B1 outputs, to an operator-off detection section B2 and a time length detection section C1, various operator event information (operation information), such as pedal-on event information generated in response to turning-on operation of the pedal 6 and pedal-off event information generated in response to turning-off operation of the pedal 6. The time length detection section C1 detects a predetermined ON-to-OFF time length on the basis of the pedal-on and pedal-off event information output from the operator information output section B1. Here, the “ON-to-OFF time length” means a time length from the time when the pedal 6 was turned on (i.e., the turned-on time of the pedal, or a time when a pedal-on event occurred) to the time when the pedal 6 was turned off (i.e., the turned-off time of the pedal, or a time when a pedal-off event occurred); namely, the ON-to-OFF time length represents an operation time length of the pedal 6. The ON-to-OFF time length detected by the time length detection section C1 is supplied to a release-rendition-style determination section C2, which in turn determines, on the basis of the supplied ON-to-OFF time length, a particular rendition style ID for designating a parameter set of a release rendition style type to be used. The release-rendition-style parameter selection section C3 selects, on the basis of the determined rendition style ID and note information supplied from the keyboard note detection section A3, one rendition style parameter, corresponding to the note, from the parameter set of the release rendition style type corresponding to the determined rendition style ID, and it then supplies the selected rendition style parameter to the tone synthesis section D. Namely, the section C3 determines, in accordance with the input information, a rendition style parameter for realizing a release rendition style and supplies the determined rendition style parameter to the tone synthesis section D.
The operator-off detection section B2 extracts only the pedal-off event information out of the operator event information output from the operator information output section B1, and it supplies the extracted pedal-off event information to the tone synthesis section D. If the tone synthesis section D has received the pedal-off event information from the operator-off detection section B2 before receiving the key-off event information from the keyboard ON/OFF detection section A2, it silences the currently generated tone while, in accordance with the rendition style parameter selected by the release-rendition-style parameter selection section C3, reflecting the corresponding release rendition style in the tone. Namely, the tone synthesis section D has a tone generation function for starting audible generation of a tone in response to user's depressing operation of a key on the keyboard, a no-rendition-style-imparted silencing function for silencing a currently-generated tone, in response to user's releasing operation of a key on the keyboard, with a standard release without any release rendition style being imparted to the tone, and a rendition-style-imparted silencing function for silencing, in response to user's turning-off operation of the pedal 6 during a key-on period following depression of a key, the currently-generated tone while reflecting a release rendition style in the tone.
In the electronic musical instrument of
First, at step S1, an initialization process is performed; for example, in this initialization, the timer for counting predetermined sampling times is reset to “0” (zero), and a key status, provided for each of the keys to determine whether an operational state of the key is to be reflected or ignored (only in the case of “monophonic” tone generation), is set to “OFF”. The initialization process may of course include other operations. At following step S2, a detection is made of various keyboard events generated in response to user's operation of the keyboard; the various keyboard events include a key-on event generated in response to depressing operation of a key or a key-off event generated in response to releasing operation of a key, and a note assigned to the operated key. At step S3, a detection is made of operator events generated in response to user's operation of the predetermined pedal 6. The operator events include a pedal-on event generated in response to user's turning-on operation of the pedal 6 or a pedal-off event generated in response to user's turning-off operation of the pedal 6, and a velocity value corresponding to a pushing (or moving) velocity or acceleration of the pedal 6.
At next step S4, a determination is made as to whether the keyboard event detected at step S2 above is a key-on event. If the keyboard event detected at step S2 is a key-on event (YES determination at step S4), the key status corresponding to the key, of which the key-on event has been detected, is set to “ON” (step S5). If the key status corresponding to the key is set at “ON”, keyboard events generated in response to operation of the key are reflected, while, if the key status is set at “OFF”, keyboard events generated in response to operation of the key are ignored without being reflected. In the instant embodiment, even when a key whose key status is set at “OFF” has been released, the key-off event generated in response to the releasing operation is not reflected, and thus the tone corresponding to the releasing operation is not silenced (see steps S18-S19 to be later described). At step S6, the note information generated along with the key-on event information as the keyboard event information is stored. At step S7, synthesis of a tone is started on the basis of the key-on event information and note information, so that audible generation of the tone at the corresponding pitch is initiated. At next step S8, a determination is made as to whether the operator event detected at step S3 above is a pedal-on event. With a YES determination at step S8, an “ON” time is set to the value of the timer count at the time when the pedal-on event occurred (step S9). This “ON” time is used to calculate the ON-to-OFF time length at step S14 as will be later described. At step S10, the time is caused to advance by the sampling time (e.g., Δt). At next step S11, the sampling time (Δt) is added to the current count of the timer. Then, the processing reverts to step S2 to repeat the operations at and after step S2.
If the operator event is not a pedal-on event as determined at step S8 (NO determination at step S8), a further determination is made at step S12 as to whether the operator event is a pedal-off event. If the operator event is a pedal-off event (YES determination at step S12), it is further determined, at step S13, whether the key status is currently set at “ON”. If the operator event is not a pedal-off event (NO determination at step S12), or if the key status is not currently set at “ON” (NO determination at step S13), the processing jumps to step S10. If, on the other hand, the key status is currently set at “ON” (YES determination at step S13), the ON-to-OFF time length is calculated at step S14. In the instant embodiment, the “ON-to-OFF time length” means a time length from the time when the pedal 6 was turned on to the time when the pedal 6 was turned off. Namely, the ON-to-OFF time length is calculated by subtracting the “ON” time, having been set at the turned-on time of the pedal 6, from the timer count at the turned-off time of the pedal 6 (see step S9). At step S15, a “rendition style parameter determination process” is performed on the basis of the calculated ON-to-OFF time length and the stored note information (see step S6 above). In this “rendition style parameter determination process”, as will be later detailed, one parameter set for a release rendition style type to be used is selected, on the basis of the ON-to-OFF time length, from the parameter table, and also one rendition style parameter is selected, on the basis of the note information, from among the multiplicity of rendition style parameters included in the selected parameter set. At step S16, the currently-generated (i.e., currently-sounding) tone is silenced in accordance with the determined rendition style parameter. At that time, control may be performed to smoothly generate a section of the tone to which the release rendition style is connected, e.g. by generating a separate tone, corresponding to the determined rendition style parameter, from the currently-generated tone and cross-fade synthesizing these two tones. Such a waveform connection may be performed using any other method than the cross-fade synthesis. At step S17, the key status is set to “OFF”. Namely, because the tone generated in response to the depressing operation of the key has already been silenced with the release rendition style, the key status is set to “OFF” so as to prevent silencing control of a tone from being performed in response to subsequent releasing operation of the key; that is, the control responsive to the releasing operation of the key is disabled. Following step S17, the processing reverts to step S10.
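The flow of steps S1 through S20 can be summarized by the following C sketch. It is a hypothetical condensation, not the embodiment's actual code: event detection and the tone synthesis section D are stubbed out with prints, a single fixed note stands in for the stored note information, and the one-second FastFall/SlowFall rule detailed below is inlined.

```c
/* A minimal, hypothetical sketch of the first embodiment's control flow
 * (steps S1-S20); names and the event representation are assumptions. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { EV_NONE, EV_KEY_ON, EV_KEY_OFF, EV_PEDAL_ON, EV_PEDAL_OFF } Event;

static void run_tone_control(const Event *events, int n, double dt)
{
    double timer = 0.0, on_time = 0.0;   /* S1: initialization */
    bool key_status = false;             /* key status starts "OFF" */
    int note = -1;

    for (int i = 0; i < n; i++) {        /* S2/S3: event detection */
        switch (events[i]) {
        case EV_KEY_ON:                  /* S4-S7: start the tone */
            key_status = true;
            note = 60;                   /* stand-in for the stored note */
            printf("start tone, note %d\n", note);
            break;
        case EV_PEDAL_ON:                /* S8-S9: latch the "ON" time */
            on_time = timer;
            break;
        case EV_PEDAL_OFF:               /* S12-S17 */
            if (key_status) {
                double on_to_off = timer - on_time;            /* S14 */
                printf("silence with %s release\n",            /* S15-S16 */
                       on_to_off < 1.0 ? "FastFall" : "SlowFall");
                key_status = false;      /* S17: later key-off is ignored */
            }
            break;
        case EV_KEY_OFF:                 /* S18-S20: standard release */
            if (key_status) {
                printf("silence with standard release\n");
                key_status = false;
            }
            break;
        default:
            break;
        }
        timer += dt;                     /* S10-S11: advance by delta-t */
    }
}
```

Under these assumptions, feeding the function the sequence key-on, pedal-on, pedal-off with less than one second elapsing between pedal-on and pedal-off would silence the tone with the “FastFall” parameter set.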
If the keyboard event detected at step S2 is not a key-on event (NO determination at step S4), it is further determined at step S18 whether the detected keyboard event is a key-off event. If the detected keyboard event is not a key-off event (NO determination at step S18), the processing goes to step S8. If the detected keyboard event is a key-off event (YES determination at step S18), a determination is made at step S19 as to whether the key status is currently set at “ON”. If the key status is not currently set at “ON” (NO determination at step S19), the processing jumps to step S10. If, on the other hand, the key status is currently set at “ON” (YES determination at step S19), then a rendition style parameter is set at step S20 for realizing a standard, default release with no rendition style imparted, and then the processing goes to step S16. Namely, if no rendition style parameter corresponding to a release rendition style has been supplied, e.g. if a normal key-off event is input with no operation of the pedal 6, a rendition style parameter is automatically set so as to silence the corresponding tone with a standard release operation.
The following paragraphs describe the “rendition style parameter determination process” carried out in the above-described “tone control processing” (see step S15 of
First, at step S21, a determination is made as to whether the ON-to-OFF time length is shorter than a predetermined time (e.g., one second). If the ON-to-OFF time length is shorter than the predetermined time (YES determination at step S21), a parameter set for realizing a fast-fall rendition style with rendition style ID “FastFall” assigned thereto is selected from the parameter table (step S22). If, on the other hand, the ON-to-OFF time length is equal to or longer than the predetermined time (NO determination at step S21), a parameter set for realizing a slow-fall rendition style with rendition style ID “SlowFall” assigned thereto is selected from the parameter table (step S23). At step S24, a release rendition style to be applied is determined by selecting one rendition style parameter, corresponding to the note in question, from the selected parameter set.
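In C, the determination of steps S21 through S23 reduces to a single threshold comparison. The sketch below assumes the one-second threshold given above and returns the selected rendition style ID as a string; step S24, picking the per-note parameter from the selected set (e.g. via the lookup_param sketch given earlier), is left to the caller.

```c
/* Steps S21-S23: select the parameter set by ON-to-OFF time length.
 * The one-second threshold follows the example in the text. */
static const char *determine_release_style(double on_to_off_seconds)
{
    if (on_to_off_seconds < 1.0)   /* S21 YES -> S22 */
        return "FastFall";
    return "SlowFall";             /* S21 NO  -> S23 */
}
```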
In the above-described manner, the user can control tones while controlling in real time a plurality of release rendition styles, by just operating the single pedal 6. Here, specific examples of tone control based on any one of the plurality of release rendition styles corresponding to operation of the pedal 6 will be described, with reference to
In section (a) of
In section (b) of
In section (c) of
In the above-described embodiment, a time length from the time when the pedal 6 was turned on to the time when the pedal 6 was turned off is calculated as the ON-to-OFF time length, and a release rendition style to be imparted or applied is determined on the basis of the ON-to-OFF time length. In an alternative, a time length from the later one of the time when the pedal 6 was turned on (i.e., when an operator-on event was generated) and the time when a key was depressed (i.e., when a key-on event was generated) to the time when the pedal 6 was turned off may be set as the ON-to-OFF time length. In such a case, key-on event information, generated in response to the depression of the key, is output from the keyboard ON/OFF detection section A2 to the time length detection section C1 (see a dotted-line arrow of
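Under the alternative just described, the measured interval might be computed as follows (a hypothetical helper; the parameter names are assumptions):

```c
/* Alternative ON-to-OFF length: measured from the later of the
 * pedal-on time and the key-on time to the pedal-off time. */
static double alt_on_to_off_length(double pedal_on_time, double key_on_time,
                                   double pedal_off_time)
{
    double start = (pedal_on_time > key_on_time) ? pedal_on_time : key_on_time;
    return pedal_off_time - start;
}
```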
Namely, in the above-described first embodiment of the tone control apparatus, tone generation control is performed such that a tone, having started to be audibly generated on the basis of a key-on event generated in response to depressing operation of a key, is silenced on the basis of a key-off event generated in response to releasing operation of the key. Also, when the pedal 6 has been operated before the releasing operation of the key, an appropriate one of a plurality of release rendition styles is imparted to the tone, in response to the pedal operation, so as to silence the sounding tone in accordance with the release rendition style. Thus, by only operating the single pedal 6, the user can control generation of a tone while controlling in real time any one of the plurality of release rendition styles faithfully representing tone color variations specific to natural musical instruments or tone color variations based on various types of articulation. Further, the tone control apparatus, which performs the tone generation control to silence the generated tone by imparting an appropriate one of the plurality of release rendition styles, can impart a release rendition style having a long fall-down even in a performance where the time from a key-on event to a key-off event is short. Furthermore, the first embodiment of the tone control apparatus is very advantageous in that it can be extensively applied to all types of tone generators without being influenced by the types of the tone generators.
Whereas the first embodiment of the tone control apparatus has been described as employing the pedal 6 as the rendition style selecting operator, the present invention is not so limited; for example, a dedicated switch may be assigned as the rendition style selecting operator, or any one of the keys on the keyboard may be assigned as the rendition style selecting operator. Namely, the rendition style selecting operator may be an ordinary panel switch or sustain pedal capable of detecting at least two values (i.e., ON and OFF values). Further, in a case where an operator, such as a volume control, which outputs an analog value, is assigned as the rendition style selecting operator, the output analog value is binarized as necessary.
Further, whereas the first embodiment of the tone control apparatus has been described as selecting either the fast-fall rendition style or the slow-fall rendition style as a type of the release rendition style, it may of course select another release-related rendition style type, such as a medium-fall rendition style, from among the plurality of release rendition styles.
Furthermore, whereas the first embodiment of the tone control apparatus has been described as setting an ON-to-OFF time of the pedal 6 as the operating time length and selecting a release rendition style on the basis of the operating time length of the pedal 6, the present invention is not so limited; for example, an ON-to-ON time, OFF-to-OFF time or any other suitably-measured time interval of the pedal 6 or other operator 7 may be set as the operating time length, and a release rendition style may be selected on the basis of that operating time length.
Furthermore, although the first embodiment of the tone control apparatus has been described in relation to the case where a selected release rendition style is merely imparted to a generated tone to silence the tone, the present invention is not so limited; of course, a plurality of release rendition styles may be imparted, in response to operation of the pedal, to a series of tones when these successive tones are to be silenced.
In the case where the polyphonic tone generation is employed, a same release rendition style may be imparted compulsorily to all currently-generated tones, in response to turning-off of the pedal, so as to silence all of the currently-generated tones, as sketched below. In the case where the monophonic tone generation with a single output track is employed such that tones are generated at pitches corresponding to sequentially-generated note information, the tone pitch to be sounded is replaced with the note of each newly-generated keyboard event, and the tone of the note sounding at the time of turning-off of the pedal may be imparted with a release rendition style so as to be silenced.
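For the polyphonic case, the compulsory impartment might look like the following C sketch; the voice table and helper name are assumptions, and the actual silencing is stubbed with a print.

```c
/* A sketch of the polyphonic case: on turning-off of the pedal, the same
 * selected release rendition style is compulsorily applied to every
 * sounding voice. Types and names are assumptions for illustration. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { bool active; int note; } Voice;

static void release_all_voices(Voice voices[], int n, const char *style_id)
{
    for (int i = 0; i < n; i++) {
        if (voices[i].active) {
            printf("silence note %d with %s\n", voices[i].note, style_id);
            voices[i].active = false;
        }
    }
}
```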
Next, a description will be made about a second embodiment of the present invention. The tone control apparatus in accordance with the second embodiment of the present invention performs generation control of individual tones such that a tone, having started to be generated in response to turning-on (depressing) operation of the keyboard (performance operator unit), is silenced (released) while being imparted with an appropriate release rendition style selected from among a plurality of different release rendition styles, or such that audible generation (or sounding) of a tone is started with an appropriate attack rendition style selected from among a plurality of different attack rendition styles. In the second embodiment of the tone control apparatus too, the general hardware setup as shown in
First, only differences of the second embodiment of the tone control apparatus from the first embodiment will be briefed below. The various processing performed by the CPU 1 in the second embodiment includes “tone control processing for a release” (see
In the second embodiment, the parameter tables stored in the ROM 2, RAM 3, external storage device 4 or the like are of generally the same data format as shown in
Next, a description will be given about the second embodiment of the tone control processing performed in the electronic musical instrument of
First, a general outline is given about the tone control processing for impartment of a release rendition style. In
Next, a general outline is given about the tone control processing for impartment of an attack rendition style. In
The following paragraphs describe an example of the tone control processing for a release rendition style carried out in the second embodiment, with reference to a flow chart of
When an operator event has been detected, the processing of
If, on the other hand, the key status is currently set at “ON” (YES determination at step S13), an OFF velocity value is detected at step S25; this OFF velocity value is detected, for example, from a moving velocity, acceleration, etc. of the pedal 6 when the pedal 6 has been turned off. A “rendition style parameter determination process for a release” is performed at step S15a on the basis of the detected OFF velocity value and the stored note information (see step S6 above). In this “rendition style parameter determination process for a release”, as will be later detailed, one parameter set of a release rendition style type to be used is selected, on the basis of the OFF velocity value, from the parameter table, and also one rendition style parameter is selected, on the basis of the note information, from among a multiplicity of rendition style parameters included in the selected parameter set. Then, an operation of step S16 is performed in the same manner as at step S16 of
The following paragraphs describe the “rendition style parameter determination process for a release” carried out in the above-described “tone control processing for a release” (see step S15a of
First, at step S26, a determination is made as to whether or not the velocity value (the OFF velocity value in this case) is greater than a predetermined value (e.g., 64). If the velocity value is greater than the predetermined value “64” (YES determination at step S26), the process goes to step S22, where, in the same manner as noted earlier, a parameter set for realizing a fast-fall rendition style with rendition style ID “FastFall” assigned thereto is selected from the parameter table. If, on the other hand, the velocity value is equal to or smaller than the predetermined value (NO determination at step S26), the process goes to step S23, where, in the same manner as noted earlier, a parameter set for realizing a slow-fall rendition style with rendition style ID “SlowFall” assigned thereto is selected from the parameter table.
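As with the time-length variant, the velocity-based determination of step S26 is a single threshold test. The sketch below assumes a MIDI-style 0-127 velocity value and the threshold of 64 given above:

```c
/* Step S26 with steps S22/S23: select the parameter set by the OFF
 * (or ON) velocity value, assumed here to be a 0-127 MIDI-style value. */
static const char *determine_release_style_by_velocity(int velocity)
{
    if (velocity > 64)     /* S26 YES -> S22: quicker, more forceful release */
        return "FastFall";
    return "SlowFall";     /* S26 NO  -> S23: slower, gentler release */
}
```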
Whereas the “tone control processing for a release” has been described above as selecting a rendition style parameter on the basis of an OFF velocity value corresponding to turning-off operation of the pedal 6, the selection of a rendition style parameter may be made on the basis of an ON velocity value corresponding to turning-on operation of the pedal 6. In such a case, the “tone control processing for a release” is modified in such a manner that step S12 determines whether the operator event detected at step S3 is a pedal-on event, step S25 detects an ON velocity value and step S26 determines whether or not the ON velocity value is greater than a predetermined value.
In the above-described manner, the user can control tones while controlling in real time a plurality of release rendition styles, by just operating the single pedal 6 with appropriately-adjusted forces. Here, specific examples of generation control of tones based on a plurality of release rendition styles corresponding to operation of the pedal 6 will be described, with reference to
Now, the various examples of the tone generation control based on turning-off operation of the pedal 6, illustratively shown in
In section (b) of
In section (c) of
Next, the tone generation control based on turning-on operation of the pedal 6 will be described below, with reference to
As seen from section (b) of
Referring to section (c) of
The following paragraphs describe the “tone control processing for an attack” for selecting, from among a plurality of attack rendition styles, an attack rendition style to be imparted in response to operation of the pedal 6 and starting audible generation of a tone with the selected attack rendition style imparted thereto.
First, at step S31, an initialization process is performed; for example, in this initialization, the timer for counting predetermined sampling times is reset to “0” (zero), and a pedal status, provided for determining whether an operational state of the pedal is to be reflected or ignored, is set to “OFF”. At following step S32, a detection is made of various keyboard events generated in response to user's operation of the keyboard. At step S33, a detection is made of an operator event generated in response to user's operation of the predetermined pedal 6. At next step S34, a determination is made as to whether the detected operator event is a pedal-on event. If the detected operator event is a pedal-on event (YES determination at step S34), the pedal status is set to “ON” (step S35). At next step S36, an ON velocity value is detected; this ON velocity value is detected, for example, on the basis of a moving (pushing) velocity, acceleration, etc. of the pedal 6 when the pedal 6 has been turned on. If, on the other hand, the detected operator event is not a pedal-on event but a pedal-off event (NO determination at step S34 and YES determination at step S37), the pedal status is set to “OFF” (step S38). When the pedal status is set at “ON”, the operator event generated by user's operation of the pedal 6 is reflected, while, when the pedal status is set at “OFF”, the operator event generated by user's operation of the pedal 6 is ignored without being reflected.
At next step S39, a determination is made as to whether the detected keyboard event is a key-on event. If the detected keyboard event has been determined to be a key-on event (YES determination at step S39), the note information generated along with the key-on event information as the keyboard event information is stored at step S40. At step S41, it is determined whether the pedal status is currently set at “ON”. If the pedal status is currently set at “ON” (YES determination at step S41), a “rendition style parameter determination process for an attack” is performed at step S42. The “rendition style parameter determination process for an attack” may be one obtained by appropriately modifying the rendition style parameter determination process for a release of
If the pedal status is not currently set at “ON” (NO determination at step S41), then a rendition style parameter for realizing a standard, default attack with no rendition style imparted thereto is set at step S43, and then the process moves on to step S44. Namely, when no rendition style parameter corresponding to an attack rendition style has been given, e.g. when normal key-on event information has been input with no pedal operation involved, a rendition style parameter is set such that audible generation of a tone is started with a standard attack. At following step S44, generation of a tone is started in accordance with the determined rendition style parameter. If the detected keyboard event is not a key-on event but a key-off event (NO determination at step S39 and YES determination at step S45), then the tone is silenced at step S46. At step S47, the time is caused to advance by the sampling time (e.g., Δt). At next step S48, the sampling time (Δt) is added to the current count of the timer.
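The attack-side flow of steps S31 through S48 can likewise be condensed into a C sketch. This is a hypothetical rendering: the attack rendition style IDs (“BendUpFast”, “BendUpSlow”, “NormalAttack”) are invented placeholders, the velocity threshold mirrors the release case, and tone generation is stubbed with prints.

```c
/* A hypothetical sketch of the attack-side flow (steps S31-S48);
 * style IDs and helper names are assumptions, not the patent's code. */
#include <stdio.h>
#include <stdbool.h>

static bool pedal_status = false;   /* S31: pedal status initialized to OFF */
static int  on_velocity  = 0;       /* latched at pedal-on (S36) */

/* S34-S38: update the pedal status and latch the ON velocity value. */
static void on_pedal_event(bool pedal_on, int velocity)
{
    if (pedal_on) {                 /* S34 YES -> S35, S36 */
        pedal_status = true;
        on_velocity  = velocity;
    } else {                        /* S37 YES -> S38 */
        pedal_status = false;
    }
}

/* S39-S46: start a tone with the selected attack, or silence it. */
static void on_keyboard_event(bool key_on, int note)
{
    if (key_on) {
        const char *style;
        if (pedal_status)           /* S41 YES -> S42: select by ON velocity */
            style = (on_velocity > 64) ? "BendUpFast" : "BendUpSlow";
        else                        /* S41 NO -> S43: standard default attack */
            style = "NormalAttack";
        printf("start note %d with %s attack\n", note, style);   /* S44 */
    } else {
        printf("silence note %d\n", note);                       /* S45-S46 */
    }
}
```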
In the above-described manner, the user can control tones while controlling in real time a plurality of attack rendition styles, by just operating the single pedal 6. Here, specific examples of tone control based on a plurality of attack rendition styles corresponding to operation of the pedal 6 will be described, with reference to
As seen from section (a) of
As seen from section (b) of
As can be seen from section (c) of
Namely, in the above-described second embodiment of the tone control apparatus, tone generation control is performed such that a tone, audibly generated on the basis of a key-on event generated in response to depressing operation of a key, is silenced on the basis of a key-off event generated in response to releasing operation of a key. Also, when the pedal 6 has been operated before the releasing operation of the key, an appropriate one of a plurality of release rendition styles is imparted to the tone, in response to the pedal operation, so as to silence the sounding tone by imparting the release rendition style to the tone. Further, when the pedal 6 has been operated before depressing operation of a key, audible generation of a tone is started with an appropriate one of a plurality of attack rendition styles imparted to the tone. Thus, by only operating the single pedal 6, the user can control generation of tones while controlling in real time a plurality of release or attack rendition styles faithfully representing tone color variations specific to natural musical instruments or tone color variations based on various types of articulation. Further, the tone control apparatus of the present invention is very advantageous in that it can be extensively applied to all types of tone generators without being influenced by the types of tone generators.
Whereas the second embodiment of the tone control apparatus too has been described as employing the pedal 6 as the rendition style selecting operator, the present invention is not so limited; for example, a dedicated switch may be assigned as the rendition style selecting operator, or any one of the keys on the keyboard may be assigned as the rendition style selecting operator.
Further, whereas the second embodiment of the tone control apparatus has been described above as selecting either the fast-fall rendition style or the slow-fall rendition style as a type of the release rendition style to be applied, it may of course select another release-related rendition style, such as a medium-fall rendition style, from among the plurality of release rendition styles. Needless to say, the same applies to the attack rendition styles.
Furthermore, although the second embodiment of the tone control apparatus has been described above in relation to the case where only one tone is generated and a selected release rendition style is imparted to the generated tone to silence the tone, the present invention is not so limited; of course, a plurality of release rendition styles may be imparted to a series of tones to silence the successive tones in response to operation of the pedal 6.
In the case of the polyphonic tone generation, a same release rendition style may be imparted compulsorily to all currently-generated tones, in response to turning-off operation of the pedal, so as to silence all of the currently-generated tones. In the case of the monophonic tone generation, on the other hand, the tone pitch to be sounded is replaced with the note of each newly-generated keyboard event, and the tone of the note sounding at the time of turning-off of the pedal may be imparted with a release rendition style so as to be silenced.
It should also be appreciated that the tone generation control of the present invention may be performed, in response to the operation of the pedal 6, using a combination of release rendition styles and attack rendition styles. Further, in each of the first and second embodiments, audible generation of tones may be instructed via any other performance operation means than the keyboard. Furthermore, the control of the present invention may be applied to tones generated by automatic performance apparatus as well as manual performance apparatus.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2004-095435 | Mar 2004 | JP | national |
2004-095436 | Mar 2004 | JP | national |
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
4383462 | Nagai et al. | May 1983 | A |
5033352 | Kellogg et al. | Jul 1991 | A |
5160799 | Tozuka et al. | Nov 1992 | A |
5218158 | Kimura | Jun 1993 | A |
5569870 | Hirano | Oct 1996 | A |
5726371 | Shiba et al. | Mar 1998 | A |
5750914 | Takahashi | May 1998 | A |
6150598 | Suzuki et al. | Nov 2000 | A |
6281423 | Shimizu et al. | Aug 2001 | B1 |
6365818 | Suzuki | Apr 2002 | B1 |
6376760 | Tozuka et al. | Apr 2002 | B1 |
6392135 | Kitayama | May 2002 | B1 |
6403871 | Shimizu et al. | Jun 2002 | B2 |
6584442 | Suzuki et al. | Jun 2003 | B1 |
6798427 | Suzuki et al. | Sep 2004 | B1 |
6881888 | Akazawa et al. | Apr 2005 | B2 |
6911591 | Akazawa et al. | Jun 2005 | B2 |
7271330 | Akazawa et al. | Sep 2007 | B2 |
20020023530 | Komano et al. | Feb 2002 | A1 |
20030084778 | Suzuki et al. | May 2003 | A1 |
20030172799 | Sakurai et al. | Sep 2003 | A1 |
20040055449 | Akazawa et al. | Mar 2004 | A1 |
20040267791 | Sunako | Dec 2004 | A1 |
20050061141 | Yamauchi | Mar 2005 | A1 |
20050188819 | Lin et al. | Sep 2005 | A1 |
Foreign Patent Documents

Number | Date | Country
---|---|---
2187794 | Jul 1990 | JP |
05-019755 | Jan 1993 | JP |
2000-172264 | Jun 2000 | JP |
2001-215971 | Aug 2001 | JP |
2002-041041 | Feb 2002 | JP |
2004045832 | Feb 2004 | JP |
2004-078095 | Mar 2004 | JP |
Publication

Number | Date | Country
---|---|---
20050211074 A1 | Sep 2005 | US |