The present application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2010-125713, filed Jun. 1, 2010, and Japanese Patent Application No. 2010-120623, filed Jun. 8, 2010, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a performance apparatus and an electronic musical instrument, which generate musical tones when a player holds the performance apparatus in his or her hand and swings it.
2. Description of the Related Art
An electronic musical instrument has been proposed, which has an elongated member of a stick type with a sensor provided thereon, and generates musical tones when the sensor detects the motion of the elongated member. The elongated member of a stick type has a shape of a drumstick, and the musical instrument is constructed so as to generate musical tones as if percussion instruments generate sounds in response to a player's motion to strike drums.
Japanese Patent No. 2,663,503 discloses a performance apparatus, which has a member of a stick type with an acceleration sensor provided thereon, and generates a musical tone when a certain period of time passes after an output (acceleration-sensor value) of the acceleration sensor reaches a predetermined threshold value.
The player holds one end of the elongated performance apparatus of a stick type with his or her hand and, for instance, swings the performance apparatus down. In practical drum performance, when the player swings the drumstick down, he or she sometimes hits the surface of the drum hard at the highest swinging-down speed, but frequently swings the drumstick down only to the lowest position of the stroke and then quickly swings the drumstick up again to move to the following motion. Therefore, it is preferable for the electronic musical instrument to generate musical tones at the moment the elongated performance apparatus has been swung down to the lowest position.
However, it is difficult for the performance apparatus disclosed in Japanese Patent No. 2,663,503 to generate musical tones at the moment said performance apparatus has been swung down to the lowest position.
The present invention has an object to provide a performance apparatus and an electronic musical instrument, which are able to generate a musical tone at a timing desired by a player without failure.
According to one aspect of the invention, there is provided a performance apparatus to be used with a musical-tone generating device for generating a musical tone, which apparatus comprises a holding member extending in a longitudinal direction to be held by a player, an acceleration sensor provided in the holding member, for obtaining an acceleration-sensor value, and controlling means for giving the musical-tone generating device an instruction of generating a sound, wherein the controlling means comprises sound-generation timing detecting means for giving an instruction to the musical-tone generating device to generate a musical tone at a sound-generation timing represented by a time when the acceleration-sensor value obtained by the acceleration sensor has decreased to a value less than a second threshold value after increasing to a value larger than a first threshold value, wherein the second threshold value is less than the first threshold value.
According to another aspect of the invention, there is provided an electronic musical instrument, which comprises a musical instrument unit and a performance apparatus, wherein the musical instrument unit comprises musical-tone generating device for generating musical tones, and the performance apparatus comprises a holding member extending in a longitudinal direction to be held by a player, an acceleration sensor provided in the holding member, for obtaining an acceleration-sensor value, and controlling means for giving an instruction of generating a sound to the musical-tone generating device, wherein the controlling means comprises sound-generation timing detecting means for giving an instruction to the musical-tone generating device to generate a musical tone at a sound-generation timing represented by a time when the acceleration-sensor value obtained by the acceleration sensor has decreased to a value less than a second threshold value after increasing to a value larger than a first threshold value, the second threshold value being less than the first threshold value, and wherein both the musical instrument unit and the performance apparatus comprise communication means, respectively.
a is a view showing an example of a table, which associates ranges of the difference values θd with pitches of musical tones of percussion instruments, respectively.
b is a view schematically showing relationship between pitches of musical tones and ranges, in which the performance apparatus 11 is swung by the player as if he or she beats drums and other percussion instruments.
a is a view of an example of a table, which associates the ranges of the difference values θd with timbres of musical tones of the percussion instruments, respectively.
b is a view schematically showing relationship between timbres of musical tones and ranges, in which the performance apparatus 11 is swung by the player as if he or she beats drums and other percussion instruments.
a is a flow chart of an example of a process performed in the performance apparatus according to the fourth embodiment.
b is a flow chart of an example of a timer interruption process performed in the performance apparatus according to the fourth embodiment.
Now, embodiments of the present invention will be described with reference to the accompanying drawings.
The I/F 13 of the musical instrument unit 19 serves to receive data (for instance, a note-on event) from the performance apparatus 11, to store the received data in RAM 15 and to give notice of receipt of such data to CPU 12. In the present embodiment, the performance apparatus 11 is provided with an infrared communication device 24 at the edge of the base of the performance apparatus 11, and the I/F 13 of the musical instrument unit 19 is also provided with an infrared communication device 33. Therefore, the infrared communication device 33 of I/F 13 receives infrared light generated by the infrared communication device 24 of the performance apparatus 11, whereby the musical instrument unit 19 can receive data from the performance apparatus 11.
CPU 12 serves to control whole operation of the electronic musical instrument 10. In particular, CPU 12 serves to perform various processes including a controlling operation of the musical instrument unit 19, a detecting operation of a manipulated state of key switches (not shown) in the input unit 17 and a generating operation of musical tones based on note-on events received through I/F 13.
ROM 14 stores various programs for controlling the whole operation of the electronic musical instrument 10, controlling the operation of the musical instrument unit 19, detecting the operated state of the key switches (not shown) in the input unit 17 and generating musical tones based on note-on events received through I/F 13. ROM 14 has a waveform-data area for storing waveform data of various timbres. In particular, the waveform data includes waveform data of percussion instruments such as bass drums, hi-hats, snare drums and cymbals. The waveform data is not limited to data of the percussion instruments; waveform data of wind instruments such as flutes, saxophones and trumpets, waveform data of keyboard instruments such as pianos, and waveform data of string instruments such as guitars may also be stored in ROM 14.
RAM 15 serves to store the program read from ROM 14, and data and parameters generated during the course of the processes. The data generated in the processes includes the manipulated state of the switches in the input unit 17, sensor values received through I/F 13 and generating states of musical tones (sound generation states).
The displaying unit 16 has a liquid crystal displaying device (not shown) and is able to display a selected timbre and a table, which associates ranges of differences in angle with pitches of musical tones, respectively. The input unit 17 has the switches (not shown), and is used to designate a timbre of musical tones to be generated.
The sound system 18 comprises a sound source unit 31, an audio circuit 32 and a speaker 35. In accordance with an instruction from CPU 12, the sound source unit 31 reads waveform data from the waveform-data area of ROM 14 to generate musical-tone data. The audio circuit 32 converts the musical-tone data generated by the sound source unit 31 into an analog signal, and amplifies the analog signal to output the amplified signal from the speaker 35, whereby musical tones are output from the speaker 35.
When the player actually plays the drum, he or she holds one end (the base) of the stick with his or her hand and rotates the stick about his or her wrist. In the present embodiment, the acceleration sensor 23 obtains an acceleration-sensor value in the axial direction of the performance apparatus 11 to detect the centrifugal force caused by the rotational motion of the stick. In this case, a three-axis sensor can also be used.
The performance apparatus 11 comprises CPU 21, the infrared communication device 24, ROM 25, RAM 26, an interface (I/F) 27 and an input unit 28. CPU 21 performs various processes including an obtaining operation of a sensor value in the performance apparatus 11, a detecting operation of a timing of sound generation of a musical tone in accordance with the sensor value and a reference value generated by the geomagnetic sensor 22, a producing operation of a note-on event, and an operation of controlling a sending operation of the note-on event through I/F 27 and the infrared communication device 24.
ROM 25 stores various programs for obtaining a sensor value from the performance apparatus 11, detecting a timing of sound-generation of a musical tone in accordance with the sensor value and a reference value generated by the geomagnetic sensor 22, producing a note-on event, and controlling the sending operation of the note-on event through I/F 27 and the infrared communication device 24. In RAM 26 are stored values obtained and/or produced in the processes, such as sensor values. Data is transmitted through I/F 27 to the infrared communication device 24 in accordance with an instruction from CPU 21. The input unit 28 includes switches (not shown).
CPU 21 judges at step 402 whether or not the setting switch of the input unit 28 has been turned on. When it is determined at step 402 that the setting switch has been turned on (YES at step 402), CPU 21 stores the calculated difference angle in RAM 26 as a reference discrepancy value θp at step 403. Then, CPU 21 judges at step 404 whether or not a terminating switch (not shown) in the input unit 28 has been turned on. When it is determined at step 404 that the terminating switch has not been turned on (NO at step 404), CPU 21 returns to the process at step 401. Meanwhile, when it is determined at step 404 that the terminating switch has been turned on (YES at step 404), the reference setting process will terminate. During the course of the reference setting process described above, the reference offset values or reference discrepancy values θp are stored in RAM 26.
When the reference setting process terminates at step 303 in
Then, CPU 21 performs a sound-generation timing detecting process at step 307.
When it is determined at step 502 that the acceleration-sensor value is not larger than the first threshold value α (NO at step 502), CPU 21 judges at step 506 whether or not a value of “1” has been set to the acceleration flag in RAM 26. When it is determined at step 506 that a value of “1” has not been set to the acceleration flag (NO at step 506), the sound-generation timing detecting process will terminate. When it is determined at step 506 that a value of “1” has been set to the acceleration flag (YES at step 506), CPU 21 judges at step 507 whether or not the acceleration-sensor value is less than a predetermined second threshold value β. When it is determined YES at step 507, CPU 21 performs a note-on event producing process at step 508.
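The flag-and-threshold flow described above can be sketched in software. The following is a minimal illustrative sketch, not the actual firmware of the performance apparatus 11; the function name, threshold values and the use of sample indices as timings are assumptions added for illustration.

```python
# Illustrative sketch of the two-threshold sound-generation timing detection:
# a note-on is produced when the acceleration-sensor value drops below the
# second threshold beta after having once exceeded the first threshold alpha.
# Threshold values here are arbitrary assumptions (alpha > beta).

def detect_sound_generation(samples, alpha=2.0, beta=0.2):
    """Return the indices at which a note-on event should be produced."""
    accel_flag = False   # corresponds to the acceleration flag in RAM 26
    timings = []
    for i, a in enumerate(samples):
        if a > alpha:
            accel_flag = True            # swing-down detected (YES at step 502)
        elif accel_flag and a < beta:
            timings.append(i)            # sound-generation timing (YES at step 507)
            accel_flag = False           # flag reset after the note-on event
    return timings
```

With a simulated swing such as `[0.1, 0.5, 2.5, 3.0, 1.5, 0.5, 0.1]`, the note-on fires at the sample where the value first falls below β, not at the peak.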
Before describing the note-on event producing process, the sound-generation timing in the electronic musical instrument 10 of the present embodiment will be described.
When the player swings the performance apparatus 11 down, the acceleration value gradually increases (refer to Reference number 801, a curve 800 in
So as to make the electronic musical instrument generate musical tones at the time or just before the player strikes the imaginary surface of the drum, the present invention employs the following logic. It is assumed in the present embodiment that the sound-generation timing is defined by a time when the acceleration-sensor value decreases to a value less than the second threshold value β, which is slightly larger than "0". But the acceleration-sensor value can fluctuate around the second threshold value β because of unintentional motion of the player. Therefore, to avoid effects of the fluctuation of the acceleration-sensor value, a condition is set that requires the acceleration-sensor value to once increase to a value larger than the first threshold value α (the value of α is sufficiently larger than the value β). In other words, the sound-generation timing is specified by a time tβ when the acceleration-sensor value decreases to a value less than the second threshold value β after once increasing to a value larger than the first threshold value α (refer to a time tα in
In the note-on event producing process shown in
The maximum value of the acceleration-sensor value is denoted by Amax, and the maximum value of the sound-volume level (velocity) is denoted by Vmax. Then, the sound-volume level Vel will be given by the following equation:
Vel=a·Amax,
where if a·Amax≧Vmax, Vel=Vmax, and “a” is a positive constant.
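The volume equation above can be written as a small helper. This is an illustrative sketch; the value of the constant "a" and the choice of Vmax = 127 (the conventional MIDI velocity ceiling) are assumptions, not values stated in the embodiment.

```python
# Sketch of the sound-volume (velocity) calculation: Vel = a * Amax,
# clipped at Vmax. The coefficient and ceiling below are assumptions.

A_COEFF = 0.5   # the positive constant "a"
V_MAX = 127     # maximum sound-volume level Vmax

def volume_level(a_max):
    """Return the sound-volume level Vel for a maximum sensor value Amax."""
    return min(A_COEFF * a_max, V_MAX)
```

A faster swing yields a larger Amax and hence a louder tone, saturating at Vmax.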
Then, CPU 21 calculates a difference value (θd=θ−θp) between the discrepancy value θ and the reference discrepancy value θp, both stored in RAM 26. CPU 21 determines a pitch of a musical tone to be generated based on the calculated difference value (θd=θ−θp) at step 602.
The difference value θd between the direction (reference direction) (Reference symbol: P), in which the performance apparatus 11 is held at the time when the setting switch is turned on and a direction (Reference symbol: C) of the performance apparatus 11 which has been swung down can be positive as shown in
Toms (hi-tom, low tom and floor tom) of a drum set are arranged in order of pitch around a single player in a clockwise direction. For example, the toms are arranged in a clockwise direction in the order of the hi-tom, the low tom and the floor tom. Therefore, in the case that musical tones of timbres of percussion instruments are generated, the pitches are set so as to become lower as the axial direction of the performance apparatus 11 moves in a clockwise direction while the player swings the performance apparatus 11 down repeatedly as if he or she strikes drums and other percussion instruments. Meanwhile, in the keyboard instruments such as pianos, marimbas and vibraphones, a key arranged more rightward on the keyboard, as seen from the player, generates a tone of a higher pitch. Therefore, in the case that musical tones of timbres of keyboard instruments are generated, the pitches are set so as to become higher as the axial direction of the performance apparatus 11 moves in a clockwise direction while the player swings the performance apparatus 11 down repeatedly.
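A table that associates ranges of the difference value θd with pitches, as described above, can be sketched as a simple lookup. The angle ranges and note numbers below are illustrative assumptions (arranged so pitch falls as θd grows clockwise, percussion-style); they are not values from the embodiment.

```python
# Hedged sketch of a theta_d-to-pitch table lookup. Each entry maps a
# half-open range of difference values (degrees) to an assumed note number.

PITCH_TABLE = [
    ((-45.0, -15.0), 50),   # e.g. hi-tom
    ((-15.0,  15.0), 47),   # e.g. low tom
    (( 15.0,  45.0), 43),   # e.g. floor tom
]

def pitch_for_angle(theta_d):
    """Return the pitch whose range contains theta_d, or None if outside."""
    for (lo, hi), pitch in PITCH_TABLE:
        if lo <= theta_d < hi:
            return pitch
    return None
```

For a keyboard-instrument timbre, the same structure would simply list note numbers that rise, rather than fall, with θd.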
a is a view showing an example of a table, which associates pitches of musical tones of the percussion instruments with ranges of the difference values θd, respectively.
At step 602 in
CPU 21 outputs the produced note-on event to the infrared communication device 24 through I/F 27 at step 604. Then, an infrared signal of the note-on event is sent from the infrared communication device 24. The infrared signal sent from the infrared communication device 24 is received by the infrared communication device 33 of the musical instrument unit 19. Thereafter, CPU 21 resets the acceleration flag in RAM 26 to “0” at step 605.
When the sound-generation timing detecting process finishes at step 307 in
The process to be performed in the musical instrument unit 19 according to the present embodiment will be described.
CPU 12 sets a timbre of a musical tone to be generated in accordance with switching operation of the input unit 17. CPU 12 stores designated timbre information in RAM 15. CPU 12 designates the table in RAM 15 based on the selected timbre, wherein the ranges of the difference values θd and pitches are associated with each other in the table. In the present embodiment, plural tables corresponding to timbres of musical tones to be generated are prepared, and a table is selected based on the selected timbre of the musical tone.
An arrangement may be made such that the table, which associates the ranges of the difference values θd with pitches of musical tones, respectively, can be edited. For example, CPU 12 displays the contents of the table on the display screen of the displaying unit 16, allowing the player to change the ranges of difference values θd and the pitches of musical tones by operating the switches and ten keys in the input unit 17. The table whose contents have been changed is stored in RAM 15.
CPU 12 judges at step 703 whether or not any note-on event has been received through I/F 13. When it is determined at step 703 that a note-on event has been received (YES at 703), CPU 12 performs the sound generating process at step 704. In the sound generating process, CPU 12 outputs the received note-on event to the sound source unit 31. The sound source unit 31 reads waveform data from ROM 14 in accordance with the timbre represented in the note-on event. The waveform data is read at a rate corresponding to the pitch included in the note-on event. The sound source unit 31 multiplies the waveform data by a coefficient corresponding to the sound-volume data (velocity) included in the note-on event, producing musical tone data of a predetermined sound-volume level. The produced musical tone data is supplied to the audio circuit 32, and musical tones are finally output through the speaker 35.
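The sound generating step above, reading waveform data at a rate corresponding to the pitch and multiplying it by a velocity coefficient, can be illustrated with a pure-Python stand-in for the sound source unit 31. All names and the nearest-sample read are assumptions made for the sketch.

```python
# Sketch of the sound-generating process: waveform data is read at a rate
# set by the pitch and scaled by a coefficient derived from the velocity.

def render_tone(waveform, pitch_ratio, velocity, v_max=127):
    """Resample `waveform` by `pitch_ratio` and scale it by velocity/v_max."""
    gain = velocity / v_max                     # velocity coefficient
    out = []
    pos = 0.0
    while int(pos) < len(waveform):
        out.append(waveform[int(pos)] * gain)   # nearest-sample read
        pos += pitch_ratio                      # larger ratio -> higher pitch
    return out
```

Reading at twice the rate halves the number of output samples, which raises the perceived pitch by an octave for a periodic waveform.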
After the sound generating process (step 704), CPU 12 performs a parameter communication process at step 705. In the parameter communication process, CPU 12 gives an instruction to the infrared communication device 33, and the infrared communication device 33 sends the timbre of musical tones which are set to be generated in the switch operation process and data of the table to the performance apparatus 11 through I/F 13, wherein the table associates pitches of musical tones with the range of the difference values θd corresponding to said timbres of musical tones (step 702). In the performance apparatus 11, when the infrared communication device 24 receives the data, CPU 21 stores the data in RAM 26 through I/F 27 at step 308 in
When the parameter communication process finishes at step 705 in
The elongated performance apparatus 11 according to the present embodiment is provided with the acceleration sensor 23 on the extending portion which the player holds or grips with his or her hand. CPU 21 of the performance apparatus 11 gives an instruction (note-on event) of generating sounds to the sound source unit 31 for generating musical tones. CPU 21 produces a note-on event at the time when the acceleration-sensor value of the acceleration sensor 23 once increases to a value larger than the first threshold value α and thereafter has reached a value less than the second threshold value β, wherein the second threshold value β is less than the first threshold value α, giving an instruction of generating sounds to the musical instrument unit 19. Therefore, the musical instrument unit 19 can generate sounds at the moment when the player strikes the imaginary surface or head of the drum with his or her drumstick.
In the present embodiment, the performance apparatus 11 is provided with the geomagnetic sensor 22. CPU 21 obtains a difference value θd representing angles between the axial direction of the performance apparatus 11 and the predetermined orientation based on the sensor value of the geomagnetic sensor 22. Further, CPU 21 determines a pitch of a musical tone to be generated based on the obtained difference value θd. Therefore, the player can change the pitch of the musical tones by selecting an orientation of the direction, in which he or she swings the performance apparatus 11 down.
In the present embodiment, CPU 21 determines a pitch of a musical tone such that the pitch constantly increases or decreases as the difference value θd increases. In general, the keyboard instruments and toms of a drum set are arranged to constantly change the pitches as the player plays the instrument along some direction. Therefore, the player can intuitively generate musical tones of his or her desired pitch.
In the present embodiment, CPU 21 obtains the offset value or discrepancy value θ representing angles between the magnetic north and the axial direction of the performance apparatus 11. Further, CPU 21 obtains the reference offset value or reference discrepancy values θp representing the reference orientation, wherein the reference discrepancy values θp represents angles between the magnetic north and the axial direction of the performance apparatus 11 held for setting. And CPU 21 calculates a difference value representing a difference between the discrepancy value θ and the reference discrepancy values θp, whereby the player can generate a musical tone of his or her desired pitch and in his or her desired position.
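The difference calculation above, θd = θ − θp between two angles measured from magnetic north, can be sketched as follows. Wrapping the raw difference into (−180, 180] degrees is an assumption added here so that headings on either side of magnetic north compare sensibly; the embodiment itself does not state how wraparound is handled.

```python
# Sketch of the discrepancy-value arithmetic: theta and theta_p are angles
# (degrees) between magnetic north and the axial direction of the apparatus.

def difference_value(theta, theta_p):
    """Return theta_d = theta - theta_p, wrapped into (-180, 180] degrees."""
    d = (theta - theta_p) % 360.0
    return d - 360.0 if d > 180.0 else d
```

For example, a heading of 10 degrees against a reference of 350 degrees yields θd = 20, not −340, so a small clockwise swing past north still maps to a nearby pitch range.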
In the present embodiment, CPU 21 detects the maximum value of the acceleration-sensor values of the acceleration sensor 23 and calculates a sound-volume level in accordance with the detected maximum value. Then, CPU 21 produces a note-on event representing the calculated sound-volume level. Therefore, the player can use the performance apparatus 11 to generate a musical tone having a sound volume corresponding to a rate at which he or she swings the performance apparatus 11 down.
For example, in the present embodiment, CPU 21 calculates the sound volume level Vel from the following equation:
Vel=a·Amax,
where if a·Amax≧Vmax, Vel=Vmax, and “a” is a positive constant. Using the calculated sound volume level, a musical tone can be generated, having a precise sound volume corresponding to a rate at which the performance apparatus 11 is swung down.
Now, the second embodiment of the present invention will be described. In the first embodiment, the pitch of a musical tone to be generated is adjusted based on the difference value, θd=(θ−θp), representing angles between the reference discrepancy value θp and the axial direction of the elongated performance apparatus 11. But in the second embodiment, a timbre of a musical tone to be generated is adjusted based on the difference value, θd=(θ−θp). In the second embodiment, processes to be performed in the performance apparatus 11 are substantially the same as those in the first embodiment except the note-on event producing process.
As shown in
Thereafter, CPU 21 produces a note-on event including a sound-volume level (velocity), pitch and timbre of a musical tone to be generated (step 1103), wherein pitch information can be constant at step 1103. The processes to be performed at steps 1104 and 1105 are substantially the same as those at steps 604 and 605 in
In the switch operation process (step 702 in
In the second embodiment, CPU 21 obtains the difference value representing a difference in angle between the predetermined reference orientation and the orientation of the axial direction of the elongated performance apparatus 11. CPU 21 determines the timbre of a musical tone to be generated based on the obtained difference value. Therefore, the timbre of the musical tone to be generated can be changed depending on the orientation of the axial direction of the performance apparatus 11, which the player swings down.
Now, the third embodiment of the present invention will be described. In the third embodiment, the sound volume level (velocity) of a musical tone to be generated is determined depending on which one of the ranges of the acceleration sensor values the maximum acceleration sensor value belongs to. In the first embodiment, the sound volume level (velocity) is determined at step 601 from the following equation:
Vel=a·Amax (≦Vmax)
In the third embodiment, the sound volume level is determined at step 601 as described below.
In RAM 26 is stored the table which associates the sound volume levels (velocity) with the ranges of the maximum values Amax of the acceleration sensor values, respectively.
For example, in the case where the performance apparatus 11 is swung down and the acceleration-sensor value is given by a curve 1301 (
In the third embodiment, CPU 21 obtains the sound-volume level depending on which range in the table the maximum value Amax belongs to. Therefore, an appropriate sound-volume level can be determined without performing a multiplication.
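The range-based lookup of the third embodiment can be sketched as below. The range boundaries and velocity levels are illustrative assumptions; only the structure (pick the level for the range containing Amax, no multiplication) follows the embodiment.

```python
# Sketch of the third embodiment's volume determination: the velocity is
# chosen by the range that the maximum acceleration value Amax falls in.
# Boundaries and levels below are assumptions.

VOLUME_TABLE = [
    (1.0,  30),    # Amax below 1.0 -> soft
    (2.0,  70),    # Amax below 2.0 -> medium
    (3.0, 100),    # Amax below 3.0 -> loud
]
V_MAX = 127        # level used for very hard strikes above all ranges

def volume_from_table(a_max):
    """Return the sound-volume level for the range containing a_max."""
    for upper, level in VOLUME_TABLE:
        if a_max < upper:
            return level
    return V_MAX
```

On a small embedded CPU this trades a multiply for a short linear scan of the table stored in RAM 26.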
The present invention has been described with reference to the accompanying drawings and the first to the third embodiment, but it will be understood that the invention is not limited to these particular embodiments described herein, and numerous rearrangements, modifications, and substitutions may be made to the embodiments of the invention described herein without departing from the scope of the invention.
In the first to the third embodiment, CPU 21 of the performance apparatus 11 detects an acceleration-sensor value caused when the player swings the performance apparatus 11 down, determining the timing of sound generation. CPU 21 of the performance apparatus 11 calculates a discrepancy value based on a sensor value of the geomagnetic sensor 22, and determines a pitch (the first embodiment) and a timbre (the second embodiment) of a musical tone to be generated based on the calculated discrepancy value. Thereafter, CPU 21 of the performance apparatus 11 produces the note-on event including the pitch and timbre at the timing of sound generation, and transmits the note-on event to the musical instrument unit 19 through I/F 27 and the infrared communication device 24. Meanwhile, in the musical instrument unit 19, receiving the note-on event, CPU 12 supplies the received note-on event to the sound source unit 31, thereby generating a musical tone. The above arrangement is preferably used in the case that the musical instrument unit 19 is not a device dedicated to generating musical tones, such as a personal computer or a game machine provided with a MIDI board.
The processes to be performed in the performance apparatus 11 and the processes to be performed in the musical instrument unit 19 are not limited to those described herein in the embodiments.
For example, a rearrangement may be made to the performance apparatus 11, which obtains the reference discrepancy value, discrepancy values and acceleration-sensor values, and sends them to the musical instrument unit 19. In the rearrangement, the sound generation timing detecting process (
Now, the fourth embodiment of the present invention will be described. In the fourth embodiment, an acceleration-sensor value caused when the performance apparatus 11 is swung down by the player is detected, and a sound-generation timing is determined based on the detected acceleration-sensor value. A sound-volume level of a musical tone to be generated is determined based on information of a time interval "T" from the time when the acceleration-sensor value reaches the first threshold value α to the time when the acceleration-sensor value thereafter decreases to a value less than the second threshold value β.
Like the performance apparatus 11 in the first to the third embodiment, the performance apparatus 110 comprises CPU 21, infrared communication device 24, ROM 25, RAM 26, interface (I/F) 27 and input unit 28. CPU 21 performs various processes including an obtaining operation of a sensor value of the performance apparatus 110, a detecting operation of a timing of sound generation of a musical tone in accordance with the sensor value and a reference value generated by the geomagnetic sensor 22, a producing operation of a note-on event, and an operation of controlling a sending operation of the note-on event through I/F 27 and the infrared communication device 24.
ROM 25 stores various programs for obtaining a sensor value of the performance apparatus 110, detecting a timing of sound generation of a musical tone in accordance with the sensor value and a reference value generated by the geomagnetic sensor 22, producing a note-on event, and controlling the sending operation of the note-on event through I/F 27 and the infrared communication device 24. Data is transmitted through I/F 27 to the infrared communication device 24 in accordance with an instruction from CPU 21. The input unit 28 includes switches (not shown).
a is a flow chart of an example of a process performed in the performance apparatus 110 according to the fourth embodiment. CPU 21 of the performance apparatus 110 performs an initializing process at step 1601, clearing data in RAM 26 and resetting a timer value “t”. CPU 21 obtains and stores a sensor value (acceleration-sensor value) of the acceleration sensor 23 in RAM 26 at step 1602. As described above, the sensor value in the axial direction of the performance apparatus 110 is used as the acceleration-sensor value in the fourth embodiment.
CPU 21 performs a sound-generation timing detecting process at step 1603.
After the process of step 1704, CPU 21 adds a timer value “t” to the time-interval information “T” at step 1705, thereby updating said time-interval information “T”. Then, the time-interval information “T” is stored in RAM 26. Thereafter, CPU 21 resets the timer value “t” to a value of “0” at step 1706.
When it is determined at step 1702 that the acceleration sensor value is not larger than the first threshold value α (NO at step 1702), CPU 21 judges at step 1707 whether or not the acceleration flag in RAM 26 has been set to “1”. When it is determined YES at step 1707, CPU 21 judges at step 1708 whether or not the acceleration sensor value is less than the second threshold value β. When it is determined NO at step 1708, CPU 21 advances to step 1705 to add the timer value “t” to the time-interval information “T”. When it is determined YES at step 1708, CPU 21 performs a note-on event producing process at step 1709.
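The accumulation of the timer value "t" into the time-interval information "T" described above can be sketched as follows. This is an illustrative sketch, not the actual timer-interrupt firmware: counting one unit per sample stands in for the timer, and the threshold values are assumptions.

```python
# Sketch of the fourth embodiment's interval measurement: "T" accumulates
# from the moment the sensor value exceeds alpha until it next drops below
# beta, at which point a note-on fires carrying T.

def measure_interval(samples, alpha=2.0, beta=0.2):
    """Return (index, T) pairs: note-on sample index and accumulated T."""
    accel_flag = False
    interval = 0       # time-interval information "T"
    events = []
    for i, a in enumerate(samples):
        if a > alpha:
            accel_flag = True
            interval += 1                      # add timer value t to T
        elif accel_flag:
            if a < beta:
                events.append((i, interval))   # note-on at t_beta with T
                accel_flag = False
                interval = 0                   # T reset (step 1710)
            else:
                interval += 1                  # still between t_alpha and t_beta
    return events
```

A slower swing keeps the sensor value between the thresholds longer, producing a larger T; the note-on producing process then derives the sound-volume level from T.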
Before describing the note-on event producing process, the sound-generation timing in the electronic musical instrument 10 of the present embodiment will be described.
When the player swings the performance apparatus 110 down, the acceleration value gradually increases (refer to Reference number 1901, a curve 1900 in
So as to make the electronic musical instrument generate musical tones at the time or just before the player strikes the imaginary surface of the drum, the present invention employs the following logic. It is assumed in the fourth embodiment that the sound-generation timing is specified by a time when the acceleration-sensor value decreases to a value less than the second threshold value β, which is slightly larger than "0". But the acceleration-sensor value can fluctuate around the second threshold value β because of unintentional motion of the player. Therefore, to avoid effects of the fluctuation of the acceleration-sensor value, a condition is set that requires the acceleration-sensor value to once increase to a value larger than the first threshold value α (the value of α is sufficiently larger than the value β). In other words, the sound-generation timing is defined by a time tβ when the acceleration-sensor value decreases to a value less than the second threshold value β after once increasing to a value larger than the first threshold value α (refer to a time tα in
Further, in the fourth embodiment, time-interval information “T” is measured between the time tα when the acceleration-sensor value increases to a value larger than the first threshold value α and the time tβ when the acceleration-sensor value thereafter decreases to a value less than the second threshold value β. The sound-volume level of the musical tone to be generated is then determined based on the time-interval information “T”. Every time the sound-generation timing detecting process is performed after the acceleration-sensor value has increased beyond the first threshold value α, the timer value “t” is added to the time-interval information “T” at step 1705.
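The two-threshold detection described above can be sketched as a small per-sample state machine. This is an illustrative sketch, not the patent's actual firmware; the names `SwingDetector`, `ALPHA`, and `BETA`, and the concrete threshold values, are assumptions.

```python
# Sketch of the sound-generation timing detection (steps 1702-1710).
# ALPHA and BETA stand in for the first threshold α and second threshold β.
ALPHA = 1.5   # first threshold α (sufficiently larger than β)
BETA = 0.1    # second threshold β (slightly larger than 0)

class SwingDetector:
    def __init__(self):
        self.accel_flag = False  # corresponds to the acceleration flag in RAM 26
        self.T = 0.0             # accumulated time-interval information "T"

    def update(self, accel, dt):
        """Process one acceleration-sensor sample; return the accumulated "T"
        at the sound-generation timing, or None otherwise."""
        if accel > ALPHA:            # value exceeds α (YES at step 1702)
            self.accel_flag = True   # acceleration flag is set
            self.T += dt             # timer value added to "T" (step 1705)
            return None
        if self.accel_flag:          # flag already set (YES at step 1707)
            if accel >= BETA:        # not yet below β (NO at step 1708)
                self.T += dt         # keep accumulating "T" (step 1705)
                return None
            # value dropped below β: sound-generation timing (step 1709)
            interval = self.T
            self.accel_flag = False
            self.T = 0.0             # reset of "T" (step 1710)
            return interval
        return None
```

Feeding the detector a swing-shaped sequence of samples produces `None` until the value falls below β, at which point the accumulated interval is returned and the state is reset.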
In the note-on event producing process, CPU 21 determines the sound-volume level of the musical tone to be generated based on the time-interval information “T” at step 1801.
When the maximum value of the sound volume level is denoted by Vmax, the sound volume level will be obtained as follows:
Vel = a·T, where if a·T > Vmax, then Vel = Vmax,
and “a” is a positive constant.
Then, CPU 21 produces a note-on event containing the sound volume level at step 1802. The note-on event also contains information of pitch and timbre. CPU 21 sends the produced note-on event to the infrared communication device 24 through I/F 27 at step 1803. The infrared communication device 24 sends an infrared signal of the note-on event to the infrared communication device 33 of the musical instrument unit 19. Thereafter, CPU 21 resets the acceleration flag in RAM 26 to “0” at step 1804. Further, CPU 21 resets the timer value “t” to “0” at step 1805, and makes the timer interruption ineffective at step 1806.
After the note-on event producing process of step 1709, CPU 21 resets the time-interval information “T” to “0” at step 1710. When it is determined NO at step 1707, CPU 21 also resets the time-interval information “T” to “0” at step 1710.
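The note-on event producing process and the subsequent resets can be sketched as follows. The `NoteOnEvent` structure, the `send` callback, and the default pitch/timbre values are illustrative assumptions; the patent's actual infrared protocol is not specified here.

```python
# Sketch of the note-on event producing process (steps 1801-1806)
# followed by the reset of "T" (step 1710).
from dataclasses import dataclass

@dataclass
class NoteOnEvent:
    velocity: int   # sound-volume level determined at step 1801
    pitch: int      # pitch information carried by the event
    timbre: int     # timbre information carried by the event

def produce_note_on(state, send, a=1000.0, v_max=127, pitch=60, timbre=0):
    vel = min(int(a * state["T"]), v_max)   # step 1801: Vel = a*T, clamped
    event = NoteOnEvent(vel, pitch, timbre) # step 1802: build the note-on event
    send(event)                             # step 1803: hand off for transmission
    state["accel_flag"] = False             # step 1804: reset acceleration flag
    state["t"] = 0                          # step 1805: reset timer value "t"
    state["T"] = 0.0                        # step 1710: reset "T"
    return event
```

Here `send` abstracts the path through I/F 27 and the infrared communication device 24 to the musical instrument unit 19.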
This completes the sound-generation timing detecting process of step 1603.
In the fourth embodiment, the performance apparatus 110 has an elongated shape to be held by the player with his or her hand. The elongated performance apparatus 110 is provided with the acceleration sensor 23. CPU 21 of the performance apparatus 110 gives an instruction (note-on event) of generating a musical tone to the sound source unit 31. CPU 21 produces the note-on event, which has a sound-generation timing specified by the time when the acceleration-sensor value of the acceleration sensor 23 has decreased to a value less than the second threshold value β after increasing to a value larger than the first threshold value α, wherein the second threshold value β is less than the first threshold value α, and then gives the musical instrument unit 19 the instruction of generating sounds. Therefore, the musical instrument unit 19 can generate musical tones at the moment the player strikes the imaginary surface or head of the drum.
In the fourth embodiment, the sound-volume level is determined based on the time interval between the time when the acceleration-sensor value reaches the first threshold value α and the time when the acceleration-sensor value thereafter reaches the second threshold value β (which is less than the first threshold value α) corresponding to the sound-generation timing. Therefore, the musical instrument unit 19 can generate a musical tone of a sound volume determined depending on the manner in which the player swings the performance apparatus 110 down.
In the fourth embodiment, the time when the acceleration-sensor value reaches the first level is set to the time when the acceleration-sensor value reaches the first threshold value α, at which time the detection of the sound-generation timing is first triggered.
Therefore, it is possible to obtain the time-interval information with reference to the time when the acceleration-sensor value is first detected in the sound-generation timing detecting process.
For example, in the fourth embodiment, CPU 21 calculates the sound-volume level Vel based on the time-interval information “T” as follows:
Vel=a·T,
where, if a·T ≧ Vmax (the maximum value of the sound-volume level), Vel = Vmax, and “a” is a positive constant. Therefore, the musical instrument unit 19 can generate musical tones having precise sound volumes depending on the manner in which the player swings the performance apparatus 110 down.
It will be understood that the present invention is not limited to these particular embodiments described herein, and numerous rearrangements, modifications, and substitutions may be made to the embodiments of the invention described herein without departing from the scope of the invention.
In the fourth embodiment, the time-interval information “T” is multiplied by a positive constant “a” to calculate the sound-volume level, wherein the time-interval information “T” represents an interval between the time when the acceleration-sensor value reaches the first threshold value α and the time when the acceleration-sensor value thereafter reaches the second threshold value β. But the calculation of the sound-volume level is not limited to the above, and a modification may be made such that the sound-volume level is determined depending on which range the time-interval information “T” belongs to.
The performance apparatus 110 in the other embodiment determines the sound-volume level at step 1801 as described below. RAM 26 stores a table that contains ranges of the time-interval information “T” and the corresponding sound-volume levels. The table stores the following information:
In this embodiment, CPU 21 obtains the sound-volume level depending on which range in the table the time-interval information “T” belongs to. Therefore, an appropriate sound-volume level can be obtained without performing a multiplication.
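The table-lookup alternative can be sketched as below. The concrete range boundaries and sound-volume levels are illustrative assumptions, since the specification's actual table contents are not reproduced here.

```python
# Sketch of the table-lookup alternative: ranges of "T" (in seconds) mapped
# to pre-assigned sound-volume levels, so no multiplication is needed.
VOLUME_TABLE = [
    (0.02, 40),   # T < 0.02 s -> level 40
    (0.05, 80),   # T < 0.05 s -> level 80
    (0.10, 110),  # T < 0.10 s -> level 110
]
V_MAX = 127       # any longer interval gets the maximum level

def volume_from_table(T):
    """Return the sound-volume level for the range that "T" falls into."""
    for upper_bound, level in VOLUME_TABLE:
        if T < upper_bound:
            return level
    return V_MAX
```

On a small CPU this trades a multiply for a few comparisons, which matches the stated motivation of avoiding the multiplication.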
In the embodiment, CPU 21 of the performance apparatus 110 detects an acceleration-sensor value caused when the player swings the performance apparatus 110 down, determining the timing of sound generation. CPU 21 of the performance apparatus 110 determines the sound-volume level of a musical tone to be generated in accordance with the time interval information “T” representing an interval between the time when the acceleration-sensor value reaches the first threshold value α and the time when the acceleration-sensor value thereafter reaches the second threshold value β. Then, CPU 21 of the performance apparatus 110 produces and sends the note-on event containing the sound volume level to the musical instrument unit 19 through I/F 27 and the infrared communication device 24 at the timing of the sound generation.
Further, in the embodiments, the infrared communication devices 24 and 33 are used to exchange an infrared signal of data between the performance apparatus 110 and the musical instrument unit 19, but the invention is not limited to the exchange of infrared signals. For example, modification may be made such that wireless communication and/or wire communication is used to exchange data between the performance apparatus 110 and the musical instrument unit 19.
Number | Date | Country | Kind |
---|---|---|---|
2010-125713 | Jun 2010 | JP | national |
2010-130623 | Jun 2010 | JP | national |