This application claims the priority benefit of Japan Patent Application No. 2018-170745, filed on Sep. 12, 2018. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The disclosure relates to an electronic musical instrument and a musical sound generation processing method of the electronic musical instrument.
In patent literature 1, a technology of an electronic musical instrument is disclosed in which the electronic musical instrument includes a keyboard device KY for instructing the start and stop of generation of a musical sound and a ribbon controller RC for detecting a detection position on a detection surface, and applies the degree of one musical sound effect (cut-off, resonance, or the like) corresponding to the detection position of the ribbon controller RC to each of a plurality of tones constituting the musical sound and outputs the tones. Accordingly, the degree of one musical sound effect desired by a user can be easily changed according to the detection position of the ribbon controller RC.
[Patent literature 1] Japanese Laid-Open No. 2017-122824
However, the change in the degree of one musical sound effect corresponding to the detection position of the ribbon controller RC is the same for all of the plurality of tones. Accordingly, because the degrees of the musical sound effects with respect to all of the plurality of tones change in the same way, there is a risk that, even if the user frequently changes the detection position of the ribbon controller RC during performance, the change of the musical sound effect that is eventually output and heard by the audience sounds monotonous.
The disclosure provides an electronic musical instrument capable of changing the degrees of musical sound effects applied to a plurality of tones while suppressing the monotony of the change, thereby enabling an expressive performance.
The electronic musical instrument of the disclosure includes: an input unit, which inputs a sound instruction for a plurality of tones; a detection unit, which has a detection surface and detects detection positions on the detection surface; a musical sound control unit, which applies a musical sound effect to each of the plurality of tones based on the sound instruction input by the input unit and outputs the tones; and a musical sound effect change unit, which changes, for each tone, the degree of the musical sound effect applied to each tone by the musical sound control unit corresponding to the detection positions detected by the detection unit.
In the following, preferred examples are described with reference to the attached diagrams.
As shown in
In a position adjacent to the keyboard 2, a neck 4 which serves as a handle for the performer H of the keytar 1 is formed. By grasping the neck 4 with a hand (the left hand of the performer H in
Next, the ribbon controller 5 and the operation bar 6 arranged in the neck 4 are described with reference to
As shown in
The ribbon 5 has a structure in which the position sensor and the pressure sensitive sensor are formed in a part of a folded sheet (a film) 51. In this embodiment, resistance membranes 52A, 52B which function as the position sensor are formed. In addition, membranes 53A, 53B made of pressure sensitive conductive ink (hereinafter referred to as pressure sensitive ink) which function as the pressure sensitive sensor are formed.
The film 51 includes four parts (a first part, a second part, a third part, and a fourth part). In a state that the film 51 is folded, the four parts are laminated.
As described hereinafter, a surface on which the resistance membrane 52A in the first part (corresponding to a part 51A shown in
The rear surface of the second part and the rear surface of the third part are adhered by a double-face tape (a double-face adhesive tape). In regard to the double-face tape, an adhesive 60 is laminated on a front surface and a rear surface of a support (a setting plate) 54. Besides, in
A terminal portion 57 is formed at one end of the film 51 (see
As shown in
The ribbon 5 has a front surface panel 81. The front surface panel 81 is adhered to the laminated film 51 by an adhesive (for example, the double-face tape).
The resistance membrane 52A (see
In addition, the part 51A and the part 51B can also be seen as being adjacent via a boundary in the width direction (the Q-direction). The part 51A and the part 51D can also be seen as being adjacent via a boundary in the longitudinal direction (the P-direction). The part 51D and the part 51C can also be seen as being adjacent via the boundary in the longitudinal direction (the P-direction).
In addition, in
The part 51B in the ribbon 5 shown in
Next, a formation method of the film 51 is described with reference to
Next, as shown in
In addition, as shown in
Furthermore, as shown in
In addition, as shown in
In addition, as shown in
After that, the double-face tape is pasted on the rear surfaces of the parts 51C, 51D. Besides, the double-face tape on the rear surface of the part 51C is used for adhesion with the rear surface of the part 51B. The double-face tape on the rear surface of the part 51D is used for adhesion between the ribbon 5 and other members. In addition, the reinforcement plate 56 is pasted on the rear surface of the terminal portion 57. Then, punching processing is performed to obtain the film 51 in the shape shown in
Furthermore, the parts 51B, 51C, 51D are folded in the following procedure for example. The following procedure is described with reference to
Firstly, the part 51C is bent toward the part 51D side so that a boundary of the part 51C and the part 51D is creased and the membranes 53A, 53B face each other. In addition, the part 51B is bent toward the part 51A side so that a boundary of the part 51A and the part 51B is creased and the resistance membranes 52A, 52B face each other.
After that, the parts 51A, 51B, 51C, 51D are temporarily expanded to return to the state as shown in
In this state, the separator 71 (see
Next, the separator 71 (see
In addition, the separator 72 of the double-face tape pasted on the rear surface of the part 51C is peeled. Besides, in this state, the part 51B is folded toward the part 51A side, and the part 51C is folded toward the part 51D side. Then, the rear surface of the part 51C and the rear surface of the part 51B are adhered by the double-face tape.
Furthermore, the double-face tape is pasted on the rear surface of the front surface panel 81, and the front surface panel 81 and the part 51A of the film 51 are adhered by the double-face tape.
In this way, the ribbon 5 shown in
Besides, the processes for bending or folding the four parts (the first part, the second part, the third part, and the fourth part) may be carried out manually or a jig for carrying out the processes may be used.
Next, actions of the position sensor formed on the parts 51A, 51B of the film 51 and the pressure sensitive sensor formed on the parts 51C, 51D of the film 51 are described with reference to
The film 51 is shown in two places of
As shown in
The direction orthogonal to the two sides of the resistance membrane 52A is set as a p-direction. As shown in
The ratio of the distances from the place E to the electrodes on the two ends is equal to the ratio of the resistance values R1 and R2. Thus, when the resistance membrane 52A comes into contact with the resistance membrane 52B due to the contact of the finger of the performer H or the like at the place E, a voltage corresponding to the position in the p-direction appears as Vout.
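Expressed as a simple formula (a sketch only, assuming that a supply voltage V_CC is applied across the two end electrodes of the resistance membrane 52A and that the resistance membrane 52B acts as a high-impedance wiper at the place E; V_CC and L are labels introduced here for illustration, not reference numerals of the embodiment):

V_out = V_CC × R2 / (R1 + R2), and therefore p ≈ L × (V_out / V_CC),

where L is the effective length of the resistance membrane 52A in the p-direction and p is the distance from the place E to the electrode on the R2 side.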
The film 51 is shown in two places of
As shown in
As shown by the resistance-load (pressure) characteristic shown in
As described above, the ribbon 5 of the embodiment can detect the contact position of the finger of the performer H or the like, namely the detection position, by the position sensor, and can detect the pressing force of the finger of the performer H or the like by the pressure sensitive sensor.
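For the pressure side, a rough conversion sketch in Python is shown below; the resistance endpoints and the logarithmic shape of the curve are assumptions for illustration, not values taken from the resistance-load characteristic of the embodiment.

```python
import math

def pressing_force(resistance_ohm: float,
                   r_light: float = 100_000.0,  # assumed resistance at a light touch
                   r_full: float = 1_000.0) -> float:
    """Map the resistance of the pressure-sensitive membranes 53A/53B to a
    0.0-1.0 pressing-force estimate; the resistance of the pressure sensitive
    ink is taken to fall monotonically as the load on the ribbon 5 increases."""
    r = min(max(resistance_ohm, r_full), r_light)   # clamp to the assumed range
    return math.log(r_light / r) / math.log(r_light / r_full)
```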
In addition, in the ribbon 5 of the disclosure, one base material (for example, the film 51) includes four parts (the first part, the second part, the third part, and the fourth part, which are, for example, the part 51A, the part 51B, the part 51C, and the part 51D). Resistance membranes for position detection (for example, the resistance membranes 52A, 52B) are formed on each of the first part (for example, the part 51A) and the second part (for example, the part 51B), which are two adjacent parts of the four parts, and pressure sensitive resistance membranes (for example, the membranes 53A, 53B made of the pressure sensitive ink 93) are formed on each of the third part (for example, the part 51C) and the fourth part (for example, the part 51D), which are the other two adjacent parts of the four parts. The second part is laminated by being folded with respect to the first part, the third part is laminated by being folded with respect to the fourth part, and the two laminates formed by the folding (for example, a laminate of the parts 51A, 51B and a laminate of the parts 51C, 51D) are folded over each other. Due to this structure, the number of components of the ribbon 5 is reduced compared with a case in which the position sensor and the pressure sensitive sensor are fabricated separately. As a result, the ribbon 5 can be manufactured inexpensively. In addition, because one base material is folded during manufacture, assembly of the ribbon 5 becomes simple. For example, when the position sensor and the pressure sensitive sensor are fabricated separately, high-accuracy alignment is required when the two sensors are integrated; in comparison, the alignment is relatively easy in the ribbon 5 of the disclosure. Furthermore, because the position sensor and the pressure sensitive sensor are formed in one member (the film 51), the terminal portion 57 can be aggregated and arranged on the same plane.
In addition, by being used in combination, the position sensor and the pressure sensitive sensor can be appropriately applied to an electronic musical instrument capable of controlling the strength of a sound corresponding to the degree of contact of the finger of the performer H or the like.
In addition, in the embodiment, the ribbon 5 is also disclosed which is configured in a manner that in the state before the respective parts are folded, the second part (for example, the part 51B) is adjacent to the first part (for example, the part 51A) in the longitudinal direction of the first part, the fourth part (for example, the part 51D) is adjacent to the first part in the width direction (the direction orthogonal to the longitudinal direction) of the first part, and the third part is adjacent to the fourth part in the longitudinal direction of the fourth part.
In addition, in the embodiment, the ribbon 5 is also disclosed in which the resistance membrane for position detection, made of carbon or of silver and carbon, is formed on the first part (for example, the part 51A) and the second part (for example, the part 51B) by screen printing, and the pressure sensitive resistance membrane, made of silver and pressure sensitive ink, is formed on the third part (for example, the part 51C) and the fourth part (for example, the part 51D) by screen printing.
In addition, in the embodiment, the ribbon 5 is also disclosed in which the front surface of the first part (for example, the part 51A) and the front surface of the second part (for example, the part 51B) are adhered by the pressure sensitive adhesive, the front surface of the third part (for example, the part 51C) and the front surface of the fourth part (for example, the part 51D) are adhered by the pressure sensitive adhesive, and the rear surface of the second part and the rear surface of the third part are adhered by the double-face adhesive tape.
Return to
Different types of musical sound effects are respectively assigned to the detection positions in the X-direction and the pressing force in the Z-direction detected by the ribbon 5 and the operation amount in the Y-direction detected by the operation bar 6, and the degrees of the musical sound effects are respectively set corresponding to the detection positions in the X-direction, the pressing force in the Z-direction or the operation amount in the Y-direction; the details are described later.
In a conventional keytar, a keyboard and a ribbon controller capable of detecting only detection positions in the X-direction are arranged; the performer H performs a sound instruction on the keytar by an operation of the right hand on the keyboard and controls the musical sound effect corresponding to the position of the ribbon controller specified by the left hand, thereby putting on a performance as if playing a guitar. However, since the ribbon controller of such a keytar is capable of detecting only detection positions in the X-direction, it cannot change the degree of the musical sound effect even if a pressing force is applied to the ribbon controller in the manner of changing the force of a finger pressing down a guitar string or of strongly pressing the guitar string in a flapping manner with the finger.
In contrast, in the ribbon 5 of the keytar 1 of the embodiment, a pressing force in the Z-direction can be detected, and the degree of the musical sound effect corresponding to this pressing force in the Z-direction is set. Accordingly, when the pressing force in the Z-direction is applied to the ribbon 5 in the manner of changing the force of the finger pressing down the guitar string or of strongly pressing the guitar string in the flapping manner with the finger, the degree of the musical sound effect can be changed corresponding to the pressing force. That is, a guitar-like performance can be put on more appropriately with the keytar 1.
In addition, because the ribbon 5 and the operation bar 6 are arranged adjacently, the degrees of three different musical sound effects can be changed while the hand movement of the performer H is kept to a minimum. Furthermore, as shown in
Next, a function of the keytar 1 is described with reference to
The input unit 20 has a function for inputting a sound instruction of a plurality of tones to the keytar 1 by one input from the performer H and is implemented by the keyboard 2 (the keys 2a). The musical sound control unit 21 has a function for applying a musical sound effect to each of the plurality of tones based on the sound instruction input from the input unit 20 and outputting the tones, and is implemented by the CPU 10 described later in
The detection unit 22 has a detection surface and has a function for detecting the detection positions on the detection surface and the pressing force loaded on the detection surface, and is implemented by the ribbon 5. The operator 23 has a function for inputting the operation from the performer H and is implemented by the operation bar 6. The musical sound effect change unit 24 has a function for changing, for each tone, the degree of the musical sound effect applied to each tone by the musical sound control unit 21 corresponding to the detection positions and the pressing force detected by the detection unit 22 or the operation of the operator 23, and is implemented by the CPU 10. In the embodiment, different types of musical sound effects are respectively assigned in advance to the detection positions and the pressing force of the detection unit 22 and to the operation amount of the operator 23, and the musical sound effect change unit 24 changes, for each tone, the degrees of the musical sound effects respectively assigned corresponding to the detection positions and the pressing force of the detection unit 22 or the operation amount of the operator 23.
The aspect information storage unit 25 has a function for storing aspect information representing a change of the degree of the musical sound effect applied to each tone corresponding to the detection positions detected by the detection unit 22, and is implemented by an X-direction aspect information table 11b described later in
From the above, the musical sound control unit 21 outputs the plurality of tones that are selected by the tone selection unit 27 and that are based on the sound instruction obtained by one input to the input unit 20, after the musical sound effects are applied to the plurality of tones. At this time, the musical sound effect change unit 24 changes, for each tone, the degrees of the musical sound effects respectively assigned corresponding to the detection positions and the pressing force of the detection unit 22 or the operation amount of the operator 23. Accordingly, an expressive performance rich in change of the degree of the musical sound effect for each tone can be achieved.
Particularly, the change of the degree of the musical sound effect for each tone corresponding to the detection positions detected by the detection unit 22 is performed based on the aspect information stored in the aspect information storage unit 25 and selected by the aspect selection unit 26. Accordingly, the degree of the musical sound effect can be changed appropriately according to the aspect information suitable for the preference of the performer H or the genre or tune of a song to be played.
Next, an electrical configuration of the keytar 1 is described with reference to
The CPU 10 is an arithmetic device for controlling each portion connected by the bus line 15. The flash ROM 11 is a rewritable non-volatile memory and stores a control program 11a, an X-direction aspect information table 11b, and a YZ-direction aspect information table 11c. When the control program 11a is executed by the CPU 10, the main processing of
As shown in
The input values are values obtained by converting the detection positions in the X-direction detected by the ribbon 5 into integers from 0 to 127. Specifically, in regard to the input value, when the position of one end (for example, the left end in a front view) in the X-direction of the front surface panel 81 of the ribbon 5 in
The degree of the musical sound effect with respect to the input value is also set with “0” as the minimum value and “127” as the maximum value, and is expressed as integers divided equally into 128 steps. That is, the assigned musical sound effect is not applied when the degree of the musical sound effect is 0, and is applied to the fullest when the degree of the musical sound effect is 127.
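As a rough illustration of this conversion (a Python sketch only; the function and argument names are assumptions, and the panel length would be whatever physical range the ribbon 5 actually reports):

```python
def to_input_value(position_mm: float, panel_length_mm: float) -> int:
    """Convert a detection position along the X-direction of the front surface
    panel 81 into an integer input value in the range 0-127."""
    ratio = min(max(position_mm / panel_length_mm, 0.0), 1.0)  # clamp to the panel
    return round(ratio * 127)
```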
Then, the degree of the musical sound effect for each of the tones A-D corresponding to the input value based on the detection position in the X-direction of the front surface panel 81 of the ribbon 5 is acquired from the aspect information L14 and applied to the musical sound effect which is assigned to the X-direction of the front surface panel 81. For example, when the aspect information L14 is specified, “volume” is assigned as the musical sound effect in the X-direction of the front surface panel 81, and the input value based on the detection position in the X-direction of the front surface panel 81 is “41”, as shown in
In the embodiment, the degree of the musical sound effect stored in the aspect information L14 and the like is not applied only to the case where the musical sound effect is “volume”, but is applied in common to the setting of the degrees of other musical sound effects such as pitch change, resonance, cut-off, and the like. Accordingly, it is unnecessary to prepare the aspect information L14 and the like separately for each type of musical sound effect, and thus memory resources can be saved. In the X-direction aspect information table 11b of
In the aspect information L14, by changing the degree of the musical sound effect in this way, the musical sound effects assigned to the detection positions in the X-direction are applied only to the tone A when the input value is 0; only to the tones A and B when the input value is 1-40; only to the tones A, B, and C when the input value is 41-80; and to all of the tones A-D when the input value is 81 or more. Accordingly, depending on the detection position in the X-direction specified by the performer H on the front surface panel 81 of the ribbon 5, the number of the tones A-D to which the musical sound effects assigned to the detection positions in the X-direction are applied can be switched rapidly.
Furthermore, if the performer H continuously specifies positions by sliding the finger from one end side to the other end side (that is, from the input value of 0 to the input value of 127) in the X-direction of the front surface panel 81, the musical sound effect can be applied so as to overlay the tones A-D in order. In addition, because the degrees of the musical sound effects of the tones A-D are increased as a linear function of the change of the input value, the change of at least one of the degrees of the musical sound effects of the tones A-D always rises to the right. Accordingly, at least one of the degrees of the musical sound effects of the tones A-D is always increasing when the detection position is continuously changed from one end side to the other end side in the X-direction of the front surface panel 81. Accordingly, a musical sound rich in dynamic feeling (excitement feeling) obtained by the musical sound effect can be produced.
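A minimal sketch of this kind of per-tone aspect information follows; the onset values and the linear ramps are assumptions chosen to be consistent with the input-value ranges described above, not the actual values stored in the aspect information L14.

```python
# Assumed onset input values at which each of the tones A-D starts to receive
# the effect; from its onset, each degree is taken to rise linearly to 127 at
# the maximum input value of 127.
ONSET = {"A": 0, "B": 1, "C": 41, "D": 81}

def degrees_level_1(input_value: int) -> dict[str, int]:
    """Per-tone degrees (0-127) of the assigned musical sound effect for a
    given input value, in the style of the aspect level 1 information."""
    degrees = {}
    for tone, onset in ONSET.items():
        if input_value < onset:
            degrees[tone] = 0                    # effect not yet applied to this tone
        else:
            degrees[tone] = round(1 + 126 * (input_value - onset) / (127 - onset))
    return degrees
```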
On the other hand, if the performer H continuously specifies from the other end side to one end side (that is, from the input value of 127 to the input value of 0) in the X-direction of the front surface panel 81, the musical sound effects of the tones A-D that are applied can be released in order. Accordingly, by continuously specifying the front surface panel 81, an expressive performance rich in change of the degrees of the musical sound effects of the tones A-D can be achieved.
In addition, the aspect information L13 shown in
Next, an aspect level 2 which is an aspect level different from the aspect level 1 is described with reference to
As shown in
In the aspect information L24, by changing the degree of the musical sound effect in this way, when the input values are 0, 40, 80, 127, the degree of the musical sound effect with respect to only one tone within the tones A, B, C, D becomes the maximum value of 127 and the degrees of the musical sound effects with respect to the other tones become 0. Accordingly, by specifying the detection positions in the X-direction corresponding to the input values of 0, 40, 80, 127, the musical sound effects assigned to the detection positions in the X-direction can be applied to only one tone.
In addition, the musical sound effects assigned to the detection positions in the X-direction are only applied to the tones A, B when the input value is 1-40; the musical sound effects assigned to the detection positions in the X-direction are applied to the tones B, C when the input value is 41-80; and the musical sound effects assigned to the detection positions in the X-direction are applied to the tones C, D when the input value is 81 or more. That is, the degrees of the musical sound effects with respect to only two tones within the four tones can be set finely.
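A corresponding sketch for the aspect level 2 style of change is given below; the peak positions and the linear crossfades are assumptions consistent with the ranges described above, not the actual contents of the aspect information L24.

```python
# Assumed input values at which each tone alone reaches the maximum degree of
# 127; between two neighbouring peaks the two corresponding tones are
# crossfaded linearly, so at most two tones are non-zero at any input value.
PEAKS = [("A", 0), ("B", 40), ("C", 80), ("D", 127)]

def degrees_level_2(input_value: int) -> dict[str, int]:
    """Per-tone degrees in the style of the aspect level 2 information."""
    degrees = {tone: 0 for tone, _ in PEAKS}
    for (t_lo, p_lo), (t_hi, p_hi) in zip(PEAKS, PEAKS[1:]):
        if p_lo <= input_value <= p_hi:
            frac = (input_value - p_lo) / (p_hi - p_lo)
            degrees[t_lo] = round(127 * (1.0 - frac))
            degrees[t_hi] = round(127 * frac)
            break
    return degrees
```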
In addition, for example, a volume change is set as the musical sound effect for the detection positions in the X-direction, a clear guitar sound is set as the tone A, and tones with increasingly strong distortion are set as the tones B-D in the order of tone B→tone C→tone D. If the performer H continuously specifies positions from one end side to the other end side in the X-direction of the front surface panel 81, the degree of distortion of the produced musical sound can be increased gradually; on the other hand, if the performer H discretely specifies a position in the X-direction of the front surface panel 81, a musical sound with the degree of distortion corresponding to this position can be produced.
In addition, similar to the aspect level 1, the aspect information L23 shown in
In this way, the aspect information of a plurality of aspect levels is stored in the X-direction aspect information table 11b, and thus an aspect level suitable for the preference of the performer H or the genre or tune of a song to be played can be selected from the plurality of aspect levels, and the degree of the musical sound effect can be changed appropriately. In addition, the change of the degree of the musical sound effect can be switched in various ways by switching the aspect level during performance, and thus an expressive performance can be achieved.
Return to
As shown in
In the embodiment, the input value for the operation amount in the Y-direction of the operation bar 6 is set to “0” in a state in which the operation bar 6 is not operated by the performer H, and is set to “127” in a state in which the operation bar 6 is reclined toward the ribbon 5 side as much as possible, and thereby the operation amount is expressed as integers divided equally into 128 steps. In addition, the input value for the pressing force in the Z-direction of the ribbon 5 is set to “0” in a state in which no pressing force is loaded, and is set to “127” in a state in which the maximum pressing force that can be detected by the ribbon 5 is applied, and thereby the pressing force is expressed as integers divided equally into 128 steps.
As shown in
In this way, in the embodiment, only the aspect information of one aspect level is stored in the YZ-direction aspect information table 11c, and the aspect information is set as so-called simple aspect information in which the degrees of the musical sound effects of the tones A-D increase as a linear function of the input values. The reason is that, compared with the detection positions in the X-direction of the front surface panel 81 of the ribbon 5, it is hard for the performer H to know how much operation amount in the Y-direction of the operation bar 6 or pressing force in the Z-direction toward the front surface panel 81 is being applied; moreover, when the degree of the musical sound effect is changed in a complicated manner according to a plurality of pieces of aspect information with respect to the operation amount in the Y-direction of the operation bar 6 or the pressing force in the Z-direction of the front surface panel 81, it is even harder to know the aspect of this change.
Therefore, by changing the degree of the musical sound effect assigned to the operation amount in the Y-direction of the operation bar 6 or the pressing force in the Z-direction of the ribbon 5 according to one piece of simple aspect information, the performer H can easily grasp the change of the degree of the musical sound effect, and thus the operability of the keytar 1 can be improved. On the other hand, if musical sound effects for which a complicated change of the degrees is intended are assigned to the detection positions in the X-direction of the ribbon 5, as in the above-described aspect information L14 and the like, the degrees of the musical sound effects with respect to the tones A-D can be changed finely. In addition, by appropriately switching the musical sound effects assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6, and the pressing force in the Z-direction of the ribbon 5, the change of the degrees of the musical sound effects can be switched flexibly corresponding to the preference of the performer H.
Return to
The sound source 13 is a device which outputs waveform data corresponding to performance information input from the CPU 10. The DSP 14 is an arithmetic device for performing an arithmetic processing on the waveform data input from the sound source 13. The DAC 16 is a conversion device which converts the waveform data input from the DSP 14 into analog waveform data. The amplifier 17 is an amplification device which amplifies the analog waveform data output from the DAC 16 with a predetermined gain, and the speaker 18 is an output device which emits (outputs) the analog waveform data amplified by the amplifier 17 as a musical sound.
Next, main processing executed by the CPU 10 is described with reference to
In the main processing, firstly, a confirmation is made on whether a selection operation of the tone or the aspect level is performed by the setting key 3 (see
When the selection operation of the tones or the aspect level is performed in the processing of S1 (S1: Yes), the aspect information corresponding to the selected number of tones and the selected aspect level is acquired from the X-direction aspect information table 11b and stored in the X-direction aspect information memory 12d (S2), and the aspect information corresponding to the number of tones that is set is acquired from the YZ-direction aspect information table 11c and stored in the YZ-direction aspect information memory 12e (S3). At this time, the setting of which of the selected tones correspond to the tones A-D is also performed. Besides, the CPU 10 executing the processing of S1 is an example of the tone selection unit 27 in
Then, after the processing of S3, an instruction of tone change is output to the sound source 13 (S4). On the other hand, in the processing of S1, when the selection operation of the tones is not performed (S1: No), the processing of S2-S4 is skipped.
After the processing of S1 or S4, a confirmation is made on whether the musical sound effects assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6, or the pressing force in the Z-direction of the ribbon 5 are changed by the setting key 3 (S5). When the assigned musical sound effects are changed (S5: Yes), mutually different musical sound effects are respectively assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6, and the pressing force in the Z-direction of the ribbon 5 (S6). Accordingly, it can be prevented that the same type of musical sound effect is assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6, or the pressing force in the Z-direction of the ribbon 5, and thus a feeling of strangeness during the performance of the keytar 1 can be suppressed. On the other hand, in the processing of S5, when the assigned musical sound effects are not changed (S5: No), the processing of S6 is skipped.
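A rough sketch of the reassignment in S6 is shown below; the effect names and the dictionary-based interface are assumptions for illustration, and the actual selection via the setting key 3 is not modeled.

```python
# Assumed list of musical sound effect types available for assignment.
AVAILABLE_EFFECTS = ["volume", "pitch change", "cut-off", "resonance"]

def assign_effects(requested: dict[str, str]) -> dict[str, str]:
    """Assign one musical sound effect to each control axis (X detection
    position, Y operation amount, Z pressing force), guaranteeing that no
    effect type ends up assigned to two axes."""
    assigned, used = {}, set()
    for axis in ("X", "Y", "Z"):
        effect = requested.get(axis)
        if effect is None or effect in used:
            # fall back to the first effect type not taken yet
            effect = next(e for e in AVAILABLE_EFFECTS if e not in used)
        assigned[axis] = effect
        used.add(effect)
    return assigned
```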
After the processing of S5 or S6, the detection positions in the X-direction of the ribbon 5 are acquired, and the detection positions in the X-direction converted into the input values are stored in the X-direction input value memory 12a (S7); the operation amount in the Y-direction of the operation bar 6 is acquired, and the operation amount in the Y-direction converted into the input values is stored in the Y-direction input value memory 12b (S8); the pressing force in the Z-direction from the ribbon 5 is acquired, and the pressing force in the Z-direction converted into the input values is stored in the Z-direction input value memory 12c (S9).
After the processing of S9, musical sound generation processing is executed (S10). Herein, the musical sound generation processing is described with reference to
When the keys 2a of the keyboard 2 are turned on in the processing of S11 (S11: Yes), a confirmation is made on whether the keys 2a of the keyboard 2 are changed from turn-off to turn-on (S12). Specifically, a confirmation is made on whether the same key 2a which is off in the last musical sound generation processing is turned on in the present musical sound generation processing.
When the keys 2a of the keyboard 2 are changed from turn-off to turn-on (S12: Yes), an instruction for producing the tones selected in the processing of S1 and S4 of
After the processing of S12 or S13, the degrees of the respective musical sound effects assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6, and the pressing force in the Z-direction of the ribbon 5 are changed. Specifically, after the processing of S12 or S13, the degrees of the musical sound effects of the respective tones in the aspect information of the X-direction aspect information memory 12d corresponding to the input values stored in the X-direction input value memory 12a are acquired, and are respectively applied to the degrees of the musical sound effects assigned to the detection positions in the X-direction of the ribbon 5 (S14). The CPU 10 executing the processing of S14 is an example of the musical sound effect change unit 24 in
After the processing of S14, the degrees of the musical sound effects of the respective tones in the aspect information of the YZ-direction aspect information memory 12e corresponding to the input value for the operation amount in the Y-direction of the operation bar 6 are acquired, and are respectively applied to the degree of the musical sound effect assigned to the operation amount in the Y-direction of the operation bar 6 (S15); the degrees of the musical sound effects of the respective tones in the aspect information of the YZ-direction aspect information memory 12e corresponding to the input value for the pressing force in the Z-direction of the ribbon 5 are acquired, and are respectively applied to the degree of the musical sound effect assigned to the pressing force in the Z-direction of the ribbon 5 (S16).
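A compact sketch of the degree application in S14-S16 follows; set_effect is a hypothetical call standing in for the interface to the sound source 13, and degrees_level_1 / degrees_level_2 refer to the earlier sketches.

```python
def apply_effect_degrees(x_value: int, y_value: int, z_value: int,
                         x_aspect, yz_aspect, assigned: dict[str, str],
                         sound_source) -> None:
    """For each control axis, look up the per-tone degrees in the selected
    aspect information for the current input value and apply them to the
    musical sound effect assigned to that axis (S14-S16 style)."""
    for axis, value, aspect in (("X", x_value, x_aspect),
                                ("Y", y_value, yz_aspect),
                                ("Z", z_value, yz_aspect)):
        effect = assigned[axis]                      # e.g. "volume", "pitch change"
        for tone, degree in aspect(value).items():   # e.g. degrees_level_1 above
            sound_source.set_effect(tone, effect, degree)
```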
That is, by the processing of S14-S16, the degrees of the musical sound effects assigned to the detection positions in the X-direction of the ribbon 5, the operation amount in the Y-direction of the operation bar 6 and the pressing force in the Z-direction of the ribbon 5 can be changed based on the input value which is based on each detection position, the operation amount in the Y-direction, and the pressing force. Particularly, in the musical sound effects assigned to the detection positions in the X-direction of the ribbon 5, as described above in
When the keys 2a of the keyboard 2 are turned off in the processing of S11 (S11: No), a confirmation is made on whether the keys 2a of the keyboard 2 are changed from turn-on to turn-off (S17). Specifically, a confirmation is made on whether the same key 2a which is on in the last musical sound generation processing is turned off in the present musical sound generation processing.
When the keys 2a of the keyboard 2 are changed from turn-on to turn-off (S17: Yes), an instruction for sound-deadening the tones corresponding to the keys 2a is output to the sound source 13 (S18). On the other hand, when the keys 2a of the keyboard 2 are not changed from turn-on to turn-off (S17: No), the corresponding sound-deadening instruction of the keys 2a has already been output and thus the processing of S18 is skipped.
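A rough sketch of the key transition handling in S11-S13 and S17-S18 is given below (the per-key loop of S19 is folded into the function); note_on and note_off are hypothetical sound source calls, not the actual interface of the sound source 13.

```python
def scan_keys(keys_now: dict[int, bool], keys_prev: dict[int, bool],
              selected_tones: list[str], sound_source) -> None:
    """Detect off-to-on and on-to-off transitions of each key 2a and issue the
    corresponding sound production / sound-deadening instructions for all of
    the selected tones."""
    for key, is_on in keys_now.items():
        was_on = keys_prev.get(key, False)
        if is_on and not was_on:            # changed from off to on (S12: Yes)
            for tone in selected_tones:     # S13: produce every selected tone
                sound_source.note_on(key, tone)
        elif was_on and not is_on:          # changed from on to off (S17: Yes)
            for tone in selected_tones:     # S18: sound-deaden the tones
                sound_source.note_off(key, tone)
```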
After the processing of S16-S18, a confirmation is made on whether the processing of S11-S18 has been completely performed on all the keys 2a of the keyboard 2 (S19); when the processing is not completed, the processing of S11-S18 is performed on the keys 2a on which it has not yet been performed. On the other hand, when the processing of S11-S18 has been completely performed on all the keys 2a of the keyboard 2 (S19: Yes), the musical sound generation processing is ended, and the processing returns to the main processing of
Return to
A description is given above based on the above-described embodiments, but it can be easily inferred that various improvements and changes can be made.
In the above-described embodiments, the keytar 1 is illustrated as the electronic musical instrument. However, the disclosure is not limited hereto and may be applied to other electronic musical instruments such as an electronic organ, an electronic piano or the like in which a plurality of musical sound effects are applied to the tones that are produced. In this case, it is sufficient if the ribbon 5 and the operation bar 6 are arranged on the electronic musical instrument.
In the above-described embodiments, the degrees of all the musical sound effects are changed according to the aspect information stored in the X-direction aspect information table 11b and the YZ-direction aspect information table 11c. However, the disclosure is not limited hereto, and the degrees of the musical sound effects may be changed according to different aspect information for each musical sound effect. In this case, the X-direction aspect information table 11b and the YZ-direction aspect information table 11c may be arranged for each musical sound effect, and the aspect information corresponding to the musical sound effect assigned to the detection positions in the X-direction, the operation amount in the Y-direction, or the pressing force in the Z-direction is acquired from each of the X-direction aspect information table 11b and the YZ-direction aspect information table 11c.
In the above-described embodiments, one musical sound effect is assigned to the detection positions in the X-direction of the ribbon 5 in the processing of S6 in
For example, the musical sound effects of volume change, pitch change, cut-off, and resonance may be assigned to the detection positions in the X-direction of the ribbon 5; furthermore, among these musical sound effects, the volume change may be assigned to the tone A, the pitch change to the tone B, the cut-off to the tone C, and the resonance to the tone D. The degree of the musical sound effect of each of the tones A-D is acquired from the aspect information of the X-direction aspect information memory 12d, the acquired degree with respect to the tone A is applied to the degree of the volume change assigned to the tone A, and the degrees with respect to the tones B-D acquired similarly are applied to the respective degrees of the pitch change, the cut-off, and the resonance assigned to the tones B-D.
With this configuration, the degrees of the plurality of musical sound effects assigned to the respective tones A-D can be changed corresponding to the detection positions in the X-direction of the ribbon 5, and thus a performance having a high degree of freedom can be achieved. In addition, because the degrees of the plurality of musical sound effects are changed according to the same aspect information, the degrees of the plurality of musical sound effects are respectively changed in a similar aspect corresponding to the detection positions in the X-direction of the ribbon 5. Accordingly, an expressive performance which gives regularity to the changes of the plurality of different musical sound effects can be achieved.
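A sketch of this variation is shown below; the per-tone effect names follow the example above, and aspect and set_effect are the same hypothetical helpers as in the earlier sketches.

```python
# Assumed mapping of one musical sound effect to each tone, all driven by the
# same aspect information.
EFFECT_PER_TONE = {"A": "volume", "B": "pitch change", "C": "cut-off", "D": "resonance"}

def apply_per_tone_effects(input_value: int, aspect, sound_source) -> None:
    """Apply the degree obtained from the shared aspect information to a
    different musical sound effect for each of the tones A-D."""
    for tone, degree in aspect(input_value).items():
        sound_source.set_effect(tone, EFFECT_PER_TONE[tone], degree)
```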
In the above-described embodiments, in the processing of S6 in
For example, pitch changes are assigned as the musical sound effects for the detection positions in the X-direction of the ribbon 5 and for the operation amount in the Y-direction of the operation bar 6, and resonance is assigned as the musical sound effect for the pressing force in the Z-direction of the ribbon 5. Then, the performer H can achieve a performance in which, after the operation bar 6 is operated with the index finger of the left hand to change the pitch continuously, the pitch is changed discretely by specifying positions on the ribbon 5 with the ring finger of the left hand, and furthermore, the sound production is controlled by a nuance of resonance corresponding to the pressing force applied to the ribbon 5 with the ring finger of the left hand. Accordingly, by a left-hand operation substantially similar to that on a real guitar, performance expressions unique to guitar playing can be achieved. These performance expressions correspond to, in a performance on a real guitar, a so-called choking (bending) technique in which the pitch of a picked string is changed by pulling the string with the index finger of the left hand that presses the string, followed by a so-called hammer-on technique in which another fret on the same string is strongly pressed (in a beating manner) with the ring finger of the left hand to produce a sound.
In the above-described embodiments, the aspect information corresponding to the aspect level of the X-direction aspect information table 11b is set for the musical sound effects for the detection positions in the X-direction, and the aspect information of the YZ-direction aspect information table 11c is set for the musical sound effects for the operation amount in the Y-direction and the pressing force in the Z-direction. However, the disclosure is not limited hereto; the aspect information corresponding to the aspect level of the X-direction aspect information table 11b may be set for the musical sound effects for the operation amount in the Y-direction and the pressing force in the Z-direction, or the aspect information of the YZ-direction aspect information table 11c may be set for the musical sound effects for the detection positions in the X-direction.
For example, the aspect information corresponding to the aspect level of the X-direction aspect information table 11b is set in the musical sound effect for the pressing force in the Z-direction, and the aspect level is set to the aspect level 2 and is only set for two tones, namely the tone A and the tone B; furthermore, the musical sound effect for the pressing force in the Z-direction is set to volume change. Accordingly, the volumes of the tone A and the tone B can be changed according to the aspect information L22 (see
In the above-described embodiments, in
In the above-described embodiments, the degrees of the assigned musical sound effects are respectively changed according to the detection positions in the X-direction, the operation amount in the Y-direction, and the pressing force in the Z-direction. However, the disclosure is not limited hereto, and other settings may be changed corresponding to the detection position in the X-direction, the operation amount in the Y-direction, and the pressing force in the Z-direction. For example, the type of the musical sound effects assigned to the detection positions in the X-direction or the operation amount in the Y-direction may be changed corresponding to the pressing force in the Z-direction, or the type or the number of the tones assigned to the keys 2a may be changed corresponding to the operation amount in the Y-direction.
In the above-described embodiments, the keytar 1 is equipped with the ribbon 5 and the operation bar 6. However, the disclosure is not limited hereto; the operation bar 6 may be omitted so that only the ribbon 5 is arranged on the keytar 1, or the ribbon 5 may be omitted so that only the operation bar 6 is arranged on the keytar 1. In addition, a plurality of ribbons 5 or operation bars 6 may be arranged on one keytar 1. In this case, different musical sound effects may be respectively assigned to the detection positions in the X-direction and the pressing force in the Z-direction of each ribbon 5 and to the operation amount in the Y-direction of each operation bar 6. Furthermore, when a plurality of ribbons 5 are arranged, different aspect levels may be set for the respective detection positions in the X-direction.
In the above-described embodiments, the number of tones which are sound production objects of one key 2a is four at most. However, the disclosure is not limited hereto, and the maximum number of tones which are the sound production objects of one key 2a may be five or more, or three or less. In this case, the degrees of the musical sound effects for the maximum number of tones which are the sound production objects of one key 2a may be stored in the aspect information L14, L24 and the like of
The numerical values mentioned in the above-described embodiments are merely examples, and certainly other numerical values can be adopted.
Foreign Application Priority Data

Number | Date | Country | Kind
2018-170745 | Sep. 2018 | JP | national

References Cited

U.S. Patent Documents

Number | Name | Date | Kind
5561257 | Cardey, III | Oct. 1996 | A
6018118 | Smith et al. | Jan. 2000 | A
8426719 | Shim | Apr. 2013 | B2
9799316 | Owens | Oct. 2017 | B1
10157602 | Hanks | Dec. 2018 | B2
10621963 | Starr | Apr. 2020 | B2
20030188627 | Longo | Oct. 2003 | A1
20120297962 | O'Donnell | Nov. 2012 | A1
20130255474 | Hanks | Oct. 2013 | A1
20160163298 | Butera | Jun. 2016 | A1
20190066645 | Abadi | Feb. 2019 | A1

Foreign Patent Documents

Number | Date | Country
H078896 | Feb. 1995 | JP
H08297489 | Nov. 1996 | JP
H10124055 | May 1998 | JP
H10319961 | Dec. 1998 | JP
2002351468 | Dec. 2002 | JP
2017122824 | Jul. 2017 | JP
2005096133 | Oct. 2005 | WO
2018136829 | Jul. 2018 | WO

Other Publications

"Search Report of Europe Counterpart Application", dated Jan. 2, 2020, pp. 1-8.

Publication

Number | Date | Country
20200082801 A1 | Mar. 2020 | US