This application claims priority to and the benefit of Japanese Patent Application No. 2022-046538 filed on Mar. 23, 2022, the entire contents of which are incorporated herein by reference.
The disclosure of the present specification relates to an information processing device, a method and a computer-readable non-transitory recording medium.
A technology for reproducing musical sounds of plucked string instruments on electronic keyboard musical instruments is known. For example, JP2004-264501A describes an electronic keyboard musical instrument capable of generating musical sounds with a special playing style such as a glissando playing style and a hammer-on playing style.
In the electronic keyboard musical instrument described in JP2004-264501A, a function for allowing a player to select a playing style is assigned to some keys (keys in the non-sound-generating key area) in the keyboard. The player can select a playing style at a time of generating a musical sound by operating a key in the non-sound-generating key area.
An information processing device according to an embodiment of the present disclosure includes a control unit configured to execute processing of acquiring first information about a first musical sound, acquiring second information about a second musical sound played or generated after the first musical sound, acquiring a probability based on the acquired first information and second information, and selecting a playing style at a time of generating the second musical sound from a plurality of types of playing styles, according to the probability.
Referring to the drawings, an information processing device according to an embodiment of the present disclosure, a method that is executed by the information processing device, which is an example of a computer, and a computer-readable non-transitory recording medium will be described in detail.
The electronic musical instrument 1 includes, as a hardware configuration, a processor 10, a RAM (Random Access Memory) 11, a flash ROM (Read Only Memory) 12, a rotation volume 13, an A/D converter 14, a switch panel 15, an input/output interface 16, a keyboard 17, a key scanner 18, an LCD (Liquid Crystal Display) 19, an LCD controller 20, a serial interface 21, a waveform ROM 22, a sound source LSI (Large Scale Integration) 23, a D/A converter 24, an amplifier 25 and a speaker 26. Each unit of the electronic musical instrument 1 is connected by a bus 27.
The processor 10 is configured to collectively control the electronic musical instrument 1 by reading programs and data stored in the flash ROM 12 and using the RAM 11 as a work area.
The processor 10 is, for example, a single processor or a multi-processor, and includes at least one processor. In the case of a configuration including a plurality of processors, the processor 10 may be packaged as a single device, or may be configured by a plurality of devices physically separated within the electronic musical instrument 1. The processor 10 may also be called a “control unit”.
The processor 10 includes, as functional blocks, a first information acquisition unit 100A, a second information acquisition unit 100B, a probability determination unit 100C, and a playing style selection unit 100D. By operations of the functional blocks, it is possible to generate a musical sound with a natural playing style suitable for performance in the electronic musical instrument 1.
The RAM 11 is configured to temporarily hold data and programs. In the RAM 11, programs and data read from the flash ROM 12 and other data required for communication are held.
The flash ROM 12 is a non-volatile semiconductor memory such as a flash memory, an erasable programmable ROM (EPROM) or an electrically erasable programmable ROM (EEPROM), and serves as a secondary storage device or an auxiliary storage device. In the flash ROM 12, programs and data that are used by the processor 10 so as to perform various processing, including a program 120, are stored.
In the present embodiment, each functional block of the processor 10 is implemented by a program 120 that is software. Note that, each functional block of the processor 10 may also be partially or entirely implemented by hardware such as a dedicated logic circuit.
The rotation volume 13 includes volumes 132, 134G, 134H and 134P. The volume 132 is an operation knob for adjusting an overall sound volume. The volume 134G is an operation knob for adjusting a probability that a musical sound will be generated with a glissando playing style. The volume 134H is an operation knob for adjusting a probability that a musical sound will be generated with a hammer-on playing style. The volume 134P is an operation knob for adjusting a probability that a musical sound will be generated with a pull-off playing style. In other words, the electronic musical instrument 1 has a means for receiving a player's operation and adjusting an occurrence probability of each playing style.
The A/D converter 14 is a device configured to convert analog voltages corresponding to volume positions of various volumes included in the rotation volume 13 into digital data. When a player operates the volume 132 or the like, a signal indicating a content of the operation is output to the processor 10 via the A/D converter 14.
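For illustration only, the digital value obtained from the A/D converter 14 might be scaled onto the 0 to 100 range used later for the playing-style coefficients. The 10-bit resolution and the function below are assumptions for the sake of a sketch, not part of the specification.

```c
#include <stdint.h>

#define ADC_MAX 1023u  /* assumed 10-bit A/D converter */

/* Scale a raw A/D reading of a volume position onto 0..100. */
static uint8_t coefficient_from_adc(uint16_t adc_value)
{
    if (adc_value > ADC_MAX) adc_value = ADC_MAX;
    return (uint8_t)(((uint32_t)adc_value * 100u) / ADC_MAX);
}
```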
The switch panel 15 is provided on a housing of the electronic musical instrument 1, and includes a mechanical switch, a capacitive non-contact switch, a membrane switch or the like, and operation elements such as a button, a knob, a rotary encoder, a wheel and a touch panel. By operating the switch panel 15, the player can specify, for example, a tone color (guitar, bass, piano, etc.). When the player operates the switch panel 15, a signal indicating a content of the operation is output to the processor 10 via the input/output interface 16.
The keyboard 17 has a plurality of white keys and black keys as a plurality of performance operation elements. The respective keys are associated with different pitches.
The key scanner 18 is configured to monitor key pressing and key releasing on the keyboard 17. The key scanner 18 is configured to output key press event information to the processor 10 when a player's key pressing operation is detected. The key press event information includes information (key number) about a pitch of the key related to the key pressing operation. This information is also called a key number, a MIDI key, or a note number.
In the present embodiment, a means for measuring a key pressing speed (velocity value) is provided separately, and the velocity value measured by this means is also included in the key press event information. Illustratively, a plurality of contact switches are provided for each key, and a velocity value is measured from the difference in conduction times of the contact switches when a key is pressed. The velocity value can also be called a value indicating a strength of a key pressing operation, or a value indicating a loudness (sound volume) of a musical sound.
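One way such a measurement could be realized is sketched below: the shorter the time between the closings of the two contact switches, the larger the velocity value. The 1 to 127 range, the calibration constants and the function name are assumptions for illustration only.

```c
#include <stdint.h>

/* Map the travel time between two contact switches (microseconds) to a velocity
 * value of 1..127; a shorter travel time means a faster key press, hence a larger velocity. */
static uint8_t velocity_from_travel_time(uint32_t travel_us)
{
    const uint32_t FASTEST_US = 2000;   /* assumed: presses this fast map to 127 */
    const uint32_t SLOWEST_US = 60000;  /* assumed: presses this slow map to 1   */
    if (travel_us <= FASTEST_US) return 127;
    if (travel_us >= SLOWEST_US) return 1;
    /* linear interpolation between the two calibration points */
    return (uint8_t)(127 - (travel_us - FASTEST_US) * 126 / (SLOWEST_US - FASTEST_US));
}
```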
The LCD 19 is configured to be driven by an LCD controller 20. When the LCD controller 20 drives the LCD 19 in response to a control signal from the processor 10, a screen corresponding to the control signal is displayed on the LCD 19. The LCD 19 may be replaced with a display device such as organic EL (Electro Luminescence) or LED (Light Emitting Diode). The LCD 19 may also be a touch panel.
The serial interface 21 is an interface configured to input/output MIDI data (MIDI message) in a serial format with respect to an external MIDI (Musical Instrument Digital Interface) device under control of the processor 10.
The waveform ROM 22 is configured to store a set of waveform data for each tone color (guitar, bass, piano, etc.). The set of waveform data for each tone color includes waveform data (for convenience, referred to as “waveform data group”) of some (plural) key numbers among all key numbers corresponding to each key of the keyboard 17. In the present embodiment, by changing a reading speed (in other words, a pitch) of the waveform data, it is possible to generate even a musical sound of a key number not included in the waveform data group. In other words, it is possible to generate musical sounds of all key numbers corresponding to each key of the keyboard 17 by using a waveform data group including only some key numbers.
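As a concrete illustration of this, under equal temperament the read-speed ratio is 2^((n − n0)/12), where n is the target key number and n0 is the key number at which the waveform was recorded. The sketch below is only an illustration of this relationship; the function name is not from the specification.

```c
#include <math.h>

/* Read-speed ratio needed to reproduce a waveform recorded at base_note
 * at the pitch of target_note (equal temperament: one semitone = a factor of 2^(1/12)). */
static double read_speed_ratio(int target_note, int base_note)
{
    return pow(2.0, (target_note - base_note) / 12.0);
}
```

For example, a waveform recorded for key number 60 would be read about 1.26 times faster to produce key number 64, and at about 0.75 times the normal speed to produce key number 55.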
The waveform ROM 22 is configured to further store a waveform data group for each tone color with respect to each playing style. The playing style is roughly divided into a normal playing style and a special playing style. In the present embodiment, a glissando playing style, a hammer-on playing style, and a pull-off playing style are assumed as the special playing style. That is, the waveform ROM 22 is configured to store waveform data groups of a plurality of types (four, here) of playing styles (normal playing style, glissando playing style, hammer-on playing style, and pull-off playing style) for each tone color.
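For illustration only, this organization might be modeled as a two-dimensional lookup indexed by tone color and playing style; the structure, type names and tone list below are assumptions, not the specification's data format.

```c
/* Hypothetical layout: one waveform data group per (tone color, playing style) pair. */
enum { TONE_GUITAR, TONE_BASS, TONE_PIANO, NUM_TONES };                      /* illustrative */
enum { STYLE_NORMAL, STYLE_GLISSANDO, STYLE_HAMMER_ON, STYLE_PULL_OFF, NUM_STYLES };

typedef struct {
    const short *samples;    /* PCM data recorded for one key number           */
    int          length;     /* number of samples                              */
    int          base_note;  /* key number at which the waveform was recorded  */
} Waveform;

typedef struct {
    const Waveform *waveforms;  /* waveform data group: a few recorded key numbers */
    int             count;
} WaveformGroup;

/* The processor instructs the sound source LSI which group (and waveform) to read. */
extern const WaveformGroup waveform_rom[NUM_TONES][NUM_STYLES];
```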
Note that, the special playing styles listed here are only examples. In other embodiments, other special playing styles may be applied.
The processor 10 is configured to instruct the sound source LSI 23 to read the corresponding waveform data from the plurality of waveform data stored in the waveform ROM 22. The waveform data to be read is determined according to, for example, a tone color selected by a player's operation, key press event information, and a playing style selected in processing described later (refer to
The sound source LSI 23 is configured to generate a musical sound, based on the waveform data read from the waveform ROM 22 under instruction of the processor 10. The sound source LSI 23 has, for example, 128 generator sections, and can generate up to 128 musical sounds at the same time. Note that, in the present embodiment, the processor 10 and the sound source LSI 23 are configured as separate devices, but in other embodiments, the processor 10 and the sound source LSI 23 may be configured as one processor.
A digital audio signal of the musical sound generated by the sound source LSI 23 is converted into an analog signal by the D/A converter 24, amplified by the amplifier 25 and output to the speaker 26.
By executing the processing in
In the following description, as an example, a case where a plucked string instrument such as a guitar or a bass is set as a tone color is assumed.
In the processing shown in
When the glissando playing style is not selected, predetermined parameters (a difference in loudness and a difference in interval between the musical sounds, the number of consecutive generations of a musical sound with the hammer-on playing style, etc.) are acquired, and based on these parameters, it is determined whether there is a possibility that the musical sound will be generated with the hammer-on playing style. When it is determined that there is such a possibility, an occurrence probability thereof is calculated, and based on the calculated occurrence probability and a random number, it is determined whether to generate the musical sound with the hammer-on playing style. When the hammer-on playing style is selected as a result of this determination, the musical sound is generated with the hammer-on playing style.
When the hammer-on playing style is not selected, predetermined parameters (a difference in loudness and a difference in interval between the musical sounds, the number of consecutive sound generations of a musical sound with the pull-off playing style, etc.) are acquired, and based on these parameters, it is determined whether there is a possibility that the musical sound will be generated with the pull-off playing style. When it is determined that there is such a possibility, an occurrence probability thereof is calculated, and based on the calculated occurrence probability and a random number, it is determined whether to generate the musical sound with the pull-off playing style. When the pull-off playing style is selected as a result of this determination, the musical sound is generated with the pull-off playing style.
When none of the above special playing styles are selected, the musical sound is generated with the normal playing style.
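The order of determination described above could be summarized, as a minimal sketch, in the following form. The enum, the draw() helper and the use of rand() are illustrative assumptions; Rg, Ro and Rf denote the occurrence probabilities (in %) of the glissando, hammer-on and pull-off playing styles, computed from the parameters as described later.

```c
#include <stdlib.h>

typedef enum { STYLE_NORMAL, STYLE_GLISSANDO, STYLE_HAMMER_ON, STYLE_PULL_OFF } Style;

/* Return 1 with probability p_percent (0..100), otherwise 0. */
static int draw(double p_percent)
{
    return ((double)rand() / RAND_MAX) * 100.0 < p_percent;
}

/* Determine the playing style for the current musical sound, checking the
 * special playing styles in the order described above. */
static Style select_style(double rg, double ro, double rf)
{
    if (draw(rg)) return STYLE_GLISSANDO;   /* checked first  */
    if (draw(ro)) return STYLE_HAMMER_ON;   /* checked second */
    if (draw(rf)) return STYLE_PULL_OFF;    /* checked third  */
    return STYLE_NORMAL;                    /* otherwise the normal playing style */
}
```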
In this way, the processor 10 acquires a predetermined parameter, determines a probability that a special playing style will be selected, based on the acquired parameter, and selects a playing style according to the determined probability. Incidentally, the processor 10 operates as a probability determination unit 100C configured to determine a probability that a special playing style will be selected. In addition, the processor 10 operates as a playing style selection unit 100D configured to select a playing style according to the determined probability.
Note that, in the present embodiment, the possibility of sound generation is determined in the order of the glissando playing style, the hammer-on playing style, and the pull-off playing style, but this order is only an example. This order can be changed as appropriate.
In the following description, a musical sound corresponding to a key pressing operation that becomes a trigger for starting execution of the processing in
Information about the first musical sound is described as "first information". The first information includes, for example, key press event information (key number, velocity value) corresponding to the first musical sound, an elapsed time Tf (unit: msec) after the contact of a playing means (a finger on the fret side, a pick, etc.) with the string of the plucked string instrument that generates the first musical sound (single sound) is released (a predetermined operation), i.e., after the string is released, an elapsed time To (unit: msec) after the first musical sound (single sound) is generated, and a playing style at a time of sound generation. Note that the elapsed time Tf is actually an elapsed time after the key is released, but is described as the time after the string is released, for convenience.
Information about the second musical sound is described as “second information”. The second information includes, for example, key press event information (key number, velocity value) corresponding to the second musical sound.
The processor 10 holds the first information and the second information in a work area of the RAM 11, for example. That is, the processor 10 operates as a first information acquisition unit 100A configured to acquire first information about a first musical sound, and as a second information acquisition unit 100B configured to acquire second information about a second musical sound generated later than the first musical sound.
In the processing shown in
Specifically, in step S101, the processor 10 determines whether a variable Gn (an integer equal to or greater than zero) is zero. The variable Gn indicates the number of musical sounds currently being generated. When the variable Gn is zero, only the current musical sound is generated, and therefore it is determined in step S101 that the current musical sound is a single sound. When the variable Gn is not zero, two or more musical sounds are generated at the same time, and therefore it is determined in step S101 that the current musical sound forms part of chord playing.
In the case of chord playing, it is difficult to generate sounds with a special playing style. In addition, in the electronic musical instrument 1, which is a keyboard musical instrument, it is difficult to perform a special playing style while playing chords (e.g., moving fingers in parallel while pressing a plurality of keys). For this reason, when it is determined that the current musical sound forms part of chord playing (step S101: NO), the processor 10 selects the normal playing style as the playing style for generating the current musical sound (step S102).
The processor 10 instructs the sound source LSI 23 to read out the waveform data of the normal playing style corresponding to the key press event information of the current musical sound from the plurality of waveform data stored in the waveform ROM 22. Thereby, the current musical sound is generated with the normal playing style.
The processor 10 sets a large value (for example, a value of 10,000) as the elapsed time To (step S103), increments the variable Gn by 1 (step S104), and ends the processing shown in
Here, the processing shown in
As shown in
The processor 10 decrements the variable Gn by 1 (step S202).
The processor 10 determines whether the variable Gn (i.e., the number of musical sounds being currently generated) is zero (step S203). When it is determined that the variable Gn is zero (i.e., the musical sound at the time of key release is a single sound) (step S203: YES), the processor 10 resets the elapsed time Tf to zero, starts measurement of the elapsed time Tf (step S204), and ends the processing shown in
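Pulling the state variables mentioned so far together, a partial sketch of the bookkeeping in steps S101 to S104 and S202 to S204 is shown below; the variable names and the representation of the elapsed times as plain millisecond values are assumptions.

```c
/* Partial sketch of the key-press (chord branch) and key-release bookkeeping.
 * gn mirrors the variable Gn; to_ms and tf_ms stand in for the elapsed times To and Tf. */
static int    gn    = 0;        /* number of musical sounds currently being generated */
static double to_ms = 10000.0;  /* elapsed time To; large initial value assumed       */
static double tf_ms = 10000.0;  /* elapsed time Tf; large initial value assumed       */

static void on_key_press_chord(void)
{
    /* reached when gn != 0 in step S101, i.e., the current sound is part of a chord */
    to_ms = 10000.0;  /* step S103: set a large value as the elapsed time To */
    gn++;             /* step S104 */
}

static void on_key_release(void)
{
    gn--;                       /* step S202 */
    if (gn == 0) tf_ms = 0.0;   /* steps S203-S204: restart measurement of Tf */
}
```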
Returning to the description of the processing shown in
First, the processor 10 acquires an occurrence probability Pa, based on the elapsed time Tf, which is one of the pieces of first information (step S105). The occurrence probability Pa is used when calculating the probability that the current musical sound will be generated with the special playing style. The occurrence probability Pa may be acquired as a first occurrence probability.
In step S105, the processor 10 acquires an occurrence probability Pa corresponding to the current elapsed time Tf (in other words, the elapsed time after the string is released until the current musical sound is generated) by using the function shown in
When the time interval from the string release of the previous musical sound until the current musical sound is generated is short (for example, when playing a fast passage), the string vibration has not yet converged and the energy of sound generation remains. In that state, it is physically difficult to generate the current musical sound as usual (with the normal playing style). For this reason, the shorter the elapsed time since the string release is, the higher the occurrence probability Pa is set. From another standpoint, as the elapsed time since the string release becomes longer, the string vibration converges and the energy of sound generation disappears, so that even when a special playing style is performed on the plucked string instrument, it is difficult to generate the musical sound at a sufficient sound volume. For this reason, the longer the elapsed time since the string release is, the lower the occurrence probability Pa is set.
Note that, in the present embodiment, in the reproduction of the plucked string instrument by the electronic musical instrument 1, it is regarded that immediately after the string release, the string vibration is hardly attenuated and the energy of sound generation remains with almost no loss. For this reason, in
When it is determined that the occurrence probability Pa acquired in step S105 is 0% (step S106: YES), the probability that a sound will be generated with a special playing style also becomes 0%. In this case, the processor 10 selects the normal playing style as the playing style at the time of generating the current musical sound (step S115).
When it is determined that the occurrence probability Pa acquired in step S105 is greater than 0% (step S106: NO), the processor 10 acquires the occurrence probability Pb, based on the elapsed time To, which is one of the pieces of first information (step S107). The occurrence probability Pb is also used when calculating the probability that the current musical sound will be generated with the special playing style. The occurrence probability Pb may be acquired as a second occurrence probability.
In step S107, the processor 10 acquires the occurrence probability Pb corresponding to the current elapsed time To (in other words, the elapsed time after the musical sound of the previous single sound is generated until the current musical sound is generated) by using the function shown in
Even when the string is not released, the string vibration is attenuated over time, and the energy of sound generation of the previous musical sound is also reduced. In addition, as time elapses after the previous musical sound is generated, the player has more spare time for the performance operation, so that the possibility of plucking with the normal playing style increases regardless of the attenuation of the string vibration. Taking these into consideration, the shorter the elapsed time To is, the higher the occurrence probability Pb is set. In other words, the longer the elapsed time To is, the lower the occurrence probability Pb is set.
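The two decreasing curves could be sketched, for illustration only, as piecewise-linear functions. The hold region immediately after the string release and the points where the probabilities reach 0% are assumptions standing in for the functions shown in the figures.

```c
/* Illustrative occurrence probability Pa (in %) from the elapsed time Tf (ms):
 * held at 100% right after the string release, then decreasing to 0%. */
static double pa_from_tf(double tf_ms)
{
    const double HOLD_MS = 50.0;    /* assumed hold region right after string release */
    const double ZERO_MS = 2000.0;  /* assumed point where Pa reaches 0%              */
    if (tf_ms <= HOLD_MS) return 100.0;
    if (tf_ms >= ZERO_MS) return 0.0;
    return 100.0 * (ZERO_MS - tf_ms) / (ZERO_MS - HOLD_MS);  /* linear decay */
}

/* Illustrative occurrence probability Pb (in %) from the elapsed time To (ms). */
static double pb_from_to(double to_ms)
{
    const double ZERO_MS = 1500.0;  /* assumed point where Pb reaches 0% */
    if (to_ms >= ZERO_MS) return 0.0;
    return 100.0 * (ZERO_MS - to_ms) / ZERO_MS;  /* linear decay from 100% */
}
```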
Note that the occurrence probability Pb varies greatly depending on how much spare time the player has for the performance operation. For this reason, the function shown in
It is rare for a musical sound to be generated with a special playing style after playing chords. For this reason, in the present embodiment, the previous musical sound serving as the starting point of the elapsed times Tf and To is set to a single sound.
When it is determined that the occurrence probability Pb acquired in step S107 is 0% (step S108: YES), the probability that a sound will be generated with a special playing style also becomes 0%. Also in this case, the processor 10 selects the normal playing style as the playing style at the time of generating the current musical sound (step S115).
When it is determined that the occurrence probability Pb acquired in step S107 is greater than 0% (step S108: NO), the processor 10 executes glissando playing style determination processing (step S109).
As described above, the first information and the second information held in the work area of the RAM 11 include the velocity value of the previous musical sound and the velocity value of the current musical sound, respectively. The processor 10 calculates a difference in sound volume between the previous musical sound and the current musical sound from the velocity values (step S301).
When the velocity value of the previous musical sound and the velocity value of the current musical sound are the same, the difference in sound volume calculated in step S301 becomes zero. When the velocity value of the current musical sound is larger than the velocity value of the previous musical sound, the difference in sound volume calculated in step S301 becomes a positive value. When the velocity value of the current musical sound is smaller than the velocity value of the previous musical sound, the difference in sound volume calculated in step S301 becomes a negative value. For example, when the velocity value of the current musical sound is 1.5 times the velocity value of the previous musical sound, the difference in sound volume is indicated by +50% in
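As a minimal illustration consistent with the example above (a current velocity 1.5 times the previous one corresponds to +50%), the percentage difference could be computed as follows; the function name and the zero-velocity guard are assumptions.

```c
/* Minimal sketch: difference in sound volume as a percentage of the previous
 * musical sound's velocity (1.5x the previous velocity -> +50%). */
static double volume_diff_percent(int prev_velocity, int cur_velocity)
{
    if (prev_velocity <= 0) return 0.0;  /* guard; velocity values are normally positive */
    return ((double)cur_velocity / (double)prev_velocity - 1.0) * 100.0;
}
```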
The processor 10 acquires the probability Pgv associated with the difference in sound volume calculated in step S301 from the table shown in
The closer the loudness of the current musical sound is to the loudness of the previous musical sound (the smaller the change in sound volume is), the higher the probability that the current musical sound will be generated with the glissando playing style is. Incidentally, due to the characteristic that the current musical sound is generated with the glissando playing style using the energy corresponding to the string vibration at the time of sound generation of the previous musical sound, the larger the loudness of the current musical sound is with respect to the loudness of the previous musical sound, the lower the probability that the current musical sound will be generated with the glissando playing style is. In addition, with the glissando playing style, it is difficult for the loudness of the musical sound to become larger than, or extremely smaller than, that of the previous musical sound. For this reason, in
As described above, the first information and the second information held in the work area of the RAM 11 include the key number of the previous musical sound and the key number of the current musical sound, respectively. More specifically, in the work area of the RAM 11, as the first information, not only information about the musical sound generated one before the current musical sound but also information about a plurality of earlier musical sounds (e.g., the musical sound generated two before the current musical sound, the musical sound generated three before the current musical sound, etc.) is held.
The processor 10 calculates a difference (i.e., difference in interval) between the key number of the previous musical sound and the key number of the current musical sound (step S303). As used herein, the “previous musical sound” refers to a musical sound last generated with the normal playing style, among a plurality of first musical sounds corresponding to each of a plurality of pieces of first information held in the work area of RAM 11. For convenience, this musical sound is described as “previous normal playing style musical sound”.
When the key number of the previous normal playing style musical sound is the same as the key number of the current musical sound, the difference in interval calculated in step S303 (“the number of moved frets” in
The processor 10 acquires the probability Pgp associated with the difference in interval calculated in step S303 from the table shown in
The glissando playing style has a limitation on speed. In the plucked string instrument, the farther the key (in other words, the fret) of the musical sound to be generated with the glissando playing style is from the key (in other words, the fret) of the immediately previous normal playing style musical sound, the more difficult it becomes for the player to move a finger or the like between the two frets instantly (specifically, at the speed at which a sound is generated with the glissando playing style). For this reason, the greater the difference in interval between the current musical sound and the previous normal playing style musical sound is, the lower the probability that the current musical sound will be generated with the glissando playing style is.
Therefore, in
In addition, when the difference in interval is large, there is a high possibility that a string for generating the current musical sound is different from a string for generating the previous normal playing style musical sound. Therefore, in the present embodiment, when the difference in interval is ±4 half tones or more, it is regarded that the glissando playing style is not applied. Also in this case, the probability Pgp becomes zero, and the probability that a musical sound will be generated with the glissando playing style also becomes zero.
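A minimal sketch of the two lookups used in the glissando determination is shown below. Only the qualitative shape (highest near zero difference) and the ±4-semitone cutoff come from the text; the numeric values stand in for the tables in the figures and are purely illustrative.

```c
/* Probability Pgv (in %) from the difference in sound volume (in %).
 * Illustrative: peaked where the current sound is about as loud as the previous one. */
static double pgv_from_volume_diff(double diff_percent)
{
    double d = diff_percent < 0.0 ? -diff_percent : diff_percent;
    return d >= 100.0 ? 0.0 : 100.0 - d;
}

/* Probability Pgp (in %) from the difference in interval (in semitones).
 * The +/-4-semitone cutoff is stated in the text; the other entries are illustrative. */
static double pgp_from_interval(int diff_semitones)
{
    static const double table[4] = { 100.0, 80.0, 50.0, 20.0 };
    int d = diff_semitones < 0 ? -diff_semitones : diff_semitones;
    return d >= 4 ? 0.0 : table[d];
}
```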
The processor 10 calculates an occurrence probability Rg (unit: %) of the glissando playing style (step S305). Specifically, the processor 10 calculates the occurrence probability Rg by the following equation. The coefficient Cg is a value set in response to an operation on the volume 134G, and takes a value of 0 to 100, for example.
Rg = Cg × (Pa × Pb × Pgv × Pgp) × 10^-6
The processor 10 generates a random number by a random function (step S306). The random number takes a value of 0 or 1. More specifically, in step S306, the processor 10 generates a random number so that the value 1 is obtained with the occurrence probability Rg calculated in step S305.
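Restated as code, the calculation of step S305 might look like the sketch below, assuming all component probabilities are expressed in percent; this is only an illustration of the equation above, not the specification's implementation.

```c
/* Rg = Cg x (Pa x Pb x Pgv x Pgp) x 10^-6 (step S305).
 * cg is the coefficient set via the volume 134G (0..100); the other inputs are the
 * component probabilities acquired in the preceding steps. */
static double glissando_occurrence_probability(double cg, double pa, double pb,
                                               double pgv, double pgp)
{
    return cg * (pa * pb * pgv * pgp) * 1e-6;
}
```

The resulting Rg is then compared against a random number (steps S306 and S307), in the same manner as the draw() helper in the earlier cascade sketch.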
When it is determined that the value 1 is obtained (step S307: YES), the processor 10 selects the glissando playing style as the playing style for generating the current musical sound (step S308), and ends the subroutine shown in
Further, the processor 10 instructs the sound source LSI 23 to read out the waveform data of the glissando playing style corresponding to the key press event information of the current musical sound from the plurality of waveform data stored in the waveform ROM 22. Thereby, the current musical sound is generated with the glissando playing style.
When it is determined that the value 0 is obtained (step S307: NO), the processor 10 ends the subroutine shown in
In this way, the processor 10 determines the probability that the glissando playing style will be selected, based on the elapsed time Tf after the string is released until the current musical sound is generated.
In addition, the processor 10 determines the probability that the glissando playing style will be selected, based on the elapsed time To after the musical sound of the previous single sound is generated until the current musical sound is generated.
Further, the processor 10 determines the probability that the glissando playing style will be selected, as a higher value as the elapsed times Tf and To are shorter.
Further, the processor 10 determines the probability that the glissando playing style will be selected, based on the loudness of the previous musical sound and the loudness of the current musical sound. More specifically, the processor 10 determines the probability that the glissando playing style will be selected, as a higher value as the difference in sound volume calculated in step S301 is smaller.
Further, the processor 10 determines the probability that the glissando playing style will be selected, based on the difference in interval between the previous musical sound and the current musical sound. More specifically, the processor 10 determines the probability that the glissando playing style will be selected, as a higher value as the difference in interval calculated in step S303 is smaller.
In processing of step S117 described later, the number of consecutive sound generations of a musical sound with the hammer-on playing style is counted. The processor 10 acquires the count value (step S401).
The processor 10 acquires the probability Poc associated with the number of consecutive sound generations acquired in step S401 from the table shown in
In a plucked string instrument, when musical sounds are generated with the hammer-on playing style, the number of consecutive sound generations is limited by the number of fingers that can be used for the performance operation. For example, consider a case where a musical sound is generated with the normal playing style while the index finger presses the fret. In this case, when raising the pitch with the hammer-on playing style, the player can only play up to three consecutive times using the three remaining fingers, i.e., the middle finger, the ring finger, and the little finger. For this reason, as the number of consecutive sound generations increases, as shown in
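As an illustration, the lookup of the probability Poc could be sketched as follows; only the three-consecutive-times limit comes from the text, and the numeric entries are placeholders for the table in the figure.

```c
/* Probability Poc (in %) from the number of consecutive hammer-on sound generations.
 * After three consecutive hammer-ons, a fourth is assumed to be impractical. */
static double poc_from_consecutive_count(int count)
{
    static const double table[3] = { 100.0, 60.0, 25.0 };  /* illustrative values */
    if (count < 0) count = 0;
    return count >= 3 ? 0.0 : table[count];
}
```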
The processor 10 calculates a difference in sound volume between the previous musical sound and the current musical sound from the velocity value of the previous musical sound included in the first information and the velocity value of the current musical sound included in the second information (step S403).
The processor 10 acquires the probability Pov associated with the difference in sound volume calculated in step S403 from the table shown in
The closer the loudness of the current musical sound is to the loudness of the previous musical sound (the smaller the change in sound volume is), the higher the probability that the current musical sound will be generated with the hammer-on playing style is. Incidentally, due to the characteristic that the current musical sound is generated with the hammer-on playing style using the energy corresponding to the string vibration at the time of sound generation of the previous musical sound, the larger the loudness of the current musical sound is with respect to the loudness of the previous musical sound, the lower the probability that the current musical sound will be generated with the hammer-on playing style is. For this reason, in
The processor 10 calculates a difference in interval between the previous musical sound and the current musical sound from the key number of the previous normal playing style musical sound included in the first information and the key number of the current musical sound included in the second information (step S405).
The processor 10 acquires the probability Pop associated with the difference in interval calculated in step S405 from the table shown in
In a plucked string instrument, when generating a musical sound with the hammer-on playing style, the player needs to press two frets at the same time. The span of the hand pressing the two frets at the same time therefore limits the width of the interval that can be achieved with the hammer-on playing style. For this reason, the greater the difference in interval between the current musical sound and the previous normal playing style musical sound is, the lower the probability that the current musical sound will be generated with the hammer-on playing style is.
Therefore, in
The processor 10 calculates an occurrence probability Ro (unit: %) of the hammer-on playing style (step S407). Specifically, the processor 10 calculates the occurrence probability Ro by the following equation. The coefficient Co is a value set in response to an operation on the volume 134H, and takes a value of 0 to 100, for example.
Ro = Co × (Pa × Pb × Pov × Pop) × 10^-6
The processor 10 generates a random number so that the value 1 is obtained with the occurrence probability Ro calculated in step S407 (step S408).
When it is determined that the value 1 is obtained (step S409: YES), the processor 10 selects the hammer-on playing style as the playing style for generating the current musical sound (step S410), and ends the subroutine shown in
Further, the processor 10 instructs the sound source LSI 23 to read out the waveform data of the hammer-on playing style corresponding to the key press event information of the current musical sound, from the plurality of waveform data stored in the waveform ROM 22. Thereby, the current musical sound is generated with the hammer-on playing style.
When it is determined that the value 0 is obtained (step S409: NO), the processor 10 ends the subroutine shown in
In this way, the processor 10 determines the probability that the hammer-on playing style will be selected, based on the elapsed time Tf after the string is released until the current musical sound is generated.
In addition, the processor 10 determines the probability that the hammer-on playing style will be selected, based on the elapsed time To after the musical sound of the previous single sound is generated until the current musical sound is generated.
Further, the processor 10 determines the probability that the hammer-on playing style will be selected, as a higher value as the elapsed times Tf and To are shorter.
Further, the processor 10 measures the number of consecutive sound generations of a musical sound with the hammer-on playing style, and determines the probability that the hammer-on playing style will be selected, based on the measured number of consecutive sound generations.
Further, the processor 10 determines the probability that the hammer-on playing style will be selected, based on the loudness of the previous musical sound and the loudness of the current musical sound. More specifically, the processor 10 determines the probability that the hammer-on playing style will be selected, as a higher value as the difference in sound volume calculated in step S403 is smaller.
Further, the processor 10 determines the probability that the hammer-on playing style will be selected, based on the difference in interval between the previous musical sound and the current musical sound. More specifically, the processor 10 determines the probability that the hammer-on playing style will be selected, as a lower value as the key number (pitch) of the current musical sound is higher with respect to that of the previous musical sound.
In processing of step S117 described later, the number of consecutive sound generations of a musical sound with the pull-off playing style is counted. The processor 10 acquires the count value (step S501).
The processor 10 acquires the probability Pfc associated with the number of consecutive sound generations acquired in step S501 from the table shown in
Similarly to the hammer-on playing style, in the pull-off playing style the number of consecutive sound generations is limited by the number of fingers that can be used for the performance operation. For example, consider a case where a musical sound is generated with the normal playing style while the little finger presses the fret. In this case, when lowering the pitch with the pull-off playing style, the player can only play up to three consecutive times using the three remaining fingers, i.e., the ring finger, the middle finger, and the index finger. Therefore, as the number of consecutive sound generations increases, as shown in
The processor 10 calculates a difference in sound volume between the previous musical sound and the current musical sound from the velocity value of the previous musical sound included in the first information and the velocity value of the current musical sound included in the second information (step S503).
The processor 10 acquires the probability Pfv associated with the difference in sound volume calculated in step S503 from the table shown in
The closer the loudness of the current musical sound is to the loudness of the previous musical sound (the smaller the change in sound volume is), the higher the probability that the current musical sound will be generated with the pull-off playing style is, similarly to the hammer-on playing style. For this reason, in
The processor 10 calculates a difference in interval between the previous musical sound and the current musical sound from the key number of the previous normal playing style musical sound included in the first information and the key number of the current musical sound included in the second information (step S505).
The processor 10 acquires the probability Pfp associated with the difference in interval calculated in step S505 from the table shown in
Similarly to the hammer-on playing style, the span of the hand pressing the two frets at the same time limits the width of the interval that can be achieved with the pull-off playing style. For this reason, the greater the difference in interval between the current musical sound and the previous normal playing style musical sound is, the lower the probability that the current musical sound will be generated with the pull-off playing style is.
Therefore, in
The processor 10 calculates an occurrence probability Rf (unit: %) of the pull-off playing style (step S507). Specifically, the processor 10 calculates the occurrence probability Rf by the following equation. The coefficient Cf is a value set in response to an operation on the volume 134P, and takes a value of 0 to 100, for example.
Rf = Cf × (Pa × Pb × Pfv × Pfp) × 10^-6
The processor 10 generates a random number so that the value 1 is obtained with the occurrence probability Rf calculated in step S507 (step S508).
When it is determined that the value 1 is obtained (step S509: YES), the processor 10 selects the pull-off playing style as the playing style for generating the current musical sound (step S510), and ends the subroutine shown in
Further, the processor 10 instructs the sound source LSI 23 to read out the waveform data of the pull-off playing style corresponding to the key press event information of the current musical sound, from the plurality of waveform data stored in the waveform ROM 22. Thereby, the current musical sound is generated with the pull-off playing style.
When it is determined that the value 0 is obtained (step S509: NO), the processor 10 ends the subroutine shown in
In this way, the processor 10 determines the probability that the pull-off playing style will be selected, based on the elapsed time Tf after the string is released until the current musical sound is generated.
Further, the processor 10 determines the probability that the pull-off playing style will be selected, based on the elapsed time To after the musical sound of the previous single sound is generated until the current musical sound is generated.
Also, the processor 10 determines the probability that the pull-off playing style will be selected, as a higher value as the elapsed times Tf and To are shorter.
Further, the processor 10 measures the number of consecutive sound generations of a musical sound with the pull-off playing style, and determines the probability that the pull-off playing style will be selected, based on the measured number of consecutive sound generations.
Further, the processor 10 determines the probability that the pull-off playing style will be selected, based on the loudness of the previous musical sound and the loudness of the current musical sound. More specifically, the processor 10 determines the probability that the pull-off playing style will be selected, as a higher value as the difference in sound volume calculated in step S503 is smaller.
Further, the processor 10 determines the probability that the pull-off playing style will be selected, based on the difference in interval between the previous musical sound and the current musical sound. More specifically, the processor 10 determines the probability that the pull-off playing style will be selected, as a lower value as the key number (pitch) of the current musical sound is lower with respect to that of the previous musical sound.
In step S116, the processor 10 resets the elapsed time To to zero, and then starts measurement of the elapsed time To.
The processor 10 updates the counter (step S117).
Specifically, when the hammer-on playing style is selected in step S410 in
The processor 10 increments the variable Gn by 1 (step S104), and ends the processing shown in
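A rough sketch of one possible counter update for step S117 is shown below. The text states only that the numbers of consecutive hammer-on and pull-off sound generations are counted; the reset behavior in this sketch is an assumption, not quoted from the specification.

```c
/* Hypothetical bookkeeping for step S117: a style's consecutive-generation
 * counter is incremented when that style is selected and cleared otherwise. */
typedef enum { STYLE_NORMAL, STYLE_GLISSANDO, STYLE_HAMMER_ON, STYLE_PULL_OFF } Style;

static int hammer_on_consecutive = 0;
static int pull_off_consecutive  = 0;

static void update_consecutive_counters(Style selected)
{
    hammer_on_consecutive = (selected == STYLE_HAMMER_ON) ? hammer_on_consecutive + 1 : 0;
    pull_off_consecutive  = (selected == STYLE_PULL_OFF)  ? pull_off_consecutive  + 1 : 0;
}
```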
As described above, in the present embodiment, the musical sound by the special playing style such as the glissando playing style, the hammer-on playing style and the pull-off playing style is not simply generated at random. In the present embodiment, by calculating the occurrence probability of the musical sound by each playing style and selecting a playing style based on the calculated occurrence probability, it is possible to generate a musical sound by each playing style at a frequency close to actual performance. For this reason, in the present embodiment, the musical sound is generated with a natural playing style suitable for performance.
The present disclosure is not limited to the above-described embodiments, and can be variously modified at the implementation stage without departing from the gist of the present invention. In addition, the functions executed in the above-described embodiments may be combined as appropriate to the extent possible. The above-described embodiments include various stages, and various inventions can be extracted by appropriately combining a plurality of the disclosed constitutional elements. For example, even when some constitutional elements are deleted from all the constitutional elements shown in the embodiments, the resulting configuration can be extracted as an invention as long as the effects are obtained.
In the above, the configuration of generating a musical sound with a playing style corresponding to performance data generated by a performance operation has been described. However, a configuration of generating a musical sound with a playing style corresponding to musical sound data, such as MIDI data input from the serial interface 21, for example, also falls within the scope of the present disclosure.