This application claims the priority benefit of Japan Application No. 2022-169862, filed on Oct. 24, 2022. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The disclosure relates to an automatic performance device, an automatic performance program, and an automatic performance method.
Japanese Patent Laid-Open No. 2021-113895 discloses an electronic musical instrument which repeatedly reproduces a patterned accompaniment sound created based on accompaniment style data ASD. The accompaniment style data ASD includes a plurality of accompaniment section data according to combinations of a “section” such as intro, main section, and ending, and a “liveliness level” such as quiet, slightly loud, and loud. From among the accompaniment style data ASD, a performer selects, via a setting operation part 102, the accompaniment section data corresponding to the section and liveliness level of a musical piece being performed. Accordingly, in addition to the musical piece being performed, a patterned accompaniment sound suitable for that musical piece can be outputted.
However, when a melody of the musical piece being performed by the performer changes and does not match the liveliness level of the patterned accompaniment sound being outputted, there arises a need to switch the patterned accompaniment sound. In this case, a problem occurs that the performer, while performing, has to manually select via the setting operation part 102 the accompaniment section data matching the changed melody from among the accompaniment style data ASD.
An automatic performance device according to the disclosure includes: a pattern storage part, storing a plurality of performance patterns; a performing part, performing a performance based on the performance pattern stored in the pattern storage part; an input part, inputting performance information from an input device; a rhythm detection part, detecting a rhythm from the performance information inputted by the input part; an acquisition part, acquiring from among the plurality of performance patterns stored in the pattern storage part the performance pattern corresponding to the rhythm detected by the rhythm detection part; and a switching part, switching the performance pattern being performed by the performing part to the performance pattern acquired by the acquisition part.
A non-transitory computer-readable medium according to the disclosure stores an automatic performance program that causes a computer to execute automatic performance. The computer includes a storage part and an input part that inputs performance information. The automatic performance program causes the storage part to function as a pattern storage part storing a plurality of performance patterns, and causes the computer to: perform a performance based on the performance pattern stored in the pattern storage part; input the performance information by the input part; detect a rhythm from the inputted performance information; acquire from among the plurality of performance patterns stored in the pattern storage part the performance pattern corresponding to the detected rhythm; and switch the performance pattern being performed to the acquired performance pattern.
An automatic performance method according to the disclosure is executed by an automatic performance device including a pattern storage part storing a plurality of performance patterns and an input device inputting performance information. The automatic performance method includes the following. A performance is performed based on the performance pattern stored in the pattern storage part. The performance information is inputted by the input device. A rhythm is detected from the inputted performance information. The performance pattern corresponding to the detected rhythm is acquired from among the plurality of performance patterns stored in the pattern storage part. The performance pattern being performed is switched to the acquired performance pattern.
The disclosure provides an automatic performance device, an automatic performance program, and an automatic performance method which make it possible to automatically switch to a performance pattern suitable for a performer's performance.
Hereinafter, embodiments will be described with reference to the accompanying drawings.
As illustrated in
In the synthesizer 1 of the present embodiment, a plurality of performance patterns Pa are stored in which a note to be sounded at each sound production timing is set, and a performance is performed based on the performance pattern Pa, thereby performing automatic performance. At that time, among the stored performance patterns Pa, the performance may be switched to the performance pattern Pa matching a rhythm of depression/release of the key 2a performed by the performer. Based on velocity (strength) of depression of the key 2a, the volume of the performance pattern Pa being automatically performed is changed. Hereafter, the automatic performance based on the performance pattern Pa will simply be abbreviated as “automatic performance.”
First, switching of the performance pattern Pa is described. In the present embodiment, a rhythm is detected from depression/release of the key 2a and is compared with a preset rhythm pattern, the performance pattern Pa corresponding to a most similar rhythm pattern is acquired, and the performance is switched to this performance pattern Pa from the performance pattern Pa being performed.
In a rhythm pattern, a "note duration" being the duration of each sound arranged in one bar in 4/4 time, a "note spacing" being the time between each arranged sound and the sound produced immediately before it, and a "number of sounds" being the number of sounds arranged are set. The length of a rhythm pattern is at most one bar.
In the present embodiment, a plurality of rhythm patterns RP1 to RP3 and so on are provided, the rhythm detected from depression/release of the key 2a is compared with each rhythm pattern, and the most similar rhythm pattern is acquired. Referring to (a) to (c) of
(a) to (c) of
As illustrated in (b) of
If the rhythm pattern includes a plurality of note durations or note spacings, the note durations or note spacings are set in order of their corresponding sounds appearing within one bar of the rhythm pattern. In the present embodiment, these combinations of note duration, note spacing, and number of sounds are used as indicators representing rhythm patterns or rhythms of depression/release of the key 2a.
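As an illustration only (not part of the disclosure), the indicator combining note duration, note spacing, and number of sounds might be represented as follows in Python; the class name, field names, and the staccato example values are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RhythmIndicator:
    """Indicator for a rhythm pattern or for a detected key rhythm.

    Durations and spacings are in seconds and are listed in the order in
    which the corresponding sounds appear within the (at most) one bar.
    """
    note_durations: List[float] = field(default_factory=list)  # length of each sound
    note_spacings: List[float] = field(default_factory=list)   # gap to the preceding sound
    number_of_sounds: int = 0                                   # count of sounds arranged

# Example (assumed values): one bar of four quarter notes at 120 BPM (0.5 s per beat),
# played staccato so each sound lasts 0.25 s with a 0.25 s gap to the next one.
rp_example = RhythmIndicator(
    note_durations=[0.25, 0.25, 0.25, 0.25],
    note_spacings=[0.25, 0.25, 0.25],
    number_of_sounds=4,
)
```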
Although musical notes are arranged at the position of “La” (A) in (a) to (c) of
A plurality of rhythm patterns set in this way are compared with the rhythm detected from depression/release of the key 2a, that is, the note duration, note spacing, and number of sounds detected from depression/release of the key 2a, and the most similar rhythm pattern is acquired. Specifically, performance information outputted from the keyboard 2 is sequentially accumulated, and from note-on/note-off information in the performance information detected within a first period that is most recent, the note duration and note spacing of each sound and the number of sounds are acquired. In the present embodiment, “3 seconds” is set as the first period. However, the disclosure is not limited thereto, and the first period may be longer than or shorter than 3 seconds.
Among them, a time from a note-on to the subsequent note-off at the same pitch detected within the most recent first period is acquired as the note duration. If a plurality of such note-on/note-off pairs at the same pitch are detected within the most recent first period, each note duration is acquired in the order in which the note-ons and note-offs are detected.
A time from a certain note-off to the next note-on detected within the most recent first period is acquired as the note spacing. Similarly to note duration, if a plurality of note-offs and note-ons are detected within the most recent first period, each note spacing is acquired in order of the detected note-offs and note-ons. The number of note-ons detected within the most recent first period is acquired as the number of sounds.
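A minimal sketch of this detection, assuming the performance information is available as timestamped note-on/note-off events; the event layout and the handling of overlapping notes are simplifications of this sketch, not details given in the embodiment.

```python
from typing import List, Tuple

# Each event is (time_seconds, kind, pitch, velocity), with kind "on" or "off".
Event = Tuple[float, str, int, int]

def detect_rhythm(events: List[Event], now: float, first_period: float = 3.0):
    """Return (note_durations, note_spacings, number_of_sounds) from the
    note-on/note-off events that fall within the most recent first period."""
    recent = sorted((e for e in events if now - first_period <= e[0] <= now),
                    key=lambda e: e[0])
    durations, spacings = [], []
    pending_on = {}          # pitch -> time of the note-on still waiting for its note-off
    last_off_time = None
    number_of_sounds = 0

    for t, kind, pitch, _vel in recent:
        if kind == "on":
            number_of_sounds += 1
            pending_on[pitch] = t
            if last_off_time is not None:
                spacings.append(t - last_off_time)   # gap since the preceding note-off
        else:  # "off"
            if pitch in pending_on:
                durations.append(t - pending_on.pop(pitch))  # same-pitch on -> off time
            last_off_time = t

    return durations, spacings, number_of_sounds

events = [(0.1, "on", 60, 90), (0.4, "off", 60, 0), (0.6, "on", 62, 80), (1.0, "off", 62, 0)]
print(detect_rhythm(events, now=1.0))   # durations ~[0.3, 0.4], spacings ~[0.2], 2 sounds
```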
With respect to each of a plurality of rhythm patterns, a similarity representing how similar the note duration, note spacing, and number of sounds set in the rhythm pattern are to the note duration, note spacing, and number of sounds within the most recent first period is calculated. Specifically, first, a “score” for each of the note duration, note spacing, and number of sounds is acquired, and the similarity is calculated by summing up the acquired scores.
Among them, with respect to the score for the note duration, first, a difference between the note duration included in a rhythm pattern and the corresponding note duration acquired within the most recent first period is calculated. An integer from 1 to 5 is acquired as the score for the note duration, such that the smaller the absolute value of the calculated difference, the greater the score. In the present embodiment, if the absolute value of the difference in note duration is 0 to 0.05 second, "5" is acquired as the score for the note duration; if it is 0.05 to 0.1 second, 0.1 to 0.15 second, 0.15 to 0.2 second, or greater than 0.2 second, "4", "3", "2", or "1", respectively, is acquired as the score for the note duration. If a rhythm pattern includes only one note duration, this score is acquired as the score for the note duration of the rhythm pattern concerned.
On the other hand, if a rhythm pattern includes a plurality of note durations, the score mentioned above is acquired for each of the plurality of note durations, and an average value of the acquired scores is taken as the score for the note duration of the rhythm pattern concerned. Specifically, note durations are acquired in order from the rhythm pattern, while note durations acquired within the most recent first period are also acquired in order. Then, each score is acquired for the acquired note durations of the rhythm pattern and the note durations acquired within the most recent first period in the order corresponding to the aforementioned note durations. The average value of the acquired scores is taken as the score for the note duration of the rhythm pattern concerned.
For example, if a rhythm pattern includes three note durations, a score is acquired for the first note duration of this rhythm pattern and the first note duration acquired within the most recent first period. A score is acquired for the second note duration of the rhythm pattern and the second note duration acquired within the most recent first period, and a score is acquired for the third note duration of the rhythm pattern and the third note duration acquired within the most recent first period. An average value of the three scores thus acquired is taken as the score for the note duration of the rhythm pattern concerned.
With respect to the score for the note spacing, first, a difference between the note spacing included in a rhythm pattern and the corresponding note spacing acquired within the most recent first period is calculated. An integer from 1 to 5 is acquired as the score for the note spacing, such that the smaller the absolute value of the calculated difference, the greater the score. If the absolute value of the difference in note spacing is 0 to 0.05 second, "5" is acquired as the score for the note spacing; if it is 0.05 to 0.1 second, 0.1 to 0.15 second, 0.15 to 0.2 second, or greater than 0.2 second, "4", "3", "2", or "1", respectively, is acquired as the score for the note spacing. If a rhythm pattern includes only one note spacing, this score is acquired as the score for the note spacing of the rhythm pattern concerned.
On the other hand, if a rhythm pattern includes a plurality of note spacings, similarly to the note duration mentioned above, the score mentioned above is acquired for each of the plurality of note spacings, and an average value of the acquired scores is taken as the score for the note spacing of the rhythm pattern concerned. Specifically, note spacings are acquired in order from the rhythm pattern, while note spacings acquired within the most recent first period are also acquired in order. Then, each score is acquired for the acquired note spacings of the rhythm pattern and the note spacings acquired within the most recent first period in the order corresponding to the aforementioned note spacings. The average value of the acquired scores is taken as the score for the note spacing of the rhythm pattern concerned.
With respect to the score for the number of sounds, a difference between the number of sounds included in a rhythm pattern and the number of sounds acquired within the most recent first period is calculated, and an integer from 1 to 5 is acquired as the score for the number of sounds, such that the smaller the absolute value of the calculated difference, the greater the score. If the absolute value of the difference in number of sounds is 0, "5" is acquired as the score for the number of sounds; if it is 1, 2, 3, or 4 or greater, "4", "3", "2", or "1", respectively, is acquired as the score for the number of sounds of the rhythm pattern concerned.
Ranges of the absolute value of the difference in note duration or note spacing corresponding to the scores for the note duration or note spacing or values of the scores for the note duration or note spacing are not limited to those mentioned above. Other ranges may be set for the absolute value of the difference in note duration or note spacing, or other values may be set for the scores for the note duration or note spacing. Similarly, ranges of the absolute value of the difference in number of sounds corresponding to the scores for the number of sounds or values of the scores for the number of sounds are not limited to those mentioned above. Other ranges may be set for the absolute value of the difference in number of sounds, or other values may be set for the scores for the number of sounds.
A sum total of the scores for the note duration, note spacing, and number of sounds thus acquired is calculated as the similarity of the rhythm pattern concerned. The similarity is calculated in the same way for all of the plurality of rhythm patterns. Then, among the plurality of rhythm patterns, the rhythm pattern having the highest similarity is acquired as the rhythm pattern most similar to the rhythm of depression/release of the key 2a acquired within the most recent first period.
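The scoring and selection described above could be sketched as follows; the handling of unequal numbers of note durations or note spacings (pairs without a counterpart are simply ignored) is an assumption of this sketch, since the embodiment leaves that case open.

```python
def score_from_difference(diff: float) -> int:
    """Map an absolute difference in note duration or note spacing (seconds)
    to a score of 5 (most similar) down to 1 (least similar)."""
    d = abs(diff)
    if d <= 0.05:
        return 5
    if d <= 0.10:
        return 4
    if d <= 0.15:
        return 3
    if d <= 0.20:
        return 2
    return 1

def count_score(diff: int) -> int:
    """Score for the difference in number of sounds: 0 -> 5, 1 -> 4, ..., 4 or more -> 1."""
    return max(1, 5 - abs(diff))

def average_pairwise_score(pattern_values, detected_values) -> float:
    """Score corresponding values in order and average them; values without a
    counterpart are ignored in this sketch."""
    pairs = list(zip(pattern_values, detected_values))
    if not pairs:
        return 0.0
    return sum(score_from_difference(p - d) for p, d in pairs) / len(pairs)

def similarity(pattern, detected) -> float:
    """pattern and detected are (durations, spacings, number_of_sounds) triples."""
    p_dur, p_sp, p_n = pattern
    d_dur, d_sp, d_n = detected
    return (average_pairwise_score(p_dur, d_dur)
            + average_pairwise_score(p_sp, d_sp)
            + count_score(p_n - d_n))

def most_similar(rhythm_patterns: dict, detected):
    """Return the key of the rhythm pattern with the highest similarity."""
    return max(rhythm_patterns, key=lambda k: similarity(rhythm_patterns[k], detected))

patterns = {"RP1": ([0.5, 0.5], [0.5], 2), "RP2": ([0.25] * 4, [0.25] * 3, 4)}
print(most_similar(patterns, ([0.45, 0.5], [0.55], 2)))   # -> RP1
```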
Then, the performance pattern Pa corresponding to the acquired most similar rhythm pattern is acquired, and the performance is switched from the performance pattern Pa being performed to this performance pattern Pa. Accordingly, the performance can be automatically switched to the performance pattern Pa suitable for the rhythm of the performance on the keyboard 2, without the performer interrupting the performance to take a hand off the keyboard 2 and operate the setting button 3 or the like.
Next, changing the volume of the performance pattern Pa to be automatically performed is described. In the present embodiment, the volume of the performance pattern Pa is changed based on the velocity at the time of depression of the key 2a. More specifically, the performance pattern Pa includes a plurality of performance parts such as drum, bass, and accompaniment (musical instrument having a pitch), and the volume is changed based on the velocity at the time of depression of the key 2a for each performance part.
First, similarly to the switching of the performance pattern Pa mentioned above, the performance information outputted from the keyboard 2 is sequentially accumulated, and each velocity in the performance information acquired within a second period that is most recent is acquired. Then, an average value V of the acquired velocities is calculated. In the present embodiment, “3 seconds” is set as the second period, like the first period. However, the disclosure is not limited thereto, and the second period may be longer than or shorter than 3 seconds.
A differential value ΔV is calculated, which is a value obtained by subtracting an intermediate value Vm of the velocity from the calculated average value V. The intermediate value Vm of the velocity is a reference value serving as a reference in calculating the differential value ΔV. In the present embodiment, the intermediate value "64" between the maximum possible value "127" and the minimum possible value "0" of the velocity is set as the intermediate value Vm. The intermediate value here refers to a value obtained by dividing, by 2, the sum of the maximum and minimum possible values of the velocity, or a value in the vicinity thereof, and may be expressed as an "approximately intermediate value".
A value obtained by multiplying the calculated differential value ΔV by a weight coefficient set for each performance part is added to a set value of the volume of each performance part, and a result thereof is taken as the volume of each performance part after change. Changing of the volume of the performance pattern Pa is described with reference to (d) to (i) of
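A minimal sketch of this per-part volume calculation; the part names, the clamping to the MIDI range 0 to 127, and the example weight coefficients are assumptions of the sketch, not details given in the embodiment.

```python
def changed_volumes(set_volumes: dict, weights: dict, average_velocity: float,
                    reference: float = 64.0) -> dict:
    """Per-part volume after change: set value + weight coefficient * (average velocity - Vm).

    set_volumes and weights map a part name ("drum", "bass", "accompaniment", ...)
    to its preset volume and its weight coefficient. Clamping to 0-127 is an
    assumption of this sketch.
    """
    dv = average_velocity - reference            # differential value delta V
    return {part: max(0, min(127, set_volumes[part] + weights[part] * dv))
            for part in set_volumes}

# Example: an average velocity of 96 raises the drum part more strongly than the
# bass part because its weight coefficient is larger.
print(changed_volumes({"drum": 100, "bass": 90}, {"drum": 0.5, "bass": 0.2}, 96.0))
# -> drum 116.0, bass 96.4
```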
(d) of
As illustrated in (d) of
In the present embodiment, the weight coefficient is set in advance by the performer via the setting button 3 for each performance pattern Pa and each performance part of the performance pattern Pa. In particular, if the weight coefficient for a certain performance part is set to 0, the volume of this performance part can be kept constant (that is, kept at the set value of the volume) regardless of the average velocity V. Weight coefficients such as α and β may have the same value regardless of the performance pattern Pa and the performance part.
In the present embodiment, the set value of the volume is set in advance by the performer via the setting button 3 for each performance pattern Pa and each performance part. The set value of the volume may be the same volume regardless of the performance pattern Pa and the performance part.
For example, as illustrated in (e) of
Similarly, as illustrated in (f) of
In the present embodiment, the weight coefficient such as α and β is set in advance for each performance pattern Pa and each performance part of the performance pattern Pa. The weight coefficient such as α and β may be set to the same coefficient regardless of the performance pattern Pa and the performance part, or the performer may be allowed to set the weight coefficient arbitrarily via the setting button 3. The weight coefficient is set to a positive value but is not limited thereto. Rather, the weight coefficient may be set to a negative value.
In (d) to (f) of
Next, a case is described where the differential value ΔV is negative. (g) of
As illustrated in (g) of
As illustrated in (h) of
In (g) to (i) of
In the present embodiment, the volume of each performance part is obtained by adding a value based on the differential value ΔV of the velocity to the set value of the volume. That is, since the volume of each performance part changes relative to the set value of the volume according to the sign and magnitude of the differential value ΔV, the volume of each performance part is prevented from deviating markedly from the set value of the volume. Thus, the balance of volume between the performance parts in the performance pattern Pa can be kept close to the balance between the set values of the volume set in advance for each performance part. Accordingly, discomfort experienced by a listener when the volume of each performance part in the performance pattern Pa is changed based on the velocity at the time of depression of the key 2a may be reduced.
By varying the weight coefficient for each performance part, the change amount of the volume can be varied for each performance part. Accordingly, the volumes of the performance parts of the performance pattern Pa are kept from changing uniformly, and automatic performance full of variety and expression can be realized.
As described above, in the present embodiment, the performance pattern Pa is switched according to the rhythm of depression/release of the key 2a, and the volume of the performance pattern Pa is changed according to the velocity at the time of depression of the key 2a. Furthermore, it is possible to set a range of the key 2a on the keyboard 2 in which performance information used for switching the performance pattern Pa is outputted and a range of the key 2a on the keyboard 2 in which performance information used for changing the volume of the performance pattern Pa is outputted. Hereinafter, a sequential range of the keys 2a on the keyboard 2 is referred to as a “key range”.
(j) of
Among them, the key range kL corresponds to the left-hand part played by the performer with their left hand. On the other hand, the key range kR corresponds to the right-hand part played by the performer with their right hand. In the present embodiment, the key range kL is set as a rhythm key range kH used for switching the performance pattern Pa.
Here, the key range kL is a key range corresponding to the left-hand part played by the performer. The left-hand part mainly performs an accompaniment, and a rhythm is generated by the accompaniment. By detecting a rhythm from performance information on the key range kL corresponding to such a left-hand part, and switching the performance pattern Pa based on the rhythm, the performance pattern Pa matching the rhythm in the performer's performance can be automatically performed.
On the other hand, the key range kR is set as a velocity key range kV used for changing the volume of the performance pattern Pa. Here, the key range kR is a key range corresponding to the right-hand part played by the performer, and the right-hand part mainly performs a main melody. By detecting a velocity from performance information on the key range kR in which the main melody is performed in this way, and changing the volume of the performance pattern Pa based on the velocity, the performance pattern Pa having a volume matching intonation of the main melody of the performance can be automatically performed.
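Purely as an illustration, routing a key event to rhythm detection or velocity averaging by key range might look like the following; the concrete MIDI note-number boundaries are placeholders of this sketch, since the embodiment lets the performer set the key ranges via the setting button 3.

```python
def route_event(pitch: int, rhythm_range=(36, 59), velocity_range=(60, 96)) -> str:
    """Decide which processing a key event feeds, by key range.

    rhythm_range stands in for the rhythm key range kH (here the left-hand range kL)
    and velocity_range for the velocity key range kV (the right-hand range kR).
    """
    lo, hi = rhythm_range
    if lo <= pitch <= hi:
        return "rhythm"      # used for switching the performance pattern Pa
    lo, hi = velocity_range
    if lo <= pitch <= hi:
        return "velocity"    # used for changing the volume of the performance pattern Pa
    return "ignore"          # outside both key ranges

print(route_event(48), route_event(72))   # -> rhythm velocity
```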
Next, a function of the synthesizer 1 is described with reference to
The pattern storage part 200 is a means of storing a plurality of performance patterns, and is realized by a style table 11c described later in
The rhythm detection part 203 is a means of detecting a rhythm from the performance information inputted by the input part 202, and is realized by the CPU 10. The acquisition part 204 is a means of acquiring a performance pattern corresponding to the rhythm detected by the rhythm detection part 203 from among the plurality of performance patterns stored in the pattern storage part 200, and is realized by the CPU 10. The switching part 205 is a means of switching a performance pattern being performed by the performing part 201 to the performance pattern acquired by the acquisition part 204, and is realized by the CPU 10.
A performance pattern is acquired based on the rhythm detected from the inputted performance information, and the performance pattern being performed is switched to the acquired performance pattern. This enables automatic switching to a performance pattern suitable for a performer's performance without interrupting the performance.
Next, an electrical configuration of the synthesizer 1 is described with reference to
The CPU 10 is an arithmetic unit that controls each part connected by the bus line 15. The flash ROM 11 is a rewritable non-volatile memory, and includes a control program 11a, a rhythm table 11b, and the style table 11c. When the control program 11a is executed by the CPU 10, main processing of
(b) of
The “complexity of a rhythm” is set according to a time interval between sounds arranged in one bar or irregularity of the sounds arranged in one bar. For example, the shorter the time interval between the sounds arranged in one bar, the more complex the rhythm; the longer the time interval between the sounds arranged in one bar, the simpler the rhythm. The more irregularly the sounds are arranged in one bar, the more complex the rhythm; the more regularly the sounds are arranged in one bar, the simpler the rhythm. The rhythm levels are set in order of simplicity of the rhythm as level L1, level L2, level L3, and so on. The note duration, note spacing and number of sounds mentioned above are stored in the rhythm pattern in the rhythm table 11b.
Although detailed later, in switching the performance pattern Pa, a similarity between a detected rhythm of depression/release of the key 2a and all the rhythm patterns stored in the rhythm table 11b is calculated, and a rhythm level corresponding to the most similar rhythm pattern is acquired.
(c) of
For example, performance pattern Pa_L1_i for the intro, performance pattern Pa_L1_m1 for main 1, performance pattern Pa_L1_e for the ending and so on are stored as the performance pattern Pa corresponding to level L1 in the style table 11c. Similarly, for levels L2 and L3 and other rhythm levels, the performance pattern Pa is stored for each section.
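A possible in-memory layout for such a lookup in the style table 11c is sketched below; the level L2 and L3 pattern names are formed by analogy with the level L1 names given above and are therefore assumptions of this sketch.

```python
# Hypothetical layout: (rhythm level, section) -> performance pattern Pa.
style_table = {
    ("L1", "intro"): "Pa_L1_i", ("L1", "main1"): "Pa_L1_m1", ("L1", "ending"): "Pa_L1_e",
    ("L2", "intro"): "Pa_L2_i", ("L2", "main1"): "Pa_L2_m1", ("L2", "ending"): "Pa_L2_e",
    ("L3", "intro"): "Pa_L3_i", ("L3", "main1"): "Pa_L3_m1", ("L3", "ending"): "Pa_L3_e",
}

def lookup_pattern(rhythm_level: str, section: str) -> str:
    """Return the performance pattern Pa for the current rhythm level and section."""
    return style_table[(rhythm_level, section)]

print(lookup_pattern("L2", "main1"))   # -> Pa_L2_m1
```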
Although detailed later, the performance pattern Pa corresponding to a rhythm level acquired based on the rhythm of depression/release of the key 2a and a section set via the setting button 3 is acquired from the style table 11c, and the performance pattern Pa being automatically performed is switched to the acquired performance pattern Pa.
Please refer back to (a) of
In the input information memory 12c, information obtained by combining performance information inputted from the keyboard 2 with a time when this performance information was inputted is stored in order of input of the performance information. In the present embodiment, the input information memory 12c is composed of a ring buffer, and is configured to be able to store information obtained by combining performance information with a time when this performance information was inputted within the most recent first period (second period). The information obtained by combining performance information with a time when this performance information was inputted is hereinafter referred to as “input information”.
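A rough approximation of the input information memory 12c, using a timestamp-pruned deque instead of a true ring buffer; the class and method names are illustrative only.

```python
import time
from collections import deque

class InputInformationBuffer:
    """Keeps (time, performance information) pairs for the most recent period."""

    def __init__(self, period: float = 3.0):
        self.period = period
        self._buf = deque()

    def add(self, info, now: float = None):
        now = time.time() if now is None else now
        self._buf.append((now, info))
        self._prune(now)

    def recent(self, now: float = None):
        now = time.time() if now is None else now
        self._prune(now)
        return list(self._buf)

    def _prune(self, now: float):
        # Drop input information older than the most recent first (second) period.
        while self._buf and now - self._buf[0][0] > self.period:
            self._buf.popleft()

buf = InputInformationBuffer(period=3.0)
buf.add(("note-on", 60, 90), now=0.0)
buf.add(("note-on", 62, 80), now=2.5)
print(len(buf.recent(now=4.0)))   # -> 1: the first entry is older than 3 s and has been dropped
```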
The sound source 13 is a device that outputs waveform data according to the performance information inputted from the CPU 10. The DSP 14 is an arithmetic unit for arithmetically processing the waveform data inputted from the sound source 13. The DAC 16 is a conversion device that converts the waveform data inputted from the DSP 14 into analog waveform data. The amplifier 17 is an amplification device that amplifies the analog waveform data outputted from the DAC 16 with a predetermined gain. The speaker 18 is an output device that emits (outputs) the analog waveform data amplified by the amplifier 17 as a musical sound.
Next, main processing executed by the CPU 10 is described with reference to
In the main processing, first, it is confirmed whether there has been an instruction from the performer via the setting button 3 to start automatic performance of the performance pattern Pa (S1). In the processing of S1, if there has been no instruction to start automatic performance of the performance pattern Pa (S1: No), the processing of S1 is repeated. On the other hand, in the processing of S1, if there has been an instruction to start automatic performance of the performance pattern Pa (S1: Yes), initial values of rhythm key range kH, velocity key range kV, rhythm level, section, and volume of each performance part of the performance pattern Pa are set in the rhythm key range memory 12a, the velocity key range memory 12b, the rhythm level memory 12d, the section memory 12e, and the volume memory 12f, respectively (S2).
Specifically, the key range kL (see (j) of
After the processing of S2, the performance pattern Pa corresponding to the initial values of the rhythm level in the rhythm level memory 12d and the section in the section memory 12e is acquired from the style table 11c. Automatic performance of the acquired performance pattern Pa is then started, with the initial value of the volume of each performance part in the volume memory 12f applied to the volume of each performance part of the acquired performance pattern Pa (S3).
After the processing of S3, it is confirmed whether there has been a key input, that is, whether performance information from the key 2a has been inputted (S4). In the processing of S4, if the performance information from the key 2a has been inputted (S4: Yes), a musical sound corresponding to the inputted performance information is outputted (S5). Specifically, the inputted performance information is outputted to the sound source 13, waveform data corresponding to the inputted performance information is acquired in the sound source 13, and the waveform data is outputted as the musical sound via the DSP 14, the DAC 16, the amplifier 17 and the speaker 18. Accordingly, a musical sound according to the performer's performance is outputted.
After the processing of S5, the inputted performance information and a time when this performance information was inputted are added as input information to the input information memory 12c (S6). In the processing of S4, if the performance information from the key 2a has not been inputted (S4: No), the processing of S5 and S6 is skipped.
After the processing of S4 and S6, it is confirmed whether the rhythm key range kH or the velocity key range kV has been changed by the performer via the setting button 3 (S7). In the processing of S7, if the rhythm key range kH or the velocity key range kV has been changed (S7: Yes), the changed rhythm key range kH or velocity key range kV is saved in the corresponding rhythm key range memory 12a or velocity key range memory 12b (S8). On the other hand, if neither the rhythm key range kH nor the velocity key range kV has been changed (S7: No), the processing of S8 is skipped.
After the processing of S7 and S8, the performance pattern switching processing (S9) and the performance pattern volume changing processing (S10), described later, are executed in order, and the processing then returns to S4.
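The overall flow of S1 to S10 might be skeletonized as below; every device access is stubbed out, and the concrete initial values and the bounded loop are assumptions of this sketch rather than details of the embodiment.

```python
import time

# Stubs standing in for the setting button 3, the keyboard 2, and the sound source 13.
def start_instruction_given():   return True     # S1: instruction to start automatic performance
def poll_key_event():            return None     # S4: no key event in this stub
def output_musical_sound(event): pass            # S5: would go to the sound source 13
def key_range_changed():         return False    # S7: kH / kV unchanged

def main_processing(iterations: int = 3):
    """Skeleton of the main processing S1 to S10; the initial values are placeholders."""
    if not start_instruction_given():                                   # S1
        return
    state = {                                                           # S2: initial values
        "rhythm_key_range": (36, 59), "velocity_key_range": (60, 96),
        "rhythm_level": "L1", "section": "intro",
        "volumes": {"drum": 100, "bass": 90, "accompaniment": 80},
        "input_information": [],
    }
    print("S3: start automatic performance of the pattern for",
          state["rhythm_level"], state["section"])
    for _ in range(iterations):          # bounded here; on the device the loop is repeated
        event = poll_key_event()                                        # S4
        if event is not None:
            output_musical_sound(event)                                 # S5
            state["input_information"].append((time.time(), event))     # S6
        if key_range_changed():                                         # S7
            pass                         # S8: save the changed key ranges
        # S9: performance pattern switching processing (see the sketches above)
        # S10: performance pattern volume changing processing
        time.sleep(0.01)

main_processing()
```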
In the performance pattern switching processing, first, it is confirmed whether the section has been changed by the performer via the setting button 3 (S20). If the section has been changed (S20: Yes), the changed section is saved in the section memory 12e (S21). On the other hand, in the processing of S20, if the section has not been changed (S20: No), the processing of S21 is skipped. After the processing of S20 and S21, it is confirmed whether automatic pattern switching is on (S22). The automatic pattern switching is a setting of whether to switch the performance pattern Pa based on the rhythm of depression/release of the key 2a mentioned above.
In the processing of S22, if the automatic pattern switching is on (S22: Yes), it is confirmed whether a first period has passed since the last determination of rhythm level by the processing of S24 to S26 (described later in detail) (S23). In the processing of S23, if the first period has passed since the last determination of rhythm level (S23: Yes), a rhythm is acquired from the input information in the input information memory 12c within the most recent first period, which is input information of performance information corresponding to the rhythm key range kH in the rhythm key range memory 12a (S24).
Specifically, in the processing of S24, the input information within the most recent first period is acquired from the input information memory 12c. In the acquired input information, the input information of performance information corresponding to the rhythm key range kH is further acquired. From the acquired input information, the rhythm, that is, note duration, note spacing and number of sounds, are acquired by the method mentioned above in
After the processing of S24, a similarity between the rhythm acquired in the processing of S24 and each rhythm pattern in the rhythm table 11b is calculated (S25). Specifically, as mentioned above in
After the processing of S25, a rhythm level corresponding to a rhythm pattern having the highest similarity among the calculated similarities for each rhythm pattern is acquired from the rhythm table 11b and saved in the rhythm level memory 12d (S26). Accordingly, a rhythm level corresponding to a rhythm pattern most similar to the rhythm detected from depression/release of the key 2a within the most recent first period is saved in the rhythm level memory 12d.
In the processing of S23, if the first period has not passed since the last determination of rhythm level (S23: No), the processing of S24 to S26 is skipped.
In the processing of S22, if the automatic pattern switching is off (S22: No), it is confirmed whether the rhythm level has been changed by the performer via the setting button 3 (S27). In the processing of S27, if the rhythm level has been changed by the performer (S27: Yes), the changed rhythm level is saved in the rhythm level memory 12d (S28). On the other hand, in the processing of S27, if the rhythm level has not been changed by the performer (S27: No), the processing of S28 is skipped.
After the processing of S23 and S26 to S28, it is confirmed whether the value in the rhythm level memory 12d or the section memory 12e has been changed by the processing of S20 to S28 (S29). In the processing of S29, if it is confirmed that the value in the rhythm level memory 12d or the section memory 12e has been changed (S29: Yes), the performance pattern Pa corresponding to the rhythm level in the rhythm level memory 12d and the section in the section memory 12e is acquired from the style table 11c (S30).
After the processing of S30, the performance pattern Pa outputted for automatic performance is switched to the performance pattern Pa acquired in the processing of S30 (S31). When the performance pattern Pa is switched by the processing of S31, automatic performance according to the performance pattern Pa acquired in the processing of S30 is started only after automatic performance according to the performance pattern Pa before switching has been performed to its end. Accordingly, switching from a performance pattern Pa being automatically performed to another performance pattern Pa in the middle of the automatic performance is prevented. Thus, the listener may experience less discomfort with respect to switching of the performance pattern Pa.
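A minimal sketch of deferring the switch until the pattern being automatically performed reaches its end; the class and method names are invented for illustration.

```python
class PatternPlayer:
    """Defers a requested pattern switch so that the pattern being automatically
    performed is never replaced in the middle of its performance."""

    def __init__(self, pattern: str):
        self.current = pattern
        self.pending = None

    def request_switch(self, new_pattern: str):
        self.pending = new_pattern                # do not switch immediately (S31)

    def on_pattern_end(self):
        """Called when the pattern being automatically performed reaches its end."""
        if self.pending is not None:
            self.current, self.pending = self.pending, None
        return self.current                       # pattern used for the next repetition

player = PatternPlayer("Pa_L1_m1")
player.request_switch("Pa_L2_m1")
print(player.on_pattern_end())   # -> Pa_L2_m1
```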
In the processing of S29, if it is confirmed that neither the value in the rhythm level memory 12d nor the value in the section memory 12e has been changed (S29: No), the processing of S30 and S31 is skipped. After the processing of S29 and S31, the performance pattern switching processing is ended.
In the performance pattern volume changing processing, first, it is confirmed whether automatic volume changing is on (S40). In the processing of S40, if the automatic volume changing is on (S40: Yes), it is confirmed whether a second period has passed since the last time the processing of S42 and S43 (described later in detail) was performed, that is, since the last determination of volume (S41). In the processing of S41, if the second period has passed since the last determination of volume (S41: Yes), the average value V of the velocity is acquired from the input information in the input information memory 12c within the most recent second period, which is the input information of performance information corresponding to the velocity key range kV in the velocity key range memory 12b (S42).
Specifically, in the processing of S42, the input information within the most recent second period is acquired from the input information memory 12c. In the acquired input information, the input information of performance information corresponding to the velocity key range kV is further acquired. Each velocity is acquired from the performance information in the acquired input information. By averaging the acquired velocities, the average value V of the velocity is acquired.
After the processing of S42, the volume of each performance part is determined from the acquired average value V of the velocity and is saved in the volume memory 12f (S43). Specifically, as mentioned above in
A set value of the volume set by the setting button 3 is acquired for each performance part. By adding the calculated change amount for each performance part to each acquired set value of the volume, the volume after change of each performance part is calculated. Each calculated volume after change is saved in the volume memory 12f. Accordingly, the volume of each performance part of the performance pattern Pa set according to the velocity of depression/release of the key 2a is saved in the volume memory 12f.
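Combining S42 and S43 into one sketch, under assumptions already noted (placeholder part names, clamping to 0 to 127, and keeping the set values when no key in the velocity key range was played within the second period):

```python
def determine_part_volumes(input_information, velocity_key_range, set_volumes, weights,
                           now: float, second_period: float = 3.0):
    """S42/S43 in one sketch: average the velocities of note-ons in the velocity key
    range within the most recent second period, then derive each part's volume.
    input_information is a list of (time, pitch, velocity) note-on entries."""
    lo, hi = velocity_key_range
    velocities = [vel for t, pitch, vel in input_information
                  if now - second_period <= t <= now and lo <= pitch <= hi]
    if not velocities:
        return dict(set_volumes)                   # nothing played: keep the set values
    dv = sum(velocities) / len(velocities) - 64    # differential value from the intermediate value
    return {part: max(0, min(127, set_volumes[part] + weights[part] * dv))
            for part in set_volumes}

print(determine_part_volumes([(0.5, 72, 100), (1.2, 76, 90)], (60, 96),
                             {"drum": 100, "bass": 90}, {"drum": 0.5, "bass": 0.2}, now=2.0))
# -> drum 115.5, bass 96.2
```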
In the processing of S41, if the second period has not passed since the last determination of volume (S41: No), the processing of S42 and S43 is skipped. In the processing of S40, if the automatic volume changing is off (S40: No), it is confirmed whether the volume of any of the performance parts of the performance pattern Pa has been changed by the performer via the setting button 3 (S44).
In the processing of S44, if the volume of any of the performance parts has been changed (S44: Yes), the changed volume of the performance part is saved in the volume memory 12f (S45). On the other hand, in the processing of S44, if no change has occurred in the volume of any performance part (S44: No), the processing of S45 is skipped.
After the processing of S41 and S43 to S45, it is confirmed whether the value in the volume memory 12f has been changed by the processing of S40 to S45 (S46). In the processing of S46, if it is confirmed that the value in the volume memory 12f has been changed (S46: Yes), the volume of each performance part in the volume memory 12f is applied to the volume of each performance part of the performance pattern Pa being automatically performed (S47).
At this time, the volume after change is immediately applied to each performance part of the performance pattern Pa being automatically performed. Accordingly, the volume of the performance pattern Pa can be changed following a change in the velocity at the time of depression of the key 2a. Thus, the performance pattern Pa can be automatically performed at an appropriate volume that follows the liveliness or delicateness of the performer's performance.
On the other hand, if it is confirmed in the processing of S46 that the value in the volume memory 12f has not been changed (S46: No), the processing of S47 is skipped. After the processing of S46 and S47, the performance pattern volume changing processing is ended.
Although the disclosure has been described above based on the above embodiments, it can be easily inferred that various improvements or modifications may be made.
In the above embodiments, the rhythm level such as level L1 and level L2 is acquired according to the rhythm of depression/release of the key 2a in the processing of S24 to S26 of
Accordingly, a rhythm level is acquired such that the greater the velocity at the time of depression of the key 2a, the more complex the rhythm. Thus, in the case of a lively performance having a great velocity at the time of depression of the key 2a, automatic performance of the performance pattern Pa corresponding to a complex rhythm can be outputted so as to spur the performance on. On the other hand, in the case of a delicate performance having a small velocity at the time of depression of the key 2a, automatic performance of the performance pattern Pa corresponding to a simple rhythm, which does not destroy the atmosphere of the performance, can be outputted.
In the above embodiments, in the processing of S43 of
In this case, it suffices that the simpler the rhythm, the smaller the value set as the "numerical value corresponding to the rhythm level", and the more complex the rhythm, the greater the value. For example, "−5" may be set as the "numerical value corresponding to the rhythm level" for level L1, at which the rhythm is simplest, "0" for level L2, and "5" for level L3.
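As a worked illustration of this variant, using the example numerical values −5, 0, and 5 from the preceding paragraph and placeholder weight coefficients:

```python
LEVEL_VALUE = {"L1": -5, "L2": 0, "L3": 5}   # numerical value corresponding to the rhythm level

def volume_from_rhythm_level(set_volumes: dict, weights: dict, rhythm_level: str) -> dict:
    """Variant in which the volume is changed from the rhythm level instead of the
    velocity: set value + weight coefficient * (numerical value for the level)."""
    v = LEVEL_VALUE[rhythm_level]
    return {part: set_volumes[part] + weights[part] * v for part in set_volumes}

print(volume_from_rhythm_level({"drum": 100, "bass": 90}, {"drum": 1.0, "bass": 0.5}, "L3"))
# -> drum 105.0, bass 92.5: a complex rhythm raises the volume, a simple one lowers it
```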
Accordingly, in the case of a lively performance having a fast rhythm in which depression/release of the key 2a is repeated quickly, it is possible to output automatic performance of the performance pattern Pa having a loud volume so as to spur this performance. On the other hand, in the case of a delicate performance having a slow rhythm in which the key 2a is slowly depressed/released, it is possible to output automatic performance of the performance pattern Pa having a small volume that does not destroy the atmosphere of this performance.
In setting the volume of each performance part after change, both the differential value ΔV of the velocity and the rhythm of depression/release of the key 2a may be used. It is also possible to mix a performance part in which the volume after change is set using only the differential value ΔV of the velocity, a performance part in which the volume after change is set using only the rhythm of depression/release of the key 2a, and a performance part in which the volume after change is set using both the differential value ΔV of the velocity and the rhythm of depression/release of the key 2a.
In the above embodiments, in (d) and (g) of
In the above embodiments, in (a) to (c) of
In the above embodiments, there is no definition of a tempo for the rhythm pattern. However, the disclosure is not limited thereto. For example, an initial value of the tempo of the rhythm pattern may be set to 120 beats per minute (BPM), and the performer may be allowed to change the tempo of the rhythm pattern using the setting button 3. If the tempo is changed, it suffices if an actual time length of the musical notes and rests included in the rhythm pattern is corrected accordingly.
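A one-line illustration of that correction, assuming note values are held in beats and the initial tempo of 120 BPM:

```python
def corrected_length(note_value_in_beats: float, tempo_bpm: float = 120.0) -> float:
    """Actual time length (seconds) of a note or rest in a rhythm pattern.
    At the assumed initial tempo of 120 BPM a quarter note (1 beat) lasts 0.5 s;
    changing the tempo rescales every duration accordingly."""
    return note_value_in_beats * 60.0 / tempo_bpm

print(corrected_length(1.0, 120.0), corrected_length(1.0, 90.0))   # -> 0.5 s, about 0.667 s
```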
In the above embodiments, in the processing of S25 of
The indicator representing the rhythm or the similarity is not limited to being calculated based on note duration, note spacing, and number of sounds. For example, the indicator or the similarity may be calculated based on note duration and note spacing, or may be calculated based on note duration and number of sounds, or may be calculated based on note spacing and number of sounds, or may be calculated based on only one of note duration, note spacing and number of sounds. The similarity may be calculated based on note duration, note spacing, number of sounds, and other indicators representing the rhythm.
In the above embodiments, note durations and note spacings are set in numbers corresponding to the sounds included in the rhythm pattern. However, the disclosure is not limited thereto. For example, only average values of the note durations and note spacings of the sounds included in the rhythm pattern may be set. In this case, in calculating the similarity, similarities may be respectively calculated between the average values of the note durations and note spacings detected from depression/release of the key 2a within the most recent first period and the average values of the note durations and note spacings set in the rhythm pattern. Instead of the average values, other values such as maximum values, minimum values, or intermediate values of the note durations and note spacings may be used. Furthermore, the average value of the note durations and the maximum value of the note spacings may be set, or the minimum value of the note durations and the average value of the note spacings may be set in the rhythm pattern.
In the above embodiments, in the case where a plurality of note durations or note spacings are included in the rhythm pattern, the average value of the individually acquired scores for note duration or note spacing is taken as the score for note duration or note spacing. However, the disclosure is not limited thereto. Other values such as the maximum value or the minimum value or the intermediate value of the individually acquired scores for note duration or note spacing may also be taken as the score for note duration or note spacing.
In the above embodiments, the similarity is the sum total of the scores for note duration, scores for note spacing and scores for number of sounds. However, the disclosure is not limited thereto. For example, the scores for note duration, note spacing and number of sounds may each be multiplied by a weight coefficient, and the similarity may be a sum total of the scores obtained by multiplication by the weight coefficient. In this case, the weight coefficient for each of note duration, note spacing and number of sounds may be varied according to the section in the section memory 12e.
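A sketch of such a weighted sum; the per-section weight values are invented for illustration, since the embodiment only states that the weights may be varied according to the section in the section memory 12e.

```python
# Hypothetical per-section weights for the three indicators.
SECTION_WEIGHTS = {
    "intro":  {"duration": 1.0, "spacing": 1.0, "count": 1.0},
    "main1":  {"duration": 0.5, "spacing": 1.5, "count": 1.0},
    "ending": {"duration": 1.5, "spacing": 0.5, "count": 1.0},
}

def weighted_similarity(duration_score: float, spacing_score: float, count_score: float,
                        section: str) -> float:
    """Sum of the three indicator scores after multiplying each by its weight coefficient."""
    w = SECTION_WEIGHTS[section]
    return (w["duration"] * duration_score
            + w["spacing"] * spacing_score
            + w["count"] * count_score)

print(weighted_similarity(4, 5, 3, "main1"))   # -> 12.5
```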
Accordingly, when acquiring the rhythm pattern most similar to the rhythm detected from depression/release of the key 2a, it is possible to vary which indicator among note duration, note spacing, and number of sounds is to be emphasized for each section. Thus, automatic performance of the performance pattern Pa in a mode relatively matching the section is possible.
In the above embodiments, the performer manually changes the section with the setting button 3 by the processing of S20 and S21 of
Alternatively, a program of sections to be switched may be stored in advance (for example, intro → main 1 → main 2 performed twice → main 1 performed three times → … → ending), and automatic performance of the performance patterns Pa of the corresponding sections may be performed in the order stored.
In the above embodiments, the first period and the second period are set to the same time. However, the disclosure is not limited thereto. The first period and the second period may be set as different times. In the processing of S24 of
Similarly, in the processing of S42 of
In the above embodiments, in the case of switching the performance pattern Pa in the processing of S31 of
For example, when the processing of S31 is executed, if the performance pattern Pa being automatically performed is in the middle of a certain beat, automatic performance may be performed in this performance pattern Pa until this beat, and automatic performance according to the performance pattern Pa acquired in the processing of S30 may be started from the next beat.
In the above embodiments, in the processing of S47 of
In the above embodiments, in (j) of
In the above embodiments, the synthesizer 1 is illustrated as an example of the automatic performance device. However, the disclosure is not limited thereto, and may be applied to an electronic musical instrument such as an electronic organ or an electronic piano, in which the performance pattern Pa can be automatically performed along with musical sounds produced by the performer's performance.
In the above embodiments, the performance information is configured to be inputted from the keyboard 2. However, instead of this, a configuration is possible in which an external keyboard of the MIDI standard may be connected to the synthesizer 1 and the performance information may be inputted from such a keyboard. Alternatively, a configuration is possible in which the performance information may be inputted from MIDI data stored in the flash ROM 11 or the RAM 12.
In the above embodiments, as the performance pattern Pa used for automatic performance, an example is given where notes are set in chronological order. However, the disclosure is not limited thereto. For example, voice data of human singing voices or applause or animal cries or the like may also be taken as the performance pattern Pa used for automatic performance.
In the above embodiments, an accompaniment sound or musical sound is configured to be outputted from the sound source 13, the DSP 14, the DAC 16, the amplifier 17 and the speaker 18 provided in the synthesizer 1. However, instead of this, a configuration is possible in which a sound source device of the MIDI standard may be connected to the synthesizer 1, and an accompaniment sound or musical sound of the synthesizer 1 may be inputted from such a sound source device.
In the above embodiments, the control program 11a is stored in the flash ROM 11 of the synthesizer 1 and is configured to be operated on the synthesizer 1. However, the disclosure is not limited thereto, and the control program 11a may be configured to be operated on any other computer such as a personal computer (PC), a mobile phone, a smartphone or a tablet terminal. In this case, the performance information may be inputted from, instead of the keyboard 2 of the synthesizer 1, a keyboard of the MIDI standard or a keyboard for text input connected to the PC or the like in a wired or wireless manner, or the performance information may be inputted from a software keyboard displayed on a display device of the PC or the like.
The numerical values mentioned in the above embodiments are examples, and it is of course possible that other numerical values may be used.