This application claims the priority benefit of Japan application no. 2020-112612, filed on Jun. 30, 2020. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The disclosure relates to a non-transitory computer-readable storage medium stored with an automatic music arrangement program and an automatic music arrangement device.
Patent Document 1 discloses an automatic music arrangement device that generates a new performance information file by identifying, among the notes included in a performance information file 24, notes serving as chord component sounds whose production starts at the same time, and deleting, in order from the lowest pitch to the highest, the notes exceeding a predetermined threshold among the identified notes. Accordingly, the number of chords produced at the same time in the generated new performance information file is smaller than in the performance information file 24, and thus a performer can perform the new performance information file easily.
However, there are cases in which notes of multiple pitches that are not produced at the same time are recorded in a performance information file 24 such that they partly overlap one another. When such a performance information file 24 is input to the automatic music arrangement device disclosed in Patent Document 1, the production of those partly overlapping notes does not start at the same time, and thus the notes are not recognized as chord component sounds. In such a case, the number of notes is not decreased, and the notes are output to the new performance information file as they are; consequently, there is a problem in that a musical score that can be easily performed cannot be generated from the performance information file.
The disclosure provides an automatic music arrangement program and an automatic music arrangement device capable of generating, from musical piece data, arranged data in which the number of sounds produced at the same time is decreased and which can therefore be easily performed.
According to an embodiment of the disclosure, there is provided a non-transitory computer-readable storage medium stored with an automatic music arrangement program causing a computer to execute a process of music arrangement of a musical piece data, the automatic music arrangement program causing the computer to execute: a musical piece acquiring step of acquiring the musical piece data; a melody acquiring step of acquiring notes of a melody part from the musical piece data acquired in the musical piece acquiring step; an outer voice identifying step of identifying a note having a highest pitch among notes of which start times of sound production are approximately the same as an outer voice note, among the notes acquired in the melody acquiring step; an inner voice identifying step of identifying a note of which sound production starts within a sound production period of the outer voice note identified in the outer voice identifying step and of which a pitch is lower than that of the outer voice note as an inner voice note, among the notes acquired in the melody acquiring step; an arranged melody generating step of generating an arranged melody part by deleting the inner voice note identified in the inner voice identifying step from the notes acquired in the melody acquiring step; and an arranged data generating step of generating an arranged data on a basis of the melody part generated in the arranged melody generating step.
According to another embodiment of the disclosure, there is provided a non-transitory computer-readable storage medium stored with an automatic music arrangement program causing a computer to execute a process of music arrangement of a musical piece data, the automatic music arrangement program causing the computer to execute: a musical piece acquiring step of acquiring the musical piece data; a chord information acquiring step of acquiring chords and sound production timings of the chords from the musical piece data acquired in the musical piece acquiring step; a note name acquiring step of acquiring note names of root notes of the chords acquired in the chord information acquiring step; a range changing step of changing a position in a pitch of a pitch range that is a predetermined range of pitches by one semitone each time; a candidate accompaniment generating step of generating candidate accompaniment parts that are candidates for an accompaniment part from sounds of pitches corresponding to the note names which are acquired in the note name acquiring step in the pitch range, and the sound production timings of the chords which are acquired in the chord information acquiring step corresponding to the sounds, for each pitch range changed in the range changing step; a selection step of selecting an arranged accompaniment part among the candidate accompaniment parts on a basis of the pitches of sounds included in the candidate accompaniment parts generated in the candidate accompaniment generating step; and an arranged data generating step of generating an arranged data on a basis of the accompaniment part selected in the selection step.
In addition, according to another embodiment of the disclosure, there is provided an automatic music arrangement device including: a musical piece acquiring portion, configured to acquire a musical piece data; a melody acquiring portion, configured to acquire notes of a melody part from the musical piece data acquired by the musical piece acquiring portion; an outer voice identifying portion, configured to identify a note having a highest pitch among notes of which start times of sound production are approximately the same as an outer voice note, among the notes acquired by the melody acquiring portion; an inner voice identifying portion, configured to identify a note of which sound production starts within a sound production period of the outer voice note identified by the outer voice identifying portion and of which a pitch is lower than that of the outer voice note as an inner voice note, among the notes acquired by the melody acquiring portion; an arranged melody generating portion, configured to generate an arranged melody part by deleting the inner voice note identified by the inner voice identifying portion from the notes acquired by the melody acquiring portion; and an arranged data generating portion, configured to generate an arranged data on a basis of the melody part generated by the arranged melody generating portion.
According to another embodiment of the disclosure, there is provided an automatic music arrangement device including: a musical piece acquiring portion, configured to acquire a musical piece data; a chord information acquiring portion, configured to acquire chords and sound production timings of the chords from the musical piece data acquired by the musical piece acquiring portion; a note name acquiring portion, configured to acquire note names of root notes of the chords acquired by the chord information acquiring portion; a range changing portion, configured to change a position in a pitch of a pitch range that is a predetermined range of pitches by one semitone each time; a candidate accompaniment generating portion, configured to generate candidate accompaniment parts that are candidates for an accompaniment part from sounds of pitches corresponding to the note names which are acquired by the note name acquiring portion in the pitch range, and the sound production timings of the chords which are acquired by the chord information acquiring portion corresponding to the sounds, for each pitch range changed by the range changing portion; a selection portion, configured to select an arranged accompaniment part among the candidate accompaniment parts on a basis of the pitches of sounds included in the candidate accompaniment parts generated by the candidate accompaniment generating portion; and an arranged data generating portion, configured to generate an arranged data on a basis of the accompaniment part selected by the selection portion.
In
In
In
In
In
Hereinafter, a preferred embodiment will be described with reference to the accompanying drawings. An overview of a PC 1 according to this embodiment will be described with reference to
In the musical piece data M, performance data P in which performance information of a musical piece according to a musical instrument digital interface (MIDI) format is stored and chord data C in which chord progression in the musical piece is stored are provided. In this embodiment, a melody part Ma that is a main melody of a musical piece and is performed by a user H using his or her right hand is acquired from performance data P of musical piece data M, and an arranged melody part Mb acquired by decreasing the number of notes that are produced at the same time for the acquired melody part Ma is generated.
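As an illustration only (not the actual format of the musical piece data M), the relationship between the performance data P and the chord data C can be sketched as follows, assuming MIDI-like (start time, stop time, note number) tuples for notes and (sound production timing, chord name) pairs for chords; the field layout is an assumption introduced here for explanation:

```python
# Illustrative sketch of the musical piece data M: performance data P in a
# MIDI-like form and chord data C holding the chord progression.
musical_piece_data = {
    # performance data P: one (start time, stop time, note number) per note
    "performance": [
        (0, 480, 72),   # C5, a melody note
        (0, 480, 64),   # E4, produced together with C5
    ],
    # chord data C: (sound production timing, chord name) per chord
    "chords": [
        (0, "C"),
        (480, "F"),
    ],
}
```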
In addition, an arranged accompaniment part Bb that is an accompaniment sound of a musical piece and is performed by the user H using his or her left hand is generated from root notes of chords and the like acquired from the chord data C of the musical piece data M. Then, arranged data A is generated from the arranged melody part Mb and the arranged accompaniment part Bb. First, a technique for generating the arranged melody part Mb will be described with reference to
In
In the melody part Ma, the note N1, which has the highest pitch and a long sound production period, starts to be produced together with the note N2; during the production of the note N1, the production of the note N2 stops, the notes N3 and N4 start and stop, and the note N5 starts. When a musical score is generated on the basis of such a melody part Ma, the notes N2 to N5 need to be produced during the production of the note N1, and it becomes difficult for a user H to perform the musical score.
In this embodiment, the number of sounds to be produced at the same time in such a melody part Ma is decreased. More specifically, first, notes of which sound production is started at the same time in the melody part Ma are acquired. In (a) of
Then, a note having the highest pitch among the acquired notes is identified as an outer voice note Vg, and a note having a pitch lower than that of the outer voice note Vg is identified as an inner voice note Vi. In (a) of
In addition, notes of which start of sound production and stop of sound production are performed within a sound production period of the note identified as the outer voice note Vg are acquired and are additionally identified as inner voice notes Vi. In (a) of
Then, by deleting the notes that are identified as inner voice notes Vi from the melody part Ma, an arranged melody part Mb is generated. In (b) of
In accordance with this, in the arranged melody part Mb, among the sounds produced at the same time as the note N1 that is the outer voice note Vg, the notes N2 to N4, whose sound production starts and stops within the sound production period of the note N1, are deleted, and thus the number of sounds produced at the same time in the entire melody part Mb can be decreased. In addition, the outer voice note Vg included in the melody part Mb is the high-pitched sound that is heard most conspicuously by a listener in the melody part Ma of the musical piece data M. Accordingly, the arranged melody part Mb can be maintained like the melody part Ma of the musical piece data M.
Here, the production of the note N5, which remains in the arranged melody part Mb together with the note N1, starts within the sound production period of the note N1 and stops after the production of the note N1 stops. Because such notes remain in the arranged melody part Mb, the melody part Mb can maintain the tune, that is, the change in pitch, of the melody part Ma of the musical piece data M.
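The deletion of inner voice notes described above can be sketched as follows. This is a minimal illustration, assuming notes given as (start, stop, pitch) tuples; the function name `arrange_melody` is an assumption introduced here, not the implementation of the device:

```python
def arrange_melody(notes):
    """Delete inner voice notes Vi from a melody part given as
    (start, stop, pitch) tuples, keeping the outer voice notes Vg."""
    deleted = set()
    for n, (start, stop, pitch) in enumerate(notes):
        if n in deleted:          # a deleted note no longer acts as the N-th note
            continue
        for i, (s, e, p) in enumerate(notes):
            if i == n or i in deleted or p >= pitch:
                continue
            if s == start:                        # same start time, lower pitch
                deleted.add(i)
            elif start < s < stop and e <= stop:  # produced entirely within Vg
                deleted.add(i)
    return [note for i, note in enumerate(notes) if i not in deleted]

# Notes N1-N5 of the example: N2 starts together with N1, N3 and N4 start and
# stop inside N1's sound production period, N5 starts inside it but stops later.
ma = [(0, 8, 72), (0, 2, 60), (2, 4, 62), (4, 6, 64), (6, 10, 67)]
mb = arrange_melody(ma)  # N2-N4 are deleted; N1 and N5 remain
```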
Next, a technique for generating an arranged accompaniment part Bb will be described with reference to
More specifically, as illustrated in
In this embodiment, the candidate accompaniment parts BK1 to BK12 are generated from ranges of pitches acquired by shifting the pitch range by one semitone each time. More specifically, the pitch range of the candidate accompaniment part BK1 according to this embodiment is set to a range of pitches corresponding to one octave from C4 (note number 60) down to C#3 (note number 49), and the candidate accompaniment part BK1 is generated in this range. In other words, in a case in which the progression of the note names of root notes or denominator-side note names of chords acquired from the chord data C is "C→F→G→C," the sounds "C4→F3→G3→C4" of the pitches corresponding to these note names within the pitch range described above are acquired, and a part in which these sounds are arranged so that they are produced at the sound production timings of the corresponding chords in the chord data C is regarded as the candidate accompaniment part BK1.
In the candidate accompaniment part BK2, which follows the candidate accompaniment part BK1, the pitch range is set to a range of pitches one semitone lower than that of the candidate accompaniment part BK1. In other words, in the candidate accompaniment part BK2, B3 (note number 59) to C3 (note number 48) is set as the pitch range. Thus, "C3→F3→G3→C3" is generated as the candidate accompaniment part BK2.
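The generation of the candidate accompaniment parts can be sketched as follows, assuming that chord roots are given as note names; the helper names `pitch_in_range` and `candidate_parts` are illustrative assumptions, and in the actual part each pitch would additionally carry the sound production timing of its corresponding chord:

```python
NOTE_NUMBERS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def pitch_in_range(note_name, highest):
    """Note number of `note_name` within the one-octave range whose highest
    note is `highest` (e.g. 60 = C4, giving the range C#3 to C4)."""
    return highest - (highest - NOTE_NUMBERS[note_name]) % 12

def candidate_parts(root_names, highest=60, count=12):
    """BK1..BK12: the same progression in ranges lowered one semitone each."""
    return [[pitch_in_range(name, highest - k) for name in root_names]
            for k in range(count)]

bks = candidate_parts(["C", "F", "G", "C"])
# BK1 (range C#3-C4): [60, 53, 55, 60], i.e. C4-F3-G3-C4
# BK2 (range C3-B3):  [48, 53, 55, 48], i.e. C3-F3-G3-C3
```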
While the pitch range is shifted down by one semitone each time, the candidate accompaniment parts BK3 to BK12 are generated in the same manner. In this way, multiple accompaniment parts acquired by changing the pitch range over 12 semitones, that is, one octave, are generated as the candidate accompaniment parts BK1 to BK12. An arranged accompaniment part Bb is selected from among the candidate accompaniment parts BK1 to BK12 generated in this way. A technique for selecting an arranged accompaniment part Bb will be described with reference to
First, pitch differences D1 to D8 between notes NN1 to NN4 composing the candidate accompaniment part BKn and notes NM1 to NM8 of the arranged melody part Mb that are produced at the same time are calculated, and a standard deviation S according to the calculated pitch differences D1 to D8 is calculated. As a technique for calculating the standard deviation S, a known technique is applied, and thus detailed description thereof will be omitted.
Next, an average value Av of the pitches of the notes NN1 to NN4 composing the candidate accompaniment part BKn is calculated, and a difference value D that is the absolute value of the difference between the average value Av and a specific pitch (note number 53 in this embodiment) is calculated. Next, a keyboard range W that is the pitch difference between the highest pitch and the lowest pitch among the notes NN1 to NN4 composing the candidate accompaniment part BKn is calculated. In addition, the specific pitch used in the calculation of the difference value D is not limited to note number 53 and may be lower or higher than 53.
An evaluation value E is calculated using the following Equation 1 on the basis of the standard deviation S, the difference value D, and the keyboard range W that have been calculated.
E=(S*100000)+(D*1000)+W (Equation 1)
Here, coefficients by which the standard deviation S, the difference value D, and the keyboard range W are multiplied in Equation 1 are not limited to those represented above, and arbitrary values may be used as appropriate.
Such evaluation values E are calculated for all the candidate accompaniment parts BK1 to BK12, and one of the candidate accompaniment parts BK1 to BK12 that has the smallest evaluation value E is selected as the arranged accompaniment part Bb.
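The calculation of the evaluation value E and the selection of the smallest one can be sketched as follows, assuming the pitch differences D1 to Dn to the simultaneously produced melody notes have already been paired up; since the description does not specify whether the population or sample standard deviation is used, the population form is assumed here:

```python
import statistics

def evaluation(part_pitches, pitch_diffs, specific=53):
    """Evaluation value E of one candidate accompaniment part (Equation 1)."""
    s = statistics.pstdev(pitch_diffs)                 # standard deviation S
    d = abs(statistics.mean(part_pitches) - specific)  # difference value D
    w = max(part_pitches) - min(part_pitches)          # keyboard range W
    return s * 100000 + d * 1000 + w

def select_accompaniment(candidates, diffs_for_each):
    """Select the candidate whose evaluation value E is the smallest."""
    return min(zip(candidates, diffs_for_each),
               key=lambda cd: evaluation(cd[0], cd[1]))[0]

# A part lying exactly at the specific pitch, with uniform differences to the
# melody, evaluates to 0:
print(evaluation([53, 53, 53, 53], [12, 12, 12, 12]))  # -> 0.0
```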
As described above, the candidate accompaniment parts BK1 to BK12 are composed of only the note names of root notes or the denominator-side note names of the chords of the chord data C of the musical piece data M. In accordance with this, also for the candidate accompaniment parts BK1 to BK12 performed by the user using his or her left hand, the number of sounds produced at the same time as a whole can be decreased.
Here, the chords of the chord data C of the musical piece data M represent chord progression of the musical piece, and a root note or a denominator-side sound of a chord is a sound that forms a basis of the chord. Thus, by composing the candidate accompaniment parts BK1 to BK12 using root notes or denominator-side sounds of the chords, the chord progression of the musical piece data M can be appropriately maintained.
Evaluation values E are calculated on the basis of the candidate accompaniment parts BK1 to BK12 generated in this way, and the candidate accompaniment part having the smallest evaluation value E is selected as the arranged accompaniment part Bb. More specifically, by selecting a candidate accompaniment part whose standard deviation S, a component of the evaluation value E, is small as the arranged accompaniment part Bb, the one of the candidate accompaniment parts BK1 to BK12 having a small pitch difference from the melody part Mb described above is selected as the accompaniment part Bb. In accordance with this, an accompaniment part is selected for which the distance between the right hand of the user H, which performs the melody part Mb, and the left hand, which performs the accompaniment part, is small, and for which the imbalance of movement between the right hand and the left hand is small; thus, arranged data A that can be easily performed even by a user H who is a beginner is formed.
By setting a candidate accompaniment part whose difference value D, a component of the evaluation value E, is small as the arranged accompaniment part Bb, the pitch difference between the sounds included in the accompaniment part Bb and the sound of the specific pitch (in other words, note number 53) can be decreased. In accordance with this, the movement of the left hand of the user H performing the accompaniment part Bb can be limited to the vicinity of the sound of the specific pitch, and thus arranged data A that can be easily performed is formed.
In addition, by setting a candidate accompaniment part whose keyboard range W, a component of the evaluation value E, is small as the arranged accompaniment part Bb, the difference between the sound of the highest pitch and the sound of the lowest pitch included in the accompaniment part Bb can be decreased. In accordance with this, the maximum amount of movement of the left hand of the user H performing the accompaniment part Bb can be decreased, and thus arranged data A that can be easily performed is formed.
The evaluation value E is a value acquired by adding the standard deviation S, the difference value D, and the keyboard range W. Thus, by selecting one of the candidate accompaniment parts BK1 to BK12 as the accompaniment part Bb in accordance with such an evaluation value E, a well-balanced accompaniment part that can be easily performed by the user H can be selected: one for which the distance between the right hand of the user H, which performs the melody part Mb, and the left hand, which performs the accompaniment part Bb, is short, the pitch difference between the sounds included in the accompaniment part Bb and the sound of the specific pitch is small, and the difference between the sound of the highest pitch and the sound of the lowest pitch included in the accompaniment part Bb is small.
Next, the electric configuration of the PC 1 will be described with reference to
The CPU 20 is an arithmetic operation device that controls each part connected through the bus line 23. The HDD 21 is a rewritable nonvolatile storage device that stores programs executed by the CPU 20, fixed-value data, and the like and stores an automatic music arrangement program 21a and musical piece data 21b. When the automatic music arrangement program 21a is executed by the CPU 20, a main process of (a) of
In
In
Description will be continued with reference back to (a) of
In the melody data 22a, the melody part Ma of the musical piece data M described above or the arranged melody part Mb is stored. The data structure of the melody data 22a is the same as that of the performance data 21b1 described above with reference to (b) of
In the input chord data 22b, chord data C acquired from the chord data 21b2 of the musical piece data 21b described above is stored. The data structure of the input chord data 22b is the same as that of the chord data 21b2 described above with reference to (a) of
The candidate accompaniment table 22c is a data table in which the candidate accompaniment parts BK1 to BK12 described above with reference to
In
In
Next, a main process executed by the CPU 20 of the PC 1 will be described with reference to
In the main process, first, musical piece data M is acquired from musical piece data 21b (S1). A place from which the musical piece data M is acquired is not limited to the musical piece data 21b and, for example, the musical piece data M may be acquired from another PC or the like through a communication device not illustrated in the drawing.
After the process of S1, a quantization process is performed on the acquired musical piece data M, and a transposition process into C Major or A Minor is performed (S2). The quantization process is a process of correcting a slight difference between sound production timings when real-time recording is performed.
There are cases in which notes included in the musical piece data M are recorded by recording an actual performance, and, in such cases, a sound production timing may slightly deviate. Thus, by performing a quantization process on the musical piece data M, a start time and a stop time of sound production of each note included in the musical piece data M can be corrected, and thus, notes of which sound production starts at the same time among notes included in the musical piece data M and the like can be accurately identified, and the outer voice note Vg and the inner voice note Vi described above with reference to
In addition, by performing a transposition process on the musical piece data M into C Major or A Minor, in a case in which arranged data A acquired by arranging the musical piece data M is performed by a keyboard instrument, the frequency of use of chromatic keys can be reduced. Before and after the transposition process on the musical piece data M will be compared with each other with reference to (a) and (b) of
In
As illustrated in (a) of
The quantization process and the transposition process are performed using known technologies, and thus detailed description thereof will be omitted. In addition, in the process of S2, the quantization process and the transposition process need not both be performed; for example, only the quantization process may be performed, only the transposition process may be performed, or both may be omitted. Furthermore, the transposition process is not limited to conversion into "C Major", and conversion into another key such as G Major may be performed.
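As an illustration of the quantization in S2 only, the snapping of start and stop times to a grid can be sketched as follows; tick-based times and a fixed grid `step` are assumptions made here, and the transposition process is omitted:

```python
def quantize(notes, step=120):
    """Snap the start/stop time (in ticks) of each note to the nearest
    multiple of `step`, so that notes recorded slightly apart are
    recognized as starting at the same time."""
    def snap(t):
        return round(t / step) * step
    return [(snap(start), snap(stop), pitch) for start, stop, pitch in notes]

# Two chord tones recorded 10 ticks apart become simultaneous:
print(quantize([(0, 480, 60), (10, 470, 64)]))  # -> [(0, 480, 60), (0, 480, 64)]
```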
Description will be continued with reference back to (a) of
After the process of S4, a melody part process (S5) is executed. The melody part process will be described with reference to (b) of
In
After the process of S20, an N-th note is acquired from the melody data 22a (S21). After the process of S21, a note having the same start time as that of the N-th note acquired in S21 and having a pitch lower than that of the N-th note, in other words, a note of a note number smaller than the note number of the N-th note is deleted from the melody data 22a (S22). After the process of S22, a note of which sound production starts and stops within a sound production period of the N-th note and of which a pitch is lower than that of the N-th note is deleted from the melody data 22a (S23).
After the process of S23, 1 is added to the counter variable N (S24), and it is checked whether or not the counter variable N is larger than the number of notes of the melody data 22a (S25). In the process of S25, in a case in which the counter variable N is equal to or smaller than the number of the notes of the melody data 22a, the processes of S21 and subsequent steps are repeated. On the other hand, in the process of S25, in a case in which the counter variable N is larger than the number of the notes of the melody data 22a, the melody part process ends.
In other words, in accordance with the processes of S22 and S23, in a case in which the N-th note is an outer voice note Vg, a note whose start time is the same as that of the N-th note in the melody data 22a and whose pitch is lower than that of the N-th note is identified as an inner voice note Vi and is deleted from the melody data 22a. In addition, a note whose sound production starts and ends within the sound production period of the N-th note and whose pitch is lower than that of the N-th note is also identified as an inner voice note Vi and is deleted from the melody data 22a. By performing such a process for all the notes stored in the melody data 22a, an arranged melody part Mb acquired by deleting the inner voice notes Vi from the melody part Ma of the musical piece data M is stored in the melody data 22a.
Description will be continued with reference back to (a) of
In the accompaniment part process, first, "60 (C4)" is set as the highest note, which represents the note number of the highest pitch in the pitch range described above with reference to
After the process of S40, 1 is set to a counter variable M representing the position of the candidate accompaniment table 22c (in other words, “No.” illustrated in (b) of
After the process of S42, the note name of the root note of the K-th chord of the input chord data 22b or, in a case in which the K-th chord is a fraction chord, the note name of the denominator side is acquired (S43). After the process of S43, the note number corresponding to the note name acquired in the process of S43, within the range from the highest note to the lowest note of the pitch range, is acquired and is added to the chords of the M-th record of the candidate accompaniment table 22c (S44).
For example, in a case in which the highest note of the pitch range is 60 (C4) and the lowest note is 49 (C#3), when the note name acquired in the process of S43 is "C", "C4", which is the pitch corresponding to "C" within such a pitch range, is acquired, and its note number is added to the candidate accompaniment table 22c.
After the process of S44, 1 is added to the counter variable K (S45), and it is checked whether or not the counter variable K is larger than the number of chords stored in the input chord data 22b (S46). In a case in which the counter variable K is equal to or smaller than the number of the chords stored in the input chord data 22b in the process of S46 (S46: No), a chord that has not been processed is included in the input chord data 22b, and thus the processes of S43 and subsequent steps are repeated.
On the other hand, in a case in which the counter variable K is larger than the number of the chords stored in the input chord data 22b in the process of S46 (S46: Yes), it is determined that generation of the M-th accompaniment part among the candidate accompaniment parts BK1 to BK12 has been completed from the chords of the input chord data 22b. Thus, a standard deviation S according to a pitch difference between each sound of the M-th record of the generated candidate accompaniment table 22c and the sound of the arranged melody part Mb of the melody data 22a that is produced at the same time described above with reference to
After the process of S47, an average value Av of pitches of sounds included in the M-th record of the candidate accompaniment table 22c described above with reference to
After the process of S50, an evaluation value E is calculated using Equation 1 described above on the basis of the standard deviation S, the difference value D, and the keyboard range W stored in the M-th record of the candidate accompaniment table 22c and is stored in the M-th record of the candidate accompaniment table 22c (S51).
After the process of S51, in order to generate the next candidate accompaniment part, the highest note and the lowest note of the pitch range are each decreased by one, so that the pitch range is set to a range of pitches lowered by one semitone (S52). After the process of S52, 1 is added to the counter variable M (S53), and it is checked whether the counter variable M is larger than 12 (S54). In a case in which the counter variable M is equal to or smaller than 12 in the process of S54 (S54: No), there are candidate accompaniment parts among BK1 to BK12 that have not yet been generated, and thus the processes of S42 and subsequent steps are repeated.
On the other hand, in a case in which the counter variable M is larger than 12 in the process of S54 (S54: Yes), the candidate accompaniment part, among the candidate accompaniment parts BK1 to BK12, whose evaluation value E is the smallest in the candidate accompaniment table 22c is acquired, and the note numbers of the sounds composing the acquired candidate accompaniment part and the start times of the corresponding chords acquired from the input chord data 22b are stored in the output accompaniment data (S55). After the process of S55, the accompaniment part process ends.
In accordance with this, the candidate accompaniment parts BK1 to BK12, composed of only the root notes or denominator-side sounds of the chords, are generated from the chords of the input chord data 22b, and the candidate accompaniment part among them having the smallest evaluation value E is stored in the output accompaniment data 22d as the accompaniment part Bb.
Description will be continued with reference back to (a) of
After the process of S7, the arranged data A stored in the arranged data 22e is displayed in the display device 4 in the form of a musical score (S8), and the main process ends. Here, the arranged data A generated from the musical piece data M will be described with reference to (b) and (c) of
In
Then, among notes that start to be produced at the same time in such a melody part Ma, a note having the highest pitch is identified as an outer voice note Vg, a note N2 of which a pitch is lower than that of the outer voice note Vg is identified as an inner voice note Vi, and a note of which sound production starts and ends within a sound production period of the note identified as the outer voice note Vg is acquired and is additionally identified as an inner voice note Vi. Then, by deleting the inner voice notes Vi from the melody part Ma, similar to the melody part Mb illustrated in (c) of
In addition, the outer voice note Vg included in the arranged data A is a sound that has a high pitch and is heard conspicuously by a listener in the melody part Ma of the musical piece data M. In accordance with this, the melody part Mb of the arranged data A can be maintained like the melody part Ma of the musical piece data M.
On the other hand, the accompaniment part Bb of the arranged data A (in other words, a lower stage of the musical score in (c) of
Here, the chords of the chord data C of the musical piece data M represent the chord progression of the musical piece, and a root note or a denominator-side sound of a chord is a sound that forms the basis of the chord. Thus, by composing the accompaniment part Bb using the root notes or the denominator-side sounds of the chords, the chord progression of the musical piece data M can be appropriately maintained.
Generally, the frequency of changes in the sounds of the chords of the chord data C is lower than that of the accompaniment part that is originally included in the musical piece data M (in other words, the lower stage of the musical score illustrated in (b) of
Although the description above has been presented on the basis of the embodiment, it can be easily understood that various modifications and alterations can be made.
In the embodiment described above, the note having the highest pitch among notes having the same start time in the musical piece data M is selected as the outer voice note Vg. However, the configuration is not limited thereto, and a note that has the highest pitch and whose sound production time is equal to or longer than a predetermined time (for example, a time corresponding to a quarter note) among notes having the same start time in the musical piece data M may be identified as an outer voice note Vg. In accordance with this, when a chord whose sound production time is shorter than the predetermined time is produced, no outer voice note Vg is identified and such a short chord remains in the arranged melody part Mb; thus, the arranged melody part Mb can be more appropriately maintained as the melody part Ma of the musical piece data M.
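This modification can be sketched as a small change to the selection rule. The sketch is hypothetical; the (start, end, pitch) note representation, the time unit, and the names are assumptions.

```python
def outer_voice(group, min_len=1.0):
    """Among notes having the same start time, return the highest-pitch
    note as the outer voice note Vg only if its sound production time is
    at least min_len (e.g. the length of a quarter note); otherwise
    return None, so the short chord is left in the melody part as-is.

    group: list of (start, end, pitch) tuples sharing one start time.
    """
    top = max(group, key=lambda n: n[2])
    return top if top[1] - top[0] >= min_len else None
```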
In the embodiment described above, a note whose sound production starts and stops within the sound production period of the outer voice note Vg is identified as an inner voice note Vi. However, the configuration is not limited thereto, and all notes whose sound production starts within the sound production period of the outer voice note Vg may be identified as inner voice notes Vi. In addition, among notes whose sound production starts within the sound production period of the outer voice note Vg and stops after the sound production of the outer voice note Vg stops, those whose sound production times are equal to or shorter than a predetermined time (for example, a time corresponding to a quarter note) may be identified as inner voice notes Vi.
In the embodiment described above, in generating the candidate accompaniment parts BK1 to BK12, the pitch range is shifted lower by one semitone each time. However, the configuration is not limited thereto, and the pitch range may be raised by one semitone each time. In addition, the shift is not limited to one semitone each time and may be two semitones or more each time.
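The generation of the candidate accompaniment parts, including the raising and larger-step variants just mentioned, can be sketched as follows. This is a hypothetical illustration; whether the first candidate is itself shifted, and all names, are assumptions.

```python
def make_candidates(base_part, count=12, step=-1):
    """Return `count` candidate parts, the k-th transposed by k*step
    semitones. step=-1 lowers the pitch range by one semitone each time;
    step=+1 raises it instead, and |step| >= 2 shifts by two or more
    semitones each time.

    base_part: list of (start, end, pitch) tuples.
    """
    return [[(s, e, p + k * step) for (s, e, p) in base_part]
            for k in range(count)]
```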
In the embodiment described above, the state of the pitch differences between the candidate accompaniment parts BK1 to BK12 and the arranged melody part Mb is evaluated using the standard deviation S of those pitch differences. However, the evaluation is not limited thereto, and the state of the pitch differences may be evaluated using another index, such as the average value, the median value, or the variance of the pitch differences between the candidate accompaniment parts BK1 to BK12 and the arranged melody part Mb.
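The evaluation can be sketched with Python's statistics module, with pstdev playing the role of the standard deviation S; mean, median, or pvariance could be substituted for it. The sketch is hypothetical: the index-wise pairing of melody and accompaniment notes, and the rule of preferring the smallest spread, are assumptions.

```python
import statistics

def pick_candidate(candidates, melody, index=statistics.pstdev):
    """Return the candidate accompaniment part whose pitch differences
    from the arranged melody part have the smallest spread under `index`
    (population standard deviation by default).

    Notes are (start, end, pitch) tuples, paired by position.
    """
    def spread(part):
        diffs = [m[2] - p[2] for m, p in zip(melody, part)]
        return index(diffs)
    return min(candidates, key=spread)
```

Passing `index=statistics.pvariance` or `index=statistics.mean` switches the evaluation index without changing the selection machinery.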
In the embodiment described above, in the processes of S47 to S51 illustrated in
In the embodiment described above, arranged data A is generated from the arranged melody part Mb and the accompaniment part Bb. However, the configuration is not limited thereto, and arranged data A may be generated from the arranged melody part Mb and the accompaniment part extracted from the musical piece data M or may be generated from the melody part Ma of the musical piece data M and the arranged accompaniment part Bb. In addition, arranged data A may be generated only from the arranged melody part Mb, or arranged data A may be generated only from the arranged accompaniment part Bb.
In the embodiment described above, the musical piece data M is composed of the performance data P and the chord data C. However, the configuration is not limited thereto; for example, the chord data C may be omitted from the musical piece data M, chords may be recognized from the performance data P of the musical piece data M using a known technology, and the chord data C may be configured from the recognized chords.
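One simple stand-in for the known chord-recognition technology mentioned above is to match the pitch classes sounding in a section of the performance data P against triad templates. This is a hypothetical sketch only; practical recognizers are considerably more elaborate, and all names here are assumptions.

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def recognize_chord(pitches):
    """Return a chord name for a set of MIDI pitches sounding together,
    matching against root-position major and minor triads, or None."""
    pcs = {p % 12 for p in pitches}
    for root in range(12):
        if {root, (root + 4) % 12, (root + 7) % 12} == pcs:
            return NAMES[root]              # major triad
        if {root, (root + 3) % 12, (root + 7) % 12} == pcs:
            return NAMES[root] + "m"        # minor triad
    return None
```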
In the embodiment described above, in the process of S8 illustrated in (a) of
In the embodiment described above, the PC 1 has been illustrated as an example of a computer that executes the automatic music arrangement program 21a. However, the subject of execution is not limited thereto, and the automatic music arrangement program 21a may be executed by an information processing device, such as a smartphone or a tablet terminal, or by an electronic instrument. In addition, the automatic music arrangement program 21a may be stored in a ROM or the like, and the disclosure may be applied to a dedicated device (an automatic music arrangement device) that executes only the automatic music arrangement program 21a.
The numerical values represented in the embodiment described above are examples, and, naturally, other numerical values may be employed.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
2020-112612 | Jun 2020 | JP | national

Number | Name | Date | Kind
---|---|---|---
5418325 | Aoki et al. | May 1995 | A
5561256 | Aoki et al. | Oct 1996 | A
5756916 | Aoki et al. | May 1998 | A
7351903 | Ishida et al. | Apr 2008 | B2
10354628 | Watanabe | Jul 2019 | B2
20020007721 | Aoki | Jan 2002 | A1
20160148606 | Minamitaka | May 2016 | A1
20170084261 | Watanabe | Mar 2017 | A1
20200302902 | Vorobyev | Sep 2020 | A1

Number | Date | Country
---|---|---
H0636151 | May 1994 | JP
2002202776 | Jul 2002 | JP
2002258846 | Sep 2002 | JP
2008145564 | Jun 2008 | JP
2009020323 | Jan 2009 | JP
2011118221 | Jun 2011 | JP
2017058596 | Mar 2017 | JP

Entry
---
"Office Action of Japan Counterpart Application", issued on Jan. 16, 2024, with English translation thereof, pp. 1-9.

Number | Date | Country
---|---|---
20210407476 A1 | Dec 2021 | US