1. Field of the Invention
The present invention relates to a musical performance evaluation device, a musical performance evaluation method, and a storage medium suitable for use in an electronic musical instrument.
2. Description of the Related Art
A device is known which compares note data of an etude serving as a model and musical performance data generated in response to a musical performance operation on that etude, and evaluates the musical performance ability of a user (instrument player). As this type of technology, Japanese Patent Application Laid-Open (Kokai) Publication No. 2008-242131 discloses a technology of calculating an accuracy rate according to the number of notes correctly played based on a comparison between musical performance data inputted by a musical performance and prepared data corresponding to a musical performance model, and evaluating the musical performance ability of the user based on the calculated accuracy rate.
However, all that is performed in the technology disclosed in Japanese Patent Application Laid-Open (Kokai) Publication No. 2008-242131 is the calculation of an accuracy rate according to the number of notes correctly played and the evaluation of the musical performance ability of the user based on the calculated accuracy rate. Therefore, there is a problem in that the degree of improvement in the musical performance ability of a user cannot be evaluated when the user performs a musical performance practice on a part of a musical piece such as a phrase.
The present invention has been conceived in light of the above-described problem. An object of the present invention is to provide a musical performance evaluation device, a musical performance evaluation method, and a storage medium by which the degree of improvement in a user's musical performance ability can be evaluated even when a musical performance practice on a part of a musical piece is performed.
In order to achieve the above-described object, in accordance with one aspect of the present invention, there is provided a musical performance evaluation device comprising: a first obtaining section which obtains the number of notes for each skill type from note data included in a predetermined segment of an inputted musical piece, among pieces of note data including at least a skill value and a skill type for each sound constituting the musical piece; a second obtaining section which obtains the number of correctly played notes for each skill type by comparing the note data included in the predetermined segment of the inputted musical piece among the pieces of note data and inputted musical performance data; and an evaluating section which accumulates evaluation values of respective skill types each obtained based on an accuracy rate for each skill type defined by the number of notes and the number of correctly played notes for each skill type obtained by the first obtaining section and the second obtaining section, and a skill value of each skill type, and generates a musical performance evaluation value.
In accordance with another aspect of the present invention, there is provided a musical performance evaluation method comprising: a step of obtaining the number of notes for each skill type from note data included in a predetermined segment of an inputted musical piece, among pieces of note data including at least a skill value and a skill type for each sound constituting the musical piece; a step of obtaining the number of correctly played notes for each skill type by comparing the note data included in the predetermined segment of the inputted musical piece among the pieces of note data and inputted musical performance data; and a step of accumulating evaluation values for respective skill types each obtained based on an accuracy rate for each skill type defined by the obtained number of notes and the obtained number of correctly played notes for each skill type, and a skill value of each skill type, and generating a musical performance evaluation value.
In accordance with another aspect of the present invention, there is provided a non-transitory computer-readable storage medium having stored thereon a program that is executable by a computer, the program being executable by the computer to perform functions comprising: processing for obtaining the number of notes for each skill type from note data included in a predetermined segment of a musical piece inputted by musical performance, among pieces of note data including at least a skill value and a skill type for each sound constituting the musical piece; processing for obtaining the number of correctly played notes for each skill type by comparing the note data included in the predetermined segment of the musical piece inputted by the musical performance among the pieces of note data and musical performance data generated by musical performance input for the predetermined segment of the musical piece; and processing for accumulating evaluation values for respective skill types each obtained based on an accuracy rate for each skill type defined by the obtained number of notes and the obtained number of correctly played notes for each skill type, and a skill value of each skill type, and generating a musical performance evaluation value.
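As a rough illustration of the above aspects, the evaluation may be pictured as a weighted sum over skill types. The symbols in the sketch below (N_t, C_t, S_t, and E) are introduced only for this illustration and do not appear in the claims:

```latex
% Illustrative formulation only.  For skill types t = 1, ..., T within the
% predetermined segment, let N_t be the number of notes, C_t the number of
% correctly played notes, and S_t the skill value of skill type t.  The
% per-skill-type evaluation value is S_t * (C_t / N_t), and the overall
% musical performance evaluation value E is their accumulation:
E = \sum_{t=1}^{T} S_t \cdot \frac{C_t}{N_t}
```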
The above and further objects and novel features of the present invention will more fully appear from the following detailed description when the same is read in conjunction with the accompanying drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration only and are not intended as a definition of the limits of the invention.
An embodiment of the present invention is described below with reference to the drawings.
A. Structure
An operating section 11 in
A display section 12 in
Also, the CPU 13 generates musical performance data constituted by “sound emission time”, “sound length”, “sound pitch”, “finger number”, and “musical performance part” based on musical performance data in the MIDI format generated when musical performance input is performed, the finger number, the musical performance part, and the time of the press/release key operation, and stores the generated musical performance data in a musical performance data input area PIE of a RAM 15 (refer to
In a ROM 14 in
A RAM 15 in
The note data includes a note attribute and a musical performance attribute. The note attribute is constituted by “sound emission time”, “sound length”, and “sound pitch”. The musical performance attribute is constituted by “musical performance part”, “finger number”, “skill value”, and “skill type”. “Musical performance part” represents a right-hand part, a left-hand part, or a both-hand part. The both-hand part indicates chord musical performance in which a plurality of sounds are simultaneously emitted. “Finger number” represents the finger pressing a key, and the thumb to the little finger are represented by “1” to “5”, respectively. “Skill value” represents the degree of difficulty of the musical performance technique represented by “skill type” (the type of musical performance technique), such as finger underpassing or finger overpassing.
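One minimal way to picture such note data in code is the sketch below; the class and field names are assumptions made only for illustration, and the musical performance data generated at musical performance input is assumed to carry analogous fields (“sound emission time”, “sound length”, “sound pitch”, “finger number”, and “musical performance part”):

```python
from dataclasses import dataclass

@dataclass
class NoteData:
    # Note attribute
    emission_time: float   # "sound emission time"
    length: float          # "sound length"
    pitch: int             # "sound pitch" (e.g., a MIDI note number)
    # Musical performance attribute
    part: str              # "right", "left", or "both" (musical performance part)
    finger_number: int     # 1 (thumb) to 5 (little finger)
    skill_value: float     # degree of difficulty of the musical performance technique
    skill_type: str        # e.g., "finger_underpassing", "finger_overpassing"
```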
The correct/error table for the right hand RT is a table in which musical performance data and note data are arranged in a matrix, as depicted in
Diagonal elements between the musical performance data 1 to n serving as row elements and the note data 1 to n serving as column elements are each provided with a correct/error flag indicating whether a note has been played in the same manner as that of the model, or in other words, whether a sound matching the note attribute of the note data has been emitted by musical performance with the specified musical performance part and the specified finger number. If the note has been played in the same manner as that of the model, the correct/error flag is set at “1”. If the note has not been played in the same manner as that of the model, the correct/error flag is set at “0”.
The correct/error table for the left hand LT and the correct/error table for both hands RLT depicted in
In the correct/error table for both hands RLT, the musical performance data 1 to n serving as row elements are obtained by extracting pieces of musical performance data of a both-hand part from the musical performance data of one phrase stored in the musical performance data input area PIE, and arranging these pieces of musical performance data in the order in which the musical piece proceeds. In addition, the note data 1 to n serving as column elements are obtained by extracting pieces of note data of the both-hand part in the phrase segment for which the musical performance input has been performed by the user from the musical piece data serving as a model, and arranging these pieces of note data in the order in which the musical piece proceeds.
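The three correct/error tables can be pictured as square matrices whose diagonal elements hold the correct/error flags. The sketch below is only one possible representation under the illustrative NoteData-style fields assumed above; the helper names matches_model and build_correct_error_table are hypothetical:

```python
def matches_model(played, note):
    """Judge whether a played sound matches the note attribute of the model note
    and was emitted with the specified musical performance part and finger number."""
    return (played.pitch == note.pitch
            and played.part == note.part
            and played.finger_number == note.finger_number)

def build_correct_error_table(performance_data, note_data):
    """Return an n-by-n table whose diagonal elements hold correct/error flags:
    1 if the note was played in the same manner as the model, 0 otherwise.
    Both lists cover one part (right-hand, left-hand, or both-hand) and are
    ordered as the musical piece proceeds."""
    n = len(note_data)
    table = [[0] * n for _ in range(n)]
    for i in range(min(n, len(performance_data))):
        table[i][i] = 1 if matches_model(performance_data[i], note_data[i]) else 0
    return table
```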
Next, the configuration of the present embodiment is described with reference to
B. Operation
Next, the operation of the above-structured musical performance evaluation device 100 is described with reference to
(1) Operation of Main Routine
Next, at Step SA2, the CPU 13 performs musical piece data read processing for counting the number of notes for each skill type based on note data corresponding to the phrase segment for which the musical performance input has been performed, among pieces of note data for one musical piece stored in the musical piece data area KDE of the RAM 15. This processing will be described further below.
Next, at Step SA3, the CPU 13 performs musical performance input data read processing for dividing the musical performance data of one phrase inputted by the musical performance and the note data corresponding to the phrase segment for which the musical performance input has been performed into “right-hand part”, “left-hand part”, and “both-hand part”; updates the correct/error table for the right hand RT based on the musical performance data and the note data of “right-hand part”; updates the correct/error table for the left hand LT based on the musical performance data and the note data of “left-hand part”; and updates the correct/error table for both hands RLT based on the musical performance data and the note data of “both-hand part”. This processing will also be described further below.
Subsequently, at Step SA4, the CPU 13 performs musical performance judgment processing for counting the number of correctly played notes for each of the right-hand part, the left-hand part, and the both-hand part, and the number of correctly played notes for each skill type with reference to the correct/error table for the right hand RT, the correct/error table for the left hand LT, and the correct/error table for both hands RLT based on the note data corresponding to the phrase segment for which the musical performance input has been performed, among the note data of one musical piece stored in the musical piece data area KDE of the RAM 15. This processing will also be described further below.
Then, at Step SA5, the CPU 13 performs musical performance evaluation processing for accumulating evaluation values for the respective skill types, each obtained by multiplying an accuracy rate for each skill type calculated based on the number of notes for each skill type obtained in the musical piece data read processing and the number of correctly played notes for each skill type obtained in the musical performance judgment processing by a skill value for each skill type, and thereby obtaining an overall musical performance evaluation value. This processing will also be described further below. After the musical performance evaluation processing, the main routine ends.
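Taken together, Steps SA2 to SA5 can be pictured as the following driver. The four helper names are placeholders corresponding to the per-step sketches given in sections (2) to (5) below and are not taken from the embodiment:

```python
def main_routine(phrase_notes, phrase_performance_data, skill_values):
    """Rough flow of Steps SA2 to SA5 (names and signatures are illustrative).

    phrase_notes: note data of the phrase segment for which input was performed
    phrase_performance_data: musical performance data generated by that input
    skill_values: mapping from skill type to its skill value
    """
    # Step SA2: count the number of notes for each skill type
    notes_per_skill = count_notes_per_skill_type(phrase_notes)

    # Step SA3: divide the data by part and update the three correct/error tables
    tables = update_correct_error_tables(phrase_performance_data, phrase_notes)

    # Step SA4: count correctly played notes per part and per skill type
    _, correct_per_skill = count_correctly_played(tables, phrase_notes)

    # Step SA5: accumulate per-skill-type evaluation values into an overall value
    return musical_performance_evaluation(notes_per_skill, correct_per_skill, skill_values)
```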
(2) Operation of Musical Piece Data Read Processing
Next, the operation of the musical piece data read processing is described with reference to
When the musical performance part is “both-hand part”, since the judgment result is “YES”, the CPU 13 proceeds to Step SB3 and obtains the number of notes for each skill type from each note data having the same sound emission time, that is, each note data forming a chord. The CPU 13 then proceeds to Step SB4 and counts the obtained number of notes for each skill type. Conversely, when the musical performance part is not “both-hand part”, since the judgment result at Step SB2 is “NO”, the CPU 13 proceeds to Step SB4, and increments a counter provided corresponding to a skill type included in the musical performance attribute of the read note data. That is, the CPU 13 counts the number of notes for each skill type.
Next, at Step SB5, the CPU 13 judges whether the counting of the number of notes for the relevant part (the right-hand part, the left-hand part, or the both-hand part) of one piece of note data has been completed. When judged that the counting of the number of notes has not been completed, since the judgment result is “NO”, the CPU 13 returns to Step SB1 and counts the number of notes for each skill type for another part. When judged that the counting of the number of notes for the relevant part (the right-hand part, the left-hand part, or the both-hand part) has been completed, since the judgment result at Step SB5 is “YES”, the CPU 13 proceeds to Step SB6.
Subsequently, at Step SB6, the CPU 13 judges whether the counting of the number of notes for each skill type has been completed for the entire note data included in the phrase segment for which the musical performance input has been performed. When judged that this counting of the number of notes for each skill type has not been completed, since the judgment result is “NO”, the CPU 13 returns to Step SB1.
Thereafter, the CPU 13 repeats Steps SB1 to SB6 until the counting of the number of notes for each skill type is completed for the entire note data included in the phrase segment for which the musical performance input has been performed. Then, when the counting of the number of notes for each skill type is completed based on the entire note data included in the phrase segment for which the musical performance input has been performed, since the judgment result at Step SB6 is “YES”, the CPU 13 ends the processing.
As such, in the musical piece data read processing, the number of notes for each skill type is counted based on the note data included in the phrase segment for which the musical performance input has been performed, among the note data of one musical piece stored in the musical piece data area KDE of the RAM 15.
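A minimal sketch of this counting, assuming the illustrative NoteData-style fields introduced earlier, is shown below; since each piece of note data forming a chord in the both-hand part carries its own skill type, counting every piece of note data once yields the per-skill-type totals of Steps SB1 to SB6:

```python
from collections import Counter

def count_notes_per_skill_type(phrase_notes):
    """Count the number of notes for each skill type over the note data of the
    phrase segment for which musical performance input has been performed."""
    counts = Counter()
    for note in phrase_notes:
        counts[note.skill_type] += 1   # each chord member is counted individually
    return counts
```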
(3) Operation of Musical Performance Input Data Read Processing
Next, the operation of the musical performance input data read processing is described with reference to
Next, at Step SC2, the CPU 13 updates the correct/error table for the right hand RT and the correct/error table for the left hand LT based on the read musical performance data 1 to n of one phrase. In the updating of the correct/error table for the right hand RT, among the read musical performance data 1 to n of one phrase, the musical performance data of the right-hand part are set as row elements on the correct/error table for the right hand RT. On the other hand, among the note data corresponding to the phrase segment for which the musical performance input has been performed, the note data of the right-hand part are set as column elements on the correct/error table for the right hand RT.
Then, among diagonal elements on the correct/error table for the right hand RT where the musical performance data of the right-hand part have been set as row elements and the note data of the right-hand part have been set as column elements, the correct/error flag of a diagonal element indicating that the note has been played in the same manner as that of the model is set at “1”, and the correct/error flag of a diagonal element indicating that the note has not been played in the same manner as that of the model is set at “0”.
At Step SC2, the CPU 13 also updates the correct/error table for the left hand LT in a manner similar to that for the correct/error table for the right hand RT. That is, among the read musical performance data 1 to n of one phrase, the musical performance data of the left-hand part are set as row elements on the correct/error table for the left hand LT. In addition, among the note data corresponding to the phrase segment for which the musical performance input has been performed, the note data of the left-hand part are set as column elements on the correct/error table for the left hand LT.
Then, among diagonal elements on the correct/error table for the left hand LT where the musical performance data of the left-hand part have been set as row elements and the note data of the left-hand part have been set as column elements, the correct/error flag of a diagonal element indicating that the note has been played in the same manner as that of the model is set at “1”, and the correct/error flag of a diagonal element indicating that the note has not been played in the same manner as that of the model is set at “0”.
Next, at Step SC3, the CPU 13 judges whether the read musical performance data is both-hand part data. When judged that the read musical performance data is not both-hand part data, since the judgment result is “NO”, the CPU 13 ends the processing. Conversely, when judged that the read musical performance data is both-hand part data, since the judgment result is “YES”, the CPU 13 proceeds to Step SC4. At Step SC4, among the read musical performance data 1 to n of one phrase, the musical performance data of the both-hand part are set as row elements on the correct/error table for both hands RLT. In addition, among the note data corresponding to the phrase segment for which the musical performance input has been performed, the note data of the both-hand part are set as column elements on the correct/error table for both hands RLT.
Then, among diagonal elements on the correct/error table for both hands RLT where the musical performance data of the both-hand part have been set as row elements and the note data of the both-hand part have been set as column elements, the correct/error flag of a diagonal element indicating that the note has been played in the same manner as that of the model is set at “1”, and the correct/error flag of a diagonal element indicating that the note has not been played in the same manner as that of the model is set at “0”.
As such, in the musical performance input data read processing, the musical performance data of one phrase inputted by the musical performance and the note data corresponding to the phrase segment for which the musical performance input has been performed are each divided into “right-hand part”, “left-hand part”, and “both-hand part”; the correct/error table for the right hand RT is updated based on the musical performance data and the note data of “right-hand part”; the correct/error table for the left hand LT is updated based on the musical performance data and the note data of “left-hand part”; and the correct/error table for both hands RLT is updated based on the musical performance data and the note data of “both-hand part”.
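Under the same illustrative representation as before, this division into parts and the rebuilding of the three tables might be sketched as follows (build_correct_error_table is the hypothetical helper from the earlier table sketch):

```python
def update_correct_error_tables(phrase_performance_data, phrase_notes):
    """Divide one phrase of musical performance data and the corresponding note
    data into right-hand, left-hand, and both-hand parts, and rebuild the
    correct/error tables RT, LT, and RLT for those parts."""
    tables = {}
    for part in ("right", "left", "both"):
        played = [p for p in phrase_performance_data if p.part == part]
        model = [n for n in phrase_notes if n.part == part]
        tables[part] = build_correct_error_table(played, model)
    return tables
```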
(4) Operation of Musical Performance Judgment Processing
Next, the operation of the musical performance judgment processing is described with reference to
When judged that the musical performance part is not “both-hand part”, since the judgment result at Step SD2 is “NO”, the CPU 13 proceeds to Step SD3. At Step SD3, the CPU 13 judges whether a correct/error flag set to a diagonal element between the read note data and its corresponding musical performance data indicates “1”, or in other words, judges whether the note has been correctly played. When the musical performance part included in the musical performance attribute of the read note data is “right-hand part”, this judgment is made with reference to the correct/error table for the right hand RT. When the musical performance part is “left-hand part”, this judgment is made with reference to the correct/error table for the left hand LT. When the musical performance part is “both-hand part”, this judgment is made with reference to the correct/error table for both hands RLT. Then, when the correct/error flag indicates “0”, the CPU 13 judges that the note has been incorrectly played and, since the judgment result is “NO”, proceeds to Step SD8 described below.
On the other hand, when the correct/error flag set to the diagonal element between the read note data and its corresponding musical performance data indicates “1”, or in other words, when the note has been correctly played, the judgment result at Step SD3 is “YES”, and therefore the CPU 13 proceeds to Step SD4 to count the number of correctly played notes for the right-hand/left-hand part. The CPU 13 then proceeds to Step SD7 to cause a counter associated with the skill type of the correctly played note data to count the number of correctly played notes.
When the musical performance part included in the musical performance attribute of the read note data is “both-hand part”, since the judgment result at Step SD2 is “YES”, the CPU 13 proceeds to Step SD5. At Step SD5, the CPU 13 refers to the correct/error table for both hands RLT to judge whether a correct/error flag set to a diagonal element between the read note data and its corresponding musical performance data indicates “1”, or in other words, whether the note has been correctly played. When the correct/error flag indicates “0”, since the judgment result is “NO”, indicating that the note has been incorrectly played, the CPU 13 proceeds to Step SD8 described below.
On the other hand, when the correct/error flag set to the diagonal element between the read note data and its corresponding musical performance data indicates “1”, that is, when the note has been correctly played, the judgment result at Step SD5 is “YES”, and therefore the CPU 13 proceeds to Step SD6 to count the number of correctly played notes for the both-hand part. The CPU 13 then proceeds to Step SD7 to cause a counter associated with the skill type of the correctly played note data to count the number of correctly played notes.
Then, at Step SD8, the CPU 13 judges whether a musical performance judgment for the relevant part (the right-hand part, the left-hand part, or the both-hand part) of one piece of note data has been completed. When a musical performance judgment for the relevant part has not been completed, since the judgment result is “NO”, the CPU 13 returns to Step SD1, and counts the number of correctly played notes for another part and the number of correctly played notes for each skill type. When the counting for the relevant part (the right-hand part, the left-hand part, or the both-hand part) is completed, the judgment result at Step SD8 is “YES”, and therefore the CPU 13 proceeds to Step SD9.
At Step SD9, the CPU 13 judges whether a musical performance judgment has been made for all pieces of the relevant note data included in the phrase segment for which the musical performance input has been performed. When judged that these musical performance judgments have not been completed, since the judgment result is “NO”, the CPU 13 returns to Step SD1. Thereafter, the CPU 13 repeats Steps SD1 to SD9 until a musical performance judgment is made for all pieces of the relevant note data included in the phrase segment for which the musical performance input has been performed. Then, when a musical performance judgment has been made for all pieces of the relevant note data included in the phrase segment for which the musical performance input has been performed, since the judgment result at Step SD9 is “YES”, the CPU 13 ends the processing.
As such, in the musical performance judgment processing, the number of correctly played notes for each of the right-hand part, the left-hand part, and the both-hand part and the number of correctly played notes for each skill type are counted based on the note data corresponding to the phrase segment for which the musical performance input has been performed, among the note data of one musical piece stored in the musical piece data area KDE of the RAM 15.
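A compact sketch of this judgment, again under the assumptions of the earlier sketches (the per-part note lists align index-for-index with the table rows and columns), might look as follows:

```python
from collections import Counter

def count_correctly_played(tables, phrase_notes):
    """Count correctly played notes per part and per skill type by consulting
    the diagonal correct/error flags of the per-part tables (Steps SD1 to SD9)."""
    correct_per_part = Counter()
    correct_per_skill = Counter()
    position_in_part = Counter()     # index of each note within its own part
    for note in phrase_notes:
        i = position_in_part[note.part]
        position_in_part[note.part] += 1
        table = tables[note.part]
        if i < len(table) and table[i][i] == 1:   # flag "1" means correctly played
            correct_per_part[note.part] += 1
            correct_per_skill[note.skill_type] += 1
    return correct_per_part, correct_per_skill
```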
(5) Operation of Musical Performance Evaluation Processing
Next, the operation of the musical performance evaluation processing is described with reference to
Subsequently, at Step SE2, the CPU 13 calculates the evaluation value of the currently targeted skill type by multiplying the skill value of the currently targeted skill type by an accuracy rate K2/K1. Then, at Steps SE3 and SE4, the CPU 13 performs the processing of Steps SE1 and SE2 for all of the skill types, and accumulates the evaluation values of the respective skill types obtained thereby to calculate an overall musical performance evaluation value. Then, when the calculation of the overall musical performance evaluation value is completed, since the judgment result at Step SE4 is “YES”, the CPU 13 ends the processing.
As such, in the musical performance evaluation processing, the evaluation values of the respective skill types each obtained by multiplying an accuracy rate for each skill type calculated based on the number of notes for each skill type obtained in the musical piece data read processing and the number of correctly played notes for each skill type obtained in the musical performance judgment processing by the skill value of each skill type are accumulated to obtain an overall performance evaluation value.
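Assuming (as the text suggests) that K1 is the number of notes and K2 the number of correctly played notes of the currently targeted skill type, Steps SE1 to SE4 reduce to the following sketch; the argument names are illustrative:

```python
def musical_performance_evaluation(notes_per_skill, correct_per_skill, skill_values):
    """For each skill type, multiply its skill value by the accuracy rate K2/K1
    and accumulate the results into an overall musical performance evaluation value."""
    overall = 0.0
    for skill_type, k1 in notes_per_skill.items():
        if k1 == 0:
            continue                              # no notes of this skill type in the phrase
        k2 = correct_per_skill.get(skill_type, 0)
        overall += skill_values[skill_type] * (k2 / k1)
    return overall
```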
As described above, in the present embodiment, the number of notes for each skill type is obtained from note data included in a phrase segment for which musical performance input has been performed; the note data included in the phrase segment for which the musical performance input has been performed and musical performance data inputted by the musical performance are compared with each other to obtain the number of correctly played notes for each skill type; and the evaluation values of the respective skill types each obtained by multiplying an accuracy rate for each skill type obtained based on the number of notes for each skill type and the number of correctly played notes for each skill type by the skill value of each skill type are accumulated to obtain an overall musical performance evaluation value. Therefore, the degree of improvement in a user's musical performance ability can be evaluated even when a musical performance practice for a part of a musical piece is performed.
In the above-described embodiment, the number of notes and the number of correctly played notes are obtained for each skill type. However, the present invention is not limited thereto, and a configuration may be adopted in which the number of notes and the number of correctly played notes for each musical performance part are obtained, and evaluation for each musical performance part is performed. Also, a configuration may be adopted in which a correct/error counter is assigned to a diagonal element on the above-described correct/error table, the number of correctly played notes or the number of incorrectly played notes is counted every time musical performance input is performed, and a portion (note) or a musical performance part that is difficult to play in musical performance is analyzed and evaluated, as depicted in
While the present invention has been described with reference to the preferred embodiments, it is intended that the invention not be limited by any of the details of the description therein but includes all the embodiments which fall within the scope of the appended claims.
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2013-085341, filed Apr. 16, 2013, the entire contents of which are incorporated herein by reference.