Throughout the following description, each element or component is referred to by the unique reference numeral between 10 and 22 that appears next to it in the drawings.
User input signals (e.g., audio signals) produced by the human performer 10 are continuously scanned for by the user input analysis algorithm 12 through the input sensing device 11. Upon detection of a user input signal satisfying a condition predetermined by the human performer 10 through the user input sensitivity settings 13, the user input signal is analyzed by the user input analysis algorithm 12. Based on the analysis result, the melody composing algorithm 14 generates melody note data using the scale note data provided by the chord change loop algorithm 17 at the moment the user input signal is detected. The generated melody note data is converted into sound data by the sound conversion algorithm 16 for the sound synthesizer or DSP 18 to play through the amplified speaker(s) 20. The entire process is carried out in real time, virtually simultaneously with the original signal input by the human performer 10 through the input sensing device 11, allowing him or her to automatically compose and produce a melody note as audible feedback 21.
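The description above implies a simple sense, analyze, compose, and play loop. The following is a minimal sketch of that signal path; every function here is a hypothetical stand-in, since the text does not prescribe a concrete implementation of the sensing, analysis, or synthesis layers.

```python
import random
import time

def sense_input():
    # Stand-in for the input sensing device 11: pretend amplitude in [0, 1].
    time.sleep(0.1)
    return random.random()

def compose_melody(value, scale_notes):
    # Stand-in for the melody composing algorithm 14: pick a scale note
    # and map the input value to a velocity.
    return {"pitch": random.choice(scale_notes), "velocity": min(value, 1.0)}

def play(note):
    # Stand-in for sound conversion 16 -> synthesizer/DSP 18 -> speaker(s) 20.
    print("note", note)

def performance_loop(threshold=0.5, scale_notes=(0, 2, 4, 5, 7, 9, 11)):
    while True:
        value = sense_input()
        if value < threshold:          # user input sensitivity settings 13
            continue
        play(compose_melody(value, scale_notes))
```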
For favorable audible feedback 21, the human performer 10 is allowed to choose one of the available musical instrument sounds at any time using a sound controller when an external sound synthesizer or DSP is connected to the computer. When using a software sound synthesizer or DSP within a modern computer, the human performer 10 is allowed to choose one of the pre-installed musical instrument sounds at any time using the musical instrument settings 19.
The human performer 10 is additionally allowed to choose one of fifteen music keys: C, F, B flat, E flat, A flat, D flat, G flat, C flat, G, D, A, E, B, F sharp or C sharp, at any time using the music style/key settings 15. Each music key is assigned a value (e.g., 1, 2 or 3) indicating its distance in semitones (i.e., in the twelve musical pitch degrees) from the key of C. 0 always stands for the key of C; for example, 4 is for the key of E while 5 is for the key of F.
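As an illustration of the key values just described, the fifteen selectable keys and their semitone offsets from C could be held in a simple table (a sketch only; the text mandates no particular data structure):

```python
# Semitone offset of each selectable music key from the key of C;
# enharmonically equivalent keys (e.g., C sharp and D flat) share a value.
KEY_VALUE = {
    "C": 0, "Db": 1, "C#": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
    "Gb": 6, "F#": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11, "Cb": 11,
}
```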
The human performer 10 is further allowed to choose one of the pre-installed music styles at any time using the music style/key settings 15. The chosen music style is stored in memory for later use by the chord change loop algorithm 17.
If no previously stored user input signal value is found in the record, no accented flag is transferred to the melody composing algorithm 14. If exactly one previously stored value is found, the current user input signal value is compared with it and, if the current value exceeds the stored value, an accented flag is additionally transferred to the melody composing algorithm 14. If two previously stored values are found, the current value is compared with their average and, if the current value exceeds that average, an accented flag is additionally transferred to the melody composing algorithm 14. The oldest user input signal value in the record is discarded when the total number of stored values exceeds three.
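This record-keeping rule can be sketched as follows, assuming Python. The deque's maximum length implements the discard of the oldest value; treating the three-stored-values case like the two-value case is an assumption, since the text spells out only the first two cases explicitly.

```python
from collections import deque

_record = deque(maxlen=3)   # holds at most three user input signal values

def is_accented(current):
    # Decide the accented flag from the record, then store the current value.
    if len(_record) == 0:
        accented = False                                   # nothing to compare
    elif len(_record) == 1:
        accented = current > _record[0]                    # compare to the one prior value
    else:
        accented = current > sum(_record) / len(_record)   # compare to the average
    _record.append(current)                                # oldest drops out beyond three
    return accented
```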
A root degree represents the Roman-numeral analysis of a chord. A root degree value (e.g., 0, 2 or 3) indicates the distance in semitones from the tonality (i.e., tonal center) of the chord progression containing a chord to the root of that chord. 0 always stands for the I (Roman numeral: one) chord; for example, the root degree value for the IV (Roman numeral: four) chord is 5, while that for the V (Roman numeral: five) chord is 7.
Depending on the quality of the chord being referred to, such as diminished, dominant or major 7, the scale note data contain seven or eight values between 0 and 11, each of which is assigned to a corresponding scale note in the scale. Each scale note value indicates the distance in semitones from the root note of the scale containing that scale note to the scale note itself. 0 always stands for the scale note value of the root note. The scale note data further contain a preference flag for each scale note, marked YES for chord tones of the scale and NO otherwise.
Upon selection of a music key and a music style by the human performer 10, every scale note value in the scale note data for all chords in the chord progression of the selected music style is shifted upward in semitones by the sum of two elements: the value representing the selected music key and the root degree value defined for each chord. When the shifted scale note value exceeds 11, the final scale note value is calculated by subtracting 12 from the shifted value. For example, the second scale note value of the IV (Roman numeral: four) major chord in the key of D is shifted from 2 to 9 (i.e., 2+5+2), while the fourth scale note value of the V (Roman numeral: five) major chord in the key of E is shifted from 5 to 16 (i.e., 5+7+4), then finally to 4 (i.e., 16−12).
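A sketch of this transposition step, using the worked examples from the text as checks. The modulo-12 wrap is equivalent to the subtract-12 rule, and the data names and major-scale layout are illustrative only.

```python
# Root degree values named in the text; other degrees follow the same scheme.
ROOT_DEGREE = {"I": 0, "IV": 5, "V": 7}

def shift_scale_notes(scale_values, key_value, root_degree):
    # Shift every scale note value by (key value + root degree),
    # wrapping back into the 0-11 range.
    return [(v + key_value + root_degree) % 12 for v in scale_values]

major_scale = [0, 2, 4, 5, 7, 9, 11]          # an assumed major-scale layout

iv_in_d = shift_scale_notes(major_scale, key_value=2, root_degree=ROOT_DEGREE["IV"])
assert iv_in_d[1] == 9                         # second scale note: 2 + 5 + 2 = 9

v_in_e = shift_scale_notes(major_scale, key_value=4, root_degree=ROOT_DEGREE["V"])
assert v_in_e[3] == 4                          # fourth scale note: 5 + 7 + 4 = 16 -> 4
```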
Using a pseudo-random number between 0 and 23, a pitch value is determined from within the set of two-octave scale note values in the referral memory. By referring to a record of the previously determined pitch value, this pitch determination process is repeated until a pitch value other than the previously determined value is chosen. The determined pitch value is then put in the record for referral by the next pitch determination process.
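A minimal sketch of this repeat-avoiding pitch draw, assuming Python's standard pseudo-random generator:

```python
import random

_previous_pitch = None   # record of the previously determined pitch value

def determine_pitch(two_octave_scale):
    # two_octave_scale holds the 24 scale note values in the referral memory.
    global _previous_pitch
    pitch = two_octave_scale[random.randint(0, 23)]
    while pitch == _previous_pitch:            # re-draw on a repeat
        pitch = two_octave_scale[random.randint(0, 23)]
    _previous_pitch = pitch
    return pitch
```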
Upon a user input signal being received from the user input analysis algorithm 12, a velocity value between 0.000000 and 1.000000 is determined in direct proportion to the received value of the user input signal. A value of 1.000000 is assigned to any user input signal exceeding 1.000000. The set of determined pitch and velocity values is then sent to the sound conversion algorithm 16.
In the sound conversion algorithm 16, a constant of 60 is added to the pitch value received from the melody composing algorithm 14 so as to conform to the practical pitch register adopted by the majority of sound synthesizers and DSPs on the market.
A value between 0 and 67 is determined in direct proportion to the velocity value received from the melody composing algorithm 14. A constant of 60 is then added to the determined value so as to conform to the practical velocity range adopted by the majority of sound synthesizers and DSPs on the market. The resultant pitch and velocity values thereby fall between 0 and 127 as defined by the MIDI (Musical Instrument Digital Interface) format, for greater and more flexible control over various types of sound synthesizers and DSPs.
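Putting the velocity clamp and the two constant offsets together, the conversion into the MIDI range might look like this (a sketch; the proportional scaling is assumed to be a simple linear mapping):

```python
def to_midi(pitch_value, velocity_value):
    # Pitch: offset by 60 into a practical synthesizer register
    # (60-83 for two-octave pitch values of 0-23).
    midi_pitch = pitch_value + 60
    # Velocity: clamp to 1.0, scale linearly into 0-67, then offset by 60,
    # giving 60-127; both results stay within MIDI's 0-127 range.
    midi_velocity = round(min(velocity_value, 1.0) * 67) + 60
    return midi_pitch, midi_velocity
```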
Using the musical instrument sound chosen by the human performer 10, the sound synthesizer or DSP 18 plays and sustains a melody note of the pitch and velocity values received from the sound conversion algorithm 16 until it is interrupted by the sound conversion algorithm 16 responding to the next user input signal from the human performer 10, originating at the input sensing device 11 and passing through the user input analysis algorithm 12. As a result, the sound synthesizer or DSP 18 produces audible feedback 21 through the amplified speaker(s) 20, including longer melody notes such as whole notes and shorter melody notes such as sixteenth notes, all depending on the time intervals between user input signals by the human performer 10 through the input sensing device 11 and the user input analysis algorithm 12.
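Because each note sustains until the next input arrives, a note's duration is simply the interval between successive user inputs. A sketch of that interrupt-and-replace behavior, assuming the third-party mido library for MIDI output (the text names no library):

```python
import mido   # assumed MIDI library; not specified in the text

_out = mido.open_output()     # default MIDI output port
_last_note = None

def play_note(midi_pitch, midi_velocity):
    # Silence the still-sounding previous note, then start the new one;
    # each note therefore lasts until the performer's next input.
    global _last_note
    if _last_note is not None:
        _out.send(mido.Message('note_off', note=_last_note))
    _out.send(mido.Message('note_on', note=midi_pitch, velocity=midi_velocity))
    _last_note = midi_pitch
```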