The present disclosure relates to an accompaniment sound generating device, an accompaniment sound generating method, a non-transitory computer readable medium storing an accompaniment sound generating program and an electronic musical instrument including the accompaniment sound generating device.
Electronic musical instruments have been known that include a function of adding an automatic accompaniment sound, based on prestored accompaniment pattern data, to a musical performance sound input by a player. There is an electronic keyboard musical instrument that includes an automatic accompaniment function, for example. When the player gives a musical performance by using a keyboard, the electronic keyboard musical instrument outputs an automatic accompaniment sound in accordance with the musical performance sound. There is also an automatic accompaniment data generating device that controls the rhythm of an automatic accompaniment sound in accordance with an accent position of a musical performance.
The player plays an electronic musical instrument including an automatic accompaniment function, thereby being able to enjoy, while playing a melody, a musical performance sound accompanied by an accompaniment sound, for example. However, since the automatic accompaniment function generates an accompaniment sound repeatedly based on accompaniment pattern data, the accompaniment sound may be monotonous to the player. In order to provide further enjoyment of a musical performance to the player, it is desirable that the automatic accompaniment function generate accompaniment sounds having variations.
An object of the present disclosure is to generate an automatic accompaniment sound having variations.
An accompaniment sound generating device according to one aspect of the present disclosure includes a specifier that specifies a plurality of musical performance parts for which accompaniment sounds are to be generated based on an input musical performance sound, an accompaniment sound generator that generates the accompaniment sounds belonging to the plurality of specified musical performance parts for each musical performance sound, and an accompaniment sound outputter that outputs the accompaniment sounds generated for the plurality of musical performance parts with the timing for generating the accompaniment sounds aligned with the timing for generating musical performance sounds.
Other features, elements, characteristics, and advantages of the present disclosure will become more apparent from the following description of preferred embodiments of the present disclosure with reference to the attached drawings, in which:
An accompaniment sound generating device, an electronic musical instrument, an accompaniment sound generating method and a non-transitory computer readable medium storing an accompaniment sound generating program according to embodiments of the present disclosure will be described below in detail with reference to the drawings.
The electronic musical instrument 1 includes a performance operating element 101, a setting operating element 102 and a display 103. The performance operating element 101 includes a pitch specifying operator such as a keyboard and is connected to a bus 120. The performance operating element 101 receives a musical performance operation performed by the player and outputs musical performance data representing a musical performance sound. The musical performance data is made of MIDI (Musical Instrument Digital Interface) data or audio data. The setting operating element 102 includes a switch that is operated in an on-off manner, a rotary encoder that is operated in a rotational manner, a linear encoder that is operated in a sliding manner, etc. and is connected to the bus 120. The setting operating element 102 is used for adjusting the volume of a musical performance sound or an automatic accompaniment sound, turning the power supply on and off, and making various settings. The display 103 includes a liquid crystal display, for example, and is connected to the bus 120. Various information related to a musical performance, settings, etc. is displayed on the display 103. At least part of the performance operating element 101, the setting operating element 102 and the display 103 may be constituted by a touch panel display.
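By way of illustration only, the following Python sketch decodes a raw MIDI note-on message of the kind the performance operating element 101 might emit. The byte layout follows the MIDI standard; the function name and the stand-alone decoding approach are assumptions for this sketch, not part of the embodiment.

```python
# Illustrative sketch: decode a 3-byte MIDI message and report a note-on.
# Status bytes 0x90-0x9F denote note-on; a velocity of 0 is conventionally
# treated as a note-off, so it is rejected here.

def decode_note_on(message: bytes):
    """Return (channel, note, velocity) for a note-on message, else None."""
    if len(message) != 3:
        return None
    status, note, velocity = message
    if status & 0xF0 == 0x90 and velocity > 0:
        return (status & 0x0F, note, velocity)
    return None

# Middle C (note 60) struck with velocity 100 on MIDI channel 0:
print(decode_note_on(bytes([0x90, 60, 100])))  # -> (0, 60, 100)
```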
The electronic musical instrument 1 further includes a CPU (Central Processing Unit) 106, a RAM (Random Access Memory) 107, a ROM (Read Only Memory) 108 and a storage device 109. The CPU 106, the RAM 107, the ROM 108 and the storage device 109 are connected to the bus 120. The CPU 106, the RAM 107, the ROM 108 and the storage device 109 constitute the accompaniment sound generating device 10.
The RAM 107 is made of a volatile memory, for example, which is used as a working area for execution of a program by the CPU 106 and temporarily stores various data. The ROM 108 is made of a non-volatile memory, for example, and stores a computer program such as an accompaniment sound generating program P1 and various data such as setting data SD and an accompaniment style data set ASD. A flash memory such as an EEPROM is used as the ROM 108. The CPU 106 executes the accompaniment sound generating program P1 stored in the ROM 108 while utilizing the RAM 107 as a working area, thereby executing an automatic accompaniment process, described below.
The storage device 109 includes a storage medium such as a hard disc, an optical disc, a magnetic disc or a memory card. The accompaniment sound generating program P1, the setting data SD or the accompaniment style data set ASD may be stored in the storage device 109.
The accompaniment sound generating program P1 in the present embodiment may be supplied in the form of being stored in a computer-readable recording medium and installed in the ROM 108 or the storage device 109. In addition, in a case where a communication I/F (interface) included in the electronic musical instrument 1 is connected to a communication network, the accompaniment sound generating program P1 delivered from a server connected to the communication network may be installed in the ROM 108 or the storage device 109. Similarly, the setting data SD or the accompaniment style data set ASD may be acquired from a storage medium or may be acquired from a server connected to the communication network.
The electronic musical instrument 1 further includes a tone generator 104 and a sound system 105. The tone generator 104 is connected to the bus 120, and the sound system 105 is connected to the tone generator 104. The tone generator 104 generates a music sound signal based on musical performance data received from the performance operating element 101 or on data representing an automatic accompaniment sound generated by the accompaniment sound generating device 10.
The sound system 105 includes a digital-analogue (D/A) conversion circuit, an amplifier and a speaker. The sound system 105 converts the music sound signal supplied from the tone generator 104 into an analogue sound signal and produces a sound based on the analogue sound signal. Thus, the music sound signal is reproduced.
Next, an automatic accompaniment sound generated by the accompaniment sound generating device 10 according to the present embodiment will be described. The accompaniment sound generating device 10 according to the present embodiment can generate two types of automatic accompaniment sounds: a pattern accompaniment sound and a real-time accompaniment sound. The pattern accompaniment sound is generated by repeated reproduction of prestored accompaniment pattern data. When the player designates a category or the like, accompaniment pattern data corresponding to the designated category is reproduced. The player can give a musical performance in accordance with the reproduction of a pattern accompaniment sound.
A real-time accompaniment sound is an accompaniment sound generated in real time in accordance with a musical performance sound generated by a musical performance operation performed by the player. A real-time accompaniment sound is generated for each musical performance sound in accordance with the contents of the setting data SD. When the player gives a musical performance, a real-time accompaniment sound is added based on a musical performance sound input by the player.
Next, the accompaniment style data set ASD will be described. The accompaniment style data set ASD is data in which the contents of pattern accompaniment sounds are classified according to categories. Further, the accompaniment style data set ASD may be utilized when the tone color of a real-time accompaniment sound is determined.
Each accompaniment style data set ASD includes a plurality of accompaniment section data sets. The accompaniment section data sets are classified into data sets for an “introduction” section, data sets for a “main” section, data sets for a “fill-in” section and data sets for an “ending” section. “Introduction,” “main,” “fill-in” and “ending” represent types of sections and are indicated by the alphabetic letters “I,” “M,” “F” and “E,” respectively. Each accompaniment section data set is further classified into a plurality of variations.
The variations of the “introduction” section, the “main” section and the “ending” section indicate an atmosphere or a degree of climax of an automatic accompaniment sound.
Because the “fill-in” section is a fill-in between other sections, the variations of the “fill-in” section are represented by a combination of two alphabetic letters corresponding to the change in atmosphere or degree of climax between the section before the fill-in and the section after it.
The accompaniment section data set includes an accompaniment pattern data set PD for each of a plurality of musical performance parts (tracks) such as a main drum part, a bass part, a chord part, a phrase part and a pad part. Further, each accompaniment section data set includes reference chord information and a pitch conversion rule (pitch conversion table information, a sound range, a sound regeneration rule at the time of a chord change and so on). The accompaniment pattern data set PD is MIDI data or audio data and can be converted to any pitch based on the reference chord information and the pitch conversion rule. The number of musical performance parts for which pattern accompaniment sounds are to be generated, the note sequence of the accompaniment pattern data set PD and the like differ depending on the variation.
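The following Python sketch shows one possible in-memory organization of such an accompaniment style data set; all class and field names are assumptions made for illustration, and the actual on-device data format is not specified here.

```python
# Hypothetical organization of an accompaniment style data set ASD.
from dataclasses import dataclass, field

@dataclass
class AccompanimentPattern:
    part: str               # e.g. "main drum", "bass", "chord", "phrase", "pad"
    midi_events: list       # MIDI data (or a reference to audio data)

@dataclass
class AccompanimentSection:
    section_type: str       # "I", "M", "F" or "E"
    variation: str          # variation identifier
    reference_chord: str    # chord the patterns are written against
    pitch_conversion: dict  # conversion table, sound range, chord-change rule
    patterns: list = field(default_factory=list)  # AccompanimentPattern per part

@dataclass
class AccompanimentStyle:
    category: str           # category selected by the player
    sections: list = field(default_factory=list)  # AccompanimentSection objects
```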
For example, the player can select the category of a desired pattern accompaniment sound and an accompaniment style data set ASD by using the setting operating element 102.
The musical performance sound receiver 11 receives musical performance data (a musical performance sound) output from the performance operating element 101. The musical performance sound receiver 11 outputs the musical performance data to the specifier 13 and the real-time accompaniment sound generator 14. As described above, MIDI data or audio data is used as musical performance data.
The mode determiner 12 receives a mode operation performed by the player from the setting operating element 102. The accompaniment sound generating device 10 of the present embodiment has a plurality of modes as modes for performing real-time accompaniment. The player can perform a mode setting by using the setting operating element 102. In the present embodiment, two modes which are an accent mode and a unison mode are prepared as the modes for real-time accompaniment.
The specifier 13 specifies a musical performance part for which a real-time accompaniment sound is to be generated. Similarly to pattern accompaniment sounds, real-time accompaniment sounds are output for a plurality of musical performance parts such as a main drum part, a bass part, a chord part, a phrase part and a pad part, with the timing for generating the real-time accompaniment sounds aligned with the timing for generating musical performance sounds. The specifier 13 specifies the musical performance parts for which real-time accompaniment sounds are to be generated based on the musical performance data received from the musical performance sound receiver 11. That is, the specifier 13 specifies a plurality of musical performance parts for which real-time accompaniment sounds are to be generated for each musical performance sound.
The real-time accompaniment sound generator 14 generates real-time accompaniment data RD. The real-time accompaniment sound generator 14 generates the real-time accompaniment data RD for the musical performance part specified by the specifier 13. The real-time accompaniment sound generator 14 determines the tone color, the volume and so on of a real-time accompaniment sound based on the musical performance data (musical performance sound) received from the musical performance sound receiver 11. The real-time accompaniment sound generator 14 outputs the generated real-time accompaniment data RD to the accompaniment sound outputter 17. The accompaniment sound outputter 17 outputs the real-time accompaniment data RD to the tone generator 104.
The accompaniment style data acquirer 15 acquires an accompaniment style data set ASD. As described above, the player performs an operation of selecting a pattern accompaniment sound, whereby the accompaniment style data acquirer 15 accesses the ROM 108 and acquires the selected accompaniment style data set ASD.
The pattern accompaniment sound generator 16 receives the accompaniment style data set ASD acquired by the accompaniment style data acquirer 15, and acquires an accompaniment pattern data set PD to be used for a pattern accompaniment sound from the accompaniment style data set ASD. The pattern accompaniment sound generator 16 acquires the accompaniment pattern data set PD included in the accompaniment style data set ASD based on the accompaniment section data set including the variation selected by the player. The pattern accompaniment sound generator 16 performs necessary pitch conversion on the accompaniment pattern data set PD and outputs the converted accompaniment pattern data set PD to the accompaniment sound outputter 17 in accordance with the tempo of a musical performance sound. The accompaniment sound outputter 17 outputs the accompaniment pattern data set PD to the tone generator 104.
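As a deliberately simplified illustration of such pitch conversion, the sketch below shifts the notes of a pattern by the interval between the reference chord root and the current chord root. A real implementation would also apply the pitch conversion table, the sound range and the chord-change regeneration rule, all omitted here.

```python
# Simplified pitch conversion: transpose pattern notes from the reference
# chord root to the current chord root (roots only; chord quality ignored).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(pattern_notes: list, reference_root: str, target_root: str) -> list:
    """Shift MIDI note numbers by the interval between the two chord roots."""
    shift = (NOTE_NAMES.index(target_root) - NOTE_NAMES.index(reference_root)) % 12
    if shift > 6:        # prefer the smaller interval, shifting down instead of up
        shift -= 12
    return [n + shift for n in pattern_notes]

# A bass pattern written against C, reproduced over an F chord:
print(transpose([36, 43, 48], "C", "F"))  # -> [41, 48, 53]
```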
The real-time accompaniment sound generator 14 also provides an instruction for stopping generation of pattern accompaniment data to the pattern accompaniment sound generator 16. While the real-time accompaniment data RD is being generated, generation of the accompaniment pattern data set PD is stopped. That is, when a real-time accompaniment sound is output, a pattern accompaniment sound is muted for a set period of time. Further, when the accent mode or the unison mode is turned on, generation of the accompaniment pattern data set PD is stopped for some of the musical performance parts for which real-time accompaniment data RD is generated. That is, in a case where either of the modes is turned on, a pattern accompaniment sound is muted on a per-part basis for some of the musical performance parts.
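A minimal sketch of the per-part mute bookkeeping described above follows; the class, its interface and the use of a wall-clock timer are assumptions for illustration (a real device would more likely count beats or MIDI ticks).

```python
# Illustrative per-part muting: while a real-time accompaniment sound plays
# on a part, the pattern accompaniment for that part is suppressed until
# its mute cancellation time.
import time

class PatternMuter:
    def __init__(self):
        self._muted_until = {}  # musical performance part -> unmute timestamp

    def mute(self, part: str, duration_s: float):
        self._muted_until[part] = time.monotonic() + duration_s

    def is_muted(self, part: str) -> bool:
        return time.monotonic() < self._muted_until.get(part, 0.0)

muter = PatternMuter()
muter.mute("main drum", 0.5)        # e.g. one beat at 120 BPM is 0.5 s
print(muter.is_muted("main drum"))  # True until the set period elapses
```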
As described above, the accompaniment sound generating device 10 of the present embodiment has the accent mode and the unison mode as modes for generation of a real-time accompaniment sound. In the accent mode, for example, when the player strongly depresses a key or gives a musical performance in forte, an automatic accompaniment sound such as the sound of a cymbal being struck strongly is generated. In the unison mode, for example, an automatic accompaniment sound is generated such as a stringed-instrument sound at the same pitch as a piano sound, following the piano melody, or a stringed-instrument sound that has the same note name as the piano sound and is in an octave relationship with it.
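In MIDI note numbers, the octave relationship mentioned for the unison mode has a compact expression: two notes share the same note name exactly when their pitch classes (note number modulo 12) match. A small illustrative check:

```python
# Two MIDI notes are the same pitch, or octaves of the same note name,
# when their pitch classes match.

def is_unison_or_octave(note_a: int, note_b: int) -> bool:
    return note_a % 12 == note_b % 12

print(is_unison_or_octave(60, 60))  # True: same pitch (C4)
print(is_unison_or_octave(60, 72))  # True: C4 and C5, one octave apart
print(is_unison_or_octave(60, 62))  # False: C4 and D4
```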
The “musical performance sound (input sound)” is data representing the type of a musical performance sound. The real-time accompaniment sound generator 14 determines the type of a musical performance sound in accordance with a predetermined algorithm.
A “top note” indicates the highest pitch among the sounds included in a musical performance sound. For example, when the player is playing a chord, the highest pitch in the chord is determined as the top note. “All notes” indicates all of the sounds included in a musical performance sound. A “chord” indicates a musical performance sound having a harmonic (or accompaniment) role among the sounds played simultaneously. A “bottom note” indicates the lowest pitch among the sounds included in a musical performance sound. For example, when the player is playing a chord, the lowest pitch in the chord is determined as the bottom note.
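In MIDI terms, the top note and the bottom note defined above are simply the maximum and minimum note numbers among the simultaneously sounding notes, as this illustrative snippet shows:

```python
# Top note and bottom note of a set of simultaneously sounding MIDI notes.

def top_note(notes: list) -> int:
    return max(notes)  # highest pitch

def bottom_note(notes: list) -> int:
    return min(notes)  # lowest pitch

chord = [48, 60, 64, 67]  # C3, C4, E4, G4 played together
print(top_note(chord), bottom_note(chord))  # -> 67 48
```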
The “strength condition” indicates a strength condition of a “musical performance sound.”
The “conversion destination” further includes fields for a “part” and a “pitch (musical instrument).” In the “part,” a musical performance part for which a real-time accompaniment sound is to be generated is registered. In the “pitch (musical instrument),” a pitch of a real-time accompaniment sound or a musical instrument is indicated. As a musical performance part for which a real-time accompaniment sound is to be generated, a “main drum,” a “chord 1,” a “phrase 1” and so on are designated, similarly to the musical performance parts included in the accompaniment style data set ASD. As the pitch (musical instrument) for generation of a real-time accompaniment sound, a drum kit is usually set in a case where the musical performance part is the “main drum.” A drum kit is set by assigning each rhythm musical instrument used in the kit to a MIDI note number. The types of rhythm musical instruments and their note assignments differ depending on the drum kit. In a case where the “pitch (musical instrument)” of the “conversion destination” is “cymbal,” for example, the note number to which the cymbal is assigned in the currently set drum kit is used.
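As a concrete point of reference, the sketch below uses the General MIDI percussion map (bass drum 1 = note 36, acoustic snare = 38, crash cymbal 1 = 49) as a stand-in drum kit; an actual device may ship kits with different instruments and note assignments.

```python
# A stand-in drum kit using General MIDI percussion note numbers.
GM_DRUM_KIT = {
    "kick": 36,    # Bass Drum 1
    "snare": 38,   # Acoustic Snare
    "cymbal": 49,  # Crash Cymbal 1
}

def drum_note(instrument: str, kit: dict = GM_DRUM_KIT) -> int:
    """Resolve a rhythm-instrument name to the kit's MIDI note number."""
    return kit[instrument]

print(drum_note("cymbal"))  # -> 49 in the General MIDI kit
```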
The “mute subject” indicates a pattern accompaniment sound to be stopped while a real-time accompaniment sound is being generated. In a case where the accent mode or the unison mode is turned on, generation of a pattern accompaniment sound is stopped for some of the musical performance parts. In a case where a musical performance part is registered in the “mute subject,” the pattern accompaniment sound of that musical performance part is not reproduced while either of the modes is turned on. Further, while a real-time accompaniment sound is being generated, the pattern accompaniment sound of the musical performance part for which the real-time accompaniment sound is being generated is not reproduced. In a case where “1 sound” is registered in the “mute subject,” reproduction of a pattern accompaniment sound is stopped for each sound in accordance with generation of a real-time accompaniment sound. Alternatively, reproduction of a pattern accompaniment sound may be stopped, for each sound, only in regard to the same sound as the real-time accompaniment sound.
The “mute cancellation time” indicates a point in time at which the stop of reproduction of the pattern accompaniment sound indicated by the “mute subject” is to be canceled. While “after a predetermined period of time elapses” is written as the “mute cancellation time,” what is specifically registered is the period of time, such as one sound or one beat, from the stop of reproduction of a pattern accompaniment sound to the restart of reproduction. In a case where “detection of OFF of input sound” is written as the “mute cancellation time,” reproduction of the pattern accompaniment sound is restarted at the point in time at which the input of a musical performance sound is turned off. However, in a case where a musical performance part is registered in the “mute subject,” the mute of the pattern accompaniment sound is not canceled during a period in which the accent mode or the unison mode continues to be turned on. When the mode is turned off, the mute of the pattern accompaniment sound is canceled at the point at which OFF of an input sound is detected.
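To make the table structure concrete, the sketch below reconstructs a few accent-mode rows of the setting data SD as Python dictionaries, based on the example rows worked through in the following paragraphs; the field names, and any value not taken from those rows, are assumptions.

```python
# Hedged reconstruction of three accent-mode rows of the setting data SD.
ACCENT_MODE_RULES = [
    {"input": "top note", "strength": "strong",
     "part": "main drum", "pitch": "cymbal",
     "mute": "main drum", "unmute": "after a predetermined time"},

    {"input": "all notes", "strength": None,  # no strength condition
     "part": "chord 1", "pitch": "same as input sound",
     "mute": "chord 1", "unmute": "OFF of input sound after mode turned off"},

    {"input": "chord", "strength": ("medium", "strong"),  # medium <= s < strong
     "part": "main drum", "pitch": "kick",
     "mute": "1 sound", "unmute": "after a predetermined time"},
]
```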
For example, the data in the first row of the accent mode represents the following settings. In a case where the top note of musical performance data (a musical performance sound) is “strong,” the cymbal sound of the main drum is generated as a real-time accompaniment sound. While the cymbal sound of the main drum is generated, the main drum of the pattern accompaniment sound is not reproduced. Then, after a predetermined period of time elapses, reproduction of the main drum of the pattern accompaniment sound restarts.
Further, the data in the third row of the accent mode indicates the following settings, for example. No strength condition is imposed on all the pitches of musical performance data (a musical performance sound), and real-time accompaniment sounds are generated at the same pitches as the musical performance sound (input sound) for the musical performance part of the chord 1. While the accent mode is turned on, reproduction of the pattern accompaniment sound is stopped for the musical performance part of the chord 1. When the mode is turned off, reproduction of the pattern accompaniment sound restarts after detection of OFF of an input sound.
Further, the data in the eighth row of the accent mode represents the following settings, for example. In a case where the strength of a chord included in musical performance data (a musical performance sound) is equal to or higher than medium strength and lower than strong, a kick sound is generated by the main drum part as a real-time accompaniment sound. While the kick sound is generated in the main drum part, the kick sound of the main drum of the pattern accompaniment sound is not reproduced. Then, after a predetermined period of time elapses, reproduction of the kick sound of the main drum of the pattern accompaniment sound restarts.
For example, the data in the third row of the unison mode represents the following settings. No strength condition is imposed on the top note of musical performance data (a musical performance sound), and a real-time accompaniment sound is generated at the same pitch as the top note for the musical performance part of the chord 1. While the unison mode is turned on, reproduction of the pattern accompaniment sound is stopped for the musical performance part of the chord 1. When the mode is turned off, reproduction of the pattern accompaniment sound restarts after detection of OFF of an input sound.
Further, the data in the ninth row of the unison mode represents the following settings, for example. No strength condition is imposed on the chord included in musical performance data (a musical performance sound), and a snare drum sound is generated by the main drum part as a real-time accompaniment sound. While the snare drum sound is generated by the main drum part, the snare drum sound of the main drum of the pattern accompaniment sound is not reproduced. Then, after a predetermined period of time elapses, reproduction of the snare drum sound of the main drum of the pattern accompaniment sound restarts.
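A sketch of how such a row could be matched against an incoming note follows. The velocity thresholds separating “weak,” “medium” and “strong” are assumptions made for illustration (the text does not specify them), as is the rule representation itself.

```python
# Illustrative matching of one setting data SD row against an input note.
RULE = {"input": "top note", "strength": "strong",
        "part": "main drum", "pitch": "cymbal"}

LEVELS = ["weak", "medium", "strong"]

def strength_of(velocity: int) -> str:
    """Assumed split: velocity < 64 weak, 64-99 medium, >= 100 strong."""
    return "strong" if velocity >= 100 else "medium" if velocity >= 64 else "weak"

def matches(rule: dict, input_kind: str, velocity: int) -> bool:
    if rule["input"] != input_kind:
        return False
    condition = rule["strength"]
    if condition is None:              # the row imposes no strength condition
        return True
    if isinstance(condition, tuple):   # half-open range, e.g. medium..strong
        s = LEVELS.index(strength_of(velocity))
        return LEVELS.index(condition[0]) <= s < LEVELS.index(condition[1])
    return strength_of(velocity) == condition

print(matches(RULE, "top note", velocity=110))  # True: strong top note
print(matches(RULE, "top note", velocity=80))   # False: only medium strength
```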
In this manner, the setting information for generating a real-time accompaniment sound is registered in the setting data SD. Specifically, information that associates characteristics of a musical performance sound with characteristics of a real-time accompaniment sound is registered in the setting data SD as a generation rule.
Next, the accompaniment sound generating method according to the present embodiment will be described. The CPU 106 executes the accompaniment sound generating program P1 stored in the ROM 108, thereby performing the automatic accompaniment process described below.
In the step S14, the pattern accompaniment sound generator 16 determines whether the setting operating element 102 has detected an instruction for stopping the automatic accompaniment. In a case where the instruction for stopping the automatic accompaniment has been detected, the pattern accompaniment sound generator 16 stops generating the accompaniment pattern data set PD and stops reproduction of the pattern accompaniment sound in the step S15.
In a case where the instruction for stopping the automatic accompaniment has not been detected in the step S14, the mode determiner 12 determines in the step S16 whether ON of the accent mode or the unison mode has been detected. That is, it is determined whether the mode determiner 12 has detected an instruction for starting the real-time accompaniment function. In a case where the mode determiner 12 detects ON of the accent mode or the unison mode, the real-time accompaniment sound generator 14 reads the setting data SD in the step S21.
Subsequently, in a case where a musical performance part for which the pattern accompaniment sound is to be stopped is present, the real-time accompaniment sound generator 14 provides an instruction for stopping the pattern accompaniment sound to the pattern accompaniment sound generator 16. The real-time accompaniment sound generator 14 makes reference to the setting data SD. In a case where generation of the pattern accompaniment sound is set to be stopped (muted) in units of musical performance parts in the “mute subject,” the real-time accompaniment sound generator 14 provides an instruction for stopping the pattern accompaniment sound for that musical performance part. In response to this instruction, the pattern accompaniment sound generator 16 stops outputting the accompaniment pattern data set PD for the musical performance part for which the stop instruction has been provided (step S22).
Next, in the step S23, the mode determiner 12 determines whether an instruction for stopping the currently set mode has been detected. In a case where the mode determiner 12 has detected the instruction for stopping the mode, the real-time accompaniment sound generator 14 stops generating the real-time accompaniment data RD (step S24). Further, the real-time accompaniment sound generator 14 instructs the pattern accompaniment sound generator 16 to restart generating the pattern accompaniment sound for any musical performance part for which generation of the pattern accompaniment sound is stopped (muted) (step S25). Thereafter, the process returns to the step S14.
In a case where the mode determiner 12 has not detected an instruction for stopping the mode in the step S23, the mode determiner 12 determines whether an instruction for changing the mode has been detected in the step S26. For example, it is determined whether the mode has been changed from the accent mode to the unison mode. In a case where the instruction for changing the mode has been detected, the process returns to the step S21, and the setting data SD of the mode after the change is read. In a case where the instruction for changing the mode has not been detected, the musical performance sound receiver 11 determines whether a note-on has been acquired in the step S31.
When the musical performance sound receiver 11 acquires a note-on, the specifier 13 specifies a musical performance part for which a real-time accompaniment sound is to be generated based on the acquired musical performance sound in the step S32. The specifier 13 makes reference to the setting data SD and specifies the musical performance part based on the information in the “part” field of the “conversion destination” corresponding to the currently set mode. Subsequently, in the step S33, the real-time accompaniment sound generator 14 generates real-time accompaniment data RD for the specified musical performance part. The real-time accompaniment sound generator 14 makes reference to the setting data SD and determines the pitch, the tone color, the volume and so on of the real-time accompaniment sound to be generated based on the information in the “pitch (musical instrument)” field of the “conversion destination” corresponding to the currently set mode.
The real-time accompaniment data RD generated in the real-time accompaniment sound generator 14 is supplied to the accompaniment sound outputter 17. The accompaniment sound outputter 17 outputs the real-time accompaniment data RD to the tone generator 104 with the timing for generating the real-time accompaniment sound aligned with the timing for generating the musical performance sound of the acquired note-on. In a case where real-time accompaniment data RD is reproduced for a plurality of musical performance parts, the real-time accompaniment data RD for the plurality of musical performance parts is output to the tone generator 104 with the timing for generating the real-time accompaniment sounds aligned with the timing for generating the musical performance sound. Thus, the sound system 105 outputs the musical performance sound and the real-time accompaniment sounds for the plurality of musical performance parts with their generation timing aligned.
In the step S34, the musical performance sound receiver 11 determines whether a note-off has been acquired. A note-off indicates that the input of musical performance data (a musical performance sound) has been changed from an ON state to an OFF state. When the musical performance sound receiver 11 acquires a note-off, the real-time accompaniment sound generator 14 stops generating the real-time accompaniment sound in response to the note-off in the step S35.
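The note-on/note-off handling of the steps S31 through S35 can be summarized by the sketch below; the Synth stand-in, the handler names and the rule format are hypothetical, and the authoritative control flow is the one described above.

```python
# Compact sketch of the note handling in the steps S31-S35.
class Synth:  # stand-in for the tone generator 104 and sound system 105
    def play(self, part, note, velocity):
        print(f"on  {part}: note {note} vel {velocity}")
    def stop(self, part, note):
        print(f"off {part}: note {note}")

def on_note_on(note, velocity, rules, synth):
    """Step S32: specify parts; step S33: emit accompaniment with the note."""
    parts = [r["part"] for r in rules if r["matches"](note, velocity)]
    synth.play("keyboard", note, velocity)  # the player's own sound
    for part in parts:                      # same timing as the input sound
        synth.play(part, note, velocity)
    return parts

def on_note_off(note, parts, synth):
    """Steps S34-S35: stop the accompaniment sounds that follow the note."""
    synth.stop("keyboard", note)
    for part in parts:
        synth.stop(part, note)

# Usage with a single always-matching rule:
synth = Synth()
rules = [{"part": "chord 1", "matches": lambda n, v: True}]
active_parts = on_note_on(60, 100, rules, synth)
on_note_off(60, active_parts, synth)
```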
Next, an example of the operation along a time series will be described. At a point T2 in time, ON of the unison mode is detected. Thus, pattern accompaniment sounds are stopped (muted) for the musical performance parts including the bass part, the chord 1 part and the phrase 1 part. At the point T2 in time and later, generation of a pattern accompaniment sound continues for the main drum part.
Next, at a point T3 in time, a musical performance sound is input. Real-time accompaniment sounds are generated for the main drum part, the bass part and the chord 1 part based on the musical performance sound. Then, at the point T3 in time, the pattern accompaniment sound for the main drum part is stopped (muted). Between points T4 and T5 in time, musical performance sounds are input again. Based on these musical performance sounds, real-time accompaniment sounds are generated for the main drum part, the bass part and the phrase 1 part. Then, between the points T4 and T5 in time, the pattern accompaniment sound for the main drum part is stopped (muted). Subsequently, at a point T6 in time, a musical performance sound is input. Based on the musical performance sound, real-time accompaniment sounds are generated for all of the musical performance parts. Then, at the point T6 in time, the pattern accompaniment sound for the main drum part is stopped (muted).
The accompaniment sound generating device of the present embodiment specifies a plurality of musical performance parts for which real-time accompaniment sounds are to be generated based on an input musical performance sound and generates the real-time accompaniment sounds that belong to the plurality of specified musical performance parts. Then, the real-time accompaniment sounds generated for the plurality of musical performance parts are output with the timing for generating the real-time accompaniment sounds aligned with the timing for generating a musical performance sound. Thus, the player can enjoy automatic accompaniment sounds having variations. Because a real-time accompaniment sound is generated for each musical performance sound, the automatic accompaniment sound is not monotonous to the player.
Further, with the present embodiment, a plurality of modes are prepared as the modes for generation of a real-time accompaniment sound based on a musical performance sound. Then, in the setting data SD, a plurality of musical performance parts for which real-time accompaniment sounds are to be generated in each mode are registered. A real-time accompaniment sound can be adjusted in accordance with a mode preferred by the player.
Further, with the present embodiment, the setting data SD includes information relating to the generation rule of a real-time accompaniment sound to be generated based on a musical performance sound in each mode. Then, the real-time accompaniment sound generator 14 makes reference to the setting data SD and generates a real-time accompaniment sound based on a musical performance sound in accordance with the generation rule corresponding to a set mode. A real-time accompaniment sound can be adjusted in accordance with a mode preferred by the player.
For example, when two human players perform, in a case where syncopation or a “match (or tutti)” is present in a music piece, the players perform in accordance with the syncopation or the match. However, since it uses a pattern accompaniment sound, a conventional automatic accompaniment sound generating device cannot give such a musical performance. That is, performance expression with a sense of unity that can be realized by human players cannot be provided by the conventional automatic accompaniment sound generating device unless corresponding accompaniment sound data is prepared in advance. If such accompaniment sound data were to be prepared in advance, a large amount of data would be required. Further, it is difficult and time-consuming for a general user to create such accompaniment sound data. With the accompaniment sound generating device of the present embodiment, a musical performance part is specified for each musical performance sound, and a real-time accompaniment sound is generated based on the musical performance sound. Thus, an automatic accompaniment sound corresponding to an impromptu musical performance such as syncopation or a “match” can be reproduced in real time.
Further, with the present embodiment, during a period in which a real-time accompaniment sound is being generated, the pattern accompaniment sound generator 16 stops generating the pattern accompaniment sound for the same musical performance part. This makes it easier for the player to listen to and enjoy the real-time accompaniment sound.
Further, with the present embodiment, when either of the modes in which a real-time accompaniment sound is to be generated is turned on, the pattern accompaniment sound generator 16 stops generating the pattern accompaniment sound for musical performance parts such as the bass part, the chord part, the pad part or the phrase part. This makes it easier for the player to listen to and enjoy the real-time accompaniment sound. Further, when either of the modes in which a real-time accompaniment sound is to be generated is turned on, generation of the pattern accompaniment sound continues for a musical performance part such as the main drum part. The player can thus enjoy a real-time accompaniment sound within the flow of a pattern accompaniment sound.
In the following paragraphs, non-limiting examples of correspondences between various elements recited in the claims below and those described above with respect to various preferred embodiments of the present disclosure are explained. However, the present disclosure is not limited to the below-mentioned examples. In the above-mentioned embodiment, the real-time accompaniment sound is an example of an accompaniment sound in the claims. In the above-mentioned embodiment, the setting data SD is an example of setting information. In the above-mentioned embodiment, the “musical performance sound (input sound)” and the “strength condition” in the setting data SD are examples of characteristics of the musical performance sound.
As each of the constituent elements recited in the claims, various other elements having the configurations or functions described in the claims can also be used.
While the accent mode and the unison mode are described as the modes for real-time accompaniment in the above-mentioned embodiment, these are merely examples. For example, modes corresponding to categories, such as a hard rock mode or a jazz mode, may be prepared.
In the above-mentioned embodiment, the tone color, the volume and so on of a real-time accompaniment sound are determined with reference to the “pitch (musical instrument)” of the “conversion destination” of the setting data SD. In another embodiment, the tone color, the volume and so on of a real-time accompaniment sound may be determined with reference to the accompaniment style data set ASD based on the category and genre currently set for a pattern accompaniment sound.
In the above-mentioned embodiment, during a period in which a mode for a real-time accompaniment sound is turned on, the pattern accompaniment sound is set to be muted for the musical performance parts other than the main drum part, and the pattern accompaniment sound continues to be generated only for the main drum part. In another embodiment, generation of the pattern accompaniment sound may continue for some of the other musical performance parts in addition to the main drum part. For example, generation of the pattern accompaniment sound may continue for the main drum part and the bass part.
Further, when the unison mode is changed to another mode (the unison mode is turned off or changed to the accent mode), generation of a pattern accompaniment sound for an accompaniment part other than rhythm does not have to be restarted until an instruction for changing a chord is received.
The accompaniment sound generating device, the electronic musical instrument, the accompaniment sound generating method and the non-transitory computer readable medium storing the accompaniment sound generating program have characteristics described below.
An accompaniment sound generating device according to one aspect of the present disclosure includes a specifier that specifies a plurality of musical performance parts for which accompaniment sounds are to be generated based on an input musical performance sound, an accompaniment sound generator that generates the accompaniment sounds belonging to the plurality of specified musical performance parts for each musical performance sound, and an accompaniment sound outputter that outputs the accompaniment sounds generated for the plurality of musical performance parts with the timing for generating the accompaniment sounds aligned with the timing for generating musical performance sounds.
A plurality of modes may be prepared as modes for generation of an accompaniment sound based on the musical performance sound, and the specifier may make reference to setting information in which the plurality of musical performance parts for which the accompaniment sounds are to be generated in each mode are registered and may specify the plurality of musical performance parts corresponding to a set mode. The setting information may include information relating to a generation rule of the accompaniment sound to be generated based on the musical performance sound in each mode, and the accompaniment sound generator may make reference to the setting information and may generate the accompaniment sounds based on the musical performance sound in accordance with the generation rule corresponding to a set mode.
Information that associates characteristics of the musical performance sound with characteristics of the accompaniment sound may be registered as the generation rule in the setting information.
An electronic musical instrument according to another aspect of the present disclosure includes the above-mentioned accompaniment sound generating device and a pattern accompaniment sound generator that generates a pattern accompaniment sound for a predetermined musical performance part based on predetermined accompaniment pattern information, wherein the pattern accompaniment sound generator stops generating the pattern accompaniment sound, in regard to the same musical performance part as a musical performance part for which the accompaniment sound is generated, during a period in which the accompaniment sound is being generated by the accompaniment sound generator.
The pattern accompaniment sound generator may stop generating the pattern accompaniment sound in regard to a first musical performance part, and may continue generating the pattern accompaniment sound in regard to a second musical performance part, when a mode in which the accompaniment sound is to be generated is turned on.
An accompaniment sound generating method according to yet another aspect of the present disclosure includes specifying a plurality of musical performance parts for which accompaniment sounds are to be generated based on an input musical performance sound, generating the accompaniment sounds belonging to the plurality of specified musical performance parts for each musical performance sound, and outputting the accompaniment sounds generated for the plurality of musical performance parts with the timing for generating the accompaniment sounds aligned with the timing for generating musical performance sounds.
An accompaniment sound generating program according to yet another aspect of the present disclosure causes a computer to execute the processes of specifying a plurality of musical performance parts for which accompaniment sounds are to be generated based on an input musical performance sound, generating the accompaniment sounds belonging to the plurality of specified musical performance parts for each musical performance sound, and outputting the accompaniment sounds generated for the plurality of musical performance parts with the timing for generating the accompaniment sounds aligned with the timing for generating musical performance sounds.
While preferred embodiments of the present disclosure have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present disclosure. The scope of the present disclosure, therefore, is to be determined solely by the following claims.
This application claims priority from Japanese Patent Application No. 2020-006370, filed in Japan in January 2020.