This disclosure relates to a singing sound output system and method for outputting singing sounds.
A technology for generating singing sounds in response to performance operations is known. For example, the singing sound synthesizer disclosed in Japanese Laid-Open Patent Application No. 2016-206323 generates singing sounds by advancing through lyrics one character or one syllable at a time in response to a real-time performance.
However, Japanese Laid-Open Patent Application No. 2016-206323 does not disclose the outputting of singing sounds together with an accompaniment in real time. If singing sounds were to be output together with an accompaniment in real time, it would be difficult to accurately generate singing sounds at the originally intended timing. For example, even if performance operations were started at the intended timing of sound generation, the actual start of singing would be delayed because of the processing time required from synthesis to pronunciation of the singing sounds. Therefore, there is room for improvement regarding the outputting of singing sounds at the intended timing in accordance with an accompaniment.
An object of this disclosure is to provide a singing sound output system and method that can output singing sounds at the timing at which sound information is input, in synchronization with the accompaniment.
An embodiment of this disclosure provides a singing sound output system, comprising at least one processor configured to execute a plurality of units including a teaching unit configured to indicate to a user a progression position in singing data that are temporally associated with accompaniment data and that include a plurality of syllables, an acquisition unit configured to acquire at least one piece of sound information input by a performance, a syllable identification unit configured to identify, from the plurality of syllables in the singing data, a syllable corresponding to the at least one piece of sound information acquired by the acquisition unit, a timing identification unit configured to associate, with the at least one piece of sound information, relative information indicating a relative timing with respect to an identified syllable that has been identified by the syllable identification unit, a synthesizing unit configured to synthesize a singing sound based on the identified syllable, and an output unit configured to, based on the relative information, synchronize and output the singing sound synthesized by the synthesizing unit and an accompaniment sound based on the accompaniment data.
Embodiments of this disclosure are described below with reference to the drawings. It will be apparent to those skilled in the art from this disclosure that the following descriptions of the embodiments are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
The keyboard 105 and the drum 107 are electronic instruments used for inputting MIDI (Musical Instrument Digital Interface) signals. The wind instrument 106 is an acoustic instrument used for inputting monophonic analog sounds. The keyboard 105 and the wind instrument 106 can also input pitch information. The wind instrument 106 can be an electronic instrument, and the keyboard 105 and the drum 107 can be acoustic instruments. These instruments are examples of devices for inputting sound information and are played by a user on the PC 101 side. A vocalization by the user on the PC 101 side can also be used as a means for inputting analog sound, in which case the physical voice is input as an analog sound. Therefore, the concept of “performance” for inputting sound information in the present embodiment includes input of actual voice. In addition, the device for inputting sound information need not be in the form of a musical instrument.
Although details will be described further below, an overview of a typical process of the singing sound output system 1000 will be described. A user on the PC 101 side plays a musical instrument while listening to an accompaniment. The PC 101 transmits singing data 51, timing information 52, and accompaniment data 53 (all of which will be described further below in connection with
The CPU 11 controls the entire PC 101. The CPU 11 is one example of at least one processor serving as an electronic controller of the PC 101. The term “electronic controller” as used herein refers to hardware and does not include a human. The PC 101 can include, instead of the CPU 11 or in addition to the CPU 11, one or more other types of processors, such as a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), and the like.
The ROM 12 stores various data in addition to a program executed by the CPU 11. The RAM 13 provides a work area when the CPU 11 executes a program. The RAM 13 temporarily stores various information. The memory 14 (computer memory) includes non-volatile memory. The timer 15 measures time. The timer 15 can employ a counter method. The operation unit (user operable input) 16 includes a plurality of operators for inputting various types of information and receives instructions from a user. The display unit (display) 17 displays various information. The sound generation unit (sound generator) 18 includes a sound source circuit, an effects circuit, and a sound system.
The input unit 8 includes an interface for acquiring sound information from devices for inputting electronic sound information, such as the keyboard 105 and the drum 107. The input unit 8 also includes devices such as a microphone for acquiring sound information from devices for inputting acoustic sound information, such as the wind instrument 106. The various I/Fs 19 connect to the communication network 104 (
The cloud server 102 includes a CPU (Central Processing Unit) 21, a ROM (Read Only Memory) 22, a RAM (Random Access Memory) 23, a memory 24, a timer 25, an operation unit 26, a display unit 27, a sound generation unit 28, and various I/Fs 29. These constituent elements are interconnected by a bus 20. The configurations of these constituent elements are the same as those indicated by reference numerals 11-17 and 19 in the PC 101.
The sound output device 103 includes a CPU (Central Processing Unit) 31, a ROM (Read Only Memory) 32, a RAM (Random Access Memory) 33, a memory 34, a timer 35, an operation unit 36, a display unit 37, a sound generation unit 38, and various I/Fs 39. These constituent elements are interconnected by a bus 30. The configurations of these constituent elements are the same as those indicated by reference numerals 11-19 in the PC 101.
In the present embodiment, for example, each function of the teaching unit 41 and the acquisition unit 42 is realized by the PC 101. Each of these functions is realized in software by programs stored in the ROM 12. That is, each function is provided by the CPU 11 reading out the necessary program, executing various computations in the RAM 13, and controlling hardware resources. In other words, these functions are realized by cooperation primarily between the CPU 11, the ROM 12, the RAM 13, the timer 15, the display unit 17, the sound generation unit 18, the input unit 8, and the various I/Fs 19. The programs executed here include sequencer software.
In addition, the functions of the syllable identification unit 43, the timing identification unit 44, the synthesizing unit 45, and the phrase generation unit 47 are realized by the cloud server 102. Each of these functions is implemented in software by a program stored in the ROM 22. These functions are realized by cooperation primarily between the CPU 21, the ROM 22, the RAM 23, the timer 25, and the various I/Fs 29.
In addition, the function of the output unit 46 is realized by the sound output device 103. The function of the output unit 46 is implemented in software by a program stored in the ROM 32. These functions are realized by cooperation primarily between the CPU 31, the ROM 32, the RAM 33, the timer 35, the sound generation unit 38, and the various I/Fs 39.
The singing sound output system 1000 refers to the singing data 51, the timing information 52, the accompaniment data 53, and a phrase database 54. The phrase database 54 is stored in the ROM 12, for example. The phrase generation unit 47 and the phrase database 54 are not essential to the present embodiment. These elements will be described in connection with the third embodiment.
The singing data 51, the timing information 52, and the accompaniment data 53 are associated with each other for each song and stored in the ROM 12 in advance. The accompaniment data 53 are information for reproducing the accompaniment of each song stored as sequence data. The singing data 51 include a plurality of syllables. The singing data 51 include lyrics text data and a phonological information database. The lyrics text data are data describing lyrics, in which the lyrics of each song are described, divided into units of syllables. In each song, the accompaniment position in the accompaniment data 53 and the syllables in the singing data 51 are temporally associated with each other by the timing information 52.
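By way of illustration only, the following is a minimal Python sketch of one way the singing data 51, the timing information 52, and the accompaniment data 53 could be associated for a song; the class names, field names, and example values are hypothetical assumptions and do not form part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Syllable:
    text: str             # lyric text of the syllable, e.g. "sa"
    phonemes: list[str]   # phonological information used when synthesizing the singing sound

@dataclass
class SongData:
    # Singing data 51: the lyrics of the song divided into syllable units.
    syllables: list[Syllable]
    # Timing information 52: pronunciation start timing t of each syllable,
    # expressed relative to the accompaniment progression (here, in seconds).
    pronunciation_start: list[float]
    # Accompaniment data 53: sequence data (time, pitch, velocity) for the accompaniment.
    accompaniment_events: list[tuple[float, int, int]]
    tempo_bpm: float = 120.0

# Example: a song whose first syllables are "sa", "ku", "ra".
song = SongData(
    syllables=[Syllable("sa", ["s", "a"]),
               Syllable("ku", ["k", "u"]),
               Syllable("ra", ["r", "a"])],
    pronunciation_start=[0.0, 0.5, 1.0],
    accompaniment_events=[(0.0, 60, 100), (0.5, 64, 100), (1.0, 67, 100)],
)
```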
The processes carried out by each functional unit in the functional block 110 will be described in detail in
The user performs in accordance with the indicated syllable progression. Here, an example is given in which a MIDI signal is input by playing the keyboard 105, which can input pitch information. The user, who is the performer, sequentially presses keys corresponding to the syllables in time with the start of each of the syllables “sa,” “ku,” and “ra.” In this manner, sound information N (N1-N3) is acquired sequentially. The pronunciation length of each piece of sound information N is the time from an input start timing s (s1-s3) to an input end timing e (e1-e3). The input start timing s corresponds to note-on, and the input end timing e corresponds to note-off. The sound information N includes pitch information and velocity information.
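As a further hypothetical sketch, a single piece of sound information N produced by one key press could be represented as follows; the structure and the example values are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoundInfo:
    input_start: float          # input start timing s (note-on), relative to the accompaniment
    input_end: Optional[float]  # input end timing e (note-off); None while the key is still held
    pitch: Optional[int]        # MIDI note number; None for unpitched input such as the drum 107
    velocity: Optional[int]     # performance intensity; may be absent or disregarded

# Three key presses made roughly in time with the taught syllables "sa", "ku", "ra".
n1 = SoundInfo(input_start=0.02, input_end=0.40, pitch=60, velocity=90)
n2 = SoundInfo(input_start=0.46, input_end=0.90, pitch=64, velocity=85)
n3 = SoundInfo(input_start=1.07, input_end=1.45, pitch=67, velocity=88)
```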
The user can intentionally shift the actual input start timing s from the pronunciation start timing t. In the cloud server 102, the shift time of the input start timing s with respect to the pronunciation start timing t is calculated as the temporal difference ΔT (ΔT1-ΔT3) (relative information). The difference ΔT is calculated for, and associated with, each syllable. The cloud server 102 synthesizes a singing sound based on the sound information N and sends the synthesized singing sound to the sound output device 103 together with the accompaniment data 53.
The sound output device 103 synchronizes and outputs the singing sound and the accompaniment sound based on the accompaniment data 53. At this time, the sound output device 103 outputs the accompaniment sound at a set constant tempo. The sound output device 103 outputs singing sounds such that each syllable matches the accompaniment position based on the timing information 52. Note that processing time is required from the input of sound information N to the output of the singing sound. Thus, the sound output device 103 delays the output of the accompaniment sound using delay processing in order to match each syllable with the accompaniment position.
For example, the sound output device 103 adjusts the output timing by referring to the difference ΔT corresponding to each syllable. As a result, the output of the singing sound is started in accordance with the input timing (at input start timing s). For example, the output (pronunciation) of the syllable “ku” is started at a timing that is earlier than the pronunciation start timing t2 by the difference ΔT2. In addition, the output (pronunciation) of the syllable “ra” is started at a timing that is later than the pronunciation start timing t3 by the difference ΔT3. The pronunciation of each syllable ends (is muted) at a time corresponding to the input end timing e. Therefore, the accompaniment sounds are output at a fixed tempo, and the singing sounds are output at timings corresponding to the performance timings. Therefore, the singing sound can be synchronized with the accompaniment and output at the timing when the sound information N is input.
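The output-timing adjustment described above could be sketched, under the assumed data formats introduced earlier and an assumed fixed processing delay, as follows; this is one illustrative possibility and not the actual implementation. In this sketch the accompaniment keeps its constant tempo and is merely delayed, while each singing sound is shifted by its associated ΔT.

```python
def schedule_fixed_tempo_output(singing_events, accompaniment_events, processing_delay=0.2):
    """Sketch of the synchronized output of the first embodiment.

    singing_events: list of (pronunciation_start_t, delta_t, duration, audio) per syllable.
    accompaniment_events: list of (time, pitch, velocity) reproduced at a constant tempo.

    The accompaniment is delayed by a fixed amount so that the processing time from
    input to synthesis is absorbed; each singing sound is then shifted by its ΔT so
    that it starts at the timing at which the sound information N was actually input.
    """
    schedule = []
    for time, pitch, velocity in accompaniment_events:
        schedule.append((time + processing_delay, "accompaniment", (pitch, velocity)))
    for t, delta_t, duration, audio in singing_events:
        start = t + delta_t + processing_delay
        schedule.append((start, "singing", (audio, duration)))
    return sorted(schedule, key=lambda event: event[0])
```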
The PC processing will be described first. In Step S101, the CPU 11 of the PC 101 selects a song to be played at this time (hereinafter referred to as selected song) from among a plurality of prepared songs based on an instruction from the user. The performance tempo of the song is set in advance by default for each song. However, the CPU 11 can change the tempo to be set based on an instruction from the user when the song to be performed is selected.
In Step S102, the CPU 11 transmits related data corresponding to the selected song (the singing data 51, the timing information 52, and the accompaniment data 53) to the cloud server 102 via the various I/Fs 19.
In Step S103, the CPU 11 initiates the teaching of the progression position. At this time, the CPU 11 sends a notification to the cloud server 102 indicating that the teaching of the progression position has been initiated. The teaching process here is realized by executing sequencer software, for example. The CPU 11 (teaching unit 41) teaches the current progression position by using the timing information 52.
For example, the display unit 17 displays lyrics corresponding to the syllables in the singing data 51. The CPU 11 teaches the progression position on the displayed lyrics. For example, the teaching unit 41 varies the display mode, such as the color, of the lyrics at the current position, or moves an indicator of the current position or the lyrics themselves, to indicate the progression position. Furthermore, the CPU 11 reproduces the accompaniment data 53 at the set tempo to indicate the progression position. The method for indicating the progression position is not limited to these examples, and various visual or auditory methods can be employed. For example, a method of indicating the note of the current position on a displayed musical score can be employed. Alternatively, after the start timing is indicated, a metronome sound can be generated. At least one method can be employed, but a plurality of methods can also be used in combination.
In Step S104, the CPU 11 (acquisition unit 42) executes a sound information acquisition process. For example, the user performs along with the lyrics while checking the progression position that has been taught (for example, while listening to the accompaniment). The CPU 11 acquires analog sound or MIDI data produced by the performance as sound information N. Sound information N typically includes input start timing s, input end timing e, pitch information, and velocity information. Note that pitch information is not necessarily included, as is the case when the drum 107 is played. The velocity information can be omitted. The input start timing s and the input end timing e are defined by the time relative to the accompaniment progression. In the case that analog sound, such as the physical voice, is acquired with a microphone, audio data are acquired as the sound information N.
In Step S105, the CPU 11 sends the sound information N acquired in Step S104 to the cloud server 102. In Step S106, the CPU 11 determines whether the selected song has ended, that is, whether teaching of the progression position has been completed to the final position in the selected song. Then, if the selected song has not ended, the CPU 11 returns to Step S104. Therefore, the sound information N acquired in accordance with the performance along with the progression of the song is sent to the cloud server 102 as needed until the selected song has ended. When the selected song ends, the CPU 11 sends a notification to that effect to the cloud server 102 and terminates the PC processing.
The cloud server processing will now be described. In Step S201, when related data corresponding to the selected song are received via the various I/Fs 29, the CPU 21 of the cloud server 102 proceeds to Step S202. In Step S202, the CPU 21 transmits the related data that have been received to the sound output device 103 via the various I/Fs 29. It is not necessary to transmit the singing data 51 to the sound output device 103.
In Step S203, the CPU 21 starts a series of processes (S204-S209). In starting this series of processes, the CPU 21 executes the sequencer software and uses the related data that have been received to advance the time while waiting for the reception of the next sound information N. In Step S204, the CPU 21 receives the sound information N.
In Step S205, the CPU 21 (syllable identification unit 43) identifies the syllable corresponding to the sound information N that has been received. First, the CPU 21 calculates, for each syllable, the difference ΔT between the input start timing s in the sound information N and the pronunciation start timing t of each of the plurality of syllables in the singing data 51 corresponding to the selected song. The CPU 21 then identifies the syllable with the smallest difference ΔT from among the plurality of syllables in the singing data 51 as the syllable corresponding to the sound information N received this time.
For example, in the example shown in
In the case that the sound information N is audio data, the CPU 21 (syllable identification unit 43) determines the pronunciation/muting timing, the tone height (pitch), and the velocity of the sound information N.
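A minimal sketch of the identification in Step S205, reusing the hypothetical SoundInfo and SongData structures introduced earlier, is given below; the function name and signature are assumptions and not the actual implementation.

```python
def identify_syllable(sound: SoundInfo, song: SongData) -> tuple[int, float]:
    """Identify the syllable whose pronunciation start timing t is closest to the
    input start timing s, and return its index together with the signed ΔT = s - t."""
    index = min(
        range(len(song.pronunciation_start)),
        key=lambda i: abs(sound.input_start - song.pronunciation_start[i]),
    )
    delta_t = sound.input_start - song.pronunciation_start[index]
    return index, delta_t

# A key press at s = 0.46 is closest to t = 0.5 ("ku"), giving ΔT = -0.04:
# the input started slightly earlier than the taught pronunciation start timing.
identify_syllable(n2, song)
```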
In Step S206, the CPU 21 (timing identification unit 44) executes a timing identification process. That is, the CPU 21 associates the difference ΔT calculated for the sound information N received this time with the syllable identified as corresponding to that sound information N.
In Step S207, the CPU 21 (synthesizing unit 45) synthesizes a singing sound based on the identified syllable. The pitch of the singing sound is determined based on the pitch information of the corresponding sound information N. In the case that sound information N is the sound of a drum, the pitch of the singing sound can be a constant pitch, for example. Regarding the singing sound output timing, the pronunciation timing and the muting timing are determined based on input end timing e (or the pronunciation length) and pronunciation start timing t of the corresponding sound information N. Therefore, a singing sound is synthesized from the syllable corresponding to the sound information N and the pitch determined by the performance. Note that there are cases in which the pronunciation period of the current syllable overlaps with the original pronunciation timing of the next syllable in the singing data because the sound during the performance was muted too late. In this case, the input end timing e can be corrected so that the sound is forcibly muted before the original pronunciation timing of the next syllable.
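The correction of the input end timing e described above could be sketched as follows, again using the hypothetical structures introduced earlier; clamping to the next syllable's pronunciation start timing is an assumed realization, not the disclosed implementation.

```python
def corrected_end_timing(sound: SoundInfo, syllable_index: int, song: SongData) -> float:
    """Determine the muting timing of the synthesized syllable.

    If the performed note was released so late that its pronunciation period would
    overlap the original pronunciation timing of the next syllable in the singing
    data, the input end timing e is corrected so the sound is forcibly muted first."""
    end = sound.input_end if sound.input_end is not None else sound.input_start + 0.3
    if syllable_index + 1 < len(song.pronunciation_start):
        end = min(end, song.pronunciation_start[syllable_index + 1])
    return end
```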
In Step S208, the CPU 21 executes data transmission. That is, the CPU 21 transmits the synthesized singing sound, the difference ΔT corresponding to the syllable, and the velocity information at the time of performance to the sound output device 103 via the various I/Fs 29.
In Step S209, the CPU 21 determines whether the selected song has ended, that is, whether a notification indicating that the selected song has ended has been received from the PC 101. Then, if the selected song has not ended, the CPU 21 returns to Step S204. Therefore, until the selected song ends, singing sounds based on the syllables corresponding to sound information N are synthesized and transmitted as needed. The CPU 21 can determine that the selected song has ended when a prescribed period of time has elapsed after the processing of the last received sound information N has ended. When the selected song ends, the CPU 21 terminates the cloud server processing.
The sound output device processing will now be described. In Step S301, when related data corresponding to the selected song are received via the various I/Fs 39, the CPU 31 of the sound output device 103 proceeds to Step S302. In Step S302, the CPU 31 receives the data (singing sound, difference ΔT, velocity) transmitted from the cloud server 102 in Step S208.
In Step S303, the CPU 31 (output unit 46) performs the synchronized output of the singing sound and the accompaniment based on the received singing sound and difference ΔT, the already received accompaniment data 53, and the timing information 52.
As described in
Note that the mode of the synchronized output is not limited to reproduction, but the output can be stored in the memory 34 as an audio file or transmitted to an external device through the various I/Fs 39.
In Step S304, the CPU 31 determines whether the selected song has ended, that is, whether a notification indicating that the selected song has ended has been received from the cloud server 102. If the selected song has not ended, the CPU 31 then returns to Step S302. Therefore, the synchronized output of the singing sound that has been received is continued until the selected song ends. The CPU 31 can determine that the selected song has ended when a prescribed period of time has elapsed after the processing of the last received data has ended. When the selected song ends, the CPU 31 terminates the sound output device processing.
By the present embodiment, the syllable corresponding to the sound information N acquired while the progression position in the singing data 51 is being indicated to the user is identified from the plurality of syllables in the singing data 51. The relative information (difference ΔT) is associated with the sound information N, and the singing sound is synthesized based on the identified syllable. The singing sound and the accompaniment sound based on the accompaniment data 53 are synchronized and output based on the relative information. Therefore, the singing sound can be synchronized with the accompaniment and output at the timing at which the sound information N is input.
Also, in the case that the sound information N includes pitch information, the singing sound can be output at the pitch input by the performance. In the case that the sound information N also includes velocity information, the singing sound can be output at a volume corresponding to the intensity of the performance.
Although the related data (singing data 51, timing information 52, accompaniment data 53) are transmitted to the cloud server 102 or the sound output device 103 after the selected song is determined, no limitation is imposed thereby. For example, the related data of a plurality of songs can be pre-stored in the cloud server 102 or the sound output device 103. Then, when the selected song is determined, information specifying the selected song can be transmitted to the cloud server 102 and also to the sound output device 103.
In the second embodiment of this disclosure, a part of the system processing differs from the first embodiment. Therefore, the differences from the first embodiment are primarily described with reference to
The cloud server 102 identifies the syllable that was next in the order of progression at the time the sound information N was input as the syllable corresponding to the sound information N that has been input. Therefore, with each key-on, the corresponding syllable is identified in turn.
The actual input start timing s can deviate relative to the pronunciation start timing t. In the same manner as in the first embodiment, the shift time of the input start timing s with respect to the pronunciation start timing t is calculated in the cloud server 102 as the temporal difference ΔT (ΔT1-ΔT3) (relative information). The difference ΔT is calculated for each syllable and associated with it. The cloud server 102 synthesizes a singing sound based on the sound information N and sends it together with the accompaniment data 53 to the sound output device 103.
In
The sound output device 103 outputs the singing sound and the accompaniment sound based on the accompaniment data 53 in synchronized fashion, by outputting the singing sound while adjusting the output timing based on the timing information and the difference ΔT. At this time, the sound output device 103 outputs the singing sound at the syllable pronunciation start timing t′. Regarding the accompaniment sound, the sound output device 103 outputs it such that the accompaniment position matches each syllable, based on the difference ΔT. In order to match each syllable to the accompaniment position, the sound output device 103 uses the delay process to delay the output of the accompaniment sound. Therefore, the singing sound is output at a timing corresponding to the performance timing, and the tempo of the accompaniment sound changes in accordance with the performance timing.
The system processing in the present embodiment will be described with reference to the flowchart of
In the teaching process initiated at Step S103 by the PC 101, the CPU 11 (teaching unit 41) uses the timing information 52 to teach the current progression position. In Step S104, the CPU 11 (acquisition unit 42) executes a sound information acquisition process. The user plays and inputs the sound corresponding to the next syllable while checking the progression position. The CPU 11 causes the progression of the accompaniment and the progression of the teaching of the syllables to wait until the next sound information N is input. Therefore, the CPU 11 teaches the next syllable while waiting for the input of the sound information N, and each time the sound information N is input, advances the syllable indicating the progression position one step to the next syllable. The CPU 11 also matches the accompaniment progression to the progression of the teaching of the syllables.
In the series of processes that is started in Step S203 in the cloud server 102, the CPU 21 advances the time while waiting for the reception of sound information N. In Step S204, the CPU 21 receives sound information N as it arrives and advances the time each time sound information N is received. In other words, the CPU 21 waits, without advancing the time, until the next sound information N is received.
When the sound information N is received, the CPU 21 (syllable identification unit 43) in Step S205 identifies the syllable corresponding to the sound information N that has been received. Here, the CPU 21 identifies the syllable that was next in the order of progression at the time the sound information N was input as the syllable corresponding to the sound information N that has been received this time. Therefore, with each key-on due to the performance, the corresponding syllable is identified in turn.
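A minimal sketch of this identification rule is given below, reusing the hypothetical structures introduced earlier; the cursor-style implementation is an assumption for illustration only.

```python
class ProgressionCursor:
    """Second-embodiment identification: each received sound information N is mapped
    to the syllable that was next in the order of progression at the time of input."""

    def __init__(self, song: SongData):
        self.song = song
        self.position = 0  # index of the syllable that will be identified next

    def identify(self, sound: SoundInfo) -> tuple[int, float]:
        index = min(self.position, len(self.song.syllables) - 1)
        # ΔT is the shift of the input start timing s from the taught pronunciation
        # start timing t of the identified syllable.
        delta_t = sound.input_start - self.song.pronunciation_start[index]
        self.position += 1  # advance the progression position by one syllable
        return index, delta_t
```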
After a syllable is identified, in Step S206, the CPU 21 calculates the difference ΔT and associates this difference with the identified syllable. That is, as shown in
In the data transmission in Step S208, the CPU 21 transmits the synthesized singing sound, the difference ΔT corresponding to the syllable, and the velocity information at the time of performance to the sound output device 103 via the various I/Fs 29.
In the synchronized output process performed by the sound output device 103 in Step S303, the CPU 31 (output unit 46) synchronously outputs the singing sound and the accompaniment based on the singing sound and the difference ΔT that have been received, the accompaniment data 53 that have already been received, and the timing information 52. At this time, the CPU 31 performs the output process while matching each syllable to the accompaniment position by adjusting the output timings of the accompaniment sound and the singing sound with reference to the difference ΔT.
As a result, as shown in
On the other hand, the performance tempo of the accompaniment sound changes in accordance with the performance timing. For example, with respect to the accompaniment sound, the CPU 31 corrects the position of the pronunciation start timing t2 to the position of the syllable pronunciation start timing t2′, and outputs the accompaniment sound.
Accordingly, the accompaniment sound is output at a variable tempo and the singing sound is output at the timing corresponding to the performance timing. Therefore, the singing sound can be synchronized with the accompaniment and output at the timing at which the sound information N is input.
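One way the tempo-following correction could be illustrated is the following sketch, in which accompaniment events at or after the syllable's original pronunciation start timing t are shifted by ΔT so that t is corrected to t′; this is an assumption for illustration, not the actual correction algorithm.

```python
def follow_performance_tempo(accompaniment_events, original_t, delta_t):
    """Shift accompaniment events at or after the syllable's original pronunciation
    start timing t by ΔT, so that the accompaniment position is corrected from t
    to t' = t + ΔT and the accompaniment follows the performance timing."""
    corrected = []
    for time, pitch, velocity in accompaniment_events:
        if time >= original_t:
            time = time + delta_t
        corrected.append((time, pitch, velocity))
    return corrected
```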
By the present embodiment, the teaching unit 41 indicates the next syllable while waiting for the input of sound information N, and each time sound information N is input, advances the syllable indicating the progression position by one to the next syllable. The syllable identification unit 43 then identifies the syllable that was next in the order of progression at the time sound information N was input as the syllable corresponding to the sound information N that has been input. Thus, it is possible to exhibit the same effect as the first embodiment in terms of outputting the singing sound at a timing at which the sound information N is input, in synchronization with the accompaniment. In addition, even if the user performs at a free tempo, the singing sound can be output synchronously with the accompaniment according to the user's performance tempo.
In the first and second embodiments, the relative information to be associated with the sound information N is not limited to the difference ΔT. For example, the relative information indicating the relative timing with respect to the identified syllable can be the relative time of the sound information N and the relative time of each syllable based on a certain time defined by the timing information 52.
The third embodiment of this disclosure will be described with reference to
In the present embodiment, the teaching unit 41, the timing identification unit 44, the singing data 51, the timing information 52, and the accompaniment data 53 shown in
Here, the accents of the series of sound information N refer to strong/weak accents based on the relative intensity of sound. The accent of a phrase refers to high/low accents based on the relative height of the pitch of each syllable. Therefore, the intensity of sound of the sound information N corresponds to the high/low of the pitch of the phrase.
The PC processing will be described first. In Step S401, the CPU 11 of the PC 101 transitions to a performance start state based on the user's instruction. At this time, the CPU 11 transmits a notification of a transition to the performance start state to the cloud server 102 via the various I/Fs 19.
In Step S402, when the user strikes the drum 107, the CPU 11 (acquisition unit 42) acquires the corresponding sound information N. The sound information N is MIDI data or analog sound. The sound information N includes at least information indicating the input start timing (strike-on) and information indicating velocity.
In Step S403, the CPU 11 (acquisition unit 42) determines whether the current series of sound information N has been finalized. For example, in the case that the first sound information N is input within a first prescribed period of time after transition to the performance start state, the CPU 11 determines that the series of sound information N has been finalized when a second prescribed period of time has elapsed since the last sound information N was input. Although a series of sound information N is assumed to be a collection of a plurality of pieces of sound information N, it can be one piece of sound information N.
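The finalization of a series of sound information N could be sketched with two hypothetical timeout values as follows; the callback get_next_strike and the specific durations are assumptions for illustration only.

```python
import time

def collect_series(get_next_strike, first_timeout=5.0, gap_timeout=2.0):
    """Collect one series of sound information N.

    The first strike must arrive within `first_timeout` seconds of the transition to
    the performance start state; the series is finalized once `gap_timeout` seconds
    pass without a further strike. `get_next_strike(timeout)` is a hypothetical
    callback that returns the next strike or None if the timeout elapses."""
    series = []
    deadline = time.monotonic() + first_timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return series  # finalized (possibly empty if nothing was struck)
        strike = get_next_strike(remaining)
        if strike is None:
            return series  # finalized after the gap timeout
        series.append(strike)
        deadline = time.monotonic() + gap_timeout
```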
In Step S404, the CPU 11 transmits the series of sound information N that has been acquired to the cloud server 102. In Step S405, the CPU 11 determines whether the user has indicated the end of the performance state. The CPU 11 then returns to Step S402 if the end of the performance has not been indicated, and if the end of the performance has been indicated, it transmits a notification to that effect to the cloud server 102 and terminates the PC processing. Therefore, each time a set of a series of sound information N is finalized, said series of sound information N is transmitted.
The cloud server processing will now be described. When a notification of a transition to the performance start state is received, the CPU 21 starts a series of processes (S502-S506) in Step S501. In Step S502, the CPU 21 receives the series of sound information N transmitted from the PC 101 in Step S404.
In Step S503, the CPU 21 (phrase generation unit 47) generates one phrase with respect to the current series of sound information N. One example of the method is as follows. The CPU 21 analyzes the accents of the series of sound information N from the velocity of each piece of sound information N and extracts from the phrase database 54 a phrase matching said accents and the number of syllables constituting the series of sound information N. In doing so, the extraction range can be narrowed down based on conditions. For example, the phrase database 54 can be categorized according to conditions and configured such that the user can set one or more conditions, such as “noun,” “fruit,” “stationery,” “color,” or “size.”
For example, consider the case in which there are four pieces of sound information N and the condition is “fruit.” If the analyzed accents are “strong/weak/weak/weak,” “durian” is extracted, and if the accents are “weak/strong/weak/weak,” “orange” is extracted. Consider the case in which there are four pieces of sound information N and the condition is “stationery.” If the analyzed accents are “strong/weak/weak/weak,” “compass” is extracted, and if the accents are “weak/strong/weak/weak,” “crayon” is extracted. Setting conditions is not essential.
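A minimal sketch of the accent analysis and phrase extraction is given below; the velocity threshold and the contents of the hypothetical phrase database are assumptions chosen only to reproduce the examples above.

```python
def analyze_accents(series, threshold=80):
    """Classify each piece of sound information N as 'strong' or 'weak' from its velocity."""
    return tuple("strong" if s.velocity >= threshold else "weak" for s in series)

# Hypothetical phrase database keyed by (number of syllables, accent pattern, condition).
PHRASE_DB = {
    (4, ("strong", "weak", "weak", "weak"), "fruit"): "durian",
    (4, ("weak", "strong", "weak", "weak"), "fruit"): "orange",
    (4, ("strong", "weak", "weak", "weak"), "stationery"): "compass",
    (4, ("weak", "strong", "weak", "weak"), "stationery"): "crayon",
}

def generate_phrase(series, condition=None):
    """Extract a phrase whose syllable count and accents match the series of inputs."""
    accents = analyze_accents(series)
    return PHRASE_DB.get((len(series), accents, condition))
```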
In Step S504, the CPU 21 (synthesizing unit 45) synthesizes a singing sound from the generated phrase. The pitch of the singing sound can conform to the pitch of each syllable set in the phrase. In Step S505, the CPU 21 transmits the singing sound to the sound output device 103 via the various I/Fs 29.
In Step S506, the CPU 21 determines whether a notification of an end-of-performance instruction has been received from the PC 101. Then, if a notification of an end-of-performance instruction has not been received, the CPU 21 returns to Step S502. If a notification of an end-of-performance instruction has been received, the CPU 21 transmits the notification of the end-of-performance instruction to the sound output device 103 and terminates the cloud server processing.
The sound output device processing will now be described. In Step S601, when the singing sound is received via the various I/Fs 39, the CPU 31 of the sound output device 103 proceeds to Step S602. In Step S602, the CPU 31 (output unit 46) outputs the singing sound that has been received. The output timing of each syllable depends on the input timing of the corresponding sound information N. Here, as in the first embodiment, the output mode is not limited to reproduction.
In Step S603, the CPU 31 determines whether a notification of an end-of-performance instruction has been received from the cloud server 102. Then, if a notification of an end-of-performance instruction has not been received, the CPU 31 returns to Step S601, and if a notification of an end-of-performance instruction has been received, the CPU 31 terminates the sound output device processing. Therefore, each time the singing sound of a phrase is received, the CPU 31 outputs it.
By the present embodiment, it is possible to output singing sounds in accordance with the timing and intensity of the performance input.
In the present embodiment, since the striking of the head of the drum and its rim (rim shot) results in different timbres, this difference in timbre can also be used as a parameter for the phrase generation. For example, the above-described condition for phrase extraction can be varied between striking the head of the drum and a rim shot.
Sound generated by striking is not limited to that of a drum and can include hand clapping. When an electronic drum is used, the striking position on the head can be detected and the difference in the striking position can be used as a parameter for phrase generation.
In the present embodiment, if the sound information N that can be acquired includes pitch information, the high/low of the pitch can be replaced with an accent, and a similar processing as that of striking the drum can be executed. For example, when “do/mi/do” is played on a piano, a phrase that corresponds to playing “weak/strong/weak” on the drum can be extracted.
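The replacement of pitch high/low with accents could be sketched as follows; treating the highest note as "strong" is an assumed rule for illustration only.

```python
def pitches_to_accents(pitches):
    """Replace pitch height with accents: the highest note(s) are treated as 'strong',
    the remaining notes as 'weak'."""
    highest = max(pitches)
    return tuple("strong" if p == highest else "weak" for p in pitches)

# Playing "do/mi/do" (C4, E4, C4) on a piano yields ("weak", "strong", "weak"),
# which corresponds to striking the drum weak/strong/weak.
accents = pitches_to_accents([60, 64, 60])
```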
In the embodiments described above, if the sound output device 103 is provided with a plurality of singing voices (a plurality of genders, etc.), the singing voice to be used can be switched in accordance with sound information N. For example, if the sound information N is audio data, the singing voice can be switched in accordance with the timbre. If the sound information N is MIDI data, the singing voice can be switched in accordance with the timbre or other parameters set in the PC 101.
In the embodiments described above, it is not essential for the singing sound output system 1000 to include the PC 101, the cloud server 102, and the sound output device 103. It is also not limited to a system that goes through a cloud server. That is, each functional unit shown in
In the embodiments described above, at least some of the functional units shown in
This disclosure was described above based on preferred embodiments, but this disclosure is not limited to the above-described embodiments and includes various embodiments that do not depart from the scope of the invention. Some of the above-described embodiments can be appropriately combined.
The same effects as those of this disclosure can also be achieved by reading out, from a storage medium, a control program represented by software for achieving this disclosure, in which case the program code read from the storage medium realizes the novel functions of this disclosure, so that a non-transitory, computer-readable storage medium that stores the program code constitutes this disclosure. In addition, the program code can be supplied via a transmission medium, or the like, in which case the program code itself constitutes this disclosure. The storage medium in these cases can be, in addition to ROM, a floppy disk, a hard disk, an optical disc, a magneto-optical disk, a CD-ROM, a CD-R, magnetic tape, a non-volatile memory card, etc. The non-transitory, computer-readable storage medium includes storage media that retain programs for a set period of time, such as volatile memory (for example, DRAM (Dynamic Random-Access Memory)) inside a computer system that constitutes a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
By one embodiment of this disclosure, it is possible to output a singing sound at a timing at which sound information is input, in synchronization with the accompaniment.
This application is a continuation application of International Application No. PCT/JP2021/013379, filed on Mar. 29, 2021. The entire disclosures of International Application No. PCT/JP2021/013379 are hereby incorporated herein by reference.
Related application data: Parent — International Application No. PCT/JP2021/013379, filed March 2021; Child — U.S. Application No. 18475309.