The present invention relates to a sound production control apparatus, a sound production control method, and a storage medium by which sound produced in response to a player's operation is controlled on the basis of the player's motion.
Conventionally, a sound production control has been known in which an effect such as a delay is added to sound produced in response to a player's playing operation. Because the delay time that provides the delay is a fixed value, the added effect may become unnatural in some cases when the time between events of the played music changes with the chord progression or the playing tempo. To cope with this, JP 6-110454 A discusses a technique of controlling additional effect characteristics on the basis of play speed information obtained during instrumental playing.
However, a player generally tends to make some motion, for example to keep the tempo, even during a non-playing control operation period, that is, a non-playing operation period in which the player makes no playing operation in the middle of a piece of music, or a non-control operation period in which the player makes no operation for controlling a hi-hat to be closed or opened. For example, a player may make a motion of operating an operating element without producing sound. Herein, such a motion will be referred to as a “ghost motion”. For example, a player controls the hi-hat cymbals to be closed or opened by depressing a pedal during a performance. During a non-control operation period, however, the player may keep the beat as a ghost motion by lifting and lowering the heel while the toe remains on the pedal (during this motion, the heel may occasionally touch the pedal).
However, as described above, a ghost motion is made by the player only to keep the tempo. It is therefore not reflected on the sound production, and it does not change the effect either. For example, assuming that a player makes the ghost motion on the hi-hat pedal described above, the ghost motion produces no play sound and is not reflected on the effect applied to the sound produced by striking other pads. Likewise, in the technique discussed in JP 6-110454 A, a change in the play speed of the music is reflected on the effect, but a ghost motion is not.
A ghost motion is a motion that a player usually makes during a non-playing period to keep an accurate playing tempo or to express accents or the presence of the playing. If a ghost motion driven by the tempo or the “groove” of the music were used in the sound production control, better sound quality could be obtained.
The present invention provides a sound production control apparatus, a sound production control method, and a storage medium, by which a sound production mode is controlled on the basis of a player's motion even during a non-playing control operation period. In addition, the present invention provides a sound production control apparatus, a sound production control method, and a storage medium, by which a playing tempo is estimated from a player's motion even during a non-playing control operation period.
Accordingly, a first aspect of the present invention provides a sound production control apparatus comprising an information obtaining unit that obtains detection information by detecting a player's motion, a sound production unit that produces sound on the basis of the detection information obtained in response to operation for generating a sound trigger in the player's motion by the information obtaining unit, and a control unit that controls a sound production mode of the sound production unit on the basis of the detection information obtained in response to operation for generating no sound trigger in the player's motion by the information obtaining unit.
Accordingly, a second aspect of the present invention provides a sound production control apparatus comprising an information obtaining unit that obtains detection information by detecting a player's motion, a sound production unit that produces sound on the basis of the detection information obtained in response to operation for generating a sound trigger in the player's motion by the information obtaining unit, and an estimation unit that estimates a playing tempo performed by the player on the basis of the detection information obtained in response to operation for generating no sound trigger in the player's motion by the information obtaining unit.
Accordingly, a first aspect of the present invention provides a sound production control method comprising an information obtaining step for obtaining detection information by detecting a player's motion, a sound production step for producing sound on the basis of the detection information obtained in response to operation for generating a sound trigger in the player's motion in the information obtaining step, and a control step for controlling a sound production mode in the sound production step on the basis of the detection information obtained in response to operation for generating no sound trigger in the player's motion in the information obtaining step.
Accordingly, a second aspect of the present invention provides a sound production control method comprising an information obtaining step for obtaining detection information by detecting a player's motion, a sound production step for producing sound on the basis of the detection information obtained in response to operation for generating a sound trigger in the player's motion in the information obtaining step, and an estimation step for estimating a playing tempo performed by the player on the basis of the detection information obtained in response to operation for generating no sound trigger in the player's motion in the information obtaining step.
According to the first aspect of the present invention, it is possible to control the sound production mode on the basis of a player's motion even during a non-playing control operation period.
According to the first aspect of the present invention, it is possible to reflect, on the sound production control, a motion made without intention to operate the operating element.
According to the first aspect of the present invention, it is possible to control sound production caused by a playing motion on the operating element on the basis of a player's motion on the same operating element that is made without intention to operate it.
According to the first aspect of the present invention, it is possible to reflect a movement of a player's body on the sound production control.
According to the first aspect of the present invention, it is possible to reflect, on the sound production control, a beat-keeping motion made without intention to play the instrument.
According to the first aspect of the present invention, it is possible to reflect, on the effect, a player's motion made without intention to play the instrument.
According to the first aspect of the present invention, it is possible to estimate a tempo from a player's motion made without intention to play the instrument.
According to the second aspect of the present invention, it is possible to estimate a playing tempo from a player's motion even during the non-playing control operation period and reflect the estimated playing tempo on the sound production control.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Embodiments of the present invention will now be described with reference to the accompanying drawings.
The display unit 9 is comprised of a liquid crystal display (LCD) or the like to display various types of information. The timer 8 is connected to the CPU 5. A sound system 15 is connected to the sound source circuit 13 through the effect circuit 14. Various I/Fs 11 include a musical instrument digital interface (MIDI) I/F and a communication I/F. The CPU 5 controls the entire instrument. The ROM 6 stores a control program executed by the CPU 5, various table data, and the like. The RAM 7 temporarily stores various types of input information, various flags or buffer data, computation results, and the like. The memory device 10 is comprised of, for example, a nonvolatile memory and stores the aforementioned control program, various musical data, various other data, and the like. The sound source circuit 13 converts playing data input from the operating element 1, preset playing data, and the like into music signals. The effect circuit 14 applies various effects to the music signals input from the sound source circuit 13, and the sound system 15 including a digital-to-analog converter (DAC), an amplifier, a loudspeaker, and the like converts the music signals input from the effect circuit 14 into acoustic sounds. The CPU 5 controls the sound source circuit 13 and the effect circuit 14 on the basis of the detection result of the detection circuit 3 to produce sound from the sound system 15. An example of the setting for the effect of sound produced by a percussion on each of the pads 21 and 26 will be described below.
According to this embodiment, a motion among the player's motions that is not intended for playing (sound production) is detected, and the sound triggered by the player's motion for playing (sound production) is controlled on the basis of the detection result. The sound production control may include control of the added effect. Effect control parameters are determined using at least information that does not correspond to a sound trigger in the player's motion, and the effect is controlled on the basis of the determined effect control parameters. In many cases, a player makes a certain motion in a mode in which no sound trigger is generated, for example for the purpose of tempo keeping, even in a non-playing operation period in which the player is not required to play in the middle of a piece of music. Such a motion that is not intended to produce sound is called a “ghost motion” (hereinafter abbreviated as a “G-motion”). For example, according to this embodiment, when a G-motion is made using the hi-hat pedal portion 29a, this operation is detected, and a delay time for delay sound is set as the effect control parameter in response to the detection of that operation.
The analyzer 34 analyzes the generated pulses to estimate both a beat interval D and a playing tempo TP. Basically, the positions of the pulses correspond to the beats of the music being played, and the time interval between neighboring pulses matches the beat interval D. However, detection errors or deviations in the player's motion may occur, and the motion itself may stop during the detection, in which case the pulse generation also stops. To cope with this, a moving average calculation is employed in the estimation of the beat interval D. For example, the analyzer 34 calculates the time intervals between neighboring pulses from a predetermined number of pulses (for example, ten pulses) obtained immediately before the computation of the beat interval D, and determines the average of these intervals, excluding the maximum and the minimum, as the beat interval D. Since the playing tempo TP is defined as the number of beats per minute, the playing tempo is calculated as “TP = 60000/beat interval (ms).” It should be noted that the number of pulses used to calculate the beat interval D, and the method of calculating the beat interval D from the pulse intervals, are not limited to those described above.
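By way of illustration only, the trimmed moving-average estimate and the tempo formula described above might be sketched in C as follows. The buffer size of ten intervals follows the example in the text; the function names and the sample values are hypothetical and not part of the described apparatus.

```c
/* Illustrative sketch (not the patented implementation): estimate the beat
 * interval D as a trimmed average of the last ten pulse intervals, excluding
 * the maximum and the minimum, then derive the playing tempo TP in BPM. */
#include <stdio.h>

#define NUM_INTERVALS 10

static double estimate_beat_interval_ms(const double intervals_ms[NUM_INTERVALS])
{
    double sum = 0.0, min = intervals_ms[0], max = intervals_ms[0];
    for (int i = 0; i < NUM_INTERVALS; i++) {
        sum += intervals_ms[i];
        if (intervals_ms[i] < min) min = intervals_ms[i];
        if (intervals_ms[i] > max) max = intervals_ms[i];
    }
    return (sum - min - max) / (NUM_INTERVALS - 2);  /* trimmed mean */
}

static double tempo_bpm_from_interval(double beat_interval_ms)
{
    return 60000.0 / beat_interval_ms;  /* one minute = 60000 ms */
}

int main(void)
{
    /* Ten hypothetical intervals around 500 ms (roughly 120 BPM). */
    const double intervals[NUM_INTERVALS] =
        { 498, 502, 505, 495, 500, 510, 490, 500, 503, 497 };
    double d  = estimate_beat_interval_ms(intervals);
    double tp = tempo_bpm_from_interval(d);
    printf("beat interval D = %.1f ms, playing tempo TP = %.1f BPM\n", d, tp);
    return 0;
}
```

With these sample values the sketch yields D = 500 ms and TP = 120 BPM, illustrating how excluding one outlier on each side keeps the estimate stable against an occasional early or late pulse.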
The effect setting unit 35 sets a delay effect as an example of the effect. Specifically, the effect setting unit 35 determines a delay setting value DT and sets a delay time DTT (the setting value of a counter) on the basis of the delay setting value DT. The delay time DTT may be set to a value common to all pads, that is, to all sound production channels. In this embodiment, however, a different delay time DTT is set for each sound production channel, expressed as “DTTn = DT × K(n),” where “K(n)” denotes a correction coefficient determined in advance for each pad (sound production channel). The sound processing unit 36 executes a sound production process in accordance with the output signals from the playing detectors 32 and 33 in real-time reproduction. In doing so, the sound processing unit 36 applies the effect based on the effect control parameters set by the effect setting unit 35 to the playing signals generated from those output signals, and amplifies and outputs the playing signals with the effect.
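A minimal sketch of the per-channel delay-time calculation DTTn = DT × K(n) is shown below. The number of channels and the coefficient values K(n) are placeholder assumptions; the actual coefficients are determined in advance for each pad.

```c
/* Illustrative sketch: per-channel delay time DTTn = DT * K(n).
 * The correction coefficients K(n) below are hypothetical placeholders. */
#define NUM_CHANNELS 4

static const double K[NUM_CHANNELS] = { 1.00, 0.75, 0.50, 1.25 };

static void set_delay_times(double dt, double dtt[NUM_CHANNELS])
{
    for (int n = 0; n < NUM_CHANNELS; n++)
        dtt[n] = dt * K[n];  /* DTTn = DT x K(n) */
}
```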
Then, the CPU 5 detects a percussion on the pad 26 of the kick unit 28, a foot-close sound production operation using the hi-hat unit 29, and percussions on each of the other pads 21 through the functions of the playing detectors 32 and 33 (step S102). Specifically, “THERE IS PERCUSSION” is detected when the output from a sensor of a pad 21 exceeds a predetermined value (threshold value) set for that sensor. A sound production instruction for the foot-close operation of the hi-hat unit 29 is detected as “THERE IS PERCUSSION” when the output of the pedal sensor 29d exceeds the first threshold value th1. Then, the CPU 5 sequentially executes a delay setting value determination process (step S103) and an effect control process (step S104) described below.
Otherwise, if the peak value does not exceed the first threshold value th1 (peak value ≤ th1), the information obtaining unit 30 detects a peak from the detection output of the pedal sensor 29d using the second threshold value th2 (step S207). That is, the G-motion detector 31 of the information obtaining unit 30 determines whether or not the peak value of the detected waveform exceeds the second threshold value th2 (peak value > th2). As a result of the determination, if the peak value exceeds the second threshold value th2 (th1 > peak value > th2), it is determined that a G-motion operation has been detected, and the peak of the waveform W2 (second detection information) is obtained.
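The two-threshold classification of a pedal-sensor peak might look like the following sketch; the enum and function names are illustrative and not part of the described apparatus.

```c
/* Illustrative sketch: classify a detected pedal-sensor peak using the two
 * thresholds th1 and th2 (th1 > th2). */
typedef enum { EVENT_NONE, EVENT_FOOT_CLOSE, EVENT_G_MOTION } pedal_event_t;

static pedal_event_t classify_pedal_peak(double peak, double th1, double th2)
{
    if (peak > th1)
        return EVENT_FOOT_CLOSE;  /* sound trigger: foot-close playing operation */
    if (peak > th2)
        return EVENT_G_MOTION;    /* th1 > peak > th2: ghost motion, no sound trigger */
    return EVENT_NONE;            /* at or below th2: ignored */
}
```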
If it is determined in step S201 that an operation for foot-close playing is detected, the CPU 5, as a function of the analyzer 34, generates a tempo pulse having an edge that rises at the peak timing of the waveform W1, that is, the timing at which the output exceeds the first threshold value th1, and the process advances to step S203 (step S202). Meanwhile, if it is determined in step S207 that a G-motion operation is detected, the CPU 5, as a function of the analyzer 34, reads from the ROM 6 the correction value “t0-tp” corresponding to the current playing tempo TP (the value estimated in the preceding execution of step S205, which is not yet available on the very first execution) (step S208). Then, as a function of the analyzer 34, the CPU 5 generates a tempo pulse having a rising edge delayed by the read correction value “t0-tp” from the peak timing of the waveform W2, that is, the timing at which the output satisfies “th1 > peak value > th2” (step S209). Then, the process advances to step S203.
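As a rough sketch of this pulse-timing decision, and under the assumption that the correction value “t0-tp” is looked up from a small tempo-indexed table stored in ROM (the table contents below are invented purely for illustration):

```c
/* Illustrative sketch: decide the timing of the tempo pulse edge. A foot-close
 * peak produces an edge at the peak time; a G-motion peak produces an edge
 * delayed by a tempo-dependent correction value. Table values are assumed. */
static double correction_ms_for_tempo(double tempo_bpm)
{
    /* Hypothetical table: larger correction at slower tempi. */
    if (tempo_bpm < 80.0)  return 40.0;
    if (tempo_bpm < 120.0) return 25.0;
    return 15.0;
}

static double tempo_pulse_time_ms(double peak_time_ms, int is_g_motion, double tempo_bpm)
{
    return is_g_motion ? peak_time_ms + correction_ms_for_tempo(tempo_bpm)
                       : peak_time_ms;
}
```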
Subsequently, as a function of the analyzer 34, the CPU 5 deletes the oldest value in the register Tm (where “m” denotes a natural number, for example, 0 to 9) and stores the current value of the counter CNT as the latest value in the register Tm (step S203). The values in the register Tm are thus updated in a first-in-first-out manner, so that the latest ten values are stored at all times. Then, the CPU 5 resets the counter CNT as a function of the analyzer 34 (step S204). In this way, the time elapsed from the previous pulse generation to the current pulse generation (that is, the pulse time interval) is recorded in the register Tm. The CPU 5, as a function of the analyzer 34, estimates the beat interval D on the basis of the values in the register Tm and estimates the playing tempo TP on the basis of the beat interval D (step S205). That is, as described above, the CPU 5, as a function of the analyzer 34, calculates the average of the values in the register Tm, excluding the minimum and the maximum, as the beat interval D, and then calculates the playing tempo TP from the beat interval D.
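Steps S203 and S204 could be sketched as a simple first-in-first-out update of the register Tm followed by a counter reset; the array and variable names below are assumptions.

```c
/* Illustrative sketch of steps S203-S204: update the pulse-interval register
 * Tm first-in-first-out with the current counter value, then reset the
 * counter so that it measures the time to the next pulse. */
#define NUM_INTERVALS 10

static long t_reg[NUM_INTERVALS];  /* register Tm: last ten pulse intervals   */
static long cnt;                   /* counter CNT, incremented by the timer   */

static void on_tempo_pulse(void)
{
    for (int m = 0; m < NUM_INTERVALS - 1; m++)  /* discard the oldest value  */
        t_reg[m] = t_reg[m + 1];
    t_reg[NUM_INTERVALS - 1] = cnt;              /* store the latest interval */
    cnt = 0;                                     /* step S204: reset CNT      */
}
```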
Then, in step S206, the CPU 5 calculates the delay setting value DT on the basis of the beat interval D as a function of the effect setting unit 35. A table or a calculation formula that defines the relationship between the beat interval D and the delay setting value DT is stored in advance in the ROM 6 or the like, and the delay setting value DT is determined by referring to that table or formula. Then, the process ends.
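Because the text only states that the relationship between D and DT is defined by a table or formula in the ROM, the sketch below assumes, purely for illustration, a dotted-eighth-note delay (DT = 0.75 × D); the actual relationship may differ.

```c
/* Illustrative sketch: derive the delay setting value DT from the beat
 * interval D, here assuming a dotted-eighth subdivision of the beat. */
static double delay_setting_from_beat(double beat_interval_ms)
{
    return 0.75 * beat_interval_ms;
}
```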
Then, the CPU 5 determines whether or not the sound production of the processing target, such as a pad 21, the kick unit 28, or the hi-hat unit 29, is set to be given the delay effect (delay ON) (step S303). As a result of the determination, if the sound production is set not to be given the delay effect, the CPU 5 advances the process to step S306. Otherwise, if the sound production is set to be given the delay effect, the CPU 5, as a function of the effect setting unit 35, sets the delay time DTTn as “DTTn = DT × K(n)” as described above (step S304). Then, the CPU 5 asserts a delay sound flag (step S305) and a sound production flag (step S306), and the process ends.
First, the CPU 5 sets the channel counter ch(n) to “1” (step S401) and determines whether or not the sound production flag is asserted (step S402). As a result of the determination, if the sound production flag is not asserted, the CPU 5 advances the process to step S405 because no sound needs to be produced. Otherwise, if the sound production flag is asserted, the CPU 5 executes the sound production process by generating, in the sound source circuit 13, a sound trigger for the sound production channel ch that is the current processing target (step S403), resets the sound production flag (step S404), and then advances the process to step S405.
In step S405, the CPU 5 determines whether or not the delay effect is to be applied (delay ON). If the delay effect is not to be applied, the process advances to step S413. Otherwise, the CPU 5 determines whether or not the delay sound flag is asserted (step S406). As a result of the determination, if the delay sound flag is not asserted, no delay sound needs to be produced, and the CPU 5 advances the process to step S413. Otherwise, if the delay sound flag is asserted, the CPU 5 decrements the delay time DTTn by “1” to update it (step S407).
Then, in step S408, the CPU 5 determines whether or not the delay time DTTn has reached zero (DTTn = 0). As a result of the determination, if the delay time DTTn has not reached zero, the CPU 5 advances the process to step S411 because it is not yet the timing at which the delay sound is to be produced. Otherwise, if the delay time DTTn has reached zero, the CPU 5 produces the delay sound (step S409) and increments the repetition counter DCNT by “1” to update it (step S410). Then, in step S411, the CPU 5 determines whether or not the updated repetition counter DCNT has reached a delay repetition number (for example, three times (DCNT = 3)). If the condition DCNT = 3 is not satisfied, the process advances to step S413. Meanwhile, if the condition DCNT = 3 is satisfied, the CPU 5 resets both the repetition counter DCNT and the delay sound flag (step S412), and the process advances to step S413. Accordingly, each time the delay time DTT elapses after sound is produced in response to a percussion, the delay sound is generated, up to the delay repetition number. It should be noted that the number of delay sound productions may be set to one or more.
Subsequently, the CPU 5 increments the channel counter ch(n) by “1” to update it (step S413) and determines whether or not the channel counter ch(n) has reached the channel number parameter “max” (ch(n) = max) (step S414). As a result of the determination, if the condition “ch(n) = max” is not satisfied, the CPU 5 returns the process to step S402. If the condition “ch(n) = max” is satisfied, the counter CNT is incremented by “1” to update it (step S415). Then, the process ends.
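The periodic per-channel routine of steps S401 through S415 might be organized as in the following sketch. The channel structure, the tick period, the reload of DTTn between repetitions, and the sound_on()/delay_on() stubs are assumptions made for illustration rather than the actual firmware.

```c
/* Illustrative sketch of the periodic routine (steps S401-S415): produce any
 * pending sound, count down each channel's delay time, emit a delay sound
 * when it reaches zero, and stop after a fixed repetition number. */
#define NUM_CHANNELS 4
#define DELAY_REPEAT 3

struct channel {
    int  sound_flag;     /* a percussion is waiting to be sounded           */
    int  delay_enabled;  /* delay effect ON for this channel                */
    int  delay_flag;     /* delay sound currently scheduled                 */
    long dtt;            /* remaining delay time DTTn, in timer ticks       */
    long dtt_reload;     /* DTTn = DT * K(n), reloaded between repetitions  */
    int  dcnt;           /* repetition counter DCNT                         */
};

static void sound_on(int ch) { (void)ch; /* assumed: trigger normal sound   */ }
static void delay_on(int ch) { (void)ch; /* assumed: produce one delay echo */ }

static long cnt;         /* free-running counter CNT used for beat timing   */

void timer_tick(struct channel chs[NUM_CHANNELS])
{
    for (int ch = 0; ch < NUM_CHANNELS; ch++) {
        struct channel *c = &chs[ch];
        if (c->sound_flag) {                       /* steps S402-S404 */
            sound_on(ch);
            c->sound_flag = 0;
        }
        if (c->delay_enabled && c->delay_flag) {   /* steps S405-S407 */
            if (--c->dtt == 0) {                   /* steps S408-S410 */
                delay_on(ch);
                c->dtt = c->dtt_reload;
                if (++c->dcnt >= DELAY_REPEAT) {   /* steps S411-S412 */
                    c->dcnt = 0;
                    c->delay_flag = 0;
                }
            }
        }
    }
    cnt++;                                         /* step S415 */
}
```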
According to this embodiment, the sound production control is performed on the basis of the information (the peak timing of the waveform W2) obtained by the information obtaining unit 30 (or the G-motion detector 31 included therein) at least from operation that generates no sound trigger, out of the detection information obtained by detecting a player's motion. Therefore, even during a non-playing control operation period, the sound production mode can be controlled by the player's motion. In particular, since the sound production mode control is a control for applying a sound effect such as a delay sound, the non-playing player's motion can be reflected on the sound production. In addition, since the detection information is obtained from the operation of the hi-hat pedal portion 29a as an operating element, an operation that is not intended to play the operating element can be reflected on the sound production control. In particular, since the G-motion is influenced by the accents, presence, tempo, or “groove” of the music, better sound quality can be obtained by using the G-motion in the sound production control.
Since the effect based on the G-motion on the hi-hat pedal portion 29a is also applied to the sound produced by operating the hi-hat pedal portion 29a, the sound production based on a playing motion on the operating element can be controlled on the basis of a player's motion on the same operating element made without intention to play it. In addition, the beat interval D is estimated from information that is not used as a sound trigger, and the sound production is controlled on the basis of the estimated beat interval D, so a beat-keeping motion made without intention to play can be reflected on the sound production control. Furthermore, since the playing tempo TP is calculated on the basis of the estimated beat interval D, the playing tempo TP can be estimated from a player's motion made without intention to play. Therefore, even during a non-playing control operation period, the playing tempo TP can be estimated from a player's motion and reflected on the sound production control.
It should be noted that, in this embodiment, the G-motion is detected from the movement of the hi-hat pedal portion 29a as an operating element. However, the present invention is not limited thereto; any information that generates no sound trigger, out of the detection information detected from the player's motion, may be treated as the G-motion. Therefore, the operating element used to detect the G-motion may be different from the operating element, targeted for control, that generates the sound trigger.
It should be noted that the device for detecting the G-motion may be installed at a position where the G-motion of the player's body is well reflected. In addition, the G-motion does not necessarily have to be detected from operation of an operating element. For example, the G-motion may be detected by capturing a moving image of a part of the player's body with a camera, analyzing the captured image, and reflecting the amount and the direction of the motion.
Although the effect is set on the basis of the motion of the hi-hat pedal portion 29a in the embodiment described above, the effect may instead be set by detecting a G-motion on the basis of the motion of the kick pedal 24. In addition, in the case of a pad 21 that is struck directly, such as a snare drum pad, a motion may be classified as either a percussion or a G-motion on the basis of the magnitude of the vibration. In this case, a light strike whose signal level is equal to or lower than a predetermined value, and which therefore causes no sound trigger, may be regarded as a G-motion.
It should be noted that the present invention may also be applied to instruments other than percussion instruments. For example, in the case of a keyboard instrument, a G-motion may be detected on the basis of a pedal movement, and control parameters such as the beat interval D, the delay time DTTn, and the playing tempo TP may be determined from that movement. The present invention is not limited to musical instruments and may also be applied to sound production control in music games or the like. In addition, the sound production control mode implemented in the present invention is not limited to the delay effect; a volume or a tone may be controlled instead. The present invention can therefore also be applied to the determination of control parameters such as the rate of a flanger or phaser effect, the setting of a low-frequency oscillator (LFO) in a wah effect, the period change time of a tremolo or a rotary speaker, the distortion rate of a distortion effect, and the cut-off frequency of a filter. It should be noted that the effect level (degree of effect) may be further controlled on the basis of the peak value H of the waveform W2.
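One possible mapping from the peak value H to an effect level is sketched below; the normalization range and the 0-127 output scale are assumptions, not part of the described apparatus.

```c
/* Illustrative sketch: scale an effect level from the G-motion peak value H,
 * clamped and normalized between assumed minimum and maximum peak values. */
static int effect_level_from_peak(double h, double h_min, double h_max)
{
    if (h <= h_min) return 0;
    if (h >= h_max) return 127;
    return (int)(127.0 * (h - h_min) / (h_max - h_min));
}
```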
In the embodiment of the present invention, the control parameters are estimated or determined on the basis of a detection result of a player's G-motion during a non-playing control operation period and are reflected on the sound production control. The present invention, however, is fully implemented as long as the control parameters are estimated or determined on the basis of at least such a detection result. Therefore, the embodiment of the present invention also covers a case in which the control parameters are estimated or determined on the basis of not only a detection result of a player's G-motion during a non-playing control operation period but also a player's motion during a playing control operation period in which playing operation or controlling operation is performed, and in which the control parameters are reflected on the sound production control.
It should be noted that the specific configuration of the detection mechanism for detecting operation that generates a sound trigger and operation that generates no sound trigger is not limited to the one described above, and modified configurations may also be employed.
It should be noted that similar effects may be achieved by supplying a storage medium storing the control program as software according to the present invention to the apparatus of the present invention or to a computer and causing it to read out the program. In this case, the program codes themselves read from the storage medium implement the novel functions according to the present invention, and the storage medium storing the program codes constitutes the present invention. The program codes may also be supplied via a transmission medium or the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2016-048197 filed on Mar. 11, 2016 and the benefit of Japanese Patent Application No. 2017-005823 filed on Jan. 17, 2017, which are hereby incorporated by reference herein in their entireties.