Musical note generation device, electronic musical instrument, method, and storage medium

Information

  • Patent Grant
  • Patent Number
    10,204,610
  • Date Filed
    Monday, December 18, 2017
  • Date Issued
    Tuesday, February 12, 2019
Abstract
A musical note generation device includes at least one processor that performs: a process of generating convolved sound waveform data by convolving first sound waveform data corresponding to pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a process of generating third sound waveform data by respectively reducing, among frequency components included in the generated convolved sound waveform data, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and a process of outputting piano sound waveform data generated on the basis of the generated third sound waveform data.
Description
BACKGROUND OF THE INVENTION
Technical Field

The present invention relates to a musical note generation device, an electronic musical instrument, a method, and a storage medium.


Background Art

In an acoustic piano, when the damper pedal is not depressed, dampers arranged corresponding to the keys contact the strings while the keys are not depressed and are lifted from contact with the strings when the keys are pressed. Moreover, hammers that are actuated by pressing the keys strike the strings. Meanwhile, when the damper pedal is depressed, the dampers that provide damping for the keys are all lifted. In this state, if any of the keys are pressed and the string corresponding to that key is struck, a note corresponding to the vibration of that string is produced, and all of the other strings resonate with the vibration of that string and produce resonant tones. The vibration of the string that was struck as well as the resonance of the resonant tones continue for a long period of time even after the key is released. These resonant tones are one of the characterizing features of piano sounds.


In conventional electronic pianos, simulating the resonant tones of an acoustic piano is typically accomplished with signal processing techniques involving a combination of feedback filters such as reverbs and resonators, for example.


Moreover, one conventional example of an approach to reproducing the complex sound image profile of string resonance is the following resonant tone sound image generation device (see Patent Document 1, for example). A resonant tone generator includes string resonance circuit groups in which a plurality of string resonance circuits are grouped together. Each string resonance circuit is a digital filter having a resonant frequency corresponding to harmonics of musical notes. When a musical note signal is input by pressing a key, a string resonance signal corresponding to the musical note signal is input to a convolution processor and convolved with a pre-measured impulse response. The convolved string resonance signal is then synthesized by an adder and output. The respective output signals from the string resonance circuit groups are convolved with impulse responses from mutually different sound source positions, defined as if located on the bridge of an acoustic piano occupying the same space.


Patent Document 1: Japanese Patent Application Laid-Open Publication No. 2007-193129


However, in the conventional technology based on the feedback filter signal processing techniques described above, it is difficult to achieve a realistic sound equivalent to the resonant tones of an acoustic piano.


One advantage of the present invention lies in making it possible to generate natural resonant tones similar to those of an acoustic piano.


Accordingly, the present invention is directed to a scheme that substantially obviates one or more of the problems due to limitations and disadvantages of the related art.


SUMMARY OF THE INVENTION

Additional or separate features and advantages of the invention will be set forth in the descriptions that follow and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.


To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, in one aspect, the present disclosure provides a musical note generation device, including: a plurality of keys, the plurality of keys respectively being associated with pitch information; and at least one processor, the at least one processor performing processes including: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to the pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process.


In another aspect, the present disclosure provides a method performed by at least one processor in an electronic musical instrument, including: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process.


In another aspect, the present disclosure provides a non-transitory storage medium having stored therein instructions that cause at least one processor in an electronic musical instrument to perform the following processes: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process.


In another aspect, the present disclosure provides a musical note generation device, including: a plurality of keys, the plurality of keys respectively being associated with pitch information; and at least one processor, the at least one processor performing processes including: an attenuated sound waveform data generation process of generating attenuated sound waveform data by respectively reducing, among frequency components included in first sound waveform data corresponding to the pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process.


In another aspect, the present disclosure provides a method performed by at least one processor in an electronic musical instrument, including: an attenuated sound waveform data generation process of, when a damper pedal is depressed, generating attenuated sound waveform data by reducing, among frequency components included in first sound waveform data corresponding to pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process.


In another aspect, the present disclosure provides a non-transitory storage medium having stored therein instructions that cause at least one processor in an electronic musical instrument to perform the following processes: an attenuated sound waveform data generation process of, when a damper pedal is depressed, generating attenuated sound waveform data by reducing, among frequency components included in first sound waveform data corresponding to pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process.


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS

The detailed descriptions below are intended to be read with reference to the following figures in order to gain a deeper understanding of the present application.



FIG. 1 is a block diagram illustrating an example of an embodiment of an electronic musical instrument.



FIG. 2 is a block diagram illustrating an embodiment of a damper sound effect generator.



FIG. 3 illustrates an example of the characteristics of a comb filter that attenuates the fundamental resonant tones of strings in recorded piano sounds.



FIG. 4 is a block diagram illustrating an example of an embodiment of an FFT convolver.



FIG. 5 is an explanatory drawing of a method of recording impulse response waveform data (second sound waveform data).



FIGS. 6A to 6D are a first example of flowcharts illustrating examples of processes in the electronic musical instrument.



FIGS. 7A and 7B are a second example of flowcharts illustrating examples of processes in the electronic musical instrument.



FIG. 8 is a first example of a block diagram illustrating another embodiment of a damper sound effect generator.



FIG. 9 is a second example of a block diagram illustrating another embodiment of a damper sound effect generator.





DETAILED DESCRIPTION OF EMBODIMENTS

An embodiment of the present invention will be described in detail below with reference to figures. The present embodiment relates to an electronic musical instrument that simulates an acoustic piano. Waveform data (first sound waveform data) is created by recording the sounds produced when the keys of an acoustic piano are pressed, and this data is stored in a waveform memory in a piano sound source (an integrated circuit). Then, when the keys of an electronic piano are pressed, piano sound waveform data is generated by reading the waveform data corresponding to the pitches of the pressed keys from the waveform memory.


Moreover, in the present embodiment, to simulate the resonance from string vibration that occurs when the damper pedal of an acoustic piano is depressed, impulse response waveform data (second sound waveform data) for resonant tones obtained by causing the acoustic piano to vibrate while depressing the damper pedal of the acoustic piano is recorded in advance and stored in a memory of the electronic musical instrument. Then, a convolution operation process of convolving the first sound waveform data corresponding to pressed keys with the impulse response waveform data (second sound waveform data) is performed, and resonant tone waveform data (third sound waveform data) is generated. Next, piano sound waveform data is generated by mixing together the first sound waveform data and the resonant tone waveform data (third sound waveform data) in a ratio corresponding to the amount by which the damper pedal is depressed. Then, the generated piano sound waveform data is output.


The impulse response waveform data (second sound waveform data) recorded while the damper pedal is depressed is recorded while all of the strings are in a free state (that is, a state in which all of the strings can resonate and vibrate to produce sound). Therefore, the impulse response waveform data (second sound waveform data) includes frequency characteristics for a state equivalent to when all of the strings are producing sound and also includes harmonic characteristics of strings producing sound due to keypresses. As a result, when the first sound waveform data produced from the waveform memory when a key is pressed is convolved with the impulse response waveform data (second sound waveform data) including these frequency characteristics, the waveform data components of the pitch corresponding to the keypress that are included in both types of waveform data are undesirably emphasized, which produces unnatural resonant tones.


As a countermeasure, in the present embodiment, a process of convolving the first sound waveform data produced from the waveform memory when a key is pressed with the abovementioned impulse response waveform data (second sound waveform data) is performed to generate convolved sound waveform data. Then, a filtering calculation process is performed to generate the resonant tone waveform data (third sound waveform data; attenuated sound waveform data) by respectively reducing, from the frequency components included in the convolved sound waveform data, the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch corresponding to the keypress. In this way, the present embodiment makes it possible to generate natural resonant tones.
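
To make the overall signal flow concrete, the following is a minimal Python sketch of the approach described above, not the claimed implementation itself: a dry note waveform is convolved with a damper-down impulse response, the result is passed through a feedforward comb filter tuned to notch out the note's own fundamental and harmonics, and the resonance is then mixed back with the dry note according to a pedal-depth value. The sampling rate, the negative comb coefficient, and all variable names are illustrative assumptions.

```python
import numpy as np

FS = 44100  # assumed sampling rate

def comb_notch(x, period_samples, alpha=-0.9):
    """Feedforward comb y[n] = x[n] + alpha * x[n - period_samples].
    With alpha < 0, the magnitude response dips at integer multiples of
    FS / period_samples, i.e., at the fundamental and its harmonics."""
    y = np.copy(x)
    y[period_samples:] += alpha * x[:-period_samples]
    return y

def damper_resonance(dry, impulse_response, f0, depth=0.5):
    """Convolve the dry note with the damper-down IR, notch out the note's
    own fundamental and harmonics, and mix by the pedal-depth value."""
    wet = np.convolve(dry, impulse_response)[: len(dry)]
    K = int(round(FS / f0))            # delay of roughly one pitch period
    resonance = comb_notch(wet, K)
    return dry + depth * resonance     # piano sound with the damper effect added

# toy demonstration with synthetic data
t = np.arange(FS) / FS
dry = np.exp(-3 * t) * np.sin(2 * np.pi * 440.0 * t)      # decaying A4 tone
ir = np.random.default_rng(0).normal(0.0, 0.01, FS // 2)  # stand-in impulse response
out = damper_resonance(dry, ir, f0=440.0, depth=0.6)
```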



FIG. 1 is a block diagram illustrating an example of an embodiment of an electronic musical instrument 100. The electronic musical instrument 100 includes a damper sound effect generator 101; a piano sound source 102; a central processing unit (CPU) 103; a random-access memory (RAM) 104; multipliers 105 and 106; adders 107 and 108; a general-purpose input/output (GPIO) 130 to which a keyboard 140, a damper pedal 150, and a switch unit 160 are connected; and a system bus 170. The damper sound effect generator 101, the piano sound source 102, the multipliers 105 and 106, and the adders 107 and 108 may be implemented using a single-chip or multi-chip digital signal processor (DSP) integrated circuit, for example.


The keyboard 140 is a keyboard with which a performer inputs a piano performance and includes 88 keys, for example.


The damper pedal 150 is depressed by the performer to create an effect simulating the behavior of the damper pedal in an acoustic piano.


The switch unit 160 includes general-purpose switches such as a power switch, a volume switch, and tone color selection switches as well as a switch for specifying the amount of damper pedal effect to apply, a switch for changing the temperament, a switch for changing the master tuning, and the like.


The GPIO 130 detects keypress and key release information from the keys in the keyboard 140, ON (depressed) and OFF (not depressed) information from the damper pedal 150, and operation information from the switches in the switch unit 160 and notifies the CPU 103 of this information via the system bus 170.


The CPU 103, in accordance with control programs stored in the memory 104, executes processes for handling information received from the performer via the GPIO 130, including a process for keypress and key release information from the keyboard 140 and a process for ON/OFF information from the damper pedal 150, as well as processes triggered by operation of the switch unit 160 such as a process for power ON information, a process for volume change information, a process for tone color selection information, a process for changing the temperament, a process for master tuning change information, and a process for specifying the amount of damper pedal effect to apply, for example. As a result of these processes, the CPU 103 outputs performance information 117 that includes note-on information, note-off information, tone color selection information, temperament change information, master tuning change information, and the like to the piano sound source 102 via the system bus 170. Moreover, in the present embodiment, this performance information 117 includes damper pedal depression information 118. This damper pedal depression information 118 is also sent to the damper sound effect generator 101. Furthermore, the CPU 103 outputs volume change information to analog amplifiers (not illustrated in the figure). The CPU 103 also outputs the following to the damper sound effect generator 101 via the system bus 170: a pitch control signal 119, a sympathetic effect reduction amount configuration signal 120, and impulse response waveform data (second sound waveform data) 121 that is read from the memory 104. In addition, the CPU 103 outputs a damper pedal effect application amount configuration signal 122 to the multipliers 105 and 106 via the system bus 170.


The memory 104 stores the control programs for operating the CPU 103 and also temporarily stores various types of working data while programs are executed. The memory 104 also stores the impulse response waveform data (second sound waveform data) 121.


The piano sound source 102 stores, in an internal waveform memory (not illustrated in the figure), waveform data obtained by recording sounds produced by pressing the keys of an acoustic piano. In accordance with performance information 117 indicating a note-on instruction from the CPU 103, the piano sound source 102 allocates a free channel from among time-divided sound production channels (or, if there are no free channels, a channel obtained by silencing the oldest channel) and then uses this sound production channel to start reading waveform data for the specified pitch from the internal waveform memory (not illustrated in the figure). Upon receiving performance information 117 indicating a note-off instruction from the CPU 103, the piano sound source 102 stops reading the waveform data from the waveform memory to the sound production channel currently producing sound for the specified pitch and then frees that sound production channel. However, when damper pedal depression information 118 indicating that the damper pedal is ON (depressed) is input, even if performance information 117 indicating a note-off instruction is input, the process of reading the waveform data from the waveform memory continues rather than stops.


Here, the piano sound source 102 respectively stores, in the waveform memory, left channel waveform data and right channel waveform data obtained by recording the sounds produced by pressing the keys of an acoustic piano in stereo. Moreover, upon receiving performance information 117 indicating a note-on instruction, the piano sound source 102 respectively allocates a sound production channel for the left channel and a sound production channel for the right channel and then uses the allocated sound production channels to start respectively reading left channel waveform data and right channel waveform data from the waveform memory. The piano sound source 102 processes, in a time-divided manner and individually for the left channel and the right channel, the reading of a plurality of sets of waveform data using a plurality of sound production channels corresponding to a plurality of note-on instructions. The piano sound source 102 outputs the waveform data corresponding to the note-on instructions and currently being read for the left channel to the adder 107 and the damper sound effect generator 101 as first sound waveform data (L-ch) 109, and similarly outputs the waveform data corresponding to the note-on instructions and currently being read for the right channel to the adder 108 and the damper sound effect generator 101 as first sound waveform data (R-ch) 110.
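
The channel handling described above can be sketched roughly as follows. The class layout and the policy of silencing the oldest channel are assumptions used only to illustrate the note-on, note-off, and damper-hold behavior, not the sound source's actual firmware.

```python
from collections import OrderedDict

class SoundChannelPool:
    """Toy model of the time-divided sound production channels."""

    def __init__(self, num_channels=64):
        self.free = list(range(num_channels))
        self.active = OrderedDict()            # note -> channel; order = age

    def note_on(self, note):
        old = self.active.pop(note, None)      # re-striking a key frees its old channel
        if old is not None:
            self.free.append(old)
        if self.free:
            ch = self.free.pop()
        else:                                  # no free channel: silence the oldest one
            _, ch = self.active.popitem(last=False)
        self.active[note] = ch                 # channel starts reading waveform data
        return ch

    def note_off(self, note, damper_on):
        if damper_on:                          # pedal held: keep reading the waveform
            return
        ch = self.active.pop(note, None)
        if ch is not None:
            self.free.append(ch)               # stop reading and free the channel

pool = SoundChannelPool()
pool.note_on(60)
pool.note_off(60, damper_on=True)              # sound continues while the pedal is down
```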


The damper sound effect generator 101 performs a process of convolving the first sound waveform data (L-ch) 109 input from the piano sound source 102 with left channel impulse response waveform data (second sound waveform data) 121 read from the memory 104. The damper sound effect generator 101 then performs a filtering calculation process that respectively reduces, from the frequency components included in the convolved sound waveform data for the left channel generated by the convolution operation process, the amplitudes of the respective frequency components of fundamental tones and harmonics of pitches corresponding to note numbers currently being produced, and outputs the resulting third sound waveform data (L-ch) 113 to the multiplier 105. Similarly, the damper sound effect generator 101 performs a process of convolving the first sound waveform data (R-ch) input from the piano sound source 102 with right channel impulse response waveform data (second sound waveform data) 121 read from the memory 104. The damper sound effect generator 101 then performs a filtering calculation process that respectively reduces, from the frequency components included in the convolved sound waveform data for the right channel generated by the convolution operation process, the amplitudes of the respective frequency components of fundamental tones and harmonics of pitches corresponding to note numbers currently being produced, and outputs the resulting third sound waveform data (R-ch) 114 to the multiplier 106.


Here, by operating a switch in the switch unit 160, the performer can specify the amount of resonant tone effect to apply when the damper pedal 150 is depressed, and the CPU 103 outputs the specified amount of effect as the damper pedal effect application amount configuration signal 122. On the basis of this damper pedal effect application amount configuration signal 122, the multipliers 105 and 106 respectively control the amplitudes of the third sound waveform data (L-ch) 113 and the third sound waveform data (R-ch) 114 output from the damper sound effect generator 101 in order to determine the respective amounts of resonant tone for the left channel and the right channel.


The adder 107 adds together the first sound waveform data (L-ch) 109 output from the piano sound source 102 and the third sound waveform data (L-ch) 113 output from the damper sound effect generator 101 via the multiplier 105, and then outputs the resulting left channel piano sound waveform data (L-ch) 115 to which the damper pedal effect has been applied. Similarly, the adder 108 adds together the first sound waveform data (R-ch) 110 output from the piano sound source 102 and the third sound waveform data (R-ch) 114 output from the damper sound effect generator 101 via the multiplier 106, and then outputs the resulting right channel piano sound waveform data (R-ch) 116 to which the damper pedal effect has been applied. The piano sound waveform data (L-ch) 115 and the piano sound waveform data (R-ch) 116 are then respectively output to digital-to-analog (D/A) converters, analog amplifiers, and speakers (not illustrated in the figure) to be reproduced as stereo piano sounds.



FIG. 2 is a block diagram illustrating an embodiment of the damper sound effect generator 101 illustrated in FIG. 1. The damper sound effect generator 101 includes a damper sound effect generator (L-ch) 201 that processes the left channel and a damper sound effect generator (R-ch) 202 that processes the right channel. The damper sound effect generator (L-ch) 201 performs processes for generating damper sound effects on the first sound waveform data (L-ch) 109 input from the piano sound source 102 illustrated in FIG. 1, and then outputs the resulting third sound waveform data (L-ch) 113 illustrated in FIG. 1 to the multiplier 105. Similarly, the damper sound effect generator (R-ch) 202 performs processes for generating damper sound effects on the first sound waveform data (R-ch) 110 input from the piano sound source 102 illustrated in FIG. 1, and then outputs the resulting third sound waveform data (R-ch) 114 illustrated in FIG. 1 to the multiplier 106.


The damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 have the same configuration except that the inputs and outputs respectively correspond to the left channel and the right channel; therefore, the following description focuses only on the damper sound effect generator (L-ch) 201. The damper sound effect generator (L-ch) 201 includes a convolution processor 204 and a filter processor 203.


When the performer depresses the damper pedal 150 illustrated in FIG. 1, the convolution processor 204 illustrated in FIG. 2 performs a process of convolving the first sound waveform data (L-ch) 109 input from the piano sound source 102 with the left channel impulse response waveform data (second sound waveform data) 121 read from the memory 104, and thereby generates the convolved sound waveform data for the left channel.


In order to implement this process, the convolution processor 204 includes a Fast Fourier transform (FFT) convolver 213, a multiplier 214 arranged on the input side of the FFT convolver 213, a multiplier 215 arranged on the output side of the FFT convolver 213, and envelope generators (EGs) 216 and 217 that respectively generate scaling factor change information for the multipliers 214 and 215.


The FFT convolver 213 stores, in an internal register, impulse response data corresponding to impulse responses obtained by sampling string resonance and body characteristics in an acoustic piano while depressing the damper pedal. Furthermore, the FFT convolver 213 performs a process of convolving the input first sound waveform data (L-ch) 109 with this impulse response data and outputs the resulting convolved sound waveform data.


Here, in order to produce the behavior for when the performer depresses the damper pedal 150 illustrated in FIG. 1, the convolution processor 204 utilizes the multipliers 214 and 215 arranged before and after the FFT convolver 213 as well as the EGs 216 and 217 that control the multiplication factors of the multipliers 214 and 215 to manipulate the volume before and after the FFT convolver 213. When the performer depresses the damper pedal 150, the CPU 103 inputs damper pedal depression information 118 indicating that the damper pedal is ON to the EGs 216 and 217 via the system bus 170. Conversely, when the performer releases the damper pedal 150, the CPU 103 inputs damper pedal depression information 118 indicating that the damper pedal is OFF to the EGs 216 and 217 via the system bus 170. The EGs 216 and 217 generate envelope values for when the damper pedal is ON and envelope values for when the damper pedal is OFF in accordance with the damper pedal depression information 118 and then respectively apply these values to the multipliers 214 and 215. In this way, the amount of damper pedal effect for when the damper pedal is ON or OFF is controlled with the multipliers 214 and 215. In an acoustic piano, the impulse length of the resonance from string vibration is relatively long (several dozen seconds, for example), and therefore here, if only the multiplier 215 on the output side of the FFT convolver 213 is present, any residual sound in the FFT convolver 213 could potentially be output again. To prevent this, the multiplier 214 is arranged on the input side of the FFT convolver 213 as well to control the amount of damper pedal effect.
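
A minimal sketch of how the two gain stages around the convolver might be driven by the pedal state is shown below; the linear ramp, its rate, and the block-based processing are assumptions made for illustration.

```python
import numpy as np

class LinearEG:
    """Ramps a gain linearly toward a target value, one block at a time."""

    def __init__(self, rate_per_sample=1e-3):
        self.value = 0.0
        self.target = 0.0
        self.rate = rate_per_sample

    def set_pedal(self, pedal_down):
        self.target = 1.0 if pedal_down else 0.0

    def next_block(self, n):
        step = np.clip(self.target - self.value, -self.rate * n, self.rate * n)
        ramp = self.value + np.linspace(step / n, step, n)
        self.value = float(ramp[-1])
        return ramp

# EG 216 drives multiplier 214 (before the convolver), EG 217 drives
# multiplier 215 (after it); gating the input as well keeps the long
# residual tail inside the convolver from re-emerging after pedal release.
eg_in, eg_out = LinearEG(), LinearEG()
eg_in.set_pedal(True)
eg_out.set_pedal(True)
block = np.random.default_rng(1).normal(size=256)
gated_in = block * eg_in.next_block(len(block))
wet = gated_in                                   # placeholder for the convolver output
gated_out = wet * eg_out.next_block(len(wet))
```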


The filter processor 203 includes comb filters 206 that are connected in series and individually numbered from #0 to #87. In the filter processor 203, first, the convolved sound waveform data for the left channel output from the convolution processor 204 is input to the #0 comb filter 206. The output from the #0 comb filter 206 is then input to the #1 comb filter 206. The output from the #1 comb filter 206 is then input to the #2 comb filter 206. The remaining comb filters 206 are configured in a similar manner, and the output from the final #87 comb filter 206 is output to the multiplier 105 illustrated in FIG. 1 as the third sound waveform data (L-ch) 113.


Each of the comb filters 206 numbered from #0 to #87 and connected in series as described above generates note number-specific attenuated sound waveform data by respectively reducing, from the frequency components included in the waveform data input to it, the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch for the note number that, among the one or more note numbers specified in the first sound waveform data (L-ch) 109, corresponds to the key number assigned to that comb filter 206, and then inputs the generated data to the comb filter 206 in the next stage.
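
A sketch of this series arrangement, under the assumption that an inactive comb filter has α = 0 and therefore passes its input through unchanged, might look like the following (the delay values and the choice of a negative α are illustrative):

```python
import numpy as np

def feedforward_comb(x, K, alpha):
    """y[n] = x[n] + alpha * x[n - K]; alpha == 0 is a pass-through."""
    if alpha == 0.0 or K <= 0:
        return x
    y = np.copy(x)
    y[K:] += alpha * x[:-K]
    return y

def series_comb_bank(x, delays, alphas):
    """Run the 88 comb filters (#0 to #87) one after another, as in FIG. 2."""
    for K, alpha in zip(delays, alphas):
        x = feedforward_comb(x, K, alpha)
    return x

# only the combs whose keys are currently sounding get a non-zero alpha
delays = [int(round(44100 / (27.5 * 2 ** (k / 12)))) for k in range(88)]
alphas = [0.0] * 88
alphas[48] = -0.9                         # e.g., key #48 (A4) sounding; value assumed
signal = np.random.default_rng(2).normal(size=44100)
filtered = series_comb_bank(signal, delays, alphas)
```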


As illustrated for the #0 comb filter 206 in FIG. 2, in order to perform this filtering calculation process, each of the comb filters 206 includes a delayer 208 (indicated by “Delay” in the figure) that delays the input waveform data by a specified delay length (number of samples; hereinafter, this delay length is represented by K), a multiplier 209 that multiplies the output of the delayer 208 by a scaling factor α, and an adder 210 that adds together the input waveform data and the output of the multiplier 209 and then outputs the addition results as the note number-specific attenuated sound waveform data. The comb filter 206 further includes a register Reg#1 (211) that stores the pitch control signal 119 specified via the system bus 170 by the CPU 103 illustrated in FIG. 1 and supplies the delay length K to the delayer (Delay) 208, as well as a register Reg#2 (212) that stores the sympathetic effect reduction amount configuration signal 120 similarly specified via the system bus 170 by the CPU 103 and supplies the scaling factor α to the multiplier 209.


The comb filter 206 configured as described above thus forms a feedforward comb filter. Letting the input be x[n] and the output be y[n], the comb filter 206 satisfies equation 1 below.

y[n] = x[n] + αx[n−K]  <Eq. 1>


Given equation 1, the transfer function for the comb filter 206 can be defined as shown below in equation 2.

Y(z) = (1 + αz^(−K))X(z)  <Eq. 2>


To obtain the frequency characteristics of this discrete-time system, the transfer function given by equation 2 is first expressed in the z-domain as equation 3 below, after which the substitution z = e^(jω) (where e is the base of the natural logarithm, j is the imaginary unit, and ω is the angular frequency) is made.

H(z) = Y(z)/X(z) = 1 + αz^(−K) = (z^K + α)/z^K  <Eq. 3>

Then, using Euler's formula, equation 3 can be rewritten as equation 4.

H(e^(jω)) = {1 + α cos(ωK)} − jα sin(ωK)  <Eq. 4>

Therefore, from equation 4, the frequency-amplitude response of the comb filter 206 can also be expressed by equation 5.

|H(e^(jω))| = √((1 + α^2) + 2α cos(ωK))  <Eq. 5>


In equation 5, the (1 + α^2) term is a constant, while the 2α cos(ωK) term is a periodic function. Therefore, as illustrated in FIG. 3, the frequency characteristics of the comb filter 206 have periodic zero points. Here, when the delay length K is set to a sample length corresponding to the period of the pitch assigned to the key number (one of #0 to #87) for that comb filter 206, the frequencies of the zero points in the frequency characteristics of the comb filter 206 illustrated in FIG. 3 correspond to the respective frequencies of the fundamental tone and harmonics of the pitch. Thus, the comb filter 206 performs the filtering calculation process of respectively reducing, from the frequency components included in the input waveform data, the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch corresponding to the note number specified in that waveform data. As a result, the note number-specific attenuated sound waveform data output from the comb filter 206 exhibits frequency characteristics in which the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch assigned to the key number (one of #0 to #87) for that comb filter 206 are respectively reduced.
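
Equation 5 can be checked numerically with the short sketch below. The sign convention used here, α < 0, is an assumption that places the dips exactly at the fundamental frequency (the sampling frequency divided by K) and its harmonics; the patent leaves the value of α to the configured reduction amount.

```python
import numpy as np

FS = 44100
K = 100                       # delay corresponding to a 441 Hz fundamental
alpha = -0.9                  # assumed sign: negative puts the dips at m * FS / K

f = np.linspace(1.0, FS / 2, 200000)
omega = 2 * np.pi * f / FS
mag = np.sqrt((1 + alpha ** 2) + 2 * alpha * np.cos(omega * K))  # equation 5

# the response dips at the fundamental FS / K = 441 Hz and its harmonics
for m in (1, 2, 3):
    idx = int(np.argmin(np.abs(f - m * FS / K)))
    print(f"{f[idx]:7.1f} Hz  |H| = {mag[idx]:.3f}")   # roughly |1 + alpha| = 0.1
```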


As described above, the delay length K set to the delayer (Delay) 208 of the comb filter 206 corresponds to the pitch assigned to the key number (one of #0 to #87) for that comb filter 206. However, as also described above, the CPU 103 illustrated in FIG. 1 can supply this pitch information in advance via the system bus 170 as the pitch control signal 119. The pitch is determined by the pitch frequency of the key corresponding to the key number, the temperament setting specified by the performer, and the master tuning setting similarly specified by the performer. As will be described in more detail later (see the description of FIG. 6C), any time when the electronic musical instrument 100 illustrated in FIG. 1 is powered on, when the performer changes the temperament, or when the performer changes the master tuning, the CPU 103 recalculates the pitch information corresponding to each of the comb filters 206 and then sets this information to the register Reg#1211 of each comb filter 206 as the pitch control signal 119.
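
A sketch of the recalculation that might be performed is shown below; equal temperament referenced to A4, a master tuning expressed as the frequency of A4, and the assignment of key number #0 to A0 (as in FIG. 9) are assumptions here, and other temperaments would use a different frequency table.

```python
def delay_length_for_key(key_number, fs=44100.0, master_tuning_a4=440.0):
    """Delay length K (in samples) for the comb filter assigned to key_number,
    assuming equal temperament with key #0 = A0, so that key #48 = A4."""
    semitones_from_a4 = key_number - 48
    f0 = master_tuning_a4 * 2.0 ** (semitones_from_a4 / 12.0)
    return int(round(fs / f0))

# recalculated at power-on or when the temperament or master tuning changes,
# then written to register Reg#1 of every comb filter as the pitch control signal
delays = [delay_length_for_key(k) for k in range(88)]
print(delays[0], delays[48], delays[87])   # A0, A4, C8
```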


Moreover, from equation 5 above, changing the scaling factor α set to the multiplier 209 makes it possible to change the depth of the zero points in the frequency characteristics illustrated in FIG. 3. The amount by which the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch assigned to a key number should be respectively reduced varies depending on the key number. Therefore, for each of the comb filters 206, the CPU 103 sets the scaling factor α corresponding to the key number assigned to that comb filter 206 to the register Reg#2212 of that comb filter 206 via the system bus 170 as the sympathetic effect reduction amount configuration signal 120.


Among the #0 to #87 comb filters 206, for the comb filters 206 for key numbers corresponding to note numbers that are not specified in the first sound waveform data (L-ch) 109, the sympathetic effect reduction amount configuration signal 120 set by the CPU 103 illustrated in FIG. 1 via the system bus 170 sets the scaling factors α in the respective registers Reg#2 (212) illustrated in FIG. 2 to a value of 0, thereby making it possible to simply pass the input waveform data through the respective adders 210 as-is and output that data without making any changes thereto. More specifically, when a note-on event occurs, the CPU 103 uses the sympathetic effect reduction amount configuration signal 120 to set, to the register Reg#2 (212) of the comb filter 206 corresponding to the note number specified by that note-on event, the value for the scaling factor α corresponding to that note number. Then, when a note-off event occurs, the CPU 103 uses the sympathetic effect reduction amount configuration signal 120 to set a value of 0 for the scaling factor α to the register Reg#2 (212) of the comb filter 206 corresponding to the note number specified by that note-off event.


The operation of the filter processor 203 described above makes it possible to generate the third sound waveform data (L-ch) 113 by respectively reducing, from the frequency components included in the convolved sound waveform data for the left channel output from the convolution processor 204, the amplitudes of the respective frequency components of the fundamental tones and harmonics of the pitches corresponding to the one or more note numbers specified in that waveform data.



FIG. 4 is a block diagram illustrating an example of an embodiment of the FFT convolver 213 illustrated in FIG. 2. The FFT convolver 213 includes an FFT processor 401, an impulse response waveform data register 402, a delay unit 403, a complex multiplier 404, a complex adder 405, and an inverse FFT processor 406.


The FFT processor 401 performs an FFT process on input waveform data 407 input from the multiplier 214 illustrated in FIG. 2.


The impulse response waveform data register 402 stores impulse response complex number frequency waveform data sent from the memory 104 via the system bus 170 by the CPU 103 illustrated in FIG. 1.


The delay unit 403 stores complex number frequency waveform data from the FFT processor 401 while shifting that data by an analysis frame unit or half of that unit.


The complex multiplier 404, in accordance with equation 6 below, and for each frequency, performs complex multiplication of the impulse response frequency waveform data stored in the impulse response waveform data register 402 with the frequency waveform data stored in the delay unit 403.

out.r = in1.r × in2.r − in1.i × in2.i
out.i = in1.i × in2.r + in1.r × in2.i  <Eq. 6>


The complex adder 405 calculates the complex sum of the multiplication results from the complex multiplier 404.


Then, the inverse FFT processor 406 performs an inverse FFT process on the output of the complex adder 405 to generate convolved sound waveform data 408 and then outputs this data to the multiplier 215 illustrated in FIG. 2.
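
The following is a condensed sketch of FFT-based block convolution in the spirit of FIG. 4. It uses plain overlap-add with a single impulse-response partition rather than the register and delay-unit structure described above, so it illustrates the principle of equation 6 (a per-bin complex multiply between the input spectrum and the stored impulse-response spectrum) rather than the actual convolver.

```python
import numpy as np

def fft_convolve_blocks(blocks, impulse_response, block_len):
    """Overlap-add convolution: each input block is transformed, multiplied
    bin by bin with the impulse-response spectrum (the complex product of
    equation 6), inverse-transformed, and its tail carried into the next block."""
    n_fft = 1
    while n_fft < block_len + len(impulse_response) - 1:
        n_fft *= 2
    ir_spectrum = np.fft.rfft(impulse_response, n_fft)  # stored once, like register 402
    tail = np.zeros(n_fft - block_len)
    for block in blocks:
        spec = np.fft.rfft(block, n_fft) * ir_spectrum  # complex multiply per bin
        y = np.fft.irfft(spec, n_fft)                   # inverse FFT
        y[: len(tail)] += tail                          # add the previous block's tail
        tail = y[block_len:].copy()
        yield y[:block_len]

# usage: stream a signal through in fixed-size blocks and check against np.convolve
rng = np.random.default_rng(3)
x = rng.normal(size=4096)
ir = rng.normal(size=512) * np.exp(-np.arange(512) / 64.0)
out = np.concatenate(list(fft_convolve_blocks(x.reshape(-1, 512), ir, 512)))
assert np.allclose(out, np.convolve(x, ir)[: len(x)])
```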



FIG. 5 is an explanatory drawing of a method of recording the impulse response waveform data (second sound waveform data). Actuators that cause the body of an acoustic piano to vibrate are arranged at a plurality of locations on a frame that supports the strings of the acoustic piano, and these actuators generate time-stretched pulse (TSP) signals (S501 in FIG. 5).


The sound produced from the body of the acoustic piano due to TSP signals generated while depressing the damper pedal is recorded using two stereo microphones (S502 in FIG. 5). Here, although it would also be conceivable to make the actuators generate impulse signals and then directly record the resulting pulse responses, this would require the microphone gain and maximum actuator drive capability to be excessively large as well as present challenges related to signal-to-noise ratio (S/N), and therefore TSP signals are used. TSPs are a type of sweep waveform signal generated by shifting the phase of an impulse for each frequency. TSPs make it possible to disperse drive times for a certain period of time and are therefore effective for solving the problems described above. Moreover, impulse hammers may be used instead of the actuators to drive the piano. Furthermore, the number and positions of the microphones that record the produced sound may be different from those illustrated in FIG. 5, and TSP signals recorded at a plurality of locations above or below the soundboard and then mixed together may be used.


The shifted phase of the recorded TSP signal is inverse-shifted to obtain a time-domain impulse response signal of the type shown in A in FIG. 5 (S503 in FIG. 5).


An FFT process is performed on the obtained time-domain impulse response signal (S504 in FIG. 5), thereby yielding the impulse response waveform data (second sound waveform data) 121, which is a complex number signal in the frequency domain, and which is then stored in the memory 104 illustrated in FIG. 1 (S505 in FIG. 5).
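
The measurement-side processing can be sketched as follows. A linear swept sine is used here as a stand-in for a TSP signal, and the inverse phase shift is implemented as a frequency-domain inverse filter; the sweep design, lengths, and regularization are illustrative assumptions rather than the procedure actually used to produce the stored data.

```python
import numpy as np

FS = 44100

def linear_tsp(duration_s=2.0, f_lo=20.0, f_hi=20000.0):
    """A linear swept sine, used here as a simple stand-in for a TSP signal."""
    t = np.arange(int(duration_s * FS)) / FS
    phase = 2 * np.pi * (f_lo * t + (f_hi - f_lo) * t ** 2 / (2 * duration_s))
    return np.sin(phase)

def impulse_response_from_recording(recorded, sweep, ir_len):
    """Undo the sweep's phase shift by frequency-domain inverse filtering,
    keep ir_len samples of the time-domain impulse response, and FFT it
    into the complex frequency-domain form that is stored in memory."""
    n = len(recorded) + len(sweep)
    rec_f = np.fft.rfft(recorded, n)
    swp_f = np.fft.rfft(sweep, n)
    ir_time = np.fft.irfft(rec_f / (swp_f + 1e-12), n)[:ir_len]
    return ir_time, np.fft.rfft(ir_time)

# synthetic check: "record" the sweep as if played through a known toy response
sweep = linear_tsp()
true_ir = np.zeros(2048)
true_ir[0], true_ir[300] = 1.0, 0.5
recorded = np.convolve(sweep, true_ir)
ir_time, ir_freq = impulse_response_from_recording(recorded, sweep, 2048)
print(np.allclose(ir_time, true_ir, atol=1e-3))   # expected: True
```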



FIGS. 6A-D and FIGS. 7A-B are flowcharts illustrating examples of processes in the electronic musical instrument 100 illustrated in FIG. 1 that are related to generating damper sound effects. These processes are operations resulting from the execution of the control programs stored in the memory 104 by the CPU 103 illustrated in FIG. 1.



FIG. 6A is a flowchart illustrating an example of a damper pedal ON interrupt process executed when the performer depresses the damper pedal 150 illustrated in FIG. 1. When this interrupt occurs, the CPU 103, via the system bus 170, inputs damper pedal depression information 118 indicating that the damper pedal is ON to the EGs 216 and 217 (see FIG. 2) in the convolution processors 204 in the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 included in the damper sound effect generator 101 (see FIG. 1) (step S600 in FIG. 6A). The CPU 103 then returns from the interrupt. Due to this process, the EGs 216 and 217, in accordance with the damper pedal depression information 118 including the damper pedal ON instruction, respectively generate and apply the envelope values to the multipliers 214 and 215.



FIG. 6B is a flowchart illustrating an example of a damper pedal OFF interrupt process executed when the performer releases the damper pedal 150 illustrated in FIG. 1 from the depressed state. When this interrupt occurs, the CPU 103, via the system bus 170, inputs damper pedal depression information 118 indicating that the damper pedal is OFF to the EGs 216 and 217 (see FIG. 2) in the convolution processors 204 in the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 included in the damper sound effect generator 101 (see FIG. 1) (step S610 in FIG. 6B). The CPU 103 then returns from the interrupt. Due to this process, the EGs 216 and 217, in accordance with the damper pedal depression information 118 including the damper pedal OFF instruction, respectively generate and apply the envelope values to the multipliers 214 and 215.



FIG. 6C is a flowchart illustrating an example of an interrupt process for when the performer operates the switch unit 160 to power on, change the temperament of, or change the master tuning of the electronic musical instrument 100 illustrated in FIG. 1. When any of these interrupts occur, the CPU 103 recalculates the pitches corresponding to the key numbers #0 to #87 in accordance with the respective key numbers and the changed temperament or master tuning, and then, in accordance with the recalculated pitches, recalculates the delay length K for the delayer (Delay) 208 in each of the comb filters 206 corresponding to the key numbers #0 to #87 illustrated in FIG. 2 (step S620 in FIG. 6C). Moreover, the changed temperament information and master tuning information are stored in a non-volatile memory (not illustrated in the figures), and then when the interrupt triggered by powering on the electronic musical instrument 100 occurs, the temperament information and the master tuning information stored in the non-volatile memory are used for the recalculations described above.


The CPU 103 then, via the system bus 170, sets, as the pitch control signal 119, the recalculated delay length K for each comb filter 206 to the register Reg#1 (211) in each of the comb filters 206 in the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 included in the damper sound effect generator 101 (see FIG. 1) (step S621 in FIG. 6C).


Moreover, when the interrupt triggered by powering on the electronic musical instrument 100 occurs, the CPU 103, via the system bus 170, sets, as the sympathetic effect reduction amount configuration signal 120, a value of 0 for the scaling factor α for the multiplier 209 in each of the comb filters 206 corresponding to the key numbers #0 to #87 illustrated in FIG. 2 to the register Reg#2 (212; see FIG. 2) in each of the comb filters 206 in the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 included in the damper sound effect generator 101 (see FIG. 1) (step S622 in FIG. 6C). Thus, upon start-up when no keypresses have yet occurred, the comb filters 206 corresponding to the key numbers #0 to #87 all simply pass through and output any input waveform data as-is. The CPU 103 then returns from the interrupt.



FIG. 6D is a flowchart illustrating an example of an interrupt process for when the performer operates the switch unit 160 to change the amount of damper pedal effect to apply. When this interrupt occurs, the CPU 103 sets the damper pedal effect application amount configuration signal 122 configured with the changed application amount to the multipliers 105 and 106 (see FIG. 1) via the system bus 170 (step S630 in FIG. 6D). The CPU 103 then returns from the interrupt. Thus, the application amount is changed in the third sound waveform data (L-ch) 113 and the third sound waveform data (R-ch) 114 (that is, the resonant tones for the damper pedal effect from the damper sound effect generator 101) that are respectively added into the piano sound waveform data (L-ch) 115 and the piano sound waveform data (R-ch) 116 by the adders 107 and 108 illustrated in FIG. 1.



FIG. 7A is a flowchart illustrating an example of an interrupt process for when a keypress occurs due to the performer operating the keyboard 140 illustrated in FIG. 1. When a keypress interrupt occurs, the CPU 103, on the basis of keypress information input via the GPIO 130 illustrated in FIG. 1, outputs performance information 117 indicating a note-on instruction for the note number corresponding to the pressed key to the piano sound source 102 (step S700 in FIG. 7A).


Next, the CPU 103 reads a value for the scaling factor α corresponding to the note number specified in step S700 from a read-only memory (ROM), for example (not illustrated in the figures), and uses the sympathetic effect reduction amount configuration signal 120 to set this value to the register Reg#2 (212) of the comb filter 206 illustrated in FIG. 2 corresponding to that note number (step S701 in FIG. 7A). The CPU 103 then returns from the interrupt.



FIG. 7B is a flowchart illustrating an example of an interrupt process for when a key release occurs due to the performer operating the keyboard 140 illustrated in FIG. 1. When a key release interrupt occurs, the CPU 103, on the basis of key release information input via the GPIO 130 illustrated in FIG. 1, outputs performance information 117 indicating a note-off instruction for the note number corresponding to the released key to the piano sound source 102 (step S710 in FIG. 7B).


Next, the CPU 103 uses the sympathetic effect reduction amount configuration signal 120 to set a value of 0 for the scaling factor α corresponding to the note number specified in step S710 to the register Reg#2 (212) of the comb filter 206 illustrated in FIG. 2 corresponding to that note number (step S711 in FIG. 7B). Thus, the corresponding comb filter 206 is set to the state in which input waveform data is simply passed through and output as-is. The CPU 103 then returns from the interrupt.
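
Reduced to a small Python sketch, the two handlers amount to register bookkeeping; the register interface and the per-note α table below are invented stand-ins for the DSP's Reg#2 write and the ROM table, respectively.

```python
class SoundSourceStub:
    """Stand-in for the piano sound source 102 (receives note on/off)."""
    def note_on(self, note_number): pass
    def note_off(self, note_number): pass

NUM_KEYS = 88
alpha_table = [-0.8] * NUM_KEYS     # assumed per-note reduction amounts (the ROM table)
comb_reg2 = [0.0] * NUM_KEYS        # models register Reg#2 of each comb filter

def on_key_press(note_number, sound_source):
    sound_source.note_on(note_number)                   # step S700
    comb_reg2[note_number] = alpha_table[note_number]   # step S701: enable the notch

def on_key_release(note_number, sound_source):
    sound_source.note_off(note_number)                  # step S710
    comb_reg2[note_number] = 0.0                        # step S711: back to pass-through

src = SoundSourceStub()
on_key_press(48, src)
on_key_release(48, src)
```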



FIG. 8 is a (first) block diagram illustrating another embodiment of a damper sound effect generator. In the configuration for the left channel in the embodiment illustrated in FIG. 2 and described above, first, the convolution processor 204 performs the process of convolving the first sound waveform data (L-ch) 109 input from the piano sound source 102 with the left channel impulse response waveform data (second sound waveform data) 121 in order to generate the convolved sound waveform data for the left channel. Next, the filter processor 203 generates and outputs the third sound waveform data (L-ch) 113 (the attenuated sound waveform data) by respectively reducing, from the convolved sound waveform data for the left channel, the amplitudes of the respective frequency components of the fundamental tones and harmonics of the pitches currently being produced in the first sound waveform data (L-ch) 109. In contrast, the configuration of the other embodiment illustrated in FIG. 8 is reversed relative to FIG. 2 such that first, the filter processor 203 outputs left channel attenuated sound waveform data 218 by respectively reducing, from the first sound waveform data (L-ch) 109 input from the piano sound source 102, the amplitudes of the respective frequency components of the fundamental tones and harmonics of the pitches currently being produced in that waveform data. Next, the convolution processor 204 performs a process of convolving the left channel attenuated sound waveform data 218 with the left channel impulse response waveform data (second sound waveform data) 121 and then outputs the resulting third sound waveform data (L-ch) 113. The same relationship is used for the right channel as well.



FIG. 9 is a (second) block diagram illustrating another embodiment of the damper sound effect generator 101 illustrated in FIG. 1. In the configuration of the other embodiment illustrated in FIG. 9, similar to the configuration of the embodiment illustrated in FIG. 2, the damper sound effect generator 101 includes a damper sound effect generator (L-ch) 201 that processes the left channel and a damper sound effect generator (R-ch) 202 that processes the right channel. In the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 illustrated in FIG. 9, the convolution processor 204 has the same configuration as in FIG. 2.


However, in the damper sound effect generator (L-ch) 201 and the damper sound effect generator (R-ch) 202 illustrated in FIG. 9, a filter processor 901 has a different configuration than the filter processor 203 illustrated in FIG. 2. In FIG. 9, comb filters 206 numbered from #0 to #87 each have the same individual configuration as in FIG. 2 but are different overall in that these comb filters 206 operate in parallel instead of operating in series as described with reference to FIG. 2. Next, the damper sound effect generator (L-ch) 201 will be described.


As described above, the piano sound source 102 adds together a plurality of sets of waveform data corresponding to a plurality of note-on instructions and currently being read for the left channel and outputs the result to the adder 107 as musical note waveform data (L-ch) 109. Similarly, the piano sound source 102 adds together a plurality of sets of waveform data corresponding to a plurality of note-on instructions and currently being read for the right channel and outputs the result to the adder 108 as musical note waveform data (R-ch) 110. Moreover, the piano sound source 102 outputs the plurality of sets of waveform data corresponding to the plurality of note-on instructions and currently being read for the left channel to the damper sound effect generator 101 in parallel (that is, without adding the sets of data together) as first sound waveform data (L-ch) 902. Similarly, the piano sound source 102 outputs the plurality of sets of waveform data corresponding to the plurality of note-on instructions and currently being read for the right channel to the damper sound effect generator 101 in parallel (that is, without adding the sets of data together) as first sound waveform data (R-ch) 903. Furthermore, the piano sound source 102 outputs note number information for sound production channels that were newly allocated in response to the note-on instructions to the damper sound effect generator 101 as sound production channel information 904.


On the basis of the sound production channel information 904 input from the piano sound source 102, for each sound production channel for which the same note number is specified in the first sound waveform data (L-ch) 902 input from the piano sound source 102, the damper sound effect generator 101 performs a filtering calculation process of generating attenuated sound waveform data by respectively reducing, from the frequency components included in the waveform data in that sound production channel, the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch corresponding to the note number specified for that sound production channel. The damper sound effect generator 101 then performs a process of convolving attenuated sound waveform data (which is obtained by combining, for the left channel, the note number-specific attenuated sound waveform data generated by the filtering calculation process) with left channel impulse response waveform data (second sound waveform data) 121 read from the memory 104, and outputs the resulting third sound waveform data (L-ch) 113 to the multiplier 105. Similarly, on the basis of the sound production channel information 904 input from the piano sound source 102, for each sound production channel for which the same note number is specified in the first sound waveform data (R-ch) 903 input from the piano sound source 102, the damper sound effect generator 101 performs a filtering calculation process of generating attenuated sound waveform data by respectively reducing, from the frequency components included in the waveform data in that sound production channel, the amplitudes of the respective frequency components of the fundamental tone and harmonics of the pitch corresponding to the note number specified for that sound production channel. The damper sound effect generator 101 then performs a process of convolving attenuated sound waveform data (which is obtained by combining, for the right channel, the note number-specific attenuated sound waveform data generated by the filtering calculation process) with right channel impulse response waveform data (second sound waveform data) 121 read from the memory 104, and outputs the resulting third sound waveform data (R-ch) 114 to the multiplier 106.


More specifically, the filter processor 901 includes a sound production channel-comb filter allocator 205, 88 comb filters 206 numbered from #0 (A0) to #87 (C8) and corresponding to the pitches of the 88 keys on the keyboard of an acoustic piano, and an adder 207 that adds together the outputs of the 88 comb filters 206 and outputs the addition results to the convolution processor 204 as attenuated sound waveform data 218.


The sound production channel-comb filter allocator 205, on the basis of the sound production channel information 904 input from the piano sound source 102, allocates and inputs waveform data that, among sets of waveform data in N note-on instruction-specific sound production channels #0 to #N−1 for the first sound waveform data (L-ch) 902 input from the piano sound source 102 illustrated in FIG. 1, is in sound production channels for which the same note number is specified to the comb filter 206 that, among the 88 comb filters 206 numbered from #0 to #87, corresponds to that note number. Here, the allocation of any waveform data in a sound production channel for the same note number that had previously been allocated to that comb filter 206 is cleared. This means that when the same key on the keyboard 140 illustrated in FIG. 1 is pressed multiple times, the damper effect applied to an earlier keypress is cleared so that the damper effect can be applied to a later keypress.


For the sets of waveform data that are allocated by the sound production channel-comb filter allocator 205 and in which note numbers corresponding to the pitches of the key numbers #0 to #87 in the first sound waveform data (L-ch) 902 input from the piano sound source 102 are specified, the #0 to #87 comb filters 206 respectively generate and output note number-specific attenuated sound waveform data by respectively reducing, from the frequency components included in that waveform data, the amplitudes of the respective frequency components of the fundamental tones and harmonics of the pitches corresponding to the note numbers specified in that waveform data. Then, the note number-specific attenuated sound waveform data is added together by the adder 207, and the addition results are output to the convolution processor 204 as the attenuated sound waveform data 218.
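
A sketch of this parallel arrangement is shown below, reusing the feedforward comb from the earlier series sketch. The allocator is reduced to a mapping from note number to that note's per-channel waveform, which is an assumption about the data layout rather than the interface of the sound production channel-comb filter allocator 205.

```python
import numpy as np

def feedforward_comb(x, K, alpha):
    """y[n] = x[n] + alpha * x[n - K]; alpha == 0 is a pass-through."""
    if alpha == 0.0 or K <= 0:
        return x
    y = np.copy(x)
    y[K:] += alpha * x[:-K]
    return y

def parallel_comb_bank(per_note_waves, delays, alphas, length):
    """FIG. 9 filter processor 901: each sounding note's channel waveform passes
    through only its own comb filter, and the outputs are summed by adder 207."""
    mix = np.zeros(length)
    for key, wave in per_note_waves.items():      # role of allocator 205
        mix[: len(wave)] += feedforward_comb(wave, delays[key], alphas[key])
    return mix                                    # attenuated sound waveform data 218

# usage: two keys sounding; the summed result is then sent to the convolution processor
delays = [int(round(44100 / (27.5 * 2 ** (k / 12)))) for k in range(88)]
alphas = [-0.8] * 88
rng = np.random.default_rng(4)
waves = {48: rng.normal(size=22050), 52: rng.normal(size=22050)}
attenuated = parallel_comb_bank(waves, delays, alphas, length=22050)
```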


Similar to the convolution processor 204 illustrated in FIG. 2, the convolution processor 204 illustrated in FIG. 9 performs a process of convolving the attenuated sound waveform data 218 with the impulse response waveform data (second sound waveform data) 121 in order to generate and output the third sound waveform data (L-ch) 113 (the resonant tone waveform data).
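
The FFT-based convolution referred to for the convolution processor 204 can be sketched in Python as follows; this is a generic zero-padded FFT convolution with arbitrary stand-in data, not a description of the processor's actual routine:

    import numpy as np

    def fft_convolve(attenuated, impulse_response):
        """Linear convolution via FFT: zero-pad, multiply spectra, inverse-transform."""
        n = len(attenuated) + len(impulse_response) - 1
        nfft = 1 << (n - 1).bit_length()             # next power of two at or above n
        spectrum = np.fft.rfft(attenuated, nfft) * np.fft.rfft(impulse_response, nfft)
        return np.fft.irfft(spectrum, nfft)[:n]      # cf. third sound waveform data

    # Usage with stand-in data.
    attenuated = np.random.randn(44100)              # cf. attenuated sound waveform data 218
    ir = np.random.randn(88200)                      # cf. impulse response waveform data 121
    third = fft_convolve(attenuated, ir)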


The embodiments described above generate and add the appropriate damper sound effects by convolving resonant tone characteristics sampled directly from an acoustic piano, thereby making it possible to obtain piano damper sounds and piano sounds that are more natural, realistic, and beautiful.


In the embodiments described above, a plurality of types of impulse response waveform data (second sound waveform data) 121, corresponding to various piano types, tonal variations, and the like, may be stored in the memory 104 in advance and selected from.


Although the embodiments described above output two-channel stereo musical notes, the output does not necessarily need to be stereo, and output with three or more channels may also be used.


In the embodiments described above, the number of comb filters 206 prepared matches the 88 keys #0 to #87 of a standard acoustic piano. However, when the amount of delay becomes long, such as for bass strings, a configuration in which the delay lengths K of the delayers (Delay) 208 are set to half the periods of the pitches corresponding to the key numbers, or a configuration in which some of the comb filters are shared among other strings, may be used.
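
A rough sketch of how such delay lengths K might be chosen is given below; the 44.1 kHz sample rate and the key-number threshold below which the half-period configuration is applied are arbitrary assumptions for the sketch, not values given in the embodiments:

    FS = 44100  # assumed sample rate

    def delay_length(key_number, halve_below_key=24):
        """Delay length K in samples for key #0 (A0) .. #87 (C8)."""
        f0 = 27.5 * 2.0 ** (key_number / 12.0)   # equal-tempered fundamental of the key
        period = FS / f0                         # one period of the pitch, in samples
        if key_number < halve_below_key:         # long delays (bass strings): use half a period
            return max(1, round(period / 2))
        return max(1, round(period))

    print(delay_length(0), delay_length(48))     # e.g. A0 -> 802 (half period), A4 -> 100 (full period)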


Although the embodiments described above use an FFT process as an example of the convolution operation process performed by the convolution processor 204, the convolution operation process may alternatively be performed by direct multiplication-accumulation of the waveform data in the time domain without using an FFT.
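
For comparison, a direct time-domain multiplication-accumulation has the following general form (a generic sketch, not the convolution processor 204 itself):

    import numpy as np

    def direct_convolve(x, h):
        """Direct multiply-accumulate convolution in the time domain: y[n] = sum_k h[k] * x[n-k]."""
        y = np.zeros(len(x) + len(h) - 1)
        for k, hk in enumerate(h):               # accumulate each delayed, scaled copy of the input
            y[k:k + len(x)] += hk * x
        return y

    # Small check against NumPy's reference convolution.
    x, h = np.random.randn(256), np.random.randn(32)
    assert np.allclose(direct_convolve(x, h), np.convolve(x, h))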


It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover modifications and variations that come within the scope of the appended claims and their equivalents. In particular, it is explicitly contemplated that any part or whole of any two or more of the embodiments and their modifications described above can be combined and regarded within the scope of the present invention.

Claims
  • 1. A musical note generation device, comprising: a plurality of keys, the plurality of keys respectively being associated with pitch information; and at least one processor, the at least one processor performing processes including: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to the pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process, wherein in the third sound waveform data generation process, the at least one processor identifies the respective frequency components of the fundamental tone and the harmonics with a comb filter.
  • 2. The musical note generation device according to claim 1, wherein the first sound waveform data includes at least a sound obtained from vibration of a string struck due to a keypress performed while not depressing a damper pedal in a keyboard instrument, and wherein the second sound waveform data is sound waveform data for resonant tones obtained from vibration of a plurality of strings included in the keyboard instrument that is caused by causing the keyboard instrument to vibrate while depressing the damper pedal of the keyboard instrument.
  • 3. The musical note generation device according to claim 1, wherein in the third sound waveform data generation process, the at least one processor generates the third sound waveform data by performing a delay process corresponding to the specified key on the convolved sound waveform data.
  • 4. The musical note generation device according to claim 1, wherein the at least one processor performs the convolution operation process, the third sound waveform data generation process, and the output process when a damper pedal is depressed.
  • 5. An electronic musical instrument, comprising: a damper pedal; and the musical note generation device as set forth in claim 1, wherein the at least one processor of the musical note generation device performs the convolution operation process, the third sound waveform data generation process, and the output process when the damper pedal is depressed.
  • 6. A method performed by at least one processor in an electronic musical instrument, comprising: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process, wherein the third sound waveform data generation process includes identifying the respective frequency components of the fundamental tone and the harmonics with a comb filter.
  • 7. A non-transitory storage medium having stored therein instructions that cause at least one processor in an electronic musical instrument to perform the following processes: a convolution operation process of generating convolved sound waveform data by convolving first sound waveform data corresponding to pitch information associated with a specified key with second sound waveform data corresponding to an impulse response; a third sound waveform data generation process of generating third sound waveform data by respectively reducing, among frequency components included in the convolved sound waveform data generated by the convolution operation process, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the third sound waveform data generation process, wherein the third sound waveform data generation process includes identifying the respective frequency components of the fundamental tone and the harmonics with a comb filter.
  • 8. A musical note generation device, comprising: a plurality of keys, the plurality of keys respectively being associated with pitch information; and at least one processor, the at least one processor performing processes including: an attenuated sound waveform data generation process of generating attenuated sound waveform data by respectively reducing, among frequency components included in first sound waveform data corresponding to the pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process, wherein the attenuated sound waveform data generation process includes identifying the respective frequency components of the fundamental tone and the harmonics with a comb filter.
  • 9. An electronic musical instrument, comprising: a damper pedal; and the musical note generation device as set forth in claim 8, wherein the at least one processor of the musical note generation device performs the attenuated sound waveform data generation process, the convolution operation process, and the output process when the damper pedal is depressed.
  • 10. A method performed by at least one processor in an electronic musical instrument, comprising: an attenuated sound waveform data generation process of, when a damper pedal is depressed, generating attenuated sound waveform data by reducing, among frequency components included in first sound waveform data corresponding to pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process, wherein the attenuated sound waveform data generation process includes identifying the respective frequency components of the fundamental tone and the harmonics with a comb filter.
  • 11. A non-transitory storage medium having stored therein instructions that cause at least one processor in an electronic musical instrument to perform the following processes: an attenuated sound waveform data generation process of, when a damper pedal is depressed, generating attenuated sound waveform data by reducing, among frequency components included in first sound waveform data corresponding to pitch information associated with a specified key, amplitudes of respective frequency components of a fundamental tone and harmonics of the fundamental tone corresponding to a pitch indicated by the pitch information; a convolution operation process of generating third sound waveform data by convolving the attenuated sound waveform data generated by the attenuated sound waveform data generation process with second sound waveform data corresponding to an impulse response; and an output process of outputting piano sound waveform data generated on the basis of the third sound waveform data generated by the convolution operation process, wherein the attenuated sound waveform data generation process includes identifying the respective frequency components of the fundamental tone and the harmonics with a comb filter.
Priority Claims (1)
Number Date Country Kind
2016-252129 Dec 2016 JP national
US Referenced Citations (12)
Number Name Date Kind
8729376 Kakishita May 2014 B2
8754316 Shimizu Jun 2014 B2
20030177889 Koseki Sep 2003 A1
20080006141 Gouhara et al. Jan 2008 A1
20090000462 Fujita Jan 2009 A1
20090133566 Nakae May 2009 A1
20090266219 Nakae Oct 2009 A1
20100307322 Tominaga Dec 2010 A1
20110226119 Shinoda Sep 2011 A1
20120137857 Liu Jun 2012 A1
20170243571 Cogliati et al. Aug 2017 A1
20180182365 Sakata Jun 2018 A1
Foreign Referenced Citations (8)
Number Date Country
H9-81156 Mar 1997 JP
2007-193129 Aug 2007 JP
2009-008736 Jan 2009 JP
2009-025589 Feb 2009 JP
2009-175677 Aug 2009 JP
2009-265470 Nov 2009 JP
2010-117536 May 2010 JP
2011-197326 Oct 2011 JP
Non-Patent Literature Citations (5)
Entry
U.S. Appl. No. 15/845,805, filed Dec. 18, 2017.
European Search Report dated May 4, 2018, in a counterpart European patent application No. 17209232.2.
Zambon et al., “Simulation of Piano Sustain-Pedal Effect by Parallel Second-Order Filters”, Sep. 1-4, 2008, Proc. of the 11th Int. Conference on Digital Audio Effects (DAFx-08), Espoo, Finland.
European Search Report dated May 7, 2018, in a counterpart European patent application No. 17209236.3.
Japanese Office Action dated Nov. 6, 2018, in a counterpart Japanese patent application 2016-252129. (A machine translation (not reviewed for accuracy) attached.).
Related Publications (1)
Number Date Country
20180182364 A1 Jun 2018 US