1. Field of the Invention
The present invention relates to a sound signal analysis apparatus, a sound signal analysis method and a sound signal analysis program for receiving sound signals indicative of a musical piece and detecting beat positions (beat timing) and tempo of the musical piece.
2. Description of the Related Art
Conventionally, there are sound signal analysis apparatuses which receive sound signals indicative of a musical piece and detect beat positions and tempo of the musical piece, as described in Japanese Unexamined Patent Publication No. 2009-265493, for example.
First, the conventional sound signal analysis apparatus of the above-described Japanese Unexamined Patent Publication calculates a beat index sequence representing candidate beat positions in accordance with changes in strength (amplitude) of sound signals. Then, in accordance with the calculated beat index sequence, the sound signal analysis apparatus detects the tempo of the musical piece. Therefore, in a case where the accuracy with which the beat index sequence is detected is low, the accuracy with which the tempo is detected is also decreased.
The present invention was accomplished to solve the above-described problem, and an object thereof is to provide a sound signal analysis apparatus which can detect beat positions and changes in tempo in a musical piece with high accuracy. As for descriptions about respective constituent features of the present invention, furthermore, reference letters of corresponding components of an embodiment described later are provided in parentheses to facilitate the understanding of the present invention. However, it should not be understood that the constituent features of the present invention are limited to the corresponding components indicated by the reference letters of the embodiment.
In order to achieve the above-described object, it is a feature of the present invention to provide a sound signal analysis apparatus including sound signal input portion (S12) for inputting a sound signal indicative of a musical piece; feature value calculation portion (S165, S167) for calculating a first feature value (XO) indicative of a feature relating to existence of a beat in one of sections of the musical piece and a second feature value (XB) indicative of a feature relating to tempo in one of the sections of the musical piece; and estimation portion (S17, S18) for concurrently estimating a beat position and a change in tempo in the musical piece by selecting, from among a plurality of probability models described as sequences of states (qb,n) classified according to a combination of a physical quantity (n) relating to existence of a beat in one of the sections of the musical piece and a physical quantity (b) relating to tempo in one of the sections of the musical piece, a probability model whose sequence of observation likelihoods (L) each indicative of a probability of concurrent observation of the first feature value and the second feature value in corresponding one of the sections of the musical piece satisfies a certain criterion.
In this case, the estimation portion may concurrently estimate a beat position and a change in tempo in the musical piece by selecting a probability model of the most likely sequence of observation likelihoods from among the plurality of probability models.
In this case, the estimation portion may have first probability output portion (S172) for outputting, as a probability of observation of the first feature value, a probability calculated by assigning the first feature value as a probability variable of a probability distribution function defined according to the physical quantity relating to existence of beat.
In this case, as a probability of observation of the first feature value, the first probability output portion may output a probability calculated by assigning the first feature value as a probability variable of any one of (including but not limited to) normal distribution, gamma distribution and Poisson distribution defined according to the physical quantity relating to existence of beat.
In this case, the estimation portion may have second probability output portion for outputting, as a probability of observation of the second feature value, goodness of fit of the second feature value to a plurality of templates provided according to the physical quantity relating to tempo.
In this case, the estimation portion may have second probability output portion for outputting, as a probability of observation of the second feature value, a probability calculated by assigning the second feature value as a probability variable of probability distribution function defined according to the physical quantity relating to tempo.
In this case, as a probability of observation of the second feature value, the second probability output portion may output a probability calculated by assigning the second feature value as a probability variable of any one of (including but not limited to) multinomial distribution, Dirichlet distribution, multidimensional normal distribution, and multidimensional Poisson distribution defined according to the physical quantity relating to tempo.
In this case, furthermore, the sections of the musical piece correspond to frames, respectively, formed by dividing the input sound signal at certain time intervals; and the feature value calculation portion may have first feature value calculation portion (S165) for calculating amplitude spectrum (A) for each of the frames, applying a plurality of window functions (BPF) each having a different frequency band (wk) to the amplitude spectrum to generate amplitude spectrum (M) for each frequency band, and outputting, as the first feature value, a value calculated on the basis of a change in the amplitude spectrum provided for each frequency band between the frames; and second feature value calculation portion (S167) having a filter (FBB) that outputs a value in response to each input of a value corresponding to a frame, that has keeping portion (db) for keeping the output value for a certain period of time, and that combines the input value and the value kept for the certain period of time at a certain ratio and outputs the combined value, the second feature value calculation portion outputting, as a sequence of the second feature values, a data sequence obtained by inputting, to the filter, a data sequence obtained by reversing a time sequence of a data sequence obtained by inputting a sequence of the first feature values to the filter.
The sound signal analysis apparatus configured as above can select a probability model satisfying a certain criterion (a probability model such as the most likely probability model or a maximum a posteriori probability model) of a sequence of observation likelihoods calculated by use of the first feature values indicative of feature relating to existence of beat and the second feature values indicative of feature relating to tempo to concurrently (jointly) estimate beat positions and changes in tempo in a musical piece. Unlike the above-described related art, therefore, the sound signal analysis apparatus of the present invention will not present a problem that a low accuracy of estimation of either beat positions or tempo causes low accuracy of estimation of the other. As a result, the sound signal analysis apparatus can enhance estimation accuracy of beat positions and changes in tempo in a musical piece, compared with the related art.
Furthermore, it is a further feature of the present invention that the sound signal analysis apparatus further includes correction information input portion (11, S23) for inputting correction information indicative of corrected content of one of or both of a beat position and a change in tempo in the musical piece; observation likelihood correction portion (S23) for correcting the observation likelihoods in accordance with the input correction information; and re-estimation portion (S23, S18) for re-estimating a beat position and a change in tempo in the musical piece concurrently by selecting, by use of the estimation portion, a probability model whose sequence of the corrected observation likelihoods satisfies the certain criterion from among the plurality of probability models.
In accordance with user's input correction information, as a result, the sound signal analysis apparatus corrects observation likelihoods, and re-estimates beat positions and changes in tempo in a musical piece in accordance with the corrected observation likelihoods. Therefore, the sound signal analysis apparatus re-calculates (re-selects) states of one or more frames situated in front of and behind the corrected frame. Consequently, the sound signal analysis apparatus can obtain estimation results which bring about smooth changes in beat intervals (that is, tempo) from the corrected frame to the one or more frames situated in front of and behind the corrected frame.
Furthermore, the present invention can be embodied not only as the invention of the sound signal analysis apparatus, but also as an invention of a sound signal analysis method and an invention of a computer program applied to the apparatus.
A sound signal analysis apparatus 10 according to an embodiment of the present invention will now be described. As described below, the sound signal analysis apparatus 10 receives sound signals indicative of a musical piece, and detects beat positions and changes in tempo of the musical piece. As indicated in
The input operating elements 11 are formed of switches capable of on/off operation (e.g., a numeric keypad for inputting numeric values), volumes or rotary encoders capable of rotary operation, volumes or linear encoders capable of sliding operation, a mouse, a touch panel and the like. These operating elements are manipulated with a player's hand to select a musical piece to analyze, to start or stop analysis of sound signals, to reproduce or stop the musical piece (to output or stop sound signals from the later-described sound system 16), or to set various kinds of parameters on analysis of sound signals. In response to the player's manipulation of the input operating elements 11, operational information indicative of the manipulation is supplied to the later-described computer portion 12 via the bus BS.
The computer portion 12 is formed of a CPU 12a, a ROM 12b and a RAM 12c which are connected to the bus BS. The CPU 12a reads out a sound signal analysis program and its subroutines which will be described in detail later from the ROM 12b, and executes the program and subroutines. In the ROM 12b, not only the sound signal analysis program and its subroutines but also initial setting parameters and various kinds of data such as graphic data and text data for generating display data indicative of images which are to be displayed on the display unit 13 are stored. In the RAM 12c, data necessary for execution of the sound signal analysis program is temporarily stored.
The display unit 13 is formed of a liquid crystal display (LCD). The computer portion 12 generates display data indicative of content which is to be displayed by use of graphic data, text data and the like, and supplies the generated display data to the display unit 13. The display unit 13 displays images on the basis of the display data supplied from the computer portion 12. At the time of selection of a musical piece to analyze, for example, a list of titles of musical pieces is displayed on the display unit 13. At the time of completion of analysis, for example, a beat/tempo information list indicative of beat positions and changes in tempo and its graphs (see
The storage device 14 is formed of high-capacity nonvolatile storage media such as HDD, FDD, CD-ROM, MO and DVD, and their drive units. In the storage device 14, sets of musical piece data indicative of musical pieces, respectively, are stored. Each set of musical piece data is formed of a plurality of sample values obtained by sampling a musical piece at certain sampling periods (1/44100 s, for example), while the sample values are sequentially recorded in successive addresses of the storage device 14. Each set of musical piece data also includes title information representative of the title of the musical piece and data size information representative of the amount of the set of musical piece data. The sets of musical piece data may be previously stored in the storage device 14, or may be retrieved from an external apparatus via the external interface circuit 15 which will be described later. The musical piece data stored in the storage device 14 is read by the CPU 12a to analyze beat positions and changes in tempo in the musical piece.
The external interface circuit 15 has a connection terminal which enables the sound signal analysis apparatus 10 to connect with an external apparatus such as an electronic musical apparatus and a personal computer. The sound signal analysis apparatus 10 can also connect to a communication network such as a LAN (Local Area Network) and the Internet via the external interface circuit 15.
The sound system 16 has a D/A converter for converting musical piece data to analog tone signals, an amplifier for amplifying the converted analog tone signals, and a pair of right and left speakers for converting the amplified analog tone signals to acoustic sound signals and outputting the acoustic sound signals. In response to user's instructions for reproducing a musical piece which is to be analyzed by use of the input operating elements 11, the CPU 12a supplies the musical piece data which is to be analyzed to the sound system 16. As a result, the user can listen to the musical piece which the user intends to analyze.
Next, the operation of the sound signal analysis apparatus 10 configured as described above will be explained. First, the operation of the sound signal analysis apparatus 10 will be briefly explained. The musical piece which is to be analyzed is separated into a plurality of frames ti {i=0, 1, . . . , last}. For each frame ti, furthermore, onset feature values XO representative of a feature relating to existence of beat and BPM feature values XB representative of a feature relating to tempo are calculated. From among probability models (Hidden Markov Models) described as sequences of states qb,n classified according to a combination of a value of the beat period b (a value proportional to the reciprocal of the tempo) in a frame ti and a value of the number of frames n between the next beat, a probability model having the most likely sequence of observation likelihoods representative of the probability of concurrent observation of the onset feature value XO and the BPM feature value XB as observed values is selected (see
Next, the operation of the sound signal analysis apparatus 10 will be explained concretely. When the user turns on a power switch (not shown) of the sound signal analysis apparatus 10, the CPU 12a reads out a sound signal analysis program of
The CPU 12a starts a sound signal analysis process at step S10. At step S11, the CPU 12a reads title information included in the sets of musical piece data stored in the storage device 14, and displays a list of titles of the musical pieces on the display unit 13. Using the input operating elements 11, the user selects a set of musical piece data which the user desires to analyze from among the musical pieces displayed on the display unit 13. The sound signal analysis process may be configured such that when the user selects a set of musical piece data which is to be analyzed at step S11, a part or the entirety of the musical piece represented by the set of musical piece data is reproduced so that the user can confirm the content of the musical piece data.
At step S12, the CPU 12a makes initial settings for sound signal analysis. More specifically, the CPU 12a keeps a storage area appropriate to data size information of the selected set of musical piece data in the RAM 12c, and reads the selected set of musical piece data into the kept storage area. Furthermore, the CPU 12a keeps an area for temporarily storing a beat/tempo information list, the onset feature values XO, the BPM feature values XB and the like indicative of analyzed results in the RAM 12c.
The results analyzed by the program are to be stored in the storage device 14, which will be described in detail later (step S21). If the selected musical piece has been already analyzed by this program, the analyzed results are stored in the storage device 14. At step S13, therefore, the CPU 12a searches for existing data on the analysis of the selected musical piece (hereafter, simply referred to as existing data). If there is existing data, the CPU 12a determines “Yes” at step S14 to read the existing data into the RAM 12c at step S15 to proceed to step S19 which will be described later. If there is no existing data, the CPU 12a determines “No” at step S14 to proceed to step S16.
At step S16, the CPU 12a reads out a feature value calculation program indicated in
At step S161, the CPU 12a starts a feature value calculation process. At step S162, the CPU 12a divides the selected musical piece at certain time intervals as indicated in
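For illustration, the frame division of step S162 may be sketched as follows. Python is used here only as convenient notation; the frame length of 512 samples and the 44100 Hz signal length are assumed example values, not values fixed by the embodiment:

```python
import numpy as np

def split_into_frames(samples, frame_len=512):
    """Split a 1-D sample array into consecutive frames t_0, t_1, ...
    The frame length is an assumed example value; the embodiment states
    only that the piece is divided at certain time intervals."""
    n_frames = len(samples) // frame_len
    # drop the trailing partial frame and reshape to (n_frames, frame_len)
    return np.reshape(samples[:n_frames * frame_len], (n_frames, frame_len))

# one second of silence at an assumed 44100 Hz sampling rate
frames = split_into_frames(np.zeros(44100), frame_len=512)
```

Each row of the returned array then corresponds to one frame ti of the musical piece.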
At step S163, the CPU 12a performs a short-time Fourier transform for each frame to figure out an amplitude A (fj, ti) of each frequency bin fj {j=1, 2, . . . } as indicated in
At step S165, the CPU 12a calculates the onset feature value XO (ti) of frame ti on the basis of the time-varying amplitudes M. As indicated in step S165 of
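A minimal sketch of the onset feature calculation of step S165 may look as follows, assuming that the change in the per-band amplitudes M (wk, ti) between frames is measured as a half-wave-rectified frame-to-frame increase summed over the bands (one common choice; the embodiment states only that the onset feature value is based on the change in M between frames):

```python
import numpy as np

def onset_feature(band_amplitudes):
    """band_amplitudes: array of shape (n_bands, n_frames) holding the
    per-band amplitudes M(w_k, t_i).  The onset feature XO(t_i) is taken
    here as the half-wave-rectified increase from the previous frame,
    summed over the bands (an assumed measure of the change in M)."""
    diff = np.diff(band_amplitudes, axis=1)   # M(w_k,t_i) - M(w_k,t_{i-1})
    flux = np.maximum(diff, 0.0).sum(axis=0)  # keep only increases, sum bands
    return np.concatenate(([0.0], flux))      # XO(t_0) defined as 0 here

M = np.array([[0.0, 1.0, 1.0, 3.0],
              [0.0, 2.0, 1.0, 1.0]])
XO = onset_feature(M)
```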
By use of the onset feature values XO (t0), XO (t1), . . . , the CPU 12a then calculates the BPM feature value XB for each frame ti. The BPM feature value XB (ti) of frame ti is represented as a set of BPM feature values XBb (ti) {b=1, 2, . . . } calculated for each beat period b (see
At step S167, the CPU 12a obtains the sequence XBb(t){=XBb(t0), XBb(t1), . . . } of the BPM feature values by inputting a data sequence obtained by reversing the sequence XDb(t) of data XDb in time series to the filter bank FBB. As a result, the phase shift between the phase of the onset feature values XO(t0), XO (t1), . . . and the phase of the BPM feature values XBb(t0), XBb(t1), . . . can be made “0”. The BPM feature values XB(ti) calculated as above are exemplified in
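The zero-phase property obtained at step S167 may be illustrated as follows: a comb-like filter of the kind described for the filter bank FBB (each output combines the current input with the output kept db frames earlier at a certain ratio) is applied forward, and then applied again to the time-reversed result, which is finally reversed back. The mixing ratio alpha is an assumed example value:

```python
import numpy as np

def comb_filter(x, delay, alpha=0.5):
    """Filter of the kind used in the filter bank FBB: each output mixes
    the current input with the output kept 'delay' frames earlier.
    The mixing ratio alpha is an assumed example value."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        kept = y[t - delay] if t >= delay else 0.0
        y[t] = (1.0 - alpha) * x[t] + alpha * kept
    return y

def zero_phase_comb(xo, delay, alpha=0.5):
    """Forward pass over the onset feature sequence, then a second pass
    over the time-reversed result, cancelling the phase shift as
    described at step S167."""
    forward = comb_filter(xo, delay, alpha)               # sequence XD_b(t)
    return comb_filter(forward[::-1], delay, alpha)[::-1] # sequence XB_b(t)
```

Running the forward pass alone delays the response relative to the onset sequence; the second, reversed pass applies an equal delay in the opposite direction, so the combined result is aligned with XO (t0), XO (t1), . . . .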
At step S168, the CPU 12a terminates the feature value calculation process to proceed to step S17 of the sound signal analysis process (main routine).
At step S17, the CPU 12a reads out a log observation likelihood calculation program indicated in
At step S171, the CPU 12a starts the log observation likelihood calculation process. Then, as explained below, a likelihood P (XO(ti)|Zb,n(ti)) of the onset feature value XO(ti) and a likelihood P (XB(ti)|Zb,n(ti)) of the BPM feature value XB(ti) are calculated. The above-described “Zb=β,n=η (ti)” represents the occurrence of only the state qb=β,n=η where the value of the beat period b is “β” in frame ti, with the value of the number n of frames between the next beat being “η”. In frame ti, more specifically, the state qb=β,n=η and a state qb≠β,n≠η cannot occur concurrently. Therefore, the likelihood P (XO(ti)|Zb=β,n=η (ti)) represents the probability of observation of the onset feature value XO(ti) on condition that the value of the beat period b is “β” in frame ti, with the value of the number n of frames between the next beat being “η”. Furthermore, the likelihood P (XB(ti)|Zb=β,n=η (ti)) represents the probability of observation of the BPM feature value XB(ti) on condition that the value of the beat period b is “β” in frame ti, with the value of the number n of frames between the next beat being “η”.
At step S172, the CPU 12a calculates the likelihood P (XO(ti)|Zb,n(ti)). Assume that if the value of the number n of frames between the next beat is “0”, the onset feature values XO are distributed in accordance with the first normal distribution with a mean value of “3” and a variance of “1”. In other words, the value obtained by assigning the onset feature value XO(ti) as a random variable of the first normal distribution is the likelihood P (XO(ti)|Zb,n=0 (ti)). Furthermore, assume that if the value of the beat period b is “β”, with the value of the number n of frames between the next beat being “β/2”, the onset feature values XO are distributed in accordance with the second normal distribution with a mean value of “1” and a variance of “1”. In other words, the value obtained by assigning the onset feature value XO(ti) as a random variable of the second normal distribution is the likelihood P (XO(ti)|Zb=β,n=β/2 (ti)). Furthermore, assume that if the value of the number n of frames between the next beat is neither “0” nor “β/2”, the onset feature values XO are distributed in accordance with the third normal distribution with a mean value of “0” and a variance of “1”. In other words, the value obtained by assigning the onset feature value XO(ti) as a random variable of the third normal distribution is the likelihood P (XO(ti)|Zb,n≠0,β/2 (ti)).
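The three normal distributions of step S172 can be written down directly. The following sketch returns the likelihood P (XO(ti)|Zb,n(ti)) as the density of a normal distribution with variance 1, the mean being chosen according to n and β as in the text:

```python
import math

def onset_likelihood(xo, n, beta):
    """P(XO(t_i) | Z_{b,n}(t_i)) per step S172: normal density with
    variance 1 and mean 3 when n == 0, mean 1 when n == beta/2, and
    mean 0 otherwise."""
    if n == 0:
        mean = 3.0
    elif n == beta / 2:
        mean = 1.0
    else:
        mean = 0.0
    # density of N(mean, 1) evaluated at the observed onset feature value
    return math.exp(-0.5 * (xo - mean) ** 2) / math.sqrt(2.0 * math.pi)
```

For instance, an onset feature value near “3” is most likely under a beat state (n = 0) and unlikely under the third distribution, which is what lets strong onsets pull the estimated beat positions toward those frames.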
At step S173, the CPU 12a calculates the likelihood P (XB(ti)|Zb,n(ti)). The likelihood P (XB(ti)|Zb=γ,n (ti)) is equivalent to goodness of fit of the BPM feature value XB(ti) with respect to template TPγ{γ=1, 2, . . . } indicated in
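The goodness of fit at step S173 may be computed, for example, as a normalized inner product between the BPM feature vector XB(ti) and a template TPγ. The embodiment does not fix the fit measure, so the following is only one plausible choice:

```python
import numpy as np

def template_fit(xb, template):
    """Goodness of fit of the BPM feature vector XB(t_i) to a template
    TP_gamma (step S173).  The normalized inner product (cosine
    similarity) is an assumed fit measure, not one fixed by the text."""
    xb = np.asarray(xb, dtype=float)
    tp = np.asarray(template, dtype=float)
    return float(np.dot(xb, tp) / (np.linalg.norm(xb) * np.linalg.norm(tp)))
```

A BPM feature vector whose peaks line up with the peaks of template TPγ then yields a fit near 1, i.e. a high likelihood P (XB(ti)|Zb=γ,n (ti)).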
At step S174, the CPU 12a combines the log of the likelihood P (XO(ti)|Zb,n (ti)) and the log of the likelihood P(XB(ti)|Zb,n(ti)) and define the combined result as log observation likelihood Lb,n (ti). The same result can be similarly obtained by defining, as the log observation likelihood Lb,n (ti), a log of a result obtained by combining the likelihood P (XO(ti)|Zb,n (ti)) and the likelihood P (XB(ti)|Zb,n(ti)). At step S175, the CPU 12a terminates the log observation likelihood calculation process to proceed to step S18 of the sound signal analysis process (main routine).
At step S18, the CPU 12a reads out the beat/tempo concurrent estimation program indicated in
In a concrete example which will be described later, it is assumed for the sake of simplicity that the value of the beat period b of musical pieces which will be analyzed is “3”, “4”, or “5”. As a concrete example, more specifically, procedures of the beat/tempo concurrent estimation process of a case where the log observation likelihoods Lb,n (ti) are calculated as exemplified in
Hereafter, the beat/tempo concurrent estimation process will be explained concretely. At step S181, the CPU 12a starts the beat/tempo concurrent estimation process. At step S182, by use of the input operating elements 11, the user inputs initial conditions CSb,n of the likelihoods C corresponding to the respective states qb,n as indicated in
At step S183, the CPU 12a calculates the likelihoods Cb,n (ti) and the states Ib,n (ti). The likelihood Cb=βe,n=ηe (t0) of the state qb=βe,n=ηe where the value of the beat period b is “βe” at frame t0 with the value of the number n of frames being “ηe” can be obtained by combining the initial condition CSb=βe,n=ηe and the log observation likelihood Lb=βe,n=ηe (t0).
Furthermore, at the transition from the state qb=βs,n=ηs to the state qb=βe,n=ηe, the likelihoods Cb=βe,n=ηe (ti) {i&gt;0} can be calculated as follows. If the number n of frames of the state qb=βs,n=ηs is not “0” (that is, ηs≠0), the likelihood Cb=βe,n=ηe (ti) is obtained by combining the likelihood Cb=βe,n=ηe+1 (ti−1), the log observation likelihood Lb=βe,n=ηe (ti), and the log transition probability T. In this embodiment, however, since the log transition probability T of a case where the number n of frames of a state which precedes a transition is not “0” is “0”, the likelihood Cb=βe,n=ηe (ti) is substantially obtained by combining the likelihood Cb=βe,n=ηe+1 (ti−1) and the log observation likelihood Lb=βe,n=ηe (ti) (Cb=βe,n=ηe (ti)=Cb=βe,n=ηe+1 (ti−1)+Lb=βe,n=ηe (ti)). In this case, furthermore, the state Ib=βe,n=ηe (ti) is the state qb=βe,n=ηe+1. In an example where the likelihoods C are calculated as indicated in
Furthermore, the likelihood Cb=βe,n=ηe (ti) of a case where the number n of frames of the state qb=βs,n=ηs is “0” (ηs=0) is calculated as follows. In this case, the value of the beat period b can increase or decrease with state transition. Therefore, the log transition probability T is combined with the likelihood Cβe−1,0 (ti−1), the likelihood Cβe,0 (ti−1) and the likelihood Cβe+1,0 (ti−1), respectively. Then, the maximum value of the combined results is further combined with the log observation likelihood Lb=βe,n=ηe (ti) to define the combined result as the likelihood Cb=βe,n=ηe (ti). Furthermore, the state Ib=βe,n=ηe (ti) is a state q selected from among state qβe−1,0, state qβe,0, and state qβe+1,0. More specifically, the log transition probability T is added to the likelihood Cβe−1,0 (ti−1), the likelihood Cβe,0 (ti−1) and the likelihood Cβe+1,0 (ti−1) of the state qβe−1,0, state qβe,0, and state qβe+1,0, respectively, to select a state having the largest added value to define the selected state as the state Ib=βe,n=ηe (ti). More strictly, the likelihoods Cb,n (ti) have to be normalized. Even without normalization, however, the results of estimation of beat positions and changes in tempo are mathematically the same.
For instance, the likelihood C4,3 (t4) is calculated as follows. In a case where the state preceding the transition is state q3,0, the value of the likelihood C3,0 (t3) is “0.4” with the log transition probability T being “−0.6”, so that a value obtained by combining the likelihood C3,0 (t3) and the log transition probability T is “−0.2”. Furthermore, in a case where the state preceding the transition is state q4,0, the value of the likelihood C4,0 (t3) is “3” with the log transition probability T being “−0.2”, so that a value obtained by combining the likelihood C4,0 (t3) and the log transition probability T is “2.8”. Furthermore, in a case where the state preceding the transition is state q5,0, the value of the likelihood C5,0 (t3) is “1” with the log transition probability T being “−0.6”, so that a value obtained by combining the likelihood C5,0 (t3) and the log transition probability T is “0.4”. Therefore, the value obtained by combining the likelihood C4,0 (t3) and the log transition probability T is the largest. Furthermore, the value of the log observation likelihood L4,3 (t4) is “0”. Therefore, the value of the likelihood C4,3 (t4) is “2.8” (=2.8+0), so that the state I4,3 (t4) is the state q4,0.
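The update rule for a state reached from a beat state (n = 0) can be sketched as follows; the concrete numbers reproduce the C4,3 (t4) example above:

```python
def viterbi_step_from_beat(prev_C, log_T, log_L):
    """Update for a state reached from states whose frame count n is 0
    (step S183): add the log transition probability to each candidate
    predecessor likelihood, keep the best, then add the log observation
    likelihood.  Returns the new likelihood C and the index of the
    chosen predecessor state (recorded as I)."""
    scored = [c + t for c, t in zip(prev_C, log_T)]
    best = max(range(len(scored)), key=lambda k: scored[k])
    return scored[best] + log_L, best

# The C_{4,3}(t4) example: predecessors q_{3,0}, q_{4,0}, q_{5,0}
C, I = viterbi_step_from_beat(prev_C=[0.4, 3.0, 1.0],
                              log_T=[-0.6, -0.2, -0.6],
                              log_L=0.0)
```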
When completing the calculation of the likelihoods Cb,n (ti) and the states Ib,n (ti) of all the states qb,n for all the frames ti, the CPU 12a proceeds to step S184 to determine the sequence Q of the maximum likelihood states (={qmax (t0), qmax (t1), . . . , qmax (tlast)}) as follows. First, the CPU 12a defines a state qb,n which is in frame tlast and has the maximum likelihood Cb,n (tlast) as the state qmax (tlast). The value of the beat period b of the state qmax (tlast) is denoted as “βm”, while the value of the number n of frames is denoted as “ηm”. More specifically, the state Iβm,ηm (tlast) is the state qmax (tlast-1) of the frame tlast-1 which immediately precedes the frame tlast. The state qmax (tlast-2), the state qmax (tlast-3), . . . of frame tlast-2, frame tlast-3, . . . are also determined similarly to the state qmax (tlast-1). More specifically, the state Iβm,ηm (ti+1), where the value of the beat period b of the state qmax (ti+1) of frame ti+1 is denoted as “βm” with the value of the number n of frames being denoted as “ηm”, is the state qmax (ti) of the frame ti which immediately precedes the frame ti+1. As described above, the CPU 12a sequentially determines the states qmax from frame tlast-1 toward frame t0 to determine the sequence Q of the maximum likelihood states.
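The backtracking of step S184 may be sketched as follows, with I[i] holding, for each state at frame ti, its stored predecessor state at frame ti−1 (this data layout, and the use of arbitrary hashable state labels, are assumptions of the sketch):

```python
def backtrack(C_last, I):
    """Determine the maximum likelihood state sequence Q (step S184):
    start from the state having the largest likelihood in the last
    frame and follow the stored predecessor states I backwards to t_0.
    C_last maps each state at t_last to its likelihood; I[i] maps each
    state at frame t_i to its predecessor at t_{i-1} (I[0] is unused)."""
    q = max(C_last, key=C_last.get)      # q_max(t_last)
    path = [q]
    for i in range(len(I) - 1, 0, -1):   # frames t_last ... t_1
        q = I[i][q]                      # q_max(t_{i-1}) is I(t_i) of q
        path.append(q)
    path.reverse()                       # reorder as t_0 ... t_last
    return path
```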
In the example shown in
At step S185, the CPU 12a terminates the beat/tempo concurrent estimation process to proceed to step S19 of the sound signal analysis process (main routine).
At step S19, the CPU 12a calculates “BPM-ness”, “probability based on observation”, “beatness”, “probability of existence of beat”, and “probability of absence of beat” for each frame ti (see expressions indicated in
By use of the “BPM-ness”, “probability based on observation”, “beatness”, “probability of existence of beat”, and “probability of absence of beat”, the CPU 12a displays a beat/tempo information list indicated in
Furthermore, in a case where existing data has been found by the search for existing data at step S13 of the sound signal analysis process, the CPU 12a displays the beat/tempo information list, the graph indicative of changes in tempo, and the graph indicative of beat positions on the display unit 13 at step S19 by use of various kinds of data on the previous analysis results read into the RAM 12c at step S15.
At step S20, the CPU 12a displays a message asking whether the user desires to terminate the sound signal analysis process or not on the display unit 13, and waits for user's instructions. Using the input operating elements 11, the user instructs either to terminate the sound signal analysis process or to execute a later-described beat/tempo information correction process. For instance, the user clicks on an icon with a mouse. If the user has instructed to terminate the sound signal analysis process, the CPU 12a determines “Yes” and proceeds to step S21 to store various kinds of data on the results of analysis, such as the likelihoods C, the states I, and the beat/tempo information list, in the storage device 14 so that the various kinds of data are associated with the title of the musical piece, and then proceeds to step S22 to terminate the sound signal analysis process.
If the user has instructed to continue the sound signal analysis process at step S20, the CPU 12a determines “No” and proceeds to step S23 to execute the beat/tempo information correction process. First, the CPU 12a waits until the user completes input of correction information. Using the input operating elements 11, the user inputs a corrected value of the “BPM-ness”, “probability of existence of beat” or the like. For instance, the user selects a frame that the user desires to correct with the mouse, and inputs a corrected value with the numeric keypad. Then, a display mode (color, for example) of “F” located on the right of the corrected item is changed in order to explicitly indicate the correction of the value. The user can correct respective values of a plurality of items. On completion of input of corrected values, the user informs the apparatus of the completion of input of correction information by use of the input operating elements 11. Using the mouse, for example, the user clicks on an icon that indicates completion of correction. The CPU 12a updates either of or both of the likelihood P (XO (ti)|Zb,n (ti)) and the likelihood P (XB (ti)|Zb,n (ti)) in accordance with the corrected value. For instance, in a case where the user has corrected such that the “probability of existence of beat” in frame ti is raised with the value of the number n of frames on the corrected value being “ηe”, the CPU 12a sets the likelihood P (XB (ti)|Zb,n≠ηe (ti)) at a value which is sufficiently small. At frame ti, as a result, the probability that the value of the number n of frames is “ηe” is relatively the highest. For instance, furthermore, in a case where the user has corrected the “BPM-ness” of frame ti such that the probability that the value of the beat period b is “βe” is raised, the CPU 12a sets the likelihoods P (XB (ti)|Zb≠βe,n (ti)) of states where the value of the beat period b is not “βe” at a value which is sufficiently small.
At frame ti, as a result, the probability that the value of the beat period b is “βe” is relatively the highest. Then, the CPU 12a terminates the beat/tempo information correction process to proceed to step S18 to execute the beat/tempo concurrent estimation process again by use of the corrected log observation likelihoods L.
The sound signal analysis apparatus 10 configured as above can select the probability model with the most likely sequence of log observation likelihoods L, calculated by use of the onset feature values XO relating to beat position and the BPM feature values XB relating to tempo, to concurrently (jointly) estimate beat positions and changes in tempo in a musical piece. Unlike the above-described related art, therefore, the sound signal analysis apparatus 10 does not suffer from the problem that low accuracy of estimation of either beat positions or tempo causes low accuracy of estimation of the other. As a result, the sound signal analysis apparatus 10 can enhance the estimation accuracy of beat positions and changes in tempo in a musical piece, compared with the related art.
In this embodiment, furthermore, the transition probability (log transition probability) between states is set such that transition is allowed only from a state where the value of the number n of frames is "0" to a state of the same value of the beat period b or a state where the value of the beat period b differs by "1". Therefore, the sound signal analysis apparatus 10 can prevent erroneous estimation which would bring about abrupt changes in tempo between frames. Consequently, the sound signal analysis apparatus 10 can obtain estimation results which bring about natural beat positions and changes in tempo as a musical piece. For musical pieces in which the tempo abruptly changes, the sound signal analysis apparatus 10 may set the transition probability (log transition probability) between states such that a transition from a state where the value of the number n of frames to the next beat is "0" to a state of a largely different value of the beat period b is also allowed.
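The transition constraint just described may be sketched as a predicate over pairs of states. This is an illustrative sketch, not the embodiment's implementation; in particular, the assumption that the counter restarts at b' − 1 after a beat, and the max_tempo_jump parameter (1 in the embodiment, larger for abruptly changing pieces), are the sketch's own conventions:

```python
def transition_allowed(b_from, n_from, b_to, n_to, max_tempo_jump=1):
    """Whether state q(b_from, n_from) may move to q(b_to, n_to).
    Mid-beat (n > 0), the count simply decreases within the same beat
    period; only when n reaches 0 may the beat period change, and then
    by at most max_tempo_jump."""
    if n_from > 0:
        # within a beat: count down, beat period unchanged
        return b_to == b_from and n_to == n_from - 1
    # at a beat (n == 0): new beat period must stay close to the old one,
    # and the counter restarts at b_to - 1 (assumed convention)
    return abs(b_to - b_from) <= max_tempo_jump and n_to == b_to - 1
```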
Since the sound signal analysis apparatus 10 uses the Viterbi algorithm for the beat/tempo concurrent estimation process, the sound signal analysis apparatus 10 can reduce the amount of calculation, compared to cases where a different algorithm (a sampling method, the forward-backward algorithm or the like, for example) is used.
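A generic log-domain Viterbi recursion over such states can be sketched as follows. The flat state indexing and argument names are illustrative assumptions; the point of the sketch is the O(T·S²) dynamic program, versus exponential enumeration of all state sequences:

```python
import numpy as np

def viterbi(log_obs, log_trans, log_init):
    """Most likely state sequence for a hidden Markov model.
    log_obs:  (T, S) log observation likelihoods L per frame and state.
    log_trans:(S, S) log transition probabilities (-inf where forbidden).
    log_init: (S,)   log initial state distribution."""
    T, S = log_obs.shape
    delta = log_init + log_obs[0]          # best log score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from, to)
        back[t] = np.argmax(scores, axis=0)          # best predecessor per state
        delta = scores[back[t], np.arange(S)] + log_obs[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):          # trace backpointers from the end
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```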
In accordance with correction information input by the user, furthermore, the sound signal analysis apparatus 10 corrects the log observation likelihoods L and re-estimates beat positions and changes in tempo in the musical piece in accordance with the corrected log observation likelihoods L. In doing so, the sound signal analysis apparatus 10 re-calculates (re-selects) the states qmax of the maximum likelihoods of one or more frames situated before and after the corrected frame. Consequently, the sound signal analysis apparatus 10 can obtain estimation results which bring about smooth changes in beat intervals and tempo from the corrected frame to the one or more frames situated before and after it.
The information about changes in beat position and tempo in a musical piece estimated as above is used for search for musical piece data and search for accompaniment data representative of accompaniment, for example. In addition, the information is also used for automatic generation of accompaniment part and for automatic addition of harmony for an analyzed musical piece.
Furthermore, the present invention is not limited to the above-described embodiment, but can be modified variously without departing from the object of the invention.
For example, the above-described embodiment selects a probability model of the most likely observation likelihood sequence indicative of the probability of concurrent observation of the onset feature values XO and the BPM feature values XB as observation values. However, the criteria for selecting the probability model are not limited to those of the embodiment. For instance, the probability model that maximizes the a posteriori probability may be selected.
Furthermore, the above-described embodiment is designed, for the sake of simplicity, such that the length of each frame is 125 ms. However, each frame may have a shorter length (e.g., 5 ms). The reduced frame length can contribute to improvement in resolution relating to estimation of beat position and tempo. For example, the enhanced resolution enables tempo estimation in increments of 1 BPM. Furthermore, although the above-described embodiment is designed to have frames of the same length, the frames may have different lengths. In such a case as well, the onset feature values XO can be calculated similarly to the embodiment. For calculation of the BPM feature values XB, in this case, it is preferable to change the amount of delay of the comb filters in accordance with the frame length. For calculation of the likelihoods C, furthermore, the greatest common divisor F of the respective lengths of the frames (that is, the greatest common divisor of the numbers of samples which form the frames) is figured out. Then, it is preferable to define the probability of transition from a state qb,n (n≠0) to a state qb,n−L(τ) as 100% if the length of a frame ti (=τ) is represented by L(τ)×F.
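The greatest-common-divisor bookkeeping for unequal frame lengths can be illustrated with a small helper. This is a sketch of the arithmetic only; the function name and the representation of frame lengths as sample counts are assumptions:

```python
from functools import reduce
from math import gcd

def frame_steps(frame_sample_lengths):
    """Given the sample counts of the frames, compute F, the greatest common
    divisor of those counts, and the step L(ti) for each frame ti, so that a
    frame of length L(ti) x F samples advances the count n by L(ti), i.e.
    q(b, n) -> q(b, n - L(ti)) with probability 100% (for n != 0)."""
    F = reduce(gcd, frame_sample_lengths)
    return F, [length // F for length in frame_sample_lengths]
```

With equal-length frames, every L(ti) is 1 and the scheme reduces to the embodiment's one-step countdown.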
In the above-described embodiment, furthermore, a whole musical piece is subjected to analysis. However, only a part of a musical piece (e.g., a few bars) may be subjected to analysis. In this case, the embodiment may be modified to allow a user to select a portion of the input musical piece data to be defined as the portion to analyze. In addition, only a single part (e.g., the rhythm section) of a musical piece may be subjected to analysis.
For tempo estimation, furthermore, the above-described embodiment may be modified such that a user can specify a tempo range which is given a high priority in estimation. At step S12 of the sound signal analysis process, more specifically, the sound signal analysis apparatus 10 may display terms indicative of tempo such as "Presto" and "Moderato" so that the user can choose a tempo range which is to be given a high priority in estimation. In a case where the user chooses "Presto", for instance, the sound signal analysis apparatus 10 sets the log observation likelihoods L for states outside a range of BPM=160 to 190 at a sufficiently small value. As a result, a tempo in the range of BPM=160 to 190 can be preferentially estimated. Consequently, the sound signal analysis apparatus 10 can enhance accuracy in tempo estimation in a case where the user knows an approximate tempo of the musical piece subjected to analysis.
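The prioritization of a user-chosen tempo range may be sketched as a bias applied to the log observation likelihoods. The mapping of states to BPM values, the penalty constant, and the "Presto" range constant are assumptions of this sketch:

```python
import numpy as np

PRESTO_RANGE = (160, 190)  # assumed BPM range associated with "Presto"

def prioritize_tempo_range(log_obs, state_bpm, bpm_range, penalty=-1e9):
    """Add a large negative value to the log observation likelihoods L of
    all states whose BPM falls outside the user-chosen range, so tempos
    inside the range are preferentially estimated.
    log_obs: (T, S) log likelihoods; state_bpm: (S,) BPM of each state."""
    low, high = bpm_range
    outside = (state_bpm < low) | (state_bpm > high)
    biased = log_obs.copy()
    biased[:, outside] += penalty      # effectively rule these states out
    return biased
```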
In the beat/tempo information correction process (step S23), the user is prompted to input corrections by use of the input operating elements 11. Instead of or in addition to the input operating elements 11, however, the sound signal analysis apparatus 10 may allow the user to input corrections by use of operating elements of an electronic keyboard musical instrument, an electronic percussion instrument or the like connected via the external interface circuit 15. In response to the user's depressions of keys of the electronic keyboard instrument, for example, the CPU 12a calculates a tempo in accordance with the timing of the user's key-depressions and uses the calculated tempo as a corrected value of the "BPM-ness".
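The tempo calculation from key-depression timing amounts to converting the mean inter-tap interval into beats per minute. A minimal sketch of that arithmetic (the function name and the use of the mean interval are this sketch's assumptions):

```python
def tempo_from_taps(tap_times_sec):
    """Estimate BPM from the timestamps (in seconds) of the user's key
    depressions: the mean interval between successive taps gives the beat
    period, and 60 / period gives beats per minute."""
    if len(tap_times_sec) < 2:
        raise ValueError("at least two taps are needed")
    intervals = [b - a for a, b in zip(tap_times_sec, tap_times_sec[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval
```

For example, taps half a second apart yield a tempo of 120 BPM.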
In the embodiment, furthermore, the user can input corrected values for beat positions and tempo as many times as the user desires. However, the embodiment may be modified to disable the user's input of corrected values for beat positions and tempo once the mean value of the "probability of existence of beat" has reached a reference value (e.g., 80%).
As for the beat/tempo information correction process (step S23), furthermore, the embodiment may be modified such that, in addition to correcting the beat/tempo information of a user-specified frame to the user's input value, the beat/tempo information of frames neighboring the user-specified frame is also automatically corrected in accordance with the user's input value. For example, in a case where a few successive frames have the same estimated tempo value and the user corrects the value of one of those frames, the sound signal analysis apparatus 10 may automatically correct the respective tempo values of all of those frames to the user's corrected value.
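The automatic correction of neighboring frames may be sketched as follows: starting from the corrected frame, the correction is propagated in both directions for as long as the original tempo estimate matches. The function name and list-based representation are assumptions of this sketch:

```python
def propagate_tempo_correction(tempo, frame, new_value):
    """Apply the user's corrected tempo value to the whole run of successive
    frames sharing the corrected frame's original estimated tempo."""
    old = tempo[frame]
    corrected = list(tempo)
    i = frame
    while i >= 0 and tempo[i] == old:      # walk left through the run
        corrected[i] = new_value
        i -= 1
    j = frame + 1
    while j < len(tempo) and tempo[j] == old:  # walk right through the run
        corrected[j] = new_value
        j += 1
    return corrected
```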
In the above-described embodiment, furthermore, at step S23, in response to user's indication of completion of input of a corrected value by use of the input operating elements 11, the concurrent estimation of beat position and tempo is carried out again. However, the embodiment may be modified such that the estimation of beat position and tempo is carried out again when a certain period of time (e.g., 10 seconds) has passed without any additional correction of any other values after user's input of at least one corrected value.
Furthermore, the display mode of the beat/tempo information list (
Number | Date | Country | Kind |
---|---|---|---|
2013-51158 | Mar 2013 | JP | national |
Number | Date | Country |
---|---|---|
1835503 | Sep 2007 | EP |
2009265493 | Nov 2009 | JP |
Number | Date | Country | |
---|---|---|---|
20140260912 A1 | Sep 2014 | US |