The present invention relates to musical aids. More particularly, the invention relates to the detection of melody from played music.
At each instant in time, a piece of music consists of a multitude of sounds generated by a variety of acoustic sources acting simultaneously, including various musical instruments, human voices and percussion instruments, possibly corrupted by unintentional effects such as instrument mistuning, background noise, poor recording quality and playback distortion. Some of these sounds may last for a prolonged period of time, up to several seconds, while others may show up for only a very short time, on the order of one tenth of a second or less. Thus each instantaneous composite sound, due to the combination of all the simultaneous sounds present at a given instant, lasts for a time period at most equal to that of the shortest sound.
In turn, the sound generated by each one of the sound sources present at a given instant is composed of a multitude of periodic sinusoidal components, each at a different frequency, with different phase and different amplitude. The collection of all the sinusoidal components of a certain source, when heard at once, sounds like the particular instrument or voice that generates them.
At any given instant, the sound of a musical instrument as well as of a human voice consists of the collection of several sinusoidal components (up to a few tens), referred to as the harmonic components (in short “harmonics”) of the sound source, whose frequencies are all integer multiples of a basic frequency denoted as the fundamental frequency (in short “fundamental”). When a musical instrument plays a single note, for instance, the middle C, which we hear as a sound at frequency f0=261.6 Hz, it in fact simultaneously emits a collection of harmonics at frequencies f0, 2f0, 3f0, . . . etc., each with different amplitude and phase. The sinusoidal component at frequency f0 is the fundamental component, the component at frequency 2f0 is the 2nd harmonic, the component at frequency 3f0 is the 3rd harmonic and so on. A collection of such harmonic components, each of arbitrary amplitude and phase, is referred to as a Fourier series.
At any given instant, the fundamental frequency of the Fourier series of a sound source determines the tone we perceive, for instance, whether we hear a bass (lower-frequency) sound or a treble (higher-frequency) sound, while the relative amplitude of the various harmonics in the Fourier series determines the timbre we perceive, namely, whether we hear a violin, a piano, a human voice, or other.
Although the fundamental frequency determines the tone we perceive, the fundamental component itself need not be present (its amplitude may be zero). In fact, in order for us to hear a tone at the fundamental frequency f0, it suffices that some harmonic components that differ in frequency by f0 be present (for instance the second and third harmonics at frequencies 2f0 and 3f0), while all the other harmonics in the Fourier series, as well as the fundamental component at frequency f0, may be missing. This fact is clearly seen in the lower-frequency piano keys, which sound as bass notes although the fundamental component is typically missing, so that the lowest frequency actually present in the sound is the second harmonic, at twice the frequency we perceive.
The human hearing system does not react equally to all frequencies, but behaves according to certain mechanisms referred to as perceptual rules. In particular, up to a certain limit, the human ear is more sensitive to higher frequencies than to lower ones, according to a behavior known as the equal loudness contour (defined in the standard ISO 226:2003). If a treble note and a bass note have the same amplitude, we perceive the treble sound as being much louder than the bass one. Thus, at a given instant, an instrument playing at lower volume and at higher frequency may be heard as dominant compared to an instrument playing at higher volume and lower frequency. Moreover, this perceptual effect applies to each harmonic component separately. Thus, the perceptual power differs from the physical power.
When listening to a piece of music, we hear at once all the harmonics generated by all the instruments playing at a given instant, together with surrounding noise and distortion, and we cannot distinguish which harmonic was generated by which instrument. Our ear will collect all the sounds at the same time, and the combination of the various components will give rise to a multitude of Fourier series, possibly sharing common harmonics, each with its own perceptual loudness, and with harmonic components each possibly arising from a different source. The perceptual loudness of each such Fourier series will be equal to the sum of the individual perceptual powers of the harmonic components in it. In general, our hearing system will perceive the Fourier series with the strongest perceptual loudness as the dominant tone at the given instant. However, the inventor has observed that if two Fourier series have comparable perceptual loudness, the Fourier series with the higher fundamental frequency will be perceived as the dominant one.
It follows from the above that, when listening to a piece of music, the melody we hear is the time sequence of the dominant Fourier series, while such dominant Fourier series may be the result of the combined sounds of different instruments and voices, as well as distortion and noise, rather than the sound of some specific instrument, and there may be no single instrument actually playing the melody.
The present invention relates to a method and apparatus for performing melody detection, thereby to yield a list of sequential musical tones that relates to the melody that the human ear perceives, and the instants when each tone was perceived.
Melody detection should not be confused with music annotation, as the two are fundamentally different. Music annotation attempts to trace all the notes actually played by a specific instrument in order to reconstruct the original music sheet, whether consisting of a single note as for a trumpet, or of multiple notes as for a piano hitting multiple keys at the same time. Melody detection attempts to interpret the global perceptual effect of all the sounds at once, to determine the melody actually perceived by the human ear, rather than the notes actually played by each instrument, and to provide a music sheet including a time sequence of single notes describing that melody.
In one aspect the invention relates to a method for performing melody detection by interpreting the global perceptual effect of all the sounds at once, to determine what is the melody actually perceived by the human ear, and providing a music sheet or a text printout including a time sequence of single notes describing that melody.
According to one embodiment of the invention the method comprises the steps of:
According to one embodiment of the invention step (I) is carried out about 15 times every second, using a novel set of multiple bases, each built so as to separately fulfill the requirements of Heisenberg's uncertainty principle, thereby allowing each frequency component to be detected in the shortest possible time, and where different sets of “mistuned” multiple bases may be used to accommodate mistuned instruments or voices.
According to another embodiment of the invention the method of step (II) further comprises setting a melody threshold as a given percent of the total perceptual power, or a correct detection probability threshold (directly derived from said melody threshold), thereby allowing the presence of a melody above a strong background to be detected. The melody threshold can be set in a broad range, e.g., in the 10%-50% range.
In yet another embodiment of the invention, in the method of step (III) the difference between the power of each frequency component in the optimal detection and the power of the same frequency component found in a previous optimal detection is computed and, if the difference is positive, this difference is assigned as the differential power of the sinusoidal component at the given frequency; otherwise, the differential power is set to zero.
In still another embodiment of the invention the method according to step (VIII) comprises detecting a chord by looking at all the sets of at least three simultaneous long-lasting groups of tones having mutually different fundamental frequencies, and finding the dominant chord by summing up the perceptual power of all the dyadic tones related to each group and selecting the group that has the largest total perceptual power.
The invention is further directed to an N by N selection matrix which, when multiplied by the vector of the power values of the N frequency components found with the least-squares process, selects all the possible Fourier series and generates a vector of N component values, where the value of the nth component corresponds to the cumulative power of the Fourier series whose fundamental frequency corresponds to the nth key.
The number of rows and columns in the N by N selection matrix may vary and, according to one embodiment of the invention, the selection matrix is a 60 by 60 matrix, comprising a first line consisting of 60 values which are all zeros except the first, 13th, 20th and 25th values, which are 1, and wherein line number n is identical to the first line but with the 1 values shifted to the right by n−1 places, and wherein a 1 that is shifted beyond place 60 is discarded.
According to the invention different octaves can be used to perform the melody detection (also referred to herein as “interpretation”), and according to one embodiment of the invention the interpretation is carried out using all the octaves of a standard piano keyboard. According to another embodiment of the invention the interpretation is carried out using only part of the octaves of a standard piano keyboard. According to still another embodiment of the invention the interpretation is carried out using the four and a half octaves starting at the third octave of a standard piano keyboard.
Also encompassed by the invention is a device for performing melody detection, comprising a CPU and memory means associated with said CPU, which memory means contain information about the fundamental frequencies of all or of part of the keys of a standard piano keyboard. According to one embodiment of the invention the device of the invention is adapted to analyze a streaming audio in blocks of 1104 samples and to compare it with the third octave of a standard piano keyboard, at a sampling rate resulting in a sampling time of about 128 milliseconds per block or longer. In one embodiment of the invention the sampling time is about 138 milliseconds per block.
According to another embodiment of the invention the memory location stores samples of signals at fundamental frequencies of each of 12 keys of an octave. In one mode of operation a first set of memory locations refers to the DO3, a second set refers to DO#3, and a third set refers to RE3, and so on.
In one implementation of the invention each set of memory locations contains two vectors of values, one containing samples of a sine function at the frequency corresponding to the first key, and the second containing samples of a cosine function at the frequency corresponding to said first key. For instance, each of said vectors of values may consist of 1104 samples that have been computed beforehand.
The device of the invention is adapted, according to one embodiment, to analyze a streaming audio in blocks of 1104 samples and to compare it with the fourth octave of a standard piano keyboard, at a sampling rate resulting in a processing time of about 64 milliseconds per block or longer. In another embodiment of the invention the device of the invention is adapted to analyze a streaming audio in blocks of 1104 samples and to compare it with the fifth octave of a standard piano keyboard, at a sampling rate resulting in a processing time of about 32 milliseconds per block or longer. In yet another embodiment of the invention the device of the invention is adapted to analyze a streaming audio in blocks of 1104 samples and to compare it with the sixth octave of a standard piano keyboard, at a sampling rate resulting in a processing time of about 16 milliseconds per block or longer. According to yet another embodiment of the invention the device of the invention is adapted to analyze a streaming audio in blocks of 1104 samples and to compare it with the seventh octave of a standard piano keyboard, at a sampling rate resulting in a processing time of about 8 milliseconds per block or longer.
In one embodiment, the device of the invention comprises computation circuits adapted to analyze a matrix containing a random mix of frequencies pertaining to all the keys of all the octaves at the same time, by comparison with the prestored vectors of values, by carrying out a least-squares analysis to find which combination of stored vectors at optimal amplitudes best describes the sampled data.
Other characteristics and advantages of the invention will become apparent as the description proceeds.
In the drawings:
While a detailed description of all steps will be provided hereinafter, including the full mathematical processing, it will be useful for the sake of understanding to describe first the invention in respect of its main building blocks. In the basic description below the process exemplified will make use of the four and a half octaves illustrated in
In order to perform the invention the memory means associated with the CPU must contain information about the fundamental frequencies of all the keys referred to above, as explained in further detail with reference to
As seen, line 1 consists of data pertaining to the 12 keys of the first octave, where set 1 of memory locations refers to the DO3, set 2 refers to DO#3, set 3 refers to RE3, and so on. Each set of memory locations contains two vectors of values, one containing samples of a sine function at the frequency corresponding to Key 1, and the second containing samples of a cosine function at the frequency corresponding to Key 1. Each of said vectors of values consists of 1104 samples. These values have been computed beforehand and are used according to the invention to carry out computations with streaming sampled data, as will be further explained hereinafter. Precomputed values are also contained in the remaining 11 memory sets of line 1.
The length of 138 milliseconds of each memory set of line 1 is dictated by the need to discriminate between the frequencies of two adjacent keys, because the frequency spacing of two adjacent keys is about 5.95% of the lower key's frequency. So, for example, in the case of DO3 (key 1) the frequency is about 130.81 Hz, and for DO#3 the frequency is about 138.59 Hz, so the difference is 7.78 Hz. The time required for recognizing one cycle of the frequency difference is about 1/7.78 seconds = 0.128 seconds, thus in order to ensure proper recognition in this illustrative process a time of 0.138 seconds has been selected, corresponding to 1104 samples.
The same DO in the higher octave, DO4, has a frequency twice as high as that of DO3, namely 261.62 Hz. Therefore the difference in frequency between the two adjacent keys DO4 and DO#4 is 15.56 Hz and therefore the time required for recognizing one cycle of the frequency difference (to discriminate between two adjacent keys) is about 1/15.56 seconds=0.064 seconds, and therefore each vector of values in this octave consists of 1104/2=552 samples, and for each subsequent increase in octave, the number of samples in each vector of values is reduced by a factor of two. Thus, during the time needed for detecting one key of the first octave, one may simultaneously detect two subsequent keys of the second octave, four subsequent keys of the third octave, eight subsequent keys of the fourth octave, and sixteen subsequent keys of the fifth octave. This is shown in
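By way of non-limiting illustration, the adjacent-key spacing and the resulting minimal discrimination time for each of the five octaves discussed above may be computed as follows (Python sketch; the 130.81 Hz value is the DO3 fundamental mentioned above, and the equal-tempered relation used is the standard one):

```python
# Illustrative sketch (not part of the claimed embodiment): adjacent-key spacing
# and the minimal time needed to discriminate two adjacent keys, per octave.
F_DO3 = 130.81  # Hz, fundamental frequency of DO3 (key 1 of the illustrative range)

def key_freq(n):
    """Equal-tempered fundamental of key index n (n = 1 corresponds to DO3)."""
    return F_DO3 * 2 ** ((n - 1) / 12)

for octave in range(5):                       # octaves #3 .. #7 of the example
    n = 1 + 12 * octave                       # first key of the octave
    delta_f = key_freq(n + 1) - key_freq(n)   # spacing to the adjacent key (~5.95%)
    print(f"octave #{octave + 3}: f = {key_freq(n):7.2f} Hz, "
          f"delta_f = {delta_f:6.2f} Hz, min. time ~ {1000 / delta_f:5.1f} ms")
```

The printout reproduces the figures used above: about 128 ms for octave #3, halving with each subsequent octave down to about 8 ms for octave #7.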
Looking now at SA, which may contain a random mix of frequencies pertaining to all the keys of all the octaves, as will be apparent from the above description, the same sampled data are analyzed at the same time by comparison with the prestored vectors described above. The “comparison” is performed by carrying out a least-squares analysis to find which combination of stored vectors at optimal amplitudes best describes the sampled data. The least-squares method is well known to the skilled person and therefore is not further described herein, for the sake of brevity.
Once the above determination is completed, there is a need to identify the sub-set of detected vectors and amplitudes that best fits the melody note heard by our perception. This requires finding among the vectors identified by the above process the one combination that defines the dominant Fourier series among the many possible Fourier series that can be constructed using all the different combinations of vectors and amplitudes detected. For all practical purposes it has been found that it is sufficient to consider only the set of the first 3 or 4 harmonics of each candidate Fourier series, as it usually contains more than 90% of the total power of the series. For this purpose, the invention provides a novel selection matrix, as illustrated with reference to
The method of the invention comprises the following steps:
As opposed to the melody, which may change rapidly, a chord is detected by looking at all the sets of at least three simultaneous long-lasting groups of tones, each carrying a substantial portion of the total perceptual power and having mutually different fundamental frequencies. Long-lasting tones are the tones repeatedly detected in step (I). The dominant chord is found by summing up the perceptual power of the fundamental tone and of all the dyadic tones related to each group, and selecting the group that has the largest total perceptual power. For each fundamental frequency f0 within such a group, the related dyadic tones are all the tones that satisfy fn = 2^n·f0, n = 1, 2, 3, . . . . A dominant chord is valid provided that it satisfies certain conditions specified hereinafter, related to the relative perceptual power and to the relative fundamental frequency within the dominant chord. Chord detection is done in parallel to melody detection, but with a much simpler and independent process.
The invention will now be illustrated with reference to a specific illustrative embodiment, by putting the frequencies of the sinusoidal components in correspondence with the fundamental frequency of piano keys. For the sake of simplicity, the musical instrument of this illustrative and non-limitative example is the piano, it being understood that the very same description applies to any other musical instrument and to other examples.
The fundamental frequency fkey# of each of the keys of a piano, is given by
The illustrative example, as shown in
Thus, for every “jump” of 12 keys the fundamental key frequency is doubled. Doubling the frequency is denoted as increasing it by an octave. Equation (2) implies that the frequency difference between any two adjacent keys is
By Heisenberg's uncertainty principle, the minimal period of time ΔTn required to distinguish between two keys with frequency separation Δfn is of the order of magnitude of the inverse of the frequency separation, namely
Since we don't know a priori whether or not adjacent tones will be present, the processing time for the detection of each key must be set at least to the length defined by (4) for the relevant index n. Therefore, during the time period needed to detect one bass sound, several different treble sounds may show up, and the process must be able to simultaneously detect all of them. In the following, we denote by T the frame time, namely, the processing time required to detect the lowest-frequency component at frequency f1. In the present example we set
T=138 msec, fs=8000 samples/sec, ts=1/fs=125 μs, S=T×fs=1104 samples (5)
where T is the frame time, fs is the sampling rate, and S is the frame size in samples.
As stated, the first task to be carried out is the simultaneous optimal detection of the frequency and amplitude of all the sinusoidal components occurring in the global composite sound during the frame time T. In the current art, such type of detection is often done by computing a fast Fourier transform (FFT) of the corresponding S samples in T, and looking at the absolute value of the FFT components. Alternatively, a wavelet transform is used, looking at the absolute value of the wavelet coefficients. However, these approaches are not optimal, because they disregard the phase information. Although the human ear is not sensitive to phase, the phase information is useful in finding an optimal detection, and in reducing the errors that occur due to the artifacts showing up between adjacent frames.
From equation (5) it follows that the processing time required for the detection of each key in octave #3 is 1104×125 μsec = 138 msec, and all the 1104 samples of the global sample set taken over the time frame must be used, while in octave #7 the key detection time is only 1104/2^4×125 μsec = 8.625 msec and can be carried out using any subset of consecutive samples of size 1104/2^4 = 69 from the same global sample set of size S=1104. Thus we are able to detect simultaneously several keys at different octaves within one time frame. The process is pictorially shown in
For the purpose of illustration, let us denote by Amt the transpose of the matrix Am, and define the 24×24 matrix Bm as
Bm(24×24) = Amt Am   (6)
The Bm matrices so constructed are Hermitian, thus their eigenvalues are also their singular values, and a straightforward computation shows that for all m, they have almost identical maximal and minimal positive (nonzero) eigenvalues (EV). Moreover, the maximal and minimal eigenvalues of each of the matrices Bm are close enough in value so that their condition number KB is small, namely
Equation (7) implies that all the matrices Bm, m=0, 1, 2, 3, 4 are non-singular. The fact that Bm is non-singular guarantees that, given a vector ym consisting of any subset of Sm = 1104/2^m adjacent samples taken from the global sample set of S=1104 samples in the time frame T, there exists a vector xm of dimension 24, consisting of a set of 24 optimal coefficients, such that the vector zm=Amxm in (8) provides the best possible approximation to the set of samples ym that one can build using only a combination of the columns of Am, whose elements consist of samples of components at fundamental frequencies belonging to octave#(m+3). In other words, the 2-norm of the error, ∥zm−ym∥2, is a measure of how “close” zm is to the samples ym of a sound belonging to octave#(m+3). The approximation is optimal in the least-squares (LS) sense, meaning that the energy of the error ∥zm−ym∥2² is minimal. Since the columns of Am are normalized to unit 2-norm, the more the samples in ym resemble the samples of a single frequency component in octave#(m+3), the closer ∥xm∥2 gets to ∥ym∥2.
For each m, the vector xm is found by performing a numerical computation known as the QR decomposition of the matrix Am. The success in finding xm is guaranteed since Bm is non-singular. The QR decomposition is a standard operation in numerical algebra, may be performed using different algorithms, and we don't discuss it here. In the illustrative embodiment, the QR decomposition is carried out using a standard algorithm known as the Modified Gram-Schmidt Algorithm.
The small value of the condition number in equation (7), implies that all the matrices Bm are well-conditioned, which in turn implies that the computation of xm is numerically stable, namely, when computing the vectors xm the numerical error is well-bounded, and the solution is reliable.
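By way of non-limiting illustration only, the following Python sketch outlines the per-octave detection described above: it builds a basis matrix Am of unit-norm cosine/sine columns for the 12 keys of one octave and solves for the optimal coefficient vector xm in the least-squares sense (here with numpy's general least-squares routine rather than the Modified Gram-Schmidt QR decomposition of the embodiment), taking each key's power as the sum of the squares of its cosine and sine coefficients, consistent with relation (27) hereinafter; all names and values other than those of equation (5) are merely illustrative.

```python
import numpy as np

FS = 8000                 # samples/sec (equation (5))
S = 1104                  # global frame size in samples (equation (5))

def octave_basis(m, f_do3=130.81):
    """Matrix Am: 24 unit-norm columns (cosine/sine pairs) for the 12 keys of
    octave #(m+3), sampled over Sm = S / 2**m samples."""
    Sm = S // 2 ** m
    t = np.arange(Sm) / FS
    cols = []
    for i in range(12):
        f = f_do3 * 2 ** (i / 12 + m)             # fundamental of key i of octave #(m+3)
        for func in (np.cos, np.sin):
            c = func(2 * np.pi * f * t)
            cols.append(c / np.linalg.norm(c))    # normalize to unit 2-norm
    return np.column_stack(cols)                  # shape (Sm, 24)

def detect_octave(y, m):
    """Least-squares coefficients xm and per-key powers for octave #(m+3)."""
    Am = octave_basis(m)
    ym = y[:Am.shape[0]]                          # any Sm adjacent samples of the frame
    xm, *_ = np.linalg.lstsq(Am, ym, rcond=None)  # stands in for the QR-based solve
    power = xm[0::2] ** 2 + xm[1::2] ** 2         # I^2 + Q^2 for each of the 12 keys
    return xm, power

# Example: a frame containing DO4 (first key of octave #4) with arbitrary phase
t = np.arange(S) / FS
frame = np.cos(2 * np.pi * 261.62 * t + 0.7)
_, p = detect_octave(frame, m=1)
print(np.argmax(p))                               # -> 0, i.e. DO4 detected
```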
Upon completing the computation of xm, m=0, 1, 2, 3, 4 we are left with five vectors x0, x1, x2, x3, x4, each of them of dimension 24, where
xm = [x1[m], x2[m], . . . , x2i−1[m], x2i[m], . . . , x23[m], x24[m]],  m = 0, 1, 2, 3, 4
Due to the architecture of the matrices Am, where the normalized odd columns consist of samples of cosine functions, and the normalized even columns consist of samples of sine functions, xm yields an optimal estimate of the power pi[m], i=1, 2, . . . , 12 of each of the 12 sinusoidal components present at each octave#(m+3) within the global composite sound, in the form
Due to Heisenberg's principle which dictated the hierarchical architecture of the matrices Am in
The sinusoidal components in the global composite sound, whether musical or vocal, are all generated by physical processes that can build up in a very short time, but often decay very slowly, whether because of natural slow decay, as in a guitar string, or because of echo/reverberation effects. Moreover, a piece of music may comprise a strong steady accompaniment, such as the sound of an organ or a violin. These long-lasting sounds may have power comparable to or even greater than the sound related to the melody, and may mask it during the detection process described hereinabove. However, these long-lasting sinusoidal components all have the characteristic that their power is either steady or decaying, while a newly-generated sinusoidal component suddenly shows up from zero power to a considerable power level within a very short time. A typical example is the impulse response showing up almost instantaneously when one hits a piano string. Even if the newly-generated component has the same frequency as an existing steady component previously generated by some instrument, the power detected at the given frequency will exhibit a sudden jump. Therefore, in order to discriminate between the melody and a strong steady accompaniment or prolonged echo from previous tones, we retain only the positive differential power, namely, we continuously compute the difference between the power of each frequency component just found in the present optimal detection
Doing so we keep only the newly-generated components. Optionally, in the case where the melody to be detected derives from nearly steady sounds only, such as the sound of an organ, or a steady prolonged singing human voice, Δ
From now on, unless otherwise stated, when talking about “power” we always implicitly mean “differential power”. Equation (11) yields the estimate of the physical power of all the sinusoidal components at all the five octaves in the illustrative example (octave #(m+3), m=0, 1, 2, 3, 4) where for each octave we estimated the power of the 12 sinusoidal sound components belonging to that octave. Altogether we found the estimated power of all the 12×5=60 sinusoidal components within the global composite sound, where the components with index i=55 through i=60 are set to zero because they are very close to the Nyquist bound of the sampling rate, and cannot be relied upon.
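A minimal sketch of the positive differential power computation described above (illustrative only; p_now and p_prev stand for the per-component power estimates of the present and previous optimal detections):

```python
import numpy as np

def differential_power(p_now, p_prev):
    """Retain only newly generated power: the positive part of the difference
    between the present and the previous power estimates."""
    d = p_now - p_prev
    return np.where(d > 0.0, d, 0.0)

# Example: a steady accompaniment component (index 0) is suppressed, while a
# tone that has just shown up (index 5) is kept.
p_prev = np.zeros(60); p_prev[0] = 4.0
p_now = np.zeros(60); p_now[0] = 3.9; p_now[5] = 2.5
print(differential_power(p_now, p_prev)[[0, 5]])   # -> [0.  2.5]
```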
In order to compute the perceptual power, we must multiply each value of Δ
We multiply each Δ
Using the perceptual power coefficients wpi[m] we build a perceptual power vector pv of dimension 60 as follows
pv(60×1) = [pv1, pv2, . . . , pv60]t,  pvi+12m = wpi[m],  i = 1, 2, . . . , 12,  m = 0, 1, 2, 3, 4   (13)
where the superscript [⋅]t indicates transpose. Thus pv comprises the estimated perceptual powers of all the fundamental sinusoidal tones in the illustrative example, ordered from the lowest piano key# to the highest piano key#. Summing up all the components wabspi[m] in (12), we compute the total absolute perceptual power Pt, which is used in the illustrative example together with the melody threshold mentioned hereinbefore. A pictorial description of the architecture of pv is given in
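For illustration only, the packing of equation (13) may be sketched as follows, with placeholder weights standing in for the equal-loudness-contour values referred to in this description (0-based indexes are used in the code):

```python
import numpy as np

def weighted_vector(powers, weights):
    """Equation (13)-style packing: component i + 12*m holds the perceptually
    weighted power of key i (i = 0..11) of octave index m (m = 0..4)."""
    v = np.zeros(60)
    for m in range(5):
        for i in range(12):
            v[i + 12 * m] = weights[m, i] * powers[m, i]
    return v

weights = np.ones((5, 12))               # placeholder perceptual weights
dp = np.random.rand(5, 12)               # differential powers -> vector pv
ap = np.random.rand(5, 12)               # absolute powers     -> total power Pt
pv = weighted_vector(dp, weights)
Pt = weighted_vector(ap, weights).sum()  # total absolute perceptual power
```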
The perceptual vector pv consists of the optimal estimate of the perceptual power of each of the sinusoidal components at octave#(m+3) in the global sound. However, once pv has been determined, there is still a critical task left, namely, find out the dominant Fourier series.
From (2) we note that for all n=1, 2, 3, . . . we get
fn+k = 123.47×2^((n+k)/12) = 2^(k/12)·fn   (14)
It turns out that all the relevant harmonic frequencies potentially discoverable with the given sampling rate fall at, or very close to, one of the fundamental frequencies of the keys indexed 1 through 54. As we shall see shortly, this fact is of major impact and extremely useful when trying to determine the fundamental frequency and the perceptual power of all the possible Fourier series sharing common harmonics, arising from all the possible combinations of the sinusoidal components, as pointed out above.
For instance, according to (14), fn+19 = 2^(19/12)·fn = 2.9966·fn ≈ 3fn. According to Heisenberg's uncertainty principle, the frequencies 2^(19/12)·fn and 3fn are much too close to be distinguished in the time required for the detection of the note. Therefore, when we try to detect whether the key with index n=22 has been hit, we cannot determine whether the power we detect at frequency f22 belongs to the fundamental of the key with index n=22, or is due to the third harmonic of the key with n=3, or even to a combination of the two. However, as pointed out before, as opposed to music annotation, when looking for melody detection we don't care what key has been hit, and we are concerned only with finding the fundamental frequency of the Fourier series with the strongest perceptual power.
Based on our observation, in practical cases more than 90% of the perceptual power of the dominant Fourier series resides within the first three (3) harmonics for instrumental sounds, and within the first six (6) harmonics for vocal sounds, including the fundamental. Moreover, as pointed out in the background section, the fundamental frequency of a low-frequency Fourier series may be missing. Therefore the knowledge of at least three harmonic components is required to guarantee the proper detection of the fundamental frequency of a Fourier series. Thus in the illustrative example we assumed as a default that all the Fourier series include at most the first three components, which we denote as an “H3 series”, and left the option to include up to the first six components (an “H6 series”).
For h = 1, 2, 3, 4, 5, 6, for any k that satisfies 2^(k/12) ≈ h, the fundamental piano key frequency fn+k in (14) is close to the frequency of one of the first six harmonic frequencies of the piano key with index n. Let us write down a table of 2^(k/12) for the values of k for which 2^(k/12) ≈ h, and compare it with the closest integer. The result is shown in Table 1, where “error” indicates the error with respect to the integer value.
The central conclusion of Table 1 is the following: if two or more nonzero perceptual components in the pv of
{nh−n}=[12,19,24,28,31], h=1,2,3,4,5 (15)
then altogether, as a group, they constitute a Fourier series whose fundamental frequency is the frequency of the key with index n. This is because all the members of a group of tones satisfying (15) comprise only harmonics of the fundamental frequency fn. Note that the component of index n itself may be absent (may have value 0). The sequence (15) corresponds to the incremental sequence
{Δnh}=[12,7,5,4,3], h=1,2,3,4,5, Δnh=nh−nh−1 (16)
If Δnh is known, then from Table 1 we get the relation Δnh = nh − nh−1 = n + kh − nh−1, thus the index of the key corresponding to the fundamental frequency fn of the Fourier series is
n = Δnh + nh−1 − kh   (17)
The following two examples are provided for clarification:
assume that the dominant series consists of only two components corresponding to pv15 and pv22 in
assume that the dominant series includes three components with frequencies corresponding to pv37, pv30, and pv18. Since 37−30=7 and 30−18=12, then according to Table 1, all the three components belong to the same Fourier series. To compute n we may look at any of the differences. For instance, since 37−30=7 then 30 corresponds to the second harmonic, and we get, n=30−12=18, and key#=28+18−1=45. Alternatively, since 30−18=12, then, according to Table 1, the lower frequency is the fundamental, thus n=18 as before.
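By way of illustration, the rule of Table 1 and equation (17) applied in the two examples above may be sketched as follows, under the assumption (satisfied in both examples) that the two lowest detected components of the series are consecutive harmonics:

```python
# Key-index offsets of harmonics 1..6 from the fundamental (sequence (15),
# with offset 0 for the fundamental itself) and the increments of sequence (16).
OFFSETS = [0, 12, 19, 24, 28, 31]
INCREMENTS = [12, 7, 5, 4, 3]           # OFFSETS[h] - OFFSETS[h-1], h = 1..5

def fundamental_index(indices):
    """Index n of the fundamental: the difference between the two lowest detected
    components, looked up in the incremental sequence, identifies the harmonic
    order of the lower one, from which n follows as in equation (17)."""
    lo, hi = sorted(indices)[:2]
    h = INCREMENTS.index(hi - lo)       # the lower component is harmonic h+1
    return lo - OFFSETS[h]

print(fundamental_index([37, 30, 18]))  # -> 18, i.e. key# = 28 + 18 - 1 = 45
print(fundamental_index([15, 22]))      # -> 3  (difference 7: pv15 is the 2nd harmonic)
```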
Of course, if the sequence contains only one component, with index n, then the fundamental frequency is the frequency of the key with index n. However, this case does not occur in practice, since there are always other components, although small, due to noise or other sounds. Nevertheless, as we shall see shortly, the algorithm dealing with background perceptual power mentioned in (V) above handles this case as well.
The number of possible combinations is large, which at first sight looks like a daunting task. However, there is no need for complex computations. Once the vector pv is determined, the task of finding the dominant Fourier series may be carried out in a simple and automatic way.
To make things clear we show how this is done in the default illustrative example mode, which assumes that most of the perceptual power is contained in an H3 series. The case of series of larger size is obvious and immediate.
Let us construct a square selection matrix G, of dimensions equal to the dimension of pv
G(60×60) = {gi,j},  i, j = 1, 2, . . . , 60   (18)
Let us build the matrix G for an H3 series in a way similar to
If we multiply pv by G we obtain a vector fs of dimension 60, whose first element fs1 is given by
fs1 = pv1 + pv13 + pv20
A little thought reveals that the value of fs1 is the perceptual power of the H3 Fourier series with fundamental frequency corresponding to n=1, namely, corresponding to key#28 in the illustrative example. This is because subtracting the first index from the second yields 12, and subtracting the second index from the third yields 7.
If now we build the second row of G in an identical manner, except that we shift all the 1's one place to the right, the value of the second element of fs, namely fs2, will be the perceptual power of the H3 Fourier series with fundamental frequency corresponding to n=2, namely fs2=pv2+pv14+pv21.
If we continue to build the matrix in the same way, namely
Then the elements fsn, n=1, . . . , 60 of the vector
fs(60×1) = G(60×60)·pv(60×1),  fs = [fs1, fs2, . . . , fs60]t   (20)
consist of the perceptual powers of the Fourier series whose fundamental frequency corresponds to the key with index n=1, . . . , 60, which according to (2) corresponds to key#=28+n−1.
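A non-limiting sketch of the selection matrix G of equation (18) and of the product (20) for an H3 series follows (for an H6 series the offsets would be 0, 12, 19, 24, 28 and 31, per sequence (15)):

```python
import numpy as np

def selection_matrix(offsets=(0, 12, 19), size=60):
    """Matrix G of equation (18): row n has 1's at columns n, n+12 and n+19
    (H3 series); any 1 shifted beyond the last column is discarded."""
    G = np.zeros((size, size))
    for n in range(size):
        for k in offsets:
            if n + k < size:
                G[n, n + k] = 1.0
    return G

G = selection_matrix()
pv = np.zeros(60); pv[0], pv[12], pv[19] = 1.0, 0.5, 0.25
fs = G @ pv                 # equation (20); fs[n] is the power of the H3 series
print(fs[0])                # -> 1.75 = pv1 + pv13 + pv20 (1-based indexes of the text)
```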
The power of a strong background accompaniment is usually not concentrated in a single Fourier series. In the absence of specific information regarding its nature, we assume that the background perceptual power is uniformly distributed over the components of the vector pv. A little thought reveals that when computing (20) with a matrix G adapted for H3 series, if the global composite sound consists of only one nonzero component, without any noise added, the vector fs will consist of exactly 3 components of identical amplitude. Similarly, if G is adapted for H6 series, the vector fs will consist of exactly 6 components of identical amplitude. In this scenario, we pick the Fourier series with the highest fundamental frequency. If the vector fs has one largest component, in the absence of noise this component will correspond to the dominant Fourier series.
The separation coefficient SC is a measure of how “far” the power of the strongest differential Fourier series detected is from the total absolute perceptual power.
The multiplication by HN/60 gives an estimate of the portion of the average “background” perceptual power one should expect to find in the strongest differential Fourier series.
In fact SC represents the estimated noise-to-signal ratio within the strongest Fourier series, thus, we take 1−SC as the estimated probability of correct detection, which therefore can be directly inferred from the melody threshold MT, and vice versa, since MT=fsmax/Pt. Therefore the estimated probability of correct detection 1−SC is used as the adjustable threshold value in lieu of MT in the illustrative example.
The Fourier series corresponding to the component fsi has comparable perceptual power if
In other words, a Fourier series is comparable if its perceptual power differs from the strongest Fourier series by less than the sum of all the estimated noise components invading each one of the harmonics in the series.
The dominant Fourier series is the series of comparable perceptual power and highest fundamental frequency. If the power of the dominant series is above the melody threshold MT, then its index i corresponds to the detected tone. In practical applications SC in (24) may be replaced by α·SC, where the value of α≈1 is adjusted experimentally for optimal performance.
The algorithm for performing chord detection runs in parallel and independently from the algorithm for melody detection. It makes use of the absolute (non-differential) estimates
chi+12(m−1) = wi[m]·p̄i[m],  i = 1, 2, . . . , 12,  m = 0, 1, 2, 3, 4   (25)
as well as the total perceptual power Pt in (12), which is also the sum of all the components in (25). Then we perform the modulo-12 computation on all the indexes of chi+12(m−1), and we add up all the perceptual power values yielding the same index i following the modulo-12 operation. Since an increment of 12 indexes corresponds to doubling the frequency, in view of the previous analysis the result is a vector cr12×1 of dimension 12, in which the value of each component consists of the sum of all the perceptual powers of the frequencies that are dyadic harmonics of one of the 12 fundamental frequencies, namely 2^m·fk, k=0, 1, 2, . . . , 11, and therefore they all sound as the same note at different octaves. If more than 60% of the total perceptual power is contained in three out of the 12 values, while the smallest value is not less than 10% of the largest value, and if the same detection occurs continuously again for a period of more than 138 msec, the algorithm in the illustrative example decides “chord detected”, and outputs the three relevant indexes out of the possible 12. Then the algorithm checks several standard musical rules to decide whether the three detected tones may constitute a valid chord or are just a dissonance, and upon passing the check, it outputs the chord in the form of a group of three notes. Then, following standard music rules, the combination of the three notes may be put in correlation with a specific chord denomination.
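For illustration only, the modulo-12 folding and the 60%/10% test described above may be sketched as follows (the 138 msec persistence requirement and the subsequent musical-rule checks are omitted; the input values are arbitrary):

```python
import numpy as np

def detect_chord(ch, total_power):
    """Fold the 60 absolute perceptual powers onto 12 pitch classes (indexes
    modulo 12, so that dyadic harmonics add up), then apply the 60% / 10%
    conditions to the three strongest classes."""
    cr = np.zeros(12)
    for idx, p in enumerate(ch):
        cr[idx % 12] += p
    top3 = np.argsort(cr)[-3:]                   # three strongest pitch classes
    p3 = cr[top3]
    if p3.sum() > 0.6 * total_power and p3.min() >= 0.1 * p3.max():
        return sorted(top3.tolist())             # candidate chord (before musical-rule checks)
    return None

ch = np.zeros(60)
ch[[0, 4, 7, 12, 16, 19]] = [3, 2, 2, 1, 1, 1]   # triad-like pattern spread over two octaves
print(detect_chord(ch, ch.sum()))                # -> [0, 4, 7]
```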
The matrix Am of the illustrative embodiment has the form
In words, the columns in the matrix related to the octave corresponding to index m, include samples of sine and cosine functions at the fundamental frequencies at that level for all the 12 keys belonging to that octave.
In view of the relations
V·cos(ωt+ϕ) = I·cos ωt + Q·sin ωt,  V = √(I² + Q²),  ϕ = arctan(Q/I)   (27)
we see that, using a combination of the columns of Am, which include samples of sine and cosine functions, we are able to construct samples of a waveform consisting of a combination of 12 sinusoidal components, each of arbitrary amplitude and phase, at octave#(m+3).
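A small numerical illustration of relation (27): projecting a sinusoid of arbitrary amplitude and phase onto a cosine column and a sine column at the same frequency recovers I and Q, and hence the amplitude V (all values below are arbitrary):

```python
import numpy as np

fs, f, S4 = 8000, 261.62, 552                 # octave #4 parameters of the example
t = np.arange(S4) / fs
V, phi = 1.3, 0.9
y = V * np.cos(2 * np.pi * f * t + phi)       # component of arbitrary amplitude and phase

I = 2 * np.dot(y, np.cos(2 * np.pi * f * t)) / S4
Q = 2 * np.dot(y, np.sin(2 * np.pi * f * t)) / S4
print(np.hypot(I, Q))                         # ~ 1.3, i.e. V = sqrt(I^2 + Q^2)
```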
The weight function generated by piecewise polynomial interpolation and the weights taken from the equal loudness contour are given in
In order to be able to perform the process of the invention, the hardware and software employed must have the following minimal specifications. As pointed out before, the process of setting the optimal melody threshold includes a continuous mutual human-machine interaction, where the human hearing perception plays a significant role in optimally discriminating between accompaniment and melody, and the user adjusts the threshold until he hears the best detection of the melody. This interaction is a distinctive feature of the present invention. In fact, in many, if not most, pieces of music there is no means of automatically deciding what sound belongs to the accompaniment and what belongs to the melody, because the melody is in many respects an interpretation of the listener with respect to the global effect of all the sounds present at a given moment, and the melody heard often does not belong to a single instrument, but is the result of a global perception (for instance when hearing a chorus). Thus the human hearing and the subjective perception of the user are the best (and often the only possible) judge of whether the melody has been properly detected.
Since during the detection process all the possible Fourier series due to all the possible combinations of harmonics are checked, another distinctive feature of the invention is the capability to consider only the Fourier series whose fundamental frequency lies in some adjustable frequency range. This is done by setting lower and higher fundamental frequency boundaries, related to piano key fundamental frequencies and named “Start End Keys”, outside which any tone detected as melody will be discarded, thus reducing the risk of erroneous melody detection due to a strong instantaneous accompaniment level (such as a strong bass instrument, or a high guitar tone), and then fine-adjusting the boundaries “on the fly” until the melody detected is heard best. For instance, as pointed out before, in the case of a female soprano singer, whose fundamental voice frequency typically lies in the 261 Hz-1044 Hz range, the boundaries may be set a priori so that all the Fourier series whose fundamental frequency lies outside the 261 Hz-1044 Hz range (about two octaves) will be discarded even if their perceptual power is the strongest. Then, the boundaries may be fine-adjusted “on the fly” by the user until he best hears the melody sung by the singer. In most cases, the melody will reside in a frequency range much smaller than the full two octaves, thus the “on the fly” adjustment guided by the user's perception will lead to a much better result than the one obtained from the default range values.
In another instance, when playing the piano, the melody is mainly played by the right hand, and the accompaniment by the left hand. The user may want to hear the right hand alone, or the left hand alone. By setting the “Start End Keys” values, the user may select the piano key range that will be taken into account for the purpose of melody detection, while all the other piano keys will be ignored. Therefore the user may discriminate between left and right hand, which effectively discriminates between melody and accompaniment.
Therefore the hardware should provide
a) The means of generating musical sounds, such as a loudspeaker (or equivalent)
b) The means of displaying rulers (or equivalent arrangement) where the user can see displayed
b1) the melody threshold value
b2) the frequency range boundary values (Start End Keys)
b3) The specific musical time segment to be analyzed (start time, stop time)
c) A means for the user to modify “on the fly” the values of
c1) the melody threshold
c2) the frequency range boundaries (Start End Keys)
The user will be able to modify the above settings following his perception of what values lead to the best melody detection. Such means for modifying the values may be a mouse, a joystick, a touch screen, or equivalent.
d) Means of inputting the various default parameter settings (such as a keyboard or equivalent). Such default settings may be
d1) The maximal length of the Fourier series, which is often dependent on the character of the music piece. For instance, detecting melody from a cappella music will require keeping many harmonics in the series, since the human voice is rich in harmonic content, while for a trumpet concert the number of harmonics kept must be small in order to better separate accompaniment from melody.
d2) The time segment to be analyzed. Different time segments of the same music piece may have very different characters, and may require different threshold settings, or a different setting of the number of harmonics. Therefore the user must be able to isolate music segments of like character to be analyzed. Isolating a music segment may be performed by setting the time segment to be analyzed.
d3) The previously mentioned melody threshold and frequency boundaries
e) For real-time melody detection, sound-capturing means, such as a microphone are required.
f) Means of displaying/printing the sequence of the dominant fundamental tones, along with the time instant when said tone was detected. Optionally the corresponding chord denominations detected should be displayed when chord detection is used.
The software should provide
a) Means of capturing the music piece to be analyzed, such as a recording algorithm. Subsequently the recorded file should be converted to the proper format for analysis (for instance PCM, 8 bit/sample, 8000 samples/sec in the illustrative embodiment).
b) Means of producing sounds corresponding to the detected melody and means of properly conveying the sound to the loudspeaker/earphones/other. Such means may consist of MIDI sound generation (MIDI—Musical Instrument Digital Interface; “MIDI 1.0 Specifications” is a technical standard that began in 1983, includes a large number of documents and specifications, and defines a protocol, a digital interface and connectors); an illustrative sketch is given after this list. Alternatively, the sound may be generated using signal-processing means. It should be noted that the purpose of playing the sound in the invention is not to provide a computerized version of the melody in MIDI format (although this can be a by-product of the algorithm); rather, the purpose of playing the melody detected is to allow the human-machine interactions previously described, by virtue of which the user is able to optimize the detection of the melody, by interactively adjusting the melody threshold and the fundamental frequency range, until he is satisfied with the melody heard and feels that he has reached the best possible (or a satisfactory) melody detection. Therefore, the human hearing judgment is an inherent part of the algorithm itself, and an important input to the algorithm convergence. This human-machine interaction is a distinctive feature of the invention.
c) Means of accepting and modifying “on the fly” the parameter settings previously mentioned, including, among others, the melody threshold, the fundamental frequency range, and the Fourier series length. The modifications should be capable of affecting the algorithm “on the fly”.
d) Means of generating vector bases of the type mentioned before, while various sets of vector bases may optionally be generated so as to be able to accommodate mistuned instruments. In other words, the algorithm should optionally generate various sets of vector bases, each slightly “mistuned”, and choose to use the one that is best adapted, in the sense that it yields the largest value when summing up all the non-perceptual absolute power components defined in equation (10). Doing so, the detection may be optimized even for mistuned instruments or voices (for instance, this may occur when someone tunes a guitar without first hearing a reference tone, or sings on a mistuned scale).
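By way of non-limiting illustration of item b) of the software list above, the sequence of detected tones may, for instance, be written to a standard MIDI file; the sketch below uses the third-party mido Python package, which is merely one possible means and is not part of the invention, and the melody data shown are arbitrary:

```python
import mido

# (piano key#, duration in seconds) pairs of an arbitrary detected melody;
# MIDI note number = piano key# + 20 (piano key 1 = A0 = MIDI note 21)
melody = [(40, 0.25), (42, 0.25), (44, 0.5)]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
TICKS_PER_SEC = 960        # default 480 ticks/beat at the default tempo of 500000 us/beat

for key, dur in melody:
    note = key + 20
    track.append(mido.Message('note_on', note=note, velocity=80, time=0))
    track.append(mido.Message('note_off', note=note, velocity=0,
                              time=int(dur * TICKS_PER_SEC)))
mid.save('melody.mid')
```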
The invention allows for operation in an automatic default mode, namely, using a default “set of parameters” (melody threshold, fundamental frequency range, etc.) that has been set up by the user so as to be well adapted to the music style he deals with. However, as pointed out before, in order to obtain more personalized and optimal performance, the process may be operated interactively. A snapshot of an illustrative user panel, according to one embodiment of the invention, is shown in
As will be apparent to the skilled person, the invention permits obtaining a result that, before the invention, was impossible: using the invention anyone can take a piece of recorded music from any source and, without any prior knowledge of it, obtain on the fly the melody of that music, even in many cases where there is no single instrument playing it. This result is of paramount importance to musicians and dilettantes alike, since it significantly increases their ability to understand melodies they have heard and liked, and to play them on their instruments of choice to the best of their perception.
All the above description has been provided for the purpose of illustration and is not intended to limit the invention in any way. All the principles of the invention can be applied to different sounds, instruments, types of music, etc. without exceeding the scope of the invention.