The present invention relates to the processing of a sound sequence, such as a piece of music or, more generally, a sound sequence comprising the repetition of a subsequence.
Distributors of musical productions, for example recorded on CD, cassette or other medium, make booths available to potential customers where the customers can listen to music of their choice, or else music promoted on account of its novelty. When a customer recognizes a verse or a refrain from the piece of music to which he is listening, he can decide to purchase the corresponding musical production.
More generally, a moderately attentive listener concentrates more on a verse and a refrain strung together than on, in particular, the introduction of the piece. It will thus be understood that, for dissemination in booths of the aforesaid type, a sound resume comprising at least one verse and one refrain would suffice, rather than providing for the complete musical production to be disseminated.
In another application such as the transmission of sound data by mobile telephone, it will be understood that the downloading of the complete piece of music onto a mobile terminal, from a remote server, is much lengthier and, therefore, more expensive than the downloading of a sound resume of the aforesaid type.
Likewise, in an electronic commerce context, sound resumes may be downloaded onto a computer facility communicating with a remote server via an extended network of the INTERNET type. The user of the computer facility may thus place an order for a musical production whose sound resume he likes.
However, detecting a verse and a refrain by ear and thus creating a sound resume for all the musical productions distributed would be a prohibitively cumbersome task.
The present invention aims to improve the situation.
One of the aims of the present invention is to propose an automated detection of a subsequence repeated in a sound sequence.
Another aim of the present invention is to propose an automated creation of sound resumes of the type described above.
For this purpose, the present invention pertains firstly to a method of processing a sound sequence, in which:
a) a spectral transform is applied to said sequence to obtain spectral coefficients varying as a function of time in said sequence.
The method within the sense of the invention furthermore comprises the following steps:
b) at least one subsequence repeated in said sequence is determined by statistical analysis of said spectral coefficients, and
c) start and end instants of said subsequence in the sound sequence are evaluated.
Advantageously, according to an additional step:
d) the aforesaid subsequence is extracted so as to store, in a memory, sound samples representing said subsequence.
Preferably, the extraction of step d) relates to at least one subsequence whose duration is the longest and/or at least one subsequence whose frequency of repetition is the highest in said sequence.
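By way of illustration only, steps a) to d) may be sketched with common signal-processing tools. The sketch below is not the claimed method itself, only one plausible automation of it; the function name, window length, percentile threshold and similarity measure are all arbitrary assumptions.

```python
import numpy as np
from scipy.signal import stft

def find_repeated_subsequence(x, fs, win_s=0.5):
    """Return (start, end) instants, in seconds, of one repeated subsequence of x."""
    # a) spectral transform: short-term spectra as coefficients varying over time
    _, times, Z = stft(x, fs=fs, nperseg=int(win_s * fs))
    feats = np.log1p(np.abs(Z)).T                     # one feature vector per frame

    # b) statistical analysis: cosine similarity between every pair of frames;
    #    a repeated passage shows up as a stripe parallel to the main diagonal
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-9)
    sim = norm @ norm.T
    best_lag, best_score = 1, -np.inf
    for lag in range(1, sim.shape[0] - 1):
        score = np.diagonal(sim, offset=lag).mean()
        if score > best_score:
            best_lag, best_score = lag, score

    # c) start and end instants: longest run of high similarity at the best lag
    diag = np.diagonal(sim, offset=best_lag)
    hot = diag > np.percentile(diag, 75)
    start = length = cur = 0
    for i, h in enumerate(hot):
        cur = cur + 1 if h else 0
        if cur > length:
            start, length = i - cur + 1, cur
    return times[start], times[start + max(length - 1, 0)]

# d) extraction: slice the corresponding samples and keep them in a memory
# t_start, t_end = find_repeated_subsequence(x, fs)
# subsequence = x[int(t_start * fs):int(t_end * fs)]
```

Looking for stripes parallel to the main diagonal of a frame-to-frame similarity matrix is one classical way of revealing repetitions; the statistical analysis actually claimed may of course rely on other tools.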
The present invention finds an advantageous application in aiding the detection of failures of industrial machines or motors, especially by obtaining sound sequences recorded during phases of acceleration and of deceleration of the motor speed. The application of the method within the sense of the invention makes it possible to isolate a sound subsequence corresponding for example to a steady speed or to an acceleration phase, this subsequence being, as the case may be, compared with a reference subsequence.
In another advantageous application to the obtaining of musical data of the type described above, the sound sequence is a piece of music comprising a succession of subsequences from among at least an introduction, a verse, a refrain, a bridge, a theme, a motif, or a movement which is repeated in the sequence. In step c), at least the respective start and end instants of a first subsequence and of a second subsequence are determined.
In a particularly advantageous embodiment, in step d), a first and a second subsequence are extracted so as to obtain, on a memory medium, a sound resume of said piece of music comprising at least the first subsequence strung together with the second subsequence.
Preferably, the first subsequence corresponds to a verse and the second subsequence corresponds to a refrain.
However, it may happen that a first and a second subsequence extracted from a sound sequence are not contiguous in time.
For this purpose, the following steps are moreover provided:
d1) detecting at least one cadence of the first subsequence and/or of the second subsequence, so as to estimate the mean duration of a bar at said cadence, as well as at least one end segment of the first subsequence and at least one start segment of the second subsequence, of respective durations corresponding substantially to said mean duration and separated in the sequence by an integer number of mean durations,
d2) generating at least one bar of transition of duration corresponding to said mean duration and comprising an addition of the sound samples of at least said end segment and of at least said start segment,
d3) and concatenating the first subsequence, the transition bar or bars and the second subsequence to obtain a stringing together of the first and of the second subsequence.
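Under simple assumptions (a constant, known tempo in beats per minute, a known time-signature numerator, and positions expressed as sample indices), step d1) amounts to computing a mean bar duration and choosing two one-bar segments spaced by a whole number of bars. The function and parameter names below are illustrative only, not terms of the claims.

```python
def pick_transition_segments(fs, bpm, beats_per_bar, end_first, start_second):
    """end_first / start_second: sample indices where the first subsequence ends
    and the second begins.  Returns the mean bar duration (in samples), the last
    bar of the first subsequence and a one-bar start segment for the second,
    the two segments being spaced by an integer number of bars."""
    bar = int(round(fs * 60.0 / bpm * beats_per_bar))   # mean duration of a bar, in samples
    end_seg = (end_first - bar, end_first)              # last full bar of the first subsequence
    n_bars = max(1, round((start_second - end_first) / bar))
    aligned = end_first + n_bars * bar                  # a whole number of bars after end_seg
    start_seg = (aligned, aligned + bar)
    return bar, end_seg, start_seg

# Steps d2)/d3) then add the two segments sample-by-sample (for instance with
# complementary fades) to form the transition bar, which is concatenated
# between the first and the second subsequence.
```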
It will be noted that, over and above the automatic generation of sound resumes, the succession of steps d1) to d3) finds an advantageous application in computer-assisted musical creation. In this application, a user can himself create two subsequences of a piece of music, while software comprising instructions for running steps d1) to d3) provides for the two subsequences to be strung together by concatenation, without artefacts and in a manner pleasant to the ear.
More generally, the present invention is also aimed at a computer program product, stored in a computer memory or on a removable medium able to cooperate with a computer reader, and comprising instructions for running the steps of the method within the sense of the invention.
Other characteristics and advantages of the invention will become apparent on examining the detailed description hereinbelow, and the appended drawings in which:
a represents an audio signal of a piece of music corresponding, in the example represented, to a light popular song;
b represents the variation in spectral energy as a function of time, for the piece of music whose audio signal is represented in
c illustrates the durations occupied by the various passages of the piece of music of
a diagrammatically represents segments s_i(t) and s_j(t) selected from the aforesaid respective parts of the piece, so as to prepare a concatenation of the two parts by superposition/addition;
b diagrammatically illustrates by the sign “⊕” the aforesaid superposition/addition;
The audio signal of
To the audio signal represented in
In an embodiment, one is concerned with a plurality of successive short-term FFTs, the result of which is applied to a bank of filters over several ranges of frequencies (preferably with bandwidths that increase like the logarithm of the frequency). Another Fourier transform is then applied to obtain dynamic parameters of the audio signal (which are referenced PD in
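One plausible reading of this processing chain, and not the exact chain of the embodiment, is sketched below: short-term FFTs, energies grouped into log-spaced bands, then a second FFT along time inside each band to capture how the band energies evolve. The number of bands, the block length and the band edges are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import stft

def dynamic_parameters(x, fs, n_bands=8, nperseg=1024, block=64):
    """Crude 'dynamic parameters': modulation spectra of log-spaced band energies."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2                                   # (frequency bins, frames)

    # band edges spaced so that bandwidth grows roughly like log(frequency)
    edges = np.unique(np.geomspace(1, power.shape[0] - 1, n_bands + 1).astype(int))
    bands = np.stack([power[a:b].sum(axis=0) for a, b in zip(edges[:-1], edges[1:])])

    # second Fourier transform, over blocks of successive frames, for each band
    pd = []
    for k in range(bands.shape[1] // block):
        seg = bands[:, k * block:(k + 1) * block]
        pd.append(np.abs(np.fft.rfft(seg, axis=1)))          # modulation spectrum per band
    return np.array(pd)                                       # (blocks, bands, modulation bins)
```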
It is indicated that the dynamic parameters of the type represented in
As a variant, the variables deduced from the audio signal and making it possible to characterize the piece of music may be of a different type, in particular so-called “Mel Frequency Cepstral Coefficients”. Globally, it is indicated that these coefficients (known per se) are likewise obtained from a short-term fast Fourier transform.
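Such coefficients are readily available in common audio libraries; a brief illustration with librosa follows (the file name and the number of coefficients are placeholders):

```python
import librosa

y, sr = librosa.load("song.wav", sr=None)            # "song.wav" is a placeholder
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # matrix of shape (13, number of frames)
```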
c offers a visual representation of the profile of the spectral energy of
Typically, the determination of the number of states (the parts of the piece of music) which are necessary for the representation of a piece of music is performed in an automated manner, by comparison of the similarity of the states found at each iteration of the aforesaid algorithms, and by eliminating the redundant states. This technique, termed “pruning”, thus makes it possible to isolate each repeated part of the piece of music and to determine its temporal coordinates (its start and end instants, as indicated hereinabove).
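A much-simplified stand-in for this state-based segmentation is sketched below: frames are clustered into candidate states, near-duplicate states are merged (the “pruning”), and every run of identical labels yields a part together with its start and end instants. The clustering method, the similarity threshold and the number of initial states are assumptions, not what the embodiment prescribes.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_into_parts(feats, times, n_states=8, merge_thresh=0.9):
    """feats: one feature vector per frame; times: the instant of each frame."""
    km = KMeans(n_clusters=n_states, n_init=10).fit(feats)
    centers, labels = km.cluster_centers_, km.labels_

    # "pruning": merge clusters whose centroids are nearly identical
    norm = centers / (np.linalg.norm(centers, axis=1, keepdims=True) + 1e-9)
    remap = np.arange(n_states)
    for i in range(n_states):
        for j in range(i + 1, n_states):
            if norm[i] @ norm[j] > merge_thresh:
                remap[j] = remap[i]
    labels = remap[labels]

    # temporal coordinates of each homogeneous part
    parts, start = [], 0
    for k in range(1, len(labels) + 1):
        if k == len(labels) or labels[k] != labels[start]:
            parts.append((int(labels[start]), times[start], times[k - 1]))
            start = k
    return parts    # list of (state identifier, start instant, end instant)
```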
Thus, one studies the variations of the spectral energy, for example in the tonal frequencies of a human voice, to determine the repetition of a particular musical passage in the audio signal.
Preferably, one seeks to extract one or more musical passages whose duration is the longest in the piece of music and/or whose frequency of repetition is the highest.
For example, for most light popular pieces, it will be possible to choose to isolate the refrain parts, whose repetition is generally the most frequent, then the verse parts, whose repetition is also frequent, and then, as the case may be, still other parts if they repeat.
It is indicated that other types of subsequences representative of the piece of music may be extracted, provided that these subsequences repeat in the piece of music. For example, it is possible to choose to extract a musical motif, generally of shorter duration than a verse or a refrain, such as a passage of percussion repeated in the piece of music, or else a vocal phrase chanted several times in the piece. Furthermore, a theme may also be extracted from the piece of music, for example a musical phrase repeated in a piece of jazz or of classical music. In classical music, a passage such as a movement may moreover be extracted.
In the visual resume represented by way of example in
Reference is now made to
In the example represented, the piece of music exhibits a structure (classic in light popular music) of the type comprising:
In step 22, the instants t0 to t7 are catalogued and indexed as a function of the corresponding musical passage (introduction, verse or refrain) and stored, as the case may be, in a working memory. In step 23, it is then possible to construct a visual resume of this piece of music, as represented in
In the example described hereinabove of a light popular piece comprising a typical structure, the sound resume is constructed from a verse extracted from the piece, followed by a refrain extracted from the piece. In step 24, a concatenation is prepared of the sound samples of the audio signal between the instants t1 and t2, on the one hand, and between the instants t2 and t3, on the other hand, in the example described. As the case may be, the result of this concatenation is stored in a permanent memory MEM for subsequent use, in step 26.
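As a simple illustration of steps 24 and 26 in this contiguous case, the verse (between t1 and t2) and the refrain (between t2 and t3) share the instant t2, so the two sample slices can be concatenated directly and written to permanent storage. The instants, the file names and the use of the soundfile library are illustrative assumptions.

```python
import numpy as np
import soundfile as sf

x, fs = sf.read("song.wav")                    # placeholder input file
t1, t2, t3 = 32.0, 58.5, 84.0                  # instants from step 22 (made-up values)
verse = x[int(t1 * fs):int(t2 * fs)]
refrain = x[int(t2 * fs):int(t3 * fs)]
sf.write("summary.wav", np.concatenate([verse, refrain]), fs)   # the memory MEM of step 26
```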
However, as a general rule, the end instant of an isolated verse and the start instant of an isolated refrain are not necessarily identical; or else one may choose to construct the sound resume from the first verse and the second refrain (between t4 and t5), or from the end refrain (between t6 and t7). Thus, the two passages selected to construct the sound resume are not necessarily contiguous.
A blind concatenation of the sound signals corresponding to two parts of a piece of music produces a result that is unpleasant to the ear. Hereinbelow is described, with reference to
One of the aims of this construction by concatenation is to locally preserve the tempo of the sound signal.
Another aim is to ensure a temporal distance between points of concatenation (or points of “alignment”) that is equal to an integer multiple of the duration of a bar.
Preferably, this concatenation is performed by superposition/addition of sound segments chosen and isolated from the two abovementioned respective parts of the piece of music.
Described below is a superposition/addition of such sound segments, firstly by beat synchronization (termed “beat-synchronous”), then by bar synchronization according to a preferred embodiment.
The following notation applies:
bpm, the number of beats per minute of a piece of music,
In principle, the aforesaid first and second passages are not contiguous. The reconstructed signal ŝ(t) is then obtained as follows.
Referring to
The first segment s_i(t) is then defined, using a time window h_L(t) of duration L, so that:
s_i(t) = s(t + m_i)·h_L(t)   [1]
where m_i is the start instant of the first segment.
As shown by
s_j(t) = s(t + m_j)·h_L(t)   [1a]
where m_j is the start instant of the second segment.
Even if the duration L of the time window is the same for both segments, it is however indicated that the shape of the window may be different from one segment s_i(t) to the other s_j(t), as shown moreover by
Let b_i and b_j be two respective positions inside the first and second segments, called the “synchronization positions”, with respect to which the superposition/addition is performed, and such that:
0 ≤ b_i ≤ L and 0 ≤ b_j ≤ L   [2]
Advantageously, the temporal distance between b_i and b_j is chosen equal to an integer multiple of the duration T of a beat (b_j − b_i = kT). Under these conditions, there is said to be a “beat-synchronous” reconstruction if
and where k′ is the largest integer such that k′T ≤ L − (b_i − m_i), and c is a time constant such that c = b_i − m_i.
Advantageously, the distance between the instants m_i and m_j is chosen equal to an integer multiple of k′NT, in which N denotes the numerator of the metric.
Thus, the reconstructed signal may be written:
An in-time synchronous superposition/addition is then obtained.
More particularly, the instants m_i and m_j are chosen so that they correspond to a first bar time. Under these conditions, a so-called “aligned” beat-synchronous superposition/addition is advantageously obtained.
Thus, by moreover determining the metric of the first passage and/or of the second passage, an in-time beat-synchronous reconstruction can be performed. If, moreover, the first and second segments are chosen so that they commence with a first bar time, this beat-synchronous reconstruction is aligned.
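The reconstruction formulae themselves are not reproduced in the text above, so the overlap-add below is only one consistent interpretation of equations [1], [1a] and [2]: the two windowed segments are summed so that their synchronization positions coincide; for the beat-synchronous variant, b_j − b_i is additionally chosen as a whole number of beats T, and for the “aligned” variant m_i and m_j fall on first bar times. The window shape and all names are assumptions.

```python
import numpy as np

def overlap_add_segments(s, fs, m_i, m_j, b_i, b_j, L):
    """Superposition/addition of the two windowed segments of equations [1] and
    [1a], aligning the synchronization positions b_i and b_j.  s is the full
    mono signal; the other arguments are instants/durations in seconds, with
    b_i - m_i >= b_j - m_j and both segments lying entirely inside s."""
    n_L = int(L * fs)
    fade_out = np.hanning(2 * n_L)[n_L:]        # decaying window, plays the role of h_L
    fade_in = fade_out[::-1]                    # rising window for the second segment
    seg_i = s[int(m_i * fs):int(m_i * fs) + n_L] * fade_out    # eq. [1]
    seg_j = s[int(m_j * fs):int(m_j * fs) + n_L] * fade_in     # eq. [1a]

    shift = int((b_i - m_i) * fs) - int((b_j - m_j) * fs)      # put b_j on top of b_i
    out = np.zeros(shift + n_L)
    out[:n_L] += seg_i
    out[shift:shift + n_L] += seg_j
    return out
```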
It is indicated that a reconstruction of the signal ŝ(t) may be undertaken on the basis of more than two musical passages to be concatenated. For i musical passages (i > 2), the generalization of the above method is expressed by the relation:
Each integer k′_j is defined as the largest integer such that k′_jT ≤ L_j − (b_j − m_j), where L_j corresponds to the width of the window of the j-th musical passage to be concatenated.
It is indicated that the first bar times, or else the metric, or else the tempo of a piece of music, may be detected automatically, for example by using existing software applications. For example, the MPEG-7 standard (Audio Version 2) provides for the determination and the description of the tempo and of the metric of a piece of music, by using such software applications.
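The MPEG-7 description tools themselves are not illustrated here; as a readily available stand-in, a library such as librosa can estimate the tempo and the beat instants, from which the beat duration T used above can be derived (the file name is a placeholder):

```python
import librosa

y, sr = librosa.load("song.wav", sr=None)                 # placeholder file name
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)   # instants of the detected beats
T = 60.0 / float(tempo)                                   # mean duration of one beat, in seconds
```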
Of course, the present invention is not limited to the embodiment described hereinabove by way of example; it extends to other variants.
Thus, it will be understood that the sound resume may comprise more than two musical passages, for example an introduction, a verse and a refrain, or else two passages other than a verse and a refrain, such as the introduction and a refrain, for example.
It will also be noted that the steps represented in flowchart form in
Foreign application priority data: 03/07667, filed June 2003, FR (national).
PCT filing: PCT/FR04/01493, filed Jun. 16, 2004 (WO), 371(c) date Dec. 22, 2005.