The present invention, in some embodiments thereof, relates to a method and apparatus for isolation of audio and like sources and, more particularly, but not exclusively, to the use of cross-modal association and/or visual localization for the same.
The term multi-modal signal processing covers many areas of application. Herein we describe recent relevant studies in the specific field of audio-visual analysis. Studies in this field have been directed at solving many different tasks. Speech analysis is the most common one, since it is an essential tool in many human-computer interfaces. For instance, speech recognition in noisy environments can utilize lip images rather than speech sounds alone, resulting in improved recognition performance [6, 65]. Other audio-visual tasks include source separation based on vision [16, 27, 61] and video event detection [66]. Such integration of different modalities is backed by evidence that biological systems also fuse cross-sensory information to enhance their ability to understand their surroundings [22, 24].
Additional background art includes
The present embodiments relate to the enhancement of source localization using cross modal association between say audio events and events detected using other modes.
According to an aspect of some embodiments of the present invention there is provided apparatus for cross-modal association of events from a complex source having at least two modalities, multiple objects, and events, the apparatus comprising:
a first recording device for recording a first modality;
a second recording device for recording a second modality;
an associator configured for associating event changes, such as event onsets, recorded in the first modality with changes or onsets recorded in the second modality, and providing an association between events belonging to the onsets;
a first output connected to the associator, configured to indicate ones of the multiple objects in the second modality being associated with respective ones of the multiple events in the first modality.
In an embodiment, the associator is configured to make the association based on respective timings of the onsets.
An embodiment may further comprise a second output associated with the first output, configured to group together events in the first modality that are all associated with a selected object in the second modality, thereby to provide an isolated stream associated with the object.
In an embodiment, the first modality is an audio modality and the first recording device is one or more microphones, and the second modality is a visual modality and the second recording device is a camera.
An embodiment may comprise start-of-event detectors placed between the respective recording devices and the associator, to provide event onset indications for use by the associator.
In an embodiment, the associator comprises a maximum likelihood detector, configured to calculate a likelihood that a given event in the first modality is associated with a given object or predetermined event in the second modality.
In an embodiment, the maximum likelihood detector is configured to refine the likelihood based on repeated occurrences of the given event in the second modality.
In an embodiment, the maximum likelihood detector is configured to calculate a confirmation likelihood based on association of the event in the second modality with repeated occurrence of the event in the first modality.
According to a second aspect of the present invention there is provided a method for isolation of a media stream for respective detected objects of a first modality from a complex media source having at least two media modalities, multiple objects, and events, the method comprising:
recording the first modality;
recording a second modality;
detecting events and respective onsets or other changes of the events;
associating events recorded in the first modality with events recorded in the second modality, based on timings of respective onsets, and providing an association output; and
isolating those events in the first modality associated with events in the second modality that are associated with a predetermined object, thereby to isolate a media stream associated with the predetermined object.
In an embodiment, the first modality is an audio modality, and the second modality is a visual modality.
An embodiment may comprise providing event start indications for use in the association.
In an embodiment, the association comprises maximum likelihood detection, comprising calculating a likelihood that a given event in the first modality is associated with a given event of a specific object in the second modality.
In an embodiment, the maximum likelihood detection further comprises refining the likelihood based on repeated occurrences of the given event in the second modality.
In an embodiment, the maximum likelihood detection further comprises calculating a confirmation likelihood based on association of the event in the second modality with repeated occurrence of the event in the first modality.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
The present invention, in some embodiments thereof, relates to a method and apparatus for isolation of sources such as audio sources from complex scenes and, more particularly, but not exclusively, to the use of cross-modal association and/or visual localization for the same.
Cross-modal analysis offers information beyond that extracted from individual modalities. Consider a camcorder having a single microphone in a cocktail party: it captures several moving visual objects which emit sounds. A task for audio-visual analysis is to identify the number of independent audio-associated visual objects (AVOs), pinpoint the AVOs' spatial locations in the video and isolate each corresponding audio component. Some of these problems were considered by prior studies, which were limited to simple cases, e.g., a single AVO or stationary sounds. We describe an approach that seeks to overcome these challenges. The approach does not inspect the low-level data. Rather, it acknowledges the importance of mid-level features in each modality, which are based on significant temporal changes in each modality. A probabilistic formalism identifies temporal coincidences between these features, yielding cross-modal association and visual localization. This association is further utilized in order to isolate the sounds that correspond to each of the localized visual features. This is of particular benefit for harmonic sounds, as it enables subsequent isolation of each audio source without incorporating prior knowledge about the sources. We demonstrate this approach in challenging experiments, in which multiple objects move simultaneously, creating motion distractions for one another, and produce simultaneous sounds which mix. Yet, the results demonstrate spatial localization of the correct visual features out of hundreds of possible candidates, and isolation of the non-stationary sounds that correspond to these distinct visual features.
This work deals with complex scenarios that are sometimes referred to as a cocktail party, in which multiple sources exist simultaneously in all modalities. This inhibits the interpretation of each source. In the domain of audio-visual analysis, a camera views multiple independent objects which move simultaneously, while some of them emanate sounds, which mix. The present disclosure presents a computer vision approach for dealing with this scenario. The approach has several notable results. First, it automatically identifies the number of independent sources.
Second, it tracks in the video the multiple spatial features that move in synchrony with each of the (still mixed) sound sources. This is done even in highly non-stationary sequences. Third, aided by the video data, it successfully separates the audio sources, even though only a single microphone is used. This completes the isolation of each contributor in this complex audio-visual scene, as depicted in
A single microphone is simpler to set up, but it cannot, on its own, provide accurate audio spatial localization. Hence, locating audio sources using a camera and a single microphone poses a significant computational challenge. In this context, Refs. [35, 43] spatially localize a single audio-associated visual object (AVO). Ref. [12] localizes multiple AVOs if their sounds are repetitive and non-simultaneous. None of these studies attempted audio separation. A pioneering exploration of audio separation [16] used complex optimization of mutual information based on Parzen windows. It can automatically localize an AVO if no other sound is present. Results demonstrated in Ref. [61] were mainly of repetitive sounds, without distractions by unrelated moving objects.
Here we propose an approach that appears to better manage obstacles faced by prior methods. It can use the simplest hardware: a single microphone and a camera.
Algorithmically, we are inspired by feature-based image registration methods, which use significant spatial changes (e.g., edges and corners). Analogously, we use as our features the temporal instances of significant changes in each modality. To match the two modalities, we look for cross-modal temporal coincidences of events. We formulate a likelihood criterion and use it in a framework that sequentially localizes the AVOs. This results in a continuous audio-visual association throughout the sequence.
Following the visual localization of the AVOs, the sound produced by each AVO is isolated. The audio-isolation process is highly simplified and efficient when the mixed audio sources are harmonic. Harmonic sounds usually exhibit a sparse time-frequency (T-F) distribution; therefore, independent harmonic sounds rarely overlap in the T-F plane.
Traditional audio-only isolation methods have also utilized harmonicity assumptions. However, the presented method is significantly aided by the essential visual information. This enables the isolation of mixed sounds in challenging scenes.
The present embodiments deal with the task of relating audio and visual data in a scene containing one or more AVOs, recorded with one or more cameras and one or more microphones. This analysis is composed of two subsequent tasks. The first is spatial localization of the visual features that are associated with the auditory soundtrack. The second is to utilize this localization to separately enhance the audio components corresponding to each of these visual features. The localization problem is approached using a feature-based approach. Features are defined as the temporal instances in which a significant change takes place in the audio and visual modalities. The audio features used are audio onsets (beginnings of new sounds). The visual features are visual onsets (instances of significant change in the motion of a visual object). These audio and visual events are meaningful, as they indeed temporally coincide in many real-life scenarios.
This temporal coincidence is used for locating the AVOs. We exploit the fact that typically, even for scenes containing simultaneous sounds and motions, audio and visual onsets are temporally sparse.
Using a maximum-likelihood criterion to match these events, we iteratively find the AVOs. This process also results in a grouping of the audio onsets, where each group corresponds to a different visual feature.
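As a minimal illustration of this matching idea, the coincidence count between audio onsets and the onsets of each tracked visual feature can be sketched as follows. This is a sketch only; the function names and the frame tolerance are our own assumptions, not part of the described embodiments.

```python
import numpy as np

def coincidence_score(audio_onsets, visual_onsets, tol=1):
    """Count audio onsets that coincide (within `tol` frames)
    with a visual onset of one tracked feature."""
    hits = 0
    for t in audio_onsets:
        if np.any(np.abs(visual_onsets - t) <= tol):
            hits += 1
    return hits

def best_matching_feature(audio_onsets, features):
    """Return the index of the visual feature whose onsets
    co-occur most often with the audio onsets."""
    scores = [coincidence_score(audio_onsets, v) for v in features]
    return int(np.argmax(scores)), scores

# Hypothetical onset times (in frames) for one audio track and two features.
audio = np.array([10, 40, 75, 120])
feat_a = np.array([11, 41, 74, 119])   # moves in sync with the sound
feat_b = np.array([10, 55, 90])        # one chance coincidence
idx, scores = best_matching_feature(audio, [feat_a, feat_b])
```

The feature with the most coincidences (here `feat_a`) is selected as the audio-associated one; a single chance coincidence, as for `feat_b`, does not dominate the score.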
These groups of audio onsets are exploited in order to complete the second audio-visual analysis task: isolation of the independent audio sources. Each group of audio onsets points to instances in which the sounds belonging to a specific visual feature commence. In order to emphasize the onsets of the sounds of interest over interfering sounds, we calculate a measure similar to a temporal directional-derivative of the spectrogram. We inspect this derivative image in order to detect the pitch frequency of the commencing sounds, which are assumed to be harmonic.
By following the pitch frequency through time, we determine which T-F components compose the sounds of interest. By keeping only these audio components (a binary-masking procedure), we synthesize a soundtrack containing only the sounds of a single AVO.
The principles posed here (namely, the audio-visual feature-based approach) utilize only a small part of the cues that are available for audio-visual association. Thus, the present embodiments may become the basis for a more elaborate audio-visual association process. Such a process may incorporate a requirement for consistency of auditory events into the matching criterion, and thereby improve the robustness of the algorithm, and its temporal resolution. We further suggest that our feature-based approach can be a basis for multi-modal areas other than audio and video domains.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
Referring now to the drawings,
In an embodiment the apparatus initially detects the spatial locations of objects in the video modality that are associated with the audio stream. This association is based on temporal co-occurrence of audio and visual change events. A change event may be an onset of an event or a change in the event, in particular measured as an acceleration in the video. An audio onset is an instance in which a new sound commences. A visual onset is defined as an instance in which a significant motion starts or changes, such as a change in direction or acceleration, in the video. Here we track the motion of features, namely objects in the video, and look for instances where there is a significant change in the motion of the object. In the present embodiments we look at the acceleration of the object; however, other measurements besides acceleration may be used. Also, we do not have to track each object separately. We may equally well look for significant temporal changes in the video, rather than those of a specific object, and associate them with the onsets of the audio.
The preferred embodiments use repeated co-occurrences of the onsets of single visual objects with sound onsets to calculate the likelihood that the object under consideration is associated with the audio. For instance, you may move your hand at the exact same time that I open my mouth to start to speak, but this is mere coincidence. In the long run, however, the event of my mouth opening would have more co-occurrences with my sound onsets than your hand.
Once we identify the object/s whose onsets are associated with the audio onsets, this accomplishes a significant goal: telling which objects/locations in the video are associated with the audio.
Now we move on to the second stage: we know at which instances the sounds that belong to each object commence. We can therefore attempt to isolate the sounds of each of the objects. However, it is noted that even without audio isolation, the present embodiments are able to say which spatial locations in the video are associated with the audio, and also which audio onsets are associated with the video we see.
Apparatus 10 is intended to identify events in the two modes. Then those events in the first mode that associate with events relating to an indicated object of the second mode are isolated. Thus in the case of video, where the first mode is audio and the second mode is moving imagery, an object such as a person's face may be selected. Events such as lip movement may be taken, and then sounds which associate to the lip motion may be isolated.
The apparatus comprises a first recording device 12 for recording the first mode, say audio. The apparatus further comprises a second recording device 14 for recording a second mode, say a camera, for recording video.
A correlator 16 then associates events recorded in the first mode with events recorded in the second mode, and provides an association output. The coincidence does not have to be exact, but the closer the coincidence, the higher the recognition given to it.
A maximum likelihood correlator may be used, which iteratively locates visual features that are associated with the audio onsets. These visual features are provided at output 19. The audio onsets that are associated with visual features are also output, at sound output 18. That is to say, the beginnings of sounds that are related to visual objects are temporally identified. They are then further processed in sound output 37.
An associated sound output 37 then outputs only the filtered or isolated stream. That is to say it uses the correlator output to find audio events indicated as correlating with the events of interest in the video stream and outputs only these events.
Start of event detectors 20 and 22 may be placed between respective recording devices and the correlator 16, to provide event start indications. The times of event starts can then be compared in the correlator.
In an embodiment the correlator is a maximum likelihood detector. The correlator may calculate a likelihood that a given event in the first mode is associated with a given event in the second mode.
In a further embodiment the association process is repeated over the course of playing of the media, through multiple events module 24. The maximum likelihood detector refines the likelihood based on repeated occurrences of the given event in the second mode. That is to say, as the same video event recurs, if it continues to coincide with the same kind of sound events then the association is reinforced. If not then the association is reduced. Pure coincidences may dominate with small numbers of event occurrences but, as will be explained in greater detail below, will tend to disappear as more and more events are taken into account.
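A simple way to see why repeated co-occurrence suppresses chance coincidences is a log-likelihood ratio accumulated over repeated onsets. The sketch below is illustrative only; the per-onset probabilities p_match and p_chance are assumed values, not parameters of the embodiments.

```python
import numpy as np

def running_log_likelihood(coincidences, trials, p_match=0.8, p_chance=0.1):
    """Log-likelihood ratio that a visual feature is audio-associated.
    Each audio onset is assumed to coincide with the feature's motion
    with probability p_match if the feature is truly associated, and
    with probability p_chance if co-occurrences are accidental."""
    misses = trials - coincidences
    ll_assoc = coincidences * np.log(p_match) + misses * np.log(1 - p_match)
    ll_chance = coincidences * np.log(p_chance) + misses * np.log(1 - p_chance)
    return ll_assoc - ll_chance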
In one particular embodiment a reverse test module 26 is used. The reverse test module takes as its starting point the events in the first mode that have been found to coincide, in our example the audio events. Module 26 then calculates a confirmation likelihood based on association of the event in said second mode with repeated occurrence of the event in the first mode. That is to say it takes the audio event as the starting point and finds out whether it coincides with the video event.
Image and audio processing modules 28 and 30 are provided to identify the different events. These modules are well-known in the art.
Reference is now made to
1) the object in the 1st mode (the video) which is flagged as associated to the 2nd mode is marked (for instance, by an X as in
2) the events of the object can further be isolated for output. The maximum likelihood may be reinforced as discussed by repeat associations for similar events over the duration of the media. In addition the association may be reinforced by reverse testing, as explained.
As described hereinabove the present embodiments may provide automatic scene analysis, given audio and visual inputs. Specifically, we wish to spatially locate and track objects that produce sounds, and to isolate their corresponding sounds from the soundtrack.
The desired sounds may then be isolated from the audio. A simple single microphone may provide only coarse spatial data about the location of sound sources. Consequently, it is much more challenging to associate the auditory and visual data.
As a result, single-camera single-microphone (SCSM) methods have taken a variety of approaches in order to associate audio and visual descriptions of a scene.
These approaches can be roughly divided into two main schools. The first school is data-driven, and uses raw (or linearly processed) audio and visual data. Pixels (or clusters of pixels) are matched against raw audio data. Two main representatives of this approach are Refs. [16, 35]. These studies formulated the problem of audio-visual association as that of finding a linear combination of image patches, whose temporal behavior "best matches" the temporal behavior of a linear combination of acoustic frequency bands. The best match in Ref. [16] is the match that maximizes the mutual information between the linear combinations. In Ref. [35] it is the sparsest set of image patches that results in a full association. Neither study reports tests on scenes containing multiple audio-associated visual objects (AVOs). Furthermore, in the framework of Ref. [35], it is not clear how consequent audio isolation can be performed. Audio isolation in Ref. [16] was demonstrated only with user guidance. Even then, the isolation procedure was heuristic by nature.
The second school in SCSM methods is feature-driven. The analysis no longer aims at maximizing audio-visual association at each and every frame of the sequence. Rather, it aims at extracting higher-level features from each modality. These features are then compared, not necessarily on a frame-by-frame basis. In this context, Ref. [43] examines the visual data only at instances of maximal auditory energy.
If at these instances a visual patch has reached maximal spatial displacement from its initial location, it is deemed as being associated to the audio. A drawback of the method is its sensitivity to the reference coordinate system. Ref. [55] assumes that the scene contains only repetitive sounds, which are emitted by objects performing repetitive motions. Ref. [55] further assumes periodic motions and sounds. This naturally limits the applicability of these methods. None of these papers reports consequent audio isolation.
The approach presented in this work belongs categorically to the second school presented above. Here we propose an approach that better manages obstacles faced by these prior methods. Algorithmically, our approach is inspired by feature-based image registration methods, which use spatial significant changes (e.g, edges and corners). Analogously, we use as our features the temporal instances of significant changes in each modality. To match the two modalities, we look for cross-modal temporal coincidences of events. Based on a derived likelihood criterion, the AVOs are localized and traced throughout the sequence. The established audio-visual temporal coincidences then play a major role in the consequent audio-isolation stage.
Audio-Enhancement Methods
Audio-isolation and enhancement of independent sources from a soundtrack is a widely-addressed problem. The best results are generally achieved by utilizing arrays of microphones. These multi-microphone methods utilize the fact that independent sources are spatially separated from one another.
In the audio-visual context, these methods may be further incorporated in a system containing one camera or more [46, 45].
The fact that independent sources are spatially distinct is of little use, however, when only a single microphone is available. A single microphone may provide only coarse spatial localization. Consequently, the inverse problem of extracting one or more sources from a single mixture is ill-posed. In order to lift this ill-posedness, one needs to limit the feasible solutions to the problem. This is commonly achieved by incorporating prior knowledge about the sources. Such knowledge may be introduced into the problem in various ways. Some methods train on samples of the sources (or typical sources) that are to be mixed [57]. Others use a-priori knowledge about the nature of the mixed sources, particularly assuming that the sources have a harmonic structure [19, 38, 48]. These methods usually require advance knowledge of the number of mixed harmonic sounds [48].
In the presently described embodiments we additionally assume that the mixed sounds are harmonic; the method is, of course, not necessarily limited to harmonic sounds. Unlike previous methods, however, we attempt to isolate the sound of interest from the audio mixture without knowing the number of mixed sources or their contents. Our audio isolation is applied here to harmonic sounds, but the method may be generalized to other sounds as well. The audio-visual association is based on significant changes in each modality.
Hence, our approach relies heavily on an audio-visual association stage.
Background
Short Time Fourier Transform
Let s(n) denote a sound signal, where n is a discrete sample index of the sampled sound. This signal is analyzed in short temporal windows w, each being Nw samples long. Consecutive windows are shifted by Nsft samples. The short-time Fourier transform of s(n) is

S(t,f)=Σn w(n−tNsft)s(n)e−j2πfn/Nw, (3.1)

where f is the frequency index and t is the time index of the analyzed instance. As an example, the amplitude
A(t,f)=|S(t,f)| (3.2)
corresponding to a short speech segment is given in
To re-synthesize a discrete signal given its STFT S(t, f), the overlap-and-add (OLA) method may be used. It is given by

ŝ(n)=(1/COLA)Σt[(1/Nw)Σf S(t,f)ej2πfn/Nw]. (3.3)

Here, COLA is a multiplicative constant. If for all n

Σt w(n−tNsft)=COLA, (3.4)

then ŝ(n)=s(n). Eqs. (3.3) and (3.4) state that the overlap-and-add operation effectively eliminates the analysis window from the synthesized sequence. The intuition behind the process is that the redundancy within overlapping segments and the averaging of the redundant samples remove the effect of windowing.
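Under a COLA-type condition, analysis followed by overlap-and-add synthesis reconstructs the signal. The following is a minimal sketch, in which the window length, hop size, and Hann window are assumed choices, and the accumulated window sum plays the role of the COLA constant:

```python
import numpy as np

def stft(s, Nw=256, Nsft=64):
    """STFT with a Hann window: one spectrum per analysis frame."""
    w = np.hanning(Nw)
    frames = [s[i:i + Nw] * w for i in range(0, len(s) - Nw + 1, Nsft)]
    return np.array([np.fft.rfft(f) for f in frames])

def istft(S, Nw=256, Nsft=64):
    """Overlap-and-add synthesis; normalize by the accumulated window."""
    n = Nsft * (len(S) - 1) + Nw
    out = np.zeros(n)
    win_sum = np.zeros(n)
    w = np.hanning(Nw)
    for t, spec in enumerate(S):
        out[t * Nsft:t * Nsft + Nw] += np.fft.irfft(spec, Nw)
        win_sum[t * Nsft:t * Nsft + Nw] += w
    return out / np.maximum(win_sum, 1e-12)
```

Away from the sequence boundaries (where the window sum is incomplete), the synthesized samples equal the original ones, since the window factor is divided out sample by sample.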
Harmonic Sounds
Reference is now made to
A variety of sounds of interest are harmonic, at least for short periods of time. Examples include: musical instruments (violin, guitar, etc.), and voiced parts of speech. These parts are produced by quasi-periodic pulses of air which excite the vocal tract. Many methods of speech or music processing aimed at efficient and reliable extraction of the pitch-frequency from speech or music segments [10, 51].
The HPS Pitch-Detection Method
To extract the pitch frequency of a sound from a given STFT-amplitude segment we chose to use the harmonic-product-spectrum (HPS) method. We now review it briefly, based on [15].
The harmonic product spectrum is defined as

P(t,f)=Πk=1 . . . K A(t,kf), (3.5)

where K is the number of considered harmonics. Taking the logarithm gives

log P(t,f)=Σk=1 . . . K log A(t,kf). (3.6)

The pitch frequency is found as

{circumflex over (f)}0=argmaxf P(t,f). (3.7)
Often, the pitch frequency estimated by HPS is double or half the true pitch. To correct for this error, some postprocessing should be performed [15]. The postprocessing evaluates the ratio

A(t,{circumflex over (f)}0/2)/A(t,{circumflex over (f)}0).

If the ratio is larger than a given threshold δhalf, then {circumflex over (f)}0/2 is selected as the pitch frequency [15].
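The HPS pitch estimate, together with the half-pitch check described above, can be sketched on a single STFT-amplitude column. Frequency is in bin units here, and the small regularization constant and default parameters are assumptions for illustration:

```python
import numpy as np

def hps_pitch(amplitude, K=4, delta_half=0.5):
    """Harmonic product spectrum on one STFT-amplitude column.
    Returns the estimated pitch bin, with a half-pitch correction."""
    fmax = len(amplitude) // K
    # Sum of log-amplitudes at harmonics k*f, for k = 1..K
    logP = np.zeros(fmax)
    for k in range(1, K + 1):
        logP += np.log(amplitude[np.arange(fmax) * k] + 1e-12)
    f0 = int(np.argmax(logP[1:]) + 1)  # skip the DC bin
    # Octave-error check: comparable energy at half the estimated pitch?
    if f0 // 2 >= 1 and amplitude[f0 // 2] / (amplitude[f0] + 1e-12) > delta_half:
        f0 = f0 // 2
    return f0

# Hypothetical harmonic sound: fundamental at bin 20 with three overtones.
amp = np.full(200, 0.01)
amp[[20, 40, 60, 80]] = 1.0
```

For this synthetic column, the product over harmonics peaks at the true fundamental bin, and the half-pitch ratio is small, so no octave correction is applied.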
Audio Isolation by Binary Masking
In the present embodiments we attempt to isolate sounds from a mixture containing several sounds. Let sdesired,sinterfere and smix denote the source of interest, the interfering sounds, and the mixture, respectively. Then
smix=sdesired+sinterfere. (3.8)
If we observe the STFT-amplitude of sdesired in
Then the binary masked amplitude of the STFT of the desired signal is estimated by
Âdesired(t,f)=Mdesired(t,f)·Amix(t,f). (3.10)
Here · denotes bin-wise multiplication. The estimated Âdesired(t, f) is combined with the short-time phase ∠Smix(t, f) into Eq. (3.3), in order to construct the estimated desired signal:
This binary masking process forms the basis for many methods [1, 57, 69] of audio isolation.
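A minimal sketch of this masking step (Eq. 3.10, together with the reuse of the mixture phase for synthesis) could look as follows; the function name is our own:

```python
import numpy as np

def binary_mask_isolate(S_mix, mask):
    """Keep only the T-F bins flagged in `mask`, reusing the
    mixture phase for the retained bins (cf. Eq. 3.10)."""
    A_mix = np.abs(S_mix)
    phase = np.angle(S_mix)
    A_desired = mask * A_mix  # bin-wise multiplication
    return A_desired * np.exp(1j * phase)

# Toy 2x2 STFT: the mask keeps one bin per row and zeroes the rest.
S_mix = np.array([[1 + 0j, 2j], [3 + 0j, 4 + 0j]])
mask = np.array([[1, 0], [0, 1]])
S_est = binary_mask_isolate(S_mix, mask)
```

The masked STFT would then be passed through the OLA synthesis of Eq. (3.3) to obtain the isolated time-domain signal.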
The mask Mdesired(t, f) may also include T-F components that contain energy of interfering sounds. Consider a T-F component denoted as (toverlap, foverlap), which contains energy both from the sound of interest sdesired and from interfering sounds sinterfere. To deal with this situation, an empirical approach [57] backed by a theoretical model [4] may be taken. This approach associates the T-F component (toverlap, foverlap) with sdesired only if the estimated amplitude Âdesired(toverlap, foverlap) is larger than the estimated amplitude of the interferences. Formally:
In order to evaluate Eq. (3.12), however, the amplitudes of the source of interest and of the interferences need to be estimated. This usually requires prior knowledge both about the source of interest, and about the interferences. This knowledge is usually incorporated into the system by means of a pre-processing training stage [1, 4, 57].
Significant Visual and Audio Events
How may we associate two modalities, each of which changes in time? Some prior methods use continuous-valued variables to represent each modality, e.g., a weighted sum of pixel values. Maximal canonical correlation or mutual information was sought between these variables [16, 28, 35]. That approach is analogous to intensity-based image matching. It implicitly assumes some association (possibly nonlinear) between the raw data values in each modality. In this work we do not look at the raw data values during the cross-modal association. Rather, we opt for feature-based matching: we seek correspondence between significant features in each modality. In our audio-visual matching problem, we use features having strong temporal variations in each of the modalities.
Visual Features
Reference is now made to
The present embodiments aim to spatially localize and track moving objects, and to isolate the sounds corresponding to them. Consequently, we do not rely on pixel data alone. Rather we look for a higher-level representation of the visual modality. Such a higher-level representation should enable us to track highly non-stationary objects, which move throughout the sequence.
A natural way to track distinct objects in a scene is to perform feature tracking. The method we use is described hereinbelow. The method automatically locates image features in the scene and then tracks their spatial positions throughout the sequence. The result of the tracker is a set of Nv visual features. Each visual feature is indexed by i ∈ [1, Nv]. Each feature has a spatial trajectory vi(t)=[xi(t), yi(t)]T, where t is the temporal index (in units of frames), x, y are the image coordinates, and T denotes transposition. An illustration of the tracking process is shown in
To do this, we first extract significant features from each trajectory. These features should be informative, and correspond to significant events in the motion of the tracked feature. We assume that such features are characterized by instances of strong temporal variation [54, 63], which we term visual onsets. Each visual feature is ascribed a binary vector vion that compactly summarizes its visual onsets: vion(t)=1 if feature i exhibits a visual onset at frame t, and vion(t)=0 otherwise.
For all features i, the corresponding vectors vion have the same length Nf, which is the number of frames. In the following section we describe how the visual onsets corresponding to a visual feature are extracted.
Extraction of Visual Onsets.
We are interested in locating instances of significant temporal variation in the motion of a visual feature. An appropriate measure is the magnitude of the acceleration of the feature, since it implies a significant change in the motion speed or direction of the feature. Formally, we denote the velocity and the acceleration of feature i at instance t by:
{dot over (v)}i(t)=vi(t)−vi(t−1) (4.2)
{umlaut over (v)}i(t)={dot over (v)}i(t)−{dot over (v)}i(t−1), (4.3)
respectively. Then
oivisual(t)=∥{umlaut over (v)}i(t)∥ (4.4)
is a measure of significant temporal variation in the motion of feature i at time t. We note that before calculating the derivatives of Eq. (4.3), we need to suppress tracking noise. Further details are given hereinbelow. From the measure oivisual(t), we deduce the set of discrete instances in which a visual onset occurs. Roughly speaking, the visual onsets are located right after instances in which oivisual(t) has local maxima. The process of locating the visual onsets is summarized in Table 2. Next we go into further details.
First, oivisual(t) is normalized by its maximal value, so that its values are in the range [0, 1]:
ôivisual(t)=oivisual(t)/maxt′oivisual(t′).
Next, the normalized measure is adaptively thresholded (see Adaptive thresholds section). The adaptive thresholding process results in a discrete set of candidate visual onsets, which are local peaks of ôivisual(t), and exceed a given threshold. Denote this set of temporal instances by Vion
Next, Vion is temporally pruned. The motion of a natural object is generally temporally coherent [58]. Hence, the analyzed motion trajectory should typically not exhibit dense events of change. Consequently, we remove candidate onsets if they are closer than δvisualprune to another onset candidate having a higher peak of ôivisual(t). Formally, let t1, t2εVion be two candidate onsets with |t1−t2|<δvisualprune, and let ôivisual(t1) and ôivisual(t2) be the onset measures associated with them. Suppose that ôivisual(t1)<ôivisual(t2). Then, the candidate onset at t1 is excluded from Vion.
Typically in our experiments, δvisualprune=10 frames in movies having a 25 frames/sec rate. This effectively means that on average, we can detect up to 2.5 visual events of a feature per second.
Finally, the remaining instances in Vion are further processed in order to locate the visual onsets. Each temporal location tvonεVion is currently located at a local maximum of ôivisual(t). The last step is to shift the onset slightly forward in time, away from the local maximum, and towards a smaller value of ôivisual(t). The onset is iteratively shifted this way, while the following condition holds:
Typically, onsets are shifted by not more than 2 or 3 frames. To recap, the process is illustrated in
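The onset-extraction steps above may be sketched in code. The following Python sketch is illustrative only: function and parameter names are ours, the adaptive threshold is simplified to the max-neighborhood form described hereinbelow, and the final forward shifting of each onset is omitted.

```python
import numpy as np

def visual_onsets(traj, delta_fixed=0.0, delta_adaptive=0.5, win=8, prune=10):
    """Detect visual onsets of one tracked feature from its (Nf, 2) trajectory.

    The trajectory is assumed already denoised; parameter values are
    illustrative, in the spirit of the typical values quoted in the text.
    """
    vel = np.diff(traj, axis=0)                 # Eq. (4.2): frame-to-frame velocity
    acc = np.diff(vel, axis=0)                  # Eq. (4.3): acceleration
    o = np.linalg.norm(acc, axis=1)             # Eq. (4.4): onset measure
    o_hat = o / o.max() if o.max() > 0 else o   # normalize to [0, 1]

    # Adaptive thresholding: exceed a local-max-based threshold.
    cand = []
    for t in range(len(o_hat)):
        lo, hi = max(0, t - win), min(len(o_hat), t + win + 1)
        neighborhood = np.delete(o_hat[lo:hi], t - lo)
        thr = delta_fixed + delta_adaptive * (neighborhood.max()
                                              if neighborhood.size else 0.0)
        if o_hat[t] > thr:
            cand.append(t)

    # Temporal pruning: drop a candidate closer than `prune` frames
    # to a stronger candidate.
    kept = [t for t in cand
            if all(abs(t - s) >= prune or o_hat[t] >= o_hat[s]
                   for s in cand if s != t)]
    return kept
```

For a trajectory moving straight and then turning abruptly, the single detected onset lands at the turn.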
Audio Features
Analogously to the visual onsets, the audio onsets are summarized in a binary vector aon of length Nf. Instances in which aon equals 1 are instances in which a new sound begins. Detection of audio onsets is illustrated in
A Coincidence-Based Approach
Hereinabove, we showed how visual onsets and audio onsets are extracted from the visual and auditory modalities. Now we describe how the audio onsets are temporally matched to visual onsets. In the specific context of the audio and visual modalities, the choice of audio and visual onsets is not arbitrary. These onsets indeed coincide in many scenarios. For example: the sudden acceleration of a guitar string is accompanied by the beginning of the sound of the string; a sudden deceleration of a hammer hitting a surface is accompanied by noise; the lips of a speaker open as he utters a vowel. One approach for cross-modal association is based on a simple assumption. Consider a pair of significant events (onsets): one event per modality. We assume that if both events coincide in time, then they are possibly related. If such a coincidence re-occurs multiple times for the same feature i, then the likelihood of cross-modal correspondence is high. On the other hand, if there are many temporal mismatches, then the matching likelihood is inhibited. We formulate this principle in the following sections.
General Approach
Let us consider for the moment the correspondence of audio and visual onsets in some ideal cases. If just a single AVO exists in the scene, then ideally, there would be a one-to-one audio-visual temporal correspondence, i.e., vion=aon for a unique feature i. Now, suppose there are several independent AVOs, where the onsets of each object i are exclusive, i.e., they do not coincide with those of any other object. Then,
aon=ΣiεJvion, (5.1)
where J is the set of the indices of the true AVOs. To establish J, one may attempt to find the set of visual features that satisfies Eq. 5.1. However, such ideal cases of perfect correspondence usually do not occur in practice. There are outliers in both modalities, due to clutter and to imperfect detection of onsets, having false positives and negatives. We may detect false audio onsets, which should be overlooked, and on the other hand miss true audio onsets. This is also true for detection of visual onsets in the visual modality.
Thus, we take on a different path to establishing which visual features are associated with the audio. To do this, we take a sequential approach. We define a matching criterion that is based on a probabilistic argument and enables imperfect matching. It favors coincidences, and penalizes for mismatches.
Using a matching likelihood criterion, we sequentially locate the visual features most likely to be associated with the audio. We start by locating the first matching visual feature. We then remove the audio onsets corresponding to it from aon. This results in the vector of the residual audio onsets. We then continue to find the next best matching visual feature. This process re-iterates, until a stopping criterion is met.
The next sections are organized as follows. We first derive the matching criterion that quantifies which visual feature has the highest likelihood to be associated with the audio. We then incorporate this criterion in the sequential framework.
Matching Criterion
Here we derive the likelihood of a visual feature i, which has a corresponding visual onsets vector vion, to be associated to the audio onsets vector aon. Assume that vion(t) is a random variable which follows the probability law
Pr[vion(t)=aon(t)]=p, Pr[vion(t)≠aon(t)]=1−p. (5.2)
In other words, at each instance, vion(t) has a probability p to be equal to aon(t), and a (1−p) probability to differ from it. Assuming that the elements aon(t) are statistically independent of each other, the matching likelihood of a vector vion is
L(i)=ΠtPr[vion(t)|aon(t)]. (5.3)
Denote by Nagree the number of time instances in which aon(t)=vion(t). From Eqs. (5.2, 5.3),
L(i)=p^Nagree(1−p)^(Nf−Nagree). (5.4)
Both aon and vion are binary, hence the number of time instances in which both are 1 is (aon)Tvion. The number of instances in which both are 0 is (1−aon)T(1−vion),
hence
Nagree=(aon)Tvion+(1−aon)T(1−vion). (5.5)
Plugging Eq. (5.5) in Eq. (5.4), taking the logarithm and re-arranging terms,
log L(i)=Nf log(1−p)+[(aon)Tvion+(1−aon)T(1−vion)]·log [p/(1−p)]. (5.6)
We seek the feature i whose vector vion maximizes L(i). Thus, we eliminate terms that do not depend on vion. This yields an equivalent objective function of i,
L′(i)=log [p/(1−p)]·[(aon)Tvion−(1−aon)Tvion]. (5.7)
It is reasonable to assume that if feature i is an AVO, then it has more onset coincidences than mismatches. Consequently, we may assume that p>0.5. Hence, log [p/(1−p)]>0.
Thus, we may omit the positive multiplicative term log [p/(1−p)] from Eq. (5.7).
We can now finally rewrite the likelihood function as
{tilde over (L)}(i)=(aon)Tvion−(1−aon)Tvion. (5.8)
Eq. (5.8) has an intuitive interpretation. Let us begin with the second term. Recall that, by definition, aon equals 1 when an audio onset occurs, and equals 0 otherwise.
Hence, (1−aon) is exactly the opposite: it equals 1 when an audio onset does not occur, and equals 0 otherwise. Consequently, the second term of Eq. (5.8) effectively counts the number of the visual onsets of feature i that do not coincide with audio onsets. Notice that since the second term appears with a minus sign in Eq. (5.8), this term acts as a penalty term. On the other hand, the first term counts the number of the visual onsets of feature i that do coincide with audio onsets. Eq. (5.8) favors coincidences (which should increase the matching likelihood of a feature), and penalizes inconsistencies (which should inhibit this likelihood). Now we describe how this criterion is embedded in a framework, which sequentially extracts the prominent visual features.
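Eq. (5.8) reduces to two dot products on binary vectors. A minimal sketch (function name is ours):

```python
import numpy as np

def matching_score(a_on, v_on):
    """Eq. (5.8): visual onsets coinciding with audio onsets, minus
    visual onsets that have no audio counterpart."""
    a_on, v_on = np.asarray(a_on), np.asarray(v_on)
    return int(a_on @ v_on - (1 - a_on) @ v_on)
```

A feature whose onsets mostly coincide with audio onsets scores positively; a feature whose onsets fall where no sound begins scores negatively.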
Sequential Matching
Out of all the visual features iε[1, Nv], {tilde over (L)}(i) should be maximized by the one corresponding to an AVO. The visual feature that corresponds to the highest value of {tilde over (L)} is a candidate AVO. Let its index be î. This candidate is classified as an AVO, if its likelihood {tilde over (L)}(î) is above a threshold. Note that by definition, {tilde over (L)}(i)≦{tilde over (L)}(î) for all i.
Hence, if {tilde over (L)}(î) is below the threshold, neither î nor any other feature is an AVO.
At this stage, a major goal has been accomplished. Once feature î is classified as an AVO, it indicates audio-visual association not only at onsets, but for the entire trajectory vî(t), for all t. Hence, it marks a specific tracked feature as an AVO, and this AVO is visually traced continuously throughout the sequence. For example, consider the violin-guitar sequence, one of whose frames is shown in
Now, the audio onsets that correspond to AVO ^i are given by the vector
mon=aon·vîon, (5.9)
where · denotes the logical-AND operation per element. Let us eliminate these corresponding onsets from aon. The residual audio onsets are represented by
a1on≡aon−mon. (5.10)
The vector a1on becomes the input for a new iteration: it is used in Eq. (5.8), instead of aon. Consequently, a new candidate AVO is found, this time optimizing the match to the residual audio vector a1on.
This process re-iterates. It stops automatically when a candidate fails to be classified as an AVO. This indicates that the remaining visual features cannot explain the residual audio onset vector. The main parameter in this framework is the mentioned classification threshold of the AVO. We set this threshold to {tilde over (L)}(î)=0. Using the definition of {tilde over (L)} from Eq. (5.8), failing the threshold amounts to:
0>(aon)Tvion−(1−aon)Tvion. (5.11)
Rearranging terms yields:
(1−aon)Tvion>(aon)Tvion. (5.12)
The two sides of Eq. (5.12) sum to the total number of visual onsets of feature i. Consequently, when {tilde over (L)}(î)<0, more than half of the onsets in vion are not matched by audio ones. In other words, most of the significant visual events of i are not accompanied by any new sound. We thus interpret this object as not audio-associated.
To recap, our matching algorithm is given in Table 2 (in which 0 is a column vector, all of whose elements are null).
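The sequential procedure may also be sketched directly. This is an illustrative simplification that assumes exact frame coincidence; the finite-tolerance matching of Eq. (5.13) and the disambiguation step described hereinbelow are omitted, and all names are ours.

```python
import numpy as np

def sequential_matching(a_on, V_on):
    """Sequentially extract AVOs from binary onset vectors.

    a_on: (Nf,) binary audio-onset vector.
    V_on: (Nv, Nf) binary visual-onset vectors, one row per feature.
    Returns a list of (feature_index, matched_onset_vector).
    """
    a = np.asarray(a_on).copy()
    avos = []
    active = set(range(len(V_on)))
    while active:
        # Eq. (5.8) score for every remaining feature.
        scores = {i: int(a @ V_on[i] - (1 - a) @ V_on[i]) for i in active}
        i_best = max(scores, key=scores.get)
        if scores[i_best] <= 0:           # classification threshold set at 0
            break
        m = a * V_on[i_best]              # Eq. (5.9): matched onsets (logical AND)
        avos.append((i_best, m))
        a = a - m                         # Eq. (5.10): residual audio onsets
        active.remove(i_best)
    return avos
```

The loop halts on its own once the best remaining score drops to the threshold, so the number of detected AVOs is an output rather than an input.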
Note that the output accomplishes another goal of this work: the automatic estimation of the number of independent AVOs.
In the violin-guitar sequence mentioned above, this algorithm automatically detected that there are two independent AVOs: the guitar string, and the hand of the violin player (marked as crosses in
Temporal Resolution
The above discussion derives the theoretical framework for establishing audio-visual association. That framework relies on perfect temporal coincidences between audio and visual onsets: it assumes that an audio onset may be related to a visual onset, if both onsets take place simultaneously (Table 2, step 4). However, in practice, the temporal resolution of the present system is finite. As in any system, the terms coincidence and simultaneous are meaningful only within a tolerance range of time. In the real-world, coincidence of two events at an infinitesimal temporal range has just an infinitesimal probability. Thus, in practice, correspondence between two modalities can be established only up to a finite tolerance range. Our approach is no exception.
Specifically, each onset is determined up to a finite resolution, and audio-visual onset coincidence should be allowed to take place within a finite time window. This limits the temporal resolution of coincidence detection. Let tvon denote the temporal location of a visual onset. Let taon denote the temporal location of an audio onset. Then the visual onset may be related to the audio onset if
|tvon−taon|≦δ1AV. (5.13)
In our experiments, we set δ1AV=3 frames. The frame rate of the video recording is 25 frames/sec. Consequently, an audio onset and a visual onset are considered to be coinciding if the visual onset occurred within 3/25≈⅛ sec of the audio onset.
Disambiguation of the AVO
A consequence of this finite resolution is that several visual features may achieve the maximum matching score to the audio onset vector (Table 2, step 4). Denote this set of visual features by Vcandidates={i1, i2, . . . }. Out of this set of potential candidates we wish to select a single best-matching visual feature. This feature is found as follows. Let iεVcandidates. The visual onsets of the visual feature i that have corresponding audio onsets are given by
ViMATCH={tvon|mion(tvon)=1}. (5.14)
For each visual onset tvonεViMATCH, there is a corresponding audio onset taon. According to Eq. (5.13), there may be some temporal lag between this pair of audio and visual onsets. The temporal distance between the onsets is defined as
d(tvon)=|tvon−taon|. (5.15)
This distance function is shown in
We may now calculate, for a given visual feature i, the average distance of its visual onsets from their corresponding audio onsets:
D(i)=(1/|ViMATCH|)ΣtvonεViMATCHd(tvon). (5.16)
This is simply the mean of the distances between the visual onsets and their corresponding audio onsets. Finally, the single best-matching visual feature is established as follows:
î=arg miniεVcandidatesD(i). (5.17)
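The disambiguation of Eqs. (5.13)-(5.17) may be sketched as follows (an illustrative sketch; names are ours):

```python
def disambiguate(cand_onsets, a_onsets, delta_av=3):
    """Pick, among tied candidate features, the one whose matched visual
    onsets are on average closest to their audio onsets.

    cand_onsets: dict mapping feature index -> list of visual-onset frames.
    a_onsets:    list of audio-onset frames.
    """
    best, best_dist = None, float('inf')
    for i, v_onsets in cand_onsets.items():
        dists = []
        for tv in v_onsets:
            d = min(abs(tv - ta) for ta in a_onsets)  # nearest audio onset
            if d <= delta_av:                         # tolerance window, Eq. (5.13)
                dists.append(d)
        if dists:
            mean_d = sum(dists) / len(dists)          # Eq. (5.16)
            if mean_d < best_dist:                    # Eq. (5.17)
                best, best_dist = i, mean_d
    return best
```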
Audio Processing and Isolation
In the above we described the procedure to find the visual features that are associated with the audio. This resulted in a set of AVOs, each with its vector of corresponding audio onsets: {îl, mlon}. The following describes how the sounds corresponding to each of these AVOs are extracted from the single-microphone soundtrack.
Audio Isolation Method
Out of the soundtrack smix, we wish to isolate the sounds corresponding to a given AVO î. To do this, we utilize the audio-visual association achieved. Recall that AVO î is associated with the audio onsets in the vector mon. In other words, mon points to instances in which a sound associated with the AVO commences. We now need to extract from the mixture only the sounds that begin at these onsets. We may do this sequentially: isolate each distinct sound, and then concatenate all of the sounds together to form the isolated soundtrack of the AVO. How may we isolate a single sound commencing at a given onset instance ton? To do this, we need to fit a mask Mton to the time-frequency components of this sound.
We assume that frequency bins that have just become active at ton, all belong to the commencing sound. In this description, we further focus on harmonic sounds. Since a harmonic sound contains a pitch-frequency and its integer multiples (the harmonies), our task is simplified.
1. We may identify the frequency bins belonging to the commencing sound, simply by detecting the pitch f0 of the sound commencing at ton.
2. Since the sound is assumed to be harmonic, we may track the pitch frequency f0(t) through time.
3. When the sound fades away, at toff, the tracking is terminated.
4. This process provides the required mask that corresponds to the desired sound that commences at ton:
Γdesiredton={(t,f0(t)·k)}, tε[ton, toff], kε[1, . . . , K], (6.1)
K being the number of considered harmonies. Eq. (6.1) states that a harmonic sound commencing at ton is composed of the integer multiples of the pitch frequency, and this frequency changes through time.
To conclude: given only an onset instance ton, we determine Γdesiredton.
Exploiting harmonicity for single-microphone source-separation is not new [10]. In contrast to previous methods, however, we do not assume that we have knowledge about the number of interferences, about the pitch-frequency of the interfering sounds, or about the pitch-frequency of the sound of interest in past or future instances. Consequently, our task in step-1 is a novel one: given only an onset instance of a sound, extract f0(ton). This is described next.
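A binary time-frequency mask in the spirit of Eq. (6.1) may be sketched as follows, assuming a linear frequency axis with a known bin resolution (function and parameter names are ours):

```python
import numpy as np

def harmonic_mask(f0_track, t_on, n_frames, n_bins, bin_hz, K=10):
    """Binary spectrogram mask for a harmonic sound.

    f0_track: pitch frequency (Hz) for frames t_on, t_on+1, ...
    bin_hz:   frequency resolution of one STFT bin.
    """
    mask = np.zeros((n_frames, n_bins), dtype=bool)
    for dt, f0 in enumerate(f0_track):
        for k in range(1, K + 1):            # harmonies k * f0(t)
            b = int(round(k * f0 / bin_hz))
            if b < n_bins:
                mask[t_on + dt, b] = True
    return mask
```

Applying such a mask to the mixture spectrogram and inverting the STFT yields the isolated sound of one onset.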
Pitch Detection at Onset Instances
Pitch-detection of single and of multiple mixed sounds is a highly studied field [10]. However, most methods that extract the pitch of multiple concurrent sources require knowledge about the nature of the interfering sounds, or the number of the concurrent sources. We assume that we do not have such information. Our task is formulated as follows.
Given an onset instance ton, extract f0(ton), the pitch frequency of the commencing signal, while disregarding interferences of other sounds. We extract f0(ton) from the STFT-amplitude of the mixture Amix(t, f). To do this, we first need to remove the audio components of the interferences from Amix(t, f).
Elimination of Prior Sounds
The sound of interest is the one commencing at ton. Thus, the disturbing audio at ton is assumed by us to have commenced prior to ton. These disturbing sounds linger from the past. Hence, they can be eliminated by comparing the audio components at
t=ton to those at t<ton, particularly at t=ton−1. Specifically, Ref. [37] suggests the relative temporal difference
D(t,f)=[Amix(t,f)−Amix(t−1,f)]/Amix(t−1,f). (6.2)
Eq. (6.2) emphasizes an increase of amplitude in frequency bins that have been quiet (no sound) just before t.
As a practical criterion, however, Eq. (6.2) is not robust. The reason is that sounds which have commenced prior to t may have a slow frequency drift. The point is illustrated in
Then, faligned at t−1 corresponds to f at t, partially correcting the drift. The map
{tilde over (D)}(t,f)=[Amix(t,f)−Amix(t−1,faligned)]/Amix(t−1,faligned) (6.4)
is indeed much less sensitive to drift, and is responsive to true onsets. Reference is made in this connection to
The map
{tilde over (D)}+(t,f)=max{0,{tilde over (D)}(t,f)} (6.5)
maintains the onset response, while ignoring amplitude decrease caused by fade-outs.
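The clipped, drift-tolerant onset measure may be sketched as follows. Here the frequency alignment of faligned is approximated by taking the best-matching bin within a small neighborhood of the previous frame; this approximation, and all names, are ours.

```python
import numpy as np

def onset_measure(A, drift=1, eps=1e-8):
    """Relative per-bin amplitude increase over an STFT amplitude A
    of shape (n_frames, n_bins), clipped at zero as in Eq. (6.5)."""
    n_frames, n_bins = A.shape
    D = np.zeros_like(A)
    for t in range(1, n_frames):
        for f in range(n_bins):
            lo, hi = max(0, f - drift), min(n_bins, f + drift + 1)
            prev = A[t - 1, lo:hi].max()      # best-aligned previous amplitude
            D[t, f] = (A[t, f] - prev) / (prev + eps)
    return np.maximum(D, 0.0)                  # keep increases, ignore fade-outs
```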
Pitch Detection at ton
As described in the previous section, the measure {tilde over (D)}+(ton, f) emphasizes the amplitude of frequency bins that correspond to a commencing sound. To detect the pitch frequency at ton, we use {tilde over (D)}+(ton, f) as the input to Eq. (3.7), as described hereinabove:
An example for the detected pitch-frequencies at audio onsets in the violin-guitar sequence is given in
Following the detection of f0(ton), the pitch-frequency needs to be tracked during t≧ton, until toff. This procedure is described next.
Pitch Tracking
In the above we described how the pitch frequency f0(ton) of a sound commencing at ton is detected. We now describe how we track f0(t) through time, and how the instance of its termination toff is established.
Given the detected pitch frequency f0(t), we wish to establish f0(t+1). It is assumed to lie in a frequency neighborhood Ωfreq of f0(t), since the pitch frequency of a source typically evolves gradually [10]. Recall that a harmonic sound contains multiples of the pitch frequency (the harmonies). Let the set of indices of active harmonies at time t be K(t). For initialization we set K(ton)=[1, . . . , K]. The estimated frequency f0(t+1) may be found as the one whose harmonies capture most of the energy of the signal:
f0(t+1)=arg maxfεΩfreqΣkεK(t)Amix(t+1,f·k), (6.7)
where Amix(t,f) was defined in Eq. (3.2).
Eq. (6.7), however, does not account for the simultaneous existence of other audio sources. Disrupting sounds of high energy may be present around the harmonies (t+1, f·k) for some fεΩfreq, and kεK(t). This may distort the detection of f0(t+1). To reduce the effect of these sounds, we do not use the amplitude of the harmonies Amix(t+1, f·k) in Eq. (6.7). Rather, we use log [Amix(t+1, f·k)]. This resembles the approach taken by the HPS algorithm discussed above for dealing with noisy frequency components. Consequently, the estimation of f0(t+1) is more effectively dependent on many weak frequency bins. This significantly reduces the error induced by a few noisy components.
Recall that the pitch is tracked in order to identify the set Γdesiredton. During the tracking, each harmony k is monitored by
ρ(k,t)=Amix(t,f0(t)·k)/Amix(t−1,f0(t−1)·k). (6.8)
The measure ρ(k, t) inspects the relative temporal change of the harmony's amplitude. Let ρinterfer and ρdead be two positive constants. When ρ(k, t)≧ρinterfer we deduce that an interfering signal has entered the harmony k. Therefore, it is removed from K(t). Similarly, when ρ(k, t)≦ρdead we deduce that the harmony k has faded out. Therefore, it is removed from K(t). Typically we used ρinterfer=2.5 and ρdead=0.5.
We initialize the tracking process with f0(ton) and K(ton)=[1, . . . , K], and iterate it through time. When the number of active harmonies |K(t)| drops below a certain threshold Kmin, termination of the signal at time toff is declared. Typically we used Kmin=3. The domain Γdesiredton is then given by
Γdesiredton={(t,f0(t)·k)}, (6.9)
where tε[ton, toff] and kεK(t). The tracking process is summarized in Table 3.
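The tracking loop may be sketched as follows. This is an illustrative simplification: the pitch is handled in STFT-bin units (assuming a linear frequency axis), harmony pruning follows the ratio test described above, and all names are ours.

```python
import numpy as np

def track_pitch(A, b0, t_on, K=5, K_min=2, rho_interfer=2.5, rho_dead=0.5,
                search=1, eps=1e-8):
    """Track a pitch given its bin b0 at onset frame t_on.

    A: (n_frames, n_bins) STFT amplitude.
    Returns (pitch_bins, t_off)."""
    n_frames, n_bins = A.shape
    active = list(range(1, K + 1))           # K(t): indices of live harmonies
    pitch = [b0]
    t_off = n_frames - 1
    for t in range(t_on + 1, n_frames):
        # Best pitch bin in the neighborhood, scored with log-amplitudes
        # so that a few noisy components do not dominate.
        best_b, best_score = None, -np.inf
        for b in range(pitch[-1] - search, pitch[-1] + search + 1):
            if b < 1 or b * max(active) >= n_bins:
                continue
            score = sum(np.log(A[t, b * k] + eps) for k in active)
            if score > best_score:
                best_b, best_score = b, score
        if best_b is None:
            t_off = t
            break
        # Prune harmonies by relative amplitude change (ratio test).
        prev_b = pitch[-1]
        active = [k for k in active
                  if rho_dead < (A[t, best_b * k] + eps) /
                                (A[t - 1, prev_b * k] + eps) < rho_interfer]
        if len(active) < K_min:              # sound terminated
            t_off = t
            break
        pitch.append(best_b)
    return pitch, t_off
```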
Detection of Audio Onsets
In this section we briefly review the method used to extract audio onsets. Methods for audio-onset detection have been extensively studied [3]. Here we describe our particular method for onset detection. Our criterion for a significant signal increase is simply
oaudio(t)=Σf{tilde over (D)}+(t,f), (6.10)
where {tilde over (D)}+(t, f) is defined in Eq. (6.5). The criterion is similar to a criterion first suggested in Ref. [37], which was used to detect the onset of a single sound, rather than several mixed sounds. However, the criterion we use is more robust in the setup of several mixed sources, as it suppresses lingering sounds (Eq. 6.5).
In order to extract the discrete instances of audio onsets from Eq. (6.10), we perform the following. The measure oaudio(t) is normalized to the range [0, 1] by setting
ôaudio(t)=oaudio(t)/maxt′oaudio(t′).
Then ôaudio(t) goes through an adaptive thresholding process, which is explained hereinbelow.
The discrete peaks extracted from ôaudio(t) are then the desired audio onsets.
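The audio-onset pipeline may be sketched end-to-end. This sketch substitutes the simple per-bin increase of Eq. (6.2) for the drift-corrected measure, and the threshold values are illustrative rather than the ones quoted in the text.

```python
import numpy as np

def audio_onsets(A, delta_fixed=0.1, delta_adaptive=0.5, win=4, eps=1e-8):
    """Audio-onset frames from an STFT amplitude A of shape (n_frames, n_bins)."""
    # Per-bin relative increase, clipped at zero (fade-outs ignored).
    D = np.maximum((A[1:] - A[:-1]) / (A[:-1] + eps), 0.0)
    o = np.concatenate([[0.0], D.sum(axis=1)])      # sum over frequency
    o_hat = o / o.max() if o.max() > 0 else o       # normalize to [0, 1]
    onsets = []
    for t in range(1, len(o_hat) - 1):
        lo, hi = max(0, t - win), min(len(o_hat), t + win + 1)
        # median-based adaptive threshold over the temporal neighborhood
        thr = delta_fixed + delta_adaptive * np.median(o_hat[lo:hi])
        if o_hat[t] > thr and o_hat[t] >= o_hat[t - 1] and o_hat[t] >= o_hat[t + 1]:
            onsets.append(t)
    return onsets
```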
In the following we present experiments based on real recorded video sequences. We first describe the experiments and the association results. The following section provides a quantitative evaluation of the audio isolation for some of the analyzed scenes. This is followed by implementation details, and typical parameter values.
Results
In this section we detail experiments based on real video sequences. A first clip used was a violin-guitar sequence. This sequence features a close-up on a hand playing a guitar. At the same time, a violinist is playing. The soundtrack thus contains temporally-overlapping sounds. The algorithm automatically detected that there are two (and only two) independent visual features that are associated with this soundtrack. The first feature corresponds to the violinist's hand. The second is the correct string of the guitar, see
Another sequence used is referred to herein as the speakers #1 sequence. This movie has simultaneous speech by a male and a female speaker. The female is videoed frontally, while the male is videoed from the side. The algorithm automatically detected that there are two visual features that are associated with this soundtrack. They are marked in
The next experiment was the dual-violin sequence, a very challenging experiment. It contains two instances of the same violinist, who uses the same violin to play different tunes. Human listeners who had observed the scene found it difficult to correctly group the different notes into a coherent tune. However, our algorithm is able to correctly do so. First, it locates the relevant visual features (
Audio Isolation: Quantitative Evaluation
In this section we provide a quantitative evaluation of the experimental separation of the audio sources. The measures are taken from Ref. [69]. They are aimed at evaluating the overall quality of a single-microphone source-separation method. The measures used are the preserved-signal-ratio (PSR), and the signal-to-interference-ratio (SIR), which is measured in decibels. For a given source, the PSR quantifies the relative part of the sound's energy that was preserved during the audio isolation.
The SIR of an isolated source is compared to the SIR of the mixed source. Further details about these measures are given hereinbelow. Table 4 summarizes the quality measures for the conducted experiments. The PSR numbers are relatively high: most of the energy of the sources was well preserved. The only exception is the female speaker in the speakers #1 sequence, who loses almost half of her energy in the isolation process. However, her isolated speech is still very intelligible, since the informative parts of her speech were well preserved.
The SIR improvements of the sources are quite dramatic. The only exception is the guitar in the violin-guitar sequence, for which the SIR improvement is moderate. The reason is that some of the T-F components of the violin were erroneously included in the binary mask corresponding to the guitar. Consequently, the isolated soundtrack of the guitar contains artifacts traced to the violin.
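For illustration only, measures in the spirit of the PSR and SIR may be computed from spectrograms as follows; the exact definitions of Ref. [69] may differ in detail, and all names are ours.

```python
import numpy as np

def psr_sir(S_target, S_interf, mask, eps=1e-12):
    """Illustrative isolation-quality measures.

    S_target, S_interf: spectrograms of the clean target and interference.
    mask: the binary mask used for isolation."""
    e_target = np.sum(np.abs(S_target) ** 2)           # total target energy
    e_kept = np.sum(np.abs(S_target * mask) ** 2)      # target energy kept by mask
    e_leak = np.sum(np.abs(S_interf * mask) ** 2)      # interference leaking through
    psr = e_kept / (e_target + eps)
    sir_db = 10 * np.log10((e_kept + eps) / (e_leak + eps))
    return psr, sir_db
```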
Implementation Details
This section describes the implementation details of the algorithm described in this thesis. It also lists the parameter values used in the implementation. Unless stated otherwise, the parameters required tuning for each analyzed sequence.
Temporal Tolerance
Audio and visual onsets need not happen at the exact same frame. As explained above, an audio onset and a visual onset are considered simultaneous if they occur within 3 frames of one another.
Frequency Analysis
In all of the experiments, the audio is re-sampled to 16 kHz. It is analyzed using a Hamming window of 80 msec, equivalent to Nw=1280. Our use of M=Nw/2 (50% overlap) ensured synchronicity of the windows with the video frame rate (25 Hz).
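The synchrony between the STFT frame rate and the video frame rate follows from simple arithmetic:

```python
fs = 16000                    # audio sample rate (Hz)
Nw = 1280                     # Hamming window: 80 msec at 16 kHz
M = Nw // 2                   # hop of Nw/2, i.e. 50% overlap
audio_frame_rate = fs / M     # one STFT frame every M samples
assert audio_frame_rate == 25.0   # equals the 25 frames/sec video rate
```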
Audio Onsets
The function oaudio(t) described hereinabove is adaptively thresholded. The adaptive thresholding parameters given hereinbelow are set to typical values of δfixed=1, δadaptive=0.5, and Ωtime=4. For pitch detection and tracking, the number of considered harmonies is set to K=10. Detection of pitch-halving is performed as described hereinabove. Typically, δhalf=0.9.
Visual Processing
Prior to calculating {umlaut over (v)}i(t) as described hereinabove, the trajectory vi(t) is filtered to remove tracking noise. The temporal filtering is performed separately on each of the vector components vi(t)=[xi(t), yi(t)]T; that is, xi(t) and yi(t) are separately filtered. The filtering process consists of temporal median filtering to account for abrupt tracking errors. The median window is typically set in the range between 3 to 7 frames. Consequent filtering consists of smoothing by convolution with a Gaussian kernel of standard deviation ρvisual. Typically, ρvisualε[0.5, 1.5]. Finally, the adaptive threshold parameters (see below) are tuned in each analyzed scene. Typical thresholding values are δfixed=0, δadaptive=0.5, and Ωtime=8. We further remove visual onsets whose amplitudes of acceleration and velocity are smaller than specific values. Typically in our experiments, the velocity and acceleration amplitudes at an instance of a visual onset should exceed 0.2.
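The trajectory filtering may be sketched as follows (an illustrative numpy-only sketch; window and kernel parameters are within the typical ranges quoted above):

```python
import numpy as np

def denoise_trajectory(traj, med_win=5, sigma=1.0):
    """Per-coordinate denoising of a tracked (Nf, 2) trajectory:
    temporal median filtering against abrupt tracking errors,
    then Gaussian smoothing."""
    traj = np.asarray(traj, dtype=float)
    half = med_win // 2
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()                                   # Gaussian smoothing kernel
    out = np.empty_like(traj)
    for c in range(traj.shape[1]):                 # x(t) and y(t) separately
        col = np.pad(traj[:, c], half, mode='edge')
        med = np.array([np.median(col[i:i + med_win])   # sliding median
                        for i in range(traj.shape[0])])
        out[:, c] = np.convolve(np.pad(med, radius, mode='edge'), g, mode='valid')
    return out
```

A single-frame tracking glitch is removed entirely by the median stage before the smoothing is applied.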
Visual Pruning.
An algorithm according to the above tested embodiment groups audio onsets based on vision only. The temporal resolution of the audio-visual association is also limited. This implies that in a dense audio scene, any visual onset has a high probability of being matched by an audio onset. To avoid such an erroneous audio-visual association, it is possible to aggressively prune visual onsets. For example, two onsets of a visual feature may not be accepted if they are closer than 10 frames to each other. This is equivalent to assuming an average event rate of 2.5 Hz. This has the advantage of making dense scenes easier to handle, but limits the applicability of our current realization in the case of rapidly-moving AVOs.
Further Extensions
Audio-visual association. To avoid associating audio onsets with incorrect visual onsets, one may exploit the audio data better. This may be achieved by performing a consistency check, to make sure that sounds grouped together indeed belong together. Outliers may be detected by comparing different characteristics of the audio onsets. This would also alleviate the need to aggressively prune the visual onsets of a feature. Such a framework may also lead to automatic setting of parameters for a given scene. The reason is that a different set of parameter values would lead to a different visual-based auditory grouping. Parameters resulting in consistent groups of sounds (having a small number of outliers) would then be chosen.
Single-microphone audio-enhancement methods are generally based on training on specific classes of sources, particularly speech and typical potential disturbances [57]. Such methods may succeed in enhancing continuous sounds, but may fail to group discontinuous sounds correctly to a single stream. This is the case when the audio-characteristics of the different sources are similar to one another. For instance, two speakers may have close-by pitch-frequencies. In such a setting, the visual data becomes very helpful, as it provides a complementary cue for grouping of discontinuous sounds. Consequently, incorporating our approach with traditional audio separation methods may prove to be worthy. The dual violin sequence above exemplifies this. The correct sounds are grouped together according to the audio-visual association.
Cross-Modal Association. This work described a framework for associating audio and visual data. The association relies on the fact that a prominent event in one modality is bound to be noticed in the other modality as well. This co-occurrence of prominent events may be exploited in other multi-modal research fields, such as weather forecasting and economic analysis.
Tracking of Visual Features
The algorithm used in the present embodiment is based on tracking of visual features throughout the analyzed video sequence, based on Ref. [5].
Adaptive Thresholds
We now describe the adaptive threshold functions used in the detection of the audio and the visual onsets. Given a measure o(t), the goal is to extract discrete instances in which o(t) has a local maximum. These instances should correspond to meaningful events, and contain as few nuisance events as possible. Part of the description below is based on Ref. [3].
Fixed thresholding methods define significant events by peaks in the detection function that exceed a threshold
o(t)>δfixed. (B.1)
Here δfixed is a positive constant. This approach may be successful with signals that have little dynamics. However, each of the sounds in the recorded soundtrack may exhibit significant loudness changes. In such situations, a fixed threshold tends to miss onsets corresponding to relatively quiet sounds, while over-detecting the loud ones. For the visual modality, the same is also true. A motion path may include very abrupt changes in motion, but also some more subtle ones. In these cases, the measure o(t) spreads across a high range of values. For this reason, some adaptation of the threshold is required. We augment the fixed threshold with an adaptive nonlinear part. The adaptive threshold inspects the temporal neighborhood of o(t). This is similar in spirit to spatial reasoning in image edge-detection discussed above.
Given a time instance t, define a temporal neighborhood of it:
Ωtime(ω)=[t−ω, . . . , t+ω]. (B.2)
Here ω is an integer number of frames. In audio, we may expect that oaudio(ton) would be larger than the measure oaudio(t) in other tεΩtime(ω). Consequently, following Ref. [3], we set
{tilde over (δ)}audio=δfixed+δadaptive·mediantεΩtime(ω)oaudio(t). (B.3)
Here the median operation may be interpreted as a robust estimation of the average of oaudio(t) around ton. By using the median operation, Eq. (B.3) enables the detection of close-by audio onsets that are expected in the single-microphone soundtrack.
In the video, we take a slightly different approach. We take
δ̃video = δfixed + δadaptive · max_{t∈Ωtime(ω)} ovideo(t), (B.4)
where the median of Eq. (B.3) is replaced by the max operation. Unlike audio, the motion of a visual feature is assumed to be regular, without frequent strong variations. Therefore, two strong temporal variations should not occur close to each other. Consequently, it is not enough for o(t) to exceed the local average; it should exceed the local maximum. Therefore the median is replaced by the max.
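The difference between the two statistics can be illustrated with two strong peaks occurring close together; the helper below and all numeric values are assumptions for illustration only:

```python
import numpy as np

def adaptive_threshold(o, t, omega, delta_fixed, delta_adaptive, reducer):
    """Threshold of Eqs. (B.3)/(B.4): fixed part plus a scaled statistic of
    Omega_time(omega); excluding o(t) itself is an implementation choice."""
    lo, hi = max(0, t - omega), min(len(o), t + omega + 1)
    nbhd = np.concatenate([o[lo:t], o[t + 1:hi]])
    return delta_fixed + delta_adaptive * reducer(nbhd)

# Two strong peaks only two frames apart (illustrative values).
o = np.array([0.0, 0.0, 4.0, 0.0, 4.2, 0.0, 0.0])
omega, d_fix, d_adp = 3, 0.1, 1.0

# Median-based threshold (audio, Eq. B.3) accepts both peaks:
print([bool(o[t] > adaptive_threshold(o, t, omega, d_fix, d_adp, np.median))
       for t in (2, 4)])  # -> [True, True]

# Max-based threshold (video, Eq. B.4) rejects the weaker of the two,
# enforcing that strong variations are not close by:
print([bool(o[t] > adaptive_threshold(o, t, omega, d_fix, d_adp, np.max))
       for t in (2, 4)])  # -> [False, True]
```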
The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. These terms encompass the terms “consisting of” and “consisting essentially of”.
As used herein, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
This Application is a National Phase of PCT Patent Application No. PCT/IL2008/000471 having International filing date of Apr. 6, 2008, which claims the benefit of U.S. Provisional Patent Application No. 60/907,536 filed on Apr. 6, 2007. The contents of the above Applications are all incorporated herein by reference.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IL2008/000471 | 4/6/2008 | WO | 00 | 3/15/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2008/122974 | 10/16/2008 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6219639 | Bakis et al. | Apr 2001 | B1 |
6816836 | Basu et al. | Nov 2004 | B2 |
6910013 | Allegro et al. | Jun 2005 | B2 |
20020135618 | Maes et al. | Sep 2002 | A1 |
20030065655 | Syeda-Mahmood | Apr 2003 | A1 |
20040267536 | Hershey et al. | Dec 2004 | A1 |
20050251532 | Radhakrishnan et al. | Nov 2005 | A1 |
20060059120 | Xiong et al. | Mar 2006 | A1 |
20060075422 | Choi et al. | Apr 2006 | A1 |
20060235694 | Cross et al. | Oct 2006 | A1 |
20080193016 | Lim et al. | Aug 2008 | A1 |
Entry |
---|
Stauffer, C.: Automated audio-visual activity analysis, 2005, Tech. rep., MIT-CSAIL-TR-2005-057, Massachusetts Institute of Technology, Cambridge, MA. |
Chen et al., “Relating Audio-Visual Events Caused by Multiple Movements: In the Case of Entire Object Movement”, Information Fusion, Proceedings of the Fifth International Conference, vol. 1, pp. 213-219, Jul. 8-11, 2002. |
Barzelay et al., “Harmony in Motion”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Jun. 17-22, 2007. |
International Preliminary Report on Patentability Dated Oct. 15, 2009 From the International Bureau of WIPO Re.: Application No. PCT/IL2008/000471. |
International Search Report and the Written Opinion Dated Aug. 27, 2008 From the International Searching Authority Re.: Application No. PCT/IL2008/000471. |
Gianluca et al. “Analysis of Multimodal Sequences Using Geometric Video Representations”, Signal Processing, XP002489312, 86(12): 3534-3548, Fig.5, Dec. 2006. |
Jinji et al. “Finding Correspondence Between Visual and Auditory Events Based on Perceptual Grouping Laws Across Different Modalities”, 2000 IEEE International Conference on Nashville, XP010523409, 1: 242-247, Figs. 1-3, Table 1, Oct. 2000. |
Jinji et al. “Relating Audio-Visual Events Caused by Multiple Movements: in the Case of Entire Object Movement”, Information Fusion, Proceedings of the Fifth International Conference, XP010595122, 1: 213-219, Jul. 2002. |
Zhu et al. “Major Cast Detection in Video Using Both Audio and Visual Information”, IEEE International Conference on Acoustics, Speech, and Signal Processing, XP010803152, 3: 1413-1416, Fig 1, May 2001. |
Zohar et al. “Harmony in Motion”, CCIT Technical Report 620, XP002491034, Apr. 2007. |
Number | Date | Country | |
---|---|---|---|
20100299144 A1 | Nov 2010 | US |
Number | Date | Country | |
---|---|---|---|
60907536 | Apr 2007 | US |