Pursuant to 35 U.S.C. § 119 and the Paris Convention Treaty, this application claims foreign priority to Chinese Patent Application No. 202011338294.0 filed Nov. 25, 2020, the contents of which, including any intervening amendments thereto, are incorporated herein by reference. Inquiries from the public to applicants or assignees concerning this document or the related applications should be directed to: Matthias Scholl P.C., Attn.: Dr. Matthias Scholl Esq., 245 First Street, 18th Floor, Cambridge, Mass. 02142.
The disclosure relates to multi-modal analysis for visual and audio representation learning, and more particularly to a self-supervised curriculum learning method for enhancing audio-visual association.
In recent years, with the rapid development of video capture devices such as smartphones and ground surveillance cameras, together with internet technology, video data is growing exponentially and can easily reach the scale of gigabytes per day. These video data contain rich visual and audio information, so mining knowledge from and understanding the content of such data has significant academic and commercial value. However, the major difficulty in discovering video information using traditional supervised learning lies in the human annotations, which are laborious, time-consuming, and expensive, yet are necessary to enable supervised training of convolutional neural networks (CNNs). To exploit the inherent information in the unlabeled video data generated at such a scale every day, the community has developed self-supervised learning (SSL), which utilizes the intrinsic characteristics of unlabeled data to improve the performance of CNNs. Moreover, learning from the video data itself unleashes the potential of its easy-access property and accelerates many applications in artificial intelligence where annotating data is difficult.
Self-supervised study of visual and audio representation learning using the co-occurrence property has become an important research direction. Such approaches regard the pervasive property of audio-visual concurrency as latent supervision to extract features, and the extracted representations are evaluated on various downstream tasks such as action recognition and audio recognition. Recent methods on self-supervised visual and audio representation learning can generally be categorized into two types:
(1) Audio-Visual Correspondence (AVC): the visual and audio signals are always presented in pairs for self-supervised learning;
(2) Audio-Visual Synchronization (AVS): the audio is generated by the vibration of surrounding objects, and its temporal synchronization with the video is used for self-supervised learning.
Both types essentially set up a verification task that predicts whether an input pair of an audio clip and a video clip is matched. The positive audio-video pairs are typically sampled from the same video. The main difference between AVC and AVS lies in how the negative audio-video pair is treated: in AVC, the negative pair is mostly constructed from audio and video of different videos, whereas AVS detects misalignments between an audio clip and a video clip taken from the same video.
Conventionally, directly conducting the verification of whether the visual and audio modalities derive from the same video for self-supervised representation learning leads to the following disadvantages:
(1) The verification mainly considers the information shared between the two modalities for semantic representation learning, but neglects important cues in the structure of each single audio or video modality. For example, both crowd cheering and announcer speech occur in basketball and football scenarios, so the two sports cannot be distinguished without hearing the ball bouncing or being kicked; the sound of the ball bouncing or being kicked is crucial in the audio modality, while the shape of the ball and the dress of the players are crucial in the visual modality.
(2) Besides, only considering the similarity between matching audio-video pairs in a small number of simple cases makes it difficult to mine non-matching pairs in complex cases.
The disclosure provides a method for enhancing audio-visual association by adopting self-supervised curriculum learning, which not only focuses on the correlation between the visual and audio modalities, but also explores the inherent structure of each single modality. A teacher-student pipeline is adopted to learn the correspondence between visual and audio. Specifically, taking advantage of contrastive learning, a two-stage scheme is exploited, which transfers the cross-modal information between the teacher and student models as a phased process. Moreover, the disclosure regards the pervasive property of audio-visual concurrency as latent supervision and mutually distills the structural knowledge of visual and audio data for model training. To this end, the discriminative audio and visual representations learned from the teacher-student pipeline are exploited for downstream action and audio recognition.
Specifically, the disclosure provides a method for enhancing audio-visual association by adopting self-supervised curriculum learning, the method comprising:
1) supposing an unlabeled video dataset $\mathcal{V}$ comprising N samples and being expressed as $\mathcal{V}=\{V_i\}_{i=1}^{N}$, where $V_i$ represents a sampled clip of an i-th video in the dataset $\mathcal{V}$ and comprises T frames; T is a length of a clip $V_i$; pre-processing videos as visual frame sequence signals and audio spectrum signals, a pre-processed video dataset being expressed as $\mathcal{V}=\{V_i=(x_i^v,x_i^a)\mid x^v\in\mathcal{X}^v,\ x^a\in\mathcal{X}^a\}_{i=1}^{N}$, where $\mathcal{X}^v$ is a visual frame sequence set and $\mathcal{X}^a$ is an audio spectrum set, and $x_i^v$, $x_i^a$ are an i-th visual sample and an i-th audio sample, respectively;
extracting visual and audio features through a convolutional neural network to train a visual encoder $\mathcal{F}^v$ and an audio encoder $\mathcal{F}^a$ that generate uni-modal representations $f^v$, $f^a$ by exploiting a correlation of audio and visual within each video clip; wherein a feature extraction process is formulated as follows:

$$f_i^v=\mathcal{F}^v(x_i^v),\qquad f_i^a=\mathcal{F}^a(x_i^a),$$

where $f_i^v$ is an i-th visual feature and $f_i^a$ is an i-th audio feature, $i\in\{1,2,\ldots,N\}$;
2) performing self-supervised curriculum learning with the extracted visual features $f_i^v$ and audio features $f_i^a$;
2.1) performing a first-stage curriculum learning; in this stage, training the visual features $f_i^v$ through contrastive learning in a self-supervised manner; the contrastive learning being expressed as:
where $\mathbb{E}[\cdot]$ is an expectation function, $\log(\cdot)$ is a logarithmic function, and $\exp(\cdot)$ is an exponential function; $\tau$ denotes a temperature parameter, and K denotes a number of negative samples; $f_i^{v\prime}$ is a feature extracted from a visual sample $x_i^{v\prime}$ augmented from $x_i^v$, and a calculation thereof is $f_i^{v\prime}=\mathcal{F}^v(x_i^{v\prime})$; the visual augmentation operations are formulated as:
where $\mathrm{Tem}(\cdot)$ is a visual clip sampling and temporal jittering function and s is a jitter step; $\mathrm{Spa}(\cdot)$ is a set of image pre-processing functions comprising image cropping, image resizing, and image flipping, and T is a clip length;
training the audio features $f_i^a$ in a self-supervised manner through contrastive learning as follows:
where $f_i^{a\prime}$ is a feature extracted from an audio sample $x_i^{a\prime}$ which is augmented from $x_i^a$, and a calculation thereof is denoted as $f_i^{a\prime}=\mathcal{F}^a(x_i^{a\prime})$; an audio augmentation operation being denoted as:
$$x_i^{a\prime}=W_f(M_{fc}(M_{ts}(x_i^a))),$$
where $M_{ts}(\cdot)$ is a function of masking blocks of time steps, $M_{fc}(\cdot)$ denotes a function of masking blocks of frequency channels, and $W_f(\cdot)$ is a feature warping function;
the procedures in the first-stage curriculum learning act as a self-instance discriminator by directly optimizing in the feature space of the visual or audio modality, respectively; after these procedures, the visual feature representations and audio feature representations are discriminative, which means the resulting representations are distinguishable across different instances.
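For reference, a standard InfoNCE-style instantiation that is consistent with the symbols defined above (the expectation $\mathbb{E}[\cdot]$, temperature $\tau$, K negative samples, and the augmented feature $f_i^{v\prime}$) is sketched below for the visual modality; the dot-product similarity and the exact composition of the denominator are assumptions of this sketch rather than a reproduction of the disclosure's own equation, and the audio objective is analogous with superscript a:

$$\mathcal{L}^{v}=-\,\mathbb{E}\left[\log\frac{\exp\left(f_i^{v}\cdot f_i^{v\prime}/\tau\right)}{\exp\left(f_i^{v}\cdot f_i^{v\prime}/\tau\right)+\sum_{j=1}^{K}\exp\left(f_i^{v}\cdot f_j^{v}/\tau\right)}\right].$$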
2.2) performing a second-stage curriculum learning; in this stage, transferring information between the visual representation $f_i^v$ and the audio representation $f_i^a$ with a teacher-student framework for contrastive learning and training, the teacher-student framework being expressed as follows:
where $(f_i^v, f_i^a)$ is a positive pair, and $(f_i^v, f_j^a)$, $i\neq j$, is a negative pair;
with this stage, a student network output is encouraged to be as similar as possible to the teacher's by optimizing the above objective with the input pairs.
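Analogously, a cross-modal InfoNCE form that is consistent with the positive and negative pairs just defined is sketched below; again, this is an illustrative instantiation (the direction of distillation and the similarity measure are assumptions), not the disclosure's exact formula:

$$\mathcal{L}^{v\rightarrow a}=-\,\mathbb{E}\left[\log\frac{\exp\left(f_i^{v}\cdot f_i^{a}/\tau\right)}{\exp\left(f_i^{v}\cdot f_i^{a}/\tau\right)+\sum_{j\neq i}\exp\left(f_i^{v}\cdot f_j^{a}/\tau\right)}\right].$$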
3) Optimizing using a memory-bank mechanism;
In the first and second stages of curriculum learning, the key idea is to apply contrastive learning to learn the intrinsic structure of the audio and visual signals in the video. However, solving the objective of this approach typically suffers from the existence of trivial constant solutions. Therefore, the method uses one positive pair and K negative pairs for training. In the ideal case, the number of negative pairs should be set as K=N−1 over the whole video dataset $\mathcal{V}$, which consumes a high computational cost and cannot be directly deployed in practice. To address this issue, the method further comprises providing a visual memory bank $\mathcal{M}^v=\{m_i^v\}_{i=1}^{K'}$ and an audio memory bank $\mathcal{M}^a=\{m_i^a\}_{i=1}^{K'}$ to store negative pairs in the first-stage curriculum learning and the second-stage curriculum learning, wherein the visual memory bank and the audio memory bank are easily optimized without large computational consumption during training; a bank size K′ is set as 16384, and the visual memory bank and the audio memory bank dynamically evolve during the curriculum learning process, with formulas as follows:
where $f_i^v$, $f_i^a$ are visual and audio features learned in a specific iteration step of the curriculum learning process. The visual and audio banks dynamically evolve with the video dataset while keeping a fixed size, and thus the method obtains a variety of negative samples at a small cost. Both banks are able to replace negative samples with the bank representations without increasing the training batch size.
4) Performing downstream task of action and audio recognition;
following the curriculum learning process in a self-supervised manner, acquiring a pre-trained visual convolutional encoder $\mathcal{F}^v$ and a pre-trained audio convolutional encoder $\mathcal{F}^a$; to investigate a correlation between visual and audio representations, transferring the pre-trained visual convolutional encoder and the pre-trained audio convolutional encoder to action recognition and audio recognition based on the trained encoders $\mathcal{F}^v$ and $\mathcal{F}^a$, with formulas as follows:
where $y^{v*}$ is a predicted action label of a visual frame sequence $x^v$, $y^{a*}$ is a predicted audio label of an audio signal $x^a$, and y is a label variable; $\arg\max(\cdot)$ is an argument-of-the-maxima function, and $P(\cdot)$ is a probability function.
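A reconstruction that is consistent with the symbol definitions in the preceding paragraph, offered here as an illustrative sketch rather than the disclosure's exact formula (the conditioning on the encoder output is an assumption), is:

$$y^{v*}=\arg\max_{y} P\big(y\mid \mathcal{F}^v(x^v)\big),\qquad y^{a*}=\arg\max_{y} P\big(y\mid \mathcal{F}^a(x^a)\big).$$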
To take advantage of large-scale unlabeled video data and learn visual and audio representations, the disclosure presents a self-supervised curriculum learning method for enhancing audio-visual association with contrastive learning in the context of a teacher-student network paradigm. This method can train the visual and audio models without human annotation and extracts meaningful visual and audio representations for a variety of downstream tasks. Specifically, a two-stage self-supervised curriculum learning scheme is proposed by solving the task of audio-visual correspondence learning. The rationale behind the disclosure is that the knowledge shared between the audio and visual modalities serves as a supervisory signal. Therefore, using the pre-trained model learned from large-scale unlabeled data is helpful for downstream tasks that have limited training data. Concisely, without any human annotation, the disclosure exploits the relation between visual and audio to pre-train the model. Afterward, it applies the pre-trained model in an end-to-end manner for downstream tasks.
To further illustrate, experiments detailing a method for enhancing audio-visual association by adopting self-supervised curriculum learning are described below. It should be noted that the following examples are intended to describe and not to limit the description.
The method, as shown in the accompanying drawing, comprises the following steps.
Step 1: Using a convolutional neural network to extract visual and audio features.
Suppose an unlabeled video dataset $\mathcal{V}$ comprises N samples and is expressed as $\mathcal{V}=\{V_i\}_{i=1}^{N}$, where $V_i$ represents a sampled clip of the i-th video in the dataset and contains T frames; T is the length of the clip $V_i$. Since the dataset comprises no ground-truth labels for later training, the videos are pre-processed as visual frame sequence signals and audio spectrum signals, and the pre-processed video dataset is expressed as $\mathcal{V}=\{V_i=(x_i^v,x_i^a)\mid x^v\in\mathcal{X}^v,\ x^a\in\mathcal{X}^a\}_{i=1}^{N}$, where $\mathcal{X}^v$ is the visual frame sequence set and $\mathcal{X}^a$ is the audio spectrum set; $x_i^v$ and $x_i^a$ are the i-th visual sample and the i-th audio sample, respectively. Afterward, the method can utilize the latent correlation of the visual and audio signals for self-supervised training. The goal is to effectively train a visual encoder $\mathcal{F}^v$ and an audio encoder $\mathcal{F}^a$ that generate uni-modal representations $f^v$, $f^a$ by exploiting the correlation of audio and visual within each video clip. The feature extraction process can be formulated as follows:

$$f_i^v=\mathcal{F}^v(x_i^v),\qquad f_i^a=\mathcal{F}^a(x_i^a),$$

where $f_i^v$ is the i-th visual feature and $f_i^a$ is the i-th audio feature, $i\in\{1,2,\ldots,N\}$.
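As a minimal, non-limiting sketch of this feature-extraction step (written in PyTorch; the placeholder 3D/2D convolutional architectures, the feature dimension of 512, and the input shapes are assumptions of this sketch and are not prescribed by the disclosure), the encoders $\mathcal{F}^v$ and $\mathcal{F}^a$ could take the following form:

```python
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Placeholder 3D-CNN visual encoder F^v: clip (B, 3, T, H, W) -> feature f^v (B, D)."""
    def __init__(self, dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)  # unit-norm feature

class AudioEncoder(nn.Module):
    """Placeholder 2D-CNN audio encoder F^a: spectrogram (B, 1, F, T) -> feature f^a (B, D)."""
    def __init__(self, dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)

# f_i^v = F^v(x_i^v), f_i^a = F^a(x_i^a) on toy inputs
f_v = VisualEncoder()(torch.randn(2, 3, 16, 112, 112))
f_a = AudioEncoder()(torch.randn(2, 1, 128, 100))
```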
Step 2: Self-supervised curriculum learning with the extracted visual features $f_i^v$ and audio features $f_i^a$.
Step 2.1: The first stage curriculum learning.
In this stage, contrastive learning is adopted to train the visual features $f_i^v$ in a self-supervised manner. The whole process is expressed as:
where $\mathbb{E}[\cdot]$ is the expectation function, $\log(\cdot)$ is the logarithmic function, and $\exp(\cdot)$ is the exponential function; $\tau$ denotes the temperature parameter, and K denotes the number of negative samples; $f_i^{v\prime}$ is the feature extracted from the visual sample $x_i^{v\prime}$ that is augmented from $x_i^v$, and the procedure is $f_i^{v\prime}=\mathcal{F}^v(x_i^{v\prime})$. Additionally, the visual augmentation operations are formulated as:
where $\mathrm{Tem}(\cdot)$ is the visual clip sampling and temporal jittering function and s is the jitter step; $\mathrm{Spa}(\cdot)$ is a set of image pre-processing functions, such as image cropping, image resizing, and image flipping, and T is the clip length.
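The following sketch illustrates one possible realization of the $\mathrm{Tem}(\cdot)$ and $\mathrm{Spa}(\cdot)$ operations described above; the composition order, crop sizes, output resolution, and flip probability are assumptions of this sketch rather than parameters fixed by the disclosure:

```python
import torch

def tem(frames: torch.Tensor, clip_len: int, jitter_s: int) -> torch.Tensor:
    """Tem(.): sample a T-frame clip with a random temporal jitter of at most s frames.
    frames: (N, C, H, W) decoded frame sequence."""
    n = frames.shape[0]
    start = torch.randint(0, max(1, n - clip_len + 1), (1,)).item()
    start = min(start + torch.randint(0, jitter_s + 1, (1,)).item(), max(0, n - clip_len))
    return frames[start:start + clip_len]

def spa(clip: torch.Tensor, out_hw: int = 112) -> torch.Tensor:
    """Spa(.): random crop, resize, and horizontal flip applied consistently to a clip.
    clip: (T, C, H, W)."""
    t, c, h, w = clip.shape
    ch = cw = min(h, w) // 2 + torch.randint(0, min(h, w) // 2, (1,)).item()  # crop size
    y = torch.randint(0, h - ch + 1, (1,)).item()
    x = torch.randint(0, w - cw + 1, (1,)).item()
    clip = clip[:, :, y:y + ch, x:x + cw]                                     # random crop
    clip = torch.nn.functional.interpolate(clip, size=(out_hw, out_hw),
                                           mode="bilinear", align_corners=False)  # resize
    if torch.rand(1).item() < 0.5:                                            # random flip
        clip = torch.flip(clip, dims=[-1])
    return clip

# x_i^{v'}: an augmented view of a sampled clip (hypothetical frame tensor)
frames = torch.randn(64, 3, 128, 171)
x_v_aug = spa(tem(frames, clip_len=16, jitter_s=4))
```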
Afterward, the same self-supervised pre-training process is also applied to the audio features $f_i^a$ and is expressed as:
where $f_i^{a\prime}$ is the feature extracted from the audio sample $x_i^{a\prime}$ which is augmented from $x_i^a$, and the procedure is denoted as $f_i^{a\prime}=\mathcal{F}^a(x_i^{a\prime})$. The audio augmentation operations are denoted as:
$$x_i^{a\prime}=W_f(M_{fc}(M_{ts}(x_i^a))),$$
where $M_{ts}(\cdot)$ is the function of masking blocks of time steps, $M_{fc}(\cdot)$ denotes the function of masking blocks of frequency channels, and $W_f(\cdot)$ is the feature warping function.
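The masking operations $M_{ts}(\cdot)$ and $M_{fc}(\cdot)$ can be sketched as follows; the mask widths are illustrative assumptions, and the feature-warping step $W_f(\cdot)$ is omitted from this sketch for brevity:

```python
import torch

def mask_time_steps(spec: torch.Tensor, max_width: int = 10) -> torch.Tensor:
    """M_ts(.): zero out a random block of consecutive time steps. spec: (F, T)."""
    spec = spec.clone()
    f, t = spec.shape
    w = torch.randint(1, max_width + 1, (1,)).item()
    t0 = torch.randint(0, max(1, t - w), (1,)).item()
    spec[:, t0:t0 + w] = 0.0
    return spec

def mask_freq_channels(spec: torch.Tensor, max_width: int = 8) -> torch.Tensor:
    """M_fc(.): zero out a random block of consecutive frequency channels. spec: (F, T)."""
    spec = spec.clone()
    f, t = spec.shape
    w = torch.randint(1, max_width + 1, (1,)).item()
    f0 = torch.randint(0, max(1, f - w), (1,)).item()
    spec[f0:f0 + w, :] = 0.0
    return spec

# x_i^{a'} = W_f(M_fc(M_ts(x_i^a))); the warping W_f is not shown in this sketch
x_a = torch.randn(128, 250)                  # hypothetical 128 mel bins x 250 frames
x_a_aug = mask_freq_channels(mask_time_steps(x_a))
```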
This first-stage procedure in curriculum learning acts as a self-instance discriminator by directly optimizing in the feature space of the visual or audio modality, respectively. After the pre-training process, the visual feature representations and audio feature representations are discriminative, which means the resulting representations are distinguishable across different instances.
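A minimal sketch of the first-stage contrastive objective is given below, assuming a standard InfoNCE formulation with dot-product similarity on normalized features and negatives drawn from a memory bank; the exact similarity measure and negative-sampling scheme used by the disclosure may differ:

```python
import torch
import torch.nn.functional as F

def info_nce(f: torch.Tensor, f_pos: torch.Tensor, f_neg: torch.Tensor, tau: float = 0.07):
    """Standard InfoNCE loss with one positive and K negatives.
    f:     (B, D) anchor features (e.g. f_i^v)
    f_pos: (B, D) features of the augmented views (e.g. f_i^{v'})
    f_neg: (K, D) negative features (e.g. drawn from a memory bank)."""
    f, f_pos, f_neg = F.normalize(f, dim=1), F.normalize(f_pos, dim=1), F.normalize(f_neg, dim=1)
    l_pos = (f * f_pos).sum(dim=1, keepdim=True)        # (B, 1) positive logits
    l_neg = f @ f_neg.t()                               # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(f.shape[0], dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)

# toy first-stage (intra-modal) losses with random features (B=4, D=512, K=1024)
f_v, f_v_aug, bank_v = torch.randn(4, 512), torch.randn(4, 512), torch.randn(1024, 512)
loss_v = info_nce(f_v, f_v_aug, bank_v)
f_a, f_a_aug, bank_a = torch.randn(4, 512), torch.randn(4, 512), torch.randn(1024, 512)
loss_a = info_nce(f_a, f_a_aug, bank_a)
```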
Step 2.2: The second stage curriculum learning.
In this stage, the method transfers information between the visual representation $f_i^v$ and the audio representation $f_i^a$ with a teacher-student framework. Contrastive learning is also adopted for training and is expressed as:
where $(f_i^v, f_i^a)$ is a positive pair, while $(f_i^v, f_j^a)$, $i\neq j$, is a negative pair.
With this process, the method encourages the student network output to be as similar as possible to the teacher's by optimizing the above objective with the input pairs.
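A minimal sketch of the second-stage cross-modal objective is given below, assuming in-batch negatives and treating the teacher-side features as fixed via a stop-gradient; these choices are assumptions of the sketch, not requirements of the disclosure:

```python
import torch
import torch.nn.functional as F

def cross_modal_nce(f_student: torch.Tensor, f_teacher: torch.Tensor, tau: float = 0.07):
    """Cross-modal contrastive loss: (f_i^v, f_i^a) is the positive pair and
    (f_i^v, f_j^a), i != j, are negative pairs (in-batch negatives in this sketch)."""
    f_s = F.normalize(f_student, dim=1)
    f_t = F.normalize(f_teacher.detach(), dim=1)   # teacher side is not updated here
    logits = f_s @ f_t.t() / tau                   # (B, B); diagonal entries are positives
    labels = torch.arange(f_s.shape[0])
    return F.cross_entropy(logits, labels)

# distill audio structure into the visual student, and symmetrically the other way round
f_v, f_a = torch.randn(8, 512), torch.randn(8, 512)
loss_v_from_a = cross_modal_nce(f_v, f_a)
loss_a_from_v = cross_modal_nce(f_a, f_v)
```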
Step 3: Using the memory-bank mechanism for optimization.
In the first and second stages of curriculum learning, the key idea is to apply contrastive learning to learn the intrinsic structure of the audio and visual signals in the video. However, solving the objective of this approach typically suffers from the existence of trivial constant solutions. Therefore, the method uses one positive pair and K negative pairs for training. In the ideal case, the number of negative pairs should be set as K=N−1 over the whole video dataset $\mathcal{V}$, but this consumes a high computational cost and cannot be directly deployed in practice. To address this issue, the curriculum learning maintains a visual memory bank $\mathcal{M}^v=\{m_i^v\}_{i=1}^{K'}$ and an audio memory bank $\mathcal{M}^a=\{m_i^a\}_{i=1}^{K'}$ to store negative pairs, which can be easily optimized without large computational consumption during training. The bank size K′ is set as 16384 in the method, and the two banks dynamically evolve during the curriculum learning process. The update is formulated as:
where $f_i^v$, $f_i^a$ are the visual and audio features learned in a specific iteration step of the curriculum learning process. Since the visual and audio banks dynamically evolve with the video dataset while keeping a fixed size, the method obtains a variety of negative samples at a small cost. Both banks can be used to replace negative samples with the bank representations without increasing the training batch size.
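A sketch of a fixed-size memory bank with K′ = 16384 entries is shown below; the first-in-first-out overwrite is only one common way to keep such a bank dynamically evolving and is an assumption of this sketch, since the disclosure's exact update formula is not reproduced here:

```python
import torch

class MemoryBank:
    """Fixed-size feature bank (K' entries) that evolves with training and supplies
    negative samples without enlarging the batch. Here it is a simple FIFO buffer."""
    def __init__(self, size: int = 16384, dim: int = 512):
        self.bank = torch.nn.functional.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def update(self, feats: torch.Tensor):
        """Insert the current iteration's features (B, D), overwriting the oldest entries."""
        b = feats.shape[0]
        idx = (self.ptr + torch.arange(b)) % self.bank.shape[0]
        self.bank[idx] = torch.nn.functional.normalize(feats.detach(), dim=1)
        self.ptr = (self.ptr + b) % self.bank.shape[0]

    def negatives(self) -> torch.Tensor:
        return self.bank

# one bank per modality
bank_v, bank_a = MemoryBank(), MemoryBank()
bank_v.update(torch.randn(8, 512))   # f_i^v from the current iteration
bank_a.update(torch.randn(8, 512))   # f_i^a from the current iteration
```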
Step 4: Downstream task of action and audio recognition.
After the self-supervised curriculum learning process, the method obtains a pre-trained visual convolutional encoder $\mathcal{F}^v$ and a pre-trained audio convolutional encoder $\mathcal{F}^a$. To further investigate the correlation between visual and audio representations, downstream tasks are conducted by transferring the pre-trained visual convolutional encoder and the pre-trained audio convolutional encoder to action recognition and audio recognition based on $\mathcal{F}^v$ and $\mathcal{F}^a$, with formulas as follows:
where $y^{v*}$ is the predicted action label of the visual frame sequence $x^v$, $y^{a*}$ is the predicted audio label of the audio signal $x^a$, and y is the label variable; $\arg\max(\cdot)$ is the argument-of-the-maxima function and $P(\cdot)$ is the probability function.
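A sketch of this downstream transfer step is shown below, reusing the placeholder VisualEncoder from the Step 1 listing and attaching a linear classification head; whether the encoder is frozen or fine-tuned end-to-end, and the hypothetical class count of 101 (e.g., UCF-101), are choices of this sketch rather than requirements of the disclosure:

```python
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    """Pre-trained visual encoder F^v followed by a linear classification head."""
    def __init__(self, encoder: nn.Module, feat_dim: int = 512, num_classes: int = 101):
        super().__init__()
        self.encoder = encoder            # assumed to carry the self-supervised weights
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x_v: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x_v))          # unnormalized class scores

encoder_v = VisualEncoder()               # placeholder; in practice loaded with pre-trained weights
model = ActionClassifier(encoder_v, num_classes=101)
logits = model(torch.randn(2, 3, 16, 112, 112))
probs = torch.softmax(logits, dim=1)      # P(y | F^v(x^v))
y_star = probs.argmax(dim=1)              # y^{v*} = argmax_y P(y | F^v(x^v))
```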
The disclosure first applies the Kinetics-400 dataset as the unlabeled pre-training benchmark, which comprises 306,000 video clips available on the YouTube website; 221,065 videos among them are sampled from the training set for visual and audio representation learning. It is also a widely used dataset for self-supervised visual and audio representation learning. Afterward, the classification accuracies of downstream action and audio recognition are exploited for evaluating the pre-trained model of the disclosure. Specifically, top-k accuracy is adopted to evaluate the model generated by the disclosure. Top-k accuracy is the proportion of samples whose correct label is within the top k classes predicted by the model; it is a widely used metric in the recognition area, and k is set to 1 in this implementation. The large-scale action recognition benchmarks of the UCF-101 and HMDB-51 datasets are exploited to evaluate the implementation of action recognition. The UCF-101 dataset comprises 101 action classes with 13,320 short video clips. The HMDB-51 dataset has 6,766 video clips with 51 categories. The evaluation results for action recognition in this implementation are shown in Table 1.
Furthermore, the ESC-50 and DCASE datasets are exploited to evaluate the audio representations. ESC-50 contains 2,000 audio clips from 50 balanced environmental sound classes, and DCASE has 100 audio clips from 10 balanced scene sound classes. The evaluation results for audio recognition in this implementation are shown in Table 2.
As shown in Table 1 and Table 2, the learned visual and audio representations can be effectively applied to downstream action and audio recognition tasks and provide additional information for small-scale datasets.
To explore whether the audio and visual features can be grouped together, this implementation conducts a cross-modal retrieval experiment with ranked similarity values, the results of which are shown in the accompanying drawing.
It will be obvious to those skilled in the art that changes and modifications may be made, and therefore, the aim in the appended claims is to cover all such changes and modifications.