Most currently available video search engines are based on the "query by keyword" scenario and are built on text search engines that mainly use the associated textual information, such as surrounding text from the web page, speech transcripts, closed captions, and so on. However, the performance of text-based video search is still unsatisfactory, due to the mismatch between the surrounding text and the associated video, as well as the limited accuracy of automatic speech recognition (ASR), video text recognition, and machine translation (MT) techniques.
Video search reranking can be regarded as recovering the "true" ranking list from the initial noisy one by using visual information, i.e., refining the initial ranking list by incorporating a text cue and a visual cue. By the text cue, we mean that the initial text-based search result provides a baseline for the "true" ranking list: though noisy, it still reflects part of the "true" list and thus needs to be preserved to some extent, i.e., the correct information in the initial list should be kept. The visual cue is introduced by taking visual consistency as a constraint, e.g., visually similar video shots should have close ranking scores and vice versa. Reranking is then a trade-off between the two cues. It is worth emphasizing that this is the basic underlying assumption of most existing video search reranking approaches, though it may not be clearly stated.
Content-based video search reranking can be regarded as a process that uses visual content to recover the "true" ranking list from the noisy one generated from textual information. This paper explicitly formulates this problem in a Bayesian framework, i.e., maximizing the ranking score consistency among visually similar video shots while minimizing the ranking distance, which represents the disagreement between the objective ranking list and the initial text-based list. Different from existing point-wise ranking distance measures, which compute the distance in terms of individual scores, two new methods are proposed in this paper to measure the ranking distance based on the disagreement in terms of pair-wise orders. Specifically, the hinge distance penalizes pairs with reversed order according to the degree of the reversal, while the preference strength distance further considers the preference degree. By incorporating the proposed distances into the optimization objective, two reranking methods are developed, which are solved using quadratic programming and matrix computation respectively. Evaluation on the TRECVID video search benchmark shows that performance improvements of up to 21% on TRECVID 2006 and 61.11% on TRECVID 2007 are achieved relative to the text search baseline.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The claimed subject matter is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject innovation.
As utilized herein, terms “component,” “system,” “data store,” “evaluator,” “sensor,” “device,” “cloud,” “network,” “optimizer,” and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), and/or firmware. For example, a component can be a process running on a processor, a processor, an object, an executable, a program, a function, a library, a subroutine, and/or a computer or a combination of software and hardware. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
The two cues are modeled from the probabilistic perspective within a Bayesian framework. The text cue is modeled as a likelihood, which reflects the disagreement between the reranked list and the initial text-based one; the visual cue is modeled as a conditional prior, which indicates the ranking score consistency between visually similar samples. In the Bayesian framework, reranking is formulated as maximizing the product of the conditional prior and the likelihood; for this reason, it is referred to herein as Bayesian Reranking. Existing random walk based methods can be unified into such a framework.
The focus is on the likelihood term, while the conditional prior can be modeled by visual consistency directly. The likelihood is estimated by the ranking distance, i.e., the disagreement between the reranked list and the initial text-based one. Ranking distance is a crucial factor in video search reranking, which significantly affects the overall reranking performance but has not been well studied before. The point-wise ranking distance, which sums the individual score difference for each sample in the two ranking score lists, is used in existing video search reranking methods. However, such a point-wise approach fails to capture the disagreement between two lists accurately in terms of ranking. To tackle this problem, two novel ranking distances are proposed based on the pair-wise order disagreement. Specifically, the hinge distance penalizes pairs with reversed order according to the degree to which they are reversed, while the preference strength distance further considers the preference degree over pairs. By incorporating the distances into the optimization objective, hinge reranking and preference strength reranking are developed, which are solved by Quadratic Programming (QP) and matrix computation, respectively.
Firstly, existing video search reranking methods are reviewed. Then, reranking is formulated in a Bayesian framework and the general reranking model is derived. Next, two pair-wise ranking distances are developed and the corresponding reranking methods are presented. Implementation details for video search reranking are next considered. The connections between our proposed methods and “learning to rank” as well as random walk reranking are then presented. Experimental results and analysis are then given.
Recently, many methods have been proposed for video search reranking; they can be divided into three categories: PRF (Pseudo-Relevance Feedback) based, clustering based, and random walk based.
The first category is PRF based. PRF is a concept introduced from text retrieval, which assumes that a fraction of the top-ranked documents in the initial search results are pseudo-positive. PRF based video search reranking normally has three steps: (1) select pseudo-positive and pseudo-negative samples from the initial text-based search results; (2) train a classifier using the selected samples; (3) rerank the video shots with the relevance scores predicted by the trained classifier. Due to the low performance of text-based video search, the top ranked video shots cannot be used as pseudo-positives directly. Alternatively, some prior art uses the query images or example video clips as the pseudo-positive samples. The pseudo-negative samples are selected either from the least relevant samples in the initial ranking list or from the database, under the assumption that few samples in the database are relevant to the query. In step (2), different classifiers, such as SVM, Boosting, and Ranking SVM, can be adopted. Although these classifiers are effective, sufficient training data are required to achieve satisfactory performance since many parameters need to be estimated.
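By way of illustration only, the following sketch outlines one possible realization of the three PRF steps in Python, assuming that query example images serve as pseudo-positives and that the tail of the initial ranking list supplies pseudo-negatives; the function name, the feature inputs, and the SVM settings are hypothetical rather than taken from any particular prior art system.

```python
import numpy as np
from sklearn.svm import SVC

def prf_rerank(shot_features, initial_ranking, query_example_features, n_neg=100):
    """Sketch of PRF-based reranking (hypothetical parameters).

    shot_features: (N, d) visual features of the shots.
    initial_ranking: shot indices sorted by the text-based score, best first.
    query_example_features: (P, d) features of query examples, used as pseudo-positives.
    """
    # Step (1): pseudo-positive / pseudo-negative sample selection.
    pos = query_example_features
    neg = shot_features[initial_ranking[-n_neg:]]              # least relevant shots

    # Step (2): train a classifier on the pseudo-labeled samples.
    X_train = np.vstack([pos, neg])
    y_train = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

    # Step (3): rerank all shots by the predicted relevance score.
    relevance = clf.predict_proba(shot_features)[:, 1]
    return np.argsort(-relevance)
```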
The second category is clustering based. In some prior art, each video shot is given a soft pseudo-label according to the initial text-based ranking score, and then the Information Bottleneck principle is adopted to find the optimal clustering, which maximizes the mutual information between the clusters and the labels. The reranked list is obtained by first ordering the clusters according to the cluster conditional probability and then ordering the samples within each cluster based on their local feature density estimated via kernel density estimation. This method achieves good performance on named-person queries, while it is limited to queries that have significant duplicate characteristics.
In the third category, random walk based methods, a graph is constructed with the samples (video shots) as the nodes and the edges between them weighted by multi-modal similarity. Then, reranking is formulated as a random walk over the graph and the ranking scores are propagated through the edges. To leverage the text-based search result, a "dongle" node is attached to each sample with its value fixed to the initial text-based ranking score. The stationary probability of the random walk process is used as the reranked score directly. Random walk reranking can be unified into the proposed Bayesian reranking framework, while the ranking distance it adopts is actually point-wise and thus cannot precisely capture the "true" difference between the reranked list and the initial text-based one.
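The following sketch illustrates the standard dongle-node random walk iteration; the exact formulation in the prior art may differ, and the damping factor alpha as well as the assumption that every row of the similarity matrix has at least one nonzero entry are assumptions of this example.

```python
import numpy as np

def random_walk_rerank(W, text_scores, alpha=0.8, n_iter=100):
    """Random-walk reranking sketch with a "dongle" node per sample.

    W: (N, N) multimodal similarity matrix between video shots.
    text_scores: initial text-based ranking scores, used as the fixed dongle values.
    alpha: probability of following a graph edge instead of jumping to the dongle.
    """
    P = W / W.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    r = text_scores.astype(float).copy()
    for _ in range(n_iter):                     # iterate toward the stationary scores
        r = alpha * (P @ r) + (1.0 - alpha) * text_scores
    return r                                    # reranked scores; sort descending for the list
```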
There are also methods which incorporate auxiliary knowledge, including face detection, query examples, and concept detection, into video search reranking. Though the incorporation of auxiliary knowledge leads to performance improvement, it is not a general treatment. These methods suffer from either limited applicability to specific queries (face detection), the need for specific user interfaces (query examples), or limited detection performance and small vocabulary size (concept detection). In this paper, we only consider the general reranking problem, which does not assume any auxiliary knowledge besides the visual information of the samples.
Before formulating reranking, a few terms are defined below.
DEFINITION 1. A ranking score list (score list in brief), r=[r1, r2, . . . , rN]T, is a vector of the ranking scores, which corresponds to a sample set X={x1, x2, . . . , xN}.
DEFINITION 2. A ranking list l is a permutation of X sorted by the ranking scores in descending order.
Generally, reranking can be regarded as a mapping from the initial ranking list to the objective ranking list. However, the ranking scores are also useful in most situations. For this reason, we define reranking on the score list instead of the ranking list.
DEFINITION 3. A reranking function is defined as

r = ƒ(X, r̄)  (1)

where X is the sample set, r̄ is the initial ranking score list, and r is the objective (reranked) score list.
By defining reranking on the score list instead of the ranking list, more flexibility is achieved. For application scenarios where the initial ranking scores are unavailable, such as Google image search reranking, the initial score list r̄ can still be constructed from the rank positions of the initial ranking list, as discussed below.
The difficulty in reranking is how to derive the optimal reranking function (1). Herein, the reranking problem is investigated from the probabilistic perspective and an optimal reranking function is derived based on Bayesian analysis.
Supposing the ranking score list is a random variable, reranking can be regarded as a process to derive the most probable score list given the initial one as well as the visual content of the samples. From the probabilistic perspective, reranking is to derive the optimum r* with the maximum a posteriori probability given the samples X and the initial score list r̄,

r* = arg maxr p(r|X, r̄)  (2)
According to Bayes' formula, the posterior is proportional to the product of the conditional prior probability and the likelihood
p(r|X, r̄) ∝ p(r|X) p(r̄|r, X)  (3)
where p(r|X) is the conditional prior of the score list given the visual content of samples. For instance, the ranking score list with dissimilar scores for visually similar video shots may be assigned a small probability.
p(r̄|r, X) is the likelihood of the initial score list r̄ given the objective score list r and the samples X.
In most video search systems, the initial ranking score list is obtained by using the textual information regardless of the visual content; therefore, the conditional independence assumption of the visual information X and the initial score list r̄ given r is reasonable, i.e.,

p(X, r̄|r) = p(X|r) p(r̄|r)

hence,

p(r̄|r, X) = p(r̄|r)  (4)
Substituting (4) into (3) we obtain
p(r|X, r̄) ∝ p(r|X) p(r̄|r)  (5)
Replacing the posterior in (2) with (5), we formulate reranking as maximizing the product of a conditional prior and a likelihood, which is defined as Bayesian Reranking.
DEFINITION 4. Bayesian Reranking is reranking using the function

r = arg maxr p(r|X) p(r̄|r)  (6)

where p(r|X) is the conditional prior of the score list given the visual content of the samples and p(r̄|r) is the likelihood of the initial score list given the objective score list.
In Bayesian Reranking, the likelihood and the conditional prior need to be estimated to complete the reranking function. Below, it will be shown how to model the prior and likelihood using energy functions.
In video search reranking, it is expected that visually similar video shots should have close ranking scores. This can be modeled by the conditional prior in the Bayesian Reranking formulation. Specifically, the conditional prior is formulated as a pair-wise Markov network,

p(r|X) = (1/Z) exp(−Σi,j Ψij(r, X))  (7)

where Ψij(r, X) is the energy function defined on the pair of samples {ri, xi, rj, xj}, and Z is a normalizing constant with Z = Σr exp(−Σi,j Ψij(r, X)).
A graph G, as illustrated in the accompanying figures, is constructed with the samples as the nodes and the edges between them weighted by the visual similarity of the corresponding samples, e.g., using a Gaussian kernel

wij = exp(−∥xi − xj∥²/σ²)

where σ is the scaling parameter.
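For concreteness, a minimal sketch of the graph construction follows, assuming the Gaussian kernel above and a simple K-nearest-neighbor sparsification; the median-distance heuristic for the scaling parameter σ is an assumption of this example, not a requirement of the framework.

```python
import numpy as np

def knn_gaussian_graph(X, K=30, sigma=None):
    """Build the affinity matrix W of graph G from visual features X of shape (N, d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # squared Euclidean distances
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]))                 # heuristic scaling parameter
    W = np.exp(-d2 / (sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # keep only each sample's K strongest edges, then symmetrize
    weakest = np.argsort(-W, axis=1)[:, K:]
    for i, drop in enumerate(weakest):
        W[i, drop] = 0.0
    return np.maximum(W, W.T)
```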
Various methods can be used to derive the energy function Ψij(r, X). Based on the assumption that if the samples xi and xj are visually similar then the corresponding scores ri and rj should be close as well, and vice versa (the so-called visual consistency assumption), the energy function is defined as

Ψij(r, X) = wij(ri − rj)²

hence the conditional prior is

p(r|X) = (1/Z) exp(−Σi,j wij(ri − rj)²)  (8)

which is widely used in semi-supervised learning, and the exponent is known as Laplacian Regularization. An alternative method, Normalized Laplacian Regularization, can also be used to derive the prior,

p(r|X) = (1/Z) exp(−Σi,j wij(ri/√di − rj/√dj)²)  (9)

where di = Σj wij.
From the experimental analysis, Laplacian Regularization performs better than Normalized Laplacian Regularization.
The likelihood is modeled as

p(r̄|r) = (1/Z) exp(−c·Dist(r, r̄))  (10)

where Z is the normalizing constant, c is a scaling parameter, and Dist(r, r̄) is the ranking distance, which measures the disagreement between the objective score list r and the initial score list r̄.
The Bayesian Reranking formulation in Eq. (6) is equivalent to minimizing the following energy function,
E(r) = Σi,j wij(ri − rj)² + c·Dist(r, r̄)  (11)
where the first and second terms correspond to the conditional prior in Eq. (8) and the likelihood in Eq. (10), respectively, and c can be viewed as a trade-off parameter between the two terms. The main work of this paper focuses on the second term, i.e., the evaluation of the ranking distance.
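A small sketch of evaluating the energy in Eq. (11) is given below; the ranking distance is passed in as a callable so that the point-wise or pair-wise distances discussed later can be plugged in, which is a design choice of this illustration.

```python
import numpy as np

def reranking_energy(r, r_init, W, c, dist):
    """Energy of Eq. (11): visual consistency term plus c times a ranking distance.

    r, r_init: candidate and initial ranking score lists of length N.
    W: affinity matrix of graph G.
    dist: callable implementing Dist(r, r_init).
    """
    consistency = np.sum(W * (r[:, None] - r[None, :]) ** 2)   # sum_ij w_ij (r_i - r_j)^2
    return consistency + c * dist(r, r_init)
```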
Below, the issues in existing ranking distances are analyzed, and measuring the ranking distance from the pair-wise perspective is proposed. A toy example is given for illustration, which comprises five samples {x1, x2, x3, x4, x5} and four ranking score lists {r0, r1, r2, r3}, as shown in Table 1.
Sorting the samples by their scores, the corresponding ranking lists derived from r0, r1, r2, and r3 are

l0 = (x1, x2, x3, x4, x5)
l1 = (x5, x4, x3, x2, x1)
l2 = (x1, x5, x4, x3, x2)
l3 = (x1, x2, x3, x4, x5)
To measure the ranking distance between the score lists, one intuitive idea is to take each score list as an "instance" and then use the list-wise approach, which has been exploited in "learning to rank." However, as shown in some prior art, which defines the distance of two score lists as the cross entropy between the two distributions of permutations conditioned respectively on each of the score lists, the list-wise approach is computationally intractable since the number of permutations is O(N!), where N is the number of samples.
Alternatively, the most direct and simple method to measure the ranking distance between two score lists is to compute the individual score difference for each sample and then sum the differences, the so-called point-wise approach, as shown below,

Dist(r, r̄) = Σi (ri − r̄i)²  (12)
The corresponding graphical model representation is illustrated in the accompanying figures.
Point-wise ranking distance, however, fails to capture the disagreement between the score lists in terms of ranking in some situations. Take the toy example in Table 1 for illustration. The distances between r0 and r1, r2, r3 computed using Eq. (12) are: Dist(r1, r0)=0.63, Dist(r2, r0)=0.70, and Dist(r3, r0)=1.12. Dist(r3, r0) is the largest; however, in terms of ranking, the distance between r3 and r0 should be the smallest since l3 is identical to l0 while l1 and l2 are not.
As the ranking information can be represented entirely by the pair-wise ordinal relations, the ranking distance between two score lists can be computed from the pairs, the so-called pair-wise approach. The graphical model representation of the pair-wise distance is illustrated in the accompanying figures.
Before further discussing the pair-wise approach, we first define the notation ≻.

DEFINITION 5. xi ≻ xj is a relation on a pair (xi, xj) if ri > rj, i.e., xi is ranked before xj in the ranking list l derived from r.

All the pairs (xi, xj) satisfying xi ≻ xj compose a set Sr = {(i, j) : xi ≻ xj}. For any two samples xi and xj, either (i, j) or (j, i) belongs to Sr. Therefore, all the pair-wise ordinal relations are reflected in Sr.
The simplest pair-wise ranking distance could be defined as below,

Dist(r, r̄) = Σ(i,j)∈Sr̄ I(ri < rj)  (13)

where I(·) is the indicator function, which equals 1 if the condition holds and 0 otherwise, and Sr̄ is the pair set derived from the initial score list r̄.
The basic idea of (13) is to count the number of pairs whose order relations disagree in the two lists. Using (13), Dist(r1, r0)=10, Dist(r2, r0)=6, and Dist(r3, r0)=0, which indeed captures the differences between the ranking lists.
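The counting distance (13) can be computed as below; the score vectors in the usage comment are hypothetical values chosen only to reproduce the ranking lists of the toy example, since the actual values of Table 1 are not repeated here.

```python
import numpy as np

def pairwise_disagreement(r, r_init):
    """Distance (13): number of pairs whose order in r disagrees with the order in r_init."""
    init_order = r_init[:, None] > r_init[None, :]    # (i, j) in the pair set of the initial list
    reversed_order = r[:, None] < r[None, :]          # the pair is reversed in the new list
    return int(np.sum(init_order & reversed_order))

# Hypothetical scores consistent with l0 and l2 of the toy example:
# r0 = [5, 4, 3, 2, 1] and r2 = [5, 1, 2, 3, 4] give a distance of 6.
```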
However, the optimization problem of (11) with the ranking distance (13) is computationally intractable. Below we define two pair-wise ranking distances with which the optimization problem of (11) becomes solvable.
The following description describes hinge reranking. Intuitively, if a pair's order relation remains the same before and after reranking, the distance contributed by this pair will be zero, just as in (13). However, if a pair's order is reversed after reranking, instead of giving an equal penalization (1 in (13)) for each such pair, the penalization should be given according to the degree to which the pair's order is reversed. Hence, we define the hinge distance as

Dist(r, r̄) = Σ(i,j)∈Sr̄ max(0, rj − ri)  (14)

where max(0, ·) is the hinge function.
Substituting the hinge distance (14) into (11), the following optimization problem is derived,

minr Σi,j wij(ri − rj)² + c Σ(i,j)∈Sr̄ max(0, rj − ri)  (15)

which is equivalent to

minr,ξ Σi,j wij(ri − rj)² + c Σ(i,j)∈Sr̄ ξij
s.t. ri − rj ≥ −ξij, ξij ≥ 0, ∀(i, j) ∈ Sr̄

where ξij is a slack variable. By introducing a factor a, which is a small positive constant serving as a margin, the following quadratic optimization problem is obtained,

minr,ξ Σi,j wij(ri − rj)² + c Σ(i,j)∈Sr̄ ξij
s.t. ri − rj ≥ a − ξij, ξij ≥ 0, ∀(i, j) ∈ Sr̄  (16)
Reranking with the above optimization problem is called hinge reranking since the hinge distance is adopted. The optimization problem (16) can be solved using an interior-point method. In some situations the computational cost is high, especially when the number of constraints is large. For instance, if there are 1000 samples in the initial ranking list, there will be about one million constraints. Below, a more efficient method is developed using a different ranking distance, which can be solved analytically by matrix computation.
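As an alternative to the interior-point QP, the following sketch minimizes the unconstrained hinge objective (15) directly by subgradient descent; the step size and iteration count are arbitrary choices for illustration, and the pair set is assumed to be given as a list of (i, j) index pairs taken from the initial list.

```python
import numpy as np

def hinge_rerank(W, r_init, pairs, c=1.0, lr=0.01, n_iter=500):
    """Minimize sum_ij w_ij (r_i - r_j)^2 + c * sum_{(i,j) in S} max(0, r_j - r_i).

    pairs: list of (i, j) with x_i ranked before x_j in the initial list.
    """
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian of G
    I = np.array([p[0] for p in pairs])
    J = np.array([p[1] for p in pairs])
    r = r_init.astype(float).copy()
    for _ in range(n_iter):
        grad = 4.0 * (L @ r)                       # gradient of sum_ij w_ij (r_i - r_j)^2
        violated = r[J] > r[I]                     # reversed pairs contribute to the hinge term
        np.add.at(grad, I[violated], -c)
        np.add.at(grad, J[violated], c)
        r -= lr * grad
    return r
```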
In reranking, not only the order relation but also the preference strength, i.e., the score difference of the samples in a pair (ri − rj for the pair (xi, xj)), is indicative. For example, given two pairs, one comprising two tigers with different typicality and the other comprising a tiger and a stone, the preference strength is obviously different for these two pairs. Such information can be utilized in video search reranking, and an alternative ranking distance, called the preference strength distance, is defined as follows,

Dist(r, r̄) = Σi,j (αij(ri − rj) − 1)²  (17)

where αij = 1/(r̄i − r̄j), i ≠ j, is the preference strength derived from the initial score list.
From Eq. (17) we can see that, along with the preference strength, the order relations on pairs are also reflected in the preference strength ranking distance.
Replacing the distance function in (11) with the preference strength distance (17), the optimization problem of preference strength reranking is

minr Σi,j wij(ri − rj)² + c Σi,j (αij(ri − rj) − 1)²  (18)
Supposing one solution of (18) is r*, it is apparent that ŕ = r* + μe is also a solution of (18), where e is a vector with all elements equal to 1 and μ is an arbitrary constant. Obviously, all such solutions give the same ranking list. Here, a constraint rN = 0 is added to (18), where N is the length of r, so that the unique solution can be derived, as given in the following proposition.
PROPOSITION 1. The solution of (18) with the constraint rN = 0 is

r* = (1/2) L̆⁻¹ c̆

where L̆ and c̆ are obtained by replacing the last row of L̃ with [0, 0, . . . , 0, 1]1×N and the last element of c̃ with zero, respectively; L̃ = D̃ − W̃ and c̃ = 2cAe; W̃ = [w̃ij]N×N with w̃ij = wij + cαij²; D̃ = Diag(d̃) is a degree matrix with d̃ = [d̃1, . . . , d̃N]T and d̃i = Σj w̃ij; and A = [αij]N×N is an anti-symmetric matrix with αij = 1/(r̄i − r̄j).
PROOF. Denote the objective of (18) by E(r). Taking the derivative of E(r) with respect to r and equating it to zero gives

2L̃r = c̃  (19)

The solution of (19) is non-unique since the Laplacian matrix L̃ is singular. With the constraint rN = 0, we replace the last row of L̃ with [0, 0, . . . , 0, 1]1×N to obtain L̆ and the last element of c̃ with zero to obtain c̆. Then, the solution is

r* = (1/2) L̆⁻¹ c̆
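A direct transcription of Proposition 1 into Python is sketched below, using the matrices as reconstructed above (W̃, D̃, L̃, A, c̃); it assumes the initial scores are pairwise distinct so that the preference strengths αij are finite.

```python
import numpy as np

def preference_strength_rerank(W, r_init, c=1.0):
    """Closed-form preference strength reranking (sketch of Proposition 1)."""
    N = len(r_init)
    diff = r_init[:, None] - r_init[None, :]
    A = np.zeros((N, N))
    off_diag = ~np.eye(N, dtype=bool)
    A[off_diag] = 1.0 / diff[off_diag]          # anti-symmetric matrix of preference strengths
    W_t = W + c * A ** 2                        # w~_ij = w_ij + c * alpha_ij^2
    L_t = np.diag(W_t.sum(axis=1)) - W_t        # L~ = D~ - W~
    c_t = 2.0 * c * (A @ np.ones(N))            # c~ = 2c A e
    L_t[-1, :] = 0.0                            # impose r_N = 0 by replacing the last row ...
    L_t[-1, -1] = 1.0
    c_t[-1] = 0.0                               # ... and the last element of c~
    return np.linalg.solve(2.0 * L_t, c_t)      # solve 2 L r = c for the reranked scores
```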
As aforementioned, there are two methods for reranking. When they are applied to video search reranking, some implementation details should be considered.
As can be observed in Eq. (11), the ranking distance is actually employed to preserve the information of the initial score list to some extent. Currently, all the pairs in Sr̄ are used in computing the ranking distance. However, since the initial text-based ranking list is noisy, only a fraction of the pairs may be selected, e.g., the ρ-adjacent pairs, i.e., pairs of samples whose rank positions in the initial list differ by no more than ρ.
In video search, the performance of the text baseline is often poor and the text scores are mostly unreliable because of the inaccuracy and mismatch of the ASR and MT transcripts of the video. Besides, in some situations the text search scores are unavailable for reranking, e.g., in web image search. Three strategies are disclosed to assign the initial scores: using the rank positions directly (R), using the normalized rank (NR), and using the text search scores (NTS).
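The sketch below illustrates ρ-adjacent pair selection and the rank-based score strategies; the concrete scoring (the top shot receives N, the last receives 1, optionally divided by N for NR) is one plausible reading consistent with the preference strengths reported in the experiments, not a verbatim reproduction of the disclosed strategies.

```python
import numpy as np

def rho_adjacent_pairs(ranked_indices, rho=1):
    """Pairs (i, j) whose rank positions in the initial list differ by at most rho,
    with i ranked before j."""
    pairs = []
    N = len(ranked_indices)
    for a in range(N):
        for b in range(a + 1, min(a + rho + 1, N)):
            pairs.append((ranked_indices[a], ranked_indices[b]))
    return pairs

def initial_scores(N, strategy="R"):
    """Rank-based initial scores: 'R' uses the reversed rank position directly,
    'NR' normalizes it by the list length, so 1-adjacent pairs differ by 1 or 1/N."""
    scores = np.arange(N, 0, -1, dtype=float)   # top shot gets N, last gets 1
    if strategy == "NR":
        scores /= N
    return scores
```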
First, we define a ranking function analogous to the reranking function.
DEFINITION 7. A ranking function is defined as
r=ƒ(K)
where K={kj} is a set of features with kj being extracted from the pair comprising the query q and the sample xj and r is the objective ranking score list.
The goal of most "learning to rank" methods is to learn a ranking function automatically from the training data,

ƒ* = arg minƒ Σi=1..m Loss(ƒ(Ki), ri)  (20)

and then predict the ranking score list of the samples under a test query qt using the learned ranking function,

rt = ƒ*(Kt)

where Loss(·, ·) measures the disagreement between the predicted score list and the labeled one, Kt is the test feature set extracted from pairs of the test query qt and the samples, and {Ki, ri}, i = 1, . . . , m, is the training data comprising m pre-labeled ranking lists for m queries {qi}.
Reranking can be formulated as a learning to rank problem: first, a fraction of the initial ranking score list is selected based on some strategy, as shown in Section 5.1; then, the selected fraction of the initial ranking list is used to learn an optimal ranking function; finally, the reranked list is obtained using the learned ranking function. This is actually the method used in some prior art, which adopts Ranking SVM to learn a pair-wise ranking function.
The problem (20) can be regarded as inductive learning to rank, which learns an explicit ranking function without utilizing the unlabeled data. In reranking, however, an explicit ranking function is not necessarily needed; what is desired is just the reranked score list. A more effective way is to deduce the optimal ranking list from the training data directly, without explicitly learning a ranking function, as

rt* = arg maxrt p(rt|Kt, {Ki, ri}i=1..m)  (21)

which corresponds to the transduction paradigm in machine learning.
Rewriting the reranking objective (2) as

r* = arg maxr p(r|X, {X, r̄})  (22)
Since only one query is involved in reranking, the features are extracted from the samples regardless of the query. Except for this, the objectives (21) and (22) have the same form. We can see that reranking is actually transductive learning to rank with only one training sample, i.e., the initial ranking score list. From this perspective, the proposed hinge reranking and preference strength reranking can be applied as transductive learning to rank methods as well. Meanwhile, any transductive learning to rank method developed in the future can be used for reranking seamlessly.
The objective function of random walk reranking can be derived in a form similar to the above, from which we can see that random walk reranking actually has a similar objective to Bayesian Reranking (11). The two terms in its objective function correspond to the visual consistency regularization and a normalized point-wise ranking distance, respectively.
Next, the reranking methods are evaluated on a widely used video search benchmark and compared to several existing approaches. Also discussed is the influence of different implementation strategies and parameters in our methods.
The experiments were conducted on the TRECVID 2006 and 2007 video search benchmarks. The TRECVID 2006 dataset consists of 259 videos with 79,484 shots, while the TRECVID 2007 dataset consists of 109 videos with 18,142 shots. The data are collected from English, Chinese, and Arabic news programs, accompanied by automatic speech recognition (ASR) and machine translation (MT) transcripts in English provided by NIST.
The text search baseline used in this paper is based on the Okapi BM-25 formula using the ASR/MT transcripts at the shot level. For each of the 48 queries (24 each for TRECVID 2006 and TRECVID 2007), at most the top 1400 relevant shots are returned as the initial text search result.
The low-level feature used in reranking is 225-dimensional block-wise color moments extracted over a 5×5 fixed grid partition, with each block described by a 9-dimensional feature. When constructing the graph G, each sample is connected to its K nearest neighbors.
The performance is measured by the widely used non-interpolated Average Precision (AP). We average the APs over all 24 queries in each year to obtain the Mean Average Precision (MAP), which measures the overall performance.
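For reference, a minimal sketch of the non-interpolated AP computation is given below; the binary relevance input and the normalization by the total number of relevant shots follow the standard TRECVID convention, which this example assumes.

```python
def average_precision(ranked_relevance, n_relevant):
    """Non-interpolated Average Precision of one ranked result list.

    ranked_relevance: iterable of 0/1 relevance judgments in ranked order.
    n_relevant: total number of relevant shots for the query.
    """
    hits, ap = 0, 0.0
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            ap += hits / k          # precision at the rank of each relevant shot
    return ap / n_relevant if n_relevant else 0.0

# MAP is the mean of the per-query APs over the 24 queries of each year.
```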
Our two methods, hinge reranking and Preference Strength (PS) reranking, are compared with random walk reranking and Ranking SVM based PRF. In addition, they are also compared with two graph based transductive learning methods: GRF (Gaussian Random Field) and LGC (Local and Global Consistency), which have the same form of objective function as (11) and so can be adopted for reranking directly. For GRF, the point-wise ranking distance (12) and Laplacian regularization (8) are used, while for LGC the point-wise ranking distance (12) and Normalized Laplacian regularization (9) are adopted.
The ρ-adjacent strategy for pair selection and the R strategy for the initial scores are adopted in our methods. The parameters are selected at the global optimum for all the methods. However, in hinge reranking, K is fixed to 30 when constructing the graph G and only the top 500 samples in the text search result are involved, considering the efficiency in practical applications.
The experimental results are summarized in Table 2. PS reranking achieves consistent and significant performance improvements in both years, 21% on TRECVID 2006 and 45.42% on TRECVID 2007. Hinge reranking performs best and obtains a 61.11% improvement on TRECVID 2007, while little improvement is achieved on TRECVID 2006. As aforementioned, in hinge reranking only the top 500 samples are reranked. However, the percentage of the positive samples in the top 500 among all the positive samples in the text-based search result is 65.9% on TRECVID 2006; the remaining 34.1% of positive samples beyond the top 500 remain untouched in reranking.
The performances of the proposed methods on each query in TRECVID 2006 and 2007 are illustrated in the accompanying figures.
It can also be seen that the performance on some queries degrades after reranking, such as query182 ("soldiers or police with one or more weapons and military vehicles") and query200 ("hands at a keyboard typing or using a mouse"). There could be two reasons. One is that the low-level feature is insufficient to represent visual content with large variations; hence, in the future, semantic similarity will be incorporated into the reranking methods. The other reason is that the parameters are set to be the same for each query, which is obviously not optimal. The performances of the queries with per-query optimal parameters in PS reranking (PS-Best) are shown in the last column of the corresponding figure.
Below, the performance of the proposed methods and strategies is analyzed. If not explicitly stated, the experiments and analysis are conducted on PS reranking, since the QP solving in hinge reranking is extremely slow when the number of constraints is large.
Pair selection is a useful pre-processing step for video search reranking especially when the initial ranking score list is very noisy, as detailed in Section 5.1.
Firstly, we conduct experiments using ρ-adjacent pairs with different values of ρ; the results are shown in the accompanying figures.
As illustrated in the corresponding figure, the two data sets behave differently with respect to ρ. There could be two reasons: (1) the text baseline on TRECVID 2007 is noisier, so that better preservation of the initial result is not beneficial; (2) for query219 ("the Cook character in the Klokhuis series"), which basically dominates the performance of TRECVID 2007 as illustrated in the per-query results, the relevant shots are visually highly consistent, so relying more on the visual consistency cue is advantageous.
The performance of ρ-adjacent pair selection is compared with the other strategies in Table 3. As shown, ρ-adjacent outperforms the others on both the TRECVID 2006 and 2007 data sets, even though the other two methods generate more "correct" pairs. The reason is that ρ-adjacent pairs preserve all the necessary order information.
Different strategies for the initial scores, as presented in Section 5.2, have different effects on PS reranking; some observations are presented below. As shown in Table 4, R and NR, which only use the rank instead of the text scores, outperform NTS on both TRECVID 2006 and TRECVID 2007.
In addition, R performs better than NR, especially on TRECVID 2006. The reason could be as follows. In R, the preference strength for 1-adjacent pairs equals 1 in each query. In NR, however, the preference strength for 1-adjacent pairs is 1/N, which differs among queries. Based on the statistics, N varies from 52 to 1400 in TRECVID 2006 and from 28 to 1400 in TRECVID 2007. The optimal parameters, such as the trade-off parameter c, should differ according to the preference strength, as can be observed in the optimization objective. Since in our experiments the parameters are selected globally, it is more appropriate to assign each query an equal preference strength for 1-adjacent pairs, i.e., R is more suitable in this situation.
K is an important parameter when constructing the graph G. A larger K can ensure that more relevant samples are connected to each other; however, edges between relevant and irrelevant samples will be added too, which could degrade the performance because score consistency between relevant and irrelevant samples is not necessary. With a smaller K, the "incorrect" edges are eliminated while some of the "correct" edges between relevant samples are also missed, which weakens the necessary consistency.
The performance with different values of K is illustrated in the accompanying figures. Specifically, on TRECVID 2006 the maximum MAP (0.0461) is obtained at K=30, while on TRECVID 2007 the maximum MAP (0.0445) is achieved at K=10. From the data, the average number of relevant samples per query is 55 in TRECVID 2006 and 24 in TRECVID 2007. We can observe that the optimal K is about half the average number of relevant samples, which can provide a rough guideline for setting K in future practical applications.
The trade-off parameter c is used to balance the effects of the two terms: the consistency regularization and the ranking distance. A larger c indicates that more information from the text search baseline is preserved in the reranked list; when c=∞, the reranked list will be the same as the initial one if all the pairs are used. A smaller c means that the visual consistency term plays the major role in reranking; when c=0, the result is totally dominated by the visual consistency, regardless of the initial ranking score list.
As illustrated in the corresponding figure, the optimal value of c differs on the two data sets.
As shown before, the MAPs of the text search baselines of TRECVID 2006 and 2007 are 0.0381 and 0.0306, respectively. However, the performance of the TRECVID 2007 text search baseline is dominated by query219 ("the Cook character in the Klokhuis series"); it would drop to 0.0139 if that query were removed. Basically, a larger c (100) is appropriate for a better text search baseline, as in TRECVID 2006, while a smaller c (0.01) suits a worse text search baseline, as in TRECVID 2007. It can be concluded that the trade-off parameter c can be set according to the performance of the text search baseline.
c is also related to the number of pairs used. Specifically, on TRECVID 2006, the optimal c is 100, 10, and 0.01, respectively, for 1-adjacent, 10-adjacent, and (N−1)-adjacent pairs.
In this application, a general framework for video search reranking is proposed, which explicitly formulates reranking as a global optimization problem from the Bayesian perspective. Under this framework, with two novel pair-wise ranking distances, two effective video search reranking methods, hinge reranking and preference strength reranking, are proposed. The experiments conducted on the TRECVID datasets have demonstrated that our methods outperform several existing reranking approaches.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Accordingly, the invention is not limited except as by the appended claims.