Due to rapid advances in video capture technology, the cost of video capture devices has dropped greatly in recent years. As a result, video capture devices have surged in availability and popularity. Video capture functionality is now available to consumers on a mass market level in a variety of different forms such as mobile phones, digital cameras, digital camcorders, web cameras and the like. Additionally, laptop computers are also now available with integrated web cameras. As a result, the quantity of digital video data being captured has recently surged to an unprecedented level. Furthermore, corollary advances in data storage, compression and network communication technologies have made it cost effective for mass market consumers to store and communicate this video data to others. A wide variety of mass market software applications and other tools also now exist which provide consumers with the ability to view, manipulate and further share this video data for a variety of different purposes.
This Summary is provided to introduce a selection of concepts, in a simplified form, that are further described hereafter in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Video concept detection (VCD) technique embodiments described herein are generally applicable to classifying visual concepts contained within a video clip based upon a prescribed set of target concepts, each concept corresponding to a particular semantic idea of interest. These technique embodiments and the classification resulting therefrom can be used to increase the speed and effectiveness by which a video clip can be browsed and searched for particular concepts of interest. In one exemplary embodiment, a video clip is segmented into shots and a multi-layer multi-instance (MLMI) structured metadata representation of each shot is constructed. A set of pre-generated trained models of the target concepts is validated using a set of training shots. An MLMI kernel is recursively generated which models the MLMI structured metadata representation of each shot by comparing prescribed pairs of shots. The MLMI kernel can subsequently be utilized to generate a learned objective decision function which learns a classifier for determining if a particular shot (that is not in the set of training shots) contains instances of the target concepts. In other exemplary embodiments, a regularization framework can be utilized in conjunction with the MLMI kernel to generate modified learned objective decision functions. The regularization framework introduces explicit constraints which serve to maximize the precision of the classifier.
In addition to the just described benefits, other advantages of the VCD technique embodiments described herein will become apparent from the detailed description which follows hereafter when taken in conjunction with the drawing figures which accompany the detailed description.
The specific features, aspects, and advantages of the video concept detection technique embodiments described herein will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description of video concept detection technique embodiments, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific exemplary embodiments in which the VCD technique can be practiced. It is understood that other embodiments can be utilized and structural changes can be made without departing from the scope of the technique embodiments.
1.0 Introduction To VCD
As is appreciated by those of skill in the art of video/film digital processing, video annotation generally refers to a method for annotating a video clip with metadata that identifies one or more particular attributes of the clip. Video concept detection (VCD) is one possible method for performing video annotation based on a finite set of particular visual semantic concepts of interest (hereafter referred to simply as “target concepts”). Generally speaking, the VCD technique embodiments described herein classify the visual concepts contained within a video clip based upon a prescribed set of target concepts, and then generate structured metadata that efficiently describes the concepts contained within the clip at both the semantic and syntactic levels. These technique embodiments and the classification resulting therefrom are advantageous in that they can be used to increase the speed and effectiveness by which a video clip can be browsed and searched for target concepts. This is especially advantageous considering the aforementioned surge in the quantity of digital video data that is being captured, stored and communicated, and the large volume of data associated with a typical set of digital video data (herein referred to as a “video clip” or simply a “clip”).
A fundamental step in performing the aforementioned classification of a video clip is to first understand the semantics of the data for the clip. This step can be characterized as a learning or modeling process. It is noted that a semantic gap generally exists between the high-level semantics of a particular video clip and the low-level features contained therein. The VCD technique embodiments described herein are employed as a way to narrow this semantic gap. As such, these VCD technique embodiments serve an important role in the aforementioned learning/modeling process, and therefore also serve an important role towards achieving an understanding of the semantics of a video clip. The aforementioned structured metadata generated from these VCD technique embodiments can be used as the basis for creating a new generation of mass market software applications, tools and systems for quickly and effectively browsing video clips, searching the clips for target concepts, manipulating the clips, and communicating the clips to others.
2.0 MLMI Framework
Generally speaking, a video clip, which can include a plurality of different scenes along with one or more moving objects within each scene, has distinctive structure characteristics compared to a single image of a single scene. More particularly, as will be described in detail hereafter, a clip intrinsically contains hierarchical multi-layer metadata structures and multi-instance data relationships. Accordingly, the VCD technique embodiments described herein are based on a structure-based paradigm for representing a clip as hierarchically structured metadata. As such, these technique embodiments are advantageous since they avail themselves of the hierarchical multi-layer metadata structures and multi-instance data relationships contained within a clip. More particularly, as noted heretofore and as will be described in more detail hereafter, these VCD technique embodiments generally involve classifying the visual concepts contained within a video clip and generating hierarchically structured metadata therefrom, where data relationships inside the clip are modeled using a hierarchical multi-layer multi-instance (MLMI) learning/modeling (hereafter also referred to simply as modeling) framework. As will be described in more detail hereafter, this MLMI framework employs a root layer and a hierarchy of sub-layers which are rooted to the root layer.
Before the VCD technique embodiments described herein are applied to a particular video clip, it is assumed that an MLMI structured metadata representation of the clip has been constructed as follows. First, it is assumed that a conventional method such as a pixel value change-based detection method has been used to perform shot boundary detection on the clip such that the clip is segmented into a plurality of shots, thus constructing what will hereafter be termed a "shot layer." Each shot (hereafter also referred to as a rooted tree or a shot T) contains a series of consecutive frames in the video that represent a distinctive coherent visual theme. As such, a video clip typically contains a plurality of different shots. Second, it is assumed that a conventional method, such as one employed by the TRECVID (Text REtrieval Conference (TREC) Video Retrieval Evaluation) organizers, has been used to extract one or more key-frames from each shot, thus constructing what will hereafter be termed a "key-frame sub-layer." Each key-frame contains one or more target concepts. Third, it is assumed that a conventional method such as a J-value Segmentation (JSEG) method has been used to segment each key-frame into a plurality of key-regions and to subsequently filter out those key-regions that are smaller than a prescribed size, thus constructing what will hereafter be termed a "key-region sub-layer." Finally, as will be described in more detail hereafter, it is assumed that a plurality of low-level feature descriptors have been prescribed to describe the visual concepts contained in the shot layer, key-frame sub-layer and key-region sub-layer, and that these various prescribed features have been extracted from the shot layer, key-frame sub-layer and key-region sub-layer. Exemplary low-level feature descriptors for these different layers will be provided hereafter.
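By way of illustration only, the following minimal Python sketch shows one possible in-memory layout for such a three-layer rooted tree once the shot boundary detection, key-frame extraction, key-region segmentation and feature extraction described above have been performed. The structure and feature dimensions shown are illustrative assumptions, not part of the technique embodiments themselves.

```python
# Illustrative rooted-tree layout for one shot: shot layer -> key-frame
# sub-layer -> key-region sub-layer, with pre-extracted feature vectors.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Node:
    layer: int                      # 1 = shot, 2 = key-frame, 3 = key-region
    features: np.ndarray            # low-level feature descriptor for this node
    children: List["Node"] = field(default_factory=list)   # rooted sub-trees

def build_shot_tree(shot_feat, keyframe_feats, region_feats_per_keyframe):
    """Assemble one three-layer rooted tree T from pre-extracted feature vectors."""
    shot = Node(layer=1, features=np.asarray(shot_feat))
    for kf_feat, region_feats in zip(keyframe_feats, region_feats_per_keyframe):
        kf = Node(layer=2, features=np.asarray(kf_feat))
        kf.children = [Node(layer=3, features=np.asarray(r)) for r in region_feats]
        shot.children.append(kf)
    return shot

# Example: a shot with two key-frames, each with a few size-filtered key-regions.
T = build_shot_tree(
    shot_feat=np.random.rand(16),
    keyframe_feats=[np.random.rand(8), np.random.rand(8)],
    region_feats_per_keyframe=[[np.random.rand(4) for _ in range(3)],
                               [np.random.rand(4) for _ in range(2)]])
```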
Additionally, before the VCD technique embodiments described herein are applied to a particular video clip, it is also assumed that the following procedures have been performed. First, it is assumed that a conventional method such as a Large Scale Concept Ontology for Multimedia (LSCOM) method has been used to pre-define a set of prescribed target concepts. Second, it is assumed that a conventional method such as a Support Vector Machine (SVM) method has been used to pre-generate a set of statistical trained models of this set of target concepts. Third, it is assumed that the trained models are validated using a set of training shots selected from the aforementioned plurality of shots.
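As a hedged illustration of what such a set of pre-generated trained models might look like, the following sketch fits one binary SVM per target concept on placeholder feature vectors and checks each model against held-out data. The concept names, feature dimensions and labels are all illustrative assumptions.

```python
# One binary SVM per target concept, validated on held-out placeholder data.
import numpy as np
from sklearn.svm import SVC

target_concepts = ["person", "outdoor", "vehicle"]     # stand-in concept names
X_train = np.random.rand(200, 64)                       # placeholder per-shot features
Y_train = {c: np.random.randint(0, 2, 200) for c in target_concepts}
X_valid = np.random.rand(50, 64)
Y_valid = {c: np.random.randint(0, 2, 50) for c in target_concepts}

trained_models = {}
for concept in target_concepts:
    model = SVC(kernel="rbf", C=1.0).fit(X_train, Y_train[concept])
    print(concept, "validation accuracy:", model.score(X_valid, Y_valid[concept]))
    trained_models[concept] = model
```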
As will now be described in detail, each shot can generally be conceptually regarded as a “bag” which contains a plurality of paratactic “concept instances” (hereafter referred to simply as “instances”). More particularly, each key-frame can be conceptually regarded as a bag and each region contained therein can be regarded as an instance. Each key-frame can thus be conceptually regarded as a bag of region instances. Instance classification labels are applied to each key-frame bag as follows. A particular key-frame bag would be labeled as positive if at least one of the region instances within the bag falls within the target concepts; the key-frame bag would be labeled as negative if none of the region instances within the bag falls within the target concepts. As will become clear from the MLMI framework description which follows, this bag-instance correspondence can be further extended into the MLMI framework such that each shot can be conceptually regarded as a “hyper-bag” (herein also referred to as a “shot hyper-bag” for clarity) and each key-frame contained therein can also be regarded as an instance. In this case, instance classification labels are also applied to each shot hyper-bag as follows. A particular shot hyper-bag would be labeled as positive if one or more of the key-frame instances within the hyper-bag falls within the target concepts; the shot hyper-bag would be labeled as negative if none of the key-frame instances within the hyper-bag fall within the target concepts. Each shot can thus be conceptually regarded as a hyper-bag of key-frame instances. This hyper-bag, bag, instance and bag-instance terminology will be further clarified in the description which follows.
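The labeling convention just described can be summarized by the following small sketch, in which the per-region instance labels are assumed to be given:

```python
# Bag-instance labeling: a bag (or hyper-bag) is positive if at least one of
# its instances is positive, and negative otherwise.
def label_bag(instance_labels):
    """Return +1 if at least one instance is positive, else -1."""
    return 1 if any(lbl == 1 for lbl in instance_labels) else -1

# Key-frame bags: one list of region-instance labels per key-frame.
keyframe_bags = [[-1, -1, 1], [-1, -1, -1]]
keyframe_labels = [label_bag(bag) for bag in keyframe_bags]   # -> [1, -1]

# Shot hyper-bag: the key-frame labels act as its instances.
shot_label = label_bag(keyframe_labels)                        # -> 1 (positive shot)
```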
Referring again to
To summarize, in the MLMI framework described herein, a video clip is represented as a hierarchical “set-based” structure with bag-instance correspondence. Each successive layer down the hierarchy describes the visual concepts contained in the clip in a higher degree of granularity compared to the description contained within the layer which precedes it in the hierarchy. As will be described in more detail hereafter, these visual concept descriptions employed in the different layers can also include different modalities. As described heretofore, various low-level feature descriptors can be prescribed to describe the visual concepts contained within the different layers. Referring again to
3.0 Introduction To Kernel-Based Modeling
This section provides a brief, general introduction to kernel-based modeling methods. In general, a kernel k which models metadata structures within an input space X can be simplistically given by $k: X \times X \to \mathbb{R}$, where the input space X is mapped to either an n-dimensional vector space $\mathbb{R}^n$ or any other compound structure. For $x, y \in X$, where x and y represent two different metadata structures within X, a kernel $k(x,y)$ which models X, and compares x and y in order to determine a degree of similarity (or difference) between x and y, can be given by $k(x,y) = \langle \varphi(x), \varphi(y) \rangle$, where $\varphi$ is a mapping from the input space X to a high-dimensional (most likely infinite-dimensional) space $\Phi$ embedded with an inner product. In a general sense, the kernel $k(x,y)$ can also be given by the following similarity measure:
$d(\varphi(x),\varphi(y)) = \sqrt{k(x,x) - 2k(x,y) + k(y,y)}$, (1)
where $d(\varphi(x),\varphi(y))$ represents a distance function in mapping space $\Phi$.
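By way of illustration, the similarity measure of equation (1) can be evaluated purely from kernel evaluations, as in the following sketch; the RBF kernel used here is only a convenient stand-in for k:

```python
# Kernel-induced distance of equation (1): d = sqrt(k(x,x) - 2k(x,y) + k(y,y)).
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_distance(x, y, k=rbf_kernel):
    # Clamp at zero to guard against tiny negative values from round-off.
    return np.sqrt(max(k(x, x) - 2.0 * k(x, y) + k(y, y), 0.0))

x, y = np.array([0.2, 0.7]), np.array([0.5, 0.1])
print(kernel_distance(x, y))
```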
4.0 MLMI Kernel
Generally speaking, this section provides a description of an exemplary embodiment of an MLMI kernel which models the MLMI framework representation of a video clip described heretofore by comparing pairs of shots in a video clip. More particularly, referring again to
Referring again to
Given an L-layer tree corresponding to a particular shot T, the set of nodes n contained within T can be given by the equation:
$N = \{n_i\}_{i=1}^{|N|}$, (2)
where |N| is the total number of nodes in T. If S is given to represent a tree set and si is given to represent the set of node patterns whose parent node is ni, si can be given by the equation:
$s_i = \{\, s \mid s \in S,\ \mathrm{parent}(s) = n_i \,\} \in \mathrm{pow}(S)$, (3)
where pow(S) refers to the power set of S. Additionally, a bijective mapping $n_i \to s_i$ can be defined. For each node $n_i \in N$, a "node pattern" of $n_i$ can be defined to be all the metadata associated with $n_i$, where this metadata is composed of the following three elements: layer information $l_i$, low-level feature descriptor information $f_i$, and tree sets $s_i$ rooted at $n_i$. More particularly, $f_i$ represents a set of low-level features in the video clip based on the plurality of various modalities described heretofore. The node pattern of node $n_i$ can then be given by the following triplet form equation:
$\hat{n}_i = \langle l_i, f_i, s_i \rangle$ (4)
T can thus be expanded to the following node pattern set:
$\hat{N} = \{\hat{n}_i\}_{i=1}^{|N|}$. (5)
A kernel of trees (herein also referred to as shots T or T′) can now be constructed using the expanded node pattern set given by equation (5). Based on the conventional definition of a convolution kernel, a relationship R can be constructed between an object and its parts. The kernel for the composite structured object can be defined based on the composition kernels of the parts of the object. First, $x, y \in X$ can be defined to be the objects and $\vec{x}, \vec{y} \in (X_1 \times \cdots \times X_D)$ can be defined to be tuples of parts of these objects, where D is the number of parts in each object. Given the relationship $R: (X_1 \times \cdots \times X_D) \times X$, x can then be decomposed as $R^{-1}(x) = \{\vec{x} : R(\vec{x}, x)\}$. Based on this decomposition, a convolution kernel $k_{conv}$ for comparing metadata structures x and y can be given by the following equation, with positive definite kernels on each part:
For the node pattern set $\hat{N}$ defined in equation (5), R can be given by the set-membership equation
where $\hat{n}$ is the node pattern of a particular node n in T and $\hat{n}'$ is the corresponding node pattern of the corresponding particular node n′ in T′. Since $\hat{n}$ and $\hat{n}'$ are each composed of three elements as given by triplet form equation (4), $k_{\hat{N}}$ is a kernel on this triplet space. Using the tensor product operation ($(K_1 \otimes K_2)((x,u),(y,v)) = K_1(x,y)\,K_2(u,v)$), $k_{\hat{N}}$ can be given by the equation:
$k_{\hat{N}}(\hat{n},\hat{n}') = k_\delta(l_n, l_{n'}) \times k_f(f_n, f_{n'}) \times k_{st}(s_n, s_{n'})$. (8)
Generally speaking, $k_\delta(x,y) = \delta_{x,y}$ is a matching kernel that represents the layer structure for metadata structures x and y. Thus, $k_\delta(l_n, l_{n'})$ in equation (8) is a matching kernel that represents the layer structure for $\hat{n}$ and $\hat{n}'$, since, as described heretofore, $l_n$ is the layer information for $\hat{n}$ and $l_{n'}$ is the layer information for $\hat{n}'$. $k_f$ is a feature-space kernel where $f_n$ is the low-level features in $\hat{n}$ and $f_{n'}$ is the low-level features in $\hat{n}'$. $k_{st}$ is a kernel of sub-trees where $s_n$ is the set of node patterns in T whose parent is n, and $s_{n'}$ is the set of node patterns in T′ whose parent is n′. By embedding a multi-instance data relationship into $s_n$ and $s_{n'}$, $k_{st}$ can be given by the equation:
$k_{st}(s_n, s_{n'}) = \max_{\hat{c} \in s_n,\, \hat{c}' \in s_{n'}} k_{\hat{N}}(\hat{c}, \hat{c}')$, (9)
which indicates that the similarity of two different node patterns is affected by the most similar pairs of their sub-structures. However, since the max function of equation (9) is non-differentiable, equation (9) can be approximated by choosing a conventional radial basis function (RBF) kernel for kf in equation (8). As a result, kf can be approximated by the equation:
$k_f(f_n, f_{n'}) = \exp\!\left(-\left|f_n - f_{n'}\right|^2 / 2\sigma^2\right)$. (10)
Using the definition of kf given by equation (10), equation (9) above can then be approximated by the equation:
where $k_{st}$ is set to be 1 for leaf nodes (i.e., when $s_n = s_{n'} = \emptyset$).
Since the maximal layer of T is L, the nodes can be divided into L groups given by $\{G_l\}_{l=1}^{L}$. As a result, $\hat{N}$ can be transformed into a power set given by $\hat{N} = \{G_l\}_{l=1}^{L}$, where $G_l = \{\hat{n}_i \mid l_i = l\}$. Based upon the aforementioned matching kernel $k_\delta$, equation (7) can be rewritten as the equation:
$k_{MLMI}$ given by equation (12) can be shown to be positive definite as follows. As is appreciated in the art of kernel-based machine-learning, kernels are considered closed under basic operations such as sum ($K_1 + K_2$), direct sum ($K_1 \oplus K_2$), product ($K_1 \times K_2$) and tensor product ($K_1 \otimes K_2$). Since $k_{MLMI}$ given by equation (12) is completely constructed of these basic operations (i.e., the direct sum in equations (7) and (10), and the tensor product in equation (8)), it is closed and positive definite.
In order to avoid an undesirable scaling problem, a conventional feature-space normalization algorithm can then be applied to kMLMI given by equation (12). More particularly, kMLMI given by equation (12) can be normalized by the equation:
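Equation (13) itself is not reproduced above; one common normalization that fits this description scales a precomputed Gram matrix so that its diagonal entries become 1, as in the following sketch (an assumption about the form of the normalization, not a verbatim transcription of equation (13)):

```python
# Cosine-style normalization of a precomputed Gram matrix:
# K_norm(T, T') = K(T, T') / sqrt(K(T, T) * K(T', T')).
import numpy as np

def normalize_gram(K):
    """Scale the Gram matrix K so that every diagonal entry becomes 1."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)
```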
From the MLMI kernel defined in equation (12), it is noted that the kernel kMLMI between two shots T and T′ is the combination of kernels defined on node patterns of homogeneous layers, and these node pattern kernels are constructed based upon the intrinsic structure of the shots utilizing the rich context and multiple-instance relationships implicitly contained therein. Referring again to
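The following Python sketch illustrates one reading of the recursion in equations (8) through (10) and of the generation procedure of Section 4.1: the node-pattern kernel multiplies a layer-matching term, an RBF feature term and a sub-tree term taken as the maximum over pairs of children (with the sub-tree term set to 1 at leaf nodes), and the shot-level kernel accumulates node-pattern kernels over same-layer node pairs. It is a hedged sketch, not a verbatim transcription of equation (12).

```python
# Recursive node-pattern kernel and shot-level MLMI kernel (illustrative sketch).
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Node:                     # same rooted-tree node structure as the earlier sketch
    layer: int
    features: np.ndarray
    children: List["Node"] = field(default_factory=list)

def k_f(fn, fn2, sigma=1.0):    # RBF feature kernel, in the spirit of equation (10)
    return np.exp(-np.sum((np.asarray(fn) - np.asarray(fn2)) ** 2) / (2.0 * sigma ** 2))

def k_node(n, n2, sigma=1.0):   # node-pattern kernel, in the spirit of equation (8)
    if n.layer != n2.layer:     # matching kernel k_delta on layer information
        return 0.0
    if not n.children or not n2.children:
        k_st = 1.0              # leaf nodes: sub-tree kernel set to 1
    else:                       # equation (9): most similar pair of sub-structures
        k_st = max(k_node(c, c2, sigma) for c in n.children for c2 in n2.children)
    return k_f(n.features, n2.features, sigma) * k_st

def all_nodes(t):
    yield t
    for c in t.children:
        yield from all_nodes(c)

def k_mlmi(T, T2, sigma=1.0):
    """Accumulate node-pattern kernels over same-layer (homogeneous) node pairs."""
    return sum(k_node(n, n2, sigma)
               for n in all_nodes(T) for n2 in all_nodes(T2) if n.layer == n2.layer)
```

On the three-layer trees from the earlier sketch, `k_mlmi` compares every same-layer node pair, so its cost grows with the product of the two shots' node counts.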
4.1 Generating MLMI Kernel
If on the other hand the nodes n and n′ are on the lowermost leaf layer of $G_3$ 310, a kernel $k_{\hat{N}}$ for the node pattern set $\hat{N}$ associated with nodes n and n′ is generated by solving the equation $k_{\hat{N}}(\hat{n},\hat{n}') = k_f(f_n,f_{n'})$ 312. Then, $k_{MLMI}(T,T')$ is updated by solving the equations $k_{MLMI}(T,T')^{UD} = k_{MLMI}(T,T') + k_{\hat{N}}(\hat{n},\hat{n}')$, and then $k_{MLMI}(T,T') = k_{MLMI}(T,T')^{UD}$ 316.
Referring again to
If on the other hand the nodes n and n′ are on the lowermost leaf layer of $G_2$ 310, a kernel $k_{\hat{N}}$ for the node pattern set $\hat{N}$ associated with nodes n and n′ is generated by solving the equation $k_{\hat{N}}(\hat{n},\hat{n}') = k_f(f_n,f_{n'})$ 312. Then, $k_{MLMI}(T,T')$ is updated by solving the equations $k_{MLMI}(T,T')^{UD} = k_{MLMI}(T,T') + k_{\hat{N}}(\hat{n},\hat{n}')$, and then $k_{MLMI}(T,T') = k_{MLMI}(T,T')^{UD}$ 316.
Referring again to
If on the other hand the nodes n and n′ are on the lowermost leaf layer of $G_1$ 310, a kernel $k_{\hat{N}}$ for the node pattern set $\hat{N}$ associated with nodes n and n′ is generated by solving the equation $k_{\hat{N}}(\hat{n},\hat{n}') = k_f(f_n,f_{n'})$ 312. Then, $k_{MLMI}(T,T')$ is updated by solving the equations $k_{MLMI}(T,T')^{UD} = k_{MLMI}(T,T') + k_{\hat{N}}(\hat{n},\hat{n}')$, and then $k_{MLMI}(T,T') = k_{MLMI}(T,T')^{UD}$ 316.
Referring again to
4.2 VCD Using SVM With MLMI Kernel (SVM-MLMIK Technique)
The exemplary MLMI kernel embodiment described heretofore can be combined with any appropriate supervised learning method such as the conventional Support Vector Machine (SVM) method in order to perform improved VCD on a video clip. This section provides a description of an exemplary embodiment of an SVM-MLMIK VCD technique which combines the aforementioned MLMI kernel with the SVM method. It is noted that the conventional SVM method can generally be considered a single-layer (SL) method. As such, the conventional SVM method is herein also referred to as the SVM-SL method.
Generally speaking, in the paradigm of structured learning/modeling, the goal is to learn an objective decision function f(x): X→Y from a structured input space X to response values in Y. Referring again to
Given J different training shots $x_i$ segmented from a structured input space X, and related instance classification labels $y_i$ for $x_i$ which are given by $(x_1,y_1), \ldots, (x_J,y_J) \in X \times Y$, where $Y = \{-1,1\}$, once a structured metadata kernel model $k_x$ is determined for X, the learning/modeling process can then be transformed to an SVM-based process as follows. The dual form of the objective decision function in the SVM-SL method can be given by the equation:
where Q is a Gram matrix given by the equation $Q_{ij} = y_i y_j k_x(x_i,x_j)$, $k_x(x_i,x_j)$ is the kernel model, 1 is a vector of all ones, $\alpha \in \mathbb{R}^J$ is a prescribed coefficient in the objective decision function, $y \in \mathbb{R}^J$ is an instance classification label vector, and C is a prescribed constant which controls the tradeoff between classification errors and the maximum margin. Parameters α and C can be optimized using a conventional grid search method.
Based on equation (14), an SVM-SL-based learned objective decision function f(x) can be given by the equation:
wherein b represents a bias coefficient. The learned objective decision function f(x) of equation (15) can then be improved by substituting the MLMI kernel of equation (12) for the kernel kx, resulting in an SVM-MLMIK learned objective decision function f′(x) which can be given by the equation:
In tested embodiments of the SVM-MLMIK technique, σ and C were set as follows: σ was specified to vary from 1 to 15 with a step size of 2, and C was specified to be the set of values $\{2^{-2}, 2^{-1}, \ldots, 2^{5}\}$.
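As a hedged sketch of how these pieces can be combined, the following function fits a standard SVM on a precomputed MLMI Gram matrix and sweeps σ and C over grids like those quoted above; the kernel function, shot trees and labels are assumed to be supplied by the caller (for example, from the earlier sketches):

```python
# SVM-MLMIK sketch: precomputed MLMI Gram matrices plugged into a standard SVM,
# with a grid search over sigma and C. All inputs are assumed to be provided.
import numpy as np
from sklearn.svm import SVC

def grid_search_svm_mlmik(train_shots, y_train, valid_shots, y_valid, k_mlmi):
    """Sweep sigma and C and fit an SVM on the precomputed MLMI Gram matrix."""
    sigmas = range(1, 16, 2)                      # sigma = 1, 3, ..., 15
    Cs = [2.0 ** p for p in range(-2, 6)]         # C = 2^-2, 2^-1, ..., 2^5
    gram = lambda A, B, s: np.array([[k_mlmi(a, b, s) for b in B] for a in A])
    best = None
    for sigma in sigmas:
        K_train = gram(train_shots, train_shots, sigma)
        K_valid = gram(valid_shots, train_shots, sigma)
        for C in Cs:
            clf = SVC(kernel="precomputed", C=C).fit(K_train, y_train)
            acc = clf.score(K_valid, y_valid)
            if best is None or acc > best[0]:
                best = (acc, sigma, C, clf)
    return best   # (accuracy, sigma, C, fitted classifier)
```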
4.3 VCD Process Using SVM-MLMIK Technique
5.0 Regularization Framework
As is appreciated in the art of kernel-based machine-learning, not all instances in a positive bag should necessarily be labeled as positive. Accordingly, different instances in the same bag should ideally have different contributions to the kernel. In support of this ideal, this section provides a description of exemplary embodiments of a regularization framework for MLMI learning/modeling which introduces explicit constraints into the learned objective decision function described heretofore, where these constraints serve to restrict instance classification in the sub-layers of the MLMI framework.
As is appreciated in the art of kernel-based machine-learning, instance classification precision primarily depends on the kernel that is employed in the instance classification process. Referring again to
Referring again to
Given a structured metadata set $\{(T_i,y_i)\}_{i=1}^{N_1}$
where $T_i^{ml}$ is the mth sub-structure in the lth layer for shot $T_i$, and $N_i^l$ is the number of sub-structures in the lth layer for shot $T_i$.
If H is given to be the Reproducing Kernel Hilbert Space (RKHS) of function f(T), and $\|f\|_H^2$ is given to be the RKHS norm of function f, the mathematical optimization problem in MLMI learning/modeling can be given by the equation:
Referring again to
The regularization framework described heretofore can be combined with the MLMI framework and related MLMI kernel described heretofore, resulting in an improved MLMI kernel which models the MLMI framework and the multi-instance data relationships that are contained therein in a more straightforward manner. As such, the regularization framework can be combined with the SVM-MLMIK technique described heretofore in order to maximize the instance classification precision when performing VCD on a video clip compared to the instance classification precision produced by the SVM-MLMIK technique without this regularization framework. It is noted that various different embodiments of the regularization framework are possible, where the different embodiments employ different combinations of constraints A, B and C and different loss functions for V. It is also noted that among these three constraints, constraint A is considered the most important since it is focused on classifying the shots. Constraints B and C are considered comparatively less important than constraint A since they are focused on restricting the instance classification function. Thus, constraint A is employed in all of the different embodiments of the regularization framework that will be described hereafter. It is also noted that the SVM-MLMIK technique described heretofore also employs constraint A. However, the SVM-MLMIK technique does not take advantage of constraints B and C. Three different particular embodiments of the regularization framework will now be described, one which employs constraints A and B, another which employs constraints A and C, and yet another which employs constraints A, B and C.
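By way of illustration, the three constraints can be thought of as three per-shot loss terms, sketched below under stated assumptions: a hinge loss for constraint A, a sign-consistency penalty on the strongest sub-layer instance response for constraint B, and an L1 inter-layer inconsistency penalty for constraint C. The exact loss forms and regularization constants used here are illustrative readings of the descriptions in Sections 5.1 through 5.3, not reproductions of the equations referenced in those sections.

```python
# Illustrative per-shot loss terms for constraints A, B and C (hedged sketch).
def loss_A(y, f_shot):
    """Constraint A: hinge loss on the shot-layer classification."""
    return max(0.0, 1.0 - y * f_shot)

def loss_B(y, f_instances):
    """Constraint B (one reading of the sign-consistency restriction): penalize a
    sub-layer only when its strongest instance response disagrees in sign with
    the shot ground truth."""
    return max(0.0, -y * max(f_instances))

def loss_C(f_upper, f_lower):
    """Constraint C (one reading of the L1 inter-layer penalty): inconsistency
    between a node's response and the strongest response one layer below it."""
    return abs(f_upper - max(f_lower))

# Illustrative responses for one positive shot (y = +1); lambda weights are placeholders.
y, f_shot = 1, 0.4
f_keyframes, f_regions = [0.6, -0.2], [0.1, 0.7, -0.5]
lam_B, lam_C = 0.1, 0.01
total_loss = (loss_A(y, f_shot)
              + lam_B * (loss_B(y, f_keyframes) + loss_B(y, f_regions))
              + lam_C * (loss_C(f_shot, f_keyframes) + loss_C(max(f_keyframes), f_regions)))
```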
5.1 VCD Using SVM-MLMIK Technique With Regularization Framework Employing Constraints A and B (MLMI-FLCE Technique)
This section provides a description of an exemplary embodiment of a VCD technique which combines the SVM-MLMIK technique described heretofore with an embodiment of the regularization framework described heretofore that employs the combination of constraint A (which serves to minimize the shot layer instance classification errors) and constraint B (which serves to minimize the key-frame sub-layer and key-region sub-layer instance classification errors). This particular embodiment of the regularization framework employing constraints A and B is hereafter referred to as the full layers classification error (FLCE) approach, and this corresponding particular embodiment of the VCD technique is hereafter referred to as the MLMI-FLCE technique.
Referring again to
As in the conventional SVM-SL method described heretofore, a conventional hinge loss function can be used to represent the shot layer classification errors by the equation $V_1(y_i, f(T_i)) = \max(0, 1 - y_i f(T_i))$. The classification errors for the sub-layers can be given by the equation:
where the max function is adopted to reflect the multi-instance data relationships in the MLMI framework. $\lambda_1$ and $\lambda_l$ in equation (19) are prescribed regularization constants which determine a tradeoff between the shot layer classification errors and the key-frame and key-region sub-layer classification errors. $\tilde{E}$ in equation (20) can be defined in a variety of different ways. If a weak restriction is used that penalizes only if a particular instance classification label and the ground-truth are inconsistent by sign, $\tilde{E}$ in equation (20) can be given by the equation:
Finally, the learned objective decision function for the MLMI-FLCE technique can be given by the following quadratic concave-convex mathematical optimization equation:
5.2 VCD Using SVM-MLMIK Technique With Regularization Framework Employing Constraints A and C (MLMI-ILCC Technique)
This section provides a description of an exemplary embodiment of a VCD technique which combines the SVM-MLMIK technique described heretofore with an embodiment of the regularization framework described heretofore that employs the combination of aforementioned constraint A (which serves to minimize the shot layer instance classification errors) and constraint C (which serves to minimize the inter-layer inconsistency penalty). This particular embodiment of the regularization framework employing constraints A and C is hereafter referred to as the inter-layer consistency constraint (ILCC) approach, and this corresponding particular embodiment of the VCD technique is hereafter referred to as the MLMI-ILCC technique.
Referring again to
The aforementioned conventional hinge loss function can be used to represent the shot layer classification errors as described in the MLMI-FLCE technique heretofore. The loss function for the inter-layer inconsistency penalty can be given by the equation:
$\lambda_1$ and $\lambda_l$ in equation (23) are prescribed regularization constants which determine a tradeoff between the shot layer instance classification errors and the inter-layer inconsistency penalty. Various different loss functions can be employed for V in equation (24), such as a conventional L1 loss function, a conventional L2 loss function, etc. In the technique embodiments described herein, an L1 loss function is employed for V. As is appreciated in the art of kernel-based machine-learning, the L1 loss function is defined by the equation $E_{L1}(a,b) = |a - b|$. As a result, $\tilde{E}$ in equation (24) can be given by the equation:
Thus, the learned objective decision function for the MLMI-ILCC technique can be given by the following quadratic concave-convex mathematical optimization equation:
5.3 VCD Using SVM-MLMIK Technique With Regularization Framework Employing Constraints A, B and C (MLMI-FLCE-ILCC Technique)
This section provides a description of an exemplary embodiment of a VCD technique which combines the SVM-MLMIK technique described heretofore with an embodiment of the regularization framework described heretofore that employs the combination of constraint A (which serves to minimize the shot layer instance classification errors), constraint B (which serves to minimize the key-frame and key-region sub-layer instance classification errors), and constraint C (which serves to minimize the inter-layer inconsistency penalty). This particular embodiment of the regularization framework employing constraints A, B and C is hereafter referred to as the MLMI-FLCE-ILCC technique.
Based on the descriptions of the MLMI-FLCE and MLMI-ILCC techniques heretofore, the loss function for each shot can be given by the equation:
$\lambda_1$, $\tilde{\lambda}_l$ and $\lambda_l$ in equation (27) are prescribed regularization constants which determine a tradeoff between the constraints A, B and C. Using the equations for $V_1$, $\tilde{V}_l$ and $V_l$ provided heretofore, and assuming an L1 loss function is employed for V as described heretofore, the learned objective decision function for the MLMI-FLCE-ILCC technique can be given by the following quadratic concave-convex mathematical optimization equation:
6.0 Optimization Using CCCP
This section generally describes the use of a conventional constrained concave-convex procedure (CCCP) based quadratic programming method for practically solving the three different mathematical optimization problems given by equations (22), (26) and (28).
6.1 MLMI-ILCC Technique Using L1 Loss (Constraints A and C)
This section generally describes an embodiment of how the aforementioned conventional CCCP method can be used to practically solve the mathematical optimization problem given by equation (26) for the MLMI-ILCC technique described heretofore. More particularly, by introducing slack variables, equation (26) can be rewritten as the following constrained minimization equation:
where $\delta_1 = [\delta_{11}, \delta_{12}, \ldots, \delta_{1N_1}]$
Now define f to be a linear function in the mapped high-dimensional space, $f(X) = W^T\varphi(X) + b$, where $\varphi(\cdot)$ is the mapping function. By ignoring b in $\|f\|_H^2$ (as is done in the conventional SVM-SL method described heretofore) and substituting f into equation (29), equation (29) becomes:
It is noted that the second and third constraints in equation (30) are non-linear concave-convex inequalities, and all the other constraints are linear. Therefore, the CCCP method is well suited to solving the mathematical optimization problem in equation (30). By employing the following sub-gradient of the max function in equation (30):
where
and where R is the number of sub-structures with maximal response, equation (30) can be solved in an iterative fashion by fixing W and β in turn until W converges. More particularly, when fixing W, equation (32) is solved, and then, when fixing β, the following equation is solved:
However, equation (33) cannot be solved directly since W lies in the mapped feature space, which is usually infinite-dimensional. In order to address this issue, the explicit usage of W can be removed by forming a dual mathematical optimization problem as follows. Introducing the following Lagrange multiplier coefficients:
α=[α11, . . . ,αN
into the constraints A and C results in the following dual formulation equation according to the conventional Karush-Kuhn-Tucker (KKT) theorem:
where α, p, Y, and Λ are $(2L-1) \times N_1$ dimensional vectors, and p, Y, and Λ have entries given by the equations:
In equation (35), $Q = A^T K A$ is the Gram matrix, with K being a kernel matrix and A being a sparse matrix whose row dimension is the overall number of nodes in the training set and whose column dimension is the dimension of α. Intuitively, A can be regarded as a multi-instance transform matrix that applies the inter-layer inconsistency penalty constraint C to the hyper-plane, where A is given by the equation:
where $\beta_I$ is a prescribed coefficient for node set I, and $\beta_I$ corresponds to $\beta_{iml}$ in equation (31).
Eventually, equation (26) becomes a modified learned objective decision function given by the equation:
Then, in the same manner as described heretofore for the SVM-MLMIK technique, the modified learned objective decision function of equation (39) can be improved by substituting the MLMI kernel of equation (12) for the kernel kT(x), resulting in an improved modified learned objective decision function f′(x) given by the equation:
$f'(x) = k_{MLMI}(x_i, x) A\alpha + b$. (40)
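The following numpy sketch illustrates the two quantities just named: the Gram matrix $Q = A^T K A$ used in the dual formulation of equation (35), and the evaluation of the decision function of equation (40) for a new shot. All matrices and coefficients here are placeholders; the construction of the multi-instance transform matrix A from the per-node coefficients β is not reproduced.

```python
# Dual-side quantities for the MLMI-ILCC technique (illustrative placeholders).
import numpy as np

n_nodes, dim_alpha = 12, 5                 # total training-set nodes / dimension of alpha
K = np.random.rand(n_nodes, n_nodes)
K = K @ K.T                                # keep the placeholder kernel matrix positive semi-definite
A = np.random.rand(n_nodes, dim_alpha)     # stand-in multi-instance transform matrix
alpha = np.random.rand(dim_alpha)          # dual coefficients (from the CCCP iterations)
b = 0.1                                    # bias

Q = A.T @ K @ A                            # Gram matrix Q = A^T K A used in the dual problem (35)

k_x = np.random.rand(n_nodes)              # k_MLMI(x_i, x): new shot x against the training nodes
f_x = float(k_x @ A @ alpha + b)           # decision value for the new shot, per equation (40)
```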
In tested embodiments of the MLMI-ILCC technique, $\lambda_1$ was specified to be the set of values $\{2^{-2}, 2^{-1}, \ldots, 2^{5}\}$. Additionally, $\lambda_2 = \lambda_3 = \cdots = \lambda_L$ were specified to be the set of values $\{10^{-3}, 10^{-2}, 10^{-1}, 1\}$.
6.2 MLMI-FLCE Technique Using L1 Loss (Constraints A and B)
This section generally describes an embodiment of how the aforementioned CCCP method can be used to practically solve the mathematical optimization problem given by equation (22) for the MLMI-FLCE technique described heretofore. An approach similar to that for the MLMI-ILCC technique just described is used to solve this optimization problem. More particularly, by introducing slack variables, equation (22) can be rewritten as the following constrained minimization equation:
where δ1, δl, and δl* are defined the same as for the MLMI-ILCC technique.
The CCCP method can be employed in the same iterative manner just described in detail above for the MLMI-ILCC technique in order to solve the mathematical optimization problem and eventually derive an improved modified learned objective decision function for the MLMI-FLCE technique similar to equation (40). However, it is noted that in this case the variables in equation (35) differ as follows compared to the definitions provided for the MLMI-ILCC technique. α, p, Y, and Λ are $L \times N_1$ dimensional vectors with entries given by the equations:
and A is a multi-instance transform matrix given by the equation:
In tested embodiments of the MLMI-FLCE technique, σ was specified to vary from 1 to 15 with a step size of 2, and $\lambda_1$ was specified to be the set of values $\{2^{-2}, 2^{-1}, \ldots, 2^{5}\}$. Additionally, $\lambda_2 = \lambda_3 = \cdots = \lambda_L$ were specified to be the set of values $\{10^{-3}, 10^{-2}, 10^{-1}, 1\}$.
6.3 MLMI-FLCE-ILCC Technique Using L1 Loss (Constraints A, B and C)
This section generally describes an embodiment of how the aforementioned CCCP method can be used to practically solve the mathematical optimization problem given by equation (28) for the MLMI-FLCE-ILCC technique described heretofore. An approach similar to that for the MLMI-ILCC technique described heretofore is used to solve this optimization problem. More particularly, by introducing slack variables, equation (28) can be rewritten as the following constrained minimization equation:
where δ1, δl, and δl* are defined the same as for the MLMI-ILCC technique.
The CCCP method can be employed in the same iterative manner described in detail heretofore for the MLMI-ILCC technique in order to solve the mathematical optimization problem and eventually derive an improved modified learned objective decision function for the MLMI-FLCE-ILCC technique similar to equation (40). However, it is noted that in this case the variables in equation (35) differ as follows compared to the definitions provided for the MLMI-ILCC technique. α, p, Y, and Λ are $(3L-2) \times N_1$ dimensional vectors with entries given by the equations:
and A is a multi-instance transform matrix given by the equation:
In tested embodiments of the MLMI-FLCE-ILCC technique, σ was specified to vary from 1 to 15 with a step size of 2, and $\lambda_1$ was specified to be the set of values $\{2^{-2}, 2^{-1}, \ldots, 2^{5}\}$. Additionally, $\lambda_2 = \lambda_3 = \cdots = \lambda_L$ and $\tilde{\lambda}_2 = \tilde{\lambda}_3 = \cdots = \tilde{\lambda}_L$ were specified to be the set of values $\{10^{-3}, 10^{-2}, 10^{-1}, 1\}$.
6.4 VCD Process Using Regularization Framework
7.0 Computing Environment
This section provides a brief, general description of a suitable computing system environment in which portions of the VCD technique embodiments described herein can be implemented. These VCD technique embodiments are operational with numerous general purpose or special purpose computing system environments or configurations. Exemplary well known computing systems, environments, and/or configurations that can be suitable include, but are not limited to, personal computers (PCs), server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the aforementioned systems or devices, and the like.
As illustrated in
As exemplified in
As exemplified in
As exemplified in
As exemplified in
The VCD technique embodiments described herein can be further described in the general context of computer-executable instructions, such as program modules, which are executed by computing device 700. Generally, program modules include routines, programs, objects, components, and data structures, among other things, that perform particular tasks or implement particular abstract data types. The VCD technique embodiments can also be practiced in a distributed computing environment where tasks are performed by one or more remote computing devices 718 that are linked through a communications network 712/720. In a distributed computing environment, program modules can be located in both local and remote computer storage media including, but not limited to, memory 704 and storage devices 708/710.
8.0 Additional Embodiments
While the VCD technique has been described in detail by specific reference to embodiments thereof, it is understood that variations and modifications thereof can be made without departing from the true spirit and scope of the technique. It is also noted that any or all of the aforementioned embodiments can be used in any combination desired to form additional hybrid embodiments. Although the VCD technique embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described heretofore. Rather, the specific features and acts described heretofore are disclosed as example forms of implementing the claims.
Number | Name | Date | Kind |
---|---|---|---|
6442538 | Nojima | Aug 2002 | B1 |
6988245 | Janevski | Jan 2006 | B2 |
7010751 | Shneiderman | Mar 2006 | B2 |
7296231 | Loui et al. | Nov 2007 | B2 |
7783106 | Cooper et al. | Aug 2010 | B2 |
20030177503 | Sull et al. | Sep 2003 | A1 |
20070073749 | Fan | Mar 2007 | A1 |
20070112583 | Hua et al. | May 2007 | A1 |
20070255755 | Zhang et al. | Nov 2007 | A1 |
Number | Date | Country |
---|---|---|
9946702 | Sep 1999 | WO |
2005072239 | Aug 2005 | WO |
Entry |
---|
Iyengar, et al. “Discriminative Model Fusion for Semantic Concept Detection and Annotation in Video”, Proceedings of the eleventh ACM international conference on Multimedia, Nov. 2-8, 2003, Berkeley, CA, USA, pp. 255-258. |
Yang, et al., “Cross-Domain Video Concept Detection Using Adaptive SVMs”, International Multimedia Conference. Proceedings of the 15th international conference on Multimedia, Sep. 23-28, 2007, Augsburg, Bavaria, Germany, pp. 188-197. |
Wu, et al., "Ontology-Based Multi-Classification Learning for Video Concept Detection", IEEE International Conference on Multimedia and Expo (ICME), 2004, pp. 1003-1006. |
“TREC Video Retrieval Evaluation”, http://www-nlpir.nist.gov/projects/trecvid. |
Naphade, et al., “Large-scale Concept Ontology for Multimedia”, IEEE Multimedia, vol. 13, No. 3, 2006, pp. 86-91. |
Feng, et al., "Multiple Bernoulli Relevance Models for Image and Video Annotation", Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2004, pp. 1-8. |
Ghosal, et al., “Hidden Markov Models for Automatic Annotation and Content-Based Retrieval of Images and Video”, Proc. ACM Conf. on Research & Development on Information Retrieval, 2005, 8 pages. |
Gu, et al., “MILC2: A Multi-Layer Multi-Instance Learning Approach to Video Concept Detection”, Proc. of International Conference on Multi-Media Modeling, Kyoto, Japan, Jan. 2008, pp. 1-11. |
Gu, et al., “Multi-Layer Multi-Instance Kernel for Video Concept Detection”, Proc. ACM Int'l Conf. on Multimedia, Augsburg, Germany, Sep. 2007, pp. 349-352. |
Tang, et al., “Structure-Sensitive Manifold Ranking for Video Concept Detection”, Proc. ACM Multimedia, Augsburg, Germany, Sep. 23-29, 2007, 10 pages. |
Dietterich, et al., “Solving the Multiple Instance Problem with Axis-Parallel Rectangles”, Artificial Intelligence, vol. 89, Nos. 1-2, 1997, pp. 31-71. |
Maron, et al., “Multiple-Instance Learning for Natural Scene Classification”, Proc. 15th Int'l Conf. Machine Learning, 1998, pp. 341-349. |
Gaertner, et al., “Multi-Instance Kernels”, Proc. 19th Int'l Conf. Machine Learning, 2002, pp. 179-186. |
Kwok, et al., "Marginalized Multi-Instance Kernels", Proc. 20th Int'l Joint Conf. on Artificial Intelligence, India, Jan. 2007, pp. 901-906. |
Chen, et al., “Image Categorization by Learning and Reasoning with Regions”, J. Machine Learning Research, vol. 5, 2004, pp. 913-939. |
Naphade, et al., “A Generalized Multiple Instance Learning Algorithm for Large Scale Modeling of Multimedia Semantics”, Proc. IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, Philadelphia, PA, May 2005, pp. V-341 to V-344. |
Gaertner, et al., “Kernels and Distances for Structured Data”, Machine Learning, 2004, pp. 1-32. |
Hofmann, et al., “A Review of Kernel Methods in Machine Learning”, Dated: Tech. Rep. 156, Dec. 14, 2006, pp. 1-36. |
Altun, et al., “Maximum Margin Semi-Supervised Learning for Structured Variables”, Advances in Neural Information Processing Systems 18, MIT Press, Cambridge, MA, 2006, pp. 33-40. |
Gaertner, “A Survey of Kernels for Structured Data”, SIGKDD Explorations, 2003 pp. 49-58. |
Collins, et al., “Convolution Kernels for Natural Language”, Advances in Neural Information Processing Systems, vol. 14, MIT Press, 2002, pp. 1-8. |
Kashima, et al., “Marginalized Kernels between Labeled Graphs”, Proc. 20th Int'l Conf. on Machine Learning, 2003, 8 pages. |
Haussler, “Convolution Kernels on Discrete Structures”, UC Santa Cruz, Tech. Rep. UCSC-CRL-99-10, Jul. 1999, pp. 1-38. |
Deng, et al., “Unsupervised Segmentation of Color-Texture Regions in Images and Video”, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, No. 8, Aug. 2001, pp. 800-810. |
Kishida, “Property of Average Precision and its Generalization: An Examination of Evaluation Indicator for Information Retrieval Experiments”, Nii technical report (nii-2005-014e), NII, 2005, 20 pages. |
Smola, et al., “Kernel Methods for Missing Variables”, Proc 10th Int'l Workshop on Artificial Intelligence and Statistics. Barbados. 2005, pp. 1-8. |
Cheung, et al., "A Regularization Framework for Multiple-Instance Learning", Proc. 23rd Int'l Conf. on Machine Learning, Pittsburgh, USA, Jun. 2006, pp. 193-200. |
Blum, et al., "Combining labeled and unlabeled data with co-training", Proc. Conf. on Computational Learning Theory, 1998, pp. 92-100. |
Yan, et al., “Semi-supervised Cross Feature Learning for Semantic Concept Detection in Videos”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2005, pp. 657-663. |
Andrews, et al., “Support Vector Machines for Multiple-Instance Learning”, Advances in Neural Information Processing Systems, 2003, pp. 561-568. |
Chen, et al., "MILES: Multiple-instance learning via embedded instance selection", IEEE Trans. Pattern Analysis Machine Intelligence, vol. 28, issue 12, 2006, pp. 1931-1947. |
Zhang, et al., “Automatic Partitioning of Full-motion Video”, Multimedia Systems, vol. 1, No. 1, 1993, pp. 10-28. |
Number | Date | Country | |
---|---|---|---|
20090274434 A1 | Nov 2009 | US |