1. Field of the Invention
This invention generally relates to data classification and more particularly relates to adaptively evolving a classifier using batch mode active learning.
2. Description of Related Art
Effective data classification plays an increasingly important role in real-world applications. For instance, a computer vision application based on activity recognition applies a classifier to captured video streams in order to recognize various human activities. To ensure reliable performance, the classifier must be trained using a plurality of labeled examples. Such systems often rely on human oracles to manually label the data. Due to the tremendous increase in the amount of digital data, it is impractical for human beings to hand-label large datasets. In order to optimize the labeling effort associated with training data classifiers, active learning algorithms have been implemented which select only the most promising and exemplary instances for manual labeling.
Conventional methods of active learning have focused on the pool-based strategy, where the learner is presented with a pool of unlabeled data. The active learner applies a query function to select a single instance for labeling. The classifier is then retrained with the newly labeled datum and the process continues until a pre-defined stopping criterion is satisfied. This is not effective due to the time-consuming process of retraining the classifier after every new data point. In addition, multiple labeling oracles may be present simultaneously, and selecting a single instance at a time may result in wastage of available resources. To overcome these limitations, batch mode active learning (BMAL) systems have been implemented, which select a batch of unlabeled data points simultaneously from a given data corpus. The classifier is retrained once after every batch of data points is selected, and the selection of multiple instances facilitates parallel labeling.
BMAL algorithms are important in applications involving video data. Modern video cameras have a high frame rate and, consequently, the captured data has high redundancy. Selecting batches of relevant frames from a highly redundant frame sequence in captured videos is therefore a significant and valuable challenge.
A BMAL system can be conceptualized as consisting of two main steps: (i) deciding the batch size (the number of data points to be queried from a given unlabeled set of points) and (ii) selecting the most appropriate data points from the unlabeled pool once the batch size has been determined. Both these steps are important in ensuring maximum generalization capability of the learner with minimum human labeling effort, which is the primary objective in any active learning application. However, the few existing efforts on batch mode active learning focus only on the second step of identifying a criterion for selecting informative batches of data samples and require the batch size to be specified in advance by the user. In an application like face-based biometric recognition, for example, deciding on the batch size (the number of relevant frames in a video) in advance, without any knowledge of the data stream being analyzed, is impractical. The batch size should depend on the quality and variability of the images in the unlabeled stream and also on the level of confidence of the current classifier on the unlabeled images.
Active learning methods can be broadly categorized as online and pool-based. In online active learning, the learner encounters the data points sequentially over time and at each instant it needs to decide whether the current point has to be queried for its class label. In contrast, in pool-based active learning, the learner is exposed to a pool of unlabeled data points and it iteratively selects instances for manual annotation.
Pool-based methods can be sub-categorized as serial query based, where a single point is queried at a time, and batch mode, where a batch of points is queried simultaneously before updating the classifier. The majority of active learning techniques have been applied in the serial query based setting and can be divided into four categories: (i) SVM-based approaches, which decide the next point to be queried based on its distance from the hyperplane in the feature space (Tong and Koller, 2000); (ii) statistical approaches, which query points such that some statistical property of the future learner (e.g., the learner variance) is optimized (Cohn et al., 1996); (iii) query by committee, which chooses points to be queried based on the level of disagreement among an ensemble of classifiers (Baram et al., 2004; Freund et al., 1997; Liere and Tadepalli, 1997); and (iv) other miscellaneous approaches (McCallum and Nigam, 1998).
All the aforementioned techniques of batch mode active learning concentrate only on the development of a selection criterion, assuming the batch size is chosen by the user in advance. In an application like face-based biometric recognition, this is not a practical assumption. For instance, one would expect the number of relevant frames to be large when the active learner is exposed to an unlabeled video containing many new identities unknown to the learner, and the number to be low when the unlabeled video contains images similar to the training data. Thus, there is a strong need for the active learner to adapt to different contexts and dynamically decide the batch size as well as the specific instances to be queried.
Existing technology on BMAL requires the batch size (the number of data points to be selected from an unlabeled corpus for manual annotation) to be supplied manually as an input to the system. It is difficult to decide this number without any knowledge of a given unlabeled corpus. A random choice of this number can result in selecting too few data points, with the effect that the current model is not updated accurately, or in selecting too many points, incurring considerable human labeling effort to achieve a marginal improvement in the resulting model. Patent Application US 2010/0293117 A1, entitled “Method and System for Facilitating Batch Mode Active Learning,” discloses a method for performing batch mode active learning, wherein a batch of data points is selected from an unlabeled corpus for manual annotation. It assumes that the batch size (the number of data points to be selected from an unlabeled data corpus) is supplied as an input to the algorithm. The method then computes a score based on the uncertainty and diversity of each unlabeled data point. The data points with the top k scores (k being the batch size) are selected in the batch. This method has the above-described shortcomings.
The referenced shortcomings are not intended to be exhaustive, but rather are among many that tend to impair the effectiveness of previously known techniques in performing adaptive BMAL to train a classifier; however, those mentioned here are sufficient to demonstrate that the methodologies appearing in the art have not been satisfactory and that a significant need exists for the techniques described and claimed in this disclosure.
Apparatuses, systems, and methods for adaptive batch mode active learning for evolving a classifier are disclosed. A corpus of unlabeled data elements to be classified is received, a batch size is determined based on an objective function, a batch of unlabeled data elements having the determined batch size is selected from the corpus and labeled using a labeling agent or oracle, a classifier is retrained with the labeled data elements, and these steps are repeated until a stop criterion has been met, for example, the classifier obtains a desired performance on unlabeled data elements in the corpus. Batch size determination and selection of a batch of unlabeled data elements may be based on a single score function. Data elements may be video, audio, image, text, and/or other multimedia data elements.
Methods for adaptive batch mode active learning for evolving a classifier are disclosed. In some embodiments, the method comprises: (a) receiving one or more datasets comprising a plurality of unlabeled data elements; (b) determining, using a processor, a batch size; (c) selecting, using the processor, a batch of unlabeled data elements having the batch size from the plurality of unlabeled data elements; (d) labeling, using a labeling agent or oracle, the batch of unlabeled data elements having the batch size; and (e) repeating steps (b)-(d) for the plurality of unlabeled data elements until a stop criterion has been met. A labeling agent may be, for example, a human, a program performing a set of labeling rules, or the like.
In some embodiments, the batch size is determined based on evaluating an objective function. The objective function may be based on distances between a batch of unlabeled data elements having the batch size in the plurality of unlabeled data elements and the remaining unlabeled data elements in the plurality of unlabeled data elements. In some embodiments, a batch of unlabeled data elements having a previously determined batch size is selected based on evaluating an objective function, which may likewise be based on such distances. In some embodiments, determining a batch size and selecting a batch of unlabeled data elements having the batch size are based on a single score function. Based on the nature of the application, a different objective function may also be adopted, and the same concept can be used for adaptive batch mode active learning.
After the batch of unlabeled data elements having the batch size is labeled, the classifier may be updated by training the classifier with a set of labeled data elements, the set of labeled data elements comprising the batch of labeled data elements having the batch size.
In some embodiments, the plurality of unlabeled data elements may comprise video, image, text, web pages, or other kinds of data.
Systems for adaptive batch mode active learning for evolving a classifier are disclosed. In some embodiments, the system may comprise a processor, a memory, and/or other devices configured to perform: (a) receiving one or more datasets comprising a plurality of unlabeled data elements; (b) determining, using a processor, a batch size; (c) selecting, using the processor, a batch of unlabeled data elements having the batch size from the plurality of unlabeled data elements; (d) labeling, using a labeling agent, the batch of unlabeled data elements having the batch size; and (e) repeating steps (b)-(d) for the plurality of unlabeled data elements until a stop criterion has been met. A labeling agent may be, for example, a human, a program performing a set of labeling rules, or the like.
In some embodiments, batch size is not supplied to the system, and needs to be decided adaptively based on the data corpus. Disclosed methods automatically adjust to the level of complexity of the data and select a batch size accordingly, in addition to selecting the examples. Disclosed methods are adaptive in the sense that the batch size as well as the batch of specific data points are dynamically decided based on the given corpus of unlabeled points. The batch size and the specific data elements to be selected may be simultaneously determined through a single formulation with the same computational complexity as solving one of these problems. Thus, without any extra computational overhead, disclosed methods can dynamically adjust to the data corpus being analyzed.
In some embodiments, batches of unlabeled data points are selected from a corpus so as to optimize the value of a given objective. The objective may be an indication of increase in efficiency of the classifier (which may result if the current batch of points is used to update the classifier) and can be defined suitably based on the application at hand.
For a given objective function, it can be expected that its value will keep improving as one selects more and more data instances for labeling. For instance, if learner uncertainty is a part of the objective function, it will keep decreasing as more points are selected for annotation. This is because, as the learner gets exposed to more and more points, it becomes more confident or less uncertain on unseen data and thus the value of the objective keeps improving with larger batch sizes.
In some embodiments, a penalty on the batch size is appended to the objective function to discourage selecting a large number of instances for manual annotation, which would involve significant human labor and defeat the basic purpose of active learning. An objective function with a penalty on the batch size may ensure that not just any data point is selected in the batch; only points for which the reward of the objective outweighs the penalty get selected. Therefore, introduction of the penalty term automatically enables the system to strike a balance between data complexity and labeling cost.
Apparatus, systems and methods described herein can be used with any kind of digital data, for example text data (where a model needs to be trained to categorize text documents into different classes based on their contents or to classify an email as spam/non-spam based on its content), web-page classification (where a web-page needs to be assigned into a specific category), content based image retrieval (CBIR) (where relevant images need to be retrieved from a vast collection, based on given keywords), image recognition (where a classifier needs to be developed to recognize images like medical images, objects, facial expressions, facial poses among others).
Any embodiment of any of the devices, systems, and methods can consist of or consist essentially of—rather than comprise/include/contain/have—any of the described steps, elements, and/or features. Thus, in any of the claims, the term “consisting of” or “consisting essentially of” can be substituted for any of the open-ended linking verbs recited above, in order to change the scope of a given claim from what it would otherwise be using the open-ended linking verb.
The feature or features of one embodiment may be applied to other embodiments, even though not described or illustrated, unless expressly prohibited by this disclosure or the nature of the embodiments.
Details associated with the embodiments described above and others are presented below.
The following drawings illustrate by way of example and not limitation. For the sake of brevity and clarity, every feature of a given structure is not always labeled in every figure in which that structure appears. Identical reference numbers do not necessarily indicate an identical structure. Rather, the same reference number may be used to indicate a similar feature or a feature with similar functionality, as may non-identical reference numbers.
Various features and advantageous details are explained more fully with reference to the nonlimiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well known starting materials, processing techniques, components, and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating embodiments of the invention, are given by way of illustration only, and not by way of limitation. Various substitutions, modifications, additions, and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
The term “coupled” is defined as connected, although not necessarily directly, and not necessarily mechanically; two items that are “coupled” may be unitary with each other. The terms “a” and “an” are defined as one or more unless this disclosure explicitly requires otherwise. The term “substantially” is defined as largely but not necessarily wholly what is specified (and includes what is specified; e.g., substantially 90 degrees includes 90 degrees and substantially parallel includes parallel), as understood by a person of ordinary skill in the art. In any disclosed embodiment, the terms “substantially,” “approximately,” and “about” may be substituted with “within [a percentage] of” what is specified, where the percentage includes 0.1, 1, 5, and 10 percent.
The terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a system or apparatus that “comprises,” “has,” “includes” or “contains” one or more elements possesses those one or more elements, but is not limited to possessing only those elements. Likewise, a method that “comprises,” “has,” “includes” or “contains” one or more steps possesses those one or more steps, but is not limited to possessing only those one or more steps.
Further, a device or system that is configured in a certain way is configured in at least that way, but it can also be configured in other ways than those specifically described.
In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of disclosed embodiments. One of ordinary skill in the art will recognize, however, that embodiments of the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The user interface device 110 is referred to broadly and is intended to encompass at least a suitable processor-based device such as a desktop computer, a laptop computer, a tablet computer, a Personal Digital Assistant (PDA), a mobile communication device, an organizer device, or the like. In a further embodiment, the user interface device 110 may access the Internet to access a web application or web service hosted by the server 102 and provide a user interface for enabling a user to enter or receive information. For example, the user may enter commands to retrieve one or more datasets comprising a plurality of unlabeled data elements to be analyzed.
The network 108 may facilitate communications of data between the server 102 and the user interface device 110. The network 108 may include any type of communications network including, but not limited to, a wireless communication link, a direct PC to PC connection, a local area network (LAN), a wide area network (WAN), a modem to modem connection, the Internet, a combination of the above, or any other communications network now known or later developed within the networking arts which permits two or more computers to communicate with one another.
In some embodiments, the server 102 may be configured to perform instructions such as those described in
The data storage device 104 may include a hard disk, including hard disks arranged in a Redundant Array of Independent Disks (RAID) array, a tape storage drive comprising a magnetic tape data storage device, an optical storage device, or the like. In some embodiments, the data storage device 104 may store health related data, such as insurance claims data, consumer data, or the like. The data may be arranged in a database and accessible through Structured Query Language (SQL) queries, or other database query languages or operations.
In some embodiments, the server 102 may submit a query to selected data storage devices 204-206 to collect a consolidated set of data elements associated with an individual or group of individuals. The server 102 may store the consolidated data set in a consolidated data storage device 210. In such an embodiment, the server 102 may refer back to the consolidated data storage device 210 to obtain a set of data elements associated with a specified individual. Alternatively, the server 102 may query each of the data storage devices 204-208 independently or in a distributed query to obtain the set of data elements associated with a specified individual. In another alternative embodiment, multiple databases may be stored on a single consolidated data storage device 210.
In various embodiments, the server 102 may communicate with the data storage devices 204-210 over the data bus 202. The data bus 202 may comprise a SAN, a LAN, a wireless connection, or the like. The communication infrastructure may include Ethernet, Fibre-Channel Arbitrated Loop (FC-AL), Small Computer System Interface (SCSI), and/or other similar data communication schemes associated with data storage and communication. For example, the server 102 may communicate indirectly with the data storage devices 204-210 by first communicating with a storage server or storage controller 106.
The server 102 may host a software application configured for adaptive batch mode active learning. The software application may further include modules for interfacing with the data storage devices 204-210, interfacing a network 108, interfacing with a user, and the like. In some embodiments, the server 102 may host an engine, application plug-in, or application programming interface (API). In another embodiment, the server 102 may host a web service or web accessible software application.
The computer system 300 may include Random Access Memory (RAM) 308, which may be SRAM, DRAM, SDRAM, or the like. The computer system 300 may utilize RAM 308 to store the various data structures used by a software application configured for adaptive batch mode active learning. The computer system 300 may also include Read Only Memory (ROM) 306 which may be PROM, EPROM, EEPROM, optical storage, or the like. The ROM may store configuration information for booting the computer system 300. The RAM 308 and the ROM 306 hold user and system 100 data.
The computer system 300 may also include an input/output (I/O) adapter 310, a communications adapter 314, a user interface adapter 316, and a display adapter 322. The I/O adapter 310 and/or the user interface adapter 316 may, in certain embodiments, enable a user to interact with the computer system 300 in order to input information for data elements to be analyzed. In a further embodiment, the display adapter 322 may display a graphical user interface associated with a software or web-based application for adaptive batch mode active learning.
The I/O adapter 310 may connect one or more storage devices 312, such as one or more of a hard drive, a Compact Disk (CD) drive, a floppy disk drive, and/or a tape drive, to the computer system 300. The communications adapter 314 may be adapted to couple the computer system 300 to the network 108, which may be one or more of a wireless link, a LAN and/or WAN, and/or the Internet. The user interface adapter 316 couples user input devices, such as a keyboard 320 and a pointing device 318, to the computer system 300. The display adapter 322 may be driven by the CPU 302 to control the display on the display device 324.
Disclosed embodiments are not limited to the architecture of system 300. Rather, the computer system 300 is provided as an example of one type of computing device that may be adapted to perform functions of a server 102 and/or the user interface device 110. For example, any suitable processor-based device may be utilized, including, without limitation, smart phones, tablet computers, personal digital assistants (PDAs), computer game consoles, and multi-processor servers. Moreover, the present embodiments may be implemented on application specific integrated circuits (ASIC) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may utilize any number of suitable structures capable of executing logical operations according to the disclosed embodiments.
The schematic flow chart diagrams that follow are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of some of the present embodiments. Other steps and methods may be employed that vary in some details from the illustrated embodiment (e.g., that are equivalent in function, logic, and/or effect). Additionally, the format and symbols employed are provided to explain logical steps and should not be understood as limiting the scope of the invention. Although various arrow types and line types may be employed in the flow chart diagrams, they should not be understood as limiting the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
At step 404, a batch size is determined; a batch of unlabeled data elements of this size will be analyzed next. The batch size may be determined based on evaluating a score function or objective function. For example, the batch size may be selected such that an objective function is maximized or minimized. The maximization or minimization may be subject to one or more constraints. The objective function may be based on distances between a batch of unlabeled data elements having the batch size in the plurality of unlabeled data elements and the remaining unlabeled data elements in the plurality of unlabeled data elements. The distance between two data elements may be evaluated by, for example, Euclidean distance, zero norm, or the like.
At step 406, a batch of unlabeled data elements having the batch size (as determined in step 404) are selected from the plurality of unlabeled data elements. In some embodiments, the selection of a batch of unlabeled data elements having a specific batch size may be based on evaluating an objective function, such as an objective function described above. In some embodiments, the determination of a batch size and selection of a batch of unlabeled data elements having the batch size may be based on a single objective function.
At step 408, the selected batch of unlabeled data elements having the determined batch size is labeled, for example, by a labeling agent or oracle, where each data element is associated with a label which indicates a class to which the data element belongs. After the selected batch of unlabeled data elements having the batch size is labeled, this batch of labeled data may be appended to previously labeled data and together used to train, and thereby update, the classifier.
At step 412, it is determined whether a stop criterion has been met. If not, steps 404-410 are repeated for the plurality of unlabeled data elements until a stop criterion has been met. A stop criterion may be that all the data elements in the plurality of unlabeled data elements have been labeled, that a certain percentage of the plurality of unlabeled data elements have been labeled, that a certain accuracy of classification has been obtained on the plurality of unlabeled data elements, or the like.
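The loop of steps 404-412 can be sketched as follows. This is a minimal illustration only; the callables `select_batch`, `query_labels`, `retrain`, and `stop_criterion` are hypothetical placeholders to be supplied by the application, not interfaces defined in this disclosure.

```python
def adaptive_bmal(unlabeled, labeled, classifier,
                  select_batch, query_labels, retrain, stop_criterion):
    """Generic adaptive BMAL loop: repeat batch selection, labeling, and
    retraining until the stop criterion is met or the corpus is exhausted."""
    while unlabeled and not stop_criterion(classifier, unlabeled):
        # Steps 404-406: the batch size and the batch itself are decided
        # together, based on the current classifier and the unlabeled pool.
        batch = select_batch(classifier, unlabeled)
        if not batch:
            break  # labeling cost outweighed the reward for every point
        # Step 408: a labeling agent or oracle annotates the batch.
        labels = query_labels(batch)
        labeled.extend(zip(batch, labels))
        for x in batch:
            unlabeled.remove(x)
        # Step 410: the classifier is retrained on all labeled data.
        classifier = retrain(classifier, labeled)
    return classifier, labeled
```

For example, with a toy selector that takes the first two points per round, the loop drains the pool in batches of two and retrains after each batch.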
The following describes scenarios that may be used with various embodiments of the disclosed invention. These examples are not intended to be limiting, but rather to provide specific uses for different embodiments of the disclosed invention.
Consider a BMAL problem which has a current labeled set Lt and a current classifier wt trained on Lt. The classifier is exposed to an unlabeled video Ut at time t. The objective is to select a batch B from the unlabeled stream in such a way that the classifier wt+1, at time t+1, trained on Lt∪B, has maximum generalization capability. An efficient method to judge the generalization capability of the updated learner is to compute its entropy on the remaining set of Ut−B images after batch selection (given that future data is unknown). To ensure high generalization power of the future learner, one way is to minimize the entropy of the updated learner on the remaining |Ut−B| images.
From a data geometry point of view, it is possible that an objective function with only the entropy criterion will select images from high-density regions in the space of the unlabeled data points. This is because the set of Ut−B images may be dominated by samples from such high-density regions, which constitute a large portion of the data. To address this issue, a condition is imposed in the objective function which selects images from low-density regions in the data space, i.e., images that have a high distance from the remaining set. The objective function described in this disclosure can be suitably modified for a different application, wherein the same optimization-based dynamic BMAL strategy can be used for batch selection.
Let C denote the total number of classes and pj denote the average Euclidean distance of an unlabeled image xj from other images in the video Ut. Greater values of pj denote that the point is located in a low-density region. The two conditions mentioned previously can be satisfied by defining a score function as follows:
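The equation for the score function does not survive in this text. Based on the definitions above and the term-by-term description that follows, it can be reconstructed (notation assumed; the equation number matches later in-text references) as:

```latex
f(B) \;=\; \sum_{x_j \in B} p_j \;-\; \lambda_1 \sum_{x_k \in U_t - B} E\!\left(w^{t+1}, x_k\right) \qquad (1)
```

where $E(w^{t+1}, x_k) = -\sum_{c=1}^{C} P(c \,|\, x_k)\, \log P(c \,|\, x_k)$ is assumed to denote the entropy of the updated learner $w^{t+1}$ on the unlabeled image $x_k$.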
The first term denotes the sum of the average distances of each selected point from other points in the unlabeled video, while the second term quantifies the sum of the entropies of the learner on each remaining point in the unlabeled stream. λ1 is a tradeoff parameter.
The problem therefore reduces to selecting a batch B of unlabeled images which produces the maximum score f(B). Let the batch size (number of images to be selected for annotation) be denoted by m, which is an unknown. Since there is no restriction on the batch size m, the obvious solution to this problem is to select all the images in the unlabeled video, leaving no image behind. Then, the entropy term becomes 0, and the density term attains its maximum value. Consequently, f(B) will also attain its maximum score. However, querying all the images for their class labels is not an elegant solution and defeats the basic purpose of active learning. To prevent this, the objective/score function is modified by enforcing a penalty on the batch size as follows:
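The penalized score function is likewise missing from this text. A reconstruction consistent with the description of its three terms (the label (2) follows the later in-text reference to Equation 2) is:

```latex
\tilde{f}(B) \;=\; \sum_{x_j \in B} p_j \;-\; \lambda_1 \sum_{x_k \in U_t - B} E\!\left(w^{t+1}, x_k\right) \;-\; \lambda_2\, m \qquad (2)
```

where m = |B| is the batch size and the third term is the penalty on the batch size.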
The third term essentially reflects the cost associated with labeling the images, as the value of the objective function decreases with every single image that needs to be labeled. The extent of labeling penalty can be controlled through the weighting parameter λ2. Defining the score function in this way ensures that any and every image is not queried for its class label. Only images for which the density and entropy terms outweigh the labeling cost term, get selected.
Next, there is a need to select a batch B of unlabeled images so as to maximize {tilde over (f)}(B). Since brute-force search methods are prohibitive, the method employs numerical optimization techniques to solve this problem. In this regard, a binary vector M of size |Ut| is defined, where each entry denotes whether the corresponding point is to be queried for its class label, and the objective function in Equation 2 is rewritten into an equivalent function in terms of the defined vector M:
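The rewritten equation is not reproduced in this text. Rewriting Equation 2 in terms of the indicator vector M yields the following reconstruction (notation assumed):

```latex
f(M) \;=\; \sum_{j=1}^{|U_t|} M_j\, p_j \;-\; \lambda_1 \sum_{j=1}^{|U_t|} \left(1 - M_j\right) E\!\left(w^{t+1}, x_j\right) \;-\; \lambda_2\, m \qquad (3)
```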
subject to the constraint:
Mj∈{0,1} ∀j  (4)
In this formulation, note that if an entry of M is 1, the corresponding image will be selected for annotation, and if it is 0, the image will not be selected. The number of images to be selected is therefore equal to the number of non-zero entries in the vector M, i.e., the zero-norm of M. Hence,
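The relation implied by the preceding sentence, reconstructed here, is:

```latex
m \;=\; \|M\|_0
```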
The zero-norm of M is replaced by its tightest convex approximation, the one-norm of M (similar to Weston et al., 2003). Also, from constraint (4), the one-norm is simply the sum of the elements of the vector M. Substituting m in terms of M, the formulation becomes:
subject to the constraint:
Mj ∈ {0, 1} ∀ j
The above optimization is an integer programming problem and is NP-hard. The constraint is relaxed to make it a continuous optimization problem:
subject to the constraint: 0 ≤ Mj ≤ 1.
This problem is solved using the quasi-Newton method (Nocedal and Wright, 1999). The final value of M governs both the number of points and the specific points to be selected for the given data stream (the integer solution is recovered by greedily setting the top m entries of M to 1, where m=ΣjMj). Hence, solving a single optimization problem dynamically decides both the batch size and the specific points for manual labeling.
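The greedy rounding step described above can be sketched directly (the quasi-Newton solve itself is omitted here; any box-constrained solver producing a relaxed M in [0, 1]^n would precede this):

```python
import numpy as np

def round_relaxed_solution(M):
    """Recover an integer selection from the relaxed solution M in [0,1]^n:
    the batch size is m = sum_j M_j (rounded to the nearest integer), and
    the top-m entries of M are greedily set to 1, as described above."""
    M = np.asarray(M, dtype=float)
    m = int(round(M.sum()))
    chosen = np.zeros(len(M), dtype=bool)
    if m > 0:
        # Indices of the m largest relaxed values get selected.
        chosen[np.argsort(-M)[:m]] = True
    return m, chosen
```

For a relaxed solution like [0.9, 0.8, 0.1, 0.2], the sum (2.0) fixes the batch size, and the two largest entries identify the points to query.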
Optimization based dynamic BMAL can be extended for dynamic batch selection in situations where multiple sources of information (e.g. audio and video data) are available. For instance, Equation (6) can be modified by appending relevant terms from the respective sources, together with a penalty on the batch size:
Moreover, if contextual information is available (e.g. location of a subject, whether at home or in office), it can be used to construct a prior probability vector depicting the chances of seeing particular acquaintances in a given context. The entropy term can then be computed on the posterior probabilities obtained by multiplying the likelihood values returned by the classifier with the context aware prior. Thus, subjects not expected in a given context (e.g. a home acquaintance in an office setting) will have low priors and consequently, the corresponding posteriors will not contribute much in the entropy calculation. The framework can therefore be extended to perform context-aware adaptive batch selection. The preliminary studies in these directions have shown promising results.
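The context-aware entropy computation described above can be sketched as follows; the construction of the prior vector is application specific and assumed given here:

```python
import numpy as np

def context_aware_entropy(likelihoods, prior):
    """Entropy on context-aware posteriors (a sketch of the extension
    described above).

    likelihoods: (k,) class likelihoods returned by the classifier
    prior:       (k,) context-dependent prior over the k subjects
    """
    post = np.asarray(likelihoods) * np.asarray(prior)
    post = post / post.sum()          # normalize to a posterior
    return -np.sum(post * np.log(post + 1e-12))
```

A subject not expected in the current context receives a low prior, so its posterior contributes little to the entropy, and frames of such subjects are less likely to trigger querying.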
While the proposed framework combines batch size and data sample selection in a single formulation, it is also possible to conceive an intuitive approach to this problem using a clustering-based batch size selection step, followed by application of a traditional static BMAL algorithm (such as Guo and Schuurmans (2007)). For purposes of performance comparison, the discussion below presents an alternative clustering-based approach for selecting the batch size in the latter case.
A strategy to decide the batch size is to use a clustering algorithm to segment the images in the unlabeled video stream, followed by a method to compute the batch size. Since the number of subjects (and hence the number of clusters) in the data stream is unknown, the spatial distribution of the unlabeled points must be exploited for clustering (algorithms like k-means, which require the number of clusters as an input, cannot be used). This motivates the application of the DBSCAN algorithm, which can automatically determine the number of clusters for a given set of points, to isolate high-density regions as separate clusters. For details about this method, refer to Tan et al. (2006). Initial studies (not presented here for brevity) confirmed the efficacy of DBSCAN in isolating images of different subjects into separate clusters.
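A minimal sketch of this segmentation step, using scikit-learn's DBSCAN (the eps and min_samples values are assumed, data-dependent settings, not values from the study):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_stream(X, eps=0.5, min_samples=3):
    """Cluster the feature vectors of an unlabeled stream with DBSCAN,
    which discovers the number of clusters automatically."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    # DBSCAN marks noise points with label -1; they are not a cluster.
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    return labels, n_clusters
```

Each discovered cluster ideally corresponds to one subject in the stream; the per-cluster statistics can then drive the batch size computation that follows.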
The Silhouette Coefficient (based on the cohesion and separation measures of a cluster) is a natural choice for deciding the number of points to be queried from each cluster. It attains a maximum value of 1, where a high value denotes a compact and well-separated cluster. Intuitively, fewer points should be selected from a compact and well-separated cluster and more points otherwise; thus, the number of points to be selected from a cluster should be proportional to (1 − SCi), the complement of its Silhouette coefficient. There may also be benefit in selecting more points from larger clusters. If m is the total number of points, mi is the number of points in cluster i, SCi is the Silhouette coefficient of cluster i, and C is a constant, the number of points to be selected from cluster i can thus be defined as:
This operation is performed for each of the identified clusters to compute the number of points to be selected (the sum of the values obtained across all clusters provides the overall batch size). The dynamically computed batch size for each cluster can now be passed as an input to any standard static BMAL procedure for selecting the required number of points from the corresponding cluster.
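Since the equation itself is not reproduced in the text, the sketch below assumes the natural combination of the two stated proportionalities, scaling (1 − SCi) by the relative cluster size mi/m and the constant C (C = 50 follows the setting reported in the studies):

```python
import math

def points_per_cluster(cluster_sizes, silhouettes, C=50):
    """Number of points to query from each cluster (a sketch; the exact
    combination of terms is an assumption).

    cluster_sizes: [m_1, ..., m_k] points per cluster
    silhouettes:   [SC_1, ..., SC_k] Silhouette coefficient per cluster
    """
    m = sum(cluster_sizes)  # total number of points
    # Query more from loose clusters (low SC_i) and from large clusters.
    return [max(0, math.ceil(C * (1.0 - sc) * mi / m))
            for mi, sc in zip(cluster_sizes, silhouettes)]
```

Summing the per-cluster counts yields the overall dynamic batch size, which is then passed to the static BMAL procedure.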
Three studies were conducted to validate the efficacy of the framework. Based on preliminary studies, the parameters λ1 and λ2 were empirically set to 1 and C to 50. The entropy term in the objective function necessitates a classifier that can provide probability estimates of an unlabeled point with respect to all classes, so Gaussian Mixture Models (GMMs) were used as the classifier in the studies. GMMs have been successfully used in face recognition in earlier work (Kim et al., 2004).
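One common way to obtain such per-class probability estimates with GMMs is to fit one mixture per class and apply Bayes' rule. The sketch below assumes this setup with equal class priors and an arbitrary choice of two mixture components per class; the study's actual configuration is not specified in the text:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMClassifier:
    """One GMM per class; posterior probabilities via Bayes' rule with
    equal class priors (a sketch of the setup described above)."""
    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.gmms_ = [GaussianMixture(self.n_components,
                                      random_state=0).fit(X[y == c])
                      for c in self.classes_]
        return self

    def predict_proba(self, X):
        # Log-likelihood of each point under each class-conditional GMM.
        ll = np.column_stack([g.score_samples(X) for g in self.gmms_])
        ll -= ll.max(axis=1, keepdims=True)   # numerical stabilization
        p = np.exp(ll)
        return p / p.sum(axis=1, keepdims=True)
```

The resulting probability rows feed directly into the entropy term of the objective function.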
Due to the high frame rate of modern cameras, video data has a lot of redundancy and hence, video based person recognition is taken as the exemplar application in this disclosure to describe the framework. Four challenging biometric datasets were used in the different studies:
The VidTIMIT, MOBIO and MBGC datasets represent videos captured under different real world settings (stationary, using handheld device and under uncontrolled lighting respectively). FacePix contains calibrated measurements of pose and illumination variations, which were useful to study the efficacy of the framework.
The purpose of this study was to demonstrate the efficacy of dynamic batch selection over static selection in applications like face recognition. The VidTIMIT and the MOBIO biometric datasets were used in this study, with 25 subjects randomly selected from each dataset. Preliminary studies (not presented here due to lack of space) confirmed that the Discrete Cosine Transform (DCT) feature could effectively differentiate the subjects, and hence it was used in this study (for details about the feature extraction process, refer to Ekenel et al. (2007)). The feature extraction step was followed by PCA to reduce the dimension.
A classifier was induced with 1 training video of each of the 25 subjects, used in this study. Unlabeled video streams were then presented to the learner. To demonstrate the generalizability with different subject combinations, the number of subjects in each unlabeled stream was varied between 1 and 10. For each stream, the batch size and the specific points were selected simultaneously using the proposed optimization strategy (Equation 6). The classifier was updated with the selected points and tested on test videos containing the same subject(s) as in the corresponding unlabeled videos.
To illustrate the usefulness of dynamic batch size selection, the accuracy was compared against the case when all the frames in the unlabeled video were used for learning and also when the batch size was static and predetermined. The static batch size was selected as 10 (the effect of this parameter is studied later) and the optimization scheme was used to select the 10 points, for fair comparison. The results are shown in
In general, upon selection of a greater number of images from an unlabeled set, the updated learner will perform better on a test set containing the same subjects. Thus, if a higher value of the batch size in a static BMAL learner is selected, then the selection would be expected to perform better than in
A comparative study of the proposed optimization framework was performed against the two-step process of clustering followed by static BMAL for dynamic batch selection. The VidTIMIT and MBGC datasets were used in this study. Unlike the previous study, where all 25 subjects were present in the training set, the subjects in this study were divided into two groups: a "known" group containing 20 subjects and an "unknown" group containing the remaining 5 subjects. A classifier was induced with 1 video of each of the known subjects. Unlabeled video streams were then presented to the learner, and the batch sizes decided by the two schemes were noted. The proportion of unknown subjects in the unlabeled video was gradually increased from 0% (where all the subjects in the unlabeled video were from the training set) to 100% (where none of the subjects in the unlabeled video were present in the training set) in steps of 20%. The learner was not given any information about the composition of the video stream. Also, the size of each video stream was kept the same to facilitate fair comparison. The DCT feature, followed by PCA, was used again.
The results of the aforementioned study (averaged over 10 trials) are shown in
Besides the predicted batch size, it is equally important to analyze the accuracy obtained on test sets with similar compositions as the unlabeled videos. Since the proposed scheme appropriately reflects the uncertainty of the learner and queries points accordingly, it is expected to have a better accuracy on test videos as compared to the clustering technique. This is confirmed in Table 2 which shows the accuracy obtained on test videos from the MBGC dataset using the two strategies. It is evident that the proposed scheme achieved significantly better generalization as compared to the clustering based approach with varying proportions of new identities in the unlabeled stream. The result on the VidTIMIT dataset was similar and is not presented due to lack of space.
To further demonstrate the usefulness of the proposed approach in batch size selection under changing conditions, studies were conducted with unlabeled videos containing images with different poses and illumination conditions compared to the training images. These are detailed below:
Presence of images with unknown pose angles: The FacePix dataset was used in this study. The training set contained frontal images (−10 degree to 10 degree) of 25 randomly chosen subjects. Unlabeled sets of images of the same 25 subjects were presented to the learner and the percentage of profile images (−45 degree to −90 degree and 45 degree to 90 degree) was gradually increased from 0% (where all the unlabeled images were frontal) to 100% (where the unlabeled video contained only profile images) in steps of 20%. The Gabor feature was used here (as in Gokberk et al., 2002) and PCA was used to reduce the dimension.
Presence of images under unknown illumination: The FacePix dataset was used in this study also. As before, the training set contained images of 25 subjects where the illumination angle was −10 degree to 10 degree. Unlabeled images of the same subjects were presented to the learner and the percentage of images where the angle of illumination varied between −45 degree to −90 degree and 45 degree to 90 degree was gradually increased from 0% to 100% in steps of 20%. The Gabor feature, followed by PCA, was used in this study (as used in Liu et al., 2005). The results are shown in
The practical applicability of the end-to-end system was studied by analyzing its performance under real world settings. The VidTIMIT and the MOBIO datasets, representing challenging real-world conditions, were used in this study. A classifier was induced with 1 training video of each of 25 randomly chosen subjects. Unlabeled video streams (each containing about 250 frames) were then presented to the classifier sequentially. The images in the video streams were randomly chosen from all 25 subjects and did not have any particular proportion of subjects in them, to mimic general real-world conditions. For each video, optimization based dynamic BMAL was used to query images. The selected images were appended to the training set, the classifier was updated, and it was then tested on a test video containing about 5000 images spanning all 25 subjects.
The proposed approach was compared with three heuristic BMAL schemes—(i) Random Sampling, (ii) Diversity based BMAL, as proposed by Brinker (2003) and (iii) Uncertainty Based Ranked Selection, where the top k uncertain points were queried from the unlabeled video, k being the batch size. For each video stream, the dynamically computed batch size was noted and used for the corresponding unlabeled video in each of the heuristic techniques, for fair comparison. The performance was also compared against the two step process of clustering followed by static BMAL.
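The third heuristic baseline, uncertainty-based ranked selection, can be sketched in a few lines. The specific uncertainty measure used for ranking is not stated in the text, so predictive entropy is an assumed choice here:

```python
import numpy as np

def uncertainty_ranked_selection(probs, k):
    """Select the indices of the top-k most uncertain points.

    probs: (n, c) class-probability estimates for the unlabeled pool
    k:     batch size (taken from the dynamic scheme, for fair comparison)
    """
    # Predictive entropy per point; higher means more uncertain.
    H = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-H)[:k]
```

Because this ranking ignores density and redundancy, it can select many near-duplicate frames from a video stream, which is one reason batch-aware formulations tend to outperform it.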
The label complexity (the number of batches of labeled examples needed to achieve a certain level of accuracy) was used as the metric for quantifying performance in this study. The average time taken by each approach to query images from an unlabeled stream was also noted. The results are shown in Table 3. As evident from the running time figures, the proposed approach is computationally intensive compared to the heuristic BMAL techniques. However, the label complexity needed to attain a test accuracy of 85% is markedly lower for the proposed approach. This confirms that the proposed scheme succeeds in selecting salient and prototypical data points, attaining a given level of accuracy with significantly reduced human labeling effort compared to the heuristic approaches. The clustering scheme followed by static BMAL achieves label complexity comparable to that of the proposed approach. However, it is a two-step process and therefore involves more computation than the proposed approach, as reflected in the running time values.
All of the methods disclosed and claimed herein can be made and executed without undue experimentation in light of the present disclosure. While the apparatus and methods of this invention have been described in terms of preferred embodiments, it will be apparent to those of skill in the art that variations may be applied to the methods and in the steps or in the sequence of steps of the method described herein without departing from the concept, spirit and scope of the invention. In addition, modifications may be made to the disclosed apparatus and components may be eliminated or substituted for the components described herein where the same or similar results would be achieved. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope, and concept of the invention as defined by the appended claims.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/491,530 entitled “Apparatus, System, and Method for Adaptive Batch Mode Active Learning for Evolving a Classifier” filed May 31, 2011, the entire contents of which is incorporated herein by reference without disclaimer.