The present invention relates generally to information processing and classification. More particularly, the present invention relates to systems, methods and computer readable media for classifying information in a technology-assisted review (“TAR”) process using a scalable continuous active learning (“S-CAL”) approach. This S-CAL approach may be used to efficiently classify and rank each one of a plurality of documents in a collection of electronically stored information.
Technology-assisted review (“TAR”) involves the iterative retrieval and review of documents from a collection until a substantial majority (or “all”) of the relevant documents have been reviewed or at least identified. At its most general, TAR separates the documents in a collection into two classes or categories: relevant and non-relevant. Other (sub) classes and (sub) categories may be used depending on the particular application.
Presently, TAR lies at the forefront of information retrieval (“IR”) and machine learning for text categorization. Much like with ad-hoc retrieval (e.g., a Google search), TAR's objective is to find documents to satisfy an information need, given a query. However, the information need in TAR is typically met only when substantially all of the relevant documents have been retrieved. Accordingly, TAR relies on active transductive learning for classification over a finite population, using an initially unlabeled training set consisting of the entire document population. While TAR methods typically construct a sequence of classifiers, their ultimate objective is to produce a finite list containing substantially all relevant documents, not to induce a general classifier. In other words, classifiers generated by the TAR process are a means to the desired end (i.e., an accurately classified document collection).
Some applications of TAR include electronic discovery (“eDiscovery”) in legal matters, systematic review in evidence-based medicine, and the creation of test collections for IR evaluation. See G. V. Cormack and M. R. Grossman, Evaluation of machine-learning protocols for technology-assisted review in electronic discovery (Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 153-162, 2014); C. Lefebvre, E. Manheimer, and J. Glanville, Searching for studies (Cochrane handbook for systematic reviews of interventions. New York: Wiley, pages 95-150, 2008); M. Sanderson and H. Joho, Forming test collections with no system pooling (Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 33-40, 2004). As introduced above, in contrast to ad-hoc search, the information need in TAR is typically satisfied only when virtually all of the relevant documents have been discovered. As a consequence, a substantial number of documents are typically examined for each classification task. The reviewer is typically an expert in the subject matter, not in IR or data mining. In certain circumstances, it may be undesirable to entrust the completeness of the review to the skill of the user, whether expert or not. For example, in eDiscovery, the review is typically conducted in an adversarial context, which may offer the reviewer limited incentive to conduct the best possible search.
TAR systems and methods including unsupervised learning, supervised learning, and active learning are discussed in Cormack VI. Generally, the property that distinguishes active learning from supervised learning is that with active learning, the learning algorithm is able to choose the documents from which it learns, as opposed to relying on user selection or random selection of training documents. In pool-based settings, the learning algorithm has access to a large pool of unlabeled examples, and requests labels for some of them. The size of the pool is limited by the computational effort necessary to process it, while the number of documents for which labels are requested is limited by the human effort required to label them.
Lewis and Gale in “A sequential algorithm for training text classifiers” (Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3-12, 1994) compared three strategies for requesting labels: random sampling, relevance sampling, and uncertainty sampling, concluding that, for a fixed labeling budget, uncertainty sampling generally yields a superior classifier. At the same time, however, uncertainty sampling offers no guarantee of effectiveness, and may converge to a sub-optimal classifier. Subsequent research in pool-based active learning has largely focused on methods inspired by uncertainty sampling, which seek to minimize classification error by requesting labels for the most informative examples. Over and above the problem of determining the most informative examples, the computational cost of selecting examples and re-training the classifier is of concern, motivating research into more efficient algorithms and batch learning methods.
For example, a baseline model implementation (“BMI”) employing Continuous Active Learning (“CAL”) and relevance feedback consistently achieved over 90% recall across the collections of the TREC 2015 Total Recall Track. Recall and other measures associated with information classification are discussed in Cormack VI. This BMI used a labeling and review budget for each topic equal to 2R+1000, where R is the number of documents in the collection relevant to the topic. R can also be expressed according to the following equation: R=ρ·D, where D is the number of documents in the collection and ρ is the prevalence of relevant documents in the collection.
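By way of a worked example of this budget (the collection size and prevalence below are illustrative values only, not figures from the TREC 2015 collections):

```python
# Illustrative review-budget arithmetic for the CAL baseline (BMI).
# D and rho are made-up values chosen only to show the computation.
D = 1_000_000          # number of documents in the collection
rho = 0.01             # prevalence of relevant documents
R = int(rho * D)       # R = rho * D  ->  10,000 relevant documents
budget = 2 * R + 1000  # BMI labeling/review budget  ->  21,000 documents
print(R, budget)
```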
The challenge of reliably and efficiently achieving high recall for large datasets is of critical importance, but has not been well addressed in the prior art. Within the context of electronic discovery (“eDiscovery”) in legal matters, this need has been particularly acute, as voiced by parties and their counsel, technology providers, and the courts. Yet a solution has remained elusive. In the absence of a solution, parties have agreed to, or been required to, undertake burdensome protocols that offer little assurance of success.
Accordingly, there is a need for a solution to the TAR problem that further minimizes human review effort, such that the review effort is no longer simply proportional to the number of relevant documents. Furthermore, there is a need for a TAR solution to provide calibrated estimates of recall, precision, and/or prevalence in order to further provide a classification that meets one or more target criteria.
The invention provides novel systems and methods for classifying information such that classifiers generated during iterations of the classification process will be able to accurately classify information for an information need to which they are applied (e.g., accurately classify documents in a collection as relevant or non-relevant) and thus, achieve high quality (e.g., high recall). In addition, these novel systems and methods reduce human review effort, allow for the classification of larger document collections and are able to provide calibrated estimates of recall, precision, and/or prevalence in order to further provide a classification that meets one or more target criteria.
Systems and computerized methods for classifying information are provided. The systems and methods receive an identification of a relevant document, which is used as part of a training set. The systems and methods also select a set of documents U from a document collection. The document collection is stored on a non-transitory storage medium. The systems and methods assign a default classification to one or more documents in U, which are to be used as part of the training set. The systems and methods train a classifier using the training set and score the documents in U using the classifier. The systems and methods also remove documents from the training set. The systems and methods select a first batch size of documents from U to form a set V and select a first sub-sample of documents from V to form a set W. The systems and methods further present documents in W to a reviewer and receive from the reviewer user coding decisions associated with the presented documents. The systems and methods add the documents presented to the reviewer to the training set and remove those documents from U. The systems and methods also estimate a number of relevant documents in V using the number of relevant documents identified in the user coding decisions received from the reviewer. The systems and methods further update the classifier using documents in the training set and estimate a prevalence of relevant documents in the document collection. Upon determining that a stopping criteria has been reached, the systems and methods calculate a threshold for the classifier using the estimated prevalence, and classify the documents in the document collection using the classifier and the calculated threshold.
In certain embodiments, upon determining that a stopping criteria has not been reached, the systems and methods further score the documents in U using an updated classifier, select a second batch size of documents from U to form a set V, and select a second sub-sample size of documents from V to form a set W, and repeat the steps of presenting documents to a reviewer, receiving user coding decisions, adding reviewed documents to the training set and removing said documents from U, estimating the number of relevant documents in V, updating the classifier, estimating the prevalence of relevant documents, and determining whether a stopping criteria has been reached.
In certain embodiments, the second batch size is calculated based on the first batch size. In certain embodiments, the size of the second sub-sample is varied between iterations. In certain embodiments, the number of relevant documents in V is estimated based on the estimate from a prior iteration, the number of relevant documents identified by the reviewer, and the number of documents in W presented to a reviewer. In certain embodiments, the number of documents presented to the reviewer is computed based on the second batch size, an estimate of the number of relevant documents, and the second sub-sample size.
In certain embodiments, the prevalence of relevant documents is estimated using intermediate results of a plurality of iterations of a TAR process. In certain embodiments, the documents in set V are selected by random sampling. In certain embodiments, the documents in set W are the highest scoring documents from V. In certain embodiments, the stopping criteria is the exhaustion of the set U. In certain embodiments, the threshold is calculated using a targeted level of recall. In certain embodiments, the threshold is calculated by maximizing F1.
The inventive principles are illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, and in which:
Generally, S-CAL develops a classifier, which is used as part of a scoring function S, from a sample of N documents using a sub-sample size n of reviewed documents. This process incurs a labeling effort of I documents. In turn, the classifier (and scoring function) is used to classify a larger collection of documents.
More specifically, S-CAL uses an initially unlabeled training set, which may be drawn at random from a potentially infinite collection, and one or more relevant documents. The relevant documents may be located in the document collection to be classified using an ad-hoc search, or may be synthetic documents constructed for the purpose of approximating actual relevant documents in the document collection. Synthetic documents are discussed in Cormack VI. See e.g., Cormack VI, ¶¶ 184-186. Batches of documents of increasing (e.g., exponentially increasing) size are selected (e.g., using relevance feedback), and labels are requested for a sub-sample of each batch. Use of relevance feedback to select documents is also discussed in Cormack VI. See e.g., Cormack VI, ¶¶ 184-192. Accordingly, the labeled examples form a stratified statistical sample of the entire collection, which is used for training and statistical estimation.
In the following sections, S-CAL methods will be described. In addition, in certain embodiments, it will be demonstrated that the running time for such S-CAL methods is O(N log N), where N is the size of the unlabeled training set. Similarly, in certain embodiments, the labeling cost of S-CAL will be demonstrated to be O(log N). These associated running/labeling costs are an improvement over standard CAL techniques, which have a running time proportional to the size of the document collection (and may be unbounded) and a labeling cost of O(R). Furthermore, the effectiveness of the classifier and the accuracy of the estimates produced by S-CAL compare favorably to the best available TAR baselines.
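The following sketch illustrates why the labeling effort grows only logarithmically in N under an exponentially growing batch schedule. It assumes, for illustration only, a 10% batch growth rate and a fixed sub-sample cap n, and it simplifies the accounting by drawing each batch from the documents not yet batched; the specific parameter values are assumptions, not limitations of the embodiments described below.

```python
import math

def sketch_label_cost(N, n=20, growth_rate=0.10):
    """Count batches and labeled documents under an S-CAL-style schedule.

    Batches of size B are drawn from N documents, with B increased by
    ceil(B * growth_rate) after each batch; at most n documents from each
    batch are actually labeled by the reviewer.
    """
    B, remaining = 1, N
    batches = labels = 0
    while remaining > 0:
        batch = min(B, remaining)
        labels += min(batch, n)   # only a sub-sample of each batch is labeled
        remaining -= batch
        batches += 1
        B += math.ceil(B * growth_rate)
    return batches, labels

# Labeling effort grows roughly logarithmically as N increases tenfold.
for N in (10_000, 100_000, 1_000_000):
    print(N, sketch_label_cost(N))
```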
The systems and methods described and claimed herein are particularly useful for transforming an unclassified collection of information into a collection of classified information by generating and applying one or more classifiers to the unclassified information (e.g., documents). Beyond effecting this particular transformation, the systems and methods described and claimed herein are more efficient than other systems and methods for classifying information, while still maintaining overall classification accuracy. The systems and methods described herein reduce classification running times for similarly sized data sets when compared to other systems and methods. For example, the systems and methods described herein reduce the proportionality of the human review effort, which allows the size of the document collection to increase indefinitely and also allows for review of document collections with a much lower prevalence ρ while still producing accurate results. Thus, the efficiencies of the systems and methods described and claimed herein are not merely based on the use of computer technology to improve classification speed. Instead, these systems and methods represent a fundamental improvement in at least the field of information classification by virtue of their overall configuration.
In step 1040, one or more relevant documents are added to an initial training set. For example, the relevant documents identified in step 1020 may form an initial training set. In step 1060, a number of documents (N) are selected from the document collection to form a set U. In certain embodiments, the documents may be selected using a uniform random sample of the document collection. The documents, however, may be selected in any known manner (e.g., ad-hoc search, relevance feedback, uncertainty sampling). Techniques for selecting documents are discussed in Cormack VI. See e.g., Cormack VI, ¶¶ 65-70, 184-190. These selection techniques may be applied for each step calling for document selection.
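A minimal sketch of forming the set U by uniform random sampling (step 1060); the function and variable names are illustrative only:

```python
import random

def select_sample(collection_ids, N):
    """Form the set U as a uniform random sample of N documents (step 1060)."""
    return random.sample(collection_ids, min(N, len(collection_ids)))
```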
In step 1080, one or more documents from U are selected and added to the initial training set. These documents may be selected in any known manner. In certain embodiments, one hundred random documents are selected and added to the initial training set. In step 1100, the one or more documents selected in step 1080 are given a default classification. In certain embodiments, the default classification is “non-relevant.” In certain embodiments, the documents are assigned a default classification of “relevant.” In certain embodiments, the documents are assigned a mixture of default classifications (e.g., “relevant” and “non-relevant”). When assigning a mixture of default classifications, the assignments may be made in any suitable proportion.
In step 1120, one or more classifiers are generated using the initial training set and any assigned labels/classifications (e.g., relevant, non-relevant). The classifiers may be generated in any known manner. For example, Cormack VI describes systems and methods for generating a classifier using document information profiles (e.g., n-grams) and user coding decisions (e.g., relevant, non-relevant). See e.g., Cormack VI, ¶¶ 90-119. In certain embodiments, such a classifier may be generated using Sofia ML and/or SVMlight.
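As a stand-in for the learning tools named above (e.g., Sofia ML or SVMlight), the following illustrative sketch trains a logistic-regression classifier on TF-IDF features using scikit-learn; the choice of library and feature representation is an assumption made for illustration, not a limitation of the embodiments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_classifier(training_texts, training_labels):
    """Fit a classifier on the current training set (step 1120).

    training_labels holds 1 for "relevant" and 0 for "non-relevant",
    including any default classifications assigned in step 1100.
    """
    vectorizer = TfidfVectorizer(sublinear_tf=True)
    features = vectorizer.fit_transform(training_texts)
    model = LogisticRegression(max_iter=1000)
    model.fit(features, training_labels)
    return vectorizer, model
```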
In step 1140, the classifiers are applied to the documents in the document collection to generate document scores. The document score may be generated in any known manner. For example, Cormack VI describes systems and methods for generating a document score using a classifier and a document information profile. See e.g., Cormack VI, ¶¶ 90-123. In certain embodiments, document scores are generated only for the documents in U.
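Continuing the illustrative sketch above, the documents in U may be scored by applying the trained model to their feature representations; the scoring function shown is an assumption for illustration, and any scoring method described in Cormack VI may be used instead.

```python
def score_documents(vectorizer, model, texts):
    """Return a relevance score for each document (step 1140).

    Higher scores indicate documents the classifier considers more likely
    to be relevant.
    """
    return model.decision_function(vectorizer.transform(texts))
```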
In step 1150, one or more documents in the training set are removed from the training set. In certain embodiments, the documents added in step 1080 are removed from the training set. In certain preferred embodiments, all of the documents receiving default classifications are removed from the training set. In step 1160, a number of documents of a batch size B are selected from U to form a set V. In certain embodiments, B is initially 1. The B documents may be selected in any known manner. In certain embodiments, the documents are the B documents with the highest document scores computed in step 1140. In certain embodiments, the documents are selected by uncertainty sampling.
In step 1180, one or more documents are selected from the document set V to form a sub-sample set W. The documents may be selected in any known manner. In certain embodiments, the documents are selected by randomly sampling V. In certain embodiments, the number of documents selected is equal to b. In certain embodiments, b=B if {circumflex over (R)}≤1 or B≤n; otherwise, b=n, where n is a desired sub-sample size. In certain embodiments, the limit n on b need not be a constant. For example, when {circumflex over (R)}=0, b may be allowed to grow beyond n to handle cases where the classifier has not yet located any relevant documents. In certain embodiments, the value of b is a complex function of {circumflex over (R)}. In certain embodiments, the sub-sample size b (or n) is varied between iterations. In certain embodiments, b (or n) and/or N may be selected to balance computational efficiency, labeling efficiency, classifier effectiveness, and reliability. Generally, any growth rate for b (or n) will be efficient, so long as it is smaller than the growth rate of B (discussed below). If the growth rate of b is less than that of B, the review effort is O(log N).
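A sketch of one embodiment of steps 1160-1180, selecting the batch V by relevance feedback (highest scores) and the sub-sample W at random; the function and variable names are illustrative:

```python
import random

def select_batch_and_subsample(scores_by_doc, B, n, R_hat):
    """Select the batch V (step 1160) and sub-sample W (step 1180).

    V holds the B highest-scoring documents remaining in U.  W is a random
    sub-sample of V of size b, where b = B when R_hat <= 1 or B <= n, and
    b = n otherwise.
    """
    ranked = sorted(scores_by_doc, key=scores_by_doc.get, reverse=True)
    V = ranked[:B]
    b = B if (R_hat <= 1 or B <= n) else n
    W = random.sample(V, min(b, len(V)))
    return V, W
```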
In step 1200, the documents in W are assigned user coding decisions (e.g., relevant, non-relevant). In step 1220, any documents in W that were assigned user coding decisions may be added to the training set. In step 1240, the labeled documents in W may be removed from U.
In step 1260, a determination is made as to whether a stopping criteria is reached. For example, any of the stopping criteria for TAR processes discussed in Cormack I-III and VI may be used. In certain embodiments, a stopping criteria is reached when there are no more documents left in U (see, e.g., step 1240) or the entire document collection itself is assigned user coding decisions. In certain embodiments, if a stopping criteria is not reached, certain steps (e.g., steps 1060-1320 of the method 1000) are repeated until a stopping criteria is reached.
In step 1280, the number of relevant documents among those retrieved ({circumflex over (R)}) is estimated. In certain embodiments, {circumflex over (R)} is computed according to the following equation:

{circumflex over (R)}={circumflex over (R)}prev+(r·B)/b

where {circumflex over (R)}prev is the value of {circumflex over (R)} from the previous iteration, r is the number of relevant documents in W, B is the number of documents in the batch V, and b is the size of sub-sample set W (i.e., the number of documents selected from the batch V for review). In certain embodiments, {circumflex over (R)}prev is initially 0 (e.g., in the first iteration).
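A minimal sketch of this running estimate, consistent with the equation above: the r relevant documents found in the sub-sample of size b are scaled up to the full batch of B documents.

```python
def update_relevance_estimate(R_hat_prev, r, B, b):
    """Update the estimate of relevant documents retrieved (step 1280)."""
    return R_hat_prev + (r * B) / b
```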
In step 1300, selection sizes may be updated. For example, a new batch size B may be selected. In certain embodiments, B is increased by ┌B·growth_rate┐. In certain embodiments, growth_rate is set to 1/10. In other embodiments, the growth_rate is varied (e.g., between iterations of method 1000). In certain embodiments, growth_rate remains constant between successive iterations of method 1000. In certain embodiments, growth_rate=0. Similarly, selection sizes for N, b, and/or n may also be updated as discussed with respect to steps 1060 and 1180. For example, like B, the sizes N, b, and/or n may be increased according to an associated growth rate. For certain data sets, it may be preferable to make N as large as possible and to select n to balance classifier effectiveness with labeling effort. Generally, as n increases so does labeling effort.
If a stopping criteria has been reached, in step 1320, the one or more classifiers are finally updated using the user coding decisions assigned to the documents. In step 1340, the documents in the collection are scored using the computed classifier.
The prevalence of relevant documents in the collection may then be estimated. In certain embodiments, the prevalence estimate {circumflex over (ρ)} is computed according to the following equation:

{circumflex over (ρ)}=scale·{circumflex over (R)}/N

where scale is a scaling factor designed to adjust for an over/underestimation of prevalence inherent in the selected TAR process. In certain embodiments, scale equals 1.05. Scale, however, can be estimated over time (e.g., over multiple classification efforts) by assigning labels (e.g., receiving user coding decisions) to additional un-reviewed documents (e.g., those N documents in set U) to get a better sense of whether the TAR process itself generally results in over/underestimation of prevalence.
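A minimal sketch of this prevalence estimate, using the example scaling factor of 1.05 given above (the function name is illustrative):

```python
def estimate_prevalence(R_hat, N, scale=1.05):
    """Estimate the prevalence of relevant documents in the collection.

    R_hat is the final estimate of relevant documents among the N sampled
    documents; scale adjusts for systematic over/underestimation.
    """
    return scale * R_hat / N
```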
In step 2040, a target criterion is selected. For example, a selected target criterion may be used to minimize or maximize a particular measure (e.g., a minimum level of recall or a maximization of F1). Other possibilities for target criteria include other F measures (e.g., F2) or any of the measures of quality or effort discussed in Cormack I, II, and/or III.
In step 2060, depending on the target criteria, a threshold t may be computed. Generally, the threshold t is selected or computed such that when t is employed to discriminate relevant from non-relevant documents (e.g., by labeling them), the target criteria is satisfied. For example, t may be used to discriminate or label the documents according to the following relationship: a document d is classified as relevant if Sk(d)≥t, and as non-relevant otherwise, where Sk is the document scoring function for a classifier generated by an iteration k of a TAR process. In step 2080, the document collection may be classified using the scoring function Sk and the threshold t. For example, a document scoring function may be realized by applying the classifier to the document's information profile as discussed in Cormack VI. See e.g., Cormack VI, ¶¶ 120-123, 104.
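A minimal sketch of step 2080 under this relationship, labeling each document by comparing its score against the threshold t (names are illustrative):

```python
def classify_with_threshold(scores, t):
    """Classify documents using the scoring function output and threshold t.

    A document is "relevant" if its score is at least t, and "non-relevant"
    otherwise (step 2080).
    """
    return {doc: ("relevant" if s >= t else "non-relevant")
            for doc, s in scores.items()}
```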
To set a threshold t for maximizing F1 instead of targeting a minimal level of recall, m can instead be calculated at the recall-precision break-even point, which has been observed to represent the approximate maximum of F1. In this case, m can be computed as m={circumflex over (ρ)}N.
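As an illustrative sketch of this break-even computation, assuming t is taken to be the score of the m-th highest-scoring document so that approximately m={circumflex over (ρ)}·N documents are deemed relevant (this interpretation of m is an assumption made for illustration):

```python
import math

def break_even_threshold(scores, rho_hat, N):
    """Set t so that roughly m = rho_hat * N documents score at or above it.

    At the recall-precision break-even point, the number of documents deemed
    relevant equals the estimated number of relevant documents, which has
    been observed to approximate the maximum of F1.
    """
    m = max(1, math.ceil(rho_hat * N))
    ranked = sorted(scores.values(), reverse=True)
    return ranked[min(m, len(ranked)) - 1]
```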
Thus, t may be computed to be the maximum score of a document in U that was not selected in any batch. In addition to using the results calculated at the end of each TAR process iteration, it is also possible to compute t using results interpolated during an iteration. In step 4080, the documents may be classified (e.g., as relevant, non-relevant) using the computed threshold t.
In certain embodiments, to set a threshold t related to F1 instead of targeting a minimal level of recall, the earliest iteration j may be found such that N−|Uj|≥{circumflex over (ρ)}·N.
In addition, the systems and platforms described with respect to FIGS. 1-3 and 10 of Cormack VI, which is incorporated by reference herein in its entirety, may be used either independently, combined, or in conjunction with other components as part of a classification system configured to perform the methods discussed and described with respect to
One of ordinary skill in the art will appreciate that, aside from providing advantages in e-discovery review, the improved active learning systems, methods and media discussed throughout the disclosure herein may be applicable to a wide variety of fields that require data searching, retrieval, and screening. This is particularly true for applications which require searching for predetermined information or patterns within electronically stored information (regardless of format, language and size), especially as additional documents are added to the collection to be searched. Exemplary areas of potential applicability are law enforcement, security, and surveillance, as well as internet alert or spam filtering, regulatory reporting and fraud detection (whether within internal organizations or for regulatory agencies).
For example, in law enforcement, security, and for surveillance applications, the principles of the invention could be used to uncover new potential threats using already developed classifiers or to apply newly-classified information to discover similar patterns in prior evidence (e.g., crime or counter-terrorism prevention, and detection of suspicious activities). As another example, the principles of the invention could be used for healthcare screening using already developed classifiers or to apply newly-classified information to discover similar patterns in prior evidence (e.g., as predictors for conditions and/or outcomes).
While there have been shown and described various novel features of the invention as applied to particular embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the systems, methods and media described and illustrated may be made by those skilled in the art without departing from the spirit of the invention. For example, the various method steps described herein may be reordered, combined, or omitted where applicable. Those skilled in the art will recognize, based on the above disclosure and an understanding therefrom of the teachings of the invention, that the particular hardware and devices that are part of the invention, and the general functionality provided by and incorporated therein, may vary in different embodiments of the invention. Accordingly, the particular systems, methods and results shown in the figures are for illustrative purposes to facilitate a full and complete understanding and appreciation of the various aspects and functionality of particular embodiments of the invention as realized in system and method embodiments thereof. Any of the embodiments described herein may be hardware-based, software-based, or, preferably, a mixture of both hardware and software elements. Thus, while the description herein may describe certain embodiments, features or components as being implemented in software or hardware, it should be recognized that any embodiment, feature or component that is described in the present application may be implemented in hardware and/or software. Those skilled in the art will appreciate that the invention can be practiced in other than the described embodiments, which are presented for purposes of illustration and not limitation, and the present invention is limited only by the claims which follow.
The present application claims the benefit of U.S. Provisional Application No. 62/182,028, filed on Jun. 19, 2015, entitled “Systems and Methods for Conducting and Terminating a Technology-Assisted Review,” and U.S. Provisional Application No. 62/182,072, filed on Jun. 19, 2015, entitled “Systems and Methods for Conducting a Highly Autonomous Technology-Assisted Review.” The present application is also related to concurrently filed U.S. patent application Ser. No. 15/186,360 (published as U.S. Patent Publication No. 2016/0371364, and which issued as U.S. Pat. No. 10,445,374) entitled “Systems and Methods for Conducting and Terminating a Technology-Assisted Review” by Cormack and Grossman (hereinafter “Cormack I”). The present application is also related to concurrently filed U.S. patent application Ser. No. 15/186,366 (published as U.S. Patent Publication No. 2016/0371260, and which issued as U.S. Pat. No. 10,353,961) entitled “Systems and Methods for Conducting and Terminating a Technology-Assisted Review” by Cormack and Grossman (hereinafter “Cormack II”). The present application is also related to concurrently filed U.S. patent application Ser. No. 15/186,377 (published as U.S. Patent Publication No. 2016/0371369, and which issued as U.S. Pat. No. 10,242,001) entitled “Systems and Methods for Conducting and Terminating a Technology-Assisted Review” by Cormack and Grossman (hereinafter “Cormack III”). The present application is also related to concurrently filed U.S. patent application Ser. No. 15/186,382 (published as U.S. Patent Publication No. 2016/0371261, and which issued as U.S. Pat. No. 10,229,117) entitled “Systems and Methods for Conducting a Highly Autonomous Technology-Assisted Review Classification” by Cormack and Grossman (hereinafter “Cormack IV”). The present application is also related to U.S. application Ser. No. 13/840,029 (now, U.S. Pat. No. 8,620,842), filed on Mar. 15, 2013, entitled “Systems and methods for classifying electronic information using advanced active learning techniques” by Cormack and Grossman and published as U.S. Patent Publication No. 2014/0279716 (hereinafter “Cormack VI”). The contents of all of the above-identified applications and patent publications are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
4839853 | Deerwester et al. | Jun 1989 | A |
5675710 | Lewis | Oct 1997 | A |
5675819 | Schuetze | Oct 1997 | A |
6189002 | Roitblat | Feb 2001 | B1 |
6463430 | Brady et al. | Oct 2002 | B1 |
6678679 | Bradford | Jan 2004 | B1 |
6687696 | Hofman et al. | Feb 2004 | B2 |
6738760 | Krachman | May 2004 | B1 |
6751614 | Rao | Jun 2004 | B1 |
6778995 | Gallivan | Aug 2004 | B1 |
6847966 | Sommer et al. | Jan 2005 | B1 |
6888548 | Gallivan | May 2005 | B1 |
6954750 | Bradford | Oct 2005 | B2 |
6978274 | Gallivan et al. | Dec 2005 | B1 |
7113943 | Bradford et al. | Sep 2006 | B2 |
7197497 | Cossock | Mar 2007 | B2 |
7272594 | Lynch et al. | Sep 2007 | B1 |
7313556 | Gallivan et al. | Dec 2007 | B2 |
7328216 | Hofman et al. | Feb 2008 | B2 |
7376635 | Porcari et al. | May 2008 | B1 |
7440622 | Evans | Oct 2008 | B2 |
7461063 | Rios | Dec 2008 | B1 |
7483892 | Sommer et al. | Jan 2009 | B1 |
7502767 | Forman | Mar 2009 | B1 |
7529737 | Aphinyanaphongs et al. | May 2009 | B2 |
7529765 | Brants et al. | May 2009 | B2 |
7558778 | Carus et al. | Jul 2009 | B2 |
7574409 | Patinkin | Aug 2009 | B2 |
7574446 | Collier et al. | Aug 2009 | B2 |
7580910 | Price | Aug 2009 | B2 |
7610313 | Kawai et al. | Oct 2009 | B2 |
7657522 | Puzicha et al. | Feb 2010 | B1 |
7676463 | Thompson et al. | Mar 2010 | B2 |
7747631 | Puzicha et al. | Jun 2010 | B1 |
7809727 | Gallivan et al. | Oct 2010 | B2 |
7844566 | Wnek | Nov 2010 | B2 |
7853472 | Al-Abdulqader et al. | Dec 2010 | B2 |
7899871 | Kumar et al. | Mar 2011 | B1 |
7912698 | Statnikov et al. | Mar 2011 | B2 |
7933859 | Puzicha et al. | Apr 2011 | B1 |
8005858 | Lynch et al. | Aug 2011 | B1 |
8010534 | Roitblat et al. | Aug 2011 | B2 |
8015124 | Milo et al. | Sep 2011 | B2 |
8015188 | Gallivan et al. | Sep 2011 | B2 |
8024333 | Puzicha et al. | Sep 2011 | B1 |
8079752 | Rausch et al. | Dec 2011 | B2 |
8103678 | Puzicha et al. | Jan 2012 | B1 |
8126826 | Pollara et al. | Feb 2012 | B2 |
8165974 | Privault et al. | Apr 2012 | B2 |
8171393 | Rangan et al. | May 2012 | B2 |
8185523 | Lu et al. | May 2012 | B2 |
8189930 | Renders et al. | May 2012 | B2 |
8219383 | Statnikov et al. | Jul 2012 | B2 |
8275772 | Aphinyanaphongs et al. | Sep 2012 | B2 |
8296309 | Brassil et al. | Oct 2012 | B2 |
8326829 | Gupta | Dec 2012 | B2 |
8346685 | Ravid | Jan 2013 | B1 |
8392443 | Allon et al. | Mar 2013 | B1 |
8429199 | Wang et al. | Apr 2013 | B2 |
8527523 | Ravid | Sep 2013 | B1 |
8533194 | Ravid et al. | Sep 2013 | B1 |
8543520 | Diao | Sep 2013 | B2 |
8612446 | Knight | Dec 2013 | B2 |
8620842 | Cormack | Dec 2013 | B1 |
8706742 | Ravid et al. | Apr 2014 | B1 |
8713023 | Cormack et al. | Apr 2014 | B1 |
8751424 | Wojcik | Jun 2014 | B1 |
8838606 | Cormack et al. | Sep 2014 | B1 |
8996350 | Dub et al. | Mar 2015 | B1 |
9122681 | Cormack et al. | Sep 2015 | B2 |
9171072 | Scholtes et al. | Oct 2015 | B2 |
9223858 | Gummaregula et al. | Dec 2015 | B1 |
9235812 | Scholtes | Jan 2016 | B2 |
9269053 | Naslund et al. | Feb 2016 | B2 |
9595005 | Puzicha et al. | Mar 2017 | B1 |
9607272 | Yu | Mar 2017 | B1 |
9886500 | George et al. | Feb 2018 | B2 |
20020007283 | Anelli | Jan 2002 | A1 |
20030120653 | Brady et al. | Jun 2003 | A1 |
20030139901 | Forman | Jul 2003 | A1 |
20030140309 | Saito et al. | Jul 2003 | A1 |
20040064335 | Yang | Apr 2004 | A1 |
20050010555 | Gallivan | Jan 2005 | A1 |
20050027664 | Johnson et al. | Feb 2005 | A1 |
20050134935 | Schmidtler et al. | Jun 2005 | A1 |
20050171948 | Knight | Aug 2005 | A1 |
20050228783 | Shanahan | Oct 2005 | A1 |
20050289199 | Aphinyanaphongs et al. | Dec 2005 | A1 |
20060074908 | Selvaraj | Apr 2006 | A1 |
20060161423 | Scott et al. | Jul 2006 | A1 |
20060212142 | Madani et al. | Sep 2006 | A1 |
20060242098 | Wnek | Oct 2006 | A1 |
20060242190 | Wnek | Oct 2006 | A1 |
20060294101 | Wnek | Dec 2006 | A1 |
20070122347 | Statnikov et al. | May 2007 | A1 |
20070156615 | Davar et al. | Jul 2007 | A1 |
20070156665 | Wnek | Jul 2007 | A1 |
20070179940 | Robinson et al. | Aug 2007 | A1 |
20080052273 | Pickens | Feb 2008 | A1 |
20080059187 | Roitblat et al. | Mar 2008 | A1 |
20080086433 | Schmidtler et al. | Apr 2008 | A1 |
20080104060 | Abhyankar et al. | May 2008 | A1 |
20080141117 | King et al. | Jun 2008 | A1 |
20080154816 | Xiao | Jun 2008 | A1 |
20080288537 | Golovchinsky et al. | Nov 2008 | A1 |
20090006382 | Tunkelang et al. | Jan 2009 | A1 |
20090024585 | Back et al. | Jan 2009 | A1 |
20090077068 | Aphinyanaphongs et al. | Mar 2009 | A1 |
20090077570 | Oral et al. | Mar 2009 | A1 |
20090083200 | Pollara et al. | Mar 2009 | A1 |
20090119140 | Kuo et al. | May 2009 | A1 |
20090119343 | Jiao et al. | May 2009 | A1 |
20090157585 | Fu et al. | Jun 2009 | A1 |
20090164416 | Guha | Jun 2009 | A1 |
20090265609 | Rangan et al. | Oct 2009 | A1 |
20100030763 | Chi | Feb 2010 | A1 |
20100030798 | Kumar et al. | Feb 2010 | A1 |
20100049708 | Kawai et al. | Feb 2010 | A1 |
20100077301 | Bodnick et al. | Mar 2010 | A1 |
20100082627 | Lai et al. | Apr 2010 | A1 |
20100106716 | Matsuda | Apr 2010 | A1 |
20100150453 | Ravid et al. | Jun 2010 | A1 |
20100169244 | Zeljkovic et al. | Jul 2010 | A1 |
20100198864 | Ravid et al. | Aug 2010 | A1 |
20100217731 | Fu et al. | Aug 2010 | A1 |
20100250474 | Richards et al. | Sep 2010 | A1 |
20100253967 | Privault et al. | Oct 2010 | A1 |
20100257141 | Monet et al. | Oct 2010 | A1 |
20100287160 | Pendar | Nov 2010 | A1 |
20100293117 | Xu | Nov 2010 | A1 |
20100306206 | Brassil et al. | Dec 2010 | A1 |
20100312725 | Privault et al. | Dec 2010 | A1 |
20110004609 | Chitiveli | Jan 2011 | A1 |
20110029525 | Knight | Feb 2011 | A1 |
20110029526 | Knight et al. | Feb 2011 | A1 |
20110029527 | Knight et al. | Feb 2011 | A1 |
20110029536 | Knight et al. | Feb 2011 | A1 |
20110047156 | Knight et al. | Feb 2011 | A1 |
20110103682 | Chidlovskii et al. | May 2011 | A1 |
20110119209 | Kirshenbaum et al. | May 2011 | A1 |
20110125751 | Evans | May 2011 | A1 |
20110251989 | Kraaij et al. | Oct 2011 | A1 |
20110295856 | Roitblat et al. | Dec 2011 | A1 |
20110307437 | Aliferis et al. | Dec 2011 | A1 |
20110314026 | Pickens et al. | Dec 2011 | A1 |
20110320453 | Gallivan et al. | Dec 2011 | A1 |
20120047159 | Pickens et al. | Feb 2012 | A1 |
20120095943 | Yankov et al. | Apr 2012 | A1 |
20120102049 | Puzicha et al. | Apr 2012 | A1 |
20120158728 | Kumar et al. | Jun 2012 | A1 |
20120191708 | Barsony et al. | Jul 2012 | A1 |
20120278266 | Naslund et al. | Nov 2012 | A1 |
20120278321 | Traub | Nov 2012 | A1 |
20140108312 | Knight et al. | Apr 2014 | A1 |
20140280173 | Scholtes | Sep 2014 | A1 |
20150012448 | Bleiweiss et al. | Jan 2015 | A1 |
20150310068 | Pickens et al. | Oct 2015 | A1 |
20150324451 | Cormack et al. | Nov 2015 | A1 |
20160019282 | Lewis | Jan 2016 | A1 |
20160371260 | Cormack et al. | Dec 2016 | A1 |
20160371261 | Cormack et al. | Dec 2016 | A1 |
20160371364 | Cormack et al. | Dec 2016 | A1 |
20160371369 | Cormack et al. | Dec 2016 | A1 |
Number | Date | Country |
---|---|---|
103092931 | May 2013 | CN |
WO 2013010262 | Jan 2013 | WO |
Entry |
---|
Forman, “An extensive Empirical Study of Feature Selection Metrics for Text Classification,” Journal of Machine Learning Research 3 (2003) 1289-1305. |
Yang, et al. “Inflection points and singularities on C-curves,” Computer Aided Geometric Design 21 (2004) pp. 207-213. |
Almquist, “Mining for Evidence in Enterprise Corpora”, Doctoral Dissertation, University of Iowa, 2011, http://ir.uiowa.edu/etd/917. |
Analytics News Jul. 11, 2013, Topiary Discovery LLC blog, Critical Thought in Analytics and eDiscovery [online], [retrieved on Jul. 15, 2013]. Retrieved from the Internet: URL<postmodern-ediscovery.blogspot.com>. |
Bagdouri et al. “Towards Minimizing the Annotation Cost of Certified Text Classification,” CIKM '13, Oct. 27-Nov. 1, 2013. |
Ball, “Train, Don't Cull, Using Keywords”, [online] Aug. 5, 2012, [retrieved on Aug. 30, 2013]. Retrieved from the Internet: URL<ballinyourcout.wordpress.com/2012/08/05/train-don't-cull-using-keywords/. |
Büttcher et al., “Information Retrieval Implementing and Evaluating Search Engines”, The MIT Press, Cambridge, MA/London, England, Apr. 1, 2010. |
Cormack et al., “Efficient and Effective Spam Filtering and Re-ranking for Large Web Datasets”, Apr. 29, 2010. |
Cormack et al., “Machine Learning for Information Retrieval: TREC 2009 Web, Relevance Feedback and Legal Tracks”, Cheriton School of Computer Science, University of Waterloo. |
Cormack et al., “Power and Bias of Subset Pooling Strategies”, Published Jul. 23-27, 2007, SIGIR 2007 Proceedings, pp. 837-838. |
Cormack et al., “Reciprocal Rank Fusion outperforms Condorcet and Individual Rank Learning Methods”, SIGIR 2009 Proceedings, pp. 758-759. |
Cormack et al., “Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review,” Apr. 26, 2015. |
Cormack et al., “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery,” Jan. 27, 2014. |
Cormack et al., “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery,” SIGIR 14, Jul. 6-11, 2014. |
Cormack et al., “Multi-Faceted Recall of Continuous Active Learning for Technology-Assisted Review,” Sep. 13, 2015. |
Cormack et al., “Scalability of Continuous Active Learning for Reliable High-Recall Text Classification,” Feb. 12, 2016. |
Cormack et al., “Engineering Quality and Reliability in Technology-Assisted Review,” Jan. 21, 2016. |
Cormack et al., “Waterloo (Cormack) Participation in the TREC 2015 Total Recall Track,” Jan. 24, 2016. |
Godbole et al., “Document classification through interactive supervision of document and term labels”, PKDD 2004, pp. 12. |
Grossman et al., “Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review”, XVII Rich. J.L. & Tech. 11 (2011), http://jolt.richmond.edu/v117I3/article11.pdf. |
Lad et al., “Learning to Rank Relevant & Novel Documents Through User Feedback”, CIMM 2010, pp. 10. |
Lu et al., “Exploiting Multiple Classifier Types with Active Learning”, GECCO, 2009, pp. 1905-1908. |
Pace et al., “Where the Money Goes: Understanding Litigant Expenditures for Producing Electronic Discovery”, RAND Institute for Civil Justice, 2012. |
Pickens, “Predictive Ranking: Technology Assisted Review Designed for the Real World”, Catalyst Repository Systems, Feb. 1, 2013. |
Safedi et al., “active learning with multiple classifiers for multimedia indexing”, Multimed. Tools Appl., 2012, 60, pp. 403-417. |
Shafiei et al., “Document Representation and Dimension Reduction for Text Clustering”, Data Engineering Workshop, 2007, pp. 10. |
Seggebruch, “Electronic Discovery Utilizing Predictive Coding”, Recommind, Inc. [online], [retrieved on Jun. 30, 2013]. Retrieved from the Internet: URL<http://www.toxictortlitigationblog.com/Disco.pdf>. |
Wallace et al., “Active Learning for Biomedical Citation Screening,” KDD' 10 , Jul. 28-Aug. 1, 2010. |
Webber et al., “Sequential Testing in Classifier Evaluation Yields Biased Estimates of Effectiveness,” SIGIR '13, Jul. 28-Aug. 1, 2013. |
Number | Date | Country | |
---|---|---|---|
20160371262 A1 | Dec 2016 | US |
Number | Date | Country | |
---|---|---|---|
62182028 | Jun 2015 | US | |
62182072 | Jun 2015 | US |