Computer-implemented system and method for generating document training sets

Information

  • Patent Grant
  • 10332007
  • Patent Number
    10,332,007
  • Date Filed
    Monday, November 7, 2016
  • Date Issued
    Tuesday, June 25, 2019
  • Inventors
  • Original Assignees
    • Nuix North America Inc. (Herndon, VA, US)
  • Examiners
    • Fleurantin; Jean B
  • Agents
    • Brown Rudnick LLP
    • Leonardo; Mark S.
  • CPC
  • Field of Search
    • US
    • 707 708000
    • 707 737000
    • CPC
    • G06F17/30011
    • G06F17/30017
    • G06F17/30598
  • International Classifications
    • G06N5/02
    • G06F16/28
    • G06F16/35
    • G06F16/40
    • G06F16/93
    • G06F17/30
Abstract
A computer-implemented system and method for generating document training sets is provided. Unclassified documents are provided to two or more classifiers. A classification code assigned to each unclassified document is received. A determination is made as to whether a disagreement exists between classification codes assigned to a common unclassified document via different classifiers. Each common unclassified document with a disagreement in classification codes is provided for further review. Results of the further review include one of a new classification code and confirmation of one of the assigned classification codes. The unclassified documents for which a disagreement exists are grouped as a training set.
Description
FIELD

The invention relates in general to information retrieval and, specifically, to a computer-implemented system and method for generating document training sets.


BACKGROUND

Document review is an activity frequently undertaken in the legal field during the discovery phase of litigation. Typically, document classification requires reviewers to assess the relevance of documents to a particular topic as an initial step. Document reviews can be conducted manually by human reviewers, automatically by a machine, or by a combination of human reviewers and a machine.


Generally, trained reviewers analyze documents and provide a recommendation for classifying each document with regard to the particular legal issue being litigated. A set of exemplar documents is provided to the reviewer as a guide for classifying the documents. The exemplar documents are each previously classified with a particular code relevant to the legal issue, such as “responsive,” “non-responsive,” and “privileged.” Based on the exemplar documents, the human reviewers or machine can identify documents that are similar to one or more of the exemplar documents and assign the code of the exemplar document to the uncoded documents.


The set of exemplar documents selected for document review can dictate results of the review. A cohesive representative exemplar set can produce accurately coded documents, while effects of inaccurately coded documents can be detrimental to a legal proceeding. For example, a “privileged” document contains information that is protected by a privilege, meaning that the document should not be disclosed to an opposing party. Disclosing a “privileged” document can result in an unintentional waiver of privilege to the subject matter.


The prior art focuses on document classification and generally assumes that exemplar documents are already defined and exist as a reference set for use in classifying documents. Such classification can benefit from better generated reference sets that increase the accuracy of the classified documents.


Thus, there remains a need for a system and method for generating a set of exemplar documents that are cohesive and which can serve as an accurate and efficient example for use in classifying documents.


SUMMARY

A system and method for generating reference sets for use during document review is provided. A collection of unclassified documents is obtained. Selection criteria are applied to the document collection and those unclassified documents that satisfy the selection criteria are selected as reference set candidates. A classification code is assigned to each reference set candidate. A reference set is formed from the classified reference set candidates. The reference set is quality controlled and shared between one or more users.


A further embodiment provides a computer-implemented system and method for generating document training sets. Unclassified documents are provided to two or more classifiers. A classification code assigned to each unclassified document is received. A determination is made as to whether a disagreement exists between classification codes assigned to a common unclassified document via different classifiers. Each common unclassified document with a disagreement in classification codes is provided for further review. Results of the further review include one of a new classification code and confirmation of one of the assigned classification codes. The unclassified documents for which a disagreement exists are grouped as a training set.


Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein are described embodiments by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing a system for generating a reference set for use during document review, in accordance with one embodiment.



FIG. 2 is a flow diagram showing a method for generating a reference set for use during document review, in accordance with one embodiment.



FIG. 3 is a data flow diagram showing examples of the selection criteria of FIG. 2.



FIG. 4 is a flow diagram showing, by way of example, a method for generating a reference set via hierarchical clustering.



FIG. 5 is a flow diagram showing, by way of example, a method for generating a reference set via iterative clustering.



FIG. 6 is a flow diagram showing, by way of example, a method for generating a reference set via document seeding.



FIG. 7 is a flow diagram showing, by way of example, a method for generating a reference set via random sampling.



FIG. 8 is a flow diagram showing, by way of example, a method for generating a reference set via user assisted means.



FIG. 9 is a flow diagram showing, by way of example, a method for generating a reference set via active learning.



FIG. 10 is a flow diagram showing, by way of example, a method for generating a training set.





DETAILED DESCRIPTION

Reference documents are each associated with a classification code and are selected as exemplar documents or a “reference set” to assist human reviewers or a machine to identify and code unclassified documents. The quality of a reference set can dictate the results of a document review project and an underlying legal proceeding or other activity. Use of a noncohesive or “bad” reference set can provide inaccurately coded documents and could negatively affect a pending legal issue during, for instance, litigation. Generally, reference sets should be cohesive for a particular issue or topic and provide accurate guidance to classifying documents.


Cohesive reference set generation requires a support environment to review, analyze, and select appropriate documents for inclusion in the reference set. FIG. 1 is a block diagram showing a system for generating a reference set for use in classifying documents, in accordance with one embodiment. By way of illustration, the system 10 operates in a distributed computing environment, including “cloud environments,” which include a plurality of systems and sources. A backend server 11 is coupled to a storage device 13, a database 30 for maintaining information about the documents, and a lookup database 38 for storing many-to-many mappings 39 between documents and document features, such as concepts. The storage device 13 stores documents 14a and reference sets 14b. The documents 14a can include uncoded or “unclassified” documents and coded or “classified” documents, in the form of structured or unstructured data. Hereinafter, the terms “classified” and “coded” are used interchangeably with the same intended meaning, unless otherwise indicated.


The uncoded and coded documents can be related to one or more topics or legal issues. Uncoded documents are analyzed and assigned a classification code during a document review, while coded documents have been previously reviewed and associated with a classification code. The storage device 13 also stores reference documents 14b, which together form a reference set of trusted and known results for use in guiding document classification. A set of reference documents can be hand-selected or automatically selected, as discussed infra.


Reference sets can be generated for one or more topics or legal issues, as well as for any other data to be organized and classified. For instance, the topic can include data regarding a person, place, or object. In one embodiment, the reference set can be generated for a legal proceeding based on a filed complaint or other court or administrative filing or submission. Documents in the reference set 14b are each associated with an assigned classification code and can highlight important information for the current topic or legal issue. A reference set can include reference documents with different classification codes or the same classification code. Core reference documents most clearly exhibit the particular topic or legal matter, whereas boundary condition reference documents include information similar to the core reference documents, but which are different enough to require assignment of a different classification code.


Once generated, the reference set can be used as a guide for classifying uncoded documents, such as described in commonly-assigned U.S. Pat. No. 8,713,018, issued on Apr. 29, 2014; U.S. Pat. No. 8,515,957, issued on Aug. 20, 2013; U.S. Pat. No. 8,572,084, issued on Oct. 29, 2013; and U.S. Pat. No. 8,632,223, issued on Jan. 21, 2014, the disclosures of which are incorporated by reference.


In a further embodiment, a reference set can also be generated based on features associated with the document. The feature reference set can be used to identify uncoded documents associated with the reference set features and provide classification suggestions, such as described in commonly-assigned U.S. Pat. No. 8,700,627, issued on Apr. 15, 2014; U.S. Pat. No. 9,477,751, issued on Oct. 25, 2016; U.S. Pat. No. 8,645,378, issued on Feb. 4, 2014; and U.S. Pat. No. 8,515,958, issued on Aug. 20, 2013, the disclosures of which are incorporated by reference.


The backend server 11 is also coupled to an intranetwork 21 and executes a workbench suite 31 for providing a user interface framework for automated document management, processing, analysis, and classification. In a further embodiment, the backend server 11 can be accessed via an internetwork 22. The workbench software suite 31 includes a document mapper 32 that includes a clustering engine 33, selector 34, classifier 35, and display generator 36. Other workbench suite modules are possible. In a further embodiment, the clustering engine, selector, classifier, and display generator can be provided independently of the document mapper.


The clustering engine 33 performs efficient document scoring and clustering of uncoded documents and reference documents, such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued on Oct. 27, 2009, the disclosure of which is incorporated by reference. The uncoded documents 14a can be grouped into clusters and one or more documents can be selected from at least one cluster to form reference set candidates, as further discussed below in detail with reference to FIGS. 4 and 5. The clusters can be organized along vectors, known as spines, based on a similarity of the clusters. The selector 34 applies predetermined criteria to a set of documents to identify candidates for inclusion in a reference set, as discussed infra. The classifier 35 provides a machine-generated classification code suggestion and confidence level for coding of selected uncoded documents.


The display generator 36 arranges the clusters and spines in thematic neighborhood relationships in a two-dimensional visual display space. Once generated, the visual display space is transmitted to a work client 12 by the backend server 11 via the document mapper 32 for presenting to a human reviewer. The reviewer can include an individual person who is assigned to review and classify one or more uncoded documents by designating a code. Other types of reviewers are possible, including machine-implemented reviewers.


The document mapper 32 operates on uncoded documents 14a, which can be retrieved from the storage 13, as well as from a plurality of local and remote sources. As well, the local and remote sources can also store the reference documents 14b. The local sources include documents 17 maintained in a storage device 16 coupled to a local server 15 and documents 20 maintained in a storage device 19 coupled to a local client 18. The local server 15 and local client 18 are interconnected to the backend server 11 and the work client 12 over an intranetwork 21. In addition, the document mapper 32 can identify and retrieve documents from remote sources over an internetwork 22, including the Internet, through a gateway 23 interfaced to the intranetwork 21. The remote sources include documents 26 maintained in a storage device 25 coupled to a remote server 24 and documents 29 maintained in a storage device 28 coupled to a remote client 27. Other document sources, either local or remote, are possible.


The individual documents 14a, 14b, 17, 20, 26, 29 include all forms and types of structured and unstructured data, including electronic message stores, word processing documents, electronic mail (email) folders, Web pages, and graphical or multimedia data. Notwithstanding, the documents could be in the form of structurally organized data, such as stored in a spreadsheet or database.


In one embodiment, the individual documents 14a, 14b, 17, 20, 26, 29 include electronic message folders storing email and attachments, such as maintained by the Outlook and Windows Live Mail products, licensed by Microsoft Corporation, Redmond, Wash. The database can be an SQL-based relational database, such as the Oracle database management system, Release 11, licensed by Oracle Corporation, Redwood Shores, Calif. Further, the individual documents 17, 20, 26, 29 can be stored in a “cloud,” such as in Windows Live Hotmail, licensed by Microsoft Corporation, Redmond, Wash. Additionally, the individual documents 17, 20, 26, 29 include uncoded documents and reference documents.


The system 10 includes individual computer systems, such as the backend server 11, work client 12, server 15, client 18, remote server 24 and remote client 27. The individual computer systems are general purpose, programmed digital computing devices that have a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD-ROM drive, network interfaces, and peripheral devices, including user interfacing means, such as a keyboard and display. Program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.


Reference set candidates selected for inclusion in a reference set are identified using selection criteria, which can reduce the number of documents for selection. FIG. 2 is a flow diagram showing a method for generating a reference set for use in document review, in accordance with one embodiment. A collection of documents is obtained (block 51). The collection of documents can include uncoded documents selected from a current topic or legal matter, previously coded documents selected from a related topic or legal matter, or pseudo documents. Pseudo documents are created by converting into document form knowledge held by a person familiar with the issue or topic. For example, a reviewer who participated in a verbal conversation with a litigant or other party during which specifics of a lawsuit were discussed could create a pseudo document based on the verbal conversation. A pseudo document can exist electronically or in hardcopy form. In one embodiment, the pseudo document is created specifically for use during the document review. Other types of document collections are possible.


Filter criteria are optionally applied to the document collection to identify a subset of documents (block 52) for generating the reference set. The filter criteria can be based on metadata associated with the documents, including date, file, folder, custodian, or content. Other filter criteria are possible. In one example, a filter criterion could be defined as “all documents created after 1997,” and thus all documents that satisfy the criterion are selected as a subset of the document collection.
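
The filtering step can be illustrated with a minimal sketch. The document representation (a dictionary carrying a "created" date) and the field name are assumptions made for illustration, not details taken from the patent.

```python
from datetime import date

def apply_filter(documents, after_year=1997):
    """Minimal sketch of metadata filtering (block 52): keep only documents
    created after the given year. The dict-based document representation is
    an illustrative assumption."""
    return [d for d in documents if d["created"].year > after_year]

# Example: only the document created in 2001 satisfies the filter criterion.
docs = [{"id": 1, "created": date(1996, 5, 1)}, {"id": 2, "created": date(2001, 3, 9)}]
subset = apply_filter(docs)   # -> [{"id": 2, ...}]
```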


The filter criteria can be used to reduce the number of documents in the collection. Subsequently, selection criteria are applied to the document subset (block 53) to identify those documents that satisfy the selection criteria as candidates (block 54) for inclusion in the reference set. The selection criteria can include clustering, feature identification, assignments or random selection, and are discussed in detail below with reference to FIG. 3. A candidate decision is applied (block 55) to the reference set candidates to identify the reference candidates for potential inclusion in the reference set (block 57). During the candidate decision, the reference set candidates are analyzed and a classification code is assigned to each reference set candidate. A human reviewer or machine can assign the classification codes to the reference set candidates based on features of each candidate. The features include pieces of information that describe the document candidate, such as entities, metadata, and summaries, as well as other information. Coding instructions guide the reviewer or machine to assign the correct classification code using the features of the reference set candidates. The coding instructions can be provided by a reviewer, a supervisor, a law firm, a party to a legal proceeding, or a machine. Other sources of the coding instructions are possible.


Also, a determination is made as to whether each reference set candidate is a suitable candidate for inclusion in the reference set. Once the reference set candidates are coded, each candidate is analyzed to ensure that candidates selected for the reference set cover or “span” the largest area of feature space provided by the document collection. In one embodiment, the candidates that are most dissimilar from all the other candidates are selected as the reference set. A first reference set candidate is selected and placed in a list. The remaining reference set candidates are compared to the first reference set candidate in the list and the candidate most dissimilar to all the listed candidates is also added to the list. The process continues until all the dissimilar candidates have been identified or other stop criteria have been satisfied. The stop criteria can include reaching a predetermined number of dissimilar reference set candidates, reviewing all the candidates, or determining that a measure of the most dissimilar document fails to satisfy a dissimilarity threshold. Identifying dissimilar documents is discussed in the paper, Sean M. McNee, “Meeting User Information Needs in Recommender Systems,” Ph.D. Dissertation, University of Minnesota-Twin Cities, Jun. 2006, which is hereby incorporated by reference. Other stop criteria are possible.
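
One way to realize this span-the-feature-space selection is a greedy farthest-first pass over vectorized candidates; a minimal sketch follows. The document vectors, the Euclidean distance measure, and the dissimilarity threshold are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def select_dissimilar(candidate_vectors, max_candidates, min_distance=0.0):
    """Greedy farthest-first selection: start with one candidate, then repeatedly
    add the candidate most dissimilar to everything already listed, stopping when
    a stop criterion is met (enough candidates, all reviewed, or the most
    dissimilar remaining document is not dissimilar enough)."""
    selected = [0]                                    # first reference set candidate
    while len(selected) < min(max_candidates, len(candidate_vectors)):
        # distance from every candidate to its nearest already-listed candidate
        diffs = candidate_vectors[:, None, :] - candidate_vectors[selected][None, :, :]
        dists = np.min(np.linalg.norm(diffs, axis=2), axis=1)
        dists[selected] = -1.0                        # never re-pick a listed candidate
        best = int(np.argmax(dists))
        if dists[best] <= min_distance:               # dissimilarity threshold not met
            break
        selected.append(best)
    return selected
```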


However, refinement (block 56) of the reference set candidates can optionally occur prior to selection of the reference set. The refinement assists in narrowing the number of reference set candidates used to generate a reference set of a particular size or other criteria. If refinement is to occur, further selection criteria are applied (block 53) to the reference set candidates and a further iteration of the process steps occurs. Each iteration can involve different selection criteria. For example, clustering criteria can be applied during a first pass and random sampling can be applied during a second pass to identify reference set candidates for inclusion in the reference set.


In a further embodiment, features can be used to identify documents for inclusion in a reference set. A collection of documents is obtained and features are identified from the document collection. The features can be optionally filtered to reduce the feature set and subsequently, selection criteria can be applied to the features. The features that satisfy the selection criteria are selected as reference set candidate features. A candidate decision, including assigning classification codes to each of the reference set candidate features, is applied. Refinement of the classified reference set candidate features is optionally applied to broaden or narrow the reference set candidate features for inclusion in the reference set. The refinement can include applying further selection criteria to reference set documents during a second iteration. Alternatively, the selection criteria can first be applied to documents and, in a further iteration, applied to features from the documents. Subsequently, documents associated with the reference set candidate features are grouped as the reference set.


The selection criteria can be applied to a document set to identify reference set candidates for potential inclusion in the reference set. FIG. 3 is a data flow diagram 60 showing examples of the selection criteria of FIG. 2. The selection criteria 61 include clustering 62, features 63, assignments 64, document seeding 65, and random sampling 66. Other selection criteria are possible. Clustering 62 includes grouping documents by similarity and subsequently selecting documents from one or more of the clusters. A number of documents to be selected can be predetermined by a reviewer or a machine, as further described below with reference to FIGS. 4 and 5. Features 63 include metadata about the documents, including nouns, noun phrases, length of document, “To” and “From” fields, date, complexity of sentence structure, and concepts. Assignments 64 include a subset of documents selected from a larger collection of uncoded documents to be reviewed. The assignments can be generated based on assignment criteria, such as content, size, or number of reviewers. Other features, assignments, and assignment criteria are possible.


Document seeding 65 includes selecting one or more seed documents and identifying documents similar to the seed documents from a larger collection of documents as reference set candidates. Document seeding is further discussed below in detail with reference to FIG. 6. Random sampling 66 includes randomly selecting documents from a larger collection of documents as reference set candidates. Random sampling is further discussed below in detail with reference to FIG. 7.


The process for generating a reference set can be iterative and each pass through the process can use different selection criteria, as described above with reference to FIG. 2. Alternatively, a single pass through the process using only one selection criterion to generate a cohesive reference set is also possible. Use of the clustering selection criteria can identify and group documents by similarity. FIG. 4 is a flow diagram showing, by way of example, a method for generating a reference set via hierarchical clustering. A collection of documents is obtained (block 71) and filter criteria can optionally be applied to reduce a number of the documents (block 72). The documents are then clustered (block 73) to generate a hierarchical tree via hierarchical clustering. Hierarchical clustering, including agglomerative or divisive clustering, can be used to generate the clusters of documents, which can be used to identify a set of reference documents having a particular predetermined size. During agglomerative clustering, each document is assigned to a cluster and similar clusters are combined to generate the hierarchical tree. Meanwhile, during divisive clustering, all the documents are grouped into a single cluster and subsequently divided to generate the hierarchical tree.


The clusters of the hierarchical tree can be traversed (block 74) to identify n-documents as reference set candidates (block 75). The n-documents can be predetermined by a user or a machine. In one embodiment, the n-documents are influential documents, meaning that a decision made for the n-document, such as the assignment of a classification code, can be propagated to other similar documents. Using influential documents can improve the speed and classification consistency of a document review.


To obtain the n-documents, n-clusters can be identified during the traversal of the hierarchical tree and one document from each of the identified clusters can be selected. The single document selected from each cluster can be the document closest to the cluster center or another document. Other values are possible, such as identifying n/2 clusters. For example, n/2 clusters are identified during the traversal and two documents are selected from each identified cluster. In one embodiment, the selected documents are the document closest to the cluster center and the document furthest from the cluster center. However, other documents can be selected, such as randomly picked documents.
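
A minimal sketch of this cluster-and-traverse selection follows, using scikit-learn's agglomerative clustering as one possible hierarchical clusterer. The dense document vectors, the choice of one document per cluster, and the use of the cluster mean as the center are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def candidates_by_hierarchical_clustering(doc_vectors, n):
    """Cluster the documents agglomeratively into n clusters (block 73), then take
    the document closest to each cluster center as a reference set candidate
    (blocks 74-75). doc_vectors is assumed to be a dense (num_docs x dims) array."""
    labels = AgglomerativeClustering(n_clusters=n).fit_predict(doc_vectors)
    candidates = []
    for c in range(n):
        members = np.where(labels == c)[0]
        center = doc_vectors[members].mean(axis=0)
        closest = members[np.argmin(np.linalg.norm(doc_vectors[members] - center, axis=1))]
        candidates.append(int(closest))
    return candidates
```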


Once identified, the reference set candidates are analyzed and a candidate decision is made (block 76). During the analysis, a classification code is assigned to each reference set candidate and a determination of whether that reference set candidate is appropriate for the reference set is made. If one or more of the reference set candidates are not sufficient for the reference set, refinement of the reference set candidates may optionally occur (block 77) by reclustering the reference set candidates (block 73). Refinement can include changing input parameters of the clustering process and then reclustering the documents, changing the document collection by filtering different documents, or selecting a different subset of n-documents from the clusters. Other types of and processes for refinement are possible. The refinement assists in narrowing the number of reference set candidates to generate a reference set of a particular size during which reference set candidates can be added or removed. One or more of the reference set candidates are grouped to form the reference set (block 78). The size of the reference set can be predetermined by a human reviewer or a machine.


In a further embodiment, features can be used to identify documents for inclusion in a reference set. A collection of documents is obtained and features from the documents are identified. Filter criteria can optionally be applied to the features to reduce the number of potential documents for inclusion in the reference set. The features are then grouped into clusters, which are traversed to identify n-features as reference set candidate features. A candidate decision, including the assignment of classification codes, is applied to each of the reference set candidate features and refinement of the features is optional. Documents associated with the classified reference set candidate features are then grouped as the reference set.


Iterative clustering is a specific type of hierarchical clustering that provides a reference set of documents having an approximate size. FIG. 5 is a flow diagram showing, by way of example, a method for generating a reference set via iterative clustering. A collection of documents is obtained (block 81). The documents can be optionally divided into assignments (block 82), or groups of documents, based on document characteristics, including metadata about the document. In general, existing knowledge about the document is used to generate the assignments. Other processes for generating the assignments are possible. In one embodiment, attachments to the document can be included in the same assignment as the document, and in an alternative embodiment, the attachments are identified and set aside for review or assigned to a separate assignment. The documents are then grouped into clusters (block 83). One or more documents can be selected from the clusters as reference set candidates (block 84). In one embodiment, two documents are selected, including the document closest to the cluster center and the document closest to the edge of the cluster. The document closest to the center provides information regarding the center of the cluster, while the outer document provides information regarding the edge of the cluster. Other numbers and types of documents can be selected.


The selected documents are then analyzed to determine whether a sufficient number of documents have been identified as reference set candidates (block 85). The number of documents can be based on a predefined value, threshold, or bounded range selected by a reviewer or a machine. If a sufficient number of reference set candidates are not identified, further clustering (block 83) is performed on the reference set candidates until a sufficient number of reference set candidates exists. However, if a sufficient number of reference set candidates are identified, the candidates are analyzed and a candidate decision is made (block 86). For example, a threshold can define a desired number of documents for inclusion in the reference set. If the number of reference set candidates is equal to or below the threshold, those candidates are further analyzed, whereas if the number of reference set candidates is above the threshold, further clustering is performed until the number of candidates is sufficient. In a further example, a bounded range, having an upper limitation and a lower limitation, is determined and if the number of reference set candidates falls within the bounded range, those reference set candidates are further analyzed.
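
A compact sketch of the iterative loop (cluster, keep the center-most and edge-most document of each cluster, and re-cluster until the count is acceptable) is shown below. The fixed cluster count derived from the target size and the dense vector representation are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def iterative_candidates(doc_vectors, target):
    """Cluster the documents, keep the document closest to the center and the
    document closest to the edge of each cluster (block 84), and re-cluster the
    survivors until the candidate count is at or below the target (block 85)."""
    indices = np.arange(len(doc_vectors))
    n_clusters = max(1, target // 2)                  # two documents kept per cluster
    while len(indices) > target:
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(doc_vectors[indices])
        keep = set()
        for c in range(n_clusters):
            members = indices[labels == c]
            dists = np.linalg.norm(doc_vectors[members] - doc_vectors[members].mean(axis=0), axis=1)
            keep.add(int(members[np.argmin(dists)]))  # center-most document
            keep.add(int(members[np.argmax(dists)]))  # edge-most document
        indices = np.array(sorted(keep))
    return indices
```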


The candidate decision includes coding of the documents and a determination as to whether each reference set candidate is a good candidate for inclusion in the reference set. The coded reference set candidates form the reference set (block 87). Once formed, the reference set can be used as a group of exemplar documents to classify uncoded documents.


In a further embodiment, features can be used to identify documents for inclusion in the reference set. A collection of documents is obtained and features are identified within the documents. The features can optionally be divided into one or more assignments. The features are then grouped into clusters and at least one feature is selected from one or more of the clusters. The selected features are compared with a predetermined number of documents for inclusion in the reference set. If the predetermined number is not satisfied, further clustering is performed on the features to increase or reduce the number of features. However, if satisfied, the selected features are assigned classification codes. Refinement of the classified features is optional. Subsequently, documents associated with the classified features are identified and grouped as the reference set.


The selection criteria used to identify reference set candidates can include document seeding, which also groups similar documents. FIG. 6 is a flow diagram showing, by way of example, a method for generating a reference set via document seeding. A collection of documents is obtained (block 91). The collection of documents includes unmarked documents related to a topic, legal matter, or other theme or purpose. The documents can be optionally grouped into individual assignments (block 92). One or more seed documents are identified (block 93). The seed documents are considered to be important to the topic or legal matter and can include documents identified from the current matter, documents identified from a previous matter, or pseudo documents.


The seed documents from the current case can include the complaint filed in a legal proceeding for which documents are to be classified or other documents, as explained supra. Alternatively, the seed documents can be quickly identified using a keyword search or knowledge obtained from a reviewer. In a further embodiment, the seed documents can be identified as reference set candidates identified in a first pass through the process described above with reference to FIG. 2. The seed documents from a previous related matter can include one or more of the reference documents from the reference set generated for the previous matter. The pseudo documents use knowledge from a reviewer or other user, such as a party to a lawsuit, as described above with reference to FIG. 2.


The seed documents are then applied to the document collection or at least one of the assignments and documents similar to the seed documents are identified as reference set candidates (block 94). In a further embodiment, dissimilar documents can be identified as reference set candidates. In yet a further embodiment, the similar and dissimilar documents can be combined to form the seed documents. The similar and dissimilar documents can be identified using criteria, including document injection, linear search, and index look up. However, other reference set selection criteria are possible.
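
A minimal sketch of the similarity step follows, using TF-IDF vectors and cosine similarity as one possible similarity measure. The vectorizer, the similarity cutoff, and the in-memory lists of document texts are assumptions made for illustration, not details from the patent.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def seed_candidates(collection_texts, seed_texts, threshold=0.3):
    """Identify documents similar to at least one seed document as reference set
    candidates (block 94). collection_texts and seed_texts are lists of strings;
    the TF-IDF representation and the 0.3 cutoff are illustrative choices."""
    matrix = TfidfVectorizer().fit_transform(collection_texts + seed_texts)
    docs = matrix[:len(collection_texts)]
    seeds = matrix[len(collection_texts):]
    best_similarity = cosine_similarity(docs, seeds).max(axis=1)
    return [i for i, s in enumerate(best_similarity) if s >= threshold]
```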


The number of reference set candidates is analyzed to determine whether there is a sufficient number of candidates (block 95). The number of candidates can be predetermined and selected by a reviewer or machine. If a sufficient number of reference set candidates exist, the reference set candidates form the reference set (block 97). However, if the number of reference set candidates is not suitable, such as being too large, refinement of the candidates is performed to remove one or more reference candidates from the set (block 96). Large reference sets can affect the performance and outcome of document classification. The refinement assists in narrowing the number of reference set candidates to generate a reference set of a particular size. If refinement is to occur, further selection criteria are applied to the reference set candidates. For example, if too many reference set candidates are identified, the candidate set can be narrowed to remove common or closely related documents, while leaving the most important or representative document in the candidate set. The common or closely related documents can be identified as described in commonly-assigned U.S. Pat. No. 6,745,197, entitled “System and Method for Efficiently Processing Messages Stored in Multiple Message Stores,” issued on Jun. 1, 2004, and U.S. Pat. No. 6,820,081, entitled “System and Method for Evaluating a Structured Message Store for Message Redundancy,” issued on Nov. 16, 2004, the disclosures of which are incorporated by reference. Additionally, the common or closely related documents can be identified based on influential documents, which are described above with reference to FIG. 4, or other measures of document similarity. After the candidate set has been refined, the remaining reference set candidates form the reference set (block 97).


In a further embodiment, features can be used to identify documents for inclusion in the reference set. A collection of documents is obtained and features from the documents are identified. The features are optionally divided into assignments. Seed features are identified and applied to the identified features. The features similar to the seed features are identified as reference set candidate features and the similar features are analyzed to determine whether a sufficient number of reference set candidate features are identified. If not, refinement can occur to increase or decrease the number of reference set candidate features until a sufficient number exists. If so, documents associated with the reference set candidate features are identified and grouped as the reference set.


Random sampling can also be used as selection criteria to identify reference set candidates. FIG. 7 is a flow diagram showing, by way of example, a method for generating a reference set via random sampling. A collection of documents is obtained (block 101), as described above with reference to FIG. 2. The documents are then grouped into categories (block 102) based on metadata about the documents. The metadata can include date, file, folder, fields, and structure. Other metadata types and groupings are possible. Document identification values are assigned (block 103) to each of the documents in the collection. The identification values can include letters, numbers, symbols or color coding, as well as other values, and can be human readable or machine readable. A machine, such as a random number generator, or a human reviewer can assign the identification values to the documents. Subsequently, the documents are randomly ordered into a list (block 104) and the first n-documents are selected from the list as reference candidates (block 105). In a further embodiment, the document identification values are provided to a random number generator, which randomly selects n document identification values. The documents associated with the selected identification values are then selected as the reference set candidates. The number of n-documents can be determined by a human reviewer, user, or machine. The value of n dictates the size of the reference set. The reference candidates are then coded (block 106) and grouped as the reference set (block 107).
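
The sampling variants described above reduce to a short routine; the sketch below shuffles the document identification values and takes the first n. The seed parameter is an assumption added only so the example is reproducible.

```python
import random

def random_sample_candidates(doc_ids, n, seed=None):
    """Randomly order the document identification values (block 104) and select
    the first n as reference set candidates (block 105)."""
    rng = random.Random(seed)
    ordered = list(doc_ids)
    rng.shuffle(ordered)
    return ordered[:n]

# Example: pick 3 candidates out of 10 documents.
candidates = random_sample_candidates(range(1, 11), 3, seed=42)
```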


In a further embodiment, features or terms selected from the documents in the collection can be sampled. Features can include metadata about the documents, including nouns, noun phrases, length of document, “To” and “From” fields, date, complexity of sentence structure, and concepts. Other features are possible. Identification values are assigned to the features and a subset of the features or terms are selected, as described supra. Subsequently, the subset of features is randomly ordered into a list and the first n-features are selected as reference candidate features. The documents associated with the selected reference candidate features are then grouped as the reference set. Alternatively, the number of n-features can be randomly selected by a random number generator, which provides n-feature identification values. The features associated with the selected n-feature identification values are selected as reference candidate features.


Reference sets for coding documents by a human reviewer or a machine can be the same set or a different set. Reference sets for human reviewers should be cohesive, but need not be representative of a collection of documents, since the reviewer is comparing uncoded documents to the reference documents and identifying the similar uncoded documents to assign a classification code. Meanwhile, a reference or “training” set for classifiers should be representative of the collection of documents, so that the classifier can distinguish between documents having different classification codes. FIG. 8 is a flow diagram showing, by way of example, a method 110 for generating a reference set with user assistance. A collection of documents associated with a topic or legal issue is obtained (block 111). A reviewer marks one or more of the documents in the collection by assigning a classification code (block 112). Together, the classified documents can form an initial or candidate reference set, which can be subsequently tested and refined. The reviewer can randomly select the documents, receive review requests for particular documents by a classifier, or receive a predetermined list of documents for marking. In one embodiment, the documents marked by the reviewer can be considered reference documents, which can be used to train a classifier.


While the reviewer is marking the documents, a machine classifier analyzes the coding decisions provided by the reviewer (block 113). The analysis of the coding decisions by the classifier can include one or more steps, which can occur simultaneously or sequentially. In one embodiment, the analysis process is a training or retraining of the classifier. Retraining of the classifier can occur when new information, such as documents or coding decisions, is identified. In a further embodiment, multiple classifiers are utilized. Thereafter, the classifier begins classifying documents (block 114) by automatically assigning classification codes to the documents. The classifier can begin classification based on factors, such as a predetermined number of documents for review by the classifier, after a predetermined time period has passed, or after a predetermined number of documents in each classification category is reviewed. For instance, in one embodiment, the classifier can begin classifying documents after analyzing at least two documents coded by the reviewer. As the number of documents analyzed by the classifier prior to classification increases, a confidence level associated with the classification codes assigned by the classifier can increase. The classification codes provided by the classifier are compared (block 115) with the classification codes for the same documents provided by the reviewer to determine whether there is a disagreement between the assigned codes (block 116). For example, a disagreement exists when the reviewer assigns a classification code of “privileged” to a document and the classifier assigns the same document a classification code of “responsive.”


If a disagreement does not exist (block 116), the classifier begins to automatically classify documents (block 118). However, if a disagreement exists (block 116), a degree of the disagreement is analyzed to determine whether the disagreement falls below a predetermined threshold (block 117). The predetermined threshold can be measured using a percentage, bounded range, or value, as well as other measurements. In one embodiment, the disagreement threshold is set as 99% agreement, or alternatively as 1% disagreement. In a further embodiment, the predetermined threshold is based on a number of agreed upon documents. For example, the threshold can require that the last 100 documents coded by the reviewer and the classifier be in agreement. In yet a further embodiment, zero-defect testing can be used to determine the threshold. A defect can be a disagreement in a coding decision, such as an inconsistency in the classification code assigned. An error rate for classification is determined based on the expected percentages that a particular classification code will be assigned, as well as a confidence level. The error rate can include a percentage, number, or other value. A collection of documents is randomly sampled and marked by the reviewer and classifier. If the proportion of documents with disagreed-upon classification codes exceeds the error rate, further training of the classifier is necessary. However, if the proportion of documents having a disagreement falls below the error rate, automated classification can begin.
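
The threshold check lends itself to a short helper. The sketch below implements the last-N-documents variant with the 1% disagreement figure from the embodiment above; the list-of-codes interface is an assumption for illustration.

```python
def below_disagreement_threshold(reviewer_codes, classifier_codes,
                                 window=100, max_disagreement=0.01):
    """Compare the reviewer's and the classifier's codes for the most recently
    coded documents and report whether the disagreement rate is within the
    predetermined threshold (block 117)."""
    pairs = list(zip(reviewer_codes, classifier_codes))[-window:]
    if not pairs:
        return False                                  # nothing coded yet
    disagreements = sum(1 for r, c in pairs if r != c)
    return disagreements / len(pairs) <= max_disagreement
```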


If the disagreement value is below the threshold, the classifier begins to automatically classify documents (block 118). If not, the reviewer continues to mark documents from the collection set (block 112), the classifier analyzes the coding decisions (block 113), the classifier marks documents (block 114), and the classification codes are compared (block 115) until the disagreement of the classification codes assigned by the classifier and the reviewer falls below the predetermined threshold.


In one embodiment, the disagreed upon documents can be selected and grouped as the reference set. Alternatively, all documents marked by the classifier can be included in the reference set, such as the agreed and disagreed upon documents.


In a further embodiment, features can be used to identify documents for inclusion in the reference set. A collection of documents is obtained and features are identified from the collection. A reviewer marks one or more features by assigning classification codes and provides the marked features to a classifier for analysis. After the analysis, the classifier also begins to assign classification codes to the features. The classification codes assigned by the reviewer and the classifier for a common feature are compared to determine whether a disagreement exists. If there is no disagreement, classification of the features becomes automated. However, if there is disagreement, a threshold is applied to determine whether the disagreement falls below the threshold. If so, classification of the features becomes automated. However, if not, further marking of the features and analysis occurs.


Reference sets generated using hierarchical clustering, iterative clustering, random sampling, and document seeding rely on the human reviewer for coding of the reference documents. However, a machine, such as a classifier, can also be trained to identify reference sets for use in classifying documents. FIG. 9 is a flow diagram showing, by way of example, a method for generating a reference set via active learning. A set of coded documents is obtained (block 121). The set of documents can include a document seed set or a reference set, as well as other types of document sets. The document set can be obtained from a previous related topic or legal matter, as well as from documents in the current matter. The coding of the document set can be performed by a human reviewer or a machine. The document set can be used to train one or more classifiers (block 122) to identify documents for inclusion in a reference set. The classifiers can be the same or different, including nearest neighbor or Support Vector Machine classifiers, as well as other types of classifiers. The classifiers review and mark a set of uncoded documents for a particular topic, legal matter, theme, or purpose by assigning a classification code (block 123) to each of the uncoded documents. The classification codes assigned by each classifier for the same document are compared (block 124) to determine whether there is a disagreement in classification codes provided by the classifiers (block 125). A disagreement exists when one document is assigned different classification codes. If there is no disagreement, the classifiers continue to review and classify the uncoded documents (block 123) until there are no uncoded documents remaining. Otherwise, if there is a disagreement, the document is provided to a human reviewer for review and marking. The human reviewer provides a new classification code or confirms a classification code assigned by one of the classifiers (block 126). The classifiers that marked the document inconsistently with the reviewer-assigned classification code (block 127) can be analyzed for further training. For the classifiers that correctly marked the document (block 127), no additional training need occur. The documents receiving inconsistent classification codes from the classifiers form the reference set (block 128). The reference set can then be used to train further classifiers for classifying documents.
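
A compact sketch of this disagreement loop follows, using a nearest neighbor classifier and a Support Vector Machine from scikit-learn as the two classifiers mentioned above. The TF-IDF representation, the specific model parameters, and the `adjudicate` callback standing in for the human reviewer are assumptions for illustration, not details from the patent.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

def reference_set_by_active_learning(coded_texts, coded_labels, uncoded_texts, adjudicate):
    """Train two classifiers on the coded set (block 122), classify the uncoded
    documents (block 123), and collect the documents on which the classifiers
    disagree (blocks 124-126); `adjudicate` represents the human reviewer who
    confirms or replaces a classification code."""
    vectorizer = TfidfVectorizer()
    X_coded = vectorizer.fit_transform(coded_texts)
    X_uncoded = vectorizer.transform(uncoded_texts)
    nearest_neighbor = KNeighborsClassifier(n_neighbors=3).fit(X_coded, coded_labels)
    svm = LinearSVC().fit(X_coded, coded_labels)
    reference_set = []
    for i, text in enumerate(uncoded_texts):
        code_a = nearest_neighbor.predict(X_uncoded[i])[0]
        code_b = svm.predict(X_uncoded[i])[0]
        if code_a != code_b:                               # classifiers disagree (block 125)
            final_code = adjudicate(text, code_a, code_b)  # reviewer decision (block 126)
            reference_set.append((text, final_code))
    return reference_set
```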


In a further embodiment, features can be analyzed to identify reference documents for inclusion in a reference set. A collection of coded documents, such as a seed set or reference set, is obtained. The document set can be obtained from a previous related topic, legal matter, theme or purpose, as well as from documents in the current matter. Features within the document set are identified. The features can include metadata about the documents, including nouns, noun phrases, length of document, to and from fields, date, complexity of sentence structure, and concepts. Other features are possible. The identified features are then classified by a human reviewer and used to train one or more classifiers. Once trained, the classifiers review a further set of uncoded documents, identify features within the further set of uncoded documents, and assign classification codes to the features. The classification codes assigned to a common feature by each classifier are compared to determine whether a discrepancy in the assigned classification code exists. If not, the classifiers continue to review and classify the features of the uncoded documents until no uncoded documents remain. If there is a classification disagreement, the feature is provided to a human reviewer for analysis and coding. The classification code is received from the user and used to retrain the classifiers, which incorrectly coded the feature. Documents associated with the disagreed upon features are identified and grouped to form the reference set.


Feature selection can be used to identify specific areas of two or more documents that are interesting based on the classification disagreement by highlighting or marking the areas of the documents containing the particular disagreed upon features. Documents or sections of documents can be considered interesting based on the classification disagreement because the document data is prompting multiple classifications and should be further reviewed by a human reviewer.


In yet a further embodiment, a combination of the reference documents identified by document and the reference documents identified by features can be combined to create a single reference set of documents.


The reference set can be provided to a reviewer for use in manually coding documents or can be provided to a classifier for automatically coding the documents. In a further embodiment, different reference sets can be used for providing to a reviewer and a classifier. FIG. 10 is a flow diagram 130 showing, by way of example, a method for generating a training set for a classifier. A set of coded documents, such as a reference set, is obtained (block 131). One or more classifiers can be trained (block 132) using the reference set. The classifiers can be the same or different, such as a nearest neighbor classifier or a Support Vector Machine classifier. Other types of classifiers are possible. Once trained, the classifiers are each run over a common sample of assignments to classify documents in that assignment (block 133). The classification codes assigned by each classifier are analyzed for the documents and a determination of whether the classifiers disagree on a particular classification code is made (block 134). If there is no disagreement (block 134), the classifiers are run over further common samples (block 133) of assignments until disagreed upon documents are identified. However, if there is disagreement between the classifiers on a document marking, the classified documents in disagreement must then be reviewed (block 135) and identified as training set candidates. A further classification code is assigned to each classified document in disagreement (block 137). The further classification code can be assigned by a human reviewer or a machine, such as one of the classifiers or a different classifier. The classifiers can each be optionally updated (block 132) with the newly assigned code. The review and document coding can occur manually by a reviewer or automatically. The training set candidates are then combined with the reference set (block 137). A stop threshold is applied (block 138) to the combined training set candidates and reference set to determine whether each of the documents is appropriate for inclusion in the training set. The stop threshold can include a predetermined training set size, a breadth of the training set candidates with respect to the feature space of the reference set, or the zero defect test. Other types of tests and processes for determining the stopping threshold are possible. If the threshold is not satisfied, the classifiers are run over further assignments (block 133) for classifying and comparing. Otherwise, if satisfied, the combined training set candidates and reference set form the training set (block 139). Once generated, the training set can be used for automatic classification of documents, such as described above with reference to FIG. 8.
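
The outer loop over assignments and the stop threshold can be sketched as follows. The `classify_all` and `adjudicate` callbacks and the size-based stop test are assumptions standing in for the trained classifiers, the further review, and whichever stop threshold is chosen.

```python
def build_training_set(reference_set, assignments, classify_all, adjudicate, target_size):
    """Run the trained classifiers over successive assignments (block 133),
    adjudicate the documents they disagree on (block 135), and stop once the
    combined training set satisfies a size-based stop threshold (block 138)."""
    training_set = list(reference_set)
    for assignment in assignments:
        for document in assignment:
            codes = classify_all(document)            # one code per classifier
            if len(set(codes)) > 1:                   # classifiers disagree (block 134)
                code = adjudicate(document, codes)    # further review and coding
                training_set.append((document, code))
        if len(training_set) >= target_size:          # stop threshold satisfied
            return training_set
    return training_set
```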


In a further embodiment, features can be used to identify documents for inclusion in the reference set. A set of coded documents is obtained and features are identified from the coded documents. Classifiers are trained using the features and then run over a random sample of features to assign classification codes to the features. The classification codes for a common feature are compared to determine whether a disagreement exists. If not, further features can be classified. However, if so, the disagreed upon features are provided to a reviewer for further analysis. The reviewer can assign further classification codes to the features, which are grouped as training set candidate features. The documents associated with the training set candidate features can be identified as training set candidates and combined with the coded documents. A stop threshold is applied to determine whether each of the documents is appropriate for inclusion in the reference set. If so, the training set candidates and coded documents are identified as the training set. However, if not, further coding of features is performed to identify training set candidates appropriate for inclusion in the reference set.


While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims
  • 1. A computer-implemented method for generating document training sets, comprising: providing a set of unclassified documents to each of two or more trained classifiers and receiving a classification code assigned to each unclassified document from each classifier; comparing via a server the classification codes assigned to each unclassified document by two or more of the classifiers, wherein the server comprises a central processing unit, memory, an input port to receive the set of unclassified documents, and an output port to provide a training set for a matter; determining for at least one of the unclassified documents that a disagreement exists between the classification codes from the two or more classifiers; providing via the server for further review the unclassified document with a disagreement in classification codes, wherein results of the further review comprise one of a new classification code and confirmation of one of the assigned classification codes; generating the training set for the matter via the server by grouping the unclassified documents for which the disagreement exists; and generating a further training set for a same or different matter, comprising: training two or more other classifiers by identifying features within one or more coded documents, classifying the features, and utilizing the classified features for training the other classifiers; identifying via the other classifiers one or more features within at least one of the unclassified documents; assigning by each of the other classifiers, a classification code to each of the identified features; comparing the classification codes assigned to each feature; determining whether a disagreement exists between the classification codes assigned to at least one of the features via the other classifiers; providing the features with a disagreement in classification codes for further review, wherein results of the further review comprise one of a new classification code and confirmation of one of the assigned classification codes; and grouping as the further training set the unclassified documents associated with the features for which a disagreement exists.
  • 2. A method according to claim 1, further comprising: comparing the classification codes for each of the unclassified documents with the results of the further review; identifying those classification codes that differ from the results of the further review; and re-training the classifiers that assigned the classification codes that differ from the results.
  • 3. A method according to claim 1, further comprising: training the classifiers with coded documents, each coded document assigned with a classification code.
  • 4. A method according to claim 1, wherein the feature classification is performed by an individual.
  • 5. A method according to claim 1, further comprising: combining the training set and the further training set as a combined training set.
  • 6. A method according to claim 5, further comprising: providing the combined training set to one of an individual for manual classification and to one or more of the other classifiers.
  • 7. A method according to claim 1, further comprising:
    highlighting areas of the unclassified documents that include the features in disagreement; and
    identifying, via the highlighted areas, areas of the unclassified documents that are interesting.
  • 8. A method according to claim 1, wherein the further review is performed by an individual.
  • 9. A computer-implemented system for generating document training sets, comprising:
    a set of unclassified documents provided to each of two or more trained classifiers, wherein a classification code is assigned to each unclassified document by each classifier; and
    a server comprising a central processing unit, memory, an input port to receive the set of unclassified documents, and an output port to provide a training set for a matter, wherein the central processing unit is configured to:
      compare the classification codes assigned to each unclassified document by two or more of the classifiers;
      determine for at least one of the unclassified documents that a disagreement exists between the classification codes from the two or more classifiers;
      provide for further review the unclassified document with a disagreement in classification codes;
      receive results of the further review comprising one of a new classification code and confirmation of one of the assigned classification codes;
      generate the training set for the matter by grouping the unclassified documents for which the disagreement exists; and
      generate a further training set for a same or different matter, comprising:
        train two or more other classifiers by identifying features within one or more coded documents, classifying the features, and utilizing the classified features for training the other classifiers;
        identify via the other classifiers one or more features within at least one of the unclassified documents;
        assign, by each of the other classifiers, a classification code to each of the identified features;
        compare the classification codes assigned to each feature;
        determine whether a disagreement exists between the classification codes assigned to at least one of the features via the other classifiers;
        provide the features with a disagreement in classification codes for further review, wherein results of the further review comprise one of a new classification code and confirmation of one of the assigned classification codes; and
        group as the further training set the unclassified documents associated with the features for which a disagreement exists.
  • 10. A system according to claim 9, further comprising:
    a further comparison module to compare the classification codes for each of the unclassified documents with the results of the further review;
    an identification module to identify those classification codes that differ from the results of the further review; and
    a re-training module to re-train the classifiers that assigned the classification codes that differ from the results.
  • 11. A system according to claim 9, further comprising: a training module to train the classifiers with coded documents, each coded document assigned with a classification code.
  • 12. A system according to claim 9, wherein the feature classification is performed by an individual.
  • 13. A system according to claim 9, further comprising: a combination module to combine the training set and the further training set as a combined training set.
  • 14. A system according to claim 13, further comprising: a delivery module to provide the combined training set to one of an individual for manual classification and to one or more of the other classifiers.
  • 15. A system according to claim 9, further comprising: an identification module to highlight areas of the unclassified documents that include the features in disagreement and to identify, via the highlighted areas, areas of the unclassified documents that are interesting.
  • 16. A system according to claim 9, wherein the further review is performed by an individual.
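Claims 7 and 15 above contemplate highlighting the areas of a document that contain disputed features so that a reviewer can locate the interesting passages quickly. A one-function sketch, assuming plain-text documents and an arbitrary >>...<< marker convention of this sketch's own invention:

```python
def highlight_disputed_areas(text, disputed_features):
    """Naively mark each occurrence of a disputed feature so the highlighted
    areas point a reviewer to the interesting parts of the document.

    A single-pass string replace: longest features first, so a shorter
    feature that is a substring of a longer one does not pre-empt the
    longer match (overlapping features can still nest markers)."""
    for feat in sorted(disputed_features, key=len, reverse=True):
        text = text.replace(feat, ">>" + feat + "<<")
    return text

# Example:
# highlight_disputed_areas("the merger closed early", {"merger"})
# -> "the >>merger<< closed early"
```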
CROSS-REFERENCE TO RELATED APPLICATION

This patent application is a continuation of U.S. Pat. No. 9,489,446, issued Nov. 8, 2016, which is a divisional of U.S. Pat. No. 8,612,446, issued Dec. 17, 2013, which claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 61/236,490, filed August 2009 and now expired, the disclosures of which are incorporated by reference.

Related Publications (1)
Number Date Country
20170076203 A1 Mar 2017 US
Provisional Applications (1)
Number Date Country
61236490 Aug 2009 US
Divisions (1)
Number Date Country
Parent 12862682 Aug 2010 US
Child 14108257 US
Continuations (1)
Number Date Country
Parent 14108257 Dec 2013 US
Child 15345471 US