System and method for providing a classification suggestion for electronically stored information

Information

  • Patent Grant
  • Patent Number
    8,635,223
  • Date Filed
    Friday, July 9, 2010
  • Date Issued
    Tuesday, January 21, 2014
Abstract
A system and method for providing a classification suggestion for electronically stored information is provided. A corpus of electronically stored information including reference electronically stored information items each associated with a classification and uncoded electronically stored information items are maintained. A cluster of uncoded electronically stored information items and reference electronically stored information items is provided. A neighborhood of reference electronically stored information items in the cluster is determined for at least one of the uncoded electronically stored information items. A classification of the neighborhood is determined using a classifier. The classification of the neighborhood is suggested as a classification for the at least one uncoded electronically stored information item.
Description
FIELD

This application relates in general to information classification, in particular, to a system and method for providing a classification suggestion for electronically stored information.


BACKGROUND

Historically, document review during the discovery phase of litigation and for other types of legal matters, such as due diligence and regulatory compliance, has been conducted manually. During document review, individual reviewers, generally licensed attorneys, are typically assigned sets of documents for coding. A reviewer must carefully study each document and categorize the document by assigning a code or other marker from a set of descriptive classifications, such as “privileged,” “responsive,” and “non-responsive.” The classifications can affect the disposition of each document, including admissibility into evidence. As well, during discovery, document review can potentially affect the outcome of the underlying legal matter, and consistent and accurate results are crucial.


Manual document review is tedious and time-consuming. Marking documents is performed at the sole discretion of each reviewer and inconsistent results can occur due to misunderstanding, time pressures, fatigue, or other factors. A large volume of documents reviewed, often with only limited time, can create a loss of mental focus and a loss of purpose for the resultant classification. Each new reviewer also faces a steep learning curve to become familiar with the legal matter, coding categories, and review techniques.


Currently, with the increasingly widespread movement to electronically stored information (ESI), manual document review is becoming impracticable and outmoded. The often exponential growth of ESI can exceed the bounds reasonable for conventional manual human review, and the sheer scale of staffing ESI review underscores the need for computer-assisted ESI review tools.


Conventional ESI review tools have proven inadequate for providing efficient, accurate, and consistent results. For example, DiscoverReady LLC, a Delaware limited liability company, conducts semi-automated document review through multiple passes over a document set in ESI form. During the first pass, documents are grouped by category and basic codes are assigned. Subsequent passes refine and assign further encodings. Multiple pass ESI review also requires a priori project-specific knowledge engineering, which is generally applicable to only a single project, thereby losing the benefit of any inferred knowledge or experiential know-how for use in other review projects.


Thus, there remains a need for a system and method for increasing the efficiency of document review by providing classification suggestions based on reference documents while ultimately ensuring independent reviewer discretion.


SUMMARY

Document review efficiency can be increased by identifying relationships between reference ESI, which is ESI that has been assigned classification codes, and uncoded ESI and providing a suggestion for classification based on the classification relationships. Uncoded ESI is formed into thematic or conceptual clusters. The uncoded ESI for a cluster is compared to a set of reference ESI. Those reference ESI most similar to the uncoded ESI are identified based on, for instance, semantic similarity and are used to form a classification suggestion. The classification suggestion can be provided with a confidence level that reflects the amount of similarity between the uncoded ESI and reference ESI in the neighborhood. The classification suggestion can then be accepted, rejected, or ignored by a reviewer.


One embodiment provides a system and method for providing a classification suggestion for electronically stored information. A corpus of electronically stored information including reference electronically stored information items each associated with a classification and uncoded electronically stored information items is maintained. A cluster of uncoded electronically stored information items and reference electronically stored information items is provided. A neighborhood of reference electronically stored information items in the cluster is determined for at least one of the uncoded electronically stored information items. A classification of the neighborhood is determined using a classifier. The classification of the neighborhood is suggested as a classification for the at least one uncoded electronically stored information item.


A further embodiment provides a system and method for providing a classification suggestion for a document. A corpus of documents including reference documents each associated with a classification and uncoded documents is maintained. A cluster of uncoded documents is generated. A neighborhood of reference documents is determined for at least one of the uncoded documents in the cluster. A classification of the neighborhood is determined using a classifier. The classification of the neighborhood is suggested as a classification for the at least one uncoded document.


Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein are described embodiments by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing a system for providing reference electronically stored information as a suggestion for uncoded electronically stored information, in accordance with one embodiment.



FIG. 2 is a process flow diagram showing a method for providing a classification suggestion for uncoded electronically stored information, in accordance with one embodiment.



FIG. 3 is a process flow diagram showing a method for providing a confidence level for a classification suggestion for use in the method of FIG. 2.



FIG. 4 is a process flow diagram showing a method for accepting or rejecting a classification suggestion for use in the method of FIG. 2.



FIG. 5 is a block diagram showing, by way of example, ways to generate a neighborhood of reference documents for a clustered uncoded document for use in the method of FIG. 2.



FIG. 6 is a block diagram showing, by way of example, classifier routines for suggesting a classification for an uncoded document for use in the method of FIG. 2.



FIG. 7 is a screenshot showing, by way of example, a visual display of reference documents in relation to uncoded documents.



FIG. 8 is a block diagram showing, by way of example, a cluster with a combination of classified reference documents, uncoded documents, and documents given a classification.





DETAILED DESCRIPTION

In a sense, previously classified ESI captures valuable knowledge gleaned from earlier work on similar or related legal projects, and can consequently serve as a known reference point in classifying uncoded ESI in subsequent projects.


Providing Classification Suggestions Using Reference Documents


Reference ESI is ESI that has been previously classified and which is selected as representative of correctly coded ESI under each of the classifications. Specifically, the relationship between uncoded ESI and reference ESI in terms of semantic similarity or distinction can be used as an aid in providing suggestions for classifying the uncoded ESI.


End-to-end ESI review requires a computerized support environment within which classification can be performed. FIG. 1 is a block diagram showing a system 10 for providing reference electronically stored information as a suggestion for uncoded electronically stored information, in accordance with one embodiment. By way of illustration, the system 10 operates in a distributed computing environment, which includes a plurality of heterogeneous systems and ESI sources. Henceforth, a single item of ESI will be referenced as a “document,” although ESI can include other forms of non-document data, as described infra. A backend server 11 is coupled to a storage device 13, which stores documents 14a in the form of structured or unstructured data, a database 30 for maintaining information about the documents, and a lookup database 37 for storing many-to-many mappings 38 between documents and document features, such as themes and concepts. The storage device 13 also stores reference documents 14b, which provide a training set of trusted and known results for use in guiding ESI classification. The reference documents 14b can be hand-selected or automatically determined. Additionally, the set of reference documents can be predetermined or can be generated dynamically, as the selected uncoded documents are classified and subsequently added to the set of reference documents.


The backend server 11 is coupled to an intranetwork 21 and executes a workbench software suite 31 for providing a user interface framework for automated document management, processing, analysis, and classification. In a further embodiment, the backend server 11 can be accessed via an internetwork 22. The workbench suite 31 includes a document mapper 32 that includes a clustering engine 33, similarity searcher 34, classifier 35, and display generator 36. Other workbench suite modules are possible.


The clustering engine 33 performs efficient document scoring and clustering of uncoded documents, such as described in commonly-assigned U.S. Pat. No. 7,610,313, U.S. Patent Application Publication No. 2011/0029526, published Feb. 3, 2011, pending, U.S. Patent Application Publication No. 2011/0029536, published Feb. 3, 2011, pending, and U.S. Patent Application Publication No. 2011/0029527, published Feb. 3, 2011, pending, the disclosures of which are incorporated by reference.


Briefly, clusters of uncoded documents 14a are formed and can be organized along vectors, known as spines, based on a similarity of the clusters. The similarity can be expressed in terms of distance. The content of each uncoded document within the corpus can be converted into a set of tokens, which are word-level or character-level n-grams, raw terms, concepts, or entities. Other tokens are possible. An n-gram is a predetermined number of items selected from a source. The items can include syllables, letters, or words, as well as other items. A raw term is a term that has not been processed or manipulated. Concepts typically include nouns and noun phrases obtained through part-of-speech tagging that have a common semantic meaning. Entities further refine nouns and noun phrases into people, places, and things, such as meetings, animals, relationships, and various other objects. Entities can be extracted using entity extraction techniques known in the field. Clustering of the uncoded documents can be based on cluster criteria, such as the similarity of tokens, including n-grams, raw terms, concepts, entities, email addresses, or other metadata.
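The token extraction described above can be sketched in code. The following is a minimal illustration of word-level n-gram tokenization only; the function name and the choice of lowercase word splitting are assumptions for illustration, not details from the patent.

```python
def word_ngrams(text, n=2):
    """Split text into lowercase word-level n-grams.

    An n-gram is a predetermined number (n) of consecutive items,
    here words, selected from a source text.
    """
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Example: bigrams of a short phrase.
tokens = word_ngrams("Document review during discovery", n=2)
```

Raw terms, concepts, and entities would require further processing (part-of-speech tagging, entity extraction) beyond this sketch.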


The similarity searcher 34 identifies the reference documents 14b that are similar to selected uncoded documents 14a, clusters, or spines. The classifier 35 provides a machine-generated suggestion and confidence level for classification of the selected uncoded documents 14a, clusters, or spines, as further described below beginning with reference to FIG. 2. The display generator 36 arranges the clusters and spines in thematic or conceptual relationships in a two-dimensional visual display space. Once generated, the visual display space is transmitted to a work client 12 by the backend server 11 via the document mapper 32 for presenting to a reviewer. The reviewer can include an individual person who is assigned to review and classify the documents 14a by designating a code. Hereinafter, unless otherwise indicated, the terms “reviewer” and “custodian” are used interchangeably with the same intended meaning. Other types of reviewers are possible, including machine-implemented reviewers.


The document mapper 32 operates on documents 14a, which can be retrieved from the storage 13, as well as a plurality of local and remote sources. The reference documents 14b can also be stored in the local and remote sources. The local sources include documents 17 maintained in a storage device 16 coupled to a local server 15 and documents 20 maintained in a storage device 19 coupled to a local client 18. The local server 15 and local client 18 are interconnected to the backend server 11 and the work client 12 over the intranetwork 21. In addition, the document mapper 32 can identify and retrieve documents from remote sources over the internetwork 22, including the Internet, through a gateway 23 interfaced to the intranetwork 21. The remote sources include documents 26 maintained in a storage device 25 coupled to a remote server 24 and documents 29 maintained in a storage device 28 coupled to a remote client 27. Other document sources, either local or remote, are possible.


The individual documents 14a, 14b, 17, 20, 26, 29 include all forms and types of structured and unstructured ESI including electronic message stores, word processing documents, electronic mail (email) folders, Web pages, and graphical or multimedia data. Notwithstanding, the documents could be in the form of structurally organized data, such as stored in spreadsheets or databases.


In one embodiment, the individual documents 14a, 14b, 17, 20, 26, 29 can include electronic message folders storing email and attachments, such as maintained by the Outlook and Outlook Express products, licensed by Microsoft Corporation, Redmond, Wash. The database can be an SQL-based relational database, such as the Oracle database management system, Release 8, licensed by Oracle Corporation, Redwood Shores, Calif.


Additionally, the individual documents 17, 20, 26, 29 include uncoded documents, reference documents, and previously uncoded documents that have been assigned a classification code. The number of uncoded documents may be too large for processing in a single pass. Typically, a subset of uncoded documents is selected for a document review assignment and stored as a document corpus, which can also include one or more reference documents as discussed infra.


The reference documents are initially uncoded documents that can be selected from the corpus or other source of uncoded documents and subsequently classified. When combined with uncoded documents, such as described in commonly-assigned U.S. Patent Application Publication No. 2011/0029526, published Feb. 3, 2011, pending, U.S. Patent Application Publication No. 2011/0029536, published Feb. 3, 2011, pending, and U.S. Patent Application Publication No. 2011/0029527, published Feb. 3, 2011, pending, the disclosures of which are incorporated by reference, the reference documents can provide suggestions for classification of the remaining uncoded documents in the corpus based on visual relationships between the reference documents and uncoded documents. The reviewer can classify one or more of the uncoded documents by assigning a code to each document, representing a classification, based on the suggestions, if desired. The suggestions can also be used for other purposes, such as quality control. Documents given a classification code by the reviewer are then stored. Additionally, the now-coded documents can be used as reference documents in related document review assignments. The assignment is completed once all uncoded documents in the assignment have been assigned a classification code.


In a further embodiment, the reference documents can be used as a training set to form machine-generated suggestions for classifying uncoded documents. The reference documents can be selected as representative of the document corpus for a project in which data organization or classification is desired. A set of reference documents can be generated for each document review project or, alternatively, the reference documents can be selected from a previously conducted document review project that is related to the current document review project. Guided review assists a reviewer in building a reference document set representative of the corpus for use in classifying uncoded documents.


During guided review, uncoded documents that are dissimilar to each other are identified based on a similarity threshold. Other methods for determining dissimilarity are possible. Identifying a set of dissimilar documents provides a group of documents that is representative of the corpus for a document review project. Each identified dissimilar document is then classified by assigning a particular code based on the content of the document to generate a set of reference documents for the document review project. Guided review can be performed by a reviewer, a machine, or a combination of the reviewer and machine.
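The dissimilar-document selection in guided review can be sketched as a greedy pass over the corpus, under the assumptions that documents are represented as score vectors, that cosine similarity is the metric, and that a document is kept only if it stays below the similarity threshold against every document already kept. The function names and the threshold value are illustrative, not from the patent.

```python
import math

def cosine(a, b):
    """Cosine similarity between two score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_dissimilar(docs, threshold=0.5):
    """Greedily keep documents whose similarity to every previously
    kept document falls below the threshold, yielding a set that is
    roughly representative of the corpus."""
    kept = []
    for vec in docs:
        if all(cosine(vec, k) < threshold for k in kept):
            kept.append(vec)
    return kept
```

Each kept document would then be classified by a reviewer or machine to become a reference document.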


Other methods for generating a reference document set for a document review project using guided review are possible, including clustering. A set of uncoded documents to be classified can be clustered, such as described in commonly-assigned U.S. Pat. No. 7,610,313, U.S. Patent Application Publication No. 2011/0029526, published Feb. 3, 2011, pending, U.S. Patent Application Publication No. 2011/0029536, published Feb. 3, 2011, pending, and U.S. Patent Application Publication No. 2011/0029527, published Feb. 3, 2011, pending, the disclosures of which are incorporated by reference.


Briefly, a plurality of the clustered uncoded documents is selected based on selection criteria, such as cluster centers or sample clusters. The cluster centers can be used to identify uncoded documents in a cluster that are most similar or dissimilar to the cluster center. The identified uncoded documents are then selected for classification. After classification, the previously uncoded documents represent a reference set. In a further example, sample clusters can be used to generate a reference set by selecting one or more sample clusters based on cluster relation criteria, such as size, content, similarity, or dissimilarity. The uncoded documents in the selected sample clusters are then selected for classification by assigning codes. The classified documents represent a reference document set for the document review project. Other methods for selecting uncoded documents for use as a reference set are possible. Although the above process has been described with reference to documents, other objects or tokens are possible.
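The cluster-center criterion above can be sketched as follows, assuming Euclidean distance over score vectors; the helper names are illustrative.

```python
import math

def distance(a, b):
    """Euclidean distance between two score vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_to_center(cluster, center):
    """Pick the clustered uncoded document most similar to the cluster
    center as a candidate for the reference set."""
    return min(cluster, key=lambda vec: distance(vec, center))
```

The symmetric choice, `max` instead of `min`, would select the document most dissimilar to the center.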


For purposes of legal discovery, the codes used to classify uncoded documents can include “privileged,” “responsive,” or “non-responsive.” Other codes are possible. A “privileged” document contains information that is protected by a privilege, meaning that the document should not be disclosed to an opposing party. Disclosing a “privileged” document can result in unintentional waiver of the subject matter. A “responsive” document contains information that is related to a legal matter on which the document review project is based and a “non-responsive” document includes information that is not related to the legal matter. During taxonomy generation, a list of codes to be used during classification can be provided by a reviewer or determined automatically. The uncoded documents to be classified can be divided into subsets of documents, which are each provided to a particular reviewer as an assignment. To maintain consistency, the same codes can be used across all assignments in the document review project.


Obtaining reference sets and cluster sets, and identifying the most similar reference documents can be performed by the system 10, which includes individual computer systems, such as the backend server 11, work client 12, server 15, client 18, remote server 24 and remote client 27. The individual computer systems are general purpose, programmed digital computing devices consisting of a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and peripheral devices, including user interfacing means, such as a keyboard and display 39. The various implementations of the source code and object and byte codes can be held on a computer-readable storage medium, such as a floppy disk, hard drive, digital video disk (DVD), random access memory (RAM), read-only memory (ROM) and similar storage mediums. For example, program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.


Classification code suggestions associated with a confidence level can be provided to assist a reviewer in making classification decisions for uncoded documents. FIG. 2 is a process flow diagram showing a method for providing a classification suggestion for uncoded electronically stored information, in accordance with one embodiment. A set of uncoded documents is first identified, then clustered, based on thematic or conceptual relationships (block 41). The clusters can be generated on-demand or previously-generated and stored, as described in commonly-assigned U.S. Pat. No. 7,610,313, the disclosure of which is incorporated by reference.


Once obtained, an uncoded document within one of the clusters is selected (block 42). A neighborhood of reference documents that is most relevant to the selected uncoded document is identified (block 43). Determining the neighborhood of the selected uncoded document is further discussed below with reference to FIG. 5. The neighborhood of reference documents is determined separately for each cluster and can include one or more reference documents within that cluster. The number of reference documents in a neighborhood can be determined automatically or by an individual reviewer. In a further embodiment, the neighborhood of reference documents is defined for each available classification code or subset of class codes. A classification for the selected uncoded document is suggested based on the classification of the similar coded reference documents in the neighborhood (block 44). The suggested classification can then be accepted, rejected, or ignored by the reviewer, as further described below with reference to FIG. 4. Optionally, a confidence level for the suggested classification can be provided (block 45), as further described below with reference to FIG. 3.


The machine-generated suggestion for classification and associated confidence level can be determined by the classifier as further discussed below with reference to FIGS. 3 and 5. Once generated, the reference documents in the neighborhood and the selected uncoded document are analyzed to provide a classification suggestion. The analysis of the selected uncoded document and neighborhood reference documents can be based on one or more routines performed by the classifier, such as a nearest neighbor (NN) classifier, as further discussed below with reference to FIG. 5. The classification suggestion is displayed to the reviewer through a visual display, textually or graphically, or in other ways. For example, the suggestion can be displayed as part of a visual representation of the uncoded document, as further discussed below with reference to FIGS. 7 and 8, and as described in commonly-assigned U.S. Pat. No. 7,271,804, the disclosure of which is incorporated by reference.


Once the suggested classification code is provided for the selected uncoded document, the classifier can provide a confidence level for the suggested classification, which can be presented as an absolute value or percentage. FIG. 3 is a process flow diagram showing a method for providing a confidence level for a classification suggestion for use in the method of FIG. 2. The confidence level is determined from a distance metric based on the amount of similarity of the uncoded document to the reference documents used for the classification suggestion (block 51). In one embodiment, the similarity between each reference document in the neighborhood and the selected uncoded document is determined as the cos σ of the score vectors for the document and each reference document being compared. The cos σ provides a measure of relative similarity or dissimilarity between tokens, including the concepts in the documents, and is equivalent to the inner product between the score vectors for the uncoded document and the reference document.


In the described embodiment, the cos σ is calculated in accordance with the equation:

cos σ_AB = (S⃗_A · S⃗_B) / (∥S⃗_A∥ ∥S⃗_B∥)

where cos σ_AB comprises the similarity metric between uncoded document A and reference document B, S⃗_A comprises a score vector for the uncoded document A, and S⃗_B comprises a score vector for the reference document B. Other forms of determining similarity using a distance metric are feasible, as would be recognized by one skilled in the art, such as using Euclidean distance. Practically, a reference document in the neighborhood that is identical to the uncoded document would result in a confidence level of 100%, while a reference document that is completely dissimilar would result in a confidence level of 0%.
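The cos σ calculation can be written directly in code. This is a minimal sketch in which score vectors are plain lists of floats, an assumption for illustration.

```python
import math

def cos_sigma(s_a, s_b):
    """cos σ_AB = (S_A · S_B) / (|S_A| |S_B|), the cosine of the angle
    between the score vectors of uncoded document A and reference
    document B."""
    dot = sum(a * b for a, b in zip(s_a, s_b))
    norm_a = math.sqrt(sum(a * a for a in s_a))
    norm_b = math.sqrt(sum(b * b for b in s_b))
    return dot / (norm_a * norm_b)
```

Identical score vectors yield 1.0 (a 100% confidence level); orthogonal, completely dissimilar vectors yield 0.0.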


Alternatively, the confidence level can take into account the classifications of reference documents in the neighborhood that are different than the suggested classification and adjust the confidence level accordingly (block 52). For example, the confidence level of the suggested classification can be reduced by subtracting the calculated similarity metric of the unsuggested classification from the similarity metric of the reference document of the suggested classification. Other confidence level measures are possible. The reviewer can consider confidence level when assigning a classification to a selected uncoded document. Alternatively, the classifier can automatically assign the suggested classification upon determination. In one embodiment, the classifier only assigns an uncoded document with the suggested classification if the confidence level is above a threshold value (block 53), which can be set by the reviewer or the classifier. For example, a confidence level of more than 50% can be required for a classification to be suggested to the reviewer. Finally, once determined, the confidence level for the suggested classification is provided to the reviewer (block 54).
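The confidence adjustment and threshold check described above can be sketched as follows; the similarity metrics are assumed to be precomputed, and the 0.5 default mirrors the 50% example in the text. The function names are illustrative.

```python
def adjusted_confidence(sim_suggested, sim_other):
    """Reduce the confidence of the suggested classification by the
    similarity metric of a competing, unsuggested classification."""
    return max(sim_suggested - sim_other, 0.0)

def should_suggest(confidence, threshold=0.5):
    """Only surface (or auto-assign) the suggestion when the confidence
    level exceeds the threshold set by the reviewer or classifier."""
    return confidence > threshold
```

A reviewer can still override the result; the threshold merely gates whether a suggestion is presented or automatically assigned.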


The suggested classification can be accepted, rejected, or ignored by the reviewer. FIG. 4 is a process flow diagram showing a method for accepting or rejecting a classification suggestion for use in the method of FIG. 2. Once the classification has been suggested (block 61), the reviewer can accept or reject the suggestion (block 62). If accepted, the previously uncoded document is coded with the suggested classification (block 63). Additionally, the now-coded document can be stored as a coded document. In a further embodiment, the suggested classification is automatically assigned to the uncoded document, as further described below with reference to FIG. 6. If rejected, the uncoded document remains uncoded and can be manually classified by the reviewer under a different classification code (block 64). Once the selected uncoded document is assigned a classification code, either by the reviewer or automatically, the newly classified document can be added to the set of reference documents for use in classifying further uncoded documents. Subsequently, a further uncoded document can be selected for classification using similar reference documents.


In a further embodiment, if the manual classification is different from the suggested classification, a discordance is identified by the system (block 65). Optionally, the discordance can be visually depicted to the reviewer (block 66). For example, the discordance can be displayed as part of a visual representation of the discordant document, as further discussed below with reference to FIG. 8. Additionally, the discordance is flagged if a discordance threshold value is exceeded, which can be set by the reviewer or the classifier. The discordance threshold is based on the confidence level. In one embodiment, the discordance value is identical to the confidence level of the suggested classification. In a further embodiment, the discordance value is the difference between the confidence level of the suggested classification and the confidence level of the manually-assigned classification.
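The two discordance-value variants described above can be sketched as a small helper; names and the flagging convention are assumptions for illustration.

```python
def discordance(conf_suggested, conf_manual=None):
    """Discordance value between a suggested and a manually assigned
    classification: the suggested confidence itself in one embodiment,
    or the difference between the two confidence levels in another."""
    if conf_manual is None:
        return conf_suggested
    return abs(conf_suggested - conf_manual)

def flag_discordance(value, threshold):
    """Flag the discordance when the threshold is exceeded."""
    return value > threshold
```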


In a yet further embodiment, an entire cluster, or a cluster spine containing multiple clusters of uncoded documents, can be selected and a classification for the entire cluster or cluster spine can be suggested. For instance, for cluster classification, a cluster is selected and a score vector for the center of the cluster is determined as described in commonly-assigned U.S. Patent Application Publication No. 2011/0029526, published Feb. 3, 2011, pending, U.S. Patent Application Publication No. 2011/0029536, published Feb. 3, 2011, pending, and U.S. Patent Application Publication No. 2011/0029527, published Feb. 3, 2011, pending, the disclosures of which are incorporated by reference.


Briefly, a neighborhood for the selected cluster is determined based on a distance metric. Each reference document in the selected cluster is associated with a score vector and the distance is determined by comparing the score vector of the cluster center with the score vector for each of the reference documents to determine a neighborhood of reference documents that are closest to the cluster center. However, other methods for generating a neighborhood are possible. Once determined, one of the classification measures is applied to the neighborhood to determine a suggested classification for the selected cluster, as further discussed below with reference to FIG. 6.


One or more reference documents nearest to a selected uncoded document are identified and provided as a neighborhood of reference documents for the selected uncoded document. FIG. 5 is a block diagram showing, by way of example, ways to generate a neighborhood 70 of reference documents for a clustered uncoded document for use in the method of FIG. 2. Types of neighborhood generation include inclusion 71, injection 72, and nearest neighbor 73. Other ways to generate the neighborhood are possible. Inclusion 71 includes using uncoded documents and reference documents to generate clusters, such as described in commonly-assigned U.S. Patent Application Publication No. 2011/0029526, published Feb. 3, 2011, pending, the disclosure of which is incorporated by reference. Briefly, a set of reference documents is grouped with one or more uncoded documents and organized into clusters containing both uncoded and reference documents, as discussed above. The reference documents in the cluster, or a subset thereof, are then used as the neighborhood for an uncoded document.


Injection 72 includes inserting reference documents into clusters of uncoded documents based on similarity, such as described in commonly-assigned U.S. Patent Application Publication No. 2011/0029536, published Feb. 3, 2011, pending, the disclosure of which is incorporated by reference. Briefly, a set of clusters of uncoded documents is obtained, as discussed above. Once obtained, a cluster center is determined for each cluster. The cluster center is representative of all the documents in that particular cluster. One or more cluster centers can be compared with a set of reference documents, and those reference documents that satisfy a threshold of similarity to a cluster center are selected. The selected reference documents are then inserted into the cluster associated with that cluster center. The reference documents injected into one cluster can be the same as, or different from, those injected into another cluster. The reference documents in the cluster, or a subset thereof, are then used as the neighborhood for an uncoded document.
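

Injection can be sketched roughly as below. This is an assumption-laden illustration, not the disclosed implementation: the mean-vector cluster center, the cosine similarity threshold, and the dictionary layout (`docs`, `refs`, `score` keys) are all hypothetical choices made for the example.

```python
import math

def cosine(a, b):
    # Cosine similarity between two score vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def inject_references(clusters, reference_docs, threshold):
    # For each cluster: take the mean score vector of its uncoded documents
    # as the cluster center, then copy in every reference document whose
    # similarity to that center satisfies the threshold.
    for cluster in clusters:
        docs = cluster["docs"]
        dims = len(docs[0]["score"])
        center = [sum(d["score"][i] for d in docs) / len(docs)
                  for i in range(dims)]
        cluster["refs"] = [r for r in reference_docs
                           if cosine(center, r["score"]) >= threshold]
    return clusters

clusters = [{"docs": [{"score": [1.0, 0.0]}, {"score": [0.8, 0.2]}]}]
refs = [
    {"id": "r1", "code": "responsive",     "score": [1.0, 0.0]},
    {"id": "r2", "code": "non-responsive", "score": [0.0, 1.0]},
]
inject_references(clusters, refs, threshold=0.9)
```

Only references sufficiently similar to a given center are injected, so different clusters can receive different (or overlapping) subsets of the reference set.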


Nearest Neighbor 73 includes a comparison of uncoded documents and reference documents, such as described in commonly-assigned U.S. Patent Application Publication No. 2011/0029527, published Feb. 3, 2011, pending, the disclosure of which is incorporated by reference. Briefly, uncoded documents are identified and clustered, as discussed above. A reference set of documents is also identified. An uncoded document is selected from one of the clusters and compared against the reference set to identify one or more reference documents that are similar to the selected uncoded document. The similar reference documents are identified based on a similarity measure calculated between the selected uncoded document and each reference document. Once identified, the similar reference documents, or a subset thereof, are then used as the neighborhood.
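

A minimal sketch of the nearest-neighbor variant, under the same illustrative assumptions as before (cosine similarity as the similarity measure, hypothetical dictionary fields and function names), compares one selected uncoded document against the whole reference set rather than only the references in its cluster:

```python
import math

def cosine(a, b):
    # Cosine similarity between two score vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest_reference_neighborhood(uncoded_score, reference_set, k):
    # Keep the k reference documents most similar to the selected
    # uncoded document as its neighborhood.
    return sorted(reference_set,
                  key=lambda r: cosine(uncoded_score, r["score"]),
                  reverse=True)[:k]

reference_set = [
    {"id": "r1", "code": "responsive",     "score": [1.0, 0.1]},
    {"id": "r2", "code": "non-responsive", "score": [0.1, 1.0]},
    {"id": "r3", "code": "responsive",     "score": [0.7, 0.3]},
]
hood = nearest_reference_neighborhood([0.9, 0.2], reference_set, 2)
```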


Suggesting Classification of Uncoded Documents


An uncoded document is compared to one or more reference documents to determine a suggested classification code for the uncoded document. FIG. 6 is a block diagram showing, by way of example, classifier routines 80 for suggesting a classification for an uncoded document for use in the method of FIG. 2. Types of classifier routines include minimum distance classification measure 82, minimum average distance classification measure 83, maximum count classification measure 84, and distance weighted maximum count classification measure 85. Other types of classification measures and classifiers are possible.


The minimum distance classification measure 82, also known as closest neighbor, includes determining the reference document in the neighborhood closest to the selected uncoded document. Once determined, the classification of the closest reference document is used as the classification suggestion for the selected uncoded document. Score vectors for the selected uncoded document and for each of the reference documents in the neighborhood are compared by computing cos σ, the cosine of the angle between the two vectors, as a distance metric. The distance metrics for the reference documents are then compared to identify the reference document closest to the selected uncoded document.
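

The closest-neighbor measure can be sketched as below. As before, this is an illustrative sketch only: the dictionary representation and function names are assumptions, and cosine similarity plays the role of cos σ (so the "closest" reference is the one with the highest similarity).

```python
import math

def cosine(a, b):
    # cos of the angle between two score vectors (cos sigma in the text).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def minimum_distance_classify(uncoded_score, neighborhood):
    # Suggest the classification of the single reference document most
    # similar to the selected uncoded document.
    closest = max(neighborhood, key=lambda r: cosine(uncoded_score, r["score"]))
    return closest["code"]

neighborhood = [
    {"code": "responsive",     "score": [1.0, 0.0]},
    {"code": "non-responsive", "score": [0.0, 1.0]},
]
suggestion = minimum_distance_classify([0.9, 0.1], neighborhood)
```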


The minimum average distance classification measure 83 determines the distances of all reference documents in the neighborhood, averages the determined distances by classification, and uses the classification with the closest average distance as the classification suggestion. The maximum count classification measure 84, also known as the voting classification measure, includes counting the reference documents in the neighborhood and assigning a count, or “vote,” to each reference document. The classification that receives the most “votes” is used as the classification suggestion for the uncoded document.
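

Both measures can be sketched together. These are illustrative assumptions, not the disclosed code: cosine similarity stands in for the distance metric (so "closest average distance" becomes "highest average similarity"), and the document dictionaries and function names are invented for the example.

```python
import math
from collections import Counter, defaultdict

def cosine(a, b):
    # Cosine similarity between two score vectors; higher means closer.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def min_average_distance_classify(uncoded_score, neighborhood):
    # Average the similarities of the references within each classification
    # and suggest the classification whose members are closest on average.
    sims = defaultdict(list)
    for r in neighborhood:
        sims[r["code"]].append(cosine(uncoded_score, r["score"]))
    return max(sims, key=lambda code: sum(sims[code]) / len(sims[code]))

def max_count_classify(neighborhood):
    # Voting: one vote per reference document; the most-voted
    # classification wins.
    return Counter(r["code"] for r in neighborhood).most_common(1)[0][0]

neighborhood = [
    {"code": "responsive",     "score": [1.0, 0.0]},
    {"code": "responsive",     "score": [0.8, 0.2]},
    {"code": "non-responsive", "score": [0.0, 1.0]},
]
```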


The distance weighted maximum count classification measure 85 is a combination of the minimum average distance 83 and maximum count 84 classification measures. Each reference document in the neighborhood is given a count, but the count is differentially weighted based on the distance of that reference document from the selected uncoded document. For example, the vote of a reference document closer to the uncoded document is weighted more heavily than that of a reference document farther away. The classification determined to have the highest weighted vote count is suggested as the classification of the selected uncoded document.
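

A weighted vote can be sketched by using each reference's similarity as its vote weight; the specific weighting scheme, helper names, and document representation here are illustrative assumptions. The toy neighborhood also shows why weighting matters: a plain vote count would pick "non-responsive" (two votes to one), but the single very close "responsive" reference outweighs the two distant ones.

```python
import math
from collections import Counter, defaultdict

def cosine(a, b):
    # Cosine similarity between two score vectors; higher means closer.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def weighted_vote_classify(uncoded_score, neighborhood):
    # Each reference casts one vote weighted by its similarity to the
    # uncoded document, so nearer references count more heavily.
    totals = defaultdict(float)
    for r in neighborhood:
        totals[r["code"]] += cosine(uncoded_score, r["score"])
    return max(totals, key=totals.get)

neighborhood = [
    {"code": "responsive",     "score": [1.0, 0.0]},
    {"code": "non-responsive", "score": [0.0, 1.0]},
    {"code": "non-responsive", "score": [0.1, 1.0]},
]
```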


A confidence level can be provided for the suggested classification code, as described further above with reference to FIG. 3. For example, the neighborhood of a particular uncoded document can contain a total of five reference documents, with three classified as “responsive” and two classified as “non-responsive.” Determining the classification suggestion using the maximum count classification measure 84 results in a classification suggestion of “responsive” for the uncoded document, but the confidence level provided can be penalized for each document in the neighborhood that carries a non-suggested classification. The penalty reduces the confidence level of the classification. Other ways of determining the confidence level are possible.
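

The penalty idea can be sketched as follows, using the five-document example from the text. The fixed penalty of 0.1 per discordant document and the base confidence of 1.0 are illustrative assumptions; the text does not fix particular values.

```python
from collections import Counter

def suggest_with_confidence(neighborhood, penalty=0.1):
    # Suggest the majority classification, then reduce a base confidence
    # of 1.0 by a fixed penalty for every neighborhood document carrying
    # a different classification.  Penalty value is assumed, not disclosed.
    counts = Counter(r["code"] for r in neighborhood)
    code, votes = counts.most_common(1)[0]
    confidence = max(1.0 - penalty * (len(neighborhood) - votes), 0.0)
    return code, confidence

# The text's example: five references, three "responsive", two "non-responsive".
neighborhood = [{"code": "responsive"}] * 3 + [{"code": "non-responsive"}] * 2
code, conf = suggest_with_confidence(neighborhood)
```

Here the two "non-responsive" references each reduce the confidence of the "responsive" suggestion.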


Displaying the Reference Documents


The clusters of uncoded documents and reference documents can be provided as a display to the reviewer. FIG. 7 is a screenshot 90 showing, by way of example, a visual display 91 of reference documents in relation to uncoded documents. Clusters 93 can be located along a spine, which is a vector, based on a similarity of the uncoded documents in the clusters 93. Each cluster 93 is represented by a circle; however, other shapes, such as squares, rectangles, and triangles, are possible, as described in U.S. Pat. No. 6,888,584, the disclosure of which is incorporated by reference. The uncoded documents 94 are each represented by a smaller circle within the clusters 93, while the reference documents 95 are each represented by a circle containing a diamond shape. The reference documents 95 can be further represented by their assigned classification code. Classification codes can include “privileged,” “responsive,” and “non-responsive,” as well as other codes, and other classification categories are possible. For instance, privileged reference documents can be shown as a circle with an “X” in the center and non-responsive reference documents as a circle with striped lines. Other classification representations for the reference documents and other classified documents are possible, such as by color. Each cluster spine 96 is represented as a vector along which the clusters are placed.


The display 91 can be manipulated by an individual reviewer via a compass 92, which enables the reviewer to navigate, explore, and search the clusters 93 and spines 96 appearing within the compass 92, as further described in commonly-assigned U.S. Pat. No. 7,356,777, the disclosure of which is incorporated by reference. The compass 92 visually emphasizes clusters 93 located within its borders, while deemphasizing clusters 93 appearing outside of the compass 92.


Spine labels 99 appear outside of the compass 92 at an end of each cluster spine 96 to connect the outermost cluster of the cluster spine 96, preferably to the closest point along the periphery of the compass 92. In one embodiment, the spine labels 99 are placed without overlap and circumferentially around the compass 92. Each spine label 99 corresponds to one or more concepts that most closely describe the cluster spine 96 appearing within the compass 92. Additionally, the cluster concepts for each of the spine labels 99 can appear in a concepts list (not shown) also provided in the display. Toolbar buttons 97 located at the top of the display 91 enable a user to execute specific commands for the composition of the spine groups displayed. A set of pull-down menus 98 provides further control over the placement and manipulation of clusters 93 and cluster spines 96 within the display 91. Other types of controls and functions are possible.


The toolbar buttons 97 and pull-down menus 98 provide the reviewer with control over parameters related to classification. For example, the confidence suggestion threshold and discordance threshold can be set at a document, cluster, or cluster spine level. Additionally, the reviewer can display the classification suggestion, as well as further details about the reference documents used for the suggestion, by clicking an uncoded document, cluster, or spine. For example, a suggestion guide 100 can be placed in the display 91 and can include a “Suggestion” field and a “Confidence Level” field. The “Suggestion” field in the suggestion guide 100 provides the classification suggestion for a selected document, cluster, or spine. The “Confidence Level” field provides a confidence level for the suggested classification. Alternatively, the classification suggestion details can be revealed by hovering over the selection with the mouse.


In one embodiment, a garbage can 101 is provided to remove tokens, such as cluster concepts, from consideration in the current set of clusters 93. Removing a cluster concept prevents that concept from affecting future clustering, as may be desired when a reviewer considers the concept irrelevant to the clusters 93.


The display 91 provides a visual representation of the relationships between thematically related documents, including uncoded documents and similar reference documents. The uncoded documents and reference documents located within a cluster or spine can be compared based on characteristics, such as the type of classification of the reference documents, the number of reference documents for each classification code, and the number of classification category types in the cluster, to identify relationships between the uncoded documents and reference documents. The reference documents in the neighborhood of the uncoded document can be used to provide a classification code suggestion for the uncoded document. For example, FIG. 8 is a block diagram showing, by way of example, a cluster 110 with a combination of classified reference documents, uncoded documents, and documents given a classification. The cluster 110 can include one “privileged” reference document 111, two “non-responsive” reference documents 112, seven uncoded documents 113, one uncoded document with a “privileged” code suggestion 114, one previously uncoded document with an accepted “non-responsive” code suggestion 115, and one previously uncoded document showing a discordance 116 between the classification code suggested and the classification code manually assigned by the reviewer.


The combination of “privileged” 111 and “non-responsive” 112 reference documents within the cluster can be used by a classifier to provide a classification suggestion to a reviewer for the uncoded documents 113, as further described above with reference to FIG. 6. Uncoded document 114 has been assigned a suggested classification code of “privileged” by the classifier. The classification suggestion can be displayed textually or visually to the reviewer. Other ways of displaying a suggested classification are possible. In one embodiment, uncoded documents are assigned a color and each classification code is assigned an individual color. Placing the color code of the suggestion on a portion 117 of the uncoded document 114 denotes the suggested classification code. Similarly, the classification suggestion for an entire cluster can be displayed textually or visually, for example by assigning a color to the cluster circle matching the color of the suggested classification code.


A reviewer can choose to accept or reject the suggested classification, as described further above with reference to FIG. 4. If accepted, the now-classified document is given the color code of the suggested classification. For example, document 115 was previously assigned a suggestion of “non-responsive,” which was subsequently accepted by the reviewer, and the document was given the visual depiction of “non-responsive.” In a further embodiment, the suggested classification code is automatically assigned to the uncoded document without the need for prior reviewer approval.


In a further embodiment, discordance between the classification code suggested and the actual classification of the document is noted by the system. For example, discordant document 116 is assigned a classification suggestion of “privileged” but coded as “non-responsive.” With the discordant option selected, the classification suggested by the classifier is retained and displayed after the uncoded document is manually classified.


The classification of uncoded documents has been described in relation to documents; however, in a further embodiment, the classification process can be applied to tokens. For example, uncoded tokens are clustered and similar reference tokens are used to provide classification suggestions based on relationships between the uncoded tokens and similar reference tokens. In one embodiment, the tokens include concepts, n-grams, raw terms, and entities.


While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope.

Claims
  • 1. A system for providing a classification suggestion for electronically stored information, comprising: a database to store a corpus of electronically stored information (ESI) comprising reference ESI items each associated with a classification and uncoded ESI items; a clustering engine to provide a cluster of uncoded ESI items and reference ESI items; a neighborhood module to determine a neighborhood of reference ESI items in the cluster for at least one of the uncoded ESI items; a classification module to determine a classification of the neighborhood using a classifier; a suggestion module to suggest the classification of the neighborhood as a suggested classification code for the at least one uncoded ESI item; a difference module to assign a further classification code to the at least one uncoded ESI item based on instructions from a user and to identify a difference between the assigned classification code and the suggested classification code; a display module to display the difference between the assigned classification code and the suggested classification code; and a processor to execute the modules.
  • 2. The system according to claim 1, further comprising at least one of: a marking module to mark the at least one uncoded ESI item based on the suggested classification with a visual indicator; and an addition module to add the at least one uncoded ESI item to the corpus of ESI as a coded ESI item.
  • 3. The system according to claim 1, further comprising a confidence module to provide a confidence level of the suggested classification code.
  • 4. The system according to claim 3, further comprising a display to display the confidence level only when above a confidence level threshold.
  • 5. The system according to claim 1, further comprising: a distance module to determine a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; and an assign module to assign the classification of the reference ESI item in the neighborhood with the closest distance metric as the classification of the neighborhood.
  • 6. The system according to claim 1, further comprising: a distance module to determine a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; a calculation module to sum the distance metrics of the reference ESI items associated with the same classification and to average the sums of the distance metrics in each classification; and an assign module to assign the classification of the reference ESI items in the neighborhood with the closest average distance metric as the classification of the neighborhood.
  • 7. The system according to claim 1, further comprising: a vote module to calculate a vote for each reference ESI item in the neighborhood; and an assign module to assign the classification of the reference ESI items in the neighborhood with the highest calculated vote total as the classification of the neighborhood.
  • 8. The system according to claim 1, further comprising: a vote module to calculate a vote for each reference ESI item in the neighborhood; a distance module to determine a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; a weight module to differentially weigh the votes based on the distance metric; and an assign module to assign the classification of the reference ESI items in the neighborhood with the highest differentially weighted vote total as the classification of the neighborhood.
  • 9. A method for providing a classification suggestion for electronically stored information, comprising the steps of: maintaining a corpus of electronically stored information (ESI) comprising reference ESI items each associated with a classification and uncoded ESI items; providing a cluster of uncoded ESI items and reference ESI items; determining a neighborhood of reference ESI items in the cluster for at least one of the uncoded ESI items; determining a classification of the neighborhood using a classifier; suggesting the classification of the neighborhood as a suggested classification code for the at least one uncoded ESI item; assigning a further classification code to the at least one uncoded ESI item based on instructions from a user; identifying a difference between the assigned classification code and the suggested classification code; and displaying the difference between the assigned classification code and the suggested classification code, wherein the steps are performed by a suitably programmed computer.
  • 10. The method according to claim 9, further comprising at least one of: marking the at least one uncoded ESI item based on the suggested classification with a visual indicator; and adding the at least one uncoded ESI item to the corpus of ESI as a coded ESI item.
  • 11. The method according to claim 9, further comprising providing a confidence level of the suggested classification code.
  • 12. The method according to claim 11, further comprising: displaying the confidence level only when above a confidence level threshold.
  • 13. The method according to claim 9, further comprising: determining a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; and assigning the classification of the reference ESI item in the neighborhood with the closest distance metric as the classification of the neighborhood.
  • 14. The method according to claim 9, further comprising: determining a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; summing the distance metrics of the reference ESI items associated with the same classification; averaging the sums of the distance metrics in each classification; and assigning the classification of the reference ESI items in the neighborhood with the closest average distance metric as the classification of the neighborhood.
  • 15. The method according to claim 9, further comprising: calculating a vote for each reference ESI item in the neighborhood; and assigning the classification of the reference ESI items in the neighborhood with the highest calculated vote total as the classification of the neighborhood.
  • 16. The method according to claim 9, further comprising: calculating a vote for each reference ESI item in the neighborhood; determining a distance metric based on the similarity of each reference ESI item in the neighborhood to the at least one uncoded ESI item; differentially weighing the votes based on the distance metric; and assigning the classification of the reference ESI items in the neighborhood with the highest differentially weighted vote total as the classification of the neighborhood.
  • 17. A system for providing a classification suggestion for a document, comprising: a database to store a corpus of documents comprising reference documents each associated with a classification and uncoded documents; a clustering engine to generate a cluster of uncoded documents; a neighborhood module to determine a neighborhood of reference documents for at least one of the uncoded documents in the cluster; a classification module to determine a classification of the neighborhood using a classifier; a suggestion module to suggest the classification of the neighborhood as a suggested classification code for the at least one uncoded document; a difference module to assign a further classification code to the at least one uncoded document based on instructions from a user and to identify a difference between the assigned classification code and the suggested classification code; a display module to display the difference between the assigned classification code and the suggested classification code; and a processor to execute the modules.
  • 18. A system according to claim 17, further comprising: a mark module to mark the at least one uncoded document with a different assigned classification code than the suggested classification code with a visual indicator.
  • 19. The system according to claim 17, the display module further comprising: a threshold module to display the difference only when above a discordance threshold.
  • 20. The system according to claim 17, wherein the neighborhood is determined based on one of inclusion, injection, and nearest neighbor.
  • 21. The system according to claim 17, wherein the classifier is one of minimum distance, minimum average distance, maximum counts, and distance weighted maximum count.
  • 22. A method for providing a classification suggestion for a document, comprising the steps of: maintaining a corpus of documents comprising reference documents each associated with a classification and uncoded documents; generating a cluster of uncoded documents; determining a neighborhood of reference documents for at least one of the uncoded documents in the cluster; determining a classification of the neighborhood using a classifier; suggesting the classification of the neighborhood as a suggested classification code for the at least one uncoded document; assigning a further classification code to the at least one uncoded document based on instructions from a user; identifying a difference between the assigned classification code and the suggested classification code; and displaying the difference between the assigned classification code and the suggested classification code, wherein the steps are performed by a suitably programmed computer.
  • 23. The method according to claim 22, further comprising: marking the at least one uncoded document with a different assigned classification code than the suggested classification code with a visual indicator.
  • 24. The method according to claim 22, further comprising: displaying the difference only when above a discordance threshold.
  • 25. The method according to claim 22, wherein the neighborhood is determined based on one of inclusion, injection, and nearest neighbor.
  • 26. The method according to claim 22, wherein the classifier is one of minimum distance, minimum average distance, maximum counts, and distance weighted maximum count.
CROSS-REFERENCE TO RELATED APPLICATION

This non-provisional patent application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 61/229,216, filed Jul. 28, 2009, and U.S. Provisional Patent Application Ser. No. 61/236,490, filed Aug. 24, 2009, the disclosures of which are incorporated by reference.

US Referenced Citations (271)
Number Name Date Kind
3416150 Lindberg Dec 1968 A
3426210 Agin Feb 1969 A
3668658 Flores et al. Jun 1972 A
4893253 Lodder Jan 1990 A
5056021 Ausborn Oct 1991 A
5121338 Lodder Jun 1992 A
5133067 Hara et al. Jul 1992 A
5278980 Pedersen et al. Jan 1994 A
5371673 Fan Dec 1994 A
5442778 Pedersen et al. Aug 1995 A
5477451 Brown et al. Dec 1995 A
5488725 Turtle et al. Jan 1996 A
5524177 Suzuoka Jun 1996 A
5528735 Strasnick et al. Jun 1996 A
5619632 Lamping et al. Apr 1997 A
5619709 Caid et al. Apr 1997 A
5635929 Rabowsky et al. Jun 1997 A
5649193 Sumita et al. Jul 1997 A
5675819 Schuetze Oct 1997 A
5696962 Kupiec Dec 1997 A
5737734 Schultz Apr 1998 A
5754938 Herz et al. May 1998 A
5794236 Mehrle Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5819258 Vaithyanathan et al. Oct 1998 A
5842203 D'Elena et al. Nov 1998 A
5844991 Hochberg et al. Dec 1998 A
5857179 Vaithyanathan et al. Jan 1999 A
5860136 Fenner Jan 1999 A
5862325 Reed et al. Jan 1999 A
5864846 Voorhees et al. Jan 1999 A
5864871 Kitain et al. Jan 1999 A
5867799 Lang et al. Feb 1999 A
5870740 Rose et al. Feb 1999 A
5909677 Broder et al. Jun 1999 A
5915024 Kitaori et al. Jun 1999 A
5920854 Kirsch et al. Jul 1999 A
5924105 Punch et al. Jul 1999 A
5940821 Wical Aug 1999 A
5950146 Vapnik Sep 1999 A
5950189 Cohen et al. Sep 1999 A
5966126 Szabo Oct 1999 A
5987446 Corey et al. Nov 1999 A
6006221 Liddy et al. Dec 1999 A
6012053 Pant et al. Jan 2000 A
6026397 Sheppard Feb 2000 A
6038574 Pitkow et al. Mar 2000 A
6070133 Brewster et al. May 2000 A
6089742 Warmerdam et al. Jul 2000 A
6092059 Straforini et al. Jul 2000 A
6094649 Bowen et al. Jul 2000 A
6100901 Mohda et al. Aug 2000 A
6119124 Broder et al. Sep 2000 A
6122628 Castelli et al. Sep 2000 A
6137499 Tesler Oct 2000 A
6137545 Patel et al. Oct 2000 A
6137911 Zhilyaev Oct 2000 A
6148102 Stolin Nov 2000 A
6154219 Wiley et al. Nov 2000 A
6167368 Wacholder Dec 2000 A
6173275 Caid et al. Jan 2001 B1
6202064 Julliard Mar 2001 B1
6216123 Robertson et al. Apr 2001 B1
6243713 Nelson et al. Jun 2001 B1
6243724 Mander et al. Jun 2001 B1
6260038 Martin et al. Jul 2001 B1
6326962 Szabo Dec 2001 B1
6338062 Liu Jan 2002 B1
6345243 Clark Feb 2002 B1
6349296 Broder et al. Feb 2002 B1
6349307 Chen Feb 2002 B1
6360227 Aggarwal et al. Mar 2002 B1
6363374 Corston-Oliver et al. Mar 2002 B1
6377287 Hao et al. Apr 2002 B1
6381601 Fujiwara et al. Apr 2002 B1
6389433 Bolosky et al. May 2002 B1
6389436 Chakrabarti et al. May 2002 B1
6408294 Getchius et al. Jun 2002 B1
6414677 Robertson et al. Jul 2002 B1
6415283 Conklin Jul 2002 B1
6418431 Mahajan et al. Jul 2002 B1
6421709 McCormick et al. Jul 2002 B1
6438537 Netz et al. Aug 2002 B1
6438564 Morton et al. Aug 2002 B1
6442592 Alumbaugh et al. Aug 2002 B1
6446061 Doerre et al. Sep 2002 B1
6449612 Bradley et al. Sep 2002 B1
6453327 Nielsen Sep 2002 B1
6460034 Wical Oct 2002 B1
6470307 Turney Oct 2002 B1
6480843 Li Nov 2002 B2
6480885 Olivier Nov 2002 B1
6484168 Pennock et al. Nov 2002 B1
6484196 Maurille Nov 2002 B1
6493703 Knight et al. Dec 2002 B1
6496822 Rosenfelt et al. Dec 2002 B2
6502081 Wiltshire, Jr. et al. Dec 2002 B1
6507847 Fleischman Jan 2003 B1
6510406 Marchisio Jan 2003 B1
6519580 Johnson et al. Feb 2003 B1
6523026 Gillis Feb 2003 B1
6523063 Miller et al. Feb 2003 B1
6542889 Aggarwal et al. Apr 2003 B1
6544123 Tanaka et al. Apr 2003 B1
6549957 Hanson et al. Apr 2003 B1
6560597 Dhillon et al. May 2003 B1
6571225 Oles et al. May 2003 B1
6584564 Olkin et al. Jun 2003 B2
6594658 Woods Jul 2003 B2
6598054 Schuetze et al. Jul 2003 B2
6606625 Muslea et al. Aug 2003 B1
6611825 Billheimer et al. Aug 2003 B1
6628304 Mitchell et al. Sep 2003 B2
6629097 Keith Sep 2003 B1
6651057 Jin et al. Nov 2003 B1
6654739 Apte et al. Nov 2003 B1
6658423 Pugh et al. Dec 2003 B1
6675159 Lin et al. Jan 2004 B1
6675164 Kamath et al. Jan 2004 B2
6678705 Berchtold et al. Jan 2004 B1
6684205 Modha et al. Jan 2004 B1
6697998 Damerau et al. Feb 2004 B1
6701305 Holt et al. Mar 2004 B1
6711585 Copperman et al. Mar 2004 B1
6714929 Micaelian et al. Mar 2004 B1
6735578 Shetty et al. May 2004 B2
6738759 Wheeler et al. May 2004 B1
6747646 Gueziec et al. Jun 2004 B2
6751628 Coady Jun 2004 B2
6757646 Marchisio Jun 2004 B2
6785679 Dane et al. Aug 2004 B1
6804665 Kreulen et al. Oct 2004 B2
6816175 Hamp et al. Nov 2004 B1
6819344 Robbins Nov 2004 B2
6823333 McGreevy Nov 2004 B2
6841321 Matsumoto et al. Jan 2005 B2
6847966 Sommer et al. Jan 2005 B1
6862710 Marchisio Mar 2005 B1
6879332 Decombe Apr 2005 B2
6883001 Abe Apr 2005 B2
6886010 Kostoff Apr 2005 B2
6888584 Suzuki et al. May 2005 B2
6915308 Evans et al. Jul 2005 B1
6922699 Schuetze et al. Jul 2005 B2
6941325 Benitez et al. Sep 2005 B1
6970881 Mohan et al. Nov 2005 B1
6978419 Kantrowitz Dec 2005 B1
6990238 Saffer et al. Jan 2006 B1
6993535 Bolle et al. Jan 2006 B2
6996575 Cox et al. Feb 2006 B2
7003551 Malik Feb 2006 B2
7013435 Gallo et al. Mar 2006 B2
7020645 Bisbee et al. Mar 2006 B2
7051017 Marchisio May 2006 B2
7054870 Holbrook May 2006 B2
7080320 Ono Jul 2006 B2
7096431 Tambata et al. Aug 2006 B2
7099819 Sakai et al. Aug 2006 B2
7117246 Christenson et al. Oct 2006 B2
7130807 Mikurak Oct 2006 B1
7137075 Hoshino et al. Nov 2006 B2
7139739 Agrafiotis et al. Nov 2006 B2
7146361 Broder et al. Dec 2006 B2
7155668 Holland et al. Dec 2006 B2
7188107 Moon et al. Mar 2007 B2
7188117 Farahat et al. Mar 2007 B2
7194458 Micaelian et al. Mar 2007 B1
7194483 Mohan et al. Mar 2007 B1
7197497 Cossock Mar 2007 B2
7209949 Mousseau et al. Apr 2007 B2
7233886 Wegerich et al. Jun 2007 B2
7233940 Bamberger et al. Jun 2007 B2
7240199 Tomkow Jul 2007 B2
7246113 Cheetham et al. Jul 2007 B2
7251637 Caid et al. Jul 2007 B1
7266365 Ferguson et al. Sep 2007 B2
7266545 Bergman et al. Sep 2007 B2
7269598 Marchisio Sep 2007 B2
7271801 Toyozawa et al. Sep 2007 B2
7277919 Donoho et al. Oct 2007 B1
7325127 Olkin et al. Jan 2008 B2
7353204 Liu Apr 2008 B2
7359894 Liebman et al. Apr 2008 B1
7363243 Arnett et al. Apr 2008 B2
7366759 Trevithick et al. Apr 2008 B2
7373612 Risch et al. May 2008 B2
7379913 Steele et al. May 2008 B2
7383282 Whitehead et al. Jun 2008 B2
7401087 Copperman et al. Jul 2008 B2
7412462 Margolus et al. Aug 2008 B2
7418397 Kojima et al. Aug 2008 B2
7430717 Spangler Sep 2008 B1
7433893 Lowry Oct 2008 B2
7440662 Antona et al. Oct 2008 B2
7444356 Calistri-Yeh et al. Oct 2008 B2
7457948 Bilicksa et al. Nov 2008 B1
7472110 Achlioptas Dec 2008 B2
7490092 Morton et al. Feb 2009 B2
7516419 Petro et al. Apr 2009 B2
7523349 Barras Apr 2009 B2
7558769 Scott et al. Jul 2009 B2
7571177 Damle Aug 2009 B2
7574409 Patinkin Aug 2009 B2
7584221 Robertson et al. Sep 2009 B2
7639868 Regli et al. Dec 2009 B1
7640219 Perrizo Dec 2009 B2
7647345 Trepess et al. Jan 2010 B2
7668376 Lin et al. Feb 2010 B2
7698167 Batham et al. Apr 2010 B2
7716223 Haveliwala et al. May 2010 B2
7743059 Chan et al. Jun 2010 B2
7761447 Brill et al. Jul 2010 B2
7801841 Mishra et al. Sep 2010 B2
7885901 Hull et al. Feb 2011 B2
7971150 Raskutti et al. Jun 2011 B2
8010534 Roitblat et al. Aug 2011 B2
8165974 Privault et al. Apr 2012 B2
20020032735 Burnstein et al. Mar 2002 A1
20020065912 Catchpole et al. May 2002 A1
20020078044 Song et al. Jun 2002 A1
20020078090 Hwang et al. Jun 2002 A1
20020122543 Rowen Sep 2002 A1
20020184193 Cohen Dec 2002 A1
20030046311 Baidya et al. Mar 2003 A1
20030130991 Reijerse et al. Jul 2003 A1
20030172048 Kauffman Sep 2003 A1
20030174179 Suermondt et al. Sep 2003 A1
20040024739 Copperman et al. Feb 2004 A1
20040024755 Rickard Feb 2004 A1
20040034633 Rickard Feb 2004 A1
20040205482 Basu et al. Oct 2004 A1
20040205578 Wolff et al. Oct 2004 A1
20040215608 Gourlay Oct 2004 A1
20040243556 Ferrucci et al. Dec 2004 A1
20050025357 Landwehr et al. Feb 2005 A1
20050097435 Prakash et al. May 2005 A1
20050171772 Iwahashi et al. Aug 2005 A1
20050203924 Rosenberg Sep 2005 A1
20050283473 Rousso et al. Dec 2005 A1
20060008151 Lin et al. Jan 2006 A1
20060021009 Lunt Jan 2006 A1
20060053382 Gardner et al. Mar 2006 A1
20060122974 Perisic Jun 2006 A1
20060122997 Lin Jun 2006 A1
20070020642 Deng et al. Jan 2007 A1
20070043774 Davis et al. Feb 2007 A1
20070044032 Mollitor et al. Feb 2007 A1
20070112758 Livaditis May 2007 A1
20070150801 Chidlovskii et al. Jun 2007 A1
20070214133 Liberty et al. Sep 2007 A1
20070288445 Kraftsow Dec 2007 A1
20080005081 Green et al. Jan 2008 A1
20080140643 Ismalon Jun 2008 A1
20080183855 Agarwal et al. Jul 2008 A1
20080189273 Kraftsow Aug 2008 A1
20080215427 Kawada et al. Sep 2008 A1
20080228675 Daffy et al. Sep 2008 A1
20090041329 Nordell et al. Feb 2009 A1
20090043797 Dorie et al. Feb 2009 A1
20090049017 Gross Feb 2009 A1
20090097733 Hero et al. Apr 2009 A1
20090106239 Getner et al. Apr 2009 A1
20090222444 Chowdhury et al. Sep 2009 A1
20090228499 Schmidtler et al. Sep 2009 A1
20090228811 Adams et al. Sep 2009 A1
20100100539 Davis et al. Apr 2010 A1
20100198802 Kraftsow Aug 2010 A1
20100250477 Yadav Sep 2010 A1
20100262571 Schmidtler et al. Oct 2010 A1
20100268661 Levy et al. Oct 2010 A1
20120124034 Jing et al. May 2012 A1
Foreign Referenced Citations (8)
Number Date Country
1024437 Aug 2000 EP
1049030 Nov 2000 EP
0886227 Oct 2003 EP
WO 0067162 Nov 2000 WO
03052627 Jun 2003 WO
03060766 Jul 2003 WO
2006008733 Jul 2004 WO
WO 2005073881 Aug 2005 WO
Non-Patent Literature Citations (45)
Entry
Anna Sachinopoulou, "Multidimensional Visualization," Technical Research Centre of Finland, Espoo 2001, VTT Research Notes 2114, pp. 1-37 (2001).
B.B. Hubbard, "The World According to Wavelets: The Story of a Mathematical Technique in the Making," A K Peters (2nd ed.), pp. 227-229, Massachusetts, USA (1998).
Baeza-Yates et al., “Modern Information Retrieval,” Ch. 2 “Modeling,” Modern Information Retrieval, Harlow: Addison-Wesley, Great Britain 1999, pp. 18-71 (1999).
Bernard et al.: “Labeled Radial Drawing of Data Structures” Proceedings of the Seventh International Conference on Information Visualization, Infovis. IEEE Symposium, Jul. 16-18, 2003, Piscataway, NJ, USA, IEEE, Jul. 16, 2003, pp. 479-484, XP010648809 (2003).
Bier et al., "Toolglass and Magic Lenses: The See-Through Interface," Computer Graphics Proceedings, Proceedings of Siggraph Annual International Conference on Computer Graphics and Interactive Techniques, pp. 73-80, XP000879378 (Aug. 1993).
Boukhelifa et al., “A Model and Software System for Coordinated and Multiple Views in Exploratory Visualization,” Information Visualization, No. 2, pp. 258-269, GB (2003).
Chung et al., “Thematic Mapping-From Unstructured Documents to Taxonomies,” CIKM'02, Nov. 4-9, 2002, pp. 608-610, ACM, McLean, Virginia, USA (Nov. 4, 2002).
Chen An et al., “Fuzzy Concept Graph and Application in Web Document Clustering,” IEEE, pp. 101-106 (2001).
Davison et al., “Brute Force Estimation of the Number of Human Genes Using EST Clustering as a Measure,” IBM Journal of Research & Development, vol. 45, pp. 439-447 (May 2001).
Eades et al. “Multilevel Visualization of Clustered Graphs,” Department of Computer Science and Software Engineering, University of Newcastle, Australia, Proceedings of Graph Drawing '96, Lecture Notes in Computer Science, NR. 1190 (Sep. 1996).
Eades et al., "Orthogonal Grid Drawing of Clustered Graphs," Department of Computer Science, The University of Newcastle, Australia, Technical Report 96-04, [Online] 1996, Retrieved from the internet: URL:http://citeseer.ist.psu.edu/eades96orthogonal.html (1996).
Estivill-Castro et al., "Amoeba: Hierarchical Clustering Based on Spatial Proximity Using Delaunay Diagram," Department of Computer Science, The University of Newcastle, Australia, 1999 ACM Sigmod International Conference on Management of Data, vol. 28, No. 2, Jun. 1-3, 1999, pp. 49-60, Philadelphia, PA, USA (Jun. 1999).
F. Can, "Incremental Clustering for Dynamic Information Processing," ACM Transactions on Information Systems, ACM, New York, NY, US, vol. 11, No. 2, pp. 143-164, XP-002308022 (Apr. 1993).
Fekete et al., “Excentric Labeling: Dynamic Neighborhood Labeling For Data Visualization,” CHI 1999 Conference Proceedings Human Factors in Computing Systems, Pittsburgh, PA, pp. 512-519 (May 15-20, 1999).
http://em-ntserver.unl.edu/Math/mathweb/vecors/vectors.html (1997).
Inxight VizServer, “Speeds and Simplifies The Exploration and Sharing of Information”, www.inxight.com/products/vizserver, copyright 2005.
Jain et al., “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, No. 3, Sep. 1999, pp. 264-323, New York, NY, USA (Sep. 1999).
Osborn et al., "JUSTICE: A Judicial Search Tool Using Intelligent Concept Extraction," Department of Computer Science and Software Engineering, University of Melbourne, Australia, ICAIL-99, pp. 173-181, ACM (1999).
Jiang Linhui, "K-Mean Algorithm: Iterative Partitioning Clustering Algorithm," http://www.cs.regina.ca/~linhui/K_mean_algorithm.html, Computer Science Department, University of Regina, Saskatchewan, Canada (2001).
Kanungo et al., "The Analysis of a Simple K-Means Clustering Algorithm," pp. 100-109, Proc. 16th Annual Symposium on Computational Geometry (May 2000).
Hiroyuki Kawano, “Overview of Mondou Web Search Engine Using Text Mining and Information Visualizing Technologies,” IEEE, 2001, pp. 234-241 (2001).
Kazumasa Ozawa, “A Stratificational Overlapping Cluster Scheme,” Information Science Center, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572, Japan, Pattern Recognition, vol. 18, pp. 279-286 (1985).
Kohonen, “Self-Organizing Maps,” Ch. 1-2, Springer-Verlag (3rd ed.) (2001).
M. Kurimo, "Fast Latent Semantic Indexing of Spoken Documents by Using Self-Organizing Maps," IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 2425-2428 (Jun. 2000).
Lam et al., "A Sliding Window Technique for Word Recognition," SPIE, vol. 2422, pp. 38-46, Center of Excellence for Document Analysis and Recognition, State University of New York at Buffalo, NY, USA (1995).
Lio et al., "Finding Pathogenicity Islands and Gene Transfer Events in Genome Data," Bioinformatics, vol. 16, pp. 932-940, Department of Zoology, University of Cambridge, UK (Jan. 25, 2000).
Artero et al., “Viz3D: Effective Exploratory Visualization of Large Multidimensional Data Sets,” IEEE Computer Graphics and Image Processing, pp. 340-347 (Oct. 20, 2004).
Magarshak, Theory & Practice, Issue 01, http://www.flipcode.com/articles/tp_issue01-pf.shtml (May 17, 2000).
Maria Cristina Ferreira de Oliveira et al., "From Visual Data Exploration to Visual Data Mining: A Survey," IEEE Transactions on Visualization and Computer Graphics, vol. 9, No. 3, pp. 378-394 (Jul.-Sep. 2003).
Miller et al., “Topic Islands: A Wavelet Based Text Visualization System,” Proceedings of the IEEE Visualization Conference, pp. 189-196 (1998).
North et al. “A Taxonomy of Multiple Window Coordinations,” Institute for Systems Research & Department of Computer Science, University of Maryland, Maryland, USA, http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=97-18 (1997).
Pelleg et al., "Accelerating Exact K-Means Algorithms With Geometric Reasoning," pp. 277-281, Proc. Fifth ACM SIGKDD Conference on Knowledge Discovery and Data Mining (1999).
R.E. Horn, “Communication Units, Morphology, and Syntax,” Visual Language: Global Communication for the 21st Century, 1998, Ch. 3, pp. 51-92, MacroVU Press, Bainbridge Island, Washington, USA.
Rauber et al., “Text Mining in the SOMLib Digital Library System: The Representation of Topics and Genres,” Applied Intelligence 18, pp. 271-293, 2003 Kluwer Academic Publishers (2003).
Shuldberg et al., “Distilling Information from Text: The EDS TemplateFiller System,” Journal of the American Society for Information Science, vol. 44, pp. 493-507 (1993).
Slaney et al., “Multimedia Edges: Finding Hierarchy in all Dimensions” PROC. 9-th ACM Intl. Conf. on Multimedia, pp. 29-40, ISBN. 1-58113-394-4, Sep. 30, 2001, XP002295016 Ottawa (Sep. 30, 2001).
Strehl et al., “Cluster Ensembles—A Knowledge Reuse Framework for Combining Partitioning,” Journal of Machine Learning Research, MIT Press, Cambridge, MA, US, ISSN: 1533-7928, vol. 3, No. 12, pp. 583-617, XP002390603 (Dec. 2002).
Dan Sullivan, “Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing and Sales,” Ch. 1-3, John Wiley & Sons, New York, NY (2001).
V. Faber, “Clustering and the Continuous K-Means Algorithm,” Los Alamos Science, The Laboratory, Los Alamos, NM, US, No. 22, Jan. 1, 1994, pp. 138-144 (Jan. 1, 1994).
Wang et al., “Learning text classifier using the domain concept hierarchy,” Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference on Jun. 29-Jul. 1, 2002, Piscataway, NJ, USA, IEEE, vol. 2, pp. 1230-1234 (2002).
Whiting et al., “Image Quantization: Statistics and Modeling,” SPIE Conference of Physics of Medical Imaging, San Diego, CA, USA , vol. 3336, pp. 260-271 (Feb. 1998).
Ryall et al., “An Interactive Constraint-Based System for Drawing Graphs,” UIST '97 Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 97-104 (1997).
O'Neill et al., “DISCO: Intelligent Help for Document Review,” 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, Jun. 8, 2009, pp. 1-10, ICAIL 2009, Association for Computing Machinery, Red Hook, New York (Online); XP 002607216.
McNee, “Meeting User Information Needs in Recommender Systems,” Ph.D. Dissertation, University of Minnesota—Twin Cities, Jun. 2006.
S.S. Weng and C.K. Liu, "Using Text Classification and Multiple Concepts to Answer E-Mails," Expert Systems with Applications, vol. 26, pp. 529-543 (2004).
Related Publications (1)
Number Date Country
20110029525 A1 Feb 2011 US
Provisional Applications (2)
Number Date Country
61229216 Jul 2009 US
61236490 Aug 2009 US