System and method for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor

Information

  • Patent Grant
  • Patent Number
    8,572,084
  • Date Filed
    Friday, July 9, 2010
  • Date Issued
    Tuesday, October 29, 2013
Abstract
A system and method for providing reference documents as a suggestion for classifying uncoded documents is provided. Reference electronically stored information items and a set of uncoded electronically stored information items are designated. Each of the reference information items is previously classified. At least one uncoded electronically stored information item is compared with the reference electronically stored information items. One or more of the reference electronically stored information items similar to the at least one uncoded electronically stored information item are identified. Relationships are depicted between the at least one uncoded electronically stored information item and the similar reference electronically stored information items for classifying the at least one uncoded electronically stored information item.
Description
FIELD

This application relates in general to using documents as a reference point and, in particular, to a system and method for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor.


BACKGROUND

Historically, document review during the discovery phase of litigation and for other types of legal matters, such as due diligence and regulatory compliance, has been conducted manually. During document review, individual reviewers, generally licensed attorneys, are assigned sets of documents for coding. A reviewer must carefully study each document and categorize the document by assigning a code or other marker from a set of descriptive classifications, such as “privileged,” “responsive,” and “non-responsive.” The classifications can affect the disposition of each document, including admissibility into evidence.


During discovery, document review can potentially affect the outcome of the underlying legal matter, so consistent and accurate results are crucial. Manual document review is tedious and time-consuming. Marking documents is solely at the discretion of each reviewer, and inconsistent results may occur due to misunderstanding, time pressures, fatigue, or other factors. Reviewing a large volume of documents, often with only limited time, can cause a loss of mental focus and a loss of purpose for the resultant classification. Each new reviewer also faces a steep learning curve to become familiar with the legal matter, classification categories, and review techniques.


Currently, with the increasingly widespread movement to electronically stored information (ESI), manual document review is no longer practicable. The often exponential growth of ESI exceeds the bounds reasonable for conventional manual human document review and underscores the need for computer-assisted ESI review tools.


Conventional ESI review tools have proven inadequate for providing efficient, accurate, and consistent results. For example, DiscoverReady LLC, a Delaware limited liability company, custom programs ESI review tools, which conduct semi-automated document review through multiple passes over a document set in ESI form. During the first pass, documents are grouped by category and basic codes are assigned. Subsequent passes refine and further assign codings. Multiple pass review requires a priori project-specific knowledge engineering, which is only useful for the single project, thereby losing the benefit of any inferred knowledge or know-how for use in other review projects.


Thus, there remains a need for a system and method for increasing the efficiency of document review that bootstraps knowledge gained from other reviews while ultimately ensuring independent reviewer discretion.


SUMMARY

Document review efficiency can be increased by identifying relationships between reference ESI and uncoded ESI, and providing a suggestion for classification based on the relationships. The uncoded ESI for a document review project are identified and clustered. At least one of the uncoded ESI is selected from the clusters and compared with the reference ESI based on a similarity metric. The reference ESI most similar to the selected uncoded ESI are identified. Classification codes assigned to the similar reference ESI can be used to provide suggestions for classification of the selected uncoded ESI. Further, a machine-generated classification code suggestion can be provided with a confidence level.


An embodiment provides a system and method for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor. Reference electronically stored information items and a set of uncoded electronically stored information items are designated. Each of the reference information items is previously classified. At least one uncoded electronically stored information item is compared with the reference electronically stored information items. One or more of the reference electronically stored information items similar to the at least one uncoded electronically stored information item are identified. Relationships are depicted between the at least one uncoded electronically stored information item and the similar reference electronically stored information items for classifying the at least one uncoded electronically stored information item.


A further embodiment provides a system and method for identifying reference documents for use in classifying uncoded documents. A set of reference documents is designated. Each reference document is associated with a classification code. A set of clusters each including uncoded documents is designated. At least one uncoded document is selected and compared with each of the reference documents. One or more reference documents that satisfy a threshold of similarity with the at least one uncoded document are identified. Relationships between the at least one uncoded document and the similar reference documents are displayed based on the associated classification codes as suggestions for classifying the at least one uncoded document.


Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein are described embodiments by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing a system for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor, in accordance with one embodiment.



FIG. 2 is a process flow diagram showing a method for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor, in accordance with one embodiment.



FIG. 3 is a block diagram showing, by way of example, measures for selecting a document reference subset.



FIG. 4 is a process flow diagram showing, by way of example, a method for comparing an uncoded document to reference documents for use in the method of FIG. 2.



FIG. 5 is a screenshot showing, by way of example, a visual display of reference documents in relation to uncoded documents.



FIG. 6 is an alternative visual display of the similar reference documents and uncoded documents.



FIG. 7 is a process flow diagram showing, by way of example, a method for classifying uncoded documents for use in the method of FIG. 2.





DETAILED DESCRIPTION

The ever-increasing volume of ESI underscores the need for automating document review for improved consistency and throughput. Previously coded documents offer knowledge gleaned from earlier work in similar legal projects, as well as a reference point for classifying uncoded ESI.


Providing Suggestions Using Reference Documents


Reference documents are documents that have been previously classified by content and can be used to influence classification of uncoded, that is unclassified, ESI. Specifically, relationships between the uncoded ESI and the reference ESI can be visually depicted to provide suggestions, for instance to a human reviewer, for classifying the visually-proximal uncoded ESI.


Complete ESI review requires a support environment within which classification can be performed. FIG. 1 is a block diagram showing a system 10 for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor, in accordance with one embodiment. By way of illustration, the system 10 operates in a distributed computing environment, which includes a plurality of heterogeneous systems and ESI sources. Henceforth, a single item of ESI will be referenced as a “document,” although ESI can include other forms of non-document data, as described infra. A backend server 11 is coupled to a storage device 13, which stores documents 14a, such as uncoded documents, in the form of structured or unstructured data, a database 30 for maintaining information about the documents, and a lookup database 38 for storing many-to-many mappings 39 between documents and document features, such as concepts. The storage device 13 also stores reference documents 14b, which can provide a training set of trusted and known results for use in guiding ESI classification. The reference documents 14b are each associated with an assigned classification code and considered as classified or coded. Hereinafter, the terms “classified” and “coded” are used interchangeably with the same intended meaning, unless otherwise indicated. A set of reference documents can be hand-selected or automatically selected through guided review, which is further discussed below. Additionally, the set of reference documents can be predetermined or can be generated dynamically, as the selected uncoded documents are classified and subsequently added to the set of reference documents.


The backend server 11 is coupled to an intranetwork 21 and executes a workbench suite 31 for providing a user interface framework for automated document management, processing, analysis, and classification. In a further embodiment, the backend server 11 can be accessed via an internetwork 22. The workbench software suite 31 includes a document mapper 32 that includes a clustering engine 33, similarity searcher 34, classifier 35, and display generator 36. Other workbench suite modules are possible.


The clustering engine 33 performs efficient document scoring and clustering of documents, including uncoded and coded documents, such as described in commonly-assigned U.S. Pat. No. 7,610,313, the disclosure of which is incorporated by reference. Clusters of uncoded documents 14a can be formed and organized along vectors, known as spines, based on a similarity of the clusters, which can be expressed in terms of distance. During clustering, groupings of related documents are provided. The content of each document can be converted into a set of tokens, which are word-level or character-level n-grams, raw terms, concepts, or entities. Other tokens are possible. An n-gram is a predetermined number of items selected from a source. The items can include syllables, letters, or words, as well as other items. A raw term is a term that has not been processed or manipulated. Concepts typically include nouns and noun phrases obtained through part-of-speech tagging that have a common semantic meaning. Entities further refine nouns and noun phrases into people, places, and things, such as meetings, animals, relationships, and various other objects. Entities can be extracted using entity extraction techniques known in the field. Clustering of the documents can be based on cluster criteria, such as the similarity of tokens, including n-grams, raw terms, concepts, entities, email addresses, or other metadata.
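
By way of illustration only, the sketch below shows one possible way to convert document content into raw terms and word-level n-grams; the function names and tokenization rules are assumptions chosen for the example and are not the patented implementation.

```python
import re

def raw_terms(text):
    """Split document text into unprocessed, lower-case terms (raw terms)."""
    return re.findall(r"[a-z0-9']+", text.lower())

def word_ngrams(text, n=2):
    """Return word-level n-grams: every run of n consecutive raw terms."""
    terms = raw_terms(text)
    return [tuple(terms[i:i + n]) for i in range(len(terms) - n + 1)]

# Tokenize a short document into raw terms and bigrams.
doc = "The deposition transcript is responsive to the discovery request."
print(raw_terms(doc))
print(word_ngrams(doc, n=2))
```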


In a further embodiment, the clusters can include uncoded and coded documents, which are generated based on a similarity with the uncoded documents, as discussed in commonly-owned U.S. patent application Ser. No. 12/833,860, entitled “System and Method for Displaying Relationships Between Electronically Stored Information to Provide Classification Suggestions via Inclusion,” filed Jul. 9, 2010, pending, and U.S. patent application Ser. No. 12/833,872, entitled “System and Method for Displaying Relationships Between Electronically Stored Information to Provide Classification Suggestions via Injection,” filed Jul. 9, 2010, pending, the disclosures of which are incorporated by reference.


The similarity searcher 34 identifies the reference documents 14b that are most similar to selected uncoded documents 14a, clusters, or spines, as further described below with reference to FIG. 4. For example, the uncoded documents, reference documents, clusters, and spines can each be represented by a score vector, which includes paired values consisting of a token, such as a term occurring in that document, cluster, or spine, and the associated score for that token. The score vector of the uncoded document, cluster, or spine is then compared with the score vectors of the reference documents to identify similar reference documents.
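
By way of illustration only, a score vector can be held as a mapping from tokens to scores; the sketch below uses raw term frequency as the score, which is an assumption made for the example rather than the scoring used by the described system.

```python
from collections import Counter

def score_vector(text):
    """Build a score vector as {token: score} pairs; here a token is a
    lower-case word and its score is its frequency in the document."""
    return dict(Counter(text.lower().split()))

# An uncoded document and a reference document represented as score vectors.
uncoded_vector = score_vector("merger agreement draft attached for review")
reference_vector = score_vector("draft merger agreement circulated for review")
print(uncoded_vector)
print(reference_vector)
```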


The classifier 35 provides a machine-generated suggestion and confidence level for classification of selected uncoded documents 14a, clusters, or spines, as further described below with reference to FIG. 7. The display generator 36 arranges the clusters and spines in thematic relationships in a two-dimensional visual display space, as further described below beginning with reference to FIG. 5. Once generated, the visual display space is transmitted to a work client 12 by the backend server 11 via the document mapper 32 for presenting to a reviewer on a display 37. The reviewer can include an individual person who is assigned to review and classify one or more uncoded documents by designating a code. Hereinafter, the terms “reviewer” and “custodian” are used interchangeably with the same intended meaning, unless otherwise indicated. Other types of reviewers are possible, including machine-implemented reviewers.


The document mapper 32 operates on uncoded documents 14a and coded documents 14b, which can be retrieved from the storage 13, as well as from a plurality of local and remote sources. The local sources include a local server 15, which is coupled to a storage device 16 with documents 17, and a local client 18, which is coupled to a storage device 19 with documents 20. The local server 15 and local client 18 are interconnected to the backend server 11 and the work client 12 over an intranetwork 21. In addition, the document mapper 32 can identify and retrieve documents from remote sources over an internetwork 22, including the Internet, through a gateway 23 interfaced to the intranetwork 21. The remote sources include a remote server 24, which is coupled to a storage device 25 with documents 26, and a remote client 27, which is coupled to a storage device 28 with documents 29. Other document sources, either local or remote, are possible.


The individual documents 17, 20, 26, 29 include all forms and types of structured and unstructured ESI, including electronic message stores, word processing documents, electronic mail (email) folders, Web pages, and graphical or multimedia data. Notwithstanding, the documents could be in the form of structurally organized data, such as stored in a spreadsheet or database.


In one embodiment, the individual documents 14a, 14b, 17, 20, 26, 29 include electronic message folders storing email and attachments, such as maintained by the Outlook and Outlook Express products, licensed by Microsoft Corporation, Redmond, Wash. The database can be an SQL-based relational database, such as the Oracle database management system, Release 8, licensed by Oracle Corporation, Redwood Shores, Calif.


The individual documents 17, 20, 26, 29 can be designated and stored as uncoded documents or reference documents. The uncoded documents, which are unclassified, are selected for a document review project and stored as a document corpus for classification. The reference documents are initially uncoded documents that can be selected from the corpus or other source of uncoded documents, and subsequently classified. The reference documents can assist in providing suggestions for classification of the remaining uncoded documents based on visual relationships between the uncoded documents and reference documents. In a further embodiment, the reference documents can provide classification suggestions for a document corpus associated with a related document review project. In yet a further embodiment, the reference documents can be used as a training set to form machine-generated suggestions for classifying uncoded documents, as further described below with reference to FIG. 7.


The document corpus for a document review project can be divided into subsets of uncoded documents, which are each provided to a particular reviewer as an assignment. To maintain consistency, the same classification codes can be used across all assignments in the document review project. Alternatively, the classification codes can be different for each assignment. The classification codes can be determined using taxonomy generation, during which a list of classification codes can be provided by a reviewer or determined automatically. For purposes of legal discovery, the list of classification codes can include “privileged,” “responsive,” or “non-responsive;” however, other classification codes are possible. A “privileged” document contains information that is protected by a privilege, meaning that the document should not be disclosed or “produced” to an opposing party. Disclosing a “privileged” document can result in an unintentional waiver of the subject matter disclosed. A “responsive” document contains information that is related to a legal matter on which the document review project is based and a “non-responsive” document includes information that is not related to the legal matter.


The system 10 includes individual computer systems, such as the backend server 11, work client 12, local server 15, local client 18, remote server 24, and remote client 27. The individual computer systems are general purpose, programmed digital computing devices consisting of a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and peripheral devices, including user interfacing means, such as a keyboard and display. The various implementations of the source code and object and byte codes can be held on a computer-readable storage medium, such as a floppy disk, hard drive, digital video disk (DVD), random access memory (RAM), read-only memory (ROM) and similar storage mediums. For example, program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.


Identifying relationships between the reference documents and uncoded documents includes clustering and similarity measures. FIG. 2 is a process flow diagram showing a method 40 for displaying relationships between electronically stored information to provide classification suggestions via nearest neighbor, in accordance with one embodiment. A set of document clusters is obtained (block 41). In one embodiment, the clusters can include uncoded documents, and in a further embodiment, the clusters can include uncoded and coded documents. The clustered uncoded documents can represent a corpus of uncoded documents for a document review project, or one or more assignments of uncoded documents. The document corpus can include all uncoded documents for a document review project, while each assignment can include a subset of uncoded documents selected from the corpus and assigned to a reviewer. The corpus can be divided into assignments using assignment criteria, such as custodian or source of the uncoded document, content, document type, and date. Other criteria are possible. Prior to, concurrent with, or subsequent to obtaining the cluster set, reference documents are identified (block 42). The reference documents can include all reference documents generated for a document review project, or alternatively, a subset of the reference documents. Obtaining reference documents is further discussed below with reference to FIG. 3.
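
As a minimal sketch of dividing a corpus into assignments, the code below groups uncoded documents by a single criterion, custodian; the field names and the choice of criterion are assumptions for illustration, since the text notes that content, document type, date, and other criteria are equally possible.

```python
from collections import defaultdict

def divide_into_assignments(corpus, criterion="custodian"):
    """Group uncoded documents into assignments by one assignment criterion."""
    assignments = defaultdict(list)
    for doc in corpus:
        assignments[doc[criterion]].append(doc)
    return dict(assignments)

corpus = [{"id": 1, "custodian": "smith"},
          {"id": 2, "custodian": "jones"},
          {"id": 3, "custodian": "smith"}]
print(divide_into_assignments(corpus))
```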


An uncoded document is selected from one of the clusters in the set and compared against the reference documents (block 43) to identify one or more reference documents that are similar to the selected uncoded document (block 44). The similar reference documents are identified based on a similarity measure calculated between the selected uncoded document and each reference document. Comparing the selected uncoded document with the reference documents is further discussed below with reference to FIG. 4. Once identified, relationships between the selected uncoded document and the similar reference documents can be identified (block 45) to provide classification hints, including a suggestion for the selected uncoded document, as further discussed below with reference to FIG. 5. Additionally, machine-generated suggestions for classification can be provided (block 46) with an associated confidence level for use in classifying the selected uncoded document. Machine-generated suggestions are further discussed below with reference to FIG. 7. Once the selected uncoded document is assigned a classification code, either by the reviewer or automatically, the newly classified document can be added to the set of reference documents for use in classifying further uncoded documents. Subsequently, a further uncoded document can be selected for classification using similar reference documents.
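
The flow of blocks 43 through 45 can be sketched as follows; the shared-token similarity used here is only a stand-in for the cos σ comparison described with reference to FIG. 4, and the data layout and threshold are assumptions made for the example.

```python
def shared_token_similarity(a, b):
    """Stand-in similarity: fraction of tokens shared between two documents."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def suggest_codes(uncoded_tokens, references, threshold=0.3):
    """Compare a selected uncoded document against the reference documents and
    return the classification codes of the similar ones as suggestions."""
    similar = [r for r in references
               if shared_token_similarity(uncoded_tokens, r["tokens"]) >= threshold]
    return sorted({r["code"] for r in similar})

references = [{"tokens": ["merger", "agreement", "draft"], "code": "responsive"},
              {"tokens": ["attorney", "advice", "memo"], "code": "privileged"}]
print(suggest_codes(["draft", "merger", "agreement", "attached"], references))
```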


In a further embodiment, similar reference documents can also be identified for a selected cluster or a selected spine along which the clusters are placed.


Selecting a Document Reference Subset


After the clusters have been generated, one or more uncoded documents can be selected from at least one of the clusters for comparing with a reference document set or subset. FIG. 3 is a block diagram showing, by way of example, measures 50 for selecting a document reference subset 51. The subset of reference documents 51 can be previously defined 54 and maintained for related document review projects or can be specifically generated for each review project. A predefined reference subset 54 provides knowledge previously obtained during the related document review project to increase efficiency, accuracy, and consistency. Reference subsets newly generated for each review project can include arbitrary 52 or customized 53 reference subsets that are determined automatically or by a human reviewer. An arbitrary reference subset 52 includes reference documents randomly selected for inclusion in the reference subset. A customized reference subset 53 includes reference documents specifically selected for inclusion in the reference subset based on criteria, such as reviewer preference, classification category, document source, content, and review project. Other criteria are possible.
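
For illustration, arbitrary and customized reference subsets could be drawn as sketched below; the random sampling, the classification-category criterion, and the record layout are assumptions made for the example.

```python
import random

def arbitrary_subset(reference_docs, size, seed=0):
    """Arbitrary reference subset: reference documents chosen at random."""
    rng = random.Random(seed)
    return rng.sample(reference_docs, min(size, len(reference_docs)))

def customized_subset(reference_docs, wanted_codes):
    """Customized reference subset: documents chosen by a criterion, here
    classification category (one possible criterion among several)."""
    return [d for d in reference_docs if d["code"] in wanted_codes]

refs = [{"id": 1, "code": "privileged"},
        {"id": 2, "code": "responsive"},
        {"id": 3, "code": "non-responsive"}]
print(arbitrary_subset(refs, 2))
print(customized_subset(refs, {"privileged"}))
```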


The subset of reference documents, whether predetermined or newly generated, should be selected from a set of reference documents that are representative of the document corpus for a review project in which data organization or classification is desired. Guided review assists a reviewer or other user in identifying reference documents that are representative of the corpus for use in classifying uncoded documents. During guided review, the uncoded documents that are dissimilar to all other uncoded documents are identified based on a similarity threshold. In one embodiment, the dissimilarity can be determined as the cos σ of the score vectors for the uncoded documents. Other methods for determining dissimilarity are possible. Identifying the dissimilar documents provides a group of documents that are representative of the corpus for a document review project. Each identified dissimilar document is then classified by assigning a particular classification code based on the content of the document to collectively generate the reference documents. Guided review can be performed by a reviewer, a machine, or a combination of the reviewer and machine.
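
A minimal sketch of the dissimilarity test follows, assuming the pairwise cos σ values have already been computed into a matrix; the 0.25 threshold is an arbitrary value chosen for the example.

```python
def dissimilar_documents(similarity, threshold=0.25):
    """Given a square matrix similarity[i][j] of pairwise cos σ values, return
    indices of documents whose similarity to every other document stays below
    the threshold; these are candidates for guided review."""
    n = len(similarity)
    return [i for i in range(n)
            if all(similarity[i][j] < threshold for j in range(n) if j != i)]

# Document 2 is dissimilar to both of the others and would be surfaced.
sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
print(dissimilar_documents(sim))
```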


Other methods for generating reference documents for a document review project using guided review are possible, including clustering. A set of uncoded documents to be classified is clustered, as described in commonly-assigned U.S. Pat. No. 7,610,313, the disclosure of which is incorporated by reference. A plurality of the clustered uncoded documents are selected based on selection criteria, such as cluster centers or sample clusters. The cluster centers can be used to identify uncoded documents in a cluster that are most similar or dissimilar to the cluster center. The selected uncoded documents are then assigned classification codes. In a further embodiment, sample clusters can be used to generate reference documents by selecting one or more sample clusters based on cluster relation criteria, such as size, content, similarity, or dissimilarity. The uncoded documents in the selected sample clusters are then selected for classification by assigning classification codes. The classified documents represent reference documents for the document review project. The number of reference documents can be determined automatically or by a reviewer. Other methods for selecting documents for use as reference documents are possible.


Comparing a Selected Uncoded Document to Reference Documents


An uncoded document selected from one of the clusters can be compared to the reference documents to identify similar reference documents for use in providing suggestions regarding classification of the selected uncoded document. FIG. 4 is a process flow diagram showing, by way of example, a method 60 for comparing an uncoded document to reference documents for use in the method of FIG. 2. The uncoded document is selected from a cluster (block 61) and applied to the reference documents (block 62). The reference documents can include all reference documents for a document review project or a subset of the reference documents. Each of the reference documents and the selected uncoded document can be represented by a score vector having paired values of tokens occurring within that document and associated token scores. A similarity between the uncoded document and each reference document is determined (block 63) as the cos σ of the score vectors for the uncoded document and reference document being compared and is equivalent to the inner product between the score vectors. In the described embodiment, the cos σ is calculated in accordance with the equation:







\[
\cos \sigma_{AB} = \frac{\vec{S}_A \cdot \vec{S}_B}{\lVert \vec{S}_A \rVert \, \lVert \vec{S}_B \rVert}
\]

where cos σ_AB comprises a similarity between uncoded document A and reference document B, S⃗_A comprises a score vector for uncoded document A, and S⃗_B comprises a score vector for reference document B. Other forms of determining similarity using a distance metric are possible, as would be recognized by one skilled in the art, including using Euclidean distance.
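
Read directly, the equation can be computed as sketched below, assuming score vectors are held as {token: score} mappings; the empty-vector guard is an implementation detail added for the example.

```python
import math

def cos_sigma(s_a, s_b):
    """cos σ_AB = (S_A · S_B) / (||S_A|| ||S_B||) for score vectors held as
    {token: score} mappings; returns 0.0 when either vector has zero norm."""
    dot = sum(score * s_b.get(token, 0.0) for token, score in s_a.items())
    norm_a = math.sqrt(sum(v * v for v in s_a.values()))
    norm_b = math.sqrt(sum(v * v for v in s_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Similarity between an uncoded document A and a reference document B.
print(cos_sigma({"merger": 2.0, "draft": 1.0}, {"merger": 1.0, "agreement": 1.0}))
```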


One or more of the reference documents that are most similar to the selected uncoded document, based on the similarity metric, are identified. The most similar reference documents can be identified by satisfying a predetermined threshold of similarity. Other methods for determining the similar reference documents are possible, such as setting a predetermined absolute number of the most similar reference documents. The classification codes of the identified similar reference documents can be used as suggestions for classifying the selected uncoded document, as further described below with reference to FIG. 5. Once identified, the similar reference documents can be used to provide suggestions regarding classification of the selected uncoded document, as further described below with reference to FIGS. 5 and 7.
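
Both selection policies can be sketched in a few lines, assuming the similarities have already been computed as (reference identifier, similarity) pairs; the cutoff values shown are arbitrary.

```python
def similar_references(similarities, threshold=None, top_k=None):
    """Pick similar reference documents either by a predetermined similarity
    threshold or by a predetermined absolute number of the most similar."""
    ranked = sorted(similarities, key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        ranked = [pair for pair in ranked if pair[1] >= threshold]
    if top_k is not None:
        ranked = ranked[:top_k]
    return ranked

sims = [("ref-1", 0.91), ("ref-2", 0.42), ("ref-3", 0.77)]
print(similar_references(sims, threshold=0.5))  # threshold of similarity
print(similar_references(sims, top_k=2))        # fixed number of most similar
```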


Displaying the Reference Documents


The similar reference documents can be displayed with the clusters of uncoded documents. In the display, the similar reference documents can be provided as a list, while the clusters can be organized along spines of thematically related clusters, as described in commonly-assigned U.S. Pat. No. 7,271,804, the disclosure of which is incorporated by reference. The spines can be positioned in relation to other cluster spines based on a theme shared by those cluster spines, as described in commonly-assigned U.S. Pat. No. 7,610,313, the disclosure of which is incorporated by reference. Other displays of the clusters and similar reference documents are possible.


Organizing the clusters into spines and groups of cluster spines provides an individual reviewer with a display that presents the documents according to a theme while maximizing the number of relationships depicted between the documents. FIG. 5 is a screenshot 70 showing, by way of example, a visual display 71 of similar reference documents 74 and uncoded documents 73. Clusters 72 of the uncoded documents 73 can be located along a spine, which is a vector, based on a similarity of the uncoded documents 73 in the clusters 72. The uncoded documents 73 are each represented by a smaller circle within the clusters 72.


Similar reference documents 74 identified for a selected uncoded document 73 can be displayed in a list 75 by document title or other identifier. Also, classification codes 76 associated with the similar reference documents 74 can be displayed as circles having a diamond shape within the boundary of the circle. The classification codes 76 can include “privileged,” “responsive,” and “non-responsive” codes, as well as other codes. The different classification codes 76 can each be represented by a color, such as blue for “privileged” reference documents and yellow for “non-responsive” reference documents. Other display representations of the uncoded documents, similar reference documents, and classification codes are possible, including by symbols and shapes.


The classification codes 76 of the similar reference documents 74 can provide suggestions for classifying the selected uncoded document based on factors, such as a number of different classification codes for the similar reference documents and a number of similar reference documents associated with each classification code. For example, the list of reference documents includes four similar reference documents identified for a particular uncoded document. Three of the reference documents are classified as “privileged,” while one is classified as “non-responsive.” In making a decision to assign a classification code to a selected uncoded document, the reviewer can consider classification factors based on the similar reference documents, such as a presence or absence of similar reference documents with different classification codes and a quantity of the similar reference documents for each classification code. Other classification factors are possible. In the current example, the display 71 provides suggestions, including the number of “privileged” similar reference documents, the number of “non-responsive” similar reference documents, and the absence of other classification codes of similar reference documents. Based on the number of “privileged” similar reference documents compared to the number of “non-responsive” similar reference documents, the reviewer may be more inclined to classify the selected uncoded document as “privileged.” Alternatively, the reviewer may wish to further review the selected uncoded document based on the multiple classification codes of the similar reference documents. Other classification codes and combinations of classification codes are possible. The reviewer can utilize the suggestions provided by the similar reference documents to assign a classification to the selected uncoded document. In a further embodiment, the now classified and previously uncoded document can be added to the set of reference documents for use in classifying other uncoded documents.
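
The classification factors named above can be tallied mechanically, as in the sketch below; the summary layout is an assumption made for the example.

```python
from collections import Counter

def classification_factors(similar_reference_codes):
    """Summarize factors a reviewer may weigh: which classification codes are
    present among the similar reference documents and how many documents
    carry each code."""
    counts = Counter(similar_reference_codes)
    return {"codes_present": sorted(counts), "counts": dict(counts)}

# The example from the text: three "privileged" and one "non-responsive".
print(classification_factors(["privileged", "privileged", "privileged", "non-responsive"]))
```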


In a further embodiment, similar reference documents can be identified for a cluster or spine to provide suggestions for classifying the cluster and spine. For a cluster, the similar reference documents are identified based on a comparison of a score vector for the cluster, which is representative of the cluster center, with the reference document score vectors. Meanwhile, identifying similar reference documents for a spine is based on a comparison between the score vector for the spine, which is based on the cluster centers of all the clusters along that spine, and the reference document score vectors. Once identified, the similar reference documents are used for classifying the cluster or spine.


In an even further embodiment, the uncoded documents, including the selected uncoded document, and the similar reference documents can be displayed as a document list. FIG. 6 is a screenshot 80 showing, by way of example, an alternative visual display of the similar reference documents 85 and uncoded documents 82. The uncoded documents 82 can be provided as a list in an uncoded document box 81, such as an email inbox. The uncoded documents 82 can be identified and organized using uncoded document factors, such as file name, subject, date, recipient, sender, creator, and classification category 83, if previously assigned.


At least one of the uncoded documents can be selected and displayed in a document viewing box 84. The selected uncoded document can be identified in the list 81 using a selection indicator (not shown), including a symbol, font, or highlighting. Other selection indicators and uncoded document factors are possible. Once identified, the selected uncoded document can be compared to a set of reference documents to identify the most similar reference documents 85. The identified similar reference documents 85 can be displayed below the document viewing box 84 with an associated classification code 83. The classification code of the similar reference document 85 can be used as a suggestion for classifying the selected uncoded document. After assigning a classification code, a representation 83 of the classification can be provided in the display with the selected uncoded document. In a further embodiment, the now classified and previously uncoded document can be added to the set of reference documents.


Machine Classification of Uncoded Documents


Similar reference documents can be used as suggestions to indicate a need for manual review of the uncoded documents, to indicate when review may be unnecessary, and to provide hints for classifying the uncoded documents, clusters, or spines. Additional information can be generated to assist a reviewer in making classification decisions for the uncoded documents, such as a machine-generated confidence level associated with a suggested classification code, as described in commonly-assigned U.S. patent application Ser. No. 12/833,769, entitled “System and Method for Providing a Classification Suggestion for Electronically Stored Information,” filed on Jul. 9, 2010, pending, the disclosure of which is incorporated by reference.


The machine-generated suggestion for classification and associated confidence level can be determined by a classifier. FIG. 7 is a process flow diagram 90 showing, by way of example, a method for classifying uncoded documents by a classifier for use in the method of FIG. 2. An uncoded document is selected from a cluster (block 91) and compared to a neighborhood of x-similar reference documents (block 92) to identify those similar reference documents that are most relevant to the selected uncoded document. The selected uncoded document can be the same as the uncoded document selected for identifying similar reference documents or a different uncoded document. In a further embodiment, a machine-generated suggestion can be provided for a cluster or spine by selecting and comparing the cluster or spine to a neighborhood of x-reference documents for the cluster or spine.


The neighborhood of x-similar reference documents is determined separately for each selected uncoded document and can include one or more similar reference documents. During neighborhood generation, a value of x, the number of similar reference documents to include, is first determined automatically or by an individual reviewer. The neighborhood of similar reference documents can include the reference documents, which were identified as similar reference documents according to the method of FIG. 4, or reference documents located in one or more clusters, such as the same cluster as the selected uncoded document, or in one or more files, such as an email file. Next, the x-number of similar reference documents nearest to the selected uncoded document are identified. Finally, the identified x-number of similar reference documents are provided as the neighborhood for the selected uncoded document. In a further embodiment, the x-number of similar reference documents are defined for each classification code, rather than across all classification codes. Once generated, the x-number of similar reference documents in the neighborhood and the selected uncoded document are analyzed by the classifier to provide a machine-generated classification suggestion for assigning a classification code (block 93). A confidence level for the machine-generated classification suggestion is also provided (block 94).
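
A neighborhood of x similar reference documents can be sketched as a sort-and-slice over precomputed similarities, as below; the tuple layout is an assumption made for the example.

```python
def neighborhood(similarities, x):
    """Return the x similar reference documents nearest to the selected uncoded
    document; `similarities` is a list of (reference_id, code, similarity)
    tuples, where a higher similarity means a nearer neighbor."""
    ranked = sorted(similarities, key=lambda t: t[2], reverse=True)
    return ranked[:x]

sims = [("ref-1", "privileged", 0.91), ("ref-2", "responsive", 0.42),
        ("ref-3", "privileged", 0.77), ("ref-4", "non-responsive", 0.65)]
print(neighborhood(sims, x=3))
```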


The machine-generated analysis of the selected uncoded document and x-number of similar reference documents can be based on one or more routines performed by the classifier, such as a nearest neighbor (NN) classifier. The routines for determining a suggested classification code include a minimum distance classification measure, also known as closest neighbor, a minimum average distance classification measure, a maximum count classification measure, and a distance weighted maximum count classification measure. The minimum distance classification measure for a selected uncoded document includes identifying the neighbor that is the closest distance to the selected uncoded document and assigning the classification code of that closest neighbor as the suggested classification code for the selected uncoded document. The closest neighbor is determined by computing the cos σ between the score vector of the selected uncoded document and the score vector of each of the x-number of similar reference documents in the neighborhood to obtain a distance metric. The distance metrics for the x-number of similar reference documents are compared to identify the similar reference document closest to the selected uncoded document as the closest neighbor.


The minimum average distance classification measure includes calculating an average distance of the similar reference documents for each classification code. The classification code of the similar reference documents having the closest average distance to the selected uncoded document is assigned as the suggested classification code. The maximum count classification measure, also known as the voting classification measure, includes counting a number of similar reference documents for each classification code and assigning a count or “vote” to the similar reference documents based on the assigned classification code. The classification code with the highest number of similar reference documents or “votes” is assigned to the selected uncoded document as the suggested classification code. The distance weighted maximum count classification measure includes identifying a count of all similar reference documents for each classification code and determining a distance between the selected uncoded document and each of the similar reference documents. Each count assigned to the similar reference documents is weighted based on the distance of the similar reference document from the selected uncoded document. The classification code with the highest count, after consideration of the weight, is assigned to the selected uncoded document as the suggested classification code.
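
The four measures can be sketched as below over a neighborhood given as (code, distance) pairs, where a smaller distance means a nearer neighbor; the inverse-distance weighting and the arbitrary tie-breaking are assumptions made for the example rather than details fixed by the text.

```python
from collections import defaultdict

def classify(neighbors, measure="minimum_distance"):
    """Suggest a classification code from (code, distance) neighbors using one
    of the four measures; ties are broken arbitrarily in this sketch."""
    if measure == "minimum_distance":            # closest neighbor
        return min(neighbors, key=lambda n: n[1])[0]
    if measure == "minimum_average_distance":    # closest average per code
        dists = defaultdict(list)
        for code, dist in neighbors:
            dists[code].append(dist)
        return min(dists, key=lambda c: sum(dists[c]) / len(dists[c]))
    if measure == "maximum_count":               # voting
        votes = defaultdict(int)
        for code, _ in neighbors:
            votes[code] += 1
        return max(votes, key=votes.get)
    if measure == "distance_weighted_maximum_count":
        weighted = defaultdict(float)
        for code, dist in neighbors:
            weighted[code] += 1.0 / (dist + 1e-9)  # nearer neighbors weigh more
        return max(weighted, key=weighted.get)
    raise ValueError(f"unknown measure: {measure}")

neighbors = [("privileged", 0.1), ("privileged", 0.4), ("non-responsive", 0.2)]
print(classify(neighbors, "maximum_count"))
```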


The machine-generated suggested classification code is provided for the selected uncoded document with a confidence level, which can be presented as an absolute value or a percentage. Other confidence level measures are possible. The reviewer can use the suggested classification code and confidence level to assign a classification to the selected uncoded document. Alternatively, the x-NN classifier can automatically assign the suggested classification code. In one embodiment, the x-NN classifier only assigns an uncoded document with the suggested classification code if the confidence level is above a threshold value, which can be set by the reviewer or the x-NN classifier.
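
A minimal sketch of the threshold rule follows; the 0.8 default is purely illustrative and, per the text, the threshold could instead be set by the reviewer or by the x-NN classifier.

```python
def maybe_auto_assign(suggested_code, confidence, threshold=0.8):
    """Assign the suggested code automatically only when the classifier's
    confidence level meets the threshold; otherwise defer to the reviewer."""
    return suggested_code if confidence >= threshold else None

print(maybe_auto_assign("privileged", 0.92))  # auto-assigned
print(maybe_auto_assign("privileged", 0.55))  # left for the reviewer (None)
```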


Machine classification can also occur on a cluster or spine level once one or more documents in the cluster have been classified. For instance, for cluster classification, a cluster is selected and a score vector for the center of the cluster is determined, as described above with reference to FIG. 4. Each document in the selected cluster is associated with a score vector, from which the cluster center score vector is generated. A neighborhood for the selected cluster can then be determined based on a distance metric: the score vector of the cluster center is compared with the score vector of each of the similar reference documents, and the x-number of similar reference documents closest to the cluster center are selected for inclusion in the neighborhood, as described above. However, other methods for generating a neighborhood are possible. Once determined, one of the classification routines is applied to the neighborhood to determine a suggested classification code and confidence level for the selected cluster. The neighborhood of x-number of reference documents is determined for a spine by comparing a spine score vector with the score vector of each similar reference document to identify the neighborhood of documents that are the most similar.
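
One reasonable reading of the cluster center score vector is a token-wise mean of the member documents' score vectors, sketched below; this averaging choice is an assumption made for the example and not the only way the center could be derived.

```python
from collections import defaultdict

def cluster_center(score_vectors):
    """Compute a cluster-center score vector as the token-wise mean of the
    score vectors of the documents in the cluster."""
    if not score_vectors:
        return {}
    sums = defaultdict(float)
    for vec in score_vectors:
        for token, score in vec.items():
            sums[token] += score
    return {token: total / len(score_vectors) for token, total in sums.items()}

print(cluster_center([{"merger": 2.0, "draft": 1.0}, {"merger": 1.0, "memo": 3.0}]))
```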


Providing classification suggestions and suggested classification codes has been described in relation to uncoded documents and reference documents. However, in a further embodiment, classification suggestions and suggested classification codes can be provided for the uncoded documents based on a particular token identified within the uncoded documents. The token can include concepts, n-grams, raw terms, and entities. In one example, the uncoded tokens, which are extracted from uncoded documents, can be clustered. A token can be selected from one of the clusters and compared with reference tokens. Relationships between the uncoded token and similar reference tokens can be displayed to provide classification suggestions for the uncoded token. The uncoded documents can then be classified based on the classified tokens.


While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope.

Claims
  • 1. A system for providing reference documents as a suggestion for classifying electronically stored information using nearest neighbor, comprising: a clustering module to provide a set of uncoded electronically stored information items and a different set of reference electronically stored information items that are each classified with a code; a similarity module to compare at least one of the uncoded electronically stored information items from the set with the set of reference electronically stored information items and to identify one or more of the reference electronically stored information items that are similar to the at least one uncoded electronically stored information item; a processing module to process the classification codes associated with the similar reference electronically stored information items, comprising: a type module to determine a number of different types of the classification codes associated with the similar reference electronically stored information items; a presence module to determine one or more of a presence and absence of the similar reference electronically stored information items with each type of the different classification codes; and a quantity module to determine for each type of the classification codes a quantity of the similar reference electronically stored information items; a suggestion module to display a visual classification suggestion based on at least one of the presence and the absence and the quantity and the number of the types via a display of the at least one uncoded electronically stored information item and the similar reference electronically stored information items; a receipt module to receive a classification code of one of the types for the at least one uncoded electronically stored information item from a human reviewer based on the suggestion; and a processor to execute the modules.
  • 2. A system according to claim 1, further comprising: a reference set module to generate the set of reference electronically stored information items, comprising at least one of: a comparison module to obtain a set of electronically stored information items, to identify one or more electronically stored information items that are dissimilar from each other electronically stored information item, and to assign a classification code to each of the dissimilar electronically stored information items, as the reference electronically stored information items; and a reference clustering module to group electronically stored information items for a document review project into one or more clusters, to select one or more of the electronically stored information items in at least one cluster, and to assign a classification code to each of the selected electronically stored information items, as the reference electronically stored information items.
  • 3. A system according to claim 1, further comprising: a score module to form a score vector for each uncoded electronically stored information item and each reference electronically stored information item; and the similarity module to calculate a similarity metric by comparing the score vectors for the uncoded electronically stored information items and the reference electronically stored information items.
  • 4. A system according to claim 3, wherein the similarity metric is calculated according to the following equation: cos σ_AB = (S⃗_A · S⃗_B) / (‖S⃗_A‖ ‖S⃗_B‖).
  • 5. A system according to claim 1, further comprising: a generation module to automatically generate an additional classification code for the at least one of the uncoded electronically stored information item; and an assignment module to assign one of the automatically generated classification code and the classification code received from the user to the at least one uncoded reference electronically stored information item.
  • 6. A method for providing reference documents as a suggestion for classifying electronically stored information using nearest neighbor, comprising the steps of: designating a set of uncoded electronically stored information items and a different set of reference electronically stored information items that are each classified with a code; comparing at least one of the uncoded electronically stored information items from the set with the set of reference electronically stored information items and identifying one or more of the reference electronically stored information items that are similar to the at least one uncoded electronically stored information item; processing the classification codes associated with the similar reference electronically stored information items, comprising: determining a number of different types of the classification codes associated with the similar reference electronically stored information items; determining one or more of a presence and absence of the similar reference electronically stored information items with each type of the different classification codes; and determining for each type of the classification codes a quantity of the similar reference electronically stored information items; displaying a visual classification suggestion based on at least one of the presence and the absence and the quantity and the number of the types via a display of the uncoded electronically stored information item and the similar reference electronically stored information items; and receiving a classification code of one of the types for the at least one uncoded electronically stored information item from a human reviewer based on the suggestion, wherein the steps are performed by a suitably programmed computer.
  • 7. A method according to claim 6, further comprising: generating the reference electronically stored information items from a set of electronically stored information items, comprising at least one of: identifying the electronically stored information items that are dissimilar from each other electronically stored information item and assigning a classification code to each of the dissimilar electronically stored information items, as the reference electronically stored information items; and grouping a set of electronically stored information items associated with a document review project into one or more clusters, selecting one or more of the electronically stored information items in at least one cluster, and assigning a classification code to each of the selected electronically stored information items, as the reference electronically stored information items.
  • 8. A method according to claim 6, further comprising: forming a score vector for each uncoded electronically stored information item and each reference electronically stored information item; and calculating a similarity metric by comparing the score vectors for the uncoded electronically stored information items and the reference electronically stored information items in the reference set.
  • 9. A method according to claim 8, wherein the similarity metric is calculated according to the following equation: cos σ_AB = (S⃗_A · S⃗_B) / (‖S⃗_A‖ ‖S⃗_B‖).
  • 10. A method according to claim 6, further comprising: automatically generating an additional classification code for the at least one uncoded electronically stored information items; and assigning one of the generated classification code and the classification code received from the user to the at least one uncoded reference electronically stored information item.
  • 11. A system for identifying reference documents for use in classifying uncoded documents, comprising: a database to store a set of reference documents that are each classified with a code; a clustering module to designate a set of clusters each comprising uncoded documents from a different set than the reference documents; a similarity module to select at least one uncoded document from the set, to compare the at least one uncoded document with each of the reference documents, and to identify one or more reference documents that satisfy a threshold of similarity with the at least one uncoded document; a processing module to process the classification codes associated with the similar reference documents, comprising: a type module to determine a number of different types of the classification codes associated with the similar reference documents; a presence module to determine one or more of a presence and absence of the similar reference documents with each type of the different classification codes; and a quantity module to determine for each type of the classification codes a quantity of the similar reference documents; a suggestion module to display a visual classification suggestion based on at least one of the presence and the absence and the quantity and the number of the types via a display of the at least one of the uncoded document and the similar reference documents; a classification module receiving a classification code of one of the types for the at least one uncoded document from a human reviewer based on the suggestion; and a processor to execute the modules.
  • 12. A system according to claim 11, further comprising: a reference set module to generate the set of reference documents, comprising at least one of: a reference similarity module to obtain a set of documents, to identify the documents that are dissimilar from each other document, and to assign a classification code to each of the dissimilar documents; and a reference cluster module to generate clusters of documents for a document review project, to select one or more of the documents in at least one of the clusters, and to assign a classification code to each of the documents.
  • 13. A system according to claim 11, wherein the one or more reference documents are selected from at least one of a predefined, customized, or arbitrary reference document set.
  • 14. A system according to claim 11, wherein the similarity module comprises: a score module to form a score vector for each uncoded document and each reference document; and a vector similarity module to calculate a similarity metric between the score vectors for the uncoded documents and reference documents.
  • 15. A system according to claim 14, wherein the similarity metric is calculated according to the following equation: cos σ_AB = (S⃗_A · S⃗_B) / (‖S⃗_A‖ ‖S⃗_B‖).
  • 16. A method for identifying reference documents for use in classifying uncoded documents, comprising the steps of: designating a set of reference documents that are each classified with a code; designating a set of clusters each comprising uncoded documents from a different set than the reference documents; selecting at least one uncoded document from the set and comparing the at least one uncoded document with each of the reference documents and identifying one or more reference documents that satisfy a threshold of similarity with the at least one uncoded document; a processing module to process the classification codes associated with the similar reference documents, comprising: determining a number of different types of the classification codes associated with the similar reference documents; determining one or more of a presence and absence of the similar reference documents with each type of the different classification codes; and determining a quantity of the similar reference documents for each type of the different classification codes; displaying a visual classification suggestion based on at least one of the presence and the absence and the quantity and the number of the types via a display of the at least one of the uncoded document and the similar reference documents; and receiving a classification code of one of the types for the at least one uncoded document from a human reviewer based on the suggestion, wherein the steps are performed by a suitably programmed computer.
  • 17. A method according to claim 16, further comprising: generating the set of reference documents, comprising at least one of: obtaining a set of documents, identifying the documents that are dissimilar from each other document, and assigning a classification code to each of the dissimilar documents; and generating clusters of documents, selecting one or more of the documents in at least one of the clusters and assigning a classification code to each of the documents.
  • 18. A method according to claim 16, wherein the one or more reference documents are selected from at least one of a predefined, customized, or arbitrary reference document set.
  • 19. A method according to claim 16, further comprising: forming a score vector for each uncoded document and each reference document; and calculating a similarity metric between the score vectors for the uncoded documents and reference documents.
  • 20. A method according to claim 19, wherein the similarity metric is calculated according to the following equation: cos σ_AB = (S⃗_A · S⃗_B) / (‖S⃗_A‖ ‖S⃗_B‖).
CROSS-REFERENCE TO RELATED APPLICATION

This non-provisional patent application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 61/229,216, filed Jul. 28, 2009, and U.S. Provisional Patent Application Ser. No. 61/236,490, filed Aug. 24, 2009, the disclosures of which are incorporated by reference.

US Referenced Citations (280)
Number Name Date Kind
3416150 Lindberg Dec 1968 A
3426210 Agin Feb 1969 A
3668658 Flores et al. Jun 1972 A
4893253 Lodder Jan 1990 A
5056021 Ausborn Oct 1991 A
5121338 Lodder Jun 1992 A
5133067 Hara et al. Jul 1992 A
5278980 Pedersen et al. Jan 1994 A
5371673 Fan Dec 1994 A
5442778 Pedersen et al. Aug 1995 A
5477451 Brown et al. Dec 1995 A
5488725 Turtle et al. Jan 1996 A
5524177 Suzuoka Jun 1996 A
5528735 Strasnick et al. Jun 1996 A
5619632 Lamping et al. Apr 1997 A
5619709 Caid et al. Apr 1997 A
5635929 Rabowsky et al. Jun 1997 A
5649193 Sumita et al. Jul 1997 A
5675819 Schuetze Oct 1997 A
5696962 Kupiec Dec 1997 A
5737734 Schultz Apr 1998 A
5754938 Herz et al. May 1998 A
5794236 Mehrle Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5819258 Vaithyanathan et al. Oct 1998 A
5842203 D'Elena et al. Nov 1998 A
5844991 Hochberg et al. Dec 1998 A
5857179 Vaithyanathan et al. Jan 1999 A
5860136 Fenner Jan 1999 A
5862325 Reed et al. Jan 1999 A
5864846 Voorhees et al. Jan 1999 A
5864871 Kitain et al. Jan 1999 A
5867799 Lang et al. Feb 1999 A
5870740 Rose et al. Feb 1999 A
5909677 Broder et al. Jun 1999 A
5915024 Kitaori et al. Jun 1999 A
5920854 Kirsch et al. Jul 1999 A
5924105 Punch et al. Jul 1999 A
5940821 Wical Aug 1999 A
5950146 Vapnik Sep 1999 A
5950189 Cohen et al. Sep 1999 A
5966126 Szabo Oct 1999 A
5987446 Corey et al. Nov 1999 A
6006221 Liddy et al. Dec 1999 A
6012053 Pant et al. Jan 2000 A
6026397 Sheppard Feb 2000 A
6038574 Pitkow et al. Mar 2000 A
6070133 Brewster et al. May 2000 A
6089742 Warmerdam et al. Jul 2000 A
6092059 Straforini et al. Jul 2000 A
6094649 Bowen et al. Jul 2000 A
6100901 Mohda et al. Aug 2000 A
6119124 Broder et al. Sep 2000 A
6122628 Castelli et al. Sep 2000 A
6137499 Tesler Oct 2000 A
6137545 Patel et al. Oct 2000 A
6137911 Zhilyaev Oct 2000 A
6148102 Stolin Nov 2000 A
6154219 Wiley et al. Nov 2000 A
6167368 Wacholder Dec 2000 A
6173275 Caid et al. Jan 2001 B1
6202064 Julliard Mar 2001 B1
6216123 Robertson et al. Apr 2001 B1
6243713 Nelson et al. Jun 2001 B1
6243724 Mander et al. Jun 2001 B1
6260038 Martin et al. Jul 2001 B1
6326962 Szabo Dec 2001 B1
6338062 Liu Jan 2002 B1
6345243 Clark Feb 2002 B1
6349296 Broder et al. Feb 2002 B1
6349307 Chen Feb 2002 B1
6360227 Aggarwal et al. Mar 2002 B1
6363374 Corston-Oliver et al. Mar 2002 B1
6377287 Hao et al. Apr 2002 B1
6381601 Fujiwara et al. Apr 2002 B1
6389433 Bolosky et al. May 2002 B1
6389436 Chakrabarti et al. May 2002 B1
6408294 Getchius et al. Jun 2002 B1
6414677 Robertson et al. Jul 2002 B1
6415283 Conklin Jul 2002 B1
6418431 Mahajan et al. Jul 2002 B1
6421709 McCormick et al. Jul 2002 B1
6438537 Netz et al. Aug 2002 B1
6438564 Morton et al. Aug 2002 B1
6442592 Alumbaugh et al. Aug 2002 B1
6446061 Doerre et al. Sep 2002 B1
6449612 Bradley et al. Sep 2002 B1
6453327 Nielsen Sep 2002 B1
6460034 Wical Oct 2002 B1
6470307 Turney Oct 2002 B1
6480843 Li Nov 2002 B2
6480885 Olivier Nov 2002 B1
6484168 Pennock et al. Nov 2002 B1
6484196 Maurille Nov 2002 B1
6493703 Knight et al. Dec 2002 B1
6496822 Rosenfelt et al. Dec 2002 B2
6502081 Wiltshire et al. Dec 2002 B1
6507847 Fleischman Jan 2003 B1
6510406 Marchisio Jan 2003 B1
6519580 Johnson et al. Feb 2003 B1
6523026 Gillis Feb 2003 B1
6523063 Miller et al. Feb 2003 B1
6542889 Aggarwal et al. Apr 2003 B1
6544123 Tanaka et al. Apr 2003 B1
6549957 Hanson et al. Apr 2003 B1
6560597 Dhillon et al. May 2003 B1
6571225 Oles et al. May 2003 B1
6584564 Olkin et al. Jun 2003 B2
6594658 Woods Jul 2003 B2
6598054 Schuetze et al. Jul 2003 B2
6606625 Muslea et al. Aug 2003 B1
6611825 Billheimer et al. Aug 2003 B1
6628304 Mitchell et al. Sep 2003 B2
6629097 Keith Sep 2003 B1
6640009 Zlotnick Oct 2003 B2
6651057 Jin et al. Nov 2003 B1
6654739 Apte et al. Nov 2003 B1
6658423 Pugh et al. Dec 2003 B1
6675159 Lin et al. Jan 2004 B1
6675164 Kamath et al. Jan 2004 B2
6678705 Berchtold et al. Jan 2004 B1
6684205 Modha et al. Jan 2004 B1
6697998 Damerau et al. Feb 2004 B1
6701305 Holt et al. Mar 2004 B1
6711585 Copperman et al. Mar 2004 B1
6714929 Micaelian et al. Mar 2004 B1
6735578 Shetty et al. May 2004 B2
6738759 Wheeler et al. May 2004 B1
6747646 Gueziec et al. Jun 2004 B2
6751628 Coady Jun 2004 B2
6757646 Marchisio Jun 2004 B2
6785679 Dane et al. Aug 2004 B1
6804665 Kreulen et al. Oct 2004 B2
6816175 Hamp et al. Nov 2004 B1
6819344 Robbins Nov 2004 B2
6823333 McGreevy Nov 2004 B2
6841321 Matsumoto et al. Jan 2005 B2
6847966 Sommer et al. Jan 2005 B1
6862710 Marchisio Mar 2005 B1
6879332 Decombe Apr 2005 B2
6883001 Abe Apr 2005 B2
6886010 Kostoff Apr 2005 B2
6888584 Suzuki et al. May 2005 B2
6915308 Evans et al. Jul 2005 B1
6922699 Schuetze et al. Jul 2005 B2
6941325 Benitez et al. Sep 2005 B1
6970881 Mohan et al. Nov 2005 B1
6978419 Kantrowitz Dec 2005 B1
6990238 Saffer et al. Jan 2006 B1
6993535 Bolle et al. Jan 2006 B2
6996575 Cox et al. Feb 2006 B2
7003551 Malik Feb 2006 B2
7013435 Gallo et al. Mar 2006 B2
7020645 Bisbee et al. Mar 2006 B2
7039856 Peairs et al. May 2006 B2
7051017 Marchisio May 2006 B2
7054870 Holbrook May 2006 B2
7080320 Ono Jul 2006 B2
7096431 Tambata et al. Aug 2006 B2
7099819 Sakai et al. Aug 2006 B2
7107266 Breyman et al. Sep 2006 B1
7117151 Iwahashi et al. Oct 2006 B2
7117246 Christenson et al. Oct 2006 B2
7130807 Mikurak Oct 2006 B1
7137075 HoshiTo et al. Nov 2006 B2
7139739 Agrafiotis et al. Nov 2006 B2
7146361 Broder et al. Dec 2006 B2
7155668 Holland et al. Dec 2006 B2
7188107 Moon et al. Mar 2007 B2
7188117 Farahat et al. Mar 2007 B2
7194458 Micaelian et al. Mar 2007 B1
7194483 Mohan et al. Mar 2007 B1
7197497 Cossock Mar 2007 B2
7209949 Mousseau et al. Apr 2007 B2
7233886 Wegerich et al. Jun 2007 B2
7233940 Bamberger et al. Jun 2007 B2
7239986 Golub et al. Jul 2007 B2
7240199 Tomkow Jul 2007 B2
7246113 Cheetham et al. Jul 2007 B2
7251637 Caid et al. Jul 2007 B1
7266365 Ferguson et al. Sep 2007 B2
7266545 Bergman et al. Sep 2007 B2
7269598 Marchisio Sep 2007 B2
7271801 Toyozawa et al. Sep 2007 B2
7277919 Donoho et al. Oct 2007 B1
7325127 Olkin et al. Jan 2008 B2
7353204 Liu Apr 2008 B2
7359894 Liebman et al. Apr 2008 B1
7363243 Arnett et al. Apr 2008 B2
7366759 Trevithick et al. Apr 2008 B2
7373612 Risch et al. May 2008 B2
7376635 Porcari et al. May 2008 B1
7379913 Steele et al. May 2008 B2
7383282 Whitehead et al. Jun 2008 B2
7401087 Copperman et al. Jul 2008 B2
7412462 Margolus et al. Aug 2008 B2
7418397 Kojima et al. Aug 2008 B2
7430717 Spangler Sep 2008 B1
7433893 Lowry Oct 2008 B2
7440662 Antona et al. Oct 2008 B2
7444356 Calistri-Yeh et al. Oct 2008 B2
7457948 Bilicksa et al. Nov 2008 B1
7472110 Achlioptas Dec 2008 B2
7490092 Morton et al. Feb 2009 B2
7509256 Iwahashi et al. Mar 2009 B2
7516419 Petro et al. Apr 2009 B2
7523349 Barras Apr 2009 B2
7558769 Scott et al. Jul 2009 B2
7571177 Damle Aug 2009 B2
7574409 Patinkin Aug 2009 B2
7584221 Robertson et al. Sep 2009 B2
7610313 Kawai et al. Oct 2009 B2
7639868 Regli et al. Dec 2009 B1
7640219 Perrizo Dec 2009 B2
7647345 Trepess et al. Jan 2010 B2
7668376 Lin et al. Feb 2010 B2
7698167 Batham et al. Apr 2010 B2
7716223 Haveliwala et al. May 2010 B2
7743059 Chan et al. Jun 2010 B2
7761447 Brill et al. Jul 2010 B2
7801841 Mishra et al. Sep 2010 B2
7885901 Hull et al. Feb 2011 B2
7971150 Raskutti et al. Jun 2011 B2
8010466 Patinkin Aug 2011 B2
8010534 Roitblat et al. Aug 2011 B2
8165974 Privault et al. Apr 2012 B2
20020032735 Burnstein et al. Mar 2002 A1
20020065912 Catchpole et al. May 2002 A1
20020078044 Song et al. Jun 2002 A1
20020078090 Hwang et al. Jun 2002 A1
20020122543 Rowen Sep 2002 A1
20020184193 Cohen Dec 2002 A1
20030046311 Baidya et al. Mar 2003 A1
20030130991 Reijerse et al. Jul 2003 A1
20030172048 Kauffman Sep 2003 A1
20030174179 Suermondt et al. Sep 2003 A1
20040024739 Copperman et al. Feb 2004 A1
20040024755 Rickard Feb 2004 A1
20040034633 Rickard Feb 2004 A1
20040205482 Basu et al. Oct 2004 A1
20040205578 Wolff et al. Oct 2004 A1
20040215608 Gourlay Oct 2004 A1
20040243556 Ferrucci et al. Dec 2004 A1
20050025357 Landwehr et al. Feb 2005 A1
20050097435 Prakash et al. May 2005 A1
20050171772 Iwahashi et al. Aug 2005 A1
20050203924 Rosenberg Sep 2005 A1
20050283473 Rousso et al. Dec 2005 A1
20060008151 Lin et al. Jan 2006 A1
20060021009 Lunt Jan 2006 A1
20060053382 Gardner et al. Mar 2006 A1
20060122974 Perisic Jun 2006 A1
20060122997 Lin Jun 2006 A1
20070020642 Deng et al. Jan 2007 A1
20070043774 Davis et al. Feb 2007 A1
20070044032 Mollitor et al. Feb 2007 A1
20070112758 Livaditis May 2007 A1
20070150801 Chidlovskii et al. Jun 2007 A1
20070214133 Liberty et al. Sep 2007 A1
20070288445 Kraftsow Dec 2007 A1
20080005081 Green et al. Jan 2008 A1
20080140643 Ismalon Jun 2008 A1
20080183855 Agarwal et al. Jul 2008 A1
20080189273 Kraftsow Aug 2008 A1
20080215427 Kawada et al. Sep 2008 A1
20080228675 Duffy et al. Sep 2008 A1
20090041329 Nordell et al. Feb 2009 A1
20090043797 Dorie et al. Feb 2009 A1
20090049017 Gross Feb 2009 A1
20090097733 Hero et al. Apr 2009 A1
20090106239 Getner et al. Apr 2009 A1
20090222444 Chowdhury et al. Sep 2009 A1
20090228499 Schmidtler et al. Sep 2009 A1
20090228811 Adams et al. Sep 2009 A1
20100100539 Davis et al. Apr 2010 A1
20100198802 Kraftsow Aug 2010 A1
20100250477 Yadav Sep 2010 A1
20100262571 Schmidtler et al. Oct 2010 A1
20100268661 Levy et al. Oct 2010 A1
20120124034 Jing et al. May 2012 A1
Foreign Referenced Citations (9)
Number Date Country
1024437 Aug 2000 EP
1049030 Nov 2000 EP
0886227 Oct 2003 EP
WO 0067162 Nov 2000 WO
WO 200067162 Nov 2000 WO
03052627 Jun 2003 WO
03060766 Jul 2003 WO
2006008733 Jul 2004 WO
WO 2005073881 Aug 2005 WO
Non-Patent Literature Citations (45)
Entry
Anna Sachinopoulou, “Multidimensional Visualization,” Technical Research Centre of Finland, Espoo 2001, VTT Research Notes 2114, pp. 1-37 (2001).
B.B. Hubbard, “The World According to Wavelets: The Story of a Mathematical Technique in the Making,” A K Peters (2nd ed.), pp. 227-229, Massachusetts, USA (1998).
Baeza-Yates et al., “Modern Information Retrieval,” Ch. 2 “Modeling,” Modern Information Retrieval, Harlow: Addison-Wesley, Great Britain 1999, pp. 18-71 (1999).
Bernard et al.: “Labeled Radial Drawing of Data Structures” Proceedings of the Seventh International Conference on Information Visualization, Infovis. IEEE Symposium, Jul. 16-18, 2003, Piscataway, NJ, USA, IEEE, Jul. 16, 2003, pp. 479-484, XP010648809 (2003).
Bier et al. “Toolglass and Magic Lenses: The See-Through Interface”, Computer Graphics Proceedings, Proceedings of Siggraph Annual International Conference on Computer Graphics and Interactive Techniques, pp. 73-80, XP000879378 (Aug. 1993).
Boukhelifa et al., “A Model and Software System for Coordinated and Multiple Views in Exploratory Visualization,” Information Visualization, No. 2, pp. 258-269, GB (2003).
Chung et al., “Thematic Mapping-From Unstructured Documents to Taxonomies,” CIKM'02, Nov. 4-9, 2002, pp. 608-610, ACM, McLean, Virginia, USA (Nov. 4, 2002).
Chen an et al., “Fuzzy Concept Graph and Application in Web Document Clustering,” IEEE, pp. 101-106 (2001).
Davison et al., “Brute Force Estimation of the Number of Human Genes Using EST Clustering as a Measure,” IBM Journal of Research & Development, vol. 45, pp. 439-447 (May 2001).
Eades et al. “Multilevel Visualization of Clustered Graphs,” Department of Computer Science and Software Engineering, University of Newcastle, Australia, Proceedings of Graph Drawing '96, Lecture Notes in Computer Science, NR. 1190 (Sep. 1996).
Eades et al., “Orthogonal Grid Drawing of Clustered Graphs,” Department of Computer Science, the University of Newcastle, Australia, Technical Report 96-04, [Online] 1996, Retrieved from the internet: URL:http://citeseer.ist.psu.edu/eades96orthogonal.html (1996).
Estivill-Castro et al., “Amoeba: Hierarchical Clustering Based on Spatial Proximity Using Delaunay Diagram,” Department of Computer Science, The University of Newcastle, Australia, 1999 ACM Sigmod International Conference on Management of Data, vol. 28, No. 2, Jun. 1, 1999, Jun. 3, 1999, pp. 49-60, Philadelphia, PA, USA (Jun. 1999).
F. Can, Incremental Clustering for Dynamic Information Processing: ACM Transactions on Information Systems, ACM, New York, NY, US, vol. 11, No. 2, pp. 143-164, XP-002308022 (Apr. 1993).
Fekete et al., “Excentric Labeling: Dynamic Neighborhood Labeling for Data Visualization,” CHI 1999 Conference Proceedings Human Factors in Computing Systems, Pittsburgh, PA, pp. 512-519 (May 15-20, 1999).
http://em-ntserver.unl.edu/Math/mathweb/vecors/vectors.html © 1997.
Inxight VizServer, “Speeds and Simplifies the Exploration and Sharing of Information”, www.inxight.com/products/vizserver, copyright 2005.
Jain et al., “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, No. 3, Sep. 1999, pp. 264-323, New York, NY, USA (Sep. 1999).
Osborn et al., “Justice: A Judicial Search Tool Using Intelligent Concept Extraction,” Department of Computer Science and Software Engineering, University of Melbourne, Australia, ICAIL-99, pp. 173-181, ACM (1999).
Jiang Linhui, “K-Mean Algorithm: Iterative Partitioning Clustering Algorithm,” http://www.cs.regina.ca/-linhui/K.sub.--mean.sub.--algorithm.html, (2001) Computer Science Department, University of Regina, Saskatchewan, Canada (2001).
Kanungo et al., “The Analysis of a Simple K-Means Clustering Algorithm,” pp. 100-109, PROC 16th annual symposium of computational geometry (May 2000).
O'Neill et al., “DISCO: Intelligent Help for Document Review,” 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, Jun. 8, 2009, pp. 1-10, ICAIL 2009, Association for Computing Machinery, Red Hook, New York (Online); XP 002607216.
McNee, “Meeting User Information Needs in Recommender Systems,” Ph.D. Dissertation, University of Minnesota-Twin Cities, Jun. 2006.
Slaney et al., “Multimedia Edges: Finding Hierarchy in all Dimensions” PROC. 9-th ACM Intl. Conf. on Multimedia, pp. 29-40, ISBN. 1-58113-394-4, Sep. 30, 2001, XP002295016 Ottawa (Sep. 30, 2001).
Strehl et al., “Cluster Ensembles-A Knowledge Reuse Framework for Combining Multiple Partitions,” Journal of Machine Learning Research, MIT Press, Cambridge, MA, US, ISSN: 1533-7928, vol. 3, No. 12, pp. 583-617, XP002390603 (Dec. 2002).
Dan Sullivan, “Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing and Sales,” Ch. 1-3, John Wiley & Sons, New York, NY (2001).
V. Faber, “Clustering and the Continuous K-Means Algorithm,” Los Alamos Science, The Laboratory, Los Alamos, NM, US, No. 22, Jan. 1, 1994, pp. 138-144 (Jan. 1, 1994).
Wang et al., “Learning text classifier using the domain concept hierarchy,” Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference on Jun. 29-Jul. 1, 2002, Piscataway, NJ, USA, IEEE, vol. 2, pp. 1230-1234 (2002).
Whiting et al., “Image Quantization: Statistics and Modeling,” SPIE Conference of Physics of Medical Imaging, San Diego, CA, USA, vol. 3336, pp. 260-271 (Feb. 1998).
Ryall et al., “An Interactive Constraint-Based System for Drawing Graphs,” UIST '97 Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 97-104 (1997).
Hiroyuki Kawano, “Overview of Mondou Web Search Engine Using Text Mining and Information Visualizing Technologies,” IEEE, 2001, pp. 234-241 (2001).
Kazumasa Ozawa, “A Stratificational Overlapping Cluster Scheme,” Information Science Center, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572, Japan, Pattern Recognition, vol. 18, pp. 279-286 (1985).
Kohonen, “Self-Organizing Maps,” Ch. 1-2, Springer-Verlag (3rd ed.) (2001).
M. Kurimo, “Fast Latent Semantic Indexing of Spoken Documents by Using Self-Organizing Maps,” IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 2425-2428 (Jun. 2000).
Lam et al., “A Sliding Window Technique for Word Recognition,” SPIE, vol. 2422, pp. 38-46, Center of Excellence for Document Analysis and Recognition, State University of New York at Buffalo, NY, USA (1995).
Lio et al., “Finding Pathogenicity Islands and Gene Transfer Events in Genome Data,” Bioinformatics, vol. 16, pp. 932-940, Department of Zoology, University of Cambridge, UK (Jan. 25, 2000).
Artero et al., “Viz3D: Effective Exploratory Visualization of Large Multidimensional Data Sets,” IEEE Computer Graphics and Image Processing, pp. 340-347 (Oct. 20, 2004).
Magarshak, Theory & Practice. Issue 01. May 17, 2000. http://www.flipcode.com/articles/tp.sub.--issue01-pf.shtml (May 17, 2000).
Maria Cristina Ferreira de Oliveira et al., “From Visual Data Exploration to Visual Data Mining: A Survey,” Jul.-Sep. 2003, IEEE Transactions on Visualization and Computer Graphics, vol. 9, No. 3, pp. 378-394 (Jul. 2003).
Miller et al., “Topic Islands: A Wavelet Based Text Visualization System,” Proceedings of the IEEE Visualization Conference, pp. 189-196 (1998).
North et al. “A Taxonomy of Multiple Window Coordinations,” Institute for Systems Research & Department of Computer Science, University of Maryland, Maryland, USA, http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=97-18 (1997).
Pelleg et al., “Accelerating Exact K-Means Algorithms With Geometric Reasoning,” pp. 277-281, CONF on Knowledge Discovery in Data, PROC fifth ACM SIGKDD (1999).
R.E. Horn, “Communication Units, Morphology, and Syntax,” Visual Language: Global Communication for the 21st Century, 1998, Ch. 3, pp. 51-92, MacroVU Press, Bainbridge Island, Washington, USA.
Rauber et al., “Text Mining in the SOMLib Digital Library System: The Representation of Topics and Genres,” Applied Intelligence 18, pp. 271-293, 2003 Kluwer Academic Publishers (2003).
Shuldberg et al., “Distilling Information from Text: The EDS TemplateFiller System,” Journal of the American Society for Information Science, vol. 44, pp. 493-507 (1993).
S.S. Weng, C.K. Liu, “Using text classification and multiple concepts to answer e-mails.” Expert Systems with Applications, 26 (2004), pp. 529-543.
Related Publications (1)
Number Date Country
20110029527 A1 Feb 2011 US
Provisional Applications (2)
Number Date Country
61229216 Jul 2009 US
61236490 Aug 2009 US