The invention relates in general to user interfaces and, in particular, to a computer-implemented system and method for analyzing clusters of coded documents.
Text mining can be used to extract latent semantic content from collections of structured and unstructured text. Data visualization can be used to model the extracted semantic content by transforming numeric or textual data into graphical data that assists users in understanding underlying semantic principles. For example, clusters group sets of concepts into a graphical element that can be mapped into a graphical screen display. When represented in multi-dimensional space, the spatial orientation of the clusters reflects similarities and relatedness. However, forcibly mapping the display of the clusters into a three-dimensional scene or a two-dimensional screen can cause data misinterpretation. For instance, a viewer could misinterpret dependent relationships between adjacently displayed clusters or confuse dependent and independent variables. As well, a screen of densely-packed clusters can be difficult to understand and navigate, particularly where annotated text labels overlie clusters directly. Other factors can further complicate visualized data perception, such as described in R.E. Horn, “Visual Language: Global Communication for the 21st Century,” Ch. 3, MacroVU Press (1998), the disclosure of which is incorporated by reference.
Physically, data visualization is constrained by the limits of the screen display used. Two-dimensional visualized data can be accurately displayed, yet visualized data of greater dimensionality must be artificially projected into two dimensions when presented on conventional screen displays. Careful use of color, shape and temporal attributes can simulate multiple dimensions, but comprehension and usability become increasingly difficult as additional layers are artificially grafted into the two-dimensional space and screen density increases. In addition, large sets of data, such as email stores, document archives and databases, can be content rich and can yield large sets of clusters that result in a complex graphical representation. Physical display space, however, is limited and large cluster sets can appear crowded and dense, thereby hindering understandability. To aid navigation through the display, the cluster sets can be combined, abstracted or manipulated to simplify presentation, but semantic content can be lost or skewed.
Moreover, complex graphical data can be difficult to comprehend when displayed without textual references to underlying content. The user is forced to mentally note “landmark” clusters and other visual cues, which can be particularly difficult with large cluster sets. Visualized data can be annotated with text, such as cluster labels, to aid comprehension and usability. However, annotating text directly into a graphical display can be cumbersome, particularly where the clusters are densely packed and cluster labels overlay or occlude the screen display. A more subtle problem occurs when the screen is displaying a two-dimensional projection of three-dimensional data and the text is annotated within the two-dimensional space. Relabeling the text based on the two-dimensional representation can introduce misinterpretations of the three-dimensional data when the display is reoriented. Also, reorienting the display can visually shuffle the displayed clusters and cause a loss of user orientation. Furthermore, navigation can be non-intuitive and cumbersome, as cluster placement is driven by available display space and the labels may overlay or intersect placed clusters.
Therefore, there is a need for providing a user interface for focused display of dense visualized three-dimensional data representing extracted semantic content as a combination of graphical and textual data elements. Preferably, the user interface would facilitate convenient navigation through a heads-up display (HUD) logically provided over visualized data and would enable large- or fine-grained data navigation, searching and data exploration.
An embodiment provides a system and method for providing a user interface for a dense three-dimensional scene. Clusters are placed in a three-dimensional scene, with each cluster arranged proximal to other such clusters to form a cluster spine. Each cluster includes one or more concepts. Each cluster spine is projected into a two-dimensional display relative to a stationary perspective. Controls operating on a view of the cluster spines in the display are presented. A compass logically framing the cluster spines within the display is provided. A label to identify one such concept in one or more of the cluster spines appearing within the compass is generated. A plurality of slots in the two-dimensional display positioned circumferentially around the compass is defined. Each label is assigned to the slot outside of the compass for the cluster spine having a closest angularity to the slot.
A further embodiment provides a system and method for providing a dynamic user interface for a dense three-dimensional scene with a navigation assistance panel. Clusters are placed in a three-dimensional scene, with each cluster arranged proximal to other such clusters to form a cluster spine. Each cluster includes one or more concepts. Each cluster spine is projected into a two-dimensional display relative to a stationary perspective. Controls operating on a view of the cluster spines in the display are presented. A compass logically framing the cluster spines within the display is provided. A label is generated to identify one such concept in one or more of the cluster spines appearing within the compass. A plurality of slots in the two-dimensional display is defined positioned circumferentially around the compass. Each label is assigned to the slot outside of the compass for the cluster spine having a closest angularity to the slot. A perspective-altered rendition of the two-dimensional display is generated. The perspective-altered rendition includes the projected cluster spines and a navigation assistance panel framing an area of the perspective-altered rendition corresponding to the view of the cluster spines in the display.
A still further embodiment provides a computer-implemented system and method for analyzing clusters of coded documents. A display of clusters of documents is provided and at least a portion of the documents in the display are each associated with a classification code. A representation of each of the documents is provided within the display based on one of the associated classification code and an absence of the associated classification code. A search query is received and includes one or more search terms. Each search term is associated with one of the classification codes based on the documents. Those documents that satisfy the search query are identified and the representations of the identified documents are changed based on the classification codes associated with one or more of the search terms. The change in representation provides one of an indication of agreement between the classification code associated with one such document and the classification codes of the one or more search terms, and an indication of disagreement between the classification code associated with the document and the classification codes of the search terms.
Still other embodiments of the invention will become readily apparent to those skilled in the art from the following detailed description, wherein are described embodiments of the invention by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
The foregoing terms are used throughout this document and, unless indicated otherwise, are assigned the meanings presented above.
System Overview
The document mapper 32 operates on documents retrieved from a plurality of local sources. The local sources include documents 17 maintained in a storage device 16 coupled to a local server 15 and documents 20 maintained in a storage device 19 coupled to a local client 18. The local server 15 and local client 18 are interconnected to the production system 11 over an intranetwork 21. In addition, the document mapper 32 can identify and retrieve documents from remote sources over an internetwork 22, including the Internet, through a gateway 23 interfaced to the intranetwork 21. The remote sources include documents 26 maintained in a storage device 25 coupled to a remote server 24 and documents 29 maintained in a storage device 28 coupled to a remote client 27.
The individual documents 17, 20, 26, 29 include all forms and types of structured and unstructured data, including electronic message stores, word processing documents, electronic mail (email) folders, Web pages, and graphical or multimedia data. Notwithstanding, the documents could be in the form of organized data, such as stored in a spreadsheet or database.
In one embodiment, the individual documents 17, 20, 26, 29 include electronic message folders, such as maintained by the Outlook and Outlook Express products, licensed by Microsoft Corporation, Redmond, Wash.
The database is an SQL-based relational database, such as the Oracle database management system, release 8, licensed by Oracle Corporation, Redwood Shores, Calif.
The individual computer systems, including backend server 11, production server 32, server 15, client 18, remote server 24 and remote client 27, are general purpose, programmed digital computing devices consisting of a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and peripheral devices, including user interfacing means, such as a keyboard and display. Program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.
Display Generator
Individual documents 14 are analyzed by the clustering component 41 to form clusters 45 of semantically scored documents, such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued Oct. 27, 2009, the disclosure of which is incorporated by reference. In one embodiment, document concepts 46 are formed from concepts and terms extracted from the documents 14 and the frequencies of occurrences and reference counts of the concepts and terms are determined. Each concept and term is then scored based on frequency, concept weight, structural weight, and corpus weight. The document concept scores are compressed and assigned to normalized score vectors for each of the documents 14. The similarities between each of the normalized score vectors are determined, preferably as cosine values. A set of candidate seed documents is evaluated to select a set of seed documents 44 as initial cluster centers based on relative similarity between the assigned normalized score vectors for each of the candidate seed documents or using a dynamic threshold based on an analysis of the similarities of the documents 14 from a center of each cluster 45, such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued Oct. 27, 2009, the disclosure of which is incorporated by reference. The remaining non-seed documents are evaluated against the cluster centers also based on relative similarity and are grouped into the clusters 45 based on best-fit, subject to a minimum fit criterion.
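The scoring and grouping steps described above can be illustrated with a minimal sketch. The threshold values, best-fit rule, and function name below are assumptions for illustration only, not the patented method; the sketch also assumes the score vectors are already normalized, so that cosine similarity reduces to a dot product.

```python
import numpy as np

def cluster_documents(score_vectors, similarity_threshold=0.6, min_fit=0.3):
    """Group normalized document score vectors into clusters by cosine
    similarity: pick mutually dissimilar seeds first, then best-fit
    assign the remaining documents (illustrative sketch only)."""
    seeds = []  # indices of seed documents used as cluster centers
    for i, v in enumerate(score_vectors):
        # Accept as a seed only if sufficiently dissimilar to all prior seeds.
        if all(np.dot(v, score_vectors[s]) < similarity_threshold for s in seeds):
            seeds.append(i)
    clusters = {s: [s] for s in seeds}
    for i, v in enumerate(score_vectors):
        if i in seeds:
            continue
        # Best-fit: assign to the most similar center, subject to a minimum fit.
        best_fit, best_seed = max((np.dot(v, score_vectors[s]), s) for s in seeds)
        if best_fit >= min_fit:
            clusters[best_seed].append(i)
    return clusters
```

Documents failing the minimum fit criterion simply remain unclustered in this sketch, mirroring the best-fit-subject-to-minimum-fit rule described above.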
The clustering component 41 analyzes cluster similarities in a multidimensional problem space, while the cluster spine placement component 42 maps the clusters into a three-dimensional virtual space that is then projected onto a two-dimensional screen space, as further described below with reference to
During visualization, cluster “spines” and certain clusters 45 are placed as cluster groups 49 within a virtual three-dimensional space as a “scene” or world 56 that is then projected into two-dimensional space as a “screen” or visualization 54. Candidate spines are selected by surveying the cluster concepts 47 for each cluster 45. Each cluster concept 47 shared by two or more clusters 45 can potentially form a spine of clusters 45. However, those cluster concepts 47 referenced by just a single cluster 45 or by more than 10% of the clusters 45 are discarded. Other criteria for discarding cluster concepts 47 are possible. The remaining cluster concepts 47 are identified as candidate spine concepts, which each logically form a candidate spine. Each of the clusters 45 is then assigned to a best fit spine 48 by evaluating the fit of each candidate spine concept to the cluster concept 47. The candidate spine exhibiting a maximum fit is selected as the best fit spine 48 for the cluster 45. Unique seed spines are next selected and placed. Spine concept score vectors are generated for each best fit spine 48 and evaluated. Those best fit spines 48 having an adequate number of assigned clusters 45 and which are sufficiently dissimilar to any previously selected best fit spines 48 are designated and placed as seed spines and the corresponding spine concept 50 is identified. Any remaining unplaced best fit spines 48 and clusters 45 that lack best fit spines 48 are placed into spine groups 49. Anchor clusters are selected based on similarities between unplaced candidate spines and candidate anchor clusters. Cluster spines are grown by placing the clusters 45 in similarity precedence to previously placed spine clusters or anchor clusters along vectors originating at each anchor cluster. As necessary, clusters 45 are placed outward or in a new vector at a different angle from new anchor clusters 55.
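The candidate spine concept filter described above, keeping a concept shared by at least two clusters but by no more than 10% of all clusters, can be sketched as follows; the function name and list-of-lists representation are illustrative assumptions.

```python
from collections import Counter

def select_candidate_spine_concepts(cluster_concepts, num_clusters, max_fraction=0.10):
    """Keep concepts referenced by two or more clusters but by no more than
    max_fraction of all clusters (sketch of the discard rule described above).
    cluster_concepts is a list of concept lists, one per cluster."""
    # Count in how many distinct clusters each concept appears.
    counts = Counter(c for concepts in cluster_concepts for c in set(concepts))
    return [c for c, n in counts.items()
            if n >= 2 and n <= max_fraction * num_clusters]
```

With 30 clusters, a concept appearing in two clusters survives, while one appearing in five clusters (more than 10%) is discarded along with singleton concepts.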
The spine groups 49 are placed by translating the spine groups 49 in a radial manner until there is no overlap, such as described in commonly-assigned U.S. Pat. No. 7,271,804, issued Sep. 18, 2007, the disclosure of which is incorporated by reference.
Finally, the HUD generator 43 generates a user interface, which includes a HUD that logically overlays the spine groups 49 placed within the visualization 54 and which provides controls for navigating, exploring and searching the cluster space, as further described below with reference to
In one embodiment, a single compass is provided. Referring next to
Each module or component is a computer program, procedure or module written as source code in a conventional programming language, such as the C++ programming language, and is presented for execution by the CPU as object or byte code, as is known in the art. The various implementations of the source code and object and byte codes can be held on a computer-readable storage medium or embodied on a transmission medium in a carrier wave. The display generator 32 operates in accordance with a sequence of process steps, as further described below with reference to
Cluster Projection
First, the n-dimensional space 61 is projected into a virtual three-dimensional space 62 by logically grouping the document concepts 46 into thematically-related clusters 45. In one embodiment, the three-dimensional space 62 is conceptualized into a virtual world or “scene” that represents each cluster 45 as a virtual sphere 66 placed relative to other thematically-related clusters 45, although other shapes are possible. Importantly, the three-dimensional space 62 is not displayed, but is used instead to generate a screen view. The three-dimensional space 62 is projected from a predefined perspective onto a two-dimensional space 63 by representing each cluster 45 as a circle 69, although other shapes are possible.
Although the three-dimensional space 62 could be displayed through a series of two-dimensional projections that would simulate navigation through the three-dimensional space through yawing, pitching and rolling, comprehension would quickly be lost as the orientation of the clusters 45 changed. Accordingly, the screens generated in the two-dimensional space 63 are limited to one single perspective at a time, such as would be seen by a viewer looking at the three-dimensional space 62 from a stationary vantage point, but the vantage point can be moved. The viewer is able to navigate through the two-dimensional space 63 through zooming and panning. Through the HUD, the user is allowed to zoom and pan through the clusters 45 appearing within compass 67 and pin select document concepts 46 into place onto the compass 67. During panning and zooming, the absolute three-dimensional coordinates 65 of each cluster 45 within the three-dimensional space 64 remain unchanged, while the relative two-dimensional coordinates 68 are updated as the view through the HUD is modified. Finally, spine labels are generated for the thematic concepts of cluster spines appearing within the compass 67 based on the underlying scene in the three-dimensional space 64 and perspective of the viewer, as further described below with reference to
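The stationary-perspective projection described above can be sketched with a simple pinhole camera model. The camera model, viewer distance, and parameter names below are assumptions for illustration; the key point is that panning and zooming alter only the computed two-dimensional coordinates, never the stored three-dimensional coordinates.

```python
def project_to_screen(xyz, pan=(0.0, 0.0), zoom=1.0, viewer_distance=10.0):
    """Perspective-project a 3-D cluster coordinate onto the 2-D screen from a
    stationary vantage point. The 3-D coordinate xyz is read-only here:
    pan and zoom change only the returned 2-D coordinate."""
    x, y, z = xyz
    # Simple pinhole projection: scale shrinks with depth along the view axis.
    scale = zoom * viewer_distance / (viewer_distance + z)
    return (x * scale + pan[0], y * scale + pan[1])
```

Re-rendering the view after a pan or zoom amounts to recomputing this projection for every cluster with new `pan` and `zoom` values while the scene coordinates stay fixed.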
User Interface Example
In one embodiment, the controls are provided by a combination of mouse button and keyboard shortcut assignments, which control the orientation, zoom, pan, and selection of placed clusters 83 within the compass 82, and toolbar buttons 87 provided on the user interface 81. By way of example, the mouse buttons enable the user to zoom and pan around and pin down the placed clusters 83. For instance, by holding the middle mouse button and dragging the mouse, the placed clusters 83 appearing within the compass 82 can be panned. Similarly, by rolling a wheel on the mouse, the placed clusters 83 appearing within the compass 82 can be zoomed inwards to or outwards from the location at which the mouse cursor points. Finally, by pressing a Home toolbar button or keyboard shortcut, the placed clusters 83 appearing within the compass 82 can be returned to an initial view centered on the display screen. Keyboard shortcuts can provide similar functionality as the mouse buttons.
Individual spine concepts 50 can be “pinned” in place on the circumference of the compass 82 by clicking the left mouse button on a cluster spine label 91. The spine label 91 appearing at the end of the concept pointer connecting the outermost cluster of placed clusters 83 associated with the pinned spine concept 50 is highlighted. Pinning fixes a spine label 91 to the compass 82, which causes the spine label 91 to remain fixed to the same place on the compass 82 independent of the location of the associated placed clusters 83 and adds weight to the associated cluster 83 during reclustering.
The toolbar buttons 87 enable a user to execute specific commands for the composition of the spine groups 49 displayed. By way of example, the toolbar buttons 87 provide the following functions:
Visually, the compass 82 emphasizes visible placed clusters 83 and deemphasizes placed clusters 84 appearing outside of the compass 82. The view of the cluster spines appearing within the focus area of the compass 82 can be zoomed and panned and the compass 82 can also be resized and disabled. In one embodiment, the placed clusters 83 appearing within the compass 82 are displayed at full brightness, while the placed clusters 84 appearing outside the compass 82 are displayed at 30 percent of original brightness, although other levels of brightness or visual accent, including various combinations of color, line width and so forth, are possible. Spine labels 91 appear at the ends of concept pointers connecting the outermost cluster of select placed clusters 83 to preferably the closest point along the periphery of the compass 82. In one embodiment, the spine labels 91 are placed without overlap and circumferentially around the compass 82, as further described below with reference to
In one embodiment, a set of set-aside trays 85 is provided to graphically group those documents 86 that have been logically marked into sorting categories. In addition, a garbage can 90 is provided to remove cluster concepts 47 from consideration in the current set of placed spine groups 49. Removing a cluster concept 47 prevents that concept from affecting future clustering, as may occur when a user considers a concept irrelevant to the placed clusters 84.
Referring next to
Referring finally to
User Interface
User Interface Controls Examples
Referring first to
In one embodiment, the unfocused area 123 appears under a visual “velum” created by decreasing the brightness of the placed cluster spines 124 outside the compass 121 by 30 percent, although other levels of brightness or visual accent, including various combinations of color, line width and so forth, are possible. The placed cluster spines 124 inside of the focused area 122 are identified by spine labels 125, which are placed into logical “slots” at the end of concept pointers 126 that associate each spine label 125 with the corresponding placed cluster spine 124. The spine labels 125 show the common concept 46 that connects the clusters 83 appearing in the associated placed cluster spine 124. Each concept pointer 126 connects the outermost cluster 45 of the associated placed cluster spine 124 to the periphery of the compass 121 centered in the logical slot for the spine label 125. Concept pointers 126 are highlighted in the HUD when a concept 46 within the placed cluster spine 124 is selected or a pointer, such as a mouse cursor, is held over the concept 46. Each cluster 83 also has a cluster label 128 that appears when the pointer is used to select a particular cluster 83 in the HUD. The cluster label 128 shows the top concepts 46 that brought the documents 14 together as the cluster 83, plus the total number of documents 14 for that cluster 83.
In one embodiment, spine labels 125 are placed to minimize the length of the concept pointers 126. Each spine label 125 is optimally situated to avoid overlap with other spine labels 125 and crossing of other concept pointers 126, as further described below with reference to
Referring next to
In one embodiment, the compass 121 zooms towards or away from the location of the pointer, rather than the middle of the compass 121. Additionally, the speed at which the view of the placed cluster spines 124 within the focused area 122 changes can be varied. For instance, variable zooming can move the compass 121 at a faster pace proportionate to the distance to the placed cluster spines 124 being viewed. Thus, a close-up view of the placed cluster spines 124 zooms more slowly than a far away view. Finally, the spine labels 125 become more specific with respect to the placed cluster spines 124 appearing within the compass 121 as the zooming changes. High level details are displayed through the spine labels 125 when the compass 121 is zoomed outwards and low level details are displayed through the spine labels 125 when the compass 121 is zoomed inwards. Other zooming controls and orientations are possible.
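The variable zooming behavior described above, where the zoom step grows with the distance to the viewed spines, might be sketched as follows; the constants and function name are illustrative assumptions.

```python
def zoom_step(current_distance, base_step=0.1, min_distance=1.0):
    """Variable zoom: the step size is proportionate to the distance to the
    viewed cluster spines, so a far-away view zooms quickly while a close-up
    view zooms slowly. A floor keeps the step from vanishing entirely."""
    return max(base_step * current_distance, base_step * min_distance)
```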
Referring next to
Referring lastly to
Example Multiple Compasses
Example Single and Multiple Compasses
Example Cluster Spine Group
Next, each of the unplaced remaining singleton clusters 222 is loosely grafted onto a placed best fit spine 211, 216, 219 by first building a candidate anchor cluster list. Each of the remaining singleton clusters 222 is placed proximal to the anchor cluster that is most similar to the singleton cluster. The singleton clusters 222 are placed along a vector 212, 217, 219, but no connecting line is drawn in the visualization 54. Relatedness is indicated by proximity only.
Cluster Spine Group Placement Example
Cluster Spine Group Overlap Removal Example
Method Overview
As an initial step, documents 14 are scored and clusters 45 are generated (block 251), such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued Oct. 27, 2009, the disclosure of which is incorporated by reference. Next, clusters spines are placed as cluster groups 49 (block 252), such as described in commonly-assigned U.S. Pat. No. 7,191,175, issued Mar. 13, 2007, and U.S. Pat. No. 7,440,622, issued Oct. 21, 2008, the disclosures of which are incorporated by reference, and the concepts list 103 is provided. The HUD 104 is provided (block 253) to provide a focused view of the clusters 102, as further described below with reference to
HUD Generation
Initially, the compass 82 is generated to overlay the placed clusters layer 102 (block 261). In a further embodiment, the compass 82 can be disabled. Next, cluster concepts 47 are assigned to the slots 51 (block 263), as further described below with reference to
Concept Assignment to Slots
Initially, a set of slots 51 is created (block 271). The slots 51 are circumferentially defined around the compass 82 to avoid crossing of navigation concept pointers and overlap between individual spine labels 91 when projected into two dimensions. In one embodiment, the slots 51 are determined based on the three-dimensional Cartesian coordinates 65 (shown in
Next, a set of slice objects is created for each cluster concept 47 that occurs in a placed cluster 83 appearing within the compass 82 (block 272). Each slice object defines an angular region of the compass 82 and holds the cluster concepts 47 that will appear within that region, the center slot 51 of that region, and the width of the slice object, specified in number of slots 51. In addition, in one embodiment, each slice object is interactive and, when associated with a spine label 91, can be selected with a mouse cursor to cause each of the cluster concepts 47 in the display to be selected and highlighted. Next, framing slice objects are identified by iteratively processing each of the slice objects (blocks 273-276), as follows. For each slice object, if the slice object defines a region that frames another slice object (block 274), the slice objects are combined (block 275) by changing the center slot 51, increasing the width of the slice object, and combining the cluster concepts 47 into a single slice object. Next, those slice objects having a width of more than half of the number of slots 51 are divided by iteratively processing each of the slice objects (blocks 277-280), as follows. For each slice object, if the width of the slice object exceeds the number of slots divided by two (block 278), the slice object is divided (block 279) to eliminate unwanted crossings of lines that connect spine labels 91 to associated placed clusters 83. Lastly, the cluster concepts 47 are assigned to slots 51 by a set of nested processing loops for each of the slice objects (blocks 281-287) and slots 51 (blocks 282-286), as follows. For each slot 51 appearing in each slice object, the cluster concepts 47 are ordered by angular position from the slot 51 (block 283), as further described below with reference to
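The closest-angularity slot assignment described above can be sketched as a greedy matching of spine labels to free circumferential slots; the slot count, the greedy processing order, and the function names are illustrative assumptions rather than the patented procedure.

```python
import math

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in radians."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def assign_labels_to_slots(spine_angles, num_slots=24):
    """Assign each spine label to the still-free slot whose angle around the
    compass is closest to the spine's own angle (closest-angularity sketch).
    spine_angles maps a label to its angle in radians."""
    slot_angle = 2 * math.pi / num_slots
    free = set(range(num_slots))
    assignment = {}
    for spine, angle in spine_angles.items():
        # Take the nearest slot that has not yet been claimed, so labels
        # never overlap.
        best = min(free, key=lambda s: angular_distance(angle, s * slot_angle))
        assignment[spine] = best
        free.remove(best)
    return assignment
```

Because each slot is removed from the free set once claimed, no two labels share a slot, mirroring the non-overlapping circumferential placement described above.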
Cluster Assignment Example
Those slots 292 appearing within the slice object 291 are identified. A spine label 293 is assigned to the slot 292 corresponding to the cluster spine having the closest angularity to the slot 292.
Alternate User Interface
The folders representation 302 in the alternate user interface 301 can be accessed independently from or in conjunction with the two-dimensional cluster view in the original user interface 81. When accessed independently, the cluster data is presented in the folders representation 302 in a default organization, such as from highest scoring spine groups on down, or by alphabetized spine groups. Other default organizations are possible. When accessed in conjunction with the two-dimensional cluster view, the cluster data currently appearing within the focus area of the compass 82 is selected by expanding folders and centering the view over the folders corresponding to the cluster data in focus. Other types of folder representation access are possible.
Referring next to
Each classification code can be assigned a color. For instance, privileged documents can be associated with the color red, responsive with blue, and non-responsive with white. The number of documents assigned to each classification code is totaled and used to generate a pie chart based on the total number of documents in the cluster. Thus, if a cluster has 20 documents, of which five documents are assigned the privileged classification code and two documents the non-responsive code, then 25% of the pie chart would be colored red and 10% would be colored white; if the remaining documents have no classification codes, the remainder of the pie chart can be colored grey. However, other classification codes, colors, and representations of the classification codes are possible.
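The pie-chart proportions in the example above follow from a simple per-code fraction; the function name and the "uncoded" key below are assumptions for illustration.

```python
def classification_pie(code_counts, total_documents):
    """Compute the pie-chart fraction for each classification code in a
    cluster; documents with no code fill the grey remainder."""
    fractions = {code: count / total_documents
                 for code, count in code_counts.items()}
    # Whatever is left over represents documents without any code.
    fractions["uncoded"] = 1.0 - sum(fractions.values())
    return fractions
```

For the 20-document cluster in the example, five privileged and two non-responsive documents yield fractions of 0.25 and 0.10, leaving 0.65 of the pie grey.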
In one embodiment, each portion of the pie, such as represented by a different classification code or no classification code, can be selected to provide further information about the documents associated with that portion. For instance, a user can select the red portion of the pie representing the privileged documents, which can provide a list of the privileged documents with information about each document, including title, date, custodian, and the actual document or a link to the actual document. Other document information is possible.
The display can include a compass 331 that provides a focused view of the clusters 332, concept labels 333 that are arranged circumferentially and non-overlappingly around the compass, and statistics about the clusters appearing within the compass. In one embodiment, the compass is round, although other enclosed shapes and configurations are possible. Labeling is provided by drawing a concept pointer from the outermost cluster to the periphery of the compass at which the label appears. Preferably, each concept pointer is drawn with a minimum length and placed to avoid overlapping other concept pointers. Focus is provided through a set of zoom, pan and pin controls.
In a further embodiment, a user can zoom into a display of the clusters, such as by scrolling a mouse, to provide further detail regarding the documents. For instance, when a certain amount of zoom has been applied, the pie chart representation of each cluster can revert to a representation of the individual documents, which can be displayed as circles, such as described above with reference to
The cluster display can include one or more search fields into which search terms 345 can be entered. The search terms can be agreed upon by the parties to a case under litigation or entered by a document reviewer, an attorney, or another individual associated with the case. Allowing a user to search the documents for codes helps that user easily find relevant documents via the display. Additionally, the search provides a display based both on what the user thinks is important, as reflected in the search terms provided, and on what the system thinks is important, as reflected in how the documents are clustered and the classification codes are assigned. In one example, the display can resemble a heat map with a “glow” or highlighting 344 provided around one or more representations of the documents and clusters, as described below.
Once the search terms are entered, a search is conducted to identify those documents related to the search terms of interest. Prior to or during the search, each of the search terms is associated with a classification code based on the documents displayed. For instance, based on a review of the documents for each classification code, a list of popular or relevant terms across all the documents for that code is identified and associated with that classification code. For example, upon review of the privileged documents, terms for particular individuals, such as the attorney's name and the CEO's name, are identified as representative of the privileged documents. Additionally, junk terms can be identified, such as those terms frequently found in junk email, mail, and letters. For example, certain pornographic terms may be identified as representative of junk terms.
Upon identification of the documents associated with the search terms, a color 343 is provided around each document based on the search term to which the document is related or which the document includes. The colors can be set by attorneys, administrators, or reviewers, or set as a default. For example, a document that includes the CEO's name could be highlighted red around its circle if the document is associated with the privileged classification code. Alternatively, if the circle representing the document is colored blue for responsive, the red highlighting for the search term is provided around the blue circle.
The strength of the highlight color, whether darker or lighter, can represent the relevance of the highlighted document or concept to the search terms. For example, a document highly related to one or more of the search terms can be highlighted with a dark color, while a document with lower relevance can be highlighted with a lighter color.
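One simple way to realize a darker-for-more-relevant highlight is to blend a base color toward white as relevance drops. This is a sketch under assumptions: the function `highlight_shade` and the normalized [0, 1] relevance score are illustrative, not taken from the specification.

```python
def highlight_shade(base_rgb, relevance):
    # Blend the base highlight color toward white as relevance falls,
    # so highly relevant documents receive a darker, stronger glow.
    # relevance is assumed to be normalized to the range [0, 1].
    relevance = max(0.0, min(1.0, relevance))
    return tuple(round(255 - (255 - c) * relevance) for c in base_rgb)

red = (200, 0, 0)
strong = highlight_shade(red, 1.0)   # full-strength highlight
weak = highlight_shade(red, 0.3)    # lighter highlight, low relevance
```

At relevance 1.0 the base color is returned unchanged; at 0.0 the highlight fades to white, i.e. effectively no glow.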
When the document color and the search term highlight color match, there is agreement between what the user believes to be important and what the system identifies as important. However, if the colors do not match, a discrepancy may exist between the user and the system, and documents represented by disparate colors may require further review by the user. Alternatively, one or more junk terms can be entered as the search terms to identify those documents that are likely “junk” and to ensure that those documents are not coded as privileged or responsive; if they are, the user can further review those documents. Further, clusters that do not include any documents related to the search terms can also be highlighted a predetermined color.
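The color-agreement check above amounts to flagging documents whose two color signals disagree. A minimal sketch, with hypothetical field names (`code_color`, `term_color`) standing in for the code color and the search-term highlight color:

```python
def review_flags(coded_docs):
    # A document needs further review when the color of its assigned
    # classification code differs from the color of the search term it
    # matched, i.e. the user's and the system's signals disagree.
    return [d["id"] for d in coded_docs if d["code_color"] != d["term_color"]]

# Hypothetical documents; the field names are illustrative only.
coded_docs = [
    {"id": 1, "code_color": "red", "term_color": "red"},   # agreement
    {"id": 2, "code_color": "blue", "term_color": "red"},  # discrepancy
]
flagged = review_flags(coded_docs)
```

Only document 2, coded responsive (blue) but matching a privileged term (red), is routed to the reviewer.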
In addition to the documents, a cluster can also include highlighting 343 around the cluster based on a relevance of that cluster to the search terms. The relevance can be based on all the documents in that cluster. For instance, if 20 of the documents are related to a privileged term and another is related to a non-responsive term, the cluster circle can be highlighted red to represent the privileged classification. Additionally, if each document related to a privileged term is also coded as privileged, there is strong agreement that the cluster correctly includes privileged documents.
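A cluster-level highlight like the one just described can be chosen by plurality over the codes of the terms its documents matched. This is one plausible aggregation rule, not necessarily the one the specification intends; the function name and inputs are assumptions.

```python
from collections import Counter

def cluster_highlight(matched_codes):
    # Pick the cluster-level highlight by simple plurality over the
    # classification codes of the search terms matched by the
    # cluster's documents; an empty cluster gets no highlight.
    if not matched_codes:
        return None
    code, _ = Counter(matched_codes).most_common(1)[0]
    return code

# 20 documents matched privileged terms, one matched a non-responsive term.
votes = ["privileged"] * 20 + ["non-responsive"]
```

With 20 of 21 documents voting "privileged", the cluster circle would be highlighted in the privileged color.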
In a further embodiment, the search terms can be used to identify documents, such as during production to fulfill a production request. In a first scenario, the search terms are provided and a search is conducted. Depending on the number or breadth of the terms, few documents may be identified as relevant to the search terms. Additionally, the search terms provided may not be representative of the results desired by the user. Therefore, in this case, the user can review terms or concepts of a responsive cluster and enter further terms to conduct a further search based on the new terms. A responsive cluster can include those clusters with documents that are highlighted based on the search terms and considered relevant to the user.
Alternatively, if the terms are overly broad, a large number of documents will show highlighting as related to the terms, and the results may be over-inclusive of documents that have little relevance to the documents actually desired by the user. The user can then identify non-responsive clusters with highlighted documents not desired by the user to identify terms producing false positives, that is, documents that appear relevant to the search terms but are not. The user can then add exclusionary terms to the search, remove or replace one or more of the terms, or add a new term to narrow the search for the desired documents. To identify new or replacement terms, a user can review the terms or concepts of a responsive cluster. The search terms can include Boolean and proximity features to conduct the search. For example, the search terms “fantasy,” “football,” and “statistics” may provide over-inclusive results. A user can then look at responsive clusters to identify a concept for “gambling” and conduct a new search based on the four search terms. The terms or concepts can be identified from one or more documents or from the cluster labels.
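The Boolean narrowing described above, including exclusionary terms, can be sketched as a simple set-based filter. This is a minimal illustration under assumptions: the function `matches` and the set-of-terms document representation are hypothetical, and proximity operators are omitted.

```python
def matches(doc_terms, include, exclude=()):
    # A document is responsive when it contains every required term
    # and none of the exclusionary terms (Boolean AND / AND NOT).
    terms = set(doc_terms)
    return terms >= set(include) and not terms & set(exclude)

# Hypothetical document from the fantasy-football example above.
doc = {"fantasy", "football", "statistics", "gambling"}
hit = matches(doc, include=("fantasy", "football", "statistics", "gambling"))
```

Adding "gambling" as a fourth required term, or listing a false-positive term in `exclude`, narrows the over-inclusive three-term search.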
A list of the search terms can be provided adjacent to the display with a number of documents or concepts identified in the display as relevant to that search term. Fields for concepts, selected concepts, quick codes, blinders, issues, levels, and saved searches can also be provided.
Massive amounts of data can be available, which may be too much to reasonably display at a single time. Accordingly, the data can be prioritized and divided to display reasonable data chunks one at a time. In one example, the data can be prioritized based on user-selected factors, such as search term relation, predictive coding results, date, code, or custodian, as well as many other factors. Once one or more of the factors are selected, the documents 352 in a corpus are prioritized based on the selected factors. Next, the documents are ordered based on the prioritization, such as with the highest priority document at the top of the order and the lowest priority document at the bottom. The ordered documents can then be divided into bins of predetermined, randomly selected, or as-needed sizes. Each bin can have the same or a different number of documents. The documents in each bin are then provided as one page of the cluster display.
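The prioritize-order-bin sequence above can be sketched in a few lines. The function name `paginate_by_priority` and the "score" field standing in for the user-selected factor are assumptions for illustration.

```python
def paginate_by_priority(docs, key, bin_size):
    # Order documents by the selected priority factor (highest first),
    # then slice the ordered list into fixed-size bins; each bin is
    # rendered as one page of the cluster display.
    ordered = sorted(docs, key=key, reverse=True)
    return [ordered[i:i + bin_size] for i in range(0, len(ordered), bin_size)]

# Hypothetical corpus where "score" stands in for the selected factor.
corpus = [{"id": i, "score": i % 7} for i in range(20)]
pages = paginate_by_priority(corpus, key=lambda d: d["score"], bin_size=8)
```

Twenty documents with a bin size of eight yield three pages of 8, 8, and 4 documents, highest-priority documents first.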
The documents are divided by family, such that a bin will include documents of the same family. For example, an original email will include in its family all emails in the same thread, such as replies and forwarded emails, and all attachments. The cluster display is dependent on the documents in each bin. For instance, the bin size may be set at 10,000 documents. If a single cluster includes 200 documents, 198 of which have priorities within the first 10,000 and the remaining two of which have priorities beyond 10,000, then the cluster will be displayed on a first page with the 198 documents, but not the two documents of lower priority. In one embodiment, the two documents may appear as a cluster together on the page corresponding to the bin to which they belong. The next page will provide a cluster display of the next 10,000 documents, and so on. In this manner, the user can review the documents based on priority, such that the highest priority documents are displayed first. A list 353 of pages can be provided at the bottom of the display for the user to scroll through.
In a further example, the displays are also dependent on the prioritized documents and their families. For instance, one or more of the first 10,000 prioritized documents may have additional family members, which can change the number of documents included in a bin, such as to 10,012.
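The family-aware binning described in the two paragraphs above can be sketched as follows. It is a simplified model, not the claimed method: `family_bins`, the integer document IDs, and the family lookup table are assumptions, and the bin is allowed to grow past its nominal size exactly as in the 10,012 example.

```python
def family_bins(ordered_docs, families, bin_size):
    # Fill bins in priority order, but pull in every member of a
    # document's family so a family is never split across pages; a
    # bin may therefore grow slightly past its nominal size.
    bins, current, placed = [], [], set()
    for doc in ordered_docs:  # ordered_docs is already priority-sorted
        if doc in placed:
            continue
        family = families.get(doc, [doc])
        current.extend(m for m in family if m not in placed)
        placed.update(family)
        if len(current) >= bin_size:
            bins.append(current)
            current = []
    if current:
        bins.append(current)
    return bins

# Hypothetical corpus: document 1 is an email with attachments 101, 102.
bins = family_bins(list(range(1, 11)), {1: [1, 101, 102]}, bin_size=5)
```

Document 1 drags its attachments 101 and 102 into the first bin, so that bin closes after fewer top-level documents than a family-free bin would.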
While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.
This non-provisional patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application, Ser. No. 62/344,986, filed Jun. 2, 2016, the disclosure of which is incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
3416150 | Lindberg | Dec 1968 | A |
3426210 | Agin | Feb 1969 | A |
3668658 | Flores et al. | Jun 1972 | A |
4893253 | Lodder | Jan 1990 | A |
4991087 | Burkowski et al. | Feb 1991 | A |
5056021 | Ausborn | Oct 1991 | A |
5121338 | Lodder | Jun 1992 | A |
5133067 | Hara et al. | Jul 1992 | A |
5182773 | Bahl et al. | Jan 1993 | A |
5276789 | Besaw et al. | Jan 1994 | A |
5278980 | Pedersen et al. | Jan 1994 | A |
5359724 | Earle | Oct 1994 | A |
5371673 | Fan | Dec 1994 | A |
5371807 | Register et al. | Dec 1994 | A |
5442778 | Pedersen et al. | Aug 1995 | A |
5450535 | North | Sep 1995 | A |
5477451 | Brown et al. | Dec 1995 | A |
5488725 | Turtle et al. | Jan 1996 | A |
5524177 | Suzuoka | Jun 1996 | A |
5528735 | Strasnick et al. | Jun 1996 | A |
5619632 | Lamping et al. | Apr 1997 | A |
5619709 | Caid et al. | Apr 1997 | A |
5635929 | Rabowsky et al. | Jun 1997 | A |
5649193 | Sumita et al. | Jul 1997 | A |
5675819 | Schuetze | Oct 1997 | A |
5696962 | Kupiec | Dec 1997 | A |
5706497 | Takahashi et al. | Jan 1998 | A |
5737734 | Schultz | Apr 1998 | A |
5754938 | Herz et al. | May 1998 | A |
5754939 | Herz et al. | May 1998 | A |
5787422 | Tukey et al. | Jul 1998 | A |
5794178 | Caid et al. | Aug 1998 | A |
5794236 | Mehrle | Aug 1998 | A |
5799276 | Komissarchik et al. | Aug 1998 | A |
5819258 | Vaithyanathan et al. | Oct 1998 | A |
5819260 | Lu et al. | Oct 1998 | A |
5835905 | Pirolli et al. | Nov 1998 | A |
5842203 | D'Elena et al. | Nov 1998 | A |
5844991 | Hochberg et al. | Dec 1998 | A |
5857179 | Vaithyanathan et al. | Jan 1999 | A |
5860136 | Fenner | Jan 1999 | A |
5862325 | Reed et al. | Jan 1999 | A |
5864846 | Voorhees et al. | Jan 1999 | A |
5864871 | Kitain et al. | Jan 1999 | A |
5867799 | Lang et al. | Feb 1999 | A |
5870740 | Rose et al. | Feb 1999 | A |
5895470 | Pirolli et al. | Apr 1999 | A |
5909677 | Broder et al. | Jun 1999 | A |
5915024 | Kitaori et al. | Jun 1999 | A |
5915249 | Spencer | Jun 1999 | A |
5920854 | Kirsch et al. | Jul 1999 | A |
5924105 | Punch et al. | Jul 1999 | A |
5940821 | Wical | Aug 1999 | A |
5943669 | Numata | Aug 1999 | A |
5950146 | Vapnik | Sep 1999 | A |
5950189 | Cohen et al. | Sep 1999 | A |
5966126 | Szabo | Oct 1999 | A |
5974412 | Hazlehurst et al. | Oct 1999 | A |
5987446 | Corey et al. | Nov 1999 | A |
5987457 | Ballard | Nov 1999 | A |
6006221 | Liddy et al. | Dec 1999 | A |
6012053 | Pant et al. | Jan 2000 | A |
6026397 | Sheppard | Feb 2000 | A |
6038574 | Pitkow et al. | Mar 2000 | A |
6070133 | Brewster et al. | May 2000 | A |
6089742 | Warmerdam et al. | Jul 2000 | A |
6091424 | Madden | Jul 2000 | A |
6092059 | Straforini et al. | Jul 2000 | A |
6092091 | Sumita et al. | Jul 2000 | A |
6094649 | Bowen et al. | Jul 2000 | A |
6100901 | Mohda et al. | Aug 2000 | A |
6108446 | Hoshen | Aug 2000 | A |
6119124 | Broder et al. | Sep 2000 | A |
6122628 | Castelli et al. | Sep 2000 | A |
6134541 | Castelli et al. | Oct 2000 | A |
6137499 | Tesler | Oct 2000 | A |
6137545 | Patel et al. | Oct 2000 | A |
6137911 | Zhilyaev | Oct 2000 | A |
6144962 | Weinberg | Nov 2000 | A |
6148102 | Stolin | Nov 2000 | A |
6154213 | Rennison et al. | Nov 2000 | A |
6154219 | Wiley et al. | Nov 2000 | A |
6167368 | Wacholder | Dec 2000 | A |
6173275 | Caid et al. | Jan 2001 | B1 |
6202064 | Julliard | Mar 2001 | B1 |
6216123 | Robertson et al. | Apr 2001 | B1 |
6243713 | Nelson et al. | Jun 2001 | B1 |
6243724 | Mander et al. | Jun 2001 | B1 |
6253218 | Aoki et al. | Jun 2001 | B1 |
6260038 | Martin et al. | Jul 2001 | B1 |
6300947 | Kanebsky | Oct 2001 | B1 |
6326962 | Szabo | Dec 2001 | B1 |
6338062 | Liu | Jan 2002 | B1 |
6345243 | Clark | Feb 2002 | B1 |
6349296 | Broder et al. | Feb 2002 | B1 |
6349307 | Chen | Feb 2002 | B1 |
6360227 | Aggarwal et al. | Mar 2002 | B1 |
6363374 | Corston-Oliver et al. | Mar 2002 | B1 |
6377287 | Hao et al. | Apr 2002 | B1 |
6381601 | Fujiwara et al. | Apr 2002 | B1 |
6389433 | Bolosky et al. | May 2002 | B1 |
6389436 | Chakrabarti et al. | May 2002 | B1 |
6408294 | Getchius et al. | Jun 2002 | B1 |
6414677 | Robertson et al. | Jul 2002 | B1 |
6415283 | Conklin | Jul 2002 | B1 |
6418431 | Mahajan et al. | Jul 2002 | B1 |
6421709 | McCormick et al. | Jul 2002 | B1 |
6438537 | Netz et al. | Aug 2002 | B1 |
6438564 | Morton et al. | Aug 2002 | B1 |
6442592 | Alumbaugh et al. | Aug 2002 | B1 |
6446061 | Doerre et al. | Sep 2002 | B1 |
6449612 | Bradley et al. | Sep 2002 | B1 |
6453327 | Nielsen | Sep 2002 | B1 |
6460034 | Wical | Oct 2002 | B1 |
6470307 | Turney | Oct 2002 | B1 |
6480843 | Li | Nov 2002 | B2 |
6480885 | Olivier | Nov 2002 | B1 |
6484168 | Pennock et al. | Nov 2002 | B1 |
6484196 | Maurille | Nov 2002 | B1 |
6493703 | Knight et al. | Dec 2002 | B1 |
6496822 | Rosenfelt et al. | Dec 2002 | B2 |
6502081 | Wiltshire, Jr. et al. | Dec 2002 | B1 |
6507847 | Fleischman | Jan 2003 | B1 |
6510406 | Marchisio | Jan 2003 | B1 |
6519580 | Johnson et al. | Feb 2003 | B1 |
6523026 | Gillis | Feb 2003 | B1 |
6523063 | Miller et al. | Feb 2003 | B1 |
6542635 | Hu et al. | Apr 2003 | B1 |
6542889 | Aggarwal et al. | Apr 2003 | B1 |
6544123 | Tanaka et al. | Apr 2003 | B1 |
6549957 | Hanson et al. | Apr 2003 | B1 |
6560597 | Dhillon et al. | May 2003 | B1 |
6564202 | Schuetze et al. | May 2003 | B1 |
6571225 | Oles et al. | May 2003 | B1 |
6584564 | Olkin et al. | Jun 2003 | B2 |
6594658 | Woods | Jul 2003 | B2 |
6598054 | Schuetze et al. | Jul 2003 | B2 |
6606625 | Muslea et al. | Aug 2003 | B1 |
6611825 | Billheimer et al. | Aug 2003 | B1 |
6628304 | Mitchell et al. | Sep 2003 | B2 |
6629097 | Keith | Sep 2003 | B1 |
6640009 | Zlotnick | Oct 2003 | B2 |
6651057 | Jin et al. | Nov 2003 | B1 |
6654739 | Apte et al. | Nov 2003 | B1 |
6658423 | Pugh et al. | Dec 2003 | B1 |
6675159 | Lin et al. | Jan 2004 | B1 |
6675164 | Kamath et al. | Jan 2004 | B2 |
6678705 | Berchtold et al. | Jan 2004 | B1 |
6684205 | Modha et al. | Jan 2004 | B1 |
6697998 | Damerau et al. | Feb 2004 | B1 |
6701305 | Holt et al. | Mar 2004 | B1 |
6711585 | Copperman et al. | Mar 2004 | B1 |
6714929 | Micaelian et al. | Mar 2004 | B1 |
6714936 | Nevin | Mar 2004 | B1 |
6728752 | Chen | Apr 2004 | B1 |
6735578 | Shetty et al. | May 2004 | B2 |
6738759 | Wheeler et al. | May 2004 | B1 |
6747646 | Gueziec et al. | Jun 2004 | B2 |
6751628 | Coady | Jun 2004 | B2 |
6757646 | Marchisio | Jun 2004 | B2 |
6785679 | Dane et al. | Aug 2004 | B1 |
6789230 | Katariya et al. | Sep 2004 | B2 |
6804665 | Kreulen et al. | Oct 2004 | B2 |
6816175 | Hamp et al. | Nov 2004 | B1 |
6819344 | Robbins | Nov 2004 | B2 |
6823333 | McGreevy | Nov 2004 | B2 |
6826724 | Shimada et al. | Nov 2004 | B1 |
6841321 | Matsumoto et al. | Jan 2005 | B2 |
6847966 | Sommer et al. | Jan 2005 | B1 |
6862710 | Marchisio | Mar 2005 | B1 |
6879332 | Decombe | Apr 2005 | B2 |
6880132 | Uemura | Apr 2005 | B2 |
6883001 | Abe | Apr 2005 | B2 |
6886010 | Kostoff | Apr 2005 | B2 |
6888584 | Suzuki et al. | May 2005 | B2 |
6915308 | Evans et al. | Jul 2005 | B1 |
6922699 | Schuetze et al. | Jul 2005 | B2 |
6941325 | Benitez et al. | Sep 2005 | B1 |
6968511 | Robertson et al. | Nov 2005 | B1 |
6970881 | Mohan et al. | Nov 2005 | B1 |
6970931 | Bellamy et al. | Nov 2005 | B1 |
6976207 | Rujan et al. | Dec 2005 | B1 |
6978419 | Kantrowitz | Dec 2005 | B1 |
6990238 | Saffer et al. | Jan 2006 | B1 |
6993517 | Naito et al. | Jan 2006 | B2 |
6993535 | Bolle et al. | Jan 2006 | B2 |
6996575 | Cox et al. | Feb 2006 | B2 |
7003551 | Malik | Feb 2006 | B2 |
7013435 | Gallo et al. | Mar 2006 | B2 |
7020645 | Bisbee et al. | Mar 2006 | B2 |
7039638 | Zhang et al. | May 2006 | B2 |
7039856 | Peairs et al. | May 2006 | B2 |
7051017 | Marchisio | May 2006 | B2 |
7054870 | Holbrook | May 2006 | B2 |
7080320 | Ono | Jul 2006 | B2 |
7096431 | Tambata et al. | Aug 2006 | B2 |
7099819 | Sakai et al. | Aug 2006 | B2 |
7107266 | Breyman et al. | Sep 2006 | B1 |
7117151 | Iwahashi et al. | Oct 2006 | B2 |
7117246 | Christenson et al. | Oct 2006 | B2 |
7117432 | Shanahan et al. | Oct 2006 | B1 |
7130807 | Mikurak | Oct 2006 | B1 |
7131060 | Azuma | Oct 2006 | B1 |
7137075 | Hoshito et al. | Nov 2006 | B2 |
7139739 | Agrafiotis et al. | Nov 2006 | B2 |
7146361 | Broder et al. | Dec 2006 | B2 |
7155668 | Holland et al. | Dec 2006 | B2 |
7158957 | Joseph et al. | Jan 2007 | B2 |
7188107 | Moon et al. | Mar 2007 | B2 |
7188117 | Farahat et al. | Mar 2007 | B2 |
7194458 | Micaelian et al. | Mar 2007 | B1 |
7194483 | Mohan et al. | Mar 2007 | B1 |
7197497 | Cossock | Mar 2007 | B2 |
7209949 | Mousseau et al. | Apr 2007 | B2 |
7233886 | Wegerich et al. | Jun 2007 | B2 |
7233940 | Bamberger et al. | Jun 2007 | B2 |
7239986 | Golub et al. | Jul 2007 | B2 |
7240199 | Tomkow | Jul 2007 | B2 |
7246113 | Cheetham et al. | Jul 2007 | B2 |
7251637 | Caid et al. | Jul 2007 | B1 |
7266365 | Ferguson et al. | Sep 2007 | B2 |
7266545 | Bergman et al. | Sep 2007 | B2 |
7269598 | Marchisio | Sep 2007 | B2 |
7271801 | Toyozawa et al. | Sep 2007 | B2 |
7277919 | Donoho et al. | Oct 2007 | B1 |
7292244 | Vafiadis et al. | Nov 2007 | B2 |
7308451 | Lamping et al. | Dec 2007 | B1 |
7325127 | Olkin et al. | Jan 2008 | B2 |
7353204 | Liu | Apr 2008 | B2 |
7356777 | Borchardt | Apr 2008 | B2 |
7359894 | Liebman et al. | Apr 2008 | B1 |
7363243 | Arnett et al. | Apr 2008 | B2 |
7366759 | Trevithick et al. | Apr 2008 | B2 |
7373612 | Risch et al. | May 2008 | B2 |
7376635 | Porcari et al. | May 2008 | B1 |
7379913 | Steele et al. | May 2008 | B2 |
7383282 | Whitehead et al. | Jun 2008 | B2 |
7401087 | Cooperman et al. | Jul 2008 | B2 |
7412462 | Margolus et al. | Aug 2008 | B2 |
7418397 | Kojima et al. | Aug 2008 | B2 |
7430688 | Matsuno et al. | Sep 2008 | B2 |
7430717 | Spangler | Sep 2008 | B1 |
7433893 | Lowry | Oct 2008 | B2 |
7440662 | Antona et al. | Oct 2008 | B2 |
7444356 | Calistri-Yeh et al. | Oct 2008 | B2 |
7457948 | Bilicksa et al. | Nov 2008 | B1 |
7472110 | Achlioptas | Dec 2008 | B2 |
7478403 | Allavarpu | Jan 2009 | B1 |
7490092 | Morton et al. | Feb 2009 | B2 |
7499923 | Kawatani | Mar 2009 | B2 |
7509256 | Iwahashi et al. | Mar 2009 | B2 |
7516419 | Petro et al. | Apr 2009 | B2 |
7519565 | Prakash et al. | Apr 2009 | B2 |
7523349 | Barras | Apr 2009 | B2 |
7558769 | Scott et al. | Jul 2009 | B2 |
7571177 | Damle | Aug 2009 | B2 |
7574409 | Patinkin | Aug 2009 | B2 |
7584221 | Robertson et al. | Sep 2009 | B2 |
7603628 | Park et al. | Oct 2009 | B2 |
7607083 | Gong et al. | Oct 2009 | B2 |
7639868 | Regli et al. | Dec 2009 | B1 |
7640219 | Perrizo | Dec 2009 | B2 |
7647345 | Trepess et al. | Jan 2010 | B2 |
7668376 | Lin et al. | Feb 2010 | B2 |
7668789 | Forman et al. | Feb 2010 | B1 |
7698167 | Batham et al. | Apr 2010 | B2 |
7712049 | Williams et al. | May 2010 | B2 |
7716223 | Haveliwala et al. | May 2010 | B2 |
7730425 | De los Reyes et al. | Jun 2010 | B2 |
7743059 | Chan et al. | Jun 2010 | B2 |
7756974 | Blumenau | Jul 2010 | B2 |
7761447 | Brill et al. | Jul 2010 | B2 |
7801841 | Mishra et al. | Sep 2010 | B2 |
7831928 | Rose et al. | Nov 2010 | B1 |
7885901 | Hull et al. | Feb 2011 | B2 |
7899274 | Baba et al. | Mar 2011 | B2 |
7971150 | Raskutti et al. | Jun 2011 | B2 |
7984014 | Song et al. | Jul 2011 | B2 |
8010466 | Patinkin | Aug 2011 | B2 |
8010534 | Roitblat | Aug 2011 | B2 |
8032409 | Mikurak | Oct 2011 | B1 |
8060259 | Budhraja et al. | Nov 2011 | B2 |
8065156 | Gazdzinski | Nov 2011 | B2 |
8065307 | Haslam et al. | Nov 2011 | B2 |
8165974 | Privault et al. | Apr 2012 | B2 |
8275773 | Donnelly et al. | Sep 2012 | B2 |
8290778 | Gazdzinski | Oct 2012 | B2 |
8296146 | Gazdzinski | Oct 2012 | B2 |
8296666 | Wright et al. | Oct 2012 | B2 |
8311344 | Dunlop et al. | Nov 2012 | B2 |
8326823 | Grandhi et al. | Dec 2012 | B2 |
8381122 | Louch et al. | Feb 2013 | B2 |
8401710 | Budhraja et al. | Mar 2013 | B2 |
8515946 | Marcucci et al. | Aug 2013 | B2 |
8671353 | Varadarajan | Mar 2014 | B1 |
8676605 | Familant | Mar 2014 | B2 |
8712777 | Gazdzinski | Apr 2014 | B1 |
8719037 | Gazdzinski | May 2014 | B2 |
8719038 | Gazdzinski | May 2014 | B1 |
8781839 | Gazdzinski | Jul 2014 | B1 |
8819569 | SanGiovanni et al. | Aug 2014 | B2 |
9015633 | Takamura et al. | Apr 2015 | B2 |
9256664 | Chakerian et al. | Feb 2016 | B2 |
20020002556 | Yoshida et al. | Jan 2002 | A1 |
20020032735 | Burnstein et al. | Mar 2002 | A1 |
20020055919 | Mikheev | May 2002 | A1 |
20020065912 | Catchpole et al. | May 2002 | A1 |
20020078044 | Song et al. | Jun 2002 | A1 |
20020078090 | Hwang et al. | Jun 2002 | A1 |
20020122543 | Rowen | Sep 2002 | A1 |
20020184193 | Cohen | Dec 2002 | A1 |
20030018652 | Heckerman et al. | Jan 2003 | A1 |
20030046311 | Baidya et al. | Mar 2003 | A1 |
20030065635 | Sahami | Apr 2003 | A1 |
20030084066 | Waterman et al. | May 2003 | A1 |
20030110181 | Schuetze et al. | Jun 2003 | A1 |
20030120651 | Bernstein et al. | Jun 2003 | A1 |
20030130991 | Reijerse et al. | Jul 2003 | A1 |
20030172048 | Kauffman | Sep 2003 | A1 |
20030174179 | Suermondt et al. | Sep 2003 | A1 |
20040024739 | Cooperman et al. | Feb 2004 | A1 |
20040024755 | Rickard | Feb 2004 | A1 |
20040034633 | Rickard | Feb 2004 | A1 |
20040078577 | Feng et al. | Apr 2004 | A1 |
20040083206 | Wu et al. | Apr 2004 | A1 |
20040090472 | Risch et al. | May 2004 | A1 |
20040133650 | Miloushev et al. | Jul 2004 | A1 |
20040163034 | Colbath | Aug 2004 | A1 |
20040181427 | Stobbs et al. | Sep 2004 | A1 |
20040205482 | Basu | Oct 2004 | A1 |
20040205578 | Wolff et al. | Oct 2004 | A1 |
20040215608 | Gourlay | Oct 2004 | A1 |
20040220895 | Carus et al. | Nov 2004 | A1 |
20040243556 | Ferrucci et al. | Dec 2004 | A1 |
20050004949 | Trepess et al. | Jan 2005 | A1 |
20050025357 | Landwehr et al. | Feb 2005 | A1 |
20050091211 | Vernau et al. | Apr 2005 | A1 |
20050097435 | Prakash et al. | May 2005 | A1 |
20050171772 | Iwahashi et al. | Aug 2005 | A1 |
20050203924 | Rosenberg | Sep 2005 | A1 |
20050283473 | Rousso et al. | Dec 2005 | A1 |
20060008151 | Lin et al. | Jan 2006 | A1 |
20060010145 | Al-Kofahi et al. | Jan 2006 | A1 |
20060012297 | Lee et al. | Jan 2006 | A1 |
20060021009 | Lunt | Jan 2006 | A1 |
20060053382 | Gardner et al. | Mar 2006 | A1 |
20060080311 | Potok et al. | Apr 2006 | A1 |
20060106847 | Eckardt et al. | May 2006 | A1 |
20060122974 | Perisic | Jun 2006 | A1 |
20060122997 | Lin | Jun 2006 | A1 |
20060164409 | Borchardt et al. | Jul 2006 | A1 |
20060242013 | Agarwal | Oct 2006 | A1 |
20070020642 | Deng et al. | Jan 2007 | A1 |
20070043774 | Davis et al. | Feb 2007 | A1 |
20070044032 | Mollitor et al. | Feb 2007 | A1 |
20070109297 | Borchardt et al. | May 2007 | A1 |
20070112758 | Livaditis | May 2007 | A1 |
20070150801 | Chidlovskii et al. | Jun 2007 | A1 |
20070214133 | Liberty et al. | Sep 2007 | A1 |
20070288445 | Kraftsow | Dec 2007 | A1 |
20080005081 | Green et al. | Jan 2008 | A1 |
20080109762 | Hundal et al. | May 2008 | A1 |
20080140643 | Ismalon | Jun 2008 | A1 |
20080162478 | Pugh et al. | Jul 2008 | A1 |
20080183855 | Agarwal et al. | Jul 2008 | A1 |
20080189273 | Kraftsow | Aug 2008 | A1 |
20080215427 | Kawada et al. | Sep 2008 | A1 |
20080228675 | Daffy et al. | Sep 2008 | A1 |
20080249999 | Renders et al. | Oct 2008 | A1 |
20080270946 | Risch | Oct 2008 | A1 |
20090018995 | Chidlovskii et al. | Jan 2009 | A1 |
20090041329 | Nordell et al. | Feb 2009 | A1 |
20090043797 | Dorie et al. | Feb 2009 | A1 |
20090049017 | Gross | Feb 2009 | A1 |
20090097733 | Hero et al. | Apr 2009 | A1 |
20090106239 | Getner et al. | Apr 2009 | A1 |
20090125505 | Bhalotia et al. | May 2009 | A1 |
20090222444 | Chowdhury et al. | Sep 2009 | A1 |
20090228499 | Schmidtler et al. | Sep 2009 | A1 |
20090228811 | Adams et al. | Sep 2009 | A1 |
20090259622 | Kolz et al. | Oct 2009 | A1 |
20090265631 | Sigurbjornsson et al. | Oct 2009 | A1 |
20090307213 | Deng et al. | Dec 2009 | A1 |
20100010968 | Redlich | Jan 2010 | A1 |
20100076857 | Deo et al. | Mar 2010 | A1 |
20100100539 | Davis et al. | Apr 2010 | A1 |
20100198802 | Kraftsow | Aug 2010 | A1 |
20100250477 | Yadav | Sep 2010 | A1 |
20100250541 | Richards et al. | Sep 2010 | A1 |
20100262571 | Schmidtler et al. | Oct 2010 | A1 |
20100268661 | Levy et al. | Oct 2010 | A1 |
20100312725 | Privault et al. | Dec 2010 | A1 |
20110016118 | Edala et al. | Jan 2011 | A1 |
20120093421 | Kletter | Apr 2012 | A1 |
20120124034 | Jing et al. | May 2012 | A1 |
20140236947 | Knight | Aug 2014 | A1 |
Number | Date | Country |
---|---|---|
0886227 | Jun 1998 | EP |
1049030 | Jan 2005 | EP |
1024437 | Jul 2005 | EP |
200067162 | Nov 2000 | WO |
2003052627 | Jun 2003 | WO |
2003060766 | Jul 2003 | WO |
2005073881 | Aug 2005 | WO |
2006008733 | Jan 2006 | WO |
Entry |
---|
Gorg et al., Combining Computational Analyses and Interactive Visualization for Document Exploration and Sensemaking in Jigsaw, IEEE Transactions on Visualization and Computer Graphics, vol. 19, No. 10, Oct. 2013, provided by IDS (Year: 2013). |
Anna Sachinopoulou, “Multidimensional Visualization,” Technical Research Centre of Finland, ESPOO 2001, VTT Research Notes 2114, pp. 1-37 (2001). |
Artero et al., “Viz3D: Effective Exploratory Visualization of Large Multidimensional Data Sets,” IEEE Computer Graphics and Image Processing, pp. 340-347 (Oct. 20, 2004). |
B.B. Hubbard, “The World According to Wavelets: The Story of a Mathematical Technique in the Making,” AK Peters (2nd ed.), pp. 227-229, Massachusetts, USA (1998). |
Baeza-Yates et al., “Modern Information Retrieval,” Ch. 2 “Modeling,” Modern Information Retrieval, Harlow: Addison-Wesley, Great Britain 1999, pp. 18-71 (1999). |
Bernard et al.: “Labeled Radial Drawing of Data Structures” Proceedings of the Seventh International Conference on Information Visualization, Infovis. IEEE Symposium, Jul. 16-18, 2003, Piscataway, NJ, USA, IEEE, Jul. 16, 2003, pp. 479-484, XP010648809 (2003). |
Bier et al. “Toolglass and Magic Lenses: The See-Through Interface”, Computer Graphics Proceedings, Proceedings of Siggraph Annual International Conference on Computer Graphics and Interactive Techniques, pp. 73-80, XP000879378 (Aug. 1993). |
Boukhelifa et al., “A Model and Software System for Coordinated and Multiple Views in Exploratory Visualization,” Information Visualization, No. 2, pp. 258-269, GB (2003). |
C. Yip Chung et al., “Thematic Mapping-From Unstructured Documents To Taxonomies,” CIKM'02, Nov. 4-9, 2002, pp. 608-610, ACM, McLean, Virginia, USA (Nov. 4, 2002). |
Chen An et al., “Fuzzy Concept Graph And Application In Web Document Clustering,” IEEE, pp. 101-106 (2001). |
Davison et al. “Brute Force Estimation of the Number of Human Genes Using EST Clustering as a Measure,” IBM Journal of Research & Development, vol. 45, pp. 439-447 (May 1997). |
D. Sullivan, “Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing and Sales,” Ch. 1-3, John Wiley & Sons, New York, NY (2001). |
DeLoura et al., Game Programming Gems 2, Charles River Media, Inc., pp. 182-190, 2001. |
Eades et al. “Multilevel Visualization of Clustered Graphs,” Department of Computer Science and Software Engineering, University of Newcastle, Australia, Proceedings of Graph Drawing '96, Lecture Notes in Computer Science, NR. 1190, (Sep. 1996). |
Eades et al., “Orthogonal Grid Drawing of Clustered Graphs,” Department of Computer Science, the University of Newcastle, Australia, Technical Report 96-04, [Online] 1996, Retrieved from the internet: URL: http://citeseer.ist.psu.edu/eades96ort hogonal.html (1996). |
Estivill-Castro et al. “Amoeba: Hierarchical Clustering Based On Spatial Proximity Using Delaunay Diagram”, Department of Computer Science, The University of Newcastle, Australia, 1999 ACM Sigmod International Conference on Management of Data, vol. 28, No. 2, Jun. 1999, pp. 49-60, Philadelphia, PA, USA (Jun. 1999). |
F. Can, “Incremental Clustering For Dynamic Information Processing,” ACM Transactions On Information Systems, ACM, New York, NY, US, vol. 11, No. 2, pp. 143-164, XP-002308022 (Apr. 1993). |
Fekete et al., “Excentric Labeling: Dynamic Neighborhood Labeling For Data Visualization,” CHI 1999 Conference Proceedings Human Factors In Computing Systems, Pittsburgh, PA, pp. 512-519 (May 15-20, 1999). |
Gorg Carsten et al., “Combining Computational Analyses and Interactive Visualization for Document Exploration and Sensemaking in Jigsaw,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, No. 10, Oct. 1, 2013, pp. 1646-1663, XP011526228, ISSN: 1077-2626, DOI: 10.1109/TVCG.2012.324. |
H. Kawano, “Overview of Mondou Web Search Engine Using Text Mining And Information Visualizing Technologies,” IEEE, 2001, pp. 234-241 (2001). |
http://em-ntserver.unl.edu/Math/mathweb/vecors/vectors.html © 1997. |
Inxight VizServer, “Speeds and Simplifies The Exploration and Sharing of Information”, www.inxight.com/products/vizserver, copyright 2005. |
Jain et al., “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, No. 3, Sep. 1999, pp. 264-323, New York, NY, USA (Sep. 1999). |
Jiang Linhui, “K-Mean Algorithm: Iterative Partitioning Clustering Algorithm,” http://www.cs.regina.ca/~linhui/K_mean_algorithm.html, (2001) Computer Science Department, University of Regina, Saskatchewan, Canada (2001). |
Kanungo et al., “The Analysis Of A Simple K-Means Clustering Algorithm,” pp. 100-109, PROC 16th annual symposium of computational geometry (May 2000). |
Kazumasa Ozawa, “A Stratificational Overlapping Cluster Scheme,” Information Science Center, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572, Japan, Pattern Recognition, vol. 18, pp. 279-286 (1985). |
Kohonen, “Self-Organizing Maps,” Ch. 1-2, Springer-Verlag (3rd ed.) (2001). |
Kurimo, “Fast Latent Semantic Indexing of Spoken Documents by Using Self-Organizing Maps,” IEEE International Conference on Acoustics, Speech, And Signal Processing, vol. 6, pp. 2425-2428 (Jun. 2000). |
Lam et al., “A Sliding Window Technique for Word Recognition,” SPIE, vol. 2422, pp. 38-46, Center of Excellence for Document Analysis and Recognition, State University of New York at Buffalo, NY, USA (1995). |
Lio et al., “Finding Pathogenicity Islands And Gene Transfer Events in Genome Data,” Bioinformatics, vol. 16, pp. 932-940, Department of Zoology, University of Cambridge, UK (Jan. 25, 2000). |
Liu et al., “Robust Multi-Class Transductive Learning with Graphs,” Jun. 2009. |
Liu et al., “TopicPanorama: a Full Picture of Relevant Topics,” 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE. Oct. 25, 2014. pp. 183-192, XP032735860, DOI: 10.1109/VAST.2014.7042494. |
Magarshak, Theory & Practice. Issue 01. May 17, 2000. http://www.flipcode.com/articles/tp_issue01-pf.shtml (May 17, 2000). |
Maria Cristina Ferreira de Oliveira et al., “From Visual Data Exploration to Visual Data Mining: A Survey,” Jul.-Sep. 2003, IEEE Transactions on Visualization and Computer Graphics, vol. 9, No. 3, pp. 378-394 (Jul. 2003). |
McNee, “Meeting User Information Needs in Recommender Systems,” Ph.D. Dissertation, University of Minnesota—Twin Cities, (Jun. 2006). |
Miller et al., “Topic Islands: A Wavelet Based Text Visualization System,” Proceedings of the IEEE Visualization Conference. 1998, pp. 189-196. |
Nan Cao et al., “g-Miner: Interactive Visual Group Mining on Multivariate Graphs,” Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, 2 Penn Plaza, Suite 701, New York, NY 10121-0701, USA, Apr. 17, 2015, pp. 279-288, XP058068337, DOI: 10.1145/2702123.2702446, ISBN: 978-1-4503-3145-6. |
North et al. “A Taxonomy of Multiple Window Coordinations,” Institute for Systems Research & Department of Computer Science, University of Maryland, Maryland, USA, http://drum.lib.umd.edu/bitstream/1903/927/2/CS-TR-3854.pdf (1997). |
O'Neill et al., “DISCO: Intelligent Help for Document Review,” 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, Jun. 8, 2009, pp. 1-10, ICAIL 2009, Association For Computing Machinery, Red Hook, New York (Online); XP 0026 (Jun. 2009). |
Osborn et al., “JUSTICE: A Judicial Search Tool Using Intelligent Concept Extraction,” Department of Computer Science and Software Engineering, University of Melbourne, Australia, ICAIL-99, 1999, pp. 173-181, ACM (1999). |
Paul N. Bennett et al., “Probabilistic Combination of Text Classifiers Using Reliability Indicators,” 2002, ACM, 8 pages. |
Pelleg et al., “Accelerating Exact K-Means Algorithms With Geometric Reasoning,” pp. 277-281, Conf on Knowledge Discovery in Data, Proc fifth ACM SIGKDD (1999). |
R.E. Horn, "Communication Units, Morphology, and Syntax," Visual Language: Global Communication for the 21st Century, Ch. 3, pp. 51-92, MacroVU Press, Bainbridge Island, Washington, USA (1998).
Rauber et al., "Text Mining in the SOMLib Digital Library System: The Representation of Topics and Genres," Applied Intelligence 18, pp. 271-293, Kluwer Academic Publishers (2003).
Ryall et al., "An Interactive Constraint-Based System For Drawing Graphs," UIST '97 Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 97-104 (1997).
Shuldberg et al., "Distilling Information from Text: The EDS TemplateFiller System," Journal of the American Society for Information Science, vol. 44, pp. 493-507 (1993).
Slaney et al., "Multimedia Edges: Finding Hierarchy in all Dimensions," Proc. 9th ACM Intl. Conf. on Multimedia, Ottawa, pp. 29-40, ISBN 1-58113-394-4, XP002295016 (Sep. 30, 2001).
Strehl et al., "Cluster Ensembles - A Knowledge Reuse Framework for Combining Partitioning," Journal of Machine Learning Research, MIT Press, Cambridge, MA, USA, ISSN: 1533-7928, vol. 3, No. 12, pp. 583-617, XP002390603 (Dec. 2002).
V. Faber, "Clustering and the Continuous K-Means Algorithm," Los Alamos Science, The Laboratory, Los Alamos, NM, USA, No. 22, pp. 138-144 (Jan. 1, 1994).
Wang et al., "Learning text classifier using the domain concept hierarchy," Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference, Jun. 29-Jul. 1, 2002, Piscataway, NJ, USA, IEEE, vol. 2, pp. 1230-1234 (2002).
Whiting et al., "Image Quantization: Statistics and Modeling," SPIE Conference of Physics of Medical Imaging, San Diego, CA, USA, vol. 3336, pp. 260-271 (Feb. 1998).
S.S. Weng and C.K. Liu, "Using text classification and multiple concepts to answer emails," Expert Systems with Applications, vol. 26, pp. 529-543 (2004).
Salton, G. et al., "Extended Boolean Information Retrieval," Communications of the Association for Computing Machinery, ACM, New York, NY, USA, vol. 26, No. 12, pp. 1022-1036, XP000670417 (Nov. 1, 1983).
Cutting, Douglass R., et al., "Scatter/Gather: A Cluster-Based Approach to Browsing Large Document Collections," Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM (1992).
Barnett, T., Renders, J.M., Privault, C., Schneider, J. and Wickstrom, R., "Machine Learning Classification for Document Review," Proc. of the DESI III Workshop on Setting Standards for Searching Electronically Stored Information, ICAIL 2009 (2009).
Number | Date | Country
---|---|---
20170351668 A1 | Dec 2017 | US

Number | Date | Country
---|---|---
62344986 | Jun 2016 | US