The present invention relates in general to data visualization and, in particular, to a system and method for generating groups of cluster spines for display.
In general, data visualization transforms numeric or textual information into a graphical display format to assist users in understanding underlying trends and principles in the data. Effective data visualization complements and, in some instances, supplants numbers and text as a more intuitive visual presentation format than raw numbers or text alone. However, graphical data visualization is constrained by the physical limits of computer display systems. Two-dimensional and three-dimensional visualized information can be readily displayed. However, visualized information in excess of three dimensions must be artificially compressed if displayed on conventional display devices. Careful use of color, shape and temporal attributes can simulate multiple dimensions, but comprehension and usability become difficult as additional layers of modeling are artificially grafted into a two- or three-dimensional display space.
Mapping multi-dimensional information into a two- or three-dimensional display space potentially presents several problems. For instance, a viewer could misinterpret dependent relationships between discrete objects displayed adjacently in a two or three dimensional display. Similarly, a viewer could erroneously interpret dependent variables as independent and independent variables as dependent. This type of problem occurs, for example, when visualizing clustered data, which presents discrete groupings of related data. Other factors further complicate the interpretation and perception of visualized data, based on the Gestalt principles of proximity, similarity, closed region, connectedness, good continuation, and closure, such as described in R. E. Horn, “Visual Language: Global Communication for the 21st Century,” Ch. 3, MacroVU Press (1998), the disclosure of which is incorporated by reference.
Conventionally, objects, such as clusters, modeled in multi-dimensional concept space are generally displayed in two- or three-dimensional display space as geometric objects. Independent variables are modeled through object attributes, such as radius, volume, angle, distance and so forth. Dependent variables are modeled within the two or three dimensions. However, poor cluster placement within the two or three dimensions can mislead a viewer into misinterpreting dependent relationships between discrete objects.
Consider, for example, a group of clusters, which each contain a group of points corresponding to objects sharing a common set of traits. Each cluster is located at some distance from a common origin along a vector measured at a fixed angle from a common axis. The radius of each cluster reflects the number of objects contained. Clusters located along the same vector are similar in traits to those clusters located on vectors separated by a small cosine rotation. However, the radius and distance of each cluster from the common origin are independent variables relative to other clusters. When displayed in two dimensions, the overlaying or overlapping of clusters could mislead the viewer into perceiving data dependencies between the clusters where no such data dependencies exist.
Conversely, multi-dimensional information can be advantageously mapped into a two- or three-dimensional display space to assist with comprehension based on spatial appearances. Consider, as a further example, a group of clusters, which again each contain a group of points corresponding to objects sharing a common set of traits and in which one or more “popular” concepts or traits frequently appear in some of the clusters. Since the distance of each cluster from the common origin is an independent variable relative to other clusters, those clusters that contain popular concepts or traits may be placed in widely separated regions of the display space and could similarly mislead the viewer into perceiving no data dependencies between the clusters where such data dependencies exist.
The placement of cluster groups within a two-dimensional display space, such as under a Cartesian coordinate system, also imposes limitations on semantic interrelatedness, density and user interface navigation. Within the display space, cluster groups can be formed into “spines” of semantically-related clusters, which can be placed within the display space with semantically-related groups of cluster spines appearing proximally close to each other and semantically-unrelated cluster spine groups appearing in more distant regions. This form of cluster spine group placement, however, can be potentially misleading. For instance, larger cluster spine groups may need to be placed to accommodate the placement of smaller cluster spine groups while sacrificing the displaying of the semantic interrelatedness of the larger cluster spine groups. Moreover, the density of the overall display space is limited pragmatically and the placement of too many cluster spine groups can overload the user. Finally, navigation within such a display space can be unintuitive and cumbersome, as large cluster spine group placement is driven by available display space and the provisioning of descriptive labels necessarily overlays or intersects placed cluster spine groups.
One approach to depicting thematic relationships between individual clusters applies a force-directed or “spring” algorithm. Clusters are treated as bodies in a virtual physical system, with physics-based forces, such as magnetic repulsion or gravitational attraction, acting on or between the bodies. The forces on each body are computed in discrete time steps and the positions of the bodies are updated. However, the methodology exhibits a computational complexity of order O(n²) per discrete time step and scales poorly to cluster formations beyond a few hundred nodes. Moreover, large groupings of clusters tend to pack densely within the display space, thereby losing any meaning assigned to the proximity of related clusters.
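As a point of comparison, the following is a minimal sketch of a single force-directed iteration, assuming simple inverse-square repulsion and linear spring attraction; the force laws, constants, and function names are illustrative only and are not taken from the invention. The nested pairwise-repulsion loop is what produces the O(n²) cost per time step.

```python
import math

def force_directed_step(positions, edges, repulsion=1.0, spring=0.05, dt=0.1):
    """One discrete time step of a simple spring/repulsion layout.

    positions: list of [x, y] coordinates, one per cluster (body).
    edges: list of (i, j) index pairs connected by springs.
    The pairwise repulsion loop is what gives the O(n^2) cost per step.
    """
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]

    # Pairwise repulsion: every body pushes on every other body -> O(n^2).
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dist2 = dx * dx + dy * dy or 1e-9
            d = math.sqrt(dist2)
            f = repulsion / dist2
            fx, fy = f * dx / d, f * dy / d
            forces[i][0] += fx; forces[i][1] += fy
            forces[j][0] -= fx; forces[j][1] -= fy

    # Spring attraction along edges pulls connected bodies together.
    for i, j in edges:
        dx = positions[j][0] - positions[i][0]
        dy = positions[j][1] - positions[i][1]
        forces[i][0] += spring * dx; forces[i][1] += spring * dy
        forces[j][0] -= spring * dx; forces[j][1] -= spring * dy

    # Update positions from the accumulated forces.
    for i in range(n):
        positions[i][0] += dt * forces[i][0]
        positions[i][1] += dt * forces[i][1]
    return positions
```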
Therefore, there is a need for an approach to providing a visual display space reflecting tighter semantic interrelatedness of cluster spine groups with increased display density. Preferably, such an approach would further form the cluster spine groups by semantically relating entire cluster spines, rather than individual anchor points within each cluster spine.
There is a further need for an approach to orienting semantically-related cluster spine groups within a two-dimensional visual display space relative to a common point of reference, such as a circle. Preferably, such an approach would facilitate improved user interface features through increased cluster spine group density and cluster spine group placement allowing improved descriptive labeling.
Relationships between concept clusters are shown in a two-dimensional display space by combining connectedness and proximity. Clusters sharing “popular” concepts are identified by evaluating thematically-closest neighboring clusters, which are assigned into linear cluster spines arranged to avoid object overlap. The cluster arrangement methodology exhibits a highly-scalable computational complexity of order O(n).
An embodiment provides a system and method for arranging concept clusters in thematic neighborhood relationships in a shaped two-dimensional visual display space. A set of clusters is selected from a concept space. The concept space includes a multiplicity of clusters with concepts visualizing document content based on extracted concepts. A theme in each of a plurality of the clusters is identified. Each theme includes at least one such concept ranked within the cluster. A plurality of unique candidate spines is logically formed. Each candidate spine includes clusters commonly sharing at least one such concept. One or more of the clusters are assigned to one such candidate spine having a substantially best fit. Each best fit candidate spine that is sufficiently unique from each other such best fit candidate spine is identified. Each identified best fit candidate spine is placed in a visual display space. Each non-identified best fit candidate spine is placed in the visual display space relative to an anchor cluster on one such identified best fit candidate spine.
A further embodiment provides a computer-implemented system and method for organizing cluster groups within a display. Cluster groups each having one or more spines of clusters are obtained. Each cluster includes at least one document. At least one of the cluster groups is placed around a ring centrally defined in a display. Circle sectors are defined around the ring and an initial target angle is identified within each of the sectors. The at least one cluster group is positioned within one of the circle sectors at the initial target angle. A further one of the cluster groups is positioned within a different circle sector up to one of a maximum and minimum angle relative to the initial target angle for that sector.
Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein embodiments of the invention are described by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
The document mapper 32 operates on documents retrieved from a plurality of local sources. The local sources include documents 17 maintained in a storage device 16 coupled to a local server 15 and documents 20 maintained in a storage device 19 coupled to a local client 18. The local server 15 and local client 18 are interconnected to the production system 11 over an intranetwork 21. In addition, the document mapper 32 can identify and retrieve documents from remote sources over an internetwork 22, including the Internet, through a gateway 23 interfaced to the intranetwork 21. The remote sources include documents 26 maintained in a storage device 25 coupled to a remote server 24 and documents 29 maintained in a storage device 28 coupled to a remote client 27.
The individual documents 17, 20, 26, 29 include all forms and types of structured and unstructured data, including electronic message stores, such as word processing documents, electronic mail (email) folders, Web pages, and graphical or multimedia data. Notwithstanding, the documents could be in the form of organized data, such as stored in a spreadsheet or database.
In one embodiment, the individual documents 17, 20, 26, 29 include electronic message folders, such as maintained by the Outlook and Outlook Express products, licensed by Microsoft Corporation, Redmond, Wash. The database is an SQL-based relational database, such as the Oracle database management system, release 8, licensed by Oracle Corporation, Redwood Shores, Calif.
The individual computer systems, including backend server 11, production server 32, server 15, client 18, remote server 24 and remote client 27, are general purpose, programmed digital computing devices consisting of a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and peripheral devices, including user interfacing means, such as a keyboard and display. Program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.
Display Generator
The theme generator 41 evaluates the document concepts 47 assigned to each of the clusters 50 and identifies cluster concepts 53 for each cluster 50, as further described below with reference to
The cluster placement component 42 places spines and certain clusters 50 into a two-dimensional display space as a visualization 43. The cluster placement component 42 performs four principal functions. First, the cluster placement component 42 selects candidate spines 55, as further described below with reference to
Second, the cluster placement component 42 assigns each of the clusters 50 to a best fit spine 56, as further described below with reference to
Third, the cluster placement component 42 selects and places unique seed spines 58, as further described below with reference to
Fourth, the cluster placement component 42 places any remaining unplaced best fit spines 56 and clusters 50 that lack best fit spines 56 into spine groups, as further described below with reference to
Each module or component is a computer program, procedure or module written as source code in a conventional programming language, such as the C++ programming language, and is presented for execution by the CPU as object or byte code, as is known in the art. The various implementations of the source code and object and byte codes can be held on a computer-readable storage medium or embodied on a transmission medium in a carrier wave. The display generator 32 operates in accordance with a sequence of process steps, as further described below with reference to
Method Overview
As an initial step, documents 14 are scored and clusters 50 are generated (block 101), such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued Oct. 27, 2009, the disclosure of which is incorporated by reference. Next, one or more cluster concepts 53 are generated for each cluster 50 based on cumulative cluster concept scores 51 (block 102), as further described below with reference to
Cluster Concept Generation
A cluster concept 53 is identified by iteratively processing through each of the clusters 50 (blocks 111-118). During each iteration, the cumulative score 51 of each of the document concepts 47 for all of the documents 14 appearing in a cluster 50 is determined (block 112). The cumulative score 51 can be calculated by summing over the document concept scores 48 for each cluster 50. The document concepts 47 are then ranked by cumulative score 51 as ranked cluster concepts 52 (block 113). In the described embodiment, the ranked cluster concepts 52 appear in descending order, but could alternatively be in ascending order. Next, a cluster concept 53 is determined. The cluster concept 53 can be user provided (block 114). Alternatively, each ranked cluster concept 52 can be evaluated against acceptance criteria (blocks 115 and 116) to select a cluster concept 53. In the described embodiment, cluster concepts 53 must meet the following criteria:
If acceptable (blocks 115 and 116), the ranked cluster concept 52 is selected as a cluster concept 53 (block 117) and processing continues with the next cluster (block 118), after which the routine returns.
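A minimal sketch of this scoring and ranking step follows; the data layout and function names are assumptions for illustration only, and the acceptance test is passed in as a caller-supplied predicate, since the specific criteria are not reproduced here.

```python
from collections import defaultdict

def cluster_concepts(cluster_docs, doc_concept_scores, accept, descending=True):
    """Derive a ranked concept list and a selected cluster concept per cluster.

    cluster_docs: {cluster_id: [doc_id, ...]}
    doc_concept_scores: {doc_id: {concept: score}}
    accept: predicate applied to (concept, cumulative_score, rank) in ranked order.
    """
    results = {}
    for cluster_id, docs in cluster_docs.items():
        # Cumulative score: sum the document concept scores over the cluster's documents.
        cumulative = defaultdict(float)
        for doc in docs:
            for concept, score in doc_concept_scores.get(doc, {}).items():
                cumulative[concept] += score

        # Rank concepts by cumulative score (descending in the described embodiment).
        ranked = sorted(cumulative.items(), key=lambda kv: kv[1], reverse=descending)

        # Select the first ranked concept that meets the acceptance criteria.
        selected = next(
            (c for rank, (c, s) in enumerate(ranked, 1) if accept(c, s, rank)),
            None,
        )
        results[cluster_id] = {"ranked": ranked, "cluster_concept": selected}
    return results
```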
Candidate Spine Selection
Each cluster concept 53 shared by two or more clusters 50 can potentially form a spine of clusters 50. Thus, each cluster concept 53 is iteratively processed (blocks 121-126). During each iteration, each potential spine is evaluated against acceptance criteria (blocks 122-123). In the described embodiment, a potential spine cannot be referenced by only a single cluster 50 (block 122) or by more than 10% of the clusters 50 in the potential spine (block 123). Other criteria and thresholds for determining acceptable cluster concepts 53 are possible. If acceptable (blocks 122, 123), the cluster concept 53 is selected as a candidate spine concept 54 (block 124) and a candidate spine 55 is logically formed (block 125). Processing continues with the next cluster concept 53 (block 126), after which the routine returns.
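The candidate spine test can be sketched as follows; the function name and data layout are assumptions, and the 10% ceiling is interpreted here as an upper bound on how widely a concept may be shared among all clusters.

```python
def select_candidate_spines(concept_to_clusters, total_clusters, max_share=0.10):
    """Form candidate spines from concepts shared by more than one cluster.

    concept_to_clusters: {concept: set of cluster ids sharing that concept}.
    The 10% ceiling is interpreted here as an upper bound on how widely the
    concept may be shared; the exact denominator in the text may differ.
    """
    candidate_spines = {}
    for concept, clusters in concept_to_clusters.items():
        if len(clusters) < 2:
            continue  # a spine referenced by a single cluster is rejected
        if len(clusters) > max_share * total_clusters:
            continue  # overly common concepts are rejected as spine concepts
        candidate_spines[concept] = sorted(clusters)
    return candidate_spines
```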
Cluster to Spine Assignment
The best fit spines 56 are evaluated by iteratively processing through each cluster 50 and candidate spine 55 (blocks 131-136 and 132-134, respectively). During each iteration for a given cluster 50 (block 131), the spine fit of a cluster concept 53 to a candidate spine concept 54 is determined (block 133) for a given candidate spine 55 (block 132). In the described embodiment, the spine fit F is calculated according to the following equation:
where popularity is defined as the number of clusters 50 containing the candidate spine concept 54 as a cluster concept 53, rank is defined as the rank of the candidate spine concept 54 for the cluster 50, and scale is defined as a bias factor for favoring a user-specified concept or other predefined or dynamically specified characteristic. In the described embodiment, a scale of 1.0 is used for candidate spine concepts 54 while a scale of 5.0 is used for user-specified concepts. Processing continues with the next candidate spine 55 (block 134). Next, the cluster 50 is assigned to the candidate spine 55 having a maximum spine fit as a best fit spine 56 (block 135). Processing continues with the next cluster 50 (block 136). Finally, any best fit spine 56 that attracts only a single cluster 50 is discarded (block 137) by assigning the cluster 50 to a next best fit spine 56 (block 138). The routine returns.
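A sketch of the cluster-to-spine assignment step is shown below. Because the spine fit equation itself is not reproduced above, the fit expression in the sketch is only an assumed placeholder that rewards popular spine concepts, penalizes poorly ranked concepts, and applies the 5.0 versus 1.0 scale for user-specified concepts; all names are illustrative.

```python
def assign_clusters_to_spines(cluster_concept_rank, spine_popularity,
                              user_concepts=frozenset()):
    """Assign each cluster to the candidate spine with the best spine fit.

    cluster_concept_rank: {cluster_id: {concept: rank (1 = best)}}
    spine_popularity: {spine_concept: number of clusters containing it}
    The fit below is only a placeholder that rewards popular spine concepts,
    penalizes poorly ranked concepts, and applies the 5.0 vs. 1.0 bias for
    user-specified concepts; the patent's exact equation is not shown here.
    """
    assignment = {}
    for cluster_id, ranks in cluster_concept_rank.items():
        best_spine, best_fit = None, float("-inf")
        for concept, rank in ranks.items():
            if concept not in spine_popularity:
                continue  # concept did not survive candidate spine selection
            scale = 5.0 if concept in user_concepts else 1.0
            fit = scale * spine_popularity[concept] / rank  # assumed form
            if fit > best_fit:
                best_spine, best_fit = concept, fit
        if best_spine is not None:
            assignment[cluster_id] = best_spine
    return assignment
```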
Generate Unique Spine Group Seeds
Candidate unique seed spines are selected by first iteratively processing through each best fit spine 56 (blocks 141-144). During each iteration, a spine concept score vector 57 is generated for only those spine concepts corresponding to each best fit spine 56 (block 142). The spine concept score vector 57 aggregates the cumulative cluster concept scores 51 for each of the clusters 50 in the best fit spine 56. Each spine concept score in the spine concept score vector 57 is normalized, such as by dividing the spine concept score by the length of the spine concept score vector 57 (block 143). Processing continues for each remaining best fit spine 56 (block 144), after which the best fit spines 56 are ordered by number of clusters 50. Each best fit spine 56 is again iteratively processed (blocks 146-151). During each iteration, best fit spines 56 that are not sufficiently large are discarded (block 147). In the described embodiment, a sufficiently large best fit spine 56 contains at least five clusters 50. Next, the similarities of the best fit spine 56 to each previously-selected unique seed spine 58 are calculated and compared (block 148). In the described embodiment, best fit spine similarity is calculated as the cosine of the spine concept score vectors 59, which contain the cumulative cluster concept scores 51 for the cluster concepts 53 of each cluster 50 in the best fit spine 56 or previously-selected unique seed spine 58. Best fit spines 56 that are not sufficiently dissimilar are discarded (block 149). Otherwise, the best fit spine 56 is identified as a unique seed spine 58 and is placed in the visualization 43 (block 150). Processing continues with the next best fit spine 56 (block 151), after which the routine returns.
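The seed spine selection can be sketched as follows, assuming sparse concept score vectors stored as dictionaries; the dissimilarity threshold is an assumption, since the text requires only that seed spines be sufficiently dissimilar.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse concept-score vectors (dicts)."""
    dot = sum(s * v.get(c, 0.0) for c, s in u.items())
    nu = math.sqrt(sum(s * s for s in u.values()))
    nv = math.sqrt(sum(s * s for s in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_unique_seed_spines(spines, min_clusters=5, max_similarity=0.8):
    """Pick seed spines that are large enough and mutually dissimilar.

    spines: list of (spine_id, cluster_count, concept_score_vector) ordered by
    cluster count. The dissimilarity threshold is an assumption; the text only
    requires that seeds be sufficiently dissimilar to one another.
    """
    seeds = []
    for spine_id, cluster_count, vector in spines:
        if cluster_count < min_clusters:
            continue  # not sufficiently large
        if any(cosine(vector, seed_vec) > max_similarity for _, seed_vec in seeds):
            continue  # too similar to an already-selected seed spine
        seeds.append((spine_id, vector))
    return [spine_id for spine_id, _ in seeds]
```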
Remaining Spine Placement
First, any remaining unplaced best fit spines 56 are ordered by number of clusters 50 assigned (block 161). The unplaced best fit spines 56 are iteratively processed (blocks 162-175) against each of the previously-placed spines (blocks 163-174). During each iteration, an anchor cluster 60 is selected from the previously placed spine 58 (block 164), as further described below with reference to
If the cluster 50 is placed (block 167), the best fit spine 56 is labeled as containing candidate anchor clusters 60 (block 171). If the current vector forms a maximum line segment (block 172), the angle of the vector is changed (block 173). In the described embodiment, a maximum line segment contains more than 25 clusters 50, although any other limit could also be applied. Processing continues with each seed spine (block 174) and remaining unplaced best fit spine 56 (block 175). Finally, any remaining unplaced clusters 50 are placed (block 176). In one embodiment, unplaced clusters 50 can be placed adjacent to a best fit anchor cluster 60 or in a display area of the visualization 43 separately from the placed best fit spines 56. The routine then returns.
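The outer placement loop can be sketched as shown below; the anchor selection and grafting steps are passed in as caller-supplied functions, and the maximum line segment and angle-change bookkeeping is omitted. All names are assumptions.

```python
def place_remaining_spines(unplaced_spines, placed_spines, select_anchor, graft):
    """Skeleton of the remaining-spine placement loop.

    unplaced_spines: spines not yet in the display, each a sized collection of
    clusters so that len() orders them by size.
    placed_spines: previously placed (seed) spines offering anchor clusters.
    select_anchor(spine, placed_spine): returns an anchor cluster or None.
    graft(spine, anchor): attempts placement and returns True on success.
    """
    leftovers = []
    for spine in sorted(unplaced_spines, key=len, reverse=True):
        placed = False
        for seed in list(placed_spines):
            anchor = select_anchor(spine, seed)
            if anchor is None:
                continue
            if graft(spine, anchor):
                placed = True
                # A grafted spine can now offer candidate anchor clusters itself.
                placed_spines.append(spine)
                break
        if not placed:
            leftovers.append(spine)  # handled separately, e.g. in their own area
    return leftovers
```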
Anchor Cluster Selection
Cluster Spine Example
The cluster spine 202 visually associates those clusters 204-206 sharing a common popular concept. A theme combines two or more concepts. During cluster spine creation, those clusters 204-206 having available anchor points are identified for use in grafting other cluster spines sharing popular thematically-related concepts, as further described below with reference to
Anchor Points Example
An open edge is a point along the edge of a cluster at which another cluster can be adjacently placed. In the described embodiment, clusters are placed with a slight gap between each cluster to avoid overlapping clusters. Otherwise, a slight overlap within 10% with other clusters is allowed. An open edge is formed by projecting vectors 214a-e outward from the center 213 of the endpoint cluster 212, preferably at normalized angles. The clusters in the cluster spine 211 are arranged in order of cluster similarity.
In one embodiment, given 0 ≤ σ < π, where σ is the angle of the current cluster spine 211, the normalized angles for largest endpoint clusters are at one-third π to minimize interference with other spines while maximizing the degree of interrelatedness between spines. If the cluster ordinal spine position is even, the primary angle is
and the secondary angle is
Otherwise, the primary angle is
and the secondary angle is
Other evenly divisible angles could also be used.
Referring next to
In one embodiment, given 0 ≤ σ < π, where σ is the angle of the current cluster spine 221, the normalized angles for smallest endpoint clusters are at one-third π, but only three open edges are available to graft other thematically-related cluster spines. If the cluster ordinal spine position is even, the primary angle is
and the secondary angle is
Otherwise, the primary angle is
and the secondary angle is
Other evenly divisible angles could also be used.
Referring finally to
In one embodiment, given 0 ≤ σ < π, where σ is the angle of the current cluster spine 231, the normalized angles for midpoint clusters are at one-third π, but only two open edges are available to graft other thematically-related cluster spines. Empirically, limiting the number of available open edges to those facing the direction of cluster similarity helps to maximize the interrelatedness of the overall display space.
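A sketch of projecting open-edge anchor points from a cluster follows. The specific primary and secondary angle fractions are not reproduced above, so the default offsets below are placeholders only, expressed as fractions of π relative to the spine angle σ; the function and parameter names are assumptions.

```python
import math

def open_edge_points(center, radius, spine_angle, fractions=(1/3, 2/3), gap=0.0):
    """Project candidate anchor points outward from a cluster's center.

    center: (x, y) of the endpoint or midpoint cluster; radius: its radius.
    spine_angle: sigma, the angle of the current cluster spine.
    fractions: offsets (in units of pi) from the spine angle at which open
    edges are projected; the exact values in the text are not reproduced, so
    these defaults are placeholders only.
    """
    points = []
    for frac in fractions:
        for sign in (+1, -1):
            angle = spine_angle + sign * frac * math.pi
            x = center[0] + (radius + gap) * math.cos(angle)
            y = center[1] + (radius + gap) * math.sin(angle)
            points.append((x, y, angle))
    return points
```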
Grafting a Spine Cluster Onto a Spine
An angle for placing the cluster 50 is determined (block 241), dependent upon whether the cluster against which the current cluster 50 is being placed is a starting endpoint, midpoint, or last endpoint cluster, as described above with reference to
and the secondary angle is
Otherwise, the primary angle is
and the secondary angle is
Other evenly divisible angles could also be used. The cluster 50 is then placed using the primary angle (block 242). If the cluster 50 is the first cluster in a cluster spine but cannot be placed using the primary angle (block 243), the secondary angle is used and the cluster 50 is placed (block 244). Otherwise, if the cluster 50 is placed but overlaps more than 10% with existing clusters (block 245), the cluster 50 is moved outward (block 246) by the diameter of the cluster 50. Finally, if the cluster 50 is satisfactorily placed (block 247), the function returns an indication that the cluster 50 was placed (block 248). Otherwise, the function returns an indication that the cluster was not placed (block 249).
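The placement attempt can be sketched as follows, assuming clusters are represented by center coordinates and a radius. The first-cluster condition on the secondary angle is simplified here to a plain fallback, and the overlap measure is an assumed approximation of the 10% test; the outward nudge uses the cluster's diameter as described.

```python
import math

def place_cluster(cluster, anchor, primary, secondary, placed, max_overlap=0.10):
    """Try to place a cluster off an anchor cluster along a primary angle,
    falling back to the secondary angle and nudging outward on overlap.

    cluster/anchor/placed entries are dicts with 'x', 'y', 'r' keys (assumed
    layout); 'primary' and 'secondary' are candidate angles in radians.
    """
    def overlap_fraction(a, b):
        # Relative overlap of two circles, 0.0 when they merely touch.
        d = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
        gap = a["r"] + b["r"] - d
        return max(0.0, gap) / min(a["r"], b["r"])

    for angle in (primary, secondary):
        dist = anchor["r"] + cluster["r"]
        for _ in range(10):  # bounded number of outward nudges
            cluster["x"] = anchor["x"] + dist * math.cos(angle)
            cluster["y"] = anchor["y"] + dist * math.sin(angle)
            if all(overlap_fraction(cluster, p) <= max_overlap for p in placed):
                return True  # satisfactorily placed
            dist += 2 * cluster["r"]  # move outward by the cluster's diameter
    return False  # report that the cluster could not be placed
```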
Cluster Placement Relative to an Anchor Point Example
in one embodiment, relative to the vector 268 forming the cluster spine 268.
Completed Cluster Placement Example
Display Generator
Briefly, the cluster placement component 301 performs five principal functions. First, the cluster placement component 42 selects candidate spines 55, as further described above with reference to
Second, the cluster placement component 42 assigns each of the clusters 50 to a best fit spine 56, as further described above with reference to
Third, the cluster placement component 42 selects and places unique seed spines 58, as further described above with reference to
Fourth, the cluster placement component 42 places any remaining unplaced best fit spines 56 into spine groups 303, as further described below with reference to
Finally, any remaining singleton clusters 50 are placed into spine groups 303, as further described below with reference to
The cluster spine group placement component 302 places the spine groups 303 within the visualization 43, as further described below with reference to
Method Overview
As an initial step, documents 14 are scored and clusters 50 are generated (block 311), such as described in commonly-assigned U.S. Pat. No. 7,610,313, issued Oct. 27, 2009, the disclosure of which is incorporated by reference. Next, one or more cluster concepts 53, that is, “themes,” are generated for each cluster 50 based on cumulative cluster concept scores 51 (block 312), as further described above with reference to
Spine groups 303 are then formed and placed within the visualization 43 in the display space, as follows. First, the best fit spines 56 are ordered based on spine length using, for instance, the number of clusters 50 contained in the spine (block 315). Thus, longer best fit spines 56 are selected first. Other orderings of the best fit spines 56 are possible. Unique seed spines are identified from the ordered best fit spines 56 and placed to create best fit spines (block 316), as further described above with reference to
Cluster Assignment
The best fit spines 56 are evaluated by iteratively processing through each cluster 50 and candidate spine 55 (blocks 321-326 and 322-324, respectively). During each iteration for a given cluster 50 (block 321), the spine fit of a cluster concept 53 to a candidate spine concept 54 is determined (block 323) for a given candidate spine 55 (block 322). In the described embodiment, the spine fit F is calculated according to the following equation:
where v is defined as the number of clusters 50 containing the candidate spine concept 54 as a cluster concept 53, r is defined as the rank order of the cluster concept 53, and w is defined as a bias factor. In the described embodiment, a bias factor of 5.0 is used for user-specified concepts, while a bias factor of 1.0 is used for all other concepts. Processing continues with the next candidate spine 55 (block 324). Next, the cluster 50 is assigned to the candidate spine 55 having a maximum spine fit as a best fit spine 56 (block 325). Processing continues with the next cluster 50 (block 326). Finally, any best fit spine 56 that attracts only a single cluster 50 is discarded (block 327) by assigning the cluster 50 to a next best fit spine 56 (block 328). The routine returns.
In a further embodiment, each cluster 50 can be matched to a best fit candidate spine 56 as further described above with reference to
Remaining Cluster Spine Placement
Each of the remaining unplaced cluster spines 56 is iteratively processed (blocks 331-349), as follows. For each unplaced cluster spine 56 (block 331), a list of candidate anchor clusters 60 is first built from the set of placed seed best fit spines 56 (block 332). In the described embodiment, a candidate anchor cluster 60 has been placed in a best fit spine 56, has at least one open edge for grafting a cluster spine 56, and belongs to a best fit spine 56 that has a minimum similarity of 0.1 with the unplaced cluster spine 56, although other minimum similarity values are possible. The similarities between the unplaced cluster spine 56 and the best fit spine of each candidate anchor cluster 60 in the list are determined (block 333). The similarities can be determined by taking cosine values over a set of group concept score vectors 304 formed by aggregating the concept scores for all clusters 56 in the unplaced cluster spine 56 and in the best fit spine of each candidate anchor cluster 60 in the list. Strong candidate anchor clusters 60, which contain the same concept as the unplaced cluster spine 56, are identified (block 334). If no qualified placed anchor clusters 60 are found (block 335), weak candidate anchor clusters 60, which, like the strong candidate anchor clusters 60, are placed, have an open edge, and reflect the minimum best fit spine similarity, are identified (block 336).
Next, the unplaced cluster spine 56 is placed. During spine placement (blocks 338-348), the strong candidate anchor clusters 60 are selected before the weak candidate anchor clusters 60. The best fit spine 56 having a maximum similarity to the unplaced cluster spine 56 is identified (block 337). If a suitable best fit spine 56 is not found (block 338), the largest cluster 60 on the unplaced cluster spine 56 is selected and the unplaced cluster spine 56 becomes a new spine group 303 (block 339). Otherwise, if a best fit spine 56 is found (block 338), the cluster 60 on the unplaced cluster spine 56 that is most similar to the selected anchor cluster 60 is selected (block 340). The unplaced cluster spine 56 is placed by grafting onto the previously placed best fit spine 56 along a vector defined from the center of the anchor cluster 55 (block 341), as further described above with reference to
If the unplaced cluster spine 56 is placed (block 342), the now-placed best fit spine 56 is labeled as containing candidate anchor clusters 60 (block 346). If the current vector forms a maximum line segment (block 347), the angle of the vector is changed (block 348). In the described embodiment, a maximum line segment contains more than 25 clusters 50, although any other limit could also be applied. Processing continues with each remaining unplaced best fit spine 56 (block 349), after which the routine then returns.
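The candidate anchor qualification from the preceding steps can be sketched as follows; the spine and cluster representations, the similarity callback, and the preference for strong over weak candidates are modeled on the description above, with all names assumed.

```python
def candidate_anchors(unplaced_spine, placed_spines, similarity, min_sim=0.1):
    """Build strong and weak candidate anchor lists for an unplaced spine.

    placed_spines: list of placed best fit spines, each a dict with a spine
    'concept' and a 'clusters' list of cluster dicts (assumed layout).
    similarity(a, b): group concept-score similarity, e.g. a cosine value.
    """
    strong, weak = [], []
    for spine in placed_spines:
        if similarity(unplaced_spine, spine) < min_sim:
            continue  # does not meet the minimum best fit spine similarity
        for cluster in spine["clusters"]:
            if not cluster.get("open_edge"):
                continue  # anchor clusters must offer an open edge
            if spine["concept"] == unplaced_spine["concept"]:
                strong.append(cluster)  # shares the unplaced spine's concept
            else:
                weak.append(cluster)
    # Strong candidates are preferred; weak ones are used only as a fallback.
    return strong if strong else weak
```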
Remaining Cluster Placement
Each of the remaining unplaced clusters 60 is iteratively processed (blocks 351-358), as follows. For each unplaced cluster 60, a list of candidate anchor clusters 60 is first built from the set of placed seed best fit spines 56 (block 352). In the described embodiment, a candidate anchor cluster 60 has at least one open edge for grafting a cluster 60. The similarities between the unplaced cluster 60 and each candidate anchor cluster 60 in the list are determined (block 353). The similarities can be determined by taking cosine values of the respective clusters 60. The candidate anchor cluster 60 having the closest similarity to the unplaced cluster 60 is identified (block 354). If a sufficiently similar candidate anchor cluster 60 is found (block 355), the unplaced cluster 60 is placed in proximity to the selected candidate anchor cluster 60 (block 356). Otherwise, the unplaced cluster 60 is placed in a display area of the visualization 43 separately from the placed best fit spines 56 (block 357). Processing continues with each remaining unplaced cluster 60 (block 358), after which the routine then returns.
Example Cluster Spine Group
Next, each of the unplaced remaining singleton clusters 382 is loosely grafted onto a placed best fit spine 371, 376, 379 by first building a candidate anchor cluster list. Each of the remaining singleton clusters 382 is placed proximal to an anchor cluster 60 that is most similar to the singleton cluster. The singleton clusters 373, 382 are placed along a vector 372, 377, 379, but no connecting line is drawn in the visualization 43. Relatedness is indicated by proximity only.
Cluster Spine Group Placement
The spine groups 303 are first sorted by order of importance (block 381). In the described embodiment, the spine groups 303 are sorted by size and concept emphasized state, which corresponds to specific user-specified selections. The spine groups 303 are arranged circumferentially to a central shape defined logically within the visualization 43. In the described embodiment, a circle is defined within the visualization 43. Referring to
Referring back to
where Seeds is the number of initial seed spine groups 303 to be placed circumferentially to the innermost circle 401 and MaxY is the maximum extent along the y-axis of the placed best fit candidate spine groups 303. A group concept score vector 304 is generated (block 383) by aggregating the cluster theme concepts for each spine group 303. In the described embodiment, the group concept score vector 304 is limited to the top 50 concepts based on score, although other limits could also be used. The set of unique seed spine groups 303 is selected and placed at equal distance angles about the innermost circle 401 (block 384). The unique seed spine groups 303 are chosen such that each unique seed spine group 303 is sufficiently dissimilar to the previously-placed unique seed spine groups 303. In the described embodiment, a cosine value of at least 0.2 is used, although other metrics of cluster spine group dissimilarity are possible. Each of the unique seed spine groups 303 is translated to the x-axis, where x=0.5×radius r and y=0.0, and is further rotated or moved outwards away from the innermost circle 401 to avoid overlap.
Each of the remaining spine groups 303 is iteratively processed (blocks 385-393), as follows. The similarities of each unplaced spine group 303 to each previously-placed spine group 303 are determined (block 386) and the seed spine group 303 that is most similar to the unplaced spine group 303 is selected (block 387). The unplaced spine group 303 is placed at the radius 402 of the innermost circle 401 at the angle 404 of the selected seed spine group 303 (block 388). If the unplaced spine group 303 overlaps any placed spine group 303 (block 389), the unplaced spine group 303 is rotated (block 390). However, if the unplaced spine group 303 exceeds the maximum angle 406a or minimum angle 406b after rotation (block 391), the unplaced spine group 303 is translated outwards and rotated in an opposite direction until the overlap is removed (block 392). Referring to
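The ring placement loop can be sketched as follows, assuming at least one seed group, spine groups identified by hashable ids, and an overlap test supplied by the caller. The innermost circle radius formula and the sector's angular limits are not reproduced above, so the expressions below are placeholders only.

```python
import math

def place_spine_groups(seed_groups, remaining, most_similar_seed, overlaps,
                       max_y, max_sweep=math.pi / 6):
    """Sketch of ring placement for spine groups around an innermost circle.

    seed_groups: unique seed spine groups placed at equal angles around the ring.
    remaining: other spine groups, each placed near its most similar seed.
    most_similar_seed(group): index of the closest seed group.
    overlaps(group, angle, radius): True if the group overlaps placed groups.
    The radius below is an assumed stand-in driven by the number of seeds and
    the groups' maximum y-extent; it is not the patent's equation.
    """
    seeds = len(seed_groups)
    radius = max_y * seeds / (2 * math.pi)  # assumed stand-in for the radius formula
    placements = {}

    # Seed spine groups are spaced at equal-distance angles about the circle.
    for i, group in enumerate(seed_groups):
        placements[group] = (radius, 2 * math.pi * i / seeds)

    # Remaining groups start at their most similar seed's angle, rotate until
    # clear, and move outward once the angular limit would be exceeded.
    for group in remaining:
        target = 2 * math.pi * most_similar_seed(group) / seeds
        angle, r, step = target, radius, math.pi / 36
        for _ in range(1000):                    # safety bound for the sketch
            if not overlaps(group, angle, r):
                break
            angle += step
            if abs(angle - target) > max_sweep:  # past the sector's angular limit
                r += max_y                       # translate outwards
                step = -step                     # rotate in the opposite direction
                angle = target + step
        placements[group] = (r, angle)
    return placements
```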
Cluster Spine Group Placement Example
While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.
This patent application is a continuation of U.S. patent application Ser. No. 13/758,982, filed Feb. 4, 2013, pending, which is a continuation of U.S. Pat. No. 8,369,627, issued Feb. 5, 2013, which is a continuation of U.S. Pat. No. 8,155,453, issued Apr. 10, 2012, which is a continuation of U.S. Pat. No. 7,983,492, issued Jul. 19, 2011, which is a continuation of U.S. Pat. No. 7,885,468, issued Feb. 8, 2011, which is a continuation of U.S. Pat. No. 7,720,292, issued May 18, 2010, which is a continuation of U.S. Pat. No. 7,440,622, issued Oct. 21, 2008, which is a continuation-in-part of U.S. Pat. No. 7,191,175, issued Mar. 13, 2007, the priority dates of which are claimed and the disclosures of which are incorporated by reference.
Other Publications
Shuldberg et al., “Distilling Information from Text: The EDS TemplateFiller System,” Journal of the American Society for Information Science, vol. 44, pp. 493-507 (1993). |
V. Faber, “Clustering and the Continuous K-Means Algorithm,” Los Alamos Science, The Laboratory, Los Alamos, NM, US, No. 22, Jan. 1, 1994, pp. 138-144. |
http://em-ntserver.unl.edu/Math/mathweb/vecors/vectors.html © 1997. |
North et al. “A Taxonomy of Multiple Window Coordinations,” Institute for Systems Research & Department of Computer Science, University of Maryland, Maryland, USA, http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=97-18 (1997). |
R.E. Horn, “Communication Units, Morphology, and Syntax,” Visual Language: Global Communication for the 21st Century, 1998, Ch. 3, pp. 51-92, MacroVU Press, Bainbridge Island, Washington, USA. |
B.B. Hubbard, “The World According to Wavelets: The Story of a Mathematical Technique in the Making,” AK Peters (2nd ed.), pp. 227-229, Massachusetts, USA (1998). |
Whiting et al., “Image Quantization: Statistics and Modeling,” SPIE Conference of Physics of Medical Imaging, San Diego, CA, USA, vol. 3336, pp. 260-271 (Feb. 1998). |
Miller et al., “Topic Islands: A Wavelet Based Text Visualization System,” Proceedings of the IEEE Visualization Conference. 1998, pp. 189-196. |
Fekete et al., “Excentric Labeling: Dynamic Neighborhood Labeling for Data Visualization,” CHI 1999 Conference Proceedings Human Factors in Computing Systems, Pittsburgh, PA, pp. 512-519 (May 15-20, 1999). |
Estivill-Castro et al. “Amoeba: Hierarchical Clustering Based on Spatial Proximity Using Delaunay Diagram”, Department of Computer Science, The University of Newcastle, Australia, 1999 ACM Sigmod International Conference on Management of Data, vol. 28, No. 2, Jun. 1-Jun. 3, 1999, pp. 49-60, Philadelphia, PA, USA (Jun. 1, 1999). |
Jain et al., “Data Clustering: A Review,” ACM Computing Surveys, vol. 31, No. 3, Sep. 1999, pp. 264-323, New York, NY, USA (Sep. 1999). |
Baeza-Yates et al., “Modern Information Retrieval,” Ch. 2 “Modeling,” Modern Information Retrieval, Harlow: Addison-Wesley, Great Britain 1999, pp. 18-71 (1999). |
Pelleg et al., “Accelerating Exact K-Means Algorithms With Geometric Reasoning,” pp. 277-281, Conf on Knowledge Discovery in Data, Proc fifth ACM SIGKDD (1999). |
Eades et al., “Orthogonal Grid Drawing of Clustered Graphs,” Department of Computer Science, the University of Newcastle, Australia, Technical Report 96-04, [Online] 1996, Retrieved from the internet: URL:http://citeseer.ist.psu.edu/eades96orthogonal.ht. |
Kanungo et al., “The Analysis of a Simple K-Means Clustering Algorithm,” pp. 100-109, Proc 16th annual symposium of computational geometry (May 2000). |
Kurimo M., “Fast Latent Semantic Indexing of Spoken Documents by Using Self-Organizing Maps” IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 2425-2428 (Jun. 2000). |
Magarshak, Greg., Theory & Practice. Issue 01. May 17, 2000. http://www.flipcode.com/articles/tp.sub.--issue01-pf.shtml (May 17, 2000). |
Anna Sachinopoulou, “Multidimensional Visualization,” Technical Research Centre of Finland, Espoo 2001, VTT Research Notes 2114, pp. 1-37 (2001). |
Bernard et al.: “Labeled Radial Drawing of Data Structures” Proceedings of the Seventh International Conference on Information Visualization, Infovis. IEEE Symposium, Jul. 16-18, 2003, Piscataway, NJ, USA, IEEE, Jul. 16, 2003, pp. 479-484, XP010648809, IS. |
Boukhelifa et al., “A Model and Software System for Coordinated and Multiple Views in Exploratory Visualization,” Information Visualization, No. 2, pp. 258-269, GB (2003). |
Christina Yip Chung et al., “Thematic Mapping—From Unstructured Documents to Taxonomies,” CIKM'02, Nov. 4-9, 2002, pp. 608-610, ACM, McLean, Virginia, USA. |
Hiroyuki Kawano, “Overview of Mondou Web Search Engine Using Text Mining and Information Visualizing Technologies,” IEEE, 2001, pp. 234-241. |
James Osborn et al., “Justice: A Judicial Search Tool Using Intelligent Concept Extraction,” ICAIL-99, 1999, pp. 173-181, ACM. |
Chen An et al., “Fuzzy Concept Graph and Application in Web Document Clustering,” 2001, pp. 101-106, IEEE. |
Can F., “Incremental Clustering for Dynamic Information Processing,” ACM Transactions on Information Systems, Association for Computing Machinery, Apr. 1993, pp. 143-164, vol. 11. No. 2, New York, US. |
Robert E. Horn, “Visual Language: Global Communication for the 21st Century,” 1998, pp. 51-92, Bainbridge, Washington, USA. |
Lam et al., “A Sliding Window Technique for Word Recognition,” SPIE, vol. 2422, pp. 38-46, Center of Excellence for Document Analysis and Recognition, State University of New York at Buffalo, NY, USA (1995). |
Jiang Linhui, “K-Mean Algorithm: Iterative Partitioning Clustering Algorithm,” http://www.cs.regina.ca/-linhui/K.sub.--mean.sub.--algorithm.html, (2001) Computer Science Department, University of Regina, Saskatchewan, Canada (2001). |
Kazumasa Ozawa, “A Stratificational Overlapping Cluster Scheme,” Information Science Center, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572, Japan, Pattern Recognition, vol. 18, pp. 279-286 (1985). |
James Osborn et al., “JUSTICE: A Judicial Search Tool Using Intelligent Concept Extraction,” Department of Computer Science and Software Engineering, University of Melbourne, Australia, ICAIL-99, 1999, pp. 173-181, ACM (1999). |
Davison et al., “Brute Force Estimation of the Number of Human Genes Using EST Clustering as a Measure,” IBM Journal of Research & Development, vol. 45, pp. 439-447 (May 2001). |
Eades et al. “Multilevel Visualization of Clustered Graphs,” Department of Computer Science and Software Engineering, University of Newcastle, Australia, Proceedings of Graph Drawing '96, Lecture Notes in Computer Science, NR. 1190, Sep. 18, 1996-Sep. 20, 1996, pp. 101-112, Berkeley, CA, USA, ISBN: 3-540-62495-3 (Sep. 18, 1996). |
Bier et al. “Toolglass and Magic Lenses: The See-Through Interface”, Computer Graphics Proceedings, Proceedings of Siggraph Annual International Conference on Computer Graphics and Interactive Techniques, pp. 73-80, XP000879378 (Aug. 1993). |
Kohonen, T., “Self-Organizing Maps,” Ch. 1-2, Springer-Verlag (3rd ed.) (2001). |
Maria Cristina Ferreira De Oliveira et al., “From Visual Data Exploration to Visual Data Mining: A Survey,” Jul.-Sep. 2003, IEEE Transactions on Visualization and Computer Graphics, vol. 9, No. 3, pp. 378-394 (Jul. 2003). |
Rauber et al., “Text Mining in the SOMLib Digital Library System: The Representation of Topics and Genres,” Applied Intelligence 18, pp. 271-293, 2003 Kluwer Academic Publishers (2003). |
Slaney, M., et al., “Multimedia Edges: Finding Hierarchy in all Dimensions” Proc. 9-th ACM Intl. Conf. on Multimedia, pp. 29-40, ISBN.1-58113-394-4, Sep. 30, 2001, XP002295016 Ottawa (Sep. 3, 2001). |
Strehl et al., “Cluster Ensembles—A Knowledge Reuse Framework for Combining Partitioning,” Journal of Machine Learning Research, MIT Press, Cambridge, MA, US, ISSN: 1533-7928, vol. 3, No. 12, pp. 583-617, XP002390603 (Dec. 2002). |
Sullivan, Dan., “Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing and Sales,” Ch. 1-3, John Wiley & Sons, New York, NY (2001). |
Wang et al., “Learning text classifier using the domain concept hierarchy,” Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference on Jun. 29-Jul. 1, 2002, Piscataway, NJ, USA, IEEE, vol. 2, pp. 1230-1234 (2002). |
Lio et al., “Finding Pathogenicity Islands and Gene Transfer Events in Genome Data,” Bioinformatics, vol. 16, pp. 932-940, Department of Zoology, University of Cambridge, UK (Jan. 25, 2000). |
Ryall et al., “An Interactive Constraint-Based System for Drawing Graphs,” UIST '97 Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 97-104 (1997). |
Number | Date | Country | |
---|---|---|---|
20140140631 A1 | May 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13758982 | Feb 2013 | US |
Child | 14165557 | US | |
Parent | 13442782 | Apr 2012 | US |
Child | 13758982 | US | |
Parent | 13179130 | Jul 2011 | US |
Child | 13442782 | US | |
Parent | 13022580 | Feb 2011 | US |
Child | 13179130 | US | |
Parent | 12781763 | May 2010 | US |
Child | 13022580 | US | |
Parent | 12254739 | Oct 2008 | US |
Child | 12781763 | US | |
Parent | 10911375 | Aug 2004 | US |
Child | 12254739 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10778416 | Feb 2004 | US |
Child | 10911375 | US |