This document relates generally to user interfaces and more particularly to computer-implemented generation of node-link representations for display on user interfaces.
Historically, hierarchical data has been represented in a structured layout that helps in the understanding of parent-child relationships in the data. One approach has been to display the data in a top-down manner wherein child nodes are shown connected to their parent node and positioned below the parent node. Another approach displays the data in a left-right manner wherein child nodes are shown connected to their parent node and positioned to the right of the parent node.
These approaches encounter multiple difficulties when the display is altered, such as when a user changes the focus of a node display. Several approaches encounter difficulty in illustrating parent-child relationships even in the initial display of the nodes. Such approaches remove the hierarchical relationship hints present in a structured hierarchical arrangement, thus making the layout more difficult to comprehend.
As an example,
In accordance with the teachings provided herein, computer-implemented systems and methods are provided. As an example, a method and system include displaying nodes on a display device, wherein the nodes have a hierarchical context. Positional information associated with a plurality of nodes is used to generate a display for the nodes in response to a change in focal position. The generated node display maintains hierarchical contextual information associated with the nodes. As another example of a system and method, a conal transformation can be performed upon the nodes when generating the display.
When a display change is to occur, such as by changing the display's focus or viewing a portion of the display in greater or lesser detail, node positional data 102 as well as information 104 about the display change are provided to node position calculator software 106. The node position calculator 106 determines new positions 108 for the nodes that maintain all or substantially all of the nodes' contextual information. All or some of the nodes (as the case may be) are displayed at their new positions 108 on a user interface 110 for a user 112, or the nodes' new positional information 108 can be provided to another software program 114 for processing by that software program 114.
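As a minimal, hedged sketch of this data flow (the Node structure, the NodePositionCalculator class, and the compute_new_positions method are illustrative assumptions, not elements of the described system), the calculator's interface might look like the following:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Node:
    """A node with its current layout position (illustrative structure)."""
    node_id: str
    x: float
    y: float
    parent_id: Optional[str] = None  # retained so hierarchical context is preserved


class NodePositionCalculator:
    """Sketch of node position calculator software 106: it receives node
    positional data 102 and information 104 about the display change, and
    produces new positions 108 that maintain the nodes' contextual
    information."""

    def compute_new_positions(self, nodes: List[Node],
                              new_focus: Tuple[float, float]) -> List[Node]:
        # Placeholder transformation; an actual calculator would apply the
        # angular and conal transformations described in the steps below.
        fx, fy = new_focus
        return [Node(n.node_id, n.x - fx, n.y - fy, n.parent_id) for n in nodes]
```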
As an illustration,
It should be understood that node displays may be depicted in many different ways. For example,
After the input data are obtained at step 202, step 204 determines what angular transformation is to be performed. This can be done by calculating an angular distortion strength (dt) based on focal position. The angular distortion strength factor determines how much angular transformation is to be used.
A system can be configured so as to allow a user to designate the degree of angular transformation. For example, if a user wants to perform less angular transformation, then the user specifies a lower value. If the user wants to perform a greater amount of angular transformation, then the user specifies a higher value. The angular distortion strength factor can also be based upon the focal position relative to the center of the display screen: the further the focal position is from the center of the display screen, the greater the angular distortion strength factor. For example, if a user wishes to have maximum angular distortion, the user may specify a distortion factor (“fac”) that is the maximum value within a distortion range of one to ten. In this example, the x and y values for the focus position relative to the center of the display screen are fx=0.5 and fy=0.0. The angular distortion strength factor (“dt”) is determined as:
dt=fac*sqrt(fx*fx+fy*fy)
dt=10*sqrt(0.5*0.5+0*0)
dt=10*0.5
dt=5
This determined value may be used in the enhancing angular transformation to control how much angular displacement is performed upon the nodes. It should be understood that an angular distortion strength factor may be determined in many different ways so as to suit the application at hand. As an illustration, a default value for the angular distortion strength factor can be used so that the user does not have to specify a value.
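The worked example above can also be expressed in code. The following is a minimal sketch (the function name angular_distortion_strength is an assumption for illustration):

```python
import math


def angular_distortion_strength(fac: float, fx: float, fy: float) -> float:
    """Compute dt = fac * sqrt(fx*fx + fy*fy), where (fx, fy) is the focus
    position relative to the center of the display screen and fac is the
    specified distortion factor (one to ten in the example above)."""
    return fac * math.sqrt(fx * fx + fy * fy)


# Reproduces the worked example: fac = 10, fx = 0.5, fy = 0.0  ->  dt = 5.0
print(angular_distortion_strength(10, 0.5, 0.0))
```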
Step 206 applies a non-linear geometry transformation that performs the angular transformation while preserving the distance from each node to the root (the nodes are shifted towards the edges of the cone along concentric arcs). Step 208 calculates conal fisheye lens strength based on the distance from the apex of the cone along its center axis (note: the cone's apex and center axis are illustrated in
Decision step 212 examines whether any more nodes of the tree remain to be processed. If there are, then processing continues at step 202 so that the next node may be obtained. If no more nodes remain to be processed, then the distorted nodes are displayed at step 214 before processing for this operational scenario terminates at end block 216.
It should be understood that many different variations can be utilized in this operational scenario, such as types of transformations other than cone-restricted angular and radial transforms, or the use of different mathematical functions for the transformations to control the shape of the distortions, provided that the context is preserved. Still further, a combination of parameters can be provided to decide which nodes are displayed with a full level of detail. These parameters may be selected to display full level of detail for the complete path from the root node of the node tree to the node of interest. As another example, the level of detail for each node can be determined based on how far it is from the focus point and a direction of interest. These parameters can be chosen such that all the nodes on the path from the node near the focus point to the root are displayed with full level of detail.
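As one hedged illustration of the level-of-detail selection just described (the parent_of mapping and the function name are assumptions), full level of detail could be assigned to every node on the path from the node of interest up to the root:

```python
from typing import Dict, Optional, Set


def full_detail_nodes(parent_of: Dict[str, Optional[str]], focus_node: str) -> Set[str]:
    """Return the ids of nodes to display with full level of detail: the
    complete path from the node of interest up to the root (whose parent
    is None in the assumed parent_of mapping)."""
    detailed: Set[str] = set()
    current: Optional[str] = focus_node
    while current is not None:
        detailed.add(current)
        current = parent_of.get(current)
    return detailed
```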
Step 304 calculates the geometric distance from the center. The center is typically the center position of the display region of the screen, but may also be the center of the radial tree or another location as may be determined by the user or automatically by a computer software program. The node's geometric distance from the center is determined using the function:
distance=sqrt(tx*tx+ty*ty)
Decision step 306 examines whether the calculated distance is zero. If the distance is zero, then step 308 sets the new node positions to zero:
nx=0, ny=0
End block 310 returns the processing to the main flowchart (of
theta=atan2(ty,tx)
Processing continues on
At decision step 318, the values of Theta and FocusTheta are compared. If Theta is less than FocusTheta, then the following calculation is performed at step 320:
relTheta=(theta−FocusTheta)/(FocusTheta−MinAngle)
If Theta is equal to FocusTheta, then relTheta is set to zero at step 322. If Theta is greater than FocusTheta, then the following calculation is performed at step 324:
relTheta=(theta−FocusTheta)/(FocusTheta−MaxAngle)
Processing continues at decision step 326 wherein the value of relTheta is checked. If the value is positive, then the following calculation is performed at step 328:
theta2=-(1+dt)*(FocusTheta-MinAngle)/(dt-1/relTheta)+FocusTheta
If the value of relTheta is zero, then the following calculation is performed at step 330:
theta2=FocusTheta
If the value of relTheta is negative, then the following calculation is performed at step 332:
theta2=(1+dt)*(MaxAngle-FocusTheta)/(dt+1/relTheta)+FocusTheta
Step 334 calculates a new node position as follows:
nx=distance*cos(theta2), ny=distance*sin(theta2)
Processing of the node terminates at end block 336. Additional nodes are processed in a similar manner.
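Pulling steps 304 through 334 together, the following is a minimal sketch of the angular transformation (the function and parameter names are assumptions; the formulas follow the flow described above):

```python
import math
from typing import Tuple


def angular_transform(tx: float, ty: float, focus_theta: float,
                      min_angle: float, max_angle: float,
                      dt: float) -> Tuple[float, float]:
    """Sketch of the angular transformation of steps 304-334: the node at
    (tx, ty), given relative to the center, is shifted angularly about the
    center while its distance from the center is preserved."""
    distance = math.sqrt(tx * tx + ty * ty)      # step 304
    if distance == 0:                            # steps 306-308
        return 0.0, 0.0
    theta = math.atan2(ty, tx)                   # angle of the node about the center

    # Steps 318-324: relative angle with respect to the focus angle
    if theta < focus_theta:
        rel_theta = (theta - focus_theta) / (focus_theta - min_angle)
    elif theta == focus_theta:
        rel_theta = 0.0
    else:
        rel_theta = (theta - focus_theta) / (focus_theta - max_angle)

    # Steps 326-332: distorted angle theta2
    if rel_theta > 0:
        theta2 = -(1 + dt) * (focus_theta - min_angle) / (dt - 1 / rel_theta) + focus_theta
    elif rel_theta == 0:
        theta2 = focus_theta
    else:
        theta2 = (1 + dt) * (max_angle - focus_theta) / (dt + 1 / rel_theta) + focus_theta

    # Step 334: new position at the same distance from the center
    return distance * math.cos(theta2), distance * math.sin(theta2)
```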
The offsets of a node from the focus center (nx, ny) are calculated as follows:
Is (x>fx)?
If yes: nx=(x-fx)*(screenWidth/(2*fx))
If no: nx=(x-fx)*(screenWidth/(2*(screenWidth-fx)))
Is (y>fy)?
If yes: ny=(y-fy)*(screenHeight/(2*fy))
If no: ny=(y-fy)*(screenHeight/(2*(screenHeight-fy)))
Step 404 calculates the geometric distance of a node from the focus center using the function:
distance=sqrt(nx*nx+ny*ny)
Step 406 calculates conal lens strength as follows:
ct2=ct*(height/maxHeight)
Step 408 calculates a conal distortion value as follows:
fac=log(ct2*distance)/log(ct2*maxR+1)
Step 410 calculates a new radius as follows:
newrad=min(fac*maxR, maxR)
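A small numeric sketch of steps 406 through 410 (the sample values for ct, height, maxHeight, distance, and maxR are assumptions chosen only to make the arithmetic concrete; min() is used on the assumption that the new radius is meant to be capped at maxR):

```python
import math

# Assumed sample inputs (illustrative only)
ct, height, max_height = 4.0, 1.0, 2.0   # conal lens strength inputs for step 406
distance, max_r = 0.8, 1.0               # node distance from the focus center and maximum radius

ct2 = ct * (height / max_height)                            # step 406: ct2 = 2.0
fac = math.log(ct2 * distance) / math.log(ct2 * max_r + 1)  # step 408: ~0.43
newrad = min(fac * max_r, max_r)                            # step 410: new radius capped at maxR
print(ct2, fac, newrad)                                     # 2.0 0.427... 0.427...
```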
With reference to
If the distance does not equal zero as determined by decision step 414, then step 418 calculates the angle theta using the arctangent function as follows:
theta=atan2(ny,nx)
Step 420 calculates the new offsets from the focus center (nx2, ny2) as follows, wherein aspect is the aspect ratio (screenHeight/screenWidth):
Is (cos(theta)>0) ?
If yes: nx2=newrad*cos(theta)*(screenWidth−fx)*2/screenWidth
If no: nx2=newrad*cos(theta)*fx*2/screenWidth
Is (sin(theta)>0) ?
If yes: ny2=aspect*newrad*sin(theta)*2*(screenHeight−fy)/screenHeight
If no: ny2=aspect*newrad*sin(theta)*2*fy/screenHeight
The program ends at end block 422. It should be understood that, as with the other processing flows described herein, one or more steps and the order of the steps in the flowcharts may be altered, deleted, modified, and/or augmented and still achieve the desired outcome.
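Pulling the conal flow together, the following is a minimal sketch of the per-node calculation (the function signature, the handling of the zero-distance case, and the use of min() to cap the new radius at maxR are assumptions made for illustration; the formulas otherwise follow the steps shown above):

```python
import math
from typing import Tuple


def conal_transform(x: float, y: float, fx: float, fy: float,
                    screen_width: float, screen_height: float,
                    ct: float, height: float, max_height: float,
                    max_r: float) -> Tuple[float, float]:
    """Sketch of the conal (fisheye) transformation flow above, assuming
    0 < fx < screen_width and 0 < fy < screen_height so that the offset
    divisions are well defined."""
    # Offsets of the node from the focus center (fx, fy)
    if x > fx:
        nx = (x - fx) * (screen_width / (2 * fx))
    else:
        nx = (x - fx) * (screen_width / (2 * (screen_width - fx)))
    if y > fy:
        ny = (y - fy) * (screen_height / (2 * fy))
    else:
        ny = (y - fy) * (screen_height / (2 * (screen_height - fy)))

    distance = math.sqrt(nx * nx + ny * ny)        # step 404
    ct2 = ct * (height / max_height)               # step 406: conal lens strength
    if distance == 0:                              # decision step 414 (zero-distance
        return 0.0, 0.0                            # handling is an assumption here)

    fac = math.log(ct2 * distance) / math.log(ct2 * max_r + 1)  # step 408
    newrad = min(fac * max_r, max_r)               # step 410: new radius, capped at maxR

    theta = math.atan2(ny, nx)                     # step 418
    aspect = screen_height / screen_width          # aspect ratio

    # Step 420: new offsets from the focus center (nx2, ny2)
    if math.cos(theta) > 0:
        nx2 = newrad * math.cos(theta) * (screen_width - fx) * 2 / screen_width
    else:
        nx2 = newrad * math.cos(theta) * fx * 2 / screen_width
    if math.sin(theta) > 0:
        ny2 = aspect * newrad * math.sin(theta) * 2 * (screen_height - fy) / screen_height
    else:
        ny2 = aspect * newrad * math.sin(theta) * 2 * fy / screen_height
    return nx2, ny2
```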
The calculations shown in this operational scenario allow a tree to maintain its cone geometry even after a focal point is moved. This is further illustrated in
While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. For example, the systems and methods can be used in the display of dense node-link diagrams that graphically represent hierarchical data. These include decision trees, organizational charts, OLAP (Online Analytical Processing) data viewers, etc. It is also noted that various fisheye distortion operations can be utilized with the systems and methods disclosed herein. For example, a non-linear expansion angular transformation as disclosed in U.S. Pat. No. 6,693,633, which issued to the assignee of this application and is hereby incorporated herein by reference, can be used. A node position calculator can use such a non-linear expansion angular transformation in combination with a conal transformation. As disclosed in the patent, an angular transformation can include receiving first positions for use in locating a first node and a second node. The first node and second node are separated from each other and are at least substantially equidistant from a focal position. Second positions are determined for the first node and second node such that the angular shift of the first node from its first position to its second position is different in magnitude than the angular shift of the second node from its first position to its second position. These angular shifts for the first node and second node are with respect to the focal position. As another example disclosed in the patent, a non-linear angular transformation can include receiving first positions for locating a first node and a second node on the display device. The first node and second node are separated from each other and are at least substantially equidistant from a predetermined position on the display device. The second positions are determined for the first node and second node such that the angular shift of the first node from its first position to its second position is different in magnitude than the angular shift of the second node from its first position to its second position. The angular shifts for the first node and second node are determined based upon a focus position. The angular shifts for the first node and second node are with respect to the center position, and the first node and second node are displayed on the display device based upon the determined second positions for the first node and second node.
As another example of the wide scope of the systems and methods disclosed herein, the level of detail can be picked for each node based on the radial distortion factor and conal distortion factor. Also, the initial node layout step (e.g., step 202 on
It is further noted that the systems and methods may be implemented on various types of computer architectures, such as for example on a single general purpose computer or workstation, or on a networked system, or in a client-server configuration, or in an application service provider configuration. In multiple computer systems, data signals may be conveyed via networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication among multiple computers or computing devices.
The systems' and methods' data (e.g., associations, mappings, etc.) may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.
The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5295243 | Robertson et al. | Mar 1994 | A |
| 5333254 | Robertson | Jul 1994 | A |
| 5546529 | Bowers et al. | Aug 1996 | A |
| 5555354 | Strasnick et al. | Sep 1996 | A |
| 5565888 | Selker | Oct 1996 | A |
| 5590250 | Lamping et al. | Dec 1996 | A |
| 5619632 | Lamping et al. | Apr 1997 | A |
| 5786820 | Robertson | Jul 1998 | A |
| 5812134 | Pooser et al. | Sep 1998 | A |
| 6057843 | Van Overveld et al. | May 2000 | A |
| 6204850 | Green | Mar 2001 | B1 |
| 6259451 | Tesler | Jul 2001 | B1 |
| 6281899 | Gould et al. | Aug 2001 | B1 |
| 6297824 | Hearst et al. | Oct 2001 | B1 |
| 6300957 | Rao et al. | Oct 2001 | B1 |
| 6304260 | Wills | Oct 2001 | B1 |
| 6326988 | Gould et al. | Dec 2001 | B1 |
| 6377259 | Tenev et al. | Apr 2002 | B1 |
| 6449744 | Hansen | Sep 2002 | B1 |
| 6628304 | Mitchell et al. | Sep 2003 | B2 |
| 6628312 | Rao et al. | Sep 2003 | B1 |
| 6646652 | Card et al. | Nov 2003 | B2 |
| 6654761 | Tenev et al. | Nov 2003 | B2 |
| 6693633 | Loomis et al. | Feb 2004 | B2 |
| 20020163517 | Loomis et al. | Nov 2002 | A1 |
| 20030007002 | Hida et al. | Jan 2003 | A1 |
| 20050273730 | Card et al. | Dec 2005 | A1 |
| 20060074926 | Yakowenko et al. | Apr 2006 | A1 |
| 20060156228 | Gallo et al. | Jul 2006 | A1 |