The invention relates to the field of computer graphics processing, and more specifically, to a method and system for generating detail-in-context lens presentations for terrain or elevation data.
Display screens are the primary visual display interface for computers. One problem with display screens is that they are limited in size, thus presenting a challenge to user interface design, particularly when large amounts of visual information are to be displayed. This problem is often referred to as the “screen real estate problem”. Known tools for addressing this problem include panning and zooming. While these tools are suitable for a large number of display applications, they become less effective when sections of the visual information are spatially related, for example in layered maps and three-dimensional representations. In this type of visual information display, panning and zooming are less effective because much of the context of the visual information may be hidden in the panned or zoomed display.
A more recent solution to the screen real estate problem involves the application of “detail-in-context” presentation techniques. Detail-in-context is the magnification of a particular region-of-interest (the “focal region” or “detail”) in a presentation while preserving visibility of the surrounding information (the “context”). This technique has applicability to the display of large surface area media (e.g., digital maps) on display screens of variable size including those of graphics workstations, laptop computers, personal digital assistants (“PDAs”), and cellular telephones.
In general, a detail-in-context presentation may be considered as a distorted view (or distortion) of a region-of-interest in an original image or representation where the distortion is the result of the application of a “lens”-like distortion function to the original image. The lens distortion is typically characterized by magnification of a region-of-interest (the “focal region”) in an image where detail is desired in combination with compression of a region of the remaining information surrounding the region-of-interest (the “shoulder region”). The area of the image affected by the lens includes the focal region and the shoulder region. These regions define the perimeter of the lens. The shoulder region and the area surrounding the lens provide “context” for the “detail” in the focal region of the lens. The resulting detail-in-context presentation resembles the application of a lens to the image. A detailed review of various detail-in-context presentation techniques such as “Elastic Presentation Space” (“EPS”) may be found in a publication by Marianne S. T. Carpendale, entitled “A Framework for Elastic Presentation Space” (Carpendale, Marianne S. T., A Framework for Elastic Presentation Space (Burnaby, British Columbia: Simon Fraser University, 1999)), which is incorporated herein by reference.
Note that in the detail-in-context discourse, differentiation is often made between the terms “representation” and “presentation”. A representation is a formal system, or mapping, for specifying raw information or data that is stored in a computer or data processing system. For example, a digital map of a city is a representation of raw data including street names and the relative geographic location of streets and utilities. Such a representation may be displayed on a display screen or printed on paper. On the other hand, a presentation is a spatial organization of a given representation that is appropriate for the task at hand. Thus, a presentation of a representation organizes such things as the point of view and the relative emphasis of different parts or regions of the representation. For example, a digital map of a city may be presented with a region magnified to reveal street names.
One shortcoming of existing detail-in-context presentation methods is their inability to effectively distort terrain or other elevation data including digital elevation model (“DEM”) data. In general, a DEM is a representation of cartographic information in a raster, vector, or other data format. Typically, a DEM consists of a sampled array of elevations for a number of ground positions at regularly spaced intervals. The intervals may be, for example, 7.5-minute, 15-minute, 2-arc-second (also known as 30-minute), and 1-degree units. The 7.5- and 15-minute DEMs may be categorized as large-scale, 2-arc-second DEMs may be categorized as intermediate-scale, and 1-degree DEMs may be categorized as small-scale. Often, for example, the distortion of DEM data using existing detail-in-context methods will result in a detail-in-context presentation in which the viewer appears to be “underneath” the data.
A need therefore exists for an effective method and system for generating detail-in-context presentations for elevation or terrain data. Accordingly, a solution that addresses, at least in part, the above and other shortcomings is desired.
According to one aspect of the invention, there is provided a method for generating a presentation of a region-of-interest in a terrain data representation for display on a display screen, comprising: translating each point of the representation within a lens bounds to a rotated plane being normal to a vector defined by a position for the region-of-interest with respect to a base plane for the representation and an apex above the base plane, the lens bounds defining a shoulder region at least partially surrounding a focal bounds defining a focal region in which the position is located, each point having a respective height above the base plane; displacing each translated point from the rotated plane by a function of the respective height and a magnification for the focal region, the magnification varying across the shoulder region in accordance with a drop-off function; rotating each displaced point toward a viewpoint for the region-of-interest to maintain visibility of each displaced point and each point of the data representation beyond the lens bounds when viewed from the viewpoint; and, adjusting each rotated point corresponding to the shoulder region to provide a smooth transition to the data representation beyond the lens bounds.
The method may further include projecting each adjusted point within the shoulder region, each rotated point within the focal region, and each point of the representation beyond the lens bounds onto a plane in a direction aligned with the viewpoint to produce the presentation. The method may further include displaying the presentation on the display screen. The step of translating each point may further include determining a maximum translation for a point on the lens bounds and determining a translation for each point within the lens bounds by scaling the maximum translation in accordance with a distance of each point from the lens bounds. The function may be a product of the magnification and a difference between a magnitude of a vector defined by an origin of the representation with respect to the base plane and the viewpoint and the respective height. The step of rotating each displaced point may further include determining an axis of rotation for the rotating from a cross product of a vector defined by an origin of the representation with respect to the base plane and the viewpoint and a vector defined by the origin and the apex. The step of adjusting each rotated point corresponding to the shoulder region may further include adding to each rotated point a weighted average of first and second difference vectors scaled by the drop-off function, the first and second difference vectors corresponding to a difference between first and second points on the lens bounds and corresponding first and second displaced points, respectively, the first and second points being on a line drawn through the rotated point. The method may further include approximating the representation with a mesh. And, the method may further include approximating the respective height using height information from surrounding points.
In accordance with further aspects of the present invention there are provided apparatus such as a data processing system, a method for adapting this system, as well as articles of manufacture such as a computer readable medium having program instructions recorded thereon for practising the method of the invention.
Further features and advantages of the embodiments of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
In the following description, details are set forth to provide an understanding of the invention. In some instances, certain software, circuits, structures and methods have not been described or shown in detail in order not to obscure the invention. The term “data processing system” is used herein to refer to any machine for processing data, including the computer systems and network arrangements described herein. The present invention may be implemented in any computer programming language provided that the operating system of the data processing system provides the facilities that may support the requirements of the present invention. Any limitations presented would be a result of a particular type of operating system or computer programming language and would not be a limitation of the present invention.
As mentioned above, a detail-in-context presentation may be considered as a distorted view (or distortion) of a portion of the original representation or image where the distortion is the result of the application of a “lens”-like distortion function to the original representation. In general, detail-in-context data presentations are characterized by magnification of areas of an image where detail is desired, in combination with compression of a restricted range of areas of the remaining information, the result typically giving the appearance of a lens having been applied to the display surface. Using the techniques described by Carpendale, points in a representation are displaced in three dimensions and a perspective projection is used to display the points on a two-dimensional presentation display. Thus, when a lens is applied to a two-dimensional continuous surface representation, for example, the resulting presentation appears to be three-dimensional. In other words, the lens transformation appears to have stretched the continuous surface in a third dimension. In EPS graphics technology, a two-dimensional visual representation is placed onto a surface; this surface is placed in three-dimensional space; the surface, containing the representation, is viewed through perspective projection; and the surface is manipulated to effect the reorganization of image details. The presentation transformation is separated into two steps: surface manipulation or distortion and perspective projection.
EPS is applicable to multidimensional data and is well suited to implementation on a computer for dynamic detail-in-context display on an electronic display surface such as a monitor. In the case of two-dimensional data, EPS is typically characterized by magnification of areas of an image where detail is desired 233, in combination with compression of a restricted range of areas of the remaining information (i.e., the context) 234, the end result typically giving the appearance of a lens 230 having been applied to the display surface. The areas of the lens 230 where compression occurs may be referred to as the “shoulder” or shoulder region 234 of the lens 230. The area of the representation transformed by the lens may be referred to as the “lensed area”. The lensed area thus includes the focal region 233 and the shoulder region 234. To reiterate, the source image or representation to be viewed is located in the base plane 210. Magnification 233 and compression 234 are achieved through elevating elements of the source image relative to the base plane 210, and then projecting the resultant distorted surface onto the reference view plane 201. EPS performs detail-in-context presentation of n-dimensional data through the use of a procedure wherein the data is mapped into a region in an (n+1) dimensional space, manipulated through perspective projections in the (n+1) dimensional space, and then finally transformed back into n-dimensional space for presentation. EPS has numerous advantages over conventional zoom, pan, and scroll technologies, including the capability of preserving the visibility of information outside 210, 234 the local region of interest 233.
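For illustration, the focal/shoulder structure described above can be captured as a radial magnification profile. The following minimal numpy sketch assumes a circular lens and a cosine-shaped drop-off; the disclosure does not mandate a particular drop-off function, and all identifiers are illustrative.

```python
import numpy as np

def lens_magnification(d, focal_radius, lens_radius, mag):
    """Magnification at distance d from the lens centre: constant `mag`
    inside the focal region, a smooth cosine drop-off across the
    shoulder, and 1 (no magnification) beyond the lens bounds."""
    d = np.asarray(d, dtype=float)
    t = np.clip((d - focal_radius) / (lens_radius - focal_radius), 0.0, 1.0)
    drop_off = 0.5 * (1.0 + np.cos(np.pi * t))  # 1 at the focal bounds, 0 at the lens bounds
    return np.where(d <= lens_radius, 1.0 + (mag - 1.0) * drop_off, 1.0)
```

For example, `lens_magnification(0.0, 1.0, 3.0, 4.0)` returns the full focal magnification of 4, which falls smoothly to 1 at the lens bounds.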
For example, and referring to
Thus, the data processing system 300 includes computer executable programmed instructions for directing the system 300 to implement the embodiments of the present invention. The programmed instructions may be embodied in one or more hardware or software modules 331 resident in the memory 330 of the data processing system 300. Alternatively, the programmed instructions may be embodied on a computer readable medium (such as a CD disk or floppy disk) which may be used for transporting the programmed instructions to the memory 330 of the data processing system 300. Alternatively, the programmed instructions may be embedded in a computer-readable, signal or signal-bearing medium that is uploaded to a network by a vendor or supplier of the programmed instructions, and this signal or signal-bearing medium may be downloaded through an interface to the data processing system 300 from the network by end users or potential buyers.
As mentioned, detail-in-context presentations of data using techniques such as pliable surfaces, as described by Carpendale, are useful in presenting large amounts of information on display surfaces of variable size. Detail-in-context views allow magnification of a particular region-of-interest (the “focal region”) 233 in a data presentation while preserving visibility of the surrounding information 210.
Now, referring to
Step 1: Define the terrain dataspace 470 in which the terrain dataset 450 is viewed. The terrain dataspace 470 consists of a perspective viewing volume 471 that is defined by an apex (or camera position) 440 and a viewing frustum 420. The terrain dataset 450 is defined with respect to the z=0 base plane 410 (i.e., the x, y plane). A user can view the terrain dataset 450 from any point above the terrain surface 410. The viewpoint is referred to as the view reference point vrp in
Step 2: Calculate the apex-aligned vector 460. The apex-aligned vector 460 is a vector from the three-dimensional lens position 480 to the apex 440 of the viewing frustum 420. The x, y coordinates of the three-dimensional lens position 480 are defined by the user in the z=0 plane. The z coordinate of the lens position 480 is found by approximation using the surrounding terrain dataset 450 elevation values. The method of approximation is described in more detail in the optimizations section below. Mathematically, the apex-aligned vector is defined as a=apex−lenspos, where apex is the apex 440 of the viewing frustum 420 and lenspos is the three-dimensional lens position 480.
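A near-direct transcription of this step follows; the `approx_z` callback stands in for the elevation approximation described in the optimizations section below, and all names are illustrative.

```python
import numpy as np

def apex_aligned_vector(apex, lens_x, lens_y, approx_z):
    """Step 2: a = apex - lenspos, with the lens z approximated from
    the surrounding terrain elevations."""
    lenspos = np.array([lens_x, lens_y, approx_z(lens_x, lens_y)])
    a = np.asarray(apex, dtype=float) - lenspos
    return lenspos, a / np.linalg.norm(a)  # lens position and unit apex-aligned vector
```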
Step 3: Rotate each point of the dataset 450 that falls within the lens bounds 482 such that a corresponding portion 510 of the base plane 410 in which the terrain dataset 450 is defined remains perpendicular to the apex-aligned vector 460. As stated above, the terrain dataset 450 is defined with respect to the z=0 plane. As the lens position 480 is moved, the apex-aligned vector 460 will no longer be perpendicular to the z=0 plane (see
Each point of the dataset 450 within the lens bounds 482 is rotated by an appropriate amount such that each point maintains its perpendicular spatial relationship with respect to the apex-aligned vector 460. Since the displacement algorithm utilizes a perspective viewing volume 471, and the terrain dataset 450 is assumed to be viewed through the perspective viewing volume 471, the rotation of each point is specified as a translation instead of using a rotation matrix. This is due to the fact that, when viewed through a perspective viewing volume 471, objects do not visually maintain their shapes as they are rotated about arbitrary axes. For example, a circle that is defined in the z=0 plane within a viewing frustum that has an apex defined along the positive z axis will visually become an oval when rotated about the x or y axes. In order to maintain the visual shape of the lens and focal region, each point within the lens bounds 482 is translated an appropriate distance along the apex-aligned vector 460. This ensures that the bounds 482 of the lens remain visually constant as the lens is moved around the dataspace 470. The calculations for determining the amount of translation for each point that falls within the lens bounds 482 are described in the following.
Step 3a: Calculate the maximum translation 610 that can occur. The pseudo-rotation of the points within a lens bounds 482 occurs about an axis of rotation. The axis of rotation can be found by taking the cross-product of the unit vector (0,0,1) with the apex-aligned vector 460 or a, that is, (0,0,1)×a=axis.
The maximum translation 610 occurs for the points (when taken with respect to the centre of the lens 480) that are on the lens bounds 482 and that are perpendicular to the axis of rotation. Mathematically, a point p for which maximum translation occurs is a point for which the following equation holds true: (p−lenspos)·axis=0, when p is on the lens bounds 482. The maximum translation 610 that can occur for a point p is found and is used to interpolate the translation values for all points interior to the lens bounds 482.
The maximum translation value 610 is found by taking a point p that is perpendicular to the axis of rotation (as stated above), and projecting it onto the rotated plane (see
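The construction itself refers to a figure that is not reproduced here. Under the reading that the maximum translation carries such a bounds point along the apex-aligned vector into the rotated plane through the lens position, a sketch is:

```python
import numpy as np

def max_translation(lenspos, a_hat, radius):
    """Step 3a: translation that carries a lens-bounds point
    perpendicular to the rotation axis onto the rotated plane."""
    axis = np.cross([0.0, 0.0, 1.0], a_hat)        # (0,0,1) x a
    if np.linalg.norm(axis) < 1e-12:
        return 0.0                                 # apex directly overhead: no pseudo-rotation
    d = np.cross(axis, a_hat)                      # in-plane direction perpendicular to the axis
    d2 = d[:2] / np.linalg.norm(d[:2])
    q = lenspos + radius * np.array([d2[0], d2[1], 0.0])  # bounds point at lens-plane height
    # translate q along a_hat until (q + t*a_hat - lenspos) . a_hat = 0
    return abs(float(np.dot(lenspos - q, a_hat)))
```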
Step 3b: Calculate the magnitude of the translation for each point p. To find the magnitude of the translation for each point p, the apex-aligned vector a or 460 is projected onto the z=0 plane which results in a two-dimensional vector a2D (i.e., where the z coordinate is 0). This is equivalent to taking the x and y coordinates, and disregarding the z coordinate: a2D=(ax, ay). The a2D vector is then normalized. Each point p within the lens bounds 482 is projected onto the z=0 plane (but no normalization occurs), and is specified as a vector with respect to the centre of the lens lenspos or 480: p2D=(px, py)−(lensposx, lensposy). The vectors a2D and p2D are used to find a scaling factor that will scale the maximum translation value maxt 610, which will result in the magnitude of translation for the point p. The scaling factor scalep is found by projecting p2D onto a2D, and taking the magnitude of the resulting vector (see
Step 3c: Translate each point p with respect to the lens position 480. Each point p is translated with respect to the lens position 480 in order to maintain the spatial relationships between points. Therefore, each point p is projected onto the plane that contains the lens position point 480 and that is parallel to the z=0 plane. Once the point has been projected onto this plane, the point is translated along the apex-aligned vector a or 460, by a magnitude of translation transp: ptranslated=(px, py, lensposz)+transp*anormalized, where anormalized is the unit apex-aligned vector. Since the elevation value of the point was eliminated when the point was projected onto the plane that contains the lens position 480, the elevation value must be added back to the point: ptranslated=ptranslated+(pz−lensposz)*anormalized.
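Steps 3b and 3c can be combined into one sketch. The text above takes the magnitude of the projection; a signed projection is used here so that a single expression covers both sides of the lens, which is an assumed sign convention:

```python
import numpy as np

def pseudo_rotate_point(p, lenspos, a_hat, max_t, radius):
    """Steps 3b-3c: translate p along the unit apex-aligned vector so
    the lensed portion of the base plane stays normal to that vector."""
    if np.linalg.norm(a_hat[:2]) < 1e-12:
        return np.asarray(p, dtype=float)        # apex overhead: nothing to rotate
    a2d = a_hat[:2] / np.linalg.norm(a_hat[:2])  # a projected onto z=0 and normalized
    p2d = p[:2] - lenspos[:2]                    # p relative to the lens centre
    scale_p = float(np.dot(p2d, a2d))            # (signed) projection of p2D onto a2D
    trans_p = max_t * scale_p / radius           # interpolated magnitude of translation
    p_flat = np.array([p[0], p[1], lenspos[2]])  # project p onto the lens-position plane
    p_t = p_flat + trans_p * a_hat               # translate along the apex-aligned vector
    return p_t + (p[2] - lenspos[2]) * a_hat     # add the eliminated elevation back
```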
Step 4: Displace each point p by the appropriate magnification factor. As shown in
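The displacement itself is described with reference to a figure. One reading consistent with the summary above, in which the displacement is the product of the drop-off-weighted magnification and the difference between the origin-to-viewpoint distance and the point's height, is sketched below; this reading, and every identifier in it, is an assumption rather than the disclosed formula.

```python
import numpy as np

def displace_point(p_translated, p_z, a_hat, mag, drop_off_p, d_view):
    """Step 4: displace a translated point from the rotated plane along
    the unit apex-aligned vector.  mag is the focal magnification,
    drop_off_p the shoulder drop-off value at this point (1 in the
    focal region, 0 at the lens bounds), and d_view = ||vrp - o||."""
    mag_p = 1.0 + (mag - 1.0) * drop_off_p  # magnification faded across the shoulder
    height_p = mag_p * (d_view - p_z)       # 'product of magnification and difference'
    return np.asarray(p_translated, dtype=float) + height_p * np.asarray(a_hat, dtype=float)
```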
Step 5: Rotate the lens 810 towards the view reference point vrp. Since the terrain dataset 450 can be viewed from any point vrp above the terrain surface, it would be useful to be able to see the lens focal region (i.e., the region between the lens position 480 and the focal bounds 481) at all times from the viewpoint. To accomplish this, each point p that falls within the lens bounds 482 is rotated towards the vrp. Given the origin o or 490 of the dataspace 470 in which the terrain data 450 is defined, two vectors ao=(apex−o)normalized and vo=(vrp−o)normalized are defined. The axis of rotation is computed using the cross-product of the two vectors axis=ao×vo, and the angle of rotation is θ=arccos(ao·vo) (see
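The disclosure fixes the axis and angle of rotation; the rotation itself can be carried out with Rodrigues' formula, as in the following sketch (identifiers are illustrative):

```python
import numpy as np

def rotate_towards_viewpoint(p, o, apex, vrp):
    """Step 5: rotate a displaced point p about the dataspace origin o
    so the lens faces the view reference point."""
    ao = apex - o
    ao = ao / np.linalg.norm(ao)
    vo = vrp - o
    vo = vo / np.linalg.norm(vo)
    axis = np.cross(ao, vo)                # axis = ao x vo
    s = np.linalg.norm(axis)
    if s < 1e-12:
        return np.asarray(p, dtype=float)  # viewpoint already apex-aligned
    k = axis / s
    theta = np.arccos(np.clip(np.dot(ao, vo), -1.0, 1.0))
    v = p - o                              # rotate about o with Rodrigues' formula
    v_rot = (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return o + v_rot
```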
Step 6: Create smooth shoulders 1010 that are connected to the context data 450. After displacement and rotation, the shoulders 1010 of the lens 810 do not line up correctly with the context data 450 (i.e., points that fall outside of the lens bounds 482).
Step 6a: Find the axis of rotation that a point p was rotated about. Each point p that falls within the shoulder 1010 of the lens 810 has undergone two rotation transformations (i.e., the pseudo-rotation towards the apex and the rotation towards the vrp). The axes of rotation for these two transformations may have been different. In other words, the rotation of any given point p is the result of two separate rotations. These two rotations can be expressed as a single rotation about a vector axis resulting from the cross product of ao and v (i.e., axis=ao×v). The vector ao is defined above and the vector v=protated−p.
Step 6b: Project the point p onto the axis of rotation. The two-dimensional version of the point p was defined above as p2D=(px, py)−(lensposx, lensposy). The vector p2D is projected onto the two-dimensional version of the axis of rotation axis2D (see
Step 6c: Find two points on the edge of the lens bounds 482 that form a line through pprojected that is perpendicular to axis2D. In order to find two edge points that correspond to this definition, the equation of a line is used, pt=pto+td, where pto=pprojected, and the direction vector is defined as d=(p2D−pprojected)/∥p2D−pprojected∥. The parameter t can be found using the Pythagorean theorem: t=√(radius²−∥pprojected∥²), where radius or r is the radius of the lens bounds 482. Two edge points pt12D and pt22D are found by using ±t in the line equation. The z elevation coordinates of these two points are found using the approximation method that is described in the optimizations section below, yielding the three-dimensional edge points pt1 and pt2.
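Steps 6b and 6c in one sketch, with two-dimensional coordinates taken relative to the lens centre and `approx_z` again standing in for the elevation approximation:

```python
import numpy as np

def shoulder_edge_points(p2d, axis2d, lenspos, radius, approx_z):
    """Steps 6b-6c: project p2d onto the 2-D rotation axis, then find
    the two lens-bounds points on the perpendicular line through the
    projection."""
    axis_hat = axis2d / np.linalg.norm(axis2d)
    p_proj = np.dot(p2d, axis_hat) * axis_hat  # Step 6b: projection onto the axis
    d = p2d - p_proj
    d = d / np.linalg.norm(d)                  # Step 6c: direction of the line
    t = np.sqrt(max(radius**2 - float(np.dot(p_proj, p_proj)), 0.0))
    edges = []
    for s in (+t, -t):                         # pt1 and pt2 via +/- t
        q = p_proj + s * d + lenspos[:2]       # back to absolute x, y
        edges.append(np.array([q[0], q[1], approx_z(q[0], q[1])]))
    return edges                               # [pt1, pt2] in three dimensions
```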
Step 6d: Apply rotation and displacement transformations to each edge point pt1, pt2 and find the difference vectors diff1, diff2 between the original and transformed edge points. Each edge point will undergo the pseudo-rotation, displacement, and final rotation transformations that are specified in Steps 2-5 above in order to obtain the difference between the original edge points (pt1 and pt2) and the transformed edge points (pt1transformed and pt2transformed, respectively). This difference specifies the magnitude and direction of translation that the edge points will undergo, which will essentially connect the lens shoulder region 1010 back to the context data 450. The difference vectors for the two edge points are used as a weighted average to find the amount of translation that is needed for points p that are interior to the lens bounds 482 (i.e., points that do not fall on the lens bounds 482 but rather fall between the lens bounds 482 and the focal bounds 481).
Step 6e: Calculate the amount of translation for a point p to obtain smooth shoulders 1010. The difference vectors diff1 and diff2 that were found for each edge point are used as a weighted average to find the amount of translation for a point p. The weight w for diff2 is given by w=∥(px, py)−pt22D∥/∥pt12D−pt22D∥, where pt12D and pt22D are as defined above. The difference vector diffp for point p is diffp=(1−shoulder(p))((1−w) diff1+w diff2), where shoulder(p) is the shoulder drop-off function. Since the weighted average is taken across the entire bounds of the lens, and since the points that fall within the focal region 1040 of the lens 810 should not be translated (i.e., only the shoulders 1010 of the lens 810 are altered), the difference vector must take the focal region 1040 of the lens 810 into consideration. For this reason, the factor (1−shoulder(p)) is introduced. The final displacement of a point p is pdisplaced=protated+diffp.
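A sketch of Step 6e; `drop_off` is the shoulder drop-off function, taken here to be 1 in the focal region and 0 at the lens bounds so that focal points are left untouched:

```python
import numpy as np

def smooth_shoulder_point(p, p_rotated, pt1_2d, pt2_2d, diff1, diff2, drop_off):
    """Step 6e: translate a rotated shoulder point by the weighted
    average of the edge-point difference vectors."""
    w = (np.linalg.norm(p[:2] - pt2_2d)        # weight for diff2
         / np.linalg.norm(pt1_2d - pt2_2d))
    diff_p = (1.0 - drop_off(p)) * ((1.0 - w) * diff1 + w * diff2)
    return p_rotated + diff_p                  # final displaced point
```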
The optimizations referred to above are described in the following.
Terrain Lens Mesh. The terrain datasets 450 that are used in terrain visualization are often very large, consisting of thousands of data points. When this is the case, due to processing limitations, it may not be feasible to run each point through the terrain displacement method described above. To increase the efficiency of the method, a terrain lens mesh may be used to visualize the displacement of a terrain lens 1210. The mesh bounds are defined as the bounds 482 of the lens 1210. Two-dimensional points are inserted into the mesh and a Delaunay triangulation is calculated. In order to visualize the terrain elevations, the z elevation of each point is approximated using the surrounding terrain dataset elevation values as described below. Once the z elevations for each point within the mesh have been approximated, each three-dimensional mesh point can be run through the terrain displacement method described above.
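A sketch of the mesh optimization using `scipy.spatial.Delaunay`; the ring-and-spoke sampling of the lens disc is an illustrative choice rather than part of the disclosure:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_lens_mesh(lens_xy, radius, n_rings=8, n_spokes=24):
    """Sample a coarse disc over the lens bounds and triangulate it, so
    that only the mesh vertices need to be run through the terrain
    displacement method."""
    lens_xy = np.asarray(lens_xy, dtype=float)
    pts = [lens_xy.copy()]
    for i in range(1, n_rings + 1):
        r = radius * i / n_rings
        for j in range(n_spokes):
            ang = 2.0 * np.pi * j / n_spokes
            pts.append(lens_xy + r * np.array([np.cos(ang), np.sin(ang)]))
    pts = np.array(pts)
    tri = Delaunay(pts)              # 2-D Delaunay triangulation of the disc
    return pts, tri.simplices        # mesh vertices and triangle indices
```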
Elevation Approximations. Since terrain elevation datasets 450 are discrete and finite, any given coordinate that is within the bounds of the terrain dataset may not have an explicit elevation value associated with it. Therefore, within the terrain displacement method and the terrain lens mesh optimization both described above, an approximation for the z elevation for any given (x, y) coordinate may be used. This approximation uses the surrounding terrain dataset coordinates to compute the estimated elevation for an (x, y) coordinate. The terrain dataset coordinates can be random or ordered, but ordered points (such as a grid structure) will increase efficiency of the approximation algorithm. According to one embodiment, a bilinear approximation may be used given a grid structured terrain dataset. That is, given an (x, y) coordinate, its elevation may be approximated by finding the four enclosing grid coordinates that surround the (x, y) coordinate. Then, a bilinear interpolation is computed using the four elevation values associated with the four enclosing grid coordinates.
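A sketch of the bilinear approximation for a grid-structured DEM with unit grid spacing (arbitrary spacing only rescales x and y before the lookup):

```python
import numpy as np

def approx_elevation(dem, x, y):
    """Bilinearly interpolate the elevation at (x, y) from the four
    enclosing grid coordinates of a grid-structured DEM, where
    dem[i, j] is the elevation at x = j, y = i."""
    j0, i0 = int(np.floor(x)), int(np.floor(y))
    j1 = min(j0 + 1, dem.shape[1] - 1)  # clamp at the dataset edge
    i1 = min(i0 + 1, dem.shape[0] - 1)
    fx, fy = x - j0, y - i0             # fractional offsets within the cell
    top = (1.0 - fx) * dem[i0, j0] + fx * dem[i0, j1]
    bottom = (1.0 - fx) * dem[i1, j0] + fx * dem[i1, j1]
    return (1.0 - fy) * top + fy * bottom
```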
The above described method may be summarized with the aid of a flowchart.
At step 1301, the operations 1300 start.
At step 1302, each point p of the representation 450 within a lens bounds 482 is translated to a rotated plane 510 being normal to a vector 460 defined by a position 480 for the region-of-interest with respect to a base plane 410 for the representation 450 and an apex 440 above the base plane 410, the lens bounds 482 defining a shoulder region (i.e., between 482 and 481) at least partially surrounding a focal bounds 481 defining a focal region (i.e., between 481 and 480) in which the position 480 is located, each point p having a respective height pz above the base plane 410.
At step 1303, each translated point ptranslated is displaced from the rotated plane 510 by a function heightp of the respective height pz and a magnification mag for the focal region 480, 481, the magnification mag varying across the shoulder region 481, 482 in accordance with a drop-off function shoulder(p).
At step 1304, each displaced point pdisplaced is rotated toward a viewpoint vrp for the region-of-interest to maintain visibility of each displaced point pdisplaced and each point p of the data representation 450 beyond the lens bounds 482 when viewed from the viewpoint vrp.
At step 1305, each rotated point protated corresponding to the shoulder region 481, 482 is adjusted to provide a smooth transition 1230 to the data representation 450 beyond the lens bounds 482.
At step 1306, the operations 1300 end.
The method may further include projecting each adjusted point protated+diffp within the shoulder region 1230, each rotated point protated within the focal region 1220, and each point p of the representation 450 beyond the lens bounds 482 onto a plane 201 in a direction 231 aligned with the viewpoint vrp to produce the presentation. The method may further include displaying the presentation on the display screen 340. The step of translating 1302 each point p may further include determining a maximum translation maxt 610 for a point p on the lens bounds 482 and determining a translation transp for each point p within the lens bounds (i.e., between 482 and 480) by scaling the maximum translation 610 in accordance with a distance scalep/radius of each point from the lens bounds 482. The function heightp may be a product of the magnification mag and a difference h between a magnitude ∥vo∥ of a vector vo defined by an origin 490 of the representation 450 with respect to the base plane 410 and the viewpoint vrp and the respective height pz. The step of rotating 1304 each displaced point pdisplaced may further include determining an axis of rotation axis for the rotating from a cross product of a vector vo defined by an origin 490 of the representation 450 with respect to the base plane 410 and the viewpoint vrp and a vector ao defined by the origin 490 and the apex apex 440. The step of adjusting 1305 each rotated point protated corresponding to the shoulder region 481, 482 may further include adding to each rotated point protated a weighted average diffp=(1−shoulder(p))((1−w) diff1+w diff2) of first and second difference vectors diff1, diff2 scaled by the drop-off function, the first and second difference vectors diff1, diff2 corresponding to a difference between first and second points pt1, pt2 on the lens bounds 482 and corresponding first and second displaced points pt1transformed, pt2transformed, respectively, the first and second points pt1, pt2 being on a line pt=pto+td drawn through the rotated point protated. The method may further include approximating the representation 450 with a mesh. And, the method may further include approximating the respective height pz using height information from surrounding points.
While this invention is primarily discussed as a method, a person of ordinary skill in the art will understand that the apparatus discussed above with reference to a data processing system 300, may be programmed to enable the practice of the method of the invention. Moreover, an article of manufacture for use with a data processing system 300, such as a pre-recorded storage device or other similar computer readable medium including program instructions recorded thereon, may direct the data processing system 300 to facilitate the practice of the method of the invention. It is understood that such apparatus and articles of manufacture also come within the scope of the invention.
In particular, the sequences of instructions which when executed cause the method described herein to be performed by the exemplary data processing system 300 of
The embodiments of the invention described above are intended to be exemplary only. Those skilled in the art will understand that various modifications of detail may be made to these embodiments, all of which come within the scope of the invention.
This application claims priority from U.S. Provisional Patent Application No. 60/670,646, filed Apr. 13, 2005, and incorporated herein by reference.
| Number | Name | Date | Kind |
|---|---|---|---|
| 3201546 | Richardson | Aug 1965 | A |
| 4581647 | Vye | Apr 1986 | A |
| 4630110 | Cotton et al. | Dec 1986 | A |
| 4688181 | Cottrell et al. | Aug 1987 | A |
| 4790028 | Ramage | Dec 1988 | A |
| 4800379 | Yeomans | Jan 1989 | A |
| 4885702 | Ohba | Dec 1989 | A |
| 4888713 | Falk | Dec 1989 | A |
| 4985849 | Hideaki | Jan 1991 | A |
| 4992866 | Morgan | Feb 1991 | A |
| 5048077 | Wells et al. | Sep 1991 | A |
| 5175808 | Sayre | Dec 1992 | A |
| 5185599 | Dormink et al. | Feb 1993 | A |
| 5185667 | Zimmermann | Feb 1993 | A |
| 5200818 | Neta et al. | Apr 1993 | A |
| 5206721 | Ashida et al. | Apr 1993 | A |
| 5227771 | Kerr et al. | Jul 1993 | A |
| 5250934 | Denber et al. | Oct 1993 | A |
| 5258837 | Gormley | Nov 1993 | A |
| 5321807 | Mumford | Jun 1994 | A |
| 5329310 | Liljegren et al. | Jul 1994 | A |
| 5341466 | Perlin et al. | Aug 1994 | A |
| 5416900 | Blanchard et al. | May 1995 | A |
| 5432895 | Myers | Jul 1995 | A |
| 5451998 | Hamrick | Sep 1995 | A |
| 5459488 | Geiser | Oct 1995 | A |
| 5473740 | Kasson | Dec 1995 | A |
| 5521634 | McGary | May 1996 | A |
| 5523783 | Cho | Jun 1996 | A |
| 5528289 | Cortjens et al. | Jun 1996 | A |
| 5539534 | Hino et al. | Jul 1996 | A |
| 5581670 | Bier et al. | Dec 1996 | A |
| 5583977 | Seidl | Dec 1996 | A |
| 5588098 | Chen et al. | Dec 1996 | A |
| 5594859 | Palmer et al. | Jan 1997 | A |
| 5596690 | Stone et al. | Jan 1997 | A |
| 5598297 | Yamanaka et al. | Jan 1997 | A |
| 5610653 | Abecassis | Mar 1997 | A |
| 5613032 | Cruz et al. | Mar 1997 | A |
| 5638523 | Mullet et al. | Jun 1997 | A |
| 5644758 | Patrick | Jul 1997 | A |
| 5651107 | Frank et al. | Jul 1997 | A |
| 5652851 | Stone et al. | Jul 1997 | A |
| 5657246 | Hogan et al. | Aug 1997 | A |
| 5670984 | Robertson et al. | Sep 1997 | A |
| 5680524 | Maples et al. | Oct 1997 | A |
| 5682489 | Harrow et al. | Oct 1997 | A |
| 5689287 | Mackinlay et al. | Nov 1997 | A |
| 5689628 | Robertson | Nov 1997 | A |
| 5721853 | Smith | Feb 1998 | A |
| 5729673 | Cooper et al. | Mar 1998 | A |
| 5731805 | Tognazzini et al. | Mar 1998 | A |
| 5742272 | Kitamura et al. | Apr 1998 | A |
| 5745166 | Rhodes et al. | Apr 1998 | A |
| 5751289 | Myers | May 1998 | A |
| 5754348 | Soohoo | May 1998 | A |
| 5764139 | Nojima et al. | Jun 1998 | A |
| 5786814 | Moran et al. | Jul 1998 | A |
| 5798752 | Buxton et al. | Aug 1998 | A |
| 5808670 | Oyashiki et al. | Sep 1998 | A |
| 5812111 | Fuji et al. | Sep 1998 | A |
| 5818455 | Stone et al. | Oct 1998 | A |
| 5848231 | Teitelbaum et al. | Dec 1998 | A |
| 5852440 | Grossman et al. | Dec 1998 | A |
| 5872922 | Hogan et al. | Feb 1999 | A |
| 5909219 | Dye | Jun 1999 | A |
| 5923364 | Rhodes et al. | Jul 1999 | A |
| 5926209 | Glatt | Jul 1999 | A |
| 5949430 | Robertson et al. | Sep 1999 | A |
| 5950216 | Amro et al. | Sep 1999 | A |
| 5969706 | Tanimoto et al. | Oct 1999 | A |
| 5973694 | Steele et al. | Oct 1999 | A |
| 5991877 | Luckenbaugh | Nov 1999 | A |
| 5999879 | Yano | Dec 1999 | A |
| 6005611 | Gullichsen et al. | Dec 1999 | A |
| 6037939 | Kashiwagi et al. | Mar 2000 | A |
| 6052110 | Sciammarella et al. | Apr 2000 | A |
| 6057844 | Strauss | May 2000 | A |
| 6064401 | Holzman et al. | May 2000 | A |
| 6067372 | Gur et al. | May 2000 | A |
| 6073036 | Heikkinen et al. | Jun 2000 | A |
| 6075531 | DeStefano | Jun 2000 | A |
| 6081277 | Kojima | Jun 2000 | A |
| 6084598 | Chekerylla | Jul 2000 | A |
| 6091771 | Seeley et al. | Jul 2000 | A |
| 6108005 | Starks et al. | Aug 2000 | A |
| 6128024 | Carver | Oct 2000 | A |
| 6133914 | Rogers et al. | Oct 2000 | A |
| 6154840 | Pebly et al. | Nov 2000 | A |
| 6160553 | Robertson et al. | Dec 2000 | A |
| 6184859 | Kojima | Feb 2001 | B1 |
| 6198484 | Kameyama | Mar 2001 | B1 |
| 6201546 | Bodor et al. | Mar 2001 | B1 |
| 6201548 | Cariffe et al. | Mar 2001 | B1 |
| 6204845 | Bates et al. | Mar 2001 | B1 |
| 6204850 | Green | Mar 2001 | B1 |
| 6215491 | Gould | Apr 2001 | B1 |
| 6219052 | Gould | Apr 2001 | B1 |
| 6241609 | Rutgers | Jun 2001 | B1 |
| 6246411 | Strauss | Jun 2001 | B1 |
| 6249281 | Chen et al. | Jun 2001 | B1 |
| 6256043 | Aho et al. | Jul 2001 | B1 |
| 6256115 | Adler et al. | Jul 2001 | B1 |
| 6256737 | Bianco et al. | Jul 2001 | B1 |
| 6266082 | Yonezawa et al. | Jul 2001 | B1 |
| 6271854 | Light | Aug 2001 | B1 |
| 6278443 | Amro et al. | Aug 2001 | B1 |
| 6278450 | Arcuri et al. | Aug 2001 | B1 |
| 6288702 | Tachibana et al. | Sep 2001 | B1 |
| 6304271 | Nehme | Oct 2001 | B1 |
| 6307612 | Smith et al. | Oct 2001 | B1 |
| 6320599 | Sciammarella et al. | Nov 2001 | B1 |
| 6337709 | Yamaashi et al. | Jan 2002 | B1 |
| 6346938 | Chan et al. | Feb 2002 | B1 |
| 6346962 | Goodridge | Feb 2002 | B1 |
| 6359615 | Singh | Mar 2002 | B1 |
| 6381583 | Kenney | Apr 2002 | B1 |
| 6384849 | Morcos et al. | May 2002 | B1 |
| 6396648 | Yamamoto et al. | May 2002 | B1 |
| 6396962 | Haffey et al. | May 2002 | B1 |
| 6400848 | Gallagher | Jun 2002 | B1 |
| 6407747 | Chui et al. | Jun 2002 | B1 |
| 6411274 | Watanabe et al. | Jun 2002 | B2 |
| 6416186 | Nakamura | Jul 2002 | B1 |
| 6417867 | Hallberg | Jul 2002 | B1 |
| 6438576 | Huang et al. | Aug 2002 | B1 |
| 6487497 | Khavakh et al. | Nov 2002 | B2 |
| 6491585 | Miyamoto et al. | Dec 2002 | B1 |
| 6504535 | Edmark | Jan 2003 | B1 |
| 6515678 | Boger | Feb 2003 | B1 |
| 6522341 | Nagata | Feb 2003 | B1 |
| 6542191 | Yonezawa | Apr 2003 | B1 |
| 6552737 | Tanaka et al. | Apr 2003 | B1 |
| 6559813 | DeLuca et al. | May 2003 | B1 |
| 6577311 | Crosby et al. | Jun 2003 | B1 |
| 6577319 | Kashiwagi et al. | Jun 2003 | B1 |
| 6584237 | Abe | Jun 2003 | B1 |
| 6590568 | Astala et al. | Jul 2003 | B1 |
| 6590583 | Soohoo | Jul 2003 | B2 |
| 6608631 | Milliron | Aug 2003 | B1 |
| 6612930 | Kawagoe et al. | Sep 2003 | B2 |
| 6631205 | Melen et al. | Oct 2003 | B1 |
| 6633305 | Sarfeld | Oct 2003 | B1 |
| 6690387 | Zimmerman et al. | Feb 2004 | B2 |
| 6720971 | Yamamoto et al. | Apr 2004 | B1 |
| 6727910 | Tigges | Apr 2004 | B2 |
| 6731315 | Ma et al. | May 2004 | B1 |
| 6744430 | Shimizu | Jun 2004 | B1 |
| 6747610 | Taima et al. | Jun 2004 | B1 |
| 6747611 | Budd et al. | Jun 2004 | B1 |
| 6760020 | Uchiyama et al. | Jul 2004 | B1 |
| 6768497 | Baar et al. | Jul 2004 | B2 |
| 6798412 | Cowperthwaite | Sep 2004 | B2 |
| 6833843 | Mojaver et al. | Dec 2004 | B2 |
| 6842175 | Schmalstieg et al. | Jan 2005 | B1 |
| 6882755 | Silverstein et al. | Apr 2005 | B2 |
| 6906643 | Samadani et al. | Jun 2005 | B2 |
| 6911975 | Iizuka et al. | Jun 2005 | B2 |
| 6919921 | Morota et al. | Jul 2005 | B1 |
| 6924822 | Card et al. | Aug 2005 | B2 |
| 6938218 | Rosen | Aug 2005 | B1 |
| 6956590 | Barton et al. | Oct 2005 | B1 |
| 6961071 | Montagnese et al. | Nov 2005 | B2 |
| 6975335 | Watanabe | Dec 2005 | B2 |
| 6985865 | Packingham et al. | Jan 2006 | B1 |
| 7038680 | Pitkow | May 2006 | B2 |
| 7071971 | Elberbaum | Jul 2006 | B2 |
| 7084886 | Jetha et al. | Aug 2006 | B2 |
| 7088364 | Lantin | Aug 2006 | B2 |
| 7106349 | Baar et al. | Sep 2006 | B2 |
| 7133054 | Aguera Arcas | Nov 2006 | B2 |
| 7134092 | Fung et al. | Nov 2006 | B2 |
| 7158878 | Rasmussen | Jan 2007 | B2 |
| 7173633 | Tigges | Feb 2007 | B2 |
| 7173636 | Montagnese | Feb 2007 | B2 |
| 7197719 | Doyle et al. | Mar 2007 | B2 |
| 7213214 | Baar et al. | May 2007 | B2 |
| 7233942 | Nye | Jun 2007 | B2 |
| 7246109 | Ramaswamy | Jul 2007 | B1 |
| 7256801 | Baar et al. | Aug 2007 | B2 |
| 7274381 | Mojaver et al. | Sep 2007 | B2 |
| 7275219 | Shoemaker | Sep 2007 | B2 |
| 7280105 | Cowperthwaite | Oct 2007 | B2 |
| 7283141 | Baar et al. | Oct 2007 | B2 |
| 7310619 | Baar et al. | Dec 2007 | B2 |
| 7312806 | Tigges | Dec 2007 | B2 |
| 7321824 | Nesbitt | Jan 2008 | B1 |
| 7411610 | Doyle | Aug 2008 | B2 |
| 7472354 | Jetha et al. | Dec 2008 | B2 |
| 7486302 | Shoemaker | Feb 2009 | B2 |
| 7489321 | Jetha et al. | Feb 2009 | B2 |
| 7495678 | Doyle et al. | Feb 2009 | B2 |
| 20010040585 | Hartford et al. | Nov 2001 | A1 |
| 20010040636 | Kato et al. | Nov 2001 | A1 |
| 20010048447 | Jogo | Dec 2001 | A1 |
| 20010055030 | Han | Dec 2001 | A1 |
| 20020033837 | Munro | Mar 2002 | A1 |
| 20020038257 | Joseph et al. | Mar 2002 | A1 |
| 20020044154 | Baar et al. | Apr 2002 | A1 |
| 20020062245 | Niu et al. | May 2002 | A1 |
| 20020075280 | Tigges | Jun 2002 | A1 |
| 20020087894 | Foley et al. | Jul 2002 | A1 |
| 20020089520 | Baar et al. | Jul 2002 | A1 |
| 20020093567 | Cromer et al. | Jul 2002 | A1 |
| 20020101396 | Huston et al. | Aug 2002 | A1 |
| 20020122038 | Cowperthwaite | Sep 2002 | A1 |
| 20020135601 | Watanabe et al. | Sep 2002 | A1 |
| 20020143826 | Day et al. | Oct 2002 | A1 |
| 20020171644 | Reshetov et al. | Nov 2002 | A1 |
| 20020180801 | Doyle et al. | Dec 2002 | A1 |
| 20030006995 | Smith et al. | Jan 2003 | A1 |
| 20030007006 | Baar et al. | Jan 2003 | A1 |
| 20030048447 | Harju et al. | Mar 2003 | A1 |
| 20030052896 | Higgins et al. | Mar 2003 | A1 |
| 20030061211 | Shultz et al. | Mar 2003 | A1 |
| 20030100326 | Grube et al. | May 2003 | A1 |
| 20030105795 | Anderson et al. | Jun 2003 | A1 |
| 20030112503 | Lantin | Jun 2003 | A1 |
| 20030118223 | Rahn et al. | Jun 2003 | A1 |
| 20030137525 | Smith | Jul 2003 | A1 |
| 20030151625 | Shoemaker et al. | Aug 2003 | A1 |
| 20030151626 | Komar et al. | Aug 2003 | A1 |
| 20030174146 | Kenoyer | Sep 2003 | A1 |
| 20030179198 | Uchiyama | Sep 2003 | A1 |
| 20030179219 | Nakano et al. | Sep 2003 | A1 |
| 20030179237 | Nelson et al. | Sep 2003 | A1 |
| 20030196114 | Brew et al. | Oct 2003 | A1 |
| 20030227556 | Doyle | Dec 2003 | A1 |
| 20030231177 | Montagnese et al. | Dec 2003 | A1 |
| 20040026521 | Colas et al. | Feb 2004 | A1 |
| 20040056869 | Jetha et al. | Mar 2004 | A1 |
| 20040056898 | Jetha et al. | Mar 2004 | A1 |
| 20040111332 | Baar et al. | Jun 2004 | A1 |
| 20040125138 | Jetha et al. | Jul 2004 | A1 |
| 20040150664 | Baudisch | Aug 2004 | A1 |
| 20040217979 | Baar et al. | Nov 2004 | A1 |
| 20040240709 | Shoemaker | Dec 2004 | A1 |
| 20040257375 | Cowperthwaite | Dec 2004 | A1 |
| 20040257380 | Herbert et al. | Dec 2004 | A1 |
| 20050041046 | Baar et al. | Feb 2005 | A1 |
| 20050134610 | Doyle et al. | Jun 2005 | A1 |
| 20050278378 | Frank | Dec 2005 | A1 |
| 20050285861 | Fraser | Dec 2005 | A1 |
| 20060026521 | Hotelling et al. | Feb 2006 | A1 |
| 20060033762 | Card et al. | Feb 2006 | A1 |
| 20060036629 | Gray | Feb 2006 | A1 |
| 20060082901 | Shoemaker | Apr 2006 | A1 |
| 20060098028 | Baar | May 2006 | A1 |
| 20060139375 | Rasmussen et al. | Jun 2006 | A1 |
| 20060192780 | Lantin | Aug 2006 | A1 |
| 20060214951 | Baar et al. | Sep 2006 | A1 |
| 20070033543 | Ngari et al. | Feb 2007 | A1 |
| 20070064018 | Shoemaker et al. | Mar 2007 | A1 |
| 20070097109 | Shoemaker et al. | May 2007 | A1 |
| Number | Date | Country |
|---|---|---|
| 2350342 | Nov 2002 | CA |
| 2386560 | Nov 2003 | CA |
| 2393708 | Jan 2004 | CA |
| 2394119 | Jan 2004 | CA |
| 0635779 | Jan 1995 | EP |
| 0650144 | Apr 1995 | EP |
| 0816983 | Jul 1998 | EP |
| Entry |
|---|
| “Non-Final Office Action”, U.S. Appl. No. 11/410,024, (Mar. 11, 2009), 35 pages. |
| “Foreign Office Action”, Application Ser. No. 2002-536993, (Mar. 11, 2009), 2 pages. |
| Schmalstieg, Dieter et al., “Using transparent props for interaction with the virtual table”, Application Ser. No. 11/410,024, Proceedings of the 1999 symposium on Interactive 3D graphics, (Apr. 26, 1999), 8 pages. |
| “Non Final Office Action”, U.S. Appl. No. 10/705,199, (May 12, 2009), 46 pages. |
| “Non Final Office Action”, U.S. Appl. No. 11/541,778, (Jun. 19, 2009), 36 pages. |
| “Non Final Office Action”, U.S. Appl. No. 11/935,222, (Feb. 20, 2009), 12 pages. |
| Ikedo, T. “A Realtime Video-Image Mapping Using Polygon Rendering Techniques”, IEEE Intl. Conf., Ottawa, Ont., Canada, Jun. 3-6, 1997, Los Alamitos, CA, USA; IEEE Comput. Soc, US, XP010239181, ISBN: 0-8186-7819-4 Sections 2, 4.4; Multimedia Computing and Systems '97 Proceedings, (Jun. 3, 1997), pp. 127-134. |
| Bouju, A. et al., “Client Server Architecture for Accessing Multimedia and Geographic Databases within Embedded Systems”, Database and Expert Systems Applications, 1999 Proceedings Tenth International Workshop on Florence, Italy Sep. 1-3, 1999, Los Alamitos, CA, USA, IEEE Comput. Soc, US, XP010352370, ISBN:0-7895-0281-4, abstract, figure 2,(Sep. 1-3, 1999). pp. 750-764. |
| Robertson, G et al., “The Document Lens”, UIST. Proceedings of the Annual ACM Symposium on User Interface Software and Technology, abstract figures 3, 4,(Nov. 3, 1993), pp. 101-108. |
| Dursteler, Juan C., “The digital magazine of InfoVis.net”, Retrieved from: http://www.infovis.net/printMag.php?num=85&lang=2; (Apr. 22, 2002). |
| Kuederle, Oliver “Presentation of Image Sequences: A Detail-In-Context Approach”, Thesis, Simon Fraser University; (Aug. 2000), pp. 1-3, 5-10, 29-31. |
| Microsoft Corp., “Microsoft Paint”, Microsoft Corp.,(1981-1998), Paint 1-14. |
| “Electronic Magnifying Glasses”, IBM Technical Disclosure Bulletin, IBM Corp., New York, US, vol. 37, No. 3; XP000441501, ISSN: 0018-8689 the whole document; (Mar. 1, 1994), pp. 353-354. |
| Keahey, T. A., “The Generalized Detail-In-Context Problem”, Information Visualization 1998, Proceedings; IEEE Symposium On Research Triangle, CA, USA; Los Alamitos, CA, USA, IEEE Comput. Soc, US; XP010313304; ISBN: 0-8186-9093; (Oct. 19-20, 1998), pp. 44-51, 152. |
| Carpendale, M S T et al., “Extending distortion viewing from 2D to 3D”, IEEE Computer Graphics and Applications, IEEE Inc. New York, US, vol. 17, No. 4. XP000927815 ISSN: 0272-1716. (Jul. 1997), pp. 42-51. |
| Viega, J et al., “3D magic lenses, Proceedings of the 9th annual ACM symposium on User interface software and technology”; Pub 1996 ACM Press New York, NY, USA, (1996), pp. 51-58. |
| Carpendale, M. Sheelagh T., et al., “3-Dimensional Pliable Surfaces: For the Effective Presentation of Visual Information”, UIST '95, 8th Annual Symposium on User Interface Software and Technology, Proceedings of The ACM Symposium on User Interface Software and Technology, Pittsburgh, PA, ACM Symposium on User Interface Software and Technology, New York, (Nov. 14-17, 1995), pp. 217-226. |
| Tominski, Christian et al., “Fisheye Tree Views and Lenses for Graph Visualization”, Jul. 2006, pp. 1-8. |
| Keahey T. A., “Getting Along: Composition of Visualization Paradigms”, Visual Insights, Inc.; (2001). |
| Sakamoto, Chikara et al., “Design and Implementation of a Parallel Pthread Library (PPL) with Parallelism and Portability”, Systems and Computers in Japan, New York, US, vol. 29, No. 2; XP000752780, ISSN:0882-1666 abstract,(Feb. 1, 1998), pp. 28-35. |
| Deng, K. et al., “Texture Mapping with a Jacobian-Based Spatially-Variant Filter”, Proceedings 10th Pacific Conference on Computer Graphics and Applications, Beijing, China, 2002 Los Alamitos, CA, USA, IEEE Comput. Soc, USA, XP00224932, ISBN, 0-7695-1784-6 the whole document, (Oct. 9-11, 2002), pp. 460-461. |
| Welsh, Michelle “Futurewave Software”, Business Wire; (Nov. 15, 1993). |
| Lamar, et al., “A Magnification Lens for Interactive Volume Visualization”, ACM; pp. 1-10, Oct. 2001. |
| Fitzmaurice, G. et al., “Tracking Menus”, UIST; (2003), pp. 71-79. |
| Stone, et al., “The movable filter as a user interface tool”, Proceedings of CHI ACM; (1992), pp. 306-312. |
| Baudisch, P. et al., “Halo: a Technique For Visualizing Off-Screen Locations”, CHI; (Apr. 5-10, 2003). |
| Baudisch, P. et al., “Drag-And-Pop: Techniques for Accessing Remote Screen Content on Touch-And-Pen-Operated Systems”, Interact '03, (2003). |
| Carpendale, M. S. T. et al., “Making Distortions Comprehensible”, Visual Languages, Proceedings, 1997 IEEE Symposium On Isle of Capri, Italy, Sep. 23-26, 1997, Los Alamitos, CA, USA, IEEE Comput. Soc., US, Sep. 23, 1997; XP010250566, ISBN: 0-8186-8144-6,(Sep. 23-26, 1997), pp. 36-45. |
| Ito, Minoru et al., “A Three-Level Checkerboard Pattern (TCP) Projection Method for Curved Surface Measurement”, Pattern Recognition, Pergamon Press Inc., Elmsford, N.Y., US vol. 28, No. 1; XP004014030, ISSN 0031-3203,(1995), pp. 27-40. |
| Keahey, T A., et al., “Nonlinear Magnification Fields”, Information Visualization, 1997, Proceedings, IEEE Symposium on Phoenix, AZ, USA, Los Alamitos, CA, USA, IEEE Comput. Soc., US; XP010257169; ISBN: 0-8186-8189-6,(Oct. 20-21, 1997), pp. 51-58 and 121. |
| Rauschenbach, U., “The Rectangular Fish Eye View as an Efficient Method for the Transmission and Display of Large Images”, Image Processing, ICIP 99, Proceedings, 1999 International Conference On, Kobe, Japan, Oct. 24-28, 1999, Piscataway, NJ, USA, IEEE, US, XP010368852, ISBN 0-7803-5467-2 p. 115, left-hand column—p. 116, paragraph 3, p. 118, paragraph 7.1; (1999), pp. 115-119. |
| Boots, B. N., “Delauney Triangles: An Alternative Approach to Point Pattern Analysis” Proc. Assoc. Am. Geogr. 6, (1974), p. 26-29. |
| Carpendale et al., “Distortion Viewing Techniques for 3-Dimensional Data”, Information Visualization '96, Proceedings IEEE Symposium on San Francisco, CA, USA, Los Alamitos, CA, USA, IEEE Comput. Soc, US Oct. 28, 1996; XP010201944; ISBN: 0-8186-7668-X,(Oct. 28-29, 1996), pp. 46-53, 119. |
| Leung, Y. K., et al., “A Review and Taxonomy of Distortion-Oriented Presentation Techniques”, ACM Transactions on Computer-Human Interaction, [Online] vol. 1, No. 2, XP002252314; Retrieved from the Internet: <URL:http://citeseer.nj.nec.com/leung94review.html> retrieved on Aug. 21, 2003, the whole document, (Jun. 1994), pp. 126-160. |
| “Non Final Office Action”, U.S. Appl. No. 10/358,394, (Mar. 13, 2009). |
| Sarkar, et al., “Stretching the Rubber Sheet: A Metaphor for Viewing Large Layouts on Small Screens”, Proc. of the 6th annual ACM symp. on User interface software an technology, Atlanta, GA, (Dec. 1993), p. 81-91. |
| Carpendale, et al., “Graph Folding. Extending Detail and Context Viewing into a Tool for Subgraph Comparisons”, In Proceedings of Graph Drawing 1995, Passau, Germany., (1995), pp. 127-139. |
| Carpendale, M.S.T. “A Framework for Elastic Presentation Space”, http://pages.cpsc.ucalgary.ca/~sheelagh/personal/thesis/, (Nov. 19, 1999). |
| “Non Final Office Action”, U.S. Appl. No. 11/542,120, (Jan. 22, 2009), 20 pages. |
| Cowperthwaite, David J., “Occlusion Resolution Operators for Three-Dimensional Detail-In-Context”, Burnaby, British Columbia: Simon Fraser University, (2000). |
| Carpendale, M.S.T. “A Framework for Elastic Presentation Space”, Thesis Simon Fraser University; XP001051168; cited in the application, Chapter 3-5, appendix A,B; (Mar. 1999), pp. 1-271. |
| Carpendale, M.S.T et al., “Exploring Distinct Aspects of the Distortion Viewing Paradigm”, Technical Report TR 97-08; School of Computer Science, Simon Fraser University, Burnaby, British Columbia, Canada; (Sep. 1997). |
| Cowperthwaite, David J., et al., “Visual Access for 3D Data”, in Proceedings of ACM CHI 96 Conference on Human Factors in Computer Systems, vol. 2 of Short Papers: Alternative Methods of Interaction; (1996),pp. 175-176. |
| Keahey, T. A., “Visualization of High-Dimensional Clusters Using NonLinear Magnification”, Technical Report LA-UR-98-2776, Los Alamos National Laboratory, (1998). |
| Tigges, M. et al., “Generalized Distance Metrics For Implicit Surface Modeling”, Proceedings of the Tenth Western Computer Graphics Symposium; (Mar. 1999). |
| Bossen, F. J., “Anisotropic Mesh Generation With Particles”, Technical Report CMU-CS-96-134, CS Dept, Carnegie Mellon University; (May 1996). |
| Bossen, F. J., et al., “A Pliant Method for Anisotropic Mesh Generation”, 5th Intl. Meshing Roundtable; (Oct. 1996), pp. 63-74. |
| Wilson, et al., “Direct Volume Rendering Via 3D Textures”, Technical Report UCSC-CRL-94-19, University of California, Santa Cruz, Jack Baskin School of Engineering; (Jun. 1994). |
| Carpendale, M. S. T., Montagnese, C., A framework for unifying presentation space, Nov. 2001, ACM Press, Proceedings of the 14th annual ACM symposium on User interface software and technology, vol. 3, Issue 2, pp. 61-70. |
| Carpendale, Marianne S.T., “A Framework for Elastic Presentation Space” (Burnaby, British Columbia: Simon Fraser University, 1999). |
| Robertson, et al., ““The Document Lens””, (1993),pp. 101-108. |
| “Presentation for CGDI Workshop”, Retrieved from: http://www.geoconnections.org/developersCorner/devCorner13 devNetwork/meetings/2002.05.30/IDELIX—CGDI—20020530—dist.pdf, (May 2002). |
| Carpendale, M.S.T. “A Framework for Elastic Presentation Space”, PhD thesis, Simon Fraser University; (1999),pp. 69, 72, 78-83,98-100, 240 and 241. |
| Keahey, T. A., et al., ““Techniques For Non-Linear Magnification Transformations””, Information Visualization '96, Proceedings IEEE Symposium on, San Francisco, CA, Los Alamitos, CA, USA, IEEE Comput. Soc, US: XP010201943; ISBN: 0-8186-7668-X the whole document,(Oct. 28, 1996),pp. 38-45. |
| Keahey, T. A., “Nonlinear Magnification”, (Indiana University Computer Science), (1997). |
| Watt, et al., “Advanced Animation and Rendering Techniques”, (Addison-Wesley Publishing), (1992),p. 106-108. |
| Number | Date | Country | |
|---|---|---|---|
| 60670646 | Apr 2005 | US |
| Number | Date | Country | |
|---|---|---|---|
| Parent | 11401349 | Apr 2006 | US |
| Child | 13216950 | US |