Claims
- 1. A method, comprising temporally controlling display of a transitional effect with a same mode of control used for spatial navigation.
- 2. A method, comprising controlling a transition effect when a viewpoint or virtual camera is interactively moved out of or off of an original camera surface.
- 3. A method according to claim 2, wherein the moving out of or off of the original camera surface occurs when the viewpoint or virtual camera reaches an edge of the original camera surface.
- 4. A method according to claim 2, wherein the moving out of or off of the original camera surface occurs when the viewpoint or virtual camera reaches a predefined point on the original camera surface.
- 5. A method according to claim 2, wherein the moving out of or off of the original surface activates an event that changes an interaction state associated with the original camera surface.
- 6. A method according to claim 2, further comprising displaying a view from the viewpoint or virtual camera on the original camera surface, which is nonadjacent to a destination camera surface.
- 7. A method according to claim 6, further comprising interactively inputting a continuous stream of two-dimensional data to translate the viewpoint or virtual camera on the original camera surface, and then transitioning the viewpoint or virtual camera to the nonadjacent destination camera surface.
- 8. A method according to claim 7, further comprising:
displaying a transitional effect during the transitioning; and temporally controlling the displaying of the transitional effect with the continuous stream of two-dimensional data.
- 9. A method according to claim 8, wherein the temporal controlling directly follows the translating, in response to the viewpoint or virtual camera moving out of or off of the original camera surface.
- 10. A method according to claim 9, wherein the controlling occurs while continuing to interactively input the stream of two-dimensional data.
- 11. A method according to claim 8, wherein the transitional effect is at least one of a pre-determined video clip, an interpolation of the viewpoint or virtual camera moving from the original camera surface to the destination camera surface, a two-dimensional slate, a pre-arranged camera movement, and a combination or concatenation of two or more of the preceding transitional effects.
- 12. A method according to claim 7, wherein a gain of the translation automatically changes according to a position of the view on the original camera surface.
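The position-dependent gain recited in claim 12 can be illustrated with a short sketch. This is not from the application itself; the function name `translation_gain`, the raised-cosine taper, and the gain bounds are all hypothetical choices, shown only to make the idea of gain varying with surface position concrete (here, slowing translation near the surface edges).

```python
import math

# Illustrative sketch only (hypothetical names and profile): the gain applied
# to viewpoint translation changes automatically with the viewpoint's
# position on the camera surface, tapering toward the edges.
def translation_gain(u, min_gain=0.25, max_gain=1.0):
    """u in [0, 1] parameterizes position along the surface; the gain
    follows a raised-cosine profile: min_gain at the edges (u = 0 or 1),
    max_gain at the center (u = 0.5)."""
    taper = 0.5 * (1.0 - math.cos(2.0 * math.pi * u))  # 0 at edges, 1 at center
    return min_gain + (max_gain - min_gain) * taper
```

Any monotone or smooth profile would serve equally; the claim only requires that the gain change automatically as a function of position.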
- 13. A method according to claim 7, wherein the transitioning is constrained to a transition surface relating the nonadjacent destination camera surface to the original camera surface.
- 14. A method of unified spatial and temporal control, comprising:
allowing a user to spatially move a viewpoint on a first viewpoint surface; and allowing a user to temporally control a sequence of transition images when spatial movement on the first viewpoint surface encounters an edge or predefined point of the first viewpoint surface.
- 15. A method according to claim 14, wherein encountering the edge activates an event that changes an interaction state associated with the viewpoint.
- 16. A method of integrated spatial and temporal navigation, comprising:
determining that a viewpoint or virtual camera for viewing a model or scene has been interactively translated off of or to a pre-defined point or edge of a first camera surface or viewpoint surface; in response to the determining, transitioning the viewpoint or virtual camera to a second camera surface or viewpoint surface that is not adjacent to the first camera surface or viewpoint surface; and enabling temporal navigation of display of an animation during the transitioning.
- 17. A method according to claim 16, wherein the transitioning is based on a transition surface relating the first and second camera surfaces.
- 18. A method according to claim 16, wherein encountering the predefined point or edge activates an event that changes an interaction state associated with the viewpoint or virtual camera.
- 19. A method according to claim 16, further comprising, in further response to the determining, activating a script or programmatic logic.
- 20. A method according to claim 16, further comprising, in further response to the determining, activating a script or programmatic logic that in turn dynamically changes an interaction state associated with the camera surfaces.
- 21. A method according to claim 16, wherein a single continuous drag or stroke input controls both the translation and the temporal navigation.
- 22. A method according to claim 16, wherein a same mode of interactive control of the viewpoint or virtual camera is kept active for both the temporal navigation and the interactive translating.
- 23. A method of integrated navigation control, comprising:
displaying or rendering a subject or scene according to a current viewpoint on an original camera surface that is facing the subject or scene and that is spatially separated from a destination camera surface that is also facing the subject or scene; generating a continuous stream of two-dimensional input data; according to a first portion of the two-dimensional input data, translating the current viewpoint on the original camera surface; and after the translating and according to a later portion of the continuous stream of two-dimensional input data, temporally controlling the display of a transition effect as the current viewpoint transitions from the original camera surface to the destination camera surface.
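The single-stream behavior recited in claim 23 can be sketched as a small state machine. This is a hypothetical illustration, not the application's implementation: the class name `CameraNavigator`, the one-dimensional surface parameterization, and `feed_input` are all assumed names. The point it shows is that one continuous input stream first translates the viewpoint on the original camera surface and, once the viewpoint leaves the surface, the later portion of the same stream temporally scrubs the transition effect.

```python
# Illustrative sketch only (hypothetical names): a single continuous stream
# of 2-D input samples drives spatial translation on a camera surface, then
# temporal control of a transition once the surface edge is reached.
class CameraNavigator:
    SPATIAL, TEMPORAL = "spatial", "temporal"

    def __init__(self, surface_length=10.0, transition_frames=30):
        self.mode = self.SPATIAL
        self.position = 0.0                    # location on the original surface
        self.surface_length = surface_length
        self.transition_frames = transition_frames
        self.frame = 0                         # current frame of the transition

    def feed_input(self, dx):
        """Consume one input sample (only the dominant axis, dx, is used)."""
        if self.mode == self.SPATIAL:
            # First portion of the stream: translate on the camera surface.
            self.position += dx
            if self.position >= self.surface_length:
                self.position = self.surface_length  # clamp at the edge and
                self.mode = self.TEMPORAL            # switch to temporal control
        else:
            # Later portion of the same stream: scrub the transition effect,
            # forward or backward, clamped to its frame range.
            self.frame = max(0, min(self.transition_frames, self.frame + int(dx)))
```

Note that no mode switch is visible to the user: the same drag that moves the viewpoint continues uninterrupted into scrubbing the transition.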
- 24. A method according to claim 23, wherein the temporal control is constrained to one of a transition surface and a transition path relating the original and destination camera surfaces.
- 25. A method according to claim 23, further comprising manipulating an input device to generate the continuous stream of two-dimensional input data.
- 26. A method according to claim 23, wherein the transition effect is at least one of a pre-determined video clip, an interpolated movement of the current viewpoint from the original camera surface to the destination camera surface, an image, a slate, a pre-arranged camera movement, an animation, and a combination of two or more of the preceding transition effects.
- 27. A method according to claim 23, wherein a gain affecting the translation automatically changes according to a position of the current viewpoint on the original camera surface.
- 28. A method, comprising interactively controlling a rate of display of a visual transition between two spatially navigable regions.
- 29. A method according to claim 28, wherein the controlling is done with a same interaction control technique used to spatially navigate the two regions.
- 30. A method according to claim 29, wherein the visual transition is displayed while transitioning between the two regions.
- 31. A method according to claim 30, wherein the two regions are nonadjacent.
- 32. A method according to claim 31, wherein the spatially navigable regions are regions within which a viewpoint is translated according to input generated with the interaction control technique.
- 33. A method according to claim 32, further comprising, in response to the input generated with the interaction control technique for translating the viewpoint to a periphery of one of the regions, controlling the rate of display of the visual transition according to continued and uninterrupted generation of the input with the interaction control technique.
- 34. A method, comprising:
seamlessly switching from an interactive spatial navigation mode to an interactive temporal display mode with one ongoing interactive input operation.
- 35. A method according to claim 34, wherein the switching is responsive to one of navigating into or out of an area for spatial navigation.
- 36. A method according to claim 34, wherein the ongoing interactive input operation is one of a stroke operation and a drag operation.
- 37. A method, comprising:
interactively generating input data with a constant mode of interactive control; according to a first part of the input data, spatially navigating a subject by determining spatial navigation points in a bounded locus of three-dimensional spatially navigable points, where, before the generating, the locus of points is arranged in relation to the subject to be viewed; displaying images portraying the subject as viewed from the determined spatial navigation points; and after displaying the images, and when the spatial navigating indicates navigation out of or off of the locus of points, using a second part of the input data to control a rate of displaying a sequence of other images; where the second part of the input data chronologically follows the first part of the input data.
- 38. A method for interactive visual navigation, comprising:
interactively generating a continuous stream of two-dimensional input data with a single mode of interaction; according to a first part of the continuous stream of two-dimensional input data, spatially navigating a subject by determining spatial navigation points in a bounded locus of three-dimensional spatially navigable points, where, before the generating, the locus of points is arranged in relation to the subject to be viewed; displaying images portraying the subject as viewed from the determined spatial navigation points; and after displaying the images, and when the spatial navigating indicates navigation out of or off of the locus of points, using a second part of the continuous stream of two-dimensional input data to control a rate of displaying a sequence of other images; where the second part of the continuous stream of input data chronologically follows the first part of the continuous stream of input data.
- 39. A method of integrated spatial and temporal navigation of a virtual space, comprising:
displaying a rendering or image portraying the virtual space as viewed by a virtual camera at a first location on or in a spatially navigable camera surface within the virtual space, where the virtual camera has an orientation that is either normal to the spatially navigable camera surface or is pointed toward a fixed look-at point in the virtual space; beginning a drag or move operation of a two-dimensional input device; based on the moving or dragging, spatially translating the virtual camera from the first location on the spatially navigable camera surface to a second location in or on the spatially navigable camera surface; automatically setting the orientation of the virtual camera at the second location to either point toward the fixed look-at point or to point in a direction normal to the spatially navigable camera surface at the second location; displaying a rendering or image portraying the virtual space in accordance with the location and orientation of the virtual camera at the second location in the spatially navigable camera surface; continuing the drag or move operation of the two-dimensional input device; determining that further translating the virtual camera according to the continued dragging or moving of the two-dimensional input device would place the virtual camera beyond the spatially navigable camera surface; in response to the determining, beginning the display of a transition comprising at least one of an interpolated animation of the virtual camera, an animation semi-transparently blended with a slate, and a pre-authored animation of the virtual camera; and while further continuing the drag or move operation of the two-dimensional input device, performing at least one of
advancing display of the transition based on the further continuing drag or move operation of the two-dimensional input device, reversing display of the transition based on the further continuing drag or move operation of the two-dimensional input device, and pausing, stopping, or automatically completing display of the transition based on cessation of the drag or move operation of the two-dimensional input device.
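The orientation rule recited in claim 39 (camera pointed at a fixed look-at point, or else along the surface normal at its current location) can be sketched directly. The helper names `normalize` and `camera_direction` are hypothetical; vectors are plain 3-tuples, and the surface normal at the camera's location is assumed to be already known.

```python
import math

# Illustrative sketch only (hypothetical names): compute the unit view
# direction for a virtual camera on a camera surface, either toward a
# fixed look-at point or along the surface normal at its location.

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def camera_direction(position, look_at=None, surface_normal=None):
    """Unit view direction for a camera at `position`: toward the fixed
    look-at point when one is given, otherwise along the (already known)
    surface normal at that location."""
    if look_at is not None:
        return normalize(tuple(l - p for l, p in zip(look_at, position)))
    return normalize(surface_normal)
```

Re-applying this rule after every translation is what keeps the subject framed as the camera slides along the surface; the advance/reverse/pause behavior of the transition itself is driven by the continued drag, as the claim recites.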
- 40. A data structure, comprising:
a set of interrelated surfaces for constrained camera navigation, none adjoining another, which together define views of a scene or model that can be spatially navigated by a user.
- 41. A data structure, comprising:
data describing a set of mutually non-adjacent camera surfaces for constrained spatial camera navigation; and data describing a set of interactively controllable transition effects between the surfaces.
- 42. A data structure according to claim 41, wherein interacting to transition from navigation of a camera surface to control of a transition appears seamless to a user performing the interacting.
- 43. A data structure comprising a set of non-adjacent camera surfaces that are spatially navigable and that are connected by interactively controllable visual transitions.
- 44. A data structure comprising:
view surface boundaries; a transition between the view surface boundaries; and a transition duration indicating a length of the transition.
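A minimal sketch of the data structure recited in claims 40-44 might look as follows. All type and field names (`CameraSurface`, `Transition`, `NavigationGraph`, and so on) are hypothetical, chosen only to show the claimed elements: mutually non-adjacent, spatially navigable camera surfaces; interactively controllable transitions connecting them; and a duration indicating each transition's length.

```python
from dataclasses import dataclass, field

# Illustrative sketch only (hypothetical names): non-adjacent camera
# surfaces connected by interactively controllable transitions.

@dataclass
class CameraSurface:
    name: str
    boundary: list          # e.g. corner points delimiting the view surface

@dataclass
class Transition:
    source: str             # name of the originating camera surface
    destination: str        # name of the non-adjacent destination surface
    duration: float         # length of the transition, e.g. in seconds
    effect: str = "interpolate"   # or a clip, slate, authored camera move

@dataclass
class NavigationGraph:
    surfaces: dict = field(default_factory=dict)   # name -> CameraSurface
    transitions: list = field(default_factory=list)
```

A runtime would constrain spatial navigation to the stored surfaces and hand control to a `Transition` whenever the viewpoint leaves one.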
- 45. A computer-readable storage for enabling a computer to perform a process, the process comprising:
allowing a user to spatially move a viewpoint on a first viewpoint surface; and allowing a user to temporally control a sequence of transition images when spatial movement on the first viewpoint surface encounters an edge or predefined point of the first viewpoint surface.
- 46. A computer-readable storage for enabling a computer to perform a process, the process comprising:
determining that a viewpoint or virtual camera for viewing a model or scene has been interactively translated off of or to a pre-defined point or edge of a first camera surface or viewpoint surface; in response to the determining, transitioning the viewpoint or virtual camera to a second camera surface or viewpoint surface that is not adjacent to the first camera surface or viewpoint surface; and enabling temporal navigation of display of an animation during the transitioning.
- 47. An apparatus, comprising:
a spatial navigation unit allowing a user to spatially move a viewpoint on a first viewpoint surface; and a temporal navigation unit allowing a user to temporally control a sequence of transition images when spatial movement on the first viewpoint surface encounters an edge or predefined point of the first viewpoint surface.
- 48. An apparatus, comprising:
a determining unit, determining that a viewpoint or virtual camera for viewing a model or scene has been interactively translated off of or to a pre-defined point or edge of a first camera surface or viewpoint surface; a transitioning unit, in response to the determining, transitioning the viewpoint or virtual camera to a second camera surface or viewpoint surface that is not adjacent to the first camera surface or viewpoint surface; and a navigation unit enabling temporal navigation of display of an animation during the transitioning.
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is related to a U.S. application entitled "A PUSH-TUMBLE THREE DIMENSIONAL NAVIGATION SYSTEM" having Ser. No. 10/183,432, by Azam Khan, filed Jun. 28, 2002, and incorporated by reference herein.