The present description relates to enhanced presentations and management of online information and, in particular, to techniques for generating a representation of a user's connectivity relationships with network accessible devices, services, people, and data and for manipulating objects within that representation.
On-line communications have become an increasingly important aspect of people's lives. These communications can take many forms, including explicit person-to-person communication such as email, instant messaging, or other forms of sending electronic messages; communication with on-line services such as web sites, email servers, and other Internet Service Provider (“ISP”) services; and communication with local and remote devices, such as printers, scanners, or fax machines on a home network or, for example, on-line phones, cameras, PDAs, and other portable computers or devices.
Managing and communicating with the morass of types of devices and connections has become difficult and not very “user-friendly” to a casual, or not necessarily technically savvy, user. Interfaces to devices are inconsistent, and the requirements for accessing services are not uniform or even well-known. In a typical conventional computing environment, a user conducts such communications by locating a user interface (“UI”) associated with the desired target or task, figuring out how to use it, and then invoking it to conduct the desired communication. For example, to set up access to or to change default properties associated with access to a particular printer, a user is forced to find and invoke a “setup” tool (e.g., a printer configuration tool) from a user interface presented by the underlying operating system, for example the Microsoft Windows™ “desktop.” The setup tool displays a series of dialog boxes or other windows, whose user interface controls are dedicated to interacting with the target printer. The user is then forced to find the correct control, dialog, or other UI component to perform a desired operation. If the user can locate the appropriate user interface, recognize it as the needed one, and find the appropriate specific component to invoke, then the user can perform the desired task. However, for many users, management tools such as this one are impenetrable black boxes with limited options for control and little instruction.
One reason for these difficulties is that the current metaphor for operating system user interfaces for personal computers is typically an “office” desktop or a derivative thereof. The desktop metaphor was developed in the 1970s and was originally targeted to the office automation market. However, the office automation environment for which these user interfaces were designed no longer represents a reasonable facsimile of, or metaphor for, how many people today incorporate computers into everyday life.
For example,
Embodiments described herein provide enhanced computer- and network-based methods, systems, computer-readable instructions, and techniques for modeling and interacting with a user's universe of on-line relationships, including, for example, people, devices, content, services, and other entities that are connected to the user, directly or indirectly, via one or more networks. Each entity is associated with the user through an object, such as a physical or logical device, data collection, or service that is connected to the network. The objects associated with the entities to which a user has or potentially can have relationships are referred to collectively as a user's connectivity universe. Thus, for ease of description, a user's relationships with a set of entities are considered synonymous with the user's relationships with the objects that correspond to such entities, and the words “entity” and “object” are used interchangeably unless noted otherwise.
Example embodiments provide a WorldView Display System (“WVDS”), which automatically organizes a user's online relationships with such entities according to similarities of “access proximity” and provides a user interface for accessing and interacting with these entities and their corresponding objects. Access proximity is an assessment of the “closeness” of the relationship between an object, such as a device, a collection of data, a service, or other object that corresponds to an entity, and the user, as evaluated according to (or measured by) any one of a number of characteristics. Example characteristics include:
The WVDS automatically determines the universe of objects that the user has relationships with, automatically groups objects having similar assessments of access proximity, displays a representation of these groups of objects on a display device, and provides a uniform user interface for initiating an interaction with any displayed (represented) object. For example, the user can activate an object and “zoom in” to see what data content it contains; invoke a native user interface of the object (e.g., “open” the object); set up a data sharing relationship between data content; configure access permissions; set attributes for what is displayed in conjunction with an object's representation and what input is forwarded to a represented device; etc. The user invokes these operations in a uniform way that does not rely upon knowledge specific to the object.
Thus, in one aspect the WVDS provides an operating environment that models a connectivity universe from a user's point of view and that provides a metaphor for interacting with objects of potential interest to the user that is on-line centric as opposed to desktop centric. In addition, WVDS orients the user to focus on the media and media types that are present on devices in the user's connectivity universe as opposed to the configuration settings of particular devices. In another aspect, the WVDS provides a navigational model for viewing and interacting with a three-dimensional representation of the user's connectivity universe using graphics and rendering techniques that give an impression of moving (e.g., “flying”) through a virtual world (a 3-D universe) to locate, view, activate, and open objects. In yet another aspect, the WVDS provides a graphical user interface for easily setting up data sharing relationships between any two objects in a uniform manner. Other aspects will be apparent and can be gleaned from the description that follows.
In one embodiment, the WVDS groups objects and displays each group as a “proximity band” in the user's connectivity universe. Each proximity band displays a set of objects that are related to each other, from the user's perspective, in that they have similar characteristics as measured by access proximity. That is, each proximity band corresponds to a different class of access proximity, as assessed by whatever characteristic(s) is (are) currently configured for evaluating access proximity. Each proximity band displays representations of the objects that belong to (are grouped in) that band and a representation of the data content that is present on each such object.
For example, in the example embodiment illustrated in
In one embodiment, each device representation is displayed along with a device ring that simultaneously shows the contents (as data collections) associated with that particular device. Other embodiments may incorporate different types of graphical indicators, which may partially surround or totally surround a device representation. The device representation may indicate a physical or virtual device, such as a virtual “device” that represents a means to get access to a relationship such as another user's data collection. Other embodiments may require the user to navigate to a closer “level of perspective” (for example, by “zooming in” to the object) before displaying an associated device ring. In addition, some embodiments may permit a user to configure whether a device ring is displayed on a per device level, per device type, per proximity band, entire WVDS, etc., or in any combination.
Each device (or other displayed object) is considered active or inactive. The WVDS typically allows only one object (device or data collection) to be active at a time to control clutter and confusion; however, such settings are configurable. In some embodiments, the device needs to be made active before its device ring is displayed. In other embodiments, a device ring is displayed if appropriate to the device type, for example, without regard to whether a device is active or inactive. In a typical WVDS, the user activates a device (or data collection) by selecting the object using an input device, such as, for example, clicking on the object representation with a mouse cursor. The user can also select the object by “hovering” an input device cursor over the object representation. Once an object is active, a user interface is displayed, such as palette 230, to allow the user to change, for example, WVDS attributes, device-related attributes, or access privileges associated with the object. The user can also zoom in or out, thereby potentially changing how much detail of the object is shown and/or how large or small the components of the object appear, or can invoke a native user interface associated with the device (e.g., “open” the object). An object's representation is typically changed to indicate that the device is active.
Also, in some embodiments, the WVDS may recognize that a device is not currently on-line (accessible). In such a case, the WVDS may display an indication (not shown), such as a dashed line, or other demarcated indicator, connecting the device ring associated with the device representation to the network cabling displayed in the associated proximity band. In other embodiments, portions of a representation of an on-line device are omitted or changed when the device is off-line, such as graying out the device ring or leaving out a connector cable from a device ring to the network, etc.
Note that, in the illustrated embodiment shown in
Note that representations of devices may or may not display data content when the device representations are depicted in a WVDS universe. That is, in a typical configuration, the device representations display data collections which in turn contain and are used to view data content. The device representations may also be configured to include “screen forwarding” capabilities. That is, device models depicted in WVDS, which have associated display screens, may be configured to receive screen display output updates from the corresponding device and display these (device output) updates within the WVDS environment, thereby “forwarding” screen updates from the device to the WVDS. Depending upon characteristics of the underlying system, such as performance capabilities, these updates may be received and displayed in near real-time. Such screen forwarding permits a user to see what is happening on the associated device, but from within the context of the user's connectivity universe. Screen forwarding may be configured as with other WVDS configuration settings; for example, through a WVDS supported user interface. Accordingly, the WVDS may be configured to support screen forwarding for all devices that support the capability; for just the active device; on a per device, per user, or per proximity band basis; according to certain parameters or heuristics that take into account factors such as clutter, performance, privacy, and/or security; etc.
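As a minimal illustration of how such per-scope screen-forwarding configuration might be represented, the following Python sketch resolves whether forwarding is enabled for a given device; the class, field, band, and device names are assumptions made for this example and are not part of the WVDS definition.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ScreenForwardingPolicy:
    """Hypothetical per-scope screen-forwarding settings; most specific scope wins."""
    system_default: bool = False          # fallback when no other scope applies
    active_only: bool = True              # forward only for the currently active device
    per_band: Dict[str, bool] = field(default_factory=dict)
    per_device: Dict[str, bool] = field(default_factory=dict)

    def is_enabled(self, device_id: str, band: str, is_active: bool) -> bool:
        # A per-device override takes precedence over per-band and system defaults.
        if device_id in self.per_device:
            return self.per_device[device_id]
        if band in self.per_band:
            return self.per_band[band]
        if self.active_only:
            return is_active and self.system_default
        return self.system_default

policy = ScreenForwardingPolicy(system_default=True, active_only=True)
policy.per_device["office-printer"] = False      # never forward this device's screen
print(policy.is_enabled("laptop-01", band="My Stuff", is_active=True))   # True
print(policy.is_enabled("laptop-01", band="My Stuff", is_active=False))  # False
```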
Device representations may also be associated with some form of animation to highlight when they are active, or at other times.
In one embodiment, the WorldView Display System comprises one or more functional components/modules that work together with other parts of a user's online environment to model a user's connectivity universe. The components and/or sub-components of a WVDS may be implemented in software or hardware or a combination of both.
Although the techniques of modeling a user's connectivity universe and the WorldView Display System are described with reference to an external application running as one or more separate modules in addition to a native operating system, the techniques of the presented embodiments can also be used directly by an operating system to present an alternative metaphor to its own devices and data collections, as well as to other devices and data collections to which the operating system has access. Also, as illustrated with respect to
Example embodiments described herein provide applications, tools, data structures and other support to implement a WorldView Display System to be used for managing resources and relationships in a user's online world. Other embodiments of the described techniques may be used for other purposes, including for other types of user interfaces. In the following description, numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques. However, the described embodiments also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the code flow or module arrangement or using different algorithms entirely.
As referred to in step 801 in
In some embodiments, the WVDS supports an interface for adding new devices, collections, and types of data collections. By initiating a user interface dialog with the WVDS from a particular location on the displayed presentation, the user can specify a new device, collection, or collection type (media viewer/player). For example, by right clicking on a proximity band, the user can indicate a new device to be recognized (e.g., a specific computer system or a newly attached printer) and added to that particular proximity band. In an alternative embodiment, the dialog and new device are not proximity band specific (or the user can specify that they are not), and the WVDS automatically determines where to add the new device in its internal model. Similarly, the user can right click on the Media Viewers proximity band to add a new type of collection viewer to be discoverable. Media Viewers are described below with reference to
When initially executed, the WVDS creates and stores an initial inventory of the objects with which the user has a relationship. Since objects may come and go and relationships may change, this inventory is modified on some determined basis. For example, the WVDS may perform updates at specific times (such as the beginning of a session), at preprogrammed times (such as once a day), by registering a callback routine to be invoked by the operating system when a device is accessed or its settings changed, or, for example, in response to a specific update request initiated by the user. The initial inventory of objects may be constructed by discovering objects from a variety of resources, including, for example, from operating system services, which enumerate registered devices (e.g., local disk drives, connected printers, scanners, email servers, and web page histories); application programs that interact with network devices; and user input provided in response to a specific query or provided as configuration information using a user interface of the WVDS, etc.
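The following Python sketch illustrates one way the initial inventory might be assembled from several discovery sources; the source functions and object fields shown here are hypothetical stand-ins for operating system services, application queries, and user-supplied configuration.

```python
from typing import Callable, Dict, List

def discover_from_os() -> List[dict]:
    """Stand-in for enumerating registered devices via operating system services."""
    return [{"id": "disk-C", "kind": "disk"}, {"id": "printer-1", "kind": "printer"}]

def discover_from_apps() -> List[dict]:
    """Stand-in for querying applications that interact with network devices."""
    return [{"id": "mail.example.com", "kind": "email-server"}]

def discover_from_user_config(config: List[dict]) -> List[dict]:
    """Objects the user added explicitly, e.g., through a WVDS dialog."""
    return list(config)

def build_inventory(sources: List[Callable[[], List[dict]]]) -> Dict[str, dict]:
    """Merge all discovery sources into one inventory, de-duplicating by object id."""
    inventory: Dict[str, dict] = {}
    for source in sources:
        for obj in source():
            inventory.setdefault(obj["id"], obj)
    return inventory

user_added = [{"id": "dads-pc", "kind": "computer"}]
inventory = build_inventory(
    [discover_from_os, discover_from_apps, lambda: discover_from_user_config(user_added)]
)
print(sorted(inventory))   # ['dads-pc', 'disk-C', 'mail.example.com', 'printer-1']
```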
Once the universe of objects is determined, the WVDS determines an assessment of access proximity for each object in the inventory. (See step 802 in
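As one hedged illustration of assessing access proximity, the sketch below combines a configurable set of characteristic functions into a single closeness score; the particular characteristics shown (ownership, local-network presence, frequency of use) are illustrative assumptions, not a prescribed set.

```python
from typing import Callable, List

# Each characteristic maps an object description to a closeness score in [0, 1].
Characteristic = Callable[[dict], float]

def owned_by_user(obj: dict) -> float:
    return 1.0 if obj.get("owner") == "me" else 0.0

def on_local_network(obj: dict) -> float:
    return 1.0 if obj.get("network") == "home-lan" else 0.0

def frequently_used(obj: dict) -> float:
    return min(obj.get("uses_per_week", 0) / 10.0, 1.0)

def assess_access_proximity(obj: dict, characteristics: List[Characteristic]) -> float:
    """Combine whatever characteristics are currently configured into one score."""
    if not characteristics:
        return 0.0
    return sum(c(obj) for c in characteristics) / len(characteristics)

score = assess_access_proximity(
    {"id": "printer-1", "owner": "me", "network": "home-lan", "uses_per_week": 3},
    [owned_by_user, on_local_network, frequently_used],
)
print(round(score, 2))
```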
After automatically determining the access proximity for each object, the WVDS arranges the objects with similar measurements into groups according to an internal model of groups and any relevant WVDS configuration parameters. (See step 803 in
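Continuing the sketch above, objects with similar scores might then be grouped into bands; the band names and thresholds here are assumptions chosen for illustration rather than values defined by the WVDS.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def classify(score: float, bands: List[Tuple[str, float]]) -> str:
    """Map a proximity score to the first band whose threshold it meets.
    Bands are ordered from closest to farthest, with descending thresholds."""
    for name, threshold in bands:
        if score >= threshold:
            return name
    return bands[-1][0]

def group_into_bands(scored_objects: Dict[str, float],
                     bands: List[Tuple[str, float]]) -> Dict[str, List[str]]:
    """Group object ids with similar access proximity into proximity bands."""
    grouped: Dict[str, List[str]] = defaultdict(list)
    for obj_id, score in scored_objects.items():
        grouped[classify(score, bands)].append(obj_id)
    return dict(grouped)

# Band names and thresholds are illustrative only.
bands = [("My Stuff", 0.75), ("Family & Friends", 0.4), ("Everything Else", 0.0)]
print(group_into_bands({"disk-C": 0.9, "dads-pc": 0.5, "mail.example.com": 0.1}, bands))
```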
Table 1 below illustrates an example inventory created by a WVDS using one or more sources as described in step 801 and arranged according to the access proximity assessments described in steps 802-803.
The objects shown in Table 1 are arranged according to the internal model of the WVDS, a portion of which is depicted in
The “Buddy Rooms” indicated in Table 1 are virtual devices used to navigate to or represent data collections to which the user has access rights but that are hosted remotely and are not represented to the user through some other device relationship that the user has sufficient access rights to see as a device in some other grouping. Thus, Buddy Rooms provide a means of user interface access that would not otherwise be available from the other groupings of objects. For example, if a data sharing relationship is established with a second user's photo collection, and that photo collection resides on a disk drive of the second user to which the user does not otherwise have access, then the WVDS may present the second user's photo collection as a Buddy Room in the user's connectivity universe. A chat room provides another example of a Buddy Room.
Once an inventory of objects (and their data collections) has been created and grouped according to the WVDS internal model (or modified as directed), then the WVDS renders the groups of objects in a multi-dimensional rendering such as the proximity bands illustrated in
In some embodiments, the WVDS represents the groups of objects in a user's connectivity universe using proximity bands and renders them to look three dimensional. In one such example embodiment, the WVDS defines several different views of default proximity bands—a device centric view, a media centric view, and a combination view. In an example embodiment, a device view displays data content present in the user's connectivity universe from within the context of the devices on which the data resides. Using the device view, a user can easily view, specify settings for, and interact with devices. A media view displays data content present in the user's connectivity universe based on its media type, independent of the devices on which the data resides. Using the media view, a user can easily view and manipulate data based upon its type regardless of where the data resides—and thus does not have to search for the data and perform a desired operation multiple times in multiple locations. The different views are toggled on and off, for example, using buttons 130 and 140 in
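A minimal sketch of the device and media views, assuming a simple list of collection records, might group the same inventory two different ways; the field names and sample data are hypothetical.

```python
from collections import defaultdict
from typing import Dict, List

collections = [
    {"name": "Vacation Photos", "device": "laptop-01", "media_type": "photos"},
    {"name": "Road Mix",        "device": "phone-01",  "media_type": "music"},
    {"name": "Phone Photos",    "device": "phone-01",  "media_type": "photos"},
]

def device_view(items: List[dict]) -> Dict[str, List[str]]:
    """Group data collections by the device on which they reside."""
    view: Dict[str, List[str]] = defaultdict(list)
    for c in items:
        view[c["device"]].append(c["name"])
    return dict(view)

def media_view(items: List[dict]) -> Dict[str, List[str]]:
    """Group data collections by media type, regardless of hosting device."""
    view: Dict[str, List[str]] = defaultdict(list)
    for c in items:
        view[c["media_type"]].append(c["name"])
    return dict(view)

print(device_view(collections))  # {'laptop-01': [...], 'phone-01': [...]}
print(media_view(collections))   # {'photos': [...], 'music': [...]}
```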
In a typical default device view, such as that shown in
As mentioned, views other than a device view or a media view can be supported by a WVDS. For example, a view that filters the connectivity universe by a user's relationships with certain individuals can be incorporated. In one embodiment, a People filter button (shown for example in
Once the WVDS has rendered a representation of the connectivity universe associated with a user, the user (or a program through an API) can navigate within the representation to perform a variety of functions. (See step 805 in
A user navigates the connectivity universe presented by the WVDS using an input device, such as a mouse, to control which portion of the universe is currently displayed on the display screen (e.g., a viewing “camera” location/angle) and to control the size of data that is presented on the display screen and potentially how much detail is displayed (e.g., the “zoom” level). (Depending upon how zoom operations are implemented, “zooming” can refer to how big or small objects appear and/or how much detail is displayed.) In a typical WVDS embodiment, zooming is generally performed using smooth animations. In some embodiments, zoom levels or ranges of zoom levels are associated with different levels of perspective, which can be further used, along with other considerations, to define what is presented at different ranges or values of “zoom” levels. When a user zooms in or out to a particular zoom level, the WVDS determines to which level of perspective (“LP”) the zoom level corresponds and then uses the determined LP to decide what should be displayed. For example, for some zoom ranges (which may, for example, correspond to a range of distances from a virtual camera to the object(s) at or near the center of the screen), it may make sense for the WVDS to display all of the possible detail that is available to be shown. For example, when the camera is within a certain distance of the object or, using another metric, close enough so that the object occupies at least a certain portion of the display screen, the user may find it helpful to see all of the detail of the object. For other ranges, such as viewing an entire connectivity universe representation, it may not make sense to display all of the detail associated with all objects. Metrics other than distance from an object and/or portion of a display that is occupied may also be used to associate LPs with zoom levels.
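One possible way to map a zoom level to a level of perspective is to compare the camera distance against configured ranges, as in the following sketch; the level names are taken from the list below, while the distance thresholds are assumptions for illustration.

```python
from typing import List, Tuple

def level_of_perspective(camera_distance: float,
                         lp_ranges: List[Tuple[str, float]]) -> str:
    """Return the level of perspective whose distance range the camera falls within.
    `lp_ranges` is ordered from closest to farthest; names and distances below
    are illustrative assumptions, not values prescribed by the WVDS."""
    for name, max_distance in lp_ranges:
        if camera_distance <= max_distance:
            return name
    return lp_ranges[-1][0]

lp_ranges = [("Active Collection", 5.0), ("Device", 20.0),
             ("Proximity Band", 100.0), ("World", float("inf"))]
print(level_of_perspective(3.0, lp_ranges))    # "Active Collection"
print(level_of_perspective(50.0, lp_ranges))   # "Proximity Band"
```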
Note that zooming operations can be implemented without levels of perspective, such that all detail is potentially viewable, depending on the object size, the distance of the camera from the object, and the resolution of the display upon which the connectivity universe representation is being viewed. Such detail may include, for example, sub-collections, device and collection rings, UI components associated with objects, etc. For example, when the viewing camera is located a certain distance from an object, that object and its associated detail might occupy 100×100 pixels on the display screen. However, when the camera pulls further back away from the object (for example, in response to a user zoom-out request), the object will appear smaller and may only occupy, for example, 10×10 pixels on the display screen. If the camera pulls back even further, the object will occupy less of the display screen, possibly to a point where the object occupies a single pixel, and perhaps even further to a point where the graphics rendering system determines that the object is too small to display at all (even though logically the object is still there and may appear again when the camera zooms in closer to it). Note that when an object's representation gets smaller, its associated detail also gets accordingly smaller, such that, at certain points, some of the associated detail becomes too small to discern or even to occupy a single pixel on the display, even though the object itself, being larger, is still displayed and potentially discernable. In some embodiments, certain associated detail (e.g., object attributes) may retain a constant display size, regardless of zoom level and/or the display size of the associated object, and such constant-size attributes may further optionally be suppressed from being displayed when their associated object becomes too small to be displayed at all. For example, in one embodiment, when the user hovers over an object, the name of the object is displayed at a constant size for easy readability. The particular techniques used to determine the displayed size of objects depend upon many factors, such as the graphics engine employed, the resolution of the host computer display, the hardware, the operating system, etc.
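The relationship between camera distance and displayed size described above can be sketched with a simple pinhole-projection calculation; the focal length and the one-pixel threshold are assumptions, and a real graphics engine would make this determination itself.

```python
def projected_pixels(object_size: float, camera_distance: float,
                     focal_length_px: float = 800.0) -> float:
    """Approximate on-screen size (in pixels) of an object under a simple
    pinhole-projection model; the focal length is an illustrative assumption."""
    if camera_distance <= 0:
        return float("inf")
    return object_size * focal_length_px / camera_distance

def should_draw(object_size: float, camera_distance: float,
                min_pixels: float = 1.0) -> bool:
    """Skip drawing once the projection falls below roughly one pixel."""
    return projected_pixels(object_size, camera_distance) >= min_pixels

print(round(projected_pixels(1.0, 8.0)))    # ~100 pixels when the camera is close
print(round(projected_pixels(1.0, 80.0)))   # ~10 pixels after pulling back
print(should_draw(1.0, 2000.0))             # False: too small to display at all
```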
Other combinations of implementing zoom levels to correspond to one or more levels of perspective are also possible. In any case, the WVDS may be implemented and/or configured to control what is displayed in correspondence to how close-in or far away the user is.
In one embodiment, the WVDS provides the following levels of perspective in increasing order (farthest away to closest):
1. World
2. Proximity Band
3. Device
4. Active Device
5. Native UI
6. Collection
7. Active Collection
8. Sub-collection
9. Active Sub-collection
10. etc. (further levels of sub-collections)
Level 1 represents the outermost level of perspective. For example, in this embodiment, a definition of Level 1 specifies that the entire world is displayed and accessible (to the extent it can be viewed on the device). In Level 2, the focus is on proximity bands. Level 10 represents further inner levels of perspective until there are no more sub-collection levels to be displayed or accessed. Level 5 represents zooming in to an object close enough to display the native user interface that is specific to the object or one provided by the WVDS (for example, if the device is not capable of providing access to its user interface from within the WVDS). Thus, when a user accesses a native user interface of an object in the WVDS, the user does so in the context of the user's entire connectivity universe and, by zooming in and out, the user can access different portions of the user's universe.
Depending upon the WVDS configuration settings, the different levels of perspective correspond to transitions in the amount of detail displayed in the connectivity universe representation. According to one WVDS definition, the level of perspective at which device rings are displayed around devices is termed the “device ring display level.” In one embodiment, the device ring display level is the Proximity Band level, although it is configurable. In another embodiment, the device ring display level is the World level, and thus device rings are always displayed. The level of perspective at which collections are displayed on the device rings is known as the “collection display level.” Typically, this occurs at the Proximity Band level, although, as with all of the other levels of perspective, this behavior is configurable. The level of perspective at which sub-collections are displayed on the data collection rings is known as the “sub-collection display level.” Typically, this occurs at the Active Collection level, although, as with all of the other levels, this behavior is configurable.
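The levels of perspective listed above and the configurable display levels described in this paragraph might be expressed along the following lines; the enum, helper names, and defaults are a sketch that follows the stated typical settings (device rings and collections at the Proximity Band level, sub-collections at the Active Collection level) and are assumptions rather than a definitive implementation.

```python
from enum import IntEnum

class LP(IntEnum):
    """Levels of perspective from the list above, ordered farthest to closest.
    Further levels of sub-collections would continue the sequence."""
    WORLD = 1
    PROXIMITY_BAND = 2
    DEVICE = 3
    ACTIVE_DEVICE = 4
    NATIVE_UI = 5
    COLLECTION = 6
    ACTIVE_COLLECTION = 7
    SUB_COLLECTION = 8
    ACTIVE_SUB_COLLECTION = 9

# Configurable display transitions (illustrative defaults per the text above).
DEVICE_RING_DISPLAY_LEVEL = LP.PROXIMITY_BAND
COLLECTION_DISPLAY_LEVEL = LP.PROXIMITY_BAND
SUB_COLLECTION_DISPLAY_LEVEL = LP.ACTIVE_COLLECTION

def show_device_rings(current: LP) -> bool:
    # Rings appear once the user has zoomed at least as close as the display level.
    return current >= DEVICE_RING_DISPLAY_LEVEL

def show_sub_collections(current: LP) -> bool:
    return current >= SUB_COLLECTION_DISPLAY_LEVEL

print(show_device_rings(LP.WORLD))                 # False under these defaults
print(show_sub_collections(LP.ACTIVE_COLLECTION))  # True
```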
Note that in some embodiments, the levels of perspective may be effectively reduced to a single level of perspective if no transitions between amounts of detail or no differences between levels are defined. For example, if all objects are displayed and all functionality is available all (or most) of the time, the zoom in and zoom out behavior of the WVDS user interface may set the viewport and camera angle without necessarily affecting how much or what detail is displayed. Similarly, if only a couple or a few display detail transitions are defined, the WVDS definition may incorporate only a couple or a few levels of perspective.
For example, in one embodiment of a WVDS, all objects are always displayed except that only sub-collections of active devices/collections are shown. When the user activates an object, and, for example, zooms in to a sufficient level, the zoom level may correspond to a level of perspective (e.g., an Active Object LP) that indicates a display transition to display sub-collections of that object. In this example, two levels of perspective are sufficient to define the display characteristics: a level in which (sub-)collections are shown for an active object if any, and one in which they are not shown. If more than one “level” of sub-collections is allowed (for example, nested sub-collections), then additional levels of perspective are incorporated.
In addition, different levels of perspective may be associated with different default WVDS configuration parameters. For example, screen forwarding for device representations that correspond to devices with display screens may default to being turned on at some levels but not for other levels. Considerations such as clutter, performance, and security may be incorporated in determining at which levels screen forwarding makes sense. Other WVDS configuration parameters can similarly be associated with different levels of perspective. In some embodiments, even if these parameters are set as defaults for a particular LP, a user may be allowed to override such settings on a per object basis (a proximity band basis, or a system-wide basis), for example, through a WVDS user interface associated with a particular device object. In addition, the WVDS may use heuristics to automatically determine when certain parameters are set or not.
The WVDS renders the objects in the user's connectivity universe based upon the current configuration of these levels of perspective.
Assuming that, at least at some point, the representation of the connectivity universe is larger than can fit on the display screen, the user controls the portion of the universe displayed by moving the input device to reflect the user's position. According to one embodiment, the input device behaves like a camera view finder. That is, as the user moves (as the input device indicates motion) in a forward direction, the user will see more objects ahead while those objects that were previously closest to the user will move behind the user and fall out of view. Also, as the user moves in a direction so that the user appears to be looking more directly downward (moves the point of view source higher), the user will see more of the top of objects and less of a side view. Similarly, as the user moves to the side, the user will see those objects to that side while objects on the other side fall out of view.
Many different graphics and rendering techniques are available to navigate through a two or three dimensional representation of the connectivity universe displayed on a display device. The following definition describes one user interface to effectuate the camera position, angle, and orientation movements described above. Many equivalent user interface definitions can be similarly incorporated, and different user interfaces can be optimized for different input devices. For example, definitions may be created to support other input devices, such as joysticks, that can control multidirectional, 3-D movement.
In addition to general navigation, the user can also further manipulate objects and their content by activating them. As briefly mentioned with reference to
Other buttons for other capabilities can be easily incorporated, and other iconic representations or symbols can be displayed. For example, in one embodiment, the WVDS supports a uniform “media control” type interface on a data collection for manipulation of the contents of the collection. Media controls include commands such as a “play” command, a “pause” command, a “next” command, a “previous” command, a “fast forward” command, and a “rewind” command, which are supported in the form of buttons or other UI components. The user can invoke these media controls to easily cycle through the data contents of a collection and to invoke the appropriate player/viewer to present the contents.
Using the access control button of either a device or data collection UI palette, for example buttons 2122 or 2222, the user can cause the WVDS to display an access control dialog (not shown) to configure access permissions on the corresponding device or collection, to the extent that the user has permission to do so. Setting access permissions from this dialog allows the user to easily specify access permissions at the object level for one or more users, which may differ for each user or group of users, instead of setting them one at a time for each user to whom access is to be granted. Alternatively, access control cards can be used to manage access permissions at an individual level. As described above, in one embodiment, access control cards are presented along with an active object's representation (and at other times).
In one embodiment, each access card has a front side and a back side. Once an access control card has been set up for a particular object, the WVDS may be configured to display the current settings on the front side of the card or a symbol of the user (or an avatar representing the user) as part of the representation of the object. In addition, in some embodiments, an access control card associated with a device or data collection may be displayed for each user that has some type of access to the object, resulting in potentially multiple access control cards being displayed at the same time for a single object. Typically, the WVDS displays the (front side of) associated access control cards for active objects. When a user then selects an access control card (to the extent the user's permissions allow), an animation turns the card from the front to the backside, resulting in the card as shown, for example, in
In other embodiments, the WVDS can incorporate other types of settings and/or access control parameters. For example, controls that limit access based upon the type of content or device in combination with certain characteristics of a user, or based upon other limits such as time, may be implemented to effect a parental control interface. Such interfaces can be integrated into the WVDS, for example, as part of the settings or access control buttons available on the WVDS UI palettes, for example, robot button (2121 and 2221 in
For example, in one embodiment screen forwarding and input device redirection capabilities may be configured using these buttons for particular devices. Many combinations are contemplated, such as defining an initial WVDS configuration definition that generally enables or disables screen forwarding, for example, to allow a user to follow what's happening on multiple devices simultaneously or to reduce clutter, but still allow a user to override these settings on a per object, proximity band, or system-wide basis.
The sync/share interface cable present on a UI palette of a data collection, for example the cable 2224 in
Data sharing relationships may be one-way or two-way. A one way relationship implies that one data collection serves as a source for data updates and one data collection serves as a target. A two-way relationship implies that each collection acts as a source collection for the other when their respective data content changes and that each collection acts as a target collection (recipient) for the other's changed data content. Thus, the shared data is transferred in two directions and the sharing relationship can be termed bidirectional.
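A data sharing relationship, as described here, might be recorded in a structure along the following lines; the field names and defaults are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SharingRelationship:
    """Sketch of a data sharing relationship between two collections."""
    source: str                            # id of the source data collection
    target: str                            # id of the target (recipient) collection
    bidirectional: bool                    # True = two-way; each end is both roles
    virtual: bool = False                  # True = send a description/link, defer transfer
    update_frequency: str = "on-change"    # e.g., "on-change", "hourly", "daily"

    def endpoints_for_update(self, changed_collection: str):
        """Return (source, target) for a change originating at `changed_collection`."""
        if changed_collection == self.source:
            return self.source, self.target
        if self.bidirectional and changed_collection == self.target:
            return self.target, self.source
        return None  # one-way relationship; changes at the target do not propagate

rel = SharingRelationship("my-photos", "dads-photos", bidirectional=False)
print(rel.endpoints_for_update("my-photos"))    # ('my-photos', 'dads-photos')
print(rel.endpoints_for_update("dads-photos"))  # None
```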
Also, data sharing relationships may involve the actual transfer of data or may involve “virtual” transfers, in which the device associated with the target collection receives a description of the modified data content, but the actual transfer is delayed until a user tries to access it (e.g., the recipient collection may contain a link to the shared source data).
The WVDS also provides a user with an ability to set up “functional agents” at each end of the data sharing relationship. These functional agents provide hooks into code that is executed as appropriate upon the sending or receiving of data by a collection. Many such functional agents can be defined. In an example embodiment, the WVDS supports the following functional agents:
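Without reproducing any particular set of agents, a generic sketch of such send/receive hooks might look like the following; the example agent behaviors (adding a watermark, marking a notification) are purely illustrative assumptions.

```python
from typing import Callable, List

# A functional agent is a callback run when a collection sends or receives shared data.
Agent = Callable[[dict], dict]

class CollectionEndpoint:
    def __init__(self, name: str):
        self.name = name
        self.on_send: List[Agent] = []      # run before data leaves this collection
        self.on_receive: List[Agent] = []   # run as data arrives at this collection

    def send(self, item: dict) -> dict:
        for agent in self.on_send:
            item = agent(item)
        return item

    def receive(self, item: dict) -> dict:
        for agent in self.on_receive:
            item = agent(item)
        return item

photos = CollectionEndpoint("my-photos")
photos.on_send.append(lambda item: {**item, "watermark": "shared by me"})
inbox = CollectionEndpoint("dads-photos")
inbox.on_receive.append(lambda item: {**item, "notified": True})
print(inbox.receive(photos.send({"file": "beach.jpg"})))
```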
In one embodiment, data sharing relationships are established by connecting (such as by dragging or using other direct manipulation input commands) a representation of a sync/share cable from a source collection or device onto a target collection or device. More specifically, the user drags a “sharing cable” with a plug within the displayed universe (using other navigation commands as appropriate) and “plugs” the cable plug into a “receptor” on a target data collection or device by, for example, a drop movement. Upon plugging in the sharing cable, the WVDS automatically establishes a (typically) synchronized data sharing relationship between a corresponding source data collection and a (direct or implied) target data collection. Optionally, a sharing “settings” configuration dialog or a confirmation dialog may be displayed before completing the connection. In one embodiment that utilizes a mouse, when the user clicks on a sharing cable plug, the mouse can be used to drag the cable (which is pulled out from the collection/device as needed to follow the mouse around) without depressing any buttons. (The WVDS accomplishes this functionality by implementing modal operation when the mouse is used to click on the cable plug.) The user is thus able to use full navigation commands, including zooming to find an appropriate target collection. While a cable is being dragged, appropriate candidate target collections may be highlighted or otherwise given emphasis (or devices when the level of perspective or zoom level is too far away to present collections). In some embodiments, inappropriate targets are dimmed. When the cable plug nears a candidate collection, the candidate collection may display a receptor or other target indicator to indicate to the user that the cable can be attached to that collection. A further mouse click or other type of selection indication by the user on the receptor or other target indicator may be used to indicate that a connection should be made with the target collection.
Devices can also display receptors when a cable plug comes near them. Even though data sharing relationships are ultimately established between data collections, the WVDS will automatically attempt to set up a relationship between corresponding types of data collections when the user specifies a device as either the source or target of a drag operation of a sharing cable. For example, when the user drags a cable from a collection to a target device, the WVDS creates a data sharing relationship between the source collection and a collection of the same type on the target device. If there is more than one collection of that type on the device, then the user is queried to determine the desired target. If there are no collections of that type yet on the device, then a new collection is created.
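The target-resolution behavior described in this paragraph (match a collection of the same type, query the user when several match, create one when none match) could be sketched as follows; the data shapes and the `ask_user` callback are assumptions.

```python
from typing import Callable, List

def resolve_target_collection(source_type: str,
                              device_collections: List[dict],
                              ask_user: Callable[[List[dict]], dict]) -> dict:
    """Resolve the target collection when a sharing cable is dropped on a device."""
    matches = [c for c in device_collections if c["type"] == source_type]
    if len(matches) == 1:
        return matches[0]
    if len(matches) > 1:
        return ask_user(matches)          # query the user for the desired target
    # No collection of that type exists yet on the device: create a new one.
    new_collection = {"type": source_type, "name": f"New {source_type} collection"}
    device_collections.append(new_collection)
    return new_collection

device = [{"type": "music", "name": "Road Mix"}]
target = resolve_target_collection("photos", device, ask_user=lambda m: m[0])
print(target)  # a newly created photos collection on the target device
```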
Both data collections and devices can include sharing cables with plugs. For example, the sync/share interface cable 2224 in
Data sharing relationships also may be set up between an external source object to which the user has limited current access rights (the user can access data from the object but is not currently viewing the object within the WVDS display) and a target object to which the user has access rights and which is being displayed in the current WVDS representation. The WVDS will display an appropriate indication to show that data for the data sharing relationship is coming from an external object. For example, the WVDS may display a cable representation that appears to go “off-screen” with a terminator that indicates the source of the data. A corresponding data sharing relationship can be set up with an external object as the target object and an appropriate indicator displayed to indicate that data sharing relationship.
When a data sharing relationship is established, the WVDS may present a user interface to allow the user to configure the parameters and settings of the relationship, including whether the relationship is one-way or two-way (uni- or bi-directional), parameters such as frequency of updates, desirability of virtual transfers, etc., and the specification of special functional agents.
Once the relationship is fully established and configured, in one embodiment, the WVDS indicates the data sharing relationship graphically on the presentation of the connectivity universe, for example, using a colored cable between the relevant collections.
One of the functions available through the WVDS interface is to allow a user to invoke the native interface (or WVDS provided object-specific interface) associated with a particular object within the context of the WVDS. When a device is active, the user can cause input from an input device (for example, a mouse or keyboard) to be “redirected” to the active device, such as by “opening” the active device (by maximizing it or clicking within the device screen) as described with reference to
In addition, if screen forwarding is also turned on for the active device, then, when device input is redirected to the active device, updates to the display screen of the associated real device, potentially based upon the redirected input, are automatically displayed in the WVDS device representation. For example, in one embodiment of the WVDS, a device's display screen as modeled by the WVDS (for example, the display screen of the computer system 201 in
In addition to interacting with the native UI of a device within the connectivity universe representation, the user can zoom in closer to a device until the device's native UI becomes “full screen” (e.g., maximized) on the host device's display screen. The standard navigation techniques for zooming and/or changing levels of perspective (e.g., rolling a mouse wheel to zoom in and out, using a zoom handles button, etc.) can be used to accomplish this function. In addition, on device representations with which a user can interact in full screen mode, the user is typically able to select an area within the device representation display screen (e.g., a maximize button)—effectively zooming in (and/or changing the level of perspective where appropriate) to “maximize” the interface shown on the device representation display screen. In some embodiments, a maximize operation automatically causes input redirection. For example, as shown in
Once the user has maximized a device representation so that the user is viewing only the native UI of the device's underlying operating system (and similarly when the user wishes to enter the WVDS initially from a device's native user interface), the user can enter (or return to) the WVDS connectivity universe representation by selecting a “WVDS Restore” button or other user interface component superimposed on (or otherwise integrated with) the native UI's display presentation. The specific user interface component added to each device's native user interface to accomplish this restore functionality typically depends upon the type of device, its native user interface, and the operating system of the device. If the device is driven by a Windows operating system, then, for example, the WVDS can add a WVDS Restore button to each window's title bar that invokes a type of “restore” function to render the window(s) containing the native UI as a smaller replica of the window(s) within the connectivity universe representation.
In some embodiments, when an appropriate user interface component is used to invoke the “restore” functionality, the WVDS gradually and smoothly zooms out from the full screen display to a higher (further away) level of perspective, which presents the device within the context of the user's connectivity universe representation. This “zoom out” transition animates a zoom out action in a manner that allows a user to easily see and understand the transition as the view shifts to a broader connectivity universe view. In some embodiments, when a device's screen representation is restored, input is automatically redirected from the device to the WVDS.
As described, this zoom out can be presented as a gradual and smooth transition out to the WVDS representation.
In
The sequence of
In some embodiments, a user can easily switch between interacting with a represented device's native user interface and the WVDS user interface by selecting an area (e.g., using a mouse click) within the device's screen representation in WVDS or outside the device's screen representation in WVDS. For example, if input is currently being redirected to the device's native UI, then, when the user clicks on an area outside of the device's screen representation (other than on a representation of a keyboard or other input device simile displayed by WVDS for the purpose of redirecting input), then WVDS interprets the click and input that follows as intended for the WVDS and not for the device. If, however, a device's WVDS screen representation is maximized, then the user cannot click outside of the screen representation, and employs other techniques to send subsequent input to the WVDS, such as by restoring the display as described above, which in some embodiments will automatically redirect input back to the WVDS. Conversely, if the user clicks on an area inside of the device's (non-maximized) WVDS screen representation, then the WVDS interprets the click as a command to send subsequent input to the device (and some indication of the redirection is typically displayed).
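A simplified sketch of this click-routing behavior follows; it ignores clicks on restore buttons and on WVDS-displayed input-device similes, and the function and parameter names are assumptions.

```python
def route_click(click_inside_device_screen: bool,
                device_maximized: bool) -> str:
    """Decide where subsequent input goes after a mouse click, per the text above.
    Returns "device" or "wvds"."""
    if device_maximized:
        # The user cannot click outside a maximized screen representation; input
        # stays with the device until the view is restored.
        return "device"
    if click_inside_device_screen:
        # A click inside a (non-maximized) screen representation redirects input
        # to the device (an indication of the redirection is typically displayed).
        return "device"
    # A click outside the screen representation sends subsequent input to the WVDS.
    return "wvds"

print(route_click(click_inside_device_screen=True, device_maximized=False))   # "device"
print(route_click(click_inside_device_screen=False, device_maximized=False))  # "wvds"
```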
The WorldView Display System also supports the ability for a user to “open” a collection to invoke a native user interface associated with the collection. The “rose” (open) button in an active data collection's UI palette (see, for example, button 2223 in
As mentioned with reference to
In addition to the operations and functions described, the WVDS can offer many additional enhancements. For example, the WVDS may also support a general “settings” user interface, accessible from a button or other component on the screen, for configuring devices, collection types, and other WVDS configuration parameters. Such an interface can be used, for example, to configure the modeling parameters of the WVDS; configure thresholds such as the maximum number of proximity bands to display at certain levels of perspective; hide and unhide the display of particular proximity bands; set up characteristics to assess access proximity; map access proximity classes to proximity bands; specify that particular devices are mapped to particular proximity bands; specify collections on devices, etc. Many alternative interfaces to these functions can also be easily incorporated and are contemplated to operate with the techniques described herein.
In the embodiment shown, computer system 2500 comprises a computer memory (“memory”) 2501, a display 2502, a Central Processing Unit (“CPU”) 2503, Input/Output devices 2504, and Network Connections 2505. The WorldView Display System (“WVDS”) 2510 is shown residing in memory 2501. The components of the WorldView Display System 2510 preferably execute on CPU 2503 and manage the generation and use of connectivity universes, as described in previous figures. Other downloaded code 2530, terminal emulators as required 2540 and potentially other data repositories, such as data repository 2520, also reside in the memory 2501, and preferably execute on one or more CPUs 2503. In addition, one or more components of the native operating system for the computer system 2550 reside in the memory 2501 and execute on one or more CPUs 2503. In a typical embodiment, the WVDS 2510 includes one or more Display Managers 2511, at least one Rendering Engine 2512, user interface support modules 2513, API support 2514, and WVDS data repository 2515, which contains for example WVDS configuration and connectivity universe information.
In an example embodiment, components of the WVDS 2510 are implemented using standard programming techniques, including object-oriented techniques as well as monolithic programming techniques. In addition, programming interfaces to the data stored as part of the WVDS can be made available by standard means, such as through C, C++, C#, and Java APIs, through scripting languages such as XML, or through web servers supporting such interfaces. The WVDS data repository 2515 is preferably implemented as a database system rather than as a text file for scalability reasons; however, any method for storing such information may be used.
The WVDS 2510 may be incorporated into a distributed environment that is composed of multiple, even heterogeneous, computer systems and networks. For example, in one embodiment, the Display Manager 2511, the Rendering Engine 2512, and the WVDS data repository 2515 are all located in physically different computer systems. In another embodiment, various components of the WVDS 2510 are each hosted on a separate server machine and may be remotely located from the tables that are stored in the WVDS data repository 2515. Different configurations and locations of programs and data are contemplated for use with techniques of the described embodiments. In example embodiments, these components may execute concurrently and asynchronously; thus the components may communicate using well-known or proprietary message passing techniques. Equivalent synchronous embodiments are also supported by a WVDS implementation.
The capabilities of the WVDS described above can be implemented on a general purpose computer system, such as that described with reference to
For example, as described above, one capability of the WVDS interface that is available to a user once the connectivity universe is presented is the ability to interact with a native user interface associated with a represented device. In one embodiment, the WVDS supports the ability to “open” an active device, thereby enabling the user to send (redirect) input to the active device. In other embodiments, the WVDS supports the ability to “open” (and redirect input to) any device, regardless of whether it is active or not. In addition, the WVDS supports the ability to forward screen output from a native interface of a represented device to a device representation (whether or not the device is “active,” depending upon the WVDS configuration settings). The manner in which the WVDS can implement a native user interface mode of a device that supports combinations of input redirection and screen forwarding depends upon the device and its underlying operating system.
Some devices and their native operating systems may support drawing directly to a window specified by the WVDS on the host computer display. In situations in which the WVDS does not need to process the output, for example when native device output is presented in full screen mode, the WVDS may, depending upon, for example, configuration settings, provide such a window, and the represented device can draw directly to it. However, for other devices, and in situations where the WVDS desires to process the output, for example when the WVDS is configured to support rendering in which displayed content is sometimes rotated, other techniques are incorporated. For example, in one embodiment, the WVDS requests the operating system of the host device to execute a terminal emulator for communicating between the WVDS and the particular device to which input is being directed and/or from which output is being received. The WVDS also invokes a corresponding host software routine (code) on the particular device's native operating system for communicating with the terminal emulator that is executing on the host device. (The device terminal emulator on the host device thus communicates with the host software routine on the particular device.) When input redirection is enabled, input received by the WVDS host input devices is passed to the terminal emulator, which forwards the input to the corresponding software routine on the particular device. When screen forwarding is enabled, screen updates (output) that originate on the particular device are passed from the corresponding software routine to the terminal emulator executing on the host device and then forwarded to the WVDS, which renders them on the display screen representation of the device representation that corresponds to the particular device. (When the device representation is full screen, the terminal emulator may be able to write directly to a WVDS display “window,” thus expediting screen updates.) Other alternative implementations are possible.
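As a rough, self-contained sketch of the relay described above (assuming in-process queues in place of the real emulator transport, and hypothetical routine names), input redirection and screen forwarding might flow as follows:

```python
import queue
import threading

# Queues stand in for the terminal-emulator transport between the host and the
# represented device; all names here are assumptions for illustration.
to_device: "queue.Queue[str]" = queue.Queue()    # redirected input events
to_wvds: "queue.Queue[str]" = queue.Queue()      # forwarded screen updates

def device_host_routine() -> None:
    """Runs on the represented device: consume redirected input, emit screen updates."""
    while True:
        event = to_device.get()
        if event == "quit":
            break
        to_wvds.put(f"screen update after input: {event}")

def wvds_render(update: str) -> None:
    """Stand-in for drawing onto the device representation's screen in the WVDS."""
    print("render:", update)

worker = threading.Thread(target=device_host_routine, daemon=True)
worker.start()
to_device.put("key:A")           # input redirection: host input forwarded to the device
wvds_render(to_wvds.get())       # screen forwarding: device output drawn in the WVDS
to_device.put("quit")
worker.join()
```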
Some devices, such as some current cameras, do not support even terminal emulation capabilities. In such cases, if there are no alternatives for interacting with the native UI of a device, then the WVDS may disable this function for that device or offer an alternative user interface.
The user can also “open” an active collection to invoke a native user interface associated with the collection. In one embodiment, when the collection resides on the host device, the WVDS directs the host device's operating system to execute the default application for displaying (or otherwise presenting) the active collection's content. When the collection resides on a device other than the host device, the WVDS may direct the host device's operating system to execute the default application for displaying (or otherwise presenting) the active collection's content or may use terminal emulation techniques as described above, or other means of communicating with the native UI of the device associated with the collection, to present the designated collection.
All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, including but not limited to U.S. Provisional Patent Application No. 60/562,848, entitled “A METHOD AND SYSTEM FOR MANAGING PERSONAL NETWORK RELATIONSHIPS,” filed Apr. 16, 2004, U.S. Provisional Patent Application No. 60/566,507, entitled “A METHOD AND SYSTEM FOR MANAGING PERSONAL NETWORK RELATIONSHIPS,” filed Apr. 29, 2004, U.S. Provisional Patent Application No. 60/630,764, entitled “A METHOD AND SYSTEM FOR MANAGING PERSONAL NETWORK RELATIONSHIPS,” filed Nov. 24, 2004, and U.S. application Ser. No. 11/109,487, entitled “MANIPULATION OF OBJECTS IN A MULTI-DIMENSIONAL REPRESENTATION OF AN ON-LINE CONNECTIVITY UNIVERSE,” filed Apr. 18, 2005, are incorporated herein by reference, in their entirety.
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, the methods and systems for performing presentation and rendering discussed herein are applicable to architectures other than a Microsoft Windows operating system architecture. The methods and systems discussed herein are applicable to differing protocols, communication media (optical, wireless, cable, etc.) and devices (such as wireless handsets, electronic organizers, personal digital assistants, portable email machines, game machines, pagers, navigation devices such as GPS receivers, etc.).
Related U.S. Application Data (Provisional Applications)

Number | Date | Country
---|---|---
60/562,848 | Apr. 2004 | US
60/566,507 | Apr. 2004 | US
60/630,764 | Nov. 2004 | US

Related U.S. Application Data (Continuation)

Relation | Number | Date | Country
---|---|---|---
Parent | 11/109,487 | Apr. 2005 | US
Child | 11/213,676 | | US