The technology disclosed relates to apparatuses, methods, and systems for digital collaboration, and more particularly to digital display systems which facilitate multiple simultaneous users having access to global workspace data.
Digital displays are often used for interactive presentations and other purposes in a manner analogous to whiteboards. Some displays are networked and can be used for collaboration, so that modifications made to the display image on one display are replicated on another display. Collaboration systems can be configured to operate collaboration sessions in which users located at different client platforms share a workspace as described in our co-pending U.S. application Ser. No. 14/090,830, entitled “Collaboration System Including A Spatial Event Map”, filed 26 Nov. 2013 (US 2014-0222916-A1, published 7 Aug. 2014). The distributed nature of such systems allows multiple users in different places to interact with, and change, data in the same workspace at the same time, and also at times when no other user is observing the workspace. Also, the workspace can be very large, essentially unbounded in some systems.
One problem associated with collaboration systems using large workspaces relates to navigation around the workspace. Because the workspace can be essentially unbounded, and users can place graphical objects anywhere in the workspace, it can be difficult to discover and track the work being done by collaborators.
A system is disclosed that supports the storing and tracking of a plurality of collaboration sessions, each of which is accessible across multiple devices and locations. The technology disclosed includes a method for one client to find and track on their display the transactions generated by another client within a shared workspace.
One system described herein comprises one or more data processors including memory storing computer programs for a database including one or more workspace data structures for corresponding workspaces. A workspace data structure can include a spatial event map for a specific workspace.
The system described includes a first network node including a display having a physical display space, a user input device, a processor and a communication port. The first network node can be configured with logic to establish communication with one or more other network nodes, which can include for example server-side network nodes and peer client-side network nodes, as a participant client in a workspace session. The first network node can have memory, or have access to memory, to store collaboration data identifying graphical targets having locations in a virtual workspace used in the workspace session. The collaboration data can be a spatial event map, as described herein, or other type of data structure, including locations in a virtual workspace of respective graphical targets. The first network node in this system has logic to define a local client viewport having a location and dimensions within the workspace, and to map the local client viewport to a local client screen space in the physical display space at the first network node. The first network node can also provide a user interface displaying a list of participant clients in the session at other network nodes, and receive input indicating a selected other participant client from the list. The first network node can receive messages containing a location in the workspace of a participant client viewport in use at the selected other participant client. Using the location of the participant client viewport, the first network node can update the location of the local client viewport to the identified location of the participant client viewport in use at the selected other participant client, and render on the screen space graphical targets having locations within the updated local client viewport. This implements an optional behavior for a client in the collaboration session that can be called “follow.”
A node for use in a collaboration system is described that comprises a display having a physical display space, a user input device, a processor and a communication port, the processor being configured with logic to implement the follow mode. The logic can be configured to:
The local client screen space in a network node has an aspect ratio, and a resolution including a number of pixels determined by the display and display driver at the network node. The resolution can be static in the sense that it is unchanging during changes of the local client viewport in the follow mode. The messages containing locations and changes of location include a specification of dimensions having an aspect ratio of the remote client viewport. The logic to change the local client viewport can define dimensions of the local client viewport as a function of differences between the aspect ratio of the local client screen space and the aspect ratio of the remote client viewport. In this way, the graphical objects rendered on remote client screen space using the remote client viewport are reproduced on the local client screen space using the local client viewport without clipping.
Also, the local client screen space has a screen space resolution, which can be static, including a number of pixels, and the dimensions of the local client viewport define a changeable resolution including a number of virtual pixels in the virtual workspace. The logic to compute a mapping determines a zoom factor based on differences between the static screen space resolution and the changeable resolution.
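As a hedged sketch of how this computation might look (the function and parameter names are illustrative assumptions, not taken from the disclosure), the local viewport can be sized to enclose the remote viewport at the local screen space's aspect ratio, with the zoom factor derived from the ratio of screen pixels to viewport units:

```python
def fit_viewport(remote_vp_w, remote_vp_h, screen_w_px, screen_h_px):
    """Derive local viewport dimensions (in workspace units) that enclose
    the remote viewport without clipping, while matching the local screen
    space's aspect ratio. Names are illustrative, not from the disclosure."""
    screen_aspect = screen_w_px / screen_h_px
    remote_aspect = remote_vp_w / remote_vp_h
    if screen_aspect >= remote_aspect:
        # Local screen is relatively wider: keep the height, widen the viewport.
        local_h = remote_vp_h
        local_w = remote_vp_h * screen_aspect
    else:
        # Local screen is relatively taller: keep the width, heighten the viewport.
        local_w = remote_vp_w
        local_h = remote_vp_w / screen_aspect
    # Zoom factor: screen pixels per workspace unit.
    zoom = screen_w_px / local_w
    return local_w, local_h, zoom
```

Because the derived viewport always encloses the remote one, the graphical objects visible at the followed node remain visible at the following node, possibly with extra margin along one axis.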
A node for use in a collaboration system is described that comprises a display having a physical display space, a user input device, a processor and a communication port, the processor being configured with logic to implement dynamic location marker creation, movement, searching and selection. The logic at a particular network node can be configured to:
The logic to determine the location and dimensions of the local client viewport includes logic to change the local client viewport in response to pointer or touch based user input gestures at the first network node indicating movement or zoom of the local client viewport.
The user interface displaying a list of location markers can also display a selectable entry for a default location for the workspace, with logic to update the location of the local client viewport to the default location upon selection of that entry.
The logic can be configured so that, in response to the selection of the selected location marker, the local client viewport location is changed without changing its dimensions in the virtual workspace.
The user interface displaying a list of location markers includes a search function based on querying location marker tags in embodiments described herein.
The actions usable in the event records in the log of events include creation, movement and deletion of location markers.
The network nodes can include logic to send messages to other network nodes, the messages identifying events including creation or movement of a location marker, and the logic is responsive to receipt of a message from a second network node to update the list of location markers.
In a described implementation, the first network node can accept input data from the user input device creating events relating to modification of the local client viewport, and create a client viewport data object defining the location of the local client viewport within the workspace. The first network node can send the client viewport data object to one or more other network nodes participating in the session, to support the follow option.
Also described is a system comprising a first network node providing a user interface displaying a list of location markers in the workspace in the session, where a location marker has a marked location in the workspace. The user interface can receive input indicating a selected location marker from the list. In response to selection of a selected location marker, the system can update the location of the local client viewport to the marked location and render on the screen space graphical targets having locations within the updated local client viewport.
In another implementation, a system can include a spatial event map system, comprising a data processor, and memory storing a spatial event map which locates events in a virtual workspace, and provides communication resources for collaborative displays.
Also described are methods of operating client nodes in a collaboration session, and non-transitory computer readable media storing computer programs that implement the logical functions described herein for a client in a collaboration session.
The above summary is provided in order to provide a basic understanding of some aspects of the collaboration system described herein. This summary is not intended to identify key or critical elements of the technology disclosed or to delineate a scope of the technology disclosed.
The included drawings are for illustrative purposes and serve to provide examples of structures and process operations for one or more implementations of this disclosure. These drawings in no way limit any changes in form and detail that can be made by one skilled in the art without departing from the spirit and scope of this disclosure. A more complete understanding of the subject matter can be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures. The technology disclosed will be described with respect to specific embodiments thereof, and reference will be made to the drawings, which are not drawn to scale, and in which:
The following description is presented to enable any person skilled in the art to make and use the technology disclosed and is provided in the context of particular applications and requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The “unlimited workspace” problem includes the need to track how people and devices interact with the workspace over time. In one implementation, this can be addressed by allowing a first system to follow the actions of a second system. In another implementation, a first system can save a location with a location marker and make the location marker available to a second system. In order to solve this core problem, a Spatial Event Map has been created, which includes a system architecture supporting collaboration using a plurality of spatial event maps and a plurality of collaboration groups. The Spatial Event Map contains information needed to define objects and events in a workspace. The Spatial Event Map comprises data structures specifying events having locations in a virtual collaboration space. The events, maps of events in the space, and access to the space by multiple users, including multiple simultaneous users, support collaboration from users around the globe.
Space: In order to support an unlimited amount of spatial information for a given collaboration session, a way is provided to organize a virtual space termed the workspace, which can for example be characterized by a two-dimensional Cartesian plane with essentially unlimited extent in one or both of the dimensions, in such a way that new content can be added to the space, that content can be arranged and rearranged in the space, that a user can navigate from one part of the space to another, and that a user can easily find needed things in the space when required.
Events: Interactions with the workspace are handled as events. People, via tangible user interface devices, and systems can interact with the workspace. Events have data that can define or point to a target graphical object to be displayed on a physical display, an action such as creation, modification, movement within the workspace or deletion of a target graphical object, and metadata associated with them. Metadata can include information such as originator, date, time, location in the workspace, event type, security information, and other metadata.
Tracking events in a workspace enables the system not only to present a workspace in its current state, but also to share it with multiple users on multiple displays, to share relevant external information that may pertain to the content, and to understand how the spatial data evolves over time. Also, the spatial event map can have a reasonable size in terms of the amount of data needed, while also defining an unbounded workspace.
There can be several different kinds of events in the system. Events can be classified as persistent events, also referred to as history events that are stored permanently, or for a length of time required by the system for maintaining a workspace during its useful life. Events can be classified as ephemeral events that are useful or of interest for only a short time and shared live among other clients involved in the session. Persistent events may include history events stored in an undo/playback event stream, which event stream can be the same as or derived from the spatial event map of a session. Ephemeral events may include events not stored in an undo/playback event stream for the system. A spatial event map, or maps, can be used by a collaboration system to track the times and locations in the workspace in some embodiments of both persistent and ephemeral events on workspaces in the system.
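One way this classification might be applied at a receiving node is sketched below. The two-letter type codes and the routing function are assumptions of this sketch, modeled on the array-style records of the API excerpts later in this description:

```python
def route_event(record, history_log, on_live):
    """Store persistent (history) events in the replayable log; forward
    ephemeral (volatile) events to a live handler only. The 'he' type code
    for history events is an assumption of this sketch; records are assumed
    to be arrays of the form [sender-id, type, ...]."""
    kind = record[1]
    if kind == "he":
        history_log.append(record)   # survives for undo/playback
    else:
        on_live(record)              # useful only while the session is live

log, live = [], []
# A persistent event (e.g. creating a graphical target) goes to the log.
route_event(["client-1", "he", "obj-9", "create"], log, live.append)
# An ephemeral viewport change is handled live and not replayed.
route_event(["client-1", "vc", [0, 0, 1000, 1000]], log, live.append)
```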
Map: A map of events in the workspace can include the sum total of discrete spatial events that relate to graphical objects having locations in the workspace. Events in the map of a workspace can be “mapped” to a physical display or screen that has a displayable area referred to herein as a screen space, of specific size. A client can specify a viewport in the workspace, having a location and a dimension in the workspace, and then map the events in the viewport area to the screen space for display.
Multi-User Access: One key characteristic is that all users, or multiple users, who are working on a workspace simultaneously, should be able to see the interactions of the other users in a near-real-time way. The spatial event map allows users having displays at different physical locations to experience near-real-time events, including both persistent and ephemeral events, within their respective viewports, for all users on any given workspace. The collaboration system architectures described herein enable operation of many workspaces and many user groups.
Widget: A widget is a component of a workspace that the user can interact with or view, e.g. Notes, Images, Clocks, Web Browsers, Video Players, Location Markers, etc. A Window is a widget that is a rectangular region with two diagonally opposite corners. Most widgets are also windows.
In a collaborative environment, it can be beneficial to see what others are working on within the environment. The technology disclosed allows a first network node to follow the events that occur on a second network node without any significant increase in network utilization. This can be accomplished by exchanging messages carrying simple text event records that can include JSON data structures or the like, rather than sending images between network nodes. The first network node receives descriptions of events from all other participating network nodes within a virtual workspace, and stores at least some of them in a local log. The first network node also creates its own events, and stores at least some of them in the local log. The first network node has a viewport into the virtual workspace that can include any number of graphical targets defined by the events. The first network node can render the objects described by the event records that have coordinates within its viewport, ignoring the event records describing events relating to graphical objects located outside of its viewport.
In one example, an operator of the first network node might be interested in watching the events that are happening within a viewport of the second network node as they occur, and as the viewport of the second network node is moved around in the workspace. The first network node can extract event information from the local log that describes the viewport of the second network node, and the graphical targets within the viewport, of the second network node, and render those graphical targets on a local screen space. In another example, an operator of the first network node might want to visit a place within the workspace that was saved by the second network node as a location marker. The first network node can extract event information from the local log that describes the workspace, to identify a location marker saved by the second network node, and then move its viewport to the location of the location marker.
An environment is illustrated by
In the illustrated example, the network node can include touch sensors on the screen space 105 that can perform as a user input device. The collaboration system client on the network node can access a local log file 111 that can store event records defining spatial event map or other type of data structure representing contents of a currently used workspace. In this example, a set of graphical targets 191, and a location marker 195 are displayed in the screen space 105.
A network node can generate an event to record the creation of a graphical target such as a text box, a location marker, a web page, or a viewport within a virtual workspace. The event can include the location of the graphical target within the virtual workspace, a time of the event, and a target identifier of the graphical target. The network node can then communicate the event to other network nodes participating in the workspace. Each participating network node can store the event in its local log 111, 161. In this example, an event exists in the local log 111, 161 for each of the events creating or modifying or moving the graphical targets 191, 193, the location markers 195, 197, and the viewports 175, 177 within the virtual workspace 165. The graphical targets of the events can be rendered on the screen space 105, 155 by a processor with logic to render the graphical targets.
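A minimal sketch of such an event record follows. The field names are hypothetical, since the disclosure specifies only that the event carries a location, a time, and a target identifier:

```python
import time

def make_create_event(sender_id, target_id, target_type, x, y):
    """Build a record for an event creating a graphical target. The field
    layout here is illustrative, not the disclosed wire format."""
    return {
        "sender": sender_id,
        "type": "create",
        "target": target_id,
        "targetType": target_type,       # e.g. "note", "marker", "viewport"
        "location": {"x": x, "y": y},    # coordinates in the virtual workspace
        "timestamp": time.time(),        # time of the event
    }

# A node creating a location marker would build a record like this,
# append it to its local log, and send it to the other participants.
event = make_create_event("client-1", "marker-195", "marker", -2500.0, 800.0)
```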
The processor includes logic to render graphical targets having locations in a viewport to the screen space, and to render only those graphical targets, or portions of graphical targets, that are within the boundaries of the viewport, using a zoom level that is a function of the local screen space resolution and the dimensions of the local client viewport.
A screen space can have a fixed aspect ratio, and fixed resolution in terms of pixels per line, and lines per screen. This aspect ratio and resolution can be used to compute the mapping of the viewport to the screen space. For example, a starting viewport in the workspace can include an array of 1000 points by 1000 lines. The screen space can have the same resolution of 1000 by 1000. However, if a user executes a zoom out operation, the screen space resolution remains the same, but the workspace resolution increases to for example 2000 points by 2000 lines. In this case, the graphical targets of the events in the larger viewport are scaled to fit within the smaller number of pixels in the screen space as a function of the zoom factor. Likewise, if the user executes a zoom in operation, the screen space resolution remains the same, but the workspace resolution decreases to for example 500 points by 500 lines. In this case, the graphical targets of the events in the smaller viewport are scaled to fit within the larger number of pixels in the screen space. A viewport can be specified by a location in the workspace, an aspect ratio of the client screen space, and a zoom level, or ratio of resolution of the viewport compared to that of the screen space.
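The scaling arithmetic in this example can be sketched as follows; the helper names are illustrative:

```python
def zoom_factor(screen_px, viewport_units):
    """Screen pixels per workspace unit along one axis, for a static
    screen resolution and a changeable viewport resolution."""
    return screen_px / viewport_units

def scale_to_screen(workspace_length, zoom):
    """Scale one dimension of a graphical target from workspace units
    to screen pixels as a function of the zoom factor."""
    return workspace_length * zoom

# Zoom out: the viewport grows to 2000x2000 units on a static 1000x1000
# screen, so a 200-unit-wide object shrinks on screen.
zoomed_out = scale_to_screen(200, zoom_factor(1000, 2000))

# Zoom in: the viewport shrinks to 500x500 units on the same screen,
# so the same object grows on screen.
zoomed_in = scale_to_screen(200, zoom_factor(1000, 500))
```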
This allows various devices such as mobile devices, computers, and walls to display respective viewports at a common zoom level and at aspect ratios that match the respective screen spaces. The technology disclosed allows clients to specify viewports independently, so that two viewports may overlap. In one example, a first user modifies a viewport so that it includes an area already included in the viewport of a second user. In this example, the viewports are independent of each other, and one viewport can be modified without affecting the other. In another example, a first user “follows” a second user, whereby the viewport of the first user is determined by the viewport specifications of the second user. In this case, if the screen spaces have the same aspect ratio and resolution, then the screen space of the first user displays the same graphical targets and can be like a replica of the screen space of the second user. In the case in which the aspect ratio and/or resolutions do not match, then the following node can translate the dimensions of the remote client viewport to a local client viewport based on the aspect ratio and resolution of the local client screen space.
A difference between
A display is a device comprised of an array of pixels. Complex displays, such as walls 1202, comprise multiple displays in a rectangular array, and have consecutive pixels in the X and Y coordinates managed by a controller. In one implementation, a display can have multiple windows, each window comprising a separate screen space.
For example, a workspace can have a set of objects laid out between coordinates x0=−10000, y0=4500 and x1=5200, y1=−1400, in abstract units. If a client wants to see that set of objects, then it defines a viewport with those coordinates, and then renders that viewport within its screen space, mapping the abstract units of the workspace to the physical units of displayable pixels. If the client wants to see more of the workspace, they can zoom out so that more distributed x0, y0, x1, y1 coordinates of the viewport map to the available space in the screen space. If they want to see a smaller area of the workspace, they zoom in to whatever x0′, y0′, x1′, y1′ coordinates they want, and those coordinates are mapped to the screen space of the client. In rendering the viewport to the screen space, scaling of the contents of the viewport can be accomplished through standard graphical scaling techniques.
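Using the example coordinates above, such a mapping might be sketched with ordinary linear interpolation, one of the standard scaling techniques mentioned; here y0 is treated as the top edge of the viewport, as in the example, and the screen dimensions are chosen to match the viewport's aspect ratio:

```python
def workspace_to_screen(px, py, viewport, screen_w, screen_h):
    """Map a workspace point (px, py) inside viewport = (x0, y0, x1, y1),
    given in abstract workspace units, to pixel coordinates in the screen
    space. y0 is assumed to be the top edge and y1 the bottom, as in the
    example above; a real renderer might delegate this to its platform's
    transform APIs."""
    x0, y0, x1, y1 = viewport
    sx = (px - x0) / (x1 - x0) * screen_w
    sy = (py - y0) / (y1 - y0) * screen_h
    return sx, sy

# The viewport from the example, and a screen space with the same
# aspect ratio (15200 x 5900 workspace units onto 1520 x 590 pixels).
vp = (-10000, 4500, 5200, -1400)
top_left = workspace_to_screen(-10000, 4500, vp, 1520, 590)
bottom_right = workspace_to_screen(5200, -1400, vp, 1520, 590)
```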
A change to a viewport is characterized as an event which, in this example, causes a “vc” (Viewport Change) record to be created, stored in the local log, and communicated to other participant clients in the collaborative workspace. An example of a “vc” record for use in an API like that described below, is represented as follows:
//server<-->client
[sender-id, “vc”, viewport-rect]
Other network nodes participating in the collaborative workspace can receive the “vc” record, where it can be stored in the local log. The other network nodes will not act on the “vc” record unless they invoke logic to follow the network node that generated the “vc” record.
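A hedged sketch of that conditional handling might look like the following; the function names, and the assumption that records are arrays of the form [sender-id, type, payload], are illustrative:

```python
def on_vc_record(record, local_log, follow_target, apply_viewport):
    """Handle an incoming 'vc' (Viewport Change) record: always store it
    in the local log, but move the local client viewport only when the
    sender is the node currently being followed."""
    sender_id, kind, viewport_rect = record
    if kind != "vc":
        return
    local_log.append(record)
    if follow_target is not None and sender_id == follow_target:
        apply_viewport(viewport_rect)

log, applied = [], []
# While following "node-2", its record is logged and acted on.
on_vc_record(["node-2", "vc", [0, 0, 1000, 800]], log, "node-2", applied.append)
# A record from another node is logged but not acted on.
on_vc_record(["node-3", "vc", [50, 50, 500, 400]], log, "node-2", applied.append)
```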
The first screen space 450 has a first aspect ratio, and displays objects having locations within the second viewport 420A (set of graphical targets 425). The second viewport has a matching aspect ratio. The dimensions of the viewport in virtual space in the workspace can be a multiple of the aspect ratio that is different from the dimensions in pixels in the screen space 450. The multiple of the aspect ratio establishes a zoom level in the first screen space 450, and this multiple can be changed by the local client.
The second screen space 475 has a second aspect ratio and uses a viewport 420B having the same aspect ratio as the second screen space, and encompasses the first viewport 420A. The viewport 420B has dimensions determined using the viewport change messages from the client operating the first screen space, and the parameters of the second screen space 475. In this example, it has the same zoom level and same center point as the first viewport 420A, and thus displays objects having locations within the viewport 420B (set of graphical targets 425), and has an aspect ratio that matches the second screen space. The graphical target 415A within the virtual workspace 410 exists outside of 420A but overlaps viewport 420B. Thus, graphical target 415A is not rendered on the first screen space 450 and is partially rendered on the second screen space 475. Graphical target 415B is outside the viewports 420A, 420B and 430; and thus, does not appear in the screen space 450 or screen space 475.
In this implementation, the participating network nodes execute a graphical user interface that includes an object such as a menu tab 482 labeled “Users” in this example. The menu tab can be positioned in or next to the screen space 475. (In this illustration the screen space 475 is a small window on the display, for the purposes of the discussion of aspect ratio matching. In other systems, the screen space can include a different portion of the displayable area on the display, including all of the displayable area.) The selection of the menu tab causes a user menu to appear, where the user menu lists the identity of clients, preferably all clients, participating in the collaborative workspace. In this example, the participants in the collaborative workspace are a First Node 484 and a Second Node 486. The First Node 484 is the username associated with the screen space 450. The Second Node 486 is the username associated with the screen space 475 of
The client can include logic that, upon selection of an item from the user menu list, causes a “ve” (Volatile Event) of type “bf” (Begin Follow) to be generated using a template from the API described below, like the following:
//server<-->client
[client-id, “ve”, target-id, event-type, event-properties]
Volatile Event Types that can be used in this template relating to the follow mode can include:
First Node 484 is identified by the client-id, and is associated with the first screen space 450, and Second Node 486 is identified by the target-id and is associated with the second screen space 475 of
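Constructing such a record from the template might be sketched as follows; the empty properties object is an assumption of this sketch, and the identifiers shown are hypothetical:

```python
def begin_follow(client_id, target_id):
    """Build a volatile 'begin follow' record in the array style of the
    API excerpt above: [client-id, "ve", target-id, event-type,
    event-properties]. The empty properties dict is an assumption."""
    return [client_id, "ve", target_id, "bf", {}]

# The node associated with the second screen space announces that it is
# beginning to follow the node associated with the first screen space.
msg = begin_follow("second-node-486", "first-node-484")
```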
In this implementation, the Second Node 486 with screen space 475 chooses to follow the First Node 484 with screen space 450. The Second Node 486's network node can create and transmit a “ve” record of type “bf” to notify the collaboration session that it is beginning to follow the First Node 484. The Second Node 486 can read through its local log to find the most recent viewport change “vc” record from the First Node 484. The specifications of the First Node 484 viewport 420A from the “vc” record are used to produce specifications for the viewport 420B of the Second Node 486. The Second Node 486 can then use the local viewport 420B to find graphical targets 425 (See,
Any changes in the viewport 420A, such as location or zoom level, of the First Node 484 are shared in the session, and are used to update the local viewport 420B of the Second Node 486, as long as the Second Node 486 follows the First Node 484.
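One way the following node might locate the most recent viewport of the followed node in its local log is sketched below; the log layout, as arrival-ordered arrays of the form [sender-id, type, payload], is an assumption of this sketch:

```python
def latest_viewport_of(node_id, local_log):
    """Scan the local log backwards for the most recent 'vc' record from
    the followed node. Returns its viewport rectangle, or None if that
    node has not yet published a viewport change."""
    for record in reversed(local_log):
        if record[0] == node_id and record[1] == "vc":
            return record[2]
    return None

log = [
    ["node-1", "vc", [0, 0, 1000, 1000]],
    ["node-2", "vc", [100, 100, 600, 500]],
    ["node-1", "vc", [200, 0, 1200, 1000]],   # most recent from node-1
]
```

Scanning backwards means the first match found is the newest record, so the follower adopts the followed node's current viewport rather than a stale one.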
In this example, a first network node establishes communication with one or more other network nodes 510 that are participating in a collaborative workspace. This can include logging in to a session at a server, and communication through the server to the other network nodes, or other communication pathways.
For initialization, the first network node receives collaboration data, such as a spatial event map in the form of a log of the historic events that have occurred within the workspace, or other data structure that identifies the location of graphical objects in the workspace. The first network node also receives, for example as a part of the log of historic events or otherwise, a list of network nodes participating in the session. The first network node can store the collaboration data identifying graphical targets having locations in a virtual workspace used in the workspace session, the collaboration data including locations in a virtual workspace of respective graphical targets 520. As part of this initialization, the first network node can instantiate an initial “vc” (Viewport Change) record and communicate this record to other participant clients in the workspace. The initial “vc” record can be retrieved from the log from a previous session. The initial “vc” record can also be a default “vc” record within the workspace, or can be instantiated by some other method, such as logic to identify the most used area of the workspace. The first network node maps the local client viewport having a location and dimensions within the virtual workspace to a local client screen space 530. In order to set the “follow” flag, a first subject user associated with the first network node is provided a user interface displaying a list of participants in the session at other network nodes 540. The first subject user can then select a target user from the list of identified users. The first network node creates a “ve:bf” record and communicates the record to the participating network nodes of the workspace, which can include the network node of the selected user. The first network node is now “following” the selected network node.
The first network node continues to receive messages from other participants in the session, including messages containing a location in the workspace of a participant client viewport 550 in use at the selected other participant client. The first network node only renders those graphical targets that appear within the coordinates of the viewport of the selected user. As the selected other participant client changes the viewport of the selected network node, the first network node can update the location of the local client viewport to the identified location of the participant client viewport in use at the selected other participant client 560.
The first network node can then render on the screen space graphical targets having locations within the updated local client viewport 570.
The technology disclosed can render the contents of the viewport onto the local screen space accounting for possible differences in attributes such as aspect ratio, height, and width of the local screen space. Changes to the virtual workspace, such as the addition, deletion, or modification of graphical targets that intersect the viewport of the selected user are also rendered on the screen space of the first network node.
A third user can also set a follow flag within their network node to follow the first network node, the selected user's network node, or some other network node. At any point, the first user can unset the follow flag and return to their original viewport.
In this example, a plurality of clients within a workspace can be a subject network node or a target network node for the “follow method”. In one implementation, a network node can follow multiple targets, and can both follow a target in one viewport as well as collaborate within a second viewport simultaneously.
Initially, in this implementation, a second client-side network node 619 is participating in a collaborative workspace. The second client-side network node 619 can create events that are shared with other network nodes through a spatial event map on a server-side network node 615. The second client-side network node 619 can also receive events from other client-side network nodes through the server-side network node 615. The transmission of events occurs through a communication of events 620 between one or more client-side network nodes and the server-side network node. The server-side node distributes the events to other participating network nodes in this example.
In this example, the first client-side network node 611 joins the collaborative workspace by establishing communications with the server-side network node 630. The server-side network node sends the collaboration data, including a user list, the viewport change records of the network nodes, the spatial event map, and location markers, to the first client-side network node 640. The first client-side network node then stores the collaboration data to a local log. The first client-side network node 611 sets an initial viewport, as described in
The first client-side network node 611 and the second client-side network node 619 can both create, transmit, and receive events within the workspace, and can view events that have occurred within their viewports. Events can be communicated to all participating network nodes through the server-side network node 650. The technology disclosed allows a first user to follow a second user. In this example, a first client-side network node 611 can select from the list of users in its local log the identity of a target user working on a second client-side network node 619. This selection causes the first client-side network node to create a begin following “ve:bf” record, which can be communicated to the second client-side network node through the server-side network node 660. Events are still communicated to all participating network nodes. However, the first client-side network node produces a local client viewport that encompasses the viewport specified at the second client-side network node, and only displays those events that intersect the updated local client viewport, and in effect follows the viewport of the second client-side network node. Events can be generated by the second client-side network node 619, or by a third client-side network node not shown in
In this example, the second client-side network node 619 can change its viewport. The change in viewport causes the second client-side network node to generate a new viewport change “vc” record and transmit that record to other participants in the workspace 680. The first client-side network node, while following the second client-side network node, changes its local client viewport to match, then renders the graphical targets from its log that intersect the new local client viewport.
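Rendering after a viewport change reduces to filtering the local log for graphical targets whose locations intersect the updated local client viewport. A minimal Python sketch for illustration (the (x, y, w, h) rect form and the `"rect"` log field are assumptions, not from the disclosure):

```python
def intersects(a, b):
    """Axis-aligned rectangle overlap test; rects are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def targets_to_render(log, viewport):
    """Select the logged graphical targets whose locations intersect
    the (updated) local client viewport."""
    return [entry for entry in log if intersects(entry["rect"], viewport)]
```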
In another viewport finding technology, events that set location markers having locations in the workspace can be shared in a spatial event map, or other kind of collaboration data structure. A first user at a first network node can select a location marker set by any participant at any time. A location marker can be used as a record of where a second user has been, rather than where they are now.
As illustrated in
The location marker names are searchable tags that can reflect content of the material having locations in the virtual workspace. In a workspace having large numbers of location markers, the user interface can include a search function based on querying the location marker tags.
The client running in the network node that creates a new location marker includes logic to send messages to other network nodes, the messages identifying events including creation or movement of a location marker. Also, the logic in the client running at the network node, and at other network nodes in the collaboration system, is responsive to receipt of a message carrying the marker create or marker move events from another (second) network node to update the list of location markers.
In this example, a first network node establishes communication with one or more other network nodes as a participant client in a collaborative workspace session 1010. The first network node stores at least part of a log of events relating to graphical targets, including creation or movement of location markers. The targets of the events have locations in a virtual workspace used in the workspace session, entries in the log including a location in the virtual workspace of a graphical target of an event, an action related to the graphical target, a time of the event, and a target identifier of the graphical target 1020. The first network node maps a local client viewport to a local client screen space in the physical display space at the first network node 1030. The network node provides a user interface displaying a list of location markers in the workspace in the session, a location marker having a marked location in the workspace, and for receiving input indicating a selected location marker from the list 1040.
In response to selection of a selected location marker, the local network node updates the location of the local client viewport to the marked location 1050 and renders on the screen space graphical targets having locations within the updated local client viewport 1060.
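The jump described at steps 1050 and 1060 amounts to re-centering the local client viewport on the marked location. An illustrative Python sketch (the rect tuple form and point-valued marker location are assumptions for illustration):

```python
def jump_to_marker(viewport, marker_location):
    """Re-center the local client viewport on a marker's (x, y) marked
    location, keeping the viewport's current width and height."""
    _, _, w, h = viewport
    mx, my = marker_location
    return (mx - w / 2, my - h / 2, w, h)
```

The graphical targets within the returned viewport would then be rendered on the screen space, as in the follow-me case.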
Initially, in this implementation, a second client-side network node 1119 is participating in a collaborative workspace. The second client-side network node 1119 can create events that are shared with other network nodes through a spatial event map on a server-side network node 1115. The second client-side network node 1119 can also receive events from other client-side network nodes through the server-side network node 1115. The transmission of events occurs through a communication of events 1120 between one or more client-side network nodes and the server-side network node.
In this example, the first client-side network node 1111 joins the collaborative workspace by establishing communication with the server-side network node 1130. The server-side network node sends the collaboration data, including graphical targets such as location markers, to the first client-side network node 1140. The first client-side network node then stores the collaboration data to a local log. The first client-side network node 1111 sets an initial viewport, as described in
The first client-side network node 1111 and the second client-side network node 1119 can both create, transmit, and receive events within the workspace, and can view events that have occurred within their viewports. Events can be communicated to all participating network nodes through the server-side network node 1150. The technology disclosed allows a first user to create a location marker that is shared with a second user. In this example, a first client-side network node 1111 can add a location marker, as illustrated in
As used herein, a physical network node is an active electronic device that is attached to a network, and is capable of sending, receiving, or forwarding information over a communication channel. Examples of electronic devices which can be deployed as network nodes include all varieties of computers, workstations, laptop computers, handheld computers, and smart phones. As used herein, the term “database” does not necessarily imply any unity of structure. For example, two or more separate databases, when considered together, still constitute a “database” as that term is used herein.
The application running at the collaboration server 1305 can be hosted using Web server software such as Apache or nginx, or a runtime environment such as node.js. It can be hosted for example on virtual machines running operating systems such as LINUX. The server 1305 is illustrated, heuristically, in
The database 1306 stores, for example, a digital representation of workspace data sets for a spatial event map of each session where the workspace data set can include or identify events related to objects displayable on a display canvas. A workspace data set can be implemented in the form of a spatial event stack, managed so that at least persistent spatial events (called historic events) are added to the stack (push) and removed from the stack (pop) in a first-in-last-out pattern during an undo operation. There can be workspace data sets for many different workspaces. A data set for a given workspace can be configured in a database, or as a machine-readable document linked to the workspace. The workspace can have unlimited or virtually unlimited dimensions. The workspace data includes event data structures identifying objects displayable by a display client in the display area on a display wall and associates a time and a location in the workspace with the objects identified by the event data structures. Each device 1202 displays only a portion of the overall workspace. A display wall has a display area for displaying objects, the display area being mapped to a corresponding area in the workspace that corresponds to a viewport in the workspace centered on, or otherwise located with, a user location in the workspace. The mapping of the display area to a corresponding viewport in the workspace is usable by the display client to identify objects in the workspace data within the display area to be rendered on the display, and to identify objects to which to link user touch inputs at positions in the display area on the display.
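The spatial event stack described above pushes persistent (historic) events in order and pops the most recent one during an undo, a first-in-last-out (LIFO) pattern. A minimal illustrative sketch in Python:

```python
class SpatialEventStack:
    """Illustrative sketch of the spatial event stack: persistent
    (historic) events are pushed in order; an undo operation pops the
    most recently added event (first-in-last-out)."""

    def __init__(self):
        self._events = []

    def push(self, event):
        self._events.append(event)

    def undo(self):
        """Remove and return the most recent historic event, if any."""
        return self._events.pop() if self._events else None
```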
The server 1305 and database 1306 can constitute a server-side network node, including memory storing a log of events relating to graphical targets having locations in a workspace, entries in the log including a location in the workspace of the graphical target of the event, a time of the event, and a target identifier of the graphical target of the event. The server can include logic to establish links to a plurality of active client-side network nodes, to receive messages identifying events relating to modification and creation of graphical targets having locations in the workspace, to add events to the log in response to said messages, and to distribute messages relating to events identified in messages received from a particular client-side network node to other active client-side network nodes.
The logic in the server 1305 can comprise an application program interface, including a specified set of procedures and parameters, by which to send messages carrying portions of the log to client-side network nodes, and to receive messages from client-side network nodes carrying data identifying events relating to graphical targets having locations in the workspace.
Also, the logic in the server 1305 can include an application interface including a process to distribute events received from one client-side network node to other client-side network nodes.
The events compliant with the API can include a first class of event (history event) to be stored in the log and distributed to other client-side network nodes, and a second class of event (ephemeral event) to be distributed to other client-side network nodes but not stored in the log.
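The two event classes differ only in whether they are persisted: history events are both logged and distributed, while ephemeral events are distributed without being logged. An illustrative Python sketch of this routing (the dict shape and `"class"` field are assumptions for illustration):

```python
# Illustrative routing of API events by class. History ("he") events are
# logged and distributed; ephemeral/volatile ("ve") events are
# distributed to the other client-side network nodes but not logged.
HISTORY, EPHEMERAL = "he", "ve"

def handle_event(event, log, broadcast):
    """`event` is a dict with a "class" field; `broadcast` is a callable
    that sends the event to the other active client-side network nodes."""
    if event["class"] == HISTORY:
        log.append(event)   # persisted in the spatial event map log
    broadcast(event)        # both classes reach the other clients
```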
The server 1305 can store workspace data sets for a plurality of workspaces and provide the workspace data to the display clients participating in the session. The workspace data is then used by the computer systems 1310 with appropriate software 1312 including display client software, to determine images to display on the display, and to assign objects for interaction to locations on the display surface. The server 1305 can store and maintain a multitude of workspaces, for different collaboration sessions. Each workspace can be associated with a group of users and configured for access only by authorized users in the group.
In some alternatives, the server 1305 can keep track of a “viewport” for each device 1202, indicating the portion of the canvas viewable on that device, and can provide to each device 1202 data needed to render the viewport.
Application software running on the client device, responsible for rendering drawing objects, handling user inputs, and communicating with the server, can be based on HTML5 or other markup-based procedures and run in a browser environment. This allows for easy support of many different client operating system environments.
The user interface data stored in database 1306 includes various types of objects including graphical constructs, such as image bitmaps, video objects, multi-page documents, scalable vector graphics, and the like. The devices 1202 are each in communication with the collaboration server 1305 via a communication network 1304. The communication network 1304 can include all forms of networking components, such as LANs, WANs, routers, switches, WiFi components, cellular components, wired and optical components, and the internet. In one scenario two or more of the users 1201 are located in the same room, and their devices 1202 communicate via WiFi with the collaboration server 1305. In another scenario two or more of the users 1201 are separated from each other by thousands of miles and their devices 1202 communicate with the collaboration server 1305 via the internet. The walls 1202c, 1202d, 1202e can be multi-touch devices which not only display images, but also can sense user gestures provided by touching the display surfaces with either a stylus or a part of the body such as one or more fingers. In some embodiments, a wall (e.g. 1202c) can distinguish between a touch by one or more fingers (or an entire hand, for example), and a touch by the stylus. In an embodiment, the wall senses touch by emitting infrared light and detecting light received; light reflected from a user's finger has a characteristic which the wall distinguishes from ambient received light. The stylus emits its own infrared light in a manner that the wall can distinguish from both ambient light and light reflected from a user's finger. The wall 1202c may, for example, be an array of Model No. MT553UTBL MultiTaction Cells, manufactured by MultiTouch Ltd, Helsinki, Finland, tiled both vertically and horizontally. 
In order to provide a variety of expressive means, the wall 1202c is operated in such a way that it maintains “state.” That is, it may react to a given input differently depending on (among other things) the sequence of inputs. For example, using a toolbar, a user can select any of a number of available brush styles and colors. Once selected, the wall is in a state in which subsequent strokes by the stylus will draw a line using the selected brush style and color.
In an illustrative embodiment, a display array can have a displayable area usable as a screen space totaling on the order of 6 feet in height and 30 feet in width, which is wide enough for multiple users to stand at different parts of the wall and manipulate it simultaneously.
Events can be classified as persistent history events and as ephemeral events. Processing of the events for addition to workspace data and sharing among users can be dependent on the classification of the event. This classification can be inherent in the event type parameter, or an additional flag or field can be used in the event data structure to indicate the classification.
A spatial event map can include a log of events having entries for history events, where each entry comprises a structure such as illustrated in
The system can encrypt communications with client-side network nodes and can encrypt the database in which the spatial event maps are stored. Also, on the client-side network nodes, cached copies of the spatial event map are encrypted in some embodiments, to prevent unauthorized access to the data by intruders who gain access to the client-side computers.
The physical hardware components of network interfaces are sometimes referred to as network interface cards (NICs), although they need not be in the form of cards: for instance, they could be in the form of integrated circuits (ICs) and connectors fitted directly onto a motherboard, or in the form of macrocells fabricated on a single integrated circuit chip with other components of the computer system.
User interface input devices 1522 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touch screen incorporated into the display (including the touch sensitive portions of large format digital display 1202c), audio input devices such as voice recognition systems, microphones, and other types of tangible input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into the computer system or onto communication network 1304.
User interface output devices 1520 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include a cathode ray tube (CRT), a flat panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. In the embodiment of
Storage subsystem 1524 stores the basic programming and data constructs that provide the functionality of certain embodiments of the technology disclosed. The storage subsystem 1524 includes computer program instructions implementing a spatial event map collaboration system client, or a spatial event map collaboration system server, and can include logic for modules such as “follow me” and “location marker”.
The storage subsystem 1524 when used for implementation of server side network-nodes, comprises a product including a non-transitory computer readable medium storing a machine readable data structure including a spatial event map which locates events in a workspace, wherein the spatial event map includes a log of events, entries in the log having a location of a graphical target of the event in the workspace and a time. Also, the storage subsystem 1524 comprises a product including executable instructions for performing the procedures described herein associated with the server-side network node.
The storage subsystem 1524 when used for implementation of client side network-nodes, comprises a product including a non-transitory computer readable medium storing a machine readable data structure including a spatial event map in the form of a cached copy as explained below, which locates events in a workspace, wherein the spatial event map includes a log of events, entries in the log having a location of a graphical target of the event in the workspace and a time. Also, the storage subsystem 1524 comprises a product including executable instructions for performing the procedures described herein associated with the client-side network node.
For example, the various modules implementing the functionality of certain embodiments of the technology disclosed may be stored in storage subsystem 1524. These software modules are generally executed by processor subsystem 1514.
Memory subsystem 1526 typically includes a number of memories including a main random-access memory (RAM) 1530 for storage of instructions and data during program execution and a read only memory (ROM) 1532 in which fixed instructions are stored. File storage subsystem 1528 provides persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD ROM drive, an optical drive, or removable media cartridges. The databases and modules implementing the functionality of certain embodiments of the technology disclosed may have been provided on a computer readable medium such as one or more CD-ROMs and may be stored by file storage subsystem 1528. The host memory subsystem 1526 contains, among other things, computer instructions which, when executed by the processor subsystem 1514, cause the computer system to operate or perform functions as described herein. As used herein, processes and software that are said to run in or on “the host” or “the computer,” execute on the processor subsystem 1514 in response to computer instructions and data in the host memory subsystem 1526 including any other local or remote storage for such instructions and data.
Bus subsystem 1512 provides a mechanism for letting the various components and subsystems of a computer system communicate with each other as intended. Although bus subsystem 1512 is shown schematically as a single bus, alternative embodiments of the bus subsystem may use multiple busses.
The computer system itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, or any other data processing system or user device. In one embodiment, a computer system includes several computer systems, each controlling one of the tiles that make up the large format display 1202c. Due to the ever-changing nature of computers and networks, the description of computer system 1310 depicted in
Aspects of an application program interface (API) supporting use of spatial event maps, with follow me and location marker functions, are set out here as examples of technology to implement the techniques described herein.
Socket Requests Server (WebSockets)—used for updating clients with relevant data (new strokes, cards, clients, etc.) once connected. Also handles the initial connection handshake.
Service Requests Server (HTTPS/REST)—used for cacheable responses, as well as posting data (e.g. images and cards)
Client-side network nodes are configured according to the API and include corresponding socket requests clients and service requests clients.
All messages are individual UTF-8 encoded JSON arrays. For example:
[sender-id, message-type, ...]
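For illustration, encoding and decoding such messages is straightforward in any language with a JSON library; a Python sketch (helper names are hypothetical):

```python
import json

def encode_message(sender_id, message_type, *args):
    """Messages are individual UTF-8 encoded JSON arrays:
    [sender-id, message-type, ...]."""
    return json.dumps([sender_id, message_type, *args]).encode("utf-8")

def decode_message(raw):
    """Split a raw message into its sender-id, two-character type code,
    and remaining type-specific arguments."""
    sender_id, message_type, *args = json.loads(raw.decode("utf-8"))
    return sender_id, message_type, args
```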
Establishing a Connection
Clients use the Configuration Service to retrieve the configuration information for the environment. The socket server URL is provided by the ws_collaboration_service_address key.
1) To Open the WebSocket and Join a Specific Workspace
<collaboration_service_address>/<workspaceId>/socket?device=<device>&array=<array_name>
2) To Join the Lobby
The lobby allows a client to open a web socket before it knows the specific workspace it will display. The lobby also provides a 5-digit PIN which users can use to send a workspace to the wall from their personal device (desktop/iOS).
<collaboration_service_address>/lobby/socket?device=<device>&array=<array_name>
3) Server Response
When a client establishes a new web-socket connection with the server, the server first chooses a unique client ID and sends it to the client in an “id” message.
4) Message Structure
The first element of each message array is a sender-id, specifying the client that originated the message. Sender-ids are unique among all sessions on the server. The id and cr messages sent from the server to the client have their sender-id set to a default value, such as −1. The second element of each message array is a two-character code. This code defines the remaining arguments in the array as well as the intended action. Messages sent with a sender-id of −1 are messages that originate from the server.
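A client-side message dispatcher built on this structure keys handlers by the two-character code and checks the sender-id for server-originated messages. An illustrative Python sketch (handler signatures are hypothetical):

```python
SERVER_SENDER_ID = -1  # default sender-id for server-originated messages

def dispatch(message, handlers):
    """The first element of each message array is the sender-id; the
    second is the two-character code that selects the handler. The
    remaining elements are code-specific arguments."""
    sender_id, code, *args = message
    from_server = sender_id == SERVER_SENDER_ID
    return handlers[code](sender_id, from_server, *args)
```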
Message Types
The following message types are officially supported. Since Spring 2013 there has been an effort to use the he and ve message types when possible, instead of adding new top-level message types.
1) cs Change Session
Informs a client or siblings in a display array that the workspace has changed. The server sends this message to the client when it receives a request to send a workspace to a wall.
//server-->client
[sender-id, “cs”, workspaceId]
2) echo Echo
Echoes an optional body back to the originating client. Used to verify that the socket connection and the server are still healthy.
After “echo” the message can take any arguments. They will all be echoed back to the client unaltered if the service and the client's socket connection are healthy. When using the echo message to verify socket health we recommend the following:
This message was added to the protocol because the current implementation of WebSockets in Chrome and other supported browsers do not correctly change readyState or fire onclose when the network connection dies.
3) error Error
Informs clients of an error on the server side.
//server->client
[“−1”, “error”, target-id, message]
This message is only sent by the server and currently only used when an upload fails during asynchronous processing.
4) id Client Id
The server sends this message when the client connects to the socket. Clients are required to store the assigned client ID for use in subsequent socket requests.
//server-->client
[“−1”, “id”, client-id]
5) jr Join Room
Rooms are communication channels that a client can subscribe to. There can be channels for specific workspaces (sessions), for display arrays, or potentially for other groups of clients. The server repeats/sends messages to all the clients connected to a specific room as events happen in that room. A client joins a room to get messages for that display array or workspace (session). There are several types of join room requests.
General jr
Join any room if you know the id.
//server<--client
[sender-id, “jr”, room-id, [data]]
Lobby jr
Joins the lobby channel. Used by clients that wish to keep a web socket open while not displaying a workspace.
//server<--client
[sender-id, “jr”, “lobby”]
Session jr
Joins the room for a specific workspace (session).
//server<--client
[sender-id, “jr”, “session”, workspace-id]
Array jr
Joins the room for a specific display array.
The server responds to successful room join (jr) messages with a room message.
General Room
//server-->client
[“−1”, “room”, [room-id], [databag]]
6) rl Room List
Informs the client of the room memberships. Room memberships include information regarding clients visiting the same room as you.
//server-->client
[“−1”, “rl”, roomMembershipList]
(Deprecated)
7) un Undo
Undoes the last undo-able action (move, set text, stroke, etc.).
Undo Example: Move a Window and then Undo that Move
The following example shows a move, and how that move is undone.
The server removes the history event from the workspace history and notifies all clients subscribed to the room that this record will no longer be a part of the workspace's historical timeline. Future requests of the history via the HTTP API will not include the undone event (until we implement redo).
8) up User Permissions
Gets the permissions that the current user has for this workspace. Only relevant when a client is acting as an agent for a specific user; not relevant when using public key authentication (walls).
//server-->client
[sender-id, “up”, permissions]
9) vc Viewport Change
Updates other clients in a session that one client's viewport has changed. This is designed to support the “jump to user's view” and “follow me” features. Clients send a viewport change “vc” event upon entering a session for the first time. This ensures that other clients will be able to follow their movements. When processing incoming viewport change “vc” events, clients keep a cache of viewports, keyed by client ID, in order to handle occasions where room list membership (rl) events with missing viewports arrive after associated vc events. A change in a target viewport to a revised target viewport can include a change in the size of the viewport in one or the other dimension or both, which does not maintain the aspect ratio of the viewport. A change in a target viewport can also include a change in the page zoom of the viewport. When a subject client-side viewport in “jump to user's view” or “follow-me” mode receives a first “vc” record, it is an instruction for mapping a displayable area of the subject client-side viewport to the area of a target viewport. A subsequent “vc” record results in a remapped displayable area of the subject client-side viewport to the target viewport. When the “jump to user's view” or “follow me” feature is disabled, the subject client-side viewport returns to its prior window.
//server<-->client
[sender-id, “vc”, viewport-rect]
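The client-side cache of viewports keyed by client ID, used to reconcile late-arriving room list (rl) events with earlier “vc” events, can be sketched as follows in Python (illustrative only; method and field names are hypothetical):

```python
class ViewportCache:
    """Illustrative cache of viewports keyed by client ID, so a room
    list (rl) event arriving with missing viewports does not discard
    viewports learned from earlier vc events."""

    def __init__(self):
        self._viewports = {}

    def on_vc(self, client_id, viewport_rect):
        self._viewports[client_id] = viewport_rect

    def on_rl(self, members):
        """`members` maps client-id -> viewport rect, or None when the
        rl event carries no viewport for that client."""
        for client_id, viewport in members.items():
            if viewport is not None:
                self._viewports[client_id] = viewport

    def viewport_of(self, client_id):
        return self._viewports.get(client_id)
```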
10) he History Event
History events are pieces of data that are persisted to the database. Any information that is necessary for recreating a visual workspace should be sent to the collaboration service via these messages.
When the server receives a history event it does the following:
All properties included in a message will be stored on the server and echoed back to clients. They will also be included in the history sent over http.
//server-->client
[client-id, “he”, target-id, event-id, event-type, event-properties]
In order to ensure ordering of tightly coupled events, many can be sent in a batch message by changing the event payload to be an array of event payloads.
//server<--client
[client-id, “bhe”, [event1, event2, event3, event4]]
In this case, each event is a packet sent as a standard web socket history message.
The event structure is:
[targetId, eventType, props]
So, the clientId portion is not repeated, but all else is as a standard event.
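Expanding a batch message back into standard history events can be sketched as follows (illustrative Python; note that server-assigned event-ids are not present in the batched payloads, so they are not reconstructed here):

```python
def unpack_batch(message):
    """Expand a batch history event message
    [client-id, "bhe", [[targetId, eventType, props], ...]]
    into standard per-event history messages. The clientId is shared
    across the batch; event-ids are assigned by the server and are
    therefore absent from the client-side batch payloads."""
    client_id, code, events = message
    assert code == "bhe"
    return [[client_id, "he", target_id, event_type, props]
            for target_id, event_type, props in events]
```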
Current History Event Types
History Event Details
Comments
Clients send ‘create’ to the collaboration server to add a widget to a workspace. For ‘create’ messages the target-id is the id of the containing element, usually the workspace-id.
Generic Widget Create Example
Most widgets will also have a location property, usually a rect and order, but potentially a point or other representation.
Card Create Example
PDF Create Example
Group Create Example
Replaces the target object's children. Used for grouping items.
Removes a widget from a workspace.
position
Used to save the position of a widget after a move, fling, resize, etc.
Generic Widget Position Example
template
Used to change the template for a note. This allows changing the background color.
Note Template Example
//server-->client
[client-id, “he”, workspace-id, event-id, “template”, {“baseName”: “sessions/all/Beige” }]
pin
Used to pin a widget and prevent users from moving or resizing that widget. Also used to remove an existing pin.
Generic Widget Pin Example
//server-->client
[client-id, “he”, workspace-id, event-id, “pin”, {“pin”: true}]
stroke
Used to add a stroke to a widget or to the workspace background.
Generic Stroke Example
Rendering note: strokes should be rendered with end caps centered on the first and last points of the stroke. The end cap's diameter should be equal to the brush size. Rendering end caps is the responsibility of each client.
text
Set the text and style of text on a widget. Both the text attribute and style attribute are optional.
Generic Text Example
markercreate
Creates a location marker (map marker, waypoint) at a specific place in the workspace.
Alternative Form Accepted by Browser Client
markermove
Moves an existing location marker (map marker, waypoint) to a new place in the workspace.
Alternative Form Accepted by Browser Client
markerdelete
Delete an existing location marker.
tsxappevent
TSXappevent sends a history event to various widgets on the tsx system.
Example of Creating a Web Browser
Example of Deleting a Web Browser
navigate
Example of navigating to a different item in the payload. One could use this, for example, for a browser widget navigating to a URL.
11) ve Volatile Event
Volatile events are not recorded in the database, so they are well suited to in-progress streaming events, such as dragging a card around the screen; once the user lifts their finger, a HistoryEvent is used to record the card's final place.
Volatile Event Basic Message Format
//server<-->client
[client-id, “ve”, target-id, event-type, event-properties]
Current Volatile Event Types
Volatile Events by Widget Type
Workspace
The following fields are properties of several volatile events.
Used to broadcast intermediate steps of a window moving around the workspace.
Generic Position Example
//server<-->client
[client-id, “ve”, target-id, “position”, {position-info}]
sb
Used to broadcast the beginning of a stroke to the other clients.
sc:
Continues the stroke specified by the stroke id.
//server<-->client
[client-id, “ve”, target-id, “sc”, {“x”:100, “y”:300, “strokeId”:“395523d316e942b496a2c8a6fe5f2cac” }]
se:
Ends the stroke specified by stroke-id.
//server<-->client
[client-id, “ve”, target-id, “se”, {“strokeId”:“395523d316e942b496a2c8a6fe5f2cac” }]
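A receiving client reassembles in-progress strokes from the sb/sc/se sequence, keyed by strokeId. An illustrative Python sketch (the assumption that sb carries initial x/y coordinates mirrors the sc example above; class and field names are hypothetical):

```python
class StrokeAssembler:
    """Illustrative reassembly of in-progress strokes from sb (begin),
    sc (continue), and se (end) volatile events, keyed by strokeId."""

    def __init__(self):
        self.active = {}    # strokeId -> list of (x, y) points
        self.finished = {}  # strokeId -> completed point list

    def on_sb(self, props):
        self.active[props["strokeId"]] = [(props["x"], props["y"])]

    def on_sc(self, props):
        # Continues the stroke specified by the stroke id.
        self.active[props["strokeId"]].append((props["x"], props["y"]))

    def on_se(self, props):
        stroke_id = props["strokeId"]
        self.finished[stroke_id] = self.active.pop(stroke_id)
```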
Begin Follow: User A begins to follow User B. Used to notify User B that User A is following. For this global volatile event, the target ID is the session id. The user being followed will update the UI to indicate that User A is following.
//server<-->client
[follower-client-id, "ve", session-id, "bf",
End Follow: User B is no longer following user A. Used to notify user A that user B is no longer following. For this global volatile event, the target ID is the session id. The user being followed will update the UI to indicate that user B is no longer following. If user B leaves the session, user A will receive a room list message which does not contain user B. User A's room list will then be rebuilt, no longer showing user B as a follower.
//server<-->client
[follower-client-id, "ve", session-id, "ef",
A good example illustrating some of the HistoryEvent/VolatileEvent-related changes is moving an object. While the object is being moved/resized by dragging, a series of volatile events (VEs) is sent to the server, and re-broadcast to all clients subscribed to the workspace:
Once the user finishes moving the object, the client sends a history event to specify the final rect and order of the object:
The server will respond with the newly persisted history event (he) record. Note the inclusion of the record's eventId.
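The drag-then-commit pattern described above can be sketched as follows. The send function stands in for the websocket connection, and the history-event property names (`rect`, `order`) follow the description but are otherwise assumptions:

```javascript
// Sketch of the move pattern: stream "position" volatile events while
// dragging, then commit one history event with the final rect and order.
function makeMover(send, clientId, targetId) {
  return {
    drag(x, y) { // intermediate steps: volatile, not persisted
      send([clientId, "ve", targetId, "position", { x, y }]);
    },
    drop(rect, order) { // final placement: history event, persisted
      send([clientId, "he", targetId, "position", { rect, order }]);
    },
  };
}

const sent = [];
const mover = makeMover((message) => sent.push(message), "client-1", "card-3");
mover.drag(10, 10);
mover.drag(20, 15);
mover.drop([20, 15, 120, 115], 2);
```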
12) disconnect Disconnect
Inform other app instances opened by the same user to close their connection and cease reconnect attempts. This is consumed by browser clients in order to prevent the “frantic reconnect” problem seen when two tabs are opened with the same workspace.
//server-->client
[-1, "disconnect"]
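A hedged sketch of the browser-client handling: recognize the two-element disconnect message and stop reconnect attempts. The connection-state flags are illustrative, not part of the protocol:

```javascript
// Recognize the [-1, "disconnect"] message and cease reconnect attempts,
// avoiding the "frantic reconnect" problem between duplicate tabs.
function makeConnectionState() {
  return { open: true, shouldReconnect: true };
}

function handleMessage(state, raw) {
  const msg = JSON.parse(raw);
  if (Array.isArray(msg) && msg[0] === -1 && msg[1] === "disconnect") {
    state.open = false;
    state.shouldReconnect = false; // do not attempt to reconnect
  }
  return state;
}

const state = handleMessage(makeConnectionState(), JSON.stringify([-1, "disconnect"]));
```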
13) ls List Streams
Inform a client of the current streams in a list. Triggered by other events, similar to a room list.
//server-->client
[sender-id, "ls", [Stream List for Session]]
14) bs Begin Stream
Informs the server of a new AV stream starting. The server responds with a List Streams message.
//server<--client
[sender-id, "bs", conferenceId, conferenceProvider, streamId, streamType]
15) es End Stream
Informs the server of a new AV stream ending. The server responds with a List Streams message.
//server<--client
[sender-id, "es", conferenceId, streamId]
16) ss Stream State
Informs the server of an existing AV stream changing state. The server responds with a List Streams message.
//server<--client
[sender-id, "ss", streamId, streamType]
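Consistent with the bs/es/ss formats above, a server-side stream registry might be sketched as follows; after each change the server would broadcast a List Streams (ls) message. The per-stream record shape and the returned list are assumptions:

```javascript
// Track AV streams per the bs/es/ss messages; each mutation returns the
// stream list the server would send back in an "ls" message (shape assumed).
const streams = new Map(); // streamId -> stream record

function onStreamMessage(msg) {
  const [senderId, type, ...rest] = msg;
  if (type === "bs") { // begin stream
    const [conferenceId, conferenceProvider, streamId, streamType] = rest;
    streams.set(streamId, { senderId, conferenceId, conferenceProvider, streamType });
  } else if (type === "es") { // end stream; rest = [conferenceId, streamId]
    const [, streamId] = rest;
    streams.delete(streamId);
  } else if (type === "ss") { // stream state change
    const [streamId, streamType] = rest;
    const record = streams.get(streamId);
    if (record) record.streamType = streamType;
  }
  return [...streams.values()]; // payload for the "ls" response
}

onStreamMessage(["client-1", "bs", "conf-1", "providerX", "stream-1", "video"]);
onStreamMessage(["client-1", "ss", "stream-1", "audio"]);
const list = onStreamMessage(["client-1", "es", "conf-1", "stream-1"]);
```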
17) oid Object ID Reservation
Use this to create a new unique object id that is acceptable for creating new history events which create an object.
//server<--client
[sender-id, "oid"]
Server responds with:
//server-->client
["-1", "oid", <new-object-id>]
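A hedged sketch of the reservation round trip: the client sends the oid request and resolves a pending promise when the response arrives. The correlation strategy (a single in-order pending queue) is an assumption, not specified by the protocol:

```javascript
// Reserve object ids via the oid round trip. Outgoing requests are queued
// and resolved in order as oid responses arrive (in-order matching assumed).
function makeOidClient(send, senderId) {
  const pending = [];
  return {
    reserve() {
      send([senderId, "oid"]);
      return new Promise((resolve) => pending.push(resolve));
    },
    onMessage(msg) {
      if (msg[1] === "oid" && pending.length > 0) {
        pending.shift()(msg[2]); // resolve with <new-object-id>
      }
    },
  };
}

// Usage with a stubbed transport:
const out = [];
const oid = makeOidClient((msg) => out.push(msg), "client-1");
const idPromise = oid.reserve();
oid.onMessage(["-1", "oid", "5409100cd1702d300f6e9234"]);
```

The reserved id can then be used as the target-id of a history event that creates a new object.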
The API described above provides one example message structure. Other structures may be utilized as well, as suits a particular implementation.
As used herein, the “identification” of an item of information does not necessarily require the direct specification of that item of information. Information can be “identified” in a field by simply referring to the actual information through one or more layers of indirection, or by identifying one or more items of different information which are together sufficient to determine the actual item of information. In addition, the term “indicate” is used herein to mean the same as “identify”.
Also, as used herein, a given signal, event or value is “responsive” to a predecessor signal, event or value if the predecessor signal, event or value influenced the given signal, event or value. If there is an intervening processing element, step or time period, the given signal, event or value can still be “responsive” to the predecessor signal, event or value. If the intervening processing element or step combines more than one signal, event or value, the signal output of the processing element or step is considered “responsive” to each of the signal, event or value inputs. If the given signal, event or value is the same as the predecessor signal, event or value, this is merely a degenerate case in which the given signal, event or value is still considered to be “responsive” to the predecessor signal, event or value. “Dependency” of a given signal, event or value upon another signal, event or value is defined similarly.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the technology disclosed may consist of any such feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the technology disclosed.
The foregoing description of preferred embodiments of the technology disclosed has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the technology disclosed to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. For example, though the displays described herein are of large format, small format displays can also be arranged to use multiple drawing regions, though multiple drawing regions are more useful for displays that are at least as large as 12 feet in width. In particular, and without limitation, any and all variations described, suggested by the Background section of this patent application or by the material incorporated by reference are specifically incorporated by reference into the description herein of embodiments of the technology disclosed. In addition, any and all variations described, suggested or incorporated by reference herein with respect to any one embodiment are also to be considered taught with respect to all other embodiments. The embodiments described herein were chosen and described in order to best explain the principles of the technology disclosed and its practical application, thereby enabling others skilled in the art to understand the technology disclosed for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology disclosed be defined by the following claims and their equivalents.
As with all flowcharts herein, it will be appreciated that many of the steps can be combined, performed in parallel or performed in a different sequence without affecting the functions achieved. In some cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain other changes are made as well. In other cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain conditions are satisfied. Furthermore, it will be appreciated that the flow charts herein show only steps that are pertinent to an understanding of the technology disclosed, and it will be understood that numerous additional steps for accomplishing other functions can be performed before, after and between those shown.
While the technology disclosed is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the technology disclosed and the scope of the following claims. It is contemplated that technologies described herein can be implemented using collaboration data structures other than the spatial event map.
This application is a continuation of U.S. Non-Provisional application Ser. No. 17/027,571, entitled “Virtual Workspace Viewport Following in Collaboration Systems”, filed 21 Sep. 2020, which is a continuation of U.S. Non-Provisional application Ser. No. 15/147,576, entitled “Virtual Workspace Viewport Following in Collaboration Systems”, filed 5 May 2016, which claims the benefit of U.S. Provisional Application No. 62/157,911, entitled “System and Method for Emulation of a Viewport Within a Shared Workspace”, filed 6 May 2015. The above-referenced applications are incorporated herein by reference. The following co-pending, commonly owned, U.S. Patent Application is incorporated by reference as if fully set forth herein, U.S. application Ser. No. 14/090,830, entitled “Collaboration System Including A Spatial Event Map”, filed 26 Nov. 2013.
Number | Name | Date | Kind |
---|---|---|---|
4686332 | Greanias et al. | Aug 1987 | A |
5008853 | Bly et al. | Apr 1991 | A |
5220657 | Bly et al. | Jun 1993 | A |
5309555 | Akins et al. | May 1994 | A |
5394521 | Henderson, Jr. et al. | Feb 1995 | A |
5446842 | Schaeffer et al. | Aug 1995 | A |
5537526 | Anderson et al. | Jul 1996 | A |
5563996 | Tchao | Oct 1996 | A |
5613134 | Lucus et al. | Mar 1997 | A |
5689628 | Robertson | Nov 1997 | A |
5727002 | Miller et al. | Mar 1998 | A |
5781732 | Adams | Jul 1998 | A |
5812847 | Joshi et al. | Sep 1998 | A |
5818425 | Want et al. | Oct 1998 | A |
5835713 | FitzPatrick et al. | Nov 1998 | A |
5867156 | Beard | Feb 1999 | A |
5872924 | Nakayama et al. | Feb 1999 | A |
5938724 | Pommier et al. | Aug 1999 | A |
5940082 | Brinegar et al. | Aug 1999 | A |
6078921 | Kelley | Jun 2000 | A |
6084584 | Nahi et al. | Jul 2000 | A |
6128014 | Nakagawa et al. | Oct 2000 | A |
6167433 | Maples et al. | Dec 2000 | A |
6320597 | Ieperen | Nov 2001 | B1 |
6342906 | Kumar et al. | Jan 2002 | B1 |
6343313 | Salesky et al. | Jan 2002 | B1 |
6518957 | Lehtinen et al. | Feb 2003 | B1 |
6564246 | Varma et al. | May 2003 | B1 |
6710790 | Fagioli | Mar 2004 | B1 |
6778989 | Bates et al. | Aug 2004 | B2 |
6911987 | Mairs et al. | Jun 2005 | B1 |
6930673 | Kaye et al. | Aug 2005 | B2 |
6930679 | Wu et al. | Aug 2005 | B2 |
7003728 | Berque | Feb 2006 | B2 |
7043529 | Simonoff | May 2006 | B1 |
7100195 | Underwood | Aug 2006 | B1 |
7129934 | Luman et al. | Oct 2006 | B2 |
7171448 | Danielsen et al. | Jan 2007 | B1 |
7356563 | Leichtling et al. | Apr 2008 | B1 |
7450109 | Halcrow et al. | Nov 2008 | B2 |
D600703 | LaManna et al. | Sep 2009 | S |
7908325 | Pabla et al. | Mar 2011 | B1 |
8209308 | Rueben et al. | Jun 2012 | B2 |
D664562 | McCain et al. | Jul 2012 | S |
8402391 | Doray et al. | Mar 2013 | B1 |
8407290 | Abt, Jr. et al. | Mar 2013 | B2 |
8543929 | Holloway | Sep 2013 | B1 |
8898590 | Okada et al. | Nov 2014 | B2 |
8954862 | Mabie et al. | Feb 2015 | B1 |
9201854 | Kloiber et al. | Dec 2015 | B1 |
9298834 | Kleppner et al. | Mar 2016 | B2 |
9465434 | Mason | Oct 2016 | B2 |
9471192 | Mason | Oct 2016 | B2 |
9479548 | Jensen et al. | Oct 2016 | B2 |
9479549 | Pearson | Oct 2016 | B2 |
10255023 | Liu et al. | Apr 2019 | B2 |
10802783 | Santhakumar et al. | Oct 2020 | B2 |
11262969 | Santhakumar et al. | Mar 2022 | B2 |
20030020671 | Santoro et al. | Jan 2003 | A1 |
20030058227 | Hara et al. | Mar 2003 | A1 |
20040060037 | Damm et al. | Mar 2004 | A1 |
20040083264 | Veselov | Apr 2004 | A1 |
20040093331 | Garner et al. | May 2004 | A1 |
20040150627 | Luman et al. | Aug 2004 | A1 |
20040155871 | Perski et al. | Aug 2004 | A1 |
20040174398 | Luke et al. | Sep 2004 | A1 |
20040243605 | Bernstein et al. | Dec 2004 | A1 |
20040243663 | Johanson et al. | Dec 2004 | A1 |
20050060656 | Martinez et al. | Mar 2005 | A1 |
20050195216 | Kramer et al. | Sep 2005 | A1 |
20050237380 | Kakii et al. | Oct 2005 | A1 |
20050262107 | Bergstraesser et al. | Nov 2005 | A1 |
20050273700 | Champion et al. | Dec 2005 | A1 |
20060012580 | Perski et al. | Jan 2006 | A1 |
20060066588 | Lyon et al. | Mar 2006 | A1 |
20060195507 | Baek et al. | Aug 2006 | A1 |
20060211404 | Cromp et al. | Sep 2006 | A1 |
20060220982 | Ueda | Oct 2006 | A1 |
20060224427 | Salmon | Oct 2006 | A1 |
20060294473 | Keely et al. | Dec 2006 | A1 |
20070070066 | Bakhash | Mar 2007 | A1 |
20070136685 | Bhatla et al. | Jun 2007 | A1 |
20070198744 | Wensley et al. | Aug 2007 | A1 |
20070262964 | Zotov et al. | Nov 2007 | A1 |
20080114844 | Sanchez et al. | May 2008 | A1 |
20080143818 | Ferren et al. | Jun 2008 | A1 |
20080163053 | Hwang et al. | Jul 2008 | A1 |
20080177771 | Vaughn | Jul 2008 | A1 |
20080201339 | McGrew et al. | Aug 2008 | A1 |
20080207188 | Ahn et al. | Aug 2008 | A1 |
20080301101 | Baratto et al. | Dec 2008 | A1 |
20090049381 | Robertson et al. | Feb 2009 | A1 |
20090089682 | Baier et al. | Apr 2009 | A1 |
20090128516 | Rimon et al. | May 2009 | A1 |
20090153519 | Suarez Rovere | Jun 2009 | A1 |
20090160786 | Finnegan | Jun 2009 | A1 |
20090174679 | Westerman | Jul 2009 | A1 |
20090195518 | Mattice et al. | Aug 2009 | A1 |
20090207146 | Shimasaki et al. | Aug 2009 | A1 |
20090217177 | DeGrazia | Aug 2009 | A1 |
20090234721 | Bigelow et al. | Sep 2009 | A1 |
20090251457 | Walker et al. | Oct 2009 | A1 |
20090278806 | Duarte et al. | Nov 2009 | A1 |
20090282359 | Saul et al. | Nov 2009 | A1 |
20090309846 | Trachtenberg et al. | Dec 2009 | A1 |
20090309853 | Hildebrandt et al. | Dec 2009 | A1 |
20100011316 | Sar et al. | Jan 2010 | A1 |
20100017727 | Offer et al. | Jan 2010 | A1 |
20100073454 | Lovhaugen et al. | Mar 2010 | A1 |
20100132034 | Pearce et al. | May 2010 | A1 |
20100192091 | Oishi et al. | Jul 2010 | A1 |
20100194701 | Hill | Aug 2010 | A1 |
20100205190 | Morris et al. | Aug 2010 | A1 |
20100211920 | Westerman et al. | Aug 2010 | A1 |
20100306650 | Oh et al. | Dec 2010 | A1 |
20100306696 | Groth et al. | Dec 2010 | A1 |
20100309148 | Fleizach et al. | Dec 2010 | A1 |
20100315481 | Wijngaarden et al. | Dec 2010 | A1 |
20100318470 | Meinel et al. | Dec 2010 | A1 |
20100318921 | Trachtenberg et al. | Dec 2010 | A1 |
20100328306 | Chau et al. | Dec 2010 | A1 |
20110047505 | Fillion et al. | Feb 2011 | A1 |
20110050640 | Lundback et al. | Mar 2011 | A1 |
20110055773 | Agarawala et al. | Mar 2011 | A1 |
20110060992 | Jevons et al. | Mar 2011 | A1 |
20110063191 | Leung et al. | Mar 2011 | A1 |
20110069184 | Go | Mar 2011 | A1 |
20110109526 | Bauza et al. | May 2011 | A1 |
20110148926 | Koo et al. | Jun 2011 | A1 |
20110154192 | Yang et al. | Jun 2011 | A1 |
20110183654 | Lanier et al. | Jul 2011 | A1 |
20110191343 | Heaton et al. | Aug 2011 | A1 |
20110197147 | Fai | Aug 2011 | A1 |
20110197157 | Hoffman et al. | Aug 2011 | A1 |
20110197263 | Stinson, III | Aug 2011 | A1 |
20110202424 | Chun et al. | Aug 2011 | A1 |
20110208807 | Shaffer | Aug 2011 | A1 |
20110214063 | Saul | Sep 2011 | A1 |
20110216064 | Dahl et al. | Sep 2011 | A1 |
20110225494 | Shmuylovich et al. | Sep 2011 | A1 |
20110243380 | Forutanpour et al. | Oct 2011 | A1 |
20110246875 | Parker et al. | Oct 2011 | A1 |
20110264785 | Newman et al. | Oct 2011 | A1 |
20110271229 | Yu | Nov 2011 | A1 |
20120011465 | Rezende | Jan 2012 | A1 |
20120019452 | Westerman | Jan 2012 | A1 |
20120026200 | Okada et al. | Feb 2012 | A1 |
20120030193 | Richberg et al. | Feb 2012 | A1 |
20120038572 | Kim et al. | Feb 2012 | A1 |
20120050197 | Kemmochi | Mar 2012 | A1 |
20120075212 | Park et al. | Mar 2012 | A1 |
20120124124 | Beaty et al. | May 2012 | A1 |
20120127126 | Mattice et al. | May 2012 | A1 |
20120169772 | Werner et al. | Jul 2012 | A1 |
20120176328 | Brown et al. | Jul 2012 | A1 |
20120179994 | Knowlton et al. | Jul 2012 | A1 |
20120229425 | Barrus | Sep 2012 | A1 |
20120254858 | Moyers et al. | Oct 2012 | A1 |
20120260176 | Sehrer | Oct 2012 | A1 |
20120274583 | Haggerty | Nov 2012 | A1 |
20120275683 | Adler et al. | Nov 2012 | A1 |
20120278738 | Kruse et al. | Nov 2012 | A1 |
20120320073 | Mason | Dec 2012 | A1 |
20130004069 | Hawkins et al. | Jan 2013 | A1 |
20130027404 | Sarnoff | Jan 2013 | A1 |
20130047093 | Reuschel et al. | Feb 2013 | A1 |
20130086487 | Findlay et al. | Apr 2013 | A1 |
20130093695 | Davidson | Apr 2013 | A1 |
20130198653 | Tse et al. | Aug 2013 | A1 |
20130218998 | Fischer et al. | Aug 2013 | A1 |
20130222371 | Reitan | Aug 2013 | A1 |
20130246969 | Barton | Sep 2013 | A1 |
20130282820 | Jabri et al. | Oct 2013 | A1 |
20130320073 | Yokoo et al. | Dec 2013 | A1 |
20130346878 | Mason | Dec 2013 | A1 |
20130346910 | Mason | Dec 2013 | A1 |
20140013234 | Beveridge et al. | Jan 2014 | A1 |
20140022334 | Lockhart et al. | Jan 2014 | A1 |
20140032770 | Pegg | Jan 2014 | A1 |
20140033067 | Pittenger et al. | Jan 2014 | A1 |
20140040767 | Bolia | Feb 2014 | A1 |
20140055400 | Reuschel | Feb 2014 | A1 |
20140062957 | Perski et al. | Mar 2014 | A1 |
20140063174 | Junuzovic et al. | Mar 2014 | A1 |
20140101122 | Oren | Apr 2014 | A1 |
20140149880 | Farouki | May 2014 | A1 |
20140149901 | Hunter | May 2014 | A1 |
20140222916 | Foley et al. | Aug 2014 | A1 |
20140223334 | Jensen et al. | Aug 2014 | A1 |
20140223335 | Pearson | Aug 2014 | A1 |
20140282229 | Laukkanen et al. | Sep 2014 | A1 |
20140325431 | Vranjes et al. | Oct 2014 | A1 |
20140380193 | Coplen et al. | Dec 2014 | A1 |
20150007040 | Xu et al. | Jan 2015 | A1 |
20150007055 | Lemus et al. | Jan 2015 | A1 |
20150084055 | Nagata et al. | Mar 2015 | A1 |
20150089452 | Dorninger | Mar 2015 | A1 |
20150185990 | Thompson | Jul 2015 | A1 |
20150248384 | Luo et al. | Sep 2015 | A1 |
20150279071 | Xin | Oct 2015 | A1 |
20150331489 | Edwardson | Nov 2015 | A1 |
20160085381 | Parker | Mar 2016 | A1 |
20160142471 | Tse | May 2016 | A1 |
20160179754 | Borza | Jun 2016 | A1 |
20160232647 | Carlos | Aug 2016 | A1 |
20160328098 | Santhakumar et al. | Nov 2016 | A1 |
20160328114 | Santhakumar et al. | Nov 2016 | A1 |
20160378291 | Pokrzywka | Dec 2016 | A1 |
20170090852 | Harada | Mar 2017 | A1 |
20170235537 | Liu et al. | Aug 2017 | A1 |
20170315767 | Rao | Nov 2017 | A1 |
20170330150 | Foley et al. | Nov 2017 | A1 |
Number | Date | Country |
---|---|---|
2014275495 | Dec 2015 | AU |
101630240 | Jan 2010 | CN |
103023961 | Apr 2013 | CN |
103229141 | Jul 2013 | CN |
104112440 | Oct 2014 | CN |
104395883 | Mar 2015 | CN |
104412257 | Mar 2015 | CN |
2002116996 | Apr 2002 | JP |
2005258877 | Sep 2005 | JP |
2010079834 | Apr 2010 | JP |
2010129093 | Jun 2010 | JP |
2010134897 | Jun 2010 | JP |
2010165178 | Jul 2010 | JP |
2010218313 | Sep 2010 | JP |
2012043251 | Mar 2012 | JP |
2016012252 | Jan 2016 | JP |
1020090085068 | Aug 2009 | KR |
0161633 | Aug 2001 | WO |
2008063833 | May 2008 | WO |
2009018314 | Feb 2009 | WO |
2011029067 | Mar 2011 | WO |
2011048901 | Apr 2011 | WO |
2012162411 | Nov 2012 | WO |
2014023432 | Feb 2014 | WO |
2014121209 | Aug 2014 | WO |
Entry |
---|
Albin, T., “Comfortable Portable Computing: The Ergonomic Equation,” Copyright 2008 Ergotron, Inc., 19 pgs. |
Keller, Ariane, “The ANA Project, Development of the ANA-Core Software” Masters Thesis, dated Sep. 21, 2007, ETH Zurich, 92 pages. |
“Ergonomics Data and Mounting Heights,” Ergonomic Ground Rules, last revised Sep. 22, 2010, 2 pgs. |
Anacore, “Anacore Presents Synthesis”, InfoComm 2012: Las Vegas, NV, USA, Jun. 9-15, 2012, 2 pages, screen shots taken from http://www.youtube.com/watch?v=FbQ9P1c5aNk (visited Nov. 1, 2013). |
Dodds, T.J. and Ruddle, R.A. (2008) Using teleporting, awareness and multiple views to improve teamwork in collaborative virtual environments. In: Mohler, B. and van Liere, R., (eds.) Virtual Environments 2008. 14th Eurographics Symposium on Virtual Environments, May 29-30, 2008, (Year: 2008). |
Ionescu, Ama, et al., “WorkspaceNavigator: Tools for Capture, Recall and Reuse using Spatial Cues in an Interactive Workspace,” 2002, <URL=https://hci.stanford.edu/research/wkspcNavTR.pdf>, 16 p. |
Johanson, Brad et al., “The Event Heap: an Enabling Infrastructure for Interactive Workspaces,” 2000, <URL=https://graphics.stanford.edu/papers/eheap/>, 11 pages. |
Screen captures from YouTube video clip entitled “Enable Drag-and-Drop Between TreeView,” 7 pages, uploaded on Oct. 27, 2010 by user “Intersoft Solutions”. Retrieved from Internet: <https://www.youtube.com/watch?v=guoj1AnKu_s>. |
Taylor, Allen G, SQL for Dummies, 7th Edition, published Jul. 5, 2012, 11 pages. |
Villamor, C., et al., “Touch Gesture Reference Guide”, Apr. 15, 2010, retrieved from the internet: http://web.archive.org/web/20100601214053; http://www.lukew.com/touch/TouchGestureGuide.pdf, 7 pages, retrieved on Apr. 10, 2014. |
W3C Working Group Note, HTML: The Markup Language (an HTML language reference), http://www.w3.org/TR/html-markup/iframe.html, retrieved Aug. 4, 2016, 3 pages. |
Number | Date | Country | |
---|---|---|---|
20230033682 A1 | Feb 2023 | US |
Number | Date | Country | |
---|---|---|---|
62157911 | May 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17027571 | Sep 2020 | US |
Child | 17684354 | US | |
Parent | 15147576 | May 2016 | US |
Child | 17027571 | US |