The present technology relates to collaboration systems that enable users to collaborate in a virtual workspace in a collaboration session. More specifically, the technology relates to collaboration systems that facilitate multiple simultaneous users in accessing global workspace data using devices with different display sizes.
Collaboration systems are used in a variety of environments to allow users to contribute and participate in content generation and review. Users of collaboration systems can join collaboration sessions from remote locations around the globe. A participant in a collaboration session can share content such as digital assets with other participants in the collaboration session, using a digital whiteboard. The digital assets can be documents such as word processor files, spreadsheets, slide decks, notes, program code, etc. Digital assets can also be graphical objects such as images, videos, line drawings, annotations, etc. Digital displays are often used for interactive presentations and other purposes in a manner analogous to whiteboards. In many scenarios, one of the participants in the collaboration session shares content with other participants in the meeting. This participant can share the content using a large format display, and one or more other participants may view the shared content on devices with small format displays. A large difference in display sizes can cause issues with proper viewing of the content by participants of a collaboration session. For example, the content may appear very small on the displays of small format devices, which can make it very difficult for the participants to review.
An opportunity arises to provide a technique to automatically adjust the content on respective displays of devices with different display sizes.
A system and method for operating a server node are disclosed. The method of operating a server node includes sending data identifying digital assets in a workspace. The method includes receiving, at the server node and from a leader node, data identifying digital assets in the workspace. The method includes identifying, by the server node and from the received data, data identifying first digital assets from the digital assets. The first digital assets have locations outside mapped display coordinates of a display linked to a follower node following the leader node. The method includes sending, to the follower node, the received data identifying the digital assets in the workspace and the data identifying the first digital assets. The data sent to the follower node allows display, on the display linked to the follower node, of only digital assets in the workspace with locations mapping inside the mapped display coordinates of the display linked to the follower node. The data sent to the follower node prevents display of the first digital assets with locations outside the mapped display coordinates of the display linked to the follower node.
A size of a display linked to the leader node is at least four times larger than a size of the display linked to the follower node. Even larger display sizes can be linked to the leader node, e.g., eight times, ten times or twelve times larger than the size of the display linked to the follower node.
The leader node can be used by a leader participant presenting collaboration data to a follower participant using the follower node.
The data sent to the follower node allows a reduction in a size of the displayed digital assets when displaying the digital assets on the display linked to the follower node. The reduction in the size of the digital assets can reduce the size of the digital assets to ½ times (i.e., one half) the size of the digital assets as displayed on a display linked to the leader node. Further reduction in the size of digital assets can be performed, e.g., the reduction in the size of the digital assets can reduce the size of the digital assets to ¼ times (i.e., one fourth) the size of the digital assets as displayed on the display linked to the leader node, or to as little as 1/10 times (i.e., one tenth) the size of the digital assets as displayed on the display linked to the leader node.
In one implementation, the method includes generating, by the server node and from the received data identifying the digital assets in the workspace, a reduced set of data by removing the data identifying the first digital assets. The method includes sending, from the server node to the follower node, the reduced set of data. The reduced set of data can identify digital assets in the workspace with locations mapping inside the mapped display coordinates of the display linked to the follower node, and the reduced set of data does not include the first digital assets.
In one implementation, the method includes receiving, at the server node, an update event from the follower node indicating a pan operation in response to an input received at the follower node. The pan operation can move at least a portion of the digital assets on the display linked to the follower node. The method includes generating, from the received data identifying the digital assets in the workspace, a second reduced set of data by removing data identifying digital assets moved outside of the mapped display coordinates of the display linked to the follower node and including one or more of the first digital assets that are inside of the coordinates of the display linked to the follower node as a result of the update event. The method includes sending, from the server node and to the follower node, the second reduced set of data. The second reduced set of data identifies digital assets in the workspace with locations mapping inside the mapped display coordinates of the display linked to the follower node, and the second reduced set of data does not include one or more of the first digital assets that have locations outside the mapped display coordinates of the display linked to the follower node as a result of the update event.
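The following TypeScript sketch illustrates one possible form of the reduced-set behavior described above: filtering digital assets against the follower's mapped display coordinates, and recomputing the set on a pan update event. The type names, field shapes and functions are illustrative assumptions for exposition, not the actual implementation.

```typescript
// Illustrative sketch only: the type names, field shapes and functions are
// assumptions for exposition, not the actual implementation.

interface Rect { x: number; y: number; width: number; height: number; }

interface DigitalAsset {
  id: string;
  location: Rect; // position and extent in workspace coordinates
}

// True when an asset's workspace location intersects the mapped display
// coordinates (viewport) of the follower node's display.
function insideViewport(asset: DigitalAsset, viewport: Rect): boolean {
  const a = asset.location;
  return (
    a.x < viewport.x + viewport.width &&
    a.x + a.width > viewport.x &&
    a.y < viewport.y + viewport.height &&
    a.y + a.height > viewport.y
  );
}

// Generate the reduced set of data: remove the "first digital assets",
// i.e., those with locations outside the follower's viewport.
function reducedSet(assets: DigitalAsset[], followerViewport: Rect): DigitalAsset[] {
  return assets.filter((asset) => insideViewport(asset, followerViewport));
}

// On a pan update event the follower's viewport moves; recomputing the
// reduced set drops assets panned out of view and admits previously
// excluded assets panned into view.
function onPanUpdate(assets: DigitalAsset[], pannedViewport: Rect): DigitalAsset[] {
  return reducedSet(assets, pannedViewport);
}
```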
The data received at the server node can further include toolbar data identifying a toolbar including user interface elements, the toolbar data further identifying a source location and a source dimension of the toolbar as displayed on the display linked to the leader node. The method further includes determining a target location and a target dimension of the toolbar for display on the display linked to the follower node. The target location maps inside the mapped display coordinates of the display linked to the follower node, and the target location and the target dimension prevent overlap of the toolbar with digital assets displayed in the display linked to the follower node.
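A minimal sketch of the toolbar placement step follows. The candidate-location strategy shown here (try a few screen corners, keep the first that is overlap-free) is an assumption chosen for illustration, not taken from the disclosure.

```typescript
// Illustrative sketch only: the candidate-location strategy (try a few
// screen corners, keep the first that is overlap-free) is an assumption.

interface Rect { x: number; y: number; width: number; height: number; }

function overlaps(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width && a.x + a.width > b.x &&
         a.y < b.y + b.height && a.y + a.height > b.y;
}

// Determine a target location and target dimension for the toolbar on the
// follower display, avoiding overlap with displayed digital assets.
function placeToolbar(
  sourceDimension: { width: number; height: number }, // as shown on the leader display
  display: { width: number; height: number },         // follower screen space in pixels
  displayedAssets: Rect[],                            // assets rendered on the follower display
  scale: number                                       // e.g., follower/leader display size ratio
): Rect {
  const w = sourceDimension.width * scale;  // target dimension
  const h = sourceDimension.height * scale;
  const candidates: Rect[] = [
    { x: 0, y: 0, width: w, height: h },                  // top-left
    { x: 0, y: display.height - h, width: w, height: h }, // bottom-left
    { x: display.width - w, y: 0, width: w, height: h },  // top-right
  ];
  const free = candidates.find(
    (c) => !displayedAssets.some((asset) => overlaps(c, asset))
  );
  return free ?? candidates[0]; // fall back to top-left if every candidate overlaps
}
```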
A system including one or more processors coupled to memory is provided. The memory is loaded with computer instructions to operate a server node to send data identifying digital assets in a workspace. The instructions, when executed on the one or more processors, implement operations presented in the method described above.
Computer program products which can execute the methods presented above are also described herein (e.g., a non-transitory computer-readable recording medium having a program recorded thereon, wherein, when the program is executed by one or more processors, the one or more processors can perform the methods and operations described above).
Other aspects and advantages of the present technology can be seen on review of the drawings, the detailed description, and the claims, which follow.
The technology is described with respect to specific embodiments thereof, and reference will be made to the drawings described below, which are not drawn to scale.
A detailed description of embodiments of the present technology is provided with reference to the figures.
The following description is presented to enable a person skilled in the art to make and use the technology and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present technology. Thus, the present technology is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Collaboration systems are used in a variety of environments to allow users to contribute and participate in content generation, presentation and review. Users of collaboration systems can join collaboration sessions from remote locations around the world. A participant in a collaboration session can share digital assets or content with other participants in the collaboration session, using a digital whiteboard (also referred to as a virtual workspace, a workspace, an online whiteboard, etc.). The digital assets can be documents such as word processor files, spreadsheets, slide decks, notes, program code, etc. Digital assets can also be native or non-native graphical objects such as images, videos, line drawings, annotations, etc. The digital assets can also be websites, webpages, web applications, cloud-based or other types of software applications that execute in a window or in a browser displayed on the workspace.
Digital displays are often used for interactive presentations and other purposes in a manner analogous to whiteboards. In many scenarios, one of the participants in the collaboration session shares content with other participants in the meeting. This participant may be considered as a leader and other participants viewing the content presented by the leader may be considered as followers. The followers view the shared content at remote locations using the displays associated with their respective computing devices. Different participants can use different types of computing devices to participate in a collaboration session. The computing devices can have a variety of display sizes. In one scenario, the leader may be presenting content using a large format display while one or more followers may be viewing that content using mobile devices such as cell phones that have small format displays. The large format displays can have display sizes ranging from a few feet to more than ten feet while mobile devices can have a display size as small as a few inches.
The leader of a collaboration session may share content displayed on a viewport of their large format display. The shared content from the viewport of the leader node may not be adequately presented for viewing on the display of a small format device (i.e., a computing device with a small format display) of a follower. A size of a large format display can range from a few feet to more than 10 feet and a size of a small format display can be a few inches. The size can be measured diagonally between a top right corner and a bottom left corner (or a top left corner and a bottom right corner) of the display screen of a device. The content displayed on the small format device may become too small if the entire content displayed on the viewport of a large format display is presented on the small format display. In another scenario, the leader can present content using a small format device and the follower may view that content on a large format display. In the above-mentioned collaboration scenarios, the effectiveness of the collaboration session can be reduced if content is not adjusted for presentation according to the display size of client devices. The technology disclosed can automatically adjust the size of content on displays of computing devices of followers when the followers use computing devices with a large difference in display size with respect to the size of the display on which the leader is presenting the content. The technology disclosed can automatically prevent display of some content shared by a leader using a large format display when displaying content on a small format device of a follower. This prevention of display of some content allows relevant content from the viewport of the leader node (i.e., the computing device used by the leader of the collaboration session) to be presented on a small format device at a reasonable size such that the content is easily viewable by the follower.
The technology disclosed is related to automatic adjustment of content displayed on displays of various sizes used by participants in a collaboration session. In particular, the technology disclosed adjusts the size and/or the amount of content displayed on the display of a computing device used by a follower when the display size of the follower node has a large difference in size with respect to the display size of the leader node. For example, the technology disclosed enables a follower to view content, shared by the leader, on a small format display of a mobile device in an efficient manner when the content is shared by the leader using a large format digital display. The follower can pan and zoom to view content shared by the leader. Similarly, the technology disclosed also enables the follower to efficiently view the content on a large format display when the content is shared by the leader using a mobile device with a small format display.
Some key elements of the collaboration system are presented below, followed by further details of the technology disclosed.
In order to support an unlimited amount of spatial information for a given collaboration session, the technology disclosed provides a way to organize a virtual space termed the “workspace”. The workspace can be characterized as a multi-dimensional, and in some cases two-dimensional, plane with essentially unlimited extent in one or more dimensions, organized in such a way that new content can be added to the space. The content can be arranged and rearranged in the space, and a user can navigate from one part of the space to another.
Digital assets (or objects), as described in more detail above, are arranged on the virtual workspace (or shared virtual workspace). Their locations in the workspace are important for performing various types of interactions (e.g., editing, deleting, re-sizing, etc.) and gestures. One or more digital displays in the collaboration session can display a portion of the workspace, where locations on the display are mapped to locations in the workspace. The digital assets can be arranged in canvases (also referred to as sections or containers). Multiple canvases can be created in a workspace.
The technology disclosed provides a way to organize digital assets in a virtual space termed the workspace (or virtual workspace), which can, for example, be characterized by a 2-dimensional plane (along the X-axis and Y-axis) with essentially unlimited extent in one or both dimensions. The workspace is organized in such a way that new content such as digital assets can be added to the space, the content can be arranged and rearranged in the space, a user can navigate from one part of the space to another, and a user can easily find desired content (such as digital assets) in the space. The technology disclosed can also organize content on a 3-dimensional workspace (along the X-axis, Y-axis, and Z-axis).
One or more digital displays in the collaboration session can display a portion of the workspace, where locations on the display are mapped to locations in the workspace. A mapped area, also known as a viewport, within the workspace is rendered on a physical screen space. Because the entire workspace is addressable using coordinates of locations, any portion of the workspace that a user may be viewing itself has a location, width, and height in coordinate space. The concept of a portion of a workspace can be referred to as a “viewport”. The coordinates of the viewport are mapped to the coordinates of the screen space. The coordinates of the viewport can be changed, which can change the objects contained within the viewport, and the change would be rendered on the screen space of the display client. Details of the workspace and viewport are presented in our U.S. application Ser. No. 15/791,351 (Atty. Docket No. HAWT 1025-1), entitled, “Virtual Workspace Including Shared Viewport Markers in a Collaboration System,” filed Oct. 23, 2017, which is incorporated by reference and fully set forth herein. Participants in a collaboration session can use digital displays of various sizes, ranging from large format displays with sizes of five feet or more to small format devices with display sizes of a few inches. One participant of a collaboration session may share content (or a viewport) from their large format display, wherein the shared content or viewport may not be adequately presented for viewing on the small format device of another user in the same collaboration session. The technology disclosed can automatically adjust the zoom sizes of the various display devices so that content is displayed at an appropriate zoom level. Further, the technology disclosed includes the logic to automatically select an appropriate portion of the content from the workspace to display on a device with a small format display. Even when content is displayed at a smaller size, a device with a small format display may not have enough display area; reducing the size of the content too much can cause issues in the review and analysis of the content.
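As an informal illustration of the viewport concept described above, the following TypeScript sketch shows one way a workspace point could be mapped to screen space, and how panning changes the viewport. All names here are assumptions for exposition, not the actual implementation.

```typescript
// Illustrative sketch only: names are assumptions. A viewport is a
// rectangle in workspace coordinates; the screen space is the pixel area
// of the display client.

interface Rect { x: number; y: number; width: number; height: number; }

// Map a point in workspace coordinates to screen-space pixels for a
// given viewport.
function workspaceToScreen(
  point: { x: number; y: number },
  viewport: Rect,                           // portion of the workspace being viewed
  screen: { width: number; height: number } // display client's screen space
): { x: number; y: number } {
  return {
    x: ((point.x - viewport.x) / viewport.width) * screen.width,
    y: ((point.y - viewport.y) / viewport.height) * screen.height,
  };
}

// Changing the viewport's coordinates changes which objects it contains,
// and the change is rendered on the screen space of the display client.
function panViewport(viewport: Rect, dx: number, dy: number): Rect {
  return { ...viewport, x: viewport.x + dx, y: viewport.y + dy };
}
```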
Participants of the collaboration session can work on the workspace (or virtual workspace) that can extend in two dimensions (along x and y coordinates) or three dimensions (along x, y, z coordinates). The size of the workspace can be extended along any dimension as desired and therefore can be considered an “unlimited workspace”. The technology disclosed includes data structures and logic to track how people (or users) and devices interact with the workspace over time. The technology disclosed includes a so-called “spatial event map” (SEM) to track interaction of participants with the workspace (and the digital assets placed on the workspace) over time. The spatial event map contains information needed to define digital assets and events in a workspace. It is useful to consider the technology from the point of view of space, events, maps of events in the space, and access to the space by multiple users, including multiple simultaneous users. The spatial event map can be considered (or can represent) a sharable container of digital assets that can be shared with other users. The spatial event map includes location data of the digital assets in a two-dimensional or a three-dimensional space. The technology disclosed uses the location data and other information related to the digital assets (such as the type of digital asset, shape, color, etc.) to display digital assets on the digital display linked to computing devices used by the participants of the collaboration session.
A spatial event map contains content in the workspace for a given collaboration session. The spatial event map defines the arrangement of digital assets on the workspace. Their locations in the workspace are important for performing gestures. The spatial event map contains information needed to define digital assets, their locations, and events in the workspace. A spatial event map system maps portions of a workspace to a digital display, e.g., a touch-enabled display. Details of the workspace and spatial event map are presented in our U.S. application Ser. No. 14/090,830 (Atty. Docket No. HAWT 1011-2), entitled, “Collaboration System Including a Spatial Event Map,” filed on Nov. 26, 2013, now issued as U.S. Pat. No. 10,304,037, which is incorporated by reference and fully set forth herein.
The information related to the display of digital assets on a display of a client node (i.e., a computing device used by a participant to participate in the collaboration session) can be included in the spatial event map. For example, the spatial event map can include a data structure that can store the display sizes of different client nodes (or client devices or computing devices) participating in the collaboration session. The spatial event map can include information about the current zoom-levels at the displays of client nodes participating in the collaboration session. The spatial event map can also include information about the current status of each participant in the collaboration session. For example, the current status can indicate whether the participant is participating in the collaboration session as a leader or as a follower. The status of a participant can change during a collaboration session, as a follower can become a leader and a leader can become a follower. The collaboration server stores the current status of a participant in the spatial event map. Client nodes can send update events to the server node (or collaboration server) including any updates to the zoom-level, current status of the participant, etc. The server node can then update the spatial event map at all client nodes participating in the collaboration session.
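The following TypeScript sketch records the kinds of per-client information described above in a spatial event map. The field names are assumptions about what the data structure could hold, not the actual schema.

```typescript
// Illustrative sketch only: field names are assumptions about what the
// spatial event map could record, based on the description above.

type ParticipantStatus = 'leader' | 'follower';

interface SpatialEvent {
  targetId: string;                   // digital asset the event applies to
  location: { x: number; y: number }; // location in workspace coordinates
  timestamp: number;
  eventType: string;                  // e.g., 'create', 'move', 'resize', 'delete'
}

interface ClientDisplayInfo {
  clientId: string;
  displaySize: { width: number; height: number }; // e.g., in inches
  zoomLevel: number;          // current zoom-level at the client display
  status: ParticipantStatus;  // can change during a session
}

interface SpatialEventMap {
  workspaceId: string;
  events: SpatialEvent[];        // log of events in the workspace
  clients: ClientDisplayInfo[];  // per-client display and status data
}
```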
The client nodes include logic to use the information in the spatial event map to automatically adjust the display of content on the display attached to the computing devices used by participants. This adjustment in the display of content can be performed when a follower participant is using a computing device whose display size differs greatly from the size of the display of the computing device used by the leader participant (or leading participant). For example, the leader may be using a large format digital display with a display size of several feet while the follower may be using a mobile computing device with a small format display having a display size of a few inches. In some scenarios, the leader may be using a computing device with a small format display while the follower may be using a large format digital display.
Interactions with the workspace (or virtual workspace) can be handled as events. People, via tangible user interface devices, and systems can interact with the workspace. Events have data that can define or point to a target digital asset to be displayed on a physical display, an action such as creation, modification, movement within the workspace or deletion of a target digital asset, and metadata associated with them. Metadata can include information such as originator, date, time, location in the workspace, event type, security information, and other metadata. Events can also be generated when gestures are performed by a participant. For example, a participant can draw a circle around several digital assets. The server includes the logic to receive an event including the gesture information. The server can use the information in the event to perform an operation, e.g., the server can group the digital assets placed inside the circle. The server can also initiate one or more workflows in response to an event indicating a gesture. For example, when a user or a participant draws a line passing through several digital assets, the server can attach copies of these digital assets to an email and send the email to participants of the collaboration session.
The movement and editing of the digital assets can generate an update event related to a particular digital asset of the digital assets. A leader or a presenter, using a leader node (also referred to as a client node), presents or shares content with other participants (also referred to as followers). The leader can pan the viewport into the workspace to display different content. A viewport update or viewport change event can be generated in response to the pan operation. The server node includes logic to send the event data (such as a viewport update event) to the follower nodes (i.e., client nodes used by participants following the leader) so that following participants follow the viewport of the leader and are able to view the content shared by the leader. The spatial event map (SEM), received at respective client nodes, is updated to identify the update event and to allow display of one or more digital assets at an identified location in the workspace in respective display spaces of respective client nodes. The identified location of the particular digital asset can be received by the server node in an input event from a client node. Further details of the leader-follower technology are presented in our U.S. application Ser. No. 15/147,576 (Atty. Docket No. HAWT 1019-2A), entitled, “Virtual Workspace Viewport Following in Collaboration Systems,” filed on May 5, 2016, now issued as U.S. Pat. No. 10,802,783, which is incorporated by reference and fully set forth herein.
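A minimal sketch of the viewport-following distribution step described above follows; the event shape and the send() transport are assumed abstractions, not the actual API.

```typescript
// Illustrative sketch only: the event shape and the send() transport are
// assumed abstractions, not the actual API.

interface Rect { x: number; y: number; width: number; height: number; }

interface ViewportChangeEvent {
  kind: 'viewportChange';
  leaderId: string;
  viewport: Rect; // leader's new viewport in workspace coordinates
}

interface FollowerConnection {
  clientId: string;
  send(event: ViewportChangeEvent): void;
}

// When the leader pans, the server node forwards the viewport change to
// every follower node so that followers track the leader's viewport.
function distributeViewportChange(
  event: ViewportChangeEvent,
  followers: FollowerConnection[]
): void {
  for (const follower of followers) {
    follower.send(event);
  }
}
```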
Tracking events in a workspace enables the collaboration system not only to present the spatial events in a workspace in its current state, but also to share it with multiple users on multiple displays with different display sizes. Also, the spatial event map can have a reasonable size in terms of the amount of data needed, while also defining an unbounded workspace. Further details of the technology disclosed are presented below with reference to the figures.
In one implementation, a display array can have a displayable area usable as a screen space totaling on the order of 6 feet in height and 30 feet in width, which is wide enough for multiple users to stand at different parts of the wall and manipulate it simultaneously. It is understood that large format displays with displayable area greater than or less than the example displayable area presented above can be used by participants of the collaboration system. One or more users can also use mobile devices with small format displays to participate in a collaboration session. A mobile device 102f is shown as an example. The devices 102, which are also referred to as client nodes, have displays on which a screen space is allocated for displaying events in a workspace. The screen space for a given user may comprise the entire screen of the display, a subset of the screen, a window to be displayed on the screen and so on, such that each has a limited area or extent compared to the virtually unlimited extent of the workspace.
As used herein, a physical network node is an active electronic device that is attached to a network, and is capable of sending, receiving, or forwarding information over a communication channel. Examples of electronic devices which can be deployed as network nodes, include all varieties of computers, workstations, laptop computers, handheld computers and smart phones. As used herein, the term “database” does not necessarily imply any unity of structure. For example, two or more separate databases, when considered together, still constitute a “database” as that term is used herein.
The application running at the collaboration server 205 can be hosted using software such as Apache or nginx, or a runtime environment such as node.js. It can be hosted, for example, on virtual machines running operating systems such as LINUX. The collaboration server 205 is illustrated, heuristically, in the figures as a single computer.
The database 206 stores, for example, a digital representation of workspace data sets for a spatial event map of each session where the workspace data set can include or identify events related to objects displayable on a display canvas, which is a portion of a virtual workspace. The database 206 can store digital assets and information associated therewith. A workspace data set can be implemented in the form of a spatial event stack, managed so that at least persistent spatial events (called historic events or history events) are added to the stack (push) and removed from the stack (pop) in a first-in-last-out pattern during an undo operation. There can be workspace data sets for many different workspaces. A data set for a given workspace can be configured in a database or as a machine-readable document linked to the workspace. The workspace can have unlimited or virtually unlimited dimensions. The workspace data includes event data structures identifying digital assets displayable by a display client in the display area on a display wall and associates a time and a location in the workspace with the digital assets identified by the event data structures. Each device 102 displays only a portion of the overall workspace. A display wall has a display area for displaying objects, the display area being mapped to a corresponding area in the workspace that corresponds to a viewport in the workspace centered on, or otherwise located with, a user location in the workspace. The mapping of the display area to a corresponding viewport in the workspace is usable by the display client to identify digital assets in the workspace data within the display area to be rendered on the display, and to identify digital assets to which to link user touch inputs at positions in the display area on the display.
The collaboration server 205 and database 206 can constitute a server node, including memory storing a log of events relating to digital assets having locations in a workspace, entries in the log including a location in the workspace of the digital asset of the event, a time of the event, a target identifier of the digital asset of the event, as well as any additional information related to digital assets, as described herein. The collaboration server 205 can include logic to establish links to a plurality of active client nodes (e.g., devices 102), to receive messages identifying events relating to modification, creation, deletion, movement or resizing of digital assets having locations in the workspace, to add events to the log in response to said messages, and to distribute messages relating to events identified in messages received from a particular client node to other active client nodes.
The collaboration server 205 includes logic that implements an application program interface, including a specified set of procedures and parameters, by which to send messages carrying portions of the log to client nodes, and to receive messages from client nodes carrying data identifying events relating to digital assets which have locations in the workspace. Also, the logic in the collaboration server 205 can include an application interface including a process to distribute events received from one client node to other client nodes.
The events compliant with the API can include a first class of event (history event) to be stored in the log and distributed to other client nodes, and a second class of event (ephemeral event) to be distributed to other client nodes but not stored in the log.
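The following sketch illustrates, in TypeScript, a hypothetical handler for the two event classes just described: history events are stored in the log and distributed, while ephemeral events are only distributed. The handler and its names are assumptions for exposition.

```typescript
// Illustrative sketch only: a hypothetical handler showing the two event
// classes described above.

type EventClass = 'history' | 'ephemeral';

interface ApiEvent {
  class: EventClass;
  payload: unknown;
}

const eventLog: ApiEvent[] = [];

function handleEvent(event: ApiEvent, broadcast: (e: ApiEvent) => void): void {
  if (event.class === 'history') {
    eventLog.push(event); // history events are stored in the log
  }
  broadcast(event);       // both classes are distributed to other client nodes
}
```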
The collaboration server 205 can store workspace data sets for a plurality of workspaces and provide the workspace data to the display clients participating in the session. The workspace data is then used by the computer systems 210 with appropriate (client) software 212 including display client software, to determine images to display on the display, and to assign digital assets for interaction to locations on the display surface. The server 205 can store and maintain a multitude of workspaces, for different collaboration sessions. Each workspace can be associated with an organization or a group of users and configured for access only by authorized users in the group.
In some alternative implementations, the collaboration server 205 can keep track of a “viewport” for each device 102, indicating the portion of the display canvas (or canvas) viewable on that device, and can provide to each device 102 data needed to render the viewport. The display canvas is a portion of the virtual workspace. Application software running on the client device responsible for rendering drawing objects, handling user inputs, and communicating with the server can be based on HTML5 or other markup-based procedures and run in a browser environment. This allows for easy support of many different client operating system environments.
The user interface data stored in database 206 includes various types of digital assets including graphical constructs (drawings, annotations, graphical shapes, etc.), image bitmaps, video objects, multi-page documents, scalable vector graphics, and the like. The devices 102 are each in communication with the collaboration server 205 via a communication network 204. The communication network 204 can include all forms of networking components, such as LANs, WANs, routers, switches, Wi-Fi components, cellular components, wired and optical components, and the internet. In one scenario, two or more of the users 101 are located in the same room, and their devices 102 communicate via Wi-Fi with the collaboration server 205.
Two or more of the users 101 can be separated from each other by thousands of miles and their devices 102 communicate with the collaboration server 205 via the internet. The walls 102c, 102d, 102e can be multi-touch devices which not only display images, but also can sense user gestures provided by touching the display surfaces with either a stylus or a part of the body such as one or more fingers. In some embodiments, a wall (e.g., 102c) can distinguish between a touch by one or more fingers (or an entire hand, for example), and a touch by the stylus. In one embodiment, the wall senses touch by emitting infrared light and detecting light received; light reflected from a user's finger has a characteristic which the wall distinguishes from ambient received light. The stylus emits its own infrared light in a manner that the wall can distinguish from both ambient light and light reflected from a user's finger. The wall 102c may, for example, be an array of Model No. MT553UTBL MultiTaction Cells, manufactured by MultiTouch Ltd, Helsinki, Finland, tiled both vertically and horizontally. In order to provide a variety of expressive means, the wall 102c is operated in such a way that it maintains a “state.” That is, it may react to a given input differently depending on (among other things) the sequence of inputs. For example, using a toolbar, a user can select any of a number of available brush styles and colors. Once selected, the wall is in a state in which subsequent strokes by the stylus will draw a line using the selected brush style and color.
Large differences in the display sizes of large format displays and small format displays make it difficult to view the collaboration data (or digital assets) displayed on the large format display by directly mapping it to the small format displays without any adjustments in the size of the content. For example, if the circle, square and triangle, as displayed on the large format digital display 102c, were all displayed on the mobile device 102f at the same time, then the sizes of the circle, square and triangle would be so small that the user would have difficulty viewing their specific details. Suppose there were specific details within the circle. The user would not be able to see them, because each of the circle, square and triangle would be made small enough to fit onto the screen of the mobile device 102f at the same time. Furthermore, situations can arise when the difference in screen size is so substantial that not all content (or digital assets) in the collaboration data (or workspace) displayed on the large format display can be displayed on the small format display, as the available display space on small format displays is too limited.
The technology disclosed includes logic to adjust the size of the content shared by the leader on the large format display when displaying the content on the small format display used by the follower, so that it is easy to view. For example, when the display size of the small format device is one tenth of the large format display, the server node can reduce (or decrease) the size of the content (digital assets) by ten times. It is understood that the server node can use other size reduction values when displaying content on a small format display. For example, if the display size of the small format device is one tenth of the display size of the large format display, the server node can decrease the size of the content by five times, or if the display size of the small format device is one fourth of the display size of the large format display, the server node can reduce the size of the digital assets to one half (i.e., ½ times) the size of the digital assets as displayed on the large format display. In one implementation, the system allows the user to select a size reduction value for reducing the size of the content when displaying the content on a small format display.
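The arithmetic just described can be summarized in a short sketch: scale the content by the ratio of the follower display size to the leader display size, unless a user-selected reduction value overrides the default. The function name and parameters are illustrative assumptions.

```typescript
// Illustrative arithmetic only: scale content by the ratio of the follower
// display size to the leader display size, unless the user has selected a
// specific reduction value.

function contentScale(
  leaderDisplaySize: number,   // e.g., diagonal size in inches
  followerDisplaySize: number, // same unit as leaderDisplaySize
  userReduction?: number       // optional user-selected factor, e.g., 0.5
): number {
  // Example: a 120-inch (10-foot) leader display and a 12-inch follower
  // display give a default scale of 12 / 120 = 0.1, i.e., one tenth.
  return userReduction ?? followerDisplaySize / leaderDisplaySize;
}
```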
The four example views A, B, C, and D of the small format display in the figure illustrate how a follower can pan to view different portions of the content shared by the leader.
In the first view, labeled as “A”, a circle-shaped graphical object from the canvas on the large format digital display 102c is displayed on the small format digital display of the mobile device 102f. As illustrated, the square and the triangle that are in the viewport of the leader are not displayed on the small format digital display. The follower can pan to the right, thus allowing the content positioned on the right-side of the circle to be displayed on the small format digital display of the mobile device. For example, in the second view, labeled as “B”, a square-shaped graphical object is partially displayed on the small format display of the mobile device along with a part of the circle-shaped graphical object. As the follower keeps panning towards the right, the square-shaped graphical object is completely displayed on the small format display of the mobile device in the view labeled as “C.” However, note that the circle-shaped graphical object is no longer displayed on the small format display of the mobile device in the view “C.” This is because there is not enough display space on the small format display of the mobile device; hence, only a part of the canvas from the leader's large format digital display is displayed in one view. Finally, in the view labeled as “D”, an almost complete triangle-shaped graphical object is displayed on the small format display of the mobile device 102f.
The follower can adjust the zoom level on the display of the small format display of the mobile device to zoom-in or zoom-out when viewing the content displayed on the canvas shared by the leader in a collaboration session. When the display is zoomed-in, the follower can view less content from the leader's canvas on the display of the mobile device. When the display is zoomed-out, the follower can view more content on the display of the mobile device.
The example presented in the figure can be implemented in multiple ways, as described below.
In one implementation, the collaboration server can send the complete spatial event map (SEM) containing all digital assets in the shared canvas to the client-side device of the follower (i.e., the mobile computing device in the figure). In this implementation, the follower node includes the logic to select and display only the digital assets that map to the viewport of its small format display.
In one implementation, the collaboration server includes logic to send, to the follower node (i.e., the client-side device used by the following user), only the part of the spatial event map (or SEM) that matches the display size of the large format display of the leader node. In this implementation, the server node (or collaboration server) can send all of the collaboration data (or digital assets) within the viewport of the leader node with the large format display to the follower node with the small format display. However, the server node includes logic to identify (or flag) the data (i.e., the digital assets) that is within the smaller viewport of the follower node. This identification or flagging allows the follower node to display only the digital assets that are within the smaller viewport of the small format display of the follower node. The identification data allows the follower node to prevent display of digital assets that are outside the viewport of the small format display of the follower node. The data sent to the follower node allows the follower node to select the part of the SEM that includes digital assets having locations mapped to the current viewport of the small format display, so only a portion of the digital assets from the viewport of the large format display are displayed on the small format display of the follower node. This allows the follower node (such as the mobile device 102f) with the small format display to follow the viewport of the large format display while showing only the portion (or part) of the SEM data that includes the digital assets that are within the viewport of the leader. This allows the follower node to always keep the following user bound to the viewport of the leader.
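A minimal sketch of this flagging approach follows, assuming the server marks each asset within the leader's viewport with a boolean flag and the follower renders only the flagged assets. The names and shapes are illustrative assumptions.

```typescript
// Illustrative sketch only of the flagging approach: the server sends all
// assets within the leader's viewport, marking which ones also fall inside
// the follower's smaller viewport; the follower renders only those.

interface Rect { x: number; y: number; width: number; height: number; }

interface FlaggedAsset {
  id: string;
  location: Rect;
  insideFollowerViewport: boolean; // flag computed at the server node
}

function flagAssets(
  assetsInLeaderViewport: { id: string; location: Rect }[],
  followerViewport: Rect
): FlaggedAsset[] {
  return assetsInLeaderViewport.map((asset) => {
    const a = asset.location;
    const inside =
      a.x < followerViewport.x + followerViewport.width &&
      a.x + a.width > followerViewport.x &&
      a.y < followerViewport.y + followerViewport.height &&
      a.y + a.height > followerViewport.y;
    return { ...asset, insideFollowerViewport: inside };
  });
}

// The follower node displays only the flagged assets, preventing display
// of assets outside its viewport.
function assetsToRender(flagged: FlaggedAsset[]): FlaggedAsset[] {
  return flagged.filter((a) => a.insideFollowerViewport);
}
```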
In another implementation, the collaboration server includes logic to send, to the follower node, only a part of the spatial event map (or SEM) that includes digital assets mapped to a smaller portion of the current viewport of the large format display. This allows the server node to send even less data (i.e., fewer digital assets) to a follower node than the data sent in the implementation described above. In this implementation, the server node sends SEM data (to the follower node) that only includes a sub-set of the digital assets that are within the viewport of the leader. The partial SEM data sent by the collaboration server (or the server node) to the follower node (i.e., the client device or the mobile device used by the follower) allows the follower node to display the partially received SEM as is, without making any further changes to the displayed content. Therefore, in this implementation, the logic to adjust the display of content on the small format display of the follower node is implemented at the server node or the collaboration server. As the follower pans to view additional content in the canvas shared by the leader, the follower node sends an update event to the collaboration server with the updated viewport. The collaboration server, upon receiving the updated viewport, determines the content in the shared canvas that maps to the location of the updated viewport of the mobile device. The digital assets in the updated mapping of the SEM to the viewport of the follower node are then sent to the follower node. The follower node then uses the updated SEM to update the digital assets displayed on the small format display.
The technology disclosed can make an initial determination as to what portion of the leader's viewport should be provided to (or focused in on by) the mobile device for display. For example, if the center area of the leader node's viewport displays the square, then the default would be for the mobile device to initially focus in on the square, as illustrated in view “C” of the figure.
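One possible default-focus heuristic consistent with this behavior is sketched below: pick the asset whose center is nearest the center of the leader's viewport. The heuristic and all names are assumptions for illustration, not the disclosed method.

```typescript
// Illustrative sketch only: a possible default-focus heuristic that picks
// the asset whose center is nearest the center of the leader's viewport.

interface Rect { x: number; y: number; width: number; height: number; }

function initialFocus(
  assets: { id: string; location: Rect }[],
  leaderViewport: Rect
): string | undefined {
  const cx = leaderViewport.x + leaderViewport.width / 2;
  const cy = leaderViewport.y + leaderViewport.height / 2;
  let best: { id: string; dist: number } | undefined;
  for (const asset of assets) {
    const ax = asset.location.x + asset.location.width / 2;
    const ay = asset.location.y + asset.location.height / 2;
    const dist = Math.hypot(ax - cx, ay - cy);
    if (best === undefined || dist < best.dist) {
      best = { id: asset.id, dist };
    }
  }
  return best?.id; // asset for the mobile device to initially focus on
}
```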
The server node includes logic to determine a target location of the toolbar for display on the display linked to the follower node. The example illustrated in the figure shows the toolbar displayed at a target location that does not overlap the digital assets displayed on the follower node's display.
The technology disclosed can automatically adjust the zoom level of the content on the shared canvas for display on the large format display so that it is suitable for viewing by the follower.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the technology disclosed may consist of any such feature or combination of features. In view of the foregoing description, it will be evident to a person skilled in the art that various modifications may be made within the scope of the technology disclosed. Some or all of the operations that are described above as being performed by the server node can also be performed by a client node (such as the leader node and/or the follower node).
The physical hardware component of network interfaces is sometimes referred to as network interface cards (NICs), although they need not be in the form of cards: for instance, they could be in the form of integrated circuits (ICs) and connectors fitted directly onto a motherboard, or in the form of macrocells fabricated on a single integrated circuit chip with other components of the computer system.
User interface input devices 622 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touch screen incorporated into the display (including the touch sensitive portions of large format digital display such as 102c), audio input devices such as voice recognition systems, microphones, and other types of tangible input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into the computer system or onto communication network 204.
User interface output devices 620 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from the computer system to the user or to another machine or computer system.
Storage subsystem 624 stores the basic programming and data constructs that provide the functionality of certain embodiments of the present invention.
The storage subsystem 624, when used for implementation of server nodes, comprises a product including a non-transitory computer readable medium storing a machine-readable data structure including a spatial event map which locates events in a workspace, wherein the spatial event map includes a log of events, entries in the log having a location of a graphical target of the event in the workspace and a time. Also, the storage subsystem 624 comprises a product including executable instructions for performing the procedures described herein associated with the server node.
The storage subsystem 624, when used for implementation of client nodes, comprises a product including a non-transitory computer readable medium storing a machine-readable data structure including a spatial event map in the form of a cached copy as explained below, which locates events in a workspace, wherein the spatial event map includes a log of events, entries in the log having a location of a graphical target of the event in the workspace and a time. Also, the storage subsystem 624 comprises a product including executable instructions for performing the procedures described herein associated with the client node.
For example, the various modules implementing the functionality of certain embodiments of the invention may be stored in storage subsystem 624. These software modules are generally executed by processor subsystem 614.
Memory subsystem 626 typically includes a number of memories including a main random-access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored. File storage subsystem 628 provides persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD ROM drive, an optical drive, or removable media cartridges. The databases and modules implementing the functionality of certain embodiments of the invention may have been provided on a computer readable medium such as one or more CD-ROMs and may be stored by file storage subsystem 628. The host memory 626 contains, among other things, computer instructions which, when executed by the processor subsystem 614, cause the computer system to operate or perform functions as described herein. As used herein, processes and software that are said to run in or on the “host” or the “computer,” execute on the processor subsystem 614 in response to computer instructions and data in the host memory subsystem 626 including any other local or remote storage for such instructions and data.
Bus subsystem 612 provides a mechanism for letting the various components and subsystems of a computer system communicate with each other as intended. Although bus subsystem 612 is shown schematically as a single bus, alternative embodiments of the bus subsystem may use multiple busses.
The computer system 610 itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, or any other data processing system or user device. In one embodiment, a computer system includes several computer systems, each controlling one of the tiles that make up the large format display such as 102c. Due to the ever-changing nature of computers and networks, the description of computer system 210 depicted in the figure is intended only as a specific example for purposes of illustrating the preferred embodiments.
Certain information about the drawing regions active on the digital display 102c are stored in a database accessible to the computer system 210 of the display client. The database can take on many forms in different embodiments, including but not limited to a MongoDB database, an XML database, a relational database, or an object-oriented database.
The foregoing description of preferred embodiments of the present technology has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. For example, though the displays described herein are of large format, small format displays can also be arranged to use multiple drawing regions, though multiple drawing regions are more useful for displays that are at least as large as 12 feet in width. In particular, and without limitation, any and all variations described, suggested by the Background section of this patent application or by the material incorporated by reference are specifically incorporated by reference into the description herein of embodiments of the technology. In addition, any and all variations described, suggested or incorporated by reference herein with respect to any one embodiment are also to be considered taught with respect to all other embodiments. The embodiments described herein were chosen and described in order to best explain the principles of the technology and its practical application, thereby enabling others skilled in the art to understand the technology for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the following claims and their equivalents.
While the technology disclosed is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the technology disclosed and the scope of the following claims. It is contemplated that technologies described herein can be implemented using collaboration data structures other than the spatial event map.
This application claims the benefit of U.S. Provisional Patent Application No. 63/359,709 (Attorney Docket No. HAWT 1043-1), entitled, “Virtual Workspace Viewport Following in Collaboration Systems,” filed on Jul. 8, 2022, and also claims the benefit of U.S. Provisional Patent Application No. 63/459,223 (Attorney Docket No. HAWT 1047-1), entitled, “Method and System for Summoning Adaptive Toolbar Items and Digital Assets Associated Therewith on a Large Format Screen Within a Digital Collaboration Environment,” filed on Apr. 13, 2023. Both of the above-listed applications are incorporated herein by reference.