There are over 12 million conference rooms worldwide, and most are underserved by their current information display solutions. Video cables carry a single source from one device to a display and, other than physically connecting and disconnecting the cable, offer little control over what is displayed from that device and when. Video switching hardware overcomes part of this problem by allowing different clients to display to a single screen simultaneously, but this hardware is restricted by video transmission standards, leaving little control to the users of each device. Finally, screen sharing software offers some capability for users to send images of their devices to a display, but these approaches do little more than replace the physical cable with software and, potentially, IP-based networks. What is needed is a system that allows simultaneous interaction for collaboration among multiple users, with the capability for multiple sources, potentially from a single device, to be simultaneously published to a common display device that can be controlled from any of the connected devices without restriction.
The present system comprises software that transforms commodity display hardware, and a standard compute device connected to that display, into a managed, shared presentation and collaboration environment. By providing a common communications, presentation, and management system in software, the display can act as part of public infrastructure, allowing mobile users to connect, publish media, share with collaborators, and communicate more effectively. Thus, the present method allows display devices to be used and managed like other aspects of typical IT infrastructure, such as shared storage, printers, and cloud-based computing.
The present system also involves software that is installed on a variety of devices and allows them to connect to this display infrastructure. The software provides interconnection of many user display devices and their associated media to a single shared display. Users are able to utilize the same display at the same time, allowing many different devices, with potentially disparate datasets, to be visually compared on the same display. The present system uses a sophisticated centralized state and control management system to enable this type of simultaneous media publishing without each user display device having to manage interaction.
In addition to selectively publishing media from each user display device to any managed display, users can simultaneously control each media element that has been posted to the shared display. Because the display software transmits state updates to all connected users, each display device can provide individuals the ability to select and control media that may or may not have been published by that user. The present system, therefore, allows multiple users to share media, collaborate, and communicate freely without the restrictions imposed by more traditional video switchers, direct video cabling of each device, or simple screen sharing applications.
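The exchange just described, in which media frames, control commands, and state updates all travel over the same connection, can be sketched as a small set of message constructors. This is a minimal illustration only; the message names and fields below are assumptions for clarity, not the system's actual wire protocol.

```python
# Hypothetical message constructors; names and fields are illustrative
# assumptions, not the described system's actual protocol.

def make_publish_msg(user_id, media_uid, payload_len):
    """A client announces a new media post to the shared display."""
    return {"type": "publish", "user": user_id, "uid": media_uid,
            "payload_len": payload_len}

def make_control_msg(user_id, media_uid, dx, dy, scale):
    """Any connected client may manipulate any posted media element,
    including elements published by other users."""
    return {"type": "control", "user": user_id, "uid": media_uid,
            "transform": {"dx": dx, "dy": dy, "scale": scale}}

def make_state_update(post_uids, users):
    """The display broadcasts global state so every client can see and
    select all published media, regardless of origin."""
    return {"type": "state", "posts": sorted(post_uids), "users": len(users)}
```

Because control messages carry a media identifier rather than a video signal, any device can direct the display, which is the key distinction from cable- or switcher-based approaches.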
As an example of the type of collaboration that the present system makes possible, consider a conference room with a single display that is connected to a commodity PC. Three users each have laptop computers that are connected to a wireless network that is shared by the commodity PC. The present system provides the following behaviors:
In distinction to existing video switching systems, each connected device is capable of both publishing imagery to the shared display and controlling the shared display simultaneously with other devices. Traditional video switching systems provide distinct video inputs (typically based on existing video standards) that capture, route, and display various video signals. These sources, because they have been passed through a traditional video cable, do not carry extensive control information and, therefore, are unable to control the display. In distinction to video switches, the present system exchanges media (video, imagery, application window pixels), control information, and other metadata (user information, location of source, etc.).
In distinction to software-based screen sharing applications, the present system is capable of providing group control over all published and shared media simultaneously.
In one embodiment, a set of wirelessly enabled displays is embedded in or attached to a general compute device containing the display software module. Client software modules, installed on at least two different devices, allow those devices to wirelessly connect to the display module. The display module can optionally display presence information and metadata about connected users based on the incoming connection signal from the two devices. Each of the users of the devices can then selectively identify a media source to be transmitted to the display for rendering. The display module receives live media streams and simultaneously renders the streams onto the physical display device at a position and scale, and with visual effects, that are controlled by each of the connected users.
Client hosts 116 include local memory 111 for temporarily storing applications, context, and other current information, and may be connected to a local database or other repository of media files 109. Typically, a client host is integrated with a PC, laptop, or a hand-held device such as a smart phone or tablet. This integrated client host 116 and display 108 is hereinafter referred to as a “client display device” or simply “client” 106. Dotted line 120 indicates a network interconnection between clients 106 and host shared display 102, which may be wireless or wired.
Networking software 204 provides connection control and packet receipt/delivery of information sent between host controller 104 and clients 106. Networking software 204 receives client-originated packets 212 from, and sends system-originated packets 211 to, client host 116 executing client software 107. Video encoder/decoder 206 encodes/decodes packetized transmitted/received video information.
Control and state management software 205 maintains the global state of the shared display, including media streams that have been created by users and each of their unique identifiers, metadata 122 about each user such as their user class, geospatial position, and user name, and which users are currently expressing control commands. Messages from users invoke updates to the control and state management module to ensure consistent state at all times. In addition, this state is shared periodically with all connected users. In doing so, each connected client is able to see information about all media that has been shared to the display (for example, a list of all media posts maintained as thumbnails) and even select and control media originating from other devices.
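A minimal sketch of such a control and state management module follows. The class and field names are assumptions for illustration; the actual module 205 would also track user classes, positions, and active control commands as described above.

```python
import itertools

class StateManager:
    """Sketch of a control/state management module: tracks media posts
    by unique identifier, stores per-user metadata, and produces the
    periodic state snapshot shared with all connected clients.
    Field names here are illustrative assumptions."""

    def __init__(self):
        self._uid_counter = itertools.count(1)
        self.posts = {}   # media UID -> geometry and ownership state
        self.users = {}   # user ID -> metadata (class, position, name)

    def add_user(self, user_id, metadata):
        self.users[user_id] = metadata

    def add_post(self, owner):
        """Register a new media post and return its unique identifier."""
        uid = next(self._uid_counter)
        self.posts[uid] = {"owner": owner, "x": 0.0, "y": 0.0, "scale": 1.0}
        return uid

    def apply_transform(self, uid, dx=0.0, dy=0.0, ds=0.0):
        # Any user may control any post; ownership is deliberately
        # not checked, matching the shared-control model described.
        post = self.posts[uid]
        post["x"] += dx
        post["y"] += dy
        post["scale"] += ds

    def snapshot(self):
        """State shared periodically with all connected clients."""
        return {"posts": dict(self.posts), "users": dict(self.users)}
```

The snapshot is what lets each client render a thumbnail list of every post, including posts published by other devices.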
Scene graph and display manager 208 maintains a 3D geometric state of all visual elements displayed on screen 510 of display device 102 (shown in
Layout control and other client commands are used to add or remove particular posted media elements from the scene graph 117. Scene graph 117 comprises indicia of the layout of the currently-connected media posts 408 on screen 510 of display device 102. For example, a ‘presentation’ mode command builds a scene graph 117 that includes only a single element, positioned and scaled as large as the physical constraints of screen 510 permit. ‘Free form’ mode builds a scene graph 117 in which all posted media are positioned based on each element's individual placement and scale offsets, which can be controlled by users.
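The two layout modes just described can be sketched as a scene graph builder. The node structure below is an assumption for illustration; the actual scene graph 117 maintains full 3D geometric state.

```python
def build_scene_graph(posts, mode, screen_w, screen_h, selected=None):
    """Sketch of layout-mode scene graph construction. 'presentation'
    yields a single node scaled to the full screen; 'freeform' keeps
    each post's user-controlled placement and scale offsets.
    The node dictionary layout is an illustrative assumption."""
    if mode == "presentation":
        uid = selected if selected is not None else next(iter(posts))
        return [{"uid": uid, "x": 0, "y": 0, "w": screen_w, "h": screen_h}]
    if mode == "freeform":
        return [{"uid": uid, "x": p["x"], "y": p["y"],
                 "w": int(screen_w * p["scale"]),
                 "h": int(screen_h * p["scale"])}
                for uid, p in posts.items()]
    raise ValueError("unknown layout mode: " + mode)
```

Switching modes rebuilds the graph from the same underlying post state, so no media is lost when the layout changes.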
Presentation engine 209 takes decoded video (from decoder 206) and interprets the current scene graph 117 into a complete composite display that is rendered as a composite video image 210, which drives one or more shared display devices 102.
As shown in
Media files 109 are encoded and decoded via video encoder/decoder 306 and published by client software 107, to one or more displays 102.
User interface 308 presents control options and global state of the system to a user viewing a local client display 108. For example, buttons to select various layouts of the shared display, and to selectively publish new media are presented. In addition, updates to the global state (when a second user publishes a new media element, for example) are drawn by the user interface to alert the user to the change and to provide control over the newly published element.
Networking layer 302 receives system-originated packets 211 from, and sends client-originated packets 212 to, host controller 104 executing display software 105.
By selecting buttons 421-425, for example, the client can modify how the (potentially many) different media posts are displayed together. Presentation mode button 421 instructs the shared display to display only the selected media post 408 and to hide all other media posts. Quad mode button 422 instructs the server to display the selected element and its three neighbors (as shown in the list of buttons 411) in a regular grid, with each element being the same size. Show All mode button 423 instructs the server to display all elements in a 3D cylindrical grid at a size that allows every post to fit on the screen. FlipBook mode button 424 instructs the display software to show the selected element in the foreground of the display at a size that takes up a large portion of the screen, while other media posts are rotated in a perspective mode and rendered as a stack preceding and following the selected element. Free Roam mode button 425 places all media at the last position and scale defined by the users, each of whom is able to position and scale each element individually using the joystick controller 430.
In an exemplary embodiment, the lower section of client display 108 includes a media preview bar 440, which displays thumbnails of media posts sent from the display software to clients, allowing each client to see, select, and update the system shared state for all published media (including media generated by other clients). Each of the shaded boxes 412 indicates a published media post currently shown on shared display 102 based on the layout and display settings, i.e., based on signals/video incoming from the server that alert the client, for example, that the layout has changed to quad-view (4 items on the screen). Posts that are not currently covered by shaded box 412 are not visible on the screen under the current layout settings but remain part of the shared display session, continue to be updated live, and can be brought into view on the shared display through user selection and control. Buttons 413 and 414 allow a user to scroll through the published media entries in media preview bar 440, potentially bringing each of the media posts corresponding to the thumbnails into view on the shared display.
For example, in presentation mode only one element is displayed on the shared screen at any given moment; in this case, box 412 covers only a single element (media post). By using arrows 414 and 413 to rotate the thumbnail preview windows of the various published elements, the elements ‘slide’ into area 412, and a message is transmitted to the display software to modify what is being displayed to correspond to the new thumbnail that is now in area 412.
In an exemplary embodiment, media preview bar 440 also allows users to select any shared media post by left clicking the corresponding thumbnail. The selected media element is then drawn as a large preview into window 409 and can be controlled via a user input device such as a joystick 430. Free-form control (‘Virtual’) joystick 430 is a software-generated image that acts as a user input device which allows a user to position and scale media and other types of images displayed in application window 409.
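The relationship between the scroll position of media preview bar 440 and the posts covered by shaded box 412 can be sketched as a simple windowing function. The per-layout counts come from the description above; the function itself is an illustrative assumption.

```python
def visible_posts(all_uids, layout, scroll_offset=0):
    """Sketch: which published posts fall inside the shaded box (412)
    for a given layout. Counts per layout follow the description:
    presentation shows 1 post, quad shows 4, show-all shows every post.
    Posts outside the window stay live in the session, just hidden."""
    counts = {"presentation": 1, "quad": 4, "show_all": len(all_uids)}
    window = counts[layout]
    return all_uids[scroll_offset:scroll_offset + window]
```

Scrolling with buttons 413/414 corresponds to changing the offset, which both moves the thumbnails and triggers a message updating what the shared display renders.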
As indicated in the left-hand half of frame 2, client 1, using device 106(1), may, for example, (1) move media post A below window B (as indicated by arrow 521), (2) resize window A, as indicated by arrow 511, and (3) enlarge or ‘zoom in on’ window A, by manipulating window A on client device 106(1). As indicated in the right-hand half of frame 2, a user of device 106(1) may move (arrow 521), and, as indicated by arrow 512, may resize or zoom in on another client's window, observing that window D was posted (published) by client 3 via device 106(3). Any of the above actions by different users may occur simultaneously.
Icons 501(1)-501(2) presented on display device 102 represent currently-connected clients/users.
As shown in frame 1 of
As indicated in frame 2, client 1, using device 106(1), may, for example, (1) move media post A below window B (as indicated by arrow 531), (2) resize window A, as indicated by arrow 532, and also (3) enlarge or ‘zoom in on’ window A, by manipulating window A on client device 106(1), as described above.
At step 610, a client 106(*) initiates display discovery by detecting identity packets 651 and examining the packet data to determine what servers 115 and corresponding displays 102 are available on the network. At step 615, a user selects a shared display 102 from a list of available servers/host displays on the local network.
At step 620, a connection request packet 652 containing user access information is transmitted by the client to the selected server 115. As indicated by block 632, after a predetermined delay, host server 115 continues to check for a connection request packet 652 from client 106 at step 635. When the request packet 652 is received, server 115 responds with a packet indicating that access is granted or, in the case where the server was configured for password access, with a packet that requests a password. Connections between server 115 and clients 106 are processed in a separate, parallel process so that multiple connection requests can be handled at the same time.
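The discovery step and the grant-or-password decision can be sketched as two small functions. The packet field names are assumptions for illustration; the real packets 651/652 may carry additional access information.

```python
def discover_displays(identity_packets):
    """Step 610 sketch: collect the set of available servers advertised
    in identity packets on the local network. Packet fields are
    illustrative assumptions."""
    return sorted({p["server"] for p in identity_packets})

def handle_connection_request(request, server_config):
    """Step 635 sketch: grant access outright, or request a password
    when the server was configured for password access."""
    if server_config.get("password") is None:
        return {"type": "access_granted"}
    if request.get("password") == server_config["password"]:
        return {"type": "access_granted"}
    return {"type": "password_required"}
```

Running each request through a parallel handler (one task per client) is what allows multiple connection requests to be serviced simultaneously.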
At step 640, the present connection is processed for display access by updating the display software control and state management module 205 to reflect that the display has accepted a new user. At step 645, a unique user ID (UID) 646 is generated, and visual information about that user is updated; for example, a unique identifying color and icon are selected that reflect information about that user, including client host hardware type, physical location, and connection type. This information can be rendered to the display to visually notify other users proximate to the display. The UID 646 is then sent to client 106. At step 650, display 102 is graphically updated to reflect video data published by client 106 to the display.
If a publish command was selected, then at step 718, client 106 sends a new media post (publish) request 792 to server 115. At step 720, a media frame 726 is captured by client pixel capture module 323 by decoding media on disk or capturing screen pixels corresponding to the source (media, application, screen area) being published. At step 725, the updated video frame 726 is sent to server 115 and host 104, as long as the connection remains established (step 730). If the client-host connection is dropped, then pixel capture is terminated and a message is sent to host 104 indicating that the present client's media posts are to be deleted, at step 735.
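The capture-and-send loop of steps 720-735 can be sketched with the capture source, network send, and connection check injected as stand-ins. These parameters are illustrative assumptions replacing the real pixel capture module 323, networking layer, and connection state.

```python
def stream_media(capture_frame, send, connected):
    """Sketch of the client publish loop (steps 720-735): capture and
    transmit frames while the connection holds, then signal that this
    client's posts should be deleted when the connection drops.
    capture_frame, send, and connected are injected stand-ins for the
    real capture module, network layer, and connection check."""
    frames_sent = 0
    while connected():
        send({"type": "frame", "data": capture_frame()})
        frames_sent += 1
    # Step 735: connection dropped; capture ends and the host is told
    # to delete this client's media posts.
    send({"type": "delete_posts"})
    return frames_sent
```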
If a media control command was selected at step 710, then at step 712, client 106 initiates control of a media post presently posted by any client 106(*). Control flow continues as shown in
At step 750, server 115 receives media post request packet 792. At step 755, display software 105 creates a unique identifier (UID) 646 that it associates with the newly created window 408. At step 765, server 115 creates a decoded video stream 766 for updates. At step 770, host server 115 sends client 106 an update 794 including additional display state information, including the media streams (and their UIDs) that have been posted, and the number of users connected.
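Steps 750-770 on the server side can be sketched as a single handler: mint a UID for the new post, register it in the shared state, and build the broadcast update. The state dictionary layout is an illustrative assumption.

```python
def handle_publish_request(state, client_id):
    """Sketch of server-side publish handling (steps 750-770): create a
    unique identifier for the new media post, register it, and build
    the state update sent to all connected clients. The dictionary
    layout is an assumption for illustration."""
    uid = state["next_uid"]
    state["next_uid"] += 1
    state["posts"][uid] = {"owner": client_id}
    update = {"type": "state_update",
              "posts": sorted(state["posts"]),
              "user_count": len(state["users"])}
    return uid, update
```

Returning the UID to the publishing client acknowledges the post, while the update goes to every client so all interfaces stay consistent.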
At step 645 (per
At step 780, display software draws the pixel data frame into the current position, scale, offset of the media post (as indicated by scene graph 117) to create a visible frame on display 102 that corresponds to the application view that was captured on client device 106.
At step 740, client 106 receives the update including the additional display state information and updates/stores the information at step 742.
At this point, client 106 returns to step 710, and may either publish another window (and continue with the above set of steps) or move joystick (or other user input device) 430, at step 712, and continue as indicated below in the description of
Steps in shaded area 701 (steps 720, 725, 730, 775, 780, 785) are performed asynchronously. Continued messages from a client 106 that contain a video stream are also received in parallel by the networking layer and are decoded by the video decoder module 206 to create pixel updates that are drawn into the new position of the shared media post on display 102. If a media post ceases to be active (step 785), then the media post is deleted from display 102, at step 790.
In this example, two users, A and B, use the present method to simultaneously share and control a shared display 102. In this example, host display software 105 is run on a PC 104 connected to the shared display. The PC is on a local network with client devices 106 A and B.
[Step 710] User A [using client display 106(1)] selects a media post control option, and selects an application or media to publish to the host display.
[Step 718] User A selects “publish” from its interface control panel and transmits a request to the host to add a new media post to (i.e., publish) the shared display.
[Steps 720-725] Client A begins transmitting a video stream 726 corresponding to the encoded data for the media post.
[Step 750] Server receives publish request.
[Step 755] Shared display software creates a unique identifier (UID 646) that it associates with the newly created window, and [step 645] then sends this UID to user A to acknowledge the creation of a new media post.
[Step 775] Host server 115 receives the video stream from client A via the networking layer and, based on its type, decodes the data via video encoder/decoder module 206 to create a pixel data frame 776.
[Step 780] The pixel data frame is then drawn into the current position, scale, offset of the media post to create a visible frame on display 510 that corresponds to the application view that was captured on client A.
[Step 770] Host server 115 sends client A and other connected clients an update including additional display state information, including new media streams (and their UIDs 646) that have been posted, and the number of users connected.
[Step 740] Client A receives the update including the additional display state information and updates/stores the information [Step 742].
At this point, client A returns to step 710, and may either publish another window (and continue with the above set of steps) or move joystick (or other user input device) 430, at step 712, and continue as indicated below. Client B connects to the shared display using a similar set of steps as described above for client A.
[Step 740] Client B receives the update to the user interface via state update packets that now contain information including media post UIDs 646 and corresponding video frames being shown on the host display 102 (from step 770).
[Step 710 with different selection] User B selects a media post UID 646 by clicking an entry in the media preview bar 440.
[Step 712] Dragging the joystick initiates a message, sent to server 115, [at step 713] that includes:
(1) a transform (i.e., a translation) and
(2) the unique identifier [UID 646] of the media post being manipulated
[Step 714] The server receives this packet and decodes it using the control and state management module 205. The state of the server (stored in this module) is updated to reflect the movement that was just applied to a particular UID 646.
[Step 715] The scene graph 117 is updated to graphically depict the new state.
[Step 780] Presentation engine 209 then draws the new scene according to the new graphical state of media.
At step 714, host 104 receives packet 723 from client 106, decodes the packet, and applies the transform to the current geometric and visual appearance state of the correct media element. In this example, a media element is translated by applying the translation to the current position, stored in the control and state manager 205 for that element. At step 715, scene graph 117 is updated to graphically depict the new state of display 102. Finally, the presentation engine 209 interprets the entire scene graph 117 that holds information about the visual appearance of all media elements, including the one that was just updated, and creates a composite image 210 for the display 102. Control resumes at step 780 in
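The transform application at step 714 can be sketched as follows. The packet and element field names are assumptions for illustration; the real state lives in the control and state manager 205.

```python
def apply_control_packet(elements, packet):
    """Sketch of steps 714-715: decode a control packet carrying a media
    post UID and a translation, and apply the translation to that
    element's stored position before the scene graph is re-rendered.
    Field names are illustrative assumptions."""
    element = elements[packet["uid"]]
    element["x"] += packet["transform"]["dx"]
    element["y"] += packet["transform"]["dy"]
    return element
```

After the state update, the scene graph is rebuilt and the presentation engine composites all elements, including the one just moved, into the next frame for the shared display.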
The sequence above thus illustrates two users, A and B, using the present method to connect to, and simultaneously share and control, a display.
The above description of certain embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise forms disclosed. The terms used in the claims should not be construed to limit the invention to the specific embodiments disclosed in the specification, rather, the scope of the invention is to be determined by the following claims.
This application claims priority to U.S. Provisional Patent Application Ser. No. 61/769,609 filed Feb. 26, 2013, the disclosure of which is incorporated herein by reference.