Methods and systems for collaborative remote application sharing and conferencing

Information

  • Patent Grant
  • Patent Number
    10,454,979
  • Date Filed
    Monday, April 24, 2017
  • Date Issued
    Tuesday, October 22, 2019
Abstract
Systems and methods for providing a collaborative conferencing capability to an application remotely-accessed by client computing devices. A client media sharing application is provided in a client tier, and the client media sharing application allows at least one of the client computing devices to share media with the other client computing devices. A conferencing manager application that receives the shared media is provided to the server tier. The conferencing manager application makes the shared media available to the client computing devices.
Description
BACKGROUND

Ubiquitous remote access to services, application programs and data has become commonplace as a result of the growth and availability of broadband and wireless network access. As such, users are accessing application programs and data using an ever-growing variety of client devices (e.g., mobile devices, tablet computing devices, laptop/notebook/desktop computers, etc.). Data may be communicated to the devices from a remote server over a variety of networks including 3G and 4G mobile data networks, wireless networks such as WiFi and WiMax, wired networks, etc. Clients may connect to a server offering the services, application programs and data across many disparate network bandwidths and latencies.


In such an environment, applications may also be shared among remote participants in a collaborative session. However, when collaborating, participants may be limited solely to the functionalities provided by the shared application, thus limiting the collaborative session. Specifically, participants may be limited because they are unable to share media, i.e., audio, video, desktop screen scrapes, image libraries, etc., with other participants in the collaborative session.


SUMMARY

Disclosed herein are systems and methods for providing a collaborative conferencing capability to a remotely-accessed application. A method of providing a collaborative conferencing capability to a remotely-accessed application may include providing a tiered remote access framework comprising an application tier, a server tier and a client tier, the tiered remote access framework communicating first information regarding the remotely-accessed application between client computing devices accessing the remotely-accessed application within a state model that is used to display the remotely-accessed application at the client computing devices; providing a server remote access application in the server tier, the server remote access application being capable of modifying the state model; providing a client remote access application in either the client tier or the application tier; providing a client media sharing application in the client tier, the client media sharing application allowing at least one of the client computing devices to share media with the client computing devices; providing a conferencing manager application to the server tier, the conferencing manager application receiving the shared media; and modifying the state model to further include the shared media such that the shared media is provided in at least one of the client computing devices.


In another implementation, a method of providing a collaborative conferencing capability may include providing a tiered remote access framework comprising a server tier and a client tier, the tiered remote access framework communicating information regarding shared media between client computing devices accessing the shared media within a state model that is used to display the shared media at the client computing devices; providing a server remote access application in the server tier, the server remote access application being capable of modifying the state model; providing a client media sharing application in the client tier, the client media sharing application allowing at least one of the client computing devices to share the shared media with the client computing devices; providing a conferencing manager application to the server tier, the conferencing manager application receiving the shared media; and modifying the state model to further include the shared media such that the shared media is provided in at least one of the client computing devices.


Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.



FIG. 1 is a simplified block diagram illustrating a system for providing remote access to an application at a remote device via a computer network;



FIG. 2 is a state model in accordance with the present disclosure;



FIG. 3 illustrates a tree within an XML state model document;



FIG. 4 illustrates additional aspects of the system of FIG. 1;



FIG. 5A is a simplified block diagram illustrating systems for providing conferencing around a remotely-accessed application program;



FIG. 5B is a simplified block diagram illustrating systems for providing conferencing in a remote environment;



FIGS. 6A and 6B illustrate flow diagrams of example operations performed within the systems of FIGS. 5A and 5B;



FIG. 7 illustrates an example user interface of a viewing-participant's client computing device during a collaborative conferencing session;



FIG. 8 illustrates an example user interface of a sharing-participant's client computing device during a collaborative conferencing session;



FIG. 9 illustrates a second example user interface of a viewing-participant's client computing device during a collaborative conferencing session;



FIG. 10 illustrates a third example user interface of a viewing-participant's client computing device during a collaborative conferencing session;



FIG. 11 illustrates an example user interface including a conferencing manager view of a sharing-participant's client computing device during a collaborative conferencing session; and



FIG. 12 illustrates an exemplary computing device.





DETAILED DESCRIPTION

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. While implementations will be described for remotely accessing applications, it will become evident to those skilled in the art that the implementations are not limited thereto, but are applicable for remotely accessing any type of data or service via a remote device.


Referring to FIG. 1, there is illustrated a system 100 for providing remote access to an application, data or other service via a computer network. The system comprises a client computer 112A or 112B, such as a wireless handheld device, for example, an IPHONE 112A or a BLACKBERRY 112B, connected via a computer network 110 such as, for example, the Internet, to a server 102B. Similarly, the client computing devices may also include a desktop/notebook personal computer 112C or a tablet device 112N that are connected by the communication network 110 to the server 102B. It is noted that the connections to the communication network 110 may be any type of connection, for example, Wi-Fi (IEEE 802.11x), WiMax (IEEE 802.16), Ethernet, 3G, 4G, etc.


The server 102B is connected, for example, via the computer network 110 to a Local Area Network (LAN) 109 or may be directly connected to the computer network 110. For example, the LAN 109 is an internal computer network of an institution such as a hospital, a bank, a large business, or a government department. Typically, such institutions still use a mainframe computer 102A and a database 108 connected to the LAN 109. Numerous application programs 107A may be stored in memory 106A of the mainframe computer 102A and executed on a processor 104A. Similarly, numerous application programs 107B may be stored in memory 106B of the server 102B and executed on a processor 104B. The application programs 107A and 107B may be “services” offered for remote access. The mainframe computer 102A, the server 102B and the client computers 112A, 112B, 112C or 112N may be implemented using hardware such as that shown in the general purpose computing device of FIG. 12.


A client remote access application 121A, 121B, 121C, 121N may be designed for providing user interaction for displaying data and/or imagery in a human comprehensible fashion and for determining user input data in dependence upon received user instructions for interacting with the application program using, for example, a graphical display with touch-screen 114A or a graphical display 114B/114N and a keyboard 116B/116C of the client computers 112A, 112B, 112C, 112N, respectively. For example, the client remote access application is performed by executing executable commands on processor 118A, 118B, 118C, 118N with the commands being stored in memory 120A, 120B, 120C, 120N of the client computer 112A, 112B, 112C, 112N, respectively.


Alternatively or additionally, a user interface program is executed on the server 102B (as one of application programs 107B) which is then accessed via a URL by a generic client application such as, for example, a web browser executed on the client computer 112A, 112B. The user interface is implemented using, for example, Hypertext Markup Language (HTML) 5. In some implementations, the server 102B may participate in a collaborative session with the client computing devices 112A, 112B, 112C . . . 112N. For example, the aforementioned one of the application programs 107B may enable the server 102B to collaboratively interact with the application program 107A or another application program 107B and the client remote access applications 121A, 121B, 121C, 121N. As such, the server 102B and each of the participating client computing devices 112A, 112B, 112C . . . 112N may present a synchronized view of the display of the application program.


The operation of a server remote access application 111B with the client remote access application (any of 121A, 121B, 121C, 121N, or one of application programs 107B) is performed in cooperation with a state model 200, as illustrated in FIG. 2. An example of the server remote access program is PUREWEB, available from Calgary Scientific, Alberta, Canada. When executed, the client remote access application updates the state model 200 in accordance with user input data received from a user interface program. The client remote access application may generate control data in accordance with the updated state model 200, and provide the same to the server remote access application 111B running on the server 102B.


Upon receipt of application data from an application program 107A or 107B, the server remote access application 111B updates the state model 200 in accordance with the screen or application data, generates presentation data in accordance with the updated state model 200, and provides the same to the client remote access application 121A, 121B, 121C, 121N on the client computing device. The state model 200 comprises an association of logical elements of the application program with corresponding states of the application program, with the logical elements being in a hierarchical order. For example, the logical elements may be a screen, a menu, a submenu, a button, etc. that make up the application program user interface. This enables the client device, for example, to natively display the logical elements. As such, a menu of the application program that is presented on a mobile phone will look like a native menu of the mobile phone. Similarly, the menu of the application program that is presented on a desktop computer will look like a native menu of the desktop computer operating system.
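As a rough illustration of this idea (the element names and widget mappings below are hypothetical, not taken from the patent), a state model can be viewed as a hierarchy of logical elements that each client maps onto its own native widgets:

```python
# Hypothetical sketch: a state model associating hierarchical logical
# elements (screen -> menu -> button) with application states, so each
# client can render the same element with platform-native widgets.
state_model = {
    "screen": {
        "name": "Main",
        "state": "visible",
        "children": [
            {"menu": {"name": "File", "state": "open",
                      "children": [{"button": {"name": "Save",
                                               "state": "enabled"}}]}},
        ],
    },
}

def render(element, platform):
    """Map a logical element onto a platform-native widget name."""
    kind, props = next(iter(element.items()))
    native = {"mobile": {"menu": "ActionSheet", "button": "UIButton"},
              "desktop": {"menu": "MenuBar", "button": "PushButton"}}
    widget = native[platform].get(kind, kind)
    return f"{widget}({props['name']}, {props['state']})"

# The same logical menu renders natively on each device type.
menu = state_model["screen"]["children"][0]
print(render(menu, "mobile"))   # ActionSheet(File, open)
print(render(menu, "desktop"))  # MenuBar(File, open)
```

The point of the sketch is that only the logical hierarchy and states travel in the state model; the widget choice is left to each client.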


The state model 200 is determined such that each of the logical elements is associated with a corresponding state of the application program 107A or 107B. The state model 200 may be determined such that the logical elements are associated with user interactions. For example, the logical elements of the application program are determined such that the logical elements comprise transition elements with each transition element relating a change of the state model 200 to one of control data and application representation data associated therewith.


In some implementations, two or more of the client computing devices 112A, 112B, 112C . . . 112N and/or the server 102B may collaboratively interact with the application program 107A or 107B. As such, by communicating state information between each of the client computing devices 112A, 112B, 112C . . . 112N and/or the server 102B and/or the mainframe computer 102A participating in a collaborative session, each of the participating client computing devices 112A, 112B, 112C . . . 112N may present a synchronized view of the display of the application program 107A or 107B.


In accordance with some implementations, the system 100 may provide for decoupled application extensions. Such extensions are provided as part of the server remote access application 111B (e.g., as a plug-in), the client remote access applications 121A, 121B, 121C, 121N (e.g., as part of a client software development kit (SDK)), one of the applications 107B (e.g., as part of a server SDK), or combinations thereof to provide features and functionalities that are not otherwise provided by the application programs 107A or 107B. These are described more fully with regard to FIG. 4, below. These features and functionalities may be provided without a need to modify the application programs 107A or 107B, as they are integral with the remote access applications. As such, the decoupled application extensions are agnostic to the application itself, i.e., the application extensions do not depend on the application being displayed within the server remote access application 111B and client remote access application 121A, 121B, 121C, 121N. Further, the application extensions may be made available within controls presented by the server remote access application 111B or client remote access application 121A, 121B, 121C, 121N, and may always be available.


For example, an “interactive digital surface layer” may be provided as an application extension to enable participants in a collaborative session to make annotations on top of the application running in the session. The interactive digital surface layer functions like a scribble tool to enable a user to draw lines, arrows, symbols, scribbles, etc. on top of an application to provide collaboration of both the application and the interactive digital surface layer. As will be described below with reference to FIG. 4, the interactive digital surface layer is available as a control within the environment of FIG. 1.



FIG. 3 illustrates a tree within an XML state model document that describes a decoupled application extension, such as the interactive digital surface layer, which may be implemented in conjunction with aspects of the present disclosure. The implementation of the interactive digital surface layer (or “acetate layer”) is described in U.S. Provisional Patent Application No. 61/541,540 and U.S. patent application Ser. No. 13/632,245, which are incorporated herein by reference in their entireties. Within the XML tree, there is a collaboration node defined that includes one or more sessions. The sessions are associated with the application extensions, such as the interactive digital surface layer. The participants in the sessions are identified by a UserInfo tag, and may be, for example, Glen and Jacquie. Each participant is assigned a default color (DefaultColor) to represent the user's annotations within the interactive digital surface layer (e.g., blue for Glen and green for Jacquie). Any displayable color may be selected as a default color for participants to the collaborative session. A prioritization of colors may be defined, such that a first user is assigned blue, a second user is assigned green, a third user is assigned orange, etc.
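The default-color prioritization described above can be sketched as follows (the palette beyond blue, green and orange is an assumption, as is cycling once the palette is exhausted):

```python
# Hypothetical sketch of default-color prioritization: participants are
# assigned colors in join order, cycling if the session outgrows the palette.
from itertools import cycle

PRIORITY_COLORS = ["Blue", "Green", "Orange", "Red", "Purple"]

def assign_default_colors(participants):
    """Pair each participant with the next color in priority order."""
    return dict(zip(participants, cycle(PRIORITY_COLORS)))

colors = assign_default_colors(["Glen", "Jacquie", "Pat"])
print(colors)  # {'Glen': 'Blue', 'Jacquie': 'Green', 'Pat': 'Orange'}
```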


Under the collaboration node there are also one or more views defined. In the example of FIG. 3, Glen and Jacquie may be collaborating within a medical imaging application. As such, there may be two views defined—an axial view and a coronal view. Sessions are associated with each of the views, where the sessions include the users to the collaboration. For the axial view, Glen's session has associated therewith a cursor position (CP) and certain markups, e.g., a scribble, arrow and circle. In the axial view, Jacquie has an associated cursor position, but since she has not made any markups to the interactive digital surface layer, there is no additional information associated with Jacquie's axial session view. Under the coronal session, each user only has a cursor position associated therewith.


The above information is displayed by the client remote access application, which may be displayed on a client computing device associated with Glen and Jacquie, respectively. For example, Glen may be viewing the application on a client computing device such as a laptop, which has a mid-sized display. As such, Glen is able to view both the axial view and the coronal view at the same time. In contrast, Jacquie may be viewing the application on a smaller computing device, such as a handheld wireless device. As such, only the axial view may be presented due to the more limited display area of such a device.


Below is an example section of a state model 200 in accordance with the tree of FIG. 3. The state model 200 may be represented by, e.g., an Extensible Markup Language (XML) document. Other representations of the state model 200 may be used. Information regarding the application program and interactive digital surface layer is communicated in the state model 200. Because the interactive digital surface layer is decoupled from the application, the information regarding the interactive digital surface layer is not part of the application state (i.e., it is abstracted from the application). Rather, the interactive digital surface layer information is separately maintained in the state model 200.














<ApplicationState>
  <Screens>
    <Screen id="0" name="Axial">
      <Fields>
        <Field name="name" description="Name" default="">
          <Type fieldType="Text" maxChars="128" />
          <Validation />
        </Field>
      </Fields>
    </Screen>
    <Screen id="1" name="Coronal" />
  </Screens>
  <ScreenData>
    <CurrentScreen id="0" />
    <Screen id="0">
    </Screen>
  </ScreenData>
</ApplicationState>
<Collaboration>
  <Sessions>
    <UserInfo="Glen" DefaultColor="Blue" />
    <UserInfo="Jacquie" DefaultColor="Green" />
  </Sessions>
  <Views>
    <Axial>
      <Sessions>
        <UserName="Glen" CP="XY" Markups="Scribble Arrow Circle" />
        <UserName="Jacquie" CP="XY" />
      </Sessions>
    </Axial>
    <Coronal>
      <Sessions>
        <UserName="Glen" CP="XY" />
        <UserName="Jacquie" CP="XY" />
      </Sessions>
    </Coronal>
  </Views>
</Collaboration>









Information regarding the application (107A or 107B) is maintained in the ApplicationState node in a first portion of the XML state model. Different states of the application program associated with the axial view and the coronal view are defined, as well as related triggers. For example, in the axial view a “field” is defined for receiving a name as user input data and displaying the same. The decoupled collaboration states and application extension states (e.g., interactive digital surface layer) are maintained in a second portion of the XML document.


The state model 200 may thus contain session information about the application itself, the application extension information (e.g., interactive digital surface layer), information about views, and how to tie the annotations to specific views (e.g., scribble, arrow, circle tied to axial view).
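As an illustrative sketch only: the patent's XML shorthand (e.g., <UserName="Glen">) is not well-formed XML, so this example parses a simplified, well-formed variant of the Collaboration portion to recover which annotations are tied to which view:

```python
# Parse a well-formed adaptation of the Collaboration portion of the state
# model and collect the markups (annotations) tied to each view.
import xml.etree.ElementTree as ET

COLLABORATION = """
<Collaboration>
  <Views>
    <Axial>
      <Sessions>
        <User name="Glen" CP="XY" Markups="Scribble Arrow Circle" />
        <User name="Jacquie" CP="XY" />
      </Sessions>
    </Axial>
    <Coronal>
      <Sessions>
        <User name="Glen" CP="XY" />
        <User name="Jacquie" CP="XY" />
      </Sessions>
    </Coronal>
  </Views>
</Collaboration>
"""

def markups_by_view(xml_text):
    """Return {view: {user: [markups]}} for users that have annotations."""
    root = ET.fromstring(xml_text)
    result = {}
    for view in root.find("Views"):
        for user in view.iter("User"):
            marks = user.get("Markups")
            if marks:  # users with only a cursor position carry no markups
                result.setdefault(view.tag, {})[user.get("name")] = marks.split()
    return result

print(markups_by_view(COLLABORATION))
# {'Axial': {'Glen': ['Scribble', 'Arrow', 'Circle']}}
```

Consistent with the FIG. 3 discussion, only Glen's axial session carries markups; cursor positions alone add no annotation data.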



FIG. 4 illustrates aspects of the system 100 of FIG. 1 in greater detail. FIG. 4 illustrates the system 100 as having a tiered software stack. The client remote access application 121A, 121B, 121C, 121N may sit on top of a client software development kit (SDK) 704 in a client tier. The client tier communicates to the server remote access application 111B in a server tier. The server tier communicates to a state manager 708 sitting on top of the applications 107A/107B and a server SDK 712 in an application tier. As noted above, the application extensions may be implemented in any of the tiers, i.e., within the server tier as a plug-in 706, the client tier as client application extension 702, the application tier as application extension 710, or combinations thereof. The state model 200 is communicated among the tiers and may be modified in any of the tiers by the application extensions 702 and 710, and the plug-in 706.


In yet another example, in the application tier, the application extension 710 may be a separate executable program that includes new business logic to enhance the applications 107A/107B. The application extension 710 may consume the state model 200 and produce its own document 714 (i.e., a state model of the application extension 710) that may include: (1) information from the state model 200 and information associated with the application extension 710, (2) only information associated with the application extension 710, or (3) a combination of some of the state model information and information associated with the application extension 710. The state model 714 may be communicated to the server remote access application 111B, where the server remote access application 111B may compose an updated state model 200 to include the information in the state model 714. Alternatively or additionally, the client remote access application 121A, 121B, 121C, 121N may receive both the state model 200 and the state model 714, and the client remote access application may compose an updated state model 200 to include the information in the state model 714.
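A minimal sketch of this composition step, assuming the state models are represented as simple nested key/value documents (all section and key names below are hypothetical):

```python
# Hedged sketch: compose an updated state model 200 from an extension's
# document 714, grafting extension sections in without disturbing the
# existing application state.
def compose_state_model(state_model_200, extension_714):
    merged = dict(state_model_200)               # application state untouched
    for key, value in extension_714.items():
        if key in merged and isinstance(value, dict):
            merged[key] = {**merged[key], **value}   # merge a shared section
        else:
            merged[key] = value                      # graft a new section
    return merged

state_200 = {"ApplicationState": {"CurrentScreen": 0},
             "Collaboration": {"Sessions": ["Glen"]}}
doc_714 = {"Collaboration": {"AcetateLayer": {"Glen": ["Arrow"]}}}

updated = compose_state_model(state_200, doc_714)
print(updated["Collaboration"])
# {'Sessions': ['Glen'], 'AcetateLayer': {'Glen': ['Arrow']}}
```

The same composition could be performed by either the server remote access application or the client remote access application, as the paragraph above notes.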



FIG. 5A is a simplified block diagram illustrating a system for providing conferencing around a remotely-accessed application program. As discussed above, participants in a collaborative session may be limited to interacting solely with the shared, remotely-accessed application, i.e., participants may be unable to interact with various media stored on, or accessed by, the client computing devices 112A, 112B, 112C or 112N of other participants. However, according to the implementation illustrated in FIG. 5A, a participant may be capable of sharing various media such as, for example, video, audio, desktop screen scrapes, text messages, libraries of images, etc., with other participants in the collaborative session.


The system of FIG. 5A includes the client computing devices 112A, 112B, 112C and/or 112N, an application server machine (i.e., the server 102B or the mainframe computer 102A) and the server remote access application 111B, which runs on the server 102B, as discussed with regard to FIGS. 1, 2 and 4. As discussed above, the server remote access application 111B provides access to one or more application programs 107A/107B, which are displayed by the client remote access applications 121A, 121B, 121C or 121N. Operation of the server remote access application 111B with the client remote access application 121A, 121B, 121C or 121N or one of the application programs 107A/107B is performed in cooperation with the state model 200. According to the above implementations, each of the client computing devices 112A, 112B, 112C or 112N participating in the collaborative session may present a synchronized view of the application programs 107A/107B by communicating the state model 200 between each of the client computing devices 112A, 112B, 112C or 112N and/or the server 102B and/or the mainframe computer 102A.


In order to provide conferencing capability, i.e., share various media with the other participants in a collaborative session, FIG. 5A also includes a conferencing server machine having a conferencing stub application 732 and a conferencing manager application 742. In some implementations, the conferencing stub application 732 and the conferencing manager application 742 may run on the server 102B. A sharing component of the conferencing capability may be optional, and may be initiated by a participant downloading, but not installing, a client media sharing application 722 using the client computing device 112A, 112B, 112C or 112N. However, if the client remote access application 121A, 121B, 121C or 121N is running in a restricted sandbox environment, such as a web browser that does not have access to system resources to collect sharable media, or is not sharing any media, then the participant may not download the client media sharing application 722, but will be unable to share various media with the other participants in the collaborative session. Instead, the participant will be limited to solely viewing the remotely-accessed application program 107A/107B and/or various media shared by other participants in the collaborative session. In some implementations, the client media sharing application 722 may be incorporated into the client remote access application 121A, 121B, 121C or 121N.


The system of FIG. 5A allows the participant that acquires conferencing capability to share media, such as video, audio, desktop screen scrapes, text messages, libraries of images, etc. with other participants in the collaborative session. The conferencing server machine may receive the shared media either directly from the client media sharing application 722 or indirectly from the client remote access application 121A, 121B, 121C or 121N via the conferencing stub application 732. Additionally, a plurality of different participants can provide shared media, which may be simultaneously displayed by the other client computing devices 112A, 112B, 112C or 112N.


In one implementation, the conferencing stub application 732 is a server application (e.g., a plug-in 706) enabled to communicate with the server remote access application 111B. The conferencing stub application 732, however, may not include collaborative features, such as, for example, the features that allow the client computing devices 112A, 112B, 112C or 112N to collaboratively interact with the application program 107A/107B. Thus, the conferencing stub application 732 may not be shared by the participants in the session (via the state model 200). Accordingly, in this implementation, there is one conferencing stub application 732 for each client computing device 112A, 112B, 112C or 112N connected to the conferencing server machine. In another implementation, the conferencing manager application 742 is a server application enabled to communicate with the server remote access application 111B, and the functionality of the conferencing stub application 732 exists entirely within the conferencing manager application 742. Further, in yet another implementation, the conferencing manager application 742 is a server application enabled to communicate with the server remote access application 111B, and the conferencing stub application 732 becomes a hybrid client/server, where the conferencing stub application 732 is a server with respect to the client computing devices 112A, 112B, 112C or 112N and a client with respect to the conferencing server machine.


During a collaborative session, as discussed above, the client remote access application 121A, 121B, 121C or 121N operates with the server remote access application 111B in cooperation with the state model 200 to interface with the application program 107A/107B. Similarly, during a conferencing session, the client remote access application 121A, 121B, 121C or 121N operates with the server remote access application 111B in cooperation with the state model 200 to interface with conferencing manager application 742, via the conferencing stub application 732. For example, the conferencing manager application 742 acts as a multiplexer by making shared media received from one client computing device 112A, 112B, 112C or 112N (either directly or indirectly, as discussed above) available to the conferencing stub application 732 of each of the other client computing devices 112A, 112B, 112C or 112N. Specifically, the conferencing stub application 732 and the client remote access application 121A, 121B, 121C or 121N coordinate how various media streams may be reprocessed, elided, combined, re-sampled, etc. before transmission from the conferencing stub application 732 to the client remote access application 121A, 121B, 121C or 121N. For example, the conferencing stub application 732 may mix two or more available audio streams into a single audio stream in order to reduce the bandwidth requirements.
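The audio-mixing optimization mentioned above can be sketched as follows, assuming (as a simplification) streams of signed 16-bit PCM samples represented as plain Python integers:

```python
# Minimal sketch of the stub's bandwidth optimization: two available audio
# streams are mixed into a single stream before transmission to the client.
def mix_streams(stream_a, stream_b):
    """Average corresponding samples, clamping to the signed 16-bit range."""
    mixed = []
    for a, b in zip(stream_a, stream_b):
        sample = (a + b) // 2
        mixed.append(max(-32768, min(32767, sample)))
    return mixed

print(mix_streams([1000, -2000, 30000], [3000, 2000, 30000]))
# [2000, 0, 30000]
```

Averaging halves the payload of two streams to one; a production stub would also handle resampling and differing stream lengths, per the coordination described above.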



FIG. 5B is a simplified block diagram illustrating systems for providing conferencing in a remote environment. The features in common between FIGS. 5A and 5B are labeled with the same reference numbers. As discussed above with regard to FIG. 5A, conferencing is provided around a remotely-accessed application program 107A/107B. However, in FIG. 5B, conferencing is provided in a remote environment without requiring collaboration around the application program 107A/107B. In this implementation, the client remote access application 121A, 121B, 121C or 121N operates with the server remote access application 111B in cooperation with the state model 200 to interface with conferencing manager application 742, via the conferencing stub application 732, during the conferencing session in the same manner as discussed above.



FIG. 6A illustrates a flow diagram 800 of example operations performed within the system of FIG. 5A. At 802, the application program 107A/107B is remotely accessed. As discussed above, for example, the server remote access application 111B provides access to one or more application programs 107A/107B, which are displayed by the client remote access application 121A, 121B, 121C or 121N. At 803, the client computing device 112A, 112B, 112C or 112N determines whether it has access to system resources to share media. If not, the process continues to step 804, discussed below, in order to acquire conferencing capability. If so, the state model 200 is updated, and the process skips to step 806, discussed below.


At 804, in order to acquire conferencing capability, the participant may download the client media sharing application 722 using the client computing device 112A, 112B, 112C or 112N. The client media sharing application 722 allows the participant to share various media with the other participants in a collaborative session.


At 806, the participant provides the shared media to the conferencing server machine either directly using the client media sharing application 722 or indirectly using the client remote access application 121A, 121B, 121C or 121N via the conferencing stub application 732. In one implementation, a plurality of different participants can provide shared media, which may be simultaneously displayed by the client computing devices 112A, 112B, 112C or 112N. At 808, the client remote access application 121A, 121B, 121C or 121N operates with the server remote access application 111B in cooperation with the state model 200 to interface with conferencing manager application 742, via the conferencing stub application 732. For example, upon receipt of the shared media from one client computing device 112A, 112B, 112C or 112N by the conferencing manager application 742, the conferencing manager application 742 makes the shared media available to each conferencing stub application 732 of the other client computing devices 112A, 112B, 112C or 112N. Then, the server remote access application 111B updates the state model 200.
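Steps 806 and 808 can be sketched as follows (class and method names are hypothetical, and real media would be streamed rather than stored in lists):

```python
# Hypothetical sketch of steps 806-808: the conferencing manager receives
# shared media from one client and makes it available to each other client's
# conferencing stub, then the state model is updated.
class ConferencingManager:
    def __init__(self):
        self.stubs = {}          # one stub (modeled as an inbox) per device
        self.state_model = {"SharedMedia": []}

    def register(self, client_id):
        self.stubs[client_id] = []

    def share(self, sender_id, media):
        # Multiplex: every participant except the sender receives the media.
        for client_id, inbox in self.stubs.items():
            if client_id != sender_id:
                inbox.append(media)
        self.state_model["SharedMedia"].append(media)  # update state model

mgr = ConferencingManager()
for device in ("112A", "112B", "112C"):
    mgr.register(device)
mgr.share("112A", "desktop-screen-scrape")

print(mgr.stubs["112B"])  # ['desktop-screen-scrape']
print(mgr.stubs["112A"])  # []
```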


At 810, the server remote access application 111B generates presentation data in accordance with the updated state model 200 and provides the same to the client remote access application 121A, 121B, 121C, 121N on the client computing device. At 812, the client remote access application 121A, 121B, 121C, 121N updates the display of the client computing device 112A, 112B, 112C or 112N.



FIG. 6B illustrates a flow diagram 800 of example operations performed within the system of FIG. 5B. The features in common between FIGS. 6A and 6B are labeled with the same reference numbers. The example operations of FIG. 6B differ from the example operations of FIG. 6A in that the application program 107A/107B is not required to be initiated to begin conferencing.



FIG. 7 illustrates an example user interface 900 of a viewing-participant's client computing device during a collaborative conferencing session. For example, the user interface 900 may include a view of the application program 902 (i.e., 107A/107B), a view of a shared video stream 904 and a view of shared media 908. Additionally, the view of a shared video stream 904 may include a plurality of shared video streams. Further, the user interface 900 may include a plurality of views of shared media 908, and the shared media may come from the same and/or different sources. In addition, the user interface 900 may include a floating tool bar 906, which provides the participant with functional controls, such as, for example, activating the interactive digital surface layer, capturing an image of the participant's desktop (i.e., which may then be shared with the other participants in the collaborative session), etc. The interactive digital surface layer is operable to receive user input to collaboratively display annotations input by users during the sessions. The annotations may be made on any portion of the user interface 900, i.e., a view of the application program 902 (i.e., 107A/107B), a view of a shared video stream 904, a view of shared media 908, etc. The floating tool bar 906 may also provide the participant the option of sharing various media such as, for example, audio, video, desktop screen scrapes, text messages, etc. The user interface 900 may also include a swap view button or a full screen button 910, for example, in order to manipulate the displayed views. The user may also swap the various views by clicking and dragging the views on the user interface 900.



FIG. 8 illustrates an example user interface 1000 of a sharing-participant's client computing device during a collaborative conferencing session. The user interface 1000 includes a view of the desktop which the participant is sharing with the other participants in the collaborative session, as well as a floating tool bar 1006.



FIG. 9 illustrates a second example user interface 1100 of a viewing-participant's client computing device during a collaborative conferencing session. Similarly to FIG. 7, the user interface 1100 includes a view of the application program 1102 (i.e., 107A/107B), a view of a shared video stream 1104 and a view of shared media 1108, as well as a floating tool bar 1106. In addition, the user interface 1100 includes a chat view 1114, which allows the participants in the collaborative session to engage in a real-time chat session.



FIG. 10 illustrates a third example user interface 1200 of a viewing-participant's client computing device during a collaborative conferencing session. Similarly to FIGS. 7 and 9, the user interface 1200 includes a view of the application program 1202 (i.e., 107A/107B), a view of a shared video stream 1204 and a view of shared media 1208, as well as a floating tool bar 1206. The tool bar 1206 may also include options for capturing a screen shot 1220 and/or activating the interactive digital surface layer 1222, for example. In addition, the user interface 1200 includes a view of an interactive digital surface layer on white background 1216. The annotations may be made on any portion of the user interface 1200, i.e., a view of the application program 1202 (i.e., 107A/107B), a view of a shared video stream 1204, a view of shared media 1208, a view of an interactive digital surface layer on white background 1216, etc. The white background may allow the participants to make annotations unobstructed by the displayed views. Alternatively or additionally, the white background 1216 may be a view of a whiteboard application to allow participants to draw/make notes on the whiteboard. The drawings/notes may be captured and saved for later retrieval.



FIG. 11 illustrates an example user interface 1300 including a conferencing manager view 1316 of a sharing-participant's client computing device during a collaborative conferencing session. For example, the conferencing manager view 1316 shows a list of participants in the collaborative session, the color of each participant's annotation, the type of media being shared by each participant (i.e., audio, video, desktop, for example), etc. The user interface 1300 may also include a view of all previous desktop captures 1318 from the collaborative session, as well as buttons for saving desktop captures 1320. In addition, there may be an option to close desktop sharing automatically upon saving captures.


The user interfaces of the present disclosure may be presented on any type of computing device participating within the collaborative conferencing session. Thus, to accommodate the various display areas of the devices that may participate in the collaborative conferencing session, implementations of the present disclosure may provide for refactoring of the display. As such, each type of device that is participating in the collaborative conferencing session presents the user interface having a device-appropriate resolution based on information contained in the state model 200. For example, with reference to the user interface of FIG. 7, if a display is associated with a desktop computer, the entire user interface 900 may be displayed. However, if a display is associated with a handheld mobile device, then a subset of the user interface 900 may be displayed, e.g., the view of the application program 902. The other views may be made available on the handheld mobile device via a control provided in the display. Other refactoring schemes are possible depending on the views in the user interface and the device on which the user interface is to be displayed.
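One possible refactoring scheme along these lines is sketched below. The width threshold, view names, and function name are illustrative assumptions; the patent leaves the concrete scheme open.

```python
# Hypothetical sketch: select a device-appropriate subset of views based
# on display information that would be carried in the state model.

ALL_VIEWS = ["application", "shared_video", "shared_media", "chat"]

def refactor_views(display_width_px):
    # A small handheld display gets only the application view, with the
    # remaining views reachable through a control in the display; larger
    # displays present the entire user interface.
    if display_width_px < 600:
        return {"views": ["application"], "more_views_control": True}
    return {"views": list(ALL_VIEWS), "more_views_control": False}

handheld = refactor_views(480)
desktop = refactor_views(1920)
```

A desktop display thus receives every view, while a handheld device receives the application view plus a control exposing the rest.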


During a collaborative session, a user may wish to point to an area of the user interfaces without interacting with the underlying application program 107A/107B. For example, a user may be making a presentation of a slide deck and may wish to “point” to an item on the slide being displayed in the user interface. The interactive digital surface layer may be used to provide such an indication to other users in the collaborative session.


To accommodate the above, the sending of mouse cursor position data may be separated from the sending of mouse input events to the application 107A/107B so that the position and event data can be triggered independently of one another. As such, a cursor position tool may be directed to send cursor information without input events that would otherwise cause an interaction when the user of the tablet device 112N does not desire such interaction with the application program 107A/107B. The above may be achieved by separating a single method that updates the interactive digital surface layer for cursor position into two methods, one of which performs cursor position updates, and one of which queues the input events. Alternatively or additionally, the mouse cursor may change characteristics when operating in such a mode. For example, where the mouse cursor is being used for indication purposes, the cursor may thicken, change color, change shape, blink, etc. to indicate to other users that the cursor is being used as an indicator.
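The two-method split described above can be sketched as follows. This is a minimal sketch under assumed names (`InteractiveSurface`, `indicator_mode`), not the patent's actual code.

```python
# Hypothetical sketch: a single pointer-update method is split into two,
# so cursor position can be broadcast for indication purposes without
# delivering input events to the underlying application program.

class InteractiveSurface:
    def __init__(self):
        self.cursor = (0, 0)
        self.event_queue = []
        self.indicator_mode = False  # cursor points without interacting

    def update_cursor_position(self, x, y):
        # Always propagate position so other participants see the pointer.
        self.cursor = (x, y)

    def queue_input_event(self, event):
        # Only queue events for the application when the user actually
        # intends to interact, not merely to indicate.
        if not self.indicator_mode:
            self.event_queue.append(event)

surface = InteractiveSurface()
surface.indicator_mode = True
surface.update_cursor_position(120, 80)
surface.queue_input_event({"type": "mouse_down"})
```

With `indicator_mode` set, the cursor position still updates for all participants, but the touch or click never reaches the application program, which is precisely the tablet use case described below.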


While the above may be implemented for all types of client computers, a particular use case is where users of mobile devices having a touch-sensitive interface (e.g., tablet device 112N) wish to indicate to other users what they are currently viewing on the display. Typically, a touch of a tablet device represents an interaction with the application program 107A/107B. In accordance with the above, separating the mouse cursor position data (i.e., the touch location) from the sending of mouse input events (i.e., the actual touch) enables users of tablet devices 112N to make such an indication similar to client computers having a pointing device.


In another aspect that may be combined with the above or separately implemented, annotations can be created in the interactive digital surface layer without interacting with the underlying application program 107A/107B, and interactions with the underlying application program 107A/107B do not necessarily create annotations within the interactive digital surface layer. Therefore, the interactive digital surface layer control 1222 may be provided with an option to disable interaction with the underlying application 107A/107B.
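The interaction-disable option on the surface layer control can be sketched as below. The class names and the `pass_through` flag are assumptions made for illustration only.

```python
# Hypothetical sketch: an annotation layer whose control can disable
# interaction with the underlying application, so drawing in the layer
# does not click through, and application clicks do not create
# annotations.

class UnderlyingApp:
    def __init__(self):
        self.clicks = []

class AnnotationLayer:
    def __init__(self):
        self.annotations = []
        self.pass_through = True  # when False, input stays in the layer

    def handle_input(self, point, app):
        if self.pass_through:
            app.clicks.append(point)        # interact with the application
        else:
            self.annotations.append(point)  # annotate without interacting

layer, app = AnnotationLayer(), UnderlyingApp()
layer.handle_input((10, 10), app)   # passes through to the application
layer.pass_through = False          # option on the layer control 1222
layer.handle_input((20, 20), app)   # creates an annotation only
```

Toggling `pass_through` corresponds to the option on the interactive digital surface layer control 1222 to disable interaction with the underlying application 107A/107B.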


Thus, as described above, the present disclosure provides for conferencing capability around a remotely-accessed collaborative application. More generally, the present disclosure provides systems and methods for allowing participants in a collaborative session to share media with the other participants in the collaborative session.



FIG. 12 shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing system environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality.


Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.


Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.


With reference to FIG. 12, an exemplary system for implementing aspects described herein includes a computing device, such as computing device 600. In its most basic configuration, computing device 600 typically includes at least one processing unit 602 and memory 604. Depending on the exact configuration and type of computing device, memory 604 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 12 by dashed line 606.


Computing device 600 may have additional features/functionality. For example, computing device 600 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 12 by removable storage 608 and non-removable storage 610.


Computing device 600 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by device 600 and includes both volatile and non-volatile media, removable and non-removable media.


Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 604, removable storage 608, and non-removable storage 610 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable program read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 600. Any such computer storage media may be part of computing device 600.


Computing device 600 may contain communications connection(s) 612 that allow the device to communicate with other devices. Computing device 600 may also have input device(s) 614 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 616 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.


It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims
  • 1. A method of providing a conferencing capability in a remote environment, comprising: providing a server remote access service, the server remote access service residing in a server tier; providing a respective client remote access application for each of a plurality of client computing devices, the respective client remote access applications running on the client computing devices; providing a state model, wherein the state model comprises an association of logical elements used to synchronize information between the respective client remote access applications and the server remote access service, wherein the logical elements are associated with user interface components of an application displayed at one or more of the client computing devices, each of the server remote access service and the respective client remote access applications being capable of modifying the state model, wherein the state model further comprises session information about the application and information for displaying the logical elements; providing at least one client media sharing application, the at least one client media sharing application running on at least one of the client computing devices, the at least one client media sharing application allowing the at least one of the client computing devices to share media accessible by the at least one of the client computing devices; providing a conferencing manager application and a plurality of conferencing stub applications, the conferencing manager application receiving media shared by the at least one of the client computing devices, each of the conferencing stub applications interfacing with the server remote access service to modify the state model, wherein there is one conferencing stub application for each of the client computing devices; and modifying the state model to further include the shared media such that the shared media is provided in one or more of the client computing devices, wherein the respective client remote access applications are configured to operate with the server remote access service in cooperation with the state model to interface with the conferencing manager application via the conferencing stub applications, wherein the conferencing manager application acts as a multiplexer by making the shared media from the at least one of the client computing devices available to each of the conferencing stub applications.
  • 2. The method of claim 1, wherein the conferencing manager application receives the shared media directly from the at least one client media sharing application.
  • 3. The method of claim 1, wherein the conferencing manager application receives the shared media indirectly from at least one of the respective client remote access applications via at least one of the conferencing stub applications.
  • 4. The method of claim 1, wherein the shared media is audio, video, images, desktop screen scrapes, or text messages.
  • 5. The method of claim 1, wherein the shared media comprises media accessible by at least two of the client computing devices, and wherein the shared media is simultaneously shared by the at least two of the client computing devices.
  • 6. The method of claim 1, further comprising displaying the shared media on a graphical display of at least one of the client computing devices.
  • 7. The method of claim 1, wherein the conferencing manager application runs on a server device.
  • 8. The method of claim 1, wherein the conferencing manager application runs on a conferencing server device.
  • 9. A non-transitory computer readable storage medium having computer-executable instructions stored thereon for providing a conferencing capability in a remote environment that, when executed by a server device, cause the server device to: provide a server remote access service; provide a state model, wherein the state model comprises an association of logical elements used to synchronize information between a respective client remote access application for each of a plurality of client computing devices and the server remote access service, wherein the logical elements are associated with user interface components of an application displayed at one or more of the client computing devices, each of the server remote access service and the respective client remote access applications being capable of modifying the state model, wherein the state model further comprises session information about the application and information for displaying the logical elements; provide a conferencing manager application and a plurality of conferencing stub applications, the conferencing manager application receiving media shared by at least one of the client computing devices, each of the conferencing stub applications interfacing with the server remote access service to modify the state model, wherein there is one conferencing stub application for each of the client computing devices; and modify the state model to further include the shared media such that the shared media is provided in one or more of the client computing devices, wherein the respective client remote access applications are configured to operate with the server remote access service in cooperation with the state model to interface with the conferencing manager application via the conferencing stub applications, wherein the conferencing manager application acts as a multiplexer by making the shared media from the at least one of the client computing devices available to each of the conferencing stub applications.
  • 10. The non-transitory computer readable storage medium of claim 9, wherein the conferencing manager application receives the shared media directly from a client media sharing application running on at least one of the client computing devices.
  • 11. The non-transitory computer readable storage medium of claim 9, wherein the conferencing manager application receives the shared media indirectly from one of the respective client remote access applications via at least one of the conferencing stub applications.
  • 12. The non-transitory computer readable storage medium of claim 9, wherein the shared media is audio, video, images, desktop screen scrapes, or text messages.
  • 13. The non-transitory computer readable storage medium of claim 9, wherein the shared media comprises media accessible by at least two of the client computing devices, and wherein the shared media is simultaneously shared by the at least two of the client computing devices.
  • 14. The non-transitory computer readable storage medium of claim 9, wherein the server device comprises a plurality of server devices, wherein a first server device executes the server remote access service and a second server device executes the conferencing manager application.
  • 15. The method of claim 1, wherein the application displayed at one or more of the client computing devices is a remotely-accessed application.
  • 16. The method of claim 1, wherein the state model is an Extensible Markup Language (XML) document.
  • 17. The non-transitory computer readable storage medium of claim 9, wherein the application displayed at one or more of the client computing devices is a remotely-accessed application.
  • 18. The non-transitory computer readable storage medium of claim 9, wherein the state model is an Extensible Markup Language (XML) document.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/682,243, filed Nov. 20, 2012, entitled “METHODS AND SYSTEMS FOR COLLABORATIVE REMOTE APPLICATION SHARING AND CONFERENCING,” which claims the benefit of U.S. Provisional Patent Application No. 61/563,256, filed on Nov. 23, 2011, entitled “Methods and Systems for Collaborative Remote Application Sharing and Conferencing,” and U.S. Provisional Patent Application No. 61/623,131, filed on Apr. 12, 2012, entitled “Methods and Systems for Collaborative Remote Application Sharing and Conferencing,” the disclosures of which are expressly incorporated herein by reference in their entireties.

US Referenced Citations (345)
Number Name Date Kind
4975690 Torres Dec 1990 A
5249121 Baum Sep 1993 A
5345550 Bloomfield Sep 1994 A
5491800 Goldsmith et al. Feb 1996 A
5555003 Montgomery et al. Sep 1996 A
5742778 Hao et al. Apr 1998 A
5844553 Hao et al. Dec 1998 A
5870559 Leshem et al. Feb 1999 A
5870759 Bauer et al. Feb 1999 A
5903725 Colyer May 1999 A
5909545 Frese, II et al. Jun 1999 A
5920311 Anthias Jul 1999 A
5978842 Noble et al. Nov 1999 A
5987376 Olson et al. Nov 1999 A
5996002 Katsurabayashi et al. Nov 1999 A
6045048 Wilz et al. Apr 2000 A
6061689 Chang et al. May 2000 A
6075531 DeStefano Jun 2000 A
6141698 Krishnan et al. Oct 2000 A
6145098 Nouri et al. Nov 2000 A
6151621 Colyer et al. Nov 2000 A
6253228 Ferris et al. Jun 2001 B1
6342906 Kumar et al. Jan 2002 B1
6343313 Salesky et al. Jan 2002 B1
6453334 Vinson et al. Sep 2002 B1
6453356 Sheard et al. Sep 2002 B1
6529230 Chong Mar 2003 B1
6570563 Honda May 2003 B1
6601233 Underwood Jul 2003 B1
6602185 Uchikubo Aug 2003 B1
6662210 Carleton et al. Dec 2003 B1
6698021 Amini et al. Feb 2004 B1
6742015 Bowman-Amuah May 2004 B1
6748418 Yoshida et al. Jun 2004 B1
6763371 Jändel Jul 2004 B1
6792607 Burd et al. Sep 2004 B1
6918113 Patel et al. Jul 2005 B2
6938096 Greschler et al. Aug 2005 B1
6938212 Nakamura Aug 2005 B2
6970459 Meier Nov 2005 B1
6976077 Lehew et al. Dec 2005 B1
6981062 Suryanarayana Dec 2005 B2
6996605 Low et al. Feb 2006 B2
7003550 Cleasby et al. Feb 2006 B1
7065568 Bracewell et al. Jun 2006 B2
7069227 Lintel, III et al. Jun 2006 B1
7073059 Worely et al. Jul 2006 B2
7133895 Lee et al. Nov 2006 B1
7149761 Cooke et al. Dec 2006 B2
7152092 Beams et al. Dec 2006 B2
7167893 Malone et al. Jan 2007 B1
7174504 Tsao Feb 2007 B2
7181686 Bahrs Feb 2007 B1
7191233 Miller Mar 2007 B2
7193985 Lewis et al. Mar 2007 B1
7197561 Lovy et al. Mar 2007 B1
7240162 de Vries Jul 2007 B2
7246063 James et al. Jul 2007 B2
7254634 Davis et al. Aug 2007 B1
7287054 Lee et al. Oct 2007 B2
7320131 O'Toole, Jr. Jan 2008 B1
7343310 Stender Mar 2008 B1
7346616 Ramanujam et al. Mar 2008 B2
7350151 Nakajima Mar 2008 B1
7356563 Leichtling et al. Apr 2008 B1
7363342 Wang et al. Apr 2008 B1
7418711 Lee et al. Aug 2008 B1
7451196 de Vries et al. Nov 2008 B1
7533146 Kumar May 2009 B1
7577751 Vinson et al. Aug 2009 B2
7620901 Carpenter et al. Nov 2009 B2
7624185 Miller et al. Nov 2009 B2
7647370 Liu et al. Jan 2010 B1
7650444 Dirstine et al. Jan 2010 B2
7656799 Samuels et al. Feb 2010 B2
7676506 Reinsch Mar 2010 B2
7703024 Kautzleben et al. Apr 2010 B2
7706399 Janczak Apr 2010 B2
7725331 Schurenberg et al. May 2010 B2
7783568 Fracchia et al. Aug 2010 B1
7802183 Essin Sep 2010 B1
7810089 Sundarrajan et al. Oct 2010 B2
7831919 Viljoen et al. Nov 2010 B1
7921078 McCuller Apr 2011 B2
7941488 Goodman et al. May 2011 B2
7950026 Urbach May 2011 B1
7966572 Matthews et al. Jun 2011 B2
7984115 Tien et al. Jul 2011 B2
8010901 Rogers Aug 2011 B1
8024523 de Vries et al. Sep 2011 B2
8065166 Maresh et al. Nov 2011 B2
8122341 Dayan et al. Feb 2012 B1
8195146 Prakash et al. Jun 2012 B2
8239773 Billman Aug 2012 B1
8261345 Hitomi et al. Sep 2012 B2
8356252 Raman et al. Jan 2013 B2
8359591 de Vries et al. Jan 2013 B2
8478307 Hayes Jul 2013 B1
8509230 Vinson et al. Aug 2013 B2
8527591 Pirnazar Sep 2013 B2
8527706 de Vries et al. Sep 2013 B2
8533103 Certain et al. Sep 2013 B1
8572178 Frazzini et al. Oct 2013 B1
8606952 Pasetto et al. Dec 2013 B2
8607158 Molander et al. Dec 2013 B2
8627081 Grimen et al. Jan 2014 B2
8667054 Tahan Mar 2014 B2
8799354 Thomas Aug 2014 B2
8832260 Raja et al. Sep 2014 B2
8856259 Burckart et al. Oct 2014 B2
8909703 Gupta et al. Dec 2014 B2
8910112 Li et al. Dec 2014 B2
8924512 Stoyanov et al. Dec 2014 B2
8935328 Tumuluri Jan 2015 B2
9152970 Trahan Oct 2015 B1
9239812 Berlin Jan 2016 B1
9256856 Fairs Feb 2016 B1
9686205 Leitch et al. Jun 2017 B2
20010006382 Sevat Jul 2001 A1
20010033299 Callaway et al. Oct 2001 A1
20010037358 Clubb et al. Nov 2001 A1
20010047393 Arner et al. Nov 2001 A1
20020032751 Bharadwaj Mar 2002 A1
20020032783 Tuatini Mar 2002 A1
20020032804 Hunt Mar 2002 A1
20020051541 Glick et al. May 2002 A1
20020092029 Smith Jul 2002 A1
20020198941 Gavrilescu et al. Dec 2002 A1
20030014735 Achlioptas et al. Jan 2003 A1
20030023670 Walrath Jan 2003 A1
20030055893 Sato et al. Mar 2003 A1
20030065738 Yang et al. Apr 2003 A1
20030120324 Osborn et al. Jun 2003 A1
20030120762 Yepishin et al. Jun 2003 A1
20030149721 Alfonso-Nogueiro et al. Aug 2003 A1
20030149941 Tsao Aug 2003 A1
20030163514 Waldschmidt Aug 2003 A1
20030179230 Seidman Sep 2003 A1
20030184584 Vachuska et al. Oct 2003 A1
20030208472 Pham Nov 2003 A1
20040015842 Nanivadekar et al. Jan 2004 A1
20040029638 Hytcheson et al. Feb 2004 A1
20040039742 Barsness et al. Feb 2004 A1
20040045017 Dorner et al. Mar 2004 A1
20040068516 Lee et al. Apr 2004 A1
20040077347 Lauber et al. Apr 2004 A1
20040103195 Chalasani et al. May 2004 A1
20040103339 Chalasani et al. May 2004 A1
20040106916 Quaid et al. Jun 2004 A1
20040117804 Scahill et al. Jun 2004 A1
20040128354 Horikiri et al. Jul 2004 A1
20040153525 Borella Aug 2004 A1
20040162876 Kohavi Aug 2004 A1
20040183827 Putterman et al. Sep 2004 A1
20040225960 Parikh et al. Nov 2004 A1
20040236633 Knauerhase et al. Nov 2004 A1
20040243919 Muresan et al. Dec 2004 A1
20040249885 Petropoulakis et al. Dec 2004 A1
20050005024 Samuels et al. Jan 2005 A1
20050010871 Ruthfield et al. Jan 2005 A1
20050021687 Anastassopoulos et al. Jan 2005 A1
20050050229 Comeau et al. Mar 2005 A1
20050114711 Hesselink et al. May 2005 A1
20050114789 Chang et al. May 2005 A1
20050138631 Bellotti et al. Jun 2005 A1
20050154288 Wang Jul 2005 A1
20050188046 Hickman et al. Aug 2005 A1
20050188313 Matthews et al. Aug 2005 A1
20050190203 Gery et al. Sep 2005 A1
20050193062 Komine et al. Sep 2005 A1
20050198578 Agrawala et al. Sep 2005 A1
20050216421 Barry et al. Sep 2005 A1
20050240906 Kinderknecht et al. Oct 2005 A1
20050246422 Laning Nov 2005 A1
20060004874 Hutcheson et al. Jan 2006 A1
20060026006 Hindle Feb 2006 A1
20060031377 Ng et al. Feb 2006 A1
20060031481 Patrick et al. Feb 2006 A1
20060036770 Hosn et al. Feb 2006 A1
20060041686 Caspi et al. Feb 2006 A1
20060041891 Aaron Feb 2006 A1
20060053380 Spataro et al. Mar 2006 A1
20060066717 Miceli Mar 2006 A1
20060069797 Abdo et al. Mar 2006 A1
20060085245 Takatsuka et al. Apr 2006 A1
20060085835 Istvan et al. Apr 2006 A1
20060101397 Mercer et al. May 2006 A1
20060112188 Albanese et al. May 2006 A1
20060130069 Srinivasan et al. Jun 2006 A1
20060179119 Kurosawa et al. Aug 2006 A1
20060221081 Cohen et al. Oct 2006 A1
20060231175 Vondracek et al. Oct 2006 A1
20060236328 DeWitt Oct 2006 A1
20060242254 Okazaki et al. Oct 2006 A1
20060258462 Cheng et al. Nov 2006 A1
20060265689 Kuznetsov et al. Nov 2006 A1
20060271563 Angelo et al. Nov 2006 A1
20060288171 Tsien Dec 2006 A1
20060294418 Fuchs Dec 2006 A1
20070024645 Purcell et al. Feb 2007 A1
20070024706 Brannon et al. Feb 2007 A1
20070047535 Varma Mar 2007 A1
20070067754 Chen et al. Mar 2007 A1
20070079244 Brugiolo Apr 2007 A1
20070112880 Yang et al. May 2007 A1
20070120763 De Paepe et al. May 2007 A1
20070130292 Tzruya et al. Jun 2007 A1
20070143398 Graham Jun 2007 A1
20070203944 Batra et al. Aug 2007 A1
20070203990 Townsley et al. Aug 2007 A1
20070203999 Townsley et al. Aug 2007 A1
20070208718 Javid et al. Sep 2007 A1
20070226636 Carpenter et al. Sep 2007 A1
20070233706 Farber et al. Oct 2007 A1
20070244930 Bartlette et al. Oct 2007 A1
20070244962 Laadan et al. Oct 2007 A1
20070244990 Wells Oct 2007 A1
20070256073 Truong et al. Nov 2007 A1
20070282951 Selimis et al. Dec 2007 A1
20080016155 Khalatian Jan 2008 A1
20080028323 Rosen et al. Jan 2008 A1
20080052377 Light Feb 2008 A1
20080134211 Cui Jun 2008 A1
20080146194 Yang et al. Jun 2008 A1
20080159175 Flack Jul 2008 A1
20080183190 Adcox et al. Jul 2008 A1
20080195362 Belcher et al. Aug 2008 A1
20080276183 Siegrist et al. Nov 2008 A1
20080301228 Flavin Dec 2008 A1
20080313282 Warila et al. Dec 2008 A1
20080320081 Shriver-Blake et al. Dec 2008 A1
20090070404 Mazzaferri Mar 2009 A1
20090080523 McDowell Mar 2009 A1
20090089742 Nagulu et al. Apr 2009 A1
20090094369 Woolbridge et al. Apr 2009 A1
20090106422 Kriewall Apr 2009 A1
20090119644 de Vries et al. May 2009 A1
20090164581 Bove et al. Jun 2009 A1
20090172100 Callanan et al. Jul 2009 A1
20090187817 Ivashin et al. Jul 2009 A1
20090209239 Montesdeoca Aug 2009 A1
20090217177 DeGrazia Aug 2009 A1
20090044171 Avadhanula Dec 2009 A1
20090328032 Crow et al. Dec 2009 A1
20100012911 Akinaga et al. Jan 2010 A1
20100017727 Offer et al. Jan 2010 A1
20100018827 Ueda Jan 2010 A1
20100061238 Godbole et al. Mar 2010 A1
20100077058 Messer Mar 2010 A1
20100082747 Yue et al. Apr 2010 A1
20100115023 Peled May 2010 A1
20100131591 Thomas et al. May 2010 A1
20100150031 Allen et al. Jun 2010 A1
20100174773 Penner et al. Jul 2010 A1
20100205147 Lee Aug 2010 A1
20100223566 Holmes et al. Sep 2010 A1
20100223661 Yang Sep 2010 A1
20100268762 Pahlavan et al. Oct 2010 A1
20100268813 Pahlavan et al. Oct 2010 A1
20100274858 Lindberg et al. Oct 2010 A1
20100281107 Fallows et al. Nov 2010 A1
20100306642 Lowet Dec 2010 A1
20110047190 Lee et al. Feb 2011 A1
20110058052 Bolton Mar 2011 A1
20110113350 Carlos May 2011 A1
20110119716 Coleman, Sr. May 2011 A1
20110128378 Raji Jun 2011 A1
20110138016 Jung et al. Jun 2011 A1
20110138283 Marston Jun 2011 A1
20110145863 Alsina et al. Jun 2011 A1
20110154302 Balko et al. Jun 2011 A1
20110154464 Agarwal et al. Jun 2011 A1
20110157196 Nave et al. Jun 2011 A1
20110162062 Kumar et al. Jun 2011 A1
20110184993 Chawla et al. Jul 2011 A1
20110187652 Huibers Aug 2011 A1
20110191438 Huibers et al. Aug 2011 A1
20110191823 Huibers Aug 2011 A1
20110213830 Lopez et al. Sep 2011 A1
20110219419 Reisman Sep 2011 A1
20110222442 Cole et al. Sep 2011 A1
20110223882 Hellgren Sep 2011 A1
20110246891 Schubert et al. Oct 2011 A1
20110252152 Sherry et al. Oct 2011 A1
20110314093 Sheu et al. Dec 2011 A1
20120016904 Mahajan et al. Jan 2012 A1
20120023418 Frields Jan 2012 A1
20120030275 Boller et al. Feb 2012 A1
20120072833 Song et al. Mar 2012 A1
20120072835 Gross et al. Mar 2012 A1
20120079080 Pishevar Mar 2012 A1
20120079111 Luukkala et al. Mar 2012 A1
20120084713 Desai et al. Apr 2012 A1
20120090004 Jeong Apr 2012 A1
20120133675 McDowell May 2012 A1
20120151373 Kominac et al. Jun 2012 A1
20120154633 Rodriguez Jun 2012 A1
20120159308 Tseng et al. Jun 2012 A1
20120159356 Steelberg Jun 2012 A1
20120169874 Thomas et al. Jul 2012 A1
20120210242 Burckart et al. Aug 2012 A1
20120210243 Uhma et al. Aug 2012 A1
20120221792 de Vries et al. Aug 2012 A1
20120226742 Momchilov et al. Sep 2012 A1
20120233555 Psistakis et al. Sep 2012 A1
20120245918 Overton et al. Sep 2012 A1
20120246225 Lemire et al. Sep 2012 A1
20120324032 Chan Dec 2012 A1
20120324358 Jooste Dec 2012 A1
20120331061 Lininger Dec 2012 A1
20130007227 Hitomi et al. Jan 2013 A1
20130013671 Relan et al. Jan 2013 A1
20130031618 Momchilov Jan 2013 A1
20130046815 Thomas et al. Feb 2013 A1
20130054679 Jooste Feb 2013 A1
20130070740 Yovin Mar 2013 A1
20130086155 Thomas et al. Apr 2013 A1
20130086156 McFadzean et al. Apr 2013 A1
20130086652 Kavantzas et al. Apr 2013 A1
20130110895 Valentino et al. May 2013 A1
20130113833 Larsson May 2013 A1
20130117474 Ajanovic et al. May 2013 A1
20130120368 Miller May 2013 A1
20130138791 Thomas et al. May 2013 A1
20130147845 Xie et al. Jun 2013 A1
20130159062 Stiehl Jun 2013 A1
20130159709 Ivory et al. Jun 2013 A1
20130179962 Arai et al. Jul 2013 A1
20130208966 Zhao et al. Aug 2013 A1
20130212483 Brakensiek et al. Aug 2013 A1
20130262566 Stephure et al. Oct 2013 A1
20130297676 Binyamin Nov 2013 A1
20130346482 Holmes Dec 2013 A1
20140136667 Gonsalves et al. May 2014 A1
20140240524 Julia et al. Aug 2014 A1
20140241229 Bertorelle et al. Aug 2014 A1
20140258441 L'Heureux et al. Sep 2014 A1
20140298420 Barton et al. Oct 2014 A1
20150026338 Lehmann et al. Jan 2015 A1
20150067769 Barton et al. Mar 2015 A1
20150156133 Leitch et al. Jun 2015 A1
20150319252 Momchilov et al. Nov 2015 A1
20160054897 Holmes et al. Feb 2016 A1
20160226979 Lancaster et al. Aug 2016 A1
20170228799 Perry et al. Aug 2017 A1
Foreign Referenced Citations (45)
Number Date Country
2646414 Oct 2007 CA
2697936 Mar 2009 CA
2742779 Jun 2010 CA
1278623 Jan 2001 CN
1499841 May 2004 CN
101539932 Sep 2009 CN
101883097 Nov 2010 CN
102129632 Jul 2011 CN
102821413 Dec 2012 CN
0349463 Jan 1990 EP
1422901 May 2004 EP
2007084744 Mar 1995 JP
10040058 Feb 1998 JP
2002055870 Feb 2002 JP
2004206363 Jul 2004 JP
2004287758 Oct 2004 JP
2005031807 Feb 2005 JP
2008099055 Apr 2008 JP
2010256972 Nov 2010 JP
2295752 Mar 2007 RU
2305860 Sep 2007 RU
1998025666 Jun 1998 WO
1998058478 Dec 1998 WO
2001016724 Mar 2001 WO
2001091482 Nov 2001 WO
2002009106 Jan 2002 WO
2003032569 Apr 2003 WO
2003083684 Oct 2003 WO
2008011063 Jan 2008 WO
2008087636 Jul 2008 WO
2010060206 Jun 2010 WO
2010088768 Aug 2010 WO
2010127327 Nov 2010 WO
2011087545 Jul 2011 WO
2012093330 Jul 2012 WO
2012127308 Sep 2012 WO
2013024342 Feb 2013 WO
2013046015 Apr 2013 WO
2013046016 Apr 2013 WO
2013072764 May 2013 WO
2013109984 Jul 2013 WO
2013128284 Sep 2013 WO
2013153439 Oct 2013 WO
2014033554 Mar 2014 WO
2015080845 Jun 2015 WO
Non-Patent Literature Citations (29)
Entry
ADASS XXI Conference Schedule, European Southern Observatory, http://www.eso.org/meetings/2011/adass2011/program/schedule.html#day2, Nov. 7, 2011, 4 pages.
Brandom, R., “Google Photos and the unguessable URL,” The Verge, retrieved on Sep. 25, 2017 from https://www.theverg.com/2015/6/23/8830977/google-photos-security-public-url-privacy-protected, Jun. 23, 2015, 7 pages.
“Calgary Scientific Revolutionizes Application Sharing and Advanced Collaboration with PureWeb 3.0,” Press Release, Jun. 21, 2011, 3 pages.
Coffman, Daniel, et al., “A Client-Server Architecture for State-Dependent Dynamic Visualizations on the Web,” IBM T.J. Watson Research Center, 2010, 10 pages.
Federl, P., “Remote Visualization of Large Multi-dimensional Radio Astronomy Data Sets,” Institute for Space Imaging Science, University of Calgary, 2012, pp. 1-10.
Federl, P., “Remote Visualization of Large Multi-dimensional Radio Astronomy Data Sets,” Institute for Space Imaging Science, University of Calgary, 2012, pp. 11-22.
Fraser, N., “Differential Synchronization,” Google, Mountain View, CA, Jan. 2009, 8 pages.
GoInstant, Shared Browsing Technology, http://website.s3.goinstant.com.s3.amazonaws.com/wp-content/uploads/2012/04/GoInstant-Shared-Web-Technology.pdf, 2012, 4 pages.
“GTK 3, Broadway and an HTML5 websocket gui, for free,” retrieved on Sep. 26, 2017 at http://compsci.ca/v3/viewtopic.php?t=36823, Apr. 12, 2014, pp. 1-3.
Hong, C., et al., “Multimedia Presentation Authoring and Virtual Collaboration in Medicine,” International Journal of Kimics, vol. 8, No. 6, 2010, pp. 690-696.
Jourdain, Sebastien, et al., “ParaViewWeb: A Web Framework for 3D Visualization and Data Processing,” International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, 2011, pp. 870-877.
Layers: Capture Every Item on Your Screen as a PSD Layered Image, Internet Website, retrieved on Jun. 30, 2016 at http://web.archive.org/web/20140218111143, 2014, 9 pages.
Li, S.F., et al., “Integrating Synchronous and Asynchronous Collaboration with Virtual Network Computing,” Internet Computing, IEEE 4.3, 2000, pp. 26-33.
Luo, Y., et al., “Real Time Multi-User Interaction with 3D Graphics via Communication Networks,” 1998 IEEE Conference on Information Visualization, 1998, 9 pages.
Microsoft Computer Dictionary, Microsoft Press, 5th Edition, Mar. 15, 2002, p. 624.
Mitchell, J. Ross, et al., A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke, Journal of Medical Internet Research, vol. 13, Issue 2, 2011, 12 pages.
ParaViewWeb, KitwarePublic, retrieved on Jan. 27, 2014 from http://www.paraview.org/Wiki/ParaViewWeb, 1 page.
Remote Desktop Protocol (RDP), retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Protocol, 7 pages.
Remote Desktop Services (RDS), Remote App, retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Services, 9 pages.
Remote Desktop Services (RDS), Windows Desktop Sharing, retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Services, 9 pages.
Samesurf web real-time co-browser application, http://i.samesurf.com/i/0586021, 2009, 2 pages.
Shim, H., et al., Providing Flexible Services for Managing Shared State in Collaborative Systems, Proceedings of the Fifth European Conference, 1997, pp. 237-252.
Yang, L., et al., “Multirate Control in Internet-Based Control Systems,” IEEE Transactions on Systems, Man, and Cybernetics: Part C: Applications and Reviews, vol. 37, No. 2, 2007, pp. 185-192.
Office Action, dated Nov. 7, 2016, received in connection with JP Patent Application No. 2014542944. (and English Translation).
Office Action and Search Report, dated Oct. 9, 2016, received in connection with CN Patent Application No. 201280057759.2. (and English Translation).
Office Action, dated Oct. 3, 2016, received in connection with JP Patent Application No. 2014532492. (and English Translation).
International Preliminary Report on Patentability and Written Opinion, dated May 27, 2014, received in connection with International Patent Application No. PCT/IB2012/002417.
International Search Report and Written Opinion, dated Feb. 12, 2013, in connection with International Patent Application No. PCT/IB2012/002417.
Supplementary European Search Report, dated Jun. 16, 2015, received in connection with European Patent Application No. 12851967.5.
Related Publications (1)
Number Date Country
20170302708 A1 Oct 2017 US
Provisional Applications (2)
Number Date Country
61563256 Nov 2011 US
61623131 Apr 2012 US
Continuations (1)
Number Date Country
Parent 13682243 Nov 2012 US
Child 15494783 US